Initialization of multidimensional arrays in C/C++
In a multidimensional array, the array has more than one dimension. The following diagram shows the memory allocation strategy for a multidimensional array of dimension 3 x 3 x 3.
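The elements of a 3 x 3 x 3 array are laid out contiguously in row-major (C-style) order. A small sketch of the offset arithmetic, written here in Python purely for illustration (this helper is not part of the original C++ program):

```python
def flat_offset(i, j, k, dims=(3, 3, 3)):
    # Row-major (C-style) layout: the last index varies fastest,
    # so element [i][j][k] sits i*d2*d3 + j*d3 + k slots from the start.
    d1, d2, d3 = dims
    return i * (d2 * d3) + j * d3 + k

# For a 3x3x3 array, element [1][2][0] is 1*9 + 2*3 + 0 = 15 slots in.
print(flat_offset(1, 2, 0))  # 15
```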
This is a C++ program to initialize a multidimensional array.
Begin
   Initialize the elements of a multidimensional array.
   Print the size of the array.
   Display the content of the array.
End
#include<iostream>
using namespace std;
int main()
{
   int r, c;
   int a[][2] = {{3,1},{7,6}};
   cout<< "Size of the Array:"<<sizeof(a)<<"\n";
   cout<< "Content of the Array:"<<"\n";
   for(r=0; r<2; r++) {
      for(c=0; c<2; c++) {
         cout << " " << a[r][c];
      }
      cout << "\n";
   }
   return 0;
}
Size of the Array:16
Content of the Array:
 3 1
 7 6
How to add center align text it in each subplot graph in seaborn? - GeeksforGeeks
11 Jun, 2021
In this article, we are going to see how to add text centered above each subplot using seaborn. A centered title or annotation can be applied to each subplot to provide an additional layer of information on the presented data.
FacetGrid: A general way of plotting a grid of plots based on a function. It helps us visualize the distribution of a variable as well as the relationship between multiple variables. A FacetGrid object takes a data frame as input, along with the names of the variables that define the row, column, and hue dimensions of the grid. The syntax is given below:
Syntax: seaborn.FacetGrid(data, **kwargs)
data: Tidy data frame where each column is a variable and each row is an observation.
**kwargs: Additional keyword arguments such as row, col, hue, palette, etc.
Map method: The map() method applies a function to each item of a sequence. FacetGrid.map() works the same way: it applies a plotting function to each subset of the data, one facet at a time.
Syntax: map(function, iterable)
Parameters:
function (required): The function to execute for each item.
iterable (required): A sequence, collection, or iterator object whose items are passed to the function.
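As a quick generic illustration of the built-in map() (this toy snippet is mine, separate from the FacetGrid.map() calls used in the examples below):

```python
# Apply len to every item of the list; map returns a lazy iterator,
# so we wrap it in list() to materialize the results.
lengths = list(map(len, ["plains", "dunes", "swale"]))
print(lengths)  # [6, 5, 5]
```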
Text method: This function is used to add text to the axes at location x, y in data coordinates.
Syntax: text(x, y, text, fontsize=int)
x, y: The position to place the text.
text: The string to display.
fontsize: Size of the text, as an integer.
Below is the implementation of the above method:
Example 1: Here we plot a regplot by calling sns.regplot, which plots the data along with a linear regression model fit. We add an annotation to the inner part of each subplot, placing our text at position x=10, y=120 with fontsize=12:
Python3
# Import Library
import seaborn as sns

# style must be one of white, dark,
# whitegrid, darkgrid (Optional)
sns.set_style("darkgrid")

# Loading default data of seaborn
exercise = sns.load_dataset("exercise")
g = sns.FacetGrid(exercise, row="diet", col="time", margin_titles = True)

g.map(sns.regplot, "id", "pulse", color = ".3")

# Set Title for each subplot
col_order = ['Deltaic', 'Plains', 'Hummock', 'Swale', 'Sand Dunes', 'Mountain']

# embedding center-text with its title
# using loop.
for txt, title in zip(g.axes.flat, col_order):
    txt.set_title(title)
    # add text
    txt.text(10, 120, 'Geeksforgeeks', fontsize = 12)
Output:
Example 2: In this example, we plot a kdeplot by calling sns.kdeplot, which represents the probability distribution of the data values as the area under the plotted curve. We add an annotation to the inner part of each subplot, placing our text at position x=10.58, y=0.04 with fontsize=11:
Python3
# import Library
import seaborn as sns
import pandas as pd

# style must be one of white, dark,
# whitegrid, darkgrid (Optional)
sns.set_style("darkgrid")

# Loading default data of seaborn
exercise = sns.load_dataset("exercise")
exercise_kind = exercise.kind.value_counts().index

g = sns.FacetGrid(exercise, row="kind",
                  row_order=exercise_kind,
                  height=1.7, aspect=4,)
g.map(sns.kdeplot, "id")

# Set Title
col_order = ['Deltaic Plains', 'Hummock and Swale', 'Sand Dunes']

# embedding center-text with its title
# at each iteration
for txt, title in zip(g.axes.flat, col_order):
    txt.set_title(title)
    # add text
    txt.text(10.58, 0.04, 'Geeksforgeeks', fontsize = 11)
Output:
Example 3: In this example, we plot a line plot by calling sns.lineplot; line charts are normally used to identify trends over a period of time. We add an annotation to the inner part of each subplot, placing our text at position x=15, y=6 with fontsize=12:
Python3
# Import Library
import seaborn as sns
import pandas as pd

# style must be one of white,
# dark, whitegrid, darkgrid
sns.set_style("darkgrid")

# Loading default data of seaborn
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, row = "sex", col = "smoker", margin_titles = True)
g.map(sns.lineplot, "total_bill", 'tip')

# Set Title for each subplot
col_order = ['Deltaic Plains', 'Hummock and Swale', 'Sand Dunes', 'Mountain']

# embedding center-text with its
# title at each iteration
for txt, title in zip(g.axes.flat, col_order):
    txt.set_title(title)
    # add text
    txt.text(15, 6, 'Geeksforgeeks', fontsize = 12)
Output:
Example 4: In this example, we plot a barplot by calling sns.barplot, which visualizes the relationship between a categorical variable and a numeric variable. We add our text at position x=-0.2, y=60 with fontsize=12:
Python3
# import Library
import seaborn as sns
import pandas as pd

# style must be one of white, dark,
# whitegrid, darkgrid
sns.set_style("darkgrid")

# Loading default data of seaborn
exercise = sns.load_dataset("exercise")
g = sns.FacetGrid(exercise, col="time", height=4, aspect=.5)

g.map(sns.barplot, "diet", "pulse", order=["no fat", "low fat"])

# Set Title for each subplot
col_order = ['Deltaic Plains', 'Hummock and Swale', 'Sand Dunes']

# embedding center-text with its title
# at each iteration
for txt, title in zip(g.axes.flat, col_order):
    txt.set_title(title)
    # add text
    txt.text(-0.2, 60, 'Geeksforgeeks', fontsize = 12)
Output:
adnanirshad158
Measuring and enhancing image quality attributes | by Marian Stefanescu | Towards Data Science
Before starting our discussion about measuring or enhancing image quality attributes, we have to first properly introduce them. For this, I’ve taken inspiration from the book Camera Image Quality Benchmarking, which describes in great detail the attributes that I will be speaking about here. It’s important to note that, although the attributes described in the book are camera attributes, our discussion is centered around image attributes. Fortunately, a couple of camera attributes can be used as image attributes as well.
Exposure: usually this refers to the exposure time, a property of the camera that affects the amount of light in an image. The corresponding image attribute is the brightness. There are multiple ways to compute the brightness or an equivalent measure:
I found that mapping from RGB to HSB (Hue, Saturation, Brightness) or HSL (Hue, Saturation, Lightness) and looking only at the last component (B or L) would be a possibility.
A really nice measure of perceived brightness is the one proposed by Darel Rex Finley, where perceived brightness = sqrt(0.299·R² + 0.587·G² + 0.114·B²).
If we average this over all the pixels, we obtain a measure of perceived brightness for the whole image. Also, by splitting the resulting value range into five equal pieces (the minimum is 0 and the maximum is 255) we can define a scale: (Very dark, Dark, Normal, Bright, Very bright).
import math

import cv2

img = cv2.imread('image.jpg')

def pixel_brightness(pixel):
    assert 3 == len(pixel)
    b, g, r = pixel  # OpenCV loads images in BGR channel order
    return math.sqrt(0.299 * r ** 2 + 0.587 * g ** 2 + 0.114 * b ** 2)

def image_brightness(img):
    nr_of_pixels = len(img) * len(img[0])
    return sum(pixel_brightness(pixel)
               for row in img for pixel in row) / nr_of_pixels
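The five-level scale mentioned above can be sketched as follows (the equal-width bin edges and the helper name are my assumptions, not from the original article):

```python
def brightness_level(value):
    """Map a perceived-brightness value in [0, 255] to one of five labels."""
    labels = ["Very dark", "Dark", "Normal", "Bright", "Very bright"]
    # Five equal-width bins over [0, 255]; clamp 255 into the last bin.
    index = min(int(value / 255 * 5), 4)
    return labels[index]

print(brightness_level(128))  # Normal
```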
High-dynamic-range imaging (HDRI or HDR) is a technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. While the human eye can adjust to a wide range of light conditions, most imaging devices use 8-bits per channel, so we are limited to only 256 levels. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing a much wider dynamic range.
What is tone mapping?
There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine these exposures it is useful to know your camera’s response function and there are algorithms to estimate it. After the HDR image has been merged, it has to be converted back to 8-bit to view it on usual displays. This process is called tone mapping.
Measuring if an image is well tone-mapped
From the above definition, I propose (so it’s possible it’s totally wrong) the following procedure for measuring tone mapping. The intuition behind this comes from the way histograms look when the images are not properly tone mapped. Most of the time they look like this:
They are either too dark (shadow clipping), too bright (highlight clipping), or both (for example a dark bathroom with the camera flash visible in the mirror, or a photo of a light pole in the middle of the night).
In contrast, a well tone mapped image looks like this:
Based on this, I propose (so take it with a grain of salt) a scoring method that tries to take into account the things described above. The score will be between [0, 1], 0 meaning the image is not correctly tone-mapped, and 1 that it is correctly tone-mapped. Besides the saturation effect, a poorly tone-mapped image might also be an image that has the majority of the brightness values in a tight interval (small variance => fewer available tones).
1. For simplicity, to not work with distinct color channels, we can use the brightness (pixel_brightness) from above.
2. We construct a brightness histogram (x is from [0, 255]).
3. We build a probability distribution p(x) from the histogram, with values in the [0, 1] range.
4. We define a parabolic penalizing probability distribution f(x), which is 0 at 0 and 1, with a maximum at 1/2. (This should be pretty fine as long as we penalize the extremes; thus low scores <=> the majority of the brightness is concentrated in the head and tail of the distribution.)
(Note: this is actually the Beta(2, 2) distribution, whose density is 6x(1 − x), a natural choice of prior probability distribution.)
5. Next, we can define the “penalized” brightness score as the sum of f(x)·p(x) over all brightness values x. The only thing left is to properly limit this sum between 0 and 1. The lower bound is already solved: the minimum of this sum, over all possible images, is 0. That’s because we can define a black-and-white image with the following probability distribution: p(0) = p(255) = 1/2.
We can see that, because f is 0 only at 0 and 255, the sum for this example image is 0. Any other configuration would result in a sum that’s greater than 0.
To make the sum at most 1 we can use a high school trick, the CBS (Cauchy–Bunyakovsky–Schwarz) inequality. In general:
(Σ aᵢ·bᵢ)² ≤ (Σ aᵢ²) · (Σ bᵢ²)
In our case, that would be:
Σ f(x)·p(x) ≤ sqrt(Σ f(x)²) · sqrt(Σ p(x)²)
If we divide the left part by the right part we finally get a score that’s between 0 and 1. Thus the final form of the first term is:
score = Σ f(x)·p(x) / (sqrt(Σ f(x)²) · sqrt(Σ p(x)²))
I don’t know why, but it resembles Pearson’s correlation coefficient a lot... 🤔
The next term I would simply define as:
In the end, we get the following tone mapping score:
Now, let’s see some code as well:
import math
from typing import Any

import numpy as np
from scipy.stats import beta

RED_SENSITIVITY = 0.299
GREEN_SENSITIVITY = 0.587
BLUE_SENSITIVITY = 0.114

def convert_to_brightness_image(image: np.ndarray) -> np.ndarray:
    if image.dtype == np.uint8:
        raise ValueError("uint8 is not a good dtype for the image")
    return np.sqrt(
        image[..., 0] ** 2 * RED_SENSITIVITY
        + image[..., 1] ** 2 * GREEN_SENSITIVITY
        + image[..., 2] ** 2 * BLUE_SENSITIVITY
    )

def get_resolution(image: np.ndarray):
    height, width = image.shape[:2]
    return height * width

def brightness_histogram(image: np.ndarray) -> np.ndarray:
    nr_of_pixels = get_resolution(image)
    brightness_image = convert_to_brightness_image(image)
    hist, _ = np.histogram(brightness_image, bins=256, range=(0, 255))
    return hist / nr_of_pixels

def distribution_pmf(dist: Any, start: float, stop: float, nr_of_steps: int):
    xs = np.linspace(start, stop, nr_of_steps)
    ys = dist.pdf(xs)
    # divide by the sum to make a probability mass function
    return ys / np.sum(ys)

def correlation_distance(
        distribution_a: np.ndarray, distribution_b: np.ndarray) -> float:
    dot_product = np.dot(distribution_a, distribution_b)
    squared_dist_a = np.sum(distribution_a ** 2)
    squared_dist_b = np.sum(distribution_b ** 2)
    return dot_product / math.sqrt(squared_dist_a * squared_dist_b)

def compute_hdr(cv_image: np.ndarray):
    img_brightness_pmf = brightness_histogram(np.float32(cv_image))
    ref_pmf = distribution_pmf(beta(2, 2), 0, 1, 256)
    return correlation_distance(ref_pmf, img_brightness_pmf)
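A quick sanity check of the scoring idea: the self-contained sketch below re-derives the score directly from a brightness distribution (the helper tone_score and the two example distributions are mine, under the same Beta(2, 2) reference as the code above). A clipped image with all its mass at the extremes scores 0, while mid-range mass scores higher:

```python
import math

import numpy as np
from scipy.stats import beta

def tone_score(brightness_pmf: np.ndarray) -> float:
    # Cosine similarity between the image's brightness distribution
    # and the Beta(2, 2) reference, which is 0 at both extremes.
    xs = np.linspace(0, 1, len(brightness_pmf))
    ref = beta(2, 2).pdf(xs)
    ref = ref / ref.sum()
    return float(np.dot(ref, brightness_pmf)
                 / math.sqrt(np.sum(ref ** 2) * np.sum(brightness_pmf ** 2)))

# All brightness mass at 0 and 255 (shadow + highlight clipping)
clipped = np.zeros(256)
clipped[0] = clipped[255] = 0.5

# Brightness mass concentrated around the middle of the range
balanced = np.zeros(256)
balanced[120:136] = 1 / 16
```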
Blur detection: because a blurred image’s edges are smoothed, the variance of its Laplacian is small. It’s a one-liner in OpenCV 🎨 (the theory: https://stackoverflow.com/questions/48319918/whats-the-theory-behind-computing-variance-of-an-image):
import cv2

def blurry(image, threshold=100):
    return cv2.Laplacian(image, cv2.CV_64F).var() < threshold
HDR with multiple images
The OpenCV docs have a nice tutorial on this, High Dynamic Range (HDR).
For brevity’s sake, I’m putting here only the results obtained with Debevec’s algorithm (http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf).
First, multiple pictures are taken with different exposure times (the exposure times are known, and the camera is not moving).
import cv2 as cv
import numpy as np

# Loading exposure images into a list
img_fn = ["img0.jpg", "img1.jpg", "img2.jpg", "img3.jpg"]
img_list = [cv.imread(fn) for fn in img_fn]
exposure_times = np.array([15.0, 2.5, 0.25, 0.0333], dtype=np.float32)

# Merge exposures to HDR image
merge_debevec = cv.createMergeDebevec()
hdr_debevec = merge_debevec.process(img_list, times=exposure_times.copy())

# Tonemap HDR image (i.e. map the 32-bit float HDR data into the range [0..1])
tonemap1 = cv.createTonemap(gamma=2.2)
res_debevec = tonemap1.process(hdr_debevec.copy())

# Convert datatype to 8-bit and save (! 8-bit per channel)
res_debevec_8bit = np.clip(res_debevec * 255, 0, 255).astype('uint8')
cv.imwrite("ldr_debevec.jpg", res_debevec_8bit)
The end result:
Finding flares reduces to the problem of finding very bright regions in the image. I haven’t found a specific method for detecting whether an image has a flare, only for correcting one: the method is called CLAHE (Contrast Limited Adaptive Histogram Equalization). For comparison, here is plain (global) histogram equalization first:
import numpy as np
import cv2

img = cv2.imread('statue.jpg', 0)
res = cv2.equalizeHist(img)
cv2.imwrite('global_hist_eq_statue.jpg', res)
Before speaking about CLAHE, it’s good to know why Histogram Equalization does NOT work:
While the background contrast has improved after histogram equalization, the face of the statue became too bright. Because of this, a local version is preferred and thus, adaptive histogram equalization is used. In this, the image is divided into small blocks called “tiles” (tile size is 8x8 by default in OpenCV). Then each of these blocks is histogram equalized as usual. So in a small area, a histogram would confine to a small region (unless there is noise). If the noise is there, it will be amplified. To avoid this, contrast limiting is applied.
import numpy as np
import cv2

img = cv2.imread('statue.jpg', 0)

# create a CLAHE object (Arguments are optional).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cl1 = clahe.apply(img)
cv2.imwrite('clahe_statue.jpg', cl1)
More on histogram equalization on the OpenCV docs (https://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html).
|
[
{
"code": null,
"e": 698,
"s": 171,
"text": "Before starting our discussion about measuring or enhancing image quality attributes, we have to first properly introduce them. For this, I’ve taken inspiration from the book Camera Image Quality Benchmarking, which describes in great detail the attributes that I will be speaking about here. It’s important to note that, although the attributes described in the book are camera attributes, our discussion is centered around image attributes. Fortunately, a couple of camera attributes can be used as image attributes as well."
},
{
"code": null,
"e": 959,
"s": 698,
"text": "Usually referring to the time of exposure, which is a property of the camera that affects the amount of light in an image. The corresponding image attribute is actually the brightness. There are multiple ways to compute the brightness or an equivalent measure:"
},
{
"code": null,
"e": 1133,
"s": 959,
"text": "I found that mapping from RGB to HSB(Hue, Saturation, Brightness) or HSL (Hue, Saturation, Luminance) and looking only at the last component (L or B) would be a possibility."
},
{
"code": null,
"e": 1226,
"s": 1133,
"text": "A really nice measure of perceived brightness is the one proposed by Darel Rex Finley where:"
},
{
"code": null,
"e": 1485,
"s": 1226,
"text": "If we do the average for all the pixels, we can obtain a measure of perceived brightness. Also, by splitting the resulting value into five pieces (because the min is 0 and the max is 255) we can define a scale: (Very dark, dark, Normal, Bright, Very Bright)."
},
{
"code": null,
"e": 1829,
"s": 1485,
"text": "import cvimport mathimg = cv2.read(‘image.jpg’)def pixel_brightness(pixel): assert 3 == len(pixel) r, g, b = pixel return math.sqrt(0.299 * r ** 2 + 0.587 * g ** 2 + 0.114 * b ** 2)def image_brightness(img): nr_of_pixels = len(img) * len(img[0]) return sum(pixel_brightness(pixel) for pixel in row for row in img) / nr_of_pixels"
},
{
"code": null,
"e": 2330,
"s": 1829,
"text": "High-dynamic-range imaging (HDRI or HDR) is a technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. While the human eye can adjust to a wide range of light conditions, most imaging devices use 8-bits per channel, so we are limited to only 256 levels. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing a much wider dynamic range."
},
{
"code": null,
"e": 2352,
"s": 2330,
"text": "What is tone mapping?"
},
{
"code": null,
"e": 2758,
"s": 2352,
"text": "There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine these exposures it is useful to know your camera’s response function and there are algorithms to estimate it. After the HDR image has been merged, it has to be converted back to 8-bit to view it on usual displays. This process is called tone mapping."
},
{
"code": null,
"e": 2800,
"s": 2758,
"text": "Measuring if an image is well tone-mapped"
},
{
"code": null,
"e": 3072,
"s": 2800,
"text": "From the above definition, I propose (so it’s possible it’s totally wrong) the following procedure for measuring tone mapping. The intuition behind this comes from the way histograms look when the images are not properly tone mapped. Most of the time they look like this:"
},
{
"code": null,
"e": 3285,
"s": 3072,
"text": "They are either too dark (shadow clipping), too bright (highlights clipping), or with both (for example a dark bathroom with the blitz visible in the mirror or a photo of a light pole in the middle of the night)."
},
{
"code": null,
"e": 3340,
"s": 3285,
"text": "In contrast, a well tone mapped image looks like this:"
},
{
"code": null,
"e": 3791,
"s": 3340,
"text": "Based on this, I propose (so take it with a grain of salt) a scoring method that tries to take into account the things described above. The score will be between [0, 1], 0 meaning the image is not correctly tone-mapped, and 1 that it is correctly tone-mapped. Besides the saturation effect, a poorly tone-mapped image might also be an image that has the majority of the brightness values in a tight interval (small variance => fewer available tones)."
},
{
"code": null,
"e": 4038,
"s": 3791,
"text": "For simplicity, to not work with distinct color channels, we can use the brightness (pixel_brightness) from above.We construct a brightness histogram (x is from [0, 255])We build a probability distribution from the histogram, in the [0, 1] range:"
},
{
"code": null,
"e": 4153,
"s": 4038,
"text": "For simplicity, to not work with distinct color channels, we can use the brightness (pixel_brightness) from above."
},
{
"code": null,
"e": 4210,
"s": 4153,
"text": "We construct a brightness histogram (x is from [0, 255])"
},
{
"code": null,
"e": 4287,
"s": 4210,
"text": "We build a probability distribution from the histogram, in the [0, 1] range:"
},
{
"code": null,
"e": 4568,
"s": 4287,
"text": "4. We define a parabolic penalizing probability distribution, that’s 0 in 0 and 1 with a maximum in 1/2 (This should be pretty fine as long as we penalize the extremes — thus low scores <=> the majority of the brightness was concentrated in the head and tail of the distribution)."
},
{
"code": null,
"e": 4696,
"s": 4568,
"text": "(Note: This is actually a simple example of a Bernoulli distribution, a good thing to use as a prior probability distribution)."
},
5. Next, we can define the "penalized" brightness probability distribution as the product of the brightness distribution and the penalizing distribution. The only thing left is to properly limit this product between 0 and 1. The first part is already solved: the minimum of this product, over all possible brightness distributions, is 0. That's because we can define a black-and-white image whose brightness distribution puts all of its mass at 0 and 255.

Because the penalizing distribution f is 0 exactly at 0 and at 255, the sum over all the pixels in such an example image will be 0. Any other configuration would result in a sum that's greater than 0.

To make the sum at most 1 we can use a high school trick, via the CBS (Cauchy-Bunyakovsky-Schwarz) inequality. In general:

$\sum_i a_i b_i \le \sqrt{\sum_i a_i^2} \cdot \sqrt{\sum_i b_i^2}$

In our case, with p the brightness distribution and f the penalizing distribution, that would be:

$\sum_i p(i) f(i) \le \sqrt{\sum_i p(i)^2} \cdot \sqrt{\sum_i f(i)^2}$

If we divide the left part by the right part we finally get a score that's between 0 and 1. Thus the final form of the first term is:

$\frac{\sum_i p(i) f(i)}{\sqrt{\sum_i p(i)^2} \cdot \sqrt{\sum_i f(i)^2}}$
I don't know why, but it resembles Pearson's correlation coefficient a lot... 🤔

The next term I would simply define as:

*(formula image not recoverable from the source)*

In the end, we get the following tone mapping score:

*(formula image not recoverable from the source)*

Now, let's see some code as well:
```python
import math
from typing import Any

import numpy as np
from scipy.stats import beta

RED_SENSITIVITY = 0.299
GREEN_SENSITIVITY = 0.587
BLUE_SENSITIVITY = 0.114


def convert_to_brightness_image(image: np.ndarray) -> np.ndarray:
    if image.dtype == np.uint8:
        raise ValueError("uint8 is not a good dtype for the image")
    return np.sqrt(
        image[..., 0] ** 2 * RED_SENSITIVITY
        + image[..., 1] ** 2 * GREEN_SENSITIVITY
        + image[..., 2] ** 2 * BLUE_SENSITIVITY
    )


def get_resolution(image: np.ndarray) -> int:
    height, width = image.shape[:2]
    return height * width


def brightness_histogram(image: np.ndarray) -> np.ndarray:
    nr_of_pixels = get_resolution(image)
    brightness_image = convert_to_brightness_image(image)
    hist, _ = np.histogram(brightness_image, bins=256, range=(0, 255))
    return hist / nr_of_pixels


def distribution_pmf(dist: Any, start: float, stop: float, nr_of_steps: int) -> np.ndarray:
    xs = np.linspace(start, stop, nr_of_steps)
    ys = dist.pdf(xs)
    # divide by the sum to make a probability mass function
    return ys / np.sum(ys)


def correlation_distance(
    distribution_a: np.ndarray, distribution_b: np.ndarray
) -> float:
    dot_product = np.dot(distribution_a, distribution_b)
    squared_dist_a = np.sum(distribution_a ** 2)
    squared_dist_b = np.sum(distribution_b ** 2)
    return dot_product / math.sqrt(squared_dist_a * squared_dist_b)


def compute_hdr(cv_image: np.ndarray) -> float:
    img_brightness_pmf = brightness_histogram(np.float32(cv_image))
    ref_pmf = distribution_pmf(beta(2, 2), 0, 1, 256)
    return correlation_distance(ref_pmf, img_brightness_pmf)
```
A blurred image has smoothed edges, so the variance of its Laplacian is small. It's a one-liner in OpenCV (see https://stackoverflow.com/questions/48319918/whats-the-theory-behind-computing-variance-of-an-image):

```python
import cv2


def blurry(image, threshold=100):
    # low Laplacian variance = few sharp edges = probably blurred
    return cv2.Laplacian(image, cv2.CV_64F).var() < threshold
```
HDR with multiple images

The OpenCV docs have a nice tutorial on this, High Dynamic Range (HDR).

For brevity's sake, I'm putting here only the results obtained with Debevec's algorithm (http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf).

First, multiple pictures are taken with different exposure times (the exposure times are known, and the camera is not moving).

```python
import cv2 as cv
import numpy as np

# Loading exposure images into a list
img_fn = ["img0.jpg", "img1.jpg", "img2.jpg", "img3.jpg"]
img_list = [cv.imread(fn) for fn in img_fn]
exposure_times = np.array([15.0, 2.5, 0.25, 0.0333], dtype=np.float32)

# Merge exposures to HDR image
merge_debevec = cv.createMergeDebevec()
hdr_debevec = merge_debevec.process(img_list, times=exposure_times.copy())

# Tonemap HDR image (i.e. map the 32-bit float HDR data into the range [0..1])
tonemap1 = cv.createTonemap(gamma=2.2)
res_debevec = tonemap1.process(hdr_debevec.copy())

# Convert datatype to 8-bit and save (! 8-bit per channel)
res_debevec_8bit = np.clip(res_debevec * 255, 0, 255).astype('uint8')
cv.imwrite("ldr_debevec.jpg", res_debevec_8bit)
```

The end result:
Finding flares reduces to the problem of finding very bright regions in the image. I haven't found a specific method for detecting whether an image has a flare, only for correcting one: the method is called CLAHE (Contrast Limited Adaptive Histogram Equalization).

```python
import cv2

img = cv2.imread('statue.jpg', 0)  # read as grayscale
res = cv2.equalizeHist(img)
cv2.imwrite('global_hist_eq_statue.jpg', res)
```

Before speaking about CLAHE, it's good to know why plain (global) histogram equalization does NOT work:

While the background contrast has improved after histogram equalization, the face of the statue became too bright. Because of this, a local version is preferred and thus adaptive histogram equalization is used. In this, the image is divided into small blocks called "tiles" (tile size is 8x8 by default in OpenCV). Then each of these blocks is histogram equalized as usual. So in a small area, a histogram would confine to a small region (unless there is noise). If noise is there, it will be amplified. To avoid this, contrast limiting is applied.

```python
import cv2

img = cv2.imread('statue.jpg', 0)
# create a CLAHE object (arguments are optional)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cl1 = clahe.apply(img)
cv2.imwrite('clahe_statue.jpg', cl1)
```
Caret vs. tidymodels — create reusable machine learning workflows | by Hannah Roos | Towards Data Science
If you use machine learning models in R, you probably use either caret or tidymodels. Interestingly, both packages were developed by the same lead author, Max Kuhn, together with many other contributors. But how do they compare to each other in terms of feasibility and performance?
You may wonder which package you should learn for predictive modelling in R. One thing right ahead: Caret is R’s traditional go-to machine learning package while tidymodels is rather unknown to parts of the R community. Accordingly, it was new to me: a smart Medium user recommended tidymodels to me in order to automate the feature pre-processing section that I had shown in my last tutorial. Even though I have been a bit sceptical, I gave it a shot to find out if it is any better than caret. Recently, I have challenged myself to predict employee churn on a small simulated dataset with caret — a perfect case to try this with tidymodels, too. It’s a pretty realistic example as well, since some of the big players in industry already apply predictive analytics to decrease attrition and increase retention of their valuable talents. Using data mining techniques, historical and labelled employee data can be used to detect features that are associated with turnover (last pay raise, business travel, person-job fit, distance from home etc.). Hence, the algorithm is trained with data from the past to predict the future. This can be a serious problem too, because it easily allows discriminatory biases to creep in. Thus, when a predictive model is applied, it must be constantly evaluated before it becomes outdated and makes profound mistakes. Still, if we use such models thoughtfully, I am convinced that data analytics can be a powerful tool to improve the workplace for good.
As users slightly differ in the way they make use of the variety of machine learning algorithms available in R, Max Kuhn aimed to develop a uniform machine learning platform which allows for consistent and replicable code. In the R community, the good old caret had already played the role of the state-of-the-art package for a while when Max Kuhn also developed tidymodels in 2020 — the tidyverse version of caret, so to speak. Similar to the dplyr-syntax (including %>%, mutate() etc.), tidymodels is based on the idea of structuring your code in workflows in which each step is explicitly defined. This is intuitive and allows you to sequentially follow along with what your program actually does. A lot of functions are borrowed from several core packages that are all already included in tidymodels, therefore it can be considered a meta-package:
rsample: for sample splitting (e.g. train/test or cross-validation)
recipes: for pre-processing
parsnip: for specifying the model
yardstick: for evaluating the model
dials: for hyperparameter tuning
workflow: for creating ML pipelines
broom: for tidying model outputs
Because tidymodels makes use of these packages, it is easier to learn if you are already familiar with them, but it is not really required.
Please note: If you are interested in the coding part only, you can find complete scripts in my GitHub repository here.
The dataset we will use for our case study is a simulated dataset created by IBM Watson Analytics which can be found on Kaggle. It contains 1470 employee entries and common 38 features (Monthly income, Job satisfaction, Gender etc.) — one of which is our target variable (Employee) Attrition (YES/NO). After having defined an R-project, we can use the neat here-package to set up our paths in a simple way that is more resistant against changes on your local machine. Let’s take a look at our raw data.
Disclaimer: All graphics are made by the author unless specified differently.
None of the observations are missing and the summary the skim function gives us shows some descriptive statistics including mean, standard deviation, percentiles as well as a histogram of each of the variables (26 numeric; 9 characters). In the real world, we would probably never have such a clean and complete dataset, but still we have another problem that is pretty realistic: it appears that 237 employees (16%) left the company while a majority (almost 84%) stayed, so the classes are not equally balanced. Class imbalance requires special treatment because it’s inconvenient to optimize and assess the model’s performance using conventional performance metrics (accuracy, AUC ROC etc.). The reason for this is the following: As accuracy is the proportion of correctly classified cases out of all cases, it would not be a big deal for the algorithm to give us a high score even if it simply classified ALL cases as the majority class (e.g., no) even some of them actually belong to the minority class (e.g., yes). But in our case, we intensely care about the positive cases as it is more detrimental NOT to correctly identify that an employee would leave (e.g., sensitivity or true positive rate) than accidentally predict that an employee would leave if the person actually stayed (false positive rate or 1 — specificity). Thus, we are going to use the F1-score as accuracy metric for training optimization because it assigns more importance to correctly classify positive cases (e.g., employee churn) and boosts the model’s performance in heavily imbalanced datasets.
As a first step, we can calculate any additional features. a payment that is perceived as unfair can influence a person’s intention to leave the job to look for better payment (Harden, Boakye & Ryan, 2018; Sarkar, 2018; Bryant & Allen, 2013). This is why we would like to create another variable that represents the payment competitiveness of each employee’s monthly income — the reasoning behind this is that employees may compare their income against those of their peers who share the same job level. Somebody who perceives his or her payment as fair should be less likely to leave the company compared to a person who gets considerably less for a similar position. To get there, we will use the data.table syntax to first calculate the median compensation by job level and store the appropriate value for each observation. Then we will divide each employee’s monthly income by the median income to get his or her compensation ratio: a measure that directly represents the person’s payment in respect to what would be expected by job level. Thus, a score of 1 means that the employee exactly matches the average payment for this position. A score of 1.2 means that the employee is paid 20% above the average pay and a score of 0.8 means that the person is paid 20% less than what would be expected by the usual payment per job level. As a next step, we remove all variables that are very unlikely to have any predictive power. For example, employee-ID won’t explain any meaningful variation in employee turnover, therefore it should be deleted for now among some other variables. Tidymodels provides us with the opportunity to assign roles to variables, for example such an ID column to retain the identifier while still excluding it from the actual modelling later. But we want to do it manually here to use the same training data for caret (in which we cannot assign a role). 
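As a sketch (my own reconstruction, not the article's exact code), the compensation-ratio step could look like this in data.table. The object name `attrition_raw` is an assumption; `MonthlyIncome` and `JobLevel` are actual columns of the IBM dataset, and the dropped columns are plausible candidates rather than the author's exact list:

```r
library(data.table)

hr <- as.data.table(attrition_raw)  # assumed name for the imported raw data
# median monthly income per job level, written back to every row
hr[, median_income := median(MonthlyIncome), by = JobLevel]
# compensation ratio: 1 = exactly the level median, 1.2 = 20% above, 0.8 = 20% below
hr[, comp_ratio := MonthlyIncome / median_income]
# drop variables without plausible predictive power or with redundancy
hr[, c("EmployeeNumber", "EmployeeCount", "StandardHours",
       "Over18", "HourlyRate") := NULL]
```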
Other examples of variables that we want to exclude are those that obviously redundant and therefore could lead to multicollinearity issues (e.g., hourly rate and monthly income). We then save the reduced dataset by properly converting all string variables (e.g., Department) to factors at the same time.
The training data must be used to fit our models and tune our hyperparameters while we save some testing data for our final model evaluation. Why? We could run into the problem of overly fitting our model to our sample-specific data so that we cannot apply it to new employee data later. A problem that is often referred to as overfitting — a phenomenon that could explain why we sometimes cannot replicate previously found effects anymore. For machine learning models, we often split our data into training and validation/test sets to overcome this issue. The training set is used to train the model. The validation set serves to estimate model performance to tune hyperparameters accordingly. Finally, the test set is saved to challenge the predictive accuracy on data the model has never seen before. For tidymodels, the first rough split can be done using initial_split(), which provides a list with indices for splitting that are tied to the main dataset. We use 70% of our data for training and save the remaining 30% for testing. Stratification is used with the target variable. It means that we give each model the chance to be both trained and tested on the same proportion of classes (e.g., 16% yes and 84% no). Similarly, caret’s createDataPartition() function automatically uses the outcome variable to balance the class distributions within the splits. By setting list = FALSE, we make sure that the function returns a matrix instead of a list. Thus, we can use the randomly assembled indices on our main dataset using the familiar data.table-syntax. To make the model even more generalizable to new data, we can use another magic trick from our statistical toolbox: cross-validation with various splits. We later apply our trained candidate model(s) to an extra set of observations and repeatedly adjust the parameters to reduce prediction error. These extra observations are several random samples from the training data, leaving out some of the data each time.
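In code, the two splits could look like this (a hedged sketch; `hr` is assumed to hold the prepared dataset with the outcome column `Attrition`):

```r
library(rsample)
library(caret)

set.seed(42)
# tidymodels: 70/30 split, stratified on the outcome
hr_split   <- initial_split(hr, prop = 0.7, strata = Attrition)
train_data <- training(hr_split)
test_data  <- testing(hr_split)

# caret: index matrix instead of a list, also balanced on the outcome
train_idx   <- createDataPartition(hr$Attrition, p = 0.7, list = FALSE)
train_caret <- hr[as.vector(train_idx), ]
test_caret  <- hr[-as.vector(train_idx), ]
```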
This makes multiple folds of data to validate the hyperparameters on. We repeat this 3 times and average the performance to receive a preliminary estimation of our model’s performance — a resampling technique called repeated cross-validation (here: 10 folds, repeated 3 times, as the tuning results further below will show). Tidymodels’ vfold_cv() does the job for us — we define the number of folds that should be created as well as the number of repeats that should be used for resampling. When we use caret instead, we can use the createFolds() function for the same purpose, but for our comparison we do something better: tidymodels includes a function called rsample2caret() which transforms the folds we have already made in tidymodels to the familiar caret format. Thanks to the rsample2caret() and caret2rsample() commands, it’s easy to use identical resamples in whichever package you prefer. This way, the performance metrics we calculate on our folds with each of the frameworks are less biased and more directly comparable.
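A minimal sketch of the resampling step (assuming the training set lives in `train_data`):

```r
library(rsample)

set.seed(42)
# 10-fold cross-validation, repeated 3 times, stratified on the outcome
folds <- vfold_cv(train_data, v = 10, repeats = 3, strata = Attrition)

# translate the identical resamples into caret's format
caret_folds <- rsample2caret(folds)
```

The `index` and `indexOut` elements of `caret_folds` can then be handed to caret's trainControl(), so both frameworks evaluate on exactly the same resamples.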
Now, we use recipes to prepare our data for modelling with tidymodels: the idea is simple — we create an object that contains all the pre-processing steps that serve as ingredients to bake a nicely prepped dataset. This way, our data are ready to be ingested by the model. We need to take care of the order of pre-processing steps here because they are carried out in the order they are entered (for example, it does not make much sense to normalize numeric predictors after dummy variables are created because 0 and 1 are treated as numeric values as well). You can think of a recipe as a blueprint that sketches our model formula, variable encodings and the logic of our pre-processing steps before their actual execution. This recipe is later included in a workflow — a structure that can be thought of as a kind of analysis pipeline that bundles your own prespecified feature engineering steps with the model specification. For our study, we want to create dummy variables, remove variables with near-zero variance as well as those that are already highly correlated with a similar variable to deal with multicollinearity. Moreover, we apply random oversampling to deal with class imbalance, borrowed from the themis-package. Since there are considerably more negative cases than positive cases in our dataset, we can assume that our models won’t have a good chance to learn how to identify the minority class. The ROSE algorithm can deal with this by generating artificial additional observations with positive outcomes (e.g., leave the company) that mimic our true positives. So, the classes are more nicely balanced. If you are interested in how much class imbalance can change your performance metrics, take a look at my GitHub repo and run bal_log_example.R as well as imbal_log_example.R.
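The steps above could be written roughly like this (my sketch; the correlation threshold of 0.9 is an assumption, not the author's value):

```r
library(recipes)
library(themis)

churn_recipe <- recipe(Attrition ~ ., data = train_data) %>%
  step_dummy(all_nominal_predictors()) %>%          # dummy-code factors first
  step_nzv(all_predictors()) %>%                    # drop near-zero-variance columns
  step_corr(all_numeric_predictors(),
            threshold = 0.9) %>%                    # tame multicollinearity
  step_rose(Attrition)                              # ROSE oversampling of the minority class
```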
Caret takes care of all feature engineering steps while training and even automatically handles dummy encoding, so we do not need to specify this in the preProcess argument in our train() call later. A downside of caret’s preProcess option is that has no option to selectively transform specific variables (e.g., certain numeric variables). To make use of the same upsampling technique as in our tidymodels example, we need to use a separate function on the training data imported by the ROSE package. We need to be careful here — for whatever reason, the function swaps the factor levels of our target variable which could affect performance metrics later. To avoid this, we manually assign the factor levels once again in a way that the first level is the positive class.
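For the caret side, the manual ROSE step could look like this (a hedged sketch; `train_caret` is an assumed name for the caret training set):

```r
library(ROSE)

set.seed(42)
# generate a synthetically balanced training set
train_rose <- ROSE(Attrition ~ ., data = train_caret)$data
# ROSE may swap the factor order; make "Yes" the first (positive) level again
train_rose$Attrition <- factor(train_rose$Attrition, levels = c("Yes", "No"))
```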
With the recipe in place, the parsnip package helps us to tell the program which specific model to use. You can also connect an “engine” to the model specification by calling set_engine() to tell the model which underlying algorithm to use (Bayesians would maybe love to hear that you can even use stan). By setting the hyperparameters to tune(), we already make sure they are marked for tuning later. If we already know which setting to use because it worked well in the past, we can also pass it to the model specification directly (e.g., if we want to have 200 trees). This step is not necessary if we use caret because we simply pass the model type to the method argument of our train function. Hyperparameters are then tuned with caret’s default tuning range and usually it produces very good results. Let’s say you are interested in which tuning range caret uses for glmnet, you can use getModelInfo(“glmnet")[[1]]$grid. Using tidymodels, we next need to prepare a param object using the parameters function from parsnip. Then, we pass them to the grid function of our choice — here I have applied an irregular grid to find close-to-optimal parameter combinations. A full grid search, in which we would iterate through all possible parameter combinations and assess their associated model performance, can quickly become computationally costly, and it is inefficient when the optimal solution is not represented in the grid. Using irregular grids, especially so-called “space-filling designs”, chances are higher that we find a good combination more quickly while still covering the full parameter space.
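A sketch of the model specifications and space-filling grids (my reconstruction; the tuned parameters and grid sizes of 9 and 108 follow the counts reported later in the article):

```r
library(parsnip)
library(dials)

log_reg_spec <- logistic_reg(penalty = tune(), mixture = tune()) %>%
  set_engine("glmnet") %>%
  set_mode("classification")

xgb_spec <- boost_tree(trees = tune(), tree_depth = tune(),
                       learn_rate = tune(), min_n = tune()) %>%
  set_engine("xgboost") %>%
  set_mode("classification")

# irregular (space-filling) grids over the parameters marked with tune()
glmnet_grid <- grid_latin_hypercube(parameters(log_reg_spec), size = 9)
xgb_grid    <- grid_latin_hypercube(parameters(xgb_spec), size = 108)
```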
Apparently, the idea of integrating multiple models into one object is not new to tidymodels! After having set up a recipe for pre-processing as well as a model specification, we create a workflow that bundles all the steps we have previously defined for preparing and modelling the data and fit them in one call. But we even go a step further. We create an even richer object that holds multiple workflows. These workflows (e.g., recipe, formula, model) are crossed in a way they can be later all fitted and tuned in a single request. In our case, we try several models, xgboost and glmnet, but this could be extended easily. This workflowset is now our master structure for training multiple models at once. By using option_add, we pass our different custom grids into the option column to our workflowset to modify the default tuning range. We can use the Github code from parsnip to find out which tuning parameters are available and correspond to the parameters we are used to from caret.
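Assuming a recipe (`churn_recipe`) and two model specifications (`log_reg_spec`, `xgb_spec`) already exist, the workflowset could be assembled like this (the ids follow workflowsets' default "preproc_model" naming):

```r
library(workflowsets)

all_workflows <- workflow_set(
  preproc = list(rec = churn_recipe),
  models  = list(log_reg = log_reg_spec, xgboost = xgb_spec)
) %>%
  # override the default tuning ranges with the custom grids
  option_add(grid = glmnet_grid, id = "rec_log_reg") %>%
  option_add(grid = xgb_grid,    id = "rec_xgboost")
```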
Before we start tuning our tidy models, we save a set of performance metrics that should be used to evaluate the goodness of our models. Now we use workflow_map to actually tune different models.
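A hedged sketch of the metric set and the tuning call (object names like `folds` and `all_workflows` are assumptions carried over from the earlier steps):

```r
library(tune)
library(yardstick)

class_metrics <- metric_set(f_meas, roc_auc, accuracy, sens, spec)

model_race <- all_workflows %>%
  workflow_map("tune_grid",
               resamples = folds,
               metrics   = class_metrics,
               seed      = 42,
               verbose   = TRUE)
```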
To specify our hyperparameter tuning in caret, we wrap everything including all preprocessing steps in a trainControl-Object.
Unfortunately, it is not possible to use F1 as our metric for optimization: therefore, I needed to write a custom function to make it work.
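One way such a custom summary function and the surrounding trainControl could look (my sketch; caret expects a summaryFunction(data, lev, model) returning a named metric vector):

```r
library(caret)
library(MLmetrics)

f1_summary <- function(data, lev = NULL, model = NULL) {
  # lev[1] is the positive class ("Yes" if the levels were releveled accordingly)
  c(F1 = F1_Score(y_true = data$obs, y_pred = data$pred, positive = lev[1]))
}

ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3,
                     classProbs = TRUE, summaryFunction = f1_summary)
```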
Unlike in tidymodels, there is no prespecified way to fit and tune different models at once. Thus, we can write our own function to do the job for us: we first create a custom function called train_model as a wrapper for caret’s train but with a placeholder for the method. Then we use lapply()to apply our custom function onto a list of methods. These contain the model types we want to compare with each other. The good thing about this approach is that we can easily adapt or extent the list at any time.
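The wrapper-plus-lapply pattern could be sketched like this (assuming `train_rose` and `ctrl` from the previous steps; the preProcess selection is mine, mirroring the recipe steps):

```r
train_model <- function(method) {
  train(Attrition ~ ., data = train_rose,
        method     = method,
        metric     = "F1",              # optimize the custom F1 from trainControl
        trControl  = ctrl,
        preProcess = c("nzv", "corr"))  # drop near-zero-variance and correlated columns
}

methods <- c("glmnet", "xgbTree")
model_list <- lapply(methods, train_model)
names(model_list) <- methods
```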
As we can see in the model objects above, we have tuning results for 108 parameter combinations for XGBoost and 9 for glmnet. This is the default that was automatically chosen by caret and the reason why I have specified the same tuning size for tidymodels previously to make a fair comparison between the two. Our tuning results from tidymodels have a similar structure to the nested tuning output from caret shown above, except for the fact that they provide us with more performance metrics. Just call the following:
While experimenting with the two frameworks, I have realized that the tuning process takes much more time in tidymodels compared to caret. Both packages handled a lot of candidate models — 108 combinations * 10 folds * 3 repeats = 3240 iterations for XGBoost and 9 combinations * 10 folds * 3 repeats = 270 iterations for glmnet. This should be very CPU intense anyways, but how do the packages deal with this task? To find out, I have wrapped each training-command into a system.time() to benchmark the timing differences. It turns out that caret just needs a fraction of the time that tidymodels allocates for tuning, just around 2% to be exact. While our tidymodel’s workflow_map() command took more than 6 hours to run through, caret accomplished the same task in less than 8 minutes. At least for my little experiment, tidymodels was 48 times slower than caret! Such a huge difference. Be aware though that this might be partly due to my old machine and could slightly vary each time you track the time. Therefore, I highly encourage you to try the microbenchmark package with the two functions: In case you have the time to evaluate the functions more than once, this should give you a more accurate and reliable result than system.time(). Nevertheless, the issue that tidymodels seems to be much slower than caret overall is not new: Max Kuhn argues that recipes slow down the training process because all pre-processing steps are re-applied within each resample. Another possible explanation comes down to differences in parallel computing between tidymodels and caret.
To find out which method works best on our problem, we need to measure performance for all of them. In our tidymodels example, we have saved all tuning results from the training data in model_race. We can use group by the workflow to collect metrices by model type. The resulting tibble provides us with several performance metrics that were computed for the models that each try a specific hyperparameter configuration from our grid. In our case it provides 10 folds x 3 repeats = 30 results per performance metric for each workflow. Now we can run tidymodel’s autoplot to show ranked workflow performance among candidate models.
So, what is our winning model? The plot shows that among all metrics, logistic regression (in blue) outperforms XGBoost on all candidate models. To compare model performances with caret, we use the resamples-function. It provides us with the range of F1 values across all 3 folds, giving us an estimation on how the model performs on different samples. By visualizing this output with bwplot(resamples), we can see how the performance is distributed across folds and then select the model with the highest average performance or lowest variance in performance. Unlike for tidymodels, we cannot see the performance across training folds for each parameter combination, but we get the range of performance for the best set of hyperparameters. Even if the plot suggests that XGBoost yields a slightly better performance, we will later make predictions with glmnet to make a valid comparison with tidymodels’ logistic regression.
Surprisingly, caret generally provides a much higher F1-score compared to tidymodels even though the F1_Score function from MLmetrics as well as the f_meas() function from yardstick both calculate the harmonic mean between recall and precision without any extra weight put on either of the metrics (balanced F1-score). Thus, the difference in performance is unlikely due to differences in calculation. At best, the difference results from a more targeted optimization process since caret trained our models to keep the F1-score as high as possible.
Still, we are not finished yet: For tidymodels, we need to commit to a model and then finalize it appropriately. In this case, we extract all glmnet candidate models from our model comparison. By running select_best() on the tuning results of our tidymodels logistic regression workflow, we get the performance metrics for the candidate model that ranked highest on the F1-score. To further commit to this model, we finalize our winning workflow: we extract the log_reg workflow from our model race with the set of hyperparameters that worked best numerically. This is how our finalized workflow looks:
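Sketched in code (assuming the tuning results are in `model_race` and the workflow id is "rec_log_reg"):

```r
library(tune)
library(workflowsets)

best_f1 <- model_race %>%
  extract_workflow_set_result("rec_log_reg") %>%
  select_best(metric = "f_meas")

final_wf <- model_race %>%
  extract_workflow("rec_log_reg") %>%
  finalize_workflow(best_f1)   # plug the winning hyperparameters into the workflow
```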
If we however look at caret’s final tuning result, we get other values for the same hyperparameters which could be due to differences in the optimization criteria: Maybe it’s because tidymodel’s glmnet model was not explicitly trained to maximize the F1-value?
print(model_glmnet)
For our tidymodels approach, we use our finalized workflow and fit it to the resamples to calculate performance metrics across folds. For this purpose, we can use fit_resamples() to assess model performance across different folds of the data. In contrast to fitting the finalized workflow to our whole training set, resampling gives us a closer estimation of how the model would perform on new samples. For the final conclusion on its performance, we still must wait for the evaluation on the test data though. As the next step, we use last_fit() to generate the final resampling results that are assessed on the test data. We can take a look at each model’s performance on the training data to confirm this observation.
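Both calls could look like this (assuming `final_wf`, `folds`, the initial split `hr_split`, and the metric set `class_metrics` from the earlier steps):

```r
library(tune)

# performance estimated across the training folds
cv_results <- fit_resamples(final_wf, resamples = folds, metrics = class_metrics)
collect_metrics(cv_results)

# final fit on the full training set, evaluated once on the held-out test set
final_fit <- last_fit(final_wf, split = hr_split, metrics = class_metrics)
collect_metrics(final_fit)
```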
This last_fit object does not only include the finalized workflow along with our final performance metrics, but it also contains a nice section on our predictions: the predicted classes, class probabilities and actual outcomes. These are nested within the last_fit object, so when we convert them to a data.frame, we can visualize the final predictions. For example, we can plot the density distributions of predicted probabilities coloured by actual attrition outcomes (inspired by a post by Julia Silge).
Because the distributions overlap quite a bit, it seems like there are a lot of cases in which the model predicted both outcomes (yes/no) to be almost equally likely. This also means that active employees and leavers were not that easily separable, but our performance metrics show that the algorithm assign just slightly more probability to the correct outcome in most of the cases. For the positive class, the classifier must have detected the majority of true leavers but also incorrectly left about 30 % of them to the negative class (and vice versa). What happens if we plot the predictions from caret?
Wow, it appears that more true negatives receive an attrition-probability close to zero while the classifier tends to assign a high attrition-probability to employees who actually left. But it also incorrectly classified about a third of true positives as well as true negatives... How does this performance translate to the whole training set? After fitting the model on the training data, we can make predictions using predict().
As hypothesized, about 30% of the data were incorrectly classified within each class, which is also reflected in the similar levels of sensitivity (TPR) vs. specificity (TNR). To get similar information for our caret classifier, we can simply call predict.train() and use our model and training data as inputs, then wrap the predictions in a confusionMatrix() command. Now we have a very nice overview of the training data performance.
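For example (a sketch, assuming the list of caret models is `model_list` and the balanced training set is `train_rose`):

```r
library(caret)

train_preds <- predict(model_list$glmnet, newdata = train_rose)
confusionMatrix(train_preds, train_rose$Attrition, positive = "Yes")
```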
The output underlines our assumption that caret has been a bit better in disentangling the positive from the negative class. In addition, we get much less false negatives compared to our tidymodels output, suggesting that recall should be relatively enhanced (76 % vs. 71 %).
We have been smart enough to save some of the data for testing purposes — now it is time for the models to actually showcase their prediction skills! The reason for this is that evaluating performance on the same data the classifiers were trained on yields a high risk of overfitting: the model highly depends on the data it has originally trained on, so performance metrics calculated on the same data are by no means independent. To challenge the model’s capacity to handle new observations, we therefore need to test it on data they have never seen before. In order to get performance metrics on the test data for tidymodels, we can call our last_fit object that already carries metrics on the test data for convenience.
It turns out that estimation on the training data would have indeed overestimated the model’s performance since except for specificity, all metrics are lowered a bit on the testing data (e.g., 0.61 vs. 0.71 for sensitivity). This can be further confirmed with our raw confusion matrix on the test data:
In caret, we can simply pass the model's predictions on the test data along with the target column to confusionMatrix() and get the same set of performance metrics.
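The same pattern as for the training data, now applied to the held-out split (`model_glmnet`, `test_data`, and the `"Yes"` positive label are assumed names):

```r
# Predict on the test split, then evaluate with caret's confusionMatrix()
test_preds_caret <- predict(model_glmnet, newdata = test_data)
confusionMatrix(test_preds_caret, test_data$Attrition, positive = "Yes")
```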
The output suggests that caret again performs slightly better than tidymodels, even though the differences are rather negligible. All in all, it yet again shows how much sense it makes to save some data for testing to avoid becoming overly optimistic about the model's performance.
We can interpret it like this: the closer the line resembles a 90° angle, the more specificity comes with high sensitivity. Sensitivity is the true positive rate (TPR), i.e. 1 - FNR: the probability that the model correctly identifies all true positive cases as positive. Conversely, specificity is the true negative rate (TNR), i.e. 1 - FPR: the probability that the model correctly identifies all true negative cases as negative. Because sensitivity is plotted against 1 - specificity (the false positive rate), the area under the curve increases when high sensitivity comes with high specificity. You can think of the ROC curve as a kind of thought experiment: when we have a sensitivity of 0.75, the best possible value for 1 - specificity, the false positive rate, would be 0.00. In this case, the detection of true positives would come with all negative cases still identified as such. Imagine a crude model that simply classifies all cases as positive: it would have perfect sensitivity (TPR) but miserable specificity, because true negatives (e.g., loyal employees) would also get a positive label. This is bad if you also care about the negative cases and want to differentiate them from the positives. So the higher the sensitivity, the higher the risk of low specificity and vice versa, represented by the dotted diagonal on the plot. For tidymodels, a sensitivity of 0.75 comes with a specificity of 0.6, which is not a bad trade-off. Thus, as our model becomes smart enough to detect leavers (attrition = yes) while still also identifying active employees (attrition = no), the area under the curve increases.
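A sketch of the tidymodels ROC curve and AUC, assuming the positive-class probability column is named `.pred_Yes` and "Yes" is the first factor level of `Attrition` (yardstick's default event level):

```r
# ROC curve from the test-set predictions collected by last_fit()
collect_predictions(last_fit_obj) %>%
  roc_curve(truth = Attrition, .pred_Yes) %>%
  autoplot()

# Area under the ROC curve as a single number
collect_predictions(last_fit_obj) %>%
  roc_auc(truth = Attrition, .pred_Yes)
```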
To make a similar ROC-curve within our caret-framework, we can use MLeval to provide us with a range of ROC-related outputs. Overall, the area under the curve for glmnet appears to be a tiny bit larger compared to our tidymodels output and there is not a huge gap in performance between both frameworks.
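A sketch of the MLeval call; it assumes the caret model was trained with `classProbs = TRUE` and saved predictions in its `trainControl` so that class probabilities are available:

```r
library(MLeval)

# evalm() takes a trained caret model and produces ROC-related output;
# requires classProbs = TRUE and savePredictions = "final" in trainControl
eval_res <- evalm(model_glmnet)
eval_res$roc   # ggplot object with the ROC curve and AUC annotation
```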
Tidymodels comes with high flexibility because it is based on various modern packages and has a completely customizable workflow structure. This means that you have a lot of degrees of freedom when it comes to creating your very own machine learning project. But because so many steps and objects are included, it can certainly be a bit confusing for beginners. During my own project, I felt like I got a better grasp of what the program does since I worked my way around a lot of error messages, learning a lot of theory along the way. But this can also be due to the fact that tidymodels is still under development and therefore not as stable yet. If you want a quick and concise solution to your prediction problem instead of setting up a big project, I would recommend caret. It is not only faster in terms of runtime, but there are also more resources and solved issues from experienced users out there. Caret fits a lot of models for you with very little coding work while being as fast as possible thanks to parallel processing. Unlike in tidymodels, the best candidate model across resamples is selected for you automatically, which can be neat. One thing is for sure: Max Kuhn certainly did a great job with designing these packages, and you are lucky to have two powerful open-source packages to choose from.
My special thanks goes to Robert Lohne, a consultant from Norway, who acted as my intellectual sparring-partner during this project. I really enjoyed our collaborative exchange and I hope that our learning journey won’t stop here!
[1] G. Harden, K. G. Boakye & S. Ryan, Turnover intention of technology professionals: A social exchange theory perspective (2018), Journal of Computer Information Systems, 58(4), 291–300.
[2] J. Sarkar, Linking Compensation and Turnover: Retrospection and Future Directions (2018), IUP Journal of Organizational Behavior, 17(1).
[3] P.C. Bryant & D. G. Allen, Compensation, benefits and employee turnover: HR strategies for retaining top talent (2013), Compensation & Benefits Review, 45(3), 171–175.
},
{
"code": null,
"e": 28529,
"s": 28225,
"text": "To make a similar ROC-curve within our caret-framework, we can use MLeval to provide us with a range of ROC-related outputs. Overall, the area under the curve for glmnet appears to be a tiny bit larger compared to our tidymodels output and there is not a huge gap in performance between both frameworks."
},
{
"code": null,
"e": 29861,
"s": 28529,
"text": "Tidymodels comes with a high flexibility because it is based on various modern packages and has a completely customizable workflow structure. This means that you have a lot of degrees of freedom when it comes to creating your very own machine learning project. But because so many steps and objects are included, it can certainly be a bit confusing for beginners. During my own project, I felt like I got a better grasp of what the program does since I have worked my way around a lot of error messages, learning a lot of theory along the way. But this can also be due to the fact that tidymodels is still under development and therefore not as stable yet. If you want a quick and concise solution to your prediction problem instead of setting up a big project, I would recommend caret to you. It is not only faster in terms of runtime, but there are still more resourced and solved issues from experienced users out there. Caret fits a lot of models for you with very little coding work while being as fast as possible thanks to parallel processing. Unlike in tidymodels, the best candidate model across resamples is selected for you automatically which can be neat. One thing is for sure: Max Kuhn certainly did a great job with designing these packages and you are lucky to have two powerful open-source packages to choose from."
},
{
"code": null,
"e": 30092,
"s": 29861,
"text": "My special thanks goes to Robert Lohne, a consultant from Norway, who acted as my intellectual sparring-partner during this project. I really enjoyed our collaborative exchange and I hope that our learning journey won’t stop here!"
},
{
"code": null,
"e": 30281,
"s": 30092,
"text": "[1] G. Harden, K. G. Boakye & S. Ryan, Turnover intention of technology professionals: A social exchange theory perspective (2018), Journal of Computer Information Systems, 58(4), 291–300."
},
{
"code": null,
"e": 30422,
"s": 30281,
"text": "[2] J. Sarkar, Linking Compensation and Turnover: Retrospection and Future Directions (2018), IUP Journal of Organizational Behavior, 17(1)."
}
] |
How to make phone call in iOS 10 using Swift?
|
In this post, we will see how to make a phone call in iOS programmatically.
So let’s get started.
Step 1 − Open Xcode → New Project → Single View Application → Let’s name it “MakeCall”
Step 2 − Open Main.storyboard and add one text field and one button as shown below
Step 3 − Create @IBOutlet for the text field, name it phoneNumberTextfield.
Step 4 − Create @IBAction method callButtonClicked for call button
Step 5 − To make a call we can use iOS openURL. In callButtonClicked add following lines
if let url = URL(string: "tel://\(phoneNumberTextfield.text!)"),
   UIApplication.shared.canOpenURL(url) {
    UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
Step 6 − Run the app and enter the number you want to make a call to, as shown in the pic below
Step 7 − Click on the call button, you will be shown an alert with ‘Call’ and ‘Cancel’ options
Step 8 − Click on the call button, the call will be made to the number, as shown below
|
[
{
"code": null,
"e": 1136,
"s": 1062,
"text": "In this post we will be seeing how to make phone in iOS programmatically."
},
{
"code": null,
"e": 1158,
"s": 1136,
"text": "So let’s get started."
},
{
"code": null,
"e": 1245,
"s": 1158,
"text": "Step 1 − Open Xcode → New Project → Single View Application → Let’s name it “MakeCall”"
},
{
"code": null,
"e": 1328,
"s": 1245,
"text": "Step 2 − Open Main.storyboard and add one text field and one button as shown below"
},
{
"code": null,
"e": 1404,
"s": 1328,
"text": "Step 3 − Create @IBOutlet for the text field, name it phoneNumberTextfield."
},
{
"code": null,
"e": 1471,
"s": 1404,
"text": "Step 4 − Create @IBAction method callButtonClicked for call button"
},
{
"code": null,
"e": 1560,
"s": 1471,
"text": "Step 5 − To make a call we can use iOS openURL. In callButtonClicked add following lines"
},
{
"code": null,
"e": 1733,
"s": 1560,
"text": "if let url = URL(string: \"tel://\\(phoneNumberTextfield.text!)\"),\nUIApplication.shared.canOpenURL(url) {\nUIApplication.shared.open(url, options: [:], completionHandler: nil)"
},
{
"code": null,
"e": 1826,
"s": 1733,
"text": "Step 6 − Run the app and enter the number you want to make call to as shown in the pic below"
},
{
"code": null,
"e": 1921,
"s": 1826,
"text": "Step 7 − Click on the call button, you will be shown an alert with ‘Call’ and ‘Cancel’ options"
},
{
"code": null,
"e": 2008,
"s": 1921,
"text": "Step 8 − Click on the call button, the call will be made to the number, as shown below"
}
] |
Java Threads
|
Threads allow a program to operate more efficiently by doing multiple things at the same time.
Threads can be used to perform complicated tasks in the background without interrupting
the main program.
There are two ways to create a thread.
It can be created by extending the Thread class and overriding its run()
method:
public class Main extends Thread {
public void run() {
System.out.println("This code is running in a thread");
}
}
Another way to create a thread is to implement the Runnable interface:
public class Main implements Runnable {
public void run() {
System.out.println("This code is running in a thread");
}
}
If the class extends the Thread class, the thread can be run by creating an instance of the class and calling its start() method:
public class Main extends Thread {
public static void main(String[] args) {
Main thread = new Main();
thread.start();
System.out.println("This code is outside of the thread");
}
public void run() {
System.out.println("This code is running in a thread");
}
}
If the class implements the Runnable interface, the thread can be run by passing an
instance of the class to a Thread object's constructor and then calling the thread's
start() method:
public class Main implements Runnable {
public static void main(String[] args) {
Main obj = new Main();
Thread thread = new Thread(obj);
thread.start();
System.out.println("This code is outside of the thread");
}
public void run() {
System.out.println("This code is running in a thread");
}
}
Differences between "extending" and "implementing" Threads
The major difference is that when a class extends the Thread class, you cannot extend any other class, but by implementing the Runnable interface,
it is possible to extend from another class as well, like: class MyClass extends OtherClass implements Runnable.
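The same design choice exists in Python's threading module, which makes for a compact side-by-side illustration (a hedged aside, not part of the original Java lesson): you can subclass threading.Thread and override run(), or pass a plain callable as the target argument, which leaves the class free to inherit from something else.

```python
import threading

results = []

# Pattern 1: subclass Thread and override run(),
# mirroring "extends Thread" in Java
class MyThread(threading.Thread):
    def run(self):
        results.append("subclassed thread")

# Pattern 2: pass a callable as target, mirroring "implements Runnable";
# the enclosing class (if any) is free to extend another class
def task():
    results.append("target callable")

t1 = MyThread()
t2 = threading.Thread(target=task)
t1.start(); t2.start()
t1.join(); t2.join()   # wait for both threads to finish

print(sorted(results))  # ['subclassed thread', 'target callable']
```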
Because threads run at the same time as other parts of the program, there is no way to
know in which order the code will run. When the threads and main program are reading
and writing the same variables, the values are unpredictable. The problems that result
from this are called concurrency problems.
A code example where the value of the variable amount is unpredictable:
public class Main extends Thread {
public static int amount = 0;
public static void main(String[] args) {
Main thread = new Main();
thread.start();
System.out.println(amount);
amount++;
System.out.println(amount);
}
public void run() {
amount++;
}
}
To avoid concurrency problems, it is best to share as few attributes between threads as
possible. If attributes need to be shared, one possible solution is to use the isAlive()
method of the thread to check whether the thread has finished running before using any
attributes that the thread can change.
Use isAlive() to prevent concurrency problems:
public class Main extends Thread {
public static int amount = 0;
public static void main(String[] args) {
Main thread = new Main();
thread.start();
// Wait for the thread to finish
while(thread.isAlive()) {
System.out.println("Waiting...");
}
// Update amount and print its value
System.out.println("Main: " + amount);
amount++;
System.out.println("Main: " + amount);
}
public void run() {
amount++;
}
}
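Python's threading API offers the same guard (a parallel sketch, not part of the W3Schools lesson): is_alive() reports whether a thread is still running, so shared state can be read safely once it returns False. In practice join() is the more idiomatic way to wait.

```python
import threading
import time

amount = 0

def work():
    global amount
    time.sleep(0.05)   # simulate some work before updating shared state
    amount += 1

t = threading.Thread(target=work)
t.start()

# Busy-wait until the thread has finished, mirroring the Java isAlive() loop
while t.is_alive():
    time.sleep(0.01)

# The thread has completed, so reading `amount` is now safe
print(amount)  # 1
```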
|
[
{
"code": null,
"e": 96,
"s": 0,
"text": "Threads allows a program to operate more efficiently by doing multiple things at the same\ntime."
},
{
"code": null,
"e": 202,
"s": 96,
"text": "Threads can be used to perform complicated tasks in the background without interrupting\nthe main program."
},
{
"code": null,
"e": 241,
"s": 202,
"text": "There are two ways to create a thread."
},
{
"code": null,
"e": 323,
"s": 241,
"text": "It can be created by extending the Thread class and overriding its run() \nmethod:"
},
{
"code": null,
"e": 446,
"s": 323,
"text": "public class Main extends Thread {\n public void run() {\n System.out.println(\"This code is running in a thread\");\n }\n}"
},
{
"code": null,
"e": 517,
"s": 446,
"text": "Another way to create a thread is to implement the Runnable interface:"
},
{
"code": null,
"e": 645,
"s": 517,
"text": "public class Main implements Runnable {\n public void run() {\n System.out.println(\"This code is running in a thread\");\n }\n}"
},
{
"code": null,
"e": 772,
"s": 645,
"text": "If the class extends the Thread class, the thread can be run by creating an instance of the\nclass and call its start() method:"
},
{
"code": null,
"e": 1054,
"s": 772,
"text": "public class Main extends Thread {\n public static void main(String[] args) {\n Main thread = new Main();\n thread.start();\n System.out.println(\"This code is outside of the thread\");\n }\n public void run() {\n System.out.println(\"This code is running in a thread\");\n }\n}"
},
{
"code": null,
"e": 1259,
"s": 1074,
"text": "If the class implements the Runnable interface, the thread can be run by passing an\ninstance of the class to a Thread object's constructor and then calling the thread's\nstart() method:"
},
{
"code": null,
"e": 1580,
"s": 1259,
"text": "public class Main implements Runnable {\n public static void main(String[] args) {\n Main obj = new Main();\n Thread thread = new Thread(obj);\n thread.start();\n System.out.println(\"This code is outside of the thread\");\n }\n public void run() {\n System.out.println(\"This code is running in a thread\");\n }\n}"
},
{
"code": null,
"e": 1659,
"s": 1600,
"text": "Differences between \"extending\" and \"implementing\" Threads"
},
{
"code": null,
"e": 1920,
"s": 1659,
"text": "The major difference is that when a class extends the Thread class, you cannot extend any other class, but by implementing the Runnable interface, \nit is possible to extend from another class as well, like: class MyClass extends OtherClass implements Runnable."
},
{
"code": null,
"e": 2222,
"s": 1920,
"text": "Because threads run at the same time as other parts of the program, there is no way to\nknow in which order the code will run. When the threads and main program are reading\nand writing the same variables, the values are unpredictable. The problems that result\nfrom this are called concurrency problems."
},
{
"code": null,
"e": 2294,
"s": 2222,
"text": "A code example where the value of the variable amount is unpredictable:"
},
{
"code": null,
"e": 2580,
"s": 2294,
"text": "public class Main extends Thread {\n public static int amount = 0;\n\n public static void main(String[] args) {\n Main thread = new Main();\n thread.start();\n System.out.println(amount);\n amount++;\n System.out.println(amount);\n }\n\n public void run() {\n amount++;\n }\n}"
},
{
"code": null,
"e": 2904,
"s": 2600,
"text": "To avoid concurrency problems, it is best to share as few attributes between threads as\npossible. If attributes need to be shared, one possible solution is to use the isAlive()\nmethod of the thread to check whether the thread has finished running before using any \nattributes that the thread can change."
},
{
"code": null,
"e": 2951,
"s": 2904,
"text": "Use isAlive() to prevent concurrency problems:"
},
{
"code": null,
"e": 3400,
"s": 2951,
"text": "public class Main extends Thread {\n public static int amount = 0;\n\n public static void main(String[] args) {\n Main thread = new Main();\n thread.start();\n // Wait for the thread to finish\n while(thread.isAlive()) {\n System.out.println(\"Waiting...\");\n }\n // Update amount and print its value\n System.out.println(\"Main: \" + amount);\n amount++;\n System.out.println(\"Main: \" + amount);\n }\n public void run() {\n amount++;\n }\n}"
}
] |
rev command in Linux with Examples - GeeksforGeeks
|
24 May, 2019
The rev command in Linux is used to reverse lines character-wise. This utility reverses the order of the characters in each line by copying the specified files to the standard output. If no files are specified, the standard input is read.
Syntax:
rev [option] [file...]
Example 1: Taking input from standard input
Example 2: Suppose we have a text file named sample.txt
Using the rev command on the sample file will display the result on the terminal as follows:
Options:
rev -V: This option displays the version information and exits.
rev -V
rev -h: This option shows the help message and exits.
rev -h
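The core of what rev does, reversing each line character-wise while keeping line order intact, can be sketched in a few lines of Python (an illustrative re-implementation, not the actual util-linux source):

```python
def rev_lines(text):
    # Reverse the characters of each line, keeping the order of the
    # lines themselves unchanged, just like `rev` does with its input.
    return "\n".join(line[::-1] for line in text.split("\n"))

print(rev_lines("geeks\nfor geeks"))
# skeeg
# skeeg rof
```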
|
[
{
"code": null,
"e": 24015,
"s": 23987,
"text": "\n24 May, 2019"
},
{
"code": null,
"e": 24270,
"s": 24015,
"text": "rev command in Linux is used to reverse the lines characterwise. This utility basically reverses the order of the characters in each line by copying the specified files to the standard output. If no files are specified, then the standard input will read."
},
{
"code": null,
"e": 24278,
"s": 24270,
"text": "Syntax:"
},
{
"code": null,
"e": 24302,
"s": 24278,
"text": "rev [option] [file...]\n"
},
{
"code": null,
"e": 24346,
"s": 24302,
"text": "Example 1: Taking input from standard input"
},
{
"code": null,
"e": 24405,
"s": 24346,
"text": "Example 2: Suppose we have a text file named as sample.txt"
},
{
"code": null,
"e": 24494,
"s": 24405,
"text": "Using rev command on sample file. It will display the result on the terminal as follows:"
},
{
"code": null,
"e": 24503,
"s": 24494,
"text": "Options:"
},
{
"code": null,
"e": 24571,
"s": 24503,
"text": "rev -V: This option display the version information and exit.rev -V"
},
{
"code": null,
"e": 24578,
"s": 24571,
"text": "rev -V"
},
{
"code": null,
"e": 24641,
"s": 24578,
"text": "rev -h: This option will show the help message and exit.rev -h"
},
{
"code": null,
"e": 24648,
"s": 24641,
"text": "rev -h"
}
] |
Length of intercept cut off from a line by a Circle - GeeksforGeeks
|
08 Jun, 2021
Given six integers a, b, c, i, j, and k, representing the equation of a circle x^2 + y^2 + ax + by + c = 0 and the equation of a line ix + jy + k = 0, the task is to find the length of the intercept cut off from the given line by the circle.
Examples:
Input: a = 0, b = 0, c = -4, i = 2, j = -1, k = 1 Output: 3.89872
Input: a = 5, b = 6, c = -16, i = 1, j = 4, k = 3 Output: 6.9282
Approach: Follow the steps below to solve the problem:
Find the center of the circle, say O(-a/2, -b/2).
The perpendicular from the center divides the intercept into two equal parts, therefore calculate the length of one of the parts and multiply it by 2 to get the total length of the intercept.
Calculate the value of the radius (r) using the formula r = sqrt(g^2 + f^2 - c), where g = a/2 and f = b/2.
Calculate the perpendicular distance (d) of the center O from the line using the formula d = |i*x0 + j*y0 + k| / sqrt(i^2 + j^2), where (x0, y0) is the center.
Now, from the Pythagorean theorem in triangle OCA, where C is the foot of the perpendicular from O to the line and A is one endpoint of the intercept: AC = sqrt(r^2 - d^2).
After completing the above steps, print twice the value of AC to get the length of the total intercept.
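The steps above can be checked numerically against Example 1 (a quick Python sanity check; the variable names mirror the formulas rather than the reference implementation, and the circle/line forms assumed are x^2 + y^2 + ax + by + c = 0 and ix + jy + k = 0):

```python
import math

# Example 1: circle x^2 + y^2 - 4 = 0 (a = 0, b = 0, c = -4),
# line 2x - y + 1 = 0 (i = 2, j = -1, k = 1)
a, b, c = 0, 0, -4
i, j, k = 2, -1, 1

g, f = a / 2, b / 2
r = math.sqrt(g * g + f * f - c)                        # radius: sqrt(4) = 2
d = abs(i * g + j * f + k) / math.sqrt(i * i + j * j)   # distance: 1 / sqrt(5)
intercept = 2 * math.sqrt(r * r - d * d)                # 2 * sqrt(4 - 0.2)

print(round(intercept, 5))  # 3.89872
```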
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Function to find the// radius of a circledouble radius(int a, int b, int c){ // g and f are the coordinates // of the center int g = a / 2; int f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return (sqrt(g * g + f * f - c));} // Function to find the perpendicular// distance between circle center and the linedouble centerDistanceFromLine(int a, int b, int i, int j, int k){ // Store the coordinates of center int g = a / 2; int f = b / 2; // Stores the perpendicular distance // between the line and the point double distance = fabs(i * g + j * f + k) / (sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance;} // Function to find the length of intercept// cut off from a line by a circlevoid interceptLength(int a, int b, int c, int i, int j, int k){ // Calculate the value of radius double rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center double dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { cout << "circle not possible"; return; } // If line do not cut circle if (dist > rad) { cout << "Line not cutting circle"; } // Print the intercept length else cout << 2 * sqrt(rad * rad - dist * dist);} // Driver Codeint main(){ // Given Input int a = 0, b = 0, c = -4; int i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k); return 0;}
// Java program for the above approachclass GFG{ // Function to find the// radius of a circlestatic double radius(int a, int b, int c){ // g and f are the coordinates // of the center int g = a / 2; int f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return (Math.sqrt(g * g + f * f - c));} // Function to find the perpendicular// distance between circle center and the linestatic double centerDistanceFromLine(int a, int b, int i, int j, int k){ // Store the coordinates of center int g = a / 2; int f = b / 2; // Stores the perpendicular distance // between the line and the point double distance = Math.abs(i * g + j * f + k) / (Math.sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance;} // Function to find the length of intercept// cut off from a line by a circlestatic void interceptLength(int a, int b, int c, int i, int j, int k){ // Calculate the value of radius double rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center double dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { System.out.println("circle not possible"); return; } // If line do not cut circle if (dist > rad) { System.out.println("Line not cutting circle"); } // Print the intercept length else System.out.println(2 * Math.sqrt( rad * rad - dist * dist));} // Driver codepublic static void main(String[] args){ // Given Input int a = 0, b = 0, c = -4; int i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k);}} // This code is contributed by abhinavjain194
# Python3 program for the above approachimport math # Function to find the# radius of a circledef radius(a, b, c): # g and f are the coordinates # of the center g = a / 2 f = b / 2 # Case of invalid circle if (g * g + f * f - c < 0): return(-1) # Apply the radius formula return(math.sqrt(g * g + f * f - c)) # Function to find the perpendicular# distance between circle center and the linedef centerDistanceFromLine(a, b, i, j, k): # Store the coordinates of center g = a / 2 f = b / 2 # Stores the perpendicular distance # between the line and the point distance = (abs(i * g + j * f + k) / (math.sqrt(i * i + j * j))) # Invalid Case if (distance < 0): return (-1) # Return the distance return distance # Function to find the length of intercept# cut off from a line by a circledef interceptLength(a, b, c, i, j, k): # Calculate the value of radius rad = radius(a, b, c) # Calculate the perpendicular distance # between line and center dist = centerDistanceFromLine( a, b, i, j, k) # Invalid Case if (rad < 0 or dist < 0): print("circle not possible") return # If line do not cut circle if (dist > rad): print("Line not cutting circle") # Print the intercept length else: print(2 * math.sqrt( rad * rad - dist * dist)) # Driver Codeif __name__ == "__main__": # Given Input a = 0 b = 0 c = -4 i = 2 j = -1 k = 1 # Function Call interceptLength(a, b, c, i, j, k) # This code is contributed by ukasp
// C# program for the above approachusing System; class GFG{ // Function to find the// radius of a circlestatic double radius(int a, int b, int c){ // g and f are the coordinates // of the center int g = a / 2; int f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return(Math.Sqrt(g * g + f * f - c));} // Function to find the perpendicular// distance between circle center and the linestatic double centerDistanceFromLine(int a, int b, int i, int j, int k){ // Store the coordinates of center int g = a / 2; int f = b / 2; // Stores the perpendicular distance // between the line and the point double distance = Math.Abs(i * g + j * f + k) / (Math.Sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance;} // Function to find the length of intercept// cut off from a line by a circlestatic void interceptLength(int a, int b, int c, int i, int j, int k){ // Calculate the value of radius double rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center double dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { Console.WriteLine("circle not possible"); return; } // If line do not cut circle if (dist > rad) { Console.WriteLine("Line not cutting circle"); } // Print the intercept length else Console.WriteLine(2 * Math.Sqrt( rad * rad - dist * dist));} // Driver codepublic static void Main(String []args){ // Given Input int a = 0, b = 0, c = -4; int i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k);}} // This code is contributed by sanjoy_62
<script> // JavaScript program for the above approach // Function to find the // radius of a circle function radius(a, b, c) { // g and f are the coordinates // of the center let g = a / 2; let f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return (Math.sqrt(g * g + f * f - c)); } // Function to find the perpendicular // distance between circle center and the line function centerDistanceFromLine(a, b, i, j, k) { // Store the coordinates of center let g = a / 2; let f = b / 2; // Stores the perpendicular distance // between the line and the point let distance = Math.abs(i * g + j * f + k) / (Math.sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance; } // Function to find the length of intercept // cut off from a line by a circle function interceptLength(a, b, c, i, j, k) { // Calculate the value of radius let rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center let dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { document.write("circle not possible"); return; } // If line do not cut circle if (dist > rad) { document.write("Line not cutting circle"); } // Print the intercept length else document.write(2 * Math.sqrt( rad * rad - dist * dist)); } // Driver code // Given Input let a = 0, b = 0, c = -4; let i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k); // This code is contributed by Hritik </script>
3.89872
Time Complexity: O(1)
Auxiliary Space: O(1)
|
[
{
"code": null,
"e": 24945,
"s": 24917,
"text": "\n08 Jun, 2021"
},
{
"code": null,
"e": 25145,
"s": 24945,
"text": "Given six integers, a, b, c, i, j, and k representing the equation of the circle and equation of the line , the task is to find the length of the intercept cut off from the given line to the circle."
},
{
"code": null,
"e": 25155,
"s": 25145,
"text": "Examples:"
},
{
"code": null,
"e": 25221,
"s": 25155,
"text": "Input: a = 0, b = 0, c = -4, i = 2, j = -1, k = 1 Output: 3.89872"
},
{
"code": null,
"e": 25286,
"s": 25221,
"text": "Input: a = 5, b = 6, c = -16, i = 1, j = 4, k = 3 Output: 6.9282"
},
{
"code": null,
"e": 25341,
"s": 25286,
"text": "Approach: Follow the steps below to solve the problem:"
},
{
"code": null,
"e": 25387,
"s": 25341,
"text": "Find the center of the circle, say as and ."
},
{
"code": null,
"e": 25579,
"s": 25387,
"text": "The perpendicular from the center divides the intercept into two equal parts, therefore calculate the length of one of the parts and multiply it by 2 to get the total length of the intercept."
},
{
"code": null,
"e": 25646,
"s": 25579,
"text": "Calculate the value of radius (r) using the formula: , where and "
},
{
"code": null,
"e": 25747,
"s": 25646,
"text": "Calculate the value of perpendicular distance ( d ) of center O from the line by using the formula: "
},
{
"code": null,
"e": 25796,
"s": 25747,
"text": "Now from the pythagoras theorem in triangle OCA:"
},
{
"code": null,
"e": 25903,
"s": 25796,
"text": "After completing the above steps, print the value of twice of AC to get the length of the total intercept."
},
{
"code": null,
"e": 25954,
"s": 25903,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 25958,
"s": 25954,
"text": "C++"
},
{
"code": null,
"e": 25963,
"s": 25958,
"text": "Java"
},
{
"code": null,
"e": 25971,
"s": 25963,
"text": "Python3"
},
{
"code": null,
"e": 25974,
"s": 25971,
"text": "C#"
},
{
"code": null,
"e": 25985,
"s": 25974,
"text": "Javascript"
},
{
"code": "// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Function to find the// radius of a circledouble radius(int a, int b, int c){ // g and f are the coordinates // of the center int g = a / 2; int f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return (sqrt(g * g + f * f - c));} // Function to find the perpendicular// distance between circle center and the linedouble centerDistanceFromLine(int a, int b, int i, int j, int k){ // Store the coordinates of center int g = a / 2; int f = b / 2; // Stores the perpendicular distance // between the line and the point double distance = fabs(i * g + j * f + k) / (sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance;} // Function to find the length of intercept// cut off from a line by a circlevoid interceptLength(int a, int b, int c, int i, int j, int k){ // Calculate the value of radius double rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center double dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { cout << \"circle not possible\"; return; } // If line do not cut circle if (dist > rad) { cout << \"Line not cutting circle\"; } // Print the intercept length else cout << 2 * sqrt(rad * rad - dist * dist);} // Driver Codeint main(){ // Given Input int a = 0, b = 0, c = -4; int i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k); return 0;}",
"e": 27807,
"s": 25985,
"text": null
},
{
"code": "// Java program for the above approachclass GFG{ // Function to find the// radius of a circlestatic double radius(int a, int b, int c){ // g and f are the coordinates // of the center int g = a / 2; int f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return (Math.sqrt(g * g + f * f - c));} // Function to find the perpendicular// distance between circle center and the linestatic double centerDistanceFromLine(int a, int b, int i, int j, int k){ // Store the coordinates of center int g = a / 2; int f = b / 2; // Stores the perpendicular distance // between the line and the point double distance = Math.abs(i * g + j * f + k) / (Math.sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance;} // Function to find the length of intercept// cut off from a line by a circlestatic void interceptLength(int a, int b, int c, int i, int j, int k){ // Calculate the value of radius double rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center double dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { System.out.println(\"circle not possible\"); return; } // If line do not cut circle if (dist > rad) { System.out.println(\"Line not cutting circle\"); } // Print the intercept length else System.out.println(2 * Math.sqrt( rad * rad - dist * dist));} // Driver codepublic static void main(String[] args){ // Given Input int a = 0, b = 0, c = -4; int i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k);}} // This code is contributed by abhinavjain194",
"e": 29775,
"s": 27807,
"text": null
},
{
"code": "# Python3 program for the above approachimport math # Function to find the# radius of a circledef radius(a, b, c): # g and f are the coordinates # of the center g = a / 2 f = b / 2 # Case of invalid circle if (g * g + f * f - c < 0): return(-1) # Apply the radius formula return(math.sqrt(g * g + f * f - c)) # Function to find the perpendicular# distance between circle center and the linedef centerDistanceFromLine(a, b, i, j, k): # Store the coordinates of center g = a / 2 f = b / 2 # Stores the perpendicular distance # between the line and the point distance = (abs(i * g + j * f + k) / (math.sqrt(i * i + j * j))) # Invalid Case if (distance < 0): return (-1) # Return the distance return distance # Function to find the length of intercept# cut off from a line by a circledef interceptLength(a, b, c, i, j, k): # Calculate the value of radius rad = radius(a, b, c) # Calculate the perpendicular distance # between line and center dist = centerDistanceFromLine( a, b, i, j, k) # Invalid Case if (rad < 0 or dist < 0): print(\"circle not possible\") return # If line do not cut circle if (dist > rad): print(\"Line not cutting circle\") # Print the intercept length else: print(2 * math.sqrt( rad * rad - dist * dist)) # Driver Codeif __name__ == \"__main__\": # Given Input a = 0 b = 0 c = -4 i = 2 j = -1 k = 1 # Function Call interceptLength(a, b, c, i, j, k) # This code is contributed by ukasp",
"e": 31370,
"s": 29775,
"text": null
},
{
"code": "// C# program for the above approachusing System; class GFG{ // Function to find the// radius of a circlestatic double radius(int a, int b, int c){ // g and f are the coordinates // of the center int g = a / 2; int f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return(Math.Sqrt(g * g + f * f - c));} // Function to find the perpendicular// distance between circle center and the linestatic double centerDistanceFromLine(int a, int b, int i, int j, int k){ // Store the coordinates of center int g = a / 2; int f = b / 2; // Stores the perpendicular distance // between the line and the point double distance = Math.Abs(i * g + j * f + k) / (Math.Sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance;} // Function to find the length of intercept// cut off from a line by a circlestatic void interceptLength(int a, int b, int c, int i, int j, int k){ // Calculate the value of radius double rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center double dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { Console.WriteLine(\"circle not possible\"); return; } // If line do not cut circle if (dist > rad) { Console.WriteLine(\"Line not cutting circle\"); } // Print the intercept length else Console.WriteLine(2 * Math.Sqrt( rad * rad - dist * dist));} // Driver codepublic static void Main(String []args){ // Given Input int a = 0, b = 0, c = -4; int i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k);}} // This code is contributed by sanjoy_62",
"e": 33358,
"s": 31370,
"text": null
},
{
"code": "<script> // JavaScript program for the above approach // Function to find the // radius of a circle function radius(a, b, c) { // g and f are the coordinates // of the center let g = a / 2; let f = b / 2; // Case of invalid circle if (g * g + f * f - c < 0) return (-1); // Apply the radius formula return (Math.sqrt(g * g + f * f - c)); } // Function to find the perpendicular // distance between circle center and the line function centerDistanceFromLine(a, b, i, j, k) { // Store the coordinates of center let g = a / 2; let f = b / 2; // Stores the perpendicular distance // between the line and the point let distance = Math.abs(i * g + j * f + k) / (Math.sqrt(i * i + j * j)); // Invalid Case if (distance < 0) return (-1); // Return the distance return distance; } // Function to find the length of intercept // cut off from a line by a circle function interceptLength(a, b, c, i, j, k) { // Calculate the value of radius let rad = radius(a, b, c); // Calculate the perpendicular distance // between line and center let dist = centerDistanceFromLine( a, b, i, j, k); // Invalid Case if (rad < 0 || dist < 0) { document.write(\"circle not possible\"); return; } // If line do not cut circle if (dist > rad) { document.write(\"Line not cutting circle\"); } // Print the intercept length else document.write(2 * Math.sqrt( rad * rad - dist * dist)); } // Driver code // Given Input let a = 0, b = 0, c = -4; let i = 2, j = -1, k = 1; // Function Call interceptLength(a, b, c, i, j, k); // This code is contributed by Hritik </script>",
"e": 35541,
"s": 33358,
"text": null
},
{
"code": null,
"e": 35549,
"s": 35541,
"text": "3.89872"
},
{
"code": null,
"e": 35594,
"s": 35551,
"text": "Time Complexity: O(1)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 35611,
"s": 35596,
"text": "abhinavjain194"
},
{
"code": null,
"e": 35621,
"s": 35611,
"text": "sanjoy_62"
},
{
"code": null,
"e": 35634,
"s": 35621,
"text": "hritikrommie"
},
{
"code": null,
"e": 35640,
"s": 35634,
"text": "ukasp"
},
{
"code": null,
"e": 35647,
"s": 35640,
"text": "circle"
},
{
"code": null,
"e": 35655,
"s": 35647,
"text": "Circles"
},
{
"code": null,
"e": 35671,
"s": 35655,
"text": "Geometric-Lines"
},
{
"code": null,
"e": 35681,
"s": 35671,
"text": "Geometric"
},
{
"code": null,
"e": 35694,
"s": 35681,
"text": "Mathematical"
},
{
"code": null,
"e": 35707,
"s": 35694,
"text": "Mathematical"
},
{
"code": null,
"e": 35717,
"s": 35707,
"text": "Geometric"
},
{
"code": null,
"e": 35815,
"s": 35717,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35824,
"s": 35815,
"text": "Comments"
},
{
"code": null,
"e": 35837,
"s": 35824,
"text": "Old Comments"
},
{
"code": null,
"e": 35871,
"s": 35837,
"text": "Convex Hull | Set 2 (Graham Scan)"
},
{
"code": null,
"e": 35922,
"s": 35871,
"text": "Line Clipping | Set 1 (Cohen–Sutherland Algorithm)"
},
{
"code": null,
"e": 35971,
"s": 35922,
"text": "Closest Pair of Points | O(nlogn) Implementation"
},
{
"code": null,
"e": 36029,
"s": 35971,
"text": "Given n line segments, find if any two segments intersect"
},
{
"code": null,
"e": 36104,
"s": 36029,
"text": "Window to Viewport Transformation in Computer Graphics with Implementation"
},
{
"code": null,
"e": 36134,
"s": 36104,
"text": "Program for Fibonacci numbers"
},
{
"code": null,
"e": 36194,
"s": 36134,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 36209,
"s": 36194,
"text": "C++ Data Types"
},
{
"code": null,
"e": 36252,
"s": 36209,
"text": "Set in C++ Standard Template Library (STL)"
}
] |
How to convert Object's array to an array using JavaScript ? - GeeksforGeeks
|
27 Apr, 2020
Given an object whose property values are arrays, the task is to convert those values into an array with the help of JavaScript. There are two approaches, discussed below:
Approach 1: Use jQuery's $.map() method, which iterates over the object and returns each property value; the returned values make up the array.
Example:
<!DOCTYPE HTML><html> <head> <title> Convert a JS object to an array using JQuery </title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js"> </script></head> <body style="text-align:center;"> <h1 style="color:green;"> GeeksForGeeks </h1> <p id="GFG_UP"></p> <button onclick="myGFG()"> Click Here </button> <p id="GFG_DOWN"></p> <script> var up = document.getElementById("GFG_UP"); var JS_Obj = { 1: ['gfg', 'Gfg', 'gFG'], 2: ['geek', 'Geek', 'gEEK'] }; up.innerHTML = "Object - [" + JSON.stringify(JS_Obj) + "]"; var down = document.getElementById("GFG_DOWN"); function myGFG() { var array = $.map(JS_Obj, function (val, ind) { return [val]; }); down.innerHTML = array; } </script></body> </html>
Output:
Approach 2: The Object.keys() method is used to get the keys of the object, and those keys are then mapped to the corresponding values to build the array.
Example:
<!DOCTYPE HTML><html> <head> <title> Convert a JS object to an array using JQuery </title> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js"> </script></head> <body style="text-align:center;"> <h1 style="color:green;"> GeeksForGeeks </h1> <p id="GFG_UP"></p> <button onclick="myGFG()"> Click Here </button> <p id="GFG_DOWN"></p> <script> var up = document.getElementById("GFG_UP"); var JS_Obj = { 1: ['gfg', 'Gfg', 'gFG'], 2: ['geek', 'Geek', 'gEEK'] }; up.innerHTML = "Object - [" + JSON.stringify(JS_Obj) + "]"; var down = document.getElementById("GFG_DOWN"); function myGFG() { var arr = Object.keys(JS_Obj) .map(function (key) { return JS_Obj[key]; }); down.innerHTML = arr; } </script></body> </html>
Output:
|
[
{
"code": null,
"e": 25442,
"s": 25414,
"text": "\n27 Apr, 2020"
},
{
"code": null,
"e": 25605,
"s": 25442,
"text": "Given an array of objects and the task is to convert the object values to an array with the help of JavaScript. There are two approaches that are discussed below:"
},
{
"code": null,
"e": 25705,
"s": 25605,
"text": "Approach 1: We can use the map() method and return the values of each object which makes the array."
},
{
"code": null,
"e": 26682,
"s": 25705,
"text": "Example:<!DOCTYPE HTML><html> <head> <title> Convert a JS object to an array using JQuery </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js\"> </script></head> <body style=\"text-align:center;\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_UP\"></p> <button onclick=\"myGFG()\"> Click Here </button> <p id=\"GFG_DOWN\"></p> <script> var up = document.getElementById(\"GFG_UP\"); var JS_Obj = { 1: ['gfg', 'Gfg', 'gFG'], 2: ['geek', 'Geek', 'gEEK'] }; up.innerHTML = \"Object - [\" + JSON.stringify(JS_Obj) + \"]\"; var down = document.getElementById(\"GFG_DOWN\"); function myGFG() { var array = $.map(JS_Obj, function (val, ind) { return [val]; }); down.innerHTML = array; } </script></body> </html>"
},
{
"code": "<!DOCTYPE HTML><html> <head> <title> Convert a JS object to an array using JQuery </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js\"> </script></head> <body style=\"text-align:center;\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_UP\"></p> <button onclick=\"myGFG()\"> Click Here </button> <p id=\"GFG_DOWN\"></p> <script> var up = document.getElementById(\"GFG_UP\"); var JS_Obj = { 1: ['gfg', 'Gfg', 'gFG'], 2: ['geek', 'Geek', 'gEEK'] }; up.innerHTML = \"Object - [\" + JSON.stringify(JS_Obj) + \"]\"; var down = document.getElementById(\"GFG_DOWN\"); function myGFG() { var array = $.map(JS_Obj, function (val, ind) { return [val]; }); down.innerHTML = array; } </script></body> </html>",
"e": 27651,
"s": 26682,
"text": null
},
{
"code": null,
"e": 27659,
"s": 27651,
"text": "Output:"
},
{
"code": null,
"e": 27808,
"s": 27659,
"text": "Approach 2: The Object.keys() method is used to get the keys of object and then those keys are used to get the values of the objects from the array."
},
{
"code": null,
"e": 28818,
"s": 27808,
"text": "Example:<!DOCTYPE HTML><html> <head> <title> Convert a JS object to an array using JQuery </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js\"> </script></head> <body style=\"text-align:center;\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_UP\"></p> <button onclick=\"myGFG()\"> Click Here </button> <p id=\"GFG_DOWN\"></p> <script> var up = document.getElementById(\"GFG_UP\"); var JS_Obj = { 1: ['gfg', 'Gfg', 'gFG'], 2: ['geek', 'Geek', 'gEEK'] }; up.innerHTML = \"Object - [\" + JSON.stringify(JS_Obj) + \"]\"; var down = document.getElementById(\"GFG_DOWN\"); function myGFG() { var arr = Object.keys(JS_Obj) .map(function (key) { return JS_Obj[key]; }); down.innerHTML = arr; } </script></body> </html>"
},
{
"code": "<!DOCTYPE HTML><html> <head> <title> Convert a JS object to an array using JQuery </title> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js\"> </script></head> <body style=\"text-align:center;\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_UP\"></p> <button onclick=\"myGFG()\"> Click Here </button> <p id=\"GFG_DOWN\"></p> <script> var up = document.getElementById(\"GFG_UP\"); var JS_Obj = { 1: ['gfg', 'Gfg', 'gFG'], 2: ['geek', 'Geek', 'gEEK'] }; up.innerHTML = \"Object - [\" + JSON.stringify(JS_Obj) + \"]\"; var down = document.getElementById(\"GFG_DOWN\"); function myGFG() { var arr = Object.keys(JS_Obj) .map(function (key) { return JS_Obj[key]; }); down.innerHTML = arr; } </script></body> </html>",
"e": 29820,
"s": 28818,
"text": null
},
{
"code": null,
"e": 29828,
"s": 29820,
"text": "Output:"
},
{
"code": null,
"e": 29837,
"s": 29828,
"text": "CSS-Misc"
},
{
"code": null,
"e": 29847,
"s": 29837,
"text": "HTML-Misc"
},
{
"code": null,
"e": 29863,
"s": 29847,
"text": "JavaScript-Misc"
},
{
"code": null,
"e": 29867,
"s": 29863,
"text": "CSS"
},
{
"code": null,
"e": 29872,
"s": 29867,
"text": "HTML"
},
{
"code": null,
"e": 29883,
"s": 29872,
"text": "JavaScript"
},
{
"code": null,
"e": 29900,
"s": 29883,
"text": "Web Technologies"
},
{
"code": null,
"e": 29927,
"s": 29900,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 29932,
"s": 29927,
"text": "HTML"
},
{
"code": null,
"e": 30030,
"s": 29932,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30039,
"s": 30030,
"text": "Comments"
},
{
"code": null,
"e": 30052,
"s": 30039,
"text": "Old Comments"
},
{
"code": null,
"e": 30093,
"s": 30052,
"text": "Create a Responsive Navbar using ReactJS"
},
{
"code": null,
"e": 30130,
"s": 30093,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 30178,
"s": 30130,
"text": "How to set div width to fit content using CSS ?"
},
{
"code": null,
"e": 30223,
"s": 30178,
"text": "How to set fixed width for <td> in a table ?"
},
{
"code": null,
"e": 30278,
"s": 30223,
"text": "How to apply style to parent if it has child with CSS?"
},
{
"code": null,
"e": 30338,
"s": 30278,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 30399,
"s": 30338,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 30449,
"s": 30399,
"text": "How to Insert Form Data into Database using PHP ?"
},
{
"code": null,
"e": 30502,
"s": 30449,
"text": "Hide or show elements in HTML using display property"
}
] |
How to get all the collections from all the MongoDB databases?
|
To get all the collections from all the databases, let us first get all the databases using the following query
> switchDatabaseAdmin = db.getSiblingDB("admin");
admin
> allDatabaseName = switchDatabaseAdmin.runCommand({ "listDatabases": 1 }).databases;
This will produce the following output
[
{
"name" : "admin",
"sizeOnDisk" : 495616,
"empty" : false
},
{
"name" : "config",
"sizeOnDisk" : 98304,
"empty" : false
},
{
"name" : "local",
"sizeOnDisk" : 73728,
"empty" : false
},
{
"name" : "sample",
"sizeOnDisk" : 1335296,
"empty" : false
},
{
"name" : "sampleDemo",
"sizeOnDisk" : 278528,
"empty" : false
},
{
"name" : "studentSearch",
"sizeOnDisk" : 262144,
"empty" : false
},
{
"name" : "test",
"sizeOnDisk" : 8724480,
"empty" : false
}
]
Following is the query to get all the collection names from all the databases
> allDatabaseName.forEach(function(databaseName)
... {
... db = db.getSiblingDB(databaseName.name);
... collectionName = db.getCollectionNames();
... collectionName.forEach(function(collectionName)
... {
... print(collectionName);
... });
... });
This will produce the following output
clearingItemsInNestedArrayDemo
customIdDemo
deleteRecordDemo
documentExistsOrNotDemo
findAllExceptFromOneOrtwoDemo
mongoExportDemo
startup_log
arraySizeErrorDemo
basicInformationDemo
copyThisCollectionToSampleDatabaseDemo
deleteAllRecordsDemo
deleteDocuments
deleteDocumentsDemo
deleteSomeInformation
documentWithAParticularFieldValueDemo
employee
findListOfIdsDemo
findSubstring
getAllRecordsFromSourceCollectionDemo
getElementWithMaxIdDemo
internalArraySizeDemo
largestDocumentDemo
makingStudentInformationClone
oppositeAddToSetDemo
prettyDemo
returnOnlyUniqueValuesDemo
selectWhereInDemo
sourceCollection
studentInformation
sumOfValueDemo
truncateDemo
updateInformation
userInformation
copyThisCollectionToSampleDatabaseDemo
deleteDocuments
deleteDocumentsDemo
deleteInformation
employee
internalArraySizeDemo
prettyDemo
sourceCollection
updateInformation
userInformation
col1
col2
indexingForArrayElementDemo
removeObjectFromArrayDemo
specifyAKeyDemo
useVariableDemo
ConvertStringToDateDemo
Employee_Information
IdUpdateDemo
IndexingDemo
NotAndDemo
ProductsInformation
addCurrentDateTimeDemo
addFieldDemo
addNewFieldToEveryDocument
aggregateSumDemo
aggregationFrameworkWithOrMatchDemo
aggregationSortDemo
andOrDemo
arrayInnerElementsDemo
arrayLengthGreaterThanOne
arrayOfArraysDemo
avoidDuplicateEntriesDemo
caseInsensitiveDemo
caseInsesitiveDemo
castingDemo
changeDataType
changeType
charactersAllowedDemo
charactersDemo
checkFieldContainsStringDemo
checkFieldExistsOrNotDemo
checkSequenceDemo
collectionOnDifferentDocumentDemo
combinationOfArrayDemo
comparingTwoFieldsDemo
concatStringAndIntDemo
conditionalSumDemo
convertStringToNumberDemo
copyThisCollectionToSampleDatabaseDemo
countDemo
countPerformanceDemo
createSequenceDemo
creatingUniqueIndexDemo
dateDemo
deleteAllElementsInArrayDemo
deleteRecordDemo
demo.insertCollection
distinctAggregation
distinctCountValuesDemo
distinctRecordDemo
distinctWithMultipleKeysDemo
doubleNestedArrayDemo
embeddedCollectionDemo
employeeInformation
equivalentForSelectColumn1Column2Demo
fieldIsNullOrNotSetDemo
filterArray
findAllDuplicateKeyDocumentDemo
findAllNonDistinctDemo
findByMultipleArrayDemo
findDocumentDoNotHaveCertainFields
findDocumentNonExistenceFieldDemo
findDocumentWithObjectIdDemo
findDuplicateByKeyDemo
findDuplicateRecordsDemo
findMinValueDemo
findSpecificValue
findValueInArrayWithMultipleCriteriaDemo
firstDocumentDemo
firstItemInArrayToNewFieldDemo
getDistinctListOfSubDocumentFieldDemo
getFirstItemDemo
getIndexSizeDemo
getLastNRecordsDemo
getLastXRecordsDemo
getNThElementDemo
getParticularElementFromArrayDemo
getPartuclarElement
getSizeDemo
getSizeOfArray
gettingHighestValueDemo
groupByDateDemo
hideidDemo
identifyLastDocuementDemo
incrementValueDemo
incrementValueInNestedArrayDemo
indexDemo
indexOptimizationDemo
indexTimeDemo
index_Demo
indexingDemo
insertDemo
insertFieldWithCurrentDateDemo
insertIfNotExistsDemo
insertIntegerDemo
insertOneRecordDemo
listAllValuesOfCeratinFieldsDemo
matchBetweenFieldsDemo
mongoExportDemo
multipleOrDemo
my-collection
nestedArrayDemo
nestedIndexDemo
nestedObjectDemo
new_Collection
notLikeOpeartorDemo
numberofKeysInADocumentDemo
objectInAnArrayDemo
objectidToStringDemo
orConditionDemo
orDemo
orderDocsDemo
paginationDemo
performRegex
priceStoredAsStringDemo
priceStoredDemo
queryArrayElementsDemo
queryByKeyDemo
queryBySubFieldDemo
queryForBooleanFieldsDemo
queryInSameDocumentsDemo
queryToEmbeddedDocument
queryingMongoDbCaseInsensitiveDemo
regExpOnIntegerDemo
regexSearchDemo
removeArrayDemo
removeArrayElement
removeArrayElementByItsIndexDemo
removeArrayElements
removeDocumentOnBasisOfId
removeDuplicateDocumentDemo
removeDuplicateDocuments
removeElementFromDoublyNestedArrayDemo
removeFieldCompletlyDemo
removeMultipleDocumentsDemo
removeObject
removingidElementDemo
renameFieldDemo
retrieveValueFromAKeyDemo
retunFieldInFindDemo
returnQueryFromDate
reverseRegexDemo
s
searchArrayDemo
searchDocumentDemo
searchDocumentWithSpecialCharactersDemo
searchMultipleFieldsDemo
secondDocumentDemo
selectInWhereIdDemo
selectMongoDBDocumentsWithSomeCondition
selectRecordsHavingKeyDemo
selectSingleFieldDemo
singleFieldDemo
sortDemo
sortInnerArrayDemo
sortingDemo
sourceCollection
sqlLikeDemo
stringFieldLengthDemo
stringToObjectIdDemo
test.js
translateDefinitionDemo
unconditionalUpdatesDemo
uniqueIndexOnArrayDemo
unprettyJsonDemo
unwindOperatorDemo
updateDemo
updateExactField
updateIdDemo
updateManyDocumentsDemo
updateNestedValueDemo
updateObjects
updatingEmbeddedDocumentPropertyDemo
userStatus
|
[
{
"code": null,
"e": 1174,
"s": 1062,
"text": "To get all the collections from all the databases, let us first get all the databases using the following query"
},
{
"code": null,
"e": 1316,
"s": 1174,
"text": "> switchDatabaseAdmin = db.getSiblingDB(\"admin\");\nadmin\n> allDatabaseName = switchDatabaseAdmin.runCommand({ \"listDatabases\": 1 }).databases;"
},
{
"code": null,
"e": 1355,
"s": 1316,
"text": "This will produce the following output"
},
{
"code": null,
"e": 1974,
"s": 1355,
"text": "[\n {\n \"name\" : \"admin\",\n \"sizeOnDisk\" : 495616,\n \"empty\" : false\n },\n {\n \"name\" : \"config\",\n \"sizeOnDisk\" : 98304,\n \"empty\" : false\n },\n {\n \"name\" : \"local\",\n \"sizeOnDisk\" : 73728,\n \"empty\" : false\n },\n {\n \"name\" : \"sample\",\n \"sizeOnDisk\" : 1335296,\n \"empty\" : false\n },\n {\n \"name\" : \"sampleDemo\",\n \"sizeOnDisk\" : 278528,\n \"empty\" : false\n },\n {\n \"name\" : \"studentSearch\",\n \"sizeOnDisk\" : 262144,\n \"empty\" : false\n },\n {\n \"name\" : \"test\",\n \"sizeOnDisk\" : 8724480,\n \"empty\" : false\n }\n]"
},
{
"code": null,
"e": 2044,
"s": 1974,
"text": "Following is the query to get all the collection names from databases"
},
{
"code": null,
"e": 2312,
"s": 2044,
"text": "> allDatabaseName.forEach(function(databaseName)\n... {\n... db = db.getSiblingDB(databaseName.name);\n... collectionName = db.getCollectionNames();\n... collectionName.forEach(function(collectionName)\n... {\n... print(collectionName);\n... });\n... });"
},
{
"code": null,
"e": 2351,
"s": 2312,
"text": "This will produce the following output"
},
{
"code": null,
"e": 6860,
"s": 2351,
"text": "clearingItemsInNestedArrayDemo\ncustomIdDemo\ndeleteRecordDemo\ndocumentExistsOrNotDemo\nfindAllExceptFromOneOrtwoDemo\nmongoExportDemo\nstartup_log\narraySizeErrorDemo\nbasicInformationDemo\ncopyThisCollectionToSampleDatabaseDemo\ndeleteAllRecordsDemo\ndeleteDocuments\ndeleteDocumentsDemo\ndeleteSomeInformation\ndocumentWithAParticularFieldValueDemo\nemployee\nfindListOfIdsDemo\nfindSubstring\ngetAllRecordsFromSourceCollectionDemo\ngetElementWithMaxIdDemo\ninternalArraySizeDemo\nlargestDocumentDemo\nmakingStudentInformationClone\noppositeAddToSetDemo\nprettyDemo\nreturnOnlyUniqueValuesDemo\nselectWhereInDemo\nsourceCollection\nstudentInformation\nsumOfValueDemo\ntruncateDemo\nupdateInformation\nuserInformation\ncopyThisCollectionToSampleDatabaseDemo\ndeleteDocuments\ndeleteDocumentsDemo\ndeleteInformation\nemployee\ninternalArraySizeDemo\nprettyDemo\nsourceCollection\nupdateInformation\nuserInformation\ncol1\ncol2\nindexingForArrayElementDemo\nremoveObjectFromArrayDemo\nspecifyAKeyDemo\nuseVariableDemo\nConvertStringToDateDemo\nEmployee_Information\nIdUpdateDemo\nIndexingDemo\nNotAndDemo\nProductsInformation\naddCurrentDateTimeDemo\naddFieldDemo\naddNewFieldToEveryDocument\naggregateSumDemo\naggregationFrameworkWithOrMatchDemo\naggregationSortDemo\nandOrDemo\narrayInnerElementsDemo\narrayLengthGreaterThanOne\narrayOfArraysDemo\navoidDuplicateEntriesDemo\ncaseInsensitiveDemo\ncaseInsesitiveDemo\ncastingDemo\nchangeDataType\nchangeType\ncharactersAllowedDemo\ncharactersDemo\ncheckFieldContainsStringDemo\ncheckFieldExistsOrNotDemo\ncheckSequenceDemo\ncollectionOnDifferentDocumentDemo\ncombinationOfArrayDemo\ncomparingTwoFieldsDemo\nconcatStringAndIntDemo\nconditionalSumDemo\nconvertStringToNumberDemo\ncopyThisCollectionToSampleDatabaseDemo\ncountDemo\ncountPerformanceDemo\ncreateSequenceDemo\ncreatingUniqueIndexDemo\ndateDemo\ndeleteAllElementsInArrayDemo\ndeleteRecordDemo\ndemo.insertCollection\ndistinctAggregation\ndistinctCountValuesDemo\ndistinctRecordDemo\ndistinctWithMultipleKeysDemo\ndoubleNestedArrayDemo\nembeddedCollectionDemo\nemployeeInformation\nequivalentForSelectColumn1Column2Demo\nfieldIsNullOrNotSetDemo\nfilterArray\nfindAllDuplicateKeyDocumentDemo\nfindAllNonDistinctDemo\nfindByMultipleArrayDemo\nfindDocumentDoNotHaveCertainFields\nfindDocumentNonExistenceFieldDemo\nfindDocumentWithObjectIdDemo\nfindDuplicateByKeyDemo\nfindDuplicateRecordsDemo\nfindMinValueDemo\nfindSpecificValue\nfindValueInArrayWithMultipleCriteriaDemo\nfirstDocumentDemo\nfirstItemInArrayToNewFieldDemo\ngetDistinctListOfSubDocumentFieldDemo\ngetFirstItemDemo\ngetIndexSizeDemo\ngetLastNRecordsDemo\ngetLastXRecordsDemo\ngetNThElementDemo\ngetParticularElementFromArrayDemo\ngetPartuclarElement\ngetSizeDemo\ngetSizeOfArray\ngettingHighestValueDemo\ngroupByDateDemo\nhideidDemo\nidentifyLastDocuementDemo\nincrementValueDemo\nincrementValueInNestedArrayDemo\nindexDemo\nindexOptimizationDemo\nindexTimeDemo\nindex_Demo\nindexingDemo\ninsertDemo\ninsertFieldWithCurrentDateDemo\ninsertIfNotExistsDemo\ninsertIntegerDemo\ninsertOneRecordDemo\nlistAllValuesOfCeratinFieldsDemo\nmatchBetweenFieldsDemo\nmongoExportDemo\nmultipleOrDemo\nmy-collection\nnestedArrayDemo\nnestedIndexDemo\nnestedObjectDemo\nnew_Collection\nnotLikeOpeartorDemo\nnumberofKeysInADocumentDemo\nobjectInAnArrayDemo\nobjectidToStringDemo\norConditionDemo\norDemo\norderDocsDemo\npaginationDemo\nperformRegex\npriceStoredAsStringDemo\npriceStoredDemo\nqueryArrayElementsDemo\nqueryByKeyDemo\nqueryBySubFieldDemo\nqueryForBooleanFieldsDemo\nqueryInSameDocumentsDemo\nqueryToEmbeddedDocument\nqueryingMongoDbCaseInsensitiveDemo\nregExpOnIntegerDemo\nregexSearchDemo\nremoveArrayDemo\nremoveArrayElement\nremoveArrayElementByItsIndexDemo\nremoveArrayElements\nremoveDocumentOnBasisOfId\nremoveDuplicateDocumentDemo\nremoveDuplicateDocuments\nremoveElementFromDoublyNestedArrayDemo\nremoveFieldCompletlyDemo\nremoveMultipleDocumentsDemo\nremoveObject\nremovingidElementDemo\nrenameFieldDemo\nretrieveValueFromAKeyDemo\nretunFieldInFindDemo\nreturnQueryFromDate\nreverseRegexDemo\ns\nsearchArrayDemo\nsearchDocumentDemo\nsearchDocumentWithSpecialCharactersDemo\nsearchMultipleFieldsDemo\nsecondDocumentDemo\nselectInWhereIdDemo\nselectMongoDBDocumentsWithSomeCondition\nselectRecordsHavingKeyDemo\nselectSingleFieldDemo\nsingleFieldDemo\nsortDemo\nsortInnerArrayDemo\nsortingDemo\nsourceCollection\nsqlLikeDemo\nstringFieldLengthDemo\nstringToObjectIdDemo\ntest.js\ntranslateDefinitionDemo\nunconditionalUpdatesDemo\nuniqueIndexOnArrayDemo\nunprettyJsonDemo\nunwindOperatorDemo\nupdateDemo\nupdateExactField\nupdateIdDemo\nupdateManyDocumentsDemo\nupdateNestedValueDemo\nupdateObjects\nupdatingEmbeddedDocumentPropertyDemo\nuserStatus"
}
] |
Multiple COUNT() for multiple conditions in a single MySQL query?
|
You can compute multiple COUNT() results for multiple conditions in a single MySQL query using GROUP BY.
The syntax is as follows -
SELECT yourColumnName,COUNT(*) from yourTableName group by yourColumnName;
To understand the above syntax, let us first create a table. The query to create a table is as follows.
mysql> create table MultipleCountDemo
-> (
-> Id int,
-> Name varchar(100),
-> Age int
-> );
Query OK, 0 rows affected (2.17 sec)
Insert records in the table using insert command. The query is as follows.
mysql> insert into MultipleCountDemo values(1,'Carol',21);
Query OK, 1 row affected (0.27 sec)
mysql> insert into MultipleCountDemo values(2,'Sam',21);
Query OK, 1 row affected (0.29 sec)
mysql> insert into MultipleCountDemo values(3,'Bob',22);
Query OK, 1 row affected (0.10 sec)
mysql> insert into MultipleCountDemo values(4,'John',23);
Query OK, 1 row affected (0.24 sec)
mysql> insert into MultipleCountDemo values(5,'David',22);
Query OK, 1 row affected (0.12 sec)
mysql> insert into MultipleCountDemo values(6,'Adam',22);
Query OK, 1 row affected (0.21 sec)
mysql> insert into MultipleCountDemo values(7,'Johnson',23);
Query OK, 1 row affected (0.14 sec)
mysql> insert into MultipleCountDemo values(8,'Elizabeth',23);
Query OK, 1 row affected (0.25 sec)
Display all records from the table using the SELECT statement. The query is as follows -
mysql> select *from MultipleCountDemo;
The following is the output.
+------+-----------+------+
| Id | Name | Age |
+------+-----------+------+
| 1 | Carol | 21 |
| 2 | Sam | 21 |
| 3 | Bob | 22 |
| 4 | John | 23 |
| 5 | David | 22 |
| 6 | Adam | 22 |
| 7 | Johnson | 23 |
| 8 | Elizabeth | 23 |
+------+-----------+------+
8 rows in set (0.00 sec)
Now here is the query that returns multiple counts for multiple conditions in a single statement.
mysql> select Age,count(*)as AllSingleCount from MultipleCountDemo group by Age;
The following is the output.
+------+----------------+
| Age | AllSingleCount |
+------+----------------+
| 21 | 2 |
| 22 | 3 |
| 23 | 3 |
+------+----------------+
3 rows in set (0.00 sec)
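As an alternative to GROUP BY, you can also compute several conditional counts as separate columns of a single result row using conditional aggregation. The sketch below uses Python's built-in sqlite3 module so it is self-contained and runnable; the SUM(CASE ... END) expressions themselves work unchanged in MySQL.

```python
import sqlite3

# In-memory database mirroring the MultipleCountDemo table above
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MultipleCountDemo (Id INT, Name TEXT, Age INT)")
rows = [(1, 'Carol', 21), (2, 'Sam', 21), (3, 'Bob', 22), (4, 'John', 23),
        (5, 'David', 22), (6, 'Adam', 22), (7, 'Johnson', 23), (8, 'Elizabeth', 23)]
con.executemany("INSERT INTO MultipleCountDemo VALUES (?, ?, ?)", rows)

# Conditional aggregation: one count per condition, all in a single row
query = """
SELECT SUM(CASE WHEN Age = 21 THEN 1 ELSE 0 END) AS Age21,
       SUM(CASE WHEN Age = 22 THEN 1 ELSE 0 END) AS Age22,
       SUM(CASE WHEN Age = 23 THEN 1 ELSE 0 END) AS Age23
FROM MultipleCountDemo
"""
age21, age22, age23 = con.execute(query).fetchone()
print(age21, age22, age23)  # 2 3 3
```

This form is useful when you want the counts side by side in one row rather than one row per group.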
|
[
{
"code": null,
"e": 1151,
"s": 1062,
"text": "You can count multiple COUNT() for multiple conditions in a single query using GROUP BY."
},
{
"code": null,
"e": 1178,
"s": 1151,
"text": "The syntax is as follows -"
},
{
"code": null,
"e": 1253,
"s": 1178,
"text": "SELECT yourColumnName,COUNT(*) from yourTableName group by yourColumnName;"
},
{
"code": null,
"e": 1357,
"s": 1253,
"text": "To understand the above syntax, let us first create a table. The query to create a table is as follows."
},
{
"code": null,
"e": 1487,
"s": 1357,
"text": "mysql> create table MultipleCountDemo\n-> (\n-> Id int,\n-> Name varchar(100),\n-> Age int\n-> );\nQuery OK, 0 rows affected (2.17 sec)"
},
{
"code": null,
"e": 1562,
"s": 1487,
"text": "Insert records in the table using insert command. The query is as follows."
},
{
"code": null,
"e": 2329,
"s": 1562,
"text": "mysql> insert into MultipleCountDemo values(1,'Carol',21);\nQuery OK, 1 row affected (0.27 sec)\n\nmysql> insert into MultipleCountDemo values(2,'Sam',21);\nQuery OK, 1 row affected (0.29 sec)\n\nmysql> insert into MultipleCountDemo values(3,'Bob',22);\nQuery OK, 1 row affected (0.10 sec)\n\nmysql> insert into MultipleCountDemo values(4,'John',23);\nQuery OK, 1 row affected (0.24 sec)\n\nmysql> insert into MultipleCountDemo values(5,'David',22);\nQuery OK, 1 row affected (0.12 sec)\n\nmysql> insert into MultipleCountDemo values(6,'Adam',22);\nQuery OK, 1 row affected (0.21 sec)\n\nmysql> insert into MultipleCountDemo values(7,'Johnson',23);\nQuery OK, 1 row affected (0.14 sec)\n\nmysql> insert into MultipleCountDemo values(8,'Elizabeth',23);\nQuery OK, 1 row affected (0.25 sec)"
},
{
"code": null,
"e": 2414,
"s": 2329,
"text": "Display all records from the table using select statement. The query is as follows -"
},
{
"code": null,
"e": 2453,
"s": 2414,
"text": "mysql> select *from MultipleCountDemo;"
},
{
"code": null,
"e": 2482,
"s": 2453,
"text": "The following is the output."
},
{
"code": null,
"e": 2763,
"s": 2482,
"text": "+------+-----------+------+\n| Id | Name | Age |\n+------+-----------+------+\n| 1 | Carol | 21 |\n| 2 | Sam | 21 |\n| 3 | Bob | 22 |\n| 4 | John | 23 |\n| 5 | David | 22 |\n| 6 | Adam | 22 |\n| 7 | Johnson | 23 |\n| 8 | Elizabeth | 23 |\n+------+-----------+------+\n8 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2849,
"s": 2763,
"text": "Now here is the query for multiple count() for multiple conditions in a single query."
},
{
"code": null,
"e": 2930,
"s": 2849,
"text": "mysql> select Age,count(*)as AllSingleCount from MultipleCountDemo group by Age;"
},
{
"code": null,
"e": 2959,
"s": 2930,
"text": "The following is the output."
},
{
"code": null,
"e": 3120,
"s": 2959,
"text": "+------+----------------+\n| Age | AllSingleCount |\n+------+----------------+\n| 21 | 2 |\n| 22 | 3 |\n| 23 | 3 |\n+------+----------------+\n3 rows in set (0.00 sec)"
}
] |
How to use SearchView in Android Kotlin?
|
This example demonstrates how to use SearchView in Android Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ListView
android:id="@+id/listView"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_below="@+id/searchView"
android:divider="#ad5"
android:dividerHeight="2dp" />
<SearchView
android:id="@+id/searchView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:iconifiedByDefault="false"
android:queryHint="Search Here" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.os.Bundle
import android.widget.ArrayAdapter
import android.widget.ListView
import android.widget.SearchView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
lateinit var searchView: SearchView
lateinit var listView: ListView
lateinit var list: ArrayList<String>
lateinit var adapter: ArrayAdapter<*>
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
searchView = findViewById(R.id.searchView)
listView = findViewById(R.id.listView)
list = ArrayList()
list.add("Apple")
list.add("Banana")
list.add("Pineapple")
list.add("Orange")
list.add("Mango")
list.add("Grapes")
list.add("Lemon")
list.add("Melon")
list.add("Watermelon")
list.add("Papaya")
adapter = ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, list)
listView.adapter = adapter
searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
override fun onQueryTextSubmit(query: String): Boolean {
if (list.contains(query)) {
adapter.filter.filter(query)
} else {
Toast.makeText(this@MainActivity, "No Match found", Toast.LENGTH_LONG).show()
}
return false
}
override fun onQueryTextChange(newText: String): Boolean {
adapter.filter.filter(newText)
return false
}
})
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run the application. I assume you have connected an actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon in the toolbar. Select your mobile device as the deployment target, and your mobile device will display the app's default screen.
|
[
{
"code": null,
"e": 1129,
"s": 1062,
"text": "This example demonstrates how to use SearchView in Android Kotlin."
},
{
"code": null,
"e": 1258,
"s": 1129,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1323,
"s": 1258,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2120,
"s": 1323,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <ListView\n android:id=\"@+id/listView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:layout_below=\"@+id/searchView\"\n android:divider=\"#ad5\"\n android:dividerHeight=\"2dp\" />\n <SearchView\n android:id=\"@+id/searchView\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentTop=\"true\"\n android:iconifiedByDefault=\"false\"\n android:queryHint=\"Search Here\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2175,
"s": 2120,
"text": "Step 3 − Add the following code to src/MainActivity.kt"
},
{
"code": null,
"e": 3791,
"s": 2175,
"text": "import android.os.Bundle\nimport android.widget.ArrayAdapter\nimport android.widget.ListView\nimport android.widget.SearchView\nimport android.widget.Toast\nimport androidx.appcompat.app.AppCompatActivity\nclass MainActivity : AppCompatActivity() {\n lateinit var searchView: SearchView\n lateinit var listView: ListView\n lateinit var list: ArrayList<String>\n lateinit var adapter: ArrayAdapter<*>\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n searchView = findViewById(R.id.searchView)\n listView = findViewById(R.id.listView)\n list = ArrayList()\n list.add(\"Apple\")\n list.add(\"Banana\")\n list.add(\"Pineapple\")\n list.add(\"Orange\")\n list.add(\"Mango\")\n list.add(\"Grapes\")\n list.add(\"Lemon\")\n list.add(\"Melon\")\n list.add(\"Watermelon\")\n list.add(\"Papaya\")\n adapter = ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, list)\n listView.adapter = adapter\n searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {\n override fun onQueryTextSubmit(query: String): Boolean {\n if (list.contains(query)) {\n adapter.filter.filter(query)\n } else {\n Toast.makeText(this@MainActivity, \"No Match found\", Toast.LENGTH_LONG).show()\n }\n return false\n }\n override fun onQueryTextChange(newText: String): Boolean {\n adapter.filter.filter(newText)\n return false\n }\n })\n }\n}"
},
{
"code": null,
"e": 3846,
"s": 3791,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 4517,
"s": 3846,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"com.example.q11\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 4865,
"s": 4517,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen"
}
] |
Maximum sum of hour glass in matrix in C++
|
In this problem, we are given a matrix. Our task is to create a program that finds the maximum sum of the hourglass in a matrix in C++.
Program description − Here, we will find the maximum sum of all hourglasses that can be created for the given matrix elements.
An hourglass is a 7-element shape formed in the matrix as follows,
X X X
  X
X X X
Let’s take an example to understand the problem,
Input −
array ={
{2 4 0 0}
{0 1 1 0}
{4 2 1 0}
{0 3 0 1}}
Output − 14
Explanation −
The hourglasses are:
2 4 0     0 1 1
  1         2
4 2 1     0 3 0

4 0 0     1 1 0
  1         1
2 1 0     3 0 1
So, an hourglass can be created using the following indexes,
matrix[i][j] matrix[i][j+1] matrix[i][j+2]
   matrix[i+1][j+1]
matrix[i+2][j] matrix[i+2][j+1] matrix[i+2][j+2]
We will find the sum of these seven elements for every starting position from [0][0] to [R-2][C-2], and then take the maximum over all the hourglasses created from the array elements.
Program to illustrate the working of our solution,
#include<iostream>
using namespace std;
const int row = 4;
const int col = 4;
int findHourGlassSum(int mat[row][col]){
if (row<3 || col<3)
return -1;
int maxSum = 0;
for (int i=0; i<row-2; i++){
for (int j=0; j<col-2; j++){
int hrSum = (mat[i][j]+mat[i][j+1]+mat[i][j+2])+ (mat[i+1][j+1])+ (mat[i+2][j]+mat[i+2][j+1]+mat[i+2][j+2]);
maxSum = max(maxSum, hrSum);
}
}
return maxSum;
}
int main() {
int mat[row][col] = {
{2, 4, 0, 0},
{0, 1, 1, 0},
{4, 2, 1, 0},
{0, 3, 0, 1}};
int maxSum = findHourGlassSum(mat);
if (maxSum == -1)
cout<<"Not possible";
else
cout<<"Maximum sum of hour glass created is "<<maxSum;
return 0;
}
Maximum sum of hour glass created is 14
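The same scan can be cross-checked with a few lines of Python; the logic (loop over every top-left position up to row-2 and col-2, sum the seven cells, keep the maximum) is identical to the C++ program above.

```python
def max_hourglass_sum(mat):
    rows, cols = len(mat), len(mat[0])
    if rows < 3 or cols < 3:
        return -1  # no hourglass fits in the matrix
    best = None
    for i in range(rows - 2):
        for j in range(cols - 2):
            s = (mat[i][j] + mat[i][j+1] + mat[i][j+2]          # top row
                 + mat[i+1][j+1]                                # middle cell
                 + mat[i+2][j] + mat[i+2][j+1] + mat[i+2][j+2]) # bottom row
            best = s if best is None else max(best, s)
    return best

mat = [[2, 4, 0, 0],
       [0, 1, 1, 0],
       [4, 2, 1, 0],
       [0, 3, 0, 1]]
print(max_hourglass_sum(mat))  # 14
```

One small design difference: the C++ version initializes maxSum to 0, which would give a wrong answer for a matrix of all negative numbers; this sketch starts from None so the first hourglass sum always becomes the initial maximum.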
|
[
{
"code": null,
"e": 1198,
"s": 1062,
"text": "In this problem, we are given a matrix. Our task is to create a program that finds the maximum sum of the hourglass in a matrix in C++."
},
{
"code": null,
"e": 1325,
"s": 1198,
"text": "Program description − Here, we will find the maximum sum of all hourglasses that can be created for the given matrix elements."
},
{
"code": null,
"e": 1399,
"s": 1325,
"text": "Hour glass is a 7 element shape made in the matrix in the following form,"
},
{
"code": null,
"e": 1415,
"s": 1399,
"text": "X X X\n X\nX X X"
},
{
"code": null,
"e": 1464,
"s": 1415,
"text": "Let’s take an example to understand the problem,"
},
{
"code": null,
"e": 1472,
"s": 1464,
"text": "Input −"
},
{
"code": null,
"e": 1534,
"s": 1472,
"text": "array ={\n {2 4 0 0}\n {0 1 1 0}\n {4 2 1 0}\n {0 3 0 1}}"
},
{
"code": null,
"e": 1543,
"s": 1534,
"text": "Output −"
},
{
"code": null,
"e": 1557,
"s": 1543,
"text": "Explanation −"
},
{
"code": null,
"e": 1744,
"s": 1557,
"text": "Hour glass are :\n2 4 0 0 1 1\n 1 2\n4 2 1 0 3 0\n4 0 0 1 1 0\n 1 1\n2 1 0 3 0 1"
},
{
"code": null,
"e": 1805,
"s": 1744,
"text": "So, an hourglass can be created using the following indexes,"
},
{
"code": null,
"e": 1937,
"s": 1805,
"text": "matrix[i][j] matrix[i][j+1] matrix[i][j+2]\n matrix[i+1][j+1]\nmatrix[i+2][j] matrix[i+2][j+1] matrix[i+2][j+2]"
},
{
"code": null,
"e": 2114,
"s": 1937,
"text": "We will find the sum of all these elements of the array from [0][0] to [R2][C-2] starting points. And the find the maxSum for all these hourglasses created from array elements."
},
{
"code": null,
"e": 2165,
"s": 2114,
"text": "Program to illustrate the working of our solution,"
},
{
"code": null,
"e": 2176,
"s": 2165,
"text": " Live Demo"
},
{
"code": null,
"e": 2919,
"s": 2176,
"text": "#include<iostream>\nusing namespace std;\nconst int row = 4;\nconst int col = 4;\nint findHourGlassSum(int mat[row][col]){\n if (row<3 || col<3)\n return -1;\n int maxSum = 0;\n for (int i=0; i<row-2; i++){\n for (int j=0; j<col-2; j++){\n int hrSum = (mat[i][j]+mat[i][j+1]+mat[i][j+2])+ (mat[i+1][j+1])+ (mat[i+2][j]+mat[i+2][j+1]+mat[i+2][j+2]);\n maxSum = max(maxSum, hrSum);\n }\n }\n return maxSum;\n}\nint main() {\n int mat[row][col] = {\n {2, 4, 0, 0},\n {0, 1, 1, 0},\n {4, 2, 1, 0},\n {0, 3, 0, 1}};\n int maxSum = findHourGlassSum(mat);\n if (maxSum == -1)\n cout<<\"Not possible\";\n else\n cout<<\"Maximum sum of hour glass created is \"<<maxSum;\n return 0;\n}"
},
{
"code": null,
"e": 2959,
"s": 2919,
"text": "Maximum sum of hour glass created is 14"
}
] |
Tutorial: Selfie Filters Using Deep Learning And OpenCV (Facial Landmarks Detection) | by Akshay L Chandra | Towards Data Science
|
This is a tutorial on how to build a python application that can put various sunglasses on a detected face (I am calling them ‘Selfie Filters’) by finding the Facial Keypoints (15 unique points). These keypoints mark important areas of the face — the eyes, corners of the mouth, the nose, etc.
OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. Employing Convolutional Neural Networks (CNN) in Keras along with OpenCV — I built a couple of selfie filters (very boring ones).
Facial key points can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.
You can access the full project code here:
github.com
The code is written in Python 3.6 and uses the OpenCV and Keras libraries.
Follow this medium post to install OpenCV and Keras in Python 3.
This dataset on Kaggle allows us to train a model to detect the facial keypoints given an image with a face.
It was provided by Dr. Yoshua Bengio of the University of Montreal.
Each datapoint in the dataset contains space-separated pixel values of the images in sequential order and the last 30 values of the datapoint represent 15 pairs of coordinates of the key points on the faces.
So we just have to train a CNN model to solve a classic deep learning regression problem. Check out the loss function list for regression problems here.
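For a keypoint regressor, the usual loss is mean squared error over the 30 target values (15 x/y pairs). As a quick numpy-only illustration of what that loss measures — the predictions here are made up purely for demonstration:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error averaged over all keypoint coordinates."""
    return float(np.mean((y_true - y_pred) ** 2))

# Toy example: 15 keypoints -> 30 coordinates per face, scaled to [-1, 1]
y_true = np.zeros(30)
y_pred = np.full(30, 0.1)   # every coordinate off by 0.1
print(mse(y_true, y_pred))  # ~0.01
```

Minimizing this quantity over the training images is exactly the regression problem the CNN solves.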
1.1 Define Model
We first define the model architecture in a separate python file for convenience — my_CNN_model.py
We write a method to create a network architecture.
1.2 Compile, Save, Fit And Load
We now write methods to compile, save, fit, and load the models.
1.3 Model Builder (Wrapper)
We now write a model builder that loads the data and calls the above methods.
We now create a new python file — shades.py in which we write code to read the webcam input, detect faces, use the CNN model we built, etc.
Once you start reading the input from the webcam, detect faces in the input using the Cascade Classifier object we initialized earlier.
Here, I used a blue bottle cap as a filter switch. To detect the blue cap we write code to find the blue contours in the image. We pass the blueLower and blueUpper HSV ranges we defined in Step 2.
Once we create the blueMask, that is supposed to find blue things in a video, we use OpenCV’s cv2.findContours() method to find the contours.
The above code checks if there are any contours (the blue bottle cap) in the frame; once found, it checks if the center of the contour is touching the ‘Next Filter’ button we created in this step. If touched, we change the filter (increment filterIndex by 1).
We use the face cascade we initialized earlier to locate faces in the frame. We then loop over each and every face to predict the facial keypoints.
Before we pass the detected face to the model, it is necessary that we normalize the input because normalized images were what our model was trained on, resize it to 96 x 96 images because that is what our model expects. The above code does the same.
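That preprocessing step can be sketched as below. In the real pipeline the detected crop is resized with cv2.resize; to keep this snippet dependency-free, I assume the face crop is already a 96 x 96 grayscale array, and the helper name prepare_face is just an illustrative choice, not from the original code.

```python
import numpy as np

def prepare_face(gray_face):
    """Scale pixel values to [0, 1] and add batch/channel axes so the
    array matches the model's expected (1, 96, 96, 1) input shape."""
    face = gray_face.astype(np.float32) / 255.0
    return face.reshape(1, 96, 96, 1)

# Dummy 96x96 grayscale crop standing in for a detected face
dummy = np.random.randint(0, 256, size=(96, 96), dtype=np.uint8)
batch = prepare_face(dummy)
print(batch.shape)  # (1, 96, 96, 1)
```

The model then maps this batch to 30 values, which are scaled back to pixel coordinates of the original crop.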
Once we detect facial key points, we can use them to do all sorts of cool things. For example, you can use the nose keypoint to add a mustache, lips key points to maybe add color to them, etc.
Here, we used 4 of the key points to configure the width and height of the shades on the faces. And, face_resized_color has shades and maps back to frame and face_resized_color2 has keypoints and maps back to frame2. We then display both of them.
In the end, we just clean stuff up.
You can access the full project code here:
github.com
1. Download The Data
Download the data from here and put it in a folder named ‘data’ in your project directory. Make sure the ‘data’ folder contains — training.csv and test.csv
2. Build The CNN Model
> python model_builder.py
This calls my_CNN_model.py so make sure you have that file in your project directory.
4. Run The Engine File
> python shades.py
5. Grab A Blue Bottle Cap
And have fun!
In this tutorial, we built a deep convolutional neural network model, trained on the facial keypoints data. We then used the model to predict facial key points for the faces detected in the input webcam data. We also created a switch using contours, which allows us to iterate through other filters with gestures. I encourage you to tweak the model architecture and see for yourself how it affects keypoints detection.
This data only had 15 key points, there are several other datasets out there that have over 30 keypoints labeled on the face.
Applications of Facial Keypoint Detection:
Facial feature detection improves face recognition
Male/Female Distinction
Facial Expression Distinction
Head pose estimation
Face Morphing
Virtual Makeover
Face Replacement
Hope this tutorial was fun. Thanks for reading it.
Live and let live!
|
[
{
"code": null,
"e": 466,
"s": 172,
"text": "This is a tutorial on how to build a python application that can put various sunglasses on a detected face (I am calling them ‘Selfie Filters’) by finding the Facial Keypoints (15 unique points). These keypoints mark important areas of the face — the eyes, corners of the mouth, the nose, etc."
},
{
"code": null,
"e": 717,
"s": 466,
"text": "OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. Employing Convolutional Neural Networks (CNN) in Keras along with OpenCV — I built a couple of selfie filters (very boring ones)."
},
{
"code": null,
"e": 902,
"s": 717,
"text": "Facial key points can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat."
},
{
"code": null,
"e": 945,
"s": 902,
"text": "You can access the full project code here:"
},
{
"code": null,
"e": 956,
"s": 945,
"text": "github.com"
},
{
"code": null,
"e": 1024,
"s": 956,
"text": "The code is in Python version 3.6, uses OpenCV and Keras libraries."
},
{
"code": null,
"e": 1089,
"s": 1024,
"text": "Follow this medium post to install OpenCV and Keras in Python 3."
},
{
"code": null,
"e": 1198,
"s": 1089,
"text": "This dataset on Kaggle allows us to train a model to detect the facial keypoints given an image with a face."
},
{
"code": null,
"e": 1266,
"s": 1198,
"text": "It was provided by Dr. Yoshua Bengio of the University of Montreal."
},
{
"code": null,
"e": 1474,
"s": 1266,
"text": "Each datapoint in the dataset contains space-separated pixel values of the images in sequential order and the last 30 values of the datapoint represent 15 pairs of coordinates of the key points on the faces."
},
{
"code": null,
"e": 1627,
"s": 1474,
"text": "So we just have to train a CNN model to solve a classic deep learning regression problem. Check out the loss function list for regression problems here."
},
{
"code": null,
"e": 1644,
"s": 1627,
"text": "1.1 Define Model"
},
{
"code": null,
"e": 1743,
"s": 1644,
"text": "We first define the model architecture in a separate python file for convenience — my_CNN_model.py"
},
{
"code": null,
"e": 1795,
"s": 1743,
"text": "We write a method to create a network architecture."
},
{
"code": null,
"e": 1827,
"s": 1795,
"text": "1.2 Compile, Save, Fit And Load"
},
{
"code": null,
"e": 1890,
"s": 1827,
"text": "We now write methods to compile, save, fit and load the models"
},
{
"code": null,
"e": 1918,
"s": 1890,
"text": "1.3 Model Builder (Wrapper)"
},
{
"code": null,
"e": 1996,
"s": 1918,
"text": "We now write a model builder that loads the data and calls the above methods."
},
{
"code": null,
"e": 2136,
"s": 1996,
"text": "We now create a new python file — shades.py in which we write code to read the webcam input, detect faces, use the CNN model we built, etc."
},
{
"code": null,
"e": 2272,
"s": 2136,
"text": "Once you start reading the input from the webcam, detect faces in the input using the Cascade Classifier object we initialized earlier."
},
{
"code": null,
"e": 2469,
"s": 2272,
"text": "Here, I used a blue bottle cap as a filter switch. To detect the blue cap we write code to find the blue contours in the image. We pass the blueLower and blueUpper HSV ranges we defined in Step 2."
},
{
"code": null,
"e": 2611,
"s": 2469,
"text": "Once we create the blueMask, that is supposed to find blue things in a video, we use OpenCV’s cv2.findContours() method to find the contours."
},
{
"code": null,
"e": 2865,
"s": 2611,
"text": "The above code checks if there are any contours (blue bottle cap) in the frame, once found it checks if the center of the contour in touching the ‘Next Filter’ button we created in this step. If touched, we change the filter (increment filterIndex by 1)"
},
{
"code": null,
"e": 3013,
"s": 2865,
"text": "We use the face cascade we initialized earlier to locate faces in the frame. We then loop over each and every face to predict the facial keypoints."
},
{
"code": null,
"e": 3264,
"s": 3013,
"text": "Before we pass the detected face to the model, it is necessary that we normalize the input because normalized images were what our model was trained on, resize it to 96 x 96 images because that is what our model expects. The above code does the same."
},
{
"code": null,
"e": 3457,
"s": 3264,
"text": "Once we detect facial key points, we can use them to do all sorts of cool things. For example, you can use the nose keypoint to add a mustache, lips key points to maybe add color to them, etc."
},
{
"code": null,
"e": 3704,
"s": 3457,
"text": "Here, we used 4 of the key points to configure the width and height of the shades on the faces. And, face_resized_color has shades and maps back to frame and face_resized_color2 has keypoints and maps back to frame2. We then display both of them."
},
{
"code": null,
"e": 3740,
"s": 3704,
"text": "In the end, we just clean stuff up."
},
{
"code": null,
"e": 3783,
"s": 3740,
"text": "You can access the full project code here:"
},
{
"code": null,
"e": 3794,
"s": 3783,
"text": "github.com"
},
{
"code": null,
"e": 3815,
"s": 3794,
"text": "1. Download The Data"
},
{
"code": null,
"e": 3971,
"s": 3815,
"text": "Download the data from here and put it in a folder named ‘data’ in your project directory. Make sure the ‘data’ folder contains — training.csv and test.csv"
},
{
"code": null,
"e": 3994,
"s": 3971,
"text": "2. Build The CNN Model"
},
{
"code": null,
"e": 4020,
"s": 3994,
"text": "> python model_builder.py"
},
{
"code": null,
"e": 4106,
"s": 4020,
"text": "This calls my_CNN_model.py so make sure you have that file in your project directory."
},
{
"code": null,
"e": 4129,
"s": 4106,
"text": "4. Run The Engine File"
},
{
"code": null,
"e": 4148,
"s": 4129,
"text": "> python shades.py"
},
{
"code": null,
"e": 4174,
"s": 4148,
"text": "5. Grab A Blue Bottle Cap"
},
{
"code": null,
"e": 4188,
"s": 4174,
"text": "And have fun!"
},
{
"code": null,
"e": 4607,
"s": 4188,
"text": "In this tutorial, we built a deep convolutional neural network model, trained on the facial keypoints data. We then used the model to predict facial key points for the faces detected in the input webcam data. We also created a switch using contours, which allows us to iterate through other filters with gestures. I encourage you to tweak the model architecture and see for yourself how it affects keypoints detection."
},
{
"code": null,
"e": 4733,
"s": 4607,
"text": "This data only had 15 key points, there are several other datasets out there that have over 30 keypoints labeled on the face."
},
{
"code": null,
"e": 4776,
"s": 4733,
"text": "Applications of Facial Keypoint Detection:"
},
{
"code": null,
"e": 4827,
"s": 4776,
"text": "Facial feature detection improves face recognition"
},
{
"code": null,
"e": 4851,
"s": 4827,
"text": "Male/Female Distinction"
},
{
"code": null,
"e": 4881,
"s": 4851,
"text": "Facial Expression Distinction"
},
{
"code": null,
"e": 4902,
"s": 4881,
"text": "Head pose estimation"
},
{
"code": null,
"e": 4916,
"s": 4902,
"text": "Face Morphing"
},
{
"code": null,
"e": 4933,
"s": 4916,
"text": "Virtual Makeover"
},
{
"code": null,
"e": 4950,
"s": 4933,
"text": "Face Replacement"
},
{
"code": null,
"e": 5001,
"s": 4950,
"text": "Hope this tutorial was fun. Thanks for reading it."
}
] |
Ethical Hacking - Fingerprinting
|
The term OS fingerprinting in Ethical Hacking refers to any method used to determine what operating system is running on a remote computer. This could be −
Active Fingerprinting − Active fingerprinting is accomplished by sending specially crafted packets to a target machine and then noting down its response and analyzing the gathered information to determine the target OS. In the following section, we have given an example to explain how you can use NMAP tool to detect the OS of a target domain.
Passive Fingerprinting − Passive fingerprinting is based on sniffer traces from the remote system. Based on the sniffer traces (such as Wireshark) of the packets, you can determine the operating system of the remote host.
We have the following four important elements that we will look at to determine the operating system −
TTL − What the operating system sets the Time-To-Live on the outbound packet.
Window Size − What the operating system sets the Window Size at.
DF − Does the operating system set the Don't Fragment bit.
TOS − Does the operating system set the Type of Service, and if so, at what.
By analyzing these factors of a packet, you may be able to determine the remote operating system. This system is not 100% accurate, and works better for some operating systems than others.
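A toy passive-fingerprinting lookup built on just the first factor (the initial TTL) might look like the sketch below. The table uses the commonly cited default initial TTLs (64 for Linux/Unix/macOS, 128 for Windows, 255 for many network devices); real tools such as p0f combine all four fields plus many more signatures, so treat this purely as an illustration.

```python
# Commonly cited default initial TTL values per OS family.
DEFAULT_TTLS = {
    64: "Linux/Unix/macOS",
    128: "Windows",
    255: "Network device (e.g. router)",
}

def guess_os(observed_ttl):
    """Round the observed TTL up to the nearest known initial value.
    Each router hop decrements TTL by 1, so a packet that started at
    64 typically arrives with a value slightly below 64."""
    for initial in sorted(DEFAULT_TTLS):
        if observed_ttl <= initial:
            return DEFAULT_TTLS[initial]
    return "Unknown"

print(guess_os(58))   # Linux/Unix/macOS (started at 64, a few hops away)
print(guess_os(117))  # Windows (started at 128)
```

Because the guess is only a heuristic, it can be fooled by hosts that change their default TTL — one more reason the technique is "not 100% accurate".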
Before attacking a system, it is required that you know what operating system is hosting a website. Once a target OS is known, then it becomes easy to determine which vulnerabilities might be present to exploit the target system.
Below is a simple nmap command which can be used to identify the operating system serving a website and all the opened ports associated with the domain name, i.e., the IP address.
$nmap -O -v tutorialspoint.com
It will show you the following sensitive information about the given domain name or IP address −
Starting Nmap 5.51 ( http://nmap.org ) at 2015-10-04 09:57 CDT
Initiating Parallel DNS resolution of 1 host. at 09:57
Completed Parallel DNS resolution of 1 host. at 09:57, 0.00s elapsed
Initiating SYN Stealth Scan at 09:57
Scanning tutorialspoint.com (66.135.33.172) [1000 ports]
Discovered open port 22/tcp on 66.135.33.172
Discovered open port 3306/tcp on 66.135.33.172
Discovered open port 80/tcp on 66.135.33.172
Discovered open port 443/tcp on 66.135.33.172
Completed SYN Stealth Scan at 09:57, 0.04s elapsed (1000 total ports)
Initiating OS detection (try #1) against tutorialspoint.com (66.135.33.172)
Retrying OS detection (try #2) against tutorialspoint.com (66.135.33.172)
Retrying OS detection (try #3) against tutorialspoint.com (66.135.33.172)
Retrying OS detection (try #4) against tutorialspoint.com (66.135.33.172)
Retrying OS detection (try #5) against tutorialspoint.com (66.135.33.172)
Nmap scan report for tutorialspoint.com (66.135.33.172)
Host is up (0.000038s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
3306/tcp open mysql
TCP/IP fingerprint:
OS:SCAN(V=5.51%D=10/4%OT=22%CT=1%CU=40379%PV=N%DS=0%DC=L%G=Y%TM=56113E6D%P=
OS:x86_64-redhat-linux-gnu)SEQ(SP=106%GCD=1%ISR=109%TI=Z%CI=Z%II=I%TS=A)OPS
OS:(O1=MFFD7ST11NW7%O2=MFFD7ST11NW7%O3=MFFD7NNT11NW7%O4=MFFD7ST11NW7%O5=MFF
OS:D7ST11NW7%O6=MFFD7ST11)WIN(W1=FFCB%W2=FFCB%W3=FFCB%W4=FFCB%W5=FFCB%W6=FF
OS:CB)ECN(R=Y%DF=Y%T=40%W=FFD7%O=MFFD7NNSNW7%CC=Y%Q=)T1(R=Y%DF=Y%T=40%S=O%A
OS:=S+%F=AS%RD=0%Q=)T2(R=N)T3(R=N)T4(R=Y%DF=Y%T=40%W=0%S=A%A=Z%F=R%O=%RD=0%
OS:Q=)T5(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)T6(R=Y%DF=Y%T=40%W=0%S=
OS:A%A=Z%F=R%O=%RD=0%Q=)T7(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)U1(R=
OS:Y%DF=N%T=40%IPL=164%UN=0%RIPL=G%RID=G%RIPCK=G%RUCK=G%RUD=G)IE(R=Y%DFI=N%
OS:T=40%CD=S)
If you do not have the nmap command installed on your Linux system, then you can install it using the following yum command −
$yum install nmap
You can go through the nmap command in detail to check and understand the different features associated with a system and secure it against malicious attacks.
You can hide your main system behind a secure proxy server or a VPN so that your identity stays protected and your main system remains safe.
We have just seen the information given by the nmap command, which lists all the open ports on a given server.
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
3306/tcp open mysql
You can also check whether a particular port is open using the following command −
$nmap -sT -p 443 tutorialspoint.com
It will produce the following result −
Starting Nmap 5.51 ( http://nmap.org ) at 2015-10-04 10:19 CDT
Nmap scan report for tutorialspoint.com (66.135.33.172)
Host is up (0.000067s latency).
PORT STATE SERVICE
443/tcp open https
Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
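The same single-port check can be sketched in Python using only the standard socket module. This is a plain TCP connect test, not nmap's SYN scan, and the commented-out host is only a placeholder:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection performs the full TCP handshake,
        # then the `with` block closes the socket again.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable -> treat the port as closed.
        return False

# Example (hypothetical target, requires network access):
# print(is_port_open("tutorialspoint.com", 443))
```

A connect scan like this is noisier than nmap's stealth scan, since it completes the handshake and is therefore more likely to appear in the target's logs.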
Once a hacker knows about open ports, then he can plan different attack techniques through the open ports.
It is always recommended to check and close all the unwanted ports to safeguard the system from malicious attacks.
A ping sweep is a network scanning technique that you can use to determine which IP addresses in a range of addresses map to live hosts. A ping sweep is also known as an ICMP sweep.
You can use the fping command for a ping sweep. fping is a ping-like program which uses Internet Control Message Protocol (ICMP) echo requests to determine if a host is up.
fping is different from ping in that you can specify any number of hosts on the command line, or specify a file containing the lists of hosts to ping. If a host does not respond within a certain time limit and/or retry limit, it will be considered unreachable.
To disable ping sweeps on a network, you can block ICMP ECHO requests from outside sources. This can be done using the following command, which will create a firewall rule in iptables.
$iptables -A OUTPUT -p icmp --icmp-type echo-request -j DROP
The Domain Name System (DNS) is like a map or an address book. In fact, it is like a distributed database used to translate a name such as www.example.com into an IP address such as 192.111.1.120, and vice versa.
DNS enumeration is the process of locating all the DNS servers and their corresponding records for an organization. The idea is to gather as much interesting details as possible about your target before initiating an attack.
You can use nslookup command available on Linux to get DNS and host-related information. In addition, you can use the following DNSenum script to get detailed information about a domain −
DNSenum.pl
DNSenum script can perform the following important operations −
Get the host's addresses
Get the nameservers
Get the MX record
Perform axfr queries on nameservers
Get extra names and subdomains via Google scraping
Brute force subdomains from file can also perform recursion on subdomain that has NS records
Calculate C class domain network ranges and perform whois queries on them
Perform reverse lookups on netranges
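The most basic of these lookups, resolving names to addresses and back, can also be scripted with Python's standard library. This is only a sketch of what nslookup does for A and PTR records, and example.com is a placeholder:

```python
import socket

def forward_lookup(name: str) -> list[str]:
    """Resolve a hostname to its IPv4 addresses (like `nslookup name`)."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

def reverse_lookup(ip: str) -> str:
    """Resolve an IP address back to a hostname (like `nslookup ip`)."""
    hostname, _aliases, _ips = socket.gethostbyaddr(ip)
    return hostname

# Example (requires network access):
# print(forward_lookup("example.com"))
```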
There is no quick fix for DNS enumeration, and a full treatment is beyond the scope of this tutorial; preventing DNS enumeration is a big challenge.
If your DNS is not configured in a secure way, a lot of sensitive information about the network and organization can leak out, and an untrusted Internet user can perform a DNS zone transfer.
|
[
{
"code": null,
"e": 2635,
"s": 2479,
"text": "The term OS fingerprinting in Ethical Hacking refers to any method used to determine what operating system is running on a remote computer. This could be −"
},
{
"code": null,
"e": 2980,
"s": 2635,
"text": "Active Fingerprinting − Active fingerprinting is accomplished by sending specially crafted packets to a target machine and then noting down its response and analyzing the gathered information to determine the target OS. In the following section, we have given an example to explain how you can use NMAP tool to detect the OS of a target domain."
},
{
"code": null,
"e": 3325,
"s": 2980,
"text": "Active Fingerprinting − Active fingerprinting is accomplished by sending specially crafted packets to a target machine and then noting down its response and analyzing the gathered information to determine the target OS. In the following section, we have given an example to explain how you can use NMAP tool to detect the OS of a target domain."
},
{
"code": null,
"e": 3547,
"s": 3325,
"text": "Passive Fingerprinting − Passive fingerprinting is based on sniffer traces from the remote system. Based on the sniffer traces (such as Wireshark) of the packets, you can determine the operating system of the remote host."
},
{
"code": null,
"e": 3769,
"s": 3547,
"text": "Passive Fingerprinting − Passive fingerprinting is based on sniffer traces from the remote system. Based on the sniffer traces (such as Wireshark) of the packets, you can determine the operating system of the remote host."
},
{
"code": null,
"e": 3872,
"s": 3769,
"text": "We have the following four important elements that we will look at to determine the operating system −"
},
{
"code": null,
"e": 3950,
"s": 3872,
"text": "TTL − What the operating system sets the Time-To-Live on the outbound packet."
},
{
"code": null,
"e": 4028,
"s": 3950,
"text": "TTL − What the operating system sets the Time-To-Live on the outbound packet."
},
{
"code": null,
"e": 4093,
"s": 4028,
"text": "Window Size − What the operating system sets the Window Size at."
},
{
"code": null,
"e": 4158,
"s": 4093,
"text": "Window Size − What the operating system sets the Window Size at."
},
{
"code": null,
"e": 4217,
"s": 4158,
"text": "DF − Does the operating system set the Don't Fragment bit."
},
{
"code": null,
"e": 4276,
"s": 4217,
"text": "DF − Does the operating system set the Don't Fragment bit."
},
{
"code": null,
"e": 4353,
"s": 4276,
"text": "TOS − Does the operating system set the Type of Service, and if so, at what."
},
{
"code": null,
"e": 4430,
"s": 4353,
"text": "TOS − Does the operating system set the Type of Service, and if so, at what."
},
{
"code": null,
"e": 4619,
"s": 4430,
"text": "By analyzing these factors of a packet, you may be able to determine the remote operating system. This system is not 100% accurate, and works better for some operating systems than others."
},
{
"code": null,
"e": 4849,
"s": 4619,
"text": "Before attacking a system, it is required that you know what operating system is hosting a website. Once a target OS is known, then it becomes easy to determine which vulnerabilities might be present to exploit the target system."
},
{
"code": null,
"e": 5029,
"s": 4849,
"text": "Below is a simple nmap command which can be used to identify the operating system serving a website and all the opened ports associated with the domain name, i.e., the IP address."
},
{
"code": null,
"e": 5062,
"s": 5029,
"text": "$nmap -O -v tutorialspoint.com \n"
},
{
"code": null,
"e": 5159,
"s": 5062,
"text": "It will show you the following sensitive information about the given domain name or IP address −"
},
{
"code": null,
"e": 7038,
"s": 5159,
"text": "Starting Nmap 5.51 ( http://nmap.org ) at 2015-10-04 09:57 CDT \nInitiating Parallel DNS resolution of 1 host. at 09:57 \nCompleted Parallel DNS resolution of 1 host. at 09:57, 0.00s elapsed \nInitiating SYN Stealth Scan at 09:57\nScanning tutorialspoint.com (66.135.33.172) [1000 ports] \nDiscovered open port 22/tcp on 66.135.33.172 \nDiscovered open port 3306/tcp on 66.135.33.172 \nDiscovered open port 80/tcp on 66.135.33.172 \nDiscovered open port 443/tcp on 66.135.33.172 \nCompleted SYN Stealth Scan at 09:57, 0.04s elapsed (1000 total ports) \nInitiating OS detection (try #1) against tutorialspoint.com (66.135.33.172) \nRetrying OS detection (try #2) against tutorialspoint.com (66.135.33.172) \nRetrying OS detection (try #3) against tutorialspoint.com (66.135.33.172) \nRetrying OS detection (try #4) against tutorialspoint.com (66.135.33.172) \nRetrying OS detection (try #5) against tutorialspoint.com (66.135.33.172) \nNmap scan report for tutorialspoint.com (66.135.33.172) \nHost is up (0.000038s latency). \nNot shown: 996 closed ports \nPORT STATE SERVICE \n22/tcp open ssh \n80/tcp open http \n443/tcp open https \n3306/tcp open mysql \n\nTCP/IP fingerprint: \nOS:SCAN(V=5.51%D=10/4%OT=22%CT=1%CU=40379%PV=N%DS=0%DC=L%G=Y%TM=56113E6D%P= \nOS:x86_64-redhat-linux-gnu)SEQ(SP=106%GCD=1%ISR=109%TI=Z%CI=Z%II=I%TS=A)OPS \nOS:(O1=MFFD7ST11NW7%O2=MFFD7ST11NW7%O3=MFFD7NNT11NW7%O4=MFFD7ST11NW7%O5=MFF \nOS:D7ST11NW7%O6=MFFD7ST11)WIN(W1=FFCB%W2=FFCB%W3=FFCB%W4=FFCB%W5=FFCB%W6=FF \nOS:CB)ECN(R=Y%DF=Y%T=40%W=FFD7%O=MFFD7NNSNW7%CC=Y%Q=)T1(R=Y%DF=Y%T=40%S=O%A \nOS:=S+%F=AS%RD=0%Q=)T2(R=N)T3(R=N)T4(R=Y%DF=Y%T=40%W=0%S=A%A=Z%F=R%O=%RD=0% \nOS:Q=)T5(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)T6(R=Y%DF=Y%T=40%W=0%S= \nOS:A%A=Z%F=R%O=%RD=0%Q=)T7(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)U1(R= \nOS:Y%DF=N%T=40%IPL=164%UN=0%RIPL=G%RID=G%RIPCK=G%RUCK=G%RUD=G)IE(R=Y%DFI=N% \nOS:T=40%CD=S)\n"
},
{
"code": null,
"e": 7160,
"s": 7038,
"text": "If you do not have nmap command installed on your Linux system, then you can install it using the following yum command −"
},
{
"code": null,
"e": 7179,
"s": 7160,
"text": "$yum install nmap\n"
},
{
"code": null,
"e": 7334,
"s": 7179,
"text": "You can go through nmap command in detail to check and understand the different features associated with a system and secure it against malicious attacks."
},
{
"code": null,
"e": 7487,
"s": 7334,
"text": "You can hide your main system behind a secure proxy server or a VPN so that your complete identity is safe and ultimately your main system remains safe."
},
{
"code": null,
"e": 7602,
"s": 7487,
"text": "We have just seen information given by nmap command. This command lists down all the open ports on a given server."
},
{
"code": null,
"e": 7731,
"s": 7602,
"text": "PORT STATE SERVICE \n22/tcp open ssh \n80/tcp open http \n443/tcp open https \n3306/tcp open mysql\n"
},
{
"code": null,
"e": 7818,
"s": 7731,
"text": "You can also check if a particular port is opened or not using the following command −"
},
{
"code": null,
"e": 7855,
"s": 7818,
"text": "$nmap -sT -p 443 tutorialspoint.com\n"
},
{
"code": null,
"e": 7894,
"s": 7855,
"text": "It will produce the following result −"
},
{
"code": null,
"e": 8155,
"s": 7894,
"text": "Starting Nmap 5.51 ( http://nmap.org ) at 2015-10-04 10:19 CDT \nNmap scan report for tutorialspoint.com (66.135.33.172) \nHost is up (0.000067s latency). \nPORT STATE SERVICE \n443/tcp open https \n\nNmap done: 1 IP address (1 host up) scanned in 0.04 seconds\n"
},
{
"code": null,
"e": 8262,
"s": 8155,
"text": "Once a hacker knows about open ports, then he can plan different attack techniques through the open ports."
},
{
"code": null,
"e": 8377,
"s": 8262,
"text": "It is always recommended to check and close all the unwanted ports to safeguard the system from malicious attacks."
},
{
"code": null,
"e": 8557,
"s": 8377,
"text": "A ping sweep is a network scanning technique that you can use to determine which IP address from a range of IP addresses map to live hosts. Ping Sweep is also known as ICMP sweep."
},
{
"code": null,
"e": 8734,
"s": 8557,
"text": "You can use fping command for ping sweep. This command is a ping-like program which uses the Internet Control Message Protocol (ICMP) echo request to determine if a host is up."
},
{
"code": null,
"e": 8995,
"s": 8734,
"text": "fping is different from ping in that you can specify any number of hosts on the command line, or specify a file containing the lists of hosts to ping. If a host does not respond within a certain time limit and/or retry limit, it will be considered unreachable."
},
{
"code": null,
"e": 9178,
"s": 8995,
"text": "To disable ping sweeps on a network, you can block ICMP ECHO requests from outside sources. This can be done using the following command which will create a firewall rule in iptable."
},
{
"code": null,
"e": 9240,
"s": 9178,
"text": "$iptables -A OUTPUT -p icmp --icmp-type echo-request -j DROP\n"
},
{
"code": null,
"e": 9439,
"s": 9240,
"text": "Domain Name Server (DNS) is like a map or an address book. In fact, it is like a distributed database which is used to translate an IP address 192.111.1.120 to a name www.example.com and vice versa."
},
{
"code": null,
"e": 9664,
"s": 9439,
"text": "DNS enumeration is the process of locating all the DNS servers and their corresponding records for an organization. The idea is to gather as much interesting details as possible about your target before initiating an attack."
},
{
"code": null,
"e": 9852,
"s": 9664,
"text": "You can use nslookup command available on Linux to get DNS and host-related information. In addition, you can use the following DNSenum script to get detailed information about a domain −"
},
{
"code": null,
"e": 9863,
"s": 9852,
"text": "DNSenum.pl"
},
{
"code": null,
"e": 9927,
"s": 9863,
"text": "DNSenum script can perform the following important operations −"
},
{
"code": null,
"e": 9952,
"s": 9927,
"text": "Get the host's addresses"
},
{
"code": null,
"e": 9977,
"s": 9952,
"text": "Get the host's addresses"
},
{
"code": null,
"e": 9997,
"s": 9977,
"text": "Get the nameservers"
},
{
"code": null,
"e": 10017,
"s": 9997,
"text": "Get the nameservers"
},
{
"code": null,
"e": 10035,
"s": 10017,
"text": "Get the MX record"
},
{
"code": null,
"e": 10053,
"s": 10035,
"text": "Get the MX record"
},
{
"code": null,
"e": 10089,
"s": 10053,
"text": "Perform axfr queries on nameservers"
},
{
"code": null,
"e": 10125,
"s": 10089,
"text": "Perform axfr queries on nameservers"
},
{
"code": null,
"e": 10176,
"s": 10125,
"text": "Get extra names and subdomains via Google scraping"
},
{
"code": null,
"e": 10227,
"s": 10176,
"text": "Get extra names and subdomains via Google scraping"
},
{
"code": null,
"e": 10320,
"s": 10227,
"text": "Brute force subdomains from file can also perform recursion on subdomain that has NS records"
},
{
"code": null,
"e": 10413,
"s": 10320,
"text": "Brute force subdomains from file can also perform recursion on subdomain that has NS records"
},
{
"code": null,
"e": 10487,
"s": 10413,
"text": "Calculate C class domain network ranges and perform whois queries on them"
},
{
"code": null,
"e": 10561,
"s": 10487,
"text": "Calculate C class domain network ranges and perform whois queries on them"
},
{
"code": null,
"e": 10598,
"s": 10561,
"text": "Perform reverse lookups on netranges"
},
{
"code": null,
"e": 10635,
"s": 10598,
"text": "Perform reverse lookups on netranges"
},
{
"code": null,
"e": 10776,
"s": 10635,
"text": "DNS Enumeration does not have a quick fix and it is really beyond the scope of this tutorial. Preventing DNS Enumeration is a big challenge."
},
{
"code": null,
"e": 10987,
"s": 10776,
"text": "If your DNS is not configured in a secure way, it is possible that lots of sensitive information about the network and organization can go outside and an untrusted Internet user can perform a DNS zone transfer."
},
{
"code": null,
"e": 11020,
"s": 10987,
"text": "\n 36 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 11034,
"s": 11020,
"text": " Sharad Kumar"
},
{
"code": null,
"e": 11069,
"s": 11034,
"text": "\n 31 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 11086,
"s": 11069,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 11119,
"s": 11086,
"text": "\n 22 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 11131,
"s": 11119,
"text": " Blair Cook"
},
{
"code": null,
"e": 11166,
"s": 11131,
"text": "\n 74 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 11178,
"s": 11166,
"text": " 199courses"
},
{
"code": null,
"e": 11213,
"s": 11178,
"text": "\n 75 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 11225,
"s": 11213,
"text": " 199courses"
},
{
"code": null,
"e": 11262,
"s": 11225,
"text": "\n 148 Lectures \n 28.5 hours \n"
},
{
"code": null,
"e": 11281,
"s": 11262,
"text": " Joseph Delgadillo"
},
{
"code": null,
"e": 11288,
"s": 11281,
"text": " Print"
},
{
"code": null,
"e": 11299,
"s": 11288,
"text": " Add Notes"
}
] |
SQL Query to Convert FLOAT to NVARCHAR - GeeksforGeeks
|
23 Apr, 2021
Here we will see how to convert FLOAT data to NVARCHAR data in an MS SQL Server database table using the CAST(), CONVERT(), and FORMAT() functions.
We will be creating a person table in a database called “geeks”.
CREATE DATABASE geeks;
USE geeks;
We create the following person table in our geeks database:
CREATE TABLE person(
id INT IDENTITY(1,1) PRIMARY KEY,
name VARCHAR(30) NOT NULL,
weight REAL NOT NULL);
You can use the below statement to query the description of the created table:
EXEC SP_COLUMNS person;
Use the below statement to add data to the person table:
INSERT INTO person
VALUES
('Yogesh Vaishnav', 62.5),
('Vishal Vishwakarma', 70),
('Ashish Yadav', 69),
('Ajit Yadav', 71.9);
To verify the contents of the table use the below statement:
SELECT * FROM person;
Now let’s convert FLOAT values to NVARCHAR using three different methods.
Syntax: SELECT CONVERT(<DATA_TYPE>, <VALUE>);
--DATA_TYPE is the type we want to convert to.
--VALUE is the value we want to convert into DATA_TYPE.
Example:
SELECT 'Weight of Yogesh Vaishnav is ' + CONVERT(NVARCHAR(20), weight)
AS person_weight
FROM person
WHERE name = 'Yogesh Vaishnav';
Output:
Syntax: SELECT CAST(<VALUE> AS <DATA_TYPE>);
--DATA_TYPE is the type we want to convert to.
--VALUE is the value we want to convert into DATA_TYPE.
Example:
SELECT 'Weight of Yogesh Vaishnav is ' + CAST(weight as NVARCHAR(20))
AS person_weight
FROM person
WHERE name = 'Yogesh Vaishnav';
Output:
Although the FORMAT() function is intended for formatting values such as datetimes rather than converting one type into another, it can still be used to convert (or here, format) a float value into a string.
Syntax: SELECT FORMAT(<VALUE> , 'actual_format';
--actual_format is the format we want to achieve in a string form.
--VALUE is the value we want to format according to the actual_format.
Example:
SELECT 'Weight of Ashish Yadav is ' + FORMAT(weight, '') --'' denotes no formatting,
--i.e., simply convert it to a string of characters.
AS person_weight
FROM person
WHERE name = 'Ashish Yadav';
Output:
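FORMAT() also accepts standard .NET numeric format strings, which is handy when the converted value should keep a fixed number of decimal places. As a sketch against the same person table, 'N2' rounds the number to two decimals:

```sql
SELECT 'Weight of Ajit Yadav is ' + FORMAT(weight, 'N2') -- 71.9 becomes '71.90'
AS person_weight
FROM person
WHERE name = 'Ajit Yadav';
```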
|
[
{
"code": null,
"e": 25623,
"s": 25595,
"text": "\n23 Apr, 2021"
},
{
"code": null,
"e": 25774,
"s": 25623,
"text": "Here we will see, how to convert FLOAT data to NVARCHAR data in an MS SQL Server’s database table using the CAST(), CONVERT(), and FORMAT() functions."
},
{
"code": null,
"e": 25839,
"s": 25774,
"text": "We will be creating a person table in a database called “geeks”."
},
{
"code": null,
"e": 25862,
"s": 25839,
"text": "CREATE DATABASE geeks;"
},
{
"code": null,
"e": 25873,
"s": 25862,
"text": "USE geeks;"
},
{
"code": null,
"e": 25934,
"s": 25873,
"text": "We have the following Employee table in our geeks database :"
},
{
"code": null,
"e": 26039,
"s": 25934,
"text": "CREATE TABLE person(\nid INT IDENTITY(1,1) PRIMARY KEY,\nname VARCHAR(30) NOT NULL,\nweight REAL NOT NULL);"
},
{
"code": null,
"e": 26118,
"s": 26039,
"text": "You can use the below statement to query the description of the created table:"
},
{
"code": null,
"e": 26142,
"s": 26118,
"text": "EXEC SP_COLUMNS person;"
},
{
"code": null,
"e": 26199,
"s": 26142,
"text": "Use the below statement to add data to the person table:"
},
{
"code": null,
"e": 26324,
"s": 26199,
"text": "INSERT INTO person\nVALUES\n('Yogesh Vaishnav', 62.5),\n('Vishal Vishwakarma', 70),\n('Ashish Yadav', 69),\n('Ajit Yadav', 71.9);"
},
{
"code": null,
"e": 26385,
"s": 26324,
"text": "To verify the contents of the table use the below statement:"
},
{
"code": null,
"e": 26407,
"s": 26385,
"text": "SELECT * FROM person;"
},
{
"code": null,
"e": 26481,
"s": 26407,
"text": "Now let’s convert FLOAT values to nvarchar using three different methods."
},
{
"code": null,
"e": 26630,
"s": 26481,
"text": "Syntax: SELECT CONVERT(<DATA_TYPE>, <VALUE>);\n--DATA_TYPE is the type we want to convert to.\n--VALUE is the value we want to convert into DATA_TYPE."
},
{
"code": null,
"e": 26639,
"s": 26630,
"text": "Example:"
},
{
"code": null,
"e": 26772,
"s": 26639,
"text": "SELECT 'Weight of Yogesh Vaishnav is ' + CONVERT(NVARCHAR(20), weight) \nAS person_weight\nFROM person\nWHERE name = 'Yogesh Vaishnav';"
},
{
"code": null,
"e": 26780,
"s": 26772,
"text": "Output:"
},
{
"code": null,
"e": 26928,
"s": 26780,
"text": "Syntax: SELECT CAST(<VALUE> AS <DATA_TYPE>);\n--DATA_TYPE is the type we want to convert to.\n--VALUE is the value we want to convert into DATA_TYPE."
},
{
"code": null,
"e": 26937,
"s": 26928,
"text": "Example:"
},
{
"code": null,
"e": 27069,
"s": 26937,
"text": "SELECT 'Weight of Yogesh Vaishnav is ' + CAST(weight as NVARCHAR(20)) \nAS person_weight\nFROM person\nWHERE name = 'Yogesh Vaishnav';"
},
{
"code": null,
"e": 27077,
"s": 27069,
"text": "Output:"
},
{
"code": null,
"e": 27260,
"s": 27077,
"text": "Although the FORMAT() function is useful for formatting datetime and not converting one type into another, still can be used to convert(or here format) float value into an STR value."
},
{
"code": null,
"e": 27447,
"s": 27260,
"text": "Syntax: SELECT FORMAT(<VALUE> , 'actual_format';\n--actual_format is the format we want to achieve in a string form.\n--VALUE is the value we want to format according to the actual_format."
},
{
"code": null,
"e": 27456,
"s": 27447,
"text": "Example:"
},
{
"code": null,
"e": 27648,
"s": 27456,
"text": "SELECT 'Weight of Ashish Yadav is ' + FORMAT(weight, '') --'' denotes no formating\n--i.e simply convert it to a string of characters.\nAS person_weight\nFROM person\nWHERE name = 'Ashish Yadav';"
},
{
"code": null,
"e": 27656,
"s": 27648,
"text": "Output:"
},
{
"code": null,
"e": 27663,
"s": 27656,
"text": "Picked"
},
{
"code": null,
"e": 27673,
"s": 27663,
"text": "SQL-Query"
},
{
"code": null,
"e": 27677,
"s": 27673,
"text": "SQL"
},
{
"code": null,
"e": 27681,
"s": 27677,
"text": "SQL"
},
{
"code": null,
"e": 27779,
"s": 27681,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27845,
"s": 27779,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 27902,
"s": 27845,
"text": "How to Create a Table With Multiple Foreign Keys in SQL?"
},
{
"code": null,
"e": 27934,
"s": 27902,
"text": "What is Temporary Table in SQL?"
},
{
"code": null,
"e": 27970,
"s": 27934,
"text": "SQL Query to Convert VARCHAR to INT"
},
{
"code": null,
"e": 27985,
"s": 27970,
"text": "SQL | Subquery"
},
{
"code": null,
"e": 28063,
"s": 27985,
"text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter"
},
{
"code": null,
"e": 28080,
"s": 28063,
"text": "SQL using Python"
},
{
"code": null,
"e": 28142,
"s": 28080,
"text": "How to Select Data Between Two Dates and Times in SQL Server?"
},
{
"code": null,
"e": 28208,
"s": 28142,
"text": "How to Write a SQL Query For a Specific Date Range and Date Time?"
}
] |
How to Analyze Data Using Mito in Python | Towards Data Science
|
Data contains so many meaningful insights, and data analysis is the way to uncover them. Sometimes we are unsure which tool to use: spreadsheet software like Excel, or a programming language like Python.
Some people prefer the spreadsheet tool, and one common reason is that they cannot program yet.
Using a spreadsheet tool is not recommended for big data; analyzing big data calls for programming. Thankfully, there is a tool that connects both worlds. It is called Mito.
Mito is a library with data-analysis capabilities. Unlike the pandas library, Mito has an interface like spreadsheet software, so we can explore and process the data without touching any code.
In this article, I will show you how to analyze data using Mito, along with the features included in this tool. Without further ado, let's get started!
Before we can use the library, we need to install it. First, install the mitoinstaller package with the 'pip' command. Here is the command for doing that:
python -m pip install mitoinstaller
After that, you can install Mito by using this command line:
python -m mitoinstaller install
If your installation is complete, it will show texts like this:
Now we can load the library on the notebook.
Keep in mind that you can use Mito only with JupyterLab. For now, you cannot access it from the classic Jupyter Notebook.
Now let’s initialize the Mito sheet. For doing that, please copy these lines of code:
import mitosheet
mitosheet.sheet()
Here is the result for running the code:
If you can see the interface on the notebook, it means that you can use it now.
For the data source, we will use a dataset from Kaggle called Top Streamers on Twitch. Basically, the dataset contains information about the top 1000 streamers from 2020.
The information that is included on the dataset is the number of viewers, followers, language name, channel name, etc. You can access the dataset here.
Disclaimer: The dataset is in the public domain. It also carries the 'CC0: Public Domain' license. For more details, you can take a look here.
To open the dataset, we need to create a dataframe object from it. We can use the pandas library for doing that. Let’s write these lines of code for doing that:
import pandas as pd
df = pd.read_csv('twitchdata-update.csv')
After we get the dataframe, the next step is to load it into our Mito sheet. Add this line of code for doing that:
mitosheet.sheet(df)
Here is the result for running the code:
As you can see from above, the data is already loaded. Now let’s explore what Mito can do.
With Mito, we can explore and customize the dataset like on a spreadsheet. The first feature that I want to show you is to add a column to the dataset.
Let’s say we want to add a column where it has a boolean value that determines whether the channel is in the English language or not. We called the column the ‘is_english’ column.
For adding the column, have a look at this GIF:
Because Mito is like a spreadsheet tool that we can use in our notebook, we can use formulas like the spreadsheet software for customizing columns.
Let’s recall the is_english column. We want to set boolean values to 1 if the language is English. In spreadsheet software, we can use a formula like this:
IF(language == 'English', 1, 0)
Let’s apply the formula on Mito. Here is the GIF for the process:
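Under the hood, Mito translates spreadsheet formulas into pandas operations. The IF formula above corresponds to something like the following sketch (the column name `Language` is assumed from the Twitch dataset, and the toy dataframe stands in for the real one):

```python
import pandas as pd

df = pd.DataFrame({"Language": ["English", "Korean", "English"]})

# Equivalent of IF(language == 'English', 1, 0):
# the comparison yields booleans, astype(int) turns them into 1/0.
df["is_english"] = (df["Language"] == "English").astype(int)

print(df["is_english"].tolist())  # -> [1, 0, 1]
```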
After we set the column’s values, let’s filter the data based on the ‘is_english’ column. We will take rows that contain the value of 1.
In Mito, we can do that easily. We only need to give parameters for doing the filtering process. Have a look at this GIF:
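The filter step likewise maps to a boolean mask in pandas. Again this is a sketch with toy data standing in for the Twitch dataframe:

```python
import pandas as pd

df = pd.DataFrame({
    "Channel": ["a", "b", "c"],
    "is_english": [1, 0, 1],
})

# Keep only the rows where is_english == 1.
english_df = df[df["is_english"] == 1]

print(len(english_df))  # -> 2
```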
The next feature is visualizing the data. With Mito, we can display charts more easily, rather than spending time writing code and searching helper websites for the right syntax for a given problem. We can visualize charts such as box plots, histograms, scatter plots, and bar plots.
The next feature that I want to show you is to create the pivot table. Just like previous features, we only need to give parameters for doing a certain task.
For creating the pivot table, we can set which columns act as the rows, the columns, and the values. From that table, we can see values aggregated by specific columns. In this case, we want to aggregate the number of followers based on maturity and language.
Please have a look at this GIF for how to create the pivot table:
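In pandas terms, the pivot table and the later sort correspond to `pivot_table` and `sort_values`. The column names `Language`, `Mature`, and `Followers` are assumed from the Twitch dataset; the numbers below are toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "Language":  ["English", "English", "Korean", "Korean"],
    "Mature":    [False, True, False, True],
    "Followers": [100, 50, 80, 10],
})

# Rows = language, columns = mature flag, cells = summed followers.
pivot = df.pivot_table(index="Language", columns="Mature",
                       values="Followers", aggfunc="sum")

# Sort by the non-mature (False) column, largest first.
ranked = pivot.sort_values(by=False, ascending=False)

print(ranked.index.tolist())  # -> ['English', 'Korean']
```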
Let's have a look at our pivot table. As you can see, it contains the number of followers broken down by maturity and language. But we still haven't extracted any insights, so let's sort the data first.
With Mito, sorting the data is simple. We only have to click several buttons for doing that. Please have a look at this GIF:
If we sort the column that has no mature content, we can see that English is the most common language, followed by Korean, Russian, Spanish, and so on.
But if you look more closely, the follower counts for mature content do not follow the same ranking as the non-mature ones. Let's sort the data on the mature column. Here is the result:
As you can see, Korean is no longer in second place. Most of the top languages are European, and Korean sits below Chinese and Thai.
Here is the last thing that Mito can do: it generates code. When we perform some processing on the data, it automatically generates the corresponding code. In my case, here is the code that Mito generated:
As you can see, it looks like the pandas commands we have used before. With Mito, we can work as we would in spreadsheet software and get the equivalent code generated for us.
Well done! Now you have learned how to analyze data using Mito in Python. For those who are new to programming and data analysis, I hope it helps you get started.
If you are interested in this article, please follow my Medium for more articles like this. I will cover many data science topics, ranging from tutorials to applications across domains.
If you have any questions or want to discuss, you can contact me on LinkedIn or E-Mail (khaliddotdev@gmail.com).
Thank you for reading my article!
|
[
{
"code": null,
"e": 434,
"s": 171,
"text": "Data contains so many meaningful insights. Data Analysis is the way for getting those insights. Sometimes, we get confused about choosing which tools we want to use, whether using spreadsheet software like Excel. Or we can use a programming language like Python."
},
{
"code": null,
"e": 567,
"s": 434,
"text": "And for some people, they prefer to use the spreadsheet tool. One of the reasons for that is because they cannot do programming yet."
},
{
"code": null,
"e": 752,
"s": 567,
"text": "Using the spreadsheet tool is not recommended for big data. Therefore, we need programming for analyzing big data. But thankfully, there’s a tool for connecting both. It’s called Mito."
},
{
"code": null,
"e": 975,
"s": 752,
"text": "Mito is a library that has capabilities for analyzing the data. Unlike the Pandas library, Mito has an interface like the spreadsheet software. Therefore, we can explore and process the data without interfering with codes."
},
{
"code": null,
"e": 1143,
"s": 975,
"text": "In this article, I will show you how to analyze data using Mito. Also, I will show you the features that are included in this tool. Without further, let’s get started!"
},
{
"code": null,
"e": 1328,
"s": 1143,
"text": "Before we can use the library, we need to install it first. We need to install the mitoinstaller library for installing Mito with the ‘pip’ command. Here is the command for doing that:"
},
{
"code": null,
"e": 1364,
"s": 1328,
"text": "python -m pip install mitoinstaller"
},
{
"code": null,
"e": 1425,
"s": 1364,
"text": "After that, you can install Mito by using this command line:"
},
{
"code": null,
"e": 1457,
"s": 1425,
"text": "python -m mitoinstaller install"
},
{
"code": null,
"e": 1521,
"s": 1457,
"text": "If your installation is complete, it will show texts like this:"
},
{
"code": null,
"e": 1566,
"s": 1521,
"text": "Now we can load the library on the notebook."
},
{
"code": null,
"e": 1691,
"s": 1566,
"text": "Keep in mind that you can use Mito only with JupyterLab. Until now, you cannot access it using the regular Jupyter Notebook."
},
{
"code": null,
"e": 1777,
"s": 1691,
"text": "Now let’s initialize the Mito sheet. For doing that, please copy these lines of code:"
},
{
"code": null,
"e": 1811,
"s": 1777,
"text": "import mitosheetmitosheet.sheet()"
},
{
"code": null,
"e": 1852,
"s": 1811,
"text": "Here is the result for running the code:"
},
{
"code": null,
"e": 1932,
"s": 1852,
"text": "If you can see the interface on the notebook, it means that you can use it now."
},
{
"code": null,
"e": 2103,
"s": 1932,
"text": "For the data source, we will use a dataset from Kaggle called Top Streamers on Twitch. Basically, the dataset contains information about the top 1000 streamers from 2020."
},
{
"code": null,
"e": 2255,
"s": 2103,
"text": "The information that is included on the dataset is the number of viewers, followers, language name, channel name, etc. You can access the dataset here."
},
{
"code": null,
"e": 2398,
"s": 2255,
"text": "Disclaimer:The dataset is in the public domain. It also contains the ‘CC0: Public Domain’ License. For more details, you can take a look here."
},
{
"code": null,
"e": 2559,
"s": 2398,
"text": "To open the dataset, we need to create a dataframe object from it. We can use the pandas library for doing that. Let’s write these lines of code for doing that:"
},
{
"code": null,
"e": 2620,
"s": 2559,
"text": "import pandas as pddf = pd.read_csv('twitchdata-update.csv')"
},
{
"code": null,
"e": 2735,
"s": 2620,
"text": "After we get the dataframe, the next step is to load it into our Mito sheet. Add this line of code for doing that:"
},
{
"code": null,
"e": 2755,
"s": 2735,
"text": "mitosheet.sheet(df)"
},
{
"code": null,
"e": 2796,
"s": 2755,
"text": "Here is the result for running the code:"
},
{
"code": null,
"e": 2887,
"s": 2796,
"text": "As you can see from above, the data is already loaded. Now let’s explore what Mito can do."
},
{
"code": null,
"e": 3039,
"s": 2887,
"text": "With Mito, we can explore and customize the dataset like on a spreadsheet. The first feature that I want to show you is to add a column to the dataset."
},
{
"code": null,
"e": 3219,
"s": 3039,
"text": "Let’s say we want to add a column where it has a boolean value that determines whether the channel is in the English language or not. We called the column the ‘is_english’ column."
},
{
"code": null,
"e": 3267,
"s": 3219,
"text": "For adding the column, have a look at this GIF:"
},
{
"code": null,
"e": 3415,
"s": 3267,
"text": "Because Mito is like a spreadsheet tool that we can use in our notebook, we can use formulas like the spreadsheet software for customizing columns."
},
{
"code": null,
"e": 3571,
"s": 3415,
"text": "Let’s recall the is_english column. We want to set boolean values to 1 if the language is English. In spreadsheet software, we can use a formula like this:"
},
{
"code": null,
"e": 3603,
"s": 3571,
"text": "IF(language == 'English', 1, 0)"
},
{
"code": null,
"e": 3669,
"s": 3603,
"text": "Let’s apply the formula on Mito. Here is the GIF for the process:"
},
{
"code": null,
"e": 3806,
"s": 3669,
"text": "After we set the column’s values, let’s filter the data based on the ‘is_english’ column. We will take rows that contain the value of 1."
},
{
"code": null,
"e": 3928,
"s": 3806,
"text": "In Mito, we can do that easily. We only need to give parameters for doing the filtering process. Have a look at this GIF:"
},
{
"code": null,
"e": 4221,
"s": 3928,
"text": "The next feature that we can do is to visualize the data. With Mito, we can display charts easier rather than spending time writing codes and looking at the helper’s website for getting syntax for certain problems. We can visualize charts like box plot, histogram, scatter plot, and bar plot."
},
{
"code": null,
"e": 4379,
"s": 4221,
"text": "The next feature that I want to show you is to create the pivot table. Just like previous features, we only need to give parameters for doing a certain task."
},
{
"code": null,
"e": 4636,
"s": 4379,
"text": "For creating the pivot table, we can set which column is acts as the row, the column, and the value. From that table, we can see the value based on specific columns. In this case, we want to aggregate the number of followers based on maturity and language."
},
{
"code": null,
"e": 4702,
"s": 4636,
"text": "Please have a look at this GIF for how to create the pivot table:"
},
{
"code": null,
"e": 4900,
"s": 4702,
"text": "Let’s have a look at our pivot table. As you can see, the table contains the number of followers based on maturity and the language. But we still didn’t get the insights. Let’s sort the data first."
},
{
"code": null,
"e": 5025,
"s": 4900,
"text": "With Mito, sorting the data is simple. We only have to click several buttons for doing that. Please have a look at this GIF:"
},
{
"code": null,
"e": 5181,
"s": 5025,
"text": "If we sort the column that has no mature content, we can see that English is the most language. And then it follows by Korean, Russian, Spanish, and so on."
},
{
"code": null,
"e": 5369,
"s": 5181,
"text": "But if you see more details, the number of followers for non-mature content is not the same as the mature one. Let’s sort the data on the mature column. Here is the result for doing that:"
},
{
"code": null,
"e": 5504,
"s": 5369,
"text": "As you can see, the Korean is not in second place. Most of them are European languages. And Korean language is below Chinese and Thai."
},
{
"code": null,
"e": 5718,
"s": 5504,
"text": "Here is the last thing that Mito can do. It generates codes. When we conduct some processing on the data, it automatically generates codes based on it. In my case, here is the code that has been generated by Mito:"
},
{
"code": null,
"e": 5878,
"s": 5718,
"text": "As you can see, it looks like the pandas command that we’ve used. With Mito, we can do the process like the spreadsheet software and generate code based on it."
},
{
"code": null,
"e": 6049,
"s": 5878,
"text": "Well done! Now you have learned how to analyze data using Mito in Python. For those who are new to programming and data analysis, I hope it helps you for getting started."
},
{
"code": null,
"e": 6236,
"s": 6049,
"text": "If you are interested in this article, please follow my Medium for more articles like this. I will talk about lots of data science ranging from tutorials to applications on many domains."
},
{
"code": null,
"e": 6349,
"s": 6236,
"text": "If you have any questions or want to discuss, you can contact me on LinkedIn or E-Mail (khaliddotdev@gmail.com)."
}
] |
Mid() and Len() Function in MS Access - GeeksforGeeks
|
01 Sep, 2020
1. Mid() function :In MS Access, the Mid() function extracts a substring starting at a given position in a string. It takes three parameters: the string, the starting position, and the length of the substring to extract.
Syntax :
Mid(string, start, length)
Example-1 :
SELECT Mid("GEEKSFORGEEKS", 3, 3) AS ExtractString;
Output –
Example-2 :
SELECT Mid("GEEKSFORGEEKS", 6, 14) AS ExtractString;
Output –
2. Len() function :In MS Access, the Len() function returns the length of a string. It takes the string as its only parameter and returns the number of characters in it.
Syntax :
Len(string/varname)
Example-1 :
SELECT Len("GEEKSFORGEEKS") AS LengthOfString;
Output –
Example-2 :
SELECT Len("GFG") AS LengthOfString;
Output –
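For readers coming from a general-purpose language, the two functions map directly onto Python string slicing and the built-in len(). The sketch below is an illustrative analog, not Access code; note that Mid() counts positions from 1, while Python slicing counts from 0.

```python
def access_mid(s, start, length):
    # Access Mid() is 1-based; Python slices are 0-based.
    # Slicing past the end of the string simply clips, just as Mid() does.
    return s[start - 1:start - 1 + length]

print(access_mid("GEEKSFORGEEKS", 3, 3))   # EKS
print(access_mid("GEEKSFORGEEKS", 6, 14))  # FORGEEKS
print(len("GEEKSFORGEEKS"))                # 13
print(len("GFG"))                          # 3
```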
|
[
{
"code": null,
"e": 23877,
"s": 23849,
"text": "\n01 Sep, 2020"
},
{
"code": null,
"e": 24129,
"s": 23877,
"text": "1. Mid() function :In MS Access the mid() function will extract the string from a given position. In this function 3 parameters will be passed first will be the string and second will be the starting position and last will be the length of the string."
},
{
"code": null,
"e": 24138,
"s": 24129,
"text": "Syntax :"
},
{
"code": null,
"e": 24166,
"s": 24138,
"text": "Mid(string, start, length)\n"
},
{
"code": null,
"e": 24178,
"s": 24166,
"text": "Example-1 :"
},
{
"code": null,
"e": 24231,
"s": 24178,
"text": "SELECT Mid(\"GEEKSFORGEEKS\", 3, 3) AS ExtractString;\n"
},
{
"code": null,
"e": 24240,
"s": 24231,
"text": "Output –"
},
{
"code": null,
"e": 24252,
"s": 24240,
"text": "Example-2 :"
},
{
"code": null,
"e": 24306,
"s": 24252,
"text": "SELECT Mid(\"GEEKSFORGEEKS\", 6, 14) AS ExtractString;\n"
},
{
"code": null,
"e": 24315,
"s": 24306,
"text": "Output –"
},
{
"code": null,
"e": 24488,
"s": 24315,
"text": "2. Len() function :In MS Access the Len() function will return the length of the string. It will take the string as a parameter and it will return the length of the string."
},
{
"code": null,
"e": 24497,
"s": 24488,
"text": "Syntax :"
},
{
"code": null,
"e": 24518,
"s": 24497,
"text": "Len(string/varname)\n"
},
{
"code": null,
"e": 24530,
"s": 24518,
"text": "Example-1 :"
},
{
"code": null,
"e": 24578,
"s": 24530,
"text": "SELECT Len(\"GEEKSFORGEEKS\") AS LengthOfString;\n"
},
{
"code": null,
"e": 24587,
"s": 24578,
"text": "Output –"
},
{
"code": null,
"e": 24599,
"s": 24587,
"text": "Example-2 :"
},
{
"code": null,
"e": 24637,
"s": 24599,
"text": "SELECT Len(\"GFG\") AS LengthOfString;\n"
},
{
"code": null,
"e": 24646,
"s": 24637,
"text": "Output –"
},
{
"code": null,
"e": 24655,
"s": 24646,
"text": "DBMS-SQL"
},
{
"code": null,
"e": 24665,
"s": 24655,
"text": "Functions"
},
{
"code": null,
"e": 24669,
"s": 24665,
"text": "SQL"
},
{
"code": null,
"e": 24679,
"s": 24669,
"text": "Functions"
},
{
"code": null,
"e": 24683,
"s": 24679,
"text": "SQL"
},
{
"code": null,
"e": 24781,
"s": 24683,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 24790,
"s": 24781,
"text": "Comments"
},
{
"code": null,
"e": 24803,
"s": 24790,
"text": "Old Comments"
},
{
"code": null,
"e": 24869,
"s": 24803,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 24901,
"s": 24869,
"text": "What is Temporary Table in SQL?"
},
{
"code": null,
"e": 24979,
"s": 24901,
"text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter"
},
{
"code": null,
"e": 24996,
"s": 24979,
"text": "SQL using Python"
},
{
"code": null,
"e": 25011,
"s": 24996,
"text": "SQL | Subquery"
},
{
"code": null,
"e": 25077,
"s": 25011,
"text": "How to Write a SQL Query For a Specific Date Range and Date Time?"
},
{
"code": null,
"e": 25113,
"s": 25077,
"text": "SQL Query to Convert VARCHAR to INT"
},
{
"code": null,
"e": 25148,
"s": 25113,
"text": "SQL Query to Delete Duplicate Rows"
},
{
"code": null,
"e": 25179,
"s": 25148,
"text": "SQL Query to Compare Two Dates"
}
] |
What is the use of Passthru parameter in Stop-Process in PowerShell?
|
With the -PassThru parameter, PowerShell returns the stopped process object to the console. For example, below the notepad.exe process with ID 12344 is stopped, and the process is displayed in the console because of the -PassThru parameter; without it, Stop-Process produces no output.
PS C:\WINDOWS\system32> Stop-Process -Id 12344 -PassThru
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
227 13 2800 13440 0.19 12344 1 notepad
You can also combine –Confirm and –PassThru parameters.
PS C:\WINDOWS\system32> Stop-Process -Id 26492 -Confirm -PassThru
Confirm
Are you sure you want to perform this action?
Performing the operation "Stop-Process" on target "notepad (26492)".
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): Y
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
228 14 3152 13736 0.20 26492 1 notepad
|
[
{
"code": null,
"e": 1333,
"s": 1062,
"text": "With the Passthru parameter, PowerShell returns the output in the console. For example, below notepad.exe process with ID 12344 will be stopped and the same will be displayed in the console with the Passthru parameter. Earlier it was not the case with only Stop-Process."
},
{
"code": null,
"e": 1609,
"s": 1333,
"text": "PS C:\\WINDOWS\\system32> Stop-Process -Id 12344 -PassThru\n\n Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName\n ------- ------ ----- ----- ------ -- -- -----------\n 227 13 2800 13440 0.19 12344 1 notepad"
},
{
"code": null,
"e": 1665,
"s": 1609,
"text": "You can also combine –Confirm and –PassThru parameters."
},
{
"code": null,
"e": 2152,
"s": 1665,
"text": "PS C:\\WINDOWS\\system32> Stop-Process -Id 26492 -Confirm -PassThru\n\nConfirm\nAre you sure you want to perform this action?\nPerforming the operation \"Stop-Process\" on target \"notepad (26492)\".\n[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is \"Y\"): Y\n\nHandles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName\n------- ------ ----- ----- ------ -- -- -----------\n 228 14 3152 13736 0.20 26492 1 notepad"
}
] |
Collectors collectingAndThen() method in Java 8
|
The collectingAndThen() method in the Java Collectors class adapts a Collector to perform an additional finishing transformation. It returns a collector that performs the action of the downstream collector, followed by an additional finishing step.
The syntax is as follows.
static <T,A,R,RR> Collector<T,A,RR> collectingAndThen(Collector<T,A,R> downstream, Function<R,RR>
finisher)
Here, the parameter,
T − Type of the input elements
T − Type of the input elements
A − Intermediate accumulation type of the downstream collector
A − Intermediate accumulation type of the downstream collector
R − The result type of the downstream collector
R − The result type of the downstream collector
RR − The result type of the resulting collector
RR − The result type of the resulting collector
downstream − Collector
downstream − Collector
finisher − A function to be applied to the final result of the downstream collector
finisher − A function to be applied to the final result of the downstream collector
To work with Collectors class in Java, import the following package.
import java.util.stream.Collectors;
The following is an example of the collectingAndThen() method in Java.
import java.util.List;
import java.util.Collections;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class Demo {
public static void main(String[] args) {
List<String> list
= Stream.of("Demo1", "Demo2").collect(Collectors.collectingAndThen(
Collectors.toList(),
Collections::<String> unmodifiableList));
System.out.println(list);
}
}
[Demo1, Demo2]
|
[
{
"code": null,
"e": 1307,
"s": 1062,
"text": "The collectingAndThen() method in Java Collectors class acclimates a Collector to perform an additional finishing transformation. It returns collector which performs the action of the downstream collector, followed by an additional ending step."
},
{
"code": null,
"e": 1333,
"s": 1307,
"text": "The syntax is as follows."
},
{
"code": null,
"e": 1441,
"s": 1333,
"text": "static <T,A,R,RR> Collector<T,A,RR> collectingAndThen(Collector<T,A,R> downstream, Function<R,RR>\nfinisher)"
},
{
"code": null,
"e": 1462,
"s": 1441,
"text": "Here, the parameter,"
},
{
"code": null,
"e": 1493,
"s": 1462,
"text": "T − Type of the input elements"
},
{
"code": null,
"e": 1524,
"s": 1493,
"text": "T − Type of the input elements"
},
{
"code": null,
"e": 1587,
"s": 1524,
"text": "A − Intermediate accumulation type of the downstream collector"
},
{
"code": null,
"e": 1650,
"s": 1587,
"text": "A − Intermediate accumulation type of the downstream collector"
},
{
"code": null,
"e": 1698,
"s": 1650,
"text": "R − The result type of the downstream collector"
},
{
"code": null,
"e": 1746,
"s": 1698,
"text": "R − The result type of the downstream collector"
},
{
"code": null,
"e": 1794,
"s": 1746,
"text": "RR − The result type of the resulting collector"
},
{
"code": null,
"e": 1842,
"s": 1794,
"text": "RR − The result type of the resulting collector"
},
{
"code": null,
"e": 1865,
"s": 1842,
"text": "downstream − Collector"
},
{
"code": null,
"e": 1888,
"s": 1865,
"text": "downstream − Collector"
},
{
"code": null,
"e": 1972,
"s": 1888,
"text": "finisher − A function to be applied to the final result of the downstream collector"
},
{
"code": null,
"e": 2056,
"s": 1972,
"text": "finisher − A function to be applied to the final result of the downstream collector"
},
{
"code": null,
"e": 2125,
"s": 2056,
"text": "To work with Collectors class in Java, import the following package."
},
{
"code": null,
"e": 2161,
"s": 2125,
"text": "import java.util.stream.Collectors;"
},
{
"code": null,
"e": 2238,
"s": 2161,
"text": "The following is an example to implement collectingAndThen() method in Java."
},
{
"code": null,
"e": 2249,
"s": 2238,
"text": " Live Demo"
},
{
"code": null,
"e": 2646,
"s": 2249,
"text": "import java.util.List;\nimport java.util.Collections;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\npublic class Demo {\n public static void main(String[] args) {\n List<String> list\n = Stream.of(\"Demo1\", \"Demo2\").collect(Collectors.collectingAndThen(\n Collectors.toList(),\n Collections::<String> unmodifiableList));\n System.out.println(list);\n }\n}"
},
{
"code": null,
"e": 2661,
"s": 2646,
"text": "[Demo1, Demo2]"
}
] |
Running an Apache Beam Data Pipeline on Databricks | Towards Data Science
|
A brief walk-through on how to execute your Apache Beam Pipeline on Databricks
When we think about data-parallel pipelines, Apache Spark immediately comes to mind, but there are also promising and fresher models able to achieve the same results and performances.
This is the case of Apache Beam, an open source, unified model for defining both batch and streaming data-parallel processing pipelines. It gives the possibility to define data pipelines in a handy way, using as runtime one of its distributed processing back-ends (Apache Apex, Apache Flink, Apache Spark, Google Cloud Dataflow and many others).
Apache Beam’s great strength is its higher level of abstraction, which can save programmers from having to learn multiple frameworks. Currently, the usage of Apache Beam is mainly restricted to Google Cloud Platform and, in particular, to Google Cloud Dataflow.
However, when it comes to moving to other platforms, it can be tricky to find some useful references and examples that could help us running our Apache Beam pipeline.
That’s why I would like to tell you my experience on how to run an Apache Beam Pipeline on Databricks.
Note: the code of this walk-through is available at this Github repository.
I decided to start off from the official Apache Beam WordCount example and change a few details in order to execute our pipeline on Databricks.
The official code simply reads a public text file from Google Cloud Storage, performs a word count on the input text and writes the output to a given path. In order to simplify this process, we will replace these operations by simply reading the input text from an in-code mocked string, finally printing the word count results to the standard output.
The input string will be defined as a List:
We then create a simple Beam custom DoFn Transform to print our results:
Our final pipeline will look like this:
We now have a working Beam Pipeline that can be executed in local mode. If you try to run it, you should see this printed to your standard output:
20/06/01 13:14:13 INFO transforms.PrintFN: against: 120/06/01 13:14:13 INFO transforms.PrintFN: and: 120/06/01 13:14:13 INFO transforms.PrintFN: of: 220/06/01 13:14:13 INFO transforms.PrintFN: troubles: 120/06/01 13:14:13 INFO transforms.PrintFN: nobler: 120/06/01 13:14:13 INFO transforms.PrintFN: arrows: 120/06/01 13:14:13 INFO transforms.PrintFN: suffer: 120/06/01 13:14:13 INFO transforms.PrintFN: sea: 120/06/01 13:14:13 INFO transforms.PrintFN: The: 120/06/01 13:14:13 INFO transforms.PrintFN: Or: 120/06/01 13:14:13 INFO transforms.PrintFN: not: 120/06/01 13:14:13 INFO transforms.PrintFN: slings: 120/06/01 13:14:13 INFO transforms.PrintFN: that: 120/06/01 13:14:13 INFO transforms.PrintFN: is: 120/06/01 13:14:13 INFO transforms.PrintFN: arms: 120/06/01 13:14:13 INFO transforms.PrintFN: Whether: 120/06/01 13:14:13 INFO transforms.PrintFN: a: 120/06/01 13:14:13 INFO transforms.PrintFN: fortune: 120/06/01 13:14:13 INFO transforms.PrintFN: take: 120/06/01 13:14:13 INFO transforms.PrintFN: question: 120/06/01 13:14:13 INFO transforms.PrintFN: To: 120/06/01 13:14:13 INFO transforms.PrintFN: mind: 120/06/01 13:14:13 INFO transforms.PrintFN: to: 320/06/01 13:14:13 INFO transforms.PrintFN: outrageous: 120/06/01 13:14:13 INFO transforms.PrintFN: or: 120/06/01 13:14:13 INFO transforms.PrintFN: tis: 120/06/01 13:14:13 INFO transforms.PrintFN: in: 120/06/01 13:14:13 INFO transforms.PrintFN: the: 220/06/01 13:14:13 INFO transforms.PrintFN: be: 2
Now we would like to execute our pipeline on our Databricks instance. To achieve this, we need to modify a few more things in our code. First of all, we modify our WordCountOptions, which has to extend the SparkContextOptions class. These Beam options are necessary in order to manipulate Beam’s SparkContext. A Databricks cluster has its own SparkContext, which is crucial to retrieve in order to scale out the application. Once we retrieve the SparkContext, we can inject it directly into Beam’s SparkContextOptions as shown below:
With this final version of our Beam code, we are now ready to launch our Databricks workspace in Azure and to proceed by creating a new Job. We package our project into a fat jar (in this example, I will be using the standard maven life cycle to package my application) and we upload it to our Job by clicking on “Upload Jar”.
Notice that if you have any Spark dependencies in your pom.xml file, remember to mark them as “provided”, since our Databricks cluster will provide them to our application through the execution context.
After specifying the main class, it’s essential to use these parameters that will be parsed by SparkContextOptions:
--runner=SparkRunner --usesProvidedSparkContext
Finally, we can setup the cluster that will be associated to our Job by clicking on edit:
In this way, we define a “New Automated Cluster” with 2 workers on Databricks runtime version 6.4. If you prefer, you can also create an “Interactive Cluster”, which gives you more control over cluster’s execution.
Now, we’re good to go! If your job looks similar to the one below, just click on “run now” and wait for its termination.
Please note that if you have written your Beam pipeline in Python, the procedure to make it work on Databricks should look more or less the same: just remember to inject Databricks’ SparkContext into Beam and execute your pipeline with the right set of parameters.
I hope you liked my walk-through on how to run an Apache Beam pipeline on Azure Databricks and feel free to contact me if you find out more useful insights on this topic!
|
[
{
"code": null,
"e": 250,
"s": 171,
"text": "A brief walk-through on how to execute your Apache Beam Pipeline on Databricks"
},
{
"code": null,
"e": 434,
"s": 250,
"text": "When we think about data-parallel pipelines, Apache Spark immediately comes to mind, but there are also promising and fresher models able to achieve the same results and performances."
},
{
"code": null,
"e": 780,
"s": 434,
"text": "This is the case of Apache Beam, an open source, unified model for defining both batch and streaming data-parallel processing pipelines. It gives the possibility to define data pipelines in a handy way, using as runtime one of its distributed processing back-ends (Apache Apex, Apache Flink, Apache Spark, Google Cloud Dataflow and many others)."
},
{
"code": null,
"e": 1049,
"s": 780,
"text": "Apache Beam’s great capabilities consist in an higher level of abstraction, which can prevent programmers from learning multiple frameworks. Currently, the usage of Apache Beam is mainly restricted to Google Cloud Platform and, in particular, to Google Cloud Dataflow."
},
{
"code": null,
"e": 1216,
"s": 1049,
"text": "However, when it comes to moving to other platforms, it can be tricky to find some useful references and examples that could help us running our Apache Beam pipeline."
},
{
"code": null,
"e": 1319,
"s": 1216,
"text": "That’s why I would like to tell you my experience on how to run an Apache Beam Pipeline on Databricks."
},
{
"code": null,
"e": 1395,
"s": 1319,
"text": "Note: the code of this walk-through is available at this Github repository."
},
{
"code": null,
"e": 1535,
"s": 1395,
"text": "I decided to start off from official Apache Beam’s Wordcount example and change few details in order to execute our pipeline on Databricks."
},
{
"code": null,
"e": 1887,
"s": 1535,
"text": "The official code simply reads a public text file from Google Cloud Storage, performs a word count on the input text and writes the output to a given path. In order to simplify this process, we will replace these operations by simply reading the input text from an in-code mocked string, finally printing the word count results to the standard output."
},
{
"code": null,
"e": 1931,
"s": 1887,
"text": "The input string will be defined as a List:"
},
{
"code": null,
"e": 2004,
"s": 1931,
"text": "We then create a simple Beam custom DoFn Transform to print our results:"
},
{
"code": null,
"e": 2044,
"s": 2004,
"text": "Our final pipeline will look like this:"
},
{
"code": null,
"e": 2191,
"s": 2044,
"text": "We now have a working Beam Pipeline that can be executed in local mode. If you try to run it, you should see this printed to your standard output:"
},
{
"code": null,
"e": 3648,
"s": 2191,
"text": "20/06/01 13:14:13 INFO transforms.PrintFN: against: 120/06/01 13:14:13 INFO transforms.PrintFN: and: 120/06/01 13:14:13 INFO transforms.PrintFN: of: 220/06/01 13:14:13 INFO transforms.PrintFN: troubles: 120/06/01 13:14:13 INFO transforms.PrintFN: nobler: 120/06/01 13:14:13 INFO transforms.PrintFN: arrows: 120/06/01 13:14:13 INFO transforms.PrintFN: suffer: 120/06/01 13:14:13 INFO transforms.PrintFN: sea: 120/06/01 13:14:13 INFO transforms.PrintFN: The: 120/06/01 13:14:13 INFO transforms.PrintFN: Or: 120/06/01 13:14:13 INFO transforms.PrintFN: not: 120/06/01 13:14:13 INFO transforms.PrintFN: slings: 120/06/01 13:14:13 INFO transforms.PrintFN: that: 120/06/01 13:14:13 INFO transforms.PrintFN: is: 120/06/01 13:14:13 INFO transforms.PrintFN: arms: 120/06/01 13:14:13 INFO transforms.PrintFN: Whether: 120/06/01 13:14:13 INFO transforms.PrintFN: a: 120/06/01 13:14:13 INFO transforms.PrintFN: fortune: 120/06/01 13:14:13 INFO transforms.PrintFN: take: 120/06/01 13:14:13 INFO transforms.PrintFN: question: 120/06/01 13:14:13 INFO transforms.PrintFN: To: 120/06/01 13:14:13 INFO transforms.PrintFN: mind: 120/06/01 13:14:13 INFO transforms.PrintFN: to: 320/06/01 13:14:13 INFO transforms.PrintFN: outrageous: 120/06/01 13:14:13 INFO transforms.PrintFN: or: 120/06/01 13:14:13 INFO transforms.PrintFN: tis: 120/06/01 13:14:13 INFO transforms.PrintFN: in: 120/06/01 13:14:13 INFO transforms.PrintFN: the: 220/06/01 13:14:13 INFO transforms.PrintFN: be: 2"
},
{
"code": null,
"e": 4179,
"s": 3648,
"text": "Now we would like to execute our pipeline on our Databricks instance. To achieve this, we need to modify few more things in our code. First of all, we modify our WordCountOptions, which has to extend the SparkContextOptions class. These Beam options are necessary in order to manipulate Beam’s SparkContext. A Databricks cluster has its own SparkContext which is crucial to retrieve, in order to scale out the application. Once we retrieve the SparkContext we can directly inject it into Beam’s SparkContextOptions as shown below:"
},
{
"code": null,
"e": 4506,
"s": 4179,
"text": "With this final version of our Beam code, we are now ready to launch our Databricks workspace in Azure and to proceed by creating a new Job. We package our project into a fat jar (in this example, I will be using the standard maven life cycle to package my application) and we upload it to our Job by clicking on “Upload Jar”."
},
{
"code": null,
"e": 4709,
"s": 4506,
"text": "Notice that if you have any Spark dependencies in your pom.xml file, remember to mark them as “provided”, since our Databricks cluster will provide them to our application through the execution context."
},
{
"code": null,
"e": 4825,
"s": 4709,
"text": "After specifying the main class, it’s essential to use these parameters that will be parsed by SparkContextOptions:"
},
{
"code": null,
"e": 4873,
"s": 4825,
"text": "--runner=SparkRunner --usesProvidedSparkContext"
},
{
"code": null,
"e": 4963,
"s": 4873,
"text": "Finally, we can setup the cluster that will be associated to our Job by clicking on edit:"
},
{
"code": null,
"e": 5178,
"s": 4963,
"text": "In this way, we define a “New Automated Cluster” with 2 workers on Databricks runtime version 6.4. If you prefer, you can also create an “Interactive Cluster”, which gives you more control over cluster’s execution."
},
{
"code": null,
"e": 5299,
"s": 5178,
"text": "Now, we’re good to go! If your job looks similar to the one below, just click on “run now” and wait for its termination."
},
{
"code": null,
"e": 5563,
"s": 5299,
"text": "Please note that if you have written your Beam pipeline in python the procedure to make it work on Databricks should look more or less the same: just remember to inject Databricks’ SparkContext into Beam and execute your Pipeline with the right set of parameters."
},
{
"code": null,
"e": 5734,
"s": 5563,
"text": "I hope you liked my walk-through on how to run an Apache Beam pipeline on Azure Databricks and feel free to contact me if you find out more useful insights on this topic!"
}
] |
Python program to find common elements in three lists using sets
|
Given three user-input lists, our task is to find the common elements of the three lists. Here we apply the set intersection method.
Input
A=[2, 3, 4, 5, 6]
B=[2, 3, 7, 6, 90]
C=[2, 3, 45, 34]
Common elements=[2, 3]
Step1: Input the elements of the three lists.
Step2: Use the intersection method: first convert the lists to sets, then intersect two of the sets to find their common elements, and finally intersect the result with the third set.
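The core of Step2 can be sketched in a couple of lines with the sample lists from the Input section above, before the full interactive program:

```python
A = [2, 3, 4, 5, 6]
B = [2, 3, 7, 6, 90]
C = [2, 3, 45, 34]

# Convert to sets, intersect the first two, then intersect with the third.
common = sorted(set(A).intersection(B).intersection(C))
print(common)  # [2, 3]
```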
def common_ele(my_A, my_B, my_C):
    my_s1 = set(my_A)
    my_s2 = set(my_B)
    my_s3 = set(my_C)
    my_set1 = my_s1.intersection(my_s2)
    output_set = my_set1.intersection(my_s3)
    output_list = list(output_set)
    print(output_list)

if __name__ == '__main__':
    # First List
    A = list()
    n = int(input("Enter the size of the List"))
    print("Enter the number")
    for i in range(int(n)):
        p = int(input("Size="))
        A.append(int(p))
        print(A)
    # Second List
    B = list()
    n1 = int(input("Enter the size of the List"))
    print("Enter the number")
    for i in range(int(n1)):
        p = int(input("Size="))
        B.append(int(p))
        print(B)
    # Third List
    C = list()
    n2 = int(input("Enter the size of the List"))
    print("Enter the number")
    for i in range(int(n2)):
        p = int(input("Size="))
        C.append(int(p))
        print(C)
        # Calling Function after each entry of the third list,
        # as reflected in the sample output below
        common_ele(A, B, C)
Enter the size of the List 3
Enter the number
Size= 2
[2]
Size= 1
[2, 1]
Size= 2
[2, 1, 2]
Enter the size of the List 3
Enter the number
Size= 2
[2]
Size= 1
[2, 1]
Size= 4
[2, 1, 4]
Enter the size of the List 4
Enter the number
Size= 3
[3]
[]
Size= 2
[3, 2]
[2]
Size= 1
[3, 2, 1]
[1, 2]
Size= 3
[3, 2, 1, 3]
[1, 2]
|
[
{
"code": null,
"e": 1198,
"s": 1062,
"text": "Given three user input lists, our task is to find out common elements from these three lists. Here we are applying intersection method."
},
{
"code": null,
"e": 1282,
"s": 1198,
"text": "Input\nA=[2, 3, 4, 5, 6]\nB=[2, 3, 7, 6, 90]\nC=[2, 3, 45, 34]\nCommon elements=[2, 3]\n"
},
{
"code": null,
"e": 1501,
"s": 1282,
"text": "Step1: input the elements of three lists.\nStep2: Use intersection method, first convert lists to sets then apply intersection method of two sets and find out common elements then this set intersect with the third set.\n"
},
{
"code": null,
"e": 2347,
"s": 1501,
"text": "def common_ele(my_A, my_B, my_C):\n my_s1 = set(my_A)\n my_s2 = set(my_B)\n my_s3 = set(my_C)\n my_set1 = my_s1.intersection(my_s2)\n output_set = my_set1.intersection(my_s3)\n output_list = list(output_set)\n print(output_list)\nif __name__ == '__main__' :\n# First List\nA=list()\nn=int(input(\"Enter the size of the List\"))\nprint(\"Enter the number\")\nfor i in range(int(n)):\n p=int(input(\"Size=\"))\n A.append(int(p))\n print (A)\n # Second List\n B=list()\nn1=int(input(\"Enter the size of the List\"))\nprint(\"Enter the number\")\nfor i in range(int(n1)):\n p=int(input(\"Size=\"))\n B.append(int(p))\n print (B)\n # Third Array\n C=list()\nn2=int(input(\"Enter the size of the List\"))\nprint(\"Enter the number\")\nfor i in range(int(n2)):\n p=int(input(\"Size=\"))\n C.append(int(p))\n print (C)\n # Calling Function\n common_ele(A, B, C)"
},
{
"code": null,
"e": 2663,
"s": 2347,
"text": "Enter the size of the List 3\nEnter the number\nSize= 2\n[2]\nSize= 1\n[2, 1]\nSize= 2\n[2, 1, 2]\nEnter the size of the List 3\nEnter the number\nSize= 2\n[2]\nSize= 1\n[2, 1]\nSize= 4\n[2, 1, 4]\nEnter the size of the List 4\nEnter the number\nSize= 3\n[3]\n[]\nSize= 2\n[3, 2]\n[2]\nSize= 1\n[3, 2, 1]\n[1, 2]\nSize= 3\n[3, 2, 1, 3]\n[1, 2]\n"
}
] |
How to work with MySQL in Lua Programming?
|
Lua provides different libraries that one can use to work with MySQL. The most popular framework that enables us to work with MySQL in Lua is LuaSQL.
LuaSQL is a simple interface from Lua to a DBMS. It enables a Lua program to −
Connect to ODBC, ADO, Oracle, MySQL, SQLite, Firebird and PostgreSQL databases;
Execute arbitrary SQL statements;
Retrieve results in a row-by-row cursor fashion.
You can download MySQL with the help of this command −
luarocks install luasql-mysql
In order for the following examples to work as expected, we need the initial db setup. The assumptions are listed below.
You have installed and set up MySQL with the default user as root and password as '123456'.
You have created a database test.
We can use a simple require statement to import the mysql library, assuming that your LuaSQL installation was done correctly.
mysql = require "luasql.mysql"
The variable mysql will provide access to the functions by referring to the main mysql table.
We can set up the connection by initiating a MySQL environment and then creating a connection for the environment. It is shown below.
-- create environment object
env = assert (mysql.mysql())
-- connect to data source
con = assert (env:connect("test","root","123456"))
The above code creates a MySQL environment and establishes a connection to the existing test database with the given credentials.
The code below is the complete program: it establishes the connection and then traverses the table inside the MySQL database.
Consider the code shown below −
-- load driver
local driver = require "luasql.mysql"
-- create environment object
env = assert (driver.mysql())
-- connect to data source
con = assert (env:connect("test","root","123456"))
-- reset our table
res = con:execute"DROP TABLE people"
res = assert (con:execute[[
   CREATE TABLE people(
      name varchar(50),
      email varchar(50)
   )
]])
-- add a few elements
list = {
   { name="Mukul Latiyan", email="immukul@protonmail.com", },
   { name="Manoel Joaquim", email="manoel@cafundo.com", },
   { name="Rahul", email="rahul@protonmail.com", },
}
for i, p in pairs (list) do
   res = assert (con:execute(string.format([[
      INSERT INTO people
      VALUES ('%s', '%s')]], p.name, p.email)
   ))
end
-- retrieve a cursor
cur = assert (con:execute"SELECT name, email from people")
-- print all rows, the rows will be indexed by field names
row = cur:fetch ({}, "a")
while row do
   print(string.format("Name: %s, E-mail: %s", row.name, row.email))
   -- reusing the table of results
   row = cur:fetch (row, "a")
end
-- close everything
cur:close() -- already closed because all the result set was consumed
con:close()
env:close()
Name: Mukul Latiyan, E-mail: immukul@protonmail.com
Name: Manoel Joaquim, E-mail: manoel@cafundo.com
Name: Rahul, E-mail: rahul@protonmail.com
|
[
{
"code": null,
"e": 1217,
"s": 1062,
"text": "Lua provides different libraries that once can be used to work with MySQL. The most popular framework that enables us to work with MySQL in Lua is LuaSQL."
},
{
"code": null,
"e": 1296,
"s": 1217,
"text": "LuaSQL is a simple interface from Lua to a DBMS. It enables a Lua program to −"
},
{
"code": null,
"e": 1376,
"s": 1296,
"text": "Connect to ODBC, ADO, Oracle, MySQL, SQLite, Firebird and PostgreSQL databases;"
},
{
"code": null,
"e": 1410,
"s": 1376,
"text": "Execute arbitrary SQL statements;"
},
{
"code": null,
"e": 1459,
"s": 1410,
"text": "Retrieve results in a row-by-row cursor fashion."
},
{
"code": null,
"e": 1514,
"s": 1459,
"text": "You can download MySQL with the help of this command −"
},
{
"code": null,
"e": 1544,
"s": 1514,
"text": "luarocks install luasql-mysql"
},
{
"code": null,
"e": 1668,
"s": 1544,
"text": "In order to use the following examples to work as expected, we need the initial db setup. The assumptions are listed below."
},
{
"code": null,
"e": 1760,
"s": 1668,
"text": "You have installed and set up MySQL with the default user as root and password as '123456'."
},
{
"code": null,
"e": 1794,
"s": 1760,
"text": "You have created a database test."
},
{
"code": null,
"e": 1919,
"s": 1794,
"text": "We can use a simple require statement to import the sqlite library assuming that your Lua implementation was done correctly."
},
{
"code": null,
"e": 1950,
"s": 1919,
"text": "mysql = require \"luasql.mysql\""
},
{
"code": null,
"e": 2044,
"s": 1950,
"text": "The variable mysql will provide access to the functions by referring to the main mysql table."
},
{
"code": null,
"e": 2178,
"s": 2044,
"text": "We can set up the connection by initiating a MySQL environment and then creating a connection for the environment. It is shown below."
},
{
"code": null,
"e": 2307,
"s": 2178,
"text": "create environment object\nenv = assert (mysql.mysql())\nconnect to data source\ncon = assert (env:connect(\"test\",\"root\",\"123456\"))"
},
{
"code": null,
"e": 2425,
"s": 2307,
"text": "The above connection will connect to an existing MySQL file and establish the connection with the newly created file."
},
{
"code": null,
"e": 2558,
"s": 2425,
"text": "Below code is the complete code that establishes the connection and then traverses over the table present inside the MySQL database."
},
{
"code": null,
"e": 2590,
"s": 2558,
"text": "Consider the code shown below −"
},
{
"code": null,
"e": 3726,
"s": 2590,
"text": "-- load driver\nlocal driver = require \"luasql.mysql\"\ncreate environment object\nenv = assert (driver.mysql())\nconnect to data source\ncon = assert (env:connect(\"test\",\"root\",\"123456\"))\n-- reset our table\nres = con:execute\"DROP TABLE people\"\nres = assert (con:execute[[\n CREATE TABLE people(\n name varchar(50),\n email varchar(50)\n )\n]])\nadd a few elements list = {\n { name=\"Mukul Latiyan\", email=\"immukul@protonmail.com\", },\n { name=\"Manoel Joaquim\", email=\"manoel@cafundo.com\", },\n { name=\"Rahul\", email=\"rahul@protonmail.com\", },\n}\nfor i, p in pairs (list) do\n res = assert (con:execute(string.format([[\n INSERT INTO people\n VALUES ('%s', '%s')]], p.name, p.email)\n ))\nend\n-- retrieve a cursor\ncur = assert (con:execute\"SELECT name, email from people\")\n-- print all rows, the rows will be indexed by field names\nrow = cur:fetch ({}, \"a\")\nwhile row do\n print(string.format(\"Name: %s, E-mail: %s\", row.name, row.email))\n -- reusing the table of results\n row = cur:fetch (row, \"a\")\nend\n-- close everything\ncur:close() -- already closed because all the result set was consumed\ncon:close()\nenv:close()"
},
{
"code": null,
"e": 3869,
"s": 3726,
"text": "Name: Mukul Latiyan, E-mail: immukul@protonmail.com\nName: Manoel Joaquim, E-mail: manoel@cafundo.com\nName: Rahul, E-mail: rahul@protonmail.com"
}
] |
Character.equals() method in Java with examples - GeeksforGeeks
|
06 Dec, 2018
The java.lang.Character.equals() is a function in Java which compares this object against the specified object. The result is true if and only if the argument is not null and is a Character object that represents the same char value as this object.
Syntax:
public boolean equals(Object obj)
Parameters: The function accepts a single parameter obj which specifies the object to be compared with.
Return Value: The function returns a boolean value. It returns true if the objects are same, otherwise, it returns false.
Below programs illustrate the above method:
Program 1:
// Java program to demonstrate the function
// Character.equals() when two objects are same
import java.lang.*;

public class gfg {
    public static void main(String[] args)
    {
        // assign values to c1, c2
        Character c1 = new Character('Z');
        Character c2 = new Character('Z');

        // assign the result of equals method on c1, c2 to res
        boolean res = c1.equals(c2);

        // print res value
        System.out.println(c1 + " and " + c2
                           + " are equal is " + res);
    }
}
Z and Z are equal is true
Program 2:
// Java program to demonstrate function
// when two objects are different
import java.lang.*;

public class gfg {
    public static void main(String[] args)
    {
        // assign values to c1, c2
        Character c1 = new Character('a');
        Character c2 = new Character('A');

        // assign the result of equals
        // method on c1, c2 to res
        boolean res = c1.equals(c2);

        // prints the res value
        System.out.println(c1 + " and " + c2
                           + " are equal is " + res);
    }
}
a and A are equal is false
Reference: https://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#equals(java.lang.Object)
Java-Character
Java-Functions
Java-lang package
Java
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Initialize an ArrayList in Java
Object Oriented Programming (OOPs) Concept in Java
HashMap in Java with Examples
Interfaces in Java
How to iterate any Map in Java
ArrayList in Java
Multidimensional Arrays in Java
Stream In Java
Stack Class in Java
Singleton Class in Java
|
[
{
"code": null,
"e": 24474,
"s": 24446,
"text": "\n06 Dec, 2018"
},
{
"code": null,
"e": 24716,
"s": 24474,
"text": "The java.lang.Character.equals() is a function in Java which compares this object against the specified object. If the argument is not null then the result is true and is a Character object that represents the same char value as this object."
},
{
"code": null,
"e": 24724,
"s": 24716,
"text": "Syntax:"
},
{
"code": null,
"e": 24758,
"s": 24724,
"text": "public boolean equals(Object obj)"
},
{
"code": null,
"e": 24862,
"s": 24758,
"text": "Parameters: The function accepts a single parameter obj which specifies the object to be compared with."
},
{
"code": null,
"e": 24984,
"s": 24862,
"text": "Return Value: The function returns a boolean value. It returns true if the objects are same, otherwise, it returns false."
},
{
"code": null,
"e": 25029,
"s": 24984,
"text": "Below programs illustrates the above method:"
},
{
"code": null,
"e": 25040,
"s": 25029,
"text": "Program 1:"
},
{
"code": "// Java program to demonstrate the function// Character.equals() when two objects are sameimport java.lang.*; public class gfg { public static void main(String[] args) { // assign values to c1, c2 Character c1 = new Character('Z'); Character c2 = new Character('Z'); // assign the result of equals method on c1, c2 to res boolean res = c1.equals(c2); // print res value System.out.println(c1 + \" and \" + c2 + \" are equal is \" + res); }}",
"e": 25544,
"s": 25040,
"text": null
},
{
"code": null,
"e": 25571,
"s": 25544,
"text": "Z and Z are equal is true\n"
},
{
"code": null,
"e": 25582,
"s": 25571,
"text": "Program 2:"
},
{
"code": "// Java program to demonstrate function// when two objects are different import java.lang.*; public class gfg { public static void main(String[] args) { // assign values to c1, c2 Character c1 = new Character('a'); Character c2 = new Character('A'); // assign the result of equals // method on c1, c2 to res boolean res = c1.equals(c2); // prints the res value System.out.println(c1 + \" and \" + c2 + \" are equal is \" + res); }}",
"e": 26085,
"s": 25582,
"text": null
},
{
"code": null,
"e": 26113,
"s": 26085,
"text": "a and A are equal is false\n"
},
{
"code": null,
"e": 26216,
"s": 26113,
"text": "Reference: https://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#equals(java.lang.Object)"
},
{
"code": null,
"e": 26231,
"s": 26216,
"text": "Java-Character"
},
{
"code": null,
"e": 26246,
"s": 26231,
"text": "Java-Functions"
},
{
"code": null,
"e": 26264,
"s": 26246,
"text": "Java-lang package"
},
{
"code": null,
"e": 26269,
"s": 26264,
"text": "Java"
},
{
"code": null,
"e": 26274,
"s": 26269,
"text": "Java"
},
{
"code": null,
"e": 26372,
"s": 26274,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26404,
"s": 26372,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 26455,
"s": 26404,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 26485,
"s": 26455,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 26504,
"s": 26485,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 26535,
"s": 26504,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 26553,
"s": 26535,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 26585,
"s": 26553,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 26600,
"s": 26585,
"text": "Stream In Java"
},
{
"code": null,
"e": 26620,
"s": 26600,
"text": "Stack Class in Java"
}
] |
setjmp() and longjmp() in C
|
In this section, we will see what setjmp and longjmp are in C. The setjmp() and longjmp() functions are declared in the setjmp.h header. The syntax of these two functions is like below.
setjmp(jmp_buf buf) : saves the current position in buf and returns 0.
longjmp(jmp_buf buf, i) : goes back to the place saved in buf and makes setjmp() return i.
These are used in C for exception handling. The setjmp() call can be used like a try block, and longjmp() like a throw statement. The longjmp() call transfers control to the point that was saved by setjmp().
Here we will see how to print a number 100 times without using recursion, a loop, or macro expansion. We will use the setjmp() and longjmp() functions to do that.
#include <stdio.h>
#include <setjmp.h>
jmp_buf buf;
int main() {
   int x = 1;
   setjmp(buf); // set the jump position using buf
   printf("5"); // prints a number
   x++;
   if (x <= 100)
      longjmp(buf, 1); // jump back to the point saved by setjmp
   return 0;
}
5555555555555555555555555555555555555555555555555555555555555555555555555555
555555555555555555555555
|
[
{
"code": null,
"e": 1241,
"s": 1062,
"text": "In this section, we will see what are the setjump and longjump in C. The setjump() and longjump() is located at setjmp.h library. The syntax of these two functions is like below."
},
{
"code": null,
"e": 1387,
"s": 1241,
"text": "setjump(jmp_buf buf) : uses buf to store current position and returns 0.\nlongjump(jmp_buf buf, i) : Go back to place pointed by buf and return i."
},
{
"code": null,
"e": 1593,
"s": 1387,
"text": "These are used in C for exception handling. The setjump() can be used as try block, and longjump() can be used as throw statement. The longjump() transfers control the pointe which is pointed by setjump()."
},
{
"code": null,
"e": 1761,
"s": 1593,
"text": "Here we will see how to print a number 100 times without using recursion, loop, or macro expansion. Here we will use the setjump() and longjump() functions to do that."
},
{
"code": null,
"e": 2010,
"s": 1761,
"text": "#include <stdio.h>\n#include <setjmp.h>\njmp_buf buf;\nmain() {\n int x = 1;\n setjmp(buf); //set the jump position using buf\n printf(\"5\"); // Prints a number\n x++;\n if (x <= 100)\n longjmp(buf, 1); // Jump to the point located by setjmp\n}"
},
{
"code": null,
"e": 2112,
"s": 2010,
"text": "5555555555555555555555555555555555555555555555555555555555555555555555555555\n555555555555555555555555"
}
] |
Scanner delimiter() method in Java with Examples - GeeksforGeeks
|
10 Oct, 2018
The delimiter() method of java.util.Scanner class returns the Pattern this Scanner is currently using to match delimiters.
Syntax:
public Pattern delimiter()
Return Value: The function returns the scanner’s delimiting pattern.
Below programs illustrate the above function:
Program 1:
// Java program to illustrate the
// delimiter() method of Scanner class in Java

import java.util.*;

public class GFG1 {
    public static void main(String[] argv) throws Exception
    {
        String s = "Geeksforgeeks has Scanner Class Methods";

        // create a new scanner
        // with the specified String Object
        Scanner scanner = new Scanner(s);

        // prints the next line of the string
        System.out.println("Scanner String: \n"
                           + scanner.nextLine());

        // print the delimiter this scanner is using
        System.out.println("\nDelimiter being used in Scanner: "
                           + scanner.delimiter());

        // Close the scanner
        scanner.close();
    }
}
Scanner String:
Geeksforgeeks has Scanner Class Methods
Delimiter being used in Scanner: \p{javaWhitespace}+
Program 2:
// Java program to illustrate the
// delimiter() method of Scanner class in Java

import java.util.*;

public class GFG1 {
    public static void main(String[] argv) throws Exception
    {
        String s = "Geeksforgeeks.has.Scanner.Class.Methods";

        // create a new scanner
        // with the specified String Object
        Scanner scanner = new Scanner(s);

        // Set the delimiter to "."
        scanner.useDelimiter(".");

        // prints the next line of the string
        System.out.println("Scanner String: \n"
                           + scanner.nextLine());

        // print the delimiter this scanner is using
        System.out.println("\nDelimiter being used in Scanner: "
                           + scanner.delimiter());

        // Close the scanner
        scanner.close();
    }
}
Scanner String:
Geeksforgeeks.has.Scanner.Class.Methods
Delimiter being used in Scanner: .
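Note that nextLine() reads to the end of the line regardless of the delimiter; the delimiter only affects token-reading methods such as next(). The following sketch (the sample string "a,b,c" is made up for illustration) shows a custom delimiter actually splitting tokens, with delimiter() reporting the pattern in use:

```java
// Sketch: a custom delimiter splitting tokens, then delimiter()
// reporting the pattern in use.
import java.util.Scanner;

public class DelimiterSketch {
    public static void main(String[] args)
    {
        Scanner scanner = new Scanner("a,b,c");

        // tokens are now separated by commas instead of whitespace
        scanner.useDelimiter(",");

        while (scanner.hasNext()) {
            System.out.println(scanner.next()); // prints a, then b, then c
        }

        // prints the pattern this scanner uses: ,
        System.out.println("Delimiter: " + scanner.delimiter());

        scanner.close();
    }
}
```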
Reference: https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#delimiter()
Java - util package
Java-Functions
Java-I/O
Java
Java
Object Oriented Programming (OOPs) Concept in Java
HashMap in Java with Examples
How to iterate any Map in Java
Interfaces in Java
Initialize an ArrayList in Java
ArrayList in Java
Stack Class in Java
Multidimensional Arrays in Java
Singleton Class in Java
LinkedList in Java
|
[
{
"code": null,
"e": 24029,
"s": 24001,
"text": "\n10 Oct, 2018"
},
{
"code": null,
"e": 24151,
"s": 24029,
"text": "The delimiter() method ofjava.util.Scanner class returns the Pattern this Scanner is currently using to match delimiters."
},
{
"code": null,
"e": 24159,
"s": 24151,
"text": "Syntax:"
},
{
"code": null,
"e": 24186,
"s": 24159,
"text": "public Pattern delimiter()"
},
{
"code": null,
"e": 24255,
"s": 24186,
"text": "Return Value: The function returns the scanner’s delimiting pattern."
},
{
"code": null,
"e": 24301,
"s": 24255,
"text": "Below programs illustrate the above function:"
},
{
"code": null,
"e": 24312,
"s": 24301,
"text": "Program 1:"
},
{
"code": "// Java program to illustrate the// delimiter() method of Scanner class in Java import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { String s = \"Geeksforgeeks has Scanner Class Methods\"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); // prints the next line of the string System.out.println(\"Scanner String: \\n\" + scanner.nextLine()); // print the delimiter this scanner is using System.out.println(\"\\nDelimiter being used in Scanner: \" + scanner.delimiter()); // Close the scanner scanner.close(); }}",
"e": 25056,
"s": 24312,
"text": null
},
{
"code": null,
"e": 25168,
"s": 25056,
"text": "Scanner String: \nGeeksforgeeks has Scanner Class Methods\n\nDelimiter being used in Scanner: \\p{javaWhitespace}+\n"
},
{
"code": null,
"e": 25179,
"s": 25168,
"text": "Program 2:"
},
{
"code": "// Java program to illustrate the// delimiter() method of Scanner class in Java import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { String s = \"Geeksforgeeks.has.Scanner.Class.Methods\"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); // Set the delimiter to \".\" scanner.useDelimiter(\".\"); // prints the next line of the string System.out.println(\"Scanner String: \\n\" + scanner.nextLine()); // print the delimiter this scanner is using System.out.println(\"\\nDelimiter being used in Scanner: \" + scanner.delimiter()); // Close the scanner scanner.close(); }}",
"e": 25994,
"s": 25179,
"text": null
},
{
"code": null,
"e": 26088,
"s": 25994,
"text": "Scanner String: \nGeeksforgeeks.has.Scanner.Class.Methods\n\nDelimiter being used in Scanner: .\n"
},
{
"code": null,
"e": 26176,
"s": 26088,
"text": "Reference: https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#delimiter()"
},
{
"code": null,
"e": 26196,
"s": 26176,
"text": "Java - util package"
},
{
"code": null,
"e": 26211,
"s": 26196,
"text": "Java-Functions"
},
{
"code": null,
"e": 26220,
"s": 26211,
"text": "Java-I/O"
},
{
"code": null,
"e": 26225,
"s": 26220,
"text": "Java"
},
{
"code": null,
"e": 26230,
"s": 26225,
"text": "Java"
},
{
"code": null,
"e": 26328,
"s": 26230,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26337,
"s": 26328,
"text": "Comments"
},
{
"code": null,
"e": 26350,
"s": 26337,
"text": "Old Comments"
},
{
"code": null,
"e": 26401,
"s": 26350,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 26431,
"s": 26401,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 26462,
"s": 26431,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 26481,
"s": 26462,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 26513,
"s": 26481,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 26531,
"s": 26513,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 26551,
"s": 26531,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 26583,
"s": 26551,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 26607,
"s": 26583,
"text": "Singleton Class in Java"
}
] |
VBScript If Statement
|
An If statement consists of a Boolean expression followed by one or more statements. If the condition evaluates to True, the statements inside the If block are executed. If the condition evaluates to False, the statements after the If block are executed.
The syntax of an If statement in VBScript is −
If(boolean_expression) Then
Statement 1
.....
.....
Statement n
End If
<!DOCTYPE html>
<html>
<body>
<script language = "vbscript" type = "text/vbscript">
Dim a : a = 20
Dim b : b = 10
If a > b Then
Document.write "a is Greater than b"
End If
</script>
</body>
</html>
When the above code is executed, it produces the following result −
a is Greater than b
|
[
{
"code": null,
"e": 2338,
"s": 2080,
"text": "An If statement consists of a Boolean expression followed by one or more statements. If the condition is said to be True, the statements under If condition(s) are Executed. If the Condition is said to be False, the statements after the If loop are executed."
},
{
"code": null,
"e": 2385,
"s": 2338,
"text": "The syntax of an If statement in VBScript is −"
},
{
"code": null,
"e": 2465,
"s": 2385,
"text": "If(boolean_expression) Then\n Statement 1\n\t.....\n\t.....\n Statement n\nEnd If\n"
},
{
"code": null,
"e": 2731,
"s": 2465,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <script language = \"vbscript\" type = \"text/vbscript\">\n Dim a : a = 20\n Dim b : b = 10\n\n If a > b Then\n Document.write \"a is Greater than b\"\n End If\n\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 2799,
"s": 2731,
"text": "When the above code is executed, it produces the following result −"
},
{
"code": null,
"e": 2820,
"s": 2799,
"text": "a is Greater than b\n"
},
{
"code": null,
"e": 2853,
"s": 2820,
"text": "\n 63 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 2870,
"s": 2853,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 2877,
"s": 2870,
"text": " Print"
},
{
"code": null,
"e": 2888,
"s": 2877,
"text": " Add Notes"
}
] |
Show all columns of Pandas DataFrame in Jupyter Notebook - GeeksforGeeks
|
30 Nov, 2021
In this article, we will discuss how to show all the columns of a pandas data frame in jupyter notebook.
Pandas has a very handy pair of methods, get_option() and set_option(), with which we can customize the output display and avoid inconveniently truncated output. set_option() is used to set the value of a display option. For example, it can set the maximum number of columns and rows that should be displayed, by setting max_columns to None or to a specified number of columns.
Syntax:
Syntax: pd.set_option(‘display.max_columns’, None)
Python3
# importing pandas
import pandas as pd
# reading csv
df = pd.read_csv('data.csv')
# set the max columns to none
pd.set_option('display.max_columns', None)
Output:
If we want to change back to normal, reset_option() is used. It resets one or more options to their default values.
Syntax: pd.reset_option(‘max_columns’)
Output:
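As a quick sketch of the set/reset cycle (the default value that reset_option() restores depends on your pandas version, so it is printed rather than assumed here):

```python
import pandas as pd

# Raise the column limit so nothing is truncated.
pd.set_option('display.max_columns', None)
assert pd.get_option('display.max_columns') is None

# Restore the library default for this option.
pd.reset_option('display.max_columns')

# get_option now reports the version-dependent default again.
print(pd.get_option('display.max_columns'))
```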
Another common problem that arises while using categorical data is that we cannot see the entire categorical value. Because the maximum column width is small, only the part of the data that fits within the column width is displayed; the rest is not.
In the above example, you can see that the data is not displayed in full. To solve this, we can set max_colwidth higher.
Syntax: pd.set_option(‘display.max_colwidth’,3000)
Python3
#import pandas
import pandas as pd
# read csv
df = pd.read_csv('data.csv')
# set max_colwidth to 3000
pd.set_option('display.max_colwidth', 3000)
Output:
By applying the function, the maximum column width is set to 3000 and all the data gets displayed.
When we work with a dataset that has many columns or rows, we might find it difficult to see all of them in pandas. By default, pandas prints some of the first rows and some of the last rows, and omits the data in the middle. With datasets that have fewer rows and columns this does not affect us, but it is difficult to analyze a larger dataset without seeing all the rows and columns at once.
Python3
# importing pandas
import pandas as pd
df = pd.read_csv('data.csv')
# printing dataframe
print(df)
Output:
We can see that it does not print all the columns; the omitted middle columns are instead replaced by (.....).
get_option() – This function is used to get the current value of an option.
Syntax: pd.get_option(“display.max_columns”)
It lets us retrieve values such as the maximum number of columns displayed, the maximum number of rows displayed, and the maximum column width.
Let us see how to use them,
Python3
# importing pandas
import pandas as pd
# reading the csv
df = pd.read_csv('data.csv')
# get option to get maximum columns displayed
pd.get_option("display.max_columns")
# to get the number of columns
len(df.columns)
The total number of columns present is 25, and the maximum number of columns displayed is 20, so pandas displayed the first 10 columns and the last 10 columns and we could not see the rest. We can solve this by raising the maximum number of columns and the column width.
Python3
# importing pandas
import pandas as pd
# reading the csv
df = pd.read_csv('data.csv')
# set max columns to none
pd.set_option("display.max_columns", None)
# set colwidth hidher
pd.set_option('display.max_colwidth', 100)
Output:
Now we can see that all the columns are displayed, after changing the maximum column width to 100 and the maximum number of columns to None.
singghakshay
Picked
Python pandas-dataFrame
Python-pandas
Python
How to Install PIP on Windows ?
How To Convert Python Dictionary To JSON?
How to drop one or multiple columns in Pandas Dataframe
Check if element exists in list in Python
Python | os.path.join() method
Defaultdict in Python
Selecting rows in pandas DataFrame based on conditions
Python | Get unique values from a list
Create a directory in Python
Python | Pandas dataframe.groupby()
|
[
{
"code": null,
"e": 24364,
"s": 24333,
"text": " \n30 Nov, 2021\n"
},
{
"code": null,
"e": 24469,
"s": 24364,
"text": "In this article, we will discuss how to show all the columns of a pandas data frame in jupyter notebook."
},
{
"code": null,
"e": 24812,
"s": 24469,
"text": "Pandas have a very handy method called get option(), by this method, we can customize the output screen and work without any inconvenient form of outputs. set_option() used to set the value. This is used to set the maximum number of columns and rows that should be displayed, By setting the max_columns to None or a specified number of column"
},
{
"code": null,
"e": 24820,
"s": 24812,
"text": "Syntax:"
},
{
"code": null,
"e": 24871,
"s": 24820,
"text": "Syntax: pd.set_option(‘display.max_columns’, None)"
},
{
"code": null,
"e": 24879,
"s": 24871,
"text": "Python3"
},
{
"code": "\n\n\n\n\n\n\n# importing pandas\nimport pandas as pd\n \n# reading csv\ndf = pd.read_csv('data.csv')\n \n# set the max columns to none\npd.set_option('display.max_columns', None)\n\n\n\n\n\n",
"e": 25061,
"s": 24889,
"text": null
},
{
"code": null,
"e": 25069,
"s": 25061,
"text": "Output:"
},
{
"code": null,
"e": 25195,
"s": 25069,
"text": "If we want to change back to normal, reset_option() is used. It is used to reset one or more options to their default value."
},
{
"code": null,
"e": 25234,
"s": 25195,
"text": "Syntax: pd.reset_option(‘max_columns’)"
},
{
"code": null,
"e": 25242,
"s": 25234,
"text": "Output:"
},
{
"code": null,
"e": 25479,
"s": 25242,
"text": "Another common problem that arises while using the categorical data is, we couldn’t see the entire categorical value. Because the maximum column width is less, so the data that covers the column width is displayed. Rest is not displayed"
},
{
"code": null,
"e": 25598,
"s": 25479,
"text": "In the above example, you can see that data is not displayed enough. To Solve this we can set the max_colwidth higher."
},
{
"code": null,
"e": 25649,
"s": 25598,
"text": "Syntax: pd.set_option(‘display.max_colwidth’,3000)"
},
{
"code": null,
"e": 25657,
"s": 25649,
"text": "Python3"
},
{
"code": "\n\n\n\n\n\n\n#import pandas\nimport pandas as pd\n \n# read csv\ndf = pd.read_csv('data.csv')\n \n# set max_colwidth to 3000\npd.set_option('display.max_colwidth', 3000)\n\n\n\n\n\n",
"e": 25830,
"s": 25667,
"text": null
},
{
"code": null,
"e": 25838,
"s": 25830,
"text": "Output:"
},
{
"code": null,
"e": 25933,
"s": 25838,
"text": "By applying the function, the maximum column width is set to 3000. All the data get displayed."
},
{
"code": null,
"e": 26353,
"s": 25933,
"text": "When we work with a dataset having more columns or rows, we might find it difficult to see all the columns and rows in the pandas. The pandas by default print some of the first rows and some of the last rows. In the middle, it will omit the data. When we deal with datasets with fewer rows and columns does not affect us. But it is difficult to analyze the data without seeing all the rows and columns in a single time."
},
{
"code": null,
"e": 26361,
"s": 26353,
"text": "Python3"
},
{
"code": "\n\n\n\n\n\n\n# importing pandas\nimport pandas as pd\ndf = pd.read_csv('data.csv')\n \n# printing dataframe\nprint(df)\n\n\n\n\n\n",
"e": 26485,
"s": 26371,
"text": null
},
{
"code": null,
"e": 26493,
"s": 26485,
"text": "Output:"
},
{
"code": null,
"e": 26579,
"s": 26493,
"text": "We can see that it does not print all the columns instead, it is replaced by(.....). "
},
{
"code": null,
"e": 26636,
"s": 26579,
"text": "get_option() – This function is used to get the values,"
},
{
"code": null,
"e": 26681,
"s": 26636,
"text": "Syntax: pd.get_option(“display.max_columns”)"
},
{
"code": null,
"e": 26830,
"s": 26681,
"text": "It helps us display the values such as the maximum number of columns displayed, the maximum number of rows displayed, and the maximum column width. "
},
{
"code": null,
"e": 26858,
"s": 26830,
"text": "Let us see how to use them,"
},
{
"code": null,
"e": 26866,
"s": 26858,
"text": "Python3"
},
{
"code": "\n\n\n\n\n\n\n# importing pandas\nimport pandas as pd\n \n# reading the csv\ndf = pd.read_csv('data.csv')\n \n# get option to get maximum columns displayed\npd.get_option(\"display.max_columns\")\n \n# to get the number of columns\nlen(df.columns)\n\n\n\n\n\n",
"e": 27111,
"s": 26876,
"text": null
},
{
"code": null,
"e": 27370,
"s": 27111,
"text": "The Total number of columns present is 25, and the Maximum number of columns displayed is 20. So it displayed the first 10 columns and last 10 columns and we couldn’t see the rest of the columns. We can solve this by maximizing the column and columns’ width."
},
{
"code": null,
"e": 27378,
"s": 27370,
"text": "Python3"
},
{
"code": "\n\n\n\n\n\n\n# importing pandas\nimport pandas as pd\n \n# reading the csv\ndf = pd.read_csv('data.csv')\n \n# set max columns to none\npd.set_option(\"display.max_columns\", None)\n \n# set colwidth hidher\npd.set_option('display.max_colwidth', 100)\n\n\n\n\n\n",
"e": 27627,
"s": 27388,
"text": null
},
{
"code": null,
"e": 27635,
"s": 27627,
"text": "Output:"
},
{
"code": null,
"e": 27752,
"s": 27635,
"text": "Now, we can see all the columns are displayed by changing the column width to 100 and the Number of columns to None."
},
{
"code": null,
"e": 27765,
"s": 27752,
"text": "singghakshay"
},
{
"code": null,
"e": 27774,
"s": 27765,
"text": "\nPicked\n"
},
{
"code": null,
"e": 27800,
"s": 27774,
"text": "\nPython pandas-dataFrame\n"
},
{
"code": null,
"e": 27816,
"s": 27800,
"text": "\nPython-pandas\n"
},
{
"code": null,
"e": 27825,
"s": 27816,
"text": "\nPython\n"
},
{
"code": null,
"e": 28030,
"s": 27825,
"text": "Writing code in comment? \n Please use ide.geeksforgeeks.org, \n generate link and share the link here.\n "
},
{
"code": null,
"e": 28062,
"s": 28030,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28104,
"s": 28062,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 28160,
"s": 28104,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 28202,
"s": 28160,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 28233,
"s": 28202,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 28255,
"s": 28233,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 28310,
"s": 28255,
"text": "Selecting rows in pandas DataFrame based on conditions"
},
{
"code": null,
"e": 28349,
"s": 28310,
"text": "Python | Get unique values from a list"
},
{
"code": null,
"e": 28378,
"s": 28349,
"text": "Create a directory in Python"
}
] |
Check whether K-th bit is set or not - GeeksforGeeks
|
09 Jun, 2021
Given a number n, check if the Kth bit of n is set or not.
Examples:
Input : n = 5, k = 1
Output : SET
5 is represented as 101 in binary and has its first bit set.
Input : n = 2, k = 3
Output : NOT SET
2 is represented as 10 in binary; all higher bits (i.e. beyond the MSB) are NOT SET.
Method 1 (Using Left Shift Operator) Below are simple steps to find the value of Kth bit:
1) Left shift given number 1 by k-1 to create a number that has only set bit as k-th bit.
temp = 1 << (k-1)
2) If bitwise AND of n and temp is non-zero, then result is SET else result is NOT SET.
Example:
n = 75 and k = 4
temp = 1 << (k-1) = 1 << 3 = 8
Binary Representation of temp = 0..00001000
Binary Representation of n = 0..01001011
Since bitwise AND of n and temp is non-zero, result is SET.
C++
Java
Python3
C#
PHP
Javascript
// CPP program to check if k-th bit
// of a given number is set or not
#include <iostream>
using namespace std;

void isKthBitSet(int n, int k)
{
    if (n & (1 << (k - 1)))
        cout << "SET";
    else
        cout << "NOT SET";
}

// Driver code
int main()
{
    int n = 5, k = 1;
    isKthBitSet(n, k);
    return 0;
}

// Java program to check if k-th bit
// of a given number is set or not
class Number {
    public static void isKthBitSet(int n, int k)
    {
        if ((n & (1 << (k - 1))) > 0)
            System.out.print("SET");
        else
            System.out.print("NOT SET");
    }

    // driver code
    public static void main(String[] args)
    {
        int n = 5, k = 1;
        isKthBitSet(n, k);
    }
}

// This code is contributed by rishabh_jain

# Python3 code to check if k-th bit
# of a given number is set or not

def isKthBitSet(n, k):
    if n & (1 << (k - 1)):
        print("SET")
    else:
        print("NOT SET")

# Driver code
n = 5
k = 1
isKthBitSet(n, k)

# This code is contributed by "Sharad_Bhardwaj".

// C# program to check if k-th bit
// of a given number is set or not.
using System;

class GFG {
    public static void isKthBitSet(int n, int k)
    {
        if ((n & (1 << (k - 1))) > 0)
            Console.Write("SET");
        else
            Console.Write("NOT SET");
    }

    // Driver code
    public static void Main()
    {
        int n = 5, k = 1;
        isKthBitSet(n, k);
    }
}

// This code is contributed by nitin mittal.

<?php
// PHP program to check if
// k-th bit of a given
// number is set or not
function isKthBitSet($n, $k)
{
    if ($n & (1 << ($k - 1)))
        echo "SET";
    else
        echo "NOT SET";
}

// Driver code
$n = 5;
$k = 1;
isKthBitSet($n, $k);

// This code is contributed
// by akt_mit
?>

<script>
// Javascript program to check if k-th bit
// of a given number is set or not.
function isKthBitSet(n, k)
{
    if ((n & (1 << (k - 1))) > 0)
        document.write("SET");
    else
        document.write("NOT SET");
}

let n = 5, k = 1;
isKthBitSet(n, k);
</script>
Output:
SET
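The worked example above (n = 75, k = 4) can also be checked quickly with a short Python snippet that mirrors the two steps of Method 1:

```python
n, k = 75, 4

# Step 1: left shift 1 by k-1 to build a mask with only the k-th bit set
temp = 1 << (k - 1)
print(bin(temp))  # 0b1000

# Step 2: bitwise AND of n and temp is non-zero, so the k-th bit is SET
print("SET" if n & temp else "NOT SET")  # SET
```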
Method 2 (Using Right Shift Operator) If we right shift n by k-1, the last bit of the result is 1 if the Kth bit of n is set, else 0.
C++
Java
Python3
C#
PHP
Javascript
// CPP program to check if k-th bit
// of a given number is set or not using
// right shift operator.
#include <iostream>
using namespace std;

void isKthBitSet(int n, int k)
{
    if ((n >> (k - 1)) & 1)
        cout << "SET";
    else
        cout << "NOT SET";
}

// Driver code
int main()
{
    int n = 5, k = 1;
    isKthBitSet(n, k);
    return 0;
}

// Java program to check if
// k-th bit of a given number
// is set or not using right
// shift operator.
import java.io.*;

class GFG
{
    static void isKthBitSet(int n, int k)
    {
        if (((n >> (k - 1)) & 1) > 0)
            System.out.println("SET");
        else
            System.out.println("NOT SET");
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 5, k = 1;
        isKthBitSet(n, k);
    }
}

// This code is contributed
// by ajit

# Python3 program to check if k-th bit of
# a given number is set or not using
# right shift operator.

def isKthBitSet(n, k):
    if (n >> (k - 1)) & 1:
        print("SET")
    else:
        print("NOT SET")

# Driver code
n, k = 5, 1
isKthBitSet(n, k)

# This code contributed by
# PrinciRaj1992

// C# program to check if
// k-th bit of a given number
// is set or not using right
// shift operator
using System;

class GFG
{
    static void isKthBitSet(int n, int k)
    {
        if (((n >> (k - 1)) & 1) > 0)
            Console.WriteLine("SET");
        else
            Console.WriteLine("NOT SET");
    }

    // Driver code
    static public void Main()
    {
        int n = 5, k = 1;
        isKthBitSet(n, k);
    }
}

// This code is contributed
// by ajit

<?php
// PHP program to check
// if k-th bit of a given
// number is set or not
// using right shift operator.
function isKthBitSet($n, $k)
{
    if (($n >> ($k - 1)) & 1)
        echo "SET";
    else
        echo "NOT SET";
}

// Driver code
$n = 5;
$k = 1;
isKthBitSet($n, $k);

// This code is contributed
// by akt_mit
?>

<script>
// Javascript program to check if
// k-th bit of a given number
// is set or not using right
// shift operator.
function isKthBitSet(n, k)
{
    if (((n >> (k - 1)) & 1) > 0)
        document.write("SET");
    else
        document.write("NOT SET");
}

// Driver Code
let n = 5, k = 1;
isKthBitSet(n, k);

// This code is contributed by sanjoy_62.
</script>
Output:
SET
This article is contributed by SAKSHI TIWARI. If you like GeeksforGeeks(We know you do!) and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
nitin mittal
classyallrounder
jit_t
princiraj1992
shreyashagrawal
_merlin__
divyeshrabadiya07
sanjoy_62
Microsoft
Bit Magic
Strings
Microsoft
Strings
Bit Magic
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Cyclic Redundancy Check and Modulo-2 Division
Little and Big Endian Mystery
Program to find whether a given number is power of 2
Binary representation of a given number
Bits manipulation (Important tactics)
Reverse a string in Java
Write a program to reverse an array or string
Longest Common Subsequence | DP-4
Write a program to print all permutations of a given string
C++ Data Types
|
[
{
"code": null,
"e": 24654,
"s": 24626,
"text": "\n09 Jun, 2021"
},
{
"code": null,
"e": 24714,
"s": 24654,
"text": "Given a number n, check if the Kth bit of n is set or not. "
},
{
"code": null,
"e": 24725,
"s": 24714,
"text": "Examples: "
},
{
"code": null,
"e": 24939,
"s": 24725,
"text": "Input : n = 5, k = 1\nOutput : SET\n5 is represented as 101 in binary and has its first bit set.\n\nInput : n = 2, k = 3\nOutput : NOT SET\n2 is represented as 10 in binary, all higher i.e. beyond MSB, bits are NOT SET."
},
{
"code": null,
"e": 25030,
"s": 24939,
"text": "Method 1 (Using Left Shift Operator) Below are simple steps to find the value of Kth bit: "
},
{
"code": null,
"e": 25229,
"s": 25030,
"text": "1) Left shift given number 1 by k-1 to create a number that has only set bit as k-th bit.\n temp = 1 << (k-1)\n2) If bitwise AND of n and temp is non-zero, then result is SET else result is NOT SET."
},
{
"code": null,
"e": 25239,
"s": 25229,
"text": "Example: "
},
{
"code": null,
"e": 25440,
"s": 25239,
"text": " n = 75 and k = 4\n temp = 1 << (k-1) = 1 << 3 = 8 \n Binary Representation of temp = 0..00001000 \n Binary Representation of n = 0..01001011 \n Since bitwise AND of n and temp is non-zero, result is SET."
},
{
"code": null,
"e": 25444,
"s": 25440,
"text": "C++"
},
{
"code": null,
"e": 25449,
"s": 25444,
"text": "Java"
},
{
"code": null,
"e": 25457,
"s": 25449,
"text": "Python3"
},
{
"code": null,
"e": 25460,
"s": 25457,
"text": "C#"
},
{
"code": null,
"e": 25464,
"s": 25460,
"text": "PHP"
},
{
"code": null,
"e": 25475,
"s": 25464,
"text": "Javascript"
},
{
"code": "// CPP program to check if k-th bit// of a given number is set or not#include <iostream>using namespace std; void isKthBitSet(int n, int k){ if (n & (1 << (k - 1))) cout << \"SET\"; else cout << \"NOT SET\";} // Driver codeint main(){ int n = 5, k = 1; isKthBitSet(n, k); return 0;}",
"e": 25783,
"s": 25475,
"text": null
},
{
"code": "// Java program to check if k-th bit// of a given number is set or not class Number { public static void isKthBitSet(int n, int k) { if ((n & (1 << (k - 1))) > 0) System.out.print(\"SET\"); else System.out.print(\"NOT SET\"); } // driver code public static void main(String[] args) { int n = 5, k = 1; isKthBitSet(n, k); }} // This code is contributed by rishabh_jain",
"e": 26253,
"s": 25783,
"text": null
},
{
"code": "# Python3 code to check if k-th bit# of a given number is set or not def isKthBitSet(n, k): if n & (1 << (k - 1)): print( \"SET\") else: print(\"NOT SET\") # Driver coden = 5k = 1isKthBitSet(n, k) # This code is contributed by \"Sharad_Bhardwaj\".",
"e": 26515,
"s": 26253,
"text": null
},
{
"code": "// C# program to check if k-th bit// of a given number is set or not.using System; class GFG { public static void isKthBitSet(int n, int k) { if ((n & (1 << (k - 1))) > 0) Console.Write(\"SET\"); else Console.Write(\"NOT SET\"); } // Driver code public static void Main() { int n = 5, k = 1; isKthBitSet(n, k); }} // This code is contributed by nitin mittal.",
"e": 26978,
"s": 26515,
"text": null
},
{
"code": "<?php// PHP program to check if// k-th bit of a given// number is set or notfunction isKthBitSet($n, $k){ if ($n & (1 << ($k - 1))) echo \"SET\"; else echo \"NOT SET\";} // Driver code$n = 5; $k = 1;isKthBitSet($n, $k); // This code is contributed// by akt_mit?>",
"e": 27257,
"s": 26978,
"text": null
},
{
"code": "<script> // Javascript program to check if k-th bit // of a given number is set or not. function isKthBitSet(n, k) { if ((n & (1 << (k - 1))) > 0) document.write(\"SET\"); else document.write(\"NOT SET\"); } let n = 5, k = 1; isKthBitSet(n, k); </script>",
"e": 27612,
"s": 27257,
"text": null
},
{
"code": null,
"e": 27622,
"s": 27612,
"text": "Output: "
},
{
"code": null,
"e": 27626,
"s": 27622,
"text": "SET"
},
{
"code": null,
"e": 27746,
"s": 27626,
"text": " Method 2 (Using Right Shift Operator) If we right shift n by k-1, we get the last bit as 1 if Kth bit is set else 0. "
},
{
"code": null,
"e": 27750,
"s": 27746,
"text": "C++"
},
{
"code": null,
"e": 27755,
"s": 27750,
"text": "Java"
},
{
"code": null,
"e": 27763,
"s": 27755,
"text": "Python3"
},
{
"code": null,
"e": 27766,
"s": 27763,
"text": "C#"
},
{
"code": null,
"e": 27770,
"s": 27766,
"text": "PHP"
},
{
"code": null,
"e": 27781,
"s": 27770,
"text": "Javascript"
},
{
"code": "// CPP program to check if k-th bit// of a given number is set or not using// right shift operator.#include <iostream>using namespace std; void isKthBitSet(int n, int k){ if ((n >> (k - 1)) & 1) cout << \"SET\"; else cout << \"NOT SET\";} // Driver codeint main(){ int n = 5, k = 1; isKthBitSet(n, k); return 0;}",
"e": 28119,
"s": 27781,
"text": null
},
{
"code": "// Java program to check if// k-th bit of a given number// is set or not using right// shift operator.import java.io.*; class GFG{static void isKthBitSet(int n, int k){ if (((n >> (k - 1)) & 1) > 0) System.out.println(\"SET\"); else System.out.println(\"NOT SET\");} // Driver codepublic static void main (String[] args){ int n = 5, k = 1; isKthBitSet(n, k);}} // This code is contributed// by ajit",
"e": 28577,
"s": 28119,
"text": null
},
{
"code": "# PHP program to check if k-th bit of# a given number is set or not using# right shift operator. def isKthBitSet(n, k): if ((n >> (k - 1)) and 1): print(\"SET\") else: print(\"NOT SET\") # Driver coden, k = 5, 1isKthBitSet(n, k) # This code contributed by# PrinciRaj1992",
"e": 28864,
"s": 28577,
"text": null
},
{
"code": "// C# program to check if// k-th bit of a given number// is set or not using right// shift operatorusing System; class GFG{static void isKthBitSet(int n, int k){ if (((n >> (k - 1)) & 1) > 0) Console.WriteLine(\"SET\"); else Console.WriteLine(\"NOT SET\");} // Driver codestatic public void Main (){ int n = 5, k = 1; isKthBitSet(n, k);}} // This code is contributed// by ajit",
"e": 29299,
"s": 28864,
"text": null
},
{
"code": "<?php// PHP program to check// if k-th bit of a given// number is set or not// using right shift operator. function isKthBitSet($n, $k){ if (($n >> ($k - 1)) & 1) echo \"SET\"; else echo \"NOT SET\";} // Driver code$n = 5; $k = 1;isKthBitSet($n, $k); // This code is contributed// by akt_mit?>",
"e": 29609,
"s": 29299,
"text": null
},
{
"code": "<script> // Javascript program to check if// k-th bit of a given number// is set or not using right// shift operator. function isKthBitSet(n, k){ if (((n >> (k - 1)) & 1) > 0) document.write(\"SET\"); else document.write(\"NOT SET\");} // Driver Code let n = 5, k = 1; isKthBitSet(n, k); // This code is contributed by sanjoy_62.</script>",
"e": 29985,
"s": 29609,
"text": null
},
{
"code": null,
"e": 29995,
"s": 29985,
"text": "Output: "
},
{
"code": null,
"e": 29999,
"s": 29995,
"text": "SET"
},
{
"code": null,
"e": 30836,
"s": 30001,
"text": "YouTubeGeeksforGeeks502K subscribersCheck whether K-th bit is set or not | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:000:00 / 3:39•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=006HH_izfPM\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>"
},
{
"code": null,
"e": 31275,
"s": 30836,
"text": "This article is contributed by SAKSHI TIWARI. If you like GeeksforGeeks(We know you do!) and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 31288,
"s": 31275,
"text": "nitin mittal"
},
{
"code": null,
"e": 31305,
"s": 31288,
"text": "classyallrounder"
},
{
"code": null,
"e": 31311,
"s": 31305,
"text": "jit_t"
},
{
"code": null,
"e": 31325,
"s": 31311,
"text": "princiraj1992"
},
{
"code": null,
"e": 31341,
"s": 31325,
"text": "shreyashagrawal"
},
{
"code": null,
"e": 31351,
"s": 31341,
"text": "_merlin__"
},
{
"code": null,
"e": 31369,
"s": 31351,
"text": "divyeshrabadiya07"
},
{
"code": null,
"e": 31379,
"s": 31369,
"text": "sanjoy_62"
},
{
"code": null,
"e": 31389,
"s": 31379,
"text": "Microsoft"
},
{
"code": null,
"e": 31399,
"s": 31389,
"text": "Bit Magic"
},
{
"code": null,
"e": 31407,
"s": 31399,
"text": "Strings"
},
{
"code": null,
"e": 31417,
"s": 31407,
"text": "Microsoft"
},
{
"code": null,
"e": 31425,
"s": 31417,
"text": "Strings"
},
{
"code": null,
"e": 31435,
"s": 31425,
"text": "Bit Magic"
},
{
"code": null,
"e": 31533,
"s": 31435,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31579,
"s": 31533,
"text": "Cyclic Redundancy Check and Modulo-2 Division"
},
{
"code": null,
"e": 31609,
"s": 31579,
"text": "Little and Big Endian Mystery"
},
{
"code": null,
"e": 31662,
"s": 31609,
"text": "Program to find whether a given number is power of 2"
},
{
"code": null,
"e": 31702,
"s": 31662,
"text": "Binary representation of a given number"
},
{
"code": null,
"e": 31740,
"s": 31702,
"text": "Bits manipulation (Important tactics)"
},
{
"code": null,
"e": 31765,
"s": 31740,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 31811,
"s": 31765,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 31845,
"s": 31811,
"text": "Longest Common Subsequence | DP-4"
},
{
"code": null,
"e": 31905,
"s": 31845,
"text": "Write a program to print all permutations of a given string"
}
] |
Program for Surface Area of Octahedron in C++
|
The word ‘octahedron’ is derived from the Greek, where ‘octa’ means ‘eight’ and ‘hedron’ means ‘faces’. An octahedron in geometry is a 3-D platonic (regular) solid with eight faces. Like other such figures, octahedrons also have properties, and those are −
6 polyhedron vertices
12 polyhedron edges
8 equilateral faces
Given below is the figure of octahedron
Given the side, the program must find the surface area of the octahedron, where the surface area is the total space occupied by the faces of the given figure.
To calculate the surface area of an octahedron there is a formula −
Surface Area = 2 × √3 × a²
Where, a is the side of the octahedron
Input-: side=5
Output-: 86.6025
Start
Step 1 -> declare function to find area of octahedron
double surface_area(double side)
return (2*(sqrt(3))*(side*side))
Step 2 -> In main()
Declare variable double side=5
Print surface_area(side)
Stop
#include <bits/stdc++.h>
using namespace std;
//function for surface area of octahedron
double surface_area(double side){
return (2*(sqrt(3))*(side*side));
}
int main(){
double side = 5;
cout << "Surface area of octahedron is : " << surface_area(side);
}
Surface area of octahedron is : 86.6025
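As a quick sanity check, the same formula the C++ code uses (2 · √3 · a²) can be evaluated in Python:

```python
import math

def surface_area(side):
    # Surface area of a regular octahedron: 2 * sqrt(3) * side^2
    return 2 * math.sqrt(3) * side * side

print(round(surface_area(5), 4))  # 86.6025
```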
|
[
{
"code": null,
"e": 1319,
"s": 1062,
"text": "The word ‘dodecahedron’ is derived from the Greek words where, Octa means ‘Eight’ and hedron specifies ‘faces’. octahedron in geometric is a 3-D platonic or regular solid with eight faces. Like, other figures octahedrons also have properties and that are −"
},
{
"code": null,
"e": 1341,
"s": 1319,
"text": "6 polyhedron vertices"
},
{
"code": null,
"e": 1361,
"s": 1341,
"text": "12 polyhedron edges"
},
{
"code": null,
"e": 1381,
"s": 1361,
"text": "8 equilateral faces"
},
{
"code": null,
"e": 1421,
"s": 1381,
"text": "Given below is the figure of octahedron"
},
{
"code": null,
"e": 1577,
"s": 1421,
"text": "Given with the sides, the program must find the surface area of octahedron where surface area is the total space occupied by the faces of the given figure."
},
{
"code": null,
"e": 1638,
"s": 1577,
"text": "To calculate surface area of octahedron there is a formula −"
},
{
"code": null,
"e": 1672,
"s": 1638,
"text": "Where, a is side of an Octahedron"
},
{
"code": null,
"e": 1704,
"s": 1672,
"text": "Input-: side=5\nOutput-: 86.6025"
},
{
"code": null,
"e": 1926,
"s": 1704,
"text": "Start\nStep 1 -> declare function to find area of octahedron\n double surface_area(double side)\n return (2*(sqrt(3))*(side*side))\nStep 2 -> In main()\n Declare variable double side=5\n Print surface_area(side)\nStop"
},
{
"code": null,
"e": 2190,
"s": 1926,
"text": "#include <bits/stdc++.h>\nusing namespace std;\n//function for surface area of octahedron\ndouble surface_area(double side){\n return (2*(sqrt(3))*(side*side));\n}\nint main(){\n double side = 5;\n cout << \"Surface area of octahedron is : \" << surface_area(side);\n}"
},
{
"code": null,
"e": 2230,
"s": 2190,
"text": "Surface area of octahedron is : 86.6025"
}
] |
How to write data to .csv file in Java?
|
A library named OpenCSV provides APIs to read and write data from/into a .CSV file. Here it is explained how to write the contents of a .csv file using a Java program.
<dependency>
<groupId>com.opencsv</groupId>
<artifactId>opencsv</artifactId>
<version>4.4</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.9</version>
</dependency>
The CSVWriter class of the com.opencsv package represents a simple csv writer. While instantiating this class you need to pass a Writer object representing the file, to which you want to write the data, as a parameter to its constructor.
It provides methods named writeAll() and writeNext() to write data to a .csv file.
The writeNext() method of the CSVWriter class writes the next line to the .csv file
Following Java program demonstrates how to write data to a .csv file using the writeNext() method.
import java.io.FileWriter;
import com.opencsv.CSVWriter;
public class WritingToCSV {
public static void main(String args[]) throws Exception {
//Instantiating the CSVWriter class
CSVWriter writer = new CSVWriter(new FileWriter("D://output.csv"));
//Writing data to a csv file
String line1[] = {"id", "name", "salary", "start_date", "dept"};
String line2[] = {"1", "Krishna", "2548", "2012-01-01", "IT"};
String line3[] = {"2", "Vishnu", "4522", "2013-02-26", "Operations"};
String line4[] = {"3", "Raja", "3021", "2016-10-10", "HR"};
String line5[] = {"4", "Raghav", "6988", "2012-01-01", "IT"};
//Writing data to the csv file
writer.writeNext(line1);
writer.writeNext(line2);
writer.writeNext(line3);
writer.writeNext(line4);
//Flushing data from writer to file
writer.flush();
System.out.println("Data entered");
}
}
Data entered
This method accepts a List object (of String array type) containing the contents to be written and, writes them to the .csv file at once.
Following Java program demonstrates write contents to a .csv file using the writeAll() method.
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;
import com.opencsv.CSVWriter;
public class WritingToCSV {
public static void main(String args[]) throws Exception {
//Instantiating the CSVWriter class
CSVWriter writer = new CSVWriter(new FileWriter("D://output.csv"));
//Writing data to a csv file
String line1[] = {"id", "name", "salary", "start_date", "dept"};
String line2[] = {"1", "Krishna", "2548", "2012-01-01", "IT"};
String line3[] = {"2", "Vishnu", "4522", "2013-02-26", "Operations"};
String line4[] = {"3", "Raja", "3021", "2016-10-10", "HR"};
String line5[] = {"4", "Raghav", "6988", "2012-01-01", "IT"};
//Instantiating the List Object
List list = new ArrayList();
list.add(line1);
list.add(line2);
list.add(line3);
list.add(line4);
list.add(line5);
//Writing data to the csv file
writer.writeAll(list);
writer.flush();
System.out.println("Data entered");
}
}
Data entered
|
[
{
"code": null,
"e": 1230,
"s": 1062,
"text": "A library named OpenCSV provides API’s to read and write data from/into a.CSV file. Here it is explained how to write the contents of a .csv file using a Java program."
},
{
"code": null,
"e": 1489,
"s": 1230,
"text": "<dependency>\n <groupId>com.opencsv</groupId>\n <artifactId>opencsv</artifactId>\n <version>4.4</version>\n</dependency>\n<dependency>\n <groupId>org.apache.commons</groupId>\n <artifactId>commons-lang3</artifactId>\n <version>3.9</version>\n</dependency>"
},
{
"code": null,
"e": 1727,
"s": 1489,
"text": "The CSVWriter class of the com.opencsv package represents a simple csv writer. While instantiating this class you need to pass a Writer object representing the file, to which you want to write the data, as a parameter to its constructor."
},
{
"code": null,
"e": 1810,
"s": 1727,
"text": "It provides methods named writeAll() and writeNext() to write data to a .csv file."
},
{
"code": null,
"e": 1894,
"s": 1810,
"text": "The writeNext() method of the CSVWriter class writes the next line to the .csv file"
},
{
"code": null,
"e": 1993,
"s": 1894,
"text": "Following Java program demonstrates how to write data to a .csv file using the writeNext() method."
},
{
"code": null,
"e": 2914,
"s": 1993,
"text": "import java.io.FileWriter;\nimport com.opencsv.CSVWriter;\npublic class WritingToCSV {\n public static void main(String args[]) throws Exception {\n //Instantiating the CSVWriter class\n CSVWriter writer = new CSVWriter(new FileWriter(\"D://output.csv\"));\n //Writing data to a csv file\n String line1[] = {\"id\", \"name\", \"salary\", \"start_date\", \"dept\"};\n String line2[] = {\"1\", \"Krishna\", \"2548\", \"2012-01-01\", \"IT\"};\n String line3[] = {\"2\", \"Vishnu\", \"4522\", \"2013-02-26\", \"Operations\"};\n String line4[] = {\"3\", \"Raja\", \"3021\", \"2016-10-10\", \"HR\"};\n String line5[] = {\"4\", \"Raghav\", \"6988\", \"2012-01-01\", \"IT\"};\n //Writing data to the csv file\n writer.writeNext(line1);\n writer.writeNext(line2);\n writer.writeNext(line3);\n writer.writeNext(line4);\n //Flushing data from writer to file\n writer.flush();\n System.out.println(\"Data entered\");\n }\n}"
},
{
"code": null,
"e": 2927,
"s": 2914,
"text": "Data entered"
},
{
"code": null,
"e": 3065,
"s": 2927,
"text": "This method accepts a List object (of String array type) containing the contents to be written and, writes them to the .csv file at once."
},
{
"code": null,
"e": 3160,
"s": 3065,
"text": "Following Java program demonstrates write contents to a .csv file using the writeAll() method."
},
{
"code": null,
"e": 4183,
"s": 3160,
"text": "import java.io.FileWriter;\nimport java.util.ArrayList;\nimport java.util.List;\nimport com.opencsv.CSVWriter;\npublic class WritingToCSV {\n public static void main(String args[]) throws Exception {\n //Instantiating the CSVWriter class\n CSVWriter writer = new CSVWriter(new FileWriter(\"D://output.csv\"));\n //Writing data to a csv file\n String line1[] = {\"id\", \"name\", \"salary\", \"start_date\", \"dept\"};\n String line2[] = {\"1\", \"Krishna\", \"2548\", \"2012-01-01\", \"IT\"};\n String line3[] = {\"2\", \"Vishnu\", \"4522\", \"2013-02-26\", \"Operations\"};\n String line4[] = {\"3\", \"Raja\", \"3021\", \"2016-10-10\", \"HR\"};\n String line5[] = {\"4\", \"Raghav\", \"6988\", \"2012-01-01\", \"IT\"};\n //Instantiating the List Object\n List list = new ArrayList();\n list.add(line1);\n list.add(line2);\n list.add(line3);\n list.add(line4);\n list.add(line5);\n //Writing data to the csv file\n writer.writeAll(list);\n writer.flush();\n System.out.println(\"Data entered\");\n }\n}"
},
{
"code": null,
"e": 4196,
"s": 4183,
"text": "Data entered"
}
] |
Decimal.ToString() Method in C#
|
The Decimal.ToString() method in C# is used to convert the numeric value of this instance to its equivalent string representation.
Following is the syntax −
public override string ToString ();
Let us now see an example to implement the Decimal.ToString() method −
using System;
public class Demo{
public static void Main(){
decimal d = 3444.787m;
string str = d.ToString();
Console.WriteLine("String = "+str);
}
}
This will produce the following output −
String = 3444.787
Let us see another example −
using System;
public class Demo{
public static void Main(){
decimal d = 100;
string str = d.ToString();
Console.WriteLine("String = "+str);
}
}
This will produce the following output −
String = 100
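For comparison only (a different language and library, shown as an analogy rather than as part of the C# API above), Python's decimal module performs the equivalent numeric-to-string conversion:

```python
from decimal import Decimal

d = Decimal("3444.787")
s = str(d)              # analogous to C#'s d.ToString()
print("String = " + s)  # String = 3444.787
```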
|
[
{
"code": null,
"e": 1193,
"s": 1062,
"text": "The Decimal.ToString() method in C# is used to convert the numeric value of this instance to its equivalent string representation."
},
{
"code": null,
"e": 1219,
"s": 1193,
"text": "Following is the syntax −"
},
{
"code": null,
"e": 1255,
"s": 1219,
"text": "public override string ToString ();"
},
{
"code": null,
"e": 1326,
"s": 1255,
"text": "Let us now see an example to implement the Decimal.ToString() method −"
},
{
"code": null,
"e": 1500,
"s": 1326,
"text": "using System;\npublic class Demo{\n public static void Main(){\n decimal d = 3444.787m;\n string str = d.ToString();\n Console.WriteLine(\"String = \"+str);\n }\n}"
},
{
"code": null,
"e": 1541,
"s": 1500,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1559,
"s": 1541,
"text": "String = 3444.787"
},
{
"code": null,
"e": 1588,
"s": 1559,
"text": "Let us see another example −"
},
{
"code": null,
"e": 1756,
"s": 1588,
"text": "using System;\npublic class Demo{\n public static void Main(){\n decimal d = 100;\n string str = d.ToString();\n Console.WriteLine(\"String = \"+str);\n }\n}"
},
{
"code": null,
"e": 1797,
"s": 1756,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1810,
"s": 1797,
"text": "String = 100"
}
] |
Predict malignancy in cancer tumors with your own neural network | by Javier Ideami | Towards Data Science
In part 1 of this series, we understood in depth the architecture of our neural network. In part 2, we built it using Python. We also understood in depth back-propagation and the gradient descent optimization algorithm.
In the final part 3, we will use the Wisconsin Cancer data-set. We will learn to prepare our data, run it through our network and analyze the results.
It’s time to explore the loss landscape of our network.
To switch on our network, we need some fuel, we need data.
We will use a real data-set connected to the detection of breast cancer tumors.
The data comes from The Wisconsin Cancer Data-set.
This data was gathered by the University of Wisconsin Hospitals, Madison and by Dr. William H. Wolberg.
By request of the owners of the data, we mention one of the studies linked to the data-set: O. L. Mangasarian and W. H. Wolberg: “Cancer diagnosis via linear programming”, SIAM News, Volume 23, Number 5, September 1990, pp 1 & 18.
The data, in csv format, can be downloaded from this link
At this Github link, you can access all the code and data of the project.
First, we download the data to our machine. Then, we use pandas to create a dataframe and we take a look at its first rows.
df = pd.read_csv('wisconsin-cancer-dataset.csv', header=None)
df.head(5)
A dataframe is a python data structure that allows us to work and visualize data very easily.
The first thing we need to do is to understand the structure of the data. We find key information about it on its website.
There are 699 rows in total, belonging to 699 patients.
The first column is an ID that identifies each patient.
The following 9 columns are features that express different types of information connected to the detected tumors. They represent data related to: Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, Marginal Adhesion, Single Epithelial Cell Size, Bare Nuclei, Bland Chromatin, Normal Nucleoli and Mitoses.
The last column is the class of the tumor and it has two possible values: 2 means that the tumor was found to be benign. 4 means that it was found to be malignant.
We are told as well that there are a few rows that contain missing data. The missing data is represented in the data-set with the ? character.
Out of the 699 patients in the dataset, the class distribution is: Benign: 458 (65.5%) and Malignant: 241 (34.5%)
This is useful information that allows us to achieve some conclusions.
Our objective will be to train our neural network to predict if a tumor is benign or malignant, based on the features provided by the data.
The input to the network will be made of the 9 features, the 9 columns that express different features of the tumors.
We will not make use of the first column that holds the ID of the patient.
We will eliminate from the data-set any rows that contain missing data (identified with the ? character).
In binary classification scenarios, It’s good to have a good percentage of data from both classes. We have a 65%-35% distribution, which is good enough.
The benign and malignant classes are identified with the digits 2 and 4. The last layer of our network outputs values between 0 and 1 through its Sigmoid function. Furthermore, neural networks tend to work better when data is set in that range, from 0 to 1. We will therefore change the values of the class columns to hold a 0 instead of a 2 for benign cases and a 1 instead of a 4 for malignant cases. (we could also scale the output of the Sigmoid instead).
We proceed to do these changes. First, we change the class values (at the column number 10) from 2 to 0 and from 4 to 1
df.iloc[:,10].replace(2, 0, inplace=True)
df.iloc[:,10].replace(4, 1, inplace=True)
Then we proceed to eliminate all rows that hold missing values (represented by the ? character) at column 6, which we have identified as the column that holds them.
df = df[~df[6].isin(['?'])]
The ‘?’ character causes Python to interpret column 6 as made of strings. Other columns are made of integers. We set the entire dataframe to be interpreted as made of float numbers. This helps our network perform complex computations.
df = df.astype(float)
Next, let’s deal with the range of the values in our data. Notice how the data within the 9 features is made of numbers that go beyond the 0 to 1 range. Real data-sets are often messy and come with a great diversity of range in their values: negative numbers, huge range differences within columns, etc.
That’s why data normalization is a key first step within the feature engineering phase of deep learning processes.
Normalizing the data means preparing it in a way that is easier for the network to digest. We are helping the network converge easier and faster to that minima we are looking for. Typically, neural networks respond well to numerical data set in the 0 to 1 range, and also to data that has a mean of 0 and a standard deviation of 1.
Feature engineering and normalization are not the focus of this article but let’s quickly mention a couple of methods within this phase of the feature engineering process:
An example of a normalization method would be re-scaling our data to fit within the 0 to 1 range by applying the min-max method to each feature column. new_x= (x- min_x)/(max_x- min_x)
We can also apply standardization, which centers the values of each feature column, setting it to have a mean of 0 and a standard deviation of 1. new_x = (x- mean)/std.dev
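As a quick sketch of both techniques (pure Python, using made-up toy values rather than the actual Wisconsin columns):

```python
import math

def min_max(col):
    # Re-scale a feature column into the 0-1 range: (x - min) / (max - min)
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def standardize(col):
    # Center to mean 0 and scale to standard deviation 1: (x - mean) / std
    mean = sum(col) / len(col)
    std = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col))
    return [(x - mean) / std for x in col]

clump_thickness = [5.0, 3.0, 6.0, 4.0, 8.0]   # hypothetical sample values
scaled = min_max(clump_thickness)             # all values now in [0, 1]
centered = standardize(clump_thickness)       # mean 0, std 1
```

sklearn's MinMaxScaler, used below, applies exactly this min-max re-scaling column by column.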
Some data-sets and scenarios will benefit more than others from each of these techniques. In our case and after some tests, we decide to apply min-max normalization using the sklearn library:
names = df.columns[0:10]
scaler = MinMaxScaler()
scaled_df = scaler.fit_transform(df.iloc[:,0:10])
scaled_df = pd.DataFrame(scaled_df, columns=names)
Let’s take a look at the same 15 rows after all these changes.
After the changes we have 683 rows. 16 that held missing data have been eliminated.
All the columns are now made of float numbers and their values are normalized between 0 and 1. (Column 0, the IDs, will be ignored when we build the training set later).
The final column, the class, is now using a value of 0 for benign tumors and of 1 for malignant ones.
Notice that we are not normalizing the class column because it already holds values in the 0 to 1 range and its values should remain set to 0 or 1.
And notice that the final column, the one we will use as our target, doesn’t need to be a float. It can be an integer because our output can only be 1 or 0. (when we train the network, we will pick that column from the original df dataframe, where it is set to 0 or 1).
Therefore, our scaled_df dataframe contains all the normalized columns, and we will pick the class column from the df dataframe, the non normalized version of the data-set.
The process could continue as we explore more the data.
Are all the 9 features essential? do we want to include them all in the training process?
Do we have enough quality data to produce good training, validation and test sets? (more about this later)
Studying the data, do we find any meaningful and useful insights that may help us train the network more efficiently?
These and more are part of the feature engineering process that takes place before the training begins.
Another useful thing to do is to build charts to analyze the data in different ways. The myplotlib python library helps us study the data through different kinds of graphs.
We first combine the normalized columns we want to study with the class column, and then begin to explore.
scaled_df[10] = df[10]
scaled_df.iloc[0:13,1:11].plot.bar();
scaled_df.iloc[0:13,1:11].plot.hist(alpha=0.5)
Explore the full range of visualization options offered by Panda with this link
To speed up the article, in our case we conclude that the 9 features are useful. Our objective is to predict the “class” column with precision.
So, how complex is the function that describes the connection between our 683 samples and their outputs?
The relationship between the 9 features and the output is clearly multi-dimensional and non-linear.
Let’s run the data through the network and see what happens.
Before we do that, we need to consider a key topic:
If we use the 683 samples, all our samples, to create our training set and we get good results, we will have to face a key question.
What if the network sets its weights in a way that matches perfectly the samples used during training, and yet fails to generalize to new samples outside the training set?
Is our final objective achieving great accuracy when using the training data, or when using data it has never seen before? Obviously, the second case.
This is the reason why the deep learning practitioner typically takes into account three kinds of data-sets:
Training set: the data you use to train the network. It contains the input features and the target labels.
Validation set: a separate, different batch of data, which should ideally come from the same distribution than the training set. You will use it to verify the quality of the training. The validation set has as well target labels.
Test set: another separate batch of data, used to test the network with fresh related data that ideally comes from the same distribution than the validation set. Typically, the test set doesn’t come with target labels.
The size of the different sets in relation to each other is another topic that would take some time to describe. For our purposes, consider that most of the data forms the training set and a small percentage of it is typically extracted (and eliminated from the training set) to become the validation set.
20% is a typical number that is often chosen as the percentage of the data that will form our validation set.
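A minimal sketch of such an 80/20 split (pure Python; later in this article we will simply slice the first 500 rows, but shuffling first, as below, is the more general approach — the row count here is our data-set's 683):

```python
import random

def train_val_split(rows, val_fraction=0.2, seed=0):
    # Shuffle once so both sets come from the same distribution,
    # then carve off the last val_fraction of the rows as the validation set.
    rows = rows[:]                      # copy, so the caller's list is untouched
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - val_fraction))
    return rows[:cut], rows[cut:]

data = list(range(683))                 # stand-in for the 683 patient rows
train, val = train_val_split(data)      # 546 training rows, 137 validation rows
```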
To estimate the quality of the training of your network, it is useful to compare the performance of your training and validation sets:
If the loss value achieved on the validation set improves and then starts to get worse, the network is over-fitting, meaning the network has learnt a function that fits very well the training data, yet does not generalize well enough to the validation set.
The opposite of over-fitting is under-fitting, when the training performance of the network is not good enough and we obtain loss values that are too high in both the training and validation sets (and the training loss is worse than the validation loss, for example).
Ideally you want to get similar performance in both data-sets.
When we have over-fitting, we can apply regularization. Regularization is a technique that applies changes to the optimization algorithm that allow the network to generalize better. Regularization techniques include Dropout, L1 and L2 regularization, early stopping and data augmentation techniques.
In general, realize that success with the validation set is your real target. Having the network perform fantastically well with the training data serves no purpose if it fails to perform well with new data it hasn’t seen before.
So your real target is to reach a good loss value and achieve good accuracy with your validation set.
To get there, over-fitting is one of the most important issues we need to prevent, and that’s why regularization is so important. Let’s quickly and briefly recap 4 widely used regularization techniques.
Dropout: During each training pass, we randomly disable some of the hidden units of our network. This prevents the network from putting too much emphasis on any specific weight and helps the network generalize better. It is as if we were running the data through different network architectures and then averaging their impact, which helps prevent over-fitting.
L1 and L2: We add extra terms to the cost function that penalize the network when weights become too large. These techniques encourage the network to find a good balance between the loss value and the scale of the weights.
Early stopping: Over-fitting can be a consequence of training for too long. If we monitor our validation error, we can stop the training process when the validation error stops improving.
Data augmentation: Typically, more training data means a better network performance, but obtaining more data is not always possible. Instead, we can augment the existing data by artificially creating variations of it. For example, in the case of images, we can apply rotations, translations, cropping and other techniques to produce new variations of them.
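As an illustration of the first of these techniques, here is a minimal sketch of inverted dropout (pure Python, toy activation values; our own small network in this series does not use it):

```python
import random

def dropout(activations, keep_prob=0.8, seed=0):
    # Inverted dropout: randomly zero out units, then scale the survivors
    # by 1/keep_prob so the expected activation stays the same.
    rng = random.Random(seed)
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]

hidden = [0.3, 0.7, 0.1, 0.9, 0.5]   # toy hidden-layer activations
dropped = dropout(hidden)            # some units zeroed, the rest re-scaled
```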
Back to our data. It’s time to pick our training and validation sets. We will select part of the 683 rows as the training set and a different part of the data-set as our validation set.
After the training, we will validate the quality of our network by running the process again through the validation set.
x = scaled_df.iloc[0:500,1:10].values.transpose()
y = df.iloc[0:500,10:].values.transpose()
xval = scaled_df.iloc[501:683,1:10].values.transpose()
yval = df.iloc[501:683,10:].values.transpose()
We decide to build our training set with 500 of the 683 rows, and we pick them from the normalized scaled_df dataframe. We also make sure to eliminate the first column (the IDs) and to not include the last column (the class) in the input x to the network
We declare the target output y using the class column that corresponds to the same 500 rows. We pick the class column from the original non-normalized df dataframe (as the class value should remain as a 0 or a 1).
We then select the next 183 rows for our validation set, and store them in the variables xval and yval.
We are ready. We will first train the network with the 500 rows of our x,y training set. Afterwards, we will test the trained network with the 183 rows of our xval,yval validation set, to see how well the network generalizes to data it has never seen before.
nn = dlnet(x,y)
nn.lr = 0.01
nn.dims = [9, 15, 1]
nn.gd(x, y, iter = 15000)
We declare our network, set a learning rate and the number of nodes at each layer (the input has 9 nodes because we are using 9 features, and it’s not counted as a layer of the network. The first hidden layer has 15 hidden units and the second and final layer has a single output node).
We then run the gradient descent algorithm through a few thousand iterations. Let’s get a feel of how well the network trains with a few seconds of gradient descent.
Every x iterations, we display the loss value of the network. If the training proceeds well, the loss value should decrease after every cycle.
Cost after iteration 0: 0.673967
Cost after iteration 500: 0.388928
Cost after iteration 1000: 0.231340
Cost after iteration 1500: 0.171447
Cost after iteration 2000: 0.146433
Cost after iteration 2500: 0.133993
Cost after iteration 3000: 0.126808
Cost after iteration 3500: 0.122107
Cost after iteration 12500: 0.101980
Cost after iteration 13000: 0.101604
Cost after iteration 14500: 0.100592
After a number of iterations our loss begins to stabilize at a low level. We plot a chart that follows the loss of the network through the iterations.
Our network seems to have trained quite well, reaching a low loss value (the distance between our predictions and the target outputs is low). But, how good is it? and most importantly, how good is it, not just on the whole training set, but way more important, on our validation set?
To find out, we create a new function, pred(), that runs a set of inputs through the network and then compares systematically every obtained output to its corresponding target output in order to produce an average accuracy value.
Notice below how the function studies if the prediction is above or below 0.5. We are doing binary classification and by default we consider that output values that are above 0.5 mean that the result belongs to one of the classes, and vice-versa.
In this case, because 1 is the class value for malignant tumors, we consider that outputs above 0.5 predict a malignant result, and below 0.5 the opposite. We will talk in a bit about how, when and why we would want to change this 0.5 threshold value.
def pred(self, x, y):
    self.X = x
    self.Y = y
    comp = np.zeros((1, x.shape[1]))
    pred, loss = self.forward()
    for i in range(0, pred.shape[1]):
        if pred[0,i] > 0.5:
            comp[0,i] = 1
        else:
            comp[0,i] = 0
    print("Acc: " + str(np.sum((comp == y) / x.shape[1])))
    return comp
We now proceed to compare the accuracy of the network when using the training and validation sets, by calling the pred function twice, once with our training set, and another time with our validation set.
pred_train = nn.pred(x, y)
pred_test = nn.pred(xval, yval)
And we get these 2 results.
Acc: 0.9620000000000003
Acc: 1.0
The network has an accuracy of a 96% on the training set (first 500 rows) and of 100% when using the validation set (next 183 rows).
The accuracy on the validation set is higher. This means that the network is not over-fitting and is generalizing well enough to be able to adapt to data it has never seen before.
We can now use the nn.forward() function to compare directly the first few values of the validation set output in relation to the target output:
nn.X, nn.Y = xval, yval
yvalh, loss = nn.forward()
print("\ny", np.around(yval[:,0:50,], decimals=0).astype(np.int))
print("\nyh", np.around(yvalh[:,0:50,], decimals=0).astype(np.int), "\n")
And we get
y [[0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]

yh [[0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]
Both match perfectly, because we have achieved 100% accuracy on our validation set.
Therefore, the function learnt pretty well to adapt to both the training and validation sets.
One great way to analyze the accuracy is by plotting a confusion matrix. First, we declare a custom plotting function.
def plotCf(a, b, t):
    cf = confusion_matrix(a, b)
    plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
    plt.colorbar()
    plt.title(t)
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    tick_marks = np.arange(len(set(a)))  # number of classes
    class_labels = ['0','1']
    plt.xticks(tick_marks, class_labels)
    plt.yticks(tick_marks, class_labels)
    # plotting text value inside cells
    thresh = cf.max() / 2.
    for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
        plt.text(j, i, format(cf[i,j], 'd'), horizontalalignment='center',
                 color='white' if cf[i,j] > thresh else 'black')
    plt.show();
(This custom confusion matrix function comes from this public Kaggle created by JP)
Then, we run the pred function again twice, and plot confusion matrices for both the training and validation sets.
nn.X, nn.Y = x, y
target = np.around(np.squeeze(y), decimals=0).astype(np.int)
predicted = np.around(np.squeeze(nn.pred(x,y)), decimals=0).astype(np.int)
plotCf(target, predicted, 'Cf Training Set')

nn.X, nn.Y = xval, yval
target = np.around(np.squeeze(yval), decimals=0).astype(np.int)
predicted = np.around(np.squeeze(nn.pred(xval,yval)), decimals=0).astype(np.int)
plotCf(target, predicted, 'Cf Validation Set')
We can see even more clearly that our validation set has perfect accuracy on its 183 samples. As for the training set, there are 19 mistakes among the 500 samples.
Now, at this point you may say that in a topic as delicate as diagnosing a tumor, setting our prediction to be 1 if the sigmoid output gives a value above 0.5 is not really good. The network should be really confident before giving a prediction of malignancy.
I totally agree, that’s very correct. And these are the kinds of decisions that you need to take depending on the nature of the challenge and topic you are dealing with.
Let’s then create a new variable called threshold. It will control our confidence threshold, how close to 1 the output of the network needs to be before we decide that a tumor is malignant. By default we set it to 0.5
self.threshold=0.5
Our prediction function is now updated to use that confidence threshold.
def pred(self, x, y):
    self.X = x
    self.Y = y
    comp = np.zeros((1, x.shape[1]))
    pred, loss = self.forward()
    for i in range(0, pred.shape[1]):
        if pred[0,i] > self.threshold:
            comp[0,i] = 1
        else:
            comp[0,i] = 0
    print("Acc: " + str(np.sum((comp == y) / x.shape[1])))
    return comp
Let’s now compare our results as we gradually raise the confidence threshold.
Confidence threshold: 0.5 . Output values need to be higher than 0.5 for the output to be considered malignant. As seen previously, the validation accuracy is 100%, the training one is 96%.
Confidence threshold: 0.7 . Output values need to be higher than 0.7 for the output to be considered malignant. The validation accuracy remains at 100%, the training one decreases a bit to 95%.
Confidence threshold: 0.8 . Output values need to be higher than 0.8 for the output to be considered malignant. The validation accuracy for the first time decreases very, very slightly to 99.45%. In the confusion matrix we see that 1 single sample of the 183 is not recognized correctly. The training accuracy decreases a bit more till 94.2%
Confidence threshold: 0.9. Finally, in the case of 0.9, output values need to be higher than 0.9 for the output to be considered malignant. We are looking for almost complete confidence. The validation accuracy decreases a bit more till 98.9%. In the confusion matrix we see that 2 samples of the 183 were not recognized correctly. The training accuracy decreases further till 92.6%.
Therefore, by controlling the confidence threshold, we adapt to the specific needs of our challenge.
If we want to lower the loss value related to our training set (because we are failing to recognize a small percentage of the training samples), we can try to train for longer, and also use different learning rates.
For example, if we set the learning rate to 0.07 and train for 65000 iterations, we obtain:
Cost after iteration 63500: 0.017076
Cost after iteration 64000: 0.016762
Cost after iteration 64500: 0.016443
Acc: 0.9980000000000003
Acc: 0.9945054945054945
Now, with our confidence threshold set to 0.5, the network is accurate with every sample in both sets, except with one of each.
If we raise the confidence threshold to 0.7, performance is still excellent, only 1 validation sample and 2 training samples are not predicted correctly.
Finally, if we are really demanding and set the confidence threshold to 0.9, the network fails to guess correctly 1 of the validation samples and 10 of the training ones.
Although we have done quite well, considering that we are using a basic network without regularization, it is typical for things to get much harder when you are dealing with more complex data.
Often, the loss landscape gets very complex and it’s easier to fall in the wrong local minima or fail to converge to a good enough loss.
Also, depending on the initial conditions of the network, we may converge to a good minima or we may get stuck at a plateau somewhere and fail to get out of it. It’s useful at this stage to picture again our initial animation.
Picture that landscape, full of hills and valleys, places where the loss is really high, and places where the loss gets very low. The landscape of the loss function related to a complex scenario is often not uniform (though it can be made more smooth using different methods, but that’s a whole different topic).
It’s full of hills and valleys of different depths and angles. The way you move around the landscape is by changing the loss value of the network when you run the gradient descent algorithm.
And the speed at which you move is controlled by the learning rate:
If you are moving very slowly and somehow arrive to a plateau or a valley that is not low enough, you may get stuck there.
If you move too fast, you may arrive to a low enough valley but rush through it and move away from it just as fast.
So there are some very delicate issues that have an enormous impact on how your network will perform.
The initial conditions: in what part of the landscape do you drop the ball at the beginning of the process?
The speed at which you move the ball, the learning rate.
A lot of the progress achieved recently in improving the speed with which neural networks train is connected to different techniques that dynamically manage the learning rate and also to new ways of setting those initial conditions in better ways.
Regarding the initial conditions:
Remember that each layer computes a combination of the weights and the inputs of the preceding layer (weighted sum of the inputs) and pass that computation to that layer’s activation functions.
Those activation functions have shapes that can either accelerate or stop all together the dynamics of the neurons, depending on the combination between the range of the inputs and the way they respond to that range.
If the sigmoid function, for example, receives values that trigger a result that is close to the extremes of its output range, the output of the activation function on that part of its range becomes really flat. If it stays flat for some time, the derivative, the rate of change at that point becomes zero or very small.
Recall that it is the derivative what helps us decide in what direction to move next. Therefore, if the derivative is not giving us meaningful information, it will be very difficult for the network to know in what direction to move next from that point.
It is as if you had reached a plateau in the landscape and you were really confused as to where to go next, and you just kept moving in circles around that point.
This may happen also with ReLU, although ReLU has only 1 flat side as opposed to the 2 of Sigmoid and Tanh. Leaky-ReLU is a variation of ReLU that slightly modifies that side of the function (the flat one) to try to prevent vanishing gradients.
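We can see this saturation numerically. In a minimal sketch, the sigmoid's derivative s(z)·(1−s(z)) is largest at z = 0 and collapses toward zero in the flat regions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    # Derivative of the sigmoid: s(z) * (1 - s(z))
    s = sigmoid(z)
    return s * (1.0 - s)

grad_center = sigmoid_grad(0.0)      # steepest point of the curve: 0.25
grad_saturated = sigmoid_grad(10.0)  # deep in the flat region: ~0.000045
```

With a derivative that small, the weight updates flowing back through that unit are effectively zero, which is exactly the "moving in circles" situation described above.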
It is therefore critical to set the initial values of our weights in the best way possible so that the computations of the units at the start of the training process produce outputs that fall within the best possible range of our activation functions.
That could make the whole difference between beginning at a really high hill of the loss landscape or way lower.
Managing the learning rate to prevent the training process from being too slow or too fast, and to adapt its value to the changing conditions of the process and of each parameter, is another complex challenge.
Talking about the many ways of dealing with the initial conditions and the learning rate would take a few articles. I will briefly describe some of them to give an idea of some of the methods experts use to deal with these challenges.
Xavier initialization: A way of initializing our weights so that neuron’s won’t start in a saturated state (trapped at the delicate parts of their output ranges, where derivatives cannot provide enough information for the network to know where to go next).
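A minimal sketch of one common variant of this initialization (weights drawn with variance 1/n_in; the 9-by-15 shape below matches the first layer of our network):

```python
import math
import random

def xavier_init(n_in, n_out, seed=0):
    # Draw weights with standard deviation sqrt(1/n_in), so the weighted
    # sums feeding the activations start in their responsive, non-flat range.
    rng = random.Random(seed)
    scale = math.sqrt(1.0 / n_in)
    return [[rng.gauss(0.0, scale) for _ in range(n_in)]
            for _ in range(n_out)]

W1 = xavier_init(9, 15)   # first hidden layer of our 9-15-1 network
```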
Learning rate annealing: high learning rates can push the algorithm to bypass and miss good minima at the loss landscape. A gradual decrease of the learning rate can prevent that. There are different ways to implement this decrease, including: exponential decay, step decay and 1/t decay.
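As a sketch, assuming an initial rate of 0.07 (the one we used above) and a made-up decay constant k = 0.1, the three schedules look like this at epoch 10:

```python
import math

lr0, k, epoch = 0.07, 0.1, 10

exp_decay  = lr0 * math.exp(-k * epoch)     # exponential decay: lr0 * e^(-k*t)
step_decay = lr0 * (0.5 ** (epoch // 5))    # step decay: halve the rate every 5 epochs
inv_decay  = lr0 / (1.0 + k * epoch)        # 1/t decay: lr0 / (1 + k*t)
```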
Fast.ai Lr_find(): An algorithm of the fast.ai library that finds the ideal range of values for the learning rate. Lr_find trains the model through a few iterations. It first tries to use a very low learning rate, and at each mini batch it changes the rate gradually until it reaches a very high value. The loss is recorded at each iteration and a chart helps us visualize the loss against the learning rate. We can then decide what are the optimal values of the learning rate that decrease the loss in the most efficient way.
Differential learning rates: Using different learning rates in different parts of our network.
SGDR, Stochastic Gradient Descent with Restarts: Resetting our learning rate every x iterations. This can help us get out of plateaus or local minima that are not low enough, if we get stuck in one of them. A typical process is to start with a high learning rate. You then decrease it gradually at each mini batch. After x number of Epochs you reset it back to its initial high value and the same process repeats again. The concept is that moving gradually from a high rate to a lower one makes sense because we first quickly move down from the high points of the landscape (initial high loss value) and then move slower to prevent bypassing the minima of the landscape (low loss value areas). But if we get stuck at some plateau or a valley that is not low enough, restarting our rate to a high value every x iterations will help us jump out of that situation and continue exploring the landscape.
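A minimal sketch of one common restart schedule, cosine annealing with warm restarts (the cycle length and rate bounds below are made-up values):

```python
import math

def sgdr_lr(iteration, cycle_len=100, lr_max=0.07, lr_min=0.001):
    # Cosine annealing from lr_max down to lr_min within each cycle;
    # the rate jumps back up to lr_max at every restart.
    t = (iteration % cycle_len) / cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

start = sgdr_lr(0)      # lr_max: we move fast early in the cycle
end = sgdr_lr(99)       # near lr_min: we settle into a minimum
restart = sgdr_lr(100)  # back to lr_max: a chance to jump out of a plateau
```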
1 Cycle Policy: A way of dynamically changing the learning rate proposed by Leslie N. Smith, in which we begin with a low rate value and gradually increase it until we reach a maximum. Then, we proceed to gradually decrease it till the end of the process. The initial gradual increase allows us to explore large areas of the loss landscape, increasing our chances of reaching a low area that is not bumpy; in the second part of the cycle, we settle in the low, flat area we have reached.
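A simplified sketch of the learning-rate half of this policy (the full policy also schedules momentum; the bounds and cycle length here are made-up):

```python
def one_cycle_lr(iteration, total=1000, lr_min=0.007, lr_max=0.07):
    # Linear warm-up from lr_min to lr_max over the first half of training,
    # then a linear cool-down back to lr_min over the second half.
    half = total / 2
    if iteration <= half:
        return lr_min + (lr_max - lr_min) * (iteration / half)
    return lr_max - (lr_max - lr_min) * ((iteration - half) / half)

peak = one_cycle_lr(500)    # top of the cycle: explore the landscape
final = one_cycle_lr(1000)  # end of training: settle into the minimum
```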
Momentum: A variation of stochastic gradient descent that helps accelerate the path through the loss landscape while keeping the overall direction controlled. Recall that SGD can be noisy. Momentum averages the changes in the path, smooths that path and accelerates the movement towards the goal.
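A minimal sketch of one common (exponential-moving-average) formulation of the momentum update, for a single weight:

```python
def momentum_step(w, v, grad, lr=0.01, beta=0.9):
    # Average recent gradients into a velocity term, then move along it:
    # noisy per-step directions are smoothed while the overall trend accelerates.
    v = beta * v + (1 - beta) * grad
    w = w - lr * v
    return w, v

w, v = 1.0, 0.0
for _ in range(3):              # three steps with a constant gradient of 2.0
    w, v = momentum_step(w, v, grad=2.0)
# the velocity builds up toward the raw gradient as consistent steps accumulate
```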
Adaptive learning rates: Methods that calculate and use different learning rates for different parameters of the network.
AdaGrad (Adaptive Gradient Algorithm): Connecting with the previous point, AdaGrad is a variation of SGD that instead of using a single learning rate for all the parameters, uses a different rate for each parameter.
Root Mean Square Propagation (RMSProp): Like Adagrad, RMSProp uses different learning rates for each parameter, and adapts those rates depending on the average of how fast they are changing (this helps when dealing with noisy contexts).
Adam: It combines some aspects of RMSprop and SGDR with momentum. Like RMSprop, it uses squared gradients to scale the learning rate, and it also uses the average of the gradient to make use of momentum.
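A minimal single-weight sketch of the Adam update, combining exactly those two ingredients (an EMA of the gradient for momentum, and an RMSprop-style EMA of its square, both bias-corrected by the step count t):

```python
import math

def adam_step(w, m, v, grad, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # momentum: EMA of the gradient
    v = b2 * v + (1 - b2) * grad ** 2      # RMSprop: EMA of the squared gradient
    m_hat = m / (1 - b1 ** t)              # bias correction for the first steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, m, v, grad=2.0, t=1)   # first step moves w by roughly lr
```

Notice that on the first step the update size is roughly the learning rate regardless of the gradient's scale, which is part of why Adam tends to need little tuning.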
If you are new to all these names, don’t get overwhelmed. Behind most of them are the very same roots: back-propagation and gradient descent.
Also, a lot of these methods are selected automatically for you within modern frameworks such as the fast.ai library. It is though really useful to understand how they work, as you are then in a better position to take your own decisions and even to research and test different variations and options.
When we understand the core of the network, the basic back-propagation algorithm and the basic gradient descent process, we have more options to explore and experiment whenever we face hard challenges.
Because we understand the process, we realize for example that in deep learning, the initial place where we drop the ball within the loss landscape is key.
Some initial positions will soon push the ball (the training process) to get stuck in some part of the landscape. Others will quickly drive us to a good minima.
When the mystery function becomes more complex, it is the time to incorporate some of the advanced solutions I mentioned earlier. It is also time to study in more depth the architecture of the entire network and to go deeper into the different hyper-parameters.
The shape of our loss landscape is very much influenced by the design of the architecture of our networks as well as hyper-parameters like the learning rate, the size of our batches, the optimizer algorithm we use, etc.
For a discussion about those influences, check the paper: Visualizing the Loss Landscape of Neural Nets by Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein.
A very interesting point coming out of recent research is how the skip connections model in neural nets can smooth our loss landscape and make it dramatically simpler and more convex, increasing our chances to converge to a good result.
Skip connections have helped a lot to train very deep networks. Basically, skip connections are extra connections that link nodes of separate layers, skipping one or more non-linear layers in between.
As we experiment with different architectures and parameters, we are modifying our loss landscape, making it more rugged or smooth, increasing or decreasing the number of local optima. And as we optimize the way we initialize the parameters of the network, we are improving our starting position.
Let’s keep on exploring new ways to navigate the loss landscapes of the most fascinating challenges in the world.
This article covered the basics and from here, the sky is the limit!
Links to the 3 parts of this article: Part 1 | Part 2 | Part 3
Github Repository with all the code of this project
[
{
"code": null,
"e": 392,
"s": 172,
"text": "In part 1 of this series, we understood in depth the architecture of our neural network. In part 2, we built it using Python. We also understood in depth back-propagation and the gradient descent optimization algorithm."
},
{
"code": null,
"e": 543,
"s": 392,
"text": "In the final part 3, we will use the Wisconsin Cancer data-set. We will learn to prepare our data, run it through our network and analyze the results."
},
{
"code": null,
"e": 599,
"s": 543,
"text": "It’s time to explore the loss landscape of our network."
},
{
"code": null,
"e": 658,
"s": 599,
"text": "To switch on our network, we need some fuel, we need data."
},
{
"code": null,
"e": 738,
"s": 658,
"text": "We will use a real data-set connected to the detection of breast cancer tumors."
},
{
"code": null,
"e": 789,
"s": 738,
"text": "The data comes from The Wisconsin Cancer Data-set."
},
{
"code": null,
"e": 893,
"s": 789,
"text": "This data was gathered by the University of Wisconsin Hospitals, Madison and by Dr. William H. Wolberg."
},
{
"code": null,
"e": 1124,
"s": 893,
"text": "By request of the owners of the data, we mention one of the studies linked to the data-set: O. L. Mangasarian and W. H. Wolberg: “Cancer diagnosis via linear programming”, SIAM News, Volume 23, Number 5, September 1990, pp 1 & 18."
},
{
"code": null,
"e": 1182,
"s": 1124,
"text": "The data, in csv format, can be downloaded from this link"
},
{
"code": null,
"e": 1256,
"s": 1182,
"text": "At this Github link, you can access all the code and data of the project."
},
{
"code": null,
"e": 1391,
"s": 1267,
"text": "First, we download the data to our machine. Then, we use pandas to create a dataframe and we take a look at its first rows."
},
{
"code": null,
"e": 1462,
"s": 1391,
"text": "df = pd.read_csv('wisconsin-cancer-dataset.csv', header=None)\ndf.head(5)"
},
{
"code": null,
"e": 1556,
"s": 1462,
"text": "A dataframe is a Python data structure that allows us to work with and visualize data very easily."
},
{
"code": null,
"e": 1679,
"s": 1556,
"text": "The first thing we need to do is to understand the structure of the data. We find key information about it on its website."
},
{
"code": null,
"e": 1734,
"s": 1679,
"text": "There are 699 rows in total, belonging to 699 patients"
},
{
"code": null,
"e": 1790,
"s": 1734,
"text": "The first column is an ID that identifies each patient."
},
{
"code": null,
"e": 2112,
"s": 1790,
"text": "The following 9 columns are features that express different types of information connected to the detected tumors. They represent data related to: Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, Marginal Adhesion, Single Epithelial Cell Size, Bare Nuclei, Bland Chromatin, Normal Nucleoli and Mitoses."
},
{
"code": null,
"e": 2276,
"s": 2112,
"text": "The last column is the class of the tumor and it has two possible values: 2 means that the tumor was found to be benign. 4 means that it was found to be malignant."
},
{
"code": null,
"e": 2419,
"s": 2276,
"text": "We are told as well that there are a few rows that contain missing data. The missing data is represented in the data-set with the ? character."
},
{
"code": null,
"e": 2533,
"s": 2419,
"text": "Out of the 699 patients in the dataset, the class distribution is: Benign: 458 (65.5%) and Malignant: 241 (34.5%)"
},
{
"code": null,
"e": 2604,
"s": 2533,
"text": "This is useful information that allows us to achieve some conclusions."
},
{
"code": null,
"e": 2744,
"s": 2604,
"text": "Our objective will be to train our neural network to predict if a tumor is benign or malignant, based on the features provided by the data."
},
{
"code": null,
"e": 2862,
"s": 2744,
"text": "The input to the network will be made of the 9 features, the 9 columns that express different features of the tumors."
},
{
"code": null,
"e": 2937,
"s": 2862,
"text": "We will not make use of the first column that holds the ID of the patient."
},
{
"code": null,
"e": 3043,
"s": 2937,
"text": "We will eliminate from the data-set any rows that contain missing data (identified with the ? character)."
},
{
"code": null,
"e": 3196,
"s": 3043,
"text": "In binary classification scenarios, It’s good to have a good percentage of data from both classes. We have a 65%-35% distribution, which is good enough."
},
{
"code": null,
"e": 3656,
"s": 3196,
"text": "The benign and malignant classes are identified with the digits 2 and 4. The last layer of our network outputs values between 0 and 1 through its Sigmoid function. Furthermore, neural networks tend to work better when data is set in that range, from 0 to 1. We will therefore change the values of the class columns to hold a 0 instead of a 2 for benign cases and a 1 instead of a 4 for malignant cases. (we could also scale the output of the Sigmoid instead)."
},
{
"code": null,
"e": 3776,
"s": 3656,
"text": "We proceed to do these changes. First, we change the class values (at column number 10) from 2 to 0 and from 4 to 1."
},
{
"code": null,
"e": 3857,
"s": 3776,
"text": "df.iloc[:,10].replace(2, 0, inplace=True)\ndf.iloc[:,10].replace(4, 1, inplace=True)"
},
{
"code": null,
"e": 4022,
"s": 3857,
"text": "Then we proceed to eliminate all rows that hold missing values (represented by the ? character) at column 6, which we have identified as the column that holds them."
},
{
"code": null,
"e": 4050,
"s": 4022,
"text": "df = df[~df[6].isin(['?'])]"
},
{
"code": null,
"e": 4285,
"s": 4050,
"text": "The ‘?’ character causes Python to interpret column 6 as made of strings. Other columns are made of integers. We set the entire dataframe to be interpreted as made of float numbers. This helps our network perform complex computations."
},
{
"code": null,
"e": 4307,
"s": 4285,
"text": "df = df.astype(float)"
},
{
"code": null,
"e": 4611,
"s": 4307,
"text": "Next, let’s deal with the range of the values in our data. Notice how the data within the 9 features is made of numbers that go beyond the 0 to 1 range. Real data-sets are often messy and come with a great diversity of range in their values: negative numbers, huge range differences within columns, etc."
},
{
"code": null,
"e": 4726,
"s": 4611,
"text": "That’s why data normalization is a key first step within the feature engineering phase of deep learning processes."
},
{
"code": null,
"e": 5058,
"s": 4726,
"text": "Normalizing the data means preparing it in a way that is easier for the network to digest. We are helping the network converge easier and faster to that minima we are looking for. Typically, neural networks respond well to numerical data set in the 0 to 1 range, and also to data that has a mean of 0 and a standard deviation of 1."
},
{
"code": null,
"e": 5230,
"s": 5058,
"text": "Feature engineering and normalization are not the focus of this article but let’s quickly mention a couple of methods within this phase of the feature engineering process:"
},
{
"code": null,
"e": 5415,
"s": 5230,
"text": "An example of a normalization method would be re-scaling our data to fit within the 0 to 1 range by applying the min-max method to each feature column: new_x = (x - min_x) / (max_x - min_x)"
},
{
"code": null,
"e": 5587,
"s": 5415,
"text": "We can also apply standardization, which centers the values of each feature column, setting it to have a mean of 0 and a standard deviation of 1: new_x = (x - mean) / std_dev"
},
{
"code": null,
"e": 5779,
"s": 5587,
"text": "Some data-sets and scenarios will benefit more than others from each of these techniques. In our case and after some tests, we decide to apply min-max normalization using the sklearn library:"
},
{
"code": null,
"e": 5928,
"s": 5779,
"text": "names = df.columns[0:10]\nscaler = MinMaxScaler()\nscaled_df = scaler.fit_transform(df.iloc[:,0:10])\nscaled_df = pd.DataFrame(scaled_df, columns=names)"
},
{
"code": null,
"e": 5991,
"s": 5928,
"text": "Let’s take a look at the same 15 rows after all these changes."
},
{
"code": null,
"e": 6075,
"s": 5991,
"text": "After the changes we have 683 rows. 16 that held missing data have been eliminated."
},
{
"code": null,
"e": 6245,
"s": 6075,
"text": "All the columns are now made of float numbers and their values are normalized between 0 and 1. (Column 0, the IDs, will be ignored when we build the training set later)."
},
{
"code": null,
"e": 6347,
"s": 6245,
"text": "The final column, the class, is now using a value of 0 for benign tumors and of 1 for malignant ones."
},
{
"code": null,
"e": 6495,
"s": 6347,
"text": "Notice that we are not normalizing the class column because it already holds values in the 0 to 1 range and its values should remain set to 0 or 1."
},
{
"code": null,
"e": 6765,
"s": 6495,
"text": "And notice that the final column, the one we will use as our target, doesn’t need to be a float. It can be an integer because our output can only be 1 or 0. (when we train the network, we will pick that column from the original df dataframe, where it is set to 0 or 1)."
},
{
"code": null,
"e": 6938,
"s": 6765,
"text": "Therefore, our scaled_df dataframe contains all the normalized columns, and we will pick the class column from the df dataframe, the non normalized version of the data-set."
},
{
"code": null,
"e": 6994,
"s": 6938,
"text": "The process could continue as we explore more the data."
},
{
"code": null,
"e": 7084,
"s": 6994,
"text": "Are all the 9 features essential? Do we want to include them all in the training process?"
},
{
"code": null,
"e": 7191,
"s": 7084,
"text": "Do we have enough quality data to produce good training, validation and test sets? (more about this later)"
},
{
"code": null,
"e": 7309,
"s": 7191,
"text": "Studying the data, do we find any meaningful and useful insights that may help us train the network more efficiently?"
},
{
"code": null,
"e": 7413,
"s": 7309,
"text": "These and more are part of the feature engineering process that takes place before the training begins."
},
{
"code": null,
"e": 7586,
"s": 7413,
"text": "Another useful thing to do is to build charts to analyze the data in different ways. The matplotlib Python library helps us study the data through different kinds of graphs."
},
{
"code": null,
"e": 7693,
"s": 7586,
"text": "We first combine the normalized columns we want to study with the class column, and then begin to explore."
},
{
"code": null,
"e": 7798,
"s": 7693,
"text": "scaled_df[10] = df[10]\nscaled_df.iloc[0:13,1:11].plot.bar();\nscaled_df.iloc[0:13,1:11].plot.hist(alpha=0.5)"
},
{
"code": null,
"e": 7878,
"s": 7798,
"text": "Explore the full range of visualization options offered by Panda with this link"
},
{
"code": null,
"e": 8022,
"s": 7878,
"text": "To speed up the article, in our case we conclude that the 9 features are useful. Our objective is to predict the “class” column with precision."
},
{
"code": null,
"e": 8137,
"s": 8022,
"text": "So, how complex would the function be that would describe the connection between our 683 samples and their output?"
},
{
"code": null,
"e": 8237,
"s": 8137,
"text": "The relationship between the 9 features and the output is clearly multi-dimensional and non-linear."
},
{
"code": null,
"e": 8298,
"s": 8237,
"text": "Let’s run the data through the network and see what happens."
},
{
"code": null,
"e": 8350,
"s": 8298,
"text": "Before we do that, we need to consider a key topic:"
},
{
"code": null,
"e": 8483,
"s": 8350,
"text": "If we use the 683 samples, all our samples, to create our training set and we get good results, we will have to face a key question."
},
{
"code": null,
"e": 8655,
"s": 8483,
"text": "What if the network sets its weights in a way that matches perfectly the samples used during training, and yet fails to generalize to new samples outside the training set?"
},
{
"code": null,
"e": 8806,
"s": 8655,
"text": "Is our final objective achieving great accuracy when using the training data, or when using data it has never seen before? Obviously, the second case."
},
{
"code": null,
"e": 8915,
"s": 8806,
"text": "This is the reason why the deep learning practitioner typically takes into account three kinds of data-sets:"
},
{
"code": null,
"e": 9022,
"s": 8915,
"text": "Training set: the data you use to train the network. It contains the input features and the target labels."
},
{
"code": null,
"e": 9252,
"s": 9022,
"text": "Validation set: a separate, different batch of data, which should ideally come from the same distribution than the training set. You will use it to verify the quality of the training. The validation set has as well target labels."
},
{
"code": null,
"e": 9471,
"s": 9252,
"text": "Test set: another separate batch of data, used to test the network with fresh related data that ideally comes from the same distribution than the validation set. Typically, the test set doesn’t come with target labels."
},
{
"code": null,
"e": 9777,
"s": 9471,
"text": "The size of the different sets in relation to each other is another topic that would take some time to describe. For our purposes, consider that most of the data forms the training set and a small percentage of it is typically extracted (and eliminated from the training set) to become the validation set."
},
{
"code": null,
"e": 9887,
"s": 9777,
"text": "20% is a typical number that is often chosen as the percentage of the data that will form our validation set."
},
{
"code": null,
"e": 10022,
"s": 9887,
"text": "To estimate the quality of the training of your network, it is useful to compare the performance of your training and validation sets:"
},
{
"code": null,
"e": 10279,
"s": 10022,
"text": "If the loss value achieved on the validation set improves and then starts to get worse, the network is over-fitting, meaning the network has learnt a function that fits the training data very well, yet does not generalize well enough to the validation set."
},
{
"code": null,
"e": 10547,
"s": 10279,
"text": "The opposite of over-fitting is under-fitting, when the training performance of the network is not good enough and we obtain loss values that are too high in both the training and validation sets (and the training loss is worse than the validation loss, for example)."
},
{
"code": null,
"e": 10610,
"s": 10547,
"text": "Ideally you want to get similar performance in both data-sets."
},
{
"code": null,
"e": 10910,
"s": 10610,
"text": "When we have over-fitting, we can apply regularization. Regularization is a technique that applies changes to the optimization algorithm that allow the network to generalize better. Regularization techniques include Dropout, L1 and L2 regularization, early stopping and data augmentation techniques."
},
{
"code": null,
"e": 11140,
"s": 10910,
"text": "In general, realize that success with the validation set is your real target. Having the network perform fantastically well with the training data serves no purpose if it fails to perform well with new data it hasn’t seen before."
},
{
"code": null,
"e": 11242,
"s": 11140,
"text": "So your real target is to reach a good loss value and achieve good accuracy with your validation set."
},
{
"code": null,
"e": 11445,
"s": 11242,
"text": "To get there, over-fitting is one of the most important issues we need to prevent, and that’s why regularization is so important. Let’s quickly and briefly recap 4 widely used regularization techniques."
},
{
"code": null,
"e": 11807,
"s": 11445,
"text": "Dropout: During each training pass, we randomly disable some of the hidden units of our network. This prevents the network from putting too much emphasis on any specific weight and helps the network generalize better. It is as if we were running the data through different network architectures and then averaging their impact, which helps prevent over-fitting."
},
{
"code": null,
"e": 12030,
"s": 11807,
"text": "L1 and L2: We add extra terms to the cost function that penalize the network when weights become too large. These techniques encourage the network to find a good balance between the loss value and the scale of the weights."
},
{
"code": null,
"e": 12218,
"s": 12030,
"text": "Early stopping: Over-fitting can be a consequence of training for too long. If we monitor our validation error, we can stop the training process when the validation error stops improving."
},
{
"code": null,
"e": 12575,
"s": 12218,
"text": "Data augmentation: Typically, more training data means a better network performance, but obtaining more data is not always possible. Instead, we can augment the existing data by artificially creating variations of it. For example, in the case of images, we can apply rotations, translations, cropping and other techniques to produce new variations of them."
},
{
"code": null,
"e": 12761,
"s": 12575,
"text": "Back to our data. It’s time to pick our training and validation sets. We will select part of the 683 rows as the training set and a different part of the data-set as our validation set."
},
{
"code": null,
"e": 12882,
"s": 12761,
"text": "After the training, we will validate the quality of our network by running the process again through the validation set."
},
{
"code": null,
"e": 13065,
"s": 12882,
"text": "x = scaled_df.iloc[0:500,1:10].values.transpose()\ny = df.iloc[0:500,10:].values.transpose()\nxval = scaled_df.iloc[501:683,1:10].values.transpose()\nyval = df.iloc[501:683,10:].values.transpose()"
},
{
"code": null,
"e": 13320,
"s": 13065,
"text": "We decide to build our training set with 500 of the 683 rows, and we pick them from the normalized scaled_df dataframe. We also make sure to eliminate the first column (the IDs) and to not include the last column (the class) in the input x to the network"
},
{
"code": null,
"e": 13534,
"s": 13320,
"text": "We declare the target output y using the class column that corresponds to the same 500 rows. We pick the class column from the original non-normalized df dataframe (as the class value should remain as a 0 or a 1)."
},
{
"code": null,
"e": 13638,
"s": 13534,
"text": "We then select the next 183 rows for our validation set, and store them in the variables xval and yval."
},
{
"code": null,
"e": 13897,
"s": 13638,
"text": "We are ready. We will first train the network with the 500 rows of our x,y training set. Afterwards, we will test the trained network with the 183 rows of our xval,yval validation set, to see how well the network generalizes to data it has never seen before."
},
{
"code": null,
"e": 13968,
"s": 13897,
"text": "nn = dlnet(x, y)\nnn.lr = 0.01\nnn.dims = [9, 15, 1]\nnn.gd(x, y, iter=15000)"
},
{
"code": null,
"e": 14255,
"s": 13968,
"text": "We declare our network, set a learning rate and the number of nodes at each layer (the input has 9 nodes because we are using 9 features, and it’s not counted as a layer of the network. The first hidden layer has 15 hidden units and the second and final layer has a single output node)."
},
{
"code": null,
"e": 14421,
"s": 14255,
"text": "We then run the gradient descent algorithm through a few thousand iterations. Let’s get a feel of how well the network trains with a few seconds of gradient descent."
},
{
"code": null,
"e": 14564,
"s": 14421,
"text": "Every x iterations, we display the loss value of the network. If the training proceeds well, the loss value should decrease after every cycle."
},
{
"code": null,
"e": 14949,
"s": 14564,
"text": "Cost after iteration 0: 0.673967\nCost after iteration 500: 0.388928\nCost after iteration 1000: 0.231340\nCost after iteration 1500: 0.171447\nCost after iteration 2000: 0.146433\nCost after iteration 2500: 0.133993\nCost after iteration 3000: 0.126808\nCost after iteration 3500: 0.122107\nCost after iteration 12500: 0.101980\nCost after iteration 13000: 0.101604\nCost after iteration 14500: 0.100592"
},
{
"code": null,
"e": 15100,
"s": 14949,
"text": "After a number of iterations our loss begins to stabilize at a low level. We plot a chart that follows the loss of the network through the iterations."
},
{
"code": null,
"e": 15384,
"s": 15100,
"text": "Our network seems to have trained quite well, reaching a low loss value (the distance between our predictions and the target outputs is low). But, how good is it? and most importantly, how good is it, not just on the whole training set, but way more important, on our validation set?"
},
{
"code": null,
"e": 15614,
"s": 15384,
"text": "To find out, we create a new function, pred(), that runs a set of inputs through the network and then compares systematically every obtained output to its corresponding target output in order to produce an average accuracy value."
},
{
"code": null,
"e": 15861,
"s": 15614,
"text": "Notice below how the function studies if the prediction is above or below 0.5. We are doing binary classification and by default we consider that output values that are above 0.5 mean that the result belongs to one of the classes, and vice-versa."
},
{
"code": null,
"e": 16113,
"s": 15861,
"text": "In this case, because 1 is the class value for malignant tumors, we consider that outputs above 0.5 predict a malignant result, and below 0.5 the opposite. We will talk in a bit about how, when and why we would want to change this 0.5 threshold value."
},
{
"code": null,
"e": 16113,
"s": 15861,
"text": "def pred(self, x, y):\n    self.X = x\n    self.Y = y\n    comp = np.zeros((1, x.shape[1]))\n    pred, loss = self.forward()\n    for i in range(0, pred.shape[1]):\n        if pred[0,i] > 0.5:\n            comp[0,i] = 1\n        else:\n            comp[0,i] = 0\n    print(\"Acc: \" + str(np.sum((comp == y) / x.shape[1])))\n    return comp"
},
{
"code": null,
"e": 16662,
"s": 16457,
"text": "We now proceed to compare the accuracy of the network when using the training and validation sets, by calling the pred function twice, once with our training set, and another time with our validation set."
},
{
"code": null,
"e": 16662,
"s": 16457,
"text": "pred_train = nn.pred(x, y)\npred_test = nn.pred(xval, yval)"
},
{
"code": null,
"e": 16748,
"s": 16720,
"text": "And we get these 2 results."
},
{
"code": null,
"e": 16748,
"s": 16720,
"text": "Acc: 0.9620000000000003\nAcc: 1.0"
},
{
"code": null,
"e": 16913,
"s": 16780,
"text": "The network has an accuracy of a 96% on the training set (first 500 rows) and of 100% when using the validation set (next 183 rows)."
},
{
"code": null,
"e": 17093,
"s": 16913,
"text": "The accuracy on the validation set is higher. This means that the network is not over-fitting and is generalizing well enough to be able to adapt to data it has never seen before."
},
{
"code": null,
"e": 17238,
"s": 17093,
"text": "We can now use the nn.forward() function to compare directly the first few values of the validation set output in relation to the target output:"
},
{
"code": null,
"e": 17428,
"s": 17238,
"text": "nn.X, nn.Y = xval, yval\nyvalh, loss = nn.forward()\nprint(\"\\ny\", np.around(yval[:,0:50,], decimals=0).astype(np.int))\nprint(\"\\nyh\", np.around(yvalh[:,0:50,], decimals=0).astype(np.int), \"\\n\")"
},
{
"code": null,
"e": 17439,
"s": 17428,
"text": "And we get"
},
{
"code": null,
"e": 17653,
"s": 17439,
"text": "y [[0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]\n\nyh [[0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]"
},
{
"code": null,
"e": 17737,
"s": 17653,
"text": "Both match perfectly, because we have achieved 100% accuracy on our validation set."
},
{
"code": null,
"e": 17831,
"s": 17737,
"text": "Therefore, the function learnt pretty well to adapt to both the training and validation sets."
},
{
"code": null,
"e": 17950,
"s": 17831,
"text": "One great way to analyze the accuracy is by plotting a confusion matrix. First, we declare a custom plotting function."
},
{
"code": null,
"e": 18596,
"s": 17950,
"text": "def plotCf(a, b, t):\n    cf = confusion_matrix(a, b)\n    plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')\n    plt.colorbar()\n    plt.title(t)\n    plt.xlabel('Predicted')\n    plt.ylabel('Actual')\n    tick_marks = np.arange(len(set(a)))  # length of classes\n    class_labels = ['0', '1']\n    plt.xticks(tick_marks, class_labels)\n    plt.yticks(tick_marks, class_labels)\n    # plotting text value inside cells\n    thresh = cf.max() / 2.\n    for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n        plt.text(j, i, format(cf[i,j], 'd'), horizontalalignment='center', color='white' if cf[i,j] > thresh else 'black')\n    plt.show();"
},
{
"code": null,
"e": 18680,
"s": 18596,
"text": "(This custom confusion matrix function comes from this public Kaggle created by JP)"
},
{
"code": null,
"e": 18795,
"s": 18680,
"text": "Then, we run the pred function again twice, and plot confusion matrices for both the training and validation sets."
},
{
"code": null,
"e": 19187,
"s": 18795,
"text": "nn.X, nn.Y = x, y\ntarget = np.around(np.squeeze(y), decimals=0).astype(np.int)\npredicted = np.around(np.squeeze(nn.pred(x, y)), decimals=0).astype(np.int)\nplotCf(target, predicted, 'Cf Training Set')\n\nnn.X, nn.Y = xval, yval\ntarget = np.around(np.squeeze(yval), decimals=0).astype(np.int)\npredicted = np.around(np.squeeze(nn.pred(xval, yval)), decimals=0).astype(np.int)\nplotCf(target, predicted, 'Cf Validation Set')"
},
{
"code": null,
"e": 19351,
"s": 19187,
"text": "We can see even more clearly that our validation set has perfect accuracy on its 183 samples. As for the training set, there are 19 mistakes among the 500 samples."
},
{
"code": null,
"e": 19611,
"s": 19351,
"text": "Now, at this point you may say that in a topic as delicate as diagnosing a tumor, setting our prediction to be 1 if the sigmoid output gives a value above 0.5 is not really good. The network should be really confident before giving a prediction of malignancy."
},
{
"code": null,
"e": 19781,
"s": 19611,
"text": "I totally agree, that’s very correct. And these are the kinds of decisions that you need to take depending on the nature of the challenge and topic you are dealing with."
},
{
"code": null,
"e": 19999,
"s": 19781,
"text": "Let’s then create a new variable called threshold. It will control our confidence threshold, how close to 1 the output of the network needs to be before we decide that a tumor is malignant. By default we set it to 0.5"
},
{
"code": null,
"e": 20018,
"s": 19999,
"text": "self.threshold=0.5"
},
{
"code": null,
"e": 20091,
"s": 20018,
"text": "Our prediction function is now updated to use that confidence threshold."
},
{
"code": null,
"e": 20446,
"s": 20091,
"text": "def pred(self, x, y):\n    self.X = x\n    self.Y = y\n    comp = np.zeros((1, x.shape[1]))\n    pred, loss = self.forward()\n    for i in range(0, pred.shape[1]):\n        if pred[0,i] > self.threshold:\n            comp[0,i] = 1\n        else:\n            comp[0,i] = 0\n    print(\"Acc: \" + str(np.sum((comp == y) / x.shape[1])))\n    return comp"
},
{
"code": null,
"e": 20524,
"s": 20446,
"text": "Let’s now compare our results as we gradually raise the confidence threshold."
},
{
"code": null,
"e": 20714,
"s": 20524,
"text": "Confidence threshold: 0.5 . Output values need to be higher than 0.5 for the output to be considered malignant. As seen previously, the validation accuracy is 100%, the training one is 96%."
},
{
"code": null,
"e": 20908,
"s": 20714,
"text": "Confidence threshold: 0.7 . Output values need to be higher than 0.7 for the output to be considered malignant. The validation accuracy remains at 100%, the training one decreases a bit to 95%."
},
{
"code": null,
"e": 21250,
"s": 20908,
"text": "Confidence threshold: 0.8 . Output values need to be higher than 0.8 for the output to be considered malignant. The validation accuracy for the first time decreases very, very slightly to 99.45%. In the confusion matrix we see that 1 single sample of the 183 is not recognized correctly. The training accuracy decreases a bit more till 94.2%"
},
{
"code": null,
"e": 21634,
"s": 21250,
"text": "Confidence threshold: 0.9. Finally, in the case of 0.9, output values need to be higher than 0.9 for the output to be considered malignant. We are looking for almost complete confidence. The validation accuracy decreases a bit more till 98.9%. In the confusion matrix we see that 2 samples of the 183 were not recognized correctly. The training accuracy decreases further till 92.6%."
},
{
"code": null,
"e": 21735,
"s": 21634,
"text": "Therefore, by controlling the confidence threshold, we adapt to the specific needs of our challenge."
},
{
"code": null,
"e": 21951,
"s": 21735,
"text": "If we want to lower the loss value related to our training set (because we are failing to recognize a small percentage of the training samples), we can try to train for longer, and also use different learning rates."
},
{
"code": null,
"e": 22043,
"s": 21951,
"text": "For example, if we set the learning rate to 0.07 and train for 65000 iterations, we obtain:"
},
{
"code": null,
"e": 22198,
"s": 22043,
"text": "Cost after iteration 63500: 0.017076\nCost after iteration 64000: 0.016762\nCost after iteration 64500: 0.016443\nAcc: 0.9980000000000003\nAcc: 0.9945054945054945"
},
{
"code": null,
"e": 22326,
"s": 22198,
"text": "Now, with our confidence threshold set to 0.5, the network is accurate with every sample in both sets, except with one of each."
},
{
"code": null,
"e": 22480,
"s": 22326,
"text": "If we raise the confidence threshold to 0.7, performance is still excellent, only 1 validation sample and 2 training samples are not predicted correctly."
},
{
"code": null,
"e": 22651,
"s": 22480,
"text": "Finally, if we are really demanding and set the confidence threshold to 0.9, the network fails to guess correctly 1 of the validation samples and 10 of the training ones."
},
{
"code": null,
"e": 22844,
"s": 22651,
"text": "Although we have done quite well, considering that we are using a basic network without regularization, it is typical for things to get much harder when you are dealing with more complex data."
},
{
"code": null,
"e": 22981,
"s": 22844,
"text": "Often, the loss landscape gets very complex and it’s easier to fall in the wrong local minima or fail to converge to a good enough loss."
},
{
"code": null,
"e": 23208,
"s": 22981,
"text": "Also, depending on the initial conditions of the network, we may converge to a good minima or we may get stuck at a plateau somewhere and fail to get out of it. It’s useful at this stage to picture again our initial animation."
},
{
"code": null,
"e": 23521,
"s": 23208,
"text": "Picture that landscape, full of hills and valleys, places where the loss is really high, and places where the loss gets very low. The landscape of the loss function related to a complex scenario is often not uniform (though it can be made more smooth using different methods, but that’s a whole different topic)."
},
{
"code": null,
"e": 23712,
"s": 23521,
"text": "It’s full of hills and valleys of different depths and angles. The way you move around the landscape is by changing the loss value of the network when you run the gradient descent algorithm."
},
{
"code": null,
"e": 23780,
"s": 23712,
"text": "And the speed at which you move is controlled by the learning rate:"
},
{
"code": null,
"e": 23903,
"s": 23780,
"text": "If you are moving very slowly and somehow arrive to a plateau or a valley that is not low enough, you may get stuck there."
},
{
"code": null,
"e": 24019,
"s": 23903,
"text": "If you move too fast, you may arrive to a low enough valley but rush through it and move away from it just as fast."
},
{
"code": null,
"e": 24121,
"s": 24019,
"text": "So there are some very delicate issues that have an enormous impact on how your network will perform."
},
{
"code": null,
"e": 24229,
"s": 24121,
"text": "The initial conditions: in what part of the landscape do you drop the ball at the beginning of the process?"
},
{
"code": null,
"e": 24286,
"s": 24229,
"text": "The speed at which you move the ball, the learning rate."
},
{
"code": null,
"e": 24534,
"s": 24286,
"text": "A lot of the progress achieved recently in improving the speed with which neural networks train is connected to different techniques that dynamically manage the learning rate and also to new ways of setting those initial conditions in better ways."
},
{
"code": null,
"e": 24568,
"s": 24534,
"text": "Regarding the initial conditions:"
},
{
"code": null,
"e": 24762,
"s": 24568,
"text": "Remember that each layer computes a combination of the weights and the inputs of the preceding layer (weighted sum of the inputs) and pass that computation to that layer’s activation functions."
},
{
"code": null,
"e": 24979,
"s": 24762,
"text": "Those activation functions have shapes that can either accelerate or stop all together the dynamics of the neurons, depending on the combination between the range of the inputs and the way they respond to that range."
},
{
"code": null,
"e": 25300,
"s": 24979,
"text": "If the sigmoid function, for example, receives values that trigger a result that is close to the extremes of its output range, the output of the activation function on that part of its range becomes really flat. If it stays flat for some time, the derivative, the rate of change at that point becomes zero or very small."
},
{
"code": null,
"e": 25554,
"s": 25300,
"text": "Recall that it is the derivative what helps us decide in what direction to move next. Therefore, if the derivative is not giving us meaningful information, it will be very difficult for the network to know in what direction to move next from that point."
},
{
"code": null,
"e": 25717,
"s": 25554,
"text": "It is as if you had reached a plateau in the landscape and you were really confused as to where to go next, and you just kept moving in circles around that point."
},
{
"code": null,
"e": 25962,
"s": 25717,
"text": "This may happen also with ReLU, although ReLU has only 1 flat side as opposed to the 2 of Sigmoid and Tanh. Leaky-ReLU is a variation of ReLU that slightly modifies that side of the function (the flat one) to try to prevent vanishing gradients."
},
{
"code": null,
"e": 26214,
"s": 25962,
"text": "It is therefore critical to set the initial values of our weights in the best way possible so that the computations of the units at the start of the training process produce outputs that fall within the best possible range of our activation functions."
},
{
"code": null,
"e": 26327,
"s": 26214,
"text": "That could make the whole difference between beginning at a really high hill of the loss landscape or way lower."
},
{
"code": null,
"e": 26536,
"s": 26327,
"text": "Managing the learning rate to prevent the training process from being too slow or too fast, and to adapt its value to the changing conditions of the process and of each parameter, is another complex challenge"
},
{
"code": null,
"e": 26771,
"s": 26536,
"text": "Talking about the many ways of dealing with the initial conditions and the learning rate would take a few articles. I will briefly describe some of them to give an idea of some of the methods experts use to deal with these challenges."
},
{
"code": null,
"e": 27028,
"s": 26771,
"text": "Xavier initialization: A way of initializing our weights so that neuron’s won’t start in a saturated state (trapped at the delicate parts of their output ranges, where derivatives cannot provide enough information for the network to know where to go next)."
},
{
"code": null,
"e": 27317,
"s": 27028,
"text": "Learning rate annealing: high learning rates can push the algorithm to bypass and miss good minima at the loss landscape. A gradual decrease of the learning rate can prevent that. There are different ways to implement this decrease, including: exponential decay, step decay and 1/t decay."
},
{
"code": null,
"e": 27844,
"s": 27317,
"text": "Fast.ai Lr_find(): An algorithm of the fast.ai library that finds the ideal range of values for the learning rate. Lr_find trains the model through a few iterations. It first tries to use a very low learning rate, and at each mini batch it changes the rate gradually until it reaches a very high value. The loss is recorded at each iteration and a chart helps us visualize the loss against the learning rate. We can then decide what are the optimal values of the learning rate that decrease the loss in the most efficient way."
},
{
"code": null,
"e": 27939,
"s": 27844,
"text": "Differential learning rates: Using different learning rates in different parts of our network."
},
{
"code": null,
"e": 28838,
"s": 27939,
"text": "SGDR, Stochastic Gradient Descent with Restarts: Resetting our learning rate every x iterations. This can help us get out of plateaus or local minima that are not low enough, if we get stuck in one of them. A typical process is to start with a high learning rate. You then decrease it gradually at each mini batch. After x number of Epochs you reset it back to its initial high value and the same process repeats again. The concept is that moving gradually from a high rate to a lower one makes sense because we first quickly move down from the high points of the landscape (initial high loss value) and then move slower to prevent bypassing the minima of the landscape (low loss value areas). But if we get stuck at some plateau or a valley that is not low enough, restarting our rate to a high value every x iterations will help us jump out of that situation and continue exploring the landscape."
},
{
"code": null,
"e": 29326,
"s": 28838,
"text": "1 Cycle Policy: A way of dynamically changing the learning rate proposed by Leslie N. Smith, in which we begin with a low rate value and gradually increase it until we reach a maximum. Then, we proceed to gradually decrease it till the end of the process. The initial gradual increase allows us to explore large areas of the loss landscape, increasing our chances of reaching a low area that is not bumpy; in the second part of the cycle, we settle in the low, flat area we have reached."
},
{
"code": null,
"e": 29623,
"s": 29326,
"text": "Momentum: A variation of stochastic gradient descent that helps accelerate the path through the loss landscape while keeping the overall direction controlled. Recall that SGD can be noisy. Momentum averages the changes in the path, smooths that path and accelerates the movement towards the goal."
},
{
"code": null,
"e": 29745,
"s": 29623,
"text": "Adaptive learning rates: Methods that calculate and use different learning rates for different parameters of the network."
},
{
"code": null,
"e": 29961,
"s": 29745,
"text": "AdaGrad (Adaptive Gradient Algorithm): Connecting with the previous point, AdaGrad is a variation of SGD that instead of using a single learning rate for all the parameters, uses a different rate for each parameter."
},
{
"code": null,
"e": 30198,
"s": 29961,
"text": "Root Mean Square Propagation (RMSProp): Like Adagrad, RMSProp uses different learning rates for each parameter, and adapts those rates depending on the average of how fast they are changing (this helps when dealing with noisy contexts)."
},
{
"code": null,
"e": 30402,
"s": 30198,
"text": "Adam: It combines some aspects of RMSprop and SGDR with momentum. Like RMSprop, it uses squared gradients to scale the learning rate, and it also uses the average of the gradient to make use of momentum."
},
{
"code": null,
"e": 30544,
"s": 30402,
"text": "If you are new to all these names, don’t get overwhelmed. Behind most of them are the very same roots: back-propagation and gradient descent."
},
{
"code": null,
"e": 30846,
"s": 30544,
"text": "Also, a lot of these methods are selected automatically for you within modern frameworks such as the fast.ai library. It is though really useful to understand how they work, as you are then in a better position to take your own decisions and even to research and test different variations and options."
},
{
"code": null,
"e": 31048,
"s": 30846,
"text": "When we understand the core of the network, the basic back-propagation algorithm and the basic gradient descent process, we have more options to explore and experiment whenever we face hard challenges."
},
{
"code": null,
"e": 31204,
"s": 31048,
"text": "Because we understand the process, we realize for example that in deep learning, the initial place where we drop the ball within the loss landscape is key."
},
{
"code": null,
"e": 31365,
"s": 31204,
"text": "Some initial positions will soon push the ball (the training process) to get stuck in some part of the landscape. Others will quickly drive us to a good minima."
},
{
"code": null,
"e": 31627,
"s": 31365,
"text": "When the mystery function becomes more complex, it is the time to incorporate some of the advanced solutions I mentioned earlier. It is also time to study in more depth the architecture of the entire network and to go deeper into the different hyper-parameters."
},
{
"code": null,
"e": 31847,
"s": 31627,
"text": "The shape of our loss landscape is very much influenced by the design of the architecture of our networks as well as hyper-parameters like the learning rate, the size of our batches, the optimizer algorithm we use, etc."
},
{
"code": null,
"e": 32019,
"s": 31847,
"text": "For a discussion about those influences, check the paper: Visualizing the Loss Landscape of Neural Nets by Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein."
},
{
"code": null,
"e": 32256,
"s": 32019,
"text": "A very interesting point coming out of recent research is how the skip connections model in neural nets can smooth our loss landscape and make it dramatically simpler and more convex, increasing our chances to converge to a good result."
},
{
"code": null,
"e": 32457,
"s": 32256,
"text": "Skip connections have helped a lot to train very deep networks. Basically, skip connections are extra connections that link nodes of separate layers, skipping one or more non-linear layers in between."
},
{
"code": null,
"e": 32754,
"s": 32457,
"text": "As we experiment with different architectures and parameters, we are modifying our loss landscape, making it more rugged or smooth, increasing or decreasing the number of local optima. And as we optimize the way we initialize the parameters of the network, we are improving our starting position."
},
{
"code": null,
"e": 32868,
"s": 32754,
"text": "Let’s keep on exploring new ways to navigate the loss landscapes of the most fascinating challenges in the world."
},
{
"code": null,
"e": 32937,
"s": 32868,
"text": "This article covered the basics and from here, the sky is the limit!"
},
{
"code": null,
"e": 32999,
"s": 32937,
"text": "Links to the 3 parts of this article:Part 1 | Part 2 | Part 3"
}
] |
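The decay and restart strategies described in the entries above (step decay, exponential decay, SGDR-style cosine restarts) can be sketched in a few lines of plain Python. The constants below — base rate 0.1, drop factor 0.5, cycle length 10 — are illustrative choices for the sketch, not values taken from the text.

```python
import math

def step_decay(base_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return base_lr * (drop ** (epoch // epochs_per_drop))

def exponential_decay(base_lr, epoch, k=0.05):
    """Smooth exponential decay: lr = base_lr * e^(-k * epoch)."""
    return base_lr * math.exp(-k * epoch)

def sgdr(base_lr, epoch, cycle_len=10, min_lr=0.001):
    """Cosine annealing with warm restarts (SGDR-style):
    the rate falls from base_lr to min_lr over each cycle,
    then jumps back up (the 'restart') to help the optimizer
    escape plateaus and shallow minima."""
    t = epoch % cycle_len
    cos = (1 + math.cos(math.pi * t / cycle_len)) / 2
    return min_lr + (base_lr - min_lr) * cos

# At the start of every cycle the rate is restored to its maximum.
print(round(sgdr(0.1, 0), 4))   # start of cycle 1 -> 0.1
print(round(sgdr(0.1, 10), 4))  # restart: start of cycle 2 -> 0.1
```

The same three functions can be swapped into any training loop that recomputes the learning rate once per epoch; production frameworks ship equivalents (e.g. cosine annealing with restarts), but the underlying arithmetic is no more than this.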
Can we use reserved word ‘index’ as MySQL column name?
|
Yes, but you need to add a backtick symbol to the reserved word (index) to avoid error while using it as a column name.
Let us first create a table −
mysql> create table DemoTable
(
`index` int
);
Query OK, 0 rows affected (0.48 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable values(1000);
Query OK, 1 row affected (0.18 sec)
mysql> insert into DemoTable values(1020);
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable values(967);
Query OK, 1 row affected (0.11 sec)
mysql> insert into DemoTable values(567);
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable values(1010);
Query OK, 1 row affected (0.18 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+-------+
| index |
+-------+
| 1000 |
| 1020 |
| 967 |
| 567 |
| 1010 |
+-------+
5 rows in set (0.00 sec)
Now let us display some records with our column name `index`. Here, we are displaying 3 records −
mysql> select *from DemoTable order by `index` DESC LIMIT 3;
This will produce the following output −
+-------+
| index |
+-------+
| 1020 |
| 1010 |
| 1000 |
+-------+
3 rows in set (0.00 sec)
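The same backtick-quoting rule can be demonstrated without a running MySQL server. The sketch below uses Python's built-in sqlite3 module — SQLite accepts MySQL-style backtick quoting for compatibility — so this is an illustration of the quoting rule, not a MySQL session.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Backticks let the reserved word `index` act as an ordinary column name.
cur.execute("CREATE TABLE DemoTable (`index` INTEGER)")
cur.executemany("INSERT INTO DemoTable VALUES (?)",
                [(1000,), (1020,), (967,), (567,), (1010,)])

# Same as the MySQL query above: top 3 values in descending order.
cur.execute("SELECT `index` FROM DemoTable ORDER BY `index` DESC LIMIT 3")
print([row[0] for row in cur.fetchall()])  # -> [1020, 1010, 1000]
conn.close()
```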
|
[
{
"code": null,
"e": 1182,
"s": 1062,
"text": "Yes, but you need to add a backtick symbol to the reserved word (index) to avoid error while using it as a column name."
},
{
"code": null,
"e": 1212,
"s": 1182,
"text": "Let us first create a table −"
},
{
"code": null,
"e": 1299,
"s": 1212,
"text": "mysql> create table DemoTable\n(\n `index` int\n);\nQuery OK, 0 rows affected (0.48 sec)"
},
{
"code": null,
"e": 1355,
"s": 1299,
"text": "Insert some records in the table using insert command −"
},
{
"code": null,
"e": 1748,
"s": 1355,
"text": "mysql> insert into DemoTable values(1000);\nQuery OK, 1 row affected (0.18 sec)\nmysql> insert into DemoTable values(1020);\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into DemoTable values(967);\nQuery OK, 1 row affected (0.11 sec)\nmysql> insert into DemoTable values(567);\nQuery OK, 1 row affected (0.12 sec)\nmysql> insert into DemoTable values(1010);\nQuery OK, 1 row affected (0.18 sec)"
},
{
"code": null,
"e": 1808,
"s": 1748,
"text": "Display all records from the table using select statement −"
},
{
"code": null,
"e": 1839,
"s": 1808,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 1880,
"s": 1839,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1995,
"s": 1880,
"text": "+-------+\n| index |\n+-------+\n| 1000 |\n| 1020 |\n| 967 |\n| 567 |\n| 1010 |\n+-------+\n5 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2093,
"s": 1995,
"text": "Now let us display some records with our column name `index`. Here, we are displaying 3 records −"
},
{
"code": null,
"e": 2154,
"s": 2093,
"text": "mysql> select *from DemoTable order by `index` DESC LIMIT 3;"
},
{
"code": null,
"e": 2195,
"s": 2154,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2290,
"s": 2195,
"text": "+-------+\n| index |\n+-------+\n| 1020 |\n| 1010 |\n| 1000 |\n+-------+\n3 rows in set (0.00 sec)"
}
] |
Spring MVC - Properties Method Name Resolver Example
|
The following example shows how to use the Properties Method Name Resolver of a Multi Action Controller using the Spring Web MVC framework. The MultiActionController class helps map multiple URLs to their corresponding handler methods in a single controller.
package com.tutorialspoint;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.multiaction.MultiActionController;
public class UserController extends MultiActionController{
public ModelAndView home(HttpServletRequest request,
HttpServletResponse response) throws Exception {
ModelAndView model = new ModelAndView("user");
model.addObject("message", "Home");
return model;
}
public ModelAndView add(HttpServletRequest request,
HttpServletResponse response) throws Exception {
ModelAndView model = new ModelAndView("user");
model.addObject("message", "Add");
return model;
}
public ModelAndView remove(HttpServletRequest request,
HttpServletResponse response) throws Exception {
ModelAndView model = new ModelAndView("user");
model.addObject("message", "Remove");
return model;
}
}
<bean class = "com.tutorialspoint.UserController">
<property name = "methodNameResolver">
<bean class = "org.springframework.web.servlet.mvc.multiaction.PropertiesMethodNameResolver">
<property name = "mappings">
<props>
<prop key = "/user/home.htm">home</prop>
<prop key = "/user/add.htm">add</prop>
<prop key = "/user/remove.htm">update</prop>
</props>
</property>
</bean>
</property>
</bean>
For example, using the above configuration, if URI −
/user/home.htm is requested, DispatcherServlet will forward the request to the UserController home() method.
/user/add.htm is requested, DispatcherServlet will forward the request to the UserController add() method.
/user/remove.htm is requested, DispatcherServlet will forward the request to the UserController remove() method.
To start with it, let us have a working Eclipse IDE in place and consider the following steps to develop a Dynamic Form based Web Application using the Spring Web Framework.
package com.tutorialspoint;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.multiaction.MultiActionController;
public class UserController extends MultiActionController{
public ModelAndView home(HttpServletRequest request,
HttpServletResponse response) throws Exception {
ModelAndView model = new ModelAndView("user");
model.addObject("message", "Home");
return model;
}
public ModelAndView add(HttpServletRequest request,
HttpServletResponse response) throws Exception {
ModelAndView model = new ModelAndView("user");
model.addObject("message", "Add");
return model;
}
public ModelAndView remove(HttpServletRequest request,
HttpServletResponse response) throws Exception {
ModelAndView model = new ModelAndView("user");
model.addObject("message", "Remove");
return model;
}
}
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:context = "http://www.springframework.org/schema/context"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd">
<bean class = "org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name = "prefix" value = "/WEB-INF/jsp/"/>
<property name = "suffix" value = ".jsp"/>
</bean>
<bean class = "org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping">
<property name = "caseSensitive" value = "true" />
</bean>
<bean class = "com.tutorialspoint.UserController">
<property name = "methodNameResolver">
<bean class = "org.springframework.web.servlet.mvc.multiaction.PropertiesMethodNameResolver">
<property name = "mappings">
<props>
<prop key = "/user/home.htm">home</prop>
<prop key = "/user/add.htm">add</prop>
<prop key = "/user/remove.htm">update</prop>
</props>
</property>
</bean>
</property>
</bean>
</beans>
<%@ page contentType = "text/html; charset = UTF-8" %>
<html>
<head>
<title>Hello World</title>
</head>
<body>
<h2>${message}</h2>
</body>
</html>
Once you are done with creating source and configuration files, export your application. Right click on your application, use Export → WAR File option and save the TestWeb.war file in Tomcat's webapps folder.
Now, start your Tomcat server and make sure you are able to access other webpages from the webapps folder using a standard browser. Now, try a URL − http://localhost:8080/TestWeb/user/add.htm and we will see the following screen, if everything is fine with the Spring Web Application.
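The core idea behind PropertiesMethodNameResolver — a table from URL path to handler-method name, looked up and invoked reflectively at dispatch time — is language-agnostic. The Python sketch below illustrates that concept only; it is not Spring's actual implementation, and the class and function names are invented for the illustration.

```python
class UserController:
    def home(self):   return "Home"
    def add(self):    return "Add"
    def remove(self): return "Remove"

# The analogue of the <props> mapping in the XML configuration:
# each URL resolves to a handler-method *name*.
MAPPINGS = {
    "/user/home.htm":   "home",
    "/user/add.htm":    "add",
    "/user/remove.htm": "remove",
}

def dispatch(controller, url):
    method_name = MAPPINGS[url]                # resolve URL -> method name
    return getattr(controller, method_name)()  # invoke it reflectively

print(dispatch(UserController(), "/user/add.htm"))  # -> Add
```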
|
[
{
"code": null,
"e": 3049,
"s": 2791,
"text": "The following example shows how to use the Properties Method Name Resolver method of a Multi Action Controller using Spring Web MVC framework. The MultiActionController class helps to map multiple URLs with their methods in a single controller respectively."
},
{
"code": null,
"e": 4064,
"s": 3049,
"text": "package com.tutorialspoint;\n\nimport javax.servlet.http.HttpServletRequest;\nimport javax.servlet.http.HttpServletResponse;\n\nimport org.springframework.web.servlet.ModelAndView;\nimport org.springframework.web.servlet.mvc.multiaction.MultiActionController;\n\npublic class UserController extends MultiActionController{\n\t\n public ModelAndView home(HttpServletRequest request,\n HttpServletResponse response) throws Exception {\n ModelAndView model = new ModelAndView(\"user\");\n model.addObject(\"message\", \"Home\");\n return model;\n }\n\n public ModelAndView add(HttpServletRequest request,\n HttpServletResponse response) throws Exception {\n ModelAndView model = new ModelAndView(\"user\");\n model.addObject(\"message\", \"Add\");\n return model;\n }\n\n public ModelAndView remove(HttpServletRequest request,\n HttpServletResponse response) throws Exception {\n ModelAndView model = new ModelAndView(\"user\");\n model.addObject(\"message\", \"Remove\");\n return model;\n }\n}"
},
{
"code": null,
"e": 4567,
"s": 4064,
"text": "<bean class = \"com.tutorialspoint.UserController\">\n <property name = \"methodNameResolver\">\n <bean class = \"org.springframework.web.servlet.mvc.multiaction.PropertiesMethodNameResolver\">\n <property name = \"mappings\">\n <props>\n <prop key = \"/user/home.htm\">home</prop>\n <prop key = \"/user/add.htm\">add</prop>\n <prop key = \"/user/remove.htm\">update</prop>\t \n </props>\n </property>\n </bean>\n </property>\n</bean>"
},
{
"code": null,
"e": 4620,
"s": 4567,
"text": "For example, using the above configuration, if URI −"
},
{
"code": null,
"e": 4729,
"s": 4620,
"text": "/user/home.htm is requested, DispatcherServlet will forward the request to the UserController home() method."
},
{
"code": null,
"e": 4838,
"s": 4729,
"text": "/user/home.htm is requested, DispatcherServlet will forward the request to the UserController home() method."
},
{
"code": null,
"e": 4945,
"s": 4838,
"text": "/user/add.htm is requested, DispatcherServlet will forward the request to the UserController add() method."
},
{
"code": null,
"e": 5052,
"s": 4945,
"text": "/user/add.htm is requested, DispatcherServlet will forward the request to the UserController add() method."
},
{
"code": null,
"e": 5165,
"s": 5052,
"text": "/user/remove.htm is requested, DispatcherServlet will forward the request to the UserController remove() method."
},
{
"code": null,
"e": 5278,
"s": 5165,
"text": "/user/remove.htm is requested, DispatcherServlet will forward the request to the UserController remove() method."
},
{
"code": null,
"e": 5452,
"s": 5278,
"text": "To start with it, let us have a working Eclipse IDE in place and consider the following steps to develop a Dynamic Form based Web Application using the Spring Web Framework."
},
{
"code": null,
"e": 6467,
"s": 5452,
"text": "package com.tutorialspoint;\n\nimport javax.servlet.http.HttpServletRequest;\nimport javax.servlet.http.HttpServletResponse;\n\nimport org.springframework.web.servlet.ModelAndView;\nimport org.springframework.web.servlet.mvc.multiaction.MultiActionController;\n\npublic class UserController extends MultiActionController{\n\t\n public ModelAndView home(HttpServletRequest request,\n HttpServletResponse response) throws Exception {\n ModelAndView model = new ModelAndView(\"user\");\n model.addObject(\"message\", \"Home\");\n return model;\n }\n\n public ModelAndView add(HttpServletRequest request,\n HttpServletResponse response) throws Exception {\n ModelAndView model = new ModelAndView(\"user\");\n model.addObject(\"message\", \"Add\");\n return model;\n }\n\n public ModelAndView remove(HttpServletRequest request,\n HttpServletResponse response) throws Exception {\n ModelAndView model = new ModelAndView(\"user\");\n model.addObject(\"message\", \"Remove\");\n return model;\n }\n}"
},
{
"code": null,
"e": 7850,
"s": 6467,
"text": "<beans xmlns = \"http://www.springframework.org/schema/beans\"\n xmlns:context = \"http://www.springframework.org/schema/context\"\n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation = \"\n http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\n http://www.springframework.org/schema/context \n http://www.springframework.org/schema/context/spring-context-3.0.xsd\">\n\n <bean class = \"org.springframework.web.servlet.view.InternalResourceViewResolver\">\n <property name = \"prefix\" value = \"/WEB-INF/jsp/\"/>\n <property name = \"suffix\" value = \".jsp\"/>\n </bean>\n\n <bean class = \"org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping\"> \n <property name = \"caseSensitive\" value = \"true\" />\n </bean>\n <bean class = \"com.tutorialspoint.UserController\">\n <property name = \"methodNameResolver\">\n <bean class = \"org.springframework.web.servlet.mvc.multiaction.PropertiesMethodNameResolver\">\n <property name = \"mappings\">\n <props>\n <prop key = \"/user/home.htm\">home</prop>\n <prop key = \"/user/add.htm\">add</prop>\n <prop key = \"/user/remove.htm\">update</prop>\t \n </props>\n </property>\n </bean>\n </property>\n </bean> \n</beans>"
},
{
"code": null,
"e": 8023,
"s": 7850,
"text": "<%@ page contentType = \"text/html; charset = UTF-8\" %>\n<html>\n <head>\n <title>Hello World</title>\n </head>\n <body>\n <h2>${message}</h2> \n </body>\n</html>"
},
{
"code": null,
"e": 8232,
"s": 8023,
"text": "Once you are done with creating source and configuration files, export your application. Right click on your application, use Export → WAR File option and save the TestWeb.war file in Tomcat's webapps folder."
},
{
"code": null,
"e": 8517,
"s": 8232,
"text": "Now, start your Tomcat server and make sure you are able to access other webpages from the webapps folder using a standard browser. Now, try a URL − http://localhost:8080/TestWeb/user/add.htm and we will see the following screen, if everything is fine with the Spring Web Application."
},
{
"code": null,
"e": 8524,
"s": 8517,
"text": " Print"
},
{
"code": null,
"e": 8535,
"s": 8524,
"text": " Add Notes"
}
] |
Count characters at same position as in English alphabet - GeeksforGeeks
|
29 Apr, 2021
Given a string of lowercase and uppercase characters, the task is to find how many characters are at the same position as in the English alphabet. Examples:
Input: ABcED
Output : 3
First three characters are at same position
as in English alphabets.
Input: geeksforgeeks
Output : 1
Only 'f' is at same position as in English
alphabet
Input : alphabetical
Output : 3
For this we can have simple approach:
1) Initialize result as 0.
2) Traverse input string and do following for every
character str[i]
a) If 'i' is same as str[i] - 'a' or same as
str[i] - 'A', then do result++
3) Return result
C++
Java
Python3
C#
PHP
Javascript
// C++ program to find number of characters at same// position as in English alphabets#include<bits/stdc++.h>using namespace std; int findCount(string str){ int result = 0; // Traverse input string for (int i = 0 ; i < str.size(); i++) // Check that index of characters of string is // same as of English alphabets by using ASCII // values and the fact that all lower case // alphabetic characters come together in same // order in ASCII table. And same is true for // upper case. if (i == (str[i] - 'a') || i == (str[i] - 'A')) result++; return result;} // Driver codeint main(){ string str = "AbgdeF"; cout << findCount(str); return 0;}
// Java program to find number of// characters at same position// as in English alphabetsclass GFG{ static int findCount(String str) { int result = 0; // Traverse input string for (int i = 0; i < str.length(); i++) // Check that index of characters // of string is same as of English // alphabets by using ASCII values // and the fact that all lower case // alphabetic characters come together // in same order in ASCII table. And // same is true for upper case. { if (i == (str.charAt(i) - 'a') || i == (str.charAt(i) - 'A')) { result++; } } return result; } // Driver code public static void main(String[] args) { String str = "AbgdeF"; System.out.print(findCount(str)); }} // This code is contributed by Rajput-JI
# Python program to find number of# characters at same position as# in English alphabets # Function to count the number of# characters at same position as# in English alphabetsdef findCount(str): result = 0 # Traverse the input string for i in range(len(str)): # Check that index of characters of string is # same as of English alphabets by using ASCII # values and the fact that all lower case # alphabetic characters come together in same # order in ASCII table. And same is true for # upper case. if ((i == ord(str[i]) - ord('a')) or (i == ord(str[i]) - ord('A'))): result += 1 return result # Driver Codestr = 'AbgdeF'print(findCount(str)) # This code is contributed# by SamyuktaSHegde
// C# program to find number of// characters at same position// as in English alphabetsusing System; class GFG{static int findCount(string str){ int result = 0; // Traverse input string for (int i = 0 ; i < str.Length; i++) // Check that index of characters // of string is same as of English // alphabets by using ASCII values // and the fact that all lower case // alphabetic characters come together // in same order in ASCII table. And // same is true for upper case. if (i == (str[i] - 'a') || i == (str[i] - 'A')) result++; return result;} // Driver codepublic static void Main(){ string str = "AbgdeF"; Console.Write(findCount(str));}} // This code is contributed// by Akanksha Rai
<?php// PHP program to find number of// characters at same position as// in English alphabets // Function to count the number of// characters at same position as// in English alphabetsfunction findCount($str){ $result = 0; // Traverse the input string for ($i = 0; $i < strlen($str); $i++) { // Check that index of characters of string is // same as of English alphabets by using ASCII // values and the fact that all lower case // alphabetic characters come together in same // order in ASCII table. And same is true for // upper case. if (($i == ord($str[$i]) - ord('a')) or ($i == ord($str[$i]) - ord('A'))) $result += 1; } return $result;} // Driver Code$str = "AbgdeF";print(findCount($str)) // This code has been contributed by 29AjayKumar?>
<script> // JavaScript program to find number of characters at same // position as in English alphabets function findCount(str) { var result = 0; // Traverse input string for (var i = 0; i < str.length; i++) // Check that index of characters of string is // same as of English alphabets by using ASCII // values and the fact that all lower case // alphabetic characters come together in same // order in ASCII table. And same is true for // upper case. if ( i === str[i].charCodeAt(0) - "a".charCodeAt(0) || i === str[i].charCodeAt(0) - "A".charCodeAt(0) ) result++; return result; } // Driver code var str = "AbgdeF"; document.write(findCount(str)); </script>
Output:
5
This article is contributed by Sahil Chhabra. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
Check for Balanced Brackets in an expression (well-formedness) using Stack
Python program to check if a string is palindrome or not
Different methods to reverse a string in C/C++
Convert string to char array in C++
Array of Strings in C++ (5 Different Ways to Create)
Longest Palindromic Substring | Set 1
Caesar Cipher in Cryptography
Check whether two strings are anagram of each other
Length of the longest substring without repeating characters
Reverse words in a given string
|
[
{
"code": null,
"e": 25003,
"s": 24975,
"text": "\n29 Apr, 2021"
},
{
"code": null,
"e": 25154,
"s": 25003,
"text": "Given a string of lower and uppercase characters, the task is to find that how many characters are at same position as in English alphabet.Examples: "
},
{
"code": null,
"e": 25374,
"s": 25154,
"text": "Input: ABcED \nOutput : 3\nFirst three characters are at same position\nas in English alphabets.\n\nInput: geeksforgeeks \nOutput : 1\nOnly 'f' is at same position as in English\nalphabet\n\nInput : alphabetical \nOutput : 3"
},
{
"code": null,
"e": 25416,
"s": 25376,
"text": "For this we can have simple approach: "
},
{
"code": null,
"e": 25623,
"s": 25416,
"text": "1) Initialize result as 0.\n2) Traverse input string and do following for every \n character str[i]\n a) If 'i' is same as str[i] - 'a' or same as \n str[i] - 'A', then do result++\n3) Return result"
},
{
"code": null,
"e": 25629,
"s": 25625,
"text": "C++"
},
{
"code": null,
"e": 25634,
"s": 25629,
"text": "Java"
},
{
"code": null,
"e": 25642,
"s": 25634,
"text": "Python3"
},
{
"code": null,
"e": 25645,
"s": 25642,
"text": "C#"
},
{
"code": null,
"e": 25649,
"s": 25645,
"text": "PHP"
},
{
"code": null,
"e": 25660,
"s": 25649,
"text": "Javascript"
},
{
"code": "// C++ program to find number of characters at same// position as in English alphabets#include<bits/stdc++.h>using namespace std; int findCount(string str){ int result = 0; // Traverse input string for (int i = 0 ; i < str.size(); i++) // Check that index of characters of string is // same as of English alphabets by using ASCII // values and the fact that all lower case // alphabetic characters come together in same // order in ASCII table. And same is true for // upper case. if (i == (str[i] - 'a') || i == (str[i] - 'A')) result++; return result;} // Driver codeint main(){ string str = \"AbgdeF\"; cout << findCount(str); return 0;}",
"e": 26385,
"s": 25660,
"text": null
},
{
"code": "// Java program to find number of// characters at same position// as in English alphabetsclass GFG{ static int findCount(String str) { int result = 0; // Traverse input string for (int i = 0; i < str.length(); i++) // Check that index of characters // of string is same as of English // alphabets by using ASCII values // and the fact that all lower case // alphabetic characters come together // in same order in ASCII table. And // same is true for upper case. { if (i == (str.charAt(i) - 'a') || i == (str.charAt(i) - 'A')) { result++; } } return result; } // Driver code public static void main(String[] args) { String str = \"AbgdeF\"; System.out.print(findCount(str)); }} // This code is contributed by Rajput-JI",
"e": 27307,
"s": 26385,
"text": null
},
{
"code": "# Python program to find number of# characters at same position as# in English alphabets # Function to count the number of# characters at same position as# in English alphabetsdef findCount(str): result = 0 # Traverse the input string for i in range(len(str)): # Check that index of characters of string is # same as of English alphabets by using ASCII # values and the fact that all lower case # alphabetic characters come together in same # order in ASCII table. And same is true for # upper case. if ((i == ord(str[i]) - ord('a')) or (i == ord(str[i]) - ord('A'))): result += 1 return result # Driver Codestr = 'AbgdeF'print(findCount(str)) # This code is contributed# by SamyuktaSHegde",
"e": 28082,
"s": 27307,
"text": null
},
{
"code": "// C# program to find number of// characters at same position// as in English alphabetsusing System; class GFG{static int findCount(string str){ int result = 0; // Traverse input string for (int i = 0 ; i < str.Length; i++) // Check that index of characters // of string is same as of English // alphabets by using ASCII values // and the fact that all lower case // alphabetic characters come together // in same order in ASCII table. And // same is true for upper case. if (i == (str[i] - 'a') || i == (str[i] - 'A')) result++; return result;} // Driver codepublic static void Main(){ string str = \"AbgdeF\"; Console.Write(findCount(str));}} // This code is contributed// by Akanksha Rai",
"e": 28870,
"s": 28082,
"text": null
},
{
"code": "<?php// PHP program to find number of// characters at same position as// in English alphabets // Function to count the number of// characters at same position as// in English alphabetsfunction findCount($str){ $result = 0; // Traverse the input string for ($i = 0; $i < strlen($str); $i++) { // Check that index of characters of string is // same as of English alphabets by using ASCII // values and the fact that all lower case // alphabetic characters come together in same // order in ASCII table. And same is true for // upper case. if (($i == ord($str[$i]) - ord('a')) or ($i == ord($str[$i]) - ord('A'))) $result += 1; } return $result;} // Driver Code$str = \"AbgdeF\";print(findCount($str)) // This code has been contributed by 29AjayKumar?>",
"e": 29709,
"s": 28870,
"text": null
},
{
"code": "<script> // JavaScript program to find number of characters at same // position as in English alphabets function findCount(str) { var result = 0; // Traverse input string for (var i = 0; i < str.length; i++) // Check that index of characters of string is // same as of English alphabets by using ASCII // values and the fact that all lower case // alphabetic characters come together in same // order in ASCII table. And same is true for // upper case. if ( i === str[i].charCodeAt(0) - \"a\".charCodeAt(0) || i === str[i].charCodeAt(0) - \"A\".charCodeAt(0) ) result++; return result; } // Driver code var str = \"AbgdeF\"; document.write(findCount(str)); </script>",
"e": 30545,
"s": 29709,
"text": null
},
{
"code": null,
"e": 30555,
"s": 30545,
"text": "Output: "
},
{
"code": null,
"e": 30558,
"s": 30555,
"text": " 5"
},
{
"code": null,
"e": 31415,
"s": 30560,
"text": "YouTubeGeeksforGeeks502K subscribersCount characters at same position as in English alphabet | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:000:00 / 1:57•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=MhLeyMLKE3w\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>"
},
{
"code": null,
"e": 31836,
"s": 31415,
"text": "This article is contributed by Sahil Chhabra. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 31851,
"s": 31836,
"text": "SamyuktaSHegde"
},
{
"code": null,
"e": 31864,
"s": 31851,
"text": "Akanksha_Rai"
},
{
"code": null,
"e": 31874,
"s": 31864,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 31886,
"s": 31874,
"text": "29AjayKumar"
},
{
"code": null,
"e": 31900,
"s": 31886,
"text": "shubham_singh"
},
{
"code": null,
"e": 31907,
"s": 31900,
"text": "rdtank"
},
{
"code": null,
"e": 31915,
"s": 31907,
"text": "Strings"
},
{
"code": null,
"e": 31923,
"s": 31915,
"text": "Strings"
},
{
"code": null,
"e": 32021,
"s": 31923,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 32096,
"s": 32021,
"text": "Check for Balanced Brackets in an expression (well-formedness) using Stack"
},
{
"code": null,
"e": 32153,
"s": 32096,
"text": "Python program to check if a string is palindrome or not"
},
{
"code": null,
"e": 32200,
"s": 32153,
"text": "Different methods to reverse a string in C/C++"
},
{
"code": null,
"e": 32236,
"s": 32200,
"text": "Convert string to char array in C++"
},
{
"code": null,
"e": 32289,
"s": 32236,
"text": "Array of Strings in C++ (5 Different Ways to Create)"
},
{
"code": null,
"e": 32327,
"s": 32289,
"text": "Longest Palindromic Substring | Set 1"
},
{
"code": null,
"e": 32357,
"s": 32327,
"text": "Caesar Cipher in Cryptography"
},
{
"code": null,
"e": 32409,
"s": 32357,
"text": "Check whether two strings are anagram of each other"
},
{
"code": null,
"e": 32470,
"s": 32409,
"text": "Length of the longest substring without repeating characters"
}
] |
How to change Table Engine in MySQL?
|
You can change a table's storage engine with the help of the ALTER TABLE command. The syntax is as follows −
alter table yourTableName ENGINE = yourEngineName;
To understand the above syntax, let us create a table with the MyISAM engine; later we will change it to another engine. The following is the query to create the table.
mysql> create table ChangeEngineTableDemo
   -> (
   -> MovieId int,
   -> MovieName varchar(100),
   -> IsPopular bool
   -> )ENGINE = 'MyISAM';
Query OK, 0 rows affected (0.37 sec)
Look at the above query: the table engine is MyISAM. Now you can change it to any other engine; here we will change the engine type to InnoDB. The query to change the engine type is as follows −
mysql> alter table ChangeEngineTableDemo ENGINE = InnoDB;
Query OK, 0 rows affected (2.21 sec)
Records: 0 Duplicates: 0 Warnings: 0
To check whether the engine type has been changed, use the SHOW CREATE TABLE command. The following is the query −
mysql> show create table ChangeEngineTableDemo;
The following is the output −
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ChangeEngineTableDemo | CREATE TABLE `changeenginetabledemo` (
`MovieId` int(11) DEFAULT NULL,
`MovieName` varchar(100) DEFAULT NULL,
`IsPopular` tinyint(1) DEFAULT NULL
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_0900_ai_ci |
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.03 sec)
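If you do not want to read the engine out of the SHOW CREATE TABLE output, you can also query the information_schema catalog. This is a sketch; yourDatabaseName is a placeholder for the database that actually holds the table:

```sql
SELECT ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'yourDatabaseName'
  AND TABLE_NAME = 'ChangeEngineTableDemo';
```

After the ALTER TABLE statement above has run, this query should report InnoDB.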
|
[
{
"code": null,
"e": 1149,
"s": 1062,
"text": "You can change table engine with the help of alter command. The syntax is as follows −"
},
{
"code": null,
"e": 1200,
"s": 1149,
"text": "alter table yourTableName ENGINE = yourEngineName;"
},
{
"code": null,
"e": 1358,
"s": 1200,
"text": "To understand the above syntax let us create a table with engine MyISAM. Later you can change any other engine. The following is the query to create a table."
},
{
"code": null,
"e": 1535,
"s": 1358,
"text": "mysql> create table ChangeEngineTableDemo\n−> (\n −> MovieId int,\n −> MovieName varchar(100),\n −> IsPopular bool\n−> )ENGINE = 'MyISAM';\nQuery OK, 0 rows affected (0.37 sec)"
},
{
"code": null,
"e": 1720,
"s": 1535,
"text": "Look at the above query, the table engine is MyISAM, now you can change it to any other engine. Here, we will change engine type InnoDB. The query to change engine type is as follows −"
},
{
"code": null,
"e": 1852,
"s": 1720,
"text": "mysql> alter table ChangeEngineTableDemo ENGINE = InnoDB;\nQuery OK, 0 rows affected (2.21 sec)\nRecords: 0 Duplicates: 0 Warnings: 0"
},
{
"code": null,
"e": 1961,
"s": 1852,
"text": "To check the engine type has been changed or not with the help of show command, the following is the query −"
},
{
"code": null,
"e": 2009,
"s": 1961,
"text": "mysql> show create table ChangeEngineTableDemo;"
},
{
"code": null,
"e": 2039,
"s": 2009,
"text": "The following is the output −"
},
{
"code": null,
"e": 3441,
"s": 2039,
"text": "+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n| Table | Create Table |\n+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n| ChangeEngineTableDemo | CREATE TABLE `changeenginetabledemo` (\n`MovieId` int(11) DEFAULT NULL,\n`MovieName` varchar(100) DEFAULT NULL,\n`IsPopular` tinyint(1) DEFAULT NULL\n) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_0900_ai_ci |\n+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n1 row in set (0.03 sec)"
}
] |
Content of a Polynomial - GeeksforGeeks
|
22 Apr, 2021
Given an array arr[] which denotes the integer coefficients of the polynomial, the task is to find the content of the polynomial.
The content of a polynomial with integer coefficients is defined as the greatest common divisor of its integer coefficients. That is, for:
F(x) = a_m x^m + a_(m-1) x^(m-1) + ... + a_1 x + a_0. Then, Content of Polynomial = gcd(a_m, a_(m-1), ..., a_1, a_0)
Examples:
Input: arr[] = {9, 30, 12}
Output: 3
Explanation: The given polynomial is 9x^2 + 30x + 12, therefore Content = gcd(9, 30, 12) = 3
Input: arr[] = {2, 4, 6}Output: 2
Approach: The idea is to find the greatest common divisor of all the elements of the array, which can be computed by applying the GCD repeatedly, two elements at a time. That is:
gcd(a, b, c)
= gcd(gcd(a, b), c)
= gcd(a, gcd(b, c))
= gcd(gcd(a, c), b)
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation to find the// content of the polynomial #include <bits/stdc++.h>using namespace std; #define newl "\n"#define ll long long#define pb push_back // Function to find the content// of the polynomialint findContent(int arr[], int n){ int content = arr[0]; // Loop to iterate over the // elements of the array for (int i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} // Driver Codeint main(){ int n = 3; int arr[] = { 9, 6, 12 }; // Function call cout << findContent(arr, n); return 0;}
// Java implementation to find the// content of the polynomialclass GFG{ // Function to find the content// of the polynomialstatic int findContent(int arr[], int n){ int content = arr[0]; // Loop to iterate over the // elements of the array for(int i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} static int __gcd(int a, int b){ return b == 0 ? a : __gcd(b, a % b); } // Driver Codepublic static void main(String[] args){ int n = 3; int arr[] = { 9, 6, 12 }; // Function call System.out.print(findContent(arr, n));}} // This code is contributed by sapnasingh4991
# Python3 implementation to find the# content of the polynomialfrom math import gcd # Function to find the content# of the polynomialdef findContent(arr, n): content = arr[0] # Loop to iterate over the # elements of the array for i in range(1, n): # __gcd(a, b) is a inbuilt # function for Greatest # Common Divisor content = gcd(content, arr[i]) return content # Driver Codeif __name__ == '__main__': n = 3 arr = [ 9, 6, 12 ] # Function call print(findContent(arr, n)) # This code is contributed by mohit kumar 29
// C# implementation to find the// content of the polynomialusing System; class GFG{ // Function to find the content// of the polynomialstatic int findContent(int []arr, int n){ int content = arr[0]; // Loop to iterate over the // elements of the array for(int i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} static int __gcd(int a, int b){ return b == 0 ? a : __gcd(b, a % b); } // Driver Codepublic static void Main(String[] args){ int n = 3; int []arr = { 9, 6, 12 }; // Function call Console.Write(findContent(arr, n));}} // This code is contributed by PrinciRaj1992
<script> // Javascript implementation to find the// content of the polynomial // Function to find the content// of the polynomialfunction findContent(arr, n){ var content = arr[0]; // Loop to iterate over the // elements of the array for(var i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} function __gcd(a, b){ return b == 0 ? a : __gcd(b, a % b); } // Driver Codevar n = 3;var arr = [ 9, 6, 12 ]; // Function calldocument.write(findContent(arr, n)); // This code is contributed by kirti </script>
3
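Since gcd folds associatively, as shown above, the whole computation reduces to a one-line fold in Python; a minimal sketch:

```python
from functools import reduce
from math import gcd

def find_content(coeffs):
    # Fold gcd across the coefficient list; associativity of gcd
    # guarantees the grouping order does not matter.
    return reduce(gcd, coeffs)

print(find_content([9, 6, 12]))   # 3
print(find_content([9, 30, 12]))  # 3
```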
mohit kumar 29
sapnasingh4991
princiraj1992
Kirti_Mangal
GCD-LCM
maths-polynomial
Mathematical
Find all factors of a natural number | Set 1
Check if a number is Palindrome
Program to print prime numbers from 1 to N.
Program to add two binary strings
Fizz Buzz Implementation
Program to multiply two matrices
Find pair with maximum GCD in an array
Find Union and Intersection of two unsorted arrays
Count all possible paths from top left to bottom right of a mXn matrix
Count ways to reach the n'th stair
|
[
{
"code": null,
"e": 24326,
"s": 24298,
"text": "\n22 Apr, 2021"
},
{
"code": null,
"e": 24457,
"s": 24326,
"text": "Given an array arr[] which denotes the integer coefficients of the polynomial, the task is to find the content of the polynomial. "
},
{
"code": null,
"e": 24589,
"s": 24457,
"text": "Content of polynomials with integer coefficients is defined as the greatest common divisor of its integer coefficients.That is for:"
},
{
"code": null,
"e": 24693,
"s": 24589,
"text": "F(x) = amxm + am-1xm-1 + ........+a1x + a0Then, Content of Polynomial = gcd(am, am-1, am-2...., a1, a0)"
},
{
"code": null,
"e": 24704,
"s": 24693,
"text": "Examples: "
},
{
"code": null,
"e": 24832,
"s": 24704,
"text": "Input: arr[] = {9, 30, 12} Output: 3Explanation:Given Polynomial can be: 9x2 + 30x + 12Therefore, Content = gcd(9, 30, 12) = 3"
},
{
"code": null,
"e": 24866,
"s": 24832,
"text": "Input: arr[] = {2, 4, 6}Output: 2"
},
{
"code": null,
"e": 25054,
"s": 24866,
"text": "Approach: The idea is to find the Greatest common divisor of all the elements of the array which can be computed by finding the GCD repeatedly by choosing two elements at a time. That is:"
},
{
"code": null,
"e": 25130,
"s": 25054,
"text": "gcd(a, b, c)\n = gcd(gcd(a, b), c)\n = gcd(a, gcd(b, c))\n = gcd(gcd(a, c), b)"
},
{
"code": null,
"e": 25181,
"s": 25130,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 25185,
"s": 25181,
"text": "C++"
},
{
"code": null,
"e": 25190,
"s": 25185,
"text": "Java"
},
{
"code": null,
"e": 25198,
"s": 25190,
"text": "Python3"
},
{
"code": null,
"e": 25201,
"s": 25198,
"text": "C#"
},
{
"code": null,
"e": 25212,
"s": 25201,
"text": "Javascript"
},
{
"code": "// C++ implementation to find the// content of the polynomial #include <bits/stdc++.h>using namespace std; #define newl \"\\n\"#define ll long long#define pb push_back // Function to find the content// of the polynomialint findContent(int arr[], int n){ int content = arr[0]; // Loop to iterate over the // elements of the array for (int i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} // Driver Codeint main(){ int n = 3; int arr[] = { 9, 6, 12 }; // Function call cout << findContent(arr, n); return 0;}",
"e": 25876,
"s": 25212,
"text": null
},
{
"code": "// Java implementation to find the// content of the polynomialclass GFG{ // Function to find the content// of the polynomialstatic int findContent(int arr[], int n){ int content = arr[0]; // Loop to iterate over the // elements of the array for(int i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} static int __gcd(int a, int b){ return b == 0 ? a : __gcd(b, a % b); } // Driver Codepublic static void main(String[] args){ int n = 3; int arr[] = { 9, 6, 12 }; // Function call System.out.print(findContent(arr, n));}} // This code is contributed by sapnasingh4991",
"e": 26613,
"s": 25876,
"text": null
},
{
"code": "# Python3 implementation to find the# content of the polynomialfrom math import gcd # Function to find the content# of the polynomialdef findContent(arr, n): content = arr[0] # Loop to iterate over the # elements of the array for i in range(1, n): # __gcd(a, b) is a inbuilt # function for Greatest # Common Divisor content = gcd(content, arr[i]) return content # Driver Codeif __name__ == '__main__': n = 3 arr = [ 9, 6, 12 ] # Function call print(findContent(arr, n)) # This code is contributed by mohit kumar 29",
"e": 27197,
"s": 26613,
"text": null
},
{
"code": "// C# implementation to find the// content of the polynomialusing System; class GFG{ // Function to find the content// of the polynomialstatic int findContent(int []arr, int n){ int content = arr[0]; // Loop to iterate over the // elements of the array for(int i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} static int __gcd(int a, int b){ return b == 0 ? a : __gcd(b, a % b); } // Driver Codepublic static void Main(String[] args){ int n = 3; int []arr = { 9, 6, 12 }; // Function call Console.Write(findContent(arr, n));}} // This code is contributed by PrinciRaj1992",
"e": 27942,
"s": 27197,
"text": null
},
{
"code": "<script> // Javascript implementation to find the// content of the polynomial // Function to find the content// of the polynomialfunction findContent(arr, n){ var content = arr[0]; // Loop to iterate over the // elements of the array for(var i = 1; i < n; i++) { //__gcd(a, b) is a inbuilt // function for Greatest // Common Divisor content = __gcd(content, arr[i]); } return content;} function __gcd(a, b){ return b == 0 ? a : __gcd(b, a % b); } // Driver Codevar n = 3;var arr = [ 9, 6, 12 ]; // Function calldocument.write(findContent(arr, n)); // This code is contributed by kirti </script>",
"e": 28602,
"s": 27942,
"text": null
},
{
"code": null,
"e": 28607,
"s": 28605,
"text": "3"
},
{
"code": null,
"e": 28626,
"s": 28611,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 28641,
"s": 28626,
"text": "sapnasingh4991"
},
{
"code": null,
"e": 28655,
"s": 28641,
"text": "princiraj1992"
},
{
"code": null,
"e": 28668,
"s": 28655,
"text": "Kirti_Mangal"
},
{
"code": null,
"e": 28676,
"s": 28668,
"text": "GCD-LCM"
},
{
"code": null,
"e": 28693,
"s": 28676,
"text": "maths-polynomial"
},
{
"code": null,
"e": 28706,
"s": 28693,
"text": "Mathematical"
},
{
"code": null,
"e": 28719,
"s": 28706,
"text": "Mathematical"
},
{
"code": null,
"e": 28817,
"s": 28719,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28826,
"s": 28817,
"text": "Comments"
},
{
"code": null,
"e": 28839,
"s": 28826,
"text": "Old Comments"
},
{
"code": null,
"e": 28884,
"s": 28839,
"text": "Find all factors of a natural number | Set 1"
},
{
"code": null,
"e": 28916,
"s": 28884,
"text": "Check if a number is Palindrome"
},
{
"code": null,
"e": 28960,
"s": 28916,
"text": "Program to print prime numbers from 1 to N."
},
{
"code": null,
"e": 28994,
"s": 28960,
"text": "Program to add two binary strings"
},
{
"code": null,
"e": 29019,
"s": 28994,
"text": "Fizz Buzz Implementation"
},
{
"code": null,
"e": 29052,
"s": 29019,
"text": "Program to multiply two matrices"
},
{
"code": null,
"e": 29091,
"s": 29052,
"text": "Find pair with maximum GCD in an array"
},
{
"code": null,
"e": 29142,
"s": 29091,
"text": "Find Union and Intersection of two unsorted arrays"
},
{
"code": null,
"e": 29213,
"s": 29142,
"text": "Count all possible paths from top left to bottom right of a mXn matrix"
}
] |
AWT CheckBoxGroup Class
|
The CheckboxGroup class is used to group a set of checkboxes; exactly one checkbox in the group can be in the "on" state at any given time.
Following is the declaration for the java.awt.CheckboxGroup class:
public class CheckboxGroup
extends Object
implements Serializable
CheckboxGroup()
Creates a new instance of CheckboxGroup.
Checkbox getCurrent()
Deprecated. As of JDK version 1.1, replaced by getSelectedCheckbox().
Checkbox getSelectedCheckbox()
Gets the current choice from this check box group.
void setCurrent(Checkbox box)
Deprecated. As of JDK version 1.1, replaced by setSelectedCheckbox(Checkbox).
void setSelectedCheckbox(Checkbox box)
Sets the currently selected check box in this group to be the specified check box.
String toString()
Returns a string representation of this check box group, including the value of its current selection.
This class inherits methods from the following classes:
java.lang.Object
Create the following java program using any editor of your choice in say D:/ > AWT > com > tutorialspoint > gui >
package com.tutorialspoint.gui;
import java.awt.*;
import java.awt.event.*;
public class AwtControlDemo {
private Frame mainFrame;
private Label headerLabel;
private Label statusLabel;
private Panel controlPanel;
public AwtControlDemo(){
prepareGUI();
}
public static void main(String[] args){
AwtControlDemo awtControlDemo = new AwtControlDemo();
awtControlDemo.showCheckBoxGroupDemo();
}
private void prepareGUI(){
mainFrame = new Frame("Java AWT Examples");
mainFrame.setSize(400,400);
mainFrame.setLayout(new GridLayout(3, 1));
mainFrame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent windowEvent){
System.exit(0);
}
});
headerLabel = new Label();
headerLabel.setAlignment(Label.CENTER);
statusLabel = new Label();
statusLabel.setAlignment(Label.CENTER);
statusLabel.setSize(350,100);
controlPanel = new Panel();
controlPanel.setLayout(new FlowLayout());
mainFrame.add(headerLabel);
mainFrame.add(controlPanel);
mainFrame.add(statusLabel);
mainFrame.setVisible(true);
}
private void showCheckBoxGroupDemo(){
headerLabel.setText("Control in action: CheckBoxGroup");
CheckboxGroup fruitGroup = new CheckboxGroup();
Checkbox chkApple = new Checkbox("Apple",fruitGroup,true);
Checkbox chkMango = new Checkbox("Mango",fruitGroup,false);
Checkbox chkPeer = new Checkbox("Peer",fruitGroup,false);
statusLabel.setText("Apple Checkbox: checked");
chkApple.addItemListener(new ItemListener() {
public void itemStateChanged(ItemEvent e) {
statusLabel.setText("Apple Checkbox: checked");
}
});
chkMango.addItemListener(new ItemListener() {
public void itemStateChanged(ItemEvent e) {
statusLabel.setText("Mango Checkbox: checked");
}
});
chkPeer.addItemListener(new ItemListener() {
public void itemStateChanged(ItemEvent e) {
statusLabel.setText("Peer Checkbox: checked");
}
});
controlPanel.add(chkApple);
controlPanel.add(chkMango);
controlPanel.add(chkPeer);
mainFrame.setVisible(true);
}
}
Compile the program using command prompt. Go to D:/ > AWT and type the following command.
D:\AWT>javac com\tutorialspoint\gui\AwtControlDemo.java
If no error comes that means compilation is successful. Run the program using following command.
D:\AWT>java com.tutorialspoint.gui.AwtControlDemo
Verify the following output
|
[
{
"code": null,
"e": 1809,
"s": 1747,
"text": "The CheckboxGroup class is used to group the set of checkbox."
},
{
"code": null,
"e": 1872,
"s": 1809,
"text": "Following is the declaration for java.awt.CheckboxGroup class:"
},
{
"code": null,
"e": 1947,
"s": 1872,
"text": "public class CheckboxGroup\n extends Object\n implements Serializable"
},
{
"code": null,
"e": 1967,
"s": 1947,
"text": "CheckboxGroup() () "
},
{
"code": null,
"e": 2008,
"s": 1967,
"text": "Creates a new instance of CheckboxGroup."
},
{
"code": null,
"e": 2031,
"s": 2008,
"text": "Checkbox getCurrent() "
},
{
"code": null,
"e": 2101,
"s": 2031,
"text": "Deprecated. As of JDK version 1.1, replaced by getSelectedCheckbox()."
},
{
"code": null,
"e": 2133,
"s": 2101,
"text": "Checkbox getSelectedCheckbox() "
},
{
"code": null,
"e": 2184,
"s": 2133,
"text": "Gets the current choice from this check box group."
},
{
"code": null,
"e": 2214,
"s": 2184,
"text": "void setCurrent(Checkbox box)"
},
{
"code": null,
"e": 2293,
"s": 2214,
"text": " Deprecated. As of JDK version 1.1, replaced by setSelectedCheckbox(Checkbox)."
},
{
"code": null,
"e": 2333,
"s": 2293,
"text": "void setSelectedCheckbox(Checkbox box) "
},
{
"code": null,
"e": 2416,
"s": 2333,
"text": "Sets the currently selected check box in this group to be the specified check box."
},
{
"code": null,
"e": 2435,
"s": 2416,
"text": "String\ttoString() "
},
{
"code": null,
"e": 2538,
"s": 2435,
"text": "Returns a string representation of this check box group, including the value of its current selection."
},
{
"code": null,
"e": 2594,
"s": 2538,
"text": "This class inherits methods from the following classes:"
},
{
"code": null,
"e": 2611,
"s": 2594,
"text": "java.lang.Object"
},
{
"code": null,
"e": 2628,
"s": 2611,
"text": "java.lang.Object"
},
{
"code": null,
"e": 2742,
"s": 2628,
"text": "Create the following java program using any editor of your choice in say D:/ > AWT > com > tutorialspoint > gui >"
},
{
"code": null,
"e": 5096,
"s": 2742,
"text": "package com.tutorialspoint.gui;\n\nimport java.awt.*;\nimport java.awt.event.*;\n\npublic class AwtControlDemo {\n\n private Frame mainFrame;\n private Label headerLabel;\n private Label statusLabel;\n private Panel controlPanel;\n\n public AwtControlDemo(){\n prepareGUI();\n }\n\n public static void main(String[] args){\n AwtControlDemo awtControlDemo = new AwtControlDemo();\n awtControlDemo.showCheckBoxGroupDemo();\n }\n\n private void prepareGUI(){\n mainFrame = new Frame(\"Java AWT Examples\");\n mainFrame.setSize(400,400);\n mainFrame.setLayout(new GridLayout(3, 1));\n mainFrame.addWindowListener(new WindowAdapter() {\n public void windowClosing(WindowEvent windowEvent){\n System.exit(0);\n } \n }); \n headerLabel = new Label();\n headerLabel.setAlignment(Label.CENTER);\n statusLabel = new Label(); \n statusLabel.setAlignment(Label.CENTER);\n statusLabel.setSize(350,100);\n\n controlPanel = new Panel();\n controlPanel.setLayout(new FlowLayout());\n\n mainFrame.add(headerLabel);\n mainFrame.add(controlPanel);\n mainFrame.add(statusLabel);\n mainFrame.setVisible(true); \n }\n\n private void showCheckBoxGroupDemo(){\n \n headerLabel.setText(\"Control in action: CheckBoxGroup\"); \n\n CheckboxGroup fruitGroup = new CheckboxGroup();\n\n Checkbox chkApple = new Checkbox(\"Apple\",fruitGroup,true);\n Checkbox chkMango = new Checkbox(\"Mango\",fruitGroup,false);\n Checkbox chkPeer = new Checkbox(\"Peer\",fruitGroup,false);\n\n statusLabel.setText(\"Apple Checkbox: checked\");\n chkApple.addItemListener(new ItemListener() {\n public void itemStateChanged(ItemEvent e) { \n statusLabel.setText(\"Apple Checkbox: checked\");\n }\n });\n\n chkMango.addItemListener(new ItemListener() {\n public void itemStateChanged(ItemEvent e) {\n statusLabel.setText(\"Mango Checkbox: checked\");\n }\n });\n\n chkPeer.addItemListener(new ItemListener() {\n public void itemStateChanged(ItemEvent e) {\n statusLabel.setText(\"Peer Checkbox: checked\");\n }\n });\n\n 
controlPanel.add(chkApple);\n controlPanel.add(chkMango); \n controlPanel.add(chkPeer); \n\n mainFrame.setVisible(true); \n }\n}"
},
{
"code": null,
"e": 5187,
"s": 5096,
"text": "Compile the program using command prompt. Go to D:/ > AWT and type the following command."
},
{
"code": null,
"e": 5243,
"s": 5187,
"text": "D:\\AWT>javac com\\tutorialspoint\\gui\\AwtControlDemo.java"
},
{
"code": null,
"e": 5340,
"s": 5243,
"text": "If no error comes that means compilation is successful. Run the program using following command."
},
{
"code": null,
"e": 5390,
"s": 5340,
"text": "D:\\AWT>java com.tutorialspoint.gui.AwtControlDemo"
},
{
"code": null,
"e": 5418,
"s": 5390,
"text": "Verify the following output"
},
{
"code": null,
"e": 5451,
"s": 5418,
"text": "\n 13 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 5459,
"s": 5451,
"text": " EduOLC"
},
{
"code": null,
"e": 5466,
"s": 5459,
"text": " Print"
},
{
"code": null,
"e": 5477,
"s": 5466,
"text": " Add Notes"
}
] |
How can I capture network traffic of a specific page using Selenium?
|
We can capture network traffic on a specific page using Selenium webdriver in Python. To achieve this, we take the help of the JavaScript Executor. Selenium can execute JavaScript commands with the help of the execute_script method.
JavaScript command to be executed is passed as a parameter to this method. To capture the network traffic, we have to pass the command: return window.performance.getEntries() as a parameter to the execute_script method.
r = driver.execute_script("return window.performance.getEntries();")
Code Implementation
from selenium import webdriver
#configure chromedriver path
driver = webdriver.Chrome(executable_path='../drivers/chromedriver')
#implicit wait
driver.implicitly_wait(0.5)
#url launch
driver.get("https://www.google.com/")
#JavaScript command to traffic
r = driver.execute_script("return window.performance.getEntries();")
for res in r:
    print(res['name'])
#browser close
driver.close()
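Each entry returned by getEntries() behaves like a dict with timing fields, so the captured list can be post-processed in plain Python. A minimal sketch of such post-processing (the sample entries below are hypothetical stand-ins, not real browser output):

```python
# Post-process performance entries captured via execute_script.
# The entries here are hypothetical stand-ins for real browser output,
# which carries the same "name", "startTime" and "duration" fields.
entries = [
    {"name": "https://www.google.com/", "startTime": 0.0, "duration": 120.5},
    {"name": "https://www.google.com/logo.png", "startTime": 30.2, "duration": 45.0},
]

# Sort by duration to surface the slowest resources first.
slowest = sorted(entries, key=lambda e: e["duration"], reverse=True)
for e in slowest:
    print(f'{e["duration"]:8.1f} ms  {e["name"]}')
```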
|
[
{
"code": null,
"e": 1295,
"s": 1062,
"text": "We can capture network traffic on a specific page using Selenium webdriver in Python. To achieve this, we take the help of the JavaScript Executor. Selenium can execute JavaScript commands with the help of the execute_script method."
},
{
"code": null,
"e": 1515,
"s": 1295,
"text": "JavaScript command to be executed is passed as a parameter to this method. To capture the network traffic, we have to pass the command: return window.performance.getEntries() as a parameter to the execute_script method."
},
{
"code": null,
"e": 1584,
"s": 1515,
"text": "r = driver.execute_script(\"return window.performance.getEntries();\")"
},
{
"code": null,
"e": 1604,
"s": 1584,
"text": "Code Implementation"
},
{
"code": null,
"e": 1989,
"s": 1604,
"text": "from selenium import webdriver\n#configure chromedriver path\ndriver = webdriver.Chrome(executable_path='../drivers/chromedriver')\n#implicit wait\ndriver.implicitly_wait(0.5)\n#url launch\ndriver.get(\"https://www.google.com/\")\n#JavaScript command to traffic\nr = driver.execute_script(\"return window.performance.getEntries();\")\nfor res in r:\nprint(res['name'])\n#browser close\ndriver.close()"
}
] |
JDBC Class.forName vs DriverManager.registerDriver
|
To connect with a database using JDBC, you need to get the driver for the respective database and register it. You can register a database driver in two ways −
Using Class.forName() method − The forName() method of the class named Class accepts a class name as a String parameter and loads it into memory. As soon as the class is loaded into memory, it gets registered automatically.
Class.forName("com.mysql.jdbc.Driver");
Following JDBC program establishes connection with MySQL database. Here, we are trying to register the MySQL driver using the forName() method.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
public class RegisterDriverExample {
   public static void main(String args[]) throws SQLException, ClassNotFoundException {
//Registering the Driver
Class.forName("com.mysql.jdbc.Driver");
//Getting the connection
String mysqlUrl = "jdbc:mysql://localhost/mydatabase";
Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
System.out.println("Connection established: "+con);
}
}
Connection established: com.mysql.jdbc.JDBC4Connection@4fccd51b
Using the registerDriver() method − The registerDriver() method of the DriverManager class accepts an object of the driver class as a parameter and registers it with the JDBC driver manager.
Driver myDriver = new com.mysql.jdbc.Driver();
DriverManager.registerDriver(myDriver);
Following JDBC program establishes connection with MySQL database. Here, we are trying to register the MySQL driver using the registerDriver() method.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
public class RegisterDriverExample {
public static void main(String args[]) throws SQLException {
//Registering the Driver
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
//Getting the connection
String mysqlUrl = "jdbc:mysql://localhost/mydatabase";
Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
System.out.println("Connection established: "+con);
}
}
Connection established: com.mysql.jdbc.JDBC4Connection@4fccd51b
|
[
{
"code": null,
"e": 1236,
"s": 1062,
"text": "To connect with a database using JDBC you need to select get the driver for the respective database and register the driver. You can register a database driver in two ways −"
},
{
"code": null,
"e": 1455,
"s": 1236,
"text": "Using Class.forName() method − The forName() method of the class named Class accepts a class name as a String parameter and loads it into the memory, Soon the is loaded into the memory it gets registered automatically."
},
{
"code": null,
"e": 1495,
"s": 1455,
"text": "Class.forName(\"com.mysql.jdbc.Driver\");"
},
{
"code": null,
"e": 1639,
"s": 1495,
"text": "Following JDBC program establishes connection with MySQL database. Here, we are trying to register the MySQL driver using the forName() method."
},
{
"code": null,
"e": 2145,
"s": 1639,
"text": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\npublic class RegisterDriverExample {\n public static void main(String args[]) throws SQLException {\n //Registering the Driver\n Class.forName(\"com.mysql.jdbc.Driver\");\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/mydatabase\";\n Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n System.out.println(\"Connection established: \"+con);\n }\n}"
},
{
"code": null,
"e": 2209,
"s": 2145,
"text": "Connection established: com.mysql.jdbc.JDBC4Connection@4fccd51b"
},
{
"code": null,
"e": 2400,
"s": 2209,
"text": "Using the registerDriver() method − The registerDriver() method of the DriverManager class accepts an object of the diver class as a parameter and, registers it with the JDBC driver manager."
},
{
"code": null,
"e": 2487,
"s": 2400,
"text": "Driver myDriver = new com.mysql.jdbc.Driver();\nDriverManager.registerDriver(myDriver);"
},
{
"code": null,
"e": 2638,
"s": 2487,
"text": "Following JDBC program establishes connection with MySQL database. Here, we are trying to register the MySQL driver using the registerDriver() method."
},
{
"code": null,
"e": 3163,
"s": 2638,
"text": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\npublic class RegisterDriverExample {\n public static void main(String args[]) throws SQLException {\n //Registering the Driver\n DriverManager.registerDriver(new com.mysql.jdbc.Driver());\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/mydatabase\";\n Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n System.out.println(\"Connection established: \"+con);\n }\n}"
},
{
"code": null,
"e": 3227,
"s": 3163,
"text": "Connection established: com.mysql.jdbc.JDBC4Connection@4fccd51b"
}
] |
C++ Program to Calculate Sum of Natural Numbers
|
The natural numbers are the positive integers starting from 1.
The sequence of natural numbers is −
1, 2, 3, 4, 5, 6, 7, 8, 9, 10......
Sum of the first n natural numbers can be calculated using the for loop or the formula.
Programs specifying both of these methods are given as follows −
The program to calculate the sum of n natural numbers using for loop is given as follows.
#include<iostream>
using namespace std;
int main() {
int n=5, sum=0, i;
for(i=1;i<=n;i++)
sum=sum+i;
cout<<"Sum of first "<<n<<" natural numbers is "<<sum;
return 0;
}
Sum of first 5 natural numbers is 15
In the above program, a for loop is run from 1 to n. In each iteration of the loop, the value of i is added to the sum. So, the sum of the first n natural numbers is obtained. This is demonstrated by the following code snippet.
for(i=1;i<=n;i++)
sum=sum+i;
The formula to find the sum of first n natural numbers is as follows.
sum = n(n+1)/2
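This closed form follows from Gauss's pairing argument: write the sum forwards and backwards, then add the two rows term by term.

S  = 1     + 2     + ... + n
S  = n     + (n-1) + ... + 1
2S = (n+1) + (n+1) + ... + (n+1)   (n terms)

So 2S = n(n+1), giving sum = n(n+1)/2.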
The program to calculate the sum of n natural numbers using the above formula is given as follows.
#include<iostream>
using namespace std;
int main() {
int n=5, sum;
sum = n*(n+1)/2;
cout<<"Sum of first "<<n<<" natural numbers is "<<sum;
return 0;
}
Sum of first 5 natural numbers is 15
In the above program, the sum of the first n natural numbers is calculated using the formula. Then this value is displayed. This is demonstrated by the following code snippet.
sum = n*(n+1)/2;
cout<<"Sum of first "<<n<<" natural numbers is "<<sum;
|
[
{
"code": null,
"e": 1125,
"s": 1062,
"text": "The natural numbers are the positive integers starting from 1."
},
{
"code": null,
"e": 1162,
"s": 1125,
"text": "The sequence of natural numbers is −"
},
{
"code": null,
"e": 1198,
"s": 1162,
"text": "1, 2, 3, 4, 5, 6, 7, 8, 9, 10......"
},
{
"code": null,
"e": 1286,
"s": 1198,
"text": "Sum of the first n natural numbers can be calculated using the for loop or the formula."
},
{
"code": null,
"e": 1351,
"s": 1286,
"text": "Programs specifying both of these methods are given as follows −"
},
{
"code": null,
"e": 1441,
"s": 1351,
"text": "The program to calculate the sum of n natural numbers using for loop is given as follows."
},
{
"code": null,
"e": 1452,
"s": 1441,
"text": " Live Demo"
},
{
"code": null,
"e": 1635,
"s": 1452,
"text": "#include<iostream>\nusing namespace std;\nint main() {\n int n=5, sum=0, i;\n for(i=1;i<=n;i++)\n sum=sum+i;\n cout<<\"Sum of first \"<<n<<\" natural numbers is \"<<sum;\n return 0;\n}"
},
{
"code": null,
"e": 1672,
"s": 1635,
"text": "Sum of first 5 natural numbers is 15"
},
{
"code": null,
"e": 1900,
"s": 1672,
"text": "In the above program, a for loop is run from 1 to n. In each iteration of the loop, the value of i is added to the sum. So, the sum of the first n natural numbers is obtained. This is demonstrated by the following code snippet."
},
{
"code": null,
"e": 1929,
"s": 1900,
"text": "for(i=1;i<=n;i++)\nsum=sum+i;"
},
{
"code": null,
"e": 1999,
"s": 1929,
"text": "The formula to find the sum of first n natural numbers is as follows."
},
{
"code": null,
"e": 2014,
"s": 1999,
"text": "sum = n(n+1)/2"
},
{
"code": null,
"e": 2113,
"s": 2014,
"text": "The program to calculate the sum of n natural numbers using the above formula is given as follows."
},
{
"code": null,
"e": 2124,
"s": 2113,
"text": " Live Demo"
},
{
"code": null,
"e": 2287,
"s": 2124,
"text": "#include<iostream>\nusing namespace std;\nint main() {\n int n=5, sum;\n sum = n*(n+1)/2;\n cout<<\"Sum of first \"<<n<<\" natural numbers is \"<<sum;\n return 0;\n}"
},
{
"code": null,
"e": 2324,
"s": 2287,
"text": "Sum of first 5 natural numbers is 15"
},
{
"code": null,
"e": 2500,
"s": 2324,
"text": "In the above program, the sum of the first n natural numbers is calculated using the formula. Then this value is displayed. This is demonstrated by the following code snippet."
},
{
"code": null,
"e": 2572,
"s": 2500,
"text": "sum = n*(n+1)/2;\ncout<<\"Sum of first \"<<n<<\" natural numbers is \"<<sum;"
}
] |
How to iterate two Lists or Arrays with one foreach statement in C#?
|
Set two arrays.
var val = new [] { 20, 40, 60};
var str = new [] { "ele1", "ele2", "ele3"};
Use the Zip() extension method (from LINQ) to process the two arrays in parallel.
var res = val.Zip(str, (n, w) => new { Number = n, Word = w });
The call above pairs each int element with its corresponding string element from the two arrays.
Now, use foreach to iterate the two arrays −
using System;
using System.Collections.Generic;
using System.Linq;
public class Demo {
public static void Main() {
var val = new [] { 20, 40, 60};
var str = new [] { "ele1", "ele2", "ele3"};
var res = val.Zip(str, (n, w) => new { Number = n, Word = w });
foreach(var a in res) {
Console.WriteLine(a.Number + a.Word);
}
}
}
20ele1
40ele2
60ele3
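For comparison, Python's built-in zip performs the same pairwise iteration natively (a side note, not part of the original C# answer):

```python
# Pairwise iteration over two sequences, mirroring the C# Zip example.
val = [20, 40, 60]
strs = ["ele1", "ele2", "ele3"]

# zip pairs elements positionally, just like Enumerable.Zip in LINQ.
res = [f"{n}{w}" for n, w in zip(val, strs)]
for item in res:
    print(item)   # prints 20ele1, 40ele2, 60ele3 on separate lines
```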
|
[
{
"code": null,
"e": 1078,
"s": 1062,
"text": "Set two arrays."
},
{
"code": null,
"e": 1154,
"s": 1078,
"text": "var val = new [] { 20, 40, 60};\nvar str = new [] { \"ele1\", \"ele2\", \"ele3\"};"
},
{
"code": null,
"e": 1214,
"s": 1154,
"text": "Use the zip() method to process the two arrays in parallel."
},
{
"code": null,
"e": 1278,
"s": 1214,
"text": "var res = val.Zip(str, (n, w) => new { Number = n, Word = w });"
},
{
"code": null,
"e": 1355,
"s": 1278,
"text": "The above fetches both the arrays with int and string elements respectively."
},
{
"code": null,
"e": 1400,
"s": 1355,
"text": "Now, use foreach to iterate the two arrays −"
},
{
"code": null,
"e": 1411,
"s": 1400,
"text": " Live Demo"
},
{
"code": null,
"e": 1781,
"s": 1411,
"text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\n\npublic class Demo {\n public static void Main() {\n var val = new [] { 20, 40, 60};\n var str = new [] { \"ele1\", \"ele2\", \"ele3\"};\n var res = val.Zip(str, (n, w) => new { Number = n, Word = w });\n\n foreach(var a in res) {\n Console.WriteLine(a.Number + a.Word);\n }\n }\n}"
},
{
"code": null,
"e": 1802,
"s": 1781,
"text": "20ele1\n40ele2\n60ele3"
}
] |
Check if words are sorted according to new order of alphabets - GeeksforGeeks
|
09 Dec, 2018
Given a sequence of Words and the Order of the alphabet. The order of the alphabet is some permutation of lowercase letters. The task is to check whether the given words are sorted lexicographically according to order of alphabets. Return “True” if it is, otherwise “False”.
Examples:
Input : Words = [“hello”, “leetcode”], Order = “habcldefgijkmnopqrstuvwxyz”Output : true
Input : Words = [“word”, “world”, “row”], Order = “abcworldefghijkmnpqstuvxyz”Output : falseExplanation : As ‘d’ comes after ‘l’ in Order, thus words[0] > words[1], hence the sequence is unsorted.
Approach : The words are sorted lexicographically if and only if adjacent words are sorted. This is because order is transitive, i.e. if a <= b and b <= c, then a <= c.
So our goal is to check whether all adjacent words a and b have a <= b.
To check whether for two adjacent words a and b, a <= b holds we can find their first difference. For example, "seen" and "scene" have a first difference of e and c. After this, we compare these characters to the index in order.
We have to deal with the blank character effectively. If for example, we are comparing "add" to "addition", this is a first difference of (NULL) vs "i".
Below is the implementation of above approach :
Python3
# Function to check whether Words are sorted in given Order
def isAlienSorted(Words, Order):
    Order_index = {c: i for i, c in enumerate(Order)}

    for i in range(len(Words) - 1):
        word1 = Words[i]
        word2 = Words[i + 1]

        # Find the first difference word1[k] != word2[k].
        for k in range(min(len(word1), len(word2))):

            # If they compare false then it's not sorted.
            if word1[k] != word2[k]:
                if Order_index[word1[k]] > Order_index[word2[k]]:
                    return False
                break
        else:
            # If we didn't find a first difference, the
            # Words are like ("add", "addition").
            if len(word1) > len(word2):
                return False

    return True

# Program Code
Words = ["hello", "leetcode"]
Order = "habcldefgijkmnopqrstuvwxyz"

# Function call to print required answer
print(isAlienSorted(Words, Order))
True
Time Complexity: O(N), where N is the total number of characters in all words.
Auxiliary Space: O(1)
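An equivalent, more compact check (an alternative sketch, not the original solution) maps every word to its list of order indices and tests whether the mapped list is already sorted; Python compares lists of ints lexicographically, and a shorter prefix compares less than a longer word, which handles the ("add", "addition") case automatically:

```python
# Alternative: compare the word list against its sorted version, using the
# position of each character in the custom order as the comparison key.
def is_sorted_in_order(words, order):
    idx = {c: i for i, c in enumerate(order)}
    keyed = [[idx[c] for c in w] for w in words]
    return keyed == sorted(keyed)

print(is_sorted_in_order(["hello", "leetcode"], "habcldefgijkmnopqrstuvwxyz"))  # True
print(is_sorted_in_order(["word", "world", "row"], "abcworldefghijkmnpqstuvxyz"))  # False
```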
lexicographic-ordering
Sorting
Strings
Strings
Sorting
|
[
{
"code": null,
"e": 24636,
"s": 24608,
"text": "\n09 Dec, 2018"
},
{
"code": null,
"e": 24911,
"s": 24636,
"text": "Given a sequence of Words and the Order of the alphabet. The order of the alphabet is some permutation of lowercase letters. The task is to check whether the given words are sorted lexicographically according to order of alphabets. Return “True” if it is, otherwise “False”."
},
{
"code": null,
"e": 24921,
"s": 24911,
"text": "Examples:"
},
{
"code": null,
"e": 25010,
"s": 24921,
"text": "Input : Words = [“hello”, “leetcode”], Order = “habcldefgijkmnopqrstuvwxyz”Output : true"
},
{
"code": null,
"e": 25207,
"s": 25010,
"text": "Input : Words = [“word”, “world”, “row”], Order = “abcworldefghijkmnpqstuvxyz”Output : falseExplanation : As ‘d’ comes after ‘l’ in Order, thus words[0] > words[1], hence the sequence is unsorted."
},
{
"code": null,
"e": 25377,
"s": 25207,
"text": "Approach : The words are sorted lexicographically if and only if adjacent words are sorted. This is because order is transitive i:e if a <= b and b <= c, implies a <= c."
},
{
"code": null,
"e": 25449,
"s": 25377,
"text": "So our goal it to check whether all adjacent words a and b have a <= b."
},
{
"code": null,
"e": 25678,
"s": 25449,
"text": "To check whether for two adjacent words a and b, a <= b holds we can find their first difference. For example, \"seen\" and \"scene\" have a first difference of e and c. After this, we compare these characters to the index in order."
},
{
"code": null,
"e": 25831,
"s": 25678,
"text": "We have to deal with the blank character effectively. If for example, we are comparing \"add\" to \"addition\", this is a first difference of (NULL) vs \"i\"."
},
{
"code": null,
"e": 25879,
"s": 25831,
"text": "Below is the implementation of above approach :"
},
{
"code": null,
"e": 25887,
"s": 25879,
"text": "Python3"
},
{
"code": "# Function to check whether Words are sorted in given Orderdef isAlienSorted(Words, Order): Order_index = {c: i for i, c in enumerate(Order)} for i in range(len(Words) - 1): word1 = Words[i] word2 = Words[i + 1] # Find the first difference word1[k] != word2[k]. for k in range(min(len(word1), len(word2))): # If they compare false then it's not sorted. if word1[k] != word2[k]: if Order_index[word1[k]] > Order_index[word2[k]]: return False break else: # If we didn't find a first difference, the # Words are like (\"add\", \"addition\"). if len(word1) > len(word2): return False return True # Program CodeWords = [\"hello\", \"leetcode\"]Order = \"habcldefgijkmnopqrstuvwxyz\" # Function call to print required answerprint(isAlienSorted(Words, Order))",
"e": 26807,
"s": 25887,
"text": null
},
{
"code": null,
"e": 26813,
"s": 26807,
"text": "True\n"
},
{
"code": null,
"e": 26892,
"s": 26813,
"text": "Time Complexity: O(N), where N is the total number of characters in all words."
},
{
"code": null,
"e": 26914,
"s": 26892,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 26937,
"s": 26914,
"text": "lexicographic-ordering"
},
{
"code": null,
"e": 26945,
"s": 26937,
"text": "Sorting"
},
{
"code": null,
"e": 26953,
"s": 26945,
"text": "Strings"
},
{
"code": null,
"e": 26961,
"s": 26953,
"text": "Strings"
},
{
"code": null,
"e": 26969,
"s": 26961,
"text": "Sorting"
},
{
"code": null,
"e": 27067,
"s": 26969,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27076,
"s": 27067,
"text": "Comments"
},
{
"code": null,
"e": 27089,
"s": 27076,
"text": "Old Comments"
},
{
"code": null,
"e": 27098,
"s": 27089,
"text": "HeapSort"
},
{
"code": null,
"e": 27142,
"s": 27098,
"text": "Time Complexities of all Sorting Algorithms"
},
{
"code": null,
"e": 27153,
"s": 27142,
"text": "Radix Sort"
},
{
"code": null,
"e": 27177,
"s": 27153,
"text": "Merge two sorted arrays"
},
{
"code": null,
"e": 27208,
"s": 27177,
"text": "Python Program for Bubble Sort"
},
{
"code": null,
"e": 27233,
"s": 27208,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 27279,
"s": 27233,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 27294,
"s": 27279,
"text": "C++ Data Types"
},
{
"code": null,
"e": 27328,
"s": 27294,
"text": "Longest Common Subsequence | DP-4"
}
] |
Auto comment on a facebook post using JavaScript - GeeksforGeeks
|
29 Jul, 2020
In this article, we are going to learn how to comment automatically on a Facebook post. You can use this method to wish your friends a happy birthday or just comment on anything. It is useful when you want to comment a number of times on a post. You just need to specify the count and the message, which will be posted automatically at a fixed time interval. Also, you don’t need to install anything for this method to work.
Approach:
Initialize count and message value.
Then define an interval function which will be called each time.
Make an input variable that points to the input field of comment section.
Make a submit variable that points to the comment button.
Since, comment button is disabled by default, so first enable it.
Set the message to be written in input.
Click on the submit.
Decrement the count.
If count become zero, then clear the interval function.
Set the time interval to 10000 ms, which means the function will be called every 10 seconds.
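The countdown logic in these steps can be exercised outside the browser. A minimal Python simulation (hypothetical helper names; the post callback stands in for filling the input field and clicking submit, and the real 10-second delay is omitted):

```python
# Simulate the setInterval countdown: call `post` once per "tick"
# until the counter reaches zero, then stop (the clearInterval step).
def run_auto_comments(count, message, post):
    while count > 0:
        post(message)   # stands in for input.value = message; submit.click()
        count -= 1      # one tick done; loop ends once count hits zero

posted = []
run_auto_comments(3, "Hi", posted.append)
print(posted)
```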
Below are the steps:
Go to facebook page using m.facebook.com
Sign in and open any post.
Open developer mode in Chrome by pressing Ctrl+Shift+I
Navigate to the console.
Now, run the below script.
var count = 100;
var message = "Hi";
var loop = setInterval(function(){
    var input = document.getElementsByName("comment_text")[0];
    var submit = document.querySelector('button[type="submit"]');
    submit.disabled = false;
    input.value = message;
    submit.click();
    count -= 1;
    if(count == 0) {
        clearInterval(loop);
    }
}, 10000);
Output:
Output
Note: Please ensure that a stable internet connection is available so that the script runs smoothly. Also ensure you visit Facebook via m.facebook.com, not www.facebook.com, because this script works only on the mobile version of Facebook.
This tutorial is for educational purposes only; please don’t use it to disturb anyone or in any other unethical way.
JavaScript-Misc
GBlog
JavaScript
TechTips
Web Technologies
|
[
{
"code": null,
"e": 24518,
"s": 24490,
"text": "\n29 Jul, 2020"
},
{
"code": null,
"e": 24934,
"s": 24518,
"text": "In this article, we are going to learn how to comment automatically in a Facebook post. You can use this method to wish your friends a happy birthday or just comment on anything. It is useful when you want to comment a number of times on a post. You just need to specify the count and message that will be automatically commented on a time-interval. Also, you don’t need to install anything for this method to work."
},
{
"code": null,
"e": 24944,
"s": 24934,
"text": "Approach:"
},
{
"code": null,
"e": 25466,
"s": 24944,
"text": "Initialize count and message value.Then define an interval function which will be called each time.Make an input variable that points to the input field of comment section.Make a submit variable that points to the comment button.Since, comment button is disabled by default, so first enable it.Set the message to be written in input.Click on the submit.Decrement the count.If count become zero, then clear the interval function.Set the time interval of 10000ms, it means the function will be called after each 10 seconds."
},
{
"code": null,
"e": 25502,
"s": 25466,
"text": "Initialize count and message value."
},
{
"code": null,
"e": 25567,
"s": 25502,
"text": "Then define an interval function which will be called each time."
},
{
"code": null,
"e": 25641,
"s": 25567,
"text": "Make an input variable that points to the input field of comment section."
},
{
"code": null,
"e": 25699,
"s": 25641,
"text": "Make a submit variable that points to the comment button."
},
{
"code": null,
"e": 25765,
"s": 25699,
"text": "Since, comment button is disabled by default, so first enable it."
},
{
"code": null,
"e": 25805,
"s": 25765,
"text": "Set the message to be written in input."
},
{
"code": null,
"e": 25826,
"s": 25805,
"text": "Click on the submit."
},
{
"code": null,
"e": 25847,
"s": 25826,
"text": "Decrement the count."
},
{
"code": null,
"e": 25903,
"s": 25847,
"text": "If count become zero, then clear the interval function."
},
{
"code": null,
"e": 25997,
"s": 25903,
"text": "Set the time interval of 10000ms, it means the function will be called after each 10 seconds."
},
{
"code": null,
"e": 26018,
"s": 25997,
"text": "Below are the steps:"
},
{
"code": null,
"e": 26059,
"s": 26018,
"text": "Go to facebook page using m.facebook.com"
},
{
"code": null,
"e": 26086,
"s": 26059,
"text": "Sign in and open any post."
},
{
"code": null,
"e": 26141,
"s": 26086,
"text": "Open developer mode in Chrome by pressing Ctrl+Shift+I"
},
{
"code": null,
"e": 26166,
"s": 26141,
"text": "Navigate to the console."
},
{
"code": null,
"e": 26193,
"s": 26166,
"text": "Now, run the below script."
},
{
"code": "var count = 100;var message = \"Hi\";var loop = setInterval(function(){ var input = document.getElementsByName(\"comment_text\")[0]; var submit = document.querySelector('button[type=\"submit\"]'); submit.disabled = false; input.value = message; submit.click(); count -= 1; if(count == 0) { clearInterval(loop); }}, 10000);",
"e": 26544,
"s": 26193,
"text": null
},
{
"code": null,
"e": 26552,
"s": 26544,
"text": "Output:"
},
{
"code": null,
"e": 26559,
"s": 26552,
"text": "Output"
},
{
"code": null,
"e": 26798,
"s": 26559,
"text": "Note: Please ensure that there is stable internet connection available, so that the script runs smoothly. Also ensure to visit facebook with m.facebook.com not www.facebook.com because this script works on mobile version of facebook only."
},
{
"code": null,
"e": 26909,
"s": 26798,
"text": "This tutorial is for educational purpose only, please don’t use it for disturbing anyone or any unethical way."
},
{
"code": null,
"e": 26925,
"s": 26909,
"text": "JavaScript-Misc"
},
{
"code": null,
"e": 26931,
"s": 26925,
"text": "GBlog"
},
{
"code": null,
"e": 26942,
"s": 26931,
"text": "JavaScript"
},
{
"code": null,
"e": 26951,
"s": 26942,
"text": "TechTips"
},
{
"code": null,
"e": 26968,
"s": 26951,
"text": "Web Technologies"
},
{
"code": null,
"e": 27066,
"s": 26968,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27075,
"s": 27066,
"text": "Comments"
},
{
"code": null,
"e": 27088,
"s": 27075,
"text": "Old Comments"
},
{
"code": null,
"e": 27130,
"s": 27088,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27155,
"s": 27130,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 27190,
"s": 27155,
"text": "GET and POST requests using Python"
},
{
"code": null,
"e": 27252,
"s": 27190,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 27285,
"s": 27252,
"text": "Working with csv files in Python"
},
{
"code": null,
"e": 27330,
"s": 27285,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 27399,
"s": 27330,
"text": "How to calculate the number of days between two dates in javascript?"
},
{
"code": null,
"e": 27460,
"s": 27399,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 27532,
"s": 27460,
"text": "Differences between Functional Components and Class Components in React"
}
] |
BufferedReader read() method in Java with Examples - GeeksforGeeks
|
28 May, 2020
The read() method of the BufferedReader class in Java comes in two overloaded forms:
1. The read() method of BufferedReader class in Java is used to read a single character from the given buffered reader. This read() method reads one character at a time from the buffered stream and returns it as an integer value.
Syntax:
public int read()
throws IOException
Overrides: It overrides the read() method of Reader class.
Parameters: This method does not accept any parameter.
Return value: This method returns the character that is read by this method in the form of an integer. If the buffered stream has ended and there is no character to be read then this method return -1.
Exceptions: This method throws IOException if an I/O error occurs.
Below program illustrates read() method in BufferedReader class in IO package:
Program: Assume the existence of the file “c:/demo.txt”.
// Java program to illustrate
// BufferedReader read() method

import java.io.*;

public class GFG {
    public static void main(String[] args) throws IOException
    {
        // Read the stream 'demo.txt'
        // containing text "GEEKSFORGEEKS"
        FileReader fileReader = new FileReader("c:/demo.txt");

        // Convert fileReader to bufferedReader
        BufferedReader buffReader = new BufferedReader(fileReader);

        while (buffReader.ready()) {
            // Read and print characters one by one
            // by converting into character
            System.out.println("Char :" + (char)buffReader.read());
        }
    }
}
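The same one-character-at-a-time pattern exists in Python's io module, which can be handy when porting or testing the logic without a file on disk. A small comparison sketch (note the difference in the end-of-stream signal: Python's read(1) returns an empty string rather than -1):

```python
import io

# In-memory stream standing in for the "demo.txt" file.
buff = io.StringIO("GEEKSFORGEEKS")

chars = []
while True:
    ch = buff.read(1)   # read a single character, like BufferedReader.read()
    if ch == "":        # empty string signals end of stream (Python's "-1")
        break
    chars.append(ch)

print("".join(chars))   # prints GEEKSFORGEEKS
```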
2. The read(char[ ], int, int) method of BufferedReader class in Java is used to read characters in a part of a specific array.
General Contract: The general contract of this read() method is as follows:
It reads as many characters as possible by repeatedly calling the read() method of the underlying stream.
It continues until the specified number of characters has been read, the end of the file is reached, or the ready() method returns false.
Specified by: This method is specified by the read() method of the Reader class.
Syntax:
public int read(char[] cbuf,
int offset,
int length)
throws IOException
Parameters: This method accepts three parameters:
cbuf – It represents the destination buffer.
offset – It represents the starting point to store the characters.
length – It represents the maximum number of characters that is to be read.
Return value: This method returns the number of characters that is read by this method. If the buffered stream has ended and there is no character to be read then this method return -1.
Exceptions: This method throws IOException if an I/O error occurs.
Below program illustrates read(char, int, int) method in BufferedReader class in IO package:
Program: Assume the existence of the file “c:/demo.txt”.
// Java program to illustrate
// BufferedReader read(char, int, int) method

import java.io.*;

public class GFG {
    public static void main(String[] args) throws IOException
    {
        // Read the stream 'demo.txt'
        // containing text "GEEKSFORGEEKS"
        FileReader fileReader = new FileReader("c:/demo.txt");

        // Convert fileReader to bufferedReader
        BufferedReader buffReader = new BufferedReader(fileReader);

        // Create a character array
        char[] cbuf = new char[13];

        // Initialize and declare offset and length
        int offset = 2;
        int length = 5;

        // Calling read() method on buffered reader
        System.out.println("Total number of characters read: "
                           + buffReader.read(cbuf, offset, length));

        // For each char in cbuf
        for (char c : cbuf) {
            if (c == (char)0)
                c = '-';
            System.out.print((char)c);
        }
    }
}
References:
https://docs.oracle.com/javase/10/docs/api/java/io/BufferedReader.html#read()
https://docs.oracle.com/javase/10/docs/api/java/io/BufferedReader.html#read(char%5B%5D, int, int)
Java-Functions
Java-IO package
Java
|
[
{
"code": null,
"e": 23948,
"s": 23920,
"text": "\n28 May, 2020"
},
{
"code": null,
"e": 24015,
"s": 23948,
"text": "The read() method of BufferedReader class in Java is of two types:"
},
{
"code": null,
"e": 24244,
"s": 24015,
"text": "1. The read() method of BufferedReader class in Java is used to read a single character from the given buffered reader. This read() method reads one character at a time from the buffered stream and return it as an integer value."
},
{
"code": null,
"e": 24252,
"s": 24244,
"text": "Syntax:"
},
{
"code": null,
"e": 24301,
"s": 24252,
"text": "public int read() \n throws IOException\n"
},
{
"code": null,
"e": 24360,
"s": 24301,
"text": "Overrides: It overrides the read() method of Reader class."
},
{
"code": null,
"e": 24415,
"s": 24360,
"text": "Parameters: This method does not accept any parameter."
},
{
"code": null,
"e": 24616,
"s": 24415,
"text": "Return value: This method returns the character that is read by this method in the form of an integer. If the buffered stream has ended and there is no character to be read then this method return -1."
},
{
"code": null,
"e": 24683,
"s": 24616,
"text": "Exceptions: This method throws IOException if an I/O error occurs."
},
{
"code": null,
"e": 24762,
"s": 24683,
"text": "Below program illustrates read() method in BufferedReader class in IO package:"
},
{
"code": null,
"e": 24819,
"s": 24762,
"text": "Program: Assume the existence of the file “c:/demo.txt”."
},
{
"code": "// Java program to illustrate// BufferedReader read() method import java.io.*; public class GFG { public static void main(String[] args) { // Read the stream 'demo.txt' // containing text \"GEEKSFORGEEKS\" FileReader fileReader = new FileReader( \"c:/demo.txt\"); // Convert fileReader to // bufferedReader BufferedReader buffReader = new BufferedReader( fileReader); while (buffReader.ready()) { // Read and print characters one by one // by converting into character System.out.println(\"Char :\" + (char)buffReader.read()); } }}",
"e": 25533,
"s": 24819,
"text": null
},
{
"code": null,
"e": 25661,
"s": 25533,
"text": "2. The read(char[ ], int, int) method of BufferedReader class in Java is used to read characters in a part of a specific array."
},
{
"code": null,
"e": 25738,
"s": 25661,
"text": "General Contract:The general contract of this read() method is as following:"
},
{
"code": null,
"e": 25840,
"s": 25738,
"text": "It reads maximum possible characters by calling again and again the read() method of the main stream."
},
{
"code": null,
"e": 25974,
"s": 25840,
"text": "It continues till the reading of specified number of characters or till the ending of file or till ready() method has returned false."
},
{
"code": null,
"e": 26047,
"s": 25974,
"text": "Specified By: This method is specified by read() method of Reader class."
},
{
"code": null,
"e": 26055,
"s": 26047,
"text": "Syntax:"
},
{
"code": null,
"e": 26169,
"s": 26055,
"text": "public int read(char[] cbuf,\n int offset,\n int length)\n throws IOException\n"
},
{
"code": null,
"e": 26219,
"s": 26169,
"text": "Parameters: This method accepts three parameters:"
},
{
"code": null,
"e": 26264,
"s": 26219,
"text": "cbuf – It represents the destination buffer."
},
{
"code": null,
"e": 26331,
"s": 26264,
"text": "offset – It represents the starting point to store the characters."
},
{
"code": null,
"e": 26407,
"s": 26331,
"text": "length – It represents the maximum number of characters that is to be read."
},
{
"code": null,
"e": 26593,
"s": 26407,
"text": "Return value: This method returns the number of characters that is read by this method. If the buffered stream has ended and there is no character to be read then this method return -1."
},
{
"code": null,
"e": 26660,
"s": 26593,
"text": "Exceptions: This method throws IOException if an I/O error occurs."
},
{
"code": null,
"e": 26753,
"s": 26660,
"text": "Below program illustrates read(char, int, int) method in BufferedReader class in IO package:"
},
{
"code": null,
"e": 26810,
"s": 26753,
"text": "Program: Assume the existence of the file “c:/demo.txt”."
},
{
"code": "// Java program to illustrate// BufferedReader read(char, int, int) method import java.io.*; public class GFG { public static void main(String[] args) { // Read the stream 'demo.txt' // containing text \"GEEKSFORGEEKS\" FileReader fileReader = new FileReader( \"c:/demo.txt\"); // Convert fileReader to // bufferedReader BufferedReader buffReader = new BufferedReader( fileReader); // Create a character array char[] cbuf = new char[13]; // Initialize and declare // offset and length int offset = 2; int length = 5; // Calling read() method // on buffer reader System.out.println( \"Total number of characters read: \" + buffReader.read( cbuf, offset, length)); // For each char in cbuf for (char c : cbuf) { if (c == (char)0) c = '-'; System.out.print((char)c); } }}",
"e": 27850,
"s": 26810,
"text": null
},
{
"code": null,
"e": 28036,
"s": 27850,
"text": "References:https://docs.oracle.com/javase/10/docs/api/java/io/BufferedReader.html#read()https://docs.oracle.com/javase/10/docs/api/java/io/BufferedReader.html#read(char%5B%5D, int, int)"
},
{
"code": null,
"e": 28051,
"s": 28036,
"text": "Java-Functions"
},
{
"code": null,
"e": 28067,
"s": 28051,
"text": "Java-IO package"
},
{
"code": null,
"e": 28072,
"s": 28067,
"text": "Java"
},
{
"code": null,
"e": 28077,
"s": 28072,
"text": "Java"
},
{
"code": null,
"e": 28175,
"s": 28077,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28190,
"s": 28175,
"text": "Stream In Java"
},
{
"code": null,
"e": 28236,
"s": 28190,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 28257,
"s": 28236,
"text": "Constructors in Java"
},
{
"code": null,
"e": 28276,
"s": 28257,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 28293,
"s": 28276,
"text": "Generics in Java"
},
{
"code": null,
"e": 28323,
"s": 28293,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 28366,
"s": 28323,
"text": "Comparator Interface in Java with Examples"
},
{
"code": null,
"e": 28395,
"s": 28366,
"text": "HashMap get() Method in Java"
},
{
"code": null,
"e": 28416,
"s": 28395,
"text": "Introduction to Java"
}
] |
How to create a delete confirmation modal with CSS and JavaScript?
|
To create a delete confirmation modal with CSS and JavaScript, the code is as follows −
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
.modal {
text-align: center;
display: none;
position: fixed;
z-index: 1;
padding-top: 100px;
left: 0;
top: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.4);
}
.modalContent {
font-size: 20px;
font-weight: bold;
background-color: #fefefe;
margin: auto;
padding: 20px;
border: 1px solid #888;
width: 80%;
}
.close {
color: rgb(255, 65, 65);
float: right;
font-size: 40px;
font-weight: bold;
}
.close:hover, .close:focus {
color: #ff1010;
cursor: pointer;
}
.modalContent button {
border: none;
border-radius: 4px;
font-size: 18px;
font-weight: bold;
padding: 10px;
}
.del {
background-color: rgb(255, 65, 65);
}
.del:hover {
background-color: rgb(255, 7, 7);
}
.cancel:hover {
background-color: rgb(167, 167, 167);
}
</style>
</head>
<body>
<h1>Modal Example</h1>
<button class="openModal">Open Modal</button>
<h2>Click on the above button to open modal</h2>
<div class="modal">
<div class="modalContent">
<span class="close">&times;</span>
<p>Are you sure you want to delete your account</p>
<button class="del" onclick="hideModal()">Delete Account</button>
<button class="cancel" onclick="hideModal()">Cancel</button>
</div>
</div>
<script>
var modal = document.querySelector(".modal");
var btn = document.querySelector(".openModal");
var span = document.querySelector(".close");
btn.addEventListener("click", () => {
modal.style.display = "block";
});
span.addEventListener("click", () => {
hideModal();
});
function hideModal() {
modal.style.display = "none";
}
window.onclick = function(event) {
if (event.target == modal) {
hideModal();
}
};
</script>
</body>
</html>
The above code will produce the following output −
On clicking the Open Modal button −
|
[
{
"code": null,
"e": 1150,
"s": 1062,
"text": "To create a delete confirmation modal with CSS and JavaScript, the code is as\nfollows −"
},
{
"code": null,
"e": 1161,
"s": 1150,
"text": " Live Demo"
},
{
"code": null,
"e": 3224,
"s": 1161,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n<style>\n body {\n font-family: Arial, Helvetica, sans-serif;\n }\n .modal {\n text-align: center;\n display: none;\n position: fixed;\n z-index: 1;\n padding-top: 100px;\n left: 0;\n top: 0;\n width: 100%;\n height: 100%;\n background-color: rgba(0, 0, 0, 0.4);\n }\n .modalContent {\n font-size: 20px;\n font-weight: bold;\n background-color: #fefefe;\n margin: auto;\n padding: 20px;\n border: 1px solid #888;\n width: 80%;\n }\n .close {\n color: rgb(255, 65, 65);\n float: right;\n font-size: 40px;\n font-weight: bold;\n }\n .close:hover, .close:focus {\n color: #ff1010;\n cursor: pointer;\n }\n .modalContent button {\n border: none;\n border-radius: 4px;\n font-size: 18px;\n font-weight: bold;\n padding: 10px;\n }\n .del {\n background-color: rgb(255, 65, 65);\n }\n .del:hover {\n background-color: rgb(255, 7, 7);\n }\n .cancel:hover {\n background-color: rgb(167, 167, 167);\n }\n</style>\n</head>\n<body>\n<h1>Modal Example</h1>\n<button class=\"openModal\">Open Modal</button>\n<h2>Click on the above button to open modal</h2>\n<div class=\"modal\">\n<div class=\"modalContent\">\n<span class=\"close\">×</span>\n<p>Are you sure you want to delete your account</p>\n<button class=\"del\" onclick=\"hideModal()\">Delete Account</button>\n<button class=\"cancel\" onclick=\"hideModal()\">Cancel</button>\n</div>\n</div>\n<script>\n var modal = document.querySelector(\".modal\");\n var btn = document.querySelector(\".openModal\");\n var span = document.querySelector(\".close\");\n btn.addEventListener(\"click\", () => {\n modal.style.display = \"block\";\n });\n span.addEventListener(\"click\", () => {\n hideModal();\n });\n function hideModal() {\n modal.style.display = \"none\";\n }\n window.onclick = function(event) {\n if (event.target == modal) {\n hideModal();\n }\n };\n</script>\n</body>\n</html>"
},
{
"code": null,
"e": 3275,
"s": 3224,
"text": "The above code will produce the following output −"
},
{
"code": null,
"e": 3311,
"s": 3275,
"text": "On clicking the Open Modal button −"
}
] |
How to Create Empty List in R? - GeeksforGeeks
|
03 Dec, 2021
In this article, we will discuss how to create an empty list in R Programming Language.
Here we are going to create an empty list of length 0 by using the list() function.
Syntax:
data=list()
Example:
R
# create empty list
data = list()

print(data)

# display length
print(length(data))
Output:
list()
[1] 0
An empty list of a specified length can be created using the following syntax.
vector(mode='list', length)
where,
mode specifies the list
length specifies the length of empty list
Example:
R
# create empty list with length 3
data = vector(mode='list', length=3)

print(data)

# display length
print(length(data))
Output:
[[1]]
NULL
[[2]]
NULL
[[3]]
NULL
[1] 3
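Either kind of empty list can then be grown element by element; a small sketch (the element names and values below are illustrative):

```r
data <- list()

# append by position, then by name
data[[length(data) + 1]] <- 42
data[["greeting"]] <- "hello"

print(length(data))   # [1] 2
print(data$greeting)  # [1] "hello"
```

Assigning one past the current length is the usual way to append to a list in base R.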
anikakapoor
Picked
R List-Programs
R-List
R Language
R Programs
|
[
{
"code": null,
"e": 25242,
"s": 25214,
"text": "\n03 Dec, 2021"
},
{
"code": null,
"e": 25330,
"s": 25242,
"text": "In this article, we will discuss how to create an empty list in R Programming Language."
},
{
"code": null,
"e": 25412,
"s": 25330,
"text": "Here we are going to create an empty list with length 0 by using list() function."
},
{
"code": null,
"e": 25420,
"s": 25412,
"text": "Syntax:"
},
{
"code": null,
"e": 25432,
"s": 25420,
"text": "data=list()"
},
{
"code": null,
"e": 25441,
"s": 25432,
"text": "Example:"
},
{
"code": null,
"e": 25443,
"s": 25441,
"text": "R"
},
{
"code": "# create empty listdata = list() print(data) # display lengthprint(length(data))",
"e": 25524,
"s": 25443,
"text": null
},
{
"code": null,
"e": 25532,
"s": 25524,
"text": "Output:"
},
{
"code": null,
"e": 25545,
"s": 25532,
"text": "list()\n[1] 0"
},
{
"code": null,
"e": 25620,
"s": 25545,
"text": "Here we are specifying length in an empty list using the following syntax."
},
{
"code": null,
"e": 25648,
"s": 25620,
"text": "vector(mode='list', length)"
},
{
"code": null,
"e": 25655,
"s": 25648,
"text": "where,"
},
{
"code": null,
"e": 25679,
"s": 25655,
"text": "mode specifies the list"
},
{
"code": null,
"e": 25721,
"s": 25679,
"text": "length specifies the length of empty list"
},
{
"code": null,
"e": 25730,
"s": 25721,
"text": "Example:"
},
{
"code": null,
"e": 25732,
"s": 25730,
"text": "R"
},
{
"code": "# create empty list with length 3data = vector(mode='list', length=3) print(data) # display lengthprint(length(data))",
"e": 25850,
"s": 25732,
"text": null
},
{
"code": null,
"e": 25858,
"s": 25850,
"text": "Output:"
},
{
"code": null,
"e": 25900,
"s": 25858,
"text": "[[1]]\nNULL\n\n[[2]]\nNULL\n\n[[3]]\nNULL\n\n[1] 3"
},
{
"code": null,
"e": 25912,
"s": 25900,
"text": "anikakapoor"
},
{
"code": null,
"e": 25919,
"s": 25912,
"text": "Picked"
},
{
"code": null,
"e": 25935,
"s": 25919,
"text": "R List-Programs"
},
{
"code": null,
"e": 25942,
"s": 25935,
"text": "R-List"
},
{
"code": null,
"e": 25953,
"s": 25942,
"text": "R Language"
},
{
"code": null,
"e": 25964,
"s": 25953,
"text": "R Programs"
},
{
"code": null,
"e": 26062,
"s": 25964,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26071,
"s": 26062,
"text": "Comments"
},
{
"code": null,
"e": 26084,
"s": 26071,
"text": "Old Comments"
},
{
"code": null,
"e": 26136,
"s": 26084,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 26174,
"s": 26136,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 26209,
"s": 26174,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 26267,
"s": 26209,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 26316,
"s": 26267,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 26374,
"s": 26316,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 26423,
"s": 26374,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 26473,
"s": 26423,
"text": "How to filter R dataframe by multiple conditions?"
},
{
"code": null,
"e": 26516,
"s": 26473,
"text": "Replace Specific Characters in String in R"
}
] |
Numbers in Ruby - GeeksforGeeks
|
10 Sep, 2021
Ruby supports two types of numbers:
Integers: An integer is simply a sequence of digits, e.g., 12, 100. In other words, numbers without decimal points are called integers. In Ruby, integers are objects of class Fixnum (32 or 64 bits) or Bignum (used for bigger numbers); since Ruby 2.4 both are unified into the single Integer class.
Floating-point numbers: Numbers with decimal points are usually called floats, e.g., 1.2, 10.0. Floating-point numbers are objects of class Float.
Note: Underscores can be used as thousands separators, e.g., 25_120.55 is the same as the number 25120.55.
Example 1: Basic arithmetic operations on numbers in Ruby are shown below. In Ruby, a mathematical operation yields an integer only when all operands are integers; if any operand is a float, the result is a float.
Ruby
# Addition of two integers
puts 2 + 3

# Addition of integer and float
puts 2 + 3.0

# Subtraction of two integers
puts 5 - 3

# Multiplication and division of two integers
puts 2 * 3
puts 6 / 2

# Exponential operation
puts 2 ** 3
Output:
5
5.0
2
6
3
8
Example 2: In Ruby, for the modulus (%) operator, the sign of the result is always the same as the sign of the second operand. So, 10 % -3 is -2 and -10 % 3 is 2.
Ruby
# Modulus operation on numbers
puts 10 % 3
puts 10 % -3
puts -10 % 3
Output:
1
-2
2
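For contrast, Integer#remainder truncates toward zero, so its sign follows the first operand rather than the second; a quick sketch:

```ruby
# % takes the sign of the right operand,
# remainder takes the sign of the left operand
puts 10 % -3            # -2
puts 10.remainder(-3)   # 1
puts(-10 % 3)           # 2
puts(-10.remainder(3))  # -1
```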
Example 3: Other mathematical operations on numbers in Ruby are shown below.
Ruby
num1 = -20
num2 = 10.2

# abs() method returns the absolute value of a number
puts num1.abs()

# round() method returns the number after rounding
puts num2.round()

# ceil() and floor() methods for numbers in Ruby
puts num2.ceil()
puts num2.floor()
Output:
20
10
11
10
simmytarika5
Picked
Ruby
|
[
{
"code": null,
"e": 23986,
"s": 23958,
"text": "\n10 Sep, 2021"
},
{
"code": null,
"e": 24022,
"s": 23986,
"text": "Ruby supports two types of numbers:"
},
{
"code": null,
"e": 24406,
"s": 24022,
"text": "Integers: An integer is simply a sequence of digits, e.g., 12, 100. Or in other words, numbers without decimal points are called Integers. In Ruby, Integers are object of class Fixnum(32 or 64 bits) or Bignum(used for bigger numbers).Floating-point numbers: Numbers with decimal points are usually called floats, e.g., 1.2, 10.0. The floating-point numbers are object of class Float."
},
{
"code": null,
"e": 24641,
"s": 24406,
"text": "Integers: An integer is simply a sequence of digits, e.g., 12, 100. Or in other words, numbers without decimal points are called Integers. In Ruby, Integers are object of class Fixnum(32 or 64 bits) or Bignum(used for bigger numbers)."
},
{
"code": null,
"e": 24791,
"s": 24641,
"text": "Floating-point numbers: Numbers with decimal points are usually called floats, e.g., 1.2, 10.0. The floating-point numbers are object of class Float."
},
{
"code": null,
"e": 24901,
"s": 24791,
"text": "Note: Underscore can be used to separate a thousand places e.g: 25_120.55 is the same as the number 25120.55."
},
{
"code": null,
"e": 25112,
"s": 24901,
"text": "Example 1: Basic arithmetic operations on numbers in Ruby is shown below. In Ruby, mathematical operations result in an integer only if all numbers used are integer numbers unless we get the result as a float. "
},
{
"code": null,
"e": 25117,
"s": 25112,
"text": "Ruby"
},
{
"code": "# Addition of two integersputs 2 + 3 # Addition of integer and floatputs 2 + 3.0 # Subtraction of two integersputs 5 - 3 # Multiplication and division of two integersputs 2 * 3puts 6 / 2 # Exponential operationputs 2 ** 3",
"e": 25339,
"s": 25117,
"text": null
},
{
"code": null,
"e": 25348,
"s": 25339,
"text": "Output: "
},
{
"code": null,
"e": 25362,
"s": 25348,
"text": "5\n5.0\n2\n6\n3\n8"
},
{
"code": null,
"e": 25519,
"s": 25362,
"text": "Example 2: In Ruby, for Modulus(%) operator the sign of the result is always the same as the sign of the second operand. So, 10 % -3 is -2 and -10 % 3 is 2."
},
{
"code": null,
"e": 25524,
"s": 25519,
"text": "Ruby"
},
{
"code": "# Modulus operation on numbersputs 10 % 3puts 10 % -3puts -10 % 3",
"e": 25590,
"s": 25524,
"text": null
},
{
"code": null,
"e": 25599,
"s": 25590,
"text": "Output: "
},
{
"code": null,
"e": 25606,
"s": 25599,
"text": "1\n-2\n2"
},
{
"code": null,
"e": 25682,
"s": 25606,
"text": "Example 3: Other mathematical operations on numbers in Ruby is shown below."
},
{
"code": null,
"e": 25687,
"s": 25682,
"text": "Ruby"
},
{
"code": "num1 = -20num2 = 10.2 # abs() method returns absolute value of numberputs num1.abs() # round() method returns the number after roundingputs num2.round() # ceil() and floor() function for numbers in Rubyputs num2.ceil()puts num2.floor()",
"e": 25923,
"s": 25687,
"text": null
},
{
"code": null,
"e": 25931,
"s": 25923,
"text": "Output:"
},
{
"code": null,
"e": 25943,
"s": 25931,
"text": "20\n10\n11\n10"
},
{
"code": null,
"e": 25958,
"s": 25945,
"text": "simmytarika5"
},
{
"code": null,
"e": 25965,
"s": 25958,
"text": "Picked"
},
{
"code": null,
"e": 25970,
"s": 25965,
"text": "Ruby"
},
{
"code": null,
"e": 26068,
"s": 25970,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26111,
"s": 26068,
"text": "Ruby | Enumerator each_with_index function"
},
{
"code": null,
"e": 26179,
"s": 26111,
"text": "Ruby | Decision Making (if, if-else, if-else-if, ternary) | Set - 1"
},
{
"code": null,
"e": 26198,
"s": 26179,
"text": "Ruby For Beginners"
},
{
"code": null,
"e": 26224,
"s": 26198,
"text": "Ruby | Types of Iterators"
},
{
"code": null,
"e": 26251,
"s": 26224,
"text": "Ruby on Rails Introduction"
},
{
"code": null,
"e": 26281,
"s": 26251,
"text": "Ruby | Array shift() function"
},
{
"code": null,
"e": 26315,
"s": 26281,
"text": "Ruby | Class Method and Variables"
},
{
"code": null,
"e": 26349,
"s": 26315,
"text": "Ruby | Enumerable find() function"
},
{
"code": null,
"e": 26377,
"s": 26349,
"text": "Ruby | String concat Method"
}
] |
R Visualizations: Flow Charts in R | by Paul Aleksis | Towards Data Science
|
In the field of data analytics, data visualization is one of the single most important tools when starting new projects. It aids discovery and allows data scientists to read the story of their data. I have recently noticed how much I love R over Python for my data visualization needs. Because of this, I have started a small blog series on R visualizations beginning with this blog on flow charts. You can find the code to this blog on my GitHub.
Flow charts are a great way to express ideas or logic with a top-down approach. Personally, I often use them in setting up for debates or trying to understand an idea. I present the idea in the uppermost node and each argument supporting the idea in lower level nodes along with counter-arguments in opposing nodes.
Flow charts are also super useful when you are learning a new skill. Whenever learning something new, I start at a super high contextual level to grasp some understanding, then I dive lower into the finer details of what I need to know at that particular time of learning. Having a visual aid such as a flow chart is very useful in this because it serves as a roadmap to my understanding.
Begin by installing the DiagrammeR package, so that we can take advantage of its node graphing methods. Once installed, import it into your R script using the library() function.
install.packages("DiagrammeR")
library(DiagrammeR)
The next step is to run the grViz() function from the library and enter your syntax. Below is the syntax to a very basic top-down flow chart.
grViz(diagram = "digraph flowchart {
      node [fontname = arial, shape = oval]
      tab1 [label = '@@1']
      tab2 [label = '@@2']
      tab3 [label = '@@3']
      tab1 -> tab2 -> tab3;
}
      [1]: 'Learning Data Science'
      [2]: 'Industry vs Technical Knowledge'
      [3]: 'Statistics vs Mathematics Knowledge'
      ")
Within the node argument, you can set a font and shape for the text in the nodes and the node itself. I used oval in this example but you can use square, diamond, rectangle, circle, triangle, etc...
Note that you can add as many tabs as you need by defining them after the node input and connecting them by drawing their paths.
The output looks like this:
It is also possible to add color to the outline of your nodes, or even fill them with a color. To do so, add additional arguments “color” and “style” to the node section of the syntax as shown below.
grViz(diagram = "digraph flowchart {
      node [fontname = arial, shape = oval, color = grey, style = filled]
      tab1 [label = '@@1']
      tab2 [label = '@@2']
      tab3 [label = '@@3']
      tab1 -> tab2 -> tab3;
}
      [1]: 'Learning Data Science'
      [2]: 'Industry vs Technical Knowledge'
      [3]: 'Statistics vs Mathematics Knowledge'
      ")
The output looks like this:
You can find a list of colors, edge attributes, node shapes, node attributes, and arrow shapes from the source documentation.
To split up your arrows and connect one node to multiple nodes, all you need to do is adjust the layout argument.
grViz(diagram = "digraph flowchart {
      # define node aesthetics
      node [fontname = Arial, shape = oval, color = Lavender, style = filled]
      tab1 [label = '@@1']
      tab2 [label = '@@2']
      tab3 [label = '@@3']
      tab4 [label = '@@4']

      # set up node layout
      tab1 -> tab2;
      tab2 -> tab3;
      tab2 -> tab4
}
      [1]: 'Learning Data Science'
      [2]: 'Industry vs Technical Knowledge'
      [3]: 'Python/R'
      [4]: 'Domain Experience'
      ")
The output looks like this:
There are some additional arguments you can include in your function to customize things such as font size, font color, height, width, alpha, etc...
Just add the argument within the node [] section of your formula for example:
grViz(diagram = "digraph flowchart {
      # define node aesthetics
      node [fontname = Arial, shape = oval, color = DeepSkyBlue, style = filled, fontcolor = White]
      tab1 [label = '@@1']
      tab2 [label = '@@2']
      tab3 [label = '@@3']
      tab4 [label = '@@4']

      # set up node layout
      tab1 -> tab2;
      tab2 -> tab3;
      tab2 -> tab4
}
      [1]: 'Learning Data Science'
      [2]: 'Industry vs Technical Knowledge'
      [3]: 'Python/R'
      [4]: 'Domain Experience'
      ")
Please review the documentation for more customization options.
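Under the hood, grViz() parses plain Graphviz DOT syntax, so the same kind of graph can be sketched as a standalone DOT fragment (the node names and labels here are illustrative):

```dot
digraph flowchart {
  node [fontname = Arial, shape = oval, style = filled, color = LightGray]
  tab1 [label = "Learning Data Science"]
  tab2 [label = "Industry vs Technical Knowledge"]
  tab1 -> tab2
}
```

Any attribute accepted by Graphviz in a fragment like this can also be used inside the grViz() string.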
Flow charts are a great way to express ideas especially at the beginning stages of production. Consider using R visualizations in your future work and take advantage of its powerful graphing libraries.
Sources:
rich-iannone.github.io
Code for this blog can be found on my GitHub.
|
[
{
"code": null,
"e": 620,
"s": 172,
"text": "In the field of data analytics, data visualization is one of the single most important tools when starting new projects. It aids discovery and allows data scientists to read the story of their data. I have recently noticed how much I love R over Python for my data visualization needs. Because of this, I have started a small blog series on R visualizations beginning with this blog on flow charts. You can find the code to this blog on my GitHub."
},
{
"code": null,
"e": 936,
"s": 620,
"text": "Flow charts are a great way to express ideas or logic with a top-down approach. Personally, I often use them in setting up for debates or trying to understand an idea. I present the idea in the uppermost node and each argument supporting the idea in lower level nodes along with counter-arguments in opposing nodes."
},
{
"code": null,
"e": 1325,
"s": 936,
"text": "Flow charts are also super useful when you are learning a new skill. Whenever learning something new, I start at a super high contextual level to grasp some understanding, then I dive lower into the finer details of what I need to know at that particular time of learning. Having a visual aid such as a flow chart is very useful in this because it serves as a roadmap to my understanding."
},
{
"code": null,
"e": 1504,
"s": 1325,
"text": "Begin by installing the DiagrammeR package, so that we can take advantage of its node graphing methods. Once installed, import it into your R script using the library() function."
},
{
"code": null,
"e": 1554,
"s": 1504,
"text": "install.packages(\"DiagrammeR\")library(DiagrammeR)"
},
{
"code": null,
"e": 1696,
"s": 1554,
"text": "The next step is to run the grViz() function from the library and enter your syntax. Below is the syntax to a very basic top-down flow chart."
},
{
"code": null,
"e": 1992,
"s": 1696,
"text": "grViz(diagram = \"digraph flowchart { node [fontname = arial, shape = oval] tab1 [label = '@@1'] tab2 [label = '@@2'] tab3 [label = '@@3'] tab1 -> tab2 -> tab3;} [1]: 'Learning Data Science' [2]: 'Industry vs Technical Knowledge' [3]: 'Statistics vs Mathematics Knowledge' \")"
},
{
"code": null,
"e": 2191,
"s": 1992,
"text": "Within the node argument, you can set a font and shape for the text in the nodes and the node itself. I used oval in this example but you can use square, diamond, rectangle, circle, triangle, etc..."
},
{
"code": null,
"e": 2304,
"s": 2191,
"text": "Note that you can add as many tabs by defining them after the node input and connect them by drawing their path."
},
{
"code": null,
"e": 2332,
"s": 2304,
"text": "the output looks like this:"
},
{
"code": null,
"e": 2532,
"s": 2332,
"text": "It is also possible to add color to the outline of your nodes, or even fill them with a color. To do so, add additional arguments “color” and “style” to the node section of the syntax as shown below."
},
{
"code": null,
"e": 2858,
"s": 2532,
"text": "grViz(diagram = \"digraph flowchart { node [fontname = arial, shape = oval, color = grey, style = filled] tab1 [label = '@@1'] tab2 [label = '@@2'] tab3 [label = '@@3'] tab1 -> tab2 -> tab3;} [1]: 'Learning Data Science' [2]: 'Industry vs Technical Knowledge' [3]: 'Statistics vs Mathematics Knowledge' \")"
},
{
"code": null,
"e": 2886,
"s": 2858,
"text": "The output looks like this:"
},
{
"code": null,
"e": 3012,
"s": 2886,
"text": "You can find a list of colors, edge attributes, node shapes, node attributes, and arrow shapes from the source documentation."
},
{
"code": null,
"e": 3126,
"s": 3012,
"text": "To split up your arrows and connect one node to multiple nodes, all you need to do is adjust the layout argument."
},
{
"code": null,
"e": 3596,
"s": 3126,
"text": "grViz(diagram = \"digraph flowchart { # define node aesthetics node [fontname = Arial, shape = oval, color = Lavender, style = filled] tab1 [label = '@@1'] tab2 [label = '@@2'] tab3 [label = '@@3'] tab4 [label = '@@4']# set up node layout tab1 -> tab2; tab2 -> tab3; tab2 -> tab4 }[1]: 'Learning Data Science' [2]: 'Industry vs Technical Knowledge' [3]: 'Python/R' [4]: 'Domain Experience' \")"
},
{
"code": null,
"e": 3624,
"s": 3596,
"text": "The output looks like this:"
},
{
"code": null,
"e": 3773,
"s": 3624,
"text": "There are some additional arguments you can include in your function to customize things such as font size, font color, height, width, alpha, etc..."
},
{
"code": null,
"e": 3851,
"s": 3773,
"text": "Just add the argument within the node [] section of your formula for example:"
},
{
"code": null,
"e": 4349,
"s": 3851,
"text": "grViz(diagram = \"digraph flowchart { # define node aesthetics node [fontname = Arial, shape = oval, color = DeepSkyBlue, style = filled, fontcolor = White] tab1 [label = '@@1'] tab2 [label = '@@2'] tab3 [label = '@@3'] tab4 [label = '@@4']# set up node layout tab1 -> tab2; tab2 -> tab3; tab2 -> tab4 } [1]: 'Learning Data Science' [2]: 'Industry vs Technical Knowledge' [3]: 'Python/R' [4]: 'Domain Experience' \")"
},
{
"code": null,
"e": 4413,
"s": 4349,
"text": "Please review the documentation for more customization options."
},
{
"code": null,
"e": 4615,
"s": 4413,
"text": "Flow charts are a great way to express ideas especially at the beginning stages of production. Consider using R visualizations in your future work and take advantage of its powerful graphing libraries."
},
{
"code": null,
"e": 4624,
"s": 4615,
"text": "Sources:"
},
{
"code": null,
"e": 4647,
"s": 4624,
"text": "rich-iannone.github.io"
}
] |
How to write to CSV in R without index ? - GeeksforGeeks
|
07 Apr, 2021
We know that when we write data from a DataFrame to a CSV file, a column is automatically created for indexing. We can remove it with a small modification, so in this article we are going to see how to write a CSV in R without the index column.
To write to csv file write.csv() is used.
Syntax:
write.csv(data,path)
Let's first see how indices appear when data is written to CSV.
Example:
R
Country <- c("China", "India", "United States", "Indonesia", "Pakistan") Population_1_july_2018 <- c("1,427,647,786", "1,352,642,280", "327,096,265", "267,670,543", "212,228,286") Population_1_july_2019 <- c("1,433,783,686", "1,366,417,754", "329,064,917", "270,625,568", "216,565,318") change_in_percents <- c("+0.43%", "+1.02%", "+0.60%", "+1.10%", "+2.04%") data <- data.frame(Country, Population_1_july_2018, Population_1_july_2019, change_in_percents)print(data) write.csv(data,"C:\\Users\\...YOUR PATH...\\population.csv")print ('CSV file written Successfully :)')
Output:
CSV file with extra index column
Now let us see how these indices can be removed: simply set the row.names parameter to FALSE while writing data to a CSV file using the write.csv() function. By default it is TRUE, which adds one extra index column to the CSV file.
Example:
R
Country <- c("China", "India", "United States", "Indonesia", "Pakistan") Population_1_july_2018 <- c("1,427,647,786", "1,352,642,280", "327,096,265", "267,670,543", "212,228,286") Population_1_july_2019 <- c("1,433,783,686", "1,366,417,754", "329,064,917", "270,625,568", "216,565,318") change_in_percents <- c("+0.43%", "+1.02%", "+0.60%", "+1.10%", "+2.04%") data <- data.frame(Country, Population_1_july_2018, Population_1_july_2019, change_in_percents) write.csv(data,"C:\\Users\\..YOUR PATH...\\population.csv", row.names = FALSE)print ('CSV file written Successfully :)')
Output:
CSV file without extra index column
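For comparison only (this is not part of the original R article): in Python, pandas' DataFrame.to_csv(path, index=False) plays the same role as row.names = FALSE. A dependency-free sketch using only the standard csv module shows that no index column is written unless you add one yourself:

```python
import csv
import io

# Sample rows mirroring the article's data (population figures simplified)
rows = [
    ["China", 1433783686],
    ["India", 1366417754],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Country", "Population_1_july_2019"])  # header row only, no index
writer.writerows(rows)                                  # data rows, still no index
print(buf.getvalue())
```

With pandas, the default behaviour corresponds to index=True (like row.names = TRUE in R), which is why index=False is needed to suppress the extra column.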
Picked
R-CSV
R Language
|
[
{
"code": null,
"e": 26487,
"s": 26459,
"text": "\n07 Apr, 2021"
},
{
"code": null,
"e": 26722,
"s": 26487,
"text": "We know that when we write some data from DataFrame to CSV file then a column is automatically created for indexing. We can remove it by some modifications. So, in this article, we are going to see how to write CSV in R without index."
},
{
"code": null,
"e": 26764,
"s": 26722,
"text": "To write to csv file write.csv() is used."
},
{
"code": null,
"e": 26772,
"s": 26764,
"text": "Syntax:"
},
{
"code": null,
"e": 26793,
"s": 26772,
"text": "write.csv(data,path)"
},
{
"code": null,
"e": 26856,
"s": 26793,
"text": "Lets first see how indices appear when data is written to CSV."
},
{
"code": null,
"e": 26865,
"s": 26856,
"text": "Example:"
},
{
"code": null,
"e": 26867,
"s": 26865,
"text": "R"
},
{
"code": "Country <- c(\"China\", \"India\", \"United States\", \"Indonesia\", \"Pakistan\") Population_1_july_2018 <- c(\"1,427,647,786\", \"1,352,642,280\", \"327,096,265\", \"267,670,543\", \"212,228,286\") Population_1_july_2019 <- c(\"1,433,783,686\", \"1,366,417,754\", \"329,064,917\", \"270,625,568\", \"216,565,318\") change_in_percents <- c(\"+0.43%\", \"+1.02%\", \"+0.60%\", \"+1.10%\", \"+2.04%\") data <- data.frame(Country, Population_1_july_2018, Population_1_july_2019, change_in_percents)print(data) write.csv(data,\"C:\\\\Users\\\\...YOUR PATH...\\\\population.csv\")print ('CSV file written Successfully :)')",
"e": 27501,
"s": 26867,
"text": null
},
{
"code": null,
"e": 27509,
"s": 27501,
"text": "Output:"
},
{
"code": null,
"e": 27542,
"s": 27509,
"text": "CSV file with extra index column"
},
{
"code": null,
"e": 27786,
"s": 27542,
"text": "Now let us see how these indices can be removed, for that simply set row.names parameter to False while writing data to a csv file using write.csv() function. By Default, it will be TRUE and create one extra column to CSV file as index column."
},
{
"code": null,
"e": 27795,
"s": 27786,
"text": "Example:"
},
{
"code": null,
"e": 27797,
"s": 27795,
"text": "R"
},
{
"code": "Country <- c(\"China\", \"India\", \"United States\", \"Indonesia\", \"Pakistan\") Population_1_july_2018 <- c(\"1,427,647,786\", \"1,352,642,280\", \"327,096,265\", \"267,670,543\", \"212,228,286\") Population_1_july_2019 <- c(\"1,433,783,686\", \"1,366,417,754\", \"329,064,917\", \"270,625,568\", \"216,565,318\") change_in_percents <- c(\"+0.43%\", \"+1.02%\", \"+0.60%\", \"+1.10%\", \"+2.04%\") data <- data.frame(Country, Population_1_july_2018, Population_1_july_2019, change_in_percents) write.csv(data,\"C:\\\\Users\\\\..YOUR PATH...\\\\population.csv\", row.names = FALSE)print ('CSV file written Successfully :)')",
"e": 28438,
"s": 27797,
"text": null
},
{
"code": null,
"e": 28446,
"s": 28438,
"text": "Output:"
},
{
"code": null,
"e": 28482,
"s": 28446,
"text": "CSV file without extra index column"
},
{
"code": null,
"e": 28489,
"s": 28482,
"text": "Picked"
},
{
"code": null,
"e": 28495,
"s": 28489,
"text": "R-CSV"
},
{
"code": null,
"e": 28506,
"s": 28495,
"text": "R Language"
},
{
"code": null,
"e": 28604,
"s": 28506,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28656,
"s": 28604,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 28691,
"s": 28656,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 28729,
"s": 28691,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 28787,
"s": 28729,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 28830,
"s": 28787,
"text": "Replace Specific Characters in String in R"
},
{
"code": null,
"e": 28879,
"s": 28830,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 28916,
"s": 28879,
"text": "How to import an Excel File into R ?"
},
{
"code": null,
"e": 28942,
"s": 28916,
"text": "Time Series Analysis in R"
},
{
"code": null,
"e": 28959,
"s": 28942,
"text": "R - if statement"
}
] |
Stack Permutations (Check if an array is stack permutation of other) - GeeksforGeeks
|
28 Apr, 2022
A stack permutation is a permutation of the objects in a given input queue, produced by transferring elements from the input queue to the output queue with the help of a stack and the built-in push and pop functions. The well-defined rules are:
Only dequeue from the input queue.
Use inbuilt push, pop functions in the single stack.
Stack and input queue must be empty at the end.
Only enqueue to the output queue.
There are a huge number of permutations possible using a stack for a single input queue. Given two arrays of unique elements, one represents the input queue and the other represents the output queue. Our task is to check whether the given output is possible through a stack permutation. Examples:
Input : First array: 1, 2, 3
Second array: 2, 1, 3
Output : Yes
Procedure:
push 1 from input to stack
push 2 from input to stack
pop 2 from stack to output
pop 1 from stack to output
push 3 from input to stack
pop 3 from stack to output
Input : First array: 1, 2, 3
Second array: 3, 1, 2
Output : Not Possible
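These examples can be cross-checked by brute force: enumerate every output sequence reachable with one auxiliary stack (a small illustrative sketch; the function name and recursive structure are mine, not from the article):

```python
def stack_permutations(inp):
    """Return the set of all output sequences reachable with one auxiliary stack."""
    results = set()

    def step(i, stack, out):
        if i == len(inp) and not stack:
            results.add(tuple(out))          # input consumed, stack drained
            return
        if i < len(inp):                     # option 1: push next input element
            step(i + 1, stack + [inp[i]], out)
        if stack:                            # option 2: pop stack top to output
            step(i, stack[:-1], out + [stack[-1]])

    step(0, [], [])
    return results

perms = stack_permutations([1, 2, 3])
print(sorted(perms))
print((2, 1, 3) in perms)   # True  - matches the first example
print((3, 1, 2) in perms)   # False - matches the second example
```

For n distinct elements the number of reachable outputs is the n-th Catalan number (5 for n = 3), so 3, 1, 2 is the only one of the six orderings that cannot appear.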
The idea is to try to convert the input queue to the output queue using a stack; if we are able to do so, then the queue is permutable, otherwise not. Below is the step-by-step algorithm:
Continuously dequeue elements from the input queue and check whether each one equals the front of the output queue; if it does not, push the element onto the stack.
Once the front of the input queue equals the front of the output queue, pop a single element from both the input and output queues, and then compare the top of the stack with the new front of the output queue. If they are equal, pop from both the stack and the output queue; if not, go to step 1.
Repeat the above two steps until the input queue becomes empty. At the end, if both the input queue and the stack are empty, the input queue is permutable; otherwise it is not.
Below is the implementation of the above idea:
C++
Java
Python3
C#
Javascript
// Given two arrays, check if one array is// stack permutation of other.#include<bits/stdc++.h>using namespace std; // function to check if input queue is// permutable to output queuebool checkStackPermutation(int ip[], int op[], int n){ // Input queue queue<int> input; for (int i=0;i<n;i++) input.push(ip[i]); // output queue queue<int> output; for (int i=0;i<n;i++) output.push(op[i]); // stack to be used for permutation stack <int> tempStack; while (!input.empty()) { int ele = input.front(); input.pop(); if (ele == output.front()) { output.pop(); while (!tempStack.empty()) { if (tempStack.top() == output.front()) { tempStack.pop(); output.pop(); } else break; } } else tempStack.push(ele); } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.empty()&&tempStack.empty());} // Driver program to test above functionint main(){ // Input Queue int input[] = {1, 2, 3}; // Output Queue int output[] = {2, 1, 3}; int n = 3; if (checkStackPermutation(input, output, n)) cout << "Yes"; else cout << "Not Possible"; return 0;}
// Given two arrays, check if one array is// stack permutation of other.import java.util.LinkedList;import java.util.Queue;import java.util.Stack; class Gfg{ // function to check if input queue is // permutable to output queue static boolean checkStackPermutation(int ip[], int op[], int n) { Queue<Integer> input = new LinkedList<>(); // Input queue for (int i = 0; i < n; i++) { input.add(ip[i]); } // Output queue Queue<Integer> output = new LinkedList<>(); for (int i = 0; i < n; i++) { output.add(op[i]); } // stack to be used for permutation Stack<Integer> tempStack = new Stack<>(); while (!input.isEmpty()) { int ele = input.poll(); if (ele == output.peek()) { output.poll(); while (!tempStack.isEmpty()) { if (tempStack.peek() == output.peek()) { tempStack.pop(); output.poll(); } else break; } } else { tempStack.push(ele); } } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.isEmpty() && tempStack.isEmpty()); } // Driver code public static void main(String[] args) { // Input Queue int input[] = { 1, 2, 3 }; // Output Queue int output[] = { 2, 1, 3 }; int n = 3; if (checkStackPermutation(input, output, n)) System.out.println("Yes"); else System.out.println("Not Possible"); }} // This code is contributed by Vivekkumar Singh
# Given two arrays, check if one array is# stack permutation of other.from queue import Queue # function to check if Input queue# is permutable to output queuedef checkStackPermutation(ip, op, n): # Input queue Input = Queue() for i in range(n): Input.put(ip[i]) # output queue output = Queue() for i in range(n): output.put(op[i]) # stack to be used for permutation tempStack = [] while (not Input.empty()): ele = Input.queue[0] Input.get() if (ele == output.queue[0]): output.get() while (len(tempStack) != 0): if (tempStack[-1] == output.queue[0]): tempStack.pop() output.get() else: break else: tempStack.append(ele) # If after processing, both Input # queue and stack are empty then # the Input queue is permutable # otherwise not. return (Input.empty() and len(tempStack) == 0) # Driver Codeif __name__ == '__main__': # Input Queue Input = [1, 2, 3] # Output Queue output = [2, 1, 3] n = 3 if (checkStackPermutation(Input, output, n)): print("Yes") else: print("Not Possible") # This code is contributed by PranchalK
// Given two arrays, check if one array is// stack permutation of other.using System;using System.Collections.Generic; class GFG{ // function to check if input queue is // permutable to output queue static bool checkStackPermutation(int []ip, int []op, int n) { Queue<int> input = new Queue<int>(); // Input queue for (int i = 0; i < n; i++) { input.Enqueue(ip[i]); } // Output queue Queue<int> output = new Queue<int>(); for (int i = 0; i < n; i++) { output.Enqueue(op[i]); } // stack to be used for permutation Stack<int> tempStack = new Stack<int>(); while (input.Count != 0) { int ele = input.Dequeue(); if (ele == output.Peek()) { output.Dequeue(); while (tempStack.Count != 0) { if (tempStack.Peek() == output.Peek()) { tempStack.Pop(); output.Dequeue(); } else break; } } else { tempStack.Push(ele); } } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.Count == 0 && tempStack.Count == 0); } // Driver code public static void Main(String[] args) { // Input Queue int []input = { 1, 2, 3 }; // Output Queue int []output = { 2, 1, 3 }; int n = 3; if (checkStackPermutation(input, output, n)) Console.WriteLine("Yes"); else Console.WriteLine("Not Possible"); }} // This code is contributed by PrinciRaj1992
<script> // Given two arrays, check if one array is // stack permutation of other. // function to check if input queue is // permutable to output queue function checkStackPermutation(ip, op, n) { let input = []; // Input queue for (let i = 0; i < n; i++) { input.push(ip[i]); } // Output queue let output = []; for (let i = 0; i < n; i++) { output.push(op[i]); } // stack to be used for permutation let tempStack = []; while (input.length != 0) { let ele = input.shift(); if (ele == output[0]) { output.shift(); while (tempStack.length != 0) { if (tempStack[tempStack.length - 1] == output[0]) { tempStack.pop(); output.shift(); } else break; } } else { tempStack.push(ele); } } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.length == 0 && tempStack.length == 0); } // Input Queue let input = [ 1, 2, 3 ]; // Output Queue let output = [ 2, 1, 3 ]; let n = 3; if (checkStackPermutation(input, output, n)) document.write("Yes"); else document.write("Not Possible"); // This code is contributed by rameshtravel07.</script>
Output:
Yes
Another Approach:
Idea – we iterate over the input array, pushing its elements one by one onto a stack. Whenever the top of the stack matches the current element of the output array, we pop it and move to the next element of the output array, repeating until the stack is empty or the tops no longer match.
C++
// Given two arrays, check if one array is// stack permutation of other.#include<bits/stdc++.h>using namespace std; // function to check if input array is// permutable to output arraybool checkStackPermutation(int ip[], int op[], int n){ // we will be pushing elements from input array to stack uptill top of our stack // matches with first element of output array stack<int>s; // will maintain a variable j to iterate on output array int j=0; // will iterate one by one in input array for(int i=0;i<n;i++) { // pushed an element from input array to stack s.push(ip[i]); // if our stack isn't empty and top matches with output array // then we will keep popping out from stack uptill top matches with // output array while(!s.empty() and s.top()==op[j]) { s.pop(); // increasing j so next time we can compare next element in output array j++; } } // if output array was a correct permutation of input array then // by now our stack should be empty if(s.empty()) { return true; } return false; } // Driver program to test above functionint main(){ // Input Array int input[] = {4,5,6,7,8}; // Output Array int output[] = {8,7,6,5,4}; int n = 5; if (checkStackPermutation(input, output, n)) cout << "Yes"; else cout << "Not Possible"; return 0;}
Yes
Time Complexity – O(N)
Space Complexity – O(N)
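The same single-stack idea translates directly to Python (a line-for-line sketch of the C++ above; the function name and the extra bound check on j are mine):

```python
def check_stack_permutation(ip, op):
    stack = []
    j = 0                                  # next position to match in op
    for x in ip:
        stack.append(x)                    # push each input element
        # keep popping while the stack top matches the output array
        while stack and j < len(op) and stack[-1] == op[j]:
            stack.pop()
            j += 1
    return not stack                       # empty stack => valid permutation

print(check_stack_permutation([4, 5, 6, 7, 8], [8, 7, 6, 5, 4]))  # True
print(check_stack_permutation([1, 2, 3], [3, 1, 2]))              # False
```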
This article is contributed by Suprotik Dey. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
PranchalKatiyar
Vivekkumar Singh
princiraj1992
rameshtravel07
8_bit_spider
surinderdawra388
Combinatorial
Queue
Stack
Stack
Combinatorial
Queue
|
[
{
"code": null,
"e": 25881,
"s": 25853,
"text": "\n28 Apr, 2022"
},
{
"code": null,
"e": 26125,
"s": 25881,
"text": "A stack permutation is a permutation of objects in the given input queue which is done by transferring elements from input queue to the output queue with the help of a stack and the built-in push and pop functions.The well defined rules are: "
},
{
"code": null,
"e": 26292,
"s": 26125,
"text": "Only dequeue from the input queue.Use inbuilt push, pop functions in the single stack.Stack and input queue must be empty at the end.Only enqueue to the output queue."
},
{
"code": null,
"e": 26327,
"s": 26292,
"text": "Only dequeue from the input queue."
},
{
"code": null,
"e": 26380,
"s": 26327,
"text": "Use inbuilt push, pop functions in the single stack."
},
{
"code": null,
"e": 26428,
"s": 26380,
"text": "Stack and input queue must be empty at the end."
},
{
"code": null,
"e": 26462,
"s": 26428,
"text": "Only enqueue to the output queue."
},
{
"code": null,
"e": 26759,
"s": 26462,
"text": "There are a huge number of permutations possible using a stack for a single input queue. Given two arrays, both of unique elements. One represents the input queue and the other represents the output queue. Our task is to check if the given output is possible through stack permutation.Examples: "
},
{
"code": null,
"e": 27093,
"s": 26759,
"text": "Input : First array: 1, 2, 3 \n Second array: 2, 1, 3\nOutput : Yes\nProcedure:\npush 1 from input to stack\npush 2 from input to stack\npop 2 from stack to output\npop 1 from stack to output\npush 3 from input to stack\npop 3 from stack to output\n\n\nInput : First array: 1, 2, 3 \n Second array: 3, 1, 2\nOutput : Not Possible "
},
{
"code": null,
"e": 27306,
"s": 27095,
"text": "The idea to do this is we will try to convert the input queue to output queue using a stack, if we are able to do so then the queue is permutable otherwise not. Below is the step by step algorithm to do this: "
},
{
"code": null,
"e": 28020,
"s": 27306,
"text": "Continuously pop elements from the input queue and check if it is equal to the top of output queue or not, if it is not equal to the top of output queue then we will push the element to stack. Once we find an element in input queue such the top of input queue is equal to top of output queue, we will pop a single element from both input and output queues, and compare the top of stack and top of output queue now. If top of both stack and output queue are equal then pop element from both stack and output queue. If not equal, go to step 1. Repeat above two steps until the input queue becomes empty. At the end if both of the input queue and stack are empty then the input queue is permutable otherwise not. "
},
{
"code": null,
"e": 28215,
"s": 28020,
"text": "Continuously pop elements from the input queue and check if it is equal to the top of output queue or not, if it is not equal to the top of output queue then we will push the element to stack. "
},
{
"code": null,
"e": 28566,
"s": 28215,
"text": "Once we find an element in input queue such the top of input queue is equal to top of output queue, we will pop a single element from both input and output queues, and compare the top of stack and top of output queue now. If top of both stack and output queue are equal then pop element from both stack and output queue. If not equal, go to step 1. "
},
{
"code": null,
"e": 28736,
"s": 28566,
"text": "Repeat above two steps until the input queue becomes empty. At the end if both of the input queue and stack are empty then the input queue is permutable otherwise not. "
},
{
"code": null,
"e": 28777,
"s": 28736,
"text": "Below is implementation of above idea: "
},
{
"code": null,
"e": 28781,
"s": 28777,
"text": "C++"
},
{
"code": null,
"e": 28786,
"s": 28781,
"text": "Java"
},
{
"code": null,
"e": 28794,
"s": 28786,
"text": "Python3"
},
{
"code": null,
"e": 28797,
"s": 28794,
"text": "C#"
},
{
"code": null,
"e": 28808,
"s": 28797,
"text": "Javascript"
},
{
"code": "// Given two arrays, check if one array is// stack permutation of other.#include<bits/stdc++.h>using namespace std; // function to check if input queue is// permutable to output queuebool checkStackPermutation(int ip[], int op[], int n){ // Input queue queue<int> input; for (int i=0;i<n;i++) input.push(ip[i]); // output queue queue<int> output; for (int i=0;i<n;i++) output.push(op[i]); // stack to be used for permutation stack <int> tempStack; while (!input.empty()) { int ele = input.front(); input.pop(); if (ele == output.front()) { output.pop(); while (!tempStack.empty()) { if (tempStack.top() == output.front()) { tempStack.pop(); output.pop(); } else break; } } else tempStack.push(ele); } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.empty()&&tempStack.empty());} // Driver program to test above functionint main(){ // Input Queue int input[] = {1, 2, 3}; // Output Queue int output[] = {2, 1, 3}; int n = 3; if (checkStackPermutation(input, output, n)) cout << \"Yes\"; else cout << \"Not Possible\"; return 0;}",
"e": 30232,
"s": 28808,
"text": null
},
{
"code": "// Given two arrays, check if one array is// stack permutation of other.import java.util.LinkedList;import java.util.Queue;import java.util.Stack; class Gfg{ // function to check if input queue is // permutable to output queue static boolean checkStackPermutation(int ip[], int op[], int n) { Queue<Integer> input = new LinkedList<>(); // Input queue for (int i = 0; i < n; i++) { input.add(ip[i]); } // Output queue Queue<Integer> output = new LinkedList<>(); for (int i = 0; i < n; i++) { output.add(op[i]); } // stack to be used for permutation Stack<Integer> tempStack = new Stack<>(); while (!input.isEmpty()) { int ele = input.poll(); if (ele == output.peek()) { output.poll(); while (!tempStack.isEmpty()) { if (tempStack.peek() == output.peek()) { tempStack.pop(); output.poll(); } else break; } } else { tempStack.push(ele); } } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.isEmpty() && tempStack.isEmpty()); } // Driver code public static void main(String[] args) { // Input Queue int input[] = { 1, 2, 3 }; // Output Queue int output[] = { 2, 1, 3 }; int n = 3; if (checkStackPermutation(input, output, n)) System.out.println(\"Yes\"); else System.out.println(\"Not Possible\"); }} // This code is contributed by Vivekkumar Singh",
"e": 32147,
"s": 30232,
"text": null
},
{
"code": "# Given two arrays, check if one array is# stack permutation of other.from queue import Queue # function to check if Input queue# is permutable to output queuedef checkStackPermutation(ip, op, n): # Input queue Input = Queue() for i in range(n): Input.put(ip[i]) # output queue output = Queue() for i in range(n): output.put(op[i]) # stack to be used for permutation tempStack = [] while (not Input.empty()): ele = Input.queue[0] Input.get() if (ele == output.queue[0]): output.get() while (len(tempStack) != 0): if (tempStack[-1] == output.queue[0]): tempStack.pop() output.get() else: break else: tempStack.append(ele) # If after processing, both Input # queue and stack are empty then # the Input queue is permutable # otherwise not. return (Input.empty() and len(tempStack) == 0) # Driver Codeif __name__ == '__main__': # Input Queue Input = [1, 2, 3] # Output Queue output = [2, 1, 3] n = 3 if (checkStackPermutation(Input, output, n)): print(\"Yes\") else: print(\"Not Possible\") # This code is contributed by PranchalK",
"e": 33458,
"s": 32147,
"text": null
},
{
"code": "// Given two arrays, check if one array is// stack permutation of other.using System;using System.Collections.Generic; class GFG{ // function to check if input queue is // permutable to output queue static bool checkStackPermutation(int []ip, int []op, int n) { Queue<int> input = new Queue<int>(); // Input queue for (int i = 0; i < n; i++) { input.Enqueue(ip[i]); } // Output queue Queue<int> output = new Queue<int>(); for (int i = 0; i < n; i++) { output.Enqueue(op[i]); } // stack to be used for permutation Stack<int> tempStack = new Stack<int>(); while (input.Count != 0) { int ele = input.Dequeue(); if (ele == output.Peek()) { output.Dequeue(); while (tempStack.Count != 0) { if (tempStack.Peek() == output.Peek()) { tempStack.Pop(); output.Dequeue(); } else break; } } else { tempStack.Push(ele); } } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.Count == 0 && tempStack.Count == 0); } // Driver code public static void Main(String[] args) { // Input Queue int []input = { 1, 2, 3 }; // Output Queue int []output = { 2, 1, 3 }; int n = 3; if (checkStackPermutation(input, output, n)) Console.WriteLine(\"Yes\"); else Console.WriteLine(\"Not Possible\"); }} // This code is contributed by PrinciRaj1992",
"e": 35345,
"s": 33458,
"text": null
},
{
"code": "<script> // Given two arrays, check if one array is // stack permutation of other. // function to check if input queue is // permutable to output queue function checkStackPermutation(ip, op, n) { let input = []; // Input queue for (let i = 0; i < n; i++) { input.push(ip[i]); } // Output queue let output = []; for (let i = 0; i < n; i++) { output.push(op[i]); } // stack to be used for permutation let tempStack = []; while (input.length != 0) { let ele = input.shift(); if (ele == output[0]) { output.shift(); while (tempStack.length != 0) { if (tempStack[tempStack.length - 1] == output[0]) { tempStack.pop(); output.shift(); } else break; } } else { tempStack.push(ele); } } // If after processing, both input queue and // stack are empty then the input queue is // permutable otherwise not. return (input.length == 0 && tempStack.length == 0); } // Input Queue let input = [ 1, 2, 3 ]; // Output Queue let output = [ 2, 1, 3 ]; let n = 3; if (checkStackPermutation(input, output, n)) document.write(\"Yes\"); else document.write(\"Not Possible\"); // This code is contributed by rameshtravel07.</script>",
"e": 36988,
"s": 35345,
"text": null
},
{
"code": null,
"e": 36998,
"s": 36988,
"text": "Output: "
},
{
"code": null,
"e": 37002,
"s": 36998,
"text": "Yes"
},
{
"code": null,
"e": 37024,
"s": 37002,
"text": "Another Approach: – "
},
{
"code": null,
"e": 37344,
"s": 37024,
"text": "Idea – we will start iterating on input array and storing its element one by one in a stack and if top of our stack matches with an element in output array we will pop that element from stack and compare next element of output array with top of our stack if again it matches then again pop until our stack isn’t empty "
},
{
"code": null,
"e": 37348,
"s": 37344,
"text": "C++"
},
{
"code": "// Given two arrays, check if one array is// stack permutation of other.#include<bits/stdc++.h>using namespace std; // function to check if input array is// permutable to output arraybool checkStackPermutation(int ip[], int op[], int n){ // we will be pushing elements from input array to stack uptill top of our stack // matches with first element of output array stack<int>s; // will maintain a variable j to iterate on output array int j=0; // will iterate one by one in input array for(int i=0;i<n;i++) { // pushed an element from input array to stack s.push(ip[i]); // if our stack isn't empty and top matches with output array // then we will keep popping out from stack uptill top matches with // output array while(!s.empty() and s.top()==op[j]) { s.pop(); // increasing j so next time we can compare next element in output array j++; } } // if output array was a correct permutation of input array then // by now our stack should be empty if(s.empty()) { return true; } return false; } // Driver program to test above functionint main(){ // Input Array int input[] = {4,5,6,7,8}; // Output Array int output[] = {8,7,6,5,4}; int n = 5; if (checkStackPermutation(input, output, n)) cout << \"Yes\"; else cout << \"Not Possible\"; return 0;}",
"e": 38816,
"s": 37348,
"text": null
},
{
"code": null,
"e": 38820,
"s": 38816,
"text": "Yes"
},
{
"code": null,
"e": 38844,
"s": 38820,
"text": "Time Complexity – O(N)"
},
{
"code": null,
"e": 38868,
"s": 38844,
"text": "Space Complexity – O(N)"
},
{
"code": null,
"e": 39289,
"s": 38868,
"text": "This article is contributed by Suprotik Dey. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 39305,
"s": 39289,
"text": "PranchalKatiyar"
},
{
"code": null,
"e": 39322,
"s": 39305,
"text": "Vivekkumar Singh"
},
{
"code": null,
"e": 39336,
"s": 39322,
"text": "princiraj1992"
},
{
"code": null,
"e": 39351,
"s": 39336,
"text": "rameshtravel07"
},
{
"code": null,
"e": 39364,
"s": 39351,
"text": "8_bit_spider"
},
{
"code": null,
"e": 39381,
"s": 39364,
"text": "surinderdawra388"
},
{
"code": null,
"e": 39395,
"s": 39381,
"text": "Combinatorial"
},
{
"code": null,
"e": 39401,
"s": 39395,
"text": "Queue"
},
{
"code": null,
"e": 39407,
"s": 39401,
"text": "Stack"
},
{
"code": null,
"e": 39413,
"s": 39407,
"text": "Stack"
},
{
"code": null,
"e": 39427,
"s": 39413,
"text": "Combinatorial"
},
{
"code": null,
"e": 39433,
"s": 39427,
"text": "Queue"
},
{
"code": null,
"e": 39531,
"s": 39433,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 39549,
"s": 39531,
"text": "Combinational Sum"
},
{
"code": null,
"e": 39604,
"s": 39549,
"text": "Count ways to reach the nth stair using step 1, 2 or 3"
},
{
"code": null,
"e": 39689,
"s": 39604,
"text": "Print all possible strings of length k that can be formed from a set of n characters"
},
{
"code": null,
"e": 39726,
"s": 39689,
"text": "Count of subsets with sum equal to X"
},
{
"code": null,
"e": 39783,
"s": 39726,
"text": "Python program to get all subsets of given size of a set"
},
{
"code": null,
"e": 39823,
"s": 39783,
"text": "Breadth First Search or BFS for a Graph"
},
{
"code": null,
"e": 39857,
"s": 39823,
"text": "Level Order Binary Tree Traversal"
},
{
"code": null,
"e": 39881,
"s": 39857,
"text": "Queue Interface In Java"
},
{
"code": null,
"e": 39897,
"s": 39881,
"text": "Queue in Python"
}
] |
How to set icon in the ToolTip in C#? - GeeksforGeeks
|
19 Jul, 2019
In Windows Forms, the ToolTip represents a tiny pop-up box which appears when you place your pointer or cursor on a control; its purpose is to provide a brief description of the control present in the Windows form. In ToolTip, you are allowed to set an icon in the ToolTip window along with the ToolTip text using the ToolTipIcon property. This property accepts four different types of values that are defined under the ToolTipIcon enum:
None: It means ToolTip window does not contain icons.
Info: It is an information icon.
Warning: It is a warning icon.
Error: It is an error icon.
You can set this property in two different ways:
1. Design-Time: It is the easiest way to set the value of the ToolTipIcon property as shown in the following steps:
Step 1: Create a Windows form as shown in the below image:
Visual Studio -> File -> New -> Project -> WindowsFormApp
Step 2: Drag the ToolTip from the ToolBox and drop it on the form. When you drag and drop this ToolTip on the form, it is automatically added to the properties (named ToolTip on ToolTip1) of every control present in the current Windows form.
Step 3: After the drag and drop, go to the properties of the ToolTip and set the value of the ToolTipIcon property.
Output:
2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the ToolTipIcon property of the ToolTip programmatically with the help of the given syntax:
public System.Windows.Forms.ToolTipIcon ToolTipIcon { get; set; }
Here, ToolTipIcon represents a value provided by the ToolTipIcon enum. The following steps show how to set the ToolTipIcon property of the ToolTip dynamically:
Step 1: Create a ToolTip using the ToolTip() constructor provided by the ToolTip class.
// Creating a ToolTip
ToolTip t = new ToolTip();
Step 2: After creating the ToolTip, set its ToolTipIcon property, provided by the ToolTip class.
// Setting the ToolTipIcon property
t.ToolTipIcon = ToolTipIcon.Info;
Step 3: Finally, add this ToolTip to the controls using the SetToolTip() method. This method takes the control name and the text which you want to display in the ToolTip box.
t.SetToolTip(box1, "Name should start with Capital letter");
Example:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace WindowsFormsApp34 {

public partial class Form1 : Form {

    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        // Creating and setting the
        // properties of the Label
        Label l1 = new Label();
        l1.Location = new Point(140, 122);
        l1.Text = "Name";

        // Adding this Label
        // control to the form
        this.Controls.Add(l1);

        // Creating and setting the
        // properties of the TextBox
        TextBox box1 = new TextBox();
        box1.Location = new Point(248, 119);
        box1.BorderStyle = BorderStyle.FixedSingle;

        // Adding this TextBox
        // control to the form
        this.Controls.Add(box1);

        // Creating and setting the
        // properties of Label
        Label l2 = new Label();
        l2.Location = new Point(140, 152);
        l2.Text = "Password";

        // Adding this Label
        // control to the form
        this.Controls.Add(l2);

        // Creating and setting the
        // properties of the TextBox
        TextBox box2 = new TextBox();
        box2.Location = new Point(248, 145);
        box2.BorderStyle = BorderStyle.FixedSingle;

        // Adding this TextBox
        // control to the form
        this.Controls.Add(box2);

        // Creating and setting the
        // properties of the ToolTip
        ToolTip t = new ToolTip();
        t.Active = true;
        t.AutoPopDelay = 4000;
        t.InitialDelay = 600;
        t.IsBalloon = true;
        t.ToolTipIcon = ToolTipIcon.Info;
        t.SetToolTip(box1, "Name should start with Capital letter");
        t.SetToolTip(box2, "Password should be greater than 8 words");
    }
}
}
Output:
C#
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Extension Method in C#
HashSet in C# with Examples
C# | Inheritance
Partial Classes in C#
C# | Generics - Introduction
Top 50 C# Interview Questions & Answers
Switch Statement in C#
Convert String to Character Array in C#
C# | How to insert an element in an Array?
Linked List Implementation in C#
|
[
{
"code": null,
"e": 25547,
"s": 25519,
"text": "\n19 Jul, 2019"
},
{
"code": null,
"e": 26011,
"s": 25547,
"text": "In Windows Forms, the ToolTip represents a tiny pop-up box which appears when you place your pointer or cursor on the control and the purpose of this control is it provides a brief description about the control present in the windows form. In ToolTip, you are allowed to set an icon in the ToolTip window with ToolTip text using ToolTipIcon Property. This property accepts four different types of values that are defined under ToolTipIcon enum and the values are:"
},
{
"code": null,
"e": 26065,
"s": 26011,
"text": "None: It means ToolTip window does not contain icons."
},
{
"code": null,
"e": 26097,
"s": 26065,
"text": "Info: It is a information icon."
},
{
"code": null,
"e": 26128,
"s": 26097,
"text": "Warning: It is a warning icon."
},
{
"code": null,
"e": 26156,
"s": 26128,
"text": "Error: It is an error icon."
},
{
"code": null,
"e": 26205,
"s": 26156,
"text": "You can set this property in two different ways:"
},
{
"code": null,
"e": 26321,
"s": 26205,
"text": "1. Design-Time: It is the easiest way to set the value of the ToolTipIcon property as shown in the following steps:"
},
{
"code": null,
"e": 26437,
"s": 26321,
"text": "Step 1: Create a windows form as shown in the below image:Visual Studio -> File -> New -> Project -> WindowsFormApp"
},
{
"code": null,
"e": 26681,
"s": 26437,
"text": "Step 2: Drag the ToolTip from the ToolBox and drop it on the form. When you drag and drop this ToolTip on the form it will automatically add to the properties(named as ToolTip on ToolTip1) of every controls present in the current windows from."
},
{
"code": null,
"e": 26808,
"s": 26681,
"text": "Step 3: After drag and drop you will go to the properties of the ToolTip and set the value of the ToolTipIcon property.Output:"
},
{
"code": null,
"e": 26816,
"s": 26808,
"text": "Output:"
},
{
"code": null,
"e": 26992,
"s": 26816,
"text": "2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the ToolTipIcon property of ToolTip programmatically with the help of given syntax:"
},
{
"code": null,
"e": 27058,
"s": 26992,
"text": "public System.Windows.Forms.ToolTipIcon ToolTipIcon { get; set; }"
},
{
"code": null,
"e": 27218,
"s": 27058,
"text": "Here, ToolTipIcon represents a value provided by the ToolTipIcon enum. The following steps show how to set the ToolTipIcon property of the ToolTip dynamically:"
},
{
"code": null,
"e": 27358,
"s": 27218,
"text": "Step 1: Create a ToolTip using the ToolTip() constructor is provided by the ToolTip class.// Creating a ToolTip\nToolTip t = new ToolTip();\n"
},
{
"code": null,
"e": 27408,
"s": 27358,
"text": "// Creating a ToolTip\nToolTip t = new ToolTip();\n"
},
{
"code": null,
"e": 27585,
"s": 27408,
"text": "Step 2: After creating Tooltip, set the ToolTipIcon property of the Tooltip provided by the ToolTip class.// Setting the ToolTipIcon property\nt.ToolTipIcon = ToolTipIcon.Info;\n"
},
{
"code": null,
"e": 27656,
"s": 27585,
"text": "// Setting the ToolTipIcon property\nt.ToolTipIcon = ToolTipIcon.Info;\n"
},
{
"code": null,
"e": 29823,
"s": 27656,
"text": "Step 3: And last add this ToolTip to the controls using SetToolTip() method. This method contains the control name and the text which you want to display in the ToolTip box.t.SetToolTip(box1, \"Name should start with Capital letter\");Example:using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp34 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the // properties of the Label Label l1 = new Label(); l1.Location = new Point(140, 122); l1.Text = \"Name\"; // Adding this Label // control to the form this.Controls.Add(l1); // Creating and setting the // properties of the TextBox TextBox box1 = new TextBox(); box1.Location = new Point(248, 119); box1.BorderStyle = BorderStyle.FixedSingle; // Adding this TextBox // control to the form this.Controls.Add(box1); // Creating and setting the // properties of Label Label l2 = new Label(); l2.Location = new Point(140, 152); l2.Text = \"Password\"; // Adding this Label // control to the form this.Controls.Add(l2); // Creating and setting the // properties of the TextBox TextBox box2 = new TextBox(); box2.Location = new Point(248, 145); box2.BorderStyle = BorderStyle.FixedSingle; // Adding this TextBox // control to the form this.Controls.Add(box2); // Creating and setting the // properties of the ToolTip ToolTip t = new ToolTip(); t.Active = true; t.AutoPopDelay = 4000; t.InitialDelay = 600; t.IsBalloon = true; t.ToolTipIcon = ToolTipIcon.Info; t.SetToolTip(box1, \"Name should start with Capital letter\"); t.SetToolTip(box2, \"Password should be greater than 8 words\"); }}}Output:"
},
{
"code": null,
"e": 29884,
"s": 29823,
"text": "t.SetToolTip(box1, \"Name should start with Capital letter\");"
},
{
"code": null,
"e": 29893,
"s": 29884,
"text": "Example:"
},
{
"code": "using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp34 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the // properties of the Label Label l1 = new Label(); l1.Location = new Point(140, 122); l1.Text = \"Name\"; // Adding this Label // control to the form this.Controls.Add(l1); // Creating and setting the // properties of the TextBox TextBox box1 = new TextBox(); box1.Location = new Point(248, 119); box1.BorderStyle = BorderStyle.FixedSingle; // Adding this TextBox // control to the form this.Controls.Add(box1); // Creating and setting the // properties of Label Label l2 = new Label(); l2.Location = new Point(140, 152); l2.Text = \"Password\"; // Adding this Label // control to the form this.Controls.Add(l2); // Creating and setting the // properties of the TextBox TextBox box2 = new TextBox(); box2.Location = new Point(248, 145); box2.BorderStyle = BorderStyle.FixedSingle; // Adding this TextBox // control to the form this.Controls.Add(box2); // Creating and setting the // properties of the ToolTip ToolTip t = new ToolTip(); t.Active = true; t.AutoPopDelay = 4000; t.InitialDelay = 600; t.IsBalloon = true; t.ToolTipIcon = ToolTipIcon.Info; t.SetToolTip(box1, \"Name should start with Capital letter\"); t.SetToolTip(box2, \"Password should be greater than 8 words\"); }}}",
"e": 31812,
"s": 29893,
"text": null
},
{
"code": null,
"e": 31820,
"s": 31812,
"text": "Output:"
},
{
"code": null,
"e": 31823,
"s": 31820,
"text": "C#"
},
{
"code": null,
"e": 31921,
"s": 31823,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31944,
"s": 31921,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 31972,
"s": 31944,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 31989,
"s": 31972,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 32011,
"s": 31989,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 32040,
"s": 32011,
"text": "C# | Generics - Introduction"
},
{
"code": null,
"e": 32080,
"s": 32040,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 32103,
"s": 32080,
"text": "Switch Statement in C#"
},
{
"code": null,
"e": 32143,
"s": 32103,
"text": "Convert String to Character Array in C#"
},
{
"code": null,
"e": 32186,
"s": 32143,
"text": "C# | How to insert an element in an Array?"
}
] |
Ruby | Range first() function - GeeksforGeeks
|
06 Jan, 2020
first() is an inbuilt method in Ruby that returns an array of the first X elements. If X is not mentioned, it returns the first element only.
Syntax: range1.first(X)
Parameters: The function accepts X, which is the number of elements to take from the beginning.
Return Value: It returns an array of first X elements.
Example 1:
# Ruby program for first()
# method in Range

# Initialize range
range1 = (0..10)

# Prints the first element
puts range1.first()
Output:
0
Example 2:
# Ruby program for first()
# method in Range

# Initialize range
range1 = (0..10)

# Prints the first 3 elements
puts range1.first(3)
Output:
0
1
2
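One behaviour the examples above do not show (this sketch is an illustrative addition, not from the original article): when X is larger than the number of elements in the range, first(X) simply returns every element instead of raising an error.

```ruby
# Sketch (assumption: standard Ruby Range#first semantics):
# X larger than the range size
range1 = (0..4)

# first(10) clamps to the 5 available elements;
# puts prints each one on its own line
puts range1.first(10)
```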
Ruby Range-class
Ruby-Methods
Ruby
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Ruby | Types of Variables
Ruby | Enumerator each_with_index function
Ruby For Beginners
Ruby | Decision Making (if, if-else, if-else-if, ternary) | Set - 1
Ruby Mixins
Ruby | Array collect() operation
Ruby | Array shift() function
Ruby | String concat Method
Ruby | unless Statement and unless Modifier
Ruby | Module
|
[
{
"code": null,
"e": 25217,
"s": 25189,
"text": "\n06 Jan, 2020"
},
{
"code": null,
"e": 25354,
"s": 25217,
"text": "The first() is an inbuilt method in Ruby returns an array of first X elements. If X is not mentioned, it returns the first element only."
},
{
"code": null,
"e": 25378,
"s": 25354,
"text": "Syntax: range1.first(X)"
},
{
"code": null,
"e": 25465,
"s": 25378,
"text": "Parameters: The function accepts X which is the number of elements from the beginning."
},
{
"code": null,
"e": 25520,
"s": 25465,
"text": "Return Value: It returns an array of first X elements."
},
{
"code": null,
"e": 25531,
"s": 25520,
"text": "Example 1:"
},
{
"code": "# Ruby program for first() # method in Range # Initialize range range1 = (0..10) # Prints the first element puts range1.first()",
"e": 25662,
"s": 25531,
"text": null
},
{
"code": null,
"e": 25670,
"s": 25662,
"text": "Output:"
},
{
"code": null,
"e": 25673,
"s": 25670,
"text": "0\n"
},
{
"code": null,
"e": 25684,
"s": 25673,
"text": "Example 2:"
},
{
"code": "# Ruby program for first() # method in Range # Initialize range range1 = (0..10) # Prints the first element puts range1.first(3)",
"e": 25816,
"s": 25684,
"text": null
},
{
"code": null,
"e": 25824,
"s": 25816,
"text": "Output:"
},
{
"code": null,
"e": 25831,
"s": 25824,
"text": "0\n1\n2\n"
},
{
"code": null,
"e": 25848,
"s": 25831,
"text": "Ruby Range-class"
},
{
"code": null,
"e": 25861,
"s": 25848,
"text": "Ruby-Methods"
},
{
"code": null,
"e": 25866,
"s": 25861,
"text": "Ruby"
},
{
"code": null,
"e": 25964,
"s": 25866,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25990,
"s": 25964,
"text": "Ruby | Types of Variables"
},
{
"code": null,
"e": 26033,
"s": 25990,
"text": "Ruby | Enumerator each_with_index function"
},
{
"code": null,
"e": 26052,
"s": 26033,
"text": "Ruby For Beginners"
},
{
"code": null,
"e": 26120,
"s": 26052,
"text": "Ruby | Decision Making (if, if-else, if-else-if, ternary) | Set - 1"
},
{
"code": null,
"e": 26132,
"s": 26120,
"text": "Ruby Mixins"
},
{
"code": null,
"e": 26165,
"s": 26132,
"text": "Ruby | Array collect() operation"
},
{
"code": null,
"e": 26195,
"s": 26165,
"text": "Ruby | Array shift() function"
},
{
"code": null,
"e": 26223,
"s": 26195,
"text": "Ruby | String concat Method"
},
{
"code": null,
"e": 26267,
"s": 26223,
"text": "Ruby | unless Statement and unless Modifier"
}
] |
How to set the Visibility of the ComboBox in C#? - GeeksforGeeks
|
27 Jun, 2019
In Windows Forms, ComboBox provides two different features in a single control: it works as both a TextBox and a ListBox. In a ComboBox, only one item is displayed at a time and the rest of the items are present in the drop-down menu. You are allowed to set the visibility of the ComboBox by using the Visible property. If you want to display the given ComboBox and its child controls, set the value of the Visible property to true; otherwise, set it to false. The default value of this property is true. You can set this property using two different methods:
1. Design-Time: It is the easiest method to set the visibility of the ComboBox control using the following steps:
Step 1: Create a Windows form as shown in the below image:
Visual Studio -> File -> New -> Project -> WindowsFormApp
Step 2: Drag the ComboBox control from the ToolBox and drop it on the Windows form. You are allowed to place a ComboBox control anywhere on the Windows form according to your need.
Step 3: After the drag and drop, go to the properties of the ComboBox control and set the visibility of the ComboBox.
Output:
2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the visibility of the ComboBox programmatically with the help of the given syntax:
public bool Visible { get; set; }
Here, the value of this property is of System.Boolean type. The following steps are used to set the visibility of the ComboBox:
Step 1: Create a ComboBox using the ComboBox() constructor provided by the ComboBox class.
// Creating ComboBox using ComboBox class
ComboBox mybox = new ComboBox();
Step 2: After creating the ComboBox, set its visibility.
// Set the visibility of the combobox
mybox.Visible = false;
Step 3: Finally, add this ComboBox control to the form using the Add() method.
// Add this ComboBox to form
this.Controls.Add(mybox);
Example:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace WindowsFormsApp11 {

public partial class Form1 : Form {

    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        // Creating and setting the properties of label
        Label l = new Label();
        l.Location = new Point(222, 80);
        l.Size = new Size(99, 18);
        l.Text = "Select city name";

        // Adding this label to the form
        this.Controls.Add(l);

        // Creating and setting the properties of comboBox
        ComboBox mybox = new ComboBox();
        mybox.Location = new Point(327, 77);
        mybox.Size = new Size(216, 26);
        mybox.Visible = false;
        mybox.Name = "My_Cobo_Box";
        mybox.Items.Add("Mumbai");
        mybox.Items.Add("Delhi");
        mybox.Items.Add("Jaipur");
        mybox.Items.Add("Kolkata");
        mybox.Items.Add("Bengaluru");

        // Adding this ComboBox to the form
        this.Controls.Add(mybox);
    }
}
}
Output:
Before setting the Visible property the output is like this:
After setting the Visible property to false the output is like this:
C#
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Extension Method in C#
HashSet in C# with Examples
C# | Inheritance
Partial Classes in C#
C# | Generics - Introduction
Top 50 C# Interview Questions & Answers
Switch Statement in C#
Convert String to Character Array in C#
C# | How to insert an element in an Array?
Linked List Implementation in C#
|
[
{
"code": null,
"e": 25547,
"s": 25519,
"text": "\n27 Jun, 2019"
},
{
"code": null,
"e": 26105,
"s": 25547,
"text": "In Windows forms, ComboBox provides two different features in a single control, it means ComboBox works as both TextBox and ListBox. In ComboBox, only one item is displayed at a time and the rest of the items are present in the drop-down menu. You are allowed to set the visibility of the ComboBox by using Visible Property.If you want to display the given ComboBox and its child controls, then set the value of Visible property to true, otherwise set false. The default value of this property is true. You can set this property using two different methods:"
},
{
"code": null,
"e": 26219,
"s": 26105,
"text": "1. Design-Time: It is the easiest method to set the visibility of the ComboBox control using the following steps:"
},
{
"code": null,
"e": 26335,
"s": 26219,
"text": "Step 1: Create a windows form as shown in the below image:Visual Studio -> File -> New -> Project -> WindowsFormApp"
},
{
"code": null,
"e": 26515,
"s": 26335,
"text": "Step 2: Drag the ComboBox control from the ToolBox and dropit on the windows form. You are allowed to place a ComboBox control anywhere on the windows form according to your need."
},
{
"code": null,
"e": 26640,
"s": 26515,
"text": "Step 3: After drag and drop you will go to the properties of the ComboBox control to set the visibility the ComboBox.Output:"
},
{
"code": null,
"e": 26648,
"s": 26640,
"text": "Output:"
},
{
"code": null,
"e": 26819,
"s": 26648,
"text": "2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the visibility of the ComboBox programmatically with the help of given syntax:"
},
{
"code": null,
"e": 26853,
"s": 26819,
"text": "public bool Visible { get; set; }"
},
{
"code": null,
"e": 26977,
"s": 26853,
"text": "Here, the value of this property is of System.Boolean type. Following steps are used to set the visibility of the ComboBox:"
},
{
"code": null,
"e": 27146,
"s": 26977,
"text": "Step 1: Create a combobox using the ComboBox() constructor is provided by the ComboBox class.// Creating ComboBox using ComboBox class\nComboBox mybox = new ComboBox();\n"
},
{
"code": null,
"e": 27222,
"s": 27146,
"text": "// Creating ComboBox using ComboBox class\nComboBox mybox = new ComboBox();\n"
},
{
"code": null,
"e": 27353,
"s": 27222,
"text": "Step 2: After creating ComboBox, set the visibility of the ComboBox.// Set the visibility of the combobox \nmybox.Visible = false;\n"
},
{
"code": null,
"e": 27416,
"s": 27353,
"text": "// Set the visibility of the combobox \nmybox.Visible = false;\n"
},
{
"code": null,
"e": 28852,
"s": 27416,
"text": "Step 3: And last add this combobox control to form using Add() method.// Add this ComboBox to form\nthis.Controls.Add(mybox);\nExample:using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp11 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the properties of label Label l = new Label(); l.Location = new Point(222, 80); l.Size = new Size(99, 18); l.Text = \"Select city name\"; // Adding this label to the form this.Controls.Add(l); // Creating and setting the properties of comboBox ComboBox mybox = new ComboBox(); mybox.Location = new Point(327, 77); mybox.Size = new Size(216, 26); mybox.Visible = false; mybox.Name = \"My_Cobo_Box\"; mybox.Items.Add(\"Mumbai\"); mybox.Items.Add(\"Delhi\"); mybox.Items.Add(\"Jaipur\"); mybox.Items.Add(\"Kolkata\"); mybox.Items.Add(\"Bengaluru\"); // Adding this ComboBox to the form this.Controls.Add(mybox); }}}Output:Before setting the Visible property the output is like this:After setting the Visible property to false the output is like this:"
},
{
"code": null,
"e": 28908,
"s": 28852,
"text": "// Add this ComboBox to form\nthis.Controls.Add(mybox);\n"
},
{
"code": null,
"e": 28917,
"s": 28908,
"text": "Example:"
},
{
"code": "using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp11 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the properties of label Label l = new Label(); l.Location = new Point(222, 80); l.Size = new Size(99, 18); l.Text = \"Select city name\"; // Adding this label to the form this.Controls.Add(l); // Creating and setting the properties of comboBox ComboBox mybox = new ComboBox(); mybox.Location = new Point(327, 77); mybox.Size = new Size(216, 26); mybox.Visible = false; mybox.Name = \"My_Cobo_Box\"; mybox.Items.Add(\"Mumbai\"); mybox.Items.Add(\"Delhi\"); mybox.Items.Add(\"Jaipur\"); mybox.Items.Add(\"Kolkata\"); mybox.Items.Add(\"Bengaluru\"); // Adding this ComboBox to the form this.Controls.Add(mybox); }}}",
"e": 30085,
"s": 28917,
"text": null
},
{
"code": null,
"e": 30093,
"s": 30085,
"text": "Output:"
},
{
"code": null,
"e": 30154,
"s": 30093,
"text": "Before setting the Visible property the output is like this:"
},
{
"code": null,
"e": 30223,
"s": 30154,
"text": "After setting the Visible property to false the output is like this:"
},
{
"code": null,
"e": 30226,
"s": 30223,
"text": "C#"
},
{
"code": null,
"e": 30324,
"s": 30226,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30347,
"s": 30324,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 30375,
"s": 30347,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 30392,
"s": 30375,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 30414,
"s": 30392,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 30443,
"s": 30414,
"text": "C# | Generics - Introduction"
},
{
"code": null,
"e": 30483,
"s": 30443,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 30506,
"s": 30483,
"text": "Switch Statement in C#"
},
{
"code": null,
"e": 30546,
"s": 30506,
"text": "Convert String to Character Array in C#"
},
{
"code": null,
"e": 30589,
"s": 30546,
"text": "C# | How to insert an element in an Array?"
}
] |
Flatten a Stream of Arrays in Java using forEach loop - GeeksforGeeks
|
11 Dec, 2018
Given a Stream of Arrays in Java, the task is to flatten the stream using the forEach() method.
Examples:
Input: arr[][] = {{ 1, 2 }, { 3, 4, 5, 6 }, { 7, 8, 9 }}
Output: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Input: arr[][] = {{'G', 'e', 'e', 'k', 's'}, {'F', 'o', 'r'}}
Output: [G, e, e, k, s, F, o, r]
Approach:
Get the Arrays in the form of 2D array.
Create an empty list to collect the flattened elements.
With the help of forEach loop, convert each elements of the array into stream and add it to the list
Now convert this list into stream using stream() method.
Now flatten the stream by converting it into array using toArray() method.
Below is the implementation of the above approach:
Example 1: Using arrays of integer.
// Java program to flatten a stream of arrays
// using forEach() method

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

class GFG {

    // Function to flatten a Stream of Arrays
    public static <T> Stream<T> flattenStream(T[][] arrays)
    {
        // Create an empty list to collect the stream
        List<T> list = new ArrayList<>();

        // Using forEach loop
        // convert the array into stream
        // and add the stream into list
        for (T[] array : arrays) {
            Arrays.stream(array)
                .forEach(list::add);
        }

        // Convert the list into Stream and return it
        return list.stream();
    }

    public static void main(String[] args)
    {
        // Get the arrays to be flattened.
        Integer[][] arr = { { 1, 2 },
                            { 3, 4, 5, 6 },
                            { 7, 8, 9 } };

        // Flatten the Stream
        Integer[] flatArray = flattenStream(arr)
                                  .toArray(Integer[]::new);

        // Print the flattened array
        System.out.println(Arrays.toString(flatArray));
    }
}
[1, 2, 3, 4, 5, 6, 7, 8, 9]
Example 2: Using arrays of Characters.
// Java program to flatten a stream of arrays
// using forEach() method

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

class GFG {

    // Function to flatten a Stream of Arrays
    public static <T> Stream<T> flattenStream(T[][] arrays)
    {
        // Create an empty list to collect the stream
        List<T> list = new ArrayList<>();

        // Using forEach loop
        // convert the array into stream
        // and add the stream into list
        for (T[] array : arrays) {
            Arrays.stream(array)
                .forEach(list::add);
        }

        // Convert the list into Stream and return it
        return list.stream();
    }

    public static void main(String[] args)
    {
        // Get the arrays to be flattened.
        Character[][] arr = { { 'G', 'e', 'e', 'k', 's' },
                              { 'F', 'o', 'r' },
                              { 'G', 'e', 'e', 'k', 's' } };

        // Flatten the Stream
        Character[] flatArray = flattenStream(arr)
                                    .toArray(Character[]::new);

        // Print the flattened array
        System.out.println(Arrays.toString(flatArray));
    }
}
[G, e, e, k, s, F, o, r, G, e, e, k, s]
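The forEach-and-collect pattern above is language-agnostic. As a quick sanity check of the same logic outside Java, here is a minimal Python sketch (an illustration added here, not part of the original article):

```python
def flatten(arrays):
    # Mirror the Java approach: walk each sub-array with a loop
    # and append every element to one flat list, preserving order.
    flat = []
    for array in arrays:
        for element in array:
            flat.append(element)
    return flat

print(flatten([[1, 2], [3, 4, 5, 6], [7, 8, 9]]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(flatten([["G", "e", "e", "k", "s"], ["F", "o", "r"], ["G", "e", "e", "k", "s"]]))
# ['G', 'e', 'e', 'k', 's', 'F', 'o', 'r', 'G', 'e', 'e', 'k', 's']
```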
Java - util package
Java-Array-Programs
Java-Arrays
java-stream
Java-Stream-programs
Java
Java Programs
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Stream In Java
Constructors in Java
Exceptions in Java
Functional Interfaces in Java
Different ways of Reading a text file in Java
Java Programming Examples
Convert Double to Integer in Java
Implementing a Linked List in Java using Class
How to Iterate HashMap in Java?
Program to print ASCII Value of a character
|
[
{
"code": null,
"e": 25251,
"s": 25223,
"text": "\n11 Dec, 2018"
},
{
"code": null,
"e": 25343,
"s": 25251,
"text": "Given a Stream of Arrays in Java, the task is to Flatten the Stream using forEach() method."
},
{
"code": null,
"e": 25353,
"s": 25343,
"text": "Examples:"
},
{
"code": null,
"e": 25543,
"s": 25353,
"text": "Input: arr[][] = {{ 1, 2 }, { 3, 4, 5, 6 }, { 7, 8, 9 }}\nOutput: [1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nInput: arr[][] = {{'G', 'e', 'e', 'k', 's'}, {'F', 'o', 'r'}}\nOutput: [G, e, e, k, s, F, o, r]\n"
},
{
"code": null,
"e": 25553,
"s": 25543,
"text": "Approach:"
},
{
"code": null,
"e": 25878,
"s": 25553,
"text": "Get the Arrays in the form of 2D array.Create an empty list to collect the flattened elements.With the help of forEach loop, convert each elements of the array into stream and add it to the listNow convert this list into stream using stream() method.Now flatten the stream by converting it into array using toArray() method."
},
{
"code": null,
"e": 25918,
"s": 25878,
"text": "Get the Arrays in the form of 2D array."
},
{
"code": null,
"e": 25974,
"s": 25918,
"text": "Create an empty list to collect the flattened elements."
},
{
"code": null,
"e": 26075,
"s": 25974,
"text": "With the help of forEach loop, convert each elements of the array into stream and add it to the list"
},
{
"code": null,
"e": 26132,
"s": 26075,
"text": "Now convert this list into stream using stream() method."
},
{
"code": null,
"e": 26207,
"s": 26132,
"text": "Now flatten the stream by converting it into array using toArray() method."
},
{
"code": null,
"e": 26258,
"s": 26207,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 26294,
"s": 26258,
"text": "Example 1: Using arrays of integer."
},
{
"code": "// Java program to flatten a stream of arrays// using forEach() method import java.util.ArrayList;import java.util.Arrays;import java.util.List;import java.util.stream.Stream; class GFG { // Function to flatten a Stream of Arrays public static <T> Stream<T> flattenStream(T[][] arrays) { // Create an empty list to collect the stream List<T> list = new ArrayList<>(); // Using forEach loop // convert the array into stream // and add the stream into list for (T[] array : arrays) { Arrays.stream(array) .forEach(list::add); } // Convert the list into Stream and return it return list.stream(); } public static void main(String[] args) { // Get the arrays to be flattened. Integer[][] arr = { { 1, 2 }, { 3, 4, 5, 6 }, { 7, 8, 9 } }; // Flatten the Stream Integer[] flatArray = flattenStream(arr) .toArray(Integer[] ::new); // Print the flattened array System.out.println(Arrays.toString(flatArray)); }}",
"e": 27440,
"s": 26294,
"text": null
},
{
"code": null,
"e": 27469,
"s": 27440,
"text": "[1, 2, 3, 4, 5, 6, 7, 8, 9]\n"
},
{
"code": null,
"e": 27508,
"s": 27469,
"text": "Example 2: Using arrays of Characters."
},
{
"code": "// Java program to flatten a stream of arrays// using forEach() method import java.util.ArrayList;import java.util.Arrays;import java.util.List;import java.util.stream.Stream; class GFG { // Function to flatten a Stream of Arrays public static <T> Stream<T> flattenStream(T[][] arrays) { // Create an empty list to collect the stream List<T> list = new ArrayList<>(); // Using forEach loop // convert the array into stream // and add the stream into list for (T[] array : arrays) { Arrays.stream(array) .forEach(list::add); } // Convert the list into Stream and return it return list.stream(); } public static void main(String[] args) { // Get the arrays to be flattened. Character[][] arr = { { 'G', 'e', 'e', 'k', 's' }, { 'F', 'o', 'r' }, { 'G', 'e', 'e', 'k', 's' } }; // Flatten the Stream Character[] flatArray = flattenStream(arr) .toArray(Character[] ::new); // Print the flattened array System.out.println(Arrays.toString(flatArray)); }}",
"e": 28700,
"s": 27508,
"text": null
},
{
"code": null,
"e": 28741,
"s": 28700,
"text": "[G, e, e, k, s, F, o, r, G, e, e, k, s]\n"
},
{
"code": null,
"e": 28761,
"s": 28741,
"text": "Java - util package"
},
{
"code": null,
"e": 28781,
"s": 28761,
"text": "Java-Array-Programs"
},
{
"code": null,
"e": 28793,
"s": 28781,
"text": "Java-Arrays"
},
{
"code": null,
"e": 28805,
"s": 28793,
"text": "java-stream"
},
{
"code": null,
"e": 28826,
"s": 28805,
"text": "Java-Stream-programs"
},
{
"code": null,
"e": 28831,
"s": 28826,
"text": "Java"
},
{
"code": null,
"e": 28845,
"s": 28831,
"text": "Java Programs"
},
{
"code": null,
"e": 28850,
"s": 28845,
"text": "Java"
},
{
"code": null,
"e": 28948,
"s": 28850,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28963,
"s": 28948,
"text": "Stream In Java"
},
{
"code": null,
"e": 28984,
"s": 28963,
"text": "Constructors in Java"
},
{
"code": null,
"e": 29003,
"s": 28984,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 29033,
"s": 29003,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 29079,
"s": 29033,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 29105,
"s": 29079,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 29139,
"s": 29105,
"text": "Convert Double to Integer in Java"
},
{
"code": null,
"e": 29186,
"s": 29139,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 29218,
"s": 29186,
"text": "How to Iterate HashMap in Java?"
}
] |
Length of the longest subsequence such that xor of adjacent elements is non-decreasing - GeeksforGeeks
|
04 Jun, 2021
Given a sequence arr of N positive integers, the task is to find the length of the longest subsequence such that the XOR of adjacent integers in the subsequence is non-decreasing.
Examples:
Input: N = 8, arr = {1, 100, 3, 64, 0, 5, 2, 15} Output: 6. The subsequence of maximum length is {1, 3, 0, 5, 2, 15}, with XORs of adjacent elements {2, 3, 5, 7, 13}. Input: N = 3, arr = {1, 7, 10} Output: 3. The subsequence of maximum length is {1, 7, 10}, with XORs of adjacent elements {6, 13}.
Approach:
This problem can be solved using dynamic programming where dp[i] will store the length of the longest valid subsequence that ends at index i.
First, compute the XOR of every pair of elements, i.e. arr[i] ^ arr[j] for i < j, store it together with the index pair (i, j), and sort these entries by XOR value, since the XORs along the chosen subsequence must be non-decreasing.
Now, processing the pairs in this sorted order, when the pair (i, j) is considered, the length of the longest subsequence ending at j becomes max(dp[j], 1 + dp[i]). In this way, calculate the final dp[] value for each position and then take the maximum of them.
Below is the implementation of the above approach:
C++
Python3
Javascript
// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function to find the length of the longest// subsequence such that the XOR of adjacent// elements in the subsequence must// be non-decreasingint LongestXorSubsequence(int arr[], int n){ vector<pair<int, pair<int, int> > > v; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { // Computing xor of all the pairs // of elements and store them // along with the pair (i, j) v.push_back(make_pair(arr[i] ^ arr[j], make_pair(i, j))); } } // Sort all possible xor values sort(v.begin(), v.end()); int dp[n]; // Initialize the dp array for (int i = 0; i < n; i++) { dp[i] = 1; } // Calculating the dp array // for each possible position // and calculating the max length // that ends at a particular index for (auto i : v) { dp[i.second.second] = max(dp[i.second.second], 1 + dp[i.second.first]); } int ans = 1; // Taking maximum of all position for (int i = 0; i < n; i++) ans = max(ans, dp[i]); return ans;} // Driver codeint main(){ int arr[] = { 2, 12, 6, 7, 13, 14, 8, 6 }; int n = sizeof(arr) / sizeof(arr[0]); cout << LongestXorSubsequence(arr, n); return 0;}
# Python3 implementation of the approach # Function to find the length of the longest# subsequence such that the XOR of adjacent# elements in the subsequence must# be non-decreasingdef LongestXorSubsequence(arr, n):     v = []     for i in range(0, n):         for j in range(i + 1, n):              # Computing xor of all the pairs             # of elements and store them             # along with the pair (i, j)             v.append([(arr[i] ^ arr[j]), (i, j)])      # Sort all possible xor values     v.sort()      # Initialize the dp array     dp = [1 for x in range(n)]      # Calculating the dp array     # for each possible position     # and calculating the max length     # that ends at a particular index     for a, b in v:         dp[b[1]] = max(dp[b[1]], 1 + dp[b[0]])      ans = 1      # Taking maximum of all position     for i in range(0, n):         ans = max(ans, dp[i])      return ans  # Driver codearr = [ 2, 12, 6, 7, 13, 14, 8, 6 ]n = len(arr)print(LongestXorSubsequence(arr, n))  # This code is contributed by Sanjit Prasad
<script>// Javascript implementation of the approach // Function to find the length of the longest// subsequence such that the XOR of adjacent// elements in the subsequence must// be non-decreasingfunction LongestXorSubsequence(arr, n) { let v = []; for (let i = 0; i < n; i++) { for (let j = i + 1; j < n; j++) { // Computing xor of all the pairs // of elements and store them // along with the pair (i, j) v.push([arr[i] ^ arr[j], [i, j]]); } } // Sort all possible xor values v.sort((a, b) => a[0] - b[0]); let dp = new Array(n); // Initialize the dp array for (let i = 0; i < n; i++) { dp[i] = 1; } // Calculating the dp array // for each possible position // and calculating the max length // that ends at a particular index for (let i of v) { dp[i[1][1]] = Math.max(dp[i[1][1]], 1 + dp[i[1][0]]); } let ans = 1; // Taking maximum of all position for (let i = 0; i < n; i++) ans = Math.max(ans, dp[i]); return ans;} // Driver codelet arr = [2, 12, 6, 7, 13, 14, 8, 6];let n = arr.length; document.write(LongestXorSubsequence(arr, n)); // This code is contributed by _saurabh_jaiswal.</script>
5
Time Complexity: O(N^2 log N), since all O(N^2) pairs are generated and then sorted. Auxiliary Space: O(N^2) for the list of pairs.
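The output of 5 above can be sanity-checked by brute force. The following Python sketch (exponential in N, so usable for small inputs only; it is an illustration added here, not part of the original article) enumerates every subsequence and keeps the longest one whose adjacent XORs are non-decreasing:

```python
def brute_force_longest(arr):
    # Enumerate all 2^N subsequences via bitmasks and keep the longest
    # whose adjacent XOR values form a non-decreasing sequence.
    n = len(arr)
    best = 1 if n else 0
    for mask in range(1, 1 << n):
        sub = [arr[i] for i in range(n) if (mask >> i) & 1]
        xors = [sub[i] ^ sub[i + 1] for i in range(len(sub) - 1)]
        if all(xors[i] <= xors[i + 1] for i in range(len(xors) - 1)):
            best = max(best, len(sub))
    return best

print(brute_force_longest([2, 12, 6, 7, 13, 14, 8, 6]))  # 5
print(brute_force_longest([1, 7, 10]))                   # 3
```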
Sanjit_Prasad
ujjwalgoel1103
gfgking
Bitwise-XOR
subsequence
Arrays
C++ Programs
Dynamic Programming
Greedy
Arrays
Dynamic Programming
Greedy
Maximum and minimum of an array using minimum number of comparisons
Top 50 Array Coding Problems for Interviews
Introduction to Arrays
Multidimensional Arrays in Java
Linear Search
Header files in C/C++ and its uses
Program to print ASCII Value of a character
C++ Program for QuickSort
How to return multiple values from a function in C or C++?
Sorting a Map by value in C++ STL
|
[
{
"code": null,
"e": 26171,
"s": 26143,
"text": "\n04 Jun, 2021"
},
{
"code": null,
"e": 26352,
"s": 26171,
"text": "Given a sequence arr of N positive integers, the task is to find the length of the longest subsequence such that xor of adjacent integers in the subsequence must be non-decreasing."
},
{
"code": null,
"e": 26363,
"s": 26352,
"text": "Examples: "
},
{
"code": null,
"e": 26658,
"s": 26363,
"text": "Input: N = 8, arr = {1, 100, 3, 64, 0, 5, 2, 15} Output: 6 The subsequence of maximum length is {1, 3, 0, 5, 2, 15} with XOR of adjacent elements as {2, 3, 5, 7, 13}Input: N = 3, arr = {1, 7, 10} Output: 3 The subsequence of maximum length is {1, 3, 7} with XOR of adjacent elements as {2, 4}. "
},
{
"code": null,
"e": 26669,
"s": 26658,
"text": "Approach: "
},
{
"code": null,
"e": 26811,
"s": 26669,
"text": "This problem can be solved using dynamic programming where dp[i] will store the length of the longest valid subsequence that ends at index i."
},
{
"code": null,
"e": 26991,
"s": 26811,
"text": "First, store the xor of all the pairs of elements i.e. arr[i] ^ arr[j] and the pair (i, j) also and then sort them according to the value of xor as they need to be non-decreasing."
},
{
"code": null,
"e": 27234,
"s": 26991,
"text": "Now if the pair (i, j) is considered then the length of the longest subsequence that ends at j will be max(dp[j], 1 + dp[i]). In this way, calculate the maximum possible value of dp[] array for each position and then take the maximum of them."
},
{
"code": null,
"e": 27286,
"s": 27234,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 27290,
"s": 27286,
"text": "C++"
},
{
"code": null,
"e": 27298,
"s": 27290,
"text": "Python3"
},
{
"code": null,
"e": 27309,
"s": 27298,
"text": "Javascript"
},
{
"code": "// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function to find the length of the longest// subsequence such that the XOR of adjacent// elements in the subsequence must// be non-decreasingint LongestXorSubsequence(int arr[], int n){ vector<pair<int, pair<int, int> > > v; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { // Computing xor of all the pairs // of elements and store them // along with the pair (i, j) v.push_back(make_pair(arr[i] ^ arr[j], make_pair(i, j))); } } // Sort all possible xor values sort(v.begin(), v.end()); int dp[n]; // Initialize the dp array for (int i = 0; i < n; i++) { dp[i] = 1; } // Calculating the dp array // for each possible position // and calculating the max length // that ends at a particular index for (auto i : v) { dp[i.second.second] = max(dp[i.second.second], 1 + dp[i.second.first]); } int ans = 1; // Taking maximum of all position for (int i = 0; i < n; i++) ans = max(ans, dp[i]); return ans;} // Driver codeint main(){ int arr[] = { 2, 12, 6, 7, 13, 14, 8, 6 }; int n = sizeof(arr) / sizeof(arr[0]); cout << LongestXorSubsequence(arr, n); return 0;}",
"e": 28688,
"s": 27309,
"text": null
},
{
"code": "# Python3 implementation of the approach # Function to find the length of the longest# subsequence such that the XOR of adjacent# elements in the subsequence must# be non-decreasingdef LongestXorSubsequence(arr, n): v = [] for i in range(0, n): for j in range(i + 1, n): # Computing xor of all the pairs # of elements and store them # along with the pair (i, j) v.append([(arr[i] ^ arr[j]), (i, j)]) # v.push_back(make_pair(arr[i] ^ arr[j], make_pair(i, j))) # Sort all possible xor values v.sort() # Initialize the dp array dp = [1 for x in range(88)] # Calculating the dp array # for each possible position # and calculating the max length # that ends at a particular index for a, b in v: dp[b[1]] = max(dp[b[1]], 1 + dp[b[0]]) ans = 1 # Taking maximum of all position for i in range(0, n): ans = max(ans, dp[i]) return ans # Driver codearr = [ 2, 12, 6, 7, 13, 14, 8, 6 ]n = len(arr)print(LongestXorSubsequence(arr, n)) # This code is contributed by Sanjit Prasad",
"e": 29799,
"s": 28688,
"text": null
},
{
"code": "<script>// Javascript implementation of the approach // Function to find the length of the longest// subsequence such that the XOR of adjacent// elements in the subsequence must// be non-decreasingfunction LongestXorSubsequence(arr, n) { let v = []; for (let i = 0; i < n; i++) { for (let j = i + 1; j < n; j++) { // Computing xor of all the pairs // of elements and store them // along with the pair (i, j) v.push([arr[i] ^ arr[j], [i, j]]); } } // Sort all possible xor values v.sort((a, b) => a[0] - b[0]); let dp = new Array(n); // Initialize the dp array for (let i = 0; i < n; i++) { dp[i] = 1; } // Calculating the dp array // for each possible position // and calculating the max length // that ends at a particular index for (let i of v) { dp[i[1][1]] = Math.max(dp[i[1][1]], 1 + dp[i[1][0]]); } let ans = 1; // Taking maximum of all position for (let i = 0; i < n; i++) ans = Math.max(ans, dp[i]); return ans;} // Driver codelet arr = [2, 12, 6, 7, 13, 14, 8, 6];let n = arr.length; document.write(LongestXorSubsequence(arr, n)); // This code is contributed by _saurabh_jaiswal.</script>",
"e": 31069,
"s": 29799,
"text": null
},
{
"code": null,
"e": 31071,
"s": 31069,
"text": "5"
},
{
"code": null,
"e": 31117,
"s": 31071,
"text": "Time Complexity: O(N* N)Auxiliary Space: O(N)"
},
{
"code": null,
"e": 31131,
"s": 31117,
"text": "Sanjit_Prasad"
},
{
"code": null,
"e": 31146,
"s": 31131,
"text": "ujjwalgoel1103"
},
{
"code": null,
"e": 31154,
"s": 31146,
"text": "gfgking"
},
{
"code": null,
"e": 31166,
"s": 31154,
"text": "Bitwise-XOR"
},
{
"code": null,
"e": 31178,
"s": 31166,
"text": "subsequence"
},
{
"code": null,
"e": 31185,
"s": 31178,
"text": "Arrays"
},
{
"code": null,
"e": 31198,
"s": 31185,
"text": "C++ Programs"
},
{
"code": null,
"e": 31218,
"s": 31198,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 31225,
"s": 31218,
"text": "Greedy"
},
{
"code": null,
"e": 31232,
"s": 31225,
"text": "Arrays"
},
{
"code": null,
"e": 31252,
"s": 31232,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 31259,
"s": 31252,
"text": "Greedy"
},
{
"code": null,
"e": 31357,
"s": 31259,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31425,
"s": 31357,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 31469,
"s": 31425,
"text": "Top 50 Array Coding Problems for Interviews"
},
{
"code": null,
"e": 31492,
"s": 31469,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 31524,
"s": 31492,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 31538,
"s": 31524,
"text": "Linear Search"
},
{
"code": null,
"e": 31573,
"s": 31538,
"text": "Header files in C/C++ and its uses"
},
{
"code": null,
"e": 31617,
"s": 31573,
"text": "Program to print ASCII Value of a character"
},
{
"code": null,
"e": 31643,
"s": 31617,
"text": "C++ Program for QuickSort"
},
{
"code": null,
"e": 31702,
"s": 31643,
"text": "How to return multiple values from a function in C or C++?"
}
] |
How to Set the Foreground Color of the ComboBox in C#? - GeeksforGeeks
|
27 Jun, 2019
In Windows Forms, the ComboBox provides two features in a single control: it works as both a TextBox and a ListBox. Only one item is displayed at a time, and the rest of the items are available in the drop-down menu. You are allowed to set the foreground color of the ComboBox by using the ForeColor property, which gives your ComboBox control a more attractive look. You can set this property using two different methods:
1. Design-Time: It is the easiest method to set the foreground color of the ComboBox control using the following steps:
Step 1: Create a Windows form: Visual Studio -> File -> New -> Project -> WindowsFormApp
Step 2: Drag the ComboBox control from the ToolBox and drop it on the windows form. You are allowed to place a ComboBox control anywhere on the windows form according to your need.
Step 3: After drag and drop, go to the Properties window of the ComboBox control and set its foreground color.
Output:
2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the foreground color of the ComboBox programmatically with the help of given syntax:
public override System.Drawing.Color ForeColor { get; set; }
Here, Color indicates the foreground color of the ComboBox. Following steps are used to set the foreground color of the ComboBox:
Step 1: Create a ComboBox using the ComboBox() constructor provided by the ComboBox class.
// Creating ComboBox using ComboBox class
ComboBox mybox = new ComboBox();
Step 2: After creating the ComboBox, set its foreground color.
// Set the foreground color of the ComboBox
mybox.ForeColor = Color.DeepPink;
Step 3: Finally, add this ComboBox control to the form using the Add() method.
// Add this ComboBox to the form
this.Controls.Add(mybox);
Example:
using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp11 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the properties of label Label l = new Label(); l.Location = new Point(222, 80); l.Size = new Size(99, 18); l.Text = "Select city name"; // Adding this label to the form this.Controls.Add(l); // Creating and setting the properties of comboBox ComboBox mybox = new ComboBox(); mybox.Location = new Point(327, 77); mybox.Size = new Size(216, 26); mybox.Sorted = true; mybox.BackColor = Color.LightBlue; mybox.ForeColor = Color.DeepPink; mybox.Name = "My_Cobo_Box"; mybox.Items.Add("Mumbai"); mybox.Items.Add("Delhi"); mybox.Items.Add("Jaipur"); mybox.Items.Add("Kolkata"); mybox.Items.Add("Bengaluru"); // Adding this ComboBox to the form this.Controls.Add(mybox); }}}
Output:
C#
C# Dictionary with examples
C# | Delegates
C# | Method Overriding
C# | Abstract Classes
Difference between Ref and Out keywords in C#
Extension Method in C#
C# | Class and Object
C# | Constructors
C# | String.IndexOf( ) Method | Set - 1
C# | Replace() Method
|
[
{
"code": null,
"e": 25357,
"s": 25329,
"text": "\n27 Jun, 2019"
},
{
"code": null,
"e": 25807,
"s": 25357,
"text": "In Windows forms, ComboBox provides two different features in a single control, it means ComboBox works as both TextBox and ListBox. In ComboBox, only one item is displayed at a time and the rest of the items are present in the drop-down menu. You are allowed to set the foreground color of the ComboBox by using the ForeColor Property. It gives a more attractive look to your ComboBox control. You can set this property using two different methods:"
},
{
"code": null,
"e": 25927,
"s": 25807,
"text": "1. Design-Time: It is the easiest method to set the foreground color of the ComboBox control using the following steps:"
},
{
"code": null,
"e": 26043,
"s": 25927,
"text": "Step 1: Create a windows form as shown in the below image:Visual Studio -> File -> New -> Project -> WindowsFormApp"
},
{
"code": null,
"e": 26224,
"s": 26043,
"text": "Step 2: Drag the ComboBox control from the ToolBox and drop it on the windows form. You are allowed to place a ComboBox control anywhere on the windows form according to your need."
},
{
"code": null,
"e": 26358,
"s": 26224,
"text": "Step 3: After drag and drop you will go to the properties of the ComboBox control to set the foreground color of the ComboBox.Output:"
},
{
"code": null,
"e": 26366,
"s": 26358,
"text": "Output:"
},
{
"code": null,
"e": 26543,
"s": 26366,
"text": "2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the foreground color of the ComboBox programmatically with the help of given syntax:"
},
{
"code": null,
"e": 26604,
"s": 26543,
"text": "public override System.Drawing.Color ForeColor { get; set; }"
},
{
"code": null,
"e": 26734,
"s": 26604,
"text": "Here, Color indicates the foreground color of the ComboBox. Following steps are used to set the foreground color of the ComboBox:"
},
{
"code": null,
"e": 26903,
"s": 26734,
"text": "Step 1: Create a combobox using the ComboBox() constructor is provided by the ComboBox class.// Creating ComboBox using ComboBox class\nComboBox mybox = new ComboBox();\n"
},
{
"code": null,
"e": 26979,
"s": 26903,
"text": "// Creating ComboBox using ComboBox class\nComboBox mybox = new ComboBox();\n"
},
{
"code": null,
"e": 27133,
"s": 26979,
"text": "Step 2: After creating ComboBox, set the foreground color of the ComboBox.// Set the foreground color of the ComboBox \nmybox.ForeColor = Color.DeepPink;\n"
},
{
"code": null,
"e": 27213,
"s": 27133,
"text": "// Set the foreground color of the ComboBox \nmybox.ForeColor = Color.DeepPink;\n"
},
{
"code": null,
"e": 28601,
"s": 27213,
"text": "Step 3: And last add this combobox control to form using Add() method.// Add this ComboBox to form\nthis.Controls.Add(mybox);Example:using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp11 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the properties of label Label l = new Label(); l.Location = new Point(222, 80); l.Size = new Size(99, 18); l.Text = \"Select city name\"; // Adding this label to the form this.Controls.Add(l); // Creating and setting the properties of comboBox ComboBox mybox = new ComboBox(); mybox.Location = new Point(327, 77); mybox.Size = new Size(216, 26); mybox.Sorted = true; mybox.BackColor = Color.LightBlue; mybox.ForeColor = Color.DeepPink; mybox.Name = \"My_Cobo_Box\"; mybox.Items.Add(\"Mumbai\"); mybox.Items.Add(\"Delhi\"); mybox.Items.Add(\"Jaipur\"); mybox.Items.Add(\"Kolkata\"); mybox.Items.Add(\"Bengaluru\"); // Adding this ComboBox to the form this.Controls.Add(mybox); }}}Output:"
},
{
"code": null,
"e": 28656,
"s": 28601,
"text": "// Add this ComboBox to form\nthis.Controls.Add(mybox);"
},
{
"code": null,
"e": 28665,
"s": 28656,
"text": "Example:"
},
{
"code": "using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp11 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the properties of label Label l = new Label(); l.Location = new Point(222, 80); l.Size = new Size(99, 18); l.Text = \"Select city name\"; // Adding this label to the form this.Controls.Add(l); // Creating and setting the properties of comboBox ComboBox mybox = new ComboBox(); mybox.Location = new Point(327, 77); mybox.Size = new Size(216, 26); mybox.Sorted = true; mybox.BackColor = Color.LightBlue; mybox.ForeColor = Color.DeepPink; mybox.Name = \"My_Cobo_Box\"; mybox.Items.Add(\"Mumbai\"); mybox.Items.Add(\"Delhi\"); mybox.Items.Add(\"Jaipur\"); mybox.Items.Add(\"Kolkata\"); mybox.Items.Add(\"Bengaluru\"); // Adding this ComboBox to the form this.Controls.Add(mybox); }}}",
"e": 29914,
"s": 28665,
"text": null
},
{
"code": null,
"e": 29922,
"s": 29914,
"text": "Output:"
},
{
"code": null,
"e": 29925,
"s": 29922,
"text": "C#"
},
{
"code": null,
"e": 30023,
"s": 29925,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30051,
"s": 30023,
"text": "C# Dictionary with examples"
},
{
"code": null,
"e": 30066,
"s": 30051,
"text": "C# | Delegates"
},
{
"code": null,
"e": 30089,
"s": 30066,
"text": "C# | Method Overriding"
},
{
"code": null,
"e": 30111,
"s": 30089,
"text": "C# | Abstract Classes"
},
{
"code": null,
"e": 30157,
"s": 30111,
"text": "Difference between Ref and Out keywords in C#"
},
{
"code": null,
"e": 30180,
"s": 30157,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 30202,
"s": 30180,
"text": "C# | Class and Object"
},
{
"code": null,
"e": 30220,
"s": 30202,
"text": "C# | Constructors"
},
{
"code": null,
"e": 30260,
"s": 30220,
"text": "C# | String.IndexOf( ) Method | Set - 1"
}
] |
C# Program to Sort a List of Employees Based on Salary using LINQ - GeeksforGeeks
|
06 Dec, 2021
Given a list of employees, we sort the list according to salary using LINQ. For this task we use the OrderBy() method, which sorts the elements of the specified sequence in ascending order.
Example:
Input:
{id = 101, Name = Rohit, Salary = 50000, Department = HR}
{id = 104, Name = Rohit, Salary = 10000, Department = Development}
{id = 106, Name = Rohit, Salary = 80000, Department = HR}
{id = 108, Name = Rohit, Salary = 20000, Department = Development}
Output:
{id = 104, Name = Rohit, Salary = 10000, Department = Development}
{id = 108, Name = Rohit, Salary = 20000, Department = Development}
{id = 101, Name = Rohit, Salary = 50000, Department = HR}
{id = 106, Name = Rohit, Salary = 80000, Department = HR}
Approach:
1. Create a list of employees along with their id, name, salary, and department.
2. Now sort the employees' list by salary using the OrderBy() method.
var result_set = Geeks.OrderBy(sal => sal.Emp_Salary);
Or we can also sort the list using the orderby clause of LINQ:
var result_set = from emp in Geeks orderby emp.Emp_Salary select emp;
3. Display the sorted list using foreach loop.
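At its core, step 2 is an ordinary stable, key-based ascending sort. As a cross-language illustration (a Python sketch using Example 2's data; it is added here and not part of the original C# article):

```python
# Employee records mirroring Example 2: (id, name, salary, department).
employees = [
    (201, "Sumit", 50000, "ABC"),
    (202, "Rohan", 65000, "DEF"),
    (203, "Mohit", 45000, "ABC"),
    (204, "Sonam", 20000, "DEF"),
    (205, "Shive", 70000, "DEF"),
]

# Equivalent of Geeks.OrderBy(sal => sal.Emp_Salary):
# sorted() performs a stable ascending sort on the given key.
result_set = sorted(employees, key=lambda emp: emp[2])

for emp_id, name, salary, dept in result_set:
    print(emp_id, name, salary, dept)
```

Running this prints the employees from lowest to highest salary (204 Sonam 20000 DEF first, 205 Shive 70000 DEF last), matching the article's Example 2 output.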
Example 1:
C#
// C# program to sort a list of employees// based on salary. Using OrderBy() methodusing System;using System.Linq;using System.Collections.Generic; class Geek{ int emp_id;string Emp_Name;int Emp_Salary;string Emp_Department; static void Main(string[] args){ // Geeks data List<Geek> Geeks = new List<Geek>() { new Geek{emp_id = 101, Emp_Name = "arjun", Emp_Salary = 50000, Emp_Department = "ABC"}, new Geek{emp_id = 102, Emp_Name = "bheem", Emp_Salary = 65000, Emp_Department = "DEF"}, new Geek{emp_id = 103, Emp_Name = "krishna", Emp_Salary = 45000, Emp_Department = "ABC"}, new Geek{emp_id = 104, Emp_Name = "Ram", Emp_Salary = 20000, Emp_Department = "DEF"}, new Geek{emp_id = 105, Emp_Name = "kiran", Emp_Salary = 70000, Emp_Department = "DEF"}, new Geek{emp_id = 106, Emp_Name = "karna", Emp_Salary = 50000, Emp_Department = "ABC"}, }; // We have sorted the data using OrderBy() command var result_set = Geeks.OrderBy(sal => sal.Emp_Salary); // Display the sorted result foreach (Geek emp in result_set) { Console.WriteLine(emp.emp_id + " " + emp.Emp_Name + " " + emp.Emp_Salary + " " + emp.Emp_Department); }}}
104 Ram 20000 DEF
103 krishna 45000 ABC
101 arjun 50000 ABC
106 karna 50000 ABC
102 bheem 65000 DEF
105 kiran 70000 DEF
Example 2:
C#
// C# program to sort a list of employees// based on salary. Using OrderBy() methodusing System;using System.Linq;using System.Collections.Generic; class Geek{ int emp_id;string Emp_Name;int Emp_Salary;string Emp_Department; static void Main(string[] args){ // Geeks data List<Geek> Geeks = new List<Geek>() { new Geek{emp_id = 201, Emp_Name = "Sumit", Emp_Salary = 50000, Emp_Department = "ABC"}, new Geek{emp_id = 202, Emp_Name = "Rohan", Emp_Salary = 65000, Emp_Department = "DEF"}, new Geek{emp_id = 203, Emp_Name = "Mohit", Emp_Salary = 45000, Emp_Department = "ABC"}, new Geek{emp_id = 204, Emp_Name = "Sonam", Emp_Salary = 20000, Emp_Department = "DEF"}, new Geek{emp_id = 205, Emp_Name = "Shive", Emp_Salary = 70000, Emp_Department = "DEF"}, }; // We have sorted the data using OrderBy linq clause var result_set = from emp in Geeks orderby emp.Emp_Salary select emp; // Display the sorted result foreach (Geek emp in result_set) { Console.WriteLine(emp.emp_id + " " + emp.Emp_Name + " " + emp.Emp_Salary + " " + emp.Emp_Department); }}}
Output
204 Sonam 20000 DEF
203 Mohit 45000 ABC
201 Sumit 50000 ABC
202 Rohan 65000 DEF
205 Shive 70000 DEF
CSharp LINQ
Picked
C#
C# Programs
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Extension Method in C#
HashSet in C# with Examples
C# | Inheritance
Partial Classes in C#
C# | Generics - Introduction
Convert String to Character Array in C#
Program to Print a New Line in C#
Getting a Month Name Using Month Number in C#
Socket Programming in C#
C# Program for Dijkstra's shortest path algorithm | Greedy Algo-7
|
[
{
"code": null,
"e": 25547,
"s": 25519,
"text": "\n06 Dec, 2021"
},
{
"code": null,
"e": 25780,
"s": 25547,
"text": "Given a list of employees, now we sort the list of employees according to their salary using LINQ. So to this task, we use the OrderBy() method. This method is used to sort the elements of the specified sequence in ascending order. "
},
{
"code": null,
"e": 25790,
"s": 25780,
"text": "Example: "
},
{
"code": null,
"e": 26307,
"s": 25790,
"text": "Input: \n{id = 101, Name = Rohit, Salary = 50000, Department = HR}\n{id = 104, Name = Rohit, Salary = 10000, Department = Development}\n{id = 106, Name = Rohit, Salary = 80000, Department = HR}\n{id = 108, Name = Rohit, Salary = 20000, Department = Development}\nOutput:\n{id = 104, Name = Rohit, Salary = 10000, Department = Development}\n{id = 108, Name = Rohit, Salary = 20000, Department = Development} \n{id = 101, Name = Rohit, Salary = 50000, Department = HR}\n{id = 106, Name = Rohit, Salary = 80000, Department = HR}"
},
{
"code": null,
"e": 26317,
"s": 26307,
"text": "Approach:"
},
{
"code": null,
"e": 26398,
"s": 26317,
"text": "1. Create a list of employees along with their id, name, salary, and department."
},
{
"code": null,
"e": 26484,
"s": 26398,
"text": "2. Now sort the employee’s list according to their salary using the OrderBy() method."
},
{
"code": null,
"e": 26539,
"s": 26484,
"text": "var result_set = Geeks.OrderBy(sal => sal.Emp_Salary);"
},
{
"code": null,
"e": 26607,
"s": 26539,
"text": "Or we can also sort the list using the order OrderBy clause of LINQ"
},
{
"code": null,
"e": 26677,
"s": 26607,
"text": "var result_set = from emp in Geeks orderby emp.Emp_Salary select emp;"
},
{
"code": null,
"e": 26724,
"s": 26677,
"text": "3. Display the sorted list using foreach loop."
},
{
"code": null,
"e": 26736,
"s": 26724,
"text": "Example 1: "
},
{
"code": null,
"e": 26739,
"s": 26736,
"text": "C#"
},
{
"code": "// C# program to sort a list of employees// based on salary. Using OrderBy() methodusing System;using System.Linq;using System.Collections.Generic; class Geek{ int emp_id;string Emp_Name;int Emp_Salary;string Emp_Department; static void Main(string[] args){ // Geeks data List<Geek> Geeks = new List<Geek>() { new Geek{emp_id = 101, Emp_Name = \"arjun\", Emp_Salary = 50000, Emp_Department = \"ABC\"}, new Geek{emp_id = 102, Emp_Name = \"bheem\", Emp_Salary = 65000, Emp_Department = \"DEF\"}, new Geek{emp_id = 103, Emp_Name = \"krishna\", Emp_Salary = 45000, Emp_Department = \"ABC\"}, new Geek{emp_id = 104, Emp_Name = \"Ram\", Emp_Salary = 20000, Emp_Department = \"DEF\"}, new Geek{emp_id = 105, Emp_Name = \"kiran\", Emp_Salary = 70000, Emp_Department = \"DEF\"}, new Geek{emp_id = 106, Emp_Name = \"karna\", Emp_Salary = 50000, Emp_Department = \"ABC\"}, }; // We have sorted the data using OrderBy() command var result_set = Geeks.OrderBy(sal => sal.Emp_Salary); // Display the sorted result foreach (Geek emp in result_set) { Console.WriteLine(emp.emp_id + \" \" + emp.Emp_Name + \" \" + emp.Emp_Salary + \" \" + emp.Emp_Department); }}}",
"e": 28138,
"s": 26739,
"text": null
},
{
"code": null,
"e": 28264,
"s": 28138,
"text": "104 Ram 20000 DEF\n103 krishna 45000 ABC\n101 arjun 50000 ABC\n106 karna 50000 ABC\n102 bheem 65000 DEF\n105 kiran 70000 DEF"
},
{
"code": null,
"e": 28276,
"s": 28264,
"text": "Example 2: "
},
{
"code": null,
"e": 28279,
"s": 28276,
"text": "C#"
},
{
"code": "// C# program to sort a list of employees// based on salary. Using OrderBy() methodusing System;using System.Linq;using System.Collections.Generic; class Geek{ int emp_id;string Emp_Name;int Emp_Salary;string Emp_Department; static void Main(string[] args){ // Geeks data List<Geek> Geeks = new List<Geek>() { new Geek{emp_id = 201, Emp_Name = \"Sumit\", Emp_Salary = 50000, Emp_Department = \"ABC\"}, new Geek{emp_id = 202, Emp_Name = \"Rohan\", Emp_Salary = 65000, Emp_Department = \"DEF\"}, new Geek{emp_id = 203, Emp_Name = \"Mohit\", Emp_Salary = 45000, Emp_Department = \"ABC\"}, new Geek{emp_id = 204, Emp_Name = \"Sonam\", Emp_Salary = 20000, Emp_Department = \"DEF\"}, new Geek{emp_id = 205, Emp_Name = \"Shive\", Emp_Salary = 70000, Emp_Department = \"DEF\"}, }; // We have sorted the data using OrderBy linq clause var result_set = from emp in Geeks orderby emp.Emp_Salary select emp; // Display the sorted result foreach (Geek emp in result_set) { Console.WriteLine(emp.emp_id + \" \" + emp.Emp_Name + \" \" + emp.Emp_Salary + \" \" + emp.Emp_Department); }}}",
"e": 29583,
"s": 28279,
"text": null
},
{
"code": null,
"e": 29590,
"s": 29583,
"text": "Output"
},
{
"code": null,
"e": 29690,
"s": 29590,
"text": "204 Sonam 20000 DEF\n203 Mohit 45000 ABC\n201 Sumit 50000 ABC\n202 Rohan 65000 DEF\n205 Shive 70000 DEF"
},
{
"code": null,
"e": 29704,
"s": 29692,
"text": "CSharp LINQ"
},
{
"code": null,
"e": 29711,
"s": 29704,
"text": "Picked"
},
{
"code": null,
"e": 29714,
"s": 29711,
"text": "C#"
},
{
"code": null,
"e": 29726,
"s": 29714,
"text": "C# Programs"
},
{
"code": null,
"e": 29824,
"s": 29726,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29847,
"s": 29824,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 29875,
"s": 29847,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 29892,
"s": 29875,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 29914,
"s": 29892,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 29943,
"s": 29914,
"text": "C# | Generics - Introduction"
},
{
"code": null,
"e": 29983,
"s": 29943,
"text": "Convert String to Character Array in C#"
},
{
"code": null,
"e": 30017,
"s": 29983,
"text": "Program to Print a New Line in C#"
},
{
"code": null,
"e": 30063,
"s": 30017,
"text": "Getting a Month Name Using Month Number in C#"
},
{
"code": null,
"e": 30088,
"s": 30063,
"text": "Socket Programming in C#"
}
] |
Structured binding in C++ - GeeksforGeeks
|
07 May, 2018
Prerequisite : Tuples in C++
Structured binding is one of the newest features of C++17 that binds the specified names to subobjects or elements of initializer. In simple words, Structured Bindings give us the ability to declare multiple variables initialized from a tuple or struct. The main purpose of Structured Bindings in C++ 17 is to make the code clean and easy to understand. Like a reference, a structured binding is an alias to an existing object. Unlike a reference, the type of a structured binding does not have to be a reference type.
Syntax :
auto ref-operator(optional)[identifier-list] = expression;
// Or
auto ref-operator(optional)[identifier-list]{expression};
// Or
auto ref-operator(optional)[identifier-list](expression);
Parameters :
auto : the auto keyword, which deduces the types of the bound names.
ref operator : either & or &&
identifier-list : List of comma separated variable names.
expression : An expression that does not have the comma operator at the top level (i.e, an assignment-expression), and has either array or non-union class type.
Let E denote the type of the initializer expression. E shall be either a specialization of std::tuple, or a type whose non-static data members are all accessible and are declared in the same base class of E. A structured binding declaration performs the binding in one of three possible ways, depending on E.
Case 1 : if E is an array type, then the names are bound to the array elements.
Case 2 : if E is a non-union class type and tuple_size is a complete type, then the “tuple-like” binding protocol is used.
Case 3 : if E is a non-union class type but tuple_size is not a complete type, then the names are bound to the public data members of E.
Let us see the advantage of structured bindings over tuples with the help of an example.
Example 1 : In C++98
#include <bits/stdc++.h>
using namespace std;

// Creating a structure named Point
struct Point {
    int x;
    int y;
};

// Driver code
int main()
{
    Point p = { 1, 2 };
    int x_coord = p.x;
    int y_coord = p.y;
    cout << "X Coordinate : " << x_coord << endl;
    cout << "Y Coordinate : " << y_coord << endl;
    return 0;
}
Output :
X Coordinate : 1
Y Coordinate : 2
Example 2 : In C++11/C++14
#include <bits/stdc++.h>
#include <tuple>
using namespace std;

// Creating a structure named Point
struct Point {
    int x, y;

    // Default Constructor
    Point() : x(0), y(0) { }

    // Parameterized Constructor for Init List
    Point(int x, int y) : x(x), y(y) { }

    auto operator()()
    {
        // returns a tuple to make it work with std::tie
        return make_tuple(x, y);
    }
};

// Driver code
int main()
{
    Point p = { 1, 2 };
    int x_coord, y_coord;
    tie(x_coord, y_coord) = p();
    cout << "X Coordinate : " << x_coord << endl;
    cout << "Y Coordinate : " << y_coord << endl;
    return 0;
}
Output :
X Coordinate : 1
Y Coordinate : 2
Example 3 : In C++17
#include <bits/stdc++.h>
using namespace std;

struct Point {
    int x;
    int y;
};

// Driver code
int main()
{
    Point p = { 1, 2 };

    // Structure binding
    auto [x_coord, y_coord] = p;
    cout << "X Coordinate : " << x_coord << endl;
    cout << "Y Coordinate : " << y_coord << endl;
    return 0;
}
Output :
X Coordinate : 1
Y Coordinate : 2
Applications : Structured Binding can be used with arrays to get the elements from the array. In this case, E is an array type, hence the names are bound to the array elements. Below is the implementation to show the same :
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int arr[3] = { 1, 2, 3 };

    // Here, E is an array type, hence the
    // names are bound to the array elements.
    auto [x, y, z] = arr;
    cout << x << " " << y << " " << z << endl;
    return 0;
}
Output :
1 2 3
Note : The number of identifiers in the identifier list must be equal to the number of elements in the array. If the number of identifiers in the identifier list is less, then either a compile time error or design time error may occur. This means that we cannot take the specific set of elements from the array.
A more practical example for using the structured bindings is as follows :
#include <bits/stdc++.h>
#include <map>
using namespace std;

int main()
{
    // Creating a map with key and value
    // fields as String
    map<string, string> sites;

    sites.insert({ "GeeksforGeeks", "Coding Resources" });
    sites.insert({ "StackOverflow", "Q-A type" });
    sites.insert({ "Wikipedia", "Resources + References" });

    for (auto& [key, value] : sites) {
        cout << key.c_str() << " "
             << value.c_str() << endl;
    }

    return 0;
}
Output :
GeeksforGeeks Coding Resources
StackOverflow Q-A type
Wikipedia Resources + References
Note : Qualifiers such as const and volatile can be used along with type to make the declaration variable constant or volatile.
For more details on Structured bindings, you may refer : P0144R0
cpp-advanced
cpp-structure
C++
CPP
Operator Overloading in C++
Polymorphism in C++
Sorting a vector in C++
Friend class and function in C++
Pair in C++ Standard Template Library (STL)
std::string class in C++
Queue in C++ Standard Template Library (STL)
Array of Strings in C++ (5 Different Ways to Create)
Inline Functions in C++
Convert string to char array in C++
|
[
{
"code": null,
"e": 25367,
"s": 25339,
"text": "\n07 May, 2018"
},
{
"code": null,
"e": 25396,
"s": 25367,
"text": "Prerequisite : Tuples in C++"
},
{
"code": null,
"e": 25915,
"s": 25396,
"text": "Structured binding is one of the newest features of C++17 that binds the specified names to subobjects or elements of initializer. In simple words, Structured Bindings give us the ability to declare multiple variables initialized from a tuple or struct. The main purpose of Structured Bindings in C++ 17 is to make the code clean and easy to understand. Like a reference, a structured binding is an alias to an existing object. Unlike a reference, the type of a structured binding does not have to be a reference type."
},
{
"code": null,
"e": 25924,
"s": 25915,
"text": "Syntax :"
},
{
"code": null,
"e": 26116,
"s": 25924,
"text": "auto ref-operator(optional)[identifier-list] = expression;\n\n// Or\n\nauto ref-operator(optional)[identifier-list]{expression};\n\n// Or\n\nauto ref-operator(optional)[identifier-list](expression);\n"
},
{
"code": null,
"e": 26129,
"s": 26116,
"text": "Parameters :"
},
{
"code": null,
"e": 26141,
"s": 26129,
"text": "auto : auto"
},
{
"code": null,
"e": 26171,
"s": 26141,
"text": "ref operator : either & or &&"
},
{
"code": null,
"e": 26229,
"s": 26171,
"text": "identifier-list : List of comma separated variable names."
},
{
"code": null,
"e": 26390,
"s": 26229,
"text": "expression : An expression that does not have the comma operator at the top level (i.e, an assignment-expression), and has either array or non-union class type."
},
{
"code": null,
"e": 26699,
"s": 26390,
"text": "Let E denote the type of the initializer expression. E shall be either a specialization of std::tuple, or a type whose non-static data members are all accessible and are declared in the same base class of E. A structured binding declaration performs the binding in one of three possible ways, depending on E."
},
{
"code": null,
"e": 26779,
"s": 26699,
"text": "Case 1 : if E is an array type, then the names are bound to the array elements."
},
{
"code": null,
"e": 26902,
"s": 26779,
"text": "Case 2 : if E is a non-union class type and tuple_size is a complete type, then the “tuple-like” binding protocol is used."
},
{
"code": null,
"e": 27039,
"s": 26902,
"text": "Case 3 : if E is a non-union class type but tuple_size is not a complete type, then the names are bound to the public data members of E."
},
{
"code": null,
"e": 27148,
"s": 27039,
"text": "Let us see the advantage of Structure bindings over tuples with the help of an example :Example 1 : In C++98"
},
{
"code": "#include <bits/stdc++.h>using namespace std; // Creating a structure named Pointstruct Point { int x; int y;}; // Driver codeint main(){ Point p = {1, 2}; int x_coord = p.x; int y_coord = p.y; cout << \"X Coordinate : \" << x_coord << endl; cout << \"Y Coordinate : \" << y_coord << endl; return 0;}",
"e": 27484,
"s": 27148,
"text": null
},
{
"code": null,
"e": 27493,
"s": 27484,
"text": "Output :"
},
{
"code": null,
"e": 27528,
"s": 27493,
"text": "X Coordinate : 1\nY Coordinate : 2\n"
},
{
"code": null,
"e": 27555,
"s": 27528,
"text": "Example 2 : In C++11/C++14"
},
{
"code": "#include <bits/stdc++.h>#include <tuple>using namespace std; // Creating a structure named Pointstruct Point{ int x, y; // Default Constructor Point() : x(0), y(0) { } // Parameterized Constructor for Init List Point(int x, int y) : x(x), y(y) { } auto operator()() { // returns a tuple to make it work with std::tie return make_tuple(x, y); }}; // Driver codeint main(){ Point p = {1, 2}; int x_coord, y_coord; tie(x_coord, y_coord) = p(); cout << \"X Coordinate : \" << x_coord << endl; cout << \"Y Coordinate : \" << y_coord << endl; return 0;}",
"e": 28216,
"s": 27555,
"text": null
},
{
"code": null,
"e": 28225,
"s": 28216,
"text": "Output :"
},
{
"code": null,
"e": 28260,
"s": 28225,
"text": "X Coordinate : 1\nY Coordinate : 2\n"
},
{
"code": null,
"e": 28281,
"s": 28260,
"text": "Example 3 : In C++17"
},
{
"code": "#include <bits/stdc++.h>using namespace std; struct Point{ int x; int y;}; // Driver codeint main( ){ Point p = { 1,2 }; // Structure binding auto[ x_coord, y_coord ] = p; cout << \"X Coordinate : \" << x_coord << endl; cout << \"Y Coordinate : \" << y_coord << endl; return 0;}",
"e": 28600,
"s": 28281,
"text": null
},
{
"code": null,
"e": 28609,
"s": 28600,
"text": "Output :"
},
{
"code": null,
"e": 28644,
"s": 28609,
"text": "X Coordinate : 1\nY Coordinate : 2\n"
},
{
"code": null,
"e": 28868,
"s": 28644,
"text": "Applications : Structured Binding can be used with arrays to get the elements from the array. In this case, E is an array type, hence the names are bound to the array elements. Below is the implementation to show the same :"
},
{
"code": "#include <bits/stdc++.h>using namespace std; int main(){ int arr[3] = { 1, 2, 3 }; // Here, E is an array type, hence the // names are bound to the array elements. auto[x, y, z] = arr; cout << x << \" \" << y << \" \" << z << endl; return 0;}",
"e": 29143,
"s": 28868,
"text": null
},
{
"code": null,
"e": 29152,
"s": 29143,
"text": "Output :"
},
{
"code": null,
"e": 29159,
"s": 29152,
"text": "1 2 3\n"
},
{
"code": null,
"e": 29471,
"s": 29159,
"text": "Note : The number of identifiers in the identifier list must be equal to the number of elements in the array. If the number of identifiers in the identifier list is less, then either a compile time error or design time error may occur. This means that we cannot take the specific set of elements from the array."
},
{
"code": null,
"e": 29546,
"s": 29471,
"text": "A more practical example for using the structured bindings is as follows :"
},
{
"code": "#include <bits/stdc++.h>#include <map>using namespace std; int main(){ // Creating a map with key and value // fields as String map<string, string> sites; sites.insert({ \"GeeksforGeeks\", \"Coding Resources\" }); sites.insert({ \"StackOverflow\", \"Q-A type\" }); sites.insert({ \"Wikipedia\", \"Resources + References\" }); for (auto & [ key, value ] : sites) { cout << key.c_str() << \" \" << value.c_str() << endl; } return 0;}",
"e": 30017,
"s": 29546,
"text": null
},
{
"code": null,
"e": 30026,
"s": 30017,
"text": "Output :"
},
{
"code": null,
"e": 30114,
"s": 30026,
"text": "GeeksforGeeks Coding Resources\nStackOverflow Q-A type\nWikipedia Resources + References\n"
},
{
"code": null,
"e": 30242,
"s": 30114,
"text": "Note : Qualifiers such as const and volatile can be used along with type to make the declaration variable constant or volatile."
},
{
"code": null,
"e": 30307,
"s": 30242,
"text": "For more details on Structured bindings, you may refer : P0144R0"
},
{
"code": null,
"e": 30320,
"s": 30307,
"text": "cpp-advanced"
},
{
"code": null,
"e": 30334,
"s": 30320,
"text": "cpp-structure"
},
{
"code": null,
"e": 30338,
"s": 30334,
"text": "C++"
},
{
"code": null,
"e": 30342,
"s": 30338,
"text": "CPP"
},
{
"code": null,
"e": 30440,
"s": 30342,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30468,
"s": 30440,
"text": "Operator Overloading in C++"
},
{
"code": null,
"e": 30488,
"s": 30468,
"text": "Polymorphism in C++"
},
{
"code": null,
"e": 30512,
"s": 30488,
"text": "Sorting a vector in C++"
},
{
"code": null,
"e": 30545,
"s": 30512,
"text": "Friend class and function in C++"
},
{
"code": null,
"e": 30589,
"s": 30545,
"text": "Pair in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 30614,
"s": 30589,
"text": "std::string class in C++"
},
{
"code": null,
"e": 30659,
"s": 30614,
"text": "Queue in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 30712,
"s": 30659,
"text": "Array of Strings in C++ (5 Different Ways to Create)"
},
{
"code": null,
"e": 30736,
"s": 30712,
"text": "Inline Functions in C++"
}
] |
std::inserter in C++ - GeeksforGeeks
|
27 Jul, 2017
std::inserter constructs an insert iterator that inserts new elements into x in successive locations starting at the position pointed by it. It is defined inside the header file <iterator>.
An insert iterator is a special type of output iterator designed to allow algorithms that usually overwrite elements (such as copy) to instead insert new elements automatically at a specific position in the container.
Syntax:
std::inserter(Container& x, typename Container::iterator it);
x: Container in which new elements will
be inserted.
it: Iterator pointing to the insertion point.
Returns: An insert_iterator that inserts elements into
x at the position indicated by it.
// C++ program to demonstrate std::inserter
#include <iostream>
#include <iterator>
#include <deque>
#include <algorithm>
using namespace std;

int main()
{
    // Declaring first container
    deque<int> v1 = { 1, 2, 3 };

    // Declaring second container for
    // copying values
    deque<int> v2 = { 4, 5, 6 };

    deque<int>::iterator i1;
    i1 = v2.begin() + 1;
    // i1 points to next element of 4 in v2

    // Using std::inserter inside std::copy
    std::copy(v1.begin(), v1.end(), std::inserter(v2, i1));
    // v2 now contains 4 1 2 3 5 6

    // Displaying v1 and v2
    cout << "v1 = ";
    int i;
    for (i = 0; i < 3; ++i) {
        cout << v1[i] << " ";
    }
    cout << "\nv2 = ";
    for (i = 0; i < 6; ++i) {
        cout << v2[i] << " ";
    }
    return 0;
}
Output:
v1 = 1 2 3
v2 = 4 1 2 3 5 6
How is it helpful?
Inserting values anywhere : Now, just imagine, if we had to copy values into a container such as a vector, firstly we had to move elements and then copy, but with the help of std::inserter() we can insert at any position with ease.
// C++ program to demonstrate std::inserter
#include <iostream>
#include <iterator>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    // Declaring first container
    vector<int> v1 = { 1, 2, 3, 7, 8, 9 };

    // Declaring second container
    vector<int> v2 = { 4, 5, 6 };

    vector<int>::iterator i1;
    i1 = v2.begin() + 2;
    // i1 points to next element of 5 in v2

    // Using std::inserter inside std::copy
    std::copy(v1.begin(), v1.end(), std::inserter(v2, i1));
    // v2 now contains 4 5 1 2 3 7 8 9 6

    // Displaying v1 and v2
    cout << "v1 = ";
    int i;
    for (i = 0; i < 6; ++i) {
        cout << v1[i] << " ";
    }
    cout << "\nv2 = ";
    for (i = 0; i < 9; ++i) {
        cout << v2[i] << " ";
    }
    return 0;
}
Output:
v1 = 1 2 3 7 8 9
v2 = 4 5 1 2 3 7 8 9 6
Explanation: Here, we started copying v1 into v2 but not from the beginning, but after the second position of v2, i.e., after 5, so all the elements of v1 were inserted after 5, and before 6. In this way, we inserted value where we wanted quite easily.
Points to Remember:
One of the pitfalls of std::inserter is that it can be used with only those containers that have insert as one of its methods like in case of vector, list and deque, and so on.
insert() vs std::inserter(): Now, you may be thinking that insert() and std::inserter() are similar, but they are not. When you have to pass an iterator in the algorithm, then you should use inserter() like in above case, while for normally inserting the values in the container, insert() can be used.
In place of using std::inserter, we can create an insert_iterator and then use it, as eventually, std::inserter returns an insert_iterator only.
// C++ program to demonstrate insert_iterator
#include <iostream>
#include <iterator>
#include <deque>
#include <algorithm>
using namespace std;

int main()
{
    // Declaring first container
    deque<int> v1 = { 1, 2, 3 };

    // Declaring second container for
    // copying values
    deque<int> v2 = { 4, 5, 6 };

    deque<int>::iterator ii;
    ii = v2.begin() + 1;
    // ii points after 4 in v2

    // Declaring a insert_iterator
    std::insert_iterator<std::deque<int> > i1(v2, ii);

    // Using the iterator in the copy()
    std::copy(v1.begin(), v1.end(), i1);
    // v2 now contains 4 1 2 3 5 6

    // Displaying v1 and v2
    cout << "v1 = ";
    int i;
    for (i = 0; i < 3; ++i) {
        cout << v1[i] << " ";
    }
    cout << "\nv2 = ";
    for (i = 0; i < 6; ++i) {
        cout << v2[i] << " ";
    }
    return 0;
}
Output:
v1 = 1 2 3
v2 = 4 1 2 3 5 6
This article is contributed by Mrigendra Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
cpp-iterator
STL
C++
STL
CPP
C++ Classes and Objects
Virtual Function in C++
Templates in C++ with Examples
Constructors in C++
Operator Overloading in C++
Socket Programming in C/C++
vector erase() and clear() in C++
Substring in C++
Copy Constructor in C++
Polymorphism in C++
|
[
{
"code": null,
"e": 25627,
"s": 25599,
"text": "\n27 Jul, 2017"
},
{
"code": null,
"e": 25807,
"s": 25627,
"text": "std::inserter constructs an insert iterator that inserts new elements into x in successive locations starting at the position pointed by it. It is defined inside the header file ."
},
{
"code": null,
"e": 26032,
"s": 25807,
"text": "An insert iterator is a special type of output iterator designed to allow algorithms that usually overwrite elements (such as copy) to instead insert new elements automatically at a specific position in the container.Syntax:"
},
{
"code": null,
"e": 26287,
"s": 26032,
"text": "std::inserter(Container& x, typename Container::iterator it);\nx: Container in which new elements will \nbe inserted.\nit: Iterator pointing to the insertion point.\n\nReturns: An insert_iterator that inserts elements into \nx at the position indicated by it.\n"
},
{
"code": "// C++ program to demonstrate std::inserter#include <iostream>#include <iterator>#include <deque>#include <algorithm>using namespace std;int main(){ // Declaring first container deque<int> v1 = { 1, 2, 3 }; // Declaring second container for // copying values deque<int> v2 = { 4, 5, 6 }; deque<int>::iterator i1; i1 = v2.begin() + 1; // i1 points to next element of 4 in v2 // Using std::inserter inside std::copy std::copy(v1.begin(), v1.end(), std::inserter(v2, i1)); // v2 now contains 4 1 2 3 5 6 // Displaying v1 and v2 cout << \"v1 = \"; int i; for (i = 0; i < 3; ++i) { cout << v1[i] << \" \"; } cout << \"\\nv2 = \"; for (i = 0; i < 6; ++i) { cout << v2[i] << \" \"; } return 0;}",
"e": 27054,
"s": 26287,
"text": null
},
{
"code": null,
"e": 27062,
"s": 27054,
"text": "Output:"
},
{
"code": null,
"e": 27092,
"s": 27062,
"text": "v1 = 1 2 3\nv2 = 4 1 2 3 5 6 \n"
},
{
"code": null,
"e": 27112,
"s": 27092,
"text": "How is it helpful ?"
},
{
"code": null,
"e": 28401,
"s": 27112,
"text": "Inserting values anywhere : Now, just imagine, if we had to copy value into a container such as a vector, firstly, we had to move elements and then copy, but with the help of std::insert() we can insert at any position with ease.// C++ program to demonstrate std::inserter#include <iostream>#include <iterator>#include <vector>#include <algorithm>using namespace std;int main(){ // Declaring first container vector<int> v1 = { 1, 2, 3, 7, 8, 9 }; // Declaring second container vector<int> v2 = { 4, 5, 6 }; vector<int>::iterator i1; i1 = v2.begin() + 2; // i1 points to next element of 5 in v2 // Using std::inserter inside std::copy std::copy(v1.begin(), v1.end(), std::inserter(v2, i1)); // v2 now contains 4 5 1 2 3 7 8 9 6 // Displaying v1 and v2 cout << \"v1 = \"; int i; for (i = 0; i < 6; ++i) { cout << v1[i] << \" \"; } cout << \"\\nv2 = \"; for (i = 0; i < 9; ++i) { cout << v2[i] << \" \"; } return 0;}Output:v1 = 1 2 3 7 8 9\nv2 = 4 5 1 2 3 7 8 9 6\nExplanation: Here, we started copying v1 into v2 but not from the beginning, but after the second position of v2, i.e., after 5, so all the elements of v1 were inserted after 5, and before 6. In this way, we inserted value where we wanted quite easily."
},
{
"code": "// C++ program to demonstrate std::inserter#include <iostream>#include <iterator>#include <vector>#include <algorithm>using namespace std;int main(){ // Declaring first container vector<int> v1 = { 1, 2, 3, 7, 8, 9 }; // Declaring second container vector<int> v2 = { 4, 5, 6 }; vector<int>::iterator i1; i1 = v2.begin() + 2; // i1 points to next element of 5 in v2 // Using std::inserter inside std::copy std::copy(v1.begin(), v1.end(), std::inserter(v2, i1)); // v2 now contains 4 5 1 2 3 7 8 9 6 // Displaying v1 and v2 cout << \"v1 = \"; int i; for (i = 0; i < 6; ++i) { cout << v1[i] << \" \"; } cout << \"\\nv2 = \"; for (i = 0; i < 9; ++i) { cout << v2[i] << \" \"; } return 0;}",
"e": 29162,
"s": 28401,
"text": null
},
{
"code": null,
"e": 29170,
"s": 29162,
"text": "Output:"
},
{
"code": null,
"e": 29211,
"s": 29170,
"text": "v1 = 1 2 3 7 8 9\nv2 = 4 5 1 2 3 7 8 9 6\n"
},
{
"code": null,
"e": 29464,
"s": 29211,
"text": "Explanation: Here, we started copying v1 into v2 but not from the beginning, but after the second position of v2, i.e., after 5, so all the elements of v1 were inserted after 5, and before 6. In this way, we inserted value where we wanted quite easily."
},
{
"code": null,
"e": 29484,
"s": 29464,
"text": "Points to Remember:"
},
{
"code": null,
"e": 30961,
"s": 29484,
"text": "One of the pitfalls of std::inserter is that it can be used with only those containers that have insert as one of its methods like in case of vector, list and deque, and so on.insert() vs std::inserter(): Now, you may be thinking that insert() and std::inserter() are similar, but they are not. When you have to pass an iterator in the algorithm, then you should use inserter() like in above case, while for normally inserting the values in the container, insert() can be used.In place of using std::inserter, we can create a insert_iterator and then use it, as eventually, std::inserter returns a insert_iterator only.// C++ program to demonstrate insert_iterator#include <iostream>#include <iterator>#include <deque>#include <algorithm>using namespace std;int main(){ // Declaring first container deque<int> v1 = { 1, 2, 3 }; // Declaring second container for // copying values deque<int> v2 = { 4, 5, 6 }; deque<int>::iterator ii; ii = v2.begin() + 1; // ii points after 4 in v2 // Declaring a insert_iterator std::insert_iterator<std::deque<int> > i1(v2, ii); // Using the iterator in the copy() std::copy(v1.begin(), v1.end(), i1); // v2 now contains 4 1 2 3 5 6 // Displaying v1 and v2 cout << \"v1 = \"; int i; for (i = 0; i < 3; ++i) { cout << v1[i] << \" \"; } cout << \"\\nv2 = \"; for (i = 0; i < 6; ++i) { cout << v2[i] << \" \"; } return 0;}Output:v1 = 1 2 3\nv2 = 4 1 2 3 5 6\n"
},
{
"code": null,
"e": 31138,
"s": 30961,
"text": "One of the pitfalls of std::inserter is that it can be used with only those containers that have insert as one of its methods like in case of vector, list and deque, and so on."
},
{
"code": null,
"e": 31440,
"s": 31138,
"text": "insert() vs std::inserter(): Now, you may be thinking that insert() and std::inserter() are similar, but they are not. When you have to pass an iterator in the algorithm, then you should use inserter() like in above case, while for normally inserting the values in the container, insert() can be used."
},
{
"code": null,
"e": 32440,
"s": 31440,
"text": "In place of using std::inserter, we can create a insert_iterator and then use it, as eventually, std::inserter returns a insert_iterator only.// C++ program to demonstrate insert_iterator#include <iostream>#include <iterator>#include <deque>#include <algorithm>using namespace std;int main(){ // Declaring first container deque<int> v1 = { 1, 2, 3 }; // Declaring second container for // copying values deque<int> v2 = { 4, 5, 6 }; deque<int>::iterator ii; ii = v2.begin() + 1; // ii points after 4 in v2 // Declaring a insert_iterator std::insert_iterator<std::deque<int> > i1(v2, ii); // Using the iterator in the copy() std::copy(v1.begin(), v1.end(), i1); // v2 now contains 4 1 2 3 5 6 // Displaying v1 and v2 cout << \"v1 = \"; int i; for (i = 0; i < 3; ++i) { cout << v1[i] << \" \"; } cout << \"\\nv2 = \"; for (i = 0; i < 6; ++i) { cout << v2[i] << \" \"; } return 0;}Output:v1 = 1 2 3\nv2 = 4 1 2 3 5 6\n"
},
{
"code": "// C++ program to demonstrate insert_iterator#include <iostream>#include <iterator>#include <deque>#include <algorithm>using namespace std;int main(){ // Declaring first container deque<int> v1 = { 1, 2, 3 }; // Declaring second container for // copying values deque<int> v2 = { 4, 5, 6 }; deque<int>::iterator ii; ii = v2.begin() + 1; // ii points after 4 in v2 // Declaring a insert_iterator std::insert_iterator<std::deque<int> > i1(v2, ii); // Using the iterator in the copy() std::copy(v1.begin(), v1.end(), i1); // v2 now contains 4 1 2 3 5 6 // Displaying v1 and v2 cout << \"v1 = \"; int i; for (i = 0; i < 3; ++i) { cout << v1[i] << \" \"; } cout << \"\\nv2 = \"; for (i = 0; i < 6; ++i) { cout << v2[i] << \" \"; } return 0;}",
"e": 33263,
"s": 32440,
"text": null
},
{
"code": null,
"e": 33271,
"s": 33263,
"text": "Output:"
},
{
"code": null,
"e": 33300,
"s": 33271,
"text": "v1 = 1 2 3\nv2 = 4 1 2 3 5 6\n"
},
{
"code": null,
"e": 33603,
"s": 33300,
"text": "This article is contributed by Mrigendra Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 33728,
"s": 33603,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
Mahout - Quick Guide
We are living in a day and age where information is available in abundance. The information overload has scaled to such heights that sometimes it becomes difficult to manage our little mailboxes! Imagine the volume of data and records some of the popular websites (the likes of Facebook, Twitter, and Youtube) have to collect and manage on a daily basis. It is not uncommon even for lesser known websites to receive huge amounts of information in bulk.
Normally we fall back on data mining algorithms to analyze bulk data to identify trends
and draw conclusions. However, no data mining algorithm can be efficient enough to process very large datasets and provide outcomes in quick time, unless the computational tasks are run on multiple machines distributed over the cloud.
We now have new frameworks that allow us to break down a computation task into multiple segments and run each segment on a different machine. Mahout is such a data mining framework that normally runs coupled with the Hadoop infrastructure at its background to manage huge volumes of data.
A mahout is one who drives an elephant as its master. The name comes from its close association with Apache Hadoop which uses an elephant as its logo.
Hadoop is an open-source framework from Apache that allows to store and process big data in a distributed environment across clusters of computers using simple programming models.
Apache Mahout is an open source project that is primarily used for creating scalable machine learning algorithms. It implements popular machine learning techniques such as:
Recommendation
Classification
Clustering
Apache Mahout started as a sub-project of Apache’s Lucene in 2008. In 2010, Mahout became a top level project of Apache.
The primitive features of Apache Mahout are listed below.
The algorithms of Mahout are written on top of Hadoop, so it works well in a distributed environment. Mahout uses the Apache Hadoop library to scale effectively in the cloud.
Mahout offers the coder a ready-to-use framework for doing data mining tasks on large volumes of data.
Mahout lets applications analyze large sets of data effectively and in quick time.
Includes several MapReduce enabled clustering implementations such as k-means, fuzzy k-means, Canopy, Dirichlet, and Mean-Shift.
Supports Distributed Naive Bayes and Complementary Naive Bayes classification implementations.
Comes with distributed fitness function capabilities for evolutionary programming.
Includes matrix and vector libraries.
Companies such as Adobe, Facebook, LinkedIn, Foursquare, Twitter, and Yahoo use Mahout internally.
Foursquare helps you in finding out places, food, and entertainment available in a particular area. It uses the recommender engine of Mahout.
Twitter uses Mahout for user interest modelling.
Yahoo! uses Mahout for pattern mining.
Apache Mahout is a highly scalable machine learning library that enables developers
to use optimized algorithms. Mahout implements popular machine learning techniques such as recommendation, classification, and clustering. Therefore, it is prudent to have a brief section on machine learning before we move further.
Machine learning is a branch of science that deals with programming the systems in such a way that they automatically learn and improve with experience. Here, learning means recognizing and understanding the input data and making wise decisions based on the supplied data.
It is very difficult to cater to all the decisions based on all possible inputs. To tackle this problem, algorithms are developed. These algorithms build knowledge from specific data and past experience with the principles of statistics, probability theory, logic, combinatorial optimization, search, reinforcement learning, and control theory.
The developed algorithms form the basis of various applications such as:
Vision processing
Language processing
Forecasting (e.g., stock market trends)
Pattern recognition
Games
Data mining
Expert systems
Robotics
Machine learning is a vast area and it is quite beyond the scope of this tutorial to cover all its features. There are several ways to implement machine learning techniques, however the most commonly used ones are supervised and unsupervised learning.
Supervised learning deals with learning a function from available training data. A
supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. Common examples of supervised learning include:
classifying e-mails as spam,
labeling webpages based on their content, and
voice recognition.
There are many supervised learning algorithms such as neural networks, Support Vector Machines (SVMs), and Naive Bayes classifiers. Mahout implements Naive Bayes classifier.
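Mahout's Naive Bayes runs distributed over Hadoop, but the underlying idea can be sketched in a few lines of plain Java. The following toy spam classifier is an illustration only, not Mahout's implementation; the training sentences and the vocabulary size of 1000 used for add-one smoothing are arbitrary assumptions. It counts word frequencies per class from labeled examples and classifies new text by comparing smoothed log-probabilities.

```java
import java.util.HashMap;
import java.util.Map;

public class TinyNaiveBayes {
    // Per-class word counts; Laplace (add-one) smoothing avoids zero probabilities
    private final Map<String, Integer> spamCounts = new HashMap<>();
    private final Map<String, Integer> hamCounts = new HashMap<>();
    private int spamTotal = 0, hamTotal = 0;

    // Learn word frequencies from one labeled training text
    public void train(String text, boolean spam) {
        for (String w : text.toLowerCase().split("\\s+")) {
            Map<String, Integer> m = spam ? spamCounts : hamCounts;
            m.merge(w, 1, Integer::sum);
            if (spam) spamTotal++; else hamTotal++;
        }
    }

    // log P(word | class) with add-one smoothing over an assumed vocabulary of 1000
    private double logProb(String w, Map<String, Integer> counts, int total) {
        return Math.log((counts.getOrDefault(w, 0) + 1.0) / (total + 1000.0));
    }

    // Classify by summing per-word log-probabilities for each class
    public boolean isSpam(String text) {
        double spamScore = 0, hamScore = 0;
        for (String w : text.toLowerCase().split("\\s+")) {
            spamScore += logProb(w, spamCounts, spamTotal);
            hamScore += logProb(w, hamCounts, hamTotal);
        }
        return spamScore > hamScore;
    }

    public static void main(String[] args) {
        TinyNaiveBayes nb = new TinyNaiveBayes();
        nb.train("win money now free prize", true);
        nb.train("meeting agenda for project review", false);
        System.out.println(nb.isSpam("free money prize"));      // true
        System.out.println(nb.isSpam("project meeting notes")); // false
    }
}
```

Real implementations train on thousands of labeled mails; the mechanics, however, are exactly this counting-and-scoring loop.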
Unsupervised learning makes sense of unlabeled data without having any predefined dataset for its training. Unsupervised learning is an extremely powerful tool for analyzing available data and look for patterns and trends. It is most commonly used for clustering similar input into logical groups. Common approaches to unsupervised learning include:
k-means
self-organizing maps, and
hierarchical clustering
Recommendation is a popular technique that provides close recommendations based on user information such as previous purchases, clicks, and ratings.
Amazon uses this technique to display a list of recommended items that you might be interested in, drawing information from your past actions. There are recommender engines that work behind Amazon to capture user behavior and recommend selected items based on your earlier actions.
Facebook uses the recommender technique to identify and recommend the “people you may know list”.
Classification, also known as categorization, is a machine learning technique that uses known data to determine how the new data should be classified into a set of existing categories. Classification is a form of supervised learning.
Mail service providers such as Yahoo! and Gmail use this technique to decide whether a new mail should be classified as a spam. The categorization algorithm trains itself by analyzing user habits of marking certain mails as spams. Based on that, the classifier decides whether a future mail should be deposited in your inbox or in the spams folder.
iTunes application uses classification to prepare playlists.
Clustering is used to form groups or clusters of similar data based on common characteristics. Clustering is a form of unsupervised learning.
Search engines such as Google and Yahoo! use clustering techniques to group data with similar characteristics.
Newsgroups use clustering techniques to group various articles based on related topics.
The clustering engine goes through the input data completely and based on the characteristics of the data, it will decide under which cluster it should be grouped. Take a look at the following example.
Our library of tutorials contains topics on various subjects. When we receive a new tutorial at TutorialsPoint, it gets processed by a clustering engine that decides, based on its content, where it should be grouped.
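To make the grouping idea concrete, here is a minimal one-dimensional k-means sketch in plain Java. This is an illustration only: Mahout's k-means runs as MapReduce jobs over high-dimensional vectors, and the sample points and starting centroids below are made up.

```java
import java.util.Arrays;

public class TinyKMeans {
    // One-dimensional k-means: repeatedly assign each point to its nearest
    // centroid, then move each centroid to the mean of its assigned points.
    public static double[] cluster(double[] points, double[] centroids, int iterations) {
        double[] c = centroids.clone();
        for (int it = 0; it < iterations; it++) {
            double[] sum = new double[c.length];
            int[] count = new int[c.length];
            for (double p : points) {
                int best = 0;
                for (int j = 1; j < c.length; j++)
                    if (Math.abs(p - c[j]) < Math.abs(p - c[best])) best = j;
                sum[best] += p;
                count[best]++;
            }
            // A centroid with no assigned points keeps its position
            for (int j = 0; j < c.length; j++)
                if (count[j] > 0) c[j] = sum[j] / count[j];
        }
        return c;
    }

    public static void main(String[] args) {
        double[] points = {1.0, 1.2, 0.8, 9.0, 9.5, 10.0};
        double[] result = cluster(points, new double[]{0.0, 5.0}, 10);
        System.out.println(Arrays.toString(result)); // [1.0, 9.5]
    }
}
```

The two centroids converge to roughly 1.0 and 9.5, the centers of the two natural groups in the data; the fixed iteration count stands in for the stopping condition discussed later.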
This chapter teaches you how to setup mahout. Java and Hadoop are the prerequisites
of mahout. Below given are the steps to download and install Java, Hadoop, and Mahout.
Before installing Hadoop into Linux environment, we need to set up Linux using ssh (Secure Shell). Follow the steps mentioned below for setting up the Linux environment.
It is recommended to create a separate user for Hadoop to isolate the Hadoop file
system from the Unix file system. Follow the steps given below to create a user:
Open root using the command “su”.
Create a user from the root account using the command “useradd username”.
Now you can open an existing user account using the command “su username”.
Open the Linux terminal and type the following commands to create a user.
$ su
password:
# useradd hadoop
# passwd hadoop
New passwd:
Retype new passwd
SSH setup is required to perform different operations on a cluster such as starting, stopping, and distributed daemon shell operations. To authenticate different users of
Hadoop, it is required to provide public/private key pair for a Hadoop user and share
it with different users.
The following commands are used to generate a key value pair using SSH, copy the public keys form id_rsa.pub to authorized_keys, and provide owner, read and write permissions to authorized_keys file respectively.
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
ssh localhost
Java is the main prerequisite for Hadoop and HBase. First of all, you should verify the
existence of Java in your system using “java -version”. The syntax of Java version command is given below.
$ java -version
It should produce the following output.
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b13)
Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)
If you don’t have Java installed in your system, then follow the steps given below for
installing Java.
Step 1
Download java (JDK <latest version> - X64.tar.gz) by visiting the following link:
Oracle
Then jdk-7u71-linux-x64.tar.gz is downloaded onto your system.
Step 2
Generally, you find the downloaded Java file in the Downloads folder. Verify it and
extract the jdk-7u71-linux-x64.gz file using the following commands.
$ cd Downloads/
$ ls
jdk-7u71-linux-x64.gz
$ tar zxf jdk-7u71-linux-x64.gz
$ ls
jdk1.7.0_71 jdk-7u71-linux-x64.gz
Step 3
To make Java available to all the users, you need to move it to the location “/usr/local/”. Open root, and type the following commands.
$ su
password:
# mv jdk1.7.0_71 /usr/local/
# exit
Step 4
For setting up PATH and JAVA_HOME variables, add the following commands to ~/.bashrc file.
export JAVA_HOME=/usr/local/jdk1.7.0_71
export PATH= $PATH:$JAVA_HOME/bin
Now, verify the java -version command from terminal as explained above.
After installing Java, you need to install Hadoop initially. Verify the existence of Hadoop using “Hadoop version” command as shown below.
hadoop version
It should produce the following output:
Hadoop 2.6.0
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /home/hadoop/hadoop/share/hadoop/common/hadoopcommon-2.6.0.jar
If your system is unable to locate Hadoop, then download Hadoop and have it installed
on your system. Follow the commands given below to do so.
Download and extract hadoop-2.6.0 from apache software foundation using the
following commands.
$ su
password:
# cd /usr/local
# wget http://mirrors.advancedhosters.com/apache/hadoop/common/hadoop-
2.6.0/hadoop-2.6.0-src.tar.gz
# tar xzf hadoop-2.6.0-src.tar.gz
# mv hadoop-2.6.0/* hadoop/
# exit
Install Hadoop in any of the required modes. Here, we are demonstrating HBase functionalities in pseudo-distributed mode, therefore install Hadoop in pseudo-distributed
mode.
Follow the steps given below to install Hadoop 2.4.1 on your system.
You can set Hadoop environment variables by appending the following commands to ~/.bashrc file.
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
Now, apply all changes into the currently running system.
$ source ~/.bashrc
You can find all the Hadoop configuration files at the location “$HADOOP_HOME/etc/hadoop”. It is required to make changes in those configuration files according to your Hadoop infrastructure.
$ cd $HADOOP_HOME/etc/hadoop
In order to develop Hadoop programs in Java, you need to reset the Java environment
variables in hadoop-env.sh file by replacing JAVA_HOME value with the location of Java in your system.
export JAVA_HOME=/usr/local/jdk1.7.0_71
Given below are the list of files which you have to edit to configure Hadoop.
core-site.xml
The core-site.xml file contains information such as the port number used for Hadoop instance, memory allocated for file system, memory limit for storing data, and the
size of Read/Write buffers.
Open core-site.xml and add the following property in between the <configuration>,
</configuration> tags:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
The hdfs-site.xml file contains information such as the value of replication data, namenode path, and datanode paths of your local file systems. It means the place where you want to store the Hadoop infrastructure.
Let us assume the following data:
dfs.replication (data replication value) = 1
(In the below given path /hadoop/ is the user name.
hadoopinfra/hdfs/namenode is the directory created by hdfs file system.)
namenode path = //home/hadoop/hadoopinfra/hdfs/namenode
(hadoopinfra/hdfs/datanode is the directory created by hdfs file system.)
datanode path = //home/hadoop/hadoopinfra/hdfs/datanode
Open this file and add the following properties in between the <configuration>,
</configuration> tags in this file.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
</configuration>
Note: In the above file, all the property values are user defined. You can make
changes according to your Hadoop infrastructure.
yarn-site.xml
This file is used to configure yarn into Hadoop. Open the yarn-site.xml file and add the
following property in between the <configuration>, </configuration> tags in this file.
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
mapred-site.xml
This file is used to specify which MapReduce framework we are using. By default, Hadoop contains a template of mapred-site.xml. First of all, it is required to copy the file from mapred-site.xml.template to mapred-site.xml file using the following command.
$ cp mapred-site.xml.template mapred-site.xml
Open mapred-site.xml file and add the following properties in between the
<configuration>, </configuration> tags in this file.
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
The following steps are used to verify the Hadoop installation.
Set up the namenode using the command “hdfs namenode -format” as follows:
$ cd ~
$ hdfs namenode -format
The expected result is as follows:
10/24/14 21:30:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/192.168.1.11
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.1
...
...
10/24/14 21:30:56 INFO common.Storage: Storage directory
/home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.
10/24/14 21:30:56 INFO namenode.NNStorageRetentionManager: Going to retain
1 images with txid >= 0
10/24/14 21:30:56 INFO util.ExitUtil: Exiting with status 0
10/24/14 21:30:56 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/192.168.1.11
************************************************************/
The following command is used to start dfs. This command starts your Hadoop file system.
$ start-dfs.sh
The expected output is as follows:
10/24/14 21:37:56
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-
2.4.1/logs/hadoop-hadoop-namenode-localhost.out
localhost: starting datanode, logging to /home/hadoop/hadoop-
2.4.1/logs/hadoop-hadoop-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
The following command is used to start yarn script. Executing this command will start your yarn demons.
$ start-yarn.sh
The expected output is as follows:
starting yarn daemons
starting resource manager, logging to /home/hadoop/hadoop-2.4.1/logs/yarn-
hadoop-resourcemanager-localhost.out
localhost: starting node manager, logging to /home/hadoop/hadoop-
2.4.1/logs/yarn-hadoop-nodemanager-localhost.out
The default port number to access hadoop is 50070. Use the following URL to get
Hadoop services on your browser.
http://localhost:50070/
The default port number to access all application of cluster is 8088. Use the following
URL to visit this service.
http://localhost:8088/
Mahout is available on the Apache Mahout website. Download Mahout from the link provided there.
Download Apache mahout from the link
http://mirror.nexcess.net/apache/mahout/ using the following command.
[Hadoop@localhost ~]$ wget
http://mirror.nexcess.net/apache/mahout/0.9/mahout-distribution-0.9.tar.gz
Then mahout-distribution-0.9.tar.gz will be downloaded in your system.
Browse through the folder where mahout-distribution-0.9.tar.gz is stored and
extract the downloaded jar file as shown below.
[Hadoop@localhost ~]$ tar zxvf mahout-distribution-0.9.tar.gz
Given below is the pom.xml to build Apache Mahout using Eclipse.
<dependency>
<groupId>org.apache.mahout</groupId>
<artifactId>mahout-core</artifactId>
<version>0.9</version>
</dependency>
<dependency>
<groupId>org.apache.mahout</groupId>
<artifactId>mahout-math</artifactId>
<version>${mahout.version}</version>
</dependency>
<dependency>
<groupId>org.apache.mahout</groupId>
<artifactId>mahout-integration</artifactId>
<version>${mahout.version}</version>
</dependency>
This chapter covers the popular machine learning technique called recommendation, its mechanisms, and how to write an application implementing Mahout recommendation.
Ever wondered how Amazon comes up with a list of recommended items to draw your attention to a particular product that you might be interested in!
Suppose you want to purchase the book “Mahout in Action” from Amazon:
Along with the selected product, Amazon also displays a list of related recommended
items, as shown below.
Such recommendation lists are produced with the help of recommender engines.
Mahout provides recommender engines of several types such as:
user-based recommenders,
item-based recommenders, and
several other algorithms.
Mahout has a non-distributed, non-Hadoop-based recommender engine. You should pass a text document having user preferences for items. And the output of this engine would be the estimated preferences of a particular user for other items.
Consider a website that sells consumer goods such as mobiles, gadgets, and their accessories. If we want to implement the features of Mahout in such a site, then we
can build a recommender engine. This engine analyzes past purchase data of the users
and recommends new products based on that.
The components provided by Mahout to build a recommender engine are as follows:
DataModel
UserSimilarity
ItemSimilarity
UserNeighborhood
Recommender
From the data store, the data model is prepared and is passed as an input to the recommender engine. The Recommender engine generates the recommendations for a particular user. Given below is the architecture of recommender engine.
Here are the steps to develop a simple recommender:
The constructor of PearsonCorrelationSimilarity class requires a data model
object, which holds a file that contains the Users, Items, and Preferences details of a
product. Here is the sample data model file:
1,00,1.0
1,01,2.0
1,02,5.0
1,03,5.0
1,04,5.0
2,00,1.0
2,01,2.0
2,05,5.0
2,06,4.5
2,02,5.0
3,01,2.5
3,02,5.0
3,03,4.0
3,04,3.0
4,00,5.0
4,01,5.0
4,02,5.0
4,03,0.0
The DataModel object requires the file object, which contains the path of the input file. Create the DataModel object as shown below.
DataModel datamodel = new FileDataModel(new File("input file"));
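Conceptually, such a file is just user,item,preference triples. A plain-Java sketch of reading those triples into memory could look like the following (hypothetical and simplified; Mahout's FileDataModel does considerably more, such as ID mapping and refresh handling):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PreferenceFileSketch {
    // Parse "user,item,preference" lines into user -> (item -> preference)
    public static Map<Long, Map<Long, Double>> parse(List<String> lines) {
        Map<Long, Map<Long, Double>> prefs = new HashMap<>();
        for (String line : lines) {
            String[] parts = line.split(",");
            long user = Long.parseLong(parts[0].trim());
            long item = Long.parseLong(parts[1].trim());
            double pref = Double.parseDouble(parts[2].trim());
            prefs.computeIfAbsent(user, u -> new HashMap<>()).put(item, pref);
        }
        return prefs;
    }

    public static void main(String[] args) {
        // A few lines from the sample data model file above
        List<String> lines = Arrays.asList("1,00,1.0", "1,01,2.0", "2,05,5.0");
        Map<Long, Map<Long, Double>> prefs = parse(lines);
        System.out.println(prefs.get(1L).get(1L)); // 2.0 (item "01" parses as 1)
    }
}
```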
Create UserSimilarity object using PearsonCorrelationSimilarity class as shown below:
UserSimilarity similarity = new PearsonCorrelationSimilarity(datamodel);
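Behind this call, Pearson correlation is computed over the items both users have rated. The computation itself can be sketched in plain Java as follows (illustrative only, not Mahout's optimized implementation; the zero returns for empty overlap and zero variance are simplifying assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PearsonSketch {
    // Pearson correlation over the items both users have rated
    public static double pearson(Map<Long, Double> a, Map<Long, Double> b) {
        List<Long> common = new ArrayList<>();
        for (Long item : a.keySet())
            if (b.containsKey(item)) common.add(item);
        int n = common.size();
        if (n == 0) return 0.0; // no overlap: treat as no correlation
        double sumA = 0, sumB = 0, sumAA = 0, sumBB = 0, sumAB = 0;
        for (Long item : common) {
            double x = a.get(item), y = b.get(item);
            sumA += x; sumB += y;
            sumAA += x * x; sumBB += y * y; sumAB += x * y;
        }
        double num = sumAB - sumA * sumB / n;
        double den = Math.sqrt((sumAA - sumA * sumA / n) * (sumBB - sumB * sumB / n));
        return den == 0 ? 0.0 : num / den;
    }

    public static void main(String[] args) {
        Map<Long, Double> u1 = new HashMap<>(), u2 = new HashMap<>();
        u1.put(0L, 1.0); u1.put(1L, 2.0); u1.put(2L, 5.0);
        u2.put(0L, 1.0); u2.put(1L, 2.0); u2.put(2L, 5.0);
        System.out.println(pearson(u1, u2)); // identical ratings -> 1.0
    }
}
```

Users who rate items in the same pattern score near +1, opposite patterns score near -1, and this value drives the neighborhood computation below.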
This object computes a "neighborhood" of users like a given user. There are two types
of neighborhoods:
NearestNUserNeighborhood - This class computes a neighborhood consisting of the nearest n users to a given user. "Nearest" is defined by the given UserSimilarity.
ThresholdUserNeighborhood - This class computes a neighborhood consisting of all the users whose similarity to the given user meets or exceeds a certain threshold. Similarity is defined by the given UserSimilarity.
Here we are using ThresholdUserNeighborhood and set the limit of preference to
3.0.
UserNeighborhood neighborhood = new ThresholdUserNeighborhood(3.0, similarity, model);
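The threshold idea itself is simple: keep every user whose similarity to the target user is at or above the cut-off. A plain-Java sketch (hypothetical, with made-up similarity values; not Mahout's ThresholdUserNeighborhood):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ThresholdNeighborhoodSketch {
    // Keep only the users whose similarity to the target meets the threshold
    public static List<Long> neighborhood(Map<Long, Double> similarities, double threshold) {
        List<Long> neighbors = new ArrayList<>();
        for (Map.Entry<Long, Double> e : similarities.entrySet())
            if (e.getValue() >= threshold) neighbors.add(e.getKey());
        Collections.sort(neighbors); // deterministic order for display
        return neighbors;
    }

    public static void main(String[] args) {
        // Made-up similarity scores from some target user to users 1, 3, and 4
        Map<Long, Double> sims = new HashMap<>();
        sims.put(1L, 3.5); sims.put(3L, 2.0); sims.put(4L, 4.0);
        System.out.println(neighborhood(sims, 3.0)); // [1, 4]
    }
}
```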
Create a UserBasedRecommender object. Pass all the above created objects to its constructor as shown below.
UserBasedRecommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
Recommend products to a user using the recommend() method of the Recommender interface. This method requires two parameters. The first represents the user id of the user to whom we need to send the recommendations, and the second represents the number of recommendations to be sent. Here is the usage of the recommend() method:
List<RecommendedItem> recommendations = recommender.recommend(2, 3);
for (RecommendedItem recommendation : recommendations) {
System.out.println(recommendation);
}
Example Program
Given below is an example program to set recommendation. Prepare the recommendations for the user with user id 2.
import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.ThresholdUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.UserBasedRecommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;
public class Recommender {
public static void main(String args[]){
try{
//Creating data model
DataModel datamodel = new FileDataModel(new File("data")); //data
//Creating UserSimilarity object.
UserSimilarity usersimilarity = new PearsonCorrelationSimilarity(datamodel);
      //Creating UserNeighborhood object.
UserNeighborhood userneighborhood = new ThresholdUserNeighborhood(3.0, usersimilarity, datamodel);
//Create UserRecomender
UserBasedRecommender recommender = new GenericUserBasedRecommender(datamodel, userneighborhood, usersimilarity);
List<RecommendedItem> recommendations = recommender.recommend(2, 3);
for (RecommendedItem recommendation : recommendations) {
System.out.println(recommendation);
}
}catch(Exception e){}
}
}
Compile and run the program using the following commands:
javac Recommender.java
java Recommender
It should produce the following output:
RecommendedItem [item:3, value:4.5]
RecommendedItem [item:4, value:4.0]
Clustering is the procedure to organize elements or items of a given collection into
groups based on the similarity between the items. For example, the applications related to online news publishing group their news articles using clustering.
Clustering is broadly used in many applications such as market research, pattern recognition, data analysis, and image processing.
Clustering can help marketers discover distinct groups in their customer base. And they can characterize their customer groups based on purchasing patterns.
In the field of biology, it can be used to derive plant and animal taxonomies, categorize genes with similar functionality and gain insight into structures inherent in populations.
Clustering helps in identification of areas of similar land use in an earth observation database.
Clustering also helps in classifying documents on the web for information discovery.
Clustering is used in outlier detection applications such as detection of credit card fraud.
As a data mining function, Cluster Analysis serves as a tool to gain insight into the distribution of data to observe characteristics of each cluster.
Using Mahout, we can cluster a given set of data. The steps required are as follows:
Algorithm You need to select a suitable clustering algorithm to group the
elements of a cluster.
Similarity and Dissimilarity You need to have a rule in place to verify the
similarity between the newly encountered elements and the elements in the groups.
Stopping Condition A stopping condition is required to define the point where no clustering is required.
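In practice, the similarity rule is a distance measure over the element vectors. Below is a minimal plain-Java sketch of two common measures, Euclidean distance and cosine similarity — illustrative math only, not Mahout's DistanceMeasure API:

```java
// Minimal sketch of two similarity measures commonly used in clustering.
// Illustrative plain Java, not Mahout's DistanceMeasure API.
public class SimilaritySketch {

    // Straight-line distance between two points: smaller means more similar.
    public static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Cosine of the angle between two vectors: closer to 1 means more similar.
    public static double cosine(double[] a, double[] b) {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        double[] p = {0.0, 3.0};
        double[] q = {4.0, 0.0};
        System.out.println(euclidean(p, q));                              // 5.0
        System.out.println(cosine(new double[]{1, 1}, new double[]{2, 2})); // ~1.0 (same direction)
    }
}
```

Which measure is "required" depends on the data: Euclidean distance suits dense numeric features, while cosine similarity is common for sparse text vectors.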
To cluster the given data, you need to:
Start the Hadoop server. Create required directories for storing files in Hadoop File System. (Create directories for input file, sequence file, and clustered output in case of canopy).
Copy the input file to the Hadoop File system from Unix file system.
Prepare the sequence file from the input data.
Run any of the available clustering algorithms.
Get the clustered data.
Mahout works with Hadoop, hence make sure that the Hadoop server is up and running.
$ cd $HADOOP_HOME/bin
$ start-all.sh
Create directories in the Hadoop file system to store the input file, sequence files, and clustered data using the following command:
$ hadoop fs -mkdir -p /mahout_data
$ hadoop fs -mkdir -p /clustered_data
$ hadoop fs -mkdir -p /mahout_seq
You can verify whether the directories were created using the Hadoop web interface at the
following URL - http://localhost:50070/
The web interface lists the directories present in the Hadoop file system.
Now, copy the input data file from the Linux file system to the mahout_data directory in
the Hadoop File System as shown below. Assume your input file is mydata.txt and it is in the /home/Hadoop/data/ directory.
$ hadoop fs -put /home/Hadoop/data/mydata.txt /mahout_data/
Mahout provides a utility to convert the given input file into sequence file
format. This utility requires two parameters.
The input file directory where the original data resides.
The output file directory where the clustered data is to be stored.
Given below is the help prompt of the mahout seqdirectory utility.
Step 1: Browse to the Mahout home directory. You can get help for the utility as shown below:
[Hadoop@localhost bin]$ ./mahout seqdirectory --help
Job-Specific Options:
--input (-i) input Path to job input directory.
--output (-o) output The directory pathname for output.
--overwrite (-ow) If present, overwrite the output directory
Step 2: Generate the sequence file using the following syntax:
mahout seqdirectory -i <input file path> -o <output directory>
Example
mahout seqdirectory
-i hdfs://localhost:9000/mahout_data/
-o hdfs://localhost:9000/mahout_seq/
Mahout supports two main algorithms for clustering namely:
Canopy clustering
K-means clustering
Canopy clustering is a simple and fast technique used by Mahout for clustering purposes. The objects are treated as points in a plain space. This technique is often
used as an initial step in other clustering techniques such as k-means clustering. You
can run a Canopy job using the following syntax:
mahout canopy -i <input vectors directory>
-o <output directory>
-t1 <threshold value 1>
-t2 <threshold value 2>
Canopy job requires an input file directory with the sequence file and an output
directory where the clustered data is to be stored.
Example
mahout canopy -i hdfs://localhost:9000/mahout_seq/mydata.seq
-o hdfs://localhost:9000/clustered_data
-t1 30
-t2 20
You will get the clustered data generated in the given output directory.
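To illustrate what the canopy job computes, here is a hedged single-machine sketch of the canopy idea in plain Java on 1-D points. It is not Mahout's distributed implementation; by the usual convention the loose threshold t1 is larger than the tight threshold t2.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Single-machine sketch of canopy clustering on 1-D points.
// Illustrative only -- Mahout runs this as a distributed MapReduce job.
public class CanopySketch {

    // Maps each chosen canopy center to the points within t1 of it.
    // t1 (loose) > t2 (tight): points within t1 join the canopy; points
    // within t2 are also removed from consideration as future centers.
    public static Map<Double, List<Double>> canopies(List<Double> points, double t1, double t2) {
        List<Double> remaining = new ArrayList<>(points);
        Map<Double, List<Double>> result = new LinkedHashMap<>();
        while (!remaining.isEmpty()) {
            double center = remaining.get(0);          // pick an arbitrary point as center
            List<Double> members = new ArrayList<>();
            for (double p : points) {
                if (Math.abs(p - center) < t1) members.add(p);   // loosely bound: joins canopy
            }
            result.put(center, members);
            // Strongly bound points (within t2) cannot start a new canopy.
            remaining.removeIf(p -> Math.abs(p - center) < t2);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Double, List<Double>> c = canopies(List.of(1.0, 2.0, 3.0, 50.0, 51.0), 30.0, 20.0);
        System.out.println(c.keySet()); // [1.0, 50.0] -- two well-separated canopy centers
    }
}
```

Because canopy assignment only needs one cheap distance check per point, it is often used to seed the slower k-means step that follows.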
K-means clustering is an important clustering algorithm. The k in the k-means clustering
algorithm represents the number of clusters the data is to be divided into. For
example, if the k value specified to this algorithm is 3, the algorithm will divide
the data into 3 clusters.
Each object is represented as a vector in space. Initially, the algorithm chooses k points at random and treats them as centers; every object is assigned to the center closest to it.
Several distance-measure algorithms are available, and the user should choose the required one.
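The assignment-and-update loop at the heart of k-means can be sketched in plain Java on 1-D points, with a fixed iteration count as the stopping condition. This is illustrative only; Mahout's k-means runs as a distributed job over vector files:

```java
// Plain-Java sketch of the k-means loop on 1-D points.
// Illustrative only; Mahout's k-means is a distributed MapReduce job.
public class KMeansSketch {

    // Runs k-means and returns the final cluster centers.
    public static double[] kmeans(double[] points, double[] initialCenters, int iterations) {
        double[] centers = initialCenters.clone();
        for (int it = 0; it < iterations; it++) {           // stopping condition: fixed iterations
            double[] sum = new double[centers.length];
            int[] count = new int[centers.length];
            for (double p : points) {                        // assignment step
                int best = 0;
                for (int c = 1; c < centers.length; c++) {
                    if (Math.abs(p - centers[c]) < Math.abs(p - centers[best])) best = c;
                }
                sum[best] += p;
                count[best]++;
            }
            for (int c = 0; c < centers.length; c++) {       // update step: move center to mean
                if (count[c] > 0) centers[c] = sum[c] / count[c];
            }
        }
        return centers;
    }

    public static void main(String[] args) {
        double[] points = {1, 2, 3, 10, 11, 12};
        double[] centers = kmeans(points, new double[]{0, 5}, 10);
        System.out.println(centers[0] + " " + centers[1]); // 2.0 11.0
    }
}
```

The two centers converge to the means of the two obvious groups; in Mahout the same loop runs as one MapReduce pass per iteration.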
Creating Vector Files
Unlike the Canopy algorithm, the k-means algorithm requires vector files as input;
therefore, you have to create vector files.
To generate vector files from sequence file format, Mahout provides the
seq2sparse utility.
Given below are some of the options of the seq2sparse utility. Create vector files using these options.
$MAHOUT_HOME/bin/mahout seq2sparse
--analyzerName (-a) analyzerName The class name of the analyzer
--chunkSize (-chunk) chunkSize The chunkSize in MegaBytes.
--output (-o) output The directory pathname for o/p
--input (-i) input Path to job input directory.
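seq2sparse tokenizes the text and weights each term, typically with TF-IDF. A minimal sketch of the textbook tf × ln(N/df) formulation in plain Java follows; Mahout's exact weighting is configurable and may differ:

```java
import java.util.List;

// Minimal TF-IDF sketch: tf * ln(N / df), assuming the textbook formulation
// and that the queried term occurs somewhere in the corpus.
// Mahout's seq2sparse applies a similar (not necessarily identical) weighting.
public class TfIdfSketch {

    // Term frequency of `term` in one document (a list of tokens).
    public static long tf(List<String> doc, String term) {
        return doc.stream().filter(term::equals).count();
    }

    // Inverse document frequency across the corpus.
    public static double idf(List<List<String>> corpus, String term) {
        long df = corpus.stream().filter(d -> d.contains(term)).count();
        return Math.log((double) corpus.size() / df);
    }

    public static double tfidf(List<List<String>> corpus, List<String> doc, String term) {
        return tf(doc, term) * idf(corpus, term);
    }

    public static void main(String[] args) {
        List<List<String>> corpus = List.of(
            List.of("hadoop", "mahout", "clustering"),
            List.of("hadoop", "hdfs"),
            List.of("mahout", "classification"));
        // "hadoop" appears in 2 of 3 documents, so it is weighted lower than "hdfs".
        System.out.println(tfidf(corpus, corpus.get(1), "hadoop"));
        System.out.println(tfidf(corpus, corpus.get(1), "hdfs"));
    }
}
```

The effect is that terms common to most documents carry little weight, while distinctive terms dominate each vector — which is what makes the vectors useful for clustering.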
After creating vectors, proceed with k-means algorithm. The syntax to run k-means
job is as follows:
mahout kmeans -i <input vectors directory>
-c <input clusters directory>
-o <output working directory>
-dm <Distance Measure technique>
-x <maximum number of iterations>
-k <number of initial clusters>
The k-means clustering job requires an input vector directory, an output clusters directory,
a distance measure, the maximum number of iterations to be carried out, and an integer value representing the number of clusters the input data is to be divided into.
Classification is a machine learning technique that uses known data to determine how
the new data should be classified into a set of existing categories. For example,
iTunes application uses classification to prepare playlists.
Mail service providers such as Yahoo! and Gmail use this technique to decide whether a new mail should be classified as a spam. The categorization algorithm trains itself by analyzing user habits of marking certain mails as spams. Based on that, the classifier decides whether a future mail should be deposited in your inbox or in the spams folder.
While classifying a given set of data, the classifier system performs the following
actions:
Initially a new data model is prepared using any of the learning algorithms.
Then the prepared data model is tested.
Thereafter, this data model is used to evaluate the new data and to determine
its class.
Credit card fraud detection - The classification mechanism is used to predict credit card fraud. Using historical information about previous frauds, the classifier can predict which future transactions may turn into fraud.
Spam e-mails - Depending on the characteristics of previous spam mails, the
classifier determines whether a newly encountered e-mail should be sent to the
spam folder.
Mahout uses the Naive Bayes classifier algorithm. It uses two implementations:
Distributed Naive Bayes classification
Complementary Naive Bayes classification
Naive Bayes is a simple technique for constructing classifiers. It is not a single
algorithm for training such classifiers, but a family of algorithms. A Bayes classifier
constructs models to classify problem instances. These classifications are made using
the available data.
An advantage of naive Bayes is that it only requires a small amount of training data
to estimate the parameters necessary for classification.
For some types of probability models, naive Bayes classifiers can be trained very
efficiently in a supervised learning setting.
Despite its oversimplified assumptions, naive Bayes classifiers have worked quite well
in many complex real-world situations.
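The probability computation behind such a classifier can be illustrated with a toy spam filter in plain Java, using the textbook multinomial naive Bayes with add-one (Laplace) smoothing. This is a self-contained sketch, not Mahout's trainnb implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy multinomial naive Bayes with Laplace (add-one) smoothing.
// Textbook sketch only -- not Mahout's trainnb/testnb implementation.
public class NaiveBayesSketch {
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
    private final Map<String, Integer> docCounts = new HashMap<>();
    private final Set<String> vocabulary = new HashSet<>();
    private int totalDocs = 0;

    // Learn word frequencies from one labeled document.
    public void train(String label, List<String> words) {
        docCounts.merge(label, 1, Integer::sum);
        totalDocs++;
        Map<String, Integer> counts = wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String w : words) {
            counts.merge(w, 1, Integer::sum);
            vocabulary.add(w);
        }
    }

    // Return the most probable label for a new document.
    public String classify(List<String> words) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String label : docCounts.keySet()) {
            Map<String, Integer> counts = wordCounts.get(label);
            int totalWords = counts.values().stream().mapToInt(Integer::intValue).sum();
            // log P(label) + sum of log P(word | label), with add-one smoothing
            double score = Math.log((double) docCounts.get(label) / totalDocs);
            for (String w : words) {
                int c = counts.getOrDefault(w, 0);
                score += Math.log((c + 1.0) / (totalWords + vocabulary.size()));
            }
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }

    public static void main(String[] args) {
        NaiveBayesSketch nb = new NaiveBayesSketch();
        nb.train("spam", List.of("win", "money", "now"));
        nb.train("spam", List.of("free", "money"));
        nb.train("ham",  List.of("meeting", "tomorrow"));
        nb.train("ham",  List.of("project", "report", "tomorrow"));
        System.out.println(nb.classify(List.of("free", "money")));     // spam
        System.out.println(nb.classify(List.of("meeting", "report"))); // ham
    }
}
```

Smoothing is what lets the classifier score words it never saw during training; without it, a single unseen word would zero out the whole probability.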
The following steps are to be followed to implement Classification:
Generate example data
Create sequence files from data
Convert sequence files to vectors
Train the vectors
Test the vectors
Generate or download the data to be classified. For example, you can get the
20 newsgroups example data from the following link:
http://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz
Create a directory for storing input data. Download and extract the example as shown below.
$ mkdir classification_example
$ cd classification_example
$ wget http://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz
$ tar xzvf 20news-bydate.tar.gz
Create sequence file from the example using seqdirectory utility. The syntax to generate sequence is given below:
mahout seqdirectory -i <input file path> -o <output directory>
Create vector files from sequence files using the seq2sparse utility. The options of the
seq2sparse utility are given below:
$MAHOUT_HOME/bin/mahout seq2sparse
--analyzerName (-a) analyzerName The class name of the analyzer
--chunkSize (-chunk) chunkSize The chunkSize in MegaBytes.
--output (-o) output The directory pathname for o/p
--input (-i) input Path to job input directory.
Train the generated vectors using the trainnb utility. The options to use trainnb utility are given below:
mahout trainnb
-i ${PATH_TO_TFIDF_VECTORS}
-el
-o ${PATH_TO_MODEL}/model
-li ${PATH_TO_MODEL}/labelindex
-ow
-c
Test the vectors using testnb utility. The options to use testnb utility are given below:
mahout testnb
-i ${PATH_TO_TFIDF_TEST_VECTORS}
-m ${PATH_TO_MODEL}/model
-l ${PATH_TO_MODEL}/labelindex
-ow
-o ${PATH_TO_OUTPUT}
-c
-seq
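testnb summarizes the results as a confusion matrix and an accuracy figure. The underlying accuracy computation is simple and is sketched here in plain Java with hypothetical labels; testnb's actual output format differs:

```java
// Sketch of the accuracy computation that testnb reports.
// Labels here are hypothetical; testnb's real output format differs.
public class EvaluationSketch {

    // Fraction of predictions that match the true labels.
    public static double accuracy(String[] actual, String[] predicted) {
        int correct = 0;
        for (int i = 0; i < actual.length; i++) {
            if (actual[i].equals(predicted[i])) correct++;
        }
        return (double) correct / actual.length;
    }

    public static void main(String[] args) {
        String[] actual    = {"spam", "spam", "ham", "ham", "ham"};
        String[] predicted = {"spam", "ham",  "ham", "ham", "spam"};
        System.out.println(accuracy(actual, predicted)); // 0.6
    }
}
```

A confusion matrix extends this idea by counting, for every pair of labels, how often documents of one class were predicted as the other.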
},
{
"code": null,
"e": 18037,
"s": 18021,
"text": "mapred-site.xml"
},
{
"code": null,
"e": 18294,
"s": 18037,
"text": "This file is used to specify which MapReduce framework we are using. By default, Hadoop contains a template of mapred-site.xml. First of all, it is required to copy the file from mapred-site.xml.template to mapred-site.xml file using the following command."
},
{
"code": null,
"e": 18341,
"s": 18294,
"text": "$ cp mapred-site.xml.template mapred-site.xml\n"
},
{
"code": null,
"e": 18468,
"s": 18341,
"text": "Open mapred-site.xml file and add the following properties in between the\n<configuration>, </configuration> tags in this file."
},
{
"code": null,
"e": 18600,
"s": 18468,
"text": "<configuration>\n <property>\n <name>mapreduce.framework.name</name>\n <value>yarn</value>\n </property>\n</configuration>"
},
{
"code": null,
"e": 18664,
"s": 18600,
"text": "The following steps are used to verify the Hadoop installation."
},
{
"code": null,
"e": 18738,
"s": 18664,
"text": "Set up the namenode using the command “hdfs namenode -format” as follows:"
},
{
"code": null,
"e": 18770,
"s": 18738,
"text": "$ cd ~\n$ hdfs namenode -format\n"
},
{
"code": null,
"e": 18805,
"s": 18770,
"text": "The expected result is as follows:"
},
{
"code": null,
"e": 19595,
"s": 18805,
"text": "10/24/14 21:30:55 INFO namenode.NameNode: STARTUP_MSG:\n/************************************************************\nSTARTUP_MSG: Starting NameNode\nSTARTUP_MSG: host = localhost/192.168.1.11\nSTARTUP_MSG: args = [-format]\nSTARTUP_MSG: version = 2.4.1\n...\n...\n10/24/14 21:30:56 INFO common.Storage: Storage directory\n/home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.\n10/24/14 21:30:56 INFO namenode.NNStorageRetentionManager: Going to retain\n1 images with txid >= 0\n10/24/14 21:30:56 INFO util.ExitUtil: Exiting with status 0\n10/24/14 21:30:56 INFO namenode.NameNode: SHUTDOWN_MSG:\n/************************************************************\nSHUTDOWN_MSG: Shutting down NameNode at localhost/192.168.1.11\n************************************************************/\n"
},
{
"code": null,
"e": 19684,
"s": 19595,
"text": "The following command is used to start dfs. This command starts your Hadoop file system."
},
{
"code": null,
"e": 19700,
"s": 19684,
"text": "$ start-dfs.sh\n"
},
{
"code": null,
"e": 19735,
"s": 19700,
"text": "The expected output is as follows:"
},
{
"code": null,
"e": 20047,
"s": 19735,
"text": "10/24/14 21:37:56\nStarting namenodes on [localhost]\nlocalhost: starting namenode, logging to /home/hadoop/hadoop-\n2.4.1/logs/hadoop-hadoop-namenode-localhost.out\nlocalhost: starting datanode, logging to /home/hadoop/hadoop-\n2.4.1/logs/hadoop-hadoop-datanode-localhost.out\nStarting secondary namenodes [0.0.0.0]\n"
},
{
"code": null,
"e": 20151,
"s": 20047,
"text": "The following command is used to start yarn script. Executing this command will start your yarn demons."
},
{
"code": null,
"e": 20168,
"s": 20151,
"text": "$ start-yarn.sh\n"
},
{
"code": null,
"e": 20203,
"s": 20168,
"text": "The expected output is as follows:"
},
{
"code": null,
"e": 20453,
"s": 20203,
"text": "starting yarn daemons\nstarting resource manager, logging to /home/hadoop/hadoop-2.4.1/logs/yarn-\nhadoop-resourcemanager-localhost.out\nlocalhost: starting node manager, logging to /home/hadoop/hadoop-\n2.4.1/logs/yarn-hadoop-nodemanager-localhost.out\n"
},
{
"code": null,
"e": 20566,
"s": 20453,
"text": "The default port number to access hadoop is 50070. Use the following URL to get\nHadoop services on your browser."
},
{
"code": null,
"e": 20591,
"s": 20566,
"text": "http://localhost:50070/\n"
},
{
"code": null,
"e": 20706,
"s": 20591,
"text": "The default port number to access all application of cluster is 8088. Use the following\nURL to visit this service."
},
{
"code": null,
"e": 20730,
"s": 20706,
"text": "http://localhost:8088/\n"
},
{
"code": null,
"e": 20867,
"s": 20730,
"text": "Mahout is available in the website Mahout. Download Mahout from\nthe link provided in the website. Here is the screenshot of the website."
},
{
"code": null,
"e": 20975,
"s": 20867,
"text": "Download Apache mahout from the link \nhttp://mirror.nexcess.net/apache/mahout/ using the following command."
},
{
"code": null,
"e": 21078,
"s": 20975,
"text": "[Hadoop@localhost ~]$ wget\nhttp://mirror.nexcess.net/apache/mahout/0.9/mahout-distribution-0.9.tar.gz\n"
},
{
"code": null,
"e": 21149,
"s": 21078,
"text": "Then mahout-distribution-0.9.tar.gz will be downloaded in your system."
},
{
"code": null,
"e": 21274,
"s": 21149,
"text": "Browse through the folder where mahout-distribution-0.9.tar.gz is stored and\nextract the downloaded jar file as shown below."
},
{
"code": null,
"e": 21337,
"s": 21274,
"text": "[Hadoop@localhost ~]$ tar zxvf mahout-distribution-0.9.tar.gz\n"
},
{
"code": null,
"e": 21402,
"s": 21337,
"text": "Given below is the pom.xml to build Apache Mahout using Eclipse."
},
{
"code": null,
"e": 21838,
"s": 21402,
"text": "<dependency>\n <groupId>org.apache.mahout</groupId>\n <artifactId>mahout-core</artifactId>\n <version>0.9</version>\n</dependency>\n\n<dependency>\n <groupId>org.apache.mahout</groupId>\n <artifactId>mahout-math</artifactId>\n <version>${mahout.version}</version>\n</dependency>\n\n<dependency>\n <groupId>org.apache.mahout</groupId>\n <artifactId>mahout-integration</artifactId>\n <version>${mahout.version}</version>\n</dependency>"
},
{
"code": null,
"e": 22004,
"s": 21838,
"text": "This chapter covers the popular machine learning technique called recommendation, its mechanisms, and how to write an application implementing Mahout recommendation."
},
{
"code": null,
"e": 22151,
"s": 22004,
"text": "Ever wondered how Amazon comes up with a list of recommended items to draw your attention to a particular product that you might be interested in!"
},
{
"code": null,
"e": 22221,
"s": 22151,
"text": "Suppose you want to purchase the book “Mahout in Action” from Amazon:"
},
{
"code": null,
"e": 22328,
"s": 22221,
"text": "Along with the selected product, Amazon also displays a list of related recommended\nitems, as shown below."
},
{
"code": null,
"e": 22467,
"s": 22328,
"text": "Such recommendation lists are produced with the help of recommender engines.\nMahout provides recommender engines of several types such as:"
},
{
"code": null,
"e": 22492,
"s": 22467,
"text": "user-based recommenders,"
},
{
"code": null,
"e": 22522,
"s": 22492,
"text": "item-based recommenders, and "
},
{
"code": null,
"e": 22548,
"s": 22522,
"text": "several other algorithms."
},
{
"code": null,
"e": 22785,
"s": 22548,
"text": "Mahout has a non-distributed, non-Hadoop-based recommender engine. You should pass a text document having user preferences for items. And the output of this engine would be the estimated preferences of a particular user for other items."
},
{
"code": null,
"e": 23078,
"s": 22785,
"text": "Consider a website that sells consumer goods such as mobiles, gadgets, and their accessories. If we want to implement the features of Mahout in such a site, then we\ncan build a recommender engine. This engine analyzes past purchase data of the users\nand recommends new products based on that."
},
{
"code": null,
"e": 23158,
"s": 23078,
"text": "The components provided by Mahout to build a recommender engine are as follows:"
},
{
"code": null,
"e": 23168,
"s": 23158,
"text": "DataModel"
},
{
"code": null,
"e": 23183,
"s": 23168,
"text": "UserSimilarity"
},
{
"code": null,
"e": 23198,
"s": 23183,
"text": "ItemSimilarity"
},
{
"code": null,
"e": 23215,
"s": 23198,
"text": "UserNeighborhood"
},
{
"code": null,
"e": 23228,
"s": 23215,
"text": " Recommender"
},
{
"code": null,
"e": 23460,
"s": 23228,
"text": "From the data store, the data model is prepared and is passed as an input to the recommender engine. The Recommender engine generates the recommendations for a particular user. Given below is the architecture of recommender engine."
},
{
"code": null,
"e": 23512,
"s": 23460,
"text": "Here are the steps to develop a simple recommender:"
},
{
"code": null,
"e": 23721,
"s": 23512,
"text": "The constructor of PearsonCorrelationSimilarity class requires a data model\nobject, which holds a file that contains the Users, Items, and Preferences details of a\nproduct. Here is the sample data model file:"
},
{
"code": null,
"e": 23887,
"s": 23721,
"text": "1,00,1.0\n1,01,2.0\n1,02,5.0\n1,03,5.0\n1,04,5.0\n\n2,00,1.0\n2,01,2.0\n2,05,5.0\n2,06,4.5\n2,02,5.0\n\n3,01,2.5\n3,02,5.0\n3,03,4.0\n3,04,3.0\n\n4,00,5.0\n4,01,5.0\n4,02,5.0\n4,03,0.0\n"
},
{
"code": null,
"e": 24021,
"s": 23887,
"text": "The DataModel object requires the file object, which contains the path of the input file. Create the DataModel object as shown below."
},
{
"code": null,
"e": 24087,
"s": 24021,
"text": "DataModel datamodel = new FileDataModel(new File(\"input file\"));\n"
},
{
"code": null,
"e": 24173,
"s": 24087,
"text": "Create UserSimilarity object using PearsonCorrelationSimilarity class as shown below:"
},
{
"code": null,
"e": 24247,
"s": 24173,
"text": "UserSimilarity similarity = new PearsonCorrelationSimilarity(datamodel);\n"
},
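Under the hood, PearsonCorrelationSimilarity scores a pair of users by the Pearson correlation of their preferences on co-rated items. The following stand-alone sketch shows the measure itself (a hypothetical illustration with an invented class name and sample arrays, not Mahout's actual implementation):

```java
// Hypothetical sketch of the Pearson correlation measure behind
// PearsonCorrelationSimilarity; not Mahout's actual implementation.
public class PearsonSketch {

   // Pearson correlation of two equally long preference arrays.
   public static double pearson(double[] x, double[] y) {
      int n = x.length;
      double mx = 0, my = 0;
      for (int i = 0; i < n; i++) {
         mx += x[i];
         my += y[i];
      }
      mx /= n;
      my /= n;
      double num = 0, dx = 0, dy = 0;
      for (int i = 0; i < n; i++) {
         num += (x[i] - mx) * (y[i] - my);
         dx += (x[i] - mx) * (x[i] - mx);
         dy += (y[i] - my) * (y[i] - my);
      }
      return num / Math.sqrt(dx * dy);
   }

   public static void main(String[] args) {
      // Users 1 and 3 of the sample data model, restricted to the
      // items both of them rated (items 01, 02, 03).
      double[] user1 = {2.0, 5.0, 5.0};
      double[] user3 = {2.5, 5.0, 4.0};
      System.out.println(pearson(user1, user3));
   }
}
```

A correlation near 1.0 means the two users rate items alike; values near -1.0 mean opposite tastes.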
{
"code": null,
"e": 24351,
"s": 24247,
"text": "This object computes a \"neighborhood\" of users like a given user. There are two types\nof neighborhoods:"
},
{
"code": null,
"e": 24514,
"s": 24351,
"text": "NearestNUserNeighborhood - This class computes a neighborhood\nconsisting of the nearest n users to a given user. \"Nearest\" is defined by the\ngiven UserSimilarity."
},
{
"code": null,
"e": 24892,
"s": 24677,
"text": "ThresholdUserNeighborhood - This class computes a neighborhood\nconsisting of all the users whose similarity to the given user meets or exceeds\na certain threshold. Similarity is defined by the given UserSimilarity."
},
{
"code": null,
"e": 25191,
"s": 25107,
"text": "Here we are using ThresholdUserNeighborhood and set the limit of preference to\n3.0."
},
{
"code": null,
"e": 25279,
"s": 25191,
"text": "UserNeighborhood neighborhood = new ThresholdUserNeighborhood(3.0, similarity, model);\n"
},
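Conceptually, a threshold neighborhood is just a filter over similarity scores. A hypothetical sketch (class name and data invented; Mahout's real class works against a DataModel and a UserSimilarity):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a threshold neighborhood: keep every user whose
// similarity to the target user meets or exceeds the threshold.
public class ThresholdNeighborhoodSketch {

   public static List<Integer> neighborhood(double[] similarities, double threshold) {
      List<Integer> neighbors = new ArrayList<>();
      for (int user = 0; user < similarities.length; user++) {
         if (similarities[user] >= threshold) {
            neighbors.add(user);
         }
      }
      return neighbors;
   }

   public static void main(String[] args) {
      // Precomputed similarities of users 0..4 to the target user.
      double[] sims = {0.9, 0.2, 0.75, 0.5, 0.95};
      System.out.println(neighborhood(sims, 0.7)); // [0, 2, 4]
   }
}
```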
{
"code": null,
"e": 25384,
"s": 25279,
"text": "Create UserbasedRecomender object. Pass all the above created objects to its constructor as shown below."
},
{
"code": null,
"e": 25486,
"s": 25384,
"text": "UserBasedRecommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);\n"
},
{
"code": null,
"e": 25808,
"s": 25486,
"text": "Recommend products to a user using the recommend() method of Recommender interface. This method requires two parameters. The first represents the user id of the user to whom we need to send the recommendations, and the second represents the number of recommendations to be sent. Here is the usage of recommender() method:"
},
{
"code": null,
"e": 25978,
"s": 25808,
"text": "List<RecommendedItem> recommendations = recommender.recommend(2, 3);\n\nfor (RecommendedItem recommendation : recommendations) {\n System.out.println(recommendation);\n }\n"
},
{
"code": null,
"e": 25994,
"s": 25978,
"text": "Example Program"
},
{
"code": null,
"e": 26108,
"s": 25994,
"text": "Given below is an example program to set recommendation. Prepare the recommendations for the user with user id 2."
},
{
"code": null,
"e": 27680,
"s": 26108,
"text": "import java.io.File;\nimport java.util.List;\n\nimport org.apache.mahout.cf.taste.impl.model.file.FileDataModel;\nimport org.apache.mahout.cf.taste.impl.neighborhood.ThresholdUserNeighborhood;\nimport org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;\nimport org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;\nimport org.apache.mahout.cf.taste.model.DataModel;\nimport org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;\nimport org.apache.mahout.cf.taste.recommender.RecommendedItem;\nimport org.apache.mahout.cf.taste.recommender.UserBasedRecommender;\nimport org.apache.mahout.cf.taste.similarity.UserSimilarity;\n\npublic class Recommender {\n public static void main(String args[]){\n try{\n //Creating data model\n DataModel datamodel = new FileDataModel(new File(\"data\")); //data\n \n //Creating UserSimilarity object.\n UserSimilarity usersimilarity = new PearsonCorrelationSimilarity(datamodel);\n \n //Creating UserNeighbourHHood object.\n UserNeighborhood userneighborhood = new ThresholdUserNeighborhood(3.0, usersimilarity, datamodel);\n \n //Create UserRecomender\n UserBasedRecommender recommender = new GenericUserBasedRecommender(datamodel, userneighborhood, usersimilarity);\n \n List<RecommendedItem> recommendations = recommender.recommend(2, 3);\n\t\t\t\n for (RecommendedItem recommendation : recommendations) {\n System.out.println(recommendation);\n }\n \n }catch(Exception e){}\n \n }\n }"
},
{
"code": null,
"e": 27730,
"s": 27680,
"text": "Compile the program using the following commands:"
},
{
"code": null,
"e": 27771,
"s": 27730,
"text": "javac Recommender.java\njava Recommender\n"
},
{
"code": null,
"e": 27811,
"s": 27771,
"text": "It should produce the following output:"
},
{
"code": null,
"e": 27884,
"s": 27811,
"text": "RecommendedItem [item:3, value:4.5]\nRecommendedItem [item:4, value:4.0]\n"
},
{
"code": null,
"e": 28127,
"s": 27884,
"text": "Clustering is the procedure to organize elements or items of a given collection into\ngroups based on the similarity between the items. For example, the applications related to online news publishing group their news articles using clustering."
},
{
"code": null,
"e": 28258,
"s": 28127,
"text": "Clustering is broadly used in many applications such as market research, pattern recognition, data analysis, and image processing."
},
{
"code": null,
"e": 28547,
"s": 28389,
"text": "Clustering can help marketers discover distinct groups in their customer basis.\nAnd they can characterize their customer groups based on purchasing patterns."
},
{
"code": null,
"e": 28886,
"s": 28705,
"text": "In the field of biology, it can be used to derive plant and animal taxonomies,\ncategorize genes with similar functionality and gain insight into structures inherent in populations."
},
{
"code": null,
"e": 29165,
"s": 29067,
"text": "Clustering helps in identification of areas of similar land use in an earth\nobservation database."
},
{
"code": null,
"e": 29348,
"s": 29263,
"text": "Clustering also helps in classifying documents on the web for information\ndiscovery."
},
{
"code": null,
"e": 29526,
"s": 29433,
"text": "Clustering is used in outlier detection applications such as detection of credit\ncard fraud."
},
{
"code": null,
"e": 29770,
"s": 29619,
"text": "As a data mining function, Cluster Analysis serves as a tool to gain insight into\nthe distribution of data to observe characteristics of each cluster."
},
{
"code": null,
"e": 30006,
"s": 29921,
"text": "Using Mahout, we can cluster a given set of data. The steps required are as follows:"
},
{
"code": null,
"e": 30103,
"s": 30006,
"text": "Algorithm You need to select a suitable clustering algorithm to group the\nelements of a cluster."
},
{
"code": null,
"e": 30358,
"s": 30200,
"text": "Similarity and Dissimilarity You need to have a rule in place to verify the\nsimilarity between the newly encountered elements and the elements in the groups."
},
{
"code": null,
"e": 30621,
"s": 30516,
"text": "Stopping Condition A stopping condition is required to define the point where no clustering is required."
},
{
"code": null,
"e": 30766,
"s": 30726,
"text": "To cluster the given data you need to -"
},
{
"code": null,
"e": 30952,
"s": 30766,
"text": "Start the Hadoop server. Create required directories for storing files in Hadoop File System. (Create directories for input file, sequence file, and clustered output in case of canopy)."
},
{
"code": null,
"e": 31207,
"s": 31138,
"text": "Copy the input file to the Hadoop File system from Unix file system."
},
{
"code": null,
"e": 31323,
"s": 31276,
"text": "Prepare the sequence file from the input data."
},
{
"code": null,
"e": 31418,
"s": 31370,
"text": "Run any of the available clustering algorithms."
},
{
"code": null,
"e": 31490,
"s": 31466,
"text": "Get the clustered data."
},
{
"code": null,
"e": 31599,
"s": 31514,
"text": "Mahout works with Hadoop, hence make sure that the Hadoop server is up and running. "
},
{
"code": null,
"e": 31636,
"s": 31599,
"text": "$ cd HADOOP_HOME/bin\n$ start-all.sh\n"
},
{
"code": null,
"e": 31770,
"s": 31636,
"text": "Create directories in the Hadoop file system to store the input file, sequence files, and clustered data using the following command:"
},
{
"code": null,
"e": 31875,
"s": 31770,
"text": "$ hadoop fs -p mkdir /mahout_data\n$ hadoop fs -p mkdir /clustered_data\n$ hadoop fs -p mkdir /mahout_seq\n"
},
{
"code": null,
"e": 32001,
"s": 31875,
"text": "You can verify whether the directory is created using the hadoop web interface in the\nfollowing URL - http://localhost:50070/"
},
{
"code": null,
"e": 32041,
"s": 32001,
"text": "It gives you the output as shown below:"
},
{
"code": null,
"e": 32249,
"s": 32041,
"text": "Now, copy the input data file from the Linux file system to mahout_data directory in\nthe Hadoop File System as shown below. Assume your input file is mydata.txt and it is in the /home/Hadoop/data/ directory."
},
{
"code": null,
"e": 32310,
"s": 32249,
"text": "$ hadoop fs -put /home/Hadoop/data/mydata.txt /mahout_data/\n"
},
{
"code": null,
"e": 32440,
"s": 32310,
"text": "Mahout provides you a utility to convert the given input file in to a sequence file\nformat. This utility requires two parameters."
},
{
"code": null,
"e": 32498,
"s": 32440,
"text": "The input file directory where the original data resides."
},
{
"code": null,
"e": 32566,
"s": 32498,
"text": "The output file directory where the clustered data is to be stored."
},
{
"code": null,
"e": 32629,
"s": 32566,
"text": "Given below is the help prompt of mahout seqdirectory utility."
},
{
"code": null,
"e": 32722,
"s": 32629,
"text": "Step 1: Browse to the Mahout home directory. You can get help of the utility as shown below:"
},
{
"code": null,
"e": 32963,
"s": 32722,
"text": "[Hadoop@localhost bin]$ ./mahout seqdirectory --help\nJob-Specific Options:\n--input (-i) input Path to job input directory.\n--output (-o) output The directory pathname for output.\n--overwrite (-ow) If present, overwrite the output directory\n"
},
{
"code": null,
"e": 33036,
"s": 32963,
"text": "Generate the sequence file using the utility using the following syntax:"
},
{
"code": null,
"e": 33100,
"s": 33036,
"text": "mahout seqdirectory -i <input file path> -o <output directory>\n"
},
{
"code": null,
"e": 33108,
"s": 33100,
"text": "Example"
},
{
"code": null,
"e": 33207,
"s": 33108,
"text": "mahout seqdirectory\n-i hdfs://localhost:9000/mahout_seq/\n-o hdfs://localhost:9000/clustered_data/\n"
},
{
"code": null,
"e": 33266,
"s": 33207,
"text": "Mahout supports two main algorithms for clustering namely:"
},
{
"code": null,
"e": 33284,
"s": 33266,
"text": "Canopy clustering"
},
{
"code": null,
"e": 33303,
"s": 33284,
"text": "K-means clustering"
},
{
"code": null,
"e": 33607,
"s": 33303,
"text": "Canopy clustering is a simple and fast technique used by Mahout for clustering purpose. The objects will be treated as points in a plain space. This technique is often\nused as an initial step in other clustering techniques such as k-means clustering. You\ncan run a Canopy job using the following syntax:"
},
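The t1/t2 mechanism can be sketched in a few lines. This is a simplified, hypothetical one-dimensional version of canopy formation (the class name and sample points are invented; the real canopy job works over distributed vector data):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of canopy formation with a loose threshold t1 and a
// tight threshold t2 (t2 < t1); one-dimensional points for illustration.
public class CanopySketch {

   public static List<Double> canopyCenters(List<Double> points, double t1, double t2) {
      List<Double> remaining = new ArrayList<>(points);
      List<Double> centers = new ArrayList<>();
      while (!remaining.isEmpty()) {
         // Pick any remaining point as a new canopy center.
         double center = remaining.remove(0);
         centers.add(center);
         // Points within t1 would join this canopy; points within the
         // tighter t2 are bound strongly and leave the candidate pool.
         remaining.removeIf(p -> Math.abs(p - center) < t2);
      }
      return centers;
   }

   public static void main(String[] args) {
      List<Double> points = new ArrayList<>(List.of(1.0, 1.5, 2.0, 8.0, 8.4, 15.0));
      System.out.println(canopyCenters(points, 5.0, 3.0)); // [1.0, 8.0, 15.0]
   }
}
```

Because it needs only one cheap pass over the data, canopy output is a good set of seed centers for a subsequent k-means run.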
{
"code": null,
"e": 33721,
"s": 33607,
"text": "mahout canopy -i <input vectors directory>\n-o <output directory>\n-t1 <threshold value 1>\n-t2 <threshold value 2>\n"
},
{
"code": null,
"e": 33854,
"s": 33721,
"text": "Canopy job requires an input file directory with the sequence file and an output\ndirectory where the clustered data is to be stored."
},
{
"code": null,
"e": 33862,
"s": 33854,
"text": "Example"
},
{
"code": null,
"e": 33979,
"s": 33862,
"text": "mahout canopy -i hdfs://localhost:9000/mahout_seq/mydata.seq\n-o hdfs://localhost:9000/clustered_data\n-t1 20\n-t2 30 \n"
},
{
"code": null,
"e": 34052,
"s": 33979,
"text": "You will get the clustered data generated in the given output directory."
},
{
"code": null,
"e": 34343,
"s": 34052,
"text": "K-means clustering is an important clustering algorithm. The k in k-means clustering\nalgorithm represents the number of clusters the data is to be divided into. For\nexample, the k value specified to this algorithm is selected as 3, the algorithm is going\nto divide the data into 3 clusters."
},
{
"code": null,
"e": 34629,
"s": 34343,
"text": "Each object will be represented as vector in space. Initially k points will be chosen by the algorithm randomly and treated as centers, every object closest to each center\nare clustered. There are several algorithms for the distance measure and the user should choose the required one."
},
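The assign-then-update loop described above can be sketched as follows. This is a minimal, hypothetical one-dimensional version for illustration (invented class name and data); Mahout's k-means job runs the same idea over distributed vectors:

```java
import java.util.Arrays;

// Minimal one-dimensional k-means sketch: assign each point to its nearest
// center, then move each center to the mean of its assigned points.
public class KMeansSketch {

   public static double[] cluster(double[] points, double[] centers, int iterations) {
      int k = centers.length;
      for (int it = 0; it < iterations; it++) {
         double[] sum = new double[k];
         int[] count = new int[k];
         // Assignment step: each point joins its nearest center.
         for (double p : points) {
            int best = 0;
            for (int c = 1; c < k; c++) {
               if (Math.abs(p - centers[c]) < Math.abs(p - centers[best])) {
                  best = c;
               }
            }
            sum[best] += p;
            count[best]++;
         }
         // Update step: each center moves to the mean of its members.
         for (int c = 0; c < k; c++) {
            if (count[c] > 0) {
               centers[c] = sum[c] / count[c];
            }
         }
      }
      return centers;
   }

   public static void main(String[] args) {
      double[] points = {1.0, 1.2, 0.8, 9.0, 9.5, 10.1};
      double[] centers = cluster(points, new double[]{0.0, 5.0}, 10);
      System.out.println(Arrays.toString(centers)); // two centers near 1.0 and 9.5
   }
}
```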
{
"code": null,
"e": 34651,
"s": 34629,
"text": "Creating Vector Files"
},
{
"code": null,
"e": 34773,
"s": 34651,
"text": "Unlike Canopy algorithm, the k-means algorithm requires vector files as input,\ntherefore you have to create vector files."
},
{
"code": null,
"e": 34986,
"s": 34895,
"text": "To generate vector files from sequence file format, Mahout provides the\nseq2parse utility."
},
{
"code": null,
"e": 35176,
"s": 35077,
"text": "Given below are some of the options of seq2parse utility. Create vector files using these options."
},
{
"code": null,
"e": 35467,
"s": 35176,
"text": "$MAHOUT_HOME/bin/mahout seq2sparse\n--analyzerName (-a) analyzerName The class name of the analyzer\n--chunkSize (-chunk) chunkSize The chunkSize in MegaBytes.\n--output (-o) output The directory pathname for o/p\n--input (-i) input Path to job input directory.\n"
},
{
"code": null,
"e": 35568,
"s": 35467,
"text": "After creating vectors, proceed with k-means algorithm. The syntax to run k-means\njob is as follows:"
},
{
"code": null,
"e": 35775,
"s": 35568,
"text": "mahout kmeans -i <input vectors directory>\n-c <input clusters directory>\n-o <output working directory>\n-dm <Distance Measure technique>\n-x <maximum number of iterations>\n-k <number of initial clusters>\n"
},
{
"code": null,
"e": 36019,
"s": 35775,
"text": "K-means clustering job requires input vector directory, output clusters directory,\ndistance measure, maximum number of iterations to be carried out, and an integer value representing the number of clusters the input data is to be divided into."
},
{
"code": null,
"e": 36186,
"s": 36019,
"text": "Classification is a machine learning technique that uses known data to determine how\nthe new data should be classified into a set of existing categories. For example,"
},
{
"code": null,
"e": 36247,
"s": 36186,
"text": "iTunes application uses classification to prepare playlists."
},
{
"code": null,
"e": 36657,
"s": 36308,
"text": "Mail service providers such as Yahoo! and Gmail use this technique to decide whether a new mail should be classified as a spam. The categorization algorithm trains itself by analyzing user habits of marking certain mails as spams. Based on that, the classifier decides whether a future mail should be deposited in your inbox or in the spams folder."
},
{
"code": null,
"e": 37099,
"s": 37006,
"text": "While classifying a given set of data, the classifier system performs the following\nactions:"
},
{
"code": null,
"e": 37176,
"s": 37099,
"text": "Initially a new data model is prepared using any of the learning algorithms."
},
{
"code": null,
"e": 37216,
"s": 37176,
"text": "Then the prepared data model is tested."
},
{
"code": null,
"e": 37305,
"s": 37216,
"text": "Thereafter, this data model is used to evaluate the new data and to determine\nits class."
},
{
"code": null,
"e": 37527,
"s": 37305,
"text": "Credit card fraud detection - The Classification mechanism is used to predict credit card frauds. Using historical information of previous frauds, the classifier can predict which future transactions may turn into frauds."
},
{
"code": null,
"e": 37917,
"s": 37749,
"text": "Spam e-mails - Depending on the characteristics of previous spam mails, the\nclassifier determines whether a newly encountered e-mail should be sent to the\nspam folder."
},
{
"code": null,
"e": 38164,
"s": 38085,
"text": "Mahout uses the Naive Bayes classifier algorithm. It uses two implementations:"
},
{
"code": null,
"e": 38203,
"s": 38164,
"text": "Distributed Naive Bayes classification"
},
{
"code": null,
"e": 38244,
"s": 38203,
"text": "Complementary Naive Bayes classification"
},
{
"code": null,
"e": 38521,
"s": 38244,
"text": "Naive Bayes is a simple technique for constructing classifiers. It is not a single\nalgorithm for training such classifiers, but a family of algorithms. A Bayes classifier\nconstructs models to classify problem instances. These classifications are made using\nthe available data."
},
{
"code": null,
"e": 38663,
"s": 38521,
"text": "An advantage of naive Bayes is that it only requires a small amount of training data\nto estimate the parameters necessary for classification."
},
{
"code": null,
"e": 38791,
"s": 38663,
"text": "For some types of probability models, naive Bayes classifiers can be trained very\nefficiently in a supervised learning setting."
},
{
"code": null,
"e": 38917,
"s": 38791,
"text": "Despite its oversimplified assumptions, naive Bayes classifiers have worked quite well\nin many complex real-world situations."
},
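The idea can be made concrete with a tiny sketch. This hypothetical multinomial naive Bayes classifier (invented class name and training counts, add-one smoothing) picks the class with the highest log-probability score; it is not Mahout's trainnb/testnb implementation:

```java
// Hypothetical multinomial naive Bayes sketch with add-one (Laplace)
// smoothing; not Mahout's actual trainnb/testnb implementation.
public class NaiveBayesSketch {

   // Returns the index of the most likely class for a word-count vector,
   // given per-class training word counts and class priors.
   public static int classify(int[][] wordCounts, double[] priors, int[] doc) {
      int best = -1;
      double bestScore = Double.NEGATIVE_INFINITY;
      for (int c = 0; c < wordCounts.length; c++) {
         int total = 0;
         for (int count : wordCounts[c]) {
            total += count;
         }
         double score = Math.log(priors[c]);
         for (int w = 0; w < doc.length; w++) {
            // P(word | class) with add-one smoothing.
            double p = (wordCounts[c][w] + 1.0) / (total + doc.length);
            score += doc[w] * Math.log(p);
         }
         if (score > bestScore) {
            bestScore = score;
            best = c;
         }
      }
      return best;
   }

   public static void main(String[] args) {
      // Two classes over a three-word vocabulary: class 0 favors word 0,
      // class 1 favors word 2.
      int[][] counts = {{8, 1, 1}, {1, 1, 8}};
      double[] priors = {0.5, 0.5};
      System.out.println(classify(counts, priors, new int[]{3, 0, 0})); // 0
      System.out.println(classify(counts, priors, new int[]{0, 0, 3})); // 1
   }
}
```

The "small amount of training data" advantage shows here: only per-class word counts and priors are needed, not joint statistics over all feature combinations.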
{
"code": null,
"e": 38985,
"s": 38917,
"text": "The following steps are to be followed to implement Classification:"
},
{
"code": null,
"e": 39007,
"s": 38985,
"text": "Generate example data"
},
{
"code": null,
"e": 39039,
"s": 39007,
"text": "Create sequence files from data"
},
{
"code": null,
"e": 39073,
"s": 39039,
"text": "Convert sequence files to vectors"
},
{
"code": null,
"e": 39091,
"s": 39073,
"text": "Train the vectors"
},
{
"code": null,
"e": 39108,
"s": 39091,
"text": "Test the vectors"
},
{
"code": null,
"e": 39307,
"s": 39108,
"text": "Generate or download the data to be classified. For example, you can get the\n20 newsgroups example data from the following link:\nhttp://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz"
},
{
"code": null,
"e": 39387,
"s": 39307,
"text": "Create a directory for storing input data. Download the example as shown below."
},
{
"code": null,
"e": 39554,
"s": 39387,
"text": "$ mkdir classification_example\n$ cd classification_example\n$tar xzvf 20news-bydate.tar.gz\nwget http://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz \n"
},
{
"code": null,
"e": 39668,
"s": 39554,
"text": "Create sequence file from the example using seqdirectory utility. The syntax to generate sequence is given below:"
},
{
"code": null,
"e": 39732,
"s": 39668,
"text": "mahout seqdirectory -i <input file path> -o <output directory>\n"
},
{
"code": null,
"e": 39847,
"s": 39732,
"text": "Create vector files from sequence files using seq2parse utility. The options of\nseq2parse utility are given below:"
},
{
"code": null,
"e": 40139,
"s": 39847,
"text": "$MAHOUT_HOME/bin/mahout seq2sparse\n--analyzerName (-a) analyzerName The class name of the analyzer\n--chunkSize (-chunk) chunkSize The chunkSize in MegaBytes.\n--output (-o) output The directory pathname for o/p\n--input (-i) input Path to job input directory. \n"
},
{
"code": null,
"e": 40246,
"s": 40139,
"text": "Train the generated vectors using the trainnb utility. The options to use trainnb utility are given below:"
},
{
"code": null,
"e": 40365,
"s": 40246,
"text": "mahout trainnb\n -i ${PATH_TO_TFIDF_VECTORS}\n -el\n -o ${PATH_TO_MODEL}/model\n -li ${PATH_TO_MODEL}/labelindex\n -ow\n -c\n"
},
{
"code": null,
"e": 40455,
"s": 40365,
"text": "Test the vectors using testnb utility. The options to use testnb utility are given below:"
},
{
"code": null,
"e": 40600,
"s": 40455,
"text": "mahout testnb\n -i ${PATH_TO_TFIDF_TEST_VECTORS}\n -m ${PATH_TO_MODEL}/model\n -l ${PATH_TO_MODEL}/labelindex\n -ow\n -o ${PATH_TO_OUTPUT}\n -c\n -seq\n"
    }
] |
How to create a read-only list in Java?
|
Let us first create a List in Java −
List<String>list = new ArrayList<String>();
list.add("A");
list.add("B");
list.add("C");
Now to convert the above list to read-only, use Collections −
list = Collections.unmodifiableList(list);
We have converted the above list to read-only. Now, if you try to add more elements to the list, the following exception is thrown at runtime −
Exception in thread "main" java.lang.UnsupportedOperationException
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Demo {
public static void main(String args[]) throws Exception {
List<String>list = new ArrayList<String>();
list.add("A");
list.add("B");
list.add("C");
list = Collections.unmodifiableList(list);
// An exception is thrown since its a read-only list now
list.add("D");
list.add("E");
list.add("F");
System.out.println(list);
}
}
The output is as follows. Since it is a read-only list, an exception is thrown at runtime when add() is called −
Exception in thread "main" java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableCollection.add(...)
at Demo.main(...)
|
[
{
"code": null,
"e": 1099,
"s": 1062,
"text": "Let us first create a List in Java −"
},
{
"code": null,
"e": 1188,
"s": 1099,
"text": "List<String>list = new ArrayList<String>();\nlist.add(\"A\");\nlist.add(\"B\");\nlist.add(\"C\");"
},
{
"code": null,
"e": 1250,
"s": 1188,
"text": "Now to convert the above list to read-only, use Collections −"
},
{
"code": null,
"e": 1293,
"s": 1250,
"text": "list = Collections.unmodifiableList(list);"
},
{
"code": null,
"e": 1439,
"s": 1293,
"text": "We have converted the above list to read-only. Now, if you will try to add more elements to the list, then the following error would be visible −"
},
{
"code": null,
"e": 1515,
"s": 1439,
"text": "Exception in thread \"main\" java.lang.Error: Unresolved compilation problem:"
},
{
"code": null,
"e": 1526,
"s": 1515,
"text": " Live Demo"
},
{
"code": null,
"e": 2015,
"s": 1526,
"text": "import java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\npublic class Demo {\n public static void main(String args[]) throws Exception {\n List<String>list = new ArrayList<String>();\n list.add(\"A\");\n list.add(\"B\");\n list.add(\"C\");\n list = Collections.unmodifiableList(list);\n // An exception is thrown since its a read-only list now\n list.add(\"D\");\n list.add(\"E\");\n list.add(\"F\");\n System.out.println(list);\n }\n}"
},
{
"code": null,
"e": 2098,
"s": 2015,
"text": "The output is as follows. Since it’s a read-only list, an error would be visible −"
},
{
"code": null,
"e": 2264,
"s": 2098,
"text": "Exception in thread \"main\" java.lang.Error:Unresolved compilation problem:\nString literal is not properly closed by a double-quote\nat Amit/my.Demo.main(Demo.java:18)"
}
] |
What is the difference between dynamic type variables and object type variables?
|
You can store any type of value in the dynamic data type variable. Type checking for these types of variables takes place at run-time.
The Object Type is the ultimate base class for all data types in the C# Common Type System (CTS). The object keyword is an alias for the System.Object class. Object type variables can be assigned values of any other type: value types, reference types, predefined or user-defined types.
Dynamic types are similar to object types except that type checking for object type variables takes place at compile time, whereas that for the dynamic type variables takes place at runtime.
Example of Dynamic Type −
dynamic z = 100;
Example of Object Type −
object obj = 100;
|
[
{
"code": null,
"e": 1197,
"s": 1062,
"text": "You can store any type of value in the dynamic data type variable. Type checking for these types of variables takes place at run-time."
},
{
"code": null,
"e": 1464,
"s": 1197,
"text": "The Object Type is the ultimate base class for all data types in C# Common Type System (CTS). The object is an alias for System. Object class. The object types can be assigned values of any other types, value types, reference types, predefined or user-defined types."
},
{
"code": null,
"e": 1655,
"s": 1464,
"text": "Dynamic types are similar to object types except that type checking for object type variables takes place at compile time, whereas that for the dynamic type variables takes place at runtime."
},
{
"code": null,
"e": 1681,
"s": 1655,
"text": "Example of Dynamic Type −"
},
{
"code": null,
"e": 1698,
"s": 1681,
"text": "dynamic z = 100;"
},
{
"code": null,
"e": 1723,
"s": 1698,
"text": "Example of Object Type −"
},
{
"code": null,
"e": 1742,
"s": 1723,
"text": "object obj = 100;\n"
}
] |
Check for NULL or NOT NULL values in a column in MySQL
|
For this, use IS NOT NULL in MySQL. Let us see the syntax−
select yourColumnName IS NOT NULL from yourTableName;
The above query returns 1 if the column value is not NULL, otherwise 0. Let us first create a table −
mysql> create table DemoTable1408
-> (
-> FirstName varchar(30)
-> );
Query OK, 0 rows affected (0.54 sec)
Insert some records in the table using insert −
mysql> insert into DemoTable1408 values('Chris');
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable1408 values('');
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable1408 values(NULL);
Query OK, 1 row affected (0.13 sec)
mysql> insert into DemoTable1408 values('David');
Query OK, 1 row affected (0.10 sec)
Display all records from the table using select −
mysql> select * from DemoTable1408;
This will produce the following output −
+-----------+
| FirstName |
+-----------+
| Chris |
| |
| NULL |
| David |
+-----------+
4 rows in set (0.00 sec)
Following is the query to check for NULL or NOT NULL −
mysql> select FirstName IS NOT NULL from DemoTable1408;
This will produce the following output −
+-----------------------+
| FirstName IS NOT NULL |
+-----------------------+
| 1 |
| 1 |
| 0 |
| 1 |
+-----------------------+
4 rows in set (0.00 sec)
|
[
{
"code": null,
"e": 1121,
"s": 1062,
"text": "For this, use IS NOT NULL in MySQL. Let us see the syntax−"
},
{
"code": null,
"e": 1175,
"s": 1121,
"text": "select yourColumnName IS NOT NULL from yourTableName;"
},
{
"code": null,
"e": 1277,
"s": 1175,
"text": "The above query returns 1 if the column does not have NULL value otherwise 0. Let us first create a −"
},
{
"code": null,
"e": 1393,
"s": 1277,
"text": "mysql> create table DemoTable1408\n -> (\n -> FirstName varchar(30)\n -> );\nQuery OK, 0 rows affected (0.54 sec)"
},
{
"code": null,
"e": 1441,
"s": 1393,
"text": "Insert some records in the table using insert −"
},
{
"code": null,
"e": 1777,
"s": 1441,
"text": "mysql> insert into DemoTable1408 values('Chris');\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into DemoTable1408 values('');\nQuery OK, 1 row affected (0.12 sec)\nmysql> insert into DemoTable1408 values(NULL);\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into DemoTable1408 values('David');\nQuery OK, 1 row affected (0.10 sec)"
},
{
"code": null,
"e": 1827,
"s": 1777,
"text": "Display all records from the table using select −"
},
{
"code": null,
"e": 1863,
"s": 1827,
"text": "mysql> select * from DemoTable1408;"
},
{
"code": null,
"e": 1904,
"s": 1863,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2041,
"s": 1904,
"text": "+-----------+\n| FirstName |\n+-----------+\n| Chris |\n| |\n| NULL |\n| David |\n+-----------+\n4 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2096,
"s": 2041,
"text": "Following is the query to check for NULL or NOT NULL −"
},
{
"code": null,
"e": 2152,
"s": 2096,
"text": "mysql> select FirstName IS NOT NULL from DemoTable1408;"
},
{
"code": null,
"e": 2193,
"s": 2152,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2426,
"s": 2193,
"text": "+-----------------------+\n| FirstName IS NOT NULL |\n+-----------------------+\n| 1 |\n| 1 |\n| 0 |\n| 1 |\n+-----------------------+\n4 rows in set (0.00 sec)"
}
] |
Using neural networks for a functional connectivity classification of fMRI data | by Gelana Tostaeva | Towards Data Science
|
In this tutorial you will:
explore a neuroimaging dataset from nilearn;
perform a canonical independent component analysis in nilearn;
extract functional connectivity coefficients between regions determined by the component analysis;
use the coefficients for a classification multilayer perceptron model using Keras.
All the necessary code will be provided in this guide. For the full version, please see this GitHub page.
Prerequisites: This tutorial uses Python, Nilearn, and Keras. You do not need to have used Nilearn before, but you should have some experience with using Python and Keras. If you have never made a neural network before, this tutorial might be difficult to follow towards the end when we build a model, but feel free to stick around for the Nilearn part.
We will explore a neural network approach to analyzing functional connectivity-based data on attention deficit hyperactivity disorder (ADHD). Functional connectivity shows how brain regions connect with one another and make up functional networks. As such, it might hold insights into how the brain communicates.
Our approach does not directly rely on neuroimaging scans and, instead, makes use of vectorized functional connectivity measures. This provides a computationally light way of getting practice with using machine learning for functional connectivity analysis.
Similar approaches using more complicated neural network models have been used for classification, in the context of brain disorders in particular. However, these have proven to be challenging due to the numerous features in the data and validation difficulty (see Du et al., 2018 for a review). Our own analysis also shows less than optimal model accuracy, but the purpose of this tutorial is to provide a guide on how to get started. You are encouraged to try out your own neural network in the end!
Nilearn, which is a Python module for neuroimaging data we will be using, has a variety of preprocessed datasets you can easily download with a built-in function:
from nilearn import datasets

num = 40
adhd_data = datasets.fetch_adhd(n_subjects=num)
We will use the ADHD-200 publicly accessible dataset composed of resting-state fMRI and anatomical data collected from multiple research centers. Nilearn has data for only 40 subjects, so we load all of them.
We can inspect the dataset by looking at the .keys() — we see that there are 4 types of information. “func” features the paths to the rs-fMRI data images; “confounds” are the CSV files containing the nuisance variables (confounds) we want to be aware of as not to affect our analysis; “phenotypic” provides explanations for the preprocessing steps; “description” is, well, description of the dataset.
The data were collected to increase understanding of the neural correlates of ADHD. If you want, you can learn more about the original dataset here. For the purposes of this tutorial, you only need to know that the dataset features both typically developing individuals (“controls”) and those with diagnosed ADHD (“treatments”).
We will extract the functional connectivity coefficients from this dataset to classify whether a given subject is a “control” or “treatment”. We will do so by first using an independent component analysis from nilearn and then use it to extract functional connectivity coefficients. Finally, we will build a neural network using these coefficients to discriminate “controls” from “treatments”.
Independent component analysis (ICA) is commonly used to assess functional connectivity. Nilearn has a method for group-level ICA (CanICA) which allows for control over single subject variability, especially given that we are interested in functional networks.
We use Nilearn’s built-in function and get nice visualizations of what we are working with. We choose to use a 20-component decomposition based on the standards provided in the Nilearn documentation for the chosen dataset. We get the independent components using masker_.inverse_transform which we then plot using the Nilearn’s plotting options for both the statistical and probabilistic atlas maps. We need the statistical one to plot the default mode network (DMN) specifically — the plot_stat_map function allows plotting cuts of this region of interest; the probabilistic one simply uses all the components yielded from the decomposition and layers them on top of the default anatomical brain image.
from nilearn import decomposition

canica = decomposition.CanICA(n_components=20, mask_strategy='background')
canica.fit(func)

#Retrieving the components
components = canica.components_

#Using a masker to project into the 3D space
components_img = canica.masker_.inverse_transform(components)

#Plotting the default mode network (DMN) without region extraction
plotting.plot_stat_map(image.index_img(components_img, 9), title='DMN')
plotting.show()

#Plotting all the components
plotting.plot_prob_atlas(components_img, title='All ICA components')
plotting.show()
Our component decomposition is hard to interpret conclusively (essentially, we are seeing different brain regions in the fMRI data), but we can use it as a filter to extract the regions we are interested in. We do so by calling the NiftiMapsMasker function from Nilearn to “summarize” the brain signals we obtained using ICA. Once we have that, we transform the extracted data to time series by using the fit_transform method.
We then use everything we know about the dataset (“func”, “confounds”, and “phenotypic” files) to get the information we need, including whether the subject is “treatment” or “control” and their associated data collection location (site).
#Using a filter to extract the regions time series
from nilearn import input_data

masker = input_data.NiftiMapsMasker(components_img, smoothing_fwhm=6, standardize=False, detrend=True, t_r=2.5, low_pass=0.1, high_pass=0.01)

#Computing the regions signals and extracting the phenotypic information of interest
subjects = []
adhds = []
sites = []
labels = []
for func_file, confound_file, phenotypic in zip(adhd_data.func, adhd_data.confounds, adhd_data.phenotypic):
    time_series = masker.fit_transform(func_file, confounds=confound_file)
    subjects.append(time_series)
    is_adhd = phenotypic['adhd']
    if is_adhd == 1:
        adhds.append(time_series)
    sites.append(phenotypic['site'])
    labels.append(phenotypic['adhd'])
Thus far, we used a CanICA to get components that we needed to determine the regions of interest. The last thing to do before we can build our neural network model is to get functional connectivity coefficients. For this, we need to look at the functional connectivity between the regions of interest we extracted. We considered three different kinds of functional connectivity and determined correlation to be the most accurate. You can find how we did this in the full code.
Correlation simply determines the marginal connectivity between pairwise regions of interest. Nilearn has a built-in method for computing the correlation matrices, the ConnectivityMeasure function. We only need to specify the kind of functional connectivity we are interested in and then fit the time-series data we extracted in the previous step.
from nilearn.connectome import ConnectivityMeasure

correlation_measure = ConnectivityMeasure(kind='correlation')
correlation_matrices = correlation_measure.fit_transform(subjects)

for i in range(40):
    plt.figure(figsize=(8,6))
    plt.imshow(correlation_matrices[i], vmax=.20, vmin=-.20, cmap='RdBu_r')
    plt.colorbar()
    plt.title('Connectivity matrix of subject {} with label {}'.format(i, labels[i]))
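Conceptually, the “correlation” kind boils down to pairwise Pearson correlations between region time series (nilearn estimates them via a covariance estimator, so its exact values may differ slightly from the textbook formula). A minimal stdlib-Python sketch on two made-up toy time series, purely for illustration:

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length time series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two toy "region" time series (hypothetical values, not real fMRI data)
region_a = [0.1, 0.4, 0.3, 0.8, 0.6]
region_b = [0.2, 0.5, 0.2, 0.9, 0.7]
series = [region_a, region_b]

# Pairwise correlation matrix, analogous to one subject's connectivity matrix
corr = [[pearson(s1, s2) for s2 in series] for s1 in series]
```

For a subject with 20 regions, the same idea yields the 20×20 matrices plotted below, with ones on the diagonal and a symmetric structure.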
We now have our connectivity matrices for all subjects, but let’s see what the average connectivity across all looks like. We split the matrices into those of treatment versus control subjects for comparison. This comparison is what our neural network model will use for classification.
#Separating the correlation matrices between treatment and control subjects
adhd_correlations = []
control_correlations = []
for i in range(40):
    if labels[i] == 1:
        adhd_correlations.append(correlation_matrices[i])
    else:
        control_correlations.append(correlation_matrices[i])

#Getting the mean correlation matrix across all treatment subjects
mean_correlations_adhd = np.mean(adhd_correlations, axis=0).reshape(time_series.shape[-1], time_series.shape[-1])

#Getting the mean correlation matrix across all control subjects
mean_correlations_control = np.mean(control_correlations, axis=0).reshape(time_series.shape[-1], time_series.shape[-1])

#Visualizing the mean correlation
plotting.plot_matrix(mean_correlations_adhd, vmax=1, vmin=-1, colorbar=True, title='Correlation between 20 regions for ADHD')
plotting.plot_matrix(mean_correlations_control, vmax=1, vmin=-1, colorbar=True, title='Correlation between 20 regions for controls')
We can see that the connections are not particularly strong for either group (the diagonal line can be ignored as it shows correlation with itself and, thus, always equals to 1). To better visualize the connections and the differences, we can project these back onto the brain.
#Getting the center coordinates from the component decomposition to use as atlas labels
coords = plotting.find_probabilistic_atlas_cut_coords(components_img)

#Plotting the connectome with 80% edge strength in the connectivity
plotting.plot_connectome(mean_correlations_adhd, coords, edge_threshold="80%", title='Correlation between 20 regions for ADHD')
plotting.plot_connectome(mean_correlations_control, coords, edge_threshold="80%", title='Correlation between 20 regions for controls')
plotting.show()
This gives us a nice connectome, a brain map of all the connections among the 20 regions we are looking at.
The resulting ADHD connections do not seem as dense, compared to the control ones, which might be related to the notion of reduced functional connectivity associated with ADHD (Yang et al., 2011). In line with some of the previous research (Tomasi & Volkow, 2012), for ADHD subjects, we notice fewer connections in the superior parietal cortex (the upper right part in the first coronal graph), which is thought to be implicated in attention. There also seem to be fewer connections in the DMN, which we visualized before, a network active during rest and associated with the “self” — one that is suggested to be altered in ADHD (Mowinckel et al., 2017). These differences, albeit small, suggest that it should not be impossible to classify between treatments and controls using correlation matrices in our neural network model.
If you would like to see an interactive visualization of the connectome, you can run the line below. Otherwise, we are ready to move on to the modeling!
#Creating the interactive visualization
view = plotting.view_connectome(mean_correlations, coords, edge_threshold='80%')

#To display in the cell below
view

#To display in a different tab
view.open_in_browser()
Now that we have our correlation matrices providing a vectorized measure of functional connectivity, we can use these as the input data for our neural network.
Before we build our model, we should split the data for training (70%) and testing (30%):
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(correlation_matrices, labels, test_size=0.3)
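Conceptually, this split is just a shuffle-and-partition. A rough stdlib-only sketch of the same idea (the function and variable names here are illustrative; sklearn's version adds stratification and other conveniences on top):

```python
import random

def simple_split(X, y, test_size=0.3, seed=0):
    # Shuffle indices, then carve off the first test_size fraction for testing
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    n_test = int(round(len(X) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    X_train = [X[i] for i in train_idx]
    X_test = [X[i] for i in test_idx]
    y_train = [y[i] for i in train_idx]
    y_test = [y[i] for i in test_idx]
    return X_train, X_test, y_train, y_test

# 40 toy samples, mirroring the 40 subjects in the dataset
X = [[i] for i in range(40)]
y = [i % 2 for i in range(40)]
X_train, X_test, y_train, y_test = simple_split(X, y, test_size=0.3)
```

With 40 subjects and test_size=0.3, this leaves 28 training and 12 testing samples, which is the split size the evaluation below operates on.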
Our neural network can be anything from one layer to five. After playing with a few different architectures, we settled on a Sequential model made up of four Dense layers. This tutorial assumes you know what these mean, so we will not go into all the details and give a brief overview of the overall architecture instead. If you need a refresher on the topic, this blog is a good start.
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

classifier = Sequential()

#First Hidden Layer
classifier.add(Dense(32, activation='tanh', kernel_initializer='random_normal', input_shape=connectivity_biomarkers['correlation'].shape[1:]))

#Second Hidden Layer
classifier.add(Dense(16, activation='relu', kernel_initializer='random_normal'))

#Third Hidden Layer
classifier.add(Dense(16, activation='relu', kernel_initializer='random_normal'))

#Output Layer
classifier.add(Dense(1, activation='sigmoid', kernel_initializer='random_normal'))

#Compiling the model
classifier.compile(optimizer=Adam(lr=.0001), loss='binary_crossentropy', metrics=['accuracy'])

#Fitting the model
classifier.fit(np.array(X_train), np.array(y_train), batch_size=32, epochs=100)
And this is it! We built a simple neural network. Let’s recap the architecture of our model:
We use the Sequential model so that we can simply build layers on top of one another;
We choose Dense layers, which are the simplest layers in a neural network — you can think of each as a linear model taking multiple inputs and producing a single output, followed by an activation. We use 4 of these as we are dealing with functions that are not necessarily linearly separable, but, since we are essentially only interested in a binary classification, you don't have to use 4 and might get away with fewer — this is what seemed to work best in terms of the model's predictive power. For each layer, we specify different numbers of nodes (32, 16, 16, 1), which is also mostly trivial in this example model. The only rule of thumb is to start with a number smaller than or equal to the length of your input; in our case, we have 40 matrices, so 32 seems appropriate. The final output layer should be related to the number of output categories. Since this is a binary classification problem, we need to use 1.
For activation functions (which convert a layer's input signal into its output), we use Tanh, ReLU, and Sigmoid. Tanh and Sigmoid are used for the first and last layers, respectively, and not in the hidden ones. This is because both suffer from the vanishing gradient problem: they saturate for large inputs and output near-zero gradients (which we cannot rule out here, so better be safe). We could have just used ReLU functions, which overcome this problem thanks to their simplicity (ReLU outputs x if x is positive and 0 otherwise), but these are better avoided in non-hidden layers because they can produce no gradient at all, i.e. “dead” neurons. The current architecture (Tanh, ReLU, ReLU, Sigmoid) seemed to, again, result in the best evaluation metrics like accuracy;
Finally, we use an Adam optimizer and a binary cross-entropy loss function. Adam is considered to be a good option, especially for noisy data. Binary cross-entropy is a usual choice for classification problems since it is independent for each class and ensures that one output vector is unaffected by other components.
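The binary cross-entropy loss the model minimizes has a simple closed form: the mean of -[y·log(p) + (1-y)·log(1-p)] over the samples. A small stdlib sketch with made-up labels and predicted probabilities (not outputs of the actual model):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean of -[y*log(p) + (1-y)*log(1-p)] over all samples
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Hypothetical labels (1 = ADHD, 0 = control) and model probabilities:
# both samples are predicted confidently and correctly, so the loss is small
loss = binary_cross_entropy([1, 0], [0.9, 0.1])  # -> -ln(0.9) ≈ 0.105
```

Confident wrong predictions would drive the loss up sharply, which is what pushes the network's sigmoid output toward the correct class during training.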
Now, there’s only one thing left to do — see how our model performs. We will use accuracy as our main metric as it represents the proportion of correct classifications.
Let’s start with the training set:
eval_model = classifier.evaluate(np.array(X_train), np.array(y_train))
eval_model
Hooray, the accuracy on our training data is 1. Onto the scary part of testing data.
y_pred = classifier.predict(X_test, batch_size=32)
y_pred = (y_pred > 0.5)

from sklearn.metrics import confusion_matrix, classification_report

cm = confusion_matrix(y_test, y_pred)
print(cm)
cr = classification_report(y_test, y_pred)
print(cr)
The overall accuracy of our classification is 75%, which is not terrible but could be better. We also only got 2 false negatives and 1 false positive. It is now up to you to play with the simple model you built in this step-by-step guide! A disclaimer: the model in the corresponding Jupyter notebook has a much lower accuracy than reported here, likely due to random processes — all the more reason to experiment and come up with a better model.
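As a quick sanity check on those numbers: with 12 test subjects (30% of 40), 1 false positive, and 2 false negatives, accuracy follows directly from the error counts:

```python
# Error counts reported for the test run
false_positives = 1
false_negatives = 2
n_test = 12  # 30% of the 40 subjects

# Accuracy = fraction of correct classifications
correct = n_test - false_positives - false_negatives
accuracy = correct / n_test  # 9/12 = 0.75
```

Note that precision and recall additionally depend on how the 9 correct predictions split into true positives and true negatives, which is what the classification_report above breaks down per class.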
This tutorial explored how functional connectivity data can be used to screen for ADHD. We created a neural network based on functional connectivity coefficients for different regions to do so. As we have seen through the component analysis and connectome plots, the differences between treatment and control samples are not prominent which could explain the lower accuracy of our screening. More work with larger datasets should be done to see if there is a more consistent pattern within functional connectivity in ADHD. It could be also interesting to examine a higher number of regions and components in future analyses.
Hopefully, this has given you some idea of how you can use machine learning for functional connectivity analysis of fMRI data. We (conveniently) used preprocessed data, but if you want to learn more about the preprocessing step of fMRI data analysis, check out my other tutorial here. Until next time!
Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., ... & Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in neuroinformatics, 8, 14.
Du, Y., Fu, Z., & Calhoun, V. D. (2018). Classification and prediction of brain disorders using functional connectivity: promising but challenging. Frontiers in neuroscience, 12.
Mowinckel, A. M., Alnæs, D., Pedersen, M. L., Ziegler, S., Fredriksen, M., Kaufmann, T., ... & Biele, G. (2017). Increased default-mode variability is related to reduced task-performance and is evident in adults with ADHD. NeuroImage: Clinical, 16, 369–382.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. Journal of machine learning research, 12(Oct), 2825–2830.
Tomasi, D., & Volkow, N. D. (2012). Abnormal functional connectivity in children with attention-deficit/hyperactivity disorder. Biological psychiatry, 71(5), 443–450.
Varoquaux, G., Sadaghiani, S., Pinel, P., Kleinschmidt, A., Poline, J. B., & Thirion, B. (2010). A group model for stable multi-subject ICA on fMRI datasets. Neuroimage, 51(1), 288–299.
Yang, H., Wu, Q. Z., Guo, L. T., Li, Q. Q., Long, X. Y., Huang, X. Q., ... & Gong, Q. Y. (2011). Abnormal spontaneous brain activity in medication-naive ADHD children: a resting state fMRI study. Neuroscience letters, 502(2), 89–93.
|
[
{
"code": null,
"e": 199,
"s": 172,
"text": "In this tutorial you will:"
},
{
"code": null,
"e": 244,
"s": 199,
"text": "explore a neuroimaging dataset from nilearn;"
},
{
"code": null,
"e": 307,
"s": 244,
"text": "perform a canonical independent component analysis in nilearn;"
},
{
"code": null,
"e": 406,
"s": 307,
"text": "extract functional connectivity coefficients between regions determined by the component analysis;"
},
{
"code": null,
"e": 489,
"s": 406,
"text": "use the coefficients for a classification multilayer perceptron model using Keras."
},
{
"code": null,
"e": 595,
"s": 489,
"text": "All the necessary code will be provided in this guide. For the full version, please see this GitHub page."
},
{
"code": null,
"e": 949,
"s": 595,
"text": "Prerequisites: This tutorial uses Python, Nilearn, and Keras. You do not need to have used Nilearn before, but you should have some experience with using Python and Keras. If you have never made a neural network before, this tutorial might be difficult to follow towards the end when we build a model, but feel free to stick around for the Nilearn part."
},
{
"code": null,
"e": 1261,
"s": 949,
"text": "We will explore a neural network approach to analyzing functional connectivity-based data on attention deficit hyperactivity disorder (ADHD). Functional connectivity shows how brain regions connect with one another and make up functional networks. As such, it might hold insights into how the brain communicates"
},
{
"code": null,
"e": 1519,
"s": 1261,
"text": "Our approach does not directly rely on neuroimaging scans and, instead, makes use of vectorized functional connectivity measures. This provides a computationally light way of getting practice with using machine learning for functional connectivity analysis."
},
{
"code": null,
"e": 2021,
"s": 1519,
"text": "Similar approaches using more complicated neural network models have been used for classification, in the context of brain disorders in particular. However, these have proven to be challenging due to the numerous features in the data and validation difficulty (see Du et al., 2018 for a review). Our own analysis also shows less than optimal model accuracy, but the purpose of this tutorial is to provide a guide on how to get started. You are encouraged to try out your own neural network in the end!"
},
{
"code": null,
"e": 2184,
"s": 2021,
"text": "Nilearn, which is a Python module for neuroimaging data we will be using, has a variety of preprocessed datasets you can easily download with a built-in function:"
},
{
"code": null,
"e": 2268,
"s": 2184,
"text": "from nilearn import datasetsnum = 40adhd_data = datasets.fetch_adhd(n_subjects=num)"
},
{
"code": null,
"e": 2477,
"s": 2268,
"text": "We will use the ADHD-200 publicly accessible dataset composed of resting-state fMRI and anatomical data collected from multiple research centers. Nilearn has data for only 40 subjects, so we load all of them."
},
{
"code": null,
"e": 2878,
"s": 2477,
"text": "We can inspect the dataset by looking at the .keys() — we see that there are 4 types of information. “func” features the paths to the rs-fMRI data images; “confounds” are the CSV files containing the nuisance variables (confounds) we want to be aware of as not to affect our analysis; “phenotypic” provides explanations for the preprocessing steps; “description” is, well, description of the dataset."
},
{
"code": null,
"e": 3207,
"s": 2878,
"text": "The data were collected to increase understanding of the neural correlates of ADHD. If you want, you can learn more about the original dataset here. For the purposes of this tutorial, you only need to know that the dataset features both typically developing individuals (“controls”) and those with diagnosed ADHD (“treatments”)."
},
{
"code": null,
"e": 3601,
"s": 3207,
"text": "We will extract the functional connectivity coefficients from this dataset to classify whether a given subject is a “control” or “treatment”. We will do so by first using an independent component analysis from nilearn and then use it to extract functional connectivity coefficients. Finally, we will build a neural network using these coefficients to discriminate “controls” from “treatments”."
},
{
"code": null,
"e": 3862,
"s": 3601,
"text": "Independent component analysis (ICA) is commonly used to assess functional connectivity. Nilearn has a method for group-level ICA (CanICA) which allows for control over single subject variability, especially given that we are interested in functional networks."
},
{
"code": null,
"e": 4566,
"s": 3862,
"text": "We use Nilearn’s built-in function and get nice visualizations of what we are working with. We choose to use a 20-component decomposition based on the standards provided in the Nilearn documentation for the chosen dataset. We get the independent components using masker_.inverse_transform which we then plot using the Nilearn’s plotting options for both the statistical and probabilistic atlas maps. We need the statistical one to plot the default mode network (DMN) specifically — the plot_stat_map function allows plotting cuts of this region of interest; the probabilistic one simply uses all the components yielded from the decomposition and layers them on top of the default anatomical brain image."
},
{
"code": null,
"e": 5115,
"s": 4566,
"text": "from nilearn import decompositioncanica = decomposition.CanICA(n_components=20, mask_strategy=’background’)canica.fit(func)#Retrieving the componentscomponents = canica.components_#Using a masker to project into the 3D spacecomponents_img = canica.masker_.inverse_transform(components)#Plotting the default mode network (DMN) without region extractionplotting.plot_stat_map(image.index_img(components_img, 9), title='DMN')plotting.show()#Plotting all the componentsplotting.plot_prob_atlas(components_img, title='All ICA components')plotting.show()"
},
{
"code": null,
"e": 5539,
"s": 5115,
"text": "Our component decomposition is hard to interpret conclusively (essentially, we are seeing different brain regions in the fMRI data), but we can use it as a filter to extract the regions we are interested in. We do so calling the NiftiMapsMasker function from Nilearn to “summarize” the brain signals we obtained using ICA. Once we have that, we transform the extracted data to time-series by using the fit_transform method."
},
{
"code": null,
"e": 5778,
"s": 5539,
"text": "We then use everything we know about the dataset (“func”, “confounds”, and “phenotypic” files) to get the information we need, including whether the subject is “treatment” or “control” and their associated data collection location (site)."
},
{
"code": null,
"e": 6477,
"s": 5778,
"text": "#Using a filter to extract the regions time series from nilearn import input_datamasker = input_data.NiftiMapsMasker(components_img, smoothing_fwhm=6, standardize=False, detrend=True, t_r=2.5, low_pass=0.1, high_pass=0.01)#Computing the regions signals and extracting the phenotypic information of interestsubjects = []adhds = []sites = []labels = []for func_file, confound_file, phenotypic in zip( adhd_data.func, adhd_data.confounds, adhd_data.phenotypic): time_series = masker.fit_transform(func_file, confounds=confound_file) subjects.append(time_series) is_adhd = phenotypic[‘adhd’] if is_adhd == 1: adhds.append(time_series) sites.append(phenotypic[‘site’]) labels.append(phenotypic[‘adhd’])"
},
{
"code": null,
"e": 6954,
"s": 6477,
"text": "Thus far, we used a CanICA to get components that we needed to determine the regions of interest. The last thing to do before we can build our neural network model is to get functional connectivity coefficients. For this, we need to look at the functional connectivity between the regions of interest we extracted. We considered three different kinds of functional connectivity and determined correlation to be the most accurate. You can find how we did this in the full code."
},
{
"code": null,
"e": 7302,
"s": 6954,
"text": "Correlation simply determines the marginal connectivity between pairwise regions of interest. Nilearn has a built-in method for computing the correlation matrices, the ConnectivityMeasure function. We only need to specify the kind of functional connectivity we are interested in and then fit the time-series data we extracted in the previous step."
},
{
"code": null,
"e": 7694,
"s": 7302,
"text": "from nilearn.connectome import ConnectivityMeasurecorrelation_measure = ConnectivityMeasure(kind=’correlation’)correlation_matrices = correlation_measure.fit_transform(subjects)for i in range(40): plt.figure(figsize=(8,6)) plt.imshow(correlation_matrices[i], vmax=.20, vmin=-.20, cmap=’RdBu_r’) plt.colorbar() plt.title(‘Connectivity matrix of subject {} with label {}’.format(i, labels[i]))"
},
{
"code": null,
"e": 7981,
"s": 7694,
"text": "We now have our connectivity matrices for all subjects, but let’s see what the average connectivity across all looks like. We split the matrices into those of treatment versus control subjects for comparison. This comparison is what our neural network model will use for classification."
},
{
"code": null,
"e": 9097,
"s": 7981,
"text": "#Separating the correlation matrices between treatment and control subjectsadhd_correlations = []control_correlations = []for i in range(40): if labels[i] == 1: adhd_correlations.append(correlation_matrices[i]) else: control_correlations.append(correlation_matrices[i])#Getting the mean correlation matrix across all treatment subjectsmean_correlations_adhd = np.mean(adhd_correlations, axis=0).reshape(time_series.shape[-1], time_series.shape[-1])#Getting the mean correlation matrix across all control subjectsmean_correlations_control = np.mean(control_correlations, axis=0).reshape(time_series.shape[-1], time_series.shape[-1])#Visualizing the mean correlationplotting.plot_matrix(mean_correlations_adhd, vmax=1, vmin=-1, colorbar=True, title='Correlation between 20 regions for ADHD')plotting.plot_matrix(mean_correlations_control, vmax=1, vmin=-1, colorbar=True, title='Correlation between 20 regions for controls')"
},
{
"code": null,
"e": 9375,
"s": 9097,
"text": "We can see that the connections are not particularly strong for either group (the diagonal line can be ignored as it shows correlation with itself and, thus, always equals to 1). To better visualize the connections and the differences, we can project these back onto the brain."
},
{
"code": null,
"e": 9923,
"s": 9375,
"text": "#Getting the center coordinates from the component decomposition to use as atlas labelscoords = plotting.find_probabilistic_atlas_cut_coords(components_img)#Plotting the connectome with 80% edge strength in the connectivityplotting.plot_connectome(mean_correlations_adhd, coords, edge_threshold=\"80%\", title='Correlation between 20 regions for ADHD')plotting.plot_connectome(mean_correlations_control, coords, edge_threshold=\"80%\", title='Correlation between 20 regions for controls')plotting.show()"
},
{
"code": null,
"e": 10027,
"s": 9923,
"text": "This gives us a nice connectome, the brain map of all the connections for 20 regions we are looking at."
},
{
"code": null,
"e": 10855,
"s": 10027,
"text": "The resulting ADHD connections do not seem as dense, compared to the control ones, which might be related to the notion of reduced functional connectivity associated with ADHD (Yang et al., 2011). In line with some of the previous research (Tomasi & Volkow, 2012), for ADHA subjects, we notice fewer connections in the superior parietal cortex (the upper right part in the first coronal graph) which is thought to be implicated in attention. There also seem to be fewer connections in the DMN, which we visualized before, a network active during rest and associated with the “self” — one that is suggested to be altered in ADHD (Mowinckel et al., 2017). These, albeit small differences, suggest that it should not be impossible to classify between treatments and controls using correlation matrices in our neural network model."
},
{
"code": null,
"e": 11008,
"s": 10855,
"text": "If you would like to see an interactive visualization of the connectome, you can run the line below. Otherwise, we are ready to move on to the modeling!"
},
{
"code": null,
"e": 11213,
"s": 11008,
"text": "#Creating the interactive visualizationview = plotting.view_connectome(mean_correlations, coords, edge_threshold='80%')#To display in the cell belowview#To display in a different tabview.open_in_browser()"
},
{
"code": null,
"e": 11373,
"s": 11213,
"text": "Now that we have our correlation matrices providing a vectorized measure of functional connectivity, we can use these as the input data for our neural network."
},
{
"code": null,
"e": 11463,
"s": 11373,
"text": "Before we build our model, we should split the data for training (70%) and testing (30%):"
},
{
"code": null,
"e": 11612,
"s": 11463,
"text": "from sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(correlation_matrices, labels, test_size=0.3)"
},
{
"code": null,
"e": 11999,
"s": 11612,
"text": "Our neural network can be anything from one layer to five. After playing with a few different architectures, we settled on a Sequential model made up of four Dense layers. This tutorial assumes you know what these mean, so we will not go into all the details and give a brief overview of the overall architecture instead. If you need a refresher on the topic, this blog is a good start."
},
{
"code": null,
"e": 12803,
"s": 11999,
"text": "import kerasfrom keras.models import Sequentialfrom keras.layers import Densefrom keras.optimizers import Adamclassifier = Sequential()#First Hidden Layerclassifier.add(Dense(32, activation=’tanh’, kernel_initializer=’random_normal’, input_shape=connectivity_biomarkers[‘correlation’].shape[1:]))#Second Hidden Layerclassifier.add(Dense(16, activation=’relu’, kernel_initializer=’random_normal’))#Third Hidden Layerclassifier.add(Dense(16, activation=’relu’, kernel_initializer=’random_normal’))#Output Layerclassifier.add(Dense(1, activation=’sigmoid’, kernel_initializer=’random_normal’))#Compiling the modelclassifier.compile(optimizer = Adam(lr =.0001),loss='binary_crossentropy', metrics =['accuracy'])#Fitting the modelclassifier.fit(np.array(X_train),np.array(y_train), batch_size=32, epochs=100)"
},
{
"code": null,
"e": 12896,
"s": 12803,
"text": "And this is it! We built a simple neural network. Let’s recap the architecture of our model:"
},
{
"code": null,
"e": 12982,
"s": 12896,
"text": "We use the Sequential model so that we can simply build layers on top of one another;"
},
{
"code": null,
"e": 13862,
"s": 12982,
"text": "We choose Dense layers which are simple layers in a neural network — you can think of these as linear models taking multiple inputs and producing single output. We use 4 of these as we are dealing with functions that are not necessarily linearly separable, but, since we are essentially only interested in a binary classification, you don’t have to use 4, and might get away with fewer — this is what seemed to work best in terms of model’s predictive power. For each layer, we specific different numbers of nodes (32, 16, 16, 1) which is also mostly trivial in this example model. The only rule of thumb is to start with the number smaller or equal to the length of your input; in our case, we have 40 matrices, so 32 seems appropriate. The final output layer should be related to the number of output categories. Since this is a binary classification problem, we need to use 1."
},
{
"code": null,
"e": 14649,
"s": 13862,
"text": "For activation functions (which basically do the input-output signal conversion), we use Tanh, ReLu, and Sigmoid. Tanh and Sigmoid are used for the first and last layers, respectively, and not in the hidden ones. This is because both can be characterized by the vanishing gradient problem meaning they would output a zero gradient if the input is higher (which we are not sure of, so better be safe). We could have just used ReLu functions which overcome this problem due to its simplicity (it gives x as output if x is positive and 0 if otherwise), but these are better avoided in non-hidden layers because they might product no gradient, or “dead” neurons. The current architecture (Tanh, ReLu, ReLu, Sigmoid) seemed to, again, result in most optimal evaluation metrics like accuracy;"
},
{
"code": null,
"e": 14968,
"s": 14649,
"text": "Finally, we use an Adam optimizer and a binary cross-entropy loss function. Adam is considered to be a good option, especially for noisy data. Binary cross-entropy is a usual choice for classification problems since it is independent for each class and ensures that one output vector is unaffected by other components."
},
{
"code": null,
"e": 15137,
"s": 14968,
"text": "Now, there’s only one thing left to do — see how our model performs. We will use accuracy as our main metric as it represents the proportion of correct classifications."
},
{
"code": null,
"e": 15172,
"s": 15137,
"text": "Let’s start with the training set:"
},
{
"code": null,
"e": 15251,
"s": 15172,
"text": "eval_model=classifier.evaluate(np.array(X_train), np.array(y_train))eval_model"
},
{
"code": null,
"e": 15336,
"s": 15251,
"text": "Hooray, the accuracy on our training data is 1. Onto the scary part of testing data."
},
{
"code": null,
"e": 15568,
"s": 15336,
"text": "y_pred=classifier.predict(X_test,batch_size=32)y_pred =(y_pred>0.5)from sklearn.metrics import confusion_matrix, classification_reportcm = confusion_matrix(y_test, y_pred)print(cm)cr = classification_report(y_test, y_pred)print(cr)"
},
{
"code": null,
"e": 16016,
"s": 15568,
"text": "The overall accuracy of our classification is 75% which is not terrible but could be better. We also only got 2 false negatives and 1 false positive. It’s now up to you to play with the simple model you build in this step-by-step guide! A disclaimer that the model in the corresponding Jupyter notebook has a much lower accuracy than reported here likely due to random processes — all the more reason to experiment and come up with a better model."
},
{
"code": null,
"e": 16641,
"s": 16016,
"text": "This tutorial explored how functional connectivity data can be used to screen for ADHD. We created a neural network based on functional connectivity coefficients for different regions to do so. As we have seen through the component analysis and connectome plots, the differences between treatment and control samples are not prominent which could explain the lower accuracy of our screening. More work with larger datasets should be done to see if there is a more consistent pattern within functional connectivity in ADHD. It could be also interesting to examine a higher number of regions and components in future analyses."
},
{
"code": null,
"e": 16943,
"s": 16641,
"text": "Hopefully, this has given you some idea of how you can use machine learning for functional connectivity analysis of fMRI data. We (conveniently) used preprocessed data, but if you want to learn more about the preprocessing step of fMRI data analysis, check out my other tutorial here. Until next time!"
},
{
"code": null,
"e": 17146,
"s": 16943,
"text": "Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., ... & Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in neuroinformatics, 8, 14."
},
{
"code": null,
"e": 17325,
"s": 17146,
"text": "Du, Y., Fu, Z., & Calhoun, V. D. (2018). Classification and prediction of brain disorders using functional connectivity: promising but challenging. Frontiers in neuroscience, 12."
},
{
"code": null,
"e": 17583,
"s": 17325,
"text": "Mowinckel, A. M., Alnæs, D., Pedersen, M. L., Ziegler, S., Fredriksen, M., Kaufmann, T., ... & Biele, G. (2017). Increased default-mode variability is related to reduced task-performance and is evident in adults with ADHD. NeuroImage: Clinical, 16, 369–382."
},
{
"code": null,
"e": 17793,
"s": 17583,
"text": "Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. Journal of machine learning research, 12(Oct), 2825–2830."
},
{
"code": null,
"e": 17960,
"s": 17793,
"text": "Tomasi, D., & Volkow, N. D. (2012). Abnormal functional connectivity in children with attention-deficit/hyperactivity disorder. Biological psychiatry, 71(5), 443–450."
},
{
"code": null,
"e": 18146,
"s": 17960,
"text": "Varoquaux, G., Sadaghiani, S., Pinel, P., Kleinschmidt, A., Poline, J. B., & Thirion, B. (2010). A group model for stable multi-subject ICA on fMRI datasets. Neuroimage, 51(1), 288–299."
}
] |
Primitive data type vs. Object data type in Java with Examples - GeeksforGeeks
|
22 Mar, 2021
Primitive Data Type: In Java, the primitive data types are predefined by the language. They specify the size and type of standard values. Java has 8 primitive data types, namely byte, short, int, long, float, double, char and boolean. When a primitive data type is stored, its value is placed on the stack. When such a variable is copied, an independent copy of the value is created, and changes made to the copy are not reflected in the original variable. Here is a Java program to demonstrate all the primitive data types in Java.
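The program referred to above is not reproduced in this text; a minimal sketch along the same lines (the class name and sample values are illustrative) could be:

```java
// Demonstrates Java's 8 primitive data types.
public class PrimitivesDemo {
    public static void main(String[] args) {
        byte b = 100;          // 8-bit signed integer
        short s = 20000;       // 16-bit signed integer
        int i = 100000;        // 32-bit signed integer
        long l = 10000000000L; // 64-bit signed integer
        float f = 3.14f;       // 32-bit floating point
        double d = 3.14159265; // 64-bit floating point
        char c = 'A';          // 16-bit Unicode character
        boolean flag = true;   // true/false

        System.out.println("byte: " + b + ", short: " + s + ", int: " + i);
        System.out.println("long: " + l + ", float: " + f + ", double: " + d);
        System.out.println("char: " + c + ", boolean: " + flag);
    }
}
```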
Object Data Type: These are also referred to as Non-primitive or Reference Data Types. They are so called because they refer to objects. Unlike the primitive data types, the non-primitive ones are created by the users in Java. Examples include arrays, strings, classes and interfaces. When a reference variable is stored, the reference itself is kept on the stack while the object it points to lives on the heap. When an object variable is copied, both variables end up pointing to the same object on the heap, so a change made through either variable is visible through both. Here is a Java program to demonstrate arrays (an object data type) in Java.
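That program is likewise not reproduced here; a minimal sketch of the shared-reference behaviour of arrays (the class and method names are illustrative) could be:

```java
import java.util.Arrays;

// Demonstrates that arrays are reference types: copying the variable
// copies the reference, so both names point at the same heap object.
public class ArrayDemo {
    static int[] copyReference(int[] src) {
        return src; // no new array is created; the reference is shared
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        int[] b = copyReference(a);
        b[0] = 99;                              // modifies the shared object
        System.out.println(Arrays.toString(a)); // change is visible through a
    }
}
```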
Difference between the primitive and object data types in Java:
Now let’s look at a program that demonstrates the difference between the primitive and object data types in Java.
Java
import java.lang.*;
import java.util.*;

class GeeksForGeeks {
    public static void main(String ar[])
    {
        System.out.println("PRIMITIVE DATA TYPES\n");
        int x = 10;
        int y = x;
        System.out.print("Initially: ");
        System.out.println("x = " + x + ", y = " + y);

        // Here the change in the value of y
        // will not affect the value of x
        y = 30;
        System.out.print("After changing y to 30: ");
        System.out.println("x = " + x + ", y = " + y);
        System.out.println(
            "**Only value of y is affected here "
            + "because of Primitive Data Type\n");

        System.out.println("REFERENCE DATA TYPES\n");
        int[] c = { 10, 20, 30, 40 };

        // Here complete reference of c is copied to d
        // and both point to same memory in Heap
        int[] d = c;
        System.out.println("Initially");
        System.out.println("Array c: " + Arrays.toString(c));
        System.out.println("Array d: " + Arrays.toString(d));

        // Modifying the value at
        // index 1 to 50 in array d
        System.out.println("\nModifying the value at "
                           + "index 1 to 50 in array d\n");
        d[1] = 50;
        System.out.println("After modification");
        System.out.println("Array c: " + Arrays.toString(c));
        System.out.println("Array d: " + Arrays.toString(d));
        System.out.println(
            "**Here value of c[1] is also affected "
            + "because of Reference Data Type\n");
    }
}
PRIMITIVE DATA TYPES
Initially: x = 10, y = 10
After changing y to 30: x = 10, y = 30
**Only value of y is affected here because of Primitive Data Type
REFERENCE DATA TYPES
Initially
Array c: [10, 20, 30, 40]
Array d: [10, 20, 30, 40]
Modifying the value at index 1 to 50 in array d
After modification
Array c: [10, 50, 30, 40]
Array d: [10, 50, 30, 40]
**Here value of c[1] is also affected because of Reference Data Type
Let’s look at the difference between the primitive and object data type in a tabular manner.
Data Types
java-basics
Difference Between
Java
Technical Scripter
Java
|
[
{
"code": null,
"e": 24378,
"s": 24350,
"text": "\n22 Mar, 2021"
},
{
"code": null,
"e": 24949,
"s": 24378,
"text": "Primitive Data Type: In Java, the primitive data types are the predefined data types of Java. They specify the size and type of any standard values. Java has 8 primitive data types namely byte, short, int, long, float, double, char and boolean. When a primitive data type is stored, it is the stack that the values will be assigned. When a variable is copied then another copy of the variable is created and changes made to the copied variable will not reflect changes in the original variable. Here is a Java program to demonstrate all the primitive data types in Java."
},
{
"code": null,
"e": 25655,
"s": 24949,
"text": "Object Data Type: These are also referred to as Non-primitive or Reference Data Type. They are so-called because they refer to any particular objects. Unlike the primitive data types, the non-primitive ones are created by the users in Java. Examples include arrays, strings, classes, interfaces etc. When the reference variables will be stored, the variable will be stored in the stack and the original object will be stored in the heap. In Object data type although two copies will be created they both will point to the same variable in the heap, hence changes made to any variable will reflect the change in both the variables. Here is a Java program to demonstrate arrays(an object data type) in Java."
},
{
"code": null,
"e": 25720,
"s": 25655,
"text": "Difference between the primitive and object data types in Java: "
},
{
"code": null,
"e": 25835,
"s": 25720,
"text": "Now let’s look at a program that demonstrates the difference between the primitive and object data types in Java. "
},
{
"code": null,
"e": 25840,
"s": 25835,
"text": "Java"
},
{
"code": "import java.lang.*;import java.util.*; class GeeksForGeeks { public static void main(String ar[]) { System.out.println(\"PRIMITIVE DATA TYPES\\n\"); int x = 10; int y = x; System.out.print(\"Initially: \"); System.out.println(\"x = \" + x + \", y = \" + y); // Here the change in the value of y // will not affect the value of x y = 30; System.out.print(\"After changing y to 30: \"); System.out.println(\"x = \" + x + \", y = \" + y); System.out.println( \"**Only value of y is affected here \" + \"because of Primitive Data Type\\n\"); System.out.println(\"REFERENCE DATA TYPES\\n\"); int[] c = { 10, 20, 30, 40 }; // Here complete reference of c is copied to d // and both point to same memory in Heap int[] d = c; System.out.println(\"Initially\"); System.out.println(\"Array c: \" + Arrays.toString(c)); System.out.println(\"Array d: \" + Arrays.toString(d)); // Modifying the value at // index 1 to 50 in array d System.out.println(\"\\nModifying the value at \" + \"index 1 to 50 in array d\\n\"); d[1] = 50; System.out.println(\"After modification\"); System.out.println(\"Array c: \" + Arrays.toString(c)); System.out.println(\"Array d: \" + Arrays.toString(d)); System.out.println( \"**Here value of c[1] is also affected \" + \"because of Reference Data Type\\n\"); }}",
"e": 27453,
"s": 25840,
"text": null
},
{
"code": null,
"e": 27883,
"s": 27453,
"text": "PRIMITIVE DATA TYPES\n\nInitially: x = 10, y = 10\nAfter changing y to 30: x = 10, y = 30\n**Only value of y is affected here because of Primitive Data Type\n\nREFERENCE DATA TYPES\n\nInitially\nArray c: [10, 20, 30, 40]\nArray d: [10, 20, 30, 40]\n\nModifying the value at index 1 to 50 in array d\n\nAfter modification\nArray c: [10, 50, 30, 40]\nArray d: [10, 50, 30, 40]\n**Here value of c[1] is also affected because of Reference Data Type\n\n"
},
{
"code": null,
"e": 27978,
"s": 27883,
"text": "Let’s look at the difference between the primitive and object data type in a tabular manner. "
},
{
"code": null,
"e": 27984,
"s": 27978,
"text": "bxu66"
},
{
"code": null,
"e": 27995,
"s": 27984,
"text": "Data Types"
},
{
"code": null,
"e": 28007,
"s": 27995,
"text": "java-basics"
},
{
"code": null,
"e": 28026,
"s": 28007,
"text": "Difference Between"
},
{
"code": null,
"e": 28031,
"s": 28026,
"text": "Java"
},
{
"code": null,
"e": 28050,
"s": 28031,
"text": "Technical Scripter"
},
{
"code": null,
"e": 28055,
"s": 28050,
"text": "Java"
},
{
"code": null,
"e": 28153,
"s": 28055,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28214,
"s": 28153,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28252,
"s": 28214,
"text": "Difference between Process and Thread"
},
{
"code": null,
"e": 28284,
"s": 28252,
"text": "Stack vs Heap Memory Allocation"
},
{
"code": null,
"e": 28352,
"s": 28284,
"text": "Difference Between Method Overloading and Method Overriding in Java"
},
{
"code": null,
"e": 28389,
"s": 28352,
"text": "Differences between JDK, JRE and JVM"
},
{
"code": null,
"e": 28425,
"s": 28389,
"text": "Arrays.sort() in Java with examples"
},
{
"code": null,
"e": 28450,
"s": 28425,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 28482,
"s": 28450,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 28533,
"s": 28482,
"text": "Object Oriented Programming (OOPs) Concept in Java"
}
] |
Generating password in Java
|
Generating a temporary password is now a requirement on almost every website nowadays. In case a user forgets the password, the system generates a random password adhering to the password policy of the company. The following example generates a random password adhering to the following conditions −
It should contain at least one capital case letter.
It should contain at least one lower-case letter.
It should contain at least one number.
Length should be 8 characters.
It should contain one of the following special characters: @, $, #, !.
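A policy like this can also be checked after the fact. The validator below is a sketch — the class name, method name and character sets mirror the example that follows but are otherwise illustrative:

```java
// Validates a candidate password against the policy described above:
// exactly 8 characters, at least one upper-case letter, one lower-case
// letter, one digit and one of ! @ # $, with no other characters allowed.
public class PasswordPolicy {
    public static boolean isValid(String password) {
        if (password == null || password.length() != 8) return false;
        boolean upper = false, lower = false, digit = false, special = false;
        for (char c : password.toCharArray()) {
            if (Character.isUpperCase(c)) upper = true;
            else if (Character.isLowerCase(c)) lower = true;
            else if (Character.isDigit(c)) digit = true;
            else if ("!@#$".indexOf(c) >= 0) special = true;
            else return false; // character outside the allowed sets
        }
        return upper && lower && digit && special;
    }

    public static void main(String[] args) {
        System.out.println(isValid("cF#0KYbY")); // meets every condition
        System.out.println(isValid("password")); // no upper case, digit or special
    }
}
```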
import java.util.Random;
public class Tester{
public static void main(String[] args) {
System.out.println(generatePassword(8));
}
private static char[] generatePassword(int length) {
String capitalCaseLetters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
String lowerCaseLetters = "abcdefghijklmnopqrstuvwxyz";
String specialCharacters = "!@#$";
String numbers = "1234567890";
String combinedChars = capitalCaseLetters + lowerCaseLetters + specialCharacters + numbers;
Random random = new Random();
char[] password = new char[length];
password[0] = lowerCaseLetters.charAt(random.nextInt(lowerCaseLetters.length()));
password[1] = capitalCaseLetters.charAt(random.nextInt(capitalCaseLetters.length()));
password[2] = specialCharacters.charAt(random.nextInt(specialCharacters.length()));
password[3] = numbers.charAt(random.nextInt(numbers.length()));
for(int i = 4; i< length ; i++) {
password[i] = combinedChars.charAt(random.nextInt(combinedChars.length()));
}
return password;
}
}
cF#0KYbY
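One caveat: java.util.Random is not cryptographically strong, so when the generated password must be unpredictable, java.security.SecureRandom is the usual drop-in replacement. A sketch of the same generator with only the random source changed (the class name is illustrative):

```java
import java.security.SecureRandom;

public class SecurePasswordTester {
    // Same generation scheme as above, but backed by a CSPRNG.
    private static final String UPPER = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    private static final String LOWER = "abcdefghijklmnopqrstuvwxyz";
    private static final String SPECIAL = "!@#$";
    private static final String DIGITS = "1234567890";

    public static char[] generatePassword(int length) {
        String combined = UPPER + LOWER + SPECIAL + DIGITS;
        SecureRandom random = new SecureRandom(); // cryptographically strong
        char[] password = new char[length];
        // Guarantee one character from each required category.
        password[0] = LOWER.charAt(random.nextInt(LOWER.length()));
        password[1] = UPPER.charAt(random.nextInt(UPPER.length()));
        password[2] = SPECIAL.charAt(random.nextInt(SPECIAL.length()));
        password[3] = DIGITS.charAt(random.nextInt(DIGITS.length()));
        // Fill the rest from the combined pool.
        for (int i = 4; i < length; i++) {
            password[i] = combined.charAt(random.nextInt(combined.length()));
        }
        return password;
    }

    public static void main(String[] args) {
        System.out.println(generatePassword(8));
    }
}
```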
|
[
{
"code": null,
"e": 1344,
"s": 1062,
"text": "Generate temporary password is now a requirement on almost every website now-a-days. In case a user forgets the password, system generates a random password adhering to password policy of the company. Following example generates a random password adhering to following conditions −"
},
{
"code": null,
"e": 1396,
"s": 1344,
"text": "It should contain at least one capital case letter."
},
{
"code": null,
"e": 1448,
"s": 1396,
"text": "It should contain at least one capital case letter."
},
{
"code": null,
"e": 1498,
"s": 1448,
"text": "It should contain at least one lower-case letter."
},
{
"code": null,
"e": 1548,
"s": 1498,
"text": "It should contain at least one lower-case letter."
},
{
"code": null,
"e": 1587,
"s": 1548,
"text": "It should contain at least one number."
},
{
"code": null,
"e": 1626,
"s": 1587,
"text": "It should contain at least one number."
},
{
"code": null,
"e": 1657,
"s": 1626,
"text": "Length should be 8 characters."
},
{
"code": null,
"e": 1688,
"s": 1657,
"text": "Length should be 8 characters."
},
{
"code": null,
"e": 1759,
"s": 1688,
"text": "It should contain one of the following special characters: @, $, #, !."
},
{
"code": null,
"e": 1830,
"s": 1759,
"text": "It should contain one of the following special characters: @, $, #, !."
},
{
"code": null,
"e": 2918,
"s": 1830,
"text": "import java.util.Random;\n\npublic class Tester{\n public static void main(String[] args) {\n System.out.println(generatePassword(8));\n }\n\n private static char[] generatePassword(int length) {\n String capitalCaseLetters = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\";\n String lowerCaseLetters = \"abcdefghijklmnopqrstuvwxyz\";\n String specialCharacters = \"!@#$\";\n String numbers = \"1234567890\";\n String combinedChars = capitalCaseLetters + lowerCaseLetters + specialCharacters + numbers;\n Random random = new Random();\n char[] password = new char[length];\n\n password[0] = lowerCaseLetters.charAt(random.nextInt(lowerCaseLetters.length()));\n password[1] = capitalCaseLetters.charAt(random.nextInt(capitalCaseLetters.length()));\n password[2] = specialCharacters.charAt(random.nextInt(specialCharacters.length()));\n password[3] = numbers.charAt(random.nextInt(numbers.length()));\n \n for(int i = 4; i< length ; i++) {\n password[i] = combinedChars.charAt(random.nextInt(combinedChars.length()));\n }\n return password;\n }\n}"
},
{
"code": null,
"e": 2927,
"s": 2918,
"text": "cF#0KYbY"
}
] |
Maximum Likelihood Estimation in R | by Andrew Hetherington | Towards Data Science
|
Often, you’ll have some level of intuition — or perhaps concrete evidence — to suggest that a set of observations has been generated by a particular statistical distribution. Similar phenomena to the one you are modelling may have been shown to be explained well by a certain distribution. The setup of the situation or problem you are investigating may naturally suggest a family of distributions to try. Or maybe you just want to have a bit of fun by fitting your data to some obscure model just to see what happens (if you are challenged on this, tell people you’re doing Exploratory Data Analysis and that you don’t like to be disturbed when you’re in your zone).
Now, there are many ways of estimating the parameters of your chosen model from the data you have. The simplest of these is the method of moments — an effective tool, but one not without its disadvantages (notably, these estimates are often biased).
Another method you may want to consider is Maximum Likelihood Estimation (MLE), which tends to produce better (ie more unbiased) estimates for model parameters. It’s a little more technical, but nothing that we can’t handle. Let’s see how it works.
The likelihood — more precisely, the likelihood function — is a function that represents how likely it is to obtain a certain set of observations from a given model. We’re considering the set of observations as fixed — they’ve happened, they’re in the past — and now we’re considering under which set of model parameters we would be most likely to observe them.
Consider an example. Let’s say we flipped a coin 100 times and observed 52 heads and 48 tails. We want to come up with a model that will predict the number of heads we’ll get if we kept flipping another 100 times.
Formalising the problem a bit, let’s think about the number of heads obtained from 100 coin flips. Given that:
there are only two possible outcomes (heads and tails),
there’s a fixed number of “trials” (100 coin flips), and that
there’s a fixed probability of “success” (ie getting a heads),
we might reasonably suggest that the situation could be modelled using a binomial distribution.
We can use R to set up the problem as follows (check out the Jupyter notebook used for this article for more detail):
# I don’t know about you but I’m feeling
set.seed(22)

# Generate an outcome, ie number of heads obtained, assuming a fair coin was used for the 100 flips
heads <- rbinom(1,100,0.5)
heads
# 52
(For the purposes of generating the data, we’ve used a 50/50 chance of getting a heads/tails, although we are going to pretend that we don’t know this for the time being. For almost all real world problems we don’t have access to this kind of information on the processes that generate the data we’re looking at — which is entirely why we are motivated to estimate these parameters!)
Under our formulation of the heads/tails process as a binomial one, we are supposing that there is a probability p of obtaining a heads for each coin flip. Extending this, the probability of obtaining 52 heads after 100 flips is given by:

P(52 heads) = choose(100, 52) * p^52 * (1 - p)^48
This probability is our likelihood function — it allows us to calculate the probability, ie how likely it is, that our set of data is observed given a probability of heads p. You may be able to guess the next step, given the name of this technique — we must find the value of p that maximises this likelihood function.
We can easily calculate this probability in two different ways in R:
# To illustrate, let's find the likelihood of obtaining these results if p was 0.6—that is, if our coin was biased in such a way to show heads 60% of the time.
biased_prob <- 0.6

# Explicit calculation
choose(100,52)*(biased_prob**52)*(1-biased_prob)**48
# 0.0214877567069514

# Using R's dbinom function (density function for a given binomial distribution)
dbinom(heads,100,biased_prob)
# 0.0214877567069514
Back to our problem — we want to know the value of p that our data implies. For simple situations like the one under consideration, it’s possible to differentiate the likelihood function with respect to the parameter being estimated and equate the resulting expression to zero in order to solve for the MLE estimate of p. However, for more complicated (and realistic) processes, you will probably have to resort to doing it numerically.
Luckily, this is a breeze with R as well! Our approach will be as follows:
Define a function that will calculate the likelihood function for a given value of p; then
Search for the value of p that results in the highest likelihood.
Starting with the first step:
likelihood <- function(p){
  dbinom(heads, 100, p)
}

# Test that our function gives the same result as in our earlier example
likelihood(biased_prob)
# 0.0214877567069513
And now considering the second step. There are many different ways of optimising (ie maximising or minimising) functions in R — the one we’ll consider here makes use of the nlm function, which stands for non-linear minimisation. If you give nlm a function and indicate which parameter you want it to vary, it will follow an algorithm and work iteratively until it finds the value of that parameter which minimises the function’s value.
You may be concerned that I’ve introduced a tool to minimise a function’s value when we really are looking to maximise — this is maximum likelihood estimation, after all! Fortunately, maximising a function is equivalent to minimising the function multiplied by minus one. If we create a new function that simply produces the likelihood multiplied by minus one, then the parameter that minimises the value of this new function will be exactly the same as the parameter that maximises our original likelihood.
As such, a small adjustment to our function from before is in order:
negative_likelihood <- function(p){
  dbinom(heads, 100, p)*-1
}

# Test that our function is behaving as expected
negative_likelihood(biased_prob)
# -0.0214877567069513
Excellent — we’re now ready to find our MLE value for p.
nlm(negative_likelihood, 0.5, stepmax=0.5)

# $minimum
# -0.07965256
# $estimate
# 0.5199995
# $gradient
# -2.775558e-11
# $code
# 1
# $iterations
# 4
The nlm function has returned some information about its quest to find the MLE estimate of p.
$minimum denotes the minimum value of the negative likelihood that was found — so the maximum likelihood is just this value multiplied by minus one, ie 0.07965...;
$estimate is our MLE estimate of p;
$gradient is the gradient of the likelihood function in the vicinity of our estimate of p — we would expect this to be very close to zero for a successful estimate;
$code explains to us why the minimisation algorithm was terminated — a value of 1 indicates that the minimisation is likely to have been successful; and
$iterations tells us the number of iterations that nlm had to go through to obtain this optimal value of the parameter.
This information is all nice to know — but what we really care about is that it’s telling us that our MLE estimate of p is 0.52. We can intuitively tell that this is correct — what coin would be more likely to give us 52 heads out of 100 flips than one that lands on heads 52% of the time?
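As an aside (not part of the original R workflow), the same maximisation can be cross-checked in Python using nothing but the standard library; for a one-parameter problem, a crude grid search over the binomial log-likelihood is enough:

```python
import math

heads, n = 52, 100

def log_likelihood(p):
    # Binomial log-likelihood: log C(n, k) + k*log(p) + (n - k)*log(1 - p)
    return (math.lgamma(n + 1) - math.lgamma(heads + 1) - math.lgamma(n - heads + 1)
            + heads * math.log(p) + (n - heads) * math.log(1 - p))

# Evaluate the log-likelihood on a fine grid of candidate values for p
candidates = [i / 1000 for i in range(1, 1000)]
p_hat = max(candidates, key=log_likelihood)
print(p_hat)  # 0.52
```

Working with the log-likelihood rather than the likelihood itself is numerically safer, and since the logarithm is monotonic, the maximising value of p is unchanged.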
In this rather trivial example we’ve looked at today, it may seem like we’ve put ourselves through a lot of hassle to arrive at a fairly obvious conclusion. But consider a problem where you have a more complicated distribution and multiple parameters to optimise — the problem of maximum likelihood estimation becomes exponentially more difficult — fortunately, the process that we’ve explored today scales up well to these more complicated problems.
Ultimately, you better have a good grasp of MLE estimation if you want to build robust models — and in my estimation, you’ve just taken another step towards maximising your chances of success — or would you prefer to think of it as minimising your probability of failure?
Andrew Hetherington is an actuary-in-training and data enthusiast based in London, UK.
Check out my website.
Connect with me on LinkedIn.
See what I’m tinkering with on GitHub.
The notebook used to produce the work in this article can be found here.
Coin photo by 🇨🇭 Claudio Schwarz | @purzlbaum on Unsplash.
Installing MySQL on Unix/Linux Using Generic Binaries
|
Oracle comes with a set of binary distributions of MySQL. This includes generic binary distributions in the form of compressed tar files (files that have a .tar.xz extension) for many platforms, and binaries in platform-specific package formats for specific platforms.
MySQL compressed tar file binary distributions have names in the format ‘mysql-VERSION-OS.tar.xz’, where VERSION refers to a number and OS indicates the type of operating system on which the distribution is required to be used.
To install a compressed tar file binary distribution, the installation needs to be unpacked into a location that is chosen by the user. Debug versions of the mysqld binary are available as mysqld-debug.
If the user’s own debug version needs to be used to compile MySQL from a source distribution, appropriate configuration options need to be used.
To install and use a MySQL binary distribution, the below shown command sequence needs to be used −
shell> groupadd mysql
shell> useradd -r -g mysql -s /bin/false mysql
shell> cd /usr/local
shell> tar xvf /path/to/mysql-VERSION-OS.tar.xz
shell> ln -s full-path-to-mysql-VERSION-OS mysql
shell> cd mysql
shell> mkdir mysql-files
shell> chown mysql:mysql mysql-files
shell> chmod 750 mysql-files
shell> bin/mysqld --initialize --user=mysql
shell> bin/mysql_ssl_rsa_setup
shell> bin/mysqld_safe --user=mysql &
# Below command is optional
shell> cp support-files/mysql.server /etc/init.d/mysql.server
The above assumes that the user has root (administrator) access to their system.
The mysql-files directory provides an easy location to use as the value for the secure_file_priv system variable. This limits the import and export operations to a specific directory only. See Section 5.1.8, “Server System Variables”.
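For instance, pointing secure_file_priv at the mysql-files directory created above could look like the following my.cnf fragment (a sketch only; the exact file location and option placement depend on your installation):

```ini
[mysqld]
secure_file_priv=/usr/local/mysql/mysql-files
```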
The steps are briefed as shown below −
Create a mysql User and Group. It can be done using below commands −
shell> groupadd mysql
shell> useradd -r -g mysql -s /bin/false mysql
Obtain and unpack the distribution. It can be done using below commands −
shell> cd /usr/local
Unpack the distribution, that will create the installation directory. The ‘tar’ can uncompress and unpack the distribution if it has ’z’ option support. It can be done using below commands −
shell> tar xvf /path/to/mysql-VERSION-OS.tar.xz
The tar command creates a directory named mysql-VERSION-OS.
The tar command can be replaced with below command to uncompress and extract the distribution −
shell> xz -dc /path/to/mysql-VERSION-OS.tar.xz | tar x
A symbolic link can be created to the installation directory that has been created by tar −
shell> ln -s full-path-to-mysql-VERSION-OS mysql
The ‘ln’ command creates a symbolic link to the installation directory. This enables the user to refer more easily to the path as /usr/local/mysql. The /usr/local/mysql/bin directory can be added to the user’s PATH variable using the below command −
shell> export PATH=$PATH:/usr/local/mysql/bin
Dynamic Topic Modeling with BERTopic | by Sejal Dua | Towards Data Science
|
This marks the last article of a three-part investigation to better understand my Medium suggested reads by way of an NLP technique called topic modeling. In this article, I will introduce what I believe to be the most powerful topic modeling algorithm in the field today: BERTopic. I will also attempt to illustrate how temporally-aware interactive visualizations can comprise vast amounts of information without needing to manually inspect any documents or topics at a granular level.
Dynamic topic modeling, or the ability to monitor how the anatomy of each topic has evolved over time, is a robust and sophisticated approach to understanding a large corpus. My primary goal of this article is to highlight BERTopic’s capabilities in juxtaposition with the pain points of wordclouds that were previously discussed. I hope to prove that topic modeling can be accessible to not-so-technical people and that it is worthwhile to resist the tempting chokehold that wordclouds seem to have on the NLP space. Disclaimer: I’m very much guilty of making wordclouds to inspect whatever corpus I am working with, but I hope to discover more informative and mathematically-grounded alternatives.
BERTopic is a topic modeling technique that leverages BERT embeddings and c-TF-IDF to create dense clusters allowing for easily interpretable topics whilst keeping important words in the topic descriptions.
It was written by Maarten Grootendorst in 2020 and has steadily been garnering traction ever since. The two greatest advantages to BERTopic are arguably its straight forward out-of-the-box usability and its novel interactive visualization methods. Having an overall picture of the topics that have been learned by the model allows us to generate an internal perception of the model’s quality and the most notable themes encapsulated in the corpus.
The package can be installed via PyPI:
pip install bertopic
If you anticipate that your project will make use of the visualization options included with the BERTopic package, install it as follows:
pip install bertopic[visualization]
Embed Documents: Extract document embeddings with Sentence Transformers. Since the data we are working with are article titles, we will need to obtain sentence embeddings, which BERTopic lets us do conveniently, by employing its default sentence transformer model paraphrase-MiniLM-L6-v2.
Cluster Documents: Create groups of similar documents with UMAP (to reduce the dimensionality of embeddings) and HDBSCAN (to identify and cluster semantically similar documents)
Create Topic Representation: Extract and reduce topics with c-TF-IDF (class-based term frequency, inverse document frequency). If you are unfamiliar with TF-IDF in the first place, all you need to know in order to generally grasp what is going on here is one thing: it allows for comparing the importance of words between documents by computing the frequency of a word in a given document and also the measure of how prevalent the word is in the entire corpus. Now, if we instead treat all documents in a single cluster as a single document and then perform TF-IDF, the result would be importance scores for words within a cluster. The more important words are within a cluster, the more representative they are of that topic. Therefore, we can obtain keyword-based descriptions for each topic! This is super powerful when it comes to inferring meaning from the groupings yielded by any unsupervised clustering technique.
For more information and theoretical backing pertaining to these three algorithmic steps, please consult the author’s comprehensive guide for in-depth explanations.
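To make the third step more concrete, here is a small, self-contained illustration of the class-based TF-IDF idea using only the Python standard library and two invented clusters (this is a simplification for intuition, not BERTopic's exact c-TF-IDF formula):

```python
import math
from collections import Counter

# Two toy clusters of article titles (invented for illustration)
clusters = {
    "python": ["python tips for beginners", "writing clean python code"],
    "covid": ["coronavirus pandemic update", "covid cases rise as covid vaccine rolls out"],
}

# Step 1: treat each cluster as a single concatenated document
cluster_tokens = {name: " ".join(docs).split() for name, docs in clusters.items()}

def top_terms(name, k=3):
    """Rank a cluster's words by term frequency weighted by cross-cluster rarity."""
    tf = Counter(cluster_tokens[name])
    n = len(cluster_tokens)
    def score(word):
        # df: number of clusters in which the word appears at all
        df = sum(word in tokens for tokens in cluster_tokens.values())
        return tf[word] * math.log(1 + n / df)  # simple tf-idf-style weighting
    return sorted(tf, key=score, reverse=True)[:k]

print(top_terms("python"))
print(top_terms("covid"))
```

The highest-scoring words in each cluster serve as that topic's keyword-based description, which is how keyword-based topic "names" can be read off an unsupervised clustering.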
To create a BERTopic object in Python and move onto the fun stuff (dynamic topic modeling), we just need our preprocessed list of documents. After loading in the data with pd.read_csv(), we can either write some lambda apply functions to preprocess our textual data:
df.text = df.apply(lambda row: re.sub(r"http\S+", "", row.text).lower(), 1)
df.text = df.apply(lambda row: " ".join(filter(lambda x:x[0]!="@", row.text.split())), 1)
df.text = df.apply(lambda row: " ".join(re.sub("[^a-zA-Z]+", " ", row.text).split()), 1)
... or, if it’s already cleaned, we can prepare our two variables. The most critical variable is obviously our list of documents, which we will call titles. Secondly, we will want a list of dates corresponding to each document so that we can gain insights into how our topics have shifted with respect to time.
titles = df.text.to_list()
dates = df['date'].apply(lambda x: pd.Timestamp(x)).to_list()
Next, let’s create our topic model!
from bertopic import BERTopic

topic_model = BERTopic(min_topic_size=70, n_gram_range=(1,3), verbose=True)
topics, _ = topic_model.fit_transform(titles)
We can extract the ten largest topics based on the number of documents assigned to each topic, and also preview the keyword-based “names” of our topics, as alluded to earlier:
freq = topic_model.get_topic_info()
freq.head(10)
Note that in the above data frame, topic -1 denotes the topic consisting of outlier documents which are typically ignored due to terms having a relatively high prevalence across the whole corpus and thus, low specificity toward any cohesive theme or topic.
We can also take a look at the terms which comprise a particular topic of interest:
topic_nr = freq.iloc[6]["Topic"]  # select a frequent topic
topic_model.get_topic(topic_nr)
Great! All of our least favorite words belong to Topic #5. We can see that the top three words are “coronavirus”, “covid”, and “pandemic”, but there is also an n-gram which has made it into these top 10 topic-specific terms: “covid vaccine,” and it’s a term which has more positive connotations, so it will be interesting to see at which point in time the word “vaccine” started appearing in article titles with greater frequency.
The cherry on top of this entire technique is the ability to visualize our topic model in a way that tells us enough about our data without needing to investigate the raw text itself.
If you recall from my previous topic modeling article entitled “NLP Preprocessing and Latent Dirichlet Allocation (LDA) Topic Modeling with Gensim,” there exists a Python visualization package called pyLDAvis, which enables the user to produce interactive subplots depicting the distance between topics on a 2D plane as well as the top 30 most relevant and salient terms within the topic. BERTopic has its own intertopic distance map implementation which includes a hover tooltip which reveals the number of documents designated to a particular topic and the top 5 most frequent terms within that topic. The line of code you will need to produce the following visualization is shockingly straightforward:
topic_model.visualize_topics()
The intertopic distance map above depicts five main clusters of topics among our 15 total topics. The cluster that is closest to the x-axis pertains to the general theme of Python, which makes intuitive sense based on my Medium interests and search history. The cluster in the bottom left consists of domain interests such as machine learning, deep learning, data science, tech design, etc. Gravitating toward the top left corner are articles that pertain to NLP, sentiment analysis, and Twitter data. Despite the fact that these could be labeled yet another domain interest of mine, I hypothesize that they occupy a different region of the 2D plane because I have researched these subjects with greater depth and granularity as part of my senior capstone project. The cluster in the middle bottom area near the y-axis, I presume, is characterized by less technical content, with subject lines that follow something along the lines of the templates “10 Tips to Avoid Burnout as a Software Engineer” or “What I learned from writing 100 Medium Articles in 100 Days”. These are the articles that lie at the intersection of productivity hacks and self-development wisdom, in my opinion. They don’t always teach me new things, but they certainly inspire me when I go through my Medium Digest each morning. Lastly, the rightmost topic cluster represents a potpourri of topics that do not reside within the tech space but are nevertheless important in order to consume a well-rounded selection of articles on Medium.
And now, for the main attraction, dynamic topic modeling with respect to time! In the video below, we can observe the frequency of the ten most prevalent topics in my corpus of suggested Medium articles (titles), from the start of 2020 through now.
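The article does not reproduce the calls that generate this view, but BERTopic exposes them directly. A hedged sketch, assuming the `titles` and `dates` lists prepared earlier and the fitted `topic_model` from above (method names are BERTopic's, but signatures have shifted between versions, so check your installed release):

```python
# `titles` and `dates` are the lists prepared earlier; depending on your
# BERTopic version, topics_over_time may also expect the `topics` list
# returned by fit_transform
topics_over_time = topic_model.topics_over_time(titles, dates, nr_bins=20)

# interactive frequency lines for the ten most prevalent topics
topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=10)
```

The `nr_bins` parameter controls how finely the date range is sliced; fewer bins smooth out the frequency lines, more bins surface short-lived spikes.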
We can observe that data science (topic 5) and development / coding (topic 9) have been consistent interests of mine over the course of almost 2 years. Interestingly, React apps (topic 6) were something that I read about extensively around October and November of 2020, but this interest of mine faded shortly thereafter. The backstory for this, you may be wondering? Well, I was cramming for a frontend development interview that I felt wildly unprepared for. Fun fact: Medium taught me pretty much everything I know about React. We can notice that the NLP topic (topic 8) spiked in frequency around the summer of 2020 and June of 2021, which makes a lot of sense when I reflect on what I was working on at those times. COVID-19 (topic 2) spiked in frequency around March through May of 2020, which speaks for itself I think... moving on. The potpourri topic (topic 0) had the highest frequency in early 2020, but tapered off later in the year. I guess I read a lot more non-technical content when I was stuck at home, whereas I now primarily consult Medium articles for technical purposes.
Without going into more personal reflection, I hope you get the gist of how powerful dynamic topic modeling can be. Hovering over our topic frequency line plots at any point in time brings up different topic keywords, thus making it possible to analyze how a topic has evolved in composition over time.
Though I briefly showed how to access the top keywords belonging to a particular topic and their importance scores, we can also visualize these terms and scores as bar charts.
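BERTopic ships a helper for exactly this. A brief usage sketch, assuming the fitted `topic_model` from earlier (`visualize_barchart` and its `top_n_topics` / `n_words` parameters are part of BERTopic's visualization API, but confirm against your installed version):

```python
# horizontal bar charts of the top c-TF-IDF term scores,
# one small subplot per topic
fig = topic_model.visualize_barchart(top_n_topics=8, n_words=5)
fig.show()
```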
That’s a wrap for our comprehensive tour of BERTopic! I hope you learned a thing or two about dynamic topic modeling and can see why I find this topic modeling Python package superior to other packages and techniques in this space. Again, huge shoutout to the author of BERTopic, Maarten Grootendorst, for implementing and open sourcing this awesome code.
If you want to give dynamic topic modeling a whirl, I have gone ahead and written a small Streamlit app which allows you to import a CSV file consisting of dates and documents from which you’d like to build a topic model. It then creates a BERTopic model and embeds some interactive visualizations within the app. I hope you enjoy and can play around with the app to your heart’s content. My motivation for developing this app was the following:
- make topic modeling more interactive and enjoyable and move away from it being an intimidatingly technical NLP subdomain
- make this subject matter accessible to less technical folks who may also want the opportunity to better understand the themes present in a given corpus of text
The app is accessible here: https://share.streamlit.io/sejaldua/digesting-the-digest/main/bertopic_app.py
As always, I am making my source code available in a public GitHub repository:
|
[
{
"code": null,
"e": 658,
"s": 171,
"text": "This marks the last article of a three-part investigation to better understand my Medium suggested reads by way of an NLP technique called topic modeling. In this article, I will introduce what I believe to be the most powerful topic modeling algorithm in the field today: BERTopic. I will also attempt to illustrate how temporally-aware interactive visualizations can comprise vast amounts of information without needing to manually inspect any documents or topics at a granular level."
},
{
"code": null,
"e": 1358,
"s": 658,
"text": "Dynamic topic modeling, or the ability to monitor how the anatomy of each topic has evolved over time, is a robust and sophisticated approach to understanding a large corpus. My primary goal of this article is to highlight BERTopic’s capabilities in juxtaposition with the pain points of wordclouds that were previously discussed. I hope to prove that topic modeling can be accessible to not-so-technical people and that it is worthwhile to resist the tempting chokehold that wordclouds seem to have on the NLP space. Disclaimer: I’m very much guilty of making wordclouds to inspect whatever corpus I am working with, but I hope to discover more informative and mathematically-grounded alternatives."
},
{
"code": null,
"e": 1565,
"s": 1358,
"text": "BERTopic is a topic modeling technique that leverages BERT embeddings and c-TF-IDF to create dense clusters allowing for easily interpretable topics whilst keeping important words in the topic descriptions."
},
{
"code": null,
"e": 2013,
"s": 1565,
"text": "It was written by Maarten Grootendorst in 2020 and has steadily been garnering traction ever since. The two greatest advantages to BERTopic are arguably its straight forward out-of-the-box usability and its novel interactive visualization methods. Having an overall picture of the topics that have been learned by the model allows us to generate an internal perception of the model’s quality and the most notable themes encapsulated in the corpus."
},
{
"code": null,
"e": 2052,
"s": 2013,
"text": "The package can be installed via pypi:"
},
{
"code": null,
"e": 2073,
"s": 2052,
"text": "pip install bertopic"
},
{
"code": null,
"e": 2211,
"s": 2073,
"text": "If you anticipate that your project will make use of the visualization options included with the BERTopic package, install it as follows:"
},
{
"code": null,
"e": 2247,
"s": 2211,
"text": "pip install bertopic[visualization]"
},
{
"code": null,
"e": 2536,
"s": 2247,
"text": "Embed Documents: Extract document embeddings with Sentence Transformers. Since the data we are working with are article titles, we will need to obtain sentence embeddings, which BERTopic lets us do conveniently, by employing its default sentence transformer model paraphrase-MiniLM-L6-v2."
},
{
"code": null,
"e": 2714,
"s": 2536,
"text": "Cluster Documents: Create groups of similar documents with UMAP (to reduce the dimensionality of embeddings) and HDBSCAN (to identify and cluster semantically similar documents)"
},
{
"code": null,
"e": 3636,
"s": 2714,
"text": "Create Topic Representation: Extract and reduce topics with c-TF-IDF (class-based term frequency, inverse document frequency). If you are unfamiliar with TF-IDF in the first place, all you need to know in order to generally grasp what is going on here is one thing: it allows for comparing the importance of words between documents by computing the frequency of a word in a given document and also the measure of how prevalent the word is in the entire corpus. Now, if we instead treat all documents in a single cluster as a single document and then perform TF-IDF, the result would be importance scores for words within a cluster. The more important words are within a cluster, the more representative they are of that topic. Therefore, we can obtain keyword-based descriptions for each topic! This is super powerful when it comes to inferring meaning from the groupings yielded by any unsupervised clustering technique."
},
{
"code": null,
"e": 3801,
"s": 3636,
"text": "For more information and theoretical backing pertaining to these three algorithmic steps, please consult the author’s comprehensive guide for in-depth explanations."
},
{
"code": null,
"e": 4068,
"s": 3801,
"text": "To create a BERTopic object in Python and move onto the fun stuff (dynamic topic modeling), we just need our preprocessed list of documents. After loading in the data with pd.read_csv(), we can either write some lambda apply functions to preprocess our textual data:"
},
{
"code": null,
"e": 4321,
"s": 4068,
"text": "df.text = df.apply(lambda row: re.sub(r\"http\\S+\", \"\", row.text).lower(), 1)df.text = df.apply(lambda row: \" \".join(filter(lambda x:x[0]!=\"@\", row.text.split())), 1)df.text = df.apply(lambda row: \" \".join(re.sub(\"[^a-zA-Z]+\", \" \", row.text).split()), 1)"
},
{
"code": null,
"e": 4632,
"s": 4321,
"text": "... or, if it’s already cleaned, we can prepare our two variables. The most critical variable is obviously our list of documents, which we will call titles. Secondly, we will want a list of dates corresponding to each document so that we can gain insights into how our topics have shifted with respect to time."
},
{
"code": null,
"e": 4720,
"s": 4632,
"text": "titles = df.text.to_list()dates = df['date'].apply(lambda x: pd.Timestamp(x)).to_list()"
},
{
"code": null,
"e": 4756,
"s": 4720,
"text": "Next, let’s create our topic model!"
},
{
"code": null,
"e": 4906,
"s": 4756,
"text": "from bertopic import BERTopictopic_model = BERTopic(min_topic_size=70, n_gram_range=(1,3), verbose=True)topics, _ = topic_model.fit_transform(titles)"
},
{
"code": null,
"e": 5078,
"s": 4906,
"text": "We can extract the largest ten topics based on the number of topics assigned to each topic and also preview the keyword-based “names” of our topics, as alluded to earlier:"
},
{
"code": null,
"e": 5127,
"s": 5078,
"text": "freq = topic_model.get_topic_info()freq.head(10)"
},
{
"code": null,
"e": 5384,
"s": 5127,
"text": "Note that in the above data frame, topic -1 denotes the topic consisting of outlier documents which are typically ignored due to terms having a relatively high prevalence across the whole corpus and thus, low specificity toward any cohesive theme or topic."
},
{
"code": null,
"e": 5468,
"s": 5384,
"text": "We can also take a look at the terms which comprise a particular topic of interest:"
},
{
"code": null,
"e": 5558,
"s": 5468,
"text": "topic_nr = freq.iloc[6][\"Topic\"] # select a frequent topictopic_model.get_topic(topic_nr)"
},
{
"code": null,
"e": 5989,
"s": 5558,
"text": "Great! All of our least favorite words belong to Topic #5. We can see that the top three words are “coronavirus”, “covid”, and “pandemic”, but there is also an n-gram which has made it into these top 10 topic-specific terms: “covid vaccine,” and it’s a term which has more positive connotations, so it will be interesting to see at which point in time the word “vaccine” started appearing in article titles with greater frequency."
},
{
"code": null,
"e": 6173,
"s": 5989,
"text": "The cherry on top of this entire technique is the ability to visualize our topic model in a way that tells us enough about our data without needing to investigate the raw text itself."
},
{
"code": null,
"e": 6878,
"s": 6173,
"text": "If you recall from my previous topic modeling article entitled “NLP Preprocessing and Latent Dirichlet Allocation (LDA) Topic Modeling with Gensim,” there exists a Python visualization package called pyLDAvis, which enables the user to produce interactive subplots depicting the distance between topics on a 2D plane as well as the top 30 most relevant and salient terms within the topic. BERTopic has its own intertopic distance map implementation which includes a hover tooltip which reveals the number of documents designated to a particular topic and the top 5 most frequent terms within that topic. The line of code you will need to produce the following visualization is shockingly straightforward:"
},
{
"code": null,
"e": 6909,
"s": 6878,
"text": "topic_model.visualize_topics()"
},
{
"code": null,
"e": 8427,
"s": 6909,
"text": "The intertopic distance map above depicsts five main clusters of topics among our 15 total topics. The cluster that is closest to the x-axis pertains to the general theme of Python, which makes intuitive sense based on my Medium interests and search history. The cluster in the bottom left aligns consists of domain interests such as machine learning, deep learning, data science, tech design, etc. Gravitating toward the top left corner are articles that pertain to NLP, sentiment analysis, and Twitter data. Despite the fact that these could be labeled yet another domain interset of mine, I hypothesize that they occupy a different region of the 2D plane because I have researched these subjects with greater depth and granularity as part of my senior capstone project. The cluster in the middle bottom area near the y-axis, I presume, is characterized by less technical content, with subject lines that follow something along the lines of the templates “10 Tips to Avoid Burnout as a Software Engineer” or “What I learned from writing 100 Medium Articles in 100 Days”. These are the articles that lie at the intersection of productivity hacks and self-development wisdom, in my opinion. They don’t always teach me new things, but they certainly inspire me when I go through my Medium Digest each morning. Lastly, the rightmost topic cluster represents a potpourri of topics that do not reside within the tech space but are nevertheless important in order to consume a well-rounded selection of articles on Medium."
},
{
"code": null,
"e": 8676,
"s": 8427,
"text": "And now, for the main attraction, dynamic topic modeling with respect to time! In the video below, we can observe the frequency of the ten most prevalent topics in my corpus of suggested Medium articles (titles), from the start of 2020 through now."
},
{
"code": null,
"e": 9768,
"s": 8676,
"text": "We can observe that data science (topic 5) and development / coding (topic 9) have been consistent interests of mine over the course of almost 2 years. Interestingly, React apps (topic 6) were something that I read about extensively around October and November of 2020, but this interest of mine faded shortly thereafter. The backstory for this, you may be wondering? Well, I was cramming for a frontend development interview that I felt wildly unprepared for. Fun fact: Medium taught me pretty much everything I know about React. We can notice that the NLP topic (topic 8) spiked in frequency around the summer of 2020 and June of 2021, which makes a lot of sense when I reflect on what I was working on at those times. COVID-19 (topic 2) spiked in frequency around March through May of 2020, which speaks for itself I think... moving on. The potpourri topic (topic 0) had the highest frequency in early 2020, but tapered off later in the year. I guess I read a lot more non-technical content when I was stuck at home, whereas I now primarily consult Medium articles for technical purposes."
},
{
"code": null,
"e": 10071,
"s": 9768,
"text": "Without going into more personal reflection, I hope you get the gist of how powerful dynamic topic modeling can by. Hovering over our topic frequency line plots at any point in time brings up different topic keywords, thus making it possible to analyze how a topic has evolved in composition over time."
},
{
"code": null,
"e": 10247,
"s": 10071,
"text": "Though I briefly showed how to access the top keywords belonging to a particular topic and their importance scores, we can also visualize these terms and scores as bar charts."
},
{
"code": null,
"e": 10601,
"s": 10247,
"text": "That’s a wrap for our comprehensive tour of BERTopic! I hope you learned a thing or two about dynamic topic modeling and can see why I find this topic modeling Python package to surpass other packages and techniques in this space. Again, huge shoutout to the author of BERTopic, Maarten Grootendorst for implementing and open sourcing this awesome code."
},
{
"code": null,
"e": 11047,
"s": 10601,
"text": "If you want to give dynamic topic modeling a whirl, I have gone ahead and written a small Streamlit app which allows you to import a CSV file consisting of dates and documents from which you’d like to build a topic model. It then creates a BERTopic model and embeds some interactive visualizations within the app. I hope you enjoy and can play around with the app to your heart’s content. My motivation for developing this app was the following:"
},
{
"code": null,
"e": 11327,
"s": 11047,
"text": "make topic modeling more interactive and enjoyable and move away from it being an intimidatingly technical NLP subdomainmake this subject matter accessible to less technical folks who may also want the opportunity to better understand the themes present in a given corpus of text"
},
{
"code": null,
"e": 11448,
"s": 11327,
"text": "make topic modeling more interactive and enjoyable and move away from it being an intimidatingly technical NLP subdomain"
},
{
"code": null,
"e": 11608,
"s": 11448,
"text": "make this subject matter accessible to less technical folks who may also want the opportunity to better understand the themes present in a given corpus of text"
},
{
"code": null,
"e": 11714,
"s": 11608,
"text": "The app is accessible here: https://share.streamlit.io/sejaldua/digesting-the-digest/main/bertopic_app.py"
}
] |
Find the difference between two datetime values with MySQL?
|
To find the difference between two datetime values, you can use TIMESTAMPDIFF(). Let us first create a table −
mysql> create table DemoTable
-> (
-> DueDatetime1 datetime,
-> DueDatetime2 datetime
-> );
Query OK, 0 rows affected (0.86 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable values('2019-10-26 19:49:00','2019-10-26 17:49:00');
Query OK, 1 row affected (0.19 sec)
mysql> insert into DemoTable values('2019-10-26 08:00:00','2019-10-26 13:00:00');
Query OK, 1 row affected (0.15 sec)
mysql> insert into DemoTable values('2019-10-26 06:50:00','2019-10-26 12:50:00');
Query OK, 1 row affected (0.68 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+---------------------+---------------------+
| DueDatetime1 | DueDatetime2 |
+---------------------+---------------------+
| 2019-10-26 19:49:00 | 2019-10-26 17:49:00 |
| 2019-10-26 08:00:00 | 2019-10-26 13:00:00 |
| 2019-10-26 06:50:00 | 2019-10-26 12:50:00 |
+---------------------+---------------------+
3 rows in set (0.00 sec)
Here is the query to implement timestampdiff() and find the difference between two dates −
mysql> select abs(timestampdiff(minute,DueDatetime1,DueDatetime2)) from DemoTable;
This will produce the following output −
+------------------------------------------------------+
| abs(timestampdiff(minute,DueDatetime1,DueDatetime2)) |
+------------------------------------------------------+
| 120 |
| 300 |
| 360 |
+------------------------------------------------------+
3 rows in set (0.00 sec)
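As a sanity check, the same minute differences can be reproduced outside MySQL. A small Python sketch (not part of the original answer) that mirrors `ABS(TIMESTAMPDIFF(MINUTE, ...))` — note that TIMESTAMPDIFF truncates partial units, so the absolute value is taken before the integer division:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def abs_minute_diff(dt1, dt2):
    # whole minutes between two datetimes, sign dropped;
    # abs() first so partial minutes truncate like TIMESTAMPDIFF
    seconds = (dt2 - dt1).total_seconds()
    return int(abs(seconds) // 60)

rows = [
    ("2019-10-26 19:49:00", "2019-10-26 17:49:00"),
    ("2019-10-26 08:00:00", "2019-10-26 13:00:00"),
    ("2019-10-26 06:50:00", "2019-10-26 12:50:00"),
]

for a, b in rows:
    print(abs_minute_diff(datetime.strptime(a, FMT), datetime.strptime(b, FMT)))
# prints 120, 300, 360 -- matching the query output above
```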
|
[
{
"code": null,
"e": 1173,
"s": 1062,
"text": "To find the difference between two datetime values, you can use TIMESTAMPDIFF(). Let us first create a table −"
},
{
"code": null,
"e": 1314,
"s": 1173,
"text": "mysql> create table DemoTable\n -> (\n -> DueDatetime1 datetime,\n -> DueDatetime2 datetime\n -> );\nQuery OK, 0 rows affected (0.86 sec)"
},
{
"code": null,
"e": 1370,
"s": 1314,
"text": "Insert some records in the table using insert command −"
},
{
"code": null,
"e": 1724,
"s": 1370,
"text": "mysql> insert into DemoTable values('2019-10-26 19:49:00','2019-10-26 17:49:00');\nQuery OK, 1 row affected (0.19 sec)\nmysql> insert into DemoTable values('2019-10-26 08:00:00','2019-10-26 13:00:00');\nQuery OK, 1 row affected (0.15 sec)\nmysql> insert into DemoTable values('2019-10-26 06:50:00','2019-10-26 12:50:00');\nQuery OK, 1 row affected (0.68 sec)"
},
{
"code": null,
"e": 1784,
"s": 1724,
"text": "Display all records from the table using select statement −"
},
{
"code": null,
"e": 1815,
"s": 1784,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 1856,
"s": 1815,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2203,
"s": 1856,
"text": "+---------------------+---------------------+\n| DueDatetime1 | DueDatetime2 |\n+---------------------+---------------------+\n| 2019-10-26 19:49:00 | 2019-10-26 17:49:00 |\n| 2019-10-26 08:00:00 | 2019-10-26 13:00:00 |\n| 2019-10-26 06:50:00 | 2019-10-26 12:50:00 |\n+---------------------+---------------------+\n3 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2294,
"s": 2203,
"text": "Here is the query to implement timestampdiff() and find the difference between two dates −"
},
{
"code": null,
"e": 2377,
"s": 2294,
"text": "mysql> select abs(timestampdiff(minute,DueDatetime1,DueDatetime2)) from DemoTable;"
},
{
"code": null,
"e": 2418,
"s": 2377,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2842,
"s": 2418,
"text": "+------------------------------------------------------+\n| abs(timestampdiff(minute,DueDatetime1,DueDatetime2)) |\n+------------------------------------------------------+\n| 120 |\n| 300 |\n| 360 |\n+------------------------------------------------------+\n3 rows in set (0.00 sec)"
}
] |
Count triplets in a sorted doubly linked list whose sum is equal to a given value x - GeeksforGeeks
|
06 Jul, 2021
Given a sorted doubly linked list of distinct nodes (no two nodes have the same data) and a value x, count triplets in the list that sum up to the given value x.
Examples:
Method 1 (Naive Approach): Using three nested loops, generate all triplets and check whether the elements in each triplet sum up to x or not.
C++
Java
Python3
C#
Javascript
// C++ implementation to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'#include <bits/stdc++.h> using namespace std; // structure of node of doubly linked liststruct Node { int data; struct Node* next, *prev;}; // function to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'int countTriplets(struct Node* head, int x){ struct Node* ptr1, *ptr2, *ptr3; int count = 0; // generate all possible triplets for (ptr1 = head; ptr1 != NULL; ptr1 = ptr1->next) for (ptr2 = ptr1->next; ptr2 != NULL; ptr2 = ptr2->next) for (ptr3 = ptr2->next; ptr3 != NULL; ptr3 = ptr3->next) // if elements in the current triplet sum up to 'x' if ((ptr1->data + ptr2->data + ptr3->data) == x) // increment count count++; // required count of triplets return count;} // A utility function to insert a new node at the// beginning of doubly linked listvoid insert(struct Node** head, int data){ // allocate node struct Node* temp = new Node(); // put in the data temp->data = data; temp->next = temp->prev = NULL; if ((*head) == NULL) (*head) = temp; else { temp->next = *head; (*head)->prev = temp; (*head) = temp; }} // Driver program to test aboveint main(){ // start with an empty doubly linked list struct Node* head = NULL; // insert values in sorted order insert(&head, 9); insert(&head, 8); insert(&head, 6); insert(&head, 5); insert(&head, 4); insert(&head, 2); insert(&head, 1); int x = 17; cout << "Count = " << countTriplets(head, x); return 0;}
// Java implementation to count triplets// in a sorted doubly linked list// whose sum is equal to a given value 'x'import java.io.*;import java.util.*; // Represents node of a doubly linked listclass Node{ int data; Node prev, next; Node(int val) { data = val; prev = null; next = null; }} class GFG{ // function to count triplets in // a sorted doubly linked list // whose sum is equal to a given value 'x' static int countTriplets(Node head, int x) { Node ptr1, ptr2, ptr3; int count = 0; // generate all possible triplets for (ptr1 = head; ptr1 != null; ptr1 = ptr1.next) for (ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next) for (ptr3 = ptr2.next; ptr3 != null; ptr3 = ptr3.next) // if elements in the current triplet sum up to 'x' if ((ptr1.data + ptr2.data + ptr3.data) == x) // increment count count++; // required count of triplets return count; } // A utility function to insert a new node at the // beginning of doubly linked list static Node insert(Node head, int val) { // allocate node Node temp = new Node(val); if (head == null) head = temp; else { temp.next = head; head.prev = temp; head = temp; } return head; } // Driver code public static void main(String args[]) { // start with an empty doubly linked list Node head = null; // insert values in sorted order head = insert(head, 9); head = insert(head, 8); head = insert(head, 6); head = insert(head, 5); head = insert(head, 4); head = insert(head, 2); head = insert(head, 1); int x = 17; System.out.println("count = " + countTriplets(head, x)); }} // This code is contributed by rachana soma
# Python3 implementation to count triplets# in a sorted doubly linked list# whose sum is equal to a given value 'x' # structure of node of doubly linked listclass Node: def __init__(self): self.data = None self.prev = None self.next = None # function to count triplets in a sorted doubly linked list# whose sum is equal to a given value 'x'def countTriplets( head, x): ptr1 = head ptr2 = None ptr3 = None count = 0 # generate all possible triplets while (ptr1 != None ): ptr2 = ptr1.next while ( ptr2 != None ): ptr3 = ptr2.next while ( ptr3 != None ): # if elements in the current triplet sum up to 'x' if ((ptr1.data + ptr2.data + ptr3.data) == x): # increment count count = count + 1 ptr3 = ptr3.next ptr2 = ptr2.next ptr1 = ptr1.next # required count of triplets return count # A utility function to insert a new node at the# beginning of doubly linked listdef insert(head, data): # allocate node temp = Node() # put in the data temp.data = data temp.next = temp.prev = None if ((head) == None): (head) = temp else : temp.next = head (head).prev = temp (head) = temp return head # Driver code # start with an empty doubly linked listhead = None # insert values in sorted orderhead = insert(head, 9)head = insert(head, 8)head = insert(head, 6)head = insert(head, 5)head = insert(head, 4)head = insert(head, 2)head = insert(head, 1) x = 17 print( "Count = ", countTriplets(head, x)) # This code is contributed by Arnab Kundu
// C# implementation to count triplets// in a sorted doubly linked list// whose sum is equal to a given value 'x'using System; // Represents node of a doubly linked listpublic class Node{ public int data; public Node prev, next; public Node(int val) { data = val; prev = null; next = null; }} class GFG{ // function to count triplets in // a sorted doubly linked list // whose sum is equal to a given value 'x' static int countTriplets(Node head, int x) { Node ptr1, ptr2, ptr3; int count = 0; // generate all possible triplets for (ptr1 = head; ptr1 != null; ptr1 = ptr1.next) for (ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next) for (ptr3 = ptr2.next; ptr3 != null; ptr3 = ptr3.next) // if elements in the current triplet sum up to 'x' if ((ptr1.data + ptr2.data + ptr3.data) == x) // increment count count++; // required count of triplets return count; } // A utility function to insert a new node at the // beginning of doubly linked list static Node insert(Node head, int val) { // allocate node Node temp = new Node(val); if (head == null) head = temp; else { temp.next = head; head.prev = temp; head = temp; } return head; } // Driver code public static void Main(String []args) { // start with an empty doubly linked list Node head = null; // insert values in sorted order head = insert(head, 9); head = insert(head, 8); head = insert(head, 6); head = insert(head, 5); head = insert(head, 4); head = insert(head, 2); head = insert(head, 1); int x = 17; Console.WriteLine("count = " + countTriplets(head, x)); }} // This code is contributed by Arnab Kundu
<script>// javascript implementation to count triplets// in a sorted doubly linked list// whose sum is equal to a given value 'x'// Represents node of a doubly linked listclass Node { constructor(val) { this.data = val; this.prev = null; this.next = null; }} // function to count triplets in // a sorted doubly linked list // whose sum is equal to a given value 'x' function countTriplets( head , x) { var ptr1, ptr2, ptr3; var count = 0; // generate all possible triplets for (ptr1 = head; ptr1 != null; ptr1 = ptr1.next) for (ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next) for (ptr3 = ptr2.next; ptr3 != null; ptr3 = ptr3.next) // if elements in the current triplet sum up to 'x' if ((ptr1.data + ptr2.data + ptr3.data) == x) // increment count count++; // required count of triplets return count; } // A utility function to insert a new node at the // beginning of doubly linked list function insert( head , val) { // allocate node temp = new Node(val); if (head == null) head = temp; else { temp.next = head; head.prev = temp; head = temp; } return head; } // Driver code // start with an empty doubly linked list head = null; // insert values in sorted order head = insert(head, 9); head = insert(head, 8); head = insert(head, 6); head = insert(head, 5); head = insert(head, 4); head = insert(head, 2); head = insert(head, 1); var x = 17; document.write("count = " + countTriplets(head, x)); // This code is contributed by umadevi9616</script>
Output:
Count = 2
Time Complexity: O(n^3), Auxiliary Space: O(1)
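For a quick way to check the expected answer, the same O(n^3) enumeration can be sketched in a few lines of Python over a plain list — an illustration of the brute force, not the article's linked-list code:

```python
from itertools import combinations

def count_triplets_bruteforce(values, x):
    # every 3-element combination, the same O(n^3) work as the nested loops
    return sum(1 for triple in combinations(values, 3) if sum(triple) == x)

print(count_triplets_bruteforce([1, 2, 4, 5, 6, 8, 9], 17))  # -> 2
```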
Method 2 (Hashing): Create a hash table with (key, value) tuples represented as (node data, node pointer) tuples. Traverse the doubly linked list and store each node’s data and its pointer pair(tuple) in the hash table. Now, generate each possible pair of nodes. For each pair of nodes, calculate the p_sum(sum of data in the two nodes) and check whether (x-p_sum) exists in the hash table or not. If it exists, then also verify that the two nodes in the pair are not same to the node associated with (x-p_sum) in the hash table and finally increment count. Return (count / 3) as each triplet is counted 3 times in the above process.
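The pair-plus-complement counting (and the final division by 3) can be sketched over a plain Python list, with a dict standing in for the hash table — an illustration of the idea, not the article's linked-list implementation below:

```python
def count_triplets_hashing(values, x):
    # map each (distinct) value to its index -- the "hash table"
    index_of = {v: i for i, v in enumerate(values)}
    count = 0
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            k = index_of.get(x - values[i] - values[j])
            # the third element must exist and be a different node than the pair
            if k is not None and k != i and k != j:
                count += 1
    # each triplet was found once per pair it contains, i.e. 3 times
    return count // 3

print(count_triplets_hashing([1, 2, 4, 5, 6, 8, 9], 17))  # -> 2
```

This reduces the work to O(n^2) pairs with O(n) extra space for the dict, the same trade-off the linked-list version makes with its unordered_map.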
C++
Java
Python3
C#
Javascript
// C++ implementation to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'#include <bits/stdc++.h> using namespace std; // structure of node of doubly linked liststruct Node { int data; struct Node* next, *prev;}; // function to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'int countTriplets(struct Node* head, int x){ struct Node* ptr, *ptr1, *ptr2; int count = 0; // unordered_map 'um' implemented as hash table unordered_map<int, Node*> um; // insert the <node data, node pointer> tuple in 'um' for (ptr = head; ptr != NULL; ptr = ptr->next) um[ptr->data] = ptr; // generate all possible pairs for (ptr1 = head; ptr1 != NULL; ptr1 = ptr1->next) for (ptr2 = ptr1->next; ptr2 != NULL; ptr2 = ptr2->next) { // p_sum - sum of elements in the current pair int p_sum = ptr1->data + ptr2->data; // if 'x-p_sum' is present in 'um' and either of the two nodes // are not equal to the 'um[x-p_sum]' node if (um.find(x - p_sum) != um.end() && um[x - p_sum] != ptr1 && um[x - p_sum] != ptr2) // increment count count++; } // required count of triplets // division by 3 as each triplet is counted 3 times return (count / 3);} // A utility function to insert a new node at the// beginning of doubly linked listvoid insert(struct Node** head, int data){ // allocate node struct Node* temp = new Node(); // put in the data temp->data = data; temp->next = temp->prev = NULL; if ((*head) == NULL) (*head) = temp; else { temp->next = *head; (*head)->prev = temp; (*head) = temp; }} // Driver program to test aboveint main(){ // start with an empty doubly linked list struct Node* head = NULL; // insert values in sorted order insert(&head, 9); insert(&head, 8); insert(&head, 6); insert(&head, 5); insert(&head, 4); insert(&head, 2); insert(&head, 1); int x = 17; cout << "Count = " << countTriplets(head, x); return 0;}
// Java implementation to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'import java.util.*; class GFG{ // structure of node of doubly linked liststatic class Node { int data; Node next, prev; Node(int val) { data = val; prev = null; next = null; }}; // function to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'static int countTriplets(Node head, int x){ Node ptr, ptr1, ptr2; int count = 0; // unordered_map 'um' implemented as hash table HashMap<Integer,Node> um = new HashMap<Integer,Node>(); // insert the <node data, node pointer> tuple in 'um' for (ptr = head; ptr != null; ptr = ptr.next) um.put(ptr.data, ptr); // generate all possible pairs for (ptr1 = head; ptr1 != null; ptr1 = ptr1.next) for (ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next) { // p_sum - sum of elements in the current pair int p_sum = ptr1.data + ptr2.data; // if 'x-p_sum' is present in 'um' and either of the two nodes // are not equal to the 'um[x-p_sum]' node if (um.containsKey(x - p_sum) && um.get(x - p_sum) != ptr1 && um.get(x - p_sum) != ptr2) // increment count count++; } // required count of triplets // division by 3 as each triplet is counted 3 times return (count / 3);} // A utility function to insert a new node at the// beginning of doubly linked liststatic Node insert(Node head, int val){ // allocate node Node temp = new Node(val); if (head == null) head = temp; else { temp.next = head; head.prev = temp; head = temp; } return head;} // Driver program to test abovepublic static void main(String[] args){ // start with an empty doubly linked list Node head = null; // insert values in sorted order head = insert(head, 9); head = insert(head, 8); head = insert(head, 6); head = insert(head, 5); head = insert(head, 4); head = insert(head, 2); head = insert(head, 1); int x = 17; System.out.print("Count = " + countTriplets(head, x));}} // This code is contributed by Rajput-Ji
# Python3 implementation to count triplets in a sorted doubly linked list
# whose sum is equal to a given value 'x'

# structure of node of doubly linked list
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

# function to count triplets in a sorted doubly linked list
# whose sum is equal to a given value 'x'
def countTriplets(head, x):

    count = 0

    # dictionary 'um' used as the hash table
    um = dict()

    # insert the <node data, node pointer> tuple in 'um'
    ptr = head
    while ptr != None:
        um[ptr.data] = ptr
        ptr = ptr.next

    # generate all possible pairs
    ptr1 = head
    while ptr1 != None:
        ptr2 = ptr1.next
        while ptr2 != None:

            # p_sum - sum of elements in the current pair
            p_sum = ptr1.data + ptr2.data

            # if 'x-p_sum' is present in 'um' and either of the two nodes
            # are not equal to the 'um[x-p_sum]' node
            if ((x - p_sum) in um and um[x - p_sum] != ptr1
                    and um[x - p_sum] != ptr2):

                # increment count
                count += 1

            ptr2 = ptr2.next
        ptr1 = ptr1.next

    # required count of triplets
    # division by 3 as each triplet is counted 3 times
    return count // 3

# A utility function to insert a new node at the
# beginning of doubly linked list
def insert(head, data):

    # allocate node
    temp = Node(data)

    if head == None:
        head = temp
    else:
        temp.next = head
        head.prev = temp
        head = temp
    return head

# Driver program to test above
if __name__ == '__main__':

    # start with an empty doubly linked list
    head = None

    # insert values in sorted order
    head = insert(head, 9)
    head = insert(head, 8)
    head = insert(head, 6)
    head = insert(head, 5)
    head = insert(head, 4)
    head = insert(head, 2)
    head = insert(head, 1)

    x = 17

    print("Count = " + str(countTriplets(head, x)))

# This code is contributed by rutvik_56
// C# implementation to count triplets in a sorted doubly linked list
// whose sum is equal to a given value 'x'
using System;
using System.Collections.Generic;

class GFG {

    // structure of node of doubly linked list
    class Node {
        public int data;
        public Node next, prev;

        public Node(int val)
        {
            data = val;
            prev = null;
            next = null;
        }
    };

    // function to count triplets in a sorted doubly linked list
    // whose sum is equal to a given value 'x'
    static int countTriplets(Node head, int x)
    {
        Node ptr, ptr1, ptr2;
        int count = 0;

        // Dictionary 'um' used as the hash table
        Dictionary<int, Node> um = new Dictionary<int, Node>();

        // insert the <node data, node pointer> tuple in 'um'
        for (ptr = head; ptr != null; ptr = ptr.next)
            if (um.ContainsKey(ptr.data))
                um[ptr.data] = ptr;
            else
                um.Add(ptr.data, ptr);

        // generate all possible pairs
        for (ptr1 = head; ptr1 != null; ptr1 = ptr1.next)
            for (ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next) {

                // p_sum - sum of elements in the current pair
                int p_sum = ptr1.data + ptr2.data;

                // if 'x-p_sum' is present in 'um' and either of the two nodes
                // are not equal to the 'um[x-p_sum]' node
                if (um.ContainsKey(x - p_sum) && um[x - p_sum] != ptr1
                    && um[x - p_sum] != ptr2)

                    // increment count
                    count++;
            }

        // required count of triplets
        // division by 3 as each triplet is counted 3 times
        return (count / 3);
    }

    // A utility function to insert a new node at the
    // beginning of doubly linked list
    static Node insert(Node head, int val)
    {
        // allocate node
        Node temp = new Node(val);

        if (head == null)
            head = temp;
        else {
            temp.next = head;
            head.prev = temp;
            head = temp;
        }
        return head;
    }

    // Driver code
    public static void Main(String[] args)
    {
        // start with an empty doubly linked list
        Node head = null;

        // insert values in sorted order
        head = insert(head, 9);
        head = insert(head, 8);
        head = insert(head, 6);
        head = insert(head, 5);
        head = insert(head, 4);
        head = insert(head, 2);
        head = insert(head, 1);

        int x = 17;

        Console.Write("Count = " + countTriplets(head, x));
    }
}

// This code is contributed by PrinciRaj1992
<script>

// Javascript implementation to count
// triplets in a sorted doubly linked list
// whose sum is equal to a given value 'x'

// Structure of node of doubly linked list
class Node
{
    constructor(data)
    {
        this.data = data;
        this.prev = null;
        this.next = null;
    }
}

// Function to count triplets in a sorted
// doubly linked list whose sum is equal
// to a given value 'x'
function countTriplets(head, x)
{
    let ptr, ptr1, ptr2;
    let count = 0;

    // Map 'um' used as the hash table
    let um = new Map();

    // Insert the <node data, node pointer>
    // tuple in 'um'
    for(ptr = head; ptr != null; ptr = ptr.next)
        um.set(ptr.data, ptr);

    // Generate all possible pairs
    for(ptr1 = head; ptr1 != null; ptr1 = ptr1.next)
        for(ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next)
        {

            // p_sum - sum of elements in
            // the current pair
            let p_sum = ptr1.data + ptr2.data;

            // If 'x-p_sum' is present in 'um'
            // and either of the two nodes are
            // not equal to the 'um[x-p_sum]' node
            if (um.has(x - p_sum) &&
                um.get(x - p_sum) != ptr1 &&
                um.get(x - p_sum) != ptr2)

                // Increment count
                count++;
        }

    // Required count of triplets;
    // division by 3 as each triplet
    // is counted 3 times
    return (count / 3);
}

// A utility function to insert a new
// node at the beginning of doubly linked list
function insert(head, val)
{

    // Allocate node
    let temp = new Node(val);

    if (head == null)
        head = temp;
    else
    {
        temp.next = head;
        head.prev = temp;
        head = temp;
    }
    return head;
}

// Driver code

// Start with an empty doubly linked list
let head = null;

// Insert values in sorted order
head = insert(head, 9);
head = insert(head, 8);
head = insert(head, 6);
head = insert(head, 5);
head = insert(head, 4);
head = insert(head, 2);
head = insert(head, 1);

let x = 17;

document.write("Count = " + countTriplets(head, x));

// This code is contributed by patel2127

</script>
Output:
Count = 2
Time Complexity: O(n^2), Auxiliary Space: O(n)
Method 3 (Efficient Approach, using two pointers): Traverse the doubly linked list from left to right. For each node visited during the traversal, initialize two pointers: first = pointer to the node next to the current node, and last = pointer to the last node of the list. Now count the pairs in the range from first to last that sum up to the value (x – current node's data), using the two-pointer pair-counting algorithm described in this post, and add this count to the total count of triplets. The pointer to the last node needs to be computed only once, at the beginning.
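To see the pair-counting step in isolation, here is a minimal sketch over a plain sorted Python list rather than a doubly linked list (the helper names `count_pairs` and `count_triplets` are illustrative, not from the article; the full linked-list versions follow below):

```python
def count_pairs(arr, lo, hi, value):
    """Count pairs arr[i] + arr[j] == value with lo <= i < j <= hi.
    arr must be sorted; the two indices move toward each other."""
    count = 0
    while lo < hi:
        s = arr[lo] + arr[hi]
        if s == value:
            # pair found: advance both indices (elements are distinct)
            count += 1
            lo += 1
            hi -= 1
        elif s > value:
            # sum too large: move the right index backward
            hi -= 1
        else:
            # sum too small: move the left index forward
            lo += 1
    return count

def count_triplets(arr, x):
    # Fix each element in turn and count pairs to its right
    # that sum to the remaining value x - arr[i]
    return sum(count_pairs(arr, i + 1, len(arr) - 1, x - arr[i])
               for i in range(len(arr)))

print(count_triplets([1, 2, 4, 5, 6, 8, 9], 17))  # 2
```

The same logic carries over to the linked list unchanged: the index moves `lo += 1` and `hi -= 1` become `first = first.next` and `second = second.prev`.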
C++
Java
Python3
C#
Javascript
// C++ implementation to count triplets in a sorted doubly linked list
// whose sum is equal to a given value 'x'
#include <bits/stdc++.h>

using namespace std;

// structure of node of doubly linked list
struct Node {
    int data;
    struct Node *next, *prev;
};

// function to count pairs whose sum equal to given 'value'
int countPairs(struct Node* first, struct Node* second, int value)
{
    int count = 0;

    // The loop terminates when either of two pointers
    // become NULL, or they cross each other (second->next
    // == first), or they become same (first == second)
    while (first != NULL && second != NULL && first != second
           && second->next != first) {

        // pair found
        if ((first->data + second->data) == value) {

            // increment count
            count++;

            // move first in forward direction
            first = first->next;

            // move second in backward direction
            second = second->prev;
        }

        // if sum is greater than 'value'
        // move second in backward direction
        else if ((first->data + second->data) > value)
            second = second->prev;

        // else move first in forward direction
        else
            first = first->next;
    }

    // required count of pairs
    return count;
}

// function to count triplets in a sorted doubly linked list
// whose sum is equal to a given value 'x'
int countTriplets(struct Node* head, int x)
{
    // if list is empty
    if (head == NULL)
        return 0;

    struct Node *current, *first, *last;
    int count = 0;

    // get pointer to the last node of
    // the doubly linked list
    last = head;
    while (last->next != NULL)
        last = last->next;

    // traversing the doubly linked list
    for (current = head; current != NULL; current = current->next) {

        // for each current node
        first = current->next;

        // count pairs with sum (x - current->data) in the range
        // first to last and add it to the 'count' of triplets
        count += countPairs(first, last, x - current->data);
    }

    // required count of triplets
    return count;
}

// A utility function to insert a new node at the
// beginning of doubly linked list
void insert(struct Node** head, int data)
{
    // allocate node
    struct Node* temp = new Node();

    // put in the data
    temp->data = data;
    temp->next = temp->prev = NULL;

    if ((*head) == NULL)
        (*head) = temp;
    else {
        temp->next = *head;
        (*head)->prev = temp;
        (*head) = temp;
    }
}

// Driver program to test above
int main()
{
    // start with an empty doubly linked list
    struct Node* head = NULL;

    // insert values in sorted order
    insert(&head, 9);
    insert(&head, 8);
    insert(&head, 6);
    insert(&head, 5);
    insert(&head, 4);
    insert(&head, 2);
    insert(&head, 1);

    int x = 17;

    cout << "Count = " << countTriplets(head, x);

    return 0;
}
// Java implementation to count triplets in a sorted doubly linked list
// whose sum is equal to a given value 'x'
import java.util.*;

class GFG {

    // structure of node of doubly linked list
    static class Node {
        int data;
        Node next, prev;
    };

    // function to count pairs whose sum equal to given 'value'
    static int countPairs(Node first, Node second, int value)
    {
        int count = 0;

        // The loop terminates when either of two pointers
        // become null, or they cross each other (second.next
        // == first), or they become same (first == second)
        while (first != null && second != null && first != second
               && second.next != first) {

            // pair found
            if ((first.data + second.data) == value) {

                // increment count
                count++;

                // move first in forward direction
                first = first.next;

                // move second in backward direction
                second = second.prev;
            }

            // if sum is greater than 'value'
            // move second in backward direction
            else if ((first.data + second.data) > value)
                second = second.prev;

            // else move first in forward direction
            else
                first = first.next;
        }

        // required count of pairs
        return count;
    }

    // function to count triplets in a sorted doubly linked list
    // whose sum is equal to a given value 'x'
    static int countTriplets(Node head, int x)
    {
        // if list is empty
        if (head == null)
            return 0;

        Node current, first, last;
        int count = 0;

        // get pointer to the last node of
        // the doubly linked list
        last = head;
        while (last.next != null)
            last = last.next;

        // traversing the doubly linked list
        for (current = head; current != null; current = current.next) {

            // for each current node
            first = current.next;

            // count pairs with sum (x - current.data) in the range
            // first to last and add it to the 'count' of triplets
            count += countPairs(first, last, x - current.data);
        }

        // required count of triplets
        return count;
    }

    // A utility function to insert a new node at the
    // beginning of doubly linked list
    static Node insert(Node head, int data)
    {
        // allocate node
        Node temp = new Node();

        // put in the data
        temp.data = data;
        temp.next = temp.prev = null;

        if (head == null)
            head = temp;
        else {
            temp.next = head;
            head.prev = temp;
            head = temp;
        }
        return head;
    }

    // Driver program to test above
    public static void main(String[] args)
    {
        // start with an empty doubly linked list
        Node head = null;

        // insert values in sorted order
        head = insert(head, 9);
        head = insert(head, 8);
        head = insert(head, 6);
        head = insert(head, 5);
        head = insert(head, 4);
        head = insert(head, 2);
        head = insert(head, 1);

        int x = 17;

        System.out.print("Count = " + countTriplets(head, x));
    }
}

// This code is contributed by 29AjayKumar
# Python3 implementation to count triplets
# in a sorted doubly linked list whose sum
# is equal to a given value 'x'

# Structure of node of doubly linked list
class Node:
    def __init__(self, x):
        self.data = x
        self.next = None
        self.prev = None

# Function to count pairs whose sum
# equal to given 'value'
def countPairs(first, second, value):

    count = 0

    # The loop terminates when either of two pointers
    # become None, or they cross each other (second.next
    # == first), or they become same (first == second)
    while (first != None and second != None and
           first != second and second.next != first):

        # Pair found
        if ((first.data + second.data) == value):

            # Increment count
            count += 1

            # Move first in forward direction
            first = first.next

            # Move second in backward direction
            second = second.prev

        # If sum is greater than 'value'
        # move second in backward direction
        elif ((first.data + second.data) > value):
            second = second.prev

        # Else move first in forward direction
        else:
            first = first.next

    # Required count of pairs
    return count

# Function to count triplets in a sorted
# doubly linked list whose sum is equal
# to a given value 'x'
def countTriplets(head, x):

    # If list is empty
    if (head == None):
        return 0

    current = head
    count = 0

    # Get pointer to the last node of
    # the doubly linked list
    last = head
    while (last.next != None):
        last = last.next

    # Traversing the doubly linked list
    while current != None:

        # For each current node
        first = current.next

        # Count pairs with sum (x - current.data) in
        # the range first to last and add it to the
        # 'count' of triplets
        count += countPairs(first, last, x - current.data)
        current = current.next

    # Required count of triplets
    return count

# A utility function to insert a new node
# at the beginning of doubly linked list
def insert(head, data):

    # Allocate node
    temp = Node(data)

    if (head == None):
        head = temp
    else:
        temp.next = head
        head.prev = temp
        head = temp
    return head

# Driver code
if __name__ == '__main__':

    # Start with an empty doubly linked list
    head = None

    # Insert values in sorted order
    head = insert(head, 9)
    head = insert(head, 8)
    head = insert(head, 6)
    head = insert(head, 5)
    head = insert(head, 4)
    head = insert(head, 2)
    head = insert(head, 1)

    x = 17

    print("Count = ", countTriplets(head, x))

# This code is contributed by mohit kumar 29
// C# implementation to count triplets
// in a sorted doubly linked list
// whose sum is equal to a given value 'x'
using System;

class GFG {

    // structure of node of doubly linked list
    class Node {
        public int data;
        public Node next, prev;
    };

    // function to count pairs whose sum equal to given 'value'
    static int countPairs(Node first, Node second, int value)
    {
        int count = 0;

        // The loop terminates when either of two pointers
        // become null, or they cross each other (second.next
        // == first), or they become same (first == second)
        while (first != null && second != null && first != second
               && second.next != first) {

            // pair found
            if ((first.data + second.data) == value) {

                // increment count
                count++;

                // move first in forward direction
                first = first.next;

                // move second in backward direction
                second = second.prev;
            }

            // if sum is greater than 'value'
            // move second in backward direction
            else if ((first.data + second.data) > value)
                second = second.prev;

            // else move first in forward direction
            else
                first = first.next;
        }

        // required count of pairs
        return count;
    }

    // function to count triplets in a sorted doubly linked list
    // whose sum is equal to a given value 'x'
    static int countTriplets(Node head, int x)
    {
        // if list is empty
        if (head == null)
            return 0;

        Node current, first, last;
        int count = 0;

        // get pointer to the last node of
        // the doubly linked list
        last = head;
        while (last.next != null)
            last = last.next;

        // traversing the doubly linked list
        for (current = head; current != null; current = current.next) {

            // for each current node
            first = current.next;

            // count pairs with sum (x - current.data) in the range
            // first to last and add it to the 'count' of triplets
            count += countPairs(first, last, x - current.data);
        }

        // required count of triplets
        return count;
    }

    // A utility function to insert a new node at the
    // beginning of doubly linked list
    static Node insert(Node head, int data)
    {
        // allocate node
        Node temp = new Node();

        // put in the data
        temp.data = data;
        temp.next = temp.prev = null;

        if (head == null)
            head = temp;
        else {
            temp.next = head;
            head.prev = temp;
            head = temp;
        }
        return head;
    }

    // Driver program to test above
    public static void Main(String[] args)
    {
        // start with an empty doubly linked list
        Node head = null;

        // insert values in sorted order
        head = insert(head, 9);
        head = insert(head, 8);
        head = insert(head, 6);
        head = insert(head, 5);
        head = insert(head, 4);
        head = insert(head, 2);
        head = insert(head, 1);

        int x = 17;

        Console.Write("Count = " + countTriplets(head, x));
    }
}

// This code is contributed by 29AjayKumar
<script>

// Javascript implementation to count
// triplets in a sorted doubly linked list
// whose sum is equal to a given value 'x'

// Structure of node of doubly linked list
class Node
{
    constructor(data)
    {
        this.data = data;
        this.next = this.prev = null;
    }
}

// Function to count pairs whose sum
// equal to given 'value'
function countPairs(first, second, value)
{
    let count = 0;

    // The loop terminates when either of two pointers
    // become null, or they cross each other (second.next
    // == first), or they become same (first == second)
    while (first != null && second != null &&
           first != second && second.next != first)
    {

        // Pair found
        if ((first.data + second.data) == value)
        {

            // Increment count
            count++;

            // Move first in forward direction
            first = first.next;

            // Move second in backward direction
            second = second.prev;
        }

        // If sum is greater than 'value'
        // move second in backward direction
        else if ((first.data + second.data) > value)
            second = second.prev;

        // Else move first in forward direction
        else
            first = first.next;
    }

    // Required count of pairs
    return count;
}

// Function to count triplets in a sorted
// doubly linked list whose sum is equal
// to a given value 'x'
function countTriplets(head, x)
{

    // If list is empty
    if (head == null)
        return 0;

    let current, first, last;
    let count = 0;

    // Get pointer to the last node of
    // the doubly linked list
    last = head;
    while (last.next != null)
        last = last.next;

    // Traversing the doubly linked list
    for(current = head; current != null; current = current.next)
    {

        // For each current node
        first = current.next;

        // Count pairs with sum (x - current.data)
        // in the range first to last and add it
        // to the 'count' of triplets
        count += countPairs(first, last, x - current.data);
    }

    // Required count of triplets
    return count;
}

// A utility function to insert a new node at the
// beginning of doubly linked list
function insert(head, data)
{

    // Allocate node
    let temp = new Node();

    // Put in the data
    temp.data = data;
    temp.next = temp.prev = null;

    if (head == null)
        head = temp;
    else
    {
        temp.next = head;
        head.prev = temp;
        head = temp;
    }
    return head;
}

// Driver code

// Start with an empty doubly linked list
let head = null;

// Insert values in sorted order
head = insert(head, 9);
head = insert(head, 8);
head = insert(head, 6);
head = insert(head, 5);
head = insert(head, 4);
head = insert(head, 2);
head = insert(head, 1);

let x = 17;

document.write("Count = " + countTriplets(head, x));

// This code is contributed by unknown2108

</script>
Output:
Count = 2
Time Complexity: O(n^2), Auxiliary Space: O(1)
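As a sanity check on the reported output, the result for the sample list can be reproduced with a short brute-force enumeration (a standalone sketch over a plain list of the same values, not part of the article's original code):

```python
from itertools import combinations

# values inserted by the driver programs above, in sorted order
values = [1, 2, 4, 5, 6, 8, 9]
x = 17

# enumerate every 3-element combination and keep those summing to x
triplets = [t for t in combinations(values, 3) if sum(t) == x]

print("Count =", len(triplets))  # Count = 2
print(triplets)                  # [(2, 6, 9), (4, 5, 8)]
```

This matches the output of all three methods: the two qualifying triplets are (2, 6, 9) and (4, 5, 8).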
This article is contributed by Ayush Jauhari. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
This code is contributed by PrinciRaj1992",
"e": 44408,
"s": 42001,
"text": null
},
{
"code": "<script> // Javascript implementation to count// triplets in a sorted doubly linked list// whose sum is equal to a given value 'x' // Structure of node of doubly linked listclass Node{constructor(data){ this.data = data; this.prev = null; this.next = null;}} // Function to count triplets in a sorted// doubly linked list whose sum is equal// to a given value 'x'function countTriplets(head, x){ let ptr, ptr1, ptr2; let count = 0; // unordered_map 'um' implemented // as hash table let um = new Map(); // Insert the <node data, node pointer> // tuple in 'um' for(ptr = head; ptr != null; ptr = ptr.next) um.set(ptr.data, ptr); // Generate all possible pairs for(ptr1 = head; ptr1 != null; ptr1 = ptr1.next) for(ptr2 = ptr1.next; ptr2 != null; ptr2 = ptr2.next) { // p_sum - sum of elements in // the current pair let p_sum = ptr1.data + ptr2.data; // If 'x-p_sum' is present in 'um' // and either of the two nodes are // not equal to the 'um[x-p_sum]' node if (um.has(x - p_sum) && um.get(x - p_sum) != ptr1 && um.get(x - p_sum) != ptr2) // Increment count count++; } // Required count of triplets // division by 3 as each triplet // is counted 3 times return (count / 3);} // A utility function to insert a new// node at the beginning of doubly linked listfunction insert(head, val){ // Allocate node let temp = new Node(val); if (head == null) head = temp; else { temp.next = head; head.prev = temp; head = temp; } return head;} // Driver code // Start with an empty doubly linked listlet head = null; // Insert values in sorted orderhead = insert(head, 9);head = insert(head, 8);head = insert(head, 6);head = insert(head, 5);head = insert(head, 4);head = insert(head, 2);head = insert(head, 1); let x = 17; document.write(\"Count = \" + countTriplets(head, x)); // This code is contributed by patel2127 </script>",
"e": 46596,
"s": 44408,
"text": null
},
{
"code": null,
"e": 46605,
"s": 46596,
"text": "Output: "
},
{
"code": null,
"e": 46615,
"s": 46605,
"text": "Count = 2"
},
{
"code": null,
"e": 46660,
"s": 46615,
"text": "Time Complexity: O(n2) Auxiliary Space: O(n)"
},
{
"code": null,
"e": 47184,
"s": 46660,
"text": "Method 3 Efficient Approach(Use of two pointers): Traverse the doubly linked list from left to right. For each current node during the traversal, initialize two pointers first = pointer to the node next to the current node and last = pointer to the last node of the list. Now, count pairs in the list from first to last pointer that sum up to value (x – current node’s data) (algorithm described in this post). Add this count to the total_count of triplets. Pointer to the last node can be found only once in the beginning."
},
{
"code": null,
"e": 47188,
"s": 47184,
"text": "C++"
},
{
"code": null,
"e": 47193,
"s": 47188,
"text": "Java"
},
{
"code": null,
"e": 47201,
"s": 47193,
"text": "Python3"
},
{
"code": null,
"e": 47204,
"s": 47201,
"text": "C#"
},
{
"code": null,
"e": 47215,
"s": 47204,
"text": "Javascript"
},
{
"code": "// C++ implementation to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'#include <bits/stdc++.h> using namespace std; // structure of node of doubly linked liststruct Node { int data; struct Node* next, *prev;}; // function to count pairs whose sum equal to given 'value'int countPairs(struct Node* first, struct Node* second, int value){ int count = 0; // The loop terminates when either of two pointers // become NULL, or they cross each other (second->next // == first), or they become same (first == second) while (first != NULL && second != NULL && first != second && second->next != first) { // pair found if ((first->data + second->data) == value) { // increment count count++; // move first in forward direction first = first->next; // move second in backward direction second = second->prev; } // if sum is greater than 'value' // move second in backward direction else if ((first->data + second->data) > value) second = second->prev; // else move first in forward direction else first = first->next; } // required count of pairs return count;} // function to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'int countTriplets(struct Node* head, int x){ // if list is empty if (head == NULL) return 0; struct Node* current, *first, *last; int count = 0; // get pointer to the last node of // the doubly linked list last = head; while (last->next != NULL) last = last->next; // traversing the doubly linked list for (current = head; current != NULL; current = current->next) { // for each current node first = current->next; // count pairs with sum(x - current->data) in the range // first to last and add it to the 'count' of triplets count += countPairs(first, last, x - current->data); } // required count of triplets return count;} // A utility function to insert a new node at the// beginning of doubly linked listvoid insert(struct Node** head, int data){ // allocate node struct Node* temp = new 
Node(); // put in the data temp->data = data; temp->next = temp->prev = NULL; if ((*head) == NULL) (*head) = temp; else { temp->next = *head; (*head)->prev = temp; (*head) = temp; }} // Driver program to test aboveint main(){ // start with an empty doubly linked list struct Node* head = NULL; // insert values in sorted order insert(&head, 9); insert(&head, 8); insert(&head, 6); insert(&head, 5); insert(&head, 4); insert(&head, 2); insert(&head, 1); int x = 17; cout << \"Count = \" << countTriplets(head, x); return 0;}",
"e": 50116,
"s": 47215,
"text": null
},
{
"code": "// Java implementation to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'import java.util.*; class GFG{ // structure of node of doubly linked liststatic class Node { int data; Node next, prev;}; // function to count pairs whose sum equal to given 'value'static int countPairs(Node first, Node second, int value){ int count = 0; // The loop terminates when either of two pointers // become null, or they cross each other (second.next // == first), or they become same (first == second) while (first != null && second != null && first != second && second.next != first) { // pair found if ((first.data + second.data) == value) { // increment count count++; // move first in forward direction first = first.next; // move second in backward direction second = second.prev; } // if sum is greater than 'value' // move second in backward direction else if ((first.data + second.data) > value) second = second.prev; // else move first in forward direction else first = first.next; } // required count of pairs return count;} // function to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'static int countTriplets(Node head, int x){ // if list is empty if (head == null) return 0; Node current, first, last; int count = 0; // get pointer to the last node of // the doubly linked list last = head; while (last.next != null) last = last.next; // traversing the doubly linked list for (current = head; current != null; current = current.next) { // for each current node first = current.next; // count pairs with sum(x - current.data) in the range // first to last and add it to the 'count' of triplets count += countPairs(first, last, x - current.data); } // required count of triplets return count;} // A utility function to insert a new node at the// beginning of doubly linked liststatic Node insert(Node head, int data){ // allocate node Node temp = new Node(); // put in the data temp.data = data; temp.next = 
temp.prev = null; if ((head) == null) (head) = temp; else { temp.next = head; (head).prev = temp; (head) = temp; } return head;} // Driver program to test abovepublic static void main(String[] args){ // start with an empty doubly linked list Node head = null; // insert values in sorted order head = insert(head, 9); head = insert(head, 8); head = insert(head, 6); head = insert(head, 5); head = insert(head, 4); head = insert(head, 2); head = insert(head, 1); int x = 17; System.out.print(\"Count = \" + countTriplets(head, x));}} // This code is contributed by 29AjayKumar",
"e": 53086,
"s": 50116,
"text": null
},
{
"code": "# Python3 implementation to count triplets# in a sorted doubly linked list whose sum# is equal to a given value 'x' # Structure of node of doubly linked listclass Node: def __init__(self, x): self.data = x self.next = None self.prev = None # Function to count pairs whose sum# equal to given 'value'def countPairs(first, second, value): count = 0 # The loop terminates when either of two pointers # become None, or they cross each other (second.next # == first), or they become same (first == second) while (first != None and second != None and first != second and second.next != first): # Pair found if ((first.data + second.data) == value): # Increment count count += 1 # Move first in forward direction first = first.next # Move second in backward direction second = second.prev # If sum is greater than 'value' # move second in backward direction elif ((first.data + second.data) > value): second = second.prev # Else move first in forward direction else: first = first.next # Required count of pairs return count # Function to count triplets in a sorted# doubly linked list whose sum is equal# to a given value 'x'def countTriplets(head, x): # If list is empty if (head == None): return 0 current, first, last = head, None, None count = 0 # Get pointer to the last node of # the doubly linked list last = head while (last.next != None): last = last.next # Traversing the doubly linked list while current != None: # For each current node first = current.next # count pairs with sum(x - current.data) in # the range first to last and add it to the # 'count' of triplets count, current = count + countPairs( first, last, x - current.data), current.next # Required count of triplets return count # A utility function to insert a new node# at the beginning of doubly linked listdef insert(head, data): # Allocate node temp = Node(data) # Put in the data # temp.next = temp.prev = None if (head == None): head = temp else: temp.next = head head.prev = temp head = temp return head # Driver 
codeif __name__ == '__main__': # Start with an empty doubly linked list head = None # Insert values in sorted order head = insert(head, 9) head = insert(head, 8) head = insert(head, 6) head = insert(head, 5) head = insert(head, 4) head = insert(head, 2) head = insert(head, 1) x = 17 print(\"Count = \", countTriplets(head, x)) # This code is contributed by mohit kumar 29",
"e": 55908,
"s": 53086,
"text": null
},
{
"code": "// C# implementation to count triplets// in a sorted doubly linked list// whose sum is equal to a given value 'x'using System; class GFG{ // structure of node of doubly linked listclass Node{ public int data; public Node next, prev;}; // function to count pairs whose sum equal to given 'value'static int countPairs(Node first, Node second, int value){ int count = 0; // The loop terminates when either of two pointers // become null, or they cross each other (second.next // == first), or they become same (first == second) while (first != null && second != null && first != second && second.next != first) { // pair found if ((first.data + second.data) == value) { // increment count count++; // move first in forward direction first = first.next; // move second in backward direction second = second.prev; } // if sum is greater than 'value' // move second in backward direction else if ((first.data + second.data) > value) second = second.prev; // else move first in forward direction else first = first.next; } // required count of pairs return count;} // function to count triplets in a sorted doubly linked list// whose sum is equal to a given value 'x'static int countTriplets(Node head, int x){ // if list is empty if (head == null) return 0; Node current, first, last; int count = 0; // get pointer to the last node of // the doubly linked list last = head; while (last.next != null) last = last.next; // traversing the doubly linked list for (current = head; current != null; current = current.next) { // for each current node first = current.next; // count pairs with sum(x - current.data) in the range // first to last and add it to the 'count' of triplets count += countPairs(first, last, x - current.data); } // required count of triplets return count;} // A utility function to insert a new node at the// beginning of doubly linked liststatic Node insert(Node head, int data){ // allocate node Node temp = new Node(); // put in the data temp.data = data; temp.next = 
temp.prev = null; if ((head) == null) (head) = temp; else { temp.next = head; (head).prev = temp; (head) = temp; } return head;} // Driver program to test abovepublic static void Main(String[] args){ // start with an empty doubly linked list Node head = null; // insert values in sorted order head = insert(head, 9); head = insert(head, 8); head = insert(head, 6); head = insert(head, 5); head = insert(head, 4); head = insert(head, 2); head = insert(head, 1); int x = 17; Console.Write(\"Count = \" + countTriplets(head, x));}} // This code is contributed by 29AjayKumar",
"e": 58847,
"s": 55908,
"text": null
},
{
"code": "<script> // Javascript implementation to count// triplets in a sorted doubly linked list// whose sum is equal to a given value 'x' // Structure of node of doubly linked listclass Node{ constructor(data) { this.data = data; this.next = this.prev = null; }} // Function to count pairs whose sum// equal to given 'value'function countPairs(first, second, value){ let count = 0; // The loop terminates when either of two pointers // become null, or they cross each other (second.next // == first), or they become same (first == second) while (first != null && second != null && first != second && second.next != first) { // Pair found if ((first.data + second.data) == value) { // Increment count count++; // Move first in forward direction first = first.next; // Move second in backward direction second = second.prev; } // If sum is greater than 'value' // move second in backward direction else if ((first.data + second.data) > value) second = second.prev; // Else move first in forward direction else first = first.next; } // Required count of pairs return count;} // Function to count triplets in a sorted// doubly linked list whose sum is equal// to a given value 'x'function countTriplets(head, x){ // If list is empty if (head == null) return 0; let current, first, last; let count = 0; // Get pointer to the last node of // the doubly linked list last = head; while (last.next != null) last = last.next; // Traversing the doubly linked list for(current = head; current != null; current = current.next) { // For each current node first = current.next; // Count pairs with sum(x - current.data) // in the range first to last and add it // to the 'count' of triplets count += countPairs(first, last, x - current.data); } // Required count of triplets return count;} // A utility function to insert a new node at the// beginning of doubly linked listfunction insert(head, data){ // Allocate node let temp = new Node(); // Put in the data temp.data = data; temp.next = temp.prev = null; if 
((head) == null) (head) = temp; else { temp.next = head; (head).prev = temp; (head) = temp; } return head;} // Driver code // Start with an empty doubly linked listlet head = null; // Insert values in sorted orderhead = insert(head, 9);head = insert(head, 8);head = insert(head, 6);head = insert(head, 5);head = insert(head, 4);head = insert(head, 2);head = insert(head, 1); let x = 17; document.write(\"Count = \" + countTriplets(head, x)); // This code is contributed by unknown2108 </script>",
"e": 61839,
"s": 58847,
"text": null
},
{
"code": null,
"e": 61848,
"s": 61839,
"text": "Output: "
},
{
"code": null,
"e": 61858,
"s": 61848,
"text": "Count = 2"
},
{
"code": null,
"e": 61903,
"s": 61858,
"text": "Time Complexity: O(n2) Auxiliary Space: O(1)"
},
{
"code": null,
"e": 62325,
"s": 61903,
"text": "This article is contributed by Ayush Jauhari. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 62338,
"s": 62325,
"text": "rachana soma"
},
{
"code": null,
"e": 62349,
"s": 62338,
"text": "andrew1234"
},
{
"code": null,
"e": 62359,
"s": 62349,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 62373,
"s": 62359,
"text": "princiraj1992"
},
{
"code": null,
"e": 62385,
"s": 62373,
"text": "29AjayKumar"
},
{
"code": null,
"e": 62400,
"s": 62385,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 62410,
"s": 62400,
"text": "rutvik_56"
},
{
"code": null,
"e": 62422,
"s": 62410,
"text": "umadevi9616"
},
{
"code": null,
"e": 62432,
"s": 62422,
"text": "patel2127"
},
{
"code": null,
"e": 62444,
"s": 62432,
"text": "unknown2108"
},
{
"code": null,
"e": 62453,
"s": 62444,
"text": "gabaa406"
},
{
"code": null,
"e": 62472,
"s": 62453,
"text": "doubly linked list"
},
{
"code": null,
"e": 62484,
"s": 62472,
"text": "Linked List"
},
{
"code": null,
"e": 62496,
"s": 62484,
"text": "Linked List"
},
{
"code": null,
"e": 62594,
"s": 62496,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 62603,
"s": 62594,
"text": "Comments"
},
{
"code": null,
"e": 62616,
"s": 62603,
"text": "Old Comments"
},
{
"code": null,
"e": 62637,
"s": 62616,
"text": "Linked List vs Array"
},
{
"code": null,
"e": 62683,
"s": 62637,
"text": "Delete a Linked List node at a given position"
},
{
"code": null,
"e": 62718,
"s": 62683,
"text": "Queue - Linked List Implementation"
},
{
"code": null,
"e": 62748,
"s": 62718,
"text": "Merge two sorted linked lists"
},
{
"code": null,
"e": 62787,
"s": 62748,
"text": "Find the middle of a given linked list"
},
{
"code": null,
"e": 62830,
"s": 62787,
"text": "Implement a stack using singly linked list"
},
{
"code": null,
"e": 62877,
"s": 62830,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 62905,
"s": 62877,
"text": "Merge Sort for Linked Lists"
},
{
"code": null,
"e": 62966,
"s": 62905,
"text": "Circular Linked List | Set 1 (Introduction and Applications)"
}
] |
Explain the role of HttpContext class in ASP.NET Core
|
The HttpContext encapsulates all the HTTP-specific information about a single HTTP request.
When an HTTP request arrives at the server, the server processes the request and builds an HttpContext object. This object represents the request which your application code can use to create the response.
The HttpContext object constructed by the ASP.NET Core web server acts as a container for a single request. It stores request and response information, such as the properties of the request, request-related services, any data passed to or from the request, and any errors, if there are any.
ASP.NET Core applications access the HTTPContext through the IHttpContextAccessor interface. The HttpContextAccessor class implements it. You can use this class when you need to access HttpContext inside a service.
Here are different ways to access HttpContext from various types of applications.
public class HomeController : Controller{
public IActionResult About(){
var pathBase = HttpContext.Request.PathBase;
...
return View();
}
}
public class AboutModel : PageModel{
public string Message { get; set; }
public void OnGet(){
Message = HttpContext.Request.PathBase;
}
}
@{
var username = Context.User.Identity.Name;
...
}
From middleware
public class MyCustomMiddleware{
public Task InvokeAsync(HttpContext context){
...
}
}
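
As a concrete shape for the service scenario mentioned earlier, here is a minimal sketch. The `UserService` class and its method name are illustrative (not taken from the original article or any particular codebase), and it assumes the standard `builder.Services.AddHttpContextAccessor();` registration so that `IHttpContextAccessor` can be constructor-injected:

```csharp
using Microsoft.AspNetCore.Http;

// Hypothetical service that reads data from the current request.
// Assumed registration (e.g. in Program.cs), so DI can supply the accessor:
//     builder.Services.AddHttpContextAccessor();
public class UserService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public UserService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetCurrentUserName()
    {
        // HttpContext is null when no request is in scope
        // (e.g. in a background task), so check before use.
        var context = _httpContextAccessor.HttpContext;
        return context?.User?.Identity?.Name ?? "(no active request)";
    }
}
```

Because the accessor resolves the current context per request rather than capturing it at construction time, a service written this way can be registered once and used across many concurrent requests.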
Here are some of the useful properties and methods on the HttpContext object.
Connection: Gets the information about the underlying network connection for this request.
Request: Gets the HttpRequest object for this request.
Response: Gets the HttpResponse object for this request.
Session: Gets or sets the object used to manage the user session data for this request.
Abort(): Aborts the connection underlying the request.
In ASP.NET Core, the Kestrel web server receives the HTTP request and constructs a C# representation of the request, the HttpContext object. However, Kestrel doesn't generate the response itself but forwards the HttpContext object to the middleware pipeline in the ASP.NET Core application. Middleware is a series of components that process the incoming request and perform various operations such as authentication, caching, logging, etc.
|
[
{
"code": null,
"e": 1154,
"s": 1062,
"text": "The HttpContext encapsulates all the HTTP-specific information about a single HTTP request."
},
{
"code": null,
"e": 1360,
"s": 1154,
"text": "When an HTTP request arrives at the server, the server processes the request and builds an HttpContext object. This object represents the request which your application code can use to create the response."
},
{
"code": null,
"e": 1639,
"s": 1360,
"text": "The HttpContext object constructed by the ASP.NET Core web server acts as a container for a single request. It stores the request and response information, such as the properties of request, request-related services, and any data to/from the request or errors, if there are any."
},
{
"code": null,
"e": 1854,
"s": 1639,
"text": "ASP.NET Core applications access the HTTPContext through the IHttpContextAccessor interface. The HttpContextAccessor class implements it. You can use this class when you need to access HttpContext inside a service."
},
{
"code": null,
"e": 1936,
"s": 1854,
"text": "Here are different ways to access HttpContext from various types of applications."
},
{
"code": null,
"e": 2102,
"s": 1936,
"text": "public class HomeController : Controller{\n public IActionResult About(){\n var pathBase = HttpContext.Request.PathBase;\n\n ...\n\n return View();\n }\n}"
},
{
"code": null,
"e": 2256,
"s": 2102,
"text": "public class AboutModel : PageModel{\n public string Message { get; set; }\n\n public void OnGet(){\n Message = HttpContext.Request.PathBase;\n }\n}"
},
{
"code": null,
"e": 2430,
"s": 2256,
"text": "@{\n var username = Context.User.Identity.Name;\n\n ...\n}\nFrom middleware\npublic class MyCustomMiddleware{\n public Task InvokeAsync(HttpContext context){\n ...\n }\n}"
},
{
"code": null,
"e": 2508,
"s": 2430,
"text": "Here are some of the useful properties and methods on the HttpContext object."
},
{
"code": null,
"e": 2599,
"s": 2508,
"text": "Connection: Gets the information about the underlying network connection for this request."
},
{
"code": null,
"e": 2653,
"s": 2599,
"text": "Request: Gets the HttpRequest object for this request"
},
{
"code": null,
"e": 2709,
"s": 2653,
"text": "Response: Gets the HttpResponse object for this request"
},
{
"code": null,
"e": 2796,
"s": 2709,
"text": "Session: Gets or sets the object used to manage the user session data for this request"
},
{
"code": null,
"e": 2851,
"s": 2796,
"text": "Abort(): Aborts the connection underlying the request."
},
{
"code": null,
"e": 3291,
"s": 2851,
"text": "In ASP.NET Core, the Kestrel web server receives the HTTP request and constructs a C# representation of the request, the HttpContext object. However, Kestrel doesn't generate the response itself but forwards the HttpContext object to the middleware pipeline in the ASP.NET Core application. Middleware is a series of components that process the incoming request and perform various operations such as authentication, caching, logging, etc."
}
] |
DAX Other - EXCEPT function
|
Returns the rows of one table which do not appear in another table. DAX EXCEPT function is new in Excel 2016.
EXCEPT (<table_expression1>, <table_expression2>)
A table that contains the rows of one table minus all the rows of another table.
If a row appears in both tables, that row and its duplicates are not present in the result table.
If a row appears in only table_expression1, that row and its duplicates will appear in the result table.
The two tables must have the same number of columns.
The column names in the result table will match the column names in table_expression1.
Columns are compared based on positioning, and data comparison with no type coercion.
The set of rows returned depends on the order of the two expressions.
The returned table has lineage based on the columns in table_expression1, regardless of the lineage of the columns in the second table. For example, if the first column of first table_expression has lineage to the base column C1 in the Data Model, DAX Except function will reduce the rows based on the availability of values in the first column of table_expression2 and keep the lineage on base column C1 intact.
The returned table does not include columns from the tables related to table_expression1.
= SUMX (EXCEPT (SalesNewData,SalesOldData),[Sales Amount])
This DAX formula returns the sum of Sales Amount for those transactions that appear in the table SalesNewData but do not appear in the table SalesOldData.
|
[
{
"code": null,
"e": 2111,
"s": 2001,
"text": "Returns the rows of one table which do not appear in another table. DAX EXCEPT function is new in Excel 2016."
},
{
"code": null,
"e": 2163,
"s": 2111,
"text": "EXCEPT (<table_expression1>, <table_expression2>) \n"
},
{
"code": null,
"e": 2244,
"s": 2163,
"text": "A table that contains the rows of one table minus all the rows of another table."
},
{
"code": null,
"e": 2342,
"s": 2244,
"text": "If a row appears in both tables, that row and its duplicates are not present in the result table."
},
{
"code": null,
"e": 2440,
"s": 2342,
"text": "If a row appears in both tables, that row and its duplicates are not present in the result table."
},
{
"code": null,
"e": 2545,
"s": 2440,
"text": "If a row appears in only table_expression1, that row and its duplicates will appear in the result table."
},
{
"code": null,
"e": 2650,
"s": 2545,
"text": "If a row appears in only table_expression1, that row and its duplicates will appear in the result table."
},
{
"code": null,
"e": 2703,
"s": 2650,
"text": "The two tables must have the same number of columns."
},
{
"code": null,
"e": 2756,
"s": 2703,
"text": "The two tables must have the same number of columns."
},
{
"code": null,
"e": 2843,
"s": 2756,
"text": "The column names in the result table will match the column names in table_expression1."
},
{
"code": null,
"e": 2930,
"s": 2843,
"text": "The column names in the result table will match the column names in table_expression1."
},
{
"code": null,
"e": 3016,
"s": 2930,
"text": "Columns are compared based on positioning, and data comparison with no type coercion."
},
{
"code": null,
"e": 3102,
"s": 3016,
"text": "Columns are compared based on positioning, and data comparison with no type coercion."
},
{
"code": null,
"e": 3172,
"s": 3102,
"text": "The set of rows returned depends on the order of the two expressions."
},
{
"code": null,
"e": 3242,
"s": 3172,
"text": "The set of rows returned depends on the order of the two expressions."
},
{
"code": null,
"e": 3655,
"s": 3242,
"text": "The returned table has lineage based on the columns in table_expression1, regardless of the lineage of the columns in the second table. For example, if the first column of first table_expression has lineage to the base column C1 in the Data Model, DAX Except function will reduce the rows based on the availability of values in the first column of table_expression2 and keep the lineage on base column C1 intact."
},
{
"code": null,
"e": 4068,
"s": 3655,
"text": "The returned table has lineage based on the columns in table_expression1, regardless of the lineage of the columns in the second table. For example, if the first column of first table_expression has lineage to the base column C1 in the Data Model, DAX Except function will reduce the rows based on the availability of values in the first column of table_expression2 and keep the lineage on base column C1 intact."
},
{
"code": null,
"e": 4158,
"s": 4068,
"text": "The returned table does not include columns from the tables related to table_expression1."
},
{
"code": null,
"e": 4248,
"s": 4158,
"text": "The returned table does not include columns from the tables related to table_expression1."
},
{
"code": null,
"e": 4308,
"s": 4248,
"text": "= SUMX (EXCEPT (SalesNewData,SalesOldData),[Sales Amount]) "
},
{
"code": null,
"e": 4463,
"s": 4308,
"text": "This DAX formula returns the sum of Sales Amount for those transactions that appear in the table SalesNewData but do not appear in the table SalesOldData."
},
{
"code": null,
"e": 4498,
"s": 4463,
"text": "\n 53 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 4512,
"s": 4498,
"text": " Abhay Gadiya"
},
{
"code": null,
"e": 4545,
"s": 4512,
"text": "\n 24 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 4559,
"s": 4545,
"text": " Randy Minder"
},
{
"code": null,
"e": 4594,
"s": 4559,
"text": "\n 26 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 4608,
"s": 4594,
"text": " Randy Minder"
},
{
"code": null,
"e": 4615,
"s": 4608,
"text": " Print"
},
{
"code": null,
"e": 4626,
"s": 4615,
"text": " Add Notes"
}
] |
4 Different Ways to Center an Element using CSS - GeeksforGeeks
|
22 Feb, 2021
When we create a web page, we have most probably come across the issue of centering an element. So let us take a look at 4 different ways to center an element using CSS:
Using Flex
Margin Property
Grid Property
Absolute Property
Now let’s have a look at how these respective properties work using example.
HTML Code:
Filename: index.html
HTML
<!DOCTYPE html><html> <head> <title>Page Title</title> <link rel="stylesheet" href="styles.css" /></head> <body> <div class="parent"> <div class="child"> This element is centered </div> </div></body> </html>
In the above code, we have created a parent div and a child div. We will take a look on how to center the child div inside the parent div. A stylesheet titled styles.css has been linked to the file where we have defined the styles of the parent and child div.
CSS
.parent { height: 400px; width: 400px; background-color: red;}.child { height: 100px; width: 100px; background-color: blue;}
Method 1: Using Flex We can use Flexbox in order to center the element. We can set the display property of the parent div to flex and easily center the child div using the justify-content: center (horizontally) and align-items: center (vertically) properties.
CSS
.parent { display: flex; justify-content: center; align-items: center;}
Method 2: Margin Property Another simple way to center a child div is to set its margin to auto and make the parent div display as grid.
CSS
.parent { display: grid;}.child { margin: auto;}
Method 3: Grid Property A quite easy way to center elements is to use the grid property on the parent div and set the place-items: center.
CSS
.parent { display: grid; place-items: center;}
Method 4: Absolute Property We can also use the position property to center the elements.
CSS
.parent { position: relative;}.child { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%);}
Output:
The output of all these ways will be the same which is shown below:
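For completeness, the two files above can be combined into one self-contained page that you can open directly in a browser. This sketch uses the flex method (Method 1); the other three methods drop in the same way by swapping the .parent/.child rules:

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    .parent {
      height: 400px;
      width: 400px;
      background-color: red;
      /* Method 1: flex centering */
      display: flex;
      justify-content: center;
      align-items: center;
    }
    .child {
      height: 100px;
      width: 100px;
      background-color: blue;
    }
  </style>
</head>
<body>
  <div class="parent">
    <div class="child">This element is centered</div>
  </div>
</body>
</html>
```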
CSS-Questions
CSS
HTML
Web Technologies
HTML
|
[
{
"code": null,
"e": 24985,
"s": 24957,
"text": "\n22 Feb, 2021"
},
{
"code": null,
"e": 25152,
"s": 24985,
"text": "When we create a web page, we most probably have come across the issue to center an element. So let us take a look at 4 different ways to center an element using CSS:"
},
{
"code": null,
"e": 25208,
"s": 25152,
"text": "Using FlexMargin PropertyGrid PropertyAbsolute Property"
},
{
"code": null,
"e": 25219,
"s": 25208,
"text": "Using Flex"
},
{
"code": null,
"e": 25235,
"s": 25219,
"text": "Margin Property"
},
{
"code": null,
"e": 25249,
"s": 25235,
"text": "Grid Property"
},
{
"code": null,
"e": 25267,
"s": 25249,
"text": "Absolute Property"
},
{
"code": null,
"e": 25344,
"s": 25267,
"text": "Now let’s have a look at how these respective properties work using example."
},
{
"code": null,
"e": 25355,
"s": 25344,
"text": "HTML Code:"
},
{
"code": null,
"e": 25376,
"s": 25355,
"text": "Filename: index.html"
},
{
"code": null,
"e": 25381,
"s": 25376,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <head> <title>Page Title</title> <link rel=\"stylesheet\" href=\"styles.css\" /></head> <body> <div class=\"parent\"> <div class=\"child\"> This element is centered </div> </div></body> </html>",
"e": 25629,
"s": 25381,
"text": null
},
{
"code": null,
"e": 25889,
"s": 25629,
"text": "In the above code, we have created a parent div and a child div. We will take a look on how to center the child div inside the parent div. A stylesheet titled styles.css has been linked to the file where we have defined the styles of the parent and child div."
},
{
"code": null,
"e": 25893,
"s": 25889,
"text": "CSS"
},
{
"code": ".parent { height: 400px; width: 400px; background-color: red;}.child { height: 100px; width: 100px; background-color: blue;}",
"e": 26024,
"s": 25893,
"text": null
},
{
"code": null,
"e": 26285,
"s": 26024,
"text": "Method 1: Using Flex We can use Flexbox in order to center the element. We can set the display property of parent div as flex and can easily center the children div using justify-context : center (horizontally) and align-items : center (vertically) properties."
},
{
"code": null,
"e": 26289,
"s": 26285,
"text": "CSS"
},
{
"code": ".parent { display: flex; justify-content: center; align-items: center;}",
"e": 26364,
"s": 26289,
"text": null
},
{
"code": null,
"e": 26502,
"s": 26364,
"text": "Method 2: Margin Property Another simple way to center a child div is to set it’s margin to auto and make the parent div display as grid."
},
{
"code": null,
"e": 26506,
"s": 26502,
"text": "CSS"
},
{
"code": ".parent { display: grid;}.child { margin: auto;}",
"e": 26557,
"s": 26506,
"text": null
},
{
"code": null,
"e": 26696,
"s": 26557,
"text": "Method 3: Grid Property A quite easy way to center elements is to use the grid property on the parent div and set the place-items: center."
},
{
"code": null,
"e": 26700,
"s": 26696,
"text": "CSS"
},
{
"code": ".parent { display: grid; place-items: center;}",
"e": 26749,
"s": 26700,
"text": null
},
{
"code": null,
"e": 26839,
"s": 26749,
"text": "Method 4: Absolute Property We can also use the position property to center the elements."
},
{
"code": null,
"e": 26843,
"s": 26839,
"text": "CSS"
},
{
"code": ".parent { position: relative;}.child { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%);}",
"e": 26963,
"s": 26843,
"text": null
},
{
"code": null,
"e": 26971,
"s": 26963,
"text": "Output:"
},
{
"code": null,
"e": 27039,
"s": 26971,
"text": "The output of all these ways will be the same which is shown below:"
},
{
"code": null,
"e": 27176,
"s": 27039,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 27190,
"s": 27176,
"text": "CSS-Questions"
},
{
"code": null,
"e": 27194,
"s": 27190,
"text": "CSS"
},
{
"code": null,
"e": 27199,
"s": 27194,
"text": "HTML"
},
{
"code": null,
"e": 27216,
"s": 27199,
"text": "Web Technologies"
},
{
"code": null,
"e": 27221,
"s": 27216,
"text": "HTML"
},
{
"code": null,
"e": 27319,
"s": 27221,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27328,
"s": 27319,
"text": "Comments"
},
{
"code": null,
"e": 27341,
"s": 27328,
"text": "Old Comments"
},
{
"code": null,
"e": 27378,
"s": 27341,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 27407,
"s": 27378,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 27446,
"s": 27407,
"text": "How to set space between the flexbox ?"
},
{
"code": null,
"e": 27488,
"s": 27446,
"text": "Search Bar using HTML, CSS and JavaScript"
},
{
"code": null,
"e": 27523,
"s": 27488,
"text": "How to style a checkbox using CSS?"
},
{
"code": null,
"e": 27583,
"s": 27523,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 27644,
"s": 27583,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 27697,
"s": 27644,
"text": "Hide or show elements in HTML using display property"
},
{
"code": null,
"e": 27747,
"s": 27697,
"text": "How to Insert Form Data into Database using PHP ?"
}
] |
atomic.CompareAndSwapInt32() Function in Golang With Examples - GeeksforGeeks
|
01 Apr, 2020
In Go language, the atomic package supplies lower-level atomic memory primitives that are helpful in implementing synchronization algorithms. The CompareAndSwapInt32() function in Go language is used to perform the compare-and-swap operation for an int32 value. This function is defined under the atomic package. Here, you need to import the “sync/atomic” package in order to use this function.
Syntax:
func CompareAndSwapInt32(addr *int32, old, new int32) (swapped bool)
Here, addr indicates the address of the int32 value, old is the int32 value that *addr is expected to currently hold (the comparison value), and new is the int32 value that will replace the old value if the comparison succeeds.
Note: (*int32) is a pointer to an int32 value. And int32 is an integer type of bit size 32. Moreover, int32 contains the set of all signed 32-bit integers from -2147483648 to 2147483647.
Return Value: It returns true if swapping is accomplished else it returns false.
Example 1:
// Golang Program to illustrate the usage of// CompareAndSwapInt32 function // Including main packagepackage main // importing fmt and sync/atomicimport ( "fmt" "sync/atomic") // Main functionfunc main() { // Assigning variable values to the int32 var ( i int32 = 111 ) // Swapping var old_value = atomic.SwapInt32(&i, 498) // Printing old value and swapped value fmt.Println("Swapped:", i, ", old value:", old_value) // Calling CompareAndSwapInt32 method with its parameters Swap := atomic.CompareAndSwapInt32(&i, 498, 675) // Displays true if swapped else false fmt.Println(Swap) fmt.Println("The Value of i is: ",i)}
Output:
Swapped: 498 , old value: 111
true
The Value of i is: 675
Example 2:
// Golang Program to illustrate the usage of// CompareAndSwapInt32 function // Including main packagepackage main // importing fmt and sync/atomicimport ( "fmt" "sync/atomic") // Main functionfunc main() { // Assigning variable values to the int32 var ( i int32 = 111 ) // Swapping var old_value = atomic.SwapInt32(&i, 498) // Printing old value and swapped value fmt.Println("Swapped:", i, ", old value:", old_value) // Calling CompareAndSwapInt32 // method with its parameters Swap := atomic.CompareAndSwapInt32(&i, 111, 675) // Displays true if // swapped else false fmt.Println(Swap) fmt.Println("The Value of i is: ",i)}
Output:
Swapped: 498 , old value: 111
false
The Value of i is: 498
Here, the old value passed to CompareAndSwapInt32 must equal the current value of i, which is 498, the value set by the earlier SwapInt32 call. Since 111 no longer matches, the swap is not performed and false is returned.
GoLang-atomic
Go Language
|
[
{
"code": null,
"e": 24069,
"s": 24041,
"text": "\n01 Apr, 2020"
},
{
"code": null,
"e": 24445,
"s": 24069,
"text": "In Go language, atomic packages supply lower level atomic memory that is helpful is implementing synchronization algorithms. The CompareAndSwapInt32() function in Go language is used to perform the compare and swap operation for an int32 value. This function is defined under the atomic package. Here, you need to import “sync/atomic” package in order to use these functions."
},
{
"code": null,
"e": 24453,
"s": 24445,
"text": "Syntax:"
},
{
"code": null,
"e": 24523,
"s": 24453,
"text": "func CompareAndSwapInt32(addr *int32, old, new int32) (swapped bool)\n"
},
{
"code": null,
"e": 24736,
"s": 24523,
"text": "Here, addr indicates address, old indicates int32 value that is the old swapped value which is returned from the swapped operation, and new is the int32 new value that will swap itself from the old swapped value."
},
{
"code": null,
"e": 24921,
"s": 24736,
"text": "Note: (*int32) is the pointer to a int32 value. And int32 is integer type of bit size 32. Moreover, int32 contains the set of all signed 32-bit integers from -2147483648 to 2147483647."
},
{
"code": null,
"e": 25002,
"s": 24921,
"text": "Return Value: It returns true if swapping is accomplished else it returns false."
},
{
"code": null,
"e": 25013,
"s": 25002,
"text": "Example 1:"
},
{
"code": "// Golang Program to illustrate the usage of// CompareAndSwapInt32 function // Including main packagepackage main // importing fmt and sync/atomicimport ( \"fmt\" \"sync/atomic\") // Main functionfunc main() { // Assigning variable values to the int32 var ( i int32 = 111 ) // Swapping var old_value = atomic.SwapInt32(&i, 498) // Printing old value and swapped value fmt.Println(\"Swapped:\", i, \", old value:\", old_value) // Calling CompareAndSwapInt32 method with its parameters Swap := atomic.CompareAndSwapInt32(&i, 498, 675) // Displays true if swapped else false fmt.Println(Swap) fmt.Println(\"The Value of i is: \",i)}",
"e": 25695,
"s": 25013,
"text": null
},
{
"code": null,
"e": 25703,
"s": 25695,
"text": "Output:"
},
{
"code": null,
"e": 25763,
"s": 25703,
"text": "Swapped: 498 , old value: 111\ntrue\nThe Value of i is: 675\n"
},
{
"code": null,
"e": 25774,
"s": 25763,
"text": "Example 2:"
},
{
"code": "// Golang Program to illustrate the usage of// CompareAndSwapInt32 function // Including main packagepackage main // importing fmt and sync/atomicimport ( \"fmt\" \"sync/atomic\") // Main functionfunc main() { // Assigning variable values to the int32 var ( i int32 = 111 ) // Swapping var old_value = atomic.SwapInt32(&i, 498) // Printing old value and swapped value fmt.Println(\"Swapped:\", i, \", old value:\", old_value) // Calling CompareAndSwapInt32 // method with its parameters Swap := atomic.CompareAndSwapInt32(&i, 111, 675) // Displays true if // swapped else false fmt.Println(Swap) fmt.Println(\"The Value of i is: \",i)}",
"e": 26468,
"s": 25774,
"text": null
},
{
"code": null,
"e": 26476,
"s": 26468,
"text": "Output:"
},
{
"code": null,
"e": 26537,
"s": 26476,
"text": "Swapped: 498 , old value: 111\nfalse\nThe Value of i is: 498\n"
},
{
"code": null,
"e": 26714,
"s": 26537,
"text": "Here, the old value in the CompareAndSwapInt32 method must be the swapped value returned from the SwapInt32 method. And here the swapping is not performed so false is returned."
},
{
"code": null,
"e": 26728,
"s": 26714,
"text": "GoLang-atomic"
},
{
"code": null,
"e": 26740,
"s": 26728,
"text": "Go Language"
},
{
"code": null,
"e": 26838,
"s": 26740,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26847,
"s": 26838,
"text": "Comments"
},
{
"code": null,
"e": 26860,
"s": 26847,
"text": "Old Comments"
},
{
"code": null,
"e": 26889,
"s": 26860,
"text": "How to Parse JSON in Golang?"
},
{
"code": null,
"e": 26913,
"s": 26889,
"text": "Defer Keyword in Golang"
},
{
"code": null,
"e": 26928,
"s": 26913,
"text": "Rune in Golang"
},
{
"code": null,
"e": 26962,
"s": 26928,
"text": "Anonymous function in Go Language"
},
{
"code": null,
"e": 26983,
"s": 26962,
"text": "Loops in Go Language"
},
{
"code": null,
"e": 27010,
"s": 26983,
"text": "Class and Object in Golang"
},
{
"code": null,
"e": 27031,
"s": 27010,
"text": "Structures in Golang"
},
{
"code": null,
"e": 27056,
"s": 27031,
"text": "Time Durations in Golang"
},
{
"code": null,
"e": 27074,
"s": 27056,
"text": "Strings in Golang"
}
] |
How to Create a Timeline in Jupyter Notebook | by Alon Lekhtman | Towards Data Science
|
When dealing with temporal data it is always useful to be able to see the data on a timeline in order to understand it better.
There are some built-in python solutions like this and this. However, I found them not flexible enough. I want to propose another solution that is a bit more complex but gives you much more flexibility. Integrate a really great javascript library (vis.js) that deals with timelines on a general-purpose web application into a Jupyter Notebook (don’t worry it is not as complex as it may sound).
1. Install vis.js timeline:
npm install vis-timeline
2. Generate config for your Jupyter Notebook (If you don’t already have one):
jupyter notebook --generate-config
3. Open for editing the config file, located at ~/.jupyter/jupyter_notebook_config.py (Linux and macOS, on windows it is on equivalent location). Uncomment the line with c.NotebookApp.extra_static_path in the file. Put the path to the installed vis-timeline dist folder, it should be something like this:
c.NotebookApp.extra_static_paths = ['~/node_modules/vis-timeline/dist']
(pay attention that on windows you need to use double backslash instead of a single slash in the path).
4. Run (or restart if it was running before):
jupyter notebook
Now your notebook is ready to work with vis.js timelines.
You will need 3 Jupyter Notebook cells for this:
1. A Python cell, you can create your data here. For this simple example, I created two events, one on December 20, and one on December 25. This code puts our events on the window of the browser, so it can later be used in a javascript cell.
2. A Html cell, with a placeholder for our timeline.
3. A Javascript cell. We import the js and css files of vis.js itself, then get our events from the window, and use the vis.js library to put them in a timeline object.
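The three cells can be sketched as follows. This is an outline rather than the exact gists: the events are made up, and the dist file name (vis-timeline-graph2d.min.js) and the /static prefix are assumptions that depend on your vis-timeline version and the extra_static_paths setting above.

```python
import json

# Cell 1 (Python): two hypothetical events, Dec 20 and Dec 25. In the notebook
# this string is emitted via IPython.display.Javascript so the data lands on
# the browser window for the JavaScript cell to pick up.
events = [
    {"id": 1, "content": "Event A", "start": "2020-12-20"},
    {"id": 2, "content": "Event B", "start": "2020-12-25"},
]
data_cell_js = "window.timelineEvents = %s;" % json.dumps(events)

# Cell 2 (HTML): just a placeholder element for the timeline.
html_cell = '<div id="timeline"></div>'

# Cell 3 (JavaScript): load vis.js from the extra static path configured
# earlier and draw the timeline into the placeholder.
timeline_cell_js = """
require(['/static/vis-timeline-graph2d.min.js'], function (vis) {
  var container = document.getElementById('timeline');
  new vis.Timeline(container, new vis.DataSet(window.timelineEvents), {});
});
"""

print(data_cell_js)
```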
Now you should see this basic timeline in the HTML cell:
The timeline doesn’t look impressive at this point. However, with the approach introduced here, you can bring all the flexibility there is in the javascript library to your Jupyter Notebook ( Add a functionality when clicking on an event, create a different style for each node, and much much more... )
In order to understand how to create more complex timelines, you can see all the documentation here.
|
[
{
"code": null,
"e": 298,
"s": 171,
"text": "When dealing with temporal data it is always useful to be able to see the data on a timeline in order to understand it better."
},
{
"code": null,
"e": 693,
"s": 298,
"text": "There are some built-in python solutions like this and this. However, I found them not flexible enough. I want to propose another solution that is a bit more complex but gives you much more flexibility. Integrate a really great javascript library (vis.js) that deals with timelines on a general-purpose web application into a Jupyter Notebook (don’t worry it is not as complex as it may sound)."
},
{
"code": null,
"e": 718,
"s": 693,
"text": "Install vis.js timeline:"
},
{
"code": null,
"e": 743,
"s": 718,
"text": "Install vis.js timeline:"
},
{
"code": null,
"e": 768,
"s": 743,
"text": "npm install vis-timeline"
},
{
"code": null,
"e": 846,
"s": 768,
"text": "2. Generate config for your Jupyter Notebook (If you don’t already have one):"
},
{
"code": null,
"e": 881,
"s": 846,
"text": "jupyter notebook --generate-config"
},
{
"code": null,
"e": 1186,
"s": 881,
"text": "3. Open for editing the config file, located at ~/.jupyter/jupyter_notebook_config.py (Linux and macOS, on windows it is on equivalent location). Uncomment the line with c.NotebookApp.extra_static_path in the file. Put the path to the installed vis-timeline dist folder, it should be something like this:"
},
{
"code": null,
"e": 1258,
"s": 1186,
"text": "c.NotebookApp.extra_static_paths = ['~/node_modules/vis-timeline/dist']"
},
{
"code": null,
"e": 1362,
"s": 1258,
"text": "(pay attention that on windows you need to use double backslash instead of a single slash in the path)."
},
{
"code": null,
"e": 1408,
"s": 1362,
"text": "4. Run (or restart if it was running before):"
},
{
"code": null,
"e": 1425,
"s": 1408,
"text": "jupyter notebook"
},
{
"code": null,
"e": 1483,
"s": 1425,
"text": "Now your notebook is ready to work with vis.js timelines."
},
{
"code": null,
"e": 1532,
"s": 1483,
"text": "You will need 3 Jupyter Notebook cells for this:"
},
{
"code": null,
"e": 1771,
"s": 1532,
"text": "A Python cell, you can create your data here. For this simple example, I created two events, one on December 20, and one on December 25. This code puts our events on the window of the browser, so it can later be used in a javascript cell."
},
{
"code": null,
"e": 2010,
"s": 1771,
"text": "A Python cell, you can create your data here. For this simple example, I created two events, one on December 20, and one on December 25. This code puts our events on the window of the browser, so it can later be used in a javascript cell."
},
{
"code": null,
"e": 2063,
"s": 2010,
"text": "2. A Html cell, with a placeholder for our timeline."
},
{
"code": null,
"e": 2232,
"s": 2063,
"text": "3. A Javascript cell. We import the js and css files of vis.js itself, then get our events from the window, and use the vis.js library to put them in a timeline object."
},
{
"code": null,
"e": 2289,
"s": 2232,
"text": "Now you should see this basic timeline in the HTML cell:"
},
{
"code": null,
"e": 2592,
"s": 2289,
"text": "The timeline doesn’t look impressive at this point. However, with the approach introduced here, you can bring all the flexibility there is in the javascript library to your Jupyter Notebook ( Add a functionality when clicking on an event, create a different style for each node, and much much more... )"
}
] |
pow() function in C
|
The function pow() is used to raise the base value to the given power. It takes two arguments. It returns the base value raised to the given power. It is declared in the “math.h” header file.
Here is the syntax of pow() in C language,
double pow(double val1, double val2);
Here,
val1 − The base value whose power is to be calculated.
val2 − The power value.
Here is an example of pow() in C language,
#include<stdio.h>
#include<math.h>
int main() {
double x = 5.5;
double y = 4.0;
double p;
p = pow(x, y);
printf("The value : %lf", p);
return 0;
}
The value : 915.062500
On some online compilers, the following error may occur.
undefined reference to `pow'
error: ld returned 1 exit status
The above error occurs because we have added “math.h” header file, but haven’t linked the program to the following math library.
libm.a
Link the program with the above library, so that the call to function pow() is resolved.
|
[
{
"code": null,
"e": 1249,
"s": 1062,
"text": "The function pow() is used to calculate the power raised to the base value. It takes two arguments. It returns the power raised to the base value. It is declared in “math.h” header file."
},
{
"code": null,
"e": 1292,
"s": 1249,
"text": "Here is the syntax of pow() in C language,"
},
{
"code": null,
"e": 1330,
"s": 1292,
"text": "double pow(double val1, double val2);"
},
{
"code": null,
"e": 1336,
"s": 1330,
"text": "Here,"
},
{
"code": null,
"e": 1391,
"s": 1336,
"text": "val1 − The base value whose power is to be calculated."
},
{
"code": null,
"e": 1415,
"s": 1391,
"text": "val2 − The power value."
},
{
"code": null,
"e": 1458,
"s": 1415,
"text": "Here is an example of pow() in C language,"
},
{
"code": null,
"e": 1623,
"s": 1458,
"text": "#include<stdio.h>\n#include<math.h>\nint main() {\n double x = 5.5;\n double y = 4.0;\n double p;\n p = pow(x, y);\n printf(\"The value : %lf\", p);\n return 0;\n}"
},
{
"code": null,
"e": 1646,
"s": 1623,
"text": "The value : 915.062500"
},
{
"code": null,
"e": 1703,
"s": 1646,
"text": "On some online compilers, the following error may occur."
},
{
"code": null,
"e": 1765,
"s": 1703,
"text": "undefined reference to `pow'\nerror: ld returned 1 exit status"
},
{
"code": null,
"e": 1894,
"s": 1765,
"text": "The above error occurs because we have added “math.h” header file, but haven’t linked the program to the following math library."
},
{
"code": null,
"e": 1901,
"s": 1894,
"text": "libm.a"
},
{
"code": null,
"e": 1990,
"s": 1901,
"text": "Link the program with the above library, so that the call to function pow() is resolved."
}
] |
Predicting a Hotel Booking Demand | by Dimas Adnan | Towards Data Science
|
Here in this article, I want to write about how to build a prediction model using Python with Jupyter Notebook. The data that I am using for this experiment is the Hotel Booking Demand dataset from Kaggle.
In this article, I will only show you the modeling stage, with just the Logistic Regression model, but you can access the full document, including the Data Cleaning, Preprocessing, and Exploratory Data Analysis on my Github.
Without further ado, let’s do it.
Importing Libraries
Loading the Dataset
And here is how the dataset looks.
It has 32 columns, and the full version of it is:
['hotel', 'is_canceled', 'lead_time', 'arrival_date_year', 'arrival_date_month', 'arrival_date_week_number', 'arrival_date_day_of_month', 'stays_in_weekend_nights', 'stays_in_week_nights', 'adults', 'children', 'babies', 'meal', 'country', 'market_segment', 'distribution_channel', 'is_repeated_guest', 'previous_cancellations', 'previous_bookings_not_canceled', 'reserved_room_type', 'assigned_room_type', 'booking_changes', 'deposit_type', 'agent', 'company', 'days_in_waiting_list', 'customer_type', 'adr', 'required_car_parking_spaces', 'total_of_special_requests', 'reservation_status', 'reservation_status_date']
Based on the info that I have run on the notebook, the NaN values from the dataset can be found on three columns, which are ’country’, ’agent,’ and ’company.’
I replace the NaN values in ‘country’ with PRT (Portugal) based on the ‘lead_time’ column, where PRT is the most common country among rows with lead_time = 118.
I was trying to replace the NaN values on the ‘agent’ feature based on lead_time, arrival_date_month, and arrival_date_week_number, but most of them have ‘240’ as the most common agent. After I read the description and explanation of the dataset that can be found on the internet, the author(s) describe the ‘agent’ feature as “ID of the travel agency that made the bookings.” So, those who have ‘agent’ in the dataset are the only ones that made the booking through a travel agency, and those who don’t have ‘agent’, or whose value is NaN, are those who did not book through a travel agency. So, based on that, I think it is best to fill the NaN values with 0 rather than fill them with an agent ID, which would make the dataset different from the original.
And last but not least, I choose to drop the entire ‘company’ feature because the NaN in that feature is about 96% of the data. If I decide to modify the data, it could make a massive difference to the data and could change the entire data, especially in the company feature.
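Condensed into code, the cleaning steps above look roughly like this. This is a sketch on a tiny made-up frame, not the notebook's actual cells:

```python
import pandas as pd
import numpy as np

# Toy frame mimicking the three columns with missing values
df = pd.DataFrame({
    "lead_time": [118, 118, 30],
    "country":   [np.nan, "PRT", "GBR"],
    "agent":     [np.nan, 240.0, 9.0],
    "company":   [np.nan, np.nan, 40.0],
})

df["country"] = df["country"].fillna("PRT")  # most common country for lead_time = 118
df["agent"] = df["agent"].fillna(0)          # 0 = booking made without a travel agency
df = df.drop(columns=["company"])            # ~96% missing, so drop the whole feature

print(int(df.isna().sum().sum()))  # 0: no missing values left
```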
Splitting the Dataset
I am trying to split the dataset based on the top 5 features that have the most significant correlation to the target (is_canceled), which are ‘required_car_parking_spaces’, ‘lead_time’, ‘booking_changes’, ‘adr’, and ‘adults’, together with the target column ‘is_canceled’ itself.
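A minimal sketch of that split with scikit-learn, using made-up numbers instead of the real dataset:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Tiny synthetic stand-in for the five selected features plus the target
df = pd.DataFrame({
    "required_car_parking_spaces": [0, 1, 0, 0, 1, 0, 0, 1],
    "lead_time":       [118, 3, 60, 200, 10, 45, 300, 7],
    "booking_changes": [0, 1, 0, 2, 0, 0, 1, 0],
    "adr":             [75.0, 98.5, 50.0, 120.0, 60.0, 80.0, 110.0, 95.0],
    "adults":          [2, 1, 2, 2, 1, 2, 3, 2],
    "is_canceled":     [0, 1, 0, 1, 0, 1, 0, 1],
})

X = df.drop(columns=["is_canceled"])
y = df["is_canceled"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
print(len(X_train), len(X_test))  # 6 2
```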
Fitting the Model
model_LogReg_Asli is the original model that used Logistic Regression before using Hyperparameter Tuning, and here is the model prediction.
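The fit itself is only a few lines of scikit-learn. The sketch below runs on synthetic data, so it will not reproduce the accuracy figure reported in the article:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 5 features, mirroring the selection above
# Synthetic target driven mostly by one feature, with some noise
y = (X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(round(acc, 3))
```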
Model Performance
As you can see above, the Logistic Regression model has an accuracy of about 69.3 percent.
Model Parameter
Logistic Regression with Randomized Search CV
model_LR_RS is the model that used Logistic Regression with Hyperparameter Tuning (Randomized).
As you can see in the above picture, the Logistic Regression model with Randomized Search CV has the exact same result as without it, which is 69.3 percent.
Logistic Regression with Grid Search CV
model_LR2_GS is the model that used Logistic Regression with Hyperparameter Tuning (Grid Search).
The image above shows that the Logistic Regression model with Grid Search CV has the exact same accuracy, which is 69.3 percent.
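Both searches follow the same scikit-learn pattern. The sketch below uses a synthetic problem and a made-up parameter grid, so the scores are illustrative only:

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# Grid search tries every combination in an explicit grid
param_grid = {"C": [0.01, 0.1, 1, 10], "penalty": ["l2"]}
grid = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=3).fit(X, y)

# Randomized search samples n_iter candidates from distributions
rand = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    {"C": loguniform(1e-3, 1e2)},
    n_iter=5, cv=3, random_state=0).fit(X, y)

print(grid.best_params_, round(grid.best_score_, 3))
print(rand.best_params_, round(rand.best_score_, 3))
```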
Evaluation Model
TN is True Negative, FN is False Negative, FP is False Positive, and TP is True Positive, while 0 is Not Canceled, and 1 is Canceled. Here below is the classification report for the model.
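With scikit-learn, the confusion matrix and the classification report come from two helper functions. The labels below are hypothetical predictions, not the model's actual output:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical labels: 0 = Not Canceled, 1 = Canceled
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [0, 0, 1, 0, 0, 1, 1, 1, 1, 0]

cm = confusion_matrix(y_true, y_pred)  # rows: actual, cols: predicted
tn, fp, fn, tp = cm.ravel()
print("TN", tn, "FP", fp, "FN", fn, "TP", tp)
print(classification_report(y_true, y_pred,
                            target_names=["Not Canceled", "Canceled"]))
```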
In this article, once again, I am only using Logistic Regression for the testing, but you can use other kinds of models like Random Forest, Decision Tree, and so on. On my Github, I have tried the Random Forest Classifier too, but the result is pretty similar.
That is all for this article. Thank you and have a nice day.
|
[
{
"code": null,
"e": 378,
"s": 172,
"text": "Here in this article, I want to write about how to build a prediction model using Python with Jupyter Notebook. The data that I am using for this experiment is the Hotel Booking Demand dataset from Kaggle."
},
{
"code": null,
"e": 603,
"s": 378,
"text": "In this article, I will only show you the modeling stage, with just the Logistic Regression model, but you can access the full document, including the Data Cleaning, Preprocessing, and Exploratory Data Analysis on my Github."
},
{
"code": null,
"e": 637,
"s": 603,
"text": "Without further ado, let’s do it."
},
{
"code": null,
"e": 657,
"s": 637,
"text": "Importing Libraries"
},
{
"code": null,
"e": 677,
"s": 657,
"text": "Loading the Dataset"
},
{
"code": null,
"e": 712,
"s": 677,
"text": "And here is how the dataset looks."
},
{
"code": null,
"e": 762,
"s": 712,
"text": "It has 32 columns, and the full version of it is:"
},
{
"code": null,
"e": 1441,
"s": 762,
"text": "['hotel', 'is_canceled', 'lead_time', 'arrival_date_year', 'arrival_date_month', 'arrival_date_week_number', 'arrival_date_day_of_month', 'stays_in_weekend_nights', 'stays_in_week_nights', 'adults', 'children', 'babies', 'meal', 'country', 'market_segment', 'distribution_channel', 'is_repeated_guest', 'previous_cancellations', 'previous_bookings_not_canceled', 'reserved_room_type', 'assigned_room_type', 'booking_changes', 'deposit_type', 'agent', 'company', 'days_in_waiting_list', 'customer_type', 'adr', 'required_car_parking_spaces', 'total_of_special_requests', 'reservation_status', 'reservation_status_date']"
},
{
"code": null,
"e": 1600,
"s": 1441,
"text": "Based on the info that I have run on the notebook, the NaN values from the dataset can be found on three columns, which are ’country’, ’agent,’ and ’company.’"
},
{
"code": null,
"e": 1786,
"s": 1600,
"text": "I replace the NaN value in ‘country’ with PRT (Portugal) based on the ‘lead_time’ feature based on the ‘lead_time’ column, where PRT is the most common country in which lead_time = 118."
},
{
"code": null,
"e": 2545,
"s": 1786,
"text": "I was trying to replace the NaN values on the ‘agent’ feature based on lead_time, arrival_date_month, and arrival_date_week_number, but most of them have ‘240’ as the most common agent. After I read the description and explanation of the dataset that can be found on the internet, the author(s) describe the ‘agent’ feature as “ID of the travel agency that made the bookings.” So, those who have ‘agent’ in the dataset are the only ones that made the book through a travel agency, and those who don’t have ‘agent’ or the value is Nan, are those who did not make the book through a travel agency. So, based on that, I think it is best to fill the NaN values with 0 other than fill them with the agent, which would make the dataset different from the original."
},
{
"code": null,
"e": 2821,
"s": 2545,
"text": "And last but not least, I choose to drop the entire ‘company’ feature because the NaN in that feature is about 96% of the data. If I decide to modify the data, it could make a massive difference to the data and could change the entire data, especially in the company feature."
},
{
"code": null,
"e": 2843,
"s": 2821,
"text": "Splitting the Dataset"
},
{
"code": null,
"e": 3075,
"s": 2843,
"text": "I am trying to split the dataset based on the top 5 that have the most significant correlation to the target (is_canceled), which are required_car_parking_spaces’, ’lead_time’, ’booking_changes’, ’adr’, ’adults,’ and ‘is_canceled.’"
},
{
"code": null,
"e": 3093,
"s": 3075,
"text": "Fitting the Model"
},
{
"code": null,
"e": 3233,
"s": 3093,
"text": "model_LogReg_Asli is the original model that used Logistic Regression before using Hyperparameter Tuning, and here is the model prediction."
},
{
"code": null,
"e": 3251,
"s": 3233,
"text": "Model Performance"
},
{
"code": null,
"e": 3339,
"s": 3251,
"text": "As you can see above, the Logistic Regression model has about 69.3 percent of accuracy."
},
{
"code": null,
"e": 3355,
"s": 3339,
"text": "Model Parameter"
},
{
"code": null,
"e": 3401,
"s": 3355,
"text": "Logistic Regression with Randomized Search CV"
},
{
"code": null,
"e": 3497,
"s": 3401,
"text": "model_LR_RS is the model that used Logistic Regression with Hyperparameter Tuning (Randomized)."
},
{
"code": null,
"e": 3654,
"s": 3497,
"text": "As you can see in the above picture, the Logistic Regression model with Randomized Search CV has the exact same result as without it, which is 69.3 percent."
},
{
"code": null,
"e": 3694,
"s": 3654,
"text": "Logistic Regression with Grid Search CV"
},
{
"code": null,
"e": 3792,
"s": 3694,
"text": "model_LR2_GS is the model that used Logistic Regression with Hyperparameter Tuning (Grid Search)."
},
{
"code": null,
"e": 3921,
"s": 3792,
"text": "The image above shows that the Logistic Regression model with Grid Search CV has the exact same accuracy, which is 69.3 percent."
},
{
"code": null,
"e": 3938,
"s": 3921,
"text": "Evaluation Model"
},
{
"code": null,
"e": 4127,
"s": 3938,
"text": "TN is True Negative, FN is False Negative, FP is False Positive, and TP is True Positive, while 0 is Not Canceled, and 1 is Canceled. Here below is the classification report for the model."
},
{
"code": null,
"e": 4388,
"s": 4127,
"text": "In this article, once again, I am only using Logistic Regression for the testing, but you can use other kinds of models like Random Forest, Decision Tree, and so on. On my Github, I have tried the Random Forest Classifier too, but the result is pretty similar."
}
] |
Differentiate the NULL pointer with Void pointer in C language
|
The difference between Null pointer and Void pointer is that Null pointer is a value and Void pointer is a type.
A null pointer means it is not pointing to anything. If there is no address assigned to a pointer, then set it to null.
Each pointer type, i.e., int *, char *, has its own null pointer value.
The syntax is as follows −
<data type> *<variable name> = NULL;
For example,
int *p = NULL;
char *p = '\0';
Following is the C program for NULL pointer −
#include<stdio.h>
int main(){
printf("TutorialPoint C Programming");
int *p = NULL; // ptr is a NULL pointer
printf("\n The value of pointer is: %x ", p);
return 0;
}
When the above program is executed, it produces the following result −
TutorialPoint C Programming
The value of pointer is: 0
A void pointer is one that does not have any data type associated with it. It is also called a general-purpose pointer. It can hold the address of any data type.
The syntax is as follows −
void *<data type>;
For example,
void *p;
int a; char c;
p = &a; //p changes to integer pointer as address of integer is assigned to it
p = &c; //p changes to character pointer as address of character is assigned to it
Following is the C program for Void Pointer −
#include<stdio.h>
int main(){
int a = 10;
void *ptr = &a;
printf("%d", *(int *)ptr);
return 0;
}
When the above program is executed, it produces the following result −
10
|
[
{
"code": null,
"e": 1175,
"s": 1062,
"text": "The difference between Null pointer and Void pointer is that Null pointer is a value and Void pointer is a type."
},
{
"code": null,
"e": 1304,
"s": 1175,
"text": "A null pointer means it is not pointing to anything. If, there is no address that is assigned to a pointer, then set it to null."
},
{
"code": null,
"e": 1372,
"s": 1304,
"text": "A pointer type, i.e., int *, char * each have a null pointer value."
},
{
"code": null,
"e": 1399,
"s": 1372,
"text": "The syntax is as follows −"
},
{
"code": null,
"e": 1436,
"s": 1399,
"text": "<data type> *<variable name> = NULL;"
},
{
"code": null,
"e": 1449,
"s": 1436,
"text": "For example,"
},
{
"code": null,
"e": 1480,
"s": 1449,
"text": "int *p = NULL;\nchar *p = '\\0';"
},
{
"code": null,
"e": 1526,
"s": 1480,
"text": "Following is the C program for NULL pointer −"
},
{
"code": null,
"e": 1537,
"s": 1526,
"text": " Live Demo"
},
{
"code": null,
"e": 1716,
"s": 1537,
"text": "#include<stdio.h>\nint main(){\n printf(\"TutorialPoint C Programming\");\n int *p = NULL; // ptr is a NULL pointer\n printf(\"\\n The value of pointer is: %x \", p);\n return 0;\n}"
},
{
"code": null,
"e": 1787,
"s": 1716,
"text": "When the above program is executed, it produces the following result −"
},
{
"code": null,
"e": 1842,
"s": 1787,
"text": "TutorialPoint C Programming\nThe value of pointer is: 0"
},
{
"code": null,
"e": 2013,
"s": 1842,
"text": "A void pointer is nothing but the one who does not have any data type with it. It is also called as a general purpose pointer. It can hold the addresses of any data type."
},
{
"code": null,
"e": 2041,
"s": 2013,
"text": "Thee syntax is as follows −"
},
{
"code": null,
"e": 2060,
"s": 2041,
"text": "void *<data type>;"
},
{
"code": null,
"e": 2073,
"s": 2060,
"text": "For example,"
},
{
"code": null,
"e": 2097,
"s": 2073,
"text": "void *p;\nint a; char c;"
},
{
"code": null,
"e": 2176,
"s": 2097,
"text": "p = &a; //p changes to integer pointer as address of integer is assigned to it"
},
{
"code": null,
"e": 2259,
"s": 2176,
"text": "p = &c; //p changes to character pointer as address of character is assigned to it"
},
{
"code": null,
"e": 2305,
"s": 2259,
"text": "Following is the C program for Void Pointer −"
},
{
"code": null,
"e": 2316,
"s": 2305,
"text": " Live Demo"
},
{
"code": null,
"e": 2425,
"s": 2316,
"text": "#include<stdio.h>\nint main(){\n int a = 10;\n void *ptr = &a;\n printf(\"%d\", *(int *)ptr);\n return 0;\n}"
},
{
"code": null,
"e": 2496,
"s": 2425,
"text": "When the above program is executed, it produces the following result −"
},
{
"code": null,
"e": 2499,
"s": 2496,
"text": "10"
}
] |
Find the Minimum length Unsorted Subarray, sorting which makes the complete array sorted in Python
|
Suppose we have a given unsorted array A[0..n-1] of size n, we have to find the minimum length subarray A[s..e] so that by sorting this subarray the whole array will be sorted. So, if the array is like [2,6,4,8,10,9,15], then the output will be 5. The subarray will be [6,4,8,10,9].
To solve this, we will follow these steps −
res := sort the nums as an array
ans := 0
set r as a linked list
for i in range 0 to length of res
   if nums[i] is not same as res[i], then insert i into the r
if length of r is 0, then return 0; if length of r is 1, then return 1
return last element of r – first element of r + 1
Let us see the following implementation to get a better understanding −
class Solution(object):
def findUnsortedSubarray(self, nums):
res = sorted(nums)
ans = 0
r = []
for i in range(len(res)):
if nums[i] != res[i]:
r.append(i)
if not len(r):
return 0
if len(r) == 1:
return 1
return r[-1]-r[0]+1
ob1 = Solution()
print(ob1.findUnsortedSubarray([2,6,4,8,10,9,15]))
[2,6,4,8,10,9,15]
5
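As a side note — not part of the original solution — the same length can be found in O(n) without sorting, by tracking a running maximum from the left and a running minimum from the right. A hedged sketch:

```python
def find_unsorted_len(nums):
    """Length of the minimum subarray that, once sorted, sorts the whole array."""
    n = len(nums)
    # Rightmost index that is smaller than some element before it.
    right, max_seen = -1, float("-inf")
    for i in range(n):
        if nums[i] < max_seen:
            right = i
        else:
            max_seen = nums[i]
    if right == -1:  # the array is already sorted
        return 0
    # Leftmost index that is larger than some element after it.
    left, min_seen = n, float("inf")
    for i in range(n - 1, -1, -1):
        if nums[i] > min_seen:
            left = i
        else:
            min_seen = nums[i]
    return right - left + 1

print(find_unsorted_len([2, 6, 4, 8, 10, 9, 15]))  # 5
```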
|
[
{
"code": null,
"e": 1345,
"s": 1062,
"text": "Suppose we have a given unsorted array A[0..n-1] of size n, we have to find the minimum length subarray A[s..e] so that by sorting this subarray the whole array will be sorted. So, if the array is like [2,6,4,8,10,9,15], then the output will be 5. The subarray will be [6,4,8,10,9]."
},
{
"code": null,
"e": 1389,
"s": 1345,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1422,
"s": 1389,
"text": "res := sort the nums as an array"
},
{
"code": null,
"e": 1455,
"s": 1422,
"text": "res := sort the nums as an array"
},
{
"code": null,
"e": 1464,
"s": 1455,
"text": "ans := 0"
},
{
"code": null,
"e": 1473,
"s": 1464,
"text": "ans := 0"
},
{
"code": null,
"e": 1496,
"s": 1473,
"text": "set r as a linked list"
},
{
"code": null,
"e": 1519,
"s": 1496,
"text": "set r as a linked list"
},
{
"code": null,
"e": 1611,
"s": 1519,
"text": "for i in range 0 to length of resif nums[i] is not same as res[i], then insert i into the r"
},
{
"code": null,
"e": 1645,
"s": 1611,
"text": "for i in range 0 to length of res"
},
{
"code": null,
"e": 1704,
"s": 1645,
"text": "if nums[i] is not same as res[i], then insert i into the r"
},
{
"code": null,
"e": 1763,
"s": 1704,
"text": "if nums[i] is not same as res[i], then insert i into the r"
},
{
"code": null,
"e": 1834,
"s": 1763,
"text": "if length of r is 0, then return 0, if length of r is 1, then return 1"
},
{
"code": null,
"e": 1905,
"s": 1834,
"text": "if length of r is 0, then return 0, if length of r is 1, then return 1"
},
{
"code": null,
"e": 1955,
"s": 1905,
"text": "return last element of r – first element of r + 1"
},
{
"code": null,
"e": 2005,
"s": 1955,
"text": "return last element of r – first element of r + 1"
},
{
"code": null,
"e": 2075,
"s": 2005,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 2086,
"s": 2075,
"text": " Live Demo"
},
{
"code": null,
"e": 2463,
"s": 2086,
"text": "class Solution(object):\n def findUnsortedSubarray(self, nums):\n res = sorted(nums)\n ans = 0\n r = []\n for i in range(len(res)):\n if nums[i] != res[i]:\n r.append(i)\n if not len(r):\n return 0\n if len(r) == 1:\n return 1\n return r[-1]-r[0]+1\nob1 = Solution()\nprint(ob1.findUnsortedSubarray([2,6,4,8,10,9,15]))"
},
{
"code": null,
"e": 2481,
"s": 2463,
"text": "[2,6,4,8,10,9,15]"
},
{
"code": null,
"e": 2483,
"s": 2481,
"text": "5"
}
] |
How to create circular ProgressBar in Android using Kotlin?
|
This example demonstrates how to create a circular ProgressBar in Android using Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="8dp"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="70dp"
android:background="#008080"
android:padding="5dp"
android:text="TutorialsPoint"
android:textColor="#fff"
android:textSize="24sp"
android:textStyle="bold" />
<ProgressBar
android:id="@+id/circularProgressbar"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="250dp"
android:layout_height="250dp"
android:layout_centerInParent="true"
android:indeterminate="false"
android:max="100"
android:progress="50"
android:secondaryProgress="100" />
<TextView
android:id="@+id/textView"
android:layout_width="250dp"
android:layout_height="250dp"
android:layout_centerInParent="true"
android:gravity="center"
android:text="25%"
android:textColor="@color/colorPrimaryDark"
android:textSize="24sp" />
</RelativeLayout>
Step 3 − Create a drawable resource file (circularprogressbar.xml) and add the following code
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:tools="http://schemas.android.com/tools"
xmlns:android="http://schemas.android.com/apk/res/android"
tools:ignore="ExtraText">
<item android:id="@android:id/secondaryProgress">
<shape
android:innerRadiusRatio="6"
android:shape="ring"
android:thicknessRatio="20.0"
android:useLevel="true">
<gradient
android:centerColor="@android:color/holo_green_light"
android:endColor="@android:color/holo_red_dark"
android:startColor="@android:color/white"
android:type="sweep" />
</shape>
</item>
<item android:id="@android:id/progress">
<rotate
android:fromDegrees="270"
android:pivotX="50%"
android:pivotY="50%"
android:toDegrees="270">
<shape
android:innerRadiusRatio="6"
android:shape="ring"
android:thicknessRatio="20.0"
android:useLevel="true">
<gradient
android:centerColor="@android:color/holo_blue_dark"
android:endColor="@android:color/holo_purple"
android:startColor="@android:color/holo_orange_dark"
android:type="sweep" />
<rotate
android:fromDegrees="0"
android:pivotX="50%"
android:pivotY="50%"
android:toDegrees="360" />
</shape>
</rotate>
</item>
<item android:id="@android:id/secondaryProgress">
<shape
android:innerRadiusRatio="6"
android:shape="ring"
android:thicknessRatio="20.0"
android:useLevel="true">
<gradient
android:centerColor="@android:color/holo_blue_dark"
android:endColor="@android:color/holo_purple"
android:startColor="@android:color/holo_orange_dark"
android:type="sweep" />
</shape>
</item>
</layer-list>
Step 4 − Add the following code to src/MainActivity.kt
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.os.Handler
import android.widget.ProgressBar
import android.widget.TextView
class MainActivity : AppCompatActivity() {
internal var status = 0
private val handler = Handler()
lateinit var textView: TextView
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
val resources = resources
val drawable = resources.getDrawable(R.drawable.circularprogressbar)
val progressBar: ProgressBar = findViewById(R.id.circularProgressbar)
progressBar.progress = 0
progressBar.secondaryProgress = 100
progressBar.max = 100
progressBar.progressDrawable = drawable
textView = findViewById(R.id.textView)
Thread {
while (status < 100) {
status += 1
handler.post {
progressBar.progress = status
textView.text = String.format("%d%%", status)
}
try {
Thread.sleep(16)
}
catch (e: InterruptedException) {
e.printStackTrace()
}
}
}.start()
}
}
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run the application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option, and then check your mobile device, which will display the app's default screen.
|
[
{
"code": null,
"e": 1150,
"s": 1062,
"text": "This example demonstrates how to create a circular ProgressBar in Android using Kotlin."
},
{
"code": null,
"e": 1279,
"s": 1150,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1344,
"s": 1279,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2696,
"s": 1344,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"8dp\"\n tools:context=\".MainActivity\">\n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"70dp\"\n android:background=\"#008080\"\n android:padding=\"5dp\"\n android:text=\"TutorialsPoint\"\n android:textColor=\"#fff\"\n android:textSize=\"24sp\"\n android:textStyle=\"bold\" />\n <ProgressBar\n android:id=\"@+id/circularProgressbar\"\n style=\"?android:attr/progressBarStyleHorizontal\"\n android:layout_width=\"250dp\"\n android:layout_height=\"250dp\"\n android:layout_centerInParent=\"true\"\n android:indeterminate=\"false\"\n android:max=\"100\"\n android:progress=\"50\"\n android:secondaryProgress=\"100\" />\n <TextView\n android:id=\"@+id/textView\"\n android:layout_width=\"250dp\"\n android:layout_height=\"250dp\"\n android:layout_centerInParent=\"true\"\n android:gravity=\"center\"\n android:text=\"25%\"\n android:textColor=\"@color/colorPrimaryDark\"\n android:textSize=\"24sp\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2790,
"s": 2696,
"text": "Step 3 − Create a drawable resource file (circularprogressbar.xml) and add the following code"
},
{
"code": null,
"e": 4817,
"s": 2790,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<layer-list xmlns:tools=\"http://schemas.android.com/tools\"\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n tools:ignore=\"ExtraText\">\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\">\n <item android:id=\"@android:id/secondaryProgress\">\n <shape\n android:innerRadiusRatio=\"6\"\n android:shape=\"ring\"\n android:thicknessRatio=\"20.0\"\n android:useLevel=\"true\">\n <gradient\n android:centerColor=\"@android:color/holo_green_light\"\n android:endColor=\"@android:color/holo_red_dark\"\n android:startColor=\"@android:color/white\"\n android:type=\"sweep\" />\n </shape>\n </item>\n <item android:id=\"@android:id/progress\">\n <rotate\n android:fromDegrees=\"270\"\n android:pivotX=\"50%\"\n android:pivotY=\"50%\"\n android:toDegrees=\"270\">\n <shape\n android:innerRadiusRatio=\"6\"\n android:shape=\"ring\"\n android:thicknessRatio=\"20.0\"\n android:useLevel=\"true\">\n <gradient\n android:centerColor=\"@android:color/holo_blue_dark\"\n android:endColor=\"@android:color/holo_purple\"\n android:startColor=\"@android:color/holo_orange_dark\"\n android:type=\"sweep\" />\n <rotate\n android:fromDegrees=\"0\"\n android:pivotX=\"50%\"\n android:pivotY=\"50%\"\n android:toDegrees=\"360\" />\n </shape>\n </rotate>\n </item>\n <item android:id=\"@android:id/secondaryProgress\">\n <shape\n android:innerRadiusRatio=\"6\"\n android:shape=\"ring\"\n android:thicknessRatio=\"20.0\"\n android:useLevel=\"true\">\n <gradient\n android:centerColor=\"@android:color/holo_blue_dark\"\n android:endColor=\"@android:color/holo_purple\"\n android:startColor=\"@android:color/holo_orange_dark\"\n android:type=\"sweep\" />\n </shape>\n </item>\n</layer-list>"
},
{
"code": null,
"e": 4872,
"s": 4817,
"text": "Step 4 − Add the following code to src/MainActivity.kt"
},
{
"code": null,
"e": 6130,
"s": 4872,
"text": "import androidx.appcompat.app.AppCompatActivity\nimport android.os.Bundle\nimport android.os.Handler\nimport android.widget.ProgressBar\nimport android.widget.TextView\nclass MainActivity : AppCompatActivity() {\n internal var status = 0\n private val handler = Handler()\n lateinit var textView: TextView\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n val resources = resources\n val drawable = resources.getDrawable(R.drawable.circularprogressbar)\n val progressBar: ProgressBar = findViewById(R.id.circularProgressbar)\n progressBar.progress = 0\n progressBar.secondaryProgress = 100\n progressBar.max = 100\n progressBar.progressDrawable = drawable\n textView = findViewById(R.id.textView)\n Thread {\n while (status < 100) {\n status += 1\n handler.post {\n progressBar.progress = status\n textView.text = String.format(\"%d%%\", status)\n }\n try {\n Thread.sleep(16)\n }\n catch (e: InterruptedException) {\n e.printStackTrace()\n }\n }\n }.start()\n }\n}"
},
{
"code": null,
"e": 6186,
"s": 6130,
"text": "Step 5 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 6860,
"s": 6186,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.example.q11\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 7209,
"s": 6860,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen."
}
] |
Sum of Square Numbers in C++
|
Suppose we have a non-negative integer c; we have to decide whether there are two integers a and b such that a^2 + b^2 = c.
So, if the input is like 61, then the output will be True, as 61 = 5^2 + 6^2.
To solve this, we will follow these steps −
Define a function isPerfect(), this will take x −
   sr := square root of x
   return true when (sr - floor of sr) is 0
From the main method, do the following −
   if c is same as 0, then return true
   for initialize i := 0, when i < the ceiling of square root of c, update (increase i by 1), do −
      b := c - i * i
      if isPerfect(b) is true, then return true
   return false
Let us see the following implementation to get a better understanding −
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
bool isPerfect(int x){
long double sr = sqrt(x);
return ((sr - floor(sr)) == 0);
}
bool judgeSquareSum(int c) {
if (c == 0)
return true;
int b;
for (int i = 0; i < ceil(sqrt(c)); i++) {
b = c - i * i;
if (isPerfect(b))
return true;
}
return false;
}
};
main(){
Solution ob;
cout << (ob.judgeSquareSum(61));
}
61
1
|
[
{
"code": null,
"e": 1198,
"s": 1062,
"text": "Suppose we have a non-negative integer c, we have to decide whether there're two integers a and b such that it satisfies a^2 + b^2 = c."
},
{
"code": null,
"e": 1276,
"s": 1198,
"text": "So, if the input is like 61, then the output will be True, as 61 = 5^2 + 6^2."
},
{
"code": null,
"e": 1320,
"s": 1276,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1369,
"s": 1320,
"text": "Define a function isPerfect(), this will take x,"
},
{
"code": null,
"e": 1418,
"s": 1369,
"text": "Define a function isPerfect(), this will take x,"
},
{
"code": null,
"e": 1441,
"s": 1418,
"text": "sr := square root of x"
},
{
"code": null,
"e": 1464,
"s": 1441,
"text": "sr := square root of x"
},
{
"code": null,
"e": 1505,
"s": 1464,
"text": "return true when (sr - floor of sr) is 0"
},
{
"code": null,
"e": 1546,
"s": 1505,
"text": "return true when (sr - floor of sr) is 0"
},
{
"code": null,
"e": 1585,
"s": 1546,
"text": "From the main method do the following,"
},
{
"code": null,
"e": 1624,
"s": 1585,
"text": "From the main method do the following,"
},
{
"code": null,
"e": 1661,
"s": 1624,
"text": "if c is same as 0, then −return true"
},
{
"code": null,
"e": 1687,
"s": 1661,
"text": "if c is same as 0, then −"
},
{
"code": null,
"e": 1699,
"s": 1687,
"text": "return true"
},
{
"code": null,
"e": 1711,
"s": 1699,
"text": "return true"
},
{
"code": null,
"e": 1863,
"s": 1711,
"text": "for initialize i := 0, when i < the ceiling of square root of c, update (increase i by 1), do −b := c - i * iif isPerfect(b) is true, then −return true"
},
{
"code": null,
"e": 1959,
"s": 1863,
"text": "for initialize i := 0, when i < the ceiling of square root of c, update (increase i by 1), do −"
},
{
"code": null,
"e": 1974,
"s": 1959,
"text": "b := c - i * i"
},
{
"code": null,
"e": 1989,
"s": 1974,
"text": "b := c - i * i"
},
{
"code": null,
"e": 2032,
"s": 1989,
"text": "if isPerfect(b) is true, then −return true"
},
{
"code": null,
"e": 2064,
"s": 2032,
"text": "if isPerfect(b) is true, then −"
},
{
"code": null,
"e": 2076,
"s": 2064,
"text": "return true"
},
{
"code": null,
"e": 2088,
"s": 2076,
"text": "return true"
},
{
"code": null,
"e": 2101,
"s": 2088,
"text": "return false"
},
{
"code": null,
"e": 2114,
"s": 2101,
"text": "return false"
},
{
"code": null,
"e": 2186,
"s": 2114,
"text": "Let us see the following implementation to get a better understanding −"
},
{
"code": null,
"e": 2197,
"s": 2186,
"text": " Live Demo"
},
{
"code": null,
"e": 2676,
"s": 2197,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nclass Solution {\npublic:\n bool isPerfect(int x){\n long double sr = sqrt(x);\n return ((sr - floor(sr)) == 0);\n }\n bool judgeSquareSum(int c) {\n if (c == 0)\n return true;\n int b;\n for (int i = 0; i < ceil(sqrt(c)); i++) {\n b = c - i * i;\n if (isPerfect(b))\n return true;\n }\n return false;\n }\n};\nmain(){\n Solution ob;\n cout << (ob.judgeSquareSum(61));\n}"
},
{
"code": null,
"e": 2679,
"s": 2676,
"text": "61"
},
{
"code": null,
"e": 2681,
"s": 2679,
"text": "1"
}
] |
Narrowing Conversion in Java
|
Narrowing conversion is needed when you convert from a larger size type to a smaller size. This is for incompatible data types, wherein automatic conversions cannot be done.
Let us see an example wherein we are converting long to integer using Narrowing Conversion.
public class Demo {
public static void main(String[] args) {
long longVal = 878;
int intVal = (int) longVal;
System.out.println("Long: "+longVal);
System.out.println("Integer: "+intVal);
}
}
Long: 878
Integer: 878
Let us see another example, wherein we are converting double to long using Narrowing Conversion.
public class Demo {
public static void main(String[] args) {
double doubleVal = 299.89;
long longVal = (long)doubleVal;
System.out.println("Double: "+doubleVal);
System.out.println("Long: "+longVal);
}
}
Double: 299.89
Long: 299
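As a cross-language aside (not part of the original Java tutorial; the values below are my own), the data loss that makes narrowing explicit can be sketched in Python — truncating a float discards the fraction just like the double-to-long cast above, and forcing a value into an 8-bit signed type wraps anything that does not fit:

```python
import ctypes

# double -> long style narrowing: the fractional part is simply discarded.
print(int(299.89))               # 299

# int -> byte style narrowing: only the low 8 bits survive, so 130 wraps to -126.
print(ctypes.c_int8(130).value)  # -126
```

This potential loss of information is why Java requires the explicit `(int)` or `(long)` cast: the compiler makes you acknowledge that data may be lost.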
|
[
{
"code": null,
"e": 1236,
"s": 1062,
"text": "Narrowing conversion is needed when you convert from a larger size type to a smaller size. This is for incompatible data types, wherein automatic conversions cannot be done."
},
{
"code": null,
"e": 1328,
"s": 1236,
"text": "Let us see an example wherein we are converting long to integer using Narrowing Conversion."
},
{
"code": null,
"e": 1339,
"s": 1328,
"text": " Live Demo"
},
{
"code": null,
"e": 1566,
"s": 1339,
"text": "public class Demo {\n public static void main(String[] args) {\n long longVal = 878;\n int intVal = (int) longVal;\n System.out.println(\"Long: \"+longVal);\n System.out.println(\"Integer: \"+intVal);\n }\n}"
},
{
"code": null,
"e": 1589,
"s": 1566,
"text": "Long: 878\nInteger: 878"
},
{
"code": null,
"e": 1686,
"s": 1589,
"text": "Let us see another example, wherein we are converting double to long using Narrowing Conversion."
},
{
"code": null,
"e": 1697,
"s": 1686,
"text": " Live Demo"
},
{
"code": null,
"e": 1937,
"s": 1697,
"text": "public class Demo {\n public static void main(String[] args) {\n double doubleVal = 299.89;\n long longVal = (long)doubleVal;\n System.out.println(\"Double: \"+doubleVal);\n System.out.println(\"Long: \"+longVal);\n }\n}"
},
{
"code": null,
"e": 1962,
"s": 1937,
"text": "Double: 299.89\nLong: 299"
}
] |
Angular PrimeNG Steps Component - GeeksforGeeks
|
28 Oct, 2021
Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling, and it makes building responsive websites much easier. In this article, we will learn how to use the Steps component in Angular PrimeNG. We will also cover its properties, events & styling, along with the syntaxes used in the code.
Step component: It is used to indicate or track the completion of a series of steps in a process.
Properties:
model: It is an array of menu items. It is of an array type of data & the default value is null.
activeIndex: It is the index of the active item. It accepts the number as an input data type & the default value is 0.
readonly: It specifies whether the items are clickable or not. It is of the boolean data type & the default value is true.
style: It sets the inline style of the component. It is of the string data type & the default value is null.
styleClass: It is the style class of the component. It is of the string data type & the default value is null.
Events:
activeIndexChange: It is a callback that is fired when a new step is selected.
Styling:
p-steps: It is the container element.
p-steps-item: It is the menuitem element.
p-steps-number: It is the number of menuitem.
p-steps-title: It is the label of menuitem.
Creating Angular application & module installation:
Step 1: Create an Angular application using the following command.
ng new appname
Step 2: After creating your project folder i.e. appname, move to it using the following command.
cd appname
Step 3: Install PrimeNG in your given directory.
npm install primeng --save
npm install primeicons --save
Project Structure: It will look like the following:
Example 1: This is the basic example that illustrates how to use the Steps component.
app.component.html
<h2>GeeksforGeeks</h2>
<h4>PrimeNG Steps Component</h4>
<p-steps [model]="geeks" [(activeIndex)]="gfg" [readonly]="false"></p-steps>
app.component.ts
import { Component } from "@angular/core";
import { MenuItem } from "primeng/api";

@Component({
  selector: "my-app",
  templateUrl: "./app.component.html",
})
export class AppComponent {
  geeks: MenuItem[];
  gfg: number = 1;

  ngOnInit() {
    this.geeks = [
      { label: "PrimeNG" },
      { label: "AngularJS" },
      { label: "ReactJS" },
      { label: "HTML" },
    ];
  }
}
app.module.ts
import { NgModule } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { RouterModule } from "@angular/router";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";

import { AppComponent } from "./app.component";
import { StepsModule } from "primeng/steps";
import { ToastModule } from "primeng/toast";

@NgModule({
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    StepsModule,
    ToastModule,
    RouterModule.forRoot([{ path: "", component: AppComponent }]),
  ],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}
Output:
Example 2: In this example, we will add a Next button to navigate through the steps of the Steps component.
app.component.html
<h2>GeeksforGeeks</h2>
<h4>PrimeNG Steps Component</h4>
<p-steps
  [model]="[
    { label: 'DSA' },
    { label: 'Algorithm' },
    { label: 'Web Tech' },
    { label: 'Aptitude' }
  ]"
  [(activeIndex)]="gfg"
  [readonly]="true"
></p-steps>
<br />
<button (click)="chan()">Next</button>
app.component.ts
import { Component } from "@angular/core";
import { MenuItem } from "primeng/api";

@Component({
  selector: "my-app",
  templateUrl: "./app.component.html",
})
export class AppComponent {
  geeks: MenuItem[];
  gfg: number = 0;

  chan() {
    this.gfg += 1;
  }

  ngOnInit() {}
}
app.module.ts
import { NgModule } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { RouterModule } from "@angular/router";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";

import { AppComponent } from "./app.component";
import { StepsModule } from "primeng/steps";
import { ToastModule } from "primeng/toast";

@NgModule({
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    StepsModule,
    ToastModule,
    RouterModule.forRoot([{ path: "", component: AppComponent }]),
  ],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}
Output:
Reference: https://primefaces.org/primeng/showcase/#/steps/confirmation
sooda367
Angular-PrimeNG
AngularJS
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Top 10 Angular Libraries For Web Developers
How to use <mat-chip-list> and <mat-chip> in Angular Material ?
How to make a Bootstrap Modal Popup in Angular 9/8 ?
Angular 10 (blur) Event
Angular PrimeNG Dropdown Component
Roadmap to Become a Web Developer in 2022
Installation of Node.js on Linux
How to fetch data from an API in ReactJS ?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
How to insert spaces/tabs in text using HTML/CSS?
|
[
{
"code": null,
"e": 25109,
"s": 25081,
"text": "\n28 Oct, 2021"
},
{
"code": null,
"e": 25506,
"s": 25109,
"text": "Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling and this framework is used to make responsive websites with very much ease. In this article, we will know how to use the Steps component in Angular PrimeNG. We will also learn about the properties, events & styling along with their syntaxes that will be used in the code. "
},
{
"code": null,
"e": 25593,
"s": 25506,
"text": "Step component: It is used to indicate or track the completion of series of processes."
},
{
"code": null,
"e": 25605,
"s": 25593,
"text": "Properties:"
},
{
"code": null,
"e": 25702,
"s": 25605,
"text": "model: It is an array of menu items. It is of an array type of data & the default value is null."
},
{
"code": null,
"e": 25821,
"s": 25702,
"text": "activeIndex: It is the index of the active item. It accepts the number as an input data type & the default value is 0."
},
{
"code": null,
"e": 25944,
"s": 25821,
"text": "readonly: It specifies whether the items are clickable or not. It is of the boolean data type & the default value is true."
},
{
"code": null,
"e": 26053,
"s": 25944,
"text": "style: It sets the inline style of the component. It is of the string data type & the default value is null."
},
{
"code": null,
"e": 26164,
"s": 26053,
"text": "styleClass: It is the style class of the component. It is of the string data type & the default value is null."
},
{
"code": null,
"e": 26172,
"s": 26164,
"text": "Events:"
},
{
"code": null,
"e": 26253,
"s": 26172,
"text": "activeIndexChange: it is a callback that is fired when the new step is selected."
},
{
"code": null,
"e": 26264,
"s": 26255,
"text": "Styling:"
},
{
"code": null,
"e": 26302,
"s": 26264,
"text": "p-steps: It is the container element."
},
{
"code": null,
"e": 26344,
"s": 26302,
"text": "p-steps-item: It is the menuitem element."
},
{
"code": null,
"e": 26390,
"s": 26344,
"text": "p-steps-number: It is the number of menuitem."
},
{
"code": null,
"e": 26434,
"s": 26390,
"text": "p-steps-title: It is the label of menuitem."
},
{
"code": null,
"e": 26486,
"s": 26434,
"text": "Creating Angular application & module installation:"
},
{
"code": null,
"e": 26553,
"s": 26486,
"text": "Step 1: Create an Angular application using the following command."
},
{
"code": null,
"e": 26568,
"s": 26553,
"text": "ng new appname"
},
{
"code": null,
"e": 26665,
"s": 26568,
"text": "Step 2: After creating your project folder i.e. appname, move to it using the following command."
},
{
"code": null,
"e": 26676,
"s": 26665,
"text": "cd appname"
},
{
"code": null,
"e": 26725,
"s": 26676,
"text": "Step 3: Install PrimeNG in your given directory."
},
{
"code": null,
"e": 26782,
"s": 26725,
"text": "npm install primeng --save\nnpm install primeicons --save"
},
{
"code": null,
"e": 26834,
"s": 26782,
"text": "Project Structure: It will look like the following:"
},
{
"code": null,
"e": 26920,
"s": 26834,
"text": "Example 1: This is the basic example that illustrates how to use the Steps component."
},
{
"code": null,
"e": 26939,
"s": 26920,
"text": "app.component.html"
},
{
"code": "<h2>GeeksforGeeks</h2><h4>PrimeNG Steps Component</h4><p-steps [model]=\"geeks\" [(activeIndex)]=\"gfg\" [readonly]=\"false\"></p-steps>",
"e": 27074,
"s": 26939,
"text": null
},
{
"code": null,
"e": 27091,
"s": 27074,
"text": "app.component.ts"
},
{
"code": "import { Component } from \"@angular/core\";import { MenuItem } from \"primeng/api\"; @Component({ selector: \"my-app\", templateUrl: \"./app.component.html\",})export class AppComponent { geeks: MenuItem[]; gfg: number = 1; ngOnInit() { this.geeks = [ { label: \"PrimeNG\" }, { label: \"AngularJS\" }, { label: \"ReactJS\" }, { label: \"HTML\" }, ]; }}",
"e": 27464,
"s": 27091,
"text": null
},
{
"code": null,
"e": 27478,
"s": 27464,
"text": "app.module.ts"
},
{
"code": "import { NgModule } from \"@angular/core\";import { BrowserModule } from \"@angular/platform-browser\";import { RouterModule } from \"@angular/router\";import { BrowserAnimationsModule } from \"@angular/platform-browser/animations\"; import { AppComponent } from \"./app.component\";import { StepsModule } from \"primeng/steps\";import { ToastModule } from \"primeng/toast\"; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, StepsModule, ToastModule, RouterModule.forRoot([{ path: \"\", component: AppComponent }]), ], declarations: [AppComponent], bootstrap: [AppComponent],})export class AppModule {}",
"e": 28112,
"s": 27478,
"text": null
},
{
"code": null,
"e": 28120,
"s": 28112,
"text": "Output:"
},
{
"code": null,
"e": 28224,
"s": 28120,
"text": "Example 2: In this example, we will make the next button to navigate the element in the Step Component."
},
{
"code": null,
"e": 28243,
"s": 28224,
"text": "app.component.html"
},
{
"code": "<h2>GeeksforGeeks</h2><h4>PrimeNG Steps Component</h4><p-steps [model]=\"[ { label: 'DSA' }, { label: 'Algorithm' }, { label: 'Web Tech' }, { label: 'Aptitude' } ]\" [(activeIndex)]=\"gfg\" [readonly]=\"true\"></p-steps><br /><button (click)=\"chan()\">Next</button>",
"e": 28518,
"s": 28243,
"text": null
},
{
"code": null,
"e": 28535,
"s": 28518,
"text": "app.component.ts"
},
{
"code": "import { Component } from \"@angular/core\";import { MenuItem } from \"primeng/api\"; @Component({ selector: \"my-app\", templateUrl: \"./app.component.html\",})export class AppComponent { geeks: MenuItem[]; gfg: number = 0; chan() { this.gfg += 1; } ngOnInit() {}}",
"e": 28806,
"s": 28535,
"text": null
},
{
"code": null,
"e": 28820,
"s": 28806,
"text": "app.module.ts"
},
{
"code": "import { NgModule } from \"@angular/core\";import { BrowserModule } from \"@angular/platform-browser\";import { RouterModule } from \"@angular/router\";import { BrowserAnimationsModule } from \"@angular/platform-browser/animations\"; import { AppComponent } from \"./app.component\";import { StepsModule } from \"primeng/steps\";import { ToastModule } from \"primeng/toast\"; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, StepsModule, ToastModule, RouterModule.forRoot([{ path: \"\", component: AppComponent }]), ], declarations: [AppComponent], bootstrap: [AppComponent],})export class AppModule {}",
"e": 29454,
"s": 28820,
"text": null
},
{
"code": null,
"e": 29462,
"s": 29454,
"text": "Output:"
},
{
"code": null,
"e": 29534,
"s": 29462,
"text": "Reference: https://primefaces.org/primeng/showcase/#/steps/confirmation"
},
{
"code": null,
"e": 29543,
"s": 29534,
"text": "sooda367"
},
{
"code": null,
"e": 29559,
"s": 29543,
"text": "Angular-PrimeNG"
},
{
"code": null,
"e": 29569,
"s": 29559,
"text": "AngularJS"
},
{
"code": null,
"e": 29586,
"s": 29569,
"text": "Web Technologies"
},
{
"code": null,
"e": 29684,
"s": 29586,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29693,
"s": 29684,
"text": "Comments"
},
{
"code": null,
"e": 29706,
"s": 29693,
"text": "Old Comments"
},
{
"code": null,
"e": 29750,
"s": 29706,
"text": "Top 10 Angular Libraries For Web Developers"
},
{
"code": null,
"e": 29814,
"s": 29750,
"text": "How to use <mat-chip-list> and <mat-chip> in Angular Material ?"
},
{
"code": null,
"e": 29867,
"s": 29814,
"text": "How to make a Bootstrap Modal Popup in Angular 9/8 ?"
},
{
"code": null,
"e": 29891,
"s": 29867,
"text": "Angular 10 (blur) Event"
},
{
"code": null,
"e": 29926,
"s": 29891,
"text": "Angular PrimeNG Dropdown Component"
},
{
"code": null,
"e": 29968,
"s": 29926,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 30001,
"s": 29968,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 30044,
"s": 30001,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 30106,
"s": 30044,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
}
] |
Predictive Maintenance of Turbofan Engine | by Deepak Honakeri | Towards Data Science
|
Predictive maintenance is very important for manufacturers as well as maintainers: it lowers maintenance costs, extends equipment life, reduces downtime and improves production quality by addressing problems before they cause equipment failures.
“Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed” — source Wikipedia
In this post, I would like to demonstrate that using an RNN (Recurrent Neural Network)/LSTM (Long Short Term Memory) architecture is not only more accurate but also classifies the results better than the previous CNN (Convolutional Neural Network) approach, written by Marco Cerliani (read here).
This post uses the C-MAPSS dataset for the predictive maintenance of the Turbofan Engine. Here the challenge is to determine the Remaining Useful Life (RUL), i.e. the number of cycles until the next fault occurs in the engine.
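The C-MAPSS files themselves ship the engine id, cycle number, settings and sensor readings, so an RUL target is typically derived per engine. A minimal pandas sketch of one common derivation, on a hypothetical miniature frame with the same `id`/`cycle` columns (values made up for illustration):

```python
import pandas as pd

# Hypothetical miniature of the training frame: engine id and cycle number.
train_df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2],
    "cycle": [1, 2, 3, 1, 2],
})

# RUL = (last observed cycle of that engine) - (current cycle),
# since in the training set each engine runs until failure.
max_cycle = train_df.groupby("id")["cycle"].transform("max")
train_df["RUL"] = max_cycle - train_df["cycle"]
print(train_df["RUL"].tolist())  # → [2, 1, 0, 1, 0]
```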
The dataset can be found (here). Here's a brief description of the dataset:
“The engine is operating normally at the start of each time series, and develops a fault at some point during the series.
In the training set, the fault grows in magnitude until system failure.
In the test set, the time series ends some time prior to system failure.
The following are the conditions of the engine that are used in the training of the model
Train trajectories: 100
Test trajectories: 100
Conditions: ONE (Sea Level)
Fault Modes: ONE (HPC Degradation)
Understanding the Dataset
Once we load the dataset, we obtain the time series data of 100 engines, containing the operational settings and sensor readings of each of the 100 engines under different scenarios in which the fault occurs, for a total of 20631 training examples. To illustrate, below are the first 5 training examples of our training dataset.
train_df.head()
To further understand the given data, Fig 2 shows, for a given engine, how many cycles are left before the next fault occurs.
Example 1 : Engine id number 69 (farthest left) approximately has 360 cycles remaining before fault.
Example 2 : Engine id number 39 (farthest right) approximately has 110 cycles remaining before fault.
train_df.id.value_counts().plot.bar()
The following (Fig3 and Fig 4) are time series data for engine whose id is 69,
engine_id = train_df[train_df['id'] == 69]
ax1 = engine_id[train_df.columns[2:]].plot(subplots=True, sharex=True, figsize=(20,30))
*The images (fig2, fig 3 and fig4) are obtained by using the source code from GitHub Notebook (here), by Marco Cerliani.
Data preprocessing is the most important step towards training any neural network. Networks like the RNN (Recurrent Neural Network) are very sensitive to the input data, and the data needs to be in the range of -1 to 1 or 0 to 1. This range is usually required because tanh (see Fig 5) is the activation function used in the hidden layers of the network. Thus, the data must be normalized before training the model.
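As a quick numerical check of the bound the hidden layers rely on, tanh maps any real input into the interval (-1, 1):

```python
import numpy as np

# tanh squashes arbitrarily large inputs into the open interval (-1, 1),
# which is why inputs scaled to a comparable range train more stably.
x = np.array([-1000.0, -1.0, 0.0, 1.0, 1000.0])
y = np.tanh(x)
print(y)  # every value lies within [-1, 1]
```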
Using the MinMaxScaler function provided by sklearn's preprocessing library, we normalize our training data to the range 0 to 1. Theoretically, we could also experiment with normalizing our data to -1 to 1; however, this post only demonstrates scaling of the data to the range 0 to 1.
from sklearn.preprocessing import MinMaxScaler

sc = MinMaxScaler(feature_range=(0,1))
train_df[train_df.columns[2:26]] = sc.fit_transform(train_df[train_df.columns[2:26]])
train_df = train_df.dropna(axis=1)
The reason for using column numbers 2 to 26 (see Fig 1) is that we take operational setting1 (column number 2), setting2, setting3, and sensor 1 up until sensor 21 (column 25); Python's range does not include the upper limit, hence the upper limit of 26. For illustration, here are the first 5 training examples after normalization of the training data (Fig 6).
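For intuition, MinMaxScaler with feature_range=(0, 1) applies, per column, x' = (x - min) / (max - min). A toy check on a single made-up column:

```python
import numpy as np

# Toy column of raw readings (values made up for illustration).
col = np.array([100.0, 150.0, 200.0])

# The same transform MinMaxScaler(feature_range=(0, 1)) applies per column.
scaled = (col - col.min()) / (col.max() - col.min())
print(scaled)  # → [0.  0.5 1. ]
```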
Once we have our data normalized, we employ a classification approach in order to predict RUL. We do this by adding new labels to our dataset as follows.
“following source code is used : from Github”
w1 = 45
w0 = 15
train_df['class1'] = np.where(train_df['RUL'] <= w1, 1, 0)
train_df['class2'] = train_df['class1']
train_df.loc[train_df['RUL'] <= w0, 'class2'] = 2
This snippet of code creates the labels (see Fig 7) for our classification problem, and the classification approach is as follows:
label 0 : when 45+ cycles are left until fault.
label 1 : when cycles between 16 and 45 are left until fault.
label 2 : when cycles between 0 and 15 are left until fault.
Great! Now we need to further prepare our data in order for the neural network to efficiently process the time series data; we do this by specifying the time step size (or window size). Neural networks such as RNNs or CNNs require the input data to be in 3-Dimensional form. Hence we now need to transform our 2-Dimensional data to 3-Dimensional data.
To demonstrate this process of transformation (see Fig 8), we simply run through the time series data with a window of the specified time step size. This process is also known as the Sliding Window technique.
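The transform can be sketched on toy sizes before applying it to the full frame (the feature count and window length here are illustrative, not the article's 17 and 50):

```python
import numpy as np

# A (num_cycles, num_features) matrix becomes a
# (num_windows, time_steps, num_features) array after the sliding-window pass.
time_steps = 3
data = np.arange(10).reshape(5, 2)          # 5 cycles, 2 features
windows = np.stack([data[s:s + time_steps]  # one window per start offset
                    for s in range(len(data) - time_steps + 1)])
print(windows.shape)  # → (3, 3, 2)
```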
For our time series data, we use the Sliding Window technique over all of the sensors and operating settings, specifying the time steps (or window size) as 50, although the time step size can be set arbitrarily. The following code snippet transforms our 2-Dimensional data into a 3-Dimensional NumPy array of size 15631x50x17, which is suitable as input to the neural network.
“following source code is modified : from Github”
time_steps = 50

def gen_sequence(id_df):
    # settings and sensor columns (2 to 25) as a NumPy matrix
    data_matrix = id_df[id_df.columns[2:26]].values
    num_elements = data_matrix.shape[0]
    for start, stop in zip(range(0, num_elements - time_steps),
                           range(time_steps, num_elements)):
        yield data_matrix[start:stop, :]

def gen_labels(id_df, label):
    data_matrix = id_df[label].values
    num_elements = data_matrix.shape[0]
    return data_matrix[time_steps:num_elements, :]

x_train, y_train = [], []
for engine_id in train_df.id.unique():
    for sequence in gen_sequence(train_df[train_df.id == engine_id]):
        x_train.append(sequence)
    # the 'class2' column created earlier is used as the label
    for label in gen_labels(train_df[train_df.id == engine_id], ['class2']):
        y_train.append(label)

x_train = np.asarray(x_train)
y_train = np.asarray(y_train).reshape(-1, 1)
*For further reading on time series data, please read the article (here).
RNN/LSTM has proven to be among the best architectures for handling time series data, and there are plenty of articles on the web demonstrating its effectiveness across a broad range of applications. Hence, we employ the RNN/LSTM architecture.
Now that our data is ready and in 3-Dimensional form, we can define the RNN/LSTM neural network architecture, which comprises 2 hidden layers, each with a tanh activation function (see Fig 5), followed by a softmax classifier layer.
model = Sequential()
# input
model.add(LSTM(units=50, return_sequences=True, activation='tanh',
               input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dropout(0.2))
# hidden layer 1
model.add(LSTM(units=60, return_sequences=True, activation='tanh'))
model.add(Dropout(0.2))
# hidden layer 2
model.add(LSTM(units=60, activation='tanh'))
model.add(Dropout(0.2))
# output
model.add(Dense(units=3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
print(model.summary())
Here’s an output of the model summary(see fig 9).
Training the RNN/LSTM Model
The RNN/LSTM model has been trained for a total of 30 epochs; I also tried training the model for 40 epochs, but the model was then seen to overfit.
history = model.fit(x_train, y_train, batch_size=32, epochs=30)
Note : The ‘history’ variable is used to record the necessary parameters, such as the loss and accuracy of the model, during training.
The accuracy is obtained by using the model.evaluate() function from Keras, giving an overall accuracy of close to 94% (see Fig 10).
model.evaluate(x_test, y_test, verbose=2)
The following code presents how the accuracy and the loss are plotted.
plt.subplot(211)
plt.plot(history.history['accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['accuracy'], loc='upper left')
Here’s an output of the models accuracy, obtaining about 94% accuracy when compared to 79% accuracy. Already we see an improvement over the previous approach.
We obtain loss of the model as 0.12, which is better than the CNN approach.
Here’s the confusion matrix (see Fig 12) that offers a much deeper look at how the model actually performs in classifying.
cnf_matrix = confusion_matrix(np.where(y_test != 0)[1],
                              model.predict_classes(x_test))
plt.figure(figsize=(7,7))
plot_confusion_matrix(cnf_matrix,
                      classes=np.unique(np.where(y_test != 0)[1]),
                      title="Confusion matrix")
plt.show()
*Note : the function confusion_matrix() is documented in the sklearn documentation.
In this post we’ve seen an alternative approach to solving the Predictive Maintenance problem by using an RNN/LSTM neural network architecture, which proves to be better than the previous CNN approach.
|
[
{
"code": null,
"e": 420,
"s": 171,
"text": "Predictive maintenance is very important for manufacturers as well as the maintainers, which lowers maintenance cost, extend equipment life, reduce downtime and improve production quality by addressing problems before they cause equipment failures."
},
{
"code": null,
"e": 602,
"s": 420,
"text": "“Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed” — source Wikipedia"
},
{
"code": null,
"e": 931,
"s": 602,
"text": "In this post, I would like to demonstrate that the use of RNN(Recurrent Neural Network)/LSTM(Long Short Term Memory) architecture is not only more accurate but it performs better in classifying the results accurately when compared to the previous CNN (Convolution Neural Network) approach, written by Marco Cerliani (read here)."
},
{
"code": null,
"e": 1131,
"s": 931,
"text": "This post uses the C-MAPSS dataset for the predictive maintenance of the Turbofan Engine. Here the challenge is to determine the Remaining Useful Life (RUL) until next fault that occur in the engine."
},
{
"code": null,
"e": 1195,
"s": 1131,
"text": "The dataset can be found (here), here’s a brief on the dataset,"
},
{
"code": null,
"e": 1317,
"s": 1195,
"text": "“The engine is operating normally at the start of each time series, and develops a fault at some point during the series."
},
{
"code": null,
"e": 1389,
"s": 1317,
"text": "In the training set, the fault grows in magnitude until system failure."
},
{
"code": null,
"e": 1462,
"s": 1389,
"text": "In the test set, the time series ends some time prior to system failure."
},
{
"code": null,
"e": 1552,
"s": 1462,
"text": "The following are the conditions of the engine that are used in the training of the model"
},
{
"code": null,
"e": 1658,
"s": 1552,
"text": "Train trjectories: 100Test trajectories: 100Conditions: ONE (Sea Level)Fault Modes: ONE (HPC Degradation)"
},
{
"code": null,
"e": 1684,
"s": 1658,
"text": "Understanding the Dataset"
},
{
"code": null,
"e": 2002,
"s": 1684,
"text": "Once we load the dataset, we obtain the time series data of 100 engines that contains the operational settings and sensor readings of each 100 engines with different senarios where the fault occurs and a total of 20631 training examples. To illustrate, below are the first 5 training examples of our training dataset."
},
{
"code": null,
"e": 2018,
"s": 2002,
"text": "train_df.head()"
},
{
"code": null,
"e": 2157,
"s": 2018,
"text": "To further understand the data given, (see Fig 2) describes that for a given engine how many cycles are left before the next fault occurs."
},
{
"code": null,
"e": 2258,
"s": 2157,
"text": "Example 1 : Engine id number 69 (farthest left) approximately has 360 cycles remaining before fault."
},
{
"code": null,
"e": 2360,
"s": 2258,
"text": "Example 2 : Engine id number 39 (farthest right) approximately has 110 cycles remaining before fault."
},
{
"code": null,
"e": 2398,
"s": 2360,
"text": "train_df.id.value_counts().plot.bar()"
},
{
"code": null,
"e": 2477,
"s": 2398,
"text": "The following (Fig3 and Fig 4) are time series data for engine whose id is 69,"
},
{
"code": null,
"e": 2607,
"s": 2477,
"text": "engine_id = train_df[train_df['id'] == 69]ax1 = engine_id[train_df.columns[2:]].plot(subplots=True, sharex=True, figsize=(20,30))"
},
{
"code": null,
"e": 2728,
"s": 2607,
"text": "*The images (fig2, fig 3 and fig4) are obtained by using the source code from GitHub Notebook (here), by Marco Cerliani."
},
{
"code": null,
"e": 3194,
"s": 2728,
"text": "Data preprocessing is the most important step towards training any neural network. For neural networks like RNN (Recurrent Neural Network), the network is very sensitive to the input data and the data needs to be in range of -1 to 1 or 0 to 1. This range i.e, -1 to 1 or 0 to 1, is usually because of the tanh (see Fig 5) is the activation function accompanied in the hidden layers of the network. Thus, the data must be normalized before the training of the model."
},
{
"code": null,
"e": 3487,
"s": 3194,
"text": "Using the MinMaxScaler function provided by sklearn’s preprocessing library we normalize our training data in the scale ranging in 0 to 1, although, theoritcally we could normalize and experiment our data to -1 to 1. However, this post only demostrates scaling of data in the range in 0 to 1."
},
{
"code": null,
"e": 3692,
"s": 3487,
"text": "from sklearn.preprocessing import MinMaxScalersc = MinMaxScaler(feature_range=(0,1))train_df[train_df.columns[2:26]] = sc.fit_transform(train_df[ train_df.columns[2:26]])train_df = train_df.dropna(axis=1)"
},
{
"code": null,
"e": 4069,
"s": 3692,
"text": "The reason of using the columns numbers from 2 to 26(see Fig1), is that we take the operational setting1 (as column number 2) , setting2, setting3, sensor 1 up until sensor 21 (column 25), and python range doesn’t take in account of the upper limit, hence the upper limit 26. For illustration, here’s first 5 training examples after the normalization of training data (fig 6)."
},
{
"code": null,
"e": 4262,
"s": 4069,
"text": "Once we have our data normalized, we employ a classification approach in order to predict RUL. We do this by adding new labels for our classifcation approach onto our dataset by the following."
},
{
"code": null,
"e": 4308,
"s": 4262,
"text": "“following source code is used : from Github”"
},
{
"code": null,
"e": 4470,
"s": 4308,
"text": "w1 = 45w0 = 15train_df['class1'] = np.where(train_df['RUL'] <= w1, 1, 0 )train_df['class2'] = train_df['class1']train_df.loc[train_df['RUL'] <= w0, 'class2'] = 2"
},
{
"code": null,
"e": 4597,
"s": 4470,
"text": "This snippet of code now creates labels(see fig7) for our classification problem and the classifcation approcah is as follows,"
},
{
"code": null,
"e": 4645,
"s": 4597,
"text": "label 0 : when 45+ cycles are left until fault."
},
{
"code": null,
"e": 4707,
"s": 4645,
"text": "label 1 : when cycles between 16 and 45 are left until fault."
},
{
"code": null,
"e": 4768,
"s": 4707,
"text": "label 2 : when cycles between 0 and 15 are left until fault."
},
{
"code": null,
"e": 5115,
"s": 4768,
"text": "Great! Now we need to further prepare our data inorder for the neural network to efficiently process the time series data, we do this by specifying the time step size(or window size). Neural network such as RNN or CNN, require the input data to be in 3-Dimensional form. Hence we now need to transorm our 2-Dimensional data to 3-Dimensional data."
},
{
"code": null,
"e": 5319,
"s": 5115,
"text": "To demonstrate this process of transformation (see Fig8), we simple run through the time series data by specifying the time step size(window size). This process is also known as Sliding Window technique."
},
{
"code": null,
"e": 5696,
"s": 5319,
"text": "For our time series data, we use the Sliding Window technique for all of the sensors and operating settings by specifying the time steps(or window size) as 50, although, the time steps size can be set arbitary. Following code snippet transforms our 2-Dimensional to a 3-Dimensional data(numpy pandas array)of size 15631x50x17, which is optimum for input to the neural network."
},
{
"code": null,
"e": 5746,
"s": 5696,
"text": "“following source code is modified : from Github”"
},
{
"code": null,
"e": 6485,
"s": 5746,
"text": "time_steps = 50def gen_sequence(id_df):data_matrix = id_df.iloc[2:26] num_elements = data_matrix.shape[0]for start, stop in zip(range(0, num_elements-time_steps), range(time_steps, num_elements)): yield data_matrix[start:stop, :]def gen_labels(id_df, label):data_matrix = id_df[label].values num_elements = data_matrix.shape[0]return data_matrix[time_steps:num_elements, :]x_train, y_train = [], []for engine_id in train_df.id.unique(): for sequence in gen_sequence(train_df[train_df.id==engine_id]): x_train.append(sequence) for label in gen_labels(train_df[train_df.id==engine_id['label2']): y_train.append(label)x_train = np.asarray(x_train)y_train = np.asarray(y_train).reshape(-1,1)"
},
{
"code": null,
"e": 6564,
"s": 6485,
"text": "*For more further reading on time series data, please read the article (here)."
},
{
"code": null,
"e": 6767,
"s": 6564,
"text": "RNN/LSTM has been best proven for handling time series data and there are plenty of articles on the web demonstrating the effectiveness on broad applications. Hence, we employ the RNN/LSTM architecture."
},
{
"code": null,
"e": 7030,
"s": 6767,
"text": "Now since our data is ready and is in 3-Dimensional form, we can now define the RNN/LSTM neural network architecture which comprises of 2 hidden layers and each hidden layer having activation function of tanh (see fig5) followed by a layer of softmax classifier."
},
{
"code": null,
"e": 7550,
"s": 7030,
"text": "model = Sequential()#inputmodel.add(LSTM(units=50, return_sequences='true', activation='tanh',input_shape = (x_train.shape[1], x_train.shape[2])) )model.add(Dropout(0.2))#hidden layer 1model.add(LSTM(units=60, return_sequences='true',activation='tanh'))model.add(Dropout(0.2))#hidden layer 2model.add(LSTM(units=60, activation='tanh'))model.add(Dropout(0.2))#outputmodel.add(Dense(units=3,activation='softmax'))model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])print(model.summary())"
},
{
"code": null,
"e": 7600,
"s": 7550,
"text": "Here’s an output of the model summary(see fig 9)."
},
{
"code": null,
"e": 7628,
"s": 7600,
"text": "Training the RNN/LSTM Model"
},
{
"code": null,
"e": 7785,
"s": 7628,
"text": "The RNN/LSTM model is been trained for a total of 30 epochs, although I’ve tried to train the model for 40 epochs, and the model was seen to be overfitting."
},
{
"code": null,
"e": 7848,
"s": 7785,
"text": "history = model.fit(x_train, y_train,batch_size=32, epochs=30)"
},
{
"code": null,
"e": 7987,
"s": 7848,
"text": "Note : The ‘history’ variable is used to record the necessary parameters such as loss and accuracy accuracy of the model during the train."
},
{
"code": null,
"e": 8115,
"s": 7987,
"text": "The accuracy is obtained by using the keras.evaluate() function, and obtained an overall accuracy of close to 94% (see Fig 10)."
},
{
"code": null,
"e": 8157,
"s": 8115,
"text": "model.evaluate(x_test, y_test, verbose=2)"
},
{
"code": null,
"e": 8235,
"s": 8157,
"text": "The following code presents on how the accuracy and the loss is been plotted."
},
{
"code": null,
"e": 8399,
"s": 8235,
"text": "plt.subplot(211)plt.plot(history.history['accuracy'])plt.title('model accuracy')plt.ylabel('accuracy')plt.xlabel('epoch')plt.legend(['accuracy'], loc='upper left')"
},
{
"code": null,
"e": 8558,
"s": 8399,
"text": "Here’s an output of the models accuracy, obtaining about 94% accuracy when compared to 79% accuracy. Already we see an improvement over the previous approach."
},
{
"code": null,
"e": 8634,
"s": 8558,
"text": "We obtain loss of the model as 0.12, which is better than the CNN approach."
},
{
"code": null,
"e": 8757,
"s": 8634,
"text": "Here’s the confusion matrix (see Fig 12) that offers a much deeper look at how the model actually performs in classifying."
},
{
"code": null,
"e": 8983,
"s": 8757,
"text": "cnf_matrix = confusion_matrix(np.where(y_test != 0)[1], model.predict_classes(x_test))plt.figure(figsize=(7,7))plot_confusion_matrix(cnf_matrix, classes=np.unique(np.where(y_test != 0)[1]), title=\"Confusion matrix\")plt.show()"
},
{
"code": null,
"e": 9058,
"s": 8983,
"text": "*Note : the function- confusion_matrix() is found on sklearn documentation"
}
] |
Plotly - Adding Buttons Dropdown
|
Plotly provides a high degree of interactivity through the use of different controls on the plotting area, such as buttons, dropdowns and sliders. These controls are incorporated with the updatemenus attribute of the plot layout. You can add a button and its behaviour by specifying the method to be called.
There are four possible methods that can be associated with a button as follows −
restyle − modify data or data attributes
relayout − modify layout attributes
update − modify data and layout attributes
animate − start or pause an animation
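Since these updatemenu entries are ultimately plain dictionaries, the shape of a button can be sketched without rendering anything. Below is a minimal, hypothetical sketch of a relayout button — the label text and the yaxis.type layout key are illustrative choices, not taken from this chapter:

```python
# Sketch of a "relayout" button: it only touches layout attributes.
# "yaxis.type" is an illustrative layout key; any layout attribute works.
relayout_button = dict(
    label="Log scale",              # text shown on the button
    method="relayout",              # modify layout attributes only
    args=[{"yaxis.type": "log"}],   # layout updates applied on click
)
updatemenus = [dict(type="buttons", buttons=[relayout_button])]
print(updatemenus[0]["buttons"][0]["method"])  # relayout
```

The same dict shape, with method swapped for "restyle", "update" or "animate", covers the other three cases.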
The restyle method should be used when modifying the data and data attributes of the graph. In the following example, two buttons are added by Updatemenu() method to the layout with restyle method.
go.layout.Updatemenu(
   type = "buttons",
   direction = "left",
   buttons = list([
      dict(args = ["type", "box"], label = "Box", method = "restyle"),
      dict(args = ["type", "violin"], label = "Violin", method = "restyle")
   ])
)
The value of the type property is buttons by default. To render a dropdown list of buttons, change type to dropdown. A Box trace is added to the Figure object before updating its layout as above. The complete code, which renders a box plot or a violin plot depending on the button clicked, is as follows −
import plotly.graph_objs as go
from plotly.offline import iplot
fig = go.Figure()
fig.add_trace(go.Box(y = [1140,1460,489,594,502,508,370,200]))
fig.layout.update(
updatemenus = [
go.layout.Updatemenu(
type = "buttons", direction = "left", buttons=list(
[
dict(args = ["type", "box"], label = "Box", method = "restyle"),
dict(args = ["type", "violin"], label = "Violin", method = "restyle")
]
),
pad = {"r": 2, "t": 2},
showactive = True,
x = 0.11,
xanchor = "left",
y = 1.1,
yanchor = "top"
),
]
)
iplot(fig)
The output of the code is given below −
Click on Violin button to display corresponding Violin plot.
As mentioned above, when the value of the type key in the Updatemenu() method is assigned dropdown, the buttons are rendered as a dropdown list. The plot appears as below −
The update method should be used when modifying the data and layout sections of the graph. The following example demonstrates how to update which traces are displayed while simultaneously updating layout attributes, such as the chart title. Two Scatter traces corresponding to sine and cos waves are added to the Figure object. The trace whose visible attribute is True will be displayed on the plot and the other traces will be hidden.
import numpy as np
import math #needed for definition of pi
import plotly.graph_objs as go
from plotly.offline import iplot
xpoints = np.arange(0, math.pi*2, 0.05)
y1 = np.sin(xpoints)
y2 = np.cos(xpoints)
fig = go.Figure()
# Add Traces
fig.add_trace(
go.Scatter(
x = xpoints, y = y1, name = 'Sine'
)
)
fig.add_trace(
go.Scatter(
x = xpoints, y = y2, name = 'cos'
)
)
fig.layout.update(
updatemenus = [
go.layout.Updatemenu(
type = "buttons", direction = "right", active = 0, x = 0.1, y = 1.2,
buttons = list(
[
dict(
label = "first", method = "update",
args = [{"visible": [True, False]},{"title": "Sine"} ]
),
dict(
label = "second", method = "update",
args = [{"visible": [False, True]},{"title": Cos"}]
)
]
)
)
]
)
iplot(fig)
Initially, Sine curve will be displayed. If clicked on second button, cos trace appears.
Note that chart title also updates accordingly.
In order to use animate method, we need to add one or more Frames to the Figure object. Along with data and layout, frames can be added as a key in a figure object. The frames key points to a list of figures, each of which will be cycled through when animation is triggered.
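Because a Plotly figure is at bottom a dictionary, the relationship between a figure and its frames can be sketched as plain data (the values below are illustrative only):

```python
# Each frame is itself a small figure-like dict; animation cycles through them.
frames = [{"data": [{"x": [k], "y": [k * k], "mode": "markers"}]}
          for k in range(3)]
fig_dict = {"data": [], "layout": {}, "frames": frames}
print(len(fig_dict["frames"]))  # 3
```

The real example later in this chapter builds exactly this structure using go.Frame objects instead of raw dicts.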
You can add, play and pause buttons to introduce animation in chart by adding an updatemenus array to the layout.
"updatemenus": [{
"type": "buttons", "buttons": [{
"label": "Your Label", "method": "animate", "args": [frames]
}]
}]
In the following example, a scatter curve trace is first plotted. Frames are then added as a list of 50 Frame objects, each representing a red marker position on the curve. Note that the args attribute of the button is set to [None], due to which all frames are animated.
import numpy as np
import plotly.graph_objs as go
from plotly.offline import iplot
t = np.linspace(-1, 1, 100)
x = t + t ** 2
y = t - t ** 2
xm = np.min(x) - 1.5
xM = np.max(x) + 1.5
ym = np.min(y) - 1.5
yM = np.max(y) + 1.5
N = 50
s = np.linspace(-1, 1, N)
#s = np.arange(0, math.pi*2, 0.1)
xx = s + s ** 2
yy = s - s ** 2
fig = go.Figure(
data = [
go.Scatter(x = x, y = y, mode = "lines", line = dict(width = 2, color = "blue")),
go.Scatter(x = x, y = y, mode = "lines", line = dict(width = 2, color = "blue"))
],
layout = go.Layout(
xaxis=dict(range=[xm, xM], autorange=False, zeroline=False),
yaxis=dict(range=[ym, yM], autorange=False, zeroline=False),
title_text="Moving marker on curve",
updatemenus=[
dict(type="buttons", buttons=[dict(label="Play", method="animate", args=[None])])
]
),
frames = [go.Frame(
data = [
go.Scatter(
x = [xx[k]], y = [yy[k]], mode = "markers", marker = dict(
color = "red", size = 10
)
)
]
)
for k in range(N)]
)
iplot(fig)
The output of the code is stated below −
The red marker will start moving along the curve on clicking play button.
|
[
{
"code": null,
"e": 2655,
"s": 2360,
"text": "Plotly provides high degree of interactivity by use of different controls on the plotting area – such as buttons, dropdowns and sliders etc. These controls are incorporated with updatemenu attribute of the plot layout. You can add button and its behaviour by specifying the method to be called."
},
{
"code": null,
"e": 2737,
"s": 2655,
"text": "There are four possible methods that can be associated with a button as follows −"
},
{
"code": null,
"e": 2778,
"s": 2737,
"text": "restyle − modify data or data attributes"
},
{
"code": null,
"e": 2855,
"s": 2819,
"text": "relayout − modify layout attributes"
},
{
"code": null,
"e": 2934,
"s": 2891,
"text": "update − modify data and layout attributes"
},
{
"code": null,
"e": 3015,
"s": 2977,
"text": "animate − start or pause an animation"
},
{
"code": null,
"e": 3251,
"s": 3053,
"text": "The restyle method should be used when modifying the data and data attributes of the graph. In the following example, two buttons are added by Updatemenu() method to the layout with restyle method."
},
{
"code": null,
"e": 3474,
"s": 3251,
"text": "go.layout.Updatemenu(\ntype = \"buttons\",\ndirection = \"left\",\nbuttons = list([\n dict(args = [\"type\", \"box\"], label = \"Box\", method = \"restyle\"),\n dict(args = [\"type\", \"violin\"], label = \"Violin\", method = \"restyle\" )]\n))"
},
{
"code": null,
"e": 3755,
"s": 3474,
"text": "Value of type property is buttons by default. To render a dropdown list of buttons, change type to dropdown. A Box trace added to Figure object before updating its layout as above. The complete code that renders boxplot and violin plot depending on button clicked, is as follows −"
},
{
"code": null,
"e": 4377,
"s": 3755,
"text": "import plotly.graph_objs as go\nfig = go.Figure()\nfig.add_trace(go.Box(y = [1140,1460,489,594,502,508,370,200]))\nfig.layout.update(\n updatemenus = [\n go.layout.Updatemenu(\n type = \"buttons\", direction = \"left\", buttons=list(\n [\n dict(args = [\"type\", \"box\"], label = \"Box\", method = \"restyle\"),\n dict(args = [\"type\", \"violin\"], label = \"Violin\", method = \"restyle\")\n ]\n ),\n pad = {\"r\": 2, \"t\": 2},\n showactive = True,\n x = 0.11,\n xanchor = \"left\",\n y = 1.1,\n yanchor = \"top\"\n ), \n ]\n)\niplot(fig)"
},
{
"code": null,
"e": 4417,
"s": 4377,
"text": "The output of the code is given below −"
},
{
"code": null,
"e": 4478,
"s": 4417,
"text": "Click on Violin button to display corresponding Violin plot."
},
{
"code": null,
"e": 4625,
"s": 4478,
"text": "As mentioned above, value of type key in Updatemenu() method is assigned dropdown to display dropdown list of buttons. The plot appears as below −"
},
{
"code": null,
"e": 5053,
"s": 4625,
"text": "The update method should be used when modifying the data and layout sections of the graph. Following example demonstrates how to update and which traces are displayed while simultaneously updating layout attributes, such as, the chart title. Two Scatter traces corresponding to sine and cos wave are added to Figure object. The trace with visible attribute as True will be displayed on the plot and other traces will be hidden."
},
{
"code": null,
"e": 5946,
"s": 5053,
"text": "import numpy as np\nimport math #needed for definition of pi\n\nxpoints = np.arange(0, math.pi*2, 0.05)\ny1 = np.sin(xpoints)\ny2 = np.cos(xpoints)\nfig = go.Figure()\n# Add Traces\nfig.add_trace(\n go.Scatter(\n x = xpoints, y = y1, name = 'Sine'\n )\n)\nfig.add_trace(\n go.Scatter(\n x = xpoints, y = y2, name = 'cos'\n )\n)\nfig.layout.update(\n updatemenus = [\n go.layout.Updatemenu(\n type = \"buttons\", direction = \"right\", active = 0, x = 0.1, y = 1.2,\n buttons = list(\n [\n dict(\n label = \"first\", method = \"update\",\n args = [{\"visible\": [True, False]},{\"title\": \"Sine\"} ]\n ),\n dict(\n label = \"second\", method = \"update\", \n args = [{\"visible\": [False, True]},{\"title\": Cos\"}]\n )\n ]\n )\n )\n ]\n)\niplot(fig)"
},
{
"code": null,
"e": 6035,
"s": 5946,
"text": "Initially, Sine curve will be displayed. If clicked on second button, cos trace appears."
},
{
"code": null,
"e": 6083,
"s": 6035,
"text": "Note that chart title also updates accordingly."
},
{
"code": null,
"e": 6358,
"s": 6083,
"text": "In order to use animate method, we need to add one or more Frames to the Figure object. Along with data and layout, frames can be added as a key in a figure object. The frames key points to a list of figures, each of which will be cycled through when animation is triggered."
},
{
"code": null,
"e": 6472,
"s": 6358,
"text": "You can add, play and pause buttons to introduce animation in chart by adding an updatemenus array to the layout."
},
{
"code": null,
"e": 6603,
"s": 6472,
"text": "\"updatemenus\": [{\n \"type\": \"buttons\", \"buttons\": [{\n \"label\": \"Your Label\", \"method\": \"animate\", \"args\": [frames]\n }]\n}]\n"
},
{
"code": null,
"e": 6860,
"s": 6603,
"text": "In the following example, a scatter curve trace is first plotted. Then add frames which is a list of 50 Frame objects, each representing a red marker on the curve. Note that the args attribute of button is set to None, due to which all frames are animated."
},
{
"code": null,
"e": 7901,
"s": 6860,
"text": "import numpy as np\nt = np.linspace(-1, 1, 100)\nx = t + t ** 2\ny = t - t ** 2\nxm = np.min(x) - 1.5\nxM = np.max(x) + 1.5\nym = np.min(y) - 1.5\nyM = np.max(y) + 1.5\nN = 50\ns = np.linspace(-1, 1, N)\n#s = np.arange(0, math.pi*2, 0.1)\nxx = s + s ** 2\nyy = s - s ** 2\nfig = go.Figure(\n data = [\n go.Scatter(x = x, y = y, mode = \"lines\", line = dict(width = 2, color = \"blue\")),\n go.Scatter(x = x, y = y, mode = \"lines\", line = dict(width = 2, color = \"blue\"))\n ],\n layout = go.Layout(\n xaxis=dict(range=[xm, xM], autorange=False, zeroline=False),\n yaxis=dict(range=[ym, yM], autorange=False, zeroline=False),\n title_text=\"Moving marker on curve\",\n updatemenus=[\n dict(type=\"buttons\", buttons=[dict(label=\"Play\", method=\"animate\", args=[None])])\n ]\n ),\n frames = [go.Frame(\n data = [\n go.Scatter(\n x = [xx[k]], y = [yy[k]], mode = \"markers\", marker = dict(\n color = \"red\", size = 10\n )\n )\n ]\n )\n for k in range(N)]\n)\niplot(fig)"
},
{
"code": null,
"e": 7942,
"s": 7901,
"text": "The output of the code is stated below −"
},
{
"code": null,
"e": 8016,
"s": 7942,
"text": "The red marker will start moving along the curve on clicking play button."
},
{
"code": null,
"e": 8048,
"s": 8016,
"text": "\n 12 Lectures \n 53 mins\n"
},
{
"code": null,
"e": 8068,
"s": 8048,
"text": " Pranjal Srivastava"
},
{
"code": null,
"e": 8075,
"s": 8068,
"text": " Print"
},
{
"code": null,
"e": 8086,
"s": 8075,
"text": " Add Notes"
}
] |
How to use viewFlipper in Android?
|
This example demonstrates how do I use viewFlipper in android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ViewFlipper
android:id="@+id/viewFlipper"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:inAnimation="@android:anim/slide_in_left"
android:outAnimation="@android:anim/slide_out_right">
<ImageView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:src="@drawable/image"
android:layout_gravity="center"/>
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Button"
android:textSize="48sp"
android:textStyle="bold"
android:layout_gravity="center"/>
</ViewFlipper>
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentStart="true"
android:layout_margin="16dp"
android:onClick="previousView"
android:text="Previous" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentEnd="true"
android:layout_margin="16dp"
android:onClick="nextView"
android:text="Next" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.Gravity;
import android.view.View;
import android.widget.TextView;
import android.widget.ViewFlipper;
public class MainActivity extends AppCompatActivity {
private ViewFlipper viewFlipper;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
viewFlipper = findViewById(R.id.viewFlipper);
TextView textView = new TextView(this);
textView.setText("Dynamically added TextView");
textView.setGravity(Gravity.CENTER_HORIZONTAL);
viewFlipper.addView(textView);
viewFlipper.setFlipInterval(2000);
viewFlipper.startFlipping();
}
public void previousView(View v) {
viewFlipper.setInAnimation(this, R.anim.slide_in_right);
viewFlipper.setOutAnimation(this, R.anim.slide_out_left);
viewFlipper.showPrevious();
}
public void nextView(View v) {
viewFlipper.setInAnimation(this, android.R.anim.slide_in_left);
viewFlipper.setOutAnimation(this,
android.R.anim.slide_out_right);
viewFlipper.showNext();
}
}
Step 4 − Create an Android resource directory (anim) → right-click it, create Android resource files (slide_in_right & slide_out_left) and add the following code −
slide_in_right.xml
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">
<translate
android:duration="@android:integer/config_mediumAnimTime"
android:fromXDelta="50%p"
android:toXDelta="0" />
<alpha
android:duration="@android:integer/config_mediumAnimTime"
android:fromAlpha="0.0"
android:toAlpha="1.0" />
</set>
slide_out_left.xml
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">
<translate
android:duration="@android:integer/config_mediumAnimTime"
android:fromXDelta="0"
android:toXDelta="-50%p" />
<alpha
android:duration="@android:integer/config_mediumAnimTime"
android:fromAlpha="1.0"
android:toAlpha="0.0" />
</set>
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
Click here to download the project code.
|
[
{
"code": null,
"e": 1125,
"s": 1062,
"text": "This example demonstrates how do I use viewFlipper in android."
},
{
"code": null,
"e": 1254,
"s": 1125,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1319,
"s": 1254,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2898,
"s": 1319,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <ViewFlipper\n android:id=\"@+id/viewFlipper\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:inAnimation=\"@android:anim/slide_in_left\"\n android:outAnimation=\"@android:anim/slide_out_right\">\n <ImageView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:src=\"@drawable/image\"\n android:layout_gravity=\"center\"/>\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Button\"\n android:textSize=\"48sp\"\n android:textStyle=\"bold\"\n android:layout_gravity=\"center\"/>\n </ViewFlipper>\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentBottom=\"true\"\n android:layout_alignParentStart=\"true\"\n android:layout_margin=\"16dp\"\n android:onClick=\"previousView\"\n android:text=\"Previous\" />\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentBottom=\"true\"\n android:layout_alignParentEnd=\"true\"\n android:layout_margin=\"16dp\"\n android:onClick=\"nextView\"\n android:text=\"Next\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2955,
"s": 2898,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 4147,
"s": 2955,
"text": "import android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.Gravity;\nimport android.view.View;\nimport android.widget.TextView;\nimport android.widget.ViewFlipper;\npublic class MainActivity extends AppCompatActivity {\n private ViewFlipper viewFlipper;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n viewFlipper = findViewById(R.id.viewFlipper);\n TextView textView = new TextView(this);\n textView.setText(\"Dynamically added TextView\");\n textView.setGravity(Gravity.CENTER_HORIZONTAL);\n viewFlipper.addView(textView);\n viewFlipper.setFlipInterval(2000);\n viewFlipper.startFlipping();\n }\n public void previousView(View v) {\n viewFlipper.setInAnimation(this, R.anim.slide_in_right);\n viewFlipper.setOutAnimation(this, R.anim.slide_out_left);\n viewFlipper.showPrevious();\n }\n public void nextView(View v) {\n viewFlipper.setInAnimation(this, android.R.anim.slide_in_left);\n viewFlipper.setOutAnimation(this,\n android.R.anim.slide_out_right);\n viewFlipper.showNext();\n }\n}"
},
{
"code": null,
"e": 4303,
"s": 4147,
"text": "Step 4 − Create a android Resource Directory(anim) → Right click, create android Resource files (slide_in_right & slide_out_left) and the following codes −"
},
{
"code": null,
"e": 4322,
"s": 4303,
"text": "Slide_in_right.xml"
},
{
"code": null,
"e": 4708,
"s": 4322,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<set xmlns:android=\"http://schemas.android.com/apk/res/android\">\n <translate\n android:duration=\"@android:integer/config_mediumAnimTime\"\n android:fromXDelta=\"50%p\"\n android:toXDelta=\"0\" />\n <alpha\n android:duration=\"@android:integer/config_mediumAnimTime\"\n android:fromAlpha=\"0.0\"\n android:toAlpha=\"1.0\" />\n</set>"
},
{
"code": null,
"e": 4727,
"s": 4708,
"text": "slide_out_left.xml"
},
{
"code": null,
"e": 5114,
"s": 4727,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<set xmlns:android=\"http://schemas.android.com/apk/res/android\">\n <translate\n android:duration=\"@android:integer/config_mediumAnimTime\"\n android:fromXDelta=\"0\"\n android:toXDelta=\"-50%p\" />\n <alpha\n android:duration=\"@android:integer/config_mediumAnimTime\"\n android:fromAlpha=\"1.0\"\n android:toAlpha=\"0.0\" />\n</set>"
},
{
"code": null,
"e": 5169,
"s": 5114,
"text": "Step 5 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 5839,
"s": 5169,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 6186,
"s": 5839,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −"
},
{
"code": null,
"e": 6227,
"s": 6186,
"text": "Click here to download the project code."
}
] |
Java Program to Find the Surface Area and Volume of a Cone - GeeksforGeeks
|
01 Dec, 2020
Given the dimensions of the cone, find the surface area and volume of the cone. The formulas to calculate the area and volume are given below.
Cone
Cone is a three-dimensional geometric shape. It consists of a base having the shape of a circle and a curved side (the lateral surface) ending up in a tip called the apex or vertex.
Surface Area of Cone = Lateral Surface Area + Area of Base Circle = pi * r * s + pi * r^2
Volume of Cone = 1/3(pi * r * r * h)
where r is the radius of the circular base, h is the height (the perpendicular distance from the base to the vertex) and s is the slant height of the cone.
Slant height (s) can be calculated using Pythagoras formula sqrt(r * r + h * h)
Input :
radius = 5
slant_height = 13
height = 12
Output :
Volume Of Cone = 314.159
Surface Area Of Cone = 282.743
Input :
radius = 6
slant_height = 10
height = 8
Output :
Volume Of Cone = 301.593
Surface Area Of Cone = 301.593
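The expected outputs above can be double-checked numerically. Here is a quick sketch in Python (kept separate from the Java programs that follow) applying the same formulas:

```python
import math

def cone(r, h):
    # slant height via Pythagoras, then the volume and surface formulas
    s = math.hypot(r, h)
    volume = math.pi * r * r * h / 3
    surface = math.pi * r * s + math.pi * r * r
    return round(volume, 3), round(surface, 3)

print(cone(5, 12))  # (314.159, 282.743)
print(cone(6, 8))   # (301.593, 301.593)
```

math.hypot(r, h) is simply sqrt(r * r + h * h), the slant-height formula stated above.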
Approach :
Given the dimensions of the cone, say radius R and height H
Find S = sqrt(R * R + H * H)
Apply the above formulas
Example 1:
Java
// Java Program to Find the Surface Area and Volume of a Cone

import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        // specify radius and height of cone
        double R = 6, H = 8;

        // calculate slant height S
        double S = Math.sqrt(R * R + H * H);

        // calculate surface area of cone
        double SurfaceArea = (Math.PI * R * R) + (Math.PI * R * S);

        // calculate volume of cone
        double Volume = (Math.PI * R * R * H) / 3;

        System.out.println("Surface area of cone is : " + SurfaceArea);
        System.out.println("Volume of cone is : " + Volume);
    }
}
Surface area of cone is : 301.59289474462014
Volume of cone is : 301.59289474462014
Example 2:
Java
// Java Program to Find the Surface Area and Volume of a Cone

import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        // specify radius and height of cone
        double R = 3.42, H = 12;

        // calculate slant height S
        double S = Math.sqrt(R * R + H * H);

        // calculate surface area of cone
        double SurfaceArea = (Math.PI * R * R) + (Math.PI * R * S);

        // calculate volume of cone
        double Volume = (Math.PI * R * R * H) / 3;

        System.out.println("Surface area of cone is : " + SurfaceArea);
        System.out.println("Volume of cone is : " + Volume);
    }
}
Surface area of cone is : 170.81027853689216
Volume of cone is : 146.98129725379061
Time Complexity = O(1)
Picked
Java
Java Programs
|
[
{
"code": null,
"e": 23948,
"s": 23920,
"text": "\n01 Dec, 2020"
},
{
"code": null,
"e": 24090,
"s": 23948,
"text": "Given the dimensions of the cone, find the Surface area and Volume of a cone. The formula’s to calculate the area and volume are given below."
},
{
"code": null,
"e": 24095,
"s": 24090,
"text": "Cone"
},
{
"code": null,
"e": 24277,
"s": 24095,
"text": "Cone is a three-dimensional geometric shape. It consists of a base having the shape of a circle and a curved side (the lateral surface) ending up in a tip called the apex or vertex."
},
{
"code": null,
"e": 24354,
"s": 24277,
"text": "Surface Area of Cone = Area of cone + Area of Circle = pi * r * s + pi * r^2"
},
{
"code": null,
"e": 24391,
"s": 24354,
"text": "Volume of Cone = 1/3(pi * r * r * h)"
},
{
"code": null,
"e": 24547,
"s": 24391,
"text": "where r is the radius of the circular base, h is the height (the perpendicular distance from the base to the vertex) and s is the slant height of the cone."
},
{
"code": null,
"e": 24627,
"s": 24547,
"text": "Slant height (s) can be calculated using Pythagoras formula sqrt(r * r + h * h)"
},
{
"code": null,
"e": 24857,
"s": 24627,
"text": "Input : \nradius = 5\nslant_height = 13\nheight = 12\nOutput :\nVolume Of Cone = 314.159\nSurface Area Of Cone = 282.743\n\nInput :\nradius = 6\nslant_height = 10\nheight = 8\nOutput : \nVolume Of Cone = 301.593\nSurface Area Of Cone = 301.593"
},
{
"code": null,
"e": 24869,
"s": 24857,
"text": "Approach : "
},
{
"code": null,
"e": 24937,
"s": 24869,
"text": "Given the dimensions of the cone, say radius R and height H of cone"
},
{
"code": null,
"e": 24966,
"s": 24937,
"text": "Find S = sqrt(R * R + H * H)"
},
{
"code": null,
"e": 24991,
"s": 24966,
"text": "Apply the above formulas"
},
{
"code": null,
"e": 25003,
"s": 24991,
"text": "Example 1: "
},
{
"code": null,
"e": 25008,
"s": 25003,
"text": "Java"
},
{
"code": "// Java Program to Find the Surface Area and Volume of a// Cone import java.io.*; class GFG { public static void main(String[] args) { // specify radius and height of cone double R = 6, H = 8; // calculate slant height S double S = Math.sqrt(R * R + H * H); // calculate surface area of cone double SurfaceArea = (Math.PI * R * R) + (Math.PI * R * S); // calculate volume of cone double Volume = (Math.PI * R * R * H) / 3; System.out.println(\"Surface area of cone is : \" + SurfaceArea); System.out.println(\"Volume of cone is : \" + Volume); }}",
"e": 25679,
"s": 25008,
"text": null
},
{
"code": null,
"e": 25764,
"s": 25679,
"text": "Surface area of cone is : 301.59289474462014\nVolume of cone is : 301.59289474462014\n"
},
{
"code": null,
"e": 25775,
"s": 25764,
"text": "Example 2:"
},
{
"code": null,
"e": 25780,
"s": 25775,
"text": "Java"
},
{
"code": "// Java Program to Find the Surface Area and Volume of a// Cone import java.io.*; class GFG { public static void main(String[] args) { // specify radius and height of cone double R = 3.42, H = 12; // calculate slant height S double S = Math.sqrt(R * R + H * H); // calculate surface area of cone double SurfaceArea = (Math.PI * R * R) + (Math.PI * R * S); // calculate volume of cone double Volume = (Math.PI * R * R * H) / 3; System.out.println(\"Surface area of cone is : \" + SurfaceArea); System.out.println(\"Volume of cone is : \" + Volume); }}",
"e": 26455,
"s": 25780,
"text": null
},
{
"code": null,
"e": 26540,
"s": 26455,
"text": "Surface area of cone is : 170.81027853689216\nVolume of cone is : 146.98129725379061\n"
},
{
"code": null,
"e": 26563,
"s": 26540,
"text": "Time Complexity = O(1)"
},
{
"code": null,
"e": 26570,
"s": 26563,
"text": "Picked"
},
{
"code": null,
"e": 26575,
"s": 26570,
"text": "Java"
},
{
"code": null,
"e": 26589,
"s": 26575,
"text": "Java Programs"
  }
] |
Convert an object to associative array in PHP - GeeksforGeeks
|
01 Aug, 2021
An object is an instance of a class. It is simply a specimen of a class and has memory allocated. An array is a data structure that stores one or more similar types of values under a single name, but an associative array is different from a simple PHP array: an array which uses string indexes is called an associative array, and it stores element values in association with key values rather than in a linear index order.

Method 1: Using the json_decode and json_encode methods: The json_decode function accepts a JSON encoded string and converts it into a PHP variable, while json_encode returns a JSON encoded string for a given value.

Syntax:
$myArray = json_decode(json_encode($object), true);
Example:
php
<?php
class sample {
   /* Member variables */
   var $var1;
   var $var2;

   function __construct( $par1, $par2 ) {
      $this->var1 = $par1;
      $this->var2 = $par2;
   }
}

// Creating the object
$myObj = new sample(1000, "second");
echo "Before conversion: \n";
var_dump($myObj);

// Converting object to associative array
$myArray = json_decode(json_encode($myObj), true);
echo "After conversion: \n";
var_dump($myArray);
?>
Before conversion:
object(sample)#1 (2) {
["var1"]=>
int(1000)
["var2"]=>
string(6) "second"
}
After conversion:
array(2) {
["var1"]=>
int(1000)
["var2"]=>
string(6) "second"
}
Method 2: Typecasting an object to an array: Typecasting is the way to use a variable of one data type as a different data type; it is simply an explicit conversion of a data type. A PHP object can be converted to an array by using the typecasting rules supported in PHP.

Syntax:
$myArray = (array) $myObj;
Example:
php
<?php
class bag {
   /* Member variables */
   var $item1;
   var $item2;
   var $item3;

   function __construct( $par1, $par2, $par3) {
      $this->item1 = $par1;
      $this->item2 = $par2;
      $this->item3 = $par3;
   }
}

// Create myBag object
$myBag = new bag("Mobile", "Charger", "Cable");
echo "Before conversion : \n";
var_dump($myBag);

// Converting object to an array
$myBagArray = (array)$myBag;
echo "After conversion : \n";
var_dump($myBagArray);
?>
Before conversion :
object(bag)#1 (3) {
["item1"]=>
string(6) "Mobile"
["item2"]=>
string(7) "Charger"
["item3"]=>
string(5) "Cable"
}
After conversion :
array(3) {
["item1"]=>
string(6) "Mobile"
["item2"]=>
string(7) "Charger"
["item3"]=>
string(5) "Cable"
}
WebSockets - Quick Guide
In literal terms, handshaking is the gripping and shaking of right hands by two individuals to symbolize greeting, congratulations, agreement, or farewell. In computer science, handshaking is a process that ensures the server is in sync with its clients. Handshaking is the basic concept of the Web Socket protocol.
The following diagram shows the server handshake with various clients −
Web sockets are defined as a two-way communication channel between servers and clients, which means both parties can communicate and exchange data at the same time.
The key points of Web Sockets are true concurrency and optimization of performance, resulting in more responsive and rich web applications.
This protocol defines full-duplex communication from the ground up. Web sockets take a step forward in bringing desktop-rich functionality to web browsers. They represent an evolution that was long awaited in client/server web technology.
The main features of web sockets are as follows −
The Web Socket protocol is being standardized, which means real-time communication between web servers and clients is possible with its help.
Web sockets are becoming a cross-platform standard for real-time communication between a client and a server.
The standard enables new kinds of applications. Businesses building real-time web applications can speed up with the help of this technology.
The biggest advantage of Web Socket is that it provides two-way (full duplex) communication over a single TCP connection.
HTTP has its own set of schemes, such as http and https. The Web Socket protocol similarly defines its own schemes, ws and wss, in its URL pattern.
The following image shows the Web Socket URL in tokens.
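The tokens of a Web Socket URL (scheme, host, port, resource name) can also be inspected programmatically. Below is a small sketch using the standard WHATWG URL parser, available both in browsers and as a global in Node.js; the URL itself is only illustrative:

```javascript
// Break a WebSocket URL into its tokens with the WHATWG URL parser.
// The URL below is only an example.
const wsUrl = new URL("ws://echo.websocket.org:8080/chat?room=1");

console.log(wsUrl.protocol); // "ws:"  (the scheme; "wss:" for TLS)
console.log(wsUrl.hostname); // "echo.websocket.org"
console.log(wsUrl.port);     // "8080"
console.log(wsUrl.pathname); // "/chat" (the resource name)
```

The parser treats ws and wss as "special" schemes, so default ports (80 and 443) are elided from the port property.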
The latest specification of Web Socket protocol is defined as RFC 6455 – a proposed standard.
RFC 6455 is supported by various browsers like Internet Explorer, Mozilla Firefox, Google Chrome, Safari, and Opera.
Before diving into the need for Web sockets, it is necessary to look at the existing techniques used for duplex communication between the server and the client. They are as follows −
Polling
Long Polling
Streaming
Postback and AJAX
HTML5
Polling is a method that performs periodic requests regardless of whether any data exists to transmit. The periodic requests are sent synchronously: the client makes a request to the server at a specified time interval, and the server's response contains either the available data or a warning message.
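The wastefulness of polling can be sketched without any networking at all. In the following simulation (all names are illustrative), a server stub only has data ready on the third request, so the client spends two round trips receiving nothing useful:

```javascript
// Simulated polling: the client asks repeatedly; the server stub
// only has data available from the third request onward.
function makeServerStub() {
  let calls = 0;
  return function poll() {
    calls += 1;
    return calls >= 3 ? { data: "update #" + calls } : { data: null };
  };
}

function pollUntilData(poll, maxRequests) {
  // Each iteration stands for one periodic HTTP request.
  for (let i = 1; i <= maxRequests; i++) {
    const response = poll();
    if (response.data !== null) {
      return { requests: i, data: response.data };
    }
  }
  return { requests: maxRequests, data: null };
}

const result = pollUntilData(makeServerStub(), 10);
console.log(result); // { requests: 3, data: 'update #3' }
```

Two of the three requests carried no payload; with a Web Socket, the server would simply push the update when it became available.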
Long polling, as the name suggests, includes similar technique like polling. The client and the server keep the connection active until some data is fetched or timeout occurs. If the connection is lost due to some reasons, the client can start over and perform sequential request.
Long polling is essentially a performance improvement over the polling process, but the constant requests may still slow things down.
It is considered the best option for real-time data transmission. The server keeps the connection open and active with the client until the required data has been fetched; in this case, the connection is said to be open indefinitely. Streaming includes HTTP headers, which increase the file size and add delay. This can be considered a major drawback.
AJAX is based on Javascript's XmlHttpRequest Object. It is an abbreviated form of Asynchronous Javascript and XML. XmlHttpRequest Object allows execution of the Javascript without reloading the complete web page. AJAX sends and receives only a portion of the web page.
The code snippet of AJAX call with XmlHttpRequest Object is as follows −
var xhttp;
if (window.XMLHttpRequest) {
xhttp = new XMLHttpRequest();
} else {
// code for IE6, IE5
xhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
The major drawbacks of AJAX in comparison with Web Sockets are −
They send HTTP headers, which makes total size larger.
The communication is half-duplex.
The web server consumes more resources.
HTML5 is a robust framework for developing and designing web applications. The main pillars include Mark-up, CSS3 and Javascript APIs together.
The following diagram shows HTML5 components −
The code snippet given below describes the declaration of HTML5 and its doctype.
<!DOCTYPE html>
The Internet was conceived as a collection of Hypertext Mark-up Language (HTML) pages linking to one another to form a conceptual web of information. Over the course of time, static resources increased in number, and richer items, such as images, began to be a part of the web fabric.
Server technologies advanced which allowed dynamic server pages - pages whose content was generated based on a query.
Soon, the requirement for more dynamic web pages led to the availability of Dynamic Hypertext Mark-up Language (DHTML), all thanks to JavaScript. Over the following years, we saw cross-frame communication in an attempt to avoid page reloads, followed by HTTP polling within frames.
However, none of these solutions offered a truly standardized cross browser solution to real-time bi-directional communication between a server and a client.
This gave rise to the need of Web Sockets Protocol. It gave rise to full-duplex communication bringing desktop-rich functionality to all web browsers.
Web Socket represents a major upgrade in the history of web communications. Before its existence, all communication between the web clients and the servers relied only on HTTP.
Web Socket enables a dynamic flow of persistent, full-duplex connections. Full duplex refers to communication from both ends at considerably fast speed.
It is termed a game changer because it efficiently overcomes the drawbacks of existing protocols.
Importance of Web Socket for developers and architects −
Web Socket is an independent TCP-based protocol, but it is designed to support any other protocol that would traditionally run only on top of a pure TCP connection.
Web Socket is a transport layer on top of which any other protocol can run. The Web Socket API supports the ability to define sub-protocols: protocol libraries that can interpret specific protocols.
Examples of such protocols include XMPP, STOMP, and AMQP. The developers no longer have to think in terms of the HTTP request-response paradigm.
The only requirement on the browser-side is to run a JavaScript library that can interpret the Web Socket handshake, establish and maintain a Web Socket connection.
On the server side, the industry standard is to use existing protocol libraries that run on top of TCP and leverage a Web Socket Gateway.
The following diagram describes the functionalities of Web Sockets −
Web Socket connections are initiated via HTTP; HTTP servers typically interpret Web Socket handshakes as an Upgrade request.
Web Sockets can both be a complementary add-on to an existing HTTP environment and can provide the required infrastructure to add web functionality. It relies on more advanced, full duplex protocols that allow data to flow in both directions between client and server.
Web Sockets provide a connection between the web server and a client such that both the parties can start sending the data.
The steps for establishing the connection of Web Socket are as follows −
The client establishes a connection through a process known as the Web Socket handshake.
The process begins with the client sending a regular HTTP request to the server.
An Upgrade header is included in this request, informing the server that the request is for a Web Socket connection.
Web Socket URLs use the ws scheme. The wss scheme is used for secure Web Socket connections, the equivalent of HTTPS.
A simple example of initial request headers is as follows −
GET ws://websocket.example.com/ HTTP/1.1
Origin: http://example.com
Connection: Upgrade
Host: websocket.example.com
Upgrade: websocket
Web Sockets occupy a key role not only in the web but also in the mobile industry. The importance of Web Sockets is given below.
Web Sockets as the name indicates, are related to the web. Web consists of a bunch of techniques for some browsers; it is a broad communication platform for vast number of devices, including desktop computers, laptops, tablets and smart phones.
HTML5 app that utilizes Web Sockets will work on any HTML5 enabled web browser.
Web socket is supported in the mainstream operating systems. All key players in the mobile industry provide Web Socket APIs in own native apps.
Web sockets are said to be a full duplex communication. The approach of Web Sockets works well for certain categories of web application such as chat room, where the updates from client as well as server are shared simultaneously.
Web Sockets, a part of the HTML5 specification, allow full duplex communication between web pages and a remote host. The protocol is designed to achieve the following benefits, which can be considered as the key points −
Reduce unnecessary network traffic and latency using full duplex through a single connection (instead of two).
Streaming through proxies and firewalls, with the support of upstream and downstream communication simultaneously.
To communicate, the client must first initialize a connection to the server. This is done by creating a JavaScript WebSocket object with the URL of the remote or local server.
var socket = new WebSocket("ws://echo.websocket.org");
The URL mentioned above is a public address that can be used for testing and experiments. The websocket.org server is always up; when it receives a message, it sends it back to the client.
This is the most important step to ensure that application works correctly.
There are four main Web Socket API events −
Open
Message
Close
Error
Each of the events are handled by implementing the functions like onopen, onmessage, onclose and onerror functions respectively. It can also be implemented with the help of addEventListener method.
The brief overview of the events and functions are described as follows −
Once the connection has been established between the client and the server, the open event is fired from Web Socket instance. It is called as the initial handshake between client and server. The event, which is raised once the connection is established, is called onopen.
Message event happens usually when the server sends some data. Messages sent by the server to the client can include plain text messages, binary data or images. Whenever the data is sent, the onmessage function is fired.
Close event marks the end of the communication between server and the client. Closing the connection is possible with the help of onclose event. After marking the end of communication with the help of onclose event, no messages can be further transferred between the server and the client. Closing the event can happen due to poor connectivity as well.
Error marks for some mistake, which happens during the communication. It is marked with the help of onerror event. Onerror is always followed by termination of connection. The detailed description of each and every event is discussed in further chapters.
Events are usually triggered when something happens. On the other hand, actions are taken when a user wants something to happen. Actions are made by explicit calls using functions by users.
The Web Socket protocol supports two main actions, namely −
send( )
close( )
This action is used to send data to the server, including text messages, binary data, or images.
A chat message, which is sent with the help of send() action, is as follows −
// Get the text view and the button for submitting the message
var textsend = document.getElementById("text-view");
var submitMsg = document.getElementById("send-button");

// Handling the click event
submitMsg.onclick = function() {
   // Send the data
   socket.send(textsend.value);
};
Note − Sending the messages is only possible if the connection is open.
This method stands for goodbye handshake. It terminates the connection completely and no data can be transferred until the connection is re-established.
var textsend = document.getElementById("text-view");
var buttonStop = document.getElementById("stop-button");

// Handling the click event
buttonStop.onclick = function() {
   // Close the connection if open
   if (socket.readyState === WebSocket.OPEN) {
      socket.close();
   }
};
It is also possible to close the connection deliberately with the help of following code snippet −
socket.close(1000, "Deliberate disconnection");
Once a connection has been established between the client and the server, the open event is fired from Web Socket instance. It is called as the initial handshake between client and server.
The event raised once the connection is established is called onopen. Creating a Web Socket connection is really simple: all you have to do is call the WebSocket constructor and pass in the URL of your server.
The following code is used to create a Web Socket connection −
// Create a new WebSocket.
var socket = new WebSocket('ws://echo.websocket.org');
Once the connection has been established, the open event will be fired on your Web Socket instance.
onopen refers to the initial handshake between the client and the server: the handshake has succeeded and the web application is ready to transmit data.
The following code snippet describes opening the connection of Web Socket protocol −
socket.onopen = function(event) {
   console.log("Connection established");
   // Display a user-friendly message for the successful establishment of the connection
   var label = document.getElementById("status");
   label.innerHTML = "Connection established";
}
It is a good practice to provide appropriate feedback to the users waiting for the Web Socket connection to be established. However, it is always noted that Web Socket connections are comparatively fast.
The demo of the Web Socket connection established is documented in the given URL − https://www.websocket.org/echo.html
A snapshot of the connection establishment and response to the user is shown below −
Establishing an open state allows full duplex communication and transfer of messages until the connection is terminated.
Building the client HTML5 file −
<!DOCTYPE html>
<html>
<meta charset = "utf-8" />
<title>WebSocket Test</title>
<script language = "javascript" type = "text/javascript">
var wsUri = "ws://echo.websocket.org/";
var output;
function init() {
output = document.getElementById("output");
testWebSocket();
}
function testWebSocket() {
websocket = new WebSocket(wsUri);
websocket.onopen = function(evt) {
onOpen(evt)
};
}
function onOpen(evt) {
   writeToScreen("CONNECTED");
}

// onOpen calls writeToScreen, so define it here as well.
function writeToScreen(message) {
   var pre = document.createElement("p");
   pre.innerHTML = message;
   output.appendChild(pre);
}

window.addEventListener("load", init, false);
</script>
<h2>WebSocket Test</h2>
<div id = "output"></div>
</html>
The output will be as follows −
The above HTML5 and JavaScript file shows the implementation of two events of Web Socket, namely −
onLoad which helps in creation of JavaScript object and initialization of connection.
onOpen establishes connection with the server and also sends the status.
Once a connection has been established between the client and the server, an open event is fired from the Web Socket instance. Errors are generated for mistakes that take place during the communication and are marked with the help of the onerror event. onerror is always followed by termination of the connection.
The onerror event is fired when something goes wrong during the communication. The onerror event is followed by a connection termination, which is a close event.
A good practice is to always inform the user about the unexpected error and try to reconnect them.
socket.onerror = function(event) {
   console.log("Error occurred.");
   // Inform the user about the error.
   var label = document.getElementById("status-label");
   label.innerHTML = "Error: " + event;
}
When it comes to error handling, you have to consider both internal and external parameters.
Internal parameters include errors that can be generated because of the bugs in your code, or unexpected user behavior.
External errors have nothing to do with the application; rather, they are related to parameters, which cannot be controlled. The most important one is the network connectivity.
Any interactive bidirectional web application requires, well, an active Internet connection.
Imagine that your users are enjoying your web app, when suddenly the network connection becomes unresponsive in the middle of their task. In modern native desktop and mobile applications, it is a common task to check for network availability.
The most common way of doing so is simply making an HTTP request to a website that is supposed to be up (for example, http://www.google.com). If the request succeeds, the desktop or mobile device knows there is active connectivity. Similarly, HTML has XMLHttpRequest for determining network availability.
HTML5, though, made it even easier and introduced a way to check whether the browser can accept web responses. This is achieved via the navigator object −
if (navigator.onLine) {
alert("You are Online");
} else {
alert("You are Offline");
}
Offline mode means that either the device is not connected or the user has selected the offline mode from the browser toolbar.
Here is how to inform the user that the network is not available and try to reconnect when a WebSocket close event occurs −
socket.onclose = function (event) {
   // Connection closed.
   // Firstly, check the reason.
   if (event.code != 1000) {
      // Error code 1000 means that the connection was closed normally.
      // Try to reconnect.
      if (!navigator.onLine) {
         alert("You are offline. Please connect to the Internet and try again.");
      }
   }
}
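When reconnecting after such a close, a common practice (not part of the Web Socket standard itself) is to back off exponentially instead of retrying immediately, so a struggling server is not hammered with connection attempts. A sketch of the delay calculation; the base and cap values here are arbitrary choices:

```javascript
// Exponential backoff with a cap: 1s, 2s, 4s, ... up to 30s.
function reconnectDelayMs(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * Math.pow(2, attempt), capMs);
}

console.log(reconnectDelayMs(0)); // 1000
console.log(reconnectDelayMs(3)); // 8000
console.log(reconnectDelayMs(9)); // 30000 (capped)
```

The computed delay would typically be passed to setTimeout before calling the WebSocket constructor again.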
The following program explains how to show error messages using Web Sockets −
<!DOCTYPE html>
<html>
<meta charset = "utf-8" />
<title>WebSocket Test</title>
<script language = "javascript" type = "text/javascript">
var wsUri = "ws://echo.websocket.org/";
var output;
function init() {
output = document.getElementById("output");
testWebSocket();
}
function testWebSocket() {
websocket = new WebSocket(wsUri);
websocket.onopen = function(evt) {
onOpen(evt)
};
websocket.onclose = function(evt) {
onClose(evt)
};
websocket.onerror = function(evt) {
onError(evt)
};
}
function onOpen(evt) {
writeToScreen("CONNECTED");
doSend("WebSocket rocks");
}
function onClose(evt) {
writeToScreen("DISCONNECTED");
}
function onError(evt) {
writeToScreen('<span style = "color: red;">ERROR:</span> ' + evt.data);
}
function doSend(message) {
writeToScreen("SENT: " + message); websocket.send(message);
}
function writeToScreen(message) {
var pre = document.createElement("p");
pre.style.wordWrap = "break-word";
pre.innerHTML = message; output.appendChild(pre);
}
window.addEventListener("load", init, false);
</script>
<h2>WebSocket Test</h2>
<div id = "output"></div>
</html>
The output is as follows −
The Message event takes place usually when the server sends some data. Messages sent by the server to the client can include plain text messages, binary data, or images. Whenever data is sent, the onmessage function is fired.
This event acts as a client's ear to the server. Whenever the server sends data, the onmessage event gets fired.
The following code snippet describes opening the connection of Web Socket protocol.
connection.onmessage = function(e){
var server_message = e.data;
console.log(server_message);
}
It is also necessary to take into account what kinds of data can be transferred with the help of Web Sockets. Web socket protocol supports text and binary data. In terms of Javascript, text refers to as a string, while binary data is represented like ArrayBuffer.
Web sockets support only one binary format at a time. The declaration of binary data is done explicitly as follows −
socket.binaryType = "arraybuffer";
socket.binaryType = "blob";
Strings are useful for dealing with human-readable formats such as XML and JSON. Whenever the onmessage event is raised, the client needs to check the data type and act accordingly.
The code snippet for determining the data type as String is mentioned below −
socket.onmessage = function(event) {
   if (typeof event.data === "string") {
      console.log("Received data string");
   }
}
JSON is a lightweight format for transferring human-readable data between computers. The structure of JSON consists of key-value pairs.
{
   "name": "James Devilson",
   "message": "Hello World!"
}
The following code shows how to handle a JSON object and extract its properties −
socket.onmessage = function(event) {
   if (typeof event.data === "string") {
      // Create a JSON object.
      var jsonObject = JSON.parse(event.data);
      var username = jsonObject.name;
      var message = jsonObject.message;
      console.log("Received data string");
   }
}
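The same extraction logic can be exercised outside the browser with a plain JSON string standing in for event.data; the field names match the sample payload above:

```javascript
// Parse a JSON chat payload and pull out its fields, the way an
// onmessage handler would with event.data.
const payload = '{"name": "James Devilson", "message": "Hello World!"}';

const jsonObject = JSON.parse(payload);
const username = jsonObject.name;
const message = jsonObject.message;

console.log(username + ": " + message); // "James Devilson: Hello World!"
```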
Parsing XML is not difficult, though the techniques differ from browser to browser. The best method is to parse using a third-party library like jQuery.
In both XML and JSON, the server responds as a string, which is being parsed at the client end.
An ArrayBuffer holds structured binary data. The enclosed bytes are given in order so that their positions can be easily tracked. ArrayBuffers are handy for storing image files.
Receiving data using ArrayBuffers is fairly simple: the instanceof operator is used instead of the equality operator.
The following code shows how to handle and receive an ArrayBuffer object −
socket.onmessage = function(event) {
   if (event.data instanceof ArrayBuffer) {
      var buffer = event.data;
      console.log("Received arraybuffer");
   }
}
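Putting the string and binary checks together, a handler can branch on the payload type. The helper below (an illustrative name) runs against plain values, so it can be tried without a live socket:

```javascript
// Classify an incoming payload the way an onmessage handler would.
function classifyPayload(data) {
  if (typeof data === "string") return "text";
  if (data instanceof ArrayBuffer) return "binary";
  return "unknown";
}

console.log(classifyPayload("hello"));            // "text"
console.log(classifyPayload(new ArrayBuffer(8))); // "binary"
console.log(classifyPayload(42));                 // "unknown"
```

Inside a real handler, the "text" branch would go on to parse JSON or XML, and the "binary" branch would process the ArrayBuffer (or Blob, depending on socket.binaryType).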
The following program code shows how to send and receive messages using Web Sockets.
<!DOCTYPE html>
<html>
<meta charset = "utf-8" />
<title>WebSocket Test</title>
<script language = "javascript" type = "text/javascript">
var wsUri = "ws://echo.websocket.org/";
var output;
function init() {
output = document.getElementById("output");
testWebSocket();
}
function testWebSocket() {
websocket = new WebSocket(wsUri);
websocket.onopen = function(evt) {
onOpen(evt)
};
websocket.onmessage = function(evt) {
onMessage(evt)
};
websocket.onerror = function(evt) {
onError(evt)
};
}
function onOpen(evt) {
writeToScreen("CONNECTED");
doSend("WebSocket rocks");
}
function onMessage(evt) {
writeToScreen('<span style = "color: blue;">RESPONSE: ' +
evt.data+'</span>'); websocket.close();
}
function onError(evt) {
writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data);
}
function doSend(message) {
writeToScreen("SENT: " + message); websocket.send(message);
}
function writeToScreen(message) {
var pre = document.createElement("p");
pre.style.wordWrap = "break-word";
pre.innerHTML = message; output.appendChild(pre);
}
window.addEventListener("load", init, false);
</script>
<h2>WebSocket Test</h2>
<div id = "output"></div>
</html>
The output is shown below.
The close event marks the end of communication between the server and the client. Closing a connection is possible with the help of the onclose event. After the end of communication is marked with the onclose event, no further messages can be transferred between the server and the client. A close event can also occur due to poor connectivity.
The close() method stands for goodbye handshake. It terminates the connection and no data can be exchanged unless the connection opens again.
Similar to the previous example, we call the close() method when the user clicks on the second button.
var textView = document.getElementById("text-view");
var buttonStop = document.getElementById("stop-button");
buttonStop.onclick = function() {
// Close the connection, if open.
if (socket.readyState === WebSocket.OPEN) {
socket.close();
}
}
It is also possible to pass the code and reason parameters we mentioned earlier as shown below.
socket.close(1000, "Deliberate disconnection");
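Inside an onclose handler, the code, reason, and wasClean fields of the event can be inspected. The helper below is purely illustrative; it maps a few of the standard close codes defined in RFC 6455 to readable labels.

```javascript
// Illustrative: map a few standard RFC 6455 close codes to labels.
function describeClose(code) {
   var names = {
      1000: "Normal closure",
      1001: "Going away",
      1006: "Abnormal closure (no close frame)",
      1009: "Message too big"
   };
   return names[code] || "Unknown code " + code;
}

// A typical handler would then log:
// socket.onclose = function(event) {
//    console.log(describeClose(event.code), event.reason, event.wasClean);
// };
console.log(describeClose(1000)); // Normal closure
```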
The following code gives a complete overview of how to close or disconnect a Web Socket connection −
<!DOCTYPE html>
<html>
<meta charset = "utf-8" />
<title>WebSocket Test</title>
<script language = "javascript" type = "text/javascript">
var wsUri = "ws://echo.websocket.org/";
var output;
function init() {
output = document.getElementById("output");
testWebSocket();
}
function testWebSocket() {
websocket = new WebSocket(wsUri);
websocket.onopen = function(evt) {
onOpen(evt)
};
websocket.onclose = function(evt) {
onClose(evt)
};
websocket.onmessage = function(evt) {
onMessage(evt)
};
websocket.onerror = function(evt) {
onError(evt)
};
}
function onOpen(evt) {
writeToScreen("CONNECTED");
doSend("WebSocket rocks");
}
function onClose(evt) {
writeToScreen("DISCONNECTED");
}
function onMessage(evt) {
writeToScreen('<span style = "color: blue;">RESPONSE: ' +
evt.data+'</span>'); websocket.close();
}
function onError(evt) {
writeToScreen('<span style = "color: red;">ERROR:</span> '
+ evt.data);
}
function doSend(message) {
writeToScreen("SENT: " + message); websocket.send(message);
}
function writeToScreen(message) {
var pre = document.createElement("p");
pre.style.wordWrap = "break-word";
pre.innerHTML = message;
output.appendChild(pre);
}
window.addEventListener("load", init, false);
</script>
<h2>WebSocket Test</h2>
<div id = "output"></div>
</html>
The output is as follows −
A Web Socket server is a simple program, which has the ability to handle Web Socket events and actions. It usually exposes similar methods to the Web Socket client API and most programming languages provide an implementation. The following diagram illustrates the communication process between a Web Socket server and a Web Socket client, emphasizing the triggered events and actions.
The following diagram shows a Web Socket server and client event triggering −
The Web Socket server works in a similar way to the Web Socket clients. It responds to events and performs actions when necessary. Regardless of the programming language used, every Web Socket server performs some specific actions.
It is initialized to a Web Socket address. It handles OnOpen, OnClose, and OnMessage events, and sends messages to the clients too.
Every Web Socket server needs a valid host and port. An example of creating a Web Socket instance in server is as follows −
var server = new WebSocketServer("ws://localhost:8181");
Any valid URL can be used, together with a port that is not already in use. It is very useful to keep a record of the connected clients, as this makes it possible to send different data or different messages to each one.
Fleck represents the incoming connections (clients) with the IwebSocketConnection interface. Whenever someone connects or disconnects from our service, empty list can be created or updated.
var clients = new List<IWebSocketConnection>();
After that, we can call the Start method and wait for clients to connect. Once started, the server is able to accept incoming connections. In Fleck, the Start method takes a lambda, which exposes the socket that raised the events −
server.Start(socket =>
{
});
The OnOpen event determines that a new client has requested access and performs an initial handshake. The client should be added to the list and probably the information should be stored related to it, such as the IP address. Fleck provides us with such information, as well as a unique identifier for the connection.
server.Start(socket => {
   socket.OnOpen = () => {
      // Add the incoming connection to our list.
      clients.Add(socket);
   };
   // Handle the other events here...
});
The OnClose event is raised whenever a client disconnects. The client is removed from the list and the rest of the clients are informed about the disconnection.
socket.OnClose = () => {
   // Remove the disconnected client from the list.
   clients.Remove(socket);
};
The OnMessage event is raised when a client sends data to the server. Inside this event handler, the incoming message can be transmitted to all the clients, or to only a selected subset of them.
The process is simple. Note that this handler takes a string named message as a parameter −
socket.OnMessage = message => {
   // Display the message on the console.
   Console.WriteLine(message);
};
The Send() method simply transmits the desired message to the specified client. Using Send(), text or binary data can be stored across the clients.
The working of OnMessage event is as follows −
socket.OnMessage = message => {
   foreach (var client in clients) {
      // Send the message to everyone!
      // Also, send the client connection's unique identifier in order
      // to recognize who is who.
      client.Send(client.ConnectionInfo.Id + " says: " + message);
   }
};
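The same broadcast pattern can be sketched in JavaScript. The stub client objects below are purely illustrative stand-ins for real connections (a real Node.js server would use a library such as ws); the id tagging mirrors the Fleck example above.

```javascript
// Send a message to every connected client, tagged with the sender's id.
function broadcast(clients, senderId, message) {
   clients.forEach(function(client) {
      client.send(senderId + " says: " + message);
   });
}

// Stub clients that record what they were sent, for illustration only.
function stubClient() {
   return { inbox: [], send: function(msg) { this.inbox.push(msg); } };
}

var a = stubClient(), b = stubClient();
broadcast([a, b], "client-1", "hello");
console.log(b.inbox[0]); // client-1 says: hello
```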
API, an abbreviation of Application Program Interface, is a set of routines, protocols, and tools for building software applications.
Some important features are −
The API specifies how software components should interact and APIs should be used when programming graphical user interface (GUI) components.
A good API makes it easier to develop a program by providing all the building blocks.
REST, which typically runs over HTTP is often used in mobile applications, social websites, mashup tools, and automated business processes.
The REST style emphasizes that interactions between the clients and services is enhanced by having a limited number of operations (verbs).
Flexibility is provided by assigning resources; their own unique Universal Resource Identifiers (URIs).
REST avoids ambiguity because each verb has a specific meaning (GET, POST, PUT and DELETE)
Web Socket solves a few issues with REST, or HTTP in general −
HTTP is a unidirectional protocol where the client always initiates a request. The server processes and returns a response, and then the client consumes it. Web Socket is a bi-directional protocol where there are no predefined message patterns such as request/response. Either the client or the server can send a message to the other party.
HTTP allows the request message to go from the client to the server and then the server sends a response message to the client. At a given time, either the client is talking to the server or the server is talking to the client. Web Socket allows the client and the server to talk independent of each other.
Typically, a new TCP connection is initiated for an HTTP request and terminated after the response is received. A new TCP connection needs to be established for another HTTP request/response. For Web Socket, the HTTP connection is upgraded using standard HTTP upgrade mechanism and the client and the server communicate over that same TCP connection for the lifecycle of Web Socket connection.
The graph given below shows the time (in milliseconds) taken to process N messages for a constant payload size.
Here is the raw data that feeds this graph −
The graph and the table given above show that the REST overhead increases with the number of messages. This is true because that many TCP connections need to be initiated and terminated and that many HTTP headers need to be sent and received.
The last column particularly shows the multiplication factor for the amount of time to fulfil a REST request.
The second graph shows the time taken to process a fixed number of messages by varying the payload size.
Here is the raw data that feeds this graph −
This graph shows that the incremental cost of processing the request/response for a REST endpoint is minimal and most of the time is spent in connection initiation/termination and honoring HTTP semantics.
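The shape of these curves can be reproduced with simple arithmetic. The header sizes below are illustrative assumptions, not measured values: roughly 500 bytes of HTTP headers per REST request/response pair, versus a one-time HTTP upgrade plus a small per-frame header for WebSocket.

```javascript
// Rough total bytes for n messages of a given payload size.
// Header sizes are illustrative assumptions, not measured values.
function restBytes(n, payload) {
   var httpHeaders = 500;                 // assumed per request/response pair
   return n * (httpHeaders + payload);
}

function wsBytes(n, payload) {
   var handshake = 500;                   // one-time HTTP upgrade, assumed
   var frameHeader = 4;                   // small WebSocket frame header
   return handshake + n * (frameHeader + payload);
}

console.log(restBytes(1000, 100)); // 600000
console.log(wsBytes(1000, 100));   // 104500
```

Even with these crude numbers, the fixed per-request HTTP cost dominates as the number of messages grows, which matches the trend described above.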
Web Socket is a low-level protocol. Everything, including a simple request/response design pattern, how to create/update/delete resources, status codes, and so on, needs to be built on top of it. All of these are well defined for HTTP.
Web Socket is a stateful protocol whereas HTTP is a stateless protocol. Web Socket connections can scale vertically on a single server whereas HTTP can scale horizontally. There are some proprietary solutions for Web Socket horizontal scaling, but they are not based on standards. HTTP comes with a lot of other goodies such as caching, routing, and multiplexing. All of these need to be defined on top of Web Socket.
The following program code describes the working of a chat application using JavaScript and Web Socket protocol.
<!DOCTYPE html>
<html lang = "en">
<head>
<meta charset = "utf-8">
<title>HTML5 Chat</title>
</head>
<body>
<section id = "wrapper">
<header>
<h1>HTML5 Chat</h1>
</header>
<style>
#chat { width: 97%; }
.message { font-weight: bold; }
.message:before { content: ' '; color: #bbb; font-size: 14px; }
#log {
overflow: auto;
max-height: 300px;
list-style: none;
padding: 0;
}
#log li {
border-top: 1px solid #ccc;
margin: 0;
padding: 10px 0;
}
body {
font: normal 16px/20px "Helvetica Neue", Helvetica, sans-serif;
background: rgb(237, 237, 236);
margin: 0;
margin-top: 40px;
padding: 0;
}
section, header {
display: block;
}
#wrapper {
width: 600px;
margin: 0 auto;
background: #fff;
border-radius: 10px;
border-top: 1px solid #fff;
padding-bottom: 16px;
}
h1 {
padding-top: 10px;
}
h2 {
font-size: 100%;
font-style: italic;
}
header, article > * {
margin: 20px;
}
#status {
padding: 5px;
color: #fff;
background: #ccc;
}
#status.fail {
background: #c00;
}
#status.success {
background: #0c0;
}
#status.offline {
background: #c00;
}
#status.online {
background: #0c0;
}
#html5badge {
margin-left: -30px;
border: 0;
}
#html5badge img {
border: 0;
}
</style>
<article>
<form onsubmit = "addMessage(); return false;">
<input type = "text" id = "chat" placeholder = "type and press
enter to chat" />
</form>
<p id = "status">Not connected</p>
<p>Users connected: <span id = "connected">0
</span></p>
<ul id = "log"></ul>
</article>
<script>
connected = document.getElementById("connected");
log = document.getElementById("log");
chat = document.getElementById("chat");
form = chat.form;
state = document.getElementById("status");
if (window.WebSocket === undefined) {
state.innerHTML = "sockets not supported";
state.className = "fail";
}else {
if (typeof String.prototype.startsWith != "function") {
String.prototype.startsWith = function (str) {
return this.indexOf(str) == 0;
};
}
window.addEventListener("load", onLoad, false);
}
function onLoad() {
var wsUri = "ws://127.0.0.1:7777";
websocket = new WebSocket(wsUri);
websocket.onopen = function(evt) { onOpen(evt) };
websocket.onclose = function(evt) { onClose(evt) };
websocket.onmessage = function(evt) { onMessage(evt) };
websocket.onerror = function(evt) { onError(evt) };
}
function onOpen(evt) {
state.className = "success";
state.innerHTML = "Connected to server";
}
function onClose(evt) {
state.className = "fail";
state.innerHTML = "Not connected";
connected.innerHTML = "0";
}
function onMessage(evt) {
// There are two types of messages:
// 1. a chat participant message itself
// 2. a message with a number of connected chat participants
var message = evt.data;
if (message.startsWith("log:")) {
message = message.slice("log:".length);
log.innerHTML = '<li class = "message">' +
message + "</li>" + log.innerHTML;
}else if (message.startsWith("connected:")) {
message = message.slice("connected:".length);
connected.innerHTML = message;
}
}
function onError(evt) {
state.className = "fail";
state.innerHTML = "Communication error";
}
function addMessage() {
var message = chat.value;
chat.value = "";
websocket.send(message);
}
</script>
</section>
</body>
</html>
The key features and the output of the chat application are discussed below −
To test, open the two windows with Web Socket support, type a message above and press return. This would enable the feature of chat application.
If the connection is not established, the output is available as shown below.
The output of a successful chat communication is shown below.
The Web has been largely built around the request/response paradigm of HTTP. A client loads up a web page and then nothing happens until the user clicks onto the next page. Around 2005, AJAX started to make the web feel more dynamic. Still, all HTTP communication is steered by the client, which requires user interaction or periodic polling to load new data from the server.
Technologies that enable the server to send data to a client the moment it knows new data is available have been around for quite some time. They go by names such as "Push" or "Comet".
With long polling, the client opens an HTTP connection to the server, which keeps it open until sending response. Whenever the server actually has new data, it sends the response. Long polling and the other techniques work quite well. However, all of these share one problem, they carry the overhead of HTTP, which does not make them well suited for low latency applications. For example, a multiplayer shooter game in the browser or any other online game with a real-time component.
The Web Socket specification defines an API establishing "socket" connections between a web browser and a server. In layman terms, there is a persistent connection between the client and the server and both parties can start sending data at any time.
Web socket connection can be simply opened using a constructor −
var connection = new WebSocket('ws://html5rocks.websocket.org/echo', ['soap', 'xmpp']);
ws is the new URL scheme for WebSocket connections. There is also wss, for secure WebSocket connections, in the same way https is used for secure HTTP connections.
Attaching some event handlers immediately to the connection allows you to know when the connection is opened, received incoming messages, or there is an error.
The second argument accepts optional subprotocols. It can be a string or an array of strings. Each string should represent a subprotocol name and server accepts only one of passed subprotocols in the array. Accepted subprotocol can be determined by accessing protocol property of WebSocket object.
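On the server side, subprotocol negotiation usually amounts to picking the first client-offered name that the server also supports. The function below is a hypothetical sketch of that selection logic, not part of any particular server library's API.

```javascript
// Pick the first client-offered subprotocol the server also supports.
// Returns null when there is no overlap; per server policy the handshake
// would then proceed without a subprotocol, or be rejected.
function selectSubprotocol(offered, supported) {
   for (var i = 0; i < offered.length; i++) {
      if (supported.indexOf(offered[i]) !== -1) {
         return offered[i];
      }
   }
   return null;
}

console.log(selectSubprotocol(["soap", "xmpp"], ["xmpp"])); // xmpp
```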
// When the connection is open, send some data to the server
connection.onopen = function () {
connection.send('Ping'); // Send the message 'Ping' to the server
};
// Log errors
connection.onerror = function (error) {
console.log('WebSocket Error ' + error);
};
// Log messages from the server
connection.onmessage = function (e) {
console.log('Server: ' + e.data);
};
As soon as we have a connection to the server (when the open event is fired) we can start sending data to the server using the send (your message) method on the connection object. It used to support only strings, but in the latest specification, it now can send binary messages too. To send binary data, Blob or ArrayBuffer object is used.
// Sending String
connection.send('your message');
// Sending canvas ImageData as ArrayBuffer
var img = canvas_context.getImageData(0, 0, 400, 320);
var binary = new Uint8Array(img.data.length);
for (var i = 0; i < img.data.length; i++) {
binary[i] = img.data[i];
}
connection.send(binary.buffer);
// Sending file as Blob
var file = document.querySelector('input[type = "file"]').files[0];
connection.send(file);
Equally, the server might send us messages at any time. Whenever this happens the onmessage callback fires. The callback receives an event object and the actual message is accessible via the data property.
WebSocket can also receive binary messages in the latest spec. Binary frames can be received in Blob or ArrayBuffer format. To specify the format of the received binary, set the binaryType property of WebSocket object to either 'blob' or 'arraybuffer'. The default format is 'blob'.
// Setting binaryType to accept received binary as either 'blob' or 'arraybuffer'
connection.binaryType = 'arraybuffer';
connection.onmessage = function(e) {
console.log(e.data.byteLength); // ArrayBuffer object if binary
};
Another newly added feature of WebSocket is extensions. Using extensions, it will be possible to send frames compressed, multiplexed, etc.
// Determining accepted extensions
console.log(connection.extensions);
Being a modern protocol, cross-origin communication is baked right into WebSocket. WebSocket enables communication between parties on any domain. The server decides whether to make its service available to all clients or only those that reside on a set of well-defined domains.
Every new technology comes with a new set of problems. In the case of WebSocket it is the compatibility with proxy servers, which mediate HTTP connections in most company networks. The WebSocket protocol uses the HTTP upgrade system (which is normally used for HTTP/SSL) to "upgrade" an HTTP connection to a WebSocket connection. Some proxy servers do not like this and will drop the connection. Thus, even if a given client uses the WebSocket protocol, it may not be possible to establish a connection. This makes the next section even more important :)
Using WebSocket creates a whole new usage pattern for server side applications. While traditional server stacks such as LAMP are designed around the HTTP request/response cycle they often do not deal well with a large number of open WebSocket connections. Keeping a large number of connections open at the same time requires an architecture that receives high concurrency at a low performance cost.
Any protocol should be designed with security in mind. WebSocket is a brand-new protocol and not all web browsers implement it correctly. For example, some of them still allow the mix of HTTP and WS, although the specification implies the opposite. In this chapter, we will discuss a few common security attacks that a user should be aware of.
Denial of Service (DoS) attacks attempt to make a machine or network resource unavailable to the users that request it. Suppose someone makes an infinite number of requests to a web server with no or tiny time intervals. The server is not able to handle each connection and will either stop responding or will keep responding too slowly. This can be termed as Denial of service attack.
Denial of service is very frustrating for the end users, who could not even load a web page.
DoS attack can even apply on peer-to-peer communications, forcing the clients of a P2P network to concurrently connect to the victim web server.
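A common server-side mitigation is to throttle how fast each client may send messages. The fixed-window counter below is a minimal illustrative sketch, not a complete DoS defense; the function names and limits are assumptions.

```javascript
// Minimal fixed-window limiter: each client may send `capacity`
// messages per `windowMs` window. Purely illustrative.
function makeLimiter(capacity, windowMs) {
   var hits = {}; // clientId -> { count, windowStart }
   return function allow(clientId, now) {
      var h = hits[clientId];
      if (!h || now - h.windowStart >= windowMs) {
         hits[clientId] = { count: 1, windowStart: now };
         return true;
      }
      h.count += 1;
      return h.count <= capacity;
   };
}

var allow = makeLimiter(2, 1000);
console.log(allow("c1", 0));    // true
console.log(allow("c1", 10));   // true
console.log(allow("c1", 20));   // false (third message in the window)
console.log(allow("c1", 2000)); // true (new window)
```

A server's OnMessage handler would consult such a limiter before broadcasting and drop or disconnect clients that exceed the budget.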
Let us understand this with the help of an example.
Suppose a person A is chatting with his friend B via an IM client. A third person wants to view the messages they exchange, so he makes independent connections with both parties. He also sends messages to person A and his friend B, acting as an invisible intermediary in their communication. This is known as a man-in-the-middle attack.
The man-in-the-middle kind of attack is easier for unencrypted connections, as the intruder can read the packages directly. When the connection is encrypted, the information has to be decrypted by the attacker, which might be way too difficult.
From a technical aspect, the attacker intercepts a public-key message exchange and sends the message while replacing the requested key with his own. Obviously, a solid strategy to make the attacker's job difficult is to use SSH with WebSockets.
Mostly when exchanging critical data, prefer the WSS secure connection instead of the unencrypted WS.
Cross-site scripting (XSS) is a vulnerability that enables attackers to inject client-side scripts into web pages or applications. An attacker can send HTML or Javascript code using your application hubs and let this code be executed on the clients' machines.
By default, the WebSocket protocol is designed to be secure. In the real world, the user might encounter various issues that might occur due to poor browser implementation. As time goes by, browser vendors fix any issues immediately.
An extra layer of security is added when secure WebSocket connection over SSH (or TLS) is used.
In the WebSocket world, the main concern is about the performance of a secure connection. Although there is still an extra TLS layer on top, the protocol itself contains optimizations for this kind of use, furthermore, WSS works more sleekly through proxies.
Every message transmitted between a WebSocket server and a WebSocket client contains a specific key, named masking key, which allows any WebSocket-compliant intermediaries to unmask and inspect the message. If the intermediary is not WebSocket-compliant, then the message cannot be affected. The browser that implements the WebSocket protocol handles masking.
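Masking itself is a simple XOR of each payload byte with the 4-byte masking key, cycled over the payload; applying the same operation twice restores the original bytes. A minimal sketch (key and data values are arbitrary examples):

```javascript
// XOR each payload byte with the 4-byte masking key (cycled).
// Applying mask() twice with the same key is the identity.
function mask(payload, key) {
   var out = new Uint8Array(payload.length);
   for (var i = 0; i < payload.length; i++) {
      out[i] = payload[i] ^ key[i % 4];
   }
   return out;
}

var key = new Uint8Array([0x12, 0x34, 0x56, 0x78]);
var data = new Uint8Array([1, 2, 3, 4, 5]);
var masked = mask(data, key);
var unmasked = mask(masked, key);
console.log(Array.from(unmasked)); // [ 1, 2, 3, 4, 5 ]
```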
Finally, useful tools can be presented to investigate the flow of information between your WebSocket clients and server, analyze the exchanged data, and identify possible risks.
Chrome, Firefox, and Opera are great browsers in terms of developer support. Their built-in tools help us determine almost any aspect of client-side interactions and resources. It plays a great role for security purposes.
WebSocket, as the name implies, is something that uses the web. The web is usually interwoven with browser pages because that are the primary means of displaying data online. However, non-browser programs too, use online data transmission.
The release of the iPhone (initially) and the iPad (later) introduced a brand new world of web interconnectivity without necessarily using a web browser. Instead, the new smartphone and tablet devices utilized the power of native apps to offer a unique user experience.
Currently, there are one billion active smartphones out there. That is, millions of potential customers for your applications. These people use their mobile phone to accomplish daily tasks, surf the internet, communicate, or shop.
Smartphones have become synonymous to apps. Nowadays, there is an app for any usage, a user can think of. Most of the apps connect to the internet in order to retrieve data, make transactions, gather news, and so on.
It would be great to use the existing WebSocket knowledge and develop a WebSocket client running natively on a smartphone or tablet device.
Well, this is a common conflict and as usual, the answer depends on the needs of the target audience. If a user is familiar with the modern design trends, designing a website that is responsive and mobile friendly is now a must. However, the end user must be sure that the content, which is what really matters, is equally accessible via a smartphone, as it is via a classic desktop browser.
Definitely, a WebSocket web app will run on any HTML5-compliant browser, including mobile browsers such as Safari for iOS and Chrome for mobile. Therefore, there are no worries about compatibility issues with smartphones.
In order to develop a smartphone app, installation of development tools and SDKs are required.
WebSockets can act as a universal hub for transmitting messages between connected mobile and tablet clients. We can implement a native iOS application, which communicates with a WebSocket server just like the HTML5 JavaScript client.
[
{
"code": null,
"e": 2445,
"s": 2119,
"text": "In literal terms, handshaking can be defined as gripping and shaking of right hands by two individuals, as to symbolize greeting, congratulations, agreement or farewell. In computer science, handshaking is a process that ensures the server is in sync with its clients. Handshaking is the basic concept of Web Socket protocol."
},
{
"code": null,
"e": 2517,
"s": 2445,
"text": "The following diagram shows the server handshake with various clients −"
},
{
"code": null,
"e": 2681,
"s": 2517,
"text": "Web sockets are defined as a two-way communication between the servers and the clients, which mean both the parties communicate and exchange data at the same time."
},
{
"code": null,
"e": 2821,
"s": 2681,
"text": "The key points of Web Sockets are true concurrency and optimization of performance, resulting in more responsive and rich web applications."
},
{
"code": null,
"e": 3080,
"s": 2821,
"text": "This protocol defines a full duplex communication from the ground up. Web sockets take a step forward in bringing desktop rich functionalities to the web browsers. It represents an evolution, which was awaited for a long time in client/server web technology."
},
{
"code": null,
"e": 3130,
"s": 3080,
"text": "The main features of web sockets are as follows −"
},
{
"code": null,
"e": 3285,
"s": 3130,
"text": "Web socket protocol is being standardized, which means real time communication between web servers and clients is possible with the help of this protocol."
},
{
"code": null,
"e": 3440,
"s": 3285,
"text": "Web socket protocol is being standardized, which means real time communication between web servers and clients is possible with the help of this protocol."
},
{
"code": null,
"e": 3557,
"s": 3440,
"text": "Web sockets are transforming to cross platform standard for real time communication between a client and the server."
},
{
"code": null,
"e": 3674,
"s": 3557,
"text": "Web sockets are transforming to cross platform standard for real time communication between a client and the server."
},
{
"code": null,
"e": 3814,
"s": 3674,
"text": "This standard enables new kind of the applications. Businesses for real time web application can speed up with the help of this technology."
},
{
"code": null,
"e": 3954,
"s": 3814,
"text": "This standard enables new kind of the applications. Businesses for real time web application can speed up with the help of this technology."
},
{
"code": null,
"e": 4073,
"s": 3954,
"text": "The biggest advantage of Web Socket is it provides a two-way communication (full duplex) over a single TCP connection."
},
{
"code": null,
"e": 4192,
"s": 4073,
"text": "The biggest advantage of Web Socket is it provides a two-way communication (full duplex) over a single TCP connection."
},
{
"code": null,
"e": 4320,
"s": 4192,
"text": "HTTP has its own set of schemas such as http and https. Web socket protocol also has similar schema defined in its URL pattern."
},
{
"code": null,
"e": 4376,
"s": 4320,
"text": "The following image shows the Web Socket URL in tokens."
},
{
"code": null,
"e": 4470,
"s": 4376,
"text": "The latest specification of Web Socket protocol is defined as RFC 6455 – a proposed standard."
},
{
"code": null,
"e": 4587,
"s": 4470,
"text": "RFC 6455 is supported by various browsers like Internet Explorer, Mozilla Firefox, Google Chrome, Safari, and Opera."
},
{
"code": null,
"e": 4785,
"s": 4587,
"text": "Before diving to the need of Web sockets, it is necessary to have a look at the existing techniques, which are used for duplex communication between the server and the client. They are as follows −"
},
{
"code": null,
"e": 4793,
"s": 4785,
"text": "Polling"
},
{
"code": null,
"e": 4806,
"s": 4793,
"text": "Long Polling"
},
{
"code": null,
"e": 4816,
"s": 4806,
"text": "Streaming"
},
{
"code": null,
"e": 4834,
"s": 4816,
"text": "Postback and AJAX"
},
{
"code": null,
"e": 4840,
"s": 4834,
"text": "HTML5"
},
{
"code": null,
"e": 5180,
"s": 4840,
"text": "Polling can be defined as a method, which performs periodic requests regardless of the data that exists in the transmission. The periodic requests are sent in a synchronous way. The client makes a periodic request in a specified time interval to the Server. The response of the server includes available data or some warning message in it."
},
{
"code": null,
"e": 5461,
"s": 5180,
"text": "Long polling, as the name suggests, includes similar technique like polling. The client and the server keep the connection active until some data is fetched or timeout occurs. If the connection is lost due to some reasons, the client can start over and perform sequential request."
},
{
"code": null,
"e": 5584,
"s": 5461,
"text": "Long polling is nothing but performance improvement over polling process, but constant requests may slow down the process."
},
{
"code": null,
"e": 5957,
"s": 5584,
"text": "It is considered as the best option for real-time data transmission. The server keeps the connection open and active with the client until and unless the required data is being fetched. In this case, the connection is said to be open indefinitely. Streaming includes HTTP headers which increases the file size, increasing delay. This can be considered as a major drawback."
},
{
"code": null,
"e": 6226,
"s": 5957,
"text": "AJAX is based on Javascript's XmlHttpRequest Object. It is an abbreviated form of Asynchronous Javascript and XML. XmlHttpRequest Object allows execution of the Javascript without reloading the complete web page. AJAX sends and receives only a portion of the web page."
},
{
"code": null,
"e": 6299,
"s": 6226,
"text": "The code snippet of AJAX call with XmlHttpRequest Object is as follows −"
},
{
"code": null,
"e": 6459,
"s": 6299,
"text": "var xhttp;\n\nif (window.XMLHttpRequest) {\n xhttp = new XMLHttpRequest();\n} else {\n // code for IE6, IE5\n xhttp = new ActiveXObject(\"Microsoft.XMLHTTP\");\n}"
},
{
"code": null,
"e": 6524,
"s": 6459,
"text": "The major drawbacks of AJAX in comparison with Web Sockets are −"
},
{
"code": null,
"e": 6579,
"s": 6524,
"text": "They send HTTP headers, which makes total size larger."
},
{
"code": null,
"e": 6613,
"s": 6579,
"text": "The communication is half-duplex."
},
{
"code": null,
"e": 6653,
"s": 6613,
"text": "The web server consumes more resources."
},
{
"code": null,
"e": 6797,
"s": 6653,
"text": "HTML5 is a robust framework for developing and designing web applications. The main pillars include Mark-up, CSS3 and Javascript APIs together."
},
{
"code": null,
"e": 6844,
"s": 6797,
"text": "The following diagram shows HTML5 components −"
},
{
"code": null,
"e": 6925,
"s": 6844,
"text": "The code snippet given below describes the declaration of HTML5 and its doctype."
},
{
"code": null,
"e": 6941,
"s": 6925,
"text": "<!DOCTYPE html>"
},
{
"code": null,
"e": 7226,
"s": 6941,
"text": "Internet was conceived to be a collection of Hypertext Mark-up Language (HTML) pages linking one another to form a conceptual web of information. During the course of time, static resources increased in number and richer items, such as images and began to be a part of the web fabric."
},
{
"code": null,
"e": 7344,
"s": 7226,
"text": "Server technologies advanced which allowed dynamic server pages - pages whose content was generated based on a query."
},
{
"code": null,
"e": 7630,
"s": 7344,
"text": "Soon, the requirement to have more dynamic web pages lead to the availability of Dynamic Hypertext Mark-up Language (DHTML). All thanks to JavaScript. Over the following years, we saw cross frame communication in an attempt to avoid page reloads followed by HTTP Polling within frames."
},
{
"code": null,
"e": 7788,
"s": 7630,
"text": "However, none of these solutions offered a truly standardized cross browser solution to real-time bi-directional communication between a server and a client."
},
{
"code": null,
"e": 7939,
"s": 7788,
"text": "This gave rise to the need of Web Sockets Protocol. It gave rise to full-duplex communication bringing desktop-rich functionality to all web browsers."
},
{
"code": null,
"e": 8116,
"s": 7939,
"text": "Web Socket represents a major upgrade in the history of web communications. Before its existence, all communication between the web clients and the servers relied only on HTTP."
},
{
"code": null,
"e": 8290,
"s": 8116,
"text": "Web Socket helps in dynamic flow of the connections that are persistent full duplex. Full duplex refers to the communication from both the ends with considerable fast speed."
},
{
"code": null,
"e": 8402,
"s": 8290,
"text": "It is termed as a game changer because of its efficiency of overcoming all the drawbacks of existing protocols."
},
{
"code": null,
"e": 8459,
"s": 8402,
"text": "Importance of Web Socket for developers and architects −"
},
{
"code": null,
"e": 8624,
"s": 8459,
"text": "Web Socket is an independent TCP-based protocol, but it is designed to support any other protocol that would traditionally run only on top of a pure TCP connection."
},
{
"code": null,
"e": 8988,
"s": 8789,
"text": "Web Socket is a transport layer on top of which any other protocol can run. The Web Socket API supports the ability to define sub-protocols: protocol libraries that can interpret specific protocols."
},
{
"code": null,
"e": 9332,
"s": 9187,
"text": "Examples of such protocols include XMPP, STOMP, and AMQP. The developers no longer have to think in terms of the HTTP request-response paradigm."
},
{
"code": null,
"e": 9642,
"s": 9477,
"text": "The only requirement on the browser-side is to run a JavaScript library that can interpret the Web Socket handshake, establish and maintain a Web Socket connection."
},
{
"code": null,
"e": 9945,
"s": 9807,
"text": "On the server side, the industry standard is to use existing protocol libraries that run on top of TCP and leverage a Web Socket Gateway."
},
{
"code": null,
"e": 10152,
"s": 10083,
"text": "The following diagram describes the functionalities of Web Sockets −"
},
{
"code": null,
"e": 10277,
"s": 10152,
"text": "Web Socket connections are initiated via HTTP; HTTP servers typically interpret Web Socket handshakes as an Upgrade request."
},
{
"code": null,
"e": 10546,
"s": 10277,
"text": "Web Sockets can both be a complementary add-on to an existing HTTP environment and can provide the required infrastructure to add web functionality. It relies on more advanced, full duplex protocols that allow data to flow in both directions between client and server."
},
{
"code": null,
"e": 10670,
"s": 10546,
"text": "Web Sockets provide a connection between the web server and a client such that both the parties can start sending the data."
},
{
"code": null,
"e": 10743,
"s": 10670,
"text": "The steps for establishing the connection of Web Socket are as follows −"
},
{
"code": null,
"e": 10828,
"s": 10743,
"text": "The client establishes a connection through a process known as Web Socket handshake."
},
{
"code": null,
"e": 10994,
"s": 10913,
"text": "The process begins with the client sending a regular HTTP request to the server."
},
{
"code": null,
"e": 11189,
"s": 11075,
"text": "An Upgrade header is requested. In this request, it informs the server that request is for Web Socket connection."
},
{
"code": null,
"e": 11427,
"s": 11303,
"text": "Web Socket URLs use the ws scheme. They are also used for secure Web Socket connections, which are the equivalent to HTTPs."
},
{
"code": null,
"e": 11611,
"s": 11551,
"text": "A simple example of initial request headers is as follows −"
},
{
"code": null,
"e": 11746,
"s": 11611,
"text": "GET ws://websocket.example.com/ HTTP/1.1\nOrigin: http://example.com\nConnection: Upgrade\nHost: websocket.example.com\nUpgrade: websocket"
},
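{
   "code": null,
   "e": null,
   "s": null,
   "text": "If the server understands the Web Socket protocol, it agrees to the upgrade and answers with a 101 status code. A sketch of such a handshake response is shown below (the Sec-WebSocket-Accept value is illustrative; in practice it is computed from the key sent by the client) −"
},
{
   "code": null,
   "e": null,
   "s": null,
   "text": "HTTP/1.1 101 Switching Protocols\nUpgrade: websocket\nConnection: Upgrade\nSec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
},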
{
"code": null,
"e": 11875,
"s": 11746,
"text": "Web Sockets occupy a key role not only in the web but also in the mobile industry. The importance of Web Sockets is given below."
},
{
"code": null,
"e": 12120,
"s": 11875,
"text": "Web Sockets as the name indicates, are related to the web. Web consists of a bunch of techniques for some browsers; it is a broad communication platform for vast number of devices, including desktop computers, laptops, tablets and smart phones."
},
{
"code": null,
"e": 12445,
"s": 12365,
"text": "HTML5 app that utilizes Web Sockets will work on any HTML5 enabled web browser."
},
{
"code": null,
"e": 12669,
"s": 12525,
"text": "Web socket is supported in the mainstream operating systems. All key players in the mobile industry provide Web Socket APIs in own native apps."
},
{
"code": null,
"e": 13044,
"s": 12813,
"text": "Web sockets are said to be a full duplex communication. The approach of Web Sockets works well for certain categories of web application such as chat room, where the updates from client as well as server are shared simultaneously."
},
{
"code": null,
"e": 13496,
"s": 13275,
"text": "Web Sockets, a part of the HTML5 specification, allow full duplex communication between web pages and a remote host. The protocol is designed to achieve the following benefits, which can be considered as the key points −"
},
{
"code": null,
"e": 13607,
"s": 13496,
"text": "Reduce unnecessary network traffic and latency using full duplex through a single connection (instead of two)."
},
{
"code": null,
"e": 13833,
"s": 13718,
"text": "Streaming through proxies and firewalls, with the support of upstream and downstream communication simultaneously."
},
{
"code": null,
"e": 14172,
"s": 13948,
"text": "It is necessary to initialize the connection to the server from client for communication between them. For initializing the connection, creation of Javascript object with the URL with the remote or local server is required."
},
{
"code": null,
"e": 14230,
"s": 14172,
"text": "var socket = new WebSocket(“ ws://echo.websocket.org ”);\n"
},
{
"code": null,
"e": 14424,
"s": 14230,
"text": "The URL mentioned above is a public address that can be used for testing and experiments. The websocket.org server is always up and when it receives the message and sends it back to the client."
},
{
"code": null,
"e": 14500,
"s": 14424,
"text": "This is the most important step to ensure that application works correctly."
},
{
"code": null,
"e": 14544,
"s": 14500,
"text": "There are four main Web Socket API events −"
},
{
"code": null,
"e": 14549,
"s": 14544,
"text": "Open"
},
{
"code": null,
"e": 14557,
"s": 14549,
"text": "Message"
},
{
"code": null,
"e": 14563,
"s": 14557,
"text": "Close"
},
{
"code": null,
"e": 14569,
"s": 14563,
"text": "Error"
},
{
"code": null,
"e": 14767,
"s": 14569,
"text": "Each of the events are handled by implementing the functions like onopen, onmessage, onclose and onerror functions respectively. It can also be implemented with the help of addEventListener method."
},
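{
   "code": null,
   "e": null,
   "s": null,
   "text": "As a sketch, the same four events can be attached with addEventListener instead of the on* properties (the socket variable is assumed to be a WebSocket instance) −"
},
{
   "code": null,
   "e": null,
   "s": null,
   "text": "var socket = new WebSocket(\"ws://echo.websocket.org\");\n\nsocket.addEventListener(\"open\", function(event) {\n   console.log(\"Connection established\");\n});\n\nsocket.addEventListener(\"message\", function(event) {\n   console.log(\"Received: \" + event.data);\n});\n\nsocket.addEventListener(\"close\", function(event) {\n   console.log(\"Connection closed\");\n});\n\nsocket.addEventListener(\"error\", function(event) {\n   console.log(\"An error occurred\");\n});"
},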
{
"code": null,
"e": 14841,
"s": 14767,
"text": "The brief overview of the events and functions are described as follows −"
},
{
"code": null,
"e": 15113,
"s": 14841,
"text": "Once the connection has been established between the client and the server, the open event is fired from Web Socket instance. It is called as the initial handshake between client and server. The event, which is raised once the connection is established, is called onopen."
},
{
"code": null,
"e": 15334,
"s": 15113,
"text": "Message event happens usually when the server sends some data. Messages sent by the server to the client can include plain text messages, binary data or images. Whenever the data is sent, the onmessage function is fired."
},
{
"code": null,
"e": 15687,
"s": 15334,
"text": "Close event marks the end of the communication between server and the client. Closing the connection is possible with the help of onclose event. After marking the end of communication with the help of onclose event, no messages can be further transferred between the server and the client. Closing the event can happen due to poor connectivity as well."
},
{
"code": null,
"e": 15942,
"s": 15687,
"text": "Error marks for some mistake, which happens during the communication. It is marked with the help of onerror event. Onerror is always followed by termination of connection. The detailed description of each and every event is discussed in further chapters."
},
{
"code": null,
"e": 16132,
"s": 15942,
"text": "Events are usually triggered when something happens. On the other hand, actions are taken when a user wants something to happen. Actions are made by explicit calls using functions by users."
},
{
"code": null,
"e": 16192,
"s": 16132,
"text": "The Web Socket protocol supports two main actions, namely −"
},
{
"code": null,
"e": 16200,
"s": 16192,
"text": "send( )"
},
{
"code": null,
"e": 16209,
"s": 16200,
"text": "close( )"
},
{
"code": null,
"e": 16365,
"s": 16209,
"text": "This action is usually preferred for some communication with the server, which includes sending messages, which includes text files, binary data or images."
},
{
"code": null,
"e": 16443,
"s": 16365,
"text": "A chat message, which is sent with the help of send() action, is as follows −"
},
{
"code": null,
"e": 16726,
"s": 16443,
"text": "// get text view and button for submitting the message\nvar textsend = document.getElementById(“text-view”);\nvar submitMsg = document.getElementById(“tsend-button”);\n\n//Handling the click event\nsubmitMsg.onclick = function ( ) {\n // Send the data\n socket.send( textsend.value);\n}"
},
{
"code": null,
"e": 16798,
"s": 16726,
"text": "Note − Sending the messages is only possible if the connection is open."
},
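{
   "code": null,
   "e": null,
   "s": null,
   "text": "In practice, structured data is usually serialized before sending. A minimal sketch using JSON.stringify (the payload fields are illustrative) is given below −"
},
{
   "code": null,
   "e": null,
   "s": null,
   "text": "var payload = {\n   name: \"James Devilson\",\n   message: textsend.value\n};\n\n// Send only while the connection is open.\nif (socket.readyState === WebSocket.OPEN) {\n   socket.send(JSON.stringify(payload));\n}"
},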
{
"code": null,
"e": 16951,
"s": 16798,
"text": "This method stands for goodbye handshake. It terminates the connection completely and no data can be transferred until the connection is re-established."
},
{
"code": null,
"e": 17236,
"s": 16951,
"text": "var textsend = document.getElementById(“text-view”);\nvar buttonStop = document.getElementById(“stop-button”);\n\n//Handling the click event\nbuttonStop.onclick = function ( ) {\n // Close the connection if open\n if (socket.readyState === WebSocket.OPEN){\n socket.close( );\n }\n}"
},
{
"code": null,
"e": 17335,
"s": 17236,
"text": "It is also possible to close the connection deliberately with the help of following code snippet −"
},
{
"code": null,
"e": 17379,
"s": 17335,
"text": "socket.close(1000,”Deliberate Connection”);"
},
{
"code": null,
"e": 17568,
"s": 17379,
"text": "Once a connection has been established between the client and the server, the open event is fired from Web Socket instance. It is called as the initial handshake between client and server."
},
{
"code": null,
"e": 17792,
"s": 17568,
"text": "The event, which is raised once the connection is established, is called the onopen. Creating Web Socket connections is really simple. All you have to do is call the WebSocket constructor and pass in the URL of your server."
},
{
"code": null,
"e": 17855,
"s": 17792,
"text": "The following code is used to create a Web Socket connection −"
},
{
"code": null,
"e": 17937,
"s": 17855,
"text": "// Create a new WebSocket.\nvar socket = new WebSocket('ws://echo.websocket.org');"
},
{
"code": null,
"e": 18037,
"s": 17937,
"text": "Once the connection has been established, the open event will be fired on your Web Socket instance."
},
{
"code": null,
"e": 18194,
"s": 18037,
"text": "onopen refers to the initial handshake between client and the server which has lead to the first deal and the web application is ready to transmit the data."
},
{
"code": null,
"e": 18279,
"s": 18194,
"text": "The following code snippet describes opening the connection of Web Socket protocol −"
},
{
"code": null,
"e": 18538,
"s": 18279,
"text": "socket.onopen = function(event) {\n console.log(“Connection established”);\n // Display user friendly messages for the successful establishment of connection\n var.label = document.getElementById(“status”);\n label.innerHTML = ”Connection established”;\n}"
},
{
"code": null,
"e": 18742,
"s": 18538,
"text": "It is a good practice to provide appropriate feedback to the users waiting for the Web Socket connection to be established. However, it is always noted that Web Socket connections are comparatively fast."
},
{
"code": null,
"e": 18861,
"s": 18742,
"text": "The demo of the Web Socket connection established is documented in the given URL − https://www.websocket.org/echo.html"
},
{
"code": null,
"e": 18946,
"s": 18861,
"text": "A snapshot of the connection establishment and response to the user is shown below −"
},
{
"code": null,
"e": 19067,
"s": 18946,
"text": "Establishing an open state allows full duplex communication and transfer of messages until the connection is terminated."
},
{
"code": null,
"e": 19102,
"s": 19067,
"text": "Building up the client-HTML5 file."
},
{
"code": null,
"e": 19810,
"s": 19102,
"text": "<!DOCTYPE html>\n<html>\n <meta charset = \"utf-8\" />\n <title>WebSocket Test</title>\n\n <script language = \"javascript\" type = \"text/javascript\">\n var wsUri = \"ws://echo.websocket.org/\";\n var output;\n\t\n function init() {\n output = document.getElementById(\"output\");\n testWebSocket();\n }\n\t\n function testWebSocket() {\n websocket = new WebSocket(wsUri);\n\t\t\t\n websocket.onopen = function(evt) {\n onOpen(evt)\n };\n }\n\t\n function onOpen(evt) {\n writeToScreen(\"CONNECTED\");\n }\n\t\n window.addEventListener(\"load\", init, false);\n \n </script>\n\n <h2>WebSocket Test</h2>\n <div id = \"output\"></div>\n\n</html>"
},
{
"code": null,
"e": 19842,
"s": 19810,
"text": "The output will be as follows −"
},
{
"code": null,
"e": 19941,
"s": 19842,
"text": "The above HTML5 and JavaScript file shows the implementation of two events of Web Socket, namely −"
},
{
"code": null,
"e": 20027,
"s": 19941,
"text": "onLoad which helps in creation of JavaScript object and initialization of connection."
},
{
"code": null,
"e": 20186,
"s": 20113,
"text": "onOpen establishes connection with the server and also sends the status."
},
{
"code": null,
"e": 20565,
"s": 20259,
"text": "Once a connection has been established between the client and the server, an open event is fired from the Web Socket instance. Error are generated for mistakes, which take place during the communication. It is marked with the help of onerror event. Onerror is always followed by termination of connection."
},
{
"code": null,
"e": 20731,
"s": 20565,
"text": "The onerror event is fired when something wrong occurs between the communications. The event onerror is followed by a connection termination, which is a close event."
},
{
"code": null,
"e": 20830,
"s": 20731,
"text": "A good practice is to always inform the user about the unexpected error and try to reconnect them."
},
{
"code": null,
"e": 21039,
"s": 20830,
"text": "socket.onclose = function(event) {\n console.log(\"Error occurred.\");\n\t\n // Inform the user about the error.\n var label = document.getElementById(\"status-label\");\n label.innerHTML = \"Error: \" + event;\n}"
},
{
"code": null,
"e": 21132,
"s": 21039,
"text": "When it comes to error handling, you have to consider both internal and external parameters."
},
{
"code": null,
"e": 21252,
"s": 21132,
"text": "Internal parameters include errors that can be generated because of the bugs in your code, or unexpected user behavior."
},
{
"code": null,
"e": 21549,
"s": 21372,
"text": "External errors have nothing to do with the application; rather, they are related to parameters, which cannot be controlled. The most important one is the network connectivity."
},
{
"code": null,
"e": 21819,
"s": 21726,
"text": "Any interactive bidirectional web application requires, well, an active Internet connection."
},
{
"code": null,
"e": 22155,
"s": 21912,
"text": "Imagine that your users are enjoying your web app, when suddenly the network connection becomes unresponsive in the middle of their task. In modern native desktop and mobile applications, it is a common task to check for network availability."
},
{
"code": null,
"e": 22460,
"s": 22155,
"text": "The most common way of doing so is simply making an HTTP request to a website that is supposed to be up (for example, http://www.google.com). If the request succeeds, the desktop or mobile device knows there is active connectivity. Similarly, HTML has XMLHttpRequest for determining network availability."
},
{
"code": null,
"e": 22615,
"s": 22460,
"text": "HTML5, though, made it even easier and introduced a way to check whether the browser can accept web responses. This is achieved via the navigator object −"
},
{
"code": null,
"e": 22706,
"s": 22615,
"text": "if (navigator.onLine) {\n alert(\"You are Online\");\n}else {\n alert(\"You are Offline\");\n}"
},
{
"code": null,
"e": 22829,
"s": 22706,
"text": "Offline mode means that either the device is not connected or the user has selected the offline mode from browser toolbar."
},
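{
   "code": null,
   "e": null,
   "s": null,
   "text": "Browsers also fire online and offline events, which can be used to notify the user and attempt a reconnection automatically. A minimal sketch is shown below (the reconnect logic is illustrative) −"
},
{
   "code": null,
   "e": null,
   "s": null,
   "text": "window.addEventListener(\"online\", function() {\n   // The network is back; try to re-establish the connection.\n   socket = new WebSocket(\"ws://echo.websocket.org\");\n});\n\nwindow.addEventListener(\"offline\", function() {\n   alert(\"You are Offline\");\n});"
},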
{
"code": null,
"e": 22953,
"s": 22829,
"text": "Here is how to inform the user that the network is not available and try to reconnect when a WebSocket close event occurs −"
},
{
"code": null,
"e": 23308,
"s": 22953,
"text": "socket.onclose = function (event) {\n // Connection closed.\n // Firstly, check the reason.\n\t\n if (event.code != 1000) {\n // Error code 1000 means that the connection was closed normally.\n // Try to reconnect.\n\t\t\n if (!navigator.onLine) {\n alert(\"You are offline. Please connect to the Internet and try again.\");\n }\n }\n}"
},
{
"code": null,
"e": 23386,
"s": 23308,
"text": "The following program explains how to show error messages using Web Sockets −"
},
{
"code": null,
"e": 24825,
"s": 23386,
"text": "<!DOCTYPE html>\n<html>\n <meta charset = \"utf-8\" />\n <title>WebSocket Test</title>\n\n <script language = \"javascript\" type = \"text/javascript\">\n var wsUri = \"ws://echo.websocket.org/\";\n var output;\n\t\t\n function init() {\n output = document.getElementById(\"output\");\n testWebSocket();\n }\n\t\t\n function testWebSocket() {\n websocket = new WebSocket(wsUri);\n\t\t\t\n websocket.onopen = function(evt) {\n onOpen(evt)\n };\n\t\t\t\n websocket.onclose = function(evt) {\n onClose(evt)\n };\n\t\t\t\n websocket.onerror = function(evt) {\n onError(evt)\n };\n }\n\t\t\n function onOpen(evt) {\n writeToScreen(\"CONNECTED\");\n doSend(\"WebSocket rocks\");\n }\n\t\t\n function onClose(evt) {\n writeToScreen(\"DISCONNECTED\");\n }\n\t\t\n function onError(evt) {\n writeToScreen('<span style = \"color: red;\">ERROR:</span> ' + evt.data);\n } \n\t\t\n function doSend(message) {\n writeToScreen(\"SENT: \" + message); websocket.send(message);\n }\n\t\t\n function writeToScreen(message) {\n var pre = document.createElement(\"p\"); \n pre.style.wordWrap = \"break-word\"; \n pre.innerHTML = message; output.appendChild(pre);\n }\n\t\t\n window.addEventListener(\"load\", init, false);\n </script>\n\t\n <h2>WebSocket Test</h2>\n <div id = \"output\"></div>\n\t\n</html>"
},
{
"code": null,
"e": 24852,
"s": 24825,
"text": "The output is as follows −"
},
{
"code": null,
"e": 25078,
"s": 24852,
"text": "The Message event takes place usually when the server sends some data. Messages sent by the server to the client can include plain text messages, binary data, or images. Whenever data is sent, the onmessage function is fired."
},
{
"code": null,
"e": 25191,
"s": 25078,
"text": "This event acts as a client's ear to the server. Whenever the server sends data, the onmessage event gets fired."
},
{
"code": null,
"e": 25275,
"s": 25191,
"text": "The following code snippet describes opening the connection of Web Socket protocol."
},
{
"code": null,
"e": 25377,
"s": 25275,
"text": "connection.onmessage = function(e){\n var server_message = e.data;\n console.log(server_message);\n}"
},
{
"code": null,
"e": 25641,
"s": 25377,
"text": "It is also necessary to take into account what kinds of data can be transferred with the help of Web Sockets. Web socket protocol supports text and binary data. In terms of Javascript, text refers to as a string, while binary data is represented like ArrayBuffer."
},
{
"code": null,
"e": 25758,
"s": 25641,
"text": "Web sockets support only one binary format at a time. The declaration of binary data is done explicitly as follows −"
},
{
"code": null,
"e": 25821,
"s": 25758,
"text": "socket.binaryType = ”arrayBuffer”;\nsocket.binaryType = ”blob”;"
},
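{
   "code": null,
   "e": null,
   "s": null,
   "text": "Binary data can be sent as well, for example as an ArrayBuffer. A minimal sketch is shown below −"
},
{
   "code": null,
   "e": null,
   "s": null,
   "text": "// Receive binary messages as ArrayBuffers.\nsocket.binaryType = \"arraybuffer\";\n\n// Send eight bytes of binary data.\nvar buffer = new ArrayBuffer(8);\nvar view = new Uint8Array(buffer);\nview[0] = 255;\nsocket.send(buffer);"
},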
{
"code": null,
"e": 26009,
"s": 25821,
"text": "Strings are considered to be useful, dealing with human readable formats such as XML and JSON. Whenever onmessage event is raised, client needs to check the data type and act accordingly."
},
{
"code": null,
"e": 26087,
"s": 26009,
"text": "The code snippet for determining the data type as String is mentioned below −"
},
{
"code": null,
"e": 26213,
"s": 26087,
"text": "socket.onmessage = function(event){\n\n if(typeOf event.data === String ) {\n console.log(“Received data string”);\n }\n}"
},
{
"code": null,
"e": 26351,
"s": 26213,
"text": "It is a lightweight format for transferring human-readable data between the computers. The structure of JSON consists of key-value pairs."
},
{
"code": null,
"e": 26409,
"s": 26351,
"text": "{\n name: “James Devilson”,\n message: “Hello World!”\n}"
},
{
"code": null,
"e": 26491,
"s": 26409,
"text": "The following code shows how to handle a JSON object and extract its properties −"
},
{
"code": null,
"e": 26773,
"s": 26491,
"text": "socket.onmessage = function(event) {\n if(typeOf event.data === String ){\n //create a JSON object\n var jsonObject = JSON.parse(event.data);\n var username = jsonObject.name;\n var message = jsonObject.message;\n\t\t\n console.log(“Received data string”);\n }\n}"
},
{
"code": null,
"e": 26927,
"s": 26773,
"text": "Parsing in XML is not difficult, though the techniques differ from browser to browser. The best method is to parse using third party library like jQuery."
},
{
"code": null,
"e": 27023,
"s": 26927,
"text": "In both XML and JSON, the server responds as a string, which is being parsed at the client end."
},
{
"code": null,
"e": 27197,
"s": 27023,
"text": "It consists of a structured binary data. The enclosed bits are given in an order so that the position can be easily tracked. ArrayBuffers are handy to store the image files."
},
{
"code": null,
"e": 27308,
"s": 27197,
"text": "Receiving data using ArrayBuffers is fairly simple. The operator instanceOf is used instead of equal operator."
},
{
"code": null,
"e": 27383,
"s": 27308,
"text": "The following code shows how to handle and receive an ArrayBuffer object −"
},
{
"code": null,
"e": 27544,
"s": 27383,
"text": "socket.onmessage = function(event) {\n if(event.data instanceof ArrayBuffer ){\n var buffer = event.data;\n console.log(“Received arraybuffer”);\n }\n}"
},
{
"code": null,
"e": 27629,
"s": 27544,
"text": "The following program code shows how to send and receive messages using Web Sockets."
},
{
"code": null,
"e": 29150,
"s": 27629,
"text": "<!DOCTYPE html>\n<html>\n <meta charset = \"utf-8\" />\n <title>WebSocket Test</title>\n\n <script language = \"javascript\" type = \"text/javascript\">\n var wsUri = \"ws://echo.websocket.org/\";\n var output;\n\t\t\n function init() {\n output = document.getElementById(\"output\");\n testWebSocket();\n }\n\t\t\n function testWebSocket() {\n websocket = new WebSocket(wsUri);\n\t\t\t\n websocket.onopen = function(evt) {\n onOpen(evt)\n };\n\t\t\n websocket.onmessage = function(evt) {\n onMessage(evt)\n };\n\t\t\n websocket.onerror = function(evt) {\n onError(evt)\n };\n }\n\t\t\n function onOpen(evt) {\n writeToScreen(\"CONNECTED\");\n doSend(\"WebSocket rocks\");\n }\n\t\t\n function onMessage(evt) {\n writeToScreen('<span style = \"color: blue;\">RESPONSE: ' +\n evt.data+'</span>'); websocket.close();\n }\n\n function onError(evt) {\n writeToScreen('<span style=\"color: red;\">ERROR:</span> ' + evt.data);\n }\n\t\t\n function doSend(message) {\n writeToScreen(\"SENT: \" + message); websocket.send(message);\n }\n\t\t\n function writeToScreen(message) {\n var pre = document.createElement(\"p\"); \n pre.style.wordWrap = \"break-word\"; \n pre.innerHTML = message; output.appendChild(pre);\n }\n\t\t\n window.addEventListener(\"load\", init, false);\n\t\t\n </script>\n\t\n <h2>WebSocket Test</h2>\n <div id = \"output\"></div> \n\t\n</html>"
},
{
"code": null,
"e": 29177,
"s": 29150,
"text": "The output is shown below."
},
{
"code": null,
"e": 29529,
"s": 29177,
"text": "Close event marks the end of a communication between the server and the client. Closing a connection is possible with the help of onclose event. After marking the end of communication with the help of onclose event, no messages can be further transferred between the server and the client. Closing the event can occur due to poor connectivity as well."
},
{
"code": null,
"e": 29671,
"s": 29529,
"text": "The close() method stands for goodbye handshake. It terminates the connection and no data can be exchanged unless the connection opens again."
},
{
"code": null,
"e": 29774,
"s": 29671,
"text": "Similar to the previous example, we call the close() method when the user clicks on the second button."
},
{
"code": null,
"e": 30032,
"s": 29774,
"text": "var textView = document.getElementById(\"text-view\");\nvar buttonStop = document.getElementById(\"stop-button\");\n\nbuttonStop.onclick = function() {\n // Close the connection, if open.\n if (socket.readyState === WebSocket.OPEN) {\n socket.close();\n }\n}"
},
{
"code": null,
"e": 30128,
"s": 30032,
"text": "It is also possible to pass the code and reason parameters we mentioned earlier as shown below."
},
{
"code": null,
"e": 30177,
"s": 30128,
"text": "socket.close(1000, \"Deliberate disconnection\");\n"
},
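{
   "code": null,
   "e": null,
   "s": null,
   "text": "On the receiving side, the close event exposes the code, reason, and wasClean properties, which can be inspected as a sketch −"
},
{
   "code": null,
   "e": null,
   "s": null,
   "text": "socket.onclose = function(event) {\n   console.log(\"Code: \" + event.code);\n   console.log(\"Reason: \" + event.reason);\n   console.log(\"Clean close: \" + event.wasClean);\n};"
},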
{
"code": null,
"e": 30278,
"s": 30177,
"text": "The following code gives a complete overview of how to close or disconnect a Web Socket connection −"
},
{
"code": null,
"e": 31979,
"s": 30278,
"text": "<!DOCTYPE html>\n<html>\n <meta charset = \"utf-8\" />\n <title>WebSocket Test</title>\n\n <script language = \"javascript\" type = \"text/javascript\">\n var wsUri = \"ws://echo.websocket.org/\";\n var output;\n\t\n function init() {\n output = document.getElementById(\"output\");\n testWebSocket();\n }\n\t\n function testWebSocket() {\n websocket = new WebSocket(wsUri);\n\t\t\n websocket.onopen = function(evt) {\n onOpen(evt)\n };\n\t\t\n websocket.onclose = function(evt) {\n onClose(evt)\n };\n\t\t\n websocket.onmessage = function(evt) {\n onMessage(evt)\n };\n\t\t\n websocket.onerror = function(evt) {\n onError(evt)\n };\n }\n\t\n function onOpen(evt) {\n writeToScreen(\"CONNECTED\");\n doSend(\"WebSocket rocks\");\n }\n\t\n function onClose(evt) {\n writeToScreen(\"DISCONNECTED\");\n }\n\t\n function onMessage(evt) {\n writeToScreen('<span style = \"color: blue;\">RESPONSE: ' + \n evt.data+'</span>'); websocket.close();\n }\n\t\n function onError(evt) {\n writeToScreen('<span style = \"color: red;\">ERROR:</span> '\n + evt.data);\n } \n\t\n function doSend(message) {\n writeToScreen(\"SENT: \" + message); websocket.send(message);\n }\n\t\n function writeToScreen(message) {\n var pre = document.createElement(\"p\"); \n pre.style.wordWrap = \"break-word\"; \n pre.innerHTML = message; \n output.appendChild(pre);\n }\n\t\n window.addEventListener(\"load\", init, false);\n </script>\n\t\n <h2>WebSocket Test</h2>\n <div id = \"output\"></div>\n\t\n</html>"
},
{
"code": null,
"e": 32006,
"s": 31979,
"text": "The output is as follows −"
},
{
"code": null,
"e": 32391,
"s": 32006,
"text": "A Web Socket server is a simple program, which has the ability to handle Web Socket events and actions. It usually exposes similar methods to the Web Socket client API and most programming languages provide an implementation. The following diagram illustrates the communication process between a Web Socket server and a Web Socket client, emphasizing the triggered events and actions."
},
{
"code": null,
"e": 32469,
"s": 32391,
"text": "The following diagram shows a Web Socket server and client event triggering −"
},
{
"code": null,
"e": 32701,
"s": 32469,
"text": "The Web Socket server works in a similar way to the Web Socket clients. It responds to events and performs actions when necessary. Regardless of the programming language used, every Web Socket server performs some specific actions."
},
{
"code": null,
"e": 32833,
"s": 32701,
"text": "It is initialized to a Web Socket address. It handles OnOpen, OnClose, and OnMessage events, and sends messages to the clients too."
},
{
"code": null,
"e": 32957,
"s": 32833,
"text": "Every Web Socket server needs a valid host and port. An example of creating a Web Socket instance in server is as follows −"
},
{
"code": null,
"e": 33015,
"s": 32957,
"text": "var server = new WebSocketServer(\"ws://localhost:8181\");\n"
},
{
"code": null,
"e": 33247,
"s": 33015,
"text": "Any valid URL can be used with the specification of a port, which was not used earlier. It is very useful to keep a record of the connected clients, as it provides details with different data or send different messages to each one."
},
{
"code": null,
"e": 33437,
"s": 33247,
"text": "Fleck represents the incoming connections (clients) with the IwebSocketConnection interface. Whenever someone connects or disconnects from our service, empty list can be created or updated."
},
{
"code": null,
"e": 33486,
"s": 33437,
"text": "var clients = new List<IWebSocketConnection>();\n"
},
{
"code": null,
"e": 33729,
"s": 33486,
"text": "After that, we can call the Start method and wait for the clients to connect. After starting, the server is able to accept incoming connections. In Fleck, the Start method needs a parameter, which indicates the socket that raised the events −"
},
{
"code": null,
"e": 33760,
"s": 33729,
"text": "server.Start(socket) =>\n{\n});\n"
},
{
"code": null,
"e": 34078,
"s": 33760,
"text": "The OnOpen event determines that a new client has requested access and performs an initial handshake. The client should be added to the list and probably the information should be stored related to it, such as the IP address. Fleck provides us with such information, as well as a unique identifier for the connection."
},
{
"code": null,
"e": 34256,
"s": 34078,
"text": "server.Start(socket) ⇒ {\n\n socket.OnOpen = () ⇒ {\n // Add the incoming connection to our list.\n clients.Add(socket);\n }\n\t\n // Handle the other events here...\n});"
},
{
"code": null,
"e": 34412,
"s": 34256,
"text": "The OnClose event is raised whenever a client is disconnected. The Client is removed from the list and informs the rest of clients about the disconnection."
},
{
"code": null,
"e": 34518,
"s": 34412,
"text": "socket.OnClose = () ⇒ {\n // Remove the disconnected client from the list.\n clients.Remove(socket);\n};"
},
{
"code": null,
"e": 34709,
"s": 34518,
"text": "The OnMessage event is raised when a client sends data to the server. Inside this event handler, the incoming message can be transmitted to the clients, or probably select only some of them."
},
{
"code": null,
"e": 34801,
"s": 34709,
"text": "The process is simple. Note that this handler takes a string named message as a parameter −"
},
{
"code": null,
"e": 34903,
"s": 34801,
"text": "socket.OnMessage = () ⇒ {\n // Display the message on the console.\n Console.WriteLine(message);\n};"
},
{
"code": null,
"e": 35051,
"s": 34903,
"text": "The Send() method simply transmits the desired message to the specified client. Using Send(), text or binary data can be stored across the clients."
},
{
"code": null,
"e": 35098,
"s": 35051,
"text": "The working of OnMessage event is as follows −"
},
{
"code": null,
"e": 35380,
"s": 35098,
"text": "socket.OnMessage = () ⇒ {\n foreach (var client in clients) {\n // Send the message to everyone!\n // Also, send the client connection's unique identifier in order\n // to recognize who is who.\n client.Send(client.ConnectionInfo.Id + \" says: \" + message);\n }\n};"
},
{
"code": null,
"e": 35514,
"s": 35380,
"text": "API, an abbreviation of Application Program Interface, is a set of routines, protocols, and tools for building software applications."
},
{
"code": null,
"e": 35544,
"s": 35514,
"text": "Some important features are −"
},
{
"code": null,
"e": 35686,
"s": 35544,
"text": "The API specifies how software components should interact and APIs should be used when programming graphical user interface (GUI) components."
},
{
"code": null,
"e": 35828,
"s": 35686,
"text": "The API specifies how software components should interact and APIs should be used when programming graphical user interface (GUI) components."
},
{
"code": null,
"e": 35914,
"s": 35828,
"text": "A good API makes it easier to develop a program by providing all the building blocks."
},
{
"code": null,
"e": 36000,
"s": 35914,
"text": "A good API makes it easier to develop a program by providing all the building blocks."
},
{
"code": null,
"e": 36140,
"s": 36000,
"text": "REST, which typically runs over HTTP is often used in mobile applications, social websites, mashup tools, and automated business processes."
},
{
"code": null,
"e": 36280,
"s": 36140,
"text": "REST, which typically runs over HTTP is often used in mobile applications, social websites, mashup tools, and automated business processes."
},
{
"code": null,
"e": 36419,
"s": 36280,
"text": "The REST style emphasizes that interactions between the clients and services is enhanced by having a limited number of operations (verbs)."
},
{
"code": null,
"e": 36558,
"s": 36419,
"text": "The REST style emphasizes that interactions between the clients and services is enhanced by having a limited number of operations (verbs)."
},
{
"code": null,
"e": 36662,
"s": 36558,
"text": "Flexibility is provided by assigning resources; their own unique Universal Resource Identifiers (URIs)."
},
{
"code": null,
"e": 36766,
"s": 36662,
"text": "Flexibility is provided by assigning resources; their own unique Universal Resource Identifiers (URIs)."
},
{
"code": null,
"e": 36857,
"s": 36766,
"text": "REST avoids ambiguity because each verb has a specific meaning (GET, POST, PUT and DELETE)"
},
{
"code": null,
"e": 36948,
"s": 36857,
"text": "REST avoids ambiguity because each verb has a specific meaning (GET, POST, PUT and DELETE)"
},
{
"code": null,
"e": 37011,
"s": 36948,
"text": "Web Socket solves a few issues with REST, or HTTP in general −"
},
{
"code": null,
"e": 37352,
"s": 37011,
"text": "HTTP is a unidirectional protocol where the client always initiates a request. The server processes and returns a response, and then the client consumes it. Web Socket is a bi-directional protocol where there are no predefined message patterns such as request/response. Either the client or the server can send a message to the other party."
},
{
"code": null,
"e": 37659,
"s": 37352,
"text": "HTTP allows the request message to go from the client to the server and then the server sends a response message to the client. At a given time, either the client is talking to the server or the server is talking to the client. Web Socket allows the client and the server to talk independent of each other."
},
{
"code": null,
"e": 38053,
"s": 37659,
"text": "Typically, a new TCP connection is initiated for an HTTP request and terminated after the response is received. A new TCP connection needs to be established for another HTTP request/response. For Web Socket, the HTTP connection is upgraded using standard HTTP upgrade mechanism and the client and the server communicate over that same TCP connection for the lifecycle of Web Socket connection."
},
{
"code": null,
"e": 38165,
"s": 38053,
"text": "The graph given below shows the time (in milliseconds) taken to process N messages for a constant payload size."
},
{
"code": null,
"e": 38210,
"s": 38165,
"text": "Here is the raw data that feeds this graph −"
},
{
"code": null,
"e": 38453,
"s": 38210,
"text": "The graph and the table given above show that the REST overhead increases with the number of messages. This is true because that many TCP connections need to be initiated and terminated and that many HTTP headers need to be sent and received."
},
{
"code": null,
"e": 38563,
"s": 38453,
"text": "The last column particularly shows the multiplication factor for the amount of time to fulfil a REST request."
},
{
"code": null,
"e": 38668,
"s": 38563,
"text": "The second graph shows the time taken to process a fixed number of messages by varying the payload size."
},
{
"code": null,
"e": 38713,
"s": 38668,
"text": "Here is the raw data that feeds this graph −"
},
{
"code": null,
"e": 38918,
"s": 38713,
"text": "This graph shows that the incremental cost of processing the request/response for a REST endpoint is minimal and most of the time is spent in connection initiation/termination and honoring HTTP semantics."
},
{
"code": null,
"e": 39147,
"s": 38918,
"text": "Web Socket is a low-level protocol. Everything, including a simple request/response design pattern, how to create/update/delete resources need, status codes etc. to be builds on top of it. All of these are well defined for HTTP."
},
{
"code": null,
"e": 39565,
"s": 39147,
"text": "Web Socket is a stateful protocol whereas HTTP is a stateless protocol. Web Socket connections can scale vertically on a single server whereas HTTP can scale horizontally. There are some proprietary solutions for Web Socket horizontal scaling, but they are not based on standards. HTTP comes with a lot of other goodies such as caching, routing, and multiplexing. All of these need to be defined on top of Web Socket."
},
{
"code": null,
"e": 39678,
"s": 39565,
"text": "The following program code describes the working of a chat application using JavaScript and Web Socket protocol."
},
{
"code": null,
"e": 45370,
"s": 39678,
"text": "<!DOCTYPE html>\n<html lang = \"en\">\n\n <head>\n <meta charset = utf-8>\n <title>HTML5 Chat</title>\n\t\t\n <body>\n\t\t\n <section id = \"wrapper\">\n\t\t\t\n <header>\n <h1>HTML5 Chat</h1>\n </header>\n\t\t\t\t\n <style>\n #chat { width: 97%; }\n .message { font-weight: bold; }\n .message:before { content: ' '; color: #bbb; font-size: 14px; }\n\t\t\t\t\t\n #log {\n overflow: auto;\n max-height: 300px;\n list-style: none;\n padding: 0;\n }\n\t\t\t\t\t\n #log li {\n border-top: 1px solid #ccc;\n margin: 0;\n padding: 10px 0;\n }\n\t\t\t\t\t\n body {\n font: normal 16px/20px \"Helvetica Neue\", Helvetica, sans-serif;\n background: rgb(237, 237, 236);\n margin: 0;\n margin-top: 40px;\n padding: 0;\n }\n\t\t\t\t\t\n section, header {\n display: block;\n }\n\t\t\t\t\t\n #wrapper {\n width: 600px;\n margin: 0 auto;\n background: #fff;\n border-radius: 10px;\n border-top: 1px solid #fff;\n padding-bottom: 16px;\n }\n\t\t\t\t\t\n h1 {\n padding-top: 10px;\n }\n\t\t\t\t\t\n h2 {\n font-size: 100%;\n font-style: italic;\n }\n\t\t\t\t\t\n header, article > * {\n margin: 20px;\n }\n\t\t\t\t\t\n #status {\n padding: 5px;\n color: #fff;\n background: #ccc;\n }\n\t\t\t\t\t\n #status.fail {\n background: #c00;\n }\n\t\t\t\t\t\n #status.success {\n background: #0c0;\n }\n\t\t\t\t\t\n #status.offline {\n background: #c00;\n }\n\t\t\t\t\t\n #status.online {\n background: #0c0;\n }\n\t\t\t\t\t\n #html5badge {\n margin-left: -30px;\n border: 0;\n }\n\t\t\t\t\t\n #html5badge img {\n border: 0;\n }\n </style>\n\t\t\t\t\n <article>\n\t\t\t\t\n <form onsubmit = \"addMessage(); return false;\">\n <input type = \"text\" id = \"chat\" placeholder = \"type and press \n enter to chat\" />\n </form>\n\t\t\t\t\t\n <p id = \"status\">Not connected</p>\n <p>Users connected: <span id = \"connected\">0\n </span></p>\n <ul id = \"log\"></ul>\n\t\t\t\t\t\n </article>\n\t\t\t\t\n <script>\n connected = document.getElementById(\"connected\");\n log = document.getElementById(\"log\");\n chat = 
document.getElementById(\"chat\");\n form = chat.form;\n state = document.getElementById(\"status\");\n\t\t\t\t\t\n if (window.WebSocket === undefined) {\n state.innerHTML = \"sockets not supported\";\n state.className = \"fail\";\n }else {\n if (typeof String.prototype.startsWith != \"function\") {\n String.prototype.startsWith = function (str) {\n return this.indexOf(str) == 0;\n };\n }\n\t\t\t\t\t\t\n window.addEventListener(\"load\", onLoad, false);\n }\n\t\t\t\t\t\n function onLoad() {\n var wsUri = \"ws://127.0.0.1:7777\";\n websocket = new WebSocket(wsUri);\n websocket.onopen = function(evt) { onOpen(evt) };\n websocket.onclose = function(evt) { onClose(evt) };\n websocket.onmessage = function(evt) { onMessage(evt) };\n websocket.onerror = function(evt) { onError(evt) };\n }\n\t\t\t\t\t\n function onOpen(evt) {\n state.className = \"success\";\n state.innerHTML = \"Connected to server\";\n }\n\t\t\t\t\t\n function onClose(evt) {\n state.className = \"fail\";\n state.innerHTML = \"Not connected\";\n connected.innerHTML = \"0\";\n }\n\t\t\t\t\t\n function onMessage(evt) {\n // There are two types of messages:\n // 1. a chat participant message itself\n // 2. a message with a number of connected chat participants\n var message = evt.data;\n\t\t\t\t\t\t\n if (message.startsWith(\"log:\")) {\n message = message.slice(\"log:\".length);\n log.innerHTML = '<li class = \"message\">' + \n message + \"</li>\" + log.innerHTML;\n }else if (message.startsWith(\"connected:\")) {\n message = message.slice(\"connected:\".length);\n connected.innerHTML = message;\n }\n }\n\t\t\t\t\t\n function onError(evt) {\n state.className = \"fail\";\n state.innerHTML = \"Communication error\";\n }\n\t\t\t\t\t\n function addMessage() {\n var message = chat.value;\n chat.value = \"\";\n websocket.send(message);\n }\n\t\t\t\t\t\n </script>\n\t\t\t\t\n </section>\n\t\t\t\n </body>\n\t\t\n </head>\t\n\t\n</html>"
},
{
"code": null,
"e": 45448,
"s": 45370,
"text": "The key features and the output of the chat application are discussed below −"
},
{
"code": null,
"e": 45593,
"s": 45448,
"text": "To test, open the two windows with Web Socket support, type a message above and press return. This would enable the feature of chat application."
},
{
"code": null,
"e": 45671,
"s": 45593,
"text": "If the connection is not established, the output is available as shown below."
},
{
"code": null,
"e": 45733,
"s": 45671,
"text": "The output of a successful chat communication is shown below."
},
{
"code": null,
"e": 46109,
"s": 45733,
"text": "The Web has been largely built around the request/response paradigm of HTTP. A client loads up a web page and then nothing happens until the user clicks onto the next page. Around 2005, AJAX started to make the web feel more dynamic. Still, all HTTP communication is steered by the client, which requires user interaction or periodic polling to load new data from the server."
},
{
"code": null,
"e": 46316,
"s": 46109,
"text": "Technologies that enable the server to send the data to a client in the very moment when it knows that new data is available have been around for quite some time. They go by names such as \"Push\" or “Comet”."
},
{
"code": null,
"e": 46800,
"s": 46316,
"text": "With long polling, the client opens an HTTP connection to the server, which keeps it open until sending response. Whenever the server actually has new data, it sends the response. Long polling and the other techniques work quite well. However, all of these share one problem, they carry the overhead of HTTP, which does not make them well suited for low latency applications. For example, a multiplayer shooter game in the browser or any other online game with a real-time component."
},
{
"code": null,
"e": 47051,
"s": 46800,
"text": "The Web Socket specification defines an API establishing \"socket\" connections between a web browser and a server. In layman terms, there is a persistent connection between the client and the server and both parties can start sending data at any time."
},
{
"code": null,
"e": 47116,
"s": 47051,
"text": "Web socket connection can be simply opened using a constructor −"
},
{
"code": null,
"e": 47205,
"s": 47116,
"text": "var connection = new WebSocket('ws://html5rocks.websocket.org/echo', ['soap', 'xmpp']);\n"
},
{
"code": null,
"e": 47364,
"s": 47205,
"text": "ws is the new URL schema for WebSocket connections. There is also wss, for secure WebSocket connection the same way https is used for secure HTTP connections."
},
{
"code": null,
"e": 47524,
"s": 47364,
"text": "Attaching some event handlers immediately to the connection allows you to know when the connection is opened, received incoming messages, or there is an error."
},
{
"code": null,
"e": 47822,
"s": 47524,
"text": "The second argument accepts optional subprotocols. It can be a string or an array of strings. Each string should represent a subprotocol name and server accepts only one of passed subprotocols in the array. Accepted subprotocol can be determined by accessing protocol property of WebSocket object."
},
{
"code": null,
"e": 48202,
"s": 47822,
"text": "// When the connection is open, send some data to the server\nconnection.onopen = function () {\n connection.send('Ping'); // Send the message 'Ping' to the server\n};\n\n// Log errors\nconnection.onerror = function (error) {\n console.log('WebSocket Error ' + error);\n};\n\n// Log messages from the server\nconnection.onmessage = function (e) {\n console.log('Server: ' + e.data);\n};"
},
{
"code": null,
"e": 48542,
"s": 48202,
"text": "As soon as we have a connection to the server (when the open event is fired) we can start sending data to the server using the send (your message) method on the connection object. It used to support only strings, but in the latest specification, it now can send binary messages too. To send binary data, Blob or ArrayBuffer object is used."
},
{
"code": null,
"e": 48962,
"s": 48542,
"text": "// Sending String\nconnection.send('your message');\n\n// Sending canvas ImageData as ArrayBuffer\nvar img = canvas_context.getImageData(0, 0, 400, 320);\nvar binary = new Uint8Array(img.data.length);\n\nfor (var i = 0; i < img.data.length; i++) {\n binary[i] = img.data[i];\n}\n\nconnection.send(binary.buffer);\n\n// Sending file as Blob\nvar file = document.querySelector('input[type = \"file\"]').files[0];\nconnection.send(file);"
},
{
"code": null,
"e": 49168,
"s": 48962,
"text": "Equally, the server might send us messages at any time. Whenever this happens the onmessage callback fires. The callback receives an event object and the actual message is accessible via the data property."
},
{
"code": null,
"e": 49451,
"s": 49168,
"text": "WebSocket can also receive binary messages in the latest spec. Binary frames can be received in Blob or ArrayBuffer format. To specify the format of the received binary, set the binaryType property of WebSocket object to either 'blob' or 'arraybuffer'. The default format is 'blob'."
},
{
"code": null,
"e": 49679,
"s": 49451,
"text": "// Setting binaryType to accept received binary as either 'blob' or 'arraybuffer'\nconnection.binaryType = 'arraybuffer';\nconnection.onmessage = function(e) {\n console.log(e.data.byteLength); // ArrayBuffer object if binary\n};"
},
{
"code": null,
"e": 49818,
"s": 49679,
"text": "Another newly added feature of WebSocket is extensions. Using extensions, it will be possible to send frames compressed, multiplexed, etc."
},
{
"code": null,
"e": 49890,
"s": 49818,
"text": "// Determining accepted extensions\nconsole.log(connection.extensions);\n"
},
{
"code": null,
"e": 50168,
"s": 49890,
"text": "Being a modern protocol, cross-origin communication is baked right into WebSocket. WebSocket enables communication between parties on any domain. The server decides whether to make its service available to all clients or only those that reside on a set of well-defined domains."
},
{
"code": null,
"e": 50723,
"s": 50168,
"text": "Every new technology comes with a new set of problems. In the case of WebSocket it is the compatibility with proxy servers, which mediate HTTP connections in most company networks. The WebSocket protocol uses the HTTP upgrade system (which is normally used for HTTP/SSL) to \"upgrade\" an HTTP connection to a WebSocket connection. Some proxy servers do not like this and will drop the connection. Thus, even if a given client uses the WebSocket protocol, it may not be possible to establish a connection. This makes the next section even more important :)"
},
{
"code": null,
"e": 51122,
"s": 50723,
"text": "Using WebSocket creates a whole new usage pattern for server side applications. While traditional server stacks such as LAMP are designed around the HTTP request/response cycle they often do not deal well with a large number of open WebSocket connections. Keeping a large number of connections open at the same time requires an architecture that receives high concurrency at a low performance cost."
},
{
"code": null,
"e": 51461,
"s": 51122,
"text": "Protocol should be designed for security reasons. WebSocket is a brand-new protocol and not all web browsers implement it correctly. For example, some of them still allow the mix of HTTP and WS, although the specification implies the opposite. In this chapter, we will discuss a few common security attacks that a user should be aware of."
},
{
"code": null,
"e": 51847,
"s": 51461,
"text": "Denial of Service (DoS) attacks attempt to make a machine or network resource unavailable to the users that request it. Suppose someone makes an infinite number of requests to a web server with no or tiny time intervals. The server is not able to handle each connection and will either stop responding or will keep responding too slowly. This can be termed as Denial of service attack."
},
{
"code": null,
"e": 51940,
"s": 51847,
"text": "Denial of service is very frustrating for the end users, who could not even load a web page."
},
{
"code": null,
"e": 52085,
"s": 51940,
"text": "DoS attack can even apply on peer-to-peer communications, forcing the clients of a P2P network to concurrently connect to the victim web server."
},
{
"code": null,
"e": 52137,
"s": 52085,
"text": "Let us understand this with the help of an example."
},
{
"code": null,
"e": 52476,
"s": 52137,
"text": "Suppose a person A is chatting with his friend B via an IM client. Some third person wants to view the messages you exchange. So, he makes an independent connections with both the persons. He also sends messages to person A and his friend B, as an invisible intermediate to your communication. This is known as a man-in-the-middle attack."
},
{
"code": null,
"e": 52721,
"s": 52476,
"text": "The man-in-the-middle kind of attack is easier for unencrypted connections, as the intruder can read the packages directly. When the connection is encrypted, the information has to be decrypted by the attacker, which might be way too difficult."
},
{
"code": null,
"e": 52966,
"s": 52721,
"text": "From a technical aspect, the attacker intercepts a public-key message exchange and sends the message while replacing the requested key with his own. Obviously, a solid strategy to make the attacker's job difficult is to use SSH with WebSockets."
},
{
"code": null,
"e": 53068,
"s": 52966,
"text": "Mostly when exchanging critical data, prefer the WSS secure connection instead of the unencrypted WS."
},
{
"code": null,
"e": 53328,
"s": 53068,
"text": "Cross-site scripting (XSS) is a vulnerability that enables attackers to inject client-side scripts into web pages or applications. An attacker can send HTML or Javascript code using your application hubs and let this code be executed on the clients' machines."
},
{
"code": null,
"e": 53562,
"s": 53328,
"text": "By default, the WebSocket protocol is designed to be secure. In the real world, the user might encounter various issues that might occur due to poor browser implementation. As time goes by, browser vendors fix any issues immediately."
},
{
"code": null,
"e": 53658,
"s": 53562,
"text": "An extra layer of security is added when secure WebSocket connection over SSH (or TLS) is used."
},
{
"code": null,
"e": 53917,
"s": 53658,
"text": "In the WebSocket world, the main concern is about the performance of a secure connection. Although there is still an extra TLS layer on top, the protocol itself contains optimizations for this kind of use, furthermore, WSS works more sleekly through proxies."
},
{
"code": null,
"e": 54277,
"s": 53917,
"text": "Every message transmitted between a WebSocket server and a WebSocket client contains a specific key, named masking key, which allows any WebSocket-compliant intermediaries to unmask and inspect the message. If the intermediary is not WebSocket-compliant, then the message cannot be affected. The browser that implements the WebSocket protocol handles masking."
},
{
"code": null,
"e": 54455,
"s": 54277,
"text": "Finally, useful tools can be presented to investigate the flow of information between your WebSocket clients and server, analyze the exchanged data, and identify possible risks."
},
{
"code": null,
"e": 54677,
"s": 54455,
"text": "Chrome, Firefox, and Opera are great browsers in terms of developer support. Their built-in tools help us determine almost any aspect of client-side interactions and resources. It plays a great role for security purposes."
},
{
"code": null,
"e": 54917,
"s": 54677,
"text": "WebSocket, as the name implies, is something that uses the web. The web is usually interwoven with browser pages because that are the primary means of displaying data online. However, non-browser programs too, use online data transmission."
},
{
"code": null,
"e": 55187,
"s": 54917,
"text": "The release of the iPhone (initially) and the iPad (later) introduced a brand new world of web interconnectivity without necessarily using a web browser. Instead, the new smartphone and tablet devices utilized the power of native apps to offer a unique user experience."
},
{
"code": null,
"e": 55418,
"s": 55187,
"text": "Currently, there are one billion active smartphones out there. That is, millions of potential customers for your applications. These people use their mobile phone to accomplish daily tasks, surf the internet, communicate, or shop."
},
{
"code": null,
"e": 55635,
"s": 55418,
"text": "Smartphones have become synonymous to apps. Nowadays, there is an app for any usage, a user can think of. Most of the apps connect to the internet in order to retrieve data, make transactions, gather news, and so on."
},
{
"code": null,
"e": 55775,
"s": 55635,
"text": "It would be great to use the existing WebSocket knowledge and develop a WebSocket client running natively on a smartphone or tablet device."
},
{
"code": null,
"e": 56167,
"s": 55775,
"text": "Well, this is a common conflict and as usual, the answer depends on the needs of the target audience. If a user is familiar with the modern design trends, designing a website that is responsive and mobile friendly is now a must. However, the end user must be sure that the content, which is what really matters, is equally accessible via a smartphone, as it is via a classic desktop browser."
},
{
"code": null,
"e": 56389,
"s": 56167,
"text": "Definitely, a WebSocket web app will run on any HTML5-compliant browser, including mobile browsers such as Safari for iOS and Chrome for mobile. Therefore, there are no worries about compatibility issues with smartphones."
},
{
"code": null,
"e": 56484,
"s": 56389,
"text": "In order to develop a smartphone app, installation of development tools and SDKs are required."
},
{
"code": null,
"e": 56718,
"s": 56484,
"text": "WebSockets can act as a universal hub for transmitting messages between connected mobile and tablet clients. We can implement a native iOS application, which communicates with a WebSocket server just like the HTML5 JavaScript client."
}
] |
How to Get Started with C++ Programming?
|
So you've decided to learn how to program in C++ but don't know where to start. Here's a brief overview of how you can get started.
This is the first step you'll want to take before you start learning to program in C++. Good free C++ compilers are available for all major OS platforms. Download one that suits your platform, or use tutorialspoint.com's online compiler on https://www.tutorialspoint.com/compile_cpp_online.php
GCC − GCC is the GNU Compiler Collection, a set of compilers created by the GNU project. You can download and install this compiler from http://gcc.gnu.org/
Clang − Clang is a compiler collection released by the LLVM community. It is available on all major platforms, and you can find download and installation instructions on http://clang.llvm.org/get_started.html
Visual C++ 2017 Community − This is a free C++ compiler built for Windows by Microsoft. You can download and install this compiler from https://www.visualstudio.com/vs/cplusplus/
Now that you have a compiler installed, it's time to write a C++ program. Let's start with the epitome of programming examples: the Hello World program. We'll print hello world to the screen using C++ in this example. Create a new file called hello.cpp and write the following code to it −
#include<iostream>
int main() {
std::cout << "Hello World\n";
}
Let's dissect this program.
Line 1 − We start with the #include<iostream> line, which essentially tells the compiler to copy the code from the iostream file (used for managing input and output streams) and paste it into our source file. The iostream header allows us to perform standard input and output operations, such as writing the output of this program (Hello World) to the screen. Lines beginning with a hash sign (#) are directives read and interpreted by what is known as the preprocessor.
Line 2 − A blank line. Blank lines have no effect on a program.
Line 3 − We then declare a function called main with the return type int. main() is the entry point of our program. Whenever we run a C++ program, we start with the main function, begin execution from the first line within this function, and keep executing each line until we reach the end. We start a block using the opening curly brace ({) here. This marks the beginning of main's function definition, and the closing brace (}) on line 5 marks its end. All statements between these braces are the function's body, which defines what happens when main is called.
Line 4 −
std::cout << "Hello World\n";
This line is a C++ statement. This statement has three parts: first, std::cout, which identifies the standard console output device. Second, the insertion operator <<, which indicates that what follows is inserted into std::cout. Last, we have a sentence within quotes that we'd like printed on the screen. This will become clearer as we proceed in learning C++. In short, we provide the cout object with the string "Hello World\n" to be printed to the standard output device. Note that the statement ends with a semicolon (;). This character marks the end of the statement.
Now that we've written the program, we need to translate it into a language that the processor understands, i.e., binary machine code. We do this using the compiler we installed in the first step. You need to open your terminal/cmd and navigate to the location of the hello.cpp file using the cd command. Assuming you installed GCC, you can use the following command to compile the program −
$ g++ -o hello hello.cpp
This command means that you want the g++ compiler to create an output file, hello, using the source file hello.cpp.
Now that we've written our program and compiled it, it's time to run it! You can run the program using −
$ ./hello
You will get the output −
Hello World
Gold Mine Problem | Practice | GeeksforGeeks
|
Given a gold mine called M of (n x m) dimensions. Each field in this mine contains a positive integer which is the amount of gold in tons. Initially the miner can start from any row in the first column. From a given cell, the miner can move
to the cell diagonally up towards the right
to the right
to the cell diagonally down towards the right
Find out the maximum amount of gold which he can collect.
Example 1:
Input: n = 3, m = 3
M = {{1, 3, 3},
{2, 1, 4},
{0, 6, 4}};
Output: 12
Explaination:
The path is {(1,0) -> (2,1) -> (2,2)}.
Example 2:
Input: n = 4, m = 4
M = {{1, 3, 1, 5},
{2, 2, 4, 1},
{5, 0, 2, 3},
{0, 6, 1, 2}};
Output: 16
Explaination:
The path is {(2,0) -> (3,1) -> (2,2)
-> (2,3)} or {(2,0) -> (1,1) -> (1,2)
-> (0,3)}.
Your Task:
You do not need to read input or print anything. Your task is to complete the function maxGold() which takes the values n, m and the mine M as input parameters and returns the maximum amount of gold that can be collected.
Expected Time Complexity: O(n*m)
Expected Auxiliary Space: O(n*m)
Constraints:
1 ≤ n, m ≤ 50
1 ≤ M[i][j] ≤ 100
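As a point of reference for the expected O(n*m) bounds, the right-to-left tabulation idea can be sketched in Python as follows (this is an illustrative sketch, not the site's official editorial):

```python
# dp[i][j] holds the best gold collectable starting from cell (i, j).
def max_gold(n, m, mine):
    dp = [row[:] for row in mine]          # copy; the last column is its own answer
    for j in range(m - 2, -1, -1):         # fill columns from right to left
        for i in range(n):
            best_next = dp[i][j + 1]                           # move right
            if i > 0:
                best_next = max(best_next, dp[i - 1][j + 1])   # diagonal up-right
            if i < n - 1:
                best_next = max(best_next, dp[i + 1][j + 1])   # diagonal down-right
            dp[i][j] += best_next
    return max(dp[i][0] for i in range(n))  # miner may start from any row in column 0

print(max_gold(3, 3, [[1, 3, 3], [2, 1, 4], [0, 6, 4]]))  # → 12
```

Running this on both examples above reproduces the expected outputs (12 and 16).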
0
rohit jain 111 week ago
Please help, on submitting my solution i am getting wrong result for following input: [{2,1}{1,2}]. I cant find error in my algo.
public static int maxGold(int n, int m, int M[][]) {
    return maxGoldCal(n, m, M);
}

public static int maxGoldCal(int n, int m, int M[][]) {
    int max = 0;
    for (int j = m - 1; j >= 0; j--) {
        for (int i = n - 1; i >= 0; i--) {
            // cannot proceed condition
            int previousRow = i - 1;
            int nextRow = i + 1;
            int nextColumn = j + 1;
            boolean nextRowNotPresent = nextRow > (n - 1);
            boolean previousRowNotPresent = previousRow < 0;
            boolean nextColumnNotPresent = nextColumn > (m - 1);
            boolean diagonalDownNotPossible = nextRowNotPresent || nextColumnNotPresent;
            boolean diagonalUpNotPossible = previousRowNotPresent || nextColumnNotPresent;
            if (nextColumnNotPresent) {
                continue;
            } else if (diagonalDownNotPossible) {
                M[i][j] += Integer.max(M[previousRow][nextColumn], M[i][nextColumn]);
            } else if (diagonalUpNotPossible) {
                M[i][j] += Integer.max(M[i][nextColumn], M[nextRow][nextColumn]);
            } else {
                int temp = Integer.max(M[previousRow][nextColumn], M[nextRow][nextColumn]);
                M[i][j] += Integer.max(temp, M[i][nextColumn]);
            }
            max = Integer.max(M[i][j], max);
        }
    }
    return max;
}
0
2020ucs00972 weeks ago
class Solution{
public:
    int max(int a, int b){
        if(a>=b) return a;
        else return b;
    }
    int f(int cn, int cm, int n, int m, vector<vector<int>> &M, vector<vector<int>> &v){
        if(cn<0 || cn>n-1 || cm>m-1) return 0;
        if(v[cn][cm]!=-1) return v[cn][cm];
        int p,q,r;
        p=f(cn-1,cm+1,n,m,M,v);
        q=f(cn,cm+1,n,m,M,v);
        r=f(cn+1,cm+1,n,m,M,v);
        return v[cn][cm]=M[cn][cm]+ max(p,max(q,r));
    }
    int maxGold(int n, int m, vector<vector<int>> M)
    {
        int max=0,t;
        vector<vector<int>> v;
        v.resize(n,vector<int>(m,-1));
        for (int i=0;i<n;i++){
            t=f(i,0,n,m,M,v);
            if(t>max) max=t;
        }
        return max;
    }
};
0
apurvkumarak2 weeks ago
C++ Solution
int solve(int n,int m,vector<vector<int>> & mat,vector<vector<int>> & dp,int i ,int j){
if(j==m-1){
dp[i][j]=mat[i][j];
return mat[i][j];
}
if(dp[i][j] != -1){
return dp[i][j];
}
int a=0,b=0,c=0;
if(i-1 >= 0 && j+1 < m){
if(dp[i-1][j+1] == -1){
a=solve(n,m,mat,dp,i-1,j+1);
}else{
a=dp[i-1][j+1];
}
}
if(j+1 < m){
if(dp[i][j+1] == -1){
b=solve(n,m,mat,dp,i,j+1);
}else{
b=dp[i][j+1];
}
}
if(i+1 < n && j+1 < m){
if(dp[i+1][j+1] == -1){
c=solve(n,m,mat,dp,i+1,j+1);
}else{
c=dp[i+1][j+1];
}
}
dp[i][j]=mat[i][j]+max(max(a,b),c);
return dp[i][j];
}
int maxGold(int n, int m, vector<vector<int>> M)
{
vector<vector<int>> dp(n,vector<int>(m,-1));
int res=INT_MIN;
for(int i=0;i<M.size();i++){
res=max(solve(n,m,M,dp,i,0),res);
}
return res;
}
0
dwivedidivyanshu30Premium2 weeks ago
Why does this code give error ?
int maxGold(int n, int m, vector<vector<int>> M)
{
for(int col = m-2;col>=0;col--){
for(int row = 0;row<n;row++){
int a,b,c = 0;
if((col+1)<m)
a = M[row][col+1];
if((row-1)>=0 and (col+1)<m)
b = M[row-1][col+1];
if((row+1<n)and (col+1)<m)
c = M[row+1][col+1];
M[row][col]+= max(a,max(b,c));
}
}
int max = INT_MIN;
for(int i=0;i<n;i++){
if(M[i][0]>max)
max = M[i][0];
}
return max;
}
+1
kashyapjhon3 weeks ago
C++ Solution Pep Coding Method Time=(0.04/1.29) :
int maxGold(int n, int m, vector<vector<int>> M)
{
    // code here
    if(n>1){
        int dp[n+2][m+2];
        for(int j=m-1;j>=0;j--){
            for(int i=0;i<n;i++){
                if(j==m-1){
                    dp[i][j]=M[i][j];
                }
                else if(i==0){
                    dp[i][j]=M[i][j]+max(M[i][j+1],M[i+1][j+1]);
                    M[i][j]=dp[i][j];
                }
                else if(i==n-1){
                    dp[i][j]=M[i][j]+max(M[i-1][j+1],M[i][j+1]);
                    M[i][j]=dp[i][j];
                }
                else{
                    dp[i][j]=M[i][j]+max(M[i-1][j+1],max(M[i][j+1],M[i+1][j+1]));
                    M[i][j]=dp[i][j];
                }
            }
        }
        int ans=INT_MIN;
        for(int i=0;i<n;i++){
            ans=max(ans,dp[i][0]);
        }
        return ans;
    }
    else{
        int ans=0;
        for(int i=0;i<m;i++){
            ans=ans+M[0][i];
        }
        return ans;
    }
}
0
omkarg14171 month ago
Time Complexity : O(n*m)
Space Complexity: O(2*n)
int maxGold(int n, int m, vector<vector<int>> a)
{
vector<vector<int>> dp(2, vector<int>(n, 0));
for(int i = 0; i < n; ++i) {
dp[0][i] = a[i][0];
}
int x = 1;
for(int k = 1; k < m; ++k) {
for(int i = 0; i < n; ++i) {
int res1 = 0, res2 = 0;
if(i-1 >= 0)
res1 = dp[1-x][i-1];
if(i+1 < n)
res2 = dp[1-x][i+1];
dp[x][i] = a[i][k] + max({res1, res2, dp[1-x][i]});
}
x ^= 1;
}
int res=0;
for(int i = 0; i < n; ++i) {
res = max({res, dp[0][i], dp[1][i]});
}
        return res;
    }
0
shreeshsingh2 months ago
# User function Template for Python3
from typing import List
# filling dp table from left to right
# 2D dp tabulation
class Solution:
def maxGold(self, n: int, m: int, matrix: List[List[int]]) -> int:
# code here
dp = [ [0 for _ in range(m+1)] for _ in range(n+1) ]
for j in range(m):
for i in range(n):
left = 0
leftUp = 0
leftDown = 0
if j - 1 <= m:
left = dp[i][j-1]
if i - 1 >= 0 and j - 1 <= m:
leftUp = dp[i-1][j-1]
if i+1 <= n and j - 1 <= m:
leftDown = dp[i+1][j-1]
dp[i][j] = matrix[i][j] + max(left, leftUp, leftDown)
res = dp[0][m-1]
for i in range(1, n):
res = max(res, dp[i][m-1])
return res
+1
aloksinghbais022 months ago
C++ solution having time complexity as O(n*m) and space complexity as O(n*m) is as follows :-
Execution Time :- 0.0 / 1.3 sec
int dp[51][51];
bool isValid(int x,int y,int n,int m){
    if(x < 0 || x >= n || y < 0 || y >= m) return (false);
    return (true);
}
int helper(int x,int y,int n,int m,vector<vector<int>> &M){
    if(!isValid(x,y,n,m)) return (0);
    if(dp[x][y] != -1) return (dp[x][y]);
    int res1 = M[x][y] + helper(x-1,y+1,n,m,M);
    int res2 = M[x][y] + helper(x,y+1,n,m,M);
    int res3 = M[x][y] + helper(x+1,y+1,n,m,M);
    return dp[x][y] = (max(res1,max(res2,res3)));
}
int maxGold(int n, int m, vector<vector<int>> M) {
    int maxGold = 0;
    memset(dp,-1,sizeof(dp));
    for(int i = 0; i < n; i++){
        int gold = helper(i,0,n,m,M);
        maxGold = max(maxGold,gold);
    }
    return (maxGold);
}
0
tarun2002ts0212 months ago
3 / 204
Input:
2 1 1 2
And Your Code's output is:
1
Its Correct output is:
2
I think both the “correct output” and “your output” are wrong; I think the right answer will be 4.
my code--- have a look
int maxGold(int n, int m, vector<vector<int>> M) {
    // code here
    int maxi=INT_MIN;
    vector<vector<int>>dp(n+2,vector<int>(m+2,-1));
    for(int i=0;i<m;i++)
        {maxi=max(maxi,solve(i,0,n,m,M,dp));}
    return maxi;
}

////////////////////// function 1
int solve(int i,int j,int n,int m,vector<vector<int>>&arr,vector<vector<int>>dp) {
    if(!satisfy(i,j,n,m)) return 0;
    if(dp[i][j]!=-1) return dp[i][j];
    return dp[i][j]=arr[i][j]+max(solve(i,j+1,n,m,arr,dp),max(solve(i-1,j+1,n,m,arr,dp),solve(i+1,j+1,n,m,arr,dp)));
}

////////////////// function 2
bool satisfy(int i,int j,int n,int m) {
    if(i<0 || j<0 ||i>=n|| j>=m) return false;
    return true;
}

/////////// function 3
0
How to install OpenCV in Python?
|
OpenCV is a Python library that is used to solve computer vision problems. Computer vision includes understanding and analyzing digital images with a computer and processing the images or providing relevant data after analyzing them.

OpenCV is an open-source library used in machine learning and image processing. It performs tasks such as recognizing handwritten digits, human faces, and objects.
To use OpenCV, we need to install it.
Type the following commands in the command prompt to check whether Python and pip are installed on your system.
python --version
If python is successfully installed, the version of python installed on your system will be displayed.
pip -V
The version of pip will be displayed, if it is successfully installed on your system.
OpenCV can be installed using pip. The following command is run in the command prompt to install OpenCV.
pip install opencv-python
This command will start downloading and installing packages related to the OpenCV library. Once done, the message of successful installation will be displayed.
Reading Clocks using Neural Nets. Can Neural Network detect time from... | by Shiva Verma | Towards Data Science
|
I was really interested in an idea: reading the time from Analog Clock Images using Neural Nets. To do this task, I needed a dataset containing clock images, but no such dataset was available on the web. One other way was to download clock images from the web and manually label them, which is a quite time-consuming process.
Finally, I decided to write a python script that generates animated clock images and their respective labels. The following are the few samples of clocks images generated by the script.
These images are not as realistic as real-world clock images, but it would be really exciting to see if a Neural Net can be trained on them as well.
The following is the link to the dataset, which is available on Kaggle.
www.kaggle.com
Before designing the neural net, let’s see how do we read time from a clock.
In order to read the time from a clock, we need two values: an hour and a minute. An hour can take a value from 0 to 11, where 0 is nothing but 12 o'clock. Similarly, a minute can take a value from 0–59. The hour hand is shorter than the minute hand; this is how we differentiate the two. And most importantly, there is at least one marker (12 o'clock at the top) on the clock.
Now let’s design a neural net that can read the time similarly.
As you know, our objective is to feed a clock image to the neural network and get the time value from it. So the network has to output two values: hour and minute. Let's cover each case one by one.
Hour values can be from 0 to 11. We can consider this as a Classification task where we have a total of 12 classes. And the network has to select one of the classes.
Minute values can be from 0–59. This means there are 60 possible values. It would be wise to consider this a Regression task, because in this case we want to predict the minute value as close as possible.
Now let’s look at the architecture diagram below which explains the complete network architecture.
In the beginning, the Network contains Convolutional layers, which will extract useful features from the image. On top of the Convolutional layers, there are 2 branches of Fully-Connected layers. One branch is for detecting Hour and one for detecting Minute.
Since predicting Hour value is a classification task. There would be 12 output nodes in the hour-branch. And we apply a Softmax activation on top of output nodes.
In the minute-branch, there would be just one output node with the Linear activation, since in regression we just need a single value. Linear activation is essentially no activation. I will not go into details of classification and regression here.
The whole network, which can read the time from clock images, is now ready. Following is the code to create this network in Keras.
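The author's original Keras code was embedded externally and isn't reproduced in this text. Below is a minimal sketch of such a two-branch model using TensorFlow's Keras functional API; the filter counts and dense-layer sizes are illustrative assumptions, not the author's exact values:

```python
from tensorflow.keras import layers, Model

def build_clock_reader(input_shape=(100, 100, 1)):
    # Shared convolutional feature extractor (layer sizes are assumptions).
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    # Hour branch: 12-way classification with a softmax output.
    h = layers.Dense(64, activation="relu")(x)
    hour = layers.Dense(12, activation="softmax", name="hour")(h)

    # Minute branch: a single linear output regressing minute / 60.
    m = layers.Dense(64, activation="relu")(x)
    minute = layers.Dense(1, activation="linear", name="minute")(m)

    model = Model(inp, [hour, minute])
    model.compile(optimizer="adam",
                  loss={"hour": "sparse_categorical_crossentropy",
                        "minute": "mae"})
    return model
```

Note how the two heads share the convolutional trunk, so the classification and regression losses are trained jointly on the same features.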
We have designed the network architecture and created it in Keras. It’s time to feed the data to the network and train it. Let’s quickly discuss the input and target of the network.
My clock dataset contains RGB images of (300*300) size. Before feeding the image to the network, I converted all images to greyscale and reduced the size to (100*100). I am doing this to load the images in less memory and to make training faster. On the other hand, we may lose some information due to low resolution.
The target is the hour and minute values. I divided the minute value by 60 to keep it in the range of (0 – 1). The neural net performs better on small-range output.
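To make the target encoding concrete, here is a small pure-Python sketch of how labels could be encoded and how the two network outputs could be decoded back into a clock reading (the helper names are my own, not from the original project):

```python
def encode_target(hour, minute):
    # Hour maps to a class index 0..11 (12 o'clock becomes 0);
    # minute is scaled into [0, 1) for the regression head.
    return hour % 12, minute / 60.0

def decode_prediction(hour_probs, minute_scaled):
    # hour_probs: the 12 softmax probabilities; minute_scaled: the linear output.
    hour = max(range(12), key=lambda i: hour_probs[i])   # argmax over classes
    minute = int(round(minute_scaled * 60)) % 60          # back to 0..59
    return hour, minute

print(encode_target(12, 30))                              # → (0, 0.5)
probs = [0.0] * 12
probs[7] = 1.0
print(decode_prediction(probs, 0.25))                     # → (7, 15)
```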
I have trained the network on 40K clock images for 10 epochs. During the training, I kept decreasing the learning rate and increasing batch size. The final results were as follows on 500 test images.
Hour, accuracy: ~99.00
Minute, Mean Absolute Error: ~3.5
On average, the network was able to read the time to within about 3.5 minutes, which is quite impressive for so little effort.
Following is a snippet of my Jupyter Notebook which shows the Network reading time.
Following is the GitHub link for this complete project. I have also saved the trained model in the Repo. Thanks for reading.
How to convert a dictionary to a Pandas series? - GeeksforGeeks
08 Oct, 2021
Let’s discuss how to convert a dictionary into a pandas series in Python. A series is a one-dimensional labeled array which can contain any type of data, i.e. integers, floats, strings, Python objects, etc., while a dictionary is an unordered collection of key:value pairs. We use the Series() function of the pandas library to convert a dictionary into a series by passing the dictionary as an argument.
Let’s see some examples:
Example 1: We pass the name of the dictionary as an argument to the Series() function. The order of the output will be the same as that of the dictionary.
Python3
# Import pandas library
import pandas as pd

# Create a dictionary
d = {'g': 100, 'e': 200, 'k': 400, 's': 800, 'n': 1600}

# Convert from dictionary to series
result_series = pd.Series(d)

# Print series
result_series
Output:
Example 2: We pass the name of the dictionary and a different order of index. The order of the output will be the same as the order we passed in the argument.
Python3
# Import pandas library
import pandas as pd

# Create a dictionary
d = {'a': 10, 'b': 20, 'c': 40, 'd': 80, 'e': 160}

# Convert from dictionary to series
result_series = pd.Series(d, index=['e', 'b', 'd', 'a', 'c'])

# Print series
result_series
Output:
Example 3: In the above example the length of the index list was the same as the number of keys in the dictionary. What happens if they are not equal? Let’s see with the help of an example.
Python3
# Import pandas library
import pandas as pd

# Create a dictionary
d = {'a': 10, 'b': 20, 'c': 40, 'd': 80}

# Convert from dictionary to series
result_series = pd.Series(d, index=['b', 'd', 'e', 'a', 'c'])

# Print series
result_series
Output:
So pandas assigns a NaN value to any index label that has no matching key in the dictionary.
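If those NaN entries are unwanted, they can be dropped or filled afterwards. A small sketch (not from the original article):

```python
import pandas as pd

d = {'a': 10, 'b': 20, 'c': 40, 'd': 80}
s = pd.Series(d, index=['b', 'd', 'e', 'a', 'c'])

# 'e' has no matching key in d, so it becomes NaN.
cleaned = s.dropna()   # drop the unmatched label
filled = s.fillna(0)   # or substitute a default value
print(filled['e'])     # 0.0
```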
NLP: Analysis On Tweets Using Python and TWINT | Towards Data Science
Ever heard of Twint?
Twint is an advanced web-scraping tool built in Python which scrapes the web instead of collecting data through the Twitter API the way tweepy does. Its name is short for Twitter Intelligence Tool. You can install it using:
pip3 install twint
The twint documentation can be found here.
In this article, we will use Donald Trump’s tweets since the start of the year 2019. We can download tweets of a given user with this simple command in the command line:
twint -u realDonaldTrump --since 2019-01-01 -o trump.csv --csv
This will download all the tweets of @realDonaldTrump since 2019 into a single CSV file, trump.csv.
Here I have converted the CSV file to xls format for convenience. Let’s dive in!
df = pd.read_excel('trump.xls')
# added columns: mentions, hashtags and length
# added month, year and hour columns
# added columns: cleaned_tweets, num_mentions, num_hashtags
df.head()
Let’s look at the average tweet length by hour.
Looks like the president’s tweets are lengthy early in the morning (3am to 10am).
Average number of mentions by hour.
How about when coupled with the sentiment of those tweets? (calculation of sentiment shown later.)
First, let’s clean the tweets. For this, we will create two functions, one for removing urls, mentions and hashtags (store them in a separate column) and the other for cleaning the remaining text (removing stop words, punctuations).
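The original embedded code for these two helpers is not shown above; the sketch below is a plausible reconstruction. The exact regexes and the tiny stopword list are assumptions (a real version would likely use NLTK's stopword list):

```python
import re
import string

def remove_content(tweet):
    # Strip URLs, @mentions and #hashtags; keep the rest of the text.
    tweet = re.sub(r"http\S+|www\.\S+", "", tweet)
    tweet = re.sub(r"[@#]\w+", "", tweet)
    return re.sub(r"\s+", " ", tweet).strip()

# Minimal illustrative stopword set (an assumption; NLTK's list is larger).
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "on"}

def process_text(tweet, stem=False):
    # Remove content, lowercase, strip punctuation, drop stopwords.
    tweet = remove_content(tweet).lower()
    tweet = tweet.translate(str.maketrans("", "", string.punctuation))
    words = [w for w in tweet.split() if w not in STOPWORDS]
    return " ".join(words)

print(remove_content("Check https://t.co/x @user #tag now"))  # Check now
print(process_text("The CAT is on the mat!!"))                # cat mat
```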
I will use the cleaned_tweets column (content, stopwords and punctuation all stripped) for frequency analysis, and the tweet column (with just the content removed) to calculate sentiment and subjectivity.
df['cleaned_tweets'] = df['tweet'].apply(lambda x: process_text(x))
df['tweet'] = df['tweet'].apply(lambda x: remove_content(x))
Now let’s build a word cloud to get an idea of frequent phrases.
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

temp = ' '.join(df['cleaned_tweets'].tolist())
wordcloud = WordCloud(width=800, height=500,
                      background_color='white',
                      min_font_size=10).generate(temp)
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
More frequent words/phrases appear in larger font.
Now let’s define a function to plot the top n occurrences of phrases in the given ngram range. For this, we will use the CountVectorizer function.
Most of the work is done, now let’s plot the frequent phrases.
Frequent Unigrams
plot_topn(tweet_list, ngram_range=(1,1))
Frequent Bigrams
plot_topn(tweet_list, ngram_range=(2,2))
Frequent Trigrams
plot_topn(tweet_list, ngram_range=(3,3))
Sleepy Joe Biden? Seriously?
Most mentioned users:
Most used hashtags:
We use the tweet column to analyze the sentiment and subjectivity of the tweets. For this, we will use TextBlob.
Given an input sentence, TextBlob outputs a tuple of two elements: (sentiment, subjectivity)
from textblob import TextBlob

df['sentiment'] = df['tweet'].apply(lambda x: TextBlob(x).sentiment[0])
df['subject'] = df['tweet'].apply(lambda x: TextBlob(x).sentiment[1])
df['polarity'] = df['sentiment'].apply(lambda x: 'pos' if x >= 0 else 'neg')
Let’s look at the sentiment distribution of tweets
Most of the tweets might not be subjective; a tweet might simply state a fact, like bad news. Let’s find out the sentiment distribution of the tweets which were subjective. For this, let’s filter out the tweets with subjectivity greater than 0.5 and plot the distribution.
fig = px.histogram(df[df['subject'] > 0.5], x='polarity', color='polarity')
fig.show()
Looks like the proportion of negative sentiment increased when only subjective tweets were analyzed.
Now let’s look at the polarity of the subjective tweets of the 20 most mentioned users.
Topic modeling is a machine learning technique that automatically analyzes text data to determine cluster words for a set of documents. This is known as ‘unsupervised’ machine learning because it doesn’t require a predefined list of tags or training data that’s been previously classified by humans.
We will use the gensim LDA model for topic modelling.
# pre-process tweets to BOW
from gensim import corpora

r = [process_text(x, stem=False).split() for x in df['tweet'].tolist()]
dictionary = corpora.Dictionary(r)
corpus = [dictionary.doc2bow(rev) for rev in r]

# initialize model and print topics
from gensim import models

model = models.ldamodel.LdaModel(corpus, num_topics=10, id2word=dictionary, passes=15)
topics = model.print_topics(num_words=5)
for topic in topics:
    print(topic[0], process_text(topic[1]))
There are some clear topics like topic 5 during the early stages of impeachment trial, topic 8 containing phrases related to the China trade deal and topic 6 regarding his plans to build the wall.
labels = []
for x in model[corpus]:
    labels.append(sorted(x, key=lambda t: t[1], reverse=True)[0][0])
df['topic'] = pd.Series(labels)
Let’s look at the topic distribution.
Let’s look at the distribution of topic 5 and 6.
These plots make sense since the tweets in topic 5 significantly increase during the month when the whistleblower complaint was released and topic 6 has more tweets during the first month of 2019 when Trump was planning to build the wall.
You can find a more detailed analysis here.
If you liked this article please do leave a clap. Thank you for reading!
Github Twint documentation: https://github.com/twintproject/twint
https://medium.com/bigpanda-engineering/exploratory-data-analysis-for-text-data-29cf7dd54eb8
https://medium.com/@b.terryjack/nlp-pre-trained-sentiment-analysis-1eb52a9d742c
Spiral Matrix II in Python
Suppose we have a positive integer n; we have to generate a square matrix with n² elements in spiral order. So if n = 5, then the matrix will be −
Let us see the steps −
set (row1, col1) := (0, 0) and (row2, col2) := (n, n), and create one matrix called res, then fill it with 0s, and set num := 1
while num <= n²,
   for i in range col1 to col2, set res[row1, i] := num and increase num by 1
   if num > n², then break
   for i in range row1 + 1 to row2, set res[i, col2 - 1] := num and increase num by 1
   if num > n², then break
   for i in range col2 - 2 down to col1 - 1, set res[row2 - 1, i] := num and increase num by 1
   if num > n², then break
   for i in range row2 - 2 down to row1, set res[i, col1] := num and increase num by 1
   increase row1 by 1, decrease row2 by 1, increase col1 by 1 and decrease col2 by 1
return res
Let us see the following implementation to get better understanding −
class Solution(object):
    def generateMatrix(self, n):
        row1 = 0
        col1 = 0
        row2 = n
        col2 = n
        result = [[0 for i in range(n)] for j in range(n)]
        num = 1
        while num <= n**2:
            for i in range(col1, col2):
                result[row1][i] = num
                num += 1
            if num > n**2:
                break
            for i in range(row1 + 1, row2):
                result[i][col2 - 1] = num
                num += 1
            if num > n**2:
                break
            for i in range(col2 - 2, col1 - 1, -1):
                result[row2 - 1][i] = num
                num += 1
            if num > n**2:
                break
            for i in range(row2 - 2, row1, -1):
                result[i][col1] = num
                num += 1
            row1 += 1
            row2 -= 1
            col1 += 1
            col2 -= 1
        return result

ob1 = Solution()
print(ob1.generateMatrix(4))
Input: 4
Output: [[1, 2, 3, 4], [12, 13, 14, 5], [11, 16, 15, 6], [10, 9, 8, 7]]