Let's now take a look at the `DataFrame` containing the transformed data. | ax = nutr_df_TF.hist(bins=50, xlabelsize=-1, ylabelsize=-1, figsize=(11,11)) | _____no_output_____ | MIT | Machine Learning 2_Using Advanced Machine Learning Models/Reference Material/190053-Reactors-DS-Tr2-Sec1-2-PCA.ipynb | raspyweather/Reactors |
Few of these columns look properly normal, but the transformations are enough that we can now center the data. Our data units were incompatible to begin with, and the transformations have not improved that. But we can address that by centering the data around 0; that is, we will again transform the data, this time so that every column has a mean of 0 and a standard deviation of 1. Scikit-learn has a convenient function for this. (Note that `fit_transform` returns a NumPy array rather than a `DataFrame`.) | nutr_df_TF = StandardScaler().fit_transform(nutr_df_TF)
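As a quick illustration of what this standardization does, here is a minimal NumPy-only sketch on a small made-up array (not our nutrient data): subtract each column's mean, then divide by its standard deviation.

```python
import numpy as np

# Toy data, two columns with very different scales.
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# Column-wise standardization: mean 0, standard deviation 1 per column.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # [1. 1.]
```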
You can satisfy yourself that the data is now centered by using the `mean()` method. | print("mean: ", np.round(nutr_df_TF.mean(), 2))
> **Exercise**>> Find the standard deviation of `nutr_df_TF`. (If you need a hint as to which method to use, see [this page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html).) > **Exercise solution**>> The correct code to use here is `print("s.d.: ", np.round(nutr_df_TF.std(), 2))`. **PCA in practice** It is finally time to perform the PCA on our data. (As stated before, even with pretty clean data, a lot of effort has to go into preparing the data for analysis.) | fit = PCA()
pca = fit.fit_transform(nutr_df_TF)
So, now that we have performed the PCA on our data, what do we actually have? Remember that PCA is foremost about finding the eigenvectors for our data. We then want to select some subset of those vectors to form the lower-dimensional subspace in which to analyze our data. Not all of the eigenvectors are created equal. Just a few of them will account for the majority of the variance in the data. (Put another way, a subspace composed of just a few of the eigenvectors will retain the majority of the information from our data.) We want to focus on those vectors. To help us get a sense of how many vectors we should use, consider this scree graph of the variance for the PCA components, which plots the variance explained by the components from greatest to least. | plt.plot(fit.explained_variance_ratio_)
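To make the connection to eigenvectors concrete, here is a hedged NumPy sketch on toy data (not our nutrient data): the eigenvalues of the covariance matrix of a centered data matrix are the per-component variances, and normalizing them gives the explained-variance ratios that a scree graph plots.

```python
import numpy as np

# Toy data with deliberately unequal column variances.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)) * np.array([3.0, 2.0, 1.0, 0.5])
X = X - X.mean(axis=0)  # center the data first

# Covariance matrix, then its eigenvalues sorted largest-first.
cov = (X.T @ X) / (len(X) - 1)
ratios = np.sort(np.linalg.eigvalsh(cov))[::-1]
ratios = ratios / ratios.sum()  # normalize to explained-variance ratios
print(np.round(ratios, 3))  # a few components dominate
```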
This is where data science can become an art. As a rule of thumb, we want to look for an "elbow" in the graph, which is the point at which the first few components have captured the majority of the variance in the data (after that point, we are only adding complexity to the analysis for increasingly diminishing returns). In this particular case, that appears to be at about five components. We can take the cumulative sum of the first five components to see how much variance they capture in total. | print(fit.explained_variance_ratio_[:5].sum())
So our five components capture about 70 percent of the variance. We can see what fewer or additional components would yield by looking at the cumulative variance for all of the components. | print(fit.explained_variance_ratio_.cumsum())
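The thresholding idea above can be sketched directly; this is a hedged NumPy example with made-up explained-variance ratios (not the ones computed from our data): pick the smallest number of components whose cumulative ratio reaches a chosen threshold.

```python
import numpy as np

# Made-up explained-variance ratios, largest first.
ratios = np.array([0.30, 0.18, 0.10, 0.07, 0.05, 0.04, 0.03])
threshold = 0.70

# searchsorted finds the first index where the cumulative sum reaches the
# threshold; +1 converts the index to a component count.
k = int(np.searchsorted(np.cumsum(ratios), threshold) + 1)
print(k)  # 5
```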
We can also examine this visually. | plt.plot(np.cumsum(fit.explained_variance_ratio_))
plt.title("Cumulative Explained Variance Graph")
Ultimately, it is a matter of judgment as to how many components to use, but five vectors (and 70 percent of the variance) will suffice for our purposes in this section. To aid further analysis, let's now put those five components into a DataFrame. | pca_df = pd.DataFrame(pca[:, :5], index=df.index)
pca_df.head()
Each column holds the scores for one of the five principal components, and each row gives a food's coordinates in that five-dimensional component space. We will want to add the FoodGroup column back in to aid with our interpretation of the data later on. Let's also rename the component-columns $c_{1}$ through $c_{5}$ so that we know what we are looking at. | pca_df = pca_df.join(desc_df)
pca_df.drop(['Shrt_Desc', 'GmWt_Desc1', 'GmWt_2', 'GmWt_Desc2', 'Refuse_Pct'],
            axis=1, inplace=True)
pca_df.rename(columns={0:'c1', 1:'c2', 2:'c3', 3:'c4', 4:'c5'},
              inplace=True)
pca_df.head()
Don't worry that the FoodGroup column has all `NaN` values: it is not a vector, so it has no vector coordinates. One last thing we should demonstrate is that each of the components is mutually perpendicular (or orthogonal, in math-speak). One way of expressing that condition is that each component-vector should correlate perfectly with itself and not correlate at all (positively or negatively) with any other vector. | np.round(pca_df.corr(), 5)
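This orthogonality is a general property of principal axes, which we can sketch with NumPy on toy data (not `pca_df`): the right singular vectors of a centered matrix are mutually orthogonal unit vectors, so their Gram matrix is the identity.

```python
import numpy as np

# Toy centered data matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X = X - X.mean(axis=0)

# The rows of Vt from the SVD play the role of PCA components.
_, _, components = np.linalg.svd(X, full_matrices=False)

# Pairwise dot products: 1 on the diagonal, ~0 everywhere else.
gram = components @ components.T
print(np.round(gram, 5))  # ~6x6 identity matrix
```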
**Interpreting the results** What do our vectors mean? Put another way, what kinds of foods populate the different clusters we have discovered among the data? To see these results, we will create a pandas Series for each of the components, index them by feature, and then sort them in decreasing order (so that a high positive number represents a feature that is positively correlated with that vector and a negative number one that is negatively correlated). | vects = fit.components_[:5]
c1 = pd.Series(vects[0], index=nutr_df.columns)
c1.sort_values(ascending=False)
Our first cluster is defined by foods that are high in protein and minerals like selenium and zinc while also being low in sugars and vitamin C. Even to a non-specialist, these sound like foods such as meat, poultry, or legumes.> **Key takeaway:** Particularly when it comes to interpretation, subject-matter expertise can prove essential to producing high-quality analysis. For this reason, you should also try to include SMEs in your data science projects. | c2 = pd.Series(vects[1], index=nutr_df.columns)
c2.sort_values(ascending=False)
Our second group is foods that are high in fiber and folic acid and low in cholesterol.> **Exercise**>> Find the sorted output for $c_{3}$, $c_{4}$, and $c_{5}$.>> ***Hint:*** Remember that Python uses zero-indexing. Even without subject-matter expertise, is it possible to get a more accurate sense of the kinds of foods defined by each component? Yes! This is the reason we merged the `FoodGroup` column back into `pca_df`. We will sort that `DataFrame` by the components and count the values from `FoodGroup` for the top items. | pca_df.sort_values(by='c1')['FoodGroup'][:500].value_counts()
We can do the same thing for $c_{2}$. | pca_df.sort_values(by='c2')['FoodGroup'][:500].value_counts()
> **Exercise**>> Repeat this process for $c_{3}$, $c_{4}$, and $c_{5}$. > **A parting note:** `Baby Foods` and some other categories might seem to dominate several of these lists. This is a product of all of the rows we had to drop that had `NaN` values. If we look at all of the value counts for `FoodGroup`, we will see that they are not evenly distributed, with some categories far more represented than others. | df['FoodGroup'].value_counts()
**Install Earth Engine API and geemap** Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front end and the back end, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). | # Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize() | _____no_output_____ | MIT | examples/notebooks/geemap_and_ipyleaflet.ipynb | hugoledoux/geemap |
Create an interactive map | import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
Map.add_minimap(position='bottomright')
Map
**Add tile layers** For example, you can add a Google Maps tile layer: | url = 'https://mt1.google.com/vt/lyrs=m&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Map', attribution='Google')
Add Google Terrain tile layer: | url = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Terrain', attribution='Google')
**Add WMS layers** More WMS layers can be found at . For example, you can add NAIP imagery. | url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='0', name='NAIP Imagery', format='image/png')
Add USGS 3DEP Elevation Dataset | url = 'https://elevation.nationalmap.gov/arcgis/services/3DEPElevation/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='3DEPElevation:None', name='3DEP Elevation', format='image/png')
Capture user inputs | import geemap
from ipywidgets import Label
from ipyleaflet import Marker
Map = geemap.Map(center=(40, -100), zoom=4)
label = Label()
display(label)
coordinates = []
def handle_interaction(**kwargs):
    latlon = kwargs.get('coordinates')
    if kwargs.get('type') == 'mousemove':
        label.value = str(latlon)
    elif kwargs.get('type') == 'click':
        coordinates.append(latlon)
        Map.add_layer(Marker(location=latlon))
Map.on_interaction(handle_interaction)
Map
print(coordinates)
A simpler way for capturing user inputs | import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
cluster = Map.listening(event='click', add_marker=True)
Map
# Get the last mouse clicked coordinates
Map.last_click
# Get all the mouse clicked coordinates
Map.all_clicks
SplitMap control | import geemap
from ipyleaflet import *
Map = geemap.Map(center=(47.50, -101), zoom=7)
right_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2017_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2017_CIR',
name = 'AerialImage_ND_2017_CIR',
format = 'image/png'
)
left_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2018_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2018_CIR',
name = 'AerialImage_ND_2018_CIR',
format = 'image/png'
)
control = SplitMapControl(left_layer=left_layer, right_layer=right_layer)
Map.add_control(control)
Map.add_control(LayersControl(position='topright'))
Map.add_control(FullScreenControl())
Map
import geemap
Map = geemap.Map()
Map.split_map(left_layer='HYBRID', right_layer='ESRI')
Map
Gender and Age Detection | import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import cv2
from tensorflow.keras.models import Sequential, load_model, Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Dropout, BatchNormalization, Flatten, Input
from sklearn.model_selection import train_test_split
# Defining the path to the dataset.
datasetFolder = r"C:\Users\ACER\Documents\Gender Detection\DataSets\UTKFace"
# Creating empty lists.
pixels = []
age = []
gender = []
for img in os.listdir(datasetFolder): # os.listdir lists the files in the "datasetFolder" directory.
    # The label of each image is split on "_" and the required information is stored in the corresponding variables.
    ages = img.split("_")[0]
    genders = img.split("_")[1]
    img = cv2.imread(str(datasetFolder) + "/" + str(img)) # Reading each image from the folder path provided.
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Converting the image from BGR to RGB, since OpenCV reads images in BGR by default.
    # Appending the necessary data to the respective lists.
    pixels.append(np.array(img))
    age.append(np.array(ages))
    gender.append(np.array(genders))
# Converting list to array
age = np.array(age, dtype = np.int64)
pixels = np.array(pixels)
gender = np.array(gender, np.uint64)
# Printing the number of images loaded.
p = len(pixels)
print(f"No. of images working upon {p}")
# Splitting the images into train and test sets, with age as the target.
x_train, x_test, y_train, y_test = train_test_split(pixels, age, random_state = 100)
# Splitting the images into train and test sets, with gender as the target.
x_train_2, x_test_2, y_train_2, y_test_2 = train_test_split(pixels, gender, random_state = 100)
# Checking the shape of the images set. Here (200, 200, 3) are height, width and channel of the images respectively.
x_train.shape, x_train_2.shape, x_test.shape, x_test_2.shape,
# Checking the shape of the target variable.
y_train.shape, y_train_2.shape, y_test.shape, y_test_2.shape | _____no_output_____ | MIT | Training.ipynb | nitinsrswt/age_gender_predictions |
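A hedged, NumPy-only sketch of the splitting idea above (scikit-learn's `train_test_split` holds out 25 percent for testing by default): shuffle the sample indices, then slice.

```python
import numpy as np

# Toy index array standing in for the dataset's sample indices.
rng = np.random.default_rng(100)
idx = rng.permutation(8)       # shuffle the sample indices
split = int(0.75 * len(idx))   # 75% train / 25% test by default
train_idx, test_idx = idx[:split], idx[split:]
print(len(train_idx), len(test_idx))  # 6 2
```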
The cell below creates the layers of a convolutional neural network model. The layers in a CNN are: * Input Layer * Convolution Layer * ReLU Layer * Pooling Layer * Fully Connected Layer | inputLayer = Input(shape = (200, 200, 3)) # The Input layer imported from keras.layers. Again, (200, 200, 3) are the height, width, and channels of the images respectively.
convLayer1 = Conv2D(140,(3,3), activation = 'relu')(inputLayer)
'''An activation function is basically just a simple function that transforms its inputs into outputs within a certain range.
The ReLU activation transforms negative values into 0 and leaves positive values unchanged, hence it is also known as a
half rectifier.'''
convLayer2 = Conv2D(130,(3,3), activation = 'relu')(convLayer1) # Creating the second convolutional layer.
batch1 = BatchNormalization()(convLayer2) # Normalizing the activations.
poolLayer3 = MaxPool2D((2,2))(batch1) # Adding a max-pooling layer.
convLayer3 = Conv2D(120,(3,3), activation = 'relu')(poolLayer3) # Adding the third convolutional layer.
batch2 = BatchNormalization()(convLayer3) # Normalizing the activations.
poolLayer4 = MaxPool2D((2,2))(batch2) # Adding a second max-pooling layer.
flt = Flatten()(poolLayer4) # Flattening the feature maps.
age_model = Dense(128,activation="relu")(flt) # Here 128 is the no. of neurons connected with the flatten data layer.
age_model = Dense(64,activation="relu")(age_model) #Now as we move down, no. of neurons are reducing with previous neurons connected to them.
age_model = Dense(32,activation="relu")(age_model)
age_model = Dense(1,activation="relu")(age_model)
gender_model = Dense(128,activation="relu")(flt) # The same work as above with 128 neurons is done for gender predictive model.
gender_model = Dense(80,activation="relu")(gender_model)
gender_model = Dense(64,activation="relu")(gender_model)
gender_model = Dense(32,activation="relu")(gender_model)
gender_model = Dropout(0.5)(gender_model) # A dropout layer is added to reduce overfitting.
'''Softmax is a mathematical function that converts a vector of numbers into a vector of probabilities, where the probabilities
of each value are proportional to the relative scale of each value in the vector. Here it is used as an activation function.'''
gender_model = Dense(2,activation="softmax")(gender_model) | _____no_output_____ | MIT | Training.ipynb | nitinsrswt/age_gender_predictions |
The cell below creates the model object from `keras.models.Model`. | model = Model(inputs=inputLayer, outputs=[age_model, gender_model]) # Combining the input layer and the two output heads into one model.
model.compile(optimizer="adam",loss=["mse","sparse_categorical_crossentropy"],metrics=['mae','accuracy'])
model.summary() # To get the summary of our model.
save = model.fit(x_train,[y_train,y_train_2], validation_data=(x_test,[y_test,y_test_2]),epochs=50)
model.save("model.h5") | Epoch 1/50
An Introduction to SageMaker LDA ***Finding topics in synthetic document data using Spectral LDA algorithms.*** --- 1. [Introduction](#Introduction) 1. [Setup](#Setup) 1. [Training](#Training) 1. [Inference](#Inference) 1. [Epilogue](#Epilogue) **Introduction** Amazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics. In this notebook we will use the Amazon SageMaker LDA algorithm to train an LDA model on some example synthetic data. We will then use this model to classify (perform inference on) the data. The main goals of this notebook are to, * learn how to obtain and store data for use in Amazon SageMaker, * create an AWS SageMaker training job on a data set to produce an LDA model, * use the LDA model to perform inference with an Amazon SageMaker endpoint. The following are ***not*** goals of this notebook: * understand the LDA model, * understand how the Amazon SageMaker LDA algorithm works, * interpret the meaning of the inference output. If you would like to know more about these things, take a minute to run this notebook and then check out the SageMaker LDA Documentation and the **LDA-Science.ipynb** notebook. | !conda install -y scipy
%matplotlib inline
import os, re
import boto3
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3, suppress=True)
# some helpful utility functions are defined in the Python module
# "generate_example_data" located in the same directory as this
# notebook
from generate_example_data import generate_griffiths_data, plot_lda, match_estimated_topics
# accessing the SageMaker Python SDK
import sagemaker
from sagemaker.amazon.common import RecordSerializer
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
**Setup** *This notebook was created and tested on an ml.m4.xlarge notebook instance.* Before we do anything at all, we need data! We also need to set up our AWS credentials so that AWS SageMaker can store and access data. In this section we will do four things: 1. [Setup AWS Credentials](#SetupAWSCredentials) 1. [Obtain Example Dataset](#ObtainExampleDataset) 1. [Inspect Example Data](#InspectExampleData) 1. [Store Data on S3](#StoreDataonS3) **Setup AWS Credentials** We first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data: * `bucket` - An S3 bucket accessible by this account. * Used to store input training data and model data output. * Should be within the same region as this notebook instance, training, and hosting. * `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.) * `role` - The IAM Role ARN used to give training and hosting access to your data. * See documentation on how to create these. * The script below will try to determine an appropriate Role ARN. | from sagemaker import get_execution_role
session = sagemaker.Session()
role = get_execution_role()
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-lda-introduction'
print('Training input/output will be stored in {}/{}'.format(bucket, prefix))
print('\nIAM Role: {}'.format(role))
**Obtain Example Data** We generate some example synthetic document data. For the purposes of this notebook we will omit the details of this process. All we need to know is that each piece of data, commonly called a *"document"*, is a vector of integers representing *"word counts"* within the document. In this particular example there are a total of 25 words in the *"vocabulary"*. $$\underbrace{w}_{\text{document}} = \overbrace{\big[ w_1, w_2, \ldots, w_V \big] }^{\text{word counts}}, \quad V = \text{vocabulary size}$$ These data are based on those used by Griffiths and Steyvers in their paper [Finding Scientific Topics](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf). For more information, see the **LDA-Science.ipynb** notebook. | print('Generating example data...')
num_documents = 6000
num_topics = 5
known_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(
num_documents=num_documents, num_topics=num_topics)
vocabulary_size = len(documents[0])
# separate the generated data into training and tests subsets
num_documents_training = int(0.9*num_documents)
num_documents_test = num_documents - num_documents_training
documents_training = documents[:num_documents_training]
documents_test = documents[num_documents_training:]
topic_mixtures_training = topic_mixtures[:num_documents_training]
topic_mixtures_test = topic_mixtures[num_documents_training:]
print('documents_training.shape = {}'.format(documents_training.shape))
print('documents_test.shape = {}'.format(documents_test.shape))
**Inspect Example Data** *What does the example data actually look like?* Below we print an example document as well as its corresponding known *topic-mixture*. A topic-mixture serves as the "label" in the LDA model. It describes the ratio of topics from which the words in the document are found. For example, if the topic mixture of an input document $\mathbf{w}$ is, $$\theta = \left[ 0.3, 0.2, 0, 0.5, 0 \right]$$ then $\mathbf{w}$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. For more information see **How LDA Works** in the SageMaker documentation as well as the **LDA-Science.ipynb** notebook. Below, we compute the topic mixtures for the first few training documents. As we can see, each document is a vector of word counts from the 25-word vocabulary and its topic-mixture is a probability distribution across the five topics used to generate the sample dataset. | print('First training document =\n{}'.format(documents[0]))
print('\nVocabulary size = {}'.format(vocabulary_size))
print('Known topic mixture of first document =\n{}'.format(topic_mixtures_training[0]))
print('\nNumber of topics = {}'.format(num_topics))
print('Sum of elements = {}'.format(topic_mixtures_training[0].sum()))
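The arithmetic behind a topic mixture can be sketched with NumPy; here `beta` is a made-up set of per-topic word distributions (not the model's), and the expected word distribution of a document is the mixture `theta @ beta`.

```python
import numpy as np

# The example topic mixture from above: 30% topic 1, 20% topic 2, 50% topic 4.
theta = np.array([0.3, 0.2, 0.0, 0.5, 0.0])

# Made-up topics: 5 rows, each a probability distribution over 25 words.
rng = np.random.default_rng(0)
beta = rng.random((5, 25))
beta = beta / beta.sum(axis=1, keepdims=True)

# Mixing the topics by theta yields a distribution over the vocabulary.
word_dist = theta @ beta
print(round(word_dist.sum(), 6))  # 1.0
```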
Later, when we perform inference on the training data set we will compare the inferred topic mixture to this known one.---Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents. In the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids. | %matplotlib inline
fig = plot_lda(documents_training, nrows=3, ncols=4, cmap='gray_r', with_colorbar=True)
fig.suptitle('Example Document Word Counts')
fig.set_dpi(160)
**Store Data on S3** A SageMaker training job needs access to training data stored in an S3 bucket. Although training can accept data of various formats, we convert the documents to MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook. We do so by making use of the SageMaker Python SDK utility `RecordSerializer`. | # convert documents_training to Protobuf RecordIO format
recordio_protobuf_serializer = RecordSerializer()
fbuffer = recordio_protobuf_serializer.serialize(documents_training)
# upload to S3 in bucket/prefix/train
fname = 'lda.data'
s3_object = os.path.join(prefix, 'train', fname)
boto3.Session().resource('s3').Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)
s3_train_data = 's3://{}/{}'.format(bucket, s3_object)
print('Uploaded data to S3: {}'.format(s3_train_data))
**Training** Once the data is preprocessed and available in a recommended format, the next step is to train our model on the data. There are a number of parameters required by SageMaker LDA for configuring the model and defining the computational environment in which training will take place. First, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication. Information about the locations of each SageMaker algorithm is available in the documentation. | from sagemaker.amazon.amazon_estimator import get_image_uri
# select the algorithm container based on this notebook's current location
region_name = boto3.Session().region_name
container = get_image_uri(region_name, 'lda')
print('Using SageMaker LDA container: {} ({})'.format(container, region_name))
Particular to a SageMaker LDA training job are the following hyperparameters: * **`num_topics`** - The number of topics or categories in the LDA model. * Usually, this is not known a priori. * In this example, however, we know that the data is generated by five topics. * **`feature_dim`** - The size of the *"vocabulary"*, in LDA parlance. * In this example, this is equal to 25. * **`mini_batch_size`** - The number of input training documents. * **`alpha0`** - *(optional)* a measurement of how "mixed" the topic-mixtures are. * When `alpha0` is small the data tends to be represented by one or few topics. * When `alpha0` is large the data tends to be an even combination of several or many topics. * The default value is `alpha0 = 1.0`. In addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that, * Recommended instance type: `ml.c4` * Current limitations: * SageMaker LDA *training* can only run on a single instance. * SageMaker LDA does not take advantage of GPU hardware. * (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!) | # specify general training job information
lda = sagemaker.estimator.Estimator(
container,
role,
output_path='s3://{}/{}/output'.format(bucket, prefix),
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
sagemaker_session=session,
)
# set algorithm-specific hyperparameters
lda.set_hyperparameters(
num_topics=num_topics,
feature_dim=vocabulary_size,
mini_batch_size=num_documents_training,
alpha0=1.0,
)
# run the training job on input data stored in S3
lda.fit({'train': s3_train_data}) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
If you see the message> `===== Job Complete =====`at the bottom of the output logs, then training successfully completed and the output LDA model was stored in the specified output path. You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab and select the training job matching the name printed below: | print('Training job name: {}'.format(lda.latest_training_job.job_name))
Inference***A trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a given document.We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up. | lda_inference = lda.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge', # LDA inference may work better at scale on ml.c4 instances
) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below: | print('Endpoint name: {}'.format(lda_inference.endpoint_name)) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
With this real-time endpoint at our fingertips, we can finally perform inference on our training and test data.We can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON, JSON-sparse, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint. | lda_inference.serializer = CSVSerializer()
lda_inference.deserializer = JSONDeserializer() | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the datatype conversion from NumPy ndarrays. | results = lda_inference.predict(documents_test[:12])
print(results) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
The output of the SageMaker LDA inference endpoint is a Python dictionary with the following format.```{ 'predictions': [ {'topic_mixture': [ ... ] }, {'topic_mixture': [ ... ] }, {'topic_mixture': [ ... ] }, ... ]}```We extract the topic mixtures themselves, one for each of the input documents. | computed_topic_mixtures = np.array([prediction['topic_mixture'] for prediction in results['predictions']])
print(computed_topic_mixtures) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
If you decide to compare these results to the known topic mixtures generated in the [Obtain Example Data](ObtainExampleData) Section, keep in mind that SageMaker LDA discovers topics in no particular order. That is, the approximate topic mixtures computed above may be permutations of the known topic mixtures corresponding to the same documents. | print(topic_mixtures_test[0]) # known test topic mixture
print(computed_topic_mixtures[0]) # computed topic mixture (topics permuted) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
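Because the discovered topics may be permuted, one way to compare is to search over topic permutations for the one that best aligns the computed mixtures with the known ones. Below is a brute-force sketch; the `align_topics` helper and the toy arrays are illustrative (not part of the notebook's data), and brute force is fine for the handful of topics used here.

```python
import itertools
import numpy as np

def align_topics(known, computed):
    """Return the column permutation of `computed` that minimizes the
    mean absolute error against `known` (brute force over permutations)."""
    num_topics = known.shape[1]
    best_perm, best_err = None, np.inf
    for perm in itertools.permutations(range(num_topics)):
        err = np.abs(known - computed[:, perm]).mean()
        if err < best_err:
            best_perm, best_err = perm, err
    return best_perm, best_err

# Toy example: two documents, three topics, with computed columns shuffled.
known = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.6, 0.3]])
computed = known[:, [2, 0, 1]]  # same mixtures, topics permuted
perm, err = align_topics(known, computed)
print(perm, err)  # -> (1, 2, 0) 0.0
```

For the five topics in this example the search space is only 5! = 120 permutations; for many topics you would want an assignment algorithm instead of brute force.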
Stop / Close the EndpointFinally, we should delete the endpoint before we close the notebook.To do so execute the cell below. Alternately, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu. | sagemaker.Session().delete_endpoint(lda_inference.endpoint_name) | _____no_output_____ | Apache-2.0 | introduction_to_amazon_algorithms/lda_topic_modeling/LDA-Introduction.ipynb | P15241328/amazon-sagemaker-examples |
Word2Vec**Learning Objectives**1. Compile all steps into one function2. Prepare training data for Word2Vec3. Model and Training4. Embedding lookup and analysis Introduction Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and [Distributed Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.These papers proposed two methods for learning representations of words: * **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/word2vec.ipynb) -- try to complete that notebook first before reviewing this solution notebook. 
Skip-gram and Negative Sampling While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`. Consider the following sentence of 8 words.> The wide road shimmered in the hot sun. The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered a `context_word`. Take a look at this table of skip-grams for target words based on different window sizes. Note: For this tutorial, a window size of *n* implies n words on each side with a total window span of 2*n+1 words across a word.  The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w1, w2, ... wT*, the objective can be written as the average log probability $$\frac{1}{T}\sum_{t=1}^{T}\sum_{-c \le j \le c,\ j \ne 0}\log p(w_{t+j} \mid w_t)$$ where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function $$p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W}\exp\left({v'_w}^{\top} v_{w_I}\right)}$$ where *v* and *v'* are target and context vector representations of words and *W* is the vocabulary size. Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which often has 10^5-10^7 terms. The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. 
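The windowed `(target_word, context_word)` pairs described above can be generated in a few lines of plain Python. This is only an illustrative sketch of the idea; the `tf.keras.preprocessing.sequence.skipgrams` utility used later in this notebook handles this (plus sampling) for you.

```python
sentence = "the wide road shimmered in the hot sun".split()
window_size = 2

skip_grams = []
for i, target in enumerate(sentence):
    # Context = up to window_size words on each side of the target word.
    lo, hi = max(0, i - window_size), min(len(sentence), i + window_size + 1)
    skip_grams += [(target, sentence[j]) for j in range(lo, hi) if j != i]

print(len(skip_grams))  # 26 pairs for this 8-word sentence with window_size=2
print(skip_grams[:4])   # [('the', 'wide'), ('the', 'road'), ('wide', 'the'), ('wide', 'road')]
```

Words near the edges of the sentence simply get smaller contexts, which is why the first word contributes only 2 pairs while interior words contribute 4.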
With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling. The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from the noise distribution *Pn(w)* of words. More precisely, an efficient approximation of the full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples. A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the `window_size` neighborhood of the target_word. For the example sentence, here are a few potential negative samples (when `window_size` is 2).```(hot, shimmered)(wide, hot)(wide, sun)``` In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial. Setup | # Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Please check your TensorFlow version using the cell below. | # Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Vectorize an example sentence Consider the following sentence: `The wide road shimmered in the hot sun.`Tokenize the sentence: | sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens)) | 8
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Create a vocabulary to save mappings from tokens to integer indices. | vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
if token not in vocab:
vocab[token] = index
index += 1
vocab_size = len(vocab)
print(vocab) | {'<pad>': 0, 'the': 1, 'wide': 2, 'road': 3, 'shimmered': 4, 'in': 5, 'hot': 6, 'sun': 7}
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Create an inverse vocabulary to save mappings from integer indices to tokens. | inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab) | {0: '<pad>', 1: 'the', 2: 'wide', 3: 'road', 4: 'shimmered', 5: 'in', 6: 'hot', 7: 'sun'}
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Vectorize your sentence. | example_sequence = [vocab[word] for word in tokens]
print(example_sequence) | [1, 2, 3, 4, 5, 1, 6, 7]
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Generate skip-grams from one sentence The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section. | window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0)
print(len(positive_skip_grams)) | 26
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Take a look at few positive skip-grams. | for target, context in positive_skip_grams[:5]:
print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})") | (1, 3): (the, road)
(4, 1): (shimmered, the)
(5, 6): (in, hot)
(4, 2): (shimmered, wide)
(3, 2): (road, wide)
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Negative sampling for one skip-gram The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled. Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets. | # Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size, # pick index of the samples from [0, vocab_size]
seed=SEED, # seed for reproducibility
name="negative_sampling" # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates]) | tf.Tensor([2 1 4 3], shape=(4,), dtype=int64)
['wide', 'the', 'shimmered', 'road']
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Construct one training example For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word. | # Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Take a look at the context and the corresponding labels for the target word from the skip-gram example above. | print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}") | target_index : 1
target_word : the
context_indices : [3 2 1 4 3]
context_words : ['road', 'wide', 'the', 'shimmered', 'road']
label : [1 0 0 0 0]
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`. | print(f"target :", target)
print(f"context :", context )
print(f"label :", label ) | target : tf.Tensor(1, shape=(), dtype=int32)
context : tf.Tensor([3 2 1 4 3], shape=(5,), dtype=int64)
label : tf.Tensor([1 0 0 0 0], shape=(5,), dtype=int64)
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Summary This picture summarizes the procedure of generating a training example from a sentence.  Lab Task 1: Compile all steps into one function Skip-gram Sampling table A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality. The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency-rank-based probabilistic sampling table and pass it to the `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10. | sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table) | [0.00315225 0.00315225 0.00547597 0.00741556 0.00912817 0.01068435
0.01212381 0.01347162 0.01474487 0.0159558 ]
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling. Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective. Generate training data Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections. | # Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
# Elements of each training example are appended to these lists.
targets, contexts, labels = [], [], []
# Build the sampling table for vocab_size tokens.
# TODO 1a
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)
# Iterate over all sequences (sentences) in dataset.
for sequence in tqdm.tqdm(sequences):
# Generate positive skip-gram pairs for a sequence (sentence).
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
sequence,
vocabulary_size=vocab_size,
sampling_table=sampling_table,
window_size=window_size,
negative_samples=0)
# Iterate over each positive skip-gram pair to produce training examples
# with positive context word and negative samples.
# TODO 1b
for target_word, context_word in positive_skip_grams:
context_class = tf.expand_dims(
tf.constant([context_word], dtype="int64"), 1)
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class,
num_true=1,
num_sampled=num_ns,
unique=True,
range_max=vocab_size,
seed=SEED,
name="negative_sampling")
# Build context and label vectors (for one target word)
negative_sampling_candidates = tf.expand_dims(
negative_sampling_candidates, 1)
context = tf.concat([context_class, negative_sampling_candidates], 0)
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Append each element from the training example to global lists.
targets.append(target_word)
contexts.append(context)
labels.append(label)
return targets, contexts, labels | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Lab Task 2: Prepare training data for Word2Vec With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences! Download text corpus You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data. | path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') | Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Read text from the file and take a look at the first few lines. | with open(path_to_file) as f:
lines = f.read().splitlines()
for line in lines[:20]:
print(line) | First Citizen:
Before we proceed any further, hear me speak.
All:
Speak, speak.
First Citizen:
You are all resolved rather to die than to famish?
All:
Resolved. resolved.
First Citizen:
First, you know Caius Marcius is chief enemy to the people.
All:
We know't, we know't.
First Citizen:
Let us kill him, and we'll have corn at our own price.
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps. | # TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool)) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Vectorize sentences from the corpus You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the `TextVectorization` layer. | # We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
return tf.strings.regex_replace(lowercase,
'[%s]' % re.escape(string.punctuation), '')
# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length length to pad all samples to same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Call `adapt` on the text dataset to create vocabulary. | vectorize_layer.adapt(text_ds.batch(1024)) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency. | # Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20]) | ['', '[UNK]', 'the', 'and', 'to', 'i', 'of', 'you', 'my', 'a', 'that', 'in', 'is', 'not', 'for', 'with', 'me', 'it', 'be', 'your']
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
The vectorize_layer can now be used to generate vectors for each element in the `text_ds`. | def vectorize_text(text):
text = tf.expand_dims(text, -1)
return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch() | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Obtain sequences from the dataset You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples. Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`. | sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences)) | 32777
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Take a look at few examples from `sequences`. | for seq in sequences[:5]:
print(f"{seq} => {[inverse_vocab[i] for i in seq]}") | [ 89 270 0 0 0 0 0 0 0 0] => ['first', 'citizen', '', '', '', '', '', '', '', '']
[138 36 982 144 673 125 16 106 0 0] => ['before', 'we', 'proceed', 'any', 'further', 'hear', 'me', 'speak', '', '']
[34 0 0 0 0 0 0 0 0 0] => ['all', '', '', '', '', '', '', '', '', '']
[106 106 0 0 0 0 0 0 0 0] => ['speak', 'speak', '', '', '', '', '', '', '', '']
[ 89 270 0 0 0 0 0 0 0 0] => ['first', 'citizen', '', '', '', '', '', '', '', '']
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Generate training examples from sequences `sequences` is now a list of int encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of `targets`, `contexts`, and `labels` should be the same, representing the total number of training examples. | targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED)
print(len(targets), len(contexts), len(labels)) |
0%| | 0/32777 [00:00<?, ?it/s] | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Configure the dataset for performance To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model! | BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset) | <BatchDataset shapes: (((1024,), (1024, 5, 1)), (1024, 5)), types: ((tf.int32, tf.int64), tf.int64)>
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Add `cache()` and `prefetch()` to improve performance. | dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset) | <PrefetchDataset shapes: (((1024,), (1024, 5, 1)), (1024, 5)), types: ((tf.int32, tf.int64), tf.int64)>
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Lab Task 3: Model and Training The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset. Subclassed Word2Vec Model Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are `(vocab_size * embedding_dim)`.* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs which can then be passed into their corresponding embedding layer. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result. Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding. | class Word2Vec(Model):
def __init__(self, vocab_size, embedding_dim):
super(Word2Vec, self).__init__()
self.target_embedding = Embedding(vocab_size,
embedding_dim,
input_length=1,
name="w2v_embedding", )
self.context_embedding = Embedding(vocab_size,
embedding_dim,
input_length=num_ns+1)
self.dots = Dot(axes=(3,2))
self.flatten = Flatten()
def call(self, pair):
target, context = pair
we = self.target_embedding(target)
ce = self.context_embedding(context)
dots = self.dots([ce, we])
return self.flatten(dots) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
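To see what the `Dot` layer is doing shape-wise, here is a plain-NumPy sketch of the same scoring step (the arrays are random stand-ins for the two embedding lookups, not the trained Keras layers):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, num_ns, embedding_dim = 4, 4, 8

we = rng.normal(size=(batch, embedding_dim))              # target embeddings, one per example
ce = rng.normal(size=(batch, num_ns + 1, embedding_dim))  # 1 true context + num_ns negatives

# Dot each of the num_ns+1 context vectors with its example's target vector
logits = np.einsum('bce,be->bc', ce, we)
print(logits.shape)  # (4, 5): one logit per candidate context word
```

Each row of `logits` is what the `Flatten` layer hands to the loss: one score per candidate context word.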
Define loss function and compile model For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows: ```python def custom_loss(x_logit, y_true): return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)``` It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer. | # TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy']) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
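If you do write the custom loss mentioned above, the underlying math is plain sigmoid cross-entropy. Here is a NumPy sketch of the numerically stable form it uses (an illustration of the formula, not TensorFlow's implementation):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Stable form of -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)):
    #   max(x, 0) - x*z + log(1 + exp(-|x|))
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Sanity check against the naive formula on moderate logits
x = np.array([-2.0, 0.0, 3.0])
z = np.array([0.0, 1.0, 1.0])
naive = -(z * np.log(1 / (1 + np.exp(-x))) + (1 - z) * np.log(1 - 1 / (1 + np.exp(-x))))
assert np.allclose(sigmoid_cross_entropy_with_logits(x, z), naive)
```

The rearranged form avoids overflow in `exp()` for large-magnitude logits, which is why the stable identity is preferred over the naive formula.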
Also define a callback to log training statistics for TensorBoard. | tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
Train the model with `dataset` prepared above for some number of epochs. | word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback]) | Epoch 1/20
| Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
TensorBoard now shows the Word2Vec model's accuracy and loss. | !tensorboard --bind_all --port=8081 --load_fast=false --logdir logs
Run the following command in **Cloud Shell**: `gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081`. Make sure to replace `<instance-zone>`, `<notebook-instance-name>` and `<project-id>`. In Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.  **To quit the TensorBoard, click Kernel > Interrupt kernel**. Lab Task 4: Embedding lookup and analysis Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line. | # TODO 4a
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary() | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Create and save the vectors and metadata file. | out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close() | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
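A common next step with per-word vectors like the rows written to `vectors.tsv` is a nearest-neighbour lookup via cosine similarity. A self-contained sketch with toy vectors (the words and values here are made up for illustration, not the trained weights):

```python
import numpy as np

# Toy 3-word vocabulary with 4-dimensional "embeddings" (illustrative values)
vocab = ["king", "queen", "apple"]
vecs = np.array([[1.0, 0.9, 0.1, 0.0],
                 [0.9, 1.0, 0.0, 0.1],
                 [0.0, 0.1, 1.0, 0.9]])

def most_similar(word):
    i = vocab.index(word)
    v = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize rows
    sims = v @ v[i]                                         # cosine similarity to `word`
    sims[i] = -np.inf                                       # exclude the word itself
    return vocab[int(np.argmax(sims))]

assert most_similar("king") == "queen"
```

The Embedding Projector linked below does essentially this (plus a 2D/3D projection) on the files you just saved.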
Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/). | try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb | juancaob/training-data-analyst |
Numbers and Integer Math. Watch the full [C# 101 video](https://www.youtube.com/watch?v=jEE0pWTq54U&list=PLdo4fOcmZ0oVxKLQCHpiUWun7vlJJvUiN&index=5) for this module. Integer Math. You have a few `integers` defined below. An `integer` is a positive or negative whole number. > Before you run the code, what should c be? Addition | int a = 18;
int b = 6;
int c = a + b;
Console.WriteLine(c); | 24
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Subtraction | int c = a - b;
Console.WriteLine(c); | 12
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Multiplication | int c = a * b;
Console.WriteLine(c); | 108
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Division | int c = a / b;
Console.WriteLine(c); | 3
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Order of operations. C# follows the order of operations when it comes to math. That is, it does multiplication and division first, then addition and subtraction. > What would the result be if C# didn't follow the order of operations, and instead just did math left to right? | int a = 5;
int b = 4;
int c = 2;
int d = a + b * c;
Console.WriteLine(d); | 13
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Using parentheses. You can also force different orders by putting parentheses around whatever you want done first. > Try it out | int d = (a + b) * c;
Console.WriteLine(d); | 18
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
You can make math as long and complicated as you want.> Can you make this line even more complicated? | int d = (a + b) - 6 * c + (12 * 4) / 3 + 12;
Console.WriteLine(d); | 25
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Integers: Whole numbers no matter what. Integer math will always produce integers. That means that even when the math should result in a decimal or fraction, the answer will be truncated to a whole number. > Check it out. What should the answer truly be? | int a = 7;
int b = 4;
int c = 3;
int d = (a + b) / c;
Console.WriteLine(d); | 3
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
Playground. Play around with what you've learned! Here are some starting ideas: > Do you have any homework or projects that need math? Try using code in place of a calculator! >> How do integers round? Do they always round up? Down? To the nearest integer? >> How does the order of operations work? Play around with parentheses. | Console.WriteLine("Playground");
| MIT | csharp-101/04-Numbers and Integer Math.ipynb | ScriptBox99/dotnet-csharp-notebooks |
**Nigerian Music scraped from Spotify - an analysis**
Clustering is a type of [Unsupervised Learning](https://wikipedia.org/wiki/Unsupervised_learning) that presumes that a dataset is unlabelled or that its inputs are not matched with predefined outputs. It uses various algorithms to sort through unlabeled data and provide groupings according to patterns it discerns in the data.
[**Pre-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/27/)
**Introduction**
[Clustering](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) is very useful for data exploration. Let's see if it can help discover trends and patterns in the way Nigerian audiences consume music.
> ✅ Take a minute to think about the uses of clustering. In real life, clustering happens whenever you have a pile of laundry and need to sort out your family members' clothes 🧦👕👖🩲. In data science, clustering happens when trying to analyze a user's preferences, or determine the characteristics of any unlabeled dataset. Clustering, in a way, helps make sense of chaos, like a sock drawer.
In a professional setting, clustering can be used to determine things like market segmentation, determining what age groups buy what items, for example. Another use would be anomaly detection, perhaps to detect fraud from a dataset of credit card transactions. Or you might use clustering to determine tumors in a batch of medical scans.
✅ Think a minute about how you might have encountered clustering 'in the wild', in a banking, e-commerce, or business setting.
> 🎓 Interestingly, cluster analysis originated in the fields of Anthropology and Psychology in the 1930s. Can you imagine how it might have been used?
Alternatively, you could use it for grouping search results - by shopping links, images, or reviews, for example. Clustering is useful when you have a large dataset that you want to reduce and on which you want to perform more granular analysis, so the technique can be used to learn about data before other models are constructed.
✅ Once your data is organized in clusters, you assign it a cluster Id, and this technique can be useful when preserving a dataset's privacy; you can instead refer to a data point by its cluster id, rather than by more revealing identifiable data. Can you think of other reasons why you'd refer to a cluster Id rather than other elements of the cluster to identify it?
Getting started with clustering
> 🎓 How we create clusters has a lot to do with how we gather up the data points into groups. Let's unpack some vocabulary:
>
> 🎓 ['Transductive' vs. 'inductive'](https://wikipedia.org/wiki/Transduction_(machine_learning))
>
> Transductive inference is derived from observed training cases that map to specific test cases. Inductive inference is derived from training cases that map to general rules which are only then applied to test cases.
>
> An example: Imagine you have a dataset that is only partially labelled. Some things are 'records', some 'cds', and some are blank. Your job is to provide labels for the blanks. If you choose an inductive approach, you'd train a model looking for 'records' and 'cds', and apply those labels to your unlabeled data. This approach will have trouble classifying things that are actually 'cassettes'. A transductive approach, on the other hand, handles this unknown data more effectively as it works to group similar items together and then applies a label to a group. In this case, clusters might reflect 'round musical things' and 'square musical things'.
>
> 🎓 ['Non-flat' vs. 'flat' geometry](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
>
> Derived from mathematical terminology, non-flat vs. flat geometry refers to the measure of distances between points by either 'flat' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) or 'non-flat' (non-Euclidean) geometrical methods.
>
> 'Flat' in this context refers to Euclidean geometry (parts of which are taught as 'plane' geometry), and non-flat refers to non-Euclidean geometry. What does geometry have to do with machine learning? Well, as two fields that are rooted in mathematics, there must be a common way to measure distances between points in clusters, and that can be done in a 'flat' or 'non-flat' way, depending on the nature of the data. [Euclidean distances](https://wikipedia.org/wiki/Euclidean_distance) are measured as the length of a line segment between two points. [Non-Euclidean distances](https://wikipedia.org/wiki/Non-Euclidean_geometry) are measured along a curve. If your data, visualized, seems to not exist on a plane, you might need to use a specialized algorithm to handle it.
<img src="../../images/flat-nonflat.png"
width="600"/>
Infographic by Dasani Madipalli
> 🎓 ['Distances'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
>
> Clusters are defined by their distance matrix, e.g. the distances between points. This distance can be measured a few ways. Euclidean clusters are defined by the average of the point values, and contain a 'centroid' or center point. Distances are thus measured by the distance to that centroid. Non-Euclidean distances refer to 'clustroids', the point closest to other points. Clustroids in turn can be defined in various ways.
>
> 🎓 ['Constrained'](https://wikipedia.org/wiki/Constrained_clustering)
>
> [Constrained Clustering](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) introduces 'semi-supervised' learning into this unsupervised method. The relationships between points are flagged as 'cannot link' or 'must-link' so some rules are forced on the dataset.
>
> An example: If an algorithm is set free on a batch of unlabelled or semi-labelled data, the clusters it produces may be of poor quality. In the example above, the clusters might group 'round music things' and 'square music things' and 'triangular things' and 'cookies'. If given some constraints, or rules to follow ("the item must be made of plastic", "the item needs to be able to produce music") this can help 'constrain' the algorithm to make better choices.
>
> 🎓 'Density'
>
> Data that is 'noisy' is considered to be 'dense'. The distances between points in each of its clusters may prove, on examination, to be more or less dense, or 'crowded' and thus this data needs to be analyzed with the appropriate clustering method. [This article](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) demonstrates the difference between using K-Means clustering vs. HDBSCAN algorithms to explore a noisy dataset with uneven cluster density.
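The distance notions above can be made concrete with a short language-agnostic Python sketch (separate from the R workflow in this lesson): a 'flat' Euclidean distance and a centroid computed as the mean point of a cluster.

```python
import math

def euclidean(p, q):
    # Length of the straight line segment between two points ('flat' geometry)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

cluster = [(1.0, 1.0), (2.0, 1.0), (1.5, 2.0)]
centroid = tuple(sum(coord) / len(cluster) for coord in zip(*cluster))  # mean of the points

# A point inside the cluster is closer to the centroid than a faraway one
assert euclidean((1.5, 1.5), centroid) < euclidean((1.5, 4.0), centroid)
```

Centroid-based algorithms like K-means measure every point against centers computed exactly like this.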
Deepen your understanding of clustering techniques in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-15963-cxa)
**Clustering algorithms**
There are over 100 clustering algorithms, and their use depends on the nature of the data at hand. Let's discuss some of the major ones:
- **Hierarchical clustering**. If an object is classified by its proximity to a nearby object, rather than to one farther away, clusters are formed based on their members' distance to and from other objects. Hierarchical clustering is characterized by repeatedly merging the two nearest clusters.
<img src="../../images/hierarchical.png"
width="600"/>
Infographic by Dasani Madipalli
- **Centroid clustering**. This popular algorithm requires the choice of 'k', or the number of clusters to form, after which the algorithm determines the center point of a cluster and gathers data around that point. [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) is a popular version of centroid clustering which separates a data set into pre-defined K groups. The center is determined by the nearest mean, thus the name. The squared distance of each point from its cluster center is minimized.
<img src="../../images/centroid.png"
width="600"/>
Infographic by Dasani Madipalli
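The K-means loop itself is short enough to sketch in a few lines of Python (a generic illustration with a naive deterministic initialization, not the implementation used later in this lesson):

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Naive init: spread the k starting centroids across the data
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid (squared Euclidean)
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

blob1 = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
X = np.vstack([blob1, blob1 + 10])   # two well-separated blobs
labels, centroids = kmeans(X, k=2)
assert labels[0] != labels[4]        # the two blobs land in different clusters
```

Production implementations add smarter initialization (e.g. k-means++) and handle empty clusters, but the assign/update loop is the whole idea.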
- **Distribution-based clustering**. Based in statistical modeling, distribution-based clustering centers on determining the probability that a data point belongs to a cluster, and assigning it accordingly. Gaussian mixture methods belong to this type.
- **Density-based clustering**. Data points are assigned to clusters based on their density, or their grouping around each other. Data points far from the group are considered outliers or noise. DBSCAN, Mean-shift and OPTICS belong to this type of clustering.
- **Grid-based clustering**. For multi-dimensional datasets, a grid is created and the data is divided amongst the grid's cells, thereby creating clusters.
The best way to learn about clustering is to try it for yourself, so that's what you'll do in this exercise.
We'll require some packages to complete this module. You can install them with: `install.packages(c('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork'))`
Alternatively, the script below checks whether you have the packages required to complete this module and installs them for you in case some are missing.
| suppressWarnings(if(!require("pacman")) install.packages("pacman"))
pacman::p_load('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork')
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Exercise - cluster your data. Clustering as a technique is greatly aided by proper visualization, so let's get started by visualizing our music data. This exercise will help us decide which clustering method we can most effectively use for the nature of this data. Let's hit the ground running by importing the data. | # Load the core tidyverse and make it available in your current R session
library(tidyverse)
# Import the data into a tibble
df <- read_csv(file = "https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/5-Clustering/data/nigerian-songs.csv")
# View the first 5 rows of the data set
df %>%
slice_head(n = 5)
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Sometimes we may want a little more information on our data. We can have a look at the data and its structure by using the [*glimpse()*](https://pillar.r-lib.org/reference/glimpse.html) function: | # Glimpse into the data set
df %>%
glimpse()
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Good job! 💪 We can observe that `glimpse()` gives you the total number of rows (observations) and columns (variables), then the first few entries of each variable in a row after the variable name. In addition, the *data type* of the variable is given immediately after each variable's name inside `< >`. `DataExplorer::introduce()` can summarize this information neatly: | # Describe basic information for our data
df %>%
introduce()
# A visual display of the same
df %>%
plot_intro()
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Awesome! We have just learnt that our data has no missing values. While we are at it, we can explore common central tendency statistics (e.g. [mean](https://en.wikipedia.org/wiki/Arithmetic_mean) and [median](https://en.wikipedia.org/wiki/Median)) and measures of dispersion (e.g. [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation)) using `summarytools::descr()` | # Describe common statistics
df %>%
descr(stats = "common")
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Let's look at the general values of the data. Note that popularity can be `0`, which shows songs that have no ranking. We'll remove those shortly. > 🤔 If we are working with clustering, an unsupervised method that does not require labeled data, why are we showing this data with labels? In the data exploration phase, they come in handy, but they are not necessary for the clustering algorithms to work. 1. Explore popular genres. Let's go ahead and find out the most popular genres 🎶 by counting the instances in which each appears. | # Popular genres
top_genres <- df %>%
count(artist_top_genre, sort = TRUE) %>%
# Encode to categorical and reorder the according to count
mutate(artist_top_genre = factor(artist_top_genre) %>% fct_inorder())
# Print the top genres
top_genres
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
That went well! They say a picture is worth a thousand rows of a data frame (actually nobody ever says that 😅). But you get the gist of it, right? One way to visualize categorical data (character or factor variables) is using barplots. Let's make a barplot of the top 10 genres: | # Change the default gray theme
theme_set(theme_light())
# Visualize popular genres
top_genres %>%
slice(1:10) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5),
# Rotates the X markers (so we can read them)
axis.text.x = element_text(angle = 90))
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Now it's way easier to identify that we have `missing` genres 🧐! > A good visualisation will show you things that you did not expect, or raise new questions about the data - Hadley Wickham and Garrett Grolemund, [R For Data Science](https://r4ds.had.co.nz/introduction.html). Note, when the top genre is described as `Missing`, that means that Spotify did not classify it, so let's get rid of it. | # Visualize popular genres
top_genres %>%
filter(artist_top_genre != "Missing") %>%
slice(1:10) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5),
# Rotates the X markers (so we can read them)
axis.text.x = element_text(angle = 90))
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
From the little data exploration, we learn that the top three genres dominate this dataset. Let's concentrate on `afro dancehall`, `afropop`, and `nigerian pop`, and additionally filter the dataset to remove anything with a 0 popularity value (meaning it was not classified with a popularity in the dataset and can be considered noise for our purposes): | nigerian_songs <- df %>%
# Concentrate on top 3 genres
filter(artist_top_genre %in% c("afro dancehall", "afropop","nigerian pop")) %>%
# Remove unclassified observations
filter(popularity != 0)
# Visualize popular genres
nigerian_songs %>%
count(artist_top_genre) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("ggsci::category10_d3") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5))
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Let's see whether there is any apparent linear relationship among the numerical variables in our data set. This relationship is quantified mathematically by the [correlation statistic](https://en.wikipedia.org/wiki/Correlation). The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other). | # Narrow down to numeric variables and find correlation
corr_mat <- nigerian_songs %>%
select(where(is.numeric)) %>%
cor()
# Visualize correlation matrix
corrplot(corr_mat, order = 'AOE', col = c('white', 'black'), bg = 'gold2')
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
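Under the hood, each cell of that matrix is just Pearson's r: the covariance of two columns divided by the product of their standard deviations. A quick language-agnostic Python sketch with made-up energy/loudness values (illustrative, not the Spotify data):

```python
import numpy as np

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()           # center both variables
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

energy   = [0.2, 0.4, 0.6, 0.8, 1.0]
loudness = [-20.0, -15.0, -11.0, -6.0, -2.0]      # louder tracks tend to be more energetic
r = pearson_r(energy, loudness)
assert 0.99 < r <= 1.0   # nearly perfect positive correlation
```

A value this close to 1 is what the dark cell between `energy` and `loudness` in the corrplot represents.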
The data is not strongly correlated except between `energy` and `loudness`, which makes sense, given that loud music is usually pretty energetic. `Popularity` has a correspondence to `release date`, which also makes sense, as more recent songs are probably more popular. Length and energy seem to have a correlation too. It will be interesting to see what a clustering algorithm can make of this data! > 🎓 Note that correlation does not imply causation! We have proof of correlation but no proof of causation. An [amusing web site](https://tylervigen.com/spurious-correlations) has some visuals that emphasize this point. 2. Explore data distribution. Let's ask some more subtle questions. Are the genres significantly different in the perception of their danceability, based on their popularity? Let's examine our top three genres data distribution for popularity and danceability along a given x and y axis using [density plots](https://www.khanacademy.org/math/ap-statistics/density-curves-normal-distribution-ap/density-curves/v/density-curves). | # Perform 2D kernel density estimation
density_estimate_2d <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre)) +
geom_density_2d(bins = 5, size = 1) +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") +
xlim(-20, 80) +
ylim(0, 1.2)
# Density plot based on the popularity
density_estimate_pop <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, fill = artist_top_genre, color = artist_top_genre)) +
geom_density(size = 1, alpha = 0.5) +
paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") +
theme(legend.position = "none")
# Density plot based on the danceability
density_estimate_dance <- nigerian_songs %>%
ggplot(mapping = aes(x = danceability, fill = artist_top_genre, color = artist_top_genre)) +
geom_density(size = 1, alpha = 0.5) +
paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry")
# Patch everything together
library(patchwork)
density_estimate_2d / (density_estimate_pop + density_estimate_dance)
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
We see that there are concentric circles that line up, regardless of genre. Could it be that Nigerian tastes converge at a certain level of danceability for this genre? In general, the three genres align in terms of their popularity and danceability. Determining clusters in this loosely-aligned data will be a challenge. Let's see whether a scatter plot can support this. | # A scatter plot of popularity and danceability
scatter_plot <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre, shape = artist_top_genre)) +
geom_point(size = 2, alpha = 0.8) +
paletteer::scale_color_paletteer_d("futurevisions::mars")
# Add a touch of interactivity
ggplotly(scatter_plot)
| _____no_output_____ | MIT | 5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb | LyhourChhen/ML-For-Beginners |
Ejercicios Random Networks vs Real Networks Ejercicios Diferencia en Distribución de GradosCompare la distribución de grados de una red real contra una red aleatoria.- Baje un red real de SNAP- Cree una red aleatoria con el mismo número de links y nodos- Compare la distribución de grados | import networkx as nx
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
edges = []
for line in open('CA-HepTh.txt'):
if line[0] != '#':
edge = line.replace('\n','').split('\t')
edges.append((edge[0],edge[1]))
G=nx.Graph()
G.add_edges_from(edges)
d = dict(G.degree())  # node -> degree mapping (works in networkx 1.x and 2.x)
N = len(G.nodes())
p = (2*len(edges))/(N*(N-1))  # edge probability that matches the real network's link count
G_rand = nx.gnp_random_graph(N, p)
sns.distplot(list(dict(G.degree()).values()))
sns.distplot(list(dict(G_rand.degree()).values())) | C:\Users\Camil\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
| MIT | camilo_torres_botero/.ipynb_checkpoints/Ejercicios 1.3 Random Networks Vs. Real Networks-checkpoint.ipynb | spulido99/NetworksAnalysis |
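One fact drives the comparison this exercise asks for: in a G(N, p) random graph the degrees concentrate around the mean ⟨k⟩ = p(N−1), whereas real collaboration networks like CA-HepTh are heavy-tailed. A dependency-free sketch of the random side (a toy generator for illustration, not `nx.gnp_random_graph`):

```python
import random

def gnp(n, p, seed=42):
    """Adjacency sets of an Erdos-Renyi G(n, p) graph: each pair is linked with probability p."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

n, p = 500, 0.02
degrees = [len(neighbours) for neighbours in gnp(n, p)]
mean_k = sum(degrees) / n
# The sample mean degree should sit close to the expected p*(n-1) = 9.98
assert abs(mean_k - p * (n - 1)) < 2.0
```

In the overlaid `distplot` above, this concentration shows up as a narrow bell around ⟨k⟩ for the random graph next to the real network's long tail.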