# A Scientific Deep Dive Into SageMaker LDA

1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Data Exploration](DataExploration)
1. [Training](Training)
1. [Inference](Inference)
1. [Epilogue](Epilogue)

## Introduction

***

Amazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observatio...
%matplotlib inline

import os, re, tarfile

import boto3
import matplotlib.pyplot as plt
import mxnet as mx
import numpy as np

np.set_printoptions(precision=3, suppress=True)

# some helpful utility functions are defined in the Python module
# "generate_example_data" located in the same directory as this
# notebook
fr...
Apache-2.0
scientific_details_of_algorithms/lda_topic_modeling/LDA-Science.ipynb
Amirosimani/amazon-sagemaker-examples
## Setup

***

*This notebook was created and tested on an ml.m4.xlarge notebook instance.*

We first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data:

* `bucket` - An S3 bucket accessi...
import sagemaker
from sagemaker import get_execution_role

role = get_execution_role()
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-lda-science"

print("Training input/output will be stored in {}/{}".format(bucket, prefix))
print("\nIAM Role: {}".format(role))
## The LDA Model

As mentioned above, LDA is a model for discovering latent topics describing a collection of documents. In this section we will give a brief introduction to the model. Let,

* $M$ = the number of *documents* in a corpus
* $N$ = the average *length* of a document.
* $V$ = the size of the *vocabulary* (the total...
print("Generating example data...")
num_documents = 6000
known_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(
    num_documents=num_documents, num_topics=10
)
num_topics, vocabulary_size = known_beta.shape

# separate the generated data into training and test subsets
num_documents_training = ...
Let's start by taking a closer look at the documents. Note that the vocabulary size of these data is $V = 25$. The average length of each document in this data set is 150. (See `generate_griffiths_data.py`.)
print("First training document =\n{}".format(documents_training[0]))
print("\nVocabulary size = {}".format(vocabulary_size))
print("Length of first document = {}".format(documents_training[0].sum()))

average_document_length = documents.sum(axis=1).mean()
print("Observed average document length = {}".format(average_docu...
The example data set above also returns the LDA parameters,

$$(\alpha, \beta)$$

used to generate the documents. Let's examine the first topic and verify that it is a probability distribution on the vocabulary.
print("First topic =\n{}".format(known_beta[0]))
print(
    "\nTopic-word probability matrix (beta) shape: (num_topics, vocabulary_size) = {}".format(
        known_beta.shape
    )
)
print("\nSum of elements of first topic = {}".format(known_beta[0].sum()))
Unlike some clustering algorithms, the LDA model allows a given word to belong to multiple topics, and the probability of that word occurring may differ from topic to topic. This reflects real-world data where, for example, the word *"rover"* appears in a *"dogs"* topic as well as i...
print("Topic #1:\n{}".format(known_beta[0]))
print("Topic #6:\n{}".format(known_beta[5]))
Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents.

In the plots below, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs within the document. Below we plot the first few documents of the t...
%matplotlib inline

fig = plot_lda(documents_training, nrows=3, ncols=4, cmap="gray_r", with_colorbar=True)
fig.suptitle("$w$ - Document Word Counts")
fig.set_dpi(160)
When taking a close look at these documents we can see some patterns in the word distributions suggesting that, perhaps, each topic represents a "column" or "row" of words with non-zero probability and that each document is composed primarily of a handful of topics.

Below we plot the *known* topic-word probability dist...
%matplotlib inline

fig = plot_lda(known_beta, nrows=1, ncols=10)
fig.suptitle(r"Known $\beta$ - Topic-Word Probability Distributions")
fig.set_dpi(160)
fig.set_figheight(2)
These 10 topics were used to generate the document corpus. Next, we will learn about how this is done.

## Generating Documents

LDA is a generative model, meaning that the LDA parameters $(\alpha, \beta)$ are used to construct documents word-by-word by drawing from the topic-word distributions. In fact, looking closely at ...
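The word-by-word generative process just described can be sketched in a few lines of NumPy. This is a toy illustration only: `generate_document` and its `alpha`/`beta` arguments are hypothetical stand-ins, not the notebook's `generate_griffiths_data` helper.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_document(alpha, beta, doc_length):
    """Draw one document from the LDA generative model.

    alpha: Dirichlet prior over topics, shape (K,)
    beta:  topic-word probabilities, shape (K, V)
    """
    theta = rng.dirichlet(alpha)  # per-document topic mixture
    word_counts = np.zeros(beta.shape[1], dtype=int)
    for _ in range(doc_length):
        z = rng.choice(len(alpha), p=theta)       # draw a topic
        w = rng.choice(beta.shape[1], p=beta[z])  # draw a word from that topic
        word_counts[w] += 1
    return word_counts

K, V = 5, 25
alpha = np.full(K, 0.1)                   # sparse topic mixtures
beta = rng.dirichlet(np.ones(V), size=K)  # each row is a distribution over words
doc = generate_document(alpha, beta, doc_length=150)
print(doc.sum())  # 150
```

The document is returned as a word-count vector of length $V$, matching the bag-of-words representation used throughout this notebook.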
print("First training document =\n{}".format(documents_training[0]))
print("\nVocabulary size = {}".format(vocabulary_size))
print("Length of first document = {}".format(documents_training[0].sum()))
print("First training document topic mixture =\n{}".format(topic_mixtures_training[0]))
print("\nNumber of topics = {}"....
We plot the first document along with its topic mixture. We also plot the topic-word probability distributions again for reference.
%matplotlib inline

fig, (ax1, ax2) = plt.subplots(2, 1)

ax1.matshow(documents[0].reshape(5, 5), cmap="gray_r")
ax1.set_title(r"$w$ - Document", fontsize=20)
ax1.set_xticks([])
ax1.set_yticks([])

cax2 = ax2.matshow(topic_mixtures[0].reshape(1, -1), cmap="Reds", vmin=0, vmax=1)
cbar = fig.colorbar(cax2, orientation="h...
Finally, let's plot several documents with their corresponding topic mixtures. We can see how topics with large weight in the document lead to more words in the document within the corresponding "row" or "column".
%matplotlib inline

fig = plot_lda_topics(documents_training, 3, 4, topic_mixtures=topic_mixtures)
fig.suptitle(r"$(w,\theta)$ - Documents with Known Topic Mixtures")
fig.set_dpi(160)
## Training

***

In this section we will give some insight into how AWS SageMaker LDA fits an LDA model to a corpus, create and run a SageMaker LDA training job, and examine the trained model output.

## Topic Estimation using Tensor Decompositions

Given a document corpus, Amazon SageMaker LDA uses a spectral tensor decompositio...
# convert documents_training to Protobuf RecordIO format
recordio_protobuf_serializer = RecordSerializer()
fbuffer = recordio_protobuf_serializer.serialize(documents_training)

# upload to S3 in bucket/prefix/train
fname = "lda.data"
s3_object = os.path.join(prefix, "train", fname)
boto3.Session().resource("s3").Bucket...
Next, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication.
from sagemaker.image_uris import retrieve

region_name = boto3.Session().region_name
container = retrieve("lda", region_name)

print("Using SageMaker LDA container: {} ({})".format(container, region_name))
## Training Parameters

Particular to a SageMaker LDA training job are the following hyperparameters:

* **`num_topics`** - The number of topics or categories in the LDA model.
  * Usually, this is not known a priori.
  * In this example, however, we know that the data is generated by ten topics.
* **`feature_dim`** - The si...
session = sagemaker.Session()

# specify general training job information
lda = sagemaker.estimator.Estimator(
    container,
    role,
    output_path="s3://{}/{}/output".format(bucket, prefix),
    instance_count=1,
    instance_type="ml.c4.2xlarge",
    sagemaker_session=session,
)

# set algorithm-specific hyperpar...
If you see the message

> `===== Job Complete =====`

at the bottom of the output logs then training successfully completed and the output LDA model was stored in the specified output path. You can also view the status of, and information about, a training job using the AWS SageMaker console. Just click on the "Job...
print("Training job name: {}".format(lda.latest_training_job.job_name))
## Inspecting the Trained Model

We know the LDA parameters $(\alpha, \beta)$ used to generate the example data. How does the learned model compare to the known one? In this section we will download the model data and measure how well SageMaker LDA did in learning the model.

First, we download the model data. SageMaker will ou...
# download and extract the model file from S3
job_name = lda.latest_training_job.job_name
model_fname = "model.tar.gz"
model_object = os.path.join(prefix, "output", job_name, "output", model_fname)
boto3.Session().resource("s3").Bucket(bucket).Object(model_object).download_file(fname)

with tarfile.open(fname) as tar: ...
Presumably, SageMaker LDA has found the topics most likely used to generate the training corpus. However, even if this is the case the topics would not be returned in any particular order. Therefore, we match the found topics to the known topics closest in L1-norm in order to find the topic permutation.

Note that we will us...
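The matching step can be written compactly with SciPy's assignment solver. This is a sketch of the idea, not the notebook's `match_estimated_topics` helper; `match_topics` and its toy inputs below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_topics(known_beta, learned_beta):
    """Return the permutation of learned topics minimizing total L1 distance."""
    # cost[i, j] = L1 distance between known topic i and learned topic j
    cost = np.abs(known_beta[:, None, :] - learned_beta[None, :, :]).sum(axis=2)
    _, permutation = linear_sum_assignment(cost)
    return permutation

known = np.eye(3)                # 3 "topics" over a 3-word vocabulary
learned = np.eye(3)[[2, 0, 1]]   # the same topics, returned shuffled
perm = match_topics(known, learned)
print(perm)  # [1 2 0]
```

Applying the permutation, `learned[perm]`, recovers the known topic ordering; the Hungarian algorithm guarantees the globally optimal one-to-one matching.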
permutation, learned_beta = match_estimated_topics(known_beta, learned_beta_permuted)
learned_alpha = learned_alpha_permuted[permutation]

fig = plot_lda(np.vstack([known_beta, learned_beta]), 2, 10)
fig.set_dpi(160)
fig.suptitle("Known vs. Found Topic-Word Probability Distributions")
fig.set_figheight(3)

beta_error =...
Not bad! In the eyeball-norm the topics match quite well. In fact, the topic-word distribution error is approximately 2%.

## Inference

***

A trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a g...
lda_inference = lda.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",  # LDA inference may work better at scale on ml.c4 instances
    serializer=CSVSerializer(),
    deserializer=JSONDeserializer(),
)
Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
print("Endpoint name: {}".format(lda_inference.endpoint_name))
We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the datatype conversion.
results = lda_inference.predict(documents_test[:12]) print(results)
It may be hard to see, but the output format of the SageMaker LDA inference endpoint is a Python dictionary with the following format.

```
{
  'predictions': [
    {'topic_mixture': [ ... ] },
    {'topic_mixture': [ ... ] },
    {'topic_mixture': [ ... ] },
    ...
  ]
}
```

We extract the topic mixtures, themselves, corresponding to...
inferred_topic_mixtures_permuted = np.array(
    [prediction["topic_mixture"] for prediction in results["predictions"]]
)

print("Inferred topic mixtures (permuted):\n\n{}".format(inferred_topic_mixtures_permuted))
## Inference Analysis

Recall that although SageMaker LDA successfully learned the underlying topics which generated the sample data, the topics were returned in a different order. Before we compare to the known topic mixtures $\theta \in \mathbb{R}^K$ we should also permute the inferred topic mixtures.
inferred_topic_mixtures = inferred_topic_mixtures_permuted[:, permutation]

print("Inferred topic mixtures:\n\n{}".format(inferred_topic_mixtures))
Let's plot these topic mixture probability distributions alongside the known ones.
%matplotlib inline

# create array of bar plots
width = 0.4
x = np.arange(10)

nrows, ncols = 3, 4
fig, ax = plt.subplots(nrows, ncols, sharey=True)
for i in range(nrows):
    for j in range(ncols):
        index = i * ncols + j
        ax[i, j].bar(x, topic_mixtures_test[index], width, color="C0")
        ax[i, j].bar...
In the eyeball-norm these look quite comparable.

Let's be more scientific about this. Below we compute and plot the distribution of L1-errors from **all** of the test documents. Note that we send a new payload of test documents to the inference endpoint and apply the appropriate permutation to the output.
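The per-document L1-error itself is a one-liner. A minimal sketch (the `l1_error` name and the toy mixtures below are illustrative, not from the notebook):

```python
import numpy as np

def l1_error(known_mixtures, inferred_mixtures):
    """Per-document L1 distance between known and inferred topic mixtures."""
    return np.abs(known_mixtures - inferred_mixtures).sum(axis=1)

known = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
inferred = np.array([[0.6, 0.3, 0.1],
                     [0.1, 0.7, 0.2]])
errors = l1_error(known, inferred)
print(errors)  # approximately [0.2, 0.2]
```

Since each row of `known` and `inferred` sums to 1, the L1 distance ranges from 0 (identical mixtures) to 2 (disjoint support), which makes the error thresholds used below easy to interpret.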
%%time

# create a payload containing all of the test documents and run inference again
#
# TRY THIS:
#   try switching between the test data set and a subset of the training
#   data set. It is likely that LDA inference will perform better against
#   the training set than the holdout test set.
#
payload_documents = d...
Machine learning algorithms are not perfect, and the data above suggests this is true of SageMaker LDA. With more documents and some hyperparameter tuning we can obtain more accurate results against the known topic-mixtures.

For now, let's just investigate the document-topic mixture pairs that seem to do well as well as...
N = 6

good_idx = l1_errors < 0.05
good_documents = payload_documents[good_idx][:N]
good_topic_mixtures = inferred_topic_mixtures[good_idx][:N]

poor_idx = l1_errors > 0.3
poor_documents = payload_documents[poor_idx][:N]
poor_topic_mixtures = inferred_topic_mixtures[poor_idx][:N]

%matplotlib inline

fig = plot_lda_topi...
In this example set, the documents on which inference was not as accurate tend to have a denser topic-mixture. This makes sense when extrapolated to real-world datasets: it can be difficult to nail down which topics are represented in a document when the document uses words from a large subset of the vocabulary.

## Stop /...
sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)
# Automate loan approvals with Business rules in Apache Spark and Scala

## Automating your business decisions at scale in Apache Spark with IBM ODM 8.9.2

This Scala notebook shows you how to execute business rules locally in DSX and Apache Spark. You'll learn how to call a rule-based decision service in Apache Spark. This d...
// @hidden_cell
import scala.sys.process._

"wget https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-requests-10K.csv".!

val filename = "miniloan-requests-10K.csv"
Apache-2.0
notebooks/Automate loan approvals with business rules.ipynb
ODMDev/decisions-on-spark
The following code loads the dataset of 10,000 simple loan applications, written in CSV format.
val requestData = sc.textFile(filename)
val requestDataCount = requestData.count

println(s"$requestDataCount loan requests read in CSV format")
println("The first 20 requests:")
requestData.take(20).foreach(println)
10000 loan requests read in CSV format
The first 20 requests:
John Doe, 550, 80000, 250000, 240, 0.05d
John Woo, 540, 100000, 250000, 240, 0.05d
Peter Woo, 540, 60000, 250000, 120, 0.05d
Peter Woo, 540, 60000, 250000, 120, 0.07d
John Doe, 550, 80000, 250000, 240, 0.05d
John Woo, 540, 100000, 250000, 240, 0.05d
Peter W...
## 2. Add libraries for business rule execution and a loan application object model

The XXX refers to your object storage or other place where you make available these jars.

Add the following jars to execute the deployed decision service:

%AddJar https://XXX/j2ee_connector-1_5-fr.jar
%AddJar https://XXX/jrules-engine.jar
%AddJ...
// @hidden_cell
// The urls below are accessible for an IBM internal usage only
%AddJar https://XXX/j2ee_connector-1_5-fr.jar
%AddJar https://XXX/jrules-engine.jar
%AddJar https://XXX/jrules-res-execution.jar
%AddJar https://XXX/jackson-annotations-2.6.5.jar -f

// Loan Application eXecutable Object Model
%AddJar https...
## 3. Import packages

Import ODM and Apache Spark packages.
import java.util.Map
import java.util.HashMap
import com.fasterxml.jackson.core.JsonGenerationException
import com.fasterxml.jackson.core.JsonProcessingException
import com.fasterxml.jackson.databind.JsonMappingException
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.Serializa...
## 4. Implement a Map function that executes a rule-based decision service
case class MiniLoanRequest(borrower: miniloan.Borrower, loan: miniloan.Loan)

case class RESRunner(sessionFactory: com.ibm.res.InMemoryJ2SEFactory) {

  def executeAsString(s: String): String = {
    println("executeAsString")
    val request = makeRequest(s)
    val response = executeRequest(request)
    ...
## 5. Automate the decision making on the loan application dataset

You invoke a map on the decision function. While the map runs, rule engines process the loan applications in parallel to produce a dataset of answers.
println("Start of Execution")

val answers = requestData.map(decisionService.execute)
printf("Number of rule based decisions: %s \n", answers.count)

// Cleanup output file
//val fs = FileSystem.get(new URI(outputPath), sc.hadoopConfiguration);
//if (fs.exists(new Path(outputPath)))
//  fs.delete(new Path(outputPath),...
Start of Execution
Number of rule based decisions: 10000
End of Execution
Decision automation job done
## 6. View your automated decisions

Each decision is composed of output parameters and of a decision trace. The loan data contains the approval flag and the computed yearly repayment. The decision trace lists the business rules that have been executed in sequence to come to the conclusion. Each decision has been serialize...
//answers.toDF().show(false)
answers.take(1).foreach(println)
{
  "output" : {
    "ilog.rules.firedRulesCount" : 0,
    "loan" : {
      "amount" : 250000,
      "duration" : 240,
      "yearlyInterestRate" : 0.05,
      "yearlyRepayment" : 19798,
      "approved" : true,
      "messages" : [ ]
    }
  },
  "input" : {
    "loan" : {
      "amount" : 250000,
      "duration" : 2...
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/fsharp/Samples)

# Machine Learning over House Prices with ML.NET

## Reference the packages
#r "nuget:Microsoft.ML,1.4.0"
#r "nuget:Microsoft.ML.AutoML,0.16.0"
#r "nuget:Microsoft.Data.Analysis,0.2.0"
#r "nuget: XPlot.Plotly.Interactive, 4.0.6"

open Microsoft.Data.Analysis
open XPlot.Plotly
MIT
samples/notebooks/fsharp/Samples/HousingML.ipynb
BillWagner/interactive
## Adding better default formatting for data frames

Register a formatter for data frames and data frame rows.
module DateFrameFormatter =
    // Locally open the F# HTML DSL.
    open Html

    let maxRows = 20

    Formatter.Register<DataFrame>((fun (df: DataFrame) (writer: TextWriter) ->
        let take = 20
        table [] [
            thead [] [
                th [] [ str "Index" ]
                for c in df...
Download the data
open System.Net.Http

let housingPath = "housing.csv"

if not(File.Exists(housingPath)) then
    let contents = HttpClient().GetStringAsync("https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv").Result
    File.WriteAllText("housing.csv", contents)
Add the data to the data frame
let housingData = DataFrame.LoadCsv(housingPath)
housingData

housingData.Description()
Display the data
let graph = Histogram(x = housingData.["median_house_value"], nbinsx = 20)
graph |> Chart.Plot

let graph =
    Scattergl(
        x = housingData.["longitude"],
        y = housingData.["latitude"],
        mode = "markers",
        marker =
            Marker(
                color = housingDa...
Prepare the training and validation sets
module Array =
    let shuffle (arr: 'T[]) =
        let rnd = Random()
        let arr = Array.copy arr
        for i in 0 .. arr.Length - 1 do
            let r = i + rnd.Next(arr.Length - i)
            let temp = arr.[r]
            arr.[r] <- arr.[i]
            arr.[i] <- temp
        arr

let randomIndices = [|...
Create the regression model and train it
#!time

open Microsoft.ML
open Microsoft.ML.Data
open Microsoft.ML.AutoML

let mlContext = MLContext()
let experiment = mlContext.Auto().CreateRegressionExperiment(maxExperimentTimeInSeconds = 15u)
let result = experiment.Execute(housing_train, labelColumnName = "median_house_value")
Display the training results
let scatters =
    result.RunDetails
    |> Seq.filter (fun d -> not (isNull d.ValidationMetrics))
    |> Seq.groupBy (fun r -> r.TrainerName)
    |> Seq.map (fun (name, details) ->
        Scattergl(
            name = name,
            x = (details |> Seq.map (fun r -> r.RuntimeInSec...
Validate and display the results
let testResults = result.BestRun.Model.Transform(housing_test)

let trueValues = testResults.GetColumn<float32>("median_house_value")
let predictedValues = testResults.GetColumn<float32>("Score")

let predictedVsTrue =
    Scattergl(
        x = trueValues,
        y = predictedValues,
        mode = "markers"...
ML Course, Bogotá, Colombia (&copy; Josh Bloom; June 2019)
%run ../talktools.py
BSD-3-Clause
Lectures/1_ComputationalAndInferentialThinking/03_Featurization_and_NLP.ipynb
ijpulidos/ml_course
# Featurization and Dirty Data (and NLP)

Source: [V. Singh](https://www.slideshare.net/hortonworks/data-science-workshop)

Notes: Biased workflow :)

* Labels → Answer
* Feature extraction: taking images/data, manipulating them, and building a feature matrix in which each feature is a measurable number. Domain-specific.
* Run a ML model o...
import numpy as np
import pandas as pd

df = pd.DataFrame({"eye color": ["brown", "brown", "blonde", "red", None, "Brown"],
                   "height": [1.85, 1.25, 1.45, 2.01, None, 1.02],
                   "country of origin": ["Colombia", "USA", "Mexico", "Mexico", "Chile", "Colombia"],
                   "gender": ["M", None, "F", "F", "F", None]})
df
Let's first normalize the data so it's all lower case. This will handle the "Brown" and "brown" issue.
df_new = df.copy()
df_new["eye color"] = df_new["eye color"].str.lower()
df_new
Let's next handle the NaN in the height. What should we use here?
# mean of everyone?
np.nanmean(df_new["height"].values)

# mean of just females?
np.nanmean(df_new[df_new["gender"] == 'F']["height"])

df_new1 = df_new.copy()
df_new1.at[4, "height"] = np.nanmean(df_new[df_new["gender"] == 'F']["height"])
df_new1
Let's next handle the eye color. What should we use?
df_new1["eye color"].mode()

df_new2 = df_new1.copy()
df_new2.at[4, "eye color"] = df_new1["eye color"].mode().values[0]
df_new2
How should we handle the missing gender entries?
df_new3 = df_new2.fillna("N/A")
df_new3
We're done, right? No. We fixed the dirty, missing data problem but we still don't have a numerical feature matrix.

We could do a mapping such that "Colombia" -> 1, "USA" -> 2, ... etc., but that would imply an ordering between what are fundamentally categories (without ordering). Instead we want to do `one-hot encodi...
pd.get_dummies(df_new3, prefix=['eye color', 'country of origin', 'gender'])
Note: depending on the learning algorithm you use, you may want to do `drop_first=True` in `get_dummies`. Of course there are helpful tools that exist for us to deal with dirty, missing data.
%run transform

bt = BasicTransformer(return_df=True)
bt.fit_transform(df_new)
# Time series

The [wafer dataset](http://www.timeseriesclassification.com/description.php?Dataset=Wafer) is a set of timeseries capturing sensor measurements (1000 training examples, 6164 test examples) of one silicon wafer during the manufacture of semiconductors. Each wafer has a classification of normal or abnormal. T...
import requests
from io import StringIO

dat_file = requests.get("https://github.com/zygmuntz/time-series-classification/blob/master/data/wafer/Wafer.csv?raw=true")
data = StringIO(dat_file.text)

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

data.seek(0)
df = pd.read_csv(data,...
Columns 0 to 151 are the time measurements; the last column is the label we are trying to predict.
df[152].value_counts()

## save the data as numpy arrays
target = df.values[:, 152].astype(int)
time_series = df.values[:, 0:152]

normal_inds = np.argwhere(target == 1); np.random.shuffle(normal_inds)
abnormal_inds = np.argwhere(target == -1); np.random.shuffle(abnormal_inds)

num_to_plot = 3
fig, (ax1, ax2) = plt.subplo...
What would be good features here?
f1 = np.mean(time_series, axis=1)  # how about the mean?
f1.shape

import seaborn as sns, numpy as np
import warnings
warnings.filterwarnings("ignore")

ax = sns.distplot(f1)

# Plotting differences of means between normal and abnormal
ax = sns.distplot(f1[normal_inds], kde_kws={"label": "normal"})
sns.distplot(f1[abnorm...
Often there are entire Python packages devoted to helping us build features from certain types of datasets (timeseries, text, images, movies, etc.). In the case of timeseries, a popular package is `tsfresh` (*"It automatically calculates a large number of time series characteristics, the so called features. Further the pa...
# !pip install tsfresh
dfc = df.copy()
del dfc[152]
d = dfc.stack()
d = d.reset_index()
d = d.rename(columns={"level_0": "id", "level_1": "time", 0: "value"})
y = df[152]

from tsfresh import extract_features
max_num = 300

from tsfresh import extract_relevant_features
features_filtered_direct = extract_relevant_feature...
# Text Data

Many applications involve parsing and understanding something about natural language, i.e. speech or text data. Categorization is a classic usage of Natural Language Processing (NLP): what bucket does this text belong to?

Question: **What are some examples where learning on text has commercial or industrial a...
from sklearn.datasets import fetch_20newsgroups

news_train = fetch_20newsgroups(subset='train', categories=['sci.space', 'rec.autos'], data_home='datatmp/')
news_train.target_names

print(news_train.data[1])
news_train.target_names[news_train.target[1]]

autos = np.argwhere(news_train.target == 1)
sci = np.argwhere(news_...
**How do you (as a human) classify text? What do you look for? How might we make these features?**
# total character count?
f1 = np.array([len(x) for x in news_train.data])
f1

ax = sns.distplot(f1[autos], kde_kws={"label": "autos"})
sns.distplot(f1[sci], ax=ax, kde_kws={"label": "sci"})
ax.set_xscale("log")
ax.set_xlabel("number of characters")

# total word count?
f2 = np.array([len(x.split(" ")) for x in news_t...
We've got three fairly uninformative features now. We should be able to do better. Unsurprisingly, what matters most in NLP is the content: the words used, the tone, the meaning from the ordering of those words. The basic components of NLP are: * Tokenization - intelligently splitting up words in sentences, paying atte...
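Tokenization, the first component listed above, can be approximated with a naive regex before handing the job to spaCy below (a sketch only; real tokenizers handle contractions, abbreviations and punctuation with language-specific rules):

```python
import re

def naive_tokenize(text):
    # Lowercase, then grab runs of letters/apostrophes; everything
    # else (punctuation, digits, whitespace) acts as a separator.
    return re.findall(r"[A-Za-z']+", text.lower())

tokens = naive_tokenize("Python is one of the best languages, isn't it?")
```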
#!pip install spacy #!python -m spacy download en #!python -m spacy download es import spacy # Load English tokenizer, tagger, parser, NER and word vectors nlp = spacy.load("en") # the spanish model is # nlp = spacy.load("es") doc = nlp(u"Guido said that 'Python is one of the best languages for doing Data Science.' ...
`doc` is now an `iterable` with each word/item properly tokenized and tagged. This is done by applying rules specific to each language. Linguistic annotations are available as `Token` attributes.
for token in doc: print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_, token.is_alpha, token.is_stop) from spacy import displacy displacy.serve(doc, style="dep") displacy.render(doc, style = "ent", jupyter = True) nlp = spacy.load("es") # https://www.elespectador.com/notici...
One very powerful way to featurize text/documents is to count the frequency of words---this is called **bag of words**. Each individual token occurrence frequency is used to generate a feature. So the two sentences become:```json{"Guido": 1, "said": 2, "that": 2, "Python": 2, "is": 1, "one": 1, "of": 1, "best": ...
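A minimal bag-of-words is just a token count per document, which the standard library can already do (illustrative sketch; below we use sklearn's `CountVectorizer` for the same idea at scale):

```python
from collections import Counter

# Order is discarded: only per-document token frequencies remain.
doc1 = "guido said that python is one of the best languages".split()
doc2 = "he said that he uses python".split()
bow1, bow2 = Counter(doc1), Counter(doc2)
```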
from spacy.lang.en.stop_words import STOP_WORDS STOP_WORDS
`sklearn` has a number of helper functions, include the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html):> Convert a collection of text documents to a matrix of token counts. This implementation produces a sparse representation of the counts usi...
# the following is from https://www.dataquest.io/blog/tutorial-text-classification-in-python-using-spacy/ import pandas as pd from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer from sklearn.base import TransformerMixin from sklearn.pipeline import Pipeline import string from spacy.lang.en.stop_...
Why did we get `datum` as one of our feature names? (Our `spacy_tokenizer` lemmatizes each token, and spaCy's English lemmatizer maps "data" to the lemma "datum".)
X.toarray() doc.text
Let's try a bigger corpus (the newsgroups):
news_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'), categories=['sci.space','rec.autos'], data_home='datatmp/') %time X = bow_vector.fit_transform(news_train.data) X bow_...
Most of those features will only appear once and we might not want to include them (as they add noise). In order to reweight the count features into floating point values suitable for usage by a classifier it is very common to use the *tf–idf* transform. From [`sklearn`](https://scikit-learn.org/stable/modules/generate...
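The reweighting can be written out directly. A sketch using sklearn's default smoothed idf, idf(t) = ln((1 + n) / (1 + df(t))) + 1, without the L2 row normalization that `TfidfVectorizer` also applies by default:

```python
import math

def tfidf(term_counts_per_doc):
    # df(t): number of documents containing term t
    n = len(term_counts_per_doc)
    df = {}
    for counts in term_counts_per_doc:
        for t in counts:
            df[t] = df.get(t, 0) + 1
    # weight(t, d) = tf(t, d) * (ln((1 + n) / (1 + df(t))) + 1)
    return [
        {t: tf * (math.log((1 + n) / (1 + df[t])) + 1) for t, tf in counts.items()}
        for counts in term_counts_per_doc
    ]

docs = [{"python": 2, "data": 1}, {"data": 1, "science": 1}]
weights = tfidf(docs)
```

Here "data" appears in every document, so its idf collapses to 1, while the rarer terms are upweighted.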
tfidf_vector = TfidfVectorizer(tokenizer = spacy_tokenizer, min_df=0.03, max_df=0.9, max_features=1000) %time X = tfidf_vector.fit_transform(news_train.data) tfidf_vector.get_feature_names() X print(X[1,:]) y = news_train.target np.savez("tfidf.npz", X=X.todense(), y=y)
One of the challenges with BoW and TF-IDF is that we lose context. "Me gusta esta clase, no" is the same as "No me gusta esta clase" (roughly "I like this class, no?" versus "I don't like this class"). One way to handle this is with N-grams -- not just frequencies of individual words but of groupings of n words. E.g. "Me gusta", "gusta esta", "esta clase", "clase no", "no me" (bigrams)....
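Generating the n-grams themselves is a one-liner (a sketch; `CountVectorizer`'s `ngram_range` argument, used in the next cell, does this internally):

```python
def ngrams(tokens, n):
    # Every contiguous window of n tokens, joined into one feature.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams("no me gusta esta clase".split(), 2)
```

The bigram "no me" only occurs in one of the two word orders, so bigram features can tell the sentences apart where unigram counts cannot.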
bow_vector = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,2)) X = bow_vector.fit_transform([x.text for x in en_doc.sents]) bow_vector.get_feature_names()
Calibration of non-isoplanatic low frequency data

This uses an implementation of the SageCAL algorithm to calibrate a simulated SKA1LOW observation in which sources inside the primary beam have one set of calibration errors and sources outside have different errors. In this example, the peeler sources are held fixed in ...
%matplotlib inline import os import sys sys.path.append(os.path.join('..', '..')) from data_models.parameters import arl_path results_dir = arl_path('test_results') import numpy from astropy.coordinates import SkyCoord from astropy import units as u from astropy.wcs.utils import pixel_to_skycoord from matplotli...
Apache-2.0
workflows/notebooks/calskymodel.ipynb
mfarrera/algorithm-reference-library
Use Dask throughout
arlexecute.set_client(use_dask=True) arlexecute.run(init_logging)
We make the visibility. The parameter rmax determines the distance of the furthest antennas/stations used. All other parameters are determined from this number. We set the w coordinate to be zero for all visibilities so as not to have to do full w-term processing. This speeds up the imaging steps.
nfreqwin = 1 ntimes = 1 rmax = 750 frequency = numpy.linspace(0.8e8, 1.2e8, nfreqwin) if nfreqwin > 1: channel_bandwidth = numpy.array(nfreqwin * [frequency[1] - frequency[0]]) else: channel_bandwidth = [0.4e8] times = numpy.linspace(-numpy.pi / 3.0, numpy.pi / 3.0, ntimes) phasecentre=SkyCoord(ra=+30.0 * u.de...
Generate the model from the GLEAM catalog, including application of the primary beam.
beam = create_image_from_visibility(block_vis, npixel=npixel, frequency=frequency, nchan=nfreqwin, cellsize=cellsize, phasecentre=phasecentre) original_gleam_components = create_low_test_skycomponents_from_gleam(flux_limit=1.0, phasecentre=phasecentre, frequency=frequency, polarisation_frame=PolarisationF...
Generate the template image
model = create_image_from_visibility(block_vis, npixel=npixel, frequency=[numpy.average(frequency)], nchan=1, channel_bandwidth=[numpy.sum(channel_bandwidth)], cellsize=...
Create sources to be peeled
peel_distance = 0.16 peelers = select_components_by_separation(phasecentre, pb_gleam_components, min=peel_distance) gleam_components = select_components_by_separation(phasecentre, pb_gleam_components, max=peel_distance) print("There a...
Create the model visibilities, applying a different gain table for peeled sources and other components
corrupted_vis = copy_visibility(block_vis, zero=True) gt = create_gaintable_from_blockvisibility(block_vis, timeslice='auto') components_errors = [(p, 1.0) for p in peelers] components_errors.append((pb_gleam_components, 0.1)) for sc, phase_error in components_errors: component_vis = copy_visibility(block_vis, ze...
Find the components above the threshold
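The selection rule in the next cell is `threshold = 10.0 * medianabs`, i.e. a multiple of the median absolute pixel value used as a robust noise estimate. The idea in isolation (a sketch with a hypothetical helper name; the notebook gets `medianabs` from `qa_image`):

```python
import statistics

def medianabs_threshold(pixel_values, factor=10.0):
    # median(|pixel|) is robust to a few bright sources dominating
    # the image; scaling it sets the source-detection threshold.
    medianabs = statistics.median(abs(v) for v in pixel_values)
    return factor * medianabs

threshold = medianabs_threshold([-0.2, 0.1, 0.3, -0.1, 5.0])
```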
qa = qa_image(dirty) vmax=qa.data['medianabs']*20.0 vmin=-qa.data['medianabs']*2.0 print(qa) threshold = 10.0*qa.data['medianabs'] print("Selecting sources brighter than %f" % threshold) initial_found_components= find_skycomponents(dirty, threshold=threshold) show_image(dirty, components=initial_found_components, cm='G...
Run skymodel_cal using dask
corrupted_vis = arlexecute.scatter(corrupted_vis) graph = calskymodel_solve_workflow(corrupted_vis, calskymodel_graph, niter=30, gain=0.25, tol=1e-8) calskymodel, residual_vis = arlexecute.compute(graph, sync=True)
Combine all components for display
skymodel_components = list() for csm in calskymodel: skymodel_components += csm[0].components
Check that the peeled sources are not altered
recovered_peelers = find_skycomponent_matches(peelers, skymodel_components, 1e-5) ok = True for p in recovered_peelers: ok = ok and numpy.abs(peelers[p[0]].flux[0,0] - skymodel_components[p[1]].flux[0,0]) < 1e-7 print("Peeler sources flux unchanged: %s" % ok) ok = True for p in recovered_peelers: ok = ok and pe...
Now we find the components in the residual image and add those to the existing model
residual, sumwt = invert_function(residual_vis, model, context='2d') qa = qa_image(residual) vmax=qa.data['medianabs']*30.0 vmin=-qa.data['medianabs']*3.0 print(qa) threshold = 20.0*qa.data['medianabs'] print("Selecting sources brighter than %f" % threshold) final_found_components = find_skycomponents(residual, thresh...
Make a restored image
psf, _ = invert_function(residual_vis, model, dopsf=True, context='2d') component_image = copy_image(faint_model) component_image = insert_skycomponent(component_image, final_components) restored = restore_cube(component_image, psf, residual) export_image_to_fits(restored, '%s/calskymodel_restored.fits' % results_dir)...
Now match the recovered components to the originals
original_bright_components = peelers + bright_components matches = find_skycomponent_matches(final_components, original_bright_components, 3*cellsize)
Look at the range of separations found
separations = [match[2] for match in matches] plt.clf() plt.hist(separations/cellsize, bins=50) plt.title('Separation between input and recovered source in pixels') plt.xlabel('Separation in cells (cellsize = %g radians)' % cellsize) plt.ylabel('Number') plt.show()
Now look at the matches between the original components and those recovered.
totalfluxin = numpy.sum([c.flux[0,0] for c in pb_gleam_components]) totalfluxout = numpy.sum([c.flux[0,0] for c in final_components]) + numpy.sum(faint_model.data) print("Recovered %.3f (Jy) of original %.3f (Jy)" % (totalfluxout, totalfluxin)) found = [match[1] for match in matches] notfound = list() for c in range(l...
Look at the recovered flux and the location of the unmatched components. From the image display these seem to be blends of close components.
fluxin = [original_bright_components[match[1]].flux[0,0] for match in matches] fluxout = [final_components[match[0]].flux[0,0] for match in matches] missed_components = [original_bright_components[c] for c in notfound] missed_flux = [match.flux[0,0] for match in missed_components] plt.clf() plt.plot(fluxin, fluxou...
Multiple linear regression

**For Table 3 of the paper**

Cell-based QUBICC R2B5 model
from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from tensorflow.keras import backend as K from tensorflow.keras.regularizers import l1_l2 import tensorflow as tf import tensorflow.nn as nn import gc import numpy as np import pandas as pd import importlib import os import...
MIT
additional_content/baselines/multiple_linear_regression_cell_based-R2B5.ipynb
agrundner24/iconml_clc
Training the multiple linear model on the entire data set
t0 = time.time() # The optimal multiple linear regression model lin_reg = LinearRegression() lin_reg.fit(input_data_scaled, output_data) print(time.time() - t0) # Loss of this optimal multiple linear regression model clc_predictions = lin_reg.predict(input_data_scaled) lin_mse = mean_squared_error(output_data, clc_pr...
The mean squared error of the linear model is 401.47.
Zero Output Model
np.mean(output_data**2, dtype=np.float64)
Constant Output Model
mean = np.mean(output_data, dtype=np.float64) np.mean((output_data-mean)**2, dtype=np.float64)
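The two trivial baselines above can be checked in pure Python (hypothetical helper name; the notebook computes the same quantities with NumPy): predicting 0 gives MSE = mean(y²), and predicting the mean of y gives MSE = the population variance of y.

```python
import statistics

def baseline_mses(y):
    zero_mse = statistics.fmean(v * v for v in y)   # predict 0 everywhere
    const_mse = statistics.pvariance(y)             # predict mean(y) everywhere
    return zero_mse, const_mse

zero_mse, const_mse = baseline_mses([1.0, 2.0, 3.0, 6.0])
```

Since mean(y²) = var(y) + mean(y)², the constant model can never be worse than the zero model, so the variance is the tighter of the two reference MSEs.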
Randomly initialized neural network
# Create the model model = Sequential() # First hidden layer model.add(Dense(units=64, activation='tanh', input_dim=no_of_features, kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # Second hidden layer model.add(Dense(units=64, activation=nn.leaky_relu, kernel_regularizer=l1_l2(l1=0.004749, l2=0...
The mean squared error of the randomly initialized neural network is 913.91.
Simplified Sundqvist function

`input_data` is unscaled
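For reference, a common textbook form of the Sundqvist diagnostic relates cloud fraction to relative humidity as C = 1 - sqrt(1 - (RH - RH_crit)/(1 - RH_crit)) above a critical humidity. This sketch only conveys the shape of the scheme; the notebook's `simple_sundqvist_scheme` works from qv, temperature and pressure and may parameterize RH_crit differently:

```python
import math

def sundqvist_cloud_fraction(rh, rh_crit=0.8):
    # No cloud below the critical relative humidity, full cover at
    # saturation, and a square-root ramp in between.
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt(1.0 - (rh - rh_crit) / (1.0 - rh_crit))

frac = sundqvist_cloud_fraction(0.9)
```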
qv = input_data[:, 0] temp = input_data[:, 3] pres = input_data[:, 4] t0 = time.time() # 0.001% of the data ind = np.random.randint(0, samples_total, samples_total//10**5) # Entries will be in [0, 1] sundqvist = [] for i in ind: sundqvist.append(simple_sundqvist_scheme(qv[i], temp[i], pres[i], ps=101325)) ti...
**Artistic Colorizer**

◢ DeOldify - Colorize your own photos!

**Credits:** Special thanks to: Matt Robinson and María Benavente for pioneering the DeOldify image colab notebook, and Dana Kelley for doing things, breaking stuff & having an opinion on everything.

---

◢ Verify Correct Runtime Settings

**IMPORTANT** In the "R...
!git clone https://github.com/jantic/DeOldify.git DeOldify cd DeOldify
MIT
ImageColorizerColab.ipynb
pabenz99/DeOldify
◢ Setup
#NOTE: This must be the first call in order to work properly! from deoldify import device from deoldify.device_id import DeviceId #choices: CPU, GPU0...GPU7 device.set(device=DeviceId.GPU0) import torch if not torch.cuda.is_available(): print('GPU not available.') !pip install -r colab_requirements.txt import f...
◢ Instructions

source_url
Type in a URL to a direct link of an image. Usually that means they'll end in .png, .jpg, etc. NOTE: If you want to use your own image, upload it first to a site like Imgur.

render_factor
The default value of 35 has been carefully chosen and should work -ok- for most scenarios (but probably w...
source_url = '' #@param {type:"string"} render_factor = 35 #@param {type: "slider", min: 7, max: 40} watermarked = True #@param {type:"boolean"} if source_url is not None and source_url !='': image_path = colorizer.plot_transformed_image_from_url(url=source_url, render_factor=render_factor, compare=True, watermar...
See how well render_factor values perform on the image here
for i in range(10,40,2): colorizer.plot_transformed_image('test_images/image.png', render_factor=i, display_render_factor=True, figsize=(8,8))
M2.859 · Data Visualization · Practical Assignment, Part 2

2021-1 · University Master's Degree in Data Science
Studies in Computer Science, Multimedia and Telecommunications

A9: Final Assignment (part 2) - Data wrangling

[**Data wrangling**](https://en.wikipedia.org/wiki/Data_wrangling) is the process of transforming ...
from six import StringIO from IPython.display import Image from sklearn import datasets from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import accuracy_score, confusion_matrix, classification_report from sklearn.tree import DecisionTreeC...
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
1. Loading the dataset (1 point)

A dataset has been selected from the Stack Overflow Annual Developer Survey portal, which examines every aspect of the experience of the community's programmers (Stack Overflow), from professional satisfaction and job hunting to education and ...
so2021_df = pd.read_csv('survey_results_public.csv', header=0) so2021_df.sample(5)
Variable selection: we select all the variables in the dataset that will be used to answer the questions posed in the first part of the assignment:
so2021_data = so2021_df[['MainBranch', 'Employment', 'Country', 'EdLevel', 'Age1stCode', 'YearsCode', 'YearsCodePro', 'DevType', 'CompTotal', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWo...
Variable Ethnicity:
from re import search def choose_ethnia(cell_ethnia): val_ethnia_exceptions = ["I don't know", "Or, in your own words:"] if cell_ethnia == "I don't know;Or, in your own words:": return val_ethnia_exceptions[0] if search(";", cell_ethnia): row_ethnia_values = cell_ethnia.split(';',...
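The recoding pattern used for this and the following multi-select columns can be sketched in isolation (hypothetical helper name `first_choice`; the notebook's `choose_ethnia` handles the survey-specific exception strings the same way):

```python
def first_choice(cell, exceptions=("I don't know", "Or, in your own words:")):
    # Keep the listed exception answers verbatim; otherwise collapse
    # a multi-select 'a;b;c' answer to its first value.
    if cell in exceptions:
        return cell
    return cell.split(";", 1)[0]

cat = first_choice("White or of European descent;Hispanic or Latino/a")
```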
Variable Employment:
data_test['Employment'].drop_duplicates().sort_values() data_test['Employment'] = data_test['Employment'].replace(['Employed full-time'], 'Tiempo completo') data_test['Employment'] = data_test['Employment'].replace(['Employed part-time'], 'Tiempo parcial') data_test['Employment'] = data_test['Employment'].replace(['Ind...
Variable EdLevel:
data_test['EdLevel'].drop_duplicates().sort_values() data_test['EdLevel'] = data_test['EdLevel'].replace(['Associate degree (A.A., A.S., etc.)'], 'Grado Asociado') data_test['EdLevel'] = data_test['EdLevel'].replace(['Bachelor’s degree (B.A., B.S., B.Eng., etc.)'], 'Licenciatura') data_test['EdLevel'] = data_test['EdLe...
Variable DevType:
data_test['DevType'].drop_duplicates().sort_values() from re import search def choose_devtype(cell_devtype): val_devtype_exceptions = ["Other (please specify):"] if cell_devtype == "Other (please specify):": return val_devtype_exceptions[0] if search(";", cell_devtype): row_devtyp...
Variable MainBranch:
data_test['MainBranch'].drop_duplicates().sort_values() data_test['MainBranch'] = data_test['MainBranch'].replace(['I am a developer by profession'], 'Desarrollador Profesional') data_test['MainBranch'] = data_test['MainBranch'].replace(['I am not primarily a developer, but I write code sometimes as part of my work'], ...
Variable Age1stCode:
data_test['Age1stCode'].drop_duplicates().sort_values() data_test['Age1stCode'].value_counts() data_test['Age1stCode'] = data_test['Age1stCode'].replace(['11 - 17 years'], '11-17') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['18 - 24 years'], '18-24') data_test['Age1stCode'] = data_test['Age1stCode'].rep...