Dataset columns: markdown, code, output, license, path, repo_name
Let's verify each of our clusters. Cluster 1
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 0, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
_____no_output_____
Apache-2.0
appliedDataScienceCapstoneWeek3.ipynb
Margchunne28/Location-data-provider
Cluster 2
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 1, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
_____no_output_____
Apache-2.0
appliedDataScienceCapstoneWeek3.ipynb
Margchunne28/Location-data-provider
Cluster 3
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 2, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
_____no_output_____
Apache-2.0
appliedDataScienceCapstoneWeek3.ipynb
Margchunne28/Location-data-provider
Cluster 4
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 3, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
_____no_output_____
Apache-2.0
appliedDataScienceCapstoneWeek3.ipynb
Margchunne28/Location-data-provider
Cluster 5
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 4, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
_____no_output_____
Apache-2.0
appliedDataScienceCapstoneWeek3.ipynb
Margchunne28/Location-data-provider
Time Series: collecting data at regular intervals. **ADDITIVE MODEL**: represents a time series as a combination of patterns at different scales, which can be decomposed into separate pieces. QUANDL FINANCIAL LIBRARY: https://www.quandl.com/tools/python and https://github.com/quandl/quandl-python
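The additive model above can be sketched with plain numpy before touching real data: build a synthetic series as trend + seasonality + noise, then decompose it again. All names and parameter values here are illustrative, not taken from the Quandl data.

```python
import numpy as np

# Additive model: y(t) = trend(t) + seasonality(t) + noise(t),
# illustrated on a synthetic monthly series (10 years of data).
rng = np.random.default_rng(0)
t = np.arange(120)
trend = 0.05 * t                              # slow linear growth
seasonal = 2.0 * np.sin(2 * np.pi * t / 12)   # annual cycle
noise = rng.normal(0, 0.3, t.size)
y = trend + seasonal + noise

# Recover the trend with a 12-month moving average, then estimate the
# seasonal component as the mean of the detrended values for each month.
kernel = np.ones(12) / 12
trend_hat = np.convolve(y, kernel, mode="same")
detrended = y - trend_hat
seasonal_hat = np.array([detrended[m::12].mean() for m in range(12)])
```

Away from the series edges, the recovered `seasonal_hat` closely tracks the true annual cycle, which is the sense in which the model "decomposes" the series into pieces.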
#!pip install quandl import quandl import pandas as pd # quandl.ApiConfig.api_key = 'getyourownkey!' tesla = quandl.get('WIKI/TSLA') gm = quandl.get('WIKI/GM') gm.head() tesla_copy = tesla.copy() gm_copy = gm.copy()
_____no_output_____
BSD-2-Clause
Misc Learning/Untitled.ipynb
hamil168/Data-Science-Misc
EDA
import matplotlib.pyplot as plt plt.style.use('ggplot') plt.plot(gm.index, gm['Adj. Close']) plt.title('GM Stock Prices') plt.ylabel('Price USD') plt.show() plt.plot(tesla.index, tesla['Adj. Close'], 'r') plt.title('Tesla Stock Price') plt.ylabel('Price USD') plt.show() # Yearly average number of shares outstanding for Tesla and GM tesla_shares = {2018: 168e6, 2017: 162e6, 2016: 144e6, 2015: 128e6, 2014: 125e6, 2013: 119e6, 2012: 107e6, 2011: 100e6, 2010: 51e6} gm_shares = {2018: 1.42e9, 2017: 1.50e9, 2016: 1.54e9, 2015: 1.59e9, 2014: 1.61e9, 2013: 1.39e9, 2012: 1.57e9, 2011: 1.54e9, 2010: 1.50e9} # create a year column tesla['Year'] = tesla.index.year # Move dates from index to column tesla.reset_index(level=0, inplace = True) tesla['cap'] = 0 # calculate market cap (df.ix is removed in modern pandas; use df.loc) for i, year in enumerate(tesla['Year']): shares = tesla_shares.get(year) tesla.loc[i, 'cap'] = shares * tesla.loc[i, 'Adj. Close'] # create a year column gm['Year'] = gm.index.year # Move dates from index to column gm.reset_index(level=0, inplace = True) gm['cap'] = 0 # calculate market cap for i, year in enumerate(gm['Year']): shares = gm_shares.get(year) gm.loc[i, 'cap'] = shares * gm.loc[i, 'Adj. Close']
# Merge Datasets cars = gm.merge(tesla, how = 'inner', on = 'Date') cars.rename(columns = {'cap_x': 'gm_cap', 'cap_y': 'tesla_cap'}, inplace=True) cars = cars.loc[:, ['Date', 'gm_cap', 'tesla_cap']] cars['gm_cap'] = cars['gm_cap'] / 1e9 cars['tesla_cap'] = cars['tesla_cap'] / 1e9 cars.head() plt.figure(figsize=(10,8)) plt.plot(cars['Date'], cars['gm_cap'], 'b-', label = 'GM') plt.plot(cars['Date'], cars['tesla_cap'], 'r-', label = 'TESLA') plt.title('Market Cap of GM and Tesla') plt.legend() plt.show() import numpy as np # find first and last time Tesla was valued higher than GM first_date = cars.loc[np.min(np.where(cars['tesla_cap'] > cars['gm_cap'])[0]), 'Date'] last_date = cars.loc[np.max(np.where(cars['tesla_cap'] > cars['gm_cap'])[0]), 'Date'] print("Tesla was valued higher than GM from {} to {}.".format(first_date.date(), last_date.date()))
Tesla was valued higher than GM from 2017-04-10 to 2018-03-23.
BSD-2-Clause
Misc Learning/Untitled.ipynb
hamil168/Data-Science-Misc
Code Transfer Test. The code transfer test is designed to assess the coding skills you learnt during the lecture training. The allotted time for the subsequent problem set is approximately 30 minutes. You are allowed to refer to the Jupyter notebook throughout the test. Good luck! Jupyter notebook resource: Timer extension! Heeryung
# First, let's import the pandas and numpy libraries import pandas as pd import numpy as np # In addition, I want to show some plots, so we'll import matplotlib as well import matplotlib.pyplot as plt # Finally, we'll bring in the scipy stats libraries from scipy import stats # Hide import pandas.util.testing as pdt # %install_ext https://raw.githubusercontent.com/minrk/ipython_extensions/master/extensions/writeandexecute.py # pdt.assert_series_equal(s1, s2) # pdt.assert_frame_equal(f1, f2) # pdt.assert_index_equal(i1, i2)
<ipython-input-2-b71c00b51b7e>:2: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as pdt
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Transfer Test Question: What's the probability that an NFL player makes a field goal? In this problem we are interested in predicting the probability that NFL players make a field goal. Assuming this probability follows a beta distribution, we observe field goal data from multiple NFL matches in 2019. Let us describe the model specification. The prior distribution of the probability p follows a beta distribution with shape parameters alpha and beta. The likelihood follows a binomial distribution, since we know explicitly the number of successful and unsuccessful field goals.$$ Y \sim Bin(n, p)$$$$ p \sim Beta(\alpha, \beta)$$where $\alpha$ and $\beta$ are the hyperparameters of p. Question 1: Import Data. Let us use the read_csv function to read the NFL data file into a pandas DataFrame, then look at the first 5 lines. The file name of the CSV file is nfl.csv.
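Because the beta prior is conjugate to the binomial likelihood, the model above also has a closed-form posterior, Beta(α + y, β + n − y), which is a useful cross-check on the MCMC results later. The counts below are made up for illustration (they are not read from nfl.csv):

```python
from scipy import stats

# Conjugate update for the beta-binomial model:
# p | y ~ Beta(alpha + y, beta + n - y).
alpha, beta = 40, 20     # prior shape parameters, as in the question
n, y = 100, 84           # hypothetical data: 84 field goals made in 100 attempts

posterior = stats.beta(alpha + y, beta + n - y)
print(round(posterior.mean(), 3))   # 0.775 = (alpha + y) / (alpha + beta + n)
```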
# Answer file = 'nfl.csv' def read_csv(file): """Read the nfl.csv data and return the first few lines of the csv file. """ ### BEGIN SOLUTION data = pd.read_csv(file) # And let's look at the first few lines return data.head() ### END SOLUTION read_csv(file) # Basic Test Case """Check that read_csv function returns the correct dataframe output and format.""" df1 = read_csv(file) df2 = pd.read_csv(file).head() pdt.assert_frame_equal(df1, df2) # Data Type Test Case assert isinstance(read_csv(file), pd.core.frame.DataFrame) # Advanced Test Case
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Question 2: Column Mean. Let us define the column_mean function, which takes the csv file and the column name as inputs and returns the mean probability of making field goals. (Look at the FG column.)
# Sample Answer column = 'FG' def column_mean(file, column): """Take the nfl.csv file and a specific column as input. Compute the mean value for a column in pandas dataframe. """ ### BEGIN SOLUTION data = pd.read_csv(file) return data[column].mean() ### END SOLUTION column_mean(file, column) # Basic Test Case """Test whether the data type and value of column mean are correctly returned.""" assert column_mean(file, column) == pd.read_csv(file)[column].mean() assert 0.836319 <= column_mean(file, column) <= 0.836321 # Advanced Test Cases assert isinstance(column_mean(file, column), (np.float64, float))
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Question 3: Specify Prior and Likelihood. Let us specify the prior and likelihood. We are going to split the work into two code chunks performing the following steps: In the first code chunk, we seed the random number generator with 123 to make the random numbers predictable. Then we assign the hyperparameters of the prior. In this question we use a beta distribution as the prior; it has two shape parameters, alpha and beta. We set the parameter names as alpha and beta and assign the values 40 and 20, respectively. Finally, we set the sample size to 100, using the parameter name size. In the second code chunk, we set up a variable observed, which is the observed outcome variable. We define a function called likelihood which takes a csv file, a column in the dataset and the sample size as inputs, and returns the observed outcome variable, which is the product of size and the mean field goal probability. (You can take the result from question 2.)
# Sample answer ### BEGIN SOLUTION # We initialize random number generator seed for reproducible results np.random.seed(123) # We assign the hyperparameters of prior # We assign the shape parameters alpha and beta as 40 and 20. alpha, beta = 40, 20 # Then we make up the sample size as 100 size = 100 ### END SOLUTION # Basic Test Case from nose.tools import assert_equal assert_equal(alpha, 40) assert_equal(beta, 20) assert_equal(size, 100) # Finally, we set up Y the observed outcome variable as the product of size and mean field goal probability def likelihood(file, column, size): """Compute the product of the column mean of field goal probability among NFL players and sample size. """ ### BEGIN SOLUTION observed = column_mean(file, column) * size return observed ### END SOLUTION observed = likelihood(file, column, size) # Basic Test Case assert_equal(likelihood(file, column, size), column_mean(file, column) * size) # Advanced Test Case assert 83 <= likelihood(file, column, size) <= 84 assert isinstance(likelihood(file, column, size), (np.float64, float))
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Optional Question. You can run the following code to generate a plot of the beta distribution based on the alpha and beta parameters you defined above. Here the scipy.stats.beta function and the matplotlib package are used to generate the probability density function plot.
# We define the linestyle and set up a linear space to clearly plot the beta distribution x = np.linspace(0,1,1002)[1:-1] # Then, we use scipy.stats.beta function to set up beta distribution dist = stats.beta(alpha, beta) # Now we want to define a plot_beta_pdf function to generate a figure # showing the probability density function of the beta distribution def plot_beta_pdf(x,dist): # Note that we want the figure to be 8 inches height and 8 inches width plt.figure(figsize=(8,8)) # We read the linear space and the beta pdf into the plot, and we want to generate a # continuous and black curve. We also want to show a legend at the top-right corner with # the alpha and beta value plt.plot(x, dist.pdf(x), ls = '-', c = 'black', label = r'$\alpha = %.1f,\ \beta=%.1f$' % (alpha, beta)) plt.legend(loc = 0) # Finally, we set up the value ranges and labels for x-axis and y-axis and show the plot plt.xlim(0,1) plt.ylim(0,10) plt.xlabel('$x$') plt.ylabel(r'$p(x|\alpha, \beta)$') plt.title('Beta Distribution') plt.show() plot_beta_pdf(x,dist)
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
You will see that the beta distribution curve closely resembles the case where we conduct binomial trials with roughly 40 successes and 20 failures. In fact, we can think of $\alpha - 1$ as the number of successes and $\beta - 1$ as the number of failures. You can choose the $\alpha$ and $\beta$ parameters however you like. If you want the probability of success to be very high, say 95 percent, set 95 for $\alpha$ and 5 for $\beta$. If you want it low, say 5 percent, set 95 for $\beta$ and 5 for $\alpha$.
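The successes/failures reading of the parameters can be checked numerically: for $\alpha, \beta > 1$ the mode of Beta($\alpha$, $\beta$) is $(\alpha - 1)/(\alpha + \beta - 2)$, i.e. successes over total trials. A minimal sketch:

```python
# Mode of a Beta(alpha, beta) distribution for alpha, beta > 1:
# (alpha - 1) successes out of (alpha - 1) + (beta - 1) trials.
def beta_mode(alpha, beta):
    return (alpha - 1) / (alpha + beta - 2)

print(round(beta_mode(40, 20), 3))  # 0.672 -- roughly 40 successes in 60 trials
print(round(beta_mode(95, 5), 3))   # 0.959 -- a very high success probability
```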
import pymc3 as pm # Hide import unittest
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Question 4: Train MCMC sampler. Let us train the Markov chain Monte Carlo sampler. In this example, we use the default NUTS algorithm to sample the posterior distribution. We need to perform the following steps: First, we set a variable called niter, the number of draws, to 1000. Second, we instantiate the model object. Third, we specify the beta distribution as the prior for the probability of making a field goal, using the variable name p. Please remember to use the alpha and beta values specified in question 3. Note that the function for assigning a beta distribution is pm.Beta(). We also specify the observed likelihood as a binomial distribution, using the variable name y. The parameters taken are the sample size (n), probability (p) and observed data (observed). Note that the function for the binomial distribution is pm.Binomial(). Finally, we start the sampler to take 1000 draws (from the niter variable) across 3 chains. We also pass a seed via random_seed to make the results reproducible. The results should be returned as a trace object.
# Sample answer seed = 1000 def sampler(alpha, beta, size, observed, seed): """Train a MCMC sampler to generate posterior samples for the field goal probability. """ ### BEGIN SOLUTION niter = 1000 model = pm.Model() with model: p = pm.Beta('p', alpha=alpha, beta=beta) # Specify the likelihood function (sampling distribution) y = pm.Binomial('y', n=size, p=p, observed=observed) trace = pm.sample(niter, chains = 3, random_seed = seed) return trace ### END SOLUTION trace = sampler(alpha, beta, size, observed, seed) trace # Test Cases """Check the correctness of parameters assigned to the PyMC3 model.""" #assert_equal(seed, 1000) assert isinstance(trace, (pm.backends.base.MultiTrace)) assert_equal(trace.varnames, ['p_logodds__', 'p']) assert_equal(len(trace['p']), 3000) # #
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Posterior Diagnostics. Question 5. Now we look at the posterior diagnostics. Recall that we plot a traceplot to visualize the posterior distribution of the parameters of interest. In addition, we also obtain the Gelman-Rubin statistic to check whether the parameter of interest converges. a) Define a function named traceplot which takes the trace object as input and returns a traceplot for the variable p, the probability of making a field goal.
# Answer 5a # Plot results Traceplot def traceplot(trace): """Generate the posterior density plot and trajectory plot for the field goal probability.""" # BEGIN SOLUTION return pm.traceplot(trace, var_names = ['p']) # END SOLUTION traceplot(trace) plt.show() # Test cases """Check the length, data type and shape of the traceplot object for sanity purposes, to make sure the number of plots generated is correct.""" assert_equal(len(traceplot(trace)), 1) assert isinstance(traceplot(trace), np.ndarray) assert_equal(traceplot(trace).shape, (1,2))
C:\Users\sonso\Anaconda3\lib\site-packages\pymc3\plots\__init__.py:35: UserWarning: Keyword argument `varnames` renamed to `var_names`, and will be removed in pymc3 3.8 warnings.warn('Keyword argument `{old}` renamed to `{new}`, and will be removed in pymc3 3.8'.format(old=old, new=new)) C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\pymc3\plots\__init__.py:35: UserWarning: Keyword argument `varnames` renamed to `var_names`, and will be removed in pymc3 3.8 warnings.warn('Keyword argument `{old}` renamed to `{new}`, and will be removed in pymc3 3.8'.format(old=old, new=new)) C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\pymc3\plots\__init__.py:35: UserWarning: Keyword argument `varnames` renamed to `var_names`, and will be removed in pymc3 3.8 warnings.warn('Keyword argument `{old}` renamed to `{new}`, and will be removed in pymc3 3.8'.format(old=old, new=new)) C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn(
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
b) (Optional) What trends do you see in the posterior distribution of the probability of making a field goal? c) Define a function named posterior_summary which takes a trace object as input and displays a table-based summary of posterior statistics rounded to 2 digits.
# Answer 5c # Obtain summary statistics for posterior distributions def posterior_summary(trace): """Generate a table-based summary for the field goal probability.""" # BEGIN SOLUTION return pm.summary(trace).round(2) # END SOLUTION # Test Cases """Check whether the summary output is correctly generated.""" sum1 = posterior_summary(trace) sum2 = pm.summary(trace).round(2) pdt.assert_frame_equal(sum1, sum2) assert_equal(posterior_summary(trace).shape, (1, 11))
C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn(
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
d) What are the posterior mean and standard deviation of the probability of making a field goal? Define a function posterior_statistics which takes a trace object as input and returns the posterior mean and posterior standard deviation as a tuple of the form (mean, sd).
# Answer 5d def posterior_statistics(trace): return (posterior_summary(trace).iloc[0,0], posterior_summary(trace).iloc[0,1]) posterior_statistics(trace) # Test Cases """Check whether the posterior mean and posterior standard deviation are correctly generated.""" assert_equal(posterior_statistics(trace), tuple([posterior_summary(trace).iloc[0,0], posterior_summary(trace).iloc[0,1]])) assert isinstance(posterior_statistics(trace), tuple) assert_equal(len(posterior_statistics(trace)), 2)
C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. 
warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn( C:\Users\sonso\Anaconda3\lib\site-packages\arviz\data\io_pymc3.py:87: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. warnings.warn(
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
e) Define a function named gelman_rubin which takes a trace object as input and returns the Gelman-Rubin statistic. Does the posterior distribution converge?
# Answer # Get Gelman-Rubin Convergence Criterion def gelman_rubin(trace): """Compute the Gelman-Rubin statistic for the posterior samples of field goal probability.""" ### BEGIN SOLUTION return pm.rhat(trace, var_names=['p']) ### END SOLUTION gelman_rubin(trace) # Test cases assert_equal(gelman_rubin(trace), pm.rhat(trace, var_names=['p'])) #assert 1 <= gelman_rubin(trace) <= 1.1
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Bonus Section: Effective Sample Size. The effective sample size is given by the following formula:$$\hat{n}_{eff} = \frac{mn}{1 + 2 \sum_{t=1}^T \hat{\rho}_t}$$where $m$ is the number of chains, $n$ the number of steps per chain, $T$ the time when the autocorrelation first becomes negative, and $\hat{\rho}_t$ the autocorrelation at lag $t$.
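The formula can also be implemented directly for illustration. This is a naive sketch (the pooled autocorrelation estimate is cruder than PyMC3's) on synthetic chains, not on the trace above:

```python
import numpy as np

def effective_sample_size(chains):
    """Naive n_eff = m*n / (1 + 2 * sum(rho_t)), summing autocorrelations
    up to the first lag where the estimate turns negative (the T above)."""
    m, n = chains.shape
    x = chains.reshape(-1) - chains.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]                 # normalise so rho_0 = 1
    s = 0.0
    for t in range(1, x.size):
        if acf[t] < 0:                 # stop at first negative autocorrelation
            break
        s += acf[t]
    return m * n / (1 + 2 * s)

rng = np.random.default_rng(42)
iid = rng.normal(size=(3, 500))        # independent draws: n_eff close to m*n
ar = np.empty((3, 500))                # strongly autocorrelated AR(1) chains
for c in range(3):
    e = rng.normal(size=500)
    ar[c, 0] = e[0]
    for t in range(1, 500):
        ar[c, t] = 0.9 * ar[c, t - 1] + e[t]

print(round(effective_sample_size(iid)), round(effective_sample_size(ar)))
```

The autocorrelated chains report a much smaller effective sample size than the independent ones, which is exactly what the formula is designed to capture.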
## Calculate effective sample size pm.effective_n(trace)
/Users/son520804/anaconda3/lib/python3.6/site-packages/pymc3/stats/__init__.py:50: UserWarning: effective_n has been deprecated. In the future, use ess instead. warnings.warn("effective_n has been deprecated. In the future, use ess instead.")
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
As you can see, the effective sample size is 1271 in total across the 3 chains. Since the tuning sample is 500 by default, 500 samples remain to be resampled. This means the autocorrelation is not extreme and the MCMC converges well. Geweke Statistics. As an alternative to the Gelman-Rubin statistic, the Geweke statistic provides a sanity check of the convergence of MCMC chains. It compares the mean and variance of segments from the beginning and end of each single MCMC chain for a parameter. If the absolute value of the Geweke statistic exceeds 1, it indicates a lack of convergence and suggests that additional samples are required to achieve convergence.
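The idea behind the Geweke check can be sketched by hand. Note this is a simplified variant that scales by plain segment variances, whereas pm.geweke uses spectral density estimates; the segment fractions and example chains below are illustrative only.

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """z-score comparing the mean of an early segment of the chain
    with the mean of a late segment, scaled by the segment variances."""
    n = len(chain)
    a = chain[: int(first * n)]               # first 10% of the chain
    b = chain[int((1 - last) * n):]           # last 50% of the chain
    return (a.mean() - b.mean()) / np.sqrt(a.var() / a.size + b.var() / b.size)

rng = np.random.default_rng(0)
stationary = rng.normal(0.83, 0.05, 3000)         # a well-mixed chain
drifting = stationary + np.linspace(0, 1, 3000)   # a chain still trending

print(round(geweke_z(stationary), 2), round(geweke_z(drifting), 2))
```

For the stationary chain the score stays near zero, while the drifting chain produces a very large score, flagging the lack of convergence.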
# We can create a plot to show the trajectory of Geweke statistics plt.plot(pm.geweke(trace['p'])[:,1], 'o') plt.axhline(1, c='red') plt.axhline(-1, c='red') plt.gca().margins(0.05) plt.show() pass
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Since the Geweke statistics are less than 1 in absolute value, this indicates good convergence of the MCMC chains. Debug Question. The following question requires you to read the code carefully and correct the errors in it. A UMich cognitive science research team wants to produce elegant code that runs an MCMC sampler to determine the IQ distribution of the undergraduate students studying at the University of Michigan. They studied the literature and inferred the following priors:$IQ \sim Normal(mean = 105, variance = 7^2)$$\sigma(IQ) \sim HalfNormal(\beta = 2)$Then they collected experimental data from 100 students who took the Wechsler Adult Intelligence Scale (WAIS) test at the cognitive science building. The first code chunk gives their test results. After debugging, the resulting code should be error-free and return the trace object.
# IQ test results for the 100 students np.random.seed(123) y = np.random.normal(100, 15, 100) # Hierarchical Bayesian Modeling seed = 123 niter = 1000 nchains = 3 with pm.Model() as model: """Deploy NUTS sampler to update the distribution for students' IQ.""" ### BEGIN CODE mu = pm.Normal('mu', mu = 105, sigma = 7) sigma = pm.HalfCauchy('sigma', beta = 2) y_observed = pm.Normal('y_observed', mu=mu, sigma=sigma, observed=y) trace2 = pm.sample(niter, chains = nchains, random_seed = seed) ### END CODE # Test cases assert_equal(type(posterior_summary(trace2)), pd.core.frame.DataFrame) assert_equal(posterior_summary(trace2).shape, (2,11))
_____no_output_____
MIT
PyMC3_Transfer_Test.ipynb
son520804/Video_Composition_Project
Creating Provenance: an Example Using a Python Notebook
import prov, requests, pandas as pd, io, git, datetime, urllib from prov.model import ProvDocument
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Initialising a Provenance Document. First we use the prov library to create a provenance document and initialise it with some relevant namespaces that can be used later to define provenance activities and entities.
pg = ProvDocument() kn_id = "data/data-gov-au/number-of-properties-by-suburb-and-planning-zone-csv" pg.add_namespace('kn', 'http://oznome.csiro.au/id/') pg.add_namespace('void', 'http://vocab.deri.ie/void#') pg.add_namespace('foaf', 'http://xmlns.com/foaf/0.1/') pg.add_namespace('dc', 'http://purl.org/dc/elements/1.1/') pg.add_namespace('doap', 'http://usefulinc.com/ns/doap#')
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Processing the Data. Processing could be anything and represents one or more provenance activities. In this example we use a KN metadata record to retrieve data on residential properties. We intersperse the definition of provenance into this processing, but we could just as easily have separated it out and performed it after the processing steps. First we define an entity that describes the KN metadata record we are using here.
input_identifier = 'kn:'+ kn_id input_entity = pg.entity(input_identifier, {'prov:label': 'road static parking off street', 'prov:type': 'void:Dataset'})
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Then we proceed to drill down to get detailed data that we've found associated with this record
start_time = datetime.datetime.now() response = requests.get('https://data.sa.gov.au/data/dataset/d080706c-2c05-433d-b84d-9aa9b6ccae73/resource/4a47e89b-4be8-430d-8926-13b180025ac6/download/city-of-onkaparinga---number-of-properties-by-suburb-and-planning-zone-2016.csv') url_data = response.content dataframe = pd.read_csv(io.StringIO(url_data.decode('utf-8'))) dataframe.columns
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Our processing is very simple: we subset the original dataset and create a new dataset called residential_frame, which we then save to disk.
residential_frame = dataframe[dataframe['Zone_Description'] == 'Residential'] residential_frame_file_name = "filtered_residential_data.csv" residential_frame.to_csv(residential_frame_file_name) end_time = datetime.datetime.now()
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Completing Provenance. We have begun to build a provenance record, but we are missing a record of the activity that transforms our input into the output, as well as a description of the output. Generating an output provenance entity: ideally we would store our output provenance entity somewhere known and persistent and identify it with a persistent URL. However, we can still mint an identifier and then describe the dataset in useful ways that will make it easy to find and query later. To do this we create a new entity record and use the file name and SHA hash of the file to describe it.
import subprocess output = subprocess.check_output("sha1sum " + residential_frame_file_name, shell=True) sha1 = output.decode().split(' ')[0] output_identifier = 'kn:' + sha1 output_entity = pg.entity(output_identifier, {'prov:label': residential_frame_file_name, 'prov:type': 'void:Dataset'})
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Describing the activity. We need to connect the entity representing the input data to the entity representing the output data, and we may want to describe the activity that transforms the input into the output. In this case the activity is this Jupyter notebook. One way of storing provenance information for it is to make sure it is version controlled in git and then record those details. Connecting things together into the provenance graph:
import re, ipykernel, json # Note: the next statement is JavaScript and must run in its own cell under the %%javascript cell magic: # %%javascript # var nb = Jupyter.notebook; var port = window.location.port; nb.kernel.execute("NB_Port = '" + port + "'"); kernel_id = re.search('kernel-(.*).json', ipykernel.connect.get_connection_file()).group(1) response = requests.get('http://127.0.0.1:{port}/jupyter/api/sessions'.format(port=NB_Port)) response.content matching = [s for s in json.loads(response.text) if s['kernel']['id'] == kernel_id] if matching: matched = matching[0]['notebook']['path'] notebook_file_name = matched.split('/')[-1]
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
One gotcha here is that we need to make sure this notebook's relevant version has been committed and pushed to the remote. So do that, and then execute these cells.
repo = git.Repo('./', search_parent_directories=True) current_git_sha = repo.head.object.hexsha current_git_remote = list(repo.remotes['origin'].urls)[0] current_git_sha current_git_remote process_identifier = 'kn:' + 'notebook/' + urllib.parse.quote(notebook_file_name + current_git_sha, safe='') process_identifier process_entity = pg.entity(process_identifier, other_attributes={'dc:description': 'a jupyter notebook used that demonstrates provenance', 'doap:GitRepository' : current_git_remote, 'doap:Version' : current_git_sha }) import time sunixtime = time.mktime(start_time.timetuple()) eunixtime = time.mktime(end_time.timetuple()) activity_identifier = 'kn:' + 'notebook/' + urllib.parse.quote(notebook_file_name + current_git_sha, safe='') + str(sunixtime) + str(eunixtime) activity = pg.activity(activity_identifier, startTime=start_time, endTime=end_time) pg.wasGeneratedBy(activity=activity, entity=output_entity) pg.used(activity=activity, entity=input_entity) pg.used(activity=activity, entity=process_entity) pg # visualize the graph from prov.dot import prov_to_dot dot = prov_to_dot(pg) dot.write_png('prov.png') from IPython.display import Image Image('prov.png')
_____no_output_____
MIT
prov/Provenance using KN resource.ipynb
CSIRO-enviro-informatics/jupyter-examples
Cython in Jupyter notebooks To use cython in a Jupyter notebook, the extension has to be loaded.
%load_ext cython
_____no_output_____
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Pure Python To illustrate the performance difference between a pure Python function and a cython implementation, consider a function that computes the list of the first $k_{\rm max}$ prime numbers.
from array import array def primes(kmax, p=None): if p is None: p = array('i', [0]*kmax) result = [] k, n = 0, 2 while k < len(p): i = 0 while i < k and n % p[i] != 0: i += 1 if i == k: p[k] = n k += 1 result.append(n) n += 1 return result
_____no_output_____
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Checking the results for the first 20 prime numbers.
primes(20)
_____no_output_____
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Note that this is not the most efficient method to check whether $k$ is prime.
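One common improvement, sketched here in plain Python as an aside (this helper is not part of the notebook), is to test divisibility only by the primes found so far that do not exceed $\sqrt{n}$:

```python
from math import isqrt

def primes_sqrt(kmax):
    """Collect the first kmax primes, testing divisibility only by
    already-found primes up to isqrt(n)."""
    result = []
    n = 2
    while len(result) < kmax:
        limit = isqrt(n)
        if all(n % p != 0 for p in result if p <= limit):
            result.append(n)
        n += 1
    return result
```

This cuts the inner loop from roughly $k$ iterations to roughly $\sqrt{n}/\ln n$ for each candidate, while returning the same list.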
%timeit primes(1_000) p = array('i', [0]*10_000) %timeit primes(10_000, p)
7.65 s ± 993 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Cython The cython implementation differs little from the pure Python one: type annotations have been added for the function's argument and for the variables `n`, `k`, `i`, and `p`. Note that cython expects a constant array size, hence the upper limit on `kmax`.
%%cython def c_primes(int kmax): cdef int n, k, i cdef int p[10_000] if kmax > 10_000: kmax = 10_000 result = [] k, n = 0, 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i += 1 if i == k: p[k] = n k += 1 result.append(n) n += 1 return result
_____no_output_____
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Checking the results for the first 20 prime numbers.
c_primes(20) %timeit c_primes(1_000) %timeit c_primes(10_000)
195 ms ± 15 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
It is clear that the cython implementation is more than 30 times faster than the pure Python implementation. Dynamic memory allocation The cython implementation can be improved by adding dynamic memory allocation for the array `p`.
%%cython from libc.stdlib cimport calloc, free def c_primes(int kmax): cdef int n, k, i cdef int *p = <int *> calloc(kmax, sizeof(int)) result = [] k, n = 0, 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i += 1 if i == k: p[k] = n k += 1 result.append(n) n += 1 free(p) return result
_____no_output_____
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Checking the results for the first 20 prime numbers.
c_primes(20)
_____no_output_____
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
This has no noticeable impact on performance.
%timeit c_primes(1_000) %timeit c_primes(10_000)
243 ms ± 32.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
CC-BY-4.0
source-code/cython/cython.ipynb
gjbex/Python-for-HPC
Plotly's Python graphing library makes interactive, publication-quality graphs. Examples of how to make line plots, scatter plots, area charts, bar charts, error bars, box plots, histograms, heatmaps, subplots, multiple-axes, polar charts, and bubble charts.
!pip install plotly_express
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Plotly Express is a terse, consistent, high-level wrapper around Plotly.py for rapid data exploration and figure generation.
!pip install calmap
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Calendar heatmaps (calmap) Plot Pandas time series data sampled by day in a heatmap per calendar year, similar to GitHub’s contributions plot, using matplotlib.
!pip install squarify
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Pure Python implementation of the squarify treemap layout algorithm. Based on algorithm from Bruls, Huizing, van Wijk, "Squarified Treemaps", but implements it differently.
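The first step of the squarified layout is normalizing the input values so they sum to the area of the target rectangle. The helper below is a plain-Python illustration of that idea, not squarify's actual code:

```python
def normalize_sizes(sizes, dx, dy):
    """Scale the input values so they sum to the area of a dx-by-dy
    rectangle; each scaled value is the area one treemap tile should occupy."""
    total = sum(sizes)
    area = dx * dy
    return [s * area / total for s in sizes]

# Three values split a 10x10 rectangle in proportion 6:3:1.
tile_areas = normalize_sizes([6, 3, 1], 10, 10)
```

The layout algorithm then places rectangles with these areas, keeping their aspect ratios as close to 1 as it can.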
!pip install pycountry_convert
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Using country data derived from wikipedia, pycountry-convert provides conversion functions between ISO country names, country-codes, and continent names.
!pip install GoogleMaps
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Use Python? Want to geocode something? Looking for directions? Maybe matrices of directions? This library brings the Google Maps Platform Web Services to your Python application.
!pip install xgboost
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way. The same code runs on major distributed environments (Hadoop, SGE, MPI) and can solve problems beyond billions of examples.
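The gradient boosting framework mentioned above can be sketched in a few lines of plain Python: repeatedly fit a weak learner to the residuals (the negative gradient of squared loss) and add it to the ensemble, scaled by a learning rate. This toy version with decision stumps on 1-D data only illustrates the idea and is in no way XGBoost's actual algorithm:

```python
from statistics import mean

def fit_stump(xs, residuals):
    """Pick the single threshold split on xs that minimizes squared
    error on the residuals; return it as a prediction function."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = mean(left), mean(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def gradient_boost(xs, ys, n_rounds=20, lr=0.5):
    """Squared-loss gradient boosting: each round fits a stump to the
    current residuals and adds it to the ensemble with weight lr."""
    base = mean(ys)
    learners = []

    def predict(x):
        return base + sum(lr * h(x) for h in learners)

    for _ in range(n_rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        learners.append(fit_stump(xs, residuals))
    return predict

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, 5, 5, 5]
model = gradient_boost(xs, ys)
```

With each round the residuals shrink geometrically on this toy data, so the ensemble's predictions converge to the targets.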
!pip install lightgbm
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
LightGBM is a gradient boosting framework that uses tree based learning algorithms. It is designed to be distributed and efficient with the following advantages: * Faster training speed and higher efficiency. * Lower memory usage. * Better accuracy. * Support of parallel and GPU learning. * Capable of handling large-scale data.
!pip install altair
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Altair is a declarative statistical visualization library for Python. With Altair, you can spend more time understanding your data and its meaning. Altair's API is simple, friendly and consistent and built on top of the powerful Vega-Lite JSON specification. This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code. Altair is developed by Jake Vanderplas and Brian Granger in close collaboration with the UW Interactive Data Lab.
!pip install folium
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
folium builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the Leaflet.js library. Manipulate your data in Python, then visualize it in a Leaflet map via folium.
!pip install fbprophet
_____no_output_____
MIT
1_Required_Python_Library_Installation.ipynb
chahatmax/World_VS_India_COVID-19
Analysis of the Brazilian electorate Source -> http://www.tse.jus.br/eleicoes/estatisticas/estatisticas-eleitorais
# importando as bibliotecas import pandas as pd # Carregando o arquivo csv df = pd.read_csv('eleitorado_municipio_2020.csv', encoding='latin1', sep=';') df.head().T # Tamanho do arquivo df.info() # As cinco cidades com o maior número de eleitores deficientes. df.nlargest(5,'QTD_ELEITORES_DEFICIENTE') # As cinco cidades com o menor número de eleitores. df.nsmallest(5,'QTD_ELEITORES') # Quantidade de eleitores por gênero print('Eleitoras: ', df['QTD_ELEITORES_FEMININO'].sum()) print('Eleitores: ', df['QTD_ELEITORES_MASCULINO'].sum()) print('Não informado: ', df['QTD_ELEITORES_NAOINFORMADO'].sum()) # Criar variáveis para calcular o percentual tot_eleitores = df['QTD_ELEITORES'].sum() tot_masc = df['QTD_ELEITORES_MASCULINO'].sum() tot_fem = df['QTD_ELEITORES_FEMININO'].sum() tot_ninf = df['QTD_ELEITORES_NAOINFORMADO'].sum() # Mostrar valores percentuais print('Eleitoras: ', ( ( tot_fem / tot_eleitores ) * 100) .round(2), '%' ) print('Eleitores: ', ( ( tot_masc / tot_eleitores ) * 100) .round(2), '%' ) print('Não informado: ', ( ( tot_ninf / tot_eleitores ) * 100) .round(2), '%' ) # Quantos municípios com mais homens do que mulheres df[df['QTD_ELEITORES_MASCULINO'] > df['QTD_ELEITORES_FEMININO']].count() # Vamos criar uma coluna nova para indicar a relação fem/masc df['RELACAO_FM'] = df['QTD_ELEITORES_FEMININO'] / df['QTD_ELEITORES_MASCULINO'] # Quais os municípios com maior relação fem/masc? df.nlargest(5,'RELACAO_FM').T # Quais os municípios com menor relação fem/masc? df.nsmallest(5,'RELACAO_FM') # Vamos criar um DataFrame só com municípios de São Paulo df_sp = df[ df['SG_UF'] == 'SP' ].copy() df_sp.head() # Quais os municípios com maior relação fem/masc de São Paulo? df_sp.nlargest(5,'RELACAO_FM') # Quais os municípios com menor relação fem/masc de São Paulo? 
df_sp.nsmallest(5,'RELACAO_FM') # Plotar um gráfico de distribuição fem/masc df['RELACAO_FM'].plot.hist(bins=100) # carregar biblioteca gráfica import seaborn as sns import matplotlib.pyplot as plt # Plotar um gráfico de distribuição fem/masc sns.distplot( df['RELACAO_FM'] , bins=100 , color='red', kde=False ) # Embelezando o gráfico plt.title('Relação Eleitoras/Eleitores', fontsize=18) plt.xlabel('Eleitoras/Eleitores', fontsize=14) plt.ylabel('Frequência', fontsize=14) plt.axvline(1.0, color='black', linestyle='--') # Plotar um gráfico de distribuição fem/masc, mas mostrando os pontos (municípios) sns.swarmplot( data=df , x='NM_REGIAO', y='RELACAO_FM' ) plt.axhline(1.0, color='black', linestyle='--') # Vamos plotar o gráfico de total de eleitores por faixa etária usando o gráfico de barras horizontal # primeiro vamos listar as colunas de nosso interesse lista = ['QTD_ELEITORES_16', 'QTD_ELEITORES_17', 'QTD_ELEITORES_18', 'QTD_ELEITORES_19', 'QTD_ELEITORES_20'] tot_idade = df[lista].sum() tot_idade # Mostrando o gráfico de barras tot_idade.plot.barh()
_____no_output_____
MIT
09.Eleitorado_2020/09_csv_eleitores_2020.ipynb
alexfviana/Python_LabHacker
Gun Deaths in the US, 2012-2014 This is an analysis of gun deaths in the US between the years 2012 and 2014. The data comes from FiveThirtyEight, specifically [here](https://github.com/fivethirtyeight/guns-data).
import csv f = open("guns.csv", "r") data = list(csv.reader(f)) print(data[:5]) headers = data[0] data = data[1:] print(headers) print(data[:5]) import datetime dates = [datetime.datetime(year=int(d[1]), month=int(d[2]), day=1) for d in data] print(dates[:5]) date_counts = {} for d in dates: if d in date_counts: date_counts[d] += 1 else: date_counts[d] = 1 print(date_counts) sex_counts = {} for d in data: if d[5] in sex_counts: sex_counts[d[5]] += 1 else: sex_counts[d[5]] = 1 race_counts = {} for d in data: if d[7] in race_counts: race_counts[d[7]] += 1 else: race_counts[d[7]] = 1 print(sex_counts) print(race_counts)
{'F': 14449, 'M': 86349} {'Native American/Native Alaskan': 917, 'Hispanic': 9022, 'White': 66237, 'Asian/Pacific Islander': 1326, 'Black': 23296}
MIT
dataquest/gun_deaths/.ipynb_checkpoints/Gun_Deaths-checkpoint.ipynb
arturbacu/data-science
Patterns so far* Most gun deaths are for race = white and sex = M, but is there a further correlation here?* It may help to explore this further to find more patterns like age, education status, and if police were involved
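One way to explore those extra columns is a small generic tally helper; the helper, the sample rows, and the column index below are hypothetical and only illustrate the counting pattern used above:

```python
from collections import Counter

def column_counts(rows, index):
    """Tally the values appearing in one column of the parsed csv rows."""
    return Counter(row[index] for row in rows)

# Made-up rows for illustration; in the notebook this would be called as
# column_counts(data, i) with a real column index taken from `headers`.
sample = [["1", "2012", "01", "Suicide", "0", "M", "34"],
          ["2", "2012", "01", "Homicide", "0", "F", "21"],
          ["3", "2012", "02", "Suicide", "1", "M", "60"]]
intent_counts = column_counts(sample, 3)
```

The same call, with the appropriate index, covers age, education status, and police involvement without rewriting the loop each time.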
f2 = open("census.csv", "r") census = list(csv.reader(f2)) census mapping = {"Asian/Pacific Islander": 15834141, "Black": 40250635, "Native American/Native Alaskan": 3739506, "Hispanic": 44618105, "White": 197318956} race_per_hundredk = {} for r, v in race_counts.items(): race_per_hundredk[r] = (v / mapping[r]) * 100000 print(race_per_hundredk) intent = [d[3] for d in data] races = [d[7] for d in data] suicide_race_counts = {} for i, race in enumerate(races): if intent[i] == "Suicide": if race in suicide_race_counts: suicide_race_counts[race] += 1 else: suicide_race_counts[race] = 1 suicide_race_per_hundredk = {} for r, v in suicide_race_counts.items(): suicide_race_per_hundredk[r] = (v / mapping[r]) * 100000 print(suicide_race_per_hundredk)
{'Native American/Native Alaskan': 14.841532544673013, 'Hispanic': 7.106980451097149, 'White': 28.06217969245692, 'Asian/Pacific Islander': 4.705023152187416, 'Black': 8.278130270491385}
MIT
dataquest/gun_deaths/.ipynb_checkpoints/Gun_Deaths-checkpoint.ipynb
arturbacu/data-science
Module 2: Scraping with Selenium LATAM Airlines We are going to scrape the LATAM site to look up flight data based on origin, destination, date, and cabin. The information we expect to obtain for each flight is: - Available price(s) - Departure and arrival times (duration) - Stopover information. Let's get started!
url = 'https://www.latam.com/es_ar/apps/personas/booking?fecha1_dia=20&fecha1_anomes=2019-12&auAvailability=1&ida_vuelta=ida&vuelos_origen=Buenos%20Aires&from_city1=BUE&vuelos_destino=Madrid&to_city1=MAD&flex=1&vuelos_fecha_salida_ddmmaaaa=20/12/2019&cabina=Y&nadults=1&nchildren=0&ninfants=0&cod_promo=' from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument('--incognito') driver = webdriver.Chrome(executable_path='../../chromedriver', options=options) driver.get(url) #Usaremos el Xpath para obtener la lista de vuelos vuelos = driver.find_elements_by_xpath('//li[@class="flight"]') vuelo = vuelos[0]
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C5. Demoras inteligentes - Script.ipynb
Alejandro-sin/Learning_Notebooks
We obtain the information about the departure time, arrival time, and flight duration
# Hora de salida vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime') # Hora de llegada vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime') # Duración del vuelo vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime') boton_escalas = vuelo.find_element_by_xpath('.//div[@class="flight-summary-stops-description"]/button') boton_escalas boton_escalas.click() segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]') segmentos escalas = len(segmentos) - 1 #0 escalas si es un vuelo directo segmento = segmentos[0] # Origen segmento.find_element_by_xpath('.//div[@class="departure"]/span[@class="ground-point-name"]').text # Hora de salida segmento.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime') # Destino segmento.find_element_by_xpath('.//div[@class="arrival"]/span[@class="ground-point-name"]').text # Hora de llegada segmento.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime') # Duración del vuelo segmento.find_element_by_xpath('.//span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime') # Numero del vuelo segmento.find_element_by_xpath('.//span[@class="equipment-airline-number"]').text # Modelo de avion segmento.find_element_by_xpath('.//span[@class="equipment-airline-material"]').text # Duracion de la escala segmento.find_element_by_xpath('.//div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime') vuelo.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click() vuelo.click() tarifas = vuelo.find_elements_by_xpath('.//div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]') precios = [] for tarifa in tarifas: nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for') moneda = 
tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text dict_tarifa={nombre:{'moneda':moneda, 'valor':valor}} precios.append(dict_tarifa) print(dict_tarifa) def obtener_tiempos(vuelo): # Hora de salida salida = vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime') # Hora de llegada llegada = vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime') # Duracion duracion = vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime') return {'hora_salida': salida, 'hora_llegada': llegada, 'duracion': duracion} def obtener_precios(vuelo): tarifas = vuelo.find_elements_by_xpath( './/div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]') precios = [] for tarifa in tarifas: nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for') moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text dict_tarifa={nombre:{'moneda':moneda, 'valor':valor}} precios.append(dict_tarifa) return precios def obtener_datos_escalas(vuelo): segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]') info_escalas = [] for segmento in segmentos: # Origen origen = segmento.find_element_by_xpath( './/div[@class="departure"]/span[@class="ground-point-name"]').text # Hora de salida dep_time = segmento.find_element_by_xpath( './/div[@class="departure"]/time').get_attribute('datetime') # Destino destino = segmento.find_element_by_xpath( './/div[@class="arrival"]/span[@class="ground-point-name"]').text # Hora de llegada arr_time = segmento.find_element_by_xpath( './/div[@class="arrival"]/time').get_attribute('datetime') # Duración del vuelo duracion_vuelo = 
segmento.find_element_by_xpath( './/span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime') # Numero del vuelo numero_vuelo = segmento.find_element_by_xpath( './/span[@class="equipment-airline-number"]').text # Modelo de avion modelo_avion = segmento.find_element_by_xpath( './/span[@class="equipment-airline-material"]').text # Duracion de la escala if segmento != segmentos[-1]: duracion_escala = segmento.find_element_by_xpath( './/div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime') else: duracion_escala = '' # Armo un diccionario para almacenar los datos data_dict={'origen': origen, 'dep_time': dep_time, 'destino': destino, 'arr_time': arr_time, 'duracion_vuelo': duracion_vuelo, 'numero_vuelo': numero_vuelo, 'modelo_avion': modelo_avion, 'duracion_escala': duracion_escala} info_escalas.append(data_dict) return info_escalas
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C5. Demoras inteligentes - Script.ipynb
Alejandro-sin/Learning_Notebooks
Lesson 15 The scraper is almost ready. Let's unify the 3 functions from the previous lesson into a single one
def obtener_info(driver): vuelos = driver.find_elements_by_xpath('//li[@class="flight"]') print(f'Se encontraron {len(vuelos)} vuelos.') print('Iniciando scraping...') info = [] for vuelo in vuelos: # Obtenemos los tiempos generales del vuelo tiempos = obtener_tiempos(vuelo) # Clickeamos el botón de escalas para ver los detalles vuelo.find_element_by_xpath('.//div[@class="flight-summary-stops-description"]/button').click() escalas = obtener_datos_escalas(vuelo) # Cerramos el pop-up con los detalles vuelo.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click() # Clickeamos el vuelo para ver los precios vuelo.click() precios = obtener_precios(vuelo) # Cerramos los precios del vuelo vuelo.click() info.append({'precios':precios, 'tiempos':tiempos , 'escalas': escalas}) return info
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C5. Demoras inteligentes - Script.ipynb
Alejandro-sin/Learning_Notebooks
Now we can load the page with the driver and pass it to this function
driver = webdriver.Chrome(executable_path='../../chromedriver', options=options) driver.get(url) obtener_info(driver)
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C5. Demoras inteligentes - Script.ipynb
Alejandro-sin/Learning_Notebooks
0 flights were found because the page had not finished loading The simplest thing we can do is add a fixed delay long enough to make sure the page has finished loading.
import time options = webdriver.ChromeOptions() options.add_argument('--incognito') driver = webdriver.Chrome(executable_path='../../chromedriver', options=options) driver.get(url) time.sleep(10) vuelos = driver.find_elements_by_xpath('//li[@class="flight"]') vuelos driver.close()
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C5. Demoras inteligentes - Script.ipynb
Alejandro-sin/Learning_Notebooks
This works, but it is not very efficient. It would be better to wait until the page finishes loading and then retrieve the elements.
from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.common.exceptions import TimeoutException options = webdriver.ChromeOptions() options.add_argument('--incognito') driver = webdriver.Chrome(executable_path='../../chromedriver', options=options) driver.get(url) delay = 10 try: vuelo = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.XPATH, '//li[@class="flight"]'))) print("La página terminó de cargar") info_vuelos = obtener_info(driver) except TimeoutException: print("La página tardó demasiado en cargar") driver.close() info_vuelos
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C5. Demoras inteligentes - Script.ipynb
Alejandro-sin/Learning_Notebooks
RIHAD VARIAWA, Data Scientist - Who has fun LEARNING, EXPLORING & GROWING2D Numpy in Python Welcome! This notebook will teach you about using Numpy in the Python Programming Language. By the end of this lab, you'll know what Numpy is and the Numpy operations. Table of Contents Create a 2D Numpy Array Accessing different elements of a Numpy Array Basic Operations Estimated time needed: 20 min Create a 2D Numpy Array
# Import the libraries import numpy as np import matplotlib.pyplot as plt
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Consider the list a, the list contains three nested lists **each of equal size**.
# Create a list a = [[11, 12, 13], [21, 22, 23], [31, 32, 33]] a
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We can cast the list to a Numpy Array as follows
# Convert list to Numpy Array # Every element is the same type A = np.array(a) A
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We can use the attribute ndim to obtain the number of axes or dimensions referred to as the rank.
# Show the numpy array dimensions A.ndim
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Attribute shape returns a tuple corresponding to the size or number of each dimension.
# Show the numpy array shape A.shape
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
The total number of elements in the array is given by the attribute size.
# Show the numpy array size A.size
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Accessing different elements of a Numpy Array We can use square brackets to access the different elements of the array. The correspondence between the square brackets and the list and the rectangular representation is shown in the following figure for a 3x3 array: We can access the 2nd-row 3rd column as shown in the following figure: We simply use the square brackets and the indices corresponding to the element we would like:
# Access the element on the second row and third column A[1, 2]
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We can also use the following notation to obtain the elements:
# Access the element on the second row and third column A[1][2]
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Consider the elements shown in the following figure We can access the element as follows
# Access the element on the first row and first column A[0][0]
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We can also use slicing in numpy arrays. Consider the following figure. We would like to obtain the first two columns in the first row This can be done with the following syntax
# Access the element on the first row and first and second columns A[0][0:2]
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Similarly, we can obtain the first two rows of the 3rd column as follows:
# Access the element on the first and second rows and third column A[0:2, 2]
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Corresponding to the following figure: Basic Operations We can also add arrays. The process is identical to matrix addition. Matrix addition of X and Y is shown in the following figure: The numpy array is given by X and Y
# Create a numpy array X X = np.array([[1, 0], [0, 1]]) X # Create a numpy array Y Y = np.array([[2, 1], [1, 2]]) Y
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We can add the numpy arrays as follows.
# Add X and Y Z = X + Y Z
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Multiplying a numpy array by a scalar is identical to multiplying a matrix by a scalar. If we multiply the matrix Y by the scalar 2, we simply multiply every element in the matrix by 2 as shown in the figure. We can perform the same operation in numpy as follows
# Create a numpy array Y Y = np.array([[2, 1], [1, 2]]) Y # Multiply Y with 2 Z = 2 * Y Z
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Multiplication of two arrays corresponds to an element-wise product or Hadamard product. Consider matrix X and Y. The Hadamard product corresponds to multiplying each of the elements in the same position, i.e. multiplying elements contained in the same color boxes together. The result is a new matrix that is the same size as matrix Y or X, as shown in the following figure. We can perform element-wise product of the array X and Y as follows:
# Create a numpy array Y Y = np.array([[2, 1], [1, 2]]) Y # Create a numpy array X X = np.array([[1, 0], [0, 1]]) X # Multiply X with Y Z = X * Y Z
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We can also perform matrix multiplication with the numpy arrays A and B as follows: First, we define matrix A and B:
# Create a matrix A A = np.array([[0, 1, 1], [1, 0, 1]]) A # Create a matrix B B = np.array([[1, 1], [1, 1], [-1, 1]]) B
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We use the numpy function dot to multiply the arrays together.
# Calculate the dot product Z = np.dot(A,B) Z # Calculate the sine of Z np.sin(Z)
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
We use the numpy attribute T to calculate the transposed matrix
# Create a matrix C C = np.array([[1,1],[2,2],[3,3]]) C # Get the transposed of C C.T
_____no_output_____
MIT
Data Science Professional/2 - Python For Data Science/5.2 Numpy2D.ipynb
2series/DataScience-Courses
Master Notebook: Random forests As we have already seen, in scikit-learn a large part of the code is reusable. In particular, reading variables and preparing the data is the same regardless of which classifier we use. Reading data So that it is not so boring (for me, too), this time we will write ourselves a function to read the data that we can recycle for other datasets.
import numpy as np from collections import Counter def lea_datos(archivo, i_clase=-1, encabezado=True, delim=","): '''Una funcion para leer archivos con datos de clasificación. Argumentos: archivo - direccion del archivo i_clase - indice de columna que contiene las clases. default es -1 y significa la ultima fila. header - si hay un encabezado delim - que separa los datos Regresa: Un tuple de los datos, clases y cabezazo en caso que hay.''' todo = np.loadtxt(archivo, dtype="S", delimiter=delim) # para csv if(encabezado): encabezado = todo[0,:] todo = todo[1:,:] else: encabezado = None clases = todo[:, i_clase] datos = np.delete(todo, i_clase, axis=1) print ("Clases") for k,v in Counter(clases).items(): print (k,":",v) return (datos, clases, encabezado)
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
Now importing data becomes very simple in the future. For CSV data with a header, for example:
datos, clases, encabezado = lea_datos("datos_peña.csv") clases.shape
Clases b'1' : 9 b'0' : 14
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
Balancing data Data normalization is not necessary for random forests or trees because they are invariant with respect to the magnitudes of the variables. The only thing that could be a problem is if the classes are imbalanced from our point of view. This means that sometimes we do not even expect our data to be balanced and, in that case, it is acceptable to leave it unbalanced because we want the classifier to include this real-world "prior" probability. If, for example, I classify wine ratings, it may seem normal to me that many wines have an average rating and only a few a very good one. If I want my classifier to share this "tendency", I do not have to balance. In another case, if I design a test for a disease and do not want my classifier to make assumptions about the patient's condition that are not based on my variables, I had better balance. Scikit-learn does not come with a function to balance data, so why don't we write one ourselves?
def balance(datos, clases, estrategia="down"): '''Balancea unos datos así que cada clase aparece en las mismas proporciones. Argumentos: datos - los datos. Filas son muestras y columnas variables. clases - las clases para cada muestra. estrategia - "up" para up-scaling y "down" para down-scaling''' import numpy as np from collections import Counter # Decidimos los nuevos numeros de muestras para cada clase conteos = Counter(clases) if estrategia=="up": muestras = max( conteos.values() ) else: muestras = min( conteos.values() ) datos_nuevo = np.array([]).reshape( 0, datos.shape[1] ) clases_nuevo = [] for c in conteos: c_i = np.where(clases==c)[0] new_i = np.random.choice(c_i, muestras, replace=(estrategia=="up") ) datos_nuevo = np.append( datos_nuevo, datos[new_i,:], axis=0 ) clases_nuevo = np.append( clases_nuevo, clases[new_i] ) return (datos_nuevo, clases_nuevo)
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
And let's see if it works...
datos_b, clases_b = balance(datos, clases) print(Counter(clases_b)) print(datos_b.shape)
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
Training the forest Running a random forest for classification is almost the same as for all the other classifiers. We just import from a different library.
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(oob_score=True)
clf.fit(datos_b, clases_b)
print("Estimated accuracy:", clf.oob_score_)
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
We already know that the most important hyper-parameters for forests are the number of trees, given by `n_estimators`, and the tree depth, given by `max_depth`. Because their effects are monotone, we can vary them separately.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold, GridSearchCV

cv = StratifiedKFold(n_splits=4)
arboles_vals = np.arange(5, 200, 5)
busqueda = GridSearchCV(clf, param_grid=dict(n_estimators=arboles_vals), cv=cv)
busqueda.fit(datos, clases)
print('Best number of trees =', busqueda.best_params_, ', accuracy =', busqueda.best_score_)

scores = busqueda.cv_results_['mean_test_score']
plt.plot(arboles_vals, scores)
plt.xlabel('number of trees')
plt.ylabel('cv accuracy')
plt.show()
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
And for the depth:
prof_vals = np.arange(1, 12)
busqueda = GridSearchCV(clf, param_grid=dict(max_depth=prof_vals), cv=cv)
busqueda.fit(datos, clases)
print('Best depth =', busqueda.best_params_, ', accuracy =', busqueda.best_score_)

scores = busqueda.cv_results_['mean_test_score']
plt.plot(prof_vals, scores)
plt.xlabel('maximum depth')
plt.ylabel('cv accuracy')
plt.show()
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
Having seen some good values, we can pick a forest with the smallest number of trees and depth that still gives good accuracy. This time we will also extract the variable importances right away.
clf = RandomForestClassifier(n_estimators=101, oob_score=True)
clf.fit(datos, clases)
print("Estimated accuracy:", clf.oob_score_)
print(cabeza)
print(clf.feature_importances_)
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
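To read the importances more easily, pair them with the variable names and sort. A small sketch with hypothetical names and values standing in for `cabeza` and `clf.feature_importances_`:

```python
import numpy as np

# Hypothetical variable names and importances, invented for illustration.
nombres = np.array(["pclass", "sex", "age", "fare"])
importancias = np.array([0.15, 0.45, 0.25, 0.15])

orden = np.argsort(importancias)[::-1]  # largest importance first
for n, imp in zip(nombres[orden], importancias[orden]):
    print(f"{n}: {imp:.2f}")
```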
Predicting on new data
datos_test, clases_test, _ = lea_datos("titanic_test.csv")
clases_pred = clf.predict(datos_test)
print("Predicted:", clases_pred)
print("Truth:    ", clases_test)
_____no_output_____
MIT
rf_master.ipynb
MidoriR/Antimicrobial_compound_classifier
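Beyond comparing the two arrays by eye, a confusion matrix shows where the forest errs. A sketch with made-up truth/prediction arrays in place of `clases_test` and `clases_pred`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical true labels and predictions, invented for illustration.
verdad = np.array([1, 0, 1, 1, 0, 0, 1, 0])
pred   = np.array([1, 0, 0, 1, 0, 1, 1, 0])

cm = confusion_matrix(verdad, pred)   # rows: true class, columns: predicted class
acc = accuracy_score(verdad, pred)
print(cm)
print("accuracy:", acc)
```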
Initialize estimator class
from __future__ import annotations

from typing import NoReturn

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import RocCurveDisplay, accuracy_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

import re
from copy import copy
from datetime import datetime

from IMLearn.base import BaseEstimator


class AgodaCancellationEstimator(BaseEstimator):
    def __init__(self, threshold: float = None) -> AgodaCancellationEstimator:
        super().__init__()
        self.__fit_model: RandomForestClassifier = None
        self.thresh = threshold

    def get_params(self, deep=False):
        return {'threshold': self.thresh}

    def set_params(self, threshold) -> AgodaCancellationEstimator:
        self.thresh = threshold
        return self

    def _fit(self, X: np.ndarray, y: np.ndarray) -> NoReturn:
        self.__fit_model = RandomForestClassifier(random_state=0).fit(X, y)

    def _predict(self, X: pd.DataFrame) -> np.ndarray:
        probs = self.__fit_model.predict_proba(X)[:, 1]
        return probs > self.thresh if self.thresh is not None else probs

    def _loss(self, X: np.ndarray, y: np.ndarray) -> float:
        pass

    def plot_roc_curve(self, X: np.ndarray, y: np.ndarray):
        RocCurveDisplay.from_estimator(self.__fit_model, X, y)

    def score(self, X: pd.DataFrame, y: pd.Series):
        return accuracy_score(y, self._predict(X))
_____no_output_____
MIT
challenge/data_challenge_1.ipynb
rasegiIML/IML.HUJI
Helper Functions
def read_data_file(path: str) -> pd.DataFrame:
    return pd.read_csv(path).drop_duplicates() \
        .astype({'checkout_date': 'datetime64',
                 'checkin_date': 'datetime64',
                 'hotel_live_date': 'datetime64',
                 'booking_datetime': 'datetime64'})


def get_days_between_dates(dates1: pd.Series, dates2: pd.Series):
    return (dates1 - dates2).apply(lambda period: period.days)


def create_col_prob_mapper(col: str, mapper: dict):
    mapper = copy(mapper)

    def map_col_to_prob(df):
        df[col] = df[col].apply(mapper.get)
        return df

    return map_col_to_prob


def add_categorical_prep_to_pipe(train_features: pd.DataFrame, pipeline: Pipeline, cat_vars: list,
                                 one_hot=False, calc_probs=True) -> Pipeline:
    assert one_hot ^ calc_probs, \
        'Error: can only do either one-hot encoding or probability calculations, not neither/both!'
    # one-hot encoding
    if one_hot:
        # TODO - use sklearn OneHotEncoder
        pipeline.steps.append(('one-hot encoding',
                               FunctionTransformer(lambda df: pd.get_dummies(df, columns=cat_vars))))
    # category probability preprocessing - make each category have its success percentage
    if calc_probs:
        for cat_var in cat_vars:
            map_cat_to_prob: dict = train_features.groupby(cat_var, dropna=False).labels.mean().to_dict()
            pipeline.steps.append((f'map {cat_var} to prob',
                                   FunctionTransformer(create_col_prob_mapper(cat_var, map_cat_to_prob))))
    return pipeline


def get_week_of_year(dates):
    return dates.apply(lambda d: d.weekofyear)


def get_booked_on_weekend(dates):
    return dates.apply(lambda d: d.day_of_week >= 4)


def get_weekend_holiday(in_date, out_date):
    return list(map(lambda d: (d[1] - d[0]).days <= 3 and d[0].dayofweek >= 4, zip(in_date, out_date)))


def get_local_holiday(col1, col2):
    return list(map(lambda x: x[0] == x[1], zip(col1, col2)))


def get_days_until_policy(policy_code: str) -> list:
    policies = policy_code.split('_')
    return [int(policy.split('D')[0]) if 'D' in policy else 0 for policy in policies]


def get_policy_cost(policy, stay_cost, stay_length, time_until_checkin):
    """returns tuple of the format (max lost, min lost, part min lost)"""
    if policy == 'UNKNOWN':
        return 0, 0, 0
    nums = tuple(map(int, re.split('[a-zA-Z]', policy)[:-1]))
    if 'D' not in policy:  # no show is suppressed
        return 0, 0, 0
    if 'N' in policy:
        nights_cost = stay_cost / stay_length * nums[0]
        min_cost = nights_cost if time_until_checkin <= nums[1] else 0
        return nights_cost, min_cost, min_cost / stay_cost
    elif 'P' in policy:
        nights_cost = stay_cost * nums[0] / 100
        min_cost = nights_cost if time_until_checkin <= nums[1] else 0
        return nights_cost, min_cost, min_cost / stay_cost
    else:
        raise Exception("Invalid Input")


def get_money_lost_per_policy(features: pd.Series) -> list:
    policies = features.cancellation_policy_code.split('_')
    stay_cost = features.original_selling_amount
    stay_length = features.stay_length
    time_until_checkin = features.booking_to_arrival_time
    policy_cost = [get_policy_cost(policy, stay_cost, stay_length, time_until_checkin)
                   for policy in policies]
    return list(map(list, zip(*policy_cost)))


def add_cancellation_policy_features(features: pd.DataFrame) -> pd.DataFrame:
    cancellation_policy = features.cancellation_policy_code
    features['n_policies'] = cancellation_policy.apply(lambda policy: len(policy.split('_')))
    days_until_policy = cancellation_policy.apply(get_days_until_policy)
    features['min_policy_days'] = days_until_policy.apply(min)
    features['max_policy_days'] = days_until_policy.apply(max)
    x = features.apply(get_money_lost_per_policy, axis='columns')
    features['max_policy_cost'], features['min_policy_cost'], features['part_min_policy_cost'] = list(
        map(list, zip(*x)))
    features['min_policy_cost'] = features['min_policy_cost'].apply(min)
    features['part_min_policy_cost'] = features['part_min_policy_cost'].apply(min)
    features['max_policy_cost'] = features['max_policy_cost'].apply(max)
    return features


def add_time_based_cols(df: pd.DataFrame) -> pd.DataFrame:
    df['stay_length'] = get_days_between_dates(df.checkout_date, df.checkin_date)
    df['time_registered_pre_book'] = get_days_between_dates(df.checkin_date, df.hotel_live_date)
    df['booking_to_arrival_time'] = get_days_between_dates(df.checkin_date, df.booking_datetime)
    df['checkin_week_of_year'] = get_week_of_year(df.checkin_date)
    df['booking_week_of_year'] = get_week_of_year(df.booking_datetime)
    df['booked_on_weekend'] = get_booked_on_weekend(df.booking_datetime)
    df['is_weekend_holiday'] = get_weekend_holiday(df.checkin_date, df.checkout_date)
    df['is_local_holiday'] = get_local_holiday(df.origin_country_code, df.hotel_country_code)
    return df
_____no_output_____
MIT
challenge/data_challenge_1.ipynb
rasegiIML/IML.HUJI
Define pipeline
def create_pipeline_from_data(filename: str):
    NONE_OUTPUT_COLUMNS = ['checkin_date',
                           'checkout_date',
                           'booking_datetime',
                           'hotel_live_date',
                           'hotel_country_code',
                           'origin_country_code',
                           'cancellation_policy_code']
    CATEGORICAL_COLUMNS = ['hotel_star_rating',
                           'guest_nationality_country_name',
                           'charge_option',
                           'accommadation_type_name',
                           'language',
                           'is_first_booking',
                           'customer_nationality',
                           'original_payment_currency',
                           'is_user_logged_in',
                           ]
    RELEVANT_COLUMNS = ['no_of_adults',
                        'no_of_children',
                        'no_of_extra_bed',
                        'no_of_room',
                        'original_selling_amount'] + NONE_OUTPUT_COLUMNS + CATEGORICAL_COLUMNS

    features = read_data_file(filename)
    features['labels'] = features["cancellation_datetime"].isna()

    pipeline_steps = [('columns selector', FunctionTransformer(lambda df: df[RELEVANT_COLUMNS])),
                      ('add time based columns', FunctionTransformer(add_time_based_cols)),
                      ('add cancellation policy features',
                       FunctionTransformer(add_cancellation_policy_features))]
    pipeline = Pipeline(pipeline_steps)
    pipeline = add_categorical_prep_to_pipe(features, pipeline, CATEGORICAL_COLUMNS)
    pipeline.steps.append(
        ('drop irrelevant columns',
         FunctionTransformer(lambda df: df.drop(NONE_OUTPUT_COLUMNS, axis='columns'))))

    return features.drop('labels', axis='columns'), features.labels, pipeline
_____no_output_____
MIT
challenge/data_challenge_1.ipynb
rasegiIML/IML.HUJI
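The pipeline above chains `FunctionTransformer` steps that each take a DataFrame and return a DataFrame. A minimal self-contained sketch of the same pattern, with invented toy columns:

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

# Toy table with a column we want to drop; names are made up for illustration.
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'junk': [0, 0, 0]})

pipe = Pipeline([
    ('select columns', FunctionTransformer(lambda d: d[['a', 'b']])),
    ('add sum column', FunctionTransformer(lambda d: d.assign(total=d.a + d.b))),
])
out = pipe.fit_transform(df)  # stateless steps, so fit is a no-op
print(out)
```

Each step stays a plain function over DataFrames, which keeps the feature engineering composable and reusable on the test set.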
Train, predict and export
def evaluate_and_export(estimator: BaseEstimator, X: pd.DataFrame, filename: str):
    preds = (~estimator.predict(X)).astype(int)
    pd.DataFrame(preds, columns=["predicted_values"]).to_csv(filename, index=False)


def create_estimator_from_data(path="../datasets/agoda_cancellation_train.csv",
                               threshold: float = 0.47,
                               optimize_threshold=False,
                               debug=False) -> Pipeline:
    np.random.seed(0)

    # Load data
    raw_df, cancellation_labels, pipeline = create_pipeline_from_data(path)
    train_X = raw_df
    train_y = cancellation_labels
    train_X = pipeline.transform(train_X)

    # Fit model over data
    estimator = AgodaCancellationEstimator(threshold).fit(train_X, train_y)
    pipeline.steps.append(('estimator', estimator))
    return pipeline


def export_test_data(pipeline: Pipeline, path="../datasets/test_set_week_1.csv") -> NoReturn:
    data = read_data_file(path)

    # Store model predictions over test set
    id1, id2, id3 = 209855253, 205843964, 212107536
    evaluate_and_export(pipeline, data, f"{id1}_{id2}_{id3}.csv")


pipeline = create_estimator_from_data()
export_test_data(pipeline)
C:\conda\envs\iml.env\lib\site-packages\ipykernel_launcher.py:117: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
MIT
challenge/data_challenge_1.ipynb
rasegiIML/IML.HUJI
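The estimator's `threshold` turns predicted cancellation probabilities into hard labels. One way to pick it is to scan candidate thresholds against held-out labels; a numpy-only sketch with made-up probabilities in place of `predict_proba` output:

```python
import numpy as np

# Hypothetical predicted probabilities and true labels, invented for illustration.
probs = np.array([0.1, 0.42, 0.33, 0.8, 0.66, 0.22])
truth = np.array([0,   0,    1,    1,   1,    0])

thresholds = np.linspace(0.05, 0.95, 19)
accs = [np.mean((probs > t).astype(int) == truth) for t in thresholds]
best = thresholds[int(np.argmax(accs))]
print("best threshold:", best)
```

On real data this scan would be done inside cross-validation to avoid tuning the threshold on the test set.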
Machine learning to predict age from rs-fmriThe goal is to extract data from several rs-fmri images, and use that data as features in a machine learning model. We will integrate what we've learned in the previous machine learning lecture to build an unbiased model and test it on a left out sample.We're going to use a dataset that was prepared for this tutorial by [Elizabeth Dupre](https://elizabeth-dupre.com//), [Jake Vogel](https://github.com/illdopejake) and [Gael Varoquaux](http://gael-varoquaux.info/), by preprocessing [ds000228](https://openneuro.org/datasets/ds000228/versions/1.0.0) (from [Richardson et al. (2018)](https://dx.doi.org/10.1038%2Fs41467-018-03399-2)) through [fmriprep](https://github.com/poldracklab/fmriprep). They also created this tutorial and should be credited for it. Load the data
# change this to the location where you downloaded the data
wdir = '/data/ml_tutorial/'

# Now fetch the data
from glob import glob
import os

data = sorted(glob(os.path.join(wdir, '*.gz')))
confounds = sorted(glob(os.path.join(wdir, '*regressors.tsv')))
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
How many individual subjects do we have?
# len(data.func)
len(data)
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Extract features![feat_xtrct](https://ars.els-cdn.com/content/image/1-s2.0-S1053811919301594-gr1.jpg) In order to do our machine learning, we will need to extract features from our rs-fmri images. Specifically, we will extract signals from a brain parcellation and compute a correlation matrix, representing regional coactivation between regions. We will practice on one subject first, then we'll extract data for all subjects. Retrieve the atlas for extracting features and an example subject Since we're using rs-fmri data, it makes sense to use an atlas defined using rs-fmri data. This paper has many excellent insights about what kind of atlas to use for an rs-fmri machine learning task. See in particular Figure 5. https://www.sciencedirect.com/science/article/pii/S1053811919301594?via%3Dihub Let's use the MIST atlas, created here in Montreal using the BASC method. This atlas has multiple resolutions, for larger networks or finer-grained ROIs. Let's use a 64-ROI atlas to allow some detail, but ultimately keep our connectivity matrices manageable. Here is a link to the MIST paper: https://mniopenresearch.org/articles/1-3
from nilearn import datasets

parcellations = datasets.fetch_atlas_basc_multiscale_2015(version='sym')
atlas_filename = parcellations.scale064

print('Atlas ROIs are located in nifti image (4D) at: %s' % atlas_filename)
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Let's have a look at that atlas
from nilearn import plotting

plotting.plot_roi(atlas_filename, draw_cross=False)
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Great, let's load an example 4D fmri time-series for one subject
fmri_filenames = data[0]
print(fmri_filenames)
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Let's have a look at the image! Because it is a 4D image, we can only look at one slice at a time. Or, better yet, let's look at an average image!
from nilearn import image

# mean_img collapses the 4D time series into a single 3D average image
averaged_Img = image.mean_img(fmri_filenames)
plotting.plot_stat_map(averaged_Img)
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Extract signals on a parcellation defined by labelsUsing the NiftiLabelsMaskerSo we've loaded our atlas and 4D data for a single subject. Let's practice extracting features!
from nilearn.input_data import NiftiLabelsMasker

masker = NiftiLabelsMasker(labels_img=atlas_filename, standardize=True,
                           memory='nilearn_cache', verbose=1)

# Here we go from nifti files to the signal time series in a numpy
# array. Note how we give confounds to be regressed out during signal
# extraction
conf = confounds[0]
time_series = masker.fit_transform(fmri_filenames, confounds=conf)
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
So what did we just create here?
type(time_series)
time_series.shape
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
What are these "confounds" and how are they used?
import pandas

conf_df = pandas.read_table(conf)
conf_df.head()
conf_df.shape
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
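fmriprep writes dozens of confound columns, and often only a subset (e.g. the motion parameters) is passed to the masker. A sketch on a toy table; the column names follow fmriprep's naming convention but the values are made up:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the fmriprep regressors file loaded above.
rng = np.random.default_rng(0)
conf_df = pd.DataFrame(rng.normal(size=(5, 4)),
                       columns=['trans_x', 'trans_y', 'rot_x', 'csf'])

# Keep only motion-related columns.
motion_cols = [c for c in conf_df.columns if c.startswith(('trans', 'rot'))]
motion = conf_df[motion_cols].values  # a numpy array can be passed as confounds
print(motion.shape)
```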
Compute and display a correlation matrix
from nilearn.connectome import ConnectivityMeasure

correlation_measure = ConnectivityMeasure(kind='correlation')
correlation_matrix = correlation_measure.fit_transform([time_series])[0]
correlation_matrix.shape
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
Plot the correlation matrix
import numpy as np

# Mask the main diagonal for visualization:
np.fill_diagonal(correlation_matrix, 0)

plotting.plot_matrix(correlation_matrix, figure=(10, 8),
                     labels=range(time_series.shape[-1]),
                     vmax=0.8, vmin=-0.8,
                     reorder=False)  # set reorder=True for a block-like ordering
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
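For machine learning we usually flatten each symmetric correlation matrix into a 1-D feature vector by keeping only the lower triangle (the diagonal and the upper triangle are redundant). A numpy-only sketch on a small made-up matrix:

```python
import numpy as np

# A tiny 3-ROI correlation matrix, invented for illustration.
mat = np.array([[1.0,  0.3, -0.2],
                [0.3,  1.0,  0.5],
                [-0.2, 0.5,  1.0]])

# Indices of the entries strictly below the diagonal.
i, j = np.tril_indices(mat.shape[0], k=-1)
features = mat[i, j]  # one value per unique region pair
print(features)
```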
Extract features from the whole datasetHere, we are going to use a for loop to iterate through each image and use the same techniques we learned above to extract rs-fmri connectivity features from every subject.
# Here is a really simple for loop
for i in range(10):
    print('the number is', i)

container = []
for i in range(10):
    container.append(i)

container
_____no_output_____
BSD-3-Clause
notebooks/.ipynb_checkpoints/machine_learning_tutorial-checkpoint.ipynb
orchid00/nipype_arcana_workshop
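The full extraction loop pairs each functional file with its confounds, extracts the time series, and stacks one flattened correlation matrix per subject. Because nilearn objects need real images, this sketch simulates the masker output with random time series; on real data, `masker.fit_transform(func, confounds=conf)` would replace the simulated array:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois = 64

all_features = []
for subject in range(5):  # stands in for iterating over zip(data, confounds)
    # In the real loop: time_series = masker.fit_transform(func, confounds=conf)
    time_series = rng.normal(size=(150, n_rois))
    corr = np.corrcoef(time_series.T)      # (n_rois, n_rois) correlation matrix
    i, j = np.tril_indices(n_rois, k=-1)   # keep each region pair once
    all_features.append(corr[i, j])

X = np.vstack(all_features)  # one row of connectivity features per subject
print(X.shape)
```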