Load a saved Move. Re-load it from the file, just for example purposes.
with open('mymove.json') as f:
    loaded_move = Move.load(f)
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
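The save/load round-trip above relies on the Move serializing to JSON. As a rough illustration of the pattern (not pypot's actual on-disk format; the positions dictionary below is hypothetical), a recording can be persisted and restored with the standard json module:

```python
import json
import io

# Hypothetical recorded move: timestamps mapped to motor positions
move_data = {"framerate": 50.0,
             "positions": {"0.02": {"m1": 10.5}, "0.04": {"m1": 11.0}}}

# Save to, then re-load from, a file-like object, mirroring Move.save/Move.load
buf = io.StringIO()
json.dump(move_data, buf)
buf.seek(0)
loaded = json.load(buf)
print(loaded["positions"]["0.02"]["m1"])  # -> 10.5
```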
Create a Move Player and Play Back a Recorded Move First, create the object used to re-play a recorded Move.
player = MovePlayer(poppy, loaded_move)
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
You can start the playback whenever you want:
player.start()
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
You can play your move as many times as you want. Note that we use the wait_to_stop method to wait for the previous playback to end before starting it again.
for _ in range(3):
    player.start()
    player.wait_to_stop()
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
How many stations are there in the database?
df["nombre"].nunique()
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
jorgemauricio/INIFAP_Course
mit
What is the accumulated precipitation in the database?
df["prec"].sum()
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
jorgemauricio/INIFAP_Course
mit
Which are the 5 years with the most precipitation in the database?
# first, generate the year column
df["year"] = pd.DatetimeIndex(df["fecha"]).year
# group the data by year
grouped = df.groupby("year").sum()["prec"]
grouped.sort_values(ascending=False).head()
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
jorgemauricio/INIFAP_Course
mit
Which station has the highest accumulated precipitation in the database?
grouped_st = df.groupby("nombre").sum()["prec"]
grouped_st.sort_values(ascending=False).head(2)
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
jorgemauricio/INIFAP_Course
mit
In which year and month does the highest accumulated precipitation in the database occur?
# first, generate the month column
df["month"] = pd.DatetimeIndex(df["fecha"]).month
# group the data by year and month
grouped_ym = df.groupby(["year", "month"]).sum()["prec"]
grouped_ym.sort_values(ascending=False).head()
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
jorgemauricio/INIFAP_Course
mit
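What groupby followed by sum computes can be sketched in plain Python; the sample records below are made up, standing in for rows of the weather DataFrame:

```python
# Plain-Python equivalent of df.groupby("year").sum()["prec"]
records = [
    {"year": 2015, "prec": 10.0},
    {"year": 2015, "prec": 5.0},
    {"year": 2016, "prec": 7.5},
]

totals = {}
for row in records:
    totals[row["year"]] = totals.get(row["year"], 0.0) + row["prec"]

# Sort years by total precipitation, descending, like sort_values(ascending=False)
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # -> [(2015, 15.0), (2016, 7.5)]
```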
Bonus: display the data in a heatmap.
# pivot the data into a table to display it as a heatmap
table = pd.pivot_table(df, values="prec", index=["year"], columns=["month"], aggfunc=np.sum)
# display the data table
table
# display the data as a heatmap
sns.heatmap(table)
# change the heatmap colors
sns.heatmap(table,...
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
jorgemauricio/INIFAP_Course
mit
We'll set up a distributed.Client locally. In the real world you could connect to a cluster of dask-workers.
client = Client()
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
For demonstration, we'll use the perennial NYC taxi cab dataset. Since we're just running things on a laptop, we'll grab only the first month's worth of data.
if not os.path.exists('trip.csv'):
    s3 = S3FileSystem(anon=True)
    s3.get("dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv", "trip.csv")
ddf = dd.read_csv("trip.csv")
ddf = ddf.repartition(npartitions=8)
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
I happen to know that some of the values in this dataset are suspect, so let's drop them. Scikit-learn doesn't support filtering observations inside a pipeline (yet), so we'll do this before anything else.
# these filter out less than 1% of the observations
ddf = ddf[(ddf.trip_distance < 20) & (ddf.fare_amount < 150)]
ddf = ddf.repartition(npartitions=8)
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
Now, we'll split our DataFrame into a train and test set, and select our feature matrix and target column (whether the passenger tipped).
df_train, df_test = ddf.random_split([0.8, 0.2], random_state=2)
columns = ['VendorID', 'passenger_count', 'trip_distance', 'payment_type', 'fare_amount']
X_train, y_train = df_train[columns], df_train['tip_amount'] > 0
X_test, y_test = df_test[columns], df_test['tip_amount'] > 0
X_train, y_train, X_test, y_test = p...
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
With our training data in hand, we fit our logistic regression. Nothing here should be surprising to those familiar with scikit-learn.
%%time
# this is a *dask-glm* LogisticRegression, not scikit-learn
lm = LogisticRegression(fit_intercept=False)
lm.fit(X_train.values, y_train.values)
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
Again, following the lead of scikit-learn, we can measure the performance of the estimator on the training dataset using the .score method. For LogisticRegression this is the mean accuracy score (the percentage of predictions that matched the actual values).
%%time
lm.score(X_train.values, y_train.values).compute()
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
and on the test dataset:
%%time
lm.score(X_test.values, y_test.values).compute()
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
Pipelines The bulk of my time "doing data science" is data cleaning and pre-processing. Actually fitting an estimator or making predictions is a relatively small proportion of the work. You could manually do all your data-processing tasks as a sequence of function calls starting with the raw data. Or, you could use sci...
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.pipeline import make_pipeline
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
First let's write a little transformer to convert columns to Categoricals. If you aren't familiar with scikit-learn transformers, the basic idea is that the transformer must implement two methods: .fit and .transform. .fit is called during training. It learns something about the data and records it on self. Then .transfo...
class CategoricalEncoder(BaseEstimator, TransformerMixin):
    """Encode `categories` as pandas `Categorical`

    Parameters
    ----------
    categories : Dict[str, list]
        Mapping from column name to list of possible values
    """
    def __init__(self, categories):
        self.categories = categories
    ...
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
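The fit/transform protocol described above can be illustrated without scikit-learn at all. This toy scaler (a simplified stand-in, not the notebook's CategoricalEncoder) learns a mean during fit, records it on self with the conventional trailing underscore, and applies it in transform:

```python
class MeanCenterer:
    """Toy transformer following the scikit-learn fit/transform protocol."""

    def fit(self, X, y=None):
        # Learn something about the data and record it on self
        self.mean_ = sum(X) / len(X)
        return self  # fit conventionally returns self so calls can be chained

    def transform(self, X):
        # Apply what was learned during fit
        return [x - self.mean_ for x in X]

centerer = MeanCenterer().fit([1.0, 2.0, 3.0])
print(centerer.transform([2.0, 4.0]))  # -> [0.0, 2.0]
```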
We'll also want a daskified version of scikit-learn's StandardScaler that won't eagerly convert a dask.array to a numpy array (N.B. the scikit-learn version has more features and error handling, but this will work for now).
class StandardScaler(BaseEstimator, TransformerMixin):
    def __init__(self, columns=None, with_mean=True, with_std=True):
        self.columns = columns
        self.with_mean = with_mean
        self.with_std = with_std

    def fit(self, X, y=None):
        if self.columns is None:
            self.columns_ = X.col...
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
Finally, I've written a dummy encoder transformer that converts categoricals to dummy-encoded integer columns. The full implementation is a bit long for a blog post, but you can see it here.
from dummy_encoder import DummyEncoder

pipe = make_pipeline(
    CategoricalEncoder({"VendorID": [1, 2], "payment_type": [1, 2, 3, 4, 5]}),
    DummyEncoder(),
    StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),
    LogisticRegression(fit_intercept=False)
)
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
So that's our pipeline. We can go ahead and fit it just like before, passing in the raw data.
%%time
pipe.fit(X_train, y_train.values)
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
And we can score it as well. The Pipeline ensures that all of the necessary transformations take place before calling the estimator's score method.
pipe.score(X_train, y_train.values).compute()
pipe.score(X_test, y_test.values).compute()
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
Grid Search As explained earlier, Pipelines and grid search go hand-in-hand. Let's run a quick example with dask-searchcv.
from sklearn.model_selection import GridSearchCV
import dask_searchcv as dcv
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
We'll search over two hyperparameters: whether or not to standardize the variance of each column in StandardScaler, and the strength of the regularization in LogisticRegression. This involves fitting many models, one for each combination of parameters. dask-searchcv is smart enough to know that early stages in the pipeline (...
param_grid = {
    'standardscaler__with_std': [True, False],
    'logisticregression__lamduh': [.001, .01, .1, 1],
}

pipe = make_pipeline(
    CategoricalEncoder({"VendorID": [1, 2], "payment_type": [1, 2, 3, 4, 5]}),
    DummyEncoder(),
    StandardScaler(columns=['passenger_count', 'trip_dis...
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
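Under the hood a grid search fits one model per combination of parameters. The expansion of a param_grid like the one above into those candidate combinations can be sketched with itertools.product (this mirrors the enumeration, not dask-searchcv's actual internals):

```python
from itertools import product

param_grid = {
    'standardscaler__with_std': [True, False],
    'logisticregression__lamduh': [.001, .01, .1, 1],
}

# One dict per candidate parameter combination, as a grid search would enumerate
keys = sorted(param_grid)
candidates = [dict(zip(keys, values))
              for values in product(*(param_grid[k] for k in keys))]
print(len(candidates))  # -> 8 (2 x 4 combinations)
```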
Now we have access to the usual attributes like cv_results_ learned by the grid search object:
pd.DataFrame(gs.cv_results_)
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
And we can do our usual checks on model fit for the training set:
gs.score(X_train, y_train.values).compute()
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
And the test set:
gs.score(X_test, y_test.values).compute()
docs/source/examples/dask-glm.ipynb
daniel-severo/dask-ml
bsd-3-clause
<header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" align="left"/> <img src="images/inf.png" alt="" align="right"/> </header> <br/><br/><br/><br/><br/> IWI131 Computer Programming Sebastián Flores http://progra.usm.cl/ https://www.github.com/usantamaria/iwi131 Solutions to Exam 3, S1 2...
# Scrambled version (lines out of order)
def empresas(post):
emp.append(e)
arch_P.close()
for li in arch_P:
r, p, e = li.strip().split('#')
if e not in emp:
arch_P = open(post)
emp = list()
return emp

# Solucion Ordenada (ordered solution)
def empresas(post):
    arch_P = open(post)
    emp = list()
    for li in arch_P:
        r, p, e = li.strip().split('#')
        if e not...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
The second function solves the problem described above (generating one file per company, each containing the graduates who applied for some position at the company, in the format rut;nombre;puesto), taking as parameters the name of the graduates file and the name of the applications file.
# Scrambled lines (to be reordered)
arch_E = open(e + '.txt', 'w')
for pos in arch_P:
arch_T.close()
def registros(tit, post):
arch_E.write(li.format(r, n, p))
emp = empresas(post)
arch_T = open(tit)
n, r2 = titu.strip().split(';')
arch_P.close()
for e in emp:
arch_E.close()
if e2 == e:
r, p, e2 = pos.strip().split('#')
if r2 == r:
li = '{0};{1};{2}\n'
a...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Question 2 [35%] Andrónico Bank is a very humble bank that until recently used only pencil and paper to manage all the information about its equally humble clients. As a way to improve its processes, Andrónico Bank wants to use a Python-based computer system. For that reason the information is transferred ...
def buscar_clientes(nombre_archivo, clase_buscada):
    archivo = open(nombre_archivo)
    clientes_buscados = {}
    for linea in archivo:
        rut, nombre, clase = linea[:-1].split(";")
        if clase == clase_buscada:
            clientes_buscados[rut] = nombre
    archivo.close()
    return clientes_buscados

prin...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Question 2.b Write a function dar_credito(archivo, rut) that takes as parameters the name of the clients file and a client's rut, and returns True if that client is VIP or False if not. If the client is not found, the function returns False. ```Python dar_credito('clientes.txt', '9999999-k') False ``` E...
def dar_credito(nombre_archivo, rut):
    clientes_VIP = buscar_clientes(nombre_archivo, "VIP")
    return rut in clientes_VIP

print dar_credito('data/clientes.txt', '9999999-k')
print dar_credito('data/clientes.txt', '11231709-k')
print dar_credito('data/clientes.txt', '9234539-9')
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Question 2.c Write a function contar_clientes(archivo) that takes as parameter the name of the clients file and returns a dictionary with the number of clients of each class in the file. ```Python contar_clientes('clientes.txt') {'VIP': 3, 'Pendiente': 1, 'RIP': 2, 'Estandar': 1} ``` Strategy
def contar_clientes(nombre_archivo):
    archivo = open(nombre_archivo)
    cantidad_clases = {}
    for linea in archivo:
        rut, nombre, clase = linea.strip().split(";")
        if clase in cantidad_clases:
            cantidad_clases[clase] += 1
        else:
            cantidad_clases[clase] = 1
    archivo.clo...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Question 3 [40%] Complementing question 2, you are asked to: Question 3.a Write the function nuevo_cliente(archivo, rut, nombre, clase) that takes as parameters the name of the clients file and the rut, name, and class of a new client. The function must add the new client at the end of the file. This function ...
def nuevo_cliente(nombre_archivo, rut, nombre, clase):
    archivo = open(nombre_archivo, "a")
    formato_linea = "{0};{1};{2}\n"
    linea = formato_linea.format(rut, nombre, clase)
    archivo.write(linea)
    archivo.close()
    return None

print nuevo_cliente('data/clientes.txt', '2121211-2', 'Sergio Lagos', '...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Question 3.b Write the function actualizar_clase(archivo, rut, clase) that takes as parameters the name of the clients file, a client's rut, and a new class. The function must change the class of the client with the given rut to clase in the file. This function returns True if it manages to make the ...
def actualizar_clase(nombre_archivo, rut_buscado, nueva_clase):
    archivo = open(nombre_archivo)
    lista_lineas = []
    formato_linea = "{0};{1};{2}\n"
    rut_hallado = False
    for linea in archivo:
        rut, nombre, clase = linea[:-1].split(";")
        if rut == rut_buscado:
            nueva_linea = formato_l...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Question 3.c Write a function filtrar_clientes(archivo, clase) that takes as parameters the name of the clients file and a client class. The function must create a file clientes_[clase].txt with the ruts and names of the clients belonging to that class. Note that the file must be named according to ...
def filtrar_clientes(nombre_archivo, clase_buscada):
    archivo_original = open(nombre_archivo)
    nombre_archivo_clase = "data/clientes_" + clase_buscada + ".txt"
    archivo_clase = open(nombre_archivo_clase, "w")
    formato_linea = "{0};{1}\n"
    for linea in archivo_original:
        rut, nombre, clase = linea[:-1].sp...
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
usantamaria/iwi131
cc0-1.0
Get header information
import requests

url = 'http://www.github.com/ibm'
response = requests.get(url)
print(response.status_code)
if response.status_code == 200:
    print('Response status - OK ')
    print(response.headers)
else:
    print('Error making the HTTP request ', response.status_code)
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
Get the body information
import requests

url = 'http://www.github.com/ibm'
response = requests.get(url)
print(response.status_code)
if response.status_code == 200:
    print('Response status - OK ')
    print(response.text)
else:
    print('Error making the HTTP request ', response.status_code)
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
Using a Web API to Collect Data An application programming interface is a set of functions that you call to get access to some service. An API is basically a list of functions and data structures for interfacing with a website's data. The way these work is similar to viewing a web page. When you point your browser to...
import requests

url = "https://api.github.com/orgs/ibm"
response = requests.get(url)
if response.status_code == 200:
    print('Response status - OK ')
    print(response.headers['X-RateLimit-Remaining'])
else:
    print('Error making the HTTP request ', response.status_code)
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
Step 2: Authentication (if required) Authenticate requests to increase the API request limit. Access data that requires authentication. Basic Authentication Pass the user ID and password as parameters in the requests.get function A little risky and prone to hacking. Create a dummy user ID and password OAUTH OAuth 2 is a...
import requests

def GithubAPI(url):
    """ Make a HTTP request for the given URL and send the response body back to the calling function"""
    # Use basic authentication
    response = requests.get(url, auth=("ENTER USER ID", "ENTER PASSWORD"))
    if response.status_code == 200:
        print('Response statu...
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
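Basic authentication just sends `user:password` base64-encoded in an Authorization header; requests builds that header for you from the auth tuple. What gets sent can be shown without any network call (the credentials below are dummies):

```python
import base64

# What requests does with auth=("user", "password"): build a Basic auth header
user, password = "user", "password"  # dummy credentials
token = base64.b64encode(f"{user}:{password}".encode("ascii")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}
print(headers["Authorization"])  # -> Basic dXNlcjpwYXNzd29yZA==
```

This is also why basic auth is risky over plain HTTP: base64 is an encoding, not encryption, so anyone who sees the header can recover the password.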
Step 3: Parse the response The json module gives us functions to convert the JSON response to a Python-readable data structure. Write a program to get the number of OSS projects started by IBM
import requests
import json

def GithubAPI(url):
    """ Make a HTTP request for the given URL and send the response body back to the calling function"""
    response = requests.get(url)
    if response.status_code == 200:
        print('Response status - OK ')
        return response.json()
    else:
        print...
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
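Since response.json() is just JSON decoding of the response body, the parsing step can be shown without a network call. The payload below is a made-up fragment shaped like the GitHub orgs endpoint's response:

```python
import json

# A made-up fragment shaped like GitHub's /orgs/:org response
body = '{"login": "ibm", "public_repos": 42}'

org = json.loads(body)  # what response.json() does with the response body
print(org["public_repos"])  # -> 42
```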
Step 3: Follow the url information from the Web API to find what you need *Let us collect the information regarding the different projects started by IBM *
import requests
import json

def GithubAPI(url):
    """ Make a HTTP request for the given URL and send the response body back to the calling function"""
    response = requests.get(url, auth=("ENTER USER ID", "ENTER PASSWORD"))
    if response.status_code == 200:
        print('Response status - OK ')
        ...
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
Step 4: Paginate to get data from other pages Traverse the pages if the data is spread across multiple pages
import requests
import json

def GithubAPI(url):
    """ Make a HTTP request for the given URL and send the response body back to the calling function"""
    response = requests.get(url, auth=("ENTER USER ID", "ENTER PASSWORD"))
    if response.status_code == 200:
        print('Response status - OK ')
        ...
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
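The pagination loop can be sketched with a stubbed fetch function standing in for the authenticated requests.get call (the page contents and page size here are invented):

```python
def fetch_page(page):
    """Stub for an API call returning one page of results (3 pages in total)."""
    pages = {1: ["repo-a", "repo-b"], 2: ["repo-c", "repo-d"], 3: ["repo-e"]}
    return pages.get(page, [])

# Traverse pages until an empty page signals the end of the data
all_repos = []
page = 1
while True:
    batch = fetch_page(page)
    if not batch:
        break
    all_repos.extend(batch)
    page += 1

print(all_repos)  # -> ['repo-a', 'repo-b', 'repo-c', 'repo-d', 'repo-e']
```

Real APIs signal the last page differently (an empty list, a `Link` header, or a `next` URL in the body), so the stopping condition should follow the API's documentation.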
3. Write a CSV Let's try to write the repos into a CSV file. Write code to append data row-wise to a CSV file
import csv

WRITE_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv"

with open(WRITE_CSV, 'at', encoding='utf-8', newline='') as csv_obj:
    write = csv.writer(csv_obj)  # Note it is csv.writer not reader
    write.writerow(['REPO ID', 'REPO NAME'])
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
What do you think will happen if we use 'wt' as the mode instead of 'at'? Write a program that saves the IBM repositories into the CSV file, so that each row is a new repository, column 1 is the ID, and column 2 is the name.
# Enter code here
import requests
import json
import csv

WRITE_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv"

def appendcsv(data_list):
    with open(WRITE_CSV, 'at', encoding='utf-8', newline='') as csv_obj:
        write = csv.writer(csv_obj)  # Note it...
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
km-Poonacha/python4phd
gpl-3.0
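The 'wt' vs 'at' question can be answered with a small experiment: 'wt' truncates the file on every open, while 'at' appends to whatever is already there. A sketch using a temporary file (paths and rows are made up):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.csv")

# 'at' appends: opening the file twice leaves two rows
for repo in [["1", "repo-a"], ["2", "repo-b"]]:
    with open(path, "at", encoding="utf-8", newline="") as f:
        csv.writer(f).writerow(repo)
with open(path, encoding="utf-8", newline="") as f:
    append_rows = list(csv.reader(f))

# 'wt' truncates: the second open wipes the first row
for repo in [["1", "repo-a"], ["2", "repo-b"]]:
    with open(path, "wt", encoding="utf-8", newline="") as f:
        csv.writer(f).writerow(repo)
with open(path, encoding="utf-8", newline="") as f:
    write_rows = list(csv.reader(f))

print(len(append_rows), len(write_rows))  # -> 2 1
```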
Verify CSV files exist In the seventh lab of this series 4a_sample_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
%%bash
ls *.csv

%%bash
head -5 *.csv
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Create Keras model Set CSV Columns, label column, and column defaults. Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function. * CSV_COLUMNS are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files...
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]

# Add string name for label column
LABEL_COLUMN = "weight_pounds"...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Make dataset of features and label from CSV files. Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourselves from reinventing the wheel and use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the colu...
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.

    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label

# ...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Create input layers for raw features. We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining: * shape: A shape tuple (integers), not including the batch size. For instance, shape=(...
def create_input_layers():
    """Creates dictionary of input layers for each feature.

    Returns:
        Dictionary of `tf.Keras.layers.Input` layers for each feature.
    """
    inputs = {
        colname: tf.keras.layers.Input(
            name=colname, shape=(), dtype="float32")
        for colname in ["mother_...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Create feature columns for inputs. Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
def categorical_fc(name, values):
    """Helper function to wrap categorical feature by indicator column.

    Args:
        name: str, name of feature.
        values: list, list of strings of categorical values.
    Returns:
        Indicator column of categorical feature.
    """
    cat_column = tf.feature_column.c...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Create DNN dense hidden layers and output layer. So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output...
def get_model_outputs(inputs):
    """Creates model architecture and returns outputs.

    Args:
        inputs: Dense tensor used as inputs to model.
    Returns:
        Dense tensor output from the model.
    """
    # Create two hidden layers of [64, 32] just like the BQML DNN
    h1 = tf.keras.layers.Dense(64, ...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Build DNN model tying all of the pieces together. Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Kera...
def build_dnn_model():
    """Builds simple DNN using Keras Functional API.

    Returns:
        `tf.keras.models.Model` object.
    """
    # Create input layer
    inputs = create_input_layers()

    # Create feature columns
    feature_columns = create_feature_columns()

    # The constructor for DenseFeatures take...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Run and evaluate model Train and evaluate. We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to l...
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5  # training dataset repeats, it'll wrap around
NUM_EVALS = 5  # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000

trainds = load_dataset(
    pattern="train*", batch_size=TRAIN_BATCH_SIZE...
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
The lag is bounded, because every cycle (here, the loop) produces a delay of 0.
a.delay_automaton()
doc/notebooks/automaton.delay_automaton.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
State 1 has a delay of $(3, 0)$ because the first tape is 3 characters longer than the shortest tape (the second one) for all possible inputs leading to this state.
s = ctx.expression(r"(abc|x+ab|y)(d|z)").automaton()
s
s.delay_automaton()
doc/notebooks/automaton.delay_automaton.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
Check the head of the DataFrame.
data.head()
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
How many rows and columns are there?
data.shape
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
What is the average Purchase Price?
data["Purchase Price"].mean()
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
What were the highest and lowest purchase prices?
data["Purchase Price"].max()
data["Purchase Price"].min()
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
How many people have English 'en' as their Language of choice on the website?
data[data['Language'] == 'en'].count()[0]
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
How many people have the job title of "Lawyer" ?
data[data['Job'] == 'Lawyer'].count()[0]
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
How many people made the purchase during the AM and how many people made the purchase during PM ? (Hint: Check out value_counts() )
data['AM or PM'].value_counts()
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
What are the 5 most common Job Titles?
data['Job'].value_counts().head()
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
Someone made a purchase that came from Lot: "90 WT" , what was the Purchase Price for this transaction?
data['Purchase Price'][data['Lot'] == '90 WT']
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
What is the email of the person with the following Credit Card Number: 4926535242672853
data['Email'][data['Credit Card'] == 4926535242672853]
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
How many people have American Express as their Credit Card Provider and made a purchase above $95 ?
data2 = data[data['Purchase Price'] > 95]
data2[data2['CC Provider'] == 'American Express'].count()[0]
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
Hard: How many people have a credit card that expires in 2025?
data[data['CC Exp Date'].str.contains('/25')].shape[0]
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
Hard: What are the top 5 most popular email providers/hosts (e.g. gmail.com, yahoo.com, etc...)
data['Email'].apply(lambda email: email.split('@')[1]).value_counts().head(5)
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
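Extracting the provider means splitting each address on '@' and counting what comes after it. The same idea in plain Python, with made-up sample addresses, mirrors what value_counts() does:

```python
from collections import Counter

emails = ["a@gmail.com", "b@yahoo.com", "c@gmail.com", "d@hotmail.com"]

# Take the part after '@' and count occurrences, like value_counts()
providers = Counter(email.split("@")[1] for email in emails)
print(providers.most_common(1))  # -> [('gmail.com', 2)]
```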
Data Visualization Implement a bar plot for top 5 most popular email providers/hosts Plot distribution of Purchase Price
sns.distplot(data['Purchase Price'])
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
Implement countplot on Language
sns.countplot(data['Language'])

# Feel free to plot more graphs to dive deeper into the dataset.
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
nguyenphucdev/BookManagementSample
mit
<div class="alert alert-warning"> **Note:** The tutorial is generated from Jupyter notebooks which work in the "interactive" mode (like in the LArray Editor console). In the interactive mode, there is no need to use the print() function to display the content of a variable. Simply writing its name is enough. The sam...
s = 1 + 2
# In the interactive mode, there is no need to use the print() function
# to display the content of the variable 's'.
# Simply typing 's' is enough
s

# In the interactive mode, there is no need to use the print() function
# to display the result of an expression
1 + 2
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Axis An Axis represents a dimension of an Array object. It consists of a name and a list of labels. There are several ways to create an axis:
# labels given as a list
time = Axis([2007, 2008, 2009, 2010], 'time')
# create an axis using one string
gender = Axis('gender=M,F')
# labels generated using the special syntax start..end
age = Axis('age=0..100')

time, gender, age
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
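The start..end shorthand just expands to an inclusive integer range. A rough illustration of the expansion (a toy parser, not LArray's actual implementation):

```python
def expand_labels(spec):
    """Expand a 'name=start..end' spec into (name, labels), inclusive of end."""
    name, rng = spec.split("=")
    start, end = rng.split("..")
    return name, list(range(int(start), int(end) + 1))

name, labels = expand_labels("age=0..5")
print(name, labels)  # -> age [0, 1, 2, 3, 4, 5]
```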
<div class="alert alert-warning"> **Warning:** When using the string syntax `"axis_name=list,of,labels"` or `"axis_name=start..end"`, LArray will automatically infer the type of labels.<br> For instance, the command line `age = Axis("age=0..100")` will create an age axis with labels of type `int`.<br><br> Mixi...
# When a string is passed to the Axis() constructor, LArray will automatically infer the type of the labels
age = Axis("age=0..5")
age

# Mixing special characters like + with numbers leads to an axis with labels of type str instead of int
age = Axis("age=0..4,5+")
age
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
See the Axis section of the API Reference to explore all methods of Axis objects. Groups A Group represents a selection of labels from an Axis. It can optionally have a name (using operator &gt;&gt;). Groups can be used when selecting a subset of an array and in aggregations. Group objects are created as follows:
age = Axis('age=0..100')

# create an anonymous Group object 'teens'
teens = age[10:18]
teens

# create a Group object 'pensioners' with a name
pensioners = age[67:] >> 'pensioners'
pensioners
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
It is possible to set a name or to rename a group after its declaration:
# method 'named' returns a new group with the given name
teens = teens.named('teens')

# operator >> is just a shortcut for the call of the method named
teens = teens >> 'teens'
teens
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
<div class="alert alert-warning"> **Warning:** Mixing slices and individual labels inside the `[ ]` will generate **several groups** (a tuple of groups) instead of a single group.<br>If you want to create a single group using both slices and individual labels, you need to use the `.union()` method (see below). </d...
# mixing slices and individual labels leads to the creation of several groups (a tuple of groups) age[0:10, 20, 30, 40] # the union() method allows mixing slices and individual labels to create a single group age[0:10].union(age[20, 30, 40])
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
See the Group section of the API Reference to explore all methods of Group objects. Array An Array object represents a multidimensional array with labeled axes. Create an array from scratch To create an array from scratch, you need to provide the data and a list of axes. Optionally, metadata (title, description, creat...
# define axes age = Axis('age=0-9,10-17,18-66,67+') gender = Axis('gender=female,male') time = Axis('time=2015..2017') # list of the axes axes = [age, gender, time] # define some data. This is the Belgian population (in thousands). Source: Eurostat. data = [[[633, 635, 634], [663, 665, 664]], [[484, 4...
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Metadata can be added to an array at any time using:
arr.meta.description = 'array containing random values between 0 and 100' arr.meta
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
<div class="alert alert-warning"> **Warning:** <ul> <li>Currently, only the HDF (.h5) file format supports saving and loading array metadata.</li> <li>Metadata is not kept when actions or methods are applied on an array except for operations modifying the object in-place, such as `population[age < ...
# start defines the starting value of data ndtest((3, 3), start=-1) # start defines the starting value of data # label_start defines the starting index of labels ndtest((3, 3), start=-1, label_start=2) # empty generates an uninitialised array with correct axes # (much faster but use with care!). # This is not really random...
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
All the above functions exist in (func)_like variants which take their axes from another array.
ones_like(arr)
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Create an array using the special sequence function (see the documentation of sequence in the API Reference for more examples):
# With initial=1.0 and inc=0.5, we generate the sequence 1.0, 1.5, 2.0, 2.5, 3.0, ... sequence(age, initial=1.0, inc=0.5)
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
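The arithmetic behind this call can be sketched in plain Python: with these arguments, position `i` along the axis holds `initial + i * inc`. The axis length of 6 below is an assumption for illustration.

```python
# Illustrative arithmetic behind sequence(axis, initial=1.0, inc=0.5):
# the value at position i is initial + i * inc.
initial, inc, n = 1.0, 0.5, 6
values = [initial + i * inc for i in range(n)]
print(values)  # [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
```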
Inspecting Array objects
# create a test array ndtest([age, gender, time])
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get array summary: metadata + dimensions + description of axes + dtype + size in memory
arr.info
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get axes
arr.axes
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get number of dimensions
arr.ndim
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get length of each dimension
arr.shape
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get total number of elements of the array
arr.size
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get type of internal data (int, float, ...)
arr.dtype
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Get size in memory
arr.memory_used
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
Display the array in the viewer (graphical user interface) in read-only mode. This will open a new window and block execution of the rest of the code until the window is closed! Requires PyQt to be installed. python view(arr) Or load it in Excel: python arr.to_excel() Extract an axis from an array It is possible to extract an a...
# extract the 'time' axis belonging to the 'arr' array time = arr.time time
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
More on Array objects To know how to save and load arrays in CSV, Excel or HDF format, please refer to the Loading and Dumping Arrays section of the tutorial. See the Array section of the API Reference to explore all methods of Array objects. Session A Session object is a dictionary-like object used to gather several a...
gender = Axis("gender=Male,Female") time = Axis("time=2013..2017") # create an empty session demography_session = Session() # add axes to the session demography_session.gender = gender demography_session.time = time # add arrays to the session demography_session.population = zeros((gender, time)) demography_session....
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
or you can create and populate a session in one step:
gender = Axis("gender=Male,Female") time = Axis("time=2013..2017") demography_session = Session(gender=gender, time=time, population=zeros((gender, time)), births=zeros((gender, time)), deaths=zeros((gender, time)), meta=Metadata(title='Demographic Model of Belgium', descrip...
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
gdementen/larray
gpl-3.0
All the different models in scikit-learn follow a consistent structure. The class is passed any parameters needed at initialization. In this case none are needed. The fit method takes the features and the target as the parameters X and y. The predict method takes an array of features and returns the predicted values ...
diabetes = datasets.load_diabetes() X = diabetes.data y = diabetes.target clf = linear_model.LinearRegression() clf.fit(X, y) plt.plot(y, clf.predict(X), 'k.') plt.show() from sklearn import metrics metrics.mean_squared_error(y, clf.predict(X))
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
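To make the metric concrete, here is the mean squared error computed by hand on toy numbers (not the diabetes predictions above):

```python
# MSE = average of squared differences between truth and prediction.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(mse)  # 0.375
```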
Although this single number might seem unimpressive, metrics are a key component for model evaluation. As a simple example, we can perform a permutation test to determine whether we might see this performance by chance.
diabetes = datasets.load_diabetes() X = diabetes.data y = diabetes.target clf = linear_model.LinearRegression() clf.fit(X, y) error = metrics.mean_squared_error(y, clf.predict(X)) rounds = 1000 np.random.seed(0) errors = [] for i in range(rounds): y_shuffle = y.copy() np.random.shuffle(y_shuffle) clf_sh...
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
Training, validation, and test datasets When evaluating different models, the approach taken above is not going to work. Particularly for models with high variance that overfit the training data, we will get very good performance on the training data but perform no better than chance on new data.
from sklearn import tree diabetes = datasets.load_diabetes() X = diabetes.data y = diabetes.target clf = tree.DecisionTreeRegressor() clf.fit(X, y) plt.plot(y, clf.predict(X), 'k.') plt.show() metrics.mean_squared_error(y, clf.predict(X)) from sklearn import neighbors diabetes = datasets.load_diabetes() X = diab...
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
Both of these models appear to give perfect solutions, but all they do is map our test samples back to the training samples and return the associated values. To understand how our model truly performs, we need to evaluate its performance on previously unseen samples. The general approach is to divide a dataset into training,...
from sklearn import neighbors diabetes = datasets.load_diabetes() X = diabetes.data y = diabetes.target np.random.seed(0) split = np.random.random(y.shape) > 0.3 X_train = X[split] y_train = y[split] X_test = X[np.logical_not(split)] y_test = y[np.logical_not(split)] print(X_train.shape, X_test.shape) clf = neigh...
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
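The mask-based split above keeps roughly 70% of the samples for training, since each sample passes the `> 0.3` test with probability 0.7. A stdlib-only sketch of the same idea (the exact counts will differ from NumPy's generator):

```python
import random

random.seed(0)
n = 442  # size of the diabetes dataset
mask = [random.random() > 0.3 for _ in range(n)]
n_train = sum(mask)
n_test = n - n_train
print(n_train, n_test)
```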
Model types Scikit-learn includes a variety of different models. The most commonly used algorithms probably include the following: Regression Support Vector Machines Nearest neighbors Decision trees Ensembles & boosting Regression We have already seen several examples of regression. The basic form is: $$f(X) = \bet...
from sklearn import datasets diabetes = datasets.load_diabetes() # Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html X = diabetes.data y = diabetes.target print(X.shape, y.shape) from sklearn import linear_model clf = linear_model.LassoCV(cv=20) clf.fit(X, y) print('Alpha chosen was ', clf....
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
There is an expanded example in the documentation. There are also general classes to handle parameter selection for situations when dedicated classes are not available. As we will often have parameters in preprocessing steps, these general classes will be used much more often.
from sklearn import grid_search from sklearn import neighbors diabetes = datasets.load_diabetes() X = diabetes.data y = diabetes.target np.random.seed(0) split = np.random.random(y.shape) > 0.3 X_train = X[split] y_train = y[split] X_test = X[np.logical_not(split)] y_test = y[np.logical_not(split)] print(X_train....
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
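What a grid search does can be sketched without scikit-learn: enumerate every parameter combination and keep the best one. Here `score()` is a hypothetical stand-in for fitting the model and evaluating it on validation data, and the error values are made up purely for illustration:

```python
from itertools import product

# Minimal sketch of a grid search: try every parameter combination
# and keep the one with the best (lowest) validation error.
def score(n_neighbors, weights):
    # hypothetical validation errors, for illustration only
    table = {(3, 'uniform'): 3200.0, (3, 'distance'): 3100.0,
             (5, 'uniform'): 3000.0, (5, 'distance'): 3050.0}
    return table[(n_neighbors, weights)]

grid = {'n_neighbors': [3, 5], 'weights': ['uniform', 'distance']}
best = min(product(grid['n_neighbors'], grid['weights']),
           key=lambda combo: score(*combo))
print(best)  # (5, 'uniform')
```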
Exercises Load the handwritten digits dataset and choose an appropriate metric Divide the data into a training and test dataset Build a RandomForestClassifier on the training dataset, using cross-validation to evaluate performance Choose another classification algorithm and apply it to the digits dataset. Use grid se...
# 1. Load the handwritten digits dataset and choose an appropriate metric # 2. Divide the data into a training and test dataset from sklearn import datasets, metrics, ensemble digits = datasets.load_digits() X = digits.data y = digits.target print(X.shape, y.shape) np.random.seed(0) split = np.random.random(y.shape...
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
briennakh/BIOF509
mit
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images, each belonging to one of the following classes: * airplane * automobile * bird * cat * deer * dog * frog *...
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 2 sample_id = 1 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit