After StatisticsGen finishes running, you can visualize the outputted statistics. Try playing with the different plots!
context.show(statistics_gen.outputs['statistics'])
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
SchemaGen The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library. Note: The generated schema is best-effort and only tries to infer basic properties of th...
schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=False)
context.run(schema_gen, enable_cache=True)
After SchemaGen finishes running, you can visualize the generated schema as a table.
context.show(schema_gen.outputs['schema'])
Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain. To learn more about schemas, see the SchemaGen documentation. ExampleValidator The ExampleValidator component detects anomalie...
example_validator = tfx.components.ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])
context.run(example_validator, enable_cache=True)
After ExampleValidator finishes running, you can visualize the anomalies as a table.
context.show(example_validator.outputs['anomalies'])
In the anomalies table, you can see that there are no anomalies. This is what you'd expect, since this is the first dataset that you've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, a...
_taxi_constants_module_file = 'taxi_constants.py'

%%writefile {_taxi_constants_module_file}

NUMERICAL_FEATURES = ['trip_miles', 'fare', 'trip_seconds']
BUCKET_FEATURES = [
    'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
    'dropoff_longitude'
]
# Number of buckets used by tf.transform for encoding ea...
Next, you write a preprocessing_fn that takes in raw data as input, and returns transformed features that your model can train on:
_taxi_transform_module_file = 'taxi_transform.py'

%%writefile {_taxi_transform_module_file}

import tensorflow as tf
import tensorflow_transform as tft

# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during ...
Now, you pass in this feature engineering code to the Transform component and run it to transform your data.
transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform, enable_cache=True)
Let's examine the output artifacts of Transform. This component produces two types of outputs: transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models). transformed_examples represents the preprocessed training and evaluation data.
transform.outputs
Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data. You can also take a look at the first three transformed examples:
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name) ...
After the Transform component has transformed your data into features, the next step is to train a model. Trainer The Trainer component will train a model that you define in TensorFlow. By default, the Trainer supports the Estimator API; to use the Keras API, you need to specify the Generic Trainer by setting custom_executor_spec=executo...
_taxi_trainer_module_file = 'taxi_trainer.py'

%%writefile {_taxi_trainer_module_file}

from typing import Dict, List, Text
import os
import glob
from absl import logging
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorf...
Now, you pass in this model code to the Trainer component and run it to train the model.
# use a TFX component to train a TensorFlow model
trainer = tfx.components.Trainer(
    module_file=  # TODO: Your code goes here,
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=tfx.proto.TrainArgs(...
Analyze Training with TensorBoard Take a peek at the trainer artifact. It points to a directory containing the model subdirectories.
model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
Optionally, you can connect TensorBoard to the Trainer to analyze your model's training curves.
model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri

%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}
Evaluator The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. The Evaluator can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automaticall...
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when you may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import ...
Next, you give this configuration to Evaluator and run it.
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.

# The model resolver is only required if performing model validation in addition
# to evaluation. In this case you validate against the latest blessed model. If
# no model has been blessed before (as in this ...
Now let's examine the output artifacts of Evaluator.
evaluator.outputs
Using the evaluation output you can show the default visualization of global metrics on the entire evaluation set.
context.show(evaluator.outputs['evaluation'])
To see the visualization for sliced evaluation metrics, you can directly call the TensorFlow Model Analysis library.
import tensorflow_model_analysis as tfma

# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)

# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
    tfma_result, ...
This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the entire evaluation set. TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the tutorial. Since you...
blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
Now you can also verify the success by loading the validation result record:
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
Pusher The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.
pusher = tfx.components.Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory=_serving_model_dir)))
context.run(pusher, enable_cache=True)
Let's examine the output artifacts of Pusher.
pusher.outputs
In particular, the Pusher will export your model in the SavedModel format, which looks like this:
push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)

for item in model.signatures.items():
    pp.pprint(item)
TF Lattice Aggregate Function Models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/aggregate_function_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_bla...
#@test {"skip": true}
!pip install tensorflow-lattice pydot
docs/tutorials/aggregate_function_models.ipynb
tensorflow/lattice
apache-2.0
Importing required packages:
import tensorflow as tf
import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl

logging.disable(sys.maxsize)
Downloading the Puzzles dataset:
train_dataframe = pd.read_csv(
    'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()

test_dataframe = pd.read_csv(
    'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()
Extract and convert features and labels
# Features:
# - star_rating      rating out of 5 stars (1-5)
# - word_count       number of words in the review
# - is_amazon        1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo   if the review includes a photo of the puzzle
# - num_helpful      number of people that found this revie...
Setting the default values used for training in this guide:
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8
Feature Configs Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models. Note that we must fully specify the feature config fo...
def compute_quantiles(features,
                      num_keypoints=10,
                      clip_min=None,
                      clip_max=None,
                      missing_value=None):
    # Clip min and max if desired.
    if clip_min is not None:
        features = np.maximum(features, clip_min)
        features = np.append(...
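The cell above is truncated, so here is a self-contained sketch of what such a quantile helper typically computes: evenly spaced quantile keypoints over the feature values, with optional clipping. The missing-value handling and exact clipping behavior are assumptions inferred from the visible signature, not the notebook's exact code.

```python
import numpy as np

def compute_quantiles(features,
                      num_keypoints=10,
                      clip_min=None,
                      clip_max=None,
                      missing_value=None):
    # Drop the sentinel used for missing entries, if any (assumed behavior).
    features = np.asarray(features, dtype=float)
    if missing_value is not None:
        features = features[features != missing_value]
    # Clip min and max if desired, appending the bounds so they always
    # appear among the keypoint candidates.
    if clip_min is not None:
        features = np.maximum(features, clip_min)
        features = np.append(features, clip_min)
    if clip_max is not None:
        features = np.minimum(features, clip_max)
        features = np.append(features, clip_max)
    # Evenly spaced quantiles, including both endpoints.
    return np.quantile(features, np.linspace(0.0, 1.0, num=num_keypoints))

keypoints = compute_quantiles(np.arange(101), num_keypoints=5)
# keypoints are the 0th/25th/50th/75th/100th percentiles of 0..100
```

Five keypoints over the integers 0..100 land on 0, 25, 50, 75, and 100, which is the kind of input-keypoint list the feature configs below expect.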
Defining Our Feature Configs Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
    tfl.configs.FeatureConfig(
        name='star_rating',
        lattice_size=2,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=5,
        pwl_calibration_input_keypoints=compute_quantiles(
            ...
Aggregate Function Model To construct a TFL premade model, first construct a model configuration from tfl.configs. An aggregate function model is constructed using the tfl.configs.AggregateFunctionConfig. It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragge...
# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
    feature_configs=feature_configs,
    middle_dimension=MIDDLE_DIM,
    middle_lattice_size=MIDDLE_LATTICE_SIZE,
    middle_calibration=True,
    middle_calibration_num_k...
The output of each Aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first Aggregation layer:
aggregation_layers = [
    layer for layer in aggregate_function_model.layers
    if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
    aggregation_layers[0].model, show_layer_names=False, rankdir='LR')
Now, as with any other tf.keras.Model, we compile and fit the model to our data.
aggregate_function_model.compile(
    loss='mae',
    optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
    train_xs, train_ys,
    epochs=NUM_EPOCHS,
    batch_size=BATCH_SIZE,
    verbose=False)
After training our model, we can evaluate it on our test set.
print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
Loading in terms with given method and smoothing parameters
N.get_ns_act('depression', thresh=-1, method='knn', smoothing='sum')
N.get_ns_act('dopamine', thresh=-1, method='knn', smoothing='sum')
N.get_ns_act('reward', thresh=-1, method='knn', smoothing='sum')
N.get_ns_act('serotonin', thresh=-1, method='knn', smoothing='sum')
N.get_ns_act('anxiety', thresh=-1, method='knn', smoothi...
t_test_validation.ipynb
voytekresearch/laxbro
mit
Loading in gene lists
depression_genes = analysis.load_gene_list('/Users/Torben/Documents/ABI analysis/gene_collections/', 'DepressionGenes.csv')
dopamine_genes = analysis.load_gene_list('/Users/Torben/Documents/ABI analysis/gene_collections/', 'DopamineGenes2.csv')
reward_genes = analysis.load_gene_list('/Users/Torben/Documents/ABI analysis/...
Performing a t-test on correlations of genes associated with their term, i.e. are these genes associated with this term more than by chance? I do this with 4 correlation methods: Pearson's r, Spearman's r, the slope of a linear regression, and a t-test.
import scipy.stats as stats

A = analysis.NsabaAnalysis(N)
all_analyses = np.zeros((6,4))
methods = ['pearson', 'spearman', 'regression', 't_test']
for m in xrange(len(methods)):
    all_analyses[0,m] = stats.ttest_1samp(A.validate_with_t_test('depression', depression_genes, method=methods[m], quant=85)[0], 0)[1]
    all_analy...
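The validation above boils down to a one-sample t-test of per-gene correlation values against zero: if the genes in the list are genuinely associated with the term, their correlations should have a mean significantly above zero. A minimal sketch with hypothetical correlation values (not the actual gene lists or Nsaba output):

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(0)
# Hypothetical correlations between each gene's expression map and a
# term's activation map; drawn around a small positive mean.
correlations = rng.normal(loc=0.15, scale=0.1, size=40)

# One-sample t-test against a population mean of zero.
t_stat, p_value = stats.ttest_1samp(correlations, 0)
print(t_stat > 0, p_value < 0.05)
```

A small p-value here would suggest the genes correlate with the term more than chance alone would produce, which is exactly the question the cell above asks for each term/gene-list pair.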
Testing different cutoff values and methods for splitting term/non-term groups for t-tests. The machine learning methods are k-means and mixture of Gaussians.
t_test_analyses = np.zeros((6,6))
quants = [50, 75, 85, 95]
for q in xrange(len(quants)):
    t_test_analyses[0,q] = stats.ttest_1samp(A.validate_with_t_test('depression', depression_genes, quant=quants[q])[0], 0)[1]
    t_test_analyses[1,q] = stats.ttest_1samp(A.validate_with_t_test('dopamine', dopamine_genes, quant=quants[q]...
If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again. In all of the cells below, I've provided the necessary Python scaffolding to perfo...
conn.rollback()
Data_and_databases/.ipynb_checkpoints/Homework_2_Paul_Ronga-checkpoint.ipynb
palrogg/foundations-homework
mit
Problem set 1: WHERE and ORDER BY In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A...
cursor = conn.cursor()
statement = "SELECT movie_title FROM uitem WHERE scifi = 1 AND horror = 1 ORDER BY release_date DESC"
cursor.execute(statement)
for row in cursor:
    print(row[0])
Problem set 2: Aggregation, GROUP BY and HAVING In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate. Expected output: 157
cursor = conn.cursor()
statement = "SELECT COUNT(*) FROM uitem WHERE musical = 1 OR childrens = 1"
cursor.execute(statement)
for row in cursor:
    print(row[0])
Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results....
cursor = conn.cursor()
statement = "SELECT DISTINCT(occupation), COUNT(*) FROM uuser GROUP BY occupation HAVING COUNT(*) > 50"
cursor.execute(statement)
for row in cursor:
    print(row[0], row[1])
Problem set 3: Joining tables In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output: Madonna: Truth or Dare (1991) Koyaanisqatsi (1983) Paris Is Burning (1990) Thin Blue Line, ...
cursor = conn.cursor()
statement = "SELECT DISTINCT(movie_title) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE EXTRACT(YEAR FROM release_date) < 1992 AND rating = 5 GROUP BY movie_title"
# if "any" has to be taken in the sense of "every":
# statement = "SELECT movie_title FROM uitem JOIN udata ON uitem...
Problem set 4: Joins and aggregations... together at last This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go. In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes...
conn.rollback()
cursor = conn.cursor()
statement = "SELECT movie_title, AVG(rating) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE horror = 1 GROUP BY movie_title ORDER BY AVG(rating) LIMIT 10"
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])
BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below. Expected output: Children of the Corn: The Gathering (1996) 1.32 Body Parts (1991) 1.62 Amityville II: The Possession (1982) 1.64 Jaws 3-D (1983) 1.94 Hellraiser: Bloodline (1996) 2....
cursor = conn.cursor()
statement = "SELECT movie_title, AVG(rating) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE horror = 1 GROUP BY movie_title HAVING COUNT(rating) > 10 ORDER BY AVG(rating) LIMIT 10;"
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])
One sentence
sentence = """
John writes a short program that works correctly and he comments his code like a good student.
"""
G1 = Graph(text=sentence, transformer=T)
visualize(G1)
docs/_examples/visualization.ipynb
agarsev/grafeno
agpl-3.0
Bigger graph (from the simple.wikipedia page of AI)
text = """ An extreme goal of AI research is to create computer programs that can learn, solve problems, and think logically. In practice, however, most applications have picked on problems which computers can do well. Searching data bases and doing calculations are things computers do better than people. On the other ...
Section 1.1 - Importing the Data Let's begin in the same way we did for Assignment #2 of 2014, but this time let's start with importing the temperature data:
temperatureDateConverter = lambda d: dt.datetime.strptime(d, '%Y-%m-%d %H:%M:%S')
temperature = np.genfromtxt('../../data/temperature.csv', delimiter=",",
                            dtype=[('timestamp', type(dt.datetime.now)), ('tempF', 'f8')],
                            converters={0: temperatureDateConverter}, skiprows=1)
assignments/2/12-752_Assignment_2_Starter.ipynb
keylime1/courses_12-752
mit
Notice that, because we are asking for the data to be interpreted as having different types for each column, and the numpy.ndarray can only handle homogeneous types (i.e., all the elements of the array must be of the same type), the resulting array is a one-dimensional ndarray of tuples. Each tuple corresponds ...
print "The variable 'temperature' is a " + str(type(temperature)) + " and it has the following shape: " + str(temperature.shape)
Fortunately, these structured arrays allow us to access the content inside the tuples directly by calling the field names. Let's figure out what those field names are:
temperature.dtype.fields
Now let's see what the timestamps look like, for this dataset:
plt.plot(temperature['timestamp'])
Seems as if there are no gaps, but let's make sure about that. First, let's compute the minimum and maximum difference between any two consecutive timestamps:
print "The minimum difference between any two consecutive timestamps is: " + str(np.min(np.diff(temperature['timestamp'])))
print "The maximum difference between any two consecutive timestamps is: " + str(np.max(np.diff(temperature['timestamp'])))
Given that they both are 5 minutes, it means that there really is no gap in the dataset, and all temperature measurements were taken 5 minutes apart. Since we need temperature readings every 15 minutes, we can downsample this dataset. There are many ways to do the downsampling, and it is important to understand the ...
temperature = temperature[0:-1:3]
Finally, let's make a note of when the first and last timestamp are:
print "First timestamp is on \t{}. \nLast timestamp is on \t{}.".format(temperature['timestamp'][0], temperature['timestamp'][-1])
Loading the Power Data Just as we did before, we start with the genfromtxt function:
dateConverter = lambda d: dt.datetime.strptime(d, '%Y/%m/%d %H:%M:%S')
power = np.genfromtxt('../../data/campusDemand.csv', delimiter=",", names=True,
                      dtype=['S255', dt.datetime, 'f8'], converters={1: dateConverter})
Let's figure out how many meters there are, and where they are in the ndarray, as well as how many datapoints they have.
name, indices, counts = np.unique(power['Point_name'], return_index=True,return_counts=True)
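As a quick illustration of what `np.unique` returns with these flags (toy meter names, not the actual campus data): the unique values come back sorted, `return_index` gives the position of each value's first occurrence, and `return_counts` gives how many datapoints each meter has.

```python
import numpy as np

# A toy point-name column with three hypothetical meters.
point_names = np.array(['meterB', 'meterA', 'meterB', 'meterC', 'meterB', 'meterA'])

name, indices, counts = np.unique(point_names, return_index=True, return_counts=True)
# name    -> sorted unique meter names: ['meterA' 'meterB' 'meterC']
# indices -> first occurrence of each name in point_names: [1 0 3]
# counts  -> datapoints per meter: [2 3 1]
print(name, indices, counts)
```

The loop in the next cell uses `indices[i]` and `indices[i] + counts[i] - 1` to find the first and last row of each meter, which works because each meter's rows are contiguous in the file.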
Now let's print that information in a more readable fashion:
for i in range(len(name)):
    print str(name[i]) + "\n\t from " + str(power[indices[i]]['Time']) + " to " + str(power[indices[i]+counts[i]-1]['Time']) + "\n\t or " + str(power[indices[i]+counts[i]-1]['Time'] - power[indices[i]]['Time'])
Since only one meter needs to be used, pick the one you like and discard the rest:
power=power[power['Point_name']==name[3]]
Let's make sure the data is sorted by time and then let's plot it
power = np.sort(power, order='Time')
fig1 = plt.figure(figsize=(15,5))
plt.plot(power['Time'], power['Value'])
plt.title(name[3])  # match the meter selected above
plt.xlabel('Time')
plt.ylabel('Power [Watts]')
Are there gaps in this dataset?
power = np.sort(power, order='Time')
print "The minimum difference between any two consecutive timestamps is: " + str(np.min(np.diff(power['Time'])))
print "The maximum difference between any two consecutive timestamps is: " + str(np.max(np.diff(power['Time'])))
And when are the first and last timestamps for this dataset? (We would like them to overlap as much as possible):
print "First timestamp is on \t{}. \nLast timestamp is on \t{}.".format(power['Time'][0], power['Time'][-1])
So let's summarize the differences in terms of the timestamps: There is at least one significant gap (1 day and a few hours), and there's also a strange situation that causes two consecutive samples to have the same timestamp (i.e., the minimum difference is zero). The temperature dataset starts a little later, and...
print "Power data from {0} to {1}.\nTemperature data from {2} to {3}".format(power['Time'][0], power['Time'][-1], temperature['timestamp'][0], temperature['timestamp'][-1])
Clearly, we don't need the portion of the temperature data that is collected beyond the dates that we have power data. Let's remove this (note that the magic number 24 corresponds to 360 minutes or 6 hours):
temperature = temperature[0:-24]
Now let's create the interpolation function:
def power_interp(tP, P, tT):
    # This function assumes that the input is an numpy.ndarray of datetime objects
    # Most useful interpolation tools don't work well with datetime objects
    # so we convert all datetime objects into the number of seconds elapsed
    # since 1/1/1970 at midnight (also called the UNIX ...
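The cell above is truncated after the epoch-conversion comment. A minimal self-contained sketch of the same idea, converting datetimes to POSIX seconds and linearly interpolating with `np.interp` (the hypothetical sample data at the bottom is for illustration only, not the campus dataset):

```python
import datetime as dt
import numpy as np

def power_interp(tP, P, tT):
    # Interpolation routines don't handle datetime objects well, so convert
    # each datetime to seconds elapsed since 1/1/1970 midnight (UNIX epoch).
    toposix = lambda d: (d - dt.datetime(1970, 1, 1, 0, 0, 0)).total_seconds()
    tP_seconds = np.array([toposix(t) for t in tP])
    tT_seconds = np.array([toposix(t) for t in tT])
    # Linearly interpolate the power values P (sampled at tP) onto the
    # temperature timestamps tT.
    return np.interp(tT_seconds, tP_seconds, P)

# Hypothetical data: power sampled on the hour, queried on the half hour.
tP = [dt.datetime(2014, 1, 1, h) for h in range(4)]
P = [100.0, 200.0, 300.0, 400.0]
tT = [dt.datetime(2014, 1, 1, 0, 30), dt.datetime(2014, 1, 1, 1, 30)]
print(power_interp(tP, P, tT))  # halfway between neighboring samples
```

`np.interp` requires the x-coordinates (`tP_seconds`) to be increasing, which is why the power array is sorted by `Time` earlier in the notebook.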
And let's use that function to get a copy of the interpolated power values, extracted at exactly the same timestamps as the temperature dataset:
newPowerValues = power_interp(power['Time'], power['Value'], temperature['timestamp'])
Finally, to keep things simple, let's restate the variables that matter:
toposix = lambda d: (d - dt.datetime(1970,1,1,0,0,0)).total_seconds()
timestamp_in_seconds = map(toposix, temperature['timestamp'])
timestamps = temperature['timestamp']
temp_values = temperature['tempF']
power_values = newPowerValues
And let's plot it to see what it looks like.
plt.figure(figsize=(15,15))
plt.plot(timestamps, power_values, 'ro')
plt.figure(figsize=(15,15))
plt.plot(timestamps, temp_values, '--b')
Task #1 Now let's put all of this data into a single structured array. Task #2 Since we have the timestamps in 'datetime' format, we can easily extract the indices:
weekday = map(lambda t: t.weekday(), timestamps)
weekends = np.where( )  ## Note that depending on how you do this, the result could be a tuple of ndarrays.
weekdays = np.where( )
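One plausible way to fill in those `np.where` blanks, sketched here with hypothetical timestamps rather than the assignment's data: `datetime.weekday()` returns 0 (Monday) through 6 (Sunday), so an index of 5 or more marks a weekend.

```python
import datetime as dt
import numpy as np

# Two weeks of hypothetical 15-minute timestamps (96 samples per day).
timestamps = [dt.datetime(2014, 1, 1) + dt.timedelta(minutes=15 * i)
              for i in range(96 * 14)]

# weekday() is 0 (Monday) through 6 (Sunday), so >= 5 means weekend.
weekday = np.array([t.weekday() for t in timestamps])
weekends = np.where(weekday >= 5)  # tuple of ndarrays
weekdays = np.where(weekday < 5)

# The notebook's sanity check: every timestamp falls in exactly one group.
print(len(weekday) == len(weekends[0]) + len(weekdays[0]))
```

Note that `np.where` with a single condition returns a tuple of index arrays, which is why the sanity check in the next cell indexes `weekends[0]` and `weekdays[0]`.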
Did we do this correctly?
len(weekday) == len(weekends[0]) + len(weekdays[0]) ## This is assuming you have a tuple of ndarrays
Seems like we did. Task #3 Similar to the previous task...
hour = map(lambda t: t.hour, timestamps)
occupied = np.where( )
unoccupied = np.where( )
Task #4 Let's calculate the temperature components, by creating a function that does just that:
def Tc(temperature, T_bound):
    # The return value will be a matrix with as many rows as the temperature
    # array, and as many columns as len(T_bound) [assuming that 0 is the first boundary]
    Tc_matrix = np.zeros((len(temperature), len(T_bound)))
    return Tc_matrix
assignments/2/12-752_Assignment_2_Starter.ipynb
keylime1/courses_12-752
mit
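A sketch of how the body of `Tc` could be filled in, assuming the usual piecewise-linear change-point model where column j holds the portion of each temperature that falls inside interval j (the boundary convention here is one plausible choice, not necessarily the notebook's intended one):

```python
import numpy as np

def Tc(temperature, T_bound):
    # One column per interval [bounds[j], bounds[j+1]), with 0 as the first boundary.
    bounds = [0] + list(T_bound)
    Tc_matrix = np.zeros((len(temperature), len(T_bound)))
    for i, T in enumerate(temperature):
        for j in range(len(T_bound)):
            lo, hi = bounds[j], bounds[j + 1]
            # Amount of T that falls inside this interval, clipped to the interval width.
            Tc_matrix[i, j] = min(max(T - lo, 0), hi - lo)
    return Tc_matrix

M = Tc([50], [40, 60, 100])
print(M)  # [[40. 10.  0.]] -- the components of T=50 sum back to 50
```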
MarkerCluster Adds a MarkerCluster layer on the map.
import numpy as np

N = 100

data = np.array(
    [
        np.random.uniform(low=35, high=60, size=N),   # Random latitudes in Europe.
        np.random.uniform(low=-12, high=30, size=N),  # Random longitudes in Europe.
    ]
).T
popups = [str(i) for i in range(N)]  # Popups texts are simple numbers.

m = folium.Map([4...
examples/Plugins.ipynb
python-visualization/folium
mit
Terminator
m = folium.Map([45, 3], zoom_start=1)
plugins.Terminator().add_to(m)
m
examples/Plugins.ipynb
python-visualization/folium
mit
BoatMarker
m = folium.Map([30, 0], zoom_start=3)

plugins.BoatMarker(
    location=(34, -43),
    heading=45,
    wind_heading=150,
    wind_speed=45,
    color="#8f8"
).add_to(m)

plugins.BoatMarker(
    location=(46, -30),
    heading=-20,
    wind_heading=46,
    wind_speed=25,
    color="#88f"
).add_to(m)

m
examples/Plugins.ipynb
python-visualization/folium
mit
BeautifyIcon
m = folium.Map([45.5, -122], zoom_start=3)

icon_plane = plugins.BeautifyIcon(
    icon="plane", border_color="#b3334f", text_color="#b3334f", icon_shape="triangle"
)

icon_number = plugins.BeautifyIcon(
    border_color="#00ABDC",
    text_color="#00ABDC",
    number=10,
    inner_icon_style="margin-top:0;",
)

folium...
examples/Plugins.ipynb
python-visualization/folium
mit
Fullscreen
m = folium.Map(location=[41.9, -97.3], zoom_start=4)

plugins.Fullscreen(
    position="topright",
    title="Expand me",
    title_cancel="Exit me",
    force_separate_button=True,
).add_to(m)

m
examples/Plugins.ipynb
python-visualization/folium
mit
Timestamped GeoJSON
m = folium.Map(location=[35.68159659061569, 139.76451516151428], zoom_start=16)

# Lon, Lat order.
lines = [
    {
        "coordinates": [
            [139.76451516151428, 35.68159659061569],
            [139.75964426994324, 35.682590062684206],
        ],
        "dates": ["2017-06-02T00:00:00", "2017-06-02T00:10:00"...
examples/Plugins.ipynb
python-visualization/folium
mit
FeatureGroupSubGroup Sub categories Disable all markers in the category, or just one of the subgroup.
m = folium.Map(location=[0, 0], zoom_start=6)

fg = folium.FeatureGroup(name="groups")
m.add_child(fg)

g1 = plugins.FeatureGroupSubGroup(fg, "group1")
m.add_child(g1)

g2 = plugins.FeatureGroupSubGroup(fg, "group2")
m.add_child(g2)

folium.Marker([-1, -1]).add_to(g1)
folium.Marker([1, 1]).add_to(g1)
folium.Marker([-1...
examples/Plugins.ipynb
python-visualization/folium
mit
Marker clusters across groups Create two subgroups, but cluster markers together.
m = folium.Map(location=[0, 0], zoom_start=6)

mcg = folium.plugins.MarkerCluster(control=False)
m.add_child(mcg)

g1 = folium.plugins.FeatureGroupSubGroup(mcg, "group1")
m.add_child(g1)

g2 = folium.plugins.FeatureGroupSubGroup(mcg, "group2")
m.add_child(g2)

folium.Marker([-1, -1]).add_to(g1)
folium.Marker([1, 1]).ad...
examples/Plugins.ipynb
python-visualization/folium
mit
Minimap Adds a locator minimap to a folium document.
m = folium.Map(location=(30, 20), zoom_start=4)

minimap = plugins.MiniMap()
m.add_child(minimap)

m
examples/Plugins.ipynb
python-visualization/folium
mit
DualMap The DualMap plugin can be used to display two maps side by side, with panning and zooming synchronized between them. The DualMap class can be used just like the normal Map class. The two sub-maps can be accessed with its m1 and m2 attributes.
m = plugins.DualMap(location=(52.1, 5.1), tiles=None, zoom_start=8)

folium.TileLayer("cartodbpositron").add_to(m.m2)
folium.TileLayer("openstreetmap").add_to(m)

fg_both = folium.FeatureGroup(name="markers_both").add_to(m)
fg_1 = folium.FeatureGroup(name="markers_1").add_to(m.m1)
fg_2 = folium.FeatureGroup(name="marke...
examples/Plugins.ipynb
python-visualization/folium
mit
Locate control Adds a control button that, when clicked, displays the user's device geolocation. For a list of all possible keyword options see: https://github.com/domoritz/leaflet-locatecontrol#possible-options To work properly in production, the connection needs to be encrypted (HTTPS); otherwise the browser will not allow it.
m = folium.Map([41.97, 2.81])

plugins.LocateControl().add_to(m)

# If you want to get the user's device position right after the map loads, set auto_start=True
plugins.LocateControl(auto_start=True).add_to(m)

m
examples/Plugins.ipynb
python-visualization/folium
mit
SemiCircle This can be used to display a semicircle or sector on a map. Although called SemiCircle, it is not limited to 180-degree angles and can be used to display a sector of any angle. The semicircle is defined with a location (the central point, if it were a full circle), a radius and will either have a direction and...
m = folium.Map([45, 3], zoom_start=5)

plugins.SemiCircle(
    (45, 3),
    radius=400000,
    start_angle=50,
    stop_angle=200,
    color="green",
    fill_color="green",
    opacity=0,
    popup="start angle - 50 degrees, stop angle - 200 degrees",
).add_to(m)

plugins.SemiCircle(
    (46.5, 9.5),
    radius=200000...
examples/Plugins.ipynb
python-visualization/folium
mit
Geocoder Adds a search box to the map to search for geographic features like cities, countries, etc. You can search with names or addresses. Uses the Nominatim service from OpenStreetMap. Please respect their usage policy: https://operations.osmfoundation.org/policies/nominatim/
m = folium.Map()
plugins.Geocoder().add_to(m)
m
examples/Plugins.ipynb
python-visualization/folium
mit
1. Files The lab files must have the following structure: 1. An .ipynb file (IPython notebook extension), for Python 3. The name varies from lab to lab. 1. A code/ folder containing the lab's code. It varies from lab to lab, but must contain __init__.py and lab.py...
import numpy as np
from matplotlib import pyplot as plt

# Press Tab with the cursor right after np.arr
np.arr

# Press Ctrl-Enter to get the documentation of the np.array function using "?"
np.array?

# Press Ctrl-Enter
%whos
x = 10
%whos
12_ejemplos_graficos_parametricos/graficos_parametricos.ipynb
usantamaria/ipynb_para_docencia
mit
Font styles The following font styles are available, courtesy of markdown and the lab.css style: 1. emphasis 2. strong 3. strong and emphasis 4. code 5. code and emphasis 6. code and strong 7. code, strong and emphasis. 8. Python code Python for i in range(n): print(i) 9. Bash code Bash echo "ho...
def mi_funcion(x):
    f = 1 + x + x**3 + x**5 + np.sin(x)
    return f

N = 5
x = np.linspace(-1, 1, N)
y = mi_funcion(x)
# FIX ME
I = 0  # FIX ME
# FIX ME
print("Area under the curve: %.3f" % I)

# Graphical illustration
x_aux = np.linspace(x.min(), x.max(), N**2)
fig = plt.figure(figsize=(12,8))
fig.gca().fill_between(x, 0, y,...
12_ejemplos_graficos_parametricos/graficos_parametricos.ipynb
usantamaria/ipynb_para_docencia
mit
Parsing the molecular formula is not a trivial task; we will do it later. For now, we assume the formula has already been parsed, and we will keep it as a dictionary. In a condensed formula, each element appears only once, and its subscript indicates the total number of atoms of that element in the molecule.
ethanol = {'C':2, 'H':6, 'O':1}
water = {'H':2, 'O':1}
HCl = {'H':1, 'Cl':1}
notebooks/solutions/4_1.Molecular weight.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
From that, calculate the total weight:
#Finish...
notebooks/solutions/4_1.Molecular weight.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
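One possible way to finish the cell above (the atomic masses here are approximate standard values, and `atomic_weights`/`weight_from_dict` are helper names introduced for this sketch):

```python
# Approximate standard atomic weights (g/mol).
atomic_weights = {'H': 1.008, 'C': 12.011, 'O': 15.999, 'Cl': 35.45}

def weight_from_dict(formula):
    # Sum each element's atomic weight times its subscript.
    return sum(atomic_weights[el] * n for el, n in formula.items())

ethanol = {'C': 2, 'H': 6, 'O': 1}
print(round(weight_from_dict(ethanol), 2))  # ~46.07 g/mol for ethanol
```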
Now imagine we also accept formulas in an extended way, for example ethanol as $\mathrm{CH_3CH_2OH}$. In that case it makes sense that our parsing procedure returns a list of tuples such as:
ethanol2 = [('C',1), ('H',3), ('C',1), ('H',2), ('O',1), ('H',1)] acetic2 = [('C',1), ('H',3), ('C',1), ('O',1), ('O',1), ('H',1)]
notebooks/solutions/4_1.Molecular weight.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
From that, we could also create a dictionary such as the previous one, but we can also calculate the weight directly:
#Finish
notebooks/solutions/4_1.Molecular weight.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
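The same calculation works directly on the tuple representation, without first condensing it into a dictionary, because each (element, count) pair contributes independently to the sum (again with approximate atomic masses and an illustrative helper name):

```python
# Approximate standard atomic weights (g/mol).
atomic_weights = {'H': 1.008, 'C': 12.011, 'O': 15.999}

def weight_from_pairs(pairs):
    # Repeated elements are fine: each pair is added on its own.
    return sum(atomic_weights[el] * n for el, n in pairs)

ethanol2 = [('C', 1), ('H', 3), ('C', 1), ('H', 2), ('O', 1), ('H', 1)]
print(round(weight_from_pairs(ethanol2), 2))  # ~46.07 g/mol, same as the condensed formula
```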
Parsing Parsing the formula is not a trivial task. You have to remember the following: Some elements have 1-letter names, others have 2. In that case the second letter is always lower-case. Some numbers can be higher than 9, i.e. use 2 or more figures. When the number is 1, it is usually not written. When coding a com...
#Try to do it before looking at the answer!
def weight(formula):
    """
    Compute the atomic weight of a chemical formula
    """
    def parsing(formula):
        """
        Parse the formula and return a list of pairs such as ('C', 3)
        """
        formList = []
        number = None
        symbol = ''
        ...
notebooks/solutions/4_1.Molecular weight.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
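The character-by-character parser sketched above can also be written compactly with a regular expression (a sketch following the rules just listed: two-letter symbols have a lower-case second letter, counts may have several digits, and a missing count means 1; `parse_formula` is an illustrative name):

```python
import re

def parse_formula(formula):
    # Match an upper-case letter, an optional lower-case letter, then optional digits.
    pairs = re.findall(r'([A-Z][a-z]?)(\d*)', formula)
    # An empty digit group means an implicit subscript of 1.
    return [(symbol, int(count) if count else 1) for symbol, count in pairs if symbol]

print(parse_formula('CH3CH2OH'))
# [('C', 1), ('H', 3), ('C', 1), ('H', 2), ('O', 1), ('H', 1)]
```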
We'll start with this image:
from PIL import Image, ImageFilter

imgpath = 'images/original/image.bmp'
blurredpath = 'images/image_blurred.bmp'

img = Image.open(imgpath)
blurred = img.copy().filter(ImageFilter.BLUR)
blurred.save(blurredpath)
Blurred Image Comparison.ipynb
DakotaNelson/discrete-stego
mit
And here it is now that we've blurred it: Now, let's compare the two to see what kind of error rates we can expect:
[red_flipped, green_flipped, blue_flipped] = compare_images(imgpath, blurredpath)
Blurred Image Comparison.ipynb
DakotaNelson/discrete-stego
mit
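`compare_images` is defined elsewhere in the notebook; conceptually, it counts how many bits flipped between the two files, per color channel. The core idea can be sketched on raw byte sequences (a hypothetical helper, not the notebook's actual implementation):

```python
def count_flipped_bits(a, b):
    # XOR each byte pair and count set bits: every set bit is one flipped bit.
    return sum(bin(x ^ y).count('1') for x, y in zip(a, b))

original = bytes([0b10101010, 0b11110000])
blurred  = bytes([0b10101011, 0b11110000])
print(count_flipped_bits(original, blurred))  # 1
```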
Unlike our previous problem, the decision variables in this case won't be continuous (We can't sell half a car!), so the category is integer.
A = pulp.LpVariable('A', lowBound=0, cat='Integer')
B = pulp.LpVariable('B', lowBound=0, cat='Integer')

# Objective function
model += 30000 * A + 45000 * B, "Profit"

# Constraints
model += 3 * A + 4 * B <= 30
model += 5 * A + 6 * B <= 60
model += 1.5 * A + 3 * B <= 21

# Solve our problem
model.solve()
pulp.LpStatus[...
LP/Introduction-to-linear-programming/Introduction to Linear Programming with Python - Part 3.ipynb
sysid/nbs
mit
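Because the cell above depends on a `model` built earlier in the notebook, here is a self-contained brute-force check of the same integer program (a sketch: with only two small integer variables, exhaustive search is feasible and should agree with PuLP's answer):

```python
# Maximize 30000*A + 45000*B subject to the three resource constraints,
# with A and B non-negative integers.
best = (0, 0, 0)  # (profit, A, B)
for A in range(0, 11):        # 3A <= 30 already bounds A at 10
    for B in range(0, 8):     # 4B <= 30 already bounds B at 7
        if 3*A + 4*B <= 30 and 5*A + 6*B <= 60 and 1.5*A + 3*B <= 21:
            profit = 30000*A + 45000*B
            if profit > best[0]:
                best = (profit, A, B)
print(best)  # (330000, 2, 6)
```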
The next cell contains all parameters that might need to be changed.
#filename of the input mapping file
file_mapping = '../../data/mapping-files/emp_qiime_mapping_release1.tsv'

#filename of the resulting mapping file
file_output = 'modMapping.txt'

#a file containing counts of observations for various OTU picking methods.
file_observations = '../../data/otu-picking/observations.tsv'
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
read in mapping file and filter according to three criteria: 1. a sample must contain a certain number of raw sequence reads 2. the sample must not be flagged as being a "Control" 3. the study is considered OK (this is the result of a manual curation)
metadata = pd.read_csv(file_mapping, sep="\t", index_col=0, low_memory=False, dtype=str)

#it is more consistent to read all fields as strings and manually convert to numeric values for
#selected columns. Thus, roundtripping (read -> write) results in a nearly identical file.
metadata['sequences_split_libraries'] = (pd.t...
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
actual logic to pick samples Assume that we want to create more than one subsample set. However, they should form a strict hierarchy, i.e. a sample in a smaller set must also occur in any larger set. Furthermore, we want to make sure that each group is covered, since some groups are very large, others are small and m...
#convert np.infty to the actual number of total available samples in EMP satisfying the filtering criteria
setSizes = list(map(lambda x: x if x is not np.infty else subset_metadata.shape[0], setSizes))

#make sure set sizes increase
setSizes = sorted(setSizes)

subsets = {}  #resulting object, will hold sample ID lists...
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
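The hierarchy requirement — every sample in a smaller set must also appear in every larger set — can be met by drawing one shuffled ordering and taking prefixes of it (a minimal sketch with illustrative names; the notebook's actual version additionally balances coverage across groups):

```python
import random

random.seed(42)
sample_ids = ['s%03d' % i for i in range(100)]
set_sizes = [10, 25, 50]

# One fixed random order; each subset is a prefix of it, so the subsets nest strictly.
order = random.sample(sample_ids, len(sample_ids))
subsets = {size: set(order[:size]) for size in sorted(set_sizes)}

print([len(subsets[s]) for s in sorted(set_sizes)])  # [10, 25, 50]
```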
Write output by first merging the new columns into the original metadata and then writing only those to an output file.
newColumnNames = []

#add a column to mark samples that are in EMP (i.e. samples that have some counts)
newColumnNames.append('all_emp')
metadata[newColumnNames[-1]] = metadata.index.isin(emp_metadata.index)

#add a column to mark samples that satisfy our filtering criteria
newColumnNames.append('qc_filtered')
metadata...
code/04-subsets-prevalence/subset_samples_by_empo_and_study.ipynb
cuttlefishh/emp
bsd-3-clause
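The pattern of appending boolean membership columns can be illustrated without pandas (hypothetical sample names; the notebook itself uses `metadata.index.isin(...)` for the same membership test):

```python
all_samples = ['s1', 's2', 's3', 's4']
emp_samples = {'s1', 's2', 's3'}   # samples with some counts
qc_samples  = {'s2', 's3'}          # samples passing the filtering criteria

# Build one boolean column per subset, keyed by sample ID.
columns = {
    'all_emp':     {s: s in emp_samples for s in all_samples},
    'qc_filtered': {s: s in qc_samples for s in all_samples},
}
print(columns['qc_filtered'])  # {'s1': False, 's2': True, 's3': True, 's4': False}
```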