Finally we train the model:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = get_model()
    model.fit(
        train_dataset,
        epochs=1,
        validation_data=test_dataset,
    )
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Use a list comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
[x for x in range(1, 50) if x % 3 == 0]
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Statements Assessment Test - Solutions-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
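The same list can also be built without the filter, using range's step argument; a minimal equivalent sketch:

```python
# Multiples of 3 between 1 and 50, generated directly with range's step
multiples = list(range(3, 51, 3))
print(multiples)
```

Both forms produce the multiples 3 through 48; the step form just skips the modulo test.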
Now we get faster convergence (three iterations instead of five) and a lot less overfitting. Here are our results:
<table>
  <tr> <th>Iteration</th> <th>Training Data Loss</th> <th>Evaluation Data Loss</th> <th>Duration (seconds)</th> </tr>
  <tr> <td>2</td> <td>0.6509</td> <td>1.4596</t...
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_16
options(model_type='matrix_factorization',
        user_col='userId', item_col='movieId',
        rating_col='rating', l2_reg=0.2, num_factors=16) AS
SELECT userId, movieId, rating
FROM movielens.ratings

%%bigquery --project $PROJECT
SEL...
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml.ipynb
turbomanage/training-data-analyst
apache-2.0
Filtering out already rated movies Of course, this includes movies the user has already seen and rated in the past. Let’s remove them. TODO 2: Make a prediction for user 903 that does not include already seen movies.
%%bigquery --project $PROJECT
SELECT * FROM ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  WITH seen AS (
    SELECT ARRAY_AGG(movieId) AS movies
    FROM movielens.ratings
    WHERE userId = 903
  )
  SELECT movieId, title, 903 AS userId
  FROM movielens.movies, UNNEST(genres) g, seen
  WH...
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml.ipynb
turbomanage/training-data-analyst
apache-2.0
For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen. Customer targeting In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the custom...
%%bigquery --project $PROJECT
SELECT * FROM ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  WITH allUsers AS (
    SELECT DISTINCT userId
    FROM movielens.ratings
  )
  SELECT 96481 AS movieId,
         (SELECT title FROM movielens.movies WHERE movieId=96481) title,
         userId
  FROM allU...
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml.ipynb
turbomanage/training-data-analyst
apache-2.0
Exercise 1: Histograms a. Returns Find the daily returns for SPY over a 7-year window.
data = get_pricing('SPY', fields='price', start_date='2010-01-01', end_date='2017-01-01')
returns = data.pct_change()[1:]
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
b. Graphing Using the techniques laid out in lecture, plot a histogram of the returns
plt.hist(returns, bins=30);
plt.xlabel('SPY Returns');
plt.ylabel('Number of Times Observed');
plt.title('Frequency Distribution of SPY Returns');
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
c. Cumulative distribution Plot the cumulative distribution histogram for your returns
plt.hist(returns, bins=30, cumulative=True);
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
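Under the hood, a cumulative histogram is just the running sum of the per-bin counts. A minimal pure-Python sketch of that relationship, using toy bin counts (no plotting library assumed):

```python
from itertools import accumulate

# Hypothetical per-bin counts from a histogram
bin_counts = [2, 5, 9, 14, 9, 5, 2]

# Cumulative counts: each bin holds the total of all bins up to and including it
cumulative_counts = list(accumulate(bin_counts))
print(cumulative_counts)  # last entry equals the total number of observations
```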
Exercise 2: Scatter Plots a. Data Start by collecting the close prices of the SPDR S&P 500 ETF (SPY) and Starbucks (SBUX) for the last 5 years with daily frequency.
SPY = get_pricing('SPY', fields='close_price', start_date='2013-06-19', end_date='2018-06-19', frequency='daily')
SBUX = get_pricing('SBUX', fields='close_price', start_date='2013-06-19', end_date='2018-06-19', frequency='daily')
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
b. Plotting Graph a scatter plot of SPY and Starbucks.
plt.scatter(SPY, SBUX);
plt.title('Scatter Plot of SPY and SBUX');
plt.xlabel('SPY Price');
plt.ylabel('SBUX Price');
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
c. Plotting Returns Graph a scatter plot of the returns of SPY and Starbucks.
SPY_R = SPY.pct_change()[1:]
SBUX_R = SBUX.pct_change()[1:]
plt.scatter(SPY_R, SBUX_R);
plt.title('Scatter Plot of SPY and SBUX Returns');
plt.xlabel('SPY Return');
plt.ylabel('SBUX Return');
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Remember that a scatter plot must have the same number of values for each parameter. If SPY and SBUX did not have the same number of data points, the plot would raise an error. Exercise 3: Linear Plots a. Getting Data Use the techniques laid out in lecture to find the open price over a 2-year period for Starbucks (SBUX)...
data = get_pricing(['SBUX', 'DNKN'], fields='open_price', start_date='2015-01-01', end_date='2017-01-01')  ## Your code goes here.
data.head()
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
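To guard against the length mismatch mentioned above, the two series can be truncated to a common length before plotting; a minimal pure-Python sketch (toy lists standing in for the price series):

```python
# Two toy price series of unequal length
spy_prices = [100.0, 101.5, 102.0, 103.2, 102.8]
sbux_prices = [50.0, 50.4, 51.1, 50.9]

# zip stops at the shorter series, giving equal-length pairs that are safe to scatter
pairs = list(zip(spy_prices, sbux_prices))
xs, ys = zip(*pairs)
print(len(xs), len(ys))
```

In practice, aligning on dates (e.g. via a shared index) is safer than blind truncation, but the equal-length requirement is the same.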
b. Data Structure The data is returned to us as a pandas DataFrame object. The column labels are security objects; convert them into simple ticker strings so the columns can be indexed by name.
data.columns = [e.symbol for e in data.columns]
data['SBUX'].head()
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
c. Plotting Plot the data for SBUX stock price as a function of time. Remember to label your axes and title the graph.
plt.plot(data['SBUX']);
plt.xlabel('Time');
plt.ylabel('Price');
plt.title('Price vs Time');
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 4: Best-fit plots Here we have a scatter plot of two data sets. Vary the a and b parameters in the code to try to draw a line that 'fits' our data nicely. The line should seem as if it is describing a pattern in the data. While quantitative methods exist to do this automatically, we would like you to try to ...
data1 = get_pricing('SBUX', fields='open_price', start_date='2013-01-01', end_date='2014-01-01')
data2 = get_pricing('SPY', fields='open_price', start_date='2013-01-01', end_date='2014-01-01')
rdata1 = data1.pct_change()[1:]
rdata2 = data2.pct_change()[1:]
plt.scatter(rdata2, rdata1);
plt.scatter(rdata2, rdata1)  # A...
notebooks/lectures/Plotting_Data/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data. Data Formats for Ranking For representing ranking data, protocol buffers are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner. Ranking usually co...
!pip install -q tensorflow_ranking tensorflow-serving-api
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Let us start by importing the libraries that will be used throughout this notebook. We also enable "eager execution" mode for convenience and demonstration purposes.
import tensorflow as tf
import tensorflow_ranking as tfr
from tensorflow_serving.apis import input_pb2
from google.protobuf import text_format

CONTEXT = text_format.Parse(
    """
    features {
      feature {
        key: "query_tokens"
        value { bytes_list { value: ["this", "is", "a", "relevant", "question"]...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Dependencies and Global Variables Here we define the train and test paths, along with model hyperparameters.
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"

# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"

# The maximum number of documents per query in the dataset.
# Document lists ar...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Components of a Ranking Estimator The overall components of a Ranking Estimator are shown below. The key components of the library are: Input Reader, Transform Function, Scoring Function, Ranking Losses, Ranking Metrics, Ranking Head, and Model Builder. These are described in more detail in the following sections. TensorFlow Ra...
_EMBEDDING_DIMENSION = 20

def context_feature_columns():
  """Returns context feature names to column definitions."""
  sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
      key="query_tokens", vocabulary_file=_VOCAB_PATH)
  query_embedding_column = tf.feature_column.embedding_column(...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Reading Input Data using input_fn The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented b...
def input_fn(path, num_epochs=None):
  context_feature_spec = tf.feature_column.make_parse_example_spec(
      context_feature_columns().values())
  label_column = tf.feature_column.numeric_column(
      _LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
  example_feature_spec = tf.feature_column.make_pa...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Feature Transformations with transform_fn The transform function takes in the raw dense or sparse features from the input reader, applies suitable transformations, and returns dense representations for each feature. This matters before passing these features to a neural network, as neural network layers usually take...
def make_transform_fn():
  def _transform_fn(features, mode):
    """Defines transform_fn."""
    context_features, example_features = tfr.feature.encode_listwise_features(
        features=features,
        context_feature_columns=context_feature_columns(),
        example_feature_columns=example_feature_columns(),
        ...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Feature Interactions using scoring_fn Next, we turn to the scoring function which is arguably at the heart of a TF Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function. Here we formulate a scoring function using...
def make_score_fn():
  """Returns a scoring function to build `EstimatorSpec`."""
  def _score_fn(context_features, group_features, mode, params, config):
    """Defines the network to score a group of documents."""
    with tf.compat.v1.name_scope("input_layer"):
      context_input = [
          tf.compat.v1.layers....
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Losses, Metrics and Ranking Head Evaluation Metrics We have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown here. The user can also define a custom evaluation metric, as shown in the description below.
def eval_metric_fns():
  """Returns a dict from name to metric functions.

  This can be customized as follows. Care must be taken when handling padded lists.

  def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshap...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Ranking Losses We provide several popular ranking loss functions as part of the library, which are shown here. The user can also define a custom loss function, similar to ones in tfr.losses.
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Ranking Head In the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head interfaces easily with the Estimator, requiring the user only to define a scoring function and specify the loss and metric computations.
optimizer = tf.compat.v1.train.AdagradOptimizer(
    learning_rate=_LEARNING_RATE)

def _train_op_fn(loss):
  """Defines train op used in ranking head."""
  update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
  minimize_op = optimizer.minimize(
      loss=loss, global_step=tf.compat.v1.train.ge...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Putting It All Together in a Model Builder We are now ready to put all of the components above together and create an Estimator that can be used to train and evaluate a model.
model_fn = tfr.model.make_groupwise_ranking_fn(
    group_score_fn=make_score_fn(),
    transform_fn=make_transform_fn(),
    group_size=_GROUP_SIZE,
    ranking_head=ranking_head)
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
Train and evaluate the ranker
def train_and_eval_fn():
  """Train and eval function used by `tf.estimator.train_and_evaluate`."""
  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=1000)
  ranker = tf.estimator.Estimator(
      model_fn=model_fn,
      model_dir=_MODEL_DIR,
      config=run_config)
  train_input_fn = lambda: input...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
A sample TensorBoard output is shown here, with the ranking metrics. Generating Predictions We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model. Similar to the input_fn used for training and evaluation, predict_in...
def predict_input_fn(path):
  context_feature_spec = tf.feature_column.make_parse_example_spec(
      context_feature_columns().values())
  example_feature_spec = tf.feature_column.make_parse_example_spec(
      list(example_feature_columns().values()))
  dataset = tfr.data.build_ranking_dataset(
      file_patte...
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The predict_input_fn generates predictions on a batch of datapoints. Batching allows us to iterate over large datasets that cannot be loaded into memory.
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
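The batching idea described above can be sketched in plain Python with itertools.islice, independent of TensorFlow (the batch size and data here are illustrative):

```python
from itertools import islice

def batched(iterable, batch_size):
    """Yield successive lists of up to batch_size items from any iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# A generator standing in for a dataset too large to hold in memory
stream = (i * i for i in range(10))
batches = list(batched(stream, 4))
print(batches)
```

Each batch is materialized one at a time, so peak memory depends on the batch size rather than the dataset size.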
ranker.predict returns a generator, which we can iterate over to create predictions until the generator is exhausted.
x = next(predictions)
assert len(x) == _LIST_SIZE  # Note that this includes padding.
tensorflow_ranking/examples/handling_sparse_features.ipynb
tensorflow/ranking
apache-2.0
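Exhaustion works the same way for any Python generator: next() pulls one item at a time and raises StopIteration when nothing is left. A minimal sketch with a toy generator standing in for the ranker output:

```python
def predictions_stub():
    """Toy generator standing in for ranker.predict output."""
    for score in [0.9, 0.4, 0.1]:
        yield score

gen = predictions_stub()
collected = []
try:
    while True:
        collected.append(next(gen))
except StopIteration:
    pass
print(collected)
```

A plain for loop over the generator handles StopIteration implicitly and is the idiomatic way to drain it.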
Loading all the inputs: solar abundances, SFR, infall, initial abundances and inflowing abundances.
# Initialising sfr, infall, elements to trace, solar abundances
from Chempy.wrapper import initialise_stuff
basic_solar, basic_sfr, basic_infall = initialise_stuff(a)
elements_to_trace = a.elements_to_trace
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Elemental abundances at start We need to define the abundances of: the ISM at the beginning, the corona gas at the beginning, and the cosmic inflow into the corona for all times. For all three we choose primordial here.
# Setting the abundance fractions at the beginning to primordial
from Chempy.infall import INFALL, PRIMORDIAL_INFALL
basic_primordial = PRIMORDIAL_INFALL(list(elements_to_trace), np.copy(basic_solar.table))
basic_primordial.primordial()
basic_primordial.fractions
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Initialising the element evolution matrix We now feed everything into the abundance matrix and check its entries
# Initialising the ISM instance
from Chempy.time_integration import ABUNDANCE_MATRIX
cube = ABUNDANCE_MATRIX(np.copy(basic_sfr.t), np.copy(basic_sfr.sfr), np.copy(basic_infall.infall),
                        list(elements_to_trace), list(basic_primordial.symbols),
                        list(basic_primordial.fractions), float(a.gas_at_start),
                        list(basic_primordial.symbo...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Time integration With the advance_one_step method we can evolve the matrix in time, provided we supply the feedback from each step's previous SSP.
# Now we run the time integration
from Chempy.wrapper import SSP_wrap
basic_ssp = SSP_wrap(a)
for i in range(len(basic_sfr.t) - 1):
    j = len(basic_sfr.t) - i
    ssp_mass = float(basic_sfr.sfr[i])
    # The metallicity needs to be passed for the yields to be calculated as well as the initial elemental abundances ...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Making abundances from element fractions The cube stores everything as elemental fractions; we use a tool to convert these into abundances scaled to solar:
# Turning the fractions into dex values (normalised to solar [X/H])
from Chempy.making_abundances import mass_fraction_to_abundances
abundances, elements, numbers = mass_fraction_to_abundances(np.copy(cube.cube), np.copy(basic_solar.table))
print(abundances['He'])

## Alpha enhancement over time
plot(cube.cube['time'][1...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Likelihood calculation There are a few built-in functions (actually representing the observational constraints from the Chempy paper) which return a likelihood. One of these is called sol_norm and compares the proto-solar abundances with the Chempy ISM abundances 4.5 Gyr ago.
# Here we load a likelihood test for the solar abundances
# This is how it looks for the prior parameters with the default yield set
from Chempy.data_to_test import sol_norm
probabilities, abundance_list, element_names = sol_norm(True, a.name_string, np.copy(abundances), np.copy(cube.cube), elements_to_trace, a.element_nam...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Net vs. total yield Now we will change a little detail in the time integration. Instead of letting the unprocessed material expelled from the stars ('unprocessed_mass_in_winds' in the yield tables) be composed of the stellar birth material, which would be consistent (and is what I call the 'net' yield), we now use s...
cube = ABUNDANCE_MATRIX(np.copy(basic_sfr.t), np.copy(basic_sfr.sfr), np.copy(basic_infall.infall),
                        list(elements_to_trace), list(basic_primordial.symbols),
                        list(basic_primordial.fractions), float(a.gas_at_start),
                        list(basic_primordial.symbols), list(basic_primordial.fractions),
                        float(a.gas_reservoir_mass_factor), float(a.outflo...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Making chemical evolution modelling fast and flexible Now we have all ingredients at hand. We use a wrapper function where we only need to pass the ModelParameters.
# This is a convenience function
from Chempy.wrapper import Chempy
a = ModelParameters()
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:], abundances['O'][1:] - abundances['Fe'][1:], label='O')
plot(abundances['Fe'][1:], abundances['Mn'][1:] - abundances['Fe'][1:], label='Mn')
plot(abundances['Fe'][1:], abundances...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
IMF effect Now we can easily check the effect of the IMF on the chemical evolution.
# prior IMF
a = ModelParameters()
a.imf_parameter = (0.69, 0.079, -2.29)
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:], abundances['O'][1:] - abundances['Fe'][1:], label='O', color='b')
plot(abundances['Fe'][1:], abundances['Mn'][1:] - abundances['Fe'][1:], label='Mn', color='orange')
plot(abundances['Fe'][1...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
SFR effect We can do the same for the peak of the SFR etc...
# Prior SFR
a = ModelParameters()
a.sfr_scale = 3.5
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:], abundances['O'][1:] - abundances['Fe'][1:], label='O', color='b')
plot(abundances['Fe'][1:], abundances['Mn'][1:] - abundances['Fe'][1:], label='Mn', color='orange')
plot(abundances['Fe'][1:], abundances['N'][...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Time resolution The time steps are equidistant and the resolution is flexible. Even with a coarse 0.5 Gyr resolution the results are quite good, saving a lot of computational time. Here we test time resolutions of 0.5, 0.1 and 0.025 Gyr. All results converge after the metallicity increases above -1. The shorter time ...
## 0.5 Gyr resolution
a = ModelParameters()
a.time_steps = 28  # default
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:], abundances['O'][1:] - abundances['Fe'][1:], label='O', color='b')
plot(abundances['Fe'][1:], abundances['Mn'][1:] - abundances['Fe'][1:], label='Mn', color='orange')
plot(abundances['Fe'][...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
A note on chemical evolution tracks and 'by eye' fits Sometimes astronomers like to show that their chemical evolution track runs through some stellar abundance data points. But if we want the computer to steer the fit, we need to know the selection function of the stars that we try to match, and we need to take ou...
# Default model parameters
from Chempy import localpath
a = ModelParameters()
a.check_processes = True
cube, abundances = Chempy(a)

# Red clump age distribution
selection = np.load(localpath + "input/selection/red_clump_new.npy")
time_selection = np.load(localpath + "input/selection/time_red_clump_new.npy")
plt.pl...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
This PDF can then be compared to real data to get a realistic likelihood. The nucleosynthetic feedback per element With the plot_processes routine we can plot the total feedback of each element and the fractional contribution from each nucleosynthetic feedback for a specific Chempy run.
# Loading the routine and plotting the process contribution into the current folder
# Total enrichment mass in gray to the right, single process fractional contribution to the left
from Chempy.data_to_test import plot_processes
plot_processes(True, a.name_string, cube.sn2_cube, cube.sn1a_cube, cube.agb_cube, a.element_name...
tutorials/5-Chempy_function_and_stellar_tracer_sampling.ipynb
jan-rybizki/Chempy
mit
Load the data
!wget https://alfkjartan.github.io/files/sysid_hw_data.mat
data = sio.loadmat("sysid_hw_data.mat")
system-identification/notebooks/Parameter estimation with least squares - Homework.ipynb
alfkjartan/control-computarizado
mit
Plot the data
N = len(data["u1"])
plt.figure(figsize=(14, 1.7))
plt.step(range(N), data["u1"])
plt.ylabel("u_1")
plt.figure(figsize=(14, 1.7))
plt.step(range(N), data["y1"])
plt.ylabel("y_1")
data["u1"].size
system-identification/notebooks/Parameter estimation with least squares - Homework.ipynb
alfkjartan/control-computarizado
mit
Identify a first-order model Consider the model structure $$y(k) = \frac{b_0\text{q}+b_1}{\text{q}+a} \text{q}^{-1} u(k),$$ which is a first-order model with one zero, one pole and one delay. The true system has $b_0=0.2$, $b_1=0$ and $a=-0.8$. The ARX model can be written $$ y(k+1) = -ay(k) + b_0u(k) + b_1u(k-1) + e(k...
y = np.ravel(data["y1"])
u = np.ravel(data["u1"])
Phi = np.array([-y[1:N-1], u[1:N-1], u[:N-2]]).T
yy = y[2:]
theta_ls = np.linalg.lstsq(Phi, yy, rcond=None)
theta_ls
print("Estimated: a = %f" % theta_ls[0][0])
print("Estimated: b_0 = %f" % theta_ls[0][1])
print("Estimated: b_1 = %f" % theta_ls[0]...
system-identification/notebooks/Parameter estimation with least squares - Homework.ipynb
alfkjartan/control-computarizado
mit
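The same least-squares setup can be checked on synthetic data: simulate the stated true system (a = -0.8, b0 = 0.2, b1 = 0) without noise, build the regressor matrix phi(k) = [-y(k), u(k), u(k-1)], and solve the 3x3 normal equations. The sketch below is pure Python (no NumPy assumed), with a hand-rolled Gaussian elimination; the input sequence is illustrative:

```python
# Simulate y(k+1) = 0.8*y(k) + 0.2*u(k)  (i.e. a = -0.8, b0 = 0.2, b1 = 0), noise-free
u = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1]
y = [0.0]
for k in range(len(u) - 1):
    y.append(0.8 * y[k] + 0.2 * u[k])

# Regressors phi(k) = [-y(k), u(k), u(k-1)] and targets y(k+1), for k = 1..N-2
N = len(u)
Phi = [[-y[k], u[k], u[k - 1]] for k in range(1, N - 1)]
target = [y[k + 1] for k in range(1, N - 1)]

# Normal equations: (Phi^T Phi) theta = Phi^T target
A = [[sum(r[i] * r[j] for r in Phi) for j in range(3)] for i in range(3)]
b = [sum(r[i] * t for r, t in zip(Phi, target)) for i in range(3)]

# Solve the 3x3 system by Gaussian elimination with partial pivoting
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, 3):
        f = A[r][col] / A[col][col]
        for c in range(col, 3):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
theta = [0.0, 0.0, 0.0]
for r in (2, 1, 0):
    theta[r] = (b[r] - sum(A[r][c] * theta[c] for c in range(r + 1, 3))) / A[r][r]

a_hat, b0_hat, b1_hat = theta
print(a_hat, b0_hat, b1_hat)  # expect roughly -0.8, 0.2, 0.0
```

With noise-free data the estimates recover the true parameters up to floating-point error; adding noise e(k) would only recover them approximately.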
The convergence can also be checked with the convergence plot:
vf.plot_convergence()
doc/source/examples/vectorfitting/vectorfitting_ex3_Agilent_E5071B.ipynb
jhillairet/scikit-rf
bsd-3-clause
Read the parent HyperLeda catalog. We immediately throw out objects with objtype='g' in HyperLeda, which are "probably extended" and many (most? all?) of which have incorrect D(25) diameters. We also toss out objects with D(25) > 2.5 arcmin and B > 16, which are also probably incorrect.
suffix = '0.05'
ledafile = os.path.join(LSLGAdir, 'sample', 'leda-logd25-{}.fits'.format(suffix))
leda = Table.read(ledafile)
keep = (np.char.strip(leda['OBJTYPE']) != 'g') * (leda['D25'] / 60 > mindiameter)
leda = leda[keep]
keep = ['SDSS' not in gg and '2MAS' not in gg for gg in leda['GALAXY']]
#keep = np.logical_...
doc/nb/legacysurvey-gallery-groups-dr5.ipynb
legacysurvey/legacypipe
bsd-3-clause
Run FoF with spheregroup Identify groups using a simple angular linking length. Then construct a catalog of group properties.
%time grp, mult, frst, nxt = spheregroup(leda['RA'], leda['DEC'], linking_length / 60.0)
npergrp, _ = np.histogram(grp, bins=len(grp), range=(0, len(grp)))
nbiggrp = np.sum(npergrp > 1).astype('int')
nsmallgrp = np.sum(npergrp == 1).astype('int')
ngrp = nbiggrp + nsmallgrp
print('Found {} total groups, including:'.fo...
doc/nb/legacysurvey-gallery-groups-dr5.ipynb
legacysurvey/legacypipe
bsd-3-clause
Populate the output group catalog Also add GROUPID to parent catalog to make it easier to cross-reference the two tables. D25MAX and D25MIN are the maximum and minimum D(25) diameters of the galaxies in the group.
groupcat = Table()
groupcat.add_column(Column(name='GROUPID', dtype='i4', length=ngrp, data=np.arange(ngrp)))  # unique ID number
groupcat.add_column(Column(name='GALAXY', dtype='S1000', length=ngrp))
groupcat.add_column(Column(name='NMEMBERS', dtype='i4', length=ngrp))
groupcat.add_column(Column(name='RA', dtype='f8', ...
doc/nb/legacysurvey-gallery-groups-dr5.ipynb
legacysurvey/legacypipe
bsd-3-clause
Groups with one member--
smallindx = np.arange(nsmallgrp)
ledaindx = np.where(npergrp == 1)[0]
groupcat['RA'][smallindx] = leda['RA'][ledaindx]
groupcat['DEC'][smallindx] = leda['DEC'][ledaindx]
groupcat['NMEMBERS'][smallindx] = 1
groupcat['GALAXY'][smallindx] = np.char.strip(leda['GALAXY'][ledaindx])
groupcat['DIAMETER'][smallindx] = leda['D...
doc/nb/legacysurvey-gallery-groups-dr5.ipynb
legacysurvey/legacypipe
bsd-3-clause
Groups with more than one member--
bigindx = np.arange(nbiggrp) + nsmallgrp
coord = SkyCoord(ra=leda['RA']*u.degree, dec=leda['DEC']*u.degree)

def biggroups():
    for grpindx, indx in zip(bigindx, np.where(npergrp > 1)[0]):
        ledaindx = np.where(grp == indx)[0]
        _ra, _dec = np.mean(leda['RA'][ledaindx]), np.mean(leda['DEC'][ledaindx])
        ...
doc/nb/legacysurvey-gallery-groups-dr5.ipynb
legacysurvey/legacypipe
bsd-3-clause
load data
# get data
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels sha...
cifar_lasagne.ipynb
jseppanen/cifar_lasagne
bsd-3-clause
theano input_var
input_var = T.tensor4('inputs')
cifar_lasagne.ipynb
jseppanen/cifar_lasagne
bsd-3-clause
two-layer network
def create_twolayer(input_var, input_shape=(3, 32, 32), num_hidden_units=100,
                    num_classes=10, **junk):
    # input layer
    network = lasagne.layers.InputLayer(shape=(None,) + input_shape, input_var=input_var)
    # fc-relu
    network = lasagne.layer...
cifar_lasagne.ipynb
jseppanen/cifar_lasagne
bsd-3-clause
v1: [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
v2: [conv-relu-pool]xN - [affine]xM - [softmax or SVM]
def create_v1(input_var, input_shape=(3, 32, 32), num_crp=1, crp_num_filters=32,
              crp_filter_size=5, num_cr=1, num_fc=1, fc_num_units=64,
              output_type='softmax', num_classes=10, **junk):
    # input layer
    network = lasagne.layers.InputLayer(shape=(...
cifar_lasagne.ipynb
jseppanen/cifar_lasagne
bsd-3-clause
v3: [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM] (VGG-ish)
input: 32x32x3
CONV3-64: 32x32x64
CONV3-64: 32x32x64
POOL2: 16x16x64
CONV3-128: 16x16x128
CONV3-128: 16x16x128
POOL2: 8x8x128
FC: 1x1x512
FC: 1x1x512
FC: 1x1x10
def create_v3(input_var, input_shape=(3, 32, 32), ccp_num_filters=[64, 128],
              ccp_filter_size=3, fc_num_units=[128, 128], num_classes=10, **junk):
    # input layer
    network = lasagne.layers.InputLayer(shape=(None,) + input_shape, input...
cifar_lasagne.ipynb
jseppanen/cifar_lasagne
bsd-3-clause
Exercises 1) Load the data. Run the next code cell (without changes) to load the GPS data into a pandas DataFrame birds_df.
# Load the data and print the first 5 rows
birds_df = pd.read_csv("../input/geospatial-learn-course-data/purple_martin.csv", parse_dates=['timestamp'])
print("There are {} different birds in the dataset.".format(birds_df["tag-local-identifier"].nunique()))
birds_df.head()
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
There are 11 birds in the dataset, where each bird is identified by a unique value in the "tag-local-identifier" column. Each bird has several measurements, collected at different times of the year. Use the next code cell to create a GeoDataFrame birds. - birds should have all of the columns from birds_df, along with ...
# Your code here: Create the GeoDataFrame
birds = ____

# Your code here: Set the CRS to {'init': 'epsg:4326'}
birds.crs = ____

# Check your answer
q_1.check()

#%%RM_IF(PROD)%%
# Create the GeoDataFrame
birds = gpd.GeoDataFrame(birds_df, geometry=gpd.points_from_xy(birds_df["location-long"], birds_df["location-lat"])...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
2) Plot the data. Next, we load in the 'naturalearth_lowres' dataset from GeoPandas, and set americas to a GeoDataFrame containing the boundaries of all countries in the Americas (both North and South America). Run the next code cell without changes.
# Load a GeoDataFrame with country boundaries in North/South America, print the first 5 rows
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
americas = world.loc[world['continent'].isin(['North America', 'South America'])]
americas.head()
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Use the next code cell to create a single plot that shows both: (1) the country boundaries in the americas GeoDataFrame, and (2) all of the points in the birds GeoDataFrame. Don't worry about any special styling here; just create a preliminary plot, as a quick sanity check that all of the data was loaded properly...
# Your code here
____

# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_2.hint()

#%%RM_IF(PROD)%%
ax = americas.plot(figsize=(10, 10), color='white', linestyle=':', edgecolor='gray')
birds.plot(ax=ax, markersize=10)

# Uncomment to zoom in
#ax.set_xlim([-110, -30])
#ax.set_ylim([-30, 60])

# Get credit for your work aft...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
3) Where does each bird start and end its journey? (Part 1) Now, we're ready to look more closely at each bird's path. Run the next code cell to create two GeoDataFrames: - path_gdf contains LineString objects that show the path of each bird. It uses the LineString() method to create a LineString object from a list o...
# GeoDataFrame showing path for each bird
path_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: LineString(x)).reset_index()
path_gdf = gpd.GeoDataFrame(path_df, geometry=path_df.geometry)
path_gdf.crs = {'init': 'epsg:4326'}

# GeoDataFrame showing starting point for each bird
start_d...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Use the next code cell to create a GeoDataFrame end_gdf containing the final location of each bird. - The format should be identical to that of start_gdf, with two columns ("tag-local-identifier" and "geometry"), where the "geometry" column contains Point objects. - Set the CRS of end_gdf to {'init': 'epsg:4326'}.
# Your code here end_gdf = ____ # Check your answer q_3.check() #%%RM_IF(PROD)%% end_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: x[-1]).reset_index() end_gdf = gpd.GeoDataFrame(end_df, geometry=end_df.geometry) end_gdf.crs = {'init': 'epsg:4326'} q_3.assert_check_passed() # Li...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
4) Where does each bird start and end its journey? (Part 2) Use the GeoDataFrames from the question above (path_gdf, start_gdf, and end_gdf) to visualize the paths of all birds on a single map. You may also want to use the americas GeoDataFrame.
# Your code here ____ # Uncomment to see a hint #_COMMENT_IF(PROD)_ q_4.hint() #%%RM_IF(PROD)%% ax = americas.plot(figsize=(10, 10), color='white', linestyle=':', edgecolor='gray') start_gdf.plot(ax=ax, color='red', markersize=30) path_gdf.plot(ax=ax, cmap='tab20b', linestyle='-', linewidth=1, zorder=1) end_gdf.plo...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
5) Where are the protected areas in South America? (Part 1) It looks like all of the birds end up somewhere in South America. But are they going to protected areas? In the next code cell, you'll create a GeoDataFrame protected_areas containing the locations of all of the protected areas in South America. The correspo...
# Path of the shapefile to load protected_filepath = "../input/geospatial-learn-course-data/SAPA_Aug2019-shapefile/SAPA_Aug2019-shapefile/SAPA_Aug2019-shapefile-polygons.shp" # Your code here protected_areas = ____ # Check your answer q_5.check() #%%RM_IF(PROD)%% protected_areas = gpd.read_file(protected_filepath) q...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
6) Where are the protected areas in South America? (Part 2) Create a plot that uses the protected_areas GeoDataFrame to show the locations of the protected areas in South America. (You'll notice that some protected areas are on land, while others are in marine waters.)
# Country boundaries in South America south_america = americas.loc[americas['continent']=='South America'] # Your code here: plot protected areas in South America ____ # Uncomment to see a hint #_COMMENT_IF(PROD)_ q_6.hint() #%%RM_IF(PROD)%% # Plot protected areas in South America ax = south_america.plot(figsize=(10...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
7) What percentage of South America is protected? You're interested in determining what percentage of South America is protected, so that you know how much of South America is suitable for the birds. As a first step, you calculate the total area of all protected lands in South America (not including marine area). To...
P_Area = sum(protected_areas['REP_AREA']-protected_areas['REP_M_AREA']) print("South America has {} square kilometers of protected areas.".format(P_Area))
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Then, to finish the calculation, you'll use the south_america GeoDataFrame.
south_america.head()
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Calculate the total area of South America by following these steps: - Calculate the area of each country using the area attribute of each polygon (with EPSG 3035 as the CRS), and add up the results. The calculated area will be in units of square meters. - Convert your answer to have units of square kilometers.
# Your code here: Calculate the total area of South America (in square kilometers) totalArea = ____ # Check your answer q_7.check() #%%RM_IF(PROD)%% # Calculate the total area of South America (in square kilometers) totalArea = sum(south_america.geometry.to_crs(epsg=3035).area) / 10**6 q_7.assert_check_passed() # Li...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Run the code cell below to calculate the percentage of South America that is protected.
# What percentage of South America is protected? percentage_protected = P_Area/totalArea print('Approximately {}% of South America is protected.'.format(round(percentage_protected*100, 2)))
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
8) Where are the birds in South America? So, are the birds in protected areas? Create a plot that shows, for all birds, all of the locations where they were discovered in South America. Also plot the locations of all protected areas in South America. To exclude protected areas that are purely marine areas (with no la...
# Your code here ____ # Uncomment to see a hint #_COMMENT_IF(PROD)_ q_8.hint() #%%RM_IF(PROD)%% ax = south_america.plot(figsize=(10,10), color='white', edgecolor='gray') protected_areas[protected_areas['MARINE']!='2'].plot(ax=ax, alpha=0.4, zorder=1) birds[birds.geometry.y < 0].plot(ax=ax, color='red', alpha=0.6, mar...
notebooks/geospatial/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Now let's set the projection to '3d', set the range for the viewing angles and disable pad_aspect (as it doesn't play nicely with animations).
autofig.gcf().axes.pad_aspect = False autofig.gcf().axes.projection = '3d' autofig.gcf().axes.elev.value = [0, 30] autofig.gcf().axes.azim.value = [-75, 0] anim = autofig.animate(i=times, tight_layout=False, save='phoebe_meshes_3d.gif', save_kwargs={'writer': 'imagemagick'})
docs/gallery/phoebe_meshes_3d.ipynb
kecnry/autofig
gpl-3.0
A Reader wraps a function, so it takes a callable:
r = Reader(lambda name: "Hi %s!" % name)
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
In Python you can call this wrapped function as any other callable:
r("Dag")
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
Unit Unit is a constructor that takes a value and returns a Reader that ignores the environment. That is, it ignores any value that is passed to the Reader when it's called:
r = unit(42) r("Ignored")
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
Bind You can bind a Reader to a monadic function using the pipe | operator (The bind operator is called &gt;&gt;= in Haskell). A monadic function is a function that takes a value and returns a monad, and in this case it returns a new Reader monad:
r = Reader(lambda name: "Hi %s!" % name) b = r | (lambda x: unit(x.replace("Hi", "Hello"))) b("Dag")
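To make unit and bind concrete, here is a minimal stdlib-only sketch of a Reader (a hypothetical toy, not oslash's actual implementation):

```python
class MiniReader:
    """Toy Reader: wraps a function from environment to value."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, env):
        return self.fn(env)

    def bind(self, f):
        # Run self on the environment, then run the Reader that f
        # returns on the very same environment
        return MiniReader(lambda env: f(self.fn(env))(env))


def mini_unit(value):
    # Ignore the environment entirely
    return MiniReader(lambda _env: value)


r = MiniReader(lambda name: "Hi %s!" % name)
b = r.bind(lambda x: mini_unit(x.replace("Hi", "Hello")))
print(b("Dag"))  # mirrors the oslash example above
```

The key point is that bind threads the same environment through both computations, which is exactly what makes the Reader useful for dependency injection.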
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
Applicative Apply (*) is a beefed-up map. It takes a Reader that has a function in it and another Reader, extracts the function from the first Reader, and maps it over the second one (effectively composing the two functions).
r = Reader(lambda name: "Hi %s!" % name) a = Reader.pure(lambda x: x + "!!!") * r a("Dag")
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
MonadReader The MonadReader class provides a number of convenience functions that are very useful when working with a Reader monad.
from oslash import MonadReader asks = MonadReader.asks ask = MonadReader.ask
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
Ask Provides a way to easily access the environment. Ask lets us read the environment and then play with it:
r = ask() | (lambda x: unit("Hi %s!" % x)) r("Dag")
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
Asks Given a function, it returns a Reader which evaluates that function on the environment and returns the result.
r = asks(len) r("banana")
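Both ask and asks can be modeled in a few lines of plain Python (a hypothetical toy sketch, not oslash's code):

```python
class MiniReader:
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, env):
        return self.fn(env)
    def bind(self, f):
        return MiniReader(lambda env: f(self.fn(env))(env))

def mini_ask():
    # The identity Reader: hands the environment straight back
    return MiniReader(lambda env: env)

def mini_asks(f):
    # A Reader that evaluates f on the environment
    return MiniReader(f)

greet = mini_ask().bind(lambda name: MiniReader(lambda _env: "Hi %s!" % name))
print(greet("Dag"))
print(mini_asks(len)("banana"))
```

Note that ask is just asks applied to the identity function.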
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
A Longer Example This example has been translated to Python from https://gist.github.com/egonSchiele/5752172.
from oslash import Reader, MonadReader ask = MonadReader.ask def hello(): return ask() | (lambda name: unit("Hello, " + name + "!")) def bye(): return ask() | (lambda name: unit("Bye, " + name + "!")) def convo(): return hello() | (lambda c1: bye() | (lambda c2: ...
notebooks/Reader.ipynb
dbrattli/OSlash
apache-2.0
We see that the Aharonov-Bohm effect contains several harmonics $$ g = g_0 + g_1 \cos(\phi) + g_2 \cos(2\phi) + \dots$$ Your turn: How can we get just one harmonic (as in most experiments)? Try L = 100 and W = 12, what do you see? The results should not depend on the position of the gauge transform, can you check that? ...
L,W=100,12 def Field(site1,site2,phi): x1,y1=site1.pos x2,y2=site2.pos return -np.exp(-0.5j * phi * (x1 - x2) * (y1 + y2)) H[lat.neighbors()] = Field
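On separating the harmonics: one common approach (a synthetic numpy sketch, not part of the kwant exercise — the trace below is made up) is to Fourier-transform the conductance as a function of flux and read off the peak at each harmonic frequency:

```python
import numpy as np

# Synthetic conductance trace over 10 flux periods (made-up coefficients)
phi = np.linspace(0, 20 * np.pi, 2000, endpoint=False)
g = 1.0 + 0.5 * np.cos(phi) + 0.2 * np.cos(2 * phi)

# Normalized FFT amplitudes; with 10 periods in the window, the
# fundamental lands in bin 10 and the second harmonic in bin 20
spectrum = np.abs(np.fft.rfft(g)) / len(g)
print(spectrum[10], spectrum[20])
```

In this normalization a cosine of amplitude A shows up with height A/2; zeroing every bin but one and inverse-transforming isolates that single harmonic.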
3.1.Aharonov-Bohm.ipynb
kwant-project/kwant-tutorial-2016
bsd-2-clause
Now run it, don't forget to change the x-scale of the plot. Do you understand why the x-scale is so much smaller? Do you happen to know what will happen at higher field?
phis = np.linspace(0.,0.0005,50)
3.1.Aharonov-Bohm.ipynb
kwant-project/kwant-tutorial-2016
bsd-2-clause
Input data Read in time series from testdata.csv with pandas
raw = pd.read_csv('testdata.csv', index_col = 0) raw=raw.rename(columns={'T': 'Temperature [°C]', 'Load':'Demand [kW]', 'Wind':'Wind [m/s]', 'GHI': 'Solar [W/m²]'})
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
Set up the hyperparameter instance
tunedAggregations = tune.HyperTunedAggregations( tsam.TimeSeriesAggregation( raw, hoursPerPeriod=24, clusterMethod="hierarchical", representationMethod="medoidRepresentation", rescaleClusterPeriods=False, segmentation=True, ) )
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
Load the resulting combination
results = pd.read_csv(os.path.join("results","paretoOptimalAggregation.csv"),index_col=0) results["time_steps"] = results["segments"] * results["periods"]
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
Create the animated aggregations Drop all results with timesteps below 1% of the original data set since they are not meaningful.
results = results[results["time_steps"]>80]
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
Append the original time series
results=results.append({"segments":24, "periods":365, "time_steps":len(raw)}, ignore_index=True)
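Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; a pd.concat equivalent of the cell above (toy numbers stand in for len(raw)):

```python
import pandas as pd

results = pd.DataFrame({"segments": [12], "periods": [8], "time_steps": [96]})

# Equivalent of results.append({...}, ignore_index=True) on modern pandas
new_row = pd.DataFrame([{"segments": 24, "periods": 365, "time_steps": 8760}])
results = pd.concat([results, new_row], ignore_index=True)
print(results)
```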
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
And reverse the order
results=results.iloc[::-1]
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
And create a dictionary with all aggregations we want to show in the animation
animation_list = [] for i, index in enumerate(tqdm.tqdm(results.index)): segments = results.loc[index,:].to_dict()["segments"] periods = results.loc[index,:].to_dict()["periods"] # aggregate to the selected set tunedAggregations._testAggregation(noTypicalPeriods=periods, noSegments=segments) # and r...
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
And then append one last aggregation with the novel duration/distribution representation
aggregation=tsam.TimeSeriesAggregation( raw, hoursPerPeriod=24, noSegments=segments, noTypicalPeriods=periods, clusterMethod="hierarchical", rescaleClusterPeriods=False, segmentation=True, representationMethod="durationRepresentation", distribution...
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
Create the animation Let the animation warp: slow at the beginning and slow at the end
iterator = [] for i in range(len(animation_list)): if i < 1: iterator += [i]*100 elif i < 3: iterator += [i]*50 elif i < 6: iterator += [i]*30 elif i < 20: iterator += [i]*10 elif i >= len(animation_list)-1: iterator += [i]*150 elif i > len(animation_list)-3: ...
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
Create the plot and the animation loop
import matplotlib.ticker as tick fig, axes = plt.subplots(figsize = [7, 5], dpi = 300, nrows = raw.shape[1], ncols = 1) cmap = plt.cm.get_cmap("Spectral_r").copy() cmap.set_bad((.7, .7, .7, 1)) for ii, column in enumerate(raw.columns): data = raw[column] stacked, timeindex = tsam.unstackToPeriods(copy.deepcopy(...
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
And save as an animation parallelized with ffmpeg since the default matplotlib implementation takes too long. Faster implementation than matplotlib from here: https://stackoverflow.com/a/31315362/3253411 Parallelize animation to video
def chunks(lst, n): """Yield successive n-sized chunks from lst.""" for i in range(0, len(lst), n): yield lst[i:i + n] threads = multiprocessing.cpu_count() frames=[i for i in range(len(iterator))] # divide the frame equally i_length=math.ceil(len(frames)/(threads)) frame_sets=list(chunks(frames,i_len...
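The chunks helper splits the frame list into near-equal batches, one per worker. A standalone check of the batching arithmetic (toy sizes; the notebook uses multiprocessing.cpu_count() for the thread count):

```python
import math

def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

frames = list(range(10))
threads = 4  # toy value standing in for cpu_count()
i_length = math.ceil(len(frames) / threads)
frame_sets = list(chunks(frames, i_length))
print(frame_sets)
```

With ceil rounding, every batch except possibly the last has the same size, so no frame is dropped.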
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
You can also show it inline, but it takes quite a while.
from IPython.display import HTML HTML(ani.to_jshtml())
examples/aggregation_segment_period_animation.ipynb
FZJ-IEK3-VSA/tsam
mit
NOTE on notation _x, _y, _z, ...: NumPy 0-d or 1-d arrays _X, _Y, _Z, ...: NumPy 2-d or higher dimensional arrays x, y, z, ...: 0-d or 1-d tensors X, Y, Z, ...: 2-d or higher dimensional tensors Variables Q0. Create a variable w with an initial value of 1.0 and the name weight. Then, print out the value of w.
w = tf.Variable(1.0, name="weight") with tf.Session() as sess: sess.run(w.initializer) print(sess.run(w))
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q1. Complete this code.
# Create a variable w. w = tf.Variable(1.0, name="Weight") # Q. Add 1 to w and assign the value to w. assign_op = w.assign(w + 1.0) # Or assign_op = w.assign_add(1.0) # Or assign_op = tf.assign(w, w + 1.0) with tf.Session() as sess: sess.run(w.initializer) for _ in range(10): print(sess.run(w), "=>", ...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q2. Complete this code.
w1 = tf.Variable(1.0) w2 = tf.Variable(2.0) w3 = tf.Variable(3.0) out = w1 + w2 + w3 # Q. Add an Op to initialize global variables. init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) # Initialize all variables. print(sess.run(out))
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q3-4. Complete this code.
V = tf.Variable(tf.truncated_normal([1, 10])) # Q3. Initialize `W` with 2 * W W = tf.Variable(V.initialized_value() * 2.0) # Q4. Add an Op to initialize global variables. init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) # Initialize all variables. _V, _W = sess.run([V, W...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0
Q5-8. Complete this code.
g = tf.Graph() with g.as_default(): W = tf.Variable([[0,1],[2,3]], name="Weight", dtype=tf.float32) # Q5. Print the name of `W`. print("Q5.", W.name) # Q6. Print the name of the op of `W`. print("Q6.", W.op.name) # Q7. Print the data type of `w`. print("Q7.", W.dtype) # Q8. Print the sha...
programming/Python/tensorflow/exercises/Variables_Solutions.ipynb
diegocavalca/Studies
cc0-1.0