Set up prior ranges for each parameter in the model. See the model file for further information on specific parameters. Prepending `log_` to a parameter name sets that parameter in log space.
limits = {'ito.p1': (-100, 100),
          'ito.p2': (1e-7, 50),
          'log_ito.p3': (-7, 0),
          'ito.p4': (1e-7, 50),
          'log_ito.p5': (-7, 0),
          'ito.q1': (-100, 100),
          'ito.q2': (1e-7, 50),
          'log_ito.q3': (-5, 1),
          'ito.q4': (-100, 100),
          'ito.q5': (1e-7, 50),
          'log_ito.q6': (-7, 0)}

prior = Distribution(**{key: RV("uniform", a, b - a)
                        for key, (a, b) in limits.items()})

# Test this works correctly with set-up functions
assert len(observations) == len(summary_statistics(model(prior.rvs())))
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
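As a minimal sketch of the `log_` convention described above (with hypothetical parameter names and values, not taken from this model), a sampled value for a `log_`-prefixed key is a base-10 exponent that must be mapped back to linear space before being assigned to the model parameter:

```python
# Sketch of the `log_` convention: a sampled value for a `log_`-prefixed
# key is a base-10 exponent; strip the prefix and exponentiate before
# assigning it to the model parameter. Names and values are hypothetical.
sample = {'ito.p1': 12.5, 'log_ito.p3': -3.0}

resolved = {}
for key, value in sample.items():
    if key.startswith('log_'):
        resolved[key[len('log_'):]] = 10 ** value
    else:
        resolved[key] = value

print(resolved)
```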
Run ABC-SMC inference. Set up the path to the results database.
db_path = ("sqlite:///" +
           os.path.join(tempfile.gettempdir(), "nygren_ito_original.db"))

logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Initialise ABCSMC (see pyABC documentation for further details). IonChannelDistance calculates the weighting applied to each datapoint based on the experimental variance.
abc = ABCSMC(models=model,
             parameter_priors=prior,
             distance_function=IonChannelDistance(
                 exp_id=list(observations.exp_id),
                 variance=list(observations.variance),
                 delta=0.05),
             population_size=ConstantPopulationSize(2000),
             summary_statistics=summary_statistics,
             transitions=EfficientMultivariateNormalTransition(),
             eps=MedianEpsilon(initial_epsilon=100),
             sampler=MulticoreEvalParallelSampler(n_procs=16),
             acceptor=IonChannelAcceptor())

obs = observations.to_dict()['y']
obs = {str(k): v for k, v in obs.items()}

abc_id = abc.new(db_path, obs)
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Analysis of results
history = History('sqlite:///results/nygren/ito/original/nygren_ito_original.db')
history.all_runs()

df, w = history.get_distribution()
df.describe()
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot summary statistics compared to calibrated model output.
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14

g = plot_sim_results(modelfile,
                     shibata_act,
                     firek_inact,
                     nygren_inact_kin,
                     nygren_rec,
                     df=df, w=w)
plt.tight_layout()
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot gating functions
import pandas as pd

N = 100
nyg_par_samples = df.sample(n=N, weights=w, replace=True)
nyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))])
nyg_par_samples = nyg_par_samples.to_dict(orient='records')

sns.set_context('talk')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14

f, ax = plot_variables(V, nyg_par_map,
                       'models/nygren_ito.mmt',
                       [nyg_par_samples],
                       figshape=(2, 2))
plt.tight_layout()
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot parameter posteriors
m, _, _ = myokit.load(modelfile)

originals = {}
for name in limits.keys():
    if name.startswith("log"):
        name_ = name[4:]
    else:
        name_ = name
    val = m.value(name_)
    if name.startswith("log"):
        val_ = np.log10(val)
    else:
        val_ = val
    originals[name] = val_

sns.set_context('paper')
g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)
plt.tight_layout()
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
H2O API Walkthrough

General Setup

The first question you might ask is why use H2O instead of scikit-learn or Spark MLlib. H2O is often preferred over scikit-learn because it is much more straightforward to integrate its models into an existing non-Python system, e.g. a Java-based product. Performance-wise, H2O is extremely fast and can outperform scikit-learn by a significant margin when we're dealing with large datasets. As for Spark, while it is a decent tool for ETL on raw data (which often is indeed "big"), its ML libraries are often outperformed (in training time, memory footprint and even accuracy) by better tools, sometimes by orders of magnitude. Please refer to the benchmark at the following link for more detailed numbers. Github: A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations

```bash
# install h2o in python
pip install h2o
```
# Load the H2O library and start up the H2O cluster locally on your machine
import h2o

# nthreads = -1 means use all cores on your machine
# max_mem_size is the maximum memory (in GB) to allocate to H2O
h2o.init(nthreads=-1, max_mem_size=8)
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
In this example, we will be working with a cleaned up version of the Lending Club Bad Loans dataset. The purpose here is to predict whether a loan will be bad (i.e. not repaid to the lender). The response column, bad_loan, is 1 if the loan was bad, and 0 otherwise.
filepath = 'https://raw.githubusercontent.com/h2oai/app-consumer-loan/master/data/loan.csv'
data = h2o.import_file(filepath)

print('dimension:', data.shape)
data.head(6)
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Since the task at hand is a binary classification problem, we must ensure that our response variable is encoded as a factor type. If the response is represented as numerical values of 0/1, H2O will assume we want to train a regression model.
# encode the binary response as a factor
label_col = 'bad_loan'
data[label_col] = data[label_col].asfactor()

# this is an optional step that checks the factor levels
data[label_col].levels()

# if we check the types of each column, we can see which columns
# are treated as categorical type (listed as 'enum')
data.types
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Next, we perform a three-way split: 70% for training, 15% for validation and 15% for final testing. We will train the model on one set and use the others to test its validity by checking that it can predict accurately on data it has not seen, i.e. to ensure our model generalizes.
# 1. for the splitting percentages, we can leave off the last
#    proportion, and h2o will compute the proportion of the
#    last subset for us
# 2. setting a seed will guarantee reproducibility
random_split_seed = 1234
train, valid, test = data.split_frame([0.7, 0.15], seed=random_split_seed)

print(train.nrow)
print(valid.nrow)
print(test.nrow)
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
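The split logic itself is generic; here is a small numpy sketch (independent of H2O, which may implement its split differently) of a seeded 70/15/15 split based on one uniform draw per row:

```python
import numpy as np

rng = np.random.default_rng(1234)
n = 1000
r = rng.random(n)  # one uniform draw per row

# assign each row to a subset based on where its draw falls
train_idx = np.where(r < 0.70)[0]
valid_idx = np.where((r >= 0.70) & (r < 0.85))[0]
test_idx = np.where(r >= 0.85)[0]

# every row lands in exactly one subset; the sizes are only
# approximately 70/15/15 because the assignment is random
print(len(train_idx), len(valid_idx), len(test_idx))
```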
Here, we extract the column names that will serve as our response and predictors. This information will be used during the model training phase.
# .names, .col_names and .columns are all equivalent ways of
# accessing the list of column names for the h2o dataframe
input_cols = data.columns

# remove the response, and the interest rate column
# since it's correlated with our response
input_cols.remove(label_col)
input_cols.remove('int_rate')
input_cols
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Modeling

We'll now jump into the model training part; here Gradient Boosted Machine (GBM) is used as an example. We will set the number of trees to 500. Increasing the number of trees in an ensemble tree-based model like GBM is one way to increase its performance; however, we have to be careful not to overfit our model to the training data by using too many trees. To automatically find the optimal number of trees, we can leverage H2O's early stopping functionality.

There are several parameters that can be used to control early stopping. The three that are generic to all the algorithms are: stopping_rounds, stopping_metric and stopping_tolerance. stopping_metric is the metric by which we'd like to measure performance, and we will choose AUC here. score_tree_interval is a parameter specific to tree-based models, e.g. setting score_tree_interval=5 will score the model after every five trees. The parameters we specify below say that our model will stop training after there have been three scoring intervals where the AUC has not increased by more than 0.0005. Since we have specified a validation frame, the stopping tolerance will be computed on the validation AUC rather than the training AUC.
from h2o.estimators.gbm import H2OGradientBoostingEstimator

# we specify an id for the model so we can refer to it more easily later
gbm = H2OGradientBoostingEstimator(
    seed=1,
    ntrees=500,
    model_id='gbm1',
    stopping_rounds=3,
    stopping_metric='auc',
    score_tree_interval=5,
    stopping_tolerance=0.0005)

# note that it is .train, not .fit, to train the model,
# just in case you're coming from scikit-learn
gbm.train(
    y=label_col,
    x=input_cols,
    training_frame=train,
    validation_frame=valid)

# evaluating the performance; printing the whole model performance
# object gives a whole bunch of information, we'll only access
# the auc metric here
gbm_test_performance = gbm.model_performance(test)
gbm_test_performance.auc()
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
To examine the scoring history, use the scoring_history method on a trained model. When early stopping is used, we see that training stopped before the full 500 trees. Since we also used a validation set in our model, both training and validation performance metrics are stored in the scoring history object. We can take a look at the validation AUC to observe that the correct stopping tolerance was enforced.
gbm_history = gbm.scoring_history()
gbm_history

# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12

plt.plot(gbm_history['training_auc'], label='training_auc')
plt.plot(gbm_history['validation_auc'], label='validation_auc')
plt.xticks(range(gbm_history.shape[0]),
           gbm_history['number_of_trees'].apply(int))
plt.title('GBM training history')
plt.legend()
plt.show()
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
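The stopping rule described above can be sketched as a plain function: stop once `stopping_rounds` consecutive scoring intervals each fail to improve the best-so-far AUC by more than `stopping_tolerance`. This is a simplified sketch of the idea, not H2O's exact implementation (H2O compares moving averages of the metric):

```python
def should_stop(aucs, stopping_rounds=3, stopping_tolerance=0.0005):
    """Simplified early-stopping check: True once `stopping_rounds`
    consecutive scoring intervals each improved the best-so-far AUC
    by no more than `stopping_tolerance`."""
    if len(aucs) <= stopping_rounds:
        return False
    stalls = 0
    best = aucs[0]
    for auc in aucs[1:]:
        if auc - best <= stopping_tolerance:
            stalls += 1
        else:
            stalls = 0
        best = max(best, auc)
    return stalls >= stopping_rounds

# the last three intervals improve AUC by less than 0.0005 -> stop
print(should_stop([0.60, 0.65, 0.68, 0.70, 0.7001, 0.7002, 0.7003]))
```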
Hyperparameter Tuning

When training a machine learning algorithm, we often wish to perform a hyperparameter search. Rather than training our model with different parameters manually, one by one, we will make use of H2O's Grid Search functionality. H2O offers two types of grid search: Cartesian and RandomDiscrete. Cartesian is the traditional, exhaustive grid search over all combinations of model parameters in the grid, whereas Random Grid Search samples sets of model parameters randomly for some specified period of time (or maximum number of models).
# specify the grid
gbm_params = {
    'max_depth': [3, 5, 9],
    'sample_rate': [0.8, 1.0],
    'col_sample_rate': [0.2, 0.5, 1.0]}
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
If we wish to specify model parameters that are not part of our grid, we pass them along to the grid via the H2OGridSearch.train() method. See example below.
from h2o.grid.grid_search import H2OGridSearch

gbm_tuned = H2OGridSearch(
    grid_id='gbm_tuned1',
    hyper_params=gbm_params,
    model=H2OGradientBoostingEstimator)

# we can specify other parameters, like early stopping, here
gbm_tuned.train(
    y=label_col,
    x=input_cols,
    training_frame=train,
    validation_frame=valid,
    # nfolds = 5,  # alternatively, we can use N-fold cross-validation
    ntrees=100,
    stopping_rounds=3,
    stopping_metric='auc',
    score_tree_interval=5,
    stopping_tolerance=0.0005)
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
To compare the model performance among all the models in a grid, sorted by a particular metric (e.g. AUC), we can use the get_grid method.
gbm_tuned = gbm_tuned.get_grid(sort_by='auc', decreasing=True)
gbm_tuned
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Instead of running an exhaustive grid search, the example below shows the code modification needed to run a random search. In addition to the hyperparameter dictionary, we need to specify search_criteria with the strategy 'RandomDiscrete' and a number for max_models, which is equivalent to the number of iterations to run the random search for. This example is set to run fairly quickly; we can increase max_models to cover more of the hyperparameter space. We can also expand the hyperparameter space of each algorithm by modifying the hyperparameter lists below.
# specify the grid and search criteria
gbm_params = {
    'max_depth': [3, 5, 9],
    'sample_rate': [0.8, 1.0],
    'col_sample_rate': [0.2, 0.5, 1.0]}

# note that in addition to max_models, we can specify
# max_runtime_secs to run as many models as we can
# for X amount of seconds
search_criteria = {
    'max_models': 5,
    'strategy': 'RandomDiscrete'}

# train the hyperparameter-searched model
gbm_tuned = H2OGridSearch(
    grid_id='gbm_tuned2',
    hyper_params=gbm_params,
    search_criteria=search_criteria,
    model=H2OGradientBoostingEstimator)

gbm_tuned.train(
    y=label_col,
    x=input_cols,
    training_frame=train,
    validation_frame=valid,
    ntrees=100)

# evaluate the model performance
gbm_tuned = gbm_tuned.get_grid(sort_by='auc', decreasing=True)
gbm_tuned
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Lastly, let's extract the top model, as determined by the validation data's AUC score from the grid and use it to evaluate the model performance on a test set, so we get an honest estimate of the top model performance.
# our models are reordered based on the sorting done above;
# hence we can retrieve the first model to get the best
# performing model we currently have
gbm_best = gbm_tuned.models[0]
gbm_best_performance = gbm_best.model_performance(test)
gbm_best_performance.auc()

# saving and loading the model
model_path = h2o.save_model(model=gbm_best, path='h2o_gbm', force=True)
saved_model = h2o.load_model(model_path)
gbm_best_performance = saved_model.model_performance(test)
gbm_best_performance.auc()

# generate the prediction on the test set; notice that
# it will generate the predicted probability
# along with the actual predicted class
#
# we can extract the predicted probability for the
# positive class and dump it back to pandas using
# the syntax below if necessary:
#
# gbm_test_pred['p1'].as_data_frame(use_pandas = True)
gbm_test_pred = gbm_best.predict(test)
gbm_test_pred
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Model Interpretation

After building our predictive model, we would often like to inspect which variables/features contributed the most. This interpretation process allows us to double-check that what the model is learning makes intuitive sense and enables us to explain the results to non-technical audiences. With h2o's tree-based models, we can access the varimp attribute to get the top important features.
gbm_best.varimp(use_pandas = True).head()
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
We can return the variable importance as a pandas dataframe. Hopefully the table makes intuitive sense: the first column is the feature/variable and the rest of the columns are the feature's importance represented on different scales. We'll be working with the last column, where the feature importance has been normalized to sum to 1. Note that the results are already sorted in decreasing order, so the more important the feature, the earlier it appears in the table.
# if we drop the use_pandas argument, the
# result will be a list of tuples
gbm_best.varimp()[:4]
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
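The normalized column mentioned above is just each raw importance divided by the sum of all importances; a tiny sketch with hypothetical numbers:

```python
# hypothetical raw relative importances, not actual model output
raw_importances = [120.0, 60.0, 20.0]

total = sum(raw_importances)
normalized = [imp / total for imp in raw_importances]

print(normalized)  # [0.6, 0.3, 0.1]
```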
We can also visualize this information with a bar chart.
def plot_varimp(h2o_model, n_features=None):
    """Plot variable importance for H2O tree-based models."""
    importances = h2o_model.varimp()
    feature_labels = [tup[0] for tup in importances]
    feature_imp = [tup[3] for tup in importances]

    # specify bar centers on the y axis, but flip the order
    # so the largest bar appears at the top
    pos = range(len(feature_labels))[::-1]
    if n_features is None:
        n_features = min(len(feature_imp), 10)

    fig, ax = plt.subplots(1, 1)
    plt.barh(pos[:n_features], feature_imp[:n_features],
             align='center', height=0.8,
             color='#1F77B4', edgecolor='none')

    # hide the right and top spines, color the others grey
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)
    ax.spines['bottom'].set_color('#7B7B7B')
    ax.spines['left'].set_color('#7B7B7B')

    # only show ticks on the left and bottom spines
    ax.yaxis.set_ticks_position('left')
    ax.xaxis.set_ticks_position('bottom')
    plt.yticks(pos[:n_features], feature_labels[:n_features])
    plt.ylim([min(pos[:n_features]) - 1, max(pos[:n_features]) + 1])

    title_fontsize = 14
    algo = h2o_model._model_json['algo']
    if algo == 'gbm':
        plt.title('Variable Importance: H2O GBM', fontsize=title_fontsize)
    elif algo == 'drf':
        plt.title('Variable Importance: H2O RF', fontsize=title_fontsize)
    elif algo == 'xgboost':
        plt.title('Variable Importance: H2O XGBoost', fontsize=title_fontsize)

    plt.show()

plt.rcParams['figure.figsize'] = 10, 8
plot_varimp(gbm_best)
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Another type of feature interpretation functionality that we can leverage in h2o is the partial dependence plot. The partial dependence plot is a model-agnostic machine learning interpretation tool. Please consider walking through another resource if you're not familiar with what it's doing.
partial_dep = gbm_best.partial_plot(data, cols=['annual_inc'], plot=False)
partial_dep[0].as_data_frame()
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
We use the .partial_plot method on an h2o estimator, in this case our gbm model, then pass in the data and the column for which we wish to calculate the partial dependence. The result we get back is the partial dependence table shown above. To make the process easier, we'll leverage a helper function on top of the h2o .partial_plot method so we also get a visualization for ease of interpretation.
from h2o_explainers import H2OPartialDependenceExplainer

pd_explainer = H2OPartialDependenceExplainer(gbm_best)
pd_explainer.fit(data, 'annual_inc')
pd_explainer.plot()
plt.show()
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Based on the visualization above, we can see that the higher the annual income, the lower the likelihood that the loan is bad, but beyond a certain point this feature has no effect on the model. As for the feature term, we can see that people with a 36-month loan term are less likely to have a bad loan than those with a 60-month loan term.
pd_explainer.fit(data, 'term')
pd_explainer.plot()
plt.show()

# remember to shut down the h2o cluster once we're done
h2o.cluster().shutdown(prompt=False)
big_data/h2o/h2o_api_walkthrough.ipynb
ethen8181/machine-learning
mit
Character counting and entropy

Write a function char_probs that takes a string and computes the probabilities of each character in the string: First do a character count and store the result in a dictionary. Then divide each character count by the total number of characters to compute the normalized probabilities. Return the dictionary of characters (keys) and probabilities (values).
import numpy as np

def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose
        values are the probabilities of those characters.
    """
    # first do a character count and store it in a dictionary
    char_dict = {}
    for char in s:
        char_dict[char] = char_dict.get(char, 0) + 1
    # then divide each count by the total number of characters
    total = len(s)
    return {char: count / total for char, count in char_dict.items()}

def count_words(a_string):
    """Return a word count dictionary from the words in a_string."""
    words = a_string.split()
    string_dict = {}
    # first populate the dictionary with each word as a key,
    # all starting with zero for their values
    for word in words:
        string_dict[word] = 0
    # then cycle through the words again and add 1 for each occurrence
    for word in words:
        string_dict[word] += 1
    return string_dict

test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)

test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)

test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
midterm/AlgorithmsEx03.ipynb
LimeeZ/phys292-2015-work
mit
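The same count-then-normalize pattern can be written compactly with `collections.Counter`; this is an alternative sketch, not the exercise's reference solution:

```python
from collections import Counter

def char_probs_counter(s):
    """Character probabilities of s, computed via Counter."""
    counts = Counter(s)  # maps character -> count
    total = len(s)
    return {char: count / total for char, count in counts.items()}

print(char_probs_counter('aabb'))  # {'a': 0.5, 'b': 0.5}
```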
The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as: $$H = -\sum_i P_i \log_2(P_i)$$ In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy. Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities. To compute the entropy, you should: First convert the values (probabilities) of the dict to a Numpy array of probabilities. Then use other Numpy functions (np.log2, etc.) to compute the entropy. Don't use any for or while loops in your code.
import numpy as np

def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    probs = np.array(list(d.values()))
    return -np.sum(probs * np.log2(probs))

assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
midterm/AlgorithmsEx03.ipynb
LimeeZ/phys292-2015-work
mit
Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
# I am going to go for credit if at all possible here:
def string_entropy(s):
    """Entropy of the character probabilities of the string s."""
    return entropy(char_probs(s))

interact(string_entropy, s='Type a string here')

assert True  # use this for grading the pi digits histogram
midterm/AlgorithmsEx03.ipynb
LimeeZ/phys292-2015-work
mit
1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!) Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world! Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
# https://api.forecast.io/forecast/APIKEY/LATITUDE,LONGITUDE,TIME
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/12.971599,77.594563')
data = response.json()
# print(data)
# print(data.keys())

print("Bangalore is in", data['timezone'], "timezone")
timezone_find = data.keys()  # find representation
homework06/.ipynb_checkpoints/Homework06-Dark Sky Forecast API-Radhika-checkpoint.ipynb
radhikapc/foundation-homework
mit
2. What's the current wind speed? How much warmer does it feel than it actually is?
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
# print(data.keys())

print("The current windspeed at New York is", data['currently']['windSpeed'])
# print(data['currently'])  # find how much warmer
homework06/.ipynb_checkpoints/Homework06-Dark Sky Forecast API-Radhika-checkpoint.ipynb
radhikapc/foundation-homework
mit
3. Moon Visible in New York The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
# print(data.keys())
# print(data['daily']['data'])

now_moon = data['daily']['data']
for i in now_moon:
    print("The visibility of moon today in New York is", i['moonPhase'])
homework06/.ipynb_checkpoints/Homework06-Dark Sky Forecast API-Radhika-checkpoint.ipynb
radhikapc/foundation-homework
mit
Read in data.
taxa = dendropy.TaxonSet()
ours = dendropy.Tree.get_from_path('../best.phy', 'newick', taxon_set=taxa)
ExploratoryNotebooks/RF_checking.ipynb
wrightaprilm/squamates
mit
Calculate Robinson-Foulds distance, and tree length.
non_ml = dendropy.Tree.get_from_path('../Trees/MLE/ExaML_result.SquamataPyron.MLE.2b',
                                     'newick', taxon_set=taxa)
print ours.length()
print non_ml.length()
print ours.symmetric_difference(non_ml)

taxa = dendropy.TaxonSet()
pb = dendropy.Tree.get_from_path('../Trees/TotalOptimization/Ranked/2598364',
                                 'nexus', taxon_set=taxa)

rfs = [tree.symmetric_difference(pb_o) for tree in uotrees]
rfs

olist = find_files(top='garli_opt/', filename_filter='*.tre')
print olist
otrees = [dendropy.Tree.get_from_path(filename, "nexus") for filename in olist]
ExploratoryNotebooks/RF_checking.ipynb
wrightaprilm/squamates
mit
Calculate RF score, pairwise over all trees in sample.
n = len(uotrees)
udiffarray = np.zeros((n, n))
for i, ele1 in enumerate(uotrees):
    for j, ele2 in enumerate(uotrees):
        if j >= i:
            # since the matrix is symmetrical we don't
            # need to calculate everything
            break
        difference = ele1.symmetric_difference(ele2)
        udiffarray[i, j] = difference
        udiffarray[j, i] = difference

diffarray

o_tl = [tree.length() for tree in otrees]
print o_tl
uo_tl = [mle.length() for mle in uotrees]
print uo_tl
ExploratoryNotebooks/RF_checking.ipynb
wrightaprilm/squamates
mit
All of the basic arithmetic operations are supported.
a = torch.Tensor([1, 2])
b = torch.Tensor([3, 4])

print('a + b:', a + b)
print('a - b:', a - b)
print('a * b:', a * b)
print('a / b:', a / b)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Indexing/slicing also behaves the same.
a = torch.randint(0, 10, (4, 4))
print('a:', a, '\n')

# Slice using ranges
print('a[2:, :]', a[2:, :], '\n')

# Can count backwards using negative indices
print('a[:, -1]', a[:, -1])
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Resizing and reshaping tensors is also quite simple
print('Turn tensor into a 1 dimensional array:')
a = torch.randint(0, 10, (3, 3))
print(f'Before size: {a.size()}')
print(a, '\n')

a = a.view(1, 9)
print(f'After size: {a.size()}')
print(a)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Changing a Tensor to and from an array is also quite simple:
# Tensor from array
arr = np.array([1, 2])
torch.from_numpy(arr)

# Tensor to array
t = torch.Tensor([1, 2])
t.numpy()
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Moving Tensors to the GPU is also quite simple:
t = torch.Tensor([1, 2])  # on CPU
if torch.cuda.is_available():
    t = t.cuda()  # on GPU
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Automatic Differentiation https://pytorch.org/tutorials/beginner/basics/autograd_tutorial.html Derivatives and gradients are critical to a large number of machine learning algorithms. One of the key benefits of PyTorch is that these can be computed automatically. We'll demonstrate this using the following example. Suppose we have some data $x$ and $y$, and want to fit a model: $$ \hat{y} = mx + b $$ by minimizing the loss function: $$ L(y, \hat{y}) = \frac{1}{2}(y - \hat{y})^2 $$
# Data
x = torch.tensor([1., 2, 3, 4])  # requires_grad = False by default
y = torch.tensor([0., -1, -2, -3])

# Initialize parameters
m = torch.rand(1, requires_grad=True)
b = torch.rand(1, requires_grad=True)

# Define regression function
y_hat = m * x + b
print(y_hat)

# Define loss
loss = torch.mean(0.5 * (y - y_hat)**2)
loss.backward()  # backprop the gradients of the loss w.r.t. other variables
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
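For this particular loss, the gradients can also be derived by hand, which is a useful sanity check on what `.backward()` computes: with $L = \frac{1}{2N}\sum (y - \hat{y})^2$ and $\hat{y} = mx + b$, we get $\partial L/\partial m = \text{mean}((\hat{y} - y)\,x)$ and $\partial L/\partial b = \text{mean}(\hat{y} - y)$. A numpy sketch (so it runs without torch), verified against a finite-difference estimate:

```python
import numpy as np

x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
m, b = 0.5, 0.5  # arbitrary parameter values for the check

y_hat = m * x + b
residual = y_hat - y

# analytic gradients of L = mean(0.5 * (y - y_hat)**2)
dL_dm = np.mean(residual * x)
dL_db = np.mean(residual)

# finite-difference check on dL/dm
eps = 1e-6
loss = lambda m_, b_: np.mean(0.5 * (y - (m_ * x + b_)) ** 2)
numeric = (loss(m + eps, b) - loss(m - eps, b)) / (2 * eps)
print(abs(dL_dm - numeric) < 1e-5)  # True
```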
If we look at the $x$ and $y$ values, you can see that the perfect values for our parameters are $m = -1$ and $b = 1$. To obtain the gradients of $L$ w.r.t. $m$ and $b$, you need only run:
# Gradients
print('Gradients:')
print('dL/dm: %0.4f' % m.grad)
print('dL/db: %0.4f' % b.grad)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Training Models

While automatic differentiation is in itself a useful feature, it can be quite tedious to keep track of all of the different parameters and gradients for more complicated models. In order to make life simple, PyTorch defines a torch.nn.Module class which handles all of these details for you. To paraphrase the PyTorch documentation, this is the base class for all neural network modules, and whenever you define a model it should be a subclass of this class. There are two main functions you need to implement for a Module class:
- __init__: called first when the object is instantiated. Used to set up parameters, layers, etc.
- forward: when the model is called, this forwards the inputs through the model.
Here is an example implementation of the simple linear model given above:
import torch.nn as nn

class LinearModel(nn.Module):
    def __init__(self):
        """This method is called when you instantiate a new LinearModel
        object. You should use it to define the parameters/layers of
        your model.
        """
        # Whenever you define a new nn.Module you should start the
        # __init__() method with the following line. Remember to replace
        # `LinearModel` with whatever you are calling your model.
        super(LinearModel, self).__init__()

        # Now we define the parameters used by the model.
        self.m = torch.nn.Parameter(torch.rand(1))
        self.b = torch.nn.Parameter(torch.rand(1))

    def forward(self, x):
        """This method computes the output of the model.

        Args:
            x: The input data.
        """
        return self.m * x + self.b

# Initialize model
model = LinearModel()

# Example forward pass. Note that we use model(x), not model.forward(x) !!!
y_hat = model(x)
print(x, y_hat)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
To train this model we need to pick an optimizer such as SGD, AdaDelta, ADAM, etc. There are many options in torch.optim. When initializing an optimizer, the first argument will be the collection of variables you want optimized. To obtain a list of all of the trainable parameters of a model you can call the nn.Module.parameters() method. For example, the following code initializes an SGD optimizer for the model defined above:
import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Training is done in a loop. The general structure is:
1. Clear the gradients.
2. Evaluate the model.
3. Calculate the loss.
4. Backpropagate.
5. Perform an optimization step.
6. (Once in a while) Print monitoring metrics.
For example, we can train our linear model by running:
import time

for i in range(5001):
    optimizer.zero_grad()
    y_hat = model(x)  # calling model() calls the forward function
    loss = torch.mean(0.5 * (y - y_hat)**2)
    loss.backward()
    optimizer.step()
    if i % 1000 == 0:
        time.sleep(1)  # DO NOT INCLUDE THIS IN YOUR CODE !!! Only for demo.
        print(f'Iteration {i} - Loss: {loss.item():0.6f}')
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Observe that the final parameters are what we expect:
print(model(x), y)
print('Final parameters:')
print('m: %0.2f' % model.m)
print('b: %0.2f' % model.b)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
CASE STUDY: POS Tagging! Now let's dive into an example that is more relevant to NLP and to your HW3: part-of-speech tagging! We will build up the code to the point where you can process the POS data into tensors, then train a simple model on it. The code we build up to forms the basis of the code in the homework assignment. To start, we'll need some data to train and evaluate on. First download the train and dev POS data twitter_train.pos and twitter_dev.pos into the same directory as this notebook.
print("First data point:")
with open('twitter_train.pos', 'r') as f:
    for line in f:
        line = line.strip()
        print('\t', line)
        if line == '':
            break
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
We will now introduce three new components which are vital to training (NLP) models:
1. a Vocabulary object which converts tokens/labels to integers. It should also be able to handle padding so that batches can be created easily.
2. a Dataset object which takes in the data file and produces data tensors.
3. a DataLoader object which takes data tensors from the Dataset and batches them.

Vocabulary

Next, we need to get our data into Python in a form that is usable by PyTorch. For text data this typically entails building a Vocabulary of all of the words, then mapping words to integers corresponding to their place in the sorted vocabulary. This can be done as follows:
class Vocabulary():
    """ Object holding vocabulary and mappings
    Args:
        word_list: ``list`` A list of words. Words assumed to be unique.
        add_unk_token: ``bool`` Whether to create a token for unknown tokens.
    """
    def __init__(self, word_list, add_unk_token=False):
        # create special tokens for padding and unknown words
        self.pad_token = '<pad>'
        self.unk_token = '<unk>' if add_unk_token else None

        self.special_tokens = [self.pad_token]
        if self.unk_token:
            self.special_tokens += [self.unk_token]

        self.word_list = word_list

        # maps from the token ID to the token
        self.id_to_token = self.word_list + self.special_tokens

        # maps from the token to its token ID
        self.token_to_id = {token: id for id, token in
                            enumerate(self.id_to_token)}

    def __len__(self):
        """ Returns size of vocabulary """
        return len(self.token_to_id)

    @property
    def pad_token_id(self):
        return self.map_token_to_id(self.pad_token)

    def map_token_to_id(self, token: str):
        """ Maps a single token to its token ID """
        if token not in self.token_to_id:
            token = self.unk_token
        return self.token_to_id[token]

    def map_id_to_token(self, id: int):
        """ Maps a single token ID to its token """
        return self.id_to_token[id]

    def map_tokens_to_ids(self, tokens: list, max_length: int = None):
        """ Maps a list of tokens to a list of token IDs """
        # truncate extra tokens and pad to `max_length`
        if max_length:
            tokens = tokens[:max_length]
            tokens = tokens + [self.pad_token]*(max_length-len(tokens))
        return [self.map_token_to_id(token) for token in tokens]

    def map_ids_to_tokens(self, ids: list, filter_padding=True):
        """ Maps a list of token IDs to a list of tokens """
        tokens = [self.map_id_to_token(id) for id in ids]
        if filter_padding:
            tokens = [t for t in tokens if t != self.pad_token]
        return tokens
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Let's create a vocabulary with a small number of words:
word_list = ['i', 'like', 'dogs', '!'] vocab = Vocabulary(word_list, add_unk_token=True) print('map from the token "i" to its token ID, then back again') token_id = vocab.map_token_to_id('i') print(token_id) print(vocab.map_id_to_token(token_id)) print('what about a token not in our vocabulary like "you"?') token_id = vocab.map_token_to_id('you') print(token_id) print(vocab.map_id_to_token(token_id)) token_ids = vocab.map_tokens_to_ids(['i', 'like', 'dogs', '!'], max_length=10) print("mapping a sequence of tokens: \'['i', 'like', 'dogs', '!']\'") print(token_ids) print(vocab.map_ids_to_tokens(token_ids, filter_padding=False))
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Dataset

Next, we need a way to efficiently read in the data file and process it into tensors. PyTorch provides an easy way to do this using the torch.utils.data.Dataset class. We will create our own class which inherits from it. Helpful link: https://pytorch.org/tutorials/beginner/basics/data_tutorial.html

A custom Dataset class must implement three functions:
1. `__init__`: run once when instantiating the Dataset object.
2. `__len__`: returns the number of data points in our dataset.
3. `__getitem__`: returns a sample from the dataset given the index of the sample. The output of this part should be a dictionary of (mostly) PyTorch tensors.
class TwitterPOSDataset(torch.utils.data.Dataset):
    def __init__(self, data_path, max_length=30):
        self._max_length = max_length
        self._dataset = []
        # read the dataset file, extracting tokens and tags
        with open(data_path, 'r') as f:
            tokens, tags = [], []
            for line in f:
                elements = line.strip().split('\t')
                # empty line means end of sentence
                if elements == [""]:
                    self._dataset.append({'tokens': tokens, 'tags': tags})
                    tokens, tags = [], []
                else:
                    tokens.append(elements[0].lower())
                    tags.append(elements[1])

        # initialize an empty vocabulary
        self.token_vocab = None
        self.tag_vocab = None

    def __len__(self):
        return len(self._dataset)

    def __getitem__(self, item: int):
        # get the sample corresponding to the index
        instance = self._dataset[item]

        # check the vocabulary has been set
        assert self.token_vocab is not None
        assert self.tag_vocab is not None

        # Convert inputs to tensors, then return
        return self.tensorize(instance['tokens'], instance['tags'],
                              self._max_length)

    def tensorize(self, tokens, tags=None, max_length=None):
        # map the tokens and tags into their ID form
        token_ids = self.token_vocab.map_tokens_to_ids(tokens, max_length)
        tensor_dict = {'token_ids': torch.LongTensor(token_ids)}
        if tags:
            tag_ids = self.tag_vocab.map_tokens_to_ids(tags, max_length)
            tensor_dict['tag_ids'] = torch.LongTensor(tag_ids)
        return tensor_dict

    def get_tokens_list(self):
        """ Returns set of tokens in dataset """
        tokens = [token for d in self._dataset for token in d['tokens']]
        return sorted(set(tokens))

    def get_tags_list(self):
        """ Returns set of tags in dataset """
        tags = [tag for d in self._dataset for tag in d['tags']]
        return sorted(set(tags))

    def set_vocab(self, token_vocab: Vocabulary, tag_vocab: Vocabulary):
        self.token_vocab = token_vocab
        self.tag_vocab = tag_vocab
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Now let's create Dataset objects for our training and validation sets! A key step here is creating the Vocabulary for these datasets. We will use the list of words in the training set to initialize a Vocabulary object over the input words. We will also use the list of tags to initialize a Vocabulary over the tags.
train_dataset = TwitterPOSDataset('twitter_train.pos') dev_dataset = TwitterPOSDataset('twitter_dev.pos') # Get list of tokens and tags seen in training set and use to create Vocabulary token_list = train_dataset.get_tokens_list() tag_list = train_dataset.get_tags_list() token_vocab = Vocabulary(token_list, add_unk_token=True) tag_vocab = Vocabulary(tag_list) # Update the train/dev set with vocabulary. Notice we created the vocabulary using the training set train_dataset.set_vocab(token_vocab, tag_vocab) dev_dataset.set_vocab(token_vocab, tag_vocab) print(f'Size of training set: {len(train_dataset)}') print(f'Size of validation set: {len(dev_dataset)}')
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Let's print out one data point of the tensorized data and see what it looks like
instance = train_dataset[2] print(instance) tokens = train_dataset.token_vocab.map_ids_to_tokens(instance['token_ids']) tags = train_dataset.tag_vocab.map_ids_to_tokens(instance['tag_ids']) print() print(f'Tokens: {tokens}') print(f'Tags: {tags}')
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
DataLoader At this point our data is in a tensor, and we can create context windows using only PyTorch operations. Now we need a way to generate batches of data for training and evaluation. To do this, we will wrap our Dataset objects in a torch.utils.data.DataLoader object, which will automatically batch datapoints.
batch_size = 3 print(f'Setting batch_size to be {batch_size}') train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size) dev_dataloader = torch.utils.data.DataLoader(dev_dataset, batch_size)
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Now let's do one iteration over our training set to see what a batch looks like:
for batch in train_dataloader: print(batch, '\n') print(f'Size of tag_ids: {batch["tag_ids"].size()}') break
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Model Now that we can read in the data, it is time to build our model. We will build a very simple LSTM based tagger! Note that this is pretty similar to the code in simple_tagger.py in your homework, but with a lot of things hardcoded. Useful links: - Embedding Layer: https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html - LSTMs: https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html - Linear Layer: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html?highlight=linear#torch.nn.Linear
class SimpleTagger(torch.nn.Module):
    def __init__(self, token_vocab, tag_vocab):
        super(SimpleTagger, self).__init__()
        self.token_vocab = token_vocab
        self.tag_vocab = tag_vocab
        self.num_tags = len(self.tag_vocab)

        # Initialize random embeddings of size 50 for each word in your token vocabulary
        self._embeddings = torch.nn.Embedding(len(token_vocab), 50)

        # Initialize a single-layer bidirectional LSTM encoder
        self._encoder = torch.nn.LSTM(input_size=50, hidden_size=25,
                                      num_layers=1, bidirectional=True)

        # A Linear layer which projects from the hidden state size to the number of tags
        self._tag_projection = torch.nn.Linear(in_features=50,
                                               out_features=len(self.tag_vocab))

        # Loss will be a Cross Entropy Loss over the tags (except the padding token)
        self.loss = torch.nn.CrossEntropyLoss(ignore_index=self.tag_vocab.pad_token_id)

    def forward(self, token_ids, tag_ids=None):
        # Create mask over all the positions where the input is padded
        mask = token_ids != self.token_vocab.pad_token_id

        # Embed inputs
        embeddings = self._embeddings(token_ids).permute(1, 0, 2)

        # Feed embeddings through LSTM
        encoder_outputs = self._encoder(embeddings)[0].permute(1, 0, 2)

        # Project output of LSTM through linear layer to get logits
        tag_logits = self._tag_projection(encoder_outputs)

        # Get the maximum score for each position as the predicted tag
        pred_tag_ids = torch.max(tag_logits, dim=-1)[1]

        output_dict = {
            'pred_tag_ids': pred_tag_ids,
            'tag_logits': tag_logits,
            'tag_probs': torch.nn.functional.softmax(tag_logits, dim=-1)  # convert logits to probs
        }

        # Compute loss and accuracy if gold tags are provided
        if tag_ids is not None:
            loss = self.loss(tag_logits.view(-1, self.num_tags), tag_ids.view(-1))
            output_dict['loss'] = loss

            correct = pred_tag_ids == tag_ids  # 1's in positions where pred matches gold
            correct *= mask  # zero out positions where mask is zero
            output_dict['accuracy'] = torch.sum(correct)/torch.sum(mask)

        return output_dict
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Training The training script essentially follows the same pattern that we used for the linear model above. However, we have also added an evaluation step and code for saving model checkpoints.
from tqdm import tqdm

################################
# Setup
################################
# Create model
model = SimpleTagger(token_vocab=token_vocab, tag_vocab=tag_vocab)
if torch.cuda.is_available():
    model = model.cuda()

# Initialize optimizer.
# Note: The learning rate is an important hyperparameter to tune
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

################################
# Training and Evaluation!
################################
num_epochs = 10
best_dev_loss = float('inf')
for epoch in range(num_epochs):
    print('\nEpoch', epoch)

    # Training loop
    model.train()  # THIS PART IS VERY IMPORTANT TO SET BEFORE TRAINING
    train_loss = 0
    train_acc = 0
    for batch in train_dataloader:
        batch_size = batch['token_ids'].size(0)
        optimizer.zero_grad()
        output_dict = model(**batch)
        loss = output_dict['loss']
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*batch_size
        accuracy = output_dict['accuracy']
        train_acc += accuracy*batch_size
    train_loss /= len(train_dataset)
    train_acc /= len(train_dataset)
    print(f'Train loss {train_loss} accuracy {train_acc}')

    # Evaluation loop
    model.eval()  # THIS PART IS VERY IMPORTANT TO SET BEFORE EVALUATION
    dev_loss = 0
    dev_acc = 0
    for batch in dev_dataloader:
        batch_size = batch['token_ids'].size(0)
        output_dict = model(**batch)
        dev_loss += output_dict['loss'].item()*batch_size
        dev_acc += output_dict['accuracy']*batch_size
    dev_loss /= len(dev_dataset)
    dev_acc /= len(dev_dataset)
    print(f'Dev loss {dev_loss} accuracy {dev_acc}')

    # Save best model
    if dev_loss < best_dev_loss:
        print('Best so far')
        torch.save(model, 'model.pt')
        best_dev_loss = dev_loss
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Loading Trained Models Loading a pretrained model can be done easily. To learn more about saving/loading models see https://pytorch.org/tutorials/beginner/saving_loading_models.html
model = torch.load('model.pt')
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
Feed in your own sentences!
sentence = 'i want to eat a pizza .'.lower().split()

# convert sentence to tensor dictionary
tensor_dict = train_dataset.tensorize(sentence)

# unsqueeze first dimension so batch size is 1
tensor_dict['token_ids'] = tensor_dict['token_ids'].unsqueeze(0)
print(tensor_dict)

# feed through model
output_dict = model(**tensor_dict)

# get predicted tag IDs
pred_tag_ids = output_dict['pred_tag_ids'].squeeze().tolist()
print(pred_tag_ids)

# convert tag IDs to tag names
print(model.tag_vocab.map_ids_to_tokens(pred_tag_ids))
tutorials/intro_to_pytorch.ipynb
sameersingh/uci-statnlp
apache-2.0
A Tour of TensorFlow Probability <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/probability/examples/A_Tour_of_TensorFlow_Probability"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> In this Colab, we'll tour some of the basic features of TensorFlow Probability. Dependencies & Prerequisites
#@title Import { display-mode: "form" } from pprint import pprint import matplotlib.pyplot as plt import numpy as np import seaborn as sns import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp sns.reset_defaults() sns.set_context(context='talk',font_scale=0.7) plt.rcParams['image.cmap'] = 'viridis' %matplotlib inline tfd = tfp.distributions tfb = tfp.bijectors #@title Utils { display-mode: "form" } def print_subclasses_from_module(module, base_class, maxwidth=80): import functools, inspect, sys subclasses = [name for name, obj in inspect.getmembers(module) if inspect.isclass(obj) and issubclass(obj, base_class)] def red(acc, x): if not acc or len(acc[-1]) + len(x) + 2 > maxwidth: acc.append(x) else: acc[-1] += ", " + x return acc print('\n'.join(functools.reduce(red, subclasses, [])))
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Overview TensorFlow TensorFlow Probability Distributions Bijectors MCMC ...and more! Getting started: TensorFlow TensorFlow is a library for scientific computing. It supports: lots of math operations, efficient vectorized computation, easy hardware acceleration, and automatic differentiation. Vectorization Vectorization makes things fast! It also means we care a lot about shapes.
mats = tf.random.uniform(shape=[1000, 10, 10]) vecs = tf.random.uniform(shape=[1000, 10, 1]) def for_loop_solve(): return np.array( [tf.linalg.solve(mats[i, ...], vecs[i, ...]) for i in range(1000)]) def vectorized_solve(): return tf.linalg.solve(mats, vecs) # Vectorization for the win! %timeit for_loop_solve() %timeit vectorized_solve()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Hardware acceleration
# Code can run seamlessly on a GPU, just change Colab runtime type # in the 'Runtime' menu. if tf.test.gpu_device_name() == '/device:GPU:0': print("Using a GPU") else: print("Using a CPU")
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Automatic differentiation
a = tf.constant(np.pi) b = tf.constant(np.e) with tf.GradientTape() as tape: tape.watch([a, b]) c = .5 * (a**2 + b**2) grads = tape.gradient(c, [a, b]) print(grads[0]) print(grads[1])
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
TensorFlow Probability TensorFlow Probability is a library for probabilistic reasoning and statistical analysis in TensorFlow. It supports modeling, inference, and criticism through low-level, modular components. Low-level building blocks: Distributions, Bijectors. Higher-level constructs: Markov chain Monte Carlo, Probabilistic layers, Structural time series, Generalized linear models, Optimizers. Distributions A tfp.distributions.Distribution is a class with two core methods: sample and log_prob. TFP has a lot of distributions!
print_subclasses_from_module(tfp.distributions, tfp.distributions.Distribution)
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
A simple, scalar-variate Distribution
# A standard normal normal = tfd.Normal(loc=0., scale=1.) print(normal) # Plot 1000 samples from a standard normal samples = normal.sample(1000) sns.distplot(samples) plt.title("Samples from a standard Normal") plt.show() # Compute the log_prob of a point in the event space of `normal` normal.log_prob(0.) # Compute the log_prob of a few points normal.log_prob([-1., 0., 1.])
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Distributions and shapes Numpy ndarrays and TensorFlow Tensors have shapes. TensorFlow Probability Distributions have shape semantics: we partition shapes into semantically distinct pieces, even though the same chunk of memory (Tensor/ndarray) is used for the whole thing. Batch shape denotes a collection of Distributions with different parameters. Event shape denotes the shape of samples from the Distribution. We always put batch shapes on the "left" and event shapes on the "right". Batches of scalar-variate Distributions Batches are like "vectorized" distributions: independent instances whose computations happen in parallel.
# Create a batch of 3 normals, and plot 1000 samples from each
normals = tfd.Normal([-2.5, 0., 2.5], 1.)  # The scale parameter broadcasts!
print("Batch shape:", normals.batch_shape)
print("Event shape:", normals.event_shape)

# Samples' shapes go on the left!
samples = normals.sample(1000)
print("Shape of samples:", samples.shape)

# Sample shapes can themselves be more complicated
print("Shape of samples:", normals.sample([10, 10, 10]).shape)

# A batch of normals gives a batch of log_probs.
print(normals.log_prob([-2.5, 0., 2.5]))

# The computation broadcasts, so a batch of normals applied to a scalar
# also gives a batch of log_probs.
print(normals.log_prob(0.))

# Normal numpy-like broadcasting rules apply!
xs = np.linspace(-6, 6, 200)
try:
    normals.log_prob(xs)
except Exception as e:
    print("TFP error:", e.message)

# That fails for the same reason this does:
try:
    np.zeros(200) + np.zeros(3)
except Exception as e:
    print("Numpy error:", e)

# But this would work:
a = np.zeros([200, 1]) + np.zeros(3)
print("Broadcast shape:", a.shape)

# And so will this!
xs = np.linspace(-6, 6, 200)[..., np.newaxis]  # => shape = [200, 1]
lps = normals.log_prob(xs)
print("Broadcast log_prob shape:", lps.shape)

# Summarizing visually
for i in range(3):
    sns.distplot(samples[:, i], kde=False, norm_hist=True)
plt.plot(np.tile(xs, 3), normals.prob(xs), c='k', alpha=.5)
plt.title("Samples from 3 Normals, and their PDF's")
plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Vector-variate Distributions
mvn = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag = [1., 1.]) print("Batch shape:", mvn.batch_shape) print("Event shape:", mvn.event_shape) samples = mvn.sample(1000) print("Samples shape:", samples.shape) g = sns.jointplot(samples[:, 0], samples[:, 1], kind='scatter') plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Matrix-variate Distributions
lkj = tfd.LKJ(dimension=10, concentration=[1.5, 3.0]) print("Batch shape: ", lkj.batch_shape) print("Event shape: ", lkj.event_shape) samples = lkj.sample() print("Samples shape: ", samples.shape) fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(6, 3)) sns.heatmap(samples[0, ...], ax=axes[0], cbar=False) sns.heatmap(samples[1, ...], ax=axes[1], cbar=False) fig.tight_layout() plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Gaussian Processes
kernel = tfp.math.psd_kernels.ExponentiatedQuadratic() xs = np.linspace(-5., 5., 200).reshape([-1, 1]) gp = tfd.GaussianProcess(kernel, index_points=xs) print("Batch shape:", gp.batch_shape) print("Event shape:", gp.event_shape) upper, lower = gp.mean() + [2 * gp.stddev(), -2 * gp.stddev()] plt.plot(xs, gp.mean()) plt.fill_between(xs[..., 0], upper, lower, color='k', alpha=.1) for _ in range(5): plt.plot(xs, gp.sample(), c='r', alpha=.3) plt.title(r"GP prior mean, $2\sigma$ intervals, and samples") plt.show() # *** Bonus question *** # Why do so many of these functions lie outside the 95% intervals?
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
GP Regression
# Suppose we have some observed data obs_x = [[-3.], [0.], [2.]] # Shape 3x1 (3 1-D vectors) obs_y = [3., -2., 2.] # Shape 3 (3 scalars) gprm = tfd.GaussianProcessRegressionModel(kernel, xs, obs_x, obs_y) upper, lower = gprm.mean() + [2 * gprm.stddev(), -2 * gprm.stddev()] plt.plot(xs, gprm.mean()) plt.fill_between(xs[..., 0], upper, lower, color='k', alpha=.1) for _ in range(5): plt.plot(xs, gprm.sample(), c='r', alpha=.3) plt.scatter(obs_x, obs_y, c='k', zorder=3) plt.title(r"GP posterior mean, $2\sigma$ intervals, and samples") plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Bijectors Bijectors represent (mostly) invertible, smooth functions. They can be used to transform distributions while preserving the ability to take samples and compute log_probs. They live in the tfp.bijectors module. Each Bijector implements at least 3 methods: forward, inverse, and (at least) one of forward_log_det_jacobian and inverse_log_det_jacobian. With these ingredients, we can transform a distribution and still get samples and log probs from the result! The math, slightly messy: $X$ is a random variable with pdf $p(x)$. $g$ is a smooth, invertible function on the space of $X$'s. $Y = g(X)$ is a new, transformed random variable. $p(Y=y) = p(X=g^{-1}(y)) \cdot |\nabla g^{-1}(y)|$ Caching Bijectors also cache the forward and inverse computations, and the log-det-Jacobians, which lets us avoid repeating potentially expensive operations.
print_subclasses_from_module(tfp.bijectors, tfp.bijectors.Bijector)
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
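As a quick numeric sanity check of the change-of-variables formula above — in plain Python, independent of TFP — take $g(x) = e^x$ (what the Exp bijector computes) applied to a standard normal. Then $p_Y(y) = p_X(\log y) \cdot \frac{1}{y}$ should match the known standard log-normal density:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of X ~ Normal(mu, sigma)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lognormal_pdf(y):
    # Change of variables for Y = g(X) = exp(X):
    # p_Y(y) = p_X(g^{-1}(y)) * |d/dy g^{-1}(y)| = p_X(log y) * (1 / y)
    return normal_pdf(math.log(y)) / y

def lognormal_pdf_closed_form(y):
    # Standard log-normal density, written out directly for comparison
    return math.exp(-0.5 * math.log(y) ** 2) / (y * math.sqrt(2 * math.pi))

for y in (0.5, 1.0, 2.0, 5.0):
    assert abs(lognormal_pdf(y) - lognormal_pdf_closed_form(y)) < 1e-12
```

The two expressions agree at every test point, which is exactly what the formula promises: transform the point back through $g^{-1}$, evaluate the base density, and multiply by the Jacobian of the inverse.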
A simple Bijector
normal_cdf = tfp.bijectors.NormalCDF() xs = np.linspace(-4., 4., 200) plt.plot(xs, normal_cdf.forward(xs)) plt.show() plt.plot(xs, normal_cdf.forward_log_det_jacobian(xs, event_ndims=0)) plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Transforming a Distribution with a Bijector
exp_bijector = tfp.bijectors.Exp() log_normal = exp_bijector(tfd.Normal(0., .5)) samples = log_normal.sample(1000) xs = np.linspace(1e-10, np.max(samples), 200) sns.distplot(samples, norm_hist=True, kde=False) plt.plot(xs, log_normal.prob(xs), c='k', alpha=.75) plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Batching Bijectors
# Create a batch of bijectors of shape [3,] softplus = tfp.bijectors.Softplus( hinge_softness=[1., .5, .1]) print("Hinge softness shape:", softplus.hinge_softness.shape) # For broadcasting, we want this to be shape [200, 1] xs = np.linspace(-4., 4., 200)[..., np.newaxis] ys = softplus.forward(xs) print("Forward shape:", ys.shape) # Visualization lines = plt.plot(np.tile(xs, 3), ys) for line, hs in zip(lines, softplus.hinge_softness): line.set_label("Softness: %1.1f" % hs) plt.legend() plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Caching
# This bijector represents a matrix outer product on the forward pass, # and a cholesky decomposition on the inverse pass. The latter costs O(N^3)! bij = tfb.CholeskyOuterProduct() size = 2500 # Make a big, lower-triangular matrix big_lower_triangular = tf.eye(size) # Squaring it gives us a positive-definite matrix big_positive_definite = bij.forward(big_lower_triangular) # Caching for the win! %timeit bij.inverse(big_positive_definite) %timeit tf.linalg.cholesky(big_positive_definite)
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
MCMC TFP has built-in support for several standard Markov chain Monte Carlo algorithms, including Hamiltonian Monte Carlo. Generate a dataset
# Generate some data def f(x, w): # Pad x with 1's so we can add bias via matmul x = tf.pad(x, [[1, 0], [0, 0]], constant_values=1) linop = tf.linalg.LinearOperatorFullMatrix(w[..., np.newaxis]) result = linop.matmul(x, adjoint=True) return result[..., 0, :] num_features = 2 num_examples = 50 noise_scale = .5 true_w = np.array([-1., 2., 3.]) xs = np.random.uniform(-1., 1., [num_features, num_examples]) ys = f(xs, true_w) + np.random.normal(0., noise_scale, size=num_examples) # Visualize the data set plt.scatter(*xs, c=ys, s=100, linewidths=0) grid = np.meshgrid(*([np.linspace(-1, 1, 100)] * 2)) xs_grid = np.stack(grid, axis=0) fs_grid = f(xs_grid.reshape([num_features, -1]), true_w) fs_grid = np.reshape(fs_grid, [100, 100]) plt.colorbar() plt.contour(xs_grid[0, ...], xs_grid[1, ...], fs_grid, 20, linewidths=1) plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Define a joint log-prob function Closing over the data forms a partial application of the joint log probability, which gives us the unnormalized posterior.
# Define the joint_log_prob function, and our unnormalized posterior. def joint_log_prob(w, x, y): # Our model in maths is # w ~ MVN([0, 0, 0], diag([1, 1, 1])) # y_i ~ Normal(w @ x_i, noise_scale), i=1..N rv_w = tfd.MultivariateNormalDiag( loc=np.zeros(num_features + 1), scale_diag=np.ones(num_features + 1)) rv_y = tfd.Normal(f(x, w), noise_scale) return (rv_w.log_prob(w) + tf.reduce_sum(rv_y.log_prob(y), axis=-1)) # Create our unnormalized target density by currying x and y from the joint. def unnormalized_posterior(w): return joint_log_prob(w, xs, ys)
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
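The "currying" above is just partial application: a function of (w, x, y) with x and y fixed to the data. In plain Python terms — using a hypothetical stand-in function, not the real joint_log_prob — the same idea can also be spelled with functools.partial:

```python
from functools import partial

def joint_log_prob_demo(w, x, y):
    # Hypothetical stand-in for joint_log_prob: any function of (w, x, y)
    return w + x + y

xs_demo, ys_demo = 2, 3

# Fix x and y to the "data"; the result is a function of w alone
unnormalized = partial(joint_log_prob_demo, x=xs_demo, y=ys_demo)
print(unnormalized(1))  # same as joint_log_prob_demo(1, 2, 3) -> 6
```

This is equivalent to the closure-based `unnormalized_posterior` in the cell above; either form gives sample_chain a target that depends only on the latent variables.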
Build an HMC TransitionKernel and call sample_chain
# Create an HMC TransitionKernel hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo( target_log_prob_fn=unnormalized_posterior, step_size=np.float64(.1), num_leapfrog_steps=2) # We wrap sample_chain in tf.function, telling TF to precompile a reusable # computation graph, which will dramatically improve performance. @tf.function def run_chain(initial_state, num_results=1000, num_burnin_steps=500): return tfp.mcmc.sample_chain( num_results=num_results, num_burnin_steps=num_burnin_steps, current_state=initial_state, kernel=hmc_kernel, trace_fn=lambda current_state, kernel_results: kernel_results) initial_state = np.zeros(num_features + 1) samples, kernel_results = run_chain(initial_state) print("Acceptance rate:", kernel_results.is_accepted.numpy().mean())
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
That's not great! We'd like an acceptance rate closer to 0.65. (See "Optimal Scaling for Various Metropolis-Hastings Algorithms", Roberts & Rosenthal, 2001.) Adaptive step sizes We can wrap our HMC TransitionKernel in a SimpleStepSizeAdaptation "meta-kernel", which applies some (fairly simple heuristic) logic to adapt the HMC step size during burnin. We allocate 80% of burnin to adapting the step size, and use the remaining 20% just for mixing.
# Apply a simple step size adaptation during burnin @tf.function def run_chain(initial_state, num_results=1000, num_burnin_steps=500): adaptive_kernel = tfp.mcmc.SimpleStepSizeAdaptation( hmc_kernel, num_adaptation_steps=int(.8 * num_burnin_steps), target_accept_prob=np.float64(.65)) return tfp.mcmc.sample_chain( num_results=num_results, num_burnin_steps=num_burnin_steps, current_state=initial_state, kernel=adaptive_kernel, trace_fn=lambda cs, kr: kr) samples, kernel_results = run_chain( initial_state=np.zeros(num_features+1)) print("Acceptance rate:", kernel_results.inner_results.is_accepted.numpy().mean()) # Trace plots colors = ['b', 'g', 'r'] for i in range(3): plt.plot(samples[:, i], c=colors[i], alpha=.3) plt.hlines(true_w[i], 0, 1000, zorder=4, color=colors[i], label="$w_{}$".format(i)) plt.legend(loc='upper right') plt.show() # Histogram of samples for i in range(3): sns.distplot(samples[:, i], color=colors[i]) ymax = plt.ylim()[1] for i in range(3): plt.vlines(true_w[i], 0, ymax, color=colors[i]) plt.ylim(0, ymax) plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Diagnostics Trace plots are nice, but diagnostics are nicer! First, we need to run multiple chains. This is as simple as specifying a batch of initial_state tensors.
# Instead of a single set of initial w's, we create a batch of 8. num_chains = 8 initial_state = np.zeros([num_chains, num_features + 1]) chains, kernel_results = run_chain(initial_state) r_hat = tfp.mcmc.potential_scale_reduction(chains) print("Acceptance rate:", kernel_results.inner_results.is_accepted.numpy().mean()) print("R-hat diagnostic (per latent variable):", r_hat.numpy())
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
Sampling the noise scale
# Define the joint_log_prob function, and our unnormalized posterior.
def joint_log_prob(w, sigma, x, y):
    # Our model in maths is
    # w ~ MVN([0, 0, 0], diag([1, 1, 1]))
    # y_i ~ Normal(w @ x_i, noise_scale), i=1..N
    rv_w = tfd.MultivariateNormalDiag(
        loc=np.zeros(num_features + 1),
        scale_diag=np.ones(num_features + 1))
    rv_sigma = tfd.LogNormal(np.float64(1.), np.float64(5.))

    rv_y = tfd.Normal(f(x, w), sigma[..., np.newaxis])
    return (rv_w.log_prob(w) +
            rv_sigma.log_prob(sigma) +
            tf.reduce_sum(rv_y.log_prob(y), axis=-1))

# Create our unnormalized target density by currying x and y from the joint.
def unnormalized_posterior(w, sigma):
    return joint_log_prob(w, sigma, xs, ys)

# Create an HMC TransitionKernel
hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=unnormalized_posterior,
    step_size=np.float64(.1),
    num_leapfrog_steps=4)

# Create a TransformedTransitionKernel
transformed_kernel = tfp.mcmc.TransformedTransitionKernel(
    inner_kernel=hmc_kernel,
    bijector=[tfb.Identity(),               # w
              tfb.Invert(tfb.Softplus())])  # sigma

# Apply a simple step size adaptation during burnin
@tf.function
def run_chain(initial_state, num_results=1000, num_burnin_steps=500):
    adaptive_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
        transformed_kernel,
        num_adaptation_steps=int(.8 * num_burnin_steps),
        target_accept_prob=np.float64(.75))

    return tfp.mcmc.sample_chain(
        num_results=num_results,
        num_burnin_steps=num_burnin_steps,
        current_state=initial_state,
        kernel=adaptive_kernel,
        seed=(0, 1),
        trace_fn=lambda cs, kr: kr)

# Instead of a single set of initial w's, we create a batch of 8.
num_chains = 8 initial_state = [np.zeros([num_chains, num_features + 1]), .54 * np.ones([num_chains], dtype=np.float64)] chains, kernel_results = run_chain(initial_state) r_hat = tfp.mcmc.potential_scale_reduction(chains) print("Acceptance rate:", kernel_results.inner_results.inner_results.is_accepted.numpy().mean()) print("R-hat diagnostic (per w variable):", r_hat[0].numpy()) print("R-hat diagnostic (sigma):", r_hat[1].numpy()) w_chains, sigma_chains = chains # Trace plots of w (one of 8 chains) colors = ['b', 'g', 'r', 'teal'] fig, axes = plt.subplots(4, num_chains, figsize=(4 * num_chains, 8)) for j in range(num_chains): for i in range(3): ax = axes[i][j] ax.plot(w_chains[:, j, i], c=colors[i], alpha=.3) ax.hlines(true_w[i], 0, 1000, zorder=4, color=colors[i], label="$w_{}$".format(i)) ax.legend(loc='upper right') ax = axes[3][j] ax.plot(sigma_chains[:, j], alpha=.3, c=colors[3]) ax.hlines(noise_scale, 0, 1000, zorder=4, color=colors[3], label=r"$\sigma$".format(i)) ax.legend(loc='upper right') fig.tight_layout() plt.show() # Histogram of samples of w fig, axes = plt.subplots(4, num_chains, figsize=(4 * num_chains, 8)) for j in range(num_chains): for i in range(3): ax = axes[i][j] sns.distplot(w_chains[:, j, i], color=colors[i], norm_hist=True, ax=ax, hist_kws={'alpha': .3}) for i in range(3): ax = axes[i][j] ymax = ax.get_ylim()[1] ax.vlines(true_w[i], 0, ymax, color=colors[i], label="$w_{}$".format(i), linewidth=3) ax.set_ylim(0, ymax) ax.legend(loc='upper right') ax = axes[3][j] sns.distplot(sigma_chains[:, j], color=colors[3], norm_hist=True, ax=ax, hist_kws={'alpha': .3}) ymax = ax.get_ylim()[1] ax.vlines(noise_scale, 0, ymax, color=colors[3], label=r"$\sigma$".format(i), linewidth=3) ax.set_ylim(0, ymax) ax.legend(loc='upper right') fig.tight_layout() plt.show()
site/ja/probability/examples/A_Tour_of_TensorFlow_Probability.ipynb
tensorflow/docs-l10n
apache-2.0
1. Classification Metrics

scikit-learn documentation

Accuracy

Accuracy is the fraction of correct predictions:

$$ accuracy(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m 1(y^{(i)} = \hat{y}^{(i)}) $$

where $ 1(x) $ is the indicator function and $ m $ is the number of samples.
import numpy as np
from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
accuracy_score(y_true, y_pred)
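The same number can be computed directly from the indicator-function definition above, as a quick numpy check of the formula:

```python
import numpy as np

y_pred = np.array([0, 2, 1, 3])
y_true = np.array([0, 1, 2, 3])

# 1(y_i == yhat_i) is a boolean vector; its mean is the accuracy.
acc = (y_true == y_pred).mean()
print(acc)  # 0.5
```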
04-evaluation-metrics.ipynb
msadegh97/machine-learning-course
gpl-3.0
Confusion Matrix

For multiclass classification:

<img src="images/metrics/confusion-matrix.png" width="500" />

and for binary classification:
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
np.array([[tn, fp],
          [fn, tp]])
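The four cells of the binary confusion matrix can also be counted by hand, which makes the layout explicit (same data as above):

```python
import numpy as np

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0, 1, 0, 1, 0, 1, 0, 1])

tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives

print(tn, fp, fn, tp)  # 2 1 2 3
```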
04-evaluation-metrics.ipynb
msadegh97/machine-learning-course
gpl-3.0
Recall, Precision & F-Score $$ recall = \frac{tp}{tp + fn} $$ $$ precision = \frac{tp}{tp + fp} $$ $$ F_1 = 2 \cdot \frac{precision \cdot recall}{precision + recall} $$
from sklearn.metrics import precision_score, recall_score, f1_score

y_pred = [0, 1, 0, 0]
y_true = [0, 1, 0, 1]

print('[[tn fp]\n [fn tp]]')
print(confusion_matrix(y_true, y_pred))
print()
print('Recall =', recall_score(y_true, y_pred))
print('Precision =', precision_score(y_true, y_pred))
print('F1 =', f1_score(y_true, y_pred))

from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 0]
y_pred = [0, 0, 2, 1, 0]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
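With the confusion-matrix counts in hand, the three formulas above reduce to a few lines. The counts below are those of the binary example `y_pred = [0, 1, 0, 0]`, `y_true = [0, 1, 0, 1]`:

```python
tp, fp, fn = 1, 0, 1

recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * precision * recall / (precision + recall)

print(recall, precision, round(f1, 4))  # 0.5 1.0 0.6667
```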
04-evaluation-metrics.ipynb
msadegh97/machine-learning-course
gpl-3.0
Cross Entropy

$$ l(y, \hat{y}) = - \frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \right] $$
from sklearn.metrics import log_loss

y_true = [0, 0, 1, 1]
y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
log_loss(y_true, y_pred)
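`log_loss` can be reproduced directly from the formula. Note that for binary labels the relevant number is the predicted probability of the positive class, i.e. the second column of `y_pred` above:

```python
import numpy as np

y = np.array([0, 0, 1, 1])
p = np.array([.1, .2, .7, .99])  # predicted P(class 1)

loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(round(loss, 4))  # 0.1738
```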
04-evaluation-metrics.ipynb
msadegh97/machine-learning-course
gpl-3.0
2. Regression Metrics

scikit-learn documentation

Mean Absolute Error

$$ MAE(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m \mid y^{(i)} - \hat{y}^{(i)} \mid $$
from sklearn.metrics import mean_absolute_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_absolute_error(y_true, y_pred)
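The MAE formula is a one-liner in numpy (same data as above):

```python
import numpy as np

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

mae = np.abs(y_true - y_pred).mean()
print(mae)  # 0.5
```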
04-evaluation-metrics.ipynb
msadegh97/machine-learning-course
gpl-3.0
Mean Squared Error

$$ MSE(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m (y^{(i)} - \hat{y}^{(i)})^2 $$
from sklearn.metrics import mean_squared_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_squared_error(y_true, y_pred)
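Likewise for MSE; taking the square root gives the RMSE, which is back in the units of the target:

```python
import numpy as np

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

mse = ((y_true - y_pred) ** 2).mean()
rmse = np.sqrt(mse)
print(mse)  # 0.375
```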
04-evaluation-metrics.ipynb
msadegh97/machine-learning-course
gpl-3.0
Plotting an image and selecting a location to retrieve a time series
#select time slice of interest - this is trial and error until you get a decent image
time_slice_i = 140
rgb = sensor_clean['ls8'].isel(time=time_slice_i).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
#rgb = nbar_clean.isel(time=time_slice).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
fake_saturation = 4500
clipped_visible = rgb.where(rgb < fake_saturation).fillna(fake_saturation)
max_val = clipped_visible.max(['y', 'x'])
scaled = (clipped_visible / max_val)

#Click on this image to choose the location for time series extraction
w = widgets.HTML("Event information appears here when you click on the figure")

def callback(event):
    global x, y
    x, y = int(event.xdata + 0.5), int(event.ydata + 0.5)
    w.value = 'X: {}, Y: {}'.format(x, y)

fig = plt.figure(figsize=(12, 6))
#plt.scatter(x=trans.coords['x'], y=trans.coords['y'], c='r')  #turn this on or off to show location of transect
plt.imshow(scaled, interpolation='nearest',
           extent=[scaled.coords['x'].min(), scaled.coords['x'].max(),
                   scaled.coords['y'].min(), scaled.coords['y'].max()])
fig.canvas.mpl_connect('button_press_event', callback)
date_ = sensor_clean['ls8'].time[time_slice_i]
plt.title(date_.astype('datetime64[D]'))
plt.show()
display(w)

#this converts the map x coordinate into image x coordinates
image_coords = ~affine * (x, y)
imagex = int(image_coords[0])
imagey = int(image_coords[1])

#retrieve the time series that corresponds with the location clicked, and drop the no data values
green_ls8 = sensor_clean['ls8'].green.isel(x=[imagex], y=[imagey]).dropna('time', how='any')
notebooks/05_interactive_time_series_with_time_slice_retrieval.ipynb
data-cube/agdc-v2-examples
apache-2.0
Click on an interactive time series and pull back an image that corresponds with a point on the time series
#Use this plot to visualise a time series and select the image that corresponds with a point in the time series
def callback(event):
    global time_int, devent
    devent = event
    time_int = event.xdata
    #time_int_ = time_int.astype(datetime64[D])
    w.value = 'time_int: {}'.format(time_int)

fig = plt.figure(figsize=(10, 5))
fig.canvas.mpl_connect('button_press_event', callback)
plt.show()
display(w)
green_ls8.plot(linestyle='--', c='b', marker='8', mec='b', mfc='r')
plt.grid()

time_slice = matplotlib.dates.num2date(time_int).date()
rgb2 = sensor_clean['ls8'].sel(time=time_slice, method='nearest').to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
fake_saturation = 6000
clipped_visible = rgb2.where(rgb2 < fake_saturation).fillna(fake_saturation)
max_val = clipped_visible.max(['y', 'x'])
scaled2 = (clipped_visible / max_val)

#This image shows the time slice of choice and the location of the time series
fig = plt.figure(figsize=(12, 6))
#plt.scatter(x=trans.coords['x'], y=trans.coords['y'], c='r')
plt.scatter(x=[x], y=[y], c='yellow', marker='D')
plt.imshow(scaled2, interpolation='nearest',
           extent=[scaled.coords['x'].min(), scaled.coords['x'].max(),
                   scaled.coords['y'].min(), scaled.coords['y'].max()])
plt.title(time_slice)
plt.show()
notebooks/05_interactive_time_series_with_time_slice_retrieval.ipynb
data-cube/agdc-v2-examples
apache-2.0
So, that is 9 records? That can be fixed by including the parameter includeFirst; see the OANDA documentation for details.
instrument = "EUR_USD"
params = {
    "from": "2017-01-01T00:00:00Z",
    "granularity": "H1",
    "includeFirst": True,
    "count": 10,
}
r = instruments.InstrumentsCandles(instrument=instrument, params=params)
response = client.request(r)
print("Request: {} #candles received: {}".format(r, len(r.response.get('candles'))))
print(json.dumps(response, indent=2))
jupyter/historical.ipynb
hootnot/oanda-api-v20
mit
Bulk history

InstrumentsCandles class

It is likely that you want to retrieve more than 10 records. The OANDA docs say that the default number of records is 500, in case you do not specify. You can specify the number of records to retrieve by using count, with a maximum of 5000. The InstrumentsCandles class enables you to retrieve the records.

InstrumentsCandlesFactory

Now if you would like to retrieve a lot of history, you have to make consecutive requests. To make this an easy process the oandapyV20 library comes with a so-called factory named InstrumentsCandlesFactory. Using this class you can retrieve all the history of an instrument from a certain date. The InstrumentsCandlesFactory acts as a generator, generating InstrumentsCandles requests until all data is retrieved. The number of requests can be influenced by specifying count: setting count to 5000 generates a tenth of the requests compared with the default of 500.

Back to our example: let's make sure we request a lot of data, so we set the granularity to M5 and leave the date at 2017-01-01T00:00:00. This will retrieve all records from that date up to today, because we did not specify the to parameter.
import json
import oandapyV20
import oandapyV20.endpoints.instruments as instruments
from oandapyV20.contrib.factories import InstrumentsCandlesFactory
from exampleauth import exampleauth

accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)

instrument = "EUR_USD"
params = {
    "from": "2017-01-01T00:00:00Z",
    "granularity": "M5",
}

def cnv(r, h):
    # get all candles from the response and write them as a record to the filehandle h
    for candle in r.get('candles'):
        ctime = candle.get('time')[0:19]
        try:
            rec = "{time},{complete},{o},{h},{l},{c},{v}".format(
                time=ctime,
                complete=candle['complete'],
                o=candle['mid']['o'],
                h=candle['mid']['h'],
                l=candle['mid']['l'],
                c=candle['mid']['c'],
                v=candle['volume'],
            )
        except Exception as e:
            print(e, r)
        else:
            h.write(rec + "\n")

datafile = "/tmp/{}.{}.out".format(instrument, params['granularity'])
with open(datafile, "w") as O:
    n = 0
    for r in InstrumentsCandlesFactory(instrument=instrument, params=params):
        rv = client.request(r)
        cnt = len(r.response.get('candles'))
        print("REQUEST: {} {} {}, received: {}".format(r, r.__class__.__name__, r.params, cnt))
        n += cnt
        cnv(r.response, O)
print("Check the datafile: {} under /tmp!, it contains {} records".format(datafile, n))
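Once the factory loop has written the datafile, the records can be read back with the standard csv module. The two rows below are hypothetical sample data, but they follow the exact column order that cnv() writes (time, complete, open, high, low, close, volume):

```python
import csv
import io

# Hypothetical records in the format cnv() writes.
sample = io.StringIO(
    "2017-01-02T00:00:00,True,1.0465,1.0467,1.0463,1.0466,42\n"
    "2017-01-02T00:05:00,True,1.0466,1.0470,1.0465,1.0469,37\n"
)

candles = []
for time, complete, o, h, l, c, volume in csv.reader(sample):
    candles.append({"time": time, "close": float(c), "volume": int(volume)})

print(len(candles), candles[0]["close"])  # 2 1.0466
```

For a real run you would open the datafile itself instead of the StringIO sample.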
jupyter/historical.ipynb
hootnot/oanda-api-v20
mit
Open SPI rack connection and unlock (necessary after bootup of the controller module).
spi = SPI_rack("COM4", 1000000, 1)
spi.unlock()
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
Create a new S5k module object at the correct address and set the clock source to the internal clock. The clock can be divided by all even numbers between 2-510. We'll set DAC 1-8 at 50 MHz and DAC 9-16 at 500 kHz. This allows us to play the same waveform on both, with a factor 100 time difference. All these settings are based on the 200 MHz internal oscillator.
spi.get_battery()
s5k = S5k_module(spi, 1)
s5k.set_clock_source('internal')
s5k.set_clock_division(1, 4)
s5k.run_module(False)
s5k.run_module(True)
s5k.sync_clock()

for DAC in range(1, 9):
    s5k.set_clock_division(DAC, 4)
for DAC in range(9, 17):
    s5k.set_clock_division(DAC, 400)
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
Set all the DACs to AWG mode. This allows us to write to the internal 4096-sample RAM.
for DAC in range(1, 9):
    s5k.set_waveform_mode(DAC, 'AWG')
    s5k.set_digital_gain(DAC, 0.45)

for DAC in range(1, 9):
    s5k.set_digital_gain(DAC, 1)
for DAC in range(9, 17):
    s5k.set_digital_gain(DAC, 0)
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
The ramp in both the slow and fast DACs will be the same: 4000 samples long. To create the sawtooth we use the sawtooth function from the scipy signal library. The width argument defines the width of the rising ramp as a fraction of the total waveform period: width=1 gives a ramp up, width=0 a ramp down, and width=0.5 a symmetric triangle.
wv_len = 4000
max_val = 2047
width = 0.5

t = np.linspace(0, 1, wv_len)
sawtooth = signal.sawtooth(2*np.pi*t, width) * max_val
sawtooth = sawtooth.astype(int)

plt.figure()
plt.plot(sawtooth)
plt.title('Sawtooth RAM data')
plt.xlabel('Samples')
plt.ylabel('RAM values')
plt.show()
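The effect of the width argument of scipy.signal.sawtooth can be checked on a handful of samples; this small sketch is independent of the module code above:

```python
import numpy as np
from scipy import signal

t = np.linspace(0, 1, 8, endpoint=False)

ramp_up = signal.sawtooth(2 * np.pi * t, width=1)     # rises over the full period
ramp_down = signal.sawtooth(2 * np.pi * t, width=0)   # falls over the full period
triangle = signal.sawtooth(2 * np.pi * t, width=0.5)  # rises half, falls half
```

All three waveforms span -1 to 1, so multiplying by 2047 maps them onto the DAC's RAM value range.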
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
We now have to upload the waveform to all DACs. It only needs to be uploaded once to each DAC chip (each chip contains four DACs with shared memory). We will then simply point all the DACs in the chip to the same block of RAM.
s5k.upload_waveform(1, sawtooth, 0, set_pattern_length=True)
s5k.upload_waveform(5, sawtooth, 0, set_pattern_length=True)
s5k.upload_waveform(9, sawtooth, 0, set_pattern_length=True)
s5k.upload_waveform(13, sawtooth, 0, set_pattern_length=True)

for DAC in range(1, 17):
    s5k.set_RAM_address(DAC, 0, len(sawtooth))
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
We also have to set the length of the trigger period. It runs on the slowest clock used in the system, in this case at 500 kHz. The period length is equal to the length of the slow sawtooth.
s5k.set_pattern_length_trigger(len(sawtooth)-1)
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
One issue we now run into is the trigger delay. Each chip has a delay of 15 clock cycles from trigger input to the start of the output. This is especially noticeable in this case, where half the DACs are running at 500 kHz and the other half at 50 MHz. To compensate for this (to get them to start at the same time), we delay the start of the fast running DACs. The delay is 15 clock cycles at 500 kHz, which amounts to 30 us. As the fast DACs are running at 50 MHz, we need to delay them by 1500 clock cycles. We write this (minus 1) to the necessary DAC chips.
s5k.run_module(False)

fast_period = 1/50e6
slow_period = 1/500e3

delay_necessary = 15*slow_period
delay_cycles = round(delay_necessary/fast_period)
delay_cycles = int(delay_cycles)

s5k.write_AD9106(s5k.DAreg.PATTERN_DLY, delay_cycles-1, 3)
s5k.write_AD9106(s5k.DAreg.PATTERN_DLY, delay_cycles-1, 1)

s5k.run_module(True)
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
Now we can start the module, either by running from software or giving a gate on the front of the module.
s5k.run_module(True)
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
Set the digital gain of the DACs: here the fast DACs get a gain of 1x, the slow DACs 0x, and DAC 4 an inverted 0.1x gain. Gain can go up to 2x, but both channels can already max out the swing of the output at a gain of 1x.
for DAC in range(1, 9):
    s5k.set_digital_gain(DAC, 1)
for DAC in range(9, 17):
    s5k.set_digital_gain(DAC, 0.0)

s5k.set_digital_gain(4, -0.1)
#s5k.set_digital_gain(12, -0.7)
examples/S5k_Low_Level.ipynb
peendebak/SPI-rack
mit
Initialisation

SCB is now a class we have imported, containing functions for navigating the Statistical Database and fetching metadata as well as data from it. To use the SCB class we first need to initialise an object from it. This takes one mandatory argument: the language for the data and metadata. English and Swedish are supported. We choose English:
scb = SCB('en')
pyscbwrapper_en.ipynb
KiraHG/pyscbwrapper
gpl-3.0
Navigation and metadata Now we can look at the top node in the tree that is the Statistical Database's metadata:
scb.info()
pyscbwrapper_en.ipynb
KiraHG/pyscbwrapper
gpl-3.0