Implementation: Data Exploration Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following: - The total number of students, n_students. - The total number of fea...
# TODO: Calculate number of students
n_students = len(student_data)

# TODO: Calculate number of features
n_features = len(student_data.columns) - 1  # The last field is the target and is not a feature

# TODO: Calculate passing students
n_passed = len([x for x in student_data["passed"] if x == "yes"])

# TODO: Calculat...
student_intervention/student_intervention.ipynb
taylort7147/udacity-projects
mit
Implementation: Training and Testing Data Split So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the code cell below, you will need to implement the following: - Randomly shuffl...
# TODO: Import any additional functionality you may need here
from sklearn.cross_validation import train_test_split

# TODO: Set the number of training points
num_train = 300

# Set the number of testing points
num_test = X_all.shape[0] - num_train

random_state = 0

# TODO: Shuffle and split the dataset into the numbe...
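Note that `sklearn.cross_validation` was removed in later scikit-learn releases; the same `train_test_split` now lives in `sklearn.model_selection`. As a library-free illustration of the shuffle-and-split step itself, here is a minimal sketch with a hypothetical `shuffle_split` helper and toy data standing in for `X_all` / `y_all`:

```python
import random

def shuffle_split(X, y, num_train, seed=0):
    """Shuffle indices with a fixed seed, then slice into train/test splits."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    train_idx, test_idx = idx[:num_train], idx[num_train:]
    X_train = [X[i] for i in train_idx]
    X_test = [X[i] for i in test_idx]
    y_train = [y[i] for i in train_idx]
    y_test = [y[i] for i in test_idx]
    return X_train, X_test, y_train, y_test

# Toy data standing in for the real feature matrix and labels
X = [[i] for i in range(10)]
y = [i % 2 for i in range(10)]
X_train, X_test, y_train, y_test = shuffle_split(X, y, num_train=7)
```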
student_intervention/student_intervention.ipynb
taylort7147/udacity-projects
mit
Training and Evaluating Models In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses....
def train_classifier(clf, X_train, y_train):
    ''' Fits a classifier to the training data. '''
    # Start the clock, train the classifier, then stop the clock
    start = time()
    clf.fit(X_train, y_train)
    end = time()
    # Print the results
    print "Trained model in {:.4f} seconds".format(end - s...
student_intervention/student_intervention.ipynb
taylort7147/udacity-projects
mit
Implementation: Model Performance Metrics With the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, an...
# TODO: Import the three supervised learning models from sklearn
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier

# TODO: Initialize the three models
clf_A = GaussianNB()
clf_B = LogisticRegression(random_state=14)
clf_C = A...
student_intervention/student_intervention.ipynb
taylort7147/udacity-projects
mit
Tabular Results Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided. Classifier 1 - Gaussian Naive Bayes
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :------------...
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer

# TODO: Create the parameters list you wish to tune
num_features = len(feature_cols)
parameters = {"C": [0.5, 1.0, 1.5, 2.0]}

# TODO: Initialize the classifier
clf = LogisticRegre...
student_intervention/student_intervention.ipynb
taylort7147/udacity-projects
mit
Project 3: Building a Neural Network
Start with your neural network from the last chapter:
- 3 layer neural network
- no non-linearity in hidden layer
- use our functions to create the training data
- create a "pre_process_data" function to create vocabulary for our training data generating functions
- modify "train" to train ov...
import time, sys

class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes=10, learning_rate=0.1):
        np.random.seed(1)
        self.pre_process_data(reviews, labels)
        self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)

    def pre_pr...
sentiment_network/Sentiment Classification - Mini Project 3.ipynb
danresende/deep-learning
mit
1. Load the sample from the file svm-data.csv. It contains a two-dimensional sample (the target variable is in the first column, the features in the second and third).
df_train = pd.read_csv('../data/svm-data.csv', header=None)
X_train = df_train[df_train.columns[1:3]]
y_train = df_train[df_train.columns[0]]
03-svm_and_logistic_regression/statement-svm/statement-svm.ipynb
aKumpan/hse-shad-ml
apache-2.0
2. Train a classifier with a linear kernel, C = 100000, and random_state=241. This value of the parameter should be used to ensure that the SVM treats the sample as linearly separable. At lower values of the parameter, the algorithm will be tuned with allowance for the term in the objective that penalizes ...
clf = SVC(kernel='linear', C=100000, random_state=241)
clf.fit(X_train, y_train)
03-svm_and_logistic_regression/statement-svm/statement-svm.ipynb
aKumpan/hse-shad-ml
apache-2.0
3. Find the indices of the objects that are support vectors (numbered from one). They are the answer to the assignment. Note that the answer must list the object indices in ascending order, separated by commas or spaces. Numbering starts at 1.
n_sv = clf.support_
n_sv
' '.join([str(n + 1) for n in n_sv])
03-svm_and_logistic_regression/statement-svm/statement-svm.ipynb
aKumpan/hse-shad-ml
apache-2.0
Problem: The task is to draw the following pyramid with matplotlib.
from IPython.display import Image
Image("http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/biodiversite_tri2.png")
_doc/notebooks/td1a/td1a_pyramide_bigarree_correction.ipynb
sdpython/ensae_teaching_cs
mit
But first, we need a way to locate each ball. We number them with two indices.
Image("http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/pyramide_num2.png")
_doc/notebooks/td1a/td1a_pyramide_bigarree_correction.ipynb
sdpython/ensae_teaching_cs
mit
Expected output: <table> <tr> <td> **list of sampled indices:** </td> <td> [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br> 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0] </td> </tr><tr> <...
# GRADED FUNCTION: optimize

def optimize(X, Y, a_prev, parameters, learning_rate=0.01):
    """
    Execute one step of the optimization to train the model.

    Arguments:
    X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
    Y -- list of integers, exactly the...
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected output: <table> <tr> <td> **Loss ** </td> <td> 126.503975722 </td> </tr> <tr> <td> **gradients["dWaa"][1][2]** </td> <td> 0.194709315347 </td> <tr> <td> **np.argmax(gradients["dWax"])** </td> <td> 93 </td> </tr> <tr> <td> **gra...
# GRADED FUNCTION: model

def model(data, ix_to_char, char_to_ix, num_iterations=35000, n_a=50, dino_names=7, vocab_size=27):
    """
    Trains the model and generates dinosaur names.

    Arguments:
    data -- text corpus
    ix_to_char -- dictionary that maps the index to a character
    char_to_ix -- ...
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Loop Progress ProgIter is a (mostly) drop-in alternative to tqdm (https://pypi.python.org/pypi/tqdm). The advantage of ProgIter is that it does not use any Python threading, and therefore can be safer with code that makes heavy use of multiprocessing. Note: ProgIter is now a standalone module: `pip install progiter`.
import ubelt as ub
import math
for n in ub.ProgIter(range(7500)):
    math.factorial(n)

import ubelt as ub
import math
for n in ub.ProgIter(range(7500), freq=1000, adjust=False):
    math.factorial(n)

# Note that forcing freq=2 all the time comes at a performance cost
# The default adjustment algorithm caus...
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Caching Cache intermediate results from blocks of code inside a script with minimal boilerplate or modification to the original code. For direct caching of data, use the Cacher class. By default results will be written to ubelt's appdir cache, but the exact location can be specified via dpath or the appname argu...
import ubelt as ub

depends = ['config', {'of': 'params'}, 'that-uniquely-determine-the-process']
cacher = ub.Cacher('test_process', depends=depends, appname='myapp', verbose=3)
if 1:
    cacher.fpath.delete()
for _ in range(2):
    data = cacher.tryload()
    if data is None:
        myvar1 = 'result of expensive...
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
For indirect caching, use the CacheStamp class. This simply writes a "stamp" file that marks that a process has completed. Additionally you can specify criteria for when the stamp should expire. If you let CacheStamp know about the expected "product", it will expire the stamp if that file has changed, which can be usef...
import ubelt as ub

dpath = ub.Path.appdir('ubelt/demo/cache').delete().ensuredir()
params = {'params1': 1, 'param2': 2}
expected_fpath = dpath / 'file.txt'
stamp = ub.CacheStamp('name', dpath=dpath, depends=params,
                     hasher='sha256', product=expected_fpath,
                     expires='2101-01-01T00...
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Hashing The ub.hash_data function constructs a hash for common Python nested data structures. Extensions can be registered to let it hash custom types. By default it handles lists, dicts, sets, slices, uuids, and numpy arrays.
import ubelt as ub
data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
ub.hash_data(data, hasher='sha256')
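The exact digest scheme is ubelt's own; as a rough standard-library sketch of the idea (hash a canonical string form of the nested structure), where `hash_nested` is a hypothetical helper and not ubelt's algorithm:

```python
import hashlib

def hash_nested(data, hasher_name='sha256'):
    """Hash nested lists/dicts/tuples by hashing a canonical string form.
    A sketch only: ubelt's real implementation walks the structure and
    supports many more types (sets, uuids, numpy arrays, ...)."""
    def canon(obj):
        if isinstance(obj, dict):
            # Sort items so dict ordering does not change the digest
            return '{' + ','.join(canon(k) + ':' + canon(v)
                                  for k, v in sorted(obj.items(), key=repr)) + '}'
        if isinstance(obj, (list, tuple)):
            return '[' + ','.join(canon(x) for x in obj) + ']'
        return repr(obj)
    h = hashlib.new(hasher_name)
    h.update(canon(data).encode('utf-8'))
    return h.hexdigest()

data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
digest = hash_nested(data)
```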
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Support for torch tensors and pandas data frames is also included, but needs to be explicitly enabled. There also exists a non-public plugin architecture to extend this function to arbitrary types. While not officially supported, it is usable and will become better integrated in the future. See ubelt/util_hash.py fo...
import ubelt as ub

info = ub.cmd('cmake --version')
# Quickly inspect and parse output of a
print(info['out'])
# The info dict contains other useful data
print(ub.repr2({k: v for k, v in info.items() if 'out' != k}))
# Also possible to simultaneously capture and display output in realtime
info = ub.cmd('cmake --vers...
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Cross-Platform Config and Cache Directories If you have an application which writes configuration or cache files, the standard place to put those files differs depending on whether you are on Windows, Linux, or Mac. Ubelt offers unified functions for determining what these paths are. The ub.ensure_app_cache_dir and ub.ensur...
import ubelt as ub
print(ub.shrinkuser(ub.ensure_app_cache_dir('my_app')))
print(ub.shrinkuser(ub.ensure_app_config_dir('my_app')))
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
New in version 1.0.0: the ub.Path.appdir classmethod provides a way to achieve the above with a chainable object-oriented interface.
import ubelt as ub
print(ub.Path.appdir('my_app').ensuredir().shrinkuser())
print(ub.Path.appdir('my_app', type='config').ensuredir().shrinkuser())
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Downloading Files The function ub.download provides a simple interface to download a URL and save its data to a file. The function ub.grabdata works similarly to ub.download, but whereas ub.download will always re-download the file, ub.grabdata will check if the file exists and only re-download it if it needs to. New i...
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.download(url, verbose=0)
>>> print(ub.shrinkuser(fpath))

>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.grabdata(url, verbose=0, hash_prefix='944389a39')
>>> print(ub.shrin...
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Dictionary Tools
import ubelt as ub

items = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'bannana']
groupids = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']
groups = ub.group_items(items, groupids)
print(ub.repr2(groups, nl=1))

import ubelt as ub
items = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]
ub.di...
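The grouping behavior of ub.group_items can be sketched with only the standard library (a minimal equivalent for illustration, not ubelt's actual implementation):

```python
from collections import defaultdict

def group_items(items, groupids):
    """Group items by their corresponding group id, preserving order."""
    groups = defaultdict(list)
    for item, gid in zip(items, groupids):
        groups[gid].append(item)
    return dict(groups)

items = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'bannana']
groupids = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']
groups = group_items(items, groupids)
```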
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
AutoDict - Autovivification While the collections.defaultdict is nice, it is sometimes more convenient to have an infinitely nested dictionary of dictionaries. (But be careful, you may start to write in Perl)
>>> import ubelt as ub
>>> auto = ub.AutoDict()
>>> print('auto = {!r}'.format(auto))
>>> auto[0][10][100] = None
>>> print('auto = {!r}'.format(auto))
>>> auto[0][1] = 'hello'
>>> print('auto = {!r}'.format(auto))
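The same autovivification behavior can be sketched with collections.defaultdict (a minimal stand-in for ub.AutoDict, for illustration only):

```python
from collections import defaultdict

def autodict():
    """Infinitely nested dict-of-dicts: intermediate levels spring into
    existence on first access (a sketch of AutoDict-style autovivification)."""
    def tree():
        return defaultdict(tree)
    return tree()

auto = autodict()
auto[0][10][100] = None   # auto[0] and auto[0][10] are created implicitly
auto[0][1] = 'hello'
```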
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
String-based imports Ubelt contains functions to import modules dynamically without using the python import statement. While importlib exists, the ubelt implementation is simpler to use and does not have the disadvantage of breaking pytest. Note: ubelt simply provides an interface to this functionality; the core implem...
import ubelt as ub

try:
    # This is where I keep ubelt on my machine, so it is not expected to work elsewhere.
    module = ub.import_module_from_path(ub.expandpath('~/code/ubelt/ubelt'))
    print('module = {!r}'.format(module))
except OSError:
    pass

module = ub.import_module_from_name('ubelt')
print('mo...
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Related to this functionality are the functions ub.modpath_to_modname and ub.modname_to_modpath, which statically transform (i.e. no code in the target modules is imported or executed) between module names (e.g. ubelt.util_import) and module paths (e.g. ~/.local/conda/envs/cenv3/lib/python3.5/site-packages/ubelt/util_i...
>>> import ubelt as ub
>>> B = ub.repr2([[1, 2], [3, 4]], nl=1, cbr=True, trailsep=False)
>>> C = ub.repr2([[5, 6], [7, 8]], nl=1, cbr=True, trailsep=False)
>>> print(ub.hzcat(['A = ', B, ' * ', C]))
docs/notebooks/Ubelt Demo.ipynb
Erotemic/ubelt
apache-2.0
Azure Blob Storage with TensorFlow <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/azure"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td> <a target="_blank" href="https://colab.research.google....
try:
  %tensorflow_version 2.x
except Exception:
  pass

!pip install tensorflow-io
site/ja/io/tutorials/azure.ipynb
tensorflow/docs-l10n
apache-2.0
Installing and setting up Azurite (optional) If you do not have an Azure Storage account, you need to do the following to install and set up Azurite, which emulates the Azure Storage interface.
!npm install azurite@2.7.0

# The path for npm might not be exposed in PATH env,
# you can find it out through 'npm bin' command
npm_bin_path = get_ipython().getoutput('npm bin')[0]
print('npm bin path: ', npm_bin_path)

# Run `azurite-blob -s` as a background process.
# IPython doesn't recognize `&` in inline bash ce...
site/ja/io/tutorials/azure.ipynb
tensorflow/docs-l10n
apache-2.0
Reading and writing files in Azure Storage with TensorFlow The following is an example of reading and writing files in Azure Storage with the TensorFlow API. Since tensorflow-io automatically registers the azfs scheme, once the tensorflow-io package is imported, Azure Storage behaves like any other file system (POSIX or GCS). The Azure Storage key is specified in the TF_AZURE_STORAGE_KEY environment variable; otherwise, TF_AZURE_USE_DEV_STORAGE is set to True and instead Azu...
import os
import tensorflow as tf
import tensorflow_io as tfio

# Switch to False to use Azure Storage instead:
use_emulator = True

if use_emulator:
    os.environ['TF_AZURE_USE_DEV_STORAGE'] = '1'
    account_name = 'devstoreaccount1'
else:
    # Replace <key> with Azure Storage Key, and <account> with Azure Storage Accoun...
site/ja/io/tutorials/azure.ipynb
tensorflow/docs-l10n
apache-2.0
Part 1: Encrypting and Decrypting a Message Pick Your Super Secret Message The super secret message you want to send must be no longer than the super secret key. If the key is shorter than the message, you will be forced to use parts of the key more than once. This may allow your lurking enemies to...
#Super secret message
mes = 'hello world'
print('Your super secret message: ', mes)

#initial size of key
n = len(mes)*3

#break up message into smaller parts if length > 10
nlist = []
for i in range(int(n/10)):
    nlist.append(10)
if n%10 != 0:
    nlist.append(n%10)
print('Initial key length: ', n)
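The nlist construction above (splitting the key length into blocks of at most 10 bits) can be factored into a small helper for checking; `chunk_lengths` is a hypothetical name introduced here:

```python
def chunk_lengths(n, block=10):
    """Split a total length n into blocks of at most `block`,
    mirroring the nlist construction above."""
    nlist = [block] * (n // block)
    if n % block != 0:
        nlist.append(n % block)
    return nlist

# For 'hello world' the key length is n = 11 * 3 = 33
nlist = chunk_lengths(33)
```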
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
The Big Picture Now that you (Alice) have the key, here's the big question: how are we going to get your key to Bob without eavesdroppers intercepting it? Quantum key distribution! Here are the steps and big picture (the effects of eavesdropping will be discussed later on): 1. You (Alice) generate a random string--the ...
# Make random strings of length string_length
def randomStringGen(string_length):
    #output variables used to access quantum computer results at the end of the function
    output_list = []
    output = ''
    #start up your quantum circuit information
    backend = Aer.get_backend('qasm_simulator')
    circu...
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Steps 2-4: Send Alice's Qubits to Bob Alice turns her key bits into corresponding qubit states. If a bit is a 0 she will prepare a qubit on the negative z axis. If the bit is a 1 she will prepare a qubit on the positive z axis. Next, if Alice has a 1 in her rotate string, she rotates her key qubit with a Hadamard gate....
#generate random rotation strings for Alice and Bob
Alice_rotate = randomStringGen(n)
Bob_rotate = randomStringGen(n)
print("Alice's rotation string:", Alice_rotate)
print("Bob's rotation string: ", Bob_rotate)

#start up your quantum program
backend = Aer.get_backend('qasm_simulator')
shots = 1
circuits = ['send_over...
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Steps 5-6: Compare Rotation Strings and Make Keys Alice and Bob can now generate a secret quantum encryption key. First, they publicly share their rotation strings. If a bit in Alice's rotation string is the same as the corresponding bit in Bob's they know that Bob's result is the same as what Alice sent. They keep the...
def makeKey(rotation1, rotation2, results):
    key = ''
    count = 0
    for i, j in zip(rotation1, rotation2):
        if i == j:
            key += results[count]
        count += 1
    return key

Akey = makeKey(Bob_rotate, Alice_rotate, key)
Bkey = makeKey(Bob_rotate, Alice_rotate, Bob_result)
print("Alice's key:", Ake...
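The sifting logic in makeKey can be checked on toy strings; `make_key` below is an equivalent one-line sketch (only positions where the two rotation strings agree survive):

```python
def make_key(rotation1, rotation2, results):
    """Keep result bits only where the two rotation strings agree (sifting)."""
    return ''.join(r for a, b, r in zip(rotation1, rotation2, results) if a == b)

# Toy example: the rotation strings agree only at positions 0 and 1
key = make_key('01010', '01101', 'abcde')
```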
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Pause We see that, using only the public knowledge of Bob's and Alice's rotation strings, Alice and Bob can create the same key based on Alice's initial random key and Bob's results. Wow!! :D <strong>If Alice's and Bob's key length is less than the message</strong>, the encryption is compromised. If this is th...
#make key same length as message
shortened_Akey = Akey[:len(mes)]
encoded_m = ''

#encrypt message mes using encryption key final_key
for m, k in zip(mes, shortened_Akey):
    encoded_c = chr(ord(m) + 2*ord(k) % 256)
    encoded_m += encoded_c
print('encoded message: ', encoded_m)

#make key same length as message
shorte...
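The encryption above shifts each message character by twice the corresponding key character. A sketch of a fully reversible variant that reduces the whole sum mod 256 (an assumption on my part: the notebook's decryption cell, truncated here, undoes the same shift):

```python
def encode(message, key):
    """Shift each character by twice the key character, mod 256
    (a reversible variant of the scheme in the cell above)."""
    return ''.join(chr((ord(m) + 2 * ord(k)) % 256)
                   for m, k in zip(message, key))

def decode(cipher, key):
    """Undo the shift applied by encode."""
    return ''.join(chr((ord(c) - 2 * ord(k)) % 256)
                   for c, k in zip(cipher, key))

mes = 'hello world'
key = 'abcdefghijk'  # hypothetical sifted key, same length as the message
cipher = encode(mes, key)
```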
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Part 2: Eve the Eavesdropper What if someone is eavesdropping on Alice and Bob's line of communication? This process of random string making and rotations using quantum mechanics is only useful if it's robust against eavesdroppers. Eve is your lurking enemy. She eavesdrops by intercepting your transmission to Bob. To ...
#start up your quantum program
backend = Aer.get_backend('qasm_simulator')
shots = 1
circuits = ['Eve']
Eve_result = ''
for ind, l in enumerate(nlist):
    #define temp variables used in breaking up quantum program if message length > 10
    if l < 10:
        key_temp = key[10*ind:10*ind+l]
        Ar_temp = Alice_r...
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Step 2: Eve deceives Bob Eve sends her measured qubits on to Bob to deceive him! Since she doesn't know which of the qubits she measured were in a superposition or not, she doesn't even know whether to send the exact values she measured or opposite values. In the end, sending on the exact values is just as good a decep...
#start up your quantum program
backend = Aer.get_backend('qasm_simulator')
shots = 1
circuits = ['Eve2']
Bob_badresult = ''
for ind, l in enumerate(nlist):
    #define temp variables used in breaking up quantum program if message length > 10
    if l < 10:
        key_temp = key[10*ind:10*ind+l]
        Eve_temp = Ev...
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Step 4: Spot Check Alice and Bob know Eve is lurking out there. They decide to pick a few random values from their individual keys and compare with each other. This requires making these subsections of their keys public (so the other can see them). If any of the values in their keys are different, they know Eve's eaves...
#make keys for Alice and Bob
Akey = makeKey(Bob_rotate, Alice_rotate, key)
Bkey = makeKey(Bob_rotate, Alice_rotate, Bob_badresult)
print("Alice's key: ", Akey)
print("Bob's key: ", Bkey)
check_key = randomStringGen(len(Akey))
print('spots to check:', check_key)
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Steps 5-7: Compare strings and detect Eve Alice and Bob compare the subsections of their keys. If they notice any discrepancy, they know that Eve was trying to intercept their message. They create new keys by throwing away the parts they shared publicly. It's possible that by throwing these parts away, they will not ha...
#find which values in rotation string were used to make the key
Alice_keyrotate = makeKey(Bob_rotate, Alice_rotate, Alice_rotate)
Bob_keyrotate = makeKey(Bob_rotate, Alice_rotate, Bob_rotate)

# Detect Eve's interference
#extract a subset of Alice's key
sub_Akey = ''
sub_Arotate = ''
count = 0
for i, j in zip(Alice_rotate, A...
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Probability of Detecting Eve The longer the key, the more likely you are to detect Eve. In fact, the probability goes up as $1 - (3/4)^n$, where n is the number of bits Alice and Bob compare in their spot check. So, the longer the key, the more bits you can use in the comparison and the more likely you are to detect ...
#!!! you may need to execute this cell twice in order to see the output due to a problem with matplotlib
x = np.arange(0., 30.0)
y = 1 - (3/4)**x
plt.plot(y)
plt.title('Probability of detecting Eve')
plt.xlabel('# of key bits compared')
plt.ylabel('Probability of detecting Eve')
plt.show()
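The formula $1 - (3/4)^n$ can be spot-checked numerically (`p_detect` is a helper introduced here for illustration):

```python
def p_detect(n):
    """Probability of detecting Eve after comparing n key bits.
    Each compared bit independently reveals Eve with probability 1/4."""
    return 1 - (3 / 4) ** n

# A single compared bit catches Eve only 25% of the time,
# but ten compared bits already catch her with ~94% probability.
probs = [p_detect(n) for n in (1, 5, 10, 20)]
```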
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Tutorial: Checking and Comparing Models Goodness of fit, information criteria, and Bayesian evidence Introduction In this tutorial we'll look at some simple, realistic, simulated data, and do some model evaluation, including fitting a simple model, and then do a posterior predictive model check of the adequacy of the ...
import numpy as np
import scipy.stats as st
from scipy.special import logsumexp
import emcee
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 16});
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Load data into global variable y. Each entry is an offset in units of kpc.
y = np.loadtxt('data/model_comparison.dat')
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Check out a quick histogram of the data.
plt.rcParams['figure.figsize'] = (8.0, 6.0)
bins = np.linspace(0, 1000, 20)
plt.hist(y, bins=bins, color="skyblue");
plt.xlabel("Measured distance $y$");
plt.ylabel("Frequency");
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
1. Pre-registering a Test Statistic The hypothesis we will test in this tutorial is the model outlined in the next section - but how well that model fits the data is a question we will answer in part using a test statistic. Having understood what the data represent (and had a quick look at them), what feature in the d...
try:
    exec(open('solutions/teststatistic.py').read())
except IOError:
    REMOVE_THIS_LINE()
    def T(yy):
        """
        Argument: a data vector (either the real data or a simulated data set)
        Returns: a scalar test statistic computed from the argument
        """
        REPLACE_WITH_YOUR_SOLUTION()
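One reasonable pre-registered choice (an example, not the tutorial's official solution) is a simple scalar summary such as the sample mean; any statistic chosen before fitting would do:

```python
def T_example(yy):
    """A hypothetical pre-registered test statistic: the sample mean.
    A tail-sensitive choice (e.g. a high percentile) would also be valid."""
    return sum(yy) / len(yy)
```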
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Setting up a Computational Framework Once we define a model to work with (below), we'll want to fit that model to the data, and then evaluate it using the methods we saw in the model evaluation lesson. These include: a visual check using replica datasets drawn from the posterior predictive distribution a quantitative ...
# This is something we can throw to discourage direct instantiation of the base class
class VirtualClassError(Exception):
    def __init__(self):
        Exception.__init__(self, "Do not directly instantiate the base Model class!")

class Model:
    """
    Base class for inference and model evaluation in a simple clust...
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
2. Evaluating a Simple Model First, let's assume a simple model $H_1$, that the sampling distribution is an exponential: Model 1: $P(y|a_1, H_1) = \frac{1}{a_1}e^{-y/a_1}$; $y\geq0$ Our single parameter is $a_1$, the mean of the exponential distribution. 2a. Implementation in code Complete the implementation of this mo...
try:
    exec(open('solutions/exponentialmodel.py').read())
except IOError:
    REMOVE_THIS_LINE()
    class ExponentialModel(Model):
        """
        Simple exponential model for mis-centering.
        """
        def __init__(self):
            # Define any hyperparameters for the a1 prior here.
            # E.g...
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Test out the log-posterior function to make sure it's not obviously buggy.
for a1 in [1.0, 10.0, 100.0, -3.14]:
    print('Log-posterior for a1=', a1, ' = ', Model1.log_posterior(a1))
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Similarly, test the mock-data-producing function (with an arbitrary $a_1$ value).
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.hist(Model1.generate_replica_dataset(500.), bins=bins, color="lightgray");
plt.xlabel("Measured distance $y$");
plt.ylabel("Frequency (replica)");
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Finally, test the sampling distribution function.
plt.plot(bins, Model1.sampling_distribution(bins, 500.));
plt.xlabel("Measured distance $y$");
plt.ylabel("$p(y|a_1)$");
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
2b. Fit the model to the data The draw_samples_from_posterior method carries out a parameter inference with emcee, displaying its Markov chains, removing burn-in, thinning, and concatenating the chains. Since this step isn't really the point of this problem, the code is given to you, but you'll still need to experiment...
try:
    exec(open('solutions/fit.py').read())
except IOError:
    # This will execute out of the box, but will not work well. The arguments should be fiddled with.
    Model1.draw_samples_from_posterior(guess=[1000.0], nwalkers=8, nsteps=10, burn=0, thinby=1)
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
It will be useful for later to know the mean of the posterior:
Model1.post_mean = np.mean(Model1.samples, axis=0)
print("Posterior mean value of a1 = ", Model1.post_mean)
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
2c. Visually compare the posterior predictions with the data. First, let's just plot the posterior-mean model over the data.
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Now overlay a curve following the sampling distribution conditioned on the posterior mean value of a1:
pp = Model1.sampling_distribution(bins, Mod...
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
This kind of plot should be familiar: it's often a good idea to evaluate model adequacy in data space. You should already be able to see telling differences between a well-fitting model's sampling distribution and the data histogram. Now, let's compare a random predicted ("replica") data set, drawn from the poster...
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Choose a posterior sample at random and generate a replica dataset, and show its histogram
j = np.random.randint(0, len(Model1.samples))
mock = Mod...
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
This plot is nice because it is comparing apples with apples: do mock datasets drawn from our model sampling distribution with any plausible parameter value "look like" the real data? To best evaluate this, we want to visualize the posterior predictive distribution of replica datasets. We can do this by plotting many r...
def visual_check(Model, Nreps=None):
    plt.rcParams['figure.figsize'] = (8.0, 6.0)
    # First the histogram of observed data, as backdrop:
    plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
    # Compute the posterior mean parameter (vector)
    pm = np.mean(Model.samples, axis=0)
    # M...
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Based on these visual checks, would you say the model does a good job of predicting the observed data? 2c. Quantitative posterior predictive model check Now let's quantify the (in)adequacy of the fit with a quantitative posterior predictive model check, based on the test_statistic function you've already defined. To sa...
def distribution_of_T(Model):
    """
    Compute T(yrep) for each yrep drawn from the posterior predictive distribution,
    using parameter samples stored in Model.
    """
    return np.array([T(Model.generate_replica_dataset(a)) for a in Model.samples])
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
We can now do the following: * plot a histogram of $T(\mathrm{mock~data})$ * compare that distribution with $T(\mathrm{real~data})$ * compute and report the p-value for $T(\mathrm{real~data})$ And we want all of that packaged in functions of the model, so that we can re-use it later (on different models!). First le...
try:
    exec(open('solutions/pvalue.py').read())
except IOError:
    REMOVE_THIS_LINE()
    def pvalue(Model):
        """
        Compute the posterior predictive p-value, P(T > T(y)|y,H):
        """
        REPLACE_WITH_YOUR_SOLUTION()
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
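Independently of the notebook's `Model` class, the posterior predictive p-value described above is just the fraction of replica test statistics that reach or exceed the observed one. A minimal sketch (function name and toy numbers are illustrative, not from the notebook):

```python
import numpy as np

def posterior_predictive_pvalue(T_reps, T_obs):
    """Fraction of replica test statistics T(y_rep) that are >= T(y_obs).

    T_reps : 1-D array of the test statistic evaluated on mock datasets,
             one per posterior sample.
    T_obs  : the test statistic evaluated on the real data.
    """
    T_reps = np.asarray(T_reps, dtype=float)
    return float(np.mean(T_reps >= T_obs))

# Toy check: two of the four replicas exceed the observed value.
p = posterior_predictive_pvalue([1.0, 2.0, 3.0, 4.0], 2.5)
```

A p-value near 0 or 1 signals that the observed data sit in the tail of the posterior predictive distribution of $T$.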
Here's a function that plots the distribution of T, and reports the p-value:
def posterior_predictive_check(Model, nbins=25): """ Compute the posterior predictive distribution of the test statistic T(y_rep), and compare with T(y_obs) """ # Compute distribution of T(yrep): TT = distribution_of_T(Model) # Plot: plt.rcParams['figure.figsize'] = (8.0, 6.0) plt....
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Does this result agree with your visual evaluation of the model fitness from the last section? If not, perhaps the test statistic you chose doesn't reflect the agreement you're looking for when inspecting the posterior predictions. If you'd like to re-define your test statistic, do so now and repeat this check. 6. Calc...
try: exec(open('solutions/dic.py').read()) except IOError: REMOVE_THIS_LINE() def DIC(Model): """ Compute the Deviance Information Criterion for the given model """ # Compute the deviance D for each sample, using the vectorized code. D = -2.0*Model.vectorized_log_like...
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
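The DIC computation outlined above can be sketched with a toy log-likelihood standing in for `Model.log_likelihood` (everything here is a hypothetical stand-in, not the notebook's model): $\bar{D}$ is the posterior mean deviance, and $p_D = \bar{D} - D(\bar{\theta})$ is the effective number of parameters.

```python
import numpy as np

def toy_log_like(theta):
    # Hypothetical Gaussian log-likelihood, peaked at theta = 1.
    return -0.5 * (np.asarray(theta) - 1.0) ** 2

def dic(samples, log_like):
    """DIC = D_bar + pD, with pD = D_bar - D(theta_bar)."""
    samples = np.asarray(samples, dtype=float)
    D = -2.0 * log_like(samples)                   # deviance of each posterior sample
    D_bar = D.mean()                               # posterior mean deviance
    D_at_mean = -2.0 * log_like(samples.mean())    # deviance at the posterior mean
    pD = D_bar - D_at_mean
    return D_bar + pD, pD

samples = np.array([0.0, 1.0, 2.0])
DIC_value, pD = dic(samples, toy_log_like)
```

For these samples $\bar{D}=2/3$, $D(\bar{\theta})=0$, so $p_D=2/3$ and DIC $=4/3$.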
Does your value of $p_D$ make intuitive sense? 2d. Compute the evidence To do this, note that $P(D|H)=\int P(D|\theta,H) \, P(\theta|H) d\theta$ can be approximated by an average over samples from the prior $P(D|H) \approx \frac{1}{m}\sum_{k=1}^m P(D|\theta_k,H)$; $\theta_k\sim P(\theta|H)$. This estimate is better tha...
try: exec(open('solutions/evidence.py').read()) except IOError: REMOVE_THIS_LINE() def log_evidence(Model, N=1000): """ Compute the log evidence for the model using N samples from the prior """ REPLACE_WITH_YOUR_SOLUTION()
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
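The prior-sample average above is best done in log space to avoid underflow. A sketch using the log-sum-exp trick, with the array of log-likelihood values at prior draws $\theta_k$ as input (the function name is ours, not the notebook's):

```python
import numpy as np

def log_evidence_mc(log_likes):
    """log[(1/m) * sum_k exp(log_likes[k])], computed stably via log-sum-exp."""
    log_likes = np.asarray(log_likes, dtype=float)
    m = len(log_likes)
    a = log_likes.max()  # shift by the max so the exponentials cannot underflow to 0 everywhere
    return a + np.log(np.exp(log_likes - a).sum()) - np.log(m)

# Sanity check: if every prior sample has the same likelihood,
# the Monte Carlo evidence equals it exactly.
logE = log_evidence_mc([-10.0, -10.0, -10.0, -10.0])
```

The Monte Carlo error of this estimate shrinks as $1/\sqrt{m}$, which is why the exercise asks you to vary N.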
Roughly how precisely do we need to know the log evidence to be able to compare models? Run log_evidence with different values of N (the number of prior samples in the average) until you're satisfied that you're getting a usefully accurate result.
for Nevidence in [1, 10, 100]: # You *will* want to change these values %time logE1 = log_evidence(Model1, N=Nevidence) print("From", Nevidence, "samples, the log-evidence is", logE1, "\n")
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Distributed input <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/input"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs...
# Import TensorFlow !pip install tf-nightly import tensorflow as tf # Helper libraries import numpy as np import os print(tf.__version__) global_batch_size = 16 # Create a tf.data.Dataset object. dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size) @tf.function def train_step(in...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
Two APIs have been introduced that distribute a tf.data.Dataset instance and return a distributed dataset object, letting users adopt a tf.distribute strategy with minimal changes to existing code. Users can then iterate over this distributed dataset instance and train their model as before. Let's now take a closer look at the two APIs, tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.experimental_distribute_datasets_from_function...
global_batch_size = 16 mirrored_strategy = tf.distribute.MirroredStrategy() dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size) # Distribute input using the `experimental_distribute_dataset`. dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset) # 1 global batc...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
Properties Batching tf.distribute rebatches the input tf.data.Dataset instance with a new batch size equal to the global batch size divided by the number of replicas in sync. The number of replicas in sync equals the number of devices taking part in the gradient allreduce during training. When a user calls next on the distributed iterator, a per-replica batch of data is returned on each replica. The cardinality of the rebatched dataset is always a multiple of the number of replicas. Here are a few examples: tf.data.Dataset.range(6).batch(4, drop_remainder=Fal...
dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(64).batch(16) options = tf.data.Options() options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA dataset = dataset.with_options(options)
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
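The rebatching rule described above (the global batch split evenly across the replicas in sync) can be sketched in plain Python, independent of TensorFlow:

```python
def per_replica_batch_size(global_batch_size, num_replicas_in_sync):
    # tf.distribute splits each global batch across the replicas in sync;
    # this sketch only covers the evenly divisible case.
    if global_batch_size % num_replicas_in_sync != 0:
        raise ValueError("global batch size must divide evenly for this sketch")
    return global_batch_size // num_replicas_in_sync

# With a global batch size of 16 and 4 replicas, each replica sees 4 examples per step.
size = per_replica_batch_size(16, 4)
```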
Three different options can be set for tf.data.experimental.AutoShardPolicy. AUTO: This is the default option, meaning an attempt is made to shard by FILE. The attempt to shard by FILE fails if no file-based dataset is detected, in which case tf.distribute falls back to sharding by DATA. If the input dataset is file-based but the number of files is smaller than the number of workers, an <code>InvalidArgumentError</code> is raised. In that case, set the policy explicitly to <code>AutoShardPolicy.DATA</code> ...
mirrored_strategy = tf.distribute.MirroredStrategy() def dataset_fn(input_context): batch_size = input_context.get_per_replica_batch_size(global_batch_size) dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(64).batch(16) dataset = dataset.shard( input_context.num_input_pipelines, input_context.input...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
Properties Batching The tf.data.Dataset instance returned by the input function should be batched with the per-replica batch size, which is the global batch size divided by the number of replicas taking part in sync training. This is because tf.distribute calls the input function on the CPU device of each worker. The dataset created on a given worker should be ready for use by all replicas on that worker. Sharding The tf.distribute.InputContext object implicitly passed as an argument to the user's input function is created internally by tf.distribute...
global_batch_size = 16 mirrored_strategy = tf.distribute.MirroredStrategy() dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(100).batch(global_batch_size) dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset) @tf.function def train_step(inputs): features, labels = inputs return label...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
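The `dataset.shard` call in the input function assigns each element to exactly one input pipeline. A plain-Python sketch of that selection rule (mirroring the semantics of `tf.data.Dataset.shard`, with illustrative names):

```python
def shard(elements, num_pipelines, pipeline_id):
    """Keep every num_pipelines-th element, starting at index pipeline_id."""
    return [x for i, x in enumerate(elements) if i % num_pipelines == pipeline_id]

# Two input pipelines split eight elements without overlap or gaps.
part0 = shard(list(range(8)), 2, 0)
part1 = shard(list(range(8)), 2, 1)
```

Together the shards cover the whole dataset exactly once, which is why each worker can safely read only its own shard.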
Creating an explicit iterator with iter To iterate over the elements of a tf.distribute.DistributedDataset instance, you can create a tf.distribute.DistributedIterator with the iter API. An explicit iterator lets you iterate for a fixed number of steps. To get the next element from a tf.distribute.DistributedIterator instance dist_iterator, you can call next(dist_iterator), dist_iterator.get_next(), or dist_iterator.get_next_as_op...
num_epochs = 10 steps_per_epoch = 5 for epoch in range(num_epochs): dist_iterator = iter(dist_dataset) for step in range(steps_per_epoch): # train_step trains the model using the dataset elements loss = mirrored_strategy.run(train_step, args=(next(dist_iterator),)) # which is the same as # loss = mi...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
When you reach the end of a tf.distribute.DistributedIterator via next() or tf.distribute.DistributedIterator.get_next(), an OutOfRange error is raised. The client can catch the error on the Python side and carry on with other work such as checkpointing and evaluation. However, this does not work if you are using a host training loop (i.e., running multiple steps per tf.function), as in: @tf.function def train_fn(iterator): for _ in tf.range(steps_per...
# You can break the loop with get_next_as_optional by checking if the Optional contains value global_batch_size = 4 steps_per_loop = 5 strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "CPU:0"]) dataset = tf.data.Dataset.range(9).batch(global_batch_size) distributed_iterator = iter(strategy.experimental_dist...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
Using the element_spec property If you pass the elements of a distributed dataset to a tf.function and want a tf.TypeSpec guarantee, specify the input_signature argument of the tf.function. The output of a distributed dataset is tf.distribute.DistributedValues, which can represent input to a single device or to multiple devices. To get the tf.TypeSpec corresponding to this distributed value, you can use the element_spec property of the distributed dataset or distributed iterator object.
global_batch_size = 16 epochs = 5 steps_per_epoch = 5 mirrored_strategy = tf.distribute.MirroredStrategy() dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(100).batch(global_batch_size) dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset) @tf.function(input_signature=[dist_dataset.eleme...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
Partial batches Partial batches occur when a user's tf.data.Dataset instance has a batch size that is not evenly divisible by the number of replicas, or when the cardinality of the dataset instance is not divisible by the batch size. This means that when the dataset is distributed over multiple replicas, a next call on some iterators raises an OutOfRangeError. To handle this case, tf.distribute returns a dummy batch of batch size 0 on replicas that have no more data to process. For the single-worker case, if no data is returned by a next call on the iterator, ...
mirrored_strategy = tf.distribute.MirroredStrategy() dataset_size = 24 batch_size = 6 dataset = tf.data.Dataset.range(dataset_size).enumerate().batch(batch_size) dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset) def predict(index, inputs): outputs = 2 * inputs return index, outputs result ...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
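The partial-batch situation described above is easy to see with a plain batching sketch: when the cardinality does not divide by the batch size, the final batch comes up short (and a replica handed no data at all would see a size-0 batch). This is a generic illustration, not TensorFlow's implementation:

```python
def batch(elements, batch_size):
    # The final batch may be partial when len(elements) % batch_size != 0.
    return [elements[i:i + batch_size] for i in range(0, len(elements), batch_size)]

batches = batch(list(range(10)), 4)
last_size = len(batches[-1])  # the partial final batch holds only 2 elements
```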
<a name="tensorinputs"> # How do I distribute my data if I am not using a standard tf.data.Dataset instance? </a> Sometimes users cannot use a tf.data.Dataset to represent their input, and therefore cannot use the APIs mentioned above to distribute the dataset to multiple devices. In such cases, you can use raw tensors or inputs from a generator. Using experimental_distribute_values_from_function for arbitrary tensor inputs strategy.run accepts the output of next(iterator), which is a tf.distribute.Distribut...
mirrored_strategy = tf.distribute.MirroredStrategy() worker_devices = mirrored_strategy.extended.worker_devices def value_fn(ctx): return tf.constant(1.0) distributed_values = mirrored_strategy.experimental_distribute_values_from_function(value_fn) for _ in range(4): result = mirrored_strategy.run(lambda x:x, arg...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
Using tf.data.Dataset.from_generator for input from a generator If you have a generator function you would like to use, you can create a tf.data.Dataset instance with the from_generator API. Note: This is currently not supported with tf.distribute.TPUStrategy.
mirrored_strategy = tf.distribute.MirroredStrategy() def input_gen(): while True: yield np.random.rand(4) # use Dataset.from_generator dataset = tf.data.Dataset.from_generator( input_gen, output_types=(tf.float32), output_shapes=tf.TensorShape([4])) dist_dataset = mirrored_strategy.experimental_distribute_da...
site/ko/tutorials/distribute/input.ipynb
tensorflow/docs-l10n
apache-2.0
''The experiments I am about to relate ... may be repeated with great ease, whenever the sun shines, and without any other apparatus than is at hand to everyone [1]'' Thus Thomas Young began his famous experiment on 24 November 1803 at the Royal Society of London. Before an audience mostly in favor of ...
from IPython.display import Image Image(filename="ExperimentoYoung.jpg")
Experimento de Young/ExperimentoYoung.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
According to the figure, the path difference $\Delta = r_2 - r_1$ can be written as $\Delta = a \sin(\theta)$, where $a$ is the separation between the slits. If this angle is small (meaning the distance between the sources and the observation screen is large compared with the separation between the sources), this expression can be approximated ...
from matplotlib.pyplot import * from numpy import * %matplotlib inline style.use('fivethirtyeight') ################################################################################### # PARAMETERS. THEIR VALUES CAN BE MODIFIED ################################################################################### Lambd...
Experimento de Young/ExperimentoYoung.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
As we can see, the maxima are equally spaced (the same is true of the minima), the distance between two consecutive maxima being $$ \text{Fringe spacing} = \frac{\lambda D}{a} $$ This quantity is known as the fringe spacing (interfranja) and tells us the characteristic size of the fringe pattern. Moreover...
interfranja=Lambda*D/a # fringe spacing calculation C = (Itotal.max() - Itotal.min())/(Itotal.max() + Itotal.min()) # contrast calculation print "a=",a*1e3,"mm ","D=",D,"m ","Wavelength=",Lambda*1e9,"nm" # parameter values print "Fringe spacing=",interfranja*1e3,"mm" # prints the fringe spacing val...
Experimento de Young/ExperimentoYoung.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
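The fringe-spacing formula $\lambda D / a$ is easy to check numerically. The values below (slit separation `a`, screen distance `D`, wavelength `Lambda`) are illustrative choices, not necessarily the notebook's:

```python
a = 1e-3         # slit separation (m), illustrative value
D = 2.0          # slit-to-screen distance (m), illustrative value
Lambda = 500e-9  # wavelength (m), green light

# Fringe spacing: distance between consecutive maxima on the screen.
interfranja = Lambda * D / a
interfranja_mm = interfranja * 1e3  # -> 1.0 mm for these values
```

Doubling `D` or halving `a` doubles the spacing, which is why distant screens and closely spaced slits give the most visible fringes.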
Some Pretty Printing and Imports (not the "real" work yet)
import base64 import numpy as np import pprint import os from graphviz import Source import tensorflow as tf from IPython.display import Image from IPython.lib import pretty import struct2tensor as s2t from struct2tensor.test import test_pb2 from google.protobuf import text_format def _display(gr...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
The real work: A function that parses our structured data (protobuffers) into tensors:
@tf.function(input_signature=[tf.TensorSpec(shape=(None), dtype=tf.string)], autograph=False) def parse_session(serialized_sessions): """A TF function parsing a batch of serialized Session protos into tensors. It is a TF graph that takes one 1-D tensor as input, and outputs a Dict[str, tf.SparseTensor] """ q...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
Let's see it in action:
serialized_sessions = tf.constant([ text_format.Merge( """ session_info { session_duration_sec: 1.0 session_feature: "foo" } event { query: "Hello" action { number_of_views: 1 } action { } } ...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
See how we went from our pre-pipeline data (the Protobuffer) all the way to the structured data, packed into SparseTensors? Digging Far Deeper Interested and want to learn more? Read on... Let's define several terms we mentioned before: Prensor A Prensor (protobuffer + tensor) is a data structure storing the data we wo...
#@title { display-mode: "form" } #@test {"skip": true} _display(""" digraph { root -> session [label="*"]; session -> event [label="*"]; session -> session_id [label="?"]; event -> action [label="*"]; event -> query_token [label="*"] action -> number_of_views [label="?"]; } """)
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
We will be using visualizations like this to demonstrate struct2tensor queries later. Note: the "*" on an edge means the pointed-to node has repeated values, while the "?" means it has an optional value. There is always a "root" node whose only child is the root of the structure. Note that it's "repeated" because one str...
#@title { display-mode: "form" } #@test {"skip": true} _display(""" digraph { session_session_id [color="red"]; root -> session [label="*"]; session -> event [label="*"]; session -> session_id [label="?"]; event -> action [label="*"]; event -> session_session_id [label="?"]; event -> query_token [label=...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
We will talk about common struct2tensor queries in later sections. Projection A projection of paths in a Prensor produces another Prensor with just the selected paths. Logical representation of a projection The structure of the projected path can be represented losslessly as nested lists. For example, the projection of...
query = _create_query_from_text_sessions([''' event { action { number_of_views: 1} action { number_of_views: 2} action {} } event {} ''', ''' event { action { number_of_views: 3} } '''] ).project(["event.action.number_of_views"]) prensor = s2t.calculate_prensors([query]) pretty.pprint(prensor)
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
struct2tensor's internal data model is closer to the above "nested lists" abstraction and sometimes it's easier to reason with "nested lists" than with SparseTensors. Recently, tf.RaggedTensor was introduced to represent nested lists exactly. We are working on adding support for projecting into ragged tensors. Common s...
#@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> event [label="*"]; event -> query_token [label="*"]; } ''')
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
promote(source_path="event.query_token", new_field_name="event_query_token")
#@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { event_query_token [color="red"]; root -> session [label="*"]; session -> event [label="*"]; session -> event_query_token [label="*"]; event -> query_token [label="*"]; } ''') query = (_create_query_from_text_sessions([ """ event ...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
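Independently of the struct2tensor API, the effect of `promote` on the nested-list view is flattening one level: each session's `event_query_token` list is the concatenation of its events' `query_token` lists. A plain-dict sketch (field names follow the example above; the helper itself is ours):

```python
def promote(sessions, child="event", field="query_token"):
    """For each session, concatenate the `field` lists of its `child` records,
    yielding the promoted field one level up — a sketch of struct2tensor's promote."""
    return [
        [tok for ev in session.get(child, []) for tok in ev.get(field, [])]
        for session in sessions
    ]

sessions = [{"event": [{"query_token": ["abc", "def"]}, {"query_token": ["ghi"]}]}]
promoted = promote(sessions)  # tokens now live at the session level
```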
The projected structure is like: { # this is under Session. event_query_token: "abc" event_query_token: "def" event_query_token: "ghi" } broadcast Broadcasts the value of a node to one of its sibling. The value will be replicated if the sibling is repeated. This is similar to TensorFlow and Numpy's broadcasting...
#@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; } ''')
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
broadcast(source_path="session_id", sibling_field="event", new_field_name="session_session_id")
#@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { session_session_id [color="red"]; root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; event -> session_session_id [label="?"]; } ''') query = (_create_query_from_text_sessions([ """ sessi...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
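Conversely, `broadcast` replicates a parent value onto each repeated sibling. A plain-dict sketch of the example above (field names follow the example; the helper is ours, not the library's):

```python
def broadcast(sessions, source="session_id", sibling="event",
              new_field="session_session_id"):
    """Copy each session's `source` value into every record of `sibling` —
    a sketch of struct2tensor's broadcast on the nested-list view."""
    out = []
    for session in sessions:
        events = [dict(ev, **{new_field: session[source]})
                  for ev in session.get(sibling, [])]
        out.append({sibling: events})
    return out

# session_id 8 is replicated into both events, as in the projection shown above.
result = broadcast([{"session_id": 8, "event": [{}, {}]}])
```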
The projected structure is like: { event { session_session_id: 8 } event { session_session_id: 8 } } promote_and_broadcast The query accepts multiple source fields and a destination field. For each source field, it first promotes it to the least common ancestor with the destination field (if necessary),...
query = (_create_query_from_text_sessions([ """ session_id: 8 """, """ session_id: 9 """]) .map_field_values("session_id", lambda x: tf.add(x, 1), dtype=tf.int64, new_field_name="session_id_plus_one") .project(["session_id_plus_one"])) prensor = s2t.calculate_prensors([que...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
reroot Makes the given node the new root of the struct2tensor tree. This has two effects: restricts the scope of the struct2tensor tree The field paths in all the following queries are relative to the new root There's no way to refer to nodes that are outside the subtree rooted at the new root. changes the batch dimensi...
#@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; event -> event_id [label="?"]; } ''')
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
reroot("event")
#@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> event [label="*"]; event -> event_id [label="?"]; } ''') #@title { display-mode: "form" } text_protos = [""" session_id: 1 event { event_id: "a" } event { event_id: "b" } """, """ session_id: 2 """, """ session_id: 3 event ...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
Proto Map You can specify a key for the proto map field in a path via brackets. Given the following tf.Example: features { feature { key: "my_feature" value { float_list { value: 1.0 } } } feature { key: "other_feature" value { bytes_list { value: "my_val" ...
tf_example = text_format.Parse(""" features { feature { key: "my_feature" value { float_list { value: 1.0 } } } feature { key: "other_feature" value { bytes_list { value: "my_val" } } } } """, tf.train.Example()) query = s2t.create_expression_from...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
Apache Parquet Support struct2tensor offers an Apache Parquet tf.DataSet that allows reading from a Parquet file and applying queries to manipulate the structure of the data. Thanks to the struct2tensor library, the dataset will only read the Parquet columns that are required. This reduces I/O cost if we only ne...
# Download our sample data file from the struct2tensor repository. The description of the data is below. #@test {"skip": true} !curl -o dremel_example.parquet 'https://raw.githubusercontent.com/google/struct2tensor/master/struct2tensor/testdata/parquet_testdata/dremel_example.parquet'
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
Example We will use a sample Parquet data file (dremel_example.parquet), which contains data based on the example used in this paper: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36632.pdf The file dremel_example.parquet has the following schema: message Document { required int64 DocId; opt...
#@test {"skip": true} from struct2tensor import expression_impl filenames = ["dremel_example.parquet"] batch_size = 1 exp = s2t.expression_impl.parquet.create_expression_from_parquet_file(filenames) new_exp = exp.promote_and_broadcast({"new_field": "Links.Forward"}, "Name") proj_exp = new_exp.project(["Name.new_fie...
examples/prensor_playground.ipynb
google/struct2tensor
apache-2.0
Ok you got me, the plot function still generates a line by default... but we can turn it off
### initialize the figure fig, ax = pyplot.subplots() points_plot = ax.plot(xdata, ydata, ls='', marker='o')
classes/12_matplotlib/2_points_and_errorbars.ipynb
theJollySin/python_for_scientists
gpl-3.0
Markersize
### initialize the figure fig, ax = pyplot.subplots() points_plot = ax.plot(xdata, ydata, ls='', marker='o', ms=15)
classes/12_matplotlib/2_points_and_errorbars.ipynb
theJollySin/python_for_scientists
gpl-3.0
Symbol
### initialize the figure fig, ax = pyplot.subplots() points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='o') #points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='s') #points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='D') #points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='^') #points_plot = a...
classes/12_matplotlib/2_points_and_errorbars.ipynb
theJollySin/python_for_scientists
gpl-3.0
Errorbars
### generate some random data xdata2 = numpy.arange(15) ydata2 = numpy.random.randn(15) yerrors = numpy.random.randn(15) ### initialize the figure fig, ax = pyplot.subplots() ax.errorbar(xdata2, ydata2, yerr=yerrors) ### initialize the figure fig, ax = pyplot.subplots() eb = ax.errorbar(xdata2, ydata2, yerr=yer...
classes/12_matplotlib/2_points_and_errorbars.ipynb
theJollySin/python_for_scientists
gpl-3.0
We will create a model with a listric fault from scratch. In addition to the previous parameters for creating a fault (see notebook 4-Create-model), we now change the fault "geometry" to "Curved" and add parameters defining the amplitude and radius of influence:
reload(pynoddy.history) reload(pynoddy.events) nm = pynoddy.history.NoddyHistory() # add stratigraphy strati_options = {'num_layers' : 8, 'layer_names' : ['layer 1', 'layer 2', 'layer 3', 'layer 4', 'layer 5', 'layer 6', 'layer 7', 'layer 8'], 'layer_thickness' : [1000, 500, 500, 500...
docs/notebooks/10-Fault-Shapes.ipynb
Leguark/pynoddy
gpl-2.0
With these settings, we obtain an example of a listric fault in Noddy:
history = "listric_example.his" output_name = "listric_out" nm.write_history(history) # Compute the model pynoddy.compute_model(history, output_name) # Plot output reload(pynoddy.output) nout = pynoddy.output.NoddyOutput(output_name) nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1], ...
docs/notebooks/10-Fault-Shapes.ipynb
Leguark/pynoddy
gpl-2.0
As you can see the resulting topography is very different than in the case with continuous uplift. For our final example, we'll use NormalFault with a more complicated model in which we have both a soil layer and bedrock. In order to move, material must convert from bedrock to soil by weathering. First we import remai...
from landlab.components import DepthDependentDiffuser, ExponentialWeatherer # here are the parameters to change K = 0.0005 # stream power coefficient, bigger = streams erode more quickly U = 0.0001 # uplift rate in meters per year max_soil_production_rate = ( 0.001 ) # Maximum weathering rate for bare bedrock i...
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
cmshobe/landlab
mit
Manual iteration through the test images to generate convolutional test features. Saves each batch to disk instead of loading it all in memory.
# conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
I think conv_feat below should be conv_test_feat
fname = path + 'results/conv_test_feat.dat' %rm -r $fname for i in xrange(test_batches.n // batch_size + 1): conv_test_feat = conv_model.predict_on_batch(test_batches.next()[0]) if not i: c = bcolz.carray(conv_test_feat, rootdir= path + '/results/conv_test_feat.dat', mode='a') else: c.append(conv...
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Question: Why does it look like I can have the entire conv_test_feat array open at once, when opened w/ bcolz; but when it's explicitly loaded as a Numpy array via bcolz.open(fname)[:], all of a sudden the RAM takes a severe memory hit?
# apparently you can just open a (massive) bcolz carray this way # without crashing memory... okay I'm learning things # carr = bcolz.open(fname) # forgot to add the '+1' so missed the last 14 images. Doing that here: # NOTE: below code only adds on the missed batch # iterate generator until final missed batch, then ...
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
As expected (& which motivated this) the full set of convolutional test features does not fit at once in memory.
fname = path + 'results/conv_test_feat.dat' x = bcolz.open(fname) len(x)
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Loading train/valid features; defining & fitting NN model
# conv_train_feat_batches = get_batches(path + '/results/conv_feat.dat') # conv_valid_feat_batches = get_batches(path + '/results/conv_val_feat.dat') conv_trn_feat = load_array(path + '/results/conv_feat.dat') conv_val_feat = load_array(path + '/results/conv_val_feat.dat') (val_classes, trn_classes, val_labels, trn_la...
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit