Did it work on the first try? Making a perfect square is not easy, and most likely you will need to adjust a couple of things: the 90-degree turn: if the robot turns too much, decrease the sleep time; if it turns too little, increase it (decimal values are allowed); if it does not drive straight: it is normal for one of the motors to turn a little ...
for i in range(4):
    # move forward
    # turn
    # stop
task/quadrat.ipynb
ecervera/mindstorms-nb
mit
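Filled in, the loop might look like the dry-run sketch below. The motor helpers (`forward`, `turn`, `stop`) and the timings are hypothetical stand-ins for the course's real connection module, so the structure can be tried without a robot:

```python
import time

log = []  # records the command sequence instead of driving motors

def forward():  # hypothetical stand-in for the real 'move forward' command
    log.append('forward')

def turn():     # hypothetical stand-in for the real 'turn' command
    log.append('turn')

def stop():     # hypothetical stand-in for the real 'stop' command
    log.append('stop')

side_time = 0.01  # with a real robot: how long to drive one side
turn_time = 0.01  # with a real robot: tune until the turn is exactly 90 degrees

for i in range(4):
    forward()
    time.sleep(side_time)
    turn()
    time.sleep(turn_time)
stop()
```

Running the sketch leaves `log` holding four forward/turn pairs followed by one final stop.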
It is important that the instructions inside the loop are shifted to the right, that is, indented. Replace the comments with the instructions and try it out. Recap: to finish the exercise, and before moving on to the next page, disconnect the robot:
disconnect()
next_notebook('sensors')
TOC and elevation corrections

Some further changes to the ICPW trends analysis are required:

- Heleen has discovered some strange results for TOC at some of the Canadian sites (see e-mail received 14/03/2017 at 17.45)
- We now have elevation data for the remaining sites (see e-mail received 15/03/2017 at 08.3...
# Create db connection
r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py'
resa2 = imp.load_source('useful_resa2_code', r2_func_path)
engine, conn = resa2.connect_to_resa2()

# Get example data
sql = ("SELECT * FROM resa2.water_chemistry_values2 "
       "WHERE sample_...
correct_toc_elev.ipynb
JamesSample/icpw
mit
method_id=10294 is DOC in mg-C/l, whereas method_id=10313 is DOCx in umol-C/l. Both were uploaded within the space of a few weeks back in 2006. I assume that the values with method_id=10313 are correct, and those with method_id=10294 are wrong. It seems as though, when both methods are present, RESA2 preferentially ch...
# Get a list of all water samples associated with
# stations in the 'ICPW_TOCTRENDS_2015_CA_ICPW' project
sql = ("SELECT water_sample_id FROM resa2.water_samples "
       "WHERE station_id IN ( "
       "SELECT station_id FROM resa2.stations "
       "WHERE station_id IN ( "
       "SELECT station_id FROM resa2...
2. Update station elevations

Heleen has provided the missing elevation data, which I copied here: C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\CRU_Climate_Data\missing_elev_data.xlsx
# Read elev data
in_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
           r'\CRU_Climate_Data\missing_elev_data.xlsx')
elev_df = pd.read_excel(in_xlsx)
elev_df.index = elev_df['station_id']

# Loop over stations and update info
for stn_id in elev_df['station_id'].values:
    # Ge...
Next, we'll load our data set.
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
turbomanage/training-data-analyst
apache-2.0
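The same `pd.read_csv` call works on any CSV source; a tiny in-memory sketch (made-up rows covering a subset of the real columns) shows the loading step without network access:

```python
import io
import pandas as pd

# Made-up miniature of the California housing CSV (illustration only).
csv_text = """total_rooms,households,population,median_house_value
5612,472,1015,66900
7650,463,1129,80100
"""
df = pd.read_csv(io.StringIO(csv_text), sep=",")
print(df.shape)  # (2, 4)
```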
Examine the data

It's a good idea to get to know your data a little bit before you work with it. We'll print out a quick summary of a few useful statistics on each column. This will include things like mean, standard deviation, max, min, and various quantiles.
df.head()
df.describe()
This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features ...
df['num_rooms'] = df['total_rooms'] / df['households']
df['num_bedrooms'] = df['total_bedrooms'] / df['households']
df['persons_per_house'] = df['population'] / df['households']
df.describe()

df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis=1, inplace=True)
df.describe()
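The per-household ratios can be sanity-checked on a toy frame (numbers made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'total_rooms':    [1000.0, 2400.0],
    'total_bedrooms': [200.0, 600.0],
    'population':     [500.0, 1200.0],
    'households':     [100.0, 300.0],
})

# Convert block totals into per-household features.
df['num_rooms'] = df['total_rooms'] / df['households']
df['num_bedrooms'] = df['total_bedrooms'] / df['households']
df['persons_per_house'] = df['population'] / df['households']
df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'],
        axis=1, inplace=True)

print(df['num_rooms'].tolist())          # [10.0, 8.0]
print(df['persons_per_house'].tolist())  # [5.0, 4.0]
```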
Build a neural network model

In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features. To train our model, we'll first use the LinearRegressor interface. Then, we'll change to DNNRegressor.
featcols = {
    colname : tf.feature_column.numeric_column(colname)
    for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')
}
# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons
featcols['longitude'] = tf.feature_column.bucket...
Basic rich display

Find a Physics related image on the internet and display it in this notebook using the Image object. Load it using the url argument to Image (don't upload the image to this server). Make sure to set the embed flag so the image is embedded in the notebook data. Set the width and height to 600px.
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/6/6d/Particle2D.svg/320px-Particle2D.svg.png',
      embed=True, width=600, height=600)
assert True  # leave this to grade the image display
assignments/assignment06/DisplayEx01.ipynb
nproctor/phys202-2015-work
mit
Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
%%html
<table>
  <tr>
    <th>Name</th> <th>Symbol</th> <th>Antiparticle</th> <th>Charge</th> <th>Mass</th>
  </tr>
  <tr>
    <th>Up</th> <td>u</td> <td>$\bar{u}$</td> <td>+2/3</td> <td>1.5-3.3</td>
  </tr>
  <tr>
    <th>Down</th> <td>d</td> <td>$\bar{d}$</td> <td>-1/3</td> <td>3.5-6.0</td>
  </tr>
  <tr>
    <th>Charm</th> <td>c</td> <td>$\bar{c...
1. Setup and dataset download

Download data required for this exercise.
- get_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.
- download_model_binary.py to download the pretrained reference model
- finetune_flickr_style/assemble_data.py downloads the style training and testing data

We'll download just a smal...
# Download just a small subset of the data for this exercise.
# (2000 of 80K images, 5 of 20 labels.)
# To download the entire dataset, set `full_dataset = True`.
full_dataset = False
if full_dataset:
    NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1
else:
    NUM_STYLE_IMAGES = 2000
    NUM_STYLE_LABELS = 5
# This downloa...
tools/caffe-sphereface/examples/02-fine-tuning.ipynb
wy1iu/sphereface
mit
Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.
import os
weights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
assert os.path.exists(weights)
Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt.
# Load ImageNet labels to imagenet_labels
imagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
imagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\t'))
assert len(imagenet_labels) == 1000
print 'Loaded ImageNet labels:\n', '\n'.join(imagenet_labels[:10] + ['...'])

# Load style labels...
2. Defining and running the nets

We'll start by defining caffenet, a function which initializes the CaffeNet architecture (a minor variant on AlexNet), taking arguments specifying the data and number of output classes.
from caffe import layers as L
from caffe import params as P

weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]
frozen_param = [dict(lr_mult=0)] * 2

def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
              param=learned_p...
Now, let's create a CaffeNet that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.
dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
Define a function style_net which calls caffenet on data from the Flickr style dataset. The new network will also have the CaffeNet architecture, with differences in the input and output:
- the input is the Flickr style data we downloaded, provided by an ImageData layer
- the output is a distribution over 20 classes rathe...
def style_net(train=True, learn_all=False, subset=None):
    if subset is None:
        subset = 'train' if train else 'test'
    source = caffe_root + 'data/flickr_style/%s.txt' % subset
    transform_param = dict(mirror=train, crop_size=227,
                           mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')
    ...
Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model. Call forward on untrained_style_net to get a batch of style training data.
untrained_style_net = caffe.Net(style_net(train=False, subset='train'),
                                weights, caffe.TEST)
untrained_style_net.forward()
style_data_batch = untrained_style_net.blobs['data'].data.copy()
style_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32)
Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network, to view its top 5 predicted classes from the 1000 ImageNet classes. Below we chose an image where the network's predictions happen to be reaso...
def disp_preds(net, image, labels, k=5, name='ImageNet'):
    input_blob = net.blobs['data']
    net.blobs['data'].data[0, ...] = image
    probs = net.forward(start='conv1')['probs'][0]
    top_k = (-probs).argsort()[:k]
    print 'top %d predicted %s labels =' % (k, name)
    print '\n'.join('\t(%d) %5.2f%% %s' % (i+...
We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet. In fact, since we zero-initialized the classifier (see caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we ...
disp_style_preds(untrained_style_net, image)
We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers.
diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]
error = (diff ** 2).sum()
assert error < 1e-8
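The check above is just a sum-of-squares comparison of two activation vectors; in miniature, with random stand-in data and no Caffe dependency:

```python
import numpy as np

np.random.seed(0)
fc7_a = np.random.rand(4096).astype(np.float32)  # stand-in for one net's fc7 blob
fc7_b = fc7_a.copy()                             # same weights -> identical activations

diff = fc7_a - fc7_b
error = (diff ** 2).sum()
assert error < 1e-8  # passes: the copies are bit-identical
```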
Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.)
del untrained_style_net
3. Training the style classifier

Now, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mea...
from caffe.proto import caffe_pb2

def solver(train_net_path, test_net_path=None, base_lr=0.001):
    s = caffe_pb2.SolverParameter()

    # Specify locations of the train and (maybe) test networks.
    s.train_net = train_net_path
    if test_net_path is not None:
        s.test_net.append(test_net_path)
        s.tes...
Now we'll invoke the solver to train the style net's classification layer. For the record, if you want to train the network using only the command line tool, this is the command:

    build/tools/caffe train \
        -solver models/finetune_flickr_style/solver.prototxt \
        -weights models/bvlc_reference_caffenet/bvlc_...
def run_solvers(niter, solvers, disp_interval=10):
    """Run solvers for niter iterations,
       returning the loss and accuracy recorded each iteration.
       `solvers` is a list of (name, solver) tuples."""
    blobs = ('loss', 'acc')
    loss, acc = ({name: np.zeros(niter) for name, _ in solvers}
                 ...
Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialize...
niter = 200  # number of iterations to train

# Reset style_solver as before.
style_solver_filename = solver(style_net(train=True))
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(weights)

# For reference, we also create a solver that isn't initialized from
# the pretrained ImageNet w...
Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.
plot(np.vstack([train_loss, scratch_train_loss]).T)
xlabel('Iteration #')
ylabel('Loss')

plot(np.vstack([train_acc, scratch_train_acc]).T)
xlabel('Iteration #')
ylabel('Accuracy')
Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to ...
def eval_style_net(weights, test_iters=10):
    test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)
    accuracy = 0
    for it in xrange(test_iters):
        accuracy += test_net.forward()['acc']
    accuracy /= test_iters
    return test_net, accuracy

test_net, accuracy = eval_style_net(style_weights)
4. End-to-end finetuning for style

Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input ima...
end_to_end_net = style_net(train=True, learn_all=True)

# Set base_lr to 1e-3, the same as last time when learning only the classifier.
# You may want to play around with different values of this or other
# optimization parameters when fine-tuning. For example, if learning diverges
# (e.g., the loss gets very large or...
Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or r...
test_net, accuracy = eval_style_net(style_weights_ft)
print 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)
print 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, )
We'll first look back at the image we started with and check our end-to-end trained model's predictions.
plt.imshow(deprocess_net_image(image))
disp_style_preds(test_net, image)
Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time. Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.
batch_index = 1
image = test_net.blobs['data'].data[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]
disp_style_preds(test_net, image)
We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net.
disp_style_preds(scratch_test_net, image)
Of course, we can again look at the ImageNet model's predictions for the above image:
disp_imagenet_preds(imagenet_net, image)
Here we construct a filter, $F$, such that $$F\left(x\right) = e^{-|x|} \cos{\left(2\pi x\right)} $$ We want to show that if $F$ is used to generate sample calibration data for the MKS, then the calculated influence coefficients are in fact just $F$.
x0 = -10.
x1 = 10.
x = np.linspace(x0, x1, 1000)

def F(x):
    return np.exp(-abs(x)) * np.cos(2 * np.pi * x)

p = plt.plot(x, F(x), color='#1a9850')
notebooks/filter.ipynb
XinyiGong/pymks
mit
Next we generate the sample data (X, y) using scipy.ndimage.convolve. This performs the convolution $$ p\left[ s \right] = \sum_r F\left[r\right] X\left[r - s\right] $$ for each sample.
import scipy.ndimage

n_space = 101
n_sample = 50
np.random.seed(201)

x = np.linspace(x0, x1, n_space)
X = np.random.random((n_sample, n_space))
y = np.array([scipy.ndimage.convolve(xx, F(x), mode='wrap') for xx in X])
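With mode='wrap', scipy.ndimage.convolve performs a periodic (circular) convolution. A quick impulse test illustrates the convention: convolving a unit impulse with a small kernel reproduces the kernel, centred on the impulse:

```python
import numpy as np
import scipy.ndimage

delta = np.array([0., 0., 1., 0., 0.])   # unit impulse
F = np.array([1., 2., 3.])               # small test kernel

out = scipy.ndimage.convolve(delta, F, mode='wrap')
print(out)  # [0. 1. 2. 3. 0.] -- kernel reproduced around the impulse
```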
For this problem, a basis is unnecessary as no discretization is required in order to reproduce the convolution with the MKS localization. Using the ContinuousIndicatorBasis with n_states=2 is the equivalent of a non-discretized convolution in space.
from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis

prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSLocalizationModel(basis=prim_basis)
Fit the model using the data generated by $F$.
model.fit(X, y)
To check for internal consistency, we can compare the predicted output with the original for a few values.
y_pred = model.predict(X)
print y[0, :4]
print y_pred[0, :4]
With a slight linear manipulation of the coefficients, they agree perfectly with the shape of the filter, $F$.
plt.plot(x, F(x), label=r'$F$', color='#1a9850')
plt.plot(x, -model.coeff[:, 0] + model.coeff[:, 1], 'k--', label=r'$\alpha$')
l = plt.legend()
Verify tables exist

Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0

%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Lab Task #1: Model 4: Increase complexity of model using DNN_REGRESSOR

DNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs.
- MODEL_TYPE="DNN_REGRESSOR"
- hidden_units: List of hidden units per layer; all layers are fully connected. Number of elements in the array w...
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.model_4
OPTIONS (
    # TODO: Add DNN options
    INPUT_LABEL_COLS=["weight_pounds"],
    DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
    # TODO: Add base features and label
FROM
    babyweight.babyweight_data_train
Get training information and evaluate

Let's first look at our training statistics.
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)
Now let's evaluate our trained model on our eval dataset.
%%bigquery
SELECT
    *
FROM ML.EVALUATE(MODEL babyweight.model_4,
    (SELECT
        weight_pounds,
        is_male,
        mother_age,
        plurality,
        gestation_weeks
    FROM babyweight.babyweight_data_eval))
Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL babyweight.model_4,
    (SELECT
        weight_pounds,
        is_male,
        mother_age,
        plurality,
        gestation_weeks
    FROM babyweight.babyweight_data_eval))
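The SQRT step is the whole trick: RMSE is just the square root of the evaluation's mean_squared_error, which puts the metric back into the label's units (pounds here). In plain Python, with a made-up MSE value:

```python
import math

mean_squared_error = 1.21  # made-up value for illustration
rmse = math.sqrt(mean_squared_error)
assert abs(rmse - 1.1) < 1e-9  # back in the label's own units
```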
Lab Task #2: Final Model: Apply the TRANSFORM clause

Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries. Let's apply the TRAN...
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.final_model
TRANSFORM(
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    # TODO: Add FEATURE CROSS of:
    # is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks
OPTIONS (
    # TODO: Add DNN options
    INPUT_LAB...
Let's first look at our training statistics.
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)
Now let's evaluate our trained model on our eval dataset.
%%bigquery
SELECT
    *
FROM ML.EVALUATE(MODEL babyweight.final_model,
    (SELECT * FROM babyweight.babyweight_data_eval))
Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL babyweight.final_model,
    (SELECT * FROM babyweight.babyweight_data_eval))
Lab Task #3: Predict with final model

Now that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using the BigQuery ML.PREDICT function.

Predict from final model using an example from the original dataset
%%bigquery
SELECT
    *
FROM ML.PREDICT(MODEL babyweight.final_model,
    (SELECT
        # TODO Add base features example from original dataset
    ))
Modify the above prediction query using an example from the simulated dataset

Use the feature values you made up above, but set is_male to "Unknown" and plurality to "Multiple(2+)". This simulates not knowing the gender or the exact plurality.
%%bigquery
SELECT
    *
FROM ML.PREDICT(MODEL babyweight.final_model,
    (SELECT
        # TODO Add base features example from simulated dataset
    ))
Pipeline Constants
# Required Parameters
project_id = '<ADD GCP PROJECT HERE>'
output = 'gs://<ADD STORAGE LOCATION HERE>'  # No ending slash

# Optional Parameters
REGION = 'us-central1'
RUNTIME_VERSION = '1.13'
PACKAGE_URIS = json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])
TRAINER_OUTPUT_GCS_PATH = output + '/train/ou...
samples/core/ai_platform/ai_platform.ipynb
kubeflow/kfp-tekton-backend
apache-2.0
Download data

Define a download function that uses the BigQuery component.
bigquery_query_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')

QUERY = """
SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day
FROM `bigquery-public-data.chicago_crime.cr...
Train the model

Run training code that will pre-process the data and then submit a training job to the AI Platform.
mlengine_train_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/train/component.yaml')

def train(project_id,
          trainer_args,
          package_uris,
          trainer_output_gcs_path,
          gcs_wor...
Deploy model

Deploy the model with the ID given from the training step.
mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')

def deploy(
        project_id,
        model_uri,
        model_id,
        model_version,
        runtime_version):
    return mlengi...
Define pipeline
@dsl.pipeline(
    name=PIPELINE_NAME,
    description=PIPELINE_DESCRIPTION
)
def pipeline(
        data_gcs_path=DATA_GCS_PATH,
        gcs_working_dir=output,
        project_id=project_id,
        python_module=PYTHON_MODULE,
        region=REGION,
        runtime_version=RUNTIME_VERSION,
        package_uris=PACKAGE_URIS,
        trainer_output_...
Submit the pipeline for execution
pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})

# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.g. if using AI Platform Notebooks)
# pipeline = kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(pipelin...
Wait for the pipeline to finish
run_detail = pipeline.wait_for_run_completion(timeout=1800)
print(run_detail.run.status)
Use the deployed model to predict (online prediction)
import os
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
Create normalized input representing 14 days prior to prediction day.
%%writefile test.json
{"lstm_input": [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.90387016]]}

!gcloud ai-platform predict --model=$MODEL_NAME --version=$MODEL_VERSION --json-instances=test.js...
Examine cloud services invoked by the pipeline
- BigQuery query: https://console.cloud.google.com/bigquery?page=queries (click on 'Project History')
- AI Platform training job: https://console.cloud.google.com/ai-platform/jobs
- AI Platform model serving: https://console.cloud.google.com/ai-platform/models

Clean up models
# !gcloud ai-platform versions delete $MODEL_VERSION --model $MODEL_NAME
# !gcloud ai-platform models delete $MODEL_NAME
Import all python modules that we'll need:
import phoebe
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
development/tutorials/beaming_boosting.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Pull a set of Sun-like emergent intensities as a function of $\mu = \cos \theta$ from the Castelli and Kurucz database of model atmospheres (the necessary file can be downloaded from here):
wl = np.arange(900., 39999.501, 0.5)/1e10
with fits.open('T06000G40P00.fits') as hdu:
    Imu = 1e7*hdu[0].data
Grab only the normal component for testing purposes:
Inorm = Imu[-1,:]
Now let's load a Johnson V passband and the transmission function $P(\lambda)$ contained within:
pb = phoebe.get_passband('Johnson:V')
Trim the wavelength interval to the range covered by the passband:
keep = (wl >= pb.ptf_table['wl'][0]) & (wl <= pb.ptf_table['wl'][-1])
Inorm = Inorm[keep]
wl = wl[keep]
Calculate $S(\lambda) P(\lambda)$ and plot it, to make sure everything so far makes sense:
plt.plot(wl, Inorm*pb.ptf(wl), 'b-')
plt.show()
Now let's compute the term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$. First we will compute $\mathrm{ln}\,\lambda$ and $\mathrm{ln}\,I_\lambda$ and plot them:
lnwl = np.log(wl)
lnI = np.log(Inorm)

plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.show()
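As a sanity check on the slope idea: for a pure power law $I = C \lambda^\alpha$, the log-log slope $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$ is exactly $\alpha$, so a straight-line fit recovers it (toy data; the real spectrum above is far from a single power law):

```python
import numpy as np

lam = np.linspace(1.0, 2.0, 100)
alpha = -4.0
I = 2.5 * lam**alpha  # exact power law -> ln I is linear in ln lam

# Slope of the straight line in log-log space recovers alpha.
slope = np.polyfit(np.log(lam), np.log(I), 1)[0]
assert abs(slope - alpha) < 1e-6
```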
Per the equation above, $B(\lambda)$ is then the slope of this curve (plus 5). Herein lies the problem: what part of this graph do we fit a line to? In versions 2 and 2.1, PHOEBE used a 5th-order Legendre polynomial to fit the spectrum, followed by sigma-clipping to get to the continuum. Finally, it computed an average derivat...
envelope = np.polynomial.legendre.legfit(lnwl, lnI, 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI - continuum
sigma = np.std(diff)
clipped = (diff > -sigma)
while True:
    Npts = clipped.sum()
    envelope = np.polynomial.legendre.legfit(lnwl[clipped], lnI[clipped], 5)
    continuum = np.polyno...
It is clear that there are pretty strong systematics here that we are sweeping under the rug. Thus, we need to revise the way we compute the spectral index and make it robust before we can claim that we support boosting. For fun, this is what would happen if we tried to estimate $B(\lambda)$ at each $\lambda$:
dlnwl = lnwl[1:] - lnwl[:-1]
dlnI = lnI[1:] - lnI[:-1]
B = dlnI/dlnwl

plt.plot(0.5*(wl[1:]+wl[:-1]), B, 'b-')
plt.show()
development/tutorials/beaming_boosting.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
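The point-by-point slope above is dominated by noise from spectral lines. One simple (if crude) remedy — not what PHOEBE does, just an illustration — is to smooth the slope estimates, e.g. with a boxcar moving average; a sketch on synthetic data:

```python
import numpy as np

# synthetic noisy slope estimates, standing in for dlnI/dlnwl
rng = np.random.default_rng(1)
wl = np.linspace(300.0, 800.0, 1000)       # illustrative wavelengths (nm)
B = -4.0 + 0.5 * rng.normal(size=wl.size)  # true slope -4 plus line noise

def moving_average(y, window):
    """Boxcar-smooth y with the given odd window length."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

B_smooth = moving_average(B, 101)
# away from the edges, the smoothed slope should sit near the true value
print(B_smooth[200:800].mean())
```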
General terminology and notations The notation here follows Chapter 2 of the deep learning online book; note that different sources use different notations for the same mathematical constructs. We consider $L$ layers marked by $l=1,\ldots,L$ where $l=1$ denotes the in...
def sigmoid(x):
    return 1./(1.+np.exp(-x))

def rectifier(x):
    return np.array([max(xv,0.0) for xv in x])

def softplus(x):
    return np.log(1.0 + np.exp(x))

x = np.array([1.0,0,0])
w = np.array([0.2,-0.03,0.14])
print ' Scalar product between unit and weights ',x.dot(w)
print ' Values of Sigmoid activa...
NN playground.ipynb
srippa/nn_deep
mit
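One property of the sigmoid worth noting (it shows up later in backpropagation): its derivative can be written in terms of its own output, $\sigma'(x) = \sigma(x)\,(1-\sigma(x))$. A quick numerical check against a central finite difference:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
analytic = sigmoid(x) * (1.0 - sigmoid(x))

# central finite difference as an independent check
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)

print(np.max(np.abs(analytic - numeric)))
```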
Python example : Simple feed forward classification NN
def softmax(z):
    alpha = np.sum(np.exp(z))
    return np.exp(z)/alpha

# Input
a0 = np.array([1.,0,0])

# First layer
W1 = np.array([[0.2,0.15,-0.01],[0.01,-0.1,-0.06],[0.14,-0.2,-0.03]])
b1 = np.array([1.,1.,1.])
z1 = W1.dot(a0) + b1
a1 = np.tanh(z1)

# Output layer
W2 = np.array([[0.08,0.11,-0.3],[0.1,-...
NN playground.ipynb
srippa/nn_deep
mit
Cost (or error) functions Suppose that the expected output for an input vector ${\bf x} \equiv {\bf a^1}$ is ${\bf y} = {\bf y_x}^* = (0,1,0)$. We can then compute the error vector ${\bf e} = {\bf e_x} = {\bf a_x}^L - {\bf y_x}^*$. With this error, we can compute a cost $C=C_x$ associated with the output ${\bf y_x}$ of the...
def abs_loss(e):
    return np.sum(np.abs(e))

def sqr_loss(e):
    return np.sum(e**2)

def cross_entropy_loss(y_estimated,y_real):
    return -np.sum(y_real*np.log(y_estimated))

y_real = np.array([0.,1.,0])
err = a2 - y_real
print ' Error ',err
print ' Absolute loss ...
NN playground.ipynb
srippa/nn_deep
mit
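Using the three loss functions on a small worked example (the prediction vector here is hypothetical, not the network output from the cell above):

```python
import numpy as np

def abs_loss(e):
    return np.sum(np.abs(e))

def sqr_loss(e):
    return np.sum(e**2)

def cross_entropy_loss(y_estimated, y_real):
    return -np.sum(y_real * np.log(y_estimated))

y_real = np.array([0.0, 1.0, 0.0])   # one-hot target
y_hat = np.array([0.1, 0.7, 0.2])    # hypothetical softmax output
e = y_hat - y_real

print(abs_loss(e), sqr_loss(e), cross_entropy_loss(y_hat, y_real))
```

Note that cross-entropy only "sees" the probability assigned to the true class, $-\log 0.7$, while the absolute and squared losses accumulate error over every component.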
The Shyft Environment This next step is highly specific to how and where you have installed Shyft. If you have followed the guidelines at github and cloned the three shyft repositories: i) shyft, ii) shyft-data, and iii) shyft-doc, then you may need to tell jupyter notebooks where to find shyft. Uncomment the relevant...
# try to auto-configure the path, -will work in all cases where doc and data
# are checked out at same level
shyft_data_path = path.abspath("../../../shyft-data")
if path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:
    os.environ['SHYFT_DATA']=shyft_data_path

# shyft should be available either by i...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
2. A Shyft simulation The purpose of this notebook is to demonstrate setting up a Shyft simulation using existing repositories. Eventually, you will want to learn to write your own repositories, but once you understand what is presented herein, you'll be well on your way to working with Shyft. If you prefer to take a h...
# we need to import the repository to use it in a dictionary:
from shyft.repository.netcdf.cf_region_model_repository import CFRegionModelRepository
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
region specification The first dictionary essentially establishes the domain of the simulation. We also specify a repository that is used to read the data that will provide Shyft with a region_model (discussed below), based on geographic data. The geographic data consist of properties of the catchment, e.g. "forest fraction", "...
# next, create the simulation dictionary
RegionDict = {'region_model_id': 'demo',  # a unique name identifier of the simulation
              'domain': {'EPSG': 32633,
                         'nx': 400,
                         'ny': 80,
                         'step_x': 1000,
                         'step_y': 1000,
                         ...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
The first keys are probably quite clear: start_datetime: a string in the format: "2013-09-01T00:00:00" run_time_step: an integer representing the time step of the simulation (in seconds), so for a daily step: 86400 number_of_steps: an integer for how long the simulation should run: 365 (for a year-long simulation) re...
ModelDict = {'model_t': shyft.api.pt_gs_k.PTGSKModel,  # model to construct
             'model_parameters': {
                 'ae': {'ae_scale_factor': 1.5},
                 'gs': {'calculate_iso_pot_energy': False,
                        'fast_albedo_decay_rate': 6.752787747748934,
                        ...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
In this dictionary we define two variables: model_t: the import path to a shyft 'model stack' class model_parameters: a dictionary containing specific parameter values for a particular model class Specifics of the model_parameters dictionary will vary based on which class is used. Okay, so far we have two dictionarie...
region_repo = CFRegionModelRepository(RegionDict, ModelDict)
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
The region_model <div class="alert alert-info"> **TODO:** a notebook documenting the CFRegionModelRepository </div> The first step in conducting a hydrologic simulation is to define the domain of the simulation and the model type which we would like to simulate. To do this we create a region_model object. Above we c...
cell_data_file = os.path.join(os.environ['SHYFT_DATA'], 'netcdf/orchestration-testdata/cell_data.nc')
cell_data = Dataset(cell_data_file)
print(cell_data)
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
You might be surprised to see the dimensions are 'cells', but recall that in Shyft everything is vectorized. Each 'cell' is an element within a domain, and each cell has associated variables: location: x, y, z characteristics: forest-fraction, reservoir-fraction, lake-fraction, glacier-fraction, catchment-id We'll br...
region_model = region_repo.get_region_model('demo')
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Exploring the region_model So we now have created a region_model, but what is it actually? This is a very fundamental class in Shyft. It is actually one of the "model stacks", such as 'PTGSK', or 'PTHSK'. Essentially, the region_model contains all the information regarding the simulation type and domain. There are many...
region_model.bounding_region.epsg()
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
You'll likely note that there are a number of intriguing functions, e.g. initialize_cell_environment or interpolate. But before we can go further, we need some more information. Perhaps you are wondering about forcing data. So far, we haven't said anything about model input or the time of the simulation, we've only set...
cell_0 = region_model.cells[0]
print(cell_0.geo)
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
So you can see that so far, each of the cells in the region_model contains information regarding their LandTypeFractions, geolocation, catchment_id, and area. A particularly important attribute is region_model.region_env. This is a container for each cell that holds the "environmental timeseries", or forcing data, for t...
# just so we don't see 'private' attributes
print([d for d in dir(cell_0.env_ts) if '_' not in d[0]])
region_model.size()
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Adding forcing data to the region_model Clearly the next step is to add forcing data to our region_model object. Let's start by thinking about what kind of data we need. From above, where we looked at the env_ts attribute, it's clear that this particular model stack, PTGSKModel, requires: precipitation radiation relat...
from shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository

ForcingData = {'sources': [
    {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
     'params': {'epsg': 32633,
                'filename...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Data Repositories In another notebook, further information will be provided regarding the repositories. For the time being, let's look at this configuration dictionary that was created. It essentially just contains a list, keyed by the name "sources". This key is known in some of the tools that are built in the Shyft o...
# get the temperature sources:
tmp_sources = [source for source in ForcingData['sources'] if 'temperature' in source['types']]

# in this example there is only one
t0 = tmp_sources[0]

# We will now instantiate the repository with the parameters that are provided
# in the dictionary.
# Note the 'call' structure expect...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
tmp_repo is now an instance of the Shyft CFDataRepository, and this will provide Shyft with the data when it sets up a simulation by reading the data directly out of the file referenced in the 'source'. But that is just one repository, and in fact we defined many. Furthermore, you may have a heterogeneous collection of ...
# we'll actually create a collection of repositories, as we have different input types.
from shyft.repository.geo_ts_repository_collection import GeoTsRepositoryCollection

def construct_geots_repo(datasets_config, epsg=None):
    """ iterates over the different sources that are provided
    and prepares the repositor...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
geots_repo is now a "geographic timeseries repository", meaning that the timeseries it holds are spatially aware of their x,y,z coordinates (see CFDataRepository for details). It also has several methods; the one we are particularly interested in is get_timeseries. However, before we can proceed, we need to def...
# next, create the simulation dictionary
TimeDict = {'start_datetime': "2013-09-01T00:00:00",
            'run_time_step': 86400,  # seconds, daily
            'number_of_steps': 360   # ~ one year
            }

def time_axis_from_dict(t_dict) -> api.TimeAxis:
    utc = api.Calendar()
    sim_start = dt.datetime.strptime...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
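The start/step/number-of-steps arithmetic that the TimeDict encodes can be illustrated with the standard library alone (a stand-in for Shyft's api.TimeAxis, not the real call):

```python
import datetime as dt

TimeDict = {'start_datetime': "2013-09-01T00:00:00",
            'run_time_step': 86400,  # seconds, daily
            'number_of_steps': 360}

# parse the start and enumerate every step of the simulation period
start = dt.datetime.strptime(TimeDict['start_datetime'], "%Y-%m-%dT%H:%M:%S")
step = dt.timedelta(seconds=TimeDict['run_time_step'])
times = [start + i * step for i in range(TimeDict['number_of_steps'])]

print(times[0], "...", times[-1])
```

The real TimeAxis works in epoch seconds and with a Calendar, but the resulting sequence of period starts is the same idea.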
We now have an object that defines the time dimension for the simulation, and we will use this to initialize the region_model with the "environmental timeseries" or env_ts data. These containers will be given data from the appropriate repositories using the get_timeseries function. Following the templates in the shyft....
# we can extract our "bounding box" based on the `region_model` we set up
bbox = region_model.bounding_region.bounding_box(region_model.bounding_region.epsg())
period = ta_1.total_period()  # just defined above

# required forcing data sets we want to retrieve
geo_ts_names = ("temperature", "wind_speed", "precipitation"...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Now we have a new dictionary, called 'sources', that contains specialized Shyft api types specific to each forcing data type. You can look at one, for example:
prec = sources['precipitation']
print(len(prec))
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting. Let's plot the precip of each of the sources:
fig, ax = plt.subplots(figsize=(15,10))

for pr in prec:
    t,p = [dt.datetime.utcfromtimestamp(t_.start) for t_ in pr.ts.time_axis], pr.ts.values
    ax.plot(t,p, label=pr.mid_point().x)  # uid is empty now, but we reserve for later use

fig.autofmt_xdate()
ax.legend(title="Precipitation Input Sources")
ax.set_ylabel("p...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
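The epoch-seconds-to-datetime conversion used in the plotting loop above is worth isolating; a minimal stand-alone version (the timestamps here are made up, standing in for the ts.time_axis period starts):

```python
import datetime as dt

# hypothetical epoch timestamps (seconds since 1970-01-01 UTC), one day apart
starts = [1377993600 + i * 86400 for i in range(3)]
t = [dt.datetime.utcfromtimestamp(s) for s in starts]

print(t[0], "...", t[-1])
```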
Finally, the next step will take the data from the sources and connect it to our region_model.region_env class:
def get_region_environment(sources):
    region_env = api.ARegionEnvironment()
    region_env.temperature = sources["temperature"]
    region_env.precipitation = sources["precipitation"]
    region_env.radiation = sources["radiation"]
    region_env.wind_speed = sources["wind_speed"]
    region_env.rel_hum = sources["r...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
And now our forcing data is connected to the region_model. We are almost ready to run a simulation. There is just one more step. We've connected the sources to the model, but remember that Shyft is a distributed modeling framework, and we've connected point data sources (in this case). So we need to get the data from t...
from shyft.repository.interpolation_parameter_repository import InterpolationParameterRepository

class interp_config(object):
    """ a simple class to provide the interpolation parameters """
    def __init__(self):
        self.interp_params = {'precipitation': {'method': 'idw',
                                                'params': {'dista...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
The next step is to set the initial states of the model using our last repository. This one, the GeneratedStateRepository, will set empty default values. Now we are nearly ready to conduct a simulation. We just need to run a few methods to prepare the model and cells for the simulation. The region_model has a method call...
from shyft.repository.generated_state_repository import GeneratedStateRepository

init_values = {'gs': {'acc_melt': 0.0,
                      'albedo': 0.65,
                      'alpha': 6.25,
                      'iso_pot_energy': 0.0,
                      'lwc': 0.1,
                      'sdc_melt_mean': 0.0,
                      'surface_heat': 30000.0,
                      'temp_swe': 0.0},
               'kirchner': {'q': 0.01}}

state_generator...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Conduct the simulation We now have a region_model that is ready for simulation. As we discussed before, we still need to get the data from our point observations interpolated to the cells, and we need to get the env_ts of each cell populated. But all the machinery is now in place to make this happen. To summarize, we'...
region_model.initialize_cell_environment(ta_1)
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
As a habit, we have a quick "sanity check" function to see if the model is runnable. It is recommended to have this function when you create 'run scripts'.
def runnable(reg_mod):
    """ returns True if model is properly configured
    **note** this is specific depending on your model's input data requirements """
    return all((reg_mod.initial_state.size() > 0,
                reg_mod.time_axis.size() > 0,
                all([len(getattr(reg_mod.region_env, attr)) > 0 for attr in ...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Okay, so the simulation was run. Now we may be interested in looking at some of the output. We'll take a brief summary glance in the next section, and save a deeper dive into the simulation results for another notebook. 3. Simulation results The first step will be simply to look at the discharge results for each subcat...
# Here we are going to extract data from the simulation.
# We start by creating a list to hold discharge for each of the subcatchments.
# Then we'll get the data from the region_model object

# mapping of internal catch ID to catchment
catchment_id_map = region_model.catchment_id_map

# First get the time-axis which we...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. This doesn't necessarily show anything about the simula...
from matplotlib.cm import jet as jet
from matplotlib.colors import Normalize

# get all the cells for one sub-catchment with 'id' == 1228
c1228 = [c for c in region_model.cells if c.geo.catchment_id() == 1228]

# for plotting, create an mpl normalizer based on min,max elevation
elv = [c.geo.mid_point().z for c in c1228...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot. Now we're going to create a function that will read initial states from the initial_state_repo. In practice, this is already done by...
state_generator.find_state?

# create a function to read the states from the state repository
def get_init_state_from_repo(initial_state_repo_, region_model_id_=None, timestamp=None):
    state_id = 0
    if hasattr(initial_state_repo_, 'n'):  # No stored state, generated on-the-fly
        initial_state_repo_.n = reg...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual sta...
def print_pub_attr(obj):
    # only public attributes
    print(f'{obj.__class__.__name__}:\t', [attr for attr in dir(obj) if attr[0] != '_'])

print(len(init_state))
init_state_cell0 = init_state[0]

# the identifier
print_pub_attr(init_state_cell0.id)

# gam snow states
print_pub_attr(init_state_cell0.state.gs)...
notebooks/repository/repositories-intro.ipynb
statkraft/shyft-doc
lgpl-3.0
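The dir()-filter used in print_pub_attr works on any Python object; a small stand-alone illustration on a plain class:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def public_attrs(obj):
    # keep only names that don't start with an underscore
    return [a for a in dir(obj) if not a.startswith('_')]

p = Point(1.0, 2.0)
print(public_attrs(p))
```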
Breaking it down... The while statement on line 2 starts the loop. The code indented beneath the while (lines 3-4) will repeat, in a linear fashion, until the Boolean expression on line 2, `i <= 3`, is False, at which time the program continues with line 5. Some Terminology We call `i <= 3` the loop's exit condition. The...
## WARNING!!! INFINITE LOOP AHEAD
## IF YOU RUN THIS CODE YOU WILL NEED TO STOP OR RESTART THE KERNEL AFTER RUNNING THIS!!!
i = 1
while i <= 3:
    print(i,"Mississippi...")
print("Blitz!")
content/lessons/04-Iterations/LAB-Iterations.ipynb
IST256/learn-python
mit
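The loop above never terminates because `i` is never changed inside the body, so the exit condition `i <= 3` is tested against `i == 1` forever. Adding one line — an increment — fixes it:

```python
i = 1
counts = []  # record each pass so we can inspect the loop afterwards
while i <= 3:
    counts.append(i)
    print(i, "Mississippi...")
    i = i + 1  # advance the loop variable so the exit condition can become False
print("Blitz!")
```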
For loops To prevent an infinite loop when the loop is definite, we use the for statement. Here's the same program using for:
for i in range(1,4):
    print(i,"Mississippi...")
print("Blitz!")
content/lessons/04-Iterations/LAB-Iterations.ipynb
IST256/learn-python
mit
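A detail worth remembering about `range(1,4)`: the end value is excluded, so the loop body runs for 1, 2, and 3 — exactly three passes, matching the while version:

```python
passes = list(range(1, 4))
print(passes)  # the end value, 4, is not included
```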