| markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch: | def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Print the first example's input and target values: | for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()]))) | Input data: 'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target data: 'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
| Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
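The shift performed by `split_input_target` can be checked without TensorFlow; a minimal sketch applying the identical slicing to a plain Python list of characters:

```python
# Minimal sketch of the input/target shift, using a plain Python list
# in place of a tensor of character indices.
def split_input_target(chunk):
    input_text = chunk[:-1]   # everything except the last character
    target_text = chunk[1:]   # everything except the first character
    return input_text, target_text

inp, tgt = split_input_target(list("Hello"))
print(inp)  # ['H', 'e', 'l', 'l']
print(tgt)  # ['e', 'l', 'l', 'o']
```

Input and target have the same length, offset by one position, which is exactly what the next-character prediction task needs.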
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing, but the `RNN` considers the previous step's context in addition to the current input char... | for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx]))) | Step 0
input: 18 ('F')
expected output: 47 ('i')
Step 1
input: 47 ('i')
expected output: 56 ('r')
Step 2
input: 56 ('r')
expected output: 57 ('s')
Step 3
input: 57 ('s')
expected output: 58 ('t')
Step 4
input: 58 ('t')
expected output: 1 (' ')
| Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Create training batches: We used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches. | # Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZ... | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
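The buffered-shuffle behaviour described in the comment can be sketched in plain Python (the helper name `buffered_shuffle` is ours, not part of `tf.data`): a fixed-size buffer is filled, and a random element is yielded each time a new item streams in.

```python
import random

def buffered_shuffle(items, buffer_size, seed=0):
    # Sketch of tf.data's buffered shuffle: hold at most buffer_size + 1
    # elements and yield a randomly chosen one as new items stream in.
    rng = random.Random(seed)
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) > buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the remaining buffer at the end
        yield buf.pop(rng.randrange(len(buf)))

shuffled = list(buffered_shuffle(range(10), buffer_size=3))
print(shuffled)  # a permutation of 0..9, shuffled only locally
```

Because only the buffer is randomized, early outputs can only come from early inputs; a larger `BUFFER_SIZE` gives a more thorough shuffle at the cost of memory.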
Build The Model: Use `tf.keras.Sequential` to define the model. For this simple example, three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A ty... | # Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
... | WARNING: Logging before flag parsing goes to stderr.
W0304 03:48:46.706135 140067035297664 tf_logging.py:161] <tensorflow.python.keras.layers.recurrent.UnifiedLSTM object at 0x7f637273ccf8>: Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU.
| Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") | (64, 100, 65) # (batch_size, sequence_length, vocab_size)
| Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: | model.summary() | Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (64, None, 256) 16640
____________________________________________... | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
To get actual predictions from the model, we need to sample from the output distribution to obtain character indices. This distribution is defined by the logits over the character vocabulary. Note: It is important to _sample_ from this distribution, as taking the _argmax_ of the distribution can easily get the model ... | sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy() | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
This gives us, at each timestep, a prediction of the next character index: | sampled_indices | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Decode these to see the text predicted by this untrained model: | print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ]))) | Input:
'to it far before thy time?\nWarwick is chancellor and the lord of Calais;\nStern Falconbridge commands'
Next Char Predictions:
"I!tbdTa-FZRtKtY:KDnBe.TkxcoZEXLucZ&OUupVB rqbY&Tfxu :HQ!jYN:Jt'N3KNpehXxs.onKsdv:e;g?PhhCm3r-om! :t"
| Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Train the model: At this point the problem can be treated as a standard classification problem: given the previous RNN state and the input at this time step, predict the class of the next character. Attach an optimizer and a loss function: The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in... | def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_los... | Prediction shape: (64, 100, 65) # (batch_size, sequence_length, vocab_size)
scalar_loss: 4.174188
| Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
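As a sanity check on the `scalar_loss` value above: an untrained model produces roughly uniform logits over the 65-character vocabulary, so the expected loss is about `ln(65) ≈ 4.174`. A minimal hand computation of the same `from_logits` crossentropy, in pure Python without TensorFlow:

```python
import math

def sparse_crossentropy_from_logits(label, logits):
    # Numerically stable -log(softmax(logits)[label]), mirroring
    # sparse_categorical_crossentropy(..., from_logits=True) for one example.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - logits[label]

# Uniform logits over a 65-character vocabulary:
print(sparse_crossentropy_from_logits(0, [0.0] * 65))  # ln(65) ~= 4.174
```

If the initial loss is much larger than `ln(vocab_size)`, something is usually wrong with the model setup.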
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function. | model.compile(optimizer='adam', loss=loss) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: | # Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. | EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) | Epoch 1/10
172/172 [==============================] - 31s 183ms/step - loss: 2.7052
Epoch 2/10
172/172 [==============================] - 31s 180ms/step - loss: 2.0039
Epoch 3/10
172/172 [==============================] - 31s 180ms/step - loss: 1.7375
Epoch 4/10
172/172 [==============================] - 31s 179ms/step... | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
Generate text: Restore the latest checkpoint. To keep this prediction step simple, use a batch size of 1. Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different `batch_size`, we need to rebuild the model and restore the... | tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary() | Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (1, None, 256) 16640
__________________________________________... | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
The prediction loop: The following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to c... | def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
#... | ROMEO: now to have weth hearten sonce,
No more than the thing stand perfect your self,
Love way come. Up, this is d so do in friends:
If I fear e this, I poisple
My gracious lusty, born once for readyus disguised:
But that a pry; do it sure, thou wert love his cause;
My mind is come too!
POMPEY:
Serve my master's him:... | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS=30`). You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions. Advanced: Customize... | model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losse... | Epoch 1 Batch 0 Loss 4.174627780914307
Epoch 1 Batch 100 Loss 2.333711862564087
Epoch 1 Loss 2.0831
Time taken for 1 epoch 15.117910146713257 sec
Epoch 2 Batch 0 Loss 2.150496244430542
Epoch 2 Batch 100 Loss 1.8478351831436157
Epoch 2 Loss 1.7348
Time taken for 1 epoch 14.401937007904053 sec
Epoch 3 Batch 0 Loss 1.80... | Apache-2.0 | site/en/r2/tutorials/text/text_generation.ipynb | mullikine/tfdocs |
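The temperature parameter mentioned above can be illustrated without TensorFlow; a minimal sketch (the helper name `sample_with_temperature` is ours) that divides the logits by the temperature before the softmax:

```python
import math
import random

def sample_with_temperature(logits, temperature, seed=0):
    # Lower temperature sharpens the distribution (more predictable text);
    # higher temperature flattens it (more surprising text).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(l - m) for l in scaled]
    return random.Random(seed).choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.1]
samples = [sample_with_temperature(logits, temperature=0.1, seed=s) for s in range(20)]
print(samples)  # at temperature 0.1, almost always index 0
```

At `temperature=1.0` the sampling matches the model's raw distribution; as the temperature approaches zero it approaches argmax behaviour.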
[1] About - This notebook is a part of tutorial series prepared by B. C. Yadav, Research Scholar @ IIT Roorkee. - ORCID iD: https://orcid.org/0000-0001-7288-0551 - Google Scholar: https://scholar.google.com/citations?user=6fJpxxQAAAAJ&hl=en&authuser=1 - Github: https://github.com/Bankimchandrayadav/PythonInGeomatics ... | # !conda install -c conda-forge rioxarray -y | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[3] First time usage for pip users | # !pip install rioxarray | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[4] Importing libraries | import rioxarray
import os
import numpy as np
from tqdm.notebook import tqdm as td
import shutil
import time
start = time.time() # will be used to measure the effectiveness of automation | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[5] Creating routine functions | def fresh(where):
if os.path.exists(where):
shutil.rmtree(where)
os.mkdir(where)
else:
os.mkdir(where) | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[6] Read files [6.1] Specify input directory | rootDir = "../02_data/02_netcdf_multiple" | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[6.2] Read files from input directory | # create an empty list
rasters = []
# loop starts here
for dirname, subdirnames, filenames in os.walk(rootDir):
# search message
print('Searched in directory: {}\n'.format(dirname))
# subloop starts here
for filename in filenames:
# get complete file name
filename = os.path.join(... | Searched in directory: ../02_data/02_netcdf_multiple
Files read
| MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[6.3] Check the input data | print('First file in sequence:', rasters[0])
print('Last file in sequence:', rasters[-1]) | Last file in sequence: ../02_data/02_netcdf_multiple\2000-02-11T15.nc
| MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[7] Converting to tiff [7.1] Specify output directory: | outDir = "../02_data/03_tiff/" | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[7.2] Delete any existing or old files | fresh(where=outDir) | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[7.3] Check output directory [optional] | # os.startfile(os.path.realpath(outDir)) | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[7.4] Conversion of netcdf to tiff | # loop starts here
for i in td(range(0, len(rasters)), desc = 'Converting to tiff'):
# read file from 'rasters' list and remove path name
fileName = rasters[i].split('\\')[1].split('.')[0]
# read the file as a dataset
ds = rioxarray.open_rasterio(rasters[i])
# set projection 'datum = WGS84'... | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[8] Time elapsed | end = time.time()
print('Time elapsed:', np.round(end-start ,2), 'secs') | Time elapsed: 514.1 secs
| MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
[9] See results [1000 converted tiff files, optional] | os.startfile(os.path.realpath(outDir)) | _____no_output_____ | MIT | 01_notebooks/02_converting_to_tif.ipynb | Bankimchandrayadav/PythonInGeomatics |
"Heart Attack Analysis & Prediction Dataset"> "[Kaggle]"- toc:true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [Colab, Kaggle Code Review] | # !mkdir "/content/drive/MyDrive/Kaggle/Data/Heart Attack Analysis & Prediction Dataset"
# !unzip "/content/drive/MyDrive/Kaggle/Data/Heart Attack Analysis & Prediction Dataset.zip" -d "/content/drive/MyDrive/Kaggle/Data/Heart Attack Analysis & Prediction Dataset/"
from google.colab import drive
drive.mount('/content/d... | _____no_output_____ | Apache-2.0 | _notebooks/2021_12_31_Heart_Attack_Analysis_&_Prediction_Dataset.ipynb | JoGyeongDeok/PigDuck |
1.Library & Data Load | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix
from sklear... | _____no_output_____ | Apache-2.0 | _notebooks/2021_12_31_Heart_Attack_Analysis_&_Prediction_Dataset.ipynb | JoGyeongDeok/PigDuck |
2. Modeling | X = heart.iloc[:, 0: -1]
y = heart.iloc[:, -1:]
X_train,X_test, y_train, y_test= train_test_split(X,y,test_size=0.2,random_state=2) | _____no_output_____ | Apache-2.0 | _notebooks/2021_12_31_Heart_Attack_Analysis_&_Prediction_Dataset.ipynb | JoGyeongDeok/PigDuck |
https://mkjjo.github.io/python/2019/01/10/scaler.htmlhttps://blog.naver.com/chlxogns92/221767376389 | pipe_svc=make_pipeline(StandardScaler(),
SVC(random_state=1))
param_C_range=[1,2,3,4,5,6,7,8,9,10]  # C must be strictly positive for SVC
param_gamma_range=[0.005, 0.1, 0.015]  # gamma must be strictly positive
param_grid=[{'svc__C' : param_C_range,
'svc__kernel':['linear']},
{'svc__C':param_C_range,
'svc__gamma':param_gamma_r... | _____no_output_____ | Apache-2.0 | _notebooks/2021_12_31_Heart_Attack_Analysis_&_Prediction_Dataset.ipynb | JoGyeongDeok/PigDuck |
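What the `StandardScaler` step in the pipeline does can be sketched by hand (a simplified one-feature version; the helper name `standardize` is ours):

```python
def standardize(values):
    # Subtract the mean and divide by the (population) standard deviation,
    # as StandardScaler does independently for each feature.
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / var ** 0.5 for v in values]

scaled = standardize([1.0, 2.0, 3.0, 4.0])
print(scaled)  # zero mean, unit variance
```

Scaling matters here because the SVC's RBF kernel is distance-based: unscaled features with large ranges would dominate the kernel computation.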
Experiment 3: Study of LC-MS$^2$Struct's ability to improve the identification of stereoisomers: In this experiment we output a score (either Only-MS$^2$ or LC-MS$^2$Struct) for each stereoisomer in the candidate set. Candidates are identified (or indexed, or distinguished) by their full InChIKey. Typically, the Only-MS$^2... | agg_setting = {
"marg_agg_fun": "average",
"cand_agg_id": "inchikey"
} | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
MetFrag: MetFrag performs an in-silico fragmentation for each candidate structure and compares the predicted and observed (from the MS2 spectrum) fragments. | # SSVM (2D)
setting = {"ds": "*", "mol_feat": "FCFP__binary__all__2D", "mol_id": "cid", "ms2scorer": "metfrag__norm", "ssvm_flavor": "default", "lloss_mode": "mol_feat_fps"}
res__ssvm__metfrag__2D = load_topk__publication(
setting, agg_setting, basedir=os.path.join("massbank__with_stereo"), top_k_method="csi", load... | Performed tests: [1500.]
| MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
Overview result table (LC-MS$^2$Struct) Without chirality encoding (2D) | tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__metfrag__2D, test="ttest", ks=[1, 5, 10, 20])
tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
With chirality encoding (3D) | tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__metfrag__3D, test="ttest", ks=[1, 5, 10, 20])
tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
SIRIUS | # SSVM (2D)
setting = {"ds": "*", "mol_feat": "FCFP__binary__all__2D", "mol_id": "cid", "ms2scorer": "sirius__norm", "ssvm_flavor": "default", "lloss_mode": "mol_feat_fps"}
res__ssvm__sirius__2D = load_topk__publication(
setting, agg_setting, basedir=os.path.join("massbank__with_stereo"), top_k_method="csi", load_m... | Performed tests: [1500.]
| MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
Overview result table (LC-MS$^2$Struct) Without chirality encoding (2D) | tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__sirius__2D, test="ttest", ks=[1, 5, 10, 20])
tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
With chirality encoding (3D) | tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__sirius__3D, test="ttest", ks=[1, 5, 10, 20])
tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
CFM-ID | # SSVM (2D)
setting = {"ds": "*", "mol_feat": "FCFP__binary__all__2D", "mol_id": "cid", "ms2scorer": "cfmid4__norm", "ssvm_flavor": "default", "lloss_mode": "mol_feat_fps"}
res__ssvm__cfmid4__2D = load_topk__publication(
setting, agg_setting, basedir=os.path.join("massbank__with_stereo"), top_k_method="csi", load_m... | Performed tests: [1500.]
| MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
Overview result table (LC-MS$^2$Struct) Without chirality encoding (2D) | tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__cfmid4__2D, test="ttest", ks=[1, 5, 10, 20])
tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
With chirality encoding (3D) | tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__cfmid4__3D, test="ttest", ks=[1, 5, 10, 20])
tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
Visualization of the ranking performanceTop-k curve for each MS2-scoring method: CFM-ID, MetFrag and SIRIUS. | __tmp__03__a = plot__03__a(
res__baseline=[
res__ssvm__cfmid4__2D[(res__ssvm__cfmid4__2D["scoring_method"] == "Only MS") & (res__ssvm__cfmid4__2D["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="CFM-ID"),
res__ssvm__metfrag__2D[(res__ssvm__metfrag__2D["scoring_method"] == "Only MS"... | _____no_output_____ | MIT | results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_03__stereochemistry.ipynb | aalto-ics-kepaco/lcms2struct_exp |
Monte Carlo Methods: In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnv: We begin by importing the necessary packages. | import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment. | env = gym.make('Blackjack-v1') | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below. | print(env.observation_space)
print(env.action_space) | Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
| MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._) | for i_episode in range(3):
state = env.reset()
while True:
prev_state = state
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(f"S={prev_state}, A={action}, R={reward}, S'={state}")
if done:
print('End game! Reward: ', rewa... | S=(15, 10, False), A=0, R=-1.0, S'=(15, 10, False)
End game! Reward: -1.0
You lost :(
S=(11, 8, False), A=0, R=-1.0, S'=(11, 8, False)
End game! Reward: -1.0
You lost :(
S=(9, 6, False), A=0, R=1.0, S'=(9, 6, False)
End game! Reward: 1.0
You won :)
| MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Part 1: MC Prediction: In this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability ... | def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state,... | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*) | for i in range(3):
print(generate_episode_from_limit_stochastic(env)) | [((18, 1, True), 1, 0.0), ((13, 1, False), 1, 0.0), ((20, 1, False), 0, 0.0)]
[((12, 10, False), 1, 0.0), ((20, 10, False), 0, 1.0)]
[((14, 1, False), 0, -1.0)]
| MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent. Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episo... | def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0, first_visit=True):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
... | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function. To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**. | # obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_bl... | <ipython-input-34-1d29bf91437b>:30: RuntimeWarning: invalid value encountered in true_divide
Q[k] = returns_sum[k] / N[k]
| MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Part 2: MC Control: In this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: Th... | import pdb
def generate_episode_with_policy(env, policy, eps):
episode = []
state = env.reset()
while True:
if np.random.rand() > eps and state in policy:
action = policy[state] # greedy
else:
action = env.action_space.sample()
new... | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
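The role of the `alpha` parameter can be seen in isolation; a minimal sketch (the helper name `update_q` is ours) of the constant-alpha rule `Q(s,a) <- Q(s,a) + alpha * (G - Q(s,a))`:

```python
def update_q(q, ret, alpha):
    # Constant-alpha MC update: move the estimate a fraction alpha
    # of the way towards the sampled return ret.
    return q + alpha * (ret - q)

q = 0.0
for _ in range(100):
    q = update_q(q, 1.0, alpha=0.2)
print(round(q, 6))  # converges towards the (constant) return 1.0
```

Unlike averaging all returns, the constant step size keeps weighting recent returns more heavily, which is what lets the estimates track the improving policy during control.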
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters. | # obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 50000, .2) | Episode 50000/50000. | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Next, we plot the corresponding state-value function. | # obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V) | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Finally, we visualize the policy that is estimated to be optimal. | # plot the policy
plot_policy(policy) | _____no_output_____ | MIT | monte-carlo/Monte_Carlo.ipynb | piotrbazan/deep-reinforcement-learning |
Serving a Custom Model: This example walks you through how to deploy a custom model with Tempo. In particular, we will walk you through how to write custom logic to run inference on a [numpyro model](http://num.pyro.ai/en/stable/). Note that we've picked `numpyro` for this example simply because it's not supported out of ... | !tree -P "*.py" -I "__init__.py|__pycache__" -L 2
Training: The first step will be to train our model. This will be a very simple Bayesian regression model, based on an example provided in the [`numpyro` docs](https://nbviewer.jupyter.org/github/pyro-ppl/numpyro/blob/master/notebooks/source/bayesian_regression.ipynb). Since this is a probabilistic model, during training ... | # %load src/train.py
# Original source code and more details can be found in:
# https://nbviewer.jupyter.org/github/pyro-ppl/numpyro/blob/master/notebooks/source/bayesian_regression.ipynb
import numpy as np
import pandas as pd
from jax import random
from numpyro.infer import MCMC, NUTS
from src.tempo import model_fu... | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
Saving the trained model: Now that we have _trained_ our model, the next step will be to save it so that it can be loaded afterwards at serving time. Note that, since this is a probabilistic model, we will only need to save the traces that approximate the posterior distribution over latent parameters. This will get saved in a... | save(mcmc, ARTIFACTS_FOLDER)
Serving: The next step will be to serve our model through Tempo. For that, we will implement a custom model to perform inference using our custom `numpyro` model. Once our custom model is defined, we will be able to deploy it on any of the available runtimes using the same environment that we used for training. Custom i... | # %load src/tempo.py
import os
import json
import numpy as np
import numpyro
from numpyro import distributions as dist
from numpyro.infer import Predictive
from jax import random
from tempo import model, ModelFramework
def model_function(marriage : np.ndarray = None, age : np.ndarray = None, divorce : np.ndarray = Non... | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
We can now test our custom logic by running inference locally. | marriage = np.array([28.0])
age = np.array([63])
pred = numpyro_divorce(marriage=marriage, age=age)
print(pred) | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
Deploy the Model to Docker: Finally, we'll be able to deploy our model using Tempo against one of the available runtimes (i.e. Kubernetes, Docker or Seldon Deploy). We'll deploy first to Docker to test. | !cat artifacts/conda.yaml
from tempo.serve.loader import save
save(numpyro_divorce)
from tempo import deploy
remote_model = deploy(numpyro_divorce) | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
We can now test our model deployed in Docker as: | remote_model.predict(marriage=marriage, age=age)
remote_model.undeploy() | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
Production Option 1 (Deploy to Kubernetes with Tempo)

* Here we illustrate how to run the final models in "production" on Kubernetes by using Tempo to deploy

Prerequisites

Create a Kind Kubernetes cluster with Minio and Seldon Core installed using Ansible as described [here](../../overview/quickstart.md#kubernetes-clu... | !kubectl apply -f k8s/rbac -n production
from tempo.examples.minio import create_minio_rclone
import os
create_minio_rclone(os.getcwd()+"/rclone.conf")
from tempo.serve.loader import upload
upload(numpyro_divorce)
from tempo.serve.metadata import KubernetesOptions
from tempo.seldon.k8s import SeldonCoreOptions
runtime_... | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
Production Option 2 (Gitops)

* We create yaml to provide to our DevOps team to deploy to a production cluster
* We add Kustomize patches to modify the base Kubernetes yaml created by Tempo | from tempo.seldon.k8s import SeldonKubernetesRuntime
from tempo.serve.metadata import RuntimeOptions, KubernetesOptions
runtime_options = RuntimeOptions(
k8s_options=KubernetesOptions(
namespace="production",
authSecretName="minio-secret"
)
)
k8s_runtime = SeldonKubernetesRun... | _____no_output_____ | Apache-2.0 | docs/examples/custom-model/README.ipynb | RafalSkolasinski/tempo |
Classes and Objects

Object orientation is a powerful tool that allows scientists to write larger, more complex simulations. Object orientation organizes data, methods, and functions into classes. Pages 117 and 118 provide a great introduction to this topic.

Objects

Everything in Python is an object. We will use the hel... | a = 1
help(a) | Help on int object:
class int(object)
| int(x=0) -> integer
| int(x, base=10) -> integer
|
| Convert a number or string to an integer, or return 0 if no arguments
| are given. If x is a number, return x.__int__(). For floating point
| numbers, this truncates towards zero.
|
| If x is not a number o... | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
The help() function has told us that the integer is an object. This object must be associated with rules and behaviors. The dir() function will tell us about some of those behaviors: it lists all of the attributes and methods associated with its argument. | a = 1
dir(a) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
dir() has told us that objects in the int class have absolute values (`__abs__`), and that they may be added together (`__add__`). These objects also have real (real) and imaginary (imag) components. | a = 1
a.__abs__()
b = -2
b.__abs__() | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
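Two of the attributes dir() listed can be exercised directly: the real and imag components, and the `__add__` method that backs the + operator.

```python
a = 3
print(a.real, a.imag)   # an int exposes complex-number parts: 3 0
print(a.__add__(4))     # the method behind the + operator: 7
print(a + 4)            # same thing, written the usual way: 7
```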
In the preceding cells, we have directly called the `__abs__` method on two objects. However, it is almost never a good idea to directly call these double underscore (aka "dunder") methods. It is much safer to get the absolute value by just using the abs() function, as follows: | a = 1
abs(a)
b = -2
abs(b) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
The help() and dir() functions are applied to some of the data types from chapter 2 in the following cells. You should try examining every data type you can think of. | a = ' '
help(a)
dir(a)
a = 3.14
help(a)
dir(a) | Help on float object:
class float(object)
| float(x) -> floating point number
|
| Convert a string or number to a floating point number, if possible.
|
| Methods defined here:
|
| __abs__(self, /)
| abs(self)
|
| __add__(self, value, /)
| Return self+value.
|
| __bool__(self, /)
... | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Functions are objects too, and we can use the dir() function on them to learn more. | import math
dir(math.sin) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
We can access the docstring of a function with the `__doc__` attribute: | import math
math.sin.__doc__ | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
The docstring is also an object, check this out: | dir(math.sin.__doc__) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Classes

Classes serve many roles. Classes define the collection of attributes that describe a type of object. They also describe how to create that type of object and may inherit attributes from other classes hierarchically. The following cells are examples of classes we would create during a particle physics simulatio... | class Particle(object): # Begins the class definition and names the class particle
"""A particle is a constituent unit of the universe.""" # A docstring for the class
# class body definition here | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Class variables

Many attributes should be included in the class definition. The first of these that we will introduce is the class variable. Class variables are data that are applicable to every object of the class. For example, in our `Particle` class, every object should be able to say "I am a particle." We can then s... | # particle.py
class Particle(object):
"""A particle is a constituent unit of the universe."""
roar = "I am a particle!"
import os
import sys
sys.path.insert(0, os.path.abspath('obj')) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
We can access class variables even without declaring an instance of the class, as in the following code: | # import the particle module
import particle as p
print(p.Particle.roar)
# import the particle module
import particle as p
higgs = p.Particle()
print(higgs.roar) | I am a particle!
| CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Instance Variables

Some variables should only apply to certain objects in the class. These are called "instance variables." For example, every particle should have its own position vector. A rather clumsy way to assign position vectors to all of the objects in the particle class is demonstrated in the following cell: | # import the Particle class from the particle module
from particle import Particle
# create an empty list to hold observed particle data
obs = []
# append the first particle
obs.append(Particle())
# assign its position
obs[0].r = {'x': 100.0, 'y': 38.0, 'z': -42.0}
# append the second particle
obs.append(Particle()... | {'x': 100.0, 'z': -42.0, 'y': 38.0}
{'x': 0.01, 'z': 32.0, 'y': 99.0}
| CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Constructors

Constructors allow us to associate data attributes with a specific instance of a class. Whenever an object is created as part of a class, the constructor function, which is always named `__init__()`, is executed. The next example defines a constructor function that assigns values of charge, mass, and posi... | # particle.py
class Particle(object):
"""A particle is a constituent unit of the universe.
Attributes
----------
c : charge in units of [e]
m : mass in units of [kg]
r : position in units of [meters]
"""
roar = "I am a particle!"
def __init__(self):
"""Initializes the ... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
The constructor can be made more powerful by passing arguments to it, as in this example: | # particle.py
class Particle(object):
"""A particle is a constituent unit of the universe.
Attributes
----------
c : charge in units of [e]
m : mass in units of [kg]
r : position in units of [meters]
"""
roar = "I am a particle!"
def __init__(self, charge, mass, position): # S... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
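The parameterized constructor is also truncated above; assuming its body stores the three arguments as `self.c`, `self.m`, and `self.r`, creating a specific particle becomes a one-liner:

```python
class Particle(object):
    """A particle is a constituent unit of the universe."""
    def __init__(self, charge, mass, position):
        # assumed body: store the constructor arguments on the instance
        self.c = charge
        self.m = mass
        self.r = position

# an electron-like particle: charge -1 e, mass ~9.109e-31 kg
e = Particle(-1, 9.109e-31, {'x': 0.0, 'y': 0.0, 'z': 0.0})
print(e.c, e.m)   # -1 9.109e-31
```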
Methods

We have discussed methods a bit without formally introducing them. Methods are a special type of function that is associated with a class definition. Methods may be used to operate on data contained by the object, as in these examples: | # particle.py
class Particle(object):
"""A particle is a constituent unit of the universe.
Attributes
----------
c : charge in units of [e]
m : mass in units of [kg]
r : position in units of [meters]
"""
roar = "I am a particle!"
def __init__(self, charge, mass, position):
... | I am a particle!
My mass is: 9.10938356e-31
My charge is: -1
My x position is: 11
My y position is: 1
My z position is: 53
| CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
The next example creates a `flip` method that changes a quark's flavor while maintaining symmetry: | def flip(self):
if self.flavor == "up":
self.flavor = "down"
elif self.flavor == "down":
self.flavor = "up"
elif self.flavor == "top":
self.flavor = "bottom"
elif self.flavor == "bottom":
self.flavor = "top"
elif self.flavor == "strange":
self.flavor = "charm"... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Here is our method in action, changing the attributes of an object after it is called: | # import the class
from quark import Quark
# create a Quark object
t = Quark()
# set the flavor
t.flavor = "top"
# flip the flavor
t.flip()
# print the flavor
print(t.flavor) | bottom
| CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Another powerful method that we could add to the `Particle` class uses the Heisenberg uncertainty principle to determine the minimum uncertainty in position, given an uncertainty in momentum: | from scipy import constants
class Particle(object):
"""A particle is a constituent unit of the universe."""
# ... other parts of the class definition ...
def delta_x_min(self, delta_p_x):
hbar = constants.hbar
delx_min = hbar / (2.0 * delta_p_x)
return delx_min | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
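To make the numbers concrete, here is a quick check of the same formula with a hard-coded value of ħ (so the sketch runs even without SciPy) and an assumed momentum uncertainty:

```python
hbar = 1.0545718e-34  # reduced Planck constant in J*s

def delta_x_min(delta_p_x):
    # minimum position uncertainty from the Heisenberg relation
    return hbar / (2.0 * delta_p_x)

dx = delta_x_min(1.0e-24)  # assumed momentum uncertainty in kg*m/s
print(dx)                  # about 5.27e-11 m, roughly the size of an atom
```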
Static Methods

We can add a method that behaves the same for every instance. Consider a function that lists the possible quark flavors. This list does not depend on the flavor of any particular quark. Such a function could look like the following: | def possible_flavors():
return ["up", "down", "top", "bottom", "strange", "charm"] | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
We can add this function as a method of a class by using Python's built-in `@staticmethod` decorator, so that the method does not receive the instance (`self`) as an argument and behaves the same for all objects in the class. | from scipy import constants
def possible_flavors():
return["up","down","top","bottom","strange","charm"]
class Particle(object):
"""A particle is a constituent unit of the universe."""
# ... other parts of the class definition ...
def delta_x_min(self, delta_p_x):
hbar = constants.hbar
... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
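The cell above is cut off before the decorator appears; a self-contained sketch of the idiom it describes:

```python
class Quark(object):
    @staticmethod
    def possible_flavors():
        # no self parameter: the result is identical for every Quark
        return ["up", "down", "top", "bottom", "strange", "charm"]

print(Quark.possible_flavors()[0])                       # up
q = Quark()
print(q.possible_flavors() == Quark.possible_flavors())  # True
```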
Duck Typing

Duck typing was introduced in Chapter 3. We will explore it in more detail here. Duck typing refers to Python's tactic of only checking the attributes of an object that are relevant to its use at the time of its use. Thus, any particle with a valid charge attribute `c` may be used identically, as in this example: | def total_charge(particles):
tot = 0
for p in particles:
tot += p.c
return tot
p = a_p   # a_p: a proton-like Particle created in an earlier (elided) cell
e1 = a_e  # a_e: an electron-like Particle created earlier
e2 = a_e
particles = [p, e1, e2]
total_charge(particles) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Sometimes duck typing is undesirable. The isinstance() function can be used with an if statement to ensure that only objects of a certain type are processed by a function: | def total_charge(collection):
tot = 0
for p in collection:
if isinstance(p, Particle):
tot += p.c
return tot | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
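A quick demonstration of the guarded version, using a minimal stand-in `Particle` with just a charge attribute: anything that is not a `Particle` is simply skipped.

```python
class Particle(object):
    def __init__(self, charge):
        self.c = charge  # minimal stand-in with only a charge

def total_charge(collection):
    tot = 0
    for p in collection:
        if isinstance(p, Particle):
            tot += p.c
    return tot

mixed = [Particle(1), Particle(-1), Particle(-1), "not a particle"]
print(total_charge(mixed))   # -1 (the string is ignored)
```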
Polymorphism

Polymorphism occurs when a class inherits the attributes of a parent class. Generally, what works for a parent class should also work for the subclass, but the subclass should be able to execute its own specialized behavior as well. Consider a subclass of `Particle` that describes elementary particles suc... | # elementary.py
class ElementaryParticle(Particle):
def __init__(self, spin):
self.s = spin
self.is_fermion = bool(spin % 1.0)
self.is_boson = not self.is_fermion | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
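The spin test can be checked directly (with a bare stand-in for the `Particle` parent so the sketch is self-contained): a half-integer spin makes `spin % 1.0` truthy, so the object is a fermion.

```python
class Particle(object):
    pass  # stand-in for the parent class defined earlier

class ElementaryParticle(Particle):
    def __init__(self, spin):
        self.s = spin
        self.is_fermion = bool(spin % 1.0)
        self.is_boson = not self.is_fermion

electron = ElementaryParticle(0.5)  # half-integer spin -> fermion
photon = ElementaryParticle(1.0)    # integer spin -> boson
print(electron.is_fermion, photon.is_boson)   # True True
```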
It seems that `ElementaryParticle` takes `Particle` as an argument. This syntax establishes `Particle` as the parent class to `ElementaryParticle`, following the inheritance diagram in figure 6-2 on page 136. Another subclass of `Particle` could be `CompositeParticle`. This class may have all the properties of the `Par... | # composite.py
class CompositeParticle(Particle):
def __init__(self, parts):
self.constituents = parts | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Subclasses

Objects in the `ElementaryParticle` class and in the `CompositeParticle` class __are__ in the `Particle` class because those classes inherit from `Particle`. Inheritance has thus allowed us to reuse code without any rewriting. However, the behavior from `Particle` can be overridden, as in the following ... | # elementary.py
class ElementaryParticle(Particle):
roar = "I am an Elementary Particle!"
def __init__(self, spin):
self.s = spin
self.is_fermion = bool(spin % 1.0)
self.is_boson = not self.is_fermion
from elementary import ElementaryParticle
spin = 1.5
p = ElementaryParticle(spin)
p.... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Superclasses

While `ElementaryParticle` is a subclass of `Particle`, it can also be a superclass to other classes, such as `Quark`, which is defined in the next example: | import randphys as rp
class Quark(ElementaryParticle):
def __init__(self):
phys = rp.RandomPhysics()
self.color = phys.color()
self.charge = phys.charge()
self.color_charge = phys.color_charge()
self.spin = phys.spin()
self.flavor = phys.flavor() | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Decorators and Metaclasses

_Metaprogramming_ is when the definition of a class or a function is specified outside of that function or class. This practice is more common in other languages such as C++ than it is in Python. Most of our metaprogramming needs are accomplished with decorators. We can define our own decorat... | def add_is_particle(cls): # Defines the class decorator, which takes one argument that is the class itself.
cls.is_particle = True # Modifies the class by adding the is_particle attribute.
return cls # Returns the class
@add_is_particle # Applies the decorator to the class. This line uses the same syntax as a... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
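Since the cell is cut off right after the decorator line, here is the complete pattern end to end:

```python
def add_is_particle(cls):
    cls.is_particle = True   # attach a new class attribute
    return cls

@add_is_particle
class Particle(object):
    """A particle is a constituent unit of the universe."""

print(Particle.is_particle)     # True
print(Particle().is_particle)   # instances see it too: True
```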
We can even add methods to a class, as follows: | from math import sqrt
def add_distance(cls):
def distance(self, other):
d2 = 0.0
for axis in ['x', 'y', 'z']:
d2 += (self.r[axis] - other.r[axis])**2
d = sqrt(d2)
return d
cls.distance = distance
return cls
@add_distance
class Particle(object):
"""A parti... | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Metaclasses also exist, for when decorators are not enough. Learn about them from these examples, if you dare: | type(type)
class IsParticle(type):
pass
class Particle(metaclass=IsParticle):
"""A particle is a constituent unit of the universe."""
# ... other parts of the class definition ...
isinstance(Particle, IsParticle)
p = Particle()
isinstance(p, IsParticle) | _____no_output_____ | CC0-1.0 | ch06-classes-objects.ipynb | lkissin2/examples |
Lab Exercise 9 Solutions (Total Marks: 20)

Deadline for submission: two weeks from your lab session (e.g., Tuesday/Friday $\rightarrow$ 11:59 pm of Tuesday/Friday two weeks later). Rename your file as AXXXXXXXY_LabEx9.ipynb, where AXXXXXXXY is your matric number.

**Question 1** Write a program that uses the Monte Carlo ... | %reset
import numpy as np
import matplotlib.pyplot as plt
nPoints = eval(input("Enter the number of points to be used in the simulation: "))
# Ideal to use more points, but not way too many. 100,000 points is ideal for a close estimation.
# Prompt for triangle's base
p = eval(input("Enter the base of the triangle in... | Once deleted, variables cannot be recovered. Proceed (y/[n])? y
Enter the number of points to be used in the simulation: 10000
Enter the base of the triangle in the simulation: 0.4
Enter the height of the triangle in the simulation: 0.7
Area of the triangle from theory = 0.13999999999999999
Area of the triangle from si... | MIT | .ipynb_checkpoints/LabEx9_Solutions-checkpoint.ipynb | clarence-ong/COS2000-Solutions |
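The solution cell above is truncated; a compact sketch of the same Monte Carlo estimate (uniform points in the bounding rectangle, counting those below the hypotenuse of the right triangle):

```python
import random

def mc_triangle_area(base, height, n_points):
    hits = 0
    for _ in range(n_points):
        x = random.uniform(0.0, base)
        y = random.uniform(0.0, height)
        # point is inside the right triangle if below y = height*(1 - x/base)
        if y <= height * (1.0 - x / base):
            hits += 1
    return (hits / n_points) * base * height   # fraction of box area

random.seed(0)
print(0.5 * 0.4 * 0.7)                      # theory: 0.14
print(mc_triangle_area(0.4, 0.7, 100000))   # simulation: close to 0.14
```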
Question 2 (10 marks) | %reset
import matplotlib.pyplot as plt
import numpy as np
N = eval(input("Enter the number of attempts in the Jackpot: "))
outcome = [] # List to store the outcome values
amount = [] # List to store the net amount won or lost after each attempt
netamount = 0 # Initialise the net amount
for n in range (N):... | Once deleted, variables cannot be recovered. Proceed (y/[n])?
Nothing done.
Enter the number of attempts in the Jackpot: 10000
[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12, -13, -14, -15, -16, -17, -18, -19, -20, -21, -22, -23, -24, -25, -26, -27, -28, -29, -30, -31, -32, -33, -34, -35, -36, -37, -38, -39, -40, ... | MIT | .ipynb_checkpoints/LabEx9_Solutions-checkpoint.ipynb | clarence-ong/COS2000-Solutions |
Dataset acquisition (Online Retail Dataset)

Online Retail Dataset: a transactional dataset containing the records generated between December 1, 2010 and September 12, 2011 by a UK-based registered online retailer. It contains data such as the following.
* InvoiceNo: invoice number. A 6-digit integer uniquely assigned to each transaction. If this code starts with the letter "c", it indicates a cancellation.
* StockCode: product code. A 5-digit integer uniquely assigned to each product.
* Description: product name.
* Quantity: the quantity of each product per transaction.
* InvoiceDate: the invoice date and time... | # Fetch the dataset for the PyCaret tutorial
# See <https://pycaret.org/get-data/> for details
from pycaret.datasets import get_data
dataset = get_data('france')
dataset.to_csv('./dataset.csv')
dataset.info()
print('データ :' + str(dataset.shape) + ' ' + str(dataset.index)) | データ :(8557, 8) RangeIndex(start=0, stop=8557, step=1)
| MIT | 06.Association-Rule-Mining-Sample/06.Association-Rule-Mining-Sample.ipynb | Kazuhito00/PyCaret-Learn |
Data setup in PyCaret | # Imports for association rule mining
from pycaret.arules import *
# Specifying session_id fixes the random seed
# When setup completes, information about the data and the preprocessing pipeline is displayed
# See <https://pycaret.org/setup/> for details
exp = setup(data=dataset, transaction_id='InvoiceNo', item_id='Description', session_id=42)
print(exp) | ( InvoiceNo StockCode Description Quantity \
0 536370 22728 ALARM CLOCK BAKELIKE PINK 24
1 536370 22727 ALARM CLOCK BAKELIKE RED 24
2 536370 22726 ALARM CLOCK BAKELIKE GREEN 12
3 536370 21724 PANDA AND... | MIT | 06.Association-Rule-Mining-Sample/06.Association-Rule-Mining-Sample.ipynb | Kazuhito00/PyCaret-Learn |
Model creation

For association rule mining, create_model() has no required parameters and offers the following options.
* metric: the metric used to evaluate whether a rule is significant. The default is 'confidence'; the other available options are 'support', 'lift', 'leverage', and 'conviction'.
* threshold: the minimum threshold on the metric used to decide whether a candidate rule is kept. The default is 0.5.
* min_support: the minimum support value of the itemsets returned (0.0 ~ 0.1), computed as transactions_where_item(s)_occur / total_transa... | # Specify the model to create via the arguments
arule_model = create_model()
print(arule_model)
arule_model.head() | _____no_output_____ | MIT | 06.Association-Rule-Mining-Sample/06.Association-Rule-Mining-Sample.ipynb | Kazuhito00/PyCaret-Learn |
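To make the metrics above concrete, here is a tiny hand computation of support, confidence, and lift for a single rule on toy transactions (independent of PyCaret; the item names are made up):

```python
# toy transactions: the set of items on each invoice
transactions = [
    {"clock", "lantern"},
    {"clock", "lantern", "bag"},
    {"clock"},
    {"lantern"},
    {"bag"},
]
n = len(transactions)

def support(itemset):
    # fraction of transactions containing every item in itemset
    return sum(itemset <= t for t in transactions) / n

# rule: {clock} -> {lantern}
conf = support({"clock", "lantern"}) / support({"clock"})
lift = conf / support({"lantern"})
print(round(support({"clock", "lantern"}), 2), round(conf, 2), round(lift, 2))
# 0.4 0.67 1.11
```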