| repo_name | path | license | cells | types |
|---|---|---|---|---|
kazzz24/deep-learning
|
tensorboard/.ipynb_checkpoints/Anna KaRNNa Summaries-checkpoint.ipynb
|
mit
|
[
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based on Andrej Karpathy's post on RNNs and implementation in Torch. Some additional information came from r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n    \"\"\" \n    Split character data into training and validation sets, inputs and targets for each set.\n    \n    Arguments\n    ---------\n    chars: character array\n    batch_size: Number of sequences in each batch\n    num_steps: Number of sequence steps to keep in the input and pass to the network\n    split_frac: Fraction of batches to keep in the training set\n    \n    \n    Returns train_x, train_y, val_x, val_y\n    \"\"\"\n    \n    slice_size = batch_size * num_steps\n    n_batches = int(len(chars) / slice_size)\n    \n    # Drop the last few characters to make only full batches\n    x = chars[: n_batches*slice_size]\n    y = chars[1: n_batches*slice_size + 1]\n    \n    # Split the data into batch_size slices, then stack them into a 2D matrix \n    x = np.stack(np.split(x, batch_size))\n    y = np.stack(np.split(y, batch_size))\n    \n    # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n    \n    # Split into training and validation sets, keep the first split_frac batches for training\n    split_idx = int(n_batches*split_frac)\n    train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n    val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n    \n    return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]",
"I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will carry over from batch to batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_cells\"):\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN outputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = 
tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n tf.summary.histogram('softmax_w', softmax_w)\n tf.summary.histogram('softmax_b', softmax_b)\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n tf.summary.histogram('predictions', preds)\n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n tf.summary.scalar('cost', cost)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n merged = tf.summary.merge_all()\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer', 'merged']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"Training\nTime for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.",
"!mkdir -p checkpoints/anna\n\nepochs = 10\nsave_every_n = 100\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)\n test_writer = tf.summary.FileWriter('./logs/2/test')\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, \n model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n train_writer.add_summary(summary, iteration)\n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n summary, batch_loss, new_state = sess.run([model.merged, model.cost, \n model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n \n test_writer.add_summary(summary, 
iteration)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n #saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')",
"Sampling\nNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n    p = np.squeeze(preds)\n    p[np.argsort(p)[:-top_n]] = 0\n    p = p / np.sum(p)\n    c = np.random.choice(vocab_size, 1, p=p)[0]\n    return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n    samples = [c for c in prime]\n    model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n    saver = tf.train.Saver()\n    with tf.Session() as sess:\n        saver.restore(sess, checkpoint)\n        new_state = sess.run(model.initial_state)\n        for c in prime:\n            x = np.zeros((1, 1))\n            x[0,0] = vocab_to_int[c]\n            feed = {model.inputs: x,\n                    model.keep_prob: 1.,\n                    model.initial_state: new_state}\n            preds, new_state = sess.run([model.preds, model.final_state], \n                                         feed_dict=feed)\n\n        c = pick_top_n(preds, vocab_size)\n        samples.append(int_to_vocab[c])\n\n        for i in range(n_samples):\n            x[0,0] = c\n            feed = {model.inputs: x,\n                    model.keep_prob: 1.,\n                    model.initial_state: new_state}\n            preds, new_state = sess.run([model.preds, model.final_state], \n                                         feed_dict=feed)\n\n            c = pick_top_n(preds, vocab_size)\n            samples.append(int_to_vocab[c])\n        \n    return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
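The split_data/get_batch pair in the notebook above can be exercised without TensorFlow; below is a minimal NumPy-only sketch of the same batching scheme, with a toy integer sequence standing in for the encoded text (the toy data and sizes are made up for illustration):

```python
import numpy as np

def split_data(chars, batch_size, num_steps, split_frac=0.9):
    # Keep only enough characters to fill complete batch_size x num_steps slices
    slice_size = batch_size * num_steps
    n_batches = len(chars) // slice_size
    x = chars[:n_batches * slice_size]
    y = chars[1:n_batches * slice_size + 1]   # targets = inputs shifted one character
    x = np.stack(np.split(x, batch_size))     # shape: batch_size x n_batches*num_steps
    y = np.stack(np.split(y, batch_size))
    split_idx = int(n_batches * split_frac)
    return (x[:, :split_idx * num_steps], y[:, :split_idx * num_steps],
            x[:, split_idx * num_steps:], y[:, split_idx * num_steps:])

def get_batch(arrs, num_steps):
    # Slide a batch_size x num_steps window across the concatenated rows
    _, slice_size = arrs[0].shape
    for b in range(slice_size // num_steps):
        yield [a[:, b * num_steps:(b + 1) * num_steps] for a in arrs]

# Toy "text": 10001 integers so the shifted target slice is still full length
chars = np.arange(10001, dtype=np.int32)
train_x, train_y, val_x, val_y = split_data(chars, batch_size=10, num_steps=50)
```

With these sizes, 20 slices of 500 characters fit, 18 of which (90%) land in the training set, so `train_x` is 10 x 900 and each yielded batch is 10 x 50.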
maartenbreddels/ipyvolume
|
notebooks/demo-0.5.ipynb
|
mit
|
[
"import ipyvolume as ipv",
"We will render a low resolution scan of a head, which will display quite quickly (since the data size is small). If we want to see a higher resolution, we can zoom in.",
"fig = ipv.figure()\nvol_head = ipv.examples.head(max_shape=128);\nvol_head.ray_steps = 800",
"Zoom\nZoom in by clicking the magnifying icon, or keep the alt/option key pressed. After zooming in, the higher resolution version of the cutout will be displayed.",
"ds = ipv.datasets.aquariusA2.fetch().data",
"Multivolume rendering\nSince version 0.5, ipyvolume supports multivolume rendering, so we can render two volumetric datasets at the same time.",
"vol_data = ipv.volshow(ds, extent=vol_head.extent, max_shape=128)\n\n# v0.5 also supports maximum intensity \nvol_data.rendering_method = 'MAX_INTENSITY'"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
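The `max_shape=128` argument used in the notebook above caps the resolution of the displayed volume. ipyvolume's actual resampling is internal to the library; as a rough illustration of the idea only, a stride-based downsample might look like this (`downsample` and the random test volume are hypothetical, not ipyvolume API):

```python
import numpy as np

def downsample(volume, max_shape=128):
    # Reduce each axis to at most max_shape voxels by integer striding,
    # a crude stand-in for the resampling a max_shape-style cap implies.
    steps = [max(1, int(np.ceil(s / max_shape))) for s in volume.shape]
    return volume[::steps[0], ::steps[1], ::steps[2]]

vol = np.random.rand(256, 300, 100)   # fake head-scan-sized volume
small = downsample(vol, max_shape=128)
```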
timothydmorton/isochrones
|
notebooks/batch-demo.ipynb
|
mit
|
[
"Simulate/fit/analyze in batch mode\nGenerate observed binary properties",
"import numpy as np\n\nfrom isochrones import get_ichrone\n\nbands = ['J', 'H', 'K', 'G', 'BP', 'RP']\nmist = get_ichrone('mist', bands=bands)\n\nfrom itertools import product\n\nprimary_masses = [0.8, 1.0]\nmass_ratios = [0.5, 0.9]\nfeh_grid = [-0.25, 0.0]\nage = 9.7\ndistance = 500\nAV = 0.\n\nm1, m2, feh, name = zip(*[(m, q*m, f, f'{m:.2f}_{q*m:0.2f}_{f:0.2f}') \n for m, q, f \n in product(primary_masses, mass_ratios, feh_grid)])\n\ndf = mist.generate_binary(m1, m2, age, feh, distance=distance, AV=AV, accurate=True)\n\n# add uncertainties for each band\n\nuncs = {'J': 0.02, 'H': 0.02, 'K':0.02, 'G': 0.002, 'BP': 0.002, 'RP':0.002}\n\nfor b in bands:\n df[f'{b}_mag_unc'] = uncs[b]\n \n# Add parallax & uncertainty\ndf['parallax'] = 1000/distance\ndf['parallax_unc'] = 0.02\ndf.index = name",
"Use a StarCatalog to organize data",
"from isochrones.catalog import StarCatalog\nfrom isochrones.priors import FlatPrior\n\ncatalog = StarCatalog(df, bands=bands, props=['parallax'])\ncatalog.set_prior(AV=FlatPrior((0, 0.0001)), age=FlatPrior((8.5, 10)))",
"Fit models\nHere is a snippet to fit all the models (using the convenient .iter_models() API); this could easily be adapted into a cluster script (e.g., using schwimmbad) to run thousands of fits.",
"from multiprocessing import Pool\n\ndef fit_model(mod):\n print(mod.mnest_basename)\n mod.fit(verbose=True)\n return mod.derived_samples\n\npool = Pool(processes=8) # e.g.\n\nsamples = pool.map(fit_model, catalog.iter_models(N=2))",
"Analyze samples\nA StarCatalog could probably have some convenience methods for some of this stuff, but this will do for now.",
"cols = ['mass_0', 'mass_1', 'age', 'feh', 'AV']\nqs = [0.05, 0.16, 0.5, 0.84, 0.95]\nfor name, samps in zip(catalog.df.index, samples):\n print(name)\n print(samps[cols].quantile(qs))\n\nfrom corner import corner\n\ncorner(samples[-1][['mass_0', 'mass_1', 'age', 'feh', 'distance']]);\n\ncorner(samples[-1][['J_mag', 'K_mag', 'G_mag', 'BP_mag', 'RP_mag']]);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
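The grid of simulated binaries in the notebook above is built with `itertools.product`; that construction can be checked in isolation, reusing the notebook's parameter values but none of the isochrones machinery:

```python
from itertools import product

primary_masses = [0.8, 1.0]
mass_ratios = [0.5, 0.9]
feh_grid = [-0.25, 0.0]

# One simulated binary per (m1, q, feh) combination; the secondary mass is
# m2 = q*m1, and the name string encodes the parameters for later lookup.
m1, m2, feh, name = zip(*[(m, q * m, f, f'{m:.2f}_{q*m:0.2f}_{f:0.2f}')
                          for m, q, f in product(primary_masses, mass_ratios, feh_grid)])
```

`product` iterates the rightmost list fastest, so the 2 x 2 x 2 grid yields eight rows ordered first by primary mass, then mass ratio, then metallicity.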
cdawei/flickr-photo
|
src/dataset_Melb.ipynb
|
gpl-2.0
|
[
"Description of Melbourne Dataset\n\nLoad Data\nCompute POI Statistics DataFrame\nCompute POI Visit Statistics\nVisualise & Save POIs\nPOI vs Photo\nPOIs with NO Visits\nPhoto Clusters without Corresponding POIs\nCompute Trajectories\nRecommendation via POI Ranking\nPOI Features for Ranking\nRanking POIs using rankSVM\nFactorised Transition Probabilities\nPOI Features for Factorisation\nTransition Matrix between POIs\nVisualise Transition Matrix\nVisualise Transitions of Specific POIs\nRecommendation Results Comparison & Visualisation\nChoose an Example Trajectory\nRecommendation by POI Popularity\nRecommendation by POI Rankings\nRecommendation by Transition Probabilities\nDisclaimer\nProblems of Trajectory Construction\nExample of Terrible Trajectories\nLimitations of Google Maps and Nationalmaps\n\n<a id='sec1'></a>",
"% matplotlib inline\n\nimport os, sys, time, pickle, tempfile\nimport math, random, itertools\nimport pandas as pd\nimport numpy as np\nfrom joblib import Parallel, delayed\nfrom scipy.misc import logsumexp\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.cluster import KMeans\nfrom scipy.linalg import kron\n\nfrom fastkml import kml, styles\nfrom shapely.geometry import Point, LineString\n\nimport pulp\n\nRANK_C = 10 # regularisation parameter for rankSVM\nBIN_CLUSTER = 5 # Number of bins/clusters for discritizing POI features\nranksvm_dir = '$HOME/work/ranksvm'\n\ndata_dir = '../data'\nfpoi = os.path.join(data_dir, 'poi-Melb-all.csv')\nfvisits = os.path.join(data_dir, 'userVisits-Melb.csv')\nfphotos = os.path.join(data_dir, 'Melb_photos_bigbox.csv')",
"1. Load Data\nLoad POI/Trajectory data from file.\n1.1 Load POI Data",
"poi_all = pd.read_csv(fpoi)\npoi_all.set_index('poiID', inplace=True)\n#poi_all.head()\n\npoi_df = poi_all.copy()\npoi_df.drop('poiURL', axis=1, inplace=True)\npoi_df.rename(columns={'poiName':'Name', 'poiTheme':'Category', 'poiLat':'Latitude', 'poiLon':'Longitude'}, \\\n inplace=True)\npoi_df.head()",
"1.2 Load Trajectory Data",
"visits = pd.read_csv(fvisits, sep=';')\n#visits.head()\n\nvisits.drop(['poiTheme', 'poiFreq'], axis=1, inplace=True)\nvisits.rename(columns={'seqID':'trajID'}, inplace=True)\nvisits.head()",
"2. Compute POI Statistics DataFrame\n2.1 Compute POI Visit Statistics\nCompute the number of photos associated with each POI.",
"poi_photo = visits[['photoID', 'poiID']].copy().groupby('poiID').agg(np.size)\npoi_photo.rename(columns={'photoID':'#photos'}, inplace=True)\npoi_photo.head()",
"Compute the visit duration at each POI.",
"poi_duration = visits[['dateTaken', 'poiID', 'trajID']].copy().groupby(['trajID', 'poiID']).agg([np.min, np.max])\npoi_duration.columns = poi_duration.columns.droplevel()\npoi_duration.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)\npoi_duration.reset_index(inplace=True)\npoi_duration['poiDuration'] = poi_duration['departureTime'] - poi_duration['arrivalTime']\npoi_duration.head()",
"Filter out POI visits with zero duration; otherwise many median durations will be zero.",
"poi_duration = poi_duration[poi_duration['poiDuration'] > 0]",
"Compute the median and summation of POI visit duration.",
"poi_duration_stats = poi_duration[['poiID', 'poiDuration']].copy().groupby('poiID').agg([np.median, np.sum])\npoi_duration_stats.columns = poi_duration_stats.columns.droplevel()\npoi_duration_stats.rename(columns={'median':'medianDuration(sec)', 'sum':'totalDuration(sec)'}, inplace=True)\npoi_duration_stats.head()",
"Compute the number of visits at each POI by all users and by distinct users.\nNOTE: we assume NO loops/subtours appear in trajectories, \nso a specific user would visit a certain POI in a specific trajectory at most once.",
"poi_visits = visits[['userID', 'trajID', 'poiID', 'photoID']].copy().groupby(['userID','trajID','poiID']).agg(np.size)\npoi_visits.reset_index(inplace=True)\npoi_visits.rename(columns={'photoID':'#photosAtPOIInTraj'}, inplace=True)\npoi_visits.head()\n\npoi_visits_stats = poi_visits[['userID', 'poiID']].copy().groupby('poiID').agg([pd.Series.nunique, np.size])\npoi_visits_stats.columns = poi_visits_stats.columns.droplevel()\npoi_visits_stats.rename(columns={'nunique':'#distinctUsers', 'size':'#visits'}, inplace=True)\npoi_visits_stats.head()",
"Copy visit statistics to POI dataframe.",
"poi_df['#photos'] = 0\npoi_df['#visits'] = 0\npoi_df['#distinctUsers'] = 0\npoi_df['medianDuration(sec)'] = 0.0\npoi_df['totalDuration(sec)'] = 0.0\n\npoi_df.loc[poi_photo.index, '#photos'] = poi_photo['#photos']\npoi_df.loc[poi_visits_stats.index, '#visits'] = poi_visits_stats['#visits']\npoi_df.loc[poi_visits_stats.index, '#distinctUsers'] = poi_visits_stats['#distinctUsers']\npoi_df.loc[poi_duration_stats.index, 'medianDuration(sec)'] = poi_duration_stats['medianDuration(sec)']\npoi_df.loc[poi_duration_stats.index, 'totalDuration(sec)'] = poi_duration_stats['totalDuration(sec)']\npoi_df.head()",
"2.2 Visualise & Save POIs\nVisualise POI on map: Simply import the above CSV file in Google Maps (Google Drive -> NEW -> More -> Google My Maps), an example of this POI dataframe shown on map is available here.\nTo sort POIs according to one attribute (e.g. #photos) in Google Maps, click the option icon at the upper right corner of the layer, then click \"Open data table\", a data table will pop-up, click the column of interest (e.g. #photos), then click \"Sort A->Z\" to sort POIs according to that attribute (e.g. #photos) in ascending order.\nSave POI dataframe to CSV file.",
"#poi_file = os.path.join(data_dir, 'poi_df.csv')\n#poi_df.to_csv(poi_file, index=True)",
"3. POI vs Photo\n3.1 POIs with NO Visits",
"poi_df[poi_df['#visits'] < 1]",
"3.2 Photo Clusters without Corresponding POIs\nTODO: A map with a cluster of photos at some place in Melbourne, given that NO geo-coordinates were provided in its Wikipedia page.\nA popular place needs to be provided!\n4. Compute Trajectories\n4.1 Trajectories Data\nCompute trajectories using POI visiting records, assuming NO loops/subtours in trajectories.",
"traj_all = visits[['userID', 'trajID', 'poiID', 'dateTaken']].copy().groupby(['userID', 'trajID', 'poiID'])\\\n .agg([np.min, np.max, np.size])\ntraj_all.columns = traj_all.columns.droplevel()\ntraj_all.reset_index(inplace=True)\ntraj_all.rename(columns={'amin':'startTime', 'amax':'endTime', 'size':'#photo'}, inplace=True)\n\ntraj_len = traj_all[['userID', 'trajID', 'poiID']].copy().groupby(['userID', 'trajID']).agg(np.size)\ntraj_len.reset_index(inplace=True)\ntraj_len.rename(columns={'poiID':'trajLen'}, inplace=True)\n\ntraj_all = pd.merge(traj_all, traj_len, on=['userID', 'trajID'])\ntraj_all['poiDuration'] = traj_all['endTime'] - traj_all['startTime']\ntraj_all.head(10)",
"4.2 Utility Functions\nExtract trajectory, i.e., a list of POIs, considering loops/subtours.",
"def extract_traj_withloop(tid, visits):\n \"\"\"Compute trajectories info, taking care of trajectories that contain sub-tours\"\"\"\n traj_df = visits[visits['trajID'] == tid].copy()\n traj_df.sort_values(by='dateTaken', ascending=True, inplace=True)\n traj = []\n for ix in traj_df.index:\n poi = traj_df.loc[ix, 'poiID']\n if len(traj) == 0:\n traj.append(poi)\n else:\n if poi != traj[-1]:\n traj.append(poi)\n return traj",
"Extract trajectory, i.e., a list of POIs, assuming NO loops/subtours exist.",
"def extract_traj(tid, traj_all):\n traj = traj_all[traj_all['trajID'] == tid].copy()\n traj.sort_values(by=['startTime'], ascending=True, inplace=True)\n return traj['poiID'].tolist()",
"Count the number of trajectories with loops/subtours.",
"loopcnt = 0\nfor tid_ in visits['trajID'].unique():\n traj_ = extract_traj_withloop(tid_, visits)\n if len(traj_) != len(set(traj_)):\n #print(traj_)\n loopcnt += 1\nprint('Number of trajectories with loops/subtours:', loopcnt)",
"Compute POI properties, e.g., popularity, total number of visit, average visit duration.",
"def calc_poi_info(trajid_list, traj_all, poi_all):\n assert(len(trajid_list) > 0)\n # to allow duplicated trajid\n poi_info = traj_all[traj_all['trajID'] == trajid_list[0]][['poiID', 'poiDuration']].copy() \n for i in range(1, len(trajid_list)):\n traj = traj_all[traj_all['trajID'] == trajid_list[i]][['poiID', 'poiDuration']]\n poi_info = poi_info.append(traj, ignore_index=True)\n \n poi_info = poi_info.groupby('poiID').agg([np.mean, np.size])\n poi_info.columns = poi_info.columns.droplevel()\n poi_info.reset_index(inplace=True)\n poi_info.rename(columns={'mean':'avgDuration', 'size':'nVisit'}, inplace=True)\n poi_info.set_index('poiID', inplace=True) \n poi_info['poiCat'] = poi_all.loc[poi_info.index, 'poiTheme']\n poi_info['poiLon'] = poi_all.loc[poi_info.index, 'poiLon']\n poi_info['poiLat'] = poi_all.loc[poi_info.index, 'poiLat']\n \n # POI popularity: the number of distinct users that visited the POI\n pop_df = traj_all[traj_all['trajID'].isin(trajid_list)][['poiID', 'userID']].copy()\n pop_df = pop_df.groupby('poiID').agg(pd.Series.nunique)\n pop_df.rename(columns={'userID':'nunique'}, inplace=True)\n poi_info['popularity'] = pop_df.loc[poi_info.index, 'nunique']\n \n return poi_info.copy()",
"Compute the F1 score for recommended trajectory, assuming NO loops/subtours in trajectories.",
"def calc_F1(traj_act, traj_rec):\n    '''Compute recall, precision and F1 for recommended trajectories'''\n    assert(len(traj_act) > 0)\n    assert(len(traj_rec) > 0)\n    \n    intersize = len(set(traj_act) & set(traj_rec))\n    recall = intersize / len(traj_act)\n    precision = intersize / len(traj_rec)\n    F1 = 2 * precision * recall / (precision + recall)\n    return F1",
"Compute distance between two POIs using Haversine formula.",
"def calc_dist_vec(longitudes1, latitudes1, longitudes2, latitudes2):\n \"\"\"Calculate the distance (unit: km) between two places on earth, vectorised\"\"\"\n # convert degrees to radians\n lng1 = np.radians(longitudes1)\n lat1 = np.radians(latitudes1)\n lng2 = np.radians(longitudes2)\n lat2 = np.radians(latitudes2)\n radius = 6371.0088 # mean earth radius, en.wikipedia.org/wiki/Earth_radius#Mean_radius\n\n # The haversine formula, en.wikipedia.org/wiki/Great-circle_distance\n dlng = np.fabs(lng1 - lng2)\n dlat = np.fabs(lat1 - lat2)\n dist = 2 * radius * np.arcsin( np.sqrt( \n (np.sin(0.5*dlat))**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(0.5*dlng))**2 ))\n return dist",
"Distance between POIs.",
"POI_DISTMAT = pd.DataFrame(data=np.zeros((poi_all.shape[0], poi_all.shape[0]), dtype=np.float), \\\n index=poi_all.index, columns=poi_all.index)\n\nfor ix in poi_all.index:\n POI_DISTMAT.loc[ix] = calc_dist_vec(poi_all.loc[ix, 'poiLon'], \\\n poi_all.loc[ix, 'poiLat'], \\\n poi_all['poiLon'], \\\n poi_all['poiLat'])\n\ntrajid_set_all = sorted(traj_all['trajID'].unique().tolist())\n\npoi_info_all = calc_poi_info(trajid_set_all, traj_all, poi_all)\n\npoi_info_all.head()",
"Dump POI popularity.",
"poi_all['poiPopularity'] = 0\npoi_all.loc[poi_info_all.index, 'poiPopularity'] = poi_info_all.loc[poi_info_all.index, 'popularity']\npoi_all.head()\n\n#poi_all.to_csv('poi_all.csv', index=True)",
"~~Filtering out POI visits with 0 duration.~~",
"#zero_duration = poi_info_all[poi_info_all['avgDuration'] < 1]\n#zero_duration\n\n#print(traj_all.shape)\n#traj_all = traj_all[traj_all['poiID'].isin(set(poi_info_all.index) - set(zero_duration.index))]\n#print(traj_all.shape)",
"Dictionary maps every trajectory ID to the actual trajectory.",
"traj_dict = dict()\n\nfor trajid in trajid_set_all:\n traj = extract_traj(trajid, traj_all)\n assert(trajid not in traj_dict)\n traj_dict[trajid] = traj",
"Define a query (in IR terminology) using tuple (start POI, end POI, #POI) ~~user ID.~~",
"QUERY_ID_DICT = dict() # (start, end, length) --> qid\n\nkeys = [(traj_dict[x][0], traj_dict[x][-1], len(traj_dict[x])) \\\n for x in sorted(traj_dict.keys()) if len(traj_dict[x]) > 2]\ncnt = 0\nfor key in keys:\n if key not in QUERY_ID_DICT: # (start, end, length) --> qid\n QUERY_ID_DICT[key] = cnt\n cnt += 1\n\nprint('#traj in total:', len(trajid_set_all))\nprint('#traj (length > 2):', traj_all[traj_all['trajLen'] > 2]['trajID'].unique().shape[0])\nprint('#query tuple:', len(QUERY_ID_DICT))",
"5. Recommendation via POI Ranking\n5.1 POI Features for Ranking\nPOI features used for ranking:\n1. popularity: POI popularity, i.e., the number of distinct users that visited the POI\n1. nVisit: the total number of visits by all users\n1. avgDuration: average POI visit duration\n1. sameCatStart: 1 if POI category is the same as that of startPOI, -1 otherwise\n1. sameCatEnd: 1 if POI category is the same as that of endPOI, -1 otherwise\n1. distStart: distance (haversine formula) from startPOI\n1. distEnd: distance from endPOI\n1. seqLen: trajectory length (copied from the query)\n1. diffPopStart: difference in POI popularity from startPOI\n1. diffPopEnd: difference in POI popularity from endPOI\n1. diffNVisitStart: difference in the total number of visits from startPOI\n1. diffNVisitEnd: difference in the total number of visits from endPOI\n1. diffDurationStart: difference in average POI visit duration from the actual duration spent at startPOI\n1. diffDurationEnd: difference in average POI visit duration from the actual duration spent at endPOI",
"DF_COLUMNS = ['poiID', 'label', 'queryID', 'popularity', 'nVisit', 'avgDuration', \\\n 'sameCatStart', 'sameCatEnd', 'distStart', 'distEnd', 'trajLen', 'diffPopStart', \\\n 'diffPopEnd', 'diffNVisitStart', 'diffNVisitEnd', 'diffDurationStart', 'diffDurationEnd']",
"5.2 Training DataFrame\nTraining data are generated as follows:\n1. each input tuple $(\\text{startPOI}, \\text{endPOI}, \\text{#POI})$ forms a query (in IR terminology).\n1. the label of a specific POI is the number of times that POI appears in trajectories of that query, excluding appearances as $\\text{startPOI}$ or $\\text{endPOI}$.\n1. for each query, all POIs absent from the trajectories of that query in the training set get label 0.\nThe dimension of the training data matrix is #(qid, poi) by #feature.",
"def gen_train_subdf(poi_id, query_id_set, poi_info, query_id_rdict):\n columns = DF_COLUMNS\n poi_distmat = POI_DISTMAT\n df_ = pd.DataFrame(data=np.zeros((len(query_id_set), len(columns)), dtype=np.float), columns=columns)\n \n pop = poi_info.loc[poi_id, 'popularity']; nvisit = poi_info.loc[poi_id, 'nVisit']\n cat = poi_info.loc[poi_id, 'poiCat']; duration = poi_info.loc[poi_id, 'avgDuration']\n \n for j in range(len(query_id_set)):\n qid = query_id_set[j]\n assert(qid in query_id_rdict) # qid --> (start, end, length)\n (p0, pN, trajLen) = query_id_rdict[qid]\n idx = df_.index[j]\n df_.loc[idx, 'poiID'] = poi_id\n df_.loc[idx, 'queryID'] = qid\n df_.loc[idx, 'popularity'] = pop\n df_.loc[idx, 'nVisit'] = nvisit\n df_.loc[idx, 'avgDuration'] = duration\n df_.loc[idx, 'sameCatStart'] = 1 if cat == poi_info.loc[p0, 'poiCat'] else -1\n df_.loc[idx, 'sameCatEnd'] = 1 if cat == poi_info.loc[pN, 'poiCat'] else -1\n df_.loc[idx, 'distStart'] = poi_distmat.loc[poi_id, p0]\n df_.loc[idx, 'distEnd'] = poi_distmat.loc[poi_id, pN]\n df_.loc[idx, 'trajLen'] = trajLen\n df_.loc[idx, 'diffPopStart'] = pop - poi_info.loc[p0, 'popularity']\n df_.loc[idx, 'diffPopEnd'] = pop - poi_info.loc[pN, 'popularity']\n df_.loc[idx, 'diffNVisitStart'] = nvisit - poi_info.loc[p0, 'nVisit']\n df_.loc[idx, 'diffNVisitEnd'] = nvisit - poi_info.loc[pN, 'nVisit']\n df_.loc[idx, 'diffDurationStart'] = duration - poi_info.loc[p0, 'avgDuration']\n df_.loc[idx, 'diffDurationEnd'] = duration - poi_info.loc[pN, 'avgDuration']\n \n return df_\n\ndef gen_train_df(trajid_list, traj_dict, poi_info, n_jobs=-1):\n columns = DF_COLUMNS\n poi_distmat = POI_DISTMAT\n query_id_dict = QUERY_ID_DICT\n train_trajs = [traj_dict[x] for x in trajid_list if len(traj_dict[x]) > 2]\n \n qid_set = sorted(set([query_id_dict[(t[0], t[-1], len(t))] for t in train_trajs]))\n poi_set = set()\n for tr in train_trajs:\n poi_set = poi_set | set(tr)\n \n #qid_poi_pair = list(itertools.product(qid_set, poi_set)) # Cartesian product 
of qid_set and poi_set\n #df_ = pd.DataFrame(data=np.zeros((len(qid_poi_pair), len(columns)), dtype= np.float), columns=columns)\n \n query_id_rdict = dict()\n for k, v in query_id_dict.items(): \n query_id_rdict[v] = k # qid --> (start, end, length)\n \n train_df_list = Parallel(n_jobs=n_jobs)\\\n (delayed(gen_train_subdf)(poi, qid_set, poi_info, query_id_rdict) \\\n for poi in poi_set)\n \n assert(len(train_df_list) > 0)\n df_ = train_df_list[0]\n for j in range(1, len(train_df_list)):\n df_ = df_.append(train_df_list[j], ignore_index=True) \n \n # set label\n df_.set_index(['queryID', 'poiID'], inplace=True)\n for t in train_trajs:\n qid = query_id_dict[(t[0], t[-1], len(t))]\n for poi in t[1:-1]: # do NOT count if the POI is startPOI/endPOI\n df_.loc[(qid, poi), 'label'] += 1\n\n df_.reset_index(inplace=True)\n return df_",
"Sanity check: \n- different POIs have different features for the same query trajectory\n- the same POI gets different features for different query IDs\n5.3 Test DataFrame\nTest data are generated the same way as training data, except that the labels of the test data (unknown) can be arbitrary values, as suggested in the libsvm FAQ.\nThe accuracy reported by the svm-predict command is therefore meaningless, as it is computed against these arbitrary labels.\nThe dimension of the test data matrix is #poi by #feature for one specific query, i.e. the tuple $(\\text{startPOI}, \\text{endPOI}, \\text{#POI})$.",
"def gen_test_df(startPOI, endPOI, nPOI, poi_info):\n columns = DF_COLUMNS\n poi_distmat = POI_DISTMAT\n query_id_dict = QUERY_ID_DICT\n key = (p0, pN, trajLen) = (startPOI, endPOI, nPOI)\n assert(key in query_id_dict)\n assert(p0 in poi_info.index)\n assert(pN in poi_info.index)\n \n df_ = pd.DataFrame(data=np.zeros((poi_info.shape[0], len(columns)), dtype= np.float), columns=columns)\n poi_list = sorted(poi_info.index)\n \n qid = query_id_dict[key]\n df_['queryID'] = qid\n df_['label'] = np.random.rand(df_.shape[0]) # label for test data is arbitrary according to libsvm FAQ\n\n for i in range(df_.index.shape[0]):\n poi = poi_list[i]\n lon = poi_info.loc[poi, 'poiLon']; lat = poi_info.loc[poi, 'poiLat']\n pop = poi_info.loc[poi, 'popularity']; nvisit = poi_info.loc[poi, 'nVisit']\n cat = poi_info.loc[poi, 'poiCat']; duration = poi_info.loc[poi, 'avgDuration']\n idx = df_.index[i]\n df_.loc[idx, 'poiID'] = poi \n df_.loc[idx, 'popularity'] = pop\n df_.loc[idx, 'nVisit'] = nvisit\n df_.loc[idx, 'avgDuration'] = duration\n df_.loc[idx, 'sameCatStart'] = 1 if cat == poi_info.loc[p0, 'poiCat'] else -1\n df_.loc[idx, 'sameCatEnd'] = 1 if cat == poi_info.loc[pN, 'poiCat'] else -1\n df_.loc[idx, 'distStart'] = poi_distmat.loc[poi, p0]\n df_.loc[idx, 'distEnd'] = poi_distmat.loc[poi, pN]\n df_.loc[idx, 'trajLen'] = trajLen\n df_.loc[idx, 'diffPopStart'] = pop - poi_info.loc[p0, 'popularity']\n df_.loc[idx, 'diffPopEnd'] = pop - poi_info.loc[pN, 'popularity']\n df_.loc[idx, 'diffNVisitStart'] = nvisit - poi_info.loc[p0, 'nVisit']\n df_.loc[idx, 'diffNVisitEnd'] = nvisit - poi_info.loc[pN, 'nVisit']\n df_.loc[idx, 'diffDurationStart'] = duration - poi_info.loc[p0, 'avgDuration']\n df_.loc[idx, 'diffDurationEnd'] = duration - poi_info.loc[pN, 'avgDuration']\n return df_",
"Sanity check: \n- different POIs have different features for the same query trajectory\n- the same POI gets different features for different query IDs\nGenerate a string (in rankSVM input format) for a training/test data frame.",
"def gen_data_str(df_, df_columns=DF_COLUMNS):\n columns = df_columns[1:].copy() # get rid of 'poiID'\n for col in columns:\n assert(col in df_.columns)\n \n lines = []\n for idx in df_.index:\n slist = [str(df_.loc[idx, 'label'])]\n slist.append(' qid:')\n slist.append(str(int(df_.loc[idx, 'queryID'])))\n for j in range(2, len(columns)):\n slist.append(' ')\n slist.append(str(j-1))\n slist.append(':')\n slist.append(str(df_.loc[idx, columns[j]]))\n slist.append('\\n')\n lines.append(''.join(slist))\n return ''.join(lines)",
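The string produced by gen_data_str follows the svmlight-with-qid format used by rankSVM: the label comes first, then `qid:`, then `index:value` pairs with feature indices starting at 1. A minimal, hypothetical formatter (not the notebook's own, which reads from a DataFrame) makes the shape of one line explicit:

```python
# Hypothetical helper illustrating the rankSVM input line format
# 'label qid:Q 1:f1 2:f2 ...'; feature values below are made up.
def format_ranksvm_line(label, qid, features):
    """Render one example as a rankSVM/svmlight line with a query id."""
    parts = [str(label), 'qid:%d' % qid]
    parts += ['%d:%g' % (i + 1, v) for i, v in enumerate(features)]
    return ' '.join(parts)

line = format_ranksvm_line(1.0, 7, [0.5, -2.0, 3.25])
print(line)  # -> "1.0 qid:7 1:0.5 2:-2 3:3.25"
```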
"5.4 Ranking POIs using rankSVM\nRankSVM implementations are available in libsvm.zip and liblinear.zip; please read README.ranksvm in the zip file for installation instructions.\nUse the softmax function to convert ranking scores into a probability distribution.",
"def softmax(x):\n x1 = x.copy()\n x1 -= np.max(x1) # numerically more stable, REF: http://cs231n.github.io/linear-classify/#softmax\n expx = np.exp(x1)\n return expx / np.sum(expx, axis=0) # column-wise sum",
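A quick sanity check of the softmax above (a standalone copy so the snippet runs on its own): the output sums to one, preserves the ordering of the scores, and is invariant to shifting all scores by a constant, which is exactly why subtracting the maximum is safe.

```python
import numpy as np

# standalone copy of the softmax defined above, applied to a toy score vector
def softmax(x):
    x1 = np.asarray(x, dtype=float).copy()
    x1 -= np.max(x1)                  # shift for numerical stability
    expx = np.exp(x1)
    return expx / np.sum(expx, axis=0)

probs = softmax([1.0, 2.0, 3.0])
# probs sums to 1, keeps the ordering of the scores, and is shift-invariant
```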
"Below is a Python wrapper around the svm-train (or train) and svm-predict (or predict) commands of rankSVM, with ranking probabilities $P(p_i \\lvert (p_s, p_e, len))$ computed using the softmax function.",
"# python wrapper of rankSVM\nclass RankSVM:\n def __init__(self, bin_dir, useLinear=True, debug=False):\n dir_ = !echo $bin_dir # deal with environmental variables in path\n assert(os.path.exists(dir_[0]))\n self.bin_dir = dir_[0]\n \n self.bin_train = 'svm-train'\n self.bin_predict = 'svm-predict'\n if useLinear:\n self.bin_train = 'train'\n self.bin_predict = 'predict'\n \n assert(isinstance(debug, bool))\n self.debug = debug\n \n # create named tmp files for model and feature scaling parameters\n self.fmodel = None\n self.fscale = None\n with tempfile.NamedTemporaryFile(delete=False) as fd: \n self.fmodel = fd.name\n with tempfile.NamedTemporaryFile(delete=False) as fd: \n self.fscale = fd.name\n \n if self.debug:\n print('model file:', self.fmodel)\n print('feature scaling parameter file:', self.fscale)\n \n \n def __del__(self):\n # remove tmp files\n if self.fmodel is not None and os.path.exists(self.fmodel):\n os.unlink(self.fmodel)\n if self.fscale is not None and os.path.exists(self.fscale):\n os.unlink(self.fscale)\n \n \n def train(self, train_df, cost=1):\n # cost is parameter C in SVM\n # write train data to file\n ftrain = None\n with tempfile.NamedTemporaryFile(mode='w+t', delete=False) as fd: \n ftrain = fd.name\n datastr = gen_data_str(train_df)\n fd.write(datastr)\n \n # feature scaling\n ftrain_scaled = None\n with tempfile.NamedTemporaryFile(mode='w+t', delete=False) as fd: \n ftrain_scaled = fd.name\n result = !$self.bin_dir/svm-scale -s $self.fscale $ftrain > $ftrain_scaled\n \n if self.debug:\n print('cost:', cost)\n print('train data file:', ftrain)\n print('feature scaled train data file:', ftrain_scaled)\n \n # train rank svm and generate model file, if the model file exists, rewrite it\n #n_cv = 10 # parameter k for k-fold cross-validation, NO model file will be generated in CV mode\n #result = !$self.bin_dir/svm-train -c $cost -v $n_cv $ftrain $self.fmodel\n result = !$self.bin_dir/$self.bin_train -c $cost $ftrain_scaled $self.fmodel\n 
if self.debug:\n print('Training finished.')\n for i in range(len(result)): print(result[i])\n\n # remove train data file\n os.unlink(ftrain)\n os.unlink(ftrain_scaled) \n \n \n def predict(self, test_df):\n # predict ranking scores for the given feature matrix\n if self.fmodel is None or not os.path.exists(self.fmodel):\n print('Model should be trained before predicting')\n return\n \n # write test data to file\n ftest = None\n with tempfile.NamedTemporaryFile(mode='w+t', delete=False) as fd: \n ftest = fd.name\n datastr = gen_data_str(test_df)\n fd.write(datastr)\n \n # feature scaling\n ftest_scaled = None\n with tempfile.NamedTemporaryFile(delete=False) as fd: \n ftest_scaled = fd.name\n result = !$self.bin_dir/svm-scale -r $self.fscale $ftest > $ftest_scaled\n \n # generate prediction file\n fpredict = None\n with tempfile.NamedTemporaryFile(delete=False) as fd: \n fpredict = fd.name\n \n if self.debug:\n print('test data file:', ftest)\n print('feature scaled test data file:', ftest_scaled)\n print('predict result file:', fpredict)\n \n # predict using trained model and write prediction to file\n result = !$self.bin_dir/$self.bin_predict $ftest_scaled $self.fmodel $fpredict\n if self.debug:\n print('Predict result: %-30s %s' % (result[0], result[1]))\n \n # generate prediction DataFrame from prediction file\n poi_rank_df = pd.read_csv(fpredict, header=None)\n poi_rank_df.rename(columns={0:'rank'}, inplace=True)\n poi_rank_df['poiID'] = test_df['poiID'].astype(np.int)\n poi_rank_df.set_index('poiID', inplace=True) # duplicated 'poiID' when evaluating training data\n #poi_rank_df['probability'] = softmax(poi_rank_df['rank']) # softmax\n \n # remove test file and prediction file\n os.unlink(ftest)\n os.unlink(ftest_scaled)\n os.unlink(fpredict)\n \n return poi_rank_df",
"6. Factorised Transition Probabilities\n6.1 POI Features for Factorisation\nPOI features used to factorise the transition matrix of a Markov chain whose states are POI feature vectors:\n- Category of POI\n- Popularity of POI (discretised with uniform log-scale bins, #bins=5)\n- The number of POI visits (discretised with uniform log-scale bins, #bins=5)\n- The average visit duration of POI (discretised with uniform log-scale bins, #bins=5)\n- The neighborhood relationship between POIs (clustering POI (lat, lon) using k-means, #clusters=5)\nWe count the number of transitions first, then normalise each row, taking care of zero counts by adding $k=1$ to each cell (add-one smoothing).",
"def normalise_transmat(transmat_cnt):\n transmat = transmat_cnt.copy()\n assert(isinstance(transmat, pd.DataFrame))\n for row in range(transmat.index.shape[0]):\n rowsum = np.sum(transmat.iloc[row] + 1)\n assert(rowsum > 0)\n transmat.iloc[row] = (transmat.iloc[row] + 1) / rowsum\n return transmat",
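The add-one smoothing in normalise_transmat can be seen on a tiny count matrix (a standalone copy of the function above, with a made-up 2×2 count matrix): every row becomes a proper probability distribution and no transition probability is exactly zero.

```python
import numpy as np
import pandas as pd

# standalone copy of normalise_transmat, applied to a toy count matrix
def normalise_transmat(transmat_cnt):
    transmat = transmat_cnt.copy()
    for row in range(transmat.index.shape[0]):
        rowsum = np.sum(transmat.iloc[row] + 1)  # add-one smoothing
        transmat.iloc[row] = (transmat.iloc[row] + 1) / rowsum
    return transmat

cnt = pd.DataFrame([[0, 3], [1, 0]], columns=['A', 'B'], index=['A', 'B'], dtype=float)
tm = normalise_transmat(cnt)
# row 'A': (0+1, 3+1) / 5 -> (0.2, 0.8); no zero probabilities remain
```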
"POIs in training set.",
"poi_train = sorted(poi_info_all.index)",
"6.2 Transition Matrix between POI Categories",
"poi_cats = poi_all.loc[poi_train, 'poiTheme'].unique().tolist()\npoi_cats.sort()\n#poi_cats\n\ndef gen_transmat_cat(trajid_list, traj_dict, poi_info, poi_cats=poi_cats):\n transmat_cat_cnt = pd.DataFrame(data=np.zeros((len(poi_cats), len(poi_cats)), dtype=np.float), \\\n columns=poi_cats, index=poi_cats)\n for tid in trajid_list:\n t = traj_dict[tid]\n if len(t) > 1:\n for pi in range(len(t)-1):\n p1 = t[pi]\n p2 = t[pi+1]\n assert(p1 in poi_info.index and p2 in poi_info.index)\n cat1 = poi_info.loc[p1, 'poiCat']\n cat2 = poi_info.loc[p2, 'poiCat']\n transmat_cat_cnt.loc[cat1, cat2] += 1\n return normalise_transmat(transmat_cat_cnt)\n\ngen_transmat_cat(trajid_set_all, traj_dict, poi_info_all)",
"6.3 Transition Matrix between POI Popularity Classes",
"poi_pops = poi_info_all.loc[poi_train, 'popularity']\n#sorted(poi_pops.unique().tolist())",
"Discretize POI popularity with uniform log-scale bins.",
"expo_pop1 = np.log10(max(1, min(poi_pops)))\nexpo_pop2 = np.log10(max(poi_pops))\nprint(expo_pop1, expo_pop2)\n\nnbins_pop = BIN_CLUSTER\nlogbins_pop = np.logspace(np.floor(expo_pop1), np.ceil(expo_pop2), nbins_pop+1)\nlogbins_pop[0] = 0 # deal with underflow\nif logbins_pop[-1] < poi_info_all['popularity'].max():\n logbins_pop[-1] = poi_info_all['popularity'].max() + 1\nlogbins_pop\n\nax = pd.Series(poi_pops).hist(figsize=(5, 3), bins=logbins_pop)\nax.set_xlim(xmin=0.1)\nax.set_xscale('log')\n\ndef gen_transmat_pop(trajid_list, traj_dict, poi_info, logbins_pop=logbins_pop):\n nbins = len(logbins_pop) - 1\n transmat_pop_cnt = pd.DataFrame(data=np.zeros((nbins, nbins), dtype=np.float), \\\n columns=np.arange(1, nbins+1), index=np.arange(1, nbins+1))\n for tid in trajid_list:\n t = traj_dict[tid]\n if len(t) > 1:\n for pi in range(len(t)-1):\n p1 = t[pi]\n p2 = t[pi+1]\n assert(p1 in poi_info.index and p2 in poi_info.index)\n pop1 = poi_info.loc[p1, 'popularity']\n pop2 = poi_info.loc[p2, 'popularity']\n pc1, pc2 = np.digitize([pop1, pop2], logbins_pop)\n transmat_pop_cnt.loc[pc1, pc2] += 1\n return normalise_transmat(transmat_pop_cnt), logbins_pop\n\ngen_transmat_pop(trajid_set_all, traj_dict, poi_info_all)[0]",
"6.4 Transition Matrix between the Number of POI Visit Classes",
"poi_visits = poi_info_all.loc[poi_train, 'nVisit']\n#sorted(poi_visits.unique().tolist())",
"Discretize the number of POI visits with uniform log-scale bins.",
"expo_visit1 = np.log10(max(1, min(poi_visits)))\nexpo_visit2 = np.log10(max(poi_visits))\nprint(expo_visit1, expo_visit2)\n\nnbins_visit = BIN_CLUSTER\nlogbins_visit = np.logspace(np.floor(expo_visit1), np.ceil(expo_visit2), nbins_visit+1)\nlogbins_visit[0] = 0 # deal with underflow\nif logbins_visit[-1] < poi_info_all['nVisit'].max():\n logbins_visit[-1] = poi_info_all['nVisit'].max() + 1\nlogbins_visit\n\nax = pd.Series(poi_visits).hist(figsize=(5, 3), bins=logbins_visit)\nax.set_xlim(xmin=0.1)\nax.set_xscale('log')\n\ndef gen_transmat_visit(trajid_list, traj_dict, poi_info, logbins_visit=logbins_visit):\n nbins = len(logbins_visit) - 1\n transmat_visit_cnt = pd.DataFrame(data=np.zeros((nbins, nbins), dtype=np.float), \\\n columns=np.arange(1, nbins+1), index=np.arange(1, nbins+1))\n for tid in trajid_list:\n t = traj_dict[tid]\n if len(t) > 1:\n for pi in range(len(t)-1):\n p1 = t[pi]\n p2 = t[pi+1]\n assert(p1 in poi_info.index and p2 in poi_info.index)\n visit1 = poi_info.loc[p1, 'nVisit']\n visit2 = poi_info.loc[p2, 'nVisit']\n vc1, vc2 = np.digitize([visit1, visit2], logbins_visit)\n transmat_visit_cnt.loc[vc1, vc2] += 1\n return normalise_transmat(transmat_visit_cnt), logbins_visit\n\ngen_transmat_visit(trajid_set_all, traj_dict, poi_info_all)[0]",
"6.5 Transition Matrix between POI Average Visit Duration Classes",
"poi_durations = poi_info_all.loc[poi_train, 'avgDuration']\n#sorted(poi_durations.unique().tolist())\n\nexpo_duration1 = np.log10(max(1, min(poi_durations)))\nexpo_duration2 = np.log10(max(poi_durations))\nprint(expo_duration1, expo_duration2)\n\nnbins_duration = BIN_CLUSTER\nlogbins_duration = np.logspace(np.floor(expo_duration1), np.ceil(expo_duration2), nbins_duration+1)\nlogbins_duration[0] = 0 # deal with underflow\nlogbins_duration[-1] = np.power(10, expo_duration2+2)\nlogbins_duration\n\nax = pd.Series(poi_durations).hist(figsize=(5, 3), bins=logbins_duration)\nax.set_xlim(xmin=0.1)\nax.set_xscale('log')\n\ndef gen_transmat_duration(trajid_list, traj_dict, poi_info, logbins_duration=logbins_duration):\n nbins = len(logbins_duration) - 1\n transmat_duration_cnt = pd.DataFrame(data=np.zeros((nbins, nbins), dtype=np.float), \\\n columns=np.arange(1, nbins+1), index=np.arange(1, nbins+1))\n for tid in trajid_list:\n t = traj_dict[tid]\n if len(t) > 1:\n for pi in range(len(t)-1):\n p1 = t[pi]\n p2 = t[pi+1]\n assert(p1 in poi_info.index and p2 in poi_info.index)\n d1 = poi_info.loc[p1, 'avgDuration']\n d2 = poi_info.loc[p2, 'avgDuration']\n dc1, dc2 = np.digitize([d1, d2], logbins_duration)\n transmat_duration_cnt.loc[dc1, dc2] += 1\n return normalise_transmat(transmat_duration_cnt), logbins_duration\n\ngen_transmat_duration(trajid_set_all, traj_dict, poi_info_all)[0]",
"6.6 Transition Matrix between POI Neighborhood Classes\nKMeans in scikit-learn does not support custom distance metrics and provides no implementation of the haversine formula, so Euclidean distance on (longitude, latitude) is used as an approximation.",
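For reference, a standard haversine helper (not part of the notebook) can be used to check how good the Euclidean approximation is at this spatial scale; the coordinates below are made up and only roughly correspond to the Melbourne area.

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometres between two (lon, lat) points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# two points one degree of latitude apart are roughly 111 km apart
d = haversine_km(144.96, -37.81, 144.96, -36.81)
```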
"X = poi_all.loc[poi_train, ['poiLon', 'poiLat']]\nnclusters = BIN_CLUSTER\n\nkmeans = KMeans(n_clusters=nclusters, random_state=987654321)\nkmeans.fit(X)\n\nclusters = kmeans.predict(X)\n#clusters\npoi_clusters = pd.DataFrame(data=clusters, index=poi_train)\npoi_clusters.index.name = 'poiID'\npoi_clusters.rename(columns={0:'clusterID'}, inplace=True)\n#poi_clusters\n\npoi_clusters.to_csv('cluster.1.csv')",
"Scatter plot of POI coordinates with clustering results.",
"diff = poi_all.loc[poi_train, ['poiLon', 'poiLat']].max() - poi_all.loc[poi_train, ['poiLon', 'poiLat']].min()\nratio = diff['poiLon'] / diff['poiLat']\n#ratio\nheight = 6; width = int(round(ratio)*height)\nplt.figure(figsize=[width, height])\nplt.scatter(poi_all.loc[poi_train, 'poiLon'], poi_all.loc[poi_train, 'poiLat'], c=clusters, s=50)\n\ndef gen_transmat_neighbor(trajid_list, traj_dict, poi_info, poi_clusters=poi_clusters):\n nclusters = len(poi_clusters['clusterID'].unique())\n transmat_neighbor_cnt = pd.DataFrame(data=np.zeros((nclusters, nclusters), dtype=np.float), \\\n columns=np.arange(nclusters), index=np.arange(nclusters))\n for tid in trajid_list:\n t = traj_dict[tid]\n if len(t) > 1:\n for pi in range(len(t)-1):\n p1 = t[pi]\n p2 = t[pi+1]\n assert(p1 in poi_info.index and p2 in poi_info.index)\n c1 = poi_clusters.loc[p1, 'clusterID']\n c2 = poi_clusters.loc[p2, 'clusterID']\n transmat_neighbor_cnt.loc[c1, c2] += 1\n return normalise_transmat(transmat_neighbor_cnt), poi_clusters\n\ngen_transmat_neighbor(trajid_set_all, traj_dict, poi_info_all)[0]",
"6.7 Transition Matrix between POIs\nApproximate the transition probability matrix between POI feature vectors using the Kronecker product of the individual transition matrices corresponding to each feature, i.e., POI category, POI popularity (discretised), the number of POI visits (discretised), POI average visit duration (discretised) and POI neighborhood (clusters).\nDeal with feature combinations without corresponding POIs and feature combinations shared by more than one POI (before normalisation):\n- For feature combinations without corresponding POIs, simply remove the corresponding rows and columns from the matrix obtained by the Kronecker product.\n- For different POIs with exactly the same features, treat POIs with the same features as a POI group:\n - The incoming transition value (i.e., unnormalised transition probability) of this POI group should be divided uniformly among the group members, which corresponds to choosing a group member uniformly at random in the incoming case.\n - The outgoing transition value should be duplicated (i.e., the same) for all group members, as we were already in that group in the outgoing case.\n - For each POI in the group, the transition value of the self-loop of the POI group is allocated similarly to the outgoing case, as we were already in that group: duplicate it, then divide it uniformly among the transitions from this POI to the other POIs in the same group, which corresponds to choosing an outgoing transition uniformly at random from all outgoing transitions excluding the self-loop of this POI.\n- Concretely, for a POI group with $n$ POIs, \n 1. If the incoming transition value of the POI group is $m_1$, then the corresponding incoming transition value for each group member is $\\frac{m_1}{n}$.\n 1. If the outgoing transition value of the POI group is $m_2$, then the corresponding outgoing transition value for each group member is also $m_2$.\n 1. If the transition value of the self-loop of the POI group is $m_3$, then the self-loop transition value of each individual POI should be $0$, and the other in-group transitions get value $\\frac{m_3}{n-1}$, as the total number of outgoing transitions to other POIs in the same group is $n-1$ (excluding the self-loop), i.e. $n-1$ choose $1$.\nNOTE: performing the above division before or after row normalisation leads to the same result, as the division itself does NOT change the normalising constant of each row (i.e., the sum of each row before normalising).",
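The allocation rules above can be checked numerically. A tiny sketch with made-up values for the group size $n$ and the transition values $m_1$ and $m_3$ (rule 2 simply copies $m_2$ to every member, so there is nothing to verify there):

```python
import numpy as np

n = 3                 # hypothetical POI group size
m1, m3 = 6.0, 4.0     # made-up incoming and group self-loop transition values

incoming_each = m1 / n          # rule 1: split the incoming value uniformly
in_group_each = m3 / (n - 1)    # rule 3: spread the self-loop over n-1 in-group edges

# total incoming mass into the group is preserved
assert np.isclose(incoming_each * n, m1)
# each member's in-group outgoing transitions still carry the full self-loop mass
assert np.isclose(in_group_each * (n - 1), m3)
```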
"def gen_poi_transmat(trajid_list, poi_set, traj_dict, poi_info, debug=False):\n transmat_cat = gen_transmat_cat(trajid_list, traj_dict, poi_info)\n transmat_pop, logbins_pop = gen_transmat_pop(trajid_list, traj_dict, poi_info)\n transmat_visit, logbins_visit = gen_transmat_visit(trajid_list, traj_dict, poi_info)\n transmat_duration, logbins_duration = gen_transmat_duration(trajid_list, traj_dict, poi_info)\n transmat_neighbor, poi_clusters = gen_transmat_neighbor(trajid_list, traj_dict, poi_info)\n\n # Kronecker product\n transmat_ix = list(itertools.product(transmat_cat.index, transmat_pop.index, transmat_visit.index, \\\n transmat_duration.index, transmat_neighbor.index))\n transmat_value = transmat_cat.values\n for transmat in [transmat_pop, transmat_visit, transmat_duration, transmat_neighbor]:\n transmat_value = kron(transmat_value, transmat.values)\n transmat_feature = pd.DataFrame(data=transmat_value, index=transmat_ix, columns=transmat_ix)\n \n poi_train = sorted(poi_set)\n feature_names = ['poiCat', 'popularity', 'nVisit', 'avgDuration', 'clusterID']\n poi_features = pd.DataFrame(data=np.zeros((len(poi_train), len(feature_names))), \\\n columns=feature_names, index=poi_train)\n poi_features.index.name = 'poiID'\n poi_features['poiCat'] = poi_info.loc[poi_train, 'poiCat']\n poi_features['popularity'] = np.digitize(poi_info.loc[poi_train, 'popularity'], logbins_pop)\n poi_features['nVisit'] = np.digitize(poi_info.loc[poi_train, 'nVisit'], logbins_visit)\n poi_features['avgDuration'] = np.digitize(poi_info.loc[poi_train, 'avgDuration'], logbins_duration)\n poi_features['clusterID'] = poi_clusters.loc[poi_train, 'clusterID']\n \n # shrink the result of Kronecker product and deal with POIs with the same features\n poi_logtransmat = pd.DataFrame(data=np.zeros((len(poi_train), len(poi_train)), dtype=np.float), \\\n columns=poi_train, index=poi_train)\n for p1 in poi_logtransmat.index:\n rix = tuple(poi_features.loc[p1])\n for p2 in poi_logtransmat.columns:\n cix 
= tuple(poi_features.loc[p2])\n value_ = transmat_feature.loc[(rix,), (cix,)]\n poi_logtransmat.loc[p1, p2] = value_.values[0, 0]\n \n # group POIs with the same features\n features_dup = dict()\n for poi in poi_features.index:\n key = tuple(poi_features.loc[poi])\n if key in features_dup:\n features_dup[key].append(poi)\n else:\n features_dup[key] = [poi]\n if debug == True:\n for key in sorted(features_dup.keys()):\n print(key, '->', features_dup[key])\n \n # deal with POIs with the same features\n for feature in sorted(features_dup.keys()):\n n = len(features_dup[feature])\n if n > 1:\n group = features_dup[feature]\n v1 = poi_logtransmat.loc[group[0], group[0]] # transition value of self-loop of POI group\n \n # divide incoming transition value (i.e. unnormalised transition probability) uniformly among group members\n for poi in group:\n poi_logtransmat[poi] /= n\n \n # outgoing transition value has already been duplicated (value copied above)\n \n # duplicate & divide transition value of self-loop of POI group uniformly among all outgoing transitions,\n # from a POI to all other POIs in the same group (excluding POI self-loop)\n v2 = v1 / (n - 1)\n for pair in itertools.permutations(group, 2):\n poi_logtransmat.loc[pair[0], pair[1]] = v2\n \n # normalise each row\n for p1 in poi_logtransmat.index:\n poi_logtransmat.loc[p1, p1] = 0\n rowsum = poi_logtransmat.loc[p1].sum()\n assert(rowsum > 0)\n logrowsum = np.log10(rowsum)\n for p2 in poi_logtransmat.columns:\n if p1 == p2:\n poi_logtransmat.loc[p1, p2] = -np.inf # deal with log(0) explicitly\n else:\n poi_logtransmat.loc[p1, p2] = np.log10(poi_logtransmat.loc[p1, p2]) - logrowsum\n \n poi_transmat = np.power(10, poi_logtransmat)\n return poi_transmat\n\npoi_transmat = gen_poi_transmat(trajid_set_all, set(poi_info_all.index), traj_dict, poi_info_all, debug=False)",
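One reason the Kronecker product is a natural way to combine the per-feature matrices: the Kronecker product of row-stochastic matrices is itself row-stochastic, so the joint-feature matrix is a valid transition matrix before the shrinking and renormalisation steps above. A toy check with two made-up 2×2 transition matrices:

```python
import numpy as np

A = np.array([[0.7, 0.3], [0.4, 0.6]])   # toy transition matrix for feature 1
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # toy transition matrix for feature 2
K = np.kron(A, B)                        # 4x4 joint-feature transition matrix
# each row of the Kronecker product still sums to 1
```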
"6.8 Visualise Transition Matrix\nPlot transition matrix heatmap.",
"plt.figure(figsize=[13, 10])\n#plt.imshow(prob_mat, interpolation='none', cmap=plt.cm.hot) # OK\n#ticks = prob_mat.index\n#plt.xticks(np.arange(prob_mat.shape[0]), ticks)\n#plt.yticks(np.arange(prob_mat.shape[0]), ticks)\n#plt.xlabel('POI ID')\n#plt.ylabel('POI ID')\nsns.heatmap(poi_transmat)",
"6.9 Visualise Transitions of Specific POIs\nGenerate a KML file to visualise the transitions from a specific POI, using edge width and edge transparency to distinguish different transition probabilities.",
"def gen_kml_transition(fname, poi_id, poi_df, poi_transmat, topk=None):\n ns = '{http://www.opengis.net/kml/2.2}'\n\n # scale (linearly) the transition probabilities to [1, 255]: f(x) = a1x + b1\n probs = poi_transmat.loc[poi_id].copy()\n pmax = poi_transmat.loc[poi_id].max()\n probs.loc[poi_id] = 1 # set self-loop to 1 to make np.min() below to get the real minimun prob.\n pmin = poi_transmat.loc[poi_id].min()\n nmin1, nmin2 = 1, 1\n nmax1, nmax2 = 255, 10\n # solve linear equations:\n # nmin = a1 x pmin + b1\n # nmax = a1 x pmax + b1\n a1, b1 = np.dot(np.linalg.pinv(np.array([[pmin, 1], [pmax, 1]])), np.array([nmin1, nmax1])) # control transparency\n a2, b2 = np.dot(np.linalg.pinv(np.array([[pmin, 1], [pmax, 1]])), np.array([nmin2, nmax2])) # control width\n\n pm_list = []\n stydict = dict()\n\n # Placemark for edges\n columns = poi_transmat.columns\n if topk is not None:\n assert(isinstance(topk, int))\n assert(topk > 0)\n idx = np.argsort(-poi_transmat.loc[poi_id])[:topk]\n columns = poi_transmat.columns[idx]\n #for poi in poi_transmat.columns:\n for poi in columns:\n if poi == poi_id: continue\n prob = poi_transmat.loc[poi_id, poi]\n decimal = int(np.round(a1 * prob + b1)) # scale transition probability to [1, 255]\n hexa = hex(decimal)[2:] + '0' if decimal < 16 else hex(decimal)[2:] # get rid of prefix '0x'\n color = hexa + '0000ff' # colors in KML: aabbggrr, aa=00 is fully transparent, transparent red\n width = int(np.round(a2 * prob + b2))\n if color not in stydict:\n stydict[color] = styles.LineStyle(color=color, width=width)\n sid = str(poi_id) + '_' + str(poi)\n ext_dict = {'From poiID': str(poi_id), 'From poiName': poi_df.loc[poi_id, 'Name'], \\\n 'To poiID': str(poi), 'To poiName': poi_df.loc[poi, 'Name'], \\\n 'Transition Probability': ('%.15f' % prob)}\n ext_data = kml.ExtendedData(elements=[kml.Data(name=x, value=ext_dict[x]) for x in sorted(ext_dict.keys())])\n pm = kml.Placemark(ns, sid, 'Edge_' + sid, description=None, styleUrl='#' + color, 
extended_data=ext_data)\n pm.geometry = LineString([(poi_df.loc[x, 'Longitude'], poi_df.loc[x, 'Latitude']) for x in [poi_id, poi]])\n pm_list.append(pm)\n\n # Placemark for POIs: import from csv file directly\n \n k = kml.KML()\n doc = kml.Document(ns, '1', 'Transitions of POI ' + str(poi_id) , description=None, \\\n styles=[styles.Style(id=x, styles=[stydict[x]]) for x in stydict.keys()])\n for pm in pm_list: doc.append(pm)\n k.append(doc)\n\n # save to file\n kmlstr = k.to_string(prettyprint=True)\n with open(fname, 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n f.write(kmlstr)\n print('write to', fname)",
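The scaling step inside gen_kml_transition fits a linear map $f(x) = ax + b$ from the two endpoint constraints $f(p_{min}) = n_{min}$ and $f(p_{max}) = n_{max}$; the notebook uses the pseudo-inverse, which also tolerates the degenerate case $p_{min} = p_{max}$. A minimal sketch with made-up probabilities (using `np.linalg.solve`, valid when the two probabilities differ):

```python
import numpy as np

pmin, pmax = 0.01, 0.4    # hypothetical min/max transition probabilities
nmin, nmax = 1.0, 255.0   # target range for the KML alpha channel
a, b = np.linalg.solve(np.array([[pmin, 1.0], [pmax, 1.0]]),
                       np.array([nmin, nmax]))
# f maps pmin -> nmin and pmax -> nmax exactly
```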
"6.9.1 The Most Popular POI\nDefine the popularity of a POI as the number of distinct users that visited it.",
"most_popular = poi_df['#distinctUsers'].argmax()\n\n#poi_df.loc[most_popular]\n\n#poi_transmat.loc[most_popular]\n\nfname = 'trans_most_popular.kml'\ngen_kml_transition(fname, most_popular, poi_df, poi_transmat)",
"Example on Google maps.\n6.9.2 The Queen Victoria Market",
"poi_qvm = poi_df[poi_df['Name'] == 'Queen Victoria Market']\npoi_qvm\n\n#poi_transmat.loc[poi_qvm.index[0]]\n\nfname = 'trans_qvm.kml'\ngen_kml_transition(fname, poi_qvm.index[0], poi_df, poi_transmat)",
"Example on Google maps.\n6.9.3 The University of Melbourne",
"poi_um = poi_df[poi_df['Name'] == 'University of Melbourne']\npoi_um\n\n#poi_transmat.loc[poi_um.index[0]]\n\nfname = 'trans_um.kml'\ngen_kml_transition(fname, poi_um.index[0], poi_df, poi_transmat)",
"Example on Google maps.",
"fname = 'trans_um_top30.kml'\ngen_kml_transition(fname, poi_um.index[0], poi_df, poi_transmat, topk=30)",
"Example on Google maps.\n6.9.4 The Margaret Court Arena",
"poi_mca = poi_df[poi_df['Name'] == 'Margaret Court Arena']\npoi_mca\n\n#poi_transmat.loc[poi_mca.index[0]]\n\nfname = 'trans_mca.kml'\ngen_kml_transition(fname, poi_mca.index[0], poi_df, poi_transmat)",
"Example on Google maps.\n6.9.5 RMIT City",
"poi_rmit = poi_df[poi_df['Name'] == 'RMIT City']\npoi_rmit\n\n#poi_transmat.loc[poi_rmit.index[0]]\n\nfname = 'trans_rmit.kml'\ngen_kml_transition(fname, poi_rmit.index[0], poi_df, poi_transmat)",
"Example on Google maps.",
"fname = 'trans_rmit_top30.kml'\ngen_kml_transition(fname, poi_rmit.index[0], poi_df, poi_transmat, topk=30)",
"Example on Google maps.\n6.10 Visualise Trajectories that Pass through Specific POIs\nGenerate a KML file for a set of trajectories.",
"def gen_kml_traj(fname, traj_subdict, poi_df):\n ns = '{http://www.opengis.net/kml/2.2}'\n norm = mpl.colors.Normalize(vmin=1, vmax=len(traj_subdict))\n cmap = mpl.cm.hot\n pmap = mpl.cm.ScalarMappable(norm=norm, cmap=cmap)\n\n pm_list = []\n stydict = dict()\n trajids = sorted(traj_subdict.keys())\n for i in range(len(trajids)):\n traj = traj_subdict[trajids[i]]\n r, g, b, a = pmap.to_rgba(i+1, bytes=True)\n color = '%02x%02x%02x%02x' % (63, b, g, r) # colors in KML: aabbggrr, aa=00 is fully transparent\n if color not in stydict:\n stydict[color] = styles.LineStyle(color=color, width=3)\n for j in range(len(traj)-1):\n poi1 = traj[j]\n poi2 = traj[j+1]\n sid = str(poi1) + '_' + str(poi2)\n ext_dict = {'TrajID': str(trajids[i]), 'Trajectory': str(traj), \\\n 'From poiID': str(poi1), 'From poiName': poi_df.loc[poi1, 'Name'], \\\n 'To poiID': str(poi2), 'To poiName': poi_df.loc[poi2, 'Name']}\n ext_data = kml.ExtendedData(elements=[kml.Data(name=x, value=ext_dict[x]) for x in sorted(ext_dict.keys())])\n pm = kml.Placemark(ns, sid, 'Edge_' + sid, description=None, styleUrl='#' + color, extended_data=ext_data)\n pm.geometry = LineString([(poi_df.loc[x, 'Longitude'], poi_df.loc[x, 'Latitude']) for x in [poi1, poi2]])\n pm_list.append(pm)\n\n # Placemark for POIs: import from csv file directly\n \n k = kml.KML()\n doc = kml.Document(ns, '1', 'Visualise %d Trajectories' % len(traj_subdict), description=None, \\\n styles=[styles.Style(id=x, styles=[stydict[x]]) for x in stydict.keys()])\n for pm in pm_list: doc.append(pm)\n k.append(doc)\n\n # save to file\n kmlstr = k.to_string(prettyprint=True)\n with open(fname, 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n f.write(kmlstr)\n print('write to', fname)",
"6.10.1 The Melbourne Cricket Ground (MCG)\nTrajectories (with more than one POI) that pass through the Melbourne Cricket Ground (MCG).",
"poi_mcg = poi_df[poi_df['Name'] == 'Melbourne Cricket Ground (MCG)']\npoi_mcg\n\ntraj_dict_mcg = dict()\nmcg = poi_mcg.index[0]\nfor tid_ in sorted(traj_dict.keys()):\n traj_ = traj_dict[tid_]\n #if mcg in traj_ and mcg != traj_[0] and mcg != traj_[-1]:\n if mcg in traj_ and len(traj_) > 1:\n traj_dict_mcg[tid_] = traj_\nprint(len(traj_dict_mcg), 'trajectories pass through Melbourne Cricket Ground (MCG).')\n\nfname = 'traj_pass_mcg.kml'\ngen_kml_traj(fname, traj_dict_mcg, poi_df)",
"Example on Google maps.\n6.10.2 The Government House\nTrajectories (with more than one POI) that pass through the Government House.",
"poi_gh = poi_df[poi_df['Name'] == 'Government House']\npoi_gh\n\ntraj_dict_gh = dict()\ngh = poi_gh.index[0]\nfor tid_ in sorted(traj_dict.keys()):\n traj_ = traj_dict[tid_]\n #if gh in traj_ and gh != traj_[0] and gh != traj_[-1]:\n if gh in traj_ and len(traj_) > 1:\n traj_dict_gh[tid_] = traj_\nprint(len(traj_dict_gh), 'trajectories pass through Government House.')\n\nfname = 'traj_pass_gh.kml'\ngen_kml_traj(fname, traj_dict_gh, poi_df)",
"Example on Google maps.\n7. Recommendation Results Comparison & Visualisation\nExamples of recommendation results: recommendations based on POI popularity, POI ranking and the POI transition matrix, with the recommended results visualised on a map.\n7.1 Choose an Example Trajectory\nChoose a trajectory of length 4.",
"traj4s = traj_all[traj_all['trajLen'] == 4]['trajID'].unique()\ntraj4s\n\n#for tid in traj4s:\n# gen_kml(tid, traj_all, poi_df)",
"After looking at many of these trajectories on the map, we choose trajectory 680 as an illustration.",
"tid = 680\ntraj = extract_traj(tid, traj_all)\nprint('REAL:', traj)\n\ntraj_dict_rec = {'REAL_' + str(tid): traj}\n\nstart = traj[0]\nend = traj[-1]\nlength = len(traj)",
"7.2 Recommendation by POI Popularity\nRecommend trajectory based on POI popularity only.",
"poi_df.sort_values(by='#distinctUsers', ascending=False, inplace=True)\nrec_pop = [start] + [x for x in poi_df.index.tolist() if x not in {start, end}][:length-2] + [end]\nprint('REC_POP:', rec_pop)\n\ntid_rec = 'REC_POP'\ntraj_dict_rec[tid_rec] = rec_pop",
"7.3 Recommendation by POI Rankings\nRecommend trajectory based on the ranking of POIs using rankSVM.",
"trajid_list_train = list(set(trajid_set_all) - {tid})\npoi_info = calc_poi_info(trajid_list_train, traj_all, poi_all)\n\ntrain_df = gen_train_df(trajid_list_train, traj_dict, poi_info) # POI feature based ranking\nranksvm = RankSVM(ranksvm_dir, useLinear=True)\nranksvm.train(train_df, cost=RANK_C)\ntest_df = gen_test_df(start, end, length, poi_info)\nrank_df = ranksvm.predict(test_df)\nrank_df.sort_values(by='rank', ascending=False, inplace=True)\nrec_rank = [start] + [x for x in rank_df.index.tolist() if x not in {start, end}][:length-2] + [end]\nprint('REC_RANK:', rec_rank)\n\ntid_rec = 'REC_RANK'\ntraj_dict_rec[tid_rec] = rec_rank",
"7.4 Recommendation by Transition Probabilities\nUse dynamic programming to find a possibly non-simple path, i.e., a walk.",
"def find_path(V, E, ps, pe, L, withNodeWeight=False, alpha=0.5):\n assert(isinstance(V, pd.DataFrame))\n assert(isinstance(E, pd.DataFrame))\n assert(ps in V.index)\n assert(pe in V.index)\n # with sub-tours in trajectory, this is not the case any more, but it is nonsense to recommend such trajectories\n assert(2 < L <= V.index.shape[0]) \n if withNodeWeight == True:\n assert(0 < alpha < 1)\n beta = 1 - alpha\n \n A = pd.DataFrame(data=np.zeros((L-1, V.shape[0]), dtype=np.float), columns=V.index, index=np.arange(2, L+1))\n B = pd.DataFrame(data=np.zeros((L-1, V.shape[0]), dtype=np.int), columns=V.index, index=np.arange(2, L+1))\n A += np.inf\n for v in V.index:\n if v != ps:\n if withNodeWeight == True:\n A.loc[2, v] = alpha * (V.loc[ps, 'weight'] + V.loc[v, 'weight']) + beta * E.loc[ps, v] # ps--v\n else:\n A.loc[2, v] = E.loc[ps, v] # ps--v\n B.loc[2, v] = ps\n \n for l in range(3, L+1):\n for v in V.index:\n if withNodeWeight == True: # ps-~-v1---v\n values = [A.loc[l-1, v1] + alpha * V.loc[v, 'weight'] + beta * E.loc[v1, v] for v1 in V.index]\n else:\n values = [A.loc[l-1, v1] + E.loc[v1, v] for v1 in V.index] # ps-~-v1---v \n \n minix = np.argmin(values)\n A.loc[l, v] = values[minix]\n B.loc[l, v] = V.index[minix]\n \n path = [pe]\n v = path[-1]\n l = L\n #while v != ps: #incorrect if 'ps' happens to appear in the middle of a path\n while l >= 2:\n path.append(B.loc[l, v])\n v = path[-1]\n l -= 1\n path.reverse()\n return path",
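The recurrence implemented by find_path can be sketched on a toy graph. Below is a simplified, standalone version (no node weights), where `cost.loc[u, v]` plays the role of the edge matrix `E` (e.g. negative log transition probabilities); the 3-node cost matrix is made up for illustration.

```python
import numpy as np
import pandas as pd

def dp_walk(cost, ps, pe, L):
    """Min-cost walk of exactly L nodes from ps to pe over a cost matrix
    (a simplified, standalone sketch of find_path without node weights)."""
    nodes = list(cost.index)
    # A[v]: best cost of a 2-node walk ps -> v; B[l][v]: predecessor of v at length l
    A = {v: (np.inf if v == ps else float(cost.loc[ps, v])) for v in nodes}
    B = {2: {v: ps for v in nodes}}
    for l in range(3, L + 1):
        A_new, back = {}, {}
        for v in nodes:
            A_new[v], back[v] = min((A[u] + cost.loc[u, v], u) for u in nodes)
        A, B[l] = A_new, back
    path, v = [pe], pe          # backtrack from the end POI
    for l in range(L, 1, -1):
        v = B[l][v]
        path.append(v)
    return path[::-1]

C = pd.DataFrame([[9, 1, 9], [9, 9, 1], [1, 9, 9]],
                 index=list('abc'), columns=list('abc'), dtype=float)
walk = dp_walk(C, 'a', 'c', 3)
# cheapest 3-node walk from 'a' to 'c' is a -> b -> c (cost 1 + 1 = 2)
```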
"Use integer linear programming (ILP) to find a simple path.",
"def find_path_ILP(V, E, ps, pe, L, withNodeWeight=False, alpha=0.5):\n assert(isinstance(V, pd.DataFrame))\n assert(isinstance(E, pd.DataFrame))\n assert(ps in V.index)\n assert(pe in V.index)\n assert(2 < L <= V.index.shape[0])\n if withNodeWeight == True:\n assert(0 < alpha < 1)\n beta = 1 - alpha\n \n p0 = str(ps); pN = str(pe); N = V.index.shape[0]\n \n # deal with np.inf which will cause ILP solver failure\n Edges = E.copy()\n INF = 1e6\n for p in Edges.index:\n Edges.loc[p, p] = INF \n maxL = np.max(Edges.values.flatten())\n if maxL > INF:\n for p in Edges.index:\n Edges.loc[p, p] = maxL \n \n # REF: pythonhosted.org/PuLP/index.html\n pois = [str(p) for p in V.index] # create a string list for each POI\n pb = pulp.LpProblem('MostLikelyTraj', pulp.LpMinimize) # create problem\n # visit_i_j = 1 means POI i and j are visited in sequence\n visit_vars = pulp.LpVariable.dicts('visit', (pois, pois), 0, 1, pulp.LpInteger) \n # a dictionary contains all dummy variables\n dummy_vars = pulp.LpVariable.dicts('u', [x for x in pois if x != p0], 2, N, pulp.LpInteger)\n \n # add objective\n objlist = []\n if withNodeWeight == True:\n objlist.append(alpha * V.loc[int(p0), 'weight'])\n for pi in [x for x in pois if x != pN]: # from\n for pj in [y for y in pois if y != p0]: # to\n if withNodeWeight == True:\n objlist.append(visit_vars[pi][pj] * (alpha*V.loc[int(pj), 'weight'] + beta*Edges.loc[int(pi), int(pj)]))\n else:\n objlist.append(visit_vars[pi][pj] * Edges.loc[int(pi), int(pj)])\n pb += pulp.lpSum(objlist), 'Objective'\n \n # add constraints, each constraint should be in ONE line\n pb += pulp.lpSum([visit_vars[p0][pj] for pj in pois if pj != p0]) == 1, 'StartAt_p0'\n pb += pulp.lpSum([visit_vars[pi][pN] for pi in pois if pi != pN]) == 1, 'EndAt_pN'\n if p0 != pN:\n pb += pulp.lpSum([visit_vars[pi][p0] for pi in pois]) == 0, 'NoIncoming_p0'\n pb += pulp.lpSum([visit_vars[pN][pj] for pj in pois]) == 0, 'NoOutgoing_pN'\n pb += pulp.lpSum([visit_vars[pi][pj] for pi in pois 
if pi != pN for pj in pois if pj != p0]) == L-1, 'Length'\n for pk in [x for x in pois if x not in {p0, pN}]:\n pb += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) == \\\n pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]), 'ConnectedAt_' + pk\n pb += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) <= 1, 'Enter_' + pk + '_AtMostOnce'\n pb += pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]) <= 1, 'Leave_' + pk + '_AtMostOnce'\n for pi in [x for x in pois if x != p0]:\n for pj in [y for y in pois if y != p0]:\n pb += dummy_vars[pi] - dummy_vars[pj] + 1 <= (N - 1) * (1 - visit_vars[pi][pj]), \\\n 'SubTourElimination_' + pi + '_' + pj\n #pb.writeLP(\"traj_tmp.lp\")\n # solve problem\n pb.solve(pulp.PULP_CBC_CMD(options=['-threads', '6', '-strategy', '1', '-maxIt', '2000000'])) # CBC\n #gurobi_options = [('TimeLimit', '7200'), ('Threads', '8'), ('NodefileStart', '0.9'), ('Cuts', '2')]\n #pb.solve(pulp.GUROBI_CMD(options=gurobi_options)) # GUROBI\n visit_mat = pd.DataFrame(data=np.zeros((len(pois), len(pois)), dtype=np.float), index=pois, columns=pois)\n for pi in pois:\n for pj in pois: visit_mat.loc[pi, pj] = visit_vars[pi][pj].varValue\n\n # build the recommended trajectory\n recseq = [p0]\n while True:\n pi = recseq[-1]\n pj = visit_mat.loc[pi].idxmax()\n assert(round(visit_mat.loc[pi, pj]) == 1)\n recseq.append(pj); \n #print(recseq); sys.stdout.flush()\n if pj == pN: return [int(x) for x in recseq]\n\npoi_logtransmat = np.log(gen_poi_transmat(trajid_list_train, set(poi_info.index), traj_dict, poi_info))\nnodes = poi_info.copy()\nedges = poi_logtransmat.copy()\nedges = -1 * edges # edge weight is negative log of transition probability\nrec_dp = find_path(nodes, edges, start, end, length) # DP\nrec_ilp = find_path_ILP(nodes, edges, start, end, length) # ILP\nprint('REC_DP:', rec_dp)\nprint('REC_ILP:', rec_ilp)\n\ntid_rec = 'REC_DP'\ntraj_dict_rec[tid_rec] = rec_dp\ntid_rec = 'REC_ILP'\ntraj_dict_rec[tid_rec] = 
rec_ilp\n\ntraj_dict_rec\n\nfname = 'traj_rec.kml'\ngen_kml_traj(fname, traj_dict_rec, poi_df)",
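The difference between the DP (which may return a walk that revisits POIs) and the ILP (which enforces a simple path via the MTZ subtour-elimination constraints on the dummy variables) can be seen on a toy instance. The 4-POI graph and edge weights below are made up purely for illustration; a brute-force search stands in for both solvers:

```python
from itertools import product

# Hypothetical edge weights (negative log transition probabilities,
# smaller = more likely); missing edges are effectively forbidden.
INF = 1e6
E = {(0, 1): 1.0, (1, 1): 0.1, (1, 3): 1.0,
     (0, 2): 5.0, (2, 1): 5.0, (1, 2): 5.0, (2, 3): 5.0}

def cost(seq):
    return sum(E.get(edge, INF) for edge in zip(seq, seq[1:]))

def best_sequence(start, end, length, simple):
    best = None
    for mid in product(range(4), repeat=length - 2):
        seq = (start,) + mid + (end,)
        if simple and len(set(seq)) != len(seq):
            continue  # reject walks that revisit a POI
        if best is None or cost(seq) < cost(best):
            best = seq
    return best

walk = best_sequence(0, 3, 4, simple=False)  # what the DP may return
path = best_sequence(0, 3, 4, simple=True)   # what the ILP returns
print(walk, cost(walk))  # (0, 1, 1, 3) 2.1 -- revisits POI 1
print(path, cost(path))  # (0, 1, 2, 3) 11.0
```

The unconstrained optimum lingers at POI 1, which is exactly the kind of recommendation the simple-path constraints are there to rule out.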
"Example on Google maps, \n- the light blue edges represent the real trajectory, \n- green edges represent the recommended trajectories based on POI popularity and POI rankings (the recommendations are the same), \n- the purple edges represent the recommended trajectories based on POI transition probabilities using Viterbi algorithm and ILP.\n8. Disclaimer\n8.1 Problems of Trajectory Construction\n\nProblems of mapping photos to POIs according to their distance, i.e., $200$ meters.\nProblems of split consecutive POI visits by $8$ hours.\nProblems of extract trajectories from a sequence of POI visits such that no loops/subtours exist.\n\n8.2 Example of Terrible Trajectories\nChoose the trajectory with the maximum number of POIs.",
"tid = traj_all.loc[traj_all['trajLen'].idxmax(), 'trajID']\ntid\n\ntraj1 = extract_traj_withloop(tid, visits)\nprint(traj1, 'length:', len(traj1))\n\ntraj2 = extract_traj(tid, traj_all)\nprint(traj2, 'length:', len(traj2))",
"Extract the sequence of photos associated with this trajectory.",
"photo_df = pd.read_csv(fphotos, skipinitialspace=True)\nphoto_df.set_index('Photo_ID', inplace=True)\n#photo_df.head()\n\nphoto_traj = visits[visits['trajID'] == tid]['photoID'].values\nphoto_tdf = photo_df.loc[photo_traj].copy()\n\nphoto_tdf.drop(photo_tdf.columns[-1], axis=1, inplace=True)\nphoto_tdf.drop('Accuracy', axis=1, inplace=True)\nphoto_tdf.sort_values(by='Timestamp', ascending=True, inplace=True)\n\nvisit_df = visits.copy()\nvisit_df.set_index('photoID', inplace=True)\n\nphoto_tdf['poiID'] = visit_df.loc[photo_tdf.index, 'poiID']\nphoto_tdf['poiName'] = poi_df.loc[photo_tdf['poiID'].values, 'Name'].values\nphoto_tdf.head()",
"Save photos dataframe to CSV file.",
"fname = 'photo_traj_df.csv'\nphoto_tdf.to_csv(fname, index=True)",
"Generate KML file with edges between consecutive photos.",
"fname = 'traj_photo_seq.kml'\nns = '{http://www.opengis.net/kml/2.2}'\nk = kml.KML()\nstyid = 'edge_style' # colors in KML: aabbggrr, aa=00 is fully transparent\nsty_edge = styles.Style(id=styid, styles=[styles.LineStyle(color='3fff0000', width=2)])\ndoc = kml.Document(ns, '1', 'Photo sequence of trajectory %d' % tid, description=None, styles=[sty_edge])\nk.append(doc)\nfor j in range(photo_tdf.shape[0]-1):\n p1, p2 = photo_tdf.index[j], photo_tdf.index[j+1]\n poi1, poi2 = visit_df.loc[p1, 'poiID'], visit_df.loc[p2, 'poiID']\n sid = 'Photo_POI%d_POI%d' % (poi1, poi2)\n ext_dict = {'From Photo': str(p1), 'To Photo': str(p2)}\n ext_data = kml.ExtendedData(elements=[kml.Data(name=x, value=ext_dict[x]) for x in sorted(ext_dict.keys())])\n pm = kml.Placemark(ns, sid, sid, description=None, styleUrl='#' + styid, extended_data=ext_data)\n pm.geometry = LineString([(photo_df.loc[x, 'Longitude'], photo_df.loc[x, 'Latitude']) for x in [p1, p2]])\n doc.append(pm)\n\n# Placemark for photos: import from csv file directly\n\n# save to file\nkmlstr = k.to_string(prettyprint=True)\nwith open(fname, 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n f.write(kmlstr)\n print('write to', fname)",
"And the trajectory extracted.",
"fname = 'traj_terrible.kml'\ngen_kml_traj(fname, {tid:traj2}, poi_df)",
"Example on Google maps.\n8.3 Limitations of Google Maps and Nationalmaps\nSome limitations of Google maps and Nationalmaps I encountered during this work:\n- Google Maps rendering inconsistently with the scale factor of icons in styles, no matter using default icon or custom icons.\n - Scale factor of icons was ignored if it was greater than 1.\n - Icons rendered correctly if the scale factor is less than 1.\n - For two icons, if the scale factor of one icon is less than 1, and the other is greater than 1, the two icons were rendered using the same style with the smaller scale factor.\n- Nationalmaps was not as robust as Google maps and didn't support custom icons.\n - Nationalmaps support different size of marker icons specified by the scale factor, but I didn't find a way to change the default icon.\n - Nationalmaps didn't support custom icons, once used, the POIs will be rendered with no icons, only names.\n - Nationalmaps didn't render some KML files that rendered correctly by Google maps (on Chrome/Chromium/Firefox)\n - Nationalmaps rendered the edge of trajectory correctly but not all POIs were rendered."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
scotthuang1989/Python-3-Module-of-the-Week
|
concurrency/multiprocessing/Implementing_MapReduce.ipynb
|
apache-2.0
|
[
"Implementing MapReduce\nThe Pool class can be used to create a simple single-server MapReduce implementation. Although it does not give the full benefits of distributed processing, it does illustrate how easy it is to break some problems down into distributable units of work.\nIn a MapReduce-based system, input data is broken down into chunks for processing by different worker instances. Each chunk of input data is mapped to an intermediate state using a simple transformation. The intermediate data is then collected together and partitioned based on a key value so that all of the related values are together. Finally, the partitioned data is reduced to a result set.",
"import collections\nimport itertools\nimport multiprocessing\n\n\nclass SimpleMapReduce:\n\n def __init__(self, map_func, reduce_func, num_workers=None):\n \"\"\"\n map_func\n\n Function to map inputs to intermediate data. Takes as\n argument one input value and returns a tuple with the\n key and a value to be reduced.\n\n reduce_func\n\n Function to reduce partitioned version of intermediate\n data to final output. Takes as argument a key as\n produced by map_func and a sequence of the values\n associated with that key.\n\n num_workers\n\n The number of workers to create in the pool. Defaults\n to the number of CPUs available on the current host.\n \"\"\"\n self.map_func = map_func\n self.reduce_func = reduce_func\n self.pool = multiprocessing.Pool(num_workers)\n\n def partition(self, mapped_values):\n \"\"\"Organize the mapped values by their key.\n Returns an unsorted sequence of tuples with a key\n and a sequence of values.\n \"\"\"reduce_func\n partitioned_data = collections.defaultdict(list)\n for key, value in mapped_values:\n partitioned_data[key].append(value)\n return partitioned_data.items()\n\n def __call__(self, inputs, chunksize=1):\n \"\"\"Process the inputs through the map and reduce functions\n given.\n\n inputs\n An iterable containing the input data to be processed.\n\n chunksize=1\n The portion of the input data to hand to each worker.\n This can be used to tune performance during the mapping\n phase.\n \"\"\"\n map_responses = self.pool.map(\n self.map_func,\n inputs,\n chunksize=chunksize,\n )\n partitioned_data = self.partition(\n itertools.chain(*map_responses)\n )\n reduced_values = self.pool.map(\n self.reduce_func,\n partitioned_data,\n )\n return reduced_values",
"The following example script uses SimpleMapReduce to counts the “words” in the reStructuredText source for this article, ignoring some of the markup.",
"import multiprocessing\nimport string\n\nSimpleMapReduce\n\n\ndef file_to_words(filename):\n \"\"\"Read a file and return a sequence of\n (word, occurences) values.\n \"\"\"\n STOP_WORDS = set([\n 'a', 'an', 'and', 'are', 'as', 'be', 'by', 'for', 'if',\n 'in', 'is', 'it', 'of', 'or', 'py', 'rst', 'that', 'the',\n 'to', 'with',\n ])\n TR = str.maketrans({\n p: ' '\n for p in string.punctuation\n }).rst\n\n print('{} reading {}'.format(\n multiprocessing.current_process().name, filename))\n output = []\n\n with open(filename, 'rt') as f:\n for line in f:\n # Skip comment lines.\n if line.lstrip().startswith('..'):\n continue\n line = line.translate(TR) # Strip punctuation\n for word in line.split():\n word = word.lower()\n if word.isalpha() and word not in STOP_WORDS:\n output.append((word, 1))\n return output.rst\n\n\ndef count_words(item):\n \"\"\"Convert the partitioned data for a word to a\n tuple containing the word and the number of occurences.\n \"\"\"\n word, occurences = item\n return (word, sum(occurences))\n\n\nif __name__ == '__main__':\n import operator\n import glob\n\n input_files = glob.glob('*.rst')\n\n mapper = SimpleMapReduce(file_to_words, count_words)\n word_counts = mapper(input_files)\n word_counts.sort(key=operator.itemgetter(1))\n word_counts.reverse()\n\n print('\\nTOP 20 WORDS BY FREQUENCY\\n')\n top20 = word_counts[:20]\n longest = max(len(word) for word, count in top20)\n for word, count in top20:\n print('{word:<{len}}: {count:5}'.format(\n len=longest + 1,\n word=word,\n count=count)\n )",
"The file_to_words() function converts each input file to a sequence of tuples containing the word and the number 1 (representing a single occurrence). The data is divided up by partition() using the word as the key, so the resulting structure consists of a key and a sequence of 1 values representing each occurrence of the word. The partitioned data is converted to a set of tuples containing a word and the count for that word by count_words() during the reduction phase."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
feststelltaste/software-analytics
|
notebooks/Finding tested code with jQAssistant.ipynb
|
gpl-3.0
|
[
"Introduction\nIn an upcoming analysis, we want to calculate the structural similarity between test cases. For this, we need the information which test methods call which code in the application (the \"production code\"). \nIn this blog post, I'll show how you can get this information by using jQAssistant for a Java application. With jQAssistant, you can scan the structural information of your software. I'll also explain the relevant database query that delivers the information we need later on.\nDataset\nI've scanned a small pet project of mine called \"DropOver\" that was originally developed as a web application for organizing parties or bar-hoppings. I've just added jQAssistant as a Maven plugin to my project's Maven build (see here for a mini tutorial). The structures of this application are stored by jQAssistant in a property graph within the graph database Neo4j. A subgraph with the structural information that's relevant for our purposes looks like this:\n\nWe can see the scanned software entities like Java types (red) or methods (blue) as well their relationships with each other. We can now explore the database's content with the included Neo4j browser frontend or access the data with a programming language. I use Python (the programming language we'll write our analysis later on) with the py2neo module (the bridge between Python and Neo4j). The information we need can be retrieved by creating and executing a Cypher query (explained in the following) – Neo4j's language for accessing information in the property graph.\nLast, we store the results in a Pandas DataFrame named invocations for a nice tabular representation of the outputs and for further analysis.",
"import py2neo\nimport pandas as pd\n\ngraph = py2neo.Graph()\n\nquery = \"\"\"\nMATCH \n (testMethod:Method)\n -[:ANNOTATED_BY]->()-[:OF_TYPE]->\n (:Type {fqn:\"org.junit.Test\"}),\n (testType:Type)-[:DECLARES]->(testMethod),\n (type)-[:DECLARES]->(method:Method),\n (testMethod)-[i:INVOKES]->(method)\nWHERE\n NOT type.name ENDS WITH \"Test\" \n AND type.fqn STARTS WITH \"at.dropover\"\n AND NOT method.signature CONTAINS \"<init>\"\nRETURN \n testType.name as test_type,\n testMethod.signature as test_method,\n type.name as prod_type,\n method.signature as prod_method,\n COUNT(DISTINCT i) as invocations\nORDER BY \n test_type, test_method, prod_type, prod_method\n\"\"\"\n\ninvocations = pd.DataFrame(graph.data(query))\n# reverse sort columns for better representation\ninvocations = invocations[invocations.columns[::-1]]\ninvocations.head()",
"Cypher query explained\nLet's go through that query from above step by step. The Cypher query that finds all test methods that call methods of our production types works as follows:\nIn the MATCH clause, we start our search for particular structural information. We first identify all test methods. These are methods that are annotated by @Test, which is an annotation that the JUnit4 framework provides.\ncypher\nMATCH\n (testMethod:Method)-[:ANNOTATED_BY]->()-[:OF_TYPE]->(:Type {fqn:\"org.junit.Test\"})\nNext, we find all the test classes that declare (via the DECLARES relationship type) all test methods from above.\ncypher\n (testType:Type)-[:DECLARES]->(testMethod)\nWith the same approach, we first identify all the Java types and methods (at first regardless of their meaning. Later, we'll define them as production types and methods). \ncypher\n (type)-[:DECLARES]->(method:Method)\nLast, we find test methods that call methods of the other methods by querying the appropriate INVOKES relationship.\ncypher\n (testMethod)-[i:INVOKES]->(method)\nIn the WHERE clause, we define what we see as production type (and thus implicitly production method). We achieve this by saying that a production type is not a test and that the types must be within our application. These are all types that start with the fqn (full qualified name) at.dropover. We also filter out any calls to constructors, because those are irrelevant for our analysis.\ncypher\nWHERE\n NOT type.name ENDS WITH \"Test\" \n AND type.fqn STARTS WITH \"at.dropover\"\n AND NOT method.signature CONTAINS \"<init>\"\nIn the RETURN clause, we just return the information needed for further analysis. These are all names of our test and production types as well as the signatures of the test methods and production methods. We also count the number of calls from the test methods to the production methods. 
This is a nice indicator for the cohesion of a test method to a production method.\ncypher\nRETURN\n testType.name as test_type,\n testMethod.signature as test_method,\n type.name as prod_type,\n method.signature as prod_method,\n COUNT(DISTINCT i) as invocations\nIn the ORDER BY clause, we simply order the results in a useful way (and for reproducible results):\ncypher\nORDER BY\n test_type, test_method, prod_type, prod_method\nA long explanation, but if you are familiar with Cypher and the underlying schema of your graph, you write those queries within half a minute.\nData export\nBecause we need that data in a follow-up analysis, we store the information in a semicolon-separated file.",
"invocations.to_csv(\"datasets/test_code_invocations.csv\", sep=\";\", index=False)",
"Conclusion\nThis post was just the prelude for more in-depth analysis for structural test case similarity. We quickly got the information about which test method calls which production method. Albeit its a pure static (or structural) view of our code, it delivers valuable insights in further analysis.\nStay tuned!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IanHawke/one-in-a-million
|
one-in-a-million.ipynb
|
mit
|
[
"It's a million to one chance...\n\n“What d'you mean?” he said. \n“Well, all right, last desperate million-to-one chances always work, right, no problem, but. . . well, it's pretty wossname, specific. I mean, isn't it?” \n“You tell me,” said Nobby. \n“What if it's just a thousand-to-one chance?” said Colon agonisedly. \n“What?”\n“Anyone ever heard of a thousand-to-one shot coming up?”\nCarrot looked up. “Don't be daft, Sergeant,” he said. “No-one ever saw a thousand-to-one chance come up. The odds against it are-” his lips moved- “millions to one.”\n\nGuards! Guards!, Terry Pratchett\nWho's going to win? Will it be Colon, with his one-in-a-million shot? Or will the dragon leave him a drifting cloud of ash?\nWe're going to create a function that randomly says who wins. To outline how this works, we'll define a function that always works one way first.",
"from numpy.random import rand\n\ndef colon_wins(p_colon):\n \"\"\"\n If a random number is less than `p_colon` then Colon wins, otherwise the dragon does.\n \n Parameters\n ----------\n \n p_colon: scalar\n Probability that Colon wins\n \n Notes\n -----\n \n In this case we do not use a random number, but fix it to be zero: \n this way, Colon always wins\n \"\"\"\n \n return 0.0 <= p_colon",
"Let's see this working 20 times, when the probability of Colon winning is 50%:",
"for i in range(20):\n if colon_wins(0.5):\n print(\"Colon wins!\")\n else:\n print(\"What's that ash cloud?\")",
"Now let's check we still get the same result, even when the probability of Colon winning is only 10%:",
"for i in range(20):\n if colon_wins(0.1):\n print(\"Colon wins!\")\n else:\n print(\"What's that ash cloud?\")",
"Now we'll re-define the function so that it's using proper random numbers:",
"def colon_wıns(p_colon):\n \"\"\"\n If a random number is less than `p_colon` then Colon wins, otherwise the dragon does.\n \n Parameters\n ----------\n \n p_colon: scalar\n Probability that Colon wins\n \"\"\"\n \n return rand() <= p_colon",
"Now, when called 20 times, when the probability of Colon winning is 50%:",
"for i in range(20):\n if colon_wıns(0.5):\n print(\"Colon wins!\")\n else:\n print(\"What's that ash cloud?\")",
"And, when called 20 times, when the probability of Colon winning is 10%:",
"for i in range(20):\n if colon_wıns(0.1):\n print(\"Colon wins!\")\n else:\n print(\"What's that ash cloud?\")",
"Let's check that, as the number of trials is increased, we really get the probability we expect:",
"trials = 1000000\nsuccess = 0\nfor i in range(trials):\n success += colon_wıns(0.5)\nprint(\"50% success in theory: {:.2f}% success in {} trials.\".\n format(success/trials*100, trials))\n\ntrials = 1000000\nsuccess = 0\nfor i in range(trials):\n success += colon_wıns(0.1)\nprint(\"10% success in theory: {:.2f}% success in {} trials.\".\n format(success/trials*100, trials))",
"Finally, what happens when the probability of Colon winning is minute: 0.0001%, or Pratchett's famous one-in-a-million chance?",
"trials = 1000000\nsuccess = 0\nfor i in range(trials):\n success += colon_wins(1e-6)\nprint(\"One-in-a-million success in theory: {:.2f}% success in {} trials.\".\n format(success/trials*100, trials))",
"It's a miracle! The function knows Pratchett's rule!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.22/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute source power spectral density (PSD) of VectorView and OPM data\nHere we compute the resting state from raw for data recorded using\na Neuromag VectorView system and a custom OPM system.\nThe pipeline is meant to mostly follow the Brainstorm [1]\nOMEGA resting tutorial pipeline <bst_omega_>.\nThe steps we use are:\n\nFiltering: downsample heavily.\nArtifact detection: use SSP for EOG and ECG.\nSource localization: dSPM, depth weighting, cortically constrained.\nFrequency: power spectral density (Welch), 4 sec window, 50% overlap.\nStandardize: normalize by relative power for each source.\n :depth: 1\n\nPreprocessing",
"# Authors: Denis Engemann <denis.engemann@gmail.com>\n# Luke Bloy <luke.bloy@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\n\nfrom mne.filter import next_fast_len\n\nimport mne\n\n\nprint(__doc__)\n\ndata_path = mne.datasets.opm.data_path()\nsubject = 'OPM_sample'\n\nsubjects_dir = op.join(data_path, 'subjects')\nbem_dir = op.join(subjects_dir, subject, 'bem')\nbem_fname = op.join(subjects_dir, subject, 'bem',\n subject + '-5120-5120-5120-bem-sol.fif')\nsrc_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)\nvv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'\nvv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'\nvv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'\nopm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'\nopm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'\nopm_trans_fname = None\nopm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')",
"Load data, resample. We will store the raw objects in dicts with entries\n\"vv\" and \"opm\" to simplify housekeeping and simplify looping later.",
"raws = dict()\nraw_erms = dict()\nnew_sfreq = 90. # Nyquist frequency (45 Hz) < line noise freq (50 Hz)\nraws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error') # ignore naming\nraws['vv'].load_data().resample(new_sfreq)\nraws['vv'].info['bads'] = ['MEG2233', 'MEG1842']\nraw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')\nraw_erms['vv'].load_data().resample(new_sfreq)\nraw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']\n\nraws['opm'] = mne.io.read_raw_fif(opm_fname)\nraws['opm'].load_data().resample(new_sfreq)\nraw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)\nraw_erms['opm'].load_data().resample(new_sfreq)\n# Make sure our assumptions later hold\nassert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']",
"Do some minimal artifact rejection just for VectorView data",
"titles = dict(vv='VectorView', opm='OPM')\nssp_ecg, _ = mne.preprocessing.compute_proj_ecg(\n raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)\nraws['vv'].add_proj(ssp_ecg, remove_existing=True)\n# due to how compute_proj_eog works, it keeps the old projectors, so\n# the output contains both projector types (and also the original empty-room\n# projectors)\nssp_ecg_eog, _ = mne.preprocessing.compute_proj_eog(\n raws['vv'], n_grad=1, n_mag=1, ch_name='MEG0112')\nraws['vv'].add_proj(ssp_ecg_eog, remove_existing=True)\nraw_erms['vv'].add_proj(ssp_ecg_eog)\nfig = mne.viz.plot_projs_topomap(raws['vv'].info['projs'][-4:],\n info=raws['vv'].info)\nfig.suptitle(titles['vv'])\nfig.subplots_adjust(0.05, 0.05, 0.95, 0.85)",
"Explore data",
"kinds = ('vv', 'opm')\nn_fft = next_fast_len(int(round(4 * new_sfreq)))\nprint('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))\nfor kind in kinds:\n fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)\n fig.suptitle(titles[kind])\n fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)",
"Alignment and forward",
"# Here we use a reduced size source space (oct5) just for speed\nsrc = mne.setup_source_space(\n subject, 'oct5', add_dist=False, subjects_dir=subjects_dir)\n# This line removes source-to-source distances that we will not need.\n# We only do it here to save a bit of memory, in general this is not required.\ndel src[0]['dist'], src[1]['dist']\nbem = mne.read_bem_solution(bem_fname)\nfwd = dict()\n\n# check alignment and generate forward for VectorView\nkwargs = dict(azimuth=0, elevation=90, distance=0.6, focalpoint=(0., 0., 0.))\nfig = mne.viz.plot_alignment(\n raws['vv'].info, trans=vv_trans_fname, subject=subject,\n subjects_dir=subjects_dir, dig=True, coord_frame='mri',\n surfaces=('head', 'white'))\nmne.viz.set_3d_view(figure=fig, **kwargs)\nfwd['vv'] = mne.make_forward_solution(\n raws['vv'].info, vv_trans_fname, src, bem, eeg=False, verbose=True)",
"And for OPM:",
"with mne.use_coil_def(opm_coil_def_fname):\n fig = mne.viz.plot_alignment(\n raws['opm'].info, trans=opm_trans_fname, subject=subject,\n subjects_dir=subjects_dir, dig=False, coord_frame='mri',\n surfaces=('head', 'white'))\n mne.viz.set_3d_view(figure=fig, **kwargs)\n fwd['opm'] = mne.make_forward_solution(\n raws['opm'].info, opm_trans_fname, src, bem, eeg=False, verbose=True)\n\ndel src, bem",
"Compute and apply inverse to PSD estimated using multitaper + Welch.\nGroup into frequency bands, then normalize each source point and sensor\nindependently. This makes the value of each sensor point and source location\nin each frequency band the percentage of the PSD accounted for by that band.",
"freq_bands = dict(\n delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))\ntopos = dict(vv=dict(), opm=dict())\nstcs = dict(vv=dict(), opm=dict())\n\nsnr = 3.\nlambda2 = 1. / snr ** 2\nfor kind in kinds:\n noise_cov = mne.compute_raw_covariance(raw_erms[kind])\n inverse_operator = mne.minimum_norm.make_inverse_operator(\n raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)\n stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(\n raws[kind], inverse_operator, lambda2=lambda2,\n n_fft=n_fft, dB=False, return_sensor=True, verbose=True)\n topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)\n stc_norm = stc_psd.sum() # same operation on MNE object, sum across freqs\n # Normalize each source point by the total power across freqs\n for band, limits in freq_bands.items():\n data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)\n topos[kind][band] = mne.EvokedArray(\n 100 * data / topo_norm, sensor_psd.info)\n stcs[kind][band] = \\\n 100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data\n del inverse_operator\ndel fwd, raws, raw_erms",
"Now we can make some plots of each frequency band. Note that the OPM head\ncoverage is only over right motor cortex, so only localization\nof beta is likely to be worthwhile.\nTheta",
"def plot_band(kind, band):\n \"\"\"Plot activity within a frequency band on the subject's brain.\"\"\"\n title = \"%s %s\\n(%d-%d Hz)\" % ((titles[kind], band,) + freq_bands[band])\n topos[kind][band].plot_topomap(\n times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',\n time_format=title)\n brain = stcs[kind][band].plot(\n subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',\n time_label=title, title=title, colormap='inferno',\n time_viewer=False, show_traces=False,\n clim=dict(kind='percent', lims=(70, 85, 99)), smoothing_steps=10)\n brain.show_view(dict(azimuth=0, elevation=0), roll=0)\n return fig, brain\n\n\nfig_theta, brain_theta = plot_band('vv', 'theta')",
"Alpha",
"fig_alpha, brain_alpha = plot_band('vv', 'alpha')",
"Beta\nHere we also show OPM data, which shows a profile similar to the VectorView\ndata beneath the sensors. VectorView first:",
"fig_beta, brain_beta = plot_band('vv', 'beta')",
"Then OPM:",
"fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')",
"Gamma",
"fig_gamma, brain_gamma = plot_band('vv', 'gamma')",
"References\n.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.\n Brainstorm: A User-Friendly Application for MEG/EEG Analysis.\n Computational Intelligence and Neuroscience, vol. 2011, Article ID\n 879716, 13 pages, 2011. doi:10.1155/2011/879716"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
brianray/puppy_dec_2015
|
PuPPy Dec 2015-Parts of Speech in Python.ipynb
|
apache-2.0
|
[
"```\nFirst, let's analyze some text...\n...\n```\n“Each of us is full of shit in our own special way. We are all shitty little snowflakes dancing in the universe.”\n― Lewis Black, Me of Little Faith\n<img src=\"files/don.png\">\nAlice's Case\n<img src=\"files/plus.png\" width=\"60%\">\nOverview of Taggers/Parsers\nTagging and Parsing into Trees are different:\n\nTagging: Tagging every word [fast]\nParsing: Tagging and putting into a Tree [slow]\nChunking: Gives pieces of Trees [medium]\nPOSH Rules: Special fact and deep and context aware [amazing]\n\nOther important words:\n\nProbabilistic Parsing\nChart Parsing\nGrammar\nStrategy\n\n\n\nNLTK is the mother of all NLP\nso many parsers:\n\npyStatParser (python yay!, little slow, but fun)\nStanford (popular) and btw, online! => http://nlp.stanford.edu:8080/parser/\nTextBlob (python yay! NLTK simplification)\nclips Pattern (python yay!)\nMaltParser (java 1.8)\nspaCy (python yay!)\n\nExample Parsers/Taggers",
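The tagging-versus-chunking distinction above can be sketched without any of these libraries. The toy tag dictionary and the greedy NP rule below are invented for illustration only — real taggers are statistical:

```python
# Toy tagger/chunker: one POS tag per word, then greedy grouping of
# determiner/adjective/noun runs into NP chunks. Tags are hand-made.
TAGS = {'the': 'DT', 'black': 'JJ', 'cat': 'NN', 'sat': 'VBD',
        'on': 'IN', 'mat': 'NN'}

def tag(words):
    # Tagging: one label per word (unknown words default to NN).
    return [(w, TAGS.get(w, 'NN')) for w in words]

def chunk_np(tagged):
    """Greedily group DT/JJ/NN runs into ('NP', [...]) chunks."""
    chunks, current = [], []
    for word, t in tagged:
        if t in ('DT', 'JJ', 'NN'):
            current.append(word)
        else:
            if current:
                chunks.append(('NP', current))
                current = []
            chunks.append((t, [word]))
    if current:
        chunks.append(('NP', current))
    return chunks

tagged = tag("the black cat sat on the mat".split())
print(tagged)
print(chunk_np(tagged))
```

Tagging labels every word; chunking is the cheap middle ground between that and full tree parsing.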
"sent = \"Each of us is full of shit in our own special way\"\n\n# setup display for demo\n%matplotlib inline\nimport os\nos.environ['DISPLAY'] = 'localhost:1'",
"pyStatParser",
"from stat_parser import Parser\nparser = Parser()\nparser.parse(sent)\ntree = parser.parse(sent) # returns nltk Tree instance\ntree",
"TextBlob",
"from textblob import TextBlob\nblob = TextBlob(sent)\nblob.parse()",
"MaltParser",
"import nltk\nmp = nltk.parse.malt.MaltParser(os.getcwd(),\n model_filename=\"engmalt.linear-1.7.mco\")\nmp.parse_one(sent.split()).tree()",
"Pattern",
"from pattern.en import parse, pprint\n\ns = parse(sent,\n tokenize = True, # Tokenize the input, i.e. split punctuation from words.\n tags = True, # Find part-of-speech tags.\n chunks = True, # Find chunk tags, e.g. \"the black cat\" = NP = noun phrase.\n relations = True, # Find relations between chunks.\n lemmata = True, # Find word lemmata.\n light = False) \npprint(s)",
"spaCy",
"from spacy.en import English\nparser = English()\nparsedData = parser(unicode(sent))\n\nfor i, token in enumerate(parsedData):\n print(\"original:\", token.orth, token.orth_)\n print(\"lowercased:\", token.lower, token.lower_)\n print(\"lemma:\", token.lemma, token.lemma_)\n print(\"shape:\", token.shape, token.shape_)\n print(\"prefix:\", token.prefix, token.prefix_)\n print(\"suffix:\", token.suffix, token.suffix_)\n print(\"log probability:\", token.prob)\n print(\"Brown cluster id:\", token.cluster)\n print(\"----------------------------------------\")\n if i > 1:\n break",
"<a href=\"https://api.spacy.io/displacy/index.html?full=Each of us is full of shit in our own special way. We are all shitty little snowflakes dancing in the universe.\" target=\"_new\">Interactive Example</a>\nWord Language Graph",
"from visualize_word_graph import draw_graph \ndraw_graph(\"dog\")\n\ndraw_graph(\"noise\", hypernym=True)",
"Alice's Yelp Data",
"bad_sounds =['The sound in the place is terrible.',\n 'dining with clatter and the occasional smell of BMW exausts',\n 'Also, the acoustics are not conducive to having any sort of conversation.']\nnot_bad_sounds = [\"not to sound like a snob\",\n \"at your table and you can tune the sound to whichever game you're interested in\",\n \"oh god I sound old!\"]",
"1. parts of speech for each",
"from pattern.en import parse, pprint\n\ndef print_parts(sents):\n for sent in sents:\n s = parse(sent,\n tokenize = True, # Tokenize the input, i.e. split punctuation from words.\n tags = True, # Find part-of-speech tags.\n chunks = True, # Find chunk tags, e.g. \"the black cat\" = NP = noun phrase.\n relations = True, # Find relations between chunks.\n lemmata = True, # Find word lemmata.\n light = False) \n print sent\n pprint(s)\nsents = bad_sounds + not_bad_sounds\nprint_parts(bad_sounds + not_bad_sounds)",
"Penn Treebank Project Chunks <a href=\"tagguide.pdf\">guide</a>\nparts\n<table class=\"border\">\n<tbody>\n<tr>\n<td><span class=\"smallcaps\">Tag </span></td>\n<td><span class=\"smallcaps\">Description </span></td>\n<td class=\"smallcaps\">Example</td>\n</tr>\n<tr>\n<td><span class=\"postag\">CC </span></td>\n<td>conjunction, coordinating</td>\n<td><em>and, or, but</em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">CD </span></td>\n<td>cardinal number</td>\n<td><em>five, three, 13%</em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">DT </span></td>\n<td>determiner</td>\n<td><em>the, a, these <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">EX </span></td>\n<td>existential there</td>\n<td><em><span style=\"text-decoration: underline;\">there</span> were six boys <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">FW </span></td>\n<td>foreign word</td>\n<td><em>mais <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">IN </span></td>\n<td>conjunction, subordinating or preposition</td>\n<td><em>of, on, before, unless <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">JJ </span></td>\n<td>adjective</td>\n<td><em>nice, easy </em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">JJR </span></td>\n<td>adjective, comparative</td>\n<td><em>nicer, easier</em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">JJS </span></td>\n<td>adjective, superlative</td>\n<td><em>nicest, easiest <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">LS </span></td>\n<td>list item marker</td>\n<td><em> </em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">MD </span></td>\n<td>verb, modal auxillary</td>\n<td><em>may, should <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">NN </span></td>\n<td>noun, singular or mass</td>\n<td><em>tiger, chair, laughter <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">NNS </span></td>\n<td>noun, plural</td>\n<td><em>tigers, chairs, insects <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">NNP </span></td>\n<td>noun, proper 
singular</td>\n<td><em>Germany, God, Alice <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">NNPS </span></td>\n<td>noun, proper plural</td>\n<td><em>we met two <span style=\"text-decoration: underline;\">Christmases</span> ago <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">PDT </span></td>\n<td>predeterminer</td>\n<td><em><span style=\"text-decoration: underline;\">both</span> his children <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">POS</span></td>\n<td>possessive ending</td>\n<td><em>'s</em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">PRP </span></td>\n<td>pronoun, personal</td>\n<td><em>me, you, it <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">PRP$ </span></td>\n<td>pronoun, possessive</td>\n<td><em>my, your, our <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">RB </span></td>\n<td>adverb</td>\n<td><em>extremely, loudly, hard <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">RBR </span></td>\n<td>adverb, comparative</td>\n<td><em>better <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">RBS </span></td>\n<td>adverb, superlative</td>\n<td><em>best <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">RP </span></td>\n<td>adverb, particle</td>\n<td><em>about, off, up <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">SYM </span></td>\n<td>symbol</td>\n<td><em>% <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">TO </span></td>\n<td>infinitival to</td>\n<td><em>what <span style=\"text-decoration: underline;\">to</span> do? 
<br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">UH </span></td>\n<td>interjection</td>\n<td><em>oh, oops, gosh <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">VB </span></td>\n<td>verb, base form</td>\n<td><em>think <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">VBZ </span></td>\n<td>verb, 3rd person singular present</td>\n<td><em>she <span style=\"text-decoration: underline;\">thinks </span><br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">VBP </span></td>\n<td>verb, non-3rd person singular present</td>\n<td><em>I <span style=\"text-decoration: underline;\">think </span><br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">VBD </span></td>\n<td>verb, past tense</td>\n<td><em>they <span style=\"text-decoration: underline;\">thought </span><br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">VBN </span></td>\n<td>verb, past participle</td>\n<td><em>a <span style=\"text-decoration: underline;\">sunken</span> ship <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">VBG </span></td>\n<td>verb, gerund or present participle</td>\n<td><em><span style=\"text-decoration: underline;\">thinking</span> is fun <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">WDT </span></td>\n<td><em>wh</em>-determiner</td>\n<td><em>which, whatever, whichever <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">WP </span></td>\n<td><em>wh</em>-pronoun, personal</td>\n<td><em>what, who, whom <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">WP$$</span></td>\n<td><em>wh</em>-pronoun, possessive</td>\n<td><em>whose, whosever <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">WRB</span></td>\n<td><em>wh</em>-adverb</td>\n<td><em>where, when <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">. 
</span></td>\n<td>punctuation mark, sentence closer</td>\n<td><em>.;?* <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">, </span></td>\n<td>punctuation mark, comma</td>\n<td><em>, <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">: </span></td>\n<td>punctuation mark, colon</td>\n<td><em>: <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">( </span></td>\n<td>contextual separator, left paren</td>\n<td><em>( <br></em></td>\n</tr>\n<tr>\n<td><span class=\"postag\">) </span></td>\n<td>contextual separator, right paren</td>\n<td><em>) <br></em></td>\n</tr>\n</tbody>\n</table>\n\nchunks\n<table class=\"border\">\n<tbody>\n<tr>\n<td><span class=\"smallcaps\">Tag </span></td>\n<td><span class=\"smallcaps\">Description </span></td>\n<td><span class=\"smallcaps\">Words </span></td>\n<td><span class=\"smallcaps\">Example </span></td>\n<td align=\"right\">%</td>\n</tr>\n<tr>\n<td><span class=\"postag\">NP </span></td>\n<td>noun phrase<span class=\"postag\"> </span></td>\n<td><span class=\"postag\">DT</span>+<span class=\"postag\">RB</span>+<span class=\"postag\">JJ</span>+<span class=\"postag\">NN</span> + <span class=\"postag\">PR</span></td>\n<td><em>the strange bird</em></td>\n<td align=\"right\"> 51</td>\n</tr>\n<tr>\n<td><span class=\"postag\">PP </span></td>\n<td>prepositional phrase</td>\n<td><span class=\"postag\">TO</span>+<span class=\"postag\">IN </span></td>\n<td><em>in between</em></td>\n<td align=\"right\"> 19</td>\n</tr>\n<tr>\n<td><span class=\"postag\">VP </span></td>\n<td>verb phrase </td>\n<td><span class=\"postag\">RB</span>+<span class=\"postag\">MD</span>+<span class=\"postag\">VB </span></td>\n<td><em>was looking<br></em></td>\n<td align=\"right\">9</td>\n</tr>\n<tr>\n<td><span class=\"postag\">ADVP</span></td>\n<td>adverb phrase</td>\n<td><span class=\"postag\">RB</span></td>\n<td><em>also<br></em></td>\n<td align=\"right\"> 6</td>\n</tr>\n<tr>\n<td><span class=\"postag\">ADJP</span></td>\n<td>adjective phrase<span class=\"postag\"> 
</span></td>\n<td><span class=\"postag\">CC</span>+<span class=\"postag\">RB</span>+<span class=\"postag\">JJ</span></td>\n<td><em>warm and cosy</em></td>\n<td align=\"right\"> 3</td>\n</tr>\n<tr>\n<td><span class=\"postag\">SBAR</span></td>\n<td>subordinating conjunction </td>\n<td><span class=\"postag\">IN</span></td>\n<td><em><span style=\"text-decoration: underline;\">whether</span> or not<br></em></td>\n<td align=\"right\">3</td>\n</tr>\n<tr>\n<td><span class=\"postag\">PRT </span></td>\n<td>particle</td>\n<td><span class=\"postag\">RP</span></td>\n<td><em><span style=\"text-decoration: underline;\">up</span> the stairs</em></td>\n<td align=\"right\"> 1</td>\n</tr>\n<tr>\n<td><span class=\"postag\">INTJ</span></td>\n<td>interjection</td>\n<td><span class=\"postag\">UH</span></td>\n<td><em>hello</em><em><br></em></td>\n<td align=\"right\"> 0</td>\n</tr>\n</tbody>\n</table>\n\n2. search for patterns",
"from pattern.en import parsetree\nfrom pattern.search import search\n\nfor sent in sents:\n t = parsetree(sent)\n print \n print sent\n print \"Tagged Sent:\", t\n print \"Verbs:\", search('VB*', t) # verbs\n print \"ADJP:\", search('ADJP', t) # verbs \n print \"Nouns:\", search('NN', t) # all nouns",
"3. create similar word list (stemming + synsets)",
"from nltk.corpus import wordnet as wn\nfrom pattern.en import parsetree\nfrom pattern.search import taxonomy, WordNetClassifier, search\n\ntaxonomy.classifiers.append(WordNetClassifier())\n\ndef get_parts(word, pos, recursive=False):\n parts = [word, ]\n parts += taxonomy.children(word, pos=pos, recursive=recursive)\n parts += taxonomy.parents(word, pos=pos, recursive=recursive)\n return parts\n\ndef word_search(t, word, pos):\n parts = get_parts(word, pos)\n results = search(pos, t)\n for result in results:\n # print result.string, parts\n if any(x in result.string.split() for x in parts):\n return True\n return False\n\ndef run_a_rule(sent, word, pos):\n t = parsetree(sent)\n return word_search(t, word, pos)\n",
"3. test",
"print \"1. 'sound' is a NN\"\nprint run_a_rule(sents[0], 'noise', 'NN')\n\nprint \"2. clatter is a NN\"\nprint run_a_rule(sents[1], 'noise', 'NN')\n\nprint \"3. acoustics is NNS and RB Not\"\nprint run_a_rule(sents[2], 'acoustics', 'NNS') and run_a_rule(sents[2], 'not', 'RB')\n\nprint \"4. sound is a VB\"\nprint run_a_rule(sents[3], 'noise', 'VB*') \n\nprint \"5. Sounds is JJ\"\nprint run_a_rule(sents[4], 'sound', 'JJ') \n\nprint \"6. sound is VBP\"\nprint run_a_rule(sents[5], 'noise', 'VB*')",
"4. create a feature extractor function",
"def ext_func(tgt):\n return bool(not (run_a_rule(tgt, 'noise', 'VB*') and not run_a_rule(tgt, 'sound', 'JJ'))\n and (run_a_rule(tgt, 'noise', 'NN') or run_a_rule(tgt, 'acoustics', 'NNS') or\n (run_a_rule(tgt, 'acoustics', 'NNS') and run_a_rule(tgt, 'not', 'RB'))))\n \nprint \"bad noises in review:\"\nfor sent in bad_sounds:\n print \"\\t\" + sent\n assert(ext_func(sent) == True)\nprint\nprint \"no mention of bad noises:\"\nfor sent in not_bad_sounds:\n print \"\\t\" + sent\n assert(ext_func(sent) == False)\n",
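`ext_func` is essentially a boolean rule engine over word/POS tests. A library-free sketch of the same shape — `has` below is an invented stand-in for `run_a_rule`, operating on pre-tagged (word, tag) pairs rather than on Pattern parse trees:

```python
# Stand-in for run_a_rule: does a tagged sentence contain a given word
# carrying a given POS prefix?  'VB*' matches VB, VBP, VBZ, ...
def has(tagged, word, pos):
    pat = pos.rstrip('*')
    return any(w == word and t.startswith(pat) for w, t in tagged)

def bad_noise_rule(tagged):
    # Same shape as ext_func: "noise as a noun, but not noise as a verb".
    return has(tagged, 'noise', 'NN') and not has(tagged, 'noise', 'VB*')

good = [('the', 'DT'), ('noise', 'NN'), ('is', 'VBZ'), ('terrible', 'JJ')]
bad = [('I', 'PRP'), ('noise', 'VBP'), ('around', 'RB')]  # invented verb use
print(bad_noise_rule(good), bad_noise_rule(bad))
```

Composing a handful of such predicates with `and`/`or`/`not` is all the feature extractor above is doing.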
"Machine Learning Example",
"import zipfile\nimport pickle\nfrom lxml import etree\nfrom StringIO import StringIO\n\nzf = zipfile.ZipFile('nhtsa_as_xml.zip', 'r')\nnhtsa_injured = zf.read('nhtsa_injured.xml')\nnhtsa_not_injured = zf.read('nhtsa_not_injured.xml')\nxml_injured = etree.parse(StringIO(nhtsa_injured))\nxml_not_injured = etree.parse(StringIO(nhtsa_not_injured))\n\n\ndef injured(l):\n return ['0' != str(x) and 'injured' or 'notinjured' for x in l]\n\n\ndef data(x):\n out = [x.xpath(\"//rows/row/@c1\"),\n injured(x.xpath(\"//rows/row/@c8\")),\n x.xpath(\"//rows/row/@c2\")]\n return list(reversed(zip(*out)))\n\n\nxml_injured_data = data(xml_injured)[:800]\nxml_not_injured_data = data(xml_not_injured)[:800]\n\nxml_injured_data[0]\n\nfrom visualize_word_graph import draw_graph \ndraw_graph(\"injury\")\n\nimport nltk.classify.util\nfrom nltk.classify import NaiveBayesClassifier\nfrom pattern.search import taxonomy, search\n\ntaxonomy.append('dislocated', type='injury')\ntaxonomy.append('sustained', type='injury')\ntaxonomy.append('burn', type='injury')\ntaxonomy.append('injury', type='hurt')\n\n\ndef check_sustained(text):\n if len(search('HURT', text)) > 0:\n return True\n return False\n\n\ndef feats(text):\n words = text.replace(\".\", \"\").split()\n out = dict([(word, True) for word in words])\n if 'SUSTAINED' in out:\n del out['SUSTAINED']\n out['rule(SUSTAINED)'] = check_sustained(text)\n return out\n \n# label the records, then hold out the last quarter of each class for testing\nnegfeats = [(feats(f[2]), 'not') for f in xml_not_injured_data]\nposfeats = [(feats(f[2]), 'injure') for f in xml_injured_data]\nnegcutoff = len(negfeats)*3/4\nposcutoff = len(posfeats)*3/4\n \ntrainfeats = negfeats[:negcutoff] + posfeats[:poscutoff]\ntestfeats = negfeats[negcutoff:] + posfeats[poscutoff:]\nprint 'train on %d instances, test on %d instances' % (len(trainfeats), len(testfeats))\n \nclassifier = NaiveBayesClassifier.train(trainfeats)\nprint 'accuracy:', nltk.classify.util.accuracy(classifier, testfeats)\nclassifier.show_most_informative_features(n=100)\n\n\nclassifier.classify(feats(\"HE SUSTAINED INJURY\"))\n",
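Under the hood, NLTK's `NaiveBayesClassifier` scores each label by a prior times smoothed per-word presence probabilities. A from-scratch sketch (Python 3, with invented toy complaints rather than the NHTSA data) shows the mechanics:

```python
import math
from collections import Counter, defaultdict

# Invented toy training data in the spirit of the NHTSA complaints.
train = [("HE SUSTAINED A BURN", 'injure'),
         ("DRIVER SUSTAINED INJURY", 'injure'),
         ("NO ONE WAS HURT", 'not'),
         ("VEHICLE DAMAGE ONLY", 'not')]

word_counts = defaultdict(Counter)      # per-label document frequency
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(set(text.split()))

vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    words = set(text.split())
    best, best_lp = None, float('-inf')
    for label in label_counts:
        lp = math.log(label_counts[label] / sum(label_counts.values()))
        for w in vocab:
            # Laplace-smoothed P(word present | label)
            p = (word_counts[label][w] + 1) / (label_counts[label] + 2)
            lp += math.log(p if w in words else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(classify("HE SUSTAINED INJURY"))
```

The `rule(SUSTAINED)` trick above simply replaces one raw word feature with the output of a hand-written rule before the same kind of counting happens.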
"POSH Syntax Overview\nconverts:\nreturn bool(not (run_a_rule(tgt, 'noise', 'VB*') and not run_a_rule(tgt, 'sound', 'JJ'))\n and (run_a_rule(tgt, 'noise', 'NN') or run_a_rule(tgt, 'acoustics', 'NNS') or\n (run_a_rule(tgt, 'acoustics', 'NNS') and run_a_rule(tgt, 'not', 'RB'))))\n\nTo:\nSENT: !VB*(noise+3) and !JJ(sound+3) ) and (NN(noise+2) | NNS(acoustics) | (NNS(acoustics) & RB(not)))\n\nPOSH Library\nComing soon to: https://github.com/brianray/posh\n<img src=\"files/me.png\">\nAbout Me\n\nDeloitte Enterprise Science brray (at) deloitte dot com\nChiPy (Chicago Python User Group) brianhray@gmail.com\nLinkedIn: https://www.linkedin.com/in/brianray\nTwitter: <a href=\"https://twitter.com/brianray\" class=\"twitter-follow-button\" data-show-count=\"false\" data-size=\"large\">Follow @brianray</a>\n\n<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>\n\nCopy of this presentation found here: https://github.com/brianray/puppy_dec_2015"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
planetlabs/notebooks
|
dev/repository_aois.ipynb
|
apache-2.0
|
[
"Repository AOIs\nThis notebook gathers all the aois used in this repo and saves them as a geojson file.\nTo grab a single aoi geojson string to add to your notebook, just uncomment the line that prints the aoi. Be sure to add your notebook to the list of notebooks used by that aoi!\nTo specify a new aoi or grab an aoi and indicate that the aoi is used by an additional notebook, jump to Specify AOIs.\nNOTE: Be sure to run the entire notebook so the aois get printed to geojson at the end\nSetup",
"import json\n\nimport geojson\nfrom geojson import Polygon, Feature, FeatureCollection\n\n# Local code for compact printing of geojson\nfrom utils import CompactFeature, CompactFeatureCollection",
"AOI management\nSome classes that make it a little easier to manage aois and create compact geojson string representations of the aois.",
"class Aoi(CompactFeature):\n def __init__(self, name, used_by, coordinates, id=None):\n pg = Polygon(coordinates)\n prop = {'name': name, 'used by': used_by}\n if id:\n prop['id'] = id\n super(CompactFeature, self).__init__(geometry=pg, properties=prop)\n self[\"type\"] = 'Feature'\n\nclass Aois(CompactFeatureCollection):\n def __init__(self, features=None, **extras):\n if features is not None:\n for f in features:\n self._check_aoi(f)\n else:\n features = []\n super(CompactFeatureCollection, self).__init__(features, **extras)\n self.type = 'FeatureCollection'\n\n @staticmethod\n def _check_aoi(aoi):\n if not isinstance(aoi, Aoi):\n raise(Exception('expected instance of Aoi, got {}'.format(type(aoi).__name__)))\n\n def append(self, aoi):\n self._check_aoi(aoi)\n self.features.append(aoi)\n \n def write(self, filename):\n with open(filename, 'w') as dest:\n dest.write(self.__str__())",
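Since `Aoi`/`Aois` ultimately serialize to GeoJSON, the structure they produce can be sketched with plain dicts and the standard library alone — the names `make_aoi` and `toy_square` here are invented stand-ins, not part of this repo:

```python
import json

def make_aoi(name, used_by, coordinates):
    # Plain-dict equivalent of the Aoi Feature built by the class above.
    return {"type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": coordinates},
            "properties": {"name": name, "used by": used_by}}

aois = {"type": "FeatureCollection", "features": []}
aois["features"].append(make_aoi(
    "toy_square", ["some/notebook"],
    [[[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]]]))

print(json.dumps(aois, indent=2)[:80])
```

The class-based version adds only validation (`_check_aoi`) and compact printing on top of this structure.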
"AOIs\nSetup",
"aoi_filename = 'aois.geojson'\n\naois = Aois()",
"Specify AOIs\nEach AOI is specified by instantiating the Aoi class with the aoi name, notebooks that use the aoi, and aoi geometry coordinates, in that order. Once an AOI is created, append it to the Aois object. Be sure to run this through once to generate the geojson file at the end of the notebook.\nNOTE: Be careful about appending an Aoi to Aois multiple times. This will result in repeat Features in the resulting geojson.",
"iowa_crops = Aoi(\n 'iowa_crops',\n ['crop_classification/classify_cart_l8_ps',\n 'coverage/calculate_coverage_wgs84'],\n [[\n [-93.2991294841129, 42.6995987669915],\n [-93.2996742314127, 42.8127566482941],\n [-93.2884356831875, 42.8619208871588],\n [-93.2653319466575, 42.9248165306276],\n [-92.9938730936993, 42.9251238519476],\n [-92.9938880477425, 42.7736373428868],\n [-92.9983961055212, 42.7545290276869],\n [-93.0191535706845, 42.6999877495273],\n [-93.2991294841129, 42.6995987669915]\n ]])\naois.append(iowa_crops)\n# iowa_crops\n\nsacramento_crops = Aoi('sacramento_crops',\n ['crop_classification/datasets_identify-1'],\n [[\n [-121.58460974693298, 38.29170496647727],\n [-121.58460974693298, 38.32726528409606],\n [-121.5248715877533, 38.32726528409606],\n [-121.5248715877533, 38.29170496647727],\n [-121.58460974693298, 38.29170496647727]\n ]])\naois.append(sacramento_crops)\n# sacramento_crops\n\nsacramento_crops_2 = Aoi(\n 'sacramento_crops_2',\n ['coverage/calculage_coverage_wgs84',\n 'crossovers/ps_l8_crossovers',\n 'landsat-ps-comparison/landsat-ps-comparison',\n 'crop_classification/datasets_identify-2'],\n [[\n [-121.3113248348236, 38.28911976564886],\n [-121.3113248348236, 38.34622533958],\n [-121.2344205379486, 38.34622533958],\n [-121.2344205379486, 38.28911976564886],\n [-121.3113248348236, 38.28911976564886]\n ]])\naois.append(sacramento_crops_2)\n# sacramento_crops_2\n\ngolden_gate_park = Aoi('golden_gate_park',\n ['data-api-tutorials/clip_and_ship_introduction'],\n [[\n [-122.51103401184083, 37.771596736802074],\n [-122.51060485839844, 37.763997637045456],\n [-122.45902061462401, 37.76603318676243],\n [-122.45773315429689, 37.7654903789825],\n [-122.45275497436523, 37.76637243960179],\n [-122.45455741882324, 37.775124624817906],\n [-122.46597290039062, 37.7738356083287],\n [-122.51103401184083, 37.771596736802074]\n ]])\naois.append(golden_gate_park)\n# golden_gate_park\n\nsan_francisco_city = Aoi('san_francisco_city',\n 
['data-api-tutorials/planet_cli_introduction'],\n [[\n [-122.47455596923828, 37.810326435534755],\n [-122.49172210693358, 37.795406713958236],\n [-122.52056121826172, 37.784282779035216],\n [-122.51953124999999, 37.6971326434885],\n [-122.38941192626953, 37.69441603823106],\n [-122.38872528076173, 37.705010235842614],\n [-122.36228942871092, 37.70935613533687],\n [-122.34992980957031, 37.727280276860036],\n [-122.37773895263672, 37.76230130281876],\n [-122.38494873046875, 37.794592824285104],\n [-122.40554809570311, 37.813310018173155],\n [-122.46150970458983, 37.805715207044685],\n [-122.47455596923828, 37.810326435534755]\n ]])\naois.append(san_francisco_city)\n# san_francisco_city\n\nvancouver_island_s = Aoi(\n 'Vancouver Island South',\n ['data-api-tutorials/planet_data_api_introduction'],\n [[\n [-125.29632568359376, 48.37084770238366],\n [-125.29632568359376, 49.335861591104106],\n [-123.2391357421875, 49.335861591104106],\n [-123.2391357421875, 48.37084770238366],\n [-125.29632568359376, 48.37084770238366]\n ]])\naois.append(vancouver_island_s)\n# vancouver_island_s\n\n# also ndvi-from-sr/ndvi_planetscope_sr and ndvi/ndvi_planetscope\nwest_stockton = Aoi('West of Stockton',\n ['data-api-tutorials/search_and_download_quickstart',\n 'ndvi-from-sr/ndvi_planetscope_sr',\n 'ndvi/ndvi_planetscope'\n ],\n [[\n [-121.59290313720705, 37.93444993515032],\n [-121.27017974853516, 37.93444993515032],\n [-121.27017974853516, 38.065932950547484],\n [-121.59290313720705, 38.065932950547484],\n [-121.59290313720705, 37.93444993515032]\n ]])\naois.append(west_stockton)\n# west_stockton\n\ncongo_forest = Aoi('congo_forest',\n ['forest-monitoring/drc_roads_download'],\n [[\n [25.42429478260258,1.0255377823058893],\n [25.592960813580472,1.0255377823058893],\n [25.592960813580472,1.1196578801254304],\n [25.42429478260258,1.1196578801254304],\n [25.42429478260258,1.0255377823058893]\n ]])\naois.append(congo_forest)\n# congo_forest\n\n# also used in 
mosaicing/basic_compositing_demo\nmt_dana = Aoi('Mt Dana',\n ['in-class-exercises/mosaicing-and-masking/mosaicing-and-masking-key',\n 'mosaicing/basic_compositing_demo'\n ],\n [[\n [-119.16183471679688,37.82903964181452],\n [-119.14947509765626,37.83663205340172],\n [-119.13745880126953,37.846392577323286],\n [-119.13574218750001,37.856422880849514],\n [-119.13883209228514,37.86645181975611],\n [-119.12406921386719,37.86916210952103],\n [-119.12200927734376,37.875937397778614],\n [-119.1212688230194,37.90572368618133],\n [-119.13740499245301,37.930641295117404],\n [-119.16595458984376,37.92659678938742],\n [-119.18243408203126,37.9447389942697],\n [-119.2088161252655,37.95257263611974],\n [-119.25516469704283,37.92522514171301],\n [-119.2630611203827,37.88215253011582],\n [-119.25104482399598,37.84474832157969],\n [-119.18203695046083,37.82576791597315],\n [-119.16183471679688,37.82903964181452]\n ]])\naois.append(mt_dana)\n# mt_dana\n\nhanoi_s = Aoi('S Hanoi',\n ['label-data/label_maker_pl_geotiff'],\n [[\n [105.81775409169494, 20.84015810005586],\n [105.9111433289945, 20.84015810005586],\n [105.9111433289945, 20.925748489914824],\n [105.81775409169494, 20.925748489914824],\n [105.81775409169494, 20.84015810005586]\n ]])\naois.append(hanoi_s)\n# hanoi_s\n\nmyanmar_s = Aoi('S Myanmar',\n ['orders/ordering_and_delivery',\n 'orders/tools_and_toolchains'\n ],\n [[\n [94.25142652167966,16.942922591218252],\n [95.95431374929511,16.587048751480086],\n [95.55802198999191,14.851751617790999],\n [93.87002080638986,15.209870864141054],\n [94.25142652167966,16.942922591218252]\n ]])\naois.append(myanmar_s)\n# myanmar_s\n\nmerced_n = Aoi('North of Merced',\n ['toar/toar_planetscope'],\n [[\n [-120.53282682046516,37.387200839539496],\n [-120.52354973008043,37.420706184624756],\n [-120.23858050023456,37.37089230084231],\n [-120.24140251133315,37.36077146074112],\n [-120.240470649891,37.36060856263429],\n [-120.253098881104,37.31418359933723],\n 
[-120.25781268370172,37.29734056539194],\n [-120.54356183694901,37.347297317827675],\n [-120.53282682046516,37.387200839539496]\n ]])\naois.append(merced_n)\n# merced_n\n\niowa_crops_2 = Aoi('iowa_crops_2',\n ['udm/udm', 'udm/udm2'],\n [[\n [-93.29905768168668,42.69958733505418],\n [-93.29965849650593,42.81289914666694],\n [-93.28841467631707,42.862022561801815],\n [-93.2653691364643,42.924746756580326],\n [-92.99388666885065,42.92512385194759],\n [-92.99388666885065,42.77359750030287],\n [-92.99839277999504,42.75450452618375],\n [-93.01916380660347,42.699965805770056],\n [-93.29905768168668,42.69958733505418]\n ]])\naois.append(iowa_crops_2)\n# iowa_crops_2",
"Write to File",
"aois.write(aoi_filename)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
biosustain/cameo-notebooks
|
Advanced-SynBio-for-Cell-Factories-Course/Flux Variability Analysis.ipynb
|
apache-2.0
|
[
"Flux Variability Analysis (FVA)\nLoad a few packages and functions.",
"import pandas\npandas.options.display.max_rows = 12\nimport escher\nfrom cameo import models, flux_variability_analysis, fba",
"First we load a model from the BiGG database (and make a copy of it).",
"model = models.bigg.e_coli_core.copy()",
"Run flux variablity analysis\nCalculate all flux ranges of all reactions in the model.",
"result = flux_variability_analysis(model)",
"Inspect the result.",
"result.data_frame",
"Get an overview of a few key statistics of the resulting flux ranges.",
"result.data_frame.describe()",
"Visualize the flux ranges.",
"result.plot(index=result.data_frame.index, height=1200)",
"Visualize the flux ranges on a pathway map of E. coli's central carbon metabolism.",
"abs_flux_ranges = abs(result.data_frame.lower_bound - result.data_frame.upper_bound).to_dict()\nescher.Builder('e_coli_core.Core metabolism', reaction_data=abs_flux_ranges).display_in_notebook()",
"Those reactions showing up in red are futile cycles.",
"result.data_frame[result.data_frame.upper_bound > 500]\n\nresult_no_cyles = flux_variability_analysis(model, remove_cycles=True)\n\nabs_flux_ranges = abs(result_no_cyles.data_frame.lower_bound - result_no_cyles.data_frame.upper_bound).to_dict()\nescher.Builder('e_coli_core.Core metabolism', reaction_data=abs_flux_ranges).display_in_notebook()",
"Run flux variability analysis for optimally growing E. coli\n(Optimal) Flux Balance Analysis solutions are not necessarily unique. Flux Variability Analysis is a good tool for estimating the space of alternative optimal solutions.",
"fba(model)\n\nmodel_optimal = model.copy()\n\nmodel_optimal.reactions.BIOMASS_Ecoli_core_w_GAM.lower_bound = 0.8739215069684299\n\nresult_max_obj = flux_variability_analysis(model_optimal, remove_cycles=True)\n\nresult_max_obj.plot(index=result_max_obj.data_frame.index, height=1200)",
"This is actually such a common task that flux_variability_analysis provides an option for fixing the objective's flux at a certain percentage.",
"result_max_obj = flux_variability_analysis(model, fraction_of_optimum=1., remove_cycles=True)\n\nresult_max_obj.plot(index=result_max_obj.data_frame.index, height=1200)",
"Turns out that in this small core metabolic model, the optimal solution is actually unique!",
"sum(abs(result_max_obj.data_frame.lower_bound - result_max_obj.data_frame.upper_bound))",
"Exercises\nExercise 1\nExplore how relaxing the constraint on the growth rate affects the solution space:\n1. Modify the code to explore flux ranges for $\\mu \\gt 0.7 \\ h^{-1}$ \n1. Plot the sum of flux ranges over a range of percentages.\nExercise 2\nUsing FVA, determine all blocked reactions ($v = 0$) in the model.\nSolutions\nSolution 1",
"percentage = (0.7 / model.solve().f) * 100\npercentage\n\nresult_80perc_max_obj = flux_variability_analysis(model, fraction_of_optimum=percentage/100, remove_cycles=True)\n\nresult_80perc_max_obj.plot(index=result_80perc_max_obj.data_frame.index, height=1200)",
"Solution 2",
"flux_sums = []\noptimum_percentages = range(50, 105, 5)\nfor i in optimum_percentages:\n df = flux_variability_analysis(model, fraction_of_optimum=i/100, remove_cycles=True).data_frame\n flux_sum = sum(abs(df.lower_bound - df.upper_bound))\n print(\"{}%: \".format(i), flux_sum)\n flux_sums.append(flux_sum)\n\nimport matplotlib.pyplot as plt\n\nplt.plot(optimum_percentages, flux_sums)\nplt.xlabel('Optimum (%)')\nplt.ylabel('Flux sum [mmol gDW^-1 h^-1]')\nplt.show()",
"Solution 3",
"result = flux_variability_analysis(model, remove_cycles=True)\n\nresult.data_frame[(result.data_frame.lower_bound == 0) & (result.data_frame.upper_bound == 0)]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PublicHealthEngland/pygom
|
notebooks/PyGOM_SEIRsetup.ipynb
|
gpl-2.0
|
[
"Demonstration for setting up an ODE system\nPyGOM — A Python Package for Simplifying Modelling with Systems of Ordinary Differential Equations https://arxiv.org/pdf/1803.06934.pdf\nUsing PyGOM, we will set up a simple SEIR model. This model has many simplifying assumptions, including:\n- no births or deaths\n- homogeneous mixing\n- no interventions\nSusceptible population (S) are those that can catch the disease. A susceptible person becomes infected when they interact with an infected person. The chance of this interaction resulting in infection is described with parameter $\\beta$.\n$ \\frac{dS}{dt} = -\\beta S \\frac{I}{N}$ \nExposed population (E) are those that have contracted the disease but are not yet infectious. They become infectious with rate $\\alpha$. \n$ \\frac{dE}{dt} = \\beta S \\frac{I}{N} - \\alpha E$\nInfected population (I) recover at rate $\\gamma$.\n$ \\frac{dI}{dt} = \\alpha E - \\gamma I$\nRemoved population (R) are those who have immunity (described with initial conditions) or have recovered/died from the disease.\n$ \\frac{dR}{dt} = \\gamma I$\nTotal population (N) is given by $N = S + E + I + R$.",
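As a sanity check on the four equations, they can be integrated with a plain forward-Euler loop before handing them to PyGOM (this uses the parameter values introduced later in the notebook; the step size is an arbitrary choice):

```python
# Forward-Euler integration of the SEIR equations above (illustrative
# only; PyGOM uses a proper ODE solver underneath).
beta, alpha, gamma = 1.6, 0.5, 1.0
S, E, I, R = 9999.0, 0.0, 1.0, 0.0
N = S + E + I + R
dt = 0.1
for _ in range(int(100 / dt)):               # 100 days
    dS = -beta * S * I / N
    dE = beta * S * I / N - alpha * E
    dI = alpha * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
print(round(R / N, 2))                       # fraction ever infected
```

Because the four derivatives sum to zero, S + E + I + R should stay (numerically) equal to N throughout — a quick way to catch a sign error in the equations.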
"# import required packages\nfrom pygom import DeterministicOde, Transition, SimulateOde, TransitionType\n\nimport os \nfrom sympy import symbols, init_printing\nimport numpy as np\nimport matplotlib.pyplot as mpl\nimport sympy\nimport itertools\n\n# Add graphvis path (N.B. set to your local circumstances)\ngraphvis_path = 'h:\\\\Programs\\\\Graphvis-2.38\\\\bin\\\\'\nif not graphvis_path in os.environ['PATH']:\n os.environ['PATH'] = os.environ['PATH'] + ';' + graphvis_path\n\ndef print_ode2(self):\n '''\n Prints the ode in symbolic form onto the screen/console in actual\n symbols rather than the word of the symbol.\n \n Based on the PyGOM built-in but adapted for Jupyter\n '''\n A = self.get_ode_eqn()\n B = sympy.zeros(A.rows,2)\n for i in range(A.shape[0]):\n B[i,0] = sympy.symbols('d' + str(self._stateList[i]) + '/dt=')\n B[i,1] = A[i]\n\n return B\n\n# set up the symbolic SEIR model\n\nstate = ['S', 'E', 'I', 'R']\nparam_list = ['beta', 'alpha', 'gamma', 'N']\n\n# Equations can be set up in a variety of ways; either by providing the equations for each state individually,\n# or listing the transitions (shown here).\n\ntransition = [\n Transition(origin='S', destination='E', equation='beta*S*I/N',\n transition_type=TransitionType.T),\n Transition(origin='E', destination='I', equation='alpha*E',\n transition_type=TransitionType.T),\n Transition(origin='I', destination='R', equation='gamma*I',\n transition_type=TransitionType.T)\n ]\n\nSEIR_model = DeterministicOde(state, param_list, transition=transition)\n\n# display equations\nprint_ode2(SEIR_model)\n\n# display graphical representation of the model\n#SEIR_model.get_transition_graph()",
"The following parameterization, run over 100 days, results in an infection rate of approximately 60%. \nThe disease has a latent period of 2 days ($1/\\alpha$), and individuals are infectious for 1 day ($1/\\gamma$).",
"# provide parameters\n\nt = np.linspace(0, 100, 1001)\n\n# initial conditions\n# for a population of 10000, one case has presented, and we assume there is no natural immunity\nx0 = [9999.0, 0.0, 1, 0.0]\n\n# latent for 2 days\n# ill for 1 day\nparams = {'beta': 1.6,\n 'alpha': 0.5, \n 'gamma': 1,\n 'N': sum(x0)}\n\nSEIR_model.initial_values = (x0, t[0])\nSEIR_model.parameters = params\nsolution = SEIR_model.integrate(t[1::])\nSEIR_model.plot()\n\n# calculate time point when maximum number of people are infectious\npeak_i = np.argmax(solution[:,2])\nprint('Peak infection (days)', t[peak_i])\n\n# calculate reproductive number R0\nprint('R0 (beta/gamma) = ', params['beta']/params['gamma'])\n\nsolution[:,0]\n\n\n# function for altering parameters\nmodel = DeterministicOde(state, param_list, transition=transition)\ndef parameterize_model(t=np.linspace(0,100,1001), beta=1.6, alpha=0.5, gamma=1, ic=[9999, 0, 1, 0], model=model):\n params = {'beta': beta,\n 'alpha': alpha, \n 'gamma': gamma,\n 'N': sum(ic)}\n model.initial_values = (ic, t[0])\n model.parameters = params\n sol = model.integrate(t[1::])\n model.plot()\n peak_i = np.argmax(sol[:,2])\n print('Peak infection (days)', t[peak_i] )\n print('R0 (beta/gamma) = ', params['beta']/params['gamma'])\n ",
"In this simple framework, reducing $\\beta$ results in a smaller epidemic:\n- the peak infection time is delayed\n- the magnitude of peak infection is reduced.\nReducing beta may crudely represent giving out anti-virals, which make a person less infectious.",
"parameterize_model(beta=1.2, t=np.linspace(0,500,5001))",
"Vaccinating 5% of the population (assuming instantaneous rollout) or natural immunity, delays the peak period, and reduces its magnitude.",
"parameterize_model(ic=[9490,5, 5, 500], beta=0.5, gamma=0.3, t=np.linspace(0,150,10))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
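The SEIR equations above can be integrated without PyGOM at all; the following is a minimal sketch using explicit Euler steps, with the same parameter values as the notebook cell (beta=1.6, alpha=0.5, gamma=1, one case in a population of 10000). The step size and loop structure are illustrative choices, not part of the original code.

```python
import numpy as np

# parameters from the notebook: 2-day latency (1/alpha), 1-day infectious period (1/gamma)
beta, alpha, gamma = 1.6, 0.5, 1.0
S, E, I, R = 9999.0, 0.0, 1.0, 0.0
N = S + E + I + R

dt, days = 0.01, 100
history = []
for step in range(int(days / dt)):
    # the four SEIR derivatives; note they sum to zero, so N is conserved
    dS = -beta * S * I / N
    dE = beta * S * I / N - alpha * E
    dI = alpha * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
    history.append((S, E, I, R))

history = np.array(history)
print('final attack rate: %.1f%%' % (100 * history[-1, 3] / N))
```

With R0 = beta/gamma = 1.6, the final attack rate lands near the ~60% figure quoted in the notebook.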
amitkaps/machine-learning
|
time_series/6-Insight.ipynb
|
mit
|
[
"Share the Insight\nThere are two main insights we want to communicate. \n- Bangalore is the largest market for Onion Arrivals. \n- Onion Price variation has increased in recent years.\nLet us explore how we can communicate these insights visually.\nPreprocessing to get the data",
"# Import the library we need, which is Pandas and Matplotlib\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# import seaborn as sns\n\n# Set some parameters to get good visuals - style to ggplot and size to 15,10\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (15, 10)\n\n# Read the csv file of Monthwise Quantity and Price csv file we have.\ndf = pd.read_csv('MonthWiseMarketArrivals_clean.csv')\n\n# Change the index to the date column\ndf.index = pd.PeriodIndex(df.date, freq='M')\n\n# Sort the data frame by date\ndf = df.sort_values(by = \"date\")\n\n# Get the data for year 2015\ndf2015 = df[df.year == 2015]\n\n# Groupby on City to get the sum of quantity\ndf2015City = df2015.groupby(['city'], as_index=False)['quantity'].sum()\n\ndf2015City = df2015City.sort_values(by = \"quantity\", ascending = False)\n\ndf2015City.head()",
"Let us plot the Cities on a Geographic Map",
"# Load the geocode file\ndfGeo = pd.read_csv('city_geocode.csv')\n\ndfGeo.head()",
"PRINCIPLE: Joining two data frames\nThere will be many cases in which your data is in two different dataframes and you would like to merge them into one dataframe. Let us look at one example of this - the left join",
"dfCityGeo = pd.merge(df2015City, dfGeo, how='left', on='city')\n\ndfCityGeo.head()\n\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100)",
"We can do a crude aspect-ratio adjustment to make the Cartesian coordinate system appear like a Mercator map",
"dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100, figsize = [10,11])\n\n# Let us use quantity as the size of the bubble\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity, figsize = [10,11])\n\n# Let us scale down the quantity variable\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000, figsize = [10,11])\n\n# Reduce the opacity of the color, so that we can see overlapping values\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000, alpha = 0.5, figsize = [10,11])",
"Exercise - Can you plot all the States by quantity on a (pseudo) geographic map?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
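The left-join step above can be sketched on toy data; the city names, quantities, and coordinates below are hypothetical stand-ins for df2015City and dfGeo. Rows of the left frame with no match in the right frame are kept, with NaN coordinates.

```python
import pandas as pd

# hypothetical city quantities (left frame) and geocodes (right frame)
qty = pd.DataFrame({'city': ['BANGALORE', 'DELHI', 'UNKNOWN'],
                    'quantity': [300, 200, 50]})
geo = pd.DataFrame({'city': ['BANGALORE', 'DELHI'],
                    'lat': [12.97, 28.61],
                    'lon': [77.59, 77.21]})

# left join: every row of qty survives; UNKNOWN gets NaN lat/lon
merged = pd.merge(qty, geo, how='left', on='city')
print(merged)
```

This is why a left join is the right choice here: losing a market city just because its geocode is missing would silently bias the map.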
nproctor/phys202-project
|
project/2 Hidden Layers Part 3.ipynb
|
mit
|
[
"2 Hidden Layers\nProgramming a Neural Network with 2 hidden layers is essentially the same process as with a single hidden layer, plus one extra step. Because there is an extra layer, a third weight array and an extra level of gradient descent are required. Thus there is a bit more math, and a few extra lines of code.",
"import NeuralNetImport as NN\nimport numpy as np\nfrom sklearn.datasets import load_digits \ndigits = load_digits()\nimport NNpix as npx\nfrom IPython.display import HTML\nfrom IPython.display import display",
"Neuron with 2 Hidden Layers",
"npx.cneuron2",
"Gradient Descent with 2 Hidden Layers",
"npx.derivation2\n\nf = open(\"HTML2.html\")\n\ndisplay(HTML(f.read()))\n\nf.close()",
"Create Training Inputs and Solutions\nUse 1000 random samples to generate the training inputs and solutions. The other 792 will be used to test.",
"perm = np.random.permutation(1792)\ntraining_input = np.array([digits.images[perm[i]].flatten() for i in range(1000)])/100\n\ntraining_solution = NN.create_training_soln(digits.target[perm], 10)\n\ntrain = NN.NN_training_2(training_input, training_solution, 64, 10, 60, 50, 80, 0.7)",
"Getting Weights\nTo find the weights, use the commented-out line below.",
"# x,y,z = train.train()\n\nf = np.load(\"2HiddenWeights.npz\")\n\nx = f['arr_0']\ny = f['arr_1']\nz = f['arr_2']\n\nassert len(x) == 60\nassert len(y) == 50\nassert len(z) == 10",
"Find Solutions",
"ask = [NN.NN_ask_2(np.array([digits.images[perm[i]].flatten()])/100,x,y,z) for i in range(1000,1792)]\n\ncomp_vals = [ask[i].get_ans() for i in range(len(ask))]",
"Calculate Accuracy",
"print((sum(((comp_vals - np.array([digits.target[perm[i]] for i in range(1000,1792)]) == 0).astype(int)) / 792 * 100)), \"%\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
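A minimal forward-pass sketch for the 64 → 60 → 50 → 10 architecture configured above. The random weights and sigmoid activations here are assumptions for illustration; the internals of the notebook's NN_training_2 class are not shown in the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical weight matrices matching the layer sizes: 64 -> 60 -> 50 -> 10
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(60, 64))
W2 = rng.normal(scale=0.1, size=(50, 60))
W3 = rng.normal(scale=0.1, size=(10, 50))

def forward(x):
    """Propagate one flattened 8x8 digit (shape (64,)) through both hidden layers."""
    h1 = sigmoid(W1 @ x)   # first hidden layer, 60 units
    h2 = sigmoid(W2 @ h1)  # second hidden layer, 50 units
    return sigmoid(W3 @ h2)  # 10 output units, one per digit class

out = forward(rng.random(64))
print(out.shape, int(np.argmax(out)))
```

The predicted class is simply the argmax over the 10 outputs, which is what the notebook's get_ans() presumably returns.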
jcmgray/quijy
|
docs/tensor-basics.ipynb
|
mit
|
[
"%matplotlib inline\nimport quimb as qu\nimport quimb.tensor as qtn\n\ndata = qu.bell_state('psi-').reshape(2, 2)\ninds = ('k0', 'k1')\ntags = {'KET'}\n\nket = qtn.Tensor(data, inds, tags)\nket\n\nX = qtn.Tensor(qu.pauli('X'), inds=('k0', 'b0'), tags={'PAULI', 'X', '0'})\nY = qtn.Tensor(qu.pauli('Y'), inds=('k1', 'b1'), tags={'PAULI', 'Y', '1'})",
"And finally, a random 'bra' to complete the inner product:",
"bra = qtn.Tensor(qu.rand_ket(4).reshape(2, 2), inds=('b0', 'b1'), tags={'BRA'})\n\nTN = ket.H & X & Y & bra\nprint(TN)\n\nTN.graph(color=['KET', 'PAULI', 'BRA'], figsize=(4, 4))",
"Note the tags can be used to identify both paulis at once. But they could also be uniquely identified using their 'X' and 'Y' tags respectively:",
"TN.graph(color=['KET', 'X', 'BRA', 'Y'], figsize=(4, 4))\n\nTN ^ all",
"Or if we just want to contract the paulis:",
"print(TN ^ 'PAULI')",
"Notice how the tags of the Paulis have been combined on the new tensor.\nThe contraction order is optimized automatically using opt_einsum, is cached,\nand can easily handle hundreds of tensors (though it uses a greedy algorithm and \nis not guaranteed to find the optimal path).\nA cumulative contract allows a custom 'bubbling' order:",
"# \"take KET, then contract X in, then contract BRA *and* Y in, etc...\"\nprint(TN >> ['KET', 'X', ('BRA', 'Y')])",
"And a structured contract uses the tensor networks tagging structure (a string\nformat specifier like \"I{}\") to perform a cumulative contract automatically, \ne.g. grouping the tensors of a MPS/MPO into segments of 10 sites.\nThis can be slightly quicker than finding the full contraction path.\nWhen a TN has a structure, structured contractions can be used by specifying either an Ellipsis:\n``TN ^ ...`` # which means full, structured contract\n\nor a slice:\n``TN ^ slice(100, 200)`` # which means a structured contract of those sites only",
"print((TN ^ 'PAULI'))\n\n# select any tensors matching the 'KET' tag - here only 1\nTk = TN['KET']\n\n# now split it, creating a new tensor network of 2 tensors\nTk_s = Tk.split(left_inds=['k0'])\n\n# note new index created \nprint(Tk_s)\n\n# remove the original KET tensor\ndel TN['KET']\n\n# inplace add the split tensor network\nTN &= Tk_s\n\n# plot - should now have 5 tensors\nTN.graph(color=['KET', 'PAULI', 'BRA'], figsize=(4, 4))\n\nL = 10\n\n# create the nodes, by default just the scalar 1.0\ntensors = [qtn.Tensor() for _ in range(L)]\n\nfor i in range(L):\n # add the physical indices, each of size 2\n tensors[i].new_ind(f'k{i}', size=2)\n \n # add bonds between neighbouring tensors, of size 7\n tensors[i].new_bond(tensors[(i + 1) % L], size=7)\n \nmps = qtn.TensorNetwork(tensors)\nmps.graph()\n\n# make a 5 qubit tensor state\ndims = [2] * 5\ndata = qu.rand_ket(32).A.reshape(*dims)\ninds=['k0', 'k1', 'k2', 'k3', 'k4']\npsi = qtn.Tensor(data, inds=inds)\n\n# find the inner product with itself\npsi.H @ psi",
"In this case, the conjugated copy psi.H has the same outer indices as psi and so the inner product is naturally formed."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
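The closing inner product psi.H @ psi can be reproduced in plain NumPy to see exactly what the contraction does: conjugate the tensor, then contract every index against the same index of the original. The toy random state below is an illustration; quimb is not required.

```python
import numpy as np

# hypothetical 5-qubit state as a rank-5 tensor, mirroring the quimb cell above
rng = np.random.default_rng(42)
data = rng.normal(size=(2,) * 5) + 1j * rng.normal(size=(2,) * 5)
data /= np.linalg.norm(data)  # normalise so <psi|psi> = 1

# contract all five indices of conj(psi) with the matching indices of psi
inner = np.einsum('abcde,abcde->', np.conj(data), data)
print(inner.real)
```

Because both tensors carry identical index labels, einsum sums over every axis and returns a scalar, just as the matching outer indices did for psi.H @ psi.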
cdawei/digbeta
|
dchen/music/aotm2011_split.ipynb
|
gpl-3.0
|
[
"Train/Dev/Test split of AotM-2011 songs in setting I & II",
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport os, sys\nimport gzip\nimport pickle as pkl\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import precision_recall_fscore_support, roc_auc_score, average_precision_score\nfrom scipy.optimize import check_grad\nfrom scipy.sparse import lil_matrix, issparse, hstack, vstack\nfrom collections import Counter\nimport itertools as itt\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndata_dir = 'data/aotm-2011'\n#faotm = os.path.join(data_dir, 'aotm2011-subset.pkl')\nfaotm = os.path.join(data_dir, 'aotm2011-user-playlist.pkl')\n#ffeature = 'data/msd/songID2Features.pkl.gz'\nffeature = 'data/msd/song2feature.pkl.gz'\nfgenre = 'data/msd/song2genre.pkl'",
"Load playlists\nLoad the user playlists from the pickle file.",
"user_playlists = pkl.load(open(faotm, 'rb'))\n\nall_users = sorted(user_playlists.keys())\n\nall_playlists = [(pl, u) for u in all_users for pl in user_playlists[u]]\n\nprint('#user : {:,}'.format(len(all_users)))\nprint('#playlist: {:,}'.format(len(all_playlists)))\n\npl_lengths = [len(pl) for u in all_users for pl in user_playlists[u]]\n#plt.hist(pl_lengths, bins=100)\nprint('Average playlist length: %.1f' % np.mean(pl_lengths))",
"Load song features\nLoad song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.",
"song2feature = pkl.load(gzip.open(ffeature, 'rb'))",
"Split songs for setting I\nSplit songs (60/20/20 train/dev/test split) the latest released (year) songs are in dev and test set.",
"all_songs = sorted([(sid, song2feature[sid][-1]) for sid in \\\n {s for u in all_users for pl in user_playlists[u] for s in pl}], key=lambda x: (x[1], x[0]))\nprint('{:,}'.format(len(all_songs)))\n\n#all_songs[0:10]\n\npkl.dump(all_songs, gzip.open(os.path.join(data_dir, 'setting2/all_songs.pkl.gz'), 'wb'))\n\ndev_ratio = 0.2\ntest_ratio = 0.2\nnsong_dev_test = int(len(all_songs) * (dev_ratio + test_ratio))\ntrain_song_set = all_songs[nsong_dev_test:]\n\n# shuffle songs in dev and test set\nnp.random.seed(123456789)\ndev_test_ix = np.random.permutation(np.arange(nsong_dev_test))\nnsong_dev = int(len(all_songs) * dev_ratio)\ndev_song_set = [all_songs[ix] for ix in dev_test_ix[:nsong_dev]]\ntest_song_set = [all_songs[ix] for ix in dev_test_ix[nsong_dev:]]\n\nprint('#songs in training set: {:,}, average song age: {:.2f} yrs'\n .format(len(train_song_set), np.mean([t[1] for t in train_song_set])))\nprint('#songs in dev set : {:,}, average song age: {:.2f} yrs'\n .format(len(dev_song_set), np.mean([t[1] for t in dev_song_set])))\nprint('#songs in test set : {:,}, average song age: {:.2f} yrs'\n .format(len(test_song_set), np.mean([t[1] for t in test_song_set])))\n\nprint('#songs: {:,} | {:,}'.format(len(all_songs), len({s for s in train_song_set + dev_song_set+test_song_set})))",
"Song popularity.",
"song2index = {sid: ix for ix, (sid, _) in enumerate(all_songs)}\nsong_pl_mat = lil_matrix((len(all_songs), len(all_playlists)), dtype=np.int8)\nfor j in range(len(all_playlists)):\n pl = all_playlists[j][0]\n ind = [song2index[sid] for sid in pl]\n song_pl_mat[ind, j] = 1\n\nsong_pop = song_pl_mat.tocsc().sum(axis=1)\n\nsong2pop = {sid: song_pop[song2index[sid], 0] for (sid, _) in all_songs}\n\npkl.dump(song2pop, gzip.open(os.path.join(data_dir, 'setting2/song2pop.pkl.gz'), 'wb'))\n\nax = plt.subplot(111)\nax.hist(song_pop, bins=100)\nax.set_yscale('log')\nax.set_title('Histogram of song popularity')\npass\n\ntrain_song_pop = [song2pop[sid] for (sid, _) in train_song_set]\nax = plt.subplot(111)\nax.hist(train_song_pop, bins=100)\nax.set_yscale('log')\nax.set_xlim(0, song_pop.max()+1)\nax.set_title('Histogram of song popularity in TRAINING set')\npass\n\ndev_song_pop = [song2pop[sid] for (sid, _) in dev_song_set]\nax = plt.subplot(111)\nax.hist(dev_song_pop, bins=100)\nax.set_yscale('log')\nax.set_xlim(0, song_pop.max()+1)\nax.set_title('Histogram of song popularity in DEV set')\npass\n\ntest_song_pop = [song2pop[sid] for (sid, _) in test_song_set]\nax = plt.subplot(111)\nax.hist(test_song_pop, bins=100)\nax.set_yscale('log')\nax.set_xlim(0, song_pop.max()+1)\nax.set_title('Histogram of song popularity in TEST set')\npass",
"Load genres\nSong genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.",
"song2genre = pkl.load(open(fgenre, 'rb'))",
"Check if all songs have genre info.",
"print('#songs missing genre: {:,}'.format(len(all_songs) - np.sum([sid in song2genre for (sid, _) in all_songs])))",
"Create song-playlist matrix\nSongs as rows, playlists as columns.",
"def gen_dataset(playlists, song2feature, song2genre, train_song_set, dev_song_set=[], test_song_set=[]):\n \"\"\"\n Create labelled dataset: rows are songs, columns are users.\n \n Input:\n - playlists: a set of playlists\n - train_song_set: a list of songIDs in training set\n - dev_song_set: a list of songIDs in dev set\n - test_song_set: a list of songIDs in test set\n - song2feature: dictionary that maps songIDs to features from MSD\n - song2genre: dictionary that maps songIDs to genre\n Output:\n - (Feature, Label) pair (X, Y)\n X: #songs by #features\n Y: #songs by #users\n \"\"\" \n song_set = train_song_set + dev_song_set + test_song_set\n N = len(song_set)\n K = len(playlists)\n \n genre_set = sorted({v for v in song2genre.values()})\n genre2index = {genre: ix for ix, genre in enumerate(genre_set)}\n \n def onehot_genre(songID):\n \"\"\"\n One-hot encoding of genres.\n Data imputation: \n - one extra entry for songs without genre info\n - mean imputation\n - sampling from the distribution of genre popularity\n \"\"\"\n num = len(genre_set) # + 1\n vec = np.zeros(num, dtype=np.float)\n if songID in song2genre:\n genre_ix = genre2index[song2genre[songID]]\n vec[genre_ix] = 1\n else:\n vec[:] = np.nan\n #vec[-1] = 1\n return vec\n \n #X = np.array([features_MSD[sid] for sid in song_set]) # without using genre\n #Y = np.zeros((N, K), dtype=np.bool)\n X = np.array([np.concatenate([song2feature[sid], onehot_genre(sid)], axis=-1) for sid in song_set])\n Y = lil_matrix((N, K), dtype=np.bool)\n \n song2index = {sid: ix for ix, sid in enumerate(song_set)}\n for k in range(K):\n pl = playlists[k]\n indices = [song2index[sid] for sid in pl if sid in song2index]\n Y[indices, k] = True\n \n # genre imputation\n genre_ix_start = -len(genre_set)\n genre_nan = np.isnan(X[:, genre_ix_start:])\n genre_mean = np.nansum(X[:, genre_ix_start:], axis=0) / (X.shape[0] - np.sum(genre_nan, axis=0))\n #print(np.nansum(X[:, genre_ix_start:], axis=0))\n #print(genre_set)\n 
#print(genre_mean)\n for j in range(len(genre_set)):\n X[genre_nan[:,j], j+genre_ix_start] = genre_mean[j]\n\n #return X, Y\n Y = Y.tocsr()\n \n train_ix = [song2index[sid] for sid in train_song_set]\n X_train = X[train_ix, :]\n Y_train = Y[train_ix, :]\n \n dev_ix = [song2index[sid] for sid in dev_song_set]\n X_dev = X[dev_ix, :]\n Y_dev = Y[dev_ix, :]\n \n test_ix = [song2index[sid] for sid in test_song_set]\n X_test = X[test_ix, :]\n Y_test = Y[test_ix, :]\n \n if len(dev_song_set) > 0:\n if len(test_song_set) > 0:\n return X_train, Y_train, X_dev, Y_dev, X_test, Y_test\n else:\n return X_train, Y_train, X_dev, Y_dev\n else:\n if len(test_song_set) > 0:\n return X_train, Y_train, X_test, Y_test\n else:\n return X_train, Y_train",
"Setting I: hold a subset of songs, use all playlists",
"pkl_dir = os.path.join(data_dir, 'setting1')\nfsongs = os.path.join(pkl_dir, 'songs_train_dev_test_s1.pkl.gz')\nfpl = os.path.join(pkl_dir, 'playlists_s1.pkl.gz')\nfxtrain = os.path.join(pkl_dir, 'X_train.pkl.gz')\nfytrain = os.path.join(pkl_dir, 'Y_train.pkl.gz')\nfxdev = os.path.join(pkl_dir, 'X_dev.pkl.gz')\nfydev = os.path.join(pkl_dir, 'Y_dev.pkl.gz')\nfxtest = os.path.join(pkl_dir, 'X_test.pkl.gz')\nfytest = os.path.join(pkl_dir, 'Y_test.pkl.gz')\n#fadjmat = os.path.join(pkl_dir, 'adjacency_mat.pkl.gz')\nfclique = os.path.join(pkl_dir, 'cliques_all.pkl.gz')\n\nX_train, Y_train, X_dev, Y_dev, X_test, Y_test = gen_dataset(playlists = [t[0] for t in all_playlists],\n song2feature = song2feature, song2genre = song2genre,\n train_song_set = [t[0] for t in train_song_set],\n dev_song_set = [t[0] for t in dev_song_set], \n test_song_set = [t[0] for t in test_song_set])",
"Feature normalisation.",
"X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))\nX_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)\nX_train -= X_train_mean\nX_train /= X_train_std\nX_dev -= X_train_mean\nX_dev /= X_train_std\nX_test -= X_train_mean\nX_test /= X_train_std\n\nprint('Train: %15s %15s' % (X_train.shape, Y_train.shape))\nprint('Dev : %15s %15s' % (X_dev.shape, Y_dev.shape))\nprint('Test : %15s %15s' % (X_test.shape, Y_test.shape))\n\nprint(np.mean(np.mean(X_train, axis=0)))\nprint(np.mean( np.std(X_train, axis=0)) - 1)\nprint(np.mean(np.mean(X_dev, axis=0)))\nprint(np.mean( np.std(X_dev, axis=0)) - 1)\nprint(np.mean(np.mean(X_test, axis=0)))\nprint(np.mean( np.std(X_test, axis=0)) - 1)\n\npkl.dump(X_train, gzip.open(fxtrain, 'wb'))\npkl.dump(Y_train, gzip.open(fytrain, 'wb'))\npkl.dump(X_dev, gzip.open(fxdev, 'wb'))\npkl.dump(Y_dev, gzip.open(fydev, 'wb'))\npkl.dump(X_test, gzip.open(fxtest, 'wb'))\npkl.dump(Y_test, gzip.open(fytest, 'wb'))\n\npkl.dump({'train_song_set': train_song_set, 'dev_song_set': dev_song_set, 'test_song_set': test_song_set},\n gzip.open(fsongs, 'wb'))\npkl.dump(all_playlists, gzip.open(fpl, 'wb'))",
"Build the adjacency matrix of playlists (nodes) for setting I; playlists of the same user form a clique.",
"user_of_playlists = [u for (_, u) in all_playlists]\n#npl = len(user_of_playlists)\n#same_user_mat = lil_matrix((npl, npl), dtype=np.bool)\nclique_list = []\nfor u in set(user_of_playlists):\n clique = np.where(u == np.array(user_of_playlists, dtype=np.object))[0]\n if len(clique) > 1:\n clique_list.append(clique)\n# for x, y in itt.combinations(clique, 2):\n# same_user_mat[x, y] = True\n# same_user_mat[y, x] = True\n#same_user_mat = same_user_mat.tocsr()\n\npkl.dump(clique_list, gzip.open(fclique, 'wb'))\n\nclqsize = [len(clique) for clique in clique_list]\nprint(np.min(clqsize), np.max(clqsize), len(clqsize))\n\n#pkl.dump(same_user_mat, gzip.open(fadjmat, 'wb'))\n#same_user_mat\n\n#Y = vstack([Y_train, Y_dev, Y_test])\n#Y.shape\n\n#scoreMat = (Y.T).dot(Y).multiply(same_user_mat)\n#sumVec = 0.5 * scoreMat.sum(axis=0)\n#sumVec.shape\n\n#print(len(all_songs), len(all_playlists))\n#npl_meandist_pairs = []\n#for u in set(user_of_playlists):\n# clique = np.where(u == np.array(user_of_playlists, dtype=np.object))[0]\n# npl = len(clique)\n# meandist = np.mean(sumVec[0, clique])\n# npl_meandist_pairs.append((u, npl, meandist))\n\n##plt.figure(figsize=[20, 15])\n#ax = plt.subplot(111)\n#ax.scatter([t[1] for t in npl_meandist_pairs], [t[2]+1 for t in npl_meandist_pairs], alpha=0.3)\n##ax.set_yscale('log')\n##ax.set_xscale('log')\n#ax.set_xlim([0,200])\n#ax.set_xlabel('#playlists of user')\n#ax.set_ylabel('Mean playlist label distance') # dot product of playlist indicator vector\n#ax.set_title('Scatter plot on AotM-2011 (each point is a user)')\n\n#sumVec = same_user_mat.sum(axis=1)\n#np.nonzero(sumVec)[0]",
"Split playlists for setting II\nSplit playlists into train/dev/test sets (60/20/20) uniformly at random.",
"train_playlists = []\ndev_playlists = []\ntest_playlists = []\n\ndev_ratio = 0.2\ntest_ratio = 0.2\nnpl_dev = int(dev_ratio * len(all_playlists))\nnpl_test = int(test_ratio * len(all_playlists))\nnp.random.seed(987654321)\npl_indices = np.random.permutation(np.arange(len(all_playlists)))\n# index through the permutation so the split really is uniformly at random\ntest_playlists = [all_playlists[ix] for ix in pl_indices[:npl_test]]\ndev_playlists = [all_playlists[ix] for ix in pl_indices[npl_test:npl_test + npl_dev]]\ntrain_playlists = [all_playlists[ix] for ix in pl_indices[npl_test + npl_dev:]]\n\nprint('{:30s} {:,}'.format('#playlist in training set:', len(train_playlists)))\nprint('{:30s} {:,}'.format('#playlist in dev set:', len(dev_playlists)))\nprint('{:30s} {:,}'.format('#playlist in test set:', len(test_playlists)))\n\nxmax = np.max([len(pl) for (pl, _) in all_playlists]) + 1\n\nax = plt.subplot(111)\nax.hist([len(pl) for (pl, _) in train_playlists], bins=100)\nax.set_yscale('log')\nax.set_xlim(0, xmax)\nax.set_title('Histogram of playlist length in TRAINING set')\npass\n\nax = plt.subplot(111)\nax.hist([len(pl) for (pl, _) in dev_playlists], bins=100)\nax.set_yscale('log')\nax.set_xlim(0, xmax)\nax.set_title('Histogram of playlist length in DEV set')\npass\n\nax = plt.subplot(111)\nax.hist([len(pl) for (pl, _) in test_playlists], bins=100)\nax.set_yscale('log')\nax.set_xlim(0, xmax)\nax.set_title('Histogram of playlist length in TEST set')\npass",
"Setting II: hold the last half of songs in a subset of playlists, use all songs\nHold the last half of songs for playlists in dev and test set.",
"dev_playlists_obs = [pl[:-int(len(pl)/2)] for (pl, _) in dev_playlists]\ndev_playlists_held = [pl[-int(len(pl)/2):] for (pl, _) in dev_playlists]\ntest_playlists_obs = [pl[:-int(len(pl)/2)] for (pl, _) in test_playlists]\ntest_playlists_held = [pl[-int(len(pl)/2):] for (pl, _) in test_playlists]\n\nfor ix in range(len(dev_playlists)):\n    assert np.all(dev_playlists[ix][0] == dev_playlists_obs[ix] + dev_playlists_held[ix])\nfor ix in range(len(test_playlists)):\n    assert np.all(test_playlists[ix][0] == test_playlists_obs[ix] + test_playlists_held[ix])\n\nprint('DEV obs: {:,} | DEV held: {:,} \\nTEST obs: {:,} | TEST held: {:,}'.format(\n    np.sum([len(ppl) for ppl in dev_playlists_obs]), np.sum([len(ppl) for ppl in dev_playlists_held]),\n    np.sum([len(ppl) for ppl in test_playlists_obs]), np.sum([len(ppl) for ppl in test_playlists_held])))",
"Setting II: hold a subset of songs in a subset of playlists, use all songs",
"pkl_dir2 = os.path.join(data_dir, 'setting2')\nfpl2 = os.path.join(pkl_dir2, 'playlists_train_dev_test_s2.pkl.gz')\nfy2 = os.path.join(pkl_dir2, 'Y.pkl.gz')\nfxtrain2 = os.path.join(pkl_dir2, 'X_train.pkl.gz')\nfytrain2 = os.path.join(pkl_dir2, 'Y_train.pkl.gz')\nfytrndev = os.path.join(pkl_dir2, 'Y_train_dev.pkl.gz')\nfydev2 = os.path.join(pkl_dir2, 'PU_dev.pkl.gz')\nfytest2 = os.path.join(pkl_dir2, 'PU_test.pkl.gz')\nfclique21 = os.path.join(pkl_dir2, 'cliques_train_dev.pkl.gz')\nfclique22 = os.path.join(pkl_dir2, 'cliques_all2.pkl.gz')\n\nX, Y = gen_dataset(playlists = [t[0] for t in train_playlists + dev_playlists + test_playlists],\n song2feature = song2feature, song2genre = song2genre, \n train_song_set = [t[0] for t in all_songs])\n\nX_train = X\n\ndev_cols = np.arange(len(train_playlists), len(train_playlists) + len(dev_playlists))\ntest_cols = np.arange(len(train_playlists) + len(dev_playlists), Y.shape[1])\nassert len(dev_cols) == len(dev_playlists) == len(dev_playlists_obs)\nassert len(test_cols) == len(test_playlists) == len(test_playlists_obs)\n\npkl.dump({'train_playlists': train_playlists, 'dev_playlists': dev_playlists, 'test_playlists': test_playlists,\n 'dev_playlists_obs': dev_playlists_obs, 'dev_playlists_held': dev_playlists_held,\n 'test_playlists_obs': test_playlists_obs, 'test_playlists_held': test_playlists_held},\n gzip.open(fpl2, 'wb'))\n\nsong2index = {sid: ix for ix, sid in enumerate([t[0] for t in all_songs])}",
"~~Set all entries corresponding to playlists in dev and test set to NaN, except those corresponding to songs that we observed (in dev/test set), this will result in a huge dense matrix.~~",
"#Y_train = Y.tolil(copy=True).astype(np.float)  # note: np.nan is float\n#Y_train[:, np.r_[dev_cols, test_cols]] = np.nan  # note this will result in a huge dense matrix\n\n#num_known_dev = 0\n#for j in range(len(dev_cols)):\n#    rows = [song2index[sid] for sid in dev_playlists_obs[j]]\n#    Y_train[rows, dev_cols[j]] = 1\n#    num_known_dev += len(rows)\n\n#num_known_test = 0\n#for j in range(len(test_cols)):\n#    rows = [song2index[sid] for sid in test_playlists_obs[j]]\n#    Y_train[rows, test_cols[j]] = 1\n#    num_known_test += len(rows)\n#Y_train = Y_train.tocsr()",
"Use dedicated sparse matrices to represent which entries are observed in the dev and test sets.",
"Y_train = Y[:, :len(train_playlists)].tocsr()\nY_train_dev = Y[:, :len(train_playlists) + len(dev_playlists)].tocsr()\n\nPU_dev = lil_matrix((len(all_songs), len(dev_playlists)), dtype=np.bool)\nPU_test = lil_matrix((len(all_songs), len(test_playlists)), dtype=np.bool)\n\nnum_known_dev = 0\nfor j in range(len(dev_playlists)):\n if (j+1) % 1000 == 0:\n sys.stdout.write('\\r%d / %d' % (j+1, len(dev_playlists))); sys.stdout.flush()\n rows = [song2index[sid] for sid in dev_playlists_obs[j]]\n PU_dev[rows, j] = True\n num_known_dev += len(rows)\n\nnum_known_test = 0\nfor j in range(len(test_playlists)):\n if (j+1) % 1000 == 0:\n sys.stdout.write('\\r%d / %d' % (j+1, len(test_playlists))); sys.stdout.flush()\n rows = [song2index[sid] for sid in test_playlists_obs[j]]\n PU_test[rows, j] = True\n num_known_test += len(rows)\n\nPU_dev = PU_dev.tocsr()\nPU_test = PU_test.tocsr()",
"Indices and ground truths for dev and test entries.",
"# NOTE: prediction for dev/test is just the complement of PU_dev/PU_test\n\n#Y_dev_gtlist = [Y[ix] for ix in Y_dev_indices]\n#Y_test_gtlist = [Y[ix] for ix in Y_test_indices]\n\nprint('#unknown entries in DEV set: {:15,d} | {:15,d} \\n#unknown entries in TEST set: {:15,d} | {:15,d}'.format(\n np.prod(PU_dev.shape) - PU_dev.sum(), len(dev_playlists) * len(all_songs) - num_known_dev,\n np.prod(PU_test.shape) - PU_test.sum(), len(test_playlists) * len(all_songs) - num_known_test))\n\nprint('#unknown entries in Setting I: {:,}'.format((len(dev_song_set) + len(test_song_set)) * Y.shape[1]))",
"Feature normalisation.",
"X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))\nX_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)\nX_train -= X_train_mean\nX_train /= X_train_std\n\nprint(np.mean(np.mean(X_train, axis=0)))\nprint(np.mean( np.std(X_train, axis=0)) - 1)\n\nprint('All : %s' % str(Y.shape))\nprint('Train: %s, %s' % (X_train.shape, Y_train.shape))\nprint('Dev : %s' % str(PU_dev.shape))\nprint('Test : %s' % str(PU_test.shape))\n\npkl.dump(X_train, gzip.open(fxtrain2, 'wb'))\npkl.dump(Y_train, gzip.open(fytrain2, 'wb'))\npkl.dump(Y, gzip.open(fy2, 'wb'))\npkl.dump(Y_train_dev, gzip.open(fytrndev, 'wb'))\npkl.dump(PU_dev, gzip.open(fydev2, 'wb'))\npkl.dump(PU_test, gzip.open(fytest2, 'wb'))",
"Build the adjacency matrix of playlists (nodes) for setting II; playlists of the same user form a clique.\nCliques in train + dev set.",
"pl_users = [u for (_, u) in train_playlists + dev_playlists]\ncliques_train_dev = []\nfor u in set(pl_users):\n clique = np.where(u == np.array(pl_users, dtype=np.object))[0]\n if len(clique) > 1:\n cliques_train_dev.append(clique)\n\npkl.dump(cliques_train_dev, gzip.open(fclique21, 'wb'))\n\nclqsize = [len(clq) for clq in cliques_train_dev]\nprint(np.min(clqsize), np.max(clqsize), len(clqsize))",
"Cliques in train + dev + test set.",
"pl_users = [u for (_, u) in train_playlists + dev_playlists + test_playlists]\nclique_list2 = []\nfor u in set(pl_users):\n clique = np.where(u == np.array(pl_users, dtype=np.object))[0]\n if len(clique) > 1:\n clique_list2.append(clique)\n\npkl.dump(clique_list2, gzip.open(fclique22, 'wb'))\n\nclqsize = [len(clq) for clq in clique_list2]\nprint(np.min(clqsize), np.max(clqsize), len(clqsize))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
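The permutation-based split used for the playlists can be sketched on a toy list. The key detail is that the permuted indices must actually be used when slicing; slicing the unshuffled list would make the "uniformly at random" split deterministic in the original order. The 100-item list and ratios below are illustrative stand-ins for all_playlists.

```python
import numpy as np

# stand-in for all_playlists; the real code splits (playlist, user) pairs
items = list(range(100))
dev_ratio, test_ratio = 0.2, 0.2

np.random.seed(987654321)  # same seed as the notebook
perm = np.random.permutation(len(items))

n_test = int(test_ratio * len(items))
n_dev = int(dev_ratio * len(items))

# index through the permutation so the split is uniform at random
test = [items[i] for i in perm[:n_test]]
dev = [items[i] for i in perm[n_test:n_test + n_dev]]
train = [items[i] for i in perm[n_test + n_dev:]]
print(len(train), len(dev), len(test))
```

The three slices of one permutation guarantee the sets are disjoint and exhaustive, which the assertions below check.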
the-deep-learners/study-group
|
demos-for-talks/VGGNet.ipynb
|
mit
|
[
"VGGNet in TFLearn\nfor Oxford's 17 Category Flower Dataset Classification\nBased on https://github.com/tflearn/tflearn/blob/master/examples/images/vgg_network.py",
"from __future__ import division, print_function, absolute_import\n\nimport tflearn\n\nfrom tflearn.layers.core import input_data, dropout, fully_connected\nfrom tflearn.layers.conv import conv_2d, max_pool_2d\nfrom tflearn.layers.estimator import regression",
"Import Data and Preprocess",
"import tflearn.datasets.oxflower17 as oxflower17\n\nX, Y = oxflower17.load_data(one_hot=True)",
"Build 'VGGNet'",
"network = input_data(shape=[None, 224, 224, 3])\n\nnetwork = conv_2d(network, 64, 3, activation='relu')\nnetwork = conv_2d(network, 64, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 128, 3, activation='relu')\nnetwork = conv_2d(network, 128, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 256, 3, activation='relu')\nnetwork = conv_2d(network, 256, 3, activation='relu')\nnetwork = conv_2d(network, 256, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = fully_connected(network, 4096, activation='relu')\nnetwork = dropout(network, 0.5)\nnetwork = fully_connected(network, 4096, activation='relu')\nnetwork = dropout(network, 0.5)\n\nnetwork = fully_connected(network, 17, activation='softmax')\n\nnetwork = regression(network, optimizer='rmsprop',\n loss='categorical_crossentropy',\n learning_rate=0.001)",
"Training",
"model = tflearn.DNN(network, checkpoint_path='model_vgg',\n max_checkpoints=1, tensorboard_verbose=0)\n\n# n_epoch=500 is recommended:\nmodel.fit(X, Y, n_epoch=10, shuffle=True,\n show_metric=True, batch_size=32, snapshot_step=500,\n snapshot_epoch=False, run_id='vgg_oxflowers17')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chris1610/pbpython
|
notebooks/Altair-Article.ipynb
|
bsd-3-clause
|
[
"Introduction to Data Visualization with Altair\nNotebook to accompany article on PB Python.\nhttp://pbpython.com/altair-intro.html",
"import pandas as pd\nfrom altair import Chart, X, Y, Axis, SortField\n\n%matplotlib inline",
"Read in the sample CSV file containing the MN 2014 capital budget",
"budget = pd.read_csv(\"https://github.com/chris1610/pbpython/raw/master/data/mn-budget-detail-2014.csv\")\n\nbudget.head()",
"Make a basic plot of the top 10 expenditures using the default pandas plotting functions",
"budget_top_10 = budget.sort_values(by='amount', ascending=False)[:10]\n\nbudget_top_10.plot(kind=\"bar\", x=\"detail\",\n title=\"MN Capital Budget - 2014\",\n legend=False)",
"Build a similar plot in Altair",
"c = Chart(budget_top_10).mark_bar().encode(\n x='detail',\n y='amount')\nc",
"Convert to a horizontal bar graph",
"c = Chart(budget_top_10).mark_bar().encode(\n y='detail',\n x='amount')\nc",
"Look at the dictionary containing the JSON spec",
"c.to_dict(data=False)",
"Use X() and Y() - does nothing in this example but is useful for future steps",
"Chart(budget_top_10).mark_bar().encode(\n x=X('detail'),\n y=Y('amount')\n)",
"Add in color codes based on the category. This automatically includes a legend.",
"Chart(budget_top_10).mark_bar().encode(\n x=X('detail'),\n y=Y('amount'),\n color=\"category\"\n)",
"Show what happens if we run on all data, not just the top 10",
"Chart(budget).mark_bar().encode(\n x='detail',\n y='amount',\n color='category')",
"Filter data to only amounts >= $10M",
"Chart(budget).mark_bar().encode(\n x='detail:N',\n y='amount:Q',\n color='category').transform_data(\n filter='datum.amount >= 10000000',\n )",
"Swap X and Y in order to see it as a horizontal bar chart",
"Chart(budget).mark_bar().encode(\n y='detail:N',\n x='amount:Q',\n color='category').transform_data(\n filter='datum.amount >= 10000000',\n )",
"Equivalent approach using X and Y classes",
"Chart(budget).mark_bar().encode(\n x=X('detail:O',\n axis=Axis(title='Project')),\n y=Y('amount:Q',\n axis=Axis(title='2014 Budget')),\n color='category').transform_data(\n filter='datum.amount >= 10000000',\n )",
"Order the projects based on their total spend",
"Chart(budget).mark_bar().encode(\n x=X('detail:O', sort=SortField(field='amount', order='descending', op='sum'),\n axis=Axis(title='Project')),\n y=Y('amount:Q',\n axis=Axis(title='2014 Budget')),\n color='category').transform_data(\n filter='datum.amount >= 10000000',\n )",
"Summarize the data at the category level",
"Chart(budget).mark_bar().encode(\n x=X('category', sort=SortField(field='amount', order='descending', op='sum'),\n axis=Axis(title='Category')),\n y=Y('sum(amount)',\n axis=Axis(title='2014 Budget')))\n\nc = Chart(budget).mark_bar().encode(\n y=Y('category', sort=SortField(field='amount', order='descending', op='sum'),\n axis=Axis(title='Category')),\n x=X('sum(amount)',\n axis=Axis(title='2014 Budget')))\nc\n\nc.to_dict(data=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.23/_downloads/5050ef6c79955f686202b68899c9d8b8/mne_inverse_coherence_epochs.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute coherence in source space using an MNE inverse solution\nThis example computes the coherence between a seed in the left\nauditory cortex and the rest of the brain based on single-trial\nMNE-dSPM inverse solutions.",
"# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import (apply_inverse, apply_inverse_epochs,\n read_inverse_operator)\nfrom mne.connectivity import seed_target_indices, spectral_connectivity\n\nprint(__doc__)",
"Read the data\nFirst we'll read in the sample MEG data that we'll use for computing\ncoherence between channels. We'll convert this into epochs in order to\ncompute the event-related coherence.",
"data_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nlabel_name_lh = 'Aud-lh'\nfname_label_lh = data_path + '/MEG/sample/labels/%s.label' % label_name_lh\n\nevent_id, tmin, tmax = 1, -0.2, 0.5\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\n\n# Load data.\ninverse_operator = read_inverse_operator(fname_inv)\nlabel_lh = mne.read_label(fname_label_lh)\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)\n\n# Add a bad channel.\nraw.info['bads'] += ['MEG 2443']\n\n# pick MEG channels.\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Read epochs.\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0),\n reject=dict(mag=4e-12, grad=4000e-13, eog=150e-6))",
"Choose channels for coherence estimation\nNext we'll calculate our channel sources. Then we'll find the most active\nvertex in the left auditory cortex, which we will later use as seed for the\nconnectivity computation.",
"snr = 3.0\nlambda2 = 1.0 / snr ** 2\nevoked = epochs.average()\nstc = apply_inverse(evoked, inverse_operator, lambda2, method,\n pick_ori=\"normal\")\n\n# Restrict the source estimate to the label in the left auditory cortex.\nstc_label = stc.in_label(label_lh)\n\n# Find number and index of vertex with most power.\nsrc_pow = np.sum(stc_label.data ** 2, axis=1)\nseed_vertno = stc_label.vertices[0][np.argmax(src_pow)]\nseed_idx = np.searchsorted(stc.vertices[0], seed_vertno) # index in orig stc\n\n# Generate index parameter for seed-based connectivity analysis.\nn_sources = stc.data.shape[0]\nindices = seed_target_indices([seed_idx], np.arange(n_sources))",
"Compute the inverse solution for each epoch. By using \"return_generator=True\"\nstcs will be a generator object instead of a list. This allows us to\ncompute the coherence without having to keep all source estimates in memory.",
"snr = 1.0 # use lower SNR for single epochs\nlambda2 = 1.0 / snr ** 2\nstcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method,\n pick_ori=\"normal\", return_generator=True)",
"Compute the coherence between sources\nNow we are ready to compute the coherence in the alpha and beta band.\nfmin and fmax specify the lower and upper freq. for each band, respectively.\nTo speed things up, we use mode='fourier', which\nuses an FFT with a Hanning window to compute the spectra (instead of\na multitaper estimation, which has a lower variance but is slower).\nBy using faverage=True, we directly average the coherence in the alpha and\nbeta band, i.e., we will only get 2 frequency bins.",
"fmin = (8., 13.)\nfmax = (13., 30.)\nsfreq = raw.info['sfreq'] # the sampling frequency\n\ncoh, freqs, times, n_epochs, n_tapers = spectral_connectivity(\n stcs, method='coh', mode='fourier', indices=indices,\n sfreq=sfreq, fmin=fmin, fmax=fmax, faverage=True, n_jobs=1)\n\nprint('Frequencies in Hz over which coherence was averaged for alpha: ')\nprint(freqs[0])\nprint('Frequencies in Hz over which coherence was averaged for beta: ')\nprint(freqs[1])",
"Generate coherence sources and plot\nFinally, we'll generate a SourceEstimate with the coherence. This is simple\nsince we used a single seed. For more than one seed we would have to choose\none of the slices within coh.\n<div class=\"alert alert-info\"><h4>Note</h4><p>We use a hack to save the frequency axis as time.</p></div>\n\nFinally, we'll plot this source estimate on the brain.",
"tmin = np.mean(freqs[0])\ntstep = np.mean(freqs[1]) - tmin\ncoh_stc = mne.SourceEstimate(coh, vertices=stc.vertices, tmin=1e-3 * tmin,\n tstep=1e-3 * tstep, subject='sample')\n\n# Now we can visualize the coherence using the plot method.\nbrain = coh_stc.plot('sample', 'inflated', 'both',\n time_label='Coherence %0.1f Hz',\n subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=(0.25, 0.4, 0.65)))\nbrain.show_view('lateral')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
grantjenks/pyannote-core
|
notebook/pyannote.core.feature.ipynb
|
mit
|
[
"%pylab inline\nfrom pyannote.core import notebook",
"SlidingWindowFeature (pyannote.core.feature.SlidingWindowFeature)",
"from pyannote.core import SlidingWindowFeature, SlidingWindow",
"SlidingWindowFeature instances are used to manage feature vectors extracted on a sliding window (e.g. MFCC in audio processing).",
"# one 4-dimensional feature vector extracted every 100ms from a 200ms window\nframe = SlidingWindow(start=0.0, step=0.100, duration=0.200)\n\n# random for illustration purposes\ndata = np.random.randn(100, 4)\n\nfeatures = SlidingWindowFeature(data, frame)",
"Cropping\nYou may use crop to extract a temporal subset:",
"help(features.crop)\n\nfrom pyannote.core import Segment\nfeatures.crop(Segment(2, 3))",
"Need help?\nYou can always try the following...\nWho knows? It might give you the information you are looking for!",
"help(SlidingWindowFeature)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
letsgoexploring/teaching
|
winter2017/econ129/python/Econ129_Winter2017_Homework1.ipynb
|
mit
|
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Homework 1 (DUE: Tuesday January 24)\nInstructions: Complete the instructions in this notebook. You may work together with other students in the class and you may take full advantage of any internet resources available. You must provide thorough comments in your code so that it's clear that you understand what your code is doing and so that your code is readable.\nSubmit the assignment by saving your notebook as an html file (File -> Download as -> HTML) and uploading it to the appropriate Dropbox folder on EEE.\nQuestion 1\nThe Cobb-Douglas production function can be written in per worker terms:\n \\begin{align}\n y & = A k^{\\alpha},\n \\end{align}\nwhere $y$ denotes output per worker, $k$ denotes capital per worker, and $A$ denotes total factor productivity or technology.\nDo the following:\n\n\nSuppose that $A = 1$ and $\\alpha = 0.35$. Construct a well-labeled plot of the Cobb-Douglas production function with $k$ on the horizontal axis and $y$ on the vertical axis for $k$ between 0 and 10. Your plot must have a title and axis labels.\n\n\nPlot the Cobb-Douglas production function for $A = 0.75, 1,$ and $1.25$ with $\\alpha = 0.35$ and $k$ ranging from 0 to 10. Each line should have a different style (e.g., solid, dashed, dot-dashed). Your plot must have a title and axis labels. The plot should also contain a legend that clearly indicates which line is associated with which value of $A$ and does not cover the plotted lines.",
"# Question 1.1\n\n\n\n# Question 1.2\n\n",
"Question 2\nThe cardioid is a shape described by the parametric equations:\n\\begin{align}\n x & = a(2\\cos \\theta - \\cos 2\\theta), \\\\\n y & = a(2\\sin \\theta - \\sin 2\\theta).\n \\end{align}\nConstruct a well-labeled graph of the cardioid for $a=1$ and $\\theta$ in $[0,2\\pi]$. Your plot must have a title and axis labels.",
"# Question 2\n\n",
"Question 3\nRecall the two-good utility maximization problem from microeconomics. Let $x$ and $y$ denote the amounts of two goods that a person consumes. The person receives utility from consumption given by:\n \\begin{align}\n u(x,y) & = x^{\\alpha}y^{\\beta} \n \\end{align}\nThe person has income $M$ to spend on the two goods and the prices of the goods are $p_x$ and $p_y$. The consumer's budget constraint is:\n \\begin{align}\n M & = p_x x + p_y y\n \\end{align}\nSuppose that $M = 100$, $\\alpha=0.25$, $\\beta=0.75$, $p_x = 1$, and $p_y = 0.5$. The consumer's problem is to maximize their utility subject to the budget constraint. While this problem can easily be solved by hand, we're going to use a computational approach.\nDo the following:\n\n\nUse the budget constraint to solve for $y$ in terms of $x$, $p_x$, $p_y$, and $M$. Use the result to write the consumer's utility as a function of $x$ only. Create a variable called x equal to an array of values from 0 to 80 with step size equal to 0.001 and a variable called utility equal to the consumer's utility. Plot the consumer's utility against $x$.\n\n\nThe NumPy function np.max() returns the highest value in an array and np.argmax() returns the index of the highest value. Print the highest value and index of the highest value of utility.\n\n\nUse the index of the highest value of utility to find the value in x with the same index and store the value in a new variable called xstar. Print the value of xstar.\n\n\nUse the budget constraint to find the implied utility-maximizing value of $y$ and store this in a variable called ystar. Print ystar.\n\n\nBonus question: Create a well-labeled plot of the consumer's budget constraint and the indifference curve that corresponds with the optimal choice of $x$ and $y$.",
"# Question 3.1\n\n\n\n# Question 3.2\n\n\n\n# Question 3.3\n\n\n\n# Question 3.4\n\n\n\n# Question 3 bonus\n\n"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
diegocavalca/Studies
|
deep-learnining-specialization/5. Sequence Model/Dinosaurus Island -- Character level language model final - v3.ipynb
|
cc0-1.0
|
[
"Character level language model - Dinosaurus land\nWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! \n<table>\n<td>\n<img src=\"images/dino.jpg\" style=\"width:250px;height:300px;\">\n\n</td>\n\n</table>\n\nLuckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this dataset. (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! \nBy completing this assignment you will learn:\n\nHow to store text data for processing using an RNN \nHow to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit\nHow to build a character-level text generation recurrent neural network\nWhy clipping the gradients is important\n\nWe will begin by loading in some functions that we have provided for you in rnn_utils. Specifically, you have access to functions such as rnn_forward and rnn_backward which are equivalent to those you've implemented in the previous assignment.",
"import numpy as np\nfrom utils import *\nimport random",
"1 - Problem Statement\n1.1 - Dataset and Preprocessing\nRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.",
"data = open('dinos.txt', 'r').read()\ndata= data.lower()\nchars = list(set(data))\ndata_size, vocab_size = len(data), len(chars)\nprint('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))",
"The characters are a-z (26 characters) plus the \"\\n\" (or newline character), which in this assignment plays a role similar to the <EOS> (or \"End of sentence\") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.",
"char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }\nix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }\nprint(ix_to_char)",
"1.2 - Overview of the model\nYour model will have the following structure: \n\nInitialize parameters \nRun the optimization loop\nForward propagation to compute the loss function\nBackward propagation to compute the gradients with respect to the loss function\nClip the gradients to avoid exploding gradients\nUsing the gradients, update your parameters with the gradient descent update rule.\n\n\nReturn the learned parameters \n\n<img src=\"images/rnn.png\" style=\"width:450px;height:300px;\">\n<caption><center> Figure 1: Recurrent Neural Network, similar to what you had built in the previous notebook \"Building a RNN - Step by Step\". </center></caption>\nAt each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ is a list of characters in the training set, while $Y = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$ is such that at every time-step $t$, we have $y^{\\langle t \\rangle} = x^{\\langle t+1 \\rangle}$. \n2 - Building blocks of the model\nIn this part, you will build two important blocks of the overall model:\n- Gradient clipping: to avoid exploding gradients\n- Sampling: a technique used to generate characters\nYou will then apply these two functions to build the model.\n2.1 - Clipping the gradients in the optimization loop\nIn this section you will implement the clip function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not \"exploding,\" meaning taking on overly large values. \nIn the exercise below, you will implement a function clip that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a maxValue (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. \n<img src=\"images/clip.png\" style=\"width:400px;height:150px;\">\n<caption><center> Figure 2: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight \"exploding gradient\" problems. </center></caption>\nExercise: Implement the function below to return the clipped gradients of your dictionary gradients. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this hint for examples of how to clip in numpy. You will need to use the argument out = ....",
"### GRADED FUNCTION: clip\n\ndef clip(gradients, maxValue):\n '''\n Clips the gradients' values between minimum and maximum.\n \n Arguments:\n gradients -- a dictionary containing the gradients \"dWaa\", \"dWax\", \"dWya\", \"db\", \"dby\"\n maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue\n \n Returns: \n gradients -- a dictionary with the clipped gradients.\n '''\n \n dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']\n \n ### START CODE HERE ###\n # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)\n #for gradient in [dWax, dWaa, dWya, db, dby]: # not a good way to go \n ### END CODE HERE ###\n gradients = {\"dWaa\": dWaa, \"dWax\": dWax, \"dWya\": dWya, \"db\": db, \"dby\": dby}\n for k, v in gradients.items():\n gradients[k] = np.clip(v, a_min=-maxValue, a_max=maxValue)\n \n return gradients\n\nnp.random.seed(3)\ndWax = np.random.randn(5,3)*10\ndWaa = np.random.randn(5,5)*10\ndWya = np.random.randn(2,5)*10\ndb = np.random.randn(5,1)*10\ndby = np.random.randn(2,1)*10\ngradients = {\"dWax\": dWax, \"dWaa\": dWaa, \"dWya\": dWya, \"db\": db, \"dby\": dby}\ngradients = clip(gradients, 10)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])",
"Expected output:\n<table>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2] **\n </td>\n <td> \n 10.0\n </td>\n</tr>\n\n<tr>\n <td> \n **gradients[\"dWax\"][3][1]**\n </td>\n <td> \n -10.0\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> \n0.29713815361\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> \n[ 10.]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td> \n[ 8.45833407]\n </td>\n</tr>\n\n</table>\n\n2.2 - Sampling\nNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:\n<img src=\"images/dinos3.png\" style=\"width:500px;height:300px;\">\n<caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\\langle 1\\rangle} = \\vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>\nExercise: Implement the sample function below to sample characters. You need to carry out 4 steps:\n\n\nStep 1: Pass the network the first \"dummy\" input $x^{\\langle 1 \\rangle} = \\vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\\langle 0 \\rangle} = \\vec{0}$.\n\n\nStep 2: Run one step of forward propagation to get $a^{\\langle 1 \\rangle}$ and $\\hat{y}^{\\langle 1 \\rangle}$. Here are the equations:\n\n\n$$ a^{\\langle t+1 \\rangle} = \\tanh(W_{ax} x^{\\langle t \\rangle } + W_{aa} a^{\\langle t \\rangle } + b)\\tag{1}$$\n$$ z^{\\langle t + 1 \\rangle } = W_{ya} a^{\\langle t + 1 \\rangle } + b_y \\tag{2}$$\n$$ \\hat{y}^{\\langle t+1 \\rangle } = softmax(z^{\\langle t + 1 \\rangle })\\tag{3}$$\nNote that $\\hat{y}^{\\langle t+1 \\rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\\hat{y}^{\\langle t+1 \\rangle}_i$ represents the probability that the character indexed by \"i\" is the next character. We have provided a softmax() function that you can use.\n\nStep 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\\hat{y}^{\\langle t+1 \\rangle }$. This means that if $\\hat{y}^{\\langle t+1 \\rangle }_i = 0.16$, you will pick the index \"i\" with 16% probability. To implement it, you can use np.random.choice.\n\nHere is an example of how to use np.random.choice():\npython\nnp.random.seed(0)\np = np.array([0.1, 0.0, 0.7, 0.2])\nindex = np.random.choice([0, 1, 2, 3], p = p.ravel())\nThis means that you will pick the index according to the distribution: \n$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.\n\nStep 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\\langle t \\rangle }$, with the value of $x^{\\langle t + 1 \\rangle }$. You will represent $x^{\\langle t + 1 \\rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\\langle t + 1 \\rangle }$ in Step 1 and keep repeating the process until you get a \"\\n\" character, indicating you've reached the end of the dinosaur name.",
"# GRADED FUNCTION: sample\n\ndef sample(parameters, char_to_ix, seed):\n \"\"\"\n Sample a sequence of characters according to a sequence of probability distributions output of the RNN\n\n Arguments:\n parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. \n char_to_ix -- python dictionary mapping each character to an index.\n seed -- used for grading purposes. Do not worry about it.\n\n Returns:\n indices -- a list of length n containing the indices of the sampled characters.\n \"\"\"\n \n # Retrieve parameters and relevant shapes from \"parameters\" dictionary\n Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']\n vocab_size = by.shape[0]\n n_a = Waa.shape[1]\n \n ### START CODE HERE ###\n # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)\n x = np.zeros((vocab_size, 1))\n # Step 1': Initialize a_prev as zeros (≈1 line)\n a_prev = np.zeros((n_a, 1))\n \n # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)\n indices = []\n \n # Idx is a flag to detect a newline character, we initialize it to -1\n idx = -1 \n \n # Loop over time-steps t. At each time-step, sample a character from a probability distribution and append \n # its index to \"indices\". We'll stop if we reach 50 characters (which should be very unlikely with a well \n # trained model), which helps debugging and prevents entering an infinite loop. \n counter = 0\n newline_character = char_to_ix['\\n']\n \n while (idx != newline_character and counter != 50):\n \n # Step 2: Forward propagate x using the equations (1), (2) and (3)\n a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)\n z = np.dot(Wya, a) + by\n y = softmax(z)\n \n # for grading purposes\n np.random.seed(counter+seed) \n \n # Step 3: Sample the index of a character within the vocabulary from the probability distribution y\n idx = np.random.choice(range(vocab_size), p=y.ravel())\n\n # Append the index to \"indices\"\n indices.append(idx)\n \n # Step 4: Overwrite the input character as the one corresponding to the sampled index.\n x = np.zeros((vocab_size, 1))\n x[idx] = 1\n \n # Update \"a_prev\" to be \"a\"\n a_prev = a\n \n # for grading purposes\n seed += 1\n counter += 1\n \n ### END CODE HERE ###\n\n if (counter == 50):\n indices.append(char_to_ix['\\n'])\n \n return indices\n\nnp.random.seed(2)\n_, n_a = 20, 100\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\n\n\nindices = sample(parameters, char_to_ix, 0)\nprint(\"Sampling:\")\nprint(\"list of sampled indices:\", indices)\nprint(\"list of sampled characters:\", [ix_to_char[i] for i in indices])",
"Expected output:\n<table>\n<tr>\n <td> \n **list of sampled indices:**\n </td>\n <td> \n [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br>\n 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]\n </td>\n </tr><tr>\n <td> \n **list of sampled characters:**\n </td>\n <td> \n ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br>\n 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br>\n 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\\n', '\\n']\n </td>\n\n\n\n</tr>\n</table>\n\n3 - Building the language model\nIt is time to build the character-level language model for text generation. \n3.1 - Gradient descent\nIn this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:\n\nForward propagate through the RNN to compute the loss\nBackward propagate through time to compute the gradients of the loss with respect to the parameters\nClip the gradients if necessary \nUpdate your parameters using gradient descent \n\nExercise: Implement this optimization process (one step of stochastic gradient descent). \nWe provide you with the following functions: \n```python\ndef rnn_forward(X, Y, a_prev, parameters):\n \"\"\" Performs the forward propagation through the RNN and computes the cross-entropy loss.\n It returns the loss' value as well as a \"cache\" storing values to be used in the backpropagation.\"\"\"\n ....\n return loss, cache\ndef rnn_backward(X, Y, parameters, cache):\n \"\"\" Performs the backward propagation through time to compute the gradients of the loss with respect\n to the parameters. It also returns all the hidden states.\"\"\"\n ...\n return gradients, a\ndef update_parameters(parameters, gradients, learning_rate):\n \"\"\" Updates parameters using the Gradient Descent Update Rule.\"\"\"\n ...\n return parameters\n```",
"# GRADED FUNCTION: optimize\n\ndef optimize(X, Y, a_prev, parameters, learning_rate = 0.01):\n \"\"\"\n Execute one step of the optimization to train the model.\n \n Arguments:\n X -- list of integers, where each integer is a number that maps to a character in the vocabulary.\n Y -- list of integers, exactly the same as X but shifted one index to the left.\n a_prev -- previous hidden state.\n parameters -- python dictionary containing:\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n b -- Bias, numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n learning_rate -- learning rate for the model.\n \n Returns:\n loss -- value of the loss function (cross-entropy)\n gradients -- python dictionary containing:\n dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)\n db -- Gradients of bias vector, of shape (n_a, 1)\n dby -- Gradients of output bias vector, of shape (n_y, 1)\n a[len(X)-1] -- the last hidden state, of shape (n_a, 1)\n \"\"\"\n \n ### START CODE HERE ###\n \n # Forward propagate through time (≈1 line)\n loss, cache = rnn_forward(X, Y, a_prev, parameters)\n \n # Backpropagate through time (≈1 line)\n gradients, a = rnn_backward(X, Y, parameters, cache)\n \n # Clip your gradients between -5 (min) and 5 (max) (≈1 line)\n gradients = clip(gradients, 5)\n \n # Update parameters (≈1 line)\n parameters.update({k: v - learning_rate * gradients[\"d\" + k] for k, v in parameters.items()})\n \n ### END CODE HERE ###\n \n return loss, gradients, a[len(X)-1]\n\nnp.random.seed(1)\nvocab_size, n_a = 27, 100\na_prev = np.random.randn(n_a, 1)\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\nX = [12,3,5,11,22,3]\nY = [4,14,11,22,25, 26]\n\nloss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\nprint(\"Loss =\", loss)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"np.argmax(gradients[\\\"dWax\\\"]) =\", np.argmax(gradients[\"dWax\"]))\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])\nprint(\"a_last[4] =\", a_last[4])",
"Expected output:\n<table>\n\n\n<tr>\n <td> \n **Loss **\n </td>\n <td> \n 126.503975722\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2]**\n </td>\n <td> \n 0.194709315347\n </td>\n<tr>\n <td> \n **np.argmax(gradients[\"dWax\"])**\n </td>\n <td> 93\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> -0.007773876032\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> [-0.06809825]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td>[ 0.01538192]\n </td>\n</tr>\n<tr>\n <td> \n **a_last[4]**\n </td>\n <td> [-1.]\n </td>\n</tr>\n\n</table>\n\n3.2 - Training the model\nGiven the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. \nExercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:\npython\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]] \n Y = X[1:] + [char_to_ix[\"\\n\"]]\nNote that we use: index= j % len(examples), where j = 1....num_iterations, to make sure that examples[index] is always a valid statement (index is smaller than len(examples)).\nThe first entry of X being None will be interpreted by rnn_forward() as setting $x^{\\langle 0 \\rangle} = \\vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional \"\\n\" appended to signify the end of the dinosaur name.",
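As a quick illustration of the `(X, Y)` construction described above, here is a standalone sketch. The vocabulary is a hypothetical one for illustration: the 26 lowercase letters plus `"\n"` (27 characters), sorted so that `"\n"` maps to index 0.

```python
import string

# Hypothetical 27-character vocabulary: 'a'-'z' plus '\n', sorted so '\n' gets index 0.
char_to_ix = {ch: i for i, ch in enumerate(sorted(string.ascii_lowercase + "\n"))}

name = "cat"                                  # one training example (a dinosaur name)
X = [None] + [char_to_ix[ch] for ch in name]  # None stands in for x<0> = zero vector
Y = X[1:] + [char_to_ix["\n"]]                # X shifted left; '\n' marks end of name
print(X)  # [None, 3, 1, 20]
print(Y)  # [3, 1, 20, 0]
```

Note that `Y` is exactly `X` shifted one step to the left, with the newline index appended, matching the hint in the instructions.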
"# GRADED FUNCTION: model\n\ndef model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):\n \"\"\"\n Trains the model and generates dinosaur names. \n \n Arguments:\n data -- text corpus\n ix_to_char -- dictionary that maps the index to a character\n char_to_ix -- dictionary that maps a character to an index\n num_iterations -- number of iterations to train the model for\n n_a -- number of units of the RNN cell\n dino_names -- number of dinosaur names you want to sample at each iteration. \n vocab_size -- number of unique characters found in the text, size of the vocabulary\n \n Returns:\n parameters -- learned parameters\n \"\"\"\n \n # Retrieve n_x and n_y from vocab_size\n n_x, n_y = vocab_size, vocab_size\n \n # Initialize parameters\n parameters = initialize_parameters(n_a, n_x, n_y)\n \n # Initialize loss (this is required because we want to smooth our loss, don't worry about it)\n loss = get_initial_loss(vocab_size, dino_names)\n \n # Build list of all dinosaur names (training examples).\n with open(\"dinos.txt\") as f:\n examples = f.readlines()\n examples = [x.lower().strip() for x in examples]\n \n # Shuffle list of all dinosaur names\n np.random.seed(0)\n np.random.shuffle(examples)\n \n # Initialize the hidden state of your LSTM\n a_prev = np.zeros((n_a, 1))\n \n # Optimization loop\n for j in range(num_iterations):\n \n ### START CODE HERE ###\n \n # Use the hint above to define one training example (X,Y) (≈ 2 lines)\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]]\n Y = X[1:] + [char_to_ix[\"\\n\"]]\n \n # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters\n # Choose a learning rate of 0.01\n curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\n \n ### END CODE HERE ###\n \n # Use a latency trick to keep the loss smooth. 
It happens here to accelerate the training.\n        loss = smooth(loss, curr_loss)\n\n        # Every 2000 iterations, generate \"n\" characters thanks to sample() to check if the model is learning properly\n        if j % 2000 == 0:\n            \n            print('Iteration: %d, Loss: %f' % (j, loss) + '\\n')\n            \n            # The number of dinosaur names to print\n            seed = 0\n            for name in range(dino_names):\n                \n                # Sample indices and print them\n                sampled_indices = sample(parameters, char_to_ix, seed)\n                print_sample(sampled_indices, ix_to_char)\n                \n                seed += 1  # To get the same result for grading purposes, increment the seed by one. \n            \n            print('\\n')\n        \n    return parameters",
"Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.",
"parameters = model(data, ix_to_char, char_to_ix)",
"Conclusion\nYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.\nIf your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! \nThis assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and can run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!\n<img src=\"images/mangosaurus.jpeg\" style=\"width:250px;height:300px;\">\n4 - Writing like Shakespeare\nThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. \nA similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what a different character should be much, much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short. 
\n<img src=\"images/shakespeare.jpg\" style=\"width:500;height:400px;\">\n<caption><center> Let's become poets! </center></caption>\nWe have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.",
"from __future__ import print_function\nfrom keras.callbacks import LambdaCallback\nfrom keras.models import Model, load_model, Sequential\nfrom keras.layers import Dense, Activation, Dropout, Input, Masking\nfrom keras.layers import LSTM\nfrom keras.utils.data_utils import get_file\nfrom keras.preprocessing.sequence import pad_sequences\nfrom shakespeare_utils import *\nimport sys\nimport io",
"To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called \"The Sonnets\". \nLet's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt you for an input (<40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try \"Forsooth this maketh no sense \" (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.",
"print_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\nmodel.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])\n\n# Run this cell to try with different inputs without having to re-train the model \ngenerate_output()",
"The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:\n- LSTMs instead of the basic RNN to capture longer-range dependencies\n- The model is a deeper, stacked LSTM model (2 layer)\n- Using Keras instead of raw Python to simplify the code \nIf you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.\nCongratulations on finishing this notebook! \nReferences:\n- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's blog post.\n- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.17/_downloads/285b08fd9daa300c4b586365a3234831/plot_read_evoked.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Reading and writing an evoked file\nThis script shows how to read and write evoked datasets.",
"# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nfrom mne import read_evokeds\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\n\n# Reading\ncondition = 'Left Auditory'\nevoked = read_evokeds(fname, condition=condition, baseline=(None, 0),\n proj=True)",
"Show the result as a butterfly plot:\nBy using exclude=[], bad channels are not excluded and are shown in red.",
"evoked.plot(exclude=[], time_unit='s')\n\n# Show result as a 2D image (x: time, y: channels, color: amplitude)\nevoked.plot_image(exclude=[], time_unit='s')",
"Use :func:mne.Evoked.save or :func:mne.write_evokeds to write the evoked\nresponses to a file."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
paoloRais/lightfm
|
examples/movielens/warp_loss.ipynb
|
apache-2.0
|
[
"Learning-to-rank using the WARP loss\nLightFM is probably the only recommender package implementing the WARP (Weighted Approximate-Rank Pairwise) loss for implicit feedback learning-to-rank. Generally, it performs better than the more popular BPR (Bayesian Personalised Ranking) loss --- often by a large margin.\nIt was originally applied to image annotations in the Weston et al. WSABIE paper, but has been extended to apply to recommendation settings in the 2013 k-order statistic loss paper in the form of the k-OS WARP loss, also implemented in LightFM.\nLike the BPR model, WARP deals with (user, positive item, negative item) triplets. Unlike BPR, the negative items in the triplet are not chosen by random sampling: they are chosen from among those negative items which would violate the desired item ranking given the state of the model. This approximates a form of active learning where the model selects those triplets that it cannot currently rank correctly.\nThis procedure yields roughly the following algorithm:\n\nFor a given (user, positive item) pair, sample a negative item at random from all the remaining items. Compute predictions for both items; if the negative item's prediction exceeds that of the positive item plus a margin, perform a gradient update to rank the positive item higher and the negative item lower. If there is no rank violation, continue sampling negative items until a violation is found.\nIf you found a violating negative example at the first try, make a large gradient update: this indicates that a lot of negative items are ranked higher than positive items given the current state of the model, and the model must be updated by a large amount. If it took a lot of sampling to find a violating example, perform a small update: the model is likely close to the optimum and should be updated at a low rate.\n\nWhile this is fairly hand-wavy, it should give the correct intuition. For more details, read the paper itself or a more in-depth blog post here. 
A similar approach for BPR is described in Rendle's 2014 WSDM 2014 paper.\nHaving covered the theory, the rest of this example looks at the practical implications of using WARP in LightFM.\nPreliminaries\nLet's first get the data. We'll use the MovieLens 100K dataset.",
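The sampling-and-update procedure described above can be sketched in plain NumPy. This is only an illustration of the WARP idea, not LightFM's actual implementation: the score array, the margin of 1.0, and the `log(n_items / attempts)` weight are simplifying assumptions standing in for the real model scores and rank-based weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

def warp_negative(scores, pos_item, max_sampled=100, margin=1.0):
    """Sample negatives until one violates the margin ranking.

    Returns (negative_item, update_weight); the weight shrinks with the
    number of attempts needed, mimicking WARP's rank-dependent update size.
    """
    n_items = len(scores)
    for attempts in range(1, max_sampled + 1):
        neg = int(rng.integers(n_items))
        if neg == pos_item:
            continue
        if scores[neg] > scores[pos_item] - margin:  # rank violation found
            return neg, float(np.log(max(1.0, n_items / attempts)))
    return None, 0.0  # no violation found within max_sampled: skip the update

scores = np.array([3.0, -2.0, -2.5, 2.9, -3.0])  # item 0 is the positive item
neg, weight = warp_negative(scores, pos_item=0)
print(neg)  # 3 -- the only item whose score exceeds 3.0 - 1.0
```

The `max_sampled` cap in the sketch plays the same role as LightFM's `max_sampled` hyperparameter discussed later in this notebook: when no violating negative turns up within the budget, no update happens.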
"import time\n\nimport numpy as np\n\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom lightfm import LightFM\nfrom lightfm.datasets import fetch_movielens\nfrom lightfm.evaluation import auc_score\n\nmovielens = fetch_movielens()\n\ntrain, test = movielens['train'], movielens['test']",
"Accuracy\nThe first interesting experiment is to compare the accuracy between the WARP and BPR losses. Let's fit two models with equivalent hyperparameters and compare their accuracy across epochs. Whilst we're fitting them, let's also measure how much time each epoch takes.",
"alpha = 1e-05\nepochs = 70\nnum_components = 32\n\nwarp_model = LightFM(no_components=num_components,\n loss='warp',\n learning_schedule='adagrad',\n user_alpha=alpha,\n item_alpha=alpha)\n\nbpr_model = LightFM(no_components=num_components,\n loss='bpr',\n learning_schedule='adagrad',\n user_alpha=alpha,\n item_alpha=alpha)\n\nwarp_duration = []\nbpr_duration = []\nwarp_auc = []\nbpr_auc = []\n\nfor epoch in range(epochs):\n start = time.time()\n warp_model.fit_partial(train, epochs=1)\n warp_duration.append(time.time() - start)\n warp_auc.append(auc_score(warp_model, test, train_interactions=train).mean())\n \nfor epoch in range(epochs):\n start = time.time()\n bpr_model.fit_partial(train, epochs=1)\n bpr_duration.append(time.time() - start)\n bpr_auc.append(auc_score(bpr_model, test, train_interactions=train).mean())",
"Plotting the results immediately reveals that WARP produces superior results: a smarter way of selecting negative examples leads to higher quality rankings. Test accuracy decreases after the first 10 epochs, suggesting WARP starts overfitting and would benefit from regularization or early stopping.",
"x = np.arange(epochs)\nplt.plot(x, np.array(warp_auc))\nplt.plot(x, np.array(bpr_auc))\nplt.legend(['WARP AUC', 'BPR AUC'], loc='upper right')\nplt.show()",
"Fitting speed\nWhat about model fitting speed?",
"x = np.arange(epochs)\nplt.plot(x, np.array(warp_duration))\nplt.plot(x, np.array(bpr_duration))\nplt.legend(['WARP duration', 'BPR duration'], loc='upper right')\nplt.show()",
"WARP is slower than BPR for all epochs. Interestingly, however, it gets slower with additional epochs; every subsequent epoch takes more time. This is because of WARP's adaptive sampling of negatives: the closer the model fits the training data, the more times it needs to sample in order to find rank-violating examples, leading to longer fitting times.\nFor this reason, LightFM exposes the max_sampled hyperparameter that limits the number of attempts WARP will carry out to find a negative. Setting it to a low value and repeating the run shows that the run time actually decreases with every epoch: this is because no updates happen when a violating example cannot be found in the specified number of attempts.",
"warp_model = LightFM(no_components=num_components,\n max_sampled=3,\n loss='warp',\n learning_schedule='adagrad',\n user_alpha=alpha,\n item_alpha=alpha)\n\nwarp_duration = []\nwarp_auc = []\n\nfor epoch in range(epochs):\n start = time.time()\n warp_model.fit_partial(train, epochs=1)\n warp_duration.append(time.time() - start)\n warp_auc.append(auc_score(warp_model, test, train_interactions=train).mean())\n\nx = np.arange(epochs)\nplt.plot(x, np.array(warp_duration))\nplt.legend(['WARP duration'], loc='upper right')\nplt.title('Duration')\nplt.show()\n\nx = np.arange(epochs)\nplt.plot(x, np.array(warp_auc))\nplt.legend(['WARP AUC'], loc='upper right')\nplt.title('AUC')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arii/olin_dynamics
|
code/dynamics_2340.ipynb
|
mit
|
[
"Mass-spring system\n<img width=\"300px\" src=\"media/maxresdefault.jpg\"/>\nEquation of motion:\n$m\\ddot{x} + kx = mg$\nLet $Y = (x, \\dot{x})$; then the equations of motion are rewritten as:\n$$\n\\dot{Y} = \\left(\\begin{array}{c} \nY[1] \\\\\n-\\frac{k}{m}Y[0] + g\n\\end{array}\\right)\n$$",
"from utils import *\n\ny0 = [0.,0.]\nt0 = 0\nt1 = 10\ndt = 0.001\n\ng = 9.81\nk = 100.0\nm = 1.0\n",
"The following functions define the equations of motion.",
"def f(t,Y):\n    return [Y[1], (-k/m)*Y[0] + g]\n\nw = (k/m)**0.5\nc1 = y0[0] - (m*g)/k\nc2 = y0[1]/w\n\ndef f_derived(ti):\n    # analytical solution: x = c1*cos(wt) + c2*sin(wt) + mg/k, with c2 = v0/w\n    x = c1*cos(w*ti) + c2*sin(w*ti) + (m*g)/k\n    xdot = -c1*w*sin(w*ti) + c2*w*cos(w*ti)\n    return [x, xdot]",
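The closed-form solution $x(t) = c_1\cos(\omega t) + c_2\sin(\omega t) + mg/k$ can be sanity-checked against a simple fixed-step integrator. Below is a minimal, self-contained RK4 sketch (independent of the `ode45` helper in utils.py) using the same constants as above and the rest initial condition $y_0 = [0, 0]$, for which $c_1 = -mg/k$ and $c_2 = 0$:

```python
from math import cos, sqrt

g, k, m = 9.81, 100.0, 1.0
w = sqrt(k / m)

def f(t, y):
    # equations of motion: x' = v, v' = -(k/m) x + g
    return [y[1], -(k / m) * y[0] + g]

def rk4_step(t, y, dt):
    # one classical Runge-Kutta 4 step for a 2-component state
    k1 = f(t, y)
    k2 = f(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = f(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = f(t + dt, [y[i] + dt * k3[i] for i in range(2)])
    return [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

# integrate from rest (y0 = [0, 0]) up to t = 1 s
y, t, dt = [0.0, 0.0], 0.0, 0.001
for _ in range(1000):
    y = rk4_step(t, y, dt)
    t += dt

# closed form with x(0) = x'(0) = 0: c1 = -mg/k, c2 = 0
x_exact = (m * g / k) * (1 - cos(w * t))
print(abs(y[0] - x_exact))  # agreement to well below 1e-6
```

With a step of 0.001 s the numerical and analytical positions agree to roughly machine-level precision, which is what the plot in the next cell shows graphically.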
"Simulating using ODE45\node45 is a helper function in utils.py \nThe following code simply runs the integrator and derived analytical equations with the specified initial conditions.\nThen the matplotlib library is used to plot both the integrated and analytical results.",
"X,Yo = ode45(f, y0, (t0, t1,dt))\nplt.plot(X,Yo[:,0], label='ode45')\n\nYa = np.array([f_derived(t) for t in X])\nplt.plot(X,Ya[:,0], 'r--', label='analytical');\n\n\nplt.legend()\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
napsternxg/ipython-notebooks
|
IRCTC Data Hack.ipynb
|
apache-2.0
|
[
"Late night 1 hour hack of the freshly released dataset on train time tables by IRCTC.\nSource: https://data.gov.in/catalog/indian-railways-train-time-table-0#web_catalog_tabs_block_10",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\n# Load the data into a dataframe\ndf = pd.read_csv(\"data/isl_wise_train_detail_03082015_v1.csv\")\n\nsns.set_context(\"poster\")\n# Show some rows\ndf.head()\n\ndf.columns\n\n# Convert time columns to datetime objects\ndf[u'Arrival time'] = pd.to_datetime(df[u'Arrival time'])\ndf[u'Departure time'] = pd.to_datetime(df[u'Departure time'])\n\ndf.head()",
"Distribution of Arrival and Departure Times\nLet's analyze the arrival and departure time distributions. As we can see from the plots below, both times follow a similar distribution. What is interesting is that a majority of the trains arrive during the night (which is good, as Indians love to travel at night).",
"fig, ax = plt.subplots(1,2, sharey=True)\ndf[u'Arrival time'].map(lambda x: x.hour).hist(ax=ax[0], bins=24)\ndf[u'Departure time'].map(lambda x: x.hour).hist(ax=ax[1], bins=24)\nax[0].set_xlabel(\"Arrival Time\")\nax[1].set_xlabel(\"Departure Time\")",
"It would also be interesting to find out the distribution of the stoppage time at a station:\n$\\text{Stoppage time} = \\text{Departure time} - \\text{Arrival time}$",
"df[\"Stoppage\"] = (df[u'Departure time'] - df[u'Arrival time']).astype('timedelta64[m]') # Find stoppage time in minutes\n# Plot distribution of stoppage time\ndf[\"Stoppage\"].hist()\nplt.xlabel(\"Stoppage Time\")",
"This looks weird: stoppage time cannot be negative, and values over 500 minutes (~8 hours) are implausible. Let us remove these outliers and plot the distribution again.",
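One likely source of the negative values is trains that arrive just before midnight and depart just after it: the raw file stores clock times without dates, so a naive subtraction wraps past midnight. A hypothetical illustration (the times here are made up):

```python
import pandas as pd

# Clock times parsed onto the same (default) date: the true halt is 10 minutes,
# but the naive difference comes out as minus one day plus 10 minutes.
arrival = pd.to_datetime("23:55:00")
departure = pd.to_datetime("00:05:00")

stoppage_min = (departure - arrival).total_seconds() / 60
print(stoppage_min)  # -1430.0, not the true 10-minute halt

# Taking the difference modulo one day recovers the intended duration:
print(stoppage_min % (24 * 60))  # 10.0
```

Simply dropping the negative rows, as done below, is the quick fix; the modulo trick would be the way to keep those overnight stops.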
"df[\"Stoppage\"][(df[\"Stoppage\"]> 0) & (df[\"Stoppage\"] < 61)].hist() # Let us take that max stoppage time can be an hour. \nplt.xlabel(\"Stoppage Time\")",
"This is better, but it still appears that most stoppage times are under 30 minutes. So let us plot again in that range.",
"df[\"Stoppage\"][(df[\"Stoppage\"]> 0) & (df[\"Stoppage\"] < 31)].hist(bins=30) # Let us take that max stoppage time can be an hour. \nplt.xlabel(\"Stoppage Time\")",
"This is more informative. We see that most stoppage times are either 1 or 2 minutes or a multiple of 5 minutes, which makes a lot of sense. Now let us filter the data to keep only stoppage times in this range.",
"df_stoppage_30 = df[(df[\"Stoppage\"]> 0) & (df[\"Stoppage\"] < 31)] # Filter data between nice stoppage times\n# Plot data for this stoppage time range.\nfig, ax = plt.subplots(1,2, sharey=True)\ndf_stoppage_30[u'Arrival time'].map(lambda x: x.hour).hist(ax=ax[0], bins=24)\ndf_stoppage_30[u'Departure time'].map(lambda x: x.hour).hist(ax=ax[1], bins=24)\nax[0].set_xlabel(\"Arrival Time\")\nax[1].set_xlabel(\"Departure Time\")",
"Aah, it looks like fewer trains arrive and depart during lunch hours, around 1200-1500 hours. This looks weird, but it can also point to the fact that many trains run at night and travel short distances. This makes me think that we should look closely at the total distance per train. \nDistance analysis\nLet's now analyze the total distance travelled by a train. This can be easily found by using the last value for each train.",
"# Total number of stations of the train, last arrival time, first departure time, last distance, first station and last station.\n\ndf_train_dist = df[[u'Train No.', u'station Code', u'Arrival time', u'Departure time',\n                    u'Distance', u'Source Station Code', u'Destination station Code']]\\\n.groupby(u'Train No.').agg({u'station Code': \"count\", u'Arrival time': \"last\",\n                            u'Departure time': \"first\", u'Distance': \"last\",\n                            u'Source Station Code': \"first\", u'Destination station Code': \"last\"})\n\ndf_train_dist.head()\n\n# Let us plot the distributions of distances and station counts, as well as arrival and departure times\nfig, ax = plt.subplots(2,2)\ndf_train_dist[u'station Code'].hist(ax=ax[0][0], bins=range(df_train_dist[u'station Code'].max() + 1))\ndf_train_dist[u'Distance'].hist(ax=ax[0][1], bins=50)\nax[0][0].set_xlabel(\"Total Stations stopped\")\nax[0][1].set_xlabel(\"Total Distance covered\")\n\ndf_train_dist[u'Arrival time'].map(lambda x: x.hour).hist(ax=ax[1][0], bins=range(24))\ndf_train_dist[u'Departure time'].map(lambda x: x.hour).hist(ax=ax[1][1], bins=range(24))\nax[1][0].set_xlabel(\"Arrival Time\")\nax[1][1].set_xlabel(\"Departure Time\")",
"Train-specific analysis\nOK, this is interesting. \n\nWe observe that a majority of the trains cover 15-25 stations. \nWe also see that many trains are short-distance trains travelling only 500-700 kilometers. \nArrival time for many trains at their last stop is mostly between 0500 and 1300 hours, and also frequently around midnight. \nDeparture time for a majority of the trains is mostly at night. \n\nNow the question is: do trains with more stops, on average, also run longer distances? Let us try to answer this question.",
"sns.lmplot(x=u'station Code', y=u'Distance', data=df_train_dist, x_estimator=np.mean)",
"The regression plot shows that we cannot draw a firm conclusion about the relation between the number of stops and distance. We do see that few stops generally mean short distances, but for longer distances this condition no longer holds. This can be attributed to the availability of both express and passenger trains for longer distances.",
"# Lets us see what are some general statistics of the distances and the number of stops. \ndf_train_dist.describe()",
"We observe that 50% of the trains travel less than 810 Km as well as have less than 20 stops. Maximum distance travelled by a train is 4273 Km and maximum stoppages are 128, both of which are very high numbers. \nAnalysis of Stations\nLet us look at which stations are popular.",
"df[[u'Train No.', u'Station Name']].groupby(u'Station Name').count().sort_values(u'Train No.', ascending=False).head(20)",
"Looks like Vijaywada is the station where the maximum number of trains have a stoppage. I am upset not to see my place Allahabad in the top-20 list. Nevertheless, let us plot the distribution of these stoppages.",
"df[[u'Train No.', u'Station Name']].groupby(u'Station Name').count().hist(bins=range(1,320,2), log=True)\nplt.xlabel(\"Number of trains stopping\")\nplt.ylabel(\"Number of stations\")",
"Looks like very few stations have a high volume of trains stopping. Most stations see close to 5 trains. \nLet us now look at some train statistics like:\n\nTrains with maximum stops, I would personally avoid these trains. \nTrains which travel maximum distance, if they take less stops I would prefer these.",
"df_train_dist.sort_values(u'station Code', ascending=False).head(10)  # Top 10 trains with maximum number of stops\n\ndf_train_dist.sort_values(u'Distance', ascending=False).head(10)  # Top 10 trains with maximum distance\n\nfig, ax = plt.subplots(1,2)\nsns.regplot(x=df_train_dist[u'Arrival time'].map(lambda x: x.hour), y=df_train_dist[u'Distance'], x_estimator=np.mean, ax=ax[0])\nsns.regplot(x=df_train_dist[u'Departure time'].map(lambda x: x.hour), y=df_train_dist[u'Distance'], x_estimator=np.mean, ax=ax[1])",
"We see that the departure and arrival times of a lot of long-distance trains fall at night, around 0000 hours; many long-route trains arrive in the late afternoon around 1500 hours, and many leave in the early morning around 1000 hours as well. Most medium-distance trains arrive during the day."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
zaqwes8811/micro-apps
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/animations/discrete_bayes_animations.ipynb
|
mit
|
[
"Discrete Bayes Animations",
"from __future__ import division, print_function\nimport matplotlib.pyplot as plt\nimport sys\nsys.path.insert(0,'..') # allow us to format the book\nsys.path.insert(0,'../code') \nimport book_format\nbook_format.load_style(directory='..')",
"This notebook creates the animations for the Discrete Bayesian filters chapter. It is not really intended to be a readable part of the book, but of course you are free to look at the source code, and even modify it. However, if you are interested in running your own animations, I'll point you to the examples subdirectory of the book, which contains a number of python scripts that you can run and modify from an IDE or the command line. This module saves the animations to GIF files, which is quite slow and not very interactive.",
"from matplotlib import animation\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom book_plots import bar_plot\n%matplotlib inline\n\n# the predict algorithm of the discrete bayesian filter\ndef predict(pos, move, p_correct, p_under, p_over):\n n = len(pos)\n result = np.array(pos, dtype=float)\n for i in range(n):\n result[i] = \\\n pos[(i-move) % n] * p_correct + \\\n pos[(i-move-1) % n] * p_over + \\\n pos[(i-move+1) % n] * p_under \n return result\n\n\ndef normalize(p):\n s = sum(p)\n for i in range (len(p)):\n p[i] = p[i] / s\n \n# the update algorithm of the discrete bayesian filter\ndef update(pos, measure, p_hit, p_miss):\n q = np.array(pos, dtype=float)\n for i in range(len(hallway)):\n if hallway[i] == measure:\n q[i] = pos[i] * p_hit\n else:\n q[i] = pos[i] * p_miss\n normalize(q)\n return q\n\nimport matplotlib\n# make sure our matplotlibrc has been edited to use imagemagick\nprint(matplotlib.matplotlib_fname())\nmatplotlib.rcParams['animation.writer']\n\nfrom gif_animate import animate\n\npos = [1.0,0,0,0,0,0,0,0,0,0]\ndef bar_animate(nframe):\n global pos\n plt.cla()\n bar_plot(pos)\n plt.title('Step {}'.format(nframe + 1))\n pos = predict(pos, 1, .8, .1, .1)\n\nfor i in range(10):\n bar_animate(i)\n\nfig = plt.figure(figsize=(6.5, 2.5))\nanimate('02_no_info.gif', bar_animate, fig=fig, frames=75, interval=75);",
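The predict step defined above is a circular shift of the belief blurred by the motion kernel. A quick standalone sanity check (a minimal re-implementation for illustration, separate from the notebook's copy):

```python
import numpy as np

def predict(pos, move, p_correct, p_under, p_over):
    # same circular-convolution-style update as in the notebook cell above
    n = len(pos)
    result = np.zeros(n)
    for i in range(n):
        result[i] = (pos[(i - move) % n] * p_correct +
                     pos[(i - move - 1) % n] * p_over +
                     pos[(i - move + 1) % n] * p_under)
    return result

belief = np.zeros(10)
belief[0] = 1.0                      # certain the robot is at cell 0
after = predict(belief, 1, .8, .1, .1)
print(after[:4])   # [0.1 0.8 0.1 0. ] -- mass moved one cell, blurred by the kernel
print(after.sum())  # 1.0 -- probability is conserved
```

This is exactly the behavior the first animation shows: with each predict step the belief shifts right by one cell and spreads out, carrying the kernel's uncertainty with it.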
"<img src=\"02_no_info.gif\">",
"pos = np.array([.1]*10)\nhallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])\n\ndef bar_animate(nframe):\n global pos\n #if nframe == 0:\n # return\n\n bar_plot(pos, ylim=(0,1.0))\n plt.title('Step {}'.format(nframe + 1))\n if nframe % 2 == 0:\n pos = predict(pos, 1, .9, .05, .05)\n else:\n x = int((nframe/2) % len(hallway))\n z = hallway[x]\n pos = update(pos, z, .9, .2)\n \n\nfig = plt.figure(figsize=(6.5, 2.5))\nanimate('02_simulate.gif', bar_animate, fig=fig, frames=40, interval=85);",
"<img src=\"02_simulate.gif\">"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
1x0r/pspis
|
labs/PSPIS_lab_03.ipynb
|
mit
|
[
"Lab 3\nTopic: \"Solving a classification problem\"\nGoals\n\ninvestigate the process of solving a classification problem\nstudy the Python libraries scikit-learn and Pandas\n\nNotes on the work\nProcedure\nIn your working directory, open a command window and start jupyter with\n```bash\n\njupyter notebook\n```\nCreate a new notebook: [New] -> [Python].\n\nIn the new notebook, load a couple of required libraries:",
"import numpy as np\nimport pandas as pd",
"Data preparation\nLet's load the data from a csv file with the read_csv function of the Pandas library. \nThe task is to classify students into those who do and do not consume alcohol. The data is taken (and modified) from the UCI ML repository.\nData description:\nschool - student's school (binary: 1 - Gabriel Pereira or 0 - Mousinho da Silveira)\nsex - student's sex (binary: 0 - female or 1 - male)\nage - student's age (numeric: from 15 to 22)\naddress - student's home address type (binary: 1 - urban or 0 - rural)\nfamsize - family size (binary: 1 - less or equal to 3 or 0 - greater than 3)\nparents_divorced - parents' cohabitation status (binary: 0 - living together or 1 - apart)\nMedu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)\nFedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)\ntraveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. 
to 1 hour, or 4 - >1 hour)\nstudytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)\nfailures - number of past class failures (numeric: n if 1<=n<3, else 4)\nschoolsup - extra educational support (binary: 1 or 0)\nfamsup - family educational support (binary: 1 or 0)\npaid - extra paid classes within the course subject (Math or Portuguese) (binary: 1 or 0)\nactivities - extra-curricular activities (binary: 1 or 0)\nnursery - attended nursery school (binary: 1 or 0)\nhigher - wants to take higher education (binary: 1 or 0)\ninternet - Internet access at home (binary: 1 or 0)\nromantic - with a romantic relationship (binary: 1 or 0)\nfamrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)\nfreetime - free time after school (numeric: from 1 - very low to 5 - very high)\ngoout - going out with friends (numeric: from 1 - very low to 5 - very high)\nhealth - current health status (numeric: from 1 - very bad to 5 - very good)\nabsences - number of school absences (numeric: from 0 to 93)\nG1, G2, G3 - grades for the first three years (numeric: between 0 and 20)\nalc - student's alcohol consumption (binary: 1 - yes, 0 - no)\n\nThe head method of the resulting frame (a DataFrame object) prints the first 5 rows of the table.",
"students = pd.read_csv('datasets/student-mat.csv', sep=';')\nstudents.head()\n\nprint(\"Все колонки: \\n{}\".format(list(students.columns)))",
"Целью задачи классификации в данном случае — предсказать значение колонки alc на основании других колонок. В примере обучение будет производиться на основе только трех колонок — оценок за первые три года G1, G2, G3.\nРазобьем всё множество данных на тестовое и обучающее. Отведём на тестовое множество 25% всех данных.",
"SEED = 42\nfrom sklearn.model_selection import train_test_split\ntrain, test = train_test_split(students, test_size=0.25, random_state=SEED)",
"Выберем колонки, которые будут использованы для решения задачи классификации и создадим массивы, которые будут использованы для обучения.",
"cols = ['G1', 'G2', 'G3']\n\ntrain_data = np.array(train[cols])\ntest_data = np.array(test[cols])\n\ntrain_target = np.array(train['alc'])\ntest_target = np.array(test['alc'])",
"Отразим зависимость целевого класса от переменных.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(16, 5))\nplt.subplot(1, 3, 1), plt.scatter(students[cols[0]], students[cols[1]], c=students['alc'], cmap='autumn')\nplt.xlabel(cols[0]), plt.ylabel(cols[1])\nplt.subplot(1, 3, 2), plt.scatter(students[cols[1]], students[cols[2]], c=students['alc'], cmap='autumn')\nplt.xlabel(cols[1]), plt.ylabel(cols[2])\nplt.subplot(1, 3, 3), plt.scatter(students[cols[0]], students[cols[2]], c=students['alc'], cmap='autumn')\nplt.xlabel(cols[0]), plt.ylabel(cols[2])",
"Можно заметить, что два класса сильно перемешаны, поэтому стоит использовать классификатор, хорошо работающий с нелинейными данными. Выберем дерево решений в качестве такого классификатора.",
"from sklearn.tree import DecisionTreeClassifier\nclf = DecisionTreeClassifier(random_state=SEED)",
"Обучим модель на множестве обучающих данных.",
"clf.fit(train_data, train_target)",
"Проверим качество классификации на тестовом множестве.",
"prediction = clf.predict(test_data)\nfrom sklearn.metrics import accuracy_score\nscore = accuracy_score(test_target, prediction)\nprint(\"Процент верно классифицированных примеров: {}\".format(np.round(score * 100, 2)))",
"Ваша задача — заметно улучшить результат. Баллы за лабораторную работу будут вычисляться как\n$$\nM = \\left\\lceil{\\min\\left(10, \\max\\left(0, \\frac{correct}{all} - 58\\right)\\right)}\\right\\rceil\n$$\nНельзя менять SEED и добавлять колонку alc в список cols. В любой метод необходимо добавлять параметр random_state=SEED.",
"print(\"Количество баллов = {}\".format(np.ceil(np.min([10, np.max([0, score * 100 - 58])]))))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/bigquery/solutions/c_extract_and_benchmark.ipynb
|
apache-2.0
|
[
"Extract Datasets and Establish Benchmark\nLearning Objectives\n- Divide into Train, Evaluation and Test datasets\n- Understand why we need each\n- Pull data out of BigQuery and into CSV\n- Establish Rules Based Benchmark\nIntroduction\nIn the previous notebook we demonstrated how to do ML in BigQuery. However BQML is limited to linear models.\nFor advanced ML we need to pull the data out of BigQuery and load it into a ML Framework, in our case TensorFlow.\nWhile TensorFlow can read from BigQuery directly, the performance is slow. The best practice is to first stage the BigQuery files as .csv files, and then read the .csv files into TensorFlow. \nThe .csv files can reside on local disk if we're training locally, but if we're training in the cloud we'll need to move the .csv files to the cloud, in our case Google Cloud Storage.\nSet up environment variables and load necessary libraries",
"import pandas as pd\nfrom google.cloud import bigquery\n\nPROJECT = !gcloud config get-value project\nPROJECT = PROJECT[0]\n\n%env PROJECT=$PROJECT",
"Review\nIn the a_sample_explore_clean notebook we came up with the following query to extract a repeatable and clean sample: \n<pre>\n#standardSQL\nSELECT\n (tolls_amount + fare_amount) AS fare_amount, -- label\n pickup_datetime,\n pickup_longitude, \n pickup_latitude, \n dropoff_longitude, \n dropoff_latitude\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n -- Clean Data\n trip_distance > 0\n AND passenger_count > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n -- repeatable 1/5000th sample\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1\n </pre>\n\nWe will use the same query with one change. Instead of using pickup_datetime as is, we will extract dayofweek and hourofday from it. This is to give us some categorical features in our dataset so we can illustrate how to deal with them when we get to feature engineering. The new query will be:\n<pre>\nSELECT\n (tolls_amount + fare_amount) AS fare_amount, -- label\n EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,\n EXTRACT(HOUR from pickup_datetime) AS hourofday,\n pickup_longitude, \n pickup_latitude, \n dropoff_longitude, \n dropoff_latitude\n-- rest same as before\n</pre>\n\nSplit into train, evaluation, and test sets\nFor ML modeling we need not just one, but three datasets.\nTrain: This is what our model learns on\nEvaluation (aka Validation): We shouldn't evaluate our model on the same data we trained on because then we couldn't know whether it was memorizing the input data or whether it was generalizing. Therefore we evaluate on the evaluation dataset, aka validation dataset.\nTest: We use our evaluation dataset to tune our hyperparameters (we'll cover hyperparameter tuning in a future lesson). 
We need to know that our chosen set of hyperparameters will work well for data we haven't seen before, because in production that will be the case. For this reason, we create a third dataset that we never use during the model development process. We only evaluate on this once our model development is finished. Data scientists don't always create a test dataset (aka holdout dataset), but to be thorough you should.\nWe can divide our existing 1/5000th sample three ways, 70%/15%/15% (or whatever split we like), with some modulo math demonstrated below.\nBecause we are using a hash function, these results are deterministic: we'll get exactly the same split every time the query is run (assuming the underlying data hasn't changed).",
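The bucketing logic above can be sketched locally in plain Python. FARM_FINGERPRINT only exists in BigQuery, so this sketch substitutes MD5 (an assumption — the actual bucket assignments will differ from BigQuery's, but the deterministic-split mechanics are identical): hash the pickup_datetime string, take it modulo 100, and send buckets 0–69 to train, 70–84 to validation, and 85–99 to test.

```python
import hashlib
from collections import Counter

def assign_phase(key: str) -> str:
    """Deterministic split: hash the key, bucket 0-69 train, 70-84 valid, 85-99 test."""
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 100
    if bucket < 70:
        return "TRAIN"
    if bucket < 85:
        return "VALID"
    return "TEST"

# Simulate many timestamp-like keys and check the split fractions.
keys = [f"2015-01-{d:02d} {h:02d}:{m:02d}:00"
        for d in range(1, 29) for h in range(24) for m in range(0, 60, 5)]
counts = Counter(assign_phase(k) for k in keys)
```

Because the assignment depends only on the key, re-running the split never shuffles a record between train and test.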
"def create_query(phase, sample_size):\n basequery = \"\"\"\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,\n EXTRACT(HOUR from pickup_datetime) AS hourofday,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1\n \"\"\"\n\n if phase == \"TRAIN\":\n subsample = \"\"\"\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)\n \"\"\"\n elif phase == \"VALID\":\n subsample = \"\"\"\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 85)\n \"\"\"\n elif phase == \"TEST\":\n subsample = \"\"\"\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 100)\n \"\"\"\n\n query = basequery + subsample\n return query.replace(\"EVERY_N\", sample_size)",
"Write to CSV\nNow let's execute a query for train/valid/test and write the results to disk in csv format. We use Pandas's .to_csv() method to do so.",
"bq = bigquery.Client(project=PROJECT)\n\nfor phase in [\"TRAIN\", \"VALID\", \"TEST\"]:\n # 1. Create query string\n query_string = create_query(phase, \"5000\")\n # 2. Load results into DataFrame\n df = bq.query(query_string).to_dataframe()\n\n # 3. Write DataFrame to CSV\n df.to_csv(f\"taxi-{phase.lower()}.csv\", index_label=False, index=False)\n print(\"Wrote {} lines to {}\".format(len(df), f\"taxi-{phase.lower()}.csv\"))",
"Note that even with a 1/5000th sample we have a good amount of data for ML. 150K training examples and 30K validation.\nVerify that datasets exist",
"!ls -l *.csv",
"Preview one of the files",
"!head taxi-train.csv",
"Looks good! We now have our ML datasets and are ready to train ML models, validate them and test them.\nEstablish rules-based benchmark\nBefore we start building complex ML models, it is a good idea to come up with a simple rules based model and use that as a benchmark. After all, there's no point using ML if it can't beat the traditional rules based approach!\nOur rule is going to be to divide the mean fare_amount by the mean estimated distance to come up with a rate and use that to predict. \nRecall we can't use the actual trip_distance because we won't have that available at prediction time (depends on the route taken), however we do know the users pick up and drop off location so we can use euclidean distance between those coordinates.",
"def euclidean_distance(df):\n return (\n (df[\"pickuplat\"] - df[\"dropofflat\"]) ** 2\n + (df[\"pickuplon\"] - df[\"dropofflon\"]) ** 2\n ) ** 0.5\n\n\ndef compute_rmse(actual, predicted):\n return (((actual - predicted) ** 2).mean()) ** 0.5\n\n\ndef print_rmse(df, rate, name):\n print(\n \"{} RMSE = {}\".format(\n compute_rmse(df[\"fare_amount\"], rate * euclidean_distance(df)),\n name,\n )\n )\n\n\ndf_train = pd.read_csv(\"taxi-train.csv\")\ndf_valid = pd.read_csv(\"taxi-valid.csv\")\n\nrate = df_train[\"fare_amount\"].mean() / euclidean_distance(df_train).mean()\n\nprint_rmse(df_train, rate, \"Train\")\nprint_rmse(df_valid, rate, \"Valid\")",
"The simple distance-based rule gives us an RMSE of <b>$7.70</b> on the validation dataset. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat. \nYou don't want to set a goal on the test dataset because you'll want to tweak your hyperparameters and model architecture to get the best validation error. Then, you can evaluate ONCE on the test data.\nChallenge exercise\nLet's say that you want to predict whether a Stackoverflow question will be acceptably answered. Using this public dataset of questions, create a machine learning dataset that you can use for classification.\n<p>\nWhat is a reasonable benchmark for this problem?\nWhat features might be useful?\n<p>\nIf you got the above easily, try this harder problem: you want to predict whether a question will be acceptably answered within 2 days. How would you create the dataset?\n<p>\nHint (highlight to see):\n<p style='color:white' linkstyle='color:white'> \nYou will need to do a SQL join with the table of [answers]( https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_answers) to determine whether the answer was within 2 days.\n</p>\n\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
alephcero/adsProject
|
3. Model Evaluation and Selection.ipynb
|
gpl-3.0
|
[
"New York University\nApplied Data Science 2016 Final Project\n\nMeasuring household income under Redatam in CensusData\n\n3. Model Evaluation and Selection\n\nProject Description: Lorem ipsum\nMembers:\n- Felipe Gonzales\n- Ilan Reinstein\n- Fernando Melchor\n- Nicolas Metallo\nLIBRARIES",
"import pandas as pd\nimport numpy as np\nimport os\nimport sys\nimport simpledbf\n%pylab inline\nimport matplotlib.pyplot as plt\nimport statsmodels.api as sm\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import linear_model",
"HELPER FUNCTIONS",
"def runModel(dataset, income, varForModel):\n \n '''\n This function takes a data set, runs a model according to specifications,\n and returns the model, printing the summary\n '''\n y = dataset[income].values\n X = dataset.loc[:,varForModel].values\n X = sm.add_constant(X)\n\n w = dataset.PONDERA\n \n lm = sm.WLS(y, X, weights=1. / w, missing = 'drop', hasconst=True).fit()\n print lm.summary()\n for i in range(1,len(varForModel)+1):\n print 'x%d: %s' % (i,varForModel[i-1])\n #testing within sample\n R_IS=[]\n R_OS=[]\n n=500\n \n for i in range(n): \n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 200)\n X_train = sm.add_constant(X_train)\n X_test = sm.add_constant(X_test)\n \n lm = linear_model.LinearRegression(fit_intercept=True)\n lm.fit(X_train, y_train, sample_weight = 1. / w[:len(X_train)])\n y_hat_IS = lm.predict(X_train)\n err_IS = y_hat_IS - y_train\n R2_IS = 1 - (np.var(err_IS) / np.var(y_train))\n \n y_hat_OS = lm.predict(X_test)\n err_OS = y_hat_OS - y_test\n R2_OS = 1 - (np.var(err_OS) / np.var(y_test))\n \n R_IS.append(R2_IS)\n R_OS.append(R2_OS)\n \n print(\"IS R-squared for {} times is {}\".format(n,np.mean(R_IS)))\n print(\"OS R-squared for {} times is {}\".format(n,np.mean(R_OS)))",
"GET DATA",
"data = pd.read_csv('data/dataFinalParaModelo.csv')\ndata = data[data.AGLO1 == 32.0]\ndata.head()",
"DATA EXPLORATION\nBackground:\nWe have found that 'y ~ Total Household Income' works better than other models with a different 'y' (ln of Total Individual Income, Income by Activity, income deciles, etc)\nCorrelation Matrix",
"varForModel = [\n 'HomeType',\n 'RoomsNumber',\n 'FloorMaterial',\n 'RoofMaterial',\n 'RoofCoat',\n 'Water',\n 'Toilet',\n 'ToiletLocation',\n 'ToiletType',\n 'Sewer',\n 'EmergencyLoc',\n 'UsableTotalRooms',\n 'SleepingRooms',\n 'Kitchen',\n 'Sink',\n 'Ownership',\n 'CookingCombustible',\n 'BathroomUse',\n 'Working',\n 'HouseMembers',\n 'Memberless10',\n 'Membermore10',\n 'TotalFamilyIncome',\n 'CookingRec',\n 'WaterRec',\n 'OwnershipRec',\n 'Hacinamiento',\n 'schoolAndJob',\n 'noJob',\n 'job',\n 'headAge',\n 'spouseAge',\n 'headFemale',\n 'spouseFemale',\n 'headEduc',\n 'spouseEduc',\n 'headPrimary',\n 'spousePrimary',\n 'headSecondary',\n 'spouseSecondary',\n 'headUniversity',\n 'spouseUniversity',\n 'headJob',\n 'spouseJob',\n 'headMaritalStatus',\n 'spouseMaritalStatus',\n 'sumPredicted']\n\ndata['hasSpouse'] = np.where(np.isnan(data.spouseJob.values),0,1)\ndata['spouseJob'] = np.where(np.isnan(data.spouseJob.values),0,data.spouseJob.values)\ndata['TotalFamilyIncome'].replace(to_replace=[0], value=[1] , inplace=True, axis=None)\ndata = data[data.TotalFamilyIncomeDecReg != 0]\ndata['income_log'] = np.log(data.TotalFamilyIncome)\ndata['FloorMaterial'] = np.where(np.isnan(data.FloorMaterial.values),5,data.FloorMaterial.values)\ndata['sumPredicted'] = np.where(np.isnan(data.sumPredicted.values),0,data.sumPredicted.values)\ndata['Sewer'] = np.where(np.isnan(data.Sewer.values),5,data.Sewer.values)\ndata['ToiletType'] = np.where(np.isnan(data.ToiletType.values),4,data.ToiletType.values)\ndata['Water'] = np.where(np.isnan(data.Water.values),4,data.Water.values)\ndata['RoofCoat'] = np.where(np.isnan(data.RoofCoat.values),2,data.RoofCoat.values)\ndata['income_logPer'] = np.log(data.PerCapInc)\ndata['haciBool'] = (data.Hacinamiento > 3).astype(int)\ndata['RoofMaterial'] = np.where(np.isnan(data.RoofMaterial.values),0,data.RoofMaterial.values)\ndata['ToiletLocation'] = np.where(np.isnan(data.ToiletLocation.values),2,data.ToiletLocation.values)\n\nimport seaborn as 
sns\nsns.set(context=\"paper\", font=\"monospace\", font_scale=1.25)\n\ncorrmat = data.loc[:,list(data.loc[:,varForModel].corr()['TotalFamilyIncome'].sort_values(ascending=False).index)].corr()\nf, ax = plt.subplots(figsize=(12, 10))\nsns.heatmap(corrmat, vmax=.8, square=True)\nf.tight_layout()\n\nvarHomogamy = [\n 'headAge',\n 'spouseAge',\n 'headFemale',\n 'spouseFemale',\n 'headEduc',\n 'spouseEduc',\n 'headPrimary',\n 'spousePrimary',\n 'headSecondary',\n 'spouseSecondary',\n 'headUniversity',\n 'spouseUniversity',\n 'headJob',\n 'spouseJob',\n 'headMaritalStatus',\n 'spouseMaritalStatus']\n\nsns.set(context=\"paper\", font=\"monospace\", font_scale=2)\n\ncorrmat = data.loc[:,varHomogamy].corr()\nf, ax = plt.subplots(figsize=(10, 8))\nsns.heatmap(corrmat, vmax=.8, square=True)\nf.tight_layout()",
"Notes:\nWe found multi-collinearity between variables referencing the spouse and the head of the household. This is we believe a case of homogamy (marriage between individuals who are similar to each other). This is why we chose to ignore them.",
"varForFeatureSelection = [\n 'HomeType',\n 'RoomsNumber',\n 'FloorMaterial',\n 'RoofMaterial',\n 'RoofCoat',\n 'Water',\n 'Toilet',\n 'ToiletLocation',\n 'ToiletType',\n 'Sewer',\n 'UsableTotalRooms',\n 'SleepingRooms',\n 'Kitchen',\n 'Sink',\n 'Ownership',\n 'CookingCombustible',\n 'BathroomUse',\n 'Working',\n 'HouseMembers',\n 'Memberless10',\n 'Membermore10',\n 'CookingRec',\n 'WaterRec',\n 'OwnershipRec',\n 'Hacinamiento',\n 'schoolAndJob',\n 'noJob',\n 'job',\n 'headAge',\n 'headFemale',\n 'headEduc',\n 'headPrimary',\n 'headSecondary',\n 'headUniversity',\n 'headJob',\n 'sumPredicted']\n\n# !pip install minepy\n\nfrom sklearn.linear_model import (LinearRegression, Ridge, \n Lasso, RandomizedLasso)\nfrom sklearn.feature_selection import RFE, f_regression\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.ensemble import RandomForestRegressor\nimport numpy as np\nfrom minepy import MINE\n \nY = data.TotalFamilyIncome\nX = np.asarray(data.loc[:,varForFeatureSelection])\n \nnames = data.loc[:,varForFeatureSelection].columns\n \nranks = {}\n \ndef rank_to_dict(ranks, names, order=1):\n minmax = MinMaxScaler()\n ranks = minmax.fit_transform(order*np.array([ranks]).T).T[0]\n ranks = map(lambda x: round(x, 2), ranks)\n return dict(zip(names, ranks ))\n \nlr = LinearRegression(normalize=True)\nlr.fit(X, Y)\nranks[\"Linear reg\"] = rank_to_dict(np.abs(lr.coef_), names)\n \nridge = Ridge(alpha=7)\nridge.fit(X, Y)\nranks[\"Ridge\"] = rank_to_dict(np.abs(ridge.coef_), names)\n \n \nlasso = Lasso(alpha=.05)\nlasso.fit(X, Y)\nranks[\"Lasso\"] = rank_to_dict(np.abs(lasso.coef_), names)\n \n \nrlasso = RandomizedLasso(alpha=0.04)\nrlasso.fit(X, Y)\nranks[\"Stability\"] = rank_to_dict(np.abs(rlasso.scores_), names)\n \n#stop the search when 5 features are left (they will get equal scores)\nrfe = RFE(lr, n_features_to_select=5)\nrfe.fit(X,Y)\nranks[\"RFE\"] = rank_to_dict(map(float, rfe.ranking_), names, order=-1)\n \nrf = 
RandomForestRegressor()\nrf.fit(X, Y)\nranks[\"RF\"] = rank_to_dict(rf.feature_importances_, names)\n\nf, pval = f_regression(X, Y, center=True)\nranks[\"Corr.\"] = rank_to_dict(f, names)\n\nmine = MINE()\nmic_scores = []\nfor i in range(X.shape[1]):\n    mine.compute_score(X[:, i], Y)\n    m = mine.mic()\n    mic_scores.append(m)\n\nranks[\"MIC\"] = rank_to_dict(mic_scores, names)\n\nr = {}\nfor name in names:\n    r[name] = round(np.mean([ranks[method][name]\n                             for method in ranks.keys()]), 2)\n\nmethods = sorted(ranks.keys())\nranks[\"Mean\"] = r\nmethods.append(\"Mean\")\n\nfeat_ranking = pd.DataFrame(ranks)\ncols = feat_ranking.columns.tolist()\nfeat_ranking = feat_ranking.loc[:, cols]\nfeat_ranking.sort_values(['Corr.'], ascending=False).head(15)\n\nvarComparison = list(feat_ranking.sort_values(['Corr.'], ascending=False).head(10).index)\nprint('Our first iteration gave us the following table of the top 10 most relevant features: \\n')\nprint(varComparison)\nprint('\\n')\nprint('And this is the correlation matrix for those features:')\nsns.set(context=\"paper\", font=\"monospace\", font_scale=2)\n\ncorrmat = data.loc[:,varComparison].corr()\nf, ax = plt.subplots(figsize=(10, 8))\nsns.heatmap(corrmat, vmax=.8, square=True)\nf.tight_layout()",
"Conclusion from Feature Selection:\nWe found that the most relevant features for predicting income are:\n- Eucation\n- Job \n- Number of people living in the household.\nAccordingly, we chose only the variables that best represented this idea based on their predictive power, model interpretability and possibility of query under REDATAM. We also removed features highty correlated between each other to avoid multi-collinearity.\nREGRESSION MODELS\nMODEL TESTING\nOur model will be based on education, work and number of people living in the same household. For this, we will test two alternative models each considering separate variables that account for those features.\n\n\nModel 1a",
"varForModel = [\n 'headEduc',\n]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"Model 1b",
"varForModel = [\n 'headEduc',\n 'job', \n]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"Model 1c",
"varForModel = [\n 'headEduc',\n 'job', \n 'SleepingRooms',]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"Model 1d",
"varForModel = [\n 'headEduc',\n 'job', \n 'haciBool',\n #'Hacinamiento'\n]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"Model 1E (CHOSEN)",
"varForModel = [\n 'headEduc',\n 'job', \n 'Hacinamiento'\n]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"Model 2a",
"varForModel = [\n 'schoolAndJob',\n]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"Model 2b",
"varForModel = [\n 'SleepingRooms',\n 'schoolAndJob',\n]\n\nrunModel(data, 'TotalFamilyIncome', varForModel)",
"MODEL VALIDATION\nWe are going to test our models against survey data from the Buenos Aires City Government where income is measured by comune (not census block or department) and that it's independent from the survey we used to train our model. \n GET SURVEY DATA",
"dbf = simpledbf.Dbf5('data/BaseEAH2010/EAH10_BU_IND_VERSION2.dbf') # PDF press release is available for download\ndata10 = dbf.to_dataframe()\n\ndata10 = data10.loc[data10.ITFB != 9999999,['ID','COMUNA','FEXP','ITFB']]\ndata10.head()",
"DATA CLEANING",
"data10.drop_duplicates(inplace = True)\ndata10.ITFB.replace(to_replace=[0], value=[1] , inplace=True, axis=None)\ndata10.FEXP = data10.FEXP.astype(int)\ndata10exp = data10.loc[np.repeat(data10.index.values,data10.FEXP)]\ndata10exp.ITFB.groupby(by=data10exp.COMUNA).mean()",
"GET PREDICTED DATA FROM REDATAM \nThe following script transforms the ascii output from REDATAM into a csv file that we can work on later. Our objective is to compare the predicted income from our models related to each comune with the real income for each comune.",
"def readRedatamCSV(asciiFile):\n f = open(asciiFile, 'r')\n areas = []\n measures = []\n for line in f:\n columns = line.strip().split()\n # print columns\n if len(columns) > 0:\n if 'RESUMEN' in columns[0] :\n break\n elif columns[0] == 'AREA':\n area = str.split(columns[2],',')[0]\n areas.append(area)\n elif columns[0] == 'Total':\n measure = str.split(columns[2],',')[2]\n measures.append(measure)\n try: \n data = pd.DataFrame({'area':areas,'measure':measures})\n return data\n except:\n print asciiFile\n\ndef R2(dataset,real,predicted):\n fig = plt.figure(figsize=(24,6))\n ax1 = fig.add_subplot(1,3,1)\n ax2 = fig.add_subplot(1,3,2)\n ax3 = fig.add_subplot(1,3,3)\n \n error = dataset[predicted]-dataset[real]\n \n ax1.scatter(dataset[predicted],dataset[real])\n ax1.plot(dataset[real], dataset[real], color = 'red')\n ax1.set_title('Predicted vs Real')\n \n ax2.scatter((dataset[predicted] - dataset[predicted].mean())/dataset[predicted].std(),\n (dataset[real] - dataset[real].mean())/dataset[real].std())\n ax2.plot((dataset[real] - dataset[real].mean())/dataset[real].std(),\n (dataset[real] - dataset[real].mean())/dataset[real].std(), color = 'red')\n ax2.set_title('Standarized Predicted vs Real')\n\n ax3.scatter(dataset[predicted],(error - error.mean()) / error.std())\n ax3.set_title('Standard Error')\n \n print \"R^2 is: \",((dataset[real] - dataset[predicted])**2).sum() / ((dataset[real] - dataset[real].mean())**2).sum()\n print 'Mean Error', error.mean()",
"Model 1A",
"archivo = 'data/indecOnline/headEduc/comunas.csv' # Model 1A\ningresoXComuna = readRedatamCSV(archivo)\ningresoXComuna.columns = ['area','Predicted_1A']\ningresoXComuna['Real_Income'] = list(data10exp.ITFB.groupby(by=data10exp.COMUNA).mean())\ningresoXComuna = ingresoXComuna.loc[:,['area', 'Real_Income', 'Predicted_1A']]",
"Model 1B",
"archivo = 'data/indecOnline/headEducYjobs/comuna.csv' # Model 1B\ningresoModelo2 = readRedatamCSV(archivo)\ningresoXComuna = ingresoXComuna.merge(right=ingresoModelo2,on='area')",
"Model 1C",
"archivo = 'data/indecOnline/headEducuJobsYrooms/comunas.csv' # Model 1A\ningresoModelo3 = readRedatamCSV(archivo)\ningresoXComuna = ingresoXComuna.merge(right=ingresoModelo3,on='area')",
"Model 2A",
"archivo = 'data/indecOnline/jobSchool/comunas.csv' # Model 2A\ningresoModelo4 = readRedatamCSV(archivo)\ningresoXComuna = ingresoXComuna.merge(right=ingresoModelo4,on='area')",
"Model 2B",
"archivo = 'data/indecOnline/jobSchoolYrooms/comunas.csv' # Model 2B\ningresoModelo5 = readRedatamCSV(archivo)\ningresoXComuna = ingresoXComuna.merge(right=ingresoModelo5,on='area')",
"Model 1D",
"archivo = 'data/indecOnline/MODELO1D/comunas.csv' # Model 2B\ningresoModelo6 = readRedatamCSV(archivo)\ningresoXComuna = ingresoXComuna.merge(right=ingresoModelo6,on='area')",
"Model 1E",
"archivo = 'data/indecOnline/MODELO1E/comunas.csv' # Model 2B\ningresoModelo7 = readRedatamCSV(archivo)\ningresoXComuna = ingresoXComuna.merge(right=ingresoModelo7,on='area')",
"DATA CLEANING",
"ingresoXComuna.columns = ['Comune','Real_Income','Predicted_1A','Predicted_1B','Predicted_1C',\n 'Predicted_2A','Predicted_2B','Predicted_1D','Predicted_1E']\n\nfor i in range(1,9):\n ingresoXComuna.iloc[:,[i]] = ingresoXComuna.iloc[:,[i]].astype(float)",
"MODEL EVALUATION RESULTS",
"ingresoXComuna",
"Best Performing Model",
"R2 (ingresoXComuna,'Real_Income','Predicted_1E')",
"Appendix:",
"R2 (ingresoXComuna,'Real_Income','Predicted_1A')\n\nR2 (ingresoXComuna,'Real_Income','Predicted_1B')\n\nR2 (ingresoXComuna,'Real_Income','Predicted_1C')\n\nR2 (ingresoXComuna,'Real_Income','Predicted_1D')\n\nR2 (ingresoXComuna,'Real_Income','Predicted_1E')\n\nR2 (ingresoXComuna,'Real_Income','Predicted_2A')\n\nR2 (ingresoXComuna,'Real_Income','Predicted_2B')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
frreiss/tensorflow-fred
|
tensorflow/lite/g3doc/guide/signatures.ipynb
|
apache-2.0
|
[
"Copyright 2021 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Signatures in TensorFlow Lite\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/guide/signatures\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/signatures.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/signatures.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/guide/signatures.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNote: This API is new and only available via pip install tf-nightly. It will be available in TensorFlow version 2.7.\nTensorFlow Lite supports converting TensorFlow model's input/output\nspecifications to TensorFlow Lite models. The input/output specifications are\ncalled \"signatures\". Signatures can be specified when building a SavedModel or\ncreating concrete functions.\nSignatures in TensorFlow Lite provide the following features:\n\nThey specify inputs and outputs of the converted TensorFlow Lite model by\n respecting the TensorFlow model's signatures.\nAllow a single TensorFlow Lite model to support multiple entry points.\n\nThe signature is composed of three pieces:\n\nInputs: Map for inputs from input name in the signature to an input tensor.\nOutputs: Map for output mapping from output name in signature to an output\n tensor.\nSignature Key: Name that identifies an entry point of the graph.\n\nSetup",
"!pip uninstall -y tensorflow keras\n!pip install tf-nightly\n\nimport tensorflow as tf",
"Example model\nLet's say we have two tasks, e.g., encoding and decoding, as a TensorFlow model:",
"class Model(tf.Module):\n\n @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])\n def encode(self, x):\n result = tf.strings.as_string(x)\n return {\n \"encoded_result\": result\n }\n\n @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])\n def decode(self, x):\n result = tf.strings.to_number(x)\n return {\n \"decoded_result\": result\n }",
"In the signature wise, the above TensorFlow model can be summarized as follows:\n\n\nSignature\n\nKey: encode\nInputs: {\"x\"}\nOutput: {\"encoded_result\"}\n\n\n\nSignature\n\nKey: decode\nInputs: {\"x\"}\nOutput: {\"decoded_result\"}\n\n\n\nConvert a model with Signatures\nTensorFlow Lite converter APIs will bring the above signature information into\nthe converted TensorFlow Lite model.\nThis conversion functionality is available on all the converter APIs starting\nfrom TensorFlow version 2.7.0. See example usages.\nFrom Saved Model",
"model = Model()\n\n# Save the model\nSAVED_MODEL_PATH = 'content/saved_models/coding'\n\ntf.saved_model.save(\n model, SAVED_MODEL_PATH,\n signatures={\n 'encode': model.encode.get_concrete_function(),\n 'decode': model.decode.get_concrete_function()\n })\n\n# Convert the saved model using TFLiteConverter\nconverter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_PATH)\nconverter.target_spec.supported_ops = [\n tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.\n tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.\n]\ntflite_model = converter.convert()\n\n# Print the signatures from the converted model\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\nsignatures = interpreter.get_signature_list()\nprint(signatures)",
"From Keras Model",
"# Generate a Keras model.\nkeras_model = tf.keras.Sequential(\n [\n tf.keras.layers.Dense(2, input_dim=4, activation='relu', name='x'),\n tf.keras.layers.Dense(1, activation='relu', name='output'),\n ]\n)\n\n# Convert the keras model using TFLiteConverter.\n# Keras model converter API uses the default signature automatically.\nconverter = tf.lite.TFLiteConverter.from_keras_model(keras_model)\ntflite_model = converter.convert()\n\n# Print the signatures from the converted model\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\n\nsignatures = interpreter.get_signature_list()\nprint(signatures)",
"From Concrete Functions",
"model = Model()\n\n# Convert the concrete functions using TFLiteConverter\nconverter = tf.lite.TFLiteConverter.from_concrete_functions(\n [model.encode.get_concrete_function(),\n model.decode.get_concrete_function()], model)\nconverter.target_spec.supported_ops = [\n tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.\n tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.\n]\ntflite_model = converter.convert()\n\n# Print the signatures from the converted model\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\nsignatures = interpreter.get_signature_list()\nprint(signatures)",
"Run Signatures\nTensorFlow inference APIs support signature-based execution:\n\nAccessing the input/output tensors through the names of the inputs and\n outputs, specified by the signature.\nRunning each entry point of the graph separately, identified by the\n signature key.\nSupport for the SavedModel's initialization procedure.\n\nJava, C++ and Python language bindings are currently available. See the\nsections below for examples.\nJava\n```\ntry (Interpreter interpreter = new Interpreter(file_of_tensorflowlite_model)) {\n // Run encoding signature.\n Map<String, Object> inputs = new HashMap<>();\n inputs.put(\"x\", input);\n Map<String, Object> outputs = new HashMap<>();\n outputs.put(\"encoded_result\", encoded_result);\n interpreter.runSignature(inputs, outputs, \"encode\");\n// Run decoding signature.\n Map<String, Object> inputs = new HashMap<>();\n inputs.put(\"x\", encoded_result);\n Map<String, Object> outputs = new HashMap<>();\n outputs.put(\"decoded_result\", decoded_result);\n interpreter.runSignature(inputs, outputs, \"decode\");\n}\n```\nC++\n```\nSignatureRunner* encode_runner =\n interpreter->GetSignatureRunner(\"encode\");\nencode_runner->ResizeInputTensor(\"x\", {100});\nencode_runner->AllocateTensors();\nTfLiteTensor* input_tensor = encode_runner->input_tensor(\"x\");\nfloat* input = input_tensor->data.f;\n// Fill input.\nencode_runner->Invoke();\nconst TfLiteTensor* output_tensor = encode_runner->output_tensor(\n \"encoded_result\");\nfloat* output = output_tensor->data.f;\n// Access output.\n```\nPython",
"# Load the TFLite model in TFLite Interpreter\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\n\n# Print the signatures from the converted model\nsignatures = interpreter.get_signature_list()\nprint('Signature:', signatures)\n\n# encode and decode are callable with input as arguments.\nencode = interpreter.get_signature_runner('encode')\ndecode = interpreter.get_signature_runner('decode')\n\n# 'encoded' and 'decoded' are dictionaries with all outputs from the inference.\ninput = tf.constant([1, 2, 3], dtype=tf.float32)\nprint('Input:', input)\nencoded = encode(x=input)\nprint('Encoded result:', encoded)\ndecoded = decode(x=encoded['encoded_result'])\nprint('Decoded result:', decoded)",
"Known limitations\n\nAs the TFLite interpreter does not guarantee thread safety, signature runners\n from the same interpreter must not be executed concurrently.\nSupport for C/iOS/Swift is not available yet.\n\nUpdates\n\nVersion 2.7\nThe multiple signature feature is implemented.\nAll the converter APIs from version 2 generate signature-enabled\n TensorFlow Lite models.\n\n\nVersion 2.5\nThe signature feature is available through the from_saved_model converter\n API."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathandy/svgpathtools
|
README.ipynb
|
mit
|
[
"svgpathtools\nsvgpathtools is a collection of tools for manipulating and analyzing SVG Path objects and Bézier curves.\nFeatures\nsvgpathtools contains functions designed to easily read, write and display SVG files as well as a large selection of geometrically-oriented tools to transform and analyze path elements.\nAdditionally, the submodule bezier.py contains tools for working with general nth order Bezier curves stored as n-tuples.\nSome included tools:\n\nread, write, and display SVG files containing Path (and other) SVG elements\nconvert Bézier path segments to numpy.poly1d (polynomial) objects\nconvert polynomials (in standard form) to their Bézier form\ncompute tangent vectors and (right-hand rule) normal vectors\ncompute curvature\nbreak discontinuous paths into their continuous subpaths\nefficiently compute intersections between paths and/or segments\nfind a bounding box for a path or segment\nreverse segment/path orientation\ncrop and split paths and segments\nsmooth paths (i.e. smooth away kinks to make paths differentiable)\ntransition maps from path domain to segment domain and back (T2t and t2T)\ncompute area enclosed by a closed path\ncompute arc length\ncompute inverse arc length\nconvert RGB color tuples to hexadecimal color strings and back\n\nPrerequisites\n\nnumpy\nsvgwrite\nscipy (optional but recommended for performance)\n\nSetup\nbash\n$ pip install svgpathtools \nAlternative Setup\nYou can download the source from GitHub and install by using the command (from inside the folder containing setup.py):\nbash\n$ python setup.py install\nCredit where credit's due\nMuch of the core of this module was taken from the svg.path (v2.0) module. Interested svg.path users should see the compatibility notes at the bottom of this readme.\nBasic Usage\nClasses\nThe svgpathtools module is primarily structured around four path segment classes: Line, QuadraticBezier, CubicBezier, and Arc. 
There is also a fifth class, Path, whose objects are sequences of (connected or disconnected<sup id=\"a1\">1</sup>) path segment objects.\n\n\nLine(start, end)\n\n\nArc(start, radius, rotation, large_arc, sweep, end) Note: See docstring for a detailed explanation of these parameters\n\n\nQuadraticBezier(start, control, end)\n\n\nCubicBezier(start, control1, control2, end)\n\n\nPath(*segments)\n\n\nSee the relevant docstrings in path.py or the official SVG specifications for more information on what each parameter means.\n<u id=\"f1\">1</u> Warning: Some of the functionality in this library has not been tested on discontinuous Path objects. A simple workaround is provided, however, by the Path.continuous_subpaths() method. ↩",
"from __future__ import division, print_function\n\n# Coordinates are given as points in the complex plane\nfrom svgpathtools import Path, Line, QuadraticBezier, CubicBezier, Arc\nseg1 = CubicBezier(300+100j, 100+100j, 200+200j, 200+300j) # A cubic beginning at (300, 100) and ending at (200, 300)\nseg2 = Line(200+300j, 250+350j) # A line beginning at (200, 300) and ending at (250, 350)\npath = Path(seg1, seg2) # A path traversing the cubic and then the line\n\n# We could alternatively have created this Path object using a d-string\nfrom svgpathtools import parse_path\npath_alt = parse_path('M 300 100 C 100 100 200 200 200 300 L 250 350')\n\n# Let's check that these two methods are equivalent\nprint(path)\nprint(path_alt)\nprint(path == path_alt)\n\n# On a related note, the Path.d() method returns a Path object's d-string\nprint(path.d())\nprint(parse_path(path.d()) == path)",
"The Path class is a mutable sequence, so it behaves much like a list.\nSegments can be appended, inserted, set by index, deleted, enumerated, sliced out, etc.",
"# Let's append another segment to the end of it\npath.append(CubicBezier(250+350j, 275+350j, 250+225j, 200+100j))\nprint(path)\n\n# Let's replace the first segment with a Line object\npath[0] = Line(200+100j, 200+300j)\nprint(path)\n\n# You may have noticed that this path is connected and now is also closed (i.e. path.start == path.end)\nprint(\"path is continuous? \", path.iscontinuous())\nprint(\"path is closed? \", path.isclosed())\n\n# The curve the path follows is not, however, smooth (differentiable)\nfrom svgpathtools import kinks, smoothed_path\nprint(\"path contains non-differentiable points? \", len(kinks(path)) > 0)\n\n# If we want, we can smooth these out (Experimental and only for line/cubic paths)\n# Note: smoothing will always work (except on 180 degree turns), but you may want\n# to play with the maxjointsize and tightness parameters to get pleasing results\n# Note also: smoothing will increase the number of segments in a path\nspath = smoothed_path(path)\nprint(\"spath contains non-differentiable points? \", len(kinks(spath)) > 0)\nprint(spath)\n\n# Let's take a quick look at the path and its smoothed relative\n# The following commands will open two browser windows to display path and spath\nfrom svgpathtools import disvg\nfrom time import sleep\ndisvg(path)\nsleep(1) # needed when not giving the SVGs unique names (or not using timestamp)\ndisvg(spath)\nprint(\"Notice that path contains {} segments and spath contains {} segments.\"\n      \"\".format(len(path), len(spath)))",
"Reading SVGs\nThe svg2paths() function converts an SVG file to a list of Path objects and a separate list of dictionaries containing the attributes of each path.\nNote: Line, Polyline, Polygon, and Path SVG elements can all be converted to Path objects using this function.",
"# Read SVG into a list of path objects and list of dictionaries of attributes \nfrom svgpathtools import svg2paths, wsvg\npaths, attributes = svg2paths('test.svg')\n\n# Update: You can now also extract the svg-attributes by setting\n# return_svg_attributes=True, or with the convenience function svg2paths2\nfrom svgpathtools import svg2paths2\npaths, attributes, svg_attributes = svg2paths2('test.svg')\n\n# Let's print out the first path object and the color it was in the SVG\n# We'll see it is composed of two CubicBezier objects and, in the SVG file it \n# came from, it was red\nredpath = paths[0]\nredpath_attribs = attributes[0]\nprint(redpath)\nprint(redpath_attribs['stroke'])",
"Writing SVGs (and some geometric functions and methods)\nThe wsvg() function creates an SVG file from a list of paths. This function can do many things (see docstring in paths2svg.py for more information) and is meant to be quick and easy to use.\nNote: Use the convenience function disvg() (or set 'openinbrowser=True') to automatically attempt to open the created svg file in your default SVG viewer.",
"# Let's make a new SVG that's identical to the first\nwsvg(paths, attributes=attributes, svg_attributes=svg_attributes, filename='output1.svg')",
"There will be many more examples of writing and displaying path data below.\nThe .point() method and transitioning between path and path segment parameterizations\nSVG Path elements and their segments have official parameterizations.\nThese parameterizations can be accessed using the Path.point(), Line.point(), QuadraticBezier.point(), CubicBezier.point(), and Arc.point() methods.\nAll these parameterizations are defined over the domain 0 <= t <= 1.\nNote: In this document and in inline documentation and docstrings, I use a capital T when referring to the parameterization of a Path object and a lower case t when speaking about path segment objects (i.e. Line, QuadraticBezier, CubicBezier, and Arc objects).\nGiven a T value, the Path.T2t() method can be used to find the corresponding segment index, k, and segment parameter, t, such that path.point(T)=path[k].point(t).\nThere is also a Path.t2T() method to solve the inverse problem.",
"# Example:\n\n# Let's check that the first segment of redpath starts \n# at the same point as redpath\nfirstseg = redpath[0] \nprint(redpath.point(0) == firstseg.point(0) == redpath.start == firstseg.start)\n\n# Let's check that the last segment of redpath ends on the same point as redpath\nlastseg = redpath[-1] \nprint(redpath.point(1) == lastseg.point(1) == redpath.end == lastseg.end)\n\n# This next boolean should return False as redpath is composed of multiple segments\nprint(redpath.point(0.5) == firstseg.point(0.5))\n\n# If we want to figure out which segment of redpath the \n# point redpath.point(0.5) lands on, we can use the path.T2t() method\nk, t = redpath.T2t(0.5)\nprint(redpath[k].point(t) == redpath.point(0.5))",
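The mechanics of T2t/t2T can be sketched without svgpathtools. Note this simplified version assumes every segment occupies an equal share of the path's parameter interval; svgpathtools itself weights segments (by arc length by default), so the numbers below are only illustrative:

```python
# Simplified T2t/t2T: split the domain [0, 1] evenly among num_segments segments.
def T2t(num_segments, T):
    if T == 1.0:
        return num_segments - 1, 1.0
    k = int(T * num_segments)   # segment index
    t = T * num_segments - k    # local parameter within segment k
    return k, t

def t2T(num_segments, k, t):
    return (k + t) / num_segments

k, t = T2t(4, 0.625)   # T=0.625 on a 4-segment path lands halfway along segment 2
```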
"Bezier curves as NumPy polynomial objects\nAnother great way to work with the parameterizations for Line, QuadraticBezier, and CubicBezier objects is to convert them to numpy.poly1d objects. This is done easily using the Line.poly(), QuadraticBezier.poly() and CubicBezier.poly() methods.\nThere's also a polynomial2bezier() function in the pathtools.py submodule to convert polynomials back to Bezier curves. \nNote: cubic Bezier curves are parameterized as $$\\mathcal{B}(t) = P_0(1-t)^3 + 3P_1(1-t)^2t + 3P_2(1-t)t^2 + P_3t^3$$\nwhere $P_0$, $P_1$, $P_2$, and $P_3$ are the control points start, control1, control2, and end, respectively, that svgpathtools uses to define a CubicBezier object. The CubicBezier.poly() method expands this polynomial to its standard form \n$$\\mathcal{B}(t) = c_0t^3 + c_1t^2 + c_2t + c_3$$\nwhere\n$$\\begin{bmatrix}c_0\\\\c_1\\\\c_2\\\\c_3\\end{bmatrix} = \n\\begin{bmatrix}\n-1 & 3 & -3 & 1\\\\\n3 & -6 & 3 & 0\\\\\n-3 & 3 & 0 & 0\\\\\n1 & 0 & 0 & 0\n\\end{bmatrix}\n\\begin{bmatrix}P_0\\\\P_1\\\\P_2\\\\P_3\\end{bmatrix}$$ \nQuadraticBezier.poly() and Line.poly() are defined similarly.",
"# Example:\nb = CubicBezier(300+100j, 100+100j, 200+200j, 200+300j)\np = b.poly()\n\n# p(t) == b.point(t)\nprint(p(0.235) == b.point(0.235))\n\n# What is p(t)? It's just the cubic b written in standard form. \nbpretty = \"{}*(1-t)^3 + 3*{}*(1-t)^2*t + 3*{}*(1-t)*t^2 + {}*t^3\".format(*b.bpoints())\nprint(\"The CubicBezier, b.point(x) = \\n\\n\" + \n bpretty + \"\\n\\n\" + \n \"can be rewritten in standard form as \\n\\n\" +\n str(p).replace('x','t'))",
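The coefficient matrix above can be sanity-checked with numpy alone (no svgpathtools needed; the control points here are the same ones used for b):

```python
import numpy as np

# Control points (start, control1, control2, end) as complex numbers.
P = np.array([300+100j, 100+100j, 200+200j, 200+300j])

# Matrix taking (P0, P1, P2, P3) to standard-form coefficients (c0, c1, c2, c3).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  3,  0, 0],
              [ 1,  0,  0, 0]])
c = M @ P

# Both forms should agree at any parameter value.
t = 0.235
bezier = P[0]*(1-t)**3 + 3*P[1]*(1-t)**2*t + 3*P[2]*(1-t)*t**2 + P[3]*t**3
standard = c[0]*t**3 + c[1]*t**2 + c[2]*t + c[3]
```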
"The ability to convert between Bezier objects and NumPy polynomial objects is very useful. For starters, we can turn a list of Bézier segments into a NumPy array. \nNumpy Array operations on Bézier path segments\nExample available here \nTo further illustrate the power of being able to convert our Bezier curve objects to numpy.poly1d objects and back, let's compute the unit tangent vector of the above CubicBezier object, b, at t=0.5 in four different ways. \nTangent vectors (and more on NumPy polynomials)",
"t = 0.5\n### Method 1: the easy way\nu1 = b.unit_tangent(t)\n\n### Method 2: another easy way \n# Note: This way will fail if it encounters a removable singularity.\nu2 = b.derivative(t)/abs(b.derivative(t))\n\n### Method 3: a third easy way \n# Note: This way will also fail if it encounters a removable singularity.\ndp = p.deriv() \nu3 = dp(t)/abs(dp(t))\n\n### Method 4: the removable-singularity-proof numpy.poly1d way \n# Note: This is roughly how Method 1 works\nfrom svgpathtools import real, imag, rational_limit\ndx, dy = real(dp), imag(dp) # dp == dx + 1j*dy \np_mag2 = dx**2 + dy**2 # p_mag2(t) = |p(t)|**2\n# Note: abs(dp) isn't a polynomial, but abs(dp)**2 is, and,\n# the limit_{t->t0}[f(t) / abs(f(t))] == \n# sqrt(limit_{t->t0}[f(t)**2 / abs(f(t))**2])\nfrom cmath import sqrt\nu4 = sqrt(rational_limit(dp**2, p_mag2, t))\n\nprint(\"unit tangent check:\", u1 == u2 == u3 == u4)\n\n# Let's do a visual check\nmag = b.length()/4 # so it's not hard to see the tangent line\ntangent_line = Line(b.point(t), b.point(t) + mag*u1)\ndisvg([b, tangent_line], 'bg', nodes=[b.point(t)])",
"Translations (shifts), reversing orientation, and normal vectors",
"# Speaking of tangents, let's add a normal vector to the picture\nn = b.normal(t)\nnormal_line = Line(b.point(t), b.point(t) + mag*n)\ndisvg([b, tangent_line, normal_line], 'bgp', nodes=[b.point(t)])\n\n# and let's reverse the orientation of b! \n# the tangent and normal lines should be sent to their opposites\nbr = b.reversed()\n\n# Let's also shift b_r over a bit to the right so we can view it next to b\n# The simplest way to do this is br = br.translated(3*mag), but let's use \n# the .bpoints() instead, which returns a Bezier's control points\nbr.start, br.control1, br.control2, br.end = [3*mag + bpt for bpt in br.bpoints()] # \n\ntangent_line_r = Line(br.point(t), br.point(t) + mag*br.unit_tangent(t))\nnormal_line_r = Line(br.point(t), br.point(t) + mag*br.normal(t))\nwsvg([b, tangent_line, normal_line, br, tangent_line_r, normal_line_r], \n 'bgpkgp', nodes=[b.point(t), br.point(t)], filename='vectorframes.svg', \n text=[\"b's tangent\", \"br's tangent\"], text_path=[tangent_line, tangent_line_r])",
"Rotations and Translations",
"# Let's take a Line and an Arc and make some pictures\ntop_half = Arc(start=-1, radius=1+2j, rotation=0, large_arc=1, sweep=1, end=1)\nmidline = Line(-1.5, 1.5)\n\n# First let's make our ellipse whole\nbottom_half = top_half.rotated(180)\ndecorated_ellipse = Path(top_half, bottom_half)\n\n# Now let's add the decorations\nfor k in range(12):\n decorated_ellipse.append(midline.rotated(30*k))\n \n# Let's move it over so we can see the original Line and Arc object next\n# to the final product\ndecorated_ellipse = decorated_ellipse.translated(4+0j)\nwsvg([top_half, midline, decorated_ellipse], filename='decorated_ellipse.svg')",
"Arc length and inverse arc length\nHere we'll create an SVG that shows off the parametric and geometric midpoints of the paths from test.svg. We'll need to use the Path.length(), Line.length(), QuadraticBezier.length(), CubicBezier.length(), and Arc.length() methods, as well as the related inverse arc length .ilength() methods, to do this.",
"# First we'll load the path data from the file test.svg\npaths, attributes = svg2paths('test.svg')\n\n# Let's mark the parametric midpoint of each segment\n# I say \"parametric\" midpoint because Bezier curves aren't\n# parameterized by arclength\n# If a segment's parametric midpoint is also its geometric midpoint,\n# we'll mark it purple; otherwise we'll mark the parametric midpoint\n# red and the geometric midpoint green\nmin_depth = 5\nerror = 1e-4\ndots = []\nncols = []\nnradii = []\nfor path in paths:\n    for seg in path:\n        parametric_mid = seg.point(0.5)\n        seg_length = seg.length()\n        if seg.length(0.5)/seg.length() == 1/2:\n            dots += [parametric_mid]\n            ncols += ['purple']\n            nradii += [5]\n        else:\n            t_mid = seg.ilength(seg_length/2)\n            geo_mid = seg.point(t_mid)\n            dots += [parametric_mid, geo_mid]\n            ncols += ['red', 'green']\n            nradii += [5] * 2\n\n# In 'output2.svg' the paths will retain their original attributes\nwsvg(paths, nodes=dots, node_colors=ncols, node_radii=nradii,\n     attributes=attributes, filename='output2.svg')",
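Inverse arc length is, at heart, a root-finding problem. Here is a minimal pure-Python bisection sketch of the idea (this is not svgpathtools' actual algorithm, and the crude piecewise-linear length estimate is only for illustration):

```python
# Approximate arc length over [0, t] of a curve given as a point function,
# then invert it by bisection on the parameter.
def arclength(point, t, n=1000):
    pts = [point(t * k / n) for k in range(n + 1)]
    return sum(abs(b - a) for a, b in zip(pts, pts[1:]))

def ilength(point, s, tol=1e-6):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if arclength(point, mid) < s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# For a straight line, parameter and arc length are proportional,
# so half the length should land at t = 0.5.
line = lambda t: (1 + 2j) * t
t_half = ilength(line, abs(1 + 2j) / 2)
```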
"Intersections between Bezier curves",
"# Let's find all intersections between redpath and the other paths\nredpath = paths[0]\nredpath_attribs = attributes[0]\nintersections = []\nfor path in paths[1:]:\n    for (T1, seg1, t1), (T2, seg2, t2) in redpath.intersect(path):\n        intersections.append(redpath.point(T1))\n\ndisvg(paths, filename='output_intersections.svg', attributes=attributes,\n      nodes=intersections, node_radii=[5]*len(intersections))",
"An Advanced Application: Offsetting Paths\nHere we'll find the offset curve for a few paths.",
"from svgpathtools import parse_path, Line, Path, wsvg\ndef offset_curve(path, offset_distance, steps=1000):\n    \"\"\"Takes in a Path object, `path`, and a distance,\n    `offset_distance`, and outputs a piecewise-linear approximation\n    of the 'parallel' offset curve.\"\"\"\n    nls = []\n    for seg in path:\n        for k in range(steps):\n            t = k / steps\n            offset_vector = offset_distance * seg.normal(t)\n            nl = Line(seg.point(t), seg.point(t) + offset_vector)\n            nls.append(nl)\n    connect_the_dots = [Line(nls[k].end, nls[k+1].end) for k in range(len(nls)-1)]\n    if path.isclosed():\n        connect_the_dots.append(Line(nls[-1].end, nls[0].end))\n    offset_path = Path(*connect_the_dots)\n    return offset_path\n\n# Examples:\npath1 = parse_path(\"m 288,600 c -52,-28 -42,-61 0,-97 \")\npath2 = parse_path(\"M 151,395 C 407,485 726.17662,160 634,339\").translated(300)\npath3 = parse_path(\"m 117,695 c 237,-7 -103,-146 457,0\").translated(500+400j)\npaths = [path1, path2, path3]\n\noffset_distances = [10*k for k in range(1,51)]\noffset_paths = []\nfor path in paths:\n    for distance in offset_distances:\n        offset_paths.append(offset_curve(path, distance))\n\n# Let's take a look\nwsvg(paths + offset_paths, 'g'*len(paths) + 'r'*len(offset_paths), filename='offset_curves.svg')",
"Compatibility Notes for users of svg.path (v2.0)\n\n\nrenamed the Arc.arc attribute to Arc.large_arc\n\n\nPath.d() : For behavior similar<sup id=\"a2\">2</sup> to svg.path (v2.0), set both useSandT and use_closed_attrib to be True.\n\n\n<u id=\"f2\">2</u> The behavior would be identical, but the string formatting used in this method has been changed to use the default format (instead of the General format, {:G}), for increased precision. ↩\nLicence\nThis module is under an MIT License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cccma/cmip6/models/sandbox-1/atmoschem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: SANDBOX-1\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'sandbox-1', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of atmospheric chemistry code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/mlops-on-gcp
|
immersion/kubeflow_pipelines/multiple_frameworks/labs/lab-01.ipynb
|
apache-2.0
|
[
"Lab: Continuous Training with TensorFlow, PyTorch, XGBoost, and Scikit-learn Models with KubeFlow and AI Platform Pipelines\nIn this lab we will create containerized training applications for ML models in TensorFlow, PyTorch, XGBoost, and Scikit-learn. Will will then use these images as ops in a KubeFlow pipeline and train multiple models in parallel. We will then set up recurring runs of our KubeFlow pipeline in the UI. \nFirst, we will containerize models in TF, PyTorch, XGBoost and Scikit-learn following a step-wise process for each:\n* Create the training script\n* Package training script into a Docker Image \n* Build and push training image to Google Cloud Container Registry\nOnce we have all four training images built and pushed to the Container Registry, we will build a KubeFlow pipeline that does two things:\n* Queries BigQuery to create training/validation splits and export results as sharded CSV files in GCS\n* Launches AI Platform training jobs with our four containerized training applications, using the exported CSV data as input \nFinally, we will compile and deploy our pipeline. In the UI we will set up Continuous Training with recurring pipeline runs.\nPRIOR TO STARTING THE LAB: Make sure you create a new instance with AI Platform Pipelines. Once the GKE cluster is spun up, copy the endpoint because you will need it in this lab. \nStart by setting some global variables. Make sure you have created a bucket that is gs://PROJECT_ID",
"REGION = 'us-central1'\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]\nBUCKET = 'gs://' + PROJECT_ID ",
"First, create a BigQuery dataset. We will then query a public BigQuery dataset to populate a table in this dataset. This is census data. We will use age, workclass, education, occupation, and hours per week to predict income bracket. Note: We also grab functional_weight in our query. We do not use this feature in our models, however we use it to hash on when creating training/validation splits.",
"!bq --location=US mk census\n\n%%bigquery\n \nCREATE OR REPLACE TABLE census.data AS\n \nSELECT age, workclass, education_num, occupation, hours_per_week,income_bracket,functional_weight \nFROM `bigquery-public-data.ml_datasets.census_adult_income` \nWHERE AGE IS NOT NULL\nAND workclass IS NOT NULL\nAND education_num IS NOT NULL\nAND occupation IS NOT NULL\nAND hours_per_week IS NOT NULL\nAND income_bracket IS NOT NULL \nAND functional_weight IS NOT NULL",
"Create Scikit-learn Training Script\nWe will develop our first training script with Scikit-learn. We will use Pandas to read the CSV data then train a simple SGD Classifier.",
"!mkdir scikit_trainer_image\n\n%%writefile ./scikit_trainer_image/train.py\n\n\"\"\"Census Scikit-learn classifier trainer script.\"\"\"\n\nimport pickle\nimport subprocess\nimport sys\nimport datetime\nimport os\n\nimport fire\nimport pandas as pd\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.preprocessing import StandardScaler\n\n\ndef train_evaluate(training_dataset_path, validation_dataset_path,output_dir):\n    \"\"\"Trains the Census Classifier model.\"\"\"\n    \n    # Ingest data into Pandas DataFrames \n    df_train = pd.read_csv(training_dataset_path)\n    df_validation = pd.read_csv(validation_dataset_path)\n    df_train = pd.concat([df_train, df_validation])\n    \n    numeric_features = [\n        'age', 'education_num','hours_per_week'\n    ]\n    \n    categorical_features = ['workclass', 'occupation']\n    \n    # Scale numeric features, one-hot encode categorical features\n    preprocessor = ColumnTransformer(transformers=[(\n        'num', StandardScaler(),\n        numeric_features),\n        ('cat', OneHotEncoder(), categorical_features)])\n    \n    pipeline = Pipeline([('preprocessor', preprocessor),\n                         ('classifier', SGDClassifier(loss='log'))])\n    \n    num_features_type_map = {feature: 'float64' for feature in numeric_features}\n    df_train = df_train.astype(num_features_type_map)\n    df_validation = df_validation.astype(num_features_type_map)\n    \n    X_train = df_train.drop('income_bracket', axis=1)\n    y_train = df_train['income_bracket']\n    \n    # Set parameters of the model and fit\n    pipeline.set_params(classifier__alpha=0.0005, classifier__max_iter=250)\n    pipeline.fit(X_train, y_train)\n    \n    # Save the model locally\n    model_filename = 'model.pkl'\n    with open(model_filename, 'wb') as model_file:\n        pickle.dump(pipeline, model_file)\n    \n    # Copy the model to GCS \n    EXPORT_PATH = os.path.join(\n        output_dir, datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\n    \n    gcs_model_path = '{}/{}'.format(EXPORT_PATH, model_filename)\n    subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path])\n    print('Saved model in: {}'.format(gcs_model_path))\n\n\nif __name__ == '__main__':\n    fire.Fire(train_evaluate)",
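After training, the script pickles the fitted pipeline and copies it to GCS. A serving process reverses the same steps; below is a minimal round-trip sketch, using a plain stand-in class rather than a trained sklearn `Pipeline` so it runs without any data:

```python
import os
import pickle
import tempfile

class StandInModel:
    """Stand-in for the fitted sklearn Pipeline that train.py pickles."""
    def predict(self, rows):
        return [0 for _ in rows]  # dummy prediction per row

# Save with the same pattern as the trainer script
model_path = os.path.join(tempfile.mkdtemp(), 'model.pkl')
with open(model_path, 'wb') as model_file:
    pickle.dump(StandInModel(), model_file)

# What a consumer would do after `gsutil cp`-ing the file back down
with open(model_path, 'rb') as model_file:
    restored = pickle.load(model_file)

predictions = restored.predict([{'age': 30}, {'age': 45}])
```

The real artifact is the whole preprocessing-plus-classifier pipeline, so a consumer that unpickles it can feed raw feature rows directly.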
"Package Scikit-learn Training Script into a Docker Image\nThe next step is to package this training script into a Docker image. Be sure to list the dependencies in the Dockerfile: for the Scikit-learn model we pin scikit-learn to version 0.23.2 and pandas to version 1.1.1.",
"%%writefile ./scikit_trainer_image/Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/base-cpu\nRUN pip install -U fire scikit-learn==0.23.2 pandas==1.1.1\nWORKDIR /app\nCOPY train.py .\n\nENTRYPOINT [\"python\", \"train.py\"]",
"Build the Scikit-learn Trainer Image\nNow we will use Cloud Build to build the image and push it to your project's Container Registry. Here we are using the remote cloud service to build the image, so we don't need a local installation of Docker. Note: Building and pushing the image will take a few minutes. Since we will be building and pushing 4 different images in this lab, I suggest taking a detailed look at the training scripts while you wait and making sure you understand the data ingestion/model building/training code for the frameworks you develop with.",
"SCIKIT_IMAGE_NAME='scikit_trainer_image'\nSCIKIT_IMAGE_TAG='latest'\nSCIKIT_IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, SCIKIT_IMAGE_NAME, SCIKIT_IMAGE_TAG)\n\n!gcloud builds submit --tag $SCIKIT_IMAGE_URI $SCIKIT_IMAGE_NAME",
"Create TensorFlow Training Script\nOne down, three to go! Now we will develop a TensorFlow training script. We will use the tf.data API to ingest the data from CSVs then build/train a neural network with the tf.keras Functional API.",
"!mkdir tensorflow_trainer_image\n\n%%writefile ./tensorflow_trainer_image/train.py\n\n\"\"\"Census TensorFlow classifier trainer script.\"\"\"\n\nimport pickle\nimport subprocess\nimport sys\nimport fire\nimport pandas as pd\nimport tensorflow as tf\nimport datetime\nimport os\n\nCSV_COLUMNS = [\"age\",\n               \"workclass\",\n               \"education_num\",\n               \"occupation\",\n               \"hours_per_week\",\n               \"income_bracket\"]\n\n# Add string name for label column\nLABEL_COLUMN = \"income_bracket\"\n\n# Set default values for each CSV column as a list of lists.\n# Treat workclass and occupation as strings.\nDEFAULTS = [[18], [\"?\"], [4], [\"?\"], [20],[\"<=50K\"]]\n\ndef features_and_labels(row_data):\n    cols = tf.io.decode_csv(row_data, record_defaults=DEFAULTS)\n    feats = {\n        'age': tf.reshape(cols[0], [1,]),\n        'workclass': tf.reshape(cols[1],[1,]),\n        'education_num': tf.reshape(cols[2],[1,]),\n        'occupation': tf.reshape(cols[3],[1,]),\n        'hours_per_week': tf.reshape(cols[4],[1,]),\n        'income_bracket': cols[5]\n    }\n    label = feats.pop('income_bracket')\n    label_int = tf.case([(tf.math.equal(label,tf.constant([' <=50K'])), lambda: 0),\n                         (tf.math.equal(label,tf.constant([' >50K'])), lambda: 1)])\n    \n    return feats, label_int\n\ndef load_dataset(pattern, batch_size=1, mode='eval'):\n    # Make a CSV dataset\n    filelist = tf.io.gfile.glob(pattern)\n    dataset = tf.data.TextLineDataset(filelist).skip(1)\n    dataset = dataset.map(features_and_labels)\n\n    # Shuffle and repeat for training\n    if mode == 'train':\n        dataset = dataset.shuffle(buffer_size=10*batch_size).batch(batch_size).repeat()\n    else:\n        dataset = dataset.batch(10)\n\n    return dataset\n\ndef train_evaluate(training_dataset_path, validation_dataset_path, batch_size, num_train_examples, num_evals, output_dir):\n    inputs = {\n        'age': tf.keras.layers.Input(name='age',shape=[None],dtype='int32'),\n        'workclass': tf.keras.layers.Input(name='workclass',shape=[None],dtype='string'),\n        'education_num': 
tf.keras.layers.Input(name='education_num',shape=[None],dtype='int32'),\n 'occupation': tf.keras.layers.Input(name='occupation',shape=[None],dtype='string'),\n 'hours_per_week': tf.keras.layers.Input(name='hours_per_week',shape=[None],dtype='int32')\n }\n \n batch_size = int(batch_size)\n num_train_examples = int(num_train_examples)\n num_evals = int(num_evals)\n \n feat_cols = {\n 'age': tf.feature_column.numeric_column('age'),\n 'workclass': tf.feature_column.indicator_column(\n tf.feature_column.categorical_column_with_hash_bucket(\n key='workclass', hash_bucket_size=100\n )\n ),\n 'education_num': tf.feature_column.numeric_column('education_num'),\n 'occupation': tf.feature_column.indicator_column(\n tf.feature_column.categorical_column_with_hash_bucket(\n key='occupation', hash_bucket_size=100\n )\n ),\n 'hours_per_week': tf.feature_column.numeric_column('hours_per_week')\n }\n \n dnn_inputs = tf.keras.layers.DenseFeatures(\n feature_columns=feat_cols.values())(inputs)\n h1 = tf.keras.layers.Dense(64, activation='relu')(dnn_inputs)\n h2 = tf.keras.layers.Dense(128, activation='relu')(h1)\n h3 = tf.keras.layers.Dense(64, activation='relu')(h2)\n output = tf.keras.layers.Dense(1, activation='sigmoid')(h3)\n \n model = tf.keras.models.Model(inputs=inputs,outputs=output)\n model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])\n \n trainds = load_dataset(\n pattern=training_dataset_path,\n batch_size=batch_size,\n mode='train')\n \n evalds = load_dataset(\n pattern=validation_dataset_path,\n mode='eval')\n \n \n steps_per_epoch = num_train_examples // (batch_size * num_evals)\n \n history = model.fit(\n trainds,\n validation_data=evalds,\n validation_steps=100,\n epochs=num_evals,\n steps_per_epoch=steps_per_epoch\n )\n \n EXPORT_PATH = os.path.join(\n output_dir, datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\n tf.saved_model.save(\n obj=model, export_dir=EXPORT_PATH) # with default serving function\n \n print(\"Exported trained 
model to {}\".format(EXPORT_PATH))\n \nif __name__ == '__main__':\n fire.Fire(train_evaluate)",
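Instead of true epochs, the TF trainer uses `num_evals` "virtual epochs": `steps_per_epoch = num_train_examples // (batch_size * num_evals)`, so the model processes roughly `num_train_examples` examples in total while evaluating `num_evals` times along the way. The arithmetic with the values this lab passes (`batch_size=32`, `num_train_examples=1000`, `num_evals=10`):

```python
batch_size = 32
num_train_examples = 1000
num_evals = 10

# One "virtual epoch" is a fixed number of optimizer steps, not a full data pass
steps_per_epoch = num_train_examples // (batch_size * num_evals)  # 1000 // 320 = 3
total_steps = steps_per_epoch * num_evals
examples_seen = total_steps * batch_size

# Integer division means slightly fewer than num_train_examples examples
# are seen; the shortfall is bounded by batch_size * num_evals.
shortfall = num_train_examples - examples_seen
```

This makes training length a function of example count rather than dataset size, which is convenient when the exported CSV splits change between pipeline runs.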
"Package TensorFlow Training Script into a Docker Image\nNote that the dependencies in this Dockerfile are different from those for the Scikit-learn one.",
"%%writefile ./tensorflow_trainer_image/Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/base-cpu\nRUN pip install -U fire tensorflow==2.1.1\nWORKDIR /app\nCOPY train.py .\n\nENTRYPOINT [\"python\", \"train.py\"]",
"Build the Tensorflow Trainer Image\nBuild the image and push it to your project's container registry. Again, this will take a few minutes.",
"TF_IMAGE_NAME='tensorflow_trainer_image'\nTF_IMAGE_TAG='latest'\nTF_IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, TF_IMAGE_NAME, TF_IMAGE_TAG)\n\n!gcloud builds submit --tag $TF_IMAGE_URI $TF_IMAGE_NAME",
"Create PyTorch Training Script\nTwo down, two to go! Now we will develop a PyTorch training script. We will use Pandas DataFrames combined with PyTorch's Dataset and DataLoader to ingest the data from CSVs, build a model with torch.nn, and write a training loop to train this model.",
"!mkdir pytorch_trainer_image\n\n%%writefile ./pytorch_trainer_image/train.py\n\nimport os \nimport subprocess\nimport datetime\nimport fire\n\nimport torch \nimport torch.nn as nn\nfrom torch.utils.data import Dataset, DataLoader\nimport pandas as pd \nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\n\nclass TrainData(Dataset):\n def __init__(self, X_data, y_data):\n self.X_data = X_data\n self.y_data = y_data\n \n def __getitem__(self, index):\n return self.X_data[index], self.y_data[index]\n \n def __len__ (self):\n return len(self.X_data)\n \nclass BinaryClassifier(nn.Module):\n def __init__(self):\n super(BinaryClassifier, self).__init__()\n # 27 input features\n self.h1 = nn.Linear(27, 64) \n self.h2 = nn.Linear(64, 64)\n self.output_layer = nn.Linear(64, 1) \n \n self.relu = nn.ReLU()\n self.dropout = nn.Dropout(p=0.1)\n self.batchnorm1 = nn.BatchNorm1d(64)\n self.batchnorm2 = nn.BatchNorm1d(64)\n \n def forward(self, inputs):\n x = self.relu(self.h1(inputs))\n x = self.batchnorm1(x)\n x = self.relu(self.h2(x))\n x = self.batchnorm2(x)\n x = self.dropout(x)\n x = self.output_layer(x)\n \n return x\n\ndef binary_acc(y_pred, y_true):\n \"\"\"Calculates accuracy\"\"\"\n y_pred_tag = torch.round(torch.sigmoid(y_pred))\n\n correct_results_sum = (y_pred_tag == y_true).sum().float()\n acc = correct_results_sum/y_true.shape[0]\n acc = torch.round(acc * 100)\n \n return acc\n\ndef train_evaluate(training_dataset_path, validation_dataset_path, batch_size, num_epochs, output_dir):\n \n batch_size = int(batch_size)\n num_epochs = int(num_epochs)\n \n # Read in train/validation data and concat \n df_train = pd.read_csv(training_dataset_path)\n df_validation = pd.read_csv(validation_dataset_path)\n df = pd.concat([df_train, df_validation])\n\n categorical_features = ['workclass', 'occupation']\n target='income_bracket'\n\n # One-hot encode categorical variables \n df = pd.get_dummies(df,columns=categorical_features)\n\n # Change label to 0 if <=50K, 1 
if >50K\n df[target] = df[target].apply(lambda x: 0 if x==' <=50K' else 1)\n\n # Split features and labels into 2 different vars\n X_train = df.loc[:, df.columns != target]\n y_train = np.array(df[target])\n\n # Normalize features \n scaler = StandardScaler()\n X_train = scaler.fit_transform(X_train)\n\n # Training data\n train_data = TrainData(torch.FloatTensor(X_train), \n torch.FloatTensor(y_train))\n\n # Use torch DataLoader to feed data to model \n train_loader = DataLoader(dataset=train_data, batch_size=batch_size, drop_last=True)\n\n # Instantiate model \n model = BinaryClassifier()\n \n # Loss is binary crossentropy w/ logits. Must manually implement sigmoid for inference\n criterion = nn.BCEWithLogitsLoss()\n \n # Adam optimizer\n optimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n\n model.train()\n for e in range(1, num_epochs+1):\n epoch_loss = 0\n epoch_acc = 0\n for X_batch, y_batch in train_loader:\n optimizer.zero_grad()\n\n y_pred = model(X_batch)\n\n loss = criterion(y_pred, y_batch.unsqueeze(1))\n acc = binary_acc(y_pred, y_batch.unsqueeze(1))\n\n loss.backward()\n optimizer.step()\n\n epoch_loss += loss.item()\n epoch_acc += acc.item()\n\n\n print(f'Epoch {e}: Loss = {epoch_loss/len(train_loader):.5f} | Acc = {epoch_acc/len(train_loader):.3f}')\n\n # Save the model locally\n model_filename='model.pt'\n torch.save(model.state_dict(), model_filename)\n\n EXPORT_PATH = os.path.join(\n output_dir, datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\n\n # Copy the model to GCS\n gcs_model_path = '{}/{}'.format(EXPORT_PATH, model_filename)\n subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path])\n print('Saved model in: {}'.format(gcs_model_path))\n \nif __name__ == '__main__':\n fire.Fire(train_evaluate)",
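The `binary_acc` helper above rounds the sigmoid of the raw logits and compares against labels. The same logic in plain Python (no torch) makes the thresholding easy to check: any clearly positive logit rounds to class 1, any clearly negative logit to class 0.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_accuracy(logits, labels):
    """Percent of predictions matching labels after sigmoid + round."""
    preds = [round(sigmoid(z)) for z in logits]
    correct = sum(int(p == y) for p, y in zip(preds, labels))
    return round(100.0 * correct / len(labels))

# First two logits agree with their labels, last two do not -> 50%
acc = binary_accuracy([2.0, -1.5, 0.3, -0.2], [1, 0, 0, 1])
```

This is also why the trainer uses `nn.BCEWithLogitsLoss`: the model outputs raw logits, and the sigmoid is applied manually only at inference/metric time.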
"Package PyTorch Training Script into a Docker Image\nNote the dependencies.",
"%%writefile ./pytorch_trainer_image/Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/base-cpu\nRUN pip install -U fire torch==1.6.0 scikit-learn==0.23.2 pandas==1.1.1\nWORKDIR /app\nCOPY train.py .\n\nENTRYPOINT [\"python\", \"train.py\"]",
"Build the PyTorch Trainer Image\nBuild and push the PyTorch training image to your project's Container Registry. Again, this will take a few minutes.",
"TORCH_IMAGE_NAME='pytorch_trainer_image'\nTORCH_IMAGE_TAG='latest'\nTORCH_IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, TORCH_IMAGE_NAME, TORCH_IMAGE_TAG)\n\n!gcloud builds submit --tag $TORCH_IMAGE_URI $TORCH_IMAGE_NAME",
"Create XGBoost Training Script\nThree down, one to go! Create the final training script. This script will ingest and preprocess data with Pandas, then train a Gradient Boosted Tree model.",
"!mkdir xgboost_trainer_image\n\n%%writefile ./xgboost_trainer_image/train.py\n\nimport os \nimport subprocess\nimport datetime\nimport fire\nimport pickle \n\nimport pandas as pd \nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom xgboost import XGBClassifier\n\ndef train_evaluate(training_dataset_path, validation_dataset_path,max_depth,n_estimators,output_dir):\n \n df_train = pd.read_csv(training_dataset_path)\n df_validation = pd.read_csv(validation_dataset_path)\n df = pd.concat([df_train, df_validation])\n\n categorical_features = ['workclass', 'occupation']\n target='income_bracket'\n\n # One-hot encode categorical variables \n df = pd.get_dummies(df,columns=categorical_features)\n\n # Change label to 0 if <=50K, 1 if >50K\n df[target] = df[target].apply(lambda x: 0 if x==' <=50K' else 1)\n\n # Split features and labels into 2 different vars\n X_train = df.loc[:, df.columns != target]\n y_train = np.array(df[target])\n\n # Normalize features \n scaler = StandardScaler()\n X_train = scaler.fit_transform(X_train)\n \n grid = {\n 'max_depth': int(max_depth),\n 'n_estimators': int(n_estimators)\n }\n \n model = XGBClassifier()\n model.set_params(**grid)\n model.fit(X_train,y_train)\n \n model_filename = 'xgb_model.pkl'\n pickle.dump(model, open(model_filename, \"wb\"))\n \n EXPORT_PATH = os.path.join(\n output_dir, datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\n \n gcs_model_path = '{}/{}'.format(EXPORT_PATH, model_filename)\n subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path])\n print('Saved model in: {}'.format(gcs_model_path)) \n\nif __name__ == '__main__':\n fire.Fire(train_evaluate)",
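One subtlety carried through all four trainers: the labels exported from the public census table carry a leading space (' <=50K', ' >50K'), and the lambda above matches on the exact string. A small sketch of that mapping, showing why the raw formatting matters:

```python
def encode_label(raw):
    """Mirror of the lambda in train.py: exact match on ' <=50K' (note the leading space)."""
    return 0 if raw == ' <=50K' else 1

# Anything that is not exactly ' <=50K' -- including the same label without
# the leading space -- silently becomes class 1.
encoded = [encode_label(x) for x in [' <=50K', ' >50K', '<=50K']]
```

If the upstream export ever strips whitespace, every row would land in class 1, so this is worth an eye when modifying the BigQuery queries.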
"Package XGBoost Training Script into a Docker Image\nNote the dependencies.",
"%%writefile ./xgboost_trainer_image/Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/base-cpu\nRUN pip install -U fire scikit-learn==0.23.2 pandas==1.1.1 xgboost==1.2.0\nWORKDIR /app\nCOPY train.py .\n\nENTRYPOINT [\"python\", \"train.py\"]",
"Build the XGBoost Trainer Image\nBuild and push the XGBoost training image. This will take a few minutes (this is the last one, woohoo!)",
"XGB_IMAGE_NAME='xgboost_trainer_image'\nXGB_IMAGE_TAG='latest'\nXGB_IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, XGB_IMAGE_NAME, XGB_IMAGE_TAG)\n\n!gcloud builds submit --tag $XGB_IMAGE_URI $XGB_IMAGE_NAME",
"Develop KubeFlow Pipeline\nNow that you have all four of your training applications as containers in your project's Container Registry, let's build a KubeFlow pipeline. \nThe KubeFlow pipeline will have two BigQuery Ops. We will use the pre-built BigQuery Query component (no need to reinvent the wheel) to do the following: \n* Create a training split in our data and export to CSV\n* Create a validation split in our data and export to CSV\nThe output of these BigQuery Ops will be the input data into four AI Platform Training Ops. For this we will also use a pre-built component. Each AI Platform Training Op will train one of our containerized models - Tensorflow, PyTorch, XGBoost, and Scikit-learn.\nTHERE ARE TODOs IN THE FOLLOWING CODE",
"!mkdir pipeline\n\n%%writefile ./pipeline/census_training_pipeline.py\n\nimport os\nimport kfp\nfrom kfp.dsl.types import GCPProjectID\nfrom kfp.dsl.types import GCPRegion\nfrom kfp.dsl.types import GCSPath\nfrom kfp.dsl.types import String\nfrom kfp.gcp import use_gcp_secret\nimport kfp.components as comp\nimport kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport json\n\n# We will use environment vars to set the trainer image names and bucket name\nTF_TRAINER_IMAGE = os.getenv('TF_TRAINER_IMAGE')\nSCIKIT_TRAINER_IMAGE = os.getenv('SCIKIT_TRAINER_IMAGE')\nTORCH_TRAINER_IMAGE = os.getenv('TORCH_TRAINER_IMAGE')\nXGB_TRAINER_IMAGE = os.getenv('XGB_TRAINER_IMAGE')\nBUCKET = os.getenv('BUCKET')\n\n# Paths to export the training/validation data from bigquery\nTRAINING_OUTPUT_PATH = BUCKET + '/census/data/training.csv'\nVALIDATION_OUTPUT_PATH = BUCKET + '/census/data/validation.csv'\n\nCOMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'\n\n# Create component factories\ncomponent_store = kfp.components.ComponentStore(\n local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])\n\n# TODO: Load BigQuery and AI Platform Training ops from component_store\n# as bigquery_query_op and mlengine_train_op \n\nbigquery_query_op = \nmlengine_train_op = \n\ndef get_query(dataset='training'):\n \"\"\"Function that returns either training or validation query\"\"\"\n if dataset=='training':\n split = \"MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 100) < 80\"\n elif dataset=='validation':\n split = \"\"\"MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 100) >= 80 \n AND MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 100) < 90\"\"\"\n else:\n split = \"MOD(ABS(FARM_FINGERPRINT(CAST(functional_weight AS STRING))), 100) >= 90\"\n \n query = \"\"\"SELECT age, workclass, education_num, occupation, hours_per_week,income_bracket \n FROM census.data \n WHERE {0}\"\"\".format(split)\n 
\n return query\n\n# We will use the training/validation queries as inputs to our pipeline\n# This lets us change the training/validation datasets if we wish by simply\n# Changing the query. \nTRAIN_QUERY = get_query(dataset='training')\nVALIDATION_QUERY=get_query(dataset='validation')\n\n@dsl.pipeline(\n name='Continuous Training with Multiple Frameworks',\n description='Pipeline to create training/validation splits w/ BigQuery then launch multiple AI Platform Training Jobs'\n)\ndef pipeline(\n project_id,\n train_query=TRAIN_QUERY,\n validation_query=VALIDATION_QUERY,\n region='us-central1'\n):\n # Creating the training data split\n create_training_split = bigquery_query_op(\n query=train_query,\n project_id=project_id,\n output_gcs_path=TRAINING_OUTPUT_PATH\n ).set_display_name('BQ Train Split')\n \n # TODO: Create the validation data split\n \n # These are the output directories where our models will be saved\n tf_output_dir = BUCKET + '/census/models/tf'\n scikit_output_dir = BUCKET + '/census/models/scikit'\n torch_output_dir = BUCKET + '/census/models/torch'\n xgb_output_dir = BUCKET + '/census/models/xgb'\n \n # Training arguments to be passed to the TF Trainer\n tf_args = [\n '--training_dataset_path', create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path', create_validation_split.outputs['output_gcs_path'],\n '--output_dir', tf_output_dir,\n '--batch_size', '32', \n '--num_train_examples', '1000',\n '--num_evals', '10'\n ]\n \n # TODO: Fill in the list of the training arguments to be passed to the Scikit-learn Trainer\n scikit_args = []\n \n # Training arguments to be passed to the PyTorch Trainer\n torch_args = [\n '--training_dataset_path', create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path', create_validation_split.outputs['output_gcs_path'],\n '--output_dir', torch_output_dir,\n '--batch_size', '32', \n '--num_epochs', '15',\n ]\n \n # Training arguments to be passed to the XGBoost Trainer \n 
xgb_args = [\n '--training_dataset_path', create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path', create_validation_split.outputs['output_gcs_path'],\n '--output_dir', xgb_output_dir,\n '--max_depth', '10', \n '--n_estimators', '100'\n ]\n \n # AI Platform Training Jobs with all 4 trainer images \n \n train_scikit = mlengine_train_op(\n project_id=project_id,\n region=region,\n master_image_uri=SCIKIT_TRAINER_IMAGE,\n args=scikit_args).set_display_name('Scikit-learn Model - AI Platform Training')\n \n train_tf = mlengine_train_op(\n project_id=project_id,\n region=region,\n master_image_uri=TF_TRAINER_IMAGE,\n args=tf_args).set_display_name('Tensorflow Model - AI Platform Training')\n \n train_torch = mlengine_train_op(\n project_id=project_id,\n region=region,\n master_image_uri=TORCH_TRAINER_IMAGE,\n args=torch_args).set_display_name('Pytorch Model - AI Platform Training')\n \n # TODO: Provide arguments to mlengine_train_op to train the XGBoost model\n train_xgb = mlengine_train_op(\n # Arguments go here\n ).set_display_name('XGBoost Model - AI Platform Training')\n ",
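The training/validation/test predicates built by `get_query` are intended to partition the 100 hash buckets with no overlap: buckets 0-79 train, 80-89 validate, 90-99 test. A quick pure-Python check of that partition:

```python
def assign(bucket):
    """Mirror of the WHERE clauses in get_query, for a hash bucket in [0, 100)."""
    if bucket < 80:
        return 'training'
    if bucket < 90:
        return 'validation'
    return 'test'

# Every bucket gets exactly one split, so no row can appear in two datasets
splits = [assign(b) for b in range(100)]
sizes = {name: splits.count(name) for name in ('training', 'validation', 'test')}
```

Because the pipeline takes the queries as parameters, changing the split percentages is just a matter of editing these modulo thresholds and re-running.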
"Set environment variables for the different trainer image names as well as our bucket.",
"TAG = 'latest'\nSCIKIT_TRAINER_IMAGE = 'gcr.io/{}/scikit_trainer_image:{}'.format(PROJECT_ID, TAG)\nTF_TRAINER_IMAGE = 'gcr.io/{}/tensorflow_trainer_image:{}'.format(PROJECT_ID, TAG)\nTORCH_TRAINER_IMAGE = 'gcr.io/{}/pytorch_trainer_image:{}'.format(PROJECT_ID, TAG)\nXGB_TRAINER_IMAGE = 'gcr.io/{}/xgboost_trainer_image:{}'.format(PROJECT_ID, TAG)\n\n%env TF_TRAINER_IMAGE={TF_TRAINER_IMAGE}\n%env SCIKIT_TRAINER_IMAGE={SCIKIT_TRAINER_IMAGE}\n%env TORCH_TRAINER_IMAGE={TORCH_TRAINER_IMAGE}\n%env XGB_TRAINER_IMAGE={XGB_TRAINER_IMAGE}\n%env BUCKET={BUCKET}",
"Compile the Pipeline\nCompile the pipeline with the CLI compiler. This will save a census_training_pipeline.yaml file locally.",
"!dsl-compile --py pipeline/census_training_pipeline.py --output census_training_pipeline.yaml",
"Take a look at the head of the yaml file",
"!head census_training_pipeline.yaml",
"Deploy your KubeFlow Pipeline\nNow let's deploy the KubeFlow pipeline. Prior to the lab you should have spun up an AI Platform Pipelines Instance. In the AI Platform Pipeline UI click 'settings' on your pipeline and copy the endpoint. Paste the endpoint as the value for the string variable ENDPOINT.",
"#TODO: Change ENDPOINT to the ENDPOINT for your AI Platform Pipelines Instance\nENDPOINT = ''\nPIPELINE_NAME = 'census_trainer_multiple_models'\n\n!kfp --endpoint $ENDPOINT pipeline upload \\\n-p $PIPELINE_NAME \\\n./census_training_pipeline.yaml",
"Continuous Training: Create Pipeline Run, then Create Recurring Runs in the UI.\nNow that we have deployed our KubeFlow pipeline, let's head over to the UI and launch a pipeline run. You can either click Open Pipelines Dashboard in the AI Platform Pipeline UI or run the Python cell below and copy/paste the output into the browser. Once in the UI, complete the following steps:\n* Select Pipelines on the left-hand navigation panel\n* Select census_trainer_multiple_models then click Create Run\n* In the Experiment field select Default\n* For Run Type select One-Off\n* Enter your Project ID and hit Start. You can now monitor your pipeline run in the UI (it will take about 10 minutes to complete the run).\nNow let's set up a recurring run:\n* Select Pipelines then census_trainer_multiple_models and click Create Run\n* In the Experiment field select Default\n* For Run Type select Recurring\n* Configure the Recurring Run to start tomorrow at 5pm and run once weekly\n* Enter your Project ID and hit Start\nNow your pipeline will run weekly starting tomorrow. It's as easy as that!",
"print(f\"https://{ENDPOINT}\")",
"End of Lab, Congrats!\nIn this lab you created containerized training applications for TensorFlow, PyTorch, Scikit-learn, and XGBoost models. You then created a KubeFlow pipeline that used pre-built components to create training/validation splits of the BigQuery data, export that data as CSV files to GCS, and launch AI Platform Training jobs with the four containerized training applications. Finally, you ran the pipeline from the UI and set up Continuous Training to re-run your pipeline once a week!"
] |
repo_name: pk-ai/training
path: machine-learning/deep-learning/udacity/ud730/2_fullyconnected.ipynb
license: mit
[
"Deep Learning\nAssignment 2\nPreviously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.\nThe goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle\nfrom six.moves import range",
"First reload the data we generated in 1_notmnist.ipynb.",
"pickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)",
"Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.",
"image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)",
"We're first going to train a multinomial logistic regression using simple gradient descent.\nTensorFlow works like this:\n* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:\n    with graph.as_default():\n        ...\n* Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:\n    with tf.Session(graph=graph) as session:\n        ...\nLet's load all the data into TensorFlow and build the computation graph corresponding to our training:",
"# With gradient descent training, even this much data is prohibitive.\n# Subset the training data for faster turnaround.\ntrain_subset = 10000\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data.\n # Load the training, validation and test data into constants that are\n # attached to the graph.\n tf_train_dataset = tf.constant(train_dataset[:train_subset, :])\n tf_train_labels = tf.constant(train_labels[:train_subset])\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n # These are the parameters that we are going to be training. The weight\n # matrix will be initialized using random values following a (truncated)\n # normal distribution. The biases get initialized to zero.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, num_labels]))\n biases = tf.Variable(tf.zeros([num_labels]))\n \n # Training computation.\n # We multiply the inputs with the weight matrix, and add biases. We compute\n # the softmax and cross-entropy (it's one operation in TensorFlow, because\n # it's very common, and it can be optimized). We take the average of this\n # cross-entropy across all training examples: that's our loss.\n logits = tf.matmul(tf_train_dataset, weights) + biases\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n \n # Optimizer.\n # We are going to find the minimum of this loss using gradient descent.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n # These are not part of training, but merely here so that we can report\n # accuracy figures as we train.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(\n tf.matmul(tf_valid_dataset, weights) + biases)\n test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)",
"Let's run this computation and iterate:",
"# Defining accuracy function to find accuracy of predictions against actuals\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])\n\nnum_steps = 801\n\nwith tf.Session(graph=graph) as session:\n # This is a one-time operation which ensures the parameters get initialized as\n # we described in the graph: random weights for the matrix, zeros for the\n # biases. \n tf.global_variables_initializer().run()\n print('Initialized')\n for step in range(num_steps):\n # Run the computations. We tell .run() that we want to run the optimizer,\n # and get the loss value and the training predictions returned as numpy\n # arrays.\n _, l, predictions = session.run([optimizer, loss, train_prediction])\n if (step % 100 == 0):\n print('Loss at step %d: %f' % (step, l))\n print('Training accuracy: %.1f%%' % accuracy(\n predictions, train_labels[:train_subset, :]))\n # Calling .eval() on valid_prediction is basically like calling run(), but\n # just to get that one numpy array. Note that it recomputes all its graph\n # dependencies.\n print('Validation accuracy: %.1f%%' % accuracy(\n valid_prediction.eval(), valid_labels))\n print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))",
"Let's now switch to stochastic gradient descent training instead, which is much faster.\nThe graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().",
"batch_size = 128\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data. For the training data, we use a placeholder that will be fed\n # at run time with a training minibatch.\n tf_train_dataset = tf.placeholder(tf.float32,\n shape=(batch_size, image_size * image_size))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, num_labels]))\n biases = tf.Variable(tf.zeros([num_labels]))\n \n # Training computation.\n logits = tf.matmul(tf_train_dataset, weights) + biases\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(\n tf.matmul(tf_valid_dataset, weights) + biases)\n test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)",
"Let's run it:",
"num_steps = 3001\n\nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print(\"Initialized\")\n for step in range(num_steps):\n # Pick an offset within the training data, which has been randomized.\n # Note: we could use better randomization across epochs.\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n # Generate a minibatch.\n batch_data = train_dataset[offset:(offset + batch_size), :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n # Prepare a dictionary telling the session where to feed the minibatch.\n # The key of the dictionary is the placeholder node of the graph to be fed,\n # and the value is the numpy array to feed to it.\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 500 == 0):\n print(\"Minibatch loss at step %d: %f\" % (step, l))\n print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n print(\"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels))\n print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))",
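The `offset = (step * batch_size) % (train_labels.shape[0] - batch_size)` line above is what cycles minibatches through the (pre-shuffled) training set. A standalone sketch with tiny made-up sizes shows why the modulo keeps every slice fully inside the data, at the cost of a fixed revisiting order:

```python
num_examples = 10
batch_size = 4

# same offset arithmetic as the training loop, on toy sizes
offsets = [(step * batch_size) % (num_examples - batch_size)
           for step in range(5)]
# every slice [offset : offset + batch_size] fits inside the data,
# but the data is revisited in a fixed order rather than reshuffled
```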
"Problem\nTurn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.",
"batch_size = 128\nnum_hidden_nodes = 1024\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data. For the training data, we use a placeholder that will be fed\n # at run time with a training minibatch.\n tf_train_dataset = tf.placeholder(tf.float32,\n shape=(batch_size, image_size * image_size))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n weights = {\n 'hidden': tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])),\n 'output': tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))\n }\n\n biases = {\n 'hidden': tf.Variable(tf.zeros([num_hidden_nodes])),\n 'output': tf.Variable(tf.zeros([num_labels]))\n }\n \n # Training computation.\n hidden_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights['hidden']) + biases['hidden'])\n logits = tf.matmul(hidden_train, weights['output']) + biases['output']\n loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n hidden_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights['hidden']) + biases['hidden'])\n valid_prediction = tf.nn.softmax(tf.matmul(hidden_valid, weights['output']) + biases['output'])\n hidden_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights['hidden']) + biases['hidden'])\n test_prediction = tf.nn.softmax(tf.matmul(hidden_test, weights['output']) + biases['output'])\n\nnum_steps = 3001\n\nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print(\"Initialized\")\n for step in range(num_steps):\n # Pick an offset within the training data, which has been randomized.\n # Note: we could use better randomization across epochs.\n offset = (step * 
batch_size) % (train_labels.shape[0] - batch_size)\n # Generate a minibatch.\n batch_data = train_dataset[offset:(offset + batch_size), :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n # Prepare a dictionary telling the session where to feed the minibatch.\n # The key of the dictionary is the placeholder node of the graph to be fed,\n # and the value is the numpy array to feed to it.\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 500 == 0):\n print(\"Minibatch loss at step %d: %f\" % (step, l))\n print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n print(\"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels))\n print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))"
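The hidden-layer forward pass above is just matmul → ReLU → matmul. The same data flow in plain NumPy, with illustrative shapes (not the notebook's 784/1024/10):

```python
import numpy as np

rng = np.random.default_rng(0)
X   = rng.normal(size=(5, 8))    # batch of 5 examples, 8 input features
W_h = rng.normal(size=(8, 16))   # hidden-layer weights
b_h = np.zeros(16)
W_o = rng.normal(size=(16, 3))   # output-layer weights
b_o = np.zeros(3)

hidden = np.maximum(X @ W_h + b_h, 0)  # ReLU: clamp negatives to zero
logits = hidden @ W_o + b_o            # unnormalized class scores
```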
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
scikit-optimize/scikit-optimize.github.io
|
0.8/notebooks/auto_examples/optimizer-with-different-base-estimator.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Use different base estimators for optimization\nSigurd Carlen, September 2019.\nReformatted by Holger Nahrstaedt 2020\n.. currentmodule:: skopt\nTo use a different base_estimator, or to create a regressor with different parameters,\nwe can create a regressor object and pass it in as the base estimator.\nThis example uses :class:plots.plot_gaussian_process, which is available\nsince version 0.8.",
"print(__doc__)\n\nimport numpy as np\nnp.random.seed(1234)\nimport matplotlib.pyplot as plt\nfrom skopt.plots import plot_gaussian_process\nfrom skopt import Optimizer",
"Toy example\nLet us assume the following noisy function $f$:",
"noise_level = 0.1\n\n# Our 1D toy problem, this is the function we are trying to\n# minimize\n\n\ndef objective(x, noise_level=noise_level):\n return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))\\\n + np.random.randn() * noise_level\n\n\ndef objective_wo_noise(x):\n return objective(x, noise_level=0)\n\nopt_gp = Optimizer([(-2.0, 2.0)], base_estimator=\"GP\", n_initial_points=5,\n acq_optimizer=\"sampling\", random_state=42)\n\ndef plot_optimizer(res, n_iter, max_iters=5):\n if n_iter == 0:\n show_legend = True\n else:\n show_legend = False\n ax = plt.subplot(max_iters, 2, 2 * n_iter + 1)\n # Plot GP(x) + contours\n ax = plot_gaussian_process(res, ax=ax,\n objective=objective_wo_noise,\n noise_level=noise_level,\n show_legend=show_legend, show_title=True,\n show_next_point=False, show_acq_func=False)\n ax.set_ylabel(\"\")\n ax.set_xlabel(\"\")\n if n_iter < max_iters - 1:\n ax.get_xaxis().set_ticklabels([])\n # Plot EI(x)\n ax = plt.subplot(max_iters, 2, 2 * n_iter + 2)\n ax = plot_gaussian_process(res, ax=ax,\n noise_level=noise_level,\n show_legend=show_legend, show_title=False,\n show_next_point=True, show_acq_func=True,\n show_observations=False,\n show_mu=False)\n ax.set_ylabel(\"\")\n ax.set_xlabel(\"\")\n if n_iter < max_iters - 1:\n ax.get_xaxis().set_ticklabels([])",
"GP kernel",
"fig = plt.figure()\nfig.suptitle(\"Standard GP kernel\")\nfor i in range(10):\n next_x = opt_gp.ask()\n f_val = objective(next_x)\n res = opt_gp.tell(next_x, f_val)\n if i >= 5:\n plot_optimizer(res, n_iter=i-5, max_iters=5)\nplt.tight_layout(rect=[0, 0.03, 1, 0.95])\nplt.plot()",
"Test different kernels",
"from skopt.learning import GaussianProcessRegressor\nfrom skopt.learning.gaussian_process.kernels import ConstantKernel, Matern\n# Gaussian process with Matérn kernel as surrogate model\n\nfrom sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic,\n ExpSineSquared, DotProduct,\n ConstantKernel)\n\n\nkernels = [1.0 * RBF(length_scale=1.0, length_scale_bounds=(1e-1, 10.0)),\n 1.0 * RationalQuadratic(length_scale=1.0, alpha=0.1),\n 1.0 * ExpSineSquared(length_scale=1.0, periodicity=3.0,\n length_scale_bounds=(0.1, 10.0),\n periodicity_bounds=(1.0, 10.0)),\n ConstantKernel(0.1, (0.01, 10.0))\n * (DotProduct(sigma_0=1.0, sigma_0_bounds=(0.1, 10.0)) ** 2),\n 1.0 * Matern(length_scale=1.0, length_scale_bounds=(1e-1, 10.0),\n nu=2.5)]\n\nfor kernel in kernels:\n gpr = GaussianProcessRegressor(kernel=kernel, alpha=noise_level ** 2,\n normalize_y=True, noise=\"gaussian\",\n n_restarts_optimizer=2\n )\n opt = Optimizer([(-2.0, 2.0)], base_estimator=gpr, n_initial_points=5,\n acq_optimizer=\"sampling\", random_state=42)\n fig = plt.figure()\n fig.suptitle(repr(kernel))\n for i in range(10):\n next_x = opt.ask()\n f_val = objective(next_x)\n res = opt.tell(next_x, f_val)\n if i >= 5:\n plot_optimizer(res, n_iter=i - 5, max_iters=5)\n plt.tight_layout(rect=[0, 0.03, 1, 0.95])\n plt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
iurilarosa/thesis
|
codici/Archiviati/numpy/Prove numpy.ipynb
|
gpl-3.0
|
[
"import numpy\nfrom scipy import sparse",
"Array manipulation experiments",
"unimatr = numpy.ones((10,10))\n#unimatr\nduimatr = unimatr*2\n#duimatr\n\nuniarray = numpy.ones((10,1))\n#uniarray\n\ntriarray = uniarray*3\n\nscalarray = numpy.arange(10)\nscalarray = scalarray.reshape(10,1)\n\n#NB reshaping from horizontal to vertical effectively adds\n#a dimension to the array, turning it into an ndarray\n#(it was a plain array before, now it is an (x,1) array, so it can be transposed)\n#NB NUMPY DOES NOT TRANSPOSE A PLAIN 1D ARRAY!\n#scalarray\nscalarray.T\n\nramatricia = numpy.random.randint(2, size=36).reshape((6,6))\nramatricia2 = numpy.random.randint(2, size=36).reshape((6,6))\n\n#WARNING this operation multiplies element by element\n#if the operand has a lower dimension it multiplies each row/column\n# or vertical/horizontal matrix depending on the operand's shape\n\nduimatr*scalarray\n#duimatr*scalarray.T\n#duimatr*duimatr\nramatricia*ramatricia2\n\n#numpy dot instead performs the row-by-column matrix product\n\nnumpy.dot(duimatr,scalarray)\n#numpy.dot(duimatr,duimatr)\nnumpy.dot(ramatricia2,ramatricia)\n\nunimatricia = numpy.ones((3,3))\nrangematricia = numpy.arange(9).reshape((3,3))\nnumpy.dot(rangematricia, rangematricia)\n\nduimatr + scalarray",
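The elementwise `*` behaviour noted in the cell above follows NumPy broadcasting: a (10,1) column vector is repeated across columns, a (1,10) row vector across rows. A minimal check on toy 3×3 shapes:

```python
import numpy as np

m = np.ones((3, 3)) * 2           # a matrix of twos
col = np.arange(3).reshape(3, 1)  # column vector [0, 1, 2]^T
row = np.arange(3).reshape(1, 3)  # row vector [0, 1, 2]

by_col = m * col  # row i of m is scaled by col[i]
by_row = m * row  # column j of m is scaled by row[j]
```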
"Building a 3D matrix with outer products",
"import numpy\nscalarray = numpy.arange(4)\nuniarray = numpy.ones(4)\n\nmatricia = numpy.outer(scalarray, uniarray)\nmatricia\n\nscalarray2 = numpy.reshape(scalarray,(1,4))\nmatricia2 = uniarray*scalarray2\nmatricia2\n\ntensorio = numpy.outer(matricia,scalarray).reshape(4,4,4)\ntensorio\n# a way to create nd arrays (numpy.ndarray)",
"Manipulating 3D numpy matrices",
"tensorio = numpy.ones(1000).reshape(10,10,10)\ntensorio\n# a way to create nd arrays (numpy.ndarray)\n#another way is the direct command\n#tensorio = numpy.ndarray((3,3,3), dtype = int, buffer=numpy.arange(30))\n#it could be useful for the sparse peakmap matrix, even though it is hardly manageable as a dense matrix\n#or\n\n# I FINALLY FOUND OUT HOW TO SET THE DTYPE PROPERLY!! with \"numpy.float32\"!\n#tensorio = numpy.zeros((3,3,3), dtype = numpy.float32)\n#tensorio.dtype\n#tensorio\n\n\nscalarray = numpy.arange(10)\nuniarray = numpy.ones(10)\nscalamatricia = numpy.outer(scalarray,scalarray)\n#scalamatricia\n\n\ntensorio * 2\ntensorio + 2\ntensorio + scalamatricia\n%time tensorio + scalarray\n%time tensorio.__add__(scalarray)\n#they give the same result in comparable times\n",
"Sparse matrix experiments",
"from scipy import sparse\n\n\nramatricia = numpy.random.randint(2, size=25).reshape((5,5))\nramatricia\n\n#efficient by columns\n#sparsamatricia = sparse.csc_matrix(ramatricia)\n#print(sparsamatricia)\n\n#by rows\nsparsamatricia = sparse.csr_matrix(ramatricia)\nprint(sparsamatricia)\n\nsparsamatricia.toarray()\n\nrighe = numpy.array([0,0,0,1,2,3,3,4])\ncolonne = numpy.array([0,0,4,2,1,4,3,0])\nvalori = numpy.ones(righe.size)\nsparsamatricia = sparse.coo_matrix((valori, (righe,colonne)))\n\nprint(sparsamatricia)\n\nsparsamatricia.toarray()",
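One useful detail of the COO construction above: the cell lists the (0, 0) coordinate twice, and scipy sums duplicate entries when the matrix is materialized, so that cell ends up as 2 rather than 1. A small check on a toy matrix:

```python
import numpy as np
from scipy import sparse

rows = np.array([0, 0, 1])   # (0, 0) appears twice
cols = np.array([0, 0, 2])
vals = np.ones(3)
m = sparse.coo_matrix((vals, (rows, cols)), shape=(2, 3))

dense = m.toarray()  # duplicate coordinates are summed on conversion
```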
"Matrix products\nInner products\nSuppose you have two matrices, a and b, as numpy arrays:\n\na*b multiplies element by element (only if a and b have the same shape)\nnumpy.dot(a,b) performs the row-by-column matrix product\n\nNow suppose the two matrices a and b are in scipy.sparse form:\n\na*b performs the row-by-column matrix product\nnumpy.dot(a,b) does not work at all\na.dot(b) performs the row-by-column matrix product",
"#various ways to multiply matrices (the same holds for sums with the + operator)\ndensamatricia = sparsamatricia.toarray()\n\n#dense-dense\nprodottoPerElementiDD = densamatricia*densamatricia\nprodottoMatricialeDD = numpy.dot(densamatricia, densamatricia)\n\n#sparse-dense\nprodottoMatricialeSD = sparsamatricia*densamatricia\nprodottoMatricialeSD2 = sparsamatricia.dot(densamatricia)\n\n#sparse-sparse\nprodottoMatricialeSS = sparsamatricia*sparsamatricia\nprodottoMatricialeSS2 = sparsamatricia.dot(sparsamatricia)\n\n# \"SPARSE\".dot(\"SPARSE OR DENSE\") PERFORMS THE MATRIX PRODUCT\n# \"SPARSE * SPARSE\" PERFORMS THE MATRIX PRODUCT\n\n\nprodottoMatricialeDD - prodottoMatricialeSS\n#nb sums and differences between sparse and dense matrices are ok\n# the matrix product between dense and sparse works like sparse with sparse",
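Since Python 3.5 the `@` operator gives the matrix product uniformly for dense NumPy arrays and scipy sparse matrices, which sidesteps the dense-`*` vs sparse-`*` ambiguity catalogued above. A small check on toy 2×2 matrices:

```python
import numpy as np
from scipy import sparse

a = np.array([[1, 2], [3, 4]])
s = sparse.csr_matrix(a)

dense_mm = a @ a               # matrix product for dense arrays
sparse_mm = (s @ s).toarray()  # same product via the sparse type
elementwise = a * a            # dense * is elementwise, NOT matmul
```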
"Outer products",
"densarray = numpy.array([\"a\",\"b\"],dtype = object)\ndensarray2 = numpy.array([\"c\",\"d\"],dtype = object)\n\nnumpy.outer(densarray,[1,2])\n\ndensamatricia = numpy.array([[1,2],[3,4]])\ndensamatricia2 = numpy.array([[\"a\",\"b\"],[\"c\",\"d\"]], dtype = object)\nnumpy.outer(densamatricia2,densamatricia).reshape(4,2,2)\n\ndensarray1 = numpy.array([0,2])\ndensarray2 = numpy.array([5,0])\ndensamatricia = numpy.array([[1,2],[3,4]])\ndensamatricia2 = numpy.array([[0,2],[5,0]])\n\nnrighe = 2\nncolonne = 2\nnpiani = 4\nprodottoEstDD = numpy.outer(densamatricia,densamatricia2).reshape(npiani,ncolonne,nrighe)\n#prodottoEstDD\n#prodottoEstDD = numpy.dstack((prodottoEstDD[0,:],prodottoEstDD[1,:]))\n\nprodottoEstDD\n\n\nsparsarray1 = sparse.csr_matrix(densarray1)\nsparsarray2 = sparse.csr_matrix(densarray2)\nsparsamatricia = sparse.csr_matrix(densamatricia)\nsparsamatricia2 = sparse.csr_matrix(densamatricia2)\n\nprodottoEstSS = sparse.kron(sparsamatricia,sparsamatricia2).toarray()\n\nprodottoEstSD = sparse.kron(sparsamatricia,densamatricia2).toarray()\nprodottoEstSD\n\n\n\n\n#outer product experiments\n# numpy.outer\n# scipy.sparse.kron\n\n#dense-dense\nprodottoEsternoDD = numpy.outer(densamatricia,densamatricia)\n\n#sparse-dense\nprodottoEsternoSD = sparse.kron(sparsamatricia,densamatricia)\n\n#sparse-sparse\nprodottoEsternoSS = sparse.kron(sparsamatricia,sparsamatricia)\n\n\nprodottoEsternoDD-prodottoEsternoSS\n\n# more outer product experiments\nrarray1 = numpy.random.randint(2, size=4)\nrarray2 = numpy.random.randint(2, size=4)\nprint(rarray1,rarray2)\nramatricia = numpy.outer(rarray1,rarray2)\nunimatricia = numpy.ones((4,4)).astype(int)\n#ramatricia2 = rarray1 * rarray2.T\nprint(ramatricia,unimatricia)\n#print(ramatricia)\n#print(\"and then\")\n#print(ramatricia2)\n\n#sparsarray = sparse.csr_matrix(rarray1)\n#print(sparsarray)\n\n#ramatricia2 = \n\n#my problematic case is that I have a matrix whose nonzero elements are all known,\n#I know how many rows I have (the times), but not how many freq columns I have\nrandomcolonne = numpy.random.randint(10)+1\nramatricia = numpy.random.randint(2, size=10*randomcolonne).reshape((10,randomcolonne))\nprint(ramatricia.shape)\n#ramatricia\nnonzeri = numpy.nonzero(ramatricia)\nndati = len(nonzeri[0])\nndati\nramatricia\n\n#now I try to build the sparse matrix\nprint(ndati)\ndati = numpy.ones(ndati)  #the data array must be 1D and match the coordinate arrays\ndati\ncoordinateRighe = nonzeri[0]\ncoordinateColonne = nonzeri[1]\nsparsamatricia = sparse.coo_matrix((dati,(coordinateRighe,coordinateColonne)))\ndensamatricia = sparsamatricia.toarray()\ndensamatricia",
"Trying to apply operations to an array via coordinate arrays",
"from matplotlib import pyplot\n\nmatrice = numpy.arange(30).reshape(10,3)\nmatrice\n\nrighe = numpy.array([1,0,1,1])\ncolonne = numpy.array([2,0,2,2])\npesi = numpy.array([100,200,300,10])\nprint(righe,colonne)\n\nmatrice[righe,colonne]\n\n\nmatrice[righe,colonne] = (matrice[righe,colonne] + numpy.array([100,200,300,10]))\nmatrice\n\n%matplotlib inline\na = pyplot.imshow(matrice)\n\nnumpy.add.at(matrice, [righe,colonne],pesi)\nmatrice\n\n%matplotlib inline\na = pyplot.imshow(matrice)",
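The distinction exercised above matters whenever the coordinate arrays contain repeated indices: fancy-index assignment (`m[idx] += v`) is buffered, so only one write per duplicate index survives, while `numpy.add.at` is unbuffered and accumulates every contribution. A minimal demonstration on toy arrays:

```python
import numpy as np

idx = np.array([0, 0, 1])        # index 0 appears twice
vals = np.array([1.0, 2.0, 5.0])

a = np.zeros(3)
a[idx] += vals                   # buffered: a[0] keeps only the last write (2.0)

b = np.zeros(3)
np.add.at(b, idx, vals)          # unbuffered: b[0] accumulates 1.0 + 2.0
```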
"Plot experiments",
"from matplotlib import pyplot\n%matplotlib inline\n\n\n##CURRENTLY UNUSED, DO NOT RUN\nx = numpy.random.randint(10,size = 10)\ny = numpy.random.randint(10,size = 10)\npyplot.scatter(x,y, s = 5)\n#nb imshow only works with a 2d array\n\n#visualizing a matrix, apparently dense matrices only\na = pyplot.imshow(densamatricia)\n#a = pyplot.imshow(sparsamatricia)\n#c = pyplot.matshow(densamatricia)\n\n\n#spy, on the other hand, works for sparse matrices too!\npyplot.spy(sparsamatricia,precision=0.01, marker = \".\", markersize=10)\n\n#alternatively, a scatterplot of the coordinates from the dataframe\nb = pyplot.scatter(coordinateColonne,coordinateRighe, s = 2)\n\nimport seaborn\n%matplotlib inline\n\n\nsbRegplot = seaborn.regplot(x=coordinateRighe, y=coordinateColonne, color=\"g\", fit_reg=False)\n\nimport pandas\n\ncoordinateRighe = coordinateRighe.reshape(len(coordinateRighe),1)\ncoordinateColonne = coordinateColonne.reshape(len(coordinateColonne),1)\n#print([coordinateRighe,coordinateColonne])\ncoordinate = numpy.concatenate((coordinateRighe,coordinateColonne),axis = 1)\ncoordinate\n\n\ntabella = pandas.DataFrame(coordinate)\ntabella.columns = [\"righe\", \"colonne\"]\n\n\nsbPlmplot = seaborn.lmplot(x = \"righe\", y = \"colonne\", data = tabella, fit_reg=False)\n\n",
"A simple example of my problem",
"import numpy\nfrom scipy import sparse\nimport multiprocessing\nfrom matplotlib import pyplot\n\n#first I build a matrix of some x positions vs time data in a sparse format\nmatrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)\nx = numpy.nonzero(matrix)[0]\ntimes = numpy.nonzero(matrix)[1]\nweights = numpy.random.rand(x.size)\n\n\n\nimport scipy.io\n\nmint = numpy.amin(times)\nmaxt = numpy.amax(times)\n\nscipy.io.savemat('debugExamples/numpy.mat',{\n 'matrix':matrix, \n 'x':x, \n 'times':times, \n 'weights':weights,\n 'mint':mint,\n 'maxt':maxt,\n \n})\n\ntimes\n\n#then I define an array of y positions\nnStepsY = 5\ny = numpy.arange(1,nStepsY+1)\n\n# trying to iterate\n# SPARSE-HACK VERSION: verified to give the same result as all the simpler methods I tried\n# but it has problems with parallelization\n\nnRows = nStepsY\nnColumns = 80\ny = numpy.arange(1,nStepsY+1)\nimage = numpy.zeros((nRows, nColumns))\ndef itermatrix(ithStep):\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n fakeRow = numpy.zeros(positions.size)\n matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense()\n matrix = numpy.ravel(matrix)\n missColumns = (nColumns-matrix.size)\n zeros = numpy.zeros(missColumns)\n matrix = numpy.concatenate((matrix, zeros))\n return matrix\n\n#for i in numpy.arange(nStepsY):\n# image[i] = itermatrix(i)\n\n#or\nimageSparsed = list(map(itermatrix, range(nStepsY)))\nimageSparsed = numpy.array(imageSparsed)\nscipy.io.savemat('debugExamples/numpyResult.mat', {'imageSparsed':imageSparsed}) \na = pyplot.imshow(imageSparsed, aspect = 10)\npyplot.show()\n\nimport numpy\nfrom scipy import sparse\nimport multiprocessing\nfrom matplotlib import pyplot\n\n#first I build a matrix of some x positions vs time data in a sparse format\nmatrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)\ntimes = numpy.nonzero(matrix)[0]\nfreqs = numpy.nonzero(matrix)[1]\nweights = numpy.random.rand(times.size)\n\n#then I define an array of y positions\nnStepsSpindowns = 5\nspindowns = numpy.arange(1,nStepsSpindowns+1)\n\n\n#BINCOUNT ATTEMPT\n\ndef mapIt(ithStep):\n ncolumns = 80\n image = numpy.zeros(ncolumns)\n\n sdTimed = spindowns[ithStep]*times\n positions = (numpy.round(freqs-sdTimed)+50).astype(int)\n\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n return image\n\n\n%time imageMapped = list(map(mapIt, range(nStepsSpindowns)))\nimageMapped = numpy.array(imageMapped)\n\n%matplotlib inline\na = pyplot.imshow(imageMapped, aspect = 10)\n\n# fully vectorized attempt\ndef fullmatrix(nRows, nColumns):\n spindowns = numpy.arange(1,nStepsSpindowns+1)\n image = numpy.zeros((nRows, nColumns))\n\n sdTimed = numpy.outer(spindowns,times)\n freqs3d = numpy.outer(numpy.ones(nStepsSpindowns),freqs)\n weights3d = numpy.outer(numpy.ones(nStepsSpindowns),weights)\n spindowns3d = numpy.outer(spindowns,numpy.ones(times.size))\n positions = (numpy.round(freqs3d-sdTimed)+50).astype(int)\n\n matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(spindowns3d), numpy.ravel(positions)))).todense()\n return matrix\n\n%time image = fullmatrix(nStepsSpindowns, 80)\na = pyplot.imshow(image, aspect = 10)\npyplot.show()",
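The `numpy.bincount(positions, weights)` trick used in the cell above is the core of the histogram approach: it sums the weight of every occurrence of each integer position in a single call, which is exactly what accumulating hits into an image row requires. A standalone sketch with toy values:

```python
import numpy as np

positions = np.array([2, 5, 2, 0])           # integer bins, duplicates allowed
weights = np.array([0.5, 1.0, 0.25, 2.0])

# one image row of 8 columns; row[p] sums the weights of every hit at p
row = np.bincount(positions, weights, minlength=8)
```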
"Debug comparisons!",
"#comparison with the ORIGINAL matlab code\nimmagineOrig = scipy.io.loadmat('debugExamples/dbOrigResult.mat')['binh_df0']\na = pyplot.imshow(immagineOrig[:,0:80], aspect = 10)\npyplot.show()\n\n#BINCOUNT ATTEMPT\n\ndef mapIt(ithStep):\n ncolumns = 80\n image = numpy.zeros(ncolumns)\n\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n return image\n\n\n%time imageMapped = list(map(mapIt, range(nStepsY)))\nimageMapped = numpy.array(imageMapped)\n\n%matplotlib inline\na = pyplot.imshow(imageMapped, aspect = 10)\n\n# numpy vectorization attempt (apply_along_axis)\nnrows = nStepsY\nncolumns = 80\nmatrix = numpy.zeros(nrows*ncolumns).reshape(nrows,ncolumns)\n\ndef applyIt(image):\n ithStep = 1\n image = numpy.zeros(ncolumns)\n\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n #print(positions)\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n \n return image\n\n\nimageApplied = numpy.apply_along_axis(applyIt,1,matrix)\na = pyplot.imshow(imageApplied, aspect = 10)\n\n# fully vectorized attempt\ndef fullmatrix(nRows, nColumns):\n y = numpy.arange(1,nStepsY+1)\n image = numpy.zeros((nRows, nColumns))\n\n yTimed = numpy.outer(y,times)\n x3d = numpy.outer(numpy.ones(nStepsY),x)\n weights3d = numpy.outer(numpy.ones(nStepsY),weights)\n y3d = numpy.outer(y,numpy.ones(x.size))\n positions = (numpy.round(x3d-yTimed)+50).astype(int)\n\n matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(y3d), numpy.ravel(positions)))).todense()\n return matrix\n\n%time image = fullmatrix(nStepsY, 80)\na = pyplot.imshow(image, aspect = 10)\npyplot.show()\n\nimageMapped = list(map(itermatrix, range(nStepsY)))\nimageMapped = numpy.array(imageMapped)\na = pyplot.imshow(imageMapped, aspect = 10)\npyplot.show()\n\n# numpy.put attempt\n\nnStepsY = 5\n\ndef mapIt(ithStep):\n ncolumns = 80\n image = numpy.zeros(ncolumns)\n\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n return image\n\n\n%time imagePutted = list(map(mapIt, range(nStepsY)))\nimagePutted = numpy.array(imagePutted)\n\n%matplotlib inline\na = pyplot.imshow(image, aspect = 10)\npyplot.show()",
"Documentation\n\nAssorted numpy array material\n\n\nInteresting question on matrix creation (stackoverflow)\nND array creation\nThe add operator, equivalent to a+b for ND arrays\nData types\nTensor product (still to review)\nRandom ND array generation\nRandom integer 1D array generation (e.g. binary)\nGives the coordinates of all nonzero elements\nConcatenate: joins two arrays into a single array (appends the second after the first in the same array, which can then be reshaped if you want to build a matrix from several arrays)\nStack: joins two arrays, possibly better than concatenate, possibly stacking them into a matrix\n\n\nSparse matrix material\n\n\nSparse creation (nb see the final example for my case)\nRandom sparse creation\nThe form in which it computes the outer product\n\n\nScatterplot material and the like\n\n\nScatterplot (nb mind the coordinates)\nMatrix plots (imshow)\nimshow tutorial\nSpy PLOTS SPARSE MATRICES!\nPlots with seaborn: regplot) (simpler, like pyplot it only needs two coordinate arrays), lmplot (needs a dataframe), pairplot (shouldn't be needed)\nScatterplot example with lmplot (see also siscomp)",
"ramatricia = numpy.random.randint(10, size=120).reshape((5,4,3,2))\nprint(ramatricia[0,0,:,:])\n#print(ramatricia)\nramatricia\n\nprint(ramatricia.reshape(60,2).shape[0])"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keras-team/keras-io
|
examples/keras_recipes/ipynb/debugging_tips.ipynb
|
apache-2.0
|
[
"Keras debugging tips\nAuthor: fchollet<br>\nDate created: 2020/05/16<br>\nLast modified: 2020/05/16<br>\nDescription: Four simple tips to help you debug your Keras code.\nIntroduction\nIt's generally possible to do almost anything in Keras without writing code per se:\nwhether you're implementing a new type of GAN or the latest convnet architecture for\nimage segmentation, you can usually stick to calling built-in methods. Because all\nbuilt-in methods do extensive input validation checks, you will have little to no\ndebugging to do. A Functional API model made entirely of built-in layers will work on\nfirst try -- if you can compile it, it will run.\nHowever, sometimes, you will need to dive deeper and write your own code. Here are some\ncommon examples:\n\nCreating a new Layer subclass.\nCreating a custom Metric subclass.\nImplementing a custom train_step on a Model.\n\nThis document provides a few simple tips to help you navigate debugging in these\nsituations.\nTip 1: test each part before you test the whole\nIf you've created any object that has a chance of not working as expected, don't just\ndrop it in your end-to-end process and watch sparks fly. Rather, test your custom object\nin isolation first. This may seem obvious -- but you'd be surprised how often people\ndon't start with this.\n\nIf you write a custom layer, don't call fit() on your entire model just yet. Call\nyour layer on some test data first.\nIf you write a custom metric, start by printing its output for some reference inputs.\n\nHere's a simple example. Let's write a custom layer with a bug in it:",
"import tensorflow as tf\nfrom tensorflow.keras import layers\n\n\nclass MyAntirectifier(layers.Layer):\n def build(self, input_shape):\n output_dim = input_shape[-1]\n self.kernel = self.add_weight(\n shape=(output_dim * 2, output_dim),\n initializer=\"he_normal\",\n name=\"kernel\",\n trainable=True,\n )\n\n def call(self, inputs):\n # Take the positive part of the input\n pos = tf.nn.relu(inputs)\n # Take the negative part of the input\n neg = tf.nn.relu(-inputs)\n # Concatenate the positive and negative parts\n concatenated = tf.concat([pos, neg], axis=0)\n # Project the concatenation down to the same dimensionality as the input\n return tf.matmul(concatenated, self.kernel)\n\n",
"Now, rather than using it in an end-to-end model directly, let's try to call the layer on\nsome test data:\npython\nx = tf.random.normal(shape=(2, 5))\ny = MyAntirectifier()(x)\nWe get the following error:\n...\n 1 x = tf.random.normal(shape=(2, 5))\n----> 2 y = MyAntirectifier()(x)\n...\n 17 neg = tf.nn.relu(-inputs)\n 18 concatenated = tf.concat([pos, neg], axis=0)\n---> 19 return tf.matmul(concatenated, self.kernel)\n...\nInvalidArgumentError: Matrix size-incompatible: In[0]: [4,5], In[1]: [10,5] [Op:MatMul]\nLooks like our input tensor in the matmul op may have an incorrect shape.\nLet's add a print statement to check the actual shapes:",
"\nclass MyAntirectifier(layers.Layer):\n def build(self, input_shape):\n output_dim = input_shape[-1]\n self.kernel = self.add_weight(\n shape=(output_dim * 2, output_dim),\n initializer=\"he_normal\",\n name=\"kernel\",\n trainable=True,\n )\n\n def call(self, inputs):\n pos = tf.nn.relu(inputs)\n neg = tf.nn.relu(-inputs)\n print(\"pos.shape:\", pos.shape)\n print(\"neg.shape:\", neg.shape)\n concatenated = tf.concat([pos, neg], axis=0)\n print(\"concatenated.shape:\", concatenated.shape)\n print(\"kernel.shape:\", self.kernel.shape)\n return tf.matmul(concatenated, self.kernel)\n\n",
"We get the following:\npos.shape: (2, 5)\nneg.shape: (2, 5)\nconcatenated.shape: (4, 5)\nkernel.shape: (10, 5)\nTurns out we had the wrong axis for the concat op! We should be concatenating neg and\npos alongside the feature axis 1, not the batch axis 0. Here's the correct version:",
"\nclass MyAntirectifier(layers.Layer):\n def build(self, input_shape):\n output_dim = input_shape[-1]\n self.kernel = self.add_weight(\n shape=(output_dim * 2, output_dim),\n initializer=\"he_normal\",\n name=\"kernel\",\n trainable=True,\n )\n\n def call(self, inputs):\n pos = tf.nn.relu(inputs)\n neg = tf.nn.relu(-inputs)\n print(\"pos.shape:\", pos.shape)\n print(\"neg.shape:\", neg.shape)\n concatenated = tf.concat([pos, neg], axis=1)\n print(\"concatenated.shape:\", concatenated.shape)\n print(\"kernel.shape:\", self.kernel.shape)\n return tf.matmul(concatenated, self.kernel)\n\n",
"Now our code works fine:",
"x = tf.random.normal(shape=(2, 5))\ny = MyAntirectifier()(x)\n",
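The axis mix-up diagnosed above is easy to reproduce with plain NumPy: concatenating along axis 0 stacks batches, while axis 1 (the feature axis) is what the antirectifier needs so that the kernel's `(2 * output_dim, output_dim)` shape lines up. A toy-shape check:

```python
import numpy as np

pos = np.zeros((2, 5))  # stand-ins for the two relu outputs
neg = np.zeros((2, 5))

wrong = np.concatenate([pos, neg], axis=0)  # doubles the batch dimension
right = np.concatenate([pos, neg], axis=1)  # doubles the feature dimension

# only `right` can be matmul'd with a (10, 5) kernel
kernel = np.zeros((10, 5))
out = right @ kernel
```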
"Tip 2: use model.summary() and plot_model() to check layer output shapes\nIf you're working with complex network topologies, you're going to need a way\nto visualize how your layers are connected and how they transform the data that passes\nthrough them.\nHere's an example. Consider this model with three inputs and two outputs (lifted from the\nFunctional API\nguide):",
"from tensorflow import keras\n\nnum_tags = 12 # Number of unique issue tags\nnum_words = 10000 # Size of vocabulary obtained when preprocessing text data\nnum_departments = 4 # Number of departments for predictions\n\ntitle_input = keras.Input(\n shape=(None,), name=\"title\"\n) # Variable-length sequence of ints\nbody_input = keras.Input(shape=(None,), name=\"body\") # Variable-length sequence of ints\ntags_input = keras.Input(\n shape=(num_tags,), name=\"tags\"\n) # Binary vectors of size `num_tags`\n\n# Embed each word in the title into a 64-dimensional vector\ntitle_features = layers.Embedding(num_words, 64)(title_input)\n# Embed each word in the text into a 64-dimensional vector\nbody_features = layers.Embedding(num_words, 64)(body_input)\n\n# Reduce sequence of embedded words in the title into a single 128-dimensional vector\ntitle_features = layers.LSTM(128)(title_features)\n# Reduce sequence of embedded words in the body into a single 32-dimensional vector\nbody_features = layers.LSTM(32)(body_features)\n\n# Merge all available features into a single large vector via concatenation\nx = layers.concatenate([title_features, body_features, tags_input])\n\n# Stick a logistic regression for priority prediction on top of the features\npriority_pred = layers.Dense(1, name=\"priority\")(x)\n# Stick a department classifier on top of the features\ndepartment_pred = layers.Dense(num_departments, name=\"department\")(x)\n\n# Instantiate an end-to-end model predicting both priority and department\nmodel = keras.Model(\n inputs=[title_input, body_input, tags_input],\n outputs=[priority_pred, department_pred],\n)\n",
"Calling summary() can help you check the output shape of each layer:",
"model.summary()\n",
"You can also visualize the entire network topology alongside output shapes using\nplot_model:",
"keras.utils.plot_model(model, show_shapes=True)\n",
"With this plot, any connectivity-level error becomes immediately obvious.\nTip 3: to debug what happens during fit(), use run_eagerly=True\nThe fit() method is fast: it runs a well-optimized, fully-compiled computation graph.\nThat's great for performance, but it also means that the code you're executing isn't the\nPython code you've written. This can be problematic when debugging. As you may recall,\nPython is slow -- so we use it as a staging language, not as an execution language.\nThankfully, there's an easy way to run your code in \"debug mode\", fully eagerly:\npass run_eagerly=True to compile(). Your call to fit() will now get executed line\nby line, without any optimization. It's slower, but it makes it possible to print the\nvalue of intermediate tensors, or to use a Python debugger. Great for debugging.\nHere's a basic example: let's write a really simple model with a custom train_step. Our\nmodel just implements gradient descent, but instead of first-order gradients, it uses a\ncombination of first-order and second-order gradients. Pretty trivial so far.\nCan you spot what we're doing wrong?",
"\nclass MyModel(keras.Model):\n def train_step(self, data):\n inputs, targets = data\n trainable_vars = self.trainable_variables\n with tf.GradientTape() as tape2:\n with tf.GradientTape() as tape1:\n preds = self(inputs, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(targets, preds)\n # Compute first-order gradients\n dl_dw = tape1.gradient(loss, trainable_vars)\n # Compute second-order gradients\n d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)\n\n # Combine first-order and second-order gradients\n grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]\n\n # Update weights\n self.optimizer.apply_gradients(zip(grads, trainable_vars))\n\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(targets, preds)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n\n",
"Let's train a one-layer model on MNIST with this custom training loop.\nWe pick, somewhat at random, a batch size of 1024 and a learning rate of 0.1. The general\nidea being to use larger batches and a larger learning rate than usual, since our\n\"improved\" gradients should lead us to quicker convergence.",
"import numpy as np\n\n# Construct an instance of MyModel\ndef get_model():\n inputs = keras.Input(shape=(784,))\n intermediate = layers.Dense(256, activation=\"relu\")(inputs)\n outputs = layers.Dense(10, activation=\"softmax\")(intermediate)\n model = MyModel(inputs, outputs)\n return model\n\n\n# Prepare data\n(x_train, y_train), _ = keras.datasets.mnist.load_data()\nx_train = np.reshape(x_train, (-1, 784)) / 255\n\nmodel = get_model()\nmodel.compile(\n optimizer=keras.optimizers.SGD(learning_rate=1e-2),\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"],\n)\nmodel.fit(x_train, y_train, epochs=3, batch_size=1024, validation_split=0.1)\n",
"Oh no, it doesn't converge! Something is not working as planned.\nTime for some step-by-step printing of what's going on with our gradients.\nWe add various print statements in the train_step method, and we make sure to pass\nrun_eagerly=True to compile() to run our code step-by-step, eagerly.",
"\nclass MyModel(keras.Model):\n def train_step(self, data):\n print()\n print(\"----Start of step: %d\" % (self.step_counter,))\n self.step_counter += 1\n\n inputs, targets = data\n trainable_vars = self.trainable_variables\n with tf.GradientTape() as tape2:\n with tf.GradientTape() as tape1:\n preds = self(inputs, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(targets, preds)\n # Compute first-order gradients\n dl_dw = tape1.gradient(loss, trainable_vars)\n # Compute second-order gradients\n d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)\n\n print(\"Max of dl_dw[0]: %.4f\" % tf.reduce_max(dl_dw[0]))\n print(\"Min of dl_dw[0]: %.4f\" % tf.reduce_min(dl_dw[0]))\n print(\"Mean of dl_dw[0]: %.4f\" % tf.reduce_mean(dl_dw[0]))\n print(\"-\")\n print(\"Max of d2l_dw2[0]: %.4f\" % tf.reduce_max(d2l_dw2[0]))\n print(\"Min of d2l_dw2[0]: %.4f\" % tf.reduce_min(d2l_dw2[0]))\n print(\"Mean of d2l_dw2[0]: %.4f\" % tf.reduce_mean(d2l_dw2[0]))\n\n # Combine first-order and second-order gradients\n grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]\n\n # Update weights\n self.optimizer.apply_gradients(zip(grads, trainable_vars))\n\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(targets, preds)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n\n\nmodel = get_model()\nmodel.compile(\n optimizer=keras.optimizers.SGD(learning_rate=1e-2),\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"],\n run_eagerly=True,\n)\nmodel.step_counter = 0\n# We pass epochs=1 and steps_per_epoch=10 to only run 10 steps of training.\nmodel.fit(x_train, y_train, epochs=1, batch_size=1024, verbose=0, steps_per_epoch=10)\n",
"What did we learn?\n\nThe first order and second order gradients can have values that differ by orders of\nmagnitudes.\nSometimes, they may not even have the same sign.\nTheir values can vary greatly at each step.\n\nThis leads us to an obvious idea: let's normalize the gradients before combining them.",
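"The normalization step can be sketched outside of TensorFlow with plain NumPy. This is a hedged illustration of the idea only (L2-normalize each gradient tensor, then average), not the exact `train_step` code; the toy gradient values are invented to mimic the order-of-magnitude mismatch observed above:

```python
import numpy as np

def l2_normalize(g, eps=1e-12):
    # Scale a gradient tensor to unit L2 norm (eps guards against division by zero)
    return g / (np.linalg.norm(g) + eps)

# Two gradient lists whose entries differ by orders of magnitude, as observed above
dl_dw = [np.array([0.5, -2.0]), np.array([100.0, 3.0])]
d2l_dw2 = [np.array([-300.0, 7.0]), np.array([0.01, -0.02])]

# After normalization, every tensor contributes on the same scale
grads = [0.5 * l2_normalize(w1) + 0.5 * l2_normalize(w2)
         for w1, w2 in zip(d2l_dw2, dl_dw)]
```

Each normalized tensor has unit norm, so neither gradient source can drown out the other — which is exactly what the fix below does with `tf.math.l2_normalize`.",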
"\nclass MyModel(keras.Model):\n def train_step(self, data):\n inputs, targets = data\n trainable_vars = self.trainable_variables\n with tf.GradientTape() as tape2:\n with tf.GradientTape() as tape1:\n preds = self(inputs, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(targets, preds)\n # Compute first-order gradients\n dl_dw = tape1.gradient(loss, trainable_vars)\n # Compute second-order gradients\n d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)\n\n dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]\n d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]\n\n # Combine first-order and second-order gradients\n grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]\n\n # Update weights\n self.optimizer.apply_gradients(zip(grads, trainable_vars))\n\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(targets, preds)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n\n\nmodel = get_model()\nmodel.compile(\n optimizer=keras.optimizers.SGD(learning_rate=1e-2),\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"],\n)\nmodel.fit(x_train, y_train, epochs=5, batch_size=1024, validation_split=0.1)\n",
"Now, training converges! It doesn't work well at all, but at least the model learns\nsomething.\nAfter spending a few minutes tuning parameters, we get to the following configuration\nthat works somewhat well (achieves 97% validation accuracy and seems reasonably robust to\noverfitting):\n\nUse 0.2 * w1 + 0.8 * w2 for combining gradients.\nUse a learning rate that decays linearly over time.\n\nI'm not going to say that the idea works -- this isn't at all how you're supposed to do\nsecond-order optimization (pointers: see the Newton & Gauss-Newton methods, quasi-Newton\nmethods, and BFGS). But hopefully this demonstration gave you an idea of how you can\ndebug your way out of uncomfortable training situations.\nRemember: use run_eagerly=True for debugging what happens in fit(). And when your code\nis finally working as expected, make sure to remove this flag in order to get the best\nruntime performance!\nHere's our final training run:",
"\nclass MyModel(keras.Model):\n def train_step(self, data):\n inputs, targets = data\n trainable_vars = self.trainable_variables\n with tf.GradientTape() as tape2:\n with tf.GradientTape() as tape1:\n preds = self(inputs, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(targets, preds)\n # Compute first-order gradients\n dl_dw = tape1.gradient(loss, trainable_vars)\n # Compute second-order gradients\n d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)\n\n dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]\n d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]\n\n # Combine first-order and second-order gradients\n grads = [0.2 * w1 + 0.8 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]\n\n # Update weights\n self.optimizer.apply_gradients(zip(grads, trainable_vars))\n\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(targets, preds)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n\n\nmodel = get_model()\nlr = learning_rate = keras.optimizers.schedules.InverseTimeDecay(\n initial_learning_rate=0.1, decay_steps=25, decay_rate=0.1\n)\nmodel.compile(\n optimizer=keras.optimizers.SGD(lr),\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"],\n)\nmodel.fit(x_train, y_train, epochs=50, batch_size=2048, validation_split=0.1)\n",
"Tip 4: if your code is slow, run the TensorFlow profiler\nOne last tip -- if your code seems slower than it should be, you're going to want to plot\nhow much time is spent on each computation step. Look for any bottleneck that might be\ncausing less than 100% device utilization.\nTo learn more about TensorFlow profiling, see\nthis extensive guide.\nYou can quickly profile a Keras model via the TensorBoard callback:\n```python\nProfile from batches 10 to 15\ntb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,\n profile_batch=(10, 15))\nTrain the model and use the TensorBoard Keras callback to collect\nperformance profiling data\nmodel.fit(dataset,\n epochs=1,\n callbacks=[tb_callback])\n```\nThen navigate to the TensorBoard app and check the \"profile\" tab."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathLab/RBniCS
|
tutorials/09_advection_dominated/tutorial_advection_dominated_1_rb.ipynb
|
lgpl-3.0
|
[
"Tutorial 09 - Advection Dominated problem\nKeywords: reduced basis method, SUPG\n1. Introduction\nThis tutorial applies the reduced basis method to an advection dominated problem in a two-dimensional domain $\\Omega=(0,1)^2$ shown below:\n<img src=\"data/advection_dominated_1.png\" />\nWe introduce a stabilization technique such as $\\textit{Streamline/Upwind Petrov-Galerkin}$ (SUPG), able to reduce the numerical oscillations in the approximation of the solution of the parametrized advection-diffusion problem:\n$$\n-\\varepsilon(\\boldsymbol{\\mu})\\,\\Delta u(\\boldsymbol{\\mu})+\\beta(\\boldsymbol{\\mu})\\cdot\\nabla u(\\boldsymbol{\\mu})=f(\\boldsymbol{\\mu})\\quad\\text{in }\\,\\Omega(\\boldsymbol{\\mu}),\n$$\nwhere $\\beta(\\boldsymbol{\\mu})$ and $\\varepsilon(\\boldsymbol{\\mu})$ represent the advection and the diffusion term, respectively.\nFor this problem, we consider one parameter $\\mu$, thus $P=1$; it is related to the Péclet number:\n$$\n\\mathbb{P}e_K(\\boldsymbol{\\mu})(x):=\\frac{|\\beta(\\boldsymbol{\\mu})(x)| h_K}{2\\,\\varepsilon(\\boldsymbol{\\mu})(x)}\\quad\\forall x\\in K\\quad\\forall\\boldsymbol{\\mu}\\in\\mathbb{P}.\n$$\nHere $h_K$ represents the diameter of $K\\in\\mathcal{T}_h$, where $\\mathcal{T}_h$ indicates a triangulation of our domain $\\Omega(\\boldsymbol{\\mu})$.\nThe parameter domain is thus given by\n$$\n\\mathbb{P}=[0, 6].\n$$\nIn this problem we consider two approaches:\n1. Offline-Online stabilized,\n2. Offline-only stabilized,\nwhere in the first we apply the SUPG method in both the Offline and Online phases, while in the second it is applied only in the Offline phase.\nIn order to obtain a faster approximation of the problem, we pursue a model reduction by means of a certified reduced basis reduced order method.\n2. 
Parametrized formulation\nLet $u(\\boldsymbol{\\mu})$ be the solution in the domain $\\Omega$.\nThe PDE formulation of the parametrized problem is given by:\n<center>for a given parameter $\\mu=\\boldsymbol{\\mu}\\in\\mathbb{P}$, find $u(\\boldsymbol{\\mu})$ such that</center>\n$$\n\\begin{cases}\n -\\frac{1}{10\\,^{\\boldsymbol{\\mu}}}\\Delta\\,u(\\boldsymbol{\\mu})+(1,1)\\cdot\\nabla u(\\boldsymbol{\\mu})=0 & \\text{in }\\Omega,\\\\\n u(\\boldsymbol{\\mu}) = 0 & \\text{on } \\Gamma_1\\cup\\Gamma_2, \\\\\n u(\\boldsymbol{\\mu}) = 1 & \\text{on } \\Gamma_3\\cup\\Gamma_4.\n\\end{cases}\n$$\n<br>\nThe corresponding weak formulation reads:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, find $u(\\boldsymbol{\\mu})\\in\\mathbb{V}$ such that</center>\n$$a\\left(u(\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right)=f(v;\\boldsymbol{\\mu})\\quad \\forall v\\in\\mathbb{V},$$\nwhere\n\nthe function space $\\mathbb{V}$ is defined as\n$$\n\\mathbb{V} = \\left\\{ v \\in H^1(\\Omega): v|_{\\Gamma_1\\cup\\Gamma_2} = 0,\\ v|_{\\Gamma_3\\cup\\Gamma_4} = 1\\right\\},\n$$\nthe parametrized bilinear form $a(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$a(u,v;\\boldsymbol{\\mu}) = \\int_{\\Omega} \\frac{1}{10\\,^{\\boldsymbol{\\mu}}}\\nabla u \\cdot \\nabla v +\\left(\\partial_xu+\\partial_yu\\right)v\\ d\\boldsymbol{x},$$\nthe parametrized linear form $f(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$f(v; \\boldsymbol{\\mu}) = \\int_{\\Omega} v\\ d\\boldsymbol{x}.$$\n\nFor the $\\textit{Offline-Online stabilized}$ approach we use a different bilinear form $a_{stab}$ instead of $a$;\nwhile in the $\\textit{Offline-only stabilized}$ approach we use the bilinear form $a_{stab}$ during the Offline phase, performing the Online Galerkin projection with respect to the bilinear form $a$,\n\nthe parametrized bilinear stabilized form $a_{stab}(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} 
\\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$a_{stab}(u,v;\\boldsymbol{\\mu}) = a(u,v;\\boldsymbol{\\mu}) + s(u,v;\\boldsymbol{\\mu}),$$\n\nwhere\n$$\n\\begin{align}\n a(u,v;\\boldsymbol{\\mu}) &= \\int_{\\Omega} \\frac{1}{10\\,^{\\boldsymbol{\\mu}}}\\nabla u \\cdot \\nabla v +\\left[(1,1)\\cdot\\nabla u\\right]v\\ d\\boldsymbol{x},\\\\\n s(u,v;\\boldsymbol{\\mu}) &= \\sum_{K\\in\\mathcal{T}_h}\\delta_K\\int_K \n \\left(-\\frac{1}{10\\,^{\\boldsymbol{\\mu}}}\\Delta u+(1,1)\\cdot\\nabla u\\right)\\left(\\frac{h_K}{\\sqrt{2}}(1,1)\\cdot\\nabla v\\right)\\ d\\boldsymbol{x},\n\\end{align}\n$$\nand\n* the parametrized linear form $f_{stab}(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$\nf_{stab}(v;\\boldsymbol{\\mu}) = f(v;\\boldsymbol{\\mu}) + r(v;\\boldsymbol{\\mu})\n$$\nwhere\n$$\n\\begin{align}\n f(v;\\boldsymbol{\\mu}) &= \\int_{\\Omega} v\\ d\\boldsymbol{x}, \\\\\n r(v;\\boldsymbol{\\mu}) &= \\sum_{K\\in\\mathcal{T}_h}\\delta_K\\int_K \\left(\\frac{h_K}{\\sqrt{2}}(1,1)\\cdot\\nabla v\\right)\\ d\\boldsymbol{x}.\n\\end{align}\n$$",
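"As a quick sanity check of the advection-dominated regime, the local Péclet number defined above can be evaluated directly for this problem's data, where $\\varepsilon(\\mu)=10^{-\\mu}$ and $\\beta=(1,1)$. The function name `peclet` and the sample mesh size are ours for illustration, not part of RBniCS:

```python
import math

def peclet(mu, h_K, beta=(1.0, 1.0)):
    # Pe_K = |beta| h_K / (2 eps), with eps = 10**(-mu) as in this tutorial
    eps = 10.0 ** (-mu)
    beta_norm = math.hypot(beta[0], beta[1])
    return beta_norm * h_K / (2.0 * eps)

# For a mesh size h_K = 0.05 the problem moves from diffusion dominated
# to strongly advection dominated as mu sweeps the parameter range [0, 6]
low = peclet(0.0, 0.05)   # ~0.035: no stabilization needed
high = peclet(6.0, 0.05)  # ~3.5e4: SUPG stabilization is essential
```

This makes concrete why the upper end of $\\mathbb{P}=[0,6]$ requires the SUPG terms below.",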
"from dolfin import *\nfrom rbnics import *\nfrom problems import *\nfrom reduction_methods import *",
"3. Affine decomposition\nFor this problem the affine decomposition is straightforward:\n$$a(u,v;\\boldsymbol{\\mu})=\\underbrace{\\frac{1}{10\\,^{\\boldsymbol{\\mu}}}}_{\\Theta^{a}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega}\\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}}_{a_0(u,v)} \\ + \\ \\underbrace{1}_{\\Theta^{a}_1(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega}\\left[(1,1)\\cdot\\nabla u\\right]v \\ d\\boldsymbol{x}}_{a_1(u,v)},$$\n$$f(v; \\boldsymbol{\\mu}) = \\underbrace{1}_{\\Theta^{f}_0(\\boldsymbol{\\mu})} \\underbrace{\\int_{\\Omega}v \\ d\\boldsymbol{x}}_{f_0(v)}.$$\nAdding the following forms, we obtain the affine decomposition for the stabilized approach:\n$$s(u,v;\\boldsymbol{\\mu}) = \\sum_{K\\in\\mathcal{T}_h}\\underbrace{-\\frac{\\delta_K}{10\\,^{\\boldsymbol{\\mu}}}}_{\\Theta^{s}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_K \n \\Delta u\\left(\\frac{h_K}{\\sqrt{2}}(1,1)\\cdot\\nabla v\\right)\\ d\\boldsymbol{x}}_{s_0(u,v)} \\ + \\\n \\sum_{K\\in\\mathcal{T}_h}\\underbrace{\\delta_K}_{\\Theta^{s}_1(\\boldsymbol{\\mu})}\\underbrace{\\int_K \n \\left((1,1)\\cdot\\nabla u\\right)\\left(\\frac{h_K}{\\sqrt{2}}(1,1)\\cdot\\nabla v\\right)\\ d\\boldsymbol{x}}_{s_1(u,v)},$$\n$$r(v; \\boldsymbol{\\mu}) = \\sum_{K\\in\\mathcal{T}_h}\\underbrace{\\delta_K}_{\\Theta^{r}_0(\\boldsymbol{\\mu})} \\underbrace{\\int_K\\left(\\frac{h_K}{\\sqrt{2}}(1,1)\\cdot\\nabla v\\right)\\ d\\boldsymbol{x}}_{r_0(v)}.$$\nWe will implement the numerical discretization of the problem in the class\nclass AdvectionDominated(EllipticCoerciveProblem):\nby specifying the coefficients $\\Theta^{a}_*(\\boldsymbol{\\mu})$ and $\\Theta^{f}_*(\\boldsymbol{\\mu})$ in the method\ndef compute_theta(self, term):\nand the bilinear forms $a_*(u, v)$ and linear forms $f_*(v)$ in\ndef assemble_operator(self, term):",
"@OnlineStabilization()\nclass AdvectionDominated(EllipticCoerciveProblem):\n\n # Default initialization of members\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n EllipticCoerciveProblem.__init__(self, V, **kwargs)\n # ... and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n self.u = TrialFunction(V)\n self.v = TestFunction(V)\n self.dx = Measure(\"dx\")(subdomain_data=subdomains)\n self.ds = Measure(\"ds\")(subdomain_data=boundaries)\n # Store advection and forcing expressions\n self.beta = Constant((1.0, 1.0))\n self.f = Constant(1.0)\n # Store terms related to stabilization\n self.delta = 0.5\n self.h = CellDiameter(V.mesh())\n\n # Return custom problem name\n def name(self):\n return \"AdvectionDominated1RB\"\n\n # Return stability factor\n def get_stability_factor_lower_bound(self):\n return 1.\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n def compute_theta(self, term):\n mu = self.mu\n if term == \"a\":\n theta_a0 = 10.0**(- mu[0])\n theta_a1 = 1.0\n if self.stabilized:\n delta = self.delta\n theta_a2 = - delta * 10.0**(- mu[0])\n theta_a3 = delta\n else:\n theta_a2 = 0.0\n theta_a3 = 0.0\n return (theta_a0, theta_a1, theta_a2, theta_a3)\n elif term == \"f\":\n theta_f0 = 1.0\n if self.stabilized:\n delta = self.delta\n theta_f1 = delta\n else:\n theta_f1 = 0.0\n return (theta_f0, theta_f1)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the discretization of the affine expansion of the problem operators.\n def assemble_operator(self, term):\n v = self.v\n dx = self.dx\n if term == \"a\":\n u = self.u\n beta = self.beta\n h = self.h\n a0 = inner(grad(u), grad(v)) * dx\n a1 = inner(beta, grad(u)) * v * dx\n a2 = inner(div(grad(u)), h * inner(beta, grad(v))) * dx\n a3 = inner(inner(beta, grad(u)), h 
* inner(beta, grad(v))) * dx\n return (a0, a1, a2, a3)\n elif term == \"f\":\n f = self.f\n beta = self.beta\n h = self.h\n f0 = f * v * dx\n f1 = inner(f, h * inner(beta, grad(v))) * dx\n return (f0, f1)\n elif term == \"k\":\n u = self.u\n k0 = inner(grad(u), grad(v)) * dx\n return (k0,)\n elif term == \"m\":\n u = self.u\n m0 = inner(u, v) * dx\n return (m0,)\n elif term == \"dirichlet_bc\":\n bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 2)]\n return (bc0,)\n elif term == \"inner_product\":\n u = self.u\n x0 = inner(grad(u), grad(v)) * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for assemble_operator().\")",
"4. Main program\n4.1. Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh.ipynb notebook.",
"mesh = Mesh(\"data/square.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/square_physical_region.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/square_facet_region.xml\")",
"4.2. Create Finite Element space (Lagrange P2)",
"V = FunctionSpace(mesh, \"Lagrange\", 2)",
"4.3. Allocate an object of the AdvectionDominated class",
"problem = AdvectionDominated(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(0.0, 6.0)]\nproblem.set_mu_range(mu_range)",
"4.4. Prepare reduction with a reduced basis method",
"reduction_method = ReducedBasis(problem)\nreduction_method.set_Nmax(15)",
"4.5. Perform the offline phase",
"reduction_method.initialize_training_set(100)\nreduced_problem = reduction_method.offline()",
"4.6. Perform an online solve",
"online_mu = (6.0, )\nreduced_problem.set_mu(online_mu)\nreduced_problem.solve(online_stabilization=True)\nreduced_problem.export_solution(filename=\"online_solution_with_stabilization\")\nreduced_problem.export_error(filename=\"online_error_with_stabilization\")\nreduced_problem.solve(online_stabilization=False)\nreduced_problem.export_solution(filename=\"online_solution_without_stabilization\")\nreduced_problem.export_error(filename=\"online_error_without_stabilization\")",
"4.7. Perform an error analysis",
"reduction_method.initialize_testing_set(100)\nreduction_method.error_analysis(online_stabilization=True, filename=\"error_analysis_with_stabilization\")\nreduction_method.error_analysis(online_stabilization=False, filename=\"error_analysis_without_stabilization\")",
"4.8. Perform a speedup analysis",
"reduction_method.speedup_analysis(online_stabilization=True, filename=\"speedup_analysis_with_stabilization\")\nreduction_method.speedup_analysis(online_stabilization=False, filename=\"speedup_analysis_without_stabilization\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adricnet/dfirnotes
|
win5mem-jupyter.ipynb
|
mit
|
[
"dfirNotes Investigation\nInv Plan: Win5mem : WinXP memory image triage\nCase: 20150124BSK : Suspicious IE/Java behaviour on workstation\nInvestigator: Ben S. Knowles (bsk@dfirnotes.org)\nResponse Phase: Identification\nRefs: Problem reported to Helpdesk in ticket 1202, Incident case record 20150124BSK\nDate/times of interest: Memory image acquired 2014-04-17 11:00:53 -0400\nEvidence location: /cases/win5mem/winxp_java6-meterpreter.vmem, VMWare memory image\nInvestigation Plan\nfrom 504.5 (2014) p42\n\nWhich processes are communicating on the network?\nWhich process is likely run by the attacker?\nLook for signs of pivot and identify the destination system(s)\nWhat suspicious process might be root cause?\n(extra credit) Windows triage commands to find this information from a live system.\n\nWork\nUse Volatility imageinfo plugin to check which profile to use and verify Vol can read the memory image. Once that's settled we can build a script for a batch run, process the memory image for our first batch of results, and look at the data with Pandas.",
"!vol.py --plugins=/home/sosift/f/dfirnotes/ -f /cases/win5mem/winxp_java6-meterpreter.vmem --profile WinXPSP2x86 imageinfo\n\n## Get setup to process memory with Volatility, analyse data with Pandas, chart with matplotlib\n## Charting tips from https://datasciencelab.wordpress.com/2013/12/21/beautiful-plots-with-pandas-and-matplotlib/\nimport pandas as pd\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\n\n##\ncase_folder = '/cases/win5mem/'\nmemimage = '/cases/win5mem/winxp_java6-meterpreter.vmem' \nvol_profile = 'WinXPSP2x86' ## use vol.py imageinfo if you don't know this\n\n## Assemble the volatility commands for batch execution in a shell\n## start with sift3 volatility + custom modules sample\nvol24 = '/usr/bin/vol.py --plugins=/home/sosift/f/dfirnotes/ ' \nvol_cmd = vol24 + '-f ' + memimage + ' --profile=' + vol_profile\n\n## Configure plugins and output formats, completion flags:\nvol_cmd_ps = vol_cmd + ' pscsv --output=csv ' + '> ' + case_folder + 'ps.csv' + ' && echo PS CSV Done!'\nvol_cmd_conns = vol_cmd + ' connscan ' + '> ' + case_folder + 'connscan.txt' + ' && echo Connscan Done!'\n\nvol_script = case_folder + 'volscript'\n\n## Text mode ('w'), not 'wb': we write str, not bytes (Python 3)\nwith open(vol_script, 'w') as f:\n f.write(vol_cmd_ps+'\\n')\n f.write(vol_cmd_conns)\n\n! /bin/sh /cases/win5mem/volscript",
"Batch processing is complete. Let's pull our results into Pandas DataFrames so we can take a look, starting with the processes CSV file from the demo pscsv plugin. Pandas imports CSV easily: we tell it which column holds the date/time data on import, and then set the PID number field as our index. We use the Pandas df.info() function to see a summary of what we imported before continuing.",
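"The import pattern can be demonstrated on a tiny in-memory CSV (the rows below are synthetic, not the real case data). One subtlety worth noting: `set_index` returns a new DataFrame, so the result must be assigned back for the index to stick:

```python
import io
import pandas as pd

csv_text = """Pid,Process,Created
4,System,2014-04-17 10:00:00
2576,iexplore.exe,2014-04-17 10:45:12
"""

# parse_dates converts the Created column to datetime64 on import
procs_demo = pd.read_csv(io.StringIO(csv_text), parse_dates=["Created"])
procs_demo = procs_demo.set_index("Pid")  # assign back, or pass inplace=True
```

With the datetime dtype in place, time-based filtering and sorting work directly on the Created column.",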
"procs = pd.read_csv('/cases/win5mem/ps.csv', parse_dates=['Created'])\nprocs.set_index(['Pid'])\nprocs.info()",
"Here's a quick histogram of processes by process name. Only svchost, Java, VMware, and Internet Explorer have more than one instance.",
"# Create a figure of given size\nfig = plt.figure(figsize=(12,8))\n\n# Add a subplot\nax = fig.add_subplot(111)\n# Remove grid lines (dotted lines inside plot)\nax.grid(False)\n# Remove plot frame\nax.set_frame_on(False)\n# Pandas trick: remove weird dotted line on axis\n#ax.lines[0].set_visible(False)\n\n# Set title\nttl = title='Process Counts'\n# Set color transparency (0: transparent; 1: solid)\na = 0.7\n# Create a colormap\ncustomcmap = [(x/24.0, x/48.0, 0.05) for x in range(len(procs))]\n## chart the data frame with these params\nprocs['Process'].sort_index().value_counts().plot(kind='barh', title=ttl, ax=ax, alpha=a)\nplt.savefig('Process Counts.png', bbox_inches='tight', dpi=300)",
"Pandas can handle fixed width text tables almost as adroitly as CSV using the read_fwf function. We use it to load in the output of the standard Volatility connscan, set the Pid field as our index, and check import with info(). \n(FIXME) We need to get rid of one null line that is an import artifact.",
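"A minimal sketch of the same pattern on a synthetic fixed-width table (the column names mimic connscan output, but the offsets, addresses, and PIDs are invented), including the `dropna` step that removes fully empty artifact rows like the one flagged in the FIXME:

```python
import io
import pandas as pd

# Whitespace-aligned columns, as in Volatility's fixed-width text output
fwf_text = (
    "Offset      Local Address      Remote Address     Pid\n"
    "0x01a2b3c0  10.0.0.5:1062      192.168.1.9:1337   2576\n"
    "0x01a2c4d0  10.0.0.5:1063      93.184.216.34:80   3156\n"
)

# read_fwf infers the column boundaries from the aligned whitespace
conns_demo = pd.read_fwf(io.StringIO(fwf_text))
conns_demo = conns_demo.dropna(how="all")  # drop any all-null import artifacts
```

`dropna(how=\"all\")` only removes rows where every field is null, so real records with a single missing value are kept.",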
"conns = pd.read_fwf('/cases/win5mem/connscan.txt')  ## the batch script wrote connscan.txt\nconns.set_index(['Pid'])\nconns.info()",
"Here is a quick histogram of the remote IP addresses in use, including the port numbers. Reviewing the x-axis we see common web service ports (80 and 443), Windows service ports (139), and some less obvious ones. High ports 1337, 4444, and 1648 may all be worth followup as they are less expected on a Windows XP system than the first set.",
"# Create a figure of given size\nfig = plt.figure(figsize=(12,8))\n\n# Add a subplot\nax = fig.add_subplot(111)\n# Remove grid lines (dotted lines inside plot)\nax.grid(False)\n# Remove plot frame\nax.set_frame_on(False)\n# Pandas trick: remove weird dotted line on axis\n#ax.lines[0].set_visible(False)\n\n# Set title\nttl = title='Remote Connections'\n# Set color transparency (0: transparent; 1: solid)\na = 0.7\n# Create a colormap\ncustomcmap = [(x/24.0, x/48.0, 0.05) for x in range(len(procs))]\n## chart the data frame with these params\n\nconns['Remote Address'].sort_index().value_counts().plot(kind='barh', title=ttl, ax=ax, alpha=a)\n\nplt.savefig('Remote Connections.png', bbox_inches='tight', dpi=300)",
"Let's slice out just those IE processes and see who they were talking to. We pull the process IDs from the process data and use them to look for matching connections in the connscan data.",
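"The slice-and-match step can be sketched with toy frames (PIDs and addresses invented for illustration). `Series.isin` avoids hard-coding a single PID and sidesteps the string-vs-integer dtype mismatch that a literal comparison like `conns.Pid == '2576'` can hit:

```python
import pandas as pd

procs_demo = pd.DataFrame({"Pid": [4, 2576, 3156],
                           "Process": ["System", "iexplore.exe", "java.exe"]})
conns_demo = pd.DataFrame({"Pid": [2576, 3156],
                           "Remote Address": ["192.168.1.9:1337", "93.184.216.34:80"]})

# Pull the PIDs of the process of interest, then keep only matching connections
ie_pids = procs_demo.loc[procs_demo.Process == "iexplore.exe", "Pid"]
ie_conns = conns_demo[conns_demo.Pid.isin(ie_pids)]
```

The same two lines work for java.exe or any other process name, so the lookup generalizes beyond IE.",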
"procs[procs.Process==\"iexplore.exe\"]\n\n## not all processes have connections, but this one does\nie_conns = conns[conns.Pid == '2576']\n\n# Create a figure of given size\nfig = plt.figure(figsize=(12,8))\n\n# Add a subplot\nax = fig.add_subplot(111)\n# Remove grid lines (dotted lines inside plot)\nax.grid(False)\n# Remove plot frame\nax.set_frame_on(False)\n# Pandas trick: remove weird dotted line on axis\n#ax.lines[0].set_visible(False)\n\n# Set title\nttl = title='IE Remote Connections'\n# Set color transparency (0: transparent; 1: solid)\na = 0.7\n# Create a colormap\ncustomcmap = [(x/24.0, x/48.0, 0.05) for x in range(len(procs))]\n## chart the data frame with these params\n\nconns['Remote Address'].sort_index().value_counts()\nie_conns['Remote Address'].sort_index().value_counts().plot(kind='barh', title=ttl, ax=ax, alpha=a)\n\nplt.savefig('IE Remote Connections.png', bbox_inches='tight', dpi=300)\n",
"We can see that IE was talking to several Internet addresses on web service ports and one local (RFC1918) address on 1337. And Java?",
"## not all processes have connections, this one does\njava_conns = conns[conns.Pid=='3156']\njava_conns['Remote Address'].sort_index().value_counts().plot(kind='bar')",
"One Java process was also communicating with the unknown 1337 service on the local network.\nResults\nComplete Process List and Connection List:",
"procs\n\nconns",
"Suspicious Processes\nInternet Explorer and Java processes were communicating with an unidentified service on a local network host. Those processes and the host they were communicating with are worth further investigation to get to the bottom of the suspicious activity in the evidence presented.",
"procs[procs.Process==\"iexplore.exe\"]\n\nprocs[procs.Process==\"java.exe\"]",
"Conclusion\nThere are definite signs of suspicious activity in the evidence gathered so far. Recommend proceeding with response efforts in accordance with the IRP: Contain the desktop system and gather more evidence from other sources."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aattaran/Machine-Learning-with-Python
|
MNIST/0410 - MNIST Project 6 - The ROC Curve/MNIST.ipynb
|
bsd-3-clause
|
[
"Classification Based Machine Learning Algorithm\nAn introduction to machine learning with scikit-learn\nScikit-learn Definition:\nSupervised learning, in which the data comes with additional attributes that we want to predict. This problem can be either:\n\n\nClassification: samples belong to two or more classes and we want to learn from already labeled data how to predict the class of unlabeled data. An example of classification problem would be the handwritten digit recognition example, in which the aim is to assign each input vector to one of a finite number of discrete categories. Another way to think of classification is as a discrete (as opposed to continuous) form of supervised learning where one has a limited number of categories and for each of the n samples provided, one is to try to label them with the correct category or class.\n\n\nRegression: if the desired output consists of one or more continuous variables, then the task is called regression. An example of a regression problem would be the prediction of the length of a salmon as a function of its age and weight.\n\n\nMNIST dataset - a set of 70,000 small images of digits handwritten. You can read more via The MNIST Database\n\nDownloading the MNIST dataset",
"import numpy as np\n\n## fetch_mldata was removed from scikit-learn; fetch_openml is the current loader\nfrom sklearn.datasets import fetch_openml\nmnist = fetch_openml('mnist_784', version=1, as_frame=False)\nmnist.target = mnist.target.astype(np.uint8)  ## targets arrive as strings\n\nmnist\n\nlen(mnist['data'])",
"Visualisation",
"X, y = mnist['data'], mnist['target']\n\nX\n\ny\n\nX[69999]\n\ny[69999]\n\nX.shape\n\ny.shape\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n_ = X[1000]\n_image = _.reshape(28, 28)\nplt.imshow(_image);\n\ny[1000]",
"Exercise: Locating the number 4 and plot the image",
"type(y)\n\ny == 4\n\nnp.where(y==4)\n\ny[24754]\n\n_ = X[24754]\n_image = _.reshape(28, 28)\nplt.imshow(_image);",
"Splitting the train and test sets",
"num_split = 60000\n\nX_train, X_test, y_train, y_test = X[:num_split], X[num_split:], y[:num_split], y[num_split:]",
"Tips: Typically we shuffle the training set. This ensures the training set is randomised and your data distribution is consistent. However, shuffling is a bad idea for time series data.\nShuffling the dataset\nAlternative Method",
"import numpy as np\n\nshuffle_index = np.random.permutation(num_split)\nX_train, y_train = X_train[shuffle_index], y_train[shuffle_index]",
"Training a Binary Classifier\nTo simplify our problem, we will make this an exercise of \"zero\" or \"non-zero\", making it a two-class problem.\nWe need to first convert our target to 0 or non zero.",
"y_train_0 = (y_train == 0)\n\ny_train_0\n\ny_test_0 = (y_test == 0)\n\ny_test_0",
"At this point we can pick any classifier and train it. This is the iterative part of choosing and testing all the classifiers and tuning the hyper parameters\n\nSGDClassifier\nTraining",
"from sklearn.linear_model import SGDClassifier\n\nclf = SGDClassifier(random_state = 0)\nclf.fit(X_train, y_train_0)",
"Prediction",
"clf.predict(X[1000].reshape(1, -1))",
"Performance Measures\nMeasuring Accuracy Using Cross-Validation\nStratifiedKFold\nLet's try the StratifiedKFold stratified sampling to create multiple folds. At each iteration, the classifier is cloned, trained on the training folds, and used to make predictions on the test fold.\nStratifiedKFold utilises the stratified sampling concept:\n\nThe population is divided into homogeneous subgroups called strata\nThe right number of instances is sampled from each stratum\nTo guarantee that the test set is representative of the population",
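"The stratification guarantee is easy to verify on a small imbalanced toy set (synthetic labels, not MNIST): with 90 negatives and 10 positives, every test fold of a 5-fold stratified split carries exactly the 90/10 class ratio:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([0] * 90 + [1] * 10)      # 90/10 class imbalance
X = np.arange(len(y)).reshape(-1, 1)   # dummy features

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
# every test fold holds 20 samples, exactly 2 of them positive
ratios = [y[test_idx].mean() for _, test_idx in skf.split(X, y)]
```

A plain KFold gives no such guarantee — on heavily skewed data a fold can easily end up with zero positives.",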
"from sklearn.model_selection import StratifiedKFold\nfrom sklearn.base import clone\nclf = SGDClassifier(random_state=0)\n\n# random_state only takes effect when shuffle=True (newer scikit-learn raises an error otherwise)\nskfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=100)\n\nfor train_index, test_index in skfolds.split(X_train, y_train_0):\n    clone_clf = clone(clf)\n    X_train_fold = X_train[train_index]\n    y_train_folds = (y_train_0[train_index])\n    X_test_fold = X_train[test_index]\n    y_test_fold = (y_train_0[test_index])\n    \n    clone_clf.fit(X_train_fold, y_train_folds)\n    y_pred = clone_clf.predict(X_test_fold)\n    n_correct = sum(y_pred == y_test_fold)\n    print(\"{0:.4f}\".format(n_correct / len(y_pred)))",
"cross_val_score using K-fold Cross-Validation\nK-fold cross-validation splits the training set into K folds, then makes predictions on each fold using a model trained on the remaining folds, and evaluates them.",
"from sklearn.model_selection import cross_val_score\n\ncross_val_score(clf, X_train, y_train_0, cv=3, scoring='accuracy')",
"Exercise:\nWhat if you would like to perform 10-fold CV test? How would you do that",
"cross_val_score(clf, X_train, y_train_0, cv=10, scoring='accuracy')",
"Danger of Blindly Applying Evaluator As a Performance Measure\nLet's check against a dumb classifier",
"1 - sum(y_train_0) / len(y_train_0)",
"A simple check shows that about 90.1% of the images are not zero. Any time you guess the image is not zero, you will be right about 90.1% of the time. \nBear this in mind when you are dealing with skewed datasets. Because of this, accuracy is generally not the preferred performance measure for classifiers.\nConfusion Matrix",
"from sklearn.model_selection import cross_val_predict\n\ny_train_pred = cross_val_predict(clf, X_train, y_train_0, cv=3)\n\nfrom sklearn.metrics import confusion_matrix\n\nconfusion_matrix(y_train_0, y_train_pred)",
"Each row: actual class\nEach column: predicted class\nFirst row: Non-zero images, the negative class:\n* 53360 were correctly classified as non-zeros. True negatives. \n* The remaining 717 were wrongly classified as 0s. False positives\nSecond row: The images of zeros, the positive class:\n* 395 were incorrectly classified as non-zeros. False negatives\n* 5528 were correctly classified as 0s. True positives\n<img src=\"img\\confusion matrix.jpg\">\nPrecision\nPrecision measures the accuracy of positive predictions. Also called the precision of the classifier\n$$\\textrm{precision} = \\frac{\\textrm{True Positives}}{\\textrm{True Positives} + \\textrm{False Positives}}$$\n<img src=\"img\\precision.jpg\">",
"from sklearn.metrics import precision_score, recall_score",
"Note the result here may vary from the video as the results from the confusion matrix are different each time you run it.",
"precision_score(y_train_0, y_train_pred) # 5618 / (574 + 5618)\n\n5618 / (574 + 5618)",
"Recall\nPrecision is typically used together with recall (also called sensitivity or the true positive rate): the ratio of positive instances that are correctly detected by the classifier.\n$$\\textrm{recall} = \\frac{\\textrm{True Positives}}{\\textrm{True Positives} + \\textrm{False Negatives}}$$\n<img src=\"img\\recall.jpg\">\nNote the result here may vary from the video as the results from the confusion matrix are different each time you run it.",
"recall_score(y_train_0, y_train_pred) # 5618 / (305 + 5618)\n\n5618 / (305 + 5618)",
"F1 Score\n$F_1$ score is the harmonic mean of precision and recall. Regular mean gives equal weight to all values. Harmonic mean gives more weight to low values.\n$$F_1=\\frac{2}{\\frac{1}{\\textrm{precision}}+\\frac{1}{\\textrm{recall}}}=2\\times \\frac{\\textrm{precision}\\times \\textrm{recall}}{\\textrm{precision}+ \\textrm{recall}}=\\frac{TP}{TP+\\frac{FN+FP}{2}}$$\nThe $F_1$ score favours classifiers that have similar precision and recall.",
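To see why the harmonic mean punishes a lopsided precision/recall pair far more than the regular mean, compare the two on toy numbers (illustrative values, not the scores computed below):

```python
precision, recall = 0.95, 0.10          # very precise, but misses most positives

arithmetic = (precision + recall) / 2
f1 = 2 * precision * recall / (precision + recall)

# The arithmetic mean looks respectable; the F1 score does not
print(round(arithmetic, 3), round(f1, 3))   # 0.525 0.181
```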
"from sklearn.metrics import f1_score",
"Note the result here may vary from the video as the results from the confusion matrix are different each time you run it.",
"f1_score(y_train_0, y_train_pred)",
"Precision / Recall Tradeoff\nIncreasing precision reduces recall and vice versa\n<img src=\"img\\precision-recall.png\">\nOur classifier is designed to pick up zeros.\n12 observations\n\nCentral Arrow\nSuppose the decision threshold is positioned at the central arrow: \n* We get 4 true positives (We have 4 zeros to the right of the central arrow)\n* 1 false positive, which is actually a seven.\nAt this threshold, the precision is $\\frac{4}{5}=80\\%$\nHowever, out of the 6 zeros, the classifier only picked up 4. The recall is $\\frac{4}{6}=67\\%$\n\nRight Arrow\n\nWe get 3 true positives\n0 false positives\n\nAt this threshold, the precision is $\\frac{3}{3}=100\\%$\nHowever, out of the 6 zeros, the classifier only picked up 3. The recall is $\\frac{3}{6}=50\\%$\n\nLeft Arrow\n\nWe get 6 true positives\n2 false positives\n\nAt this threshold, the precision is $\\frac{6}{8}=75\\%$\nOut of the 6 zeros, the classifier picked up all 6. The recall is $\\frac{6}{6}=100\\%$",
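The three arrow positions can be double-checked with a direct count; the TP/FP/FN numbers below are taken straight from the description above:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from raw confusion-matrix counts."""
    return tp / (tp + fp), tp / (tp + fn)

# central arrow: 4 TP, 1 FP, 2 zeros missed (FN)
assert precision_recall(4, 1, 2) == (4/5, 4/6)
# right arrow: 3 TP, 0 FP, 3 FN -> precision goes up, recall goes down
assert precision_recall(3, 0, 3) == (1.0, 0.5)
# left arrow: 6 TP, 2 FP, 0 FN -> recall goes up, precision goes down
assert precision_recall(6, 2, 0) == (0.75, 1.0)
```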
"clf = SGDClassifier(random_state=0)\nclf.fit(X_train, y_train_0)\n\ny[1000]\n\ny_scores = clf.decision_function(X[1000].reshape(1, -1))\ny_scores\n\nthreshold = 0\n\ny_some_digits_pred = (y_scores > threshold)\n\ny_some_digits_pred\n\nthreshold = 40000\ny_some_digits_pred = (y_scores > threshold)\ny_some_digits_pred\n\ny_scores = cross_val_predict(clf, X_train, y_train_0, cv=3, method='decision_function')\n\nplt.figure(figsize=(12,8)); plt.hist(y_scores, bins=100);",
"With the decision scores, we can compute precision and recall for all possible thresholds using the precision_recall_curve() function:",
"from sklearn.metrics import precision_recall_curve\n\nprecisions, recalls, thresholds = precision_recall_curve(y_train_0, y_scores)\n\ndef plot_precision_recall_vs_threshold(precisions, recalls, thresholds):\n plt.plot(thresholds, precisions[:-1], \"b--\", label=\"Precision\")\n plt.plot(thresholds, recalls[:-1], \"g--\", label=\"Recall\")\n plt.xlabel(\"Threshold\")\n plt.legend(loc=\"upper left\")\n plt.ylim([-0.5,1.5]) \n\nplt.figure(figsize=(12,8)); \nplot_precision_recall_vs_threshold(precisions, recalls, thresholds)\nplt.show()",
"With this chart, you can select the threshold value that gives you the best precision/recall tradeoff for your task.\nSome tasks may call for higher precision (accuracy of positive predictions), such as designing a classifier that filters out adult content to protect kids. This requires the classifier to set a high bar before allowing any content through to children.\nSome tasks may call for higher recall (ratio of positive instances that are correctly detected by the classifier), such as detecting shoplifters/intruders in surveillance images - anything that remotely resembles a \"positive\" instance should be picked up.\n\nOne can also plot precision against recall to assist with the threshold selection",
"plt.figure(figsize=(12,8)); \nplt.plot(precisions, recalls);\nplt.xlabel('recalls');\nplt.ylabel('precisions');\nplt.title('PR Curve: precisions/recalls tradeoff');",
"Setting High Precision\nLet's aim for 90% precision.",
"len(precisions)\n\nlen(thresholds)\n\nplt.figure(figsize=(12,8)); \nplt.plot(thresholds, precisions[1:]);\n\nidx = len(precisions[precisions < 0.9])\n\nthresholds[idx]\n\ny_train_pred_90 = (y_scores > 21454)\n\nprecision_score(y_train_0, y_train_pred_90)\n\nrecall_score(y_train_0, y_train_pred_90)",
"Setting Higher Precision\nLet's aim for 99% precision.",
"idx = len(precisions[precisions < 0.99])",
"The threshold is then found in the same way as for the 90% precision target above",
"thresholds[idx]\n\ny_train_pred_90 = (y_scores > thresholds[idx])\n\nprecision_score(y_train_0, y_train_pred_90)\n\nrecall_score(y_train_0, y_train_pred_90)",
"Exercise\nHigh recall: find a threshold that gives a recall score > 0.9",
"idx = len(recalls[recalls > 0.9])\n\nthresholds[idx]\n\ny_train_pred_90 = (y_scores > thresholds[idx])\n\nprecision_score(y_train_0, y_train_pred_90)\n\nrecall_score(y_train_0, y_train_pred_90)",
"The Receiver Operating Characteristics (ROC) Curve\nInstead of plotting precision versus recall, the ROC curve plots the true positive rate (another name for recall) against the false positive rate. The false positive rate (FPR) is the ratio of negative instances that are incorrectly classified as positive. It is equal to one minus the true negative rate, which is the ratio of negative instances that are correctly classified as negative.\nThe TNR is also called specificity. Hence the ROC curve plots sensitivity (recall) versus 1 - specificity.\n<img src=\"img\\tnr_and_fpr.png\">",
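Both rates fall straight out of the confusion-matrix counts; a minimal sketch with made-up counts (illustrative numbers, not the MNIST results):

```python
def roc_point(tp, fp, fn, tn):
    """One point on the ROC curve from raw confusion-matrix counts."""
    tpr = tp / (tp + fn)          # true positive rate = recall = sensitivity
    fpr = fp / (fp + tn)          # false positive rate = 1 - specificity
    return fpr, tpr

fpr, tpr = roc_point(tp=50, fp=10, fn=5, tn=90)
print(fpr, tpr)   # fpr = 0.1, tpr ≈ 0.91
```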
"from sklearn.metrics import roc_curve\n\nfpr, tpr, thresholds = roc_curve(y_train_0, y_scores)\n\ndef plot_roc_curve(fpr, tpr, label=None):\n plt.plot(fpr, tpr, linewidth=2, label=label)\n plt.plot([0,1], [0,1], 'k--')\n plt.axis([0, 1, 0, 1])\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title('ROC Curve')\n\nplt.figure(figsize=(12,8)); \nplot_roc_curve(fpr, tpr)\nplt.show();\n\nfrom sklearn.metrics import roc_auc_score\n\nroc_auc_score(y_train_0, y_scores)",
"Use PR curve whenever the positive class is rare or when you care more about the false positives than the false negatives\nUse ROC curve whenever the negative class is rare or when you care more about the false negatives than the false positives\nIn the example above, the ROC curve seems to suggest that the classifier is good. However, when you look at the PR curve, you can see that there is room for improvement.\nModel Comparison\nRandom Forest",
"from sklearn.ensemble import RandomForestClassifier\n\nf_clf = RandomForestClassifier(random_state=0)\n\ny_probas_forest = cross_val_predict(f_clf, X_train, y_train_0,\n cv=3, method='predict_proba')\n\ny_scores_forest = y_probas_forest[:, 1]\nfpr_forest, tpr_forest, threshold_forest = roc_curve(y_train_0, y_scores_forest)\n\nplt.figure(figsize=(12,8)); \nplt.plot(fpr, tpr, \"b:\", label=\"SGD\")\nplot_roc_curve(fpr_forest, tpr_forest, \"Random Forest\")\nplt.legend(loc=\"lower right\")\nplt.show();\n\nroc_auc_score(y_train_0, y_scores_forest)\n\nf_clf.fit(X_train, y_train_0)\n\ny_train_rf = cross_val_predict(f_clf, X_train, y_train_0, cv=3)\n\nprecision_score(y_train_0, y_train_rf) \n\nrecall_score(y_train_0, y_train_rf) \n\nconfusion_matrix(y_train_0, y_train_rf)",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
probml/pyprobml
|
deprecated/gan_mixture_of_gaussians.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/gan_mixture_of_gaussians.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nThis notebook implements a Generative Adversarial Network to fit a synthetic dataset generated from a mixture of Gaussians in 2D.\nThe code was adapted from the ODEGAN code here: https://github.com/deepmind/deepmind-research/blob/master/ode_gan/odegan_mog16.ipynb. The original notebook was created by Chongli Qin.\nSome modifications made by Mihaela Rosca here were also incorporated.\nImports",
"!pip install -q flax\n\nfrom typing import Sequence\nimport matplotlib.pyplot as plt\n\nimport jax\nimport jax.numpy as jnp\n\nimport flax.linen as nn\nfrom flax.training import train_state\nimport optax\n\nimport functools\nimport scipy as sp\nimport math\n\nrng = jax.random.PRNGKey(0)",
"Data Generation\nData is generated from a 2D mixture of Gaussians.",
"@functools.partial(jax.jit, static_argnums=(1,))\ndef real_data(rng, batch_size):\n mog_mean = jnp.array(\n [\n [1.50, 1.50],\n [1.50, 0.50],\n [1.50, -0.50],\n [1.50, -1.50],\n [0.50, 1.50],\n [0.50, 0.50],\n [0.50, -0.50],\n [0.50, -1.50],\n [-1.50, 1.50],\n [-1.50, 0.50],\n [-1.50, -0.50],\n [-1.50, -1.50],\n [-0.50, 1.50],\n [-0.50, 0.50],\n [-0.50, -0.50],\n [-0.50, -1.50],\n ]\n )\n temp = jnp.tile(mog_mean, (batch_size // 16 + 1, 1))\n mus = temp[0:batch_size, :]\n return mus + 0.02 * jax.random.normal(rng, shape=(batch_size, 2))",
"Plotting",
"def plot_on_ax(ax, values, contours=None, bbox=None, xlabel=\"\", ylabel=\"\", title=\"\", cmap=\"Blues\"):\n kernel = sp.stats.gaussian_kde(values.T)\n\n ax.axis(bbox)\n ax.set_aspect(abs(bbox[1] - bbox[0]) / abs(bbox[3] - bbox[2]))\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n ax.set_xticks([])\n ax.set_yticks([])\n\n xx, yy = jnp.mgrid[bbox[0] : bbox[1] : 300j, bbox[2] : bbox[3] : 300j]\n positions = jnp.vstack([xx.ravel(), yy.ravel()])\n\n f = jnp.reshape(kernel(positions).T, xx.shape)\n cfset = ax.contourf(xx, yy, f, cmap=cmap)\n if contours is not None:\n x = jnp.arange(-2.0, 2.0, 0.1)\n y = jnp.arange(-2.0, 2.0, 0.1)\n cx, cy = jnp.meshgrid(x, y)\n new_set = ax.contour(\n cx, cy, contours.squeeze().reshape(cx.shape), levels=20, colors=\"k\", linewidths=0.8, alpha=0.5\n )\n ax.set_title(title)",
"Models and Training\nA multilayer perceptron with the ReLU activation function.",
"class MLP(nn.Module):\n features: Sequence[int]\n\n @nn.compact\n def __call__(self, x):\n for feat in self.features[:-1]:\n x = jax.nn.relu(nn.Dense(features=feat)(x))\n x = nn.Dense(features=self.features[-1])(x)\n return x",
"The loss function for the discriminator is:\n$$L_D(\\phi, \\theta) = \\mathbb{E}_{p^*(x)} g(D_\\phi(x)) + \\mathbb{E}_{q(z)} h(D_\\phi(G_\\theta(z)))$$\nwhere $g(t) = -\\log t$, $h(t) = -\\log(1 - t)$ as in the original GAN.",
"@jax.jit\ndef discriminator_step(disc_state, gen_state, latents, real_examples):\n def loss_fn(disc_params):\n fake_examples = gen_state.apply_fn(gen_state.params, latents)\n real_logits = disc_state.apply_fn(disc_params, real_examples)\n fake_logits = disc_state.apply_fn(disc_params, fake_examples)\n disc_real = -jax.nn.log_sigmoid(real_logits)\n # log(1 - sigmoid(x)) = log_sigmoid(-x)\n disc_fake = -jax.nn.log_sigmoid(-fake_logits)\n return jnp.mean(disc_real + disc_fake)\n\n disc_loss, disc_grad = jax.value_and_grad(loss_fn)(disc_state.params)\n disc_state = disc_state.apply_gradients(grads=disc_grad)\n return disc_state, disc_loss",
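The identity used in the comment above, log(1 − σ(x)) = log σ(−x), is what keeps the fake-example term numerically stable: for large x, 1 − σ(x) underflows to 0 while log σ(−x) stays finite. A quick plain-Python check of the identity (independent of JAX):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_sigmoid(x):
    # stable log(sigmoid(x)) via log1p, split by sign to avoid overflow
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

for x in [-5.0, -0.5, 0.0, 0.5, 5.0]:
    naive = math.log(1.0 - sigmoid(x))   # direct form, fine for moderate x
    stable = log_sigmoid(-x)             # identity form
    assert math.isclose(naive, stable, rel_tol=1e-9)

# For large x the naive form would hit log(0); the stable form is fine
print(log_sigmoid(-40.0))   # ≈ -40.0
```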
"The loss function for the generator is:\n$$L_G(\\phi, \\theta) = \\mathbb{E}_{q(z)} l(D_\\phi(G_\\theta(z)))$$\nwhere $l(t) = -\\log t$ for the non-saturating generator loss.",
"@jax.jit\ndef generator_step(disc_state, gen_state, latents):\n def loss_fn(gen_params):\n fake_examples = gen_state.apply_fn(gen_params, latents)\n fake_logits = disc_state.apply_fn(disc_state.params, fake_examples)\n disc_fake = -jax.nn.log_sigmoid(fake_logits)\n return jnp.mean(disc_fake)\n\n gen_loss, gen_grad = jax.value_and_grad(loss_fn)(gen_state.params)\n gen_state = gen_state.apply_gradients(grads=gen_grad)\n return gen_state, gen_loss",
"Perform a training step by first updating the discriminator parameters $\\phi$ using the gradient $\\nabla_\\phi L_D (\\phi, \\theta)$ and then updating the generator parameters $\\theta$ using the gradient $\\nabla_\\theta L_G (\\phi, \\theta)$.",
"@jax.jit\ndef train_step(disc_state, gen_state, latents, real_examples):\n disc_state, disc_loss = discriminator_step(disc_state, gen_state, latents, real_examples)\n gen_state, gen_loss = generator_step(disc_state, gen_state, latents)\n return disc_state, gen_state, disc_loss, gen_loss\n\nbatch_size = 512\nlatent_size = 32\n\ndiscriminator = MLP(features=[25, 25, 1])\ngenerator = MLP(features=[25, 25, 2])\n\n# Initialize parameters for the discriminator and the generator\nlatents = jax.random.normal(rng, shape=(batch_size, latent_size))\nreal_examples = real_data(rng, batch_size)\ndisc_params = discriminator.init(rng, real_examples)\ngen_params = generator.init(rng, latents)\n\n# Plot real examples\nbbox = [-2, 2, -2, 2]\nplot_on_ax(plt.gca(), real_examples, bbox=bbox, title=\"Data\")\nplt.tight_layout()\nplt.savefig(\"gan_gmm_data.pdf\")\nplt.show()\n\n# Create train states for the discriminator and the generator\nlr = 0.05\ndisc_state = train_state.TrainState.create(\n apply_fn=discriminator.apply, params=disc_params, tx=optax.sgd(learning_rate=lr)\n)\ngen_state = train_state.TrainState.create(apply_fn=generator.apply, params=gen_params, tx=optax.sgd(learning_rate=lr))\n\n# x and y grid for plotting discriminator contours\nx = jnp.arange(-2.0, 2.0, 0.1)\ny = jnp.arange(-2.0, 2.0, 0.1)\nX, Y = jnp.meshgrid(x, y)\npairs = jnp.stack((X, Y), axis=-1)\npairs = jnp.reshape(pairs, (-1, 2))\n\n# Latents for testing generator\ntest_latents = jax.random.normal(rng, shape=(batch_size * 10, latent_size))\n\nnum_iters = 20001\nn_save = 2000\ndraw_contours = False\nhistory = []\n\nfor i in range(num_iters):\n rng_iter = jax.random.fold_in(rng, i)\n data_rng, latent_rng = jax.random.split(rng_iter)\n # Sample minibatch of examples\n real_examples = real_data(data_rng, batch_size)\n # Sample minibatch of latents\n latents = jax.random.normal(latent_rng, shape=(batch_size, latent_size))\n # Update both the generator\n disc_state, gen_state, disc_loss, gen_loss = 
train_step(disc_state, gen_state, latents, real_examples)\n if i % n_save == 0:\n print(f\"i = {i}, Discriminator Loss = {disc_loss}, \" + f\"Generator Loss = {gen_loss}\")\n # Generate examples using the test latents\n fake_examples = gen_state.apply_fn(gen_state.params, test_latents)\n if draw_contours:\n real_logits = disc_state.apply_fn(disc_state.params, pairs)\n disc_contour = -real_logits + jax.nn.log_sigmoid(real_logits)\n else:\n disc_contour = None\n history.append((i, fake_examples, disc_contour, disc_loss, gen_loss))",
"Plot Results\nPlot the data and the examples generated by the generator.",
"# Plot generated examples from history\nfor i, hist in enumerate(history):\n iter, fake_examples, contours, disc_loss, gen_loss = hist\n plot_on_ax(\n plt.gca(),\n fake_examples,\n contours=contours,\n bbox=bbox,\n xlabel=f\"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}\",\n title=f\"Samples at Iteration {iter}\",\n )\n plt.tight_layout()\n plt.savefig(f\"gan_gmm_iter_{iter}.pdf\")\n plt.show()\n\ncols = 3\nrows = math.ceil((len(history) + 1) / cols)\nbbox = [-2, 2, -2, 2]\n\nfig, axs = plt.subplots(rows, cols, figsize=(cols * 3, rows * 3), dpi=200)\naxs = axs.flatten()\n\n# Plot real examples\nplot_on_ax(axs[0], real_examples, bbox=bbox, title=\"Data\")\n\n# Plot generated examples from history\nfor i, hist in enumerate(history):\n iter, fake_examples, contours, disc_loss, gen_loss = hist\n plot_on_ax(\n axs[i + 1],\n fake_examples,\n contours=contours,\n bbox=bbox,\n xlabel=f\"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}\",\n title=f\"Samples at Iteration {iter}\",\n )\n\n# Remove extra plots from the figure\nfor i in range(len(history) + 1, len(axs)):\n axs[i].remove()\nplt.tight_layout()\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
soxofaan/dahuffman
|
examples.ipynb
|
mit
|
[
"import dahuffman",
"Usage\nBasic usage example, where the code table is built based on given symbol frequencies:",
"codec = dahuffman.HuffmanCodec.from_frequencies({'e': 100, 'n':20, 'x':1, 'i': 40, 'q':3})\n\nencoded = codec.encode('exeneeeexniqneieini')\nprint(encoded)\nprint(encoded.hex())\nprint(len(encoded))\n\ncodec.decode(encoded)\n\ncodec.print_code_table()",
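Under the hood, a frequency-based code assigns shorter codes to more frequent symbols by repeatedly merging the two least frequent nodes. A minimal pure-Python sketch of that idea using heapq (an illustration of the principle, not dahuffman's actual implementation):

```python
import heapq

def huffman_code_lengths(freqs):
    """Return {symbol: code length in bits} for a frequency dict."""
    # heap entries: (total_frequency, tiebreaker, {symbol: depth_so_far})
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)           # two least frequent nodes
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

lengths = huffman_code_lengths({'e': 100, 'n': 20, 'x': 1, 'i': 40, 'q': 3})
# More frequent symbols get shorter codes
assert lengths['e'] < lengths['x']
print(lengths)
```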
"You can also \"train\" the codec by providing it data directly:",
"codec = dahuffman.HuffmanCodec.from_data('hello world how are you doing today foo bar lorem ipsum')\n\nlen(codec.encode('do lo er ad od'))",
"Non-string sequences\nUsing dahuffman with sequences of symbols, e.g. country codes:",
"countries = ['FR', 'UK', 'BE', 'IT', 'FR', 'IT', 'GR', 'FR', 'NL', 'BE', 'DE']\ncodec = dahuffman.HuffmanCodec.from_data(countries)\n\nencoded = codec.encode(['FR', 'IT', 'BE', 'FR', 'UK'])\nlen(encoded), encoded.hex()\n\ncodec.decode(encoded)",
"Pre-trained codecs",
"codecs = {\n 'shakespeare': dahuffman.load_shakespeare(),\n 'json': dahuffman.load_json(),\n 'xml': dahuffman.load_xml()\n}\n\ndef try_codecs(data):\n print(\"{n:12s} {s:5d} bytes\".format(n=\"original\", s=len(data)))\n for name, codec in codecs.items():\n try:\n encoded = codec.encode(data)\n except KeyError:\n continue\n print(\"{n:12s} {s:5d} bytes ({p:.1f}%)\".format(n=name, s=len(encoded), p=100.0*len(encoded)/len(data)))\n\n\ntry_codecs(\"\"\"To be, or not to be; that is the question;\n Whether 'tis nobler in the mind to suffer\n The slings and arrows of outrageous fortune,\n Or to take arms against a sea of troubles,\n And by opposing, end them. To die, to sleep\"\"\")\n\ntry_codecs('''{\n \"firstName\": \"John\",\n \"lastName\": \"Smith\",\n \"isAlive\": true,\n \"age\": 27,\n \"children\": [],\n \"spouse\": null\n}''')\n\ntry_codecs('''<?xml version=\"1.0\"?>\n<catalog>\n <book id=\"bk101\">\n <author>Gambardella, Matthew</author>\n <title>XML Developer's Guide</title>\n <price>44.95</price>\n </book>\n <book id=\"bk102\">\n <author>Ralls, Kim</author>\n <title>Midnight Rain</title>\n <price>5.95</price>\n </book>\n</catalog>''')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jdossgollin/CWC_ANN
|
Week03/Time Series_example_Water Pollution_LSTM.ipynb
|
mit
|
[
"from math import sqrt\nfrom numpy import concatenate\nfrom matplotlib import pyplot\nfrom pandas import read_csv\nfrom pandas import DataFrame\nfrom pandas import concat\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\n\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import mean_squared_error",
"The process is broken into three main parts:\n\nTransform raw dataset into something that can be used to model time series:\nPrepare data\nTransform dataset to a supervised learning problem - Identify features and response variable\nDivide into training/test sets\n\n\nFit an LSTM model\nMaking a forecast & interpreting results\n\n1.a. Preparing data",
"# The data set has the date separated into three columns - we want to reformat the date to set it as our index\nfrom datetime import datetime\n\ndef parse(x):\n return datetime.strptime(x, '%Y %m %d %H')\ndataset = read_csv('./raw.csv', parse_dates = [['year', 'month', 'day', 'hour']], index_col=0, date_parser=parse)\ndataset.drop('No', axis=1, inplace=True) #there is an index column \"No\" that we don't need, so we drop it\n\ndataset.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']\ndataset.index.name = 'date'\ndataset.head()\n\ndataset['pollution'].fillna(0, inplace=True)\n# drop the first 24 hours as there are no observations, for the remaining NAs in the dataset, we fill with 0\ndataset = dataset[24:]\n# summarize first 5 rows\nprint(dataset.head(5))\n# save to file\ndataset.to_csv('./pollution.csv') # 1st part done - We have prepared the data",
"1.b. Transforming dataset to a supervised learning problem",
"dataset = read_csv('./pollution.csv', header=0, index_col=0)\nvalues = dataset.values\n# specify columns to plot\ngroups = [0, 1, 2, 3, 5, 6, 7]\ni = 1\n# plot each column\npyplot.figure(figsize=(20,15))\nfor group in groups:\n pyplot.subplot(len(groups), 1, i)\n pyplot.plot(values[:, group])\n pyplot.title(dataset.columns[group], y=0.5, loc='right')\n i += 1\npyplot.show()\n\n# Can do it manually but efficient to create a function to transform a series to a supervised problem\n# The function's key components:\n# -the pandas shift() function and how it can be used to automatically define supervised learning datasets from time series data.\n# -reframe a univariate time series into one-step and multi-step supervised learning problems.\n# -reframe multivariate time series into one-step and multi-step supervised learning problems.\n\ndef series_to_supervised(data, n_in=1, n_out=1, dropnan=True):\n n_vars = 1 if type(data) is list else data.shape[1]\n df = DataFrame(data)\n cols, names = list(), list()\n# input sequence (t-n, ... t-1)\n for i in range(n_in, 0, -1):\n cols.append(df.shift(i))\n names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]\n# forecast sequence (t, t+1, ... 
t+n)\n for i in range(0, n_out):\n cols.append(df.shift(-i))\n if i == 0:\n names += [('var%d(t)' % (j+1)) for j in range(n_vars)]\n else:\n names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]\n# put it all together\n agg = concat(cols, axis=1)\n agg.columns = names\n# drop rows with NaN values\n if dropnan:\n agg.dropna(inplace=True)\n return agg\n\n# integer encode direction\nencoder = LabelEncoder()\nvalues[:,4] = encoder.fit_transform(values[:,4])\n# ensure all data is float\nvalues = values.astype('float32')\n# normalize features\nscaler = MinMaxScaler(feature_range=(0, 1))\nscaled = scaler.fit_transform(values)\n# frame as supervised learning\nreframed = series_to_supervised(scaled, 1, 1)\n# drop columns we don't want to predict\nreframed.drop(reframed.columns[[9,10,11,12,13,14,15]], axis=1, inplace=True)\nreframed.head()\n\ndataset.head()",
"1.c. Divide into train and test sets\nLSTM\nFor LSTM we must transform the input patterns from a 2D array of [samples, features] to a 3D array of [samples, timesteps, features], where timesteps is 1 because we only have one timestep per observation on each row.",
"# Split dataset\nvalues = reframed.values\nn_train_hours = 35039 # 365 days * 24 hours * 4 years - 1 (starts with 0)\ntrain = values[:n_train_hours, :]\ntest = values[n_train_hours:, :]\n# split into input and outputs\ntrain_X, train_y = train[:, :-1], train[:, -1]\ntest_X, test_y = test[:, :-1], test[:, -1]\n# reshape input to be 3D [samples, timesteps, features]\ntrain_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))\ntest_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))\nprint(train_X.shape, train_y.shape, test_X.shape, test_y.shape)",
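The reshape step above is purely mechanical; a tiny NumPy sketch (toy shapes, not the pollution data) showing that wrapping a length-1 timestep axis moves no data:

```python
import numpy as np

X2d = np.arange(12.0).reshape(4, 3)             # 4 samples, 3 features
X3d = X2d.reshape(X2d.shape[0], 1, X2d.shape[1])

print(X2d.shape, '->', X3d.shape)               # (4, 3) -> (4, 1, 3)
# Each sample is just wrapped in a length-1 timestep axis
assert (X3d[:, 0, :] == X2d).all()
```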
"2. Model Fitting\n\nWe are choosing LSTM because it is widely used and shows good results for multivariate time-forecasting\nYou can experiment with other sequential models that are available on the Keras website here: https://keras.io/getting-started/sequential-model-guide/",
"# design network\nmodel = Sequential()\nmodel.add(LSTM(32, input_shape=(train_X.shape[1], train_X.shape[2])))\nmodel.add(Dense(1))\nmodel.compile(loss='mae', optimizer='adam')\n# fit network\nhistory = model.fit(train_X, train_y, epochs=40, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)\n# plot history\npyplot.plot(history.history['loss'], label='train')\npyplot.plot(history.history['val_loss'], label='test')\npyplot.legend()\npyplot.show()",
"3. Using Model to Predict and Interpreting Results",
"# make a prediction\nyhat = model.predict(test_X)\ntest_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))\n# invert scaling for forecast\ninv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)\ninv_yhat = scaler.inverse_transform(inv_yhat)\ninv_yhat = inv_yhat[:,0]\n# invert scaling to obtain actual values\ntest_y = test_y.reshape((len(test_y), 1))\ninv_y = concatenate((test_y, test_X[:, 1:]), axis=1)\ninv_y = scaler.inverse_transform(inv_y)\ninv_y = inv_y[:,0]\n\nrmse = sqrt(mean_squared_error(inv_y, inv_yhat))\nprint('Test RMSE: %.3f' % rmse)\n\nfig = pyplot.figure(figsize=(20,10))\nax1 = fig.add_subplot(111)\nax1.plot(inv_y[1:2000],label='y')\nax1.plot(inv_yhat[1:2000],label='yhat')\npyplot.legend()\npyplot.show()",
"Tuning LSTM Parameters in Keras\n\nTune the number of epochs: Simply change the number\nTune the batch size\nTune the number of neurons",
"fig, ax = pyplot.subplots()\nax.scatter(inv_y, inv_yhat, edgecolors=(0, 0, 0))\nax.plot([inv_y.min(), inv_y.max()], [inv_y.min(), inv_y.max()], 'k--', lw=4)\nax.set_xlabel('y')\nax.set_ylabel('yhat')\npyplot.show()\n\nfig, ax = pyplot.subplots()\nax.scatter(inv_y, inv_yhat, edgecolors=(0, 0, 0))\nax.plot([inv_y.min(), 200], [inv_y.min(), 20], 'k--', lw=1)\nax.set_xlabel('y')\nax.set_ylabel('yhat')\npyplot.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
modin-project/modin
|
examples/tutorial/jupyter/execution/pandas_on_dask/local/exercise_1.ipynb
|
apache-2.0
|
[
"<center><h2>Scale your pandas workflows by changing one line of code</h2>\nExercise 1: How to use Modin\nGOAL: Learn how to import Modin to accelerate and scale pandas workflows.\nModin is a drop-in replacement for pandas that distributes the computation \nacross all of the cores in your machine or in a cluster.\nIn practical terms, this means that you can continue using the same pandas scripts\nas before and expect the behavior and results to be the same. The only thing that needs\nto change is the import statement. Normally, you would change:\npython\nimport pandas as pd\nto:\npython\nimport modin.pandas as pd\nChanging this line of code will allow you to use all of the cores in your machine to do computation on your data. One of the major performance bottlenecks of pandas is that it only uses a single core for any given computation. Modin exposes an API that is identical to pandas, allowing you to continue interacting with your data as you would with pandas. There are no additional commands required to use Modin locally. Partitioning, scheduling, data transfer, and other related concerns are all handled by Modin under the hood.\n<p style=\"text-align:left;\">\n <h1>pandas on a multicore laptop\n <span style=\"float:right;\">\n Modin on a multicore laptop\n </span>\n\n<div>\n<img align=\"left\" src=\"../../../img/pandas_multicore.png\"><img src=\"../../../img/modin_multicore.png\">\n</div>\n\n### Concept for exercise: setting Modin engine\n\nModin uses Ray as an execution engine by default so no additional action is required to start to use it. Alternatively, if you need to use another engine, it should be specified either by setting the Modin config or by setting Modin environment variable before the first operation with Modin as it is shown below. 
Also, note that the full list of Modin configs and corresponding environment variables can be found in the [Modin Configuration Settings](https://modin.readthedocs.io/en/stable/flow/modin/config.html#modin-configs-list) section of the Modin documentation.",
"# Modin engine can be specified either by config\nimport modin.config as cfg\ncfg.Engine.put(\"dask\")\n\n# or by setting the environment variable\n# import os\n# os.environ[\"MODIN_ENGINE\"] = \"dask\"",
"Concept for exercise: Dataframe constructor\nOften when playing around in pandas, it is useful to create a DataFrame with the constructor. That is where we will start.\n```python\nimport numpy as np\nimport pandas as pd\nframe_data = np.random.randint(0, 100, size=(2**10, 2**5))\ndf = pd.DataFrame(frame_data)\n```\nWhen creating a dataframe from a non-distributed object, it will take extra time to partition the data. When this is happening, you will see this message:\nUserWarning: Distributing <class 'numpy.ndarray'> object. This may take some time.",
"# Note: Do not change this code!\nimport numpy as np\nimport pandas\nimport sys\nimport modin\n\npandas.__version__\n\nmodin.__version__\n\n# Implement your answer here. You are also free to play with the size\n# and shape of the DataFrame, but beware of exceeding your memory!\n\nimport pandas as pd\n\nframe_data = np.random.randint(0, 100, size=(2**10, 2**5))\ndf = pd.DataFrame(frame_data)\n\n# ***** Do not change the code below! It verifies that \n# ***** the exercise has been done correctly. *****\n\ntry:\n assert df is not None\n assert frame_data is not None\n assert isinstance(frame_data, np.ndarray)\nexcept:\n raise AssertionError(\"Don't change too much of the original code!\")\nassert \"modin.pandas\" in sys.modules, \"Not quite correct. Remember the single line of code change (See above)\"\n\nimport modin.pandas\nassert pd == modin.pandas, \"Remember the single line of code change (See above)\"\nassert hasattr(df, \"_query_compiler\"), \"Make sure that `df` is a modin.pandas DataFrame.\"\n\nprint(\"Success! You only need to change one line of code!\")",
"Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.\nConcept for Exercise: Data Interaction and Printing\nWhen interacting with data, it is very imporant to look at different parts of the data (e.g. df.head()). Here we will show that you can print the modin.pandas DataFrame in the same ways you would pandas.",
"# Print the first 10 lines.\ndf.head(10)\n\n# Print the DataFrame.\ndf\n\n# Free cell for custom interaction (Play around here!)\ndf.add_prefix(\"col\")\n\ndf.count()",
"Please move on to Exercise 2 when you are ready"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
otavio-r-filho/AIND-Deep_Learning_Notebooks
|
embeddings/Skip-Gram_word2vec.ipynb
|
mit
|
[
"Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. 
This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.",
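The lookup-as-matmul equivalence described above is easy to verify with a tiny NumPy example (a standalone sketch, not part of the notebook's pipeline — the matrix values are arbitrary):

```python
import numpy as np

# Toy embedding matrix: 5-word vocabulary, 3 hidden units (embedding dim).
embedding = np.arange(15, dtype=float).reshape(5, 3)

# One-hot encode word index 2 and multiply -- this selects row 2.
one_hot = np.zeros(5)
one_hot[2] = 1.0
via_matmul = one_hot @ embedding

# The embedding lookup is just direct row indexing -- same result, no matmul.
via_lookup = embedding[2]

assert np.array_equal(via_matmul, via_lookup)
print(via_lookup)  # [6. 7. 8.]
```

This is exactly why `tf.nn.embedding_lookup` (used later in this notebook) is cheaper than a full matrix multiplication.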
"import time\n\nimport numpy as np\nimport tensorflow as tf\nimport random\nfrom collections import Counter\n\nimport utils",
"Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()",
"Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.",
"words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))",
"And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.",
"vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]",
"Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.",
"import time\n\ndef subsample_words(words, threshold): \n # This will be the probability to keep each word\n keep_probs = np.random.uniform(0.0, 1.0, len(words))\n \n total_words = len(words)\n \n # Counting the frequency of each word\n words_freqs = Counter(words)\n words_freqs = {word: count/total_words for word, count in words_freqs.items()}\n \n # Placeholder to keep the train words\n keep_words = []\n \n for idx, word in enumerate(words):\n discard_prob = 1.0 - np.sqrt(threshold / words_freqs[word])\n \n if keep_probs[idx] >= discard_prob:\n keep_words.append(word)\n \n return keep_words\n\n## Your code here\ntrain_words = subsample_words(int_wordswords, threshold=1e-5)",
"Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.",
"def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n # Your code here\n \n r = np.random.randint(1, window_size + 1)\n low_idx = max(idx - r, 0)\n high_idx = min(idx + r + 1, len(words) - 1)\n \n wnd = set(words[low_idx:idx] + words[idx+1:high_idx])\n \n return list(wnd)",
"Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.",
"def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ",
"Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.",
"train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = tf.placeholder(tf.int32, [None], name='inputs')\n labels = tf.placeholder(tf.int32, [None, None], name='labels')",
"Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.",
"n_vocab = len(int_to_vocab)\nn_embedding = 200 # Number of embedding features \nwith train_graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here\n embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output",
"Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.",
"# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here\n softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels=labels,\n inputs=embed, num_sampled=n_sampled, num_classes=n_vocab) \n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)",
"Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.",
"with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints",
"Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.",
"epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)",
"Restore the trained network if you need to:",
"with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)",
"Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dataworkshop/webinar-jupyter
|
pandas.ipynb
|
mit
|
[
"import pandas as pd\nimport numpy as np\n\n%matplotlib inline",
"Pandas\n\nDataFrame\nSeries",
"sq = pd.Series({'row1': 'row1 col a', 'row 2': 'row2 col a'})\nsq\n\nsq.index\n\ndf = pd.DataFrame(\n {\n 'column_a': {'row1': 'row1 col a', 'row 2': 'row2 col a'}, \n 'column_b': {'row1': 'row1 col b', 'row 2': 'row2 col b'}, \n })\ndf\n\ndf.index\n\ndf.columns\n\ndf.columns = ['new_column_a', 'new_column_b']\ndf\n\nprint(type(df.new_column_a))\ndf.new_column_a\n\ntype(df.new_column_a)\n\nprint(type(df.new_column_a.values))\ndf.new_column_a.values",
"Read data\nLet's read data from train.csv file into memory (assign to df variable in our case)",
"df = pd.read_csv('train.csv')",
"Questions:\n\nHow many objects (rows) and features (columns) are there?\nWhat the name and type of features (columns)?\nAre there missing values?\nHow many memory is use for keep this data in RAM?\n\nLet's use .info() to find the answer",
"df.info()",
"RangeIndex: 10886 entries, 0 to 10885 => there're 10886 rows (objects)\nData columns (total 12 columns): => there're 12 columns (features)\ndtypes: float64(3), int64(8), object(1) => three types (float, int, object)\nmemory usage: 1020.6+ KB => use about 1MB\nThere are not missing data (because each column has the same number non-missing values)",
"print(\"count samples & features: \", df.shape)\nprint(\"Are there missing values: \", df.isnull().any().any())",
"Questions\n\nHow data looks like?\nAre there categorical variables?\nThe categorical variables have high or low cardinality (how many unique values they have)?\nCan we optimize memory usage?",
"df.head(10)\n\ndf.season.unique()\n#df[''].unique()\n\ndf.season.nunique()\n\ndf.columns\n\nfor column in df.columns:\n print(column, df[column].nunique())",
"Categorical variables:\n\nseason: 4 unique values\nholiday: 2 (binary)\nworkingday: 2 (binary)\nweather: 4",
"df.holiday.unique()\n\ndf[ ['holiday'] ].info()\n\ndf['holiday'] = df['holiday'].astype('int8')\ndf[ ['holiday'] ].info()\n\ndef optimize_memory(df):\n for cat_var in ['holiday', 'weather', 'season', 'workingday']:\n df[cat_var] = df[cat_var].astype('int8')\n\n for float_var in ['temp', 'atemp', 'windspeed']:\n df[float_var] = df[float_var].astype('float16')\n\n\n for int_var in ['casual', 'registered', 'count']:\n df[int_var] = df[int_var].astype('int16')\n \n return df\n\ndf = optimize_memory(df)\ndf.info()",
"Working with date",
"df['datetime'] = pd.to_datetime(df['datetime'])\ndf.info()\n\ndf = pd.read_csv('train.csv', parse_dates=['datetime'])\n\ndf = optimize_memory(df)\ndf.info()",
"Understand Data\nhttps://www.kaggle.com/c/bike-sharing-demand/data\n\ndatetime - hourly date + timestamp \nseason - 1 = spring, 2 = summer, 3 = fall, 4 = winter \nholiday - whether the day is considered a holiday\nworkingday - whether the day is neither a weekend nor holiday\nweather - \n 1: Clear, Few clouds, Partly cloudy, Partly cloudy\n 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist\n 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds\n 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog \ntemp - temperature in Celsius\natemp - \"feels like\" temperature in Celsius\nhumidity - relative humidity\nwindspeed - wind speed\ncasual - number of non-registered user rentals initiated\nregistered - number of registered user rentals initiated\ncount - number of total rentals",
"df.head()",
"Questions:\n\nWhat is target variable (should be predicted)?\nWhat the difference between count, registered and casual?",
"df['count'].plot(figsize=(20, 10));\n\ndf['casual'].plot()\n\ndf['registered'].plot()\n\n(df['count'] == df['casual'] + df['registered']).all()",
"Extract day, month, year... from datetime",
"df.datetime.map(lambda x: x.day)\n\ndf.datetime.dt.hour\n\ndef plot_by_hour(data, year=None, agg='sum'):\n data['hour'] = data.datetime.dt.hour\n dd = data[ data.datetime.dt.year == year ] if year else data\n \n by_hour = dd.groupby(['hour', 'workingday'])['count'].agg(agg).unstack()\n return by_hour.plot(kind='bar', ylim=(0, 80000), figsize=(15,5), width=0.9, title=\"Year = {0}\".format(year))\n\n\nplot_by_hour(df, year=2011)\nplot_by_hour(df, year=2012);\n\ndef plot_by_year(data, agg_attr, title):\n data['year'] = data.datetime.dt.year\n data['month'] = data.datetime.dt.month\n data['hour'] = data.datetime.dt.hour\n \n by_year = data.groupby([agg_attr, 'year'])['count'].agg('sum').unstack()\n return by_year.plot(kind='bar', figsize=(15,5), width=0.9, title=title)\n\n\nplot_by_year(df, 'month', \"Rent bikes per month in 2011 and 2012\")\nplot_by_year(df, 'hour', \"Rent bikes per hour in 2011 and 2012\");\n\ndf[ ['count', 'year'] ].boxplot(by=\"year\", figsize=(15, 6));\n\nfor year in [2011, 2012]:\n for workingday in [0, 1]:\n dd = df[ (df.datetime.dt.year == year) | (df.workingday == workingday) ]\n dd[ ['count', 'month'] ].boxplot(by=\"month\", figsize=(15, 6));",
"Mapping\nWeather\n\nClear, Few clouds, Partly cloudy, Partly cloudy\nMist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist\nLight Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds\nHeavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog",
"weather = {1: 'Clear', 2: 'Mist', 3: 'Light Snow', 4: 'Heavy Rain'}\ndf['weather_label'] = df.weather.map(lambda x: weather[x])\n\ndf['weather_label'].unique()",
"Apply",
"df[ ['weather', 'season'] ].apply(lambda x: 'weather-{0}, season-{1}'.format(x['weather'], x['season']), axis=1).head()",
"Value counts",
"df.year = df.datetime.dt.year\n\ndf['year'].value_counts()\n\ndf['month'].value_counts()",
"Group",
"df.groupby('year')['month'].value_counts()",
"Aggregation",
"df.groupby('year')['count'].min()\n\ndf.groupby('year')['count'].max()\n\ndf.groupby('year')['count'].agg(np.max)\n\nfor agg_func in [np.mean, np.median, np.min, np.max]:\n print(agg_func.__name__, df.groupby(['year', 'month'])['count'].agg(agg_func))",
"Sort",
"df.sort_values(by=['year', 'month'], ascending=False).head()",
"Save",
"df.to_csv('df.csv', index=False)\n\n!head df.csv\n\ndf.to_hdf('df.h5', 'df')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
musketeer191/job_analytics
|
.ipynb_checkpoints/skill_cluster5-checkpoint.ipynb
|
gpl-3.0
|
[
"Build Document-Skill matrix\nWe first focus on building the document-skill matrix where each entry $f(d,s)$ is the frequency skill $s$ occurs in document $d$.\nChallenge: naive counting can cause over-estimating the frequency. This is due to overlapping in skills. By overlapping we mean that a unigram skill can be a part of a bigram/trigram skill. For example: 'analytics' is a skill itself but it also occurs in 'data analytics', thus a document with skill 'data analytics' occuring 10 times is also considered as containing skill 'analytics' 10 times.\nSolution: To overcome this, we propose counting with removal as follows. We count trigram skills first, then remove them from docs, count bigram skills, remove them from docs and finally count unigram skills. This way we can avoid overlapping.",
"import cluster_skill_helpers as cluster_skill_helpers\nfrom cluster_skill_helpers import *\n\nHOME_DIR = 'd:/larc_projects/job_analytics/'; DATA_DIR = HOME_DIR + 'data/clean/'\nSKILL_DIR = DATA_DIR + 'skill_cluster/'; RES_DIR = HOME_DIR + 'results/reports/skill_cluster/'\n\njd_df = pd.read_csv(DATA_DIR + 'jd_df.csv')\njd_df.sort_values(by='n_uniq_skill', inplace=True, ascending=False)\n\njd_docs = jd_df['clean_text'].apply(str.lower)\n\ndoc_index = pd.DataFrame({'job_id': jd_df['job_id'], 'doc': jd_docs})\ndoc_index.head()\n\ndoc_index.to_csv(SKILL_DIR + 'doc_index.csv', index=False)\n\nskill_df = pd.read_csv(DATA_DIR + 'skill_cluster/skill_df.csv')\nskills = skill_df['skill']",
"Counting with removal",
"trigram_skills = np.unique(skill_df.query('n_word == 3')['skill'])\nbigram_skills = np.unique(skill_df.query('n_word == 2')['skill'])\nunigram_skills = np.unique(skill_df.query('n_word == 1')['skill'])\n\n# pd.DataFrame({'trigram': trigram_skills}).to_csv(SKILL_DIR + 'trigrams.csv', index=False)\n# pd.DataFrame({'bigram': bigram_skills}).to_csv(SKILL_DIR + 'bigrams.csv', index=False)\n# pd.DataFrame({'unigram': unigram_skills}).to_csv(SKILL_DIR + 'unigrams.csv', index=False)\n\nreload(cluster_skill_helpers)\nfrom cluster_skill_helpers import *",
"Trigram skills:",
"doc_trigram = buildDocSkillMat(n=3, jd_docs=jd_docs, skills=trigram_skills)\n\nprint('Removing tri-grams from JDs to avoid duplications...')\njd_docs = jd_docs.apply(rmSkills, skills = trigram_skills)",
"Bigram skills:",
"reload(cluster_skill_helpers)\nfrom cluster_skill_helpers import *\n\ndoc_bigram = buildDocSkillMat(n=2, jd_docs=jd_docs, skills=bigram_skills)\nprint('Removing bi-grams from JDs...')\njd_docs = jd_docs.apply(rmSkills, skills = bigram_skills)",
"Unigram skills:",
"doc_unigram = buildDocSkillMat(n=1, jd_docs=jd_docs, skills=unigram_skills)\n\nwith(open(SKILL_DIR + 'doc_trigram.mtx', 'w')) as f:\n mmwrite(f, doc_trigram)\nwith(open(SKILL_DIR + 'doc_bigram.mtx', 'w')) as f:\n mmwrite(f, doc_bigram) \nwith(open(SKILL_DIR + 'doc_unigram.mtx', 'w')) as f:\n mmwrite(f, doc_unigram)",
"Concat matrices doc_unigram, doc_bigram and doc_trigram to get occurrences of all skills:",
"from scipy.sparse import hstack\n\ndoc_skill = hstack([doc_unigram, doc_bigram, doc_trigram])\nassert doc_skill.shape[0] == doc_unigram.shape[0]\nassert doc_skill.shape[1] == doc_unigram.shape[1] + doc_bigram.shape[1] + doc_trigram.shape[1]\n\nwith(open(SKILL_DIR + 'doc_skill.mtx', 'w')) as f:\n mmwrite(f, doc_skill)\n\nskills = np.concatenate((unigram_skills, bigram_skills, trigram_skills))\npd.DataFrame({'skill': skills}).to_csv(SKILL_DIR + 'skill_index.csv', index=False)",
"Naive counting",
"vectorizer = text_manip.CountVectorizer(vocabulary=skills, ngram_range=(1,3))\n\nt0 = time()\nprint('Naive counting...')\nnaive_doc_skill = vectorizer.fit_transform(jd_docs)\nprint('Done after %f.1s' %(time() - t0))\n\ns_freq = naive_doc_skill.sum(axis=0).A1\nnaive_skill_df = pd.DataFrame({'skill': skills, 'freq': s_freq})\nnaive_skill_df = pd.merge(naive_skill_df, skill_df)\n\nnaive_skill_df = naive_skill_df[['skill', 'n_word', 'freq', 'n_jd_with_skill']]\nnaive_skill_df.head()\n\nres = vectorizer.inverse_transform(naive_doc_skill)\n# res[:10]",
"Most popular skills after the process",
"s_freq = doc_skill.sum(axis=0).A1\nnew_skill_df = pd.DataFrame({'skill': skills, 'new_freq': s_freq})\n\nskill_df = pd.merge(naive_skill_df, new_skill_df)\nskill_df = skill_df[['skill', 'n_word', 'freq', 'new_freq', 'n_jd_with_skill']]\n\nunigram_df = skill_df.query('n_word == 1').sort_values(by='new_freq', ascending=False)\nbigram_df = skill_df.query('n_word == 2').sort_values(by='new_freq', ascending=False)\ntrigram_df = skill_df.query('n_word == 3').sort_values(by='new_freq', ascending=False)\n\nprint('# unigram skills in JDs: {}'.format(unigram_df.shape[0]))\nprint('# bigram skills in JDs: {}'.format(bigram_df.shape[0]))\nprint('# trigram skills in JDs: {}'.format(trigram_df.shape[0]))",
"Top-10 popular trigram skills:",
"trigram_df.head(10)",
"Top-k popular bigram skills:",
"bigram_df.head(20)",
"Top-k popular unigram skills:",
"unigram_df.head(20)\n\ntrigram_df.to_csv(SKILL_DIR + 'trigram.csv', index=False)\nbigram_df.to_csv(SKILL_DIR + 'bigram.csv', index=False)\nunigram_df.to_csv(SKILL_DIR + 'unigram.csv', index=False)\n\n# top100_skills = skill_df.head(100)\n# top100_skills.to_csv(RES_DIR + 'top100_skills.csv', index=False)",
"LDA and NMF on New Doc-Skill Matrix\nGlobal arguments:\n\nno. of topics: k in {5, 10, ..., 20}\nno. of top words to be printed out in result",
"ks = range(5, 25, 5)\nn_top_words = 10\n\nn_doc = doc_skill.shape[0]; n_feat = doc_skill.shape[1]\ntrain_idx, test_idx = mkPartition(n_doc, p=80)",
"Skill Clustering by LDA",
"lda_X_train, lda_X_test = doc_skill[train_idx, :], doc_skill[test_idx, :]\n\n# # check correctness of rmSkills()\n# non_zeros = find(doc_trigram)\n# idx_docs_with_trigram = non_zeros[0]\n# trigram_counts = non_zeros[2]\n# many_trigrams = idx_docs_with_trigram[trigram_counts > 1]\n\n# doc_with_trigram = jd_docs.iloc[many_trigrams[0]]\n# print('Doc bf removing tri-gram skills:\\n {}'.format(doc_with_trigram))\n# res = rmSkills(trigram_skills, doc_with_trigram)\n\n# two_spaces = [m.start() for m in re.finditer(' ', res)]\n# print res[two_spaces[1]:]\n# print res[two_spaces[0]:450]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/guide/tpu.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TPU の使用\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/guide/tpu\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a></td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/tpu.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab で実行</a> </td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/tpu.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/tpu.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a> </td>\n</table>\n\nこの Colab ノートブックをダウンロードする前に、Runtime > Change runtime type > Hardware accelerator > TPU でノートブックの設定を確認し、ハードウェアアクセラレータが TPU であることを確認してください。\nセットアップ",
"import tensorflow as tf\n\nimport os\nimport tensorflow_datasets as tfds",
"TPU の初期化\nTPU は通常 Cloud TPU ワーカーであり、これはユーザーの Python プログラムを実行するローカルプロセスとは異なります。そのため、リモートクラスタに接続して TPU を初期化するには、ある程度の初期化作業が必要となります。tf.distribute.cluster_resolver.TPUClusterResolver の tpu 引数は、Colab だけの特別なアドレスであることに注意してください。Google Compute Engine(GCE)で実行している場合は、ご利用の CloudTPU の名前を渡す必要があります。\n注意: TPU の初期化コードはプログラムのはじめにある必要があります。",
"resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\ntf.config.experimental_connect_to_cluster(resolver)\n# This is the TPU initialization code that has to be at the beginning.\ntf.tpu.experimental.initialize_tpu_system(resolver)\nprint(\"All devices: \", tf.config.list_logical_devices('TPU'))",
"手動でデバイスを配置する\nTPU が初期されたら、計算を単一の TPU デバイスに配置するために、手動によるデバイスの配置を使用できます。",
"a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\nb = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n\nwith tf.device('/TPU:0'):\n c = tf.matmul(a, b)\n\nprint(\"c device: \", c.device)\nprint(c)",
"分散ストラテジー\nモデルは通常、複数の TPU で並行して実行されます。複数の TPU(またはその他のアクセラレータ)でモデルを分散させるため、TensorFlow にはいくつかの分散ストラテジーが用意されています。分散ストラテジーを置き換えると、指定された任意の(TPU)デバイスでモデルが実行するようになります。詳細については、分散ストラテジーガイドをご覧ください。\nこれを実演するために、tf.distribute.TPUStrategy オブジェクトを作成します。",
"strategy = tf.distribute.TPUStrategy(resolver)",
"計算を複製してすべての TPU コアで実行できるようにするには、計算を strategy.run API に渡します。次の例では、すべてのコアが同じ入力 (a, b) を受け入れて、各コアで独立して行列の乗算を実行しています。出力は、すべてのレプリカからの値となります。",
"@tf.function\ndef matmul_fn(x, y):\n z = tf.matmul(x, y)\n return z\n\nz = strategy.run(matmul_fn, args=(a, b))\nprint(z)",
"TPU での分類\n基本的な概念を説明したので、より具体的な例を考察しましょう。このセクションでは、分散ストラテジー tf.distribute.TPUStrategy を使用して Cloud TPU でKeras モデルをトレーニングする方法を説明します。\nKeras モデルを定義する\nMNIST データセットで Keras を使用して画像の分類を行う Sequential Keras モデルの定義から始めましょう。CPU または GPU でトレーニングする場合に使用するものと変わりません。Keras モデルの作成は strategy.scope 内で行う必要があることに注意してください。そうすることで、変数が各 TPU デバイスに作成されるようになります。コードの他の部分は、ストラテジースコープ内にある必要はありません。",
"def create_model():\n return tf.keras.Sequential(\n [tf.keras.layers.Conv2D(256, 3, activation='relu', input_shape=(28, 28, 1)),\n tf.keras.layers.Conv2D(256, 3, activation='relu'),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(256, activation='relu'),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(10)])",
"データセットを読み込む\nCloud TPU にデータを迅速にフィードできなければ Cloud TPU を使用することは不可能であるため、Cloud TPU を使用する際は、tf.data.Dataset API を効率的に使用できることが非常に重要となります。データセットのパフォーマンスについての詳細は、入力パイプラインのパフォーマンスガイドをご覧ください。\n最も単純な実験(tf.data.Dataset.from_tensor_slices またはほかのグラフ内データの使用)以外のすべての実験では、Dataset が読み取るすべてのデータファイルを Google Cloud Storage(GCS)バケットに格納する必要があります。\nほとんどの使用事例では、データを TFRecord 形式に変換し、tf.data.TFRecordDataset を使って読み取ることをお勧めします。このやり方については、「TFRecord および tf.Example のチュートリアル」を参照してください。これは絶対要件ではないため、ほかのデータセットリーダー(tf.data.FixedLengthRecordDataset または tf.data.TextLineDataset)を使用することもできます。\n小さなデータセットは、tf.data.Dataset.cache を使ってすべてをメモリに読み込むことができます。\nデータ形式にかかわらず、100 MB 程度の大きなファイルを使用することをお勧めします。このネットワーク化された設定においては、ファイルを開くタスクのオーバーヘッドが著しく高くなるため、特に重要なことです。\n以下のコードに示される通り、tensorflow_datasets モジュールを使用して、MNIST トレーニングデータのコピーを取得する必要があります。try_gcs は、パブリック GCS バケットで提供されているコピーを使用するように指定されています。これを指定しない場合、TPU はダウンロードされたデータにアクセスできません。",
"def get_dataset(batch_size, is_training=True):\n split = 'train' if is_training else 'test'\n dataset, info = tfds.load(name='mnist', split=split, with_info=True,\n as_supervised=True, try_gcs=True)\n\n # Normalize the input data.\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n dataset = dataset.map(scale)\n\n # Only shuffle and repeat the dataset in training. The advantage of having an\n # infinite dataset for training is to avoid the potential last partial batch\n # in each epoch, so that you don't need to think about scaling the gradients\n # based on the actual batch size.\n if is_training:\n dataset = dataset.shuffle(10000)\n dataset = dataset.repeat()\n\n dataset = dataset.batch(batch_size)\n\n return dataset",
"Keras の高位 API を使用してモデルをトレーニングする\nKeras の fit と compile API を使用してモデルをトレーニングできます。ここでは、TPU 固有のステップはないため、複数の GPU と MirroredStrategy(TPUStrategy ではなく)を使用している場合と同じようにコードを記述します。詳細については、「Keras を使用した分散トレーニング」チュートリアルをご覧ください。",
"with strategy.scope():\n model = create_model()\n model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['sparse_categorical_accuracy'])\n\nbatch_size = 200\nsteps_per_epoch = 60000 // batch_size\nvalidation_steps = 10000 // batch_size\n\ntrain_dataset = get_dataset(batch_size, is_training=True)\ntest_dataset = get_dataset(batch_size, is_training=False)\n\nmodel.fit(train_dataset,\n epochs=5,\n steps_per_epoch=steps_per_epoch,\n validation_data=test_dataset, \n validation_steps=validation_steps)",
"Python のオーバーヘッドを緩和し、TPU のパフォーマンスを最大化するには、引数 steps_per_execution を Model.compile に渡します。この例では、スループットが約 50% 増加します。",
"with strategy.scope():\n model = create_model()\n model.compile(optimizer='adam',\n # Anything between 2 and `steps_per_epoch` could help here.\n steps_per_execution = 50,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['sparse_categorical_accuracy'])\n\nmodel.fit(train_dataset,\n epochs=5,\n steps_per_epoch=steps_per_epoch,\n validation_data=test_dataset,\n validation_steps=validation_steps)",
"カスタムトレーニングループを使用してモデルをトレーニングする\ntf.function と tf.distribute API を直接使用しても、モデルを作成してトレーニングすることができます。strategy.experimental_distribute_datasets_from_function API は、データセット関数を指定してデータセットを分散させるために使用されます。以下の例では、データセットに渡されるバッチサイズは、グローバルバッチサイズではなく、レプリカごとのバッチサイズであることに注意してください。詳細については、「tf.distribute.Strategy によるカスタムトレーニング」チュートリアルをご覧ください。\n最初に、モデル、データセット、および tf.function を作成します。",
"# Create the model, optimizer and metrics inside the strategy scope, so that the\n# variables can be mirrored on each device.\nwith strategy.scope():\n model = create_model()\n optimizer = tf.keras.optimizers.Adam()\n training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)\n training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n 'training_accuracy', dtype=tf.float32)\n\n# Calculate per replica batch size, and distribute the datasets on each TPU\n# worker.\nper_replica_batch_size = batch_size // strategy.num_replicas_in_sync\n\ntrain_dataset = strategy.experimental_distribute_datasets_from_function(\n lambda _: get_dataset(per_replica_batch_size, is_training=True))\n\n@tf.function\ndef train_step(iterator):\n \"\"\"The step function for one training step.\"\"\"\n\n def step_fn(inputs):\n \"\"\"The computation to run on each TPU device.\"\"\"\n images, labels = inputs\n with tf.GradientTape() as tape:\n logits = model(images, training=True)\n loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, logits, from_logits=True)\n loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)\n grads = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))\n training_loss.update_state(loss * strategy.num_replicas_in_sync)\n training_accuracy.update_state(labels, logits)\n\n strategy.run(step_fn, args=(next(iterator),))",
"次に、トレーニングループを実行します。",
"steps_per_eval = 10000 // batch_size\n\ntrain_iterator = iter(train_dataset)\nfor epoch in range(5):\n print('Epoch: {}/5'.format(epoch))\n\n for step in range(steps_per_epoch):\n train_step(train_iterator)\n print('Current step: {}, training loss: {}, accuracy: {}%'.format(\n optimizer.iterations.numpy(),\n round(float(training_loss.result()), 4),\n round(float(training_accuracy.result()) * 100, 2)))\n training_loss.reset_states()\n training_accuracy.reset_states()",
"tf.function 内の複数のステップでパフォーマンスを改善する\ntf.function 内で複数のステップを実行することで、パフォーマンスを改善できます。これは、tf.function 内の tf.range で strategy.run 呼び出しをラッピングすることで実現されます。AutoGraph は、TPU ワーカーの tf.while_loop に変換します。\nパフォーマンスは改善されますが、tf.function 内の単一のステップに比べれば、この方法にはトレードオフがあります。tf.function で複数のステップを実行すると柔軟性に劣り、ステップ内での Eager execution や任意の Python コードを実行できません。",
"@tf.function\ndef train_multiple_steps(iterator, steps):\n \"\"\"The step function for one training step.\"\"\"\n\n def step_fn(inputs):\n \"\"\"The computation to run on each TPU device.\"\"\"\n images, labels = inputs\n with tf.GradientTape() as tape:\n logits = model(images, training=True)\n loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, logits, from_logits=True)\n loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)\n grads = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))\n training_loss.update_state(loss * strategy.num_replicas_in_sync)\n training_accuracy.update_state(labels, logits)\n\n for _ in tf.range(steps):\n strategy.run(step_fn, args=(next(iterator),))\n\n# Convert `steps_per_epoch` to `tf.Tensor` so the `tf.function` won't get \n# retraced if the value changes.\ntrain_multiple_steps(train_iterator, tf.convert_to_tensor(steps_per_epoch))\n\nprint('Current step: {}, training loss: {}, accuracy: {}%'.format(\n optimizer.iterations.numpy(),\n round(float(training_loss.result()), 4),\n round(float(training_accuracy.result()) * 100, 2)))",
"次のステップ\n\nGoogle Cloud TPU ドキュメント: Google Cloud TPU のセットアップと実行\nGoogle Cloud TPU Colab ノートブック: エンドツーエンドのトレーニング例\nGoogle Cloud TPU パフォーマンスガイド: アプリケーションに合った Cloud TPU 構成パラメータの調整により、Cloud TPU パフォーマンスをさらに改善します。\nTensorFlow での分散型トレーニング: tf.distribute.TPUStrategy などの分散ストラテジーの使用方法とベストプラクティスを示す例"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hunterherrin/phys202-2015-work
|
assignments/assignment06/ProjectEuler17.ipynb
|
mit
|
[
"Project Euler: Problem 17\nhttps://projecteuler.net/problem=17\nIf the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.\nIf all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?\nNOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of \"and\" when writing out numbers is in compliance with British usage.\nFirst write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\nnumber_to_words(554)\n\ndef number_to_words(n):\n \"\"\"Given a number n between 1-1000 inclusive return a list of words for the number.\"\"\"\n N=str(n)\n x=list(N)\n if len(x)==4:\n return'one thousand'\n if len(x)==3:\n if x[0]=='1':\n hundred_digit='one hundred'\n elif x[0]=='2':\n hundred_digit='two hundred' \n elif x[0]=='3':\n hundred_digit='three hundred'\n elif x[0]=='4':\n hundred_digit='four hundred'\n elif x[0]=='5':\n hundred_digit='five hundred'\n elif x[0]=='6':\n hundred_digit='six hundred'\n elif x[0]=='7':\n hundred_digit='seven hundred'\n elif x[0]=='8':\n hundred_digit='eight hundred'\n elif x[0]=='9':\n hundred_digit='nine hundred'\n if x[1]=='0':\n tens_digit=' '\n elif x[1]=='1':\n if x[2]=='0':\n return hundred_digit+' ten'\n elif x[2]=='1':\n return hundred_digit+' eleven'\n elif x[2]=='2':\n return hundred_digit+' twelve'\n elif x[2]=='3':\n return hundred_digit+' thirteen'\n elif x[2]=='4':\n return hundred_digit+' fourteen'\n elif x[2]=='5':\n return hundred_digit+' fifteen'\n elif x[2]=='6':\n return hundred_digit+' sixteen'\n elif x[2]=='7':\n return hundred_digit+' seventeen'\n elif x[2]=='8':\n return hundred_digit+' eighteen'\n elif x[2]=='9':\n return hundred_digit+' nineteen'\n elif x[1]=='2':\n tens_digit=' twenty-' \n elif x[1]=='3':\n tens_digit=' thirty-'\n elif x[1]=='4':\n tens_digit=' fourty-'\n elif x[1]=='5':\n tens_digit=' fifty-'\n elif x[1]=='6':\n tens_digit=' sixty-'\n elif x[1]=='7':\n tens_digit=' seventy-'\n elif x[1]=='8':\n tens_digit=' eighty-'\n elif x[1]=='9':\n tens_digit=' ninety-'\n if x[2]=='0':\n return hundred_digit+tens_digit\n elif x[2]=='1':\n return hundred_digit+' and'+tens_digit+'one'\n elif x[2]=='2':\n return hundred_digit+' and'+tens_digit+'two'\n elif x[2]=='3':\n return hundred_digit+' and'+tens_digit+'three'\n elif x[2]=='4':\n return hundred_digit+' and'+tens_digit+'four'\n elif x[2]=='5':\n return 
hundred_digit+' and'+tens_digit+'five'\n elif x[2]=='6':\n return hundred_digit+' and'+tens_digit+'six'\n elif x[2]=='7':\n return hundred_digit+' and'+tens_digit+'seven'\n elif x[2]=='8':\n return hundred_digit+' and'+tens_digit+'eight'\n elif x[2]=='9':\n return hundred_digit+' and'+tens_digit+'nine'\n if len(x)==2:\n if x[0]=='1':\n if x[1]=='0':\n return 'ten'\n elif x[1]=='1':\n return 'eleven'\n elif x[1]=='2':\n return 'twelve'\n elif x[1]=='3':\n return 'thirteen'\n elif x[1]=='4':\n return 'fourteen'\n elif x[1]=='5':\n return 'fifteen'\n elif x[1]=='6':\n return 'sixteen'\n elif x[1]=='7':\n return 'seventeen'\n elif x[1]=='8':\n return 'eighteen'\n elif x[1]=='9':\n return 'nineteen'\n elif x[0]=='2':\n tens_digit1='twenty-' \n elif x[0]=='3':\n tens_digit1='thirty-'\n elif x[0]=='4':\n tens_digit1='fourty-'\n elif x[0]=='5':\n tens_digit1='fifty-'\n elif x[0]=='6':\n tens_digit1='sixty-'\n elif x[0]=='7':\n tens_digit1='seventy-'\n elif x[0]=='8':\n tens_digit1='eighty-'\n elif x[0]=='9':\n tens_digit1='ninety-'\n if x[1]=='0':\n return tens_digit1\n elif x[1]=='1':\n return tens_digit1+'one'\n elif x[1]=='2':\n return tens_digit1+'two'\n elif x[1]=='3':\n return tens_digit1+'three'\n elif x[1]=='4':\n return tens_digit1+'four'\n elif x[1]=='5':\n return tens_digit1+'five'\n elif x[1]=='6':\n return tens_digit1+'six'\n elif x[1]=='7':\n return tens_digit1+'seven'\n elif x[1]=='8':\n return tens_digit1+'eight'\n elif x[1]=='9':\n return tens_digit1+'nine'\n if len(x)==1:\n if x[0]=='1':\n return 'one'\n elif x[0]=='2':\n return 'two'\n elif x[0]=='3':\n return 'three'\n elif x[0]=='4':\n return 'four'\n elif x[0]=='5':\n return 'five'\n elif x[0]=='6':\n return 'six'\n elif x[0]=='7':\n return 'seven'\n elif x[0]=='8':\n return 'eight'\n elif x[0]=='9':\n return 'nine'\n ",
"Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.",
"assert number_to_words(3)=='three'\nassert number_to_words(2)+number_to_words(4)=='twofour'\nassert number_to_words(978)=='nine hundred and seventy-eight'\n\nassert True # use this for grading the number_to_words tests.",
"Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.",
"def filter_fn(x):\n if x=='-' or x==' ':\n return False\n else:\n return True\n\ndef count_letters(n):\n \"\"\"Count the number of letters used to write out the words for 1-n inclusive.\"\"\"\n total_letters=0\n while n>=1:\n x=number_to_words(n)\n y=x.replace(' ','')\n z=y.replace('-','')\n total_letters=total_letters+len(z)\n n=n-1\n return total_letters\n ",
"Now write a set of assert tests for your count_letters function that verifies that it is working as expected.",
"assert count_letters(1)==3\nassert count_letters(5)==19\nassert count_letters(1000)==20738\n\n\nassert True # use this for grading the count_letters tests.",
"Finally used your count_letters function to solve the original question.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # use this for gradig the answer to the original question."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lmoresi/UoM-VIEPS-Intro-to-Python
|
Notebooks/SphericalMeshing/SphericalTriangulations/Ex2-SphericalGrids.ipynb
|
mit
|
[
"Example 2 - stripy predefined meshes\nOne common use of stripy is in meshing the sphere and, to this end, we provide pre-defined meshes for icosahedral and octahedral triangulations, each of which can have mid-face centroid points included. A triangulation of the six cube-vertices is also provided as well as a 'buckyball' (or 'soccer ball') mesh. A random mesh is included as a counterpoint to the regular meshes. Each of these meshes is also an sTriangulation. \nThe mesh classes in stripy are:\n```python\nstripy.spherical_meshes.octahedral_mesh(include_face_points=False)\nstripy.spherical_meshes.icosahedral_mesh(include_face_points=False)\nstripy.spherical_meshes.triangulated_cube_mesh()\nstripy.spherical_meshes.triangulated_soccerball_mesh()\nstripy.spherical_meshes.uniform_ring_mesh(resolution=5)\nstripy.spherical_meshes.random_mesh(number_of_points=5000)\n``` \nAny of the above meshes can be uniformly refined by specifying the refinement_levels parameter. \nNotebook contents\n\nSample meshes\nMesh characteristics\nIcosahedron with face points in detail\nCompare the predefined meshes\n\nThe next example is Ex3-Interpolation\n\nSample meshes\nWe create a number of meshes from the basic types available in stripy with approximately similar numbers of vertices.",
"import stripy as stripy\n\nstr_fmt = \"{:35} {:3}\\t{:6}\"\n\n\n## A bunch of meshes with roughly similar overall numbers of points / triangles\n\nocto0 = stripy.spherical_meshes.octahedral_mesh(include_face_points=False, refinement_levels=0)\nocto2 = stripy.spherical_meshes.octahedral_mesh(include_face_points=False, refinement_levels=2)\noctoR = stripy.spherical_meshes.octahedral_mesh(include_face_points=False, refinement_levels=5)\n\nprint(str_fmt.format(\"Octahedral mesh\", octo0.npoints, octoR.npoints))\n\n\noctoF0 = stripy.spherical_meshes.octahedral_mesh(include_face_points=True, refinement_levels=0)\noctoF2 = stripy.spherical_meshes.octahedral_mesh(include_face_points=True, refinement_levels=2)\noctoFR = stripy.spherical_meshes.octahedral_mesh(include_face_points=True, refinement_levels=4)\n\nprint(str_fmt.format(\"Octahedral mesh with faces\", octoF0.npoints, octoFR.npoints))\n\n\ncube0 = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=0)\ncube2 = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=2)\ncubeR = stripy.spherical_meshes.triangulated_cube_mesh(refinement_levels=5)\n\nprint(str_fmt.format(\"Cube mesh\", cube0.npoints, cubeR.npoints))\n\n\nico0 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=0)\nico2 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=2)\nicoR = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=4)\n\nprint(str_fmt.format(\"Icosahedral mesh\", ico0.npoints, icoR.npoints))\n\n\nicoF0 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=0, include_face_points=True)\nicoF2 = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=2, include_face_points=True)\nicoFR = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=4, include_face_points=True)\n\nprint(str_fmt.format(\"Icosahedral mesh with faces\", icoF0.npoints, icoFR.npoints))\n\n\nsocc0 = stripy.spherical_meshes.triangulated_soccerball_mesh(refinement_levels=0)\nsocc2 = 
stripy.spherical_meshes.triangulated_soccerball_mesh(refinement_levels=1)\nsoccR = stripy.spherical_meshes.triangulated_soccerball_mesh(refinement_levels=3)\n\nprint(str_fmt.format(\"BuckyBall mesh\", socc0.npoints, soccR.npoints))\n\n\n## Need a reproducible hierarchy ... \nring0 = stripy.spherical_meshes.uniform_ring_mesh(resolution=5, refinement_levels=0)\n\nlon, lat = ring0.uniformly_refine_triangulation()\nring1 = stripy.sTriangulation(lon, lat)\n\nlon, lat = ring1.uniformly_refine_triangulation()\nring2 = stripy.sTriangulation(lon, lat)\n\nlon, lat = ring2.uniformly_refine_triangulation()\nring3 = stripy.sTriangulation(lon, lat)\n\nlon, lat = ring3.uniformly_refine_triangulation()\nringR = stripy.sTriangulation(lon, lat)\n\n\n# ring2 = stripy.uniform_ring_mesh(resolution=6, refinement_levels=2)\n# ringR = stripy.uniform_ring_mesh(resolution=6, refinement_levels=4)\n\nprint(str_fmt.format(\"Ring mesh (9)\", ring0.npoints, ringR.npoints))\n\nrandR = stripy.spherical_meshes.random_mesh(number_of_points=5000)\nrand0 = stripy.sTriangulation(lons=randR.lons[::50],lats=randR.lats[::50])\nrand2 = stripy.sTriangulation(lons=randR.lons[::25],lats=randR.lats[::25])\n\nprint(str_fmt.format(\"Random mesh (6)\", rand0.npoints, randR.npoints))\n\n\nprint(\"Octo: {}\".format(octo0.__doc__))\nprint(\"Cube: {}\".format(cube0.__doc__))\nprint(\"Ico: {}\".format(ico0.__doc__))\nprint(\"Socc: {}\".format(socc0.__doc__))\nprint(\"Ring: {}\".format(ring0.__doc__))\nprint(\"Random: {}\".format(randR.__doc__))",
"Analysis of the characteristics of the triangulations\nWe plot a histogram of the (spherical) areas of the triangles in each of the triangulations normalised by the average area. This is one \nmeasure of the uniformity of each mesh.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef area_histo(mesh):\n \n freq, area_bin = np.histogram(mesh.areas(), bins=20)\n area = 0.5 * (area_bin[1:] + area_bin[:-1])\n (area * freq)\n norm_area = area / mesh.areas().mean()\n \n return norm_area, 0.25 * freq*area / np.pi**2\n\ndef add_plot(axis, mesh, xlim, ylim):\n u, v = area_histo(mesh)\n axis.bar(u, v, width=0.025)\n axis.set_xlim(xlim)\n axis.set_ylim(ylim)\n axis.plot([1.0,1.0], [0.0,1.5], linewidth=1.0, linestyle=\"-.\", color=\"Black\")\n\n return\n\n\n\nfig, ax = plt.subplots(4,2, sharey=True, figsize=(8,16))\n\nxlim=(0.75,1.5)\nylim=(0.0,0.125)\n\n# octahedron\n\nadd_plot(ax[0,0], octoR, xlim, ylim)\n\n# octahedron + faces\n\nadd_plot(ax[0,1], octoFR, xlim, ylim)\n\n\n# icosahedron\n\nadd_plot(ax[1,0], icoR, xlim, ylim)\n\n# icosahedron + faces\n\nadd_plot(ax[1,1], icoFR, xlim, ylim)\n\n\n# cube\n\nadd_plot(ax[2,0], cubeR, xlim, ylim)\n\n# C60\n\nadd_plot(ax[2,1], soccR, xlim, ylim)\n\n\n# ring\n\nadd_plot(ax[3,0], ringR, xlim, ylim)\n\n# random (this one is very different from the others ... )\n\naxis=ax[3,1]\nu, v = area_histo(randR)\naxis.bar(u, v, width=0.5)\naxis.set_xlim(0.0,11.0)\naxis.set_ylim(0,0.125)\naxis.plot([1.0,1.0], [0.0,1.5], linewidth=1.0, linestyle=\"-.\", color=\"Black\")\n\n\nfig.savefig(\"AreaDistributionsByMesh.png\", dpi=250, transparent=True)\n\n\n#ax.bar(norm_area, area*freq, width=0.01)\n",
"The icosahedron with faces looks like this\nIt is helpful to be able to view a mesh in 3D to verify that it is an appropriate choice. Here, for example, is the icosahedron with additional points in the centroid of the faces.\nThis produces triangles with a narrow area distribution. In three dimensions it is easy to see the origin of the size variations.",
"## The icosahedron with faces in 3D view \n\nimport lavavu\n\n## or smesh = icoF0\nsmesh = icoFR\n\nlv = lavavu.Viewer(border=False, background=\"#FFFFFF\", resolution=[1000,600], near=-10.0)\n\ntris = lv.triangles(\"triangulation\", wireframe=True, colour=\"#444444\", opacity=0.8)\ntris.vertices(smesh.points)\ntris.indices(smesh.simplices)\n\ntris2 = lv.triangles(\"triangles\", wireframe=False, colour=\"#77ff88\", opacity=0.8)\ntris2.vertices(smesh.points)\ntris2.indices(smesh.simplices)\n\nnodes = lv.points(\"nodes\", pointsize=2.0, pointtype=\"shiny\", colour=\"#448080\", opacity=0.75)\nnodes.vertices(smesh.points)\n\n\nlv.control.Panel()\nlv.control.Range('specular', range=(0,1), step=0.1, value=0.4)\nlv.control.Checkbox(property='axis')\nlv.control.ObjectList()\nlv.control.show()\n\n\n%matplotlib inline\n\nimport gdal\nimport cartopy\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n\nglobal_extent = [-180.0, 180.0, -90.0, 90.0]\n\nprojection1 = ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None)\nprojection2 = ccrs.Mollweide(central_longitude=-120)\nprojection3 = ccrs.PlateCarree()\nbase_projection = ccrs.PlateCarree()",
"Plot and compare the predefined meshes",
"def mesh_fig(mesh, meshR, name):\n\n fig = plt.figure(figsize=(10, 10), facecolor=\"none\")\n ax = plt.subplot(111, projection=ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None))\n ax.coastlines(color=\"lightgrey\")\n ax.set_global()\n\n generator = mesh\n refined = meshR\n\n lons0 = np.degrees(generator.lons)\n lats0 = np.degrees(generator.lats)\n\n lonsR = np.degrees(refined.lons)\n latsR = np.degrees(refined.lats)\n\n lst = refined.lst\n lptr = refined.lptr\n\n\n ax.scatter(lons0, lats0, color=\"Red\",\n marker=\"o\", s=150.0, transform=ccrs.Geodetic())\n\n ax.scatter(lonsR, latsR, color=\"DarkBlue\",\n marker=\"o\", s=50.0, transform=ccrs.Geodetic())\n\n \n segs = refined.identify_segments()\n\n for s1, s2 in segs:\n ax.plot( [lonsR[s1], lonsR[s2]],\n [latsR[s1], latsR[s2]], \n linewidth=0.5, color=\"black\", transform=ccrs.Geodetic())\n\n fig.savefig(name, dpi=250, transparent=True)\n \n return\n\nmesh_fig(octo0, octo2, \"Octagon\" )\nmesh_fig(octoF0, octoF2, \"OctagonF\" )\n\nmesh_fig(ico0, ico2, \"Icosahedron\" )\nmesh_fig(icoF0, icoF2, \"IcosahedronF\" )\n\nmesh_fig(cube0, cube2, \"Cube\")\nmesh_fig(socc0, socc2, \"SoccerBall\")\n\nmesh_fig(ring0, ring2, \"Ring\")\nmesh_fig(rand0, rand2, \"Random\")\n\n\n\n",
"The next example is Ex3-Interpolation"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/messy-consortium/cmip6/models/sandbox-1/aerosol.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: MESSY-CONSORTIUM\nSource ID: SANDBOX-1\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:10\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'aerosol')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of aerosol model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrognostic variables in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of tracers in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre aerosol calculations generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the aerosol model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nThree dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Variables 2D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTwo dimensional forcing variables, e.g. land-sea mask definition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Frequency\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nFrequency with which meteorological forcings are applied (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Resolution\nResolution in the aerosol model grid\n5.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of transport in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for aerosol transport modeling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n",
"7.3. Mass Conservation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to ensure mass conservation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.4. Convention\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTransport by convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of emissions in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prescribed Climatology\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nSpecify the climatology type for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n",
"8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an \"other method\"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Other Method Characteristics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCharacteristics of the \"other method\" used for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of concentrations in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as mass mixing ratios.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of optical and radiative properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Optical Radiative Properties --> Absorption\nAbsorption properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.2. Dust\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Organics\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there external mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Internal\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.3. Mixing Rule\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixing rule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact size?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.2. Internal Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact internal mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Shortwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of shortwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol-cloud interactions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Twomey\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the Twomey effect included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.3. Twomey Minimum Ccn\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Drizzle\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect drizzle?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.5. Cloud Lifetime\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect cloud lifetime?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the Aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n",
"16.3. Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther model components coupled to the Aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.4. Gas Phase Precursors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of gas phase aerosol precursors.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.5. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.6. Bulk Scheme Species\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of species covered by the bulk scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ZuckermanLab/NMpathAnalysis
|
test/nm_path_analysis_test.ipynb
|
gpl-3.0
|
[
"Testing the non-Markovian Path Analysis Package",
"import sys\nsys.path.append(\"../nmpath/\")\nfrom tools_for_notebook0 import *\n%matplotlib inline\nfrom mappers import rectilinear_mapper\nfrom ensembles import Ensemble, DiscreteEnsemble, PathEnsemble, DiscretePathEnsemble",
"2D Toy model",
"plot_traj([],[])",
"MC simulation",
"#Generating MC trajectories\nmc_traj1_2d = mc_simulation2D(100000)\nmc_traj2_2d = mc_simulation2D(10000)",
"1 - Ensemble class (analysis of continuous trajectories)\nStores an ensemble (list) of trajectories (np.arrays). The ensemble could have any number of trajectories, including no trajectories at all.\nCreating an Ensemble",
"# Empty ensemble with no trajectories\nmy_ensemble = Ensemble()",
"From a single trajectory:",
"# from a single trajectory\nmy_ensemble = Ensemble([mc_traj1_2d],verbose=True)",
"From a list of trajectories:",
"# We have to set list_of_trajs = True\n\nmy_list_of_trajs = [mc_traj1_2d, mc_traj2_2d]\n\nmy_ensemble = Ensemble(my_list_of_trajs, verbose=True)",
"Ensembles are iterable objects",
"for traj in my_ensemble:\n print(len(traj))",
"Adding trajectories to the Ensemble\nNew trajectories can be added to the ensemble as long as there is consistency in the number of variables.",
"my_ensemble = Ensemble(verbose=True)\n\nmy_ensemble.add_trajectory(mc_traj1_2d)\n\nmy_ensemble.add_trajectory(mc_traj2_2d)",
"\"Printing\" the ensemble",
"print(my_ensemble)",
"Defining states and computing MFPTs\nWhen the class is Ensemble, the states are defined as intervals",
"stateA = [[0,pi],[0,pi]]\nstateB = [[5*pi,6*pi],[5*pi,6*pi]]\n\nmy_ensemble.empirical_mfpts(stateA, stateB)",
"Sum of ensembles (ensemble + ensemble)",
"seq1 = mc_simulation2D(20000)\nseq2 = mc_simulation2D(20000)\n\nmy_e1 = Ensemble([seq1])\nmy_e2 = Ensemble([seq2])\n\nensemble1 = my_e1 + my_e2",
"Another simple example",
"e1 = Ensemble([[1.,2.,3.,4.]],verbose=True)\ne2 = Ensemble([[2,3,4,5]])\ne3 = Ensemble([[2,1,1,4]])\n\nmy_ensembles = [e1, e2, e3]\n\nensemble_tot = Ensemble([])\n\nfor ens in my_ensembles:\n ensemble_tot += ens\n\n#ensemble_tot.mfpts([1,1],[4,4])",
"Computing the count matrix and transition matrix",
"n_states = N**2\nbin_bounds = [[i*pi for i in range(7)],[i*pi for i in range(7)]]\nC1 = my_ensemble._count_matrix(n_states, map_function=rectilinear_mapper(bin_bounds))\nprint(C1)\n\nK1 = my_ensemble._mle_transition_matrix(n_states, map_function=rectilinear_mapper(bin_bounds))\nprint(K1)",
"2 - PathEnsemble class\nCreating a path ensemble object",
"#p_ensemble = PathEnsemble()",
"From ensemble",
"p_ensemble = PathEnsemble.from_ensemble(my_ensemble, stateA, stateB)\n\nprint(p_ensemble)",
"MFPTs",
"p_ensemble.empirical_mfpts(stateA, stateB)",
"Count matrix",
"print(p_ensemble._count_matrix(n_states, mapping_function2D))\n\n#clusters = p_ensemble.cluster(distance_metric = 'RMSD', n_cluster=10, method = 'K-means')",
"3 - DiscreteEnsemble class\nWe can generate a discrete ensemble from the same mapping function and we should obtain exactly the same result:",
"d_ens = DiscreteEnsemble.from_ensemble(my_ensemble, mapping_function2D)\nprint(d_ens)",
"Count matrix and transition matrix",
"C2 = d_ens._count_matrix(n_states)\nprint(C2)\n\nK2= d_ens._mle_transition_matrix(n_states)\nprint(K2)",
"Defining states and computing MFPTs\nThe states are now considered sets; defining the states as follows, we should obtain the same results",
"stateA = [0]\nstateB = [N*N-1]\n\nd_ens.empirical_mfpts(stateA, stateB)",
"Generating a Discrete Ensemble from the transition matrix",
"d_ens2 = DiscreteEnsemble.from_transition_matrix(K2, sim_length = 100000)\n\n#d_ens2.mfpts(stateA,stateB)",
"4 - DiscretePathEnsemble class\nCreating the DPE\nFrom Ensemble",
"dpathEnsemble = DiscretePathEnsemble.from_ensemble(my_ensemble, stateA, stateB, mapping_function2D)\nprint(dpathEnsemble)\n\n#MFPT from the transition matrix\ndpathEnsemble.nm_mfpt(ini_probs = None, n_states = N*N)",
"From the transition matrix",
"n_paths = 200\n\ndpathEnsemble2 = DiscretePathEnsemble.from_transition_matrix\\\n (K2, stateA = stateA, stateB = stateB, n_paths = n_paths,ini_pops = [1])\n \nprint(dpathEnsemble2)",
"Fundamental sequence",
"FSs, weights, count = dpathEnsemble2.weighted_fundamental_sequences(K2)\nsize = len(FSs)\n\npaths = dpathEnsemble2.trajectories\nprint(count)",
"Plotting paths A -> B",
"discrete = [True for i in range(size)]\n\nplot_traj([[paths[i],[]] for i in range(size)] , discrete, \\\n line_width=0.2, std=0.5, color='k', title = '{} paths A->B'.format(n_paths))\n",
"Plotting Fundamental Sequences A -> B",
"plot_traj([[FSs[i],[]] for i in range(size)] ,discrete, \\\n line_width=0.5, std=0.2, color='k', title = '{} FSs A->B'.format(n_paths))\n\n\nlw = [weights[i]*100 for i in range(size)]\n\n#np.random.seed(12)\nplot_traj([[FSs[i],[]] for i in range(size)] ,discrete=[True for i in range(size)],\\\n line_width = lw,std = 0.002, alpha=0.25)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
geography-munich/sciprog
|
material/sub/jrjohansson/Lecture-3-Scipy.ipynb
|
apache-2.0
|
[
"SciPy - Library of scientific algorithms for Python\nJ.R. Johansson (jrjohansson at gmail.com)\nThe latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.\nThe other notebooks in this lecture series are indexed at http://jrjohansson.github.io.",
"# what is this line all about? Answer in lecture 4\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image",
"Introduction\nThe SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are:\n\nSpecial functions (scipy.special)\nIntegration (scipy.integrate)\nOptimization (scipy.optimize)\nInterpolation (scipy.interpolate)\nFourier Transforms (scipy.fftpack)\nSignal Processing (scipy.signal)\nLinear Algebra (scipy.linalg)\nSparse Eigenvalue Problems (scipy.sparse)\nStatistics (scipy.stats)\nMulti-dimensional image processing (scipy.ndimage)\nFile IO (scipy.io)\n\nEach of these submodules provides a number of functions and classes that can be used to solve problems in their respective topics.\nIn this lecture we will look at how to use some of these subpackages.\nTo access the SciPy package in a Python program, we start by importing everything from the scipy module.",
"from scipy import *",
"If we only need to use part of the SciPy framework we can selectively include only those modules we are interested in. For example, to include the linear algebra package under the name la, we can do:",
"import scipy.linalg as la",
"Special functions\nA large number of mathematical special functions are important for many computational physics problems. SciPy provides implementations of a very extensive set of special functions. For details, see the list of functions in the reference documentation at http://docs.scipy.org/doc/scipy/reference/special.html#module-scipy.special. \nTo demonstrate the typical usage of special functions we will look in more detail at the Bessel functions:",
"#\n# The scipy.special module includes a large number of Bessel-functions\n# Here we will use the functions jn and yn, which are the Bessel functions \n# of the first and second kind and real-valued order. We also include the \n# function jn_zeros and yn_zeros that gives the zeroes of the functions jn\n# and yn.\n#\nfrom scipy.special import jn, yn, jn_zeros, yn_zeros\n\nn = 0 # order\nx = 0.0\n\n# Bessel function of first kind\nprint \"J_%d(%f) = %f\" % (n, x, jn(n, x))\n\nx = 1.0\n# Bessel function of second kind\nprint \"Y_%d(%f) = %f\" % (n, x, yn(n, x))\n\nx = linspace(0, 10, 100)\n\nfig, ax = plt.subplots()\nfor n in range(4):\n ax.plot(x, jn(n, x), label=r\"$J_%d(x)$\" % n)\nax.legend();\n\n# zeros of Bessel functions\nn = 0 # order\nm = 4 # number of roots to compute\njn_zeros(n, m)",
"Integration\nNumerical integration: quadrature\nNumerical evaluation of a function of the type\n$\\displaystyle \\int_a^b f(x) dx$\nis called numerical quadrature, or simply quadrature. SciPy provides a series of functions for different kinds of quadrature, for example the quad, dblquad and tplquad for single, double and triple integrals, respectively.",
"from scipy.integrate import quad, dblquad, tplquad",
"The quad function takes a large number of optional arguments, which can be used to fine-tune the behaviour of the function (try help(quad) for details).\nThe basic usage is as follows:",
"# define a simple function for the integrand\ndef f(x):\n return x\n\nx_lower = 0 # the lower limit of x\nx_upper = 1 # the upper limit of x\n\nval, abserr = quad(f, x_lower, x_upper)\n\nprint \"integral value =\", val, \", absolute error =\", abserr ",
"If we need to pass extra arguments to integrand function we can use the args keyword argument:",
"def integrand(x, n):\n \"\"\"\n Bessel function of first kind and order n. \n \"\"\"\n return jn(n, x)\n\n\nx_lower = 0 # the lower limit of x\nx_upper = 10 # the upper limit of x\n\nval, abserr = quad(integrand, x_lower, x_upper, args=(3,))\n\nprint val, abserr ",
"For simple functions we can use a lambda function (name-less function) instead of explicitly defining a function for the integrand:",
"val, abserr = quad(lambda x: exp(-x ** 2), -Inf, Inf)\n\nprint \"numerical =\", val, abserr\n\nanalytical = sqrt(pi)\nprint \"analytical =\", analytical",
"As shown in the example above, we can also use 'Inf' or '-Inf' as integral limits.\nHigher-dimensional integration works in the same way:",
"def integrand(x, y):\n return exp(-x**2-y**2)\n\nx_lower = 0 \nx_upper = 10\ny_lower = 0\ny_upper = 10\n\nval, abserr = dblquad(integrand, x_lower, x_upper, lambda x : y_lower, lambda x: y_upper)\n\nprint val, abserr ",
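The limits of the inner integral passed to `dblquad` may in fact depend on the outer variable, which is why they are given as functions. A minimal sketch (this exact example is not in the lecture, and assumes SciPy is installed): computing the area of the triangle $0 \le y \le x$, $0 \le x \le 1$. Note that `dblquad` calls the integrand as `f(y, x)`, inner variable first.

```python
from scipy.integrate import dblquad

# Area of the triangle 0 <= y <= x, 0 <= x <= 1: integrand is the constant 1,
# and the upper limit of the inner (y) integral is the function x -> x.
area, err = dblquad(lambda y, x: 1.0, 0, 1, lambda x: 0, lambda x: x)
print(area)  # 0.5
```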
"Note how we had to pass lambda functions for the limits for the y integration, since these in general can be functions of x.\nOrdinary differential equations (ODEs)\nSciPy provides two different ways to solve ODEs: an API based on the function odeint, and an object-oriented API based on the class ode. Usually odeint is easier to get started with, but the ode class offers a finer level of control.\nHere we will use the odeint function. For more information about the class ode, try help(ode). It does pretty much the same thing as odeint, but in an object-oriented fashion.\nTo use odeint, first import it from the scipy.integrate module",
"from scipy.integrate import odeint, ode",
"A system of ODEs is usually formulated in standard form before it is attacked numerically. The standard form is:\n$y' = f(y, t)$\nwhere \n$y = [y_1(t), y_2(t), ..., y_n(t)]$ \nand $f$ is some function that gives the derivatives of the functions $y_i(t)$. To solve an ODE we need to know the function $f$ and an initial condition, $y(0)$.\nNote that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives.\nOnce we have defined the Python function f and array y_0 (that is $f$ and $y(0)$ in the mathematical formulation), we can use the odeint function as:\ny_t = odeint(f, y_0, t)\n\nwhere t is an array with time-coordinates for which to solve the ODE problem. y_t is an array with one row for each point in time in t, where each column corresponds to a solution y_i(t) at that point in time. \nWe will see how we can implement f and y_0 in Python code in the examples below.\nExample: double pendulum\nLet's consider a physical example: the double compound pendulum, described in some detail here: http://en.wikipedia.org/wiki/Double_pendulum",
"Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')",
"The equations of motion of the pendulum are given on the wiki page:\n${\\dot \\theta_1} = \\frac{6}{m\\ell^2} \\frac{ 2 p_{\\theta_1} - 3 \\cos(\\theta_1-\\theta_2) p_{\\theta_2}}{16 - 9 \\cos^2(\\theta_1-\\theta_2)}$\n${\\dot \\theta_2} = \\frac{6}{m\\ell^2} \\frac{ 8 p_{\\theta_2} - 3 \\cos(\\theta_1-\\theta_2) p_{\\theta_1}}{16 - 9 \\cos^2(\\theta_1-\\theta_2)}.$\n${\\dot p_{\\theta_1}} = -\\frac{1}{2} m \\ell^2 \\left [ {\\dot \\theta_1} {\\dot \\theta_2} \\sin (\\theta_1-\\theta_2) + 3 \\frac{g}{\\ell} \\sin \\theta_1 \\right ]$\n${\\dot p_{\\theta_2}} = -\\frac{1}{2} m \\ell^2 \\left [ -{\\dot \\theta_1} {\\dot \\theta_2} \\sin (\\theta_1-\\theta_2) + \\frac{g}{\\ell} \\sin \\theta_2 \\right]$\nTo make the Python code simpler to follow, let's introduce new variable names and the vector notation: $x = [\\theta_1, \\theta_2, p_{\\theta_1}, p_{\\theta_2}]$\n${\\dot x_1} = \\frac{6}{m\\ell^2} \\frac{ 2 x_3 - 3 \\cos(x_1-x_2) x_4}{16 - 9 \\cos^2(x_1-x_2)}$\n${\\dot x_2} = \\frac{6}{m\\ell^2} \\frac{ 8 x_4 - 3 \\cos(x_1-x_2) x_3}{16 - 9 \\cos^2(x_1-x_2)}$\n${\\dot x_3} = -\\frac{1}{2} m \\ell^2 \\left [ {\\dot x_1} {\\dot x_2} \\sin (x_1-x_2) + 3 \\frac{g}{\\ell} \\sin x_1 \\right ]$\n${\\dot x_4} = -\\frac{1}{2} m \\ell^2 \\left [ -{\\dot x_1} {\\dot x_2} \\sin (x_1-x_2) + \\frac{g}{\\ell} \\sin x_2 \\right]$",
"g = 9.82\nL = 0.5\nm = 0.1\n\ndef dx(x, t):\n \"\"\"\n The right-hand side of the pendulum ODE\n \"\"\"\n x1, x2, x3, x4 = x[0], x[1], x[2], x[3]\n \n dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * cos(x1-x2) * x4)/(16 - 9 * cos(x1-x2)**2)\n dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * cos(x1-x2) * x3)/(16 - 9 * cos(x1-x2)**2)\n dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * sin(x1-x2) + 3 * (g/L) * sin(x1))\n dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * sin(x1-x2) + (g/L) * sin(x2))\n \n return [dx1, dx2, dx3, dx4]\n\n# choose an initial state\nx0 = [pi/4, pi/2, 0, 0]\n\n# time coodinate to solve the ODE for: from 0 to 10 seconds\nt = linspace(0, 10, 250)\n\n# solve the ODE problem\nx = odeint(dx, x0, t)\n\n# plot the angles as a function of time\n\nfig, axes = plt.subplots(1,2, figsize=(12,4))\naxes[0].plot(t, x[:, 0], 'r', label=\"theta1\")\naxes[0].plot(t, x[:, 1], 'b', label=\"theta2\")\n\n\nx1 = + L * sin(x[:, 0])\ny1 = - L * cos(x[:, 0])\n\nx2 = x1 + L * sin(x[:, 1])\ny2 = y1 - L * cos(x[:, 1])\n \naxes[1].plot(x1, y1, 'r', label=\"pendulum1\")\naxes[1].plot(x2, y2, 'b', label=\"pendulum2\")\naxes[1].set_ylim([-1, 0])\naxes[1].set_xlim([1, -1]);",
"A simple animation of the pendulum motion. We will see how to make better animations in Lecture 4.",
"from IPython.display import display, clear_output\nimport time\n\nfig, ax = plt.subplots(figsize=(4,4))\n\nfor t_idx, tt in enumerate(t[:200]):\n\n x1 = + L * sin(x[t_idx, 0])\n y1 = - L * cos(x[t_idx, 0])\n\n x2 = x1 + L * sin(x[t_idx, 1])\n y2 = y1 - L * cos(x[t_idx, 1])\n \n ax.cla() \n ax.plot([0, x1], [0, y1], 'r.-')\n ax.plot([x1, x2], [y1, y2], 'b.-')\n ax.set_ylim([-1.5, 0.5])\n ax.set_xlim([1, -1])\n\n clear_output() \n display(fig)\n\n time.sleep(0.1)",
"Example: Damped harmonic oscillator\nODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillator. This problem is well described on the wiki page: http://en.wikipedia.org/wiki/Damping\nThe equation of motion for the damped oscillator is:\n$\\displaystyle \\frac{\\mathrm{d}^2x}{\\mathrm{d}t^2} + 2\\zeta\\omega_0\\frac{\\mathrm{d}x}{\\mathrm{d}t} + \\omega^2_0 x = 0$\nwhere $x$ is the position of the oscillator, $\\omega_0$ is the frequency, and $\\zeta$ is the damping ratio. To write this second-order ODE in standard form we introduce $p = \\frac{\\mathrm{d}x}{\\mathrm{d}t}$:\n$\\displaystyle \\frac{\\mathrm{d}p}{\\mathrm{d}t} = - 2\\zeta\\omega_0 p - \\omega^2_0 x$\n$\\displaystyle \\frac{\\mathrm{d}x}{\\mathrm{d}t} = p$\nIn the implementation of this example we will add extra arguments to the RHS function for the ODE, rather than using global variables as we did in the previous example. As a consequence of the extra arguments to the RHS, we need to pass a keyword argument args to the odeint function:",
"def dy(y, t, zeta, w0):\n \"\"\"\n The right-hand side of the damped oscillator ODE\n \"\"\"\n x, p = y[0], y[1]\n \n dx = p\n dp = -2 * zeta * w0 * p - w0**2 * x\n\n return [dx, dp]\n\n# initial state: \ny0 = [1.0, 0.0]\n\n# time coordinate to solve the ODE for\nt = linspace(0, 10, 1000)\nw0 = 2*pi*1.0\n\n# solve the ODE problem for three different values of the damping ratio\n\ny1 = odeint(dy, y0, t, args=(0.0, w0)) # undamped\ny2 = odeint(dy, y0, t, args=(0.2, w0)) # under damped\ny3 = odeint(dy, y0, t, args=(1.0, w0)) # critical damping\ny4 = odeint(dy, y0, t, args=(5.0, w0)) # over damped\n\nfig, ax = plt.subplots()\nax.plot(t, y1[:,0], 'k', label=\"undamped\", linewidth=0.25)\nax.plot(t, y2[:,0], 'r', label=\"under damped\")\nax.plot(t, y3[:,0], 'b', label=r\"critical damping\")\nax.plot(t, y4[:,0], 'g', label=\"over damped\")\nax.legend();",
"Fourier transform\nFourier transforms are one of the universal tools in computational physics, which appear over and over again in different contexts. SciPy provides functions for accessing the classic FFTPACK library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.\nTo use the fftpack module in a python program, include it using:",
"from numpy.fft import fftfreq\nfrom scipy.fftpack import *",
"To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section:",
"N = len(t)\ndt = t[1]-t[0]\n\n# calculate the fast fourier transform\n# y2 is the solution to the under-damped oscillator from the previous section\nF = fft(y2[:,0]) \n\n# calculate the frequencies for the components in F\nw = fftfreq(N, dt)\n\nfig, ax = plt.subplots(figsize=(9,3))\nax.plot(w, abs(F));",
"Since the signal is real, the spectrum is symmetric. We therefore only need to plot the part that corresponds to the positive frequencies. To extract that part of the w and F we can use some of the indexing tricks for NumPy arrays that we saw in Lecture 2:",
"indices = where(w > 0) # select only indices for elements that correspond to positive frequencies\nw_pos = w[indices]\nF_pos = F[indices]\n\nfig, ax = plt.subplots(figsize=(9,3))\nax.plot(w_pos, abs(F_pos))\nax.set_xlim(0, 5);",
"As expected, we now see a peak in the spectrum that is centered around 1, which is the frequency we used in the damped oscillator example.\nLinear algebra\nThe linear algebra module contains a lot of matrix-related functions, including linear equation solving, eigenvalue solvers, matrix functions (for example matrix-exponentiation), a number of different decompositions (SVD, LU, Cholesky), etc. \nDetailed documentation is available at: http://docs.scipy.org/doc/scipy/reference/linalg.html\nHere we will look at how to use some of these functions:\nLinear equation systems\nLinear equation systems of the matrix form\n$A x = b$\nwhere $A$ is a matrix and $x,b$ are vectors can be solved like:",
"from scipy.linalg import *\n\nA = array([[1,2,3], [4,5,6], [7,8,9]])\nb = array([1,2,3])\n\nx = solve(A, b)\n\nx\n\n# check\ndot(A, x) - b",
"We can also do the same with\n$A X = B$\nwhere $A, B, X$ are matrices:",
"A = rand(3,3)\nB = rand(3,3)\n\nX = solve(A, B)\n\nX\n\n# check\nnorm(dot(A, X) - B)",
"Eigenvalues and eigenvectors\nThe eigenvalue problem for a matrix $A$:\n$\\displaystyle A v_n = \\lambda_n v_n$\nwhere $v_n$ is the $n$th eigenvector and $\\lambda_n$ is the $n$th eigenvalue.\nTo calculate eigenvalues of a matrix, use the eigvals and for calculating both eigenvalues and eigenvectors, use the function eig:",
"evals = eigvals(A)\n\nevals\n\nevals, evecs = eig(A)\n\nevals\n\nevecs",
"The eigenvector corresponding to the $n$th eigenvalue (stored in evals[n]) is the $n$th column in evecs, i.e., evecs[:,n]. To verify this, let's try multiplying an eigenvector with the matrix and compare the result to the product of the eigenvector and the eigenvalue:",
"n = 1\n\nnorm(dot(A, evecs[:,n]) - evals[n] * evecs[:,n])",
"There are also more specialized eigensolvers, like the eigh for Hermitian matrices. \nMatrix operations",
"# the matrix inverse\ninv(A)\n\n# determinant\ndet(A)\n\n# norms of various orders\nnorm(A, ord=2), norm(A, ord=Inf)",
"Sparse matrices\nSparse matrices are often useful in numerical simulations dealing with large systems, if the problem can be described in matrix form where the matrices or vectors mostly contain zeros. SciPy has good support for sparse matrices, with basic linear algebra operations (such as equation solving, eigenvalue calculations, etc).\nThere are many possible strategies for storing sparse matrices in an efficient way. Some of the most common are the so-called coordinate form (COO), list of list (LIL) form, and compressed-sparse column CSC (and row, CSR). Each format has some advantages and disadvantages. Most computational algorithms (equation solving, matrix-matrix multiplication, etc) can be efficiently implemented using CSR or CSC formats, but they are not so intuitive and not so easy to initialize. So often a sparse matrix is initially created in COO or LIL format (where we can efficiently add elements to the sparse matrix data), and then converted to CSC or CSR before being used in real calculations.\nFor more information about these sparse formats, see e.g. http://en.wikipedia.org/wiki/Sparse_matrix\nWhen we create a sparse matrix we have to choose which format it should be stored in. For example,",
"from scipy.sparse import *\n\n# dense matrix\nM = array([[1,0,0,0], [0,3,0,0], [0,1,1,0], [1,0,0,1]]); M\n\n# convert from dense to sparse\nA = csr_matrix(M); A\n\n# convert from sparse to dense\nA.todense()",
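The COO format mentioned above can be illustrated as follows (a sketch, not part of the lecture, and assuming SciPy is installed): the matrix is described by parallel arrays of row indices, column indices and values. This rebuilds the same matrix M as above before converting to CSR for computation.

```python
from scipy.sparse import coo_matrix
import numpy as np

# COO (coordinate) form: one (row, col, value) triple per nonzero entry
rows = np.array([0, 1, 2, 2, 3, 3])
cols = np.array([0, 1, 1, 2, 0, 3])
vals = np.array([1, 3, 1, 1, 1, 1])
A = coo_matrix((vals, (rows, cols)), shape=(4, 4))

# convert to CSR before doing real computations
A_csr = A.tocsr()
print(A_csr.toarray())
```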
"A more efficient way to create sparse matrices: create an empty matrix and populate it using matrix indexing (this avoids creating a potentially large dense matrix):",
"A = lil_matrix((4,4)) # empty 4x4 sparse matrix\nA[0,0] = 1\nA[1,1] = 3\nA[2,2] = A[2,1] = 1\nA[3,3] = A[3,0] = 1\nA\n\nA.todense()",
"Converting between different sparse matrix formats:",
"A\n\nA = csr_matrix(A); A\n\nA = csc_matrix(A); A",
"We can compute with sparse matrices like with dense matrices:",
"A.todense()\n\n(A * A).todense()\n\nA.todense()\n\nA.dot(A).todense()\n\nv = array([1,2,3,4])[:,newaxis]; v\n\n# sparse matrix - dense vector multiplication\nA * v\n\n# same result with dense matrix - dense vector multiplication\nA.todense() * v",
"Optimization\nOptimization (finding minima or maxima of a function) is a large field in mathematics, and optimization of complicated functions or in many variables can be rather involved. Here we will only look at a few very simple cases. For a more detailed introduction to optimization with SciPy see: http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html\nTo use the optimization module in scipy first include the optimize module:",
"from scipy import optimize",
"Finding a minimum\nLet's first look at how to find the minimum of a simple function of a single variable:",
"def f(x):\n return 4*x**3 + (x-2)**2 + x**4\n\nfig, ax = plt.subplots()\nx = linspace(-5, 3, 100)\nax.plot(x, f(x));",
"We can use the fmin_bfgs function to find the minimum of a function:",
"x_min = optimize.fmin_bfgs(f, -2)\nx_min \n\noptimize.fmin_bfgs(f, 0.5) ",
"We can also use the brent or fminbound functions. They have slightly different syntax and use different algorithms.",
"optimize.brent(f)\n\noptimize.fminbound(f, -4, 2)",
"Finding the roots of a function\nTo find a root of a function of the form $f(x) = 0$ we can use the fsolve function. It requires an initial guess:",
"omega_c = 3.0\ndef f(omega):\n # a transcendental equation: resonance frequencies of a low-Q SQUID terminated microwave resonator\n return tan(2*pi*omega) - omega_c/omega\n\nfig, ax = plt.subplots(figsize=(10,4))\nx = linspace(0, 3, 1000)\ny = f(x)\nmask = where(abs(y) > 50)\nx[mask] = y[mask] = NaN # get rid of vertical line when the function flip sign\nax.plot(x, y)\nax.plot([0, 3], [0, 0], 'k')\nax.set_ylim(-5,5);\n\noptimize.fsolve(f, 0.1)\n\noptimize.fsolve(f, 0.6)\n\noptimize.fsolve(f, 1.1)",
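As the calls above show, fsolve's answer depends on the initial guess. A hedged aside (brentq is not used in this lecture, and SciPy is assumed to be installed): when an interval with a sign change is known, a bracketing method converges to the root in that interval regardless of where the search starts.

```python
from scipy.optimize import brentq

# Find the root of x**2 - 2 on [0, 2], where the function changes sign
root = brentq(lambda x: x**2 - 2, 0, 2)
print(root)  # approximately sqrt(2) = 1.4142...
```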
"Interpolation\nInterpolation is simple and convenient in scipy: The interp1d function, when given arrays describing X and Y data, returns an object that behaves like a function that can be called for an arbitrary value of x (in the range covered by X), and it returns the corresponding interpolated y value:",
"from scipy.interpolate import *\n\ndef f(x):\n return sin(x)\n\nn = arange(0, 10) \nx = linspace(0, 9, 100)\n\ny_meas = f(n) + 0.1 * randn(len(n)) # simulate measurement with noise\ny_real = f(x)\n\nlinear_interpolation = interp1d(n, y_meas)\ny_interp1 = linear_interpolation(x)\n\ncubic_interpolation = interp1d(n, y_meas, kind='cubic')\ny_interp2 = cubic_interpolation(x)\n\nfig, ax = plt.subplots(figsize=(10,4))\nax.plot(n, y_meas, 'bs', label='noisy data')\nax.plot(x, y_real, 'k', lw=2, label='true function')\nax.plot(x, y_interp1, 'r', label='linear interp')\nax.plot(x, y_interp2, 'g', label='cubic interp')\nax.legend(loc=3);",
"Statistics\nThe scipy.stats module contains a large number of statistical distributions, statistical functions and tests. For a complete documentation of its features, see http://docs.scipy.org/doc/scipy/reference/stats.html.\nThere is also a very powerful python package for statistical modelling called statsmodels. See http://statsmodels.sourceforge.net for more details.",
"from scipy import stats\n\n# create a (discrete) random variable with Poisson distribution\n\nX = stats.poisson(3.5) # photon distribution for a coherent state with n=3.5 photons\n\nn = arange(0,15)\n\nfig, axes = plt.subplots(3,1, sharex=True)\n\n# plot the probability mass function (PMF)\naxes[0].step(n, X.pmf(n))\n\n# plot the cumulative distribution function (CDF)\naxes[1].step(n, X.cdf(n))\n\n# plot histogram of 1000 random realizations of the stochastic variable X\naxes[2].hist(X.rvs(size=1000));\n\n# create a (continuous) random variable with normal distribution\nY = stats.norm()\n\nx = linspace(-5,5,100)\n\nfig, axes = plt.subplots(3,1, sharex=True)\n\n# plot the probability distribution function (PDF)\naxes[0].plot(x, Y.pdf(x))\n\n# plot the cumulative distribution function (CDF)\naxes[1].plot(x, Y.cdf(x));\n\n# plot histogram of 1000 random realizations of the stochastic variable Y\naxes[2].hist(Y.rvs(size=1000), bins=50);",
"Statistics:",
"X.mean(), X.std(), X.var() # Poisson distribution\n\nY.mean(), Y.std(), Y.var() # normal distribution",
"Statistical tests\nTest if two sets of (independent) random data come from the same distribution:",
"t_statistic, p_value = stats.ttest_ind(X.rvs(size=1000), X.rvs(size=1000))\n\nprint \"t-statistic =\", t_statistic\nprint \"p-value =\", p_value",
"Since the p-value is very large we cannot reject the null hypothesis that the two sets of random data have the same mean.\nTo test whether a single sample of data has mean 0.1 (the true mean is 0.0):",
"stats.ttest_1samp(Y.rvs(size=1000), 0.1)",
"A low p-value means that we can reject the hypothesis that the mean of Y is 0.1.",
"Y.mean()\n\nstats.ttest_1samp(Y.rvs(size=1000), Y.mean())",
"Further reading\n\nhttp://www.scipy.org - The official web page for the SciPy project.\nhttp://docs.scipy.org/doc/scipy/reference/tutorial/index.html - A tutorial on how to get started using SciPy. \nhttps://github.com/scipy/scipy/ - The SciPy source code. \n\nVersions",
"%reload_ext version_information\n\n%version_information numpy, matplotlib, scipy"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/expose/BJKST.ipynb
|
mit
|
[
"2A.algo - Streaming algorithms: generalities\nData streams are nowadays present in many domains (social networks, e-commerce, Internet connection logs, etc.). Fast and relevant analysis of these streams is motivated by the sheer volume of the data, which often cannot be stored (at least not easily) and whose processing would be too heavy (think of computing the average age of Facebook's 1.86 billion users to convince yourself). This notebook focuses in particular on the BJKST algorithm.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\n%matplotlib inline",
"Introduction\nMore formally, consider a universe $U$ of size $n$ (a very large number) that cannot be stored in memory, and a sequence $S = (s_1, s_2, \\ldots, s_m, \\ldots)$ of elements of $U$. A streaming algorithm $\\mathcal{A}$ takes the stream $S$ as input and returns a function $f(S)$ (often real-valued).\nNote that the algorithm $\\mathcal{A}$ is often constrained to access the elements of $S$ sequentially and/or may traverse them only a finite number of times.\nA good streaming algorithm must satisfy several constraints:\n\nIt must be a good estimator of the true value $f^*(U)$ would take on the universe (more details in a moment)\nIt must be able to update itself quickly (in linear time or less) as the stream $S$ evolves\nIt must use little memory\n\nGiven a precision $\\epsilon > 0$ and a tolerance $\\delta > 0$, the algorithm $\\mathcal{A}$ must satisfy:\n$\\mathbb{P}(\\lvert \\frac{f^*(U) - f(S)}{f(S)} \\rvert \\leq \\epsilon) \\geq 1 - \\delta.$\nSome frequent examples of streaming algorithms:\n\nEstimating the mean, the median\nEstimating the number of distinct elements\nEstimating the frequencies of the elements\nEstimating the probability density\n\nEstimating the number of distinct elements: the BJKST algorithm\nThe BJKST algorithm estimates the number of distinct elements of a stream $S$. Its operation is fairly simple and relies on the notion of universal hashing, which we present below.\nUniversal hashing\nThe idea behind hash functions is to map elements of a set whose size is variable (or unknown) to a set of fixed size. The principle of universal hashing is to select a function $h$ at random in a family of hash functions $\\mathcal{H}$ and to guarantee a low probability of hash collisions.\nFormally, if $[n]$ denotes the set ${1, \\ldots, n}$, a family of functions $\\mathcal{H}$ is said to be <i>universal</i> if any function $h: U \\mapsto [n]$ chosen uniformly in the family $\\mathcal{H}$ satisfies $\\mathbb{P}(h(x) = h(y)) \\leq \\frac{1}{n}$ for every pair of distinct $x,y \\in U$.\nHere we consider the family $\\mathcal{H} = {h_{a,b}}$ where $h_{a,b}(x) = ((ax + b) \\space{} \\mathrm{mod} \\space{} p) \\space{} \\mathrm{mod} \\space{} n$, $x$ is an integer, $a \\in {1, \\ldots, p - 1}$, $b \\in {0, \\ldots, p - 1 }$ and $p$ is a prime number $\\geq n$. One can easily convince oneself that $h_{a,b}(x)$ is uniformly distributed over $[n]$ and that this family is universal (see Universal hashing for more details).\nCollisions\nLet us check the number of collisions numerically. Consider a large universe $U$, for instance $U = [n]$ with $n$ large.",
"n = 10**4\nU = range(n)",
"Let us choose an arbitrarily large prime number $p$ and a small hash size.",
"p = 4294967291\nm = 10",
"Let us choose a function $h$ uniformly in the family $\\mathcal{H}$",
"import random\na = random.randint(1, p)\nb = random.randint(0, p)\ndef h(x):\n return ((a*x + b) % p) % m",
"Let us draw distinct pairs $(x,y)$ from $U$",
"couples = set()\nfor i in range(500):\n x, y = random.sample(U, 2)\n couples.add((x, y))\nprint('Nombre de couples distincts = {}'.format(len(couples)))",
"For each pair, let us compute the hash values and count the number of collisions.",
"c = 0\nfor x, y, in couples:\n if (h(x) == h(y)):\n c += 1",
"The number of collisions divided by the number of distinct pairs $(x,y)$ gives us an estimate of the collision probability.",
"p_collisions = c / len(couples)\nprint('Probabilité de collision = {:.2f}%'.format(p_collisions * 100.0))",
"This value is close to the theoretical value $\\frac{1}{m}$. Let us carry out several draws to confirm this result.",
"import numpy\ncollisions = []\n# on reitere 100 fois\nfor _ in range(100):\n a = random.randint(1, p)\n b = random.randint(0, p)\n \n def h(x):\n return ((a*x + b) % p) % m\n \n couples = set()\n for i in range(500):\n x, y = random.sample(U, 2)\n couples.add((x, y))\n\n c = 0\n for x, y, in couples:\n if (h(x) == h(y)):\n c += 1\n collisions.append(c / len(couples))\np_collision = numpy.mean(collisions)\nprint('Probabilité de collision moyenne = {:.2f}%'.format(p_collision * 100.0))",
"This average probability is close to the theoretical value. Let us repeat for other values of $m$.",
"sizes = [10, 25, 50, 100, 250, 500, 750, 1000]\np_collision = []\np = 4294967291\n\nfor m in sizes: \n collisions = []\n # on reitere 100 fois\n for _ in range(100):\n a = random.randint(1, p)\n b = random.randint(0, p)\n\n def h(x):\n return ((a*x + b) % p) % m\n\n couples = set()\n for i in range(500):\n x, y = random.sample(U, 2)\n couples.add((x, y))\n\n c = 0\n for x, y, in couples:\n if (h(x) == h(y)):\n c += 1\n collisions.append(c / len(couples))\n p_collision.append(numpy.mean(collisions))\n\nimport matplotlib.pyplot as plt\nfix, ax = plt.subplots()\nplt.plot(sizes, p_collision)\nplt.xlabel(r'$m$')\nax.set_title('Ratio des collisions en fonction de la taille de hash')",
"The estimated collision probability is indeed inversely proportional to the hash size $m$.\nThe BJKST algorithm\nWe consider a universe $U$ of size $N$ containing $n$ distinct elements. For a stream $S = (a_1, a_2, \\ldots)$ of size $s$, let us try to estimate $n$ using the BJKST algorithm.",
"n = 10**3\nN = 10**4\n# nous tirons N entiers de 64bits (type i8) dont n sont distincts\nuniverse = numpy.random.randint(0, n, N, dtype='i8')\ns = 500\nstream = universe[-s:]",
"The idea behind the BJKST algorithm is to go through the elements of the stream and to fill a set $B$ by sampling. The initial sampling probability is $1$, and when $B$ becomes too large (beyond a certain threshold $B_{max}$) elements are removed and the sampling probability is decreased. At the end, the number of elements in $B$ and the final sampling probability allow us to estimate the number of distinct elements in $U$.\nFor $\\epsilon > 0$ we take $B_{max} = 1/ \\epsilon^2$:",
"# definissons un ensemble B\nB = set()\nepsilon = 0.1\nB_max = 1 / epsilon**2",
"Let us choose an arbitrarily large prime number $p$:",
"p = 4294967291",
"and randomly draw two distinct functions $h_{a_1, b_1}$ and $h_{a_2, b_2}$:",
"import random\n# deux couples (a_1, b_1) (a_1, b_2) distincts\na1, a2 = random.sample(range(1, p), 2)\nb1, b2 = random.sample(range(0, p), 2)\n\ndef h1(x):\n return ((a1*x + b1) % p) % s\n\ndef h2(x):\n return ((a2*x + b2) % p) % s",
"Let us initialize an integer $c$ to zero. For each $x$ in the stream we first compute its hash value $y = h_{a_1, b_1}(x)$:",
"c = 0\n# Prenons le premier élément du stream (à titre d'exemple)\nx = stream[0]\ny = h1(x)\nprint('x = {}, y = {}'.format(x, y))",
"The sampling probability is based on the number of trailing zeros in the binary decomposition of $y$. Various methods exist to compute this number (see Count the consecutive zero bits (trailing) on the right with modulus division and lookup for more details). For $s = 2^1$ and $s = 2^{10}$ the binary decomposition has $1$ and $10$ trailing zeros respectively:",
"mod_37bit_position = (32, 0, 1, 26, 2, 23, 27, 0, 3, 16, 24, 30, 28, 11, 0, 13, 4,\n 7, 17, 0, 25, 22, 31, 15, 29, 10, 12, 6, 0, 21, 14, 9, 5,\n 20, 8, 19, 18)\n\n# Un seul zéro à droite\ns = 2**1\nzeros = mod_37bit_position[(-s & s) % 37]\nprint('Decomposition binaire de 2**1 = {}, nombre de zeros a droite = {}'.format(bin(s), zeros))\n\n# Dix zéros à droite\ns = 2**10\nzeros = mod_37bit_position[(-s & s) % 37]\nprint('Decomposition binaire de 2**10 = {}, nombre de zeros a droite = {}'.format(bin(s), zeros))",
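As an aside (this alternative is not in the original notebook), modern Python can compute the same quantity without the lookup table: the lowest set bit of a positive integer $y$ is `y & -y`, and its bit length minus one is the number of trailing zeros.

```python
def trailing_zeros(y):
    # number of trailing zero bits of a positive integer y;
    # y & -y isolates the lowest set bit, e.g. 12 = 0b1100 -> 0b100
    return (y & -y).bit_length() - 1

print(trailing_zeros(2**10))  # 10
print(trailing_zeros(12))     # 2
```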
"Let $k$ denote the number of trailing zeros in the binary decomposition of $y$.",
"k = mod_37bit_position[(-y & y) % 37]\nprint('Decomposition binaire de y = {}, nombre de zeros a droite = {}'.format(bin(y), k))",
"Then we check whether $k \\geq c$. If so, we compute a new hash value of $x$, $z = h_{a_2, b_2}(x)$, and add the pair $(z, k)$ to the set $B$.",
"if (k >= c):\n z = h2(x)\n B.add((z, k))",
"At initialization the condition $k \\geq c$ is always satisfied (which corresponds to a sampling probability equal to $1$):",
"B",
"The set $B$ fills up until it reaches the size $B_{max}$. When this size is reached, $c$ is incremented and all pairs $(z, k)$ with $k < c$ are removed from $B$.",
"while (len(B) >= B_max):\n c += 1\n # on prend ici une copie de B\n for z, k in B.copy():\n if (k < c):\n B.remove((z, k)) ",
"Let us go through the stream and see what the set $B$ looks like:",
"for x in stream:\n y = h1(x)\n k = mod_37bit_position[(-y & y) % 37]\n if (k >= c):\n z = h2(x)\n B.add((z, k))\n while (len(B) >= B_max):\n c += 1\n for z, k in B.copy():\n if (k < c):\n B.remove((z, k)) \n\nprint('Taille de B = {}, c = {}'.format(len(B), c))",
"An estimator of the size of the universe is then $2^c \\mathrm{card}(B)$:",
"print('Estimation de la taille de U = {}'.format(2**c * len(B)))",
"To convince ourselves, note that on average the cardinality of $B$ equals the number of $y_i$ for which the number of trailing zeros in the binary decomposition is at least $c$. This corresponds to the number of $y_i$ that are divisible by $2^c$:\n$$\\mathbb{E}[\\mathrm{card}(B)] = \\mathbb{E}\\big[\\sum_{x_i} I(y_i = 0 \\space{} \\mathrm{mod} \\space{} 2^c)\\big] = \\sum_{x_i} \\mathbb{P}\\big( y_i = 0 \\space{} \\mathrm{mod} \\space{} 2^c\\big)$$\nThis is where the notion of a universal family comes in, since this last equality is only valid if the number of collisions between $y$ and $z$ is low when $x$ is hashed by $h_{a_1, b_1}$ and $h_{a_2, b_2}$. Indeed, if the number of collisions were too large, the size of $B$ would underestimate the number of distinct elements.\nSince the probability $\\mathbb{P}\\big( y_i = 0 \\space{} \\mathrm{mod} \\space{} 2^c\\big)$ equals $1 / 2^c$ if $y_i$ is uniformly distributed (write it down to convince yourself), we obtain:\n$$ \\mathbb{E}[\\mathrm{card}(B)] = \\frac{n}{2^c}.$$\nNumerical results\nLet us gather the code into a function",
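A quick numerical sanity check of the $1/2^c$ probability used in the derivation above (illustrative, not part of the original notebook): for $y$ uniform over a large range, the fraction of values divisible by $2^c$ should be close to $1/2^c$.

```python
import random

# Fraction of uniform draws on [0, 2**20) divisible by 2**c, with c = 4
c = 4
samples = [random.randrange(2**20) for _ in range(100_000)]
frac = sum(1 for y in samples if y % 2**c == 0) / len(samples)
print(frac)  # close to 1/16 = 0.0625
```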
"def BJKST(stream, epsilon):\n s = len(stream)\n a1, a2 = random.sample(range(1, p), 2)\n b1, b2 = random.sample(range(0, p), 2)\n def h1(x):\n return ((a1*x + b1) % p) % s\n def h2(x):\n return ((a2*x + b2) % p) % s\n c = 0\n B = set()\n B_max = 1.0 / epsilon**2\n for x in stream:\n y = h1(x)\n k = mod_37bit_position[(-y & y) % 37]\n if (k >= c):\n z = h2(x)\n B.add((z, k))\n while (len(B) >= B_max):\n c += 1\n for z, k in B.copy():\n if (k < c):\n B.remove((z, k)) \n return 2**c*len(B)",
"En pratique une estimation fiable du nombre d'éléments distincts requiert de générer plusieurs calculs indépendants de l'algorithme et de prendre la médiane. Regardons comment la qualité de l'estimation évolue en fonction de la précision $\\epsilon$ exigée et de la taille $s$ du stream.",
"epsilons = [0.5, 0.2, 0.1]\nsizes = [100, 250, 500, 1000, 2500, 5000]\nestimates = {}\nfor eps in epsilons:\n values = []\n for s in sizes:\n stream = universe[-s:]\n values.append(numpy.median([BJKST(stream, eps) for _ in range(100)]))\n estimates[eps] = values\n\nfor eps in estimates:\n plt.plot(sizes, estimates[eps], label = '$\\epsilon$ = {:.1f}'.format(eps))\nplt.axhline(y=n, color='r', linestyle='--', label='Vraie valeur de $n$')\nplt.title('Estimation de $n$')\nplt.xlabel('Taille du stream')\nplt.legend()\n\nepsilon = 0.1\nfor i in range(len(sizes)):\n print('Erreur relative = {0:.2f}%, s = {1}'.format(abs(estimates[epsilon][i]/ n - 1.0)*100.0, sizes[i]))",
"Nous observons que l'estimation converge vers la vraie valeur $n$ à mesure que la taille du stream augmente. Pour une précision $\\epsilon = 10\\%$ et une taille de stream égale à $5000$ l'erreur est de $0,8\\%$.\nLa fiabilité de l'estimation est décroissante avec le niveau de précision exigé, l'algorithme donne une bonne estimation de la vraie valeur (ligne horizontale rouge) pour $\\epsilon \\approx 0.3$.\nTemps de calcul en fonction de la taille du stream\nRegardons à présent comment le temps de calcul évolue en fonction de la taille du stream. Rappelons qu'un bon algorithme de streaming doit évoluer de manière au pire linéaire en fonction de la taille d'espace $s$.",
"import time\nepsilon = 0.1\nsize_bound = 15\nsizes = [100, 250, 500, 1000, 2500, 5000]\nm = 100\ntimes = []\nfor s in sizes:\n start = time.time()\n stream = universe[-s:]\n BJKST(stream, epsilon)\n times.append(time.time() - start)\ntimes = numpy.array(times)\n\nfix, ax = plt.subplots()\nplt.plot(sizes, times*1000)\nplt.title('Temps de calcul (en ms)')\nplt.xlabel('Taille du stream')",
"La complexité semble être linéaire ce qui est satisfaisant. Notons qu'aucun effort d'optimisation de performance (à part l'usage d'un <b>set</b>) n'a été fait à ce stade.\nUn peu plus sur la précision de l'estimateur\nLorsque la précision $\\epsilon$ est proche de $0$ l'estimation est moins bonne que pour une précision plus large. Pourquoi ?",
"import random\nimport numpy\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nn = 1000\nstream = numpy.arange(1000)\np = 4294967291\n\nmod_37bit_position = (32, 0, 1, 26, 2, 23, 27, 0, 3, 16, 24, 30, 28, 11, 0, 13, 4,\n 7, 17, 0, 25, 22, 31, 15, 29, 10, 12, 6, 0, 21, 14, 9, 5,\n 20, 8, 19, 18)\n\ndef BJKST(stream, B_max, h1, h2):\n c = 0\n B = set()\n R = []\n removed = 0\n for x in stream:\n y = h1(x)\n k = mod_37bit_position[(-y & y) % 37]\n if (k >= c):\n z = h2(x)\n B.add((z, k))\n while (len(B) >= B_max):\n c += 1\n for z, k in B.copy():\n if (k < c):\n B.remove((z, k))\n removed += 1\n R.append([removed, len(B), c])\n return numpy.array(R)",
"$h_1$ et $h_2$ égales à l'identité\nSi nous prenons $h_1$ et $h_2$ égales à l'identité l'ensemble $B$ se remplit linéairement: pour $c=1$ on enlève tous les nombres avec $k = 0$ (tous les nombres impairs donc la moitié) puis pour $c=2$ on enlève tous les nombres où $k = 0, 1$ c'est à dire encore la moitié des nombres rajoutés et ainsi de suite.. \nA la fin l'estimation est parfaite (cf graphe en fonction de l'avancement dans le stream)",
"B_max = 200\ndef h1(x):\n return x\ndef h2(x):\n return x\nR = BJKST(stream, B_max, h1, h2)\nestimate = 2**R[-1,2]*R[-1,1]\nprint('Estimated = {}, true = {}, c= {}'.format(estimate, n, R[-1,2]))\nD = numpy.concatenate((numpy.array([1]), numpy.diff(R[:,2])))\nchanges = stream[numpy.nonzero(D)]\n\nfix, ax = plt.subplots()\nax.plot(stream, R[:,0], color='red', label='total #removed')\nax.plot(stream, R[:,1], color='blue', label='len B')\nfor c in changes:\n ax.annotate('c = {}'.format(R[c, 2]), xy=(c, R[c, 1]), xytext=(c + 65, R[c,1] - 30), arrowprops=dict(arrowstyle='->'))\nax.legend(loc=(1.1, 0.9))\nplt.xlabel('$x$')\nplt.ylabel('size')",
"cas où la taille du hash est petite\nSi on prend un hash petit il faut regarder plus de nombres pour remplir et l'incrément de $c$ se fait \"plus tard\" dans le stream, d'où la mauvaise estimation.",
"B_max = 200\ns = B_max // 4\na1, a2 = random.sample(range(1, p), 2)\nb1, b2 = random.sample(range(0, p), 2)\ndef h1(x):\n return ((a1*x + b1) % p) % n\ndef h2(x):\n return ((a2*x + b2) % p) % s\nR = BJKST(stream, B_max, h1, h2)\nestimate = 2**R[-1,2]*R[-1,1]\nprint('Estimated = {}, true = {}, c= {}'.format(estimate, n, R[-1,2]))\nD = numpy.concatenate((numpy.array([1]),numpy.diff(R[:,2])))\nchanges = stream[numpy.nonzero(D)]\nfix, ax = plt.subplots()\nax.plot(stream, R[:,0], color='red', label='total #removed')\nax.plot(stream, R[:,1], color='blue', label='len B')\nfor c in changes:\n ax.annotate('c = {}'.format(R[c, 2]), xy=(c, R[c, 1]), xytext=(c + 65, R[c,1] - 30), arrowprops=dict(arrowstyle='->'))\nax.legend(loc=(1.1, 0.9))\nplt.xlabel('$x$')\nplt.ylabel('size')",
"cas où la taille de hash est plus grande\nSi on prend un une valeur de hash plus grande on se rapproche du cas $h_1 = h_2 = id$ et l'estimation est meilleure :",
"B_max = 200\ns = B_max\na1, a2 = random.sample(range(1, p), 2)\nb1, b2 = random.sample(range(0, p), 2)\ndef h1(x):\n return ((a1*x + b1) % p) % n\ndef h2(x):\n return ((a2*x + b2) % p) % s\nR = BJKST(stream, B_max, h1, h2)\nestimate = 2**R[-1,2]*R[-1,1]\nprint('Estimated = {}, true = {}, c= {}'.format(estimate, n, R[-1,2]))\nD =numpy.concatenate((numpy.array([1]), numpy.diff(R[:,2])))\nchanges = stream[numpy.nonzero(D)]\nfix, ax = plt.subplots()\nax.plot(stream, R[:,0], color='red', label='total #removed')\nax.plot(stream, R[:,1], color='blue', label='len B')\nfor c in changes:\n ax.annotate('c = {}'.format(R[c, 2]), xy=(c, R[c, 1]), xytext=(c + 65, R[c,1] - 30), arrowprops=dict(arrowstyle='->'))\nax.legend(loc=(1.1, 0.9))\nplt.xlabel('$x$')\nplt.ylabel('size')",
"la taille de hash dépend de la précision $\\epsilon$\nA mon avis, il faut donc que la taille du hash pour $h_2$ dépende de la précision (dans Data Stream Algorithms ils préconisent une taille en $\\mathrm{log}(n) / \\epsilon^2$).\nSi on prend cette taille pour $h_2$ on voit que l'estimation est meilleure pour une précision petite.",
"def BJKST(stream, epsilon):\n a1, a2 = random.sample(range(1, p), 2)\n b1, b2 = random.sample(range(0, p), 2)\n #taille de la valeur de hashage dépend de la precision\n b = int(numpy.log(n) / epsilon**2)\n def h1(x):\n return ((a1*x + b1) % p) % n\n # on applique la taille b sur la seconde fonction de hash\n def h2(x):\n return ((a2*x + b2) % p) % b\n c = 0\n B = set()\n B_max = 1.0 / epsilon**2\n for x in stream:\n y = h1(x)\n k = mod_37bit_position[(-y & y) % 37]\n if (k >= c):\n z = h2(x)\n B.add((z, k))\n while (len(B) >= B_max):\n c += 1\n for z, k in B.copy():\n if (k < c):\n B.remove((z, k)) \n return 2**c*len(B)\n\nm = 100\nepsilons = numpy.array([0.1, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02, 0.01])\nmedians = numpy.array([numpy.median([BJKST(stream, eps) for _ in range(m)]) for eps in epsilons])\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#plt.plot(1.0 / epsilons, medians)\nplt.plot(epsilons, medians)\nplt.axhline(y=n, color='r', linestyle='--')\nplt.title(r'Mediane en fonction de $1 / \\epsilon$, m = {}'.format(m))\nplt.xlabel(r'$\\epsilon$')\nplt.ylabel('Mediane')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xR86/ml-stuff
|
kaggle/iris/Simple analysis of Iris dataset.ipynb
|
mit
|
[
"Intros\nThis is a starter kernel ...",
"# analytics libraries installed listed in the kaggle/python docker image: https://github.com/kaggle/docker-python\n\n# Input data files are available in the \"../input/\" directory.\n#from subprocess import check_output\n#print(check_output([\"ls\", \"../input\"]).decode(\"utf8\"))\n# Any results you write to the current directory are saved as output.\n\nimport csv\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib\nfrom matplotlib import pyplot as plt\n#matplotlib.style.use('ggplot')\nimport pylab\nimport seaborn as sns\n\nfrom IPython.core.display import display, HTML",
"Data samples and traits",
"data = pd.read_csv(\"../input/Iris.csv\", header = 0)\n#reset index\ndata = data.reset_index()\ndata.head()\n\nspecies_list = list(data[\"Species\"].unique())\nprint(\"Types of species: %s\\n\" % species_list)\n\nprint(\"Dataset length: %i\\n\" % len(data))\n\nprint(\"Sepal length range: [%s, %s]\" % (min(data[\"SepalLengthCm\"]), max(data[\"SepalLengthCm\"])))\nprint(\"Sepal width range: [%s, %s]\" % (min(data[\"SepalWidthCm\"]), max(data[\"SepalLengthCm\"])))\nprint(\"Petal length range: [%s, %s]\" % (min(data[\"PetalLengthCm\"]), max(data[\"PetalLengthCm\"])))\nprint(\"Petal width range: [%s, %s]\\n\" % (min(data[\"PetalWidthCm\"]), max(data[\"PetalWidthCm\"])))\n\nprint(\"Sepal length variance:\\t %f\" % np.var(data[\"SepalLengthCm\"]))\nprint(\"Sepal width variance: \\t %f\" % np.var(data[\"SepalWidthCm\"]))\nprint(\"Petal length variance:\\t %f\" % np.var(data[\"PetalLengthCm\"]))\nprint(\"Petal width variance: \\t %f\\n\" % np.var(data[\"PetalWidthCm\"]))\n\nprint(\"Sepal length stddev:\\t %f\" % np.std(data[\"SepalLengthCm\"]))\nprint(\"Sepal width stddev: \\t %f\" % np.std(data[\"SepalWidthCm\"]))\nprint(\"Petal length stddev:\\t %f\" % np.std(data[\"PetalLengthCm\"]))\nprint(\"Petal width stddev: \\t %f\\n\" % np.std(data[\"PetalWidthCm\"]))\n\nprint(\"Data describe\\n---\")\nprint(data[data.columns[2:]].describe())",
"3 types of species\nRelatively small dataset\nData analysis - distributions",
"\n# data.hist calls data.plot\n# pandas.DataFrame.plot() returns a matplotlib axis\ndata.hist(\n column=[\"SepalLengthCm\", \"SepalWidthCm\", \"PetalLengthCm\", \"PetalWidthCm\", \"Species\"],\n figsize=(10, 10)\n #,sharey=True, sharex=True\n)\npylab.suptitle(\"Analyzing distribution for the series\", fontsize=\"xx-large\")\n\n#alternative\n#plt.subplot(2,3,1) # if using subplot\n#data.hist(...)\n#plt.title('your title')",
"At first sight, Petal length and petal width seem to diverge from the normal distribution.",
"import scipy.stats as stats\n\n#print(\"Sepal length variance:\\t %f\" % np.var(data[\"SepalLengthCm\"]))\n#print(\"Sepal width variance: \\t %f\" % np.var(data[\"SepalWidthCm\"]))\n#print(\"Petal length variance:\\t %f\" % np.var(data[\"PetalLengthCm\"]))\n#print(\"Petal width variance: \\t %f\\n\" % np.var(data[\"PetalWidthCm\"]))\n\nfor param in [\"SepalLengthCm\", \"SepalWidthCm\", \"PetalLengthCm\", \"PetalWidthCm\"]:\n z, pval = stats.normaltest(data[param])\n #print(z)\n if(pval < 0.055):\n print(\"%s has a p-value of %f - distribution is not normal\" % (param, pval))\n else:\n print(\"%s has a p-value of %f\" % (param, pval))\n",
"Hypothesis has been confirmed. Why ?\nData analysis - correlations",
"display(HTML('<h1>Analyzing the ' +\n '<a href=\"https://en.wikipedia.org/wiki/Pearson_correlation_coefficient\">' +\n 'Pearson correlation coefficient</a></h1>'))\n\n# data without the indexes\ndt = data[data.columns[2:]]\n\n# method : {‘pearson’, ‘kendall’, ‘spearman’}\ncorr = dt.corr(method=\"pearson\") #returns a dataframe, so it can be reused\n\n# eliminate upper triangle for readability\nbool_upper_matrix = np.tril(np.ones(corr.shape)).astype(np.bool)\ncorr = corr.where(bool_upper_matrix)\ndisplay(corr)\n# alternate method: http://seaborn.pydata.org/examples/many_pairwise_correlations.html\n\n# seaborn matrix here\n#sns.heatmap(corr, mask=np.zeros_like(corr, dtype=np.bool), cmap=sns.diverging_palette(220, 10, as_cmap=True),\n# square=True, ax=ax)\nsns.heatmap(corr, cmap=sns.diverging_palette(220, 10, as_cmap=True),\n xticklabels=corr.columns.values,\n yticklabels=corr.columns.values)",
"Interpretation\nDiagonal values and upper triangle are ignored (melted the upper triangle through np.tril and df.where).\nNaturally, we find:\n\na high positive correlation between PetalWidth and PetalLength (0.96)\na high positive correlation between PetalLength and SepalLength (0.87)\na high positive correlation between PetalWidth and SepalLength (0.81)\n\nAs such, we observe correlations between these main attributes: PetalWidth, PetalLength and SepalLength.\nTheory\nPCC is:\n\n1 is total positive linear correlation\n0 is no linear correlation\n−1 is total negative linear correlation\n\nCheck correlation in 3D",
"from mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nX = [data[\"PetalWidthCm\"], data[\"PetalLengthCm\"]]\nn = 100\nax.scatter(data[\"PetalWidthCm\"], data[\"PetalLengthCm\"], data[\"SepalLengthCm\"])\n\n\nax.set_xlabel('PetalWidthCm')\nax.set_ylabel('PetalLengthCm')\nax.set_zlabel('SepalLengthCm')\n\nplt.tight_layout(pad=0.5)\nplt.show()\n\ndata[data.columns[2:3]].plot.bar() #x=data[\"Index\"], y=data[\"PetalLengthCm\"]\ndata[data.columns[3:4]].plot.bar()\ndata[data.columns[4:5]].plot.bar()\ndata[data.columns[5:6]].plot.bar()",
"Data analysis - clusterization",
"from sklearn import linear_model\n\n#pd.scatter_matrix(dt, alpha = 0.3, figsize = (14,8), diagonal = 'kde');\n#sns.pairplot(dt)\ndisplay(HTML('<h1>Scatterplots for the correlating pairs</h1>'))\n\ndt.plot(kind='scatter', x='PetalWidthCm', y='PetalLengthCm');\ndt.plot(kind='scatter', x='PetalLengthCm', y='SepalLengthCm');\ndt.plot(kind='scatter', x='PetalWidthCm', y='SepalLengthCm');\n\n# --- linear regreesion visualization\n\n# TODO: random selection method from sklearn\n#top_corr_x_train = data[\"PetalWidthCm\"][0:75]\n#top_corr_y_train = data[\"PetalLengthCm\"][0:75]\n#top_corr_x_test = data[\"PetalWidthCm\"][75:]\n#top_corr_y_test = data[\"PetalLengthCm\"][75:]\n#\n#regr = linear_model.LinearRegression()\n#\n#regr.fit(top_corr_x_train, top_corr_y_train)\n#\n## The coefficients\n##print('Coefficients: \\n', regr.coef_)\n## The mean squared error\n#print(\"Mean squared error: %.2f\"\n# % np.mean((regr.predict(top_corr_x_test) - top_corr_y_test) ** 2))\n## Explained variance score: 1 is perfect prediction\n#print('Variance score: %.2f' % regr.score(top_corr_x_test, top_corr_y_test))\n#\n#plt.plot(top_corr_x_test, regr.predict(top_corr_x_test), color='blue',\n# linewidth=3)\n#\n#prediction = regr.predict(top_corr_x_test)\n##prediction = prediction[:]\n#print(prediction)\n#print(\"Length: \" + len(top_corr_x_test))\n#\n#plt.xticks(())\n#plt.yticks(())\n#\n#plt.show()\n\nfrom sklearn import neighbors, datasets\nfrom matplotlib.colors import ListedColormap\n\nimport math\nimport random\nfrom numpy.random import permutation\n\ndata_spl = data[data.columns[2:6]]\n\nrandom_indices = permutation(data_spl.index)\n# Set a cutoff for how many items we want in the test set (in this case 1/3 of the items)\ntest_cutoff = math.floor(len(data_spl)/3)\n# Generate the test set by taking the first 1/3 of the randomly shuffled indices.\ntest = data_spl.loc[random_indices[1:test_cutoff]]\n# Generate the train set with the rest of the data.\ntrain = 
data_spl.loc[random_indices[test_cutoff:]]\n\n#knn\ndef predictKNN(train,labels,test, n_neighbors = 2):\n print(\"start knn\")\n knn = neighbors.KNeighborsClassifier()\n knn.fit(train, labels) \n probabilities = knn.predict_proba(test)\n predictions = knn.predict(test)\n bestScores = probabilities.max(axis=1)\n print(\"done with knn\")\n return predictions, bestScores\n\n\ndata_sk = np.array(data)\n#print(data_sk)\n\n# import some data to play with\n#eiris = datasets.load_iris()\n#print(data[\"PetalWidthCm\"].shape)\n#print(len(data[\"PetalLengthCm\"]))\n\n#display(dt[\"PetalWidthCm\"].head())\n\nX = [data[\"PetalWidthCm\"], data[\"PetalLengthCm\"]]\ny = [\"PetalWidthCm\", \"PetalLengthCm\"] #[\"PetalWidthCm\", \"PetalLengthCm\"]\n\nX = [np.array(data[\"PetalWidthCm\"]), np.array(data[\"PetalLengthCm\"])]\n\n#data.columns = range(data.shape[1])\nX = np.array(data[data.columns[2:4]])#.astype(np.float)\n#X = data.columns[2:6]\n#print(X)\nY = np.array(data[data.columns[0:1]]).ravel() #.T\n#print(y.shape)\n\n# h = .02 # step size in the mesh\n\n# # Create color maps\n# cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])\n# cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])\n\n# for weights in ['uniform', 'distance']:\n# # Plot the decision boundary. 
For that, we will assign a color to each\n# # point in the mesh [x_min, x_max]x[y_min, y_max].\n# x_min = min(X[0]) - 1 #X[0].min() - 1 #min(X[0]) - 1\n# x_max = max(X[0]) + 1\n# y_min = min(X[1]) - 1\n# y_max = max(X[1]) + 1\n# xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n# np.arange(y_min, y_max, h))\n# #test = np.c_[xx.ravel(), yy.ravel()]\n \n \n# #clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)\n# #clf.fit(X, y)\n# Z, scores = predictKNN(X,y,test)\n# #Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# # Put the result into a color plot\n# Z = Z.reshape(xx.shape)\n# plt.figure()\n# plt.pcolormesh(xx, yy, Z, cmap=cmap_light)\n\n \n# # Plot also the training points\n# plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)\n# plt.xlim(xx.min(), xx.max())\n# plt.ylim(yy.min(), yy.max())\n# plt.title(\"3-Class classification (k = %i, weights = '%s')\"\n# % (n_neighbors, weights))\n\n# plt.show()\n\n# import some data to play with\niris = datasets.load_iris()\nX = iris.data[:, :2] # we only take the first two features. \nY = iris.target\n# print(X)\n# print(Y)\n# print(np.bincount(Y, minlength=np.size(Y)))\n\nh = .02 # step size in the mesh\n\nknn=neighbors.KNeighborsClassifier()\n\n# we create an instance of Neighbours Classifier and fit the data.\nknn.fit(X, Y)\n\n# Plot the decision boundary. 
For that, we will asign a color to each\n# point in the mesh [x_min, m_max]x[y_min, y_max].\nx_min, x_max = X[:,0].min() - .5, X[:,0].max() + .5\ny_min, y_max = X[:,1].min() - .5, X[:,1].max() + .5\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\nZ = knn.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1, figsize=(4, 3))\nplt.set_cmap(plt.cm.Paired)\nplt.pcolormesh(xx, yy, Z)\n\n# Plot also the training points\nplt.scatter(X[:,0], X[:,1],c=Y )\nplt.xlabel('Sepal length')\nplt.ylabel('Sepal width')\n\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.xticks(())\nplt.yticks(())\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
robertsj/ME701_examples
|
re/slides.ipynb
|
mit
|
[
"Regular Expressions\nAn experiment in Jupyter slides.\nJ. Roberts\n$E = mc^2$\nRegular expressions built in via the re module. Super simple example:",
"import re\nmatch = re.search(r'\\d+', r'abc123def') # note the \"r\" prefix\nprint match.span() # what do the numbers represent?",
"Special Sequences\nSeveral, special keys are used for sequences of importance in the re module.\n|name |description |\n|:---:|:------------------------------------------:|\n|\\d | any digit, i.e., [0-9] |\n|\\D | any non-digit, i.e., [^0-9] |\n|\\s | any whitespace, i.e., [ \\t\\n\\r\\f\\v] |\n|\\S | any non-whitespace, i.e., [^ \\t\\n\\r\\f\\v] |\n|\\w | alphanumeric, i.e., [a-zA-Z0-9_] |\n|\\W | non alphanumeric, i.e., [^ a-zA-Z0-9_] |\nMetacharacters\nSeveral, special \"metacharacters\" are used to define regular expressions with the re module.\n|name |description |\n|:---:|:------------------------------------------:|\n|. | any character but \\n |\n|^ | match at beginning or class complement |\n|$ | match at ending |\n|* | match 0 or more times |\n|? | match 0 or 1 times |\n|\\ | escape character |\n|<code>|</code>| \"or\" |\n|[] | defines character class, e.g., [a-z] |\n|{} | for repeated qualifier, e.g., ab{2,3} |\n|() | for groups |\nExample 1\nConsider the pattern ca*t. Does it match the following? If so, what is the match?\n - ct\n - cat\n - caaat\n - go cats!",
"pattern = r'ca*t'\nprint re.match(pattern, r'ct').span()\nprint re.match(pattern, r'cat').span()\nprint re.match(pattern, r'caaat').span()\nprint re.match(pattern, r'go cats!')",
"Example 2\nHow about this slight modification? Consider ca*[\\w ]+t applied to catenkerous cat. Is it a match? How much?",
"print re.match(r'ca*[\\w ]+t', r'catenkerous cat!').span()",
"This highlights the fact that * is greedy. In other words, it grabs as large a match as possible.\nNow, for the fun stuff. Do\ncd /path/to/ME701_examples\n git pull\nYou should now have a new folder re with some fun, real-world data to munge!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
scotthuang1989/Python-3-Module-of-the-Week
|
data_persistence_exchange/Parsing_xml_document.ipynb
|
apache-2.0
|
[
"Example xml file:\n```\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<opml version=\"1.0\">\n<head>\n <title>My Podcasts</title>\n <dateCreated>Sat, 06 Aug 2016 15:53:26 GMT</dateCreated>\n <dateModified>Sat, 06 Aug 2016 15:53:26 GMT</dateModified>\n</head>\n<body>\n <outline text=\"Non-tech\">\n <outline\n text=\"99% Invisible\" type=\"rss\"\n xmlUrl=\"http://feeds.99percentinvisible.org/99percentinvisible\"\n htmlUrl=\"http://99percentinvisible.org\" />\n </outline>\n <outline text=\"Python\">\n <outline\n text=\"Talk Python to Me\" type=\"rss\"\n xmlUrl=\"https://talkpython.fm/episodes/rss\"\n htmlUrl=\"https://talkpython.fm\" />\n <outline\n text=\"Podcast.__init__\" type=\"rss\"\n xmlUrl=\"http://podcastinit.podbean.com/feed/\"\n htmlUrl=\"http://podcastinit.com\" />\n </outline>\n</body>\n</opml>\n```\nTo parse the file, pass an open file handle to parse()",
"from xml.etree import ElementTree\n\nwith open('podcasts.opml', 'rt') as f:\n tree = ElementTree.parse(f)\n \nprint(tree)",
"Traversing the parsed tree\nTo visit all the children in order, user iter() to create a generator that iterates over the ElementTree instance.",
"from xml.etree import ElementTree\nimport pprint\n\nwith open('podcasts.opml', 'rt') as f:\n tree = ElementTree.parse(f)\n \nfor node in tree.iter():\n print(node.tag)",
"To print only the groups of names and feed URL for the podcasts, leaving out all of the data in the header section by iterating over only the outline nodes and print the text and xmlURL attributes by looking up the values in the attrib dictionary",
"from xml.etree import ElementTree\n\nwith open('podcasts.opml', 'rt') as f:\n tree = ElementTree.parse(f)\n\nfor node in tree.iter('outline'):\n name = node.attrib.get('text')\n url = node.attrib.get('xmlUrl')\n if name and url:\n print(' %s' % name)\n print(' %s' % url)\n else:\n print(name)",
"Finding Nodes in a Documents\nWalking the entire tree like this, searching for relevant nodes, can be error prone. The previous example had to look at each outline node to determine if it was a group (nodes with only a text attribute) or podcast (with both text and xmlUrl). To produce a simple list of the podcast feed URLs, without names or groups, the logic could be simplified using findall() to look for nodes with more descriptive search characteristics.\nAs a first pass at converting the first version, an XPath argument can be used to look for all outline nodes.",
"from xml.etree import ElementTree\n\nwith open('podcasts.opml', 'rt') as f:\n tree = ElementTree.parse(f)\n\nfor node in tree.findall('.//outline'):\n url = node.attrib.get('xmlUrl')\n if url:\n print(url)",
"It is possible to take advantage of the fact that the outline nodes are only nested two levels deep. Changing the search path to .//outline/outline means the loop will process only the second level of outline nodes.",
"from xml.etree import ElementTree\n\nwith open('podcasts.opml', 'rt') as f:\n tree = ElementTree.parse(f)\n\nfor node in tree.findall('.//outline/outline'):\n url = node.attrib.get('xmlUrl')\n print(url)",
"Parsed Node Attributes\nThe items returned by findall() and iter() are Element objects, each representing a node in the XML parse tree. Each Element has attributes for accessing data pulled out of the XML. This can be illustrated with a somewhat more contrived example input file, data.xml.\n```\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<top>\n <child>Regular text.</child>\n <child_with_tail>Regular text.</child_with_tail>\"Tail\" text.\n <with_attributes name=\"value\" foo=\"bar\" />\n <entity_expansion attribute=\"This & That\">\n That & This\n </entity_expansion>\n</top>\n```",
"from xml.etree import ElementTree\n\nwith open('data.xml', 'rt') as f:\n tree = ElementTree.parse(f)\n\nnode = tree.find('./with_attributes')\nprint(node.tag)\nfor name, value in sorted(node.attrib.items()):\n print(' %-4s = \"%s\"' % (name, value))",
"The text content of the nodes is available, along with the tail text, which comes after the end of a close tag.",
"from xml.etree import ElementTree\n\nwith open('data.xml', 'rt') as f:\n tree = ElementTree.parse(f)\n\nfor path in ['./child', './child_with_tail']:\n node = tree.find(path)\n print(node.tag)\n print(' child node text:', node.text)\n print(' and tail text :', node.tail)",
"XML entity references embedded in the document are converted to the appropriate characters before values are returned.",
"from xml.etree import ElementTree\n\nwith open('data.xml', 'rt') as f:\n tree = ElementTree.parse(f)\n\nnode = tree.find('entity_expansion')\nprint(node.tag)\nprint(' in attribute:', node.attrib['attribute'])\nprint(' in text :', node.text.strip())",
"Watching Events While Parsing\nThe other API for processing XML documents is event-based. The parser generates start events for opening tags and end events for closing tags. Data can be extracted from the document during the parsing phase by iterating over the event stream, which is convenient if it is not necessary to manipulate the entire document afterwards and there is no need to hold the entire parsed document in memory.\nEvents can be one of:\n\nstart \n A new tag has been encountered. The closing angle bracket of the tag was processed, but not the contents.\nend \n The closing angle bracket of a closing tag has been processed. All of the children were already processed.\nstart-ns \n Start a namespace declaration.\nend-ns \n End a namespace declaration.",
"from xml.etree.ElementTree import iterparse\n\ndepth = 0\nprefix_width = 8\nprefix_dots = '.' * prefix_width\nline_template = ''.join([\n '{prefix:<0.{prefix_len}}',\n '{event:<8}',\n '{suffix:<{suffix_len}} ',\n '{node.tag:<12} ',\n '{node_id}',\n])\n\nEVENT_NAMES = ['start', 'end', 'start-ns', 'end-ns']\n\nfor (event, node) in iterparse('podcasts.opml', EVENT_NAMES):\n if event == 'end':\n depth -= 1\n\n prefix_len = depth * 2\n\n print(line_template.format(\n prefix=prefix_dots,\n prefix_len=prefix_len,\n suffix='',\n suffix_len=(prefix_width - prefix_len),\n node=node,\n node_id=id(node),\n event=event,\n ))\n\n if event == 'start':\n depth += 1\n",
"The event-style of processing is more natural for some operations, such as converting XML input to some other format. This technique can be used to convert list of podcasts from the earlier examples from an XML file to a CSV file, so they can be loaded into a spreadsheet or database application.",
"import csv\nfrom xml.etree.ElementTree import iterparse\nimport sys\n\nwriter = csv.writer(sys.stdout, quoting=csv.QUOTE_NONNUMERIC)\n\ngroup_name = ''\n\nparsing = iterparse('podcasts.opml', events=['start'])\n\nfor (event, node) in parsing:\n if node.tag != 'outline':\n # Ignore anything not part of the outline\n continue\n if not node.attrib.get('xmlUrl'):\n # Remember the current group\n group_name = node.attrib['text']\n else:\n # Output a podcast entry\n writer.writerow(\n (group_name, node.attrib['text'],\n node.attrib['xmlUrl'],\n node.attrib.get('htmlUrl', ''))\n )",
"Parsing Strings\nTo work with smaller bits of XML text, especially string literals that might be embedded in the source of a program, use XML() and the string containing the XML to be parsed as the only argument.",
"from xml.etree.ElementTree import XML\n\n\ndef show_node(node):\n print(node.tag)\n if node.text is not None and node.text.strip():\n print(' text: \"%s\"' % node.text)\n if node.tail is not None and node.tail.strip():\n print(' tail: \"%s\"' % node.tail)\n for name, value in sorted(node.attrib.items()):\n print(' %-4s = \"%s\"' % (name, value))\n for child in node:\n show_node(child)\n\n\nparsed = XML('''\n<root>\n <group>\n <child id=\"a\">This is child \"a\".</child>\n <child id=\"b\">This is child \"b\".</child>\n </group>\n <group>\n <child id=\"c\">This is child \"c\".</child>\n </group>\n</root>\n''')\n\nprint('parsed =', parsed)\n\nfor elem in parsed:\n show_node(elem)",
"For structured XML that uses the id attribute to identify unique nodes of interest, XMLID() is a convenient way to access the parse results.\nXMLID() returns the parsed tree as an Element object, along with a dictionary mapping the id attribute strings to the individual nodes in the tree.",
"\nfrom xml.etree.ElementTree import XMLID\n\ntree, id_map = XMLID('''\n<root>\n <group>\n <child id=\"a\">This is child \"a\".</child>\n <child id=\"b\">This is child \"b\".</child>\n </group>\n <group>\n <child id=\"c\">This is child \"c\".</child>\n </group>\n</root>\n''')\n\nfor key, value in sorted(id_map.items()):\n print('%s = %s' % (key, value))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JarnoRFB/qtpyvis
|
notebooks/keras/introspection.ipynb
|
mit
|
[
"import keras\nimport numpy as np\nimport keras.backend as K\nfrom keras.datasets import mnist\nfrom keras.utils import np_utils\nK.set_learning_phase(False)\nimport matplotlib.pyplot as plt\nplt.rcParams['image.cmap'] = 'gray'\n%matplotlib inline\n\nmodel = keras.models.load_model('example_keras_mnist_model.h5')\nmodel.summary()\n\ndataset = mnist.load_data()\ntrain_data = dataset[0][0] / 255\ntrain_data = train_data[..., np.newaxis].astype('float32')\ntrain_labels = np_utils.to_categorical(dataset[0][1]).astype('float32')\ntest_data = dataset[1][0] / 255\ntest_data = test_data[..., np.newaxis].astype('float32')\ntest_labels = np_utils.to_categorical(dataset[1][1]).astype('float32')\nplt.imshow(train_data[0, ..., 0])",
"Keras model are serialzed in a JSON format.",
"model.get_config()",
"Getting the weights\nWeights can be retrieved either directly from the model or from each individual layer.",
"# Weights and biases of the entire model.\nmodel.get_weights()\n\n# Weights and bias for a single layer.\nconv_layer = model.get_layer('conv2d_1')\nconv_layer.get_weights()",
"Moreover the respespective backend variables that store the weights can be retrieved.",
"conv_layer.weights",
"Getting the activations and net inputs\nIntermediary computation results, i.e. results are not part of the prediction cannot be directly retrieved from Keras. It possible to build a new model for which the intermediary result is the prediction, but this approach makes computation rather inefficient when several intermediary results are to be retrieved. Instead it is better to reach directly into the backend for this purpose.\nActivations are still fairly straight forward as the relevant tensors can be retrieved as output of the layer.",
"# Getting the TensorFlow session and the input tensor.\nsess = keras.backend.get_session()\nnetwork_input_tensor = model.layers[0].input\nnetwork_input_tensor\n\n# Getting the tensor that holds the activations as the output of a layer.\nactivation_tensor = conv_layer.output\nactivation_tensor\n\nactivations = sess.run(activation_tensor, feed_dict={network_input_tensor: test_data[0:1]})\nactivations.shape\n\nfor i in range(32):\n    plt.imshow(activations[0, ..., i])\n    plt.show()",
"Net input is a little more complicated, as we have to reach heuristically into the TensorFlow graph to find the relevant tensors. However, it can be safely assumed most of the time that the net input tensor is the input to the activation op.",
"net_input_tensor = activation_tensor.op.inputs[0]\nnet_input_tensor\n\nnet_inputs = sess.run(net_input_tensor, feed_dict={network_input_tensor: test_data[0:1]})\nnet_inputs.shape\n\nfor i in range(32):\n plt.imshow(net_inputs[0, ..., i])\n plt.show()",
"Getting layer properties\nEach Keras layer object provides the relevant properties as attributes.",
"conv_layer = model.get_layer('conv2d_1')\nconv_layer\n\nconv_layer.input_shape\n\nconv_layer.output_shape\n\nconv_layer.kernel_size\n\nconv_layer.strides\n\nmax_pool_layer = model.get_layer('max_pooling2d_1')\nmax_pool_layer\n\nmax_pool_layer.strides\n\nmax_pool_layer.pool_size",
"Layer type information can only be retrieved through the class name.",
"conv_layer.__class__.__name__"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
alexvmarch/pandas_intro
|
02_distances.ipynb
|
mit
|
[
"# As before...\nimport numpy as np\nimport scipy as sp\nimport pandas as pd\n\nnat = 195 # Number of atoms\na = 12.55 # Cell size",
"Load our previous work\nHDF5's are easy to load",
"xyz = pd.read_hdf('xyz.hdf5', 'xyz')\nxyz.head()",
"Computing distances between atoms\nThis is arguably the most difficult part of this tutorial. How do we account for periodicity?\nLet's start by considering free boundary conditions first!\nThe number of atom to atom distances (per frame) grows combinatorially,\n\\begin{equation}\n    \\frac{nat!}{2!\\left(nat - 2\\right)!} \\left(= \\frac{1}{2}\\left(nat * \\left(nat - 1\\right)\\right)\\right)\n\\end{equation}\nin computations (where nat is the number of atoms). Fortunately for us, computing the distances can be \npassed off to scipy's pdist.\nCODING TIME: Write a function to compute all of the atom to atom distances in every frame assuming free boundary conditions",
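"As a quick sanity check of that count (the array below is made up purely for illustration), pdist on n points returns exactly n(n-1)/2 distances:\n\n```python\nimport numpy as np\nfrom scipy.spatial.distance import pdist\n\n# 5 toy points in 3D -> expect 5 * 4 / 2 = 10 pairwise distances\ntoy_points = np.random.rand(5, 3)\ntoy_dists = pdist(toy_points)\nassert len(toy_dists) == 5 * 4 // 2\n```",

```python
import numpy as np
from scipy.spatial.distance import pdist

# 5 toy points in 3D -> expect 5 * 4 / 2 = 10 pairwise distances
toy_points = np.random.rand(5, 3)
toy_dists = pdist(toy_points)
assert len(toy_dists) == 5 * 4 // 2
```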
"from scipy.spatial.distance import pdist\n\n\ndef skeleton_free_boundary_distances(frame): # Note that this is frame, not DataFrame\n '''\n Compute all of the atom to atom distances with free boundary conditions\n '''\n # Compute distances\n xyz = frame.loc[:, ['x', 'y', 'z']]\n distances = pdist(xyz)\n # Compute the symbols\n # ...\n symbols = None\n return pd.DataFrame.from_dict({'distances': distances, 'symbols': symbols})\n\ntwobody = xyz.groupby(level='frame').apply(skeleton_free_boundary_distances)\ntwobody.head()",
"Distances are no good to us, unless we know where they came from (or at least what two symbols they represent)...\nHINT: Check out the \"combinations\" function in the itertools library (part of the Python standard library)",
"from itertools import combinations\n\n%load -s free_boundary_distances, snippets/distances.py\n\ntwobody = xyz.groupby(level='frame').apply(free_boundary_distances)\ntwobody.head()",
"Tests again...",
"twobody.loc[0].head()\n\nfirst = xyz.loc[0, ['x', 'y', 'z']].values\nfor i in range(1, 6):\n print(((first[0, :] - first[i, :])**2).sum()**0.5)",
"Periodicity\nThat was fun but it doesn't do what we need it to! Periodic boundaries can be handled in a number of ways; here is one algorithm:\n\n1. Put all atoms back in the unit cell\n2. Generate a 3x3x3 supercell from the unit cell\n3. Compute distances looking only from the central unit cell (the internal cell that is completely surrounded by replicas)\n\nSince this is complicated, we will walk through the pieces of the code individually (applying them to a single frame) before putting it all together.",
"frame = xyz.loc[0]",
"CODING TIME: Put all atoms back into the unit cell",
"def skeleton_create_unit(df, a):\n '''\n Put all atoms back into the cubic unit cell\n '''\n #...\n\nunit_frame = skeleton_create_unit(frame, a) # a is defined above\nunit_frame",
"The % (modulo) operator is nice for such tasks...",
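"A minimal sketch of the wrapping idea with toy coordinates (the numbers are invented, not taken from the trajectory):\n\n```python\nimport numpy as np\n\na = 12.55  # cubic cell size, as defined above\ncoords = np.array([-1.0, 6.0, 13.0])\nwrapped = coords % a  # folds every coordinate back into [0, a)\nassert ((wrapped >= 0) & (wrapped < a)).all()\n```",

```python
import numpy as np

a = 12.55  # cubic cell size, as defined above
coords = np.array([-1.0, 6.0, 13.0])
wrapped = coords % a  # folds every coordinate back into [0, a)
assert ((wrapped >= 0) & (wrapped < a)).all()
```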
"%load -s create_unit, snippets/distances.py\n\nunit_frame = create_unit(frame, a)\nunit_frame.shape\n\nprint(unit_frame is not frame) # True if objects are not the same in memory\nprint(np.all(unit_frame.values == frame.values)) # True if objects' xyz positions are identical",
"CODING TIME: Generate the 3x3x3 superframe from the unit_frame",
"def skeleton_superframe(frame, a):\n '''\n Generate a 3x3x3 supercell of a given frame.\n '''\n v = [-1, 0, 1]\n n = len(frame)\n unit = frame.loc[:, ['x', 'y', 'z']].values\n coords = np.empty((n * 27, 3))\n h = 0\n for i in v:\n for j in v:\n for k in v:\n #for ...\n return coords\n\nbig_frame = skeleton_superframe(unit_frame, a)\nbig_frame.shape",
"One solution...or if you want to have some fun...",
"%load -s superframe, snippets/distances.py\n\nbig_frame = superframe(unit_frame, a)\nbig_frame",
"KDTree\nFor a single frame's supercell (to start with) we need to compute the distances from the central frame. \nLet's use a nice feature of scipy/scikit-learn (and of course the mathematicians \nthat developed it): the KDTree\nSee also: wiki\nWe are going to use the Cythonized version of the KDTree implementation.",
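"Before querying the supercell, here is how a query behaves on a handful of made-up points (names here are for illustration only):\n\n```python\nimport numpy as np\nfrom scipy.spatial import cKDTree\n\n# four toy points on a line; build the tree and query the neighbors of x=0\npts = np.array([[0.0], [1.0], [2.0], [10.0]])\ntree = cKDTree(pts)\nd, idx = tree.query([[0.0]], k=2)\n# the nearest neighbor of a point already in the tree is itself\nassert d[0, 0] == 0.0 and idx[0, 0] == 0\nassert idx[0, 1] == 1  # next nearest is the point at x=1\n```",

```python
import numpy as np
from scipy.spatial import cKDTree

# four toy points on a line; build the tree and query the neighbors of x=0
pts = np.array([[0.0], [1.0], [2.0], [10.0]])
tree = cKDTree(pts)
d, idx = tree.query([[0.0]], k=2)
# the nearest neighbor of a point already in the tree is itself
assert d[0, 0] == 0.0 and idx[0, 0] == 0
assert idx[0, 1] == 1  # next nearest is the point at x=1
```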
"from scipy.spatial import cKDTree\n\nkd = cKDTree(big_frame)\nk = 194 # Number of distances to compute \ndistances, indices = kd.query(unit_frame.loc[:, ['x', 'y', 'z']], k=k)\ndistances.shape\n\nindices.shape",
"We have the distances but we need to shape them into a DataFrame and figure out what symbol pair \neach distance belongs to (that last part is critical for the third task).\nThe first column in the indices contains the indices of the source atom from which we are looking.\nThe rest of the columns contain the indices of the paired atom to which we are computing the distances.\nWe map superframe indices back onto the unit_frame indices:",
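"The index bookkeeping below relies on two NumPy idioms, np.repeat and flatten; a toy illustration (the array is invented for this example):\n\n```python\nimport numpy as np\n\n# toy neighbor table: column 0 is the source atom, the rest its neighbors\ntoy_indices = np.array([[0, 2, 1],\n                        [1, 0, 2]])\nn_cols = toy_indices.shape[1]\n# repeat each source index once per neighbor...\nsources = np.repeat(toy_indices[:, 0], n_cols)\n# ...and flatten the neighbor table so the two arrays line up pairwise\npairs = toy_indices.flatten()\nassert sources.tolist() == [0, 0, 0, 1, 1, 1]\nassert pairs.tolist() == [0, 2, 1, 1, 0, 2]\n```",

```python
import numpy as np

# toy neighbor table: column 0 is the source atom, the rest its neighbors
toy_indices = np.array([[0, 2, 1],
                        [1, 0, 2]])
n_cols = toy_indices.shape[1]
# repeat each source index once per neighbor...
sources = np.repeat(toy_indices[:, 0], n_cols)
# ...and flatten the neighbor table so the two arrays line up pairwise
pairs = toy_indices.flatten()
assert sources.tolist() == [0, 0, 0, 1, 1, 1]
assert pairs.tolist() == [0, 2, 1, 1, 0, 2]
```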
"def map_x_to_y(x, y):\n    '''\n    Using the indices in x, generate an array of the same \n    length populated by values from y.\n    '''\n    # np.int was removed from recent NumPy releases; use an explicit dtype\n    mapped = np.empty((len(x), ), dtype=np.int64)\n    for i, index in enumerate(x):\n        mapped[i] = y[index]\n    return mapped\n\nunit_frame_indices = unit_frame.index.get_level_values('atom').tolist() * 27\nrepeated_source = np.repeat(indices[:, 0], k)\natom1_indices = pd.Series(map_x_to_y(repeated_source, unit_frame_indices))\natom2_indices = pd.Series(map_x_to_y(indices.flatten(), unit_frame_indices))",
"Now let's convert these Series (a pandas Series is simply a column in a DataFrame)\nto symbols using the map function.",
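"As a toy illustration of map with a dict (the values are invented for this example):\n\n```python\nimport pandas as pd\n\n# toy index Series and a dict mapping indices to element symbols\ntoy_indices = pd.Series([0, 1, 0, 2])\ntoy_symbols = {0: 'O', 1: 'H', 2: 'H'}\ntoy_mapped = toy_indices.map(toy_symbols)\nassert toy_mapped.tolist() == ['O', 'H', 'O', 'H']\n```",

```python
import pandas as pd

# toy index Series and a dict mapping indices to element symbols
toy_indices = pd.Series([0, 1, 0, 2])
toy_symbols = {0: 'O', 1: 'H', 2: 'H'}
toy_mapped = toy_indices.map(toy_symbols)
assert toy_mapped.tolist() == ['O', 'H', 'O', 'H']
```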
"symbols = unit_frame['symbol'].to_dict()\n\natom1_symbols = atom1_indices.map(symbols)\natom2_symbols = atom2_indices.map(symbols)\n\nsymbols = [''.join((first, atom2_symbols[i])) for i, first in enumerate(atom1_symbols)]",
"Now let's finish this by generating our (periodic) two body DataFrame for this first frame.",
"frame_twobody = pd.DataFrame.from_dict({'distance': distances.flatten(),\n 'symbols': symbols})\n\nframe_twobody = frame_twobody.loc[(frame_twobody['distance'] > 0.3) & (frame_twobody['distance'] < 8.3)]\nframe_twobody.head()",
"We should probably do a check here, but in the interest of time, and because,\nwith our current implementation this is not trivial, let's just skip this...\nPutting the pieces together\nThough we have done this for a single frame, let's see if we can combine all of the pieces \nto act on the original xyz DataFrame.",
"from scipy.spatial import cKDTree\nfrom snippets.distances import superframe_numba, map_x_to_y_numba, create_unit\n\n\ndef cubic_periodic_distances(xyz, a, nat, k=None):\n '''\n Computes atom to atom distances for a periodic cubic cell.\n\n Args:\n xyz: Properly indexed pandas DataFrame\n a: Cubic cell dimension\n\n Returns:\n twobody: DataFrame of distances\n '''\n k = nat - 1 if k is None else k\n # Since the unit cell size doesn't change between frames, \n # let's put all of the atoms (in every frame) back in the\n # unit cell at the same time.\n unit_xyz = create_unit(xyz, a)\n # Now we will define another function which will do the \n # steps we outlined above (see below) and apply this \n # function to every frame of the unit_xyz\n twobody = unit_xyz.groupby(level='frame').apply(_compute, k=k) # <== This is the meat and potatoes\n # Filter the meaningful distances\n # ... # <== EDIT THIS LINE!\n # Pair the symbols\n twobody.loc[:, 'symbols'] = twobody['atom1'] + twobody['atom2']\n # Name the indices\n twobody.index.names = ['frame', 'two']\n return twobody\n\n\n#%load -s cubic_periodic_distances, snippets/distances.py",
"What should the function _compute look like/do?\nNOT QUITE CODING TIME: Just load the function",
"%load -s _compute, snippets/distances.py\n\n# WARNING: On a fast machine, this operation takes ~10 s\n%time twobody = cubic_periodic_distances(xyz, a, nat)\ntwobody.head()",
"What can we do with twobody data??",
"twobody.loc[((twobody.symbols == 'HO') | (twobody.symbols == 'OH')) & (twobody.distance < 1.2), 'distance'].describe()\n\ntwobody.loc[((twobody.symbols == 'HO') | (twobody.symbols == 'OH')) & (twobody.distance < 1.2), 'distance'].median()",
"Saving\nNow that we did that heavy analysis, let's save our data again.\n(perspective: simulation time: >120 hrs, analysis time: ~10 minutes)",
"# WARNING: On a fast machine, this operation takes ~10 s\nstore = pd.HDFStore('twobody.hdf5', mode='w')\n%time store.put('twobody', twobody)\nstore.close()",
"Again, though there are a bunch of improvements/features we could make, let's move on...\n...on to step three\n<a id='fun'></a>\nNumba-fied fun!\nPython has a way to optimize big loops.\nThis is for learning and fun, remember the first rule of optimization: optimize the slowest step first!\nnumba is a beautiful and powerful way to \"just-in-time\" compile python code into native machine code...",
"from numba import jit, float64\n\n%load -s superframe, snippets/distances.py\n\n%load -s superframe_numba, snippets/distances.py\n\nn = 100\n%timeit -n $n superframe(unit_frame, a)\n%timeit -n $n superframe_numba(unit_frame.loc[:, ['x', 'y', 'z']].values, a)",
"~10x speedup (for 1ish lines of code)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
zingale/pyreaclib
|
modify-example.ipynb
|
bsd-3-clause
|
[
"Modifying Rates\nSometimes we want to change the nuclei involved in rates to simplify our network. Currently,\npynucastro supports changing the products. Here's an example.",
"import pynucastro as pyna\n\nreaclib_library = pyna.ReacLibLibrary()",
"We want to model ${}^{12}\\mathrm{C} + {}^{12}\\mathrm{C}$ reactions. There are 3 rates involved.",
"filter = pyna.RateFilter(reactants=[\"c12\", \"c12\"])\nmylib = reaclib_library.filter(filter)\nmylib ",
"The rate ${}^{12}\\mathrm{C}({}^{12}\\mathrm{C},n){}^{23}\\mathrm{Mg}$ is quickly followed by ${}^{23}\\mathrm{Mg}(n,\\gamma){}^{24}\\mathrm{Mg}$, so we want to modify that rate sequence to just be ${}^{12}\\mathrm{C}({}^{12}\\mathrm{C},\\gamma){}^{24}\\mathrm{Mg}$",
"r = mylib.get_rate(\"c12 + c12 --> n + mg23 <cf88_reaclib__reverse>\")\nr",
"This has the Q value:",
"r.Q",
"Now we modify it",
"r.modify_products(\"mg24\")\nr",
"and we see that the Q value has been updated to reflect the new endpoint",
"r.Q",
"Now let's build a network that includes the nuclei involved in our carbon burning. We'll start by leaving off the ${}^{23}\\mathrm{Mg}$",
"mylib2 = reaclib_library.linking_nuclei([\"p\", \"he4\", \"c12\", \"o16\", \"ne20\", \"na23\", \"mg24\"])",
"Now we add in our modified rate",
"mylib2 += pyna.Library(rates=[r])\n\nmylib2\n\nrc = pyna.RateCollection(libraries=[mylib2])\nrc.plot(rotated=True, curved_edges=True, hide_xalpha=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gojomo/gensim
|
docs/src/auto_examples/howtos/run_doc2vec_imdb.ipynb
|
lgpl-2.1
|
[
"%matplotlib inline",
"How to reproduce the doc2vec 'Paragraph Vector' paper\nShows how to reproduce results of the \"Distributed Representations of Sentences and Documents\" paper by Le and Mikolov using Gensim.",
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)",
"Introduction\nThis guide shows you how to reproduce the results of the paper by Le and\nMikolov 2014 <https://arxiv.org/pdf/1405.4053.pdf>_ using Gensim. While the\nentire paper is worth reading (it's only 9 pages), we will be focusing on\nSection 3.2: \"Beyond One Sentence - Sentiment Analysis with the IMDB\ndataset\".\nThis guide follows the following steps:\n1. Load the IMDB dataset\n2. Train a variety of Doc2Vec models on the dataset\n3. Evaluate the performance of each model using a logistic regression\n4. Examine some of the results directly\nWhen examining results, we will look for answers to the following questions:\n1. Are inferred vectors close to the precalculated ones?\n2. Do close documents seem more related than distant ones?\n3. Do the word vectors show useful similarities?\n4. Are the word vectors from this dataset any good at analogies?\nLoad corpus\nOur data for the tutorial will be the IMDB archive\n<http://ai.stanford.edu/~amaas/data/sentiment/>_.\nIf you're not familiar with this dataset, then here's a brief intro: it\ncontains several thousand movie reviews.\nEach review is a single line of text containing multiple sentences, for example:\nOne of the best movie-dramas I have ever seen. We do a lot of acting in the\nchurch and this is one that can be used as a resource that highlights all the\ngood things that actors can do in their work. I highly recommend this one,\nespecially for those who have an interest in acting, as a \"must see.\"\nThese reviews will be the documents that we will work with in this tutorial.\nThere are 100 thousand reviews in total.\n- 25k reviews for training (12.5k positive, 12.5k negative)\n- 25k reviews for testing (12.5k positive, 12.5k negative)\n- 50k unlabeled reviews\nOut of 100k reviews, 50k have a label: either positive (the reviewer liked\nthe movie) or negative.\nThe remaining 50k are unlabeled.\nOur first task will be to prepare the dataset.\nMore specifically, we will:\n1. Download the tar.gz file (it's only 84MB, so this shouldn't take too long)\n2. Unpack it and extract each movie review\n3. Split the reviews into training and test datasets\nFirst, let's define a convenient datatype for holding data for a single document:\n\n- words: The text of the document, as a list of words.\n- tags: Used to keep the index of the document in the entire dataset.\n- split: one of train, test or extra. Determines how the document will be used (for training, testing, etc).\n- sentiment: either 1 (positive), 0 (negative) or None (unlabeled document).\n\nThis data type is helpful for later evaluation and reporting.\nIn particular, the index member will help us quickly and easily retrieve the vectors for a document from a model.",
"import collections\n\nSentimentDocument = collections.namedtuple('SentimentDocument', 'words tags split sentiment')",
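"A quick sketch of how such a document behaves (the values are made up, not real review data):\n\n```python\nimport collections\n\nSentimentDocument = collections.namedtuple(\n    'SentimentDocument', 'words tags split sentiment')\ndoc = SentimentDocument(['great', 'movie'], [0], 'train', 1.0)\n# fields are accessible by name, and the tuple is immutable\nassert doc.words == ['great', 'movie']\nassert doc.tags[0] == 0 and doc.split == 'train' and doc.sentiment == 1.0\n```",

```python
import collections

SentimentDocument = collections.namedtuple(
    'SentimentDocument', 'words tags split sentiment')
doc = SentimentDocument(['great', 'movie'], [0], 'train', 1.0)
# fields are accessible by name, and the tuple is immutable
assert doc.words == ['great', 'movie']
assert doc.tags[0] == 0 and doc.split == 'train' and doc.sentiment == 1.0
```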
"We can now proceed with loading the corpus.",
"import io\nimport re\nimport tarfile\nimport os.path\n\nimport smart_open\nimport gensim.utils\n\ndef download_dataset(url='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'):\n fname = url.split('/')[-1]\n\n if os.path.isfile(fname):\n return fname\n\n # Download the file to local storage first.\n with smart_open.open(url, \"rb\", ignore_ext=True) as fin:\n with smart_open.open(fname, 'wb', ignore_ext=True) as fout:\n while True:\n buf = fin.read(io.DEFAULT_BUFFER_SIZE)\n if not buf:\n break\n fout.write(buf)\n\n return fname\n\ndef create_sentiment_document(name, text, index):\n _, split, sentiment_str, _ = name.split('/')\n sentiment = {'pos': 1.0, 'neg': 0.0, 'unsup': None}[sentiment_str]\n\n if sentiment is None:\n split = 'extra'\n\n tokens = gensim.utils.to_unicode(text).split()\n return SentimentDocument(tokens, [index], split, sentiment)\n\ndef extract_documents():\n fname = download_dataset()\n\n index = 0\n\n with tarfile.open(fname, mode='r:gz') as tar:\n for member in tar.getmembers():\n if re.match(r'aclImdb/(train|test)/(pos|neg|unsup)/\\d+_\\d+.txt$', member.name):\n member_bytes = tar.extractfile(member).read()\n member_text = member_bytes.decode('utf-8', errors='replace')\n assert member_text.count('\\n') == 0\n yield create_sentiment_document(member.name, member_text, index)\n index += 1\n\nalldocs = list(extract_documents())",
"Here's what a single document looks like.",
"print(alldocs[27])",
"Extract our documents and split into training/test sets.",
"train_docs = [doc for doc in alldocs if doc.split == 'train']\ntest_docs = [doc for doc in alldocs if doc.split == 'test']\nprint(f'{len(alldocs)} docs: {len(train_docs)} train-sentiment, {len(test_docs)} test-sentiment')",
"Set-up Doc2Vec Training & Evaluation Models\nWe approximate the experiment of Le & Mikolov \"Distributed Representations\nof Sentences and Documents\"\n<http://cs.stanford.edu/~quocle/paragraph_vector.pdf> with guidance from\nMikolov's example go.sh\n<https://groups.google.com/d/msg/word2vec-toolkit/Q49FIrNOQRo/J6KG8mUj45sJ>::\n./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1\n\nWe vary the following parameter choices:\n\n100-dimensional vectors, as the 400-d vectors of the paper take a lot of\n memory and, in our tests of this task, don't seem to offer much benefit\nSimilarly, frequent word subsampling seems to decrease sentiment-prediction\n accuracy, so it's left out\ncbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW'\n mode, matched in gensim with dm=0\nAdded to that DBOW model are two DM models, one which averages context\n vectors (\\ dm_mean\\ ) and one which concatenates them (\\ dm_concat\\ ,\n resulting in a much larger, slower, more data-hungry model)\nA min_count=2 saves quite a bit of model memory, discarding only words\n that appear in a single doc (and are thus no more expressive than the\n unique-to-each doc vectors themselves)",
"import multiprocessing\nfrom collections import OrderedDict\n\nimport gensim.models.doc2vec\nassert gensim.models.doc2vec.FAST_VERSION > -1, \"This will be painfully slow otherwise\"\n\nfrom gensim.models.doc2vec import Doc2Vec\n\ncommon_kwargs = dict(\n vector_size=100, epochs=20, min_count=2,\n sample=0, workers=multiprocessing.cpu_count(), negative=5, hs=0,\n)\n\nsimple_models = [\n # PV-DBOW plain\n Doc2Vec(dm=0, **common_kwargs),\n # PV-DM w/ default averaging; a higher starting alpha may improve CBOW/PV-DM modes\n Doc2Vec(dm=1, window=10, alpha=0.05, comment='alpha=0.05', **common_kwargs),\n # PV-DM w/ concatenation - big, slow, experimental mode\n # window=5 (both sides) approximates paper's apparent 10-word total window size\n Doc2Vec(dm=1, dm_concat=1, window=5, **common_kwargs),\n]\n\nfor model in simple_models:\n model.build_vocab(alldocs)\n print(f\"{model} vocabulary scanned & state initialized\")\n\nmodels_by_name = OrderedDict((str(model), model) for model in simple_models)",
"Le and Mikolov note that combining a paragraph vector from Distributed Bag of\nWords (DBOW) and Distributed Memory (DM) improves performance. We will\nfollow, pairing the models together for evaluation. Here, we concatenate the\nparagraph vectors obtained from each model with the help of a thin wrapper\nclass included in a gensim test module. (Note that this a separate, later\nconcatenation of output-vectors than the kind of input-window-concatenation\nenabled by the dm_concat=1 mode above.)",
"from gensim.test.test_doc2vec import ConcatenatedDoc2Vec\nmodels_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[0], simple_models[1]])\nmodels_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[0], simple_models[2]])",
"Predictive Evaluation Methods\nGiven a document, our Doc2Vec models output a vector representation of the document.\nHow useful is a particular model?\nIn the case of sentiment analysis, we want the output vector to reflect the sentiment in the input document.\nSo, in vector space, positive documents should be distant from negative documents.\nWe train a logistic regression from the training set:\n\n- regressors (inputs): document vectors from the Doc2Vec model\n- target (outputs): sentiment labels\n\nSo, this logistic regression will be able to predict sentiment given a document vector.\nNext, we test our logistic regression on the test set, and measure the rate of errors (incorrect predictions).\nIf the document vectors from the Doc2Vec model reflect the actual sentiment well, the error rate will be low.\nTherefore, the error rate of the logistic regression is an indication of how well the given Doc2Vec model represents documents as vectors.\nWe can then compare different Doc2Vec models by looking at their error rates.",
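"The error-rate arithmetic itself is simple; a sketch with made-up predicted probabilities:\n\n```python\nimport numpy as np\n\ntest_predictions = np.array([0.9, 0.2, 0.6, 0.4])  # predicted P(positive)\nsentiments = np.array([1.0, 0.0, 0.0, 0.0])        # true labels\n# round each probability to 0/1 and compare with the true label\ncorrects = np.sum(np.rint(test_predictions) == sentiments)\nerror_rate = 1.0 - corrects / len(test_predictions)\nassert error_rate == 0.25  # one of four rounded predictions is wrong\n```",

```python
import numpy as np

test_predictions = np.array([0.9, 0.2, 0.6, 0.4])  # predicted P(positive)
sentiments = np.array([1.0, 0.0, 0.0, 0.0])        # true labels
# round each probability to 0/1 and compare with the true label
corrects = np.sum(np.rint(test_predictions) == sentiments)
error_rate = 1.0 - corrects / len(test_predictions)
assert error_rate == 0.25  # one of four rounded predictions is wrong
```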
"import numpy as np\nimport statsmodels.api as sm\nfrom random import sample\n\ndef logistic_predictor_from_data(train_targets, train_regressors):\n \"\"\"Fit a statsmodel logistic predictor on supplied data\"\"\"\n logit = sm.Logit(train_targets, train_regressors)\n predictor = logit.fit(disp=0)\n # print(predictor.summary())\n return predictor\n\ndef error_rate_for_model(test_model, train_set, test_set):\n \"\"\"Report error rate on test_doc sentiments, using supplied model and train_docs\"\"\"\n\n train_targets = [doc.sentiment for doc in train_set]\n train_regressors = [test_model.dv[doc.tags[0]] for doc in train_set]\n train_regressors = sm.add_constant(train_regressors)\n predictor = logistic_predictor_from_data(train_targets, train_regressors)\n\n test_regressors = [test_model.dv[doc.tags[0]] for doc in test_set]\n test_regressors = sm.add_constant(test_regressors)\n\n # Predict & evaluate\n test_predictions = predictor.predict(test_regressors)\n corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_set])\n errors = len(test_predictions) - corrects\n error_rate = float(errors) / len(test_predictions)\n return (error_rate, errors, len(test_predictions), predictor)",
"Bulk Training & Per-Model Evaluation\nNote that doc-vector training is occurring on all documents of the dataset,\nwhich includes all TRAIN/TEST/DEV docs. Because the native document-order\nhas similar-sentiment documents in large clumps – which is suboptimal for\ntraining – we work with a once-shuffled copy of the training set.\nWe evaluate each model's sentiment predictive power based on error rate.\n(On a 4-core 2.6GHz Intel Core i7, these 20 passes training and evaluating 3\nmain models take about an hour.)",
"from collections import defaultdict\nerror_rates = defaultdict(lambda: 1.0) # To selectively print only best errors achieved\n\nfrom random import shuffle\nshuffled_alldocs = alldocs[:]\nshuffle(shuffled_alldocs)\n\nfor model in simple_models:\n print(f\"Training {model}\")\n model.train(shuffled_alldocs, total_examples=len(shuffled_alldocs), epochs=model.epochs)\n\n print(f\"\\nEvaluating {model}\")\n err_rate, err_count, test_count, predictor = error_rate_for_model(model, train_docs, test_docs)\n error_rates[str(model)] = err_rate\n print(\"\\n%f %s\\n\" % (err_rate, model))\n\nfor model in [models_by_name['dbow+dmm'], models_by_name['dbow+dmc']]:\n print(f\"\\nEvaluating {model}\")\n err_rate, err_count, test_count, predictor = error_rate_for_model(model, train_docs, test_docs)\n error_rates[str(model)] = err_rate\n print(f\"\\n{err_rate} {model}\\n\")",
"Achieved Sentiment-Prediction Accuracy\nCompare error rates achieved, best-to-worst",
"print(\"Err_rate Model\")\nfor rate, name in sorted((rate, name) for name, rate in error_rates.items()):\n print(f\"{rate} {name}\")",
"In our testing, contrary to the results of the paper, on this problem,\nPV-DBOW alone performs as well as anything else. Concatenating vectors from\ndifferent models only sometimes offers a tiny predictive improvement – and\nstays generally close to the best-performing solo model included.\nThe best results achieved here are just around 10% error rate, still a long\nway from the paper's reported 7.42% error rate.\n(Other trials not shown, with larger vectors and other changes, also don't\ncome close to the paper's reported value. Others around the net have reported\na similar inability to reproduce the paper's best numbers. The PV-DM/C mode\nimproves a bit with many more training epochs – but doesn't reach parity with\nPV-DBOW.)\nExamining Results\nLet's look for answers to the following questions:\n1. Are inferred vectors close to the precalculated ones?\n2. Do close documents seem more related than distant ones?\n3. Do the word vectors show useful similarities?\n4. Are the word vectors from this dataset any good at analogies?\nAre inferred vectors close to the precalculated ones?",
"doc_id = np.random.randint(len(simple_models[0].dv)) # Pick random doc; re-run cell for more examples\nprint(f'for doc {doc_id}...')\nfor model in simple_models:\n inferred_docvec = model.infer_vector(alldocs[doc_id].words)\n print(f'{model}:\\n {model.dv.most_similar([inferred_docvec], topn=3)}')",
"(Yes, here the stored vector from 20 epochs of training is usually one of the\nclosest to a freshly-inferred vector for the same words. Defaults for\ninference may benefit from tuning for each dataset or model parameters.)\nDo close documents seem more related than distant ones?",
"import random\n\ndoc_id = np.random.randint(len(simple_models[0].dv))  # pick random doc, re-run cell for more examples\nmodel = random.choice(simple_models)  # and a random model\nsims = model.dv.most_similar(doc_id, topn=len(model.dv))  # get *all* similar documents\nprint(f'TARGET ({doc_id}): «{\" \".join(alldocs[doc_id].words)}»\\n')\nprint(f'SIMILAR/DISSIMILAR DOCS PER MODEL {model}:\\n')\nfor label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n    s = sims[index]\n    i = sims[index][0]\n    words = ' '.join(alldocs[i].words)\n    print(f'{label} {s}: «{words}»\\n')",
"Somewhat, in terms of reviewer tone, movie genre, etc... the MOST\ncosine-similar docs usually seem more like the TARGET than the MEDIAN or\nLEAST... especially if the MOST has a cosine-similarity > 0.5. Re-run the\ncell to try another random target document.\nDo the word vectors show useful similarities?",
"import random\n\nword_models = simple_models[:]\n\ndef pick_random_word(model, threshold=10):\n    # pick a random word with a suitable number of occurrences\n    while True:\n        word = random.choice(model.wv.index_to_key)\n        if model.wv.get_vecattr(word, \"count\") > threshold:\n            return word\n\ntarget_word = pick_random_word(word_models[0])\n# or uncomment below line, to just pick a word from the relevant domain:\n# target_word = 'comedy/drama'\n\nfor model in word_models:\n    print(f'target_word: {repr(target_word)} model: {model} similar words:')\n    for i, (word, sim) in enumerate(model.wv.most_similar(target_word, topn=10), 1):\n        print(f'    {i}. {sim:.2f} {repr(word)}')\n    print()",
"Do the DBOW words look meaningless? That's because the gensim DBOW model\ndoesn't train word vectors – they remain at their random initialized values –\nunless you ask with the dbow_words=1 initialization parameter. Concurrent\nword-training slows DBOW mode significantly, and offers little improvement\n(and sometimes a little worsening) of the error rate on this IMDB\nsentiment-prediction task, but may be appropriate on other tasks, or if you\nalso need word-vectors.\nWords from DM models tend to show meaningfully similar words when there are\nmany examples in the training data (as with 'plot' or 'actor'). (All DM modes\ninherently involve word-vector training concurrent with doc-vector training.)\nAre the word vectors from this dataset any good at analogies?",
"from gensim.test.utils import datapath\nquestions_filename = datapath('questions-words.txt')\n\n# Note: this analysis takes many minutes\nfor model in word_models:\n    score, sections = model.wv.evaluate_word_analogies(questions_filename)\n    correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])\n    print(f'{model}: {float(correct*100)/(correct+incorrect):0.2f}% correct ({correct} of {correct+incorrect})')",
"Even though this is a tiny, domain-specific dataset, it shows some meager\ncapability on the general word analogies – at least for the DM/mean and\nDM/concat models which actually train word vectors. (The untrained\nrandom-initialized words of the DBOW model of course fail miserably.)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
southpaw94/MachineLearning
|
TextExamples/3547_11_Code.ipynb
|
gpl-2.0
|
[
"Sebastian Raschka, 2015\nPython Machine Learning Essentials\nChapter 11 - Working with Unlabeled Data – Clustering Analysis\nNote that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).",
"%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scipy,scikit-learn\n\n# to install watermark just uncomment the following line:\n#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py",
"<br>\n<br>\nSections\n\nGrouping objects by similarity using k-means\nUsing the elbow method to find the optimal number of clusters\nQuantifying the quality of clustering via silhouette plots\n\n\nOrganizing clusters as a hierarchical tree\nPerforming hierarchical clustering on a distance matrix\nAttaching dendrograms to a heat map\nApplying agglomerative clustering via scikit-learn\n\n\nLocating regions of high density via DBSCAN\n\n<br>\n<br>\nGrouping objects by similarity using k-means\n[back to top]",
"from sklearn.datasets import make_blobs\nX, y = make_blobs(n_samples=150, \n n_features=2, \n centers=3, \n cluster_std=0.5, \n shuffle=True, \n random_state=0)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.scatter(X[:,0], X[:,1], c='white', marker='o', s=50)\nplt.grid()\nplt.tight_layout()\n#plt.savefig('./figures/spheres.png', dpi=300)\nplt.show()\n\nfrom sklearn.cluster import KMeans\nkm = KMeans(n_clusters=3, \n init='random', \n n_init=10, \n max_iter=300,\n tol=1e-04,\n random_state=0)\ny_km = km.fit_predict(X)\n\nplt.scatter(X[y_km==0,0], \n X[y_km==0,1], \n s=50, \n c='lightgreen', \n marker='s', \n label='cluster 1')\nplt.scatter(X[y_km==1,0], \n X[y_km==1,1], \n s=50, \n c='orange', \n marker='o', \n label='cluster 2')\nplt.scatter(X[y_km==2,0], \n X[y_km==2,1], \n s=50, \n c='lightblue', \n marker='v', \n label='cluster 3')\nplt.scatter(km.cluster_centers_[:,0], \n km.cluster_centers_[:,1], \n s=250, \n marker='*', \n c='red', \n label='centroids')\nplt.legend()\nplt.grid()\nplt.tight_layout()\n#plt.savefig('./figures/centroids.png', dpi=300)\nplt.show()",
"<br>\nUsing the elbow method to find the optimal number of clusters\n[back to top]",
"print('Distortion: %.2f' % km.inertia_)\n\ndistortions = []\nfor i in range(1, 11):\n    km = KMeans(n_clusters=i, \n                init='k-means++', \n                n_init=10, \n                max_iter=300, \n                random_state=0)\n    km.fit(X)\n    distortions.append(km.inertia_)\nplt.plot(range(1,11), distortions, marker='o')\nplt.xlabel('Number of clusters')\nplt.ylabel('Distortion')\nplt.tight_layout()\n#plt.savefig('./figures/elbow.png', dpi=300)\nplt.show()",
"<br>\nQuantifying the quality of clustering via silhouette plots\n[back to top]",
"import numpy as np\nfrom matplotlib import cm\nfrom sklearn.metrics import silhouette_samples\n\nkm = KMeans(n_clusters=3, \n init='k-means++', \n n_init=10, \n max_iter=300,\n tol=1e-04,\n random_state=0)\ny_km = km.fit_predict(X)\n\ncluster_labels = np.unique(y_km)\nn_clusters = cluster_labels.shape[0]\nsilhouette_vals = silhouette_samples(X, y_km, metric='euclidean')\ny_ax_lower, y_ax_upper = 0, 0\nyticks = []\nfor i, c in enumerate(cluster_labels):\n c_silhouette_vals = silhouette_vals[y_km == c]\n c_silhouette_vals.sort()\n y_ax_upper += len(c_silhouette_vals)\n color = cm.jet(i / n_clusters)\n plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0, \n edgecolor='none', color=color)\n\n yticks.append((y_ax_lower + y_ax_upper) / 2)\n y_ax_lower += len(c_silhouette_vals)\n \nsilhouette_avg = np.mean(silhouette_vals)\nplt.axvline(silhouette_avg, color=\"red\", linestyle=\"--\") \n\nplt.yticks(yticks, cluster_labels + 1)\nplt.ylabel('Cluster')\nplt.xlabel('Silhouette coefficient')\n\nplt.tight_layout()\n# plt.savefig('./figures/silhouette.png', dpi=300)\nplt.show()",
"Comparison to \"bad\" clustering:",
"km = KMeans(n_clusters=2, \n init='k-means++', \n n_init=10, \n max_iter=300,\n tol=1e-04,\n random_state=0)\ny_km = km.fit_predict(X)\n\nplt.scatter(X[y_km==0,0], \n X[y_km==0,1], \n s=50, \n c='lightgreen', \n marker='s', \n label='cluster 1')\nplt.scatter(X[y_km==1,0], \n X[y_km==1,1], \n s=50, \n c='orange', \n marker='o', \n label='cluster 2')\n\nplt.scatter(km.cluster_centers_[:,0], km.cluster_centers_[:,1], s=250, marker='*', c='red', label='centroids')\nplt.legend()\nplt.grid()\nplt.tight_layout()\n#plt.savefig('./figures/centroids_bad.png', dpi=300)\nplt.show()\n\ncluster_labels = np.unique(y_km)\nn_clusters = cluster_labels.shape[0]\nsilhouette_vals = silhouette_samples(X, y_km, metric='euclidean')\ny_ax_lower, y_ax_upper = 0, 0\nyticks = []\nfor i, c in enumerate(cluster_labels):\n c_silhouette_vals = silhouette_vals[y_km == c]\n c_silhouette_vals.sort()\n y_ax_upper += len(c_silhouette_vals)\n color = cm.jet(i / n_clusters)\n plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0, \n edgecolor='none', color=color)\n\n yticks.append((y_ax_lower + y_ax_upper) / 2)\n y_ax_lower += len(c_silhouette_vals)\n \nsilhouette_avg = np.mean(silhouette_vals)\nplt.axvline(silhouette_avg, color=\"red\", linestyle=\"--\") \n\nplt.yticks(yticks, cluster_labels + 1)\nplt.ylabel('Cluster')\nplt.xlabel('Silhouette coefficient')\n\nplt.tight_layout()\n# plt.savefig('./figures/silhouette_bad.png', dpi=300)\nplt.show()",
"<br>\n<br>\nOrganizing clusters as a hierarchical tree\n[back to top]",
"import pandas as pd\nimport numpy as np\n\nnp.random.seed(123)\n\nvariables = ['X', 'Y', 'Z']\nlabels = ['ID_0','ID_1','ID_2','ID_3','ID_4']\n\nX = np.random.random_sample([5,3])*10\ndf = pd.DataFrame(X, columns=variables, index=labels)\ndf",
"<br>\nPerforming hierarchical clustering on a distance matrix\n[back to top]",
"from scipy.spatial.distance import pdist,squareform\n\nrow_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')), columns=labels, index=labels)\nrow_dist",
"We can either pass a condensed distance matrix (upper triangular) from the pdist function, or we can pass the \"original\" data array and define the 'euclidean' metric as function argument n linkage. However, we should nott pass the squareform distance matrix, which would yield different distance values although the overall clustering could be the same.",
"# 1. incorrect approach: Squareform distance matrix\n\nfrom scipy.cluster.hierarchy import linkage\n\nrow_clusters = linkage(row_dist, method='complete', metric='euclidean')\npd.DataFrame(row_clusters, \n columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],\n index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])\n\n# 2. correct approach: Condensed distance matrix\n\nrow_clusters = linkage(pdist(df, metric='euclidean'), method='complete')\npd.DataFrame(row_clusters, \n columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],\n index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])\n\n# 3. correct approach: Input sample matrix\n\nrow_clusters = linkage(df.values, method='complete', metric='euclidean')\npd.DataFrame(row_clusters, \n columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],\n index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])\n\nfrom scipy.cluster.hierarchy import dendrogram\n\n# make dendrogram black (part 1/2)\n# from scipy.cluster.hierarchy import set_link_color_palette\n# set_link_color_palette(['black'])\n\nrow_dendr = dendrogram(row_clusters, \n labels=labels,\n # make dendrogram black (part 2/2)\n # color_threshold=np.inf\n )\nplt.tight_layout()\nplt.ylabel('Euclidean distance')\n#plt.savefig('./figures/dendrogram.png', dpi=300, \n# bbox_inches='tight')\nplt.show()",
"<br>\nAttaching dendrograms to a heat map\n[back to top]",
"# plot row dendrogram\nfig = plt.figure(figsize=(8,8))\naxd = fig.add_axes([0.09,0.1,0.2,0.6])\nrow_dendr = dendrogram(row_clusters, orientation='right')\n\n# reorder data with respect to clustering\ndf_rowclust = df.ix[row_dendr['leaves'][::-1]]\n\naxd.set_xticks([])\naxd.set_yticks([])\n\n# remove axes spines from dendrogram\nfor i in axd.spines.values():\n i.set_visible(False)\n\n\n \n# plot heatmap\naxm = fig.add_axes([0.23,0.1,0.6,0.6]) # x-pos, y-pos, width, height\ncax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')\nfig.colorbar(cax)\naxm.set_xticklabels([''] + list(df_rowclust.columns))\naxm.set_yticklabels([''] + list(df_rowclust.index))\n\n# plt.savefig('./figures/heatmap.png', dpi=300)\nplt.show()",
"<br>\nApplying agglomerative clustering via scikit-learn\n[back to top]",
"from sklearn.cluster import AgglomerativeClustering\n\nac = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='complete')\nlabels = ac.fit_predict(X)\nprint('Cluster labels: %s' % labels)",
"<br>\n<br>\nLocating regions of high density via DBSCAN\n[back to top]",
"from sklearn.datasets import make_moons\n\nX, y = make_moons(n_samples=200, noise=0.05, random_state=0)\nplt.scatter(X[:,0], X[:,1])\nplt.tight_layout()\n#plt.savefig('./figures/moons.png', dpi=300)\nplt.show()",
"K-means and hierarchical clustering:",
"f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,3))\n\nkm = KMeans(n_clusters=2, random_state=0)\ny_km = km.fit_predict(X)\nax1.scatter(X[y_km==0,0], X[y_km==0,1], c='lightblue', marker='o', s=40, label='cluster 1')\nax1.scatter(X[y_km==1,0], X[y_km==1,1], c='red', marker='s', s=40, label='cluster 2')\nax1.set_title('K-means clustering')\n\nac = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='complete')\ny_ac = ac.fit_predict(X)\nax2.scatter(X[y_ac==0,0], X[y_ac==0,1], c='lightblue', marker='o', s=40, label='cluster 1')\nax2.scatter(X[y_ac==1,0], X[y_ac==1,1], c='red', marker='s', s=40, label='cluster 2')\nax2.set_title('Agglomerative clustering')\n\nplt.legend()\nplt.tight_layout()\n#plt.savefig('./figures/kmeans_and_ac.png', dpi=300)\nplt.show()",
"Density-based clustering:",
"from sklearn.cluster import DBSCAN\n\ndb = DBSCAN(eps=0.2, min_samples=5, metric='euclidean')\ny_db = db.fit_predict(X)\nplt.scatter(X[y_db==0,0], X[y_db==0,1], c='lightblue', marker='o', s=40, label='cluster 1')\nplt.scatter(X[y_db==1,0], X[y_db==1,1], c='red', marker='s', s=40, label='cluster 2')\nplt.legend()\nplt.tight_layout()\n#plt.savefig('./figures/moons_dbscan.png', dpi=300)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MadsJensen/intro_to_scientific_computing
|
notebooks/10-Building-blocks-of-program.ipynb
|
bsd-3-clause
|
[
"Building blocks of a computer program\nReading and writing files (in programs)\nThe first step towards gaining access to your data is being able to read it into memory. Writing subsets of processed results out to disk is often preferrable to performing an entire analysis in one shot.\nConceptually\nWhen we list (ls) and find files in the file system, we are referring to the names of the data containers. For reading the contents of an existing file, we must ask the OS to open the file for reading. When instead we wish to write to file with a specific name, we must ask the OS to open the file for writing (possibly creating the file, if one did not already exist). We can also open the file for appending.\nOnce we're done with I/O-operations on a file, we must remember to close it.\nProgrammatically\nEach language has its own syntax for file I/O, but the basic idiom is:\n\nopen a file ('r', 'w', or 'a'), receive a file pointer in the process\nread from or write to the file pointer\nclose the file pointer, thus closing the file I/O-process\n\nIn Python\nThe built-in function open opens a file, and returns a file pointer object to it. The pointer has methods for reading/writing, and a close-method for finishing up.\n(Matlab has a slightly different syntax, but the ideas are the same.)",
"fp = open('dickens.txt', 'rt') # note the 'rt': 'Read as Text'\npayload = fp.read() # assign entire contents to a variable\nfp.close() # close the file\nprint(payload) # print the contents",
"Control flow: conditionals and iteration\nComputer programs are much about controlling the flow of execution (of different tasks). Control is achieved by applying logic.\nChapter 5 in Downey (2015) shows how logical operators enable control flow via boolean expressions that can either evaluate to True or to False (there are no gray zones here...).\n\nIteration is introduced in Chapter 7, as well as slicing lists and traversing them with for-loops.\nFunctions\n\"In the context of programming, a function is a named sequence of statements that performs a computation. When you define a function, you specify the name and the sequence of statements. Later, you can 'call' the function by name. \nA function call is like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the body of the function, runs the statements there, and then comes back to pick up where it left off. \nThat sounds simple enough, until you remember that one function can call another. While in the middle of one function, the program might have to run the statements in another function. Then, while running that new function, the program might have to run yet another function! \nIn summary, when you read a program, you don’t always want to read from top to bottom. Sometimes it makes more sense if you follow the flow of execution.” (Downey, 2015; Chapter 3) \nArguments\nSome functions just do their 'thing', but most take input arguments. These can either be in the form of parameters that modify what the function actually does, and/or in the form of data to operate on. (This is merely a conceptual distinction: parameters are also a form of data, of course!).\nWhy functions?\nBut is there a deeper point/advantage in using functions instead of just writing out the code you want to have executed? One reason is that:\n\nCreating a new function gives you an opportunity to name a group of statements, which makes your program easier to read and debug. 
[and] Well-designed functions are often useful for many programs. Once you write [...] one, you can reuse it.\n\nA third example relates to efficiency:\n\nFunctions can make a program smaller by eliminating repetitive code. Later, if you make a change, you only have to make it in one place.” (Downey, 2015) \n\nDefining functions\nBefore a function can be used in a programme, it must be defined. In the Python-language, this is achieved using the def keyword, arguments in parentheses and a colon (:)\npython\ndef a_new_function(param1, param2, meaningful_name=2):\n code...\n return(return_value)\nIn another common interpreted language, Matlab, the keyword to use is function, and the value of the variable return_value must be set in the code\nMatlab\nfunction return_value = a_new_function(param1, param2, param3)\n code...\nExample: the book price from Tuesday\nOne solution was:\npython\ncp=60\nb=229\ndc=0.4\nsh=49\nad=3\nprint(cp*b*(1-dc) + sh + (cp-1)*ad)\nNow if we want to know the price for N amount of books, we can turn it into a function that takes the number of books as a parameter.",
"def book_price(N=60):\n \"\"\"This function calculates the total price of books. \n \n Parameters:\n ===========\n N : int\n Number of books\n \n Return:\n =======\n total_price : float\n the total price for N books\n \"\"\" \n cp=N # number of books \n b=229 # base price of one book \n dc=0.4 # discount\n sh=49 # shipping for the first book\n ad=3 # shipping for additional books\n return( (cp*b*(1-dc) + sh + (cp-1)*ad) )",
"Now we can call the function with any amount of books",
"print(\"The price for 60 books: \", book_price(60)) # first the number where we know the result, just to check it works\nprint(\"The price for 50 books: \", book_price(50))",
"We can make a loop that calls the functions to see the price for a large amount of books",
"N = 250 # number of books \nbooks = [] # start with an empty list\n\nfor i in range(N): # we know the number of iterations we want of the loop, hence a *for* loop\n books.append(book_price( i + 1 )) # why the \"i + 1\"?",
"Now we make a simple graph to visualize the total prices of books.",
"%matplotlib inline \nimport matplotlib.pyplot as plt # import the python library \nplt.style.use(\"seaborn\") # specify the aesthetics properties of the plot \n\nplt.figure() # make a new figure\nplt.plot(books); # plot the books\nplt.ylabel(\"Price of the books in DKK\"); # set the y-axis text\nplt.xlabel(\"Number of books\"); # set the x-axis text\nplt.title(\"Price of books\"); # set the title text",
"Loops and memory usage\nLoops can take a lot of memory (RAM) if each iteration of the loop extend at each iteration. E.g. when we loop over the different amount of books and save the total price in a variable.",
"import sys\nimport numpy as np\n\nN = 100\n\nno_memory_assignment_var = []\nno_memory_assignment_mem = []\nmemory_assignment_var = np.empty(N)\nmemory_assignment_mem = np.empty(N)\n\nfor i in range(N):\n no_memory_assignment_var.append(i)\n no_memory_assignment_mem.append(sys.getsizeof(no_memory_assignment_var))\n\nfor i in range(N):\n memory_assignment_var[i] = i\n memory_assignment_mem[i] = sys.getsizeof(memory_assignment_var)\n\n\n\nimport matplotlib as mpl\n\nmpl.rcParams['figure.figsize'] = (15, 10)\n\nplt.figure();\nplt.plot(no_memory_assignment_mem, 'r', label=\"memory use for the no assigned\");\nplt.plot(memory_assignment_mem, 'b', label=\"memory use for the assigned\");\n#plt.hlines(memory_assignment_mem, 0, len(no_memory_assignment_mem), 'r',\n# label=\"memory use for the assigned\");\nplt.legend();\n",
"Function or loop: When to use what?\nIt depends. The combination is very powerful and useful.\nDebugging\nErrors and mistakes inevitably find their way into code. The execution (or compilation) of a program stops at an error, after which it is your job to fix the code.\nBy ‘mistake’, we here refer to something less than an ‘error’, i.e., the program may run to completion, but the output will be 'off' in some more-or-less obvious fashion. \nFor both errors and mistakes, sorting out where in the code the 'bug' is hiding, often requires examining the contents of a variable: at which stage does the program behave differently from what we expect/intend?\nDebugging is essentially the act of stepping through each (relevant) line of code, until the culprit is found. Some coding software (Integrated Development Environment, IDE) allow you to set a break point at a specific line in the code. When the program is run, the interpreter simply halts at the line, and waits for you to continue. Before doing so, you can 'inspect' the contents of variables to identify the problematic spot.\nThe most simple form of debugging, and by far the one used most in 'casual' programming, is adding print-statements in the code.\nVersion Control Systems (VCS)\nDeveloping programs and analysis pipelines is iterative, and sometimes you make mistakes. 
Using a VCS is the only way to survive situations like:\n\n\"The code worked yesterday, now it's broken; why?\"\n\"Please redo the analysis from 4 months back\"\nAsger and Alma are working on the same project, even on the same file; how should their work be merged?\n\nThe basic idea is:\n\nCreate\nSave the changes you make\nModify\nIn case of troubles at step N\nrevert back to N-1\nfix problem & continue\n\n\n\nAt the time of writing, the only VCS in practical use is git (with mercurial being a closely-related but distant second).\nWe do not have time to go into the details of git-based version control, but will gladly recommend additional resouces, such as: Try git in 15 minutes and the definitive (free) Pro Git-book.\nGitHub\nMost open-source code development is hosted at the GitHub website. GitHub is a place on the internet you can upload your version-controlled code.\nExercises\nFile I/O\nRead and print the contents of the CSV-file you created previously (with birth dates)\nFunctions\nFollow the flow of execution\nBefore running it!, figure out what the output of this code block will be:",
"def double_the_input(input): \n return(2 * input) \n \ndef add_two_items(item_a, item_b): \n return(item_a + item_b)\n \na = 4.5 \nb = 7 \n \nprint(add_two_items(double_the_input(a), b))",
"Function arguments\nWrite a function that takes 2 arguments and raises the first input to the power of the second input, e.g., $3^{4}$, and returns the output. If the user only enter one argument, assume that the power to raise to should be 2 (i.e. use a keyword argument with a default value of 2).",
"# define your function here\n",
"Test your function below\nRun the following cell to check your function! Just write the name of your function on the first line",
"f_to_test = # write your function name after the = sign\nassert f_to_test(2) == 4, '2^2 should be 4!'\nassert f_to_test(3, 2) == 9, '3^2 should be 9!'",
"Mean and median\nWrite two functions. Both take a list of numbers as input (argument), one returns the mean of the numbers while the other returns the median."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
patonelli/estocastico
|
BrownianMotion.ipynb
|
gpl-2.0
|
[
"Brownian Motion\nBrownian motion is a stochastic process. One form of the equation for\nBrownian motion is\n$X(0) = X_0$\n$X(t + dt) = X(t) + N(0, (delta)^2 dt; t, t+dt)$\nwhere $N(a, b; t_1, t_2)$ is a normally distributed random\nvariable with mean a and variance b. The parameters t,,1,, and t,,2,,\nmake explicit the statistical independence of N on different time\nintervals; that is, if $[t_1, t_2)$ and $[t_3, t_4)$ are\ndisjoint intervals, then $N(a, b; t_1, t_2)$ and $N(a, b; t_3,\nt_4)$ are independent.\nThe calculation is actually very simple. A naive implementation that\nprints n steps of the Brownian motion might look like this:",
"from scipy.stats import norm\n\n# Process parameters\ndelta = 0.25\ndt = 0.1\n\n# Initial condition.\nx = 0.0\n\n# Number of iterations to compute.\nn = 20\n\n# Iterate to compute the steps of the Brownian motion.\nfor k in range(n):\n print(k)\n x = x + norm.rvs(scale=delta**2*dt)\n print(x)",
"The above code could be easily modified to save the iterations in an\narray instead of printing them.\nThe problem with the above code is that it is slow. If we want to\ncompute a large number of iterations, we can do much better. The key is\nto note that the calculation is the cumulative sum of samples from the\nnormal distribution. A fast version can be implemented by first\ngenerating all the samples from the normal distribution with one call to\nscipy.stats.norm.rvs(), and then using the numpy cumsum function to\nform the cumulative sum.\nThe following function uses this idea to implement the function\nbrownian(). The function allows the initial condition to be an array\n(or anything that can be converted to an array). Each element of x0 is\ntreated as an initial condition for a Brownian motion.",
"\"\"\"\nbrownian() implements one dimensional Brownian motion (i.e. the Wiener process).\n\"\"\"\n\n# File: brownian.py\n\nfrom math import sqrt\nfrom scipy.stats import norm\nimport numpy as np\n\n\ndef brownian(x0, n, dt, delta, out=None):\n \"\"\"\n Generate an instance of Brownian motion (i.e. the Wiener process):\n\n X(t) = X(0) + N(0, delta**2 * t; 0, t)\n\n where N(a,b; t0, t1) is a normally distributed random variable with mean a and\n variance b. The parameters t0 and t1 make explicit the statistical\n independence of N on different time intervals; that is, if [t0, t1) and\n [t2, t3) are disjoint intervals, then N(a, b; t0, t1) and N(a, b; t2, t3)\n are independent.\n \n Written as an iteration scheme,\n\n X(t + dt) = X(t) + N(0, delta**2 * dt; t, t+dt)\n\n\n If `x0` is an array (or array-like), each value in `x0` is treated as\n an initial condition, and the value returned is a numpy array with one\n more dimension than `x0`.\n\n Arguments\n ---------\n x0 : float or numpy array (or something that can be converted to a numpy array\n using numpy.asarray(x0)).\n The initial condition(s) (i.e. position(s)) of the Brownian motion.\n n : int\n The number of steps to take.\n dt : float\n The time step.\n delta : float\n delta determines the \"speed\" of the Brownian motion. The random variable\n of the position at time t, X(t), has a normal distribution whose mean is\n the position at time t=0 and whose variance is delta**2*t.\n out : numpy array or None\n If `out` is not None, it specifies the array in which to put the\n result. 
If `out` is None, a new numpy array is created and returned.\n\n Returns\n -------\n A numpy array of floats with shape `x0.shape + (n,)`.\n \n Note that the initial value `x0` is not included in the returned array.\n \"\"\"\n\n x0 = np.asarray(x0)\n\n # For each element of x0, generate a sample of n numbers from a\n # normal distribution.\n r = norm.rvs(size=x0.shape + (n,), scale=delta*sqrt(dt))\n\n # If `out` was not given, create an output array.\n if out is None:\n out = np.empty(r.shape)\n\n # This computes the Brownian motion by forming the cumulative sum of\n # the random samples. \n np.cumsum(r, axis=-1, out=out)\n\n # Add the initial condition.\n out += np.expand_dims(x0, axis=-1)\n\n return out",
"Example\nHere's a script that uses this function and matplotlib's pylab module to\nplot several realizations of Brownian motion.",
"%matplotlib inline\nimport numpy\nfrom pylab import plot, show, grid, xlabel, ylabel\n\n# The Wiener process parameter.\ndelta = 2\n# Total time.\nT = 10.0\n# Number of steps.\nN = 500\n# Time step size\ndt = T/N\n# Number of realizations to generate.\nm = 20\n# Create an empty array to store the realizations.\nx = numpy.empty((m,N+1))\n# Initial values of x.\nx[:, 0] = 50\n\nbrownian(x[:,0], N, dt, delta, out=x[:,1:])\n\nt = numpy.linspace(0.0, N*dt, N+1)\nfor k in range(m):\n plot(t, x[k])\nxlabel('t', fontsize=16)\nylabel('x', fontsize=16)\ngrid(True)\nshow()",
"2D Brownian Motion\nThe same function can be used to generate Brownian motion in two\ndimensions, since each dimension is just a one-dimensional Brownian\nmotion.\nThe following script provides a demo.",
"%matplotlib inline\nimport numpy\nfrom pylab import plot, show, grid, axis, xlabel, ylabel, title\n\n# The Wiener process parameter.\ndelta = 0.25\n# Total time.\nT = 10.0\n# Number of steps.\nN = 500\n# Time step size\ndt = T/N\n# Initial values of x.\nx = numpy.empty((2,N+1))\nx[:, 0] = 0.0\n\nbrownian(x[:,0], N, dt, delta, out=x[:,1:])\n\n# Plot the 2D trajectory.\nplot(x[0],x[1])\n \n# Mark the start and end points.\nplot(x[0,0],x[1,0], 'go')\nplot(x[0,-1], x[1,-1], 'ro')\n\n# More plot decorations.\ntitle('2D Brownian Motion')\nxlabel('x', fontsize=16)\nylabel('y', fontsize=16)\naxis('equal')\ngrid(True)\nshow()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
parallel-domain/pd-sdk
|
docs/source/tutorial/general/general_usage.ipynb
|
apache-2.0
|
[
"Load Dataset\nWhen working with Parallel Domain's synthetic data, the standard output format is Dataset Governance Policy (DGP).\nIn general, the PD SDK can load from any format, as long as a custom decoder exists adhering to the DatasetDecoderProtocol.\nOut of the box, PD SDK comes with a pre-configured DGPDatasetDecoder which we can leverage to load data.\nIn this tutorial, we are going to load and access a dataset and its scenes.\nInitially, we need to select the fitting decoder (in this case: DGPDatasetDecoder) and then tell it where our dataset is stored. The location can be either a local filesystem path or an s3 address.",
"from paralleldomain.decoding.dgp.decoder import DGPDatasetDecoder\nfrom paralleldomain.model.dataset import Dataset # optional import, just for type reference in this tutorial\n\ndataset_path = \"s3://pd-sdk-c6b4d2ea-0301-46c9-8b63-ef20c0d014e9/testset_dgp\"\ndgp_decoder = DGPDatasetDecoder(dataset_path=dataset_path)\n\ndataset: Dataset = dgp_decoder.get_dataset()",
"Alternatively you can also use the decode_dataset helper method.",
"from paralleldomain.decoding.helper import decode_dataset\nfrom paralleldomain.decoding.common import DecoderSettings\n\n# To deactivate caching of certain data types use the DecoderSettings\nsettings = DecoderSettings(cache_images=False)\n# decode dgp dataset\ndgp_dataset: Dataset = decode_dataset(dataset_path=dataset_path, dataset_format=\"dgp\", settings=settings)",
"If you want to load a dataset which is stored in Cityscapes or NuImages format simply change the dataset_format to \"cityscapes\" or \"nuimages\":",
"nu_images_dataset_path = \"some/path/to/a/nuimages/root/folder\"\nnu_images_dataset: Dataset = decode_dataset(dataset_path=nu_images_dataset_path, dataset_format=\"nuimages\")\n\ncityscapes_dataset_path = \"some/path/to/a/cityscapes/root/folder\"\ncityscapes_dataset: Dataset = decode_dataset(dataset_path=nu_images_dataset_path, dataset_format=\"cityscapes\")",
"Dataset Information\nNow that the dataset information has been loaded, we query a couple of metadata from it:",
"print(\"Dataset Metadata:\")\nprint(\"Name:\", dataset.metadata.name)\nprint(\"Available Annotation Types:\", *[f\"\\t{a}\" for a in dataset.available_annotation_types], sep=\"\\n\")\nprint(\"Custom Attributes:\", *[f\"\\t{k}: {v}\" for k,v in dataset.metadata.custom_attributes.items()], sep=\"\\n\")",
"As you can see, the property .available_annotation_types includes classes from paralleldomain.model.annotation. In tutorials around reading annotations from a dataset, these exact classes will be re-used, which allows for a consistent type-check across objects.\nAccess available Scenes\nEvery dataset consists of scenes. These can contain ordered (usually by time) or unordered data.\nIn this example, we are looking to receive a list of scene names by type that have been found within the loaded dataset.",
"for sn in dataset.scene_names:\n print(f\"Found scene {sn}\")\n\nfor usn in dataset.unordered_scene_names:\n print(f\"Found unordered scene {usn}\")",
"Load Scene\nAfter having retrieved all scene names from a dataset, we get the actual Scene object and access a couple of properties as well as child objects.\nLet's start with scene properties:",
"from paralleldomain.model.scene import Scene # optional import, just for type reference in this tutorial\nfrom pprint import PrettyPrinter\n\n\nselected_scene = dataset.scene_names[0] # for future\nscene: Scene = dataset.get_scene(scene_name=selected_scene)\n\n# Use prettyprint for nested dictionaries\npp = PrettyPrinter(indent=2)\npp.pprint(scene.metadata)",
"Scene metadata usually contains any variables that changes with each scene and are not necessarily consistent across a whole dataset.\nIn many cases these are environment variables like weather, time of day and location.\nA Scene object also includes information about the available annotation types. In most datasets, these will be consistent with the ones at the Dataset level, but there is the possibility to vary them.",
"pp.pprint(scene.available_annotation_types)",
"Normally, in a scene, we expect to have more than one frame available, especially when we work with sequential data.\nThese can be accessed through their frame IDs. In DGP datasets, these are usually string representations of increasing integers, but they could also be more explicit identifiers for other datasets, e.g., a string representation of a UNIX time or details of the recording vehicle.\nIn our example, the frame IDs follow the pattern of integers in string representation:",
"print(f\"{scene.name} has {len(scene.frame_ids)} frames available.\")\nprint(scene.frame_ids)",
"Load Frame + Sensor\nFrames\nA Frame object is like a timestamp-bracket around different sensor data. If we have multiple sensors mounted on our recording vehicle, then the single data recordings are usually grouped into specific timestamps.\nWe can retrieve a Frame object and actually see what the \"grouping datetime\" is:",
"frame_0_id = \"0\"\nframe_0 = scene.get_frame(frame_id=frame_0_id)\nprint(frame_0.date_time)",
"Date/Times are presented as Python's std library datetime objects. When decoding data, the PD SDK also adds timezone information to these objects.\nSensors\nAs a next step, we want to see what sensor are available within that scene. In general, sensors are divided into CameraSensor and LidarSensor.",
"print(\"Cameras:\", *scene.camera_names, sep='\\n')\nprint(\"\\n\")\nprint(\"LiDARs:\", *scene.lidar_names, sep='\\n')",
"Similar to how we used this information to get a scene from a dataset, we can use this information to get a sensor from a scene.",
"camera_0_name = scene.camera_names[0]\ncamera_0 = scene.get_camera_sensor(camera_name=camera_0_name)",
"Knowing which frames and sensors are available allows us to now query for the actual sensor data.\nAs described above, a Frame is the time-grouping bracket around different sensor recordings. The actual data for a specific sensor assigned to this frame is represented in a SensorFrame.\nThis is where sensor data and annotations live.\nWe can either first select a Frame and then pick a Sensor or the other way around. They will return the same SensorFrame instance.",
"camera_frame_via_frame = frame_0.get_camera(camera_name=camera_0_name)\ncamera_frame_via_camera = camera_0.get_frame(frame_id=frame_0_id)\n\nassert(camera_frame_via_camera is camera_frame_via_camera)\nprint(f\"Both objects are equal: {id(camera_frame_via_frame)} == {id(camera_frame_via_camera)}\")",
"Load Sensor Frames\nNow that we know how to retrieve SensorFrame object for specific sensors and timestamps, we can use those to extract the actual sensor data.\nAccessing shared properties\nWhile there are CameraSensorFrame and LidarSensorFrame objects with sensor specific data, there are certain properties which are common to any SensorFrame.\nWe are going to print the most basic attributes on a SensorFrame, using the example of a CameraSensorFrame object.",
"# Get `CameraSensorFrame` for the first camera on the first frame within the scene.\nlidar_0_name = scene.lidar_names[0]\n\ncamera_0_frame_0 = frame_0.get_camera(camera_name=camera_0_name)\nlidar_0_frame_0 = frame_0.get_lidar(lidar_name=lidar_0_name)\n\nprint(f\"{camera_0_frame_0.sensor_name} recorded at {camera_0_frame_0.date_time}\")\nprint(f\"{lidar_0_frame_0.sensor_name} recorded at {lidar_0_frame_0.date_time}\")\n\n",
"Every SensorFrame always has information about the sensor pose (where is it in the world coordinate system?) and sensor extrinsic (how is the sensor positioned relative to the ego-vehicle reference coordinate system?).\nPoses and Extrinsics are represented as instances of the Transformation object. It allows storing 6-DoF information and allows for easy combination with each other.\nIn the example below, we are going to calculate the difference between the camera and the lidar sensor. The difference should be the same when using pose or extrinsic.",
"print(camera_0_frame_0.pose, \" -> \", lidar_0_frame_0.pose)\ncamera_to_lidar_pose = camera_0_frame_0.pose.inverse @ lidar_0_frame_0.pose\n\nprint(camera_0_frame_0.extrinsic, \" -> \", lidar_0_frame_0.extrinsic)\ncamera_to_lidar_extrinsic = camera_0_frame_0.extrinsic.inverse @ lidar_0_frame_0.extrinsic",
"We can use the associated homogeneous transformation matrix to compare both results.",
"import numpy as np\n\nprint(\"Diff Pose:\", camera_to_lidar_pose)\nprint(\"Diff Extrinsic:\", camera_to_lidar_extrinsic)\n\nassert np.all(np.isclose(camera_to_lidar_pose.transformation_matrix, camera_to_lidar_extrinsic.transformation_matrix, atol=1e-05))\nprint(\"If you see this, the differences are close to equal.\")",
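The relative transform computed above (A.inverse @ B) can be illustrated with plain homogeneous 4x4 matrices. The poses below are made-up numbers for illustration, not values from the dataset, assuming that Transformation wraps such a matrix (as the `.transformation_matrix` property suggests):

```python
import numpy as np

def pose(rotation, translation):
    # build a 4x4 homogeneous transformation matrix from rotation and translation
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# hypothetical world poses for two sensors (identity rotation for simplicity)
world_from_cam = pose(np.eye(3), [2.0, 0.0, 1.5])
world_from_lidar = pose(np.eye(3), [0.0, 0.0, 2.0])

# relative transform: the lidar expressed in the camera's coordinate system
cam_from_lidar = np.linalg.inv(world_from_cam) @ world_from_lidar
print(cam_from_lidar[:3, 3])  # translation part of the relative transform
```

With identity rotations, the relative translation is simply the difference of the two world translations, which is what `inverse @` composes in the general case as well.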
"In the same manner, it is easy to calculate the relative location between two sensors. Let's calculate the difference between two camera sensors.",
"camera_1_name = scene.camera_names[1]\ncamera_1_frame_0 = frame_0.get_camera(camera_name=camera_1_name)\n\nprint(camera_0_frame_0.extrinsic, \" -> \", camera_1_frame_0.extrinsic)\nprint(\"Diff Extrinsic: \", camera_0_frame_0.extrinsic.inverse @ camera_1_frame_0.extrinsic)",
"It is important to remember that a sensor extrinsic is provided in the ego-vehicle's reference coordinate system. For DGP datasets, that is FLU (Front (x), Left (y), Up (z)).\nSo the translation difference between both sensors in the ego-vehicle coordinate system is approx. x=-3. When calculating the difference between both extrinsics, however, we receive a value of approx. z=-3. That is because the difference is returned in the camera coordinate system (RDF). In this example, we have two cameras (one front-facing, one rear-facing) that are perfectly aligned with the ego-vehicle's longitudinal axis x.\nIf we want the camera sensor in an FLU coordinate system, we can simply leverage the CoordinateSystem class to take care of it for us. Objects of that class can also be combined with Transformation objects.",
"from paralleldomain.utilities.coordinate_system import CoordinateSystem\n\n\nextrinsic_diff = (camera_0_frame_0.extrinsic.inverse @ camera_1_frame_0.extrinsic)\nRDF_to_FLU = (CoordinateSystem(\"RDF\") > CoordinateSystem(\"FLU\"))\n\nprint(RDF_to_FLU @ extrinsic_diff)",
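For intuition, the RDF-to-FLU change of basis is just an axis permutation with sign flips. Here is a small numpy sketch, independent of the SDK's CoordinateSystem class (which presumably builds an equivalent rotation internally):

```python
import numpy as np

# RDF axes: x=Right, y=Down, z=Front.  FLU axes: x=Front, y=Left, z=Up.
# Front = +z_RDF, Left = -x_RDF, Up = -y_RDF:
RDF_to_FLU_mat = np.array([
    [0.0,  0.0, 1.0],   # FLU x (Front) picks RDF z
    [-1.0, 0.0, 0.0],   # FLU y (Left)  is negative RDF x
    [0.0, -1.0, 0.0],   # FLU z (Up)    is negative RDF y
])

v_rdf = np.array([0.0, 0.0, 3.0])  # 3 m straight ahead of the camera, in RDF
v_flu = RDF_to_FLU_mat @ v_rdf
print(v_flu)                       # 3 m forward along x, in FLU
```

Since the matrix is a signed permutation, it is orthonormal, so its inverse (FLU back to RDF) is simply its transpose.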
"Accessing Annotations\nWhile both CameraSensorFrame and LidarSensorFrame have the property .available_annotation_types, the content will most likely be different.\nThere are shared annotation types which are available for both sensor types, but, for example, 2D Bounding Boxes are available only for camera data, and point cloud segmentation only for LiDAR data.",
"pp.pprint(camera_0_frame_0.available_annotation_types)\npp.pprint(lidar_0_frame_0.available_annotation_types)\n",
"To actually load the annotations into memory and use them for further analysis, we can leverage the AnnotationTypes class.\nIn the example below, we are going to load the 2D Bounding Boxes from a camera frame.",
"from paralleldomain.model.annotation import AnnotationTypes\nfrom paralleldomain.model.annotation import BoundingBoxes2D # optional import, just for type reference in this tutorial\n\n\n# Quick check if `BoundingBoxes2D` is an available annotation type. If not, and we do not check for it, we will receive a `ValueError` exception.\nif AnnotationTypes.BoundingBoxes2D in camera_0_frame_0.available_annotation_types:\n boxes2d: BoundingBoxes2D = camera_0_frame_0.get_annotations(annotation_type=AnnotationTypes.BoundingBoxes2D)\n\n for b in boxes2d.boxes[:10]:\n print(b)",
"For the LiDAR sensor, we are going to retrieve the 3D Semantic Segmentation of the point cloud and count objects by class ID. Instead of checking explicitly if the annotation type is available, we are going to use try/except on a ValueError. To see how it works, we will first try to retrieve 2D Bounding Boxes from the LiDAR sensor.",
"from paralleldomain.model.annotation import SemanticSegmentation3D # optional import, just for type reference in this tutorial\n\n\nannotation_type = AnnotationTypes.BoundingBoxes2D\n\ntry:\n boxes2d: BoundingBoxes2D = lidar_0_frame_0.get_annotations(annotation_type=annotation_type)\nexcept ValueError as e:\n print(f\"LiDAR Frame doesn't have {annotation_type} as annotation type available. Original exception below:\")\n print(str(e))\n\n\n# Move on to the actual task:\n\nannotation_type = AnnotationTypes.SemanticSegmentation3D\n\ncount_by_class_id = {}\n\ntry:\n semseg3d: SemanticSegmentation3D = lidar_0_frame_0.get_annotations(annotation_type=annotation_type)\n u_class_ids, u_counts = np.unique(semseg3d.class_ids, return_counts=True)\n count_by_class_id = {u_class_ids[idx]: u_counts[idx] for idx in range(len(u_class_ids))}\n pp.pprint(count_by_class_id)\n\nexcept ValueError as e:\n print(f\"LiDAR Frame doesn't have {annotation_type} as annotation type available.\")\n print(str(e))",
"Instead of showing just class IDs, we can show the actual class labels quite easily. On the Scene object we can retrieve the ClassMap for each annotation style.\nLet's get the one for 3D Semantic Segmentation and print the labels for better readability.",
"from paralleldomain.model.class_mapping import ClassMap # optional import, just for type reference in this tutorial\n\nsemseg3d_classmap: ClassMap = scene.get_class_map(annotation_type=AnnotationTypes.SemanticSegmentation3D)\n\ncount_by_class_label = {k: f\"{semseg3d_classmap[k].name} [{v}]\" for k,v in count_by_class_id.items()}\n\npp.pprint(count_by_class_label)",
"Access Camera Data\nAs mentioned above, sensor-specific sensor frames like CameraSensorFrame have additional properties beyond the shared ones described above.\nFor a camera, that is especially the RGB mask, as well as the camera intrinsics and distortion parameters.\nNote: Whenever we work with image data (including masks with an image-encoded representation), we work with np.ndarray of shape (h, w, 3) or (h, w, 4).\nThe last axis is defined in the following index order: 0: Red, 1: Green, 2: Blue, [3: Alpha]. When using OpenCV directly, we need to explicitly convert the image into BGR[A] order. If you use methods from within the PD SDK, e.g., from utilities, any required conversion is handled for you.",
"from matplotlib import pyplot as plt\nfrom paralleldomain.model.sensor import Image # optional import, just for type reference in this tutorial\n\n\nimage_data: Image = camera_0_frame_0.image\n\nprint(f\"Below is an image with {image_data.channels} channels and resolution {image_data.width}x{image_data.height} px\")\n\nplt.imshow(image_data.rgba) # `.rgba` returns image including alpha-channel, otherwise `.rgb` can be used for convenience.\nplt.title(camera_0_frame_0.sensor_name)\nplt.show()\n\npp.pprint(vars(camera_0_frame_0.intrinsic))",
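As noted above, OpenCV expects BGR channel order. Converting is just a reversal of the channel axis, shown here on a small synthetic image (with OpenCV installed, `cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)` achieves the same):

```python
import numpy as np

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 255                 # a pure-red image in RGB channel order

bgr = rgb[..., ::-1]              # reverse the last axis: RGB -> BGR
print(bgr[0, 0])                  # red now lives in the last channel
```

Note that `rgb[..., ::-1]` is a view, not a copy; call `.copy()` if a contiguous array is needed downstream.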
"Access LiDAR Data\nSimilar to a camera, LiDAR sensors have their dedicated sensor frame object LidarSensorFrame.\nThere we can access different point cloud properties like points in Cartesian coordinates, their intensity or timing offsets.\nThe simple example below creates an orthographic topdown projection of the point cloud in ego-vehicle coordinate system by leveraging the extrinsic information.\nThe colorization will be done by height, and the size of points will be defined by reflection intensity.",
"from paralleldomain.model.sensor import PointCloud # optional import, just for type reference in this tutorial\n\n\npc_data: PointCloud = lidar_0_frame_0.point_cloud\n\npc_xyz_one: np.ndarray = pc_data.xyz_one # Returns the xyz coordinates with an additional column full of \"1\" to allow for direct transformation\npc_intensity: np.ndarray = pc_data.intensity\n\npc_ego = (lidar_0_frame_0.extrinsic @ pc_xyz_one.T).T\npc_ego = pc_ego[:,:3] # throw away \"1\" - we are done transforming\n\nsubset_slice = slice(None, None, 5) # we want a slice of every 5th point to reduce rendering time\n\npc_ego_subset = pc_ego[subset_slice]\npc_intensity_subset = pc_intensity[subset_slice]\n\nplt.scatter(x=pc_ego_subset[:,0], y=pc_ego_subset[:,1], s=pc_intensity_subset, c=pc_ego_subset[:,2])\nplt.grid(True)\nplt.title(lidar_0_frame_0.sensor_name)",
"In the scatter plot above we can see that the test point cloud is quite sparse (in fact, it has only 3 lasers vertically). Nevertheless, outlines of buildings and objects are clearly visible.\nAlso, there appear to be a couple of highly reflective objects in the ego-vehicle's proximity. By applying the LiDAR's extrinsic, we have put (0,0,0) at the ego-vehicle's reference point - here it is the center of the bottom face."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wangg12/caffe
|
examples/03-fine-tuning.ipynb
|
bsd-2-clause
|
[
"Fine-tuning a Pretrained Network for Style Recognition\nIn this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.\nThe upside of such an approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the \"semantics\" of general visual appearance. Think of it as a very powerful feature that you can treat as a black box. On top of that, only a few layers are needed to obtain very good performance on the data.\nFirst, we will need to prepare the data. This involves the following parts:\n(1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts.\n(2) Download a subset of the overall Flickr style dataset for this demo.\n(3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.",
"import os\nos.chdir('..')\nimport sys\nsys.path.insert(0, './python')\n\nimport caffe\nimport numpy as np\nfrom pylab import *\n%matplotlib inline\n\n# This downloads the ilsvrc auxiliary data (mean file, etc),\n# and a subset of 2000 images for the style recognition task.\n!data/ilsvrc12/get_ilsvrc_aux.sh\n!scripts/download_model_binary.py models/bvlc_reference_caffenet\n!python examples/finetune_flickr_style/assemble_data.py \\\n --workers=-1 --images=2000 --seed=1701 --label=5",
"Let's look at the difference between the fine-tuning network and the original Caffe model.",
"!diff models/bvlc_reference_caffenet/train_val.prototxt models/finetune_flickr_style/train_val.prototxt",
"For your record, if you want to train the network with the pure C++ tools, here is the command:\n<code>\nbuild/tools/caffe train \\\n    -solver models/finetune_flickr_style/solver.prototxt \\\n    -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \\\n    -gpu 0\n</code>\nHowever, we will train using Python in this example.",
"niter = 200\n# losses will also be stored in the log\ntrain_loss = np.zeros(niter)\nscratch_train_loss = np.zeros(niter)\n\ncaffe.set_device(0)\ncaffe.set_mode_gpu()\n# We create a solver that fine-tunes from a previously trained network.\nsolver = caffe.SGDSolver('models/finetune_flickr_style/solver.prototxt')\nsolver.net.copy_from('models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n# For reference, we also create a solver that does no finetuning.\nscratch_solver = caffe.SGDSolver('models/finetune_flickr_style/solver.prototxt')\n\n# We run the solver for niter iterations, and record the training loss.\nfor it in range(niter):\n    solver.step(1)  # SGD by Caffe\n    scratch_solver.step(1)\n    # store the train loss\n    train_loss[it] = solver.net.blobs['loss'].data\n    scratch_train_loss[it] = scratch_solver.net.blobs['loss'].data\n    if it % 10 == 0:\n        print('iter %d, finetune_loss=%f, scratch_loss=%f' % (it, train_loss[it], scratch_train_loss[it]))\nprint('done')",
"Let's look at the training loss produced by the two training procedures respectively.",
"plot(np.vstack([train_loss, scratch_train_loss]).T)",
"Notice how the fine-tuning procedure produces a smoother change in the loss function, and ends up at a lower loss. For a closer look at the small values, we clip the plot to avoid showing the very large losses early in training:",
"plot(np.vstack([train_loss, scratch_train_loss]).clip(0, 4).T)",
"Let's take a look at the testing accuracy after running 200 iterations. Note that we are running a classification task with 5 classes, so chance accuracy is 20%. As we can reasonably expect, the fine-tuning result will be much better than the one from training from scratch. Let's see.",
"test_iters = 10\naccuracy = 0\nscratch_accuracy = 0\nfor it in arange(test_iters):\n    solver.test_nets[0].forward()\n    accuracy += solver.test_nets[0].blobs['accuracy'].data\n    scratch_solver.test_nets[0].forward()\n    scratch_accuracy += scratch_solver.test_nets[0].blobs['accuracy'].data\naccuracy /= test_iters\nscratch_accuracy /= test_iters\nprint('Accuracy for fine-tuning: %f' % accuracy)\nprint('Accuracy for training from scratch: %f' % scratch_accuracy)",
"Huzzah! So we did fine-tuning and it is awesome. Let's take a look at what kind of results we are able to get with a longer, more complete run on the style recognition dataset. Note: the below URL might be occasionally down because it is run on a research machine.\nhttp://demo.vislab.berkeleyvision.org/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
samzhang111/frontpages
|
analysis/Biggest Headlines by Paper.ipynb
|
gpl-3.0
|
[
"Biggest headlines by paper\nThis is a notebook that looks at the biggest headlines for newspapers over the last half-year, mainly for ten newspapers we particularly care about. It also looks at some interesting aspects of the largest headlines across all the papers.",
"from jupyter_cms.loader import load_notebook\n\neda = load_notebook('./data_exploration.ipynb')\n\ndf, newspapers = eda.load_data()",
"Major newspaper headlines\nThese slugs were chosen from the Wikipedia page of widely circulated newspapers in the United States: https://en.wikipedia.org/wiki/List_of_newspapers_in_the_United_States#By_circulation. Unfortunately, it seems that list was using 2013 data, but I recognize enough of these papers as major that it's a close-enough approximation.\nAlso, we have to leave the NYT and New York Post out, unfortunately, since pdfminer extracted their characters without being able to group them into lines and paragraphs. If taking character-level data and massaging it into paragraphs sounds like a fun task for you, please open a GitHub issue or otherwise get in touch :)",
"slugs_of_interest = [\n 'WSJ',\n 'USAT',\n 'CA_LAT',\n 'CA_MN',\n 'NY_DN',\n 'DC_WP',\n 'IL_CST',\n 'CO_DP',\n 'IL_CT',\n 'TX_DMN'\n]\n\nimport pandas as pd\nfrom datetime import datetime\n\npd.set_option('display.max_columns', 100)\n\ndf.head(2)\n\ndf['month'] = df['date'].apply(lambda x: x.month)\n\ndef print_row(i, row):\n print(\"#{i}: {title} — {date:%b. %-d} — {fontsize:.2f}pt\".format(\n i=i + 1,\n title=\" \".join(row.text.split()),\n date=row.date,\n fontsize=row.fontsize))\n \ndef largest_font_headlines(npdf, paper):\n npdf = npdf[(npdf.bbox_top > npdf.page_height / 2) & (npdf.month >= 6)]\n top = npdf.sort_values(by='fontsize', ascending=False).head(10)\n print(paper)\n for i, (_, row) in enumerate(top.iterrows()):\n print_row(i, row)\n\n\n# Um, definitely should have a better place for doing this, but on Dec 18th the WSJ PDF I archived was actually\n# a different newspaper, somehow. I wonder if it's a Newseum error, but they don't keep their archives up beyond a day\n\nlargest_font_headlines(df[(df.slug == 'WSJ') & (df.date != datetime(2017, 12, 18))], 'The Wall Street Journal')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'USAT'], 'USA Today')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'CA_LAT'], 'Los Angeles Times')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'CA_MN'], 'San Jose Mercury News')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'NY_DN'], 'New York Daily News')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'DC_WP'], 'The Washington Post')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'IL_CST'], 'Chicago Sun Times')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'CO_DP'], 'The Denver Post')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'IL_CT'], 'Chicago Tribune')\n\nprint()\n\nlargest_font_headlines(df[df.slug == 'TX_DMN'], 'The Dallas Morning News')",
"Other analyses!\nSo what else can we learn from the top sized headlines on these journals?\n/ insert intermission where I switch into R and run ggplot to generate this graph: link. Look at that graph if you haven't yet because it motivates some of the following.\nWe can see a wide distribution of biggest-headlines-per-day (what I'll ignorantly call the \"splash\" headline, please let me know if you know of the actual newspaper jargon) for each major newspaper. On the right-hand side, the tabloids tend to be extremely generous with how they use fonts. Let's see if that holds up at large.\nAs a refresher, here are some common newspaper formats from Wikipedia:\n```\nDiver's Dispatch 914.4 mm × 609.6 mm (36.00 in × 24.00 in) (1.5)\nBroadsheet 749 mm × 597 mm (29.5 in × 23.5 in) (1.255)\nNordisch 570 mm × 400 mm (22 in × 16 in) (1.425)\nRhenish around 350 mm × 520 mm (14 in × 20 in) (1.486)\nSwiss (Neue Zürcher Zeitung) 475 mm × 320 mm (18.7 in × 12.6 in) (1.484)\nBerliner 470 mm × 315 mm (18.5 in × 12.4 in) (1.492)\n The Guardian's printed area is 443 mm × 287 mm (17.4 in × 11.3 in).[2]\nTabloid 430 mm × 280 mm (17 in × 11 in) (1.536)\n\n```\nWe'll mainly just look at the height here for simplicity's sake.\nNote these numbers will be slightly off since:\n- the PDFs contain different amounts of additional padding compared to the actual printed version\n- these numbers are in pixels, and depending on how the resolution of the newspaper is determined, could translate into different numbers in inches",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.distplot(df.groupby(['slug']).page_height.first())\nplt.suptitle(\"Distribution of page heights (by pixels)\")",
"So it looks like most of our newspapers are clustered around 1600px in height. But what is that in inches? Let's check a few known papers.",
"print('''Heights of known papers:\n\nBroadsheets:\nThe Washington Post: {}px\nThe Wall Street Journal: {}px\n\nTabloids:\nThe Chicago Sun Times: {}px\nThe New York Daily News: {}px\n'''.format(\n df[df.slug == 'DC_WP'].page_height.mode().iloc[0],\n df[df.slug == 'WSJ'].page_height.mode().iloc[0],\n df[df.slug == 'IL_CST'].page_height.mode().iloc[0],\n df[df.slug == 'NY_DN'].page_height.mode().iloc[0]\n))\n\nprint('''Aspect ratios of known papers:\n\nBroadsheets:\nThe Washington Post: {}\nThe Wall Street Journal: {}\n\nTabloids:\nThe Chicago Sun Times: {}\nThe New York Daily News: {}\n'''.format(\n df[df.slug == 'DC_WP'].aspect_ratio.mode().iloc[0],\n df[df.slug == 'WSJ'].aspect_ratio.mode().iloc[0],\n df[df.slug == 'IL_CST'].aspect_ratio.mode().iloc[0],\n df[df.slug == 'NY_DN'].aspect_ratio.mode().iloc[0]\n))",
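As a rough sanity check on those pixel heights: if we assume the PDFs were rendered at the common PDF default of 72 points per inch (an assumption, and the caveats above about padding and resolution still apply), the numbers translate into plausible physical page sizes:

```python
def px_to_inches(px, dpi=72.0):
    # dpi is an assumed rendering resolution; the true value may differ per paper
    return px / dpi

# a ~1600 px page would then be about 22.2 in tall, which is in the right
# ballpark of the 23.5 in broadsheet height from the Wikipedia list above
print(round(px_to_inches(1600), 1))
```

If the printed sizes came out wildly off under this assumption, that would suggest the PDFs were rendered at a different DPI.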
"By our very rough check, the two broadsheets tended to be >1500px height with an aspect ratio around 1:2, and the tabloids are shorter with an aspect ratio around 1.\nLet's see how this plays out with font sizes.",
"from scipy import stats\nimport numpy as np\n\ndef mode(heights):\n return stats.mode(heights).mode[0]\n\ndaily_headlines = df.groupby(['date', 'slug']).agg({'fontsize': max, 'page_height': mode, 'aspect_ratio': mode})\n\ndaily_headlines.head()\n\navg_size_by_paper = daily_headlines.reset_index().groupby('slug').agg({'fontsize': np.mean, 'page_height': mode, 'aspect_ratio': mode, 'slug': 'count'}).rename(columns={'slug': 'n'})\navg_size_by_paper.head()\n\nsns.distplot(avg_size_by_paper['n'], kde=False, bins=30)\nplt.xlim([0, 250])\nplt.suptitle(\"Distribution of number of days each paper has records in the scrape\")\n\navg_size_by_paper['n'].describe()\n\navg_size_highly_present = avg_size_by_paper[avg_size_by_paper['n'] > 182] # more than the median\n\nsns.regplot(avg_size_highly_present.page_height, avg_size_highly_present.fontsize, fit_reg=False)\nplt.xlabel(\"Page height in pixels\")\nplt.ylabel(\"Average font point of day's largest headline\")\nplt.suptitle(\"Each dot is a newspaper\")\n\nsns.regplot(avg_size_highly_present.aspect_ratio, avg_size_highly_present.fontsize, x_jitter=0.01, fit_reg=False)\nplt.xlabel(\"Aspect ratio (width/height)\")\nplt.ylabel(\"Average font point of day's largest headline\")\nplt.suptitle(\"Each dot is a newspaper\")\n\nsns.regplot(avg_size_highly_present.aspect_ratio, avg_size_highly_present.fontsize, x_jitter=0.05, fit_reg=False)",
"Three observations:\n\nThere aren't that many \"tabloids\"! (if the page height and aspect ratio heuristics are accurate)\nThere is a clear pattern toward larger font-sizes on the higher aspect ratio, lower height end of the spectrum.\nIf we add a lot of jitter to the rounded aspect ratio, we end up with a very similar-looking graph to the height itself.\n\nSo what are those outliers?",
"avg_size_highly_present.sort_values(by='fontsize', ascending=False).head(10)",
"The biggest outliers with font size for tabloids turned out to be the ones in our most-circulated newspaper dataset, so my prior was skewed toward the large size. However, all 5 of the biggest font-using newspapers were \"tabloids\", so there is some truth to it. The data is a bit too categorical between broadsheet and tabloid, and I'm too fuzzy on the space in between, to make any overarching conclusions!\nLet's go back and double-check how closely the height maps to the aspect ratio.",
"sns.regplot(avg_size_highly_present.aspect_ratio, avg_size_highly_present.page_height, x_jitter=0.01, fit_reg=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fionapigott/Data-Science-45min-Intros
|
decision-trees-101/decision_trees_101.ipynb
|
unlicense
|
[
"Introduction\nIdea behind decision tree: split the space of the attribute variables with recursive, binary splits, with the aim of high purity for all regions.\nThe collection of paths to all regions makes a tree.\nVocabulary\n\n\nattribute, or attribute variable: a dimension in which a data point has a value (typically excluding the target variable)\n\n\ntarget variable: the variable whose value is to be predicted \n\n\nthe attributes of the i-th data point are labeled X_i.\n\n\nthe value of the target variable for the i-th data point is labeled y_i. \n\n\nTrees that predict a quantitative target variable are called regression trees, and trees that predict qualitative targets are called classification trees.\n\n\nPlay",
"%pylab inline ",
"Get iris data and make a simple prediction",
"from sklearn import datasets\niris = datasets.load_iris()\n\nimport numpy as np\nimport random",
"Create training and test data sets",
"iris_X = iris.data\niris_y = iris.target\n\nr = random.randint(0,100)\nnp.random.seed(r)\nidx = np.random.permutation(len(iris_X))\n\nsubset = 25\n\niris_X_train = iris_X[idx[:-subset]] # all but the last 'subset' rows\niris_y_train = iris_y[idx[:-subset]]\niris_X_test = iris_X[idx[-subset:]] # the last 'subset' rows\niris_y_test = iris_y[idx[-subset:]]",
"Train a classification tree",
"from sklearn import tree\nclf = tree.DecisionTreeClassifier()\nclf = clf.fit(iris_X_train,iris_y_train)",
"Print the predicted class of iris",
"clf.predict(iris_X_train)\n",
"Based on the target values in the training set, calculate the training accuracy:",
"def accuracy(x,y):\n output = []\n for i,j in zip(x,y):\n if i == j:\n output.append(1)\n else:\n output.append(0)\n return np.mean(output)\nprint(\"training accuracy: {}\".format(accuracy(clf.predict(iris_X_train),iris_y_train)))",
"And here's the testing accuracy:",
"print(\"testing accuracy: {}\".format(accuracy(clf.predict(iris_X_test),iris_y_test)))",
"Visualize the tree:",
"from sklearn.externals.six import StringIO # StringIO streams data as a string to \"output file\"\nfrom IPython.display import Image # need Image to display inline\n\n# export the tree data as a string to a file\ndot_data = StringIO() \ntree.export_graphviz(clf, out_file=dot_data) \n\n# compatible with modern pyparsing\nimport pydotplus as pydot\n# or olde-timey\n# import pydot\ngraph = pydot.graph_from_dot_data(dot_data.getvalue()) \nImage(graph.create_png())",
"How it works\nSplit definition\nTo decide which variable is considered at each node, and what the split point is, we need a metric to minimize:",
"Image(filename=\"gini.png\")",
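Assuming the figure shows the usual CART criterion, the Gini impurity of region m is G_m = sum_k p_mk (1 - p_mk); a direct implementation for one region's class labels:

```python
import numpy as np

def gini(class_ids):
    # proportion of each class present in the region
    _, counts = np.unique(class_ids, return_counts=True)
    p = counts / counts.sum()
    # G = sum_k p_k * (1 - p_k); 0 for a pure region
    return float(np.sum(p * (1.0 - p)))

print(gini([0, 0, 0, 0]))  # pure region: 0.0
print(gini([0, 0, 1, 1]))  # 50/50 split of two classes: 0.5
```

A candidate split is then scored by the (weighted) Gini impurity of the two child regions it creates, and the split with the lowest score wins.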
"where p_mk is the proportion of training data in the m-th region that are from the k-th class.\nValues of p_mk close to 0 or 1 represent better purity, so we minimize G.\nCross validation: a side note\nCross validation is a generalization of the testing/training data set paradigm. A reasonable test for the validity of a tree is to re-sample the training and testing data set, re-fitting the tree each time. Small variations in the resulting trees indicate a stable model.\nA Problematic Example",
"classifier_1 = tree.DecisionTreeClassifier()\nX = numpy.array([[0],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10]])\nY = numpy.array([0,1,2,3,4,5,6,7,8,9,10])\nclassifier_1 = classifier_1.fit(X,Y)\n\n\nclassifier_1.predict(X)\n\n## print the tree\n\n# export the tree data as a string to a file\ndot_data = StringIO()\ntree.export_graphviz(classifier_1, out_file=dot_data) \n\n#\nimport pydotplus as pydot\ngraph = pydot.graph_from_dot_data(dot_data.getvalue()) \nImage(graph.create_png())",
"The tree shown above is overtrained. Let's limit the depth.",
"classifier_2 = tree.DecisionTreeClassifier(max_depth=3)\nclassifier_2 = classifier_2.fit(X,Y)\nclassifier_2.predict(X)\n\ndot_data = StringIO() \ntree.export_graphviz(classifier_2, out_file=dot_data) \n\ngraph = pydot.graph_from_dot_data(dot_data.getvalue()) \nImage(graph.create_png())",
"Take-away:\n\n\ntrees aren't great at predicting linear relationships between attribute and target variables. But standard linear regression is.\n\n\ntree size needs to be controlled to avoid overtraining\n\n\nRegression Trees\nConcepts\n\n\nThe predicted target variable is the mean of all the training target variables in the region\n\n\nThe split between R_1 and R_2 minimizes the following:",
"Image(filename=\"rss.png\")",
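Assuming the figure shows the usual regression-tree criterion, RSS(R_1) + RSS(R_2), the best split point for a single attribute can be found by brute force (a sketch; scikit-learn does this internally in optimized code):

```python
import numpy as np

def rss(y):
    # residual sum of squares around the region's mean
    return float(np.sum((y - y.mean()) ** 2)) if len(y) else 0.0

def best_split(x, y):
    # try every distinct value of x as a split point, keep the cheapest
    best_t, best_cost = None, np.inf
    for t in np.unique(x)[:-1]:
        cost = rss(y[x <= t]) + rss(y[x > t])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

x = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
y = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])
print(best_split(x, y))  # splits between the two clusters with zero RSS
```

Recursing this search on each resulting region is exactly the greedy tree-growing procedure described earlier.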
"where x_i and y_i are the attribute and target variables for the i-th training data point, and y_hat is the mean of the target variables in the region.\nExample\nLet's create an example with a noisy sine function.",
"# Create a random dataset\nrng = np.random.RandomState(1)\n# Set the range to [0,5] and sort it numerically\nX = np.sort(5 * rng.rand(80, 1), axis=0)\n# for target, take the sine of the data, and place it in an array\ny = np.sin(X).ravel()\n# add some noise to every fifth point\ny[::5] += 3 * (0.5 - rng.rand(16))\n",
"Test a set of regression trees that have different depth limits.",
"# use a regression tree model\nfrom sklearn.tree import DecisionTreeRegressor\n\nclf_1 = DecisionTreeRegressor(max_depth=2)\nclf_2 = DecisionTreeRegressor(max_depth=5)\nclf_3 = DecisionTreeRegressor()\nclf_1.fit(X, y)\nclf_2.fit(X, y)\nclf_3.fit(X, y)\n\n# generate test data in correct range, and place each pt in its own array \nX_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]\ny_1 = clf_1.predict(X_test)\ny_2 = clf_2.predict(X_test)\ny_3 = clf_3.predict(X_test)\n\nimport matplotlib.pyplot as plt\n\nplt.figure()\nplt.scatter(X, y, c=\"k\", label=\"data\")\nplt.plot(X_test, y_1, c=\"g\", label=\"max_depth=2\", linewidth=2)\nplt.plot(X_test, y_2, c=\"r\", label=\"max_depth=5\", linewidth=2)\nplt.plot(X_test, y_3, c=\"b\", label=\"max_depth=inf\", linewidth=1)\n\nplt.xlabel(\"data\")\nplt.ylabel(\"target\")\nplt.title(\"Decision Tree Regression\")\nplt.legend()\nplt.show()\n\ndot_data = StringIO()\ntree.export_graphviz(clf_1, out_file=dot_data)\ntree.export_graphviz(clf_2, out_file=dot_data)\ntree.export_graphviz(clf_3, out_file=dot_data)\n\ngraph = pydot.graph_from_dot_data(dot_data.getvalue()) ",
"Visualization of tree with depth=2",
"Image(graph[0].create_png())",
"Visualization of tree with depth=5",
"Image(graph[1].create_png())",
"Visualization of tree with no limitation on depth.",
"Image(graph[2].create_png())",
"Options for dealing with overfitting:\n\n\nmaximum depth\n\n\nminimum training data points per region\n\n\npruning\n\n\nPruning\n\n\nNot implemented in scikit-learn\n\n\nuses cross validation to remove nodes from the tree in such a way that one makes an optimal tradeoff between the tree's complexity and its fit to the training data.\n\n\nBagging\n\n\ncreate an ensemble of trees, based on a subdivision of the training data\n\n\naverage the results of the ensemble\n\n\nRandom forests\n\n\ndeal with the fact that tree production is greedy, and always uses the strongest split first.\n\n\nbase each split on a random subset of attributes\n\n\ncombine as in bagging"
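The bagging idea above can be sketched in a few lines of numpy: bootstrap-resample the training data, fit a weak tree on each resample (here a depth-1 "stump", a hypothetical helper for illustration), and average the ensemble's predictions:

```python
import numpy as np

def fit_stump(x, y):
    # depth-1 regression tree: brute-force the split minimizing total RSS
    best = (np.inf, x[0], y.mean(), y.mean())
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best[0]:
            best = (rss, t, left.mean(), right.mean())
    return best[1:]

def bagged_predict(x, y, x_test, n_trees=50, seed=0):
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_trees):
        idx = rng.randint(0, len(x), len(x))   # bootstrap resample with replacement
        t, left_mean, right_mean = fit_stump(x[idx], y[idx])
        preds.append(np.where(x_test <= t, left_mean, right_mean))
    return np.mean(preds, axis=0)              # average the ensemble

x = np.linspace(0.0, 5.0, 40)
y = np.sin(x)
y_hat = bagged_predict(x, y, x, n_trees=25)
```

A random forest additionally restricts each split to a random subset of attributes; with a single attribute, as here, the two coincide.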
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jonathf/chaospy
|
docs/user_guide/fundamentals/quasi_random_samples.ipynb
|
mit
|
[
"Quasi-random samples\nAs demonstrated in the problem formulation section, Monte Carlo integration is by nature a very slowly converging method.\nThe error in convergence is proportional to $1/\\sqrt{K}$ where $K$ is the number of samples. It is somewhat better with variance reduction techniques, which often reach errors proportional to $1/K$. For a full overview of the convergence rates of the various methods, see for example the excellent book \\\"Handbook of Monte Carlo Methods\\\" by Kroese, Taimre and Botev kroese_handbook_2011. However, as the number of dimensions grows, the Monte Carlo convergence rate stays the same, making it immune to the curse of dimensionality. One way to improve on its convergence is to replace plain (pseudo-)random samples with low-discrepancy sequences.\nLow-discrepancy sequences\nIn mathematics, a low-discrepancy sequence is a sequence with the property that for all values of $N$, its sub-sequence $Q_1, \\dots, Q_N$ has a low discrepancy.\nRoughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equi-distributed sequence.\nSpecific definitions of discrepancy differ regarding the choice of B (hyper-spheres, hyper-cubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).\nLow-discrepancy sequences are also called quasi-random or sub-random sequences, due to their common use as a replacement for uniformly distributed random numbers. The \\\"quasi\\\" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudo-random, but such sequences share some properties of random variables, and in certain applications such as the quasi-Monte Carlo method their lower discrepancy is an important advantage.\nIn chaospy, the following low-discrepancy schemes exist and can be evoked by passing the appropriate rule flag to the chaospy.Distribution.sample() method:",
"import chaospy\n\nuniform_cube = chaospy.J(chaospy.Uniform(0, 1), chaospy.Uniform(0, 1))\ncount = 300\n\nrandom_samples = uniform_cube.sample(count, rule=\"random\", seed=1234)\n\nadditive_samples = uniform_cube.sample(count, rule=\"additive_recursion\")\nhalton_samples = uniform_cube.sample(count, rule=\"halton\")\nhammersley_samples = uniform_cube.sample(count, rule=\"hammersley\")\nkorobov_samples = uniform_cube.sample(count, rule=\"korobov\")\nsobol_samples = uniform_cube.sample(count, rule=\"sobol\")\n\nfrom matplotlib import pyplot\n\npyplot.rc(\"figure\", figsize=[16, 9])\n\npyplot.subplot(231)\npyplot.scatter(*random_samples)\npyplot.title(\"random\")\n\npyplot.subplot(232)\npyplot.scatter(*additive_samples)\npyplot.title(\"additive recursion\")\n\npyplot.subplot(233)\npyplot.scatter(*halton_samples)\npyplot.title(\"halton\")\n\npyplot.subplot(234)\npyplot.scatter(*hammersley_samples)\npyplot.title(\"hammersley\")\n\npyplot.subplot(235)\npyplot.scatter(*korobov_samples)\npyplot.title(\"korobov\")\n\npyplot.subplot(236)\npyplot.scatter(*sobol_samples)\npyplot.title(\"sobol\")\n\npyplot.show()",
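Under the hood, the Halton sequence pairs one van der Corput sequence per dimension, using coprime bases. A minimal pure-Python sketch of the construction (chaospy's own implementation is more elaborate, e.g. it supports scrambling and burn-in):

```python
def van_der_corput(n, base=2):
    # reflect the base-`base` digits of n about the radix point
    q, denom = 0.0, 1.0
    while n:
        n, remainder = divmod(n, base)
        denom *= base
        q += remainder / denom
    return q

# first points of a 2D Halton sequence: coprime bases 2 and 3
halton_2d = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, 9)]
print(halton_2d[:3])
```

Each new point lands in the largest gap left by the previous ones, which is exactly the low-discrepancy property visible in the scatter plots above.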
"It is easy to observe by eye that for the average distance between each sample is much smaller for the sequences than the random samples.\nAll of these methods are deterministic, so running the same code again, and you will result in the same samples.\nAntithetic variate\nCreate antithetic\nvariate from\nvariables on the unit hyper-cube.\nIn statistics, the antithetic variate method is a variance reduction\ntechnique used in Monte Carlo methods. It does so by doing a type of mirroring of samples.\nIn chaospy we can create antithetic variate by providing the antithetic=True flag to the chaospy.Distribution.sample() method:",
"antithetic_samples = uniform_cube.sample(40, antithetic=True, seed=1234)\n\npyplot.rc(\"figure\", figsize=[6, 4])\npyplot.scatter(*antithetic_samples)\npyplot.show()",
"Since the uniform distribution i fully symmetrical it is possible to observe the mirroring visually.\nLooking at the 16 samples here, it is possible to interpret it as 10 unique samples, which are mirrored three times: along the x-axis, y-axis and the x-y-diagonal.\nAntithetic variate does not scale too well into higher dimensions, as the number of mirrored samples to normal samples grows exponentially.\nSo in higher dimensional problems it is possible to limit the mirroring to include only a few dimensions of interest by passing a Boolean sequence as the antithetic flag:",
"pyplot.rc(\"figure\", figsize=[8, 4])\n\npyplot.subplot(121)\npyplot.scatter(*uniform_cube.sample(40, antithetic=[False, True], seed=1234))\npyplot.title(\"mirror x-axis\")\n\npyplot.subplot(122)\npyplot.scatter(*uniform_cube.sample(40, antithetic=[True, False], seed=1234))\npyplot.title(\"mirror y-axis\")\n\npyplot.show()",
"Here 20 samples are generated and mirror along a single axis, which is twice as many as in the first try.\nLatin hyper-cube sampling\nLatin hyper-cube sampling is a stratification scheme for forcing random samples to be placed more spread out than traditional random samples.\nIt is similar to the low discrepancy sequences, but maintain random samples at it core.\nGenerating latin hyper-cube samples can be done by passing the rule=\"latin_hypercube\" flag to chaospy.Distribution.sample():",
"pyplot.rc(\"figure\", figsize=[6, 4])\n\nlhc_samples = uniform_cube.sample(count, rule=\"latin_hypercube\", seed=1234)\n\npyplot.scatter(*lhc_samples)\npyplot.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lukemans/Hello-world
|
t81_558_class10_lstm.ipynb
|
apache-2.0
|
[
"T81-558: Applications of Deep Neural Networks\nClass 10: Recurrent and LSTM Networks\n* Instructor: Jeff Heaton, School of Engineering and Applied Science, Washington University in St. Louis\n* For more information visit the class website.\nCommon Functions\nSome of the common functions from previous classes that we will use again.",
"from sklearn import preprocessing\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)\ndef encode_text_dummy(df,name):\n dummies = pd.get_dummies(df[name])\n for x in dummies.columns:\n dummy_name = \"{}-{}\".format(name,x)\n df[dummy_name] = dummies[x]\n df.drop(name, axis=1, inplace=True)\n\n# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).\ndef encode_text_index(df,name):\n le = preprocessing.LabelEncoder()\n df[name] = le.fit_transform(df[name])\n return le.classes_\n\n# Encode a numeric column as zscores\ndef encode_numeric_zscore(df,name,mean=None,sd=None):\n if mean is None:\n mean = df[name].mean()\n\n if sd is None:\n sd = df[name].std()\n\n df[name] = (df[name]-mean)/sd\n\n# Convert all missing values in the specified column to the median\ndef missing_median(df, name):\n med = df[name].median()\n df[name] = df[name].fillna(med)\n\n# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs\ndef to_xy(df,target):\n result = []\n for x in df.columns:\n if x != target:\n result.append(x)\n\n # find out the type of the target column. Is it really this hard? :(\n target_type = df[target].dtypes\n target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type\n print(target_type)\n \n # Encode to int for classification, float otherwise. 
TensorFlow likes 32 bits.\n if target_type in (np.int64, np.int32):\n # Classification\n return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)\n else:\n # Regression\n return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)\n\n# Nicely formatted time string\ndef hms_string(sec_elapsed):\n h = int(sec_elapsed / (60 * 60))\n m = int((sec_elapsed % (60 * 60)) / 60)\n s = sec_elapsed % 60\n return \"{}:{:>02}:{:>05.2f}\".format(h, m, s)\n\n# Regression chart, we will see more of this chart in the next class.\ndef chart_regression(pred,y):\n t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y_test.flatten()})\n t.sort_values(by=['y'],inplace=True)\n a = plt.plot(t['y'].tolist(),label='expected')\n b = plt.plot(t['pred'].tolist(),label='prediction')\n plt.ylabel('output')\n plt.legend()\n plt.show()",
"Data Structure for Recurrent Neural Networks\nPreviously we trained neural networks with input ($x$) and expected output ($y$). $X$ was a matrix, the rows were training examples and the columns were values to be predicted. The definition of $x$ will be expanded and y will stay the same.\nDimensions of training set ($x$):\n* Axis 1: Training set elements (sequences) (must be of the same size as $y$ size)\n* Axis 2: Members of sequence\n* Axis 3: Features in data (like input neurons)\nPreviously, we might take as input a single stock price, to predict if we should buy (1), sell (-1), or hold (0).",
"# \n\nx = [\n [32],\n [41],\n [39],\n [20],\n [15]\n]\n\ny = [\n 1,\n -1,\n 0,\n -1,\n 1\n]\n\nprint(x)\nprint(y)",
"This is essentially building a CSV file from scratch, to see it as a data frame, use the following:",
"from IPython.display import display, HTML\nimport pandas as pd\nimport numpy as np\n\nx = np.array(x)\nprint(x[:,0])\n\n\ndf = pd.DataFrame({'x':x[:,0], 'y':y})\ndisplay(df)",
"You might want to put volume in with the stock price.",
"x = [\n [32,1383],\n [41,2928],\n [39,8823],\n [20,1252],\n [15,1532]\n]\n\ny = [\n 1,\n -1,\n 0,\n -1,\n 1\n]\n\nprint(x)\nprint(y)\n\nAgain, very similar to what we did before. The following shows this as a data frame.\n\nfrom IPython.display import display, HTML\nimport pandas as pd\nimport numpy as np\n\nx = np.array(x)\nprint(x[:,0])\n\n\ndf = pd.DataFrame({'price':x[:,0], 'volume':x[:,1], 'y':y})\ndisplay(df)",
"Now we get to sequence format. We want to predict something over a sequence, so the data format needs to add a dimension. A maximum sequence length must be specified, but the individual sequences can be of any length.",
"x = [\n [[32,1383],[41,2928],[39,8823],[20,1252],[15,1532]],\n [[35,8272],[32,1383],[41,2928],[39,8823],[20,1252]],\n [[37,2738],[35,8272],[32,1383],[41,2928],[39,8823]],\n [[34,2845],[37,2738],[35,8272],[32,1383],[41,2928]],\n [[32,2345],[34,2845],[37,2738],[35,8272],[32,1383]],\n]\n\ny = [\n 1,\n -1,\n 0,\n -1,\n 1\n]\n\nprint(x)\nprint(y)",
"Even if there is only one feature (price), the 3rd dimension must be used:",
"x = [\n [[32],[41],[39],[20],[15]],\n [[35],[32],[41],[39],[20]],\n [[37],[35],[32],[41],[39]],\n [[34],[37],[35],[32],[41]],\n [[32],[34],[37],[35],[32]],\n]\n\ny = [\n 1,\n -1,\n 0,\n -1,\n 1\n]\n\nprint(x)\nprint(y)",
"Recurrent Neural Networks\nSo far the neural networks that we’ve examined have always had forward connections. The input layer always connects to the first hidden layer. Each hidden layer always connects to the next hidden layer. The final hidden layer always connects to the output layer. This manner to connect layers is the reason that these networks are called “feedforward.” Recurrent neural networks are not so rigid, as backward connections are also allowed. A recurrent connection links a neuron in a layer to either a previous layer or the neuron itself. Most recurrent neural network architectures maintain state in the recurrent connections. Feedforward neural networks don’t maintain any state. A recurrent neural network’s state acts as a sort of short-term memory for the neural network. Consequently, a recurrent neural network will not always produce the same output for a given input.\nRecurrent neural networks do not force the connections to flow only from one layer to the next, from input layer to output layer. A recurrent connection occurs when a connection is formed between a neuron and one of the following other types of neurons:\n\nThe neuron itself\nA neuron on the same level\nA neuron on a previous level\n\nRecurrent connections can never target the input neurons or the bias neurons.\nThe processing of recurrent connections can be challenging. Because the recurrent links create endless loops, the neural network must have some way to know when to stop. A neural network that entered an endless loop would not be useful. To prevent endless loops, we can calculate the recurrent connections with the following three approaches:\n\nContext neurons\nCalculating output over a fixed number of iterations\nCalculating output until neuron output stabilizes\n\nWe refer to neural networks that use context neurons as a simple recurrent network (SRN). 
The context neuron is a special neuron type that remembers its input and provides that input as its output the next time that we calculate the network. For example, if we gave a context neuron 0.5 as input, it would output 0. Context neurons always output 0 on their first call. However, if we gave the context neuron a 0.6 as input, the output would be 0.5. We never weight the input connections to a context neuron, but we can weight the output from a context neuron just like any other connection in a network. \nContext neurons allow us to calculate a neural network in a single feedforward pass. Context neurons usually occur in layers. A layer of context neurons will always have the same number of context neurons as neurons in its source layer, as demonstrated here:\n\nAs you can see from the above layer, two hidden neurons that are labeled hidden 1 and hidden 2 directly connect to the two context neurons. The dashed lines on these connections indicate that these are not weighted connections. These weightless connections are never dense. If these connections were dense, hidden 1 would be connected to both hidden 1 and hidden 2. However, the direct connection simply joins each hidden neuron to its corresponding context neuron. The two context neurons form dense, weighted connections to the two hidden neurons. Finally, the two hidden neurons also form dense connections to the neurons in the next layer. The two context neurons would form two connections to a single neuron in the next layer, four connections to two neurons, six connections to three neurons, and so on.\nYou can combine context neurons with the input, hidden, and output layers of a neural network in many different ways. In the next two sections, we explore two common SRN architectures.\nIn 1990, Elman introduced a neural network that provides pattern recognition to time series. This neural network type has one input neuron for each stream that you are using to predict. 
There is one output neuron for each time slice you are trying to predict. A single-hidden layer is positioned between the input and output layer. A layer of context neurons takes its input from the hidden layer output and feeds back into the same hidden layer. Consequently, the context layers always have the same number of neurons as the hidden layer, as demonstrated here: \n\nThe Elman neural network is a good general-purpose architecture for simple recurrent neural networks. You can pair any reasonable number of input neurons to any number of output neurons. Using normal weighted connections, the two context neurons are fully connected with the two hidden neurons. The two context neurons receive their state from the two non-weighted connections (dashed lines) from each of the two hidden neurons.\nBackpropagation through time works by unfolding the SRN to become a regular neural network. To unfold the SRN, we construct a chain of neural networks equal to how far back in time we wish to go. We start with a neural network that contains the inputs for the current time, known as t. Next we replace the context with the entire neural network, up to the context neuron’s input. We continue for the desired number of time slices and replace the final context neuron with a 0. The following diagram shows an unfolded Elman neural network for two time slices.\n\nAs you can see, there are inputs for both t (current time) and t-1 (one time slice in the past). The bottom neural network stops at the hidden neurons because you don’t need everything beyond the hidden neurons to calculate the context input. The bottom network structure becomes the context to the top network structure. Of course, the bottom structure would have had a context as well that connects to its hidden neurons. 
However, because the output neuron above does not contribute to the context, only the top network (current time) has one.\nUnderstanding LSTM\nSome useful resources on LSTM/recurrent neural networks.\n\nUnderstanding LSTM Networks\nRecurrent Neural Networks in TensorFlow\n\nLong Short Term Neural Network (LSTM) are a type of recurrent unit that is often used with deep neural networks. For TensorFlow, LSTM can be thought of as a layer type that can be combined with other layer types, such as dense. LSTM makes use two transfer function types internally. \nThe first type of transfer function is the sigmoid. This transfer function type is used form gates inside of the unit. The sigmoid transfer function is given by the following equation:\n$$ \\text{S}(t) = \\frac{1}{1 + e^{-t}} $$\nThe second type of transfer function is the hyperbolic tangent (tanh) function. This function is used to scale the output of the LSTM, similarly to how other transfer functions have been used in this course. \nThe graphs for these functions are shown here:",
"%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\n\ndef sigmoid(x):\n a = []\n for item in x:\n a.append(1/(1+math.exp(-item)))\n return a\n\ndef f2(x):\n a = []\n for item in x:\n a.append(math.tanh(item))\n return a\n\nx = np.arange(-10., 10., 0.2)\ny1 = sigmoid(x)\ny2 = f2(x)\n\nprint(\"Sigmoid\")\nplt.plot(x,y1)\nplt.show()\n\nprint(\"Hyperbolic Tangent(tanh)\")\nplt.plot(x,y2)\nplt.show()",
"Both of these two functions compress their output to a specific range. For the sigmoid function, this range is 0 to 1. For the hyperbolic tangent function, this range is -1 to 1.\nLSTM maintains an internal state and produces an output. The following diagram shows an LSTM unit over three time slices: the current time slice (t), as well as the previous (t-1) and next (t+1) slice:\n\nThe values $\\hat{y}$ are the output from the unit, the values ($x$) are the input to the unit and the values $c$ are the context values. Both the output and context values are always fed to the next time slice. The context values allow \n\nLSTM is made up of three gates:\n\nForget Gate (f_t) - Controls if/when the context is forgotten. (MC)\nInput Gate (i_t) - Controls if/when a value should be remembered by the context. (M+/MS)\nOutput Gate (o_t) - Controls if/when the remembered value is allowed to pass from the unit. (RM)\n\nMathematically, the above diagram can be thought of as the following:\nThese are vector values.\nFirst, calculate the forget gate value. This gate determines if the short term memory is forgotten. The value $b$ is a bias, just like the bias neurons we saw before. Except LSTM has a bias for every gate: $b_t$, $b_i$, and $b_o$.\n$$ f_t = S(W_f \\cdot [\\hat{y}_{t-1}, x_t] + b_f) $$\nNext, calculate the input gate value. This gate's value determines what will be remembered.\n$$ i_t = S(W_i \\cdot [\\hat{y}_{t-1},x_t] + b_i) $$\nCalculate a candidate context value (a value that might be remembered). This value is called $\\tilde{c}$.\n$$ \\tilde{C}t = \\tanh(W_C \\cdot [\\hat{y}{t-1},x_t]+b_C) $$\nDetermine the new context ($C_t$). Do this by remembering the candidate context ($i_t$), depending on input gate. Forget depending on the forget gate ($f_t$). 
\n$$ C_t = f_t \\cdot C_{t-1}+i_t \\cdot \\tilde{C}_t $$\nCalculate the output gate ($o_t$):\n$$ o_t = S(W_o \\cdot [\\hat{y}_{t-1},x_t] + b_o ) $$\nCalculate the actual output ($\\hat{y}_t$):\n$$ \\hat{y}_t = o_t \\cdot \\tanh(C_t) $$\nSimple TensorFlow LSTM Example\nThe following code creates the LSTM network.",
"import numpy as np\nimport pandas\nimport tensorflow as tf\nfrom sklearn import metrics\nfrom tensorflow.models.rnn import rnn, rnn_cell\nfrom tensorflow.contrib import skflow\n\nSEQUENCE_SIZE = 6\nHIDDEN_SIZE = 20\nNUM_CLASSES = 4\n\ndef char_rnn_model(X, y):\n byte_list = skflow.ops.split_squeeze(1, SEQUENCE_SIZE, X)\n cell = rnn_cell.LSTMCell(HIDDEN_SIZE)\n _, encoding = rnn.rnn(cell, byte_list, dtype=tf.float32)\n return skflow.models.logistic_regression(encoding, y)\n\nclassifier = skflow.TensorFlowEstimator(model_fn=char_rnn_model, n_classes=NUM_CLASSES,\n steps=100, optimizer='Adam', learning_rate=0.01, continue_training=True)\n",
"The following code trains on a data set (x) with a max sequence size of 6 (columns) and 6 training elements (rows)",
"x = [\n [[0],[1],[1],[0],[0],[0]],\n [[0],[0],[0],[2],[2],[0]],\n [[0],[0],[0],[0],[3],[3]],\n [[0],[2],[2],[0],[0],[0]],\n [[0],[0],[3],[3],[0],[0]],\n [[0],[0],[0],[0],[1],[1]]\n]\nx = np.array(x,dtype=np.float32)\ny = np.array([1,2,3,2,3,1])\n\nclassifier.fit(x, y)\n\n\ntest = [[[0],[0],[0],[0],[3],[3]]]\ntest = np.array(test)\n\nclassifier.predict(test)",
"Stock Market Example",
"# How to read data from the stock market.\nfrom IPython.display import display, HTML\nimport pandas.io.data as web\nimport datetime\n\nstart = datetime.datetime(2014, 1, 1)\nend = datetime.datetime(2014, 12, 31)\n\nf=web.DataReader('tsla', 'yahoo', start, end)\ndisplay(f)\n\nimport numpy as np\nprices = f.Close.pct_change().tolist() # to percent changes\nprices = prices[1:] # skip the first, no percent change\n\n\nSEQUENCE_SIZE = 5\nx = []\ny = []\n\nfor i in range(len(prices)-SEQUENCE_SIZE-1):\n #print(i)\n window = prices[i:(i+SEQUENCE_SIZE)]\n after_window = prices[i+SEQUENCE_SIZE]\n window = [[x] for x in window]\n #print(\"{} - {}\".format(window,after_window))\n x.append(window)\n y.append(after_window)\n \nx = np.array(x)\nprint(len(x))\n \n\nfrom tensorflow.contrib import skflow\nfrom tensorflow.models.rnn import rnn, rnn_cell\nimport tensorflow as tf\n\nHIDDEN_SIZE = 20\n\ndef char_rnn_model(X, y):\n byte_list = skflow.ops.split_squeeze(1, SEQUENCE_SIZE, X)\n cell = rnn_cell.LSTMCell(HIDDEN_SIZE)\n _, encoding = rnn.rnn(cell, byte_list, dtype=tf.float32)\n return skflow.models.linear_regression(encoding, y)\n\nregressor = skflow.TensorFlowEstimator(model_fn=char_rnn_model, n_classes=1,\n steps=100, optimizer='Adam', learning_rate=0.01, continue_training=True)\n\nregressor.fit(x, y)\n\n\n# Try an in-sample prediction\n\nfrom sklearn import metrics\n# Measure RMSE error. 
RMSE is common for regression.\npred = regressor.predict(x)\nscore = np.sqrt(metrics.mean_squared_error(pred,y))\nprint(\"Final score (RMSE): {}\".format(score))\n\n# Try out of sample\nstart = datetime.datetime(2015, 1, 1)\nend = datetime.datetime(2015, 12, 31)\n\nf=web.DataReader('tsla', 'yahoo', start, end)\n\nimport numpy as np\nprices = f.Close.pct_change().tolist() # to percent changes\nprices = prices[1:] # skip the first, no percent change\n\n\nSEQUENCE_SIZE = 5\nx = []\ny = []\n\nfor i in range(len(prices)-SEQUENCE_SIZE-1):\n window = prices[i:(i+SEQUENCE_SIZE)]\n after_window = prices[i+SEQUENCE_SIZE]\n window = [[x] for x in window]\n x.append(window)\n y.append(after_window)\n \nx = np.array(x)\n\n# Measure RMSE error. RMSE is common for regression.\npred = regressor.predict(x)\nscore = np.sqrt(metrics.mean_squared_error(pred,y))\nprint(\"Out of sample score (RMSE): {}\".format(score))",
"Assignment 3 Solution\nBasic neural network solution:",
"import os\nimport pandas as pd\nfrom sklearn.cross_validation import train_test_split\nimport tensorflow.contrib.learn as skflow\nimport numpy as np\nfrom sklearn import metrics\n\npath = \"./data/\"\n \nfilename = os.path.join(path,\"t81_558_train.csv\") \ntrain_df = pd.read_csv(filename)\n\ntrain_df.drop('id',1,inplace=True)\n\ntrain_x, train_y = to_xy(train_df,'outcome')\n\ntrain_x, test_x, train_y, test_y = train_test_split(\n train_x, train_y, test_size=0.25, random_state=42)\n\n# Create a deep neural network with 3 hidden layers of 50, 25, 10\nregressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)\n\n# Early stopping\nearly_stop = skflow.monitors.ValidationMonitor(test_x, test_y,\n early_stopping_rounds=200, print_steps=50)\n\n# Fit/train neural network\nregressor.fit(train_x, train_y, monitor=early_stop)\n\n# Measure RMSE error. RMSE is common for regression.\npred = regressor.predict(test_x)\nscore = np.sqrt(metrics.mean_squared_error(pred,test_y))\nprint(\"Final score (RMSE): {}\".format(score))\n\n####################\n# Build submit file\n####################\nfrom IPython.display import display, HTML\nfilename = os.path.join(path,\"t81_558_test.csv\") \nsubmit_df = pd.read_csv(filename)\nids = submit_df.Id\nsubmit_df.drop('Id',1,inplace=True)\nsubmit_x = submit_df.as_matrix()\n\npred_submit = regressor.predict(submit_x)\n\nsubmit_df = pd.DataFrame({'Id': ids, 'outcome': pred_submit[:,0]})\nsubmit_filename = os.path.join(path,\"t81_558_jheaton_submit.csv\")\nsubmit_df.to_csv(submit_filename, index=False)\n\ndisplay(submit_df)",
"The following code uses a random forest to rank the importance of features. This can be used both to rank the origional features and new ones created.",
"import matplotlib.pyplot as plt\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.ensemble import RandomForestRegressor\n\n\n# Build a forest and compute the feature importances\nforest = RandomForestRegressor(n_estimators=50,\n random_state=0, verbose = True)\nprint(\"Training random forest\")\nforest.fit(train_x, train_y)\nimportances = forest.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in forest.estimators_],\n axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Print the feature ranking\n#train_df.drop('outcome',1,inplace=True)\nbag_cols = train_df.columns.values\nprint(\"Feature ranking:\")\n\nfor f in range(train_x.shape[1]):\n print(\"{}. {} ({})\".format(f + 1, bag_cols[indices[f]], importances[indices[f]]))\n\nThe following code uses engineered features.\n\nimport os\nimport pandas as pd\nfrom sklearn.cross_validation import train_test_split\nimport tensorflow.contrib.learn as skflow\nimport numpy as np\nfrom sklearn import metrics\n\npath = \"./data/\"\n \nfilename = os.path.join(path,\"t81_558_train.csv\") \ntrain_df = pd.read_csv(filename)\n\ntrain_df.drop('id',1,inplace=True)\n#train_df.drop('g',1,inplace=True)\n#train_df.drop('e',1,inplace=True)\n\n\ntrain_df.insert(0, \"a-b\", train_df.a - train_df.b)\n#display(train_df)\n\ntrain_x, train_y = to_xy(train_df,'outcome')\n\ntrain_x, test_x, train_y, test_y = train_test_split(\n train_x, train_y, test_size=0.25, random_state=42)\n\n# Create a deep neural network with 3 hidden layers of 50, 25, 10\nregressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)\n\n# Early stopping\nearly_stop = skflow.monitors.ValidationMonitor(test_x, test_y,\n early_stopping_rounds=200, print_steps=50)\n\n# Fit/train neural network\nregressor.fit(train_x, train_y, monitor=early_stop)\n\n# Measure RMSE error. 
RMSE is common for regression.\npred = regressor.predict(test_x)\nscore = np.sqrt(metrics.mean_squared_error(pred,test_y))\nprint(\"Final score (RMSE): {}\".format(score))\n\n# foxtrot bravo\n# charlie alpha\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tgsmith61591/pyramid
|
benchmarks/Benchmarking Seasonality.ipynb
|
mit
|
[
"Benchmarking seasonality tests\nThe CHTest for seasonality has shown itself to be... slow. This notebook demonstrates the speed (or lack-thereof) of the old-style CHTest in v1.1.0 vs. later iterations.\nSetup\nThis portion won't change between versions of pmdarima. This dataset was submitted by a user in Issue #12 and showed a very slow performance on the CHTest. Therefore, it's effective for use in benchmarking.",
"import pandas as pd\n\nX = pd.read_csv('item_sales_daily.csv.gz')\ny = X['sales'].values\nX.head()\n\nimport pmdarima as pm\nimport time\nfrom functools import wraps\n\n\ndef timed(func):\n \"\"\"A decorator to time a result\"\"\"\n @wraps(func)\n def wrapper(*args, **kwargs):\n start = time.time()\n res = func(*args, **kwargs)\n print(\"Complete in %.3f seconds\" % (time.time() - start))\n return res\n return wrapper\n\n\n@timed\ndef benchmark(x, test):\n res = pm.arima.nsdiffs(x, m=365, max_D=5, test=test) # 365 since daily\n print(\"Version: %s\" % pm.__version__)\n return res\n",
"Version 1.1.0",
"benchmark(y, \"ch\")",
"Version 1.2.0",
"benchmark(y, \"ch\")",
"Version 1.2.0 added the OCSBTest, which is orders of magnitude faster than the CHTest.",
"benchmark(y, \"ocsb\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dahak-metagenomics/dahak
|
workflows/comparison/metagenome_comparison_example.ipynb
|
bsd-3-clause
|
[
"Summary:\nThis notebook is for visualizing matrices created with sourmash compare.\nSourmash compare quantifies how much different metagenomes resemble each other. A Jaccard distance of 1 will be reported for metagenomes that are identical to each other, and a Jaccard distance of 0 will be reported for metagenomes that do not share any of their k-mer content. All other degrees of similarity will be captured within the range of 0 to 1.\nExample Use Case:\nIn this example, the complete Shakya et al. 2013 metagenome is being compared to small, medium, and large subsamples of itself after conservative or aggressive read filtering. The datasets used in this example are named according to their metagenome content and relative degree of read filtering:\n\n\nSRR606249 = Accession number for the complete Shakya et al. 2013 metagenome\n\n\nsubset50 = 50% of the complete Shakya et al. 2013 metagenome\n\n\nsubset25 = 25% of the complete Shakya et al. 2013 metagenome\n\n\nsubset10 = 10% of the complete Shakya et al. 2013 metagenome\n\n\npe.trim2 = Conservative read filtering\n\n\npe.trim30 = Aggressive read filtering\n\n\nObjectives:\n\n\nCompare signatures and generate a cluster map\n\n\nDetermine which samples are most similar\n\n\nCreate MDS and TSNE plots\n\n\nFirst, set the backend of matplotlib to the 'inline' backend. With this backend, the output of plotting commands is displayed inline within frontends like the Jupyter notebook, directly below the code cell that produced it. The resulting plots will then also be stored in the notebook document.",
"%matplotlib inline",
"Next, import the compare module",
"import compare",
"Then load and visulalize the table",
"from compare import load_sourmash_csv\n# File name\nload_sourmash_csv('SRR606249.pe.trim2and30_comparison.k51.csv')\n\nfrom compare import create_cluster_map\n#Input file name, output image name, title \ncreate_cluster_map(\"SRR606249.pe.trim2and30_comparison.k51.csv\", \"Yep.png\", 'Shakya Complete and Subsampled with Variable Quality Trimming and K = 51')\n\nfrom compare import sort_by_similarity\n# Input file name, output file name \nsort_by_similarity(\"SRR606249.pe.trim2and30_comparison.k51.csv\", \"sorted.SRR606249.pe.trim2and30_comparison.k51.csv\")\n\nfrom compare import create_tsne\n#Input file name, output image name \ncreate_tsne(\"SRR606249.pe.trim2and30_comparison.k51.csv\", \"yes.png]\")\n\nfrom compare import create_mds_plot\n#Input file name, output image name\ncreate_mds_plot(\"SRR606249.pe.trim2and30_comparison.k51.csv\", \"yep.png\")\n\nimport pandas as pd\nimport seaborn as sns\nimport numpy\nfrom matplotlib import pyplot\nfrom sklearn import manifold\nimport os.path\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy as sp\n\nimport plotly \nimport plotly.plotly as py\n\ndf_csv = pd.read_csv(\"SRR606249.pe.trim2and30_comparison.k51.csv\")\n\n#def create_mds_plot(filename, save_fig):\nm = numpy.loadtxt(open(\"SRR606249.pe.trim2and30_comparison.k51.csv\"), delimiter=\",\" , skiprows=1)\nfrom sklearn.manifold import mds\n \nfrom sklearn.preprocessing import StandardScaler\ndata_std = StandardScaler().fit_transform(m)\n\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=8, svd_solver='full')\ndata_pca = pca.fit_transform(data_std)\n\nmds = manifold.MDS(n_components=2, max_iter=3000, eps=1e-9,\n dissimilarity=\"euclidean\", n_jobs=1).fit_transform(m)\n\n\n\ndf = pd.DataFrame(mds)\ndf.columns=['t1','t2']\n\n# Rename index with column names - path \nx = dict([(i,os.path.basename(i)) for i in df_csv.columns])\ndfnew = df_csv.rename(index=str, columns=x)\n\ndfnew ['']= dfnew.columns\noutput = 
dfnew.set_index('')\n#output\n\n#df['labels'] = output.columns\n#df\n\n#Convert to df to dic \nnew_output = output.to_dict()\nnew_output\n#df_new\n\n#pyplot.scatter(df.t1, df.t2, label=df['labels'])\n\npy.plot(data=new_output)",
"Conclusions:\nIn this example, the complete Shakya et al. 2013 metagenome was compared to small, medium, and large subsamples of \nitself after conservative or aggressive read filtering. We observed that the larger the subsampled percentage, \nthe higher the similarity was between the subsampled metagenome and the complete Shakya et al. 2013 metagenome. \nTo a lesser extent, we also observed more similarity between metagenomes that underwent the same degree of \nconservative or aggressive read filtering. This is consistent with the expected behavior of sourmash compare to \nreport larger Jaccard distances for metagenomes that share a higher degree of their k-mer content.\nMore research needs to be done to interpret Jaccard distances calculated across diverse metagenomes that were \nsequenced with different chemistries and depths of coverage; however, this represents a baseline capability to \ncompare the content of multiple metagenomes in a computationally efficient and reference database-independent manner. \nGiven the high percentage of sequences in environmental metagenomes that have no known reference genome, it is \nadvantageous to use this kind of approach rather than limiting metagenome comparisons to the sequences with known \ntaxonomic classifications."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mohanprasath/Course-Work
|
certifications/code/student_intervention/student_intervention.ipynb
|
gpl-3.0
|
[
"Machine Learning Engineer Nanodegree\nSupervised Learning\nProject: Building a Student Intervention System\nWelcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nQuestion 1 - Classification vs. Regression\nYour goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?\nAnswer: The project deals with student's graduation percentage. Since graduation depends upon scoring more than a certain percentage it means there are only two possibilities: pass or fail. This is a discrete type of output. So we have to use classification type supervised algorithm. 
Classification is the right choice because it deals with discrete output possibilities.\nExploring the Data\nRun the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, 'passed', will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.",
"# Import libraries\nimport numpy as np\nimport pandas as pd\nfrom time import time\nfrom sklearn.metrics import f1_score\n\n# Read student data\nstudent_data = pd.read_csv(\"student-data.csv\")\nprint \"Student data read successfully!\"",
"Implementation: Data Exploration\nLet's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:\n- The total number of students, n_students.\n- The total number of features for each student, n_features.\n- The number of those students who passed, n_passed.\n- The number of those students who failed, n_failed.\n- The graduation rate of the class, grad_rate, in percent (%).",
"# TODO: Calculate number of students - DONE\nn_students = student_data.shape[0]\n\n# TODO: Calculate number of features - DONE\nn_features = student_data.shape[1]-1 # not counting passed column\n\n# TODO: Calculate passing students - DONE\nn_passed = student_data[student_data['passed'] == 'yes'].shape[0]\n\n# TODO: Calculate failing students - DONE\nn_failed = student_data[student_data['passed'] == 'no'].shape[0]\n\n# TODO: Calculate graduation rate - DONE\ngrad_rate = float(float(n_passed)/float(n_students))*100\n\n# Print the results\nprint \"Total number of students: {}\".format(n_students)\nprint \"Number of features: {}\".format(n_features)\nprint \"Number of students who passed: {}\".format(n_passed)\nprint \"Number of students who failed: {}\".format(n_failed)\nprint \"Graduation rate of the class: {:.2f}%\".format(grad_rate)",
"Preparing the Data\nIn this section, we will prepare the data for modeling, training and testing.\nIdentify feature and target columns\nIt is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.\nRun the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.",
"# Extract feature columns\nfeature_cols = list(student_data.columns[:-1])\n\n# Extract target column 'passed'\ntarget_col = student_data.columns[-1] \n\n# Show the list of columns\nprint \"Feature columns:\\n{}\".format(feature_cols)\nprint \"\\nTarget column: {}\".format(target_col)\n\n# Separate the data into feature data and target data (X_all and y_all, respectively)\nX_all = student_data[feature_cols]\ny_all = student_data[target_col]\n\n# Show the feature information by printing the first five rows\nprint \"\\nFeature values:\"\nprint X_all.head()",
"Preprocess Feature Columns\nAs you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.\nOther columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.\nThese generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.",
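As a quick illustration of the `get_dummies` transformation described above, here is a minimal sketch on a tiny made-up column (the values are illustrative, not taken from the student dataset):

```python
import pandas as pd

# A tiny hypothetical categorical column, similar in spirit to 'Fjob'
df = pd.DataFrame({'Fjob': ['teacher', 'other', 'services', 'teacher']})

# One 0/1 column is created per distinct category value
dummies = pd.get_dummies(df['Fjob'], prefix='Fjob')
print(sorted(dummies.columns))  # ['Fjob_other', 'Fjob_services', 'Fjob_teacher']
```

Exactly one generated column is 1 in each row, so the transformation preserves the original information while making it numeric.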
"def preprocess_features(X):\n ''' Preprocesses the student data and converts non-numeric binary variables into\n binary (0/1) variables. Converts categorical variables into dummy variables. '''\n \n # Initialize new output DataFrame\n output = pd.DataFrame(index = X.index)\n\n # Investigate each feature column for the data\n for col, col_data in X.iteritems():\n \n # If data type is non-numeric, replace all yes/no values with 1/0\n if col_data.dtype == object:\n col_data = col_data.replace(['yes', 'no'], [1, 0])\n\n # If data type is categorical, convert to dummy variables\n if col_data.dtype == object:\n # Example: 'school' => 'school_GP' and 'school_MS'\n col_data = pd.get_dummies(col_data, prefix = col) \n \n # Collect the revised columns\n output = output.join(col_data)\n \n return output\n\nX_all = preprocess_features(X_all)\nprint \"Processed feature columns ({} total features):\\n{}\".format(len(X_all.columns), list(X_all.columns))",
"Implementation: Training and Testing Data Split\nSo far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:\n- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.\n - Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).\n - Set a random_state for the function(s) you use, if provided.\n - Store the results in X_train, X_test, y_train, and y_test.",
"# TODO: Import any additional functionality you may need here\n\n# TODO: Set the number of training points\nnum_train = 300\n\n# Set the number of testing points\nnum_test = X_all.shape[0] - num_train\n\n# TODO: Shuffle and split the dataset into the number of training and testing points above\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X_all, y_all,test_size=num_test,random_state=1)\n\n# Show the results of the split\nprint \"Training set has {} samples.\".format(X_train.shape[0])\nprint \"Testing set has {} samples.\".format(X_test.shape[0])",
"Training and Evaluating Models\nIn this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.\nThe following supervised learning models are currently available in scikit-learn that you may choose from:\n- Gaussian Naive Bayes (GaussianNB)\n- Decision Trees\n- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)\n- K-Nearest Neighbors (KNeighbors)\n- Stochastic Gradient Descent (SGDC)\n- Support Vector Machines (SVM)\n- Logistic Regression\nQuestion 2 - Model Application\nList three supervised learning models that are appropriate for this problem. For each model chosen\n- Describe one real-world application in industry where the model can be applied. (You may need to do a small bit of research for this — give references!) \n- What are the strengths of the model; when does it perform well? \n- What are the weaknesses of the model; when does it perform poorly?\n- What makes this model a good candidate for the problem, given what you know about the data?\nAnswer: \nKNeighborsClassifier\n1. Credit card fraud detection. Source: Udacity lectures \n2. Approximation method with simple calculations. Training is quick and easy. \n3. When dimensions increase the model will perform poorly.\n4. Small dataset, less data dimensions, \nDecisionTreeClassifier\n1. Mammal classification problem. Source: Introduction to data mining (text book, chapter 4, figure 4.4)\n2. 
Builds a tree directly from the data. Redundant attributes are allowed. \n3. When conditions become very complex, the model starts overfitting. Finding the optimal tree is NP-complete.\n4. Works well on imbalanced datasets.\nGaussianNB\n1. Document classification system, email classification system. Source: Udacity lectures and projects.\n2. Gives exact mathematical probabilities. No approximations are used. \n3. When conditional independence between variables no longer holds, this model becomes useless. \n4. Small dataset; many fields have dependencies, e.g. grades influence whether a student passed. \nSetup\nRun the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:\n- train_classifier - takes as input a classifier and training data and fits the classifier to the data.\n- predict_labels - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.\n- train_predict - takes as input a classifier, and the training and testing data, and performs train_classifier and predict_labels.\n - This function will report the F<sub>1</sub> score for both the training and testing data separately.",
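Since everything below is scored with F1, it may help to recall that F1 is the harmonic mean of precision and recall; a minimal sanity check with sklearn (the labels here are made up):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = ['yes', 'yes', 'no', 'yes', 'no', 'no']
y_pred = ['yes', 'no', 'no', 'yes', 'yes', 'no']

p = precision_score(y_true, y_pred, pos_label='yes')
r = recall_score(y_true, y_pred, pos_label='yes')
f1 = f1_score(y_true, y_pred, pos_label='yes')

# F1 = 2 * precision * recall / (precision + recall)
print(round(p, 4), round(r, 4), round(f1, 4))  # 0.6667 0.6667 0.6667
```

Note the `pos_label='yes'` argument: with string labels, sklearn needs to know which class counts as the positive one.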
"def train_classifier(clf, X_train, y_train):\n ''' Fits a classifier to the training data. '''\n \n # Start the clock, train the classifier, then stop the clock\n start = time()\n clf.fit(X_train, y_train)\n end = time()\n \n # Print the results\n print \"Trained model in {:.4f} seconds\".format(end - start)\n\n \ndef predict_labels(clf, features, target):\n ''' Makes predictions using a fit classifier based on F1 score. '''\n \n # Start the clock, make predictions, then stop the clock\n start = time()\n y_pred = clf.predict(features)\n end = time()\n \n # Print and return results\n print \"Made predictions in {:.4f} seconds.\".format(end - start)\n return f1_score(target.values, y_pred, pos_label='yes')\n\n\ndef train_predict(clf, X_train, y_train, X_test, y_test):\n ''' Train and predict using a classifer based on F1 score. '''\n \n # Indicate the classifier and the training set size\n print \"Training a {} using a training set size of {}. . .\".format(clf.__class__.__name__, len(X_train))\n \n # Train the classifier\n train_classifier(clf, X_train, y_train)\n \n # Print the results of prediction for both training and testing\n print \"F1 score for training set: {:.4f}.\".format(predict_labels(clf, X_train, y_train))\n print \"F1 score for test set: {:.4f}.\".format(predict_labels(clf, X_test, y_test))",
"Implementation: Model Performance Metrics\nWith the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:\n- Import the three supervised learning models you've discussed in the previous section.\n- Initialize the three models and store them in clf_A, clf_B, and clf_C.\n - Use a random_state for each model you use, if provided.\n - Note: Use the default settings for each model — you will tune one specific model in a later section.\n- Create the different training set sizes to be used to train each model.\n - Do not reshuffle and resplit the data! The new training points should be drawn from X_train and y_train.\n- Fit each model with each training set size and make predictions on the test set (9 in total).\nNote: Three tables are provided after the following code cell which can be used to store your results.",
"# TODO: Import the three supervised learning models from sklearn - DONE\n# from sklearn import model_A\nfrom sklearn.neighbors import KNeighborsClassifier\n# from sklearn import model_B\nfrom sklearn import tree\n# from sklearn import model_C\nfrom sklearn.naive_bayes import GaussianNB\n\n# additional models\nfrom sklearn import ensemble\n \n# TODO: Initialize the three models\nclf_A = KNeighborsClassifier()\nclf_B = tree.DecisionTreeClassifier(random_state = 42)\nclf_C = GaussianNB()\n# additional models\nclf_D = ensemble.AdaBoostClassifier(random_state = 42)\nclf_E = ensemble.RandomForestClassifier(random_state = 42)\n\nclassifier_names = [\"KNN\", \"Decision Tree\", \"Naive Bayes Classifier\", \"Ada Boost Classifier\", \" Random Forest Classifier\"]\n\n# TODO: Set up the training set sizes - DONE\nX_train_100 = X_train[:100]\ny_train_100 = y_train[:100]\n\nX_train_200 = X_train[:200]\ny_train_200 = y_train[:200]\n\nX_train_300 = X_train[:300]\ny_train_300 = y_train[:300]\n\n\n# TODO: Execute the 'train_predict' function for each classifier and each training set size - DONE\n# train_predict(clf, X_train, y_train, X_test, y_test)\ncount = 0\nfor clf in [clf_A, clf_B, clf_C, clf_D, clf_E]:\n print classifier_names[count]\n count += 1\n for n in [100, 200, 300]:\n train_predict(clf, X_train[:n], y_train[:n], X_test, y_test)",
"Tabular Results\nEdit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.\n Classifer 1 - KNeighborsClassifier \n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.3152 | 0.0038 | 0.7883 | 0.7727 |\n| 200 | 0.0017 | 0.0029 | 0.8345 | 0.7971 |\n| 300 | 0.0015 | 0.0041 | 0.8558 | 0.7681 |\n Classifer 2 - DecisionTreeClassifier \n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.0478 | 0.0010 | 1.0000 | 0.6435 |\n| 200 | 0.0025 | 0.0004 | 1.0000 | 0.7761 |\n| 300 | 0.0035 | 0.0004 | 1.0000 | 0.6880 |\n Classifer 3 - GaussianNB \n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.0014 | 0.0006 | 0.8346 | 0.7402 |\n| 200 | 0.0019 | 0.0006 | 0.7879 | 0.6446 |\n| 300 | 0.0017 | 0.0006 | 0.7921 | 0.6720 |\n Classifer 4 - AdaBoostClassifier \n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.1652 | 0.0090 | 0.9624 | 0.6949 |\n| 200 | 0.1564 | 0.0111 | 0.8633 | 0.7647 |\n| 300 | 0.1802 | 0.0102 | 0.8579 | 0.8116 |\n Classifer 5 - RandomForestClassifier \n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.0813 | 0.0021 | 1.0000 | 0.6720 |\n| 200 | 0.0370 | 
0.0017 | 0.9848 | 0.7407 |\n| 300 | 0.0333 | 0.0024 | 0.9924 | 0.7368 |\nChoosing the Best Model\nIn this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score. \nQuestion 3 - Choosing the Best Model\nBased on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?\nAnswer: Among the five classifiers above, the choice comes down to AdaBoostClassifier and KNeighborsClassifier. AdaBoostClassifier reaches a slightly higher test F1 score, but KNeighborsClassifier offers a competitive F1 score with the lower prediction time of the two. I would recommend KNeighborsClassifier because working with huge amounts of data (thousands of GB) will take more time, and choosing a model that balances a good F1 score against low prediction time is the key trade-off.\nQuestion 4 - Model in Layman's Terms\nIn one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.\nAnswer: \nK-nearest neighbours (KNN for short) is a classification algorithm. We train the model using training data points that have associated labels. To classify a single data point (during prediction or testing), KNN tries to identify the five (the default value) nearest neighbours among the points added to the model during training. 
Based upon the majority of labels of those nearest neighbours, KNN classifies the data point accordingly. \nWhat are \"nearest neighbours\"? How does KNN find them? \nEach neighbour of the data point to be labelled is given a weight 1/d, where d denotes the distance between the data point and the neighbour. Based upon the value of K (here it is 5), we select the K neighbours with the smallest distance d, i.e. the largest weight 1/d. These neighbours are called the nearest neighbours.\nSource: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm\nImplementation: Model Tuning\nFine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:\n- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.\n- Create a dictionary of parameters you wish to tune for the chosen model.\n - Example: parameters = {'parameter' : [list of values]}.\n- Initialize the classifier you've chosen and store it in clf.\n- Create the F<sub>1</sub> scoring function using make_scorer and store it in f1_scorer.\n - Set the pos_label parameter to the correct value!\n- Perform grid search on the classifier clf using f1_scorer as the scoring method, and store it in grid_obj.\n- Fit the grid search object to the training data (X_train, y_train), and store it in grid_obj.",
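The distance-weighted voting described above can be sketched with scikit-learn's KNeighborsClassifier on a tiny made-up 1-D dataset (illustrative only, not the student data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Points near 0 are labelled 'no', points near 10 are labelled 'yes'
X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
y = np.array(['no', 'no', 'no', 'yes', 'yes', 'yes'])

# weights='distance' gives each of the 5 neighbours a vote weighted by 1/d
knn = KNeighborsClassifier(n_neighbors=5, weights='distance')
knn.fit(X, y)

print(knn.predict([[9.5]])[0])  # the close 'yes' neighbours dominate -> yes
```

Because the query point 9.5 sits next to the 'yes' cluster, the large 1/d weights of those neighbours outvote the distant 'no' points even though 'no' points are in the 5-neighbour set.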
"# TODO: Import 'GridSearchCV' and 'make_scorer' - DONE\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import make_scorer\n# TODO: Create the parameters list you wish to tune - DONE. 10% of total data size\nparameters = {'n_neighbors':(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30),'weights':('uniform','distance')}\n\n# TODO: Initialize the classifier - DONE\nclf = KNeighborsClassifier()\n\n# TODO: Make an f1 scoring function using 'make_scorer' - DONE \nf1_scorer = make_scorer(f1_score,pos_label='yes')\n\n# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method - DONE\ngrid_obj = GridSearchCV(clf,param_grid = parameters, scoring = f1_scorer)\n\n# TODO: Fit the grid search object to the training data and find the optimal parameters - DONE\ngrid_obj = grid_obj.fit(X_train,y_train)\n\n# Get the estimator\nclf = grid_obj.best_estimator_\n\n# Report the final F1 score for training and testing after parameter tuning\nprint \"Tuned model has a training F1 score of {:.4f}.\".format(predict_labels(clf, X_train, y_train))\nprint \"Tuned model has a testing F1 score of {:.4f}.\".format(predict_labels(clf, X_test, y_test))",
"Question 5 - Final F<sub>1</sub> Score\nWhat is the final model's F<sub>1</sub> score for training and testing? How does that score compare to the untuned model?\nAnswer: The tuned model's test F1 score is 0.8684, with a prediction time of 0.0042 seconds. This score is higher than the untuned model's. \n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pligor/predicting-future-product-prices
|
04_time_series_prediction/35_price_history_autoencoder_dyn_rnn_with_diff.ipynb
|
agpl-3.0
|
[
"# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2\n\nfrom __future__ import division\nimport tensorflow as tf\nfrom os import path, remove\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \\\n renderStatsListWithLabels, renderStatsCollectionOfCrossValids, plot_res_gp, my_plot_convergence\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom collections import OrderedDict\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common_33 import get_or_run_nn\nfrom skopt.space.space import Integer, Real\nfrom skopt import gp_minimize\nfrom skopt.plots import plot_convergence\nimport pickle\nimport inspect\nimport dill\nimport sys\n\nfrom models.model_35_price_history_autoencoder import PriceHistoryAutoencoder\nfrom data_providers.data_provider_33_price_history_autoencoder import PriceHistoryAutoEncDataProvider\n#from gp_opt.price_history_27_gp_opt import PriceHistoryGpOpt\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\nn_jobs = 1\n%matplotlib inline",
"Step 0 - hyperparams",
"factors(689)\n\nmax_seq_len = 682\n\n#full_train_size = 55820\n#train_size = 55800\n#small_train_size = 6000 #just because of performance reasons, no statistics behind this decision\n#test_size = 6200\n\ndata_path = '../../../../Dropbox/data'\n\nphae_path = data_path + '/price_hist_autoencoder'\n\ncsv_in = '../price_history_03_seq_start_suddens_trimmed.csv'\nassert path.isfile(csv_in)\n\nnpz_unprocessed = phae_path + '/price_history_full_seqs.npz'\nassert path.isfile(npz_unprocessed)\n\nnpz_dates = phae_path + '/price_history_full_seqs_dates.npz'\nassert path.isfile(npz_dates)\n\nnpz_train = phae_path + '/price_history_seqs_dates_normed_train.npz'\nassert path.isfile(npz_train)\n\nnpz_test = phae_path + '/price_history_seqs_dates_normed_test.npz'\nassert path.isfile(npz_test)\n\nnpz_path = npz_train[:-len('_train.npz')]\n\nfor key, val in np.load(npz_train).iteritems():\n print key, \",\", val.shape",
"Step 1 - collect data",
"dp = PriceHistoryAutoEncDataProvider(npz_path=npz_path, batch_size=53, with_EOS=False)\nfor data in dp.datalist:\n print data.shape\n\n# for item in dp.next():\n# print item.shape",
"Step 2 - Build model",
"# model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)\n# graph = model.getGraph(batch_size=53,\n# enc_num_units = 10,\n# dec_num_units = 10,\n# ts_len=max_seq_len)",
"targets\nTensor(\"data/strided_slice:0\", shape=(53, 682), dtype=float32)\nTensor(\"encoder_rnn_layer/rnn/while/Exit_2:0\", shape=(?, 10), dtype=float32)\nTensor(\"encoder_state_out_process/Elu:0\", shape=(?, 2), dtype=float32)\nTensor(\"decoder_state_in_process/Elu:0\", shape=(?, 10), dtype=float32)\nTensor(\"decoder_rnn_layer/rnn/transpose:0\", shape=(53, 682, 10), dtype=float32)\nTensor(\"decoder_outs/Reshape:0\", shape=(36146, 10), dtype=float32)\nTensor(\"readout_affine/Identity:0\", shape=(36146, 1), dtype=float32)\nTensor(\"readout_affine/Reshape:0\", shape=(53, 682), dtype=float32)\nTensor(\"error/mul_1:0\", shape=(53, 682), dtype=float32)\nTensor(\"error/Mean:0\", shape=(), dtype=float32)\nTensor(\"error/Mean:0\", shape=(), dtype=float32)\nTensor(\"error_diff/mul_1:0\", shape=(53, 681), dtype=float32)\nTensor(\"error_diff/Mean:0\", shape=(), dtype=float32)\nTensor(\"error_diff/Mean:0\", shape=(), dtype=float32)",
"#show_graph(graph)",
"Quick test run",
"def experiment():\n return model.run(npz_path=npz_path,\n epochs=2,\n batch_size = 53,\n enc_num_units = 400,\n dec_num_units = 400,\n ts_len=max_seq_len,\n learning_rate = 1e-4,\n preds_gather_enabled = False,\n )\ndyn_stats_dic = experiment()\n\ndyn_stats_dic['dyn_stats'].plotStats()\nplt.show()\ndyn_stats_dic['dyn_stats_diff'].plotStats()\nplt.show()",
"Step 3 training the network",
"model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)\n\nnpz_test = npz_path + '_test.npz'\nassert path.isfile(npz_test)\npath.abspath(npz_test)\n\ndef experiment():\n return model.run(npz_path=npz_path,\n epochs=200,\n batch_size = 53,\n enc_num_units = 450,\n dec_num_units = 450,\n ts_len=max_seq_len,\n learning_rate = 1e-4,\n preds_gather_enabled = True,\n )\n\n#%%time\n# dyn_stats_dic, preds_dict, targets, twods = experiment()\ndyn_stats, preds_dict, targets, twods = get_or_run_nn(experiment, filename='035_autoencoder_000',\n nn_runs_folder = data_path + \"/nn_runs\")\n\ndyn_stats['dyn_stats'].plotStats()\nplt.show()\ndyn_stats['dyn_stats_diff'].plotStats()\nplt.show()\n\nr2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])\n for ind in range(len(targets))]\n\nind = np.argmin(r2_scores)\nind\n\nreals = targets[ind]\npreds = preds_dict[ind]\n\nr2_score(y_true=reals, y_pred=preds)\n\n#sns.tsplot(data=dp.inputs[ind].flatten())\n\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\n%%time\ndtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]\n for ind in range(len(targets))]\n\nnp.mean(dtw_scores)\n\ncoint(preds, reals)\n\ncur_ind = np.random.randint(len(targets))\nreals = targets[cur_ind]\npreds = preds_dict[cur_ind]\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b', label='reals')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()",
"Conclusion\nThe autoencoder is not able to represent our price history time series in a visibly obvious way.\nTS in Two Dimensions",
"twod_arr = np.array(twods.values())\ntwod_arr.shape\n\nplt.figure(figsize=(16,7))\nplt.plot(twod_arr[:, 0], twod_arr[:, 1], 'r.')\nplt.title('two dimensional representation of our time series after dimensionality reduction')\nplt.xlabel('first dimension')\nplt.ylabel('second dimension')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SergioSantGre/project_portfolio
|
SSG_dlnd_tv_script_generation.ipynb
|
lgpl-3.0
|
[
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"# from helper.py change line 10 with open(input_file, \"r\",encoding='utf_8', errors='ignore') as f:\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (2, 15)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\ndef create_lookup_tables(text):\n \n counts = Counter(text)\n vocab = sorted(counts, key=counts.get, reverse=True)\n vocab_to_int = {word: w for w, word in enumerate(vocab)}\n int_to_vocab = {i:j for j, i in vocab_to_int.items()}\n return (vocab_to_int, int_to_vocab)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
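To make the effect concrete, here is a small sketch of applying such a token dictionary to a line of text (the mapping below is a hypothetical subset of the one implemented next):

```python
# Hypothetical subset of the punctuation-token mapping
token_dict = {'.': '||period||', ',': '||comma||', '!': '||exclamation_mark||'}

text = "Hello, world!"
for symbol, token in token_dict.items():
    # surround the token with spaces so it splits out as its own "word"
    text = text.replace(symbol, ' {} '.format(token))

print(text.split())  # ['Hello', '||comma||', 'world', '||exclamation_mark||']
```

After this step, `split()` treats each punctuation mark as a separate vocabulary entry rather than fusing it onto the preceding word.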
"def token_lookup():\n\n token_dict = {'.':'||period||',\n ',':'||comma||',\n '\"':'||quotm||',\n ';':'||semic||',\n '!':'||exclMark||',\n '?':'||qMark||',\n '(':'||lpar||',\n ')':'||rpar||',\n '--':'||dash||',\n '\\n':'||ret||'}\n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following the tuple (Input, Targets, LearingRate)",
"def get_inputs():\n\n # TODO: Implement Function\n inputs = tf.placeholder(tf.int32, shape=(None, None), name='input')\n targets = tf.placeholder(tf.int32, shape=[None, None], name='targets')\n learning_rate = tf.placeholder(tf.float32, shape=(None), name='learning_rate')\n #keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n return inputs, targets, learning_rate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n \n # LSTM cell\n \n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n \n #drop = tf.contrib.rnn.DropoutWrapper(lstm, input_keep_prob=0.85, output_keep_prob=0.85, seed=42)\n\n # multiple LSTM layers\n cell = tf.contrib.rnn.MultiRNNCell([lstm])\n \n # initialize LSTM cell state and identify as 'initial_state'\n initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')\n return cell, initial_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)\n final_state = tf.identity(state, name = 'final_state')\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embedding = get_embed(input_data, vocab_size, rnn_size)\n outputs, final_state = build_rnn(cell, embedding)\n logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)\n \n return (logits, final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```",
"def get_batches(int_text, batch_size, seq_length):\n\n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n data_x = np.array(int_text[: n_batches * batch_size * seq_length])\n data_y = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n batches_x = np.split(data_x.reshape(batch_size, -1), n_batches, 1)\n batches_y = np.split(data_y.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(batches_x, batches_y)))\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = 65\n# Batch Size\nbatch_size = 64\n# RNN Size\nrnn_size = 256\n# Sequence Length\nseq_length = 15\n# Learning Rate\nlearning_rate = 0.005\n# Show stats for every n number of batches\nshow_every_n_batches = 71\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n InputTensor = loaded_graph.get_tensor_by_name('input:0')\n InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')\n FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')\n ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')\n \n return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n# i = np.random.choice(np.arange(len(int_to_vocab)), probabilities)\n \n probs = list(probabilities)\n# print(len(probabilities), len(int_to_vocab))\n i = np.random.choice(np.arange(len(int_to_vocab)), p=probabilities)\n return int_to_vocab[i]\n \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jseabold/statsmodels
|
examples/notebooks/statespace_sarimax_pymc3.ipynb
|
bsd-3-clause
|
[
"Fast Bayesian estimation of SARIMAX models\nIntroduction\nThis notebook will show how to use fast Bayesian methods to estimate SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous regressors) models. These methods can also be parallelized across multiple cores.\nHere, fast methods means a version of Hamiltonian Monte Carlo called the No-U-Turn Sampler (NUTS) developed by Hoffmann and Gelman: see Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623.. As they say, \"the cost of HMC per independent sample from a target distribution of dimension $D$ is roughly $\\mathcal{O}(D^{5/4})$, which stands in sharp contrast with the $\\mathcal{O}(D^{2})$ cost of random-walk Metropolis\". So for problems of larger dimension, the time-saving with HMC is significant. However it does require the gradient, or Jacobian, of the model to be provided.\nThis notebook will combine the Python libraries statsmodels, which does econometrics, and PyMC3, which is for Bayesian estimation, to perform fast Bayesian estimation of a simple SARIMAX model, in this case an ARMA(1, 1) model for US CPI.\nNote that, for simple models like AR(p), base PyMC3 is a quicker way to fit a model; there's an example here. The advantage of using statsmodels is that it gives access to methods that can solve a vast range of statespace models.\nThe model we'll solve is given by\n$$\ny_t = \\phi y_{t-1} + \\varepsilon_t + \\theta_1 \\varepsilon_{t-1}, \\qquad \\varepsilon_t \\sim N(0, \\sigma^2)\n$$\nwith 1 auto-regressive term and 1 moving average term. 
In statespace form it is written as:\n$$\n\\begin{align}\ny_t & = \\underbrace{\\begin{bmatrix} 1 & \\theta_1 \\end{bmatrix}}{Z} \\underbrace{\\begin{bmatrix} \\alpha{1,t} \\ \\alpha_{2,t} \\end{bmatrix}}{\\alpha_t} \\\n \\begin{bmatrix} \\alpha{1,t+1} \\ \\alpha_{2,t+1} \\end{bmatrix} & = \\underbrace{\\begin{bmatrix}\n \\phi & 0 \\\n 1 & 0 \\\n \\end{bmatrix}}{T} \\begin{bmatrix} \\alpha{1,t} \\ \\alpha_{2,t} \\end{bmatrix} +\n \\underbrace{\\begin{bmatrix} 1 \\ 0 \\end{bmatrix}}{R} \\underbrace{\\varepsilon{t+1}}_{\\eta_t} \\\n\\end{align}\n$$\nThe code will follow these steps:\n1. Import external dependencies\n2. Download and plot the data on US CPI\n3. Simple maximum likelihood estimation (MLE) as an example\n4. Definitions of helper functions to provide tensors to the library doing Bayesian estimation\n5. Bayesian estimation via NUTS\n6. Application to US CPI series\nFinally, Appendix A shows how to re-use the helper functions from step (4) to estimate a different state space model, UnobservedComponents, using the same Bayesian methods.\n1. Import external dependencies",
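As a quick consistency check on the statespace form above, the following plain-NumPy sketch (illustrative parameter values, not estimates) simulates both the direct ARMA(1, 1) recursion and the statespace recursion from the same shocks and verifies they produce the identical series:

```python
import numpy as np

rng = np.random.RandomState(42)
phi, theta = 0.6, 0.3          # illustrative AR and MA coefficients
n = 200
eps = rng.normal(size=n)       # eps[t] is the shock at time t

# Direct ARMA(1, 1) recursion: y_t = phi*y_{t-1} + eps_t + theta*eps_{t-1}
y_arma = np.zeros(n)
y_arma[0] = eps[0]
for t in range(1, n):
    y_arma[t] = phi * y_arma[t - 1] + eps[t] + theta * eps[t - 1]

# Statespace recursion from the matrices above:
#   alpha1_t = phi*alpha1_{t-1} + eps_t,  alpha2_t = alpha1_{t-1}
#   y_t = Z @ alpha_t = alpha1_t + theta*alpha2_t
alpha1 = np.zeros(n)
alpha2 = np.zeros(n)
alpha1[0] = eps[0]             # the state carries the current shock
y_ss = np.zeros(n)
y_ss[0] = alpha1[0]
for t in range(1, n):
    alpha1[t] = phi * alpha1[t - 1] + eps[t]
    alpha2[t] = alpha1[t - 1]
    y_ss[t] = alpha1[t] + theta * alpha2[t]

print(np.max(np.abs(y_arma - y_ss)))  # essentially zero (floating-point noise)
```

Substituting the transition equations into the observation equation gives $y_t - \phi y_{t-1} = \varepsilon_t + \theta_1 \varepsilon_{t-1}$, which is exactly the ARMA(1, 1) model, and the simulation confirms this numerically.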
"%matplotlib inline\nimport theano\nimport theano.tensor as tt\nimport pymc3 as pm\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport statsmodels.api as sm\nimport pandas as pd\nfrom pandas_datareader.data import DataReader\nfrom pandas.plotting import register_matplotlib_converters\nplt.style.use('seaborn')\nregister_matplotlib_converters()",
"2. Download and plot the data on US CPI\nWe'll get the data from FRED:",
"cpi = DataReader('CPIAUCNS', 'fred', start='1971-01', end='2018-12')\ncpi.index = pd.DatetimeIndex(cpi.index, freq='MS')\n\n# Define the inflation series that we'll use in analysis\ninf = np.log(cpi).resample('QS').mean().diff()[1:] * 400\nprint(inf.head())\n\n# Plot the series \nfig, ax = plt.subplots(figsize=(9, 4), dpi=300)\nax.plot(inf.index, inf, label=r'$\\Delta \\log CPI$', lw=2)\nax.legend(loc='lower left')\nplt.show()",
"3. Fit the model with maximum likelihood\nStatsmodels does all of the hard work of this for us - creating and fitting the model takes just two lines of code. The model order parameters correspond to auto-regressive, difference, and moving average orders respectively.",
"# Create an SARIMAX model instance - here we use it to estimate\n# the parameters via MLE using the `fit` method, but we can\n# also re-use it below for the Bayesian estimation\nmod = sm.tsa.statespace.SARIMAX(inf, order=(1, 0, 1))\n\nres_mle = mod.fit(disp=False)\nprint(res_mle.summary())",
"It's a good fit. We can also get the series of one-step ahead predictions and plot it next to the actual data, along with a confidence band.",
"predict_mle = res_mle.get_prediction()\npredict_mle_ci = predict_mle.conf_int()\nlower = predict_mle_ci['lower CPIAUCNS']\nupper = predict_mle_ci['upper CPIAUCNS']\n\n# Graph\nfig, ax = plt.subplots(figsize=(9,4), dpi=300)\n\n# Plot data points\ninf.plot(ax=ax, style='-', label='Observed')\n\n# Plot predictions\npredict_mle.predicted_mean.plot(ax=ax, style='r.', label='One-step-ahead forecast')\nax.fill_between(predict_mle_ci.index, lower, upper, color='r', alpha=0.1)\nax.legend(loc='lower left')\nplt.show()",
"4. Helper functions to provide tensors to the library doing Bayesian estimation\nWe're almost on to the magic but there are a few preliminaries. Feel free to skip this section if you're not interested in the technical details.\nTechnical Details\nPyMC3 is a Bayesian estimation library (\"Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with Theano\") that is a) fast and b) optimized for Bayesian machine learning, for instance Bayesian neural networks. To do all of this, it is built on top of a Theano, a library that aims to evaluate tensors very efficiently and provide symbolic differentiation (necessary for any kind of deep learning). It is the symbolic differentiation that means PyMC3 can use NUTS on any problem formulated within PyMC3.\nWe are not formulating a problem directly in PyMC3; we're using statsmodels to specify the statespace model and solve it with the Kalman filter. So we need to put the plumbing of statsmodels and PyMC3 together, which means wrapping the statsmodels SARIMAX model object in a Theano-flavored wrapper before passing information to PyMC3 for estimation.\nBecause of this, we can't use the Theano auto-differentiation directly. Happily, statsmodels SARIMAX objects have a method to return the Jacobian evaluated at the parameter values. We'll be making use of this to provide gradients so that we can use NUTS.\nDefining helper functions to translate models into a PyMC3 friendly form\nFirst, we'll create the Theano wrappers. They will be in the form of 'Ops', operation objects, that 'perform' particular tasks. They are initialized with a statsmodels model instance.\nAlthough this code may look somewhat opaque, it is generic for any state space model in statsmodels.",
"class Loglike(tt.Op):\n\n itypes = [tt.dvector] # expects a vector of parameter values when called\n otypes = [tt.dscalar] # outputs a single scalar value (the log likelihood)\n\n def __init__(self, model):\n self.model = model\n self.score = Score(self.model)\n\n def perform(self, node, inputs, outputs):\n theta, = inputs # contains the vector of parameters\n llf = self.model.loglike(theta)\n outputs[0][0] = np.array(llf) # output the log-likelihood\n\n def grad(self, inputs, g):\n # the method that calculates the gradients - it actually returns the\n # vector-Jacobian product - g[0] is a vector of parameter values\n theta, = inputs # our parameters\n out = [g[0] * self.score(theta)]\n return out\n\n \nclass Score(tt.Op):\n itypes = [tt.dvector]\n otypes = [tt.dvector]\n\n def __init__(self, model):\n self.model = model\n\n def perform(self, node, inputs, outputs):\n theta, = inputs\n outputs[0][0] = self.model.score(theta)",
"5. Bayesian estimation with NUTS\nThe next step is to set the parameters for the Bayesian estimation, specify our priors, and run it.",
"# Set sampling params\nndraws = 3000 # number of draws from the distribution\nnburn = 600 # number of \"burn-in points\" (which will be discarded)",
"Now for the fun part! There are three parameters to estimate: $\\phi$, $\\theta_1$, and $\\sigma$. We'll use uninformative uniform priors for the first two, and an inverse gamma for the last one. Then we'll run the inference optionally using as many computer cores as I have.",
"# Construct an instance of the Theano wrapper defined above, which\n# will allow PyMC3 to compute the likelihood and Jacobian in a way\n# that it can make use of. Here we are using the same model instance\n# created earlier for MLE analysis (we could also create a new model\n# instance if we preferred)\nloglike = Loglike(mod)\n\nwith pm.Model():\n # Priors\n arL1 = pm.Uniform('ar.L1', -0.99, 0.99)\n maL1 = pm.Uniform('ma.L1', -0.99, 0.99)\n sigma2 = pm.InverseGamma('sigma2', 2, 4)\n\n # convert variables to tensor vectors\n theta = tt.as_tensor_variable([arL1, maL1, sigma2])\n\n # use a DensityDist (use a lamdba function to \"call\" the Op)\n pm.DensityDist('likelihood', lambda v: loglike(v), observed={'v': theta})\n \n # Draw samples\n trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, cores=4)",
"Note that the NUTS sampler is auto-assigned because we provided gradients. PyMC3 will use Metropolis or Slicing samplers if it does not find that gradients are available. There are an impressive number of draws per second for a \"block box\" style computation! However, note that if the model can be represented directly by PyMC3 (like the AR(p) models mentioned above), then computation can be substantially faster.\nInference is complete, but are the results any good? There are a number of ways to check. The first is to look at the posterior distributions (with lines showing the MLE values):",
"plt.tight_layout()\n# Note: the syntax here for the lines argument is required for\n# PyMC3 versions >= 3.7\n# For version <= 3.6 you can use lines=dict(res_mle.params) instead\n_ = pm.traceplot(trace,\n lines=[(k, {}, [v]) for k, v in dict(res_mle.params).items()],\n combined=True,\n figsize=(12, 12))",
"The estimated posteriors clearly peak close to the parameters found by MLE. We can also see a summary of the estimated values:",
"pm.summary(trace)",
"Here $\\hat{R}$ is the Gelman-Rubin statistic. It tests for lack of convergence by comparing the variance between multiple chains to the variance within each chain. If convergence has been achieved, the between-chain and within-chain variances should be identical. If $\\hat{R}<1.2$ for all model parameters, we can have some confidence that convergence has been reached.\nAdditionally, the highest posterior density interval (the gap between the two values of HPD in the table) is small for each of the variables.\n6. Application of Bayesian estimates of parameters\nWe'll now re-instigate a version of the model but using the parameters from the Bayesian estimation, and again plot the one-step-ahead forecasts.",
"# Retrieve the posterior means\nparams = pm.summary(trace)['mean'].values\n\n# Construct results using these posterior means as parameter values\nres_bayes = mod.smooth(params)\n\npredict_bayes = res_bayes.get_prediction()\npredict_bayes_ci = predict_bayes.conf_int()\nlower = predict_bayes_ci['lower CPIAUCNS']\nupper = predict_bayes_ci['upper CPIAUCNS']\n\n# Graph\nfig, ax = plt.subplots(figsize=(9,4), dpi=300)\n\n# Plot data points\ninf.plot(ax=ax, style='-', label='Observed')\n\n# Plot predictions\npredict_bayes.predicted_mean.plot(ax=ax, style='r.', label='One-step-ahead forecast')\nax.fill_between(predict_bayes_ci.index, lower, upper, color='r', alpha=0.1)\nax.legend(loc='lower left')\nplt.show()",
"Appendix A. Application to UnobservedComponents models\nWe can reuse the Loglike and Score wrappers defined above to consider a different state space model. For example, we might want to model inflation as the combination of a random walk trend and autoregressive error term:\n$$\n\\begin{aligned}\ny_t & = \\mu_t + \\varepsilon_t \\\n\\mu_t & = \\mu_{t-1} + \\eta_t \\\n\\varepsilon_t &= \\phi \\varepsilon_t + \\zeta_t\n\\end{aligned}\n$$\nThis model can be constructed in Statsmodels with the UnobservedComponents class using the rwalk and autoregressive specifications. As before, we can fit the model using maximum likelihood via the fit method.",
"# Construct the model instance\nmod_uc = sm.tsa.UnobservedComponents(inf, 'rwalk', autoregressive=1)\n\n# Fit the model via maximum likelihood\nres_uc_mle = mod_uc.fit()\nprint(res_uc_mle.summary())",
"As noted earlier, the Theano wrappers (Loglike and Score) that we created above are generic, so we can re-use essentially the same code to explore the model with Bayesian methods.",
"# Set sampling params\nndraws = 3000 # number of draws from the distribution\nnburn = 600 # number of \"burn-in points\" (which will be discarded)\n\n# Here we follow the same procedure as above, but now we instantiate the\n# Theano wrapper `Loglike` with the UC model instance instead of the\n# SARIMAX model instance\nloglike_uc = Loglike(mod_uc)\n\nwith pm.Model():\n # Priors\n sigma2level = pm.InverseGamma('sigma2.level', 1, 1)\n sigma2ar = pm.InverseGamma('sigma2.ar', 1, 1)\n arL1 = pm.Uniform('ar.L1', -0.99, 0.99)\n\n # convert variables to tensor vectors\n theta_uc = tt.as_tensor_variable([sigma2level, sigma2ar, arL1])\n\n # use a DensityDist (use a lamdba function to \"call\" the Op)\n pm.DensityDist('likelihood', lambda v: loglike_uc(v), observed={'v': theta_uc})\n \n # Draw samples\n trace_uc = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, cores=4)",
"And as before we can plot the marginal posteriors. In contrast to the SARIMAX example, here the posterior modes are somewhat different from the MLE estimates.",
"plt.tight_layout()\n# Note: the syntax here for the lines argument is required for\n# PyMC3 versions >= 3.7\n# For version <= 3.6 you can use lines=dict(res_mle.params) instead\n_ = pm.traceplot(trace_uc,\n lines=[(k, {}, [v]) for k, v in dict(res_uc_mle.params).items()],\n combined=True,\n figsize=(12, 12))\n\npm.summary(trace_uc)\n\n# Retrieve the posterior means\nparams = pm.summary(trace_uc)['mean'].values\n\n# Construct results using these posterior means as parameter values\nres_uc_bayes = mod_uc.smooth(params)",
"One benefit of this model is that it gives us an estimate of the underling \"level\" of inflation, using the smoothed estimate of $\\mu_t$, which we can access as the \"level\" column in the results objects' states.smoothed attribute. In this case, because the Bayesian posterior mean of the level's variance is larger than the MLE estimate, its estimated level is a little more volatile.",
"# Graph\nfig, ax = plt.subplots(figsize=(9,4), dpi=300)\n\n# Plot data points\ninf['CPIAUCNS'].plot(ax=ax, style='-', label='Observed data')\n\n# Plot estimate of the level term\nres_uc_mle.states.smoothed['level'].plot(ax=ax, label='Smoothed level (MLE)')\nres_uc_bayes.states.smoothed['level'].plot(ax=ax, label='Smoothed level (Bayesian)')\n\nax.legend(loc='lower left');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mari-linhares/tensorflow-workshop
|
code_samples/RNN/sentiment_analysis/SentimentAnalysis-Word2Vec.ipynb
|
apache-2.0
|
[
"Dependencies",
"# Tensorflow\nimport tensorflow as tf\nprint('Tested with TensorFlow 1.2.0')\nprint('Your TensorFlow version:', tf.__version__) \n\n# Feeding function for enqueue data\nfrom tensorflow.python.estimator.inputs.queues import feeding_functions as ff\n\n# Rnn common functions\nfrom tensorflow.contrib.learn.python.learn.estimators import rnn_common\n\n# Model builder\nfrom tensorflow.python.estimator import model_fn as model_fn_lib\n\n# Run an experiment\nfrom tensorflow.contrib.learn.python.learn import learn_runner\n\n# Helpers for data processing\nimport pandas as pd\nimport numpy as np\nimport argparse\nimport random",
"Loading Data\nFirst, we want to create our word vectors. For simplicity, we're going to be using a pretrained model. \nAs one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300. \nIn an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50. \nWe're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.",
"# data from: http://ai.stanford.edu/~amaas/data/sentiment/\nTRAIN_INPUT = 'data/train.csv'\nTEST_INPUT = 'data/test.csv'\n\n# data manually generated\nMY_TEST_INPUT = 'data/mytest.csv'\n\n# wordtovec\n# https://nlp.stanford.edu/projects/glove/\n# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.\nword_list = np.load('word_list.npy')\nword_list = word_list.tolist() # originally loaded as numpy array\nword_list = [word.decode('UTF-8') for word in word_list] # encode words as UTF-8\nprint('Loaded the word list, length:', len(word_list))\n\nword_vector = np.load('word_vector.npy')\nprint ('Loaded the word vector, shape:', word_vector.shape)",
"We can search our word list for a word like \"baseball\", and then access its corresponding vector through the embedding matrix.",
"baseball_index = word_list.index('baseball')\nprint('Example: baseball')\nprint(word_vector[baseball_index])",
"Now that we have our vectors, our first step is taking an input sentence and then constructing its vector representation. Let's say that we have the input sentence \"I thought the movie was incredible and inspiring\". In order to get the word vectors, we can use TensorFlow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the word_vector matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. Let's look at a quick example to make this concrete.",
"max_seq_length = 10 # maximum length of sentence\nnum_dims = 50 # dimensions for each word vector\n\nfirst_sentence = np.zeros((max_seq_length), dtype='int32')\nfirst_sentence[0] = word_list.index(\"i\")\nfirst_sentence[1] = word_list.index(\"thought\")\nfirst_sentence[2] = word_list.index(\"the\")\nfirst_sentence[3] = word_list.index(\"movie\")\nfirst_sentence[4] = word_list.index(\"was\")\nfirst_sentence[5] = word_list.index(\"incredible\")\nfirst_sentence[6] = word_list.index(\"and\")\nfirst_sentence[7] = word_list.index(\"inspiring\")\n# first_sentence[8] = 0\n# first_sentence[9] = 0\n\nprint(first_sentence.shape)\nprint(first_sentence) # shows the row index for each word",
"TODO: insert image here\nThe 10 x 50 output should contain the 50-dimensional word vectors for each of the 10 words in the sequence.",
"with tf.Session() as sess:\n print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape)",
"Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have. \nThe training set we're going to use is the IMDB movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine the total and the average number of words per review.",
"from os import listdir\nfrom os.path import isfile, join\npositiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]\nnegativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]\nnumWords = []\nfor pf in positiveFiles:\n with open(pf, \"r\", encoding='utf-8') as f:\n line=f.readline()\n counter = len(line.split())\n numWords.append(counter) \nprint('Positive files finished')\n\nfor nf in negativeFiles:\n with open(nf, \"r\", encoding='utf-8') as f:\n line=f.readline()\n counter = len(line.split())\n numWords.append(counter) \nprint('Negative files finished')\n\nnumFiles = len(numWords)\nprint('The total number of files is', numFiles)\nprint('The total number of words in the files is', sum(numWords))\nprint('The average number of words in the files is', sum(numWords)/len(numWords))",
"We can also use the Matplot library to visualize this data in a histogram format.",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.hist(numWords, 50)\nplt.xlabel('Sequence Length')\nplt.ylabel('Frequency')\nplt.axis([0, 1200, 0, 8000])\nplt.show()",
"From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.",
"max_seq_len = 250",
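The actual ids matrix is loaded precomputed below (`ids_matrix.npy`), but the padding/truncation step it encodes can be sketched in plain NumPy. Everything here is illustrative: `review_to_ids` and the toy `vocab` are hypothetical stand-ins for lookups into the real 400,000-word list.

```python
import numpy as np

def review_to_ids(tokens, vocab, max_len=250, unk_id=0):
    """Map tokens to vocabulary indices, truncating or zero-padding to max_len."""
    ids = np.zeros(max_len, dtype=np.int32)
    for i, tok in enumerate(tokens[:max_len]):   # truncate long reviews
        ids[i] = vocab.get(tok, unk_id)          # unknown words fall back to unk_id
    return ids

# toy stand-in for the GloVe word list
vocab = {'the': 1, 'movie': 2, 'was': 3, 'great': 4}
ids = review_to_ids(['the', 'movie', 'was', 'great'], vocab, max_len=6)
print(ids.tolist())  # -> [1, 2, 3, 4, 0, 0]
```

The fixed length is what lets every review occupy one row of a rectangular ids matrix, which is exactly the shape `tf.nn.embedding_lookup` expects.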
"Data",
"ids_matrix = np.load('ids_matrix.npy').tolist()",
"Parameters",
"# Parameters for training\nSTEPS = 15000\nBATCH_SIZE = 32\n\n# Parameters for data processing\nREVIEW_KEY = 'review'\nSEQUENCE_LENGTH_KEY = 'sequence_length'",
"Separating train and test data\nThe training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. \nLet's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.",
"POSITIVE_REVIEWS = 12500\n\n# copying sequences\ndata_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]\n# generating labels\ndata_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]\n# also creating a length column, this will be used by the Dynamic RNN\n# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\ndata_length = [max_seq_len for i in range(len(ids_matrix))]",
"Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.",
"data = list(zip(data_sequences, data_labels, data_length))\nrandom.shuffle(data) # shuffle\n\ndata = np.asarray(data)\n# separating train and test data\nlimit = int(len(data) * 0.9)\n\ntrain_data = data[:limit]\ntest_data = data[limit:]",
"Verifying if the train and test data have enough positive and negative examples",
"LABEL_INDEX = 1\ndef _number_of_pos_labels(df):\n pos_labels = 0\n for value in df:\n if value[LABEL_INDEX] == [1, 0]:\n pos_labels += 1\n return pos_labels\n\npos_labels_train = _number_of_pos_labels(train_data)\ntotal_labels_train = len(train_data)\n\npos_labels_test = _number_of_pos_labels(test_data)\ntotal_labels_test = len(test_data)\n\nprint('Total number of positive labels:', pos_labels_train + pos_labels_test)\nprint('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)\nprint('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test)",
"Input functions",
"def get_input_fn(df, batch_size, num_epochs=1, shuffle=True): \n def input_fn():\n \n sequences = np.asarray([v for v in df[:,0]], dtype=np.int32)\n labels = np.asarray([v for v in df[:,1]], dtype=np.int32)\n length = np.asarray(df[:,2], dtype=np.int32)\n\n # https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data\n dataset = tf.contrib.data.Dataset.from_tensor_slices((sequences, labels, length)) # reading data from memory\n\n # shuffle before batching so individual examples (not whole batches) get mixed;\n # for our \"manual\" test we don't want to shuffle the data\n if shuffle:\n dataset = dataset.shuffle(buffer_size=100000)\n\n dataset = (\n dataset\n .repeat(num_epochs) # repeat dataset the number of epochs\n .batch(batch_size)\n )\n\n # create iterator\n review, label, length = dataset.make_one_shot_iterator().get_next()\n\n features = {\n REVIEW_KEY: review,\n SEQUENCE_LENGTH_KEY: length,\n }\n\n return features, label\n return input_fn\n\nfeatures, label = get_input_fn(test_data, 2, shuffle=False)()\n\nwith tf.Session() as sess:\n items = sess.run(features)\n print(items[REVIEW_KEY])\n\n print(sess.run(label))\n\n\ntrain_input_fn = get_input_fn(train_data, BATCH_SIZE, None)\ntest_input_fn = get_input_fn(test_data, BATCH_SIZE)",
"Creating the Estimator model",
"def get_model_fn(rnn_cell_sizes,\n label_dimension,\n dnn_layer_sizes=[],\n optimizer='SGD',\n learning_rate=0.01,\n embed_dim=128):\n \n def model_fn(features, labels, mode):\n \n review = features[REVIEW_KEY]\n sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)\n\n # Look up the pretrained word vectors for each id in the review\n data = tf.nn.embedding_lookup(word_vector, review)\n \n # Each RNN layer will consist of a LSTM cell\n rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]\n \n # Construct the layers\n multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)\n \n # Runs the RNN model dynamically\n # more about it at: \n # https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\n outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,\n inputs=data,\n dtype=tf.float32)\n\n # Slice to keep only the last cell of the RNN\n last_activations = rnn_common.select_last_activations(outputs, sequence_length)\n\n # Construct dense layers on top of the last cell of the RNN\n for units in dnn_layer_sizes:\n last_activations = tf.layers.dense(\n last_activations, units, activation=tf.nn.relu)\n \n # Final dense layer for prediction\n predictions = tf.layers.dense(last_activations, label_dimension)\n predictions_softmax = tf.nn.softmax(predictions)\n \n loss = None\n train_op = None\n eval_op = None\n \n if mode == tf.estimator.ModeKeys.EVAL:\n eval_op = {\n \"accuracy\": tf.metrics.accuracy(\n tf.argmax(input=predictions_softmax, axis=1),\n tf.argmax(input=labels, axis=1))\n }\n \n if mode != tf.estimator.ModeKeys.PREDICT: \n loss = tf.losses.softmax_cross_entropy(labels, predictions)\n \n if mode == tf.estimator.ModeKeys.TRAIN: \n train_op = tf.contrib.layers.optimize_loss(\n loss,\n tf.contrib.framework.get_global_step(),\n optimizer=optimizer,\n learning_rate=learning_rate)\n \n return model_fn_lib.EstimatorSpec(mode,\n predictions=predictions_softmax,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=eval_op)\n return model_fn\n\nmodel_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers\n label_dimension=2, # since there are just 2 classes\n dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN\n optimizer='Adam',\n learning_rate=0.001,\n embed_dim=512)",
"Create and Run Experiment",
"# create experiment\ndef generate_experiment_fn():\n \n \"\"\"\n Create an experiment function given hyperparameters.\n Returns:\n A function (output_dir) -> Experiment where output_dir is a string\n representing the location of summaries, checkpoints, and exports.\n this function is used by learn_runner to create an Experiment which\n executes model code provided in the form of an Estimator and\n input functions.\n All listed arguments in the outer function are used to create an\n Estimator, and input functions (training, evaluation, serving).\n Unlisted args are passed through to Experiment.\n \"\"\"\n\n def _experiment_fn(run_config, hparams):\n estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)\n return tf.contrib.learn.Experiment(\n estimator,\n train_input_fn=train_input_fn,\n eval_input_fn=test_input_fn,\n train_steps=STEPS\n )\n return _experiment_fn\n\n# run experiment \nlearn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2'))",
"Making Predictions\nFirst let's generate our own sentences to see how the model classifies them.",
"def string_to_array(s, separator=' '):\n return s.split(separator)\n\ndef generate_data_row(sentence, label, max_length):\n sequence = np.zeros((max_length), dtype='int32')\n for i, word in enumerate(string_to_array(sentence)):\n sequence[i] = word_list.index(word)\n \n return sequence, label, max_length\n \ndef generate_data(sentences, labels, max_length):\n data = []\n for s, l in zip(sentences, labels):\n data.append(generate_data_row(s, l, max_length))\n \n return np.asarray(data)\n\n\nsentences = ['i thought the movie was incredible and inspiring', \n 'this is a great movie',\n 'this is a good movie but isnt the best',\n 'it was fine i guess',\n 'it was definitely bad',\n 'its not that bad',\n 'its not that bad i think its a good movie',\n 'its not bad i think its a good movie']\n\nlabels = [[1, 0],\n [1, 0],\n [1, 0],\n [0, 1],\n [0, 1],\n [1, 0],\n [1, 0],\n [1, 0]] # [1, 0]: positive, [0, 1]: negative\n\nmy_test_data = generate_data(sentences, labels, 10)",
"Now, let's generate predictions for the sentences",
"# the experiment above built its estimator internally; rebuild one pointing\n# at the same model_dir so predict() loads the trained checkpoint\nestimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='testing2')\n\npreds = estimator.predict(input_fn=get_input_fn(my_test_data, 1, 1, shuffle=False))\n\nprint()\nfor p, s in zip(preds, sentences):\n print('sentence:', s)\n print('good review:', p[0], 'bad review:', p[1])\n print('-' * 10)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
RoebideBruijn/datascience-intensive-course
|
project/notebooks/lending-club-exploration.ipynb
|
mit
|
[
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport matplotlib.dates as mdates\nimport seaborn as sns\n\nsns.set_style('white')",
"Selecting only closed loans",
"# 887,379 loans in total\nloans = pd.read_csv('../data/loan.csv')\nloans['grade'] = loans['grade'].astype('category', ordered=True)\nloans['last_pymnt_d'] = pd.to_datetime(loans['last_pymnt_d'])#.dt.strftime(\"%Y-%m-%d\")\nloans.shape\n\nloans['loan_status'].unique()\n\n# most loans are current\nsns.countplot(loans['loan_status'], color='turquoise')\nplt.xticks(rotation=90)\nplt.savefig('../figures/barplot_loan_statusses.jpg', bbox_inches='tight')\n\n# exclude current loans leaves 256,939 (about 30%)\nclosed_status = ['Fully Paid', 'Charged Off',\n 'Does not meet the credit policy. Status:Fully Paid',\n 'Does not meet the credit policy. Status:Charged Off']\nclosed_loans = loans[loans['loan_status'].isin(closed_status)]\nclosed_loans.shape\n\nsns.countplot(closed_loans['loan_status'], color='turquoise')\nplt.xticks(rotation=90)\nplt.savefig('../figures/barplot_loan_statusses_closed.jpg', bbox_inches='tight')\n\n# two categories: paid/unpaid\npaid_status = ['Fully Paid', 'Does not meet the credit policy. Status:Fully Paid']\nclosed_loans['paid'] = [True if loan in paid_status else False for loan in closed_loans['loan_status']]\nsns.countplot(closed_loans['paid'])\nplt.xticks(rotation=90)",
"Investigating closed loans\nfeatures summary\nTotal loans: 256,939\nTotal features: 74\nLoan\n- id: loan\n- loan_amnt: in 1914 cases the loan amount is bigger than the funded amount\n- funded_amnt\n- funded_amnt_inv \n- term: 36 or 60 months\n- int_rate: interest rates\n- installment: monthly payment amount\n- grade: A-G, A low risk, G high risk\n- sub_grade\n- issue_d: month-year loan was funded\n- loan_status\n- pymnt_plan: n/y\n- url\n- desc: description provided by borrower\n- purpose: 'credit_card', 'car', 'small_business', 'other', 'wedding', 'debt_consolidation', 'home_improvement', 'major_purchase', 'medical', 'moving', 'vacation', 'house', 'renewable_energy','educational'\n- title: provided by borrower\n- initial_list_status: w/f (what is this?)\n- out_prncp: outstanding principal --> still >0 in fully paid?!\n- out_prncp_inv\n- total_pymnt\n- total_pymnt_inv\n- total_rec_prncp\n- total_rec_int: total received interest\n- total_rec_late_fee\n- recoveries: post charged off gross recovery\n- collection_recovery_fee: post charged off collection fee\n- last_pymnt_d\n- last_pymnt_amnt\n- next_pymnt_d\n- collections_12_mths_ex_med: almost all 0\n- policy_code: 1 publicly available, 2 not\n- application_type (only 1 JOINT, rest INDIVIDUAL)\nBorrower\n- emp_title\n- emp_length: 0-10 (10 stands for >=10)\n- home_ownership: 'RENT', 'OWN', 'MORTGAGE', 'OTHER', 'NONE', 'ANY'\n- member_id: person\n- annual_inc (stated by borrower)\n- verification_status: 'Verified', 'Source Verified', 'Not Verified' (income verified by LC?)\n- zip_code\n- addr_state\n- dti: debt to income (without mortgage)\n- delinq_2yrs: The number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years\n- mths_since_last_delinq\n- mths_since_last_record\n- pub_rec\n- earliest_cr_line\n- inq_last_6mths \n- open_acc (nr of open credit lines)\n- total_acc (nr of total credit lines in credit file)\n- revol_bal\n- last_credit_pull_d\n- mths_since_last_major_derog: Months 
since most recent 90-day or worse rating\n- acc_now_delinq: The number of accounts on which the borrower is now delinquent.\n- tot_coll_amt: Total collection amounts ever owed\n- tot_cur_bal: Total current balance of all accounts\n- open_acc_6m: Number of open trades in last 6 months\n- open_il_6m: Number of currently active installment trades\n- open_il_12m: Number of installment accounts opened in past 12 months\n- open_il_24m\n- mths_since_rcnt_il: Months since most recent installment accounts opened\n- total_bal_il: Total current balance of all installment accounts\n- il_util: Ratio of total current balance to high credit/credit limit on all install acct\n- open_rv_12m: Number of revolving trades opened in past 12 months\n- open_rv_24m\n- max_bal_bc: Maximum current balance owed on all revolving accounts\n- all_util: Balance to credit limit on all trades\n- total_rev_hi_lim: Total revolving high credit/credit limit\n- inq_fi: Number of personal finance inquiries\n- total_cu_tl: Number of finance trades\n- inq_last_12m: Number of credit inquiries in past 12 months\nTwo borrowers (only in 1 case)\n- annual_inc_joint\n- dti_joint\n- verification_status_joint\nDifference between default and charged off\nIn general, a note goes into Default status when it is 121 or more days past due. When a note is in Default status, Charge Off occurs no later than 150 days past due (i.e. No later than 30 days after the Default status is reached) when there is no reasonable expectation of sufficient payment to prevent the charge off. However, bankruptcies may be charged off earlier based on date of bankruptcy notification.\n--> so default is not closed yet (so threw that one out).",
"# 1914 loans amounts bigger than funded amount\nsum(closed_loans['loan_amnt'] != closed_loans['funded_amnt'])\n\n# nr of null values per feature\nnr_nulls = closed_loans.isnull().apply(sum, 0)\nnr_nulls = nr_nulls[nr_nulls != 0]\nratio_missing = nr_nulls.sort_values(ascending=False) / 255720\nratio_missing.to_csv('../data/missing_ratio.txt', sep='\\t')\nratio_missing\n\n\nsns.distplot(closed_loans['funded_amnt'], kde=False, bins=50)\nplt.savefig('../figures/funded_amount.jpg')\n\n# closed loans about 20% are 60 months\n# all loans lot of missing data, rest 30% are 60 months\nsns.countplot(closed_loans['term'], color='darkblue')\nplt.title('closed')\nplt.savefig('../figures/term_closed.jpg')\nplt.show()\nsns.countplot(loans['term'])\nplt.title('all')",
"TODO: interest questions",
"# higher interest rate more interesting for lenders\n# higher grade gets higher interest rate (more risk)\n# does it default more often?\n# do you get richer from investing in grade A-C (less default?) or from D-G (more interest)?\nfig = sns.distplot(closed_loans['int_rate'], kde=False, bins=50)\nfig.set(xlim=(0, None))\nplt.savefig('../figures/int_rates.jpg')\n\nsns.boxplot(data=closed_loans, x='grade', y='int_rate', color='turquoise')\nplt.savefig('../figures/boxplots_intrate_grade.jpg')\n\nsns.stripplot(data=closed_loans, x='grade', y='int_rate', color='gray')\n\n# closed_loans['collection_recovery_fee']\nclosed_loans['profit'] = (closed_loans['total_rec_int'] + closed_loans['total_rec_prncp'] \n + closed_loans['total_rec_late_fee'] + closed_loans['recoveries']) - closed_loans['funded_amnt'] \nprofits = closed_loans.groupby('grade')['profit'].sum()\nsns.barplot(data=profits.reset_index(), x='grade', y='profit', color='gray')\nplt.savefig('../figures/profit_grades.jpg')\nplt.show()\nprofits = closed_loans.groupby('paid')['profit'].sum()\nsns.barplot(data=profits.reset_index(), x='paid', y='profit')\nplt.show()\nprofits = closed_loans.groupby(['grade', 'paid'])['profit'].sum()\nsns.barplot(data=profits.reset_index(), x='profit', y='grade', hue='paid', orient='h')\nplt.savefig('../figures/profit_grades_paid.jpg')\nplt.show()\n\n# Sort of normally distributed --> statistically test whether means are different?\nsns.distplot(closed_loans[closed_loans['paid']==True]['int_rate'])\nsns.distplot(closed_loans[closed_loans['paid']==False]['int_rate'])\nplt.savefig('../figures/int_rate_paid.jpg')\n\ngrade_paid = closed_loans.groupby(['grade', 'paid'])['id'].count()\nrisk_grades = dict.fromkeys(closed_loans['grade'].unique())\nfor g in risk_grades.keys():\n risk_grades[g] = grade_paid.loc[(g, False)] / (grade_paid.loc[(g, False)] + grade_paid.loc[(g, True)])\nrisk_grades = pd.DataFrame(risk_grades, index=['proportion_unpaid_loans']) \nsns.stripplot(data=risk_grades, 
color='darkgray', size=15)\nplt.savefig('../figures/proportion_grades.jpg')\n\n# does the purpose matter for the chance of charged off?\nsns.countplot(closed_loans['purpose'], color='turquoise')\nplt.xticks(rotation=90)\nplt.show()\npurpose_paid = closed_loans.groupby(['purpose', 'paid'])['id'].count()\nsns.barplot(data=pd.DataFrame(purpose_paid).reset_index(), x='purpose', y='id', hue='paid')\nplt.xticks(rotation=90)\nplt.savefig('../figures/purposes.jpg', bbox_inches='tight')\n\n# debt to income\nsns.boxplot(data=closed_loans, x='paid', y='dti')\nplt.savefig('../figures/dti.jpg')",
"Investigate whether the two weird 'does not meet' categories should stay in there: are they really closed?\nNext payment day is not NaN in the 'does not meet' categories.\nOutstanding principal is all 0 (so not active anymore).\nIndeed seems like older loans.\n--> seems they are in fact closed, so leave them in",
"sns.countplot(closed_loans[closed_loans['next_pymnt_d'].notnull()]['loan_status'])\nplt.xticks(rotation=90)\nplt.savefig('../figures/last_payment_day.jpg', bbox_inches='tight')\nplt.show()\nprint(closed_loans['loan_status'].value_counts())\nnew_loans = ['Fully Paid', 'Charged Off']\nsns.countplot(data=closed_loans[~closed_loans['loan_status'].isin(new_loans)], x='last_pymnt_d', hue='loan_status')\nplt.xticks([])\nplt.savefig('../figures/last_payment_day_old.jpg')\nplt.show()\nsns.countplot(data=closed_loans[closed_loans['loan_status'].isin(new_loans)], x='last_pymnt_d', hue='loan_status')\nplt.xticks([])\nplt.savefig('../figures/last_payment_day_new.jpg')\nplt.show()\nclosed_loans['out_prncp'].value_counts()",
"something weird with policy 2?\nhttp://www.lendacademy.com/forum/index.php?topic=2427.msg20813#msg20813\nOnly policy 1 loans in this case, so no problem.",
"closed_loans['policy_code'].value_counts()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AtmaMani/pyChakras
|
udemy_ml_bootcamp/Python-for-Data-Analysis/Pandas/DataFrames.ipynb
|
mit
|
[
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nDataFrames\nDataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together to share the same index. Let's use pandas to explore this topic!",
"import pandas as pd\nimport numpy as np\n\nfrom numpy.random import randn\nnp.random.seed(101)\n\ndf = pd.DataFrame(randn(5,4),index='A B C D E'.split(),columns='W X Y Z'.split())\n\ndf",
"Selection and Indexing\nLet's learn the various methods to grab data from a DataFrame",
"df['W']\n\n# Pass a list of column names\ndf[['W','Z']]\n\n# SQL Syntax (NOT RECOMMENDED!)\ndf.W",
"DataFrame Columns are just Series",
"type(df['W'])",
"Creating a new column:",
"df['new'] = df['W'] + df['Y']\n\ndf",
"Removing Columns",
"df.drop('new',axis=1)\n\n# Not inplace unless specified!\ndf\n\ndf.drop('new',axis=1,inplace=True)\n\ndf",
"Can also drop rows this way:",
"df.drop('E',axis=0)",
"Selecting Rows",
"df.loc['A']",
"Or select based off of position instead of label",
"df.iloc[2]",
"Selecting subset of rows and columns",
"df.loc['B','Y']\n\ndf.loc[['A','B'],['W','Y']]",
"Conditional Selection\nAn important feature of pandas is conditional selection using bracket notation, very similar to numpy:",
"df\n\ndf>0\n\ndf[df>0]\n\ndf[df['W']>0]\n\ndf[df['W']>0]['Y']\n\ndf[df['W']>0][['Y','X']]",
"For two conditions you can use | and & with parenthesis:",
"df[(df['W']>0) & (df['Y'] > 1)]",
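As a side note (not part of the original lesson), the same two-condition filter can also be written with `DataFrame.query`, which some find more readable. This sketch rebuilds the seeded frame from the top of the notebook so the two selection styles can be compared directly:

```python
import numpy as np
import pandas as pd
from numpy.random import randn

np.random.seed(101)
df = pd.DataFrame(randn(5, 4), index='A B C D E'.split(), columns='W X Y Z'.split())

mask_sel = df[(df['W'] > 0) & (df['Y'] > 1)]   # bracket / boolean-mask form
query_sel = df.query('W > 0 and Y > 1')        # query-string form, same rows
print(query_sel)
```

`query` evaluates the string against the column names, so `and`/`or` replace the `&`/`|` operators and the parentheses they require.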
"More Index Details\nLet's discuss some more features of indexing, including resetting the index or setting it something else. We'll also talk about index hierarchy!",
"df\n\n# Reset to default 0,1...n index\ndf.reset_index()\n\nnewind = 'CA NY WY OR CO'.split()\n\ndf['States'] = newind\n\ndf\n\ndf.set_index('States')\n\ndf\n\ndf.set_index('States',inplace=True)\n\ndf",
"Multi-Index and Index Hierarchy\nLet us go over how to work with Multi-Index, first we'll create a quick example of what a Multi-Indexed DataFrame would look like:",
"# Index Levels\noutside = ['G1','G1','G1','G2','G2','G2']\ninside = [1,2,3,1,2,3]\nhier_index = list(zip(outside,inside))\nhier_index = pd.MultiIndex.from_tuples(hier_index)\n\nhier_index\n\ndf = pd.DataFrame(np.random.randn(6,2),index=hier_index,columns=['A','B'])\ndf",
"Now let's show how to index this! For index hierarchy we use df.loc[], if this was on the columns axis, you would just use normal bracket notation df[]. Calling one level of the index returns the sub-dataframe:",
"df.loc['G1']\n\ndf.loc['G1'].loc[1]\n\ndf.index.names\n\ndf.index.names = ['Group','Num']\n\ndf\n\ndf.xs('G1')\n\ndf.xs(['G1',1])\n\ndf.xs(1,level='Num')",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sraejones/phys202-2015-work
|
assignments/assignment05/InteractEx03.ipynb
|
mit
|
[
"Interact Exercise 3\nImports",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display",
"Using interact for animation with data\nA soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:\n$$\n\\phi(x,t) = \\frac{1}{2} c \\mathrm{sech}^2 \\left[ \\frac{\\sqrt{c}}{2} \\left(x - ct - a \\right) \\right]\n$$\nThe constant c is the velocity and the constant a is the initial location of the soliton.\nDefine a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t are NumPy arrays, in which case it should return a NumPy array itself.",
"def soliton(x, t, c, a):\n \"\"\"Return phi(x, t) for a soliton wave with constants c and a.\"\"\"\n # sech(z) = 1/cosh(z); the argument uses sqrt(c)/2 per the formula above\n phi = 0.5 * c * (1 / np.cosh((np.sqrt(c) / 2) * (x - c*t - a))**2)\n return phi\n\nassert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))",
"To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:",
"tmin = 0.0\ntmax = 10.0\ntpoints = 100\nt = np.linspace(tmin, tmax, tpoints)\n\nxmin = 0.0\nxmax = 10.0\nxpoints = 200\nx = np.linspace(xmin, xmax, xpoints)\n\nc = 1.0\na = 0.0",
"Compute a 2d NumPy array called phi:\n\nIt should have a dtype of float.\nIt should have a shape of (xpoints, tpoints).\nphi[i,j] should contain the value $\\phi(x[i],t[j])$.",
"# YOUR CODE HERE\nphi = np.zeros([xpoints, tpoints], dtype=float)\n# iterate over indices, not array values, so phi[i,j] lines up with x[i], t[j]\nfor i in range(xpoints):\n for j in range(tpoints):\n phi[i,j] = soliton(x[i], t[j], c, a)\n\nassert phi.shape==(xpoints, tpoints)\nassert phi.ndim==2\nassert phi.dtype==np.dtype(float)\nassert phi[0,0]==soliton(x[0],t[0],c,a)",
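An alternative to the double loop (not required by the assignment) is to let NumPy broadcasting fill the whole array in one shot. This sketch re-implements the sech² formula from the markdown above on its own arrays, purely as an illustration:

```python
import numpy as np

def soliton(x, t, c, a):
    # sech(z) = 1/cosh(z), so phi = (c/2) * sech^2(sqrt(c)/2 * (x - c*t - a))
    return 0.5 * c / np.cosh((np.sqrt(c) / 2) * (x - c * t - a))**2

x = np.linspace(0.0, 10.0, 200)
t = np.linspace(0.0, 10.0, 100)
c, a = 1.0, 0.0

# x as a (200, 1) column broadcast against t as a (100,) row -> (200, 100)
phi = soliton(x[:, np.newaxis], t[np.newaxis, :], c, a)
print(phi.shape)  # -> (200, 100)
```

Broadcasting avoids the Python-level loops entirely, which matters once the grids get large.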
"Write a plot_soliton_data(i) function that plots the soliton wave $\\phi(x, t[i])$. Customize your plot to make it effective and beautiful.",
"def plot_soliton_data(i=0):\n \"\"\"Plot the soliton data at t[i] versus x.\"\"\"\n # YOUR CODE HERE\n plt.plot(x, soliton(x, t[i], c, a))\n plt.xlabel('x')\n plt.ylabel('phi(x, t[i])')\n plt.xlim(0, 10)\n plt.ylim(0, 0.55)\n plt.box(False)\n plt.title('Soliton wave at t[{}]'.format(i))\n\nplot_soliton_data(0)\n\nassert True # leave this for grading the plot_soliton_data function",
"Use interact to animate the plot_soliton_data function versus time.",
"# YOUR CODE HERE\ninteractive(plot_soliton_data,i=(0,99,1))\n\nassert True # leave this for grading the interact with plot_soliton_data cell"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adieuadieu/educathingamajigs
|
udacity/dlnd/p1-your-first-network/dlnd-your-first-neural-network.ipynb
|
unlicense
|
[
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
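To see why the scaling factors are saved, here is a minimal sketch (with made-up numbers, not the ride data) of standardizing a column and then undoing it. Note that pandas' `.std()` uses the sample standard deviation (`ddof=1`), which the NumPy call below mirrors:

```python
import numpy as np

# toy stand-in for one continuous column such as 'cnt'
cnt = np.array([10.0, 20.0, 30.0, 40.0])
mean, std = cnt.mean(), cnt.std(ddof=1)  # ddof=1 matches pandas' default .std()

scaled = (cnt - mean) / std       # forward: zero mean, unit standard deviation
restored = scaled * std + mean    # backward: recover the original units

print(restored)  # -> [10. 20. 30. 40.]
```

The `scaled_features` dictionary in the cell above plays exactly the role of `(mean, std)` here, one pair per column, so network predictions can be mapped back to real ride counts.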
"Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
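To make the forward pass concrete before implementing it, here is a minimal sketch of a 3-2-1 case using the same toy weights that the unit tests at the bottom of this notebook use (sigmoid hidden layer, identity output):

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))  # hidden-layer activation

x = np.array([[0.5], [-0.2], [0.1]])      # one record as a column vector
W_ih = np.array([[ 0.1, 0.4, -0.3],       # hidden x input weights
                 [-0.2, 0.5,  0.2]])
W_ho = np.array([[0.3, -0.1]])            # output x hidden weights

hidden_out = sigmoid(W_ih @ x)            # sigmoid on the hidden layer
final_out = W_ho @ hidden_out             # f(x) = x on the output node

print(final_out[0, 0])                    # ~0.0999892
```

Because the output activation is the identity, its derivative is 1, which is why the backpropagation step can use the output error directly.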
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n #### Set this to your implemented sigmoid function ####\n # Activation function is the sigmoid function\n \n # https://en.wikipedia.org/wiki/Activation_function\n sigmoid = lambda x: 1 / (1 + np.exp(-x))\n tanh = lambda x: (2 / (1 + np.exp(-2 * x))) - 1\n gaussian = lambda x: np.exp(-x**2)\n sinusoid = lambda x: np.sin(x)\n \n self.activation_function = sigmoid\n \n\n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T\n \n \"\"\"\n NOTES: \n \n borrows from\n https://discussions.udacity.com/t/im-completely-stuck-and-confused/215739/5\n and\n https://discussions.udacity.com/t/having-a-hard-time-implementing-the-backprop/215435/11\n \n omission of division by records clarified by:\n https://discussions.udacity.com/t/why-in-project-1-we-dont-divide-the-record-num/216716\n \"\"\"\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs)\n \n\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n output_errors = targets - final_outputs # 
Output layer error is the difference between desired target and actual output.\n \n # Backpropagated error\n hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer\n \n hidden_gradient = hidden_outputs * (1 - hidden_outputs) # sigmoid derivative\n # hidden_gradient = 1 - (hidden_outputs ** 2) # tanh derivative\n # hidden_gradient = -2 * hidden_outputs * np.exp(-hidden_outputs ** 2) # gaussian derivative\n # hidden_gradient = np.cos(hidden_outputs) # sinusoid derivative\n \n # Update the weights\n\n delta_hidden_output = output_errors * hidden_outputs.T\n delta_hidden_input = hidden_errors * hidden_gradient\n \n self.weights_hidden_to_output += self.lr * delta_hidden_output # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * np.dot(delta_hidden_input, inputs.T) # update input-to-hidden weights with gradient descent step\n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.",
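The sampling side of SGD described above can be sketched on its own; the names here (`n_records`, the commented-out update) are illustrative placeholders, not the project code:

```python
import numpy as np

rng = np.random.RandomState(21)
n_records, batch_size = 1000, 128

batches = []
for epoch in range(5):
    # grab a random batch of record indices, as the training loop below does
    batch = rng.choice(n_records, size=batch_size)
    batches.append(batch)
    # network.train(features[batch], targets[batch])  # placeholder update step
```

Each pass touches only 128 of the 1000 records, which is why many more (but much cheaper) passes are used than with full-batch gradient descent.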
"import sys\n\n### Set the hyperparameters here ###\n\n\"\"\"\nactivation | epochs | learnRate | hidden | output | Training loss | Validation loss | note\n---------------------------------------------------------------------------------------------------------------\nsigmoid | 3000 | 0.005 | 21 | 1 | 0.079 | 0.167\ntanh | 3000 | 0.005 | 21 | 1 | 0.063 | 0.152\nsigmoid | 3000 | 0.005 | 28 | 1 | 0.076 | 0.182\nsigmoid | 3000 | 0.01 | 28 | 1 | 0.063 | 0.236\nsigmoid | 3000 | 0.005 | 42 | 1 | 0.084 | 0.184\nsigmoid | 3000 | 0.005 | 56 | 1 | 0.066 | 0.144\nsigmoid | 3000 | 0.005 | 70 | 1 | 0.074 | 0.143\nsigmoid | 3000 | 0.005 | 56 | 56 | 0.047 | 0.156\ntanh | 3000 | 0.005 | 56 | 56 | 0.044 | 0.144\nsigmoid | 3000 | 0.005 | 56 | 112 | 0.043 | 0.202\ntanh | 3000 | 0.005 | 56 | 112 | 0.063 | 0.152\n\nsigmoid | 1000 | 0.005 | 59 | 118 | 0.047 | 0.145\nsigmoid | 3000 | 0.005 | 59 | 118 | 0.039 | 0.164\ntanh | 3000 | 0.005 | 59 | 118 | 0.043 | 0.163\nsigmoid | 1500 | 0.01 | 59 | 118 | 0.041 | 0.132\nsigmoid | 2000 | 0.01 | 59 | 118 | 0.039 | 0.134\nsigmoid | 3000 | 0.01 | 59 | 118 | 0.037 | 0.137\nsigmoid | 10000 | 0.01 | 59 | 118 | 0.029 | 0.175 overfit?\nsigmoid | 3000 | 0.05 | 59 | 118 | 0.041 | 0.147\n\nsigmoid | 3000 | 0.01 | 118 | 118 | 0.036 | 0.132\nsigmoid | 3000 | 0.01 | 177 | 177 | 0.035 | 0.161\nsigmoid | 3000 | 0.1 | 177 | 177 | nan | nan :-(\nsigmoid | 3000 | 0.005 | 177 | 177 | 0.037 | 0.152 \n\nsigmoid | 3000 | 0.1 | 56 | 56 | 0.048 | 0.163\n\n\"\"\"\n\nepochs = 3000\nlearning_rate = 0.01\nhidden_nodes = 28\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n \n # https://discussions.udacity.com/t/learning-rate-hyperparameter/216978/8?u=marco-611\n #if e %(epochs / 15) == 0:\n # learning_rate /= 2\n \n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, 
size=128)\n    for record, target in zip(train_features.loc[batch].values, \n                              train_targets.loc[batch]['cnt']):\n        network.train(record, target)\n    \n    # Printing out the training progress\n    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n    sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n                     + \"% ... Training loss: \" + str(train_loss)[:5] \\\n                     + \" ... Validation loss: \" + str(val_loss)[:5])\n    \n    losses['train'].append(train_loss)\n    losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=.5)",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(16,8))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions[0]))\nax.legend()\n\ndates = pd.to_datetime(rides.loc[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)\n\nfig, ax = plt.subplots(figsize=(16,8))\n\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\nmean, std = scaled_features['cnt']\npredictions = network.run(features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions[0]))\nax.legend()\n\ndates = pd.to_datetime(rides.loc[data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)\n",
"Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nThe model predicts the data fairly well during \"normal\" times. For example, it does a good job making predictions during regular business hours, regular probably being the key word there. It fails with the winter/xmas holiday, and I think this may be because we've sliced that data out of the training set and used it in our test data, so the network never really had a good chance to learn from the holiday data (only once, during winter 2011.) The model seems to kind of know something is going on with the holiday as the predictions do reflect that it's a holiday, but the weight of it being a holiday doesn't seem to be as strong as it should be.\nUnit tests\nRun these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbwhit/2017-05-PyCon-EDA-Tutorial
|
notebooks/2017-02-24-djm-redcard-eda.ipynb
|
mit
|
[
"Are soccer referees more likely to give red cards to dark skin toned players than light skin toned players?",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport pandas as pd\n\n%time df = pd.read_csv('../data/redcard/crowdstorm_disaggregated.csv.gz', compression='gzip')",
"Data Structure\n\nThe dataset is available as a list with 146,028 dyads of players and referees and includes details from players, details from referees and details regarding the interactions of player-referees. A summary of the variables of interest can be seen below. A detailed description of all variables included can be seen in the README file on the project website. -- https://docs.google.com/document/d/1uCF5wmbcL90qvrk_J27fWAvDcDNrO9o_APkicwRkOKc/edit\n\n| Variable Name: | Variable Description: | \n| -- | -- | \n| playerShort | short player ID | \n| player | player name | \n| club | player club | \n| leagueCountry | country of player club (England, Germany, France, and Spain) | \n| height | player height (in cm) | \n| weight | player weight (in kg) | \n| position | player position | \n| games | number of games in the player-referee dyad | \n| goals | number of goals in the player-referee dyad | \n| yellowCards | number of yellow cards player received from the referee | \n| yellowReds | number of yellow-red cards player received from the referee | \n| redCards | number of red cards player received from the referee | \n| photoID | ID of player photo (if available) | \n| rater1 | skin rating of photo by rater 1 | \n| rater2 | skin rating of photo by rater 2 | \n| refNum | unique referee ID number (referee name removed for anonymizing purposes) | \n| refCountry | unique referee country ID number | \n| meanIAT | mean implicit bias score (using the race IAT) for referee country |",
"# how many records are there?\ndf.shape\n\n# what do the entries in the table look like?\ndf.sample(100)",
"it looks like there are some entries with NaN's, how prevalent is this issue?\nwhich columns should I use to answer the question?\nwhy are there so many \"red card\" columns?\nwhy are there so many \"skintone\" columns?\n\nNaN/null exploration",
"import missingno as msno\n\n(df.isnull().sum(axis=0)/float(len(df))).sort_values()\n\nmsno.matrix(df.sample(1000))\n\nmsno.bar(df)\n\nmsno.heatmap(df)",
"the data is mostly there. \nthe most frequently missing fields are the photoID, rater1, rater2, and skintone. \nthey're missing for about 12.5% of the rows and their missingness is highly correlated.\nposition, weight, and height are the next most commonly missing fields at a rate of about ~10%, ~1%, and ~0.1% respectively. moderately correlated\nmeanIAT, nIAT, seIAT, meanExp, nExp, seExp are all missing at about ~0.05%, highly correlated\nassumption for below: ignore rows with missing data",
"df_nona = df.dropna()\n\ndf_nona.shape\n\nlen(df_nona) / float(len(df))\n\ndf_nona.describe()",
"focus on \"skintone\" columns and \"red card\" columns",
"df_focus = df_nona[['rater1', 'rater2', 'redCards', 'skintone', 'allreds', 'allredsStrict']]\n\ndf_focus.describe()\n\nr1_vs_r2 = df_focus.groupby(['rater1', 'rater2']).size().unstack()\n\nr1_vs_r2\n\nplt.imshow(r1_vs_r2.values)\n\nshape = r1_vs_r2.shape\n\nplt.xticks(np.arange(shape[1]), r1_vs_r2.columns.values)\nplt.xlabel('rater2')\n\nplt.yticks(np.arange(shape[0]), r1_vs_r2.index.values)\nplt.ylabel('rater1')\n\nplt.colorbar()\n\ndef compare_cols(col0, col1):\n\n    data = df_focus.groupby([col0, col1]).size().unstack()\n    shape = data.shape\n    \n    plt.imshow(data.values)\n\n    plt.yticks(np.arange(shape[0]), data.index.values)\n    plt.ylabel(col0)\n    plt.xticks(np.arange(shape[1]), data.columns.values)\n    plt.xlabel(col1)\n    \n    plt.colorbar()\n\ncompare_cols('rater1', 'skintone')\n\ncompare_cols('rater2', 'skintone')",
"skintone, rater1, rater2 all seem pretty highly correlated\nexplore \"red card\" counts vs \"skin tone\" value",
"aggs = ['count', 'median', 'sum', 'mean', 'var']\n\ndf_focus_rater1 = df_focus.groupby(['rater1']).agg(aggs)\n\ndata = df_focus_rater1['redCards']['count']\n\nx = data.index.values\ny = data.values\n\nplt.bar(np.arange(len(x)), y)\nplt.xticks(np.arange(len(x)), x)\nplt.title('Interaction counts by rating from rater1')\n\nplt.ylim(0, None)\n\nerror_of_mean = np.sqrt(df_focus_rater1['redCards']['var'].values/df_focus_rater1['redCards']['count'].values)\n\nplt.plot([np.arange(len(x)), np.arange(len(x))], [-error_of_mean, +error_of_mean], color='red', lw=4)\nplt.show()\n\ndef df_col_agg_bar(ax, df, title, col, agg):\n title = '{} {}({})'.format(title, agg, col)\n data = df[col][agg]\n x = data.index.values\n y = data.values\n ax.bar(np.arange(len(x)), y)\n \n if agg == 'mean':\n error_of_mean = np.sqrt(df[col]['var'].values/df[col]['count'].values)\n ax.plot([np.arange(len(x)), np.arange(len(x))],\n [y - error_of_mean, y + error_of_mean], color='red', lw=4)\n \n ax.xaxis.set_ticks(np.arange(len(x)))\n ax.xaxis.set_ticklabels(x)\n ax.set_title(title)\n ax.set_ylim(0, None)\n\nfig, axes = plt.subplots(ncols=3, figsize=(16, 4))\n\ndata = df_focus.groupby(['rater1']).agg(aggs)\n\ndf_col_agg_bar(axes[0], data, 'groupby(rater1)', 'redCards', 'count')\ndf_col_agg_bar(axes[1], data, 'groupby(rater1)', 'allreds', 'count')\ndf_col_agg_bar(axes[2], data, 'groupby(rater1)', 'allredsStrict', 'count')\n\nfig, axes = plt.subplots(ncols=3, figsize=(16, 4))\n\ndata = df_focus.groupby(['rater1']).agg(aggs)\n\ndf_col_agg_bar(axes[0], data, 'groupby(rater1)', 'redCards', 'sum')\ndf_col_agg_bar(axes[1], data, 'groupby(rater1)', 'allreds', 'sum')\ndf_col_agg_bar(axes[2], data, 'groupby(rater1)', 'allredsStrict', 'sum')\n\nfig, axes = plt.subplots(ncols=3, figsize=(16, 4), sharey=True)\n\ndata = df_focus.groupby(['rater1']).agg(aggs)\n\ndf_col_agg_bar(axes[0], data, 'groupby(rater1)', 'redCards', 'sum')\ndf_col_agg_bar(axes[1], data, 'groupby(rater1)', 'allreds', 
'sum')\ndf_col_agg_bar(axes[2], data, 'groupby(rater1)', 'allredsStrict', 'sum')\n\nfig, axes = plt.subplots(ncols=3, figsize=(16, 4), sharey=True)\n\ndata = df_focus.groupby(['rater1']).agg(aggs)\n\ndf_col_agg_bar(axes[0], data, 'groupby(rater1)', 'redCards', 'mean')\ndf_col_agg_bar(axes[1], data, 'groupby(rater1)', 'allreds', 'mean')\ndf_col_agg_bar(axes[2], data, 'groupby(rater1)', 'allredsStrict', 'mean')\n\nfig, axes = plt.subplots(ncols=3, figsize=(16, 4), sharey=True)\n\ndata = df_focus.groupby(['rater2']).agg(aggs)\n\ndf_col_agg_bar(axes[0], data, 'groupby(rater2)', 'redCards', 'mean')\ndf_col_agg_bar(axes[1], data, 'groupby(rater2)', 'allreds', 'mean')\ndf_col_agg_bar(axes[2], data, 'groupby(rater2)', 'allredsStrict', 'mean')\n\nfig, axes = plt.subplots(ncols=3, figsize=(16, 4), sharey=True)\n\ndata = df_focus.groupby(['skintone']).agg(aggs)\n\ndf_col_agg_bar(axes[0], data, 'groupby(skintone)', 'redCards', 'mean')\ndf_col_agg_bar(axes[1], data, 'groupby(skintone)', 'allreds', 'mean')\ndf_col_agg_bar(axes[2], data, 'groupby(skintone)', 'allredsStrict', 'mean')",
"what is going on with the 0.625 group?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pwer21c/pwer21c.github.io
|
python/pythoncodes/3_preview_for_10022021.ipynb
|
mit
|
[
"When we studied lists, we stored some fruit in a list called fruits_name. \nNow let's print each fruit one by one.",
"fruits_name = [\"apple\", \"banana\", \"cherry\"]\nfor x1 in fruits_name:\n print(x1)\n ",
"It prints apple, banana, cherry in order.\nAfter for we left a space and wrote the name x1. You can use any name here; abc would work just as well.\nThen after in we wrote the list fruits_name from above. Shall we try it again?",
"import turtle\nt=turtle.Turtle()\nt.shape('turtle')\n\n\nt.reset()\n## range(1,100,10) goes from 1 toward 100 in steps of 10.\nfor x2 in range(1,100,10):\n    t.circle(x2)",
"Homework 1. Use a for loop to make the turtle draw 20 circles.\nHomework 2. Use a for loop to produce this output: 1, 3, 5, 7, 9, ..., 99 \n1\n3\n5\n7\n9\n11\n13\n15\n...\n99\nHint: use the range command. range(1,100,2)\nnumber=[1,3,5,7,9,11,13,15,.......,99]",
"fruits = [\"apple\", \"banana\", \"cherry\"]\nfor abc in fruits:\n print(x)",
"Huh, that's strange. The result looks wrong, right? That's because we didn't update the x inside print(x). Let's fix it.",
"fruits = [\"apple\", \"banana\", \"cherry\"]\nfor abc in fruits:\n print(abc)",
"Same result this time. The for loop repeats the work once for each item in fruits.\nNow shall we print the numbers from 1 to 10?",
"print(1)\nprint(2)\nprint(3)\nprint(4)\nprint(5)\nprint(6)\nprint(7)\nprint(8)\nprint(9)\nprint(10)",
"That works, but it's far too tedious. So instead:",
"for x in range(1,11):\n    print(x)",
"Remember, for repeats once for each item given after in.\nHere range supplies the numbers from 1 to 10,\nso each one gets printed in turn.\nCompared with the cell above, just two lines print 1 through 10.",
"xy=[1,3,5,7,9]\nfor x in xy:\n print(x)\n\nfor x in range(10,1,-1):\n print(x)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zommiommy/FourierSeriesApproximation
|
FourierSeriesApproximation.ipynb
|
mit
|
[
"Fourier Series Approximation for discrete time series\nDependencies:",
"import numpy as np\nfrom scipy import linalg",
"Final Code",
"# the function to calculate the coefficients\ndef approx(x, y, n, w):\n    x = np.matrix(x).transpose()\n    y = np.matrix(y).transpose()\n\n    f, b = x.shape\n    c, d = y.shape\n\n    if c != f or b != d:\n        print('The input vectors have wrong dimensions')\n        return -1\n\n    j = np.matrix(np.arange(1,n+1))\n    V1 = np.cos(w*x*j)\n    j = np.matrix(np.arange(n+1,2*n+1))\n    V2 = np.sin(w*x*(j-n))\n\n    V = np.concatenate([V1,V2],axis=1)\n\n    Q, R = linalg.qr(V)\n\n    R = R[:2 * n, :2 * n]\n    Q = Q[:f, :2 * n]\n    # coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)),check_finite=False)\n    coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)))\n\n    n = int(len(coeff) / 2)\n    mag = np.sqrt(coeff[:n]**2+coeff[n:]**2)\n    angle = np.arctan2(coeff[:n],coeff[n:])\n\n    r = []\n    for i,(m,a) in enumerate(zip(mag,angle)):\n        r.append([float(m),i+1,float(a)])\n    return r\n\n# the function to calculate the reconstructed function from the coefficients\ndef calc_fourier(X,coeff,vmed,w=0.5):\n    y = np.zeros_like(X) + vmed\n    for (m,i,p) in coeff:\n        y += m*np.sin(w*i*X+p)\n    return y\n\n# approximate a function and get both the coefficients and the reconstructed function\ndef fourier_approx(funzione,n=0,w=0.5):\n\n    fmean = np.mean(funzione)\n\n    funzione = list(funzione)\n\n    # mirror the signal so the fitted series does not need to match the endpoints\n    funzione = funzione + funzione[::-1]\n\n    mean = np.mean(funzione)\n\n    funzione = [z - mean for z in funzione]\n\n    T = np.linspace( 0, 4 * np.pi, num=len(funzione), endpoint=True)\n    if n == 0:\n        n = int(len(T) / 2) - 1\n\n    if n < 1:\n        return -1\n\n    coeff = approx(T, funzione, n, w)\n\n    # keep only the original (unmirrored) half\n    T = np.array(T[:int(len(T)/2)])\n    funzione = np.array(funzione[:int(len(funzione)/2)])\n\n    # reconstruct from the coefficients, re-adding the mean\n    y = calc_fourier(T, coeff, fmean, w)\n\n    return y, coeff",
"How and Why it works\nWe start with the Fourier series:\n$$f(t) = \\frac{a_0}{2} + \\sum_{n = 1}^{N} \\left [ a_n cos(nwt) + b_n sin(nwt) \\right ]$$\nIf we arrange for the function to have zero mean, this simplifies to:\n$$f(t) = \\sum_{n = 1}^{N} \\left [ a_n cos(nwt) + b_n sin(nwt) \\right ]$$\n$$ f(t) = \\sum_{n = 1}^{N} < \\begin{pmatrix}cos(nwt)\\ sin(nwt)\\end{pmatrix},\\begin{pmatrix}a_n\\ b_n\\end{pmatrix} >$$\n$$f(t) = \\sum_{n = 1}^{N} \\begin{pmatrix}cos(nwt)& sin(nwt)\\end{pmatrix}\\begin{pmatrix}a_n\\ b_n\\end{pmatrix}$$\n$$f(t) = \\begin{pmatrix}\ncos(wt)\n&\ncos(2wt)\n&\n...\n&\ncos(Nwt)\n& \nsin(wt)\n&\nsin(2wt)\n&\n...\n&\nsin(Nwt)\n\\end{pmatrix}\n\\begin{pmatrix}\na_1\n\\\na_2\n\\\n...\n\\\na_{N}\n\\ \nb_1\n\\\nb_2\n\\\n...\n\\\nb_{N}\n\\end{pmatrix}\n$$\nNow if we substitute $t$ with the sequence $T_k = {t_k} $ we get $k$ rows forming a matrix $A$ with $k$ rows and $2n$ columns,\nand the sequence $Y_k = {y_k}$ with $ y_k = f(t_k)$, which forms the vector $Y$ \n$$ f(T_k) = \\begin{pmatrix}\ncos(wt_1)\n&\ncos(2wt_1)\n&\n...\n&\ncos(Nwt_1)\n& \nsin(wt_1)\n&\nsin(2wt_1)\n&\n...\n&\nsin(Nwt_1)\n\\\ncos(wt_2)\n&\ncos(2wt_2)\n&\n...\n&\ncos(Nwt_2)\n& \nsin(wt_2)\n&\nsin(2wt_2)\n&\n...\n&\nsin(Nwt_2)\n\\\n...\n&\n...\n&\n...\n&\n...\n&\n...\n&\n...\n&\n...\n&\n...\n\\\ncos(wt_K)\n&\ncos(2wt_K)\n&\n...\n&\ncos(Nwt_K)\n& \nsin(wt_K)\n&\nsin(2wt_K)\n&\n...\n&\nsin(Nwt_K)\n\\end{pmatrix}\n\\begin{pmatrix}\na_1\n\\\na_2\n\\\n...\n\\\na_{N}\n\\ \nb_1\n\\\nb_2\n\\\n...\n\\\nb_{N}\n\\end{pmatrix}\n=\n\\begin{pmatrix}\ny_1\n\\\ny_2\n\\\n...\n\\\n...\n\\ \n...\n\\\n...\n\\\n...\n\\\ny_{k}\n\\end{pmatrix}\n$$\nSo now we have a linear system to solve for the vector of $a_n$ and $b_n$, called $C$:\n$$ A C = Y$$\nIn the square case the system is solvable when $det(A) \\neq 0$;\nwe need at least as many linearly independent rows as columns, so the square case is $k = 2n$ if all the rows are 
linearly independent.\nWhen there are more rows than columns the system is overdetermined, so we solve it in the least-squares sense using the $QR$ decomposition:\nwith $ A = QR$ we can compute $$ C = R^{-1} Q^T Y $$\nand now we have the coefficients of the Fourier series ready to be used.\nThere is a possible optimization if the approximation is computed in preprocessing:\n if we use the trigonometric identity\n$$ a cos(wt) + b sin(wt) = \\sqrt{a^2 + b^2} sin(wt + arctan(\\frac{a}{b}))$$\nand now we can call $c_n = \\sqrt{a_n^2 + b_n^2} $ and $\\phi_n = arctan(\\frac{a_n}{b_n})$\nso we have a final series in the form of\n$$ f(t) = \\sum_{n = 1}^{N} c_n sin(nwt + \\phi_n)$$\nImplementation\nwe'll import matplotlib to plot the result",
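The amplitude-phase step can be checked numerically: for the form a*cos(wt) + b*sin(wt) = sqrt(a^2+b^2) * sin(wt + phi), the phase is phi = arctan2(a, b), which matches the `np.arctan2(coeff[:n], coeff[n:])` call in the code above (the values of a, b, w here are arbitrary):

```python
import numpy as np

# Numeric check of the amplitude-phase identity:
# a*cos(w*t) + b*sin(w*t) == sqrt(a**2 + b**2) * sin(w*t + arctan2(a, b))
a, b, w = 0.7, -1.3, 0.5
t = np.linspace(0, 4 * np.pi, 200)

lhs = a * np.cos(w * t) + b * np.sin(w * t)
rhs = np.hypot(a, b) * np.sin(w * t + np.arctan2(a, b))

print(np.allclose(lhs, rhs))  # True
```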
"import matplotlib.pyplot as plt\nfrom pprint import pprint",
"we start with f, a sawtooth wave",
"f = np.linspace(0,1,100)\nf = np.concatenate((f,f,f,f))\n\nT = np.linspace( 0, 2 * np.pi, num=len(f), endpoint=True) - np.pi\n\nplt.plot(T,f)\nplt.show()",
"we choose the number of harmonics we want, in this case 50, but $ n \\in [1,N)$ where $N$ is len(f)",
"n = 50\nw = 1",
"first we get rid of the mean",
"f_mean = np.mean(f)\nf -= f_mean\n\nplt.plot(T,f)\nplt.show()",
"we transform the array into a column vector",
"x = np.matrix(T).transpose()\ny = np.matrix(f).transpose()",
"we create the C matrix",
"j = np.matrix(np.arange(1,n+1))\nC1 = np.cos(w*x*j)\nj = np.matrix(np.arange(n+1,2*n+1))\nC2 = np.sin(w*x*(j-n))\n\nC = np.concatenate([C1,C2],axis=1)\nprint(C)",
"the QR decomposition",
"Q, R = linalg.qr(C)",
"we truncate the matrix so that it's possible to solve the system",
"R = R[:2 * n, :2 * n]\nQ = Q[:x.shape[0], :2 * n]",
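As an aside, scipy can return the reduced factors directly with its economic mode, which for a tall matrix is equivalent to the manual truncation above (the matrix here is random, just to illustrate the shapes):

```python
import numpy as np
from scipy import linalg

A = np.random.RandomState(0).randn(50, 6)     # tall matrix, like C when k > 2n

Q_full, R_full = linalg.qr(A)                 # Q: 50x50, R: 50x6
Q_eco, R_eco = linalg.qr(A, mode='economic')  # Q: 50x6,  R: 6x6

print(np.allclose(Q_eco @ R_eco, A))                   # True
print(np.allclose(Q_full[:, :6] @ R_full[:6, :6], A))  # True
```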
"we solve the system and get the coeff vector",
"coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)))\n# coeff = linalg.solve_triangular(R, (np.dot(Q.transpose(), y)),check_finite=False) Alternative way",
"We separate the matrix into the sin and cos coeff list",
"n_ = int(len(coeff) / 2)\nsin_coeff = coeff[:n_]\ncos_coeff = coeff[n_:]",
"we convert to $c$ and $\\phi$ and now we have all the coefficent ready to go",
"mag = np.sqrt(cos_coeff**2+sin_coeff**2)\nphi = np.arctan2(sin_coeff,cos_coeff)",
"we calculate the resulting function and re-add the mean to the function",
"y = np.zeros_like(T)\nfor (m,i,p) in zip(mag,range(n),phi):\n y += m*np.sin(w*(i+1)*T+p)\ny += f_mean\nf += f_mean",
"Result",
"plt.plot(T,f)\nplt.plot(T,y)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/hammoz-consortium/cmip6/models/sandbox-1/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-1', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adaptive grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/examples
|
mnist/mnist_aws.ipynb
|
apache-2.0
|
[
"MNIST E2E on Kubeflow on AWS\nThis example guides you through:\n\nTaking an example TensorFlow model and modifying it to support distributed training\nServing the resulting model using TFServing\nDeploying and using a web-app that uses the model\n\nRequirements\n\nYou must be running Kubeflow 1.0 on EKS\n\nInstall AWS CLI\nClick Kernel -> Restart after you install new packages.",
"!pip install boto3",
"Create an AWS secret in Kubernetes and grant AWS access to your notebook\n\nNote: Once IAM for Service Accounts is merged in 1.0.1, we won't have to use credentials\n\n\nPlease create an AWS secret in the current namespace. \n\n\nNote: To get a base64 string, try echo -n $AWS_ACCESS_KEY_ID | base64. \nMake sure you have AmazonEC2ContainerRegistryFullAccess and AmazonS3FullAccess for this experiment. Pods will use these credentials to talk to AWS services.",
"%%bash\n\n# Replace placeholder with your own AWS credentials\nAWS_ACCESS_KEY_ID='<your_aws_access_key_id>'\nAWS_SECRET_ACCESS_KEY='<your_aws_secret_access_key>'\n\nkubectl create secret generic aws-secret --from-literal=AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --from-literal=AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}",
"Attach AmazonEC2ContainerRegistryFullAccess and AmazonS3FullAccess to the EKS node group role and grant AWS access to the notebook.\n\nVerify you have access to AWS services\n\nThe cell below checks that this notebook was spawned with credentials to access AWS S3 and ECR",
"import logging\nimport os\nimport uuid\nfrom importlib import reload\nimport boto3\n\n# Set REGION for s3 bucket and elastic container registry\nAWS_REGION='us-west-2'\nboto3.client('s3', region_name=AWS_REGION).list_buckets()\nboto3.client('ecr', region_name=AWS_REGION).describe_repositories()",
"Prepare model\nThere is a delta between existing distributed mnist examples and what's needed to run well as a TFJob.\nBasically, we must:\n\nAdd options in order to make the model configurable.\nUse tf.estimator.train_and_evaluate to enable model exporting and serving.\nDefine serving signatures for model serving.\n\nThe resulting model is model.py.\nInstall Required Libraries\nImport the libraries required to train this model.",
"import notebook_setup\nreload(notebook_setup)\nnotebook_setup.notebook_setup(platform='aws')\n\nimport k8s_util\n# Force a reload of kubeflow; since kubeflow is a multi namespace module\n# it looks like doing this in notebook_setup may not be sufficient\nimport kubeflow\nreload(kubeflow)\nfrom kubernetes import client as k8s_client\nfrom kubernetes import config as k8s_config\nfrom kubeflow.tfjob.api import tf_job_client as tf_job_client_module\nfrom IPython.core.display import display, HTML\nimport yaml",
"Configure The Docker Registry For Kubeflow Fairing\n\nIn order to build docker images from your notebook we need a docker registry where the images will be stored\nBelow you set some variables specifying a Amazon Elastic Container Registry\nKubeflow Fairing provides a utility function to guess the name of your AWS account",
"from kubernetes import client as k8s_client\nfrom kubernetes.client import rest as k8s_rest\nfrom kubeflow import fairing \nfrom kubeflow.fairing import utils as fairing_utils\nfrom kubeflow.fairing.builders import append\nfrom kubeflow.fairing.deployers import job\nfrom kubeflow.fairing.preprocessors import base as base_preprocessor\n\n# Setting up AWS Elastic Container Registry (ECR) for storing output containers\n# You can use any docker container registry instead of ECR\nAWS_ACCOUNT_ID=fairing.cloud.aws.guess_account_id()\n# Override the guess with the authoritative account id from STS\nAWS_ACCOUNT_ID = boto3.client('sts').get_caller_identity().get('Account')\nDOCKER_REGISTRY = '{}.dkr.ecr.{}.amazonaws.com'.format(AWS_ACCOUNT_ID, AWS_REGION)\n\nnamespace = fairing_utils.get_current_k8s_namespace()\n\nlogging.info(f\"Running in aws region {AWS_REGION}, account {AWS_ACCOUNT_ID}\")\nlogging.info(f\"Running in namespace {namespace}\")\nlogging.info(f\"Using docker registry {DOCKER_REGISTRY}\")",
"Use Kubeflow fairing to build the docker image\n\nYou will use kubeflow fairing's kaniko builder to build a docker image that includes all your dependencies\nYou use kaniko because you want to be able to run pip to install dependencies\nKaniko gives you the flexibility to build images from Dockerfiles",
"# TODO(https://github.com/kubeflow/fairing/issues/426): We should get rid of this once the default \n# Kaniko image is updated to a newer image than 0.7.0.\nfrom kubeflow.fairing import constants\nconstants.constants.KANIKO_IMAGE = \"gcr.io/kaniko-project/executor:v0.14.0\"\n\nfrom kubeflow.fairing.builders import cluster\n\n# output_map is a map of extra files to add to the notebook.\n# It is a map from source location to the location inside the context.\noutput_map = {\n \"Dockerfile.model\": \"Dockerfile\",\n \"model.py\": \"model.py\"\n}\n\npreprocessor = base_preprocessor.BasePreProcessor(\n command=[\"python\"], # The base class will set this.\n input_files=[],\n path_prefix=\"/app\", # irrelevant since we aren't preprocessing any files\n output_map=output_map)\n\npreprocessor.preprocess()\n\n# Create a new ECR repository to host model image\n!aws ecr create-repository --repository-name mnist --region=$AWS_REGION\n\n# Use a Tensorflow image as the base image\n# We use a custom Dockerfile \ncluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_REGISTRY,\n base_image=\"\", # base_image is set in the Dockerfile\n preprocessor=preprocessor,\n image_name=\"mnist\",\n dockerfile_path=\"Dockerfile\",\n pod_spec_mutators=[fairing.cloud.aws.add_aws_credentials_if_exists, fairing.cloud.aws.add_ecr_config],\n context_source=cluster.s3_context.S3ContextSource(region=AWS_REGION))\ncluster_builder.build()\nlogging.info(f\"Built image {cluster_builder.image_tag}\")",
"Create an S3 Bucket\n\nCreate an S3 bucket to store our models and other results.\nSince we are running in python we use the python client libraries, but you could also use the aws s3 command line",
"import boto3\nfrom botocore.exceptions import ClientError\n\nbucket = f\"{AWS_ACCOUNT_ID}-mnist\"\n\ndef create_bucket(bucket_name, region=None):\n \"\"\"Create an S3 bucket in a specified region\n\n If a region is not specified, the bucket is created in the S3 default\n region (us-east-1).\n\n :param bucket_name: Bucket to create\n :param region: String region to create bucket in, e.g., 'us-west-2'\n :return: True if bucket created, else False\n \"\"\"\n\n # Create bucket\n try:\n if region is None:\n s3_client = boto3.client('s3')\n s3_client.create_bucket(Bucket=bucket_name)\n else:\n s3_client = boto3.client('s3', region_name=region)\n location = {'LocationConstraint': region}\n s3_client.create_bucket(Bucket=bucket_name,\n CreateBucketConfiguration=location)\n except ClientError as e:\n logging.error(e)\n return False\n return True\n\ncreate_bucket(bucket, AWS_REGION)",
"Distributed training\n\nWe will train the model by using TFJob to run a distributed training job",
"train_name = f\"mnist-train-{uuid.uuid4().hex[:4]}\"\nnum_ps = 1\nnum_workers = 2\nmodel_dir = f\"s3://{bucket}/mnist\"\nexport_path = f\"s3://{bucket}/mnist/export\"\ntrain_steps = 200\nbatch_size = 100\nlearning_rate = .01\nimage = cluster_builder.image_tag\n\ntrain_spec = f\"\"\"apiVersion: kubeflow.org/v1\nkind: TFJob\nmetadata:\n name: {train_name} \nspec:\n tfReplicaSpecs:\n Ps:\n replicas: {num_ps}\n template:\n metadata:\n annotations:\n sidecar.istio.io/inject: \"false\"\n spec:\n serviceAccount: default-editor\n containers:\n - name: tensorflow\n command:\n - python\n - /opt/model.py\n - --tf-model-dir={model_dir}\n - --tf-export-dir={export_path}\n - --tf-train-steps={train_steps}\n - --tf-batch-size={batch_size}\n - --tf-learning-rate={learning_rate}\n image: {image}\n workingDir: /opt\n env:\n - name: AWS_REGION\n value: {AWS_REGION}\n - name: AWS_ACCESS_KEY_ID\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_ACCESS_KEY_ID\n - name: AWS_SECRET_ACCESS_KEY\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_SECRET_ACCESS_KEY\n\n restartPolicy: OnFailure\n Chief:\n replicas: 1\n template:\n metadata:\n annotations:\n sidecar.istio.io/inject: \"false\"\n spec:\n serviceAccount: default-editor\n containers:\n - name: tensorflow\n command:\n - python\n - /opt/model.py\n - --tf-model-dir={model_dir}\n - --tf-export-dir={export_path}\n - --tf-train-steps={train_steps}\n - --tf-batch-size={batch_size}\n - --tf-learning-rate={learning_rate}\n image: {image}\n workingDir: /opt\n env:\n - name: AWS_REGION\n value: {AWS_REGION}\n - name: AWS_ACCESS_KEY_ID\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_ACCESS_KEY_ID\n - name: AWS_SECRET_ACCESS_KEY\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_SECRET_ACCESS_KEY\n\n restartPolicy: OnFailure\n Worker:\n replicas: 1\n template:\n metadata:\n annotations:\n sidecar.istio.io/inject: \"false\"\n spec:\n serviceAccount: default-editor\n containers:\n - name: tensorflow\n command:\n 
- python\n - /opt/model.py\n - --tf-model-dir={model_dir}\n - --tf-export-dir={export_path}\n - --tf-train-steps={train_steps}\n - --tf-batch-size={batch_size}\n - --tf-learning-rate={learning_rate}\n image: {image}\n workingDir: /opt\n env:\n - name: AWS_REGION\n value: {AWS_REGION}\n - name: AWS_ACCESS_KEY_ID\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_ACCESS_KEY_ID\n - name: AWS_SECRET_ACCESS_KEY\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_SECRET_ACCESS_KEY\n restartPolicy: OnFailure\n\"\"\" ",
"Create the training job\n\nYou could write the spec to a YAML file and then do kubectl apply -f {FILE}\nSince you are running in jupyter you will use the TFJob client\nYou will run the TFJob in a namespace created by a Kubeflow profile\nThe namespace will be the same namespace you are running the notebook in\nCreating a profile ensures the namespace is provisioned with service accounts and other resources needed for Kubeflow",
"tf_job_client = tf_job_client_module.TFJobClient()\n\ntf_job_body = yaml.safe_load(train_spec)\ntf_job = tf_job_client.create(tf_job_body, namespace=namespace) \n\nlogging.info(f\"Created job {namespace}.{train_name}\")",
"Check the job\n\nAbove you used the python SDK for TFJob to create the job\nYou can also use kubectl to get the status of your job\nThe job conditions will tell you whether the job is running, succeeded or failed",
"!kubectl get tfjobs -o yaml {train_name}",
"Get The Logs\n\n\nThere are two ways to get the logs for the training job\n\n\nUsing kubectl to fetch the pod logs\n\nThese logs are ephemeral; they will be unavailable when the pod is garbage collected to free up resources\n\n\nUsing Fluentd-Cloud-Watch\nKubernetes data plane logs are not automatically available in AWS\nYou need to install fluentd-cloud-watch plugin to ship containers logs to Cloud Watch \n\n\n\nDeploy TensorBoard\n\nYou will create a Kubernetes Deployment to run TensorBoard\nTensorBoard will be accessible behind the Kubeflow endpoint",
"tb_name = \"mnist-tensorboard\"\ntb_deploy = f\"\"\"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: mnist-tensorboard\n name: {tb_name}\n namespace: {namespace}\nspec:\n selector:\n matchLabels:\n app: mnist-tensorboard\n template:\n metadata:\n labels:\n app: mnist-tensorboard\n version: v1\n spec:\n serviceAccount: default-editor\n containers:\n - command:\n - /usr/local/bin/tensorboard\n - --logdir={model_dir}\n - --port=80\n image: tensorflow/tensorflow:1.15.2-py3\n name: tensorboard\n env:\n - name: AWS_REGION\n value: {AWS_REGION}\n - name: AWS_ACCESS_KEY_ID\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_ACCESS_KEY_ID\n - name: AWS_SECRET_ACCESS_KEY\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_SECRET_ACCESS_KEY\n ports:\n - containerPort: 80\n\"\"\"\ntb_service = f\"\"\"apiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: mnist-tensorboard\n name: {tb_name}\n namespace: {namespace}\nspec:\n ports:\n - name: http-tb\n port: 80\n targetPort: 80\n selector:\n app: mnist-tensorboard\n type: ClusterIP\n\"\"\"\n\ntb_virtual_service = f\"\"\"apiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n name: {tb_name}\n namespace: {namespace}\nspec:\n gateways:\n - kubeflow/kubeflow-gateway\n hosts:\n - '*'\n http:\n - match:\n - uri:\n prefix: /mnist/{namespace}/tensorboard/\n rewrite:\n uri: /\n route:\n - destination:\n host: {tb_name}.{namespace}.svc.cluster.local\n port:\n number: 80\n timeout: 300s\n\"\"\"\n\ntb_specs = [tb_deploy, tb_service, tb_virtual_service]\n\nk8s_util.apply_k8s_specs(tb_specs, k8s_util.K8S_CREATE_OR_REPLACE)",
"Access The TensorBoard UI\n\nNote: By default, your namespace may not have access to the istio-system namespace to get the ingress endpoint",
"endpoint = k8s_util.get_ingress_endpoint() \nif endpoint: \n vs = yaml.safe_load(tb_virtual_service)\n path= vs[\"spec\"][\"http\"][0][\"match\"][0][\"uri\"][\"prefix\"]\n tb_endpoint = endpoint + path\n display(HTML(f\"TensorBoard UI is at <a href='{tb_endpoint}'>{tb_endpoint}</a>\"))",
"Wait For the Training Job to finish\n\nYou can use the TFJob client to wait for it to finish.",
"tf_job = tf_job_client.wait_for_condition(train_name, expected_condition=[\"Succeeded\", \"Failed\"], namespace=namespace)\n\nif tf_job_client.is_job_succeeded(train_name, namespace):\n logging.info(f\"TFJob {namespace}.{train_name} succeeded\")\nelse:\n raise ValueError(f\"TFJob {namespace}.{train_name} failed\") ",
"Serve the model\n\nDeploy the model using tensorflow serving\nWe need to create\nA Kubernetes Deployment\nA Kubernetes service\n(Optional) Create a configmap containing the prometheus monitoring config",
"import os\ndeploy_name = \"mnist-model\"\nmodel_base_path = export_path\n\n# The web ui defaults to mnist-service so if you change it you will\n# need to change it in the UI as well to send predictions to the mode\nmodel_service = \"mnist-service\"\n\ndeploy_spec = f\"\"\"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: mnist\n name: {deploy_name}\n namespace: {namespace}\nspec:\n selector:\n matchLabels:\n app: mnist-model\n template:\n metadata:\n # TODO(jlewi): Right now we disable the istio side car because otherwise ISTIO rbac will prevent the\n # UI from sending RPCs to the server. We should create an appropriate ISTIO rbac authorization\n # policy to allow traffic from the UI to the model servier.\n # https://istio.io/docs/concepts/security/#target-selectors\n annotations: \n sidecar.istio.io/inject: \"false\"\n labels:\n app: mnist-model\n version: v1\n spec:\n serviceAccount: default-editor\n containers:\n - args:\n - --port=9000\n - --rest_api_port=8500\n - --model_name=mnist\n - --model_base_path={model_base_path}\n - --monitoring_config_file=/var/config/monitoring_config.txt\n command:\n - /usr/bin/tensorflow_model_server\n env:\n - name: modelBasePath\n value: {model_base_path}\n - name: AWS_REGION\n value: {AWS_REGION}\n - name: AWS_ACCESS_KEY_ID\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_ACCESS_KEY_ID\n - name: AWS_SECRET_ACCESS_KEY\n valueFrom:\n secretKeyRef:\n name: aws-secret\n key: AWS_SECRET_ACCESS_KEY\n image: tensorflow/serving:1.15.0\n imagePullPolicy: IfNotPresent\n livenessProbe:\n initialDelaySeconds: 30\n periodSeconds: 30\n tcpSocket:\n port: 9000\n name: mnist\n ports:\n - containerPort: 9000\n - containerPort: 8500\n resources:\n limits:\n cpu: \"1\"\n memory: 1Gi\n requests:\n cpu: \"1\"\n memory: 1Gi\n volumeMounts:\n - mountPath: /var/config/\n name: model-config\n volumes:\n - configMap:\n name: {deploy_name}\n name: model-config\n\"\"\"\n\nservice_spec = f\"\"\"apiVersion: v1\nkind: 
Service\nmetadata:\n annotations: \n prometheus.io/path: /monitoring/prometheus/metrics\n prometheus.io/port: \"8500\"\n prometheus.io/scrape: \"true\"\n labels:\n app: mnist-model\n name: {model_service}\n namespace: {namespace}\nspec:\n ports:\n - name: grpc-tf-serving\n port: 9000\n targetPort: 9000\n - name: http-tf-serving\n port: 8500\n targetPort: 8500\n selector:\n app: mnist-model\n type: ClusterIP\n\"\"\"\n\nmonitoring_config = f\"\"\"kind: ConfigMap\napiVersion: v1\nmetadata:\n name: {deploy_name}\n namespace: {namespace}\ndata:\n monitoring_config.txt: |-\n prometheus_config: {{\n enable: true,\n path: \"/monitoring/prometheus/metrics\"\n }}\n\"\"\"\n\nmodel_specs = [deploy_spec, service_spec, monitoring_config]\n\nk8s_util.apply_k8s_specs(model_specs, k8s_util.K8S_CREATE_OR_REPLACE)",
"Deploy the mnist UI\n\nWe will now deploy the UI to visualize the mnist results\nNote: This is using a prebuilt and public docker image for the UI",
"ui_name = \"mnist-ui\"\nui_deploy = f\"\"\"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: {ui_name}\n namespace: {namespace}\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: mnist-web-ui\n template:\n metadata:\n labels:\n app: mnist-web-ui\n spec:\n containers:\n - image: gcr.io/kubeflow-examples/mnist/web-ui:v20190112-v0.2-142-g3b38225\n name: web-ui\n ports:\n - containerPort: 5000 \n serviceAccount: default-editor\n\"\"\"\n\nui_service = f\"\"\"apiVersion: v1\nkind: Service\nmetadata:\n annotations:\n name: {ui_name}\n namespace: {namespace}\nspec:\n ports:\n - name: http-mnist-ui\n port: 80\n targetPort: 5000\n selector:\n app: mnist-web-ui\n type: ClusterIP\n\"\"\"\n\nui_virtual_service = f\"\"\"apiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n name: {ui_name}\n namespace: {namespace}\nspec:\n gateways:\n - kubeflow/kubeflow-gateway\n hosts:\n - '*'\n http:\n - match:\n - uri:\n prefix: /mnist/{namespace}/ui/\n rewrite:\n uri: /\n route:\n - destination:\n host: {ui_name}.{namespace}.svc.cluster.local\n port:\n number: 80\n timeout: 300s\n\"\"\"\n\nui_specs = [ui_deploy, ui_service, ui_virtual_service]\n\nk8s_util.apply_k8s_specs(ui_specs, k8s_util.K8S_CREATE_OR_REPLACE) ",
"Access the web UI\n\nA reverse proxy route is automatically added to the Kubeflow endpoint. The endpoint will be\n\nhttp://${KUBEFLOW_ENDPOINT}/mnist/${NAMESPACE}/ui/\n\nYou can get the KUBEFLOW_ENDPOINT with\n\nKUBEFLOW_ENDPOINT=`kubectl -n istio-system get ingress istio-ingress -o jsonpath=\"{.status.loadBalancer.ingress[0].hostname}\"`\n\n\nYou must run this command with sufficient RBAC permissions to get the ingress.\n\n\nIf you have sufficient privileges you can run the cell below to get the endpoint. If you don't have sufficient privileges you can grant appropriate permissions by running the command\n\n\nkubectl create --namespace=istio-system rolebinding --clusterrole=kubeflow-view --serviceaccount=${NAMESPACE}:default-editor ${NAMESPACE}-istio-view",
"endpoint = k8s_util.get_ingress_endpoint() \nif endpoint: \n vs = yaml.safe_load(ui_virtual_service)\n path= vs[\"spec\"][\"http\"][0][\"match\"][0][\"uri\"][\"prefix\"]\n ui_endpoint = endpoint + path\n display(HTML(f\"mnist UI is at <a href='{ui_endpoint}'>{ui_endpoint}</a>\"))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
QuantScientist/Deep-Learning-Boot-Camp
|
day03/1.4 Keras Backend.ipynb
|
mit
|
[
"Linear Regression\nTo get familiar with automatic differentiation, we start by learning a simple linear regression model using Stochastic Gradient Descent (SGD).\nRecall that given a dataset ${(x_i, y_i)}_{i=0}^N$, with $x_i, y_i \\in \\mathbb{R}$, the objective of linear regression is to find two scalars $w$ and $b$ such that $y = w\\cdot x + b$ fits the dataset. In this tutorial we will learn $w$ and $b$ using SGD and a Mean Square Error (MSE) loss:\n$$\\mathcal{l} = \\frac{1}{N} \\sum_{i=0}^N (w\\cdot x_i + b - y_i)^2$$\nStarting from random values, parameters $w$ and $b$ will be updated at each iteration via the following rule:\n$$w_t = w_{t-1} - \\eta \\frac{\\partial \\mathcal{l}}{\\partial w}$$\n<br>\n$$b_t = b_{t-1} - \\eta \\frac{\\partial \\mathcal{l}}{\\partial b}$$\nwhere $\\eta$ is the learning rate.\nNOTE: Recall that linear regression is indeed a simple neuron with a linear activation function!!\nPlaceholders and variables\nTo implement and run this simple model, we will use the Keras backend module, which provides an abstraction over Theano and Tensorflow, two popular tensor manipulation libraries that provide automatic differentiation.\nFirst of all, we define the necessary variables and placeholders for our computational graph. Variables maintain state across executions of the computational graph, while placeholders are ways to feed the graph with external data.\nFor the linear regression example, we need three variables: w, b, and the learning rate for SGD, lr. Two placeholders x and target are created to store $x_i$ and $y_i$ values.",
"import keras.backend as K\nimport numpy as np\n\n# Placeholders and variables\nx = K.placeholder()\ntarget = K.placeholder()\nlr = K.variable(0.1)\nw = K.variable(np.random.rand())\nb = K.variable(np.random.rand())",
"Model definition\nNow we can define the $y = w\\cdot x + b$ relation as well as the MSE loss in the computational graph.",
"# Define model and loss\ny = w * x + b\nloss = K.mean(K.square(y-target))",
"Then, given the gradient of MSE wrt to w and b, we can define how we update the parameters via SGD:",
"grads = K.gradients(loss, [w,b])\nupdates = [(w, w-lr*grads[0]), (b, b-lr*grads[1])]",
"The whole model can be encapsulated in a function, which takes as input x and target, returns the current loss value and updates its parameter according to updates.",
"train = K.function(inputs=[x, target], outputs=[loss], updates=updates)",
"Training\nTraining is now just a matter of calling the function we have just defined. Each time train is called, indeed, w and b will be updated using the SGD rule.\nHaving generated some random training data, we will feed the train function for several epochs and observe the values of w, b, and loss.",
"# Generate data\nnp_x = np.random.rand(1000)\nnp_target = 0.96*np_x + 0.24\n\n# Training\nloss_history = []\nfor epoch in range(200):\n current_loss = train([np_x, np_target])[0]\n loss_history.append(current_loss)\n if epoch % 20 == 0:\n print(\"Loss: %.03f, w, b: [%.02f, %.02f]\" % (current_loss, K.eval(w), K.eval(b)))",
"We can also plot the loss history:",
"# Plot loss history\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.plot(loss_history)",
"Your Turn\nPlease switch to the Theano backend and re-run the notebook.\nYou should see no difference in the execution!\nReminder: please keep in mind that you can execute shell commands from a notebook (pre-pending a ! sign).\nThus:\nshell\n !cat ~/.keras/keras.json\nshould show you the content of your keras configuration file.\nLogistic Regression\nLet's try to re-implement the Logistic Regression Model using the keras.backend APIs.\nThe following code will look like very similar to what we would write in Theano or Tensorflow - with the only difference that it may run on both the two backends.",
"from kaggle_data import load_data, preprocess_data, preprocess_labels\n\nX_train, labels = load_data('data/kaggle_ottogroup/train.csv', train=True)\nX_train, scaler = preprocess_data(X_train)\nY_train, encoder = preprocess_labels(labels)\n\nX_test, ids = load_data('data/kaggle_ottogroup/test.csv', train=False)\n\nX_test, _ = preprocess_data(X_test, scaler)\n\nnb_classes = Y_train.shape[1]\nprint(nb_classes, 'classes')\n\ndims = X_train.shape[1]\nprint(dims, 'dims')\n\nfeats = dims\ntraining_steps = 25\n\nx = K.placeholder(dtype=\"float\", shape=X_train.shape) \ntarget = K.placeholder(dtype=\"float\", shape=Y_train.shape)\n\n# Set model weights\nW = K.variable(np.random.rand(dims, nb_classes))\nb = K.variable(np.random.rand(nb_classes))\n\n# Define model and loss\ny = K.dot(x, W) + b\nloss = K.categorical_crossentropy(y, target)\n\nactivation = K.softmax(y) # Softmax\n\n# Minimize error using cross entropy\ncross_entropy = K.categorical_crossentropy(activation, target)\nloss = K.mean(-K.sum(cross_entropy))\n\ngrads = K.gradients(loss, [W,b])\nupdates = [(W, W-lr*grads[0]), (b, b-lr*grads[1])]\n\ntrain = K.function(inputs=[x, target], outputs=[loss], updates=updates)\n\n# Training\nloss_history = []\nfor epoch in range(training_steps):\n current_loss = train([X_train, Y_train])[0]\n loss_history.append(current_loss)\n if epoch % 20 == 0:\n print(\"Loss: {}\".format(current_loss))\n\n#plotting\nplt.plot(range(len(loss_history)), loss_history, 'o', label='Logistic Regression Training phase')\nplt.ylabel('cost')\nplt.xlabel('epoch')\nplt.legend()\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fluffy-hamster/A-Beginners-Guide-to-Python
|
A Beginners Guide to Python/07.2. Philosophy, Part Two.ipynb
|
mit
|
[
"“Explicit is better than implicit”\nWhat does this line of the poem refer to? Well, there are several examples and interpretations one could give, but today I would to interpret this line with reference to the idea of readability. \nWe wrapped up part one by talking about a function that contains no defenses against misuse. To jog your memory, here's the code again:",
"def net_force(mass, acceleration):\n return mass * acceleration ",
"Literally this function takes two objects and returns mass * acceleration; But what does that actually mean? And why should we care? \nWell, the meaning of this code is implicit, the author just assumes you are going to understand what it does and how to use it. The premise of today's lecture however is that this code can be rewritten to be more explicit, and being explicit is generally to be prefered. \nTo be more precise, the code make two implicit assumptions:\n\nThe end user has a basic understanding of Physics. \nThe end user knows to pass it numbers. \n\nRegarding the second point above I'm not going to explain why this is a problem today (for that explanation, see lecture on Operator Overloading), but I will propose a few possible fixes. But first things first, lets talk physics!\nTo People with a modest background in Physics it is pretty obvious all we are doing is taking the formula for force, f=ma,\nand putting it in code form.\nBut to understand the problem with the code, we need to imagine talking to someone that NEVER took a physics lesson at school before. For this person, they understand that we are multiplying two numbers (i.e. Mass, Acceleration) but they have not concept of what the result actually means. In short, the code only returns force if you know the physics, for everyone else the function return simply returns two numbers multiplied together.\nIn short, the code is being implicit but the poem tells us to be explicit. Alright, let's try and fix that now...",
"def net_force(mass, acceleration):\n force = mass * acceleration\n return force\n\nprint(net_force(10,10))",
"Defining Force\nThis code does one more thing that makes things more explicit. instead of returning:\nMass * Acceleration\n\nwe now return:\nForce\n\n...And the line above the return statement clearly assigns force to 'Mass * Acceleration'. So now even those readers without the physics background understand the meaning behind what we are doing; the function isn't merely returning A times B, its returning force. \nSo, we have succesfully made our code more explicit, even readers without an understanding of physics can grasp what the number we are returning *actually means. *\nHowever, there is a problem with this:\n\n\"Do we really want to add code to our function whose sole purpose is to make things readable?\"\n\nHonestly that is exactly what we have done in this case, defining and then returning force simply adds an unnecessary step. Unnecessary lines of code harms readability which is ironic since we only added this unnecessary line to make the code more readable! \nAs a matter of fact there is a solution to this problem, we could just add an in-line comment like this:\n# ...and now we return force...\nreturn mass * acceleration\n\nSuch a comment makes the code more explicit, but doesn't add unnecessary code. So already this looks like a better solution. It turns out though we can do even better than a comment, we can add a ‘docstring’:",
"def net_force(mass, acceleration):\n \"\"\"\n Calculates f=ma, returns force.\n We assume mass & acceleration are of type int/float.\n \"\"\"\n return mass * acceleration",
"Docstrings\nThe above code has a new concept to talk about. The red text encased in triple quotes is called a docstring, it is a special type of string that Python Programmers use to communicate with each other. Usually docstrings contain information about how the function works, and how to use it. Such is the case here. Docstrings can also serve as another way to write comments that span several lines. The Syntax is really simple:\n\"\"\"\n{Text}\n\"\"\"\n\nThe docstring above tells use that the function expected us to pass in integers/floats, and it also mentions what it is supposed to do (i.e. calculate f=ma). In short, adding documentation has helped make our function much more explicit. And for what it is worth, adding docstrings to all your functions is generally regarded as good practice.\nIts often a good idea to mention the expected 'types' of input. If you expect somebody to pass in numbers then say so. If you the code is supposed to work with strings then say so; Explicit is better than implicit. \nBeing explicit about what your code expects can prevent bugs, for example:",
"def net_force(mass, acceleration):\n \"\"\"\n Calculates f=ma, returns force.\n We assume mass & acceleration are of type int/float.\n \"\"\"\n return mass * acceleration\n\nprint(net_force(\"10\", 10))",
"This output might surprise you (and I'll explain why this happens in later lectures). Notice that the function net_force says it expects numbers. We passed in a string and an integer. And so, if we are using the function in an unexpected/unintended way should we surprised when things do not work as we expect?\nSo docstrings/comments can avoid bugs because they increase the chances that somebody will use some piece of code as it was designed to be used. If only I read the documentation, I would have known to not pass in a string!\nOkay, so that explains why we should leave text for other developers to read, but why are Docstrings better than comments?\nWell, the main reason is that docstrings make for better documentation. For example:",
"help(int)",
"we can use the \"help\" command to get information about a python objects. As a implementation detail, \"help\" will return the docstring of the function. Thus, if we use docstrings then other developers can call 'help' on our functions to find out what they do. Cool right?",
"def add_eight_1(x):\n \"\"\"\n Takes a integer x, and returns x + 8\n \"\"\"\n return x + 8\n\nhelp(add_eight_1)\n\ndef add_eight_2(x):\n # Takes a integer x, and returns x + 8\n return x + 8\n\nhelp(add_eight_2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
CSchoel/learn-wavelets
|
wavelet-introduction.ipynb
|
mit
|
[
"Introduction to wavelet theory\nby Christopher Schölzel\nAuthor's Note:\nI use this notebook to keep notes about my learning process about wavelet theory.\nI hope that these notes may help somebody else in their learning process, but as I am not an expert myself, they may of course contain errros.\nIf you spot such an error, please let me know. \nMy main learning resource currently is \"Ten lectures on wavelets\" by Ingrid Daubechies[1]. I will therefore probably mostly follow the structure of this book.\nThroughout the notebook you will find some \"TODO\" notes.\nThey are mainly a reminder that the explanation or the implementation could be improved at the concerning positions.\nI may or may not add these updates in the future.\nA final note to readers that found this notebook on github:\nGithub currently has some bugs in the rendering of formulas, making them absolutely tiny and inserting additional line breaks.\nOne temporary solution for this problem is to use the service nbviewer.jupyter.org to render the notebook on their servers.\n<a name=\"ref1\">[1]</a> Daubechies, I. Ten lectures on Wavelets. (Society for industrial and applied mathematics, 1992).\nWhat is a wavelet?\nSo... what exactly are these \"wavelets\" everybody is talking about? Well, just as a piglet is a small pig, a wavelet is - essentially - a small wave. A very simple example of such small wave is the mexican hat function:\n$$m(x) = \\frac{2}{\\sqrt{3 \\sigma} \\pi^{\\frac{1}{4}}} \\left(1-\\frac{x^2}{\\sigma^2}\\right) e^{\\frac{-x^2}{2 \\sigma^2}}$$",
"%matplotlib inline\n# we will use numpy and matplotlib for all the following examples\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n\ndef mexican_hat(x, mu, sigma):\n return 2 / (np.sqrt(3 * sigma) * np.pi**0.25) * (1 - x**2 / sigma**2) * np.exp(-x**2 / (2 * sigma**2) )\n\nxvals = np.arange(-10,10,0.1)\nplt.plot(xvals, mexican_hat(xvals, 0, 1))\nplt.show()",
"The mexican hat function/wavelet is the rescaled negative second derivative of the gaussian function (the probability distribution function of the normal distribution).\n$$\ng(x) = \\frac{1}{\\sigma \\sqrt{2 \\pi}} e^{\\frac{- (x-mu)^2}{2 \\sigma^2}}\n$$",
"def gauss(x, mu, sigma):\n return 1.0 / (sigma * np.sqrt(2 * np.pi)) * np.exp(- (x - mu)**2 / (2 * sigma**2))\n\ng = gauss(xvals, 0, 1)\nm = mexican_hat(xvals, 0, 1)\ndg = g[1:] - g[:-1] # linear approximation of first derivative\nddg = dg[1:] - dg[:-1] # linear approximation of second derivative\n\nplt.plot(xvals, m, color=\"blue\", lw=6, alpha=0.3)\nfac = m[len(xvals)//2] / -ddg[len(xvals)//2] # scaling factor\nplt.plot(xvals[1:-1], -ddg*fac, \"r-\")\nplt.show()",
"In fact, for the basic wavelet transform there is only one serious theoretical limitations to what a wavelet can be.\nWe will encouter the exact formulation of this limitation later on.\nFor now, lets just say that the wavelet should only cover a finite \"amount\" of frequencies (at some discretization of the frequency space).\nOne immediate followup of this conidition is that the total integral of the wavelet function has to be zero.\nThis is also the case for the mexican hat wavelet, as we can show by a simple sum:",
"np.sum(m)",
"Wavelet theory and the Fourier transform\nThe wavelet transform is often linked to the fourier transform, because both are used to inspect the frequency spectrum of a signal.\nTo examine the similarities and differences between the fourier and wavelet transform we will generate some artificial sound data:",
"def hamming(n):\n \"\"\" Hamming window of size N for smoothing the edges of sound waves \"\"\"\n return 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(n) / (n-1))\n\ndef sound(freq, dur, res=10000):\n ln = dur*res\n sound = np.zeros(ln)\n sound = np.sin(np.arange(ln)*2*np.pi*freq/res)\n return sound * hamming(ln)\n\ndef add_sound(audio, loc, freq, dur, res=10000):\n audio[loc:loc+dur*res] += sound(freq, dur, res=res)\n\nres = 10000 # sound resolution in hz\nplt.figure(figsize=(15,6))\nplt.subplot(121)\nsnd = sound(10,1)\nplt.plot(np.arange(len(snd),dtype=\"float32\")/res,snd)\nplt.xlabel(\"t[s]\")\nplt.title(\"sound window at 10hz\")\n\nplt.subplot(122)\naudio = np.zeros(15000)\nadd_sound(audio, 1000, 100, 0.5)\nadd_sound(audio, 3000, 130, 0.5)\nadd_sound(audio, 2000, 50, 1)\nadd_sound(audio, 10000, 150, 0.5)\nplt.plot(np.arange(len(audio),dtype=\"float32\")/res,audio)\nplt.xlabel(\"t[s]\")\nplt.title(\"audio signal with overlaying sound waves\")\n\nplt.show()",
"Now that we have our sample data, we can look at its power spectrum using the fourier transform.",
"fourier = np.fft.fft(audio)\nxvals = np.fft.fftfreq(len(audio))*res\nidx = np.where(np.abs(xvals) < 200)\nplt.plot(xvals[idx],np.abs(fourier)[idx])\nplt.show()",
"As expected, we see clear peaks at the frequencies that are present in the data (50 hz, 100 hz, 130 hz, and 150 hz).\nHowever, the fourier transform leaves us with no information where these frequencies occurred in the original signal.\nFor the analysis of sound-related data (and many other types of data with spatial information) it would be desirable to retain this spatial information.\nWith the fourier transform, we can multiply the original signal with a window function so that we only get the frequency components within that window.\nBy shifting the window over the signal, we get the information of all frequency bands at all locations.\nThis technique is called the windowed Fourier transform.",
"# note: the execution of this code block might take a few seconds\ndef fourier_w(signal, window_size=1000):\n out = np.zeros((len(signal),window_size))\n window = hamming(window_size)\n for i in range(window_size//2, len(signal)-window_size//2):\n s = i - window_size//2\n e = s + window_size\n wsig = signal[s:e] * window\n out[i,:] = np.abs(np.fft.fft(wsig))\n return out\n\ns,e = 4000,8000 # range of signal\nwsize = 1000 # window size\nfs,fe = 0,18 # range of frequencies to plot\n\nfw = fourier_w(audio[s:e],window_size=wsize)\nfwcut = fw[wsize//2:-wsize//2,fs:fe]\nplt.figure(figsize=(20,10))\nplt.subplot(211)\nplt.pcolormesh(fwcut.T)\nyt = np.arange(0,len(fwcut[0]),1)\nplt.yticks(yt+0.5,(np.fft.fftfreq(wsize)*res)[yt+fs])\nxt = np.arange(0,len(fwcut),wsize//2)\nplt.xticks(xt,(xt+s+wsize//2)/res)\nplt.ylabel(\"freq[hz]\")\nplt.ylim(0,len(fwcut[0]))\n\nplt.subplot(212)\nplt.plot(audio[s+wsize//2:e-wsize//2])\nplt.xticks(xt, (xt+s+wsize//2)/res)\nplt.xlabel(\"t[s]\")\nplt.show()",
"As you can see from the plots above, the windowed fourier transform preserves the spatial information at the cost of a higher computational complexity.\nWe can see that the 100hz sound fades out at t = 0.6s, which is exactly what we defined: The sound starts at t = 0.1s and has a duration of 0.5s.\nThe (discrete) wavelet transform essentially aims to solve the same problem as the windowed fourier transform: It provides information about the frequency spectrum without losing the spatial location.\nAt this point, we could use an existing implementation of the discrete wavelet transform and compare its output to that of the windowed fourier transform.\nIf you are interested in a ready-to-use version of the DWT for python, I suggest you have a look at Machine Learning PYthon (mlpy) or PyWavelets (pywt).\nI have not yet tested any of those packages, but they both seem quite mature at first glance.\nIn this notebook we will take the hard approach of building a dwt for ourselves and understanding it step by step.\nWe may come back to this example when we are finished with that. ;)\nIn order to do this, it may first help to recall what the fourier transformation is actually doing.\nWe therefore define functions that calculate the fourier coefficients manually by multiplying the signal with a sine- and cosine-wave and taking the integral of the result (by summing all values):",
"def fourier_coeff_i(signal, freq, res=10000):\n \"\"\" calculates the imaginary fourier coefficient of signal at frequency freq \"\"\"\n s = -np.sin(np.arange(len(signal))*2*np.pi*freq/res) # sine wave with given frequency\n return np.sum(signal * s) # integral\n\ndef fourier_coeff_r(signal, freq, res=10000):\n \"\"\" calculates the real fourier coefficient of signal at frequency freq \"\"\"\n s = np.cos(np.arange(len(signal))*2*np.pi*freq/res) # sine wave with given frequency\n return np.sum(signal * s) # integral\n\nfreqs = [50,70,100,110,120,125,130,140,150]\nfaudio = np.fft.fft(audio)\nfbins = np.fft.fftfreq(len(audio))\ncoeff_lib = lambda f: faudio[int(np.floor(f/res*len(audio)))]\nfor f in freqs:\n i = fourier_coeff_i(audio,f)\n r = fourier_coeff_r(audio,f)\n print(\"{0:3d}hz: {1:5.0f} + {2:5.0f}i (fft: {3.real:5.0f} {3.imag:+5.0f}i)\".format(f,r,i,coeff_lib(f)))",
"Now, to introduce the time domain again, we define a function that calculates a single coefficient of the windowed fourier transform for a given time and frequency:",
"def windowed_fourier(signal, freq, t, wsize=1000, res = 10000):\n window = hamming(wsize)\n s = int(np.floor(t * res - wsize//2))\n wsig = signal[s:s+wsize] * window\n return [f(wsig, freq, res=res) for f in [fourier_coeff_r, fourier_coeff_i]]\n\nargs = [\n (50,0.5),\n (50,0.7),\n (150,0.6),\n (150,1.2)\n]\nfor f, t in args:\n r, i = windowed_fourier(audio, f, t)\n print(\"{0:3d}hz, {1:5.3f}s: {2:+5.0f} {3:+5.0f}i\".format(f, t, r, i))",
"With this definition we have a function that can tell us for each frequency at each point in time how much of that frequency is present in our signal.\nHow would this function look for the wavelet transform?\nWell, actually it looks really similar, except that the wavelet is constructed in such a way that the multiplication with the window and the sine/cosine wave is replaced by a convolution of the original signal with the wavelet.\n(Remember: Wavelets like the mexican hat wavelet already have a \"windowed\" shape that drops to zero to both sides.)\nMathematically, the wavelet transform can be defined as follows:\n$$\n(T^{\\text{wav}} f)(a,b) = \\sqrt{|a|} \\int dt \\; f(t) \\psi \\left(\\frac{t-b}{a}\\right)\n$$\nwhere $a$ is the scale and relates to the frequency f while b is the location parameter relating to the time t.\nWith this definition, we can take a quick glance how the wavelet transform with a mexican hat wavelet might look like:",
"# note the response of a mexican hat wavelet of 1s length is highest for a frequency of approximately 4hz\ndef twav(signal, f):\n wav = mexican_hat(np.arange(-5,5,10.0/10000 * f/4.0), 0, 1)\n return np.convolve(signal, wav, \"same\")\n\n# remember: the sound at 50hz starts at t = 0.2s and has a duration of 1s\nplt.plot(np.arange(len(audio))/res,twav(audio,50))\nplt.xlabel(\"t[s]\")\nplt.show()",
"As you can see, the wavelet transform with the mexican hat function also filters frequency information and retains spatial information.\nThe response of the transform at 50hz shows where the corresponding sound begins and ends, but the transformed signal also oscillates so that we do not get a clear spectrum.\nFor higher frequencies, you can also see that the response overlays with the other frequency bands that are nearby.\nThis is probably because the mexican hat function does not fit the sine waves in the signal perfectly, whereas the sine waves used in the fourier transform do.\nHowever, before we further investigate the theory of the wavelet transform, I would like to return one (probably) last time to the Fourier transform.\nIt turns out, that the fourier transform for a given frequency can also be interpreted as a convolution much like the wavelet transform.\nThe only change we need to make is to mirror our windowed sine- and cosine waves at the y axis to counteract the mirroring in the convolution operation.",
"def wfourier_conv(signal, freq, wsize=1000, res=10000):\n window = hamming(wsize)\n x = (wsize-1-np.arange(wsize)) * 2 * np.pi * freq / res\n swindow = window * np.sin(x)\n cwindow = window * np.cos(x)\n sfft, cfft = [np.convolve(signal,x,\"same\") for x in [swindow, cwindow]]\n return swindow, cwindow, cfft - sfft * 1j \n\n# remember: we have issued a sound with 150hz at t = 1s with duration 5s\nsw150, cw150, f150 = wfourier_conv(audio, 150, res=res)\nfor t in [6000, 12000]:\n print(\"{0:3d}hz, {1:5.3f}s: {2.real:+5.0f} {2.imag:+5.0f}i\".format(150, t/res, f150[t]))\nplt.figure(figsize=(15,4))\nplt.subplot(121)\nplt.plot(np.arange(len(audio),dtype=\"float32\")/res,np.abs(f150))\nplt.title(\"windowed fourier transform at 150hz\")\nplt.xlabel(\"t[s]\")\n\nplt.subplot(122)\nplt.plot(np.arange(len(sw150),dtype=\"float32\")/res,sw150)\nplt.plot(np.arange(len(cw150),dtype=\"float32\")/res,cw150)\nplt.title(\"fourier 'wavelets'\")\nplt.xlabel(\"t[s]\")\nplt.show()",
"With this representation we can view the fourier transform as a special case of the wavelet transform with a \"fourier wavelet\".\nIn fact, the commonly used Morlet wavelet is composed of a complex exponential $e^{ix}$ and a gaussian.\nThe discrete wavelet transform using the Morlet wavelet is therefore identical with a windowed fourier transform using a gaussian window.\nThe discrete wavelet transform - Once more with feeling!\nNow that we have a broad grasp of what the discrete wavelet transform (DWT) actually is, we can start discussing how to do it \"properly\".\nWhat are good choices for wavelets, how can we compute the DWT efficiently, and what is all the fuzz about \"mother\"- and \"father\"-wavelets, high- and lowpass filters and so on?\nFirst, we look again at the definition of the wavelet transform:\n$$\n(T^{\\text{wav}} f)(a,b) = \\sqrt{|a|} \\int dt \\; f(t) \\psi \\left(\\frac{t-b}{a}\\right)\n$$\nThis definition uses a single wavelet function, but we can also think of the transform as using several wavelets with different scale and location parameters a and b.\nIn this (discrete) case we define discrete steps for $a = a_0^m$ and $b = n b_0 a_0^m$.\nWith this definition, we get:\n$$\n(T^{\\text{wav}} f)(a,b) = T^\\text{wav}{m,n}(f) = \\int dt \\; f(t) \\psi{m,n} (x)\n$$\nwith\n$$\n\\psi_{m,n} = a_0^{\\frac{-m}{2}} \\psi\\left(a_0^{-m}x - nb_0\\right)\n$$\nNow why do we define our frequency steps in an exponential scale $a_0^m$ instead of a linear scale $m a_0$?\nCurrently, I do not have a good answer for this, except that it makes things easier if we use $a_0$ = 2, because then we can always double the scale, which seems to make sense.\nI would, however, love to hear an explanation of an actual expert in the field, because I am quite sure there is a better explanation than that.\nIf I find one myself, I will add it here later.\nIf we accept the exponential scale, the choice of $b$ also becomes reasonable.\nWe move our wavelet along the signal in steps 
of some initial step width $b_0$.\nThis step width can be increased with increasing m (or a respectively), because increasing m means that we increase the width of the wavelet.\nShifting a very wide wavelet in very small steps would give very similar results and therefore just waste computing power.\nLet's look a little closer at those wavelets. How does a mexican hat wavelet, for example, look with different values for m and n?",
"# we assume a0 = 2 and b0 = 1\ndef psi_mn(psi, m, n):\n a = 2**m\n b = n*2**m\n wav = np.zeros(len(psi)*a + b)\n wav[b:b+len(psi)*a] = np.interp(np.arange(len(psi)*a)/a,np.arange(len(psi)),psi)\n return wav\n\npsi = mexican_hat(np.arange(-5,5,0.1),0,1)\nxlim = (0,350)\nns = [1, 30, 60]\nms = [0, 1]\nplt.figure(figsize=(15,4))\nplt.subplot(121)\nfor mi in range(len(ms)):\n m = ms[mi]\n plt.subplot(1,len(ms),mi+1)\n for n in ns:\n plt.plot(psi_mn(psi, m, n), label=\"n=\"+str(n))\n plt.title(\"m = \"+str(m))\n plt.legend(loc=\"best\")\n plt.xlim(xlim)\nplt.show()",
"We can see that the parameter m dilates the wavelet and the parameter n translates the wavelet along the x-axis.\nThe figures also show that for the same n, the translation is larger for larger m, but stays the same relative to the size of the wavelet.\nTherefore, we can think of the parameter $b_0$ which we fixed to 1 in our example as a parameter that determines how many overlap there will be between two neighboring wavelets.\nSo how does the DWT look with this definition?\nIn the following we will look at two implementations:\n1. A \"naive\" implementation twav_mn_naive using the transformed wavelets to showcase the idea of the transform.\n2. An \"efficient\" implementation twav_mn that shows how many operations are actually needed.",
"def twav_mn(f, psi, m, n):\n f_scaled = f[::2**m]\n # we have 2 scaling factors: 2**(-m/2.0) from the formula and 2**m from our step length\n # => total scaling factor is 2**(-m/2.0) * 2**m = 2 ** (m - m/2.0) = 2**(m/2.0)\n return 2**(m/2.0) * np.sum(f_scaled[n:n+len(psi)] * psi)\n\ndef twav_mn_naive(f, psi, m, n):\n pmn = psi_mn(psi, m, n)\n return 2**(-m/2.0) * np.sum(f[:len(pmn)] * pmn)\n\nm = 3\nns = np.arange(1000,1500)\nplt.plot([twav_mn(audio, psi, m, n) for n in ns],color=\"blue\", lw=6, alpha=0.3)\nplt.plot([twav_mn_naive(audio, psi, m, n) for n in ns], \"r-\")\nplt.show()",
"You can both see that the two implementations yield identical outputs and that twav_mn is much more efficient.\nInstead of transforming the wavelet, we can subsample our data and only multiply the respective chunk of the (resampled) signal with the unaltered wavelet array.\nFor $a_0 = 2$ and $b_0 = 1$ the total number of coefficients to calculate for a signal of length $N$ for $m = 0$ will be $N$, as we can center the wavelet psi around any point in the whole signal.\nFor $m = 1$ however, we only have $\\frac{N}{2}$ coefficients to compute because we subsampled our data and $n$ can only range from $0$ to $\\frac{N}{2}$.\nRepeating this consideration, we arrive at the following formula for the number of total coefficients $c$:\n$$\nc = N + \\frac{N}{2} + \\frac{N}{4} + \\frac{N}{8} + ... = 2 N\n$$\nWe now have a description of our signal of $N$ datapoints with $2N$ coefficients.\nIn doing so, we have not imposed any restriction on our wavelets other than that they are at all \"sensible\", which can be formulated mathematically as the following requirement for the \"mother wavelet\" $\\psi$:\n$$\nC_\\psi = 2 \\pi \\int_{-\\infty}^{\\infty} d\\xi \\left|\\hat{\\psi}(\\xi)\\right| |\\xi|^{-1} < \\infty\n$$\nIn this formula, $\\hat{\\psi}$ denotes the fourier transform of the wavelet function.\nThe condition essentially informally reads as \"the wavelet function is only composed of a finite amount of frequencies\".\nThis also means that the total integral of the wavelet function must be zero.\nAs Daubechies puts it, this leaves \"a lot of freedom\" on the choice of wavelet functions, but also often leads to \"very redundant\" descriptions of the signal.\nWhat is there to gain if we let go of this freedom?\nWell, we might be interested in finding a wavelet or family of wavelets that allow us to approximate any function - preferably in a way similar to the fourier transform that does not lose any information and can even be reverted.\nThe magic thing that we are looking 
for is an \"orthonormal basis of $L^2(\\mathbb{R})$\".\nBut what does this mean?\nAn orthonormal basis of a vector space, for example, is a set of unit vectors that are pairwise orthogonal to each other (that's the \"orthonormal\" part) and each vector in the whole vector space can be written as a linear combination of these vectors (the \"basis\" part).\nIn other words, if vectors $\\vec{x}$, $\\vec{y}$ and $\\vec{z}$ constitute an orthonormal basis for $\\mathbb{R}^3$, they fullfill the following properties:\n$$\n|\\vec{x}| = |\\vec{y}| = |\\vec{z}| = 1\n$$\n$$\n\\vec{x} \\cdot \\vec{y} = \\vec{x} \\cdot \\vec{z} = \\vec{y} \\cdot \\vec{z} = 0\n$$\n$$\n\\forall{p} \\in \\mathbb{R}^3 \\;\\exists a,b,c \\in \\mathbb{R}:\\;\\; \\vec{p} = a\\vec{x} + b\\vec{y} + c\\vec{z}\n$$\nThis is true for the following exemplary choices for $\\vec{x}$, $\\vec{y}$ and $\\vec{z}$:\n$$\n\\vec{x} = \\begin{pmatrix}1 \\ 0 \\ 0\\end{pmatrix}, \\vec{y} = \\begin{pmatrix}0 \\ 1 \\ 0\\end{pmatrix}, \\vec{z} = \\begin{pmatrix}0 \\ 0 \\ 1\\end{pmatrix}\n$$\n$$\n\\vec{x} = \\begin{pmatrix}-1 \\ 0 \\ 0\\end{pmatrix}, \\vec{y} = \\begin{pmatrix}0 \\ 1 \\ 0\\end{pmatrix}, \\vec{z} = \\begin{pmatrix}0 \\ 0 \\ 1\\end{pmatrix}\n$$\n$$\n\\vec{x} = \\begin{pmatrix}\\sqrt{\\frac{1}{2}} \\ \\sqrt{\\frac{1}{2}} \\ 0\\end{pmatrix}, \\vec{y} = \\begin{pmatrix}-\\sqrt{\\frac{1}{2}} \\ \\sqrt{\\frac{1}{2}} \\ 0\\end{pmatrix}, \\vec{z} = \\begin{pmatrix}0 \\ 0 \\ 1\\end{pmatrix}\n$$\n(In the last example, we have just rotated all vectors of the first example by $45^\\circ{}$ around the z-axis.)\nSo much for an orthonormal basis.\nWhat exactly is $L^2(\\mathbb{R})$?\nThe term $L^2(\\mathbb{R})$ is a shorthand for the set of functions $f: \\mathbb{R} \\rightarrow \\mathbb{R}$ that is square integrable, i.e. $\\int dx |f(x)|^2 < \\infty$. 
\nTo cut a long story short this means that if the $\\psi_{m,n}$ are an orthonormal basis of $L^2(\\mathbb{R})$, every square integrable function can be expressed as a linear combination of these $\\psi_{m,n}$.\nHow can we find such $\\psi_{m,n}$? This question turns out to be not so easy to answer, but we may look at a fairly easy example given by Daubechies: The Haar wavelet.\nHaar Haar, I am captain wavelet! - The Haar wavelet as orthonormal basis of $L^2(\\mathbb{R})$\nTo finally see some code again, we construct a random function that we want to approximate.\nSince we are in the discrete world, we can of course look at this function as being piecewise constant at our given scale of resolution.",
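Before moving on, the orthonormality claims for the three example bases above can be checked numerically (a quick NumPy sketch, treating the rows of each matrix as the vectors $\vec{x}$, $\vec{y}$, $\vec{z}$; the point $p$ is arbitrary):

```python
import numpy as np

# The three example bases for R^3 from the text above (one vector per row)
bases = [
    np.eye(3),
    np.array([[-1.0, 0, 0], [0, 1, 0], [0, 0, 1]]),
    np.array([[0.5 ** 0.5, 0.5 ** 0.5, 0],
              [-(0.5 ** 0.5), 0.5 ** 0.5, 0],
              [0, 0, 1.0]]),
]

p = np.array([3.0, -1.0, 2.0])                  # an arbitrary point in R^3
gram = [B @ B.T for B in bases]                 # identity iff rows are orthonormal
coeffs = [B @ p for B in bases]                 # (a, b, c) for each basis
recon = [c @ B for B, c in zip(bases, coeffs)]  # a*x + b*y + c*z should give p back
```

The same three properties (unit length, pairwise orthogonality, reconstruction of any point) are what we will ask of the wavelet family $\psi_{m,n}$ in $L^2(\mathbb{R})$, with the dot product replaced by an integral.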
"# generate a random walk\n# note: for reasons of simplicity we choose the length of our function to be 2^n\n# all code below can be made to work with signals of arbitrary length, but it\n# would make some examples less readable\nrfunc = np.cumsum(np.random.random(128)-0.5)\nplt.plot(rfunc)\nplt.show()",
"We will now try to approximate this function with Haar wavelets.\nThe \"mother\" Haar wavelet is defined as follows:\n$$\n\\psi(x) = \\begin{cases} 1 & 0 \\leq x < \\frac{1}{2} \\\\ -1 & \\frac{1}{2} \\leq x < 1 \\\\ 0 & \\text{otherwise}\\end{cases}\n$$",
"def haar(width):\n h = np.zeros(width)\n h[:width//2] = 1\n h[width//2:] = -1\n return h\n\nh50 = np.zeros(100)\nh50[25:75] = haar(50)\nplt.plot(h50)\nplt.ylim(-1.1,1.1)\nplt.show()",
"The essential trick for our approximation is to write our \"function\" rfunc as the sum of a coarser function and a delta function that holds the difference between this approximation and the original function.\nIn mathematical terms we define:\n$$\nf = f^1 + \\delta_1\n$$\nIf we now define $f^1$ to be a function that is piecewise constant at half the resolution of the original function, we can simply compute $f^1$ by averaging every two values of $f$:",
"def haar_split(f):\n approx = 0.5*(f[::2]+f[1::2])\n detail = f - np.repeat(approx, 2)\n return approx, detail\n\nrfunc_1, delta_1 = haar_split(rfunc)\nplt.plot(np.repeat(rfunc_1,2), label=\"$f_1$\")\nplt.plot(rfunc, label=\"$f$\")\nplt.legend()\nplt.show()",
"The interesting part of this split is the $\\delta_1$.\nBecause we defined the values of $f_1$ to be the average of two adjacent values of $f$, two adjacent values of $\\delta_1 = f - f_1$ will always have the same absolute value and will only differ in sign.\nWe can easily demonstrate this property in Python:",
"delta_diff = np.abs(delta_1[::2])-np.abs(delta_1[1::2])\n\nplt.plot(delta_1, label=\"$\\delta_1$\")\nplt.plot(delta_diff, label=\"piecewise absolute difference\")\nplt.legend()\nplt.show()",
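The property is in fact even stronger than equal absolute values: adjacent residuals are exact negatives of each other. A standalone sketch (repeating the haar_split computation on fresh data, not reusing the notebook's helpers) asserts this directly:

```python
import numpy as np

# A fresh random signal of even length.
rng = np.random.default_rng(1)
f = np.cumsum(rng.random(64) - 0.5)

# Pairwise average and the residual delta, as in haar_split above.
approx = 0.5 * (f[::2] + f[1::2])
delta = f - np.repeat(approx, 2)

# Adjacent residuals satisfy delta[2k] == -delta[2k + 1] (up to floating point):
# f0 - (f0 + f1)/2 = (f0 - f1)/2 = -(f1 - (f0 + f1)/2).
print(np.allclose(delta[::2], -delta[1::2]))  # True
```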
"We can now exploit this property by \"fitting\" a haar wavelet to these adjacent pairs.\nAll we need to do is to scale the mother wavelet by $\\delta_1(2k)$ and translate it to the appropriate location.",
"plt.plot(delta_1[:4],color=\"blue\", lw=6, alpha=0.3)\nplt.plot([0,1],haar(2)*delta_1[0],\"r-\")\nplt.plot([2,3],haar(2)*delta_1[2],\"k-\")\nplt.show()",
"For our whole $\\delta_1$ the fit then looks as follows:",
"def haar_fit(delta):\n fit = np.zeros(len(delta))\n for i in range(len(delta)//2):\n fit[2*i:2*(i+1)] = haar(2) * delta[2*i]\n return fit\n\nplt.plot(delta_1)\nplt.plot(haar_fit(delta_1),\"r--\")\nplt.show()",
"In other words, we now have a description of $\\delta_1$ in terms of coefficients for our haar wavelets:",
"def haar_coeff(delta):\n return delta[::2]\n\ndef haar_reconst(coeff):\n return np.tile(haar(2),len(coeff)) * np.repeat(coeff,2)\n\nplt.plot(delta_1)\nplt.plot(haar_reconst(haar_coeff(delta_1)),\"r--\")\nplt.show()",
"We are of course still left with $f_1$, which needs to be approximated.\nFor this function, we can repeat the process and again define\n$$\nf_1 = f_2 + \\delta_2\n$$\nin the same way as before with Haar wavelets of width 4 instead of width 2.\nWe can do this until our function is reduced to a totally constant function (the constant being the mean value of the whole function).\nFor the proof that the approximation using just Haar wavelets can be made arbitrarily precise, Daubechies continues to apply the same technique at even larger scales, but for practical purposes we can just use this mean as our final parameter for our approximation.\nThis has a nice analogy to the Fourier transform: the 0 Hz component of a Fourier transform (called the DC component) is also just the mean of the whole signal.\nWith this we can write our DWT using Haar wavelets as follows:",
"def dwt_haar(signal):\n approx, detail = haar_split(signal)\n coeffs = []\n while len(approx) > 1:\n coeffs.extend(haar_coeff(detail))\n approx, detail = haar_split(approx)\n coeffs.extend(haar_coeff(detail))\n coeffs.append(approx)\n return coeffs\n\ndef inv_dwt_haar(coeffs, plot_steps=[]):\n signal = np.array([coeffs[-1]]) # last coefficient is the mean\n csize = 1\n cidx = len(coeffs) - 1\n while cidx > 0:\n signal = np.repeat(signal, 2)\n signal += haar_reconst(coeffs[cidx-csize:cidx])\n cidx -= csize\n csize *= 2\n if csize in plot_steps:\n plt.plot(np.repeat(signal,len(coeffs)//len(signal)), label=\"csize=\"+str(csize))\n return signal\n\ncoeffs = dwt_haar(rfunc)\nplt.plot(rfunc, label=\"original function\",color=\"blue\", lw=6, alpha=0.3)\nplt.plot(inv_dwt_haar(coeffs),\"r-\", label=\"reconstructed function\")\nplt.legend()\nplt.show()\n\nplt.plot(rfunc, label=\"original function\")\ninv_dwt_haar(coeffs, plot_steps=[2, 8, 32, 64])\nplt.legend()\nplt.show()",
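Two claims from the text can be checked directly: the final approximation coefficient of the Haar DWT is the mean of the signal, and the DC (0 Hz) bin of the Fourier transform, divided by the length, is that same mean. A standalone sketch (it repeats the averaging step rather than reusing dwt_haar):

```python
import numpy as np

# A random-walk signal of dyadic length, like rfunc above.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.random(128) - 0.5)

# Repeated pairwise averaging (the approximation step of the Haar DWT)
# reduces the signal to a single constant: its mean.
approx = signal
while len(approx) > 1:
    approx = 0.5 * (approx[::2] + approx[1::2])

# The DC bin of the DFT, divided by the signal length, is also the mean.
dc = np.fft.fft(signal)[0].real / len(signal)

print(np.isclose(approx[0], signal.mean()))  # True
print(np.isclose(dc, signal.mean()))         # True
```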
"As you can see the coefficients of the DWT using a Haar wavelet describe the approximated function across different scales or in different levels of detail.\nOne can also say that the coefficients of each scale correspond to a certain frequency.\nThe coefficients with low step width correspond to high frequencies while the coefficients with large step width correspond to low frequencies.\nThe Haar wavelet is mostly only an academic example of a wavelet for which it is easy to construct an orthonormal basis of $L^2(\\mathbb{R})$ and prove that the construction works.\nThere are, of course, other orthonormal wavelet bases that have more practically relevant properties.\nIn fact, there is a recipe for constructing a function $\\psi(x)$ that can be used to construct such an orthonormal basis.\nThe only ingredient of this recipe is a scaling function $\\phi(x)$ for which the $\\phi_{0,n}$ constitute an orthonormal basis for the space $V_0 = \\lbrace f \\in L^2(\\mathbb{R}); \\; f \\text{ is piecewise constant on } [k,k+1] \\text{ for } k \\in \\mathbb{Z} \\rbrace$.\nFor such a scaling function $\\phi(x)$, a corresponding function $\\psi(x)$ can be constructed as follows:\n$$\n\\psi(x) = \\sum_{n=-\\infty}^{\\infty} (-1)^n \\alpha_{-n + 1} \\phi(2x -n)\n$$\n$$\n\\alpha_n = \\sqrt{2} \\sum_{x = -\\infty}^{\\infty} \\phi(x) \\phi_{-1,n}(x)\n$$\n$$\n\\phi_{-1,n}(x) = \\sqrt{2} \\phi(2x -n)\n$$\nThe following Python code demonstrates this construction for the Haar wavelet, where we can use\n$$\n\\phi(x) = \\begin{cases}1 & 0 \\leq x < 1 \\\\ 0 & \\text{otherwise}\\end{cases}\n$$",
"def phi_haar_f(x):\n return 1 if x >= 0 and x < 1 else 0\n\ndef alpha_f(n, func, xvals=np.arange(-1,10,0.1), dt=0.1):\n f = [2 * func(x) * func(2*x -n) for x in xvals]\n return np.sum(f) * dt\n\ndef psi_f(x, phi, nvals=range(10)):\n return sum([(-1)**n * alpha_f(-n + 1, phi) * phi(2*x - n) for n in nvals])\n\nxvals = np.arange(-2,3,0.1)\nplt.plot(xvals,[phi_haar_f(x) for x in xvals])\nfor n in range(4):\n alpha_n = alpha_f(n, phi_haar_f)\n phi_m1 = np.array([phi_haar_f(2*x - n) for x in xvals])\n plt.plot(xvals,phi_m1*alpha_n)\nplt.ylim(-0.1,1.1)\nplt.show()\n\npsi_haar = [psi_f(x, phi_haar_f) for x in xvals]\nplt.plot(xvals, psi_haar)\nplt.ylim(-1.1,1.1)\nplt.show()",
"The first plot illustrates the fact that $\\phi(x) = \\sum_{n = -\\infty}^{\\infty} \\alpha_n \\phi(2x -n)$ while the second plot shows how the construction recipe really does produce the Haar wavelet.\nThis code is of course rather clumsy, as it somewhat mixes the continuous and the discrete points of view. We will now try to write the same recipe for the general discrete case.",
"def phi_haar(width):\n return np.ones(width)\n\ndef alpha(n, phi):\n # blow up phi => phi[x] = phi2[2*x]\n phi2 = np.repeat(phi,2)\n n = 2*n\n start = max(0, n//2)\n end = min(len(phi2),(len(phi2) + n)//2)\n xs = np.arange(start, end)\n xs2 = 2*xs - n\n return np.sum(phi2[xs] * phi2[xs2])\n\ndef psi(phi):\n s = np.zeros(len(phi)*2)\n ns = range(1-len(phi), len(s))\n for n in ns:\n a = alpha(-n + 1, phi)\n before = min(max(0,n),len(phi))\n after = len(phi)-before\n phi_tmp = np.pad(phi,(before,after),\"constant\")\n sign = -1 if n % 2 == 1 else 1\n s += sign * a * phi_tmp\n return s\n\nfor n in range(-1,4):\n tmpl = \"alpha_f({0:2d}): {1:.1f}, alpha({0:2d}): {2:.1f}\"\n print(tmpl.format(n,alpha_f(n, phi_haar_f),alpha(n,phi_haar(1))))\n\npsi_haar_6 = np.zeros(6)\npsi_haar_6[2:4] = psi(phi_haar(1))\nplt.plot(np.arange(-1,2,0.5),psi_haar_6)\nplt.show()\n\npsi_haar_60 = np.repeat(psi_haar_6,10)\nplt.plot(np.arange(-1,2,0.05),psi_haar_60)\nplt.ylim(-1.1,1.1)\nplt.show()",
"This second implementation is more computationally efficient and generally applicable at the cost of being a little less readable.\nHowever, one can still see that the \"mother wavelet\" $\\psi$ can be constructed from a series of shifted and scaled versions of the \"scaling function\" $\\phi$.\n$\\phi$ is therefore sometimes called the \"father wavelet\", which is a little bit confusing since father and mother are not two independent individuals needed to produce the child wavelets; rather, the \"father\" can be used to construct the \"mother\".\n(It sounds rather strange if you put it that way.)\nThere is actually a second alternative to define $\\psi$ by defining its Fourier transform $\\hat{\\psi}$ from the Fourier transform of $\\phi$.\nWe will not go into detail about this construction strategy, but for the sake of completeness I will briefly introduce the corresponding formulas:\n$$\n\\hat{\\psi}(\\xi) = e^{\\frac{i\\xi}{2}} m_0\\left(\\frac{\\xi}{2} + \\pi\\right) \\hat{\\phi}\\left(\\frac{\\xi}{2}\\right)\n$$\n$$\nm_0(\\xi) = \\frac{1}{\\sqrt{2}} \\sum_{n=-\\infty}^{\\infty} h_n e^{-in\\xi}\n$$\n$$\nh_n = \\sqrt{2} \\sum_{x=-\\infty}^{\\infty} \\phi(x) \\phi(2x-n)\n$$\nWith this we can leave the Haar wavelet behind.\nWe have found a construction recipe for orthonormal wavelet bases and we know what the terms father wavelet and mother wavelet mean.\nWhat is missing is a really efficient algorithm to compute the DWT, which will also bring us to the description of the DWT as a set of low- and high-pass filters.\nThe DWT as subband filtering scheme\nTODO: Better explanation for formulas?\nWavelet tutorials often explain the DWT as passing the input sequence through a set of high- and low-pass filters.\nFor me, this was confusing as my understanding of wavelets was based on an understanding of the Fourier transform.\nIn this introduction, we also have not yet encountered any low-pass filters, or have we?\nIn fact, we can indeed express the DWT as a so-called
\"subband filtering scheme\", which automatically gives rise to an efficient implementation of the DWT.\nFirst of all we need to reformulate our definition of the DWT. With the $h_n$ notation from the last section we currently have:\n$$\n\\psi = \\sum_{n = -\\infty}^{\\infty} g_n \\phi_{-1, n}\n$$\n$$\ng_n = \\sum_{x = -\\infty}^{\\infty} \\psi(x) \\phi_{-1, n}(x) = (-1)^n h_{-n+1}\n$$\nWith this the $\\psi_{j, k}$ become\n$$\n\\psi_{j,k}(x) = \\sum_{n=-\\infty}^{\\infty} g_{n-2k} \\phi_{j-1, n}(x)\n$$\nNote: At this point I will switch notations. For the sake of clarity I used $\\sum_{x = -\\infty}^{\\infty} f(x) g(x)$ where Daubechies used the notation $\\langle f, g \\rangle$ which is indeed shorter and easier to read once one becomes familiar with it. I will probably go back and change the notation throughout the entire document once I find the time.\nHowever, we actually do not need the $\\psi_{j,k}$ themselves but only our wavelet coefficients $\\langle f, \\psi_{j,k} \\rangle$, which are given by\n$$\n\\langle f, \\psi_{j,k} \\rangle = \\sum_{n = -\\infty}^{\\infty} g_{n-2k} \\langle f, \\phi_{j-1, n} \\rangle\n$$\nThis definition can again be rewritten as a convolution followed by a downsampling by factor 2 (retaining only the even samples):\n$$\n\\langle f, \\psi_{j,k} \\rangle = \\left( (\\langle f, \\phi_{j-1, n} \\rangle)_{n \\in \\mathbb{Z}} * (g_{-n})_{n \\in \\mathbb{Z}} \\right) \\downarrow 2\n$$\nAll we need to know to exploit this computation scheme are the $\\langle f, \\phi_{j-1, k} \\rangle$, for which we can also find a convenient definition:\n$$\n\\langle f, \\phi_{j,k} \\rangle = \\sum_{n = -\\infty}^{\\infty} h_{n-2k} \\langle f, \\phi_{j-1,n} \\rangle\n$$\n$$\n\\langle f, \\phi_{j,k} \\rangle = \\left( (\\langle f, \\phi_{j-1, n} \\rangle)_{n \\in \\mathbb{Z}} * (h_{-n})_{n \\in \\mathbb{Z}} \\right) \\downarrow 2\n$$\nSo, to sum up, we now have the following algorithm:\n* calculate the $\\langle f, \\phi_{0, n} \\rangle$\n* calculate $h_n$ and $g_n$
(only needs to be done once for a given $\\phi$)\n* loop for $j = 1, 2, 3, \\dots, j_{\\text{max}}$\n * calculate $\\langle f, \\phi_{j, k} \\rangle$ from $h_n$ and $\\langle f, \\phi_{j-1,k} \\rangle$\n * calculate $\\langle f, \\psi_{j,k} \\rangle$ from $g_n$ and $\\langle f, \\phi_{j-1,k} \\rangle$\n* store the \"detail coefficients\" $\\langle f, \\psi_{j,k} \\rangle$ for all $j$ and $k$ and the remaining \"approximation coefficients\" $\\langle f, \\phi_{j_{\\text{max}}, k} \\rangle$\nHow is this algorithm related to subband filtering?\nWell, the sequence $(h_{-n})_{n \\in \\mathbb{Z}}$ can be seen as a low-pass filter that produces successively \"coarser\" approximations of the original signal, while the sequence $(g_{-n})_{n \\in \\mathbb{Z}}$ constitutes a high-pass filter that captures the details lost in this approximation.\nThe convolution followed by a downsampling is the same operation as applying the corresponding filter.\nWe have now seen a lot of formulas, so it's time again to produce some code. Our (possibly last) version of the DWT:\nTODO: explain reconstruction (upsampling with interleaving zeros!)",
"def filters_hg(phi):\n \"\"\"\n Constructs the filters h and g\n \"\"\"\n phi2 = np.repeat(phi,2)\n # we only need the indices [-len(phi)+1, len(phi)*2) but we want the\n # filter to be centered at index zero\n ns_h = np.arange(-len(phi)+1,len(phi)*2)\n ns_g = - ns_h + 1\n ns_g = ns_g[::-1]\n hs = np.zeros(len(ns_h), dtype=\"float32\")\n gs = np.zeros(len(ns_h), dtype=\"float32\")\n for i in range(len(ns_h)):\n n_h = 2*ns_h[i]\n n_g = 2*ns_g[i]\n start = max(0, n_h//2)\n end = min(len(phi2),(len(phi2) + n_h)//2)\n xs = np.arange(start, end)\n xs2 = 2*xs - n_h\n # we need to divide by two because we operate on a \"blown up\" version of phi\n hs[i] = np.sqrt(2) * np.sum(phi2[xs] * phi2[xs2]) / 2.0\n gs[len(gs)-1-i] = -(n_g % 4 - 1) * hs[i]\n # we want our filters to be centered at index zero => add zeros at front or back as required\n add_front_h = len(phi) # len(phi)*2-1 = last index, -len(phi)+1 = first index => difference = len(phi)\n add_back_g = len(phi)-2 # -len(phi)*2 + 2 = first index, len(phi) = last index => difference = len(phi)-2\n hs = np.pad(hs, (add_front_h, 0), \"constant\")\n gs = np.pad(gs, (max(0,-add_back_g), max(0, add_back_g)), \"constant\")\n return hs, gs\n\nhs, gs = filters_hg(phi_haar(1))\nprint(\"h\", hs)\nprint(\"g\", gs)\nplt.plot(hs, label=\"$h_n$\")\nplt.plot(gs, label=\"$g_n$\")\nplt.ylim(-1,1)\nplt.legend(loc=\"best\")\nplt.show()",
"For the Haar basis we again see shapes similar to the mother and father wavelets.\nThis is because the filters work similarly to $\\phi$ and $\\psi$.\nThe low-pass filter $h_n$ smoothes the signal by averaging adjacent values, while the high-pass filter $g_n$ retains exactly the high-frequency fluctuations that are removed by $h_n$.\nWith this first example we will now continue by looking at a single decomposition and reconstruction step of our subband filtering scheme.",
"def upsampleZero(ar,n):\n \"\"\" upsampling function that adds zeros between the sample values \"\"\"\n res = np.zeros(len(ar)*n)\n res[::n] = ar\n return res\n\ndef sbf_split(data, h, g):\n \"\"\" decomposition by convolution and downsampling \"\"\"\n # set starting points so that first filter position is centered at data[0]\n sh, sg = (len(h)//2, len(g)//2)\n approx = np.convolve(data, h[::-1], \"full\")[sh:sh+len(data):2]\n detail = np.convolve(data, g[::-1], \"full\")[sg:sg+len(data):2]\n return approx, detail\n\ndef sbf_reconst(approx, detail, h, g):\n \"\"\" reconstruction by upsampling and convolution \"\"\"\n # set starting points so that first filter position is centered at approx[0]/detail[0]\n sh, sg = (len(h)//2, len(g)//2)\n data = np.convolve(upsampleZero(approx, 2), h, \"full\")[sh:sh+len(approx)*2]\n data += np.convolve(upsampleZero(detail, 2), g, \"full\")[sg:sg+len(detail)*2]\n return data\n\nh, g = filters_hg(phi_haar(1))\n#h = [0, 0.7071, 0.7071]\n#g = [0, -0.7071, 0.7071]\ndata = [1,2,3,4]\ncs, ds = sbf_split(data, h, g)\nrecs = sbf_reconst(cs, ds, h, g)\nprint(\"c_0 (orig.) \",data)\nprint(\"c_1 \",cs)\nprint(\"d_1 \",ds)\nprint(\"c_0 (rec.) \",recs)",
"This simple example shows how the subband filtering works.\nWe decompose our $\\langle f, \\phi_{0,n} \\rangle = c^0 = (1, 2, 3, 4)$ by convolving the sequence with $(h_{-n})_{n \\in \\mathbb{Z}}$ and $(g_{-n})_{n \\in \\mathbb{Z}}$ to obtain our approximation coefficients $c^1 = \\langle f, \\phi_{1,n} \\rangle$ and detail coefficients $d^1 = \\langle f, \\psi_{1,n} \\rangle$.\nLater, we can reconstruct $c^0$ from $c^1$ and $d^1$ by upsampling and convolving:\n$$\nc^0 = \\left( c^1 \\uparrow 2 \\right) * (h_{n})_{n \\in \\mathbb{Z}} + \\left( d^1 \\uparrow 2 \\right) * (g_{n})_{n \\in \\mathbb{Z}}\n$$\nThe DWT as subband filtering scheme is now nothing more than the repetition of this same operation until the approximation sequence is sufficiently small (we will reduce the approximation to length 1).",
"def approx_f(coeffs, phi, j=0):\n \"\"\" reconstructs the actual function approximation from approximation coefficients \"\"\"\n # TODO this has to be checked for correctness for any other wavelet basis than Haar wavelets\n return 2**(-j/2.0) * np.correlate(coeffs, phi, \"same\")\n\ndef phi0(signal, phi):\n return np.convolve(signal, phi, \"same\")\n\ndef dwt_subband(signal, phi, plot_steps=[], colors={}):\n hs, gs = filters_hg(phi)\n phi0n = phi0(signal, phi)\n approx = phi0n\n coeffs = []\n while len(approx) > 1:\n approx, detail = sbf_split(approx, hs, gs)\n if len(approx) in plot_steps:\n j = np.log2(len(signal)/len(approx))\n l = \"f^{:.0f} (dwt)\".format(j)\n plt.plot(np.repeat(approx_f(approx, phi, j), 2**j), lw=6, alpha=0.3, color=colors[len(approx)], label=l)\n coeffs.append(detail)\n coeffs.append(approx)\n return coeffs\n\ndef inv_dwt_subband(coeffs, phi, plot_steps=[], colors={}):\n hs, gs = filters_hg(phi)\n psi_base = psi(phi)\n approx = coeffs[-1]\n idx = len(coeffs)-2\n while idx >= 0:\n detail = coeffs[idx]\n approx = sbf_reconst(approx, detail, hs, gs)\n if len(approx) in plot_steps:\n l = \"f^{:.0f} (idwt)\".format(idx)\n plt.plot(np.repeat(approx_f(approx, phi, idx), 2**idx), color=colors[len(approx)], label=l)\n idx -= 1\n return approx_f(approx, phi)\n\nfilter_len = 128\nfilter_signal = np.cumsum(np.random.random(filter_len)-0.5)\n\nsteps = [2, 8, 64]\ncolors = {2 : \"red\", 8: \"blue\", 64: \"green\"}\ndsb = dwt_subband(filter_signal, phi_haar(1), plot_steps=steps, colors=colors)\ndsbi = inv_dwt_subband(dsb, phi_haar(1), plot_steps=steps, colors=colors)\nplt.legend(loc=\"best\")\nplt.show()\n\nplt.plot(filter_signal, lw=6, alpha=0.3, label=\"original signal\")\nplt.plot(dsbi, label=\"reconstructed signal\")\nplt.legend(loc=\"best\")\nplt.show()",
"As you can see, the approximation coefficients at any given scale $j$ suffice to characterize the approximation $f^j$ of our $f^0$.\nThe decomposition builds coarser and coarser approximations and the reconstruction uses these coarse approximations along with the detail coefficients (the information \"lost\" in the approximation) to reconstruct the next finer approximation.\nIn principle, these functions should work for any choice of $\\phi$, because every DWT can be described as a subband filtering scheme.\nWhat is now still missing is a discussion of the typical choices for wavelets. Examples are the Meyer wavelet, the Battle-Lemarie family and the Daubechies and Symlet wavelets.\nOriginally, one of the questions that drove me to write this introduction was \"What does the formula for the Symlet wavelets look like?\".\nIf you should have the same question, let me cut a long story short: there is no closed-form description of the Daubechies or Symlet wavelets.\nThey are produced as FIR filters by an approximation algorithm using trigonometric polynomials.\nI may look into what that exactly means in another notebook.\nWe also did not cover some parts of wavelet theory, such as the concept of \"frames\". I will therefore leave this TODO list here and replace the items with links to other notebooks if I choose to explore these questions in the future.\nTODO:\n* revisit sound example with dwt_subband and scaling function for mexican hat wavelet\n* compare with PyWavelets\n* IIR vs FIR\n * all \"closed-form\" wavelets correspond to IIR filters\n * for efficient implementation, FIR filters are better\n * construction of FIR filters from trigonometric polynomials\n * $\\Rightarrow$ Daubechies, Symlets\n* construction of other wavelets (Meyer, Battle-Lemarie, ...)\n* frames?"
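As a pointer towards the "compare with PyWavelets" TODO item: libraries such as PyWavelets use the orthonormal Haar filters $h = (1/\sqrt{2}, 1/\sqrt{2})$ and $g = (1/\sqrt{2}, -1/\sqrt{2})$ rather than the plain averaging used above, so their coefficients differ from ours by powers of $\sqrt{2}$. A minimal standalone sketch of one orthonormal analysis/synthesis step (an illustration; it does not reuse the notebook's helpers):

```python
import numpy as np

# Orthonormal Haar analysis filters.
h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass

signal = np.array([1.0, 2.0, 3.0, 4.0])

# One analysis step: pairwise filtering, then downsampling by 2.
approx = signal[::2] * h[0] + signal[1::2] * h[1]
detail = signal[::2] * g[0] + signal[1::2] * g[1]

# Orthonormality preserves energy (Parseval).
energy_in = np.sum(signal**2)
energy_out = np.sum(approx**2) + np.sum(detail**2)
print(np.isclose(energy_in, energy_out))  # True

# One synthesis step: upsample and filter back; perfect reconstruction.
rec = np.empty_like(signal)
rec[::2] = approx * h[0] + detail * g[0]
rec[1::2] = approx * h[1] + detail * g[1]
print(np.allclose(rec, signal))  # True
```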
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/tensorflow-gcp-tools
|
examples/cloud_fit.ipynb
|
apache-2.0
|
[
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"<table align=\"left\">\n <td>\n <a href=\"https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps://github.com/GoogleCloudPlatform/tensorflow-gcp-tools/blob/master/examples/cloud_fit.ipynb\">\n <img src=\"https://www.gstatic.com/images/branding/product/1x/google_cloud_48dp.png\" alt=\"AI Platform Notebooks\"> Run in AI Platform Notebooks\n </a>\n </td>\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/tensorflow-gcp-tools/blob/master/examples/cloud_fit.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/tensorflow-gcp-tools/blob/master/examples/cloud_fit.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nFollowing is a quick introduction to cloud_fit. cloud_fit enables training on Google Cloud AI Platform in the same manner as model.fit().\nIn this notebook, we will start by installing the required libraries, then proceed with two samples showing how to use numpy.array and tf.data.Dataset with cloud_fit.\nWhat are the components of cloud_fit()?\ncloud_fit has two main components:\nclient.py: serializes the provided data and model along with typical model.fit() parameters and triggers an AI Platform training job\n``` python\ndef cloud_fit(model,\n remote_dir: Text,\n region: Text = None,\n project_id: Text = None,\n image_uri: Text = None,\n distribution_strategy: Text = DEFAULT_DISTRIBUTION_STRATEGY,\n job_spec: Dict[str, Any] = None,\n job_id: Text = None,\n **fit_kwargs) -> Text:\n \"\"\"Facilitates remote execution of in memory Models and Datasets on AI Platform.\nArgs:\n model: A compiled Keras Model.\n remote_dir: Google Cloud Storage path for temporary assets and AI Platform\n training output.
Will overwrite value in job_spec.\n region: Target region for running the AI Platform Training job.\n project_id: Project id where the training should be deployed to.\n image_uri: base image used to use for AI Platform Training\n distribution_strategy: Specifies the distribution strategy for remote\n execution when a jobspec is provided. Accepted values are strategy names\n as specified by 'tf.distribute.<strategy>.name'.\n job_spec: AI Platform training job_spec, will take precedence over all other\n provided values except for remote_dir. If none is provided a default\n cluster spec and distribution strategy will be used.\n job_id: A name to use for the AI Platform Training job (mixed-case letters,\n numbers, and underscores only, starting with a letter).\n **fit_kwargs: Args to pass to model.fit() including training and eval data.\n Only keyword arguments are supported. Callback functions will be\n serialized as is.\nReturns:\n AI Platform job ID\nRaises:\n RuntimeError: If executing in graph mode, eager execution is required for\n cloud_fit.\n NotImplementedError: Tensorflow v1.x is not supported.\n \"\"\"\n```\nremote.py: a job that takes a remote_dir as a parameter, loads the model and data from this location, and executes the training with the stored parameters.\n```python\ndef run(remote_dir: Text, distribution_strategy_text: Text):\n \"\"\"deserializes Model and Dataset and runs them.\nArgs:\n remote_dir: Temporary cloud storage folder that contains model and Dataset\n graph. This folder is also used for job output.\n distribution_strategy_text: Specifies the distribution strategy for remote\n execution when a jobspec is provided.
Accepted values are strategy names\n as specified by 'tf.distribute.<strategy>.name'.\n \"\"\"\n```\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nAI Platform Training\nCloud Storage\n\nLearn about AI Platform Training\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the AI Platform APIs\n\n\nIf running locally on your own machine, you will need to install the Google Cloud SDK.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nAuthenticate your Google Cloud account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. Skip these steps.",
"import sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nif 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()\n\n# If you are running this tutorial in a notebook locally, replace the string\n# below with the path to your service account key and run this cell to\n# authenticate your Google Cloud account.\nelse:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n# Log in to your account on Google Cloud\n! gcloud auth application-default login --quiet\n! gcloud auth login --quiet",
"Clone and build tensorflow_enterprise_addons\nTo use the latest version of the tensorflow_enterprise_addons, we will clone and build the repo. The resulting whl file is both used in the client side as well as in construction of a docker image for remote execution.",
"!git clone https://github.com/GoogleCloudPlatform/tensorflow-gcp-tools.git\n\n!cd tensorflow-gcp-tools/python && python3 setup.py -q bdist_wheel\n\n!pip install -U tensorflow-gcp-tools/python/dist/tensorflow_enterprise_addons-*.whl --quiet",
"Restart the Kernel\nWe will automatically restart your kernel so the notebook has access to the packages you installed.",
"# Restart the kernel after pip installs\nimport IPython\napp = IPython.Application.instance()\napp.kernel.do_shutdown(True)",
"Import libraries and define constants",
"import os\nimport uuid\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow_enterprise_addons.cloud_fit import client\n\n# Setup and imports\nREMOTE_DIR = '[gcs-bucket-for-temporary-files]' #@param {type:\"string\"}\nREGION = 'us-central1' #@param {type:\"string\"}\nPROJECT_ID = '[your-project-id]' #@param {type:\"string\"}\n! gcloud config set project $PROJECT_ID\n\n# Note: an f-string is needed here so that PROJECT_ID is actually interpolated.\nIMAGE_URI = f'gcr.io/{PROJECT_ID}/[name-for-docker-image]:latest'",
"Create a Dockerfile with tensorflow_enterprise_addons\nIn the next step we create a base Dockerfile with the latest wheel file to use for remote training. You may use any base image; however, DLVM base images come pre-installed with most of the needed packages.",
"%%file Dockerfile\n\n# Using DLVM base image\nFROM gcr.io/deeplearning-platform-release/tf2-cpu\nWORKDIR /root\n\n# Path configuration\nENV PATH $PATH:/root/tools/google-cloud-sdk/bin\n\n# Make sure gsutil will use the default service account\nRUN echo '[GoogleCompute]\\nservice_account = default' > /etc/boto.cfg\n\n# Copy and install tensorflow_enterprise_addons wheel file\nADD tensorflow-gcp-tools/python/dist/tensorflow_enterprise_addons-*.whl /tmp/\nRUN pip3 install --upgrade /tmp/tensorflow_enterprise_addons-*.whl --quiet\n\n# Sets up the entry point to invoke cloud_fit.\nENTRYPOINT [\"python3\",\"-m\",\"tensorflow_enterprise_addons.cloud_fit.remote\"]\n\n!docker build -t {IMAGE_URI} -f Dockerfile . -q && docker push {IMAGE_URI}",
"Tutorial 1 - Functional model\nIn this sample we will demonstrate using numpy.array as input data by creating a basic model and submitting it for remote training.\nDefine model building function",
"\"\"\"Simple model to compute y = wx + 1, with w trainable.\"\"\"\ninp = tf.keras.layers.Input(shape=(1,), dtype=tf.float32)\ntimes_w = tf.keras.layers.Dense(\n units=1,\n kernel_initializer=tf.keras.initializers.Constant([[0.5]]),\n kernel_regularizer=tf.keras.regularizers.l2(0.01),\n use_bias=False)\nplus_1 = tf.keras.layers.Dense(\n units=1,\n kernel_initializer=tf.keras.initializers.Constant([[1.0]]),\n bias_initializer=tf.keras.initializers.Constant([1.0]),\n trainable=False)\noutp = plus_1(times_w(inp))\nsimple_model = tf.keras.Model(inp, outp)\n\nsimple_model.compile(tf.keras.optimizers.SGD(0.002),\n \"mean_squared_error\", run_eagerly=True)",
"Prepare Data",
"# Creating sample data\nx = [[9.], [10.], [11.]] * 10\ny = [[xi[0]/2. + 6] for xi in x]",
"Run the model locally for validation",
"# Verify the model by training locally for one step.\nsimple_model.fit(np.array(x), np.array(y), batch_size=len(x), epochs=1)",
"Submit model and dataset for remote training",
"# Create a unique remote sub folder path for assets and model training output.\nSIMPLE_REMOTE_DIR = os.path.join(REMOTE_DIR, str(uuid.uuid4()))\nprint('your remote folder is %s' % (SIMPLE_REMOTE_DIR))\n\n# Using the default configuration, with two workers dividing the dataset between them.\n# Note: steps_per_epoch must be an integer, hence the floor division.\nsimple_model_job_id = client.cloud_fit(\n    model=simple_model, remote_dir=SIMPLE_REMOTE_DIR, region=REGION,\n    image_uri=IMAGE_URI, x=np.array(x), y=np.array(y), epochs=100,\n    steps_per_epoch=len(x)//2, verbose=2)\n\n!gcloud ai-platform jobs describe projects/{PROJECT_ID}/jobs/{simple_model_job_id}",
"Retrieve the trained model\nOnce the training is complete you can access the trained model at remote_folder/output",
"# Load the trained model from gcs bucket\ntrained_simple_model = tf.keras.models.load_model(os.path.join(SIMPLE_REMOTE_DIR, 'output'))\n\n# Test that the saved model loads and works properly\ntrained_simple_model.evaluate(x,y)",
"Tutorial 2 - Sequential Models and Datasets\nIn this sample we will demonstrate using datasets by creating a basic model and submitting it for remote training.\nDefine model building function",
"# create a model\nfashion_mnist_model = tf.keras.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(10)\n])\n\nfashion_mnist_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])",
"Prepare Data",
"train, test = tf.keras.datasets.fashion_mnist.load_data()\nimages, labels = train\nimages = images/255\ndataset = tf.data.Dataset.from_tensor_slices((images, labels))\ndataset = dataset.batch(32)",
"Run the model locally for validation",
"# Verify the model by training locally for one step. This is not necessary prior to cloud.fit() however it is recommended.\nfashion_mnist_model.fit(dataset, epochs=1)",
"Submit model and dataset for remote training",
"# Create a unique remote sub folder path for assets and model training output.\nFASHION_REMOTE_DIR = os.path.join(REMOTE_DIR, str(uuid.uuid4()))\nprint('your remote folder is %s' % (FASHION_REMOTE_DIR))\n\nfashion_mnist_model_job_id = client.cloud_fit(model=fashion_mnist_model, remote_dir = FASHION_REMOTE_DIR,region =REGION , image_uri=IMAGE_URI, x=dataset,epochs=10, steps_per_epoch=15,verbose=2)\n\n!gcloud ai-platform jobs describe projects/{PROJECT_ID}/jobs/{fashion_mnist_model_job_id}",
"Retrieve the trained model\nOnce the training is complete you can access the trained model at remote_folder/output",
"# Load the trained model from gcs bucket\ntrained_fashion_mnist_model = tf.keras.models.load_model(os.path.join(FASHION_REMOTE_DIR, 'output'))\n\n# Test that the saved model loads and works properly\ntest_images, test_labels = test\ntest_images = test_images/255\ntest_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels))\ntest_dataset = test_dataset.batch(32)\n\ntrained_fashion_mnist_model.evaluate(test_dataset)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
newlawrence/poliastro
|
docs/source/examples/Using NEOS package.ipynb
|
mit
|
[
"Using NEOS package\nWith the new poliastro version (0.7.0), a new package is included: NEOs package.\nThe docstrings of this package states the following:\n\nFunctions related to NEOs and different NASA APIs. All of them are coded as part of SOCIS 2017 proposal.\n\nSo, first of all, an important question:\nWhat are NEOs?\nNEO stands for near-Earth object. The Center for NEO Studies (CNEOS) defines NEOs as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.\nAnd what does \"near\" exactly mean? In terms of orbital elements, asteroids and comets can be considered NEOs if their perihelion (orbit point which is nearest to the Sun) is less than 1.3 au = 1.945 * 10<sup>8</sup> km from the Sun.",
"import matplotlib.pyplot as plt\nplt.ion()\n\nfrom astropy import time\n\nfrom poliastro.twobody.orbit import Orbit\nfrom poliastro.bodies import Earth\nfrom poliastro.plotting import OrbitPlotter",
"NeoWS module\nThis module make requests to NASA NEO Webservice, so you'll need an internet connection to run the next examples.\nThe simplest neows function is orbit_from_name(), which return an Orbit object given a name:",
"from poliastro.neos import neows\n\neros = neows.orbit_from_name('Eros')\n\nframe = OrbitPlotter()\nframe.plot(eros, label='Eros');",
"You can also search by IAU number or SPK-ID (there is a faster neows.orbit_from_spk_id() function in that case, although):",
"ganymed = neows.orbit_from_name('1036') # Ganymed IAU number\namor = neows.orbit_from_name('2001221') # Amor SPK-ID\neros = neows.orbit_from_spk_id('2000433') # Eros SPK-ID\n\nframe = OrbitPlotter()\nframe.plot(ganymed, label='Ganymed')\nframe.plot(amor, label='Amor')\nframe.plot(eros, label='Eros');",
"Since neows relies on Small-Body Database browser to get the SPK-ID given a body name, you can use the wildcards from that browser: * and ?.\n<div class=\"alert alert-info\">Keep it in mind that `orbit_from_name()` can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies.</div>",
"neows.orbit_from_name('*alley')",
"<div class=\"alert alert-info\">Note that epoch is provided by the Web Service itself, so if you need orbit on another epoch, you have to propagate it:</div>",
"eros.epoch.iso\n\nepoch = time.Time(2458000.0, scale='tdb', format='jd')\neros_november = eros.propagate(epoch)\neros_november.epoch.iso",
"Given that we are using NASA APIs, there is a maximum number of requests. If you want to make many requests, it is recommended getting a NASA API key. You can use your API key adding the api_key parameter to the function:",
"neows.orbit_from_name('Toutatis', api_key='DEMO_KEY')",
"DASTCOM5 module\nThis module can also be used to get NEOs orbit, in the same way that neows, but it have some advantages (and some disadvantages).\nIt relies on DASTCOM5 database, a NASA/JPL maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is updated typically a couple times per day, but \n potentially as frequently as once per hour, so you can download it whenever you want the more recently discovered bodies. This also means that, after downloading the file, you can use the database offline.\nThe file is a ~230 MB zip that you can manually download and unzip in ~/.poliastro or, more easily, you can use\nPython\ndastcom5.download_dastcom5()\nThe main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is orbit_from_name():",
"from poliastro.neos import dastcom5\n\natira = dastcom5.orbit_from_name('atira')[0] # NEO\nwikipedia = dastcom5.orbit_from_name('wikipedia')[0] # Asteroid, but not NEO.\nframe = OrbitPlotter()\nframe.plot(atira, label='Atira (NEO)')\nframe.plot(wikipedia, label='Wikipedia (asteroid)');",
"Keep in mind that this function returns a list of orbits matching your string. This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history) what allow plots like this one:",
"halleys = dastcom5.orbit_from_name('1P')\n\nframe = OrbitPlotter()\nframe.plot(halleys[0], label='Halley')\nframe.plot(halleys[5], label='Halley')\nframe.plot(halleys[10], label='Halley')\nframe.plot(halleys[20], label='Halley')\nframe.plot(halleys[-1], label='Halley');",
"While neows can only be used to get Orbit objects, dastcom5 can also provide asteroid and comet complete database.\nOnce you have this, you can get specific data about one or more bodies. The complete databases are ndarrays, so if you want to know the entire list of available parameters, you can look at the dtype, and they are also explained in\ndocumentation API Reference:",
"ast_db = dastcom5.asteroid_db()\ncomet_db = dastcom5.comet_db()\nast_db.dtype.names[:20] # They are more than 100, but that would be too much lines in this notebook :P",
"<div class=\"alert alert-info\">Asteroid and comet parameters are not exactly the same (although they are very close):</div>\n\nWith these ndarrays you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.\nFor example, NEOs can be grouped in several ways. One of the NEOs group is called Atiras, and is formed by NEOs whose orbits are contained entirely with the orbit of the Earth. They are a really little group, and we can try to plot all of these NEOs using asteroid_db():\nTalking in orbital terms, Atiras have an aphelion distance, Q < 0.983 au and a semi-major axis, a < 1.0 au.\nVisiting documentation API Reference, you can see that DASTCOM5 provides semi-major axis, but doesn't provide aphelion distance. You can get aphelion distance easily knowing perihelion distance (q, QR in DASTCOM5) and semi-major axis Q = 2*a - q, but there are probably many other ways.",
"aphelion_condition = 2 * ast_db['A'] - ast_db['QR'] < 0.983\naxis_condition = ast_db['A'] < 1.3 \natiras = ast_db[aphelion_condition & axis_condition]",
"The number of Atira NEOs we use using this method is:",
"len(atiras)",
"Which is consistent with the stats published by CNEOS\nNow we're gonna plot all of their orbits, with corresponding labels, just because we love plots :)",
"from poliastro.twobody.orbit import Orbit\nfrom poliastro.bodies import Earth\n\nearth = Orbit.from_body_ephem(Earth)",
"We only need to get the 16 orbits from these 16 ndarrays.\nThere are two ways:\n\nGather all their orbital elements manually and use the Orbit.from_classical() function.\nUse the NO property (logical record number in DASTCOM5 database) and the dastcom5.orbit_from_record() function.\n\nThe second one seems easier and it is related to the current notebook, so we are going to use that one:\nWe are going to use ASTNAM property of DASTCOM5 database:",
"frame = OrbitPlotter()\n\nframe.plot(earth, label='Earth')\n\nfor record in atiras['NO']:\n ss = dastcom5.orbit_from_record(record).to_icrs()\n frame.plot(ss, color=\"#666666\")",
"If we needed also the names of each asteroid, we could do:",
"frame = OrbitPlotter()\n\nframe.plot(earth, label='Earth')\n\nfor i in range(len(atiras)):\n record = atiras['NO'][i]\n label = atiras['ASTNAM'][i].decode().strip() # DASTCOM5 strings are binary\n ss = dastcom5.orbit_from_record(record).to_icrs()\n frame.plot(ss, label=label)",
"<div class=\"alert alert-info\">We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted.</div>\n\nFinally, another interesting function in dastcom5 is entire_db(), which is really similar to ast_db and com_db, but it returns a Pandas dataframe instead of a numpy ndarray. The dataframe has asteroids and comets in it, but in order to achieve that (and a more manageable dataframe), a lot of parameters were removed, and others were renamed:",
"db = dastcom5.entire_db()\ndb.columns",
"Also, in this function, DASTCOM5 data (specially strings) is ready to use (decoded and improved strings, etc):",
"db[db.NAME == 'Halley'] # As you can see, Halley is the name of an asteroid too, did you know that?",
"Panda offers many functionalities, and can also be used in the same way as the ast_db and comet_db functions:",
"aphelion_condition = (2 * db['A'] - db['QR']) < 0.983\naxis_condition = db['A'] < 1.3 \natiras = db[aphelion_condition & axis_condition]\n\nlen(atiras)",
"What? I said they can be used in the same way!\nDont worry :) If you want to know what's happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis!",
"len(atiras[atiras.A < 0])",
"So, rewriting our condition:",
"axis_condition = (db['A'] < 1.3) & (db['A'] > 0)\natiras = db[aphelion_condition & axis_condition]\nlen(atiras)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anhaidgroup/py_entitymatching
|
notebooks/guides/end_to_end_em_guides/Basic EM Workflow Restaurants - 2.ipynb
|
bsd-3-clause
|
[
"Basic EM workflow 2 (Restaurants data set)\nIntroduction\nThis IPython notebook explains a basic workflow two tables using py_entitymatching. Our goal is to come up with a workflow to match restaurants from Fodors and Zagat sites. Specifically, we want to achieve precision and recall above 96%. The datasets contain information about the restaurants.\nFirst, we need to import py_entitymatching package and other libraries as follows:",
"import sys\nsys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/')\n\nimport py_entitymatching as em\nimport pandas as pd\nimport os\n\n# Display the versions\nprint('python version: ' + sys.version )\nprint('pandas version: ' + pd.__version__ )\nprint('magellan version: ' + em.__version__ )",
"Matching two tables typically consists of the following three steps:\n 1. Reading the input tables \n 2. Blocking the input tables to get a candidate set \n 3. Matching the tuple pairs in the candidate set \nRead input tables\nWe begin by loading the input tables. For the purpose of this guide, we use the datasets that are included with the package.",
"# Get the paths\npath_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/fodors.csv'\npath_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/zagats.csv'\n\n# Load csv files as dataframes and set the key attribute in the dataframe\nA = em.read_csv_metadata(path_A, key='id')\nB = em.read_csv_metadata(path_B, key='id')\n\nprint('Number of tuples in A: ' + str(len(A)))\nprint('Number of tuples in B: ' + str(len(B)))\nprint('Number of tuples in A X B (i.e the cartesian product): ' + str(len(A)*len(B)))\n\nA.head(2)\n\nB.head(2)\n\n# Display the keys of the input tables\nem.get_key(A), em.get_key(B)\n\n# If the tables are large we can downsample the tables like this\nA1, B1 = em.down_sample(A, B, 200, 1, show_progress=False)\nlen(A1), len(B1)\n\n# But for the purposes of this notebook, we will use the entire table A and B",
"Block Tables To Get Candidate Set\nBefore we do the matching, we would like to remove the obviously non-matching tuple pairs from the input tables. This would reduce the number of tuple pairs considered for matching.\npy_entitymatching provides four different blockers: (1) attribute equivalence, (2) overlap, (3) rule-based, and (4) black-box. The user can mix and match these blockers to form a blocking sequence applied to input tables.\nFor the matching problem at hand, we know that two restaurants with no overlap between the names will not match. So we decide the apply blocking over names:",
"# Blocking plan\n\n# A, B -- Overlap blocker [name] --------------------|---> candidate set\n\n# Create overlap blocker\nob = em.OverlapBlocker()\n\n# Block tables using 'name' attribute \nC = ob.block_tables(A, B, 'name', 'name', \n l_output_attrs=['name', 'addr', 'city', 'phone'], \n r_output_attrs=['name', 'addr', 'city', 'phone'],\n overlap_size=1, show_progress=False\n )\nlen(C)",
"Match tuple pairs in candidate set\nIn this step, we would want to match the tuple pairs in the candidate set. Specifically, we use learning-based method for matching purposes.\nThis typically involves the following four steps:\n\nSampling and labeling the candidate set\nSplitting the labeled data into development and evaluation set\nSelecting the best learning based matcher using the development set\nEvaluating the selected matcher using the evaluation set\n\nSampling and labeling the candidate set\nFirst, we randomly sample 450 tuple pairs for labeling purposes.",
"# Sample candidate set\nS = em.sample_table(C, 450)",
"Next, we label the sampled candidate set. Specify we would enter 1 for a match and 0 for a non-match.",
"# Label S\nG = em.label_table(S, 'gold')",
"For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.",
"# Load the pre-labeled data\npath_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/lbl_restnt_wf1.csv'\nG = em.read_csv_metadata(path_G, \n key='_id',\n ltable=A, rtable=B, \n fk_ltable='ltable_id', fk_rtable='rtable_id')\nlen(G)",
"Splitting the labeled data into development and evaluation set\nIn this step, we split the labeled data into two sets: development (I) and evaluation (J). Specifically, the development set is used to come up with the best learning-based matcher and the evaluation set used to evaluate the selected matcher on unseen data.",
"# Split S into development set (I) and evaluation set (J)\nIJ = em.split_train_test(G, train_proportion=0.7, random_state=0)\nI = IJ['train']\nJ = IJ['test']",
"Selecting the best learning-based matcher\nSelecting the best learning-based matcher typically involves the following steps:\n\nCreating a set of learning-based matchers\nCreating features\nConverting the development set into feature vectors\nSelecting the best learning-based matcher using k-fold cross validation\n\nCreating a set of learning-based matchers",
"# Create a set of ML-matchers\ndt = em.DTMatcher(name='DecisionTree', random_state=0)\nsvm = em.SVMMatcher(name='SVM', random_state=0)\nrf = em.RFMatcher(name='RF', random_state=0)\nlg = em.LogRegMatcher(name='LogReg', random_state=0)\nln = em.LinRegMatcher(name='LinReg')\nnb = em.NBMatcher(name='NaiveBayes')",
"Creating features\nNext, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.",
"# Generate features\nfeature_table = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)",
"Converting the development set to feature vectors",
"# Convert the I into a set of feature vectors using F\nH = em.extract_feature_vecs(I, \n feature_table=feature_table, \n attrs_after='gold',\n show_progress=False)\n\n# Display first few rows\nH.head(3)",
"Selecting the best matcher using cross-validation\nNow, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five fold cross validation and use 'precision' and 'recall' metric to select the best matcher.",
"# Select the best ML matcher using CV\nresult = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],\n k=5,\n target_attr='gold', metric_to_select_matcher='precision', random_state=0)\nresult['cv_stats']\n\nresult = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],\n k=5,\n target_attr='gold', metric_to_select_matcher='recall', random_state=0)\nresult['cv_stats']",
"We observe that the best matcher (RF) is getting us the best precision and recall. So, we select this matcher and now we can proceed on to evaluating the best matcher on the unseen data (the evaluation set).\nEvaluating the matching output\nEvaluating the matching outputs for the evaluation set typically involves the following four steps:\n1. Converting the evaluation set to feature vectors\n2. Training matcher using the feature vectors extracted from the development set\n3. Predicting the evaluation set using the trained matcher\n4. Evaluating the predicted matches\nConverting the evaluation set to feature vectors\nAs before, we convert to the feature vectors (using the feature table and the evaluation set)",
"# Convert J into a set of feature vectors using feature table\nL = em.extract_feature_vecs(J, feature_table=feature_table,\n attrs_after='gold', show_progress=False)",
"Training the selected matcher\nNow, we train the matcher using all of the feature vectors from the development set. For the purposes of this guide we use random forest as the selected matcher.",
"# Train using feature vectors from I \ndt.fit(table=H, \n exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'], \n target_attr='gold')",
"Predicting the matches\nNext, we predict the matches for the evaluation set (using the feature vectors extracted from it).",
"# Predict on L \npredictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'], \n append=True, target_attr='predicted', inplace=False)",
"Evaluating the predictions\nFinally, we evaluate the accuracy of predicted outputs",
"# Evaluate the predictions\neval_result = em.eval_matches(predictions, 'gold', 'predicted')\nem.print_eval_summary(eval_result)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chagaz/ma2823_2016
|
lab_notebooks/Lab 5 2016-10-14 Nearest neighbors.ipynb
|
mit
|
[
"2016-10-14: Nearest neighbors\nIn this lab, we will apply nearest neighbors classification to the Endometrium vs. Uterus cancer data. For documentation see: http://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-classification and http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier\nLet us start by setting up our environment, loading the data, and setting up our cross-validation.",
"import numpy as np\n%pylab inline",
"Question Load the data as in the previous lab.",
"# Set up a stratified 10-fold cross-validation\nfrom sklearn import cross_validation\nfolds = cross_validation.StratifiedKFold(y, 10, shuffle=True)\n\n# This is the cross-validation method with scaling we defined in the previous labs. \nfrom sklearn import preprocessing\ndef cross_validate(design_matrix, labels, classifier, cv_folds):\n \"\"\" Perform a cross-validation and returns the predictions. \n \n Parameters:\n -----------\n design_matrix: (n_samples, n_features) np.array\n Design matrix for the experiment.\n labels: (n_samples, ) np.array\n Vector of labels.\n classifier: sklearn classifier object\n Classifier instance; must have the following methods:\n - fit(X, y) to train the classifier on the data X, y\n - predict_proba(X) to apply the trained classifier to the data X and return probability estimates \n cv_folds: sklearn cross-validation object\n Cross-validation iterator.\n \n Return:\n -------\n pred: (n_samples, ) np.array\n Vectors of predictions (same order as labels).\n \"\"\"\n pred = np.zeros(labels.shape)\n for tr, te in cv_folds:\n # Restrict data to train/test folds\n Xtr = design_matrix[tr, :]\n ytr = labels[tr]\n Xte = design_matrix[te, :]\n\n # Scale data\n scaler = preprocessing.StandardScaler() # create scaler\n Xtr = scaler.fit_transform(Xtr) # fit the scaler to the training data and transform training data\n Xte = scaler.transform(Xte) # transform test data\n \n # Fit classifier\n classifier.fit(Xtr, ytr)\n\n # Predict probabilities (of belonging to +1 class) on test data\n yte_pred = classifier.predict_proba(Xte)\n index_of_class_1 = (1-classifier.classes_[0])/2 # 0 if the first sample is positive, 1 otherwise\n pred[te] = yte_pred[:, index_of_class_1]\n return pred",
"Question A nearest-neighbors classifier with k neighbors can be instantiated as:\nclf = neighbors.KNeighborsClassifier(n_neighbors=k)\nCross-validate 15 nearest-neighbors classifiers, for k ranging from 1 to 29 (odd values of k only). Plot the area under the ROC curves you obtained as a function of k. \nWhy are we not using even values for k?",
"from sklearn import neighbors\nfrom sklearn import metrics\naurocs = []\n\nfor k in range(1, 30, 2): # values from 1 to 30, with a step size of 2\n # TODO: Compute the vector ypred of cross-validated predictions of a k-nearest-neighbor classifier.\n\n fpr, tpr, thresholds = metrics.roc_curve(y, ypred, pos_label=1)\n aurocs.append(metrics.auc(fpr, tpr)) \n\nplt.plot(range(1, 30, 2), aurocs, color='blue')\nplt.xlabel('Number of nearest neighbors', fontsize=16)\nplt.ylabel('Cross-validated AUC', fontsize=16)\nplt.title('Nearest neighbors classification', fontsize=16)",
"Question Use 'grid_search.GridSearchCV' to set the optimal value of k automatically. On the previous plot, plot the area under the ROC curve you obtain as a horizontal line.\nComment If the area under the ROC curve is lower than what you were expecting, check the score (i.e. scoring parameter) for which the grid search CV parameter was optimized.\nLet us look at the optimal value of the parameter k returned for the last fold.",
"print clf.best_params_",
"Question Modify cross_validate(design_matrix, labels, classifier, cv_folds) to take as classifier a GridSearchCV instance and print the best parameter(s) for each fold.",
"def cross_validate_optimize(design_matrix, labels, classifier, cv_folds):\n \"\"\" Perform a cross-validation and returns the predictions. \n \n Parameters:\n -----------\n design_matrix: (n_samples, n_features) np.array\n Design matrix for the experiment.\n labels: (n_samples, ) np.array\n Vector of labels.\n classifier: sklearn GridSearchCV object\n GridSearchCV instance; must have the following methods/attributes:\n - fit(X, y) to train the classifier on the data X, y\n - predict_proba(X) to apply the trained classifier to the data X and return probability estimates \n cv_folds: sklearn cross-validation object\n - best_params_ the best parameter dictionary\n Cross-validation iterator.\n \n Return:\n -------\n pred: (n_samples, ) np.array\n Vector of predictions (same order as labels).\n \"\"\"\n # TODO",
"Question How many nearest neighbors were chosen for each fold? How stable is this value?",
"from sklearn import grid_search\nparam_grid = {'n_neighbors': range(1, 30, 2)}\nclf = grid_search.GridSearchCV(neighbors.KNeighborsClassifier(), \n param_grid, scoring='roc_auc')\nypred = cross_validate_optimize(X, y, clf, folds)\nfpr, tpr, thresholds = metrics.roc_curve(y, ypred, pos_label=1)",
"Question How does the nearest-neighbors classifier compare to the linear regression (regularized or not)? Plot ROC curves.\nQuestion What distance was used to define nearest neighbors? What other distances can you use? How does this affect performance?\nKaggle challenge\nYou can find the documentation for nearest neighbors regression here: http://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression \n* What parameters can you change?\n* Cross-validate several different nearest neighbors regressors (different=that use different parameters) on your data, using the folds you previously set up. How do the different variants of nearest neighbors compare to each other? How do they compare to performance obtained with other algorithms?\n* Submit predictions to the leaderboard for the best of your nearest-neighbors models. Do the results on the leaderboard data match your expectations?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Naereen/notebooks
|
Using_Python_to_solve_Regexp_CrossWord_Puzzles.ipynb
|
mit
|
[
"Using Python to solve Regexp CrossWord Puzzles\nHave a look at the amazing https://regexcrossword.com/ website.\nI played during about two hours, and could manually solve almost all problems, quite easily for most of them.\nBut then I got stucked on this one.\nSoooooo. I want to use Python3 regular expressions and try to solve any such cross-word puzzles.\nWarning: This notebook will not explain the concept and syntax of regular expressions, go read on about it on Wikipedia or in a good book. The Python documentation gives a nice introduction here.\n\nAuthor: Lilian Besson (@Naereen) ;\nLicense: MIT License ;\nDate: 28-02-2021.\n\nRepresentation of a problem\nHere is a screenshot from the game webpage.\n\nAs you can see, an instance of this game is determined by its rectangular size, let's denote it $(m, n)$, so here there are $m=5$ lines and $n=5$ columns.\nI'll also use this easy problem:\n\nLet's define both, in a small dictionnary containing two to four lists of regexps.\nEasy problem of size $(2,2)$ with four constraints",
"problem1 = {\n \"left_lines\": [\n r\"HE|LL|O+\", # HE|LL|O+ line 1\n r\"[PLEASE]+\", # [PLEASE]+ line 2\n ],\n \"right_lines\": None,\n \"top_columns\": [\n r\"[^SPEAK]+\", # [^SPEAK]+ column 1\n r\"EP|IP|EF\", # EP|IP|EF column 2\n ],\n \"bottom_columns\": None,\n}",
"The keys \"right_lines\" and \"bottom_columns\" can be empty, as for easier problems there are no constraints on the right and bottom.\nEach line and column (but not each square) contains a regular expression, on a common alphabet of letters and symbols.\nLet's write $\\Sigma$ this alphabet, which in the most general case is $\\Sigma={$ A, B, ..., Z, 0, ..., 9, :, ?, ., $, -$}$.\nFor the first beginner problem, the alphabet can be shorten:",
"alphabet1 = {\n 'H', 'E', 'L', 'O',\n 'P', 'L', 'E', 'A', 'S', 'E',\n 'S', 'P', 'E', 'A', 'K',\n 'E', 'P', 'I', 'P', 'I', 'F',\n}\n\nprint(f\"alphabet1 = \\n{sorted(alphabet1)}\")",
"Difficult problem of size $(5,5)$ with 20 constraints\nDefining the second problem is just a question of more copy-pasting:",
"problem2 = {\n \"left_lines\": [\n r\"(N3|TRA|N7)+\", # left line 1\n r\"[1LOVE2?4]+.\", # left line 2\n r\"(A|D)M[5-8$L]+\", # left line 3\n r\"[^\\s0ILAD]+\", # left line 4\n r\"[B-E]+(.)\\1.\", # left line 5\n ],\n \"right_lines\": [\n r\"[^OLD\\s]+\", # right line 1\n r\"(\\d+)[LA\\s$?]+\", # right line 2\n r\"(\\-P|5\\$|AM|Z|L)+\", # right line 3\n r\"(\\-D|\\-WE)+[^L4-9N$?]+\", # right line 4\n r\"[FED$?]+\", # right line 5\n ],\n \"top_columns\": [\n r\"[2TAIL\\-D]+\", # top column 1\n r\"(WE|R4|RY|M)+\", # top column 2\n r\"[FEAL3-5S]+\", # top column 3\n r\"[^FA\\sT1-2]+F\", # top column 4\n r\"[LO\\s\\?5-8]+\", # top column 5\n ],\n \"bottom_columns\": [\n r\"[^ILYO]+\", # top column 1\n r\".+[MURDEW]+\", # top column 2\n r\"[1ALF5$E\\s]+\", # top column 3\n r\"[\\dFAN$?]+\", # top column 4\n r\".+\\s.+\\?\", # top column 5\n ],\n}",
"And its alphabet:",
"import string\n\nalphabet2 = set(string.digits) \\\n | set(string.ascii_uppercase) \\\n | { ':', '?', '.', '$', '-' }\n\nprint(f\"alphabet2 = \\n{sorted(alphabet2)}\")",
"An intermediate problem of size $(3,3)$ with 12 constraints\nDefining the third problem is just a question of more copy-pasting:",
"problem3 = {\n \"left_lines\": [\n r\"[ONE]*[SKA]\", # left line 1\n r\".*(RE|ER)\", # left line 2\n r\"A+[TUB]*\", # left line 3\n ],\n \"right_lines\": [\n r\".*(O|S)*\", # right line 1\n r\"[^GOA]*\", # right line 2\n r\"[STUPA]+\", # right line 3\n ],\n \"top_columns\": [\n r\".*[GAF]*\", # top column 1\n r\"(P|ET|O|TEA)*\", # top column 2\n r\"[RUSH]+\", # top column 3\n ],\n \"bottom_columns\": [\n r\"(NF|FA|A|FN)+\", # top column 1\n r\".*(A|E|I).*\", # top column 2\n r\"[SUPER]*\", # top column 3\n ],\n}",
"And its alphabet:",
"alphabet3 = {\n 'O', 'N', 'E', 'S', 'K', 'A',\n 'R', 'E', 'E', 'R',\n 'A', 'T', 'U', 'B',\n \n 'O', 'S',\n 'G', 'O', 'A',\n 'S', 'T', 'U', 'P', 'A',\n \n 'G', 'A', 'F',\n 'P', 'E', 'T', 'O', 'T', 'E', 'A',\n 'R', 'U', 'S', 'H',\n\n 'N', 'F', 'F', 'A', 'A', 'F', 'N',\n 'A', 'E', 'I',\n 'S', 'U', 'P', 'E', 'R',\n}\n\nprint(f\"alphabet3 = \\n{sorted(alphabet3)}\")",
"A few useful functions\nLet's first extract the dimension of a problem:",
"def dimension_problem(problem):\n m = len(problem['left_lines'])\n if problem['right_lines'] is not None:\n assert m == len(problem['right_lines'])\n n = len(problem['top_columns'])\n if problem['bottom_columns'] is not None:\n assert n == len(problem['bottom_columns'])\n return (m, n)\n\nproblem1\n\ndimension_problem(problem1)",
"Now let's write a representation of a grid, a solution (or partial solution) of a problem:",
"___ = \"_\" # represents an empty answer, as _ is not in the alphabet\ngrid1_partial = [\n [ 'H', ___ ],\n [ ___, 'P' ],\n]\n\ngrid1_solution = [\n [ 'H', 'E' ],\n [ 'L', 'P' ],\n]",
"As well as a few complete grids which are NOT solutions",
"grid1_wrong1 = [\n [ 'H', 'E' ],\n [ 'L', 'F' ],\n]\n\ngrid1_wrong2 = [\n [ 'H', 'E' ],\n [ 'E', 'P' ],\n]\n\ngrid1_wrong3 = [\n [ 'H', 'E' ],\n [ 'O', 'F' ],\n]\n\ngrid1_wrong4 = [\n [ 'O', 'E' ],\n [ 'O', 'F' ],\n]",
"We also write these short functions to extract the $i$-th line or $j$-th column:",
"def nth_line(grid, line):\n return \"\".join(grid[line])\n\ndef nth_column(grid, column):\n return \"\".join(grid[line][column] for line in range(len(grid)))\n\n[ nth_line(grid1_solution, line) for line in range(len(grid1_solution)) ]\n\n[ nth_column(grid1_solution, column) for column in range(len(grid1_solution[0])) ]",
"A partial solution for the intermediate problem:",
"___ = \"_\" # represents an empty answer, as _ is not in the alphabet\ngrid3_solution = [\n [ 'N', 'O', 'S' ],\n [ 'F', 'E', 'R' ],\n [ 'A', 'T', 'U' ],\n]",
"And a partial solution for the harder problem:",
"___ = \"_\" # represents an empty answer, as _ is not in the alphabet\ngrid2_partial = [\n [ 'T', 'R', 'A', 'N', '7' ],\n [ '2', '4', ___, ___, ' ' ],\n [ 'A', ___, ___, ___, ___ ],\n [ '-', ___, ___, ___, ___ ],\n [ 'D', ___, ___, ___, '?' ],\n]",
"Let's extract the dimension of a grid, just to check it:",
"def dimension_grid(grid):\n m = len(grid)\n n = len(grid[0])\n assert all(n == len(grid[i]) for i in range(1, m))\n return (m, n)\n\nprint(f\"Grid grid1_partial has dimension: {dimension_grid(grid1_partial)}\")\nprint(f\"Grid grid1_solution has dimension: {dimension_grid(grid1_solution)}\")\n\nprint(f\"Grid grid2_partial has dimension: {dimension_grid(grid2_partial)}\")\n\ndef check_dimensions(problem, grid):\n return dimension_problem(problem) == dimension_grid(grid)\n\nassert check_dimensions(problem1, grid1_partial)\nassert check_dimensions(problem1, grid1_solution)\n\nassert not check_dimensions(problem2, grid1_partial)\n\nassert check_dimensions(problem2, grid2_partial)\n\nassert not check_dimensions(problem1, grid2_partial)",
"Two more checks\nWe also have to check if a word is in an alphabet:",
"def check_alphabet(alphabet, word, debug=True):\n result = True\n for i, letter in enumerate(word):\n new_result = letter in alphabet\n if debug and result and not new_result:\n print(f\"The word {repr(word)} is not in alphabet {repr(alphabet)}, as its #{i}th letter {letter} is not present.\")\n result = result and new_result\n return result\n\nassert check_alphabet(alphabet1, 'H' 'E') # concatenate the strings\n\nassert check_alphabet(alphabet1, 'H' 'E')\nassert check_alphabet(alphabet1, 'L' 'P')\nassert check_alphabet(alphabet1, 'H' 'L')\nassert check_alphabet(alphabet1, 'E' 'P')\n\nassert check_alphabet(alphabet2, \"TRAN7\")",
"And also check that a word matches a regexp:",
"import re",
"As the documentation explains it:\n\nbut using prog = re.compile(regepx) and saving the resulting regular expression object prog for reuse is more efficient when the expression will be used several times in a single program.\n\nI don't want to have to think about compiling a regexp before using it, so... I'm gonna memoize them!",
"memory_of_compiled_regexps = dict()",
"Now we are ready to write our \"smart\" match function:",
"def match(regexp, word, debug=True):\n global memory_of_compiled_regexps\n if regexp not in memory_of_compiled_regexps:\n prog = re.compile(regexp)\n memory_of_compiled_regexps[regexp] = prog\n print(f\"For the first time seeing this regexp {repr(regexp)}, compiling it and storing in memory_of_compiled_regexps, now of size {len(memory_of_compiled_regexps)}.\")\n else:\n prog = memory_of_compiled_regexps[regexp]\n \n result = re.fullmatch(regexp, word)\n result = prog.fullmatch(word)\n\n entire_match = result is not None\n # entire_match = result.group(0) == word\n if debug:\n if entire_match:\n print(f\"The word {repr(word)} is matched by {repr(regexp)}\")\n else:\n print(f\"The word {repr(word)} is NOT matched by {repr(regexp)}\")\n return entire_match",
"Let's compare the time of the first match and next ones:",
"%%time\nmatch(r\"(N3|TRA|N7)+\", \"TRAN7\")\n\n%%time\nmatch(r\"(N3|TRA|N7)+\", \"TRAN8\")",
"Well of course it's not different for tiny test like this.",
"match(r\"(N3|TRA|N7)+\", \"\")\n\nmatch(r\"(N3|TRA|N7)+\", \"TRA\")",
"That should be enough to start the first \"easy\" task.",
"%timeit match(r\"(N3|TRA|N7)+\", \"TRA\", debug=False)\n\n%timeit re.fullmatch(r\"(N3|TRA|N7)+\", \"TRA\")",
"We can see that our \"memoization trick\" indeed helped to speed-up the time required to check a regexp, by about a factor 2, even for very small tests like this.\nFirst easy task: check that a line/column word validate its contraints\nGiven a problem $P$ of dimension $(m, n)$, its alphabet $\\Sigma$, a position $i \\in [| 0, m-1 |]$ of a line or $j \\times [|0, n-1 |]$ of a column, and a word $w \\in \\Sigma^k$ (with $k=m$ for line or $k=n$ for column), I want to write a function that checks the validity of each (left/right) line, or (top/bottom) constraints.\nTo ease debugging, and in the goal of using this Python program to improve my skills in solving such puzzles, I don't want this function to just reply True or False, but to also print for each constraints if it is satisfied or not.\nBonus: for each regexp contraint, highlight the parts which corresponded to each letter of the word?\nFor lines\nWe are ready to check the one or two constraints of a line.\nThe same function will be written for columns, just below.",
"def check_line(problem, alphabet, word, position, debug=True, early=False):\n if not check_alphabet(alphabet, word, debug=debug):\n return False\n m, n = dimension_problem(problem)\n if len(word) != n:\n if debug:\n print(f\"Word {repr(word)} does not have correct size n = {n} for lines\")\n return False\n assert 0 <= position < m\n constraints = []\n if \"left_lines\" in problem and problem[\"left_lines\"] is not None:\n constraints += [ problem[\"left_lines\"][position] ]\n if \"right_lines\" in problem and problem[\"right_lines\"] is not None:\n constraints += [ problem[\"right_lines\"][position] ]\n # okay we have one or two constraint for this line,\n assert len(constraints) in {1, 2}\n # let's check them!\n result = True\n for cnb, constraint in enumerate(constraints):\n if debug:\n print(f\"For line constraint #{cnb} {repr(constraint)}:\")\n new_result = match(constraint, word, debug=debug)\n if early and not new_result: return False\n result = result and new_result\n return result",
"Let's try it!",
"problem1, alphabet1, grid1_solution\n\nn, m = dimension_problem(problem1)\n\nfor line in range(n):\n word = nth_line(grid1_solution, line)\n print(f\"- For line number {line}, checking word {repr(word)}:\")\n result = check_line(problem1, alphabet1, word, line)\n\nn, m = dimension_problem(problem1)\nfake_words = [\"OK\", \"HEY\", \"NOT\", \"HELL\", \"N\", \"\", \"HU\", \"OO\", \"EA\"]\n\nfor word in fake_words:\n print(f\"# For word {repr(word)}:\")\n for line in range(n):\n result = check_line(problem1, alphabet1, word, line)\n print(f\" => {result}\")",
"That was long, but it works fine!",
"n, m = dimension_problem(problem2)\n\nfor line in [0]:\n word = nth_line(grid2_partial, line)\n print(f\"- For line number {line}, checking word {repr(word)}:\")\n result = check_line(problem2, alphabet2, word, line)\n print(f\" => {result}\")\n\nn, m = dimension_problem(problem2)\nfake_words = [\n \"TRAN8\", \"N2TRA\", # violate first constraint\n \"N3N3N7\", \"N3N3\", \"TRA9\", # smaller or bigger dimension\n \"O L D\", \"TRA \", # violate second contraint\n]\n\nfor word in fake_words:\n for line in [0]:\n print(f\"- For line number {line}, checking word {repr(word)}:\")\n result = check_line(problem2, alphabet2, word, line)\n print(f\" => {result}\")",
"For columns\nWe are ready to check the one or two constraints of a line.\nThe same function will be written for columns, just below.",
"def check_column(problem, alphabet, word, position, debug=True, early=False):\n if not check_alphabet(alphabet, word, debug=debug):\n return False\n m, n = dimension_problem(problem)\n if len(word) != m:\n if debug:\n print(f\"Word {repr(word)} does not have correct size n = {n} for columns\")\n return False\n assert 0 <= position < n\n constraints = []\n if \"top_columns\" in problem and problem[\"top_columns\"] is not None:\n constraints += [ problem[\"top_columns\"][position] ]\n if \"bottom_columns\" in problem and problem[\"bottom_columns\"] is not None:\n constraints += [ problem[\"bottom_columns\"][position] ]\n # okay we have one or two constraint for this column,\n assert len(constraints) in {1, 2}\n # let's check them!\n result = True\n for cnb, constraint in enumerate(constraints):\n if debug:\n print(f\"For column constraint #{cnb} {repr(constraint)}:\")\n new_result = match(constraint, word, debug=debug)\n if early and not new_result: return False\n result = result and new_result\n return result",
"Let's try it!",
"problem1, alphabet1, grid1_solution\n\nn, m = dimension_problem(problem1)\n\nfor column in range(m):\n word = nth_column(grid1_solution, column)\n print(f\"- For column number {column}, checking word {repr(word)}:\")\n result = check_column(problem1, alphabet1, word, column)\n\nn, m = dimension_problem(problem1)\nfake_words = [\"OK\", \"HEY\", \"NOT\", \"HELL\", \"N\", \"\", \"HU\", \"OO\", \"EA\"]\n\nfor word in fake_words:\n print(f\"# For word {repr(word)}:\")\n for column in range(m):\n result = check_column(problem1, alphabet1, word, column)\n print(f\" => {result}\")",
"That was long, but it works fine!",
"n, m = dimension_problem(problem2)\n\nfor column in [0]:\n word = nth_column(grid2_partial, column)\n print(f\"- For column number {column}, checking word {repr(word)}:\")\n result = check_column(problem2, alphabet2, word, column)\n print(f\" => {result}\")\n\nn, m = dimension_problem(problem2)\nfake_words = [\n \"TRAN8\", \"N2TRA\", # violate first constraint\n \"N3N3N7\", \"N3N3\", \"TRA9\", # smaller or bigger dimension\n \"O L D\", \"TRA \", # violate second contraint\n]\n\nfor word in fake_words:\n for line in [0]:\n print(f\"- For line number {line}, checking word {repr(word)}:\")\n result = check_column(problem2, alphabet2, word, line)\n print(f\" => {result}\")",
"Second easy task: check that a proposed grid is a valid solution\nI think it's easy, as we just have to use $m$ times the check_line and $n$ times the check_column functions.",
"def check_grid(problem, alphabet, grid, debug=True, early=False):\n m, n = dimension_problem(problem)\n \n ok_lines = [False] * m\n for line in range(m):\n word = nth_line(grid, line)\n ok_lines[line] = check_line(problem, alphabet, word, line, debug=debug, early=early)\n \n ok_columns = [False] * n\n for column in range(n):\n word = nth_column(grid, column)\n ok_columns[column] = check_column(problem, alphabet, word, column, debug=debug, early=early)\n \n return all(ok_lines) and all(ok_columns)",
"Let's try it!\nFor the easy problem\nFor a partial grid, of course it's going to be invalid just because '_' is not in the alphabet $\\Sigma$.",
"check_grid(problem1, alphabet1, grid1_partial)",
"For a complete grid, let's check that our solution is valid:",
"check_grid(problem1, alphabet1, grid1_solution)",
"And let's also check that the few wrong solutions are indeed not valid:",
"check_grid(problem1, alphabet1, grid1_wrong1)\n\ncheck_grid(problem1, alphabet1, grid1_wrong2)\n\ncheck_grid(problem1, alphabet1, grid1_wrong3)\n\ncheck_grid(problem1, alphabet1, grid1_wrong4)",
"We can see that for each wrong grid, at least one of the contraint is violated!\nThat's pretty good!\nFor the intermediate problem\nMy solution for the intermediate problem problem3 is indeed valid:",
"check_grid(problem3, alphabet3, grid3_solution)",
"For the hard problem\nWell I don't have a solution yet, so I cannot check it!\nThird easy task: generate all words of a given size in the alphabet\nUsing itertools.product and the alphabet defined above, it's going to be easy.\nNote that I'll first try with a smaller alphabet, to check the result (for problem 1).",
"import itertools\n\ndef all_words_of_alphabet(alphabet, size):\n yield from itertools.product(alphabet, repeat=size)",
"Just a quick check:",
"list(all_words_of_alphabet(['0', '1'], 3))",
"The time and memory complexity of this function should be $\\mathcal{O}(|\\Sigma|^k)$ for words of size $k\\in\\mathbb{N}^*$.",
"alphabet0 = ['0', '1']\nlen_alphabet = len(alphabet0)\nfor k in [2, 3, 4, 5]:\n print(f\"Generating {len_alphabet**k} words of size = {k} takes about\")\n %timeit list(all_words_of_alphabet(alphabet0, k))\n\n%timeit list(all_words_of_alphabet(['0', '1', '2', '3'], 10))",
"We can quickly check that even for the larger alphabet of size ~40, it's quite quick for small words of length $\\leq 5$:",
"len_alphabet = len(alphabet1)\nfor k in [2, 3, 4, 5]:\n print(f\"Generating {len_alphabet**k} words of size = {k} takes about\")\n %timeit list(all_words_of_alphabet(alphabet1, k))\n\nlen_alphabet = len(alphabet2)\nfor k in [2, 3, 4, 5]:\n print(f\"Generating {len_alphabet**k} words of size = {k} takes about\")\n %timeit list(all_words_of_alphabet(alphabet2, k))",
"Who, it takes 12 seconds to just generate all the possible words for the largest problem (which is just of size $(5,5)$)...\nI'm afraid that my naive approach to solve the puzzle will be VERY slow...\nFourth easy task: generate all grids of a given size",
"def all_grids_of_alphabet(alphabet, lines, columns):\n all_words = list(itertools.product(alphabet, repeat=columns))\n all_words = [ \"\".join(words) for words in all_words ]\n all_grids = itertools.product(all_words, repeat=lines)\n for pre_tr_grid in all_grids:\n tr_grid = [\n [\n pre_tr_grid[line][column]\n for line in range(lines)\n ]\n for column in range(columns)\n ]\n yield tr_grid\n\nfor alphabet in ( ['0', '1'], ['T', 'A', 'C', 'G'] ):\n for (n, m) in [ (1, 1), (2, 2), (1, 2), (2, 1), (3, 3), (3, 2), (2, 3) ]:\n assert len(list(all_grids_of_alphabet(alphabet, n, m))) == len(alphabet)**(n*m)\n print(list(all_grids_of_alphabet(alphabet0, n, m))[0])\n print(list(all_grids_of_alphabet(alphabet0, n, m))[-1])\n\nprint(f\"For the alphabet {alphabet0} of size = {len(alphabet0)} :\")\nfor (n, m) in [ (1, 1), (2, 1), (1, 2), (2, 2) ]:\n %time all_these_grids = list(all_grids_of_alphabet(alphabet0, n, m))\n print(f\"For (n, m) = {(n, m)} the number of grids is {len(all_these_grids)}\")",
"How long does it take and how many grids for the easy problem?",
"print(f\"For the alphabet {alphabet1} of size = {len(alphabet1)} :\")\nfor (n, m) in [ (1, 1), (2, 1), (1, 2), (2, 2) ]:\n %time all_these_grids = list(all_grids_of_alphabet(alphabet1, n, m))\n print(f\"For (n, m) = {(n, m)} the number of grids is {len(all_these_grids)}\")",
"That's still pretty small and fast!\nHow long does it take and how many grids for the hard problem?",
"print(f\"For the alphabet {alphabet2} of size = {len(alphabet2)} :\")\nfor (n, m) in [ (1, 1), (2, 1), (1, 2), (2, 2) ]:\n %time all_these_grids = list(all_grids_of_alphabet(alphabet2, n, m))\n print(f\"For (n, m) = {(n, m)} the number of grids is {len(all_these_grids)}\")\n\n41**(2*3)",
"Just for $(n, m) = (2, 2)$ it takes about 7 seconds...\nSo to scale for $(n, m) = (5, 5)$ would just take... WAY TOO MUCH TIME!",
"n, m = 5, 5\n41**(5*5)\n\nimport math\n\nmath.log10(41**(5*5))",
"For a grid of size $(5,5)$, the number of different possible grids is about $10^{40}$, that is CRAZY large, we have no hope of solving this problem with a brute force approach.\nHow much time would that require, just to generate the grids?",
"s = 7\nestimate_of_running_time = 7*s * len(alphabet1)**(5*5) / len(alphabet1)**(2*2)\nestimate_of_running_time # in seconds",
"This rough estimate gives about $5 * 10^{22}$ seconds, about $10^{15}$ years, so about a million of billion years !",
"math.log10( estimate_of_running_time / (60*60*24*365) )",
"First difficult task: for each possible grid, check if its valid",
"def naive_solve(problem, alphabet, debug=False, early=True):\n n, m = dimension_problem(problem)\n good_grids = []\n for possible_grid in all_grids_of_alphabet(alphabet, n, m):\n is_good_grid = check_grid(problem, alphabet, possible_grid, debug=debug, early=early)\n if is_good_grid:\n if early:\n return [ possible_grid ]\n good_grids.append(possible_grid)\n return good_grids",
"Let's try it!\nSolving the easy problem\nLet's check that we can quickly find one solution:",
"%%time\ngood_grids1 = naive_solve(problem1, alphabet1, debug=False, early=True)\n\nprint(f\"For problem 1\\n{problem1}\\nOn alphabet\\n{alphabet1}\\n==> We found one solution:\\n{good_grids1}\")",
"Then can we find more solutions?",
"%%time\ngood_grids1 = naive_solve(problem1, alphabet1, debug=False, early=False)\n\nprint(f\"For problem 1\\n{problem1}\\nOn alphabet\\n{alphabet1}\\n==> We found these solutions:\\n{good_grids1}\")",
"No there is indeed a unique solution here for the first \"easy\" problem!\nSolving the intermediate problem",
"%%time\ngood_grids3 = naive_solve(problem3, alphabet3, debug=False, early=True)\n\nprint(f\"For problem 3\\n{problem3}\\nOn alphabet\\n{alphabet3}\\n==> We found one solution:\\n{good_grids3}\")",
"That was so long...\nI could try to use Pypy3 IPython kernel, to speed things up?\n\nYes it's possible to use a Pypy kernel from your regular Python notebook!\nSee https://stackoverflow.com/questions/33850577/is-it-possible-to-run-a-pypy-kernel-in-the-jupyter-notebook\n\nSolving the hard problem\nMost probably, it will run forever if I use the naive approach of:\n\ngenerate all grids of $m$ words of size $n$ in given alphabet $\\Sigma$ ;\nfor all grid:\ntest it using naive algorithm\nif it's valid: adds it to the list of good grids\n\n\n\nThere are $|\\Sigma|^{n \\times m}$ possible grids, so this approach is doubly exponential in $n$ for square grids.\nI must think of a better approach...\nBeing just exponential in $\\max(m, n)$ would imply that it's practical for the harder problem of size $(5,5)$.",
"%%time\ngood_grids2 = naive_solve(problem2, alphabet2, debug=False, early=True)\n\nprint(f\"For problem 2\\n{problem2}\\nOn alphabet\\n{alphabet2}\\n==> We found one solution:\\n{good_grids2}\")",
"My first idea was to try to tackle each constraint independently, and generate the set of words that satisfy this contraint. (by naively checking check(constraint, word) for each word in $\\Sigma^n$ or $\\Sigma^m$).\n\nif there are two line constraints (left/right), get the intersection of the two sets of words;\nthen, for each line we have a set of possible words:\nwe can build each column, and then check that the top/bottom constraint is valid or not\nif valid, continue to next column until the last\nif all columns are valid, then these lines/columns form a possible grid!\n(if we want only one solution, stop now, otherwise continue)\n\n\n\nSecond difficult task: a more efficient approach to solve any problem",
"n, m = dimension_problem(problem1)\n\nproblem1\n\nalphabet1\n\nlen(list(all_words_of_alphabet(alphabet1, n)))\n\n[\"\".join(word) for word in list(all_words_of_alphabet(alphabet1, n))][:10]\n\n[\n [ \"\".join(word)\n for word in all_words_of_alphabet(alphabet1, n)\n if check_line(problem1, alphabet1, \"\".join(word), line, debug=False, early=True)\n ]\n for line in range(m)\n]\n\n[\n [ \"\".join(word)\n for word in all_words_of_alphabet(alphabet1, m)\n if check_column(problem1, alphabet1, \"\".join(word), column, debug=False, early=True)\n ]\n for column in range(n)\n]",
"So let's write this algorithm.\nI'm using a tqdm.tqdm() wrapper on the foor loops, to keep an eye on the progress.",
"from tqdm.notebook import trange, tqdm\n\ndef smart_solve(problem, alphabet, debug=False, early=True):\n n, m = dimension_problem(problem)\n good_grids = []\n \n possible_words_for_lines = [\n [ \"\".join(word)\n for word in all_words_of_alphabet(alphabet, n)\n if check_line(problem, alphabet, \"\".join(word), line, debug=False, early=True)\n # TODO don't compute this \"\".join(word) twice?\n ]\n for line in range(m)\n ]\n number_of_combinations = 1\n for line in range(m):\n number_of_combinations *= len(possible_words_for_lines[line])\n print(f\"- There are {len(possible_words_for_lines[line])} different words for line #{line}\")\n print(f\"=> There are {number_of_combinations} combinations of words for lines #{0}..#{m-1}\")\n\n for possible_words in tqdm(\n list(itertools.product(*possible_words_for_lines)),\n desc=\"lines\"\n ):\n if debug: print(f\" Trying possible_words from line constraints = {possible_words}\")\n column = 0\n no_wrong_column = True\n while no_wrong_column and column < n:\n word_column = \"\".join(possible_words[line][column] for line in range(m))\n if debug: print(f\" For column #{column}, word = {word_column}, checking constraint...\")\n if not check_column(problem, alphabet, word_column, column, debug=False, early=True):\n # this word is NOT valid for this column, so let's go to the next word\n if debug: print(f\" This word {word_column} is NOT valid for this column {column}, so let's go to the next word\")\n no_wrong_column = False\n # break: this was failing... broke the outer for-loop and not the inner one\n column += 1\n if no_wrong_column:\n print(f\" These words seemed to satisfy the column constraints!\\n{possible_words}\")\n \n # so all columns are valid! 
this choice of words is good!\n possible_grid = [\n list(word) for word in possible_words\n ]\n print(f\"Giving this grid:\\n{possible_grid}\")\n # let's check it, just in case (this takes a short time, compared to the rest)\n is_good_grid = check_grid(problem, alphabet, possible_grid, debug=debug, early=early)\n if is_good_grid:\n if early:\n return [ possible_grid ]\n good_grids.append(possible_grid)\n \n # after the outer for loop on possible_words\n return good_grids",
"And let's try it:\nFor the easy problem",
"grid1_solution\n\n%%time\ngood_grids1 = smart_solve(problem1, alphabet1)\n\ngood_grids1",
"So it worked!\n🚀 It was also BLAZING fast compared to the naive approach: 160ms against about 900µs, almost a 160x speed-up factor!\n🤔 I don't understand why it's so slow now I did get a time of 900 µs at first try, now it's about 90 ms... just a 2x spee-up factor.\nLet's try for the harder problem!\nFor the intermediate problem",
"%%time\n#assert False # uncomment when ready\n\ngood_grids3 = smart_solve(problem3, alphabet3)\n\ngood_grids3",
"🚀 It was also BLAZING fast compared to the naive approach: 90ms, when the naive approach was just too long that I killed it...\nFor the harder problem",
"%%time\n#assert False # uncomment when ready\n\ngood_grids2 = smart_solve(problem2, alphabet2)\n\ngood_grids2",
"It made my kernel restart...\nImprove the solution - TODO\n\nIf you're extra curious about this puzzle problem, and my experiments, you can continue from here and finish these ideas:\n\n\n\nIt could be great if it were be possible to give a partially filled grid, and start from there.\n\n\nIt could also be great to just be able to fill one cell in the grid, in case you're blocked and want some hint.\n\n\nMy feeling about these problems and my solutions\nI could have tried to be more efficient, but I didn't have much time to spend on this.\nConclusion\nThat was nice! Writing this notebook took about 4.5 hours entirely, from first idea to final edit, on Sunday 28th of February, 2021. (note that I was also cooking my pancakes during the first half, so I wasn't intensely coding)\nHave a look at my other notebooks."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MrKriss/ThinkStatsToolbox
|
stats_toolbox/examples/Pmf Example.ipynb
|
gpl-3.0
|
[
"# Imports\nimport os\nimport sys\nimport pandas as pd\nimport seaborn as sb\n\n# Custom Imports\nsys.path.insert(0, '../../')\nimport stats_toolbox as st\nfrom stats_toolbox.utils.data_loaders import load_fem_preg_2002\n\n# Graphics setup \n%pylab inline --no-import-all\n\n# Load and Clean Data\ndf = load_fem_preg_2002('../data')\nfull_term = df[df['prglngth'] >= 37]\nweights = df.birthwgt_kg.dropna()",
"Constructing PMFs\nAs twith histograms, and list like object or pandas Series can be converted to a Pmf object. Hist objects can also be converted using the Pmf constructor or with their to_pmf() method",
"# Convert to PMF\npmf = st.Pmf(full_term.totalwgt_lb, label='Total Birth Weight')\nH = st.Hist(full_term.totalwgt_lb, label='Total Birth Weight')\n\npmf == H.to_pmf()\n\nIndividual probabilities can be looked up \n\npmf[8]\n# same as pmf.prob(8)",
"Methods\nSummary stats",
"pmf.mean()\npmf.var()\npmf.std()\npmf.maximum_likelihood()",
"Calculate Probabilities",
"pmf.prob_less(3)\npmf.prob_greater(4)\n\n# Arithmatic \npmf_first = st.Pmf(full_term.prglngth[full_term.birthord == 1], label='1st born')\npmf_other = st.Pmf(full_term.prglngth[full_term.birthord != 1], label='other')\n\n\n(pmf_first - pmf_other).plot()\n\nfig = st.multiplot((pmf_first, pmf_other))\n\npmf_other\n\npmf_first"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyLCARS/PythonUberHDL
|
myHDL_ComputerFundamentals/Counters/.ipynb_checkpoints/CountersInMyHDL-checkpoint.ipynb
|
bsd-3-clause
|
[
"\\title{Counters in myHDL}\n\\author{Steven K Armour}\n\\maketitle\nCounters play a vital role in Digital Hardware, ranging from Clock Dividers; (see below) to event triggers by recording the number of events that have occurred or will still need to occur (all the counters here in use a clock as the counting event but this is easily changed). Presented below are some basic HDL counters (Up, Down, Hybridized Up-Down) in myHDL.\n<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><span><a href=\"#Refrances\" data-toc-modified-id=\"Refrances-1\"><span class=\"toc-item-num\">1 </span>Refrances</a></span></li><li><span><a href=\"#Libraries-and-Helper-functions\" data-toc-modified-id=\"Libraries-and-Helper-functions-2\"><span class=\"toc-item-num\">2 </span>Libraries and Helper functions</a></span></li><li><span><a href=\"#Counter-Specs\" data-toc-modified-id=\"Counter-Specs-3\"><span class=\"toc-item-num\">3 </span>Counter Specs</a></span></li><li><span><a href=\"#myHDL-modules-bitvector-type-behavior\" data-toc-modified-id=\"myHDL-modules-bitvector-type-behavior-4\"><span class=\"toc-item-num\">4 </span>myHDL modules bitvector type behavior</a></span><ul class=\"toc-item\"><li><span><a href=\"#up-counting-behavior\" data-toc-modified-id=\"up-counting-behavior-4.1\"><span class=\"toc-item-num\">4.1 </span>up counting behavior</a></span></li><li><span><a href=\"#down-counting-behavior\" data-toc-modified-id=\"down-counting-behavior-4.2\"><span class=\"toc-item-num\">4.2 </span>down counting behavior</a></span></li></ul></li><li><span><a href=\"#Up-Counter\" data-toc-modified-id=\"Up-Counter-5\"><span class=\"toc-item-num\">5 </span>Up-Counter</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-testing\" data-toc-modified-id=\"myHDL-testing-5.1\"><span class=\"toc-item-num\">5.1 </span>myHDL testing</a></span></li><li><span><a href=\"#Verilog-Code\" 
data-toc-modified-id=\"Verilog-Code-5.2\"><span class=\"toc-item-num\">5.2 </span>Verilog Code</a></span></li><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-5.3\"><span class=\"toc-item-num\">5.3 </span>Verilog Testbench</a></span></li></ul></li><li><span><a href=\"#Down-Counter\" data-toc-modified-id=\"Down-Counter-6\"><span class=\"toc-item-num\">6 </span>Down Counter</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-6.1\"><span class=\"toc-item-num\">6.1 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Code\" data-toc-modified-id=\"Verilog-Code-6.2\"><span class=\"toc-item-num\">6.2 </span>Verilog Code</a></span></li><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-6.3\"><span class=\"toc-item-num\">6.3 </span>Verilog Testbench</a></span></li></ul></li><li><span><a href=\"#Up/Down-Counter\" data-toc-modified-id=\"Up/Down-Counter-7\"><span class=\"toc-item-num\">7 </span>Up/Down Counter</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-7.1\"><span class=\"toc-item-num\">7.1 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Code\" data-toc-modified-id=\"Verilog-Code-7.2\"><span class=\"toc-item-num\">7.2 </span>Verilog Code</a></span></li><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-7.3\"><span class=\"toc-item-num\">7.3 </span>Verilog Testbench</a></span></li></ul></li><li><span><a href=\"#Application:-Clock-Divider\" data-toc-modified-id=\"Application:-Clock-Divider-8\"><span class=\"toc-item-num\">8 </span>Application: Clock Divider</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-8.1\"><span class=\"toc-item-num\">8.1 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Code\" data-toc-modified-id=\"Verilog-Code-8.2\"><span 
class=\"toc-item-num\">8.2 </span>Verilog Code</a></span></li><li><span><a href=\"#Verilog-Testbench\" data-toc-modified-id=\"Verilog-Testbench-8.3\"><span class=\"toc-item-num\">8.3 </span>Verilog Testbench</a></span></li></ul></li></ul></div>\n\nRefrances\n@misc{loi le_2017,\ntitle={Verilog code for counter with testbench},\nurl={http://www.fpga4student.com/2017/03/verilog-code-for-counter-with-testbench.html},\njournal={Fpga4student.com},\nauthor={Loi Le, Van},\nyear={2017}\n}\n@misc{digilent_2018,\ntitle={Learn.Digilentinc | Counter and Clock Divider},\nurl={https://learn.digilentinc.com/Documents/262},\njournal={Learn.digilentinc.com},\nauthor={Digilent},\nyear={2018}\n}\nLibraries and Helper functions",
"from myhdl import *\nfrom myhdlpeek import Peeker\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import *\ninit_printing()\n\nimport random\n\n#https://github.com/jrjohansson/version_information\n%load_ext version_information\n%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random\n\n#helper functions to read in the .v and .vhd generated files into python\ndef VerilogTextReader(loc, printresult=True):\n with open(f'{loc}.v', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***Verilog modual from {loc}.v***\\n\\n', VerilogText)\n return VerilogText\n\ndef VHDLTextReader(loc, printresult=True):\n with open(f'{loc}.vhd', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***VHDL modual from {loc}.vhd***\\n\\n', VerilogText)\n return VerilogText",
"Counter Specs",
"CountVal=17\nBitSize=int(np.log2(CountVal))+1; BitSize",
"myHDL modules bitvector type behavior\nup counting behavior",
"ModBV=modbv(0)[BitSize:]\nIntBV=intbv(0)[BitSize:]\nprint(f\"`ModBV` max is {ModBV.max}; min is {ModBV.min}\")\nprint(f\"`IntBV` max is {IntBV.max}; min is {IntBV.min}\")\n\nfor _ in range(ModBV.max*2):\n try:\n ModBV+=1; IntBV+=1\n print(f\"`ModBV` value is {ModBV}; `IntBV` value is {IntBV}\")\n except ValueError:\n ModBV+=1\n print(f\"`ModBV` value is {ModBV}; `IntBV` value is {IntBV} and INVALID\")",
"down counting behavior",
"ModBV=modbv(2**BitSize -1)[BitSize:]\nIntBV=intbv(2**BitSize -1)[BitSize:]\nprint(f\"`ModBV` max is {ModBV.max}; min is {ModBV.min}\")\nprint(f\"`IntBV` max is {IntBV.max}; min is {IntBV.min}\")\n\nfor _ in range(ModBV.max*2):\n try:\n ModBV-=1; IntBV-=1\n print(f\"`ModBV` value is {ModBV}; `IntBV` value is {IntBV}\")\n except ValueError:\n ModBV-=0\n print(f\"`ModBV` value is {ModBV}; `IntBV` value is {IntBV} and INVALID\")",
"Up-Counter\nUp counters count up to a target value from a lower starting value. The following counter is a simple one that uses the clock as the incrementer (think of one clock cycle as one swing of an old grandfather clock's pendulum), though more complicated counters can use any signal as an incrementer. This counter also has a signal that indicates the counter has reached its target value before the internal counter resets, mimicking the behavior of timer apps that report how much time has elapsed since the count started.\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{Up_Counter.png}}\n\\caption{\\label{fig:RP} Up_Counter Functional Diagram }\n\\end{figure}",
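Before reading the myHDL module below, its register-update rule can be prototyped as a plain Python step function. This is a behavioral sketch assuming one call per rising clock edge; the function name and `(next_count, trig)` return convention are illustrative, not part of the module:

```python
def up_counter_step(count, rst, count_val):
    """One clock edge of the up counter: returns (next_count, trig)."""
    if rst:
        return 0, False
    if count % count_val == 0 and count != 0:
        return 0, True          # completed one full count cycle
    return count + 1, False

# Walk the counter through one full cycle of count_val = 4.
count, history = 0, []
for _ in range(6):
    count, trig = up_counter_step(count, rst=False, count_val=4)
    history.append((count, trig))
print(history)
```

Note that the real module registers `Trig` through an internal signal and clears everything on reset; the sketch only captures the count-and-trigger rule.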
"@block\ndef Up_Counter(count, Trig, clk, rst, CountVal, BitSize):\n    \"\"\"\n    UpCounter\n    \n    Input:\n        clk(bool): system clock feed\n        rst(bool): clock reset signal\n    Output:\n        count (bit vector): current count value\n        Trig(bool): pulses when the counter completes one full count cycle\n    \n    Parameters (Python only):\n        CountVal(int): value to count to\n        BitSize (int): bit-vector size, log_2(CountVal)+1\n    \n    \"\"\"\n    #internals\n    count_i=Signal(modbv(0)[BitSize:])\n    Trig_i=Signal(bool(0))\n    \n    @always(clk.posedge, rst.negedge)\n    def logic():\n        if rst:\n            count_i.next=0\n            Trig_i.next=0\n        \n        elif count_i%CountVal==0 and count_i!=0:\n            Trig_i.next=1\n            count_i.next=0\n        \n        else:\n            count_i.next=count_i+1\n    \n    \n    @always_comb\n    def OuputBuffer():\n        count.next=count_i\n        Trig.next=Trig_i\n    \n    return instances()",
"myHDL testing",
"Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nTrig=Signal(bool(0)); Peeker(Trig, 'Trig')\ncount=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')\n\nDUT=Up_Counter(count, Trig, clk, rst, CountVal, BitSize)\n\ndef Up_CounterTB():\n \"\"\"\n myHDL only Testbench for `Up_Counter` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==int(CountVal*1.5):\n rst.next=1\n elif i==int(CountVal*1.5)+1:\n rst.next=0\n \n if i==int(CountVal*2.5):\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, Up_CounterTB(), *Peeker.instances()).run()\n\n\nPeeker.to_wavedrom()\n\nUp_CounterData=Peeker.to_dataframe()\nUp_CounterData=Up_CounterData[Up_CounterData['clk']==1]\nUp_CounterData.drop('clk', axis=1, inplace=True)\nUp_CounterData.reset_index(drop=True, inplace=True)\nUp_CounterData",
"Verilog Code",
"DUT.convert()\nVerilogTextReader('Up_Counter');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{Up_CounterRTL.png}}\n\\caption{\\label{fig:UCRTL} Up_Counter RTL Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{Up_CounterSYN.png}}\n\\caption{\\label{fig:UCSYN} Up_Counter Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nVerilog Testbench",
"ResetAt=int(CountVal*1.5)+1\nStopAt=int(CountVal*2.5)\n\n@block\ndef Up_CounterTBV():\n \"\"\"\n myHDL -> Verilog Testbench for `Up_Counter` module\n \"\"\"\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n Trig=Signal(bool(0))\n count=Signal(modbv(0)[BitSize:])\n \n @always_comb\n def print_data():\n print(clk, rst, Trig, count)\n\n DUT=Up_Counter(count, Trig, clk, rst, CountVal, BitSize)\n\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n if i==ResetAt:\n rst.next=1\n elif i==(ResetAt+1):\n rst.next=0\n else:\n pass\n \n if i==StopAt:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nTB=Up_CounterTBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('Up_CounterTBV');",
"Down Counter\nDown counters count down from a set upper value to a target lower value. The following down counter is a simple revamp of the previous up counter: it starts from CountVal and counts down to zero, asserting the trigger signal when it has completed one countdown cycle, before the internal counter resets to restart the countdown.\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{}}\n\\caption{\\label{fig:RP} Down_Counter Functional Diagram (ToDo) }\n\\end{figure}",
"@block\ndef Down_Counter(count, Trig, clk, rst, StartVal, BitSize):\n    \"\"\"\n    DownCounter\n    \n    Input:\n        clk(bool): system clock feed\n        rst(bool): clock reset signal\n    Output:\n        count (bit vector): current count value\n        Trig(bool): pulses when the countdown reaches zero\n    \n    Parameters (Python only):\n        StartVal(int): value to count from\n        BitSize (int): bit-vector size, log_2(CountVal)+1\n    \n    \"\"\"\n    #internal counter value\n    count_i=Signal(modbv(StartVal)[BitSize:])\n    \n    @always(clk.posedge, rst.negedge)\n    def logic():\n        if rst:\n            count_i.next=StartVal\n            Trig.next=0\n        \n        elif count_i==0:\n            Trig.next=1\n            count_i.next=StartVal\n        \n        else:\n            count_i.next=count_i-1\n    \n    \n    @always_comb\n    def OuputBuffer():\n        count.next=count_i\n    \n    return instances()",
"myHDL Testing",
"Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nTrig=Signal(bool(0)); Peeker(Trig, 'Trig')\ncount=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')\n\nDUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)\n\ndef Down_CounterTB():\n \"\"\"\n myHDL only Testbench for `Down_Counter` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==int(CountVal*1.5):\n rst.next=1\n elif i==int(CountVal*1.5)+1:\n rst.next=0\n \n if i==int(CountVal*2.5):\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, Down_CounterTB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nDown_CounterData=Peeker.to_dataframe()\nDown_CounterData=Down_CounterData[Down_CounterData['clk']==1]\nDown_CounterData.drop('clk', axis=1, inplace=True)\nDown_CounterData.reset_index(drop=True, inplace=True)\nDown_CounterData",
"Verilog Code",
"DUT.convert()\nVerilogTextReader('Down_Counter');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{Down_CounterRTL.png}}\n\\caption{\\label{fig:DCRTL} Down_Counter RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{Down_CounterSYN.png}}\n\\caption{\\label{fig:DCSYN} Down_Counter Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nVerilog Testbench",
"ResetAt=int(CountVal*1.5)\nStopAt=int(CountVal*2.5)\n\n@block\ndef Down_CounterTBV():\n \"\"\"\n myHDL -> Verilog Testbench for `Down_Counter` module\n \"\"\"\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n Trig=Signal(bool(0))\n count=Signal(modbv(0)[BitSize:])\n \n @always_comb\n def print_data():\n print(clk, rst, Trig, count)\n\n DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)\n\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n if i==ResetAt:\n rst.next=1\n elif i==(ResetAt+1):\n rst.next=0\n else:\n pass\n \n if i==StopAt:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nTB=Down_CounterTBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('Down_CounterTBV');",
"Up/Down Counter\nThis counter hybridizes an up counter and a down counter, switching between the two with a direction-control state machine.\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{}}\n\\caption{\\label{fig:RP} UpDown_Counter Functional Diagram (ToDo) }\n\\end{figure}",
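The myHDL module below uses an enum for `Dir` plus extra reset and containment logic; the core direction-switching idea can be sketched in plain Python first. Names `UP`, `DOWN`, and `updown_step` are illustrative stand-ins, not part of the module:

```python
# Behavioral sketch of a direction-controlled (up/down) counter.
# UP/DOWN stand in for the myHDL enum states used in the real module.
UP, DOWN = 0, 1

def updown_step(count, direction, count_val):
    """One clock edge: step up or down, wrapping within [0, count_val]."""
    if direction == UP:
        return (count + 1) % (count_val + 1)
    return (count - 1) % (count_val + 1)

count = 3
for d in [UP, UP, DOWN, DOWN, DOWN]:
    count = updown_step(count, d, count_val=7)
print(count)  # 3 -> 4 -> 5 -> 4 -> 3 -> 2
```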
"#Create the Direction States for UpDown Counter\nDirStates=enum('Up', 'Down')\nprint(f\"`Up` state repersentation is {bin(DirStates.Up)}\")\nprint(f\"`Down` state repersentation is {bin(DirStates.Down)}\")\n\n@block\ndef UpDown_Counter(Dir, count, Trig, clk, rst, \n CountVal, StartVal, BitSize):\n \"\"\"\n UpDownCounter, hybrid of a simple Up Counter and \n a simple Down Counter using `Dir` to control Up/Down \n count Direction \n \n Input:\n Dir(): \n clk(bool): system clock feed\n rst(bool): clock reset signal\n Ouput:\n count (bit vector): current count value; count \n Trig(bool)\n \n Parmeter(Python Only):\n CountVal(int): Highest Value for counter\n StartVal(int): starting value for internal counter\n BitSize (int): Bitvalue size is log_2(CountVal)+1\n \n \"\"\"\n #internal counter value\n count_i=Signal(modbv(StartVal)[BitSize:])\n \n \n \n @always(clk.posedge, rst.negedge)\n def logic():\n if rst:\n count_i.next=StartVal\n Trig.next=0\n \n \n \n #counter contanment\n elif count_i//CountVal==1 and rst==0:\n count_i.next=StartVal\n \n #up behavior\n elif Dir==DirStates.Up:\n count_i.next=count_i+1\n #simple Triger at ends \n if count_i%CountVal==0:\n Trig.next=1\n \n #down behavior\n elif Dir==DirStates.Down:\n count_i.next=count_i-1\n #simple Triger at ends \n if count_i%CountVal==0:\n Trig.next=1\n \n \n \n \n \n @always_comb\n def OuputBuffer():\n count.next=count_i\n \n return instances()",
"myHDL Testing",
"Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\nTrig=Signal(bool(0)); Peeker(Trig, 'Trig')\ncount=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')\nDir=Signal(DirStates.Up); Peeker(Dir, 'Dir')\n\nDUT=UpDown_Counter(Dir, count, Trig, clk, rst, \n CountVal, StartVal=CountVal//2, BitSize=BitSize)\n\ndef UpDown_CounterTB():\n \"\"\"\n myHDL only Testbench for `UpDown_Counter` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==int(CountVal*1.5):\n Dir.next=DirStates.Down\n elif i==int(CountVal*2.5):\n rst.next=1\n elif i==int(CountVal*2.5)+1:\n rst.next=0\n \n \n if i==int(CountVal*3.5):\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nsim=Simulation(DUT, UpDown_CounterTB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nUpDown_CounterData=Peeker.to_dataframe()\nUpDown_CounterData=UpDown_CounterData[UpDown_CounterData['clk']==1]\nUpDown_CounterData.drop('clk', axis=1, inplace=True)\nUpDown_CounterData.reset_index(drop=True, inplace=True)\nUpDown_CounterData",
"Verilog Code",
"DUT.convert()\nVerilogTextReader('UpDown_Counter');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{UpDown_CounterRTL.png}}\n\\caption{\\label{fig:UDCRTL} UpDown_Counter RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{UpDown_CounterSYN.png}}\n\\caption{\\label{fig:UDCSYN} UpDown_Counter Synthesized schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nVerilog Testbench",
"StateChangeAt=int(CountVal*1.5)\nResetAt=int(CountVal*2.5)\nStopAt=int(CountVal*3.5)\n\n@block\ndef UpDown_CounterTBV():\n \"\"\"\n myHDL -> Verilog Testbench for `Down_Counter` module\n \"\"\"\n clk=Signal(bool(0))\n rst=Signal(bool(0))\n Trig=Signal(bool(0))\n count=Signal(modbv(0)[BitSize:])\n Dir=Signal(DirStates.Up)\n\n DUT=UpDown_Counter(Dir, count, Trig, clk, rst, \n CountVal, StartVal=CountVal//2, BitSize=BitSize)\n \n @always_comb\n def print_data():\n print(clk, rst, Trig, count)\n\n DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)\n\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n \n if i==StateChangeAt:\n Dir.next=DirStates.Down\n elif i==ResetAt:\n rst.next=1\n elif i==ResetAt+1:\n rst.next=0\n else:\n pass\n \n \n if i==StopAt:\n raise StopSimulation()\n \n i+=1\n yield clk.posedge\n \n return instances()\n\nTB=UpDown_CounterTBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('UpDown_CounterTBV');",
"Application: Clock Divider\nOne common application of counters in HDL is building clock dividers. While there are more specialized and advanced means of generating higher or lower frequencies from a reference clock (see, for example, digital phase-locked loops), a simple clock divider is very useful HDL code for driving other HDL IPs that need a slower event rate than the megahertz+ speeds of today's FPGAs.\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{}}\n\\caption{\\label{fig:RP} ClockDivider Functional Diagram (ToDo) }\n\\end{figure}",
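The divider below toggles its output once every `Divisor` rising edges of the input clock, so the output frequency is f_out = f_in / (2 × Divisor). That toggle rule can be sketched in plain Python (illustrative names, assuming one loop iteration per rising input edge):

```python
def divided_clock(divisor, n_edges):
    """Simulate the divider: toggle the output once every `divisor` rising edges."""
    count, out, trace = 0, 0, []
    for _ in range(n_edges):
        if count == divisor - 1:
            count = 0
            out ^= 1          # toggle on terminal count
        else:
            count += 1
        trace.append(out)
    return trace

trace = divided_clock(divisor=3, n_edges=12)
print(trace)  # one full output period every 2*divisor = 6 input edges
```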
"@block\ndef ClockDivider(Divisor, clkOut, count, clk,rst):\n \"\"\"\n Simple Clock Divider based on the Digilint Clock Divider\n https://learn.digilentinc.com/Documents/262\n \n Input:\n Divisor(32 bit): the clock frequncy divide by value\n clk(bool): The input clock\n rst(bool): clockDivider Reset\n \n Ouput:\n clkOut(bool): the divided clock ouput\n count(32bit): the value of the internal counter\n \"\"\"\n \n count_i=Signal(modbv(0)[32:])\n @always(clk.posedge, rst.posedge)\n def counter():\n if rst:\n count_i.next=0\n elif count_i==(Divisor-1):\n count_i.next=0\n else:\n count_i.next=count_i+1\n \n clkOut_i=Signal(bool(0))\n @always(clk.posedge, rst.posedge)\n def clockTick():\n if rst:\n clkOut_i.next=0\n elif count_i==(Divisor-1):\n clkOut_i.next=not clkOut_i\n else:\n clkOut_i.next=clkOut_i\n \n \n \n @always_comb\n def OuputBuffer():\n count.next=count_i\n clkOut.next=clkOut_i\n \n return instances()\n \n ",
"myHDL Testing",
"Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nDivisor=Signal(intbv(0)[32:]); Peeker(Divisor, 'Divisor')\ncount=Signal(intbv(0)[32:]); Peeker(count, 'count')\nclkOut=Signal(bool(0)); Peeker(clkOut, 'clkOut')\nrst=Signal(bool(0)); Peeker(rst, 'rst')\n\nDUT=ClockDivider(Divisor, clkOut, count, clk,rst)\n\ndef ClockDividerTB():\n \"\"\"\n myHDL only Testbench for `ClockDivider` module\n \"\"\"\n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n for i in range(2,6+1):\n Divisor.next=i\n rst.next=0\n #run clock time\n for _ in range(4*2**(i-1)):\n yield clk.posedge\n \n for j in range(1):\n if j==0:\n rst.next=1\n \n yield clk.posedge\n \n raise StopSimulation()\n \n \n \n return instances()\n\nsim=Simulation(DUT, ClockDividerTB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nClockDividerData=Peeker.to_dataframe()\nClockDividerData\n\nClockDividerData_2=ClockDividerData[ClockDividerData['Divisor']==2]\nClockDividerData_2.reset_index(drop=True, inplace=True)\nClockDividerData_2.plot(y=['clk', 'clkOut']);\n\nClockDividerData_3=ClockDividerData[ClockDividerData['Divisor']==3]\nClockDividerData_3.reset_index(drop=True, inplace=True)\nClockDividerData_3.plot(y=['clk', 'clkOut']);\n\nClockDividerData_4=ClockDividerData[ClockDividerData['Divisor']==4]\nClockDividerData_4.reset_index(drop=True, inplace=True)\nClockDividerData_4.plot(y=['clk', 'clkOut']);\n\nClockDividerData_5=ClockDividerData[ClockDividerData['Divisor']==5]\nClockDividerData_5.reset_index(drop=True, inplace=True)\nClockDividerData_5.plot(y=['clk', 'clkOut']);\n\nClockDividerData_6=ClockDividerData[ClockDividerData['Divisor']==6]\nClockDividerData_6.reset_index(drop=True, inplace=True)\nClockDividerData_6.plot(y=['clk', 'clkOut']);\n\nDUT.convert()\nVerilogTextReader('ClockDivider');",
"Verilog Code\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ClockDividerRTL.png}}\n\\caption{\\label{fig:clkDivRTL} ClockDivider RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{ClockDividerSYN.png}}\n\\caption{\\label{fig:clkDivRTL} ClockDivider synthesized schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nVerilog Testbench",
"@block\ndef ClockDividerTBV():\n \"\"\"\n myHDL -> Verilog Testbench for `ClockDivider` module\n \"\"\"\n\n clk=Signal(bool(0)); \n Divisor=Signal(intbv(0)[32:])\n count=Signal(intbv(0)[32:])\n clkOut=Signal(bool(0))\n rst=Signal(bool(0))\n\n \n @always_comb\n def print_data():\n print(clk, Divisor, count, clkOut, rst)\n\n DUT=ClockDivider(Divisor, clkOut, count, clk,rst)\n\n\n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n for i in range(2,6+1):\n Divisor.next=i\n rst.next=0\n #run clock time\n for _ in range(4*2**(i-1)):\n yield clk.posedge\n \n for j in range(1):\n if j==0:\n rst.next=1\n else:\n pass\n \n yield clk.posedge\n \n raise StopSimulation()\n \n \n \n return instances()\n\nTB=ClockDividerTBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('ClockDividerTBV');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OSHI7/Learning1
|
MatplotLib Pynotebooks/AnatomyOfMatplotlib-Part6-mpl_toolkits.ipynb
|
mit
|
[
"import matplotlib\nmatplotlib.use('nbagg')\nimport numpy as np\nimport matplotlib.pyplot as plt",
"mpl_toolkits\nIn addition to the core library of matplotlib, there are a few additional utilities that are set apart from matplotlib proper for some reason or another, but are often shipped with matplotlib.\n\nBasemap - shipped separately from matplotlib due to the size of the mapping data it includes.\nmplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.\naxes_grid1 - An enhanced SubplotAxes. Very Enhanced...\n\nmplot3d\nBy taking advantage of matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to let matplotlib users produce 3D plots with the same simplicity as 2D plots.",
"from mpl_toolkits.mplot3d import Axes3D, axes3d\n\nfig, ax = plt.subplots(1, 1, subplot_kw={'projection': '3d'})\nX, Y, Z = axes3d.get_test_data(0.05)\nax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)\n\nplt.show()",
"axes_grid1\nThis module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deal with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, each with its own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.\nOne can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.",
"from mpl_toolkits.axes_grid1 import AxesGrid\nfig = plt.figure()\ngrid = AxesGrid(fig, 111, # similar to subplot(111)\n nrows_ncols = (2, 2),\n axes_pad = 0.2,\n share_all=True,\n label_mode = \"L\", # similar to \"label_outer\"\n cbar_location = \"right\",\n cbar_mode=\"single\",\n )\n\nextent = (-3,4,-4,3)\nfor i in range(4):\n im = grid[i].imshow(Z, extent=extent, interpolation=\"nearest\")\n \ngrid.cbar_axes[0].colorbar(im)\nplt.show()",
"This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it \"Parasite Axes\".",
"%load http://matplotlib.org/mpl_examples/axes_grid/demo_parasite_axes2.py\n\nfrom mpl_toolkits.axes_grid1 import host_subplot\nimport mpl_toolkits.axisartist as AA\nimport matplotlib.pyplot as plt\n\nif 1:\n\n host = host_subplot(111, axes_class=AA.Axes)\n plt.subplots_adjust(right=0.75)\n\n par1 = host.twinx()\n par2 = host.twinx()\n\n offset = 60\n new_fixed_axis = par2.get_grid_helper().new_fixed_axis\n par2.axis[\"right\"] = new_fixed_axis(loc=\"right\",\n axes=par2,\n offset=(offset, 0))\n \n par2.axis[\"right\"].toggle(all=True)\n\n\n\n host.set_xlim(0, 2)\n host.set_ylim(0, 2)\n\n host.set_xlabel(\"Distance\")\n host.set_ylabel(\"Density\")\n par1.set_ylabel(\"Temperature\")\n par2.set_ylabel(\"Velocity\")\n\n p1, = host.plot([0, 1, 2], [0, 1, 2], label=\"Density\")\n p2, = par1.plot([0, 1, 2], [0, 3, 2], label=\"Temperature\")\n p3, = par2.plot([0, 1, 2], [50, 30, 15], label=\"Velocity\")\n\n par1.set_ylim(0, 4)\n par2.set_ylim(1, 65)\n\n host.legend()\n\n host.axis[\"left\"].label.set_color(p1.get_color())\n par1.axis[\"right\"].label.set_color(p2.get_color())\n par2.axis[\"right\"].label.set_color(p3.get_color())\n\n plt.draw()\n plt.show()\n\n #plt.savefig(\"Test\")\n",
"And finally, as a nice teaser of what else axes_grid1 can do...",
"%load http://matplotlib.org/mpl_toolkits/axes_grid/examples/demo_floating_axes.py\n\nfrom matplotlib.transforms import Affine2D\n\nimport mpl_toolkits.axisartist.floating_axes as floating_axes\n\nimport numpy as np\nimport mpl_toolkits.axisartist.angle_helper as angle_helper\nfrom matplotlib.projections import PolarAxes\nfrom mpl_toolkits.axisartist.grid_finder import FixedLocator, MaxNLocator, \\\n    DictFormatter\n\ndef setup_axes1(fig, rect):\n    \"\"\"\n    A simple one.\n    \"\"\"\n    tr = Affine2D().scale(2, 1).rotate_deg(30)\n\n    grid_helper = floating_axes.GridHelperCurveLinear(tr, extremes=(0, 4, 0, 4))\n\n    ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)\n    fig.add_subplot(ax1)\n\n    aux_ax = ax1.get_aux_axes(tr)\n\n    grid_helper.grid_finder.grid_locator1._nbins = 4\n    grid_helper.grid_finder.grid_locator2._nbins = 4\n\n    return ax1, aux_ax\n\n\ndef setup_axes2(fig, rect):\n    \"\"\"\n    With custom locator and formatter.\n    Note that the extreme values are swapped.\n    \"\"\"\n\n    #tr_scale = Affine2D().scale(np.pi/180., 1.)\n\n    tr = PolarAxes.PolarTransform()\n\n    pi = np.pi\n    angle_ticks = [(0, r\"$0$\"),\n                   (.25*pi, r\"$\\frac{1}{4}\\pi$\"),\n                   (.5*pi, r\"$\\frac{1}{2}\\pi$\")]\n    grid_locator1 = FixedLocator([v for v, s in angle_ticks])\n    tick_formatter1 = DictFormatter(dict(angle_ticks))\n\n    grid_locator2 = MaxNLocator(2)\n\n    grid_helper = floating_axes.GridHelperCurveLinear(tr,\n                                        extremes=(.5*pi, 0, 2, 1),\n                                        grid_locator1=grid_locator1,\n                                        grid_locator2=grid_locator2,\n                                        tick_formatter1=tick_formatter1,\n                                        tick_formatter2=None,\n                                        )\n\n    ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)\n    fig.add_subplot(ax1)\n\n    # create a parasite axes whose transData in RA, cz\n    aux_ax = ax1.get_aux_axes(tr)\n\n    aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax\n    ax1.patch.zorder=0.9 # but this has a side effect that the patch is\n                        # drawn twice, and possibly over some other\n                        # artists. So, we decrease the zorder a bit to\n                        # prevent this.\n\n    return ax1, aux_ax\n\n\ndef setup_axes3(fig, rect):\n    \"\"\"\n    Sometimes, things like axis_direction need to be adjusted.\n    \"\"\"\n\n    # rotate a bit for better orientation\n    tr_rotate = Affine2D().translate(-95, 0)\n\n    # scale degree to radians\n    tr_scale = Affine2D().scale(np.pi/180., 1.)\n\n    tr = tr_rotate + tr_scale + PolarAxes.PolarTransform()\n\n    grid_locator1 = angle_helper.LocatorHMS(4)\n    tick_formatter1 = angle_helper.FormatterHMS()\n\n    grid_locator2 = MaxNLocator(3)\n\n    ra0, ra1 = 8.*15, 14.*15\n    cz0, cz1 = 0, 14000\n    grid_helper = floating_axes.GridHelperCurveLinear(tr,\n                                        extremes=(ra0, ra1, cz0, cz1),\n                                        grid_locator1=grid_locator1,\n                                        grid_locator2=grid_locator2,\n                                        tick_formatter1=tick_formatter1,\n                                        tick_formatter2=None,\n                                        )\n\n    ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)\n    fig.add_subplot(ax1)\n\n    # adjust axis\n    ax1.axis[\"left\"].set_axis_direction(\"bottom\")\n    ax1.axis[\"right\"].set_axis_direction(\"top\")\n\n    ax1.axis[\"bottom\"].set_visible(False)\n    ax1.axis[\"top\"].set_axis_direction(\"bottom\")\n    ax1.axis[\"top\"].toggle(ticklabels=True, label=True)\n    ax1.axis[\"top\"].major_ticklabels.set_axis_direction(\"top\")\n    ax1.axis[\"top\"].label.set_axis_direction(\"top\")\n\n    ax1.axis[\"left\"].label.set_text(r\"cz [km$^{-1}$]\")\n    ax1.axis[\"top\"].label.set_text(r\"$\\alpha_{1950}$\")\n\n\n    # create a parasite axes whose transData in RA, cz\n    aux_ax = ax1.get_aux_axes(tr)\n\n    aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax\n    ax1.patch.zorder=0.9 # but this has a side effect that the patch is\n                        # drawn twice, and possibly over some other\n                        # artists. So, we decrease the zorder a bit to\n                        # prevent this.\n\n    return ax1, aux_ax\n\n\n\nif 1:\n    import matplotlib.pyplot as plt\n    fig = plt.figure(1, figsize=(8, 4))\n    fig.subplots_adjust(wspace=0.3, left=0.05, right=0.95)\n\n    ax1, aux_ax2 = setup_axes1(fig, 131)\n    aux_ax2.bar([0, 1, 2, 3], [3, 2, 1, 3])\n    \n    #theta = np.random.rand(10) #*.5*np.pi\n    #radius = np.random.rand(10) #+1.\n    #aux_ax1.scatter(theta, radius)\n\n\n    ax2, aux_ax2 = setup_axes2(fig, 132)\n\n    theta = np.random.rand(10)*.5*np.pi\n    radius = np.random.rand(10)+1.\n    aux_ax2.scatter(theta, radius)\n\n\n    ax3, aux_ax3 = setup_axes3(fig, 133)\n\n    theta = (8 + np.random.rand(10)*(14-8))*15. # in degrees\n    radius = np.random.rand(10)*14000.\n    aux_ax3.scatter(theta, radius)\n\n    plt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arcyfelix/Courses
|
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/06-Data-Sources/02 - Quandl.ipynb
|
apache-2.0
|
[
"Quandl\nMore info:\nhttps://www.quandl.com/tools/python",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport quandl",
"Make a Basic Data Call\nThis call gets the WTI Crude Oil price from the US Department of Energy:",
"mydata = quandl.get(\"EIA/PET_RWTC_D\")\n\nmydata.head()\n\nmydata.plot(figsize = (12, 6))",
"Note that you need to know the \"Quandl code\" of each dataset you download. In the above example, it is \"EIA/PET_RWTC_D\".\nChange Formats\nYou can get the same data in a NumPy array:",
"mydata = quandl.get(\"EIA/PET_RWTC_D\", \n returns = \"numpy\")",
"Specifying Data\nTo set start and end dates:",
"mydata = quandl.get(\"FRED/GDP\", \n start_date = \"2001-12-31\", \n end_date = \"2005-12-31\")\n\nmydata.head()\n\nmydata = quandl.get([\"NSE/OIL.1\", \"WIKI/AAPL.4\"])\n\nmydata.head()",
"Usage Limits\nThe Quandl Python module is free. If you would like to make more than 50 calls a day, however, you will need to create a free Quandl account and set your API key:",
"# EXAMPLE\nquandl.ApiConfig.api_key = \"2qM_u-g8oxTV6JbhUWLn\"\nmydata = quandl.get(\"FRED/GDP\")",
"Database Codes\nEach database on Quandl has a short (3-to-6 character) database ID. For example:\n\nCFTC Commitment of Traders Data: CFTC\nCore US Stock Fundamentals: SF1\nFederal Reserve Economic Data: FRED\n\nEach database contains many datasets. Datasets have their own IDs which are appended to their parent database ID, like this:\n\nCommitment of traders for wheat: CFTC/W_F_ALL\nMarket capitalization for Apple: SF1/AAPL_MARKETCAP\nUS civilian unemployment rate: FRED/UNRATE\n\nYou can download all dataset codes in a database in a single API call, by appending /codes to your database request. The call will return a ZIP file containing a CSV.\nDatabases\nEvery Quandl code has 2 parts: the database code (“WIKI”) which specifies where the data comes from, and the dataset code (“FB”) which identifies the specific time series you want.\nYou can find Quandl codes on their website, using their data browser.\nhttps://www.quandl.com/search",
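A full Quandl code therefore splits into a database code, a dataset code, and an optional `.N` column suffix. A small helper makes that split explicit — `parse_quandl_code` is our own illustration, not part of the quandl package:

```python
def parse_quandl_code(code):
    """Split 'DATABASE/DATASET.N' into (database, dataset, column_or_None)."""
    database, _, rest = code.partition("/")
    dataset, _, column = rest.partition(".")
    return database, dataset, int(column) if column else None

print(parse_quandl_code("WIKI/FB.4"))   # ('WIKI', 'FB', 4)
print(parse_quandl_code("FRED/GDP"))    # ('FRED', 'GDP', None)
```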
"# FOR STOCKS\n\nmydata = quandl.get('WIKI/FB',\n start_date = '2015-01-01',\n end_date = '2017-01-01')\n\nmydata.head()\n\nmydata = quandl.get('WIKI/FB.1',\n start_date = '2015-01-01',\n end_date = '2017-01-01')\n\nmydata.head()\n\nmydata = quandl.get('WIKI/FB.7',\n start_date = '2015-01-01',\n end_date = '2017-01-01')\n\nmydata.head()",
"Housing Price Example\nZillow Home Value Index (Metro): Zillow Rental Index - All Homes - San Francisco, CA\nThe Zillow Home Value Index is Zillow's estimate of the median market value of zillow rental index - all homes within the metro of San Francisco, CA. This data is calculated by Zillow Real Estate Research (www.zillow.com/research) using their database of 110 million homes.",
"houses = quandl.get('ZILLOW/M11_ZRIAH')\n\nhouses.head()\n\nhouses.plot(figsize = (12, 6))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arviz-devs/arviz
|
doc/source/getting_started/XarrayforArviZ.ipynb
|
apache-2.0
|
[
"(xarray_for_arviz)=\nIntroduction to xarray, InferenceData, and netCDF for ArviZ\nWhile ArviZ supports plotting from familiar data types, such as dictionaries and NumPy arrays, there are a couple of data structures central to ArviZ that are useful to know when using the library. \nThey are \n\n{class}xarray:xarray.Dataset\n{class}arviz.InferenceData\n{ref}netCDF <netcdf> \n\nWhy more than one data structure?\nBayesian inference generates numerous datasets that represent different aspects of the model. For example, in a single analysis, a Bayesian practitioner could end up with any of the following data.\n\nPrior Distribution for N number of variables\nPosterior Distribution for N number of variables\nPrior Predictive Distribution\nPosterior Predictive Distribution\nTrace data for each of the above\nSample statistics for each inference run\nAny other array like data source\n\nFor more detail, see the InferenceData structure specification {ref}here <schema>.\nWhy not Pandas Dataframes or NumPy Arrays?\nData from probabilistic programming is naturally high dimensional. To add to the complexity ArviZ must handle the data generated from multiple Bayesian modeling libraries, such as PyMC3 and PyStan. This application is handled by the xarray package quite well. The xarray package lets users manage high dimensional data with human readable dimensions and coordinates quite easily.\n \nAbove is a visual representation of the data structures and their relationships. Although it seems more complex at a glance, the ArviZ devs believe that the usage of xarray, InferenceData, and netCDF will simplify the handling, referencing, and serialization of data generated during Bayesian analysis. \nAn introduction to each\nTo help get familiar with each, ArviZ includes some toy datasets. You can check the different ways to start an InferenceData {ref}here <creating_InferenceData>. For illustration purposes, here we have shown only one example provided by the library. To start an az.InferenceData, a sample can be loaded from disk.",
"# Load the centered eight schools model\nimport arviz as az\n\ndata = az.load_arviz_data(\"centered_eight\")\ndata",
"In this case the az.InferenceData object contains both a posterior predictive distribution and the observed data, among other datasets. Each group in InferenceData is both an attribute on InferenceData and itself a xarray.Dataset object.",
"# Get the posterior dataset\nposterior = data.posterior\nposterior",
"In our eight schools model example, the posterior trace consists of 3 variables, sampled over 4 chains. In addition, it is a hierarchical model where values for the variable theta are associated with a particular school. \nIn xarray's terminology: \n* Data variables are the actual values generated from the MCMC draws\n* Dimensions are the axes of the data variables\n* Coordinates are pointers to specific slices or points in the xarray.Dataset\nObserved data from the eight schools model can be accessed through the same method.",
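To make that terminology concrete without xarray itself, the posterior's layout can be mimicked with nested Python lists. Shapes here are illustrative (4 chains × 500 draws × 8 schools), chosen to mirror the centered eight schools posterior:

```python
n_chains, n_draws, n_schools = 4, 500, 8
schools = [f"school_{i}" for i in range(n_schools)]  # coordinate labels

# theta has dimensions (chain, draw, school) -- zeros here as placeholders
# for the actual MCMC draws (the data variable's values).
theta = [[[0.0] * n_schools for _ in range(n_draws)] for _ in range(n_chains)]

shape = (len(theta), len(theta[0]), len(theta[0][0]))
print(shape, schools[0])  # (4, 500, 8) school_0
```

xarray stores the same array once, but names the axes (chain, draw, school) and attaches the school labels as coordinates, so slices can be selected by label instead of integer position.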
"# Get the observed xarray\nobserved_data = data.observed_data\nobserved_data",
"It should be noted that the observed dataset contains only 8 data variables. Moreover, unlike posterior, it doesn't have chain and draw dimensions or coordinates. This difference in sizes is the motivating reason behind InferenceData. Rather than force multiple different-sized arrays into one array, or have users manage multiple objects corresponding to different datasets, it is easier to hold references to each xarray.Dataset in an InferenceData object.\n(netcdf)=\nNetCDF\nNetCDF is a standard for referencing array oriented files. In other words, while xarray.Datasets, and by extension InferenceData, are convenient for accessing arrays in Python memory, netCDF provides a convenient mechanism for persistence of model data on disk. In fact, the netCDF dataset was the inspiration for InferenceData as netCDF4 supports the concept of groups. InferenceData merely wraps xarray.Dataset with the same functionality.\nMost users will not have to concern themselves with the netCDF standard but for completeness it is good to make its usage transparent. It is also worth noting that the netCDF4 file standard is interoperable with HDF5 which may be familiar from other contexts.\nEarlier in this tutorial InferenceData was loaded from a netCDF file",
"data = az.load_arviz_data(\"centered_eight\")",
"Similarly, the InferenceData objects can be persisted to disk in the netCDF format",
"data.to_netcdf(\"eight_schools_model.nc\")",
"Additional Reading\nAdditional documentation and tutorials exist for xarray and netCDF4. Check the following links:\nInferenceData\n\n{ref}working_with_InferenceData: Tutorial covering the most common operations with InferenceData objects\n{ref}creating_InferenceData: Cookbook with examples of generating InferenceData objects from multiple sources, both external inference libraries like \n{ref}data module API reference <data_api>\n{ref}InferenceData API reference <idata_api>: description of all available InferenceData methods, grouped by topic\n\nxarray\n\nFor getting to know xarray, check the xarray documentation\nFeel free to watch the Q/A session about xarray at the xarray lightning talk at SciPy 2015\n\nNetCDF\n\nGet to know netCDF at the official NetCDF documentation website\nThe netcdf4-python library is used to read/write netCDF files in both netCDF4 and netCDF3 format. Learn more about it by visiting the NetCDF4 API documentation\nxarray provides direct serialization and IO to netCDF format. Learn how to read/write netCDF files directly as xarray objects at {ref}NetCDF usage in xarray <xarray:io.netcdf>\nCheck how to read/write netCDF4 files with HDF5 and vice versa at NetCDF interoperability with HDF5"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |