Plot function f for all values of `x_new`. Run the code below.
# Run this code:
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(x_new, f(x_new))
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
Next, create a cubic interpolation function. Name the function `g`.
# Your code here:

# Run this code:
plt.plot(x_new, g(x_new))
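A minimal sketch of one way to build `g` with `scipy.interpolate.interp1d` (the sample arrays `x` and `y` below are stand-ins; in the lab you would reuse the data that `f` was built from):

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical sample data standing in for the lab's arrays
x = np.arange(0, 10)
y = x**2

# kind='cubic' returns a cubic-spline interpolant
g = interp1d(x, y, kind='cubic')

print(g(2.5))  # an interpolated value between the known points at x=2 and x=3
```

`interp1d` returns a callable, so `g(x_new)` can be plotted exactly like `f(x_new)` above.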
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
Bonus Challenge - The Binomial Distribution

The binomial distribution allows us to calculate the probability of k successes in n trials for a random variable with two possible outcomes (which we typically label success and failure). The probability of success is typically denoted by p and the probability of failure by 1-p. The `scipy.stats` submodule contains a `binom` function for computing the probabilities of a random variable with the binomial distribution. You may read more about the binomial distribution [here](http://b.link/binomial55).

* In the cell below, compute the probability that a die lands on 5 exactly 3 times in 8 tries.
# Your code here:
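One way to compute this (assuming a fair die, so the success probability is p = 1/6) is the `binom.pmf` function:

```python
from scipy.stats import binom

# P(exactly k=3 fives in n=8 rolls), with success probability p=1/6
prob = binom.pmf(3, 8, 1/6)
print(prob)  # roughly 0.104
```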
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
* Simulate the last event: write a function that simulates 8 tries and returns 1 if the result is 5 exactly 3 times and 0 otherwise. Then run your simulation.
# Your code here:
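A possible sketch with `numpy` (the function name `simulate_event` is made up for illustration):

```python
import numpy as np

def simulate_event(rng=np.random):
    """Roll a fair die 8 times; return 1 if exactly three rolls land on 5, else 0."""
    rolls = rng.randint(1, 7, size=8)  # 8 integers in {1, ..., 6}
    return int(np.sum(rolls == 5) == 3)

# launch the simulation once
print(simulate_event())
```

Over many runs, the fraction of 1s should approach the binomial probability computed above (about 0.104).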
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
* Run 10 simulations and show the results in a bar plot. Then run 1,000 simulations and plot those as well. What do you see?
# Your code here:
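One way to sketch this (reusing a `simulate_event`-style helper, redefined here so the cell stands alone; the number of simulations `n_sims` is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

def simulate_event():
    """Return 1 if 8 die rolls contain exactly three 5s, else 0."""
    rolls = np.random.randint(1, 7, size=8)
    return int(np.sum(rolls == 5) == 3)

n_sims = 1000  # try 10 first, then 1000
results = [simulate_event() for _ in range(n_sims)]
values, counts = np.unique(results, return_counts=True)

plt.bar(values, counts)
plt.xticks([0, 1], ['other outcomes', 'exactly 3 fives'])
plt.ylabel('Number of simulations')
plt.show()
```

With only 10 runs the bar heights vary a lot from run to run; with 1,000 the proportion of successes settles near the theoretical probability of roughly 10.4%.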
MIT
module-2/Intro-Scipy/your-code/main.ipynb
Skye-FY/daft-miami-0120-labs
Plot data from an example patient's time series
import pandas as pd

df = pd.read_csv('/tmp/mp_data.csv')

# load in this patient's deathtime from the actual experiment
df_offset = pd.read_csv('/tmp/mp_death.csv')

# get censoring information
df_censor = pd.read_csv('/tmp/mp_censor.csv')
MIT
notebooks/mp-plot-patient-data.ipynb
alistairewj/mortality-prediction
Experiment A: First 24 hours
# define the patients
iid = 200001
iid2 = 200019
T_WINDOW = 24
time_dict = {iid: 24, iid2: 24}

df_pat = df.loc[df['icustay_id'] == iid, :].set_index('hr')
deathtime = df_offset.loc[df_offset['icustay_id'] == iid, 'deathtime_hours'].values

# Two subplots, the axes array is 1-d
f, axarr = plt.subplots(2, sharex=True, figsize=[10, 10])

pretty_labels = {'heartrate': 'Heart rate', 'meanbp': 'Mean blood pressure',
                 'resprate': 'Respiratory rate', 'spo2': 'Peripheral oxygen saturation',
                 'tempc': 'Temperature', 'bg_ph': 'pH',
                 'bg_bicarbonate': 'Serum bicarbonate', 'hemoglobin': 'Hemoglobin',
                 'potassium': 'Potassium', 'inr': 'International normalized ratio',
                 'bg_lactate': 'Lactate', 'wbc': 'White blood cell count'}

# first plot all the vitals in subfigure 1
var_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2']

i = 0
t_scale = 1.0  # divide by this to get from hours to t_unit
t_unit = 'Hours elapsed'
for v in var_vitals:
    idx = ~df_pat[v].isnull()
    if np.sum(idx) > 0:
        axarr[0].plot(df_pat.loc[idx, v].index / t_scale, df_pat.loc[idx, v].values,
                      '--', label=pretty_labels[v], marker=marker[np.mod(i, 7)],
                      markersize=8, color=tableau20[i], linewidth=2)
    i += 1

axarr[0].set_ylim([0, 150])
y_lim = axarr[0].get_ylim()

# add time of death, if present
if deathtime.shape[0] > 0 and not np.isnan(deathtime[0]):
    axarr[0].plot([deathtime, deathtime], y_lim, 'k:', linewidth=3)

# add a grey patch to represent the window
endtime = time_dict[iid]
rect = matplotlib.patches.Rectangle((endtime - T_WINDOW, y_lim[0]),
                                    T_WINDOW, y_lim[1], color='#bdbdbd')
axarr[0].add_patch(rect)

axarr[0].set_ylabel('Vital signs for {}'.format(iid), fontsize=16)

# next plot the vitals for the second patient in subfigure 2
df_pat = df.loc[df['icustay_id'] == iid2, :].set_index('hr')
deathtime = df_offset.loc[df_offset['icustay_id'] == iid2, 'deathtime_hours'].values

i = 0
t_scale = 1.0  # divide by this to get from hours to t_unit
t_unit = 'Hours elapsed since ICU admission'
for v in var_vitals:
    idx = ~df_pat[v].isnull()
    if np.sum(idx) > 0:
        axarr[1].plot(df_pat.loc[idx, v].index / t_scale, df_pat.loc[idx, v].values,
                      '--', label=pretty_labels[v], marker=marker[np.mod(i, 7)],
                      markersize=8, color=tableau20[i], linewidth=2)
    i += 1

axarr[1].set_ylim([0, 150])
y_lim = axarr[1].get_ylim()

# add time of death, if present
if deathtime.shape[0] > 0 and not np.isnan(deathtime[0]):
    axarr[1].plot([deathtime, deathtime], y_lim, 'k:', linewidth=3)
    axarr[1].arrow(deathtime - 5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k')
    axarr[1].text(deathtime - 12, 112.5, 'Death', fontsize=16)

# add DNR
dnrtime = df_censor.loc[df_censor['icustay_id'] == iid2, 'censortime_hours'].values
if dnrtime.shape[0] > 0:
    axarr[1].plot([dnrtime, dnrtime], y_lim, 'm:', linewidth=3)
    axarr[1].arrow(dnrtime + 5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')
    axarr[1].text(dnrtime + 5, 132.5, 'DNR', fontsize=16)

# add a patch to represent the window
endtime = time_dict[iid2]
rect = matplotlib.patches.Rectangle((endtime - T_WINDOW, y_lim[0]),
                                    T_WINDOW, y_lim[1], color='#bdbdbd')
axarr[1].add_patch(rect)

axarr[1].set_xlabel(t_unit, fontsize=16)
axarr[1].set_ylabel('Vital signs for {}'.format(iid2), fontsize=16)
axarr[1].legend(shadow=True, fancybox=True, loc='upper center',
                bbox_to_anchor=(0.5, 1.21), ncol=3)
plt.show()
MIT
notebooks/mp-plot-patient-data.ipynb
alistairewj/mortality-prediction
Experiment B: Random time
# generate a random time dictionary
T_WINDOW = 4
df_tmp = df_offset.copy().merge(df_censor, how='left',
                                left_on='icustay_id', right_on='icustay_id')
time_dict = mp.generate_times(df_tmp, T=2, seed=111, censor=True)

# define the patients
iid = 200001
iid2 = 200019

df_pat = df.loc[df['icustay_id'] == iid, :].set_index('hr')
deathtime = df_offset.loc[df_offset['icustay_id'] == iid, 'deathtime_hours'].values

# Two subplots, the axes array is 1-d
f, axarr = plt.subplots(2, sharex=True, figsize=[10, 10])

pretty_labels = {'heartrate': 'Heart rate', 'meanbp': 'Mean blood pressure',
                 'resprate': 'Respiratory rate', 'spo2': 'Peripheral oxygen saturation',
                 'tempc': 'Temperature', 'bg_ph': 'pH',
                 'bg_bicarbonate': 'Serum bicarbonate', 'hemoglobin': 'Hemoglobin',
                 'potassium': 'Potassium', 'inr': 'International normalized ratio',
                 'bg_lactate': 'Lactate', 'wbc': 'White blood cell count'}

# first plot all the vitals in subfigure 1
var_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2']

i = 0
t_scale = 1.0  # divide by this to get from hours to t_unit
t_unit = 'Hours elapsed'
for v in var_vitals:
    idx = ~df_pat[v].isnull()
    if np.sum(idx) > 0:
        axarr[0].plot(df_pat.loc[idx, v].index / t_scale, df_pat.loc[idx, v].values,
                      '--', label=pretty_labels[v], marker=marker[np.mod(i, 7)],
                      color=tableau20[i], linewidth=2)
    i += 1

axarr[0].set_ylim([0, 150])
y_lim = axarr[0].get_ylim()

# add time of death, if present
if deathtime.shape[0] > 0 and not np.isnan(deathtime[0]):
    axarr[0].plot([deathtime, deathtime], y_lim, 'k:', linewidth=3)

# add a grey patch to represent the window
endtime = time_dict[iid]
rect = matplotlib.patches.Rectangle((endtime - T_WINDOW, y_lim[0]),
                                    T_WINDOW, y_lim[1], color='#bdbdbd')
axarr[0].add_patch(rect)

axarr[0].set_ylabel('Vital signs for {}'.format(iid), fontsize=16)

# next plot the vitals for the second patient in subfigure 2
df_pat = df.loc[df['icustay_id'] == iid2, :].set_index('hr')
deathtime = df_offset.loc[df_offset['icustay_id'] == iid2, 'deathtime_hours'].values

i = 0
t_scale = 1.0  # divide by this to get from hours to t_unit
t_unit = 'Hours elapsed since ICU admission'
for v in var_vitals:
    idx = ~df_pat[v].isnull()
    if np.sum(idx) > 0:
        axarr[1].plot(df_pat.loc[idx, v].index / t_scale, df_pat.loc[idx, v].values,
                      '--', label=pretty_labels[v], marker=marker[np.mod(i, 7)],
                      markersize=8, color=tableau20[i], linewidth=2)
    i += 1

axarr[1].set_ylim([0, 150])
y_lim = axarr[1].get_ylim()

# add time of death, if present
if deathtime.shape[0] > 0 and not np.isnan(deathtime[0]):
    axarr[1].plot([deathtime, deathtime], y_lim, 'k:', linewidth=3)
    axarr[1].arrow(deathtime - 5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k')
    axarr[1].text(deathtime - 12, 112.5, 'Death', fontsize=16)

# add DNR
dnrtime = df_censor.loc[df_censor['icustay_id'] == iid2, 'censortime_hours'].values
if dnrtime.shape[0] > 0:
    axarr[1].plot([dnrtime, dnrtime], y_lim, 'm:', linewidth=3)
    axarr[1].arrow(dnrtime + 5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')
    axarr[1].text(dnrtime + 5, 132.5, 'DNR', fontsize=16)

# add a patch to represent the window
endtime = time_dict[iid2]
rect = matplotlib.patches.Rectangle((endtime - T_WINDOW, y_lim[0]),
                                    T_WINDOW, y_lim[1], color='#bdbdbd')
axarr[1].add_patch(rect)

axarr[1].set_xlabel(t_unit, fontsize=16)
axarr[1].set_ylabel('Vital signs for {}'.format(iid2), fontsize=16)
# axarr[1].legend(shadow=True, fancybox=True, loc='upper center',
#                 bbox_to_anchor=(0.5, 1.1), ncol=3)
plt.show()
MIT
notebooks/mp-plot-patient-data.ipynb
alistairewj/mortality-prediction
Both 24-hour and 4-hour windows
# generate a random time dictionary
T_WINDOW = 4
df_tmp = df_offset.copy().merge(df_censor, how='left',
                                left_on='icustay_id', right_on='icustay_id')
time_dict = mp.generate_times(df_tmp, T=2, seed=111, censor=True)

# define the patients
iid = 200001
iid2 = 200019

df_pat = df.loc[df['icustay_id'] == iid, :].set_index('hr')
deathtime = df_offset.loc[df_offset['icustay_id'] == iid, 'deathtime_hours'].values

# Two subplots, the axes array is 1-d
f, axarr = plt.subplots(2, sharex=True, figsize=[10, 10])

pretty_labels = {'heartrate': 'Heart rate', 'meanbp': 'Mean blood pressure',
                 'resprate': 'Respiratory rate', 'spo2': 'Peripheral oxygen saturation',
                 'tempc': 'Temperature', 'bg_ph': 'pH',
                 'bg_bicarbonate': 'Serum bicarbonate', 'hemoglobin': 'Hemoglobin',
                 'potassium': 'Potassium', 'inr': 'International normalized ratio',
                 'bg_lactate': 'Lactate', 'wbc': 'White blood cell count'}

# first plot all the vitals in subfigure 1
var_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2']

i = 0
t_scale = 1.0  # divide by this to get from hours to t_unit
t_unit = 'Hours elapsed'
for v in var_vitals:
    idx = ~df_pat[v].isnull()
    if np.sum(idx) > 0:
        axarr[0].plot(df_pat.loc[idx, v].index / t_scale, df_pat.loc[idx, v].values,
                      '--', label=pretty_labels[v], marker=marker[np.mod(i, 7)],
                      color=tableau20[i], linewidth=2)
    i += 1

axarr[0].set_ylim([0, 150])
y_lim = axarr[0].get_ylim()

# add time of death, if present
if deathtime.shape[0] > 0 and not np.isnan(deathtime[0]):
    axarr[0].plot([deathtime, deathtime], y_lim, 'k:', linewidth=3)

# add a grey patch to represent the 4 hour window
endtime = time_dict[iid]
rect = matplotlib.patches.Rectangle((endtime - T_WINDOW, y_lim[0]),
                                    T_WINDOW, y_lim[1], color='#bdbdbd')
axarr[0].add_patch(rect)

# add a grey patch to represent the 24 hour window
rect = matplotlib.patches.Rectangle((0, y_lim[0]), 24, y_lim[1], color='#bdbdbd')
axarr[0].add_patch(rect)

axarr[0].set_ylabel('Vital signs for {}'.format(iid), fontsize=16)

# next plot the vitals for the second patient in subfigure 2
df_pat = df.loc[df['icustay_id'] == iid2, :].set_index('hr')
deathtime = df_offset.loc[df_offset['icustay_id'] == iid2, 'deathtime_hours'].values

i = 0
t_scale = 1.0  # divide by this to get from hours to t_unit
t_unit = 'Hours elapsed since ICU admission'
for v in var_vitals:
    idx = ~df_pat[v].isnull()
    if np.sum(idx) > 0:
        axarr[1].plot(df_pat.loc[idx, v].index / t_scale, df_pat.loc[idx, v].values,
                      '--', label=pretty_labels[v], marker=marker[np.mod(i, 7)],
                      markersize=8, color=tableau20[i], linewidth=2)
    i += 1

axarr[1].set_ylim([0, 150])
y_lim = axarr[1].get_ylim()

# add time of death, if present
if deathtime.shape[0] > 0 and not np.isnan(deathtime[0]):
    axarr[1].plot([deathtime, deathtime], y_lim, 'k:', linewidth=3)
    axarr[1].arrow(deathtime - 5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k')
    axarr[1].text(deathtime - 12, 112.5, 'Death', fontsize=16)

# add DNR
dnrtime = df_censor.loc[df_censor['icustay_id'] == iid2, 'censortime_hours'].values
if dnrtime.shape[0] > 0:
    axarr[1].plot([dnrtime, dnrtime], y_lim, 'm:', linewidth=3)
    axarr[1].arrow(dnrtime + 5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')
    axarr[1].text(dnrtime + 5, 132.5, 'DNR', fontsize=16)

# add a patch to represent the 4 hour window
endtime = time_dict[iid2]
rect = matplotlib.patches.Rectangle((endtime - T_WINDOW, y_lim[0]),
                                    T_WINDOW, y_lim[1], color='#bdbdbd')
axarr[1].add_patch(rect)

# add a patch to represent the 24 hour window
rect = matplotlib.patches.Rectangle((0, y_lim[0]), 24, y_lim[1], color='#bdbdbd')
axarr[1].add_patch(rect)

axarr[1].set_xlabel(t_unit, fontsize=16)
axarr[1].set_ylabel('Vital signs for {}'.format(iid2), fontsize=16)
# axarr[1].legend(shadow=True, fancybox=True, loc='upper center',
#                 bbox_to_anchor=(0.5, 1.1), ncol=3)
plt.show()
MIT
notebooks/mp-plot-patient-data.ipynb
alistairewj/mortality-prediction
Fairseq in Amazon SageMaker: Pre-trained English to French translation model

In this notebook, we will show you how to serve an English to French translation model using a pre-trained model provided by the [Fairseq toolkit](https://github.com/pytorch/fairseq).

Permissions

Running this notebook requires permissions in addition to the regular SageMakerFullAccess permissions, because it creates new repositories in Amazon ECR. The easiest way to add these permissions is to attach the managed policy AmazonEC2ContainerRegistryFullAccess to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this; the new permissions will be available immediately.

Download pre-trained model

Fairseq maintains its pre-trained models [here](https://github.com/pytorch/fairseq/blob/master/examples/translation/README.md). We will use the model that was pre-trained on the [WMT14 English-French](http://statmt.org/wmt14/translation-task.html) dataset. As the models are archived in .bz2 format, we need to convert them to .tar.gz, the format supported by Amazon SageMaker.

Convert archive
%%sh
wget https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2
tar xvjf wmt14.v2.en-fr.fconv-py.tar.bz2 > /dev/null
cd wmt14.en-fr.fconv-py
mv model.pt checkpoint_best.pt
tar czvf wmt14.en-fr.fconv-py.tar.gz checkpoint_best.pt dict.en.txt dict.fr.txt bpecodes README.md > /dev/null
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
The pre-trained model has been downloaded and converted. The next step is to upload it to Amazon S3 so it is available for inference.

Upload data to Amazon S3
import sagemaker

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_session.region_name
account = sagemaker_session.boto_session.client("sts").get_caller_identity().get("Account")

bucket = sagemaker_session.default_bucket()
prefix = "sagemaker/DEMO-pytorch-fairseq/pre-trained-models"

role = sagemaker.get_execution_role()

trained_model_location = sagemaker_session.upload_data(
    path="wmt14.en-fr.fconv-py/wmt14.en-fr.fconv-py.tar.gz", bucket=bucket, key_prefix=prefix
)
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
Build Fairseq serving container

Next we need to register a Docker image in Amazon SageMaker that contains the Fairseq code and that will be pulled at inference time to produce predictions from the pre-trained model we downloaded.
%%sh
chmod +x create_container.sh
./create_container.sh pytorch-fairseq-serve
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
The Fairseq serving image has been pushed to Amazon ECR, the registry from which Amazon SageMaker will pull that image to launch both training and prediction.

Hosting the pre-trained model for inference

We first need to define a base JSONPredictor class that will help us send predictions to the model once it's hosted on the Amazon SageMaker endpoint.
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer

class JSONPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(JSONPredictor, self).__init__(
            endpoint_name, sagemaker_session, json_serializer, json_deserializer
        )
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
We can now use the Model class to deploy the model artifacts (the pre-trained model) on a CPU instance. Let's use a `ml.m5.xlarge`.
from sagemaker import Model

algorithm_name = "pytorch-fairseq-serve"
image = "{}.dkr.ecr.{}.amazonaws.com/{}:latest".format(account, region, algorithm_name)

model = Model(
    model_data=trained_model_location,
    role=role,
    image=image,
    predictor_cls=JSONPredictor,
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
Now it's your turn to play. Input a sentence in English and get the translation in French by simply calling `predict`.
import html

result = predictor.predict("I love translation")

# Some characters are HTML-escaped and need unescaping before printing
print(html.unescape(result))
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
Once you're done getting predictions, remember to shut down your endpoint, as you no longer need it.

Delete endpoint
model.sagemaker_session.delete_endpoint(predictor.endpoint)
Apache-2.0
advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb
Amirosimani/amazon-sagemaker-examples
SSD Evaluation Tutorial

This is a brief tutorial that explains how to compute the average precisions for any trained SSD model using the `Evaluator` class. The `Evaluator` computes the average precisions according to the Pascal VOC pre-2010 or post-2010 detection evaluation algorithms. You can find details about these computation methods [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:ap).

As an example we'll evaluate an SSD300 on the Pascal VOC 2007 `test` dataset, but note that the `Evaluator` works for any SSD model and any dataset that is compatible with the `DataGenerator`. If you would like to run the evaluation on a different model and/or dataset, the procedure is analogous to what is shown below; you just have to build the appropriate model and load the relevant dataset.

Note: In case you would like to evaluate a model on MS COCO, I would recommend following the [MS COCO evaluation notebook](https://github.com/pierluigiferrari/ssd_keras/blob/master/ssd300_evaluation_COCO.ipynb) instead, because it can produce the results format required by the MS COCO evaluation server and uses the official MS COCO evaluation code, which computes the mAP slightly differently from the Pascal VOC method.

Note: In case you want to evaluate any of the provided trained models, make sure that you build the respective model with the correct set of scaling factors to reproduce the official results. The models that were trained on MS COCO and fine-tuned on Pascal VOC require the MS COCO scaling factors, not the Pascal VOC scaling factors.
from keras import backend as K
from keras.models import load_model
from keras.optimizers import Adam
from imageio import imread
import numpy as np
from matplotlib import pyplot as plt

from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from data_generator.object_detection_2d_data_generator import DataGenerator
from eval_utils.average_precision_evaluator import Evaluator

%matplotlib inline

import os
import os.path as p

# Set a few configuration parameters.
img_height = 300
img_width = 300
n_classes = 20
model_mode = 'training'
Apache-2.0
ssd300_evaluation.ipynb
mdsmith-cim/ssd_keras
1. Load a trained SSD

Either load a trained model or build a model and load trained weights into it. Since the HDF5 files I'm providing contain only the weights for the various SSD versions, not the complete models, you'll have to go with the latter option when using this implementation for the first time. You can then of course save the model and next time load the full model directly, without having to build it.

You can find the download links to all the trained model weights in the README.

1.1. Build the model and load trained weights into it
# 1: Build the Keras model

K.clear_session()  # Clear previous models from memory.

model = ssd_300(image_size=(img_height, img_width, 3),
                n_classes=n_classes,
                mode=model_mode,
                l2_regularization=0.0005,
                scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05],  # The scales for MS COCO: [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05]
                aspect_ratios_per_layer=[[1.0, 2.0, 0.5],
                                         [1.0, 2.0, 0.5, 3.0, 1.0/3.0],
                                         [1.0, 2.0, 0.5, 3.0, 1.0/3.0],
                                         [1.0, 2.0, 0.5, 3.0, 1.0/3.0],
                                         [1.0, 2.0, 0.5],
                                         [1.0, 2.0, 0.5]],
                two_boxes_for_ar1=True,
                steps=[8, 16, 32, 64, 100, 300],
                offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
                clip_boxes=False,
                variances=[0.1, 0.1, 0.2, 0.2],
                normalize_coords=True,
                subtract_mean=[123, 117, 104],
                swap_channels=[2, 1, 0],
                confidence_thresh=0.01,
                iou_threshold=0.45,
                top_k=200,
                nms_max_output_size=400)

# 2: Load the trained weights into the model.
weights_path = '/usr/local/data/msmith/uncertainty/ssd_keras/good_dropout_model/ssd300_dropout_PASCAL2012_train_+12_epoch-58_loss-3.8960_val_loss-5.0832.h5'
model.load_weights(weights_path, by_name=True)

# 3: Compile the model so that Keras won't complain the next time you load it.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
WARNING: Logging before flag parsing goes to stderr. W1121 16:15:06.265161 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead. W1121 16:15:06.267094 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead. W1121 16:15:06.297540 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead. W1121 16:15:06.299231 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. W1121 16:15:06.322895 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead. W1121 16:15:06.371278 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead. 
W1121 16:15:06.573747 140201693157120 deprecation.py:506] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. W1121 16:15:08.872177 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. W1121 16:15:08.889451 140201693157120 deprecation.py:323] From /usr/local/data/msmith/uncertainty/ssd_keras/keras_loss_function/keras_ssd_loss.py:133: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. W1121 16:15:08.900078 140201693157120 deprecation.py:323] From /usr/local/data/msmith/uncertainty/ssd_keras/keras_loss_function/keras_ssd_loss.py:74: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where W1121 16:15:08.914669 140201693157120 deprecation.py:323] From /usr/local/data/msmith/uncertainty/ssd_keras/keras_loss_function/keras_ssd_loss.py:166: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead.
Apache-2.0
ssd300_evaluation.ipynb
mdsmith-cim/ssd_keras
Or: 1.2. Load a trained model

We set `model_mode` above, so the evaluator expects that you load a model that was built in that mode. If you're loading a model that was built in a different mode, change the `model_mode` parameter accordingly.
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = 'ssd300_dropout_pascal_07+12_epoch-114_loss-4.3685_val_loss-4.5034.h5'

# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)

K.clear_session()  # Clear previous models from memory.

model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
                                               'L2Normalization': L2Normalization,
                                               'DecodeDetections': DecodeDetections,
                                               'compute_loss': ssd_loss.compute_loss})

model.summary()
Apache-2.0
ssd300_evaluation.ipynb
mdsmith-cim/ssd_keras
2. Create a data generator for the evaluation dataset

Instantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase.
ROOT_PATH = '/usr/local/data/msmith/APL/Datasets/PASCAL/'

# The directories that contain the images.
VOC_2007_images_dir = p.join(ROOT_PATH, 'VOCdevkit/VOC2007/JPEGImages/')
VOC_2012_images_dir = p.join(ROOT_PATH, 'VOCdevkit/VOC2012/JPEGImages/')

# The directories that contain the annotations.
VOC_2007_annotations_dir = p.join(ROOT_PATH, 'VOCdevkit/VOC2007/Annotations/')
VOC_2012_annotations_dir = p.join(ROOT_PATH, 'VOCdevkit/VOC2012/Annotations/')

# The paths to the image sets.
VOC_2007_train_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2007/ImageSets/Main/train.txt')
VOC_2012_train_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2012/ImageSets/Main/train.txt')
VOC_2007_val_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2007/ImageSets/Main/val.txt')
VOC_2012_val_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2012/ImageSets/Main/val.txt')
VOC_2007_trainval_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2007/ImageSets/Main/trainval.txt')
VOC_2012_trainval_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2012/ImageSets/Main/trainval.txt')
VOC_2007_test_image_set_filename = p.join(ROOT_PATH, 'VOCdevkit/VOC2007/ImageSets/Main/test.txt')

dataset = DataGenerator(load_images_into_memory=True)

# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
           'aeroplane', 'bicycle', 'bird', 'boat',
           'bottle', 'bus', 'car', 'cat',
           'chair', 'cow', 'diningtable', 'dog',
           'horse', 'motorbike', 'person', 'pottedplant',
           'sheep', 'sofa', 'train', 'tvmonitor']

dataset.parse_xml(images_dirs=[VOC_2012_images_dir],
                  image_set_filenames=[VOC_2012_val_image_set_filename],
                  annotations_dirs=[VOC_2012_annotations_dir],
                  classes=classes,
                  include_classes='all',
                  exclude_truncated=False,
                  exclude_difficult=False,
                  ret=False)
Processing image set 'val.txt': 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5823/5823 [00:43<00:00, 134.57it/s] Loading images into memory: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5823/5823 [01:06<00:00, 87.20it/s]
Apache-2.0
ssd300_evaluation.ipynb
mdsmith-cim/ssd_keras
3. Run the evaluation

Now that we have instantiated a model and a data generator to serve the dataset, we can set up the evaluator and run the evaluation.

The evaluator is quite flexible: it can compute the average precisions according to the Pascal VOC pre-2010 algorithm, which samples 11 equidistant points of the precision-recall curves, or according to the Pascal VOC post-2010 algorithm, which integrates numerically over the entire precision-recall curves instead of sampling a few individual points. You could also change the number of sampled recall points or the required IoU overlap for a prediction to be considered a true positive, among other things. Check out the `Evaluator`'s documentation for details on all the arguments.

In its default settings, the evaluator's algorithm is identical to the official Pascal VOC pre-2010 Matlab detection evaluation algorithm, so you don't really need to tweak anything unless you want to.

The evaluator roughly performs the following steps: it runs predictions over the entire given dataset, matches these predictions to the ground truth boxes, computes the precision-recall curve for each class, samples 11 equidistant points from these precision-recall curves to compute the average precision for each class, and finally computes the mean average precision over all classes.
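To make the 11-point sampling concrete, here is a minimal standalone sketch (not the `Evaluator`'s actual implementation): for each recall level r in {0.0, 0.1, ..., 1.0}, take the maximum precision over all curve points whose recall is at least r, then average the 11 values.

```python
import numpy as np

def eleven_point_ap(recalls, precisions):
    """Pascal VOC pre-2010 average precision from a precision-recall curve."""
    recalls = np.asarray(recalls)
    precisions = np.asarray(precisions)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        # interpolated precision: the best precision achievable at recall >= r
        ap += (precisions[mask].max() if mask.any() else 0.0) / 11.0
    return ap

# A perfect detector keeps precision 1.0 at every recall level, giving AP = 1
print(eleven_point_ap([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))
```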
evaluator = Evaluator(model=model,
                      n_classes=n_classes,
                      data_generator=dataset,
                      model_mode=model_mode)

results = evaluator(img_height=img_height,
                    img_width=img_width,
                    batch_size=2,
                    data_generator_mode='resize',
                    round_confidences=False,
                    matching_iou_threshold=0.5,
                    border_pixels='include',
                    sorting_algorithm='quicksort',
                    average_precision_mode='sample',
                    num_recall_points=11,
                    ignore_neutral_boxes=True,
                    return_precisions=True,
                    return_recalls=True,
                    return_average_precisions=True,
                    verbose=True)

mean_average_precision, average_precisions, precisions, recalls = results
Number of images in the evaluation dataset: 5823

(Verbose progress output omitted: batch-wise prediction over 2912 batches, matching of predictions to ground truth for classes 1-20, then precision/recall and average precision computation for classes 1-20.)
Apache-2.0
ssd300_evaluation.ipynb
mdsmith-cim/ssd_keras
4. Visualize the results

Let's take a look:
for i in range(1, len(average_precisions)):
    print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3)))
print()
print("{:<14}{:<6}{}".format('', 'mAP', round(mean_average_precision, 3)))

m = max((n_classes + 1) // 2, 2)
n = 2

fig, cells = plt.subplots(m, n, figsize=(n*8, m*8))
for i in range(m):
    for j in range(n):
        if n*i+j+1 > n_classes:
            break
        cells[i, j].plot(recalls[n*i+j+1], precisions[n*i+j+1], color='blue', linewidth=1.0)
        cells[i, j].set_xlabel('recall', fontsize=14)
        cells[i, j].set_ylabel('precision', fontsize=14)
        cells[i, j].grid(True)
        cells[i, j].set_xticks(np.linspace(0, 1, 11))
        cells[i, j].set_yticks(np.linspace(0, 1, 11))
        cells[i, j].set_title("{}, AP: {:.3f}".format(classes[n*i+j+1], average_precisions[n*i+j+1]), fontsize=16)
5. Advanced use

`Evaluator` objects maintain copies of all relevant intermediate results (predictions, precisions, recalls, etc.), so if you want to experiment with different parameters, e.g. different IoU thresholds, there is no need to recompute the predictions every time you change a parameter. Instead, you can update the computation only from the first affected step onwards.

The evaluator's `__call__()` method is just a convenience wrapper that executes its other methods in the correct order. You can call any of these other methods individually, as shown below, but you have to make sure to call them in the correct order.

Note that the example below uses the same evaluator object as above. Say you wanted to compute the Pascal VOC post-2010 'integrate' version of the average precisions instead of the pre-2010 'sample' version computed above. The evaluator still has an internal copy of all the predictions, and since computing the predictions makes up the vast majority of the overall computation time, and since the predictions aren't affected by the average precision computation mode, we skip computing the predictions again and only run the steps that come after the prediction phase of the evaluation. We could even skip the matching step, since it isn't affected by the average precision mode either. In fact, we would only have to call `compute_average_precisions()` and `compute_mean_average_precision()` again, but for the sake of illustration we'll re-do the other computations, too.
evaluator.get_num_gt_per_class(ignore_neutral_boxes=True, verbose=False, ret=False)

evaluator.match_predictions(ignore_neutral_boxes=True,
                            matching_iou_threshold=0.5,
                            border_pixels='include',
                            sorting_algorithm='quicksort',
                            verbose=True,
                            ret=False)

precisions, recalls = evaluator.compute_precision_recall(verbose=True, ret=True)

average_precisions = evaluator.compute_average_precisions(mode='integrate',
                                                          num_recall_points=11,
                                                          verbose=True,
                                                          ret=True)

mean_average_precision = evaluator.compute_mean_average_precision(ret=True)

for i in range(1, len(average_precisions)):
    print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3)))
print()
print("{:<14}{:<6}{}".format('', 'mAP', round(mean_average_precision, 3)))
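To make the difference between the 'sample' and 'integrate' modes concrete, here is a schematic NumPy sketch of both average precision computations on a small made-up precision-recall curve. The arrays `recalls_c` and `precisions_c` are hypothetical; in the evaluator these values come from `compute_precision_recall()`, and the real implementation differs in detail.

```python
import numpy as np

# Hypothetical precision-recall curve for one class; in the evaluator these
# values come from compute_precision_recall().
recalls_c = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precisions_c = np.array([1.0, 0.9, 0.8, 0.6, 0.4, 0.2])

def average_precision_sample(precisions, recalls, num_recall_points=11):
    """Pre-2010 Pascal VOC style: average the best precision achievable at
    recall >= t over equally spaced recall thresholds t."""
    ap = 0.0
    # Round the thresholds so that e.g. 0.6000000000000001 still matches
    # a recall value of exactly 0.6.
    for t in np.round(np.linspace(0.0, 1.0, num_recall_points), 10):
        mask = recalls >= t
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / num_recall_points

def average_precision_integrate(precisions, recalls):
    """Post-2010 Pascal VOC style: integrate the precision-recall curve after
    making precision monotonically decreasing in recall."""
    prec = np.maximum.accumulate(precisions[::-1])[::-1]
    return float(np.sum(np.diff(np.concatenate(([0.0], recalls))) * prec))

print(average_precision_sample(precisions_c, recalls_c))     # about 0.618
print(average_precision_integrate(precisions_c, recalls_c))  # about 0.580
```

Switching `average_precision_mode` (or `mode` in `compute_average_precisions()`) corresponds to switching between these two ideas.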
[![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb)

Tutorial 2: Differential Equations

**Week 0, Day 4: Calculus**

**By Neuromatch Academy**

__Content creators:__ John S Butler, Arvind Kumar with help from Rebecca Brady

__Content reviewers:__ Swapnil Kumar, Sirisha Sripada, Matthew McCann, Tessy Tom

__Production editors:__ Matthew McCann, Ella Batty

**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**

---

Tutorial Objectives

*Estimated timing of tutorial: 45 minutes*

A great deal of neuroscience can be modelled using differential equations, from gating channels to single neurons to networks of neurons to blood flow to behaviour. A simple way to think about differential equations is that they are equations that describe how something changes. The most famous of these in neuroscience is the Nobel Prize-winning Hodgkin-Huxley equation, which describes a neuron by modelling the gating of ion channels in the axon. But we will not start there; we will start a few steps back.

Differential equations are mathematical equations that describe how something like a population or a neuron changes over time. The reason differential equations are so useful is that they can generalise a process such that one equation can be used to describe many different outcomes.

The general form of a first order differential equation is:

\begin{align*}
\frac{d}{dt}y(t)&=f(t,y(t))
\end{align*}

which can be read as "the change in a process $y$ over time $t$ is a function $f$ of time $t$ and itself $y$". This might initially seem like a paradox, as you are using a process $y$ you want to know about to describe itself, a bit like the M.C. Escher drawing of two hands drawing [each other](https://en.wikipedia.org/wiki/Drawing_Hands).
But that is the beauty of mathematics - this can be solved some of the time, and when it cannot be solved exactly we can use numerical methods to estimate the answer (as we will see in the next tutorial).

In this tutorial, we will see how __differential equations are motivated by observations of physical responses.__ We will break down the population differential equation, then the integrate and fire model, which leads nicely into raster plots and frequency-current curves to rate models.

**Steps:**
- Get an intuitive understanding of a linear population differential equation (humans, not neurons)
- Visualize the relationship between the change in population and the population
- Break down the Leaky Integrate and Fire (LIF) differential equation
- Code the exact solution of an LIF for a constant input
- Visualize and listen to the response of the LIF for different inputs
# @title Video 1: Why do we care about differential equations? from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1v64y197bW", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="LhX-mUd8lPo", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
CC-BY-4.0
tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb
ofou/course-content
--- Setup
# Imports import numpy as np import matplotlib.pyplot as plt # @title Figure Settings import IPython.display as ipd from matplotlib import gridspec import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' # use NMA plot style plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") my_layout = widgets.Layout() # @title Plotting Functions def plot_dPdt(alpha=.3): """ Plots change in population over time Args: alpha: Birth Rate Returns: A figure two panel figure left panel: change in population as a function of population right panel: membrane potential as a function of time """ with plt.xkcd(): time=np.arange(0, 10 ,0.01) fig = plt.figure(figsize=(12,4)) gs = gridspec.GridSpec(1, 2) ## dpdt as a fucntion of p plt.subplot(gs[0]) plt.plot(np.exp(alpha*time), alpha*np.exp(alpha*time)) plt.xlabel(r'Population $p(t)$ (millions)') plt.ylabel(r'$\frac{d}{dt}p(t)=\alpha p(t)$') ## p exact solution plt.subplot(gs[1]) plt.plot(time, np.exp(alpha*time)) plt.ylabel(r'Population $p(t)$ (millions)') plt.xlabel('time (years)') plt.show() def plot_V_no_input(V_reset=-75): """ Args: V_reset: Reset Potential Returns: A figure two panel figure left panel: change in membrane potential as a function of membrane potential right panel: membrane potential as a function of time """ E_L=-75 tau_m=10 t=np.arange(0,100,0.01) V= E_L+(V_reset-E_L)*np.exp(-(t)/tau_m) V_range=np.arange(-90,0,1) dVdt=-(V_range-E_L)/tau_m with plt.xkcd(): time=np.arange(0, 10, 0.01) fig = plt.figure(figsize=(12, 4)) gs = gridspec.GridSpec(1, 2) plt.subplot(gs[0]) plt.plot(V_range,dVdt) plt.hlines(0,min(V_range),max(V_range), colors='black', linestyles='dashed') plt.vlines(-75, min(dVdt), max(dVdt), colors='black', linestyles='dashed') plt.plot(V_reset,-(V_reset - E_L)/tau_m, 'o', label=r'$V_{reset}$') plt.text(-50, 1, 'Positive') plt.text(-50, -2, 'Negative') plt.text(E_L - 1, max(dVdt), r'$E_L$') plt.legend() 
plt.xlabel('Membrane Potential V (mV)') plt.ylabel(r'$\frac{dV}{dt}=\frac{-(V(t)-E_L)}{\tau_m}$') plt.subplot(gs[1]) plt.plot(t,V) plt.plot(t[0],V_reset,'o') plt.ylabel(r'Membrane Potential $V(t)$ (mV)') plt.xlabel('time (ms)') plt.ylim([-95, -60]) plt.show() ## LIF PLOT def plot_IF(t, V,I,Spike_time): """ Args: t : time V : membrane Voltage I : Input Spike_time : Spike_times Returns: figure with three panels top panel: Input as a function of time middle panel: membrane potential as a function of time bottom panel: Raster plot """ with plt.xkcd(): fig = plt.figure(figsize=(12, 4)) gs = gridspec.GridSpec(3, 1, height_ratios=[1, 4, 1]) # PLOT OF INPUT plt.subplot(gs[0]) plt.ylabel(r'$I_e(nA)$') plt.yticks(rotation=45) plt.hlines(I,min(t),max(t),'g') plt.ylim((2, 4)) plt.xlim((-50, 1000)) # PLOT OF ACTIVITY plt.subplot(gs[1]) plt.plot(t,V) plt.xlim((-50, 1000)) plt.ylabel(r'$V(t)$(mV)') # PLOT OF SPIKES plt.subplot(gs[2]) plt.ylabel(r'Spike') plt.yticks([]) plt.scatter(Spike_time, 1 * np.ones(len(Spike_time)), color="grey", marker=".") plt.xlim((-50, 1000)) plt.xlabel('time(ms)') plt.show() ## Plotting the differential Equation def plot_dVdt(I=0): """ Args: I : Input Current Returns: figure of change in membrane potential as a function of membrane potential """ with plt.xkcd(): E_L = -75 tau_m = 10 V = np.arange(-85, 0, 1) g_L = 10. 
fig = plt.figure(figsize=(6, 4)) plt.plot(V,(-(V-E_L) + I*10) / tau_m) plt.hlines(0, min(V), max(V), colors='black', linestyles='dashed') plt.xlabel('V (mV)') plt.ylabel(r'$\frac{dV}{dt}$') plt.show() # @title Helper Functions ## EXACT SOLUTION OF LIF def Exact_Integrate_and_Fire(I,t): """ Args: I : Input Current t : time Returns: Spike : Spike Count Spike_time : Spike time V_exact : Exact membrane potential """ Spike = 0 tau_m = 10 R = 10 t_isi = 0 V_reset = E_L = -75 V_exact = V_reset * np.ones(len(t)) V_th = -50 Spike_time = [] for i in range(0, len(t)): V_exact[i] = E_L + R*I + (V_reset - E_L - R*I) * np.exp(-(t[i]-t_isi)/tau_m) # Threshold Reset if V_exact[i] > V_th: V_exact[i-1] = 0 V_exact[i] = V_reset t_isi = t[i] Spike = Spike+1 Spike_time = np.append(Spike_time, t[i]) return Spike, Spike_time, V_exact
--- Section 1: Population differential equation
# @title Video 2: Population differential equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1pg41137CU", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="czgGyoUsRoQ", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
This video covers our first example of a differential equation: one that models the change in population.

Click here for text recap of video

To get an intuitive feel for differential equations, we will start with a population differential equation, which models the change in population [1] (that is, human population, not neurons; we will get to neurons later). Mathematically it is written as:

\begin{align*}
\frac{d}{dt}\,p(t) &= \alpha p(t),
\end{align*}

where $p(t)$ is the population of the world and $\alpha$ is a parameter representing the birth rate.

Another way of thinking about the model is that the equation

\begin{align*}
\frac{d}{dt}\,p(t) &= \alpha p(t)
\end{align*}

can be written as:

\begin{align*}
\text{"Change in Population"} &= \text{"Birth rate times Current population."}
\end{align*}

The equation is saying something reasonable: maybe not a perfect model, but a good start.

Think! 1.1: Interpreting the behavior of a linear population equation

Using the plot below of the change in population $\frac{d}{dt} p(t)$ as a function of population $p(t)$ with birth rate $\alpha=0.3$, discuss the following questions:

1. Why is the population differential equation known as a linear differential equation?
2. How does population size affect the rate of change of the population?
# @markdown Execute the code to plot the rate of change of population as a function of population
p = np.arange(0, 100, 0.1)

with plt.xkcd():
  dpdt = 0.3*p
  fig = plt.figure(figsize=(6, 4))
  plt.plot(p, dpdt)
  plt.xlabel(r'Population $p(t)$ (millions)')
  plt.ylabel(r'$\frac{d}{dt}p(t)=\alpha p(t)$')
  plt.show()

# to_remove explanation
"""
1. The plot of $\frac{dp}{dt}$ is a line, which is why the differential equation is known as a linear differential equation.
2. As the population increases, the change of population increases. A population of 20 has a change of 6 while a population of 100 has a change of 30. This makes sense - the larger the population, the larger the change.
"""
Section 1.1: Exact solution of the population equation

Section 1.1.1: Initial condition

The linear population differential equation is known as an initial value differential equation because we need an initial population value to solve it. Here we will set our initial population at time $0$ to $1$:

\begin{align*}
p(0)=1.
\end{align*}

Different initial conditions will lead to different answers, but they will not change the differential equation. This is one of the strengths of a differential equation.

Section 1.1.2: Exact Solution

To calculate the exact solution of a differential equation, we must integrate both sides. Instead of numerical integration (as you delved into in the last tutorial), we will first try to solve the differential equations using analytical integration. As with derivatives, we can find analytical integrals of simple equations by consulting [a list](https://en.wikipedia.org/wiki/Lists_of_integrals). We can then get integrals for more complex equations using some mathematical tricks - the harder the equation, the more obscure the trick.

The linear population equation

\begin{align*}
\frac{d}{dt}\,p(t) &= \alpha p(t),\\
p(0)&=P_0,
\end{align*}

has the exact solution:

\begin{align*}
p(t)&=P_0e^{\alpha t}.
\end{align*}

The exact solution written in words is:

\begin{align*}
\text{"Population"}&=\text{"grows/declines exponentially as a function of time and birth rate"}.
\end{align*}

Most differential equations do not have a known exact solution, so in the next tutorial on numerical methods we will show how the solution can be estimated.

A small aside: a good deal of progress in mathematics was due to mathematicians writing taunting letters to each other saying they had a trick that could solve something better than everyone else. So do not worry too much about the tricks.
Example: Exact Solution of the Population Equation

Let's consider the population differential equation with a birth rate $\alpha=0.3$:

\begin{align*}
\frac{d}{dt}\,p(t) = 0.3 p(t),
\end{align*}

with the initial condition

\begin{align*}
p(0)=1.
\end{align*}

It has the exact solution

\begin{align*}
p(t)=e^{0.3 t}.
\end{align*}
# @markdown Execute code to plot the exact solution
t = np.arange(0, 10, 0.1)  # Time from 0 to 10 years in 0.1 steps

with plt.xkcd():
  p = np.exp(0.3 * t)
  fig = plt.figure(figsize=(6, 4))
  plt.plot(t, p)
  plt.ylabel('Population (millions)')
  plt.xlabel('time (years)')
  plt.show()
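As a sanity check, and a preview of the numerical methods in the next tutorial, a few lines of forward-Euler integration (an addition here, not part of the tutorial's own code) closely track the exact exponential solution:

```python
import numpy as np

# Step the population equation dp/dt = alpha * p forward in time with Euler's
# method and compare against the exact solution p(t) = e^(alpha * t).
alpha, dt = 0.3, 0.001
t = np.arange(0, 10 + dt, dt)
p = np.empty_like(t)
p[0] = 1.0                                    # initial condition p(0) = 1
for i in range(1, len(t)):
    p[i] = p[i - 1] + dt * alpha * p[i - 1]   # p_{n+1} = p_n + dt * alpha * p_n

exact = np.exp(alpha * t)
max_rel_err = np.max(np.abs(p - exact) / exact)
print(max_rel_err)  # small: the Euler estimate hugs the exact curve
```

The smaller the step `dt`, the closer the estimate gets to the exact solution - exactly the trade-off the next tutorial explores.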
Section 1.2: Parameters of the differential equation

*Estimated timing to here from start of tutorial: 12 min*

One of the goals when designing a differential equation is to make it generalisable, which means that the differential equation will give reasonable solutions for different countries with different birth rates $\alpha$.

Interactive Demo 1.2: Interactive Parameter Change

Play with the widget to see the relationship between $\alpha$ and the population differential equation as a function of population (left-hand side), and the population solution as a function of time (right-hand side). Pay close attention to the transition point from positive to negative.

How do changing parameters of the population equation affect the outcome?

1. What happens when $\alpha < 0$?
2. What happens when $\alpha > 0$?
3. What happens when $\alpha = 0$?
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    alpha=widgets.FloatSlider(.3, min=-1., max=1., step=.1, layout=my_layout)
)
def Pop_widget(alpha):
  plot_dPdt(alpha=alpha)
  plt.show()

# to_remove explanation
"""
1. Negative values of alpha result in an exponential decrease to 0, a stable solution.
2. Positive values of alpha result in an exponential increase to infinity.
3. Alpha equal to 0 is a unique point known as an equilibrium point, where dp/dt=0 and there is no change in population. This is known as a stable point.
"""
The population differential equation is an over-simplification and has some very obvious limitations:

1. Population growth is not exponential, as there are a limited number of resources, so the population will level out at some point.
2. It does not include any external factors on the population, like weather, predators, and prey.

These kinds of limitations can be addressed by extending the model.

While it might not seem that the population equation has direct relevance to neuroscience, a similar equation is used to describe the accumulation of evidence for decision making. This is known as the Drift Diffusion Model, which you will see in more detail on the Linear Systems day in Neuromatch (W2D2).

Another differential equation that is similar to the population equation is the Leaky Integrate and Fire model, which you may have seen in the Python pre-course materials on W0D1 and W0D2. It will turn up later in Neuromatch as well. Below we will delve into the motivation of the differential equation.

---

Section 2: The leaky integrate and fire model
# @title Video 3: The leaky integrate and fire model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1rb4y1C79n", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="ZfWO6MLCa1s", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
This video covers the Leaky Integrate and Fire model (a linear differential equation which describes the membrane potential of a single neuron).

Click here for text recap of full LIF equation from video

The Leaky Integrate and Fire model is a linear differential equation that describes the membrane potential ($V$) of a single neuron, which was proposed by Louis Édouard Lapicque in 1907 [2].

The subthreshold membrane potential dynamics of an LIF neuron are described by

\begin{align}
\tau_m\frac{dV}{dt} = -(V-E_L) + R_mI,
\end{align}

where $\tau_m$ is the time constant, $V$ is the membrane potential, $E_L$ is the resting potential, $R_m$ is the membrane resistance, and $I$ is the external input current.

In the next few sections, we will break down the full LIF equation and then build it back up to get an intuitive feel for the different facets of the differential equation.

Section 2.1: LIF without input

*Estimated timing to here from start of tutorial: 18 min*

As seen in the video, we will first model an LIF neuron without input, which results in the equation:

\begin{align}
\frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m},
\end{align}

where $\tau_m$ is the time constant, $V$ is the membrane potential, and $E_L$ is the resting potential.

Click here for further details (from video)

Removing the input gives the equation

\begin{align}
\tau_m\frac{dV}{dt} &= -V+E_L,
\end{align}

which can be written in words as:

\begin{align}
\begin{matrix}\text{"Time constant multiplied by the} \\ \text{change in membrane potential"}\end{matrix}&=\begin{matrix}\text{"Minus Current} \\ \text{membrane potential"} \end{matrix}+\begin{matrix}\text{"resting potential"}\end{matrix}.
\end{align}

The equation can be re-arranged to look even more like the population equation:

\begin{align}
\frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m}.
\end{align}

Think! 2.1: Effect of membrane potential $V$ on the LIF model

The plot below shows the change in membrane potential $\frac{dV}{dt}$ as a function of membrane potential $V$ with the parameters set as:

* `E_L = -75`
* `V_reset = -50`
* `tau_m = 10.`

1. What is the effect on $\frac{dV}{dt}$ when $V>-75$ mV?
2. What is the effect on $\frac{dV}{dt}$ when $V<-75$ mV?
3. What is the effect on $\frac{dV}{dt}$ when $V=-75$ mV?
# @markdown Make sure you execute this cell to plot the relationship between dV/dt and V

# Parameter definition
E_L = -75
tau_m = 10

# Range of Values of V
V = np.arange(-90, 0, 1)
dV = -(V - E_L) / tau_m

with plt.xkcd():
  fig = plt.figure(figsize=(6, 4))
  plt.plot(V, dV)
  plt.hlines(0, min(V), max(V), colors='black', linestyles='dashed')
  plt.vlines(-75, min(dV), max(dV), colors='black', linestyles='dashed')
  plt.text(-50, 1, 'Positive')
  plt.text(-50, -2, 'Negative')
  plt.text(E_L, max(dV) + 1, r'$E_L$')
  plt.xlabel(r'$V(t)$ (mV)')
  plt.ylabel(r'$\frac{dV}{dt}=\frac{-(V-E_L)}{\tau_m}$')
  plt.ylim(-8, 2)
  plt.show()

# to_remove explanation
"""
1. For $V>-75$ mV, the derivative is negative.
2. For $V<-75$ mV, the derivative is positive.
3. For $V=-75$ mV, the derivative is equal to $0$; this is a stable point where nothing changes.
"""
Section 2.1.1: Exact Solution of the LIF model without input

The LIF model has the exact solution:

\begin{align*}
V(t)=&\ E_L+(V_{reset}-E_L)e^{\frac{-t}{\tau_m}}
\end{align*}

where $\tau_m$ is the time constant, $V$ is the membrane potential, $E_L$ is the resting potential, and $V_{reset}$ is the initial membrane potential.

Click here for further details (from video)

Similar to the population equation, we need an initial membrane potential at time $0$ to solve the LIF model:

\begin{align}
\frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m},\\
V(0)&=V_{reset},
\end{align}

where $V_{reset}$ is called the reset potential.

The LIF model has the exact solution:

\begin{align*}
V(t)=&\ E_L+(V_{reset}-E_L)e^{\frac{-t}{\tau_m}}\\
\text{ which can be written as: }\\
\begin{matrix}\text{"Current membrane} \\ \text{potential"}\end{matrix}=&\text{"Resting potential"}+\begin{matrix}\text{"Reset potential minus resting potential} \\ \text{times exponential with rate one over time constant."}\end{matrix}
\end{align*}

Interactive Demo 2.1.1: Initial Condition $V_{reset}$

This exercise is to get an intuitive feel for how different initial conditions $V_{reset}$ impact the differential equation of the LIF and the exact solution of the equation:

\begin{align}
\frac{dV}{dt} &= \frac{-(V-E_L)}{\tau_m},
\end{align}

with the parameters set as:

* `E_L = -75,`
* `tau_m = 10.`

The panel on the left-hand side plots the change in membrane potential $\frac{dV}{dt}$ as a function of membrane potential $V$, and the right-hand side panel plots the exact solution $V$ as a function of time $t$; the green dot in both panels is the reset potential $V_{reset}$.

Pay close attention to when $V_{reset}=E_L=-75$ mV.

1. How does the solution look with initial values of $V_{reset} < -75$?
2. How does the solution look with initial values of $V_{reset} > -75$?
3. How does the solution look with initial values of $V_{reset} = -75$?
#@markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    V_reset=widgets.FloatSlider(-77., min=-91., max=-61., step=2, layout=my_layout)
)
def V_reset_widget(V_reset):
  plot_V_no_input(V_reset)

# to_remove explanation
"""
1. Initial values of $V_{reset} < -75$ result in the solution increasing to -75 mV because $\frac{dV}{dt} > 0$.
2. Initial values of $V_{reset} > -75$ result in the solution decreasing to -75 mV because $\frac{dV}{dt} < 0$.
3. Initial values of $V_{reset} = -75$ result in a constant $V = -75$ mV because $\frac{dV}{dt} = 0$ (stable point).
"""
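One way to convince yourself that the exact solution really solves the differential equation is to differentiate it numerically and compare with the right-hand side of the equation. A minimal sketch (this check is an addition, not part of the tutorial's helper functions):

```python
import numpy as np

# Exact solution of the LIF without input, here with V_reset = -90 mV.
E_L, V_reset, tau_m = -75.0, -90.0, 10.0
t = np.arange(0.0, 100.0, 0.01)
V = E_L + (V_reset - E_L) * np.exp(-t / tau_m)

dVdt_numeric = np.gradient(V, t)     # finite-difference derivative of V(t)
dVdt_equation = -(V - E_L) / tau_m   # right-hand side of dV/dt = -(V - E_L)/tau_m

print(np.max(np.abs(dVdt_numeric - dVdt_equation)))  # close to zero
```

The two curves agree up to the finite-difference error, which shrinks as the time step shrinks.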
Section 2.2: LIF with input

*Estimated timing to here from start of tutorial: 24 min*

We will re-introduce the input $I$ and membrane resistance $R_m$, giving the original equation:

\begin{align}
\tau_m\frac{dV}{dt} = -(V-E_L) + \color{blue}{R_mI}
\end{align}

The input can be other neurons or sensory information.

Interactive Demo 2.2: The Impact of Input

The interactive plot below manipulates $I$ in the differential equation.

- With increasing input, how does $\frac{dV}{dt}$ change? How would this impact the solution?
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    I=widgets.FloatSlider(3., min=0., max=20., step=2, layout=my_layout)
)
def Pop_widget(I):
  plot_dVdt(I=I)
  plt.show()

# to_remove explanation
"""
dV/dt becomes bigger and less of it is below 0. This means the solution will increase well beyond what is biologically plausible.
"""
Section 2.2.1: LIF exact solution

The LIF with a constant input has a known exact solution:

\begin{align*}
V(t)=&\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\frac{-t}{\tau_m}}\\
\text{which is written as:}\\
\begin{matrix}\text{"Current membrane} \\ \text{potential"}\end{matrix}=&\begin{matrix}\text{"Resting potential plus} \\ \text{resistance times input"}\end{matrix}+\begin{matrix}\text{"Reset potential minus resting potential minus resistance times input,} \\ \text{times exponential with rate one over time constant."}\end{matrix}
\end{align*}

The plot below shows the exact solution of the membrane potential with the parameters set as:

* `V_reset = -75,`
* `E_L = -75,`
* `tau_m = 10,`
* `R_m = 10,`
* `I = 10.`

Ask yourself, does the result make biological sense? If not, what would you change? We'll delve into this in the next section.
# @markdown Make sure you execute this cell to see the exact solution
dt = 0.5
t_rest = 0
t = np.arange(0, 1000, dt)

tau_m = 10
R_m = 10
V_reset = E_L = -75
I = 10

V = E_L + R_m*I + (V_reset - E_L - R_m*I) * np.exp(-(t)/tau_m)

with plt.xkcd():
  fig = plt.figure(figsize=(6, 4))
  plt.plot(t, V)
  plt.ylabel('V (mV)')
  plt.xlabel('time (ms)')
  plt.show()
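The plateau you see with these parameters is the steady state of the equation: setting $\frac{dV}{dt}=0$ in $\tau_m\frac{dV}{dt} = -(V-E_L) + R_mI$ gives $V_{ss} = E_L + R_mI$. A two-line check with the same parameters:

```python
# Steady state of the LIF with constant input: dV/dt = 0 implies V_ss = E_L + R_m * I.
E_L, R_m, I = -75.0, 10.0, 10.0
V_ss = E_L + R_m * I
print(V_ss)  # 25.0 mV, far above any biologically plausible membrane potential
```

A real neuron never sits at $+25$ mV; it fires. That is exactly the problem the next section fixes by adding a threshold.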
Section 2.3: Maths is one thing, but neuroscience matters

*Estimated timing to here from start of tutorial: 30 min*
# @title Video 4: Adding firing to the LIF from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1gX4y1P7pZ", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="rLQk-vXRaX0", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
This video first recaps the introduction of input to the leaky integrate and fire model and then delves into how we add spiking behavior (or firing) to the model.

Click here for text recap of video

While the mathematics of the exact solution is exact, it is not biologically valid, as a neuron spikes and definitely does not plateau at a very positive value.

To model the firing of a spike, we must have a threshold voltage $V_{th}$ such that if the voltage $V(t)$ goes above it, the neuron spikes:

$$V(t)>V_{th}.$$

We must record the time of the spike $t_{isi}$ and count the number of spikes:

$$t_{isi}=t,$$

$$Spike=Spike+1.$$

Then reset the membrane voltage $V(t)$:

$$V(t_{isi})=V_{reset}.$$

Taking the spike into account, the exact solution becomes:

\begin{align*}
V(t)=&\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\frac{-(t-t_{isi})}{\tau_m}},&\qquad V(t)<V_{th} \\
V(t)=&V_{reset},&\qquad V(t)>V_{th}\\
Spike=&Spike+1,&\\
t_{isi}=&t.
\end{align*}

While this does make the neuron spike, it introduces a discontinuity, which is not as elegant mathematically as it could be, but it gets results, so that is good.

Interactive Demo 2.3.1: Input on spikes

This exercise shows the relationship between firing rate and the input for the exact solution `V` of the LIF:

$$V(t)=\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\frac{-(t-t_{isi})}{\tau_m}},$$

with the parameters set as:

* `V_reset = -75,`
* `E_L = -75,`
* `tau_m = 10,`
* `R_m = 10.`

Below is a figure with three panels:

* the top panel is the input, $I$,
* the middle panel is the membrane potential $V(t)$. To illustrate the spike, $V(t)$ is set to $0$ and then reset to $-75$ mV when there is a spike.
* the bottom panel is the raster plot, with each dot indicating a spike.

First, as electrophysiologists normally listen to spikes when conducting experiments, listen to the music of the firing rate for a single value of $I$. (Note: the audio doesn't work in some browsers, so don't worry if you can't hear anything.)
# @markdown Make sure you execute this cell to be able to hear the neuron I = 3 t = np.arange(0, 1000, dt) Spike, Spike_time, V = Exact_Integrate_and_Fire(I, t) plot_IF(t, V, I, Spike_time) ipd.Audio(V, rate=len(V))
_____no_output_____
CC-BY-4.0
tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb
ofou/course-content
Manipulate the input to the LIF to see the impact of input on the firing pattern (rate).
* What is the effect of $I$ on spiking?
* Is this biologically valid?
# @markdown Make sure you execute this cell to enable the widget! my_layout.width = '450px' @widgets.interact( I=widgets.FloatSlider(3, min=2.0, max=4., step=.1, layout=my_layout) ) def Pop_widget(I): Spike, Spike_time, V = Exact_Integrate_and_Fire(I, t) plot_IF(t, V, I, Spike_time) # to_remove explanation """ 1. As $I$ increases, the number of spikes increases. 2. No, as there is a limit to the number of spikes due to a refractory period, which is not accounted for in this model. """
_____no_output_____
CC-BY-4.0
tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb
ofou/course-content
Section 2.4 Firing Rate as a function of Input

*Estimated timing to here from start of tutorial: 38 min*

The firing frequency of a neuron plotted as a function of current is called an input-output curve (F–I curve). It is also known as a transfer function, which you came across in the previous tutorial. This function is one of the starting points for the rate model, which extends from modelling single neurons to the firing rate of a collection of neurons. By fitting this to a function, we can start to generalise the firing pattern of many neurons, which can be used to build rate models. This will be discussed later in Neuromatch.
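As a sketch of what "fitting this to a function" could look like, here is a hypothetical rectified-linear fit to a synthetic, noise-free F–I curve. The slope and threshold current below are made-up values for illustration, not results from this tutorial's model.

```python
import numpy as np

# Synthetic, noise-free F-I data: zero below a threshold current, linear above.
I_range = np.arange(2.0, 4.0, 0.1)
true_slope, I_th = 30.0, 2.4                  # assumed values for illustration
rate = np.maximum(0.0, true_slope * (I_range - I_th))

# Fit a line to the supra-threshold part and recover the threshold current.
firing = rate > 0
slope, intercept = np.polyfit(I_range[firing], rate[firing], 1)
I_th_est = -intercept / slope                 # estimated rheobase from the fit
```

Replacing the synthetic `rate` with an empirically computed spike rate would give an estimate of the model's rheobase and gain.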
# @markdown *Execute this cell to visualize the FI curve* I_range = np.arange(2.0, 4.0, 0.1) Spike_rate = np.ones(len(I_range)) for i, I in enumerate(I_range): Spike_rate[i], _, _ = Exact_Integrate_and_Fire(I, t) with plt.xkcd(): fig = plt.figure(figsize=(6, 4)) plt.plot(I_range, Spike_rate) plt.xlabel('Input Current (nA)') plt.ylabel('Spikes per Second (Hz)') plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb
ofou/course-content
The LIF model is a very nice differential equation to start with in computational neuroscience, as it has been used as a building block in many papers that simulate neuronal responses.

__Strengths of the LIF model:__
+ Has an exact solution;
+ Easy to interpret;
+ Great for building networks of neurons.

__Weaknesses of the LIF model:__
- Spiking is a discontinuity;
- Abstraction from biology;
- Cannot generate different spiking patterns.

---

Summary

*Estimated timing of tutorial: 45 min*
# @title Video 5: Summary from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1jV411x7t9", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="VzwLAW5p4ao", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
_____no_output_____
CC-BY-4.0
tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb
ofou/course-content
K-means clustering demo 1. Different distance metrics
from math import sqrt def manhattan(v1,v2): res=0 dimensions=min(len(v1),len(v2)) for i in range(dimensions): res+=abs(v1[i]-v2[i]) return res def euclidean(v1,v2): res=0 dimensions=min(len(v1),len(v2)) for i in range(dimensions): res+=pow(abs(v1[i]-v2[i]),2) return sqrt(float(res)) def cosine(v1,v2): dotproduct=0 dimensions=min(len(v1),len(v2)) for i in range(dimensions): dotproduct+=v1[i]*v2[i] v1len=0 v2len=0 for i in range (dimensions): v1len+=v1[i]*v1[i] v2len+=v2[i]*v2[i] v1len=sqrt(v1len) v2len=sqrt(v2len) # we need distance here - # we convert cosine similarity into distance return 1.0-(float(dotproduct)/(v1len*v2len)) def pearson(v1,v2): # Simple sums sum1=sum(v1) sum2=sum(v2) # Sums of the squares sum1Sq=sum([pow(v,2) for v in v1]) sum2Sq=sum([pow(v,2) for v in v2]) # Sum of the products pSum=sum([v1[i]*v2[i] for i in range(min(len(v1),len(v2)))]) # Calculate r (Pearson score) numerator=pSum-(sum1*sum2/len(v1)) denominator=sqrt((sum1Sq-pow(sum1,2)/len(v1))*(sum2Sq-pow(sum2,2)/len(v1))) if denominator==0: return 1.0 # we need distance here - # we convert pearson correlation into distance return 1.0-numerator/denominator def tanimoto(v1,v2): c1,c2,shared=0,0,0 for i in range(len(v1)): if v1[i]!=0 or v2[i]!= 0: if v1[i]!=0: c1+=1 # in v1 if v2[i]!=0: c2+=1 # in v2 if v1[i]!=0 and v2[i]!=0: shared+=1 # in both # we need distance here - # we convert tanimoto similarity into distance return 1.0-(float(shared)/(c1+c2-shared))
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
2. K-means clustering algorithm
import random # k-means clustering def kcluster(rows,distance=euclidean,k=4): # Determine the minimum and maximum values for each point ranges=[(min([row[i] for row in rows]),max([row[i] for row in rows])) for i in range(len(rows[0]))] # Create k randomly placed centroids clusters=[[random.random()*(ranges[i][1]-ranges[i][0])+ranges[i][0] for i in range(len(rows[0]))] for j in range(k)] lastmatches=None bestmatches = None for t in range(100): print ('Iteration %d' % t) bestmatches=[[] for i in range(k)] # Find which centroid is the closest for each row for j in range(len(rows)): row=rows[j] bestmatch=0 for i in range(k): d=distance(clusters[i],row) if d<distance(clusters[bestmatch],row): bestmatch=i bestmatches[bestmatch].append(j) # If the results are the same as last time, this is complete if bestmatches==lastmatches: break lastmatches=bestmatches # Move the centroids to the average of the cluster members for i in range(k): avgs=[0.0]*len(rows[0]) if len(bestmatches[i])>0: for rowid in bestmatches[i]: for m in range(len(rows[rowid])): avgs[m]+=rows[rowid][m] for j in range(len(avgs)): avgs[j]/=len(bestmatches[i]) clusters[i]=avgs return bestmatches
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
3. Toy demo: clustering papers by title

3.1. Data preparation

The input is a list of Computer Science paper titles from the file [titles.txt](titles.txt).
file_name = "titles.txt" f = open(file_name, "r", encoding="utf-8") i = 0 for line in f: print("document", i, ": ", line.strip()) i += 1
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
To compare documents written in natural language, we need to decide which attributes of a document are important. The simplest possible model is called a **bag of words**: that is, we consider each word in a document as a separate and independent dimension. First, we collect all the different words occurring across the whole document collection (called a corpus in NLP). These become our dimensions. We create a vector as big as the entire vocabulary of the corpus. Next, we represent each document as a numeric vector: the number of occurrences of a given word becomes the value in the corresponding vector dimension.

Here are the functions for converting documents into bags of words:
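Before the helper functions, here is a minimal toy illustration of the bag-of-words idea on two tiny "documents" (the sentences are made up for the example):

```python
# Toy bag-of-words: each unique word is a dimension, each count a coordinate.
docs = ["the cat sat on the mat", "the dog sat"]
vocab = sorted({w for d in docs for w in d.split()})
vectors = [[d.split().count(w) for w in vocab] for d in docs]
# vocab is ['cat', 'dog', 'mat', 'on', 'sat', 'the'];
# the first document maps to [1, 0, 1, 1, 1, 2].
```

The functions below do the same thing, but with stop-word filtering and a restriction to words that appear in the corpus-wide vocabulary.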
import re # Returns dictionary of word counts for a text def get_word_counts(text, all_words): wc={} words = get_words(text) # Loop over all the entries for word in words: if (word not in stopwords) and (word in all_words): wc[word] = wc.get(word,0)+1 return wc # splits text into words def get_words(txt): # Split words by all non-alpha characters words=re.compile(r'[^A-Z^a-z]+').split(txt) # Convert to lowercase return [word.lower() for word in words if word!=''] # converts counts into a vector def get_word_vector(word_list, wc): v = [0]*len(word_list) for i in range(len(word_list)): if word_list[i] in wc: v[i] = wc[word_list[i]] return v # prints matrix def print_word_matrix(docs): for d in docs: print (d[0], d[1])
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
Some words of the document should be ignored. These are words that are very commonly used in all documents no matter the topic of the document: "the", "it", "and", etc. These words are called **stop words**. Which words to treat as stop words is application-dependent. One possible stop-word collection is given in the file `stop_words.txt`.
stop_words_file = "stop_words.txt" f = open(stop_words_file, "r", encoding="utf-8") stopwords = [] for line in f: stopwords.append(line.strip()) f.close() print(stopwords[:20])
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
We collect all unique words and for each document we will count how many times each word is present.
file_name = "titles.txt" f = open(file_name, "r", encoding="utf-8") documents = [] doc_id = 1 all_words = {} # transfer content of a file into a list of lines lines = [line for line in f] # create a dictionary of all words and their total counts for line in lines: doc_words = get_words(line) for w in doc_words : if w not in stopwords: all_words[w] = all_words.get(w,0)+1 unique_words = set() for w, count in all_words.items(): if all_words[w] > 1 : unique_words.add(w) # create a matrix of word presence in each document for line in lines: documents.append(["d"+str(doc_id), get_word_counts(line,unique_words)]) doc_id += 1 unique_words=list(unique_words) print("All unique words:",unique_words) print(documents)
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
Now we want to convert each document into a numeric vector:
out = open(file_name.split('.')[0] + "_vectors.txt", "w") # write a header which contains the words themselves for w in unique_words: out.write('\t' + w) out.write('\n') # print_word_matrix to file for i in range(len(documents)): vector = get_word_vector(unique_words, documents[i][1]) out.write(documents[i][0]) for x in vector: out.write('\t' + str(x)) out.write('\n') out.close()
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
Our data now looks like this matrix:
doc_vectors_file = "titles_vectors.txt" f = open(doc_vectors_file, "r", encoding="utf-8") s = f.read() print(s) # This function will read document vectors file and produce 2D data matrix, # plus the names of the rows and the names of the columns. def read_vector_file(file_name): f = open(file_name) lines=[line for line in f] # First line is the column headers colnames=lines[0].strip().split('\t')[:] # print(colnames) rownames=[] data=[] for line in lines[1:]: p=line.strip().split('\t') # First column in each row is the rowname if len(p)>1: rownames.append(p[0]) # The data for this row is the remainder of the row data.append([float(x) for x in p[1:]]) return rownames,colnames,data # This function will transpose the data matrix def rotatematrix(data): newdata=[] for i in range(len(data[0])): newrow=[data[j][i] for j in range(len(data))] newdata.append(newrow) return newdata
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
As a result of all this, we have a matrix whose rows are document vectors. Each vector dimension represents a unique word in the collection, and the value in each dimension is the count of that word in a particular document.

3.2. Clustering documents

Performing k-means clustering.
doc_vectors_file = "titles_vectors.txt" docs,words,data=read_vector_file(doc_vectors_file) num_clusters=2 print('Searching for {} clusters:'.format(num_clusters)) clust=kcluster(data,distance=pearson,k=num_clusters) print() print ('Document clusters') print ('=================') for i in range(num_clusters): print ('cluster {}:'.format(i+1)) print ([docs[r] for r in clust[i]]) print()
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
Does this grouping make sense?
for d in documents: print(d)
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
3.3. Clustering words by their occurrence in documents

We may consider words to be similar if they occur in the same documents: such words are connected, in the sense that they belong to the same topic and occur in similar contexts. If we want to cluster words by their occurrences in the documents, all we need to do is transpose the document matrix.
rdata=rotatematrix(data) num_clusters = 3 print ('Grouping words into {} clusters:'.format(num_clusters)) clust=kcluster(rdata,distance=cosine,k=num_clusters) print() print ('word clusters:') print("=============") for i in range(num_clusters): print("cluster {}".format(i+1)) print ([words[r] for r in clust[i]]) print()
_____no_output_____
MIT
kmeans_clustering_demo.ipynb
lyb770/labs_ml_clustering
Using Named Entity Recognition (NER)

**Named entities** are noun phrases that refer to specific locations, people, organizations, and so on. With **named entity recognition**, you can find the named entities in your texts and also determine what kind of named entity they are.

Here’s the list of named entity types from the NLTK book:

| NE type | Examples |
| --- | --- |
| ORGANIZATION | Georgia-Pacific Corp., WHO |
| PERSON | Eddy Bonte, President Obama |
| LOCATION | Murray River, Mount Everest |
| DATE | June, 2008-06-29 |
| TIME | two fifty a m, 1:30 p.m. |
| MONEY | 175 million Canadian dollars, GBP 10.40 |
| PERCENT | twenty pct, 18.75 % |
| FACILITY | Washington Monument, Stonehenge |
| GPE | South East Asia, Midlothian |

You can use `nltk.ne_chunk()` to recognize named entities. Let’s use `lotr_pos_tags` again to test it out:
import nltk from nltk.tokenize import word_tokenize lotr_quote = "It's a dangerous business, Frodo, going out your door." words_in_lotr_quote = word_tokenize(lotr_quote) print(words_in_lotr_quote) lotr_pos_tags = nltk.pos_tag(words_in_lotr_quote) print(lotr_pos_tags) tree = nltk.ne_chunk(lotr_pos_tags)
_____no_output_____
MIT
NLP/8.Using Named Entity Recognition (NER).ipynb
SuryaReginaAA/Learn
Now take a look at the visual representation:
tree.draw()
_____no_output_____
MIT
NLP/8.Using Named Entity Recognition (NER).ipynb
SuryaReginaAA/Learn
In the tree that opens, you can see that Frodo has been tagged as a PERSON. You also have the option to use the parameter `binary=True` if you just want to know what the named entities are, but not what kind of named entity they are:
tree = nltk.ne_chunk(lotr_pos_tags, binary=True) tree.draw()
_____no_output_____
MIT
NLP/8.Using Named Entity Recognition (NER).ipynb
SuryaReginaAA/Learn
Now all you see is that Frodo is an NE. That’s how you can identify named entities! But you can take this one step further and extract named entities directly from your text. Create a string from which to extract named entities. You can use this quote from *The War of the Worlds*:
quote = """ Men like Schiaparelli watched the red planetβ€”it is odd, by-the-bye, that for countless centuries Mars has been the star of warβ€”but failed to interpret the fluctuating appearances of the markings they mapped so well. All that time the Martians must have been getting ready. During the opposition of 1894 a great light was seen on the illuminated part of the disk, first at the Lick Observatory, then by Perrotin of Nice, and then by other observers. English readers heard of it first in the issue of Nature dated August 2."""
_____no_output_____
MIT
NLP/8.Using Named Entity Recognition (NER).ipynb
SuryaReginaAA/Learn
Now create a function to extract named entities:
def extract_ne(quote): words = word_tokenize(quote, language='english') tags = nltk.pos_tag(words) tree = nltk.ne_chunk(tags, binary=True) tree.draw() return set( " ".join(i[0] for i in t) for t in tree if hasattr(t, "label") and t.label() == "NE" )
_____no_output_____
MIT
NLP/8.Using Named Entity Recognition (NER).ipynb
SuryaReginaAA/Learn
With this function, you gather all named entities, with no repeats. In order to do that, you tokenize by word, apply part-of-speech tags to those words, and then extract named entities based on those tags. Because you included `binary=True`, the named entities you’ll get won’t be labeled more specifically. You’ll just know that they’re named entities.

Take a look at the information you extracted:
extract_ne(quote)
_____no_output_____
MIT
NLP/8.Using Named Entity Recognition (NER).ipynb
SuryaReginaAA/Learn
Closed-Loop Evaluation

In this notebook you are going to evaluate Urban Driver by letting it control the SDV, with a protocol named *closed-loop* evaluation.

**Note: this notebook assumes you've already run the [training notebook](./train.ipynb) and stored your model successfully (or that you have stored a pre-trained one).**

**Note: for a detailed explanation of what closed-loop evaluation (CLE) is, please refer to our [planning notebook](../planning/closed_loop_test.ipynb)**

Imports
import matplotlib.pyplot as plt import numpy as np import torch from prettytable import PrettyTable from l5kit.configs import load_config_data from l5kit.data import LocalDataManager, ChunkedDataset from l5kit.dataset import EgoDatasetVectorized from l5kit.vectorization.vectorizer_builder import build_vectorizer from l5kit.simulation.dataset import SimulationConfig from l5kit.simulation.unroll import ClosedLoopSimulator from l5kit.cle.closed_loop_evaluator import ClosedLoopEvaluator, EvaluationPlan from l5kit.cle.metrics import (CollisionFrontMetric, CollisionRearMetric, CollisionSideMetric, DisplacementErrorL2Metric, DistanceToRefTrajectoryMetric) from l5kit.cle.validators import RangeValidator, ValidationCountingAggregator from l5kit.visualization.visualizer.zarr_utils import simulation_out_to_visualizer_scene from l5kit.visualization.visualizer.visualizer import visualize from bokeh.io import output_notebook, show from l5kit.data import MapAPI from collections import defaultdict import os
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Prepare data path and load cfg

By setting the `L5KIT_DATA_FOLDER` variable, we can point the script to the folder where the data lies. Then, we load our config file with relative paths and other configurations (rasteriser, training params, ...).
# set env variable for data from l5kit.data import get_dataset_path os.environ["L5KIT_DATA_FOLDER"], project_path = get_dataset_path() dm = LocalDataManager(None) # get config cfg = load_config_data("./config.yaml")
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Load the model
model_path = project_path + "/urban_driver_dummy_model.pt" device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = torch.load(model_path).to(device) model = model.eval() torch.set_grad_enabled(False)
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Load the evaluation data

Unlike training and open-loop evaluation, this setting is intrinsically sequential. As such, we won't be using any of PyTorch's parallelisation functionalities.
# ===== INIT DATASET eval_cfg = cfg["val_data_loader"] eval_zarr = ChunkedDataset(dm.require(eval_cfg["key"])).open() vectorizer = build_vectorizer(cfg, dm) eval_dataset = EgoDatasetVectorized(cfg, eval_zarr, vectorizer) print(eval_dataset)
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Define some simulation properties

We define here some common simulation properties, such as the length of the simulation and how many scenes to simulate.

**NOTE: these properties have a significant impact on the execution time. We suggest you increase them only if your setup includes a GPU.**
num_scenes_to_unroll = 10 num_simulation_steps = 50
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Closed-loop simulation

We define a closed-loop simulation that drives the SDV for `num_simulation_steps` steps while using the log-replayed agents. Then, we unroll the selected scenes.

The simulation output contains all the information related to the scene, including the annotated and simulated positions, states, and trajectories of the SDV and the agents. If you want to know more about what the simulation output contains, please refer to the source code of the class `SimulationOutput`.
# ==== DEFINE CLOSED-LOOP SIMULATION sim_cfg = SimulationConfig(use_ego_gt=False, use_agents_gt=True, disable_new_agents=True, distance_th_far=500, distance_th_close=50, num_simulation_steps=num_simulation_steps, start_frame_index=0, show_info=True) sim_loop = ClosedLoopSimulator(sim_cfg, eval_dataset, device, model_ego=model, model_agents=None) # ==== UNROLL scenes_to_unroll = list(range(0, len(eval_zarr.scenes), len(eval_zarr.scenes)//num_scenes_to_unroll)) sim_outs = sim_loop.unroll(scenes_to_unroll)
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Closed-loop metrics

**Note: for a detailed explanation of CLE metrics, please refer again to our [planning notebook](../planning/closed_loop_test.ipynb)**
metrics = [DisplacementErrorL2Metric(), DistanceToRefTrajectoryMetric(), CollisionFrontMetric(), CollisionRearMetric(), CollisionSideMetric()] validators = [RangeValidator("displacement_error_l2", DisplacementErrorL2Metric, max_value=30), RangeValidator("distance_ref_trajectory", DistanceToRefTrajectoryMetric, max_value=4), RangeValidator("collision_front", CollisionFrontMetric, max_value=0), RangeValidator("collision_rear", CollisionRearMetric, max_value=0), RangeValidator("collision_side", CollisionSideMetric, max_value=0)] intervention_validators = ["displacement_error_l2", "distance_ref_trajectory", "collision_front", "collision_rear", "collision_side"] cle_evaluator = ClosedLoopEvaluator(EvaluationPlan(metrics=metrics, validators=validators, composite_metrics=[], intervention_validators=intervention_validators))
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Quantitative evaluation

We can now compute the metric evaluation, collect the results, and aggregate them.
cle_evaluator.evaluate(sim_outs) validation_results = cle_evaluator.validation_results() agg = ValidationCountingAggregator().aggregate(validation_results) cle_evaluator.reset()
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Reporting errors from the closed-loop

We can now report the metrics and plot them.
fields = ["metric", "value"] table = PrettyTable(field_names=fields) values = [] names = [] for metric_name in agg: table.add_row([metric_name, agg[metric_name].item()]) values.append(agg[metric_name].item()) names.append(metric_name) print(table) plt.bar(np.arange(len(names)), values) plt.xticks(np.arange(len(names)), names, rotation=60, ha='right') plt.show()
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
Qualitative evaluation

Visualise the closed-loop

We can visualise the scenes we obtained previously. **The policy is now in full control of the SDV as it moves through the annotated scene.**
output_notebook() mapAPI = MapAPI.from_cfg(dm, cfg) for sim_out in sim_outs: # for each scene vis_in = simulation_out_to_visualizer_scene(sim_out, mapAPI) show(visualize(sim_out.scene_id, vis_in))
_____no_output_____
Apache-2.0
examples/urban_driver/closed_loop_test.ipynb
ronamit/l5kit
import torch import numpy as np from matplotlib import pyplot as plt torch.manual_seed(0) np.random.seed(0) class Environment: def __init__(self): self.constant_function_details = {'value':np.random.randint(10, 90)} self.uniform_function_details = {'min': 25, 'max': 75} self.gaussian_function_details = {'mean': 50, 'std': 25} self.quadratic_growth_details = {'m':0.0175, 'count':0} self.bandits = None self.generate_bandit_instance() def return_constant(self): return self.constant_function_details['value'] + np.random.random()*10 def return_uniform(self): return np.random.uniform(self.uniform_function_details['min'], self.uniform_function_details['max']) def return_gaussian(self): return np.random.normal(loc=self.gaussian_function_details['mean'], scale=self.gaussian_function_details['std']) def return_quadratic_growth(self): self.quadratic_growth_details['count'] += 1 return np.power((self.quadratic_growth_details['m'] * self.quadratic_growth_details['count']), 2) def generate_bandit_instance(self): self.bandits = np.array([self.return_constant, self.return_uniform, self.return_gaussian, self.return_quadratic_growth]) np.random.shuffle(self.bandits) def observe_all_bandits(self): vals = [] for func in self.bandits: vals.append(func()/100) return np.array(vals) env = Environment() values = [] for _ in range(1000): values.append(env.observe_all_bandits()) values = np.array(values) for index, function in enumerate(env.bandits): plt.plot(np.arange(values.shape[0]), values.T[index], label=function.__name__[len('return_'):]) plt.legend() plt.show()
_____no_output_____
MIT
DQN_practice.ipynb
AmanPriyanshu/Reinforcement-Learning
Model:
def model_generator(): model = torch.nn.Sequential( torch.nn.Linear(2, 4), torch.nn.ReLU(), torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 4), torch.nn.Softmax(dim=1), ) return model class Agent: def __init__(self): self.transition = {'state': None, 'action': None, 'next_state': None, 'reward': None} self.replay_memory = self.ReplayMemory() self.policy_net = model_generator() self.target_net = model_generator() self.target_net.eval() self.target_net.load_state_dict(self.policy_net.state_dict()) self.epsilon = 1 self.epsilon_limit = 0.01 self.steps_taken = 0 self.gamma = 0. self.optimizer = torch.optim.Adam(self.policy_net.parameters()) self.batch_size = 5 def loss_calculator(self): samples = self.replay_memory.sample(self.batch_size) losses = [] for sample in samples: action = sample['action'] state = sample['state'] next_state = sample['next_state'] reward = sample['reward'] loss = self.policy_pass(state)[0][action] - (reward + self.gamma * torch.max(self.target_pass(next_state))) losses.append(loss) loss = torch.mean(torch.stack(losses)) if abs(loss.item()) < 1: loss = 0.5 * torch.pow(loss, 2) else: loss = torch.abs(loss) - 0.5 return loss def policy_update(self): loss = self.loss_calculator() self.optimizer.zero_grad() loss.backward() self.optimizer.step() def target_update(self): self.target_net.load_state_dict(self.policy_net.state_dict()) def target_pass(self, state): input_state = torch.tensor([[state['rank'], state['reward']]], dtype=torch.float) actions = self.target_net(input_state) return actions def policy_pass(self, state): input_state = torch.tensor([[state['rank'], state['reward']]], dtype=torch.float) actions = self.policy_net(input_state) return actions def take_action(self, state): if np.random.random() < self.epsilon: action = torch.randint(0, 4, (1,)) else: actions = self.policy_pass(state) action = torch.argmax(actions, 1) return action def take_transition(self, transition): self.steps_taken += 1 self.replay_memory.push(transition) 
if self.steps_taken%self.batch_size == 0 and self.steps_taken>20: self.policy_update() if self.steps_taken%25 == 0 and self.steps_taken>20: self.target_update() self.epsilon -= self.epsilon_limit/6 if self.epsilon<self.epsilon_limit: self.epsilon = self.epsilon_limit class ReplayMemory(object): def __init__(self, capacity=15): self.capacity = capacity self.memory = [None] * self.capacity self.position = 0 def push(self, transition): self.memory[self.position] = transition self.position = (self.position + 1) % self.capacity def sample(self, batch_size=5): return np.random.choice(np.array(self.memory), batch_size) def __len__(self): return len(self.memory) env = Environment() agent1 = Agent() rewards = [] state = {'rank':0, 'reward':0} for _ in range(1000): with torch.no_grad(): action = agent1.take_action(state) observation = env.observe_all_bandits() reward = observation[action] rank = np.argsort(observation)[action] next_state = {'rank': rank, 'reward':reward} transition = {'state': state, 'action': action, 'next_state': next_state, 'reward': reward} agent1.take_transition(transition) rewards.append(reward) plt.plot([i for i in range(len(rewards))], rewards, label='rewards') plt.legend() plt.title('Rewards Progression') plt.show()
_____no_output_____
MIT
DQN_practice.ipynb
AmanPriyanshu/Reinforcement-Learning
Trapezoidal Method
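As a reminder of what one trapezoidal step does for a linear system $\dot{x} = Ax$, here is a generic numpy sketch. It is independent of this library's `trapezoidal` method, whose signature and handling of inputs may differ; the damped-oscillator matrix below is an assumed example.

```python
import numpy as np

def trapezoidal_step(A, x, dt):
    """One trapezoidal step for x' = A x: solve (I - dt/2 A) x_next = (I + dt/2 A) x."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ x)

# Lightly damped oscillator: the amplitude should decay roughly like exp(-0.01 * t).
A = np.array([[-0.01, 1.0], [-1.0, -0.01]])
x = np.array([1.0, 0.0])
for _ in range(1000):            # integrate to t = 10 with dt = 0.01
    x = trapezoidal_step(A, x, dt=0.01)
```

Because the step is implicit, it is A-stable and second-order accurate, which is why it tolerates much larger time steps than the forward Euler runs below.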
# params stored in txt sys = MultiModeSystem(params={"dir":"data/"}) x_0 = np.array([1,0,0,1]) ts = np.linspace(0, 10, 101) X = sys.trapezoidal(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) ax.legend()
_____no_output_____
MIT
demos/basic/multimode/MultiModeSystem.ipynb
Phionx/quantumnetworks
Forward Euler
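The generic forward-Euler update used throughout this section is just $x_{k+1} = x_k + \Delta t\, f(x_k, t_k)$. A minimal standalone sketch (not the library's `forward_euler`, which operates on the system object):

```python
import numpy as np

def forward_euler(f, x0, ts):
    """Explicit update X[:, k] = X[:, k-1] + dt * f(X[:, k-1], t) for each step."""
    X = np.zeros((len(x0), len(ts)))
    X[:, 0] = x0
    for k in range(1, len(ts)):
        dt = ts[k] - ts[k - 1]
        X[:, k] = X[:, k - 1] + dt * f(X[:, k - 1], ts[k - 1])
    return X

# Scalar decay x' = -x, whose exact solution is exp(-t).
ts = np.linspace(0, 1, 1001)
X = forward_euler(lambda x, t: -x, np.array([1.0]), ts)
```

With 1001 time points the step is small enough that the endpoint is close to $e^{-1}$; coarser grids would show the method's first-order error growing.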
# params stored in txt sys = MultiModeSystem(params={"dir":"data/"}) x_0 = np.array([1,0,0,1]) ts = np.linspace(0, 10, 10001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) ax.legend() u = sys.eval_u(0) sys.eval_Jf(x_0, u) sys.eval_Jf_numerical(x_0, u) # params directly provided omegas = [1,2] kappas = [0.001,0.005] gammas = [0.002,0.002] kerrs = [0.001, 0.001] couplings = [[0,1,0.002]] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas, "gammas":gammas, "kerrs": kerrs, "couplings":couplings}) x_0 = np.array([1,0,0,1]) ts = np.linspace(0, 10, 1001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) ax.legend() # single mode system omegas = [2*np.pi*1] kappas = [2*np.pi*0.001] gammas = [2*np.pi*0.002] kerrs = [2*np.pi*0.001] couplings = [] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas,"gammas":gammas,"kerrs":kerrs,"couplings":couplings}) x_0 = np.array([1,0]) ts = np.linspace(0, 10, 100001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$"]) ax.legend() # params directly provided omegas = [2*np.pi*1,2*np.pi*2,2*np.pi*1] kappas = [2*np.pi*0.001,2*np.pi*0.005,2*np.pi*0.001] gammas = [2*np.pi*0.002,2*np.pi*0.002,2*np.pi*0.002] kerrs = [2*np.pi*0.001, 2*np.pi*0.001, 2*np.pi*0.001] couplings = [[0,1,2*np.pi*0.002],[1,2,2*np.pi*0.002]] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas, "gammas":gammas, "kerrs":kerrs, "couplings":couplings}) print(sys.A) # x_0 = np.array([1,0,0,1]) # ts = np.linspace(0, 10, 1001) # X = sys.forward_euler(x_0, ts) # fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$","$q_b$","$p_b$"]) # ax.legend()
[[-9.42477796e-03 6.28318531e+00 0.00000000e+00 1.25663706e-02 0.00000000e+00 0.00000000e+00] [-6.28318531e+00 -9.42477796e-03 -1.25663706e-02 0.00000000e+00 0.00000000e+00 0.00000000e+00] [ 0.00000000e+00 1.25663706e-02 -2.19911486e-02 1.25663706e+01 0.00000000e+00 1.25663706e-02] [-1.25663706e-02 0.00000000e+00 -1.25663706e+01 -2.19911486e-02 -1.25663706e-02 0.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.25663706e-02 -9.42477796e-03 6.28318531e+00] [ 0.00000000e+00 0.00000000e+00 -1.25663706e-02 0.00000000e+00 -6.28318531e+00 -9.42477796e-03]]
MIT
demos/basic/multimode/MultiModeSystem.ipynb
Phionx/quantumnetworks
Linearization
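Linearization replaces $f(x)$ by $f(x_0) + J_f(x_0)\,(x - x_0)$ near an expansion point $x_0$, and the Jacobian can be estimated by finite differences, in the spirit of `eval_Jf_numerical` used earlier. This sketch is generic and not that method's actual implementation; the cubic test system is an assumed example with a known Jacobian.

```python
import numpy as np

def numerical_jacobian(f, x0, eps=1e-6):
    """Central-difference estimate of the Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * eps)   # column j = df/dx_j
    return J

# Check on a system with a known Jacobian: f(q, p) = (p, -q - q**3).
f = lambda x: np.array([x[1], -x[0] - x[0] ** 3])
J = numerical_jacobian(f, np.array([1.0, 0.0]))
# Exact Jacobian at (1, 0): [[0, 1], [-1 - 3*1**2, 0]] = [[0, 1], [-4, 0]]
```

The comparison below between the full forward Euler run and `forward_euler_linear` shows how quickly the linearized trajectory drifts once the state leaves the neighbourhood of $x_0$.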
omegas = [2*np.pi*1,2*np.pi*2,2*np.pi*1] kappas = [2*np.pi*0.001,2*np.pi*0.005,2*np.pi*0.001] gammas = [2*np.pi*0.002,2*np.pi*0.002,2*np.pi*0.002] kerrs = [2*np.pi*0.001, 2*np.pi*0.001, 2*np.pi*0.001] couplings = [[0,1,2*np.pi*0.002],[1,2,2*np.pi*0.002]] sys = MultiModeSystem(params={"omegas":omegas, "kappas":kappas, "gammas":gammas, "kerrs":kerrs, "couplings":couplings}) x_0 = np.array([1,0, 0,1, 1,0]) ts = np.linspace(0, 1, 1001) X = sys.forward_euler(x_0, ts) fig, ax = plot_full_evolution(X, ts, labels=["$q_a$","$p_a$", "$q_b$","$p_b$", "$q_c$","$p_c$"]) ax.legend() X_linear = sys.forward_euler_linear(x_0, ts, x_0, 0) fig, ax = plot_full_evolution(X_linear, ts, labels=["$q_{a,linear}$","$p_{a,linear}$","$q_{b,linear}$","$p_{b,linear}$","$q_{c,linear}$","$p_{c,linear}$"]) Delta_X = (X-X_linear)/X plot_full_evolution(Delta_X[:,:50], ts[:50], labels=["$q_a - q_{a,linear}$","$p_a - p_{a,linear}$","$q_b - q_{b,linear}$","$p_b - p_{b,linear}$","$q_c - q_{c,linear}$","$p_c - p_{c,linear}$"]) ax.legend()
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ipykernel_launcher.py:17: RuntimeWarning: invalid value encountered in true_divide
MIT
demos/basic/multimode/MultiModeSystem.ipynb
Phionx/quantumnetworks
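The cells above repeatedly step the system with `forward_euler` and `forward_euler_linear`. As a rough sketch of what fixed-step forward Euler does (not the library's implementation), each step just applies x ← x + Δt·f(x). The toy problem below, dx/dt = −k·x with exact solution x(t) = x₀·e^(−kt), shows the discretization error stays small at the step sizes used above:

```python
import math

def forward_euler(f, x0, ts):
    """Integrate dx/dt = f(x) with fixed explicit-Euler steps over the grid ts."""
    xs = [x0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        xs.append(xs[-1] + (t1 - t0) * f(xs[-1]))
    return xs

# Toy scalar decay: dx/dt = -k*x, exact solution x(t) = x0*exp(-k*t).
k, x0 = 0.5, 1.0
ts = [i * 0.001 for i in range(10001)]  # t in [0, 10], same step count as above
xs = forward_euler(lambda x: -k * x, x0, ts)
print(abs(xs[-1] - x0 * math.exp(-k * ts[-1])))  # small discretization error
```

The linearized runs above do the same stepping, but with f replaced by its first-order expansion about an operating point.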
Getting Started with Azure Machine LearningAzure Machine Learning (*Azure ML*) is a cloud-based service for creating and managing machine learning solutions. It's designed to help data scientists leverage their existing data processing and model development skills and frameworks, and help them scale their workloads to the cloud. The Azure ML SDK for Python provides classes you can use to work with Azure ML in your Azure subscription. Before You Start1. Complete the steps in [Lab 1 - Getting Started with Azure Machine Learning](./labdocs/Lab01.md) to create an Azure Machine Learning workspace and a compute instance with the contents of this repo.2. Open this notebook in the compute instance and run it there. Check the Azure ML SDK VersionLet's start by importing the **azureml-core** package and checking the version of the SDK that is installed. Click the cell below and then use the **&#9658; Run** button on the toolbar to run it.
import azureml.core print("Ready to use Azure ML", azureml.core.VERSION)
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
Connect to Your WorkspaceAll experiments and associated resources are managed within your Azure ML workspace. You can connect to an existing workspace, or create a new one using the Azure ML SDK. In most cases, you should store the workspace configuration in a JSON configuration file. This makes it easier to reconnect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal, but if you're using a Compute Instance within your workspace, the configuration file has already been downloaded to the root folder. The code below uses the configuration file to connect to your workspace. The first time you run it in a notebook session, you'll be prompted to sign into Azure by clicking the https://microsoft.com/devicelogin link, entering an automatically generated code, and signing into Azure. After you have successfully signed in, you can close the browser tab that was opened, return to this notebook, and wait for the sign-in process to complete.
from azureml.core import Workspace ws = Workspace.from_config() print(ws.name, "loaded")
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
Run an ExperimentOne of the most fundamental tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.
from azureml.core import Experiment import pandas as pd import matplotlib.pyplot as plt %matplotlib inline # Create an Azure ML experiment in your workspace experiment = Experiment(workspace = ws, name = "diabetes-experiment") # Start logging data from the experiment run = experiment.start_logging() print("Starting experiment:", experiment.name) # load the data from a local file data = pd.read_csv('data/diabetes.csv') # Count the rows and log the result row_count = (len(data)) run.log('observations', row_count) print('Analyzing {} rows of data'.format(row_count)) # Plot and log the count of diabetic vs non-diabetic patients diabetic_counts = data['Diabetic'].value_counts() fig = plt.figure(figsize=(6,6)) ax = fig.gca() diabetic_counts.plot.bar(ax = ax) ax.set_title('Patients with Diabetes') ax.set_xlabel('Diagnosis') ax.set_ylabel('Patients') plt.show() run.log_image(name = 'label distribution', plot = fig) # log distinct pregnancy counts pregnancies = data.Pregnancies.unique() run.log_list('pregnancy categories', pregnancies) # Log summary statistics for numeric columns med_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI'] summary_stats = data[med_columns].describe().to_dict() for col in summary_stats: keys = list(summary_stats[col].keys()) values = list(summary_stats[col].values()) for index in range(len(keys)): run.log_row(col, stat = keys[index], value = values[index]) # Save a sample of the data and upload it to the experiment output data.sample(100).to_csv('sample.csv', index=False, header=True) run.upload_file(name = 'outputs/sample.csv', path_or_stream = './sample.csv') # Complete the run run.complete()
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
View Experiment ResultsAfter the experiment has finished, you can use the **run** object to get information about the run and its outputs:
import json # Get run details details = run.get_details() print(details) # Get logged metrics metrics = run.get_metrics() print(json.dumps(metrics, indent=2)) # Get output files files = run.get_file_names() print(json.dumps(files, indent=2))
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
In Jupyter Notebooks, you can use the **RunDetails** widget to get a better visualization of the run details, while the experiment is running or after it has finished.
from azureml.widgets import RunDetails RunDetails(run).show()
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
Note that the **RunDetails** widget includes a link to view the run in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following:- The **Details** tab contains the general properties of the experiment run.- The **Metrics** tab enables you to select logged metrics and view them as tables or charts.- The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot)- The **Child Runs** tab lists any child runs (in this experiment there are none).- The **Outputs + Logs** tab shows the output or log files generated by the experiment.- The **Snapshot** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook).- The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none).- The **Fairness** tab is used to visualize predictive performance disparities that help you evaluate the fairness of machine learning models (in this case, there are none). Run an Experiment ScriptIn the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, and store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder.First, let's create a folder for the experiment files, and copy the data into it:
import os, shutil # Create a folder for the experiment files folder_name = 'diabetes-experiment-files' experiment_folder = './' + folder_name os.makedirs(folder_name, exist_ok=True) # Copy the data file into the experiment folder shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv"))
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder.> **Note**: running the following cell just *creates* the script file - it doesn't run it!
%%writefile $folder_name/diabetes_experiment.py from azureml.core import Run import pandas as pd import os # Get the experiment run context run = Run.get_context() # load the diabetes dataset data = pd.read_csv('diabetes.csv') # Count the rows and log the result row_count = (len(data)) run.log('observations', row_count) print('Analyzing {} rows of data'.format(row_count)) # Count and log the label counts diabetic_counts = data['Diabetic'].value_counts() print(diabetic_counts) for k, v in diabetic_counts.items(): run.log('Label:' + str(k), v) # Save a sample of the data in the outputs folder (which gets uploaded automatically) os.makedirs('outputs', exist_ok=True) data.sample(100).to_csv("outputs/sample.csv", index=False, header=True) # Complete the run run.complete()
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
This code is a simplified version of the inline code used before. However, note the following:- It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run.- It loads the diabetes data from the folder where the script is located.- It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run. Now you're almost ready to run the experiment. There are just a few configuration issues you need to deal with:1. Create a *Run Configuration* that defines the Python code execution environment for the script - in this case, it will automatically create a Conda environment with some default Python packages installed.2. Create a *Script Configuration* that identifies the Python script file to be run in the experiment, and the environment in which to run it. The following cell sets up these configuration objects, and then submits the experiment.> **Note**: This will take a little longer to run the first time, as the conda environment must be created.
import os import sys from azureml.core import Experiment, RunConfiguration, ScriptRunConfig from azureml.widgets import RunDetails # create a new RunConfig object experiment_run_config = RunConfiguration() # Create a script config src = ScriptRunConfig(source_directory=experiment_folder, script='diabetes_experiment.py', run_config=experiment_run_config) # submit the experiment experiment = Experiment(workspace = ws, name = 'diabetes-experiment') run = experiment.submit(config=src) RunDetails(run).show() run.wait_for_completion()
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated:
# Get logged metrics metrics = run.get_metrics() for key in metrics.keys(): print(key, metrics.get(key)) print('\n') for file in run.get_file_names(): print(file)
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
View Experiment Run HistoryNow that you've run experiments multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK:
from azureml.core import Experiment, Run diabetes_experiment = ws.experiments['diabetes-experiment'] for logged_run in diabetes_experiment.get_runs(): print('Run ID:', logged_run.id) metrics = logged_run.get_metrics() for key in metrics.keys(): print('-', key, metrics.get(key))
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
Use MLflowMLflow is an open source platform for managing machine learning processes. It's commonly (but not exclusively) used in Databricks environments to coordinate experiments and track metrics. In Azure Machine Learning experiments, you can use MLflow to track metrics instead of the native log functionality if you desire. Use MLflow with an Inline ExperimentTo use MLflow to track metrics for an inline experiment, you must set the MLflow *tracking URI* to the workspace where the experiment is being run. This enables you to use **mlflow** tracking methods to log data to the experiment run.
from azureml.core import Experiment import pandas as pd import mlflow # Set the MLflow tracking URI to the workspace mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri()) # Create an Azure ML experiment in your workspace experiment = Experiment(workspace=ws, name='diabetes-mlflow-experiment') mlflow.set_experiment(experiment.name) # start the MLflow experiment with mlflow.start_run(): print("Starting experiment:", experiment.name) # Load data data = pd.read_csv('data/diabetes.csv') # Count the rows and log the result row_count = (len(data)) print('observations:', row_count) mlflow.log_metric('observations', row_count) # Get a link to the experiment in Azure ML studio experiment_url = experiment.get_portal_url() print('See details at', experiment_url)
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
After running the code above, you can use the link that is displayed to view the experiment in Azure Machine Learning studio. Then select the latest run of the experiment and view its **Metrics** tab to see the logged metric. Use MLflow in an Experiment ScriptYou can also use MLflow to track metrics in an experiment script. Run the following two cells to create a folder and a script for an experiment that uses MLflow.
import os, shutil # Create a folder for the experiment files folder_name = 'mlflow-experiment-files' experiment_folder = './' + folder_name os.makedirs(folder_name, exist_ok=True) # Copy the data file into the experiment folder shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv")) %%writefile $folder_name/mlflow_diabetes.py from azureml.core import Run import pandas as pd import mlflow # start the MLflow experiment with mlflow.start_run(): # Load data data = pd.read_csv('diabetes.csv') # Count the rows and log the result row_count = (len(data)) print('observations:', row_count) mlflow.log_metric('observations', row_count)
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
When you use MLflow tracking in an Azure ML experiment script, the MLflow tracking URI is set automatically when you start the experiment run. However, the environment in which the script is to be run must include the required **mlflow** packages.
from azureml.core import Experiment, RunConfiguration, ScriptRunConfig from azureml.core.conda_dependencies import CondaDependencies from azureml.widgets import RunDetails # create a new RunConfig object experiment_run_config = RunConfiguration() # Ensure the required packages are installed packages = CondaDependencies.create(pip_packages=['mlflow', 'azureml-mlflow']) experiment_run_config.environment.python.conda_dependencies=packages # Create a script config src = ScriptRunConfig(source_directory=experiment_folder, script='mlflow_diabetes.py', run_config=experiment_run_config) # submit the experiment experiment = Experiment(workspace = ws, name = 'diabetes-mlflow-experiment') run = experiment.submit(config=src) RunDetails(run).show() run.wait_for_completion()
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
As usual, you can get the logged metrics from the experiment run when it's finished.
# Get logged metrics metrics = run.get_metrics() for key in metrics.keys(): print(key, metrics.get(key))
_____no_output_____
MIT
labs/Getting_Started_with_Azure_ML.ipynb
PeakIndicatorsHub/Getting-Started-On-Azure-ML
IntroductionImplementation of the cTAKES BoW method with relation pairs (e.g. CUI-Relationship-CUI) (added to the BoW cTAKES orig. pairs (Polarity-CUI)), evaluated against the annotations from: > Gehrmann, Sebastian, et al. "Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives." PLoS ONE 13.2 (2018): e0192360. Import Packages
# imported packages import multiprocessing import collections import itertools import re import os # xml and xmi from lxml import etree # arrays and dataframes import pandas import numpy from pandasql import sqldf # classifier from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.ensemble import GradientBoostingClassifier from sklearn.preprocessing import FunctionTransformer from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.multiclass import OneVsRestClassifier from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.svm import SVC # plotting import matplotlib matplotlib.use('Agg') # server try: get_ipython # jupyter notebook %matplotlib inline except: pass import matplotlib.pyplot as plt # import custom modules import context # set search path to one level up from src import evaluation # method for evaluation of classifiers
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
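To make the feature space concrete before the XMI parsing later in this notebook: each note becomes whitespace-separated tokens of the form polarity+CUI (the original cTAKES BoW pairs) plus polarity1+CUI1+category+polarity2+CUI2 for each relation. A toy illustration with made-up CUIs and a made-up relation category (hypothetical values, not from the dataset):

```python
# Hypothetical concept mentions: (polarity, CUI) pairs as produced by cTAKES.
mentions = [("-1", "C0011849"), ("1", "C0020538")]
# Hypothetical relation: (category, cui1, cui2, polarity1, polarity2).
relations = [("LOCATION_OF", "C0011849", "C0020538", "-1", "1")]

pair_tokens = [pol + cui for pol, cui in mentions]
relation_tokens = [p1 + c1 + cat + p2 + c2 for cat, c1, c2, p1, p2 in relations]

# One whitespace-joined document per note, ready for a whitespace tokenizer.
doc = " ".join(relation_tokens + pair_tokens)
print(doc)  # -1C0011849LOCATION_OF1C0020538 -1C0011849 1C0020538
```

These strings are what the `TfidfVectorizer` defined further down consumes.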
Define variables and parameters
# variables and parameters # filenames input_directory = '../data/interim/cTAKES_output' input_filename = '../data/raw/annotations.csv' results_filename = '../reports/ctakes_relationgram_bow_tfidf_results.csv' plot_filename_1 = '../reports/ctakes_relationgram_bow_tfidf_boxplot_1.png' plot_filename_2 = '../reports/ctakes_relationgram_bow_tfidf_boxplot_2.png' # number of splits and repeats for cross validation n_splits = 5 n_repeats = 10 # n_repeats = 1 # for testing # number of workers n_workers=multiprocessing.cpu_count() # n_workers = 1 # for testing # keep the conditions for which results are reported in the publication conditions = [ # 'cohort', 'Obesity', # 'Non.Adherence', # 'Developmental.Delay.Retardation', 'Advanced.Heart.Disease', 'Advanced.Lung.Disease', 'Schizophrenia.and.other.Psychiatric.Disorders', 'Alcohol.Abuse', 'Other.Substance.Abuse', 'Chronic.Pain.Fibromyalgia', 'Chronic.Neurological.Dystrophies', 'Advanced.Cancer', 'Depression', # 'Dementia', # 'Unsure', ]
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
Load and prepare data Load and parse xmi data
%load_ext ipycache %%cache --read 2.6-JS-ctakes-relationgram-bow-tfidf_cache.pkl X def ctakes_xmi_to_df(xmi_path): records = [] tree = etree.parse(xmi_path) root = tree.getroot() mentions = [] for mention in root.iterfind('*[@{http://www.omg.org/XMI}id][@typeID][@polarity]'): if 'ontologyConceptArr' in mention.attrib: for concept in mention.attrib['ontologyConceptArr'].split(" "): d = dict(mention.attrib) d['ontologyConceptArr'] = concept mentions.append(d) else: d = dict(mention.attrib) mentions.append(d) mentions_df = pandas.DataFrame(mentions) concepts = [] for concept in root.iterfind('*[@{http://www.omg.org/XMI}id][@cui][@tui]'): concepts.append(dict(concept.attrib)) concepts_df = pandas.DataFrame(concepts) events = [] for event in root.iterfind('*[@{http://www.omg.org/XMI}id][@properties]'): events.append(dict(event.attrib)) events_df = pandas.DataFrame(events) eventproperties = [] for eventpropertie in root.iterfind('*[@{http://www.omg.org/XMI}id][@docTimeRel]'): eventproperties.append(dict(eventpropertie.attrib)) eventproperties_df = pandas.DataFrame(eventproperties) merged_df = mentions_df.add_suffix('_1')\ .merge(right=concepts_df, left_on='ontologyConceptArr_1', right_on='{http://www.omg.org/XMI}id')\ .merge(right=events_df, left_on='event_1', right_on='{http://www.omg.org/XMI}id')\ .merge(right=eventproperties_df, left_on='properties', right_on='{http://www.omg.org/XMI}id') # # unique cui and tui per event IDEA: consider keeping all # merged_df = merged_df.drop_duplicates(subset=['event', 'cui', 'tui']) # merge polarity of the *mention and the cui merged_df = merged_df.dropna(subset=['cui']) # remove any NaN merged_df['polaritycui'] = merged_df['polarity_1'] + merged_df['cui'] # extract relations textrelations = [] for tr in root.iterfind('*[@{http://www.omg.org/XMI}id][@category][@arg1][@arg2]'): textrelations.append(dict(tr.attrib)) textrelations_df = pandas.DataFrame(textrelations) relationarguments = [] for relationargument in root.iterfind('*[@{http://www.omg.org/XMI}id][@argument][@role]'): relationarguments.append(dict(relationargument.attrib)) relationarguments_df = pandas.DataFrame(relationarguments) # transforms tdf = textrelations_df tdf['xmiid'] = tdf['{http://www.omg.org/XMI}id'] rdf = relationarguments_df rdf['xmiid'] = rdf['{http://www.omg.org/XMI}id'] mdf = mentions_df mdf['xmiid'] = mdf['{http://www.omg.org/XMI}id'] cdf = concepts_df cdf['xmiid'] = cdf['{http://www.omg.org/XMI}id'] subquery_1 = """ -- table with: -- (from *Relation): category -- (from RelationArgument): argument (as argument1 and argument2) (Foreign Key *Mentions.xmiid) -- (from *Mention): begin - end (as begin1 - end1 - begin2 - end2) SELECT r.category, m1.begin as begin1, m1.end as end1, m2.begin as begin2, m2.end as end2 FROM tdf r INNER JOIN rdf a1 ON r.arg1 = a1.xmiid INNER JOIN rdf a2 ON r.arg2 = a2.xmiid INNER JOIN mdf m1 ON a1.argument = m1.xmiid INNER JOIN mdf m2 ON a2.argument = m2.xmiid """ subquery_2 = """ -- table with: -- (from *Mentions): begin - end - polarity -- (from Concepts): cui SELECT m.begin, m.end, m.polarity, c.cui FROM mdf m INNER JOIN cdf c ON m.ontologyConceptArr = c.xmiid """ # run subqueries and save in new tables sq1 = sqldf(subquery_1, locals()) sq2 = sqldf(subquery_2, locals()) query = """ -- table with: -- (from Concept): cui1, cui2 -- (from *Mention): polarity1, polarity2 -- (from *Relation): category (what kind of relation) SELECT sq1.category, sq21.cui as cui1, sq22.cui as cui2, sq21.polarity as polarity1, sq22.polarity as polarity2 FROM sq1 sq1 INNER JOIN sq2 sq21 ON sq21.begin >= sq1.begin1 and sq21.end <= sq1.end1 INNER JOIN sq2 sq22 ON sq22.begin >= sq1.begin2 and sq22.end <= sq1.end2 """ res = sqldf(query, locals()) # remove duplicates res = res.drop_duplicates(subset=['cui1', 'cui2', 'category', 'polarity1', 'polarity2']) res['string'] = res['polarity1'] + res['cui1'] + res['category'] + res['polarity2'] + res['cui2'] # return as a string return ' '.join(list(res['string']) + list(merged_df['polaritycui'])) X = [] # key function for sorting the files according to the integer of the filename def key_fn(x): i = x.split(".")[0] if i != "": return int(i) return None for f in sorted(os.listdir(input_directory), key=key_fn): # for each file in the input directory if f.endswith(".xmi"): fpath = os.path.join(input_directory, f) # parse file and append as a string to X try: X.append(ctakes_xmi_to_df(fpath)) except Exception as e: print(e) X.append('NaN') X = numpy.array(X)
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
Load annotations and classification data
# read and parse csv file data = pandas.read_csv(input_filename) # data = data[0:100] # for testing # X = X[0:100] # for testing data.head() # groups: the subject ids # used in order to ensure that # "patients’ notes stay within the set, so that all discharge notes in the # test set are from patients not previously seen by the model." Gehrmann17. groups_df = data.filter(items=['subject.id']) groups = groups_df.as_matrix() # y: the annotated classes y_df = data.filter(items=conditions) # filter the conditions y = y_df.as_matrix() print(X.shape, groups.shape, y.shape)
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
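The constraint quoted in the cell above — all notes from one patient stay on the same side of a split — is what the `groups` array enforces during evaluation. A minimal pure-Python sketch of a group-aware holdout (illustrative only; not the `evaluation` module used below):

```python
def group_holdout(groups, test_fraction=0.2):
    """Split sample indices so that no group (patient) appears in both sets."""
    unique = sorted(set(groups))
    n_test_groups = max(1, int(len(unique) * test_fraction))
    test_groups = set(unique[:n_test_groups])
    train = [i for i, g in enumerate(groups) if g not in test_groups]
    test = [i for i, g in enumerate(groups) if g in test_groups]
    return train, test

groups = [101, 101, 102, 103, 103, 104, 105]  # hypothetical subject.id values
train, test = group_holdout(groups)
print(train, test)
```

Note that both notes from subject 101 land together in the test set, never split across sides.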
Define classifiers
# dictionary of classifiers (sklearn estimators) classifiers = collections.OrderedDict() def tokenizer(text): pattern = r'[\s]+' # match any sequence of whitespace characters repl = r' ' # replace with space temp_text = re.sub(pattern, repl, text) return temp_text.lower().split(' ') # lower-case and split on space prediction_models = [ ('logistic_regression', LogisticRegression(random_state=0)), ("random_forest", RandomForestClassifier(random_state=0)), ("naive_bayes", MultinomialNB()), ("svm_linear", SVC(kernel="linear", random_state=0, probability=True)), ("gradient_boosting", GradientBoostingClassifier(random_state=0)), ] # BoW representation_models = [('ctakes_relationgram_bow_tfidf', TfidfVectorizer(tokenizer=tokenizer))] # IDEA: Use Tfidf on normal BoW model aswell? # cross product of representation models and prediction models # save to classifiers as pipelines of rep. model into pred. model for rep_model, pred_model in itertools.product(representation_models, prediction_models): classifiers.update({ # add this classifier to classifiers dictionary '{rep_model}_{pred_model}'.format(rep_model=rep_model[0], pred_model=pred_model[0]): # classifier name Pipeline([rep_model, pred_model]), # concatenate representation model with prediction model in a pipeline })
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
Run and evaluate
results = evaluation.run_evaluation(X=X, y=y, groups=groups, conditions=conditions, classifiers=classifiers, n_splits=n_splits, n_repeats=n_repeats, n_workers=n_workers)
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
Save and plot results
# save results results_df = pandas.DataFrame(results) results_df.to_csv(results_filename) results_df.head(100) ## load results for plotting # import pandas # results = pandas.read_csv('output/results.csv') # plot and save axs = results_df.groupby('name').boxplot(column='AUROC', by='condition', rot=90, figsize=(10,10)) for ax in axs: ax.set_ylim(0,1) plt.savefig(plot_filename_1) # plot and save axs = results_df.groupby('condition').boxplot(column='AUROC', by='name', rot=90, figsize=(10,10)) for ax in axs: ax.set_ylim(0,1) plt.savefig(plot_filename_2)
_____no_output_____
MIT
notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb
jonasspenger/clinical.notes.phenotyping
___ ___ Principal Component AnalysisLet's discuss PCA! Since this isn't exactly a full machine learning algorithm, but instead an unsupervised learning algorithm, we will just have a lecture on this topic, but no full machine learning project (although we will walk through the cancer set with PCA). PCA ReviewMake sure to watch the video lecture and theory presentation for a full overview of PCA! Remember that PCA is just a transformation of your data and attempts to find out what features explain the most variance in your data. For example: Libraries
import matplotlib.pyplot as plt import pandas as pd import numpy as np import seaborn as sns %matplotlib inline
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
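Before turning to scikit-learn, it may help to see numerically what "finding the directions that explain the most variance" means. A rough numpy sketch (via eigendecomposition of the covariance matrix, not scikit-learn's SVD-based implementation) on a synthetic 2-D cloud:

```python
import numpy as np

rng = np.random.RandomState(0)
# 200 samples of a 2-D cloud stretched strongly along the first axis
X = rng.randn(200, 2) @ np.array([[3.0, 0.0], [0.0, 0.5]])

Xc = X - X.mean(axis=0)                 # center each feature
cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # sort components by explained variance
components = eigvecs[:, order].T        # rows = principal components

projected = Xc @ components.T           # same role as pca.transform(...)
# The first component captures the stretched direction, so its variance dominates.
print(projected.var(axis=0))
```

The sklearn `PCA` used below performs an equivalent rotation, just computed more robustly.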
The DataLet's work with the cancer data set again since it had so many features.
from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() cancer.keys() print(cancer['DESCR']) df = pd.DataFrame(cancer['data'],columns=cancer['feature_names']) #(['DESCR', 'data', 'feature_names', 'target_names', 'target']) df.head()
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
PCA VisualizationAs we've noticed before, it is difficult to visualize high-dimensional data; we can use PCA to find the first two principal components and visualize the data in this new, two-dimensional space with a single scatter-plot. Before we do this though, we'll need to scale our data so that each feature has unit variance.
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(df) scaled_data = scaler.transform(df)
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
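What `StandardScaler` computes above is, per feature, (x − mean)/std. A quick numpy check of the "unit variance" claim, on a small random matrix rather than the cancer data:

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.rand(100, 3) * [10.0, 0.1, 1000.0]     # features on wildly different scales

scaled = (X - X.mean(axis=0)) / X.std(axis=0)  # per-feature standardization

# Every feature now has (approximately) zero mean and exactly unit variance.
print(scaled.mean(axis=0).round(6), scaled.std(axis=0).round(6))
```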
PCA with Scikit Learn uses a very similar process to other preprocessing functions that come with SciKit Learn. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform(). We can also specify how many components we want to keep when creating the PCA object.
from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(scaled_data)
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
Now we can transform this data to its first 2 principal components.
x_pca = pca.transform(scaled_data) scaled_data.shape x_pca.shape
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out!
plt.figure(figsize=(8,6)) plt.scatter(x_pca[:,0],x_pca[:,1],c=cancer['target'],cmap='plasma') plt.xlabel('First principal component') plt.ylabel('Second Principal Component')
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
Clearly by using these two components we can easily separate these two classes. Interpreting the components Unfortunately, with this great power of dimensionality reduction, comes the cost of not being able to easily understand what these components represent. The components correspond to combinations of the original features; the components themselves are stored as an attribute of the fitted PCA object:
pca.components_
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
In this NumPy array, each row represents a principal component, and each column relates back to the original features. We can visualize this relationship with a heatmap:
df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names']) plt.figure(figsize=(12,6)) sns.heatmap(df_comp,cmap='plasma',)
_____no_output_____
MIT
res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb
Calvibert/machine-learning-exercises
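The heatmap shows the loadings visually; to read them programmatically, one can rank the original features by the absolute loading on each component. A sketch with a made-up 2×4 components matrix and toy feature names (hypothetical values, not the fitted `pca.components_` above):

```python
import numpy as np

feature_names = ["mean radius", "mean texture", "mean area", "mean smoothness"]
components = np.array([[0.52, 0.10, 0.55, 0.05],   # hypothetical loadings, row = PC
                       [0.05, 0.70, 0.02, 0.60]])

for i, comp in enumerate(components):
    top = np.argsort(np.abs(comp))[::-1][:2]       # two strongest loadings per PC
    print("PC%d:" % (i + 1), [feature_names[j] for j in top])
```

Applied to the real `components_` matrix, the same loop would name the features each principal component leans on most.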