We can configure `anchorDateYear`, `anchorDateMonth` and `anchorDateDay` for relative dates. In the following example we use 2021/02/27 as the anchor date; to make that possible we set `anchorDateYear` to 2021, `anchorDateMonth` to 2 and `anchorDateDay` to 27, as the following configuration shows. | date_normalizer = DateNormalizer().setInputCols('chunk_date').setOutputCol('date')\
.setAnchorDateDay(27)\
.setAnchorDateMonth(2)\
.setAnchorDateYear(2021)
date_normaliced_df = date_normalizer.transform(chunks_df)
dateNormalizedClean = date_normaliced_df.selectExpr("ner_chunk","date.result as dateresult","date.metadata as metadata")
dateNormalizedClean.withColumn("dateresult", dateNormalizedClean["dateresult"]
.getItem(0)).withColumn("metadata", dateNormalizedClean["metadata"]
.getItem(0)['normalized']).show(truncate=False)
| +------------+----------+--------+
|ner_chunk |dateresult|metadata|
+------------+----------+--------+
|08/02/2018 |2018/08/02|true |
|11/2018 |2018/11/DD|true |
|11/01/2018 |2018/11/01|true |
|12Mar2021 |2021/03/12|true |
|Jan 30, 2018|2018/01/30|true |
|13.04.1999 |1999/04/13|true |
|3April 2020 |2020/04/03|true |
|next monday |2021/02/29|true |
|today |2021/02/27|true |
|next week |2021/03/03|true |
+------------+----------+--------+
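The anchor-date semantics above can be sketched with the standard library alone. This is a toy illustration, not Spark NLP's `DateNormalizer` (which supports far more patterns); `resolve_relative` is a hypothetical helper name.

```python
from datetime import date, timedelta

def resolve_relative(expression, anchor):
    """Resolve a few relative-date expressions against an anchor date.

    Illustrative only: mirrors the anchor-date idea shown above, not the
    actual DateNormalizer implementation.
    """
    if expression == "today":
        return anchor
    if expression == "next week":
        return anchor + timedelta(weeks=1)
    if expression == "next monday":
        # days until the following Monday (Monday == 0); never 0 days
        days_ahead = (0 - anchor.weekday()) % 7 or 7
        return anchor + timedelta(days=days_ahead)
    raise ValueError(f"unsupported expression: {expression}")

anchor = date(2021, 2, 27)  # anchorDateYear/Month/Day from the pipeline above
print(resolve_relative("today", anchor))     # 2021-02-27
print(resolve_relative("next week", anchor)) # 2021-03-06
```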
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb | Rock-ass/spark-nlp-workshop |
import pandas as pd
#data = pd.read_csv(filename, encoding= 'unicode_escape')
df = pd.read_csv("/content/test _123.csv", encoding= 'unicode_escape')
df.head()
#del df["planType"]
#del df["customField4"]
#del df["customField5"]
#del df["pdfName"]
#del df["phoneNumber"]
#del df["dropDate"]
#df
#Group by the coordinatedMailingId
gk = df.groupby(['coordinatedMailingId','asset1','addressLine1','isPrimary'])
gk.first()
#drop ALL duplicate values
#df.drop_duplicates(subset = "customerPersonId", keep = False, inplace = True)
#drop ALL duplicate values
#df.drop_duplicates(subset = "asset1", keep = False, inplace = True)
#drop ALL duplicate values
#df.drop_duplicates(subset = "asset2", keep = False, inplace = True)
#df
df['fullName'] = df['firstName'].str.cat(df['lastName'],sep=" ")
#firstName lastName fullName
df
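The `str.cat` concatenation above can be checked on toy data; the column names here simply mirror the assumed schema of the CSV.

```python
import pandas as pd

# Minimal check of the name-concatenation step on a two-row toy frame
toy = pd.DataFrame({'firstName': ['Ada', 'Alan'],
                    'lastName': ['Lovelace', 'Turing']})
toy['fullName'] = toy['firstName'].str.cat(toy['lastName'], sep=' ')
print(toy['fullName'].tolist())  # ['Ada Lovelace', 'Alan Turing']
```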
# Sorting by column "isPrimary"
df.sort_values(by=['isPrimary'], ascending=False)
df['phoneNumber']
for phone_no in df['phoneNumber']:
contactphone = "%c-%c%c%c-%c%c%c-%c%c%c%c" % tuple(map(ord,list(str(phone_no)[:11])))
print(contactphone)
df['phoneNumber']
for phone_no in df['phoneNumber']:
contactphone = "%c-%c%c-%c%c%c-%c%c%c%c" % tuple(map(ord,list(str(phone_no)[:10])))  # 10 "%c" specifiers require exactly 10 characters (slicing [:11] would raise TypeError)
print(contactphone)
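The `"%c"`/`ord()` trick above just reinserts each character; plain string slicing does the same thing more readably. `format_phone` is a hypothetical helper, assuming an 11-digit value (country code plus 10 digits).

```python
def format_phone(raw):
    """Format an 11-digit string like '12345678901' as '1-234-567-8901'."""
    s = str(raw)[:11]
    return f"{s[0]}-{s[1:4]}-{s[4:7]}-{s[7:11]}"

print(format_phone("12345678901"))  # 1-234-567-8901
```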
df = df.assign(phone=[2, 2, 4, 7, 4, 1],
Identity=[0, 1, 1, 3, 2, 5]) | _____no_output_____ | MIT | Python_Project1.ipynb | gndede/python | |
Code to download The Guardian UK data and clean data for text analysis@Jorge de Leon This script allows you to download news articles that match your parameters from the Guardian newspaper, https://www.theguardian.com/us. Set-up | import os
import re
import glob
import json
import requests
import pandas as pd
from glob import glob
from os import makedirs
from textblob import TextBlob
from os.path import join, exists
from datetime import date, timedelta
os.chdir("..")
import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
from nltk import sent_tokenize, word_tokenize
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords | _____no_output_____ | MIT | python/The Guardian Data ingestion and wrangling/The_Guardian_JPMorgan.ipynb | georgetown-analytics/Economic-Events |
API and news article requestsThis section contains the code used to download articles from the Guardian website. The initial variables are set as user-defined parameters. | #Enter API and parameters - these parameters can be obtained by playing around with the Guardian API tool:
# https://open-platform.theguardian.com/explore/
# Set up initial and end date
start_date_global = date(2000, 1, 1)
end_date_global = date(2020, 5, 17)
query = "JPMorgan"
term = ('stock')
#Enter API key, endpoint and parameters
my_api_key = open("..\\input files\\creds_guardian.txt").read().strip()
api_endpoint = "http://content.guardianapis.com/search?"
my_params = {
'from-date': '',
'to-date': '',
'show-fields': 'bodyText',
'q': query,
'page-size': 200,
'api-key': my_api_key
}
articles_dir = join('theguardian','jpmorgan')
makedirs(articles_dir, exist_ok=True)
# day iteration from here:
# http://stackoverflow.com/questions/7274267/print-all-day-dates-between-two-dates
start_date = start_date_global
end_date = end_date_global
dayrange = range((end_date - start_date).days + 1)
for daycount in dayrange:
dt = start_date + timedelta(days=daycount)
datestr = dt.strftime('%Y-%m-%d')
fname = join(articles_dir, datestr + '.json')
if not exists(fname):
# then let's download it
print("Downloading", datestr)
all_results = []
my_params['from-date'] = datestr
my_params['to-date'] = datestr
current_page = 1
total_pages = 1
while current_page <= total_pages:
print("...page", current_page)
my_params['page'] = current_page
resp = requests.get(api_endpoint, my_params)
data = resp.json()
all_results.extend(data['response']['results'])
# if there is more than one page
current_page += 1
total_pages = data['response']['pages']
with open(fname, 'w') as f:
print("Writing to", fname)
# re-serialize it for pretty indentation
f.write(json.dumps(all_results, indent=2))
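The day-iteration pattern used in the download loop above can be isolated into a small stdlib-only helper; `daily_strings` is a hypothetical name for illustration.

```python
from datetime import date, timedelta

def daily_strings(start, end):
    """One 'YYYY-MM-DD' string per day between start and end, both inclusive."""
    return [(start + timedelta(days=d)).strftime('%Y-%m-%d')
            for d in range((end - start).days + 1)]

print(daily_strings(date(2020, 5, 15), date(2020, 5, 17)))
# ['2020-05-15', '2020-05-16', '2020-05-17']
```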
#Read all json files that will be concatenated
test_files = sorted(glob('theguardian/jpmorgan/*.json'))
#intialize empty list that we will append dataframes to
all_files = []
#write a for loop that will go through each of the file name through globbing and the end result will be the list
#of dataframes
for file in test_files:
    try:
        articles = pd.read_json(file)
        all_files.append(articles)
    except pd.errors.EmptyDataError:
        print(f'Note: {file} was empty. Skipping.')
        continue  # skip the rest of the block and move on to the next file
#create dataframe with data from json files
theguardian_rawdata = pd.concat(all_files, axis=0, ignore_index=True) | _____no_output_____ | MIT | python/The Guardian Data ingestion and wrangling/The_Guardian_JPMorgan.ipynb | georgetown-analytics/Economic-Events |
Text Analysis | #Drop empty columns
theguardian_rawdata = theguardian_rawdata.iloc[:,0:12]
#show types of media that was downloaded by type
theguardian_rawdata['type'].unique()
#filter only for articles
theguardian_rawdata = theguardian_rawdata[theguardian_rawdata['type'].str.match('article',na=False)]
#remove columns that do not contain relevant information for analysis
theguardian_dataset = theguardian_rawdata.drop(['apiUrl','id', 'isHosted', 'pillarId', 'pillarName',
'sectionId', 'sectionName', 'type','webTitle', 'webUrl'], axis=1)
#Modify the column webPublicationDate to Date and the fields to string and lower case
theguardian_dataset["date"] = pd.to_datetime(theguardian_dataset["webPublicationDate"]).dt.strftime('%Y-%m-%d')
theguardian_dataset['fields'] = theguardian_dataset['fields'].astype(str).str.lower()
#Clean the articles from URLS, remove punctuaction and numbers.
theguardian_dataset['fields'] = theguardian_dataset['fields'].str.replace('<.*?>','') # remove HTML tags
theguardian_dataset['fields'] = theguardian_dataset['fields'].str.replace('[^\w\s]','') # remove punc.
#Generate sentiment analysis for each article
#Using TextBlob obtain polarity
theguardian_dataset['sentiment_polarity'] = theguardian_dataset['fields'].apply(lambda row: TextBlob(row).sentiment.polarity)
#Using TextBlob obtain subjectivity
theguardian_dataset['sentiment_subjectivity'] = theguardian_dataset['fields'].apply(lambda row: TextBlob(row).sentiment.subjectivity)
#Remove numbers from text
theguardian_dataset['fields'] = theguardian_dataset['fields'].str.replace('\d+','') # remove numbers
#Then I will tokenize each word and remover stop words
theguardian_dataset['tokenized_fields'] = theguardian_dataset.apply(lambda row: nltk.word_tokenize(row['fields']), axis=1)
#Stop words
stop_words=set(stopwords.words("english"))
#Remove stop words
theguardian_dataset['tokenized_fields'] = theguardian_dataset['tokenized_fields'].apply(lambda x: [item for item in x if item not in stop_words])
#Count number of words and create a column with the most common 5 words per article
from collections import Counter
theguardian_dataset['high_recurrence'] = theguardian_dataset['tokenized_fields'].apply(lambda x: [k for k, v in Counter(x).most_common(5)])
#Create a word count for the word "stock"
theguardian_dataset['word_ocurrence'] = theguardian_dataset['tokenized_fields'].apply(lambda x: [w for w in x if re.search(term, w)])
theguardian_dataset['word_count'] = theguardian_dataset['word_ocurrence'].apply(len)
#Create a count of the total number of words
theguardian_dataset['total_words'] = theguardian_dataset['tokenized_fields'].apply(len)
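The token statistics computed above with nltk/pandas boil down to counting; a dependency-free sketch (`token_stats` is a hypothetical helper, not part of the notebook):

```python
from collections import Counter

def token_stats(tokens, term, k=1):
    """Top-k tokens plus how many tokens contain `term` (toy stand-in for
    the high_recurrence / word_count columns built above)."""
    top = Counter(tokens).most_common(k)
    term_count = sum(1 for w in tokens if term in w)
    return top, term_count

tokens = "stock prices rose as stock traders bought more stock".split()
print(token_stats(tokens, "stock"))  # ([('stock', 3)], 3)
```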
#Create new table with average polarity, subjectivity, count of the word "stock" per day
guardian_microsoft = theguardian_dataset.groupby('date')['sentiment_polarity','sentiment_subjectivity','word_count','total_words'].agg('mean')
#Create a variable for the number of articles per day
count_articles = theguardian_dataset
count_articles['no_articles'] = count_articles.groupby(['date'])['fields'].transform('count')
count_articles = count_articles[["date","no_articles"]]
count_articles_df = count_articles.drop_duplicates(subset = "date",
keep = "first", inplace=False)
#Join tables by date
guardian_microsoft = guardian_microsoft.merge(count_articles_df, on='date', how ='left')
#Save dataframes into CSV
theguardian_dataset.to_csv('theguardian/jpmorgan/theguardian_jpmorgan_text.csv', encoding='utf-8')
guardian_microsoft.to_csv('theguardian/jpmorgan/theguardian_jpmorgan_data.csv', encoding='utf-8') | _____no_output_____ | MIT | python/The Guardian Data ingestion and wrangling/The_Guardian_JPMorgan.ipynb | georgetown-analytics/Economic-Events |
Neuroon cross-validation------------------------Neuroon and PSG recordings were simultaneously collected over the course of two nights. This analysis will show whether Neuroon is able to accurately classify sleep stages. The PSG classification will be a benchmark against which Neuroon performance will be tested. "The AASM Manual for the Scoring of Sleep and Associated Events" identifies 5 sleep stages: * Stage W (Wakefulness)* Stage N1 (NREM 1)* Stage N2 (NREM 2)* Stage N3 (NREM 3)* Stage R (REM)These stages can be identified by following the guidelines in [1], either visually or digitally, using combined information from EEG, EOG and EMG. Extensive research is being conducted on developing automated and simpler methods for sleep stage classification suitable for everyday home use (for a review see [2]). Automatic methods based on single-channel EEG, which is the Neuroon category, were shown to work accurately when compared to PSG scoring [3]. [1] Berry RB, Gamaldo CE, Harding SM, Lloyd RM, Marcus CL, Vaughn BV; for the American Academy of Sleep Medicine. The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications, Version 2.0.3. Darien, IL: American Academy of Sleep Medicine; 2014.[2] Van De Water, A. T. M., Holmes, A., & Hurley, D. A. (2011). Objective measurements of sleep for non-laboratory settings as alternatives to polysomnography - a systematic review. Journal of Sleep Research, 20, 183–200. [3] Berthomier, C., Drouot, X., Herman-Stoïca, M., Berthomier, P., Prado, J., Bokar-Thire, D., d'Ortho, M.P. (2007). Automatic analysis of single-channel sleep EEG: validation in healthy individuals. Sleep, 30(11), 1587–1595. Signals time-synchronization using cross-correlation--------------------------------------------------Neuroon and PSG were recorded on devices with (probably) unsynchronized clocks. 
First we will use a cross-correlation method [4] to find the time offset between the two recordings.[4] Fridman, L., Brown, D. E., Angell, W., Abdić, I., Reimer, B., & Noh, H. Y. (2016). Automated synchronization of driving data using vibration and steering events. Pattern Recognition Letters, 75, 9-15.Define the cross-correlation function - code from: (http://lexfridman.com/blogs/research/2015/09/18/fast-cross-correlation-and-time-series-synchronization-in-python/)for other examples see: (http://stackoverflow.com/questions/4688715/find-time-shift-between-two-similar-waveforms) |
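As a sanity check of the FFT approach, standard circular cross-correlation recovers a known shift on synthetic data. This sketch uses the textbook conjugate formulation; the notebook's `compute_shift` uses an equivalent flip-and-`fftshift` variant.

```python
import numpy as np

def circular_shift(x, y):
    """Return s such that y ≈ np.roll(x, s), via FFT cross-correlation."""
    corr = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y))))
    # peak sits at index (N - s) mod N, so invert to recover s
    return (len(x) - int(np.argmax(corr))) % len(x)

n = np.arange(256)
x = np.exp(-0.5 * ((n - 100) / 5.0) ** 2)  # a narrow pulse
y = np.roll(x, 17)                         # delay by 17 samples
print(circular_shift(x, y))                # 17
```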
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from itertools import tee
import pandas as pd
import seaborn as sns
from numpy.fft import fft, ifft, fft2, ifft2, fftshift
from collections import OrderedDict
from datetime import timedelta
plt.rcParams['figure.figsize'] = (9.0, 5.0)
from parse_signal import load_psg, load_neuroon
# Cross-correlation function. Equivalent to numpy.correlate(x,y mode = 'full') but faster for large arrays
# This function was tested against other cross correlation methods in -- LINK TO OTHER NOTEBOOK
def cross_correlation_using_fft(x, y):
f1 = fft(x)
f2 = fft(np.flipud(y))
cc = np.real(ifft(f1 * f2))
return fftshift(cc)
# shift < 0 means that y starts 'shift' time steps before x # shift > 0 means that y starts 'shift' time steps after x
def compute_shift(x, y):
assert len(x) == len(y)
c = cross_correlation_using_fft(x, y)
assert len(c) == len(x)
zero_index = int(len(x) / 2) - 1
shift = zero_index - np.argmax(c)
return shift,c
def cross_correlate():
# Load the signal from hdf database and parse it to pandas series with datetime index
psg_signal = load_psg('F3-A2')
neuroon_signal = load_neuroon()
# Resample the signal to 100hz, to have the same length for cross correlation
psg_10 = psg_signal.resample('10ms').mean()
neuroon_10 = neuroon_signal.resample('10ms').mean()
# Create ten minute intervals
dates_range = pd.date_range(psg_signal.head(1).index.get_values()[0], neuroon_signal.tail(1).index.get_values()[0], freq="10min")
# Convert datetime interval boundaries to string with only hours, minutes and seconds
dates_range = [d.strftime('%H:%M:%S') for d in dates_range]
all_coefs = []
# iterate over overlapping pairs of 10 minutes boundaries
for start, end in pairwise(dates_range):
# cut 10 minutes piece of signal
neuroon_cut = neuroon_10.between_time(start, end)
psg_cut = psg_10.between_time(start, end)
# Compute the correlation using fft convolution
shift, coeffs = compute_shift(neuroon_cut, psg_cut)
#normalize the coefficients because they will be shown on the same heatmap and need a common color scale
all_coefs.append((coeffs - coeffs.mean()) / coeffs.std())
#print('max corr at shift %s is at sample %i'%(start, shift))
all_coefs = np.array(all_coefs)
return all_coefs, dates_range
# This function is used to iterate over a list, taking two consecutive items at each iteration
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
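The `pairwise` pattern can be exercised on a short list (redefined here so the snippet is self-contained):

```python
from itertools import tee

def pairwise(iterable):
    """s -> (s0, s1), (s1, s2), (s2, s3), ..."""
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

print(list(pairwise(['00:00', '00:10', '00:20'])))
# [('00:00', '00:10'), ('00:10', '00:20')]
```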
# Construct a matrix where each row represents a 10 minute window from the recording
# and each column represent correlation coefficient between neuroon and psg signals offset by samples number.
# 0 samples offset coefficient is stored at the middle column -1. Negative offset and positive offset span left and right from the center.
# offset < 0 means that psg starts 'shift' time steps before neuroon
# offset > 0 means that psg starts 'shift' time steps after neuroon
coeffs_matrix, dates = cross_correlate()
from plotting_collection import plot_crosscorrelation_heatmap
#Plot part of the coefficients matrix centered around the max average correlation for all 10 minute windows
plot_crosscorrelation_heatmap(coeffs_matrix, dates) | _____no_output_____ | MIT | old scripts/Time_synchronization.ipynb | pawelngei/sleep_project |
Hipnogram time-delay--------------------From the cross-correlation of the EEG signals we can see that the two devices are off by 2 minutes 41 seconds. Now we'll see if there is a point in time where the hipnograms are most similar. The measure of hipnogram similarity will be the sum of times when the two devices classified the same sleep stage. | import parse_hipnogram as ph
def get_hipnogram_intersection(neuroon_hipnogram, psg_hipnogram, time_shift):
neuroon_hipnogram.index = neuroon_hipnogram.index + timedelta(seconds = int(time_shift))
combined = psg_hipnogram.join(neuroon_hipnogram, how = 'outer', lsuffix = '_psg', rsuffix = '_neuro')
combined.loc[:, ['stage_num_psg', 'stage_name_psg', 'stage_num_neuro', 'stage_name_neuro', 'event_number_psg', 'event_number_neuro']] = combined.loc[:, ['stage_num_psg', 'stage_name_psg', 'stage_num_neuro', 'stage_name_neuro', 'event_number_psg', 'event_number_neuro']].fillna( method = 'bfill')
combined.loc[:, ['stage_shift_psg', 'stage_shift_neuro']] = combined.loc[:, ['stage_shift_psg', 'stage_shift_neuro']].fillna( value = 'inside')
# From the occupied room number subtract the room occupied by another mouse.
combined['overlap'] = combined['stage_num_psg'] - combined['stage_num_neuro']
same_stage = combined.loc[combined['overlap'] == 0]
same_stage.loc[:, 'event_union'] = same_stage['event_number_psg'] + same_stage['event_number_neuro']
# common_window = np.array([neuroon_hipnogram.tail(1).index.get_values()[0] - psg_hipnogram.head(1).index.get_values()[0]],dtype='timedelta64[m]').astype(int)[0]
all_durations = OrderedDict()
for stage_name, intersection in same_stage.groupby('event_union'):
# Subtract the first row timestamp from the last to get the duration. Store as the duration in milliseconds.
duration = (intersection.index.to_series().iloc[-1]- intersection.index.to_series().iloc[0]).total_seconds()
stage_id = intersection.iloc[0, intersection.columns.get_loc('stage_name_neuro')]
# Keep appending results to a list stored in a dict. Check if the list exists, if not create it.
if stage_id not in all_durations.keys():
all_durations[stage_id] = [duration]
else:
all_durations[stage_id].append(duration)
means = OrderedDict()
stds = OrderedDict()
sums = OrderedDict()
stages_sum = 0
#Adding it here so its first in ordered dict and leftmost on the plot
sums['stages_sum'] = 0
for key, value in all_durations.items():
#if key != 'wake':
means[key] = np.array(value).mean()
stds[key] = np.array(value).std()
sums[key] = np.array(value).sum()
stages_sum += np.array(value).sum()
sums['stages_sum'] = stages_sum
# Divide total seconds by 60 to get minutes
#return stages_sum
return sums, means, stds
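The similarity measure itself — time spent in the same stage — reduces to counting matching samples once both hypnograms are on a common 1-second grid. A minimal sketch with made-up stage sequences (not the notebook's join-based implementation):

```python
from collections import Counter

def stage_overlap(stages_a, stages_b):
    """Seconds spent in the same stage, per stage, for 1 Hz stage sequences."""
    return Counter(a for a, b in zip(stages_a, stages_b) if a == b)

psg     = ['wake', 'wake', 'N1', 'N2', 'N2', 'rem']
neuroon = ['wake', 'N1',   'N1', 'N2', 'N3', 'rem']
print(stage_overlap(psg, neuroon))  # one matching second each for wake, N1, N2, rem
```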
def intersect_with_shift():
psg_hipnogram = ph.parse_psg_stages()
neuroon_hipnogram = ph.parse_neuroon_stages()
intersection = OrderedDict([('wake', []), ('rem',[]), ('N1',[]), ('N2',[]), ('N3', []), ('stages_sum', [])])
shift_range = np.arange(-500, 100, 10)
for shift in shift_range:
sums, _, _ = get_hipnogram_intersection(neuroon_hipnogram.copy(), psg_hipnogram.copy(), shift)
for stage, intersect_dur in sums.items():
intersection[stage].append(intersect_dur)
return intersection, shift_range
def plot_intersection(intersection, shift_range):
psg_hipnogram = ph.parse_psg_stages()
neuroon_hipnogram = ph.parse_neuroon_stages()
stage_color_dict = {'N1' : 'royalblue', 'N2' :'forestgreen', 'N3' : 'coral', 'rem' : 'plum', 'wake' : 'lightgrey', 'stages_sum': 'dodgerblue'}
fig, axes = plt.subplots(2)
zscore_ax = axes[0].twinx()
for stage in ['rem', 'N2', 'N3', 'wake']:
intersect_sum = np.array(intersection[stage])
z_scored = (intersect_sum - intersect_sum.mean()) / intersect_sum.std()
zscore_ax.plot(shift_range, z_scored, color = stage_color_dict[stage], label = stage, alpha = 0.5, linestyle = '--')
max_overlap = shift_range[np.argmax(intersection['stages_sum'])]
fig.suptitle('max overlap at %i seconds offset'%max_overlap)
axes[0].plot(shift_range, intersection['stages_sum'], label = 'stages sum', color = 'dodgerblue')
axes[0].axvline(max_overlap, color='k', linestyle='--')
axes[0].set_ylabel('time in the same sleep stage')
axes[0].set_xlabel('offset in seconds')
axes[0].legend(loc = 'center right')
zscore_ax.grid(b=False)
zscore_ax.legend()
sums0, means0, stds0 = get_hipnogram_intersection(neuroon_hipnogram.copy(), psg_hipnogram.copy(), 0)
#
width = 0.35
ind = np.arange(5)
colors_inorder = ['dodgerblue', 'lightgrey', 'forestgreen', 'coral', 'plum']
#Plot the non shifted overlaps
axes[1].bar(left = ind, height = list(sums0.values()),width = width, alpha = 0.8,
tick_label =list(sums0.keys()), edgecolor = 'black', color= colors_inorder)
sumsMax, meansMax, stdsMax = get_hipnogram_intersection(neuroon_hipnogram.copy(), psg_hipnogram.copy(), max_overlap)
# Plot the shifted overlaps
axes[1].bar(left = ind +width, height = list(sumsMax.values()),width = width, alpha = 0.8,
tick_label =list(sumsMax.keys()), edgecolor = 'black', color = colors_inorder)
axes[1].set_xticks(ind + width)
plt.tight_layout()
intersection, shift_range = intersect_with_shift()
plot_intersection(intersection, shift_range) | _____no_output_____ | MIT | old scripts/Time_synchronization.ipynb | pawelngei/sleep_project |
@author: Krishan SubudhiCreate a list of 50 random numbers | import random
arr = [random.randint(0,1000) for i in range(50)]
arr[:10] | _____no_output_____ | Apache-2.0 | python_and_data_analysis/numpy-sample.ipynb | krishansubudhi/krishanAI |
Calculate square root | import math
%timeit [math.sqrt(a) for a in arr] | 20.9 µs ± 177 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
| Apache-2.0 | python_and_data_analysis/numpy-sample.ipynb | krishansubudhi/krishanAI |
numpy 1. Faster 2. Easy to use 3. Memory efficient 4. Rich in mathematical functions 5. Easy matrix operations | import numpy as np
numpy_arr = np.array(arr)
numpy_arr[:10]
%timeit np.sqrt(numpy_arr) | 2.82 µs ± 196 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
| Apache-2.0 | python_and_data_analysis/numpy-sample.ipynb | krishansubudhi/krishanAI |
Matrix operations | matrix_2d = np.random.randint(0,100,(2,3))
matrix_2d
matrix_2d.sum(axis = 0) | _____no_output_____ | Apache-2.0 | python_and_data_analysis/numpy-sample.ipynb | krishansubudhi/krishanAI |
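The `axis` argument determines which dimension is collapsed: `axis=0` collapses rows (column sums), `axis=1` collapses columns (row sums). A quick check on a fixed matrix:

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])
print(m.sum(axis=0))  # [5 7 9]   (column sums)
print(m.sum(axis=1))  # [ 6 15]   (row sums)
```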
PI-ICR analysisCreated on 17 July 2019 for the ISOLTRAP experiment- V1.1 (24 June 2020): Maximum likelihood estimation was simplified based on SciPy PDFs and the CERN-ROOT6 minimizer via the iminuit package (→ great performance)- V1.2 (20 February 2021): Preparations for scientific publication and iminuit v2 update integration@author: Jonas Karthein@contact: jonas.karthein@cern.ch@license: MIT license References[1]: https://doi.org/10.1007/s00340-013-5621-0[2]: https://doi.org/10.1103/PhysRevLett.110.082501[3]: https://doi.org/10.1007/s10751-019-1601-z[4]: https://doi.org/10.1103/PhysRevLett.124.092502[1] S. Eliseev, _et al._ Appl. Phys. B (2014) 114: 107.[2] S. Eliseev, _et al._ Phys. Rev. Lett. 110, 082501 (2013).[3] J. Karthein, _et al._ Hyperfine Interact (2019) 240: 61. ApplicationThe code was used to analyse data for the following publications:[3] J. Karthein, _et al._ Hyperfine Interact (2019) 240: 61.[4] V. Manea and J. Karthein, _et al._ Phys. Rev. Lett. 124, 092502 (2020)[5] M. Mougeot, _et al._ in preparation (2020) IntroductionThe following code was written to reconstruct raw Phase-Imaging Ion-Cyclotron-Resonance (PI-ICR) data, to fit PI-ICR position information and calculate a frequency using the pattern 1/2 scheme described in Ref. [1], and to determine a frequency ratio between a measurement ion and a reference ion. Additionally, the code allows one to analyze isomeric states separated in pattern 2. Required software and librariesThe following code was written in Python 3.7. The required libraries are listed below with a rough description of their task in the code. 
It doesn't claim to be a full description of the library.* pandas (data storage and calculation)* numpy (calculation)* matplotlib (plotting)* scipy (PDFs, least squares estimation)* configparser (configuration file processing)* jupyter (Python notebook environment)* iminuit (CERN-ROOT6 minimizer)All packages can be fetched using pip: | !pip3 install --user pandas numpy matplotlib scipy configparser jupyter iminuit | _____no_output_____ | MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Instead of the regular jupyter environment, one can also use CERN's SWAN service or Google Colab. | google_colab = False
if google_colab:
try:
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive/Colab/pi-icr/
except:
%cd ~/cernbox/Documents/Colab/pi-icr/ | _____no_output_____ | MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Data filesSpecify whether the analysis involves one or two states separated in pattern 2 by commenting out the non-applicable case in lines 10 or 11. Then enter the file paths for all your data files without the `*.txt` extension. In the following, `ioi` represents the ion of interest, and `ref` the reference ion. | %config InlineBackend.figure_format ='retina'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pickle, os
# analysis = {'ioi_g': {},'ref': {}}
analysis = {'ioi_g': {},'ioi_m': {},'ref': {}}
files_ioi_g = ['data/ioi_ground/85Rb_c_000',
'data/ioi_ground/85Rb_002',
'data/ioi_ground/85Rb_004',
'data/ioi_ground/85Rb_006']
# files_ioi_m = ['data/ioi_isomer/101In_c_000',
# 'data/ioi_isomer/101In_005']
files_ref = ['data/ref/133Cs_c_000',
'data/ref/133Cs_003',
'data/ref/133Cs_005',
'data/ref/133Cs_007']
latex_ioi_g = '$^{85}$Rb'  # label matched to the 85Rb data files above
# latex_ioi_m = '$^{101}$In$^m$'
latex_ref = '$^{133}$Cs' | _____no_output_____ | MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Load pre-analyzed data from file or reconstruct raw dataAll files are loaded and reconstructed into one big dictionary of dictionaries. It contains, besides the positions and timestamps, information about the measurement conditions (excitation frequencies, rounds etc.). One can load a whole beamtime at once. Center files must be indicated by a `_c_` in the name (e.g. regular name: `101In_001.txt` $\rightarrow$ center name `101In_c_000.txt`). All the data is at later stages saved in a `pickle` file. This enables quick loading of the data dictionary without the need of re-reconstructing the data.The reconstruction code is parallelized and can be found in the subfolder `bin/reconstruction.py` | from bin.reconstruction import PIICR
piicr = PIICR()
if os.path.isfile('data/data-save.p'):
analysis = pickle.load(open('data/data-save.p','rb'))
print('\nLoading finished!')
else:
for file in files_ioi_g:
analysis['ioi_g'].update({file: piicr.prepare(file)})
if analysis['ioi_m'] != {}:
for file in files_ioi_m:
analysis['ioi_m'].update({file: piicr.prepare(file)})
for file in files_ref:
analysis['ref'].update({file: piicr.prepare(file)})
print('\nReconstruction finished!') |
Loading finished!
| MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Individual file selectionThe analysis dictionary contains all files. The analysis, however, is intended to be performed on a file-by-file basis. Please select the individual file here via the variable `file_name`. | # load P1, P2 and C data into pandas dataframes for the selected file
# file_name = files_ioi_g[1]
# file_name = files_ioi_m[1]
# file_name = files_ref[1]
file_name = files_ioi_g[3]
print('Selected file:',file_name)
if 'ground' in file_name:
df_p1 = pd.DataFrame(analysis['ioi_g'][file_name]['p1'], columns=['event','x','y','time'])
df_p2 = pd.DataFrame(analysis['ioi_g'][file_name]['p2'], columns=['event','x','y','time'])
df_c = pd.DataFrame(analysis['ioi_g'][file_name.split('_0', 1)[0]+'_c_000']['c'],
columns=['event','x','y','time'])
elif 'isomer' in file_name:
df_p1 = pd.DataFrame(analysis['ioi_m'][file_name]['p1'], columns=['event','x','y','time'])
df_p2 = pd.DataFrame(analysis['ioi_m'][file_name]['p2'], columns=['event','x','y','time'])
df_c = pd.DataFrame(analysis['ioi_m'][file_name.split('_0', 1)[0]+'_c_000']['c'],
columns=['event','x','y','time'])
else:
df_p1 = pd.DataFrame(analysis['ref'][file_name]['p1'], columns=['event','x','y','time'])
df_p2 = pd.DataFrame(analysis['ref'][file_name]['p2'], columns=['event','x','y','time'])
df_c = pd.DataFrame(analysis['ref'][file_name.split('_0', 1)[0]+'_c_000']['c'],
columns=['event','x','y','time']) | Selected file: data/ioi_ground/85Rb_006
| MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Manual space and time cutPlease perform a rough manual space cut for each file to improve the results of the automatic space-cutting tool. This is necessary if one deals with two states in pattern 2 or if there is a lot of background. This selection will be ellipsoidal. Additionally, please perform a rough time-of-flight (ToF) cut. | # manual_space_cut = [x_peak_pos, x_peak_spread, y_peak_pos, y_peak_spread]
manual_space_cut = {'data/ioi_ground/85Rb_002': [150, 150, 100, 150],
'data/ioi_ground/85Rb_004': [150, 150, 100, 150],
'data/ioi_ground/85Rb_006': [150, 150, 100, 150],
'data/ref/133Cs_003': [120, 150, 80, 150],
'data/ref/133Cs_005': [120, 150, 80, 150],
'data/ref/133Cs_007': [120, 150, 80, 150]}
# manual_tof_cut = [tof_min, tof_max]
manual_tof_cut = [20, 50]
# manual_z_cut <= number of ions in the trap
manual_z_cut = 5 | _____no_output_____ | MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
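The ±kσ outlier rejection described in the next section can be sketched with the standard library alone. This is a toy illustration, not the notebook's scipy/iminuit fit; note that the contaminant itself inflates the estimated σ, so a tighter k is used here than the ±5σ of the real analysis.

```python
import statistics

def sigma_cut(values, k=3.0):
    """Keep only values within k standard deviations of the mean."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [v for v in values if abs(v - mu) <= k * sigma]

tofs = [30.1, 30.2, 29.9, 30.0, 30.1, 45.0]  # one obvious contaminant
print(sigma_cut(tofs, k=2.0))  # contaminant at 45.0 is rejected
```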
Automatic time and space cuts based on Gaussian distributionThis section contains all cuts in time and space, in different steps. 1. In the time domain, contaminants are removed by fitting a Gaussian distribution via maximum likelihood estimation to the largest peak in the ToF spectrum and cutting +/- 5 $\sigma$ (change cut range in lines 70 & 71). The ToF distribution has to be binned first before the maximum can be found, but the fit is performed on the unbinned data set. 2. The manual space cut is applied for pattern 1 and pattern 2 (not for the center spot). 3. Outliers/wrongly excited ions are removed +/- 3 $\sigma$ by measures of a simple mean in x and y after applying the manual cut (change cut range in lines). 4. Ejections with more than `manual_z_cut` ions in the trap (without taking into account the detector efficiency) are rejected (= z-class cut). | %config InlineBackend.figure_format ='retina'
import matplotlib as mpl
from scipy.stats import norm
from iminuit import Minuit
# Utopia LaTeX font with greek letters
mpl.rc('font', family='serif', serif='Linguistics Pro')
mpl.rc('text', usetex=False)
mpl.rc('mathtext', fontset='custom',
rm='Linguistics Pro',
it='Linguistics Pro:italic',
bf='Linguistics Pro:bold')
mpl.rcParams.update({'font.size': 18})
col = ['#FFCC00', '#FF2D55', '#00A2FF', '#61D935', 'k', 'grey', 'pink'] # yellow, red, blue, green
df_list = [df_p1, df_p2, df_c]
pattern = ['p1', 'p2', 'c']
bin_time_df = [0,0,0] # [p1,p2,c] list of dataframes containing the time-binned data
result_t = [0,0,0] # [p1,p2,c] list of MLE fit result dicts
cut_df = [0,0,0] # [p1,p2,c] list of dataframes containing the time- and space-cut data
excludes_df = [0,0,0] # [p1,p2,c] list of dataframes containing the time- and space-cut excluded data
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(12, 15))
for df_nr in range(len(df_list)):
##############################
### BINNING, FITTING TOF DISTR
##############################
bin_time_df[df_nr] = pd.DataFrame(pd.value_counts(pd.cut(df_list[df_nr].time, bins=np.arange(manual_tof_cut[0], manual_tof_cut[1],0.02))).sort_index()).rename(index=str, columns={'time': 'counts'}).reset_index(drop=True)
bin_time_df[df_nr]['time'] = np.arange(manual_tof_cut[0]+0.01,manual_tof_cut[1]-0.01,0.02)
# fit gaussian to time distribution using unbinned maximum likelihood estimation
def NLL_1D(mean, sig):
'''Negative log likelihood function for (n=1)-dimensional Gaussian distribution.'''
return( -np.sum(norm.logpdf(x=data_t,
loc=mean,
scale=sig)) )
def Start_Par(data):
'''Starting parameter based on simple mean of 1D numpy array.'''
return(np.array([data.mean(), # mean
data.std()])) # standard deviation
# minimize negative log likelihood function first for the symmetric case
data_t = df_list[df_nr][(df_list[df_nr].time > bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()] - 1.0) &
(df_list[df_nr].time < bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()] + 1.0)].time.to_numpy()
result_t[df_nr] = Minuit(NLL_1D, mean=Start_Par(data_t)[0], sig=Start_Par(data_t)[1])
result_t[df_nr].errors = (0.1, 0.1) # initial step size
result_t[df_nr].limits =[(None, None), (None, None)] # fit ranges
result_t[df_nr].errordef = Minuit.LIKELIHOOD # MLE definition (instead of Minuit.LEAST_SQUARES)
result_t[df_nr].migrad() # finds minimum of mle function
result_t[df_nr].hesse() # computes errors
for p in result_t[df_nr].parameters:
print("{} = {:3.5f} +/- {:3.5f}".format(p, result_t[df_nr].values[p], result_t[df_nr].errors[p]))
##############################
### VISUALIZE TOF DISTRIBUTION # kind='bar' is VERY time consuming -> use kind='line' instead!
##############################
# whole distribution
bin_time_df[df_nr].plot(x='time', y='counts', kind='line', xticks=np.arange(manual_tof_cut[0],manual_tof_cut[1]+1,5), ax=axes[df_nr,0])
# reduced peak plus fit
bin_time_df[df_nr][bin_time_df[df_nr].counts.idxmax()-50:bin_time_df[df_nr].counts.idxmax()+50].plot(x='time', y='counts', kind='line', ax=axes[df_nr,1])
pdf_x = np.arange(bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()-50],
bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()+51],
(bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()+51]
-bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()-50])/100)
pdf_y = norm.pdf(pdf_x, result_t[df_nr].values['mean'], result_t[df_nr].values['sig'])
axes[df_nr,0].plot(pdf_x, pdf_y/pdf_y.max()*bin_time_df[df_nr].counts.max(), 'r', label='PDF')
axes[df_nr,1].plot(pdf_x, pdf_y/pdf_y.max()*bin_time_df[df_nr].counts.max(), 'r', label='PDF')
# mark events in t that will be cut away (+/- 3 sigma = 99.73% of data)
bin_time_df[df_nr][(bin_time_df[df_nr].time < result_t[df_nr].values['mean'] - 3*result_t[df_nr].values['sig']) |
(bin_time_df[df_nr].time > result_t[df_nr].values['mean'] + 3*result_t[df_nr].values['sig'])].plot(x='time', y='counts', kind='scatter', ax=axes[df_nr,0], c='y', marker='x', s=50, label='excluded')
bin_time_df[df_nr][(bin_time_df[df_nr].time < result_t[df_nr].values['mean'] - 3*result_t[df_nr].values['sig']) |
(bin_time_df[df_nr].time > result_t[df_nr].values['mean'] + 3*result_t[df_nr].values['sig'])].plot(x='time', y='counts', kind='scatter', ax=axes[df_nr,1], c='y', marker='x', s=50, label='excluded')
# legend title shows total number of events and reduced number of events
axes[df_nr,0].legend(title='total: {}'.format(bin_time_df[df_nr].counts.sum()),loc='upper right', fontsize=16)
axes[df_nr,1].legend(title='considered: {}'.format(bin_time_df[df_nr].counts.sum()-bin_time_df[df_nr][(bin_time_df[df_nr].time < result_t[df_nr].values['mean'] - 3*result_t[df_nr].values['sig']) |
(bin_time_df[df_nr].time > result_t[df_nr].values['mean'] + 3*result_t[df_nr].values['sig'])].counts.sum()),loc='upper left', fontsize=16)
##############################
### APPLYING ALL CUTS
##############################
# cutting in t: mean +/- 5 sigma
cut_df[df_nr] = df_list[df_nr][(df_list[df_nr].time > (result_t[df_nr].values['mean'] - 5*result_t[df_nr].values['sig']))&
(df_list[df_nr].time < (result_t[df_nr].values['mean'] + 5*result_t[df_nr].values['sig']))]
len1 = cut_df[df_nr].shape[0]
# applying manual cut in x and y:
if df_nr < 2: # only for p1 and p2, not for c
cut_df[df_nr] = cut_df[df_nr][((cut_df[df_nr].x-manual_space_cut[file_name][0])**2 + (cut_df[df_nr].y-manual_space_cut[file_name][2])**2) <
manual_space_cut[file_name][1]*manual_space_cut[file_name][3]]
len2 = cut_df[df_nr].shape[0]
# applying automatic cut in x and y: mean +/- 3 std in an elliptical cut
cut_df[df_nr] = cut_df[df_nr][((cut_df[df_nr].x-cut_df[df_nr].x.mean())**2 + (cut_df[df_nr].y-cut_df[df_nr].y.mean())**2) <
3*cut_df[df_nr].x.std()*3*cut_df[df_nr].y.std()]
len3 = cut_df[df_nr].shape[0]
# applying automatic z-class-cut (= cut by number of ions per event), rejecting events with more than 6 ions to reduce space-charge effects:
cut_df[df_nr] = cut_df[df_nr][cut_df[df_nr].event.isin(cut_df[df_nr].event.value_counts()[cut_df[df_nr].event.value_counts() <= 6].index)]
# printing the reduction of the number of ions per file in each of the cut steps
print('\n{}: data size: {} -> time cut: {} -> manual space cut: {} -> automatic space cut: {} -> z-class-cut: {}\n'.format(pattern[df_nr], df_list[df_nr].shape[0], len1, len2, len3, cut_df[df_nr].shape[0]))
# saves excluded data (allows visual checking later)
excludes_df[df_nr] = pd.concat([df_list[df_nr], cut_df[df_nr]]).drop_duplicates(keep=False).reset_index(drop=True)
plt.savefig('{}-tof.pdf'.format(file_name))
plt.show() | mean = 25.54630 +/- 0.01769
sig = 0.27570 +/- 0.01251
p1: data size: 248 -> time cut: 244 -> manual space cut: 239 -> automatic space cut: 235 -> z-class-cut: 235
mean = 25.55908 +/- 0.01934
sig = 0.32012 +/- 0.01367
p2: data size: 283 -> time cut: 281 -> manual space cut: 266 -> automatic space cut: 265 -> z-class-cut: 265
mean = 25.54692 +/- 0.01284
sig = 0.27694 +/- 0.00908
c: data size: 480 -> time cut: 477 -> manual space cut: 477 -> automatic space cut: 474 -> z-class-cut: 359
| MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
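The unbinned maximum-likelihood fit and the +/- 5 $\sigma$ cut above can also be sketched in a few lines without iminuit. Here is a minimal illustration on synthetic data (the peak position 25.55 µs, the width 0.28 µs and the sample size are made-up values, not from the analysis), using SciPy's Nelder-Mead simplex instead of MIGRAD:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# synthetic ToF sample; peak position and width are made-up illustration values
rng = np.random.default_rng(0)
data_t = rng.normal(loc=25.55, scale=0.28, size=500)

def nll(params, data):
    '''Negative log likelihood of a 1D Gaussian.'''
    mean, sig = params
    return -np.sum(norm.logpdf(data, loc=mean, scale=sig))

# start from the sample mean/std, minimize with a robust simplex
res = minimize(nll, x0=[data_t.mean(), data_t.std()],
               args=(data_t,), method='Nelder-Mead')
mean_fit, sig_fit = res.x

# keep only events within +/- 5 sigma of the fitted peak, as in the cut above
kept = data_t[(data_t > mean_fit - 5 * sig_fit) & (data_t < mean_fit + 5 * sig_fit)]
```

The fitted mean and sigma recover the generating values to within their statistical uncertainty, which is the same logic the notebook applies to the real ToF peak.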
Spot fitting. 2D multivariate Gaussian maximum likelihood estimations of the cleaned pattern 1, pattern 2 and center spot positions are performed using SciPy PDFs and ROOT's MINUIT minimizer (via iminuit). Displayed are all uncut data as blue-transparent points. This allows displaying the density of points by the shade of blue without the need to bin the data (= reducing the information; also, binning is much more time-consuming). The cut data are displayed with a black "x" at the position of the blue point. These points are not considered in the fit (whose result is represented by the red 6-$\sigma$ band) but allow for an additional check of the cutting functions. The scale of the MCP-position plots is given in the time unit of the position-sensitive MCP data. There is no need to convert it into a mm unit since one is only interested in the angle. | %config InlineBackend.figure_format ='retina'
# activate interactive matplotlib plot -> uncomment line below!
# %matplotlib notebook
import pickle, os
from scipy.stats import multivariate_normal, linregress, pearsonr
from scipy.optimize import minimize
import numpy as np
from iminuit import Minuit
# open preanalyzed dataset if existing
if os.path.isfile('data/data-save.p'):
analysis = pickle.load(open('data/data-save.p','rb'))
df_list = [df_p1, df_p2, df_c]
result = [{},{},{}]
root_res = [0,0,0]
parameters = ['meanx', 'meany', 'sigx', 'sigy', 'theta']
fig2, axes2 = plt.subplots(nrows=3, ncols=1, figsize=(7.5, 20))
piicr_scheme_names = ['p1','p2','c']
##############################
### Prepare maximum likelihood estimation
##############################
def Rot(theta):
'''Rotation (matrix) of angle theta to cartesian coordinates.'''
return np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]])
def NLL_2D(meanx, meany, sigx, sigy, theta):
'''Negative log likelihood function for (n=2)-dimensional Gaussian distribution for Minuit.'''
cov = Rot(theta) @ np.array([[np.power(sigx,2),0],[0,np.power(sigy,2)]]) @ Rot(theta).T
return( -np.sum(multivariate_normal.logpdf(x=data,
mean=np.array([meanx, meany]),
cov=cov,
allow_singular=True)) )
def NLL_2D_scipy(param):
'''Negative log likelihood function for (n=2)-dimensional Gaussian distribution for SciPy.'''
meanx, meany, sigx, sigy, theta = param
cov = Rot(theta) @ np.array([[np.power(sigx,2),0],[0,np.power(sigy,2)]]) @ Rot(theta).T
return( -np.sum(multivariate_normal.logpdf(x=data,
mean=np.array([meanx, meany]),
cov=cov,
allow_singular=True)) )
def Start_Par(data):
'''Starting parameter based on simple linear regression and 2D numpy array.'''
# simple linear regression to guess the rotation angle based on slope
slope, intercept, r_value, p_value, std_err = linregress(data[:, 0], data[:, 1])
theta_guess = -np.arctan(slope)
# data rotated based on theta guess
data_rotated_guess = np.dot(Rot(theta_guess), [data[:,0], data[:,1]])
first_guess = np.array([data[:,0].mean()+0.2, # meanx
data[:,1].mean()+0.2, # meany
data_rotated_guess[1].std(), # sigma-x
data_rotated_guess[0].std(), # sigma-y
theta_guess]) # rot. angle based on slope of lin. reg.
# based on a first guess, a minimization based on a robust simplex is performed
start_par = minimize(NLL_2D_scipy, first_guess, method='Nelder-Mead')
return(start_par['x'])
##############################
### Fitting and visualization of P1, P2, C
##############################
for df_nr in range(len(df_list)):
# minimize negative log likelihood function first for the symmetric case
data = cut_df[df_nr][['x', 'y']].to_numpy()
root_res[df_nr] = Minuit(NLL_2D, meanx=Start_Par(data)[0], meany=Start_Par(data)[1],
sigx=Start_Par(data)[2], sigy=Start_Par(data)[3],
theta=Start_Par(data)[4])
root_res[df_nr].errors = (0.1, 0.1, 0.1, 0.1, 0.1) # initial step size
root_res[df_nr].limits =[(None, None), (None, None), (None, None), (None, None), (None, None)] # fit ranges
root_res[df_nr].errordef = Minuit.LIKELIHOOD # MLE definition (instead of Minuit.LEAST_SQUARES)
root_res[df_nr].migrad() # finds minimum of mle function
root_res[df_nr].hesse() # computes errors
# plotting of data, excluded data, reference MCP circle, and fit results
axes2[df_nr].plot(df_list[df_nr].x.to_numpy(),df_list[df_nr].y.to_numpy(),'o',alpha=0.15,label='data',zorder=0)
axes2[df_nr].plot(excludes_df[df_nr].x.to_numpy(), excludes_df[df_nr].y.to_numpy(), 'x k',
label='excluded data',zorder=1)
mcp_circ = mpl.patches.Ellipse((0,0), 1500, 1500, edgecolor='k', fc='None', lw=2)
axes2[df_nr].add_patch(mcp_circ)
axes2[df_nr].scatter(root_res[df_nr].values['meanx'], root_res[df_nr].values['meany'], marker='o', color=col[1], linewidth=0, zorder=2)
sig = mpl.patches.Ellipse((root_res[df_nr].values['meanx'], root_res[df_nr].values['meany']),
3*root_res[df_nr].values['sigx'], 3*root_res[df_nr].values['sigy'],
np.degrees(root_res[df_nr].values['theta']),
edgecolor=col[1], fc='None', lw=2, label='6-$\sigma$ band (fit)', zorder=2)
axes2[df_nr].add_patch(sig)
axes2[df_nr].legend(title='fit(x) = {:1.0f}({:1.0f})\nfit(y) = {:1.0f}({:1.0f})'.format(root_res[df_nr].values['meanx'],root_res[df_nr].errors['meanx'],
root_res[df_nr].values['meany'],root_res[df_nr].errors['meany']),
loc='lower left', fontsize=14)
axes2[df_nr].axis([-750,750,-750,750])
axes2[df_nr].grid(True)
axes2[df_nr].text(-730, 660, '{}: {}'.format(file_name.split('/',1)[-1], piicr_scheme_names[df_nr]))
plt.tight_layout()
# save fit information for each parameter:
# 'parameter': [fitresult, fiterror, Hesse-covariance matrix]
for i in range(len(parameters)):
result[df_nr].update({'{}'.format(parameters[i]): [np.array(root_res[df_nr].values)[i],
np.array(root_res[df_nr].errors)[i],
root_res[df_nr].covariance]})
if 'ground' in file_name:
analysis['ioi_g'][file_name]['fit-{}'.format(piicr_scheme_names[df_nr])] = result[df_nr]
elif 'isomer' in file_name:
analysis['ioi_m'][file_name]['fit-{}'.format(piicr_scheme_names[df_nr])] = result[df_nr]
else:
analysis['ref'][file_name]['fit-{}'.format(piicr_scheme_names[df_nr])] = result[df_nr]
plt.savefig('{}-fit.pdf'.format(file_name))
plt.show()
# save all data using pickle
pickle.dump(analysis, open('data/data-save.p','wb')) | 'LinguisticsPro-Italic.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output.
'LinguisticsPro-Regular.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output.
| MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
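A quick sanity check of the covariance construction used in `NLL_2D`: rotating a diagonal covariance by an angle theta must leave its eigenvalues (the squared semi-axes of the Gaussian) unchanged. A minimal sketch with arbitrary sigma and theta values:

```python
import numpy as np

def rot(theta):
    '''2D rotation matrix, as in the Rot() helper of the notebook.'''
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

sigx, sigy, theta = 2.0, 0.5, np.pi / 6  # arbitrary illustration values

# rotated covariance: R(theta) diag(sigx^2, sigy^2) R(theta)^T
cov = rot(theta) @ np.diag([sigx**2, sigy**2]) @ rot(theta).T

# rotation preserves the eigenvalues, i.e. the squared axis lengths
evals = np.sort(np.linalg.eigvalsh(cov))
```

This is why the fit can use `sigx`, `sigy` and `theta` as independent parameters: the off-diagonal correlation of the covariance matrix is generated entirely by the rotation.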
--- !!! REPEAT CODE ABOVE FOR ALL INDIVIDUAL FILES !!! --- Save fit data to dataframe and *.csv file. Continue here after analyzing all files individually. The following command saves all necessary data and fit information in a `*.csv` file. | calc_df = pd.DataFrame()
for key in analysis.keys():
for subkey in analysis[key].keys():
if '_c_' not in subkey:
calc_df = calc_df.append(pd.DataFrame({'file': subkey,
'p1_x': analysis[key][subkey]['fit-p1']['meanx'][0],
'p1_y': analysis[key][subkey]['fit-p1']['meany'][0],
'p2_x': analysis[key][subkey]['fit-p2']['meanx'][0],
'p2_y': analysis[key][subkey]['fit-p2']['meany'][0],
'c_x': analysis[key][subkey]['fit-c']['meanx'][0],
'c_y': analysis[key][subkey]['fit-c']['meany'][0],
'p1_x_unc': analysis[key][subkey]['fit-p1']['meanx'][1],
'p1_y_unc': analysis[key][subkey]['fit-p1']['meany'][1],
'p2_x_unc': analysis[key][subkey]['fit-p2']['meanx'][1],
'p2_y_unc': analysis[key][subkey]['fit-p2']['meany'][1],
'c_x_unc': analysis[key][subkey]['fit-c']['meanx'][1],
'c_y_unc': analysis[key][subkey]['fit-c']['meany'][1],
'cyc_freq_guess': analysis[key][subkey]['cyc_freq'],
'red_cyc_freq': analysis[key][subkey]['red_cyc_freq'],
'mag_freq': analysis[key][subkey]['mag_freq'],
'cyc_acc_time': analysis[key][subkey]['cyc_acc_time'],
'n_acc': analysis[key][subkey]['n_acc'],
'time_start': pd.to_datetime('{} {}'.format(analysis[key][subkey]['time-info'][0], analysis[key][subkey]['time-info'][1]), format='%m/%d/%Y %H:%M:%S', errors='ignore'),
'time_end': pd.to_datetime('{} {}'.format(analysis[key][subkey]['time-info'][2], analysis[key][subkey]['time-info'][3]), format='%m/%d/%Y %H:%M:%S', errors='ignore')}, index=[0]), ignore_index=True)
calc_df.to_csv('data/analysis-summary.csv')
calc_df | _____no_output_____ | MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Calculate $\nu_c$ from position fits. [1]: https://doi.org/10.1007/s00340-013-5621-0 [2]: https://doi.org/10.1103/PhysRevLett.110.082501 [3]: https://doi.org/10.1007/s10751-019-1601-z Can be run independently from everything above by loading the `analysis-summary.csv` file! A detailed description of the $\nu_c$ calculation can be found in Refs. [1], [2] and [3]. | import pandas as pd
import numpy as np
# load fit-data file, datetime has to be converted
calc_df = pd.read_csv('data/analysis-summary.csv', header=0, index_col=0)
# calculate angle between the P1-vector (P1_x/y - C_x/y) and the P2-vector (P2_x/y - C_x/y)
calc_df['p1p2_angle'] = np.arctan2(calc_df.p1_y - calc_df.c_y, calc_df.p1_x - calc_df.c_x) \
- np.arctan2(calc_df.p2_y - calc_df.c_y, calc_df.p2_x - calc_df.c_x)
# calculate the uncertainty on the angle between the P1/P2 vectors
# see https://en.wikipedia.org/wiki/Atan2
calc_df['p1p2_angle_unc'] = np.sqrt(
( calc_df.p1_x_unc * (calc_df.c_y - calc_df.p1_y) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.p1_y_unc * (calc_df.p1_x - calc_df.c_x) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.p2_x_unc * (calc_df.c_y - calc_df.p2_y) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.p2_y_unc * (calc_df.p2_x - calc_df.c_x) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.c_x_unc *
( -(calc_df.c_y - calc_df.p1_y) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 )
-(calc_df.c_y - calc_df.p2_y) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) ) )**2
+ ( calc_df.c_y_unc *
( (calc_df.p1_x - calc_df.c_x) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 )
+(calc_df.p2_x - calc_df.c_x) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) ) )**2 )
# calculate cyc freq: total phase divided by total time
calc_df['cyc_freq'] = (calc_df.p1p2_angle + 2*np.pi * calc_df.n_acc) / (2*np.pi * calc_df.cyc_acc_time * 0.000001)
calc_df['cyc_freq_unc'] = calc_df.p1p2_angle_unc / (2*np.pi * calc_df.cyc_acc_time * 0.000001)
calc_df.to_csv('data/analysis-summary.csv')
calc_df.head() | _____no_output_____ | MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
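The phase-to-frequency conversion above can be illustrated with hypothetical spot positions (the coordinates, `n_acc` and accumulation time below are made-up values, chosen only to make the arithmetic easy to follow):

```python
import numpy as np

c  = np.array([0.0,  0.0])   # center spot (hypothetical)
p1 = np.array([1.0,  1.0])   # pattern-1 spot (hypothetical)
p2 = np.array([1.0, -1.0])   # pattern-2 spot (hypothetical)

# angle between the P1 and P2 vectors, both taken relative to the center spot
angle = (np.arctan2(p1[1] - c[1], p1[0] - c[0])
         - np.arctan2(p2[1] - c[1], p2[0] - c[0]))

# total accumulated phase = residual angle + full turns; divide by total time
n_acc, cyc_acc_time_us = 100, 1000.0  # turns and accumulation time in microseconds
nu_c = (angle + 2 * np.pi * n_acc) / (2 * np.pi * cyc_acc_time_us * 1e-6)
```

With these numbers the angle is pi/2 (an extra quarter turn), so the frequency comes out as 100.25 turns per millisecond, i.e. 100250 Hz; the notebook applies exactly this formula column-wise to the fitted spot positions.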
Frequency-ratio calculation. [1]: https://doi.org/10.1007/s00340-013-5621-0 [2]: https://doi.org/10.1103/PhysRevLett.110.082501 [3]: https://doi.org/10.1007/s10751-019-1601-z In order to determine the frequency ratio between the ioi and the ref, simultaneous fits with all polynomial degrees possible for the data set are performed. The code calculates the reduced $\chi^2_{red}$ for each fit and returns only the one with a $\chi^2_{red}$ closest to 1. A detailed description of the procedure can be found in Ref. [3]. If problems in the fitting occur, please try to vary the starting-parameter section in lines 125-135 of `~/bin/freq_ratio.py` | import pandas as pd
import numpy as np
from bin.freq_ratio import Freq_ratio
freq = Freq_ratio()
# load fit-data file
calc_df = pd.read_csv('data/analysis-summary.csv', header=0, index_col=0)
# save average time of measurement: t_start+(t_end-t_start)/2
calc_df.time_start = pd.to_datetime(calc_df.time_start)
calc_df.time_end = pd.to_datetime(calc_df.time_end)
calc_df['time'] = calc_df.time_start + (calc_df.time_end - calc_df.time_start)/2
calc_df.to_csv('data/analysis-summary.csv')
# convert avg. time to difference in minutes from first measurement -> allows fitting with small numbers as x values
calc_df['time_delta'] = ((calc_df['time']-calc_df['time'].min())/np.timedelta64(1, 's')/60)
# selecting data for isotopes
df_ioi_g = calc_df[calc_df.file.str.contains('ground')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
df_ioi_m = calc_df[calc_df.file.str.contains('isomer')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
# allows to define a subset of reference frequencies for ground and isomer
df_ref_g = calc_df[calc_df.file.str.contains('ref')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
df_ref_m = calc_df[calc_df.file.str.contains('ref')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
# simultaneous polynomial fit, see https://doi.org/10.1007/s10751-019-1601-z
fit1, fit2, ratio1, ratio_unc1, chi_sq1 = freq.ratio_sim_fit(['ref', 'ioi_g'],
df_ref_g.time_delta.tolist(),
df_ref_g.cyc_freq.tolist(),
df_ref_g.cyc_freq_unc.tolist(),
df_ioi_g.time_delta.tolist(),
df_ioi_g.cyc_freq.tolist(),
df_ioi_g.cyc_freq_unc.tolist())
if len(df_ioi_m) > 0:
fit3, fit4, ratio2, ratio_unc2, chi_sq2 = freq.ratio_sim_fit(['ref', 'ioi_m'],
df_ref_m.time_delta.tolist(),
df_ref_m.cyc_freq.tolist(),
df_ref_m.cyc_freq_unc.tolist(),
df_ioi_m.time_delta.tolist(),
df_ioi_m.cyc_freq.tolist(),
df_ioi_m.cyc_freq_unc.tolist()) | [-10, 685181.1403450029, 1]
Poly-degree: 2
Red.Chi.Sq.: 0.6548141372439115
Corellation: 2.7799756249708774e-12
Ratio fit parameter: 0.6388872125439365 +/- 2.1557485311110636e-09
[1, -10, 685181.1403450029, 1]
Poly-degree: 3
Red.Chi.Sq.: 0.7550362063128235
| MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
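Without the custom `freq_ratio` module, the degree-selection idea can be sketched as follows: fit each polynomial degree, form the reduced chi-square (sum of squared, error-weighted residuals divided by the number of points minus the number of fit parameters), and keep the degree whose value lies closest to 1. The data below are synthetic illustration values, not part of the analysis:

```python
import numpy as np

# synthetic drifting frequency: linear trend plus noise matching the errors
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, x.size)
yerr = np.full_like(x, 0.1)

def red_chi_sq(deg):
    '''Reduced chi-square of a polynomial fit of the given degree.'''
    coeffs = np.polyfit(x, y, deg)
    resid = (y - np.polyval(coeffs, x)) / yerr
    return np.sum(resid**2) / (x.size - (deg + 1))  # dof = N - n_par

# pick the degree whose reduced chi-square is closest to 1
best_deg = min(range(1, 5), key=lambda d: abs(red_chi_sq(d) - 1.0))
```

The real procedure additionally fits the reference and the ion of interest simultaneously with a shared polynomial shape and the ratio as a free parameter, see Ref. [3].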
Frequency-ratio plotting | %config InlineBackend.figure_format ='retina'
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import numpy as np
mpl.rc('font', family='serif', serif='Linguistics Pro') # open source Utopia LaTeX font with greek letters
mpl.rc('text', usetex=False)
mpl.rc('mathtext', fontset='custom',
rm='Linguistics Pro',
it='Linguistics Pro:italic',
bf='Linguistics Pro:bold')
mpl.rcParams.update({'font.size': 18})
# prepare fit data
x1 = np.linspace(min([df_ioi_g.time_delta.min(),df_ref_g.time_delta.min()]),max([df_ioi_g.time_delta.max(),df_ref_g.time_delta.max()]),500)
t1 = pd.date_range(pd.Series([df_ioi_g.time.min(),df_ref_g.time.min()]).min(),pd.Series([df_ioi_g.time.max(),df_ref_g.time.max()]).max(),periods=500)
if len(df_ioi_m) > 0:
x2 = np.linspace(min([df_ioi_m.time_delta.min(),df_ref_m.time_delta.min()]),max([df_ioi_m.time_delta.max(),df_ref_m.time_delta.max()]),500)
t2 = pd.date_range(pd.Series([df_ioi_m.time.min(),df_ref_m.time.min()]).min(),pd.Series([df_ioi_m.time.max(),df_ref_m.time.max()]).max(),periods=500)
fit1_y = [np.polyval(fit1, i) for i in x1]
fit2_y = [np.polyval(fit2, i) for i in x1]
if len(df_ioi_m) > 0:
fit3_y = [np.polyval(fit3, i) for i in x2]
fit4_y = [np.polyval(fit4, i) for i in x2]
#########################
### PLOTTING ground state
#########################
if len(df_ioi_m) > 0:
fig, (ax1, ax3) = plt.subplots(figsize=(9,12),nrows=2, ncols=1)
else:
fig, ax1 = plt.subplots(figsize=(9,6),nrows=1, ncols=1)
ax1.errorbar(df_ref_g.time, df_ref_g.cyc_freq, yerr=df_ref_g.cyc_freq_unc, fmt='o', label='{}'.format(latex_ref), marker='d', c='#1E77B4', ms=10, elinewidth=2.5)
ax1.set_xlabel('Time', fontsize=24, fontweight='bold')
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Frequency (Hz)', fontsize=24, fontweight='bold')
ax1.tick_params('y', colors='#1E77B4')
ax1.plot(t1, fit1_y, ls=(5.5, (5, 1, 1, 1, 1, 1, 1, 1)),c='#1E77B4', label='poly-fit')
# Allowing two axes in one subplot
ax2 = ax1.twinx()
ax2.errorbar(df_ioi_g.time, df_ioi_g.cyc_freq, yerr=df_ioi_g.cyc_freq_unc, fmt='o', color='#D62728', label='{}'.format(latex_ioi_g), fillstyle='none', ms=10, elinewidth=2.5) # green: #2ca02c
ax2.tick_params('y', colors='#D62728')
ax2.plot(t1, fit2_y, ls=(0, (5, 3, 1, 3)),c='#D62728', label='poly-fit')
# adjust the y axes to be the same height
middle_y1 = df_ref_g.cyc_freq.min() + (df_ref_g.cyc_freq.max() - df_ref_g.cyc_freq.min())/2
middle_y2 = df_ioi_g.cyc_freq.min() + (df_ioi_g.cyc_freq.max() - df_ioi_g.cyc_freq.min())/2
range_y1 = df_ref_g.cyc_freq.max() - df_ref_g.cyc_freq.min() + 2 * df_ref_g.cyc_freq_unc.max()
range_y2 = df_ioi_g.cyc_freq.max() - df_ioi_g.cyc_freq.min() + 2 * df_ioi_g.cyc_freq_unc.max()
ax1.set_ylim(middle_y1 - 1.3 * max([range_y1, middle_y1*range_y2/middle_y2])/2, middle_y1 + 1.1 * max([range_y1, middle_y1*range_y2/middle_y2])/2) # outliers only
ax2.set_ylim(middle_y2 - 1.1 * max([middle_y2*range_y1/middle_y1, range_y2])/2, middle_y2 + 1.3 * max([middle_y2*range_y1/middle_y1, range_y2])/2) # most of the data
# plotting only hours without the date
ax2.xaxis.set_major_formatter(mpl.dates.DateFormatter('%H:%M'))
ax2.xaxis.set_minor_locator(mpl.dates.HourLocator())
handles1, labels1 = ax1.get_legend_handles_labels()
handles2, labels2 = ax2.get_legend_handles_labels()
handles_g = [handles1[1], handles2[1], (handles1[0], handles2[0])]
labels_g = [labels1[1], labels2[1], labels1[0]]
plt.legend(handles=handles_g, labels=labels_g,fontsize=18,title='Ratio: {:1.10f}\n $\\pm${:1.10f}'.format(ratio1, ratio_unc1), loc='upper right')
plt.text(0.03,0.03,'poly-{}: $\chi^2_{{red}}$ {:3.2f}'.format(len(fit1)-1, chi_sq1),transform=ax1.transAxes)
###########################
### PLOTTING isomeric state
###########################
if len(df_ioi_m) > 0:
ax3.errorbar(df_ref_m.time, df_ref_m.cyc_freq, yerr=df_ref_m.cyc_freq_unc, fmt='o', label='{}'.format(latex_ref), marker='d', c='#1E77B4', ms=10, elinewidth=2.5)
ax3.set_xlabel('Time', fontsize=24, fontweight='bold')
# Make the y-axis label, ticks and tick labels match the line color.
ax3.set_ylabel('Frequency (Hz)', fontsize=24, fontweight='bold')
ax3.tick_params('y', colors='#1E77B4')
ax3.plot(t2, fit3_y, ls=(5.5, (5, 1, 1, 1, 1, 1, 1, 1)),c='#1E77B4', label='poly-fit')
# Allowing two axes in one subplot
ax4 = ax3.twinx()
ax4.errorbar(df_ioi_m.time, df_ioi_m.cyc_freq, yerr=df_ioi_m.cyc_freq_unc, fmt='o', color='#D62728', label='{}'.format(latex_ioi_m), fillstyle='none', ms=10, elinewidth=2.5) # green: #2ca02c
ax4.tick_params('y', colors='#D62728')
ax4.plot(t2, fit4_y, ls=(0, (5, 3, 1, 3)),c='#D62728', label='poly-fit')
# adjust the y axes to be the same height
middle_y3 = df_ref_m.cyc_freq.min() + (df_ref_m.cyc_freq.max() - df_ref_m.cyc_freq.min())/2
middle_y4 = df_ioi_m.cyc_freq.min() + (df_ioi_m.cyc_freq.max() - df_ioi_m.cyc_freq.min())/2
range_y3 = df_ref_m.cyc_freq.max() - df_ref_m.cyc_freq.min() + 2 * df_ref_m.cyc_freq_unc.max()
range_y4 = df_ioi_m.cyc_freq.max() - df_ioi_m.cyc_freq.min() + 2 * df_ioi_m.cyc_freq_unc.max()
ax3.set_ylim(middle_y3 - 1.3 * max([range_y3, middle_y3*range_y4/middle_y4])/2, middle_y3 + 1.1 * max([range_y3, middle_y3*range_y4/middle_y4])/2) # outliers only
ax4.set_ylim(middle_y4 - 1.1 * max([middle_y4*range_y3/middle_y3, range_y4])/2, middle_y4 + 1.3 * max([middle_y4*range_y3/middle_y3, range_y4])/2) # most of the data
# plotting only hours without the date
ax4.xaxis.set_major_formatter(mpl.dates.DateFormatter('%H:%M'))
ax4.xaxis.set_minor_locator(mpl.dates.HourLocator())
handles3, labels3 = ax3.get_legend_handles_labels()
handles4, labels4 = ax4.get_legend_handles_labels()
handles_m = [handles3[1], handles4[1], (handles3[0], handles4[0])]
labels_m = [labels3[1], labels4[1], labels3[0]]
plt.legend(handles=handles_m, labels=labels_m, fontsize=18,title='Ratio: {:1.10f}\n $\\pm${:1.10f}'.format(ratio2, ratio_unc2), loc='upper right')
plt.text(0.03,0.03,'poly-{}: $\chi^2_{{red}}$ {:3.2f}'.format(len(fit3)-1, chi_sq2),transform=ax3.transAxes)
plt.tight_layout()
plt.savefig('data/freq-ratios.pdf')
plt.show() | 'LinguisticsPro-Bold.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output.
'LinguisticsPro-Italic.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output.
'LinguisticsPro-Regular.otf' can not be subsetted into a Type 3 font. The entire font will be embedded in the output.
| MIT | pi-icr-analysis.ipynb | jonas-ka/pi-icr-analysis |
Introduction to Jupyter Notebooks and Text Processing in Python. This 'document' is a Jupyter notebook. It allows you to combine explanatory **text** and **code** that executes to produce results you can see on the same page. Notebook Basics. Text cells: The box this text is written in is called a *cell*. It is a *text cell* written in a very simple markup language called 'Markdown'. Here is a useful [Markdown cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). You can edit and then run cells to produce a result. Running this text cell produces formatted text. Code cells: The other main kind of cell is a *code cell*. The cell immediately below this one is a code cell. Running a code cell runs the code in the cell and produces a result. | # This is a comment in a code cell. Comments start with a # symbol. They are ignored and do not do anything.
# This box is a code cell. When this cell is run, the code below will execute and produce a result
3 + 4 | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Simple String Manipulation in Python. This section introduces some very basic things you can do in Python to create and manipulate *strings*. A string is a simple sequence of characters, like `flabbergast`. This introduction is limited to those things that may be useful to know in order to understand the *Bughunt!* data mining in the following two notebooks. Creating and Storing Strings in Variables. Strings are simple to create in Python. You can simply write some characters in quote marks. | 'Butterflies are important as pollinators.' | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
In order to do something useful with this string, other than print it out, we need to store it in a *variable* by using the assignment operator `=` (equals sign). Whatever is on the right-hand side of the `=` is stored into a variable with the name on the left-hand side. | # my_variable is the variable on the left
# 'Butterflies are important as pollinators.' is the string on the right that is stored in the variable my_variable
my_variable = 'Butterflies are important as pollinators.' | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Notice that nothing is printed to the screen. That's because the string is stored in the variable `my_variable`. In order to see what is inside the variable `my_variable` we can simply write `my_variable` in a code cell, run it, and the interpreter will print it out for us. | my_variable | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Manipulating Bits of Strings. Accessing Individual Characters: A string is just a sequence (or list) of characters. You can access **individual characters** in a string by specifying which ones you want in square brackets. If you want the first character you specify `1`. | my_variable[1] | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Hang on a minute! Why did it give us `u` instead of `B`? In programming, everything tends to be *zero indexed*, which means that things are counted from 0 rather than 1. Thus, in the example above, `1` gives us the *second* character in the string. If you want the first character in the string, you need to specify the index `0`! | my_variable[0] | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
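Python also accepts *negative* indices, which count from the end of the string, so `-1` always gives the last character:

```python
my_variable = 'Butterflies are important as pollinators.'

last_char = my_variable[-1]       # '.', the full stop at the end
second_to_last = my_variable[-2]  # 's'
```

This is handy when you do not know how long a string is but still want to look at its end.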
Accessing a Range of Characters. You can also pick out a **range of characters** from within a string, by giving the *start index* followed by the *end index* with a colon (`:`) in between. The example below gives us the character at index `0` all the way up to, *but not including*, the character at index `20`. | my_variable[0:20] | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
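Either bound of a slice can be omitted: a missing start defaults to `0` and a missing end defaults to the end of the string:

```python
my_variable = 'Butterflies are important as pollinators.'

first_word = my_variable[:11]  # same as my_variable[0:11]
the_rest = my_variable[11:]    # from index 11 to the end
```

The two pieces together always rebuild the original string, which is a good way to convince yourself that the end index is *not* included in the slice.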
Changing Whole Strings with Functions. Python has some built-in *functions* that allow you to change a whole string at once. You can change all characters to lowercase or uppercase: | my_variable.lower()
my_variable.upper() | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
NB: These functions do not change the original string but create a new one. Our original string is still the same as it was before: | my_variable | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
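Two more built-in string functions that behave the same way, creating a new string or list and leaving the original untouched, are `replace()` and `split()`:

```python
my_variable = 'Butterflies are important as pollinators.'

swapped = my_variable.replace('Butterflies', 'Bees')  # new string with the word swapped
words = my_variable.split()                           # list of words, split on whitespace
```

`split()` is especially useful for text processing, since it turns a sentence into a list of words that you can then count or filter.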
Testing Strings. You can also test a string to see if it passes some test, e.g. is the string all alphabetic characters only? | my_variable.isalpha() | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Does the string have the letter `p` in it? | 'p' in my_variable | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
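Two more simple tests in the same spirit are `startswith()`, which checks how a string begins, and `count()`, which counts how often a character (or substring) appears:

```python
my_variable = 'Butterflies are important as pollinators.'

starts = my_variable.startswith('Butter')  # True
n_t = my_variable.count('t')               # how many times 't' appears
```

Counting characters or substrings like this is a small first step towards the word-frequency counting used in text mining.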
Lists of Strings. Another important thing we can do with strings is creating a list of strings by listing them inside square brackets `[]`: | my_list = ['Butterflies are important as pollinators',
'Butterflies feed primarily on nectar from flowers',
'Butterflies are widely used in objects of art']
my_list | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Manipulating Lists of Strings. Just like with strings, we can access individual items inside a list by index number: | my_list[0] | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
And we can access a range of items inside a list by *slicing*: | my_list[0:2] | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Advanced: Creating Lists of Strings with List Comprehensions. We can create new lists in an elegant way by combining some of the things we have covered above. Here is an example where we have taken our original list `my_list` and created a new list `new_list` by going over each string in the list: | new_list = [string for string in my_list]
new_list | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
Why do this? If we combine it with a test, we can have a list that only contains strings with the letter `p` in them: | new_list_p = [string for string in my_list if 'p' in string]
new_list_p | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
This is a very powerful way to quickly create lists. We can even change all the strings to uppercase at the same time! | new_list_p_upper = [string.upper() for string in my_list if 'p' in string]
new_list_p_upper | _____no_output_____ | MIT | notebooks/1-intro-to-strings.ipynb | mchesterkadwell/bughunt-analysis |
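The comprehension above is shorthand for an ordinary for loop with an if test; both build exactly the same list:

```python
my_list = ['Butterflies are important as pollinators',
           'Butterflies feed primarily on nectar from flowers',
           'Butterflies are widely used in objects of art']

# the one-line version
upper_p = [s.upper() for s in my_list if 'p' in s]

# the equivalent for-loop version
upper_p_loop = []
for s in my_list:
    if 'p' in s:
        upper_p_loop.append(s.upper())

print(upper_p == upper_p_loop)  # True
print(len(upper_p))             # 2 of the 3 strings contain a 'p'
```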
This notebook will illustrate how to access DeepLabCut (DLC) results for IBL sessions and how to create short videos with DLC labels printed onto them, as well as the wheel angle, starting by downloading data from the IBL flatiron server. It requires ibllib, a ONE account, and the following script: https://github.com/int-brain-lab/iblapps/blob/master/DLC_labeled_video.py | run '/home/mic/Dropbox/scripts/IBL/DLC_labeled_video.py'
one = ONE() | Connected to https://alyx.internationalbrainlab.org as michael.schartner
| MIT | dlc/Example_DLC_access.ipynb | GaelleChapuis/iblapps |
Let's first find IBL ephys sessions with DLC results: | eids= one.search(task_protocol='ephysChoiceworld', dataset_types=['camera.dlc'], details=False)
len(eids) | _____no_output_____ | MIT | dlc/Example_DLC_access.ipynb | GaelleChapuis/iblapps |
For a particular session, we can create a short labeled video by calling the function Viewer, specifying the eid of the desired session, the video type (there's 'left', 'right' and 'body' videos), and a range of trials for which the video should be created. Most sessions have around 700 trials. In the following, this is illustrated with session '3663d82b-f197-4e8b-b299-7b803a155b84', video type 'left', trials range [10,13] and without a zoom for the eye, such that nose, paw and tongue tracking is visible. The eye-zoom option shows only the four points delineating the pupil edges, which are too small to be visible in the normal view. Note that this automatically starts the download of the video from flatiron (in case it is not locally stored already), which may take a while since these videos are about 8 GB in size. | eid = eids[6]
Viewer(eid, 'left', [10,13], save_video=True, eye_zoom=False) | Connected to https://alyx.internationalbrainlab.org as michael.schartner
Connected to https://alyx.internationalbrainlab.org as michael.schartner
| MIT | dlc/Example_DLC_access.ipynb | GaelleChapuis/iblapps |
As usual when downloading IBL data from flatiron, the dimensions are listed. Below is one frame of the video for illustration. One can see one point for each paw, two points for the edges of the tongue, one point for the nose and there are 4 points close together around the pupil edges. All points for which the DLC network had a confidence probability of below 0.9 are hidden. For instance when the mouse is not licking, there is no tongue and so the network cannot detect it, and no points are shown. The script will display and save the short video in your local folder.  Sections of the script DLC_labeled_video.py can be recycled to analyse DLC traces. For example let's plot the x coordinate for the right paw in a 'left' cam video for a given trial. | one = ONE()
dataset_types = ['camera.times','trials.intervals','camera.dlc']
video_type = 'left'
# get paths to load in data
D = one.load('3663d82b-f197-4e8b-b299-7b803a155b84',dataset_types=dataset_types, dclass_output=True)
alf_path = Path(D.local_path[0]).parent.parent / 'alf'
video_data = alf_path.parent / 'raw_video_data'
# get trials start and end times, camera time stamps (one for each frame, synced with DLC trace)
trials = alf.io.load_object(alf_path, '_ibl_trials')
cam0 = alf.io.load_object(alf_path, '_ibl_%sCamera' % video_type)
cam1 = alf.io.load_object(video_data, '_ibl_%sCamera' % video_type)
cam = {**cam0,**cam1}
# for each tracked point there's x,y in [px] in the frame and a likelihood that indicates the network's confidence
cam.keys() | _____no_output_____ | MIT | dlc/Example_DLC_access.ipynb | GaelleChapuis/iblapps |
There is also 'times' in this dictionary, the time stamps for each frame that we'll use to sync it with other events in the experiment. Let's get rid of it briefly to have only DLC points and set coordinates to nan when the likelihood is below 0.9. | Times = cam['times']
del cam['times']
points = np.unique(['_'.join(x.split('_')[:-1]) for x in cam.keys()])
cam['times'] = Times
# A helper function to find closest time stamps
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return idx | _____no_output_____ | MIT | dlc/Example_DLC_access.ipynb | GaelleChapuis/iblapps |
Let's pick, say, the 5th trial and find all DLC traces for it. | frame_start = find_nearest(cam['times'], trials['intervals'][4][0])
frame_stop = find_nearest(cam['times'], trials['intervals'][4][1])
XYs = {}
for point in points:
x = np.ma.masked_where(
cam[point + '_likelihood'] < 0.9, cam[point + '_x'])
x = x.filled(np.nan)
y = np.ma.masked_where(
cam[point + '_likelihood'] < 0.9, cam[point + '_y'])
y = y.filled(np.nan)
XYs[point] = np.array(
[x[frame_start:frame_stop], y[frame_start:frame_stop]])
import matplotlib.pyplot as plt
plt.plot(cam['times'][frame_start:frame_stop],XYs['paw_r'][0])
plt.xlabel('time [sec]')
plt.ylabel('x location of right paw [px]') | _____no_output_____ | MIT | dlc/Example_DLC_access.ipynb | GaelleChapuis/iblapps |
Week 3 - Functions The real power in any programming language is the **Function**.A function is:* a little block of script (one line or many) that performs specific task or a series of tasks.* reusable and helps us make our code DRY.* triggered when something "invokes" or "calls" it.* ideally modular – it performs a narrow task and you call several functions to perform more complex tasks. What we'll cover today:* Simple function* Return statements* | ## Build a function called myFunction that adds 2 numbers together
## it should print "The total is (whatever the number is)!"
## build it here
## Call myFunction using 4 and 5 as the arguments
## Call myFunction using 10 and 2 as the arguments
## you might forget what arguments are needed for the function to work.
## you can add notes that appear on shift-tab as you call the function.
## write it here
## test it on 3 and 4
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
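One possible way to fill in the blanks above (the function name and message come from the exercise prompt; the exact wording is mine):

```python
def myFunction(number1, number2):
    """Adds two numbers together and prints the total."""
    total = number1 + number2
    print(f"The total is {total}!")

myFunction(4, 5)   # The total is 9!
myFunction(10, 2)  # The total is 12!
```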
To use or not use functions?Let's compare the two options with a simple example: | ## You have a list of numbers.
mylist1 = [1, -5, 22, -44.2, 33, -45]
## Turn each number into an absolute number.
## a for loop works perfectly fine here.
## The problem is that your project keeps generating more lists.
## Each list of numbers has to be turned into absolute numbers
mylist2 = [-56, -34, -75, -111, -22]
mylist3 = [-100, -200, 100, -300, -100]
mylist4 = [-23, -89, -11, -45, -27]
mylist5 = [0, 1, 2, 3, 4, 5] | _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
DRY Do you keep writing for loops for each list? No, that's a lot of repetition! DRY stands for "Don't Repeat Yourself" | ## Instead we write a function that takes a list,
## converts each list item to an absolute number,
## and prints out the number
## Try swapping out different lists into the function:
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
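A sketch of the function described above (the name `print_absolutes` is my own; any name works):

```python
def print_absolutes(numbers):
    """Prints the absolute value of every number in a list."""
    for number in numbers:
        print(abs(number))

mylist1 = [1, -5, 22, -44.2, 33, -45]
print_absolutes(mylist1)  # works the same for mylist2, mylist3, ...
```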
Timesaver Imagine for a moment that your editor tells you that the calculation needs to be updated. Instead of needing the absolute number, you need the absolute number minus 5. Having used multiple for loops, you'd have to change each one. What if you miss one or two? Either way, it's a chore. With functions, you just revise the function and the update runs everywhere. | ## So if an editor says to actually multiply the absolute number by 1_000_000,
## Try swapping out different lists into the function:
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
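With the logic in one function, the editor's change becomes a one-line edit rather than a hunt through many loops. A sketch, with a hypothetical name:

```python
def print_absolutes_millions(numbers):
    """Same function, revised once: absolute value times 1,000,000."""
    for number in numbers:
        print(abs(number) * 1_000_000)

print_absolutes_millions([-2, 3])
```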
Return Statements So far we have only printed out values processed by a function. But we really want to retain the value the function creates. We can then pass that value to other parts of our calculations and code. | ## Simple example
## A function that adds two numbers together and prints the value:
## call the function with the numbers 2 and 4
## let's try to save it in a variable called myCalc
## Print myCalc. What does it hold?
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
The return Statement | ## Tweak our function by adding a return statement
## instead of printing a value we want to return a value (or values).
## call the function add_numbers_ret
## and store in variable called myCalc
## print myCalc
## What type is myCalc?
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
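A sketch of what the completed cell above might look like:

```python
def add_numbers_ret(number1, number2):
    """Returns (rather than prints) the sum of two numbers."""
    return number1 + number2

myCalc = add_numbers_ret(2, 4)
print(myCalc)        # 6
print(type(myCalc))  # <class 'int'>
```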
Return multiple values | ## demo function
name,age,country = getPerson("David", 35, "France")
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
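The demo function `getPerson` is left blank above; a minimal version that supports the tuple-unpacking call might look like this:

```python
def getPerson(name, age, country):
    """Returns several values at once (Python packs them into a tuple)."""
    return name, age, country

name, age, country = getPerson("David", 35, "France")
print(name, age, country)  # David 35 France
```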
Let's revise our earlier absolute values converter with a return statement Here is the earlier version: | ## Here it is revised with a return statement
## Let's actually make that a list comprehension version of the function:
## Let's test it by storing the return value in variable x
## What type of data object is it?
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
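A possible list-comprehension version with a return statement (the name `return_absolutes_lc` comes from the surrounding cells):

```python
def return_absolutes_lc(numbers):
    """Returns a new list of the absolute values."""
    return [abs(number) for number in numbers]

x = return_absolutes_lc([1, -5, 22, -44.2, 33, -45])
print(x)        # [1, 5, 22, 44.2, 33, 45]
print(type(x))  # <class 'list'>
```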
Make a function more flexible and universal* Currently, we have a function that takes ONLY a list as an argument.* We'd have to write another one for a single number argument. | ## try using return_absolutes_lc on a single number like -10
## it will break
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
Universalize our absolute numbers function | ## call the function make_abs
## try it on -10
## Try it on mylist3 - it will break!
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
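`make_abs` as used above presumably takes a single number, e.g.:

```python
def make_abs(number):
    """Returns the absolute value of a single number (not a list)."""
    return abs(number)

print(make_abs(-10))  # 10
```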
We can use the ```map()``` function to tackle this problem.```map()``` takes 2 arguments: a ```function``` and an ```iterable```, like a list. | ## try it on make_abs and mylist3
## save it into a list
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
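A sketch of the `map()` call described above (`make_abs` is redefined here so the snippet is self-contained):

```python
def make_abs(number):
    return abs(number)

mylist3 = [-100, -200, 100, -300, -100]

# map() applies make_abs to every item of mylist3; list() collects the results
abs_mylist3 = list(map(make_abs, mylist3))
print(abs_mylist3)  # [100, 200, 100, 300, 100]
```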
```map()``` also works for multiple iterables. Remember our ```add_numbers_ret``` function? | ## here it is again:
def add_numbers_ret(number1, number2):
return (number1 + number2)
## two lists
a_even = [2, 4, 6, 8]
a_odd = [1, 3, 5, 7, 9] ## note this has one more item in the list.
## run map on a_even and a_odd
b = list(map(add_numbers_ret, a_even, a_odd))
b | _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
Functions that call other functions | ## let's create a function that returns the square of a number
## what is 9 squared?
| _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
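The squaring function could be as simple as:

```python
def square(number):
    """Returns the square of a number."""
    return number ** 2

print(square(9))  # 81
```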
Making a point here with a simple exampleLet's say we want to add 2 numbers together and then square that result.Instead of writing one "complex" function, we can call on our modular functions. | ## a function that calls our modular functions
## call make_point() on 2 and 5
make_point(2,5) | _____no_output_____ | MIT | in-class/week-3-B-defined-functions-BLANKS.ipynb | jchapamalacara/fall21-students-practical-python |
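One way `make_point` could be built from the modular functions (both helpers are redefined here so the snippet stands alone):

```python
def add_numbers_ret(number1, number2):
    return number1 + number2

def square(number):
    return number ** 2

def make_point(number1, number2):
    """Adds two numbers, then squares the result."""
    return square(add_numbers_ret(number1, number2))

print(make_point(2, 5))  # 49
```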
piston example with explicit Euler scheme | %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as anim
import numpy as np
import sys
sys.path.insert(0, './code')
import ideal_gas | _____no_output_____ | MIT | piston_animation_euler.ipynb | MarkusLohmayer/master-thesis-code |
physical parameters | # length of cylinder
l = 0.1
# radius of cylinder
r = 0.05
# thickness of wall
w = 0.006
# derived geometrical data
r2 = 2 * r # diameter of cylinder
w2 = w / 2 # halved thickness of wall
l2 = l - w2
A = r**2 * np.pi # cross-sectional area
def get_v_1(q):
"""first volume"""
return A * (q - w2)
def get_v_2(q):
"""second volume"""
return A * (l2 - q)
# densities of aluminium and copper [kg/m^3]
m_Al = 2700.0
m_Cu = 8960.0
# mass of piston
m = m_Cu * A * w
# thermal conductivities of aluminium and copper [W/(m*K)]
κ_Al = 237.0
κ_Cu = 401.0
# thermal conduction coefficient
α = κ_Cu * A / w
m_inv = 1 / m | _____no_output_____ | MIT | piston_animation_euler.ipynb | MarkusLohmayer/master-thesis-code |
initial conditions: determine $n_1$, $n_2$, $s_1$, $s_2$ | # wanted conditions
v_1 = v_2 = get_v_1(l/2)
θ_1 = 273.15 + 25.0
π_1 = 1.5 * 1e5
θ_2 = 273.15 + 20.0
π_2 = 1.0 * 1e5
from scipy.optimize import fsolve
n_1 = fsolve(lambda n : ideal_gas.S_π(ideal_gas.U2(θ_1, n), v_1, n) - π_1, x0=2e22)[0]
s_1 = ideal_gas.S(ideal_gas.U2(θ_1, n_1), v_1, n_1)
# check temperature
ideal_gas.U_θ(s_1, v_1, n_1) - 273.15
# check pressure
ideal_gas.U_π(s_1, v_1, n_1) * 1e-5
n_2 = fsolve(lambda n : ideal_gas.S_π(ideal_gas.U2(θ_2, n), v_2, n) - π_2, x0=2e22)[0]
s_2 = ideal_gas.S(ideal_gas.U2(θ_2, n_2), v_2, n_2)
# check temperature
ideal_gas.U_θ(s_2, v_2, n_2) - 273.15
# check pressure
ideal_gas.U_π(s_2, v_2, n_2) * 1e-5
x_0 = l/2, 0, s_1, s_2 | _____no_output_____ | MIT | piston_animation_euler.ipynb | MarkusLohmayer/master-thesis-code |
simulation | def set_state(data, i, x):
q, p, s_1, s_2 = x
data[i, 0] = q
data[i, 1] = p
data[i, 2] = v = m_inv * p
data[i, 3] = v_1 = get_v_1(q)
data[i, 4] = π_1 = ideal_gas.U_π(s_1, v_1, n_1)
data[i, 5] = s_1
data[i, 6] = θ_1 = ideal_gas.U_θ(s_1, v_1, n_1)
data[i, 7] = v_2 = get_v_2(q)
data[i, 8] = π_2 = ideal_gas.U_π(s_2, v_2, n_2)
data[i, 9] = s_2
data[i, 10] = θ_2 = ideal_gas.U_θ(s_2, v_2, n_2)
data[i, 11] = E_kin = 0.5 * m_inv * p**2
data[i, 12] = u_1 = ideal_gas.U(s_1, v_1, n_1)
data[i, 13] = u_2 = ideal_gas.U(s_2, v_2, n_2)
data[i, 14] = E = E_kin + u_1 + u_2
data[i, 15] = S = s_1 + s_2
def get_state(data, i):
return data[i, (0, 1, 5, 9)]
def rhs(x):
"""right hand side of the explicit system
of differential equations
"""
q, p, s_1, s_2 = x
v_1 = get_v_1(q)
v_2 = get_v_2(q)
π_1 = ideal_gas.U_π(s_1, v_1, n_1)
π_2 = ideal_gas.U_π(s_2, v_2, n_2)
θ_1 = ideal_gas.U_θ(s_1, v_1, n_1)
θ_2 = ideal_gas.U_θ(s_2, v_2, n_2)
return np.array((m_inv*p, A*(π_1-π_2), α*(θ_2-θ_1)/θ_1, α*(θ_1-θ_2)/θ_2))
t_f = 1.0
dt = 1e-4
steps = int(t_f // dt)
print(f'steps={steps}')
t = np.linspace(0, t_f, num=steps)
dt = t[1] - t[0]
data = np.empty((steps, 16), dtype=float)
set_state(data, 0, x_0)
x_old = get_state(data, 0)
for i in range(1, steps):
x_new = x_old + dt * rhs(x_old)
set_state(data, i, x_new)
x_old = x_new
θ_min = np.min(data[:, (6,10)])
θ_max = np.max(data[:, (6,10)])
# plot transient
fig, ax = plt.subplots(dpi=200)
ax.set_title("piston position q")
ax.plot(t, data[:, 0]);
fig, ax = plt.subplots(dpi=200)
ax.set_title("total entropy S")
ax.plot(t, data[:, 15]);
fig, ax = plt.subplots(dpi=200)
ax.set_title("total energy E")
ax.plot(t, data[:, 14]); | _____no_output_____ | MIT | piston_animation_euler.ipynb | MarkusLohmayer/master-thesis-code |
Installation | !pip install git+https://github.com/gbolmier/funk-svd
from funk_svd.dataset import fetch_ml_ratings
from funk_svd import SVD
from sklearn.metrics import mean_absolute_error
import pandas as pd
ds_ratings = pd.read_csv("../ml-latest-small/ratings.csv")
ds_movies = pd.read_csv("../ml-latest-small/movies.csv")
# funk-svd expects the user column to be named u_id and the item column i_id, so rename them
df = ds_ratings.rename(
{
"userId": "u_id",
"movieId": "i_id"
},
axis=1
)
df
# Use 80% for training, 10% as the validation set, and 10% as the test set
train = df.sample(frac=0.8, random_state=7)
val = df.drop(train.index.tolist()).sample(frac=0.5, random_state=8)
test = df.drop(train.index.tolist()).drop(val.index.tolist())
# Train the SVD model. n_factors corresponds to the k of the SVD
svd = SVD(lr=0.001, reg=0.005, n_epochs=100, n_factors=15, early_stopping=True,
shuffle=False, min_rating=1, max_rating=5)
svd.fit(X=train, X_val=val)
# Evaluate the trained model on the test set.
pred = svd.predict(test)
mae = mean_absolute_error(test['rating'], pred)
print(f'Test MAE: {mae:.2f}')
user_ratings = ds_ratings.pivot(index="userId", columns="movieId", values="rating")
def get_user_real_score(user_id, item_id):
return user_ratings.loc[user_id, item_id]
def get_user_unseen_ranks(user_id, max_rank=100):
# Use the predict_pair method to
# compute a predicted rating for every movie for this user.
movie_ids = df.i_id.unique()
rec = pd.DataFrame(
[{
"id": id,
"recommendation_score": svd.predict_pair(user_id, id),
"real_score": get_user_real_score(user_id, id)
}
for id in movie_ids
]
)
# Exclude movies the user has already seen
user_seen_movies = train[train.u_id == user_id]
rec = rec[~rec.id.isin(user_seen_movies.i_id)]
rec.sort_values("recommendation_score", ascending=False, inplace=True)
# Keep only the top max_rank entries
if max_rank is not None:
rec = rec.head(max_rank)
# Add the rank as a column
rec["rank"] = range(1, len(rec) + 1)
# Get the IDs of movies the user actually watched but that were not in train,
# then filter the recommendations to those movies to check at which rank each was recommended
user_unseen_movies = pd.concat([val, test], axis=0)
user_unseen_movies = user_unseen_movies[user_unseen_movies.u_id == user_id].i_id
rec = rec[rec.id.isin(user_unseen_movies)]
rec.index = rec.id
del rec["id"]
# Join with the movie metadata for display.
rec = ds_movies.merge(rec, left_on="movieId", right_index=True)
rec.sort_values("rank", inplace=True)
top_k_accuracy = len(rec) / len(user_unseen_movies)
return rec, top_k_accuracy
from IPython.display import display
user_ids = df.u_id.unique()[:10]
total_acc = 0
for uid in user_ids:
top100, acc = get_user_unseen_ranks(uid)
total_acc += acc
print("User: ", uid, "TOP 100 accuracy: ", round(acc, 2))
display(top100)
total_acc / len(user_ids) | User: 1 TOP 100 accuracy: 0.27
| MIT | notebooks/FunkSVD-lib.ipynb | HeegyuKim/RecSys-MovieLens100k |
print("Number of records belonging to black individuals: {}".format(train_adult_black.shape[0]))
sns.set(style="dark", rc={'figure.figsize':(11.7,8.27)})
sns.countplot(x="sex",
palette="Paired", edgecolor=".6",
data=train_adult_black)
#Regarding sex, we have an equal representation of women and men in the black population of the dataset
#Using the YData synthetic data lib to generate 3000 new individuals for the black population
synth_model = synthetic.SynthTabular()
synth_model.fit(adult_black)
synth_data = synth_model.sample(n_samples=3000)
synth_data = pd.read_csv('synth_data.csv', index_col=[0])
synth_data = synth_data.drop('education.num', axis=1)
synth_data = pd.concat([synth_data[synth_data['income']=='>50K'],synth_data[synth_data['income']=='<=50K'][:1000]])
synth_data.describe()
#Now combine both datasets
test_adult['income'] = income_test
adult_combined = synth_data.append(test_adult).sample(frac=1)
#Let's check again how we are doing on the class balance for the race variable
sns.set(style="dark", rc={'figure.figsize':(11.7,8.27)})
sns.countplot(x="race",
palette="Paired", edgecolor=".6",
data=adult_combined)
#Auxiliary function to encode the categorical variables
import logging
import numpy as np
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, accuracy_score, average_precision_score
def numerical_encoding(df, cat_cols=[], ord_cols=[]):
try:
assert isinstance(df, pd.DataFrame)
except AssertionError as e:
logging.error('The df input object must a Pandas dataframe. This action will not be executed.')
return
ord_cols_val = None
cat_cols_val = None
dummies = None
cont_cols = list(set(df.columns) - set(cat_cols+ord_cols))
cont_vals = df[cont_cols].values
if len(ord_cols) > 0:
ord_cols_val = df[ord_cols].values
label_encoder = LabelEncoder()
ord_encoded = label_encoder.fit_transform(ord_cols_val)
if len(cat_cols) > 0:
cat_cols_val = df[cat_cols].values
hot_encoder = OneHotEncoder()
cat_encoded = hot_encoder.fit_transform(cat_cols_val).toarray()
dummies = []
for i, cat in enumerate(hot_encoder.categories_):
for j in cat:
dummies.append(cat_cols[i]+'_'+str(j))
if ord_cols_val is not None and cat_cols_val is not None:
encoded = np.hstack([cont_vals, ord_encoded, cat_encoded])
columns = cont_cols+ord_cols+dummies
elif cat_cols_val is not None:
encoded = np.hstack([cont_vals, cat_encoded])
columns = cont_cols+ord_cols+dummies
else:
encoded = cont_vals
columns = cont_cols
return pd.DataFrame(encoded, columns=columns), dummies
#validation functions
def score_estimators(estimators, x_test, y_test):
#f1_score average='micro'
scores = {type(clf).__name__: f1_score(y_test, clf.predict(x_test), average='micro') for clf in estimators}
return scores
def fit_estimators(estimators, data_train, y_train):
estimators_fit = []
for i, estimator in enumerate(estimators):
estimators_fit.append(estimator.fit(data_train, y_train))
return estimators_fit
def estimator_eval(data, y, cat_cols=[]):
def order_cols(df):
cols = sorted(df.columns.tolist())
return df[cols]
data,_ = numerical_encoding(data, cat_cols=cat_cols)
y, uniques = pd.factorize(y)
data = order_cols(data)
x_train, x_test, y_train, y_test = train_test_split(data, y, test_size=0.33, random_state=42)
# Prepare train and test datasets
estimators = [
LogisticRegression(multi_class='auto', solver='lbfgs', max_iter=500, random_state=42),
RandomForestClassifier(n_estimators=10, random_state=42),
DecisionTreeClassifier(random_state=42),
SVC(gamma='auto'),
KNeighborsClassifier(n_neighbors=5)
]
estimators_names = [type(clf).__name__ for clf in estimators]
for estimator in estimators:
assert hasattr(estimator, 'fit')
assert hasattr(estimator, 'score')
estimators = fit_estimators(estimators, x_train, y_train)
scores = score_estimators(estimators, x_test, y_test)
return scores
real_scores = estimator_eval(data=test_adult.drop('income', axis=1),
y=test_adult['income'],
cat_cols=['workclass', 'education', 'marital.status', 'occupation', 'relationship','race', 'sex', 'native.country'])
synth_scores = estimator_eval(data=adult_combined.drop('income', axis=1),
y=adult_combined['income'],
cat_cols=['workclass', 'education', 'marital.status', 'occupation', 'relationship','race', 'sex', 'native.country'])
dict_results = {'original': real_scores, 'synthetic': synth_scores}
results = pd.DataFrame(dict_results).reset_index()
print("Mean average accuracy improvement: {}".format((results['synthetic'] - results['original']).mean()))
results_graph = results.melt('index', var_name='data_source', value_name='accuracy')
pd.DataFrame(dict_results).transpose()
#Final results comparison
sns.barplot(x="index", y="accuracy", hue="data_source", data=results_graph,
palette="Paired", edgecolor=".6") | _____no_output_____ | MIT | blog/black-lives-matter/Race_bias_A_synthetic_data_approach.ipynb | ydataai/academy | |
LAB 5b: Deploy and predict with Keras model on Cloud AI Platform.**Learning Objectives**1. Set up the environment1. Deploy trained Keras model to Cloud AI Platform1. Online predict from model on Cloud AI Platform1. Batch predict from model on Cloud AI Platform Introduction In this notebook, we'll deploy our Keras model to Cloud AI Platform and create predictions.We will set up the environment, deploy a trained Keras model to Cloud AI Platform, predict online from the deployed model on Cloud AI Platform, and predict in batch from the deployed model on Cloud AI Platform.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/5b_deploy_keras_ai_platform_babyweight.ipynb). Set up environment variables and load necessary libraries Import necessary libraries. | import os | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
Lab Task 1: Set environment variables.Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region. | %%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# Change these to try this notebook out
PROJECT = "cloud-training-demos" # TODO: Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # TODO: Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
%%bash
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
Check our trained model filesLet's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last [lab](../solutions/10_train_keras_ai_platform_babyweight.ipynb). We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service. | %%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION} | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
Lab Task 2: Deploy trained model.Deploying the trained model to act as a REST web service is a simple gcloud call. Complete the __TODO__ by providing the location of the saved_model.pb file to the Cloud AI Platform model deployment service. The deployment will take a few minutes. | %%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=# TODO: Add GCS path to saved_model.pb file.
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.1 \
--python-version=3.7 | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
Lab Task 3: Use model to make online prediction.Complete __TODO__s for both the Python and gcloud Shell API methods of calling our deployed model on Cloud AI Platform for online prediction. Python APIWe can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses are the order of the instances. | from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = # TODO: Add model name
MODEL_VERSION = # TODO: Add model version
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
# TODO: Create another instance
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different). gcloud shell APIInstead we could use the gcloud shell API. Create a newline delimited JSON file with one instance per line and submit using gcloud. | %%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39} | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
Now call `gcloud ai-platform predict` using the JSON we just created and point to our deployed `model` and `version`. | %%bash
gcloud ai-platform predict \
--model=babyweight \
--json-instances=inputs.json \
--version=# TODO: Add model version | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
Lab Task 4: Use model to make batch prediction.Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. Complete __TODO__s so we can call our deployed model on Cloud AI Platform for batch prediction. | %%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight \
--version=# TODO: Add model version | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb | Glairly/introduction_to_tensorflow |
SummaryWe are given: * a positive integer $n$ which is the number of dimensions of the space. * a positive integer $k$ which is the total number of discrete points we need to place inside that space. * each continuous point $x$ queried in that space follows a probability density function $PDF$.And we are asked for: * the $k$ discrete points placed in a way that minimizes the average distance (or mean error $ME$) from those continuous points $x$. Proposed approachWe propose a method that: * is an almost optimal solution that provides a decent decrease of the $ME$.* works for almost any number $k$ of points.* works for almost any number $n$ of dimensions.* adapts very fast (close to $\log k$ rate).* is always better than the $ME$ provided by a uniform discretization, even before it is adapted.* is "model" free. The $PDF$ of the appearance of the $x$'s doesn't have to be known. How it worksThe core idea behind this method is the $2^n$-tree. Like quadtrees and octrees with $n$ equal to 2 and 3 respectively, n-trees can have any positive integer $n$ as their spatial subdivision factor, also known as the branching factor. $n$ describes the dimensions of the space that these trees span. The $2^n$ child branches of each parent node expand in all the basic directions in order to fill the space as uniformly as possible. Each node is a point in space that covers a specified region. This region is the corresponding fraction of its parent's region. In other words, the parent splits the region assigned to it into $2^n$ equal sub-regions, each with a volume $\frac{1}{2^n}$ times its parent's. The sub-regions that are produced are n-dimensional cubes too and do not overlap except at their touching surfaces. Of course they fully overlap with their parent because they are actually its subsets. As the height of the tree grows, the nodes that are created have smaller and smaller volumes. 
Each point is located in the middle of its cube, so no two points can ever share the same location; a tree with $k$ nodes therefore yields $k$ distinct points inside the given space. We assign to the root node the whole space we want to use, in our case the unit cube between the points $[0, 0, ..., 0]$ and $[1, 1, ..., 1]$. For example, let's take $n=2$. Root, the node in level zero, has the point $[0.5,0.5]$ and is determined by the vertices $[0,0],[1,1]$. Its $2^n=4$ branches split this area into 4 equal 2d-cubes (i.e., squares) with vertices and points: * cube 1 -> vertices $[0,0],[0.5,0.5]$ , point $[0.25,0.25]$ * cube 2 -> vertices $[0.5,0],[1,0.5]$ , point $[0.75,0.25]$ * cube 3 -> vertices $[0,0.5],[0.5,1]$ , point $[0.25,0.75]$ * cube 4 -> vertices $[0.5,0.5],[1,1]$ , point $[0.75,0.75]$ So now we have 5 points filling up this space. Those 4 branches can each be extended to create 4 sub-branches of their own, i.e., 16 new nodes, for a total of 21 points. | tree = Tree(2, 21)
tree.plot()
points = tree.get_points()
plt.figure()
for i, p in enumerate(points):
    # Label only the first point to avoid duplicate legend entries
    plt.plot(p[0], p[1], 'bo', label='Points' if i == 0 else None)
plt.plot([0, 0], [0, 1], 'g--', label='Space')
plt.plot([0, 1], [0, 0], 'g--')
plt.plot([1, 1], [0, 1], 'g--')
plt.plot([1, 0], [1, 1], 'g--')
plt.xticks(np.linspace(0, 1, 9))
plt.yticks(np.linspace(0, 1, 9))
plt.title('Points spread into space')
plt.grid(True)
plt.legend()
plt.show() | _____no_output_____ | MIT | notebooks/my_approach.ipynb | jimkon/adaptive-discretization |
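The subdivision rule described above can be sketched independently of the notebook's `Tree` class (whose internals are not shown here): given a cell's center and side length, the $2^n$ children sit at offsets of $\pm\,side/4$ along each axis, each covering a cell of half the side length.

```python
from itertools import product

def children(center, side):
    """Centers and side length of the 2^n child cells of an n-cube cell."""
    n = len(center)
    # One child per combination of -side/4 / +side/4 offsets along each axis
    offsets = product((-side / 4, side / 4), repeat=n)
    return [tuple(c + o for c, o in zip(center, off)) for off in offsets], side / 2

# Root of the unit square: center (0.5, 0.5), side 1 -- reproduces the 4 points above.
child_centers, child_side = children((0.5, 0.5), 1.0)
print(sorted(child_centers))  # [(0.25, 0.25), (0.25, 0.75), (0.75, 0.25), (0.75, 0.75)]
print(child_side)             # 0.5
```

For $n=3$ the same function yields the 8 octree children, and so on for higher dimensions.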
XYZ Pro Features

This notebook demonstrates some of the pro features of the XYZ Hub API. XYZ paid features can be found here: [xyz pro features](https://www.here.xyz/xyz_pro/). XYZ plans can be found here: [xyz plans](https://developer.here.com/pricing).

Virtual Space

A virtual space is described by a definition which references other existing spaces (the upstream spaces). Queries made against a virtual space will return the features of its upstream spaces combined. Below are the different predefined operations for combining the features of the upstream spaces:
- [group](#group_cell)
- [merge](#merge_cell)
- [override](#override_cell)
- [custom](#custom_cell) | # Make necessary imports.
import os
import json
import warnings
from xyzspaces.datasets import get_chicago_parks_data, get_countries_data
from xyzspaces.exceptions import ApiError
import xyzspaces | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
Warning: Before running the cells below, please make sure you have an XYZ token to interact with xyzspaces. Please see README.md in the notebooks folder for more info on XYZ_TOKEN. | os.environ["XYZ_TOKEN"] = "MY-XYZ-TOKEN"  # Replace your token here.
xyz = xyzspaces.XYZ()
# create two spaces which will act as upstream spaces for virtual space created later.
title1 = "Testing xyzspaces"
description1 = "Temporary space containing countries data."
space1 = xyz.spaces.new(title=title1, description=description1)
# Add some data to it space1
gj_countries = get_countries_data()
space1.add_features(features=gj_countries)
space_id1 = space1.info["id"]
title2 = "Testing xyzspaces"
description2 = "Temporary space containing Chicago parks data."
space2 = xyz.spaces.new(title=title2, description=description2)
# Add some data to space2
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"] | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
Group

Group means to combine the content of the specified spaces. All objects of each space will be part of the response when the virtual space is queried by the user. The information about which object came from which space can be found in the XYZ namespace in the properties of each feature. When writing these objects back to the virtual space, they'll be written back to the upstream space from which they actually came. | # Create a new virtual space by grouping two spaces created above.
title = "Virtual Space for coutries and Chicago parks data"
description = "Test group functionality of virtual space"
upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(group=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(json.dumps(vspace.info, indent=2))
# Reading a particular feature from space1 via virtual space.
vfeature1 = vspace.get_feature(feature_id="FRA")
feature1 = space1.get_feature(feature_id="FRA")
assert vfeature1 == feature1
# Reading a particular feature from space2 via virtual space.
vfeature2 = vspace.get_feature(feature_id="LP")
feature2 = space2.get_feature(feature_id="LP")
assert vfeature2 == feature2
# Deleting a feature from virtual space deletes corresponding feature from upstream space.
vspace.delete_feature(feature_id="FRA")
try:
space1.get_feature("FRA")
except ApiError as err:
print(err)
# Delete temporary spaces created.
vspace.delete()
space1.delete()
space2.delete() | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
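The group semantics demonstrated above can be illustrated with plain dictionaries. This is only a sketch of the stated behavior, not XYZ Hub's implementation: every feature of every upstream space is returned, annotated with its source space (the `@source-space` key here is a made-up stand-in for the real XYZ-namespace metadata).

```python
def group_spaces(spaces):
    """Combine features of upstream spaces, recording each feature's source space.

    `spaces` maps space id -> {feature id -> feature dict}.
    """
    combined = []
    for space_id, features in spaces.items():
        for fid, feat in features.items():
            # Duplicated ids are NOT merged: both copies appear in the result
            combined.append({"id": fid, **feat, "@source-space": space_id})
    return combined

result = group_spaces({
    "countries": {"FRA": {"name": "France"}},
    "parks": {"LP": {"name": "Lincoln Park"}},
})
print(len(result))  # 2 -> all features of both spaces are visible
```

Writes would be routed back to whichever `@source-space` a feature carries, matching the behavior described above.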
Merge

Merge means that objects with the same ID will be merged together. If there are duplicate feature IDs across the data of the upstream spaces, the duplicates will be merged to build a single feature. The result is a response that is guaranteed to have no features with duplicate IDs. The merge happens in the order of the space references in the specified array: objects coming from the second space will overwrite potentially existing property values of objects coming from the first space. The information about which object came from which space(s) can be found in the XYZ namespace in the properties of each feature. When writing these objects back to the virtual space, they'll be written back to the upstream space from which they actually came, or the last one in the list if none was specified. When deleting features from the virtual space, a new pseudo-deleted feature is written to the last space in the list; trying to read the feature with that ID from the virtual space is not possible afterward. | # create two spaces with duplicate data
title1 = "Testing xyzspaces"
description1 = "Temporary space containing Chicago parks data."
space1 = xyz.spaces.new(title=title1, description=description1)
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
# Add some data to it space1
space1.add_features(features=gj_chicago)
space_id1 = space1.info["id"]
title2 = "Testing xyzspaces duplicate"
description2 = "Temporary space containing Chicago parks data duplicate"
space2 = xyz.spaces.new(title=title2, description=description2)
# Add some data to it space2
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]
# update a particular feature of second space so that post merge virtual space will have this feature merged
lp = space2.get_feature("LP")
space2.update_feature(feature_id="LP", data=lp, add_tags=["foo", "bar"])
# Create a new virtual space by merging two spaces created above.
title = "Virtual Space for coutries and Chicago parks data"
description = "Test merge functionality of virtual space"
upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(merge=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(vspace.info)
vfeature1 = vspace.get_feature(feature_id="LP")
assert vfeature1["properties"]["@ns:com:here:xyz"]["tags"] == ["foo", "bar"]
bp = space2.get_feature("BP")
space2.update_feature(feature_id="BP", data=bp, add_tags=["foo1", "bar1"])
vfeature2 = vspace.get_feature(feature_id="BP")
assert vfeature2["properties"]["@ns:com:here:xyz"]["tags"] == ["foo1", "bar1"]
space1.delete()
space2.delete()
vspace.delete() | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
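Merge-by-ID in list order can also be sketched with dictionaries (again just an illustration of the semantics described above, not XYZ Hub's code): for a duplicated feature ID, property values from later spaces overwrite those from earlier ones, but non-conflicting properties survive.

```python
def merge_spaces(spaces_in_order):
    """Merge features by id; later spaces' property values win on conflicts.

    `spaces_in_order` is a list of {feature id -> properties dict}.
    """
    merged = {}
    for features in spaces_in_order:
        for fid, props in features.items():
            # update() keeps earlier properties and overwrites conflicting ones
            merged.setdefault(fid, {}).update(props)
    return merged

space_a = {"LP": {"area": 1200}, "BP": {"area": 500}}
space_b = {"LP": {"area": 1200, "tags": ["foo", "bar"]}}
print(merge_spaces([space_a, space_b]))
# {'LP': {'area': 1200, 'tags': ['foo', 'bar']}, 'BP': {'area': 500}}
```

This mirrors what the assertions above verify: the `LP` feature read through the virtual space carries the tags that were added in the second space.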
Override

Override means that objects with the same ID will be overridden completely. If there are duplicate feature IDs across the data of the upstream spaces, the duplicates will be overridden to result in a single feature. The result is a response that is guaranteed to have no features with duplicate IDs. The override happens in the order of the space references in the specified array: objects coming from the second space will override potentially existing features coming from the first space. The information about which object came from which space can be found in the XYZ namespace in the properties of each feature. When writing these objects back to the virtual space, they'll be written back to the upstream space from which they actually came. When deleting features from the virtual space, the same rules as for merge apply. | # create two spaces with duplicate data
title1 = "Testing xyzspaces"
description1 = "Temporary space containing Chicago parks data."
space1 = xyz.spaces.new(title=title1, description=description1)
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
# Add some data to it space1
space1.add_features(features=gj_chicago)
space_id1 = space1.info["id"]
title2 = "Testing xyzspaces duplicate"
description2 = "Temporary space containing Chicago parks data duplicate"
space2 = xyz.spaces.new(title=title2, description=description2)
# Add some data to it space2
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]
# Create a new virtual space by override operation.
title = "Virtual Space for coutries and Chicago parks data"
description = "Test merge functionality of virtual space"
upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(override=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(vspace.info)
bp = space2.get_feature("BP")
space2.update_feature(feature_id="BP", data=bp, add_tags=["foo1", "bar1"])
vfeature2 = vspace.get_feature(feature_id="BP")
assert vfeature2["properties"]["@ns:com:here:xyz"]["tags"] == ["foo1", "bar1"]
space1.delete()
space2.delete()
vspace.delete() | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
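The contrast with merge can be sketched the same way (an illustration of the stated semantics only): under override, a later space's feature replaces the earlier one wholesale, so properties that exist only in the earlier copy are dropped rather than kept.

```python
def override_spaces(spaces_in_order):
    """Features with the same id are replaced wholesale by later spaces."""
    result = {}
    for features in spaces_in_order:
        result.update(features)  # whole-object replacement, unlike merge
    return result

space_a = {"BP": {"area": 500, "zip": "60614"}}
space_b = {"BP": {"area": 500, "tags": ["foo1", "bar1"]}}
print(override_spaces([space_a, space_b]))
# {'BP': {'area': 500, 'tags': ['foo1', 'bar1']}} -- 'zip' is gone
```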
Applying clustering in space | # create two spaces which will act as upstream spaces for virtual space created later.
title1 = "Testing xyzspaces"
description1 = "Temporary space containing countries data."
space1 = xyz.spaces.new(title=title1, description=description1)
# Add some data to it space1
gj_countries = get_countries_data()
space1.add_features(features=gj_countries)
space_id1 = space1.info["id"]
# Generate clustering for the space
space1.cluster(clustering="hexbin")
# Delete created space
space1.delete() | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
Rule-based Tagging

Rule-based tagging applies tags to multiple features in a space automatically, based on rules expressed as JSON-path expressions. Users can update a space with a map of rules, where each key is the tag to be applied to all features matching the JSON-path expression given as the value. If multiple rules match, multiple tags will be applied to the corresponding matched sets of features. A feature may even be matched by multiple rules and thus have multiple tags added to it. | # Create a new space
title = "Testing xyzspaces"
description = "Temporary space containing Chicago parks data."
space = xyz.spaces.new(title=title, description=description)
# Add data to the space.
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
_ = space.add_features(features=gj_chicago)
# update space to add tagging rules to the above mentioned space.
tagging_rules = {
"large": "$.features[?(@.properties.area>=500)]",
"small": "$.features[?(@.properties.area<500)]",
}
_ = space.update(tagging_rules=tagging_rules)
# verify that features are tagged correctly based on rules.
large_parks = space.search(tags=["large"])
for park in large_parks:
assert park["id"] in ["LP", "BP", "JP"]
small_parks = space.search(tags=["small"])
for park in small_parks:
assert park["id"] in ["MP", "GP", "HP", "DP", "CP", "COP"]
# Delete created space
space.delete() | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
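The two rules above can be mimicked locally without a JSON-path engine. This is a sketch only; XYZ Hub evaluates real JSON-path expressions server-side. Here each rule is a tag plus a plain predicate over a feature's properties, standing in for `$.features[?(@.properties.area>=500)]` and its counterpart.

```python
def apply_tagging_rules(features, rules):
    """Add every tag whose predicate matches; a feature may get several tags."""
    for feat in features:
        tags = feat.setdefault("tags", [])
        for tag, predicate in rules.items():
            if predicate(feat["properties"]):
                tags.append(tag)
    return features

rules = {
    "large": lambda p: p.get("area", 0) >= 500,
    "small": lambda p: p.get("area", 0) < 500,
}
parks = [
    {"id": "LP", "properties": {"area": 1200}},
    {"id": "MP", "properties": {"area": 300}},
]
apply_tagging_rules(parks, rules)
print([(f["id"], f["tags"]) for f in parks])  # [('LP', ['large']), ('MP', ['small'])]
```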
Activity Log

The activity log enables tracking of changes in your space. To activate it, create a space with the listener added and enable_uuid set to True. More information on the activity log can be found [here](https://www.here.xyz/api/devguide/activitylogguide/). | title = "Activity-Log Test"
description = "Activity-Log Test"
listeners = {
"id": "activity-log",
"params": {"states": 5, "storageMode": "DIFF_ONLY", "writeInvalidatedAt": "true"},
"eventTypes": ["ModifySpaceEvent.request"],
}
space = xyz.spaces.new(
title=title,
description=description,
enable_uuid=True,
listeners=listeners,
)
from time import sleep
# As activity log is async operation adding sleep to get info
sleep(5)
print(json.dumps(space.info, indent=2))
space.delete() | _____no_output_____ | Apache-2.0 | docs/notebooks/xyz_pro_features_examples.ipynb | fangkun202303x/heremapsn |
Pyber Ride Sharing

Observations from the data:
* Urban drivers typically drive more frequently yet charge less on average (under $30 per fare) than rural drivers.
* Roughly two-thirds of all rides occur in urban cities; however, roughly 80% of all drivers work in urban areas.
* While fewer rides occur in rural cities, there are on average fewer drivers to manage the load, creating a more favorable driver-to-ride ratio.
* Rural drivers have the greatest fare distribution (roughly 40 dollars/driver) among drivers of all 3 city types. | import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Read in City Data csv file
city_df = pd.read_csv('city_data.csv')
# Read in Ride Data csv file
ride_df = pd.read_csv('ride_data.csv')
# Combine the 2 dataframes
pyber_df = pd.merge(city_df, ride_df, on="city", how='left')
pyber_df.head()
# Find the total fare per city
city_fare_total = pyber_df.groupby('city')['fare'].sum().to_frame()
# Find the average fare ($) per city
city_fare_avg = pyber_df.groupby('city')['fare'].mean().to_frame()
# Find the total number of rides per city
city_total_rides = pyber_df.groupby('city')['ride_id'].count().to_frame()
# Find the total number of drivers per city
city_driver_count = pyber_df.groupby('city')['driver_count'].unique().to_frame()
city_driver_count['driver_count'] = city_driver_count['driver_count'].str.get(0)
# Find the city type (urban, suburban, rural)
city_type = pyber_df.groupby('city')['type'].unique().to_frame()
city_type['type'] = city_type['type'].str.get(0)
# Combine each dataframe
city_fare_avg.columns = ["Average Fare"]
join_one = city_fare_avg.join(city_total_rides, how="left")
join_one.columns=["Average Fare", "Total Rides"]
join_two = join_one.join(city_fare_total, how="inner")
join_two.columns=["Average Fare", "Total Rides", "City Fare Total"]
join_three = join_two.join(city_driver_count, how="inner")
join_three.columns=["Average Fare", "Total Rides", "City Fare Total", "Driver Count"]
city_agg = join_three.join(city_type, how='inner')
city_agg.columns=["Average Fare", "Total Rides", "City Fare Total", "Driver Count", "City Type"]
city_agg.head()
# Separate data by City Type
urban_data = city_agg.loc[(city_agg['City Type']=='Urban'), :]
suburban_data = city_agg.loc[(city_agg['City Type']=='Suburban'), :]
rural_data = city_agg.loc[(city_agg['City Type']=='Rural'), :] | _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
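The chain of per-metric groupbys and joins above can be collapsed into a single named aggregation. This is an equivalent sketch; the demo frame below is illustrative, with the same column names the joins produce.

```python
import pandas as pd

def summarize_cities(pyber_df):
    """Per-city aggregates equivalent to the join chain above."""
    return pyber_df.groupby("city").agg(
        **{
            "Average Fare": ("fare", "mean"),
            "Total Rides": ("ride_id", "count"),
            "City Fare Total": ("fare", "sum"),
            "Driver Count": ("driver_count", "first"),
            "City Type": ("type", "first"),
        }
    )

demo = pd.DataFrame({
    "city": ["A", "A", "B"],
    "fare": [10.0, 20.0, 5.0],
    "ride_id": [1, 2, 3],
    "driver_count": [4, 4, 2],
    "type": ["Urban", "Urban", "Rural"],
})
print(summarize_cities(demo))
```

Named aggregation avoids the intermediate `join_one`/`join_two`/`join_three` frames and the repeated `.columns=` renames.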
Bubble Plot | ## Bubble Plot Data
all_urban_rides = urban_data.groupby('city')['Total Rides'].sum()
avg_urban_fare = urban_data.groupby('city')['Average Fare'].mean()
all_suburban_rides = suburban_data.groupby('city')['Total Rides'].sum()
avg_suburban_fare = suburban_data.groupby('city')['Average Fare'].mean()
all_rural_rides = rural_data.groupby('city')['Total Rides'].sum()
avg_rural_fare = rural_data.groupby('city')['Average Fare'].mean()
## Bubble Plot
# Use each city type's driver counts (scaled) as marker sizes,
# so the size array lines up with the x/y data of each scatter call
urban_sizes = np.array(urban_data['Driver Count']) * 3
suburban_sizes = np.array(suburban_data['Driver Count']) * 3
rural_sizes = np.array(rural_data['Driver Count']) * 3
# Add chart note
textstr = 'Note: Circle size corresponds to driver count/city'
urban = plt.scatter(all_urban_rides, avg_urban_fare, s=urban_sizes, color='lightskyblue', alpha=0.65, edgecolors='none')
suburban = plt.scatter(all_suburban_rides, avg_suburban_fare, s=suburban_sizes, color='gold', alpha=0.65, edgecolors='none')
rural = plt.scatter(all_rural_rides, avg_rural_fare, s=rural_sizes, color='lightcoral', alpha=0.65, edgecolors='none')
plt.grid(linestyle='dotted')
plt.xlabel('Total Number of Rides (Per City)')
plt.ylabel('Average Fare ($)')
plt.title('Pyber Ride Sharing Data (2016)')
plt.gcf().text(0.95, 0.50, textstr, fontsize=8)
plt.legend((urban, suburban, rural),('Urban', 'Suburban', 'Rural'),scatterpoints=1,loc='upper right',ncol=1,\
fontsize=8, markerscale=0.75,title='City Type', edgecolor='none',framealpha=0.25)
plt.show()
| _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
Pie Charts

Total Fares by City Type | ## Find Total Fares By City Type
urban_fare_total = urban_data['City Fare Total'].sum()
suburban_fare_total = suburban_data['City Fare Total'].sum()
rural_fare_total = rural_data['City Fare Total'].sum()
# Create a Pie Chart to Express the Above Date
driver_type = ["Urban", "Suburban", "Rural"]
driver_count = [urban_fare_total, suburban_fare_total, rural_fare_total]
colors = ["lightskyblue", "gold","lightcoral"]
explode = (0.1,0,0)
plt.pie(driver_count, explode=explode, labels=driver_type, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=68)
plt.title("% of Total Fares by City Type")
plt.axis("equal")
plt.show() | _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
Total Rides by City Type | ## Find Total Rides By City Type
urban_rides_count = urban_data['Total Rides'].sum()
suburban_rides_count = suburban_data['Total Rides'].sum()
rural_rides_count = rural_data['Total Rides'].sum()
# Create a Pie Chart to Express the Above Date
ride_type = ["Urban", "Suburban", "Rural"]
ride_count = [urban_rides_count, suburban_rides_count, rural_rides_count]
colors = ["lightskyblue", "gold","lightcoral"]
explode = (0.1,0,0)
plt.pie(ride_count, explode=explode, labels=ride_type, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=60)
plt.title("% of Total Rides by City Type")
plt.axis("equal")
plt.show()
| _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
Total Drivers by City Type | ## Find Total Drivers By City Type
urban_driver_count = urban_data['Driver Count'].sum()
suburban_driver_count = suburban_data['Driver Count'].sum()
rural_driver_count = rural_data['Driver Count'].sum()
# Create a Pie Chart to Express the Above Date
driver_type = ["Urban", "Suburban", "Rural"]
driver_count = [urban_driver_count, suburban_driver_count, rural_driver_count]
colors = ["lightskyblue", "gold","lightcoral"]
explode = (0.1,0,0)
plt.pie(driver_count, explode=explode, labels=driver_type, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=40)
plt.title("% of Total Drivers by City Type")
plt.axis("equal")
plt.show()
| _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
Average Ride Value Per Driver (by City Type) | # Identify the average fare for drivers in each city type
urban_avg_driver_pay = urban_fare_total / urban_rides_count
suburban_avg_driver_pay = suburban_fare_total / suburban_rides_count
rural_avg_driver_pay = rural_fare_total / rural_rides_count
# Create a Bar Chart to Express the Above Date
driver_type = ["Urban", "Suburban", "Rural"]
avg_driver_pay = [urban_avg_driver_pay, suburban_avg_driver_pay, rural_avg_driver_pay]
x_axis = np.arange(len(avg_driver_pay))
colors = ["lightskyblue", "gold","lightcoral"]
plt.bar(x_axis, avg_driver_pay, color=colors, align='edge')
tick_locations = [value+0.4 for value in x_axis]
plt.xticks(tick_locations, ["Urban", "Suburban", "Rural"])
plt.ylim(0, max(avg_driver_pay)+1)
plt.xlim(-0.25, len(driver_type))
plt.title("Average Per Ride Value for Drivers")
plt.show() | _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
Average Fare Distribution Across All Drivers (by City Type) | urban_fare_dist = urban_fare_total / urban_driver_count
suburban_fare_dist = suburban_fare_total / suburban_driver_count
rural_fare_dist = rural_fare_total / rural_driver_count
# Create a Bar Chart to Express the Above Date
driver_type = ["Urban", "Suburban", "Rural"]
avg_fare_dist = [urban_fare_dist, suburban_fare_dist, rural_fare_dist]
x_axis = np.arange(len(avg_fare_dist))
colors = ["lightskyblue", "gold","lightcoral"]
plt.bar(x_axis, avg_fare_dist, color=colors, align='edge')
tick_locations = [value+0.4 for value in x_axis]
plt.xticks(tick_locations, ["Urban", "Suburban", "Rural"])
plt.ylim(0, max(avg_fare_dist)+1)
plt.xlim(-0.25, len(driver_type))
plt.title("Average Fare Distribution Across All Drivers")
plt.show() | _____no_output_____ | MIT | Pyber.ipynb | bmodie/Unit_5_Pyber |
torchserve.ipynb

This notebook contains code for the portions of the benchmark in [the benchmark notebook](./benchmark.ipynb) that use [TorchServe](https://github.com/pytorch/serve). | # Imports go here
import json
import os
import requests
import scipy.special
import transformers
# Fix silly warning messages about parallel tokenizers
os.environ['TOKENIZERS_PARALLELISM'] = 'False'
# Constants go here
INTENT_MODEL_NAME = 'mrm8488/t5-base-finetuned-e2m-intent'
SENTIMENT_MODEL_NAME = 'cardiffnlp/twitter-roberta-base-sentiment'
QA_MODEL_NAME = 'deepset/roberta-base-squad2'
GENERATE_MODEL_NAME = 'gpt2'
INTENT_INPUT = {
'context':
("I came here to eat chips and beat you up, "
"and I'm all out of chips.")
}
SENTIMENT_INPUT = {
'context': "We're not happy unless you're not happy."
}
QA_INPUT = {
'question': 'What is 1 + 1?',
'context':
"""Addition (usually signified by the plus symbol +) is one of the four basic operations of
arithmetic, the other three being subtraction, multiplication and division. The addition of two
whole numbers results in the total amount or sum of those values combined. The example in the
adjacent image shows a combination of three apples and two apples, making a total of five apples.
This observation is equivalent to the mathematical expression "3 + 2 = 5" (that is, "3 plus 2
is equal to 5").
"""
}
GENERATE_INPUT = {
'prompt_text': 'All your base are'
} | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
Model Packaging

TorchServe requires models to be packaged up as model archive files. Documentation for this process (such as it is) is [here](https://github.com/pytorch/serve/blob/master/README.md#serve-a-model) and [here](https://github.com/pytorch/serve/blob/master/model-archiver/README.md).

Intent Model

The intent model requires the caller to invoke the pre- and post-processing code manually; only the model and tokenizer are provided in the model zoo. | # First we need to dump the model into a local directory.
intent_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(
INTENT_MODEL_NAME)
intent_tokenizer = transformers.AutoTokenizer.from_pretrained('t5-base')
intent_model.save_pretrained('torchserve/intent')
intent_tokenizer.save_pretrained('torchserve/intent') | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
Next we wrapped the model in a handler class, located at `./torchserve/handler_intent.py`, which needs to be in its own separate Python file in order for the `torch-model-archiver` utility to work. The following command turns this Python file, plus the data files created by the previous cell, into a model archive (`.mar`) file at `torchserve/model_store/intent.mar`. | %%time
!mkdir -p torchserve/model_store
!torch-model-archiver --model-name intent --version 1.0 \
--serialized-file torchserve/intent/pytorch_model.bin \
--handler torchserve/handler_intent.py \
--extra-files "torchserve/intent/config.json,torchserve/intent/special_tokens_map.json,torchserve/intent/tokenizer_config.json,torchserve/intent/tokenizer.json" \
--export-path torchserve/model_store \
--force | CPU times: user 438 ms, sys: 116 ms, total: 553 ms
Wall time: 54 s
| Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
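The `handler_*.py` files referenced throughout this notebook all follow the same TorchServe contract: the server instantiates the class, calls `initialize(context)` once, then calls `handle(batch, context)` per request batch, where each batch element exposes its payload under a `body` (or `data`) key. A minimal stand-alone stub of that shape is below; it is a hypothetical sketch that does not depend on the `ts` package, and the real handlers load the `transformers` model and tokenizer inside `initialize`.

```python
import json

class EchoHandler:
    """Skeleton mirroring the TorchServe custom-handler contract used by
    the handler_*.py files: initialize once, then handle request batches."""

    def __init__(self):
        self.initialized = False

    def initialize(self, context):
        # The real handlers load tokenizer + model from the model dir here.
        self.initialized = True

    def handle(self, data, context):
        if not self.initialized:
            self.initialize(context)
        # Each request body arrives under the 'body' (or 'data') key.
        records = [json.loads(row.get("body") or row.get("data")) for row in data]
        # TorchServe requires one output element per input element.
        return [{"echo": r} for r in records]

batch = [{"body": json.dumps({"context": "hello"})}]
print(EchoHandler().handle(batch, context=None))  # [{'echo': {'context': 'hello'}}]
```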
Sentiment Model

The sentiment model operates similarly to the intent model. | sentiment_tokenizer = transformers.AutoTokenizer.from_pretrained(
SENTIMENT_MODEL_NAME)
sentiment_model = (
transformers.AutoModelForSequenceClassification
.from_pretrained(SENTIMENT_MODEL_NAME))
sentiment_model.save_pretrained('torchserve/sentiment')
sentiment_tokenizer.save_pretrained('torchserve/sentiment')
contexts = ['hello', 'world']
input_batch = sentiment_tokenizer(contexts, padding=True,
return_tensors='pt')
inference_output = sentiment_model(**input_batch)
scores = inference_output.logits.detach().numpy()
scores = scipy.special.softmax(scores, axis=1).tolist()
scores = [{k: v for k, v in zip(['positive', 'neutral', 'negative'], row)}
for row in scores]
# return scores
scores | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
As with the intent model, we created a handler class (located at `torchserve/handler_sentiment.py`), then passed that class and the serialized model from two cells ago through the `torch-model-archiver` utility. | %%time
!torch-model-archiver --model-name sentiment --version 1.0 \
--serialized-file torchserve/sentiment/pytorch_model.bin \
--handler torchserve/handler_sentiment.py \
--extra-files "torchserve/sentiment/config.json,torchserve/sentiment/special_tokens_map.json,torchserve/sentiment/tokenizer_config.json,torchserve/sentiment/tokenizer.json" \
--export-path torchserve/model_store \
--force | CPU times: user 210 ms, sys: 114 ms, total: 324 ms
Wall time: 24.2 s
| Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
Question Answering Model

The QA model uses a `transformers` pipeline. We squeeze this model into the TorchServe APIs by telling the pipeline to serialize all of its parts to a single directory, then passing the parts that aren't `pytorch_model.bin` in as extra files. At runtime, our custom handler uses the model-loading code from `transformers` on the reconstituted model directory. | qa_pipeline = transformers.pipeline('question-answering', model=QA_MODEL_NAME)
qa_pipeline.save_pretrained('torchserve/qa') | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
As with the previous models, we wrote a wrapper class (located at `torchserve/handler_qa.py`), then passed that class and the serialized model through the `torch-model-archiver` utility. | %%time
!torch-model-archiver --model-name qa --version 1.0 \
--serialized-file torchserve/qa/pytorch_model.bin \
--handler torchserve/handler_qa.py \
--extra-files "torchserve/qa/config.json,torchserve/qa/merges.txt,torchserve/qa/special_tokens_map.json,torchserve/qa/tokenizer_config.json,torchserve/qa/tokenizer.json,torchserve/qa/vocab.json" \
--export-path torchserve/model_store \
--force
data = [QA_INPUT, QA_INPUT]
# Preprocessing
samples = [qa_pipeline.create_sample(**r) for r in data]
generators = [qa_pipeline.preprocess(s) for s in samples]
# Inference
inference_outputs = ((qa_pipeline.forward(example) for example in batch) for batch in generators)
post_results = [qa_pipeline.postprocess(o) for o in inference_outputs]
post_results | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
Natural Language Generation Model

The text generation model is roughly similar to the QA model, albeit with important differences in how the three stages of the pipeline operate. At least model loading is the same. | generate_pipeline = transformers.pipeline(
'text-generation', model=GENERATE_MODEL_NAME)
generate_pipeline.save_pretrained('torchserve/generate')
data = [GENERATE_INPUT, GENERATE_INPUT]
pad_token_id = generate_pipeline.tokenizer.eos_token_id
json_records = data
# preprocess() takes a single input at a time, but we need to do
# a batch at a time.
input_batch = [generate_pipeline.preprocess(**r) for r in json_records]
# forward() takes a single input at a time, but we need to run a
# batch at a time.
inference_output = [
generate_pipeline.forward(r, pad_token_id=pad_token_id)
for r in input_batch]
# postprocess() takes a single generation result at a time, but we
# need to run a batch at a time.
generate_result = [generate_pipeline.postprocess(i)
for i in inference_output]
generate_result | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
Once again, we wrote a wrapper class (located at `torchserve/handler_generate.py`), then passed that class and the serialized model through the `torch-model-archiver` utility. | %%time
!torch-model-archiver --model-name generate --version 1.0 \
--serialized-file torchserve/generate/pytorch_model.bin \
--handler torchserve/handler_generate.py \
--extra-files "torchserve/generate/config.json,torchserve/generate/merges.txt,torchserve/generate/special_tokens_map.json,torchserve/generate/tokenizer_config.json,torchserve/generate/tokenizer.json,torchserve/generate/vocab.json" \
--export-path torchserve/model_store \
--force | CPU times: user 198 ms, sys: 96 ms, total: 294 ms
Wall time: 24.5 s
| Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |
Testing

Now we can fire up TorchServe and test our models. For some reason, starting TorchServe needs to be done in a proper terminal window; running the command from this notebook has no effect. The commands to run (from the root of the repository) are:

```
> conda activate ./env
> cd notebooks/benchmark/torchserve
> torchserve --start --ncs --model-store model_store --ts-config torchserve.properties
```

Then pick up a cup of coffee and a book and wait a while. The startup process is like cold-starting a gas turbine and takes about 10 minutes. Once the server has started, we can test our deployed models by making PUT requests to the inference API. | # Probe the management API to verify that TorchServe is running.
requests.get('http://127.0.0.1:8081/models').json()
port = 8080
intent_result = requests.put(
f'http://127.0.0.1:{port}/predictions/intent_en',
json.dumps(INTENT_INPUT)).json()
print(f'Intent result: {intent_result}')
sentiment_result = requests.put(
f'http://127.0.0.1:{port}/predictions/sentiment_en',
json.dumps(SENTIMENT_INPUT)).json()
print(f'Sentiment result: {sentiment_result}')
qa_result = requests.put(
f'http://127.0.0.1:{port}/predictions/qa_en',
json.dumps(QA_INPUT)).json()
print(f'Question answering result: {qa_result}')
generate_result = requests.put(
f'http://127.0.0.1:{port}/predictions/generate_en',
json.dumps(GENERATE_INPUT)).json()
print(f'Natural language generation result: {generate_result}') | _____no_output_____ | Apache-2.0 | notebooks/benchmark/torchserve.ipynb | frreiss/zero-copy-model-loading |