Part 2

Now we'll apply this to our training data, and take a look at the F1-score as a function of the hyperparameters. Here is an example of computing the F1-score for a particular choice of parameters:
```python
n_components = 3
covariance_type = 'full'
y_pred = GMMBayes(X_train, n_components, covariance_type)
f1 = metrics.f1_score(y_train, y_pred)
print(f1)
```
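The F1-score computed here is the harmonic mean of precision and recall. To make the definition concrete, here is a minimal from-scratch NumPy implementation for binary labels (a sketch shown only for illustration; in practice `metrics.f1_score` from scikit-learn is used, as above):

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```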
Source: AstroML/notebooks/10_exercise01.ipynb (repo: diego0020/va_course_2015, license: mit)
Try changing the number of components and the covariance type. To see a description of the various covariance_type options, you can type gmm.GMM? in a code cell to see the documentation. You might also wish to loop over several values of the hyperparameters and plot the learning curves for the data.

Part 3

Once you have settled on a choice of hyperparameters, it's time to evaluate the test data using this model. First we'll construct the test data as we did the training and cross-validation data above:
```python
X_test = np.zeros((test_data.size, 4), dtype=float)
X_test[:, 0] = test_data['u-g']
X_test[:, 1] = test_data['g-r']
X_test[:, 2] = test_data['r-i']
X_test[:, 3] = test_data['i-z']

y_pred_literature = (test_data['label'] == 0).astype(int)
Ntest = len(y_pred_literature)
print(Ntest)
```
Now follow the procedure above, and for the test data predict the labels using the Gaussian Naive Bayes estimator, as well as our Gaussian Mixture Bayes estimator. For simplicity, you may wish to use the Gaussian Mixture estimator to evaluate the Naive Bayes result.
```python
# variables to compute:
#   y_pred_gmm : predicted labels for X_test from the GMM Bayes model
#   y_pred_gnb : predicted labels for X_test from the Naive Bayes model
```
If the notebook is within the tutorial directory structure, the following command will load the solution:
```python
%load soln/01-05.py

print("------------------------------------------------------------------")
print("Comparison of current results with published results (Naive Bayes)")
print(metrics.classification_report(y_pred_literature, y_pred_gnb,
                                    target_names=['stars', 'QSOs']))
print("------------------------------------------------------------------")
print("Comparison of current results with published results (GMM Bayes)")
print(metrics.classification_report(y_pred_literature, y_pred_gmm,
                                    target_names=['stars', 'QSOs']))
```
k-means finds the two well-separated clusters in this case.

Problem 2 [Generating Mixed Samples]

Implement a random number generator for a random variable with the following mixture distribution:
$f(x) = 0.4N(-1,1) + 0.6N(1,1)$
Generate N=1000 samples and histogram them. Try out a k-means clustering routine (k=2) on the data.
```python
def mixture_model(mu1, mu2, s1, s2, alpha, n=1000):
    # A mixture is sampled by choosing a component for each draw with
    # probability alpha, NOT by adding the two scaled normals together
    # (a weighted sum of normals is itself a single normal, not a mixture).
    choices = np.random.rand(n) < alpha
    return np.where(choices,
                    np.random.normal(mu1, s1, n),
                    np.random.normal(mu2, s2, n))

mixture_samples = mixture_model(-1, 1, 1, 1, 0.4)

plt.scatter(range(1000), mixture_samples)
plt.hist(mixture_samples, bins=20)

y_pred = KMeans(n_clusters=2, random_state=0).fit_predict(mixture_samples.reshape(-1, 1))
plt.scatter(range(1000), mixture_samples, c=y_pred)
```
Source: 2016_Fall/EE-511/Homework3/Homework 3.ipynb (repo: saketkc/hatex, license: mit)
François Fillon

François Fillon's project will not be announced until 13 March: https://www.fillon2017.fr/projet/
```python
r = requests.get('https://www.fillon2017.fr/projet/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('a', class_='projectItem__inner')
sublinks = [tag.attrs['href'] for tag in tags]

r = requests.get('https://www.fillon2017.fr/projet/competitivite/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='singleProject__propositionItem')
len(tags)

tag = tags[0]
tag.find('div', class_='singleProject__propositionItem-content').text

for tag in tags:
    tag.find('div', class_='singleProject__propositionItem-content').text

def extract_propositions(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    tags = soup.find_all('li', class_='singleProject__propositionItem')
    return [tag.find('div', class_='singleProject__propositionItem-content').text
            for tag in tags]

extract_propositions(sublinks[0])

props_sources = {}
for sublink in sublinks:
    props_sources[sublink] = extract_propositions(sublink)

df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/francois_fillon.csv', index=False, quoting=1)
```
Source: ipynb/Text mining des programmes.ipynb (repo: flothesof/presidentielles2017, license: mit)
Marine Le Pen

Marine Le Pen's 144 pledges can be consulted here: https://www.marine2017.fr/programme/

Analysis of the site structure: apparently, the individual propositions are nested inside &lt;p&gt; tags.

```
<p>3. <strong>Permettre la représentation de tous les Français</strong> par le scrutin proportionnel à toutes les élections. À l’Assemblée nationale, la proportionnelle sera intégrale avec une prime majoritaire de 30&nbsp;% des sièges pour la liste arrivée en tête et un seuil de 5&nbsp;% des suffrages pour obtenir des élus.</p>
```

We can therefore extract these elements and then sort them.

Extracting the paragraphs: let's download the source code of the page.
```python
r = requests.get('https://www.marine2017.fr/programme/')
soup = BeautifulSoup(r.text, 'html.parser')
```
Now let's try to extract all the paragraphs, using a function that checks whether a paragraph starts with a number followed by a period (and possibly a space).
```python
pattern = re.compile(r'^\d+\.\s*')  # escape the dot: digits followed by a literal period

def filter_func(tag):
    if tag.text is not None:
        return pattern.match(tag.text) is not None
    return False

all_paragraphs = [re.split(pattern, tag.text)[1:]
                  for tag in soup.find_all('p') if filter_func(tag)]
len(all_paragraphs)

@interact
def disp_para(n=(0, len(all_paragraphs) - 1)):
    print(all_paragraphs[n])

props_sources = {}
props_sources['https://www.marine2017.fr/programme/'] = all_paragraphs
df = make_df_from_props_sources(props_sources)
df.head(10)
```
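As a quick self-contained check of this numbering pattern (the sample sentence below is illustrative, adapted from the proposition quoted earlier): the dot must be escaped so that only "digits followed by a literal period" match, and `re.split` then strips the numbering prefix.

```python
import re

pattern = re.compile(r'^\d+\.\s*')  # a number, a literal '.', optional whitespace

text = "3. Permettre la représentation de tous les Français."
# The prefix "3. " matches, and everything after it is kept by the split.
print(re.split(pattern, text)[1:])
```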
Good, we can now write this data to a text file.
df.to_csv('../projets/marine_le_pen.csv', index=False, quoting=1)
Benoît Hamon

Benoît Hamon's site does not make it easy to reach a single page with all the propositions. As a result, we have to explore three sub-categories. https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/
```python
r = requests.get('https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/')
r
soup = BeautifulSoup(r.text, 'html.parser')
all_propositions = soup.find_all(class_='Propositions-Proposition')
len(all_propositions)

p = all_propositions[0]
p.text
p.find('h1').text
p.find('p').text
```
From these propositions we can extract the essential substance:
```python
def extract_data(tag):
    """Extract the title and the content of a proposition tag."""
    subject = tag.find('h1').text
    content = tag.find('p').text
    return subject, content
```
Let's build a data table from these propositions.
```python
df = pd.DataFrame([extract_data(p) for p in all_propositions], columns=['titre', 'contenu'])
df
df[df['contenu'].str.contains('ascension')]
```
We can turn these propositions into a DataFrame.
```python
props_sources = {}
props_sources['https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/'] = df['contenu'].values.tolist()
df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/benoit_hamon.csv', index=False, quoting=1)
```
Jean-Luc Mélenchon

An unofficial version of the program can be found here: https://laec.fr/sommaire
Much like Hamon's site, it is divided into sections. Let's start with the first one.
```python
r = requests.get('https://laec.fr/chapitre/1/la-6e-republique')
soup = BeautifulSoup(r.text, 'html.parser')
sublinks = soup.find_all('a', class_='list-group-item')
sublinks
```
We can extend this way of retrieving the data to all the sub-sections:
```python
suburls = ['https://laec.fr/chapitre/1/la-6e-republique',
           'https://laec.fr/chapitre/2/proteger-et-partager',
           'https://laec.fr/chapitre/3/la-planification-ecologique',
           'https://laec.fr/chapitre/4/sortir-des-traites-europeens',
           'https://laec.fr/chapitre/5/pour-l-independance-de-la-france',
           'https://laec.fr/chapitre/6/le-progres-humain-d-abord',
           'https://laec.fr/chapitre/7/la-france-aux-frontieres-de-l-humanite']

sublinks = []
for suburl in suburls:
    r = requests.get(suburl)
    soup = BeautifulSoup(r.text, 'html.parser')
    sublinks.extend(soup.find_all('a', class_='list-group-item'))

sublinks[:5]
```
How many propositions do we find?
len(sublinks)
Let's build the full URLs.
```python
full_urls = ['https://laec.fr' + link.attrs['href'] for link in sublinks]
full_urls[:10]

full_url = full_urls[13]
#full_url = full_urls[0]
r = requests.get(full_url)
print(r.text[:800])

soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='list-group-item')
tag = tags[0]
tag.text
tag.find_all('li')
tag.p.text
"\n".join([t.text for t in tag.find_all('li')])
len(tags)
[tag.text for tag in tags]

def extract_data(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    tags = soup.find_all('li', class_='list-group-item')
    contents = []
    for tag in tags:
        if len(tag.find_all('li')) == 0:
            contents.append(tag.text)
        else:
            contents.append(tag.p.text + '\n\t' + "\n\t".join([t.text for t in tag.find_all('li')]))
    return contents

extract_data(full_url)
extract_data(full_urls[13])

props_sources = {}
for url in full_urls:
    props_sources[url] = extract_data(url)

df = make_df_from_props_sources(props_sources)
df
```
We write the result to a file.
df.to_csv('../projets/jean_luc_melenchon.csv', index=False, quoting=1)
Emmanuel Macron

First, we need to fetch the individual pages of the site.
```python
r = requests.get('https://en-marche.fr/emmanuel-macron/le-programme')
soup = BeautifulSoup(r.text, 'html.parser')
proposals = soup.find_all(class_='programme__proposal')
proposals = [p for p in proposals if 'programme__proposal--category' not in p.attrs['class']]
len(proposals)

full_urls = ["https://en-marche.fr" + p.find('a').attrs['href'] for p in proposals]

url = full_urls[1]
r = requests.get(url)
text = r.text.replace('</br>', '')
soup = BeautifulSoup(text, 'html.parser')
article_tag = soup.find_all('article', class_='l__wrapper--slim')[0]
for line in article_tag.find_all(class_='arrows'):
    print(line.text)

tag = article_tag.find_all(class_='arrows')[-1]
tag.text
tag.next_sibling

def extract_items(url):
    r = requests.get(url)
    text = r.text.replace('</br>', '')
    soup = BeautifulSoup(text, 'html.parser')
    article_tag = soup.find_all('article', class_='l__wrapper--slim')[0]
    return [line.text.strip() for line in article_tag.find_all(class_='arrows')]

extract_items(full_urls[1])
```
We extract all the propositions.
```python
propositions = [extract_items(url) for url in full_urls]
len(propositions)
full_urls[18]

@interact
def print_prop(n=(0, len(propositions) - 1)):
    print(propositions[n])

props_sources = {}
for url, props in zip(full_urls, propositions):
    props_sources[url] = props

df = make_df_from_props_sources(props_sources)
df.head()
df.iloc[0, 1]
df.to_csv('../projets/emmanuel_macron.csv', index=False, quoting=1)
```
Yannick Jadot http://avecjadot.fr/lafrancevive/
```python
r = requests.get('http://avecjadot.fr/lafrancevive/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('div', class_='bloc-mesure')
links = [tag.find('a').attrs['href'] for tag in tags]
all([link.startswith('http://avecjadot.fr/') for link in links])
```
Extracting the title from one of the pages.
```python
link = links[0]
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
soup.find('div', class_='texte-mesure').text.strip().replace('\n', ' ')

def extract_data(link):
    r = requests.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    return soup.find('div', class_='texte-mesure').text.strip().replace('\n', ' ')

extract_data(link)

all_props = [extract_data(link) for link in links]
props_sources = {}
for url, props in zip(links, all_props):
    props_sources[url] = [props]
props_sources

df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/yannick_jadot.csv', index=False, quoting=1)
```
Nicolas Dupont-Aignan
```python
r = requests.get('http://www.nda-2017.fr/themes.html')
soup = BeautifulSoup(r.text, 'html.parser')
len(soup.find_all('div', class_='theme'))
links = ['http://www.nda-2017.fr' + tag.find('a').attrs['href']
         for tag in soup.find_all('div', class_='theme')]

link = links[0]
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('div', class_='proposition')
len(tags)
tags[0].find('a').text.strip()
tags[0].find('a').attrs['href']

def extract_data(link):
    r = requests.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    tags = soup.find_all('div', class_='proposition')
    return [tag.find('a').text.strip() for tag in tags]

all_props = [extract_data(link) for link in links]
len(all_props)

props_sources = {}
for url, props in zip(links, all_props):
    props_sources[url] = props

df = make_df_from_props_sources(props_sources)
df
df.to_csv('../projets/nicolas_dupont_aignan.csv', index=False, quoting=1)
```
MOTOR (Lin Engineering)
1. Determine appropriate velocity_max (microsteps/sec)
2. Determine motor limits
3. Determine conv (microsteps/mm)
4. Determine orientation (P+; D-)
```python
# TODO: get current position for relative move
class Motor:
    def __init__(self, config_file, init=True):
        self.serial = s.Serial()  # placeholder
        with open(config_file, 'r') as f:
            self.config = yaml.load(f)
        if init:
            self.initialize()

    def initialize(self):
        self.serial = s.Serial(**self.config['serial'])  # open serial connection
        # TODO: set moving current
        # TODO: set holding current
        self.set_velocity(self.config['velocity_limit'])  # set velocity
        self.home()  # move motor to home

    def cmd(self, cmd_string, block=True):
        full_string = self.config['prefix'] + cmd_string + self.config['terminator']
        self.serial.write(full_string)
        time.sleep(0.15)
        # TODO: monitor for response?
        response = self.serial.read(self.serial.inWaiting()).decode('utf8', 'ignore')
        while block and self.is_busy():
            pass
        return response

    def is_busy(self):
        time.sleep(0.05)
        response = self.cmd('Q', False)
        return response.rfind('`') == -1

    def set_velocity(self, velocity):
        # velocity: (usteps/sec)
        if velocity > self.config['velocity_limit']:
            velocity = self.config['velocity_limit']
            print('ERR: Desired velocity exceeds velocity_limit; velocity now set to velocity_limit')
        return self.cmd('V{}R'.format(velocity))

    def halt(self):
        self.cmd('T')

    def home(self):
        return self.cmd('Z{}R'.format(self.config['ustep_max']))

    def move(self, mm, block=True):
        ustep = int(self.config['conv'] * mm)
        if ustep > self.config['ustep_max']:
            ustep = self.config['ustep_max']
            print('ERR: Desired move to {} mm exceeds max of {} mm; moving to max instead'.format(
                mm, self.config['ustep_max'] / self.config['conv']))
        if ustep < self.config['ustep_min']:
            ustep = self.config['ustep_min']
            print('ERR: Desired move to {} mm exceeds min of {} mm; moving to min instead'.format(
                mm, self.config['ustep_min'] / self.config['conv']))
        return self.cmd('A{}R'.format(ustep), block)

    def move_relative(self, mm):
        ustep = int(self.config['conv'] * mm)
        ustep_current = int(self.config['ustep_max'] / 2)  # TODO: limit movement (+ and -)
        if mm >= 0:
            if (ustep_current + ustep) > self.config['ustep_max']:
                ustep = self.config['ustep_max'] - ustep_current
                print('ERR: Desired move of +{} mm exceeds max of {} mm; moving to max instead'.format(
                    mm, self.config['ustep_max'] / self.config['conv']))
            cmd_string = 'P{}R'.format(ustep)
        else:
            if (ustep_current + ustep) < self.config['ustep_min']:
                ustep = self.config['ustep_min'] - ustep_current
                print('ERR: Desired move of {} mm exceeds min of {} mm; moving to min instead'.format(
                    mm, self.config['ustep_min'] / self.config['conv']))
            ustep = -1 * ustep
            cmd_string = 'D{}R'.format(ustep)
        return self.cmd(cmd_string)

    def exit(self):
        self.serial.close()
```
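The class above assumes a YAML config with at least the following keys. The key names are taken from the code; the values here are made-up placeholders, not real hardware settings:

```python
# Hypothetical structure of the config the Motor class expects; placeholder values.
motor_config = {
    'serial': {'port': '/dev/ttyUSB0', 'baudrate': 9600, 'timeout': 0.1},  # passed to serial.Serial(**...)
    'prefix': '/1',           # prepended to every command string
    'terminator': '\r\n',     # appended to every command string
    'velocity_limit': 6000,   # microsteps/sec
    'conv': 400,              # microsteps per mm
    'ustep_min': 0,           # lower travel limit in microsteps
    'ustep_max': 40000,       # upper travel limit in microsteps
}

# A 10 mm move then translates to microsteps the same way move() does:
ustep = int(motor_config['conv'] * 10)
```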
Source: notebooks/20170301_Test.ipynb (repo: FordyceLab/AcqPack, license: mit)
ASI Controller (Applied Scientific Instrumentation)
- Set hall effect sensors to appropriate limits
- Determine orientation (X+/-, Y+/-)
```python
# TODO: fix serial.read encoding
class ASI_Controller:
    def __init__(self, config_file, init=True):
        self.serial = s.Serial()  # placeholder
        with open(config_file, 'r') as f:
            self.config = yaml.load(f)
        if init:
            self.initialize()

    def initialize(self):
        self.serial = s.Serial(**self.config['serial'])  # open serial connection
        self.cmd_xy('mc x+ y+')  # enable motor control for xy
        self.cmd_z('mc z+')      # enable motor control for z
        print("Initializing stage...")
        self.move_xy(2000, -2000)  # move to switch limits (bottom right)
        self.r_xy(-0.5, 0.5)       # move from switch limits 0.5 mm

    def cmd(self, cmd_string):
        full_string = self.config['prefix'] + cmd_string + self.config['terminator']
        self.serial.write(full_string)
        time.sleep(0.05)
        return self.serial.read(self.serial.inWaiting())

    def halt(self):
        self.halt_xy()
        self.halt_z()

    # XY ----------------------------------------------
    def cmd_xy(self, cmd_string, block=True):
        response = self.cmd('2h ' + cmd_string)
        while block and self.is_busy_xy():
            time.sleep(0.05)
        return response

    def is_busy_xy(self):
        status = self.cmd('2h STATUS')[0]
        return status == 'B'

    def halt_xy(self):
        self.cmd_xy('HALT', False)

    def move_xy(self, x_mm, y_mm):
        conv = self.config['conv']
        xStr = 'x=' + str(float(x_mm) * conv)
        yStr = 'y=' + str(float(y_mm) * conv)
        return self.cmd_xy(' '.join(['m', xStr, yStr]))

    def r_xy(self, x_mm, y_mm):
        conv = self.config['conv']
        xStr = 'x=' + str(float(x_mm) * conv)
        yStr = 'y=' + str(float(y_mm) * conv)
        return self.cmd_xy(' '.join(['r', xStr, yStr]))

    # Z -----------------------------------------------
    def cmd_z(self, cmd_string, block=True):
        while block and self.is_busy_z():
            time.sleep(0.3)
        return self.cmd('1h ' + cmd_string)

    def is_busy_z(self):
        status = self.cmd('1h STATUS')
        return status[0] == 'B'

    def halt_z(self):
        self.cmd_z('HALT', False)

    def move_z(self, z_mm):
        conv = self.config['conv']
        zStr = 'z=' + str(float(z_mm) * conv)
        return self.cmd_z(' '.join(['m', zStr]))

    def r_z(self, z_mm):
        conv = self.config['conv']
        zStr = 'z=' + str(float(z_mm) * conv)
        return self.cmd_z(' '.join(['r', zStr]))

    def exit(self):
        self.serial.close()
```
Autosipper
```python
# I: filepath of delimited file
# P: detect delimiter/header, read file accordingly
# O: list of records (no header)
def read_delim(filepath):
    with open(filepath, 'r') as f:
        dialect = csv.Sniffer().sniff(f.read(1024))
        f.seek(0)
        has_header = csv.Sniffer().has_header(f.read(1024))
        f.seek(0)
        reader = csv.reader(f, dialect)
        if has_header:
            next(reader)
        return [line for line in reader]

def read_delim_pd(filepath):
    with open(filepath) as f:
        header = 0 if csv.Sniffer().has_header(f.read(1024)) else None
        f.seek(0)
        return pd.read_csv(f, header=header, sep=None, engine='python')

def lookup(table, columns, values):
    temp_df = pd.DataFrame(data=[values], columns=columns, copy=False)
    return table.merge(temp_df, copy=False)

class Autosipper:
    def __init__(self, z, xy):
        self.Z = z  # must be initialized first!
        self.XY = xy
        while True:
            fp = input('Type in plate map file:')
            try:
                self.load_platemap(fp)  # load platemap
                break
            except IOError:
                print('No file', fp)
        input('Place dropper above reference (press enter when done)')
        self.XY.cmd_xy('here x y')  # establish current position as 0,0

    def load_platemap(self, filepath):
        self.platemap = read_delim_pd(filepath)

    def go_to(self, columns, values):
        x1, y1, z1 = np.array(lookup(self.platemap, columns, values)[['x', 'y', 'z']])[0]
        self.Z.home()            # move needle to travel height (blocking)
        self.XY.move_xy(x1, y1)  # move stage (blocking)
        self.Z.move(z1)          # move needle to bottom of well (blocking)

    def exit(self):
        self.XY.exit()
        self.Z.exit()

d = Autosipper(Motor('config/le_motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
d.platemap

fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(d.platemap['x'], d.platemap['y'], d.platemap['z'], s=5)
plt.show()

d.Z.move(10)
d.XY.r_xy(0, 5)
d.go_to(['name'], 'A02')
d.exit()
```
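The read_delim function leans on csv.Sniffer to guess both the delimiter and whether a header row is present. A self-contained illustration on an in-memory sample (the data below is made up):

```python
import csv
import io

sample = "name\tx\ty\nA01\t1.0\t2.0\nA02\t3.0\t4.0\n"

dialect = csv.Sniffer().sniff(sample)          # detects the tab delimiter
has_header = csv.Sniffer().has_header(sample)  # detects the header row heuristically

rows = list(csv.reader(io.StringIO(sample), dialect))
if has_header:
    rows = rows[1:]  # drop the header, as read_delim does
print(rows)
```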
Procedure
- Primed control
- Primed chip with BSA (3.8 psi)
- Inlet tree
- Device w/ outlet open
- Device w/ outlet closed (to remove air)
- Opened outlet / closed neck valves
- Passivated w/ BSA under flow for 1 hr
d = Autosipper(Motor('config/le_motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
Prime PEEK manually
- Put PEEK in W2
- Open Valves(vacuum_in, inlet)
- Open CTRL(vacuum_in)
d.exit()
I transferred the lottery odds data from the PDF reports from the last three years into a CSV for easy reading into Python.
```python
data = pd.read_csv('./hr100_odds.csv', header=0, index_col=0)
data

# calculate total number of tickets per lottery year
total_tix = []
for x in ['2017', '2018', '2019']:
    total_tix.append((data[x] * data.index).sum())

plt.plot(total_tix)
plt.scatter([0, 1, 2], total_tix)
plt.suptitle('total tickets', size=14)
plt.xticks([0, 1, 2], [2017, 2018, 2019])
```
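The eyeballed "increasing linearly" trend can be quantified with an ordinary least-squares fit. Since the CSV contents are not reproduced here, this sketch uses made-up ticket totals (chosen to echo the priors used in the Bayesian model below):

```python
import numpy as np

years = np.array([0, 1, 2])                    # 2017, 2018, 2019
totals = np.array([7000.0, 12000.0, 17000.0])  # hypothetical total_tix values

# degree-1 polynomial fit: slope is tickets added per year
slope, intercept = np.polyfit(years, totals, 1)
print(slope, intercept)
```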
Source: scripts/hardrock 100 entry model.ipynb (repo: dspak/ultradata, license: mpl-2.0)
The total number of tickets appears to be increasing linearly, so a linear model seems like a decent first approximation of the process. We'll start with that in the model below.

Model assumptions:
- The total number of tickets increases linearly over time. This is likely not true; I expect it will plateau at some number, or even grow at a faster-than-linear rate. More modeling to come on this.
- Each year, I model my number of tickets as a binomial distribution with p = 0.90, meaning each year I expect there's a 90% chance I'll qualify to enter the lottery. That is a huge assumption, essentially saying that in 5 years I think I'll be healthy, fit, and have enough time to run a 100-mile race. Oof...

The model:
- A normal linear regression model infers the slope and intercept from the available data.
- That model is used to build a distribution of what the total number of tickets will look like each year into the future.
- My number of tickets each year is a binomial distribution with p = 0.90 and n = year - 2019.
- The output of the model is a geometric distribution showing how many draws from the lottery are necessary until my name gets picked the first time. Since there are (currently) 45 draws for newbies, if that number is less than 45, I expect to get picked!
```python
# years to predict
ytp = np.array([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18])
ytp_real = 2017 + ytp
print(ytp_real)

b0 = pymc.Normal('b0', 7000, 0.00001)  # intercept of model of total tickets in lottery
b1 = pymc.Normal('b1', 5000, 0.00001)  # slope of total_tickets linear model
err = pymc.Uniform('err', 0, 500)      # error on total_tickets model
x = np.array([0, 1, 2])

# distribution of number of tickets per year with 0.9 prob of drawing 1 tix per year
my_tickets = pymc.Binomial('num tix', p=0.9, n=np.array([ytp - 1]))

# the model
@pymc.deterministic
def total_pool_pred(b0=b0, b1=b1, x=x):
    return b0 + b1 * x

# estimate values of the model based on data
total_pool = pymc.Normal('y', total_pool_pred, err, value=np.array(total_tix), observed=True)

# use fitted params to estimate population size at each year
pop_size = pymc.Normal('population', mu=b1 * ytp + b0, tau=err, size=len(ytp))

def chance_final(foonum=my_tickets, pop_size=pop_size):
    tmp = (2**foonum) / pop_size
    tmp[tmp > 1] = 1
    return tmp

chances = pymc.Deterministic(name='chances', eval=chance_final,
                             parents={"foonum": my_tickets, "pop_size": pop_size}, doc='foo')

# how many draws until success
final = pymc.Geometric('final_odds', p=chances)

model = pymc.Model([total_pool_pred, b0, b1, total_pool, err, x, pop_size, chances, my_tickets, final])
mcmc = pymc.MCMC(model)
mcmc.sample(100000, 20000)

fo_central = final.stats()['quantiles'][50]
fo_ub = final.stats()['95% HPD interval'][1]
fo_lb = final.stats()['95% HPD interval'][0]

plt.figure(figsize=[7.5, 7.5])
plt.suptitle('number of draws needed to pull my name', size=14)
plt.plot(fo_central, linewidth=3)
plt.plot(fo_ub, c='grey', linewidth=2)
plt.plot(fo_lb, c='grey', linewidth=2)
plt.fill_between(np.arange(0, len(ytp_real + 1), 1), fo_ub, fo_lb, color='grey', alpha=0.25)
plt.plot([0, 15], [45, 45], c='red')
plt.xticks(np.arange(0, len(ytp_real + 1), 1), ytp_real, rotation=45)
plt.ylim([0, 250])
plt.xlabel('year')
plt.ylabel('number of draws')
```
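The geometric step of the model has a closed form worth sanity-checking: with per-draw success probability p, the chance of being drawn within d picks is 1 - (1 - p)**d. This sketch mirrors the chance_final calculation (tickets double each qualifying year, capped at probability 1); the specific numbers are illustrative, not taken from the data:

```python
def chance_within(num_tickets_years, pool_size, draws=45):
    # per-draw probability, mirroring chance_final: 2**k tickets out of the pool
    p = min(2.0**num_tickets_years / pool_size, 1.0)
    # P(first success within `draws` geometric trials)
    return 1 - (1 - p)**draws

# illustrative only: 5 qualifying years in a 30000-ticket pool
print(chance_within(5, 30000))
```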
The figure above shows the mean (blue line) and 95% HPD (grey shaded area) of the distribution of the number of draws necessary to pull my name each year in the HR100 lottery. The red horizontal line is at 45, the number of slots in the lottery for newcomers. So in 2029, when the mean of the distribution passes below 45 on the y-axis, I have a better than 50% chance of getting in. This is shown in the table below as well.
```python
tmp_perc = []
for x in range(len(ytp)):
    tmp_perc.append(round(sum(np.mean(mcmc.trace('final_odds')[:], 1)[:, x] <= 45) / 80000.0, 4) * 100)

pd.DataFrame(index=ytp_real, data={'percent chance': tmp_perc})
```
Gradient Ascent

The function findMaximum that is defined below takes four arguments:
- f is a function of the form $\texttt{f}: \mathbb{R}^n \rightarrow \mathbb{R}$. It is assumed that the function f is <font color="blue">concave</font> and therefore there is only one global maximum.
- gradF is the gradient of the function f.
- start is a numpy array of numbers that is used to start the search for a maximum.
- eps is a small floating point number. This number controls the precision: if the values of f change by less than eps, the algorithm stops.

The function findMaximum returns a triple of values of the form
$$ (x_{max}, \texttt{fx}, \texttt{cnt}) $$
- $x_{max}$ is an approximation of the position of the maximum,
- $\texttt{fx}$ is equal to $\texttt{f}(x_{max})$,
- $\texttt{cnt}$ is the number of iterations that have been performed.

The algorithm computes a sequence $(x_n)_{n}$ that is defined inductively:
- $x_0 := \texttt{start}$,
- $x_{n+1} := x_n + \alpha_n \cdot \nabla f(x_n)$.

The algorithm given below adjusts the <font color="blue">learning rate</font> $\alpha$ dynamically: if $f(x_{n+1}) > f(x_n)$, then the learning rate is increased by a factor of $1.2$. Otherwise, the step is rejected and the learning rate is decreased by a factor of $\frac{1}{2}$. This way, the algorithm determines a suitable learning rate by itself.
```python
import numpy as np

def findMaximum(f, gradF, start, eps):
    x = start
    fx = f(x)
    alpha = 0.1  # learning rate
    cnt = 0      # number of iterations
    while True:
        cnt += 1
        xOld, fOld = x, fx
        x += alpha * gradF(x)
        fx = f(x)
        print(f'cnt = {cnt}, f({x}) = {fx}')
        print(f'gradient = {gradF(x)}')
        if abs(x - xOld) <= abs(x) * eps:
            return x, fx, cnt
        if fx <= fOld:
            # f did not increase, so the learning rate is too high
            alpha *= 0.5  # decrease the learning rate
            print(f'decrementing: alpha = {alpha}')
            x, fx = xOld, fOld  # reset x and retry the step
        else:
            # f has increased
            alpha *= 1.2  # increase the learning rate
            print(f'incrementing: alpha = {alpha}')
```
Source: Python/6 Classification/Gradient-Ascent.ipynb (repo: karlstroetmann/Artificial-Intelligence, license: gpl-2.0)
We will try to find the maximum of the function $$ f(x) := \sin(x) - \frac{x^2}{2} $$
```python
def f(x):
    return np.sin(x) - x**2 / 2
```
Let us plot this function.
```python
import matplotlib.pyplot as plt
import seaborn as sns

X = np.arange(-0.5, 1.8, 0.01)
Y = f(X)
plt.figure(figsize=(15, 10))
sns.set(style='whitegrid')
plt.title('lambda x: sin(x) - x**2/2')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-0.5, 1.81, step=0.1))
plt.plot(X, Y, color='b')
```
Clearly, this function has a maximum somewhere between 0.7 and 0.8. Let us use gradient ascent to find it. In order to do so, we have to provide the derivative of this function. We have $$ \frac{\mathrm{d}f}{\mathrm{d}x} = \cos(x) - x. $$
```python
def fs(x):
    return np.cos(x) - x
```
Let us plot the derivative together with the function.
```python
X2 = np.arange(0.4, 1.1, 0.01)
Ys = fs(X2)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('lambda x: sin(x) - x**2/2 and its derivative')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-0.5, 1.81, step=0.1))
plt.yticks(np.arange(-0.6, 0.61, step=0.1))
plt.plot(X, Y, color='b')
plt.plot(X2, Ys, color='r')

x_max, _, cnt = findMaximum(f, fs, 0.0, 1e-15)
x_max, cnt
```
The maximum seems to be at $x \approx 0.739085$. Let's check the derivative at this position.
fs(x_max)
Input functions to read JPEG images

The key difference between this notebook and the MNIST one is in the input function. In the input function here, we are doing the following:
* Reading JPEG images, rather than 2D integer arrays.
* Reading in batches of batch_size images, rather than slicing our in-memory structure to be batch_size images.
* Resizing the images to the expected HEIGHT, WIDTH. Because this is a real-world dataset, the images are of different sizes. We need to preprocess the data to, at the very least, resize them to a constant size.

Run as a Python module

Since we want to run our code on Cloud ML Engine, we've packaged it as a Python module. The model.py and task.py files containing the model code are in <a href="flowersmodel">flowersmodel</a>.

Complete the TODOs in model.py before proceeding! Once you've completed the TODOs, run it locally for a few steps to test the code.
```bash
%%bash
rm -rf flowersmodel.tar.gz flowers_trained
gcloud ai-platform local train \
    --module-name=flowersmodel.task \
    --package-path=${PWD}/flowersmodel \
    -- \
    --output_dir=${PWD}/flowers_trained \
    --train_steps=5 \
    --learning_rate=0.01 \
    --batch_size=2 \
    --model=$MODEL_TYPE \
    --augment \
    --train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \
    --eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv
```
Source: courses/machine_learning/deepdive/08_image/labs/flowers_fromscratch.ipynb (repo: turbomanage/training-data-analyst, license: apache-2.0)
Now, let's do it on ML Engine. Note the --model parameter
```bash
%%bash
OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}
JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
    --region=$REGION \
    --module-name=flowersmodel.task \
    --package-path=${PWD}/flowersmodel \
    --job-dir=$OUTDIR \
    --staging-bucket=gs://$BUCKET \
    --scale-tier=BASIC_GPU \
    --runtime-version=$TFVERSION \
    -- \
    --output_dir=$OUTDIR \
    --train_steps=1000 \
    --learning_rate=0.01 \
    --batch_size=40 \
    --model=$MODEL_TYPE \
    --augment \
    --batch_norm \
    --train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \
    --eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv
```
courses/machine_learning/deepdive/08_image/labs/flowers_fromscratch.ipynb
turbomanage/training-data-analyst
apache-2.0
Monitor training with TensorBoard To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row. TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests. You may close the TensorBoard tab when you are finished exploring. Deploying and predicting with model Deploy the model:
%%bash MODEL_NAME="flowers" MODEL_VERSION=${MODEL_TYPE} MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1) echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" #gcloud ai-platform versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME} #gcloud ai-platform models delete ${MODEL_NAME} gcloud ai-platform models create ${MODEL_NAME} --regions $REGION gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
courses/machine_learning/deepdive/08_image/labs/flowers_fromscratch.ipynb
turbomanage/training-data-analyst
apache-2.0
Send it to the prediction service
%%bash gcloud ai-platform predict \ --model=flowers \ --version=${MODEL_TYPE} \ --json-instances=./request.json
courses/machine_learning/deepdive/08_image/labs/flowers_fromscratch.ipynb
turbomanage/training-data-analyst
apache-2.0
interactive plotting line plots With Plotly, you can turn data series on and off by clicking on the legend
df.iplot()
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
bar plot
df2.iplot(kind='bar')
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
box plot
df.iplot(kind='box')
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
surface plot
df3 = pd.DataFrame({'x':[1,2,3,4,5], 'y':[11,22,33,44,55], 'z':[5,4,3,2,1]}) df3 df3.iplot(kind='surface')
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
histograms
df.iplot(kind='hist',bins=50)
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
spread plots Used to show the spread in data values between two columns / variables.
df[['A','B']].iplot(kind='spread')
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
bubble scatter plots same as scatter, but you can easily size the dots by another column
df.iplot(kind='bubble',x='A', y='B', size='C')
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
scatter matrix This is similar to seaborn's pairplot
df.scatter_matrix()
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
2. Normalization between 0 and 1
x_train = df_x_train.values x_train = (x_train - x_train.min()) / (x_train.max() - x_train.min()) y_train = df_y_train.values y_train_cat = y_train x_val = df_x_val.values x_val = (x_val - x_val.min()) / (x_val.max() - x_val.min()) y_val = df_y_val.values y_eval = y_val
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
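Note that the cell above scales with a single global minimum and maximum over the whole array. Depending on the features, per-column scaling may be what is wanted instead; here is a small numpy sketch of the difference (the toy matrix is made up):

```python
import numpy as np

x = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 400.0]])

# Global scaling, as in the cell above: one min/max shared by every feature
x_global = (x - x.min()) / (x.max() - x.min())

# Per-feature scaling: each column mapped to [0, 1] independently
x_percol = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

print(x_percol[:, 0])  # [0.  0.5 1. ]
```

With global scaling, the small-valued first column barely moves away from 0; per-column scaling spreads each feature over the full [0, 1] range.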
Train Neural Net Due to a tight schedule we will not perform any cross-validation, so our accuracy estimates may lack a little generalization power; we shall live with that. An alternative experimental setup would be to loop over several different samples of the dataframe in the preprocessing steps, repeat all the steps below, and finally average the results. The dimensions of the hidden layers are set somewhat arbitrarily, but some runs have shown that 30 is a good number. The input_dim variable is set to 24 because initially there are 24 features. The aim is to build the best possible neural net. Optimizer RMSprop is a mini-batch gradient descent algorithm which divides the gradient by a running average of its recent magnitude. More information: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf The weights are initialized from a normal distribution with mean 0 and standard deviation 0.05.
input_data = Input(shape=(input_dim,), dtype='float32', name='main_input') hidden_layer1 = Dense(hidden1_dim, activation='relu', input_shape=(input_dim,), kernel_initializer='normal')(input_data) dropout1 = Dropout(dropout)(hidden_layer1) hidden_layer2 = Dense(hidden2_dim, activation='relu', input_shape=(input_dim,), kernel_initializer='normal')(dropout1) dropout2 = Dropout(dropout)(hidden_layer2) output_layer = Dense(num_classes, activation='softmax', kernel_initializer='normal')(dropout2) model = Model(inputs=input_data, outputs=output_layer) model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy']) plot_model(model, to_file='images/robo1_nn.png', show_shapes=True, show_layer_names=True) IPython.display.Image("images/robo1_nn.png") model.fit(x_train, y_train, batch_size=batchsize, epochs=epochsize, verbose=0, shuffle=shuffle) nn_score = model.evaluate(x_val, y_val)[1] print(nn_score) # Compute confusion matrix cnf_matrix = confusion_matrix(y_eval, model.predict(x_val).argmax(axis=-1)) np.set_printoptions(precision=2) # Plot normalized confusion matrix plt.figure(figsize=(20,10)) plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True, title='Normalized confusion matrix')
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
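As a side note, the overall accuracy can be read directly off a confusion matrix, and the normalized plot above is just the row-normalized version of the raw counts. Here is a tiny numpy sketch with made-up 3-class counts:

```python
import numpy as np

# Rows: true class, columns: predicted class (made-up counts)
cnf = np.array([[50,  2,  3],
                [ 4, 40,  6],
                [ 1,  5, 44]])

# Accuracy = correctly classified (diagonal) over all samples
accuracy = np.trace(cnf) / cnf.sum()

# Row normalization: each row becomes P(predicted | true), as in the plot above
row_normalized = cnf / cnf.sum(axis=1, keepdims=True)
print(accuracy)  # ~0.8645
```

Each row of `row_normalized` sums to 1, which is why the diagonal of the normalized plot directly shows per-class recall.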
One can easily see that our results are better. So we go further with that result and check how good our SAE might become. Stacked Autoencoder For this dataset we decided to go with a 24-16-8-16-24 architecture. First layer
input_img = Input(shape=(input_dim,)) encoded1 = Dense(16, activation='relu')(input_img) decoded1 = Dense(input_dim, activation='relu')(encoded1) class1 = Dense(num_classes, activation='softmax')(decoded1) autoencoder1 = Model(input_img, class1) autoencoder1.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy']) encoder1 = Model(input_img, encoded1) encoder1.compile(optimizer=RMSprop(), loss='binary_crossentropy') autoencoder1.fit(x_train , y_train , epochs=50 , batch_size=24 , shuffle=True , verbose=False ) score1 = autoencoder1.evaluate(x_val, y_val, verbose=0) print('Test accuracy:', score1[1])
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
Second layer
first_layer_code = encoder1.predict(x_train) encoded_2_input = Input(shape=(16,)) encoded2 = Dense(8, activation='relu')(encoded_2_input) decoded2 = Dense(16, activation='relu')(encoded2) class2 = Dense(num_classes, activation='softmax')(decoded2) autoencoder2 = Model(encoded_2_input, class2) autoencoder2.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy']) encoder2 = Model(encoded_2_input, encoded2) encoder2.compile(optimizer=RMSprop(), loss='binary_crossentropy') autoencoder2.fit(first_layer_code , y_train , epochs=50 , batch_size=24 , shuffle=True , verbose=False ) first_layer_code_val = encoder1.predict(x_val) score2 = autoencoder2.evaluate(first_layer_code_val, y_val, verbose=0) print('Test loss:', score2[0]) print('Test accuracy:', score2[1])
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
Data Reconstruction with SAE
sae_encoded1 = Dense(16, activation='relu')(input_img) sae_encoded2 = Dense(8, activation='relu')(sae_encoded1) sae_decoded1 = Dense(16, activation='relu')(sae_encoded2) sae_decoded2 = Dense(24, activation='sigmoid')(sae_decoded1) sae = Model(input_img, sae_decoded2) sae.layers[1].set_weights(autoencoder1.layers[1].get_weights()) sae.layers[2].set_weights(autoencoder2.layers[1].get_weights()) sae.compile(loss='binary_crossentropy', optimizer=RMSprop()) sae.fit(x_train , x_train , epochs=50 , batch_size=24 , shuffle=True , verbose=False ) score4 = sae.evaluate(x_val, x_val, verbose=0) print('Test loss:', score4)
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
Classification
input_img = Input(shape=(input_dim,)) sae_classifier_encoded1 = Dense(16, activation='relu')(input_img) sae_classifier_encoded2 = Dense(8, activation='relu')(sae_classifier_encoded1) class_layer = Dense(num_classes, activation='softmax')(sae_classifier_encoded2) sae_classifier = Model(inputs=input_img, outputs=class_layer) sae_classifier.layers[1].set_weights(autoencoder1.layers[1].get_weights()) sae_classifier.layers[2].set_weights(autoencoder2.layers[1].get_weights()) sae_classifier.compile(loss='binary_crossentropy', optimizer=RMSprop(), metrics=['accuracy']) sae_classifier.fit(x_train, y_train , epochs=50 , verbose=True , batch_size=24 , shuffle=True) score5 = sae_classifier.evaluate(x_val, y_val) print('Test accuracy:', score5[1])
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
Plot a two dimensional representation of the data
third_layer_code = encoder2.predict(encoder1.predict(x_train)) encoded_4_input = Input(shape=(8,)) encoded4 = Dense(2, activation='sigmoid')(encoded_4_input) decoded4 = Dense(8, activation='sigmoid')(encoded4) class4 = Dense(num_classes, activation='softmax')(decoded4) autoencoder4 = Model(encoded_4_input, class4) autoencoder4.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy']) encoder4 = Model(encoded_4_input, encoded4) encoder4.compile(optimizer=RMSprop(), loss='binary_crossentropy') autoencoder4.fit(third_layer_code , y_train , epochs=100 , batch_size=24 , shuffle=True , verbose=True ) third_layer_code_val = encoder2.predict(encoder1.predict(x_val)) score4 = autoencoder4.evaluate(third_layer_code_val, y_val, verbose=0) print('Test loss:', score4[0]) print('Test accuracy:', score4[1]) fourth_layer_code = encoder4.predict(encoder2.predict(encoder1.predict(x_train))) value1 = [x[0] for x in fourth_layer_code] value2 = [x[1] for x in fourth_layer_code] y_classes = y_train_cat data = {'value1': value1, 'value2': value2, 'class' : y_classes} data = pd.DataFrame.from_dict(data) data.head() groups = data.groupby('class') # Plot fig, ax = plt.subplots(figsize=(20,10)) # plt.figure(figsize=(20,10)) ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling for name, group in groups: ax.plot(group.value1, group.value2, marker='o', linestyle='', ms=3, label=name, alpha=0.7) ax.legend() plt.show()
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
Here's how long it takes to drop 25 meters.
t_final = get_last_label(results) t_final
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
I'll run Phase 1 again so we can get the final state.
system1 = make_system(params) system1 event_func.direction=-1 results1, details1 = run_ode_solver(system1, slope_func1, events=event_func) details1.message
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
Now I need the final time, position, and velocity from Phase 1.
t_final = get_last_label(results1) t_final init2 = results1.row[t_final] init2
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
And that gives me the starting conditions for Phase 2.
system2 = System(system1, t_0=t_final, init=init2) system2
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.
event_func.direction=+1 results2, details2 = run_ode_solver(system2, slope_func2, events=event_func) details2.message t_final = get_last_label(results2) t_final
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
Now we can run both phases and get the results in a single TimeFrame.
results = simulate_system2(params); plot_position(results) params_no_cord = Params(params, m_cord=1*kg) results_no_cord = simulate_system2(params_no_cord); plot_position(results, label='m_cord = 75 kg') plot_position(results_no_cord, label='m_cord = 1 kg') savefig('figs/jump.png') min(results_no_cord.y) diff = min(results.y) - min(results_no_cord.y) diff
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
Example 1
model = ConcreteModel(name="Getting started") model.x = Var(bounds=(-10, 10)) model.obj = Objective(expr=model.x) model.const_1 = Constraint(expr=model.x >= 5) # @tail: opt = SolverFactory('glpk') # "glpk" or "cbc" res = opt.solve(model) # solves and updates instance model.display() print() print("Optimal solution: ", value(model.x)) print("Cost of the optimal solution: ", value(model.obj)) # @:tail
nb_dev_python/python_pyomo_getting_started_1.ipynb
jdhp-docs/python_notebooks
mit
Example 2 $$ \begin{align} \max_{x_1,x_2} & \quad 4 x_1 + 3 x_2 \\ \text{s.t.} & \quad x_1 + x_2 \leq 100 \\ & \quad 2 x_1 + x_2 \leq 150 \\ & \quad 3 x_1 + 4 x_2 \leq 360 \\ & \quad x_1, x_2 \geq 0 \end{align} $$ ``` Optimal total cost is: 350.0 x_1 = 50. x_2 = 50. ```
model = ConcreteModel(name="Getting started") model.x1 = Var(within=NonNegativeReals) model.x2 = Var(within=NonNegativeReals) model.obj = Objective(expr=4. * model.x1 + 3. * model.x2, sense=maximize) model.ineq_const_1 = Constraint(expr=model.x1 + model.x2 <= 100) model.ineq_const_2 = Constraint(expr=2. * model.x1 + model.x2 <= 150) model.ineq_const_3 = Constraint(expr=3. * model.x1 + 4. * model.x2 <= 360) # @tail: opt = SolverFactory('glpk') # "glpk" or "cbc" results = opt.solve(model) # solves and updates instance model.display() print() print("Optimal solution: ({}, {})".format(value(model.x1), value(model.x2))) print("Gain of the optimal solution: ", value(model.obj)) # @:tail
nb_dev_python/python_pyomo_getting_started_1.ipynb
jdhp-docs/python_notebooks
mit
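As a quick sanity check on the expected output above: the optimum sits at the vertex where the first two constraints are binding, which can be verified independently of Pyomo with numpy:

```python
import numpy as np

# Solve the two binding constraints: x1 + x2 = 100 and 2*x1 + x2 = 150
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([100.0, 150.0])
x1, x2 = np.linalg.solve(A, b)

objective = 4.0 * x1 + 3.0 * x2
print(x1, x2, objective)  # 50.0 50.0 350.0
# The remaining constraint also holds: 3*50 + 4*50 = 350 <= 360
```

This confirms the solver's reported optimum of 350 at (50, 50) by pure linear algebra.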
Problem 1.2.a (2 points) Given $\pi(main) = A$, formulate $V_{\pi}(main)$ and $V_{\pi}(selesai)$. $$ V_{\pi}(main) = ... $$ $$ V_{\pi}(selesai) = ... $$ Problem 1.2.b (2 points) Implement the value iteration algorithm from the formulas above to obtain $V_{\pi}(main)$ and $V_{\pi}(selesai)$.
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 1.3 (2 points) With $\pi(main) = A$, write the formula for $Q_{\pi}(main, B)$ and determine its value. Your answer here Problem 1.4 (1 point) What is the value of $\pi_{opt}(main)$? Your answer here 2. Game Playing Consider the game below. Given a threshold $N$, the game starts from the value 1. The players take turns choosing either to add 2 to the value or to multiply the value by 1.1. The player who exceeds the threshold loses.
import numpy as np class ExplodingGame(object): def __init__(self, N): self.N = N # state = (player, number) def start(self): return (+1, 1) def actions(self, state): player, number = state return ['+', '*'] def succ(self, state, action): player, number = state if action == '+': return (-player, number + 2) elif action == '*': return (-player, np.ceil(number * 1.1)) assert False def is_end(self, state): player, number = state return number > self.N def utility(self, state): player, number = state assert self.is_end(state) return player * float('inf') def player(self, state): player, number = state return player def add_policy(game, state): action = '+' print(f"add policy: state {state} => action {action}") return action def multiply_policy(game, state): action = '*' print(f"multiply policy: state {state} => action {action}") return action
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 2.1 (2 points) Implement a random policy that chooses an action with a 50%:50% probability ratio.
def random_policy(game, state): pass
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 2.2 (3 points) Implement the minimax policy function.
def minimax_policy(game, state): pass
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 2.3 (2 points) Implement the expectimax policy function to play against the random policy defined in problem 2.1.
def expectimax_policy(game, state): pass # Test case game = ExplodingGame(N=10) policies = { +1: add_policy, -1: multiply_policy } state = game.start() while not game.is_end(state): # Who controls this state? player = game.player(state) policy = policies[player] # Ask policy to make a move action = policy(game, state) # Advance state state = game.succ(state, action) print(f"Utility at the end of the game: {game.utility(state)}")
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 2.4 (3 points) State the best policy to play against: random policy expectimax policy minimax policy Your answer here 3. Bayesian Network Imagine you are a climatologist working for BMKG in the year 3021 who is studying global warming. You do not know the weather records for Jakarta in 2021, but you have Mr. Ali's diary, which reports how many ice creams Mr. Ali ate each day during the dry season. Your goal is to estimate the day-to-day weather: hot (H) or cool (C). In other words, given observations $O$ (integers representing the number of ice creams Mr. Ali ate on a given day), find the sequence of weather states $Q$ for those days. The variables are defined as follows: $Q \in \{H, C\}$ and $O \in \{1, 2, 3\}$. For this part, you are asked to implement the code using the pomegranate library. This problem is adapted from the paper by Eisner (2002). Hint: Look back at your assignment 4. Usage of the pomegranate library is very similar to the implementation in that assignment.
!pip install pomegranate from pomegranate import * observed = [2,3,3,2,3,2,3,2,2,3,1,3,3,1,1,1,2,1,1,1,3,1,2,1,1,1,2,3,3,2,3,2,2]
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 3.1 (2 points) Given that \begin{align} P(1|H) = 0.2 \\ P(2|H) = 0.4 \\ P(3|H) = 0.4 \end{align} and \begin{align} P(1|C) = 0.5 \\ P(2|C) = 0.4 \\ P(3|C) = 0.1 \end{align} Define the emission probabilities.
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 3.2 (2 points) Given that \begin{align} P(Q_t=H|Q_{t-1}=H) &= 0.6 \\ P(Q_t=C|Q_{t-1}=H) &= 0.4 \\ P(Q_t=H|Q_{t-1}=C) &= 0.5 \\ P(Q_t=C|Q_{t-1}=C) &= 0.5 \end{align} Define the transition probabilities.
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 3.3 (2 points) Given that $$ P(Q_1 = H) = 0.8 $$ Define the initial probability.
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 3.4 (2 points) What is the log probability of the observations (observed) above?
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 3.5 (2 points) Show the most likely sequence of $Q$.
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Let's see how much time is necessary for 70,000,000 iterations instead of 100,000 iterations.
tm = time.time() C0 = bsm(S0=105,r=0.06,sigma=0.22,T=1.0,K=109,R = 70000000 , seed=500) pm = time.time() - tm print("Value of European Call Option: {0:.4g}".format(C0)+" - time[{0:.4g} secs]".format(pm))
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Let's see how we can speed up the computation with the numexpr package.
import numexpr as ne def bsm_ne(S0,r,sigma,T,K,R = 70000000 , seed=500): np.random.seed(seed) z = np.random.standard_normal(R) ST = ne.evaluate('S0 * exp(( r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)') hT = np.maximum(ST - K, 0) C0 = np.exp(-r * T) * np.sum(hT) / R return C0 tm = time.time() C0 = bsm_ne(S0=105,r=0.06,sigma=0.22,T=1.0,K=109,R = 70000000 , seed=500) pm = time.time() - tm print("Value of the European Call Option: {0:.4g}".format(C0)+" - time[{0:.4g} secs]".format(pm))
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
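As an aside, the Monte Carlo estimates above can be cross-checked against the closed-form Black-Scholes-Merton price, which needs only the standard library. This analytic check is an addition for illustration, not part of the original notebook:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call_analytic(S0, K, T, r, sigma):
    """Closed-form Black-Scholes-Merton price of a European call option."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

C0 = bsm_call_analytic(S0=105, K=109, T=1.0, r=0.06, sigma=0.22)
print("Analytic value of the European call option: {0:.4g}".format(C0))
```

With 70,000,000 paths, the Monte Carlo estimate should agree with this analytic value to several decimal places.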
Key Factors for Evaluating the Performance of a Portfolio The daily return of a stock is easily computable as follows $$dr(t)=\frac{P(t)}{P(t-1)}-1$$ Similarly, the cumulative return of a stock is easily computable as follows $$cr(t)=\frac{P(t)}{P(0)}-1$$ What is P(t)? There are basically 2 options for this value, i.e. * the adjusted close price of a stock, typically indicated in financial feeds as Adj Close, or * the close price of a stock, typically indicated in financial feeds as Close. We take the adjusted close price (see What Hedge Funds Really Do to understand why). Typically, for evaluating the performance of a portfolio the key factors to focus on are 1. Cumulative return 2. Average daily return 3. Risk (Standard deviation of daily return) 4. Sharpe ratio We will see how to compute and plot these factors. Get financial data Functions from pandas.io.data and pandas.io.ga extract data from various Internet sources into a DataFrame. The following sources are supported: Yahoo! Finance Google Finance St.Louis FED (FRED) Kenneth French’s data library World Bank Google Analytics For further info see the pandas documentation
import numpy as np import pandas as pd import pandas.io.data as web df_final = web.DataReader(['GOOG','SPY'], data_source='yahoo', start='1/21/2010', end='4/15/2016') print(df_final) print(df_final.shape) df_final.ix[:,:,'SPY'].head() print(type(df_final.ix[:,:,'SPY'])) print("\n>>> null values:"+str(pd.isnull(df_final.ix[:,:,'GOOG']).sum().sum())) df_final = web.DataReader(['GOOG','SPY'], data_source='yahoo', start='1/21/1999', end='4/15/2016') df_final.ix[:,:,'GOOG'].head() print(type(df_final.ix[:,:,'GOOG'])) print("\n>>> null values:"+str(pd.isnull(df_final.ix[:,:,'GOOG']).sum().sum()))
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
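The two return formulas above can be checked by hand on a tiny made-up price series:

```python
import numpy as np

prices = np.array([100.0, 102.0, 99.96])

# Daily return: dr(t) = P(t)/P(t-1) - 1
daily_returns = prices[1:] / prices[:-1] - 1
print(daily_returns)  # approximately [0.02, -0.02]

# Cumulative return: cr(t) = P(t)/P(0) - 1
cumulative = prices / prices[0] - 1
print(cumulative)  # approximately [0.0, 0.02, -0.0004]
```

Note that a +2% day followed by a -2% day does not return to the starting price: the cumulative return ends slightly negative.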
There are a couple of observations to be made: 1. calling pandas.io.data with multiple stocks gets a pandas.core.panel.Panel instead of a pandas.DataFrame, but filtering to a specific axis (e.g. Google) we get a pandas.core.frame.DataFrame 2. pandas.io.data does not handle missing values Hence, we can define the following functions.
import matplotlib.pyplot as plt def get_data(symbols, add_ref=True, data_source='yahoo', price='Adj Close', start='1/21/2010', end='4/15/2016'): """Read stock data (adjusted close) for given symbols from.""" if add_ref and 'SPY' not in symbols: # add SPY for reference, if absent symbols.insert(0, 'SPY') df = web.DataReader(symbols, data_source=data_source, start=start, end=end) return df[price,:,:] get_data(symbols=['GOOG','SPY']).tail()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Also, notice that it is not necessary to perform an initial join with the date range of interest to filter out non-trading days, as pandas does it for us, i.e.
df_stock = get_data(symbols=['GOOG','SPY'],start='1/21/1999',end='4/15/2016') print(">> Trading days from pandas:"+str(df_stock.shape[0])) dates = pd.date_range('1/21/1999', '4/15/2016') df = pd.DataFrame(index=dates) print(">> Calendar days:"+str(df.shape[0])) df = df.join(df_stock) print(">> After join:"+str(df.shape[0])) df = df.dropna(subset=["SPY"]) print(">> After removing non trading days:"+str(df.shape[0]))
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Plotting stock prices
ax = get_data(symbols=['GOOG','SPY','IBM','GLD'],start='1/21/1999', end='4/15/2016').plot(title="Stock Data", fontsize=9) ax.set_xlabel("Date") ax.set_ylabel("Price") plt.show()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Imputing missing values As clear from above plot, we need to handle missing values.
def fill_missing_values(df_data): """Fill missing values in data frame, in place.""" df_data.fillna(method='ffill',inplace=True) df_data.fillna(method='backfill',inplace=True) return df_data ax = fill_missing_values(get_data(symbols=['GOOG','SPY','IBM','GLD'], start='1/21/1999', end='4/15/2016')).plot(title="Stock Data", fontsize=9) ax.set_xlabel("Date") ax.set_ylabel("Price") plt.show()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Normalizing prices
def normalize_data(df): return df/df.ix[0,:] ax = normalize_data( fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='1/21/1999', end='4/15/2016'))).plot(title="Stock Data", fontsize=9) ax.set_xlabel("Date") ax.set_ylabel("Normalized price") plt.show()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Rolling statistics Notice that pandas.rolling_mean has been deprecated for DataFrame and will be removed in a future version. Hence, we will replace it with DataFrame.rolling(center=False,window=20).mean() Notice that pd.rolling_std has been deprecated for DataFrame and will be removed in a future version. Hence, we will replace it with DataFrame.rolling(center=False,window=20).std()
df = fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2015', end='7/15/2016')) # 1. Computing rolling mean using a 20-day window rm_df = pd.DataFrame.rolling(df, window=20).mean() ax = rm_df.plot(title="Rolling Mean") ax.set_xlabel("Date") ax.set_ylabel("Price") plt.show() # 2. Computing rolling standard deviation using a 20-day window rstd_df = pd.DataFrame.rolling(df, window=20).std() ax = rstd_df.plot(title="Rolling Standard Deviation") ax.set_xlabel("Date") ax.set_ylabel("Price") plt.show() # 3. Compute upper and lower bands def get_bollinger_bands(rm, rstd): """Return upper and lower Bollinger Bands.""" upper_band, lower_band = rm + 2 * rstd, rm - 2 * rstd return upper_band, lower_band upper_band, lower_band = get_bollinger_bands(rm_df, rstd_df) # Plot raw SPY values, rolling mean and Bollinger Bands ax = df['SPY'].plot(title="Bollinger Bands",label='SPY') rm_df['SPY'].plot(label='Rolling mean', ax=ax) upper_band['SPY'].plot(label='upper band', ax=ax) lower_band['SPY'].plot(label='lower band', ax=ax) # Add axis labels and legend ax.set_xlabel("Date") ax.set_ylabel("Price") ax.legend(loc='lower left') plt.show()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Daily returns There are two ways to compute the daily return of a stock with pandas. We check that they produce the same results and plot them.
def compute_daily_returns_2(df): """Compute and return the daily return values.""" # Note: Returned DataFrame must have the same number of rows daily_returns = df.copy() daily_returns[1:] = (df[1:]/df[:-1].values) - 1 daily_returns.ix[0,:] = 0 return daily_returns def compute_daily_returns(df): """Compute and return the daily return values.""" # Note: Returned DataFrame must have the same number of rows daily_returns = (df / df.shift(1)) - 1 daily_returns.ix[0,:] = 0 return daily_returns pd.util.testing.assert_frame_equal( compute_daily_returns( fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2016', end='7/15/2016'))), compute_daily_returns_2( fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2016', end='7/15/2016')))) ax = compute_daily_returns(fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2016', end='7/15/2016'))).plot(title="Daily returns") ax.set_xlabel("Date") ax.set_ylabel("Daily return") plt.show() df = compute_daily_returns(fill_missing_values(get_data( symbols=['SPY'], start='4/21/2000', end='7/15/2016'))) plt.hist(df['SPY'],bins=30,color='c',label=['Daily return']) plt.axvline(df['SPY'].mean(), color='b', linestyle='dashed', linewidth=2 , label='Mean') plt.axvline(-df['SPY'].std(), color='r', linestyle='dashed', linewidth=2 , label='Std') plt.axvline(df['SPY'].std(), color='r', linestyle='dashed', linewidth=2 ) plt.title('SPY daily return distribution') plt.xlabel('Daily return') plt.grid(True) plt.legend() plt.show()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Cumulative returns
def cumulative_returns(df): return df/df.ix[0,:] - 1 ax = cumulative_returns(fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2016', end='7/15/2016'))).plot(title="Cumulative returns") ax.set_xlabel("Date") ax.set_ylabel("Cumulative return") plt.show()
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Sharpe Ratio The Sharpe ratio is a way to examine the performance of an investment by adjusting for its risk. The ratio measures the excess return (or risk premium) per unit of deviation in an investment asset or a trading strategy, typically referred to as risk (and is a deviation risk measure), named after William F. Sharpe. The ex-ante Sharpe ratio is defined as: $$S=\frac{E\{R_p-R_f\}}{Std\{R_p-R_f\}}$$ where $R_p$ is the asset return and $R_f$ is the return on a benchmark asset, such as the risk-free rate of return. $E\{R_p-R_f\}$ is the expected value of the excess of the asset return over the benchmark return, and $Std\{R_p-R_f\}$ is the standard deviation of the asset excess return. The ex-post Sharpe ratio uses the same equation as the one above but with realized returns of the asset and benchmark rather than expected returns. Examples of risk-free rates are * LIBOR * the interest rate of a 3-month Treasury bill * 0% (an approximation used a lot in recent years) Using daily returns as $R_p$, the ex-post Sharpe ratio is computable as: $$S=\frac{mean\{R_p^{daily}-R_f^{daily}\}}{Std\{R_p^{daily}\}}$$ As $R_f^{daily}$ is typically constant for several months, it can be dropped from the denominator (subtracting a constant does not change the standard deviation). Also, $R_f^{daily}$ is typically approximable with 0%, but it is possible to compute it given $R_f^{yearly}$ as follows, remembering there are 252 trading days in a year: $$R_f^{daily}=\sqrt[252]{1+R_f^{yearly}}-1$$ The Sharpe ratio is typically an annual measure. This means that if we are using a different sample frequency we need to apply the following formula: $$S^{annual}=\sqrt{SPR} \times S$$ where $SPR$ is the number of samples per year considered in computing $S$, e.g. $S^{annual}=\sqrt{252} \times S^{daily}$ $S^{annual}=\sqrt{52} \times S^{weekly}$ $S^{annual}=\sqrt{12} \times S^{monthly}$
def sharpe_ratio(df,sample_freq='d',risk_free_rate=0.0): sr = (df - risk_free_rate).mean() / df.std() if sample_freq == 'd': sr = sr * np.sqrt(252) elif sample_freq == 'w': sr = sr * np.sqrt(52) elif sample_freq == 'm': sr = sr * np.sqrt(12) else: raise Exception('unkown sample frequency :'+str(sample_freq)) return sr # Sharpe ratio sharpe_ratio( compute_daily_returns( fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2015', end='7/15/2016'))))
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
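Here is a quick numeric illustration of the daily risk-free conversion and the $\sqrt{252}$ annualization; the return statistics are synthetic, not market data:

```python
import numpy as np

# Convert a 2% yearly risk-free rate to a daily rate: (1 + R_yearly)**(1/252) - 1
rf_yearly = 0.02
rf_daily = (1.0 + rf_yearly) ** (1.0 / 252.0) - 1.0

# Synthetic daily returns with mean ~0.001 and standard deviation ~0.01
rng = np.random.RandomState(0)
daily = rng.normal(loc=0.001, scale=0.01, size=100000)

daily_sharpe = (daily - rf_daily).mean() / daily.std()
annual_sharpe = np.sqrt(252) * daily_sharpe
print(rf_daily, annual_sharpe)
```

The daily risk-free rate comes out at roughly 0.008% per day, tiny next to the 1% daily volatility, which is why the 0% approximation is so common.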
Summary For evaluating the performance of a portfolio, the key factors to focus on are 1. Cumulative return 2. Average daily return 3. Risk (Standard deviation of daily return) 4. Sharpe ratio
df = fill_missing_values(get_data(symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2015', end='7/15/2016')) # 1. Cumulative return cumulative_returns(df).ix[-1,:] # 2. Average daily return compute_daily_returns(df).mean() # 3. Rsk (Standard deviation of daily return) compute_daily_returns(df).std() # 4. Sharpe ratio sharpe_ratio(compute_daily_returns(df))
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the output layer, and The output layer, which defines how the network learns and outputs a label for a given image. Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784-element-long vectors to encode our input data, so we need 784 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units). Then, to set how you train the network, use: net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with categorical cross-entropy. Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
# Define the neural network def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model net = tflearn.input_data([None, 784]) net = tflearn.fully_connected(net, 100, activation='ReLU') net = tflearn.fully_connected(net, 10, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') # This model assumes that your network is named "net" model = tflearn.DNN(net) return model # Build the model model = build_model()
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
azhurb/deep-learning
mit
Here, we set the PN order, if it is not already set. This will be used in numerous places below. This is the exponent of the largest power of $x$, or half the exponent of the largest power of $v$ that will appear beyond leading orders in the various quantities. Note that, because of python's convention that intervals are half-open at the end, most occurrences of PNOrbitalEvolutionOrder in the code will have 1 added to them; the actual value of PNOrbitalEvolutionOrder will be what we normally expect.
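As a concrete illustration of the half-open convention, with the default value of 7/2 (3.5PN) set below, a truncation written as n=2*PNOrbitalEvolutionOrder+1 keeps powers $v^0$ through $v^7$. A quick sketch using the standard library's Fraction in place of sympy's frac:

```python
from fractions import Fraction

PNOrbitalEvolutionOrder = Fraction(7, 2)  # 3.5PN

# series(..., n=2*order+1) keeps terms v**0 through v**(2*order);
# the value n itself is excluded because Python intervals are half-open at the end.
n = int(2 * PNOrbitalEvolutionOrder + 1)
powers_kept = list(range(n))
print(n)            # 8
print(powers_kept)  # [0, 1, 2, 3, 4, 5, 6, 7]
```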
if 'PNOrbitalEvolutionOrder' not in globals(): PNOrbitalEvolutionOrder = frac(7,2)
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
TaylorT1, TaylorT4, and TaylorT5* These very similar approximants are the simplest in construction, and the most widely applicable. In particular, all three can be applied to precessing systems. Each gives rise to the same system of ODEs that need to be integrated in time; they differ only in how the right-hand side for $dv/dt$ is treated: TaylorT1 evaluates it as-is, TaylorT4 expands it as a series in $v$ and truncates, and TaylorT5 instead expands and truncates the reciprocal $dt/dv$. * The version of TaylorT5 output below is slightly different from the one introduced by Ajith, who further solved analytically for the orbital phase $\Phi$ in terms of $v$. This doesn't appear to be possible in the precessing case, since terms such as $\vec{\Sigma} \cdot \hat{L}_{\text{N}}$ now vary with $v$ in nontrivial ways; I believe uses of T5 for precessing systems assume that such terms are constant. Besides, we have 11 variables to integrate in addition to $\Phi$, so it's not much extra burden to integrate it too. For nonprecessing systems, similar code could be generated using Ajith's solution for $\Phi$; the value of y[1] would have to be reset explicitly at the top of the TaylorT4RHS function emitted below. First, we collect the various expressions from other notebooks:
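The difference between T1 and T4 can be seen in a toy sympy example. The energy and flux below are made-up leading-order stand-ins (not the real PN coefficients); T1 keeps the ratio $-\mathcal{F}/(dE/dv)$ as-is, while T4 re-expands it as a series in $v$ and truncates:

```python
import sympy as sp

v, nu = sp.symbols('v nu', positive=True)

# Toy binding energy and flux, each with a single fake 1PN-like correction
E = -sp.Rational(1, 2) * nu * v**2 * (1 - sp.Rational(3, 4) * v**2)
F = sp.Rational(32, 5) * nu**2 * v**10 * (1 - sp.Rational(35, 12) * v**2)

dEdv = sp.diff(E, v)

# TaylorT1: evaluate the ratio as-is
dvdt_T1 = -F / dEdv

# TaylorT4: re-expand the ratio as a series in v and truncate
dvdt_T4 = sp.series(-F / dEdv, v, 0, 12).removeO()

print(sp.expand(dvdt_T4))
```

The two right-hand sides agree through the truncation order but differ beyond it, which is why the approximants can give different waveforms late in an inspiral.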
execnotebook('BindingEnergy.ipynb') execnotebook('EnergyAbsorption.ipynb') execnotebook('Precession.ipynb')
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
Next, we calculate the expansions needed for TaylorT4 and T5. These will be the right-hand sides in our evolution equations for $dv/dt$. TaylorT1 simply numerically evaluates a ratio of the terms imported above
# Read in the high-order series expansion of a ratio of polynomials p_Ratio = pickle.load(file('PolynomialRatios/PolynomialRatioSeries_Order{0}.dat'.format(2*PNOrbitalEvolutionOrder+1))) p_Ratio = p_Ratio.removeO().subs('PolynomialVariable',v) # Evaluate the flux, energy, and derivative of energy FluxTerms = [Flux_NoSpin, Flux_Spin] BindingEnergyTerms = [BindingEnergy_NoSpin, BindingEnergy_Spin] for Term in FluxTerms: PNVariables.update(Term) for Term in BindingEnergyTerms: PNVariables.update(Term) Flux = FluxExpression(FluxTerms, PNOrbitalEvolutionOrder) Energy = BindingEnergyExpression(BindingEnergyTerms, PNOrbitalEvolutionOrder) dEdv = BindingEnergyDerivativeExpression(BindingEnergyTerms, PNOrbitalEvolutionOrder) # Evaluate the energy absorption by the BHs, and make substitutions so that the Horner form is nice AbsorptionTerms = [AlviTerms] for Term in AbsorptionTerms: PNVariables.update(Term) Absorption = AbsorptionExpression(AbsorptionTerms, PNOrbitalEvolutionOrder) # Treat remaining log(v) terms as constants, for Taylor expansions and efficient numerical evaluation Flux = Flux.subs(log(v), logv) Energy = Energy.subs(log(v), logv) dEdv = dEdv.subs(log(v), logv) Absorption = Absorption.subs(log(v), logv) # Get the series expansions for the numerators and denominators FluxSeries = series(- (Flux + Absorption)/Fcal_coeff, x=v, x0=0, n=2*PNOrbitalEvolutionOrder+1).removeO() dEdvSeries = series(dEdv/(-nu*v/2), x=v, x0=0, n=2*PNOrbitalEvolutionOrder+1).removeO() # TaylorT4 T4Expressions = PNCollection() NumTerms = {'Num{0}'.format(n): FluxSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} DenTerms = {'Den{0}'.format(n): dEdvSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} T4Expressions.AddDerivedConstant('dvdt_T4', (Fcal_coeff/(-nu*v/2))*\ horner(sum([v**n*horner(N(p_Ratio.coeff(v,n=n).subs(dict(NumTerms.items() + DenTerms.items())))) for n in range(2*PNOrbitalEvolutionOrder+1)]))) # TaylorT5 T5Expressions = PNCollection() NumTerms = {'Num{0}'.format(n): dEdvSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} DenTerms = {'Den{0}'.format(n): FluxSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} T5Expressions.AddDerivedConstant('dtdv', ((-nu*v/2)/Fcal_coeff)*\ horner(sum([v**n*horner(N(p_Ratio.coeff(v,n=n).subs(dict(NumTerms.items() + DenTerms.items())))) for n in range(2*PNOrbitalEvolutionOrder+1)]))) T5Expressions.AddDerivedConstant('dvdt_T5', 1.0/dtdv) # TaylorT1 just gets some substitutions for efficiency T1Expressions = PNCollection() T1Expressions.AddDerivedConstant('Flux', Flux.subs(Pow(nu,3), nu__3).subs(Pow(nu,2), nu__2)) T1Expressions.AddDerivedConstant('dEdv', dEdv.subs(Pow(nu,3), nu__3).subs(Pow(nu,2), nu__2)) T1Expressions.AddDerivedConstant('Absorption', Absorption.subs(Pow(nu,3), nu__3).subs(Pow(nu,2), nu__2)) T1Expressions.AddDerivedConstant('dvdt_T1', - (Flux + Absorption) / dEdv)
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
Now, the precession terms:
PrecessionVelocities = PNCollection() PrecessionVelocities.AddDerivedVariable('OmegaVec_chiVec_1', Precession_chiVec1Expression(PNOrbitalEvolutionOrder), datatype=ellHat.datatype) PrecessionVelocities.AddDerivedVariable('OmegaVec_chiVec_2', Precession_chiVec2Expression(PNOrbitalEvolutionOrder), datatype=ellHat.datatype) PrecessionVelocities.AddDerivedVariable(('OmegaVec' if UseQuaternions else 'OmegaVec_ellHat'), Precession_ellHatExpression(PNOrbitalEvolutionOrder)*nHat + ((v**3/M)*ellHat if UseQuaternions else 0), datatype=nHat.datatype) CodeConstructor = CodeOutput.CodeConstructor(PNVariables, T1Expressions) for Terms in BindingEnergyTerms+FluxTerms+[AlviTerms]+[Precession_ellHat, Precession_chiVec1, Precession_chiVec2]: CodeConstructor.AddDependencies(Terms) CodeConstructor.AddDependencies(PrecessionVelocities)
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
Chapter 1, page 18
theta_real = 0.35 trials = [0, 1, 2, 3, 4, 8, 16, 32, 50, 150] data = [0, 1, 1, 1, 1, 4, 6, 9, 13, 48] beta_params = [(1, 1), (0.5, 0.5), (20, 20)] plt.figure(figsize=(10,12)) dist = stats.beta x = np.linspace(0, 1, 100) for idx, N in enumerate(trials): if idx == 0: plt.subplot(4,3, 2) else: plt.subplot(4,3, idx+3) y = data[idx] for (a_prior, b_prior), c in zip(beta_params, ('b', 'r', 'g')): p_theta_given_y = dist.pdf(x, a_prior + y, b_prior + N - y) plt.plot(x, p_theta_given_y, c) plt.fill_between(x, 0, p_theta_given_y, color=c, alpha=0.6) plt.axvline(theta_real, ymax=0.3, color='k') plt.plot(0, 0, label="{:d} experiments\n{:d} heads".format(N,y), alpha=0) plt.xlim(0,1) plt.ylim(0,12) plt.xlabel(r'$\theta$') plt.legend() plt.gca().axes.get_yaxis().set_visible(False) plt.tight_layout()
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Blue is the uniform prior; red puts more weight near 0 and 1 than the uniform; green is concentrated around 0.5, reflecting that we think we already know the answer. Solve using a grid method (ch. 2, page 34)
def posterior_grid(grid_points=100, heads=6, tosses=9): """ A grid implementation for the coin-flip problem """ grid = np.linspace(0, 1, grid_points) prior = np.repeat(1, grid_points) likelihood = stats.binom.pmf(heads, tosses, grid) unstd_posterior = likelihood * prior posterior = unstd_posterior / unstd_posterior.sum() return grid, posterior #Assuming we made 4 tosses and we observe only 1 head we have the following: points = 15 h, n = 1, 4 grid, posterior = posterior_grid(points, h, n) plt.plot(grid, posterior, 'o-', label='heads = {}\ntosses = {}'.format(h, n)) plt.xlabel(r'$\theta$') plt.legend(loc=0) #Assuming we made 40 tosses and we observe only 1 head we have the following: points = 15 h, n = 1, 40 grid, posterior = posterior_grid(points, h, n) plt.plot(grid, posterior, 'o-', label='heads = {}\ntosses = {}'.format(h, n)) plt.xlabel(r'$\theta$') plt.legend(loc=0) #Assuming we made 40 tosses and we observe 24 head we have the following: points = 15 h, n = 24, 40 grid, posterior = posterior_grid(points, h, n) plt.plot(grid, posterior, 'o-', label='heads = {}\ntosses = {}'.format(h, n)) plt.xlabel(r'$\theta$') plt.legend(loc=0) plt.figure() points = 150 h, n = 24, 40 grid, posterior = posterior_grid(points, h, n) plt.plot(grid, posterior, 'o-', label='heads = {}\ntosses = {}'.format(h, n)) plt.xlabel(r'$\theta$') plt.legend(loc=0)
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Chapter 2 Coin flip pymc3
np.random.seed(123) n_experiments = 4 theta_real = 0.35 data = stats.bernoulli.rvs(p=theta_real, size=n_experiments) print(data) XX = np.linspace(0,1,100) plt.plot(XX, stats.beta(1,1).pdf(XX)) with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(1000, step=step, start=start, chains=4, compute_convergence_checks=True)
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Convergence checking page 49
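pm.rhat implements a refined (split-chain, and in recent versions rank-normalized) form of the Gelman-Rubin diagnostic. The core idea can be sketched in a few lines; this bare-bones version is illustrative, not PyMC3's actual implementation:

```python
import numpy as np

def gelman_rubin(chains):
    """Basic Gelman-Rubin R-hat for an (n_chains, n_samples) array.

    Compares the between-chain variance B with the within-chain
    variance W; values near 1 (conventionally < 1.1) suggest the
    chains have mixed and are sampling the same distribution.
    """
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(0.35, 0.05, size=(4, 1000))     # 4 chains, same target
stuck = mixed + np.array([0, 0, 0, 0.5])[:, None]  # one chain off target
print(gelman_rubin(mixed))   # close to 1
print(gelman_rubin(stuck))   # well above 1.1
```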
burnin = 100 chain = trace[burnin:] ax = pm.traceplot(chain, lines={'theta':theta_real}); ax[0][0].axvline(theta_real, c='r') theta_real with our_first_model: print(pm.rhat(chain)) # want < 1.1 pm.forestplot(chain) pm.summary(trace) pm.autocorrplot(trace) # a measure of effective n based on autocorrelation # pm.effective_n(trace) # AKA Kruschke plot with our_first_model: pm.plot_posterior(trace) with our_first_model: pm.plot_posterior(trace, rope=[0.45, .55]) with our_first_model: pm.plot_posterior(trace, ref_val=0.50)
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Try pymc3 with a lot more data. The coin is clearly not fair at the 1000-flip level
data = stats.bernoulli.rvs(p=theta_real, size=1000) # 1000 flips in the data with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(10000, step=step, start=start, chains=4) burnin = 100 chain = trace[burnin:] ax = pm.traceplot(chain, lines={'theta':theta_real}); ax[0][0].axvline(theta_real, c='r') pm.rhat(chain) # want < 1.1 pm.forestplot(chain) # super tight range pm.summary(trace) pm.autocorrplot(trace) # pm.effective_n(trace) pm.plot_posterior(trace, rope=[0.45, .55]) pm.plot_posterior(trace, ref_val=0.50)
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Try pymc3 with much less data. The coin is still found to be unfair at the 25-flip level (for these data)
data = stats.bernoulli.rvs(p=theta_real, size=25) # 25 flips in the data with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(10000, step=step, start=start, chains=4) burnin = 100 chain = trace[burnin:] ax = pm.traceplot(chain, lines={'theta':theta_real}); ax[0][0].axvline(theta_real, c='r') pm.rhat(chain) # want < 1.1 pm.forestplot(chain) # super tight range pm.summary(trace) pm.autocorrplot(trace) pm.effective_n(trace) pm.plot_posterior(trace, rope=[0.45, .55]) pm.plot_posterior(trace, ref_val=0.50) pm.plot_posterior(trace, ref_val=0.50, rope=[0.45, .55])
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Explore priors on the coin flip (Ex. 2-5, page 59)
np.random.seed(123) n_experiments = 4 theta_real = 0.35 data = stats.bernoulli.rvs(p=theta_real, size=n_experiments) print(data) with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(5000, step=step, start=start, chains=8) pm.plot_posterior(trace, ref_val=0.50, rope=[0.45, .55]) plt.title("pm.Beta('theta', alpha=1, beta=1)") with pm.Model() as our_first_model: theta = pm.Uniform('theta', .2, .4) y = pm.Bernoulli('y', p=theta, observed=data) step = pm.Metropolis() trace = pm.sample(5000, step=step, chains=8) pm.plot_posterior(trace, ref_val=0.50, rope=[0.45, .55]) plt.title("pm.Uniform('theta', 0.2, 0.4)") with pm.Model() as our_first_model: theta = pm.Normal('theta', 0.35, 1) y = pm.Bernoulli('y', p=theta, observed=data) step = pm.Metropolis() trace = pm.sample(5000, step=step, chains=8) pm.plot_posterior(trace, ref_val=0.50, rope=[0.45, .55]) plt.title("pm.Normal('theta', 0.35, 1)") pm.plots.densityplot(trace, hpd_markers='v')
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause