Part 2

Now we'll apply this to our training data and take a look at the F1-score as a function of the hyperparameters. Here is an example of computing the F1-score for a particular choice of parameters:
n_components = 3
covariance_type = 'full'
y_pred = GMMBayes(X_train, n_components, covariance_type)
f1 = metrics.f1_score(y_train, y_pred)
print(f1)
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
Try changing the number of components and the covariance type. To see a description of the various covariance_type options, you can type gmm.GMM? in a code cell to see the documentation. You might also wish to loop over several values of the hyperparameters and plot the learning curves for the data.

Part 3

Once you ...
X_test = np.zeros((test_data.size, 4), dtype=float)
X_test[:, 0] = test_data['u-g']
X_test[:, 1] = test_data['g-r']
X_test[:, 2] = test_data['r-i']
X_test[:, 3] = test_data['i-z']
y_pred_literature = (test_data['label'] == 0).astype(int)
Ntest = len(y_pred_literature)
print(Ntest)
Now follow the procedure above, and for the test data predict the labels using the Gaussian Naive Bayes estimator, as well as our Gaussian Mixture Bayes estimator. For simplicity, you may wish to use the Gaussian Mixture estimator to evaluate the Naive Bayes result.
# variables to compute:
#   y_pred_gmm : predicted labels for X_test from the GMM Bayes model
#   y_pred_gnb : predicted labels for X_test from the Naive Bayes model
If the notebook is within the tutorial directory structure, the following command will load the solution:
%load soln/01-05.py
print("------------------------------------------------------------------")
print("Comparison of current results with published results (Naive Bayes)")
print(metrics.classification_report(y_pred_literature, y_pred_gnb,
                                    target_names=['stars', 'QSOs']))
print("------...
k-means finds the two well separated clusters in this case.

Problem 2 [Generating Mixed Samples]

Implement a random number generator for a random variable with the following mixture distribution: $f(x) = 0.4N(-1,1) + 0.6N(1,1)$. Generate N=1000 samples and histogram them. Try out a k-means ...
def mixture_model(mu1, mu2, s1, s2, alpha, n=1000):
    # Draw each sample from component 1 with probability alpha, otherwise
    # from component 2 (a weighted sum of two normal samples is NOT a mixture).
    choices = np.random.random(n) < alpha
    return np.where(choices,
                    np.random.normal(mu1, s1, n),
                    np.random.normal(mu2, s2, n))

mixture_samples = mixture_model(-1, 1, 1, 1, 0.4)
plt.scatter(range(1000), mixture_samples)
plt.hist(mixture_samples, bins=20)
y_pred = KMeans(n_clusters=2, random_state=0).fit_predict(mi...
2016_Fall/EE-511/Homework3/Homework 3.ipynb
saketkc/hatex
mit
François Fillon

François Fillon's platform will only be announced on 13 March: https://www.fillon2017.fr/projet/
r = requests.get('https://www.fillon2017.fr/projet/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('a', class_='projectItem__inner')
sublinks = [tag.attrs['href'] for tag in tags]
r = requests.get('https://www.fillon2017.fr/projet/competitivite/')
soup = BeautifulSoup(r.text, 'html.parser')
tags...
ipynb/Text mining des programmes.ipynb
flothesof/presidentielles2017
mit
Marine Le Pen

Marine Le Pen's 144 pledges can be consulted here: https://www.marine2017.fr/programme/

Analysis of the site structure: apparently, the individual proposals are nested inside <p> tags.

``` <p>3. <strong>Permettre la représentation de tous les Français</strong> par le s...
r = requests.get('https://www.marine2017.fr/programme/')
soup = BeautifulSoup(r.text, "html.parser")
Now let's try to extract all the paragraphs, using a function that checks that a paragraph starts with a number followed by a period (and possibly a space).
pattern = re.compile(r'^\d+\.\s*')  # a number, a literal period, optional whitespace

def filter_func(tag):
    if tag.text is not None:
        return pattern.match(tag.text) is not None
    else:
        return False

all_paragraphs = [re.split(pattern, tag.text)[1:]
                  for tag in soup.find_all('p') if filter_func(tag)]
len(all_paragraphs)

@interact
def disp_para(n...
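The pattern, with the period escaped so it matches a literal dot, can be checked quickly on a few hypothetical sample strings:

```python
import re

pattern = re.compile(r'^\d+\.\s*')

# Matches: a number, a literal period, optional whitespace at the start.
assert pattern.match('3. Permettre la représentation') is not None
assert pattern.match('12.Sans espace') is not None
assert pattern.match('Pas de numéro') is None

# re.split drops the numbered prefix and keeps the proposal text.
print(re.split(pattern, '3. Une proposition')[1:])
```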
Good, we can now write these data out to a CSV file.
df.to_csv('../projets/marine_le_pen.csv', index=False, quoting=1)
Benoît Hamon

Benoît Hamon's site does not make it easy to reach a single page with all the proposals, so we have to explore three subcategories. https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/
r = requests.get('https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/')
r
soup = BeautifulSoup(r.text, 'html.parser')
all_propositions = soup.find_all(class_='Propositions-Proposition')
len(all_propositions)
p = all_propositions[0]
p.text
p.find('h1').text
p.find('p').text
We can extract the essential substance from these proposals:
def extract_data(tag):
    "Extract the title and content from a tag."
    subject = tag.find('h1').text
    content = tag.find('p').text
    return subject, content
Let's build a data table from these proposals.
df = pd.DataFrame([extract_data(p) for p in all_propositions],
                  columns=['titre', 'contenu'])
df
df[df['contenu'].str.contains('ascension')]
We can turn these proposals into a DataFrame.
props_sources = {}
props_sources['https://www.benoithamon2017.fr/thematique/pour-un-progres-social-et-ecologique/'] = df['contenu'].values.tolist()
df = make_df_from_props_sources(props_sources)
df.head()
df.to_csv('../projets/benoit_hamon.csv', index=False, quoting=1)
Jean-Luc Mélenchon

An unofficial version of the programme can be found here: https://laec.fr/sommaire

Much like Hamon's site, it is organised into sections. Let's start with the first one.
r = requests.get('https://laec.fr/chapitre/1/la-6e-republique')
soup = BeautifulSoup(r.text, 'html.parser')
sublinks = soup.find_all('a', class_='list-group-item')
sublinks
We can extend this way of retrieving the data to all the subsections:
suburls = ['https://laec.fr/chapitre/1/la-6e-republique',
           'https://laec.fr/chapitre/2/proteger-et-partager',
           'https://laec.fr/chapitre/3/la-planification-ecologique',
           'https://laec.fr/chapitre/4/sortir-des-traites-europeens',
           'https://laec.fr/chapitre/5/pour-l-independance-d...
How many proposals do we find?
len(sublinks)
Let's build the full URLs.
full_urls = ['https://laec.fr' + link.attrs['href'] for link in sublinks]
full_urls[:10]
full_url = full_urls[13]
#full_url = full_urls[0]
r = requests.get(full_url)
print(r.text[:800])
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('li', class_='list-group-item')
tag = tags[0]
tag.text
tag.fi...
We write the result to a file.
df.to_csv('../projets/jean_luc_melenchon.csv', index=False, quoting=1)
Emmanuel Macron

First, we need to fetch the individual pages of the site.
r = requests.get('https://en-marche.fr/emmanuel-macron/le-programme')
soup = BeautifulSoup(r.text, 'html.parser')
proposals = soup.find_all(class_='programme__proposal')
proposals = [p for p in proposals if 'programme__proposal--category' not in p.attrs['class']]
len(proposals)
full_urls = ["https://en-marche.fr" + ...
We extract all the proposals.
propositions = [extract_items(url) for url in full_urls]
len(propositions)
full_urls[18]

@interact
def print_prop(n=(0, len(propositions) - 1)):
    print(propositions[n])

props_sources = {}
for url, props in zip(full_urls, propositions):
    props_sources[url] = props
df = make_df_from_props_sources(props_sources...
Yannick Jadot http://avecjadot.fr/lafrancevive/
r = requests.get('http://avecjadot.fr/lafrancevive/')
soup = BeautifulSoup(r.text, 'html.parser')
tags = soup.find_all('div', class_='bloc-mesure')
links = [tag.find('a').attrs['href'] for tag in tags]
all([link.startswith('http://avecjadot.fr/') for link in links])
Extracting the title of one of the pages.
link = links[0]
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
soup.find('div', class_='texte-mesure').text.strip().replace('\n', ' ')

def extract_data(link):
    r = requests.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    return soup.find('div', class_='texte-mesure').text.strip(...
Nicolas Dupont-Aignan
r = requests.get('http://www.nda-2017.fr/themes.html')
soup = BeautifulSoup(r.text, 'html.parser')
len(soup.find_all('div', class_='theme'))
links = ['http://www.nda-2017.fr' + tag.find('a').attrs['href']
         for tag in soup.find_all('div', class_='theme')]
link = links[0]
r = requests.get(link)
soup = BeautifulSoup(r...
MOTOR (Lin Engineering)

1. Determine appropriate velocity_max = microsteps/sec
2. Determine motor limits
3. Determine conv = microsteps/mm
4. Determine orientation (P+; D-)
# TODO: get current position for relative move
class Motor:
    def __init__(self, config_file, init=True):
        self.serial = s.Serial()  # placeholder
        f = open(config_file, 'r')
        self.config = yaml.load(f)
        f.close()
        if init:
            self.initialize()

    def initialize(...
notebooks/20170301_Test.ipynb
FordyceLab/AcqPack
mit
ASI Controller (Applied Scientific Instrumentation)

- Set hall effect sensors to appropriate limits
- Determine orientation (X+-, Y+-)
# TODO: Fix serial.read encoding
class ASI_Controller:
    def __init__(self, config_file, init=True):
        self.serial = s.Serial()  # placeholder
        f = open(config_file, 'r')
        self.config = yaml.load(f)
        f.close()
        if init:
            self.initiali...
Autosipper
# I: filepath of delimited file
# P: detect delimiter/header, read file accordingly
# O: list of records (no header)
def read_delim(filepath):
    f = open(filepath, 'r')
    dialect = csv.Sniffer().sniff(f.read(1024))
    f.seek(0)
    hasHeader = csv.Sniffer().has_header(f.read(1024))
    f.seek(0)
    reader = csv.re...
Procedure

- Primed control
- Primed chip with BSA (3.8 psi)
- Inlet tree
- Device w/ outlet open
- Device w/ outlet closed (to remove air)
- Opened outlet / closed neck valves
- Passivated w/ BSA under flow for 1 hr
d = Autosipper(Motor('config/le_motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
Prime PEEK manually

- Put PEEK in W2
- Open Valves(vacuum_in, inlet)
- Open CTRL(vacuum_in)
d.exit()
I transferred the lottery odds data from the PDF reports from the last three years to a CSV for easy reading into Python.
data = pd.read_csv('./hr100_odds.csv', header=0, index_col=0)
data

# calculate total number of tickets per lottery year
total_tix = []
for x in ['2017', '2018', '2019']:
    total_tix.append((data[x] * data.index).sum())

plt.plot(total_tix)
plt.scatter([0, 1, 2], total_tix)
plt.suptitle('total tickets', size=14)
plt.xtic...
scripts/hardrock 100 entry model.ipynb
dspak/ultradata
mpl-2.0
The total number of tickets appears to be increasing linearly, so a linear model seems like a decent first approximation of the process, and we'll start with that in the model below. Model assumption: the total number of tickets increases linearly over time. This is likely not true and I expect it will plateau at ...
# years to predict
ytp = np.array([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18])
ytp_real = 2017 + ytp
print(ytp_real)

b0 = pymc.Normal('b0', 7000, 0.00001)  # intercept of model of total tickets in lottery
b1 = pymc.Normal('b1', 5000, 0.00001)  # slope of total_tickets linear model
err = pymc.Uniform('err', 0, 500)  # error on tot...
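Before the Bayesian model, the linearity assumption can be eyeballed with a plain least-squares fit. The ticket totals below are hypothetical stand-ins, since the real values come from hr100_odds.csv:

```python
import numpy as np

# Hypothetical ticket totals for 2017-2019 (stand-ins; the real
# values come from hr100_odds.csv, which is not shown here).
years = np.array([0, 1, 2])  # years since 2017
total_tix = np.array([11200, 16150, 21300])

# Least-squares straight line: total ~= b1 * year + b0
b1, b0 = np.polyfit(years, total_tix, 1)
print(b0, b1)

# Extrapolate to 2032 (year index 15) under the linearity assumption.
print(b0 + b1 * 15)
```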
The figure above shows the mean (blue line) and 95% HPD (grey shaded area) of the distribution of the number of draws necessary to pull my name each year in the HR100 lottery. The red horizontal line is at 45, which is the number of slots in the lottery for newcomers. So in 2029, when the mean of the distribution passes belo...
tmp_perc = []
for x in range(len(ytp)):
    tmp_perc.append(round(sum(np.mean(mcmc.trace('final_odds')[:], 1)[:, x] <= 45) / 80000.0, 4) * 100)
pd.DataFrame(index=ytp_real, data={'percent chance': tmp_perc})
Gradient Ascent

The function findMaximum that is defined below takes four arguments:
- f is a function of the form $\texttt{f}: \mathbb{R}^n \rightarrow \mathbb{R}$. It is assumed that the function f is <font color="blue">concave</font> and therefore there is only one global maximum.
- gradF is the gradient o...
def findMaximum(f, gradF, start, eps):
    x = start
    fx = f(x)
    alpha = 0.1  # learning rate
    cnt = 0     # number of iterations
    while True:
        cnt += 1
        xOld, fOld = x, fx
        x += alpha * gradF(x)
        fx = f(x)
        print(f'cnt = {cnt}, f({x}) = {fx}')
        print(f'...
Python/6 Classification/Gradient-Ascent.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We will try to find the maximum of the function $$ f(x) := \sin(x) - \frac{x^2}{2} $$
def f(x): return np.sin(x) - x**2 / 2
Let us plot this function.
import matplotlib.pyplot as plt
import seaborn as sns

X = np.arange(-0.5, 1.8, 0.01)
Y = f(X)
plt.figure(figsize=(15, 10))
sns.set(style='whitegrid')
plt.title('lambda x: sin(x) - x**2/2')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-0.5, 1.81, ste...
Clearly, this function has a maximum somewhere between 0.7 and 0.8. Let us use gradient ascent to find it. In order to do so, we have to provide the derivative of this function. We have $$ \frac{\mathrm{d}f}{\mathrm{d}x} = \cos(x) - x. $$
def fs(x): return np.cos(x) - x
Let us plot the derivative together with the function.
X2 = np.arange(0.4, 1.1, 0.01)
Ys = fs(X2)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('lambda x: sin(x) - x**2/2 and its derivative')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-0.5, 1.81, step=0.1))
plt.yticks(np.arange(-0.6, 0.61, ste...
The maximum seems to be at $x \approx 0.739085$. Let's check the derivative at this position.
fs(x_max)
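Note that x_max is computed earlier in the notebook and not shown in this excerpt. A self-contained gradient-ascent check (with a fixed learning rate alpha = 0.1, an arbitrary choice) converges to the same point:

```python
from math import sin, cos

def f(x):
    return sin(x) - x**2 / 2

def fs(x):
    return cos(x) - x  # derivative of f

# Plain gradient ascent with a fixed learning rate.
x = 0.0
alpha = 0.1
for _ in range(200):
    x += alpha * fs(x)

print(x)      # close to 0.739085
print(fs(x))  # the derivative is (nearly) zero at the maximum
```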
Input functions to read JPEG images

The key difference between this notebook and the MNIST one is in the input function. In the input function here, we are doing the following:
* Reading JPEG images, rather than 2D integer arrays.
* Reading in batches of batch_size images rather than slicing our in-memory structure to ...
%%bash
rm -rf flowersmodel.tar.gz flowers_trained
gcloud ai-platform local train \
  --module-name=flowersmodel.task \
  --package-path=${PWD}/flowersmodel \
  -- \
  --output_dir=${PWD}/flowers_trained \
  --train_steps=5 \
  --learning_rate=0.01 \
  --batch_size=2 \
  --model=$MODEL_TYPE \
  --augme...
courses/machine_learning/deepdive/08_image/labs/flowers_fromscratch.ipynb
turbomanage/training-data-analyst
apache-2.0
Now, let's do it on ML Engine. Note the --model parameter
%%bash
OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}
JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
  --region=$REGION \
  --module-name=flowersmodel.task \
  --package-path=${PWD}/flowersmodel...
Monitor training with TensorBoard To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row. TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and...
%%bash
MODEL_NAME="flowers"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ai-platform versions delete --quiet ${MODEL_VE...
Send it to the prediction service
%%bash
gcloud ai-platform predict \
  --model=flowers \
  --version=${MODEL_TYPE} \
  --json-instances=./request.json
Interactive plotting: line plots

With Plotly, you can toggle data series on and off by clicking on the legend.
df.iplot()
python_crash_course/plotly_cufflinks_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
bar plot
df2.iplot(kind='bar')
box plot
df.iplot(kind='box')
surface plot
df3 = pd.DataFrame({'x': [1, 2, 3, 4, 5],
                    'y': [11, 22, 33, 44, 55],
                    'z': [5, 4, 3, 2, 1]})
df3
df3.iplot(kind='surface')
histograms
df.iplot(kind='hist',bins=50)
spread plots Used to show the spread in data value between two columns / variables.
df[['A','B']].iplot(kind='spread')
bubble scatter plots same as scatter, but you can easily size the dots by another column
df.iplot(kind='bubble',x='A', y='B', size='C')
scatter matrix This is similar to seaborn's pairplot
df.scatter_matrix()
2. Normalization between 0 and 1
x_train = df_x_train.values
x_train = (x_train - x_train.min()) / (x_train.max() - x_train.min())
y_train = df_y_train.values
y_train_cat = y_train
x_val = df_x_val.values
# Note: the validation set is scaled with its own min/max here;
# strictly, the training set's min/max should be reused.
x_val = (x_val - x_val.min()) / (x_val.max() - x_val.min())
y_val = df_y_val.values
y_eval = y_val
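The same min-max scaling can be written as a small helper that lets the validation set reuse the training set's min/max instead of its own (minmax_scale below is a hypothetical helper, not from the notebook):

```python
import numpy as np

def minmax_scale(x, lo=None, hi=None):
    """Scale x to [0, 1]; pass lo/hi to reuse the training
    set's statistics for validation data (hypothetical helper)."""
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    return (x - lo) / (hi - lo), lo, hi

x_train_demo = np.array([2.0, 4.0, 6.0, 10.0])
x_train_s, lo, hi = minmax_scale(x_train_demo)
x_val_s, _, _ = minmax_scale(np.array([4.0, 8.0]), lo, hi)

print(x_train_s)  # [0.   0.25 0.5  1.  ]
print(x_val_s)    # [0.25 0.75]
```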
ROBO_SAE.ipynb
philippgrafendorfe/stackedautoencoders
mit
Train Neural Net

Due to a tight schedule we will not perform any cross-validation, so our accuracy estimates may generalize a little less well; we shall live with that. Another experimental setup would be to loop over several different dataframe samples in the preprocessing...
input_data = Input(shape=(input_dim,), dtype='float32', name='main_input')
hidden_layer1 = Dense(hidden1_dim, activation='relu', input_shape=(input_dim,),
                      kernel_initializer='normal')(input_data)
dropout1 = Dropout(dropout)(hidden_layer1)
hidden_layer2 = Dense(hidden2_dim, activation='relu', input_shape=(input_dim,), k...
One can easily see that our results are better. So we go further with that result and check how good our SAE might become.

Stacked Autoencoder

For this dataset we decided to go with a 24-16-8-16-24 architecture.

First layer
input_img = Input(shape=(input_dim,))
encoded1 = Dense(16, activation='relu')(input_img)
decoded1 = Dense(input_dim, activation='relu')(encoded1)
class1 = Dense(num_classes, activation='softmax')(decoded1)
autoencoder1 = Model(input_img, class1)
autoencoder1.compile(optimizer=RMSprop(), loss='binary_crossentropy', met...
Second layer
first_layer_code = encoder1.predict(x_train)
encoded_2_input = Input(shape=(16,))
encoded2 = Dense(8, activation='relu')(encoded_2_input)
decoded2 = Dense(16, activation='relu')(encoded2)
class2 = Dense(num_classes, activation='softmax')(decoded2)
autoencoder2 = Model(encoded_2_input, class2)
autoencoder2.compile(opt...
Data Reconstruction with SAE
sae_encoded1 = Dense(16, activation='relu')(input_img)
sae_encoded2 = Dense(8, activation='relu')(sae_encoded1)
sae_decoded1 = Dense(16, activation='relu')(sae_encoded2)
sae_decoded2 = Dense(24, activation='sigmoid')(sae_decoded1)
sae = Model(input_img, sae_decoded2)
sae.layers[1].set_weights(autoencoder1.layers[1].g...
Classification
input_img = Input(shape=(input_dim,))
sae_classifier_encoded1 = Dense(16, activation='relu')(input_img)
sae_classifier_encoded2 = Dense(8, activation='relu')(sae_classifier_encoded1)
class_layer = Dense(num_classes, activation='softmax')(sae_classifier_encoded2)
sae_classifier = Model(inputs=input_img, outputs=class_l...
Plot a two dimensional representation of the data
third_layer_code = encoder2.predict(encoder1.predict(x_train))
encoded_4_input = Input(shape=(8,))
encoded4 = Dense(2, activation='sigmoid')(encoded_4_input)
decoded4 = Dense(8, activation='sigmoid')(encoded4)
class4 = Dense(num_classes, activation='softmax')(decoded4)
autoencoder4 = Model(encoded_4_input, class4)
au...
Here's how long it takes to drop 25 meters.
t_final = get_last_label(results)
t_final
soln/jump2_soln.ipynb
AllenDowney/ModSimPy
mit
I'll run Phase 1 again so we can get the final state.
system1 = make_system(params)
system1
event_func.direction = -1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
details1.message
Now I need the final time, position, and velocity from Phase 1.
t_final = get_last_label(results1)
t_final
init2 = results1.row[t_final]
init2
And that gives me the starting conditions for Phase 2.
system2 = System(system1, t_0=t_final, init=init2)
system2
Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.
event_func.direction = +1
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
details2.message
t_final = get_last_label(results2)
t_final
Now we can run both phases and get the results in a single TimeFrame.
results = simulate_system2(params)
plot_position(results)

params_no_cord = Params(params, m_cord=1*kg)
results_no_cord = simulate_system2(params_no_cord)
plot_position(results, label='m_cord = 75 kg')
plot_position(results_no_cord, label='m_cord = 1 kg')
savefig('figs/jump.png')

min(results_no_cord.y)
diff = mi...
Example 1
model = ConcreteModel(name="Getting started")

model.x = Var(bounds=(-10, 10))
model.obj = Objective(expr=model.x)
model.const_1 = Constraint(expr=model.x >= 5)

# @tail:
opt = SolverFactory('glpk')  # "glpk" or "cbc"
res = opt.solve(model)   # solves and updates instance

model.display()
print()
print("Optimal so...
nb_dev_python/python_pyomo_getting_started_1.ipynb
jdhp-docs/python_notebooks
mit
Example 2

$$
\begin{align}
\max_{x_1,x_2} & \quad 4 x_1 + 3 x_2 \\
\text{s.t.} & \quad x_1 + x_2 \leq 100 \\
& \quad 2 x_1 + x_2 \leq 150 \\
& \quad 3 x_1 + 4 x_2 \leq 360 \\
& \quad x_1, x_2 \geq 0
\end{align}
$$

```
Optimal total cost is: 350.0
x_...
model = ConcreteModel(name="Getting started")

model.x1 = Var(within=NonNegativeReals)
model.x2 = Var(within=NonNegativeReals)

model.obj = Objective(expr=4. * model.x1 + 3. * model.x2, sense=maximize)

model.ineq_const_1 = Constraint(expr=model.x1 + model.x2 <= 100)
model.ineq_const_2 = Constraint(expr=2. * model.x1 +...
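As an independent sanity check of the reported optimum (not part of the notebook), the feasible vertices of this small LP can be enumerated by brute force with NumPy; the maximum over them is 350.0, matching the solver output:

```python
import itertools
import numpy as np

# Constraints in the form a.x <= b (the two sign constraints included).
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([100.0, 150.0, 360.0, 0.0, 0.0])
c = np.array([4.0, 3.0])  # objective to maximize

best = -np.inf
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue  # parallel constraints, no vertex
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):  # feasible vertex
        best = max(best, c @ x)

print(best)  # 350.0, attained at x1 = x2 = 50
```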
Problem 1.2.a (2 points)

Given $\pi(main) = A$, formulate $V_{\pi}(main)$ and $V_{\pi}(selesai)$.

$$ V_{\pi}(main) = ... $$
$$ V_{\pi}(selesai) = ... $$

Problem 1.2.b (2 points)

Implement the value iteration algorithm from the formulas above to obtain $V_{\pi}(main)$ and $V_{\pi}(selesai)$.
# Your code here
scripts/final-exam2021.ipynb
aliakbars/uai-ai
mit
Problem 1.3 (2 points)

With $\pi(main) = A$, write down the formula for $Q_{\pi}(main, B)$ and determine its value.

Your answer here

Problem 1.4 (1 point)

What is the value of $\pi_{opt}(main)$?

Your answer here

2. Game Playing

Consider the game below. Given a threshold $N$, the game starts from the valu...
import numpy as np

class ExplodingGame(object):
    def __init__(self, N):
        self.N = N

    # state = (player, number)
    def start(self):
        return (+1, 1)

    def actions(self, state):
        player, number = state
        return ['+', '*']

    def succ(self, state, action):
        player, number = ...
Problem 2.1 (2 points)

Implement a random policy that picks an action with 50%:50% odds.
def random_policy(game, state):
    pass
Problem 2.2 (3 points)

Implement the minimax policy function.
def minimax_policy(game, state):
    pass
Problem 2.3 (2 points)

Implement the expectimax policy function to play against the random policy defined in Problem 2.1.
def expectimax_policy(game, state):
    pass

# Test case
game = ExplodingGame(N=10)
policies = {+1: add_policy, -1: multiply_policy}
state = game.start()
while not game.is_end(state):
    # Who controls this state?
    player = game.player(state)
    policy = policies[player]
    # Ask the policy to make a mov...
Problem 2.4 (3 points)

Name the best policy to play against:
- the random policy
- the expectimax policy
- the minimax policy

Your answer here

3. Bayesian Network

Imagine you are a climatologist working for BMKG in the year 3021, studying a case of global warming. You do not know the weather records for the year 2...
!pip install pomegranate
from pomegranate import *

observed = [2,3,3,2,3,2,3,2,2,3,1,3,3,1,1,1,2,1,1,1,3,1,2,1,1,1,2,3,3,2,3,2,2]
Problem 3.1 (2 points)

Given that
\begin{align} P(1|H) = 0.2 \\ P(2|H) = 0.4 \\ P(3|H) = 0.4 \end{align}
and
\begin{align} P(1|C) = 0.5 \\ P(2|C) = 0.4 \\ P(3|C) = 0.1 \end{align}
define the emission probabilities.
# Your code here
Problem 3.2 (2 points)

Given that
\begin{align} P(Q_t=H|Q_{t-1}=H) &= 0.6 \\ P(Q_t=C|Q_{t-1}=H) &= 0.4 \\ P(Q_t=H|Q_{t-1}=C) &= 0.5 \\ P(Q_t=C|Q_{t-1}=C) &= 0.5 \end{align}
define the transition probabilities.
# Your code here
Problem 3.3 (2 points)

Given that $$ P(Q_1 = H) = 0.8 $$ define the initial probabilities.
# Your code here
Problem 3.4 (2 points)

What is the log probability of the observations (observed) above?
# Your code here
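For reference, this quantity can also be computed without pomegranate, via the forward algorithm with the probabilities from Problems 3.1-3.3 (a library-free sketch, not necessarily the expected exam answer):

```python
import numpy as np

# States: 0 = H (hot), 1 = C (cold); observations take values 1, 2, 3.
emit = np.array([[0.2, 0.4, 0.4],   # P(1|H), P(2|H), P(3|H)
                 [0.5, 0.4, 0.1]])  # P(1|C), P(2|C), P(3|C)
trans = np.array([[0.6, 0.4],       # from H: P(H|H), P(C|H)
                  [0.5, 0.5]])      # from C: P(H|C), P(C|C)
init = np.array([0.8, 0.2])         # P(Q_1 = H), P(Q_1 = C)

observed = [2,3,3,2,3,2,3,2,2,3,1,3,3,1,1,1,2,1,1,1,3,1,2,1,1,1,2,3,3,2,3,2,2]

# Forward algorithm: alpha_t(s) = P(o_1..o_t, Q_t = s)
alpha = init * emit[:, observed[0] - 1]
for o in observed[1:]:
    alpha = (alpha @ trans) * emit[:, o - 1]

log_prob = np.log(alpha.sum())
print(log_prob)  # log P(observed)
```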
Problem 3.5 (2 points)

Show the most likely sequence of $Q$.
# Your code here
Let's see how much time is necessary for 70,000,000 iterations instead of 100,000 iterations.
tm = time.time()
C0 = bsm(S0=105, r=0.06, sigma=0.22, T=1.0, K=109, R=70000000, seed=500)
pm = time.time() - tm
print("Value of European Call Option: {0:.4g}".format(C0) + " - time[{0:.4g} secs]".format(pm))
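The bsm function itself is defined earlier in the notebook and not shown in this excerpt; a plausible NumPy Monte Carlo reconstruction consistent with the call above (hypothetical, with a smaller R so it runs quickly) is:

```python
import numpy as np

def bsm(S0, r, sigma, T, K, R=100000, seed=500):
    """Hypothetical Monte Carlo pricer for a European call
    (a reconstruction; the notebook's actual bsm is not shown)."""
    np.random.seed(seed)
    z = np.random.standard_normal(R)
    # Risk-neutral terminal prices under geometric Brownian motion
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    hT = np.maximum(ST - K, 0)          # call payoff at maturity
    return np.exp(-r * T) * hT.sum() / R  # discounted mean payoff

price = bsm(S0=105, r=0.06, sigma=0.22, T=1.0, K=109, R=200000)
print(price)  # close to the analytic Black-Scholes value of about 10.3
```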
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Let's see how we can speed up the computation with the numexpr package.
import numexpr as ne

def bsm_ne(S0, r, sigma, T, K, R=70000000, seed=500):
    np.random.seed(seed)
    z = np.random.standard_normal(R)
    ST = ne.evaluate('S0 * exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)')
    hT = np.maximum(ST - K, 0)
    C0 = np.exp(-r * T) * np.sum(hT) / R
    return C0

tm = time.tim...
Key Factors for Evaluating the Performance of a Portfolio

The daily return of a stock is easily computed as follows: $$dr(t)=\frac{P(t)}{P(t-1)}-1$$ Similarly, the cumulative return of a stock is easily computed as follows: $$cr(t)=\frac{P(t)}{P(0)}-1$$ What is P(t)? There are basically 2 options for this ...
import numpy as np
import pandas as pd
import pandas.io.data as web

df_final = web.DataReader(['GOOG', 'SPY'], data_source='yahoo',
                          start='1/21/2010', end='4/15/2016')
print(df_final)
print(df_final.shape)
df_final.ix[:, :, 'SPY'].head()
print(type(df_final.ix[:, :, 'SPY']))
print("\n>>> null value...
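The two return formulas can be checked quickly on a hypothetical price series (the values below are made up for illustration):

```python
import numpy as np

# Hypothetical adjusted-close prices P(0)..P(4)
P = np.array([100.0, 102.0, 101.0, 105.0, 110.0])

daily_ret = P[1:] / P[:-1] - 1   # dr(t) = P(t)/P(t-1) - 1
cumulative_ret = P / P[0] - 1    # cr(t) = P(t)/P(0) - 1

print(daily_ret)
print(cumulative_ret)
```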
There are a couple of observations to make:
1. calling pandas.io.data with multiple stocks returns a pandas.core.panel.Panel instead of a pandas.DataFrame, but filtering to a specific axis (e.g. Google) we get a pandas.core.frame.DataFrame
2. pandas.io.data does not handle missing values

Hence, we can define the following f...
import matplotlib.pyplot as plt

def get_data(symbols, add_ref=True, data_source='yahoo', price='Adj Close',
             start='1/21/2010', end='4/15/2016'):
    """Read stock data (adjusted close) for given symbols from."""
    if add_ref and 'SPY' not in symb...
Also, notice that it is not necessary to perform an initial join with the date range of interest to filter out non-trading days, as pandas does it for us, i.e.
df_stock = get_data(symbols=['GOOG','SPY'], start='1/21/1999', end='4/15/2016')
print(">> Trading days from pandas:" + str(df_stock.shape[0]))
dates = pd.date_range('1/21/1999', '4/15/2016')
df = pd.DataFrame(index=dates)
print(">> Calendar days:" + str(df.shape[0]))
df = df.join(df_stock)
print(">> After join:" + str(df.shape...
Plotting stock prices
ax = get_data(symbols=['GOOG','SPY','IBM','GLD'],
              start='1/21/1999', end='4/15/2016').plot(title="Stock Data", fontsize=9)
ax.set_xlabel("Date")
ax.set_ylabel("Price")
plt.show()
Imputing missing values As clear from above plot, we need to handle missing values.
def fill_missing_values(df_data):
    """Fill missing values in data frame, in place."""
    df_data.fillna(method='ffill', inplace=True)
    df_data.fillna(method='backfill', inplace=True)
    return df_data

ax = fill_missing_values(get_data(symbols=['GOOG','SPY','IBM','GLD'],
                                  start='1/21/1...
Normalizing prices
def normalize_data(df):
    return df / df.ix[0, :]

ax = normalize_data(
        fill_missing_values(
            get_data(symbols=['GOOG','SPY','IBM','GLD'],
                     start='1/21/1999', end='4/15/2016'))).plot(title="Stock Data", fontsize=9)
ax.set_xlabel("Date")
ax...
Rolling statistics Notice that pandas.rolling_mean has been deprecated for DataFrame and will be removed in a future version. Hence, we will replace it with DataFrame.rolling(center=False,window=20).mean() Notice that pd.rolling_std has been deprecated for DataFrame and will be removed in a future version. Hence, we ...
df = fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2015', end='7/15/2016')) # 1. Computing rolling mean using a 20-day window rm_df = pd.DataFrame.rolling(df, window=20).mean() ax = rm_df.plot(title="Rolli...
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
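The replacement API can be checked on a small synthetic series (hypothetical prices, not market data): the first `window - 1` entries of a rolling statistic are NaN until the window fills.

```python
import numpy as np
import pandas as pd

# Synthetic price series (hypothetical values, 60 "days")
prices = pd.Series(np.linspace(100, 120, 60))

# Modern replacements for the deprecated pd.rolling_mean / pd.rolling_std
rm = prices.rolling(window=20).mean()
rstd = prices.rolling(window=20).std()

# The first window-1 entries are NaN until the window fills up
print(rm.isna().sum())                                    # 19
print(np.isclose(rm.iloc[19], prices.iloc[:20].mean()))   # True
```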
Daily returns There are two ways to compute the daily return of a stock with pandas. We check that they produce the same results and plot them.
def compute_daily_returns_2(df): """Compute and return the daily return values.""" # Note: Returned DataFrame must have the same number of rows daily_returns = df.copy() daily_returns[1:] = (df[1:]/df[:-1].values) - 1 daily_returns.ix[0,:] = 0 return daily_returns def compute_daily_returns(df)...
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
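The equivalence of the shift-based and slice-based computations can be verified on a tiny synthetic price frame (hypothetical values):

```python
import numpy as np
import pandas as pd

prices = pd.DataFrame({'X': [100.0, 110.0, 99.0, 103.95]})

# Shift-based version: r_t = p_t / p_{t-1} - 1, with the first row set to 0
ret_a = (prices / prices.shift(1)) - 1
ret_a.iloc[0, :] = 0

# Slice-based version, using .values to defeat index alignment
ret_b = prices.copy()
ret_b[1:] = (prices[1:] / prices[:-1].values) - 1
ret_b.iloc[0, :] = 0

print(np.allclose(ret_a.values, ret_b.values))  # True
```

The `.values` in the slice-based version is essential: without it, pandas would align the two slices on their indices and the division would be elementwise on the same rows, giving all zeros.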
Cumulative returns
def cumulative_returns(df): return df/df.ix[0,:] - 1 ax = cumulative_returns(fill_missing_values( get_data( symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2016', end='7/15/2016'))).plot(title="Cumulative returns") ax.set_xlabel("Date...
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
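The cumulative return is just each price relative to the first day, minus one. A minimal numeric sketch on hypothetical prices:

```python
import pandas as pd

# Hypothetical prices over four days
prices = pd.Series([100.0, 105.0, 103.0, 110.0])

# Cumulative return relative to day 0: p_t / p_0 - 1
cum = prices / prices.iloc[0] - 1
print(cum.round(4).tolist())  # [0.0, 0.05, 0.03, 0.1]
```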
Sharpe Ratio The Sharpe ratio is a way to examine the performance of an investment by adjusting for its risk. The ratio measures the excess return (or risk premium) per unit of deviation in an investment asset or a trading strategy, typically referred to as risk (it is a deviation risk measure). It is named after William F. ...
def sharpe_ratio(df,sample_freq='d',risk_free_rate=0.0): sr = (df - risk_free_rate).mean() / df.std() if sample_freq == 'd': sr = sr * np.sqrt(252) elif sample_freq == 'w': sr = sr * np.sqrt(52) elif sample_freq == 'm': sr = sr * np.sqrt(12) else: raise Exce...
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
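A short numeric sketch of the annualization step, on hypothetical daily returns: the daily ratio is scaled by the square root of the number of sampling periods per year (252 trading days for daily data).

```python
import numpy as np

# Hypothetical daily returns and a zero risk-free rate
daily = np.array([0.001, -0.002, 0.003, 0.0005, -0.001, 0.002])
risk_free = 0.0

# Daily Sharpe ratio, annualized with sqrt(252) trading days per year
sr_daily = (daily - risk_free).mean() / daily.std()
sr_annual = sr_daily * np.sqrt(252)
print(round(sr_annual, 2))
```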
Summary For evaluating the performance of a portfolio, the key factors to focus on are 1. Cumulative return 2. Average daily return 3. Risk (standard deviation of daily return) 4. Sharpe ratio
df = fill_missing_values(get_data(symbols=['GOOG','SPY','IBM','GLD'], start='4/21/2015', end='7/15/2016')) # 1. Cumulative return cumulative_returns(df).ix[-1,:] # 2. Average daily return compute_daily_returns(df).mean() # 3. Risk (Standard deviation of daily...
1__Warmup.ipynb
gtesei/python-for-finance-notes
mit
Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the ou...
# Define the neural network def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model net = tflearn.input_data([None, 784]) net = tflear...
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
azhurb/deep-learning
mit
Here, we set the PN order, if it is not already set. This will be used in numerous places below. This is the exponent of the largest power of $x$, or half the exponent of the largest power of $v$ that will appear beyond leading orders in the various quantities. Note that, because of python's convention that interval...
if 'PNOrbitalEvolutionOrder' not in globals():
    PNOrbitalEvolutionOrder = frac(7,2)
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
TaylorT1, TaylorT4, and TaylorT5* These very similar approximants are the simplest in construction, and most widely applicable. In particular, they can both be applied to precessing systems. Each gives rise to the same system of ODEs that need to be integrated in time, except that the right-hand side for $dv/dt$ is e...
execnotebook('BindingEnergy.ipynb')
execnotebook('EnergyAbsorption.ipynb')
execnotebook('Precession.ipynb')
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
Next, we calculate the expansions needed for TaylorT4 and TaylorT5. These will be the right-hand sides in our evolution equations for $dv/dt$. TaylorT1 simply evaluates a ratio of the terms imported above numerically.
# Read in the high-order series expansion of a ratio of polynomials p_Ratio = pickle.load(file('PolynomialRatios/PolynomialRatioSeries_Order{0}.dat'.format(2*PNOrbitalEvolutionOrder+1))) p_Ratio = p_Ratio.removeO().subs('PolynomialVariable',v) # Evaluate the flux, energy, and derivative of energy FluxTerms = [Flux_NoS...
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
Now, the precession terms:
PrecessionVelocities = PNCollection() PrecessionVelocities.AddDerivedVariable('OmegaVec_chiVec_1', Precession_chiVec1Expression(PNOrbitalEvolutionOrder), datatype=ellHat.datatype) PrecessionVelocities.AddDerivedVariable('OmegaVec_chiVec_2'...
PNTerms/OrbitalEvolution.ipynb
moble/PostNewtonian
mit
Chapter 1, page 18
theta_real = 0.35 trials = [0, 1, 2, 3, 4, 8, 16, 32, 50, 150] data = [0, 1, 1, 1, 1, 4, 6, 9, 13, 48] beta_params = [(1, 1), (0.5, 0.5), (20, 20)] plt.figure(figsize=(10,12)) dist = stats.beta x = np.linspace(0, 1, 100) for idx, N in enumerate(trials): if idx == 0: plt.subplot(4,3, 2) else: p...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
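The posteriors plotted above follow from the conjugate Beta-Binomial update: with a Beta(a, b) prior and `heads` successes in `N` trials, the posterior is Beta(a + heads, b + N - heads). A sketch for one of the plotted cases (N = 150, 48 heads, uniform prior):

```python
from scipy import stats

# One of the cases plotted above: N = 150 trials, 48 heads, uniform prior
a, b = 1, 1
N, heads = 150, 48

# Conjugate Beta-Binomial update: posterior is Beta(a + heads, b + tails)
posterior = stats.beta(a + heads, b + N - heads)
print(posterior.mean())  # (a + heads) / (a + b + N) = 49/152 ≈ 0.322
```

With this much data the posterior mean sits close to theta_real = 0.35 regardless of which of the three priors is used.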
Blue is the uniform Beta(1, 1) prior; red, Beta(0.5, 0.5), puts more mass near 0 and 1 than the uniform; green, Beta(20, 20), is concentrated around 0.5, i.e. we think we know the answer. Solve using a grid method (ch. 2, page 34)
def posterior_grid(grid_points=100, heads=6, tosses=9): """ A grid implementation for the coin-flip problem """ grid = np.linspace(0, 1, grid_points) prior = np.repeat(1, grid_points) likelihood = stats.binom.pmf(heads, tosses, grid) unstd_posterior = likelihood * prior posterior = unstd...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
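Two quick sanity checks on the grid approximation above: the normalized posterior sums to 1, and with a flat prior its peak lands at the maximum-likelihood estimate heads/tosses = 6/9 (which here falls exactly on a grid point):

```python
import numpy as np
from scipy import stats

grid_points, heads, tosses = 100, 6, 9
grid = np.linspace(0, 1, grid_points)
prior = np.repeat(1, grid_points)
likelihood = stats.binom.pmf(heads, tosses, grid)
posterior = likelihood * prior
posterior = posterior / posterior.sum()

print(posterior.sum())            # normalizes to 1
print(grid[posterior.argmax()])   # peaks at the MLE 6/9
```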
Chapter 2 Coin flip pymc3
np.random.seed(123) n_experiments = 4 theta_real = 0.35 data = stats.bernoulli.rvs(p=theta_real, size=n_experiments) print(data) XX = np.linspace(0,1,100) plt.plot(XX, stats.beta(1,1).pdf(XX)) with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Convergence checking page 49
burnin = 100 chain = trace[burnin:] ax = pm.traceplot(chain, lines={'theta':theta_real}); ax[0][0].axvline(theta_real, c='r') theta_real with our_first_model: print(pm.rhat(chain)) # want < 1.1 pm.forestplot(chain) pm.summary(trace) pm.autocorrplot(trace) # a measure of eff n based on autocorrelecation # p...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Try pymc3 with a lot more data. The coin is clearly not fair at the 1000-flip level.
data = stats.bernoulli.rvs(p=theta_real, size=1000) # 1000 flips in the data with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(10000, step=step, start=start, cha...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
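Why the 1000-flip run is so decisive can be seen analytically without MCMC: with a Beta(1, 1) prior the posterior is Beta(1 + heads, 1 + tails), and its standard deviation shrinks roughly as 1/sqrt(n). A sketch with idealized (hypothetical) head counts rather than sampled data:

```python
from scipy import stats

theta_real = 0.35

# Analytic Beta(1, 1)-prior posteriors for idealized data at two sample sizes
for n in (25, 1000):
    heads = int(round(theta_real * n))  # hypothetical "observed" head count
    post = stats.beta(1 + heads, 1 + n - heads)
    print(n, round(post.mean(), 3), round(post.std(), 4))
```

At n = 1000 the posterior standard deviation is several times smaller than at n = 25, which is why the 95% interval excludes 0.5 so comfortably.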
Try pymc3 with more data. The coin is not fair at the 25-flip level (for these data).
data = stats.bernoulli.rvs(p=theta_real, size=25) # 25 flips in the data with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(10000, step=step, start=start, chains=...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause
Explore priors on the coin flip Ex 2-5 page 59
np.random.seed(123) n_experiments = 4 theta_real = 0.35 data = stats.bernoulli.rvs(p=theta_real, size=n_experiments) print(data) with pm.Model() as our_first_model: theta = pm.Beta('theta', alpha=1, beta=1) y = pm.Bernoulli('y', p=theta, observed=data) start = pm.find_MAP() step = pm.Metropolis() t...
BayesianAnalysisWithPython/Coin Flip.ipynb
balarsen/pymc_learning
bsd-3-clause