Indeed, there are 9 ICU admissions of 300-year-old patients that look suspicious, but beyond those, the maximum patient age is 89. The age of 300 is actually introduced intentionally in the MIMIC-III datasets to protect the privacy of patients aged 90 or above: any patient in this age group got thei...
run_query(''' WITH co AS ( SELECT icu.subject_id, icu.hadm_id, icu.icustay_id, pat.dob, TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay, DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age, RANK() OVER (PARTITION BY icu.subject_id ORDER BY icu.intime) AS icustay_id_o...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
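The masking rule described above can be sketched in pandas; the mini-cohort below is made up for illustration, with ages of 300 marking the 90-and-over privacy group that gets excluded before computing age statistics.

```python
import pandas as pd

# Hypothetical mini-cohort: in MIMIC-III, patients aged 90+ have dates of
# birth shifted so their computed age at admission appears as ~300 years.
df = pd.DataFrame({
    "subject_id": [1, 2, 3],
    "age": [67, 300, 89],
})

# Flag and exclude the privacy-masked ages before analysis.
df["age_masked"] = df["age"] >= 300
clean = df[~df["age_masked"]]
print(clean["age"].max())  # 89
```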
In the last column, we see that for some patients (e.g. Subjects 40177 and 42281), there are multiple ICU stays. In research studies, we usually filter out follow-up ICU stays and keep only the first ICU stay, so as to minimize unwanted data correlation. For this purpose, we create an exclusion label column based on icu...
run_query(''' WITH co AS ( SELECT icu.subject_id, icu.hadm_id, icu.icustay_id, pat.dob, TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay, DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age, RANK() OVER (PARTITION BY icu.subject_id ORDER BY icu.intime) AS icustay_id_o...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Another filter that we often use is the current service that ICU patients are undergoing. This could be done by joining with the services table using the hadm_id column. We can use the BigQuery preview tab to gain some visual understanding of data in this table as usual. We could also find out the number of each servic...
run_query(''' SELECT curr_service, IF(curr_service like '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical, COUNT(DISTINCT hadm_id) num_hadm FROM `datathon-datasets.mimic_demo.services` GROUP BY 1, 2 ORDER BY 2, 1 ''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
With this understanding of service types, we are now ready to join the icustays table with the services table to identify what services ICU patients are undergoing.
run_query(''' SELECT icu.hadm_id, icu.icustay_id, curr_service, IF(curr_service like '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical FROM `datathon-datasets.mimic_demo.icustays` AS icu LEFT JOIN `datathon-datasets.mimic_demo.services` AS se ON icu.hadm_id = se.hadm_id ORDER BY icustay_id LIMIT 10 ''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
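The surgical flag used in the queries above (curr_service ending in 'SURG', or exactly 'ORTHO') can be reproduced in pandas; the service names below are illustrative, not the full MIMIC service list.

```python
import pandas as pd

# Sketch of the SQL flag: a service counts as surgical when curr_service
# ends in 'SURG' (the LIKE '%SURG' pattern) or equals 'ORTHO'.
services = pd.DataFrame({
    "hadm_id": [100, 101, 102, 103],
    "curr_service": ["MED", "CSURG", "ORTHO", "NSURG"],
})
services["surgical"] = (
    services["curr_service"].str.endswith("SURG")
    | (services["curr_service"] == "ORTHO")
).astype(int)
print(services)
```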
Notice that for a single ICU stay, there may be multiple services. The following query finds the first ICU stay of each patient and indicates whether the last service before ICU admission was surgical.
run_query(''' WITH serv AS ( SELECT icu.hadm_id, icu.icustay_id, se.curr_service, IF(curr_service like '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical, RANK() OVER (PARTITION BY icu.hadm_id ORDER BY se.transfertime DESC) as rank FROM `datathon-datasets.mimic_demo.icustays` AS icu LEFT JOI...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Finally, we are ready to add this surgical exclusion label to the cohort generation table we had before by joining the two tables. For the convenience of later analysis, we rename some columns, and filter out patients more than 100 years old.
df = run_query(''' WITH co AS ( SELECT icu.subject_id, icu.hadm_id, icu.icustay_id, pat.dob, TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay, DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age, RANK() OVER (PARTITION BY icu.subject_id ORDER BY icu.intime) AS icustay...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
The original MIMIC tutorial also showed why the first_careunit field of the icustays table is not always the same as the surgical bit derived from the services table. It also demonstrated how the pandas DataFrames returned from the BigQuery queries can be further processed in Python for summarization. We will not redo...
data = df[['age', 'first_stay', 'short_stay']]
data = data.reindex(np.random.permutation(data.index))  # shuffle rows before splitting
training_df = data.head(100)
validation_df = data.tail(27)
print("Training data summary:")
display(training_df.describe())
print("Validation data summary:")
display(validation_df.describe())
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
And let's quickly check the label distribution for the features.
display(training_df.groupby(['short_stay', 'first_stay']).count()) fig, ax = plt.subplots() shorts = training_df[training_df.short_stay==1].age longs = training_df[training_df.short_stay==0].age colors = ['b', 'g'] ax.hist([shorts, longs], bins=10, color=colors, label=['short_stay=1', 'short_stay=0']) ax.set_xlabel(...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Let's first build a linear regression model to predict the numeric value of "short_stay" based on age and first_stay features. You can tune the parameters on the right-hand side and observe differences in the evaluation result.
#@title Linear Regression Parameters {display-mode:"both"} BATCH_SIZE = 5 # @param NUM_EPOCHS = 100 # @param first_stay = tf.feature_column.numeric_column('first_stay') age = tf.feature_column.numeric_column('age') # Build linear regressor linear_regressor = tf.estimator.LinearRegressor(feature_columns=[first_stay, a...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Remember that the label short_stay is actually categorical, with the value 1 for an ICU stay of 1 day or less and value 0 for stays of 2 days or more. So a classification model better fits this task. Here we try a deep neural network model using the DNNClassifier estimator. Notice the little changes from the re...
#@title ML Training example {display-mode:"both"} BATCH_SIZE = 5 # @param NUM_EPOCHS = 100 # @param HIDDEN_UNITS=[10, 10] # @param # Build DNN classifier classifier = tf.estimator.DNNClassifier( feature_columns=[first_stay, age], hidden_units=HIDDEN_UNITS) # Train the Model. model = classifier....
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
The ANZICS dataset also has some demo tables that you can explore. For example, in the APD and CCR tables, we can count the records of certain types among the 100 demo records.
run_query(''' SELECT hospitalclassification AS Type, COUNT(*) AS NumHospitals FROM `datathon-datasets.anzics_demo.apd` GROUP BY 1 ORDER BY 2 DESC ''') run_query(''' SELECT iculevelname AS ICU_Level, COUNT(*) AS NumHospitals, SUM(hospitalbeds) AS NumBeds FROM `datathon-datasets.anzics_demo.ccr` GROUP BY 1 ORD...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Congratulations! You have now finished this datathon tutorial and are ready to explore the real data by querying Google BigQuery. To do so, simply remove the _demo suffix in the dataset names. For example, the table mimic_demo.icustays becomes mimic.icustays when you need the actual MIMIC data. Now, let's do the substitut...
run_query(''' SELECT COUNT(*) AS num_rows FROM `datathon-datasets.mimic.icustays` ''') run_query(''' SELECT subject_id, hadm_id, icustay_id FROM `datathon-datasets.mimic.icustays` LIMIT 10 ''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Build the MLP Now we use the provided layers from Lasagne to build our MLP
def build_mlp(n_in, n_hidden, n_out, input_var=None): # Put your code here to build the MLP using Lasagne
2015-10_Lecture/Lecture2/code/3_Intro_Lasagne.ipynb
UKPLab/deeplearning4nlp-tutorial
apache-2.0
Create the Train Function After loading the data and defining the MLP, we can now create the train function.
# Parameters n_in = 28*28 n_hidden = 50 n_out = 10 # Create the necessary training and predict labels function
2015-10_Lecture/Lecture2/code/3_Intro_Lasagne.ipynb
UKPLab/deeplearning4nlp-tutorial
apache-2.0
Train the model We run the training for some epochs and output the accuracy of our network
#Put your code here to train the model using mini batches
2015-10_Lecture/Lecture2/code/3_Intro_Lasagne.ipynb
UKPLab/deeplearning4nlp-tutorial
apache-2.0
Let's determine the current through R1. There are many ways to solve this; the easiest is to combine the sources, combine the resistances, and then use Ohm's law. The result is a function of $V_x$:
Vx = V('V_x').Voc I = (cct.V1.V - 4 * Vx) / (cct.R1.Z + cct.R2.Z)
doc/examples/notebooks/dependent_source_example1.ipynb
mph-/lcapy
lgpl-2.1
Now given the current, we can use Ohm's law to determine the voltage drop across R1.
I * cct.R1.Z cct.V1.V - I * cct.R1.Z
doc/examples/notebooks/dependent_source_example1.ipynb
mph-/lcapy
lgpl-2.1
Thus we know that $V_x = 3 V_x + 2$, i.e. $V_x = -1$. Of course, Lcapy can determine this directly. Here Ox is the name of the open circuit over which we wish to determine the voltage difference:
cct.Ox.V
doc/examples/notebooks/dependent_source_example1.ipynb
mph-/lcapy
lgpl-2.1
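As a quick check of the algebra above, the relation $V_x = 3V_x + 2$ can be solved with SymPy (which Lcapy builds on); this is just a verification sketch, not part of the original notebook.

```python
import sympy as sp

# Solve the node relation derived above: V_x = 3*V_x + 2
Vx = sp.symbols('V_x')
solution = sp.solve(sp.Eq(Vx, 3 * Vx + 2), Vx)
print(solution)  # [-1]
```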
Alternatively, we can query Lcapy for the voltage at node 'x' with respect to ground. This gives the same result.
cct['x'].V
doc/examples/notebooks/dependent_source_example1.ipynb
mph-/lcapy
lgpl-2.1
Let's check the current with Lcapy:
cct.R1.I
doc/examples/notebooks/dependent_source_example1.ipynb
mph-/lcapy
lgpl-2.1
Simulation setup
cutoff = 50*unit.angstrom useMinimize = True epsilon_r = 80. temperature = 300*unit.kelvin kT = unit.BOLTZMANN_CONSTANT_kB*temperature timestep = 10*unit.femtoseconds; steps_eq = 5000 steps_production = 2e4 steps_total = steps_eq + steps_production
yukawa/yukawa.ipynb
mlund/openmm-examples
mit
Convenience functions A set of independent functions, useful for setting up OpenMM.
def findForce(system, forcetype, add=True): """ Finds a specific force in the system force list - added if not found.""" for force in system.getForces(): if isinstance(force, forcetype): return force if add==True: system.addForce(forcetype()) return findForce(system, forc...
yukawa/yukawa.ipynb
mlund/openmm-examples
mit
Setup simulation
pdb = app.PDBFile('squares.pdb') forcefield = app.ForceField('yukawa.xml') system = forcefield.createSystem(pdb.topology, nonbondedMethod=app.CutoffPeriodic, nonbondedCutoff=cutoff ) box = np.array(pdb.topology.getPeriodicBoxVectors()).diagonal() harmonic = findForce(system, mm.HarmonicBondForce) n...
yukawa/yukawa.ipynb
mlund/openmm-examples
mit
Run simulation
simulation = app.Simulation(pdb.topology, system, integrator) simulation.context.setPositions(pdb.positions) if useMinimize: print('Minimizing...') simulation.minimizeEnergy() print('Equilibrating...') simulation.context.setVelocitiesToTemperature(300*unit.kelvin) simulation.step(steps_eq) simulation.reporte...
yukawa/yukawa.ipynb
mlund/openmm-examples
mit
From the visualization above, we can see that the data distribution is approximately linear, so we can fit the data with a simple linear regression model.
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
trainx = waiting.values.reshape(272, 1)
trainy = eruptions.values
linreg.fit(trainx, trainy)
print("intercept b0 = %f" % linreg.intercept_)
print("coefficient b1 = %f" % linreg.coef_[0])
x = np.linspace(40, 100, 200)
x = x.reshape(200, 1)
y = l...
2.LINEAR_MODELS_FOR_REGRESSION/Old_Faithful_Geyser_Experiment.ipynb
jasonding1354/PRML_Notes
mit
We obtain the fitted result and parameters: $$\mathbf{Y} = \beta_0+\beta_1\mathbf{X}+\epsilon = -1.874016+0.075628\,\mathbf{X}+\epsilon$$
residues = trainy-linreg.predict(trainx) fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot() plt.scatter(waiting.values, residues) plt.xlabel("waiting") plt.ylabel("residues") plt.title("piecewise linear") plt.show()
2.LINEAR_MODELS_FOR_REGRESSION/Old_Faithful_Geyser_Experiment.ipynb
jasonding1354/PRML_Notes
mit
Analyzing the residuals above, we find after visualizing them that they still exhibit a pattern: the residuals do not behave like random noise, so the current model does not describe the data well, and we need to refine it further. From the residual plot we can see that the relation is piecewise linear, with a break around waiting = 70. By inspection we can construct a few nonlinear functions to transform the original data into new features; for example, here we add two features, $\max(0, X_i-68)$ and $\max(0, X_i-72)$. Below is the code for the linear model with these nonlinear transformations added:
xi = waiting.values feature2 = np.maximum(0, xi-68) feature3 = np.maximum(0, xi-72) train_data = {'f1': xi, 'f2': feature2, 'f3': feature3} trainXFrame = pd.DataFrame(train_data) linreg2 = LinearRegression() linreg2.fit(trainXFrame.values, trainy) x = np.linspace(40, 100, 200) f2 = np.maximum(0, x-68) f3 = np.maximum...
2.LINEAR_MODELS_FOR_REGRESSION/Old_Faithful_Geyser_Experiment.ipynb
jasonding1354/PRML_Notes
mit
$$Y_i = \beta_0+\beta_1 X_i+\beta_2\max(0, X_i-68)+\beta_3\max(0, X_i-72)+\epsilon_i$$
residues2 = trainy-linreg2.predict(trainXFrame.values) fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot() plt.scatter(waiting.values, residues2) plt.xlabel("waiting") plt.ylabel("residues") plt.show()
2.LINEAR_MODELS_FOR_REGRESSION/Old_Faithful_Geyser_Experiment.ipynb
jasonding1354/PRML_Notes
mit
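The hinge-feature construction above can be sketched end-to-end in plain numpy on synthetic data; the knots at 68 and 72 come from the text, while the sample size, coefficients, and noise level below are invented for illustration.

```python
import numpy as np

# Synthetic data drawn from a piecewise-linear model with a kink near 70
rng = np.random.default_rng(0)
x = rng.uniform(40, 100, 300)
y = -1.87 + 0.0756 * x + 0.05 * np.maximum(0, x - 68) + rng.normal(0, 0.1, 300)

# Design matrix: intercept, x, max(0, x-68), max(0, x-72)
X = np.column_stack([np.ones_like(x), x,
                     np.maximum(0, x - 68), np.maximum(0, x - 72)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(beta, resid.std())
```

With the hinge columns in the design matrix, the residuals shrink to the noise level, which is the improvement the text is after.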
Visualizing a scan of a male head Included in ipyvolume is a visualization of a scan of a human head; see the source code for more details.
import ipyvolume as ipv fig = ipv.figure() vol_head = ipv.examples.head(max_shape=128); vol_head.ray_steps = 400 ipv.view(90, 0)
docs/source/examples/volshow.ipynb
maartenbreddels/ipyvolume
mit
Instead of "bagging" the CSV file, we will use it to create a metadata-rich netCDF file. We can convert the table to a DSG (Discrete Sampling Geometry) using pocean.dsg. The first thing we need to do is create a mapping from the data column names to the netCDF axes.
axes = {"t": "time", "x": "lon", "y": "lat", "z": "depth"}
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
Now we can create an Orthogonal Multidimensional Timeseries Profile object...
import os import tempfile from pocean.dsg import OrthogonalMultidimensionalTimeseriesProfile as omtsp output_fp, output = tempfile.mkstemp() os.close(output_fp) ncd = omtsp.from_dataframe(df.reset_index(), output=output, axes=axes, mode="a")
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
... And add some extra metadata before we close the file.
naming_authority = "ioos" st_id = "Station1" ncd.naming_authority = naming_authority ncd.id = st_id print(ncd) ncd.close()
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
Time to create the archive for the file with BagIt. We have to create a folder for the bag.
temp_bagit_folder = tempfile.mkdtemp() temp_data_folder = os.path.join(temp_bagit_folder, "data")
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
Now we can create the bag and copy the netCDF file to a data sub-folder.
import shutil import bagit bag = bagit.make_bag(temp_bagit_folder, checksum=["sha256"]) shutil.copy2(output, temp_data_folder + "/parameter1.nc")
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
Last, but not least, we have to set bag metadata and update the existing bag with it.
urn = "urn:ioos:station:{naming_authority}:{st_id}".format( naming_authority=naming_authority, st_id=st_id ) bag_meta = { "Bag-Count": "1 of 1", "Bag-Group-Identifier": "ioos_bagit_testing", "Contact-Name": "Kyle Wilcox", "Contact-Phone": "907-230-0304", "Contact-Email": "axiom+ncei@axiomdatasc...
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
That is it! Simple and efficient! The cell below illustrates the bag directory tree. (Note that the commands below will not work on Windows, and some *nix systems may require installing the tree command; however, they are only needed for this demonstration.)
!tree $temp_bagit_folder !cat $temp_bagit_folder/manifest-sha256.txt
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
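Each line of manifest-sha256.txt pairs a SHA-256 hex digest with a payload path. A minimal hashlib sketch of one such line follows; the file name and bytes are invented, not taken from the bag above.

```python
import hashlib

# BagIt manifests record one "<hexdigest>  <relative path>" line per payload file.
payload = b"netCDF bytes would go here"
digest = hashlib.sha256(payload).hexdigest()
line = f"{digest}  data/parameter1.nc"
print(line)
```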
We can add more files to the bag as needed.
shutil.copy2(output, temp_data_folder + "/parameter2.nc") shutil.copy2(output, temp_data_folder + "/parameter3.nc") shutil.copy2(output, temp_data_folder + "/parameter4.nc") bag.save(manifests=True, processes=4) !tree $temp_bagit_folder !cat $temp_bagit_folder/manifest-sha256.txt
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
Weekly sentiment score analysis
score_data = pd.read_csv("../data/nyt_bitcoin_with_score.csv", index_col='time', parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x, time_format)) score_data.head()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Ratio of "negative", "neutral", "positive"
score_data.sentiment.unique() score_data.groupby("sentiment").sentiment.count().plot(kind='bar',rot=0)
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Massively negative ratings! Is this specific to bitcoin news? To double check, run the same analysis on news with headlines including "internet". Alternate news analysis (digression)
internet_news = pd.read_csv("../data/nyt_internet_with_score.csv", index_col='time', parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x, time_format)) internet_news.head() internet_news.groupby("sentiment").sentiment.count()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
So it seems most news would be classified as negative by the Stanford classifier. What about other classifiers? Indico.io sentiment score Here we analyze the scores generated by the Indico.io API on the same dataset. The score is between 0 and 1, and scores above 0.5 are considered positive.
indico_news = pd.read_csv("../data/indico_nyt_bitcoin.csv", index_col='time', parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x, time_format)) indico_news.head() indico_news.indico_score.describe()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Distribution
indico_news.indico_score.plot(kind='hist')
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
The distribution of the Indico score looks much like a normal distribution, which is certainly better behaved than the Stanford one. So maybe we should use the Indico score instead?
indico_news.resample('w', how='mean').plot()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Let's try again with news about "internet".
indico_news = pd.read_csv("../data/indico_nyt_internet.csv", index_col='time', parse_dates=[0], date_parser=lambda x: datetime.datetime.strptime(x, time_format)) indico_news.head() indico_news.indico_score.plot(kind='hist', bins=20)
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Again, it is roughly a normal distribution. I am not sure a reasonable sentiment distribution should look like this: this is not a particularly neutral topic, so we would probably expect the distribution to be positively skewed. This needs to be studied further for validity.
indico_news.indico_score.resample('w', how='mean').plot() indico_news.indico_score.describe()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Weekly distribution
weekly_news_count = score_data.resample('w', how='count').fillna(0) weekly_news_count.sentiment.describe()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
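The resample('w', how='count') form used above is the old pandas API; in current pandas the same weekly count is spelled as a method chain. A tiny self-contained example, with invented dates:

```python
import pandas as pd

# Weekly counts with the modern resample API: .resample("W").count()
idx = pd.to_datetime(["2015-01-01", "2015-01-02", "2015-01-09", "2015-01-20"])
s = pd.Series(1, index=idx, name="sentiment")
weekly = s.resample("W").count()
print(weekly)
```

Weeks with no rows show up as zero counts, which is what the fillna(0) above guards against in the old API.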
News distribution by week
weekly_news_count.sentiment.plot()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
News distribution
weekly_news_count.sentiment.plot(kind='hist')
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Average weekly sentiment score
weekly_score = score_data.resample('d', how='mean').fillna(0) weekly_score.head()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Score Distribution
weekly_score.sentimentValue.plot(kind='hist')
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Score distribution by week
weekly_score.plot()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
We have no news about bitcoin for about half of the entire period. Therefore we try the keyword "internet".
missing_news = 100 * weekly_score[weekly_score.sentimentValue == 0].count() / float(weekly_score.count())
print("Percentage of weeks without news: %f%%" % missing_news)
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
<center>Find Simulations and Load data
# Search for a simulation. Hopefully the results will be from different codes. NOTE that this could be done more manually so that we don't "hope" but know. A = scsearch( q=[1,4], nonspinning=True, verbose=True ) # Select which of the search results we wish to keep U,V = A[77],A[131] # Load the modes u = gwylm(U,lmax...
review/notebooks/compare_waveforms_from_two_codes.ipynb
Cyberface/nrutils_dev
mit
<center>Recompose the Waveforms
theta, phi = 0, 0
a, b = u.recompose(theta, phi, kind='strain'), v.recompose(theta, phi, kind='strain')
review/notebooks/compare_waveforms_from_two_codes.ipynb
Cyberface/nrutils_dev
mit
<center>Plot the amplitudes to verify correct scaling between GT and SXS waveforms
figure( figsize=2*array([5,3]) ) plot( a.t - a.intrp_t_amp_max, a.amp ) plot( b.t - b.intrp_t_amp_max, b.amp ) gca().set_yscale("log", nonposy='clip') ylim([1e-5,1e-1]) xlim([-400,100]) title('the amplitudes should be approx. the same') a.plot();b.plot();
review/notebooks/compare_waveforms_from_two_codes.ipynb
Cyberface/nrutils_dev
mit
When you get home, you ask your roommate, who conveniently happens to work in a nuclear chemistry lab, to measure some of the properties of this odd material that you've been given. When she comes home that night, she says that she managed to purify the sample and measure its radioactive decay rate (which is to say, t...
# this block of code reads in the data files. Don't worry too much about how they # work right now -- we'll talk about that in a couple of weeks! import numpy as np ''' count_times = the time since the start of data-taking when the data was taken (in seconds) count_rate = the number of counts ...
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Using the four numpy arrays created in the cell above, plot the measured count rates as a function of time and, on a separate plot, plot the measured sample amounts as a function of time. What do you notice about these plots, compared to the ones from your analytic estimate? Also, if you inspect the data, approximate...
# put your code here! add additional cells if necessary.
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Based on the plot in the previous cell, what do you think is going on? put your answer here! How might you modify your model to emulate this behavior? In other words, how might you modify the equation for decay rate to get something that matches the observed decay rate? put your answer here! More complex data manipul...
# put your code here! add additional cells if necessary.
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Feedback on this assignment Please fill out the form that appears when you run the code below. You must completely fill this out in order for your group to receive credit for the assignment!
from IPython.display import HTML HTML( """ <iframe src="https://goo.gl/forms/F1MvFMDpIWPScchr2?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> """ )
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
With a dataframe
z = pd.DataFrame(np.array([parse_latex(r'x+y=6'), parse_latex(r'x-y=0')]))
z
z.columns
for i in z.index:
    print(z.loc[i][0])
# solve expects a list of equations, not a generator
latex(solve([z.loc[i][0] for i in z.index]))
tmp/.ipynb_checkpoints/Probando_parse_latex-checkpoint.ipynb
crdguez/mat4ac
gpl-3.0
Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in...
INPUT_COLS = [ "pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude", "passenger_count", ] # Create input layer of feature columns # TODO 1 feature_columns = { colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS }
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
# Build a keras DNN model using Sequential API # TODO 2a model = Sequential( [ DenseFeatures(feature_columns=feature_columns.values()), Dense(units=32, activation="relu", name="h1"), Dense(units=8, activation="relu", name="h2"), Dense(units=1, activation="linear", name="output"), ...
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments: An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. A loss funct...
# TODO 2b # Create a custom evaluation metric def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) # Compile the keras model model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, fo...
%%time # TODO 3 steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) LOGDIR = "./taxi_trained" history = model.fit( x=trainds, steps_per_epoch=steps_per_epoch, epochs=NUM_EVALS, validation_data=evalds, callbacks=[TensorBoard(LOGDIR)], )
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The kinematics of local F stars Load the TGAS and 2MASS data that we use to, among other things, select F-type stars:
# Load TGAS and 2MASS tgas= gaia_tools.load.tgas() twomass= gaia_tools.load.twomass() jk= twomass['j_mag']-twomass['k_mag'] dm= -5.*numpy.log10(tgas['parallax'])+10. mj= twomass['j_mag']-dm
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Load the dwarf locus and select TGAS F-type stars with good parallaxes:
# Select F stars sp= effsel.load_spectral_types() sptype= 'F' jkmin= (sp['JH']+sp['HK'])[sp['SpT']=='%s0V' % sptype] if sptype == 'M': jkmax= (sp['JH']+sp['HK'])[sp['SpT']=='%s5V' % sptype] else: jkmax= (sp['JH']+sp['HK'])[sp['SpT']=='%s9V' % sptype] jmin= main_sequence_cut_r(jkmax) jmax= main_sequence_cut_r(jk...
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Using XD to get $\sigma_{v_z}$ from the TGAS data alone We will determine $\sigma_{v_z}^2$ from the TGAS data alone, that is, using just the TGAS parallaxes and proper motions. This only works close to the plane, because further away the $v_z$ component of the velocity only weakly projects onto the proper motions. F...
# Compute XYZ lb= bovy_coords.radec_to_lb(tgas['ra'],tgas['dec'],degree=True,epoch=None) XYZ= bovy_coords.lbd_to_XYZ(lb[:,0],lb[:,1],1./tgas['parallax'],degree=True) # Generate vradec vradec= numpy.array([bovy_coords._K/tgas['parallax']*tgas['pmra'], bovy_coords._K/tgas['parallax']*tgas['pmdec']])...
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
If we use the same bins as those in which we computed the density, we have the following number of stars in each bin:
zbins= numpy.arange(-0.4125,0.425,0.025) indx= (numpy.fabs(XYZ[:,2]) > -0.4125)\ *(numpy.fabs(XYZ[:,2]) <= 0.4125)\ *(numpy.sqrt(XYZ[:,0]**2.+XYZ[:,1]**2.) < 0.2) bovy_plot.bovy_print(axes_labelsize=17.,text_fontsize=12.,xtick_labelsize=15.,ytick_labelsize=15.) _= hist(XYZ[indx,2],bins=zbins) xlabel(r'$Z\,(...
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Now we perform an example fit:
def combined_sig2(amp,mean,covar): indx= numpy.sqrt(covar) < 30. tamp= amp[indx]/numpy.sum(amp[indx]) return (numpy.sum(tamp*(covar+mean**2.)[indx])-numpy.sum(tamp*mean[indx])**2.) # Fit with mix of Gaussians ii= 14 print("Z/pc:",500.*(numpy.roll(zbins,-1)+zbins)[:-1][ii]) indx= (XYZ[:,2] > zbins[ii])\ ...
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
The following function computes a bootstrap estimate of the uncertainty in $\sigma_{v_z}$:
def bootstrap(nboot,vrd,vrd_cov,proj,ngauss=2): out= numpy.empty(nboot) for ii in range(nboot): # Draw w/ replacement indx= numpy.floor(numpy.random.uniform(size=len(vrd))*len(vrd)).astype('int') ydata= vrd[indx] ycovar= vrd_cov[indx] initamp= numpy.random.uniform(size=ng...
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
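The resample-with-replacement idea in bootstrap() can be illustrated on a plain numpy mock-up of the velocity sample (the dispersion of 10 and the sample size are arbitrary); the real function refits an XD Gaussian mixture per draw, which this sketch replaces with a simple standard deviation.

```python
import numpy as np

# Mock v_z sample with true dispersion sigma = 10
rng = np.random.default_rng(42)
vz = rng.normal(0.0, 10.0, 500)

# Bootstrap: draw with replacement, recompute the dispersion each time
nboot = 200
boots = np.empty(nboot)
for i in range(nboot):
    idx = rng.integers(0, len(vz), len(vz))
    boots[i] = vz[idx].std()

print(boots.mean(), boots.std())  # dispersion estimate and its bootstrap error
```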
A test case:
ii= 7 indx= (XYZ[:,2] > zbins[ii])\ *(XYZ[:,2] <= zbins[ii+1])\ *(numpy.sqrt(XYZ[:,0]**2.+XYZ[:,1]**2.) < 0.2) b= bootstrap(20,vradec.T[indx],vradec_cov[indx],proj[indx],ngauss=2) zbins[ii], numpy.mean(numpy.sqrt(b)), numpy.std(numpy.sqrt(b))
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Now we do the XD fit to each $Z$ bin and compute the uncertainty using bootstrap:
savefilename= 'Fstar-sigz.sav' if not os.path.exists(savefilename): zbins= numpy.arange(-0.4125,0.425,0.025) nboot= 200 nstar= numpy.zeros(len(zbins)-1)-1 sig2z= numpy.zeros(len(zbins)-1)-1 sig2z_err= numpy.zeros(len(zbins)-1)-1 all_sam= numpy.zeros((len(zbins)-1,nboot))-1 ngauss= 2 star...
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Create a SparkSession and give it a name. Note: This will start the spark client console -- there is no need to run spark-shell directly.
spark = SparkSession \ .builder \ .appName("PythonPi") \ .getOrCreate()
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
partitions is the number of spark workers to partition the work into.
partitions = 2
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
n is the number of random samples to calculate
n = 100000000
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
This is the sampling function. It generates numbers in the square from (-1, -1) to (1, 1), and returns 1 if it falls inside the unit circle, and 0 otherwise.
def f(_): x = random() * 2 - 1 y = random() * 2 - 1 return 1 if x ** 2 + y ** 2 <= 1 else 0
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
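The same sampler can be run locally without Spark to sanity-check the estimate; n here is smaller than the notebook's 100 million for speed.

```python
import random

# Unit-circle hit test, identical in spirit to f() above, accumulated serially
random.seed(0)
n = 100_000
inside = 0
for _ in range(n):
    x = random.random() * 2 - 1
    y = random.random() * 2 - 1
    inside += x * x + y * y <= 1
pi_est = 4.0 * inside / n
print("Pi is roughly %f" % pi_est)
```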
Here's where we farm the work out to Spark.
count = spark.sparkContext \ .parallelize(range(1, n + 1), partitions) \ .map(f) \ .reduce(add) print("Pi is roughly %f" % (4.0 * count / n))
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
Shut down the spark server.
spark.stop()
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
Now that we see how the data is organized, let's use an MLP, with an architecture like the one taught in "Machine Learning" from Stanford on coursera.org, to recognize the digits. The MLP will have 3 layers. The input layer will have 784 units The hidden layer will have 25 units The output layer will have, obviously, 10 u...
from sklearn.neural_network import MLPClassifier myX = myTrainDf[myTrainDf.columns[1:]] myY = myTrainDf[myTrainDf.columns[0]] # Use 'adam' solver for large datasets, alpha is the regularization term. # Display the optimization by showing the cost. myClf = MLPClassifier(hidden_layer_sizes=25, activation='logistic', so...
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
The results are quite good, as expected. Now let's make predictions for the test set.
myYtestPred = myClf.predict(myTestDf)
myOutDf = pd.DataFrame(index=myTestDf.index + 1, data=myYtestPred)
myOutDf.reset_index().to_csv('submission.csv', header=['ImageId', 'Label'], index=False)
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
Cross-validation set and regularization parameter The following code will split the training set into a training set and a cross-validation set.
REG_ARRAY = [100, 1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 0] def splitDataset (aDf, aFrac): aTrainDf = aDf.sample(frac=aFrac) aXvalDf = aDf.iloc[[x for x in aDf.index if x not in aTrainDf.index]] return aTrainDf, aXvalDf mySampleTrainDf, mySampleXvalDf = splitDataset(myTrainDf, 0.8) myAccuracyDf = pd.DataF...
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
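For illustration, here is a minimal, self-contained variant of the split on a toy DataFrame. It uses DataFrame.drop on the sampled index, which is equivalent to the list-comprehension filter above; random_state is added only to make this toy example reproducible.

```python
import pandas as pd

def split_dataset(df, frac):
    # Sample a fraction of the rows for training; the remaining
    # rows become the cross-validation set.
    train = df.sample(frac=frac, random_state=0)
    xval = df.drop(train.index)
    return train, xval

toy = pd.DataFrame({'label': range(10)})
train, xval = split_dataset(toy, 0.8)
print(len(train), len(xval))  # 8 2
```

The two subsets are disjoint by construction, since the cross-validation set is everything the training sample did not pick.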
From here one can tell that the default regularization parameter (around 1e-5) is already a good choice. Multiple layers
REG_ARRAY = [100, 1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 0] myAccuracyDf = pd.DataFrame(index=REG_ARRAY, columns=['Accuracy']) for myAlpha in REG_ARRAY: print ('Training with regularization param ', str(myAlpha)) myClf = MLPClassifier(hidden_layer_sizes=[400, 400, 100, 25], activation='logistic', solver='adam',...
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
Let's produce a new output file with no regularization and a complex MLP of 784x400x400x100x25x10
myClf = MLPClassifier(hidden_layer_sizes=[400, 400, 100, 25], activation='logistic', solver='adam', alpha=0, verbose=True) myClf.fit(myTrainDf[myTrainDf.columns[1:]], myTrainDf['label']) myYtestPred = myClf.predict(myTestDf) myOutDf = pd.DataFrame(index=myTestDf.index+1, data=myYtestPred) myOutDf.reset_index().to_csv('...
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
Model Averaging <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/average_optimizers_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://co...
!pip install -U tensorflow-addons

import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Build Model
def create_model(opt): model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimi...
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Prepare Dataset
#Load Fashion MNIST dataset train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) test_images, test_labels...
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
We will be comparing three optimizers here:

- Unwrapped SGD
- SGD with Moving Average
- SGD with Stochastic Weight Averaging

and see how they perform with the same model.
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Both MovingAverage and StochasticAverage optimizers use ModelAverageCheckpoint.
#Callback checkpoint_path = "./training/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir, save_weights_only=True, verbose=1) ...
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Train Model Vanilla SGD Optimizer
#Build Model
model = create_model(sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Moving Average SGD
#Build Model
model = create_model(moving_avg_sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy...
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Stochastic Weight Average SGD
#Build Model
model = create_model(stocastic_avg_sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accur...
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
First, we need to define the dataset names and temporal ranges. Please note that the datasets have different time ranges, so we will download the data from 1983, when ARC2 starts (CHIRPS goes back to 1981).
dh = datahub.datahub(server, version, API_key)
dataset1 = 'noaa_arc2_africa_01'
variable_name1 = 'pr'
time_start = '1983-01-01T00:00:00'
time_end = '2018-01-01T00:00:00'
api-examples/ARC2_download_example.ipynb
planet-os/notebooks
mit
Then we define the spatial range. In this case we cover all of Africa. Keep in mind that it is a huge area, and downloading 35 years of data for all of Africa can take several hours.
area_name = 'Africa'
latitude_north = 42.24; longitude_west = -24.64
latitude_south = -45.76; longitude_east = 60.28
api-examples/ARC2_download_example.ipynb
planet-os/notebooks
mit
Download the data with the package API:

- Create package objects
- Send commands for the package creation
- Download the package files
package_arc2_africa_01 = package_api.package_api(dh, dataset1, variable_name1,
                                                 longitude_west, longitude_east,
                                                 latitude_south, latitude_north,
                                                 time_start, time_end,
                                                 area_name=area_name)
package_arc2_africa_01.make_package()
package_arc2_africa_01.download_package()
api-examples/ARC2_download_example.ipynb
planet-os/notebooks
mit
The pandas library in Python is designed to make it easy to process tabular data, and we can use it to display the CSV file so that it looks more like a table. In the next example, we import the library (and give it the short name pd), and then use its read_csv() method to slurp up the CSV file.
import pandas as pd

table = pd.read_csv("../data/formats/messages.csv")
table
notebooks/data_formats.ipynb
edinburghlivinglab/dds-notebooks
cc0-1.0
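A related point about CSV: if a field itself contains a comma, it must be wrapped in quotation marks. Python's built-in csv module handles this automatically; a small sketch with made-up field values:

```python
import csv
import io

# Made-up example row; the middle field contains a comma, so
# csv.writer quotes it automatically (QUOTE_MINIMAL is the default).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['2019-01-01', 'James, Ewan', 'hello'])
text = buf.getvalue()
print(text)  # 2019-01-01,"James, Ewan",hello

# Reading it back recovers the original field, comma included.
rows = list(csv.reader(io.StringIO(text)))
print(rows[0][1])  # James, Ewan
```

pandas' read_csv applies the same quoting rules, so quoted fields round-trip cleanly.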
One issue to note is that if a value in the CSV file contains a comma, then we have to wrap that value in quotation marks, as in "James, Ewan". <a name="xml">XML</a> XML (eXtensible Markup Language) is a W3C open standard, used widely for storing data and for transferring it between applications and services. Here is a s...
from lxml import etree

tree = etree.parse("../data/formats/messages.xml")
print(etree.tostring(tree, pretty_print=True, encoding="unicode"))
notebooks/data_formats.ipynb
edinburghlivinglab/dds-notebooks
cc0-1.0
<a name="json">JSON</a> JSON (JavaScript Object Notation) is intended as a format for transferring data that is easier for humans to write and read than XML. Unlike CSV and XML, JSON lets us represent lists directly using [ and ]. For example, the list containing the two strings 'James' and 'Ewan' is ['James', 'Ewan']. ...
import json

fp = open("../data/formats/messages.json")
data = json.load(fp)
print(json.dumps(data, indent=2, sort_keys=True))
notebooks/data_formats.ipynb
edinburghlivinglab/dds-notebooks
cc0-1.0
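The direct correspondence between Python lists and JSON arrays can be seen with a quick round trip:

```python
import json

# A Python list maps directly onto a JSON array.
names = ['James', 'Ewan']
encoded = json.dumps(names)
print(encoded)  # ["James", "Ewan"]

# Decoding gives back an equal Python list.
decoded = json.loads(encoded)
print(decoded == names)  # True
```

The same round trip works for dictionaries, which map onto JSON objects.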
Load Data As usual, let's start by loading some network data. This time round, we have a physician trust network, slightly modified so that it is undirected rather than directed. The original directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesb...
# Load the network. G = cf.load_physicians_network() # Make a Circos plot of the graph import numpy as np from circos import CircosPlot nodes = sorted(G.nodes()) edges = G.edges() edgeprops = dict(alpha=0.1) nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes)) fig = plt.figure(figsize=(6,6)) ax = fig.add_s...
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit
Question What can you infer about the structure of the graph from the Circos plot? Structures in a Graph We can leverage what we have learned in the previous notebook to identify special structures in a graph. In a network, cliques are one of these special structures. Cliques In a social network, cliques are groups of...
# Example code. def in_triangle(G, node): """ Returns whether a given node is present in a triangle relationship or not. """ # We first assume that the node is not present in a triangle. is_in_triangle = False # Then, iterate over every pair of the node's neighbors. for nbr1, nbr2 in it...
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit
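A possible complete version of in_triangle is sketched below. To keep the example self-contained, it takes a plain adjacency dict (node -> set of neighbours) standing in for a networkx graph, but the logic is the same: check every pair of the node's neighbours and see whether the pair is itself connected.

```python
import itertools

def in_triangle(adj, node):
    # Iterate over every pair of the node's neighbours; if any
    # pair is itself adjacent, the three nodes close a triangle.
    for nbr1, nbr2 in itertools.combinations(adj[node], 2):
        if nbr2 in adj[nbr1]:
            return True
    return False

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(in_triangle(triangle, 0), in_triangle(path, 1))  # True False
```

With a networkx graph, adj[node] would be replaced by set(G.neighbors(node)) and the membership test by G.has_edge(nbr1, nbr2).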
Exercise Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with? Do not return the triplets, but the set/list of nodes. Possible Implementation: If my neighbor's neighbor's neighbor includes m...
# Possible answer def get_triangles(G, node): neighbors = set(G.neighbors(node)) triangle_nodes = set() """ Fill in the rest of the code below. """ triangle_nodes.add(node) is_in_triangle = False # Then, iterate over every pair of the node's neighbors. for nbr1, nbr2 in itertool...
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit
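One possible answer to the exercise, again sketched with a plain adjacency dict (node -> set of neighbours) standing in for the networkx graph:

```python
import itertools

def get_triangles(adj, node):
    # Start with the node itself, then add every pair of its
    # neighbours that are also adjacent to each other, since
    # each such pair closes a triangle with the node.
    triangle_nodes = {node}
    for nbr1, nbr2 in itertools.combinations(adj[node], 2):
        if nbr2 in adj[nbr1]:
            triangle_nodes.update({nbr1, nbr2})
    return triangle_nodes

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(sorted(get_triangles(adj, 0)))  # [0, 1, 2]
```

Node 3 has degree one, so it cannot be in any triangle and get_triangles returns only the node itself.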