Indeed, there are nine suspicious-looking ICU admissions of 300-year-old patients, but beyond those the maximum patient age is 89. The age of 300 is in fact introduced intentionally in the MIMIC-III datasets to protect the privacy of patients aged 90 or older: every patient in that group has their age redacted to 300. Next, let's see how many ICU admissions each patient had.
run_query('''
WITH co AS (
  SELECT
    icu.subject_id,
    icu.hadm_id,
    icu.icustay_id,
    pat.dob,
    TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay,
    DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age,
    RANK() OVER (PARTITION BY icu.subject_id ORDER BY icu.intime) AS icustay_id_order
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
    ON icu.subject_id = pat.subject_id
  ORDER BY hadm_id DESC
  LIMIT 10
)
SELECT
  subject_id,
  hadm_id,
  icustay_id,
  icu_length_of_stay,
  co.age,
  IF(icu_length_of_stay < 2, 1, 0) AS exclusion_los,
  icustay_id_order
FROM co
ORDER BY subject_id, icustay_id_order
''')
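As an aside, the MIMIC-III redaction convention can be emulated on a toy pandas series. The ages below are made up for illustration; in the real data, any patient aged 90 or older appears with age 300.

```python
import pandas as pd

# Hypothetical ages; in MIMIC-III, any age >= 90 is redacted to 300.
ages = pd.Series([45, 89, 300, 67, 300], name="age")

# Treat the redacted value as the "90+" bucket for analysis purposes.
age_group = ages.replace(300, 90)
n_redacted = (ages == 300).sum()
```

This keeps the 90+ patients in the cohort while preventing the sentinel value 300 from distorting summary statistics.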
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
In the last column, we see that some patients (e.g. subjects 40177 and 42281) have multiple ICU stays. In research studies we usually filter out follow-up ICU stays and keep only the first one, to minimize unwanted correlation in the data. For this purpose we create an exclusion-label column based on icustay_id_order, which comes in handy for filtering. This is done by ranking the ICU visits by admission time in the following query, where PARTITION BY ensures the ranking is computed within each patient (subject_id).
run_query('''
WITH co AS (
  SELECT
    icu.subject_id,
    icu.hadm_id,
    icu.icustay_id,
    pat.dob,
    TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay,
    DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age,
    RANK() OVER (PARTITION BY icu.subject_id ORDER BY icu.intime) AS icustay_id_order
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
    ON icu.subject_id = pat.subject_id
  ORDER BY hadm_id DESC
  LIMIT 10
)
SELECT
  subject_id,
  hadm_id,
  icustay_id,
  icu_length_of_stay,
  co.age,
  IF(icu_length_of_stay < 2, 1, 0) AS exclusion_los,
  icustay_id_order,
  IF(icustay_id_order = 1, 0, 1) AS exclusion_first_stay
FROM co
ORDER BY subject_id, icustay_id_order
''')
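The RANK() OVER (PARTITION BY ... ORDER BY ...) pattern has a direct pandas counterpart via groupby rank; a small sketch with made-up stays (subject 2 has two admissions here):

```python
import pandas as pd

# Toy ICU stays (hypothetical data, not the demo tables).
stays = pd.DataFrame({
    "subject_id": [1, 2, 2, 3],
    "intime": pd.to_datetime(["2101-01-01", "2102-03-04",
                              "2102-05-06", "2103-07-08"]),
})

# Equivalent of RANK() OVER (PARTITION BY subject_id ORDER BY intime):
stays["icustay_id_order"] = (
    stays.groupby("subject_id")["intime"].rank(method="min").astype(int)
)
# Exclusion label: 0 for the first stay, 1 for follow-up stays.
stays["exclusion_first_stay"] = (stays["icustay_id_order"] != 1).astype(int)
```

Filtering with `stays[stays.exclusion_first_stay == 0]` then keeps only first stays, mirroring the SQL exclusion column.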
Another filter we often use is the current service the ICU patient is undergoing. This can be done by joining with the services table on the hadm_id column. As usual, we can use the BigQuery preview tab to gain a visual understanding of the data in this table. We can also find the number of admissions under each service, and whether it is a surgical service, by running the following query with the COUNT(DISTINCT ...) aggregation function. You can find the service code descriptions at http://mimic.physionet.org/mimictables/services/.
run_query('''
SELECT
  curr_service,
  IF(curr_service LIKE '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical,
  COUNT(DISTINCT hadm_id) AS num_hadm
FROM `datathon-datasets.mimic_demo.services`
GROUP BY 1, 2
ORDER BY 2, 1
''')
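The LIKE '%SURG' OR curr_service = 'ORTHO' condition translates directly to pandas string operations; a toy sketch with hypothetical service codes (see the MIMIC services page for the real descriptions):

```python
import pandas as pd

# Hypothetical service codes for illustration.
services = pd.Series(["MED", "CSURG", "ORTHO", "NSURG", "OBS"])

# LIKE '%SURG' matches codes ending in SURG; ORTHO is surgical as well.
surgical = (services.str.endswith("SURG") | (services == "ORTHO")).astype(int)
```

The resulting 0/1 flag plays the same role as the surgical column computed in the query above.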
With this understanding of service types, we are now ready to join the icustays table with the services table to identify what services ICU patients are undergoing.
run_query('''
SELECT
  icu.hadm_id,
  icu.icustay_id,
  curr_service,
  IF(curr_service LIKE '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical
FROM `datathon-datasets.mimic_demo.icustays` AS icu
LEFT JOIN `datathon-datasets.mimic_demo.services` AS se
  ON icu.hadm_id = se.hadm_id
ORDER BY icustay_id
LIMIT 10
''')
Notice that a single ICU stay may have multiple services. The following query keeps, for each ICU stay, the last service started before (or within 12 hours after) ICU admission, and indicates whether that service was surgical.
run_query('''
WITH serv AS (
  SELECT
    icu.hadm_id,
    icu.icustay_id,
    se.curr_service,
    IF(curr_service LIKE '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical,
    RANK() OVER (PARTITION BY icu.hadm_id ORDER BY se.transfertime DESC) AS rank
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  LEFT JOIN `datathon-datasets.mimic_demo.services` AS se
    ON icu.hadm_id = se.hadm_id
    AND se.transfertime < TIMESTAMP_ADD(icu.intime, INTERVAL 12 HOUR)
  ORDER BY icustay_id
  LIMIT 10)
SELECT hadm_id, icustay_id, curr_service, surgical
FROM serv
WHERE rank = 1
''')
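The "latest service before a cutoff" logic (rank by transfertime DESC, keep rank = 1) can be sketched in pandas as a filter plus a per-admission tail; the transfers and cutoff below are made up:

```python
import pandas as pd

# Hypothetical service transfers for one hospital admission.
serv = pd.DataFrame({
    "hadm_id": [100, 100, 100],
    "curr_service": ["MED", "CSURG", "MED"],
    "transfertime": pd.to_datetime(["2101-01-01", "2101-01-03", "2101-01-09"]),
})
cutoff = pd.Timestamp("2101-01-05")  # stands in for intime + 12 hours

# Keep transfers before the cutoff, then take the latest one per admission,
# mirroring RANK() ... ORDER BY transfertime DESC with rank = 1.
before = serv[serv["transfertime"] < cutoff]
last_service = before.sort_values("transfertime").groupby("hadm_id").tail(1)
```

Here the MED transfer on Jan 9 falls after the cutoff, so the surviving row is the CSURG transfer.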
Finally, we are ready to add this surgical exclusion label to the cohort-generation table from before by joining the two tables. For convenience in later analysis, we rename some columns and keep only patients younger than 100 years old.
df = run_query('''
WITH co AS (
  SELECT
    icu.subject_id,
    icu.hadm_id,
    icu.icustay_id,
    pat.dob,
    TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay,
    DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age,
    RANK() OVER (PARTITION BY icu.subject_id ORDER BY icu.intime) AS icustay_id_order
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
    ON icu.subject_id = pat.subject_id
  ORDER BY hadm_id DESC),
serv AS (
  SELECT
    icu.hadm_id,
    icu.icustay_id,
    se.curr_service,
    IF(curr_service LIKE '%SURG' OR curr_service = 'ORTHO', 1, 0) AS surgical,
    RANK() OVER (PARTITION BY icu.hadm_id ORDER BY se.transfertime DESC) AS rank
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  LEFT JOIN `datathon-datasets.mimic_demo.services` AS se
    ON icu.hadm_id = se.hadm_id
    AND se.transfertime < TIMESTAMP_ADD(icu.intime, INTERVAL 12 HOUR)
  ORDER BY icustay_id)
SELECT
  co.subject_id,
  co.hadm_id,
  co.icustay_id,
  co.icu_length_of_stay,
  co.age,
  IF(co.icu_length_of_stay < 2, 1, 0) AS short_stay,
  IF(co.icustay_id_order = 1, 0, 1) AS first_stay,
  IF(serv.surgical = 1, 1, 0) AS surgical
FROM co
LEFT JOIN serv USING (icustay_id, hadm_id)
WHERE serv.rank = 1 AND age < 100
ORDER BY subject_id, icustay_id_order
''')

print('Number of rows in dataframe: %d' % len(df))
df.head()
The original MIMIC tutorial also showed why the first_careunit field of the icustays table is not always the same as the surgical bit derived from the services table, and demonstrated how the pandas dataframes returned from the BigQuery queries can be further processed in Python for summarization. We will not redo those analyses here, since by now you should be able to reproduce those results easily in Colab. Instead, we will show an example of using TensorFlow (see its getting-started docs) to build a simple predictor, using the patient's age and whether this is the first ICU stay to predict whether the ICU stay will be a short one. With only 127 data points in total, we don't expect to build an accurate or useful predictor, but it serves to show how a model can be trained and used with TensorFlow inside Colab. First, let us split the 127 data points into a training set with 100 records and a test set with 27, and examine the distributions of the two splits to make sure they are similar.
data = df[['age', 'first_stay', 'short_stay']]
# Note: reindex returns a new frame, so the result must be assigned
# for the shuffle to take effect.
data = data.reindex(np.random.permutation(data.index))
training_df = data.head(100)
validation_df = data.tail(27)

print("Training data summary:")
display(training_df.describe())
print("Validation data summary:")
display(validation_df.describe())
And let's quickly check the label distribution for the features.
display(training_df.groupby(['short_stay', 'first_stay']).count())

fig, ax = plt.subplots()
shorts = training_df[training_df.short_stay==1].age
longs = training_df[training_df.short_stay==0].age
colors = ['b', 'g']
ax.hist([shorts, longs], bins=10, color=colors,
        label=['short_stay=1', 'short_stay=0'])
ax.set_xlabel('Age')
ax.set_ylabel('Number of Patients')
plt.legend(loc='upper left')
plt.show()
Let's first build a linear regression model that predicts the numeric value of short_stay from the age and first_stay features. You can tune the parameters in the form on the right-hand side and observe the differences in the evaluation result.
#@title Linear Regression Parameters {display-mode:"both"}
BATCH_SIZE = 5  # @param
NUM_EPOCHS = 100  # @param

first_stay = tf.feature_column.numeric_column('first_stay')
age = tf.feature_column.numeric_column('age')

# Build the linear regressor.
linear_regressor = tf.estimator.LinearRegressor(feature_columns=[first_stay, age])

# Train the model.
model = linear_regressor.train(
    input_fn=tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=training_df,
        y=training_df['short_stay'],
        num_epochs=NUM_EPOCHS,
        batch_size=BATCH_SIZE,
        shuffle=True),
    steps=100)

# Evaluate the model.
eval_result = linear_regressor.evaluate(
    input_fn=tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=validation_df,
        y=validation_df['short_stay'],
        batch_size=BATCH_SIZE,
        shuffle=False))
display(eval_result)
Remember that the label short_stay is actually categorical: 1 for an ICU stay of one day or less, 0 for stays of two days or more. So a classification model fits this task better. Here we try a deep neural network model using the DNNClassifier estimator; note how little the code changes from the regression example above.
#@title ML Training example {display-mode:"both"}
BATCH_SIZE = 5  # @param
NUM_EPOCHS = 100  # @param
HIDDEN_UNITS = [10, 10]  # @param

# Build the DNN classifier.
classifier = tf.estimator.DNNClassifier(
    feature_columns=[first_stay, age],
    hidden_units=HIDDEN_UNITS)

# Train the model.
model = classifier.train(
    input_fn=tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=training_df,
        y=training_df['short_stay'],
        num_epochs=NUM_EPOCHS,
        batch_size=BATCH_SIZE,
        shuffle=True),
    steps=100)

# Evaluate the model.
eval_result = classifier.evaluate(
    input_fn=tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=validation_df,
        y=validation_df['short_stay'],
        batch_size=BATCH_SIZE,
        shuffle=False))
display(eval_result)
The ANZICS dataset also has some demo tables that you can explore. For example, with the APD and CCR tables we can count the hospitals of each type among the 100 demo records.
run_query('''
SELECT hospitalclassification AS Type, COUNT(*) AS NumHospitals
FROM `datathon-datasets.anzics_demo.apd`
GROUP BY 1
ORDER BY 2 DESC
''')

run_query('''
SELECT iculevelname AS ICU_Level, COUNT(*) AS NumHospitals, SUM(hospitalbeds) AS NumBeds
FROM `datathon-datasets.anzics_demo.ccr`
GROUP BY 1
ORDER BY 2 DESC
''')
Congratulations! You have now finished this datathon tutorial and are ready to explore the real data with Google BigQuery. To do so, simply remove the _demo suffix in the dataset names. For example, the table mimic_demo.icustays becomes mimic.icustays when you need the actual MIMIC data. Now let's make the substitution and start the real datathon exploration.
run_query('''
SELECT COUNT(*) AS num_rows
FROM `datathon-datasets.mimic.icustays`
''')

run_query('''
SELECT subject_id, hadm_id, icustay_id
FROM `datathon-datasets.mimic.icustays`
LIMIT 10
''')
Build the MLP
Now we use the layers provided by Lasagne to build our MLP.
def build_mlp(n_in, n_hidden, n_out, input_var=None): # Put your code here to build the MLP using Lasagne
2015-10_Lecture/Lecture2/code/3_Intro_Lasagne.ipynb
UKPLab/deeplearning4nlp-tutorial
apache-2.0
Create the Train Function
After loading the data and defining the MLP, we can now create the train function.
# Parameters
n_in = 28*28
n_hidden = 50
n_out = 10

# Create the necessary training and predict-labels functions
Train the model
We run the training for some epochs and output the accuracy of our network.
#Put your code here to train the model using mini batches
Let's determine the current through R1. There are many ways to solve this; the easiest is to combine the sources, combine the resistances, and then use Ohm's law. The result is a function of $V_x$:
Vx = V('V_x').Voc
I = (cct.V1.V - 4 * Vx) / (cct.R1.Z + cct.R2.Z)
doc/examples/notebooks/dependent_source_example1.ipynb
mph-/lcapy
lgpl-2.1
Now given the current, we can use Ohm's law to determine the voltage drop across R1.
I * cct.R1.Z

cct.V1.V - I * cct.R1.Z
Thus we know that $V_x = 3 V_x + 2$, i.e. $V_x = -1$. Of course, Lcapy can determine this directly. Here Ox is the name of the open circuit across which we wish to determine the voltage difference:
cct.Ox.V
Alternatively, we can query Lcapy for the voltage at node 'x' with respect to ground. This gives the same result.
cct['x'].V
Let's check the current with Lcapy:
cct.R1.I
Simulation setup
cutoff = 50*unit.angstrom
useMinimize = True
epsilon_r = 80.
temperature = 300*unit.kelvin
kT = unit.BOLTZMANN_CONSTANT_kB*temperature
timestep = 10*unit.femtoseconds

steps_eq = 5000
steps_production = 20000  # step() expects an integer step count
steps_total = steps_eq + steps_production
yukawa/yukawa.ipynb
mlund/openmm-examples
mit
Convenience functions
A set of independent functions, useful for setting up OpenMM.
def findForce(system, forcetype, add=True):
    """ Finds a specific force in the system force list - added if not found. """
    for force in system.getForces():
        if isinstance(force, forcetype):
            return force
    if add:
        system.addForce(forcetype())
        return findForce(system, forcetype)
    return None

def setGlobalForceParameter(force, key, value):
    for i in range(force.getNumGlobalParameters()):
        if force.getGlobalParameterName(i) == key:
            print('setting force parameter', key, '=', value)
            force.setGlobalParameterDefaultValue(i, value)

def atomIndexInResidue(residue):
    """ List of atom indices in residue """
    return [a.index for a in residue.atoms()]

def getResiduePositions(residue, positions):
    """ Returns array w. atomic positions of residue """
    ndx = atomIndexInResidue(residue)
    return np.array(positions)[ndx]

def uniquePairs(index):
    """ List of unique, internal pairs """
    return list(combinations(range(index[0], index[-1] + 1), 2))

def addHarmonicConstraint(harmonicforce, pairlist, positions, threshold, k):
    """ Add harmonic bonds between pairs if distance is smaller than threshold """
    print('Constraint force constant =', k)
    for i, j in pairlist:
        distance = unit.norm(positions[i] - positions[j])
        if distance < threshold:
            harmonicforce.addBond(i, j,
                                  distance.value_in_unit(unit.nanometer),
                                  k.value_in_unit(unit.kilojoule / unit.nanometer**2 / unit.mole))
            print('added harmonic bond between', i, j, 'with distance', distance)

def addExclusions(nonbondedforce, pairlist):
    """ Add nonbonded exclusions between pairs """
    for i, j in pairlist:
        nonbondedforce.addExclusion(i, j)

def rigidifyResidue(residue, harmonicforce, positions, nonbondedforce=None,
                    threshold=6.0*unit.angstrom,
                    k=2500*unit.kilojoule/unit.nanometer**2/unit.mole):
    """ Make residue rigid by adding constraints and nonbonded exclusions.

    Note: uses the passed-in harmonicforce/positions/nonbondedforce arguments
    rather than module-level globals.
    """
    index = atomIndexInResidue(residue)
    pairlist = uniquePairs(index)
    addHarmonicConstraint(harmonicforce, pairlist, positions, threshold, k)
    if nonbondedforce is not None:
        for i, j in pairlist:
            print('added nonbonded exclusion between', i, j)
            nonbondedforce.addExclusion(i, j)

def centerOfMass(positions, box):
    """ Calculates the geometric center taking into account periodic boundaries.

    More here:
    https://en.wikipedia.org/wiki/Center_of_mass#Systems_with_periodic_boundary_conditions
    """
    theta = np.divide(positions, box).astype(float) * 2 * np.pi
    cosmean = np.cos(theta).mean(axis=0)
    sinmean = np.sin(theta).mean(axis=0)
    # Circular mean: note the (sin, cos) argument order for arctan2.
    return box * (np.arctan2(-sinmean, -cosmean) + np.pi) / (2 * np.pi)
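The periodic-boundary center used by centerOfMass relies on the circular mean. A standalone numpy sketch (with made-up coordinates, independent of OpenMM) shows the idea on two particles straddling a box edge:

```python
import numpy as np

def periodic_com(positions, box):
    """Geometric center under periodic boundaries, via the circular mean.

    See https://en.wikipedia.org/wiki/Center_of_mass#Systems_with_periodic_boundary_conditions
    """
    theta = 2 * np.pi * np.asarray(positions) / np.asarray(box)
    cosmean = np.cos(theta).mean(axis=0)
    sinmean = np.sin(theta).mean(axis=0)
    return np.asarray(box) * (np.arctan2(-sinmean, -cosmean) + np.pi) / (2 * np.pi)

# Two particles straddling the x boundary of a 10x10x10 box:
box = np.array([10.0, 10.0, 10.0])
pos = np.array([[9.5, 1.0, 1.0],
                [0.5, 1.0, 1.0]])
com = periodic_com(pos, box)
```

A naive arithmetic mean would put the x center at 5.0, in the middle of empty space; the circular mean correctly places it at the boundary (x ≈ 0, equivalently 10).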
Setup simulation
pdb = app.PDBFile('squares.pdb')
forcefield = app.ForceField('yukawa.xml')
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.CutoffPeriodic,
                                 nonbondedCutoff=cutoff)
box = np.array(pdb.topology.getPeriodicBoxVectors()).diagonal()

harmonic = findForce(system, mm.HarmonicBondForce)
nonbonded = findForce(system, mm.CustomNonbondedForce)

setGlobalForceParameter(nonbonded, 'lB', 0.7*unit.nanometer)
setGlobalForceParameter(nonbonded, 'kappa', 0.0)

for residue in pdb.topology.residues():
    p = getResiduePositions(residue, pdb.positions)
    print(centerOfMass(p, box))
    rigidifyResidue(residue, harmonicforce=harmonic, nonbondedforce=nonbonded,
                    positions=pdb.positions)

integrator = mm.LangevinIntegrator(temperature, 1.0/unit.picoseconds, timestep)
integrator.setConstraintTolerance(0.0001)
Run simulation
simulation = app.Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)

if useMinimize:
    print('Minimizing...')
    simulation.minimizeEnergy()

print('Equilibrating...')
simulation.context.setVelocitiesToTemperature(300*unit.kelvin)
simulation.step(steps_eq)

simulation.reporters.append(mdtraj.reporters.HDF5Reporter('trajectory.h5', 100))
simulation.reporters.append(
    app.StateDataReporter(stdout, int(steps_total/10), step=True,
                          potentialEnergy=True, temperature=True, progress=True,
                          remainingTime=False, speed=True, totalSteps=steps_total,
                          volume=True, separator='\t'))

print('Production...')
simulation.step(steps_production)
print('Done!')
From the visualization above, the data distribution looks approximately linear, so we can fit it with a simple linear regression model.
from sklearn.linear_model import LinearRegression

linreg = LinearRegression()
trainx = waiting.values.reshape(272, 1)
trainy = eruptions.values
linreg.fit(trainx, trainy)
print("intercept b0 = %f" % linreg.intercept_)
print("coefficient b1 = %f" % linreg.coef_[0])

x = np.linspace(40, 100, 200)
x = x.reshape(200, 1)
y = linreg.predict(x)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
plt.scatter(waiting.values, eruptions.values)
plt.plot(x, y, 'r')
plt.xlim(40, 100)
plt.ylim(1, 6)
plt.xlabel("waiting")
plt.ylabel("duration")
plt.title("Old Faithful Geyser Data")
plt.show()
2.LINEAR_MODELS_FOR_REGRESSION/Old_Faithful_Geyser_Experiment.ipynb
jasonding1354/PRML_Notes
mit
We obtain the fitted result and parameters: $$\mathbf{Y} = \beta_0+\beta_1\mathbf{X}+\epsilon=-1.874016+0.075628\,\mathbf{X}+\epsilon$$
residues = trainy - linreg.predict(trainx)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
plt.scatter(waiting.values, residues)
plt.xlabel("waiting")
plt.ylabel("residues")
plt.title("piecewise linear")
plt.show()
Analyzing the residuals above, we find that they still show a clear pattern when visualized: they do not behave like random noise, so the current model does not describe the data well and needs further refinement. From the residual plot we can see the data is piecewise linear, split into two parts around waiting = 70. By inspection we can construct a few nonlinear functions to turn the original data into new features; here we add two, $\max(0, X_i-68)$ and $\max(0, X_i-72)$. The code below fits the linear model with these nonlinear transformations added:
xi = waiting.values
feature2 = np.maximum(0, xi-68)
feature3 = np.maximum(0, xi-72)
train_data = {'f1': xi, 'f2': feature2, 'f3': feature3}
trainXFrame = pd.DataFrame(train_data)

linreg2 = LinearRegression()
linreg2.fit(trainXFrame.values, trainy)

x = np.linspace(40, 100, 200)
f2 = np.maximum(0, x-68)
f3 = np.maximum(0, x-72)
tdata = {'f1': x, 'f2': f2, 'f3': f3}
tX = pd.DataFrame(tdata)
y = linreg2.predict(tX.values)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
plt.scatter(waiting.values, eruptions.values)
plt.plot(x, y, 'r')
plt.xlim(40, 100)
plt.xlabel("waiting")
plt.ylabel("duration")
plt.title("Old Faithful Geyser Data")
plt.show()
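The same hinge-feature regression can be sketched with plain numpy least squares. The data below are synthetic (a fabricated slope change near 70, not the real waiting/eruptions columns), purely to show the design-matrix construction:

```python
import numpy as np

# Synthetic piecewise-linear data with a slope change at x = 70.
rng = np.random.default_rng(0)
x = np.linspace(40, 100, 200)
y_true = 1.0 + 0.02 * x + 0.05 * np.maximum(0, x - 70)
y = y_true + rng.normal(0, 0.01, x.size)

# Design matrix: intercept, linear term, and the two hinge features
# from the text, max(0, x-68) and max(0, x-72).
X = np.column_stack([np.ones_like(x), x,
                     np.maximum(0, x - 68), np.maximum(0, x - 72)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
```

The hinge columns let an ordinary linear solver fit a kinked curve; the kink location is baked into the features, not estimated.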
$$Y_i = \beta_0+\beta_1 X_i+\beta_2\max(0, X_i-68)+\beta_3\max(0, X_i-72)+\epsilon_i$$
residues2 = trainy - linreg2.predict(trainXFrame.values)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
plt.scatter(waiting.values, residues2)
plt.xlabel("waiting")
plt.ylabel("residues")
plt.show()
Visualizing a scan of a male head
Included in ipyvolume is a visualization of a scan of a human head; see the source code for more details.
import ipyvolume as ipv

fig = ipv.figure()
vol_head = ipv.examples.head(max_shape=128)
vol_head.ray_steps = 400
ipv.view(90, 0)
docs/source/examples/volshow.ipynb
maartenbreddels/ipyvolume
mit
Instead of bagging the CSV file directly, we will use it to create a metadata-rich netCDF file. We can convert the table to a DSG (Discrete Sampling Geometry) using pocean.dsg. The first thing we need to do is to create a mapping from the data column names to the netCDF axes.
axes = {"t": "time", "x": "lon", "y": "lat", "z": "depth"}
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb
ioos/notebooks_demos
mit
Now we can create an Orthogonal Multidimensional Timeseries Profile object...
import os
import tempfile

from pocean.dsg import OrthogonalMultidimensionalTimeseriesProfile as omtsp

output_fp, output = tempfile.mkstemp()
os.close(output_fp)

ncd = omtsp.from_dataframe(df.reset_index(), output=output, axes=axes, mode="a")
... And add some extra metadata before we close the file.
naming_authority = "ioos"
st_id = "Station1"

ncd.naming_authority = naming_authority
ncd.id = st_id
print(ncd)
ncd.close()
Time to create the archive for the file with BagIt. We have to create a folder for the bag.
temp_bagit_folder = tempfile.mkdtemp()
temp_data_folder = os.path.join(temp_bagit_folder, "data")
Now we can create the bag and copy the netCDF file to a data sub-folder.
import shutil

import bagit

bag = bagit.make_bag(temp_bagit_folder, checksum=["sha256"])
shutil.copy2(output, temp_data_folder + "/parameter1.nc")
Last, but not least, we have to set bag metadata and update the existing bag with it.
urn = "urn:ioos:station:{naming_authority}:{st_id}".format(
    naming_authority=naming_authority, st_id=st_id
)

bag_meta = {
    "Bag-Count": "1 of 1",
    "Bag-Group-Identifier": "ioos_bagit_testing",
    "Contact-Name": "Kyle Wilcox",
    "Contact-Phone": "907-230-0304",
    "Contact-Email": "axiom+ncei@axiomdatascience.com",
    "External-Identifier": urn,
    "External-Description": "Sensor data from station {}".format(urn),
    "Internal-Sender-Identifier": urn,
    "Internal-Sender-Description": "Station - URN:{}".format(urn),
    "Organization-address": "1016 W 6th Ave, Ste. 105, Anchorage, AK 99501, USA",
    "Source-Organization": "Axiom Data Science",
}

bag.info.update(bag_meta)
bag.save(manifests=True, processes=4)
That is it! Simple and efficient! The cell below illustrates the bag directory tree. (Note that the commands below will not work on Windows, and some *nix systems may require installing the tree command; they are only needed for this demonstration.)
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
We can add more files to the bag as needed.
shutil.copy2(output, temp_data_folder + "/parameter2.nc")
shutil.copy2(output, temp_data_folder + "/parameter3.nc")
shutil.copy2(output, temp_data_folder + "/parameter4.nc")

bag.save(manifests=True, processes=4)

!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
Weekly sentiment score analysis
score_data = pd.read_csv("../data/nyt_bitcoin_with_score.csv",
                         index_col='time', parse_dates=[0],
                         date_parser=lambda x: datetime.datetime.strptime(x, time_format))
score_data.head()
notes/news_analysis.ipynb
yyl/btc-price-analysis
gpl-2.0
Ratio of "negative", "neutral", "positive"
score_data.sentiment.unique()
score_data.groupby("sentiment").sentiment.count().plot(kind='bar', rot=0)
Massively negative ratings!!!! Is this special to bitcoin news? To double check, run the same analysis on news with headline including "internet". Alternate news analysis (digress)
internet_news = pd.read_csv("../data/nyt_internet_with_score.csv",
                            index_col='time', parse_dates=[0],
                            date_parser=lambda x: datetime.datetime.strptime(x, time_format))
internet_news.head()
internet_news.groupby("sentiment").sentiment.count()
So it seems the Stanford classifier would classify most news as negative. What about other classifiers?
Indico.io sentiment score
Here we analyze the scores generated by the Indico.io API on the same dataset. Each score is between 0 and 1, and scores above 0.5 are considered positive.
indico_news = pd.read_csv("../data/indico_nyt_bitcoin.csv",
                          index_col='time', parse_dates=[0],
                          date_parser=lambda x: datetime.datetime.strptime(x, time_format))
indico_news.head()
indico_news.indico_score.describe()
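To make the 0.5 threshold concrete, here is a toy labeling of hypothetical scores (the real column comes from the Indico.io API, not these values):

```python
import pandas as pd

# Hypothetical scores for illustration.
scores = pd.Series([0.2, 0.55, 0.9, 0.5, 0.45], name="indico_score")

# Scores strictly above 0.5 count as positive, per the convention above.
label = pd.cut(scores, bins=[0, 0.5, 1],
               labels=["negative", "positive"], include_lowest=True)
pos_ratio = (scores > 0.5).mean()
```

A borderline score of exactly 0.5 falls into the negative bin here; with real data you would decide that boundary case explicitly.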
Distribution
indico_news.indico_score.plot(kind='hist')
The distribution of the Indico score looks close to normal, which is certainly better behaved than the Stanford one. So maybe we should try using the Indico score?
indico_news.indico_score.resample('W').mean().plot()
Let's try again with news about "internet".
indico_news = pd.read_csv("../data/indico_nyt_internet.csv",
                          index_col='time', parse_dates=[0],
                          date_parser=lambda x: datetime.datetime.strptime(x, time_format))
indico_news.head()
indico_news.indico_score.plot(kind='hist', bins=20)
Again, it looks like a normal distribution. I am not sure this is the distribution we should expect for sentiment about such a topic: it is not a particularly neutral subject, so we would probably expect the distribution to be positively skewed. This needs further study to validate.
indico_news.indico_score.resample('W').mean().plot()
indico_news.indico_score.describe()
Weekly distribution
weekly_news_count = score_data.resample('W').count().fillna(0)
weekly_news_count.sentiment.describe()
News distribution by week
weekly_news_count.sentiment.plot()
News distribution
weekly_news_count.sentiment.plot(kind='hist')
Average weekly sentiment score
# Weekly average, matching the later per-week analysis.
weekly_score = score_data.resample('W').mean().fillna(0)
weekly_score.head()
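`resample('w', how='mean')` is the older pandas spelling; in current pandas the weekly mean is written as below. The dates and scores here are made up purely to show the mechanics:

```python
import pandas as pd

# Hypothetical daily scores spanning two Monday-to-Sunday weeks.
idx = pd.date_range("2015-01-05", periods=14, freq="D")
s = pd.Series([1.0] * 7 + [3.0] * 7, index=idx)

# Modern replacement for resample('w', how='mean'):
weekly = s.resample("W").mean()
```

Each weekly bin is labeled by its ending Sunday, so the two weeks average to 1.0 and 3.0 respectively.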
Score Distribution
weekly_score.sentimentValue.plot(kind='hist')
Score distribution by week
weekly_score.plot()
We are missing news about bitcoin for about half of the weeks. Therefore we also try the keyword "internet".
missing_news = 100 * (weekly_score.sentimentValue == 0).sum() / float(len(weekly_score))
print("Percentage of weeks without news: %f%%" % missing_news)
<center>Find Simulations and Load data
# Search for simulations. Hopefully the results will be from different codes.
# NOTE that this could be done more manually so that we don't "hope" but know.
A = scsearch( q=[1,4], nonspinning=True, verbose=True )

# Select which of the search results we wish to keep
U,V = A[77],A[131]

# Load the modes
u = gwylm(U,lmax=2,verbose=True)
v = gwylm(V,lmax=2,verbose=True)

# Plot the waveforms
u.plot(); v.plot()
review/notebooks/compare_waveforms_from_two_codes.ipynb
Cyberface/nrutils_dev
mit
<center>Recompose the Waveforms
# Recompose at a chosen sky location
theta,phi = 0,0
a,b = u.recompose(theta,phi,kind='strain'), v.recompose(theta,phi,kind='strain')
<center>Plot the amplitudes to verify correct scaling between GT and SXS waveforms
figure( figsize=2*array([5,3]) )
plot( a.t - a.intrp_t_amp_max, a.amp )
plot( b.t - b.intrp_t_amp_max, b.amp )
gca().set_yscale("log", nonposy='clip')
ylim([1e-5,1e-1])
xlim([-400,100])
title('the amplitudes should be approx. the same')

a.plot(); b.plot();
When you get home, you ask your roommate, who conveniently happens to work in a nuclear chemistry lab, to measure some of the properties of this odd material that you've been given. When she comes home that night, she says that she managed to purify the sample and measure its radioactive decay rate (which is to say, the number of decays over some period of time) and the total amount of stuff as a function of time. Since you have to use two different machines to do that, it's in two different files. She also mentions that she didn't have time to do any more because somebody else in the lab had forgotten to clean out the two machines, so she had to do a quick cleanup job before making the measurements.
# This block of code reads in the data files. Don't worry too much about how they
# work right now -- we'll talk about that in a couple of weeks!
import numpy as np

'''
count_times = the time since the start of data-taking when the data was taken (in seconds)
count_rates = the number of counts since the last time data was taken, at the time in count_times
'''
count_times = np.loadtxt("count_rates.txt", dtype=int)[0]
count_rates = np.loadtxt("count_rates.txt", dtype=int)[1]

'''
sample_times = the time since the start of data-taking when the sample was measured (in seconds)
sample_amounts = the number of atoms left of the mysterious material at the time in sample_times
'''
sample_times = np.loadtxt("sample_amount.txt", dtype=int)[0]
sample_amounts = np.loadtxt("sample_amount.txt", dtype=int)[1]
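For comparing against the measurements later, it helps to have the analytic decay curve on hand. A minimal sketch, with hypothetical N0 and half-life values rather than anything fitted to the data:

```python
import numpy as np

def decay(t, n0, half_life):
    """Analytic exponential decay: N(t) = N0 * 2**(-t / half_life)."""
    return n0 * 2.0 ** (-np.asarray(t, dtype=float) / half_life)

# Hypothetical parameters purely for illustration.
t = np.arange(0, 5000, 100)
n = decay(t, n0=1.0e6, half_life=1000.0)
```

By construction, the amount halves every `half_life` seconds, which is the behavior to look for in the measured sample amounts.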
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Using the four numpy arrays created in the cell above, plot the measured count rates as a function of time and, on a separate plot, plot the measured sample amounts as a function of time. What do you notice about these plots, compared to the ones from your analytic estimate? Also, if you inspect the data, approximately what is the initial amount of sample and the half-life? (Hint: when you plot, make sure you plot points rather than a line!)
# put your code here! add additional cells if necessary.
Based on the plot in the previous cell, what do you think is going on? put your answer here! How might you modify your model to emulate this behavior? In other words, how might you modify the equation for decay rate to get something that matches the observed decay rate? put your answer here! More complex data manipulation and visualization We're now going to try to do something to the decay rate data that you worked with above, to try to get a more accurate understanding of the decay rate and whatever is adding confusion to your modeling. What you're going to do is: 1. "Smooth" the decay rate data over multiple adjacent samples in time to get rid of some of the noise. Try writing a piece of code to loop over the array of data and average the sample you're interested in along with the N samples on either side (i.e., from element i-N to i+N, for an arbitrary number of cells). Store this smoothed data in a new array (perhaps using np.zeros_like() to create the new array?). 2. Plot your smoothed data on top of the noisy data to ensure that it agrees. 3. Create a new array with an analytic equation that describes the decay rate as a function of time, taking into account what you're seeing in point (2), and try to find the values of the various constants in the equation. 4. Plot the new array on top of the raw data and smoothed values. Does this expression, and the constants that you decided on, give reasonable results?
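As a rough sketch of step 1, here is one way the smoothing loop could look, run on synthetic noisy data (the window size N and the fake signal here are arbitrary choices for illustration, not values from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy decay-rate data standing in for count_rates.
t = np.arange(200)
signal = 100.0 * np.exp(-t / 50.0)
noisy = signal + rng.normal(0.0, 5.0, size=t.size)

# Smooth by averaging each sample with the N samples on either side,
# clipping the window at the edges of the array.
N = 5
smoothed = np.zeros_like(noisy)
for i in range(noisy.size):
    lo = max(0, i - N)
    hi = min(noisy.size, i + N + 1)
    smoothed[i] = noisy[lo:hi].mean()
```

Plotting `smoothed` on top of `noisy` should show the scatter shrinking while the overall shape of the curve is preserved.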
# put your code here! add additional cells if necessary.
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Feedback on this assignment Please fill out the form that appears when you run the code below. You must completely fill this out in order for your group to receive credit for the assignment!
from IPython.display import HTML HTML( """ <iframe src="https://goo.gl/forms/F1MvFMDpIWPScchr2?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> """ )
past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
With a dataframe
import numpy as np
import pandas as pd
from sympy import latex, solve
from sympy.parsing.latex import parse_latex

z = pd.DataFrame(np.array([parse_latex(r'x+y=6'), parse_latex(r'x-y=0')]))
z
z.columns
for i in z.index:
    print(z.loc[i][0])
latex(solve([z.loc[i][0] for i in z.index]))
tmp/.ipynb_checkpoints/Probando_parse_latex-checkpoint.ipynb
crdguez/mat4ac
gpl-3.0
Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow feature columns guide. In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column(). We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
INPUT_COLS = [ "pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude", "passenger_count", ] # Create input layer of feature columns # TODO 1 feature_columns = { colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS }
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
# Build a keras DNN model using Sequential API # TODO 2a model = Sequential( [ DenseFeatures(feature_columns=feature_columns.values()), Dense(units=32, activation="relu", name="h1"), Dense(units=8, activation="relu", name="h2"), Dense(units=1, activation="linear", name="output"), ] )
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments: An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function. A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function. We will add an additional custom metric called rmse to our list of metrics which will return the root mean square error.
# TODO 2b # Create a custom evalution metric def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) # Compile the keras model model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
%%time
# TODO 3
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)

LOGDIR = "./taxi_trained"
history = model.fit(
    x=trainds,
    steps_per_epoch=steps_per_epoch,
    epochs=NUM_EVALS,
    validation_data=evalds,
    callbacks=[TensorBoard(LOGDIR)],
)
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/3_keras_sequential_api.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The kinematics of local F stars Load the TGAS and 2MASS data that we use to, among other things, select F-type stars:
# Load TGAS and 2MASS tgas= gaia_tools.load.tgas() twomass= gaia_tools.load.twomass() jk= twomass['j_mag']-twomass['k_mag'] dm= -5.*numpy.log10(tgas['parallax'])+10. mj= twomass['j_mag']-dm
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Load the dwarf locus and select TGAS F-type stars with good parallaxes:
# Select F stars sp= effsel.load_spectral_types() sptype= 'F' jkmin= (sp['JH']+sp['HK'])[sp['SpT']=='%s0V' % sptype] if sptype == 'M': jkmax= (sp['JH']+sp['HK'])[sp['SpT']=='%s5V' % sptype] else: jkmax= (sp['JH']+sp['HK'])[sp['SpT']=='%s9V' % sptype] jmin= main_sequence_cut_r(jkmax) jmax= main_sequence_cut_r(jkmin) good_plx_indx= (tgas['parallax']/tgas['parallax_error'] > 10.)*(jk != 0.) good_sampling= good_plx_indx*(jk > jkmin)*(jk < jkmax)\ *(mj < main_sequence_cut_r(jk,tight=False,low=True))\ *(mj > main_sequence_cut_r(jk,tight=False)) print("Found %i F stars in TGAS with good parallaxes" % (numpy.sum(good_sampling))) tgas= tgas[good_sampling] twomass= twomass[good_sampling] jk= jk[good_sampling] dm= dm[good_sampling] mj=mj[good_sampling]
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Using XD to get $\sigma_{v_z}$ from the TGAS data alone We will determine $\sigma_{v_z}^2$ from the TGAS data alone, that is, using just the TGAS parallaxes and proper motions. This only works close to the plane, because further away the $v_z$ component of the velocity only weakly projects onto the proper motions. First we compute the linear velocity in RA and Dec and propagate the correlated parallax and proper-motion uncertainties to this vector. We then model this vector as a 2D projection of the 3D velocity distribution in $(v_x,v_y,v_z)$, where we are mainly interested in the $v_z$ component and so we do not have to worry too much about the in-plane $(v_x,v_y)$ velocity distribution.
# Compute XYZ
lb= bovy_coords.radec_to_lb(tgas['ra'],tgas['dec'],degree=True,epoch=None)
XYZ= bovy_coords.lbd_to_XYZ(lb[:,0],lb[:,1],1./tgas['parallax'],degree=True)
# Generate vradec
vradec= numpy.array([bovy_coords._K/tgas['parallax']*tgas['pmra'],
                     bovy_coords._K/tgas['parallax']*tgas['pmdec']])
# First calculate the transformation matrix T
epoch= None
theta,dec_ngp,ra_ngp= bovy_coords.get_epoch_angles(epoch)
Tinv= numpy.dot(numpy.array([[numpy.cos(ra_ngp),-numpy.sin(ra_ngp),0.],
                             [numpy.sin(ra_ngp),numpy.cos(ra_ngp),0.],
                             [0.,0.,1.]]),
                numpy.dot(numpy.array([[-numpy.sin(dec_ngp),0.,numpy.cos(dec_ngp)],
                                       [0.,1.,0.],
                                       [numpy.cos(dec_ngp),0.,numpy.sin(dec_ngp)]]),
                          numpy.array([[numpy.cos(theta),numpy.sin(theta),0.],
                                       [numpy.sin(theta),-numpy.cos(theta),0.],
                                       [0.,0.,1.]])))
# Calculate all projection matrices
ra= tgas['ra']/180.*numpy.pi
dec= tgas['dec']/180.*numpy.pi
A1= numpy.array([[numpy.cos(dec),numpy.zeros(len(tgas)),numpy.sin(dec)],
                 [numpy.zeros(len(tgas)),numpy.ones(len(tgas)),numpy.zeros(len(tgas))],
                 [-numpy.sin(dec),numpy.zeros(len(tgas)),numpy.cos(dec)]])
A2= numpy.array([[numpy.cos(ra),numpy.sin(ra),numpy.zeros(len(tgas))],
                 [-numpy.sin(ra),numpy.cos(ra),numpy.zeros(len(tgas))],
                 [numpy.zeros(len(tgas)),numpy.zeros(len(tgas)),numpy.ones(len(tgas))]])
TAinv= numpy.empty((len(tgas),3,3))
for jj in range(len(tgas)):
    TAinv[jj]= numpy.dot(numpy.dot(A1[:,:,jj],A2[:,:,jj]),Tinv)
proj= TAinv[:,1:]
# Sample from the joint (parallax, proper motion) uncertainty distribution
# to get the covariance matrix of vradec
nmc= 10001
# Need to sample from the (parallax, proper-motion) covariance matrix
vradec_cov= numpy.empty((len(tgas),2,2))
for ii in tqdm.trange(len(tgas)):
    # Construct covariance matrix
    tcov= numpy.zeros((3,3))
    tcov[0,0]= tgas['parallax_error'][ii]**2./2. # /2 because of symmetrization below
    tcov[1,1]= tgas['pmra_error'][ii]**2./2.
    tcov[2,2]= tgas['pmdec_error'][ii]**2./2.
    tcov[0,1]= tgas['parallax_pmra_corr'][ii]*tgas['parallax_error'][ii]*tgas['pmra_error'][ii]
    tcov[0,2]= tgas['parallax_pmdec_corr'][ii]*tgas['parallax_error'][ii]*tgas['pmdec_error'][ii]
    tcov[1,2]= tgas['pmra_pmdec_corr'][ii]*tgas['pmra_error'][ii]*tgas['pmdec_error'][ii]
    # symmetrize
    tcov= (tcov+tcov.T)
    # Cholesky decomp.
    L= numpy.linalg.cholesky(tcov)
    tsam= numpy.tile(numpy.array([tgas['parallax'][ii],tgas['pmra'][ii],tgas['pmdec'][ii]]),(nmc,1)).T
    tsam+= numpy.dot(L,numpy.random.normal(size=(3,nmc)))
    tvradec= numpy.array([bovy_coords._K/tsam[0]*tsam[1],
                          bovy_coords._K/tsam[0]*tsam[2]])
    vradec_cov[ii]= numpy.cov(tvradec)
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
If we use the same bins as those in which we computed the density, we have the following number of stars in each bin:
zbins= numpy.arange(-0.4125,0.425,0.025) indx= (numpy.fabs(XYZ[:,2]) > -0.4125)\ *(numpy.fabs(XYZ[:,2]) <= 0.4125)\ *(numpy.sqrt(XYZ[:,0]**2.+XYZ[:,1]**2.) < 0.2) bovy_plot.bovy_print(axes_labelsize=17.,text_fontsize=12.,xtick_labelsize=15.,ytick_labelsize=15.) _= hist(XYZ[indx,2],bins=zbins) xlabel(r'$Z\,(\mathrm{kpc})$')
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Now we perform an example fit:
def combined_sig2(amp,mean,covar): indx= numpy.sqrt(covar) < 30. tamp= amp[indx]/numpy.sum(amp[indx]) return (numpy.sum(tamp*(covar+mean**2.)[indx])-numpy.sum(tamp*mean[indx])**2.) # Fit with mix of Gaussians ii= 14 print("Z/pc:",500.*(numpy.roll(zbins,-1)+zbins)[:-1][ii]) indx= (XYZ[:,2] > zbins[ii])\ *(XYZ[:,2] <= zbins[ii+1])\ *(numpy.sqrt(XYZ[:,0]**2.+XYZ[:,1]**2.) < 0.2) ydata= vradec.T[indx] ycovar= vradec_cov[indx] ngauss= 2 initamp= numpy.random.uniform(size=ngauss) initamp/= numpy.sum(initamp) m= numpy.zeros(3) s= numpy.array([40.,40.,20.]) initmean= [] initcovar= [] for ii in range(ngauss): initmean.append(m+numpy.random.normal(size=3)*s) initcovar.append(4.*s**2.*numpy.diag(numpy.ones(3))) initcovar= numpy.array(initcovar) initmean= numpy.array(initmean) print("lnL",extreme_deconvolution(ydata,ycovar,initamp,initmean,initcovar,projection=proj[indx])) print("amp, mean, std. dev.",initamp,initmean[:,2],numpy.sqrt(initcovar[:,2,2])) print("Combined <v^2>, sqrt(<v^2>):",combined_sig2(initamp,initmean[:,2],initcovar[:,2,2]), numpy.sqrt(combined_sig2(initamp,initmean[:,2],initcovar[:,2,2])))
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
The following function computes a bootstrap estimate of the uncertainty in $\sigma_{v_z}$:
def bootstrap(nboot,vrd,vrd_cov,proj,ngauss=2): out= numpy.empty(nboot) for ii in range(nboot): # Draw w/ replacement indx= numpy.floor(numpy.random.uniform(size=len(vrd))*len(vrd)).astype('int') ydata= vrd[indx] ycovar= vrd_cov[indx] initamp= numpy.random.uniform(size=ngauss) initamp/= numpy.sum(initamp) m= numpy.zeros(3) s= numpy.array([40.,40.,20.]) initmean= [] initcovar= [] for jj in range(ngauss): initmean.append(m+numpy.random.normal(size=3)*s) initcovar.append(4.*s**2.*numpy.diag(numpy.ones(3))) initcovar= numpy.array(initcovar) initmean= numpy.array(initmean) lnL= extreme_deconvolution(ydata,ycovar,initamp,initmean,initcovar,projection=proj[indx]) out[ii]= combined_sig2(initamp,initmean[:,2],initcovar[:,2,2]) return out
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
A test case:
ii= 7 indx= (XYZ[:,2] > zbins[ii])\ *(XYZ[:,2] <= zbins[ii+1])\ *(numpy.sqrt(XYZ[:,0]**2.+XYZ[:,1]**2.) < 0.2) b= bootstrap(20,vradec.T[indx],vradec_cov[indx],proj[indx],ngauss=2) zbins[ii], numpy.mean(numpy.sqrt(b)), numpy.std(numpy.sqrt(b))
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Now we do the XD fit to each $Z$ bin and compute the uncertainty using bootstrap:
savefilename= 'Fstar-sigz.sav'
if not os.path.exists(savefilename):
    zbins= numpy.arange(-0.4125,0.425,0.025)
    nboot= 200
    nstar= numpy.zeros(len(zbins)-1)-1
    sig2z= numpy.zeros(len(zbins)-1)-1
    sig2z_err= numpy.zeros(len(zbins)-1)-1
    all_sam= numpy.zeros((len(zbins)-1,nboot))-1
    ngauss= 2
    start= 0
else:
    with open(savefilename,'rb') as savefile:
        out= (pickle.load(savefile),)
        while True:
            try:
                out= out+(pickle.load(savefile),)
            except EOFError:
                break
    zbins,sig2z,sig2z_err,nstar,all_sam,ngauss,nboot,start= out
for ii in tqdm.trange(start,len(zbins)-1):
    indx= (XYZ[:,2] > zbins[ii])\
        *(XYZ[:,2] <= zbins[ii+1])\
        *(numpy.sqrt(XYZ[:,0]**2.+XYZ[:,1]**2.) < 0.2)
    nstar[ii]= numpy.sum(indx)
    if numpy.sum(indx) < 30: continue
    # Basic XD fit
    ydata= vradec.T[indx]
    ycovar= numpy.zeros_like(vradec.T)[indx]
    initamp= numpy.random.uniform(size=ngauss)
    initamp/= numpy.sum(initamp)
    m= numpy.zeros(3)
    s= numpy.array([40.,40.,20.])
    initmean= []
    initcovar= []
    for jj in range(ngauss):
        initmean.append(m+numpy.random.normal(size=3)*s)
        initcovar.append(4.*s**2.*numpy.diag(numpy.ones(3)))
    initcovar= numpy.array(initcovar)
    initmean= numpy.array(initmean)
    lnL= extreme_deconvolution(ydata,ycovar,initamp,initmean,initcovar,projection=proj[indx])
    sig2z[ii]= combined_sig2(initamp,initmean[:,2],initcovar[:,2,2])
    sam= bootstrap(nboot,vradec.T[indx],vradec_cov[indx],proj[indx],ngauss=ngauss)
    all_sam[ii]= sam
    sig2z_err[ii]= 1.4826*numpy.median(numpy.fabs(sam-numpy.median(sam)))
    save_pickles(savefilename,zbins,sig2z,sig2z_err,nstar,all_sam,ngauss,nboot,ii+1)
figsize(12,4)
subplot(1,2,1)
errorbar(500.*(numpy.roll(zbins,-1)+zbins)[:-1],
         sig2z,yerr=sig2z_err,ls='None',marker='o')
axhline(11.7**2.,ls='--',color='0.65',lw=2.)
xlabel(r'$Z\,(\mathrm{pc})$')
ylabel(r'$\langle v_z^2 \rangle\,(\mathrm{km^2\,s}^{-2})$')
ylim(0.,400.)
xlim(-420,420)
subplot(1,2,2)
errorbar(500.*(numpy.roll(zbins,-1)+zbins)[:-1],
         numpy.sqrt(sig2z),yerr=0.5*sig2z_err/numpy.sqrt(sig2z),ls='None',marker='o')
ylim(7.,18.)
xlim(-420,420)
xlabel(r'$Z\,(\mathrm{pc})$')
ylabel(r'$\sqrt{\langle v_z^2 \rangle}\,(\mathrm{km\,s}^{-1})$')
axhline(11.7,ls='--',color='0.65',lw=2.)
tight_layout()
py/Fstar-kinematics.ipynb
jobovy/simple-m2m
mit
Create a SparkSession and give it a name. Note: This will start the spark client console -- there is no need to run spark-shell directly.
spark = SparkSession \ .builder \ .appName("PythonPi") \ .getOrCreate()
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
partitions is the number of spark workers to partition the work into.
partitions = 2
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
n is the number of random samples to calculate
n = 100000000
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
This is the sampling function. It generates numbers in the square from (-1, -1) to (1, 1), and returns 1 if it falls inside the unit circle, and 0 otherwise.
def f(_): x = random() * 2 - 1 y = random() * 2 - 1 return 1 if x ** 2 + y ** 2 <= 1 else 0
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
Here's where we farm the work out to Spark.
count = spark.sparkContext \ .parallelize(range(1, n + 1), partitions) \ .map(f) \ .reduce(add) print("Pi is roughly %f" % (4.0 * count / n))
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
Shut down the spark server.
spark.stop()
examples/Jupyter Spark example.ipynb
mreid-moz/jupyter-spark
mpl-2.0
Now that we see how the data is organized, let's use an MLP, with an architecture like the one taught in "Machine Learning" from Stanford on coursera.org, to recognize the digits. The MLP will have 3 layers: The input layer will have 784 units. The hidden layer will have 25 units. The output layer will have, obviously, 10 units. It will output 1 or 0 depending on the classification. Note that the sizes of the input and output layers are given by the X and Y datasets.
from sklearn.neural_network import MLPClassifier myX = myTrainDf[myTrainDf.columns[1:]] myY = myTrainDf[myTrainDf.columns[0]] # Use 'adam' solver for large datasets, alpha is the regularization term. # Display the optimization by showing the cost. myClf = MLPClassifier(hidden_layer_sizes=25, activation='logistic', solver='adam', alpha=1e-5, verbose=True) myClf.fit(myX, myY) # Get the training error myPredY = myClf.predict(myX) from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score # Generic function to assess performance in two datasets def showPerformance (aY, aYpred): # Ensure np.array print ('*** Performance Statistics ***') print ('Accuracy: ', accuracy_score(aY, aYpred)) print ('Precision: ', precision_score(aY, aYpred, average='micro')) print ('Recall: ', recall_score(aY, aYpred, average='micro')) print ('F1: ', f1_score(aY, aYpred, average='micro')) showPerformance(myY, myPredY)
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
The results are quite good, as expected. Now let's make a prediction for the test set.
myYtestPred = myClf.predict(myTestDf) myOutDf = pd.DataFrame(index=myTestDf.index+1, data=myYtestPred) myOutDf.reset_index().to_csv('submission.csv', header=['ImageId', 'Label'],index=False)
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
Cross-validation set and regularization parameter The following code will split the training set into a training set and a cross-validation set.
REG_ARRAY = [100, 1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 0] def splitDataset (aDf, aFrac): aTrainDf = aDf.sample(frac=aFrac) aXvalDf = aDf.iloc[[x for x in aDf.index if x not in aTrainDf.index]] return aTrainDf, aXvalDf mySampleTrainDf, mySampleXvalDf = splitDataset(myTrainDf, 0.8) myAccuracyDf = pd.DataFrame(index=REG_ARRAY, columns=['Accuracy']) for myAlpha in REG_ARRAY: print ('Training with regularization param ', str(myAlpha)) myClf = MLPClassifier(hidden_layer_sizes=25, activation='logistic', solver='adam', alpha=myAlpha, verbose=False) myClf.fit(mySampleTrainDf[mySampleTrainDf.columns[1:]], mySampleTrainDf['label']) myYpred = myClf.predict(mySampleXvalDf[mySampleXvalDf.columns[1:]]) myAccuracyDf.loc[myAlpha, 'Accuracy'] = accuracy_score(mySampleXvalDf['label'], myYpred) myAccuracyDf
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
From here one can tell that the default regularization parameter (around 1e-5) is a reasonable choice. Multiple layers
REG_ARRAY = [100, 1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 0] myAccuracyDf = pd.DataFrame(index=REG_ARRAY, columns=['Accuracy']) for myAlpha in REG_ARRAY: print ('Training with regularization param ', str(myAlpha)) myClf = MLPClassifier(hidden_layer_sizes=[400, 400, 100, 25], activation='logistic', solver='adam', alpha=myAlpha, verbose=False) myClf.fit(mySampleTrainDf[mySampleTrainDf.columns[1:]], mySampleTrainDf['label']) myYpred = myClf.predict(mySampleXvalDf[mySampleXvalDf.columns[1:]]) myAccuracyDf.loc[myAlpha, 'Accuracy'] = accuracy_score(mySampleXvalDf['label'], myYpred) myAccuracyDf
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
Let's produce a new output file with no regularization and a complex MLP of 784x400x400x100x25x10
myClf = MLPClassifier(hidden_layer_sizes=[400, 400, 100, 25], activation='logistic', solver='adam', alpha=0, verbose=True) myClf.fit(myTrainDf[myTrainDf.columns[1:]], myTrainDf['label']) myYtestPred = myClf.predict(myTestDf) myOutDf = pd.DataFrame(index=myTestDf.index+1, data=myYtestPred) myOutDf.reset_index().to_csv('submission2.csv', header=['ImageId', 'Label'],index=False)
digit_recognition/notebook.ipynb
drublackberry/fantastic_demos
mit
Model Averaging <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/average_optimizers_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers and averaging results from different points of trajectory of the inner optimizer. Model Average Checkpoint callbacks.ModelCheckpoint doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers required a custom callback. Using the update_weights parameter, ModelAverageCheckpoint allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but the saved model uses the average weights. Setup
!pip install -U tensorflow-addons import tensorflow as tf import tensorflow_addons as tfa import numpy as np import os
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Build Model
def create_model(opt): model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Prepare Dataset
#Load Fashion MNIST dataset train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) test_images, test_labels = test
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
We will be comparing three optimizers here: Unwrapped SGD SGD with Moving Average SGD with Stochastic Weight Averaging And see how they perform with the same model.
#Optimizers sgd = tf.keras.optimizers.SGD(0.01) moving_avg_sgd = tfa.optimizers.MovingAverage(sgd) stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Both the MovingAverage and SWA optimizers use the AverageModelCheckpoint callback.
#Callback checkpoint_path = "./training/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir, save_weights_only=True, verbose=1) avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir, update_weights=True)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Train Model Vanilla SGD Optimizer
#Build Model model = create_model(sgd) #Train the network model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback]) #Evaluate results model.load_weights(checkpoint_dir) loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2) print("Loss :", loss) print("Accuracy :", accuracy)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Moving Average SGD
#Build Model model = create_model(moving_avg_sgd) #Train the network model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback]) #Evaluate results model.load_weights(checkpoint_dir) loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2) print("Loss :", loss) print("Accuracy :", accuracy)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Stochastic Weight Average SGD
#Build Model model = create_model(stocastic_avg_sgd) #Train the network model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback]) #Evaluate results model.load_weights(checkpoint_dir) loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2) print("Loss :", loss) print("Accuracy :", accuracy)
site/en-snapshot/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
First, we need to define the dataset names and temporal ranges. Please note that the datasets have different time ranges. Here we will download the data from 1983, when ARC2 starts (CHIRPS goes back to 1981).
dh=datahub.datahub(server,version,API_key) dataset1='noaa_arc2_africa_01' variable_name1 = 'pr' time_start = '1983-01-01T00:00:00' time_end = '2018-01-01T00:00:00'
api-examples/ARC2_download_example.ipynb
planet-os/notebooks
mit
Then we define the spatial range. In this case we cover all of Africa. Keep in mind that it's a huge area, and downloading all the data for Africa over 35 years is a lot and can take several hours.
area_name = 'Africa' latitude_north = 42.24; longitude_west = -24.64 latitude_south = -45.76; longitude_east = 60.28
api-examples/ARC2_download_example.ipynb
planet-os/notebooks
mit
Download the data with package API Create package objects Send commands for the package creation Download the package files
package_arc2_africa_01 = package_api.package_api(dh,dataset1,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area_name) package_arc2_africa_01.make_package() package_arc2_africa_01.download_package()
api-examples/ARC2_download_example.ipynb
planet-os/notebooks
mit
The pandas library in Python is designed to make it easy to process tabular data, and we can use it to display the CSV file so that it looks more like a table. In the next example, we import the library (and give it the short name pd), and then use its read_csv() method to slurp up the CSV file.
import pandas as pd table = pd.read_csv("../data/formats/messages.csv") table
notebooks/data_formats.ipynb
edinburghlivinglab/dds-notebooks
cc0-1.0
One issue to note is that if a value in the CSV file contains a comma, then we have to wrap that value with quote signs, as in "James, Ewan". <a name="xml">XML</a> XML (eXtensible Markup Language) is a W3C open standard, used widely for storing data and for transferring it between applications and services. Here is a simple example: xml &lt;message&gt; &lt;to&gt;James&lt;/to&gt; &lt;from&gt;Arno&lt;/from&gt; &lt;heading&gt;Reminder&lt;/heading&gt; &lt;body&gt;Cycling to Cramond today!&lt;/body&gt; &lt;/message&gt; This is intended to be self-explanatory, in the sense that we have marked up the different parts of our data with explicit labels, or tags, enclosed in angle brackets. The representation is hierarchical, in the sense that we start off with a 'root' element, in this case &lt;message&gt;, which has four child elements: &lt;to&gt;, &lt;from&gt;, &lt;heading&gt; and &lt;body&gt;. In an XML document, we use start and end tags for marking up elements. For example, the start tag &lt;to&gt; has a corresponding end tag &lt;/to&gt;. In XML, all elements must have a closing tag. As you can see, XML is considerably more verbose than CSV, particularly when the data is tabular in nature. If we have more than one message in our data, then first we have to come up with a higher-level root of the tree &mdash; we'll use &lt;data&gt;. Second, each row in the table has a corresponding &lt;message&gt; element. Third, we should be explicit about the fact that there are two distinct people to whom the messages are addressed. We'll represent this by having separate &lt;to&gt; elements for each of them. We've put this all into a file called messages.xml and the result of parsing this file (with the lxml library) is shown below.
from lxml import etree tree = etree.parse("../data/formats/messages.xml") print(etree.tostring(tree, pretty_print = True, encoding="unicode"))
notebooks/data_formats.ipynb
edinburghlivinglab/dds-notebooks
cc0-1.0
<a name="json">JSON</a> JSON (JavaScript Object Notation) is intended as a format for transferring data that is easier for humans to write and read than XML. Unlike CSV and XML, JSON lets us represent lists directly using [ and ]. For example, the list containing the two strings 'James' and 'Ewan' is ['James', 'Ewan']. A JSON 'object' is a set of key-value pairs, enclosed by curly brackets { and }. So, for example, the following is a JSON object, with keys "body" and "date" and strings as values. json { "body": "Cycling to Cramond today!", "date": "13/10/2015" } We can use a list as a value, so the following also counts as a JSON object: json { "to": ["James", "Ewan"] } Returning to our running example, here's the result of 'loading' a JSON object from a file and 'dumping' it as a string which can then be printed out.
import json fp = open("../data/formats/messages.json") data = json.load(fp) print(json.dumps(data, indent=2, sort_keys=True))
notebooks/data_formats.ipynb
edinburghlivinglab/dds-notebooks
cc0-1.0
Load Data As usual, let's start by loading some network data. This time round, we have a physician trust network, slightly modified so that it is undirected rather than directed. The original directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesburg. The data was collected in 1966. A node represents a physician, and an edge between two physicians shows that the left physician said that the right physician is his friend, or that he turns to the right physician if he needs advice or is interested in a discussion. There is always only one edge between two nodes, even if more than one of the listed conditions is true.
# Load the network.
G = cf.load_physicians_network()

# Make a Circos plot of the graph.
import numpy as np
import matplotlib.pyplot as plt
from circos import CircosPlot

nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit
Question What can you infer about the structure of the graph from the Circos plot? Structures in a Graph We can leverage what we have learned in the previous notebook to identify special structures in a graph. In a network, cliques are one of these special structures. Cliques In a social network, cliques are groups of people in which everybody knows everybody. Triangles are the simplest non-trivial example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not. The core idea is that a node is in a triangle exactly when at least one pair of its neighbors are themselves connected by an edge (equivalently, following its neighbors' neighbors' neighbors can lead back to the node itself).
# Example code.
import itertools


def in_triangle(G, node):
    """
    Returns whether a given node is present in a triangle relationship or not.
    """
    # We first assume that the node is not present in a triangle.
    is_in_triangle = False
    # Then, iterate over every pair of the node's neighbors.
    for nbr1, nbr2 in itertools.combinations(G.neighbors(node), 2):
        # Check to see if there is an edge between the node's neighbors.
        # If there is an edge, then the given node is present in a triangle.
        if G.has_edge(nbr1, nbr2):
            is_in_triangle = True
            # We break because a single triangle is enough to answer the question.
            break
    return is_in_triangle


in_triangle(G, 3)
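The same pairwise check can be expressed without NetworkX at all, which makes the logic easier to test in isolation. A sketch on a hypothetical five-node graph, represented as a plain dict mapping each node to the set of its neighbours:

```python
import itertools

# Hypothetical toy graph: nodes 0-1-2 form a triangle, node 3 hangs off
# node 2, and node 4 is isolated.
adj = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1, 3},
    3: {2},
    4: set(),
}

def in_triangle(adj, node):
    # A node is in a triangle iff some pair of its neighbours is adjacent.
    for nbr1, nbr2 in itertools.combinations(adj[node], 2):
        if nbr2 in adj[nbr1]:
            return True
    return False

print(in_triangle(adj, 0))  # True  (0-1-2 is a triangle)
print(in_triangle(adj, 3))  # False (3 has only one neighbour)
```

Returning early from the loop plays the same role as the `break` in the NetworkX version: one closing edge is enough to prove the node is in a triangle.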
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit
Exercise Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with? Do not return the triplets, but the set/list of nodes. Possible Implementation: If my neighbor's neighbor's neighbor includes myself, then we are in a triangle relationship. Possible Implementation: If I check every pair of my neighbors, any pair that are also connected in the graph are in a triangle relationship with me. Hint: Python's itertools module has a combinations function that may be useful. Hint: NetworkX graphs have a .has_edge(node1, node2) function that checks whether an edge exists between two nodes. Verify your answer by drawing out the subgraph composed of those nodes.
# Possible answer
import itertools
import networkx as nx


def get_triangles(G, node):
    neighbors = set(G.neighbors(node))
    # The node itself belongs to any triangle it participates in.
    triangle_nodes = {node}
    # Iterate over every pair of the node's neighbors.
    for nbr1, nbr2 in itertools.combinations(neighbors, 2):
        # If the two neighbors are themselves connected, all three nodes form
        # a triangle, so record the pair. We do not break here, because we
        # want the nodes of every triangle, not just the first one found.
        if G.has_edge(nbr1, nbr2):
            triangle_nodes.add(nbr1)
            triangle_nodes.add(nbr2)
    return triangle_nodes


# Verify your answer with the following function call. Should return something
# of the form: {3, 9, 11, 41, 42, 67}
get_triangles(G, 3)

# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)

# Compare for yourself that those are the only triangles that node 3 is involved in.
# Note: in NetworkX 2.x, G.neighbors returns an iterator, so convert it to a list first.
neighbors3 = list(G.neighbors(3))
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
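As a quick sanity check for the exercise, the same pairwise logic can be run on a small hand-built graph where the expected answer is obvious. A sketch on a hypothetical toy graph (a dict-of-sets adjacency representation, not the physician network):

```python
import itertools

# Hypothetical toy graph: two triangles sharing node 0 ({0,1,2} and {0,3,4}),
# plus a pendant node 5 attached to node 0 that is in no triangle.
adj = {
    0: {1, 2, 3, 4, 5},
    1: {0, 2},
    2: {0, 1},
    3: {0, 4},
    4: {0, 3},
    5: {0},
}

def get_triangles(adj, node):
    # Collect the node itself plus every neighbour pair that closes a triangle.
    triangle_nodes = {node}
    for nbr1, nbr2 in itertools.combinations(adj[node], 2):
        if nbr2 in adj[nbr1]:
            triangle_nodes.update((nbr1, nbr2))
    return triangle_nodes

print(get_triangles(adj, 0))  # {0, 1, 2, 3, 4} -- node 5 is excluded
print(get_triangles(adj, 5))  # {5} -- a pendant node is in no triangle
```

Note that the function intentionally never breaks out of the loop: node 0 sits in two different triangles, and breaking after the first closing edge would miss the second one.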
4. Cliques, Triangles and Graph Structures (Student).ipynb
SubhankarGhosh/NetworkX
mit