As before, we can compose these plots by passing in an `ax` argument.
fig = plt.figure(figsize=(12, 6))
ax = plt.subplot(1, 2, 1)
pydsd.plot.plot_dsd(dsd, ax=ax)
ax = plt.subplot(1, 2, 2)
pydsd.plot.plot_NwD0(dsd, ax=ax)
plt.tight_layout()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Finally, let's visualize a few more of the calculated fields. We can also check which new fields have appeared.
dsd.fields.keys()
plt.figure(figsize=(12,12))
plt.subplot(2,2,1)
pydsd.plot.plot_ts(dsd, 'D0', x_min_tick_format='hour')
plt.xlabel('Time(hrs)')
plt.ylabel('$D_0$')
# plt.xlim(5,24)
plt.subplot(2,2,2)
pydsd.plot.plot_ts(dsd, 'Nw', x_min_tick_format='hour')
plt.xlabel('Time(hrs)')
plt.ylabel('$log_{10}(N_w)$')
# pl...
Note the fit submodule has alternative algorithms for calculating various DSD parameter fits. Radar Equivalent Scattering We can calculate radar-equivalent parameters as well. We use the PyTMatrix library under the hood for this. Let's look at what these measurements would look like if we did T-Matrix scattering at X-...
dsd.calculate_radar_parameters()
This assumes the BC shape relationship, X band, and 10 °C. All of the scattering options are fully configurable.
dsd.set_scattering_temperature_and_frequency(scattering_temp=10, scattering_freq=9700000000.0)
dsd.set_canting_angle(7)
Note this updates the parameters but, for computational reasons, does not re-scatter the fields until you ask it to. Let's do that now, while also changing the DSR we are using and the maximum diameter we will scatter for.
dsd.calculate_radar_parameters(dsr_func = pydsd.DSR.bc, max_diameter=7)
As before, these new fields will be added to the DropSizeDistribution object's fields dictionary.
dsd.fields.keys()
Now we can plot these variables using our pydsd.plot.plot_ts function, as before.
plt.figure(figsize=(12,12))
plt.subplot(2,2,1)
pydsd.plot.plot_ts(dsd, 'Zh', x_min_tick_format='hour')
plt.xlabel('Time(hrs)')
plt.ylabel('Reflectivity(dBZ)')
# plt.xlim(5,24)
plt.subplot(2,2,2)
pydsd.plot.plot_ts(dsd, 'Zdr', x_min_tick_format='hour')
plt.xlabel('Time(minutes)')
plt.ylabel('Differential Reflectivit...
Rain Rate Estimators PyDSD has built-in support for some fairly simple rain rate estimators for each of the polarimetric variables. Let's calculate a few of these and see how well they work. TODO: Add support for storing these on the object. TODO: Add better built-in plotting support for these.
(r_z_a, r_z_b), opt = dsd.calculate_R_Zh_relationship()
print(f'RR(Zh) = {r_z_a} Zh **{r_z_b}')
(r_kdp_a, r_kdp_b), opt = dsd.calculate_R_Kdp_relationship()
print(f'RR(KDP) = {r_kdp_a} KDP **{r_kdp_b}')
(r_zk_a, r_zk_b1, r_zk_b2), opt = dsd.calculate_R_Zh_Kdp_relationship()
print(f'RR(Zh, KDP) = {r_zk_a} Zh **{r_zk_b...
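The fitted relationships are plain power laws, so applying one is a single NumPy expression. A minimal standalone sketch (the coefficients below are invented for illustration; real ones come from the `calculate_R_*` calls above):

```python
import numpy as np

# Hypothetical power-law coefficients for illustration only.
r_z_a, r_z_b = 0.017, 0.714

# Reflectivity in linear units (mm^6 m^-3), i.e. after undoing the dB scaling.
zh_linear = np.array([1000.0, 10000.0, 50000.0])

# R(Z) estimator: RR = a * Zh**b
rr = r_z_a * np.power(zh_linear, r_z_b)
```

The rain rate grows monotonically with reflectivity under any positive exponent, which is why a single (a, b) pair summarizes the whole relationship.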
Let's visualize how well each of these estimators fits. We have the measured rain rate from the disdrometer in the fields variable.
rr_z = r_z_a * np.power(dsd._idb(dsd.fields['Zh']['data']), r_z_b)
rr_kdp = r_kdp_a * np.power(dsd.fields['Kdp']['data'], r_kdp_b)
rr_zk = r_zk_a * np.power(dsd._idb(dsd.fields['Zh']['data']), r_zk_b1) * np.power(dsd.fields['Kdp']['data'], r_zk_b2)
plt.figure(figsize=(12,4))
plt.subplot(1,3,1)
plt.scatter(rr_z, dsd.f...
As expected, estimators that use polarimetry tend to do much better. Convective/Stratiform Partitioning Finally, we have several algorithms for convective/stratiform partitioning in a variety of situations. Let's look at an applicable ground-based one due to Bringi.
cs = pydsd.partition.cs_partition.cs_partition_bringi_2010(dsd.fields['Nw']['data'], dsd.fields['D0']['data'])
We have a few ways to visualize this. One is to just look at the output, where 0 = unclassified, 1 = stratiform, 2 = convective, and 3 = transition.
plt.plot(cs)
We can also color-code the (D0, Nw) points to get a better visual understanding of this algorithm.
plt.scatter(dsd.fields['D0']['data'], np.log10(dsd.fields['Nw']['data']), c=cs)
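The partition itself amounts to a threshold test in the (D0, log10 Nw) plane. A schematic version with an assumed separator line (the real coefficients live in `pydsd.partition.cs_partition` and differ from these; the transition class is omitted here):

```python
import numpy as np

def cs_partition_sketch(nw, d0, c1=6.3, c2=1.6):
    """Schematic convective/stratiform partition.

    The separator log10(Nw) = c1 - c2*D0 is an assumed placeholder, not the
    published Bringi 2010 values. Returns 0-unclassified, 1-stratiform,
    2-convective per sample."""
    nw = np.asarray(nw, dtype=float)
    d0 = np.asarray(d0, dtype=float)
    out = np.zeros(nw.shape, dtype=int)   # 0 = unclassified
    valid = (nw > 0) & (d0 > 0)
    sep = c1 - c2 * d0                    # assumed separator line
    out[valid & (np.log10(nw) < sep)] = 1   # below the line -> stratiform
    out[valid & (np.log10(nw) >= sep)] = 2  # above the line -> convective
    return out

cs = cs_partition_sketch([1e3, 1e6, 0.0], [2.0, 1.0, 1.0])
```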
Note: If you're reading this as a static HTML page, you can also get it as an executable Jupyter notebook here. FSMs Without Monsters! If you google "FSM", you'll probably get links to "Flying Spaghetti Monster". But that is not this. Instead, I'm talking about finite-state machines. While some will tell you that worki...
from pygmyhdl import *

@chunk
def counter(clk_i, cnt_o):
    # Here's the counter state variable.
    cnt = Bus(len(cnt_o))

    # The next state logic is just an adder that adds 1 to the current cnt state variable.
    @seq_logic(clk_i.posedge)
    def next_state_logic():
        cnt.next = cnt + 1
    ...
examples/5_fsm/fsm.ipynb
xesscorp/pygmyhdl
mit
You can see the output logic for the counter just copies the state variable to the counter outputs, and the next-state logic is an adder that increments the current counter value. <img alt="Counter next-state and output logic." src="FSM_Counter.png" width=500 /> This counter doesn't take any inputs except for the clock...
@chunk
def counter_en_rst(clk_i, en_i, rst_i, cnt_o):
    cnt = Bus(len(cnt_o))

    # The next state logic now includes a reset input to clear the counter
    # to zero, and an enable input that only allows counting when it is true.
    @seq_logic(clk_i.posedge)
    def next_state_logic():
        if rst == T...
You can see that lowering the enable input over the interval [8, 12] keeps the counter from advancing, and raising the reset at $t=$ 16 forces the counter back to zero. This is all well and good, but you've known how to build counters for quite a while. Let's look at an FSM that does something new. A Button Debouncer...
@chunk
def debouncer(clk_i, button_i, button_o, debounce_time):
    '''
    Inputs:
        clk_i: Main clock input.
        button_i: Raw button input.
        button_o: Debounced button output.
        debounce_time: Number of clock cycles the button value has to be stable.
    '''
    # These are the state variable...
Now I can simulate button presses of various lengths and watch the output of the circuit. Note that I'm using a very small debounce time to keep the simulation to a reasonable length. In reality, a clock of 12 MHz and a debounce time of 100 ms would require a debounce count of 1,200,000.
initialize()  # Initialize for simulation here because we'll be watching the internal debounce counter.
clk = Wire(name='clk')
button_i = Wire(name='button_i')
button_o = Wire(name='button_o')
debouncer(clk, button_i, button_o, 3)

def debounce_tb():
    '''Test bench for the counter with a reset and enable inputs.'''...
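The counting behavior described above can be modeled in plain Python to check it before simulating the HDL. This is a sketch of the same idea, not the pygmyhdl implementation: a change on the input reloads the counter, a stable input counts it down, and the output only updates once the counter hits zero.

```python
class DebounceModel:
    """Pure-Python model of a debounce counter (a sketch, not the HDL)."""

    def __init__(self, debounce_time):
        self.debounce_time = debounce_time
        self.prev = 0   # button value on the previous clock
        self.cnt = 0    # cycles remaining until the input counts as stable
        self.out = 0    # debounced output

    def clock(self, button):
        if button != self.prev:      # input changed: restart the counter
            self.cnt = self.debounce_time
        elif self.cnt > 0:           # input stable: count down
            self.cnt -= 1
        else:                        # stable long enough: update the output
            self.out = button
        self.prev = button
        return self.out
```

A held button eventually propagates to the output, while a glitch shorter than the debounce time never does.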
Note these points in the simulation: The debounce counter gets reset to its maximum value when the current and previous button values are different ($t =$ 1, 9, 13, 17). The initial button press during interval [0, 8] is long enough to change the button output at $t =$ 9. The button release for $t \ge$ 16 is also ...
@chunk
def classic_fsm(clk_i, inputs_i, outputs_o):
    '''
    Inputs:
        clk_i: Main clock input.
        inputs_i: Two-bit input vector directs state transitions.
        outputs_o: Four-bit output vector.
    '''
    # Declare a state variable with four states. In addition to the current
    # state of th...
Now I can stimulate the FSM with the following test bench. The FSM is moved forward three states and then backward three states, so it should end up where it started.
initialize()
inputs = Bus(2, name='inputs')
outputs = Bus(4, name='outputs')
clk = Wire(name='clk')
classic_fsm(clk, inputs, outputs)

def fsm_tb():
    nop = 0b00  # no operation - both inputs are inactive
    fwd = 0b01  # Input combination for moving forward.
    bck = 0b10  # Input combination for moving backward....
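The state-transition logic itself is easy to model in plain Python. A sketch of the same four-state machine, where `fwd` (0b01) advances A → B → C → D and `bck` (0b10) retreats; the boundary behavior (no wraparound at A or D) is an assumption here:

```python
# Pure-Python sketch of the four-state FSM's transition function.
STATES = ['A', 'B', 'C', 'D']

def fsm_step(state, inputs):
    i = STATES.index(state)
    if inputs == 0b01 and i < 3:     # fwd: move toward D (assumed: no wrap)
        i += 1
    elif inputs == 0b10 and i > 0:   # bck: move toward A (assumed: no wrap)
        i -= 1
    return STATES[i]                 # nop or an end state: stay put

# Three forward steps then three backward steps return to the start.
state = 'A'
for ins in [0b01, 0b01, 0b01, 0b10, 0b10, 0b10]:
    state = fsm_step(state, ins)
```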
The waveforms show the FSM moving forward (A $\rightarrow$ B $\rightarrow$ C $\rightarrow$ D) and then moving back to where it started (D $\rightarrow$ C $\rightarrow$ B $\rightarrow$ A). This is good, but what if your inputs are slow (like from manually-operated pushbuttons) and your clock is very fast (like 12 MHz)....
@chunk
def classic_fsm(clk_i, inputs_i, outputs_o):
    fsm_state = State('A', 'B', 'C', 'D', name='state')
    reset_cnt = Bus(2)

    # Variables for storing the input values during the previous clock
    # and holding the changes between the current and previous input values.
    prev_inputs = Bus(len(inputs_i)...
Now I'll modify the test bench a bit by adding another sequence of inputs that alternate between active and inactive values.
initialize()
inputs = Bus(2, name='inputs')
outputs = Bus(4, name='outputs')
clk = Wire(name='clk')
classic_fsm(clk, inputs, outputs)

def fsm_tb():
    nop = 0b00
    fwd = 0b01
    bck = 0b10
    ins = [nop, nop, nop, nop, fwd, fwd, fwd, bck, bck, bck]
    for inputs.next in ins:
        clk.next = 0
        yi...
From the simulation, you can see the first sequence of six inputs (time $t =$ 8 to $t =$ 20) only caused two state transitions (A $\rightarrow$ B $\rightarrow$ A) because the inputs only changed twice. Then, when active inputs were interspersed with inactive inputs (time $t \ge$ 20), the FSM went through six state tran...
toVerilog(classic_fsm, clk_i=Wire(), inputs_i=Bus(2), outputs_o=Bus(4))
with open('classic_fsm.pcf', 'w') as pcf:
    pcf.write(
'''
set_io clk_i 21
set_io outputs_o[0] 99
set_io outputs_o[1] 98
set_io outputs_o[2] 97
set_io outputs_o[3] 96
set_io inputs_i[0] 118
set_io inputs_i[1] 114
'''
    )
!yosys -q -p "synth_i...
The following video shows the operation of the FSM on the iCEstick board. As you watch, you can see the FSM move backwards and forwards through the states under the guidance of the button presses. However, there are times when it makes multiple transitions for a single button press because the buttons are bouncing.
HTML('<div style="padding-bottom:50.000%;"><iframe src="https://streamable.com/s/lmqvd/urtqfp" frameborder="0" width="100%" height="100%" allowfullscreen style="width:640px;position:absolute;"></iframe></div>')
To correct the button bounce problem, I added debounce circuits to the FSM as shown below.
@chunk
def classic_fsm(clk_i, inputs_i, outputs_o):
    fsm_state = State('A', 'B', 'C', 'D', name='state')
    reset_cnt = Bus(2)
    prev_inputs = Bus(len(inputs_i), name='prev_inputs')
    input_chgs = Bus(len(inputs_i), name='input_chgs')

    # Take the inputs and run them through the debounce circuits.
    ...
Now it's just a matter of recompiling the debounced FSM and observing its operation.
toVerilog(classic_fsm, clk_i=Wire(), inputs_i=Bus(2), outputs_o=Bus(4))
with open('classic_fsm.pcf', 'w') as pcf:
    pcf.write(
'''
set_io clk_i 21
set_io outputs_o[0] 99
set_io outputs_o[1] 98
set_io outputs_o[2] 97
set_io outputs_o[3] 96
set_io inputs_i[0] 118
set_io inputs_i[1] 114
'''
    )
!yosys -q -p "synth_i...
I probably don't have to tell you that the bouncing buttons are conspicuously absent in the following video.
HTML('<div style="padding-bottom:50.000%;"><iframe src="https://streamable.com/s/agk4i/tqcuqu" frameborder="0" width="100%" height="100%" allowfullscreen style="width:640px;position:absolute;"></iframe></div>')
Restart the kernel before proceeding further (On the Notebook menu, select Kernel > Restart Kernel > Restart). Load necessary libraries
# Import TensorFlow and print the TF version.
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)
# Here we'll import the Pandas and NumPy data processing libraries
import pandas as pd
import numpy as np
# Use matplotlib for visualizing the model
import matplotlib.pyplot as p...
courses/machine_learning/deepdive2/launching_into_ml/solutions/decision_trees_and_random_Forests_in_Python.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Get the Data
# Read the "kyphosis.csv" file using the read_csv() function from the pandas library
df = pd.read_csv('../kyphosis.csv')
# Output the first five rows
df.head()
Exploratory Data Analysis We'll just check out a simple pairplot for this small dataset.
# Use the pairplot() function to plot multiple pairwise bivariate distributions in the dataset
# TODO 1
sns.pairplot(df, hue='Kyphosis', palette='Set1')
Train Test Split
# Import the train_test_split function from sklearn.model_selection
from sklearn.model_selection import train_test_split
# Remove the 'Kyphosis' column
X = df.drop('Kyphosis', axis=1)
y = df['Kyphosis']
# Let's split up the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, ...
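Under the hood, a train/test split is just a shuffled partition of the row indices. A minimal NumPy sketch with toy stand-in data (the 30% test fraction and seed are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(101)

# Toy stand-ins for the features/labels (shapes only).
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Shuffle the indices, then carve off 30% as the test set.
idx = rng.permutation(len(X))
n_test = int(round(0.3 * len(X)))
test_idx, train_idx = idx[:n_test], idx[n_test:]
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
```

Every row lands in exactly one of the two sets, which is the property the split must guarantee.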
Decision Trees We'll start just by training a single decision tree.
# Import the Decision Tree Classifier from sklearn.tree
from sklearn.tree import DecisionTreeClassifier
# Create a Decision Tree classifier object
dtree = DecisionTreeClassifier()
# Train the Decision Tree classifier
# TODO 2
dtree.fit(X_train, y_train)
Prediction and Evaluation Let's evaluate our decision tree.
# Predict the response for the test dataset
predictions = dtree.predict(X_test)
# Import classification_report and confusion_matrix
from sklearn.metrics import classification_report, confusion_matrix
# Here we will build a text report showing the main classification metrics
# TODO 3a
print(classification_report(y...
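A confusion matrix is nothing more than a table of (true, predicted) label counts. A small pure-Python sketch (label names here are invented stand-ins for the kyphosis classes):

```python
def confusion_matrix_sketch(y_true, y_pred, labels):
    """Count (true, predicted) label pairs; rows = true, cols = predicted."""
    pos = {lab: k for k, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[pos[t]][pos[p]] += 1
    return m

# Hypothetical true/predicted labels for illustration.
cm = confusion_matrix_sketch(
    ['absent', 'absent', 'present', 'present'],
    ['absent', 'present', 'present', 'present'],
    labels=['absent', 'present'])
```

The diagonal holds the correct predictions; off-diagonal cells are the misclassifications the report summarizes.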
Tree Visualization Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute it:
# Import some built-in visualization functionality for decision trees
from IPython.display import Image
from sklearn.externals.six import StringIO
from sklearn.tree import export_graphviz
import pydot

features = list(df.columns[1:])
features

# Now we are ready to visualize our Decision Tree mode...
Random Forests Now let's compare the decision tree model to a random forest.
# Import the Random Forest model
from sklearn.ensemble import RandomForestClassifier
# Create a Random Forest classifier
rfc = RandomForestClassifier(n_estimators=100)
# Train the model using the training sets
rfc.fit(X_train, y_train)
# Predict on the test set
rfc_pred = rfc.predict(X_test)
# Now we can compute confusio...
Basic analyses Pymatgen provides many analysis functions for Structures. Some common ones are given below.
# Determining the symmetry
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
finder = SpacegroupAnalyzer(structure)
print("The spacegroup is {}".format(finder.get_spacegroup_symbol()))
examples/Basic functionality.ipynb
Dioptas/pymatgen
mit
The vaspio_set module provides a means to obtain a complete set of VASP input files for performing calculations. Several useful presets based on the parameters used in the Materials Project are provided.
from pymatgen.io.vaspio_set import MPVaspInputSet
v = MPVaspInputSet()
# Writes a complete set of input files for structure to the directory MyInputFiles
v.write_input(structure, "MyInputFiles")
As you can see in the plot below, it is not linearly separable.
# Plot both classes on the x1, x2 plane
plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red')
plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue')
plt.grid()
plt.legend(loc=1)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.axis([-1.5, 1.5, -1.5, 1.5])
plt.title('red vs blue classes in...
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Model and Cost Function The model can be visualized as below: <p align="center"> <img src="https://raw.githubusercontent.com/weichetaru/weichetaru.github.com/master/notebook/machine-learning/img/SimpleANN04.png"></p> Input and Label So for the input layer, we have 2-dimensional inputs from N data points (Nx2): $$X = \beg...
# Define the logistic function - for hidden layer activation.
def logistic(z):
    return 1 / (1 + np.exp(-z))

# Define the softmax function
def softmax(z):
    return np.exp(z) / np.sum(np.exp(z), axis=1, keepdims=True)

# Function to compute the hidden activations
def hidden_activations(X, Wh, bh):
    return lo...
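As a quick numerical sanity check of the two activations defined above: the logistic maps any input into (0, 1), and each softmax row is a probability distribution that sums to 1. This standalone NumPy sketch needs no network state:

```python
import numpy as np

def logistic(z):
    # Element-wise sigmoid, maps R -> (0, 1).
    return 1 / (1 + np.exp(-z))

def softmax(z):
    # Row-wise softmax: each row becomes a probability distribution.
    return np.exp(z) / np.sum(np.exp(z), axis=1, keepdims=True)

Z = np.array([[1.0, 2.0], [0.5, -0.5]])
Y = softmax(Z)
```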
Cost Function The parameter set $w$ can be optimized by maximizing the likelihood: $$\underset{\theta}{\text{argmax}}\; \mathcal{L}(\theta|\mathbf{t},\mathbf{z})$$ The likelihood can be described as the joint distribution of $t$ and $z$ given $\theta$: $$P(\mathbf{t},\mathbf{z}|\theta) = P(\mathbf{t}|\mathbf{z},\theta)P(\...
# Define the cost function
def cost(Y, T):
    return - np.multiply(T, np.log(Y)).sum()

# Define the error function at the output
def error_output(Y, T):
    return Y - T

# Define the gradient function for the weight parameters at the output layer
def gradient_weight_out(H, Eo):
    return H.T.dot(Eo)

# Define the...
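The cross-entropy cost defined above rewards confident correct predictions. A small check: against one-hot targets, a confident correct prediction costs less than a uniform one, and the uniform cost is exactly $2\log 2$ for two samples of two classes.

```python
import numpy as np

def cost(Y, T):
    # Cross-entropy over all samples: -sum(T * log(Y)), as defined above.
    return -np.multiply(T, np.log(Y)).sum()

# One-hot targets; a confident correct prediction vs a uniform one.
T = np.array([[1.0, 0.0], [0.0, 1.0]])
Y_good = np.array([[0.9, 0.1], [0.1, 0.9]])
Y_uniform = np.full((2, 2), 0.5)
```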
Momentum A model like this is highly unlikely to have a convex cost function, and we might easily get stuck in a local minimum with gradient descent. Momentum was created to solve this. It's probably the most popular extension of the backprop algorithm. Momentum can be defined: $$\begin{split} V(i+1) & = \lambda V(i) - \mu \frac{\...
# Define the update function to update the network parameters over 1 iteration
def backprop_gradients(X, T, Wh, bh, Wo, bo):
    # Compute the output of the network
    # Compute the activations of the layers
    H = hidden_activations(X, Wh, bh)
    Y = output_activations(H, Wo, bo)
    # Compute the gradients of the ...
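The momentum update is only two lines of arithmetic. A minimal sketch on a 1-D quadratic cost $J(x) = x^2$ (gradient $2x$), matching the update rule $V(i{+}1) = \lambda V(i) - \mu\,\partial J/\partial x$, $x(i{+}1) = x(i) + V(i{+}1)$; the $\lambda$, $\mu$ values below are assumptions for illustration:

```python
# Momentum gradient descent on J(x) = x^2, whose gradient is 2x.
lam, mu = 0.9, 0.05   # momentum term and learning rate (assumed values)
x, v = 5.0, 0.0       # start away from the minimum with zero velocity
for _ in range(200):
    grad = 2 * x
    v = lam * v - mu * grad   # V(i+1) = lambda*V(i) - mu*grad
    x = x + v                 # x(i+1) = x(i) + V(i+1)
```

The iterate overshoots and oscillates (the "heavy ball" rolling through the valley) before settling at the minimum at x = 0.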
Code Implementation
# Run backpropagation
# Initialize weights and biases
init_var = 0.1
# Initialize hidden layer parameters
bh = np.random.randn(1, 3) * init_var
Wh = np.random.randn(2, 3) * init_var
# Initialize output layer parameters
bo = np.random.randn(1, 2) * init_var
Wo = np.random.randn(3, 2) * init_var
# Parameters are already ...
Visualization of the trained classifier The classifier we just trained circles around and between the blue and red classes. It's non-linear and is hence able to correctly classify red and blue.
# Plot the resulting decision boundary
# Generate a grid over the input space to plot the color of the
# classification at that grid point
nb_of_xs = 200
xs1 = np.linspace(-2, 2, num=nb_of_xs)
xs2 = np.linspace(-2, 2, num=nb_of_xs)
xx, yy = np.meshgrid(xs1, xs2)  # create the grid
# Initialize and fill the classificati...
Transformation of the input domain You can see from the plot below that the 2-dimensional inputs have been projected into a 3-dimensional space (the hidden layer) and become linearly separable.
# Plot the projection of the input onto the hidden layer
# Define the projections of the blue and red classes
H_blue = hidden_activations(x_blue, Wh, bh)
H_red = hidden_activations(x_red, Wh, bh)
# Plot the error surface
fig = plt.figure()
ax = Axes3D(fig)
ax.plot(np.ravel(H_blue[:,2]), np.ravel(H_blue[:,1]), np.ravel...
try/finally Statement The other flavor of the try statement is a specialization that has to do with finalization (a.k.a. termination) actions. If a finally clause is included in a try, Python will always run its block of statements “on the way out” of the try statement, whether an exception occurred while the try block...
class TraceBlock:
    def message(self, arg):
        print('running ' + arg)
    def __enter__(self):
        print('starting with block')
        return self
    def __exit__(self, exc_type, exc_value, exc_tb):
        if exc_type is None:
            print('exited normally\n')
        else:
            ...
09 Exceptions.ipynb
leriomaggio/python-in-a-notebook
mit
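The protocol TraceBlock illustrates has two hooks: `__enter__` runs on the way in, and `__exit__` always runs on the way out, receiving the exception details (or three `None`s). A minimal standalone sketch in the same spirit (the class name and attribute are invented here):

```python
class SuppressBlock:
    """Minimal context manager sketch: record how the block exited."""

    def __init__(self):
        self.exited_normally = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.exited_normally = exc_type is None
        # Returning True suppresses the exception;
        # returning None/False would let it propagate.
        return True
```

Whether the block raised or not is visible afterwards via the recorded flag.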
User Defined Exceptions
class AlreadyGotOne(Exception):
    pass

def gail():
    raise AlreadyGotOne()

try:
    gail()
except AlreadyGotOne:
    print('got exception')

class Career(Exception):
    def __init__(self, job, *args, **kwargs):
        super(Career, self).__init__(*args, **kwargs)
        self._job = job
    def __str...
<div class="alert alert-success"> **EXERCISE 1** Make a line chart of the `data` using Matplotlib. The figure should be 12 (width) by 4 (height) in inches. Make the line color 'darkgrey' and provide an x-label ('days since start') and a y-label ('measured value'). Use the object oriented approach to create the chart...
# %load _solutions/visualization_01_matplotlib1.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 2** The data represents one value per day, starting from Jan 1st 2021. Create an array (variable name `dates`) of the same length as the original data (length 100) with the corresponding dates ('2021-01-01', '2021-01-02',...). Create the same chart as in the previous exercise, but...
# %load _solutions/visualization_01_matplotlib2.py
<div class="alert alert-success"> **EXERCISE 3** Compare the __last ten days__ ('2021-04-01' till '2021-04-10') in a bar chart using darkgrey color. For the data on '2021-04-01', use an orange bar to highlight the measurement on this day. <details><summary>Hints</summary> - Select the last 10 days from the `data` a...
# %load _solutions/visualization_01_matplotlib3.py
<div class="alert alert-success"> **EXERCISE 4** Pandas supports different types of charts besides line plots, all available from `.plot.xxx`, e.g. `.plot.scatter`, `.plot.bar`,... Make a bar chart to compare the mean discharge in the three measurement stations L06_347, LS06_347, LS06_348. Add a y-label 'mean dischar...
# %load _solutions/visualization_01_matplotlib4.py
<div class="alert alert-success"> **EXERCISE 5** To compare the stations data, make two subplots next to each other: - In the left subplot, make a bar chart of the minimal measured value for each of the stations. - In the right subplot, make a bar chart of the maximal measured value for each of the stations. Add ...
# %load _solutions/visualization_01_matplotlib5.py
<div class="alert alert-success"> **EXERCISE 6** Make a line plot of the discharge measurements in station `LS06_347`. The main event on November 13th caused a flood event. To support the reader in the interpretation of the graph, add the following elements: - Add an horizontal red line at 20 m3/s to define the al...
# %load _solutions/visualization_01_matplotlib6.py
In previous weeks we have covered preprocessing our data and dimensionality reduction, and last week we looked at supervised learning. This week we will pull these processes together into a complete project. Most projects can be thought of as a series of discrete steps: Data acquisition / loading Feature creation Feat...
# http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#example-plot-digits-pipe-py
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV

logistic = linear_model.Logi...
Wk12-ml-workflow/Wk12-machine-learning-workflow.ipynb
beyondvalence/biof509_wtl
mit
FeatureUnion
# http://scikit-learn.org/stable/auto_examples/feature_stacker.html#example-feature-stacker-py
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
from sklearn.datase...
Exercises Using the final example with the diabetes dataset from last week, convert the solution over to a pipeline format. Do you get the same result for the optimal number of neighbors? Create a new pipeline applying PCA to the dataset before the classifier. What is the optimal number of dimensions and neighbors? Loo...
from sklearn import grid_search
from sklearn import datasets
from sklearn import neighbors
from sklearn import metrics
from sklearn.pipeline import Pipeline
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

diabetes = datasets.load_diabetes()
X = diabetes.data
y = d...
Load some data I'm going to work with the data from the combined data sets. The analysis for this data set is in analysis\Cf072115_to_Cf072215b. The one limitation here is that the fission chamber neighbors have already been cut out of this data. det_df without fission chamber neighbors
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv')
pair_is = bicorr.generate_pair_is(det_df, ignore_fc_neighbors_flag=True)
det_df = det_df.loc[pair_is].reset_index().rename(columns={'index': 'index_og'}).copy()
det_df.head()
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
I am going to add a new optional input parameter to bicorr.load_det_df that will give you this det_df without fission chamber neighbors directly. Try it out.
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv', remove_fc_neighbors=True)
det_df.head()
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
num_fissions = 2194651200.00
singles_hist.npz
singles_hist, dt_bin_edges_sh, dict_det_to_index, dict_index_to_det = bicorr.load_singles_hist(filepath='../analysis/Cf072115_to_Cf072215b/datap',plot_flag=True,show_flag=True)
Load bhp_nn for all pairs I'm going to skip a few steps in order to save memory. This data was produced in analysis_build_bhp_nn_by_pair_1_ns.ipynb and is stored in datap\bhp_nn_by_pair_1ns.npz. Load it now, as explained in the notebook.
npzfile = np.load('../analysis/Cf072115_to_Cf072215b/datap/bhp_nn_by_pair_1ns.npz')
pair_is = npzfile['pair_is']
bhp_nn_pos = npzfile['bhp_nn_pos']
bhp_nn_neg = npzfile['bhp_nn_neg']
dt_bin_edges = npzfile['dt_bin_edges']
The fission chamber neighbors have already been removed
len(pair_is)
Specify energy range
emin = 0.62
emax = 12
Calculate sums Singles- set up singles_df I will store this in a pandas dataframe. Columns:
- Channel number
- Sp - Singles counts, positive
- Sn - Singles counts, negative
- Sd - Singles counts, br-subtracted
- Sd_err - Singles counts, br-subtracted, err
singles_df = pd.DataFrame.from_dict(dict_index_to_det, orient='index', dtype=np.int8).rename(columns={0: 'ch'})
chIgnore = [1, 17, 33]
singles_df = singles_df[~singles_df['ch'].isin(chIgnore)].copy()
singles_df['Sp'] = 0.0
singles_df['Sn'] = 0.0
singles_df['Sd'] = 0.0
singles_df['Sd_err'] = 0.0
Singles- calculate sums
for index in singles_df.index.values:
    Sp, Sn, Sd, Sd_err = bicorr.calc_n_sum_br(singles_hist, dt_bin_edges_sh, index, emin=emin, emax=emax)
    singles_df.loc[index,'Sp'] = Sp
    singles_df.loc[index,'Sn'] = Sn
    singles_df.loc[index,'Sd'] = Sd
    singles_df.loc[index,'Sd_err'] = Sd_err
singles_df.head()
bico...
Doubles- set up det_df
det_df.head()
det_df['Cp'] = 0.0
det_df['Cn'] = 0.0
det_df['Cd'] = 0.0
det_df['Cd_err'] = 0.0
det_df['Np'] = 0.0
det_df['Nn'] = 0.0
det_df['Nd'] = 0.0
det_df['Nd_err'] = 0.0
det_df.head()
Doubles- Calculate sums
for index in det_df.index.values:
    Cp, Cn, Cd, err_Cd = bicorr.calc_nn_sum_br(bhp_nn_pos[index,:,:], bhp_nn_neg[index,:,:], dt_bin_edges, emin=emin, emax=emax)
    det_df.loc[i...
Perform the correction Now I am going to loop through all pairs and calculate $W$:
- Loop through each pair
- Identify $i$, $j$
- Fetch $S_i$, $S_j$
- Calculate $W$
- Propagate error for $W_{err}$
- Store in det_df
Add W, W_err columns to det_df
det_df['Sd1'] = 0.0
det_df['Sd1_err'] = 0.0
det_df['Sd2'] = 0.0
det_df['Sd2_err'] = 0.0
det_df['W'] = 0.0
det_df['W_err'] = 0.0
det_df.head()
singles_df.head()
Loop through det_df, store singles rates Fill the S and S_err values for each channel in each detector pair.
# Fill S columns in det_df
for index in singles_df.index.values:
    ch = singles_df.loc[index,'ch']
    d1_indices = (det_df[det_df['d1'] == ch]).index.tolist()
    d2_indices = (det_df[det_df['d2'] == ch]).index.tolist()
    det_df.loc[d1_indices,'Sd1'] = singles_df.loc[index,'Sd']
    det_df.loc[d1_ind...
This is much "tighter" than the raw counts. Functionalize Write functions to perform all of these calculations, and demo them here. The functions are in a new script called bicorr_sums.py. You have to specify emin and emax.
emin = 0.62
emax = 12
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Data you have to have loaded: det_df dict_index_to_det singles_hist dt_bin_edges_sh bhp_nn_pos bhp_nn_neg dt_bin_edges emin emax num_fissions angle_bin_edges Produce and fill singles_df:
singles_df = bicorr_sums.init_singles_df(dict_index_to_det) singles_df.head() singles_df = bicorr_sums.fill_singles_df(dict_index_to_det, singles_hist, dt_bin_edges_sh, emin, emax) singles_df.head() bicorr_plot.Sd_vs_angle_all(singles_df)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Expand, fill det_df
det_df.head() det_df = bicorr_sums.init_det_df_sums(det_df, t_flag = True) det_df = bicorr_sums.fill_det_df_singles_sums(det_df, singles_df) det_df = bicorr_sums.fill_det_df_doubles_t_sums(det_df, bhp_nn_pos, bhp_nn_neg, dt_bin_edges, emin, emax) det_df = bicorr_sums.calc_det_df_W(det_df) det_df.head()
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Condense into angle bins
angle_bin_edges = np.arange(8,190,10) by_angle_df = bicorr_sums.condense_det_df_by_angle(det_df,angle_bin_edges) by_angle_df.head() bicorr_plot.W_vs_angle(det_df, by_angle_df, save_flag=False)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Put all of this into one function Returns: singles_df, det_df, by_angle_df
angle_bin_edges = np.arange(8,190,10) singles_df, det_df, by_angle_df = bicorr_sums.perform_W_calcs(det_df, dict_index_to_det, singles_hist, dt_bin_edges_sh, bhp_nn_pos, bhp_nn_neg, dt_bin_edges, num_fissions, emin, emax, angle_bin_edges) det_df.head() bico...
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Annotate You can add new fields to a table with annotate. As an example, let's create a new field called cleaned_occupation that replaces occupation entries labeled 'other' or 'none' with missing.
missing_occupations = hl.set(['other', 'none']) t = users.annotate( cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show()
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
danking/hail
mit
transmute replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. transmute is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with transmute replacing select.
missing_occupations = hl.set(['other', 'none']) t = users.select( cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show() missing_occupations = hl.set(['other', 'none']) t =...
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
danking/hail
mit
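The select/transmute distinction can be mimicked with plain dictionaries. This is only a toy analogue, not the Hail API (in particular, Hail's select also keeps the table's key fields, which the sketch ignores):

```python
# Toy dict-based analogue of the select/transmute distinction (not the Hail API):
row = {"user_id": 1, "occupation": "other", "zipcode": "94110"}

def select_fields(row, new_fields):
    # Like select: the result carries only the new fields
    return dict(new_fields)

def transmute_fields(row, replaced, new_fields):
    # Like transmute: mentioned fields are replaced, all others kept
    out = {k: v for k, v in row.items() if k not in replaced}
    out.update(new_fields)
    return out

selected = select_fields(row, {"cleaned_occupation": None})
transmuted = transmute_fields(row, {"occupation"}, {"cleaned_occupation": None})
```

`selected` contains only the new field, while `transmuted` keeps `user_id` and `zipcode` alongside it.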
From now on, we will refer to this table using this variable ($meth_BQtable), but we could just as well explicitly give the table name each time. Let's start by taking a look at the table schema:
%bigquery schema --table $meth_BQtable
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
Let's count up the number of unique patients, samples and aliquots mentioned in this table. Using the same approach, we can count up the number of unique CpG probes. We will do this by defining a very simple parameterized query. (Note that when using a variable for the table name in the FROM clause, you should not a...
%%sql --module count_unique DEFINE QUERY q1 SELECT COUNT (DISTINCT $f, 500000) AS n FROM $t fieldList = ['ParticipantBarcode', 'SampleBarcode', 'AliquotBarcode', 'Probe_Id'] for aField in fieldList: field = meth_BQtable.schema[aField] rdf = bq.Query(count_unique.q1,t=meth_BQtable,f=field).results().to_dataframe()...
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
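The per-field COUNT(DISTINCT ...) loop amounts to counting unique values of each column. The same logic in plain Python, with made-up rows standing in for the BigQuery table:

```python
# Plain-Python analogue of the per-field COUNT(DISTINCT ...) loop above,
# using made-up rows in place of the BigQuery table:
rows = [
    {"ParticipantBarcode": "P1", "SampleBarcode": "S1"},
    {"ParticipantBarcode": "P1", "SampleBarcode": "S2"},
    {"ParticipantBarcode": "P2", "SampleBarcode": "S3"},
]

field_list = ["ParticipantBarcode", "SampleBarcode"]
counts = {f: len({r[f] for r in rows}) for f in field_list}
```

Here one participant contributed two samples, so the distinct counts differ across fields.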
As mentioned above, two different platforms were used to measure DNA methylation. The annotations from Illumina are also available in a BigQuery table:
methAnnot = bq.Table('isb-cgc:platform_reference.methylation_annotation') %bigquery schema --table $methAnnot
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
Given the coordinates for a gene of interest, we can find the associated methylation probes.
%%sql --module getGeneProbes SELECT IlmnID, Methyl27_Loci, CHR, MAPINFO FROM $t WHERE ( CHR=$geneChr AND ( MAPINFO>$geneStart AND MAPINFO<$geneStop ) ) ORDER BY Methyl27_Loci DESC, MAPINFO ASC # MLH1 gene coordinates (+/- 2500 bp) geneChr = "3" geneStart = 37034841 - 2500 geneStop = 37092337 + 2500 ml...
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
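The WHERE clause above is a chromosome match plus an open interval test on MAPINFO. A minimal Python sketch of that filter, using hypothetical probe tuples:

```python
# Each probe: (IlmnID, CHR, MAPINFO). Values below are made up for illustration.
probes = [
    ("cg001", "3", 37030000),
    ("cg002", "3", 37040000),
    ("cg003", "3", 37095000),
    ("cg004", "7", 37040000),
]

gene_chr = "3"
gene_start = 37034841 - 2500   # same +/- 2500 bp window as the query
gene_stop = 37092337 + 2500

# Keep probes on the right chromosome and inside the window, sorted by position
hits = sorted(
    (p for p in probes if p[1] == gene_chr and gene_start < p[2] < gene_stop),
    key=lambda p: p[2],
)
```

Only one of the hypothetical probes survives both the chromosome and position tests.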
There are a total of 50 methylation probes in and near the MLH1 gene, although only 6 of them are on both the 27k and the 450k versions of the platform.
mlh1Probes
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
We can now use this list of CpG probes as a filter on the data table to extract all of the methylation data across all tumor types for MLH1:
%%sql --module getMLH1methStats SELECT cpg.IlmnID AS Probe_Id, cpg.Methyl27_Loci AS Methyl27_Loci, cpg.CHR AS Chr, cpg.MAPINFO AS Position, data.beta_stdev AS beta_stdev, data.beta_mean AS beta_mean, data.beta_min AS beta_min, data.beta_max AS beta_max FROM ( SELECT * FROM $mlh1Probes ) AS cpg JO...
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize we...
first-neural-network/Your_first_neural_network.ipynb
brandoncgay/deep-learning
mit
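The forward pass you are asked to implement is just weighted sums with a sigmoid at the hidden layer. The sketch below assumes the usual setup for this exercise (sigmoid hidden units, a linear output unit); that is an assumption, not taken from the notebook:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_in_hid, w_hid_out):
    # One hidden layer with sigmoid units and a linear (identity) output
    # unit. w_in_hid[j] holds the input weights of hidden unit j.
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, weights)))
              for weights in w_in_hid]
    return sum(h * w for h, w in zip(hidden, w_hid_out))

# Tiny hand-checkable case: with zero weights every hidden unit outputs
# sigmoid(0) = 0.5, so the output is 0.5 + 0.5 = 1.0
out = forward([1.0, 0.0],
              w_in_hid=[[0.0, 0.0], [0.0, 0.0]],
              w_hid_out=[1.0, 1.0])
```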
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys ### Set the hyperparameters here ### iterations = 100 learning_rate = 0.1 hidden_nodes = 2 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random ba...
first-neural-network/Your_first_neural_network.ipynb
brandoncgay/deep-learning
mit
Error plots for MiniZephyr vs. the AnalyticalHelmholtz response Response of the field, showing where the numerical case does not match the analytical case: the source region and the PML regions.
fig = plt.figure() ax = fig.add_subplot(1,1,1, aspect=0.1) plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz') plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr') plt.legend(loc=4) plt.title('Real part of response through xs=%d'%xs)
notebooks/Compare Solutions Homogeneous.ipynb
uwoseis/zephyr
mit
Define a Neural Network ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Three fully connected layers, input size = height * width of the image, output size = the number of classes (which is 10 in the case of MNIST) Use the base class: nn.Module The nn.Module mainly takes care of storing the parameters of the neural network.
class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(28 * 28, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # flatten image x = x[:, 0, ...].view(-1, 28*28) #...
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Define a Loss function and optimizer ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Let's use a Classification Cross-Entropy loss and SGD with momentum.
import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
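nn.CrossEntropyLoss combines a log-softmax with a negative log-likelihood. What it computes for a single sample can be written out directly; this is a hand-rolled sketch of the formula, not the PyTorch implementation:

```python
import math

def cross_entropy(logits, target):
    # What nn.CrossEntropyLoss computes per sample:
    # -log(softmax(logits)[target]), with the usual max-subtraction
    # for numerical stability.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[target] / sum(exps))

loss = cross_entropy([2.0, 1.0, 0.1], target=0)
```

For two equal logits the loss is log(2), since the softmax assigns probability 0.5 to each class.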
Train the network ^^^^^^^^^^^^^^^^^^^^ This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize.
for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net...
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Test the network on the test data ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all. We will check this by predicting the class label that the neural network outputs, and checking it against the ground...
testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Performance on the test dataset.
correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on ...
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Plot images:
import matplotlib.pyplot as plt import numpy as np # functions to show an image def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() predictio...
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Why divide by n? Eigenvectors of a symmetric matrix that correspond to distinct eigenvalues are mutually orthogonal, so the matrix formed from a symmetric matrix's unit eigenvectors is an orthogonal matrix. Orthogonal matrix: $Q^T Q = I$; that is, each column vector dotted with itself gives 1, and with any other column gives 0. There is also the diagonalization relationship. This seems somewhat related to the SVD, though I'm not certain. The expectation is the first-order raw (origin) moment; the variance is the second-order central moment.
import numpy as np # rv is short for random variable def cal_stats(rv): length = len(rv) if length == 0: raise ValueError("length of 0") mean = 0 variance = 0 # third-order origin (raw) moment third_order = 0 tw...
ml-zb/lec02.ipynb
JasonWayne/course-notes
mit
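The moment definitions above can be checked on a tiny sample. A minimal sketch (population moments, dividing by n as the note discusses):

```python
def raw_moment(xs, k):
    # k-th raw (origin) moment: E[X**k]
    return sum(x ** k for x in xs) / len(xs)

def central_moment(xs, k):
    # k-th central moment: E[(X - E[X])**k]
    mean = raw_moment(xs, 1)
    return sum((x - mean) ** k for x in xs) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
mean = raw_moment(xs, 1)          # expectation = 1st raw moment -> 2.5
variance = central_moment(xs, 2)  # variance = 2nd central moment -> 1.25
```

The symmetric sample also makes the third central moment (skewness numerator) vanish, which is a quick sanity check on the implementation.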
RT vs Control Human/Mouse
out_dir = "RT_control_hm_gsea" df = pd.read_csv(os.path.join(BASE,"RT_control_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'ran...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
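Each GSEA cell in this section repeats the same ranked-list preparation: filter on padj, flip the fold-change sign, rename, and sort. A pure-Python sketch of that pipeline on hypothetical DESeq2-style rows (no pandas):

```python
# Hypothetical DESeq2-style rows: (gene_name, log2FoldChange, padj)
rows = [
    ("GENE_A",  1.5, 0.01),
    ("GENE_B", -2.0, 0.05),
    ("GENE_C",  0.7, 0.50),   # fails the padj <= 0.2 cut
]

# Filter on padj, flip the fold-change sign (as each cell above does),
# then sort descending to form the preranked GSEA input.
rank_list = sorted(
    ((gene, -lfc) for gene, lfc, padj in rows if padj <= 0.2),
    key=lambda r: r[1],
    reverse=True,
)
```

The sign flip means genes down-regulated in the first condition end up at the top of the ranked list.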
RT vs Control Non-Human/Mouse
out_dir = "RT_control_gsea" df = pd.read_csv(os.path.join(BASE,"RT_control_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
Rag vs WT Human/Mouse
out_dir = "Rag_WT_hm_gsea" df = pd.read_csv(os.path.join(BASE,"Rag_WT_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) ran...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
Rag vs WT Non-Human/Mouse
out_dir = "Rag_WT_gsea" df = pd.read_csv(os.path.join(BASE,"Rag_WT_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs WT Human/Mouse
out_dir = "RT_WT_hm_gsea" df = pd.read_csv(os.path.join(BASE,"RT_WT_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = r...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs WT Non-Human/Mouse
out_dir = "RT_WT_gsea" df = pd.read_csv(os.path.join(BASE,"RT_WT_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = ran...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs Rag Human/Mouse
out_dir = "RT_Rag_hm_gsea" df = pd.read_csv(os.path.join(BASE,"RT_Rag_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df =...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs Rag Non-Human/Mouse
out_dir = "RT_Rag_gsea" df = pd.read_csv(os.path.join(BASE,"RT_Rag_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = r...
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
Get the image data
img, seg, seeds = make_data(64, 20) i = 30 plt.imshow(img[i, :, :], cmap='gray')
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause
Train a Gaussian mixture model and save it to a file
segparams = { # 'method':'graphcut', "method": "graphcut", "use_boundary_penalties": False, "boundary_dilatation_distance": 2, "boundary_penalties_weight": 1, "modelparams": { "type": "gmmsame", "fv_type": "intensity", # 'fv_extern': fv_function, "adaptation": "or...
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause
Run segmentation faster by loading the model from a file The advantage grows with the number of seeds.
# forget gc = None img, seg, seeds = make_data(56, 18) gc = pycut.ImageGraphCut(img) gc.load(mdl_stored_file) gc.set_seeds(seeds) t0 = datetime.now() gc.run(run_fit_model=False) print(f"time consumed={datetime.now()-t0}") plt.imshow(img[i, :, :], cmap='gray') plt.contour(gc.segmentation[i,:,:]) plt.show()
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause
The seeds do not have to be used if the model is loaded from a file
# forget gc = None img, seg, seeds = make_data(56, 18) gc = pycut.ImageGraphCut(img) gc.load(mdl_stored_file) t0 = datetime.now() gc.run(run_fit_model=False) print(f"time consumed={datetime.now()-t0}") plt.imshow(img[i, :, :], cmap='gray') plt.contour(gc.segmentation[i,:,:]) plt.show()
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause