As before, we can compose these plots by passing in an `ax` argument.
fig = plt.figure(figsize=(12, 6))
ax = plt.subplot(1, 2, 1)
pydsd.plot.plot_dsd(dsd, ax=ax)
ax = plt.subplot(1, 2, 2)
pydsd.plot.plot_NwD0(dsd, ax=ax)
plt.tight_layout()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Finally, let's visualize a few more of the calculated fields. We can also look at which new fields have appeared.
dsd.fields.keys()

plt.figure(figsize=(12, 12))

plt.subplot(2, 2, 1)
pydsd.plot.plot_ts(dsd, 'D0', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('$D_0$')
# plt.xlim(5, 24)

plt.subplot(2, 2, 2)
pydsd.plot.plot_ts(dsd, 'Nw', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('$log_{10}(N_w)$')
# plt.xlim(5, 24)

plt.subplot(2, 2, 3)
pydsd.plot.plot_ts(dsd, 'Dmax', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('Maximum Drop Size')
# plt.xlim(5, 24)

plt.subplot(2, 2, 4)
pydsd.plot.plot_ts(dsd, 'mu', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel(r'$\mu$')
# plt.xlim(5, 24)

plt.tight_layout()
plt.show()
Note the fit submodule has alternative algorithms for calculating various DSD parameter fits.

Radar Equivalent Scattering

We can calculate radar equivalent parameters as well. We use the PyTMatrix library under the hood for this. Let's look at what these measurements would look like if we did T-Matrix scattering at X-band, which is the default.
dsd.calculate_radar_parameters()
This assumes the BC drop shape relationship, X band, and 10°C. All of the scattering options are fully configurable.
dsd.set_scattering_temperature_and_frequency(scattering_temp=10, scattering_freq=9.7e9)
dsd.set_canting_angle(7)
Note this updates the parameters but, for computational reasons, does not re-scatter the fields until you ask it to. Let's do that now while also changing the DSR we are using and the maximum diameter we will scatter for.
dsd.calculate_radar_parameters(dsr_func=pydsd.DSR.bc, max_diameter=7)
As before, these new fields will be added to the DropSizeDistribution object's fields dictionary.
dsd.fields.keys()
Now we can plot these variables using our pydsd.plot.plot_ts function as before.
plt.figure(figsize=(12, 12))

plt.subplot(2, 2, 1)
pydsd.plot.plot_ts(dsd, 'Zh', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('Reflectivity (dBZ)')
# plt.xlim(5, 24)

plt.subplot(2, 2, 2)
pydsd.plot.plot_ts(dsd, 'Zdr', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('Differential Reflectivity (dB)')
# plt.xlim(5, 24)

plt.subplot(2, 2, 3)
pydsd.plot.plot_ts(dsd, 'Kdp', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('Specific Differential Phase (deg/km)')
# plt.xlim(5, 24)

plt.subplot(2, 2, 4)
pydsd.plot.plot_ts(dsd, 'Ai', x_min_tick_format='hour')
plt.xlabel('Time (hrs)')
plt.ylabel('Specific Attenuation')
# plt.xlim(5, 24)

plt.tight_layout()
plt.show()
Rain Rate Estimators

PyDSD has built-in support for some fairly simple rain-rate estimators for each of the polarimetric variables. Let's calculate a few of these and see how well they work out.

TODO: Add support for storing these on the object.
TODO: Add better built-in plotting support for these.
(r_z_a, r_z_b), opt = dsd.calculate_R_Zh_relationship()
print(f'RR(Zh) = {r_z_a} Zh ** {r_z_b}')

(r_kdp_a, r_kdp_b), opt = dsd.calculate_R_Kdp_relationship()
print(f'RR(KDP) = {r_kdp_a} KDP ** {r_kdp_b}')

(r_zk_a, r_zk_b1, r_zk_b2), opt = dsd.calculate_R_Zh_Kdp_relationship()
print(f'RR(Zh, KDP) = {r_zk_a} Zh ** {r_zk_b1} * KDP ** {r_zk_b2}')
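Relationships like RR(Zh) = a · Zh ** b are power-law fits. As a rough illustration of how such a fit works (this is not PyDSD's actual fitting code, and the coefficients below are made up for the example), here is a sketch using `scipy.optimize.curve_fit` on synthetic, noise-free data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic rain rate following R = a * Z^b exactly (illustrative coefficients).
a_true, b_true = 0.017, 0.714
z = np.linspace(100.0, 10000.0, 50)   # linear (non-dB) reflectivity values
r = a_true * z**b_true

def power_law(z, a, b):
    """Power-law model R = a * Z**b."""
    return a * np.power(z, b)

# Fit the power law; p0 gives a reasonable starting guess.
(a_fit, b_fit), _ = curve_fit(power_law, z, r, p0=[0.01, 0.7])
print(f'RR(Zh) = {a_fit:.3f} Zh ** {b_fit:.3f}')
```

With noise-free data the fit recovers the generating coefficients; on real disdrometer data the optimizer minimizes the squared error between the estimator and the measured rain rate.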
Let's visualize how good a fit each of these estimators is. We have the measured rain rate from the disdrometer in the fields variable.
rr_z = r_z_a * np.power(dsd._idb(dsd.fields['Zh']['data']), r_z_b)
rr_kdp = r_kdp_a * np.power(dsd.fields['Kdp']['data'], r_kdp_b)
rr_zk = (r_zk_a * np.power(dsd._idb(dsd.fields['Zh']['data']), r_zk_b1)
         * np.power(dsd.fields['Kdp']['data'], r_zk_b2))

plt.figure(figsize=(12, 4))

plt.subplot(1, 3, 1)
plt.scatter(rr_z, dsd.fields['rain_rate']['data'])
plt.plot(np.arange(0, 80))
plt.xlabel('Predicted rain rate')
plt.ylabel('Measured rain rate')
plt.title('R(Z)')

plt.subplot(1, 3, 2)
plt.scatter(rr_kdp, dsd.fields['rain_rate']['data'])
plt.plot(np.arange(0, 80))
plt.xlabel('Predicted rain rate')
plt.ylabel('Measured rain rate')
plt.title('R(KDP)')

plt.subplot(1, 3, 3)
plt.scatter(rr_zk, dsd.fields['rain_rate']['data'])
plt.plot(np.arange(0, 80))
plt.xlabel('Predicted rain rate')
plt.ylabel('Measured rain rate')
plt.title('R(Zh,KDP)')
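Beyond eyeballing the scatter plots, the fit quality can be summarized numerically with a bias and an RMSE. This is a generic sketch (the arrays below are hypothetical stand-ins for, e.g., `rr_z` and `dsd.fields['rain_rate']['data']`):

```python
import numpy as np

def fit_stats(predicted, measured):
    """Return (bias, rmse) between a rain-rate estimate and the measured rate."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    err = predicted - measured
    return err.mean(), np.sqrt((err ** 2).mean())

# Hypothetical stand-ins for a predicted and a measured rain-rate series:
pred = np.array([1.0, 5.0, 10.0, 20.0])
meas = np.array([1.2, 4.5, 11.0, 19.0])
bias, rmse = fit_stats(pred, meas)
```

A bias near zero with a small RMSE indicates the estimator tracks the disdrometer-measured rain rate well.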
As expected, estimators that use polarimetry tend to do much better.

Convective/Stratiform Partitioning

Finally, we have several algorithms for convective/stratiform partitioning in a variety of situations. Let's look at an applicable ground-based one due to Bringi et al. (2010).
cs = pydsd.partition.cs_partition.cs_partition_bringi_2010(
    dsd.fields['Nw']['data'], dsd.fields['D0']['data'])
We have a few ways to visualize this. One is to just look at the output, where 0 = unclassified, 1 = stratiform, 2 = convective, and 3 = transition.
plt.plot(cs)
We can also color-code the ($D_0$, $N_w$) points to get a better visual understanding of this algorithm.
plt.scatter(dsd.fields['D0']['data'], np.log10(dsd.fields['Nw']['data']), c=cs)
Note: If you're reading this as a static HTML page, you can also get it as an executable Jupyter notebook here.

FSMs Without Monsters!

If you google "FSM", you'll probably get links to "Flying Spaghetti Monster". But that is not this. Instead, I'm talking about finite-state machines. While some will tell you that working with FSMs is a monstrous chore, it's really not any more difficult than working with combinational logic. (OK, maybe that's not very reassuring, either.)

At its core, an FSM is a sequential circuit that remembers some stuff about what has happened in the past. This memory is called the state or state variable and it's stored in a bunch of flip-flops. As the FSM receives inputs (usually when a clock pulse occurs), it combines them with the state variable to do two things:

1. Generate some outputs to control some other piece of circuitry.
2. Update the state variable that records what it has done.

<img alt="FSM architecture." src="FSM_arch.png" width=600 />

Building an FSM typically involves three things:

1. Figure out how to represent the state using flip-flops (this is often called encoding).
2. Design the logic that generates the outputs based on the inputs and the current state.
3. Design the logic that combines the inputs and the current state to arrive at the next state.

That's it. That's all you have to do. Now, you may have heard others talk about things like Mealy/Moore architectures, minimal state encoding, or deterministic versus non-deterministic operation. Fuhgeddaboudit! The best way to learn about FSMs is to build some FSMs. After that (if you want), you can learn about these other topics and impress people with your pedantry.

To start off, let's take a circuit you already know about, though maybe you didn't think of it as an FSM...

A Counter

We've used counters before. Here's one that's slightly modified to show it's also an FSM:
from pygmyhdl import *

@chunk
def counter(clk_i, cnt_o):
    # Here's the counter state variable.
    cnt = Bus(len(cnt_o))

    # The next-state logic is just an adder that adds 1 to the current cnt state variable.
    @seq_logic(clk_i.posedge)
    def next_state_logic():
        cnt.next = cnt + 1

    # The output logic just sends the current cnt state variable to the output.
    @comb_logic
    def output_logic():
        cnt_o.next = cnt

initialize()
clk = Wire(name='clk')
cnt = Bus(3, name='cnt')
counter(clk_i=clk, cnt_o=cnt)
clk_sim(clk, num_cycles=10)
show_waveforms()
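To make the FSM view of the counter concrete, here's a plain-Python behavioral model of it (illustrative only; the synthesizable version is the pygmyhdl code above). The state is the count, the next-state logic is an adder with wrap-around, and the output logic just exposes the state:

```python
# Plain-Python model of the 3-bit counter viewed as an FSM.
N_BITS = 3

def next_state(cnt):
    # Next-state logic: an adder that wraps around at 2**N_BITS.
    return (cnt + 1) % (1 << N_BITS)

def output(cnt):
    # Output logic: pass the state variable straight through.
    return cnt

state = 0
trace = []
for _ in range(10):           # 10 clock cycles
    trace.append(output(state))
    state = next_state(state)
```

After ten "clock cycles" the trace counts 0 through 7 and then wraps back to 0, just like the simulated waveforms.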
examples/5_fsm/fsm.ipynb
xesscorp/pygmyhdl
mit
You can see the output logic for the counter just copies the state variable to the counter outputs, and the next-state logic is an adder that increments the current counter value.

<img alt="Counter next-state and output logic." src="FSM_Counter.png" width=500 />

This counter doesn't take any inputs except for the clock, so all it does is count from 0 to $N$, over and over. The next example adds a few inputs to make this FSM more exciting.

A Counter With Reset and Enable Inputs

The counter shown below has two additional inputs that affect how the state is updated on each rising clock edge: a reset that sets the counter to a known state (usually zero), and an enable that lets the counter advance when it's true and stalls the counter when it's false.
@chunk
def counter_en_rst(clk_i, en_i, rst_i, cnt_o):
    cnt = Bus(len(cnt_o))

    # The next-state logic now includes a reset input to clear the counter
    # to zero, and an enable input that only allows counting when it is true.
    @seq_logic(clk_i.posedge)
    def next_state_logic():
        if rst_i == True:
            cnt.next = 0
        elif en_i == True:
            cnt.next = cnt + 1
        else:
            # No reset and no enable, so just keep the counter at its current value.
            pass

    @comb_logic
    def output_logic():
        cnt_o.next = cnt

initialize()
clk = Wire(name='clk')
rst = Wire(1, name='rst')
en = Wire(1, name='en')
cnt = Bus(3, name='cnt')
counter_en_rst(clk_i=clk, rst_i=rst, en_i=en, cnt_o=cnt)

def cntr_tb():
    '''Test bench for the counter with reset and enable inputs.'''

    # Enable the counter for a few cycles.
    rst.next = 0
    en.next = 1
    for _ in range(4):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

    # Disable the counter for a few cycles.
    en.next = 0
    for _ in range(2):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

    # Re-enable the counter for a few cycles.
    en.next = 1
    for _ in range(2):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

    # Reset the counter.
    rst.next = 1
    clk.next = 0
    yield delay(1)
    clk.next = 1
    yield delay(1)

    # Start counting again.
    rst.next = 0
    for _ in range(4):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

simulate(cntr_tb())
show_waveforms(tick=True)
You can see that lowering the enable input over the interval [8, 12] keeps the counter from advancing, and raising the reset at $t=$ 16 forces the counter back to zero. This is all well and good, but you've known how to build counters for quite a while. Let's look at an FSM that does something new.

A Button Debouncer

We used buttons previously in the block RAM demonstration circuit. Buttons, being the mechanical beasts they are, have an annoying habit of chattering or bouncing as their metal contacts bash into one another and rebound until they settle. The rest of the circuitry sees this as a sequence of rapid button presses, even if the person thinks they only pressed the button once.

There are many solutions to this problem, but I'll show an FPGA circuit that does it (because that's the hammer I'm swinging). Essentially, the circuit filters out button bounces by waiting until the button has had a stable value for a certain amount of time and then outputting that value like so:

The circuit compares the current value of the button input to the previous value stored in a flip-flop. If the values match, a counter is decremented. But if the values don't match, the counter is reset to a non-zero value. If the counter reaches zero, the button value must not have changed for a while. This stable button value is output by the circuit. If the counter is non-zero, the previous stable button value is retained on the output.

Here's code for an FSM that does this:
@chunk
def debouncer(clk_i, button_i, button_o, debounce_time):
    '''
    Inputs:
        clk_i: Main clock input.
        button_i: Raw button input.
        button_o: Debounced button output.
        debounce_time: Number of clock cycles the button value has to be stable.
    '''

    # These are the state variables of the FSM.
    from math import ceil, log2
    debounce_cnt = Bus(int(ceil(log2(debounce_time + 1))), name='dbcnt')  # Counter big enough to store the debounce time.
    prev_button = Wire(name='prev_button')  # Stores the button value from the previous clock cycle.

    @seq_logic(clk_i.posedge)
    def next_state_logic():
        if button_i == prev_button:
            # If the current and previous button values are the same, decrement the
            # counter until it reaches zero and then stop.
            if debounce_cnt != 0:
                debounce_cnt.next = debounce_cnt - 1
        else:
            # If the current and previous button values aren't the same, then the button
            # must still be bouncing, so reset the counter to the debounce interval and try again.
            debounce_cnt.next = debounce_time

        # Store the current button value for comparison during the next clock cycle.
        prev_button.next = button_i

    @seq_logic(clk_i.posedge)
    def output_logic():
        if debounce_cnt == 0:
            # Output the stable button value whenever the counter is zero.
            # Don't use the actual button input value because that could change at any time.
            button_o.next = prev_button
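The same debounce FSM can be modeled cycle-by-cycle in plain Python, which makes the counter behavior easy to trace by hand (illustrative model only; the synthesizable version is the pygmyhdl code above):

```python
def debounce_step(raw, prev, cnt, out, debounce_time):
    """One clock cycle of the debounce FSM (plain-Python model)."""
    # Output logic (registered): emit the stable value once the counter reaches zero.
    new_out = prev if cnt == 0 else out
    # Next-state logic: reload the counter on any button change, else count down to zero.
    new_cnt = debounce_time if raw != prev else (cnt - 1 if cnt else 0)
    # The raw button value becomes the "previous" value for the next cycle.
    return raw, new_cnt, new_out

# Hold the raw button at 1 for more than debounce_time cycles
# so the debounced output eventually follows it.
prev, cnt, out = 0, 3, 0
for raw in [1, 1, 1, 1, 1]:
    prev, cnt, out = debounce_step(raw, prev, cnt, out, debounce_time=3)
```

After the first cycle reloads the counter (the button changed), three stable cycles count it down to zero, and on the fifth cycle the output latches the stable value 1.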
Now I can simulate button presses of various lengths and watch the output of the circuit. Note that I'm using a very small debounce time to keep the simulation to a reasonable length. In reality, a 12 MHz clock and a 100 ms debounce time would require a debounce count of 1,200,000.
initialize()  # Initialize for simulation here because we'll be watching the internal debounce counter.

clk = Wire(name='clk')
button_i = Wire(name='button_i')
button_o = Wire(name='button_o')
debouncer(clk, button_i, button_o, 3)

def debounce_tb():
    '''Test bench for the button debouncer.'''

    # Initialize the button and leave it stable for the debounce time.
    button_i.next = 1
    for _ in range(4):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

    # Blip the button for less than the debounce time and show the debounced output does not change.
    button_i.next = 0
    for _ in range(2):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)
    button_i.next = 1
    for _ in range(2):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

    # Press the button for more than the debounce time and show the debounced output changes.
    button_i.next = 0
    for _ in range(5):
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

simulate(debounce_tb())
show_waveforms(tick=True)
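The 1,200,000 debounce count mentioned above is just the clock frequency multiplied by the required stable time:

```python
# Debounce count = clock cycles elapsed during the required stable interval.
clock_hz = 12_000_000    # 12 MHz system clock
debounce_s = 0.100       # 100 ms of required button stability
debounce_count = round(clock_hz * debounce_s)
```

With the small debounce time of 3 used in the simulation above, the same arithmetic run backwards gives a stability window of only a few clock cycles, which keeps the waveforms short.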
Note these points in the simulation:

- The debounce counter gets reset to its maximum value when the current and previous button values are different ($t =$ 1, 9, 13, 17).
- The initial button press during interval [0, 8] is long enough to change the button output at $t =$ 9.
- The button release for $t \ge$ 16 is also long enough to change the button output at $t =$ 25.
- Neither of the short release and press intervals from $t =$ 8 to $t =$ 16 is long enough for the debounce counter to reach zero and trigger a button output.

Maybe these past few examples don't feel like state machines, but they are. They each store some information about the past and use it to change their behavior in the present. But, possibly, you're looking for something more like the classic state machines you've seen in books. Well, I'm never one to disappoint!

A "Classic" State Machine

This FSM has four states, two inputs, and four outputs. The states are arranged like a ring, with transitions going from each state to the states preceding and following it. When the first input is active, the FSM transitions forward (clockwise) by one state; when the second input is active, the FSM moves one state backward (counter-clockwise). Finally, a single output is associated with each state that is high whenever the FSM is in that state.

<img src="classic_FSM_diag.png" alt="Classic FSM state diagram." width="400" />

The MyHDL code for this FSM is shown below. As you'll see, each possible state is allocated a section of code that describes what state the FSM will go to for all possible combinations of the inputs.
@chunk
def classic_fsm(clk_i, inputs_i, outputs_o):
    '''
    Inputs:
        clk_i: Main clock input.
        inputs_i: Two-bit input vector that directs state transitions.
        outputs_o: Four-bit output vector.
    '''

    # Declare a state variable with four states. In addition to the current
    # state of the FSM, the state variable also stores a complete list of its
    # possible values to use for comparing what state the FSM is in and for
    # assigning a new state.
    fsm_state = State('A', 'B', 'C', 'D', name='state')

    # This counter is used to apply a reset to the FSM for the first few clocks upon startup.
    reset_cnt = Bus(2)

    @seq_logic(clk_i.posedge)
    def next_state_logic():
        if reset_cnt < reset_cnt.max - 1:
            # The reset counter starts at zero upon startup. The FSM stays in this reset
            # state until the counter increments to its maximum value. Then it never returns here.
            reset_cnt.next = reset_cnt + 1
            fsm_state.next = fsm_state.s.A  # Set initial state for FSM after reset.
        elif fsm_state == fsm_state.s.A:  # Compare current state to state A.
            # If the FSM is in state A, then go forward to state B if inputs_i[0] is active,
            # otherwise go backward to state D if inputs_i[1] is active.
            # Stay in this state if neither input is active.
            if inputs_i[0]:
                fsm_state.next = fsm_state.s.B  # Update state to state B.
            elif inputs_i[1]:
                fsm_state.next = fsm_state.s.D  # Update state to state D.
        elif fsm_state == fsm_state.s.B:  # State B operates similarly to state A.
            if inputs_i[0]:
                fsm_state.next = fsm_state.s.C
            elif inputs_i[1]:
                fsm_state.next = fsm_state.s.A
        elif fsm_state == fsm_state.s.C:  # State C operates similarly to states A and B.
            if inputs_i[0]:
                fsm_state.next = fsm_state.s.D
            elif inputs_i[1]:
                fsm_state.next = fsm_state.s.B
        elif fsm_state == fsm_state.s.D:  # State D, yada, yada...
            if inputs_i[0]:
                fsm_state.next = fsm_state.s.A
            elif inputs_i[1]:
                fsm_state.next = fsm_state.s.C
        else:
            # If the FSM is in some unknown state, send it back to the starting state.
            fsm_state.next = fsm_state.s.A

    @comb_logic
    def output_logic():
        # Turn on one of the outputs depending upon which state the FSM is in.
        if fsm_state == fsm_state.s.A:
            outputs_o.next = 0b0001
        elif fsm_state == fsm_state.s.B:
            outputs_o.next = 0b0010
        elif fsm_state == fsm_state.s.C:
            outputs_o.next = 0b0100
        elif fsm_state == fsm_state.s.D:
            outputs_o.next = 0b1000
        else:
            # Turn on all the outputs if the FSM is in some unknown state (shouldn't happen).
            outputs_o.next = 0b1111
Now I can stimulate the FSM with the following test bench. The FSM is moved forward by three states and then backward by three states, so it should end up where it started.
initialize()
inputs = Bus(2, name='inputs')
outputs = Bus(4, name='outputs')
clk = Wire(name='clk')
classic_fsm(clk, inputs, outputs)

def fsm_tb():
    nop = 0b00  # No operation - both inputs are inactive.
    fwd = 0b01  # Input combination for moving forward.
    bck = 0b10  # Input combination for moving backward.

    # Input sequence of 3 forward and 3 backward transitions.
    # The four initial NOPs are for the FSM's initial reset period.
    ins = [nop, nop, nop, nop, fwd, fwd, fwd, bck, bck, bck]

    # Apply each input combination from the list and then pulse the clock.
    for inputs.next in ins:
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

simulate(fsm_tb())
show_waveforms('clk inputs state outputs', tick=True)
The waveforms show the FSM moving forward (A $\rightarrow$ B $\rightarrow$ C $\rightarrow$ D) and then moving back to where it started (D $\rightarrow$ C $\rightarrow$ B $\rightarrow$ A).

This is good, but what if your inputs are slow (like from manually-operated pushbuttons) and your clock is very fast (like 12 MHz)? Then it would be hard to make controlled state transitions because a single button press would cause many state changes. In this case, the solution is to make the FSM change states only when an input changes from 0 (inactive) to 1 (active). This is easy to do by comparing the current values of the inputs with their values on the previous clock cycle. When they differ, the FSM can make a transition.
@chunk
def classic_fsm(clk_i, inputs_i, outputs_o):
    fsm_state = State('A', 'B', 'C', 'D', name='state')
    reset_cnt = Bus(2)

    # Variables for storing the input values during the previous clock
    # and holding the changes between the current and previous input values.
    prev_inputs = Bus(len(inputs_i), name='prev_inputs')
    input_chgs = Bus(len(inputs_i), name='input_chgs')

    # This logic compares the current input values with the negation of the previous values.
    # The output is active only if an input goes from 0 to 1.
    @comb_logic
    def detect_chg():
        input_chgs.next = inputs_i & ~prev_inputs

    # This is the same FSM state transition logic as before, except it looks at the
    # input_chgs signals instead of the inputs_i signals.
    @seq_logic(clk_i.posedge)
    def next_state_logic():
        if reset_cnt < reset_cnt.max - 1:
            reset_cnt.next = reset_cnt + 1
            fsm_state.next = fsm_state.s.A
        elif fsm_state == fsm_state.s.A:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.B
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.D
        elif fsm_state == fsm_state.s.B:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.C
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.A
        elif fsm_state == fsm_state.s.C:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.D
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.B
        elif fsm_state == fsm_state.s.D:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.A
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.C
        else:
            fsm_state.next = fsm_state.s.A

        prev_inputs.next = inputs_i  # Record the current input values.

    @comb_logic
    def output_logic():
        if fsm_state == fsm_state.s.A:
            outputs_o.next = 0b0001
        elif fsm_state == fsm_state.s.B:
            outputs_o.next = 0b0010
        elif fsm_state == fsm_state.s.C:
            outputs_o.next = 0b0100
        elif fsm_state == fsm_state.s.D:
            outputs_o.next = 0b1000
        else:
            outputs_o.next = 0b1111
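The `inputs_i & ~prev_inputs` expression is a bitwise rising-edge detector, and its behavior is easy to check in plain Python (a quick illustration with 2-bit input vectors):

```python
MASK = 0b11  # Two input bits, matching the 2-bit inputs_i bus.

def rising_edges(inputs, prev_inputs):
    # A bit is active only where an input went 0 -> 1 since the previous clock.
    return inputs & ~prev_inputs & MASK

press = rising_edges(0b01, 0b00)    # input 0 just went high: edge detected
hold = rising_edges(0b01, 0b01)     # input 0 held high: no edge
release = rising_edges(0b00, 0b01)  # input 0 went low: no *rising* edge
```

So a long button press produces exactly one active cycle on `input_chgs`, which is what lets the FSM take exactly one transition per press.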
Now I'll modify the test bench a bit by adding another sequence of inputs that alternates between active and inactive values.
initialize()
inputs = Bus(2, name='inputs')
outputs = Bus(4, name='outputs')
clk = Wire(name='clk')
classic_fsm(clk, inputs, outputs)

def fsm_tb():
    nop = 0b00
    fwd = 0b01
    bck = 0b10

    ins = [nop, nop, nop, nop, fwd, fwd, fwd, bck, bck, bck]
    for inputs.next in ins:
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

    # Interspersed active and inactive inputs.
    ins = [fwd, nop, fwd, nop, fwd, nop, bck, nop, bck, nop, bck, nop]
    for inputs.next in ins:
        clk.next = 0
        yield delay(1)
        clk.next = 1
        yield delay(1)

simulate(fsm_tb())
show_waveforms('clk inputs prev_inputs input_chgs state outputs', tick=True, width=2000)
From the simulation, you can see the first sequence of six inputs (time $t =$ 8 to $t =$ 20) only caused two state transitions (A $\rightarrow$ B $\rightarrow$ A) because the inputs only changed twice. Then, when active inputs were interspersed with inactive inputs (time $t \ge$ 20), the FSM went through six state transitions (A $\rightarrow$ B $\rightarrow$ C $\rightarrow$ D $\rightarrow$ C $\rightarrow$ B $\rightarrow$ A).

Demo Time!

Once again, we've reached the highly-anticipated demo time! This time, I'm just going to hook the previous FSM to two pushbuttons and four LEDs and then steer it through a few state transitions.
toVerilog(classic_fsm, clk_i=Wire(), inputs_i=Bus(2), outputs_o=Bus(4))

with open('classic_fsm.pcf', 'w') as pcf:
    pcf.write(
'''
set_io clk_i 21
set_io outputs_o[0] 99
set_io outputs_o[1] 98
set_io outputs_o[2] 97
set_io outputs_o[3] 96
set_io inputs_i[0] 118
set_io inputs_i[1] 114
'''
    )

!yosys -q -p "synth_ice40 -blif classic_fsm.blif" classic_fsm.v
!arachne-pnr -q -d 1k -p classic_fsm.pcf classic_fsm.blif -o classic_fsm.asc
!icepack classic_fsm.asc classic_fsm.bin
!iceprog classic_fsm.bin
The following video shows the operation of the FSM on the iCEstick board. As you watch, you can see the FSM move backwards and forwards through the states under the guidance of the button presses. However, there are times when it makes multiple transitions for a single button press because the buttons are bouncing.
HTML('<div style="padding-bottom:50.000%;"><iframe src="https://streamable.com/s/lmqvd/urtqfp" frameborder="0" width="100%" height="100%" allowfullscreen style="width:640px;position:absolute;"></iframe></div>')
To correct the button bounce problem, I added debounce circuits to the FSM as shown below.
@chunk
def classic_fsm(clk_i, inputs_i, outputs_o):
    fsm_state = State('A', 'B', 'C', 'D', name='state')
    reset_cnt = Bus(2)
    prev_inputs = Bus(len(inputs_i), name='prev_inputs')
    input_chgs = Bus(len(inputs_i), name='input_chgs')

    # Take the inputs and run them through the debounce circuits.
    dbnc_inputs = Bus(len(inputs_i))  # These are the inputs after debouncing.
    debounce_time = 120000
    debouncer(clk_i, inputs_i.o[0], dbnc_inputs.i[0], debounce_time)
    debouncer(clk_i, inputs_i.o[1], dbnc_inputs.i[1], debounce_time)

    # The edge detection of the inputs is now performed on the debounced inputs.
    @comb_logic
    def detect_chg():
        input_chgs.next = dbnc_inputs & ~prev_inputs

    @seq_logic(clk_i.posedge)
    def next_state_logic():
        if reset_cnt < reset_cnt.max - 1:
            fsm_state.next = fsm_state.s.A
            reset_cnt.next = reset_cnt + 1
        elif fsm_state == fsm_state.s.A:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.B
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.D
        elif fsm_state == fsm_state.s.B:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.C
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.A
        elif fsm_state == fsm_state.s.C:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.D
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.B
        elif fsm_state == fsm_state.s.D:
            if input_chgs[0]:
                fsm_state.next = fsm_state.s.A
            elif input_chgs[1]:
                fsm_state.next = fsm_state.s.C
        else:
            fsm_state.next = fsm_state.s.A

        prev_inputs.next = dbnc_inputs  # Store the debounced inputs.

    @comb_logic
    def output_logic():
        if fsm_state == fsm_state.s.A:
            outputs_o.next = 0b0001
        elif fsm_state == fsm_state.s.B:
            outputs_o.next = 0b0010
        elif fsm_state == fsm_state.s.C:
            outputs_o.next = 0b0100
        elif fsm_state == fsm_state.s.D:
            outputs_o.next = 0b1000
        else:
            outputs_o.next = 0b1111
Now it's just a matter of recompiling the debounced FSM and observing its operation.
toVerilog(classic_fsm, clk_i=Wire(), inputs_i=Bus(2), outputs_o=Bus(4))

with open('classic_fsm.pcf', 'w') as pcf:
    pcf.write(
'''
set_io clk_i 21
set_io outputs_o[0] 99
set_io outputs_o[1] 98
set_io outputs_o[2] 97
set_io outputs_o[3] 96
set_io inputs_i[0] 118
set_io inputs_i[1] 114
'''
    )

!yosys -q -p "synth_ice40 -blif classic_fsm.blif" classic_fsm.v
!arachne-pnr -q -d 1k -p classic_fsm.pcf classic_fsm.blif -o classic_fsm.asc
!icepack classic_fsm.asc classic_fsm.bin
!iceprog classic_fsm.bin
I probably don't have to tell you that the bouncing buttons are conspicuously absent in the following video.
HTML('<div style="padding-bottom:50.000%;"><iframe src="https://streamable.com/s/agk4i/tqcuqu" frameborder="0" width="100%" height="100%" allowfullscreen style="width:640px;position:absolute;"></iframe></div>')
Restart the kernel before proceeding further (on the Notebook menu, select Kernel > Restart Kernel > Restart).

Load necessary libraries
# Import the TensorFlow library and print the TF version.
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)

# Import the Pandas and NumPy data-processing libraries.
import pandas as pd
import numpy as np

# Use matplotlib for visualizing the model.
import matplotlib.pyplot as plt

# Use seaborn for data visualization.
import seaborn as sns
%matplotlib inline
courses/machine_learning/deepdive2/launching_into_ml/solutions/decision_trees_and_random_Forests_in_Python.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Get the Data
# Read the "kyphosis.csv" file using the read_csv() function from the pandas library.
df = pd.read_csv('../kyphosis.csv')

# Output the first five rows.
df.head()
Exploratory Data Analysis

We'll just check out a simple pairplot for this small dataset.
# Use the pairplot() function to plot multiple pairwise bivariate distributions in the dataset.
# TODO 1
sns.pairplot(df, hue='Kyphosis', palette='Set1')
Train Test Split
# Import the train_test_split function from sklearn.model_selection.
from sklearn.model_selection import train_test_split

# Separate the 'Kyphosis' label from the features.
X = df.drop('Kyphosis', axis=1)
y = df['Kyphosis']

# Split the data into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
Decision Trees

We'll start just by training a single decision tree.
# Import the Decision Tree Classifier from sklearn.tree.
from sklearn.tree import DecisionTreeClassifier

# Create a Decision Tree classifier object.
dtree = DecisionTreeClassifier()

# Train the Decision Tree classifier.
# TODO 2
dtree.fit(X_train, y_train)
Prediction and Evaluation

Let's evaluate our decision tree.
# Predict the response for the test dataset.
predictions = dtree.predict(X_test)

# Import classification_report and confusion_matrix.
from sklearn.metrics import classification_report, confusion_matrix

# Build a text report showing the main classification metrics.
# TODO 3a
print(classification_report(y_test, predictions))

# Compute the confusion matrix to evaluate the accuracy of the classification.
# TODO 3b
print(confusion_matrix(y_test, predictions))
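To see how the confusion matrix relates to a single summary score, here is a small sketch using hypothetical labels (not the actual kyphosis data): each row of the matrix is a true class, each column a predicted class, and accuracy is simply the fraction of correct predictions.

```python
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical labels standing in for y_test and predictions.
y_true = ['absent', 'absent', 'present', 'present', 'absent']
y_pred = ['absent', 'present', 'present', 'absent', 'absent']

# Rows are true labels, columns are predicted labels, in the order given by `labels`.
cm = confusion_matrix(y_true, y_pred, labels=['absent', 'present'])

# Accuracy = correct predictions / total predictions.
acc = accuracy_score(y_true, y_pred)
```

Here three of the five hypothetical predictions are correct, so the diagonal of the matrix sums to 3 and the accuracy is 0.6.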
Tree Visualization

Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute it:
# Import some built-in visualization functionality for decision trees.
from IPython.display import Image
from io import StringIO  # sklearn.externals.six.StringIO is removed in newer scikit-learn versions.
from sklearn.tree import export_graphviz
import pydot

features = list(df.columns[1:])
features

# Now we are ready to visualize our Decision Tree model.
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
Random Forests

Now let's compare the decision tree model to a random forest.
# Import the Random Forest model.
from sklearn.ensemble import RandomForestClassifier

# Create a Random Forest classifier.
rfc = RandomForestClassifier(n_estimators=100)

# Train the Random Forest classifier using the training sets.
rfc.fit(X_train, y_train)

# Predict the response for the test dataset.
rfc_pred = rfc.predict(X_test)

# Compute the confusion matrix to evaluate the accuracy.
# TODO 4a
print(confusion_matrix(y_test, rfc_pred))

# Finally, build a text report showing the main metrics.
# TODO 4b
print(classification_report(y_test, rfc_pred))
courses/machine_learning/deepdive2/launching_into_ml/solutions/decision_trees_and_random_Forests_in_Python.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Basic analyses Pymatgen provides many analysis functions for Structures. Some common ones are given below.
#Determining the symmetry from pymatgen.symmetry.analyzer import SpacegroupAnalyzer finder = SpacegroupAnalyzer(structure) print("The spacegroup is {}".format(finder.get_spacegroup_symbol()))
examples/Basic functionality.ipynb
Dioptas/pymatgen
mit
The vaspio_set module provides a means to obtain a complete set of VASP input files for performing calculations. Several useful presets based on the parameters used in the Materials Project are provided.
from pymatgen.io.vaspio_set import MPVaspInputSet v = MPVaspInputSet() v.write_input(structure, "MyInputFiles") #Writes a complete set of input files for structure to the directory MyInputFiles
examples/Basic functionality.ipynb
Dioptas/pymatgen
mit
As you can see from the plot below, the data is not linearly separable.
# Plot both classes on the x1, x2 plane plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red') plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue') plt.grid() plt.legend(loc=1) plt.xlabel('$x_1$', fontsize=15) plt.ylabel('$x_2$', fontsize=15) plt.axis([-1.5, 1.5, -1.5, 1.5]) plt.title('red vs blue classes in the input space') plt.show()
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Model and Cost Function

The model can be visualized as below:

<p align="center"> <img src="https://raw.githubusercontent.com/weichetaru/weichetaru.github.com/master/notebook/machine-learning/img/SimpleANN04.png"></p>

**Input and Label**

The input layer receives 2-dimensional inputs from $N$ data points ($N \times 2$):

$$X = \begin{bmatrix} x_{11} & x_{12} \\ \vdots & \vdots \\ x_{N1} & x_{N2} \end{bmatrix}$$

And for the output, we have 2 classes ($N \times 2$):

$$T = \begin{bmatrix} t_{11} & t_{12} \\ \vdots & \vdots \\ t_{N1} & t_{N2} \end{bmatrix}$$

where $t_{ij}=1$ if and only if the $i$-th input sample belongs to class $j$. So blue points are labelled T = [0 1] and red points are labelled T = [1 0]. Note that in the model we added a bias term $b$ to $X$ that has value +1 for all rows.

**Hidden Layer**

The hidden layer has two sets of parameters, $W_h$ ($2 \times 3$) and the bias term $\mathbf{b}_h$ ($1 \times 3$):

$$\begin{align} W_h = \begin{bmatrix} w_{h11} & w_{h12} & w_{h13} \\ w_{h21} & w_{h22} & w_{h23} \end{bmatrix} && \mathbf{b}_h = \begin{bmatrix} b_{h1} & b_{h2} & b_{h3} \end{bmatrix} \end{align}$$

And we use the logistic function for activation, hence $H$ ($N \times 3$):

$$H = \sigma(Z_h) = \sigma(X \cdot W_h + \mathbf{b}_h) = \frac{1}{1+e^{-(X \cdot W_h + \mathbf{b}_h)}} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ \vdots & \vdots & \vdots \\ h_{N1} & h_{N2} & h_{N3} \end{bmatrix}$$

**Derivative of the logistic function**

We will use the derivative of the logistic function in the backward propagation, so we write it down here in advance.
(1)

$$\frac{\partial h_i}{\partial z_{hi}} = \frac{\partial \sigma(z_{hi})}{\partial z_{hi}} = \frac{\partial}{\partial z_{hi}} \frac{1}{1+e^{-z_{hi}}} = \frac{e^{-z_{hi}}}{(1+e^{-z_{hi}})^2} = \frac{1}{1+e^{-z_{hi}}} \cdot \frac{e^{-z_{hi}}}{1+e^{-z_{hi}}} = \sigma(z_{hi}) (1- \sigma(z_{hi})) = h_i (1-h_i)$$

**Output Layer**

The output layer has two sets of parameters, $W_o$ ($3 \times 2$) and the bias term $\mathbf{b}_o$ ($1 \times 2$):

$$\begin{align} W_o = \begin{bmatrix} w_{o11} & w_{o12} \\ w_{o21} & w_{o22} \\ w_{o31} & w_{o32} \end{bmatrix} && \mathbf{b}_o = \begin{bmatrix} b_{o1} & b_{o2} \end{bmatrix} \end{align}$$

And we use the softmax function for activation, hence $Y$ ($N \times 2$):

$$Y = \varsigma(H \cdot W_o + \mathbf{b}_o) = \frac{e^{Z_o}}{\sum_{d=1}^C e^{\mathbf{z}_{od}}} = \frac{e^{H \cdot W_o + \mathbf{b}_o}}{\sum_{d=1}^C e^{H \cdot \mathbf{w}_{od} + b_{od}}} = \begin{bmatrix} y_{11} & y_{12} \\ \vdots & \vdots \\ y_{N1} & y_{N2} \end{bmatrix}$$

with $\varsigma$ the softmax function. In our example data, $C=2$ since we have the two classes red and blue.

**Derivative of the softmax function**

We will use the derivative of the softmax function in the backward propagation, so we write it down here in advance.
With $\Sigma_C = \sum_{d=1}^C e^{z_d}$ for $c = 1 \cdots C$ so that $y_c = e^{z_c} / \Sigma_C$, the derivative ${\partial y_i}/{\partial z_j}$ of the output $y$ of the softmax function with respect to its input $z$ can be calculated as:

(2)

$$\begin{split} \text{if} \; i = j :& \frac{\partial y_i}{\partial z_i} = \frac{\partial \frac{e^{z_i}}{\Sigma_C}}{\partial z_i} = \frac{e^{z_i}\Sigma_C - e^{z_i}e^{z_i}}{\Sigma_C^2} = \frac{e^{z_i}}{\Sigma_C}\frac{\Sigma_C - e^{z_i}}{\Sigma_C} = \frac{e^{z_i}}{\Sigma_C}\left(1-\frac{e^{z_i}}{\Sigma_C}\right) = y_i (1 - y_i) \\ \text{if} \; i \neq j :& \frac{\partial y_i}{\partial z_j} = \frac{\partial \frac{e^{z_i}}{\Sigma_C}}{\partial z_j} = \frac{0 - e^{z_i}e^{z_j}}{\Sigma_C^2} = -\frac{e^{z_i}}{\Sigma_C} \frac{e^{z_j}}{\Sigma_C} = -y_i y_j \end{split}$$

**Code for the forward step**

We can first code the functions for the forward step.
# Define the logistic function. - for hidden layer activation. def logistic(z): return 1 / (1 + np.exp(-z)) # Define the softmax function def softmax(z): return np.exp(z) / np.sum(np.exp(z), axis=1, keepdims=True) # Function to compute the hidden activations def hidden_activations(X, Wh, bh): return logistic(X.dot(Wh) + bh) # Define output layer feedforward def output_activations(H, Wo, bo): return softmax(H.dot(Wo) + bo) # Define the neural network function def nn(X, Wh, bh, Wo, bo): return output_activations(hidden_activations(X, Wh, bh), Wo, bo) # Define the neural network prediction function that only returns # 1 or 0 depending on the predicted class def nn_predict(X, Wh, bh, Wo, bo): return np.around(nn(X, Wh, bh, Wo, bo))
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Cost Function

The parameter set $\theta$ can be optimized by maximizing the likelihood:

$$\underset{\theta}{\text{argmax}}\; \mathcal{L}(\theta|\mathbf{t},\mathbf{z})$$

The likelihood can be described as the joint distribution of $\mathbf{t}$ and $\mathbf{z}$ given $\theta$:

$$P(\mathbf{t},\mathbf{z}|\theta) = P(\mathbf{t}|\mathbf{z},\theta)P(\mathbf{z}|\theta)$$

We don't care about the probability of $\mathbf{z}$, so

$$\mathcal{L}(\theta|\mathbf{t},\mathbf{z}) = P(\mathbf{t}|\mathbf{z},\theta)$$

which can be written as $P(\mathbf{t}|\mathbf{z})$ for fixed $\theta$. Since each $t_c$ depends on the full $\mathbf{z}$, and only one class can be activated in the target $\mathbf{t}$, we can write:

$$P(\mathbf{t}|\mathbf{z}) = \prod_{c=1}^{C} P(t_c|\mathbf{z})^{t_c} = \prod_{c=1}^{C} \varsigma(\mathbf{z})_c^{t_c} = \prod_{c=1}^{C} y_c^{t_c}$$

Instead of maximizing this likelihood, we can equivalently minimize the negative log-likelihood:

$$- \log \mathcal{L}(\theta|\mathbf{t},\mathbf{z}) = \xi(\mathbf{t},\mathbf{z}) = - \log \prod_{c=1}^{C} y_c^{t_c} = - \sum_{c=1}^{C} t_c \cdot \log(y_c)$$

The cross-entropy error function $\xi$ over $C$ classes and a sample of size $N$ can be defined as:

$$\xi(T,Y) = \sum_{i=1}^N \xi(\mathbf{t}_i,\mathbf{y}_i) = - \sum_{i=1}^N \sum_{c=1}^{C} t_{ic} \cdot \log(y_{ic})$$

Note that $Y$ is activated by the softmax in the output layer as we defined above.
Derivative of the cross-entropy cost function for the softmax function

The derivative ${\partial \xi}/{\partial z_i}$ of the cost function with respect to the softmax input $z_i$ can be calculated as (using (2), the derivative of the softmax function):

(3)

$$\begin{split} \frac{\partial \xi}{\partial z_i} & = - \sum_{j=1}^C \frac{\partial t_j \log(y_j)}{\partial z_i} = - \sum_{j=1}^C t_j \frac{\partial \log(y_j)}{\partial z_i} = - \sum_{j=1}^C t_j \frac{1}{y_j} \frac{\partial y_j}{\partial z_i} \\ & = - \frac{t_i}{y_i} \frac{\partial y_i}{\partial z_i} - \sum_{j \neq i}^C \frac{t_j}{y_j} \frac{\partial y_j}{\partial z_i} = - \frac{t_i}{y_i} y_i (1-y_i) - \sum_{j \neq i}^C \frac{t_j}{y_j} (-y_j y_i) \\ & = - t_i + t_i y_i + \sum_{j \neq i}^C t_j y_i = - t_i + \sum_{j = 1}^C t_j y_i = -t_i + y_i \sum_{j = 1}^C t_j \\ & = y_i - t_i \end{split}$$

**Backward propagation**

During the backward step, what matters is the error gradient in each layer.

**Gradient of the output layer**

The error gradient $\delta_{o}$ of this cost function at the softmax output layer is simply (from (3)):

$$\delta_{o} = \frac{\partial \xi}{\partial Z_o} = Y - T$$

Since $Z_o = H \cdot W_o + \mathbf{b}_o$, the output gradient over all $N$ samples is computed as:

$$\frac{\partial \xi}{\partial \mathbf{w}_{oj}} = \frac{\partial Z_{o}}{\partial \mathbf{w}_{oj}} \frac{\partial Y}{\partial Z_{o}} \frac{\partial \xi}{\partial Y} = \frac{\partial Z_{o}}{\partial \mathbf{w}_{oj}} \frac{\partial \xi}{\partial Z_o} = \sum_{i=1}^N h_{ij} (\mathbf{y}_i - \mathbf{t}_i) = \sum_{i=1}^N h_{ij} \delta_{oi}$$

In matrix form:

$$\frac{\partial \xi}{\partial W_o} = H^T \cdot (Y-T) = H^T \cdot \delta_{o}$$

For the bias term $\mathbf{b}_o$:

$$\frac{\partial \xi}{\partial \mathbf{b}_{o}} = \frac{\partial Z_{o}}{\partial \mathbf{b}_{o}} \frac{\partial Y}{\partial Z_{o}} \frac{\partial \xi}{\partial Y} = \sum_{i=1}^N 1 \cdot (\mathbf{y}_i - \mathbf{t}_i) = \sum_{i=1}^N \delta_{oi}$$

**Gradient of the hidden layer**

The error gradient $\delta_{h}$ of this cost
function at the hidden layer can be defined as:

$$\delta_{h} = \frac{\partial \xi}{\partial Z_h} = \frac{\partial H}{\partial Z_h} \frac{\partial \xi}{\partial H} = \frac{\partial H}{\partial Z_h} \frac{\partial Z_o}{\partial H} \frac{\partial \xi}{\partial Z_o}$$

Since $Z_h = X \cdot W_h + \mathbf{b}_h$, $\delta_{h}$ also results in an $N \times 3$ matrix. We use (1) and (3) to get the result below. Note that the gradients which backpropagate from the previous layer via the weighted connections are summed for each $h_{ij}$:

$$\delta_{hij} = \frac{\partial \xi}{\partial z_{hij}} = \frac{\partial h_{ij}}{\partial z_{hij}} \frac{\partial \mathbf{z}_{oi}}{\partial h_{ij}} \frac{\partial \xi}{\partial \mathbf{z}_{oi}} = h_{ij} (1-h_{ij}) \sum_{k=1}^2 w_{ojk} (y_{ik}-t_{ik}) = h_{ij} (1-h_{ij}) [\delta_{oi} \cdot \mathbf{w}_{oj}^T]$$

In matrix form, with the notation $\circ$ for the elementwise product:

$$\delta_{h} = \frac{\partial \xi}{\partial Z_h} = H \circ (1 - H) \circ [\delta_{o} \cdot W_o^T]$$

The hidden layer gradient can then be defined as:

$$\frac{\partial \xi}{\partial \mathbf{w}_{hj}} = \frac{\partial Z_{h}}{\partial \mathbf{w}_{hj}} \frac{\partial H}{\partial Z_{h}} \frac{\partial \xi}{\partial H} = \frac{\partial Z_{h}}{\partial \mathbf{w}_{hj}} \frac{\partial \xi}{\partial Z_h} = \sum_{i=1}^N x_{ij} \delta_{hi}$$

For the bias term $\mathbf{b}_h$:

$$\frac{\partial \xi}{\partial \mathbf{b}_{h}} = \frac{\partial Z_{h}}{\partial \mathbf{b}_{h}} \frac{\partial H}{\partial Z_{h}} \frac{\partial \xi}{\partial H} = \sum_{i=1}^N \delta_{hi}$$

**Code for cost and backward gradient**
# Define the cost function def cost(Y, T): return - np.multiply(T, np.log(Y)).sum() # Define the error function at the output def error_output(Y, T): return Y - T # Define the gradient function for the weight parameters at the output layer def gradient_weight_out(H, Eo): return H.T.dot(Eo) # Define the gradient function for the bias parameters at the output layer def gradient_bias_out(Eo): return np.sum(Eo, axis=0, keepdims=True) # Define the error function at the hidden layer def error_hidden(H, Wo, Eo): # H * (1-H) * (E . Wo^T) return np.multiply(np.multiply(H,(1 - H)), Eo.dot(Wo.T)) # Define the gradient function for the weight parameters at the hidden layer def gradient_weight_hidden(X, Eh): return X.T.dot(Eh) # Define the gradient function for the bias parameters at the output layer def gradient_bias_hidden(Eh): return np.sum(Eh, axis=0, keepdims=True)
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Momentum

A model like this is highly unlikely to have a convex cost function, and with plain gradient descent we might easily get stuck in a local minimum. Momentum was created to address this, and it is probably the most popular extension of the backprop algorithm. Momentum can be defined as:

$$\begin{split} V(i+1) & = \lambda V(i) - \mu \frac{\partial \xi}{\partial \theta(i)} \\ \theta(i+1) & = \theta(i) + V(i+1) \end{split}$$

where $V(i)$ is the velocity of the parameters at iteration $i$, $0 < \lambda < 1$ controls how much the velocity decays due to 'resistance', and $\mu$ is the learning rate. Note that the previous $V(i)$ is used to update $V(i+1)$. The initial $V(0)$ is (nearly) 0, so $V(1) = \lambda$ (say, $0.9$) $\cdot\, 0 - \mu \frac{\partial \xi}{\partial \theta(0)}$ (say, $-1$) $= -1$. If at iteration 2 the gradient term points in the same direction (say, $-0.5$), then $V(2)$ grows in magnitude ($-0.9 - 0.5 = -1.4$); if it points in the opposite direction (say, $+0.5$), the velocity shrinks ($-0.9 + 0.5 = -0.4$). Essentially, it is like pushing a ball down a hill: the ball accumulates momentum as it rolls downhill, becoming faster and faster on the way. The same thing happens to our parameter updates: the momentum term increases updates for dimensions whose gradients point in the same direction and reduces updates for dimensions whose gradients change direction. As a result, we gain faster convergence and reduced oscillation. We can code backprop with momentum as below.
# Define the update function to update the network parameters over 1 iteration def backprop_gradients(X, T, Wh, bh, Wo, bo): # Compute the output of the network # Compute the activations of the layers H = hidden_activations(X, Wh, bh) Y = output_activations(H, Wo, bo) # Compute the gradients of the output layer Eo = error_output(Y, T) JWo = gradient_weight_out(H, Eo) Jbo = gradient_bias_out(Eo) # Compute the gradients of the hidden layer Eh = error_hidden(H, Wo, Eo) JWh = gradient_weight_hidden(X, Eh) Jbh = gradient_bias_hidden(Eh) return [JWh, Jbh, JWo, Jbo] def update_velocity(X, T, ls_of_params, Vs, momentum_term, learning_rate): # ls_of_params = [Wh, bh, Wo, bo] # Js = [JWh, Jbh, JWo, Jbo] Js = backprop_gradients(X, T, *ls_of_params) return [momentum_term * V - learning_rate * J for V,J in zip(Vs, Js)] def update_params(ls_of_params, Vs): # ls_of_params = [Wh, bh, Wo, bo] # Vs = [VWh, Vbh, VWo, Vbo] return [P + V for P,V in zip(ls_of_params, Vs)]
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Code Implementation
# Run backpropagation
# Initialize weights and biases
init_var = 0.1
# Initialize hidden layer parameters
bh = np.random.randn(1, 3) * init_var
Wh = np.random.randn(2, 3) * init_var
# Initialize output layer parameters
bo = np.random.randn(1, 2) * init_var
Wo = np.random.randn(3, 2) * init_var

# Parameters are already initialized randomly with the gradient checking
# Set the learning rate
learning_rate = 0.02
momentum_term = 0.9

# define the velocities Vs = [VWh, Vbh, VWo, Vbo]
Vs = [np.zeros_like(M) for M in [Wh, bh, Wo, bo]]

# Start the gradient descent updates and plot the iterations
nb_of_iterations = 300  # number of gradient descent updates
lr_update = learning_rate / nb_of_iterations  # learning rate update rule
ls_costs = [cost(nn(X, Wh, bh, Wo, bo), T)]  # list of cost over the iterations
for i in range(nb_of_iterations):
    # Update the velocities and the parameters
    Vs = update_velocity(X, T, [Wh, bh, Wo, bo], Vs, momentum_term, learning_rate)
    Wh, bh, Wo, bo = update_params([Wh, bh, Wo, bo], Vs)
    ls_costs.append(cost(nn(X, Wh, bh, Wo, bo), T))

# Plot the cost over the iterations
plt.plot(ls_costs, 'b-')
plt.xlabel('iteration')
plt.ylabel('$\\xi$', fontsize=15)
plt.title('Decrease of cost over backprop iteration')
plt.grid()
plt.show()
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Visualization of the trained classifier The decision boundary of the classifier we just trained circles around and between the blue and red classes. It is non-linear and hence able to correctly classify red and blue.
# Plot the resulting decision boundary # Generate a grid over the input space to plot the color of the # classification at that grid point nb_of_xs = 200 xs1 = np.linspace(-2, 2, num=nb_of_xs) xs2 = np.linspace(-2, 2, num=nb_of_xs) xx, yy = np.meshgrid(xs1, xs2) # create the grid # Initialize and fill the classification plane classification_plane = np.zeros((nb_of_xs, nb_of_xs)) for i in range(nb_of_xs): for j in range(nb_of_xs): pred = nn_predict(np.asmatrix([xx[i,j], yy[i,j]]), Wh, bh, Wo, bo) classification_plane[i,j] = pred[0,0] # Create a color map to show the classification colors of each grid point cmap = ListedColormap([ colorConverter.to_rgba('w', alpha=0.30), colorConverter.to_rgba('g', alpha=0.30)]) # Plot the classification plane with decision boundary and input samples plt.contourf(xx, yy, classification_plane, cmap=cmap) # Plot both classes on the x1, x2 plane plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red') plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue') plt.grid() plt.legend(loc=1) plt.xlabel('$x_1$', fontsize=15) plt.ylabel('$x_2$', fontsize=15) plt.axis([-1.5, 1.5, -1.5, 1.5]) plt.title('red vs blue classification boundary') plt.show()
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
Transformation of the input domain You can see from the plot below that the 2-dimensional input has been projected into a 3-dimensional space (the hidden layer) and has become linearly separable.
# Plot the projection of the input onto the hidden layer # Define the projections of the blue and red classes H_blue = hidden_activations(x_blue, Wh, bh) H_red = hidden_activations(x_red, Wh, bh) # Plot the error surface fig = plt.figure() ax = Axes3D(fig) ax.plot(np.ravel(H_blue[:,2]), np.ravel(H_blue[:,1]), np.ravel(H_blue[:,0]), 'bo') ax.plot(np.ravel(H_red[:,2]), np.ravel(H_red[:,1]), np.ravel(H_red[:,0]), 'ro') ax.set_xlabel('$h_1$', fontsize=15) ax.set_ylabel('$h_2$', fontsize=15) ax.set_zlabel('$h_3$', fontsize=15) ax.view_init(elev=10, azim=-40) plt.title('Projection of the input X onto the hidden layer H') plt.grid() plt.show()
notebook/machine-learning/deep_learning-neural-network-gradient-decent-part2.ipynb
weichetaru/weichetaru.github.com
mit
try/finally Statement

The other flavor of the try statement is a specialization that has to do with finalization (a.k.a. termination) actions. If a finally clause is included in a try, Python will always run its block of statements "on the way out" of the try statement, whether an exception occurred while the try block was running or not. In its general form, it is:

    try:
        statements  # Run this action first
    finally:
        statements  # Always run this code on the way out

<a name="ctx"></a>

with/as Context Managers

Python 2.6 and 3.0 introduced a new exception-related statement—the with, and its optional as clause. This statement is designed to work with context manager objects, which support a new method-based protocol, similar in spirit to the way that iteration tools work with methods of the iteration protocol.

Context Manager Intro

Basic Usage:

    with expression [as variable]:
        with-block

Classical Usage

```python
with open(r'C:\misc\data') as myfile:
    for line in myfile:
        print(line)
        # ...more code here...
```

... even using multiple context managers:

```python
with open('script1.py') as f1, open('script2.py') as f2:
    for (linenum, (line1, line2)) in enumerate(zip(f1, f2)):
        if line1 != line2:
            print('%s\n%r\n%r' % (linenum, line1, line2))
```

How it works

1. The expression is evaluated, resulting in an object known as a context manager that must have `__enter__` and `__exit__` methods.
2. The context manager's `__enter__` method is called. The value it returns is assigned to the variable in the as clause if present, or simply discarded otherwise.
3. The code in the nested with block is executed.
4. If the with block raises an exception, the `__exit__(type, value, traceback)` method is called with the exception details. These are the same three values returned by sys.exc_info (Python function). If this method returns a false value, the exception is re-raised; otherwise, the exception is terminated. The exception should normally be reraised so that it is propagated outside the with statement.
If the with block does not raise an exception, the __exit__ method is still called, but its type, value, and traceback arguments are all passed in as None. Usage with Exceptions
class TraceBlock: def message(self, arg): print('running ' + arg) def __enter__(self): print('starting with block') return self def __exit__(self, exc_type, exc_value, exc_tb): if exc_type is None: print('exited normally\n') else: print('raise an exception! ' + str(exc_type)) return False # Propagate with TraceBlock() as action: action.message('test 1') print('reached') with TraceBlock() as action: action.message('test 2') raise TypeError() print('not reached')
09 Exceptions.ipynb
leriomaggio/python-in-a-notebook
mit
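Picking up the try/finally form described at the start of the previous section, a minimal self-contained sketch (the function name and printed messages are illustrative):

```python
# The finally block runs on the way out of the try statement,
# whether an exception occurred or not.
def divide(a, b):
    try:
        return a / b
    finally:
        print('cleanup always runs')

print(divide(10, 2))       # cleanup runs, then the result is printed

try:
    divide(1, 0)           # raises ZeroDivisionError inside the try
except ZeroDivisionError:
    print('cleanup ran before the exception propagated')
```

This is exactly the pattern that with/as packages up: `__enter__` plays the role of the setup before the try, and `__exit__` the role of the finally block.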
User Defined Exceptions
class AlreadyGotOne(Exception): pass def gail(): raise AlreadyGotOne() try: gail() except AlreadyGotOne: print('got exception') class Career(Exception): def __init__(self, job, *args, **kwargs): super(Career, self).__init__(*args, **kwargs) self._job = job def __str__(self): return 'So I became a waiter of {}'.format(self._job) raise Career('Engineer')
09 Exceptions.ipynb
leriomaggio/python-in-a-notebook
mit
<div class="alert alert-success"> **EXERCISE 1** Make a line chart of the `data` using Matplotlib. The figure should be 12 (width) by 4 (height) in inches. Make the line color 'darkgrey' and provide an x-label ('days since start') and a y-label ('measured value'). Use the object oriented approach to create the chart. <details><summary>Hints</summary> - When Matplotlib only receives a single input variable, it will interpret this as the variable for the y-axis - Check the cheat sheet above for the functions. </details> </div>
# %load _solutions/visualization_01_matplotlib1.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 2** The data represents each a day starting from Jan 1st 2021. Create an array (variable name `dates`) of the same length as the original data (length 100) with the corresponding dates ('2021-01-01', '2021-01-02',...). Create the same chart as in the previous exercise, but use the `dates` values for the x-axis data. Mark the region inside `[-5, 5]` with a green color to show that these values are within an acceptable range. <details><summary>Hints</summary> - As seen in notebook `pandas_04_time_series_data`, Pandas provides a useful function `pd.date_range` to create a set of datetime values. In this case 100 values with `freq="D"`. - Make sure to understand the difference between `axhspan` and `fill_between`, which one do you need? - When adding regions, adding an `alpha` level is mostly a good idea. </details> </div>
# %load _solutions/visualization_01_matplotlib2.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 3** Compare the __last ten days__ ('2021-04-01' till '2021-04-10') in a bar chart using darkgrey color. For the data on '2021-04-01', use an orange bar to highlight the measurement on this day. <details><summary>Hints</summary> - Select the last 10 days from the `data` and `dates` variable, i.e. slice [-10:]. - Similar to a `plot` method, Matplotlib provides a `bar` method. - By plotting a single orange bar on top of the grey bars with a second bar chart, that one is highlighted. </details> </div>
# %load _solutions/visualization_01_matplotlib3.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 4** Pandas supports different types of charts besides line plots, all available from `.plot.xxx`, e.g. `.plot.scatter`, `.plot.bar`,... Make a bar chart to compare the mean discharge in the three measurement stations L06_347, LS06_347, LS06_348. Add a y-label 'mean discharge'. To do so, prepare a Figure and Axes with Matplotlib and add the chart to the created Axes. <details><summary>Hints</summary> * You can either use Pandas `ylabel` parameter to set the label or add it with Matplotlib `ax.set_ylabel()` * To link an Axes object with Pandas output, pass the Axes created by `fig, ax = plt.subplots()` as parameter to the Pandas plot function. </details> </div>
# %load _solutions/visualization_01_matplotlib4.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 5** To compare the stations data, make two subplots next to each other: - In the left subplot, make a bar chart of the minimal measured value for each of the stations. - In the right subplot, make a bar chart of the maximal measured value for each of the stations. Add a title to the Figure containing 'Minimal and maximal discharge from 2009-01-01 till 2013-01-02'. Extract these dates from the data itself instead of hardcoding it. <details><summary>Hints</summary> - One can directly unpack the result of multiple axes, e.g. `fig, (ax0, ax1) = plt.subplots(1, 2,..` and link each of them to a Pandas plot function. - Remember the remark about `constrained_layout=True` to overcome overlap with subplots? - A Figure title is called `suptitle` (which is different from an Axes title) - f-strings ([_formatted string literals_](https://docs.python.org/3/tutorial/inputoutput.html#formatted-string-literals)) is a powerful Python feature (since Python 3.6) to use variables inside a string, e.g. `f"some text with a {variable:HOWTOFORMAT}"` (with the format being optional). </details> </div>
# %load _solutions/visualization_01_matplotlib5.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 6** Make a line plot of the discharge measurements in station `LS06_347`. The main event on November 13th caused a flood event. To support the reader in the interpretation of the graph, add the following elements: - Add a horizontal red line at 20 m3/s to define the alarm level. - Add the text 'Alarm level' in red just above the alarm level line. - Add an arrow pointing to the main peak in the data (event on November 13th) with the text 'Flood event on 2020-11-13' Check the Matplotlib documentation on [annotations](https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html#annotating-plots) for the text annotation <details><summary>Hints</summary> - The horizontal line is explained in the cheat sheet in this notebook. - Whereas `ax.text` would work as well for the 'alarm level' text, the `annotate` method provides easier options to shift the text slightly relative to a data point. - Extract the main peak event by filtering the data on the maximum value. Different approaches are possible, but the `max()` and `idxmax()` methods are a convenient option in this case. </details> </div>
# %load _solutions/visualization_01_matplotlib6.py
notebooks/visualization_01_matplotlib.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
In previous weeks we have covered preprocessing our data, dimensionality reduction, and last week looked at supervised learning. This week we will be pulling these processes together into a complete project. Most projects can be thought of as a series of discrete steps: Data acquisition / loading Feature creation Feature normalization Feature selection Machine learning model Combining multiple models Reporting / Utilization Data acquisition If we are fortunate our data may already be in a usable format but more often extensive work is needed to generate something usable. What type of data do we have? Do we need to combine data from multiple sources? Is our data structured in such a way it can be used directly? Does our data need to be cleaned? Does our data have issues with confounding? Feature creation Can our data be used directly? What features have been used previously for similar tasks? Feature normalization Z-score normalization? Min-max normalization? Feature selection The number of features we have compared with our sample size will determine whether feature selection is needed. We may choose in the first instance not to use feature selection. If we observe that our performance on the validation dataset is substantially worse than on the training dataset it is likely our model is overfitting and would benefit from limiting the number of features. Even if the performance is comparable we may still consider using dimensionality reduction or feature selection. Machine learning model Which algorithm to use will depend on the type of task and the size of the dataset. As with the preceding steps it can be difficult to predict the optimal approach and different options should be tried. Usually try ensemble models first, and train and evaluate more than one model! Combining multiple models An additional step that can frequently boost performance is combining multiple different models. 
It is important to consider that combining different models can make the result more difficult to interpret. The models may be generated by simply using a different algorithm or may additionally include changes to the features used. Reporting / Utilization Finally we need to be able to utilize the model we have generated. This typically takes the form of receiving a new sample and then performing all the steps used in training to make a prediction. If we are generating a model only to understand the structure of the data we already have the new samples may be only the test dataset we set aside at the beginning. Rapid experimentation At each of the major steps we need to take there are a variety of options. It is often not clear which approach will give us the best performance and so we should try several. Being able to rapidly try different options helps us get to the best solution faster. It is tempting to make a change to our code, execute it, look at the performance, and then decide between sticking with the change or going back to the original version. It is very easy to: Lose track of what code generated what solution Overwrite a working solution and be unable to repeat it Using version control software is very useful for avoiding these issues. Optimizing the entire workflow We have previously looked at approaches for choosing the optimal parameters for an algorithm. We also have choices earlier in the workflow that we should systematically explore - what features should we use, how should they be normalized. Scikit learn includes functionality for easily exploring the impact of different parameters not only in the machine learning algorithm we choose but at every stage of our solution. Pipeline
# http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#example-plot-digits-pipe-py
import numpy as np
import matplotlib.pyplot as plt

from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in newer versions

logistic = linear_model.LogisticRegression()

pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])

digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target

###############################################################################
# Plot the PCA spectrum
pca.fit(X_digits)

plt.figure(1, figsize=(4, 3))
plt.clf()
plt.axes([.2, .2, .7, .7])
plt.plot(pca.explained_variance_, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')

###############################################################################
# Prediction
n_components = [20, 40, 64]
Cs = np.logspace(-4, 4, 3)

# Parameters of pipelines can be set using '__' separated parameter names:
estimator = GridSearchCV(pipe,
                         dict(pca__n_components=n_components,
                              logistic__C=Cs))
estimator.fit(X_digits, y_digits)

plt.axvline(estimator.best_estimator_.named_steps['pca'].n_components,
            linestyle=':', label='n_components chosen')
plt.legend(prop=dict(size=12))
plt.show()
print(estimator)
Wk12-ml-workflow/Wk12-machine-learning-workflow.ipynb
beyondvalence/biof509_wtl
mit
FeatureUnion
# http://scikit-learn.org/stable/auto_examples/feature_stacker.html#example-feature-stacker-py
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()

X, y = iris.data, iris.target

# This dataset is way too high-dimensional. Better do PCA:
pca = PCA()

# Maybe some original features were good, too?
selection = SelectKBest()

# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])

# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)

svm = SVC(kernel="linear")

# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])

param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
Wk12-ml-workflow/Wk12-machine-learning-workflow.ipynb
beyondvalence/biof509_wtl
mit
Exercises

Using the final example with the diabetes dataset from last week, convert the solution over to a pipeline format. Do you get the same result for the optimal number of neighbors?
Create a new pipeline applying PCA to the dataset before the classifier. What is the optimal number of dimensions and neighbors?
Looking at both models, do the errors correlate with the true values? This can be visualized using a Bland-Altman plot of the average plotted against the difference for the true and predicted values.
Build a pipeline using a different algorithm. How does this compare?
Finally, combine two of the pipelines you have created using a FeatureUnion and then return the average of the two models. You may need to create a transformer to do this.
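The Bland-Altman exercise above can be sketched as follows. This is only one possible shape of the answer (the function name, and the conventional 1.96-sigma limits of agreement, are choices of this sketch, not fixed by the exercise):

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(y_true, y_pred, ax=None):
    """Return (average, difference) pairs; draw the plot if an axis is given."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    avg = (y_true + y_pred) / 2
    diff = y_true - y_pred
    if ax is not None:
        ax.scatter(avg, diff, s=10, alpha=0.5)
        ax.axhline(diff.mean(), color='k')                    # bias (mean difference)
        ax.axhline(diff.mean() + 1.96 * diff.std(), ls='--')  # limits of agreement
        ax.axhline(diff.mean() - 1.96 * diff.std(), ls='--')
        ax.set_xlabel('Average of true and predicted')
        ax.set_ylabel('Difference (true - predicted)')
    return avg, diff
```

For the pipeline in the solution below this could be called as `bland_altman(y_test, grid.predict(X_test), ax=plt.gca())`.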
from sklearn import grid_search
from sklearn import datasets
from sklearn import neighbors
from sklearn import metrics
from sklearn.pipeline import Pipeline

import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

diabetes = datasets.load_diabetes()

X = diabetes.data
y = diabetes.target

np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, y_train.shape)

knn = neighbors.KNeighborsRegressor()
pipe = Pipeline(steps=[('knn', knn)])

parameters = [1,2,3,4,5,6,7,8,9,10]
grid = grid_search.GridSearchCV(pipe, dict(knn__n_neighbors=parameters))
grid.fit(X_train, y_train)

plt.plot(y_test, grid.predict(X_test), 'k.')
plt.show()

print(metrics.mean_squared_error(y_test, grid.predict(X_test)))

grid.get_params()

best = grid.best_estimator_.named_steps['knn'].n_neighbors
print('optimal number of neighbors:', best)
Wk12-ml-workflow/Wk12-machine-learning-workflow.ipynb
beyondvalence/biof509_wtl
mit
Load some data

I'm going to work with the data from the combined data sets. The analysis for this data set is in analysis\Cf072115_to_Cf072215b. The one limitation here is that this data has already cut out the fission chamber neighbors.

det_df without fission chamber neighbors
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv') pair_is = bicorr.generate_pair_is(det_df,ignore_fc_neighbors_flag=True) det_df = det_df.loc[pair_is].reset_index().rename(columns={'index':'index_og'}).copy() det_df.head()
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
I am going to add in a new optional input parameter in bicorr.load_det_df that will let you provide this det_df without fission chamber neighbors directly. Try it out.
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv', remove_fc_neighbors=True) det_df.head() chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists() dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df) num_fissions = 2194651200.00
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
singles_hist.npz
singles_hist, dt_bin_edges_sh, dict_det_to_index, dict_index_to_det = bicorr.load_singles_hist(filepath='../analysis/Cf072115_to_Cf072215b/datap',plot_flag=True,show_flag=True)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Load bhp_nn for all pairs

I'm going to skip a few steps in order to save memory. This data was produced in analysis_build_bhp_nn_by_pair_1_ns.ipynb and is stored in datap\bhp_nn_by_pair_1ns.npz. Load it now, as explained in the notebook.
npzfile = np.load('../analysis/Cf072115_to_Cf072215b/datap/bhp_nn_by_pair_1ns.npz') pair_is = npzfile['pair_is'] bhp_nn_pos = npzfile['bhp_nn_pos'] bhp_nn_neg = npzfile['bhp_nn_neg'] dt_bin_edges = npzfile['dt_bin_edges']
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
The fission chamber neighbors have already been removed
len(pair_is)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Specify energy range
emin = 0.62 emax = 12
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Calculate sums

Singles - set up singles_df

I will store this in a pandas dataframe. Columns:

Channel number
Sp - Singles counts, positive
Sn - Singles counts, negative
Sd - Singles counts, br-subtracted
Sd_err - Singles counts, br-subtracted, err
singles_df = pd.DataFrame.from_dict(dict_index_to_det,orient='index',dtype=np.int8).rename(columns={0:'ch'}) chIgnore = [1,17,33] singles_df = singles_df[~singles_df['ch'].isin(chIgnore)].copy() singles_df['Sp']= 0.0 singles_df['Sn']= 0.0 singles_df['Sd']= 0.0 singles_df['Sd_err'] = 0.0
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Singles- calculate sums
for index in singles_df.index.values: Sp, Sn, Sd, Sd_err = bicorr.calc_n_sum_br(singles_hist, dt_bin_edges_sh, index, emin=emin, emax=emax) singles_df.loc[index,'Sp'] = Sp singles_df.loc[index,'Sn'] = Sn singles_df.loc[index,'Sd'] = Sd singles_df.loc[index,'Sd_err'] = Sd_err singles_df.head() bicorr_plot.Sd_vs_angle_all(singles_df)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Doubles- set up det_df
det_df.head() det_df['Cp'] = 0.0 det_df['Cn'] = 0.0 det_df['Cd'] = 0.0 det_df['Cd_err'] = 0.0 det_df['Np'] = 0.0 det_df['Nn'] = 0.0 det_df['Nd'] = 0.0 det_df['Nd_err'] = 0.0 det_df.head()
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Doubles- Calculate sums
for index in det_df.index.values: Cp, Cn, Cd, err_Cd = bicorr.calc_nn_sum_br(bhp_nn_pos[index,:,:], bhp_nn_neg[index,:,:], dt_bin_edges, emin=emin, emax=emax) det_df.loc[index,'Cp'] = Cp det_df.loc[index,'Cn'] = Cn det_df.loc[index,'Cd'] = Cd det_df.loc[index,'Cd_err'] = err_Cd Np, Nn, Nd, err_Nd = bicorr.calc_nn_sum_br(bhp_nn_pos[index,:,:], bhp_nn_neg[index,:,:], dt_bin_edges, emin=emin, emax=emax, norm_factor = num_fissions) det_df.loc[index,'Np'] = Np det_df.loc[index,'Nn'] = Nn det_df.loc[index,'Nd'] = Nd det_df.loc[index,'Nd_err'] = err_Nd det_df.head() bicorr_plot.counts_vs_angle_all(det_df, normalized=True)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Perform the correction

Now I am going to loop through all pairs and calculate $W$.

Loop through each pair
Identify $i$, $j$
Fetch $S_i$, $S_j$
Calculate $W$
Propagate error for $W_{err}$
Store in det_df

Add W, W_err columns to det_df
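The correction described above amounts to the following relations (a sketch of what the code below implements, with symbols matching the DataFrame columns):

```latex
W = \frac{C_d}{S_{d1}\, S_{d2}},
\qquad
\sigma_W = W \sqrt{
    \left(\frac{\sigma_{C_d}}{C_d}\right)^2
  + \left(\frac{\sigma_{S_{d1}}}{S_{d1}}\right)^2
  + \left(\frac{\sigma_{S_{d2}}}{S_{d2}}\right)^2 }
```

The error propagation assumes the doubles and singles counts are uncorrelated, so the relative errors add in quadrature.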
det_df['Sd1'] = 0.0 det_df['Sd1_err'] = 0.0 det_df['Sd2'] = 0.0 det_df['Sd2_err'] = 0.0 det_df['W'] = 0.0 det_df['W_err'] = 0.0 det_df.head() singles_df.head()
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Loop through det_df, store singles rates

Fill the S and S_err values for each channel in each detector pair.
# Fill S columns in det_df for index in singles_df.index.values: ch = singles_df.loc[index,'ch'] d1_indices = (det_df[det_df['d1'] == ch]).index.tolist() d2_indices = (det_df[det_df['d2'] == ch]).index.tolist() det_df.loc[d1_indices,'Sd1'] = singles_df.loc[index,'Sd'] det_df.loc[d1_indices,'Sd1_err'] = singles_df.loc[index,'Sd_err'] det_df.loc[d2_indices,'Sd2'] = singles_df.loc[index,'Sd'] det_df.loc[d2_indices,'Sd2_err'] = singles_df.loc[index,'Sd_err'] # Calculate W, W_err from S columns det_df['W'] = det_df['Cd']/(det_df['Sd1']*det_df['Sd2']) det_df['W_err'] = det_df['W'] * np.sqrt((det_df['Cd_err']/det_df['Cd'])**2 + (det_df['Sd1_err']/det_df['Sd1'])**2 + (det_df['Sd2_err']/det_df['Sd2'])**2) det_df.head() bicorr_plot.W_vs_angle_all(det_df)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
This is much "tighter" than the raw counts.

Functionalize

Write functions to perform all of these calculations. Demo them here. The functions are in a new script called bicorr_sums.py. You have to specify emin, emax.
emin = 0.62 emax = 12
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Data you have to have loaded:

det_df
dict_index_to_det
singles_hist
dt_bin_edges_sh
bhp_nn_pos
bhp_nn_neg
dt_bin_edges
emin
emax
num_fissions
angle_bin_edges

Produce and fill singles_df:
singles_df = bicorr_sums.init_singles_df(dict_index_to_det) singles_df.head() singles_df = bicorr_sums.fill_singles_df(dict_index_to_det, singles_hist, dt_bin_edges_sh, emin, emax) singles_df.head() bicorr_plot.Sd_vs_angle_all(singles_df)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Expand, fill det_df
det_df.head() det_df = bicorr_sums.init_det_df_sums(det_df, t_flag = True) det_df = bicorr_sums.fill_det_df_singles_sums(det_df, singles_df) det_df = bicorr_sums.fill_det_df_doubles_t_sums(det_df, bhp_nn_pos, bhp_nn_neg, dt_bin_edges, emin, emax) det_df = bicorr_sums.calc_det_df_W(det_df) det_df.head()
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Condense into angle bins
angle_bin_edges = np.arange(8,190,10) by_angle_df = bicorr_sums.condense_det_df_by_angle(det_df,angle_bin_edges) by_angle_df.head() bicorr_plot.W_vs_angle(det_df, by_angle_df, save_flag=False)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Put all of this into one function

Returns: singles_df, det_df, by_angle_df
angle_bin_edges = np.arange(8,190,10) singles_df, det_df, by_angle_df = bicorr_sums.perform_W_calcs(det_df, dict_index_to_det, singles_hist, dt_bin_edges_sh, bhp_nn_pos, bhp_nn_neg, dt_bin_edges, num_fissions, emin, emax, angle_bin_edges) det_df.head() bicorr_plot.W_vs_angle(det_df, by_angle_df, save_flag = False)
methods/singles_correction.ipynb
pfschus/fission_bicorrelation
mit
Annotate

You can add new fields to a table with annotate. As an example, let's create a new column called cleaned_occupation that replaces entries in the occupation field labeled 'other' or 'none' with missing values.
missing_occupations = hl.set(['other', 'none']) t = users.annotate( cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show()
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
danking/hail
mit
transmute replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. transmute is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with transmute replacing select.
missing_occupations = hl.set(['other', 'none']) t = users.select( cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show() missing_occupations = hl.set(['other', 'none']) t = users.transmute( cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show()
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
danking/hail
mit
From now on, we will refer to this table using this variable ($meth_BQtable), but we could just as well explicitly give the table name each time. Let's start by taking a look at the table schema:
%bigquery schema --table $meth_BQtable
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
Let's count up the number of unique patients, samples and aliquots mentioned in this table. Using the same approach, we can count up the number of unique CpG probes. We will do this by defining a very simple parameterized query. (Note that when using a variable for the table name in the FROM clause, you should not also use the square brackets that you usually would if you were specifying the table name as a string.)
%%sql --module count_unique DEFINE QUERY q1 SELECT COUNT (DISTINCT $f, 500000) AS n FROM $t fieldList = ['ParticipantBarcode', 'SampleBarcode', 'AliquotBarcode', 'Probe_Id'] for aField in fieldList: field = meth_BQtable.schema[aField] rdf = bq.Query(count_unique.q1,t=meth_BQtable,f=field).results().to_dataframe() print " There are %6d unique values in the field %s. " % ( rdf.iloc[0]['n'], aField)
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
As mentioned above, two different platforms were used to measure DNA methylation. The annotations from Illumina are also available in a BigQuery table:
methAnnot = bq.Table('isb-cgc:platform_reference.methylation_annotation') %bigquery schema --table $methAnnot
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
Given the coordinates for a gene of interest, we can find the associated methylation probes.
%%sql --module getGeneProbes SELECT IlmnID, Methyl27_Loci, CHR, MAPINFO FROM $t WHERE ( CHR=$geneChr AND ( MAPINFO>$geneStart AND MAPINFO<$geneStop ) ) ORDER BY Methyl27_Loci DESC, MAPINFO ASC # MLH1 gene coordinates (+/- 2500 bp) geneChr = "3" geneStart = 37034841 - 2500 geneStop = 37092337 + 2500 mlh1Probes = bq.Query(getGeneProbes,t=methAnnot,geneChr=geneChr,geneStart=geneStart,geneStop=geneStop).results()
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
There are a total of 50 methylation probes in and near the MLH1 gene, although only 6 of them are on both the 27k and the 450k versions of the platform.
mlh1Probes
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
We can now use this list of CpG probes as a filter on the data table to extract all of the methylation data across all tumor types for MLH1:
%%sql --module getMLH1methStats SELECT cpg.IlmnID AS Probe_Id, cpg.Methyl27_Loci AS Methyl27_Loci, cpg.CHR AS Chr, cpg.MAPINFO AS Position, data.beta_stdev AS beta_stdev, data.beta_mean AS beta_mean, data.beta_min AS beta_min, data.beta_max AS beta_max FROM ( SELECT * FROM $mlh1Probes ) AS cpg JOIN ( SELECT Probe_Id, STDDEV(beta_value) beta_stdev, AVG(beta_value) beta_mean, MIN(beta_value) beta_min, MAX(beta_value) beta_max FROM $meth_BQtable WHERE ( SampleTypeLetterCode=$sampleType ) GROUP BY Probe_Id ) AS data ON cpg.IlmnID = data.Probe_Id ORDER BY Position ASC qTP = bq.Query(getMLH1methStats,mlh1Probes=mlh1Probes,meth_BQtable=meth_BQtable,sampleType="TP") rTP = qTP.results().to_dataframe() rTP.describe() qNT = bq.Query(getMLH1methStats,mlh1Probes=mlh1Probes,meth_BQtable=meth_BQtable,sampleType="NT") rNT = qNT.results().to_dataframe() rNT.describe() import numpy as np import matplotlib.pyplot as plt bins=range(1,len(rTP)+1) #print bins plt.bar(bins,rTP['beta_mean'],color='red',alpha=0.8,label='Primary Tumor'); plt.bar(bins,rNT['beta_mean'],color='blue',alpha=0.4,label='Normal Tissue'); plt.legend(loc='upper left'); plt.title('MLH1 DNA methylation: average'); plt.bar(bins,rTP['beta_stdev'],color='red',alpha=0.8,label='Primary Tumor'); plt.bar(bins,rNT['beta_stdev'],color='blue',alpha=0.4,label='Normal Tissue'); plt.legend(loc='upper right'); plt.title('MLH1 DNA methylation: standard deviation');
notebooks/DNA Methylation.ipynb
isb-cgc/examples-Python
apache-2.0
Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

<img src="assets/neural_network.png" width=300px>

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 0 # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid def train(self, features, targets): ''' Train the network on batch of features and targets. Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. hidden_inputs = None # signals into hidden layer hidden_outputs = None # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. 
final_inputs = None # signals into final output layer final_outputs = None # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = None # Output layer error is the difference between desired target and actual output. # TODO: Calculate the backpropagated error term (delta) for the output output_error_term = None # TODO: Calculate the hidden layer's contribution to the error hidden_error = None # TODO: Calculate the backpropagated error term (delta) for the hidden layer hidden_error_term = None # Weight step (input to hidden) delta_weights_i_h += None # Weight step (hidden to output) delta_weights_h_o += None # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += None # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += None # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = None # signals into hidden layer hidden_outputs = None # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = None # signals into final output layer final_outputs = None # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2)
first-neural-network/Your_first_neural_network.ipynb
brandoncgay/deep-learning
mit
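As a hint for the TODOs in the train and run methods, here is a minimal NumPy sketch of the forward pass only. The names w_ih and w_ho stand in for self.weights_input_to_hidden and self.weights_hidden_to_output; this is one possible shape of the answer, not the only correct implementation:

```python
import numpy as np

def sigmoid(x):
    # activation for the hidden layer
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, w_ih, w_ho):
    """One forward pass: sigmoid hidden layer, identity (f(x) = x) output layer."""
    hidden_inputs = np.dot(X, w_ih)               # signals into hidden layer
    hidden_outputs = sigmoid(hidden_inputs)       # signals from hidden layer
    final_inputs = np.dot(hidden_outputs, w_ho)   # signals into final output layer
    final_outputs = final_inputs                  # f(x) = x, so its derivative is 1
    return final_outputs
```

The identity output activation is what makes the derivative in the backward pass equal to 1, which simplifies the output error term.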
Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of iterations

This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes

In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.

Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
import sys

### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
first-neural-network/Your_first_neural_network.ipynb
brandoncgay/deep-learning
mit
Error plots for MiniZephyr vs. the AnalyticalHelmholtz response

Response of the field (showing where the numerical case does not match the analytical case):

Source region
PML regions
fig = plt.figure() ax = fig.add_subplot(1,1,1, aspect=0.1) plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz') plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr') plt.legend(loc=4) plt.title('Real part of response through xs=%d'%xs)
notebooks/Compare Solutions Homogeneous.ipynb
uwoseis/zephyr
mit
Define a Neural Network

Three fully-connected layers; input size = height * width of the image, output size = the number of classes (which is 10 in the case of MNIST).

Use the base class: nn.Module. The nn.Module base class mainly takes care of storing the parameters of the neural network.
class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(28 * 28, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # flatten image x = x[:, 0, ...].view(-1, 28*28) # feed layer 1 out_layer1 = self.fc1(x) out_layer1 = F.relu(out_layer1) # feed layer 2 out_layer2 = self.fc2(out_layer1) out_layer2 = F.relu(out_layer2) # feed layer 3 out_layer3 = self.fc3(out_layer2) return out_layer3 net = Net()
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Define a Loss function and optimizer

Let's use a Classification Cross-Entropy loss and SGD with momentum.
import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Train the network

This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize.
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 100 == 99:    # print every 100 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 100))
            running_loss = 0.0

print('Finished Training')
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Test the network on the test data

We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all. We will check this by predicting the class label that the neural network outputs, and checking it against the ground truth. If the prediction is correct, we add the sample to the list of correct predictions.

Okay, first step. Let us display an image from the test set to get familiar.
testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Performance on the test dataset.
correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
Plot images:
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

predictions = net(images)
_, predicted = torch.max(predictions.data, 1)

# show images
imshow(torchvision.utils.make_grid(images))
# print predicted labels
print(' '.join('%5s' % predicted[j].item() for j in range(4)))
session10_PyTorch/introduction_to_pytorch_mnist.ipynb
INM-6/Python-Module-of-the-Week
mit
23 24 25 26

Why divide by n?

27

For a symmetric matrix, eigenvectors corresponding to different eigenvalues are mutually orthogonal; in other words, the matrix formed by the unit eigenvectors of a symmetric matrix is an orthogonal matrix.

Orthogonal matrix: $Q^T * Q = I$, i.e. each column vector dotted with itself gives 1, and dotted with any other column gives 0.

There is also the diagonalization relationship. This feels somewhat related to SVD, though I am not certain.

28 29

The expectation is the first-order raw moment; the variance is the second-order central moment.

30 31 32 33 34
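The claim that the unit eigenvectors of a symmetric matrix form an orthogonal matrix ($Q^T Q = I$), and the diagonalization that follows from it, are easy to check numerically. A quick sketch with a random symmetric matrix (np.linalg.eigh returns orthonormal eigenvectors for symmetric input):

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(4, 4)
S = (A + A.T) / 2               # symmetrize: S is now a real symmetric matrix
eigvals, Q = np.linalg.eigh(S)  # columns of Q are orthonormal eigenvectors

# Q^T Q should be the identity, and Q diag(lambda) Q^T should reconstruct S
print(np.allclose(Q.T @ Q, np.eye(4)))
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, S))
```

The second check is exactly the diagonalization $S = Q \Lambda Q^T$; the SVD of a symmetric positive semidefinite matrix coincides with this decomposition, which is the connection hinted at above.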
import numpy as np
import matplotlib.pyplot as plt

# rv is short for random variable
def cal_stats(rv):
    length = len(rv)
    if length == 0:
        raise ValueError("length of 0")
    mean = 0
    # raw moments of orders 2, 3 and 4
    two_order = 0
    third_order = 0
    fourth_order = 0
    for x in rv:
        mean += x
        two_order += x ** 2
        third_order += x ** 3
        fourth_order += x ** 4
    mean /= length
    two_order /= length
    third_order /= length
    fourth_order /= length
    variance = two_order - mean ** 2
    # third central moment expressed via raw moments, then standardized
    skewness = (third_order - 3 * mean * variance - mean ** 3) / variance ** 1.5
    kurtosis = fourth_order / variance ** 2 - 3
    return (mean, variance, skewness, kurtosis)

# randn samples from the standard normal distribution.
data = []
data.append(np.random.randn(10000))
data.append(2 * np.random.randn(10000))
data.append([x for x in data[0] if x > -0.5])
data.append(np.random.uniform(0, 4, 10000))

stats = []
for l in data:
    stats.append(cal_stats(l))

template = r'$\mu={0:.2f},\ \sigma={1:.2f},\ skewness={2:.2f},\ kurt={3:.2f}$'
infos = []
for stat in stats:
    infos.append(template.format(*stat))

plt.text(1, 0.38, infos[0], bbox=dict(facecolor='red', alpha=0.25))
plt.text(1, 0.35, infos[1], bbox=dict(facecolor='red', alpha=0.25))
plt.hist(data[0], 50, density=True, facecolor='r', alpha=0.9)
plt.hist(data[1], 80, density=True, facecolor='g', alpha=0.8)
ml-zb/lec02.ipynb
JasonWayne/course-notes
mit
RT vs Control Human/Mouse
out_dir = "RT_control_hm_gsea" df = pd.read_csv(os.path.join(BASE,"RT_control_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) #cc_res = run_go_gsea(rank_df, cc_map_f, seed=1111, outdir=out_dir) mf_rt = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_control = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rt = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_control = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) #cc_rt = cc_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) #cc_control = cc_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rt mf_control bp_rt bp_control
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs Control Non-Human/Mouse
out_dir = "RT_control_gsea" df = pd.read_csv(os.path.join(BASE,"RT_control_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) cc_res = run_go_gsea(rank_df, cc_map_f, seed=1111, outdir=out_dir) mf_rt = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_control = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rt = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_control = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) cc_rt = cc_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) cc_control = cc_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rt mf_control bp_rt bp_control cc_rt cc_control
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
Rag vs WT Human/Mouse
out_dir = "Rag_WT_hm_gsea" df = pd.read_csv(os.path.join(BASE,"Rag_WT_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) #cc_res = run_go_gsea(rank_df, cc_map_f, seed=1111, outdir=out_dir) mf_rag = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_wt = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rag = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_wt = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) #cc_rt = cc_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) #cc_control = cc_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rag mf_wt bp_rag bp_wt
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
Rag vs WT Non-Human/Mouse
out_dir = "Rag_WT_gsea" df = pd.read_csv(os.path.join(BASE,"Rag_WT_results_named_annot.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) cc_res = run_go_gsea(rank_df, cc_map_f, seed=1111, outdir=out_dir) mf_rag = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_wt = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rag = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_wt = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) cc_rag = cc_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) cc_wt = cc_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rag mf_wt bp_rag bp_wt cc_rag cc_wt
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs WT Human/Mouse
out_dir = "RT_WT_hm_gsea" df = pd.read_csv(os.path.join(BASE,"RT_WT_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) mf_rt = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_wt = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rt = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_wt = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rt mf_wt bp_rt bp_wt
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs WT Non-Human/Mouse
out_dir = "RT_WT_gsea" df = pd.read_csv(os.path.join(BASE,"RT_WT_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) cc_res = run_go_gsea(rank_df, cc_map_f, seed=1111, outdir=out_dir) mf_rt = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_wt = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rt = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_wt = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) cc_rt = cc_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) cc_wt = cc_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rt mf_wt bp_rt bp_wt cc_rt cc_wt
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs Rag Human/Mouse
out_dir = "RT_Rag_hm_gsea" df = pd.read_csv(os.path.join(BASE,"RT_Rag_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) mf_rt = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_rag = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rt = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_rag = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rt mf_rag bp_rt bp_rag
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
RT vs Rag Non-Human/Mouse
out_dir = "RT_Rag_gsea" df = pd.read_csv(os.path.join(BASE,"RT_Rag_deseq_results.csv")) df = df[(df.padj.abs()<=0.2)] df = df[~df.human_mouse] df['log2FoldChange'] = -1 * df['log2FoldChange'] rank_df = df[['Unnamed: 0', 'log2FoldChange']].rename(columns={'Unnamed: 0': 'gene_name', 'log2FoldChange': 'rank'}) rank_df = rank_df.sort_values('rank').reset_index(drop=True) mf_res = run_go_gsea(rank_df, mf_map_f, seed=1111, outdir=out_dir) bp_res = run_go_gsea(rank_df, bp_map_f, seed=1111, outdir=out_dir) cc_res = run_go_gsea(rank_df, cc_map_f, seed=1111, outdir=out_dir) mf_rt = mf_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) mf_rag = mf_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) bp_rt = bp_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) bp_rag = bp_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) cc_rt = cc_res.query('nes > 0 and pval < 0.05').sort_values('nes', ascending=False) cc_rag = cc_res.query('nes < 0 and pval < 0.05').sort_values('nes', ascending=True) mf_rt mf_rag bp_rt bp_rag cc_rt cc_rag
scripts/gseapy.ipynb
stuppie/CM7_CM1E2d56col_unenr123_rawextract_2017
mit
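The eight GSEA cells above repeat the same ranked-list preparation. A sketch of that step factored into a helper (column names `Unnamed: 0`, `padj`, `human_mouse`, and `log2FoldChange` are taken from the cells above; `run_go_gsea` itself is defined elsewhere in the notebook and not reproduced here):

```python
import pandas as pd

def make_rank_df(df, subset_human_mouse, flip_sign=True):
    """Filter by padj and the human/mouse flag, then build a sorted rank table."""
    out = df[df.padj.abs() <= 0.2]
    out = out[out.human_mouse] if subset_human_mouse else out[~out.human_mouse]
    lfc = -out['log2FoldChange'] if flip_sign else out['log2FoldChange']
    rank_df = pd.DataFrame({'gene_name': out['Unnamed: 0'], 'rank': lfc})
    return rank_df.sort_values('rank').reset_index(drop=True)

# Tiny synthetic example (hypothetical genes, not real results)
demo = pd.DataFrame({
    'Unnamed: 0': ['geneA', 'geneB', 'geneC'],
    'padj': [0.01, 0.5, 0.1],
    'human_mouse': [True, True, False],
    'log2FoldChange': [2.0, -1.0, 0.5],
})
print(make_rank_df(demo, subset_human_mouse=True))
```

With a helper like this, each comparison cell reduces to one `make_rank_df` call followed by the `run_go_gsea` calls, and the sign convention is stated in one place.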
Get the image data
img, seg, seeds = make_data(64, 20) i = 30 plt.imshow(img[i, :, :], cmap='gray')
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause
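`make_data` is not shown in this excerpt. A minimal sketch of what such a generator might look like (the actual implementation in pyseg_base may differ): a noisy 3D volume containing a brighter sphere, the matching ground-truth segmentation, and a pair of seed voxels.

```python
import numpy as np

def make_data(shape=64, radius=20, noise=10.0, rng_seed=0):
    """Hypothetical synthetic-data generator: (image, segmentation, seeds)."""
    rng = np.random.RandomState(rng_seed)
    zz, yy, xx = np.mgrid[:shape, :shape, :shape]
    c = shape // 2
    # ground truth: a sphere of the given radius at the volume center
    seg = ((zz - c) ** 2 + (yy - c) ** 2 + (xx - c) ** 2) <= radius ** 2
    # image: bright object plus Gaussian noise
    img = 80.0 * seg + rng.normal(0, noise, seg.shape)
    # seeds: label 1 inside the object, label 2 in the background
    seeds = np.zeros(seg.shape, dtype=np.uint8)
    seeds[c, c, c] = 1
    seeds[2, 2, 2] = 2
    return img, seg.astype(np.uint8), seeds

img, seg, seeds = make_data(64, 20)
print(img.shape, set(np.unique(seeds)))
```

The two seed labels correspond to the foreground/background markers that `gc.set_seeds(seeds)` consumes in the cells that follow.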
Train a Gaussian mixture model and save it to a file
segparams = {
    "method": "graphcut",
    "use_boundary_penalties": False,
    "boundary_dilatation_distance": 2,
    "boundary_penalties_weight": 1,
    "modelparams": {
        "type": "gmmsame",
        "fv_type": "intensity",
        # 'fv_extern': fv_function,
        "adaptation": "original_data",
    },
}
gc = pycut.ImageGraphCut(img, segparams=segparams)
gc.set_seeds(seeds)

t0 = datetime.now()
gc.run()
print(f"time consumed={datetime.now()-t0}")

plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i, :, :])
plt.show()

mdl_stored_file = "test_model.p"
gc.save(mdl_stored_file)
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause
Run segmentation faster by loading the model from a file. The advantage grows with the number of seeds.
# forget the previous instance
gc = None
img, seg, seeds = make_data(56, 18)

gc = pycut.ImageGraphCut(img)
gc.load(mdl_stored_file)
gc.set_seeds(seeds)

t0 = datetime.now()
gc.run(run_fit_model=False)
print(f"time consumed={datetime.now()-t0}")

plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i, :, :])
plt.show()
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause
The seeds do not have to be provided if the model is loaded from a file
# forget the previous instance
gc = None
img, seg, seeds = make_data(56, 18)

gc = pycut.ImageGraphCut(img)
gc.load(mdl_stored_file)

t0 = datetime.now()
gc.run(run_fit_model=False)
print(f"time consumed={datetime.now()-t0}")

plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i, :, :])
plt.show()
examples/pretrain_model.ipynb
mjirik/pyseg_base
bsd-3-clause