This OptimizationRecord object has many different fields attached to it so that all quantities involved in the computation can be explored. For this example, let us pull the final molecule (optimized structure) and inspect the physical dimensions. Note: if the status does not say COMPLETE, these fields will not be available. Try querying the procedure again in a few seconds to see if the task completed in the background.
final_mol = proc.get_final_molecule()
print(final_mol.measure([0, 1]))
print(final_mol.measure([1, 0, 2]))
final_mol
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
This water molecule has bond length and angle dimensions much closer to expected values. We can also plot the optimization history to see how each step in the geometry optimization affected the results. Though the chart is not too impressive for this simple molecule, it is hopefully illuminating and is available for any completed geometry optimization.
proc.show_history()
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
Collections

Submitting individual procedures or single quantum chemistry tasks is not typical, as it quickly becomes hard to track them all. To help resolve this, Collections are different ways of organizing standard computations so that many tasks can be referenced in a more human-friendly way. In this particular case, we will be exploring an intermolecular potential dataset. To begin, we will create a new dataset and add a few intermolecular interactions to it.
ds = ptl.collections.ReactionDataset("My IE Dataset", ds_type="ie", client=client, default_program="psi4")
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
We can construct a water dimer whose fragments (used in the intermolecular computation) are separated with the -- divider. A single water molecule with ghost atoms can be extracted like so:
water_dimer = ptl.Molecule.from_data("""
O 0.000000 0.000000 0.000000
H 0.758602 0.000000 0.504284
H 0.260455 0.000000 -0.872893
--
O 3.000000 0.500000 0.000000
H 3.758602 0.500000 0.504284
H 3.260455 0.500000 -0.872893
""")

water_dimer.get_fragment(0, 1)
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
Many molecular entries can be added to this dataset, where each entry is an intermolecular complex given a unique name. In addition, the add_ie_rxn method can automatically fragment molecules.
ds.add_ie_rxn("water dimer", water_dimer) ds.add_ie_rxn("helium dimer", """ He 0 0 -3 -- He 0 0 3 """)
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
Once the Collection is created, it can be saved to the server so that it can always be retrieved at a future date:
ds.save()
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
The client can list all Collections currently on the server and retrieve collections to be manipulated:
client.list_collections()

ds = client.get_collection("ReactionDataset", "My IE Dataset")
ds
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
Computing with collections

Computational methods can be applied to all of the reactions in the dataset with just a few simple lines:
ds.compute("B3LYP-D3", "def2-SVP")
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
By default, this collection evaluates the non-counterpoise-corrected interaction energy, which typically requires three computations per entry (the complex and each monomer). In this case we compute the B3LYP and -D3 additive corrections separately, nominally 12 total computations. However, the collection is smart enough to recognize that the two helium monomers are identical and do not need to be computed twice, reducing the total number of computations to 10, as shown here (a quick arithmetic check follows the next cell). We can continue to compute additional methods. Again, this is being evaluated on your computer! Be mindful of the compute requirements.
ds.compute("PBE-D3", "def2-SVP")
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
A list of all methods that have been computed for this dataset can also be shown:
ds.list_values()
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
The above only shows what has been computed and does not pull this data from the server to your computer. To do so, the get_values command can be used:
print(f"DataFrame units: {ds.units}") ds.get_values()
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
You can also visualize results and more!
ds.visualize(["B3LYP-D3", "PBE-D3"], "def2-SVP", bench="B3LYP/def2-svp", kind="violin")
docs/qcfractal/source/quickstart.ipynb
psi4/DatenQM
bsd-3-clause
Importing a data set

This next cell may take a little while to run if it's grabbing a pretty big data set. The cell label to the left will look like "In [*]" while it's still thinking and "In [2]" when it's finished.
# Combined land and ocean temperature averages (LOTI: Land Ocean Temperature Index)
data1 = pd.read_csv('http://github.com/adamlamee/CODINGinK12-data/raw/master/LOTI.csv',
                    header=1).replace(to_replace="***", value=np.NaN)
data_LOTI = data1.apply(lambda x: pd.to_numeric(x, errors='ignore'))

# Only land temperature averages
data2 = pd.read_csv('http://github.com/adamlamee/CODINGinK12-data/raw/master/LAND.csv',
                    header=1).replace(to_replace="***", value=np.NaN)
data_LAND = data2.apply(lambda x: pd.to_numeric(x, errors='ignore'))
GlobalTemp.ipynb
merryjman/astronomy
gpl-3.0
We can view the first few rows of the file we just imported.
# The .head(n) command displays the first n rows of the file.
data_LAND.head(5)
GlobalTemp.ipynb
merryjman/astronomy
gpl-3.0
Plotting the data
x1 = data_LOTI.Year
y1 = data_LOTI.JanDec

# plt.plot() makes a line graph, by default
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1)
plt.title('Average land and ocean temperature readings')
plt.xlabel('Year')
plt.ylabel('Percent temp change')

x2 = data_LAND.Year
y2 = data_LAND.JanDec

# plt.plot() makes a line graph, by default
fig = plt.figure(figsize=(10, 5))
plt.plot(x2, y2)
plt.title('Land temperature readings')
plt.xlabel('Year')
plt.ylabel('Percent temp change')

# Wow, this needs a title and axis labels!
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1, label="Land and Ocean")
plt.plot(x2, y2, label="Land only")
plt.legend()
plt.show()
GlobalTemp.ipynb
merryjman/astronomy
gpl-3.0
Edit and re-plot

If you like Randall Munroe's webcomic XKCD as much as I do, you can make your plots look like his hand-drawn ones. Thanks to Jake VanderPlas for sorting that out.
plt.xkcd()
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1)

# to make normal plots again
mpl.rcParams.update(inline_rc)
GlobalTemp.ipynb
merryjman/astronomy
gpl-3.0
Demultiplexers

\begin{definition}\label{def:MUX} A Demultiplexer, typically referred to as a DEMUX, is a digital (or analog) switching unit that routes a single input channel to one of many output channels via a control input. For a single-input DEMUX with $2^n$ outputs, there are $n$ selection signals that make up the control word choosing the output channel for the input. Thus a DEMUX is the conjugate digital element to the MUX: a MUX is an $N:1$ mapping device, while a DEMUX is a $1:N$ mapping device. From a behavioral standpoint, DEMUXs are implemented with the same if-elif-else (case) control statements as a MUX, but for each case all outputs must be specified. Furthermore, large DEMUXs are often implemented via stacked smaller DEMUXs, since their governing equations form the product set (Cartesian product) of all internal product terms of a MUX's SOP equation. \end{definition}

1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic

\begin{figure} \centerline{\includegraphics{DEMUX12Gate.png}} \caption{\label{fig:D12G} 1:2 DEMUX Symbol and Gate internals} \end{figure}

Sympy Expression
x, s, y0, y1 = symbols('x, s, y_0, y_1')
y12_0Eq = Eq(y0, ~s & x)
y12_1Eq = Eq(y1, s & x)
y12_0Eq, y12_1Eq

T0 = TruthTabelGenrator(y12_0Eq)
T1 = TruthTabelGenrator(y12_1Eq)
T10 = pd.merge(T1, T0, how='left')
T10

y12_0EqN = lambdify([s, x], y12_0Eq.rhs, dummify=False)
y12_1EqN = lambdify([s, x], y12_1Eq.rhs, dummify=False)

SystmaticVals = np.array(list(itertools.product([0, 1], repeat=2)))
print(SystmaticVals)
print(y12_0EqN(SystmaticVals[:, 0], SystmaticVals[:, 1]).astype(int))
print(y12_1EqN(SystmaticVals[:, 0], SystmaticVals[:, 1]).astype(int))
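Stepping back from the gate equations for a moment, the $1:N$ mapping from the definition above can be sketched behaviorally in a few lines of plain Python. This is our own illustration of the routing behavior, not part of the notebook's myHDL flow:

def demux(x, sel, n):
    """Route input x to output channel `sel` of a 1:2**n DEMUX; all other channels are 0."""
    return [x if i == sel else 0 for i in range(2**n)]

print(demux(1, 2, 2))  # [0, 0, 1, 0] -- channel 2 of a 1:4 DEMUX carries the input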
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Module
@block
def DEMUX1_2_Combo(x, s, y0, y1):
    """
    1:2 DEMUX written in full combo
    Inputs:
        x(bool): input feed
        s(bool): channel select
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
    """

    @always_comb
    def logic():
        y0.next = (not s) and x
        y1.next = s and x

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
TestLen=10 SystmaticVals=list(itertools.product([0,1], repeat=2)) xTVs=np.array([i[1] for i in SystmaticVals]).astype(int) np.random.seed(15) xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int) sTVs=np.array([i[0] for i in SystmaticVals]).astype(int) #the random genrator must have a differint seed beween each generation #call in order to produce differint values for each call np.random.seed(16) sTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int) TestLen=len(xTVs) SystmaticVals, sTVs, xTVs Peeker.clear() x=Signal(bool(0)); Peeker(x, 'x') s=Signal(bool(0)); Peeker(s, 's') y0=Signal(bool(0)); Peeker(y0, 'y0') y1=Signal(bool(0)); Peeker(y1, 'y1') DUT=DEMUX1_2_Combo(x, s, y0, y1) def DEMUX1_2_Combo_TB(): """ myHDL only testbench for module `DEMUX1_2_Combo` """ @instance def stimules(): for i in range(TestLen): x.next=int(xTVs[i]) s.next=int(sTVs[i]) yield delay(1) raise StopSimulation() return instances() sim=Simulation(DUT, DEMUX1_2_Combo_TB(), *Peeker.instances()).run() Peeker.to_wavedrom('x', 's', 'y0','y1') DEMUX1_2_ComboData=Peeker.to_dataframe() DEMUX1_2_ComboData=DEMUX1_2_ComboData[['x', 's', 'y0','y1']] DEMUX1_2_ComboData DEMUX1_2_ComboData['y0Ref']=DEMUX1_2_ComboData.apply(lambda row:y12_0EqN(row['s'], row['x']), axis=1).astype(int) DEMUX1_2_ComboData['y1Ref']=DEMUX1_2_ComboData.apply(lambda row:y12_1EqN(row['s'], row['x']), axis=1).astype(int) DEMUX1_2_ComboData Test0=(DEMUX1_2_ComboData['y0']==DEMUX1_2_ComboData['y0Ref']).all() Test1=(DEMUX1_2_ComboData['y1']==DEMUX1_2_ComboData['y1Ref']).all() Test=Test0&Test1 print(f'Module `DEMUX1_2_Combo` works as exspected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('DEMUX1_2_Combo');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_RTL.png}} \caption{\label{fig:D12CRTL} DEMUX1_2_Combo RTL schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_SYN.png}} \caption{\label{fig:D12CSYN} DEMUX1_2_Combo Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_IMP.png}} \caption{\label{fig:D12CIMP} DEMUX1_2_Combo Implemented Schematic; Xilinx Vivado 2017.4} \end{figure}

myHDL to Verilog Testbench
# create BitVectors
xTVs = intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
sTVs = intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), sTVs, bin(sTVs)

@block
def DEMUX1_2_Combo_TBV():
    """
    myHDL -> testbench for module `DEMUX1_2_Combo`
    """
    x = Signal(bool(0))
    s = Signal(bool(0))
    y0 = Signal(bool(0))
    y1 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x, s, y0, y1)

    # Test Signal Bit Vectors
    xTV = Signal(xTVs)
    sTV = Signal(sTVs)

    DUT = DEMUX1_2_Combo(x, s, y0, y1)

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTV[i])
            s.next = int(sTV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = DEMUX1_2_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_2_Combo_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment

Board Circuit

\begin{figure} \centerline{\includegraphics[width=5cm]{DEMUX12PYNQZ1Circ.png}} \caption{\label{fig:D12Circ} 1:2 DEMUX PYNQ-Z1 (Non SoC) conceptualized circuit} \end{figure}

Board Constraints
ConstraintXDCTextReader('DEMUX1_2');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Video of Deployment

DEMUX1_2_Combo on PYNQ-Z1 (YouTube)

1 Channel Input: 4 Channel Output demultiplexer in Gate Level Logic

Sympy Expression
x, s0, s1, y0, y1, y2, y3 = symbols('x, s0, s1, y0, y1, y2, y3')
y14_0Eq = Eq(y0, ~s0 & ~s1 & x)
y14_1Eq = Eq(y1, s0 & ~s1 & x)
y14_2Eq = Eq(y2, ~s0 & s1 & x)
y14_3Eq = Eq(y3, s0 & s1 & x)
y14_0Eq, y14_1Eq, y14_2Eq, y14_3Eq

T0 = TruthTabelGenrator(y14_0Eq)
T1 = TruthTabelGenrator(y14_1Eq)
T2 = TruthTabelGenrator(y14_2Eq)
T3 = TruthTabelGenrator(y14_3Eq)

T10 = pd.merge(T1, T0, how='left')
T20 = pd.merge(T2, T10, how='left')
T30 = pd.merge(T3, T20, how='left')
T30

y14_0EqN = lambdify([x, s0, s1], y14_0Eq.rhs, dummify=False)
y14_1EqN = lambdify([x, s0, s1], y14_1Eq.rhs, dummify=False)
y14_2EqN = lambdify([x, s0, s1], y14_2Eq.rhs, dummify=False)
y14_3EqN = lambdify([x, s0, s1], y14_3Eq.rhs, dummify=False)

SystmaticVals = np.array(list(itertools.product([0, 1], repeat=3)))
print(SystmaticVals)
print(y14_0EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_1EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_2EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_3EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Module
@block
def DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3):
    """
    1:4 DEMUX written in full combo
    Inputs:
        x(bool): input feed
        s0(bool): channel select 0
        s1(bool): channel select 1
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
        y2(bool): output channel 2
        y3(bool): output channel 3
    """

    @always_comb
    def logic():
        y0.next = (not s0) and (not s1) and x
        y1.next = s0 and (not s1) and x
        y2.next = (not s0) and s1 and x
        y3.next = s0 and s1 and x

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
TestLen = 10
SystmaticVals = list(itertools.product([0, 1], repeat=3))

xTVs = np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs = np.append(xTVs, np.random.randint(0, 2, TestLen)).astype(int)

s0TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
s0TVs = np.append(s0TVs, np.random.randint(0, 2, TestLen)).astype(int)

s1TVs = np.array([i[0] for i in SystmaticVals]).astype(int)
np.random.seed(17)
s1TVs = np.append(s1TVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs

Peeker.clear()
x = Signal(bool(0)); Peeker(x, 'x')
s0 = Signal(bool(0)); Peeker(s0, 's0')
s1 = Signal(bool(0)); Peeker(s1, 's1')
y0 = Signal(bool(0)); Peeker(y0, 'y0')
y1 = Signal(bool(0)); Peeker(y1, 'y1')
y2 = Signal(bool(0)); Peeker(y2, 'y2')
y3 = Signal(bool(0)); Peeker(y3, 'y3')

DUT = DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3)

def DEMUX1_4_Combo_TB():
    """
    myHDL only testbench for module `DEMUX1_4_Combo`
    """

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTVs[i])
            s0.next = int(s0TVs[i])
            s1.next = int(s1TVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, DEMUX1_4_Combo_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')

DEMUX1_4_ComboData = Peeker.to_dataframe()
DEMUX1_4_ComboData = DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_ComboData

DEMUX1_4_ComboData['y0Ref'] = DEMUX1_4_ComboData.apply(lambda row: y14_0EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y1Ref'] = DEMUX1_4_ComboData.apply(lambda row: y14_1EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y2Ref'] = DEMUX1_4_ComboData.apply(lambda row: y14_2EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y3Ref'] = DEMUX1_4_ComboData.apply(lambda row: y14_3EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData

Test0 = (DEMUX1_4_ComboData['y0'] == DEMUX1_4_ComboData['y0Ref']).all()
Test1 = (DEMUX1_4_ComboData['y1'] == DEMUX1_4_ComboData['y1Ref']).all()
Test2 = (DEMUX1_4_ComboData['y2'] == DEMUX1_4_ComboData['y2Ref']).all()
Test3 = (DEMUX1_4_ComboData['y3'] == DEMUX1_4_ComboData['y3Ref']).all()
Test = Test0 & Test1 & Test2 & Test3
print(f'Module `DEMUX1_4_Combo` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('DEMUX1_4_Combo');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_RTL.png}} \caption{\label{fig:D14CRTL} DEMUX1_4_Combo RTL schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_SYN.png}} \caption{\label{fig:D14CSYN} DEMUX1_4_Combo Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_IMP.png}} \caption{\label{fig:D14CIMP} DEMUX1_4_Combo Implemented Schematic; Xilinx Vivado 2017.4} \end{figure}

myHDL to Verilog Testbench
# create BitVectors
xTVs = intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs = intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs = intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)

@block
def DEMUX1_4_Combo_TBV():
    """
    myHDL -> testbench for module `DEMUX1_4_Combo`
    """
    x = Signal(bool(0))
    s0 = Signal(bool(0))
    s1 = Signal(bool(0))
    y0 = Signal(bool(0))
    y1 = Signal(bool(0))
    y2 = Signal(bool(0))
    y3 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x, s0, s1, y0, y1, y2, y3)

    # Test Signal Bit Vectors
    xTV = Signal(xTVs)
    s0TV = Signal(s0TVs)
    s1TV = Signal(s1TVs)

    DUT = DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3)

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTV[i])
            s0.next = int(s0TV[i])
            s1.next = int(s1TV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = DEMUX1_4_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_Combo_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment

Board Circuit

\begin{figure} \centerline{\includegraphics[width=5cm]{DEMUX14PYNQZ1Circ.png}} \caption{\label{fig:D14Circ} 1:4 DEMUX PYNQ-Z1 (Non SoC) conceptualized circuit} \end{figure}

Board Constraints
ConstraintXDCTextReader('DEMUX1_4');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Video of Deployment

DEMUX1_4_Combo on PYNQ-Z1 (YouTube)

1 Channel Input: 4 Channel Output demultiplexer via DEMUX Stacking

myHDL Module
@block
def DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3):
    """
    1:4 DEMUX via DEMUX Stacking
    Inputs:
        x(bool): input feed
        s0(bool): channel select 0
        s1(bool): channel select 1
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
        y2(bool): output channel 2
        y3(bool): output channel 3
    """
    s0_y0y1_WIRE = Signal(bool(0))
    s0_y2y3_WIRE = Signal(bool(0))

    x_s1_DEMUX = DEMUX1_2_Combo(x, s1, s0_y0y1_WIRE, s0_y2y3_WIRE)
    s1_y0y1_DEMUX = DEMUX1_2_Combo(s0_y0y1_WIRE, s0, y0, y1)
    s1_y2y3_DEMUX = DEMUX1_2_Combo(s0_y2y3_WIRE, s0, y2, y3)

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
TestLen = 10
SystmaticVals = list(itertools.product([0, 1], repeat=3))

xTVs = np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs = np.append(xTVs, np.random.randint(0, 2, TestLen)).astype(int)

s0TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
s0TVs = np.append(s0TVs, np.random.randint(0, 2, TestLen)).astype(int)

s1TVs = np.array([i[0] for i in SystmaticVals]).astype(int)
np.random.seed(17)
s1TVs = np.append(s1TVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs

Peeker.clear()
x = Signal(bool(0)); Peeker(x, 'x')
s0 = Signal(bool(0)); Peeker(s0, 's0')
s1 = Signal(bool(0)); Peeker(s1, 's1')
y0 = Signal(bool(0)); Peeker(y0, 'y0')
y1 = Signal(bool(0)); Peeker(y1, 'y1')
y2 = Signal(bool(0)); Peeker(y2, 'y2')
y3 = Signal(bool(0)); Peeker(y3, 'y3')

DUT = DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3)

def DEMUX1_4_DMS_TB():
    """
    myHDL only testbench for module `DEMUX1_4_DMS`
    """

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTVs[i])
            s0.next = int(s0TVs[i])
            s1.next = int(s1TVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, DEMUX1_4_DMS_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')

DEMUX1_4_DMSData = Peeker.to_dataframe()
DEMUX1_4_DMSData = DEMUX1_4_DMSData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_DMSData

Test = DEMUX1_4_DMSData == DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
Test = Test.all().all()
print(f'DEMUX1_4_DMS is equivalent to DEMUX1_4_Combo: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('DEMUX1_4_DMS');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_RTL.png}} \caption{\label{fig:D14DMSRTL} DEMUX1_4_DMS RTL schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_SYN.png}} \caption{\label{fig:D14DMSSYN} DEMUX1_4_DMS Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_IMP.png}} \caption{\label{fig:D14DMSIMP} DEMUX1_4_DMS Implemented Schematic; Xilinx Vivado 2017.4} \end{figure}

myHDL to Verilog Testbench
# create BitVectors
xTVs = intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs = intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs = intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)

@block
def DEMUX1_4_DMS_TBV():
    """
    myHDL -> testbench for module `DEMUX1_4_DMS`
    """
    x = Signal(bool(0))
    s0 = Signal(bool(0))
    s1 = Signal(bool(0))
    y0 = Signal(bool(0))
    y1 = Signal(bool(0))
    y2 = Signal(bool(0))
    y3 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x, s0, s1, y0, y1, y2, y3)

    # Test Signal Bit Vectors
    xTV = Signal(xTVs)
    s0TV = Signal(s0TVs)
    s1TV = Signal(s1TVs)

    DUT = DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3)

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTV[i])
            s0.next = int(s0TV[i])
            s1.next = int(s1TV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = DEMUX1_4_DMS_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_DMS_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment

Board Circuit: see the Board Circuit for "1 Channel Input: 4 Channel Output demultiplexer in Gate Level Logic".

Board Constraint: uses the same 'DEMUX1_4.xdc' as "1 Channel Input: 4 Channel Output demultiplexer in Gate Level Logic".

Video of Deployment

DEMUX1_4_DMS on PYNQ-Z1 (YouTube)

1:2 DEMUX via Behavioral IF

myHDL Module
@block
def DEMUX1_2_B(x, s, y0, y1):
    """
    1:2 DEMUX in behavioral
    Inputs:
        x(bool): input feed
        s(bool): channel select
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
    """

    @always_comb
    def logic():
        if s == 0:
            # take note that since we have two outputs, their next-state
            # values must both be set, else the last value will persist
            # until it changes
            y0.next = x
            y1.next = 0
        else:
            y0.next = 0
            y1.next = x

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
TestLen = 10
SystmaticVals = list(itertools.product([0, 1], repeat=2))

xTVs = np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs = np.append(xTVs, np.random.randint(0, 2, TestLen)).astype(int)

sTVs = np.array([i[0] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
sTVs = np.append(sTVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(xTVs)
SystmaticVals, sTVs, xTVs

Peeker.clear()
x = Signal(bool(0)); Peeker(x, 'x')
s = Signal(bool(0)); Peeker(s, 's')
y0 = Signal(bool(0)); Peeker(y0, 'y0')
y1 = Signal(bool(0)); Peeker(y1, 'y1')

DUT = DEMUX1_2_B(x, s, y0, y1)

def DEMUX1_2_B_TB():
    """
    myHDL only testbench for module `DEMUX1_2_B`
    """

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTVs[i])
            s.next = int(sTVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, DEMUX1_2_B_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's', 'y0', 'y1')

DEMUX1_2_BData = Peeker.to_dataframe()
DEMUX1_2_BData = DEMUX1_2_BData[['x', 's', 'y0', 'y1']]
DEMUX1_2_BData

Test = DEMUX1_2_BData == DEMUX1_2_ComboData[['x', 's', 'y0', 'y1']]
Test = Test.all().all()
print(f'DEMUX1_2_B is equivalent to DEMUX1_2_Combo: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('DEMUX1_2_B');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_2_B_RTL.png}} \caption{\label{fig:D12BRTL} DEMUX1_2_B RTL schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_2_B_SYN.png}} \caption{\label{fig:D12BSYN} DEMUX1_2_B Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_2_B_IMP.png}} \caption{\label{fig:D12BIMP} DEMUX1_2_B Implemented Schematic; Xilinx Vivado 2017.4} \end{figure}

myHDL to Verilog Testbench
# create BitVectors
xTVs = intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
sTVs = intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), sTVs, bin(sTVs)

@block
def DEMUX1_2_B_TBV():
    """
    myHDL -> testbench for module `DEMUX1_2_B`
    """
    x = Signal(bool(0))
    s = Signal(bool(0))
    y0 = Signal(bool(0))
    y1 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x, s, y0, y1)

    # Test Signal Bit Vectors
    xTV = Signal(xTVs)
    sTV = Signal(sTVs)

    DUT = DEMUX1_2_B(x, s, y0, y1)

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTV[i])
            s.next = int(sTV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = DEMUX1_2_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_2_B_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment

Board Circuit: see the Board Circuit for "1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic".

Board Constraint: uses the same 'DEMUX1_2.xdc' as "1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic".

Video of Deployment

DEMUX1_2_B on PYNQ-Z1 (YouTube)

1:4 DEMUX via Behavioral if-elif-else

myHDL Module
@block
def DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3):
    """
    1:4 DEMUX written via behavioral
    Inputs:
        x(bool): input feed
        s0(bool): channel select 0
        s1(bool): channel select 1
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
        y2(bool): output channel 2
        y3(bool): output channel 3
    """

    @always_comb
    def logic():
        if s0 == 0 and s1 == 0:
            y0.next = x; y1.next = 0
            y2.next = 0; y3.next = 0
        elif s0 == 1 and s1 == 0:
            y0.next = 0; y1.next = x
            y2.next = 0; y3.next = 0
        elif s0 == 0 and s1 == 1:
            y0.next = 0; y1.next = 0
            y2.next = x; y3.next = 0
        else:
            y0.next = 0; y1.next = 0
            y2.next = 0; y3.next = x

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
TestLen = 10
SystmaticVals = list(itertools.product([0, 1], repeat=3))

xTVs = np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs = np.append(xTVs, np.random.randint(0, 2, TestLen)).astype(int)

s0TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
s0TVs = np.append(s0TVs, np.random.randint(0, 2, TestLen)).astype(int)

s1TVs = np.array([i[0] for i in SystmaticVals]).astype(int)
np.random.seed(17)
s1TVs = np.append(s1TVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs

Peeker.clear()
x = Signal(bool(0)); Peeker(x, 'x')
s0 = Signal(bool(0)); Peeker(s0, 's0')
s1 = Signal(bool(0)); Peeker(s1, 's1')
y0 = Signal(bool(0)); Peeker(y0, 'y0')
y1 = Signal(bool(0)); Peeker(y1, 'y1')
y2 = Signal(bool(0)); Peeker(y2, 'y2')
y3 = Signal(bool(0)); Peeker(y3, 'y3')

DUT = DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3)

def DEMUX1_4_B_TB():
    """
    myHDL only testbench for module `DEMUX1_4_B`
    """

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTVs[i])
            s0.next = int(s0TVs[i])
            s1.next = int(s1TVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, DEMUX1_4_B_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')

DEMUX1_4_BData = Peeker.to_dataframe()
DEMUX1_4_BData = DEMUX1_4_BData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_BData

Test = DEMUX1_4_BData == DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
Test = Test.all().all()
print(f'DEMUX1_4_B is equivalent to DEMUX1_4_Combo: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('DEMUX1_4_B');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_B_RTL.png}} \caption{\label{fig:D14BRTL} DEMUX1_4_B RTL schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_B_SYN.png}} \caption{\label{fig:D14BSYN} DEMUX1_4_B Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_B_IMP.png}} \caption{\label{fig:D14BIMP} DEMUX1_4_B Implemented Schematic; Xilinx Vivado 2017.4} \end{figure}

myHDL to Verilog Testbench
# create BitVectors
xTVs = intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs = intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs = intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)

@block
def DEMUX1_4_B_TBV():
    """
    myHDL -> testbench for module `DEMUX1_4_B`
    """
    x = Signal(bool(0))
    s0 = Signal(bool(0))
    s1 = Signal(bool(0))
    y0 = Signal(bool(0))
    y1 = Signal(bool(0))
    y2 = Signal(bool(0))
    y3 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x, s0, s1, y0, y1, y2, y3)

    # Test Signal Bit Vectors
    xTV = Signal(xTVs)
    s0TV = Signal(s0TVs)
    s1TV = Signal(s1TVs)

    DUT = DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3)

    @instance
    def stimules():
        for i in range(TestLen):
            x.next = int(xTV[i])
            s0.next = int(s0TV[i])
            s1.next = int(s1TV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = DEMUX1_4_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_B_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment

Board Circuit: see the Board Circuit for "1 Channel Input: 4 Channel Output demultiplexer in Gate Level Logic".

Board Constraint: uses the same 'DEMUX1_4.xdc' as "1 Channel Input: 4 Channel Output demultiplexer in Gate Level Logic".

Video of Deployment

DEMUX1_4_B on PYNQ-Z1 (YouTube)

Demultiplexer 1:4 Behavioral via Bitvectors

myHDL Module
@block
def DEMUX1_4_BV(x, S, Y):
    """
    1:4 DEMUX written via behavioral with bit vectors
    Inputs:
        x(bool): input feed
        S(2bit vector): channel select bitvector; min=0, max=3
    Outputs:
        Y(4bit vector): output channel bitvector; values min=0, max=15;
            allowed in this application: 0, 1, 2, 4, 8
    """

    @always_comb
    def logic():
        # here concat is used to build up the output word from the x input
        if S == 0:
            Y.next = concat(intbv(0)[3:], x)                # '0001'
        elif S == 1:
            Y.next = concat(intbv(0)[2:], x, intbv(0)[1:])  # '0010'
        elif S == 2:
            Y.next = concat(intbv(0)[1:], x, intbv(0)[2:])  # '0100'
        else:
            Y.next = concat(x, intbv(0)[3:])                # '1000'

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
xTVs = np.array([0, 1])
xTVs = np.append(xTVs, np.random.randint(0, 2, 6)).astype(int)
TestLen = len(xTVs)

np.random.seed(12)
STVs = np.arange(0, 4)
STVs = np.append(STVs, np.random.randint(0, 4, 5))
TestLen, xTVs, STVs

Peeker.clear()
x = Signal(bool(0)); Peeker(x, 'x')
S = Signal(intbv(0)[2:]); Peeker(S, 'S')
Y = Signal(intbv(0)[4:]); Peeker(Y, 'Y')

DUT = DEMUX1_4_BV(x, S, Y)

def DEMUX1_4_BV_TB():
    @instance
    def stimules():
        for i in STVs:
            for j in xTVs:
                S.next = int(i)
                x.next = int(j)
                yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, DEMUX1_4_BV_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 'S', 'Y', start_time=0, stop_time=2*TestLen+2)

DEMUX1_4_BVData = Peeker.to_dataframe()
DEMUX1_4_BVData = DEMUX1_4_BVData[['x', 'S', 'Y']]
DEMUX1_4_BVData

DEMUX1_4_BVData['y0'] = None; DEMUX1_4_BVData['y1'] = None
DEMUX1_4_BVData['y2'] = None; DEMUX1_4_BVData['y3'] = None
DEMUX1_4_BVData[['y3', 'y2', 'y1', 'y0']] = DEMUX1_4_BVData[['Y']].apply(
    lambda bv: [int(i) for i in bin(bv, 4)], axis=1, result_type='expand')

DEMUX1_4_BVData['s0'] = None; DEMUX1_4_BVData['s1'] = None
DEMUX1_4_BVData[['s1', 's0']] = DEMUX1_4_BVData[['S']].apply(
    lambda bv: [int(i) for i in bin(bv, 2)], axis=1, result_type='expand')

DEMUX1_4_BVData = DEMUX1_4_BVData[['x', 'S', 's0', 's1', 'Y', 'y3', 'y2', 'y1', 'y0']]
DEMUX1_4_BVData

DEMUX1_4_BVData['y0Ref'] = DEMUX1_4_BVData.apply(lambda row: y14_0EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y1Ref'] = DEMUX1_4_BVData.apply(lambda row: y14_1EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y2Ref'] = DEMUX1_4_BVData.apply(lambda row: y14_2EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y3Ref'] = DEMUX1_4_BVData.apply(lambda row: y14_3EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData

# compare the values directly; `sort_index(inplace=True)` returns None, so the
# original comparison always evaluated to True regardless of the data
Test = (DEMUX1_4_BVData[['y0', 'y1', 'y2', 'y3']].values ==
        DEMUX1_4_BVData[['y0Ref', 'y1Ref', 'y2Ref', 'y3Ref']].values).all()
print(f'Module `DEMUX1_4_BV` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('DEMUX1_4_BV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_RTL.png}} \caption{\label{fig:D14BVRTL} DEMUX1_4_BV RTL schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_SYN.png}} \caption{\label{fig:D14BVSYN} DEMUX1_4_BV Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure}

\begin{figure} \centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_IMP.png}} \caption{\label{fig:D14BVIMP} DEMUX1_4_BV Implemented Schematic; Xilinx Vivado 2017.4} \end{figure}

myHDL to Verilog Testbench (To Do!)

PYNQ-Z1 Board Deployment

Board Circuit

Board Constraints
ConstraintXDCTextReader('DEMUX1_4_BV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Demultiplexers(DEMUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Regex Filtering

Another approach is to use regular expressions when dealing with special patterns of noise. We have explained regular expressions in detail in one of our previous articles. The following Python code removes a regex pattern from the input text:
import re

def _remove_regex(input_text, regex_pattern):
    urls = re.finditer(regex_pattern, input_text)
    for i in urls:
        input_text = re.sub(i.group().strip(), '', input_text)
    return input_text

regex_pattern = "#[\w]*"
_remove_regex("remove this #FloridaBlue from tweet text", regex_pattern)
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Normalization

Another type of textual noise is the multiple representations exhibited by a single word. For example, “play”, “player”, “played”, “plays” and “playing” are different variations of the word “play”. Though they look different, contextually they are all similar. This step converts all the disparities of a word into their normalized form (also known as the lemma). Normalization is a pivotal step for feature engineering with text, as it converts high-dimensional features (N different features) into a low-dimensional space (1 feature), which is ideal for any ML model. The most common lexicon normalization practices are:

Stemming: Stemming is a rudimentary rule-based process of stripping suffixes (“ing”, “ly”, “es”, “s”, etc.) from a word.

Lemmatization: Lemmatization, on the other hand, is an organized, step-by-step procedure for obtaining the root form of a word; it makes use of vocabulary (dictionary importance of words) and morphological analysis (word structure and grammar relations).

Below is sample code that performs lemmatization and stemming using Python's popular library, NLTK.
from nltk.stem.wordnet import WordNetLemmatizer
lem = WordNetLemmatizer()

from nltk.stem.porter import PorterStemmer
stem = PorterStemmer()

word = "multiplying"
lem.lemmatize(word, "v")
stem.stem(word)
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Standardization

Text data often contains words or phrases which are not present in any standard lexical dictionary. These pieces are not recognized by search engines and models. Some examples are acronyms, hashtags with attached words, and colloquial slang. With the help of regular expressions and manually prepared data dictionaries, this type of noise can be fixed. The code below uses a dictionary-lookup method to replace social-media slang in a text.
translation_dict = {'rt':'Retweet', 'dm':'direct message', "awsm" : "awesome", "luv" :"love"}
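The dictionary above only supplies the mapping; the lookup-and-replace step described in the text is not shown in the cell. A minimal sketch (the helper name standardize_words is our own, for illustration):

def standardize_words(input_text, lookup_dict):
    # replace any word found in the lookup dictionary with its standard form
    words = input_text.split()
    new_words = [lookup_dict.get(word.lower(), word) for word in words]
    return ' '.join(new_words)

print(standardize_words("rt this dm is awsm", translation_dict))
# -> Retweet this direct message is awesome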
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Parsing

Syntactical parsing involves the analysis of words in a sentence for grammar, and their arrangement in a manner that shows the relationships among the words. Dependency grammar and part-of-speech tags are the important attributes of text syntactics.

Dependency Trees – Sentences are composed of words sewn together. The relationships among the words in a sentence are determined by the basic dependency grammar. Dependency grammar is a class of syntactic text analysis that deals with (labeled) asymmetrical binary relations between two lexical items (words). Every relation can be represented as a triplet (relation, governor, dependent). For example, consider the sentence “Bills on ports and immigration were submitted by Senator Brownback, Republican of Kansas.” The relationships among the words can be observed in the following tree representation:

<img src='https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2017/01/11181146/image-2.png' width="50%" height="50%"></img>

The tree shows that “submitted” is the root word of this sentence and is linked by two sub-trees (the subject and object subtrees). Each subtree is itself a dependency tree, with relations such as (“Bills” <-> “ports” <by> “proposition” relation) and (“ports” <-> “immigration” <by> “conjugation” relation). This type of tree, when parsed recursively in a top-down manner, gives grammar relation triplets as output, which can be used as features for many NLP problems like entity-wise sentiment analysis, actor and entity identification, and text classification. The Python wrapper StanfordCoreNLP (by the Stanford NLP Group, commercial license only) and NLTK dependency grammars can be used to generate dependency trees.

Tokenization

The process of splitting text into smaller pieces or units. We want to tokenize text into sentences, and sentences into tokens. The library provides a tokenization module, nltk.tokenize
import nltk
from nltk import sent_tokenize, word_tokenize
from IPython.display import Image

nltk.download()

sentences = sent_tokenize("Our mission is to help people and communities achieve better health declares our purpose " \
                          "as a company and it serves as the standard against which we weigh our actions and our decisions. " \
                          "Our Vision is to be a leading innovator enabling healthy communities is both the inspirational and " \
                          "aspirational description of the future state of our company. It is our framework and guides every " \
                          "aspect of our business. By broadening our scope and continuing to evolve, we have more flexibility " \
                          "to make a greater impact on as many people as possible. Our core values are timeless. They " \
                          "the core principles that distinguish our culture and serve as a compass for our actions and describe " \
                          "how we behave in the world.")
sentences

tokens = word_tokenize(sentences[2])
tokens
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Part of speech tagging

Apart from grammar relations, every word in a sentence is also associated with a part-of-speech (POS) tag (nouns, verbs, adjectives, adverbs, etc.). The POS tags define the usage and function of a word in the sentence. Here is a list of all possible POS tags as defined by the University of Pennsylvania. The following code uses NLTK to perform POS-tagging annotation on input text. (NLTK provides several implementations; the default is the perceptron tagger.)
from nltk import pos_tag

# this is a Classifier: given a token, assign a class.
# pos_tag is already defined in the library; we can also train our own.
tags = pos_tag(tokens)

text = "I am using Data Science Experience at Florida Blue for Natural Language Processing"
tokens = word_tokenize(text)
print pos_tag(tokens)

# Let's apply this to our sample text from our website.
tags
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Part of Speech tagging is used for many important purposes in NLP:

Word Sense Disambiguation

Some words have multiple meanings according to their usage. For example, in the two sentences below:

I. “Please book my flight for Delhi”
II. “I am going to read this book in the flight”

“Book” is used in different contexts, and the part-of-speech tags for the two cases differ. In sentence I, the word “book” is used as a verb, while in II it is used as a noun. (The Lesk algorithm is also used for similar purposes.)

Improving word-based features

A learning model could learn different contexts of a word when words are used as features; however, if the part-of-speech tag is linked with them, the context is preserved, making stronger features (a short sketch follows the next cell). For example:

Sentence – “book my flight, I will read this book”
Tokens – (“book”, 2), (“my”, 1), (“flight”, 1), (“I”, 1), (“will”, 1), (“read”, 1), (“this”, 1)
Tokens with POS – (“book_VB”, 1), (“my_PRP$”, 1), (“flight_NN”, 1), (“I_PRP”, 1), (“will_MD”, 1), (“read_VB”, 1), (“this_DT”, 1), (“book_NN”, 1)

Normalization and Lemmatization

POS tags are the basis of the lemmatization process for converting a word to its base form (lemma).

Efficient Stopword Removal

POS tags are also useful for efficient removal of stopwords. For example, some tags always mark the low-frequency / less important words of a language, such as (IN – “within”, “upon”, “except”), (CD – “one”, “two”, “hundred”), and (MD – “may”, “must”, etc.).

Word Senses

In linguistics, a word sense is one of the meanings of a word. Until now, we worked with tokens and POS. For instance, in "the man sat down on the bench near the river", the token [bench] could be a bench as an object constructed by humans where people sit, or the natural side where the river meets the land.

WordNet: a semantic graph for words. NLTK provides an interface to the API.

<img src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSFZ2l8g_3316qek21ZEIkTS0WIYs8-lfTvXtO3YGWHEGpdDiMG">

Let's see some functions to handle meanings in tokens. WordNet provides the concept of synsets as units of meaning for tokens.
from nltk.corpus import wordnet as wn  # loading wordnet module

wn.synsets('human')
wn.synsets('human')[0].definition
wn.synsets('human')[1].definition

human = wn.synsets('Human', pos=wn.NOUN)[0]
human
human.hypernyms()
human.hyponyms()

bike = wn.synsets('bicycle')[0]
bike
girl = wn.synsets('girl')[1]
girl

bike.wup_similarity(human)
girl.wup_similarity(human)
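Returning to the POS-augmented features sketched in the prose above ("book_VB" vs. "book_NN"): they are easy to build from pos_tag output. The tag-suffix naming here is our own convention for illustration, not an NLTK API:

from collections import Counter
from nltk import pos_tag, word_tokenize

sentence = "book my flight, I will read this book"
# append each token's POS tag so the verb and noun senses of "book" stay distinct features
pos_features = Counter('{0}_{1}'.format(word, tag)
                       for word, tag in pos_tag(word_tokenize(sentence)))
print(pos_features)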
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Chunks

Chunking is the process of collecting patterns of parts of speech together to represent some meaning: an analysis of a sentence which identifies the constituents (noun groups – "[The red tree] grows near the river" – verbs, verb groups, etc.). Our goal is to detect, in digital text, things like "where are different entities located" or "which person is employed by what organization". It is the way in which we extract structured data (entities and relations) from unstructured text.

<img width=580px height=150px src="http://www.nltk.org/images/chunk-treerep.png" >
from nltk import word_tokenize, pos_tag
from nltk.chunk import RegexpParser

chunker = RegexpParser(r'''
NP:
{<DT><NN.*><.*>*<NN.*>}
}<VB.*>{
''')
print tags
print chunker.parse(tags)
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Entity Recognition - Chunking

This is part of chunking: we look for chunks of parts of speech, with the goal of detecting entities: Person, Location, Time, etc. I will use the ne_chunk function here, but we can also train our own models or provide grammar rules as before.
from nltk.chunk import ne_chunk

sentence = "Daryl A. is the head of the coworking place Commoncode Corp. from where many people work in Melbourne, Australia."
pos_tags = pos_tag(word_tokenize(sentence))
pos_tags

from IPython.display import display
display(pos_tags)
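The cell above stops at the POS tags even though ne_chunk has been imported; the entity-chunking step itself would look like the following minimal sketch:

# ne_chunk consumes POS-tagged tokens and returns a tree whose subtrees
# are labeled with entity types such as PERSON, GPE, and ORGANIZATION
tree = ne_chunk(pos_tags)
print tree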
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
Sentiment Analysis

Is a phrase expressing a positive opinion? A negative one? How can we measure that? We will decompose sentences into their smaller units, tokens, and measure how they distribute over positive/negative sentences in the text.

<h2> Let's Classify Text! </h2>
# We will use movie reviews, already separated as positive and negative. Of special interest are bigrams
# that appear in positive or negative sentences (pairs of words (word1, word2) that appear consecutively).
from nltk.corpus import movie_reviews
import nltk.classify.util
from nltk.classify import NaiveBayesClassifier

# This is the function that, given a list of words, returns a dict {word: True}.
# These will be our features in the classifier.
def word_feats(words):
    return dict([(word, True) for word in words])

# neg_ids, pos_ids keep all the files for negative reviews and positive reviews respectively.
neg_ids = movie_reviews.fileids('neg')
pos_ids = movie_reviews.fileids('pos')

# So, let's take the positive/negative words, create the feature for each word,
# and store it in a negative/positive features list.
neg_feats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg') for f in neg_ids]
pos_feats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos') for f in pos_ids]

# Separating 3/4 of these featured words for training, 1/4 for test.
neg_len_train = len(neg_feats)*3/4
pos_len_train = len(pos_feats)*3/4
train_feats = neg_feats[:neg_len_train] + pos_feats[:pos_len_train]
test_feats = neg_feats[neg_len_train:] + pos_feats[pos_len_train:]

# training a NaiveBayes Classifier with our training featured words.
classifier = NaiveBayesClassifier.train(train_feats)

# Let's check accuracy.
print 'accuracy: ', nltk.classify.util.accuracy(classifier, test_feats)

# Let's see which words fit best in each class.
classifier.show_most_informative_features()

# SO WE TRAINED A CLASSIFIER FOR MOVIE REVIEWS. IT MEANS, FOR EVERY WORD THAT WE TRAINED,
# IT KNOWS THAT THE WORD IS PROBABLE IN NEGATIVES WITH PROB P(W|NEG) AND IN POSITIVES WITH P(W|POS)
# (BAYES THEOREM).
sentence = "Florida Blue, movie is incredible!"
tokens = [word for word in word_tokenize(sentence)]
pos_tags = [pos for pos in pos_tag(tokens)]
pos_tags

feats = word_feats([word for (word, _) in pos_tags])
feats
classifier.classify(feats)

sentence = "This is a miserable experience, and I just want to leave and be a lumberjack."
tokens = [word for word in word_tokenize(sentence)]
pos_tags = [pos for pos in pos_tag(tokens) if pos[1] == 'JJ']
pos_tags

feats = word_feats([word for (word, _) in pos_tags])
feats
classifier.classify(feats)
NLP Overview.ipynb
krondor/nlp-dsx-pot
gpl-3.0
<span id="Shallow_Water_Bathymetry_import">Import Dependencies and Connect to the Data Cube &#9652;</span>
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)

import datacube
dc = datacube.Datacube()
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_plat_prod">Choose the Platform and Product &#9652;</span>
# List the products available on this server/device
dc.list_products()

# Specify the desired platform and product
platform = 'LANDSAT_8'
product = 'ls8_level1_usgs'
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_define_extents">Define the Extents of the Analysis &#9652;</span> Region bounds
# East Coast of Australia
lat_subsect = (-31.7, -32.2)
lon_subsect = (152.4, 152.9)

print('''
Latitude:\t{0}\t\tRange:\t{2} degrees
Longitude:\t{1}\t\tRange:\t{3} degrees
'''.format(lat_subsect, lon_subsect,
           max(lat_subsect) - min(lat_subsect),
           max(lon_subsect) - min(lon_subsect)))
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Display
from utils.data_cube_utilities.dc_display_map import display_map

display_map(latitude=lat_subsect, longitude=lon_subsect)
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_retrieve_data">Retrieve the Data &#9652;</span> Load and integrate datasets
%%time
ds = dc.load(lat=lat_subsect,
             lon=lon_subsect,
             platform=platform,
             product=product,
             output_crs="EPSG:32756",
             measurements=["red", "blue", "green", "nir", "quality"],
             resolution=(-30, 30))
ds
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Preview the Data
from utils.data_cube_utilities.dc_rgb import rgb

rgb(ds.isel(time=6), x_coord='x', y_coord='y')
plt.show()
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_bathymetry">Calculate the Bathymetry and NDWI Indices &#9652;</span> The bathymetry function is located at the top of this notebook.
# Create Bathymetry Index column
ds["bathymetry"] = bathymetry_index(ds)

from utils.data_cube_utilities.dc_water_classifier import NDWI

# (green - nir) / (green + nir)
ds["ndwi"] = NDWI(ds, band_pair=1)
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<hr> Preview Combined Dataset
ds
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_export_unmasked">Export Unmasked GeoTIFF &#9652;</span>
import os
from utils.data_cube_utilities.import_export import export_xarray_to_multiple_geotiffs

unmasked_dir = "geotiffs/landsat8/unmasked"
if not os.path.exists(unmasked_dir):
    os.makedirs(unmasked_dir)

export_xarray_to_multiple_geotiffs(ds, unmasked_dir + "/unmasked.tif",
                                   x_coord='x', y_coord='y')
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_mask">Mask the Dataset Using the Quality Column and NDWI &#9652;</span>
# preview values
np.unique(ds["quality"])
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Use NDWI to Mask Out Land

The threshold can be tuned, if need be, to better fit the RGB image above. <br> Unfortunately, our existing WOFS algorithm is designed to work with Surface Reflectance (SR) and does not work with this data yet, but with a few modifications it could be made to do so. We will approximate the WOFS mask with NDWI for now.
# Tunable threshold for masking out the land
threshold = .05

water = (ds.ndwi > threshold).values

# preview one time slice to determine the effectiveness of the NDWI masking
rgb(ds.where(water).isel(time=6), x_coord='x', y_coord='y')
plt.show()

from utils.data_cube_utilities.dc_mosaic import ls8_oli_unpack_qa

clear_xarray = ls8_oli_unpack_qa(ds.quality, "clear")
full_mask = np.logical_and(clear_xarray, water)
ds = ds.where(full_mask)
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_vis_func">Create a Visualization Function &#9652;</span> Visualize the distribution of the bathymetry index for the water pixels
plt.figure(figsize=[15, 5])

# Visualize the distribution of the remaining data
sns.boxplot(ds['bathymetry'])
plt.show()
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<b>Interpretation: </b> We can see that most of the values fall within a very short range. We can scale our plot's cmap limits to fit the specific quantile ranges for the bathymetry index so we can achieve better contrast from our plots.
# set the quantile range in either direction from the median value
def get_quantile_range(col, quantile_range=.25):
    low = ds[col].quantile(.5 - quantile_range, ["time", "y", "x"]).values
    high = ds[col].quantile(.5 + quantile_range, ["time", "y", "x"]).values
    return low, high

# Custom function for a color mapping object
from matplotlib.colors import LinearSegmentedColormap

def custom_color_mapper(name="custom", val_range=(1.96, 1.96), colors="RdGnBu"):
    custom_cmap = LinearSegmentedColormap.from_list(name, colors=colors)
    min, max = val_range
    step = max / 10.0
    Z = [min, 0], [0, max]
    levels = np.arange(min, max + step, step)
    cust_map = plt.contourf(Z, 100, cmap=custom_cmap)
    plt.clf()
    return cust_map.cmap

def mean_value_visual(ds, col, figsize=[15, 15], cmap="GnBu", low=None, high=None):
    if low is None:
        low = np.min(ds[col]).values
    if high is None:
        high = np.max(ds[col]).values
    ds.reduce(np.nanmean, dim=["time"])[col].plot.imshow(figsize=figsize, cmap=cmap,
                                                         vmin=low, vmax=high)
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_bath_vis">Visualize the Bathymetry &#9652;</span>
mean_value_visual(ds, "bathymetry", cmap="GnBu")
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<span id="Shallow_Water_Bathymetry_bath_vis_better">Visualize the Bathymetry With Adjusted Contrast &#9652;</span> If we clamp the range of the plot using different quantile ranges we can see relative differences in higher contrast.
# create range using the 10th and 90th quantiles
low, high = get_quantile_range("bathymetry", .40)
custom = custom_color_mapper(val_range=(low, high),
                             colors=["darkred", "red", "orange", "yellow",
                                     "green", "blue", "darkblue", "black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)

# create range using the 5th and 95th quantiles
low, high = get_quantile_range("bathymetry", .45)
custom = custom_color_mapper(val_range=(low, high),
                             colors=["darkred", "red", "orange", "yellow",
                                     "green", "blue", "darkblue", "black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)

# create range using the 2nd and 98th quantiles
low, high = get_quantile_range("bathymetry", .48)
custom = custom_color_mapper(val_range=(low, high),
                             colors=["darkred", "red", "orange", "yellow",
                                     "green", "blue", "darkblue", "black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)

# create range using the 1st and 99th quantiles
low, high = get_quantile_range("bathymetry", .49)
custom = custom_color_mapper(val_range=(low, high),
                             colors=["darkred", "red", "orange", "yellow",
                                     "green", "blue", "darkblue", "black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
notebooks/water/bathymetry/Shallow_Water_Bathymetry.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Read dog breed information
import re
from collections import defaultdict

import pandas as pd
# snowball (an NLTK SnowballStemmer), word_tokenize, and stopword_set are
# assumed to be set up earlier in the notebook.

# source1: web
df_breed = pd.read_csv("breed_nick_names.txt", names=['breed_info'])
df_breed.head()
df_breed.shape
breeds_info = df_breed['breed_info'].values
breed_dict = {}
for breed in breeds_info:
    temp = breed.lower()
    temp = re.findall(r'\d.\s+(\D*)', temp)[0]
    temp = temp.strip().split('=')
    breed_dict[temp[0].strip()] = temp[1].strip()
# 1. different nick names are separated with 'or'
for k, v in breed_dict.iteritems():
    breed_dict[k] = map(lambda x: x.strip(), v.split(' or '))
# 2. get n-gram and stemmed words
breed_dict
for k, v in breed_dict.iteritems():
    breed_dict[k] = set(v)
    breed_dict[k].add(k)
    temp_set = set([snowball.stem(x) for x in breed_dict[k]])
    breed_dict[k] = breed_dict[k] | temp_set
    for word in word_tokenize(k):
        breed_dict[k].add(word)
        breed_dict[k].add(snowball.stem(word))
    breed_dict[k] = breed_dict[k] - {'dog', 'dogs'} - stopword_set
print breed_dict['chow chows']
breed_lookup = defaultdict(set)
for k, v in breed_dict.iteritems():
    for word in v:
        breed_lookup[word].add(k)
breed_lookup.keys()
del_list = ['toy', 'blue', 'great', 'duck', 'coat', 'wire', 'st.', 'white', 'grey',
            'black', 'old', 'smooth', 'west', 'soft']
for w in del_list:
    breed_lookup.pop(w, None)
print len(breed_lookup)
# polish the look-up tables based on 52 base classes
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",
                            names=['breed'])
base_breeds = breed_classes['breed'].values
not_found_breed = []
for breed in base_breeds:
    if breed not in breed_dict:
        if breed in breed_lookup:
            if len(breed_lookup[breed]) == 1:
                breed_in_dict = list(breed_lookup[breed])[0]
                breed_dict[breed] = breed_dict[breed_in_dict]
                breed_dict[breed].add(breed_in_dict)
                breed_dict.pop(breed_in_dict, None)
                print "replace the key {} with {}".format(breed_in_dict, breed)
            else:
                print breed, breed_lookup[breed]
        elif snowball.stem(breed) in breed_lookup:
            breed_stem = snowball.stem(breed)
            if len(breed_lookup[breed_stem]) == 1:
                breed_in_dict = list(breed_lookup[breed_stem])[0]
                breed_dict[breed] = breed_dict[breed_in_dict]
                breed_dict[breed].add(breed_in_dict)
                breed_dict.pop(breed_in_dict, None)
            else:
                print breed, breed_stem, breed_lookup[breed_stem]
        else:
            not_found_breed.append(breed)
print "not found these breeds:"
print not_found_breed
NLP/nlp eda.ipynb
sadahanu/Capstone
mit
For poodles: create a separate look-up item for each type of poodle and delete the original 'poodle' entry. In addition:

- add 'american bulldog' and merge it with 'bulldog'
- add 'bull mastiff' = 'bullmastiffs'
- add 'english springer' = 'english springer spaniels'
- 'german shorthaired' = 'german shorthaired pointers'
- 'german shepherd' = 'german shepherd dog'
- add 'basset' and merge ['basset hound', 'petits bassets griffons vendeens']
import string
import time

import numpy as np

# poodles:
for breed in not_found_breed:
    if breed.endswith('poodle') or breed == 'wheaten terrier':
        breed_dict[breed] = set(breed.split()) | set([snowball.stem(w) for w in breed.split()])
breed_dict.pop('poodle', None)
# bullmastiff
if 'bull mastiff' in not_found_breed:
    breed_dict['bull mastiff'] = breed_dict['bullmastiffs']
    breed_dict.pop('bullmastiffs', None)
# english springer
if 'english springer' in not_found_breed:
    breed_dict['english springer'] = breed_dict['english springer spaniels']
    breed_dict.pop('english springer spaniels', None)
# german shorthaired, german shepherd and 'american bulldog'
name = 'american bulldog'
if name in not_found_breed:
    breed_dict[name] = breed_dict['bulldog'] | set(name.split()) | set([snowball.stem(w) for w in name.split()])
    breed_dict.pop('bulldog', None)
name = 'german shorthaired'
if name in not_found_breed:
    breed_dict[name] = breed_dict['german shorthaired pointers']
    breed_dict.pop('german shorthaired pointers', None)
name = 'german shepherd'
if name in not_found_breed:
    breed_dict[name] = breed_dict['german shepherd dog']
    breed_dict.pop('german shepherd dog', None)
# basset dog
breed_dict['basset'] = breed_dict['basset hound'] | breed_dict['petits bassets griffons vendeens']
'basset' in base_breeds
sorted(breed_dict.keys())

ind = np.random.randint(df_reviews.shape[0])
text_review = df_reviews['review_content'][ind].lower()
print text_review
puncs = string.punctuation
reduced_set = set([snowball.stem(x) for x in
                   (set(filter(lambda x: x not in puncs, word_tokenize(text_review))) - stopword_set)])
po_breeds = []
for w in reduced_set:
    if w in breed_lookup:
        po_breeds.extend(breed_lookup[w])
print po_breeds
df_reviews.columns

def getReviewBreed(text):
    ntext = text.decode('utf-8')
    reduced_set = set([snowball.stem(x) for x in
                       (set(filter(lambda x: x not in string.punctuation,
                                   word_tokenize(ntext.lower()))) - stopword_set)])
    po_breeds = []
    for w in reduced_set:
        if w in breed_lookup:
            po_breeds.extend(breed_lookup[w])
    return po_breeds

def getBreedTable(df):
    N = df.shape[0]
    breed = []
    review_id = []
    toy_id = []
    for ind, row in df.iterrows():
        breed.append(getReviewBreed(row['review_content']))
        review_id.append(row['review_id'])
        toy_id.append(row['toy_id'])
    return pd.DataFrame({'review_id': review_id, 'toy_id': toy_id, 'breed_extract': breed})

test_df = df_reviews.copy()
start_time = time.time()
new_df = getBreedTable(test_df)
print time.time() - start_time
new_df.head()
df_reviews['review_content'][1]
new_df.shape
df_extract = pd.merge(df_reviews, new_df, on=['review_id', 'toy_id'])
df_extract.pop('review_content')
print df_extract.shape
df_extract.head()
# ind = np.random.randint(df_extract.shape[0])
ind = 4
print df_reviews['review_content'][ind]
print df_extract['breed_extract'][ind]
df_extract['breed_extract'] = df_extract['breed_extract'].apply(lambda row: ','.join(row))
df_extract.head()
np.sum(df_extract['breed_extract'].isnull())
breed_lookup['poodle']
NLP/nlp eda.ipynb
sadahanu/Capstone
mit
Save intermediate dictionaries and results
import pickle

import boto3

save_data = df_extract.to_csv(index=False)
s3_res = boto3.resource('s3')
s3_res.Bucket('dogfaces').put_object(Key='reviews/extract_breed_review.csv', Body=save_data)
# save breed_lookup and breed_dict
with open('breed_lookup.pickle', 'wb') as handle:
    pickle.dump(breed_lookup, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('breed_dict.pickle', 'wb') as handle:
    pickle.dump(breed_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# source 2: classified dog names
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",
                            names=['breed'])
breed_classes.head()
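# A minimal reload sketch (not part of the original notebook): the pickled
# look-up tables saved above can be restored in a later session like this.
with open('breed_lookup.pickle', 'rb') as handle:
    breed_lookup = pickle.load(handle)
with open('breed_dict.pickle', 'rb') as handle:
    breed_dict = pickle.load(handle)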
NLP/nlp eda.ipynb
sadahanu/Capstone
mit
Get breed scores
# generate a data frame: review_id, toy_id, breed
len(df_extract['review_id'].unique())
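# A hedged sketch (not in the original notebook) of one way to start scoring
# breeds: count how often each extracted breed is mentioned across reviews.
# Assumes df_extract['breed_extract'] holds comma-joined breed names as built above.
from collections import Counter
breed_counts = Counter()
for extracted in df_extract['breed_extract']:
    if extracted:
        breed_counts.update(extracted.split(','))
print breed_counts.most_common(10)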
NLP/nlp eda.ipynb
sadahanu/Capstone
mit
.. _tut_inverse_mne_dspm:

Source localization with MNE/dSPM/sLORETA
The aim of this tutorial is to teach you how to compute and apply a linear inverse method such as MNE/dSPM/sLORETA on evoked/raw/epochs data.
import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne.minimum_norm import (make_inverse_operator, apply_inverse,
                              write_inverse_operator)
0.12/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Process MEG data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'

raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')

event_id = dict(aud_r=1)  # event trigger and conditions
tmin = -0.2  # start of each epoch (200ms before the trigger)
tmax = 0.5  # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
                       exclude='bads')
baseline = (None, 0)  # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)

epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
                    baseline=baseline, reject=reject)
0.12/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute regularized noise covariance
For more details see :ref:`tut_compute_covariance`.
noise_cov = mne.compute_covariance(
    epochs, tmax=0., method=['shrunk', 'empirical'])

fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
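# The original tutorial computes the inverse solution (the `stc` object used
# below) between this cell and the peak visualization; that cell is missing
# from this excerpt. A minimal sketch, assuming the standard sample-dataset
# forward solution path and the tutorial's dSPM settings:
evoked = epochs.average()
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
inverse_operator = make_inverse_operator(evoked.info, fwd, noise_cov,
                                         loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inverse_operator, lambda2=1. / 3. ** 2,
                    method='dSPM')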
0.12/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Here we use the peak getter to move the visualization to the time point of the peak, and draw a marker at the maximum peak vertex.
vertno_max, time_idx = stc.get_peak(hemi='rh', time_as_index=True)

subjects_dir = data_path + '/subjects'
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir)
brain.set_data_time_index(time_idx)

brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
               scale_factor=0.6)
brain.scale_data_colormap(fmin=8, fmid=12, fmax=15, transparent=True)
brain.show_view('lateral')
0.12/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Morph data to average brain
stc_fsaverage = stc.morph(subject_to='fsaverage', subjects_dir=subjects_dir)

brain_fsaverage = stc_fsaverage.plot(surface='inflated', hemi='rh',
                                     subjects_dir=subjects_dir)
brain_fsaverage.set_data_time_index(time_idx)
brain_fsaverage.scale_data_colormap(fmin=8, fmid=12, fmax=15, transparent=True)
brain_fsaverage.show_view('lateral')
0.12/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Vertex AI Training and Serving with TFX and Vertex Pipelines
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
<td><a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_vertex_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Run in Google Cloud Vertex AI Workbench</a></td>
</table></div>

This notebook-based tutorial will create and run a TFX pipeline which trains an ML model using the Vertex AI Training service and publishes it to Vertex AI for serving.

This notebook is based on the TFX pipeline we built in Simple TFX Pipeline for Vertex Pipelines Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.

You can train models on Vertex AI using AutoML, or use custom training. In custom training, you can select many different machine types to power your training jobs, enable distributed training, use hyperparameter tuning, and accelerate with GPUs. You can also serve prediction requests by deploying the trained model to Vertex AI Models and creating an endpoint.

In this tutorial, we will use Vertex AI Training with custom jobs to train a model in a TFX pipeline. We will also deploy the model to serve prediction requests using Vertex AI.

This notebook is intended to be run on Google Colab or on AI Platform Notebooks. If you are not using one of these, you can simply click the "Run in Google Colab" button above.

Set up
If you have completed Simple TFX Pipeline for Vertex Pipelines Tutorial, you will have a working GCP project and a GCS bucket, which is all we need for this tutorial. Please read the preliminary tutorial first if you missed it.

Install python packages
We will install required Python packages including TFX and KFP to author ML pipelines and submit jobs to Vertex Pipelines.
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
Write a pipeline definition
We will define a function to create a TFX pipeline. It has the same three components as in Simple TFX Pipeline Tutorial, but we use a Trainer and Pusher component from the GCP extension module.

tfx.extensions.google_cloud_ai_platform.Trainer behaves like a regular Trainer, but it just moves the computation for the model training to the cloud. It launches a custom job in the Vertex AI Training service, and the trainer component in the orchestration system just waits until the Vertex AI Training job completes.

tfx.extensions.google_cloud_ai_platform.Pusher creates a Vertex AI Model and a Vertex AI Endpoint using the trained model.
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                     module_file: str, endpoint_name: str, project_id: str,
                     region: str, use_gpu: bool) -> tfx.dsl.Pipeline:
  """Implements the penguin pipeline with TFX."""
  # Brings data into the pipeline or otherwise joins/converts training data.
  example_gen = tfx.components.CsvExampleGen(input_base=data_root)

  # NEW: Configuration for Vertex AI Training.
  # This dictionary will be passed as `CustomJobSpec`.
  vertex_job_spec = {
      'project': project_id,
      'worker_pool_specs': [{
          'machine_spec': {
              'machine_type': 'n1-standard-4',
          },
          'replica_count': 1,
          'container_spec': {
              'image_uri': 'gcr.io/tfx-oss-public/tfx:{}'.format(tfx.__version__),
          },
      }],
  }
  if use_gpu:
    # See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#acceleratortype
    # for available machine types.
    vertex_job_spec['worker_pool_specs'][0]['machine_spec'].update({
        'accelerator_type': 'NVIDIA_TESLA_K80',
        'accelerator_count': 1
    })

  # Trains a model using Vertex AI Training.
  # NEW: We need to specify a Trainer for GCP with related configs.
  trainer = tfx.extensions.google_cloud_ai_platform.Trainer(
      module_file=module_file,
      examples=example_gen.outputs['examples'],
      train_args=tfx.proto.TrainArgs(num_steps=100),
      eval_args=tfx.proto.EvalArgs(num_steps=5),
      custom_config={
          tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY: True,
          tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY: region,
          tfx.extensions.google_cloud_ai_platform.TRAINING_ARGS_KEY: vertex_job_spec,
          'use_gpu': use_gpu,
      })

  # NEW: Configuration for pusher.
  vertex_serving_spec = {
      'project_id': project_id,
      'endpoint_name': endpoint_name,
      # Remaining argument is passed to aiplatform.Model.deploy()
      # See https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#deploy_the_model
      # for the detail.
      #
      # Machine type is the compute resource to serve prediction requests.
      # See https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types
      # for available machine types and accelerators.
      'machine_type': 'n1-standard-4',
  }

  # Vertex AI provides pre-built containers with various configurations for
  # serving.
  # See https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
  # for available container images.
  serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest'
  if use_gpu:
    vertex_serving_spec.update({
        'accelerator_type': 'NVIDIA_TESLA_K80',
        'accelerator_count': 1
    })
    serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-6:latest'

  # NEW: Pushes the model to Vertex AI.
  pusher = tfx.extensions.google_cloud_ai_platform.Pusher(
      model=trainer.outputs['model'],
      custom_config={
          tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY: True,
          tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY: region,
          tfx.extensions.google_cloud_ai_platform.VERTEX_CONTAINER_IMAGE_URI_KEY: serving_image,
          tfx.extensions.google_cloud_ai_platform.SERVING_ARGS_KEY: vertex_serving_spec,
      })

  components = [
      example_gen,
      trainer,
      pusher,
  ]

  return tfx.dsl.Pipeline(
      pipeline_name=pipeline_name,
      pipeline_root=pipeline_root,
      components=components)
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
Run the pipeline on Vertex Pipelines
We will use Vertex Pipelines to run the pipeline as we did in Simple TFX Pipeline for Vertex Pipelines Tutorial.
# docs_infra: no_execute
import os

PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'

runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename=PIPELINE_DEFINITION_FILE)
_ = runner.run(
    _create_pipeline(
        pipeline_name=PIPELINE_NAME,
        pipeline_root=PIPELINE_ROOT,
        data_root=DATA_ROOT,
        module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
        endpoint_name=ENDPOINT_NAME,
        project_id=GOOGLE_CLOUD_PROJECT,
        region=GOOGLE_CLOUD_REGION,
        # We will use CPUs only for now.
        use_gpu=False))
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
The generated definition file can be submitted using the Google Cloud aiplatform client in the google-cloud-aiplatform package.
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)

aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)

job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
                                display_name=PIPELINE_NAME)
job.submit()
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
Now you can visit the link in the output above or visit 'Vertex AI > Pipelines' in Google Cloud Console to see the progress.

Test with a prediction request
Once the pipeline completes, you will find a deployed model at one of the endpoints in 'Vertex AI > Endpoints'. We need to know the id of that endpoint to send a prediction request to it. This is different from the endpoint name we entered above. You can find the id on the Endpoints page in Google Cloud Console; it looks like a very long number.

Set ENDPOINT_ID below before running it.
ENDPOINT_ID = ''     # <--- ENTER THIS
if not ENDPOINT_ID:
    from absl import logging
    logging.error('Please set the endpoint id.')
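# Optional (an assumption, not part of the original tutorial): the endpoint id
# can also be listed from the gcloud CLI instead of the Cloud Console.
!gcloud ai endpoints list --project={GOOGLE_CLOUD_PROJECT} --region={GOOGLE_CLOUD_REGION}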
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
We use the same aiplatform client to send a request to the endpoint. We will send a prediction request for penguin species classification. The input is the four features that we used, and the model will return three values, because our model outputs one value for each species. For example, the following input has its largest value at index '2' and will print '2'.
# docs_infra: no_execute
import numpy as np

# The AI Platform services require regional API endpoints.
client_options = {
    'api_endpoint': GOOGLE_CLOUD_REGION + '-aiplatform.googleapis.com'
}
# Initialize client that will be used to create and send requests.
client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)

# Set data values for the prediction request.
# Our model expects 4 feature inputs and produces 3 output values for each
# species. Note that the output is logit value rather than probabilities.
# See the model code to understand input / output structure.
instances = [{
    'culmen_length_mm': [0.71],
    'culmen_depth_mm': [0.38],
    'flipper_length_mm': [0.98],
    'body_mass_g': [0.78],
}]

endpoint = client.endpoint_path(
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_REGION,
    endpoint=ENDPOINT_ID,
)
# Send a prediction request and get response.
response = client.predict(endpoint=endpoint, instances=instances)

# Uses argmax to find the index of the maximum value.
print('species:', np.argmax(response.predictions[0]))
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
For detailed information about online prediction, please visit the Endpoints page in Google Cloud Console, where you can find a guide on sending sample requests and links to more resources.

Run the pipeline using a GPU
Vertex AI supports training using various machine types, including support for GPUs. See the Machine spec reference for available options.

We already defined our pipeline to support GPU training. All we need to do is set the use_gpu flag to True. Then the pipeline will be created with a machine spec including one NVIDIA_TESLA_K80, and our model training code will use tf.distribute.MirroredStrategy.

Note that the use_gpu flag is not part of the Vertex or TFX API. It is just used to control the training code in this tutorial.
# docs_infra: no_execute
runner.run(
    _create_pipeline(
        pipeline_name=PIPELINE_NAME,
        pipeline_root=PIPELINE_ROOT,
        data_root=DATA_ROOT,
        module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
        endpoint_name=ENDPOINT_NAME,
        project_id=GOOGLE_CLOUD_PROJECT,
        region=GOOGLE_CLOUD_REGION,
        # Updated: Use GPUs. We will use a NVIDIA_TESLA_K80 and
        # the model code will use tf.distribute.MirroredStrategy.
        use_gpu=True))

job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
                                display_name=PIPELINE_NAME)
job.submit()
site/en-snapshot/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
tensorflow/docs-l10n
apache-2.0
Tiramisu / Camvid
Tiramisu is a fully-convolutional neural network based on the DenseNet architecture. It was designed as a state-of-the-art approach to semantic image segmentation. We're going to use the same dataset its authors did, CamVid.

CamVid is a dataset of images from a video. It has ~600 images, so it's quite small, and since the frames come from a video, the information content of the dataset is also small. We're going to train this Tiramisu network from scratch to segment the CamVid dataset. This seems extremely ambitious!

Setup
Modify the following to point to the appropriate paths on your machine.
PATH = '/data/datasets/SegNet-Tutorial/CamVid/'
frames_path = PATH + 'all/'
labels_path = PATH + 'allannot/'

PATH = '/data/datasets/camvid/'
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
The images in CamVid come with labels defining the segments of the input image. We're going to load both the images and the labels.
frames_path = PATH + '701_StillsRaw_full/'
labels_path = PATH + 'LabeledApproved_full/'

fnames = glob.glob(frames_path + '*.png')
lnames = [labels_path + os.path.basename(fn)[:-4] + '_L.png' for fn in fnames]

img_sz = (480, 360)
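# The cells that load the images into the `imgs` array (and the label images
# into `labels`) are not shown in this excerpt. A minimal sketch -- an
# assumption, not the notebook's exact code -- using PIL would be:
import numpy as np
from PIL import Image

imgs = np.stack([np.array(Image.open(fn).resize(img_sz), dtype=np.float32) / 255.
                 for fn in fnames])
labels = np.stack([np.array(Image.open(fn).resize(img_sz)) for fn in lnames])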
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Standardize
imgs -= 0.4
imgs /= 0.3

n, r, c, ch = imgs.shape
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Convert labels
The following loads, parses, and converts the segment labels we need for targets. In particular, we're looking to make the segmented targets into integers for classification purposes.
def parse_code(l):
    a, b = l.strip().split("\t")
    return tuple(int(o) for o in a.split(' ')), b

label_codes, label_names = zip(*[
    parse_code(l) for l in open(PATH + "label_colors.txt")])
label_codes, label_names = list(label_codes), list(label_names)
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Each segment / category is indicated by a particular color. The following maps each unique pixel to its category.
list(zip(label_codes,label_names))[:5]
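# The conversion from RGB label images to integer class maps (`labels_int`,
# used in the test-set split below) is not shown in this excerpt. A hedged
# sketch of one way to do it: map each color code to its index, sending any
# unknown colors to a sentinel class.
import numpy as np

code2id = {code: i for i, code in enumerate(label_codes)}
failed_code = len(label_codes) + 1

def conv_one_label(lab):
    res = np.zeros(lab.shape[:2], 'uint8')
    for r in range(lab.shape[0]):
        for c in range(lab.shape[1]):
            res[r, c] = code2id.get(tuple(lab[r, c]), failed_code)
    return res

labels_int = np.stack([conv_one_label(lab) for lab in labels])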
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Test set
Next we load the test-set file names and split the images and labels into training and test sets.
fn_test = set(o.strip() for o in open(PATH + 'test.txt', 'r'))
is_test = np.array([o.split('/')[-1] in fn_test for o in fnames])

trn = imgs[is_test == False]
trn_labels = labels_int[is_test == False]
test = imgs[is_test]
test_labels = labels_int[is_test]
trn.shape, test_labels.shape

rnd_trn = len(trn_labels)
rnd_test = len(test_labels)
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
The Tiramisu
Now that we've prepared our data, we're ready to introduce the Tiramisu. Conventional CNNs for image segmentation are very similar to the kind we looked at for style transfer. Recall that it involved convolutions with downsampling (stride 2, pooling) to increase the receptive field, followed by upsampling with deconvolutions until reaching the original size.

Tiramisu uses a similar down/up architecture, but with some key differences. As opposed to normal convolutional layers, Tiramisu uses the DenseNet method of concatenating inputs to outputs. Tiramisu also uses skip connections from the downsampling branch to the upsampling branch. Specifically, the skip connection works by concatenating the output of a dense block in the downsampling branch onto the input of the corresponding dense block in the upsampling branch. By "corresponding", we mean the downsample/upsample dense blocks that are equidistant from the input / output respectively.

One way of interpreting this architecture is that by re-introducing earlier stages of the network to later stages, we're forcing the network to "remember" the finer details of the input image.

The pieces
This should all be familiar.
def relu(x): return Activation('relu')(x)
def dropout(x, p): return Dropout(p)(x) if p else x
def bn(x): return BatchNormalization(mode=2, axis=-1)(x)
def relu_bn(x): return relu(bn(x))
def concat(xs): return merge(xs, mode='concat', concat_axis=-1)

def conv(x, nf, sz, wd, p, stride=1):
    x = Convolution2D(nf, sz, sz, init='he_uniform', border_mode='same',
                      subsample=(stride, stride), W_regularizer=l2(wd))(x)
    return dropout(x, p)

def conv_relu_bn(x, nf, sz=3, wd=0, p=0, stride=1):
    return conv(relu_bn(x), nf, sz, wd=wd, p=p, stride=stride)
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
This is the downsampling transition. In the original paper, downsampling consists of a 1x1 convolution followed by max pooling. However, we've found a 1x1 convolution with stride 2 to give better results.
def transition_dn(x, p, wd):
#     x = conv_relu_bn(x, x.get_shape().as_list()[-1], sz=1, p=p, wd=wd)
#     return MaxPooling2D(strides=(2, 2))(x)
    return conv_relu_bn(x, x.get_shape().as_list()[-1], sz=1, p=p, wd=wd, stride=2)
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
This is the upsampling transition. We use a deconvolution layer.
def transition_up(added, wd=0):
    x = concat(added)
    _, r, c, ch = x.get_shape().as_list()
    return Deconvolution2D(ch, 3, 3, (None, r*2, c*2, ch), init='he_uniform',
                           border_mode='same', subsample=(2, 2),
                           W_regularizer=l2(wd))(x)
#     x = UpSampling2D()(x)
#     return conv(x, ch, 2, wd, 0)
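# The dense_block, down_path, up_path, and reverse helpers used by
# create_tiramisu below are defined elsewhere in the original notebook. A
# minimal sketch consistent with how they are called (an assumption, not
# necessarily the notebook's exact code):
def reverse(a): return list(reversed(a))

def dense_block(n, x, growth_rate, p, wd):
    # Each layer's output is concatenated onto the block's running input
    # (the DenseNet pattern); `added` collects the per-layer outputs.
    added = []
    for i in range(n):
        b = conv_relu_bn(x, growth_rate, p=p, wd=wd)
        x = concat([x, b])
        added.append(b)
    return x, added

def down_path(x, nb_layers, growth_rate, p, wd):
    # Record each dense block's output as a skip connection, then downsample.
    skips = []
    for i, n in enumerate(nb_layers):
        x, added = dense_block(n, x, growth_rate, p, wd)
        skips.append(x)
        x = transition_dn(x, p=p, wd=wd)
    return skips, added

def up_path(added, skips, nb_layers, growth_rate, p, wd):
    # Upsample, concatenate the matching skip connection, then run a dense block.
    for i, n in enumerate(nb_layers):
        x = transition_up(added, wd)
        x = concat([x, skips[i]])
        x, added = dense_block(n, x, growth_rate, p, wd)
    return x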
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Finally we put together the entire network.
def create_tiramisu(nb_classes, img_input, nb_dense_block=6,
                    growth_rate=16, nb_filter=48, nb_layers_per_block=5,
                    p=None, wd=0):
    if type(nb_layers_per_block) is list or type(nb_layers_per_block) is tuple:
        nb_layers = list(nb_layers_per_block)
    else:
        nb_layers = [nb_layers_per_block] * nb_dense_block

    x = conv(img_input, nb_filter, 3, wd, 0)
    skips, added = down_path(x, nb_layers, growth_rate, p, wd)
    x = up_path(added, reverse(skips[:-1]), reverse(nb_layers[:-1]),
                growth_rate, p, wd)

    x = conv(x, nb_classes, 1, wd, 0)
    _, r, c, f = x.get_shape().as_list()
    x = Reshape((-1, nb_classes))(x)
    return Activation('softmax')(x)
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Train
Now we can train. These architectures can take quite some time to train.
limit_mem()

input_shape = (224, 224, 3)
img_input = Input(shape=input_shape)
x = create_tiramisu(12, img_input, nb_layers_per_block=[4,5,7,10,12,15], p=0.2, wd=1e-4)
model = Model(img_input, x)

gen = segm_generator(trn, trn_labels, 3, train=True)
gen_test = segm_generator(test, test_labels, 3, train=False)

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=keras.optimizers.RMSprop(1e-3), metrics=["accuracy"])

# The original notebook swaps optimizers and learning rates between runs.
model.optimizer = keras.optimizers.RMSprop(1e-3, decay=1-0.99995)
model.optimizer = keras.optimizers.RMSprop(1e-3)
K.set_value(model.optimizer.lr, 1e-3)
model.fit_generator(gen, rnd_trn, 100, verbose=2,
                    validation_data=gen_test, nb_val_samples=rnd_test)
model.optimizer = keras.optimizers.RMSprop(3e-4, decay=1-0.9995)
model.fit_generator(gen, rnd_trn, 500, verbose=2,
                    validation_data=gen_test, nb_val_samples=rnd_test)
model.optimizer = keras.optimizers.RMSprop(2e-4, decay=1-0.9995)
model.fit_generator(gen, rnd_trn, 500, verbose=2,
                    validation_data=gen_test, nb_val_samples=rnd_test)
model.optimizer = keras.optimizers.RMSprop(1e-5, decay=1-0.9995)
model.fit_generator(gen, rnd_trn, 500, verbose=2,
                    validation_data=gen_test, nb_val_samples=rnd_test)

# Fine-tune at a larger size.
lrg_sz = (352, 480)
gen = segm_generator(trn, trn_labels, 2, out_sz=lrg_sz, train=True)
gen_test = segm_generator(test, test_labels, 2, out_sz=lrg_sz, train=False)

lrg_shape = lrg_sz + (3,)
lrg_input = Input(shape=lrg_shape)
x = create_tiramisu(12, lrg_input, nb_layers_per_block=[4,5,7,10,12,15], p=0.2, wd=1e-4)
lrg_model = Model(lrg_input, x)
lrg_model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=keras.optimizers.RMSprop(1e-4), metrics=["accuracy"])
lrg_model.fit_generator(gen, rnd_trn, 100, verbose=2,
                        validation_data=gen_test, nb_val_samples=rnd_test)
lrg_model.fit_generator(gen, rnd_trn, 100, verbose=2,
                        validation_data=gen_test, nb_val_samples=rnd_test)
lrg_model.optimizer = keras.optimizers.RMSprop(1e-5)
lrg_model.fit_generator(gen, rnd_trn, 2, verbose=2,
                        validation_data=gen_test, nb_val_samples=rnd_test)

lrg_model.save_weights(PATH + 'results/8758.h5')
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
View results
Let's take a look at some of the results we achieved.
colors = [(128, 128, 128), (128, 0, 0), (192, 192, 128), (128, 64, 128),
          (0, 0, 192), (128, 128, 0), (192, 128, 128), (64, 64, 128),
          (64, 0, 128), (64, 64, 0), (0, 128, 192), (0, 0, 0)]
names = ['sky', 'building', 'column_pole', 'road', 'sidewalk', 'tree',
         'sign', 'fence', 'car', 'pedestrian', 'bicyclist', 'void']

gen_test = segm_generator(test, test_labels, 2, out_sz=lrg_sz, train=False)
preds = lrg_model.predict_generator(gen_test, rnd_test)
preds = np.argmax(preds, axis=-1)
preds = preds.reshape((-1, 352, 480))

target = test_labels.reshape((233, 360, 480))[:, 8:]
(target == preds).mean()

non_void = target != 11
(target[non_void] == preds[non_void]).mean()

idx = 1
p = lrg_model.predict(np.expand_dims(test[idx, 8:], 0))
p = np.argmax(p[0], -1).reshape(352, 480)
pred = color_label(p)
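# color_label, used above, is defined elsewhere in the original notebook. A
# minimal sketch (an assumption): paint each integer class with its RGB color
# from the `colors` list so the prediction can be displayed as an image.
def color_label(a):
    out = np.zeros(a.shape + (3,), 'uint8')
    for i, c in enumerate(colors):
        out[a == i] = c
    return out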
data_science/courses/deeplearning2/tiramisu-keras.ipynb
jmhsi/justin_tinker
apache-2.0
Can you plot the distribution of protein sizes in the data/proteome.faa file?
%matplotlib inline

import matplotlib.pyplot as plt
from Bio import SeqIO

sizes = []
for seq in SeqIO.parse('../data/proteome.faa', 'fasta'):
    sizes.append(len(seq))

plt.hist(sizes, bins=100)
plt.xlabel('protein size')
plt.ylabel('count');
notebooks/[6a]-Exercises-solutions.ipynb
mgalardini/2017_python_course
gpl-2.0
Can you count the number of CDS sequences in the data/ecoli.gbk file?
counter = 0
for seq in SeqIO.parse('../data/ecoli.gbk', 'genbank'):
    for feat in seq.features:
        if feat.type == 'CDS':
            counter += 1
counter
notebooks/[6a]-Exercises-solutions.ipynb
mgalardini/2017_python_course
gpl-2.0
Can you compute the average root-to-tip distance in the data/tree.nwk file?
from Bio import Phylo

tree = Phylo.read('../data/tree.nwk', 'newick')

distances = []
for node in tree.get_terminals():
    distances.append(tree.distance(tree.root, node))

sum(distances) / float(len(distances))
notebooks/[6a]-Exercises-solutions.ipynb
mgalardini/2017_python_course
gpl-2.0
Networkx
Can you read the yeast protein interaction network in data/yeast.gml? Can you plot the degree distribution of the proteins contained in the graph?
import networkx as nx

graph = nx.read_gml('../data/yeast.gml')

# Note: nx.degree(graph) returns a dict in networkx 1.x; on networkx >= 2.0
# use dict(graph.degree()).values() instead.
plt.hist(nx.degree(graph).values(), bins=20)
plt.xlabel('degree')
plt.ylabel('count');
notebooks/[6a]-Exercises-solutions.ipynb
mgalardini/2017_python_course
gpl-2.0