Plotting Bridge 1 with GNUPLOT
```gnuplot
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
# Configure the output for gnuplot
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;

%%gnuplot
set output "db1_xl1_vs_xr1.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_1}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "XL1_XR1.dat" with pm3d

%%gnuplot
set output "db1_xl2_vs_xr2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_2}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "XL2_XR2.dat" with pm3d

%%gnuplot
set output "db1_xm3_vs_xl1.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_1}"
set title "Free Energy Surface Intramolecular DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "XM3_XL1.dat" with pm3d

%%gnuplot
set output "db1_xm3_vs_xl2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_2}"
set title "Free Energy Surface Intramolecular DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "XM3_XL2.dat" with pm3d

%%gnuplot
set output "db1_xm3_vs_xr2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "XM3_XR2.dat" with pm3d

%%gnuplot
set output "db1_xm3_vs_xr1.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "XM3_XR1.dat" with pm3d
```
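The plotting cells above differ only in the output file, the axis labels, and the data file. As a sketch of how the repeated gnuplot commands could be generated from a single template (the `build_script` helper and the `PLOTS` table are illustrative, not part of the notebook):

```python
# Hypothetical template for the repeated gnuplot commands; only the
# output name, the axis labels, and the data file change between cells.
GNUPLOT_TEMPLATE = """set output "{png}"
set palette model RGB
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{{/=30 {xlab}}}"
set ylabel "{{/=30 {ylab}}}"
set title "Free Energy Surface Intramolecular DB1"
splot "{dat}" with pm3d
"""

# (output png, x label, y label, data file) for two of the six plots
PLOTS = [
    ("db1_xl1_vs_xr1.png", "X@^L_1", "X@^R_1", "XL1_XR1.dat"),
    ("db1_xm3_vs_xl1.png", "X@^M_3", "X@^L_1", "XM3_XL1.dat"),
]

def build_script(png, xlab, ylab, dat):
    """Fill the template for one plot."""
    return GNUPLOT_TEMPLATE.format(png=png, xlab=xlab, ylab=ylab, dat=dat)

scripts = [build_script(*args) for args in PLOTS]
# Each string in `scripts` could be fed to a %%gnuplot cell, or written
# to a .gp file and run with the gnuplot command-line tool.
```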
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Computing the intramolecular free energy for Bridge 2
```python
if (revisa2 > 0):
    # Load the DB2_X1L values
    data_db2_x1l = np.loadtxt('dihed_db2_x1l.dat', comments=['#', '@'])
    # Load the DB2_X1R values
    data_db2_x1r = np.loadtxt('dihed_db2_x1r.dat', comments=['#', '@'])
    # Get the minimum and maximum of DB2_X1L
    min_db2_x1l = np.amin(data_db2_x1l[:, 1])
    max_db2_x1l = np.amax(data_db2_x1l[:, 1])
    print('Minimum DB2_X1L =>', min_db2_x1l)
    print('Maximum DB2_X1L =>', max_db2_x1l)
    # Get the minimum and maximum of DB2_X1R
    min_db2_x1r = np.amin(data_db2_x1r[:, 1])
    max_db2_x1r = np.amax(data_db2_x1r[:, 1])
    print('Minimum DB2_X1R =>', min_db2_x1r)
    print('Maximum DB2_X1R =>', max_db2_x1r)
    # Create the input files for the script
    np.savetxt('db2_x1l.dat', data_db2_x1l[:, 1], fmt='%1.14f')
    np.savetxt('db2_x1r.dat', data_db2_x1r[:, 1], fmt='%1.14f')
    !paste db2_x1l.dat db2_x1r.dat > DB2_x1_lr.dat
    # Run the FES script
    !python generateFES.py DB2_x1_lr.dat $min_db2_x1l $max_db2_x1l $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB2_XL1_XR1.dat

    ###################################################################
    # Load the DB2_X2L values
    data_db2_x2l = np.loadtxt('dihed_db2_x2l.dat', comments=['#', '@'])
    # Load the DB2_X2R values
    data_db2_x2r = np.loadtxt('dihed_db2_x2r.dat', comments=['#', '@'])
    # Get the minimum and maximum of DB2_X2L
    min_db2_x2l = np.amin(data_db2_x2l[:, 1])
    max_db2_x2l = np.amax(data_db2_x2l[:, 1])
    print('Minimum DB2_X2L =>', min_db2_x2l)
    print('Maximum DB2_X2L =>', max_db2_x2l)
    # Get the minimum and maximum of DB2_X2R
    min_db2_x2r = np.amin(data_db2_x2r[:, 1])
    max_db2_x2r = np.amax(data_db2_x2r[:, 1])
    print('Minimum DB2_X2R =>', min_db2_x2r)
    print('Maximum DB2_X2R =>', max_db2_x2r)
    # Create the input files for the script
    np.savetxt('db2_x2l.dat', data_db2_x2l[:, 1], fmt='%1.14f')
    np.savetxt('db2_x2r.dat', data_db2_x2r[:, 1], fmt='%1.14f')
    !paste db2_x2l.dat db2_x2r.dat > DB2_x2_lr.dat
    # Run the FES script
    !python generateFES.py DB2_x2_lr.dat $min_db2_x2l $max_db2_x2l $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB2_XL2_XR2.dat

    ######################################################################################
    # Load the DB2_X3M values
    data_db2_x3m = np.loadtxt('dihed_db2_x3m.dat', comments=['#', '@'])
    # Get the minimum and maximum of DB2_X3M
    min_db2_x3m = np.amin(data_db2_x3m[:, 1])
    max_db2_x3m = np.amax(data_db2_x3m[:, 1])
    print('Minimum DB2_X3M =>', min_db2_x3m)
    print('Maximum DB2_X3M =>', max_db2_x3m)
    print('Minimum DB2_X1R =>', min_db2_x1r)
    print('Maximum DB2_X1R =>', max_db2_x1r)
    print('Minimum DB2_X2R =>', min_db2_x2r)
    print('Maximum DB2_X2R =>', max_db2_x2r)
    print('Minimum DB2_X1L =>', min_db2_x1l)
    print('Maximum DB2_X1L =>', max_db2_x1l)
    print('Minimum DB2_X2L =>', min_db2_x2l)
    print('Maximum DB2_X2L =>', max_db2_x2l)
    # Create the input files for the script
    np.savetxt('db2_x3m.dat', data_db2_x3m[:, 1], fmt='%1.14f')
    !paste db2_x3m.dat db2_x1r.dat > DB2_x3m_x1r.dat
    !paste db2_x3m.dat db2_x2r.dat > DB2_x3m_x2r.dat
    !paste db2_x3m.dat db2_x1l.dat > DB2_x3m_x1l.dat
    !paste db2_x3m.dat db2_x2l.dat > DB2_x3m_x2l.dat
    # Run the FES script
    !python generateFES.py DB2_x3m_x1r.dat $min_db2_x3m $max_db2_x3m $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB2_XM3_XR1.dat
    !python generateFES.py DB2_x3m_x2r.dat $min_db2_x3m $max_db2_x3m $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB2_XM3_XR2.dat
    !python generateFES.py DB2_x3m_x1l.dat $min_db2_x3m $max_db2_x3m $min_db2_x1l $max_db2_x1l 200 200 $temperatura DB2_XM3_XL1.dat
    !python generateFES.py DB2_x3m_x2l.dat $min_db2_x3m $max_db2_x3m $min_db2_x2l $max_db2_x2l 200 200 $temperatura DB2_XM3_XL2.dat
```
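The notebook delegates the surface construction to an external `generateFES.py`, passing the min/max of each variable, the bin counts, and the temperature. Its internals are not shown here; a common construction, sketched below under that assumption, histograms the two collective variables and takes F = -kB·T·ln P, shifted so the global minimum is zero (units, binning, and empty-bin handling in the real script may differ):

```python
import numpy as np

def fes_2d(x, y, bins=200, T=300.0):
    """Sketch of a 2D free-energy surface from two collective variables:
    F(x, y) = -kB * T * ln P(x, y), shifted so the minimum is zero.
    (Assumed form; the actual generateFES.py may differ in details.)"""
    kB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)
    # Normalized 2D histogram as an estimate of the probability density
    H, xedges, yedges = np.histogram2d(x, y, bins=bins, density=True)
    with np.errstate(divide='ignore'):
        F = -kB * T * np.log(H)        # empty bins become +inf
    F -= F[np.isfinite(F)].min()       # set the global minimum to 0
    return F, xedges, yedges

# Example with synthetic dihedral-like data (not the notebook's files):
rng = np.random.default_rng(0)
x = rng.normal(-60.0, 10.0, 10000)
y = rng.normal(60.0, 10.0, 10000)
F, xe, ye = fes_2d(x, y, bins=50)
```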
Plotting Bridge 2 with GNUPLOT
```gnuplot
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
# Configure the output for gnuplot
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;

%%gnuplot
set output "db2_xl1_vs_xr1.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_1}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB2_XL1_XR1.dat" with pm3d

%%gnuplot
set output "db2_xl2_vs_xr2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^L_2}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB2_XL2_XR2.dat" with pm3d

%%gnuplot
set output "db2_xm3_vs_xl1.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_1}"
set title "Free Energy Surface Intramolecular DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB2_XM3_XL1.dat" with pm3d

%%gnuplot
set output "db2_xm3_vs_xl2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^L_2}"
set title "Free Energy Surface Intramolecular DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB2_XM3_XL2.dat" with pm3d

%%gnuplot
set output "db2_xm3_vs_xr2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_2}"
set title "Free Energy Surface Intramolecular DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB2_XM3_XR2.dat" with pm3d

%%gnuplot
set output "db2_xm3_vs_xr1.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 X@^M_3}"
set ylabel "{/=30 X@^R_1}"
set title "Free Energy Surface Intramolecular DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB2_XM3_XR1.dat" with pm3d
```
Intermolecular Free Energy
```python
############################################
#### Intermolecular DB1-DB2 - X1L
############################################
# Build the DB1-DB2-X1L pair
!paste db1_x1l.dat db2_x1l.dat > DB1_DB2_x1l.dat
print('Minimum DB1-X1L =>', min_x1l)
print('Maximum DB1-X1L =>', max_x1l)
print('Minimum DB2-X1L =>', min_db2_x1l)
print('Maximum DB2-X1L =>', max_db2_x1l)
# Run the FES script
!python generateFES.py DB1_DB2_x1l.dat $min_x1l $max_x1l $min_db2_x1l $max_db2_x1l 200 200 $temperatura DB1_DB2_X1L.dat

############################################
#### Intermolecular DB1-DB2 - X2L
############################################
# Build the DB1-DB2-X2L pair
!paste db1_x2l.dat db2_x2l.dat > DB1_DB2_x2l.dat
print('Minimum DB1-X2L =>', min_x2l)
print('Maximum DB1-X2L =>', max_x2l)
print('Minimum DB2-X2L =>', min_db2_x2l)
print('Maximum DB2-X2L =>', max_db2_x2l)
# Run the FES script
!python generateFES.py DB1_DB2_x2l.dat $min_x2l $max_x2l $min_db2_x2l $max_db2_x2l 200 200 $temperatura DB1_DB2_X2L.dat

############################################
#### Intermolecular DB1-DB2 - X3M
############################################
# Build the DB1-DB2-X3M pair
!paste db1_x3m.dat db2_x3m.dat > DB1_DB2_x3m.dat
print('Minimum DB1-X3M =>', min_x3m)
print('Maximum DB1-X3M =>', max_x3m)
print('Minimum DB2-X3M =>', min_db2_x3m)
print('Maximum DB2-X3M =>', max_db2_x3m)
# Run the FES script
!python generateFES.py DB1_DB2_x3m.dat $min_x3m $max_x3m $min_db2_x3m $max_db2_x3m 200 200 $temperatura DB1_DB2_X3M.dat

############################################
#### Intermolecular DB1-DB2 - X2R
############################################
# Build the DB1-DB2-X2R pair
!paste db1_x2r.dat db2_x2r.dat > DB1_DB2_x2r.dat
print('Minimum DB1-X2R =>', min_x2r)
print('Maximum DB1-X2R =>', max_x2r)
print('Minimum DB2-X2R =>', min_db2_x2r)
print('Maximum DB2-X2R =>', max_db2_x2r)
# Run the FES script
!python generateFES.py DB1_DB2_x2r.dat $min_x2r $max_x2r $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB1_DB2_X2R.dat

############################################
#### Intermolecular DB1-DB2 - X1R
############################################
# Build the DB1-DB2-X1R pair
!paste db1_x1r.dat db2_x1r.dat > DB1_DB2_x1r.dat
print('Minimum DB1-X1R =>', min_x1r)
print('Maximum DB1-X1R =>', max_x1r)
print('Minimum DB2-X1R =>', min_db2_x1r)
print('Maximum DB2-X1R =>', max_db2_x1r)
# Run the FES script
!python generateFES.py DB1_DB2_x1r.dat $min_x1r $max_x1r $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB1_DB2_X1R.dat
```
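Throughout these cells, pairs of single-column `.dat` files are joined with the shell `paste` command. For reference, a pure-NumPy equivalent that avoids the shell round-trip (the file names below are illustrative, not the notebook's):

```python
import numpy as np

# Two one-column files, as produced by the earlier np.savetxt calls
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
np.savetxt('col_a.dat', a, fmt='%1.14f')
np.savetxt('col_b.dat', b, fmt='%1.14f')

# Equivalent of `!paste col_a.dat col_b.dat > paired.dat`:
pair = np.column_stack((np.loadtxt('col_a.dat'), np.loadtxt('col_b.dat')))
np.savetxt('paired.dat', pair, fmt='%1.14f')
```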
Plotting the intermolecular free energy for bridges DB1 and DB2
```gnuplot
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
# Configure the output for gnuplot
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;

%%gnuplot
set output "DB1_DB2_X1L.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^L_1}"
set ylabel "{/=30 DB2 X@^L_1}"
set title "Free Energy Surface Intermolecular DB1-DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB1_DB2_X1L.dat" with pm3d

%%gnuplot
set output "DB1_DB2_X2L.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^L_2}"
set ylabel "{/=30 DB2 X@^L_2}"
set title "Free Energy Surface Intermolecular DB1-DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB1_DB2_X2L.dat" with pm3d

%%gnuplot
set output "DB1_DB2_X3M.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^M_3}"
set ylabel "{/=30 DB2 X@^M_3}"
set title "Free Energy Surface Intermolecular DB1-DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB1_DB2_X3M.dat" with pm3d

%%gnuplot
set output "DB1_DB2_X2R.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set xyplane 0
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^R_2}"
set ylabel "{/=30 DB2 X@^R_2}"
set title "Free Energy Surface Intermolecular DB1-DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "DB1_DB2_X2R.dat" with pm3d

%%gnuplot
set output "DB1_DB2_X1R.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 DB1 X@^R_1}"
set ylabel "{/=30 DB2 X@^R_1}"
set title "Free Energy Surface Intermolecular DB1-DB2"
## The fixed color-scale range is active for this plot
set cbrange [8:10]
splot "DB1_DB2_X1R.dat" with pm3d
```
Computing the dihedral histograms
```python
hist_escale_y = []
fig = pl.figure(figsize=(25, 8))
fig.subplots_adjust(hspace=.4, wspace=.3)
# subplots_adjust(left, bottom, right, top, wspace, hspace):
# left/right/bottom/top set the sides of the subplot grid;
# wspace/hspace set the blank space between subplots.

# Load the DB1 values
data_h_db1 = [np.loadtxt(f, comments=['#', '@']) for f in
              ('db1_x1l.dat', 'db1_x2l.dat', 'db1_x3m.dat', 'db1_x2r.dat', 'db1_x1r.dat')]
# Load the DB2 values
data_h_db2 = [np.loadtxt(f, comments=['#', '@']) for f in
              ('db2_x1l.dat', 'db2_x2l.dat', 'db2_x3m.dat', 'db2_x2r.dat', 'db2_x1r.dat')]

subs = []
for i, (d1, d2) in enumerate(zip(data_h_db1, data_h_db2), start=1):
    sub = fig.add_subplot(2, 5, i)
    # Thicken the frame and format the y-axis tick values
    for axis in ['top', 'bottom', 'left', 'right']:
        sub.spines[axis].set_linewidth(3)
    sub.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
    sub.set_xlabel('Angle (Degree)', fontsize=10)
    sub.set_ylabel('P(Angle)')
    # Note: `normed=True` was removed in matplotlib 3.x; `density=True`
    # gives the same normalization.
    sub.hist(d1, 100, density=True, color='black', histtype='step', linewidth=3)
    sub.hist(d2, 100, density=True, color='red', histtype='step', linewidth=3)
    x1, x2, y1, y2 = sub.axis()
    hist_escale_y.append(y2)
    subs.append(sub)

# Rescale every y axis to the largest maximum
hist_escale_y.sort(reverse=True)
for sub in subs:
    x1, x2, y1, _ = sub.axis()
    sub.axis((x1, x2, y1, hist_escale_y[0]))
```
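The histogram cells rely on probability-density normalization (matplotlib's old `normed=True`, replaced by `density=True` in matplotlib 3.x). The normalization can be checked directly with NumPy: a density histogram integrates to 1 over the sampled range. A sketch with synthetic angle-like data, not the notebook's dihedrals:

```python
import numpy as np

# Synthetic "angles"; any 1-D sample works the same way
data = np.random.default_rng(1).normal(0.0, 30.0, 5000)

# density=True divides the counts by (n_samples * bin_width), so the
# resulting step curve is a probability density
counts, edges = np.histogram(data, bins=100, density=True)
widths = np.diff(edges)
total = float(np.sum(counts * widths))  # integrates to ~1.0
```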
Intermolecular bridge bond angles
```python
### Create the directory for the INTERMOLECULAR bridge bond analysis
ruta_bonds_puentes = nuevaruta + '/bonds_puentes'
print(ruta_bonds_puentes)
if not os.path.exists(ruta_bonds_puentes):
    os.makedirs(ruta_bonds_puentes)
    print('Created the path ===>', ruta_bonds_puentes)
else:
    print("The path " + ruta_bonds_puentes + " already exists..!!!")
print('Changing directory to ....', ruta_bonds_puentes)
os.chdir(ruta_bonds_puentes)
```
Copying the FES generation script
```python
print('\nCopying generateFES.py to ' + ruta_bonds_puentes)
source_file = ruta_scripts + '/free_energy/generateFES.py'
dest_file = ruta_bonds_puentes + '/generateFES.py'
shutil.copy(source_file, dest_file)
# Make the script executable
!chmod +x generateFES.py
```
Generating the Tcl files for the angle calculations
```python
psf = ruta_old_traj + '/' + psf_file
dcd = ruta_old_traj + '/' + dcd_file
print('Bridge DB1 =>', DB1_N)
print('Bridge DB1 =>', DB1_i)
print('Bridge DB2 =>', DB2_N)
print('Bridge DB2 =>', DB2_i)
puente = 2
if (int(puente) == 2):
    # Create the script for the DB1 left angle
    b1 = open('bond_DB1_left.tcl', 'w')
    print(b1)
    b1.write('set psfFile ' + psf + ' \n')
    b1.write('set dcdFile ' + dcd + ' \n')
    b1.write('\nmol load psf $psfFile dcd $dcdFile\n')
    b1.write('set outfile [open bond_db1_left.dat w]\n')
    b1.write('set nf [molinfo top get numframes]\n')
    b1.write(' \n')
    b1.write('set selatoms1 [[atomselect top "protein and chain A and ' + DB1_i[1] + '"] get index]\n')
    b1.write('set selatoms2 [[atomselect top "protein and chain A and ' + DB1_i[2] + '"] get index]\n')
    b1.write('set selatoms3 [[atomselect top "protein and chain A and ' + DB1_i[3] + '"] get index]\n')
    b1.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
    b1.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
    b1.write('    set x [measure angle $angle frame $i]\n')
    b1.write('    set time [expr {$i +1}]\n')
    b1.write('    puts $outfile "$time $x"\n')
    b1.write('}\n')
    b1.close()

    # Create the script for the DB1 right angle
    b2 = open('bond_DB1_right.tcl', 'w')
    print(b2)
    b2.write('set psfFile ' + psf + ' \n')
    b2.write('set dcdFile ' + dcd + ' \n')
    b2.write('\nmol load psf $psfFile dcd $dcdFile\n')
    b2.write('set outfile [open bond_db1_right.dat w]\n')
    b2.write('set nf [molinfo top get numframes]\n')
    b2.write(' \n')
    b2.write('set selatoms1 [[atomselect top "protein and chain A and ' + DB1_i[4] + '"] get index]\n')
    b2.write('set selatoms2 [[atomselect top "protein and chain A and ' + DB1_i[5] + '"] get index]\n')
    b2.write('set selatoms3 [[atomselect top "protein and chain A and ' + DB1_i[6] + '"] get index]\n')
    b2.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
    b2.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
    b2.write('    set x [measure angle $angle frame $i]\n')
    b2.write('    set time [expr {$i +1}]\n')
    b2.write('    puts $outfile "$time $x"\n')
    b2.write('}\n')
    b2.close()

    # Create the script for the DB2 left angle
    b3 = open('bond_DB2_left.tcl', 'w')
    print(b3)
    b3.write('set psfFile ' + psf + ' \n')
    b3.write('set dcdFile ' + dcd + ' \n')
    b3.write('\nmol load psf $psfFile dcd $dcdFile\n')
    b3.write('set outfile [open bond_db2_left.dat w]\n')
    b3.write('set nf [molinfo top get numframes]\n')
    b3.write(' \n')
    b3.write('set selatoms1 [[atomselect top "protein and chain A and ' + DB2_i[1] + '"] get index]\n')
    b3.write('set selatoms2 [[atomselect top "protein and chain A and ' + DB2_i[2] + '"] get index]\n')
    b3.write('set selatoms3 [[atomselect top "protein and chain A and ' + DB2_i[3] + '"] get index]\n')
    b3.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
    b3.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
    b3.write('    set x [measure angle $angle frame $i]\n')
    b3.write('    set time [expr {$i +1}]\n')
    b3.write('    puts $outfile "$time $x"\n')
    b3.write('}\n')
    b3.close()

    # Create the script for the DB2 right angle
    b4 = open('bond_DB2_right.tcl', 'w')
    print(b4)
    b4.write('set psfFile ' + psf + ' \n')
    b4.write('set dcdFile ' + dcd + ' \n')
    b4.write('\nmol load psf $psfFile dcd $dcdFile\n')
    b4.write('set outfile [open bond_db2_right.dat w]\n')
    b4.write('set nf [molinfo top get numframes]\n')
    b4.write(' \n')
    b4.write('set selatoms1 [[atomselect top "protein and chain A and ' + DB2_i[4] + '"] get index]\n')
    b4.write('set selatoms2 [[atomselect top "protein and chain A and ' + DB2_i[5] + '"] get index]\n')
    b4.write('set selatoms3 [[atomselect top "protein and chain A and ' + DB2_i[6] + '"] get index]\n')
    b4.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\n')
    b4.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
    b4.write('    set x [measure angle $angle frame $i]\n')
    b4.write('    set time [expr {$i +1}]\n')
    b4.write('    puts $outfile "$time $x"\n')
    b4.write('}\n')
    b4.close()
```
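The four Tcl writers differ only in their file names and atom selections. A sketch of a single helper that produces the same script text (the function name and the way selections are passed are assumptions for illustration, not the notebook's API):

```python
# Hypothetical refactor: generate one VMD angle-measurement Tcl script.
# `psf`, `dcd`, and the selection strings come from earlier cells.
def write_angle_tcl(fname, outdat, psf, dcd, sel1, sel2, sel3):
    lines = [
        'set psfFile %s ' % psf,
        'set dcdFile %s ' % dcd,
        '',
        'mol load psf $psfFile dcd $dcdFile',
        'set outfile [open %s w]' % outdat,
        'set nf [molinfo top get numframes]',
        'set selatoms1 [[atomselect top "protein and chain A and %s"] get index]' % sel1,
        'set selatoms2 [[atomselect top "protein and chain A and %s"] get index]' % sel2,
        'set selatoms3 [[atomselect top "protein and chain A and %s"] get index]' % sel3,
        'set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]',
        'for {set i 0} {$i < $nf} {incr i 1} {',
        '    set x [measure angle $angle frame $i]',
        '    set time [expr {$i +1}]',
        '    puts $outfile "$time $x"',
        '}',
    ]
    with open(fname, 'w') as fh:
        fh.write('\n'.join(lines) + '\n')

# e.g. write_angle_tcl('bond_DB1_left.tcl', 'bond_db1_left.dat',
#                      psf, dcd, DB1_i[1], DB1_i[2], DB1_i[3])
```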
Running the generated Tcl files with VMD
```python
# Compute the DB1 left angle with VMD
!vmd -dispdev text < bond_DB1_left.tcl
# Compute the DB1 right angle with VMD
!vmd -dispdev text < bond_DB1_right.tcl
# Compute the DB2 left angle with VMD
!vmd -dispdev text < bond_DB2_left.tcl
# Compute the DB2 right angle with VMD
!vmd -dispdev text < bond_DB2_right.tcl
```
Computing the free energy of the bridge bond angles
```python
# Load the DB1 left values
data_bond_db1_left = np.loadtxt('bond_db1_left.dat', comments=['#', '@'])
# Load the DB1 right values
data_bond_db1_right = np.loadtxt('bond_db1_right.dat', comments=['#', '@'])
# Get the minimum and maximum of DB1 Left
min_bond1_left = np.amin(data_bond_db1_left[:, 1])
max_bond1_left = np.amax(data_bond_db1_left[:, 1])
print('Minimum DB1_Left =>', min_bond1_left)
print('Maximum DB1_Left =>', max_bond1_left)
# Get the minimum and maximum of DB1 Right
min_bond1_right = np.amin(data_bond_db1_right[:, 1])
max_bond1_right = np.amax(data_bond_db1_right[:, 1])
print('Minimum DB1_Right =>', min_bond1_right)
print('Maximum DB1_Right =>', max_bond1_right)
# Create the input files for the script
np.savetxt('bond_DB1_left.dat', data_bond_db1_left[:, 1], fmt='%1.14f')
np.savetxt('bond_DB1_right.dat', data_bond_db1_right[:, 1], fmt='%1.14f')
!paste bond_DB1_left.dat bond_DB1_right.dat > angles_DB1.dat
# Run the FES script
!python generateFES.py angles_DB1.dat $min_bond1_left $max_bond1_left $min_bond1_right $max_bond1_right 200 200 $temperatura Angles_DB1.dat

###################################################################
# Load the DB2 left values
data_bond_db2_left = np.loadtxt('bond_db2_left.dat', comments=['#', '@'])
# Load the DB2 right values
data_bond_db2_right = np.loadtxt('bond_db2_right.dat', comments=['#', '@'])
# Get the minimum and maximum of DB2 Left
min_bond2_left = np.amin(data_bond_db2_left[:, 1])
max_bond2_left = np.amax(data_bond_db2_left[:, 1])
print('Minimum DB2_Left =>', min_bond2_left)
print('Maximum DB2_Left =>', max_bond2_left)
# Get the minimum and maximum of DB2 Right
min_bond2_right = np.amin(data_bond_db2_right[:, 1])
max_bond2_right = np.amax(data_bond_db2_right[:, 1])
print('Minimum DB2_Right =>', min_bond2_right)
print('Maximum DB2_Right =>', max_bond2_right)
# Create the input files for the script
np.savetxt('bond_DB2_left.dat', data_bond_db2_left[:, 1], fmt='%1.14f')
np.savetxt('bond_DB2_right.dat', data_bond_db2_right[:, 1], fmt='%1.14f')
!paste bond_DB2_left.dat bond_DB2_right.dat > angles_DB2.dat
# Run the FES script
!python generateFES.py angles_DB2.dat $min_bond2_left $max_bond2_left $min_bond2_right $max_bond2_right 200 200 $temperatura Angles_DB2.dat
```
Plotting the angle free energy with gnuplot
```gnuplot
# This loads the magics for gnuplot
%reload_ext gnuplot_kernel
# Configure the output for gnuplot
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;

%%gnuplot
set output "db1_a1_a2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 C@^1_{/Symbol a}}-{/=30 C@^1_{/Symbol b}}-{/=30 S@^1_{/Symbol g}}"
set ylabel "{/=30 C@^2_{/Symbol a}}-{/=30 C@^2_{/Symbol b}}-{/=30 S@^2_{/Symbol g}}"
set title "Free Energy Surface Angles DB1"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "Angles_DB1.dat" with pm3d

%%gnuplot
set output "db2_a1_a2.png"
set palette model RGB
set palette defined (0 '#000090', 1 '#000fff', 2 '#0090ff', 3 '#0fffee', 4 '#90ff70', 5 '#ffee00', 6 '#ff7000', 7 '#ee0000', 8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "{/=30 C@^1_{/Symbol a}}-{/=30 C@^1_{/Symbol b}}-{/=30 S@^1_{/Symbol g}}"
set ylabel "{/=30 C@^2_{/Symbol a}}-{/=30 C@^2_{/Symbol b}}-{/=30 S@^2_{/Symbol g}}"
set title "Free Energy Surface Angles DB2"
## Uncomment the following line and rerun if the color scale starts at 1
#set cbrange [8:10]
splot "Angles_DB2.dat" with pm3d
```
Computing the bond histograms
bonds_escale_y = []

# Load the DB1 values
data_h_db1_left  = np.loadtxt('bond_DB1_left.dat',  comments=['#', '@'])
data_h_db1_right = np.loadtxt('bond_DB1_right.dat', comments=['#', '@'])
# Load the DB2 values
data_h_db2_left  = np.loadtxt('bond_DB2_left.dat',  comments=['#', '@'])
data_h_db2_right = np.loadtxt('bond_DB2_right.dat', comments=['#', '@'])

# Thicken the frame of every subplot
figb = pl.figure(figsize=(12, 10), dpi=100, linewidth=3.0)
figb.subplots_adjust(hspace=.5)
for pos in (221, 222, 223, 224):
    ax = figb.add_subplot(pos)
    for axis in ['top', 'bottom', 'left', 'right']:
        ax.spines[axis].set_linewidth(4)

# Format the axis tick values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

bond1 = figb.add_subplot(221)
#bond1.set_title('CA1 - CB1 - SY1')
bond1.set_xlabel('Angle (Degree)')
bond1.set_ylabel('P (Angle)')
n, bins, rectangles = bond1.hist(data_h_db1_left, 100, density=True,
                                 color='black', histtype='step', linewidth=3)
x1, x2, y1, y2 = bond1.axis()
bonds_escale_y.append(y2)

bond2 = figb.add_subplot(222)
#bond2.set_title('CA2 - CB2 - SY2')
bond2.set_xlabel('Angle (Degree)')
bond2.set_ylabel('P (Angle)')
n, bins, rectangles = bond2.hist(data_h_db1_right, 100, density=True,
                                 color='black', histtype='step', linewidth=3)
x1, x2, y1, y2 = bond2.axis()
bonds_escale_y.append(y2)

bond3 = figb.add_subplot(223)
#bond3.set_title('CA1 - CB1 - SY1')
bond3.set_xlabel('Angle (Degree)')
bond3.set_ylabel('P (Angle)')
n, bins, rectangles = bond3.hist(data_h_db2_left, 100, density=True,
                                 color='red', histtype='step', linewidth=3)
x1, x2, y1, y2 = bond3.axis()
bonds_escale_y.append(y2)

bond4 = figb.add_subplot(224)
#bond4.set_title('CA2 - CB2 - SY2')
bond4.set_xlabel('Angle (Degree)')
bond4.set_ylabel('P (Angle)')
n, bins, rectangles = bond4.hist(data_h_db2_right, 100, density=True,
                                 color='red', histtype='step', linewidth=3)
x1, x2, y1, y2 = bond4.axis()
bonds_escale_y.append(y2)

# Rescale every y axis to the largest one
bonds_escale_y.sort(reverse=True)
bond1.axis((x1, x2, y1, bonds_escale_y[0]))
bond2.axis((x1, x2, y1, bonds_escale_y[0]))
bond3.axis((x1, x2, y1, bonds_escale_y[0]))
bond4.axis((x1, x2, y1, bonds_escale_y[0]))
Cluster generation. Create the new path for computing the clusters
### Create the directory for the cluster analysis
ruta_clusters = nuevaruta + '/clusters'
print(ruta_clusters)
if not os.path.exists(ruta_clusters):
    os.makedirs(ruta_clusters)
    print('Created the path ===>', ruta_clusters)
else:
    print("The path " + ruta_clusters + " already exists..!!!")
print('Moving to ....', ruta_clusters)
os.chdir(ruta_clusters)
Computing the clusters with option (1 = Protein)
!echo 1 1 | g_cluster -f ../output.xtc -s ../ionized.pdb -method gromos -cl out.pdb -g out.log -cutoff 0.2
Loading the clusters for visualization in VMD. The clusters are loaded into VMD and the coordinates of each one are saved using VMD
!vmd out.pdb
colorByRMSF Creating the folder for the output data
### Create the directory for the colorByRMSF analysis
ruta_colorByRMSF = nuevaruta + '/colorByRMSF'
print(ruta_colorByRMSF)
if not os.path.exists(ruta_colorByRMSF):
    os.makedirs(ruta_colorByRMSF)
    print('Created the path ===>', ruta_colorByRMSF)
else:
    print("The path " + ruta_colorByRMSF + " already exists..!!!")
print('Moving to ....', ruta_colorByRMSF)
os.chdir(ruta_colorByRMSF)
Copying the file to the data folder
print('\nCopying the colorByRMSF.vmd file to ' + ruta_colorByRMSF)
source_file = ruta_scripts + '/colorByRMSF/colorByRMSF.vmd'
dest_file = ruta_colorByRMSF + '/colorByRMSF.vmd'
shutil.copy(source_file, dest_file)
Computing the RMSF for the protein analysis with option (1) Protein
print('Running the rmsf analysis...')
!echo 1 | g_rmsf -f ../output.xtc -s ../ionized.pdb -oq bfac.pdb -o rmsf.xvg

# Compute the minimum and maximum of the RMSF
data_rmsf_gcolor = np.loadtxt('rmsf.xvg', comments=['#', '@'])
min_rmsf_gcolor = np.amin(data_rmsf_gcolor[:,1])
max_rmsf_gcolor = np.amax(data_rmsf_gcolor[:,1])
print('Minimum_RMSF=>', min_rmsf_gcolor)
print('Maximum_RMSF=>', max_rmsf_gcolor)
Load the colorByRMSF.vmd script in VMD

Start VMD, go to the menu Extensions -> Tk Console, then copy and run the following commands, substituting the Minimum_RMSF and Maximum_RMSF values computed in the previous cell:

source colorByRMSF.vmd
colorByRMSF top rmsf.xvg Minimum_RMSF Maximum_RMSF

COLOR SCALE
Go to the menu Extensions -> Visualization -> Color Scale Bar and change the following fields:
1. Enter the computed Minimum_RMSF value in the Minimum scale value field.
2. Enter the computed Maximum_RMSF value in the Maximum scale value field.
3. Select Black in the Color of labels field.

CHANGE THE BACKGROUND COLOR
Go to the menu Graphics -> Colors and make the following selections:
1. Under Categories select Display.
2. Under Names select Background.
3. Under Colors select 8 White.

REMOVE THE X,Y,Z AXES
Go to the menu Display -> Axes -> Off to remove the X,Y,Z axes.
# Load the pdb with VMD
!vmd ../ionized.pdb
Plotting B-Factors with Chimera
print('Moving to ....', ruta_colorByRMSF)
os.chdir(ruta_colorByRMSF)
Adapting the bfac.pdb file to extract the B-factors column
# Initialize vectors
rmsf = []
rmsf_x = []
rmsf_y = []

try:
    file_Bfactor = open('bfac.pdb')
    new_bfactor = open('bfac_new.pdb', 'w')
except IOError:
    print('Could not open the file, or it does not exist...')

for linea in file_Bfactor.readlines():
    fila = linea.strip()
    sl = fila.split()
    cadena = sl[0]
    if cadena == 'ATOM':
        if len(sl) == 12:
            new_bfactor.write(linea)
        else:
            x = linea[0:60]
            tempFactor = linea[60:66]
            y = fila[67:]
            enviar = x + ' ' + tempFactor + y
            new_bfactor.write(enviar + '\n')
    else:
        new_bfactor.write(linea)

new_bfactor.close()
file_Bfactor.close()
Checking the structure of the generated file. Verify that the fields are fully aligned within the field layout. Save and exit.
!gedit bfac_new.pdb
Generating the B-factors file for all atoms. STILL TO DO: ADAPT THIS TO TAKE THE MAXIMUM PER RESIDUE
# Initialize vector
bfactors_color = []

try:
    file_bfactor_color = open('bfac_new.pdb')
except IOError:
    print('Could not open the file, or it does not exist...')

for linea in file_bfactor_color.readlines():
    fila = linea.strip()
    sl = fila.split()
    if sl and sl[0] == 'ATOM':
        idresidue = fila[23:26]
        bfactor = fila[60:66]
        bfactors_color.append(idresidue + '\t' + bfactor + '\n')
file_bfactor_color.close()

# Write the BFACTOR.dat file
f = open('protein_bfactor.dat', 'w')
#f.write('@ title "B-factors" \n')
f.write('@ xaxis label " Residue" \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 5.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "B-factors (' + "\\" + 'cE' + "\\" + 'C)"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 5.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 7\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(bfactors_color))
f.close()

!xmgrace protein_bfactor.dat

# Load the image generated in xmgrace
Image(filename='protein_bfactor.png')

# Compute the minimum and maximum B-factor
data_bfactor_color = np.loadtxt('protein_bfactor.dat', comments=['#', '@'])
min_bfactor_color = np.amin(data_bfactor_color[:,1])
max_bfactor_color = np.amax(data_bfactor_color[:,1])
print('Minimum_B-Factor=>', min_bfactor_color)
print('Maximum_B-Factor=>', max_bfactor_color)
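The note above flags that the per-residue maximum is still missing. The helper below is a minimal sketch of that reduction; the function name and sample values are hypothetical, not from the notebook. It collapses per-atom (residue id, B-factor) pairs, as written into protein_bfactor.dat, down to one maximum per residue.

```python
def max_bfactor_per_residue(pairs):
    """Reduce (residue_id, bfactor) pairs to the maximum B-factor per residue."""
    best = {}
    for resid, bfac in pairs:
        # keep the largest B-factor seen so far for this residue
        if resid not in best or bfac > best[resid]:
            best[resid] = bfac
    return sorted(best.items())

# Hypothetical per-atom values for two residues:
rows = [(1, 10.2), (1, 12.5), (2, 8.1), (2, 7.9)]
print(max_bfactor_per_residue(rows))  # [(1, 12.5), (2, 8.1)]
```

The reduced list can be written out with the same xmgrace header as above, one `resid\tbfactor` row per residue.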
Loading the pdb file with Chimera to perform the B-factors coloring
!chimera bfac_new.pdb
Instructions for generating the B-factors image

SET THE VISUALIZATION MODE
1. From the main menu select Presets -> Interactive 2 (all atoms).
2. From the main menu select Actions -> Surface -> Show.
3. Adjust the size of the main window.
4. Adjust the size and position of the figure using CTRL + mouse wheel button.

COLOR THE B-FACTORS
Select Tools -> Depiction -> Render by Attribute. This opens the Render/Select by Attribute window.
1. In the Attribute field select bfactor.
2. In the histogram shown, select the white bar and change its color from white to yellow in the color field.
3. Press Apply to preview the coloring.
4. Press OK to finish.

WHITE BACKGROUND
To apply a white background:
1. From the main menu select Presets -> Publication_1.

SAVE THE IMAGE POSITION
Once the colored image has been obtained, adjust the view by rotating the image so as to leave room for the labels and the color bar. To save the final position of the image:
1. Select Favorites -> Command Line.
2. On the command line type savepos p1
If for some reason the position is changed, restore it as follows:
1. Select Favorites -> Command Line.
2. On the command line type reset p1

TITLE AND COLOR BAR
From the main menu select Tools -> Utilities -> Color Key. This opens the 2D Labels/Color Key window. To display the color bar: select the Color Key tab. Change white to yellow by clicking the corresponding button. Replace the word min with the computed minimum B-factor value. Replace the word max with the computed maximum B-factor value. Click with the mouse at the bottom of the image where the scale should appear. Drag the mouse to set the length and width of the scale.

To display the bar title: select the Labels tab. Click just above the color bar to place the title. Enter the bar title as B-Factors(Å). To adjust the font size, change the value in the Font size field.

To display the image title: select the Labels tab. Click at the top of the image to place the title. Enter the appropriate title. Adjust the font size in the Font size field. For a bold title, select bold in the Font style field.

Notes:
1. To reposition a label, stay in the Labels tab, hold the left mouse button on the label, and drag it to the desired position.
2. To remove a label, select it in the Labels field and untick the Show option.

SAVE THE IMAGE
From the main menu select File -> Save Image. This opens the Save image window; in the File name field enter the name image.png.

SAVE THE CHIMERA SESSION
From the main menu select File -> Save Session as. This opens the Choose Session Save File window; in the File name field enter a name with the .py extension.
## Load the generated image
print('Loading the file...')
Image(filename='image.png')
Plotting SASA
### Create the directory for the SASA analysis inside the VMD directory
print('Moving to', ruta)
os.chdir(ruta)

output_find = !find /usr/local -maxdepth 2 -type d -name vmd
print(output_find)
ruta_vmd = output_find[0]
print(ruta_vmd)

ruta_vmd_sasa = ruta_vmd + '/plugins/noarch/tcl/iceVMD1.0'
print(ruta_vmd_sasa)
if not os.path.exists(ruta_vmd_sasa):
    os.makedirs(ruta_vmd_sasa)
    print('Created the path ===>', ruta_vmd_sasa)
else:
    print("The path " + ruta_vmd_sasa + " already exists..!!!")
print('Moving to ....', ruta_vmd_sasa)
os.chdir(ruta_vmd_sasa)

# Copy the generated files to the VMD plugins folder
print('\nCopying the generated files to ' + ruta_vmd_sasa)
for name in ('colorplot.tcl', 'multiplot.tcl', 'pkgIndex.tcl', 'vmdICE.tcl'):
    source_file = ruta_scripts + '/iceVMD1.0/' + name
    dest_file = ruta_vmd_sasa + '/' + name
    shutil.copy(source_file, dest_file)

print('\nFiles copied.. Returning to... ' + nuevaruta)
os.chdir(nuevaruta)

### Create the directory for plotting the sasa
ruta_sasaColor = nuevaruta + '/sasaColor'
print(ruta_sasaColor)
if not os.path.exists(ruta_sasaColor):
    os.makedirs(ruta_sasaColor)
    print('Created the path ===>', ruta_sasaColor)
else:
    print("The path " + ruta_sasaColor + " already exists..!!!")
print('Moving to ....', ruta_sasaColor)
os.chdir(ruta_sasaColor)

print('\nCopying the configuration file to ' + ruta_sasaColor)
source_file = ruta_scripts + '/iceVMD1.0/vmdrc'
dest_file = ruta_sasaColor + '/.vmdrc'
shutil.copy(source_file, dest_file)
Coloring the SASA

Start VMD.

vmdICE window
Go to the menu Extensions -> Analysis -> vmdICE; a window appears in which the following fields must be changed:
1. To: enter the maximum frame range of the trajectory.
2. Selection for Calculation: add chain A and protein.
3. Press the SASA Single Atom button and wait for the calculation to finish.

CHANGE THE BACKGROUND COLOR
Go to the menu Graphics -> Colors and make the following selections:
1. Under Categories select Display.
2. Under Names select Background.
3. Under Colors select 8 White.

CHANGE THE SPHERE RESOLUTION
Go to the menu Graphics -> Representations and change the Sphere Resolution field to 50.

ROTATE THE IMAGE FOR A BETTER VIEW AND SAVE IT.
!vmd ../ionized.psf ../output.xtc
Restoring the default VMD configuration
# Remove the vmd files
!rm -r $ruta_vmd_sasa
Plotting the RGYRO
### Create the directory for plotting the rgyro
ruta_gyroColor = nuevaruta + '/color_rgyro'
print(ruta_gyroColor)
if not os.path.exists(ruta_gyroColor):
    os.makedirs(ruta_gyroColor)
    print('Created the path ===>', ruta_gyroColor)
else:
    print("The path " + ruta_gyroColor + " already exists..!!!")
print('Moving to ....', ruta_gyroColor)
os.chdir(ruta_gyroColor)

print('\nCopying the colorRgyro.tcl script to ' + ruta_gyroColor)
source_file = ruta_scripts + '/colorRgyro/colorRgyro.tcl'
dest_file = ruta_gyroColor + '/colorRgyro.tcl'
shutil.copy(source_file, dest_file)
Coloring the RGYRO

Start VMD, go to the menu Extensions -> Tk Console, then copy and run the following command:

source colorRgyro.tcl

CHANGE THE BACKGROUND COLOR
Go to the menu Graphics -> Colors and make the following selections:
1. Under Categories select Display.
2. Under Names select Background.
3. Under Colors select 8 White.

ROTATE THE IMAGE FOR A BETTER VIEW AND SAVE IT.
!vmd ../ionized.psf ../output.xtc
Import section specific modules:
import matplotlib.image as mpimg
from IPython.display import Image
from astropy.io import fits
import aplpy

# Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)

from IPython.display import HTML
HTML('../style/code_toggle.html')
6_Deconvolution/6_4_residuals_and_iqa.ipynb
landmanbester/fundamentals_of_interferometry
gpl-2.0
6.4 Residuals and Image Quality<a id='deconv:sec:iqa'></a>

Using CLEAN or other deconvolution methods produces 'nicer' images than the dirty image (except when deconvolution gets out of control). What it means for an image to be 'nicer' is not a well-defined metric; in fact, it is almost completely undefined. When we talk of the quality of an image in synthesis imaging we rarely use a quantitative metric, relying instead on the subjective opinion of the people looking at the image. This is, admittedly, not very scientific. The field of computer vision has been around for decades, and it has developed objective metrics and techniques for image quality assessment. At some point in the future these methods will need to be incorporated into radio astronomy. This is bound to happen, as we have moved to automated calibration, imaging, and deconvolution pipelines. We have two somewhat related questions to answer when reducing visibilities into a final image: When should you halt the deconvolution process? What makes a good image? In $\S$ 6.2 &#10142; we covered how we can separate out a sky model from noise using an iterative CLEAN deconvolution process. But we did not discuss at what point to halt the process; there is no well-defined point at which to stop. Typically an ad hoc decision is made to run deconvolution for a fixed number of iterations or down to a certain flux level. These halting limits are set by adjusting the CLEAN parameters until a 'nice' image is produced. Or, if the visibilities have been flux calibrated, which is possible with some arrays, the signal is fixed to some real flux scale. With knowledge of the array and observation a theoretical noise floor can be computed, and CLEAN can then be run down to a known noise level. One could imagine a more automated way to decide when to halt CLEAN, perhaps by keeping track of the iterations and deciding whether there is convergence.
As a thought experiment, consider an observation with perfect calibration (we discuss calibration in Chapter 8, but for now it is sufficient to know that the examples we have been using have perfect calibration). When we run CLEAN on this observation, each iteration transfers some flux from the residual image to the sky model (see figure below). If we run this long enough we will reach the observation's noise floor, and then start to deconvolve the noise out of the image. If the process ran for infinitely many iterations, we would eventually have a sky model which contains all the flux, both from the sources and from the noise, and the residual image would be empty. This extreme case results in a sky model containing noise sources, which is not ideal. But if we have not deconvolved enough flux, then the sky model is incomplete and the residual image will contain PSF structure from the remaining flux. Thus the challenge is to deconvolve enough to remove most of the true sky signal, but not over-deconvolve such that noise is added to the sky model. As stated earlier, the typical approach at the moment is to run multiple deconvolutions, adjusting the parameters until a subjectively acceptable solution is reached. We can see an example of over-deconvolution below. Using the same example as in the previous section &#10142;, if we deconvolve beyond 300 iterations (which we found to produce a well-deconvolved sky model), then noise from the residual image is added to the sky model. This can be seen as the low-flux sources around the edge of the image. Over-deconvolution can lead to <cite data-cite='1998AJ....115.1693C'>clean bias</cite> &#10548; effects.
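One hedged way to make the convergence idea above concrete is to track the residual RMS per iteration and halt once it plateaus. This is only a sketch of such a criterion, not what any particular CLEAN implementation uses; the function name, window, and tolerance are assumptions.

```python
def should_halt(residual_rms_history, window=10, tol=1e-3):
    """Halt when the relative drop in residual RMS over the last `window`
    iterations falls below `tol`, i.e. CLEAN has stopped making progress."""
    if len(residual_rms_history) < window + 1:
        return False  # not enough history yet
    old = residual_rms_history[-window - 1]
    new = residual_rms_history[-1]
    return (old - new) / old < tol

# Steadily improving residuals: keep cleaning.
improving = [2.0 ** (-i) for i in range(20)]
print(should_halt(improving))  # False

# Residuals stuck at the noise floor: stop.
flat = [1.0] * 20
print(should_halt(flat))       # True
```

In practice the RMS would be measured on the residual image after each minor cycle; the plateau marks the point where further iterations mostly move noise into the sky model.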
def generalGauss2d(x0, y0, sigmax, sigmay, amp=1., theta=0.):
    """Return a normalized general 2-D Gaussian function
    x0,y0: centre position
    sigmax, sigmay: standard deviation
    amp: amplitude
    theta: rotation angle (deg)"""
    #norm = amp * (1./(2.*np.pi*(sigmax*sigmay))) #normalization factor
    norm = amp
    rtheta = theta * np.pi / 180. #convert to radians
    #general function parameters (https://en.wikipedia.org/wiki/Gaussian_function)
    a = (np.cos(rtheta)**2.)/(2.*(sigmax**2.)) + (np.sin(rtheta)**2.)/(2.*(sigmay**2.))
    b = -1.*(np.sin(2.*rtheta))/(4.*(sigmax**2.)) + (np.sin(2.*rtheta))/(4.*(sigmay**2.))
    c = (np.sin(rtheta)**2.)/(2.*(sigmax**2.)) + (np.cos(rtheta)**2.)/(2.*(sigmay**2.))
    return lambda x,y: norm * np.exp(-1. * (a * ((x - x0)**2.) - 2.*b*(x-x0)*(y-y0) + c * ((y-y0)**2.)))

def genRstoredBeamImg(fitsImg):
    """Generate an image of the restored PSF beam based on the FITS header and image size"""
    fh = fits.open(fitsImg)
    #get the restoring beam information from the FITS header
    bmin = fh[0].header['BMIN']  #restored beam minor axis (deg)
    bmaj = fh[0].header['BMAJ']  #restored beam major axis (deg)
    bpa = fh[0].header['BPA']    #restored beam angle (deg)
    dRA = fh[0].header['CDELT1'] #pixel size in RA direction (deg)
    ra0 = fh[0].header['CRPIX1'] #centre RA pixel
    dDec = fh[0].header['CDELT2'] #pixel size in Dec direction (deg)
    dec0 = fh[0].header['CRPIX2'] #centre Dec pixel
    #construct 2-D elliptical Gaussian function
    gFunc = generalGauss2d(0., 0., bmin/2., bmaj/2., theta=bpa)
    #produce a restored PSF beam image
    imgSize = 2.*(ra0-1) #assumes a square image
    xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float) #make a grid of pixel indices
    xpos -= ra0   #recentre
    ypos -= dec0  #recentre
    xpos *= dRA   #convert pixel number to degrees
    ypos *= dDec  #convert pixel number to degrees
    return gFunc(xpos, ypos) #restored PSF beam image

def convolveBeamSky(beamImg, skyModel):
    """Convolve a beam (PSF or restored) image with a sky model image, images must be the same shape"""
    sampFunc = np.fft.fft2(beamImg) #sampling function
    skyModelVis = np.fft.fft2(skyModel[0,0]) #sky model visibilities
    sampModelVis = sampFunc * skyModelVis #sampled sky model visibilities
    return np.abs(np.fft.fftshift(np.fft.ifft2(sampModelVis))) #sky model convolved with restored beam

fig = plt.figure(figsize=(16, 7))

fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')
residualImg = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-model.fits')
skyModel = fh[0].data

#generate a restored PSF beam image
restBeam = genRstoredBeamImg(
    '../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')

#convolve restored beam image with sky model
convImg = convolveBeamSky(restBeam, skyModel)

gc1 = aplpy.FITSFigure(residualImg, figure=fig, subplot=[0.1,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-1.5, vmax=2, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual Image (niter=1000)')
gc1.add_colorbar()

gc2 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=0., vmax=2.5, cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Sky Model')
gc2.add_colorbar()

fig.canvas.draw()
Figure: residual image and sky model after 1000 deconvolution iterations. The residual image has been over-deconvolved, leading to noise components being added to the sky model. The second question, what makes a good image, is why we still rely on subjective opinion. If we consider the realistic case of imaging and deconvolving a real set of visibilities, then we have the added problem that there will always be, at some level, calibration errors. These errors, and their causes, can be identified by a trained eye, whether they stem from poor gain calibration, interference, strong source sidelobes, or any number of other issues. Errors can cause a deconvolution process to diverge, resulting in an unrealistic sky model. Humans are very good at looking at images and deciding if they make sense, but we cannot easily describe how we do our image processing, so we find it hard to implement algorithms to do the same. Looking at the dirty image and deconvolved image of the same field below, most people would say the deconvolved image is objectively 'better' than the dirty image. Yet we do not know exactly why that is the case.
fig = plt.figure(figsize=(16, 7))

gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits', \
    figure=fig, subplot=[0.1,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Dirty Image')
gc1.add_colorbar()

gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \
    figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Deconvolved Image')
gc2.add_colorbar()

fig.canvas.draw()
Left: dirty image from a 6 hour KAT-7 observation at a declination of $-30^{\circ}$. Right: deconvolved image. The deconvolved image does not have the same noisy PSF structures around the sources that the dirty image does. We could say that these imaging artefacts are localized and related to the PSF response to bright sources. The aim of deconvolution is to remove these PSF-like structures and replace them with a simple sky model which is decoupled from the observing system. Most of the difficult work in radio interferometry is the attempt to understand and remove the instrumental effects in order to recover the sky signal. Thus, we have some context for why the deconvolved image is 'better' than the dirty image. The challenge in automatically answering what makes a good image is somehow encoding both the context and human intuition. Indeed, a challenge left to the reader.

6.4.1 Dynamic Range and Signal-to-Noise Ratio

Dynamic range is the standard metric, used for decades, to describe the quality of an interferometric image. The dynamic range (DR) is defined as the ratio of the peak flux $I_{\textrm{peak}}$ to the standard deviation of the noise in the image $\sigma_I$. The dynamic range can be computed for either a dirty or a deconvolved image.

$$\textrm{DR} = \frac{I_{\textrm{peak}}}{\sigma_I}$$

Now, this definition of the dynamic range is not well defined. First, how is the peak flux defined? Typically, the peak pixel value anywhere in the image is taken to be the peak flux. But be careful: changing the resolution of the image will result in different flux values. Decreasing the resolution can result in more flux being included in a single pixel; likewise, increasing the resolution spreads the flux across more pixels.
The second issue is how the noise of the image is computed; possible options are:

1. Use the entire image
2. Use the entire residual image
3. Randomly sample the image
4. Choose a 'relatively' empty region

This is not an exhaustive list of methods, but the typical method is option 4. After deconvolution, the image is loaded into a viewer and the standard deviation of the noise is computed from a region relatively free of sources. As I write this I am aware of how ridiculous that might sound. Using the same image, we can see how the dynamic range varies between these different methods. The dynamic range for the deconvolved image above is:
# Load the deconvolved image
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits')
deconvImg = fh[0].data
# Load the residual image
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')
residImg = fh[0].data

peakI = np.max(deconvImg)
print('Peak Flux: %f Jy' % peakI)

print('Dynamic Range:')
# Method 1: entire image
noise = np.std(deconvImg)
print('\tMethod 1:', peakI / noise)
# Method 2: entire residual image
noise = np.std(residImg)
print('\tMethod 2:', peakI / noise)
# Method 3: randomly sample 1% of pixels
noise = np.std(np.random.choice(deconvImg.flatten(), int(deconvImg.size * .01)))
print('\tMethod 3:', peakI / noise)
# Method 4, region 1: corner of image
noise = np.std(deconvImg[0, 0, 0:128, 0:128])
print('\tMethod 4a:', peakI / noise)
# Method 4, region 2: centre of image
noise = np.std(deconvImg[0, 0, 192:320, 192:320])
print('\tMethod 4b:', peakI / noise)
Method 1 will always result in a lower dynamic range than Method 2, as the deconvolved image includes the sources while Method 2 uses only the residuals. Method 3 results in a dynamic range which varies with the number of pixels sampled and which pixels are sampled; one could imagine an unlucky sampling where every chosen pixel is part of a source, resulting in a large standard deviation. Method 4 depends on the region used to compute the noise. In the Method 4a result, a corner of the image where there are essentially no sources yields a high dynamic range. On the other hand, choosing the centre region to compute the noise standard deviation yields a low dynamic range. This variation between methods can lead to people playing 'the dynamic range game', where someone picks the result that best fits what they want to say about the image. Be careful, and make sure your dynamic range metric is well defined and unbiased. There is a qualitative explanation for computing the image noise and the dynamic range by human interaction: humans are very good at image processing, so we can quickly select regions which are 'noise-like', and it is easier to just look at an image than to come up with a complicated algorithm to find these regions. The dynamic range has a number of issues, but it is correlated with image quality. For a fixed visibility set, improving the dynamic range of an image usually results in an improvement in the quality of the image, as determined by a human. A significant disadvantage of dynamic range is that it is a global metric which reduces an image down to a single number: it provides no information about local artefacts. This is becoming an important issue in modern synthesis imaging as we push into imaging significant portions of the primary beam and need to account for direction-dependent effects. These topics are discussed in Chapter 7.
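One way to avoid the 'dynamic range game' is to at least make the noise-region choice explicit. The helper below is a sketch of such a metric; the function name and the toy image are illustrative, not from the text. The caller must name the patch used for the noise estimate, so the reported number is reproducible.

```python
import numpy as np

def dynamic_range(img, noise_region=None):
    """Peak flux divided by the noise standard deviation.
    noise_region: optional (slice, slice) selecting an explicit patch for the
    noise estimate; None uses the whole image (method 1 above)."""
    peak = np.max(img)
    noise = np.std(img if noise_region is None else img[noise_region])
    return peak / noise

# Toy image: one 'noisy' row of +/-1, zeros elsewhere.
img = np.zeros((4, 4))
img[0] = [1., -1., 1., -1.]
print(dynamic_range(img))                              # 2.0 (whole-image noise std 0.5)
print(dynamic_range(img, (slice(0, 1), slice(0, 4))))  # 1.0 (noisy-row noise std 1.0)
```

Reporting the `noise_region` alongside the number removes the ambiguity between methods 1 and 4 above.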
But, as is noted in <cite data-cite='taylor1999synthesis'>Synthesis Imaging in Radio Astronomy II (Lecture 13)</cite> &#10548;, a valid argument can be made for using dynamic range as a proxy (at least a partial one) for image quality. As of this writing, dynamic range is the standard method to measure image quality.

6.4.2 The Residual Image

We have noted that the result of a deconvolution process is a sky model and a residual image. An example residual image is shown below.
fig = plt.figure(figsize=(8, 7))

gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \
    figure=fig)
gc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual Image')
gc1.add_colorbar()

fig.canvas.draw()
6_Deconvolution/6_4_residuals_and_iqa.ipynb
landmanbester/fundamentals_of_interferometry
gpl-2.0
Split the data into features (x) and target (y, the last column in the table). Remember you can cast the results into a numpy array and then slice out what you want.
x = myarray[:,:11] y = myarray[:,11:]
class10/donow/kate_bennion_donow_10.ipynb
ledeprogram/algorithms
gpl-3.0
Create a decision tree with the data
from sklearn.tree import DecisionTreeClassifier dt = DecisionTreeClassifier() dt = dt.fit(x,y)
class10/donow/kate_bennion_donow_10.ipynb
ledeprogram/algorithms
gpl-3.0
Run 10-fold cross validation on the model
from sklearn.cross_validation import cross_val_score scores = cross_val_score(dt,x,y,cv=10)
class10/donow/kate_bennion_donow_10.ipynb
ledeprogram/algorithms
gpl-3.0
If you have time, calculate the feature importances and graph them, based on the code in the slides from last class. Use this tip for getting the column names from your cursor object.
plt.plot(dt.feature_importances_,'o') plt.ylim(0,1)
class10/donow/kate_bennion_donow_10.ipynb
ledeprogram/algorithms
gpl-3.0
Initialize ASCAT reader
ascat_data_folder = os.path.join('/media/sf_R', 'Datapool_processed', 'WARP', 'WARP5.5', 'IRMA1_WARP5.5_P2', 'R1', '080_ssm', 'netcdf') ascat_grid_folder = os.path.join('/media/sf_R', 'Datapool_processed', 'WARP', 'ancillary', 'warp5_grid') ascat_reader = AscatH25_SSM(ascat_data_folder, ascat_grid_folder) ascat_reader.read_bulk = True ascat_reader._load_grid_info()
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
Initialize ISMN reader
ismn_data_folder = os.path.join('/media/sf_D', 'ISMN', 'data') ismn_reader = ISMN_Interface(ismn_data_folder)
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
Create the variable jobs, which is a list containing either cell numbers (for a cell-based process) or grid point index information tuples (gpi, longitude, latitude). For ISMN, gpi is replaced by idx, an index used to read time series of variables such as soil moisture. DO NOT CHANGE the name jobs because it will be searched for during the parallel processing!
jobs = [] ids = ismn_reader.get_dataset_ids(variable='soil moisture', min_depth=0, max_depth=0.1) for idx in ids: metadata = ismn_reader.metadata[idx] jobs.append((idx, metadata['longitude'], metadata['latitude']))
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
Create the variable save_path which is a string representing the path where the results will be saved. DO NOT CHANGE the name save_path because it will be searched during the parallel processing!
save_path = os.path.join('/media/sf_D', 'validation_framework', 'test_ASCAT_ISMN')
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
Create the validation object.
datasets = {'ISMN': {'class': ismn_reader, 'columns': ['soil moisture'], 'type': 'reference', 'args': [], 'kwargs': {}}, 'ASCAT': {'class': ascat_reader, 'columns': ['sm'], 'type': 'other', 'args': [], 'kwargs': {}, 'grids_compatible': False, 'use_lut': False, 'lut_max_dist': 30000} } period = [datetime(2007, 1, 1), datetime(2014, 12, 31)] process = Validation(datasets=datasets, data_prep=DataPreparation(), temporal_matcher=temporal_matchers.BasicTemporalMatching(window=1/24.0, reverse=True), scaling='lin_cdf_match', scale_to_other=True, metrics_calculator=metrics_calculators.BasicMetrics(), period=period, cell_based_jobs=False)
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
If you decide to use the IPython parallel processing to perform the validation, please ADD the start_processing function to your code. Then move to pytesmo.validation_framework.start_validation, change the path to your setup code, and start the validation.
def start_processing(job): try: return process.calc(job) except RuntimeError: return process.calc(job)
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
If you choose to perform the validation normally, then please ADD the uncommented main method to your code.
# if __name__ == '__main__': # # from pytesmo.validation_framework.results_manager import netcdf_results_manager # # for job in jobs: # results = process.calc(job) # netcdf_results_manager(results, save_path)
docs/setup_validation_ASCAT_ISMN.ipynb
christophreimer/pytesmo
bsd-3-clause
Objectives Choose two players who reached the same score (forcibly stopped) Choose two players who played for the same amount of time (forcibly stopped) Calculate their statistical results (variance, average, mode, etc.) Visualization in terms of HR, emotion, and collection of emoji According to the movement of the birds (HR of the players), find out their similarity using the Dynamic Time Warping algorithm
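The Dynamic Time Warping comparison mentioned in the objectives can be illustrated with a minimal pure-NumPy implementation (the analysis below uses the external `dtw` package; this sketch only shows the underlying recurrence):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                   acc[i, j - 1],      # deletion
                                   acc[i - 1, j - 1])  # match
    return acc[n, m]

# Identical series warp to zero cost; a phase-shifted copy still matches closely.
s1 = np.sin(np.linspace(0, 2 * np.pi, 50))
s2 = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)
print(dtw_distance(s1, s1), dtw_distance(s1, s2))
```

Identical series give a distance of zero, and the warping lets a phase-shifted copy score no worse than a plain point-by-point comparison would.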
# all the functions we need to parse the data def extract_split_data(data): content = re.findall("\[(.*?)\]", data) timestamps = [] values = [] for c in content[0].split(","): c = (c.strip()[1:-1]) if len(c)>21: x, y = c.split("#") values.append(int(x)) timestamps.append(y) return timestamps, values def de_timestampe(time): # get year month date y = time.split()[0].split("-")[0] m = time.split()[0].split("-")[1] d = time.split()[0].split("-")[2] # get hour minute second h = time.split()[1].split(":")[0] mi = time.split()[1].split(":")[1] s = time.split()[1].split(":")[2] t = m + " " + d + " " + h + ":" + mi + ":" + s + " " + y good_format = datetime.datetime.strptime(t, '%m %d %H:%M:%S.%f %Y' ) return good_format def de_movement(movement): moves = [] for m in movement: if len(m[1:-2]) > 1: for y in m[1:-2].split(","): moves.append(float(y)) return moves def chop_video(url): vidcap = cv2.VideoCapture(url) vidcap.set(cv2.CAP_PROP_POS_MSEC,6000) #success,image = vidcap.read() count = 0 success = True while success: success,image = vidcap.read() if not success: break # stop before touching an empty frame at end of video (h, w) = image.shape[:2] M = cv2.getRotationMatrix2D((w/2,h/2),-90,1) rotated = cv2.warpAffine(image,M,(w,h)) cropped = rotated[100:550, 80:400] cv2.imwrite("converted1/frame%d.jpg" % count, cropped) # save frame as JPEG file count += 1 def process_pred_data(): dirname = "/Users/xueguoliang/myGithub/affectiveComputing/converted1" # Load every image file in the provided directory filenames = [os.path.join(dirname, fname) for fname in os.listdir(dirname) if fname.split(".")[1] == 'jpg'] # Read every file as a grayscale image imgs = [cv2.imread(fname,cv2.IMREAD_GRAYSCALE) for fname in filenames] # Then resize the square image to 48 x 48 pixels imgs = [cv2.resize(img_i, (48, 48)) for img_i in imgs] # Finally make our list of 3-D images a 4-D array with the first dimension the number of images: imgs = np.array(imgs).astype(np.float32) np.save('pred_data.npy', imgs) def emotion_predict(x): MODEL = None with tf.Graph().as_default(): 
network = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name='input') network = conv_2d(network, 96, 11, strides=4, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = conv_2d(network, 256, 5, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = conv_2d(network, 384, 3, activation='relu') network = conv_2d(network, 384, 3, activation='relu') network = conv_2d(network, 256, 3, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = fully_connected(network, 4096, activation='tanh') network = dropout(network, 0.5) network = fully_connected(network, 4096, activation='tanh') network = dropout(network, 0.5) network = fully_connected(network, 7, activation='softmax') network = regression(network, optimizer='momentum',loss='categorical_crossentropy',learning_rate=LR, name='targets') model = tflearn.DNN(network, tensorboard_dir='alex_bird') model.load("affective-bird-0.001-alexnet_15.model") MODEL = model predict_y = MODEL.predict(x.reshape(-1,IMG_SIZE,IMG_SIZE,1)) new_y = (np.argmax(predict_y, axis=1)).astype(np.uint8) return new_y def get_track_emoj(data): content = re.findall("\[(.*?)\]", data) e_timestamp = [] #print (len(content[0])) if len(content[0])>0: for c in content[0].split(","): c = (c.strip()[1:-1]) e_timestamp.append(c) return e_timestamp player1 = pd.read_csv("/Users/xueguoliang/Desktop/finalData/FlappyBird-1ec48f0fbc8d80edc56051dd46c7070d-2017-07-06-20-48.csv", delimiter=";") player2 = pd.read_csv("/Users/xueguoliang/Desktop/finalData/FlappyBird-f2b801830aba82769b39d29f2afddd10-2017-07-07-20-07.csv", delimiter=";") #chop_video('/Users/xueguoliang/Desktop/finalData/VideoRecording-2017-07-06-20-48-51.mp4') #process_pred_data() pred_data = np.load('pred_data.npy') # hyperparameter IMG_SIZE = 48 LR = 1e-3 result = emotion_predict(pred_data)
affectiveComputing/ComparisonAnalysis.ipynb
Ivanhehe/Sharings
mit
Heart rate analysis from player1
# playing span s1 = player1['TimeStarted'].values[0] e1 = player1['TimeEnded'].values[-1] sx1 = player1['TimeStarted'].values[-1] diff1 = (de_timestampe(e1) - de_timestampe(s1)) # difference in seconds diffx1 = (de_timestampe(e1) - de_timestampe(sx1)) # get timestamp and HR times1 = [] rates1 = [] flags = [0] pos = 0 for session in player1['Heartbeats']: time, rate = extract_split_data(session) pos += len(time)-1 if pos>0: flags.append(pos) times1 += time rates1 += rate print ("Player1") print ("Time: {} minutes, {} ~ {}".format(round(diff1.seconds/60,2), s1, e1)) print ("Scores: {}".format(player1["Score"].values)) print ("Emoj Scores: {}".format(player1["EmojiScore"].values)) print ("Game Sessions: {}".format(player1.shape[0])) print ("Variance of HR: {}".format(np.var(rates1))) print ("Average of HR: {}".format(np.mean(rates1))) print ("Mode of HR: {}".format(mode(rates1)))
affectiveComputing/ComparisonAnalysis.ipynb
Ivanhehe/Sharings
mit
Emoji collection analysis from player1
e_timestamp = [] for session in player1['EmojiTimestamps']: e_timestamp += get_track_emoj(session) xi = [] track = [] for i,t in enumerate(times1): for e in e_timestamp: if abs((de_timestampe(e)-de_timestampe(t)).seconds) < 1: xi.append(i) track.append(int(rates1[i])) fig, ax = plt.subplots(figsize=(15,8)) markers_on = track plt.plot(rates1) plt.scatter(xi,track,c="r",s=50) #plt.xticks(x,times1, rotation="60") plt.title("Heartbeats - EmojiCollection") ax.set_xlabel("time(s)") ax.set_ylabel("beats") plt.show() # plot x1 = diffx1.seconds fig, ax1 = plt.subplots(figsize=(15,8)) plt.title("Heartbeats of player1") #plt.scatter(timestamps1, rates1) ax2 = ax1.twinx() ax1.plot(rates1) ax1.tick_params('y', colors='b') emotions = [] i=0 while(i<len(result)): emotions.append(int(result[i])) i = i+len(result)//len(rates1) ax2.scatter(range(0,len(emotions)),emotions,color="r",s=50,alpha=.4) ax2.tick_params('y', colors='r') #plt.ylim([70,150]) for f in flags: plt.axvline(x=f, color='y', linestyle='--') #plt.text(x1,120, str(x1)+" >>>", size=15, fontweight='bold') ax1.set_xlabel("time(s)") ax1.set_ylabel("Beats", color="b") ax2.set_ylabel('Emotion', color="r") ax2.set_yticklabels(["","Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral",""]) plt.show() liter = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"] final_result = [liter[i] for i in result] es = [] fs = [] rs = Counter(final_result) for v in rs: es.append(v) fs.append(rs[v]) sns.barplot(es, fs) plt.title("Emotional Distribution of Player1") plt.show()
affectiveComputing/ComparisonAnalysis.ipynb
Ivanhehe/Sharings
mit
Heart rate analysis from player2
# playing span s2 = player2['TimeStarted'].values[0] e2 = player2['TimeEnded'].values[-1] sx2 = player2['TimeStarted'].values[-1] diff2 = (de_timestampe(e2) - de_timestampe(s2)) # difference in second diffx2 = (de_timestampe(e2) - de_timestampe(sx2)) # difference in seconds # get timestamp and HR times2 = [] rates2 = [] for session in player2['Heartbeats']: time, rate = extract_split_data(session) times2 += time rates2 += rate print ("Player2") print ("Time: {} minutes, {} ~ {}".format(round(diff2.seconds/60,2), s2.split()[1], e2.split()[1])) print ("Game Sessions: {}".format(player2.shape[0])) print ("Scores: {}".format(player2["Score"].values)) print ("Emoj Scores: {}".format(player2["EmojiScore"].values)) print ("Variance of HR: {}".format(np.var(rates2))) print ("Average of HR: {}".format(np.mean(rates2))) print ("Mode of HR: {}".format(mode(rates2))) # plot timestamps2 = pd.to_datetime(times2) x2 = diffx2.seconds fig, ax = plt.subplots(figsize=(15,8)) plt.title("Heartbeats of player2") #plt.scatter(timestamps1, rates1) sns.tsplot(rates2) plt.ylim([65,90]) #plt.xticks(x, times, rotation="60") ax.set_xlabel("time(s)") ax.set_ylabel("beats") plt.show()
affectiveComputing/ComparisonAnalysis.ipynb
Ivanhehe/Sharings
mit
Playing Pattern
m1 = player1["Movement"] m2 = player2["Movement"] print (m1[:5]) print (m2[:5]) y1 = de_movement(m1) y2 = de_movement(m2) fig, ax = plt.subplots(figsize=(15,8)) plt.title("Comparison between birds") #plt.scatter(timestamps1, rates1) plt.plot(y1, color="b", label="player1", alpha=.6) plt.plot(y2, color="g", label="player2", alpha=.4) plt.xlim([0,100]) ax.set_xlabel("time(s)") ax.set_ylabel("y") plt.legend() plt.show() yy1 = (y1-np.mean(y1))/np.std(y1) yy2 = (y2-np.mean(y2))/np.std(y2) dist, cost, acc, path = dtw(yy1, yy2, dist=euclidean) dist1, cost1, acc1, path1 = dtw(yy1[:300], yy2[:300], dist=euclidean) print("Whole Game Sessions: {}".format(dist)) print("During Same Period: {}".format(dist1)) %pylab inline imshow(acc1.T, origin='lower', cmap=cm.gray, interpolation='nearest') plot(path1[0], path1[1], 'w') xlim((-0.5, acc1.shape[0]-0.5)) ylim((-0.5, acc1.shape[1]-0.5)) # similarity for own movement from itertools import islice def window(seq, n=2): "Returns a sliding window (of width n) over data from the iterable" " s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... " it = iter(seq) result = tuple(islice(it, n)) if len(result) == n: yield result for elem in it: result = result[1:] + (elem,) yield result seq = yy1[:100] sub = window(seq, 10) print (list(sub)) # materialize the generator so the windows are visible
affectiveComputing/ComparisonAnalysis.ipynb
Ivanhehe/Sharings
mit
Contour plots of 2d wavefunctions The wavefunction of a 2d quantum well is: $$ \psi_{n_x,n_y}(x,y) = \frac{2}{L} \sin{\left( \frac{n_x \pi x}{L} \right)} \sin{\left( \frac{n_y \pi y}{L} \right)} $$ This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well. Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
def well2d(x, y, nx, ny, L=1.0): """Compute the 2d quantum well wave function.""" return 2/L*np.sin(nx*np.pi*x/L)*np.sin(ny*np.pi*y/L) psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1) assert len(psi)==10 assert psi.shape==(10,)
assignments/assignment05/MatplotlibEx03.ipynb
CalPolyPat/phys202-2015-work
mit
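As a quick sanity check on the wavefunction defined above (a sketch, not part of the assignment): $\psi$ should be normalized, i.e. $\int_0^L \int_0^L |\psi_{n_x,n_y}|^2 \, dx \, dy = 1$, which a simple trapezoid-style sum confirms numerically.

```python
import numpy as np

def well2d(x, y, nx, ny, L=1.0):
    """2d infinite-well wavefunction (same definition as above)."""
    return 2/L * np.sin(nx*np.pi*x/L) * np.sin(ny*np.pi*y/L)

L = 1.0
x = np.linspace(0, L, 400)
y = np.linspace(0, L, 400)
xx, yy = np.meshgrid(x, y)
psi = well2d(xx, yy, 3, 2, L)

# psi vanishes on the boundary, so a plain Riemann sum equals the
# trapezoid rule here; integrate |psi|^2 over the whole well.
dx = x[1] - x[0]
dy = y[1] - y[0]
norm = np.sum(psi**2) * dx * dy
print(norm)  # very close to 1.0
```

The product of the two $\sin^2$ integrals is $(L/2)(L/2)$, which the prefactor $(2/L)^2$ cancels exactly, so the state is normalized for any $n_x$, $n_y$.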
The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction: Use $n_x=3$, $n_y=2$ and $L=1$. Use the limits $[0,1]$ for the x and y axis. Customize your plot to make it effective and beautiful. Use a non-default colormap. Add a colorbar to your visualization. First make a plot using one of the contour functions:
f=plt.figure(figsize=(10,10)) x=np.linspace(0,1,100) y=np.linspace(0,1,100) xx, yy=np.meshgrid(x, y) z=well2d(xx,yy,3,2,1) plt.contourf(x,y,z,50,cmap=plt.cm.get_cmap("hot")) plt.colorbar(label=r"$\Psi (x,y)$") plt.xlabel("X Position") plt.ylabel("Y Position") plt.title("The wavefunction of a 2D infinite well") assert True # use this cell for grading the contour plot
assignments/assignment05/MatplotlibEx03.ipynb
CalPolyPat/phys202-2015-work
mit
Next make a visualization using one of the pcolor functions:
f=plt.figure(figsize=(10,10)) x=np.linspace(0,1,100) y=np.linspace(0,1,100) xx, yy=np.meshgrid(x, y) z=well2d(xx,yy,3,2,1) plt.pcolor(x,y,z,cmap="RdBu") plt.colorbar(label=r"$\Psi (x,y)$") plt.xlabel("X Position") plt.ylabel("Y Position") plt.title("The wavefunction of a 2D infinite well") assert True # use this cell for grading the pcolor plot
assignments/assignment05/MatplotlibEx03.ipynb
CalPolyPat/phys202-2015-work
mit
Now we instantiate a model instance: a 10x10 grid, with an 80% chance of an agent being placed in each cell, approximately 20% of agents set as minorities, and agents wanting at least 3 similar neighbors.
model = SchellingModel(10, 10, 0.8, 0.2, 3)
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
We want to run the model until all the agents are happy with where they are. However, there's no guarantee that a given model instantiation will ever settle down. So let's run it for either 100 steps or until it stops on its own, whichever comes first:
while model.running and model.schedule.steps < 100: model.step() print(model.schedule.steps) # Show how many steps have actually run
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
The model has a DataCollector object, which checks and stores how many agents are happy at the end of each step. It can also generate a pandas DataFrame of the data it has collected:
model_out = model.datacollector.get_model_vars_dataframe() model_out.head()
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
Finally, we can plot the 'happy' series:
model_out.happy.plot()
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
For testing purposes, here is a table giving each agent's x and y values at each step.
x_positions = model.datacollector.get_agent_vars_dataframe() x_positions.head()
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
Effect of Homophily on segregation Now, we can do a parameter sweep to see how segregation changes with homophily. First, we create a function which takes a model instance and returns what fraction of agents are segregated -- that is, have no neighbors of the opposite type.
from mesa.batchrunner import BatchRunner def get_segregation(model): ''' Find the % of agents that only have neighbors of their same type. ''' segregated_agents = 0 for agent in model.schedule.agents: segregated = True for neighbor in model.grid.neighbor_iter(agent.pos): if neighbor.type != agent.type: segregated = False break if segregated: segregated_agents += 1 return segregated_agents / model.schedule.get_agent_count()
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily.
parameters = {"height": 10, "width": 10, "density": 0.8, "minority_pc": 0.2, "homophily": range(1,9)} model_reporters = {"Segregated_Agents": get_segregation} param_sweep = BatchRunner(SchellingModel, parameters, iterations=10, max_steps=200, model_reporters=model_reporters) param_sweep.run_all() df = param_sweep.get_model_vars_dataframe() plt.scatter(df.homophily, df.Segregated_Agents) plt.grid(True)
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
projectmesa/mesa-examples
apache-2.0
Exploring the TF-Hub CORD-19 Swivel Embeddings <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/cord_19_embeddings"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/tensorflow/cord-19/swivel-128d/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table> The CORD-19 Swivel text embedding module from TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/1) was built to support researchers analyzing natural language text related to COVID-19. These embeddings were trained on the titles, authors, abstracts, body texts, and reference titles of articles in the CORD-19 dataset. In this colab we will: - Analyze semantically similar words in the embedding space - Train a classifier on the SciCite dataset using the CORD-19 embeddings Setup
import functools import itertools import matplotlib.pyplot as plt import numpy as np import seaborn as sns import pandas as pd import tensorflow.compat.v1 as tf tf.disable_eager_execution() tf.logging.set_verbosity('ERROR') import tensorflow_datasets as tfds import tensorflow_hub as hub try: from google.colab import data_table def display_df(df): return data_table.DataTable(df, include_index=False) except ModuleNotFoundError: # If google-colab is not available, just display the raw DataFrame def display_df(df): return df
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
Analyze the embeddings Let's start off by analyzing the embedding by calculating and plotting a correlation matrix between different terms. If the embedding learned to successfully capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's take a look at some COVID-19 related terms.
# Use the inner product between two embedding vectors as the similarity measure def plot_correlation(labels, features): corr = np.inner(features, features) corr /= np.max(corr) sns.heatmap(corr, xticklabels=labels, yticklabels=labels) with tf.Graph().as_default(): # Load the module query_input = tf.placeholder(tf.string) module = hub.Module('https://tfhub.dev/tensorflow/cord-19/swivel-128d/1') embeddings = module(query_input) with tf.train.MonitoredTrainingSession() as sess: # Generate embeddings for some terms queries = [ # Related viruses "coronavirus", "SARS", "MERS", # Regions "Italy", "Spain", "Europe", # Symptoms "cough", "fever", "throat" ] features = sess.run(embeddings, feed_dict={query_input: queries}) plot_correlation(queries, features)
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
We can see that the embedding successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (e.g. "coronavirus" correlates highly with "SARS" and "MERS"), while it differs from terms of other clusters (e.g. the similarity between "SARS" and "Spain" is close to 0). Now let's see how we can use these embeddings to solve a specific task. SciCite: Citation Intent Classification This section shows how one can use the embedding for downstream tasks such as text classification. We'll use the SciCite dataset from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence with a citation from an academic paper, classify whether the main intent of the citation is background information, use of methods, or comparing results.
#@title Set up the dataset from TFDS class Dataset: """Build a dataset from a TFDS dataset.""" def __init__(self, tfds_name, feature_name, label_name): self.dataset_builder = tfds.builder(tfds_name) self.dataset_builder.download_and_prepare() self.feature_name = feature_name self.label_name = label_name def get_data(self, for_eval): splits = THE_DATASET.dataset_builder.info.splits if tfds.Split.TEST in splits: split = tfds.Split.TEST if for_eval else tfds.Split.TRAIN else: SPLIT_PERCENT = 80 split = "train[{}%:]".format(SPLIT_PERCENT) if for_eval else "train[:{}%]".format(SPLIT_PERCENT) return self.dataset_builder.as_dataset(split=split) def num_classes(self): return self.dataset_builder.info.features[self.label_name].num_classes def class_names(self): return self.dataset_builder.info.features[self.label_name].names def preprocess_fn(self, data): return data[self.feature_name], data[self.label_name] def example_fn(self, data): feature, label = self.preprocess_fn(data) return {'feature': feature, 'label': label}, label def get_example_data(dataset, num_examples, **data_kw): """Show example data""" with tf.Session() as sess: batched_ds = dataset.get_data(**data_kw).take(num_examples).map(dataset.preprocess_fn).batch(num_examples) it = tf.data.make_one_shot_iterator(batched_ds).get_next() data = sess.run(it) return data TFDS_NAME = 'scicite' #@param {type: "string"} TEXT_FEATURE_NAME = 'string' #@param {type: "string"} LABEL_NAME = 'label' #@param {type: "string"} THE_DATASET = Dataset(TFDS_NAME, TEXT_FEATURE_NAME, LABEL_NAME) #@title Let's take a look at a few labeled examples from the training set NUM_EXAMPLES = 20 #@param {type:"integer"} data = get_example_data(THE_DATASET, NUM_EXAMPLES, for_eval=False) display_df( pd.DataFrame({ TEXT_FEATURE_NAME: [ex.decode('utf8') for ex in data[0]], LABEL_NAME: [THE_DATASET.class_names()[x] for x in data[1]] }))
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
Training a citation intent classifier We'll train a classifier on the SciCite dataset using an Estimator. Let's set up the input_fns to read the dataset into the model.
def preprocessed_input_fn(for_eval): data = THE_DATASET.get_data(for_eval=for_eval) data = data.map(THE_DATASET.example_fn, num_parallel_calls=1) return data def input_fn_train(params): data = preprocessed_input_fn(for_eval=False) data = data.repeat(None) data = data.shuffle(1024) data = data.batch(batch_size=params['batch_size']) return data def input_fn_eval(params): data = preprocessed_input_fn(for_eval=True) data = data.repeat(1) data = data.batch(batch_size=params['batch_size']) return data def input_fn_predict(params): data = preprocessed_input_fn(for_eval=True) data = data.batch(batch_size=params['batch_size']) return data
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
Let's build a model which uses the CORD-19 embeddings with a classification layer on top.
def model_fn(features, labels, mode, params): # Embed the text embed = hub.Module(params['module_name'], trainable=params['trainable_module']) embeddings = embed(features['feature']) # Add a linear layer on top logits = tf.layers.dense( embeddings, units=THE_DATASET.num_classes(), activation=None) predictions = tf.argmax(input=logits, axis=1) if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec( mode=mode, predictions={ 'logits': logits, 'predictions': predictions, 'features': features['feature'], 'labels': features['label'] }) # Set up a multi-class classification head loss = tf.nn.sparse_softmax_cross_entropy_with_logits( labels=labels, logits=logits) loss = tf.reduce_mean(loss) if mode == tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.GradientDescentOptimizer(learning_rate=params['learning_rate']) train_op = optimizer.minimize(loss, global_step=tf.train.get_or_create_global_step()) return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) elif mode == tf.estimator.ModeKeys.EVAL: accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions) precision = tf.metrics.precision(labels=labels, predictions=predictions) recall = tf.metrics.recall(labels=labels, predictions=predictions) return tf.estimator.EstimatorSpec( mode=mode, loss=loss, eval_metric_ops={ 'accuracy': accuracy, 'precision': precision, 'recall': recall, }) #@title Hyperparameters { run: "auto" } EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/1' #@param {type: "string"} TRAINABLE_MODULE = False #@param {type: "boolean"} STEPS = 8000#@param {type: "integer"} EVAL_EVERY = 200 #@param {type: "integer"} BATCH_SIZE = 10 #@param {type: "integer"} LEARNING_RATE = 0.01 #@param {type: "number"} params = { 'batch_size': BATCH_SIZE, 'learning_rate': LEARNING_RATE, 'module_name': EMBEDDING, 'trainable_module': TRAINABLE_MODULE }
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
Train and evaluate the model Let's train and evaluate the model to see the performance on the SciCite task
estimator = tf.estimator.Estimator(functools.partial(model_fn, params=params)) metrics = [] for step in range(0, STEPS, EVAL_EVERY): estimator.train(input_fn=functools.partial(input_fn_train, params=params), steps=EVAL_EVERY) step_metrics = estimator.evaluate(input_fn=functools.partial(input_fn_eval, params=params)) print('Global step {}: loss {:.3f}, accuracy {:.3f}'.format(step, step_metrics['loss'], step_metrics['accuracy'])) metrics.append(step_metrics) global_steps = [x['global_step'] for x in metrics] fig, axes = plt.subplots(ncols=2, figsize=(20,8)) for axes_index, metric_names in enumerate([['accuracy', 'precision', 'recall'], ['loss']]): for metric_name in metric_names: axes[axes_index].plot(global_steps, [x[metric_name] for x in metrics], label=metric_name) axes[axes_index].legend() axes[axes_index].set_xlabel("Global Step")
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
We can see that the loss quickly decreases while the accuracy rapidly increases. Let's plot some examples to check how the predictions relate to the true labels:
predictions = estimator.predict(functools.partial(input_fn_predict, params)) first_10_predictions = list(itertools.islice(predictions, 10)) display_df( pd.DataFrame({ TEXT_FEATURE_NAME: [pred['features'].decode('utf8') for pred in first_10_predictions], LABEL_NAME: [THE_DATASET.class_names()[pred['labels']] for pred in first_10_predictions], 'prediction': [THE_DATASET.class_names()[pred['predictions']] for pred in first_10_predictions] }))
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
tensorflow/docs-l10n
apache-2.0
Problem 2 Using optimization, solve the following equation: $$ \int_{-\infty}^x e^{-s^2} \, ds = 0.25 $$ 1D, convex, root-finding
from scipy.integrate import quad from scipy.optimize import newton import numpy as np x = newton(lambda x: quad(lambda y: np.exp(-y**2), -np.inf,x)[0] - 0.25, x0=0) print('x = {:.3f}'.format(x))
unit_11/hw_2017/problem_set_1.ipynb
whitead/numerical_stats
gpl-3.0
Problem 3 Find the maximum value of $g(x,y)$ where both $x$ and $y$ are between 0 and 1: $$ g(x,y) = \exp\left(-\frac{(x - 0.2)^2}{4}\right)\exp\left(-\frac{(x - y)^2}{5}\right) \exp\left(-\frac{(y - 0.7)^2}{4}\right) $$ 2D, convex, minimization
from scipy.optimize import minimize def obj(z): x = z[0] y = z[1] #return negative to allow max return -np.exp(-(x - 0.2)**2 / 4) * np.exp(-(x - y)**2 / 5) * np.exp(-(y - 0.7)**2 / 4) result = minimize(obj, x0=[0.5, 0.5], bounds=[(0, 1), (0,1)]) print('The maximizing x,y are x = {:.3f}, y = {:.3f}'.format(*result.x))
unit_11/hw_2017/problem_set_1.ipynb
whitead/numerical_stats
gpl-3.0
Problem 4 $x$ and $y$ lie inside a disc with radius $3 \leq r \leq 5$. Find the point within the disc that minimizes the distance to (-6, 2). Modify the code to add your optimum point along with an entry in the legend. Complete the problem in Cartesian coordinates.
import matplotlib.pyplot as plt import matplotlib #use nice style with larger plot size matplotlib.style.use(['seaborn-white', 'seaborn-talk']) #set-up our points theta = np.linspace(0, 2 * np.pi, 100) r = np.repeat(3, len(theta)) #plot the disc boundaries plt.polar(theta, r, linestyle='--', color='#333333') plt.polar(theta, r + 2, linestyle='--', color='#333333') #plot the inside of the disc plt.fill_between(theta, r, r + 2, color='#AAAAAA') #plot the point plt.plot(np.arctan2(2, -6), np.sqrt((-6)**2 + 2**2), 'ro', label='objective') #give some whitespace plt.gca().set_rmax(10) #add legend plt.legend(loc='best') plt.show()
2D, convex, constrained, minimization. Constraints: $$ x^2 + y^2 - 3^2 \geq 0 $$ $$ -x^2 - y^2 + 5^2 \geq 0 $$
# Optimization code
### BEGIN SOLUTION
ineq_1 = lambda x: x[0]**2 + x[1]**2 - 3**2
ineq_2 = lambda x: -(x[0]**2 + x[1]**2 - 5**2)
constraints = [{'type': 'ineq', 'fun': ineq_1}, {'type': 'ineq', 'fun': ineq_2}]
result = minimize(lambda x: (x[0] - -6)**2 + (x[1] - 2)**2, constraints=constraints, x0=[0, 0])
print('The minimum coordinates are x = {:.3f} and y = {:.3f}'.format(*result.x))
### END SOLUTION

# Your plot code
### BEGIN SOLUTION
import matplotlib.pyplot as plt
import matplotlib

# use a nice style with a larger plot size
matplotlib.style.use(['seaborn-white', 'seaborn-talk'])

# set up our points
theta = np.linspace(0, 2 * np.pi, 100)
r = np.repeat(3, len(theta))

# plot the disc boundaries
plt.polar(theta, r, linestyle='--', color='#333333')
plt.polar(theta, r + 2, linestyle='--', color='#333333')

# plot the inside of the disc
plt.fill_between(theta, r, r + 2, color='#AAAAAA')

# plot the points
plt.plot(np.arctan2(2, -6), np.sqrt((-6)**2 + 2**2), 'ro', label='objective')
plt.plot(np.arctan2(result.x[1], result.x[0]), np.sqrt(result.x[1]**2 + result.x[0]**2), 'gX', label='optimum')

# give some whitespace
plt.gca().set_rmax(10)

# add legend
plt.legend(loc='upper center')
plt.show()
### END SOLUTION
Problem 5 Repeat the previous problem except now you must minimize the distance to three points: (-6, 2), (4,2), (-7, 0)
# optimization
### BEGIN SOLUTION
def obj(x):
    s = 0
    for p in [[-6, 2], [4, 2], [-7, 0]]:
        s += (x[0] - p[0])**2 + (x[1] - p[1])**2
    return s

result = minimize(obj, constraints=constraints, x0=[0, 0])
print('The minimum coordinates are x = {:.3f} and y = {:.3f}'.format(*result.x))
### END SOLUTION

# Your plot code
### BEGIN SOLUTION
import matplotlib.pyplot as plt
import matplotlib

# use a nice style with a larger plot size
matplotlib.style.use(['seaborn-white', 'seaborn-talk'])

# set up our points
theta = np.linspace(0, 2 * np.pi, 100)
r = np.repeat(3, len(theta))

# plot the disc boundaries
plt.polar(theta, r, linestyle='--', color='#333333')
plt.polar(theta, r + 2, linestyle='--', color='#333333')

# plot the inside of the disc
plt.fill_between(theta, r, r + 2, color='#AAAAAA')

# plot the points
plt.plot(np.arctan2(2, -6), np.sqrt((-6)**2 + 2**2), 'ro', label='objective')
plt.plot(np.arctan2(2, 4), np.sqrt(4**2 + 2**2), 'ro')
plt.plot(np.arctan2(0, -7), np.sqrt((-7)**2 + 0**2), 'ro')
plt.plot(np.arctan2(result.x[1], result.x[0]), np.linalg.norm(result.x), 'gX', label='optimum')

# give some whitespace
plt.gca().set_rmax(10)

# add legend
plt.legend(loc='upper center')
plt.show()
### END SOLUTION
Problem 6 The free energy of mixing is given by the following equation in phase equilibrium theory: $$ \Delta F = x\ln x + (1 - x)\ln (1 - x) + \chi_{AB}x(1 - x) + \beta x $$ where $x$ is the mole fraction of component A, $\chi_{AB}$ is the interaction parameter, and $\beta$ is a system correction. Find the mole fraction of component A at which the free energy of mixing is minimized. Use $\chi_{AB} = 3$ and $\beta = 0.05$. Use basinhopping. 1D, non-convex (see plot below), bounded, minimization
# make a plot to see if it's convex
chi = 3
x = np.linspace(0.01, 0.99, 100)
F = x * np.log(x) + (1 - x) * np.log(1 - x) + chi * x * (1 - x) + 0.05 * x
plt.plot(x, F)
# looks nonconvex

from scipy.optimize import basinhopping

def f(x):
    return x * np.log(x) + (1 - x) * np.log(1 - x) + 3 * x * (1 - x) + 0.05 * x

result = basinhopping(f, x0=0.5, minimizer_kwargs={'bounds': [(0.001, 0.999)]})
print('The lowest free energy of mixing is at x = {:.5f}'.format(*result.x))
And more precisely, we are using the following versions:
import nltk
import cltk
import MyCapytain

print(nltk.__version__)
print(cltk.__version__)
print(MyCapytain.__version__)
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-G3.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Let's grab some text To start with, we need some text from which we'll try to extract named entities using various methods and libraries. There are several ways of doing this, e.g.: 1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable 2. load a text from one of the Latin corpora available via cltk (cfr. this blog post) 3. or load it from Perseus by leveraging its Canonical Text Services API Let's go for #3 :) What's CTS? CTS URNs stand for Canonical Text Service Uniform Resource Names. You can think of a CTS URN like a social security number for texts (or parts of texts). Here are some examples of CTS URNs with different levels of granularity: - urn:cts:latinLit:phi0448 (Caesar) - urn:cts:latinLit:phi0448.phi001 (Caesar's De Bello Gallico) - urn:cts:latinLit:phi0448.phi001.perseus-lat2 DBG Latin edition - urn:cts:latinLit:phi0448.phi001.perseus-lat2:1 DBG Latin edition, book 1 - urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1 DBG Latin edition, book 1, chapter 1, section 1 How do I find out the CTS URN of a given author or text? The Perseus Catalog is your friend! (cfr. e.g. http://catalog.perseus.org/catalog/urn:cts:latinLit:phi0448) Querying a CTS API The URN of the Latin edition of Caesar's De Bello Gallico is urn:cts:latinLit:phi0448.phi001.perseus-lat2.
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2"
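The hierarchical structure of a CTS URN described above can be illustrated with a small helper. Note that `parse_cts_urn` is a hypothetical function written here for illustration only — it is not part of MyCapytain or any CTS library:

```python
# Hypothetical helper: split a CTS URN into its components.
def parse_cts_urn(urn):
    """Return the components of a CTS URN as a dict."""
    # A CTS URN has the shape urn:cts:namespace:textgroup.work.version:passage
    parts = urn.split(":")
    components = {"namespace": parts[2]}
    # the work identifier may carry up to three dot-separated levels
    work_parts = parts[3].split(".")
    for key, value in zip(["textgroup", "work", "version"], work_parts):
        components[key] = value
    # the passage reference is optional
    if len(parts) > 4:
        components["passage"] = parts[4]
    return components

parse_cts_urn("urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1")
```

Each additional component narrows the reference, from an author down to a single section of a specific edition.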
With this information, we can query a CTS API and get some information about this text. For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
# We set up a resolver which communicates with an API available in Leipzig
resolver = HttpCTSResolver(CTS("http://cts.dh.uni-leipzig.de/api/cts/"))

# We require some metadata information
textMetadata = resolver.getMetadata("urn:cts:latinLit:phi0448.phi001.perseus-lat2")

# Texts in CTS metadata have one interesting property: their citation scheme.
# Citations are embedded objects that carry information about how a text can be
# quoted and what depth its structure has
print([citation.name for citation in textMetadata.citation])
But we can also query the same API and get back the text of a specific text section, for example the entire book 1. To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2:1"
So we retrieve the first book of the De Bello Gallico by passing its CTS URN (which we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytain:
passage = resolver.getTextualNode(my_passage)
At this point the passage is available in various formats: text, but also TEI XML, etc. Thus, we need to specify that we are interested in getting the text only:
de_bello_gallico_book1 = passage.export(Mimetypes.PLAINTEXT)
Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it:
print(de_bello_gallico_book1)
The text that we have just fetched by using a programming interface (API) can also be viewed in the browser. Or even imported as an iframe into this notebook!
from IPython.display import IFrame
IFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-lat2/1', width=1000, height=350)
Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I:
len(de_bello_gallico_book1.split(" "))
Very simple baseline Now let's write what in NLP jargon is called a baseline, that is, a method for extracting named entities that can serve as a term of comparison when evaluating the accuracy of other methods. Baseline method: - cycle through each token of the text - if the token starts with a capital letter, it's a named entity (only one type, i.e. Entity)
"T".istitle()
"t".istitle()

# we need a list to store the tagged tokens
tagged_tokens = []

# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(de_bello_gallico_book1.split(" ")):
    if token.istitle():
        tagged_tokens.append((token, "Entity"))
    else:
        tagged_tokens.append((token, "O"))
Let's have a look at the first 50 tokens that we just tagged:
tagged_tokens[:50]
For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it:
def extract_baseline(input_text):
    """
    :param input_text: the text to tag (string)
    :return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
    """
    # we need a list to store the tagged tokens
    tagged_tokens = []
    # tokenisation is done by using the string method `split(" ")`
    # that splits a string upon white spaces
    for n, token in enumerate(input_text.split(" ")):
        if token.istitle():
            tagged_tokens.append((token, "Entity"))
        else:
            tagged_tokens.append((token, "O"))
    return tagged_tokens
And now we can call it like this:
tagged_tokens_baseline = extract_baseline(de_bello_gallico_book1)
tagged_tokens_baseline[-50:]
We can slightly modify our function so that it prints the snippet of text where an entity is found:
def extract_baseline(input_text):
    """
    :param input_text: the text to tag (string)
    :return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
    """
    # we need a list to store the tagged tokens
    tagged_tokens = []
    # tokenisation is done by using the string method `split(" ")`
    # that splits a string upon white spaces
    for n, token in enumerate(input_text.split(" ")):
        if token.istitle():
            tagged_tokens.append((token, "Entity"))
            context = input_text.split(" ")[n-5:n+5]
            print("Found entity \"%s\" in context \"%s\"" % (token, " ".join(context)))
        else:
            tagged_tokens.append((token, "O"))
    return tagged_tokens

tagged_text_baseline = extract_baseline(de_bello_gallico_book1)
tagged_text_baseline[:50]
NER with CLTK The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation). The current implementation (as of version 0.1.47) uses a lookup-based method. For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities: - list of Latin proper nouns: https://github.com/cltk/latin_proper_names_cltk - list of Greek proper nouns: https://github.com/cltk/greek_proper_names_cltk Let's run CLTK's tagger (it takes a moment):
%%time
tagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)
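The lookup-based method described above can be sketched in a few lines. The toy lexicon below is an assumption made for illustration — CLTK's real tagger loads its proper-noun lists from the repositories linked above:

```python
# toy lexicon standing in for CLTK's Latin proper-noun list
PROPER_NOUNS = {"Gallia", "Belgae", "Aquitani", "Caesar", "Rhenus"}

def lookup_tag_ner(tokens, lexicon):
    """Tag a token as ('token', 'Entity') if it appears in the lexicon, else ('token',)."""
    tagged = []
    for token in tokens:
        # strip trailing punctuation before looking the token up
        if token.strip(",.;") in lexicon:
            tagged.append((token, "Entity"))
        else:
            tagged.append((token,))
    return tagged

lookup_tag_ner("Gallia est omnis divisa".split(" "), PROPER_NOUNS)
```

Note that the tuples in this sketch mimic CLTK's mixed-size output: entities get a two-element tuple, everything else a one-element tuple.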
Let's have a look at the output, only the first 10 tokens (by using the list slicing notation):
tagged_text_cltk[:10]
The output looks slightly different from that of our baseline function (the size of the tuples in the list varies). But we can write a function to fix this; let's call it reshape_cltk_output:
def reshape_cltk_output(tagged_tokens):
    reshaped_output = []
    for tagged_token in tagged_tokens:
        if len(tagged_token) == 1:
            reshaped_output.append((tagged_token[0], "O"))
        else:
            reshaped_output.append((tagged_token[0], tagged_token[1]))
    return reshaped_output
We apply this function to CLTK's output:
tagged_text_cltk = reshape_cltk_output(tagged_text_cltk)
And the resulting output now looks ok:
tagged_text_cltk[:20]
Now let's compare the two lists of tagged tokens by using a Python function called zip, which allows us to read multiple lists simultaneously:
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
But, as you can see, the two lists are not aligned. This is due to how the CLTK function tokenises the text. The comma after "tres" becomes a token on its own, whereas when we tokenise by white space the comma is attached to "tres" (i.e. "tres,"). A solution to this is to pass to the tag_ner function the text already tokenised by whitespace.
tagged_text_cltk = reshape_cltk_output(tag_ner('latin', input_text=de_bello_gallico_book1.split(" ")))
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
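The misalignment discussed above can be reproduced with a toy example. The regex below is a simple stand-in for CLTK's actual tokenizer, used only to show why the token counts diverge:

```python
import re

text = "in partes tres, quarum"

# whitespace tokenisation keeps the comma attached to "tres"
whitespace_tokens = text.split(" ")
# a punctuation-aware tokenisation splits the comma off as its own token
punct_tokens = re.findall(r"\w+|[,.;]", text)

print(whitespace_tokens)  # ['in', 'partes', 'tres,', 'quarum']
print(punct_tokens)       # ['in', 'partes', 'tres', ',', 'quarum']
```

Because the two lists have different lengths, zipping them pairs up unrelated tokens after the first punctuation mark — which is exactly why we pre-tokenise the text ourselves before calling tag_ner.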
NER with NLTK
from nltk.tag import StanfordNERTagger

stanford_model_italian = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/ner-ita-nogpe-noiob_gaz_wikipedia_sloppy.ser.gz"

ner_tagger = StanfordNERTagger(stanford_model_italian)
tagged_text_nltk = ner_tagger.tag(de_bello_gallico_book1.split(" "))
Let's have a look at the output
tagged_text_nltk[:20]
Wrap up At this point we can "compare" the output of the three different methods we used, again by using the zip function.
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]))

for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]):
    print("Baseline: %s\nCLTK: %s\nNLTK: %s\n" % (baseline_out, cltk_out, nltk_out))
Exercise Extract the named entities from the English translation of the De Bello Gallico book 1. The CTS URN for this translation is urn:cts:latinLit:phi0448.phi001.perseus-eng2:1. Modify the code above to use the English model of the Stanford tagger instead of the Italian one. Hint:
stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"