markdown: stringlengths 0–37k
code: stringlengths 1–33.3k
path: stringlengths 8–215
repo_name: stringlengths 6–77
license: stringclasses (15 values)
Sometimes one also needs the index of the element, e.g. to plot a subset of data on different subplots. Then enumerate provides an elegant ("pythonic") way:
for index, fruit in enumerate(fruits):
    print('Current fruit:', fruits[index])
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
In principle one could also iterate over an index going from 0 to the number of elements:
for index in range(len(fruits)):
    print('Current fruit:', fruits[index])
for loops can also be embedded in list comprehensions, an elegant way to create lists:
fruits_with_b = [fruit for fruit in fruits if fruit.startswith('b')]
fruits_with_b
This is equivalent to the following loop:
fruits_with_b = []
for fruit in fruits:
    if fruit.startswith('b'):
        fruits_with_b.append(fruit)
fruits_with_b
Nested Loops
Python allows you to use one loop inside another loop, and any type of loop can be nested inside any other type: a for loop can sit inside a while loop, or vice versa.
for x in range(1, 3):
    for y in range(1, 4):
        print(f'{x} * {y} = {x*y}')
if
The if statement evaluates its block only if a given condition is met (conditions are built with comparison and logical operators such as ==, <, >, <=, >=, not, is, in, etc.). Optionally, we can add an else statement to execute alternative code when the condition is not met.
x = 'Mark'
if x in ['Mark', 'Jack', 'Mary']:
    print('present!')
else:
    print('absent!')

x = 'Tom'
if x in ['Mark', 'Jack', 'Mary']:
    print('present!')
else:
    print('absent!')
We can also use one or more elif statements to check multiple expressions; the block belonging to the first condition that evaluates to True is executed.
x = 'Tom'
if x in ['Mark', 'Jack', 'Mary']:
    print('present in list A!')
elif x in ['Tom', 'Dick', 'Harry']:
    print('present in list B!')
else:
    print('absent!')
else statements can also be used with while and for loops (the else block is executed when the loop finishes normally). You can also use nested if statements.
break
It terminates the current loop and resumes execution at the statement immediately after it. The break statement can be used in both while and for loops. If you are using nested loops, the break statement stops only the innermost loop.
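The loop-else behaviour mentioned above can be sketched in a minimal example (the function name below is made up for illustration): the else block runs only when the loop finishes without hitting break.

```python
def find_first_even(numbers):
    # The else clause runs only if the for loop was NOT interrupted by break.
    for n in numbers:
        if n % 2 == 0:
            result = n
            break
    else:
        result = None  # loop completed without finding an even number
    return result

print(find_first_even([1, 3, 4, 5]))  # 4
print(find_first_even([1, 3, 5]))     # None
```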
var = 10
while var > 0:
    print('Current variable value: ' + str(var))
    var = var - 1
    if var == 5:
        break
print('Good bye!')
continue The continue statement rejects all the remaining statements in the current iteration of the loop and moves the control back to the top of the loop (like a "skip"). The continue statement can be used in both while and for loops.
for letter in 'Python':
    if letter == 'h':
        continue
    print('Current Letter: ' + letter)
pass
The pass statement is a null operation; nothing happens when it executes. pass is also useful as a placeholder where your code will eventually go but has not been written yet.
for letter in 'Python':
    if letter == 'h':
        pass
        print('This is pass block')
    print('Current Letter: ' + letter)
print('Good bye!')
pass and continue could seem similar, but they are not: the message "This is pass block" would not have been printed if continue had been used instead. pass does nothing; continue jumps to the next iteration.
for letter in 'Python':
    if letter == 'h':
        continue
        print('This is pass block')
    print('Current Letter: ' + letter)
print('Good bye!')
Import section specific modules:
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
4_Visibility_Space/4_5_1_uv_coverage_uv_tracks.ipynb
ebonnassieux/fundamentals_of_interferometry
gpl-2.0
4.5.1 UV coverage: UV tracks
The objective of $\S$ 4.5.1 and $\S$ 4.5.2 is to give you a glimpse into the process of aperture synthesis. An interferometer measures components of the Fourier Transform of...
ant1 = np.array([-500e3, 500e3, 0])    # in m
ant2 = np.array([500e3, -500e3, +10])  # in m
Let's express the corresponding physical baseline in ENU coordinates.
b_ENU = ant2 - ant1              # baseline
D = np.sqrt(np.sum((b_ENU)**2))  # |b|
print(str(D/1000) + " km")
Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
L = (np.pi/180)*(45 + 0./60 + 0./3600)  # Latitude in radians
A = np.arctan2(b_ENU[0], b_ENU[1])
print("Baseline Azimuth = " + str(np.degrees(A)) + "°")
E = np.arcsin(b_ENU[2]/D)
print("Baseline Elevation = " + str(np.degrees(E)) + "°")
%matplotlib nbagg
plotBL.sphere(ant1, ant2, A, E, D, L)
Figure 4.5.1: A baseline located at +45$^\circ$ as seen from the sky. This plot is interactive and can be rotated in 3D to see different baseline projections, depending on the position of the source w.r.t. the physical baseline. On the interactive plot above, we represent a baseline located at +45$^\circ$. It is aligne...
# Observation parameters
c = 3e8      # Speed of light in m/s
f = 1420e6   # Frequency in Hz (HI line, 1420 MHz; 1420e9 would be 1420 GHz)
lam = c/f    # Wavelength
dec = (np.pi/180)*(-30 - 43.0/60 - 17.34/3600)  # Declination
time_steps = 600  # ...
4.5.1.1.3 Computing the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
As seen previously, we convert the baseline coordinates using the matrix transformation defined earlier.
ant1 = np.array([25.095, -9.095, 0.045])
ant2 = np.array([90.284, 26.380, -0.226])
b_ENU = ant2 - ant1
D = np.sqrt(np.sum((b_ENU)**2))
L = (np.pi/180)*(-30 - 43.0/60 - 17.34/3600)
A = np.arctan2(b_ENU[0], b_ENU[1])
print("Azimuth =", A*(180/np.pi))
E = np.arcsin(b_ENU[2]/D)
print("Elevation =", E*(180/np.pi))
X = D*(np.cos(L)*np.sin(E) - np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E) + np.cos(L)*np.cos(E)*np.cos(A))
As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\S$ 4.2.2:
$\lambda u = X \sin H + Y \cos H$
$\lambda v = -X \sin\delta \cos H + Y \sin\delta \sin H + Z \cos\delta$
$\lambda w = X \cos\delta \cos H - Y \cos\delta \sin H + Z \sin\delta$
u = lam**(-1)*( np.sin(h)*X + np.cos(h)*Y)/1e3
v = lam**(-1)*(-np.sin(dec)*np.cos(h)*X + np.sin(dec)*np.sin(h)*Y + np.cos(dec)*Z)/1e3
w = lam**(-1)*( np.cos(dec)*np.cos(h)*X - np.cos(dec)*np.sin(h)*Y + np.sin(dec)*Z)/1e3
We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.
%matplotlib nbagg
plotBL.UV(u, v, w)
Figure 4.5.2: $uvw$ track derived from the simulation and its projection in the $uv$-plane. The tracks in $uvw$ space are curves, and their projections onto the $uv$ plane are arcs. Let us focus on the track's projection in this plane. To get observation-independent knowledge of the track we can try to combine the three equations...
%matplotlib inline
from matplotlib.patches import Ellipse

# Parameters of the uv-track as an ellipse
a = np.sqrt(X**2 + Y**2)/lam/1e3  # major axis
b = a*np.sin(dec)                 # minor axis
v0 = Z/lam*np.cos(dec)/1e3        # center of ellipse
plotBL.UVellipse(u, v, w, a, b, v0)
Figure 4.5.3: The blue (resp. red) curve is the $uv$ track of the baseline $\mathbf{b}_{12}$ (resp. $\mathbf{b}_{21}$). As $I_\nu$ is real, the real part of the visibility $\mathcal{V}$ is even and the imaginary part is odd, making $\mathcal{V}(-u,-v)=\mathcal{V}^*(u,v)$. It implies that one baseline automatically provides a second measurement at $(-u,-v)$.
L = np.radians(90.)
ant1 = np.array([25.095, -9.095, 0.045])
ant2 = np.array([90.284, 26.380, -0.226])
b_ENU = ant2 - ant1
D = np.sqrt(np.sum((b_ENU)**2))
A = np.arctan2(b_ENU[0], b_ENU[1])
print("Azimuth =", A*(180/np.pi))
E = np.arcsin(b_ENU[2]/D)
print("Elevation =", E*(180/np.pi))
X = D*(np.cos(L)*np.sin(E) - np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E) + np.cos(L)*np.cos(E)*np.cos(A))
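The Hermitian symmetry $\mathcal{V}(-u,-v)=\mathcal{V}^*(u,v)$ discussed above can be checked numerically on a toy real-valued sky (the source list below is made up for illustration, not taken from the simulation):

```python
import numpy as np

# Hypothetical real-valued point sources: (amplitude A, direction cosines l, m)
sources = [(1.0, 0.01, -0.02), (0.5, -0.03, 0.015)]

def visibility(u, v):
    # V(u, v) = sum_i A_i * exp(-2*pi*1j*(u*l_i + v*m_i)) for a real-valued sky
    return sum(A * np.exp(-2j * np.pi * (u * l + v * m)) for A, l, m in sources)

V = visibility(120.0, -45.0)
V_mirror = visibility(-120.0, 45.0)
print(np.allclose(V_mirror, np.conj(V)))  # True: V(-u,-v) = V*(u,v)
```

This is why each physical baseline traces two point-symmetric arcs in the $uv$ plane.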
Let's compute the $uv$ tracks of an observation of the NCP ($\delta=90^\circ$):
dec = np.radians(90.)
uNCP = lam**(-1)*( np.sin(h)*X + np.cos(h)*Y)/1e3
vNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X + np.sin(dec)*np.sin(h)*Y + np.cos(dec)*Z)/1e3
wNCP = lam**(-1)*( np.cos(dec)*np.cos(h)*X - np.cos(dec)*np.sin(h)*Y + np.sin(dec)*Z)/1e3
# Parameters of the uv-track as an ellipse
aNCP = np.sqrt(X**2 + Y**2)/lam/1e3  # major axis
...
Let's compute the uv tracks when observing a source at $\delta=30^\circ$:
dec = np.radians(30.)
u30 = lam**(-1)*( np.sin(h)*X + np.cos(h)*Y)/1e3
v30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X + np.sin(dec)*np.sin(h)*Y + np.cos(dec)*Z)/1e3
w30 = lam**(-1)*( np.cos(dec)*np.cos(h)*X - np.cos(dec)*np.sin(h)*Y + np.sin(dec)*Z)/1e3
a30 = np.sqrt(X**2 + Y**2)/lam/1e3  # major axis
b30 = a30*np.sin(dec)               # minor axis (a30, not a)
...
Figure 4.5.4: $uv$ track for a baseline at the pole observing at $\delta=90^\circ$ (NCP) and at $\delta=30^\circ$ with the same color conventions as the previous figure. When observing a source at declination $\delta$, we still have an elliptical shape but centered at (0,0). In the case of a polar interferometer, the f...
L = np.radians(0.)  # equatorial interferometer (the text below describes a baseline at the equator)
X = D*(np.cos(L)*np.sin(E) - np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E) + np.cos(L)*np.cos(E)*np.cos(A))

# At local zenith == celestial equator
dec = np.radians(0.)
uEQ = lam**(-1)*(np.sin(h)*X + np.cos(h)*Y)/1e3
vEQ = lam**(-1)*(-np.sin(dec)*...
Figure 4.5.5: $uv$ track for a baseline at the equator observing at $\delta=0^\circ$ and at $\delta=10^\circ$, with the same color conventions as the previous figure. An equatorial interferometer observing its zenith will see radio sources crossing the sky on straight, linear paths. Therefore, they will produce straight $uv$ tracks.
H = np.linspace(-6, 6, 600)*(np.pi/12)  # Hour angle in radians
d = 100                                 # Baseline length (already divided by wavelength)
delta = 60*(np.pi/180)                  # Declination in radians
u_60 = d*np.cos(H)
v_60 = d*np.sin(H)*np.sin(delta)
<img src='figures/EW_1_d.svg' width=40%>
<img src='figures/EW_2_d.svg' width=40%>
<img src='figures/EW_3_d.svg' width=40%>
4.5.1.3.2 Simulating the sky
Let us populate our...
RA_sources = np.array([5 + 30.0/60, 5 + 32.0/60 + 0.4/3600, 5 + 36.0/60 + 12.8/3600, 5 + 40.0/60 + 45.5/3600])
DEC_sources = np.array([60, 60 + 17.0/60 + 57.0/3600, 61 + 12.0/60 + 6.9/3600, 61 + 56.0/60 + 34.0/3600])
Flux_sources_labels = np.array(["", "1 Jy", "0.5 Jy", "0.2 Jy"])
Flux_sources = np.array([1, 0.5, 0.2])  # in Jy (0.2 matches the "0.2 Jy" label above)
step_size = 200
print("Phase ...
We then convert ($\alpha$,$\delta$) to ($l$,$m$):
* $l = \cos \delta \sin \Delta \alpha$
* $m = \sin \delta\cos\delta_0 -\cos \delta\sin\delta_0\cos\Delta \alpha$
* $\Delta \alpha = \alpha - \alpha_0$
RA_rad = np.array(RA_sources)*(np.pi/12)
DEC_rad = np.array(DEC_sources)*(np.pi/180)
RA_delta_rad = RA_rad - RA_rad[0]
l = np.cos(DEC_rad)*np.sin(RA_delta_rad)
m = np.sin(DEC_rad)*np.cos(DEC_rad[0]) - np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad)
print("l =", l*(180/np.pi))
print("m =", m*(180/np.pi))
point_sources...
The source and phase centre coordinates are now given in degrees.
%matplotlib inline
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.xlim([-4, 4])
plt.ylim([-4, 4])
plt.xlabel("$l$ [degrees]")
plt.ylabel("$m$ [degrees]")
plt.plot(l[0], m[0], "bx")
plt.plot(l[1:]*(180/np.pi), m[1:]*(180/np.pi), "ro")
counter = 1
for xy in zip(l[1:]*(180/np.pi) + 0.25, m[1:]*(18...
Figure 4.5.6: Distribution of the simulated sky in the $l$,$m$ plane. 4.5.1.3.3 Simulating an observation We will now create a fully-filled $uv$-plane, and sample it using the EW-baseline track we created in the first section. We will be ignoring the $w$-term for the sake of simplicity.
u = np.linspace(-1*(np.amax(np.abs(u_60))) - 10, np.amax(np.abs(u_60)) + 10, num=step_size, endpoint=True)
v = np.linspace(-1*(np.amax(np.abs(v_60))) - 10, np.amax(np.abs(v_60)) + 10, num=step_size, endpoint=True)
uu, vv = np.meshgrid(u, v)
zz = np.zeros(uu.shape).astype(complex)
We create the dimensions of our visibility plane.
s = point_sources.shape
for counter in range(1, s[0] + 1):
    A_i = point_sources[counter-1, 0]
    l_i = point_sources[counter-1, 1]
    m_i = point_sources[counter-1, 2]
    zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i + vv*m_i))
zz = zz[:, ::-1]
We create our fully-filled visibility plane. With a "perfect" interferometer, we could sample the entire $uv$-plane. Since we only have a finite number of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilities $V(u,v)$ via the Fourier transform. For a bunch o...
u_track = u_60
v_track = v_60
z = np.zeros(u_track.shape).astype(complex)
s = point_sources.shape
for counter in range(1, s[0] + 1):
    A_i = point_sources[counter-1, 0]
    l_i = point_sources[counter-1, 1]
    m_i = point_sources[counter-1, 2]
    z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i + v_track*m_i))
Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(zz.real, extent=[-1*(np.amax(np.abs(u_60))) - 10, np.amax(np.abs(u_60)) + 10,
                            -1*(np.amax(np.abs(v_60))) - 10, np.amax(np.abs(v_60)) + 10])
plt.plot(u_60, v_60, "k")
plt.xlim([-1*(np.amax(np.abs(u_60))) - 10, np.amax(np.abs(u_60)) + 10])
plt.ylim(-1*(np.ama...
Figure 4.5.7: Real and imaginary parts of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility. We now plot the sampled visibilites as a function of time-slots, i.e $V(u_t(t_s),v_t(t_s))$.
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(z.real)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Real: sampled visibilities")
plt.subplot(122)
plt.plot(z.imag)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Imag: sampled visibilities")
Figure 4.5.8: Real and imaginary parts of the visibility sampled by the black curve in Fig. 4.5.7, plotted as a function of time.
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(abs(zz), extent=[-1*(np.amax(np.abs(u_60))) - 10, np.amax(np.abs(u_60)) + 10,
                            -1*(np.amax(np.abs(v_60))) - 10, np.amax(np.abs(v_60)) + 10])
plt.plot(u_60, v_60, "k")
plt.xlim([-1*(np.amax(np.abs(u_60))) - 10, np.amax(np.abs(u_60)) + 10])
plt.yli...
Figure 4.5.9: Amplitude and Phase of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(abs(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Abs: sampled visibilities")
plt.subplot(122)
plt.plot(np.angle(z))
plt.xlabel("Timeslots")
plt.ylabel("Radians")  # phase, not flux
plt.title("Phase: sampled visibilities")
By default, Python has a similar data type (lists), but it is numerically inefficient.
# Example: create a Python list with values 0 to 999 and square every element
L = range(1000)
%timeit [i**2 for i in L]
# Now the same with NumPy
a = np.arange(1000)
%timeit a**2
print('Numpy is')
print(111/5.4)
print('times faster')
Python_scientific/numpy_matplot_ejemplos.ipynb
elsuizo/Charlas_presentaciones
gpl-2.0
Main features and utilities:
# Creating arrays
a = np.array([1, 1, 1])                            # 1D
b = np.array([[1, 1, 1], [1, 1, 1]])               # 2D (matrix)
c = np.array([[[1, 1, 1], [1, 1, 1], [1, 1, 1]]])  # 3D (tensor...)
print(a.shape)
print(b.shape)
print(c.shape)
# We can create predefined arrays with very useful functions
a = np.arange(10)  # an array of integers from 0 to 10 (exclusive)
a...
Indexing and slicing
Array items can be accessed in a natural way, as in Matlab (keep in mind that indices start at 0).
a = np.arange(10)
a
# Create a tuple
a[0], a[2], a[-1]
# Slicing: [start:stop:step]
# All three are optional: by default start=0, stop=end, step=1
a[2:8:2]
a[::4]  # only change the step
Fancy indexing
There are more ways to index and assign.
np.random.seed(3)
a = np.random.randint(0, 21, 15)  # random_integers is deprecated; randint's upper bound is exclusive
a
(a % 3 == 0)
mask = (a % 3 == 0)
a_multiplos_3 = a[mask]
a_multiplos_3
# We can index and assign at the same time
a[a % 3 == 0] = -1
a
Elementwise operations
Apply functions to every element of an array (DO NOT USE FOR loops).
a = np.array([1, 2, 3, 4])
a
# All arithmetic operations work elementwise
a + 1
j = np.arange(5)
2**(j + 1) - j
# Multiplication is ELEMENT BY ELEMENT
a * a
# For matrix multiplication use dot
b = np.random.rand(3, 3)
c = np.random.rand(3, 3)
np.dot(b, c)
# Each ndarray object has many methods e...
Optimizations with Fortran and C <img src="files/Images/index.png" style="float: left;"/> <div style="clear: both;">
%%file hellofortran.f
C File hellofortran.f
        subroutine hellofortran (n)
        integer n
        do 100 i=0, n
            print *, "Hola Soy Fortran tengo muuchos años"
100     continue
        end
We generate a Python module with f2py:
!f2py -c -m hellofortran hellofortran.f
We import the module we just generated and use it:
%%file hello.py
import hellofortran
hellofortran.hellofortran(5)

# Run the script
!python hello.py
Example 2: cumulative sum, vector in, vector out. First we write the implementation in pure Python; this kind of sum is particularly expensive because it needs for loops.
# This is not the best implementation,
# because the loop runs in Python
def py_dcumsum(a):
    b = np.empty_like(a)
    b[0] = a[0]
    for n in range(1, len(a)):
        b[n] = b[n-1] + a[n]
    return b
Now we write the Fortran implementation:
%%file dcumsum.f
c File dcumsum.f
        subroutine dcumsum(a, b, n)
        double precision a(n)
        double precision b(n)
        integer n
cf2py   intent(in) :: a
cf2py   intent(out) :: b
cf2py   intent(hide) :: n

        b(1) = a(1)
        do 100 i=2, n
            b(i) = b(i-1) + a(i)
100     continue
        end
We compile it directly into a Python module:
!f2py -c dcumsum.f -m dcumsum

# Import the freshly created module
import dcumsum

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
py_dcumsum(a)
dcumsum.dcumsum(a)
Now we benchmark them:
a = np.random.rand(10000)
%timeit py_dcumsum(a)
%timeit dcumsum.dcumsum(a)
%timeit a.cumsum()
Wow! A vectorization example:
%run srinivasan_pruebas.py
%run srinivasan_pruebas_vec.py
<img src="files/Images/logo2.png" style="float: left;"/> <div style="clear: both;"> #Matplotlib ## The module for creating and visualizing plots [Official page](http://matplotlib.org/)
# So that the plots are embedded in the notebook
%pylab inline
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
plot(X, C)
plot(X, S)
t = 2 * np.pi / 3
plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
...
Examples
Example 1: the Navier-Stokes equations with non-linear convection in 2D
Now we solve 2D convection, represented by the pair of coupled partial differential equations below:
$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} = 0$$
$$\frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y} = 0$$
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

### Variable declarations
nx = 101
ny = 101
nt = 80
c = 1
dx = 2.0/(nx-1)
dy = 2.0/(ny-1)
sigma = .2
dt = sigma*dx

x = np.linspace(0, 2, nx)
y = np.linspace(0, 2, ny)

u = np.ones((ny, nx))  # create an ny x nx array of 1's
v = np.ones((n...
Since we want to translate a new data split ('test'), we must add it to the dataset instance, just as we did before (in the first tutorial). In case we also have the references of the test split and we want to evaluate it, we can add them to the dataset. Note that this is not mandatory and we could just predict without evaluating.
dataset.setInput('examples/EuTrans/DATA/test.es', 'test',
                 type='text', id='source_text',
                 pad_on_batch=True, tokenization='tokenize_none',
                 fill='end', max_text_len=30, min_occ=0)
dataset.setInput(None, 'test', ...
examples/3_decoding_tutorial.ipynb
Sasanita/nmt-keras
mit
Now, let's load the translation model. Suppose we want to load the model saved at the end of the epoch 4:
params['INPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len[params['INPUTS_IDS_DATASET'][0]]
params['OUTPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len[params['OUTPUTS_IDS_DATASET'][0]]

# Load model
nmt_model = loadModel('trained_models/tutorial_model', 4)
Once we have loaded the model, we just have to invoke the sampling method (in this case, the beam search algorithm) for the 'test' split:
params_prediction = {'max_batch_size': 50,
                     'n_parallel_loaders': 8,
                     'predict_on_sets': ['test'],
                     'beam_size': 12,
                     'maxlen': 50,
                     'model_inputs': ['source_text', 'state_below'],
                     'model_outputs': [...
At this point, the variable 'predictions' holds the word indices of the hypotheses. We must decode them into words. To do this, we use the dictionary stored in the dataset object:
from keras_wrapper.utils import decode_predictions_beam_search

vocab = dataset.vocabulary['target_text']['idx2words']
predictions = decode_predictions_beam_search(predictions, vocab, verbose=params['VERBOSE'])
Finally, we store the system hypotheses:
filepath = nmt_model.model_path + '/' + 'test' + '_sampling.pred'  # results file
from keras_wrapper.extra.read_write import list2file
list2file(filepath, predictions)
If we have the references of this split, we can also evaluate the performance of our system on it. First, we must add them to the dataset object:
# In case we had the references of this split, we could also load the split and evaluate on it
dataset.setOutput('examples/EuTrans/DATA/test.en', 'test',
                  type='text', id='target_text',
                  pad_on_batch=True, tokenization='tokenize_none',
                  sample_w...
Next, we call the evaluation system: the COCO package. Although its main use is multimodal captioning, we can use it for machine translation:
from keras_wrapper.extra.evaluation import select

metric = 'coco'
# Apply sampling
extra_vars = dict()
extra_vars['tokenize_f'] = eval('dataset.' + 'tokenize_none')
extra_vars['language'] = params['TRG_LAN']
extra_vars['test'] = dict()
extra_vars['test']['references'] = dataset.extra_variables['test']['target_text']
me...
The matrix $A$ for linear regression is:
import numpy as np

M = np.vstack([np.ones_like(train_X), train_X]).T
M
print(np.dot(M.T, M))
print(np.dot(M.T, train_Y))
ML_SS2017/linear_regression_from_np_to_tf.ipynb
marcinofulus/teaching
gpl-3.0
The coefficients will be exactly:
c = np.linalg.solve(np.dot(M.T, M), np.dot(M.T, train_Y))
c
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.plot(train_X, c[1]*train_X + c[0], label='Fitted line')
plt.legend()
plt.close()
Optimization with an iterative method. We do not assume that the problem is linear regression. We use: https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/optimize.html
from scipy.optimize import minimize

def cost(c, x=train_X, y=train_Y):
    return sum((c[0] + x_*c[1] - y_)**2 for (x_, y_) in zip(x, y))

cost([1, 2])
res = minimize(cost, [1, 1], method='nelder-mead', options={'xtol': 1e-8, 'disp': True})
res.x
x = np.linspace(-2, 2, 77)
y = np.linspace(-2, 2, 77)
X, Y = np.meshgrid(x, y)
co...
TensorFlow: gradient descent
# tf Graph input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
W = tf.Variable(1.0, name="weight")
b = tf.Variable(1.0, name="bias")

# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2))/(2*n_samples)

# Gradient desce...
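For comparison, the same mean-squared-error gradient descent can be sketched in plain NumPy; the synthetic data below is made up for illustration, and the update rule matches the cost defined in the TensorFlow graph above:

```python
import numpy as np

# Hypothetical training data: y ≈ 3x + 2 with a little noise
rng = np.random.default_rng(0)
train_X = rng.uniform(-1, 1, 50)
train_Y = 3.0 * train_X + 2.0 + 0.01 * rng.normal(size=50)

W, b = 1.0, 1.0   # initial weights, as in the TF graph
lr = 0.1
n = len(train_X)
for _ in range(500):
    pred = W * train_X + b
    err = pred - train_Y
    # Gradients of cost = sum((pred - Y)**2) / (2n)
    W -= lr * np.dot(err, train_X) / n
    b -= lr * err.sum() / n

print(W, b)  # close to the true slope 3 and intercept 2
```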
Feature construction and data clean-up.
1. Z-score normalisation of data.
2. Group each of the measurement parameters into quartiles. Most of the classification methods find data like this easier to work with.
3. Create a series of 'adjacent' parameters by looking for the above and below depth sample for ea...
train_columns = train.columns[1:]
std_scaler = preprocessing.StandardScaler().fit(train[train_columns])
train_std = std_scaler.transform(train[train_columns])
train_std_frame = train
for i, column in enumerate(train_columns):
    train_std_frame.loc[:, column] = train_std[:, i]
train = train_std_frame
master_columns...
GCC_FaciesClassification/01 - Facies Classification - GCC-VALIDATION.ipynb
esa-as/2016-ml-contest
apache-2.0
4. TPOT
train_vector = ['class']
train_columns = frame.columns[4:]

# train_f, test_f = train_test_split(frame, test_size=0.1, random_state=7)
TPOT uses a genetic algorithm to tune model parameters for the most effective fit. This can take quite a while to process if you want to re-run this part!
# tpot = TPOTClassifier(verbosity=2, generations=5, max_eval_time_mins=30)
# tpot.fit(train_f[train_columns], train_f['class'])
# tpot.score(test_f[train_columns], test_f['class'])
# tpot.export('contest_export.py')

!cat contest_export.py

from sklearn.pipeline import make_pipeline
from sklearn.feature_selection imp...
5.0 Workflow for Test Data Run this to generate results from output model.
test_path = r'../validation_data_nofacies.csv'
# Read test data into a dataframe
test = pd.read_csv(test_path)
# The TPOT library requires the target class to be renamed to 'class'
test.rename(columns={'Facies': 'class'}, inplace=True)
test_columns = test.columns
formations = {}
for i, value in enumerate(test['Format...
<header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" height="100px" align="left"/> <img src="images/mat.png" alt="" height="100px" align="right"/> </header> <br/><br/><br/><br/><br/> MAT281 Aplicaciones de la Matemática en la Ingeniería Sebastián Flores https://www.github.com/usantamaria/mat281 Clas...
from sklearn import datasets
import matplotlib.pyplot as plt

iris = datasets.load_iris()

def plot(dataset, ax, i, j):
    ax.scatter(dataset.data[:, i], dataset.data[:, j], c=dataset.target, s=50)
    ax.set_xlabel(dataset.feature_names[i], fontsize=20)
    ax.set_ylabel(dataset.feature_names[j], fontsize=20)

# row and...
clases/Unidad4-MachineLearning/Clase02-Clustering/clustering.ipynb
sebastiandres/mat281
cc0-1.0
Clustering
Crucial question: if we did not know that 3 types of Iris exist, could we algorithmically find 3 types of flowers?
Clustering
We have unlabelled/ungrouped data and look for a "natural" grouping of the data. There are no examples to learn from: an unsupervised method. ...
from mat281_code import iplot
iplot.kmeans(N_points=100, n_clusters=4)
k-means
<span class="good">Advantages</span> Fast and simple to program.
<span class="bad">Disadvantages</span> Works on continuous data, or where distances and means can be defined. The heuristic depends on the initial points. It requires specifying the number of clusters $k$. It does not work correctly in all c...
import numpy as np
from scipy.linalg import norm

def find_centers(X, k, seed=None):
    if seed is None:
        seed = np.random.randint(10000000)
    np.random.seed(seed)
    # Initialize to k random centers
    old_centroids = random_centers(X, k)
    new_centroids = random_centers(X, k)
    while not has_converged(n...
Application to data
from mat281_code import gendata
from mat281_code import plot
from mat281_code import kmeans

X = gendata.init_blobs(1000, 4, seed=40)
ax = plot.data(X)
centroids, clusters = kmeans.find_centers(X, k=4)
plot.clusters(X, centroids, clusters)
Do we need to reinvent the wheel? Let's use the sklearn library.
from mat281_code import gendata from mat281_code import plot from sklearn.cluster import KMeans X = gendata.init_blobs(10000, 6, seed=43) plot.data(X) kmeans = KMeans(n_clusters=6) kmeans.fit(X) centroids = kmeans.cluster_centers_ clusters = kmeans.labels_ plot.clusters(X, centroids, clusters)
clases/Unidad4-MachineLearning/Clase02-Clustering/clustering.ipynb
sebastiandres/mat281
cc0-1.0
How to select k? Prior knowledge of the data. Trial and error. The elbow rule. Estimating the number of clusters in a dataset via the gap statistic, Tibshirani, Walther and Hastie (2001). Selection of k in k-means, Pham, Dimov and Nguyen (2004). Back to the Iris Dataset Let's apply k-means to the Iris ...
import numpy as np from sklearn import datasets from sklearn.cluster import KMeans from sklearn.metrics import confusion_matrix # Parameters n_clusters = 8 # Loading the data iris = datasets.load_iris() X = iris.data y_true = iris.target # Running the algorithm kmeans = KMeans(n_clusters) kmeans.fit(X) y_pred = kmea...
clases/Unidad4-MachineLearning/Clase02-Clustering/clustering.ipynb
sebastiandres/mat281
cc0-1.0
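The elbow rule mentioned above can be sketched on synthetic data. This is an illustrative example, not part of the course materials: `make_blobs` stands in for the `gendata` module, and one simply inspects how KMeans' `inertia_` (within-cluster sum of squared distances) decreases as k grows, looking for the "elbow" where the curve flattens.

```python
# Elbow-rule sketch on synthetic data (make_blobs is an assumption standing
# in for the course's own data generators).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

inertias = {}
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_  # within-cluster sum of squared distances
```

With 4 true clusters, the inertia drops sharply up to k = 4 and only slowly afterwards; plotting `inertias` against k makes the elbow visible.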
1) How does gradient checking work? Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function. Because forward propagation is relatively easy to implement, you're confident you got tha...
# GRADED FUNCTION: forward_propagation def forward_propagation(x, theta): """ Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x) Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: J -- the value...
Improving Deep Neural Networks/Gradient+Checking.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Expected Output: <table> <tr> <td> ** dtheta ** </td> <td> 2 </td> </tr> </table> Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking. Instructions: - First compute "gradapprox" ...
# GRADED FUNCTION: gradient_check def gradient_check(x, theta, epsilon = 1e-7): """ Implement the backward propagation presented in Figure 1. Arguments: x -- a real-valued input theta -- our parameter, a real number as well epsilon -- tiny shift to the input to compute approximated gradien...
Improving Deep Neural Networks/Gradient+Checking.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
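The whole exercise can be condensed into a self-contained sketch for the 1-D case J(theta) = theta * x described above: backpropagation gives dJ/dtheta = x, and the centered difference (J(theta+eps) - J(theta-eps)) / (2*eps) should agree with it up to a tiny relative error.

```python
# Condensed sketch of gradient checking for J(theta) = theta * x.
import numpy as np

def forward_propagation(x, theta):
    return theta * x                      # J(theta) = theta * x

def backward_propagation(x, theta):
    return x                              # dJ/dtheta = x

def gradient_check(x, theta, epsilon=1e-7):
    # Centered-difference approximation of the gradient
    gradapprox = (forward_propagation(x, theta + epsilon)
                  - forward_propagation(x, theta - epsilon)) / (2 * epsilon)
    grad = backward_propagation(x, theta)
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator        # relative difference

difference = gradient_check(x=2, theta=4)
```

A relative difference well below 1e-7 indicates the analytic gradient matches the numerical one.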
You obtained some results on the fraud detection test set, but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify whether your gradients are correct. How does gradient checking work? As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation....
# GRADED FUNCTION: gradient_check_n def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7): """ Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n Arguments: parameters -- python dictionary containing your parameters "W1", "b1", ...
Improving Deep Neural Networks/Gradient+Checking.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Data preparation Split the data into training set and test set
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(ourdata.data, ourdata.target, test_size=0.3) print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) from scipy.stats import itemfreq itemfreq(y_train)
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
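Note that `scipy.stats.itemfreq`, used above to check class balance, has since been deprecated and removed in newer SciPy versions. A minimal sketch of the same frequency table with plain NumPy (the labels below are toy values, not the notebook's data):

```python
# np.unique with return_counts=True replaces the removed scipy.stats.itemfreq.
import numpy as np

y_train = np.array([0, 1, 1, 2, 2, 2, 0, 1])  # toy labels for illustration
values, counts = np.unique(y_train, return_counts=True)
freq = dict(zip(values.tolist(), counts.tolist()))
```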
Let's try an unrealistic algorithm: kNN
from sklearn.neighbors import KNeighborsClassifier hx_knn = KNeighborsClassifier() hx_knn.fit(X_train, y_train) hx_knn.predict(X_train)
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
Training set performance evaluation Confusion matrix https://en.wikipedia.org/wiki/Confusion_matrix
from sklearn.metrics import confusion_matrix, f1_score print(confusion_matrix(y_train, hx_knn.predict(X_train))) print(f1_score(y_train, hx_knn.predict(X_train)))
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
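It can help to see how `f1_score` relates to the confusion matrix entries. A small sketch on toy binary labels (not the notebook's data): F1 is the harmonic mean of precision and recall, both of which come straight from the matrix.

```python
# Deriving F1 from confusion-matrix counts on toy binary labels.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1_manual = 2 * precision * recall / (precision + recall)
```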
Moment of truth: test set performance evaluation
print(confusion_matrix(y_test, hx_knn.predict(X_test))) print(f1_score(y_test, hx_knn.predict(X_test)))
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
Classical analysis: Logistic Regression Classifier (Mint)
from sklearn.linear_model import LogisticRegression hx_log = LogisticRegression() hx_log.fit(X_train, y_train) hx_log.predict(X_train) # Training set evaluation print(confusion_matrix(y_train, hx_log.predict(X_train))) print(f1_score(y_train, hx_log.predict(X_train))) # test set evaluation print(confusion_matrix(y_te...
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
Your turn: Task 1 Suppose there is a learning algorithm called Naive Bayes and the API is located in http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB Create a hx_nb, fit the data, and evaluate the training set and test set performance. Regression exampl...
from sklearn.datasets import load_boston bostondata = load_boston() print(bostondata.target) print(bostondata.data.shape) ### Learn more about the dataset print(bostondata.DESCR) # how the first row of data looks bostondata.data[1,] BX_train, BX_test, By_train, By_test = train_test_split(bostondata.data, boston...
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
Classical algo: Linear Regression
from sklearn.linear_model import LinearRegression hx_lin = LinearRegression() hx_lin.fit(BX_train, By_train) hx_lin.predict(BX_train)
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
Plot a scatter plot of predicted and actual value
import matplotlib.pyplot as plt plt.scatter(By_train, hx_lin.predict(BX_train)) plt.ylabel("predicted value") plt.xlabel("actual value") plt.show() ### performance evaluation: training set from sklearn.metrics import mean_squared_error mean_squared_error(By_train, hx_lin.predict(BX_train)) ### performance evaluation...
Super_textbook.ipynb
chainsawriot/pycon2016hk_sklearn
mit
Notice that the GenericLikelihoodModel class provides automatic differentiation, so we did not have to provide Hessian or score functions in order to calculate the covariance estimates. Example 2: Negative Binomial Regression for Count Data Consider a negative binomial regression model for count data with log-likelih...
import numpy as np from scipy.stats import nbinom def _ll_nb2(y, X, beta, alph): mu = np.exp(np.dot(X, beta)) size = 1/alph prob = size/(size+mu) ll = nbinom.logpmf(y, size, prob) return ll
examples/notebooks/generic_mle.ipynb
statsmodels/statsmodels
bsd-3-clause
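A quick sanity check of `_ll_nb2` on toy data (an assumption for illustration, not part of the original notebook): with an intercept-only design, beta = [0] gives mu = 1, and alph = 1 gives size = 1 and prob = 0.5, so the NB2 pmf reduces to P(y = k) = 0.5 ** (k + 1).

```python
# Sanity-checking the NB2 log-likelihood on a toy intercept-only model.
import numpy as np
from scipy.stats import nbinom

def _ll_nb2(y, X, beta, alph):
    mu = np.exp(np.dot(X, beta))
    size = 1 / alph
    prob = size / (size + mu)
    return nbinom.logpmf(y, size, prob)

y = np.array([0, 1, 2])
X = np.ones((3, 1))          # intercept-only design matrix
ll = _ll_nb2(y, X, beta=np.array([0.0]), alph=1.0)
```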
Setting up gdeltPyR It's easy to set up gdeltPyR. This single line gets us ready to query. See the GitHub project page for details on accessing other tables and setting other parameters. Then, we just pass in a date to pull the data. It's really that simple. The only concern is memory. Pulling multiple days of GD...
gd = gdelt.gdelt() %time vegas = gd.Search(['Oct 1 2017','Oct 2 2017'],normcols=True,coverage=True)
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Time format transformations These custom functions handle time transformations.
def striptimen(x): """Strip time from numpy array or list of dates that are integers""" date = str(int(x)) n = np.datetime64("{}-{}-{}T{}:{}:{}".format(date[:4],date[4:6],date[6:8],date[8:10],date[10:12],date[12:])) return n def timeget(x): '''convert to datetime object with UTC time tag''' ...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
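To see what the `striptimen` helper above does, here it is again on a single GDELT-style `dateadded` value (YYYYMMDDHHMMSS); the timestamp below is made up for illustration.

```python
# striptimen parses an integer timestamp into a numpy datetime64.
import numpy as np

def striptimen(x):
    """Strip time from a date that is an integer like 20171001221500."""
    date = str(int(x))
    return np.datetime64("{}-{}-{}T{}:{}:{}".format(
        date[:4], date[4:6], date[6:8], date[8:10], date[10:12], date[12:]))

stamp = striptimen(20171001221500)
```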
Now we apply the functions to create a datetime object column (dates) and a timezone-aware column (zone).
# vectorize our function vect = np.vectorize(striptimen) # use custom functions to build time enabled columns of dates and zone vegastimed = (vegas.assign( dates=vect(vegas.dateadded.values)).assign( zone=list(timeget(k) for k in vegas.assign( dates=vect(vegas.dateadded...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Filtering to a city and specific CAMEO Code I return data in pandas dataframes to leverage the power of pandas data manipulation. Now we filter our data on the two target fields; actiongeofeatureid and eventrootcode. To learn more about the columns, see this page with descriptions for each header.
# filter to data in Las Vegas and about violence/fighting/mass murder only vegastimedfil=(vegastimed[ ((vegas.eventrootcode=='19') | (vegas.eventrootcode=='20') | (vegas.eventrootcode=='18')) & (vegas.actiongeofeatureid=...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Stripping out unique news providers This regex extracts baseurls from the sourceurl column. These extractions allow us to analyze the contributions of unique providers in GDELT events data.
# regex to capture the scheme and host (base URL) of a link s = re.compile('(http://|https://)([A-Za-z0-9_\.-]+)')
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
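Applied to a single story URL (the URL below is hypothetical), the pattern's full match gives scheme plus host, while group 2 gives the host alone:

```python
# Base-URL extraction with the same pattern as above.
import re

s = re.compile(r'(http://|https://)([A-Za-z0-9_\.-]+)')

m = s.search('https://www.reviewjournal.com/local/some-story')
scheme_and_host = m.group()   # full match: scheme + host
host = m.group(2)             # host only
```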
Build Chronological List If you want to see a chronological list, you'll need to time enable your data.
# build the chronological news stories and show the first few rows print(vegastimedfil.set_index('zone')[['dates','sourceurl']].head())
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Time-enabling the entire dataset is a fairly simple task.
# example of converting to Los Angeles time. vegastimed.set_index( vegastimed.dates.astype('datetime64[ns]') ).tz_localize( 'UTC' ).tz_convert( 'America/Los_Angeles' ...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
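A minimal self-contained sketch of the same UTC to America/Los_Angeles conversion on a toy series (the timestamps are made up for illustration):

```python
# localize naive timestamps as UTC, then convert to Los Angeles time
import pandas as pd

ts = pd.Series(
    [1, 2],
    index=pd.to_datetime(['2017-10-02 05:00:00', '2017-10-02 06:00:00']),
)
converted = ts.tz_localize('UTC').tz_convert('America/Los_Angeles')
# On 2 Oct 2017 Los Angeles is on PDT (UTC-7), so 05:00 UTC becomes
# 22:00 the previous evening.
```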
Counting Who Produced the Most We use pandas to find the provider with the most unique content. One drawback of GDELT is repeated URLs. But in the pandas ecosystem, removing duplicates is easy. We extract provider base URLs, remove duplicates, and count the number of articles.
# regex to strip a url from a string; should work on any url (let me know if it doesn't) s = re.compile('(http://|https://)([A-Za-z0-9_\.-]+)') # apply regex to each url; strip provider; assign as new column print(vegastimedfil.assign(provider=vegastimedfil.sourceurl.\ apply(lambda x: s.search(x).group() if s.se...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
How many unique news providers? This next block uses a regex to strip the base URL from each record. Then you just count the distinct providers with pandas (here via value_counts()) to get a total count of providers.
# chained operation to return shape vegastimedfil.assign(provider=vegastimedfil.sourceurl.\ apply(lambda x: s.search(x).group() if \ s.search(x) else np.nan))['provider']\ .value_counts().shape
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Knowing how many providers are producing, it would be a good idea to understand the distribution of production, i.e. how many articles each provider published. We use a distribution plot and a cumulative distribution plot.
# make plot canvas f,ax = plt.subplots(figsize=(15,5)) # set title plt.title('Distributions of Las Vegas Active Shooter News Production') # kernel density plot sns.kdeplot(vegastimedfil.assign(provider=vegastimedfil.sourceurl.\ apply(lambda x: s.search(x).group() if s.search(x) else np.nan))['provider']\ .valu...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Time Series: Calculating the volumetric change Next, we use the exponentially weighted moving average to see the change in production.
timeseries = pd.concat([vegastimed.set_index(vegastimed.dates.astype('datetime64[ns]')).tz_localize('UTC').tz_convert('America/Los_Angeles').resample('15T')['sourceurl'].count(),vegastimedfil.set_index('zone').resample('15T')['sourceurl'].count()], axis=1) # fill empty event counts with zero timeseries.fillna...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
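The exponentially weighted moving average mentioned above can be computed in pandas with `ewm`. A sketch on toy 15-minute counts (the values and `span=3` are arbitrary choices for illustration, not the notebook's actual call):

```python
# Exponentially weighted moving average over resampled 15-minute counts.
import pandas as pd

counts = pd.Series(
    [0, 0, 4, 12, 30, 25],
    index=pd.date_range('2017-10-01 21:00', periods=6, freq='15min'),
)
smoothed = counts.ewm(span=3).mean()
```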
Finding Who Produced the "Fastest" This block of code finds the news provider who produced reports fastest "on average". We convert the date of each article to epoch time, average across providers, and compare. Again, pandas makes this easy.
# complex, chained operations to perform all steps listed above print((((vegastimedfil.reset_index().assign(provider=vegastimedfil.reset_index().sourceurl.\ apply(lambda x: s.search(x).group() if s.search(x) else np.nan),\ epochzone=vegastimedfil.set_index('dates')\ ...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Getting the Content This code gets the content (or tries to) at the end of each GDELT sourceurl.
# Author: Linwood Creekmore # Email: valinvescap@gmail.com # Description: Python script to pull content from a website (works on news stories). # Notes """ 23 Oct 2017: updated to include readability based on PyCon talk: https://github.com/DistrictDataLabs/PyCon2016/blob/master/notebooks/tutorial/Working%20with%20Tex...
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
Testing the Function Here is a test. The done dictionary is important; it keeps you from repeating calls to URLs you've already processed. It's like "caching".
# create vectorized function vect = np.vectorize(textgetter) #vectorize the operation cc = vect(vegastimedfil['sourceurl'].values[10:25]) #Vectorized opp dd = list(next(l) for l in cc) # the output pd.DataFrame(dd).head(5)
notebooks/gdeltPyR Tutorial Notebook.ipynb
linwoodc3/linwoodc3.github.io
apache-2.0
1) What different population types are of concern for the UNHCR?
df['Population type'].unique()
foundations-homework/08/homework-08-gruen-dataset3-refugees.ipynb
gcgruen/homework
mit
2)-8) For each of these population types, which were the 20 countries hosting the most in 2013? except: "Others of concern" @TAs: This question might actually count as six questions, right? ;)
df.columns recent = df[['Country', 'Origin_Returned_from', 'Population type','2013']] def population_type_count(a): a = recent[recent['Population type'] == a] a_table = pd.DataFrame(a.groupby('Country')['2013'].sum()) return a_table.sort_values(by='2013', ascending = ...
foundations-homework/08/homework-08-gruen-dataset3-refugees.ipynb
gcgruen/homework
mit
Top 20: most refugees
population_type_count('Refugees')
foundations-homework/08/homework-08-gruen-dataset3-refugees.ipynb
gcgruen/homework
mit