Keep in mind that increasing the number of folds will give you a larger training dataset, but will lead to more repetitions, and therefore a slower evaluation:
plot_cv(KFold(n_splits=10), iris.target)
Day_1_Scientific_Python/scikit-learn/13-Cross-Validation.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
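As a rough sketch of this trade-off (assuming scikit-learn's built-in iris data and `cross_val_score`; `plot_cv` is a plotting helper defined earlier in the notebook), each extra fold adds one more fit of the model:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

iris = load_iris()
clf = LogisticRegression(max_iter=1000)

# each fold produces one fitted model and one score,
# so more folds mean larger training sets but a slower evaluation
for k in [5, 10]:
    scores = cross_val_score(clf, iris.data, iris.target,
                             cv=KFold(n_splits=k, shuffle=True, random_state=0))
    print(k, "folds ->", len(scores), "fits, mean accuracy %.3f" % scores.mean())
```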
Another helpful cross-validation generator is ShuffleSplit. This generator simply splits off a random portion of the data repeatedly. This allows the user to specify the number of repetitions and the training set size independently:
plot_cv(ShuffleSplit(n_splits=5, test_size=.2), iris.target)
If you want a more robust estimate, you can just increase the number of iterations:
plot_cv(ShuffleSplit(n_splits=10, test_size=.2), iris.target)
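A minimal numerical illustration of why more iterations help (again iris data, a sketch rather than part of the original notebook): the standard error of the mean score shrinks roughly as the square root of the number of splits.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

iris = load_iris()
clf = LogisticRegression(max_iter=1000)

# 20 random 80/20 splits of the data
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
scores = cross_val_score(clf, iris.data, iris.target, cv=cv)

# standard error of the mean shrinks roughly as 1/sqrt(n_splits)
print("mean %.3f +/- %.3f" % (scores.mean(), scores.std() / np.sqrt(len(scores))))
```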
Complex Impedances You could also renormalize to a complex port impedance if you're crazy. For example, renormalizing to 50j, one would expect: $$ \Gamma^{'} = \frac{50-50j}{50+50j} = \frac{1-j}{1+j} = -j $$ However, one finds an unexpected result when plotting the Smith chart:
match_at_50 = rf.wr10.match()
match_at_50.renormalize(50j)  # same as renormalize(50j, s_def='power')
match_at_50.plot_s_smith(**kw)  # expect -1j
doc/source/examples/networktheory/Renormalizing S-parameters.ipynb
jhillairet/scikit-rf
bsd-3-clause
This is because the default behaviour of scikit-rf is to use the power-waves scattering parameter definition (since it is the most popular one in CAD software). But the power-waves definition is known to fail in such a case. This is why scikit-rf also implements the pseudo-waves scattering parameter definition, but you have to specify it using the s_def parameter:
match_at_50 = rf.wr10.match()
match_at_50.renormalize(50j, s_def='pseudo')
match_at_50.plot_s_smith(**kw)  # expect -1j
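The pseudo-wave result can be checked by hand with plain Python complex arithmetic (this is just the textbook formula $\Gamma = (Z - Z_0)/(Z + Z_0)$, not scikit-rf itself):

```python
Z = 50 + 0j   # impedance of the matched load (50 ohm)
Z0 = 50j      # new, purely imaginary reference impedance

# pseudo-wave reflection coefficient
gamma = (Z - Z0) / (Z + Z0)
print(gamma)  # equals -1j, matching the expected result on the Smith chart
```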
The binary files are small and store every floating-point number exactly, so you don't have to worry about efficiency or losing precision. You can make lots of checkpoints if you want! Let's reset REBOUND (which deletes the particles from memory) and then read the binary file we just saved.
rebound.reset()
rebound.load("checkpoint.bin")
rebound.status()
python_tutorials/Checkpoints.ipynb
quasars100/Resonance_testing_scripts
gpl-3.0
Generating random weather samples with an IID model with no time dependencies Let's first create some random samples of a symbolic random variable corresponding to the weather, with two values, sunny (S) and cloudy (C), and generate random weather for 365 days. The assumption in this model is that the weather of each day is independent of the previous days and drawn from the same probability distribution.
values = ['S', 'C']
probabilities = [0.5, 0.5]
weather = Random_Variable('weather', values, probabilities)

samples = weather.sample(365)
print(",".join(samples))
Markov Chains and Hidden Markov Models.ipynb
gtzan/mir_book
cc0-1.0
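`Random_Variable` is defined earlier in the notebook; for readers running this cell in isolation, a minimal stand-in (a hypothetical sketch built on numpy's `choice`) behaves the same way for our purposes:

```python
import numpy as np

class Random_Variable:
    """Minimal symbolic random variable: named values with fixed probabilities."""
    def __init__(self, name, values, probabilities):
        self.name = name
        self.values = values
        self.probabilities = probabilities

    def sample(self, n):
        # draw n independent samples according to the probabilities
        return list(np.random.choice(self.values, size=n, p=self.probabilities))

weather = Random_Variable('weather', ['S', 'C'], [0.5, 0.5])
samples = weather.sample(365)
print(samples[:10])
```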
Now let's visualize these samples, using yellow for sunny and grey for cloudy.
state2color = {}
state2color['S'] = 'yellow'
state2color['C'] = 'grey'

def plot_weather_samples(samples, state2color, title):
    colors = [state2color[x] for x in samples]
    x = np.arange(0, len(colors))
    y = np.ones(len(colors))
    plt.figure(figsize=(10, 1))
    plt.bar(x, y, color=colors, width=1)
    plt.title(title)

plot_weather_samples(samples, state2color, 'iid')
Markov Chain Now, instead of independently sampling the weather random variable, let's form a Markov chain. The Markov chain will start at a particular state and then either stay in the same state or transition to a different state, based on a transition probability matrix. To accomplish that, we create a random variable for each row of the transition matrix, corresponding to the probabilities of the transitions emanating from the state of that row. Then we can use the Markov chain to generate sequences of samples and contrast these sequences with the IID weather model. By adjusting the transition probabilities, you can probabilistically control the lengths of the "stretches" of the same state.
def markov_chain(transmat, state, state_names, samples):
    (rows, cols) = transmat.shape
    rvs = []
    values = list(np.arange(0, rows))

    # create random variables for each row of the transition matrix
    for r in range(rows):
        rv = Random_Variable("row" + str(r), values, transmat[r])
        rvs.append(rv)

    # start from the initial state and then sample the appropriate
    # random variable based on the state, following the transitions
    states = []
    for n in range(samples):
        state = rvs[state].sample(1)[0]
        states.append(state_names[state])
    return states

# transition matrices for the Markov chain
transmat1 = np.array([[0.7, 0.3], [0.2, 0.8]])
transmat2 = np.array([[0.9, 0.1], [0.1, 0.9]])
transmat3 = np.array([[0.5, 0.5], [0.5, 0.5]])

state2color = {}
state2color['S'] = 'yellow'
state2color['C'] = 'grey'

# plot the iid model too
samples = weather.sample(365)
plot_weather_samples(samples, state2color, 'iid')

samples1 = markov_chain(transmat1, 0, ['S', 'C'], 365)
plot_weather_samples(samples1, state2color, 'markov chain 1')

samples2 = markov_chain(transmat2, 0, ['S', 'C'], 365)
plot_weather_samples(samples2, state2color, 'markov chain 2')

samples3 = markov_chain(transmat3, 0, ['S', 'C'], 365)
plot_weather_samples(samples3, state2color, 'markov chain 3')
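One way to see how the transition probabilities control the long "stretches" is the stationary distribution of the chain; for `transmat1` above it can be computed with numpy (a sketch, not part of the original notebook):

```python
import numpy as np

transmat1 = np.array([[0.7, 0.3],
                      [0.2, 0.8]])

# the stationary distribution pi satisfies pi @ P = pi,
# i.e. it is the left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(transmat1.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi)  # long-run fraction of sunny vs cloudy days
```

For this matrix the chain spends 40% of its time sunny and 60% cloudy in the long run, regardless of the starting state.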
Note: Look back at the Random Variables notebook for an example of generating melodies using a Markov chain with a transition probability matrix calculated by analyzing a corpus of chorales. Generating samples using a Hidden Markov Model Let's now look at how a Hidden Markov Model works: a Markov chain generates a sequence of states, and each state has its own emission probabilities. When sunny, we will output yellow or red with higher probability, and when cloudy, blue or grey. First we will write the code directly, and then we will use the hmmlearn package.
state2color = {}
state2color['S'] = 'yellow'
state2color['C'] = 'grey'

# generate random samples for a year
samples = weather.sample(365)
states = markov_chain(transmat1, 0, ['S', 'C'], 365)
plot_weather_samples(states, state2color, "markov chain 1")

# create two random variables, one for the sunny state and one for the cloudy
sunny_colors = Random_Variable('sunny_colors', ['y', 'r', 'b', 'g'],
                               [0.6, 0.3, 0.1, 0.0])
cloudy_colors = Random_Variable('cloudy_colors', ['y', 'r', 'b', 'g'],
                                [0.0, 0.1, 0.4, 0.5])

def emit_obs(state, sunny_colors, cloudy_colors):
    if (state == 'S'):
        obs = sunny_colors.sample(1)[0]
    else:
        obs = cloudy_colors.sample(1)[0]
    return obs

# iterate over the sequence of states and emit a color based on the emission probabilities
obs = [emit_obs(s, sunny_colors, cloudy_colors) for s in states]

obs2color = {}
obs2color['y'] = 'yellow'
obs2color['r'] = 'red'
obs2color['b'] = 'blue'
obs2color['g'] = 'grey'

plot_weather_samples(obs, obs2color, "Observed sky color")

# let's zoom in on a month
plot_weather_samples(states[0:30], state2color, 'states for a month')
plot_weather_samples(obs[0:30], obs2color, 'observations for a month')
Multinomial HMM Let's do the same generation process using the multinomial HMM model supported by the hmmlearn Python package.
transmat = np.array([[0.7, 0.3], [0.2, 0.8]])
start_prob = np.array([1.0, 0.0])  # one entry per state: always start sunny

# yellow and red have high probs for sunny
# blue and grey have high probs for cloudy
emission_probs = np.array([[0.6, 0.3, 0.1, 0.0],
                           [0.0, 0.1, 0.4, 0.5]])

model = hmm.MultinomialHMM(n_components=2)
model.startprob_ = start_prob
model.transmat_ = transmat
model.emissionprob_ = emission_probs

# sample the model - X is the observed values
# and Z is the "hidden" states
X, Z = model.sample(365)

# we have to re-define state2color and obj2color as the hmmlearn
# package just outputs numbers for the states
state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'states')

samples = [item for sublist in X for item in sublist]

obj2color = {}
obj2color[0] = 'yellow'
obj2color[1] = 'red'
obj2color[2] = 'blue'
obj2color[3] = 'grey'
plot_weather_samples(samples, obj2color, 'observations')
Estimating the parameters of an HMM Let's sample the generative HMM to get a sequence of 1000 observations. Now we can learn, in an unsupervised way, the parameters of a two-component multinomial HMM using just these observations. Then we can compare the learned parameters with the original parameters of the model used to generate the observations. Notice that the order of the components differs between the original and estimated models. Note that hmmlearn does NOT directly support supervised training, where both the labels and observations are available. It is possible to initialize an HMM model with some of the parameters and learn the others: for example, you can initialize the transition matrix and learn the emission probabilities. That way you could implement supervised learning for a multinomial HMM. In many practical applications the hidden labels are not available, and that is the hard case that is implemented.
# generate the samples
X, Z = model.sample(1000)

# learn a new model
estimated_model = hmm.MultinomialHMM(n_components=2, n_iter=10000).fit(X)
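Because the component order is arbitrary, comparing the estimated model with the original usually needs a label-alignment step. A small self-contained sketch (pure numpy, with a hypothetical `align_states` helper demonstrated on toy labels rather than the notebook's `X`, `Z`) maps each predicted label to the true label it co-occurs with most:

```python
import numpy as np

def align_states(true_states, pred_states, n_states=2):
    """Map each predicted state label to the true label it co-occurs with most."""
    counts = np.zeros((n_states, n_states), dtype=int)
    for t, p in zip(true_states, pred_states):
        counts[p, t] += 1
    mapping = counts.argmax(axis=1)  # predicted label -> true label
    return np.array([mapping[p] for p in pred_states])

# toy check: predictions identical to the truth up to a label swap
true = np.array([0, 0, 1, 1, 0, 1, 1, 1])
pred = 1 - true  # labels flipped
print(align_states(true, pred))
```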
Let's compare the estimated model parameters with the original model.
print("Transition matrix")
print("Estimated model:")
print(estimated_model.transmat_)
print("Original model:")
print(model.transmat_)

print("Emission probabilities")
print("Estimated model:")
print(estimated_model.emissionprob_)
print("Original model:")
print(model.emissionprob_)
Predicting a sequence of states given a sequence of observations We can also use the trained HMM model to predict a sequence of hidden states given a sequence of observations. This is the task of maximum-likelihood sequence estimation. For example, in speech recognition it would correspond to estimating a sequence of phonemes (hidden states) from a sequence of observations (acoustic vectors).
Z2 = estimated_model.predict(X)

state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'Original states')
plot_weather_samples(Z2, state2color, 'Predicted states')

# note the reversal of colors for the states, as the order of components is not the same.
# we can easily fix this by changing state2color
state2color = {}
state2color[1] = 'yellow'
state2color[0] = 'grey'
plot_weather_samples(Z2, state2color, 'Flipped Predicted states')
The estimated model can be sampled just like the original model
X, Z = estimated_model.sample(365)

state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'states generated by estimated model')

samples = [item for sublist in X for item in sublist]

obs2color = {}
obs2color[0] = 'yellow'
obs2color[1] = 'red'
obs2color[2] = 'blue'
obs2color[3] = 'grey'
plot_weather_samples(samples, obs2color, 'observations generated by estimated model')
An HMM example using chords Let's do a pretend music example, with the states modeling a chord progression consisting of D (II), G (V), and C (I) chords, and the observations consisting of the chord type, i.e., whether they are minor 7th, major 7th, or dominant 7th. This is an extremely simplified model of how harmony works, but it does do a bit better than random.
# transition probabilities between the states D(II), G(V), C(I).
# the transitions are semi-plausible but set by hand;
# in a full problem they would be learned from data
transmat = np.array([[0.4, 0.4, 0.2],
                     [0.1, 0.1, 0.8],
                     [0.0, 0.3, 0.7]])
start_prob = np.array([1.0, 0.0, 0.0])

# the emission probabilities are also set by hand and semi-plausible; they correspond
# to the probability that a chord is a minor, major, or dominant 7th. Notice, for example,
# that if the chord is a C(I) (the third row) it will never be a dominant chord --
# the last 0.0 in that row
emission_probs = np.array([[0.4, 0.0, 0.4],
                           [0.3, 0.3, 0.3],
                           [0.2, 0.8, 0.0]])

chord_model = hmm.MultinomialHMM(n_components=3)  # three states: D, G, C
chord_model.startprob_ = start_prob
chord_model.transmat_ = transmat
chord_model.emissionprob_ = emission_probs

X, Z = chord_model.sample(10)

state2name = {}
state2name[0] = 'D'
state2name[1] = 'G'
state2name[2] = 'C'
chords = [state2name[state] for state in Z]
print(chords)

obj2name = {}
obj2name[0] = 'min7'
obj2name[1] = 'maj7'
obj2name[2] = '7'
observations = [obj2name[item] for sublist in X for item in sublist]
print(observations)

chords = [''.join(chord) for chord in zip(chords, observations)]
print(chords)
Playing back the generated chords Now that we have generated a sequence of chord symbols, with a little bit of work we can play it back using Music21.
from music21 import *

# create some chords for II, V, I
d7 = chord.Chord(['D4', 'F#4', 'A4', 'C5'])
dmin7 = chord.Chord(['D4', 'F4', 'A4', 'C5'])
dmaj7 = chord.Chord(['D4', 'F#4', 'A4', 'C#5'])

c7 = d7.transpose(-2)
cmin7 = dmin7.transpose(-2)
cmaj7 = dmaj7.transpose(-2)

g7 = d7.transpose(5)
gmin7 = dmin7.transpose(5)
gmaj7 = dmaj7.transpose(5)
print(g7.pitches)

stream1 = stream.Stream()
stream1.repeatAppend(dmin7, 1)
stream1.repeatAppend(g7, 1)
stream1.repeatAppend(cmaj7, 1)
stream1.repeatAppend(cmaj7, 1)
print(stream1)

name2chord = {}
name2chord['C7'] = c7
name2chord['Cmin7'] = cmin7
name2chord['Cmaj7'] = cmaj7
name2chord['D7'] = d7
name2chord['Dmin7'] = dmin7
name2chord['Dmaj7'] = dmaj7
name2chord['G7'] = g7
name2chord['Gmin7'] = gmin7
name2chord['Gmaj7'] = gmaj7

hmm_chords = stream.Stream()
for c in chords:
    hmm_chords.repeatAppend(name2chord[c], 1)

# let's check that we can play streams of chords
#sp = midi.realtime.StreamPlayer(stream1)
#sp.play()

# let's now play a hidden markov model generated chord sequence
print(chords)
hmm_chords.show()
sp = midi.realtime.StreamPlayer(hmm_chords)
sp.play()
Data The database is in comma-delimited (CSV) format. This file is the result of extracting data from the files that INEGI distributes in its statistical yearbooks.
data = pd.read_csv("test.csv")
Markdown('The database has {} rows and {} columns. Each column is a variable:'.format(data.shape[0], data.shape[1]))
01_Agua/.ipynb_checkpoints/agua-checkpoint.ipynb
Caranarq/01_Dmine
gpl-3.0
The 38 variables contained in the database are:
vdesc = data.columns[0:8]
for i, a in enumerate(data.columns):
    print(i, a)
The first eight variables, indexed zero through seven, are descriptive attributes. From the ninth column onward (index eight) come the quantitative variables on consumption, storage, water sources, and so on. Each row corresponds to a municipality, and each municipality belongs to a metropolitan zone. To know which city, or metropolitan zone, a municipality belongs to, its SUN key (cve_sun) must be checked. The next step is to group all municipalities by this key.
# group all municipalities by their SUN key
zm = data.groupby("cve_sun").count()
print("Total metropolitan zones: ", zm.shape[0])
Data selection for the study Building indicators requires complete information. In this section, the SUN cities with complete information will be selected from the database. Python marks cells with no available information as NaN. Therefore, to identify the municipalities with complete information, all municipalities with no NaN in any of their variables must be filtered.
# group by SUN key, keeping only the rows where no variable is NaN
zm_sinNaN = data.dropna().groupby("cve_sun").count()
print("Metropolitan zones without 'NaN' values: ", zm_sinNaN.shape[0])
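The dropna/groupby/count pattern used above can be illustrated on a toy frame (hypothetical data, not the INEGI file):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'cve_sun': [1, 1, 2, 2, 3],
    'var_a':   [10.0, np.nan, 30.0, 40.0, 50.0],
    'var_b':   [1.0, 2.0, np.nan, 4.0, 5.0],
})

# dropna() keeps only the rows with no missing value at all,
# then count() tallies the surviving municipalities per zone
complete = toy.dropna().groupby('cve_sun').count()
print(complete)
```

Zone 1 loses one municipality to a missing `var_a`, zone 2 loses one to a missing `var_b`, so each zone ends up with a single complete row.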
As the code above shows, there are 59 metropolitan zones, each made up of a certain number of municipalities. However, the original data table has many empty (NaN) values, so after filtering out the municipalities with missing values and grouping by SUN key, there are in fact 38 cities whose municipalities have all variables available. This does not mean that complete information exists for every municipality in these 38 cities; the variable zm_sinNaN lists, per city, the number of municipalities with no NaN values. The next step is therefore to see how many cities have complete information for all of their municipalities.
# list of SUN keys of the cities without NaNs, used for filtering
cves_sinNaN = zm_sinNaN.index.tolist()

# filter the cities with the list above
zm = zm.loc[zm.index.isin(cves_sinNaN)]

# which cities have complete information for all of their municipalities?
(zm_sinNaN / zm)[:10]
The cities whose proportion equals one will be selected.
# municipalities per city with complete information, divided by the
# total number of municipalities in that city
zm = zm_sinNaN / zm

# keep the cities whose municipalities all have complete information
zm = zm[zm['_id'] == 1]
zm.head()
Now that we have the SUN keys of the cities with no missing data, the municipality data must be filtered and stored in a new variable.
muns = data.loc[data['cve_sun'].isin(zm.index.tolist())]
muns.head()
The cities with complete information for all of their municipalities are the following:
infoc = muns[muns.columns[5:8]].sort_values('cve_sun')
infoc.to_csv(r'.\datasets\infoc.csv')
infoc
The following map shows the municipalities belonging to urban zones with complete information:
Image('info_map.png', width=640)
Indicators 01 - Water availability and access AG01.1 - PROPORTION OF THE POPULATION USING SAFELY MANAGED DRINKING WATER SERVICES Safely managed drinking water services are defined as improved drinking water sources that are located on the premises and available when needed, free from fecal or chemical contamination. An improved drinking water source is one that, by the nature of its construction or through active intervention, is protected from outside contamination. This indicator is designed after indicator 6.1.1 of the UN Sustainable Development Goals. Methodology: As mentioned above, not all SUN cities have complete information on water indicators. While an integral water-sustainability indicator requires a city to have complete information for all of its municipalities, this does not prevent us from analyzing each indicator individually for the cities that do have the information. The database has 12 columns related to drinking water access:
dispyacc = list()
dispyacc.append(data.columns[8])
dispyacc = dispyacc + data.columns[13:24].tolist()
dispyacc
The "entubada_total" field is the percentage of households in the municipality with piped drinking water, while "entubada_dentro_de_vivienda" and "entubada_fuera_de_vivienda_dentro_de_terreno" give the proportion of those households with piping into the dwelling, or piping that reaches the lot without entering the dwelling, respectively. For this indicator, the entubada_total data suffice. Hauled drinking water is not considered an improved source, since the safety of the source cannot be traced.
print('The data available in the database about piped water are the following:')
print('\tTotal municipalities in the database: {}'.format(len(data['entubada_total'])))
print('\tMunicipalities with piped-water records: {}'.format(len(data['entubada_total'].dropna())))

# piped-water dataset
c_disponibilidad = vdesc.tolist()
c_disponibilidad.append('entubada_total')
c_disponibilidad.remove('collection')
data_disponibilidad = data[c_disponibilidad]
data_disponibilidad.to_csv('disponibilidad.csv')
data_disponibilidad.head()
Excluding the municipalities with no available information, the basic statistics of the available data are the following:
data_disponibilidad['entubada_total'].dropna().describe()
The first three quartiles are below 99%, while the maximum value of 157884 is suspicious. The values above 100 are most likely data-entry errors, so they will be excluded from the analysis.
# cases that will be excluded from the analysis
data_disponibilidad[data_disponibilidad['entubada_total'] > 100]

# declare a variable with the data for the analysis
# (values above 100% are treated as capture errors; 100% itself is valid)
analisis_disponibilidad = data_disponibilidad[data_disponibilidad['entubada_total'] <= 100]
analisis_disponibilidad = analisis_disponibilidad.set_index('entidad')
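The exclusion step is just a boolean mask; on a toy series (hypothetical percentage values) it looks like this:

```python
import pandas as pd

pct = pd.Series([95.2, 100.0, 157884.0, 87.5, 101.3])

# percentages above 100 are treated as capture errors and dropped;
# 100% coverage itself is a legitimate value and is kept
valid = pct[pct <= 100]
print(valid.tolist())  # [95.2, 100.0, 87.5]
```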
Once these values are excluded, the basic statistics are the following:
analisis_disponibilidad['entubada_total'].describe()

analisis_disponibilidad.boxplot(column='entubada_total', by='entidad', figsize=(14, 4))
plt.suptitle('')
plt.title('Drinking water access in cities, by state', fontsize=16)
plt.ylabel('% with water access')
plt.show()
The results of this analysis can be exported as CSV for integration into the Geographic Information System of the Plataforma de Conocimiento sobre Ciudades Sustentables.
analisis_disponibilidad.to_csv('AG01_1.csv')
The equation of motion: $$ \begin{gather} m \frac{d^2 \vec{r}}{dt^2} = \frac{q}{c} [ \vec{v} \times \vec{B} ] \end{gather} $$ For the case of a uniform magnetic field along the $z$-axis: $$ B_z = B, \quad B_x = 0, \quad B_y = 0 $$ In Cartesian coordinates:
eq_x = Eq( Derivative(x(t), t, 2), q / c / m * Bz * Derivative(y(t), t) )
eq_y = Eq( Derivative(y(t), t, 2), -q / c / m * Bz * Derivative(x(t), t) )
eq_z = Eq( Derivative(z(t), t, 2), 0 )
display( eq_x, eq_y, eq_z )
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
epicf/ef
mit
The constants of integration can be found from the initial conditions $z(0) = z_0$ and $v_z(0) = v_{z0}$:
c1_c2_system = []

initial_cond_subs = [(t, 0),
                     (z(0), z_0),
                     (diff(z(t), t).subs(t, 0), vz_0)]

c1_c2_system.append( z_eq.subs( initial_cond_subs ) )
c1_c2_system.append( vz_eq.subs( initial_cond_subs ) )

c1, c2 = symbols("C1, C2")
c1_c2 = solve( c1_c2_system, [c1, c2] )
c1_c2
So that
z_sol = z_eq.subs( c1_c2 )
vz_sol = vz_eq.subs( c1_c2 ).subs( [( diff(z(t), t), vz(t) )] )
display( z_sol, vz_sol )
For some reason I have not been able to solve the system of differential equations for $x$ and $y$ directly with Sympy's dsolve function:
#dsolve( [eq_x, eq_y], [x(t),y(t)] )
It is necessary to resort to a manual solution. The method is to differentiate one of the equations over time and substitute the other. This results in oscillator-type second-order equations for $v_x$ and $v_y$, whose solution is known. Integrating one more time, it is possible to obtain the laws of motion $x(t)$ and $y(t)$.
v_subs = [ (Derivative(x(t), t), vx(t)),
           (Derivative(y(t), t), vy(t)) ]

eq_vx = eq_x.subs( v_subs )
eq_vy = eq_y.subs( v_subs )
display( eq_vx, eq_vy )

eq_d2t_vx = Eq( diff(eq_vx.lhs, t), diff(eq_vx.rhs, t) )
eq_d2t_vx = eq_d2t_vx.subs( [(eq_vy.lhs, eq_vy.rhs)] )
display( eq_d2t_vx )
The solution of the last equation is
C1, C2, Omega = symbols( "C1, C2, Omega" )

vx_eq = Eq( vx(t), C1 * cos( Omega * t ) + C2 * sin( Omega * t ) )
display( vx_eq )

omega_eq = Eq( Omega, Bz * q / c / m )
display( omega_eq )
where $\Omega$ is the cyclotron frequency.
display( vx_eq )

vy_eq = Eq( vy(t), solve( Eq( diff(vx_eq.rhs, t), eq_vx.rhs ), vy(t) )[0] )
vy_eq = vy_eq.subs( [(Omega * c * m / Bz / q, omega_eq.rhs * c * m / Bz / q)] ).simplify()
display( vy_eq )
For initial conditions $v_x(0) = v_{x0}, v_y(0) = v_{y0}$:
initial_cond_subs = [(t, 0), (vx(0), vx_0), (vy(0), vy_0)]

vx0_eq = vx_eq.subs( initial_cond_subs )
vy0_eq = vy_eq.subs( initial_cond_subs )
display( vx0_eq, vy0_eq )

c1_c2 = solve( [vx0_eq, vy0_eq] )
c1_c2_subs = [ ("C1", c1_c2[c1]), ("C2", c1_c2[c2]) ]

vx_eq = vx_eq.subs( c1_c2_subs )
vy_eq = vy_eq.subs( c1_c2_subs )
display( vx_eq, vy_eq )
These equations can be integrated to obtain the laws of motion:
x_eq = vx_eq.subs( vx(t), diff(x(t), t) )
x_eq = dsolve( x_eq )

y_eq = vy_eq.subs( vy(t), diff(y(t), t) )
y_eq = dsolve( y_eq ).subs( C1, C2 )

display( x_eq, y_eq )
For nonzero $\Omega$:
x_eq = x_eq.subs( [(Omega, 123)] ).subs( [(123, Omega)] ).subs( [(Rational(1, 123), 1 / Omega)] )
y_eq = y_eq.subs( [(Omega, 123)] ).subs( [(123, Omega)] ).subs( [(Rational(1, 123), 1 / Omega)] )
display( x_eq, y_eq )
For initial conditions $x(0) = x_0, y(0) = y_0$:
initial_cond_subs = [(t, 0), (x(0), x_0), (y(0), y_0)]

x0_eq = x_eq.subs( initial_cond_subs )
y0_eq = y_eq.subs( initial_cond_subs )
display( x0_eq, y0_eq )

c1_c2 = solve( [x0_eq, y0_eq] )
c1_c2_subs = [ ("C1", c1_c2[0][c1]), ("C2", c1_c2[0][c2]) ]

x_eq = x_eq.subs( c1_c2_subs )
y_eq = y_eq.subs( c1_c2_subs )
display( x_eq, y_eq )

x_eq = x_eq.simplify()
y_eq = y_eq.simplify()
x_eq = x_eq.expand().collect(Omega)
y_eq = y_eq.expand().collect(Omega)
display( x_eq, y_eq )
Finally
display( x_eq, y_eq, z_sol )
display( vx_eq, vy_eq, vz_sol )
display( omega_eq )
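The closed-form result can be sanity-checked numerically (a sketch with hypothetical parameter values, pure numpy): the standard form of the solution must conserve the speed and trace a circle of radius $|v_\perp|/\Omega$.

```python
import numpy as np

# hypothetical parameter values for the check
Omega = 2.0
x0, y0 = 0.0, 0.0
vx0, vy0 = 1.0, 0.5

t = np.linspace(0, 10, 2001)

# analytic solution of vx' = Omega*vy, vy' = -Omega*vx, x' = vx, y' = vy
vx = vx0 * np.cos(Omega * t) + vy0 * np.sin(Omega * t)
vy = -vx0 * np.sin(Omega * t) + vy0 * np.cos(Omega * t)
x = x0 + vx0 * np.sin(Omega * t) / Omega - vy0 * (np.cos(Omega * t) - 1) / Omega
y = y0 + vx0 * (np.cos(Omega * t) - 1) / Omega + vy0 * np.sin(Omega * t) / Omega

# the speed must be conserved and the orbit must be a circle of radius
# |v_perp| / Omega around the guiding center (x0 + vy0/Omega, y0 - vx0/Omega)
speed = np.hypot(vx, vy)
radius = np.hypot(x - (x0 + vy0 / Omega), y - (y0 - vx0 / Omega))
print(speed.max() - speed.min(), radius.max() - radius.min())  # both essentially zero
```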
List of files: * empty - a file obtained from the tomograph without corrections * corr - the same image as empty, but with correction applied * tomo - the same as empty, but acquired during the experiment * white - an empty beam used to normalize the images (acquired the same day during calibration) * black_1, black_2 - dark currents, acquired at different times
empty = plt.imread(data_root + 'first_projection.tif').astype('float32')
corr = plt.imread(data_root + 'first_projection_corr.tif').astype('float32')
tomo = plt.imread(data_root + 'Raw/pin_2.24um_0000.tif').astype('float32')
white = np.fromfile(data_root + 'white0202_2016-02-11.ffr', dtype='<u2').astype('float32').reshape((2096, 4000))
black_1 = np.fromfile(data_root + 'black0101_2016-02-09.ffr', dtype='<u2').astype('float32').reshape((2096, 4000))
black_2 = np.fromfile(data_root + 'black0201_2016-02-16.ffr', dtype='<u2').astype('float32').reshape((2096, 4000))

def show_frame(data, label):
    data_filtered = cv2.medianBlur(data, 5)

    # raw frame, as is
    plt.figure(figsize=(12, 10))
    plt.imshow(data)
    plt.title(label)
    plt.colorbar(orientation='horizontal')
    plt.show()

    plt.figure(figsize=(12, 8))
    plt.plot(data[1000])
    plt.grid(True)
    plt.title(label + ': central cut')
    plt.show()

    # median-filtered frame (suppresses scintillator noise)
    plt.figure(figsize=(12, 10))
    plt.imshow(data_filtered)
    plt.title(label + ' filtered')
    plt.colorbar(orientation='horizontal')
    plt.show()

    plt.figure(figsize=(12, 8))
    plt.plot(data_filtered[1000])
    plt.grid(True)
    plt.title(label + ' filtered: central cut')
    plt.show()
tomo/yaivan/empty_frames.ipynb
buzmakov/tomography_scripts
mit
Here is the beam without an object; the axes are detector counts. Here and below, the first image and central cut are shown as is, and the second image has median filtering applied (to remove scintillator noise).
show_frame(white, 'White')
Here is dark current 1; the axes are detector counts.
show_frame(black_1, 'Black_1')
Here is dark current 2; the axes are detector counts.
show_frame(black_2, 'Black_2')
Here is the difference between the two dark currents.
show_frame(black_1 - black_2, 'Black_1 - Black_2')
Here is the completely uncorrected image.
show_frame(empty, 'Empty')
Here is the normalized image (done by the tomograph itself). It is strange that on the central cut the maximum is not at 65535 (2^16) but at about 65535*0.8. Does this mean that during reconstruction we should normalize not by 65535 when taking the logarithm, but by the maximum over the sinogram?
show_frame(corr, 'Corr')
Here is an image from the tomographic experiment.
show_frame(tomo, 'tomo image')
Here is the difference between the images normalized in manual mode and in the tomograph's own mode. They are apparently slightly shifted.
show_frame(corr - tomo, 'corr - tomo image')
Here is my attempt to normalize the image. Traces of the direct beam are visible (the grid in the background), but this is probably because the direct beam depends on the detector-source distance (spherical intensity falloff), and the direct beam was measured at a different distance. Moreover, the intensity of the direct beam was apparently lower (by a factor of 16?) than during the experiment. (This needs to be checked.)
white_norm = (white - black_1)
white_norm[white_norm < 1] = 1

empty_norm = (empty / 16 - black_1)
empty_norm[empty_norm < 1] = 1

my_corr = empty_norm / white_norm
my_corr[my_corr > 1.1] = 1.1

show_frame(my_corr, 'my_corr image')
tomo/yaivan/empty_frames.ipynb
buzmakov/tomography_scripts
mit
The beam corrected by us, divided by the one corrected by SkyScan. They appear to match, up to noise. It follows that the normalization is performed according to the formula $$Signal=k\times 2^{16}\frac{I_1-dark}{I_0-dark}, \quad k=0.87$$
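As a sanity check, the formula can be evaluated on hypothetical toy frames (the array values below are made up; only the formula itself comes from the comparison above):

```python
import numpy as np

k = 0.87
dark = np.full((2, 2), 100.0)      # dark-current frame
i0 = dark + 5000.0                 # flat field (beam, no object)
i1 = dark + 5000.0 * 0.6           # frame behind the object (60% transmission)

# Signal = k * 2**16 * (I1 - dark) / (I0 - dark)
signal = k * 2**16 * (i1 - dark) / (i0 - dark)
print(signal[0, 0])  # equals k * 65536 * 0.6
```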
show_frame(my_corr*65535*0.87/corr, 'my_corr/corr image')
tomo/yaivan/empty_frames.ipynb
buzmakov/tomography_scripts
mit
Hamilton (1989) switching model of GNP

This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:

$$ y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t $$

Each period, the regime transitions according to the following matrix of transition probabilities:

$$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ p_{01} & p_{11} \end{bmatrix} $$

where $p_{ij}$ is the probability of transitioning from regime $i$ to regime $j$.

The model class is MarkovAutoregression in the time-series part of Statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that. After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
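Before fitting, it can help to see the transition mechanism in isolation. The sketch below simulates a regime path $S_t$ from a 2×2 transition matrix; the probabilities are made up for illustration and are not Hamilton's estimates:

```python
import numpy as np

# Made-up transition probabilities for illustration: row i gives
# P(S_t = j | S_{t-1} = i) for j = 0, 1. Each row sums to 1.
P = np.array([[0.90, 0.10],
              [0.25, 0.75]])
rng = np.random.default_rng(0)

def simulate_regimes(P, n_periods, s0=0):
    states = [s0]
    for _ in range(n_periods - 1):
        # draw the next regime conditional on the current one
        states.append(rng.choice(2, p=P[states[-1]]))
    return np.array(states)

path = simulate_regimes(P, 200)
print("first 10 regimes:", path[:10])
print("fraction of time in regime 1:", path.mean())
```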
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('http://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp

# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))

# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4,
                                           switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
examples/notebooks/markov_autoregression.ipynb
josef-pkt/statsmodels
bsd-3-clause
Let's go over the columns: - event_id: the unique identifier for this contract win. - asof_date: EventVestor's timestamp of event capture. - trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day. - symbol: stock ticker symbol of the affected company. - event_type: this should always be Contract Win. - contract_amount: the amount of amount_units the contract is for. - amount_units: the currency or other units for the value of the contract. Most commonly in millions of dollars. - contract_entity: name of the customer, if available - event_rating: this is always 1. The meaning of this is uncertain. - timestamp: this is our timestamp on when we registered the data. - sid: the equity's unique identifier. Use this instead of the symbol. We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases. We can select columns and rows with ease. Below, we'll fetch all contract wins by Boeing. We'll display only the contract_amount, amount_units, contract_entity, and timestamp. We'll sort by date.
ba_sid = symbols('BA').sid
wins = contract_win[contract_win.sid == ba_sid][
    ['timestamp', 'contract_amount', 'amount_units', 'contract_entity']].sort('timestamp')
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
wins
notebooks/data/eventvestor.contract_win/notebook.ipynb
quantopian/research_public
apache-2.0
Finally, suppose we want the above as a DataFrame:
ba_df = odo(wins, pd.DataFrame)
# Printing a pandas DataFrame displays the first 30 and last 30 items, and truncates the middle.
ba_df
notebooks/data/eventvestor.contract_win/notebook.ipynb
quantopian/research_public
apache-2.0
Build topology
def build_topology(nodes, edges):
    topology = nx.Graph()

    # add all nodes
    for index, row in nodes.iterrows():
        node_name = row["name"]
        node_attributes = row.drop(["name"]).to_dict()
        topology.add_node(node_name, **node_attributes)

    # add all edges
    for index, row in edges.iterrows():
        node1_name = row["node1"]
        node2_name = row["node2"]
        edge_attributes = row.drop(["node1", "node2"]).to_dict()
        topology.add_edge(node1_name, node2_name, **edge_attributes)

    return topology

topology = build_topology(nodes, edges)
clustering/by_topology/lom2-agnostic-code.ipynb
clustbench/network-tests2
lgpl-2.1
Actually do the work
import itertools

@pd_diskcache("paths.pkl")
def find_comp_to_comp_shortest_paths(topology, comp_nodes):
    # calculates shortest paths and stores them in a dict of dicts
    paths_ugly = dict(nx.all_pairs_shortest_path(topology))

    # build a table with all computational node pairs
    # they are not duplicated:
    # if there is ("n48001", "n49419") then there is no ("n49419", "n48001") pair
    comp_node_pairs = pd.DataFrame.from_records(
        itertools.chain.from_iterable(
            [(node1, node2) for node2 in comp_nodes.iloc[index:]]
            for (index, node1) in comp_nodes.iteritems()
        ),
        columns=["node1", "node2"]
    )

    # write shortest paths to this table
    comp_node_pairs["shortest_path"] = comp_node_pairs.apply(
        lambda row: paths_ugly[row.loc["node1"]][row.loc["node2"]],
        axis=1
    )
    return comp_node_pairs

# shortest paths between all computational nodes
paths = find_comp_to_comp_shortest_paths(topology, comp_nodes)
clustering/by_topology/lom2-agnostic-code.ipynb
clustbench/network-tests2
lgpl-2.1
Calculate feature lists of these paths. We also add a new column to the paths table here. Helper functions
def interleave(it1, it2):
    """
    >>> list(interleave([1, 2, 3, 4], ["a", "b", "c"]))
    [1, 'a', 2, 'b', 3, 'c', 4]
    """
    return (
        item
        for item in itertools.chain.from_iterable(itertools.zip_longest(it1, it2))
        if item is not None)


def get_node_features(topology, node):
    """Returns node features as a tuple of tuples.

    >>> topology = nx.Graph()
    >>> topology.add_node("kek", a=1, b="lol")
    >>> sorted(get_node_features(topology, "kek"), key=lambda pair: pair[0])
    [('a', 1), ('b', 'lol')]
    """
    return tuple(topology.node[node].items())


def get_edge_features(topology, node1, node2):
    """Returns features of an edge as a tuple of tuples.

    >>> topology = nx.Graph()
    >>> topology.add_node("a1")
    >>> topology.add_node("b1")
    >>> topology.add_edge("a1", "b1", foo="bar", shim="sham")
    >>> get_edge_features(topology, "a1", "b1")
    (('foo', 'bar'), ('shim', 'sham'))
    """
    return tuple(topology.edges[node1, node2].items())


def maybe_reverse(l):
    """
    Takes a list or tuple and reverses it, or not.
    Using maybe_reverse on some list and on its reversed version
    will yield the same result.

    >>> maybe_reverse([1, 2, 3])
    [1, 2, 3]
    >>> maybe_reverse([3, 2, 1])
    [1, 2, 3]
    >>> maybe_reverse(('a', 'b', 'c'))
    ('a', 'b', 'c')
    >>> maybe_reverse(('c', 'b', 'a'))
    ('a', 'b', 'c')
    """
    if type(l) == list:
        constructor = list
    elif type(l) == tuple:
        constructor = tuple
    else:
        raise TypeError("can only take list or tuple arguments")
    reversed_l = constructor(reversed(l))
    if str(l) <= str(reversed_l):
        return l
    return reversed_l


def get_features_of_path(topology, path):
    """Returns features of path as a tuple of tuples of tuples.

    The list of features will be normalized, so that this function returns
    the same features in the same order for path (A, B, C, D) and for
    path (D, C, B, A)."""
    nodes_features = (get_node_features(topology, node) for node in path)
    edges_features = (get_edge_features(topology, node1, node2)
                      for (node1, node2) in zip(path[:-1], path[1:]))
    return maybe_reverse(tuple(interleave(nodes_features, edges_features)))


def df_loc_by_sequence(df, sequence):
    """
    Use this instead of `df.loc[sequence]`.

    Pandas df gets confused by tuples and possibly by other sequences.
    If you do `df.loc[(1, 2)]`, it will look for 1 or 2 in df's index
    instead of looking for the tuple itself. You can use df.xs to overcome
    this problem. Or use this function, which hides the ugliness.
    Also see [stackoverflow question](https://goo.gl/emtjB8) for a better
    description of the problem."""
    return df.xs(sequence)
clustering/by_topology/lom2-agnostic-code.ipynb
clustbench/network-tests2
lgpl-2.1
$$p(x)=\mathcal{N}(0, I) \\ q(x)=\mathcal{N}(0, I)$$
# Gaussian variance difference.
gvd5_fname = 'ex3-gvd5-me2_n500_rs100_Jmi2_Jma384_a0.050_trp0.50.p'
gvd5_results = load_plot_vs_Js(gvd5_fname, show_legend=True)
plt.legend(bbox_to_anchor=(1.8, 1.05))
plt.xticks([1, 10, 1e2, 1e3])
plt.savefig(gvd5_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# plt.legend(ncol=2)
# plt.ylim([0.03, 0.1])
ipynb/ex3_results.ipynb
wittawatj/kernel-gof
mit
$$p(x)=\mathcal{N}(0, I) \\ q(x)=\mathcal{N}(0, \mathrm{diag}(2,1,1,\ldots))$$
# Gauss-Bernoulli RBM. H1 case
# rbm_h1_fname = 'ex3-gbrbm_dx5_dh3_v5em3-me2_n500_rs100_Jmi2_Jma384_a0.050_trp0.50.p'
# rbm_h1_results = load_plot_vs_Js(rbm_h1_fname, show_legend=True)
ipynb/ex3_results.ipynb
wittawatj/kernel-gof
mit
Simulate a bivariate relationship Let's simulate two variables, $X$ and $Y$, where $Y$ is a function of $X$ plus some independent noise.
npts = 25
X = np.linspace(0, 5, npts) + stats.norm.rvs(loc=0, scale=1, size=npts)
a = 1.0
b = 1.5
Y = b*X + a + stats.norm.rvs(loc=0, scale=2, size=npts)

g = sbn.jointplot(X, Y)
g.set_axis_labels("X", "Y")
pass
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Linear Regression -- finding the line of "best fit"

What if we wanted to estimate the linear function that relates $Y$ to $X$? Linear functions are those whose graphs are straight lines. A linear function of a variable $x$ is usually written as: $$ \hat{y} = f(x) = bx + a $$ where $a$ and $b$ are constants. In geometric terms $b$ is the slope of the line and $a$ is the value of the function when $x$ is zero (usually referred to as the "y-intercept"). There are infinitely many such linear functions of $x$ we could define. Which line should we use if we want to be able to predict $y$?

Regression Terminology

Predictors, explanatory, or independent variable -- the variables upon which we want to make our prediction. Outcomes, dependent, or response variable -- the variable we are trying to predict/explain in our regression.

The optimality criterion for least-squares regression

Find the linear function, $f(x)$, that minimizes $$ \sum (y_i - f(x_i))^2 $$ i.e. find the linear function that minimizes the squared deviations in the $y$ direction.
# calculate regression using built-in scipy.stats.linregress
# we'll show the underlying calculations later in the notebook
rgr = stats.linregress(X, Y)
b = rgr.slope
a = rgr.intercept

# plot scatter
plt.scatter(X, Y, color='steelblue', s=30)

# plot regression line
plt.plot(X, b*X + a, color='indianred', alpha=0.5)

# plot lines from regression to actual value
for (x, y) in zip(X, Y):
    plt.vlines(x, y, b*x + a, linestyle='dashed', color='indianred')

plt.xlabel("X")
plt.ylabel("Y")
plt.gcf().set_size_inches(6, 6)
plt.title("Linear Least-Squares Regression\nminimizes the sum of squared deviates", fontsize=14)
pass
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Solution for the least-squares criterion The slope, $b$, and intercept, $a$, that minimize this quantity are: \begin{align} b &= \frac{s_{xy}}{s^2_x} = r_{xy}\frac{s_y}{s_x} \\ a &= \overline{y} - b\overline{x} \end{align}
# here's a simple function to calculate the least squares regression of Y on X
def lsqr_regression(X, Y):
    covxy = np.cov(X, Y, ddof=1)[0, 1]
    varx = np.var(X, ddof=1)
    b = covxy/varx
    a = np.mean(Y) - b * np.mean(X)
    return b, a

def plot_regression_line(X, Y, b, a, axis, **kw):
    minx = min(X)
    maxx = max(X)
    yhatmin = b*minx + a
    yhatmax = b*maxx + a
    axis.plot((minx, maxx), (yhatmin, yhatmax), marker=None, **kw)

b, a = lsqr_regression(X, Y)

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(X, Y, color='steelblue')
plot_regression_line(X, Y, b, a, ax, color='indianred', alpha=0.9)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_title("Regression of Y on X", fontsize=14)
print("Estimated slope, b:", b)
print("Estimated intercept, a:", a)
pass
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Residuals Residuals are the difference between the observed value of $y$ and the predicted value. You can think of residuals as the proportion of $y$ unaccounted for by the regression. $$ residuals = y - \hat{y} $$ When the linear regression model is appropriate, residuals should be centered around zero and should show no strong trends or extreme differences in spread for different values of $x$.
# yhat is the predicted values of y from x
Yhat = b * X + a

# residuals are the difference between predicted and actual
residuals = Y - Yhat

plt.scatter(X, residuals, color='steelblue')
plt.hlines(0, min(X)*0.99, max(X)*1.01, color='indianred')
plt.xlabel("X")
plt.ylabel("Residuals")
pass
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Regression as sum-of-squares decomposition Like ANOVA, regression can be viewed as a decomposition of the sum-of-squares deviances. $$ ss(y) = ss(\hat{y}) + ss(residuals) $$
ssy = np.sum((Y - np.mean(Y))**2)
ssyhat = np.sum((Yhat - np.mean(Yhat))**2)
ssresiduals = np.sum((residuals - np.mean(residuals))**2)

print("SSTotal:", ssy)
print("SSYhat:", ssyhat)
print("SSResiduals:", ssresiduals)
print("SSYhat + SSresiduals:", ssyhat + ssresiduals)
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Variance "explained" by a regression model In the same way we did for ANOVA, we can use the sum-of-squares decomposition to understand the relative proportion of variance "explained" (accounted for) by the regression model. We call this quantity the "Coefficient of Determination", and it's designated $R^2$. $$ R^2 = \left( 1 - \frac{SS_{residuals}}{SS_{total}} \right) $$ Linear regression implementation in SciPy The scipy.stats.linregress function implements simple linear regression, and returns information relevant to the null hypothesis that the slope of the regression line is zero. Below is a demonstration of how to use the scipy.stats.linregress function.
regr = stats.linregress(X, Y)
regr

print("Regression slope: ", regr.slope)
print("Regression intercept: ", regr.intercept)
print("P-value for the null hypothesis that slope = 0:", regr.pvalue)
print("Coefficient of determination, R^2:", regr.rvalue**2)

R2 = (1 - (ssresiduals/ssy))
R2
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Regression confidence intervals To understand regression confidence intervals, let's simulate the sampling distribution of the slope and intercept. For this simulation we will hold $X$ fixed, and repeatedly generate samples of $Y$.
npts = 25
x = np.linspace(0, 5, npts) + stats.norm.rvs(loc=0, scale=2, size=npts)
A = 1
B = 1.5

slopes = []
intercepts = []
yhat = []

nsims = 1000
for i in range(nsims):
    y = B*x + A + stats.norm.rvs(loc=0, scale=2, size=npts)
    # correlation is unaffected by ddof, so no ddof argument is needed here
    b = np.corrcoef(x, y)[0, 1] * (np.std(y, ddof=1)/np.std(x, ddof=1))
    a = np.mean(y) - b * np.mean(x)
    yhat.append(b * x + a)
    slopes.append(b)
    intercepts.append(a)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
sbn.distplot(slopes, ax=ax1)
sbn.distplot(intercepts, ax=ax2)
ax1.set_title('Sampling Distribution\nof Regression Slope, b')
ax2.set_title('Sampling Distribution\nof Regression Intercept, a')
pass

for r in yhat:
    plt.plot(x, r, alpha=0.0075, color='steelblue', marker=None, linewidth=1)
plt.plot(x, B*x + A, color='indianred', marker=None, linewidth=3)
pass

# a 95% interval runs from the 2.5th to the 97.5th percentile
b_ci95low = np.percentile(slopes, 2.5)
b_ci95hi = np.percentile(slopes, 97.5)
a_ci95low = np.percentile(intercepts, 2.5)
a_ci95hi = np.percentile(intercepts, 97.5)
print("95% CI for slope of regression:", (b_ci95low, b_ci95hi))
print("95% CI for intercept of regression:", (a_ci95low, a_ci95hi))
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Seaborn will automatically draw CIs for you The seaborn.regplot function will draw the bivariate scatter, the corresponding linear regression, and confidence interval for you.
rp = sbn.regplot(X, Y)
pass
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Illustration of regression with Iris data
url = "http://roybatty.org/iris.csv"
iris = pd.read_csv(url)
iris.columns = iris.columns.str.replace('.', "_")
iris.head()

setosa = iris.query('Species == "setosa"')
setosa.shape

g = sbn.jointplot(setosa.Sepal_Length, setosa.Sepal_Width, space=0.15)
g.set_axis_labels("Sepal Length", "Sepal Width")
plt.subplots_adjust(top=0.85)  # adjust top of subplots relative to figure so there is room for title
g.fig.suptitle("Bivariate Distribution of\nSepal Length and Sepal Width\nIris setosa specimens", fontsize=14)
pass

rgr = stats.linregress(setosa.Sepal_Length, setosa.Sepal_Width)
print("Regression slope:", rgr.slope)
print("Regression intercept:", rgr.intercept)
print("Regression R:", rgr.rvalue)

sbn.regplot(setosa.Sepal_Length, setosa.Sepal_Width)
plt.xlabel("Sepal Length")
plt.ylabel("Sepal Width")
plt.title("Linear Least-Squares Regression\nminimizes the sum of squared deviation")
pass

sbn.jointplot(setosa.Sepal_Length, setosa.Sepal_Width, kind="reg")
pass
2016-04-18-Linear-Regression.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Create a function to get all the statistics of a variable and create summaries. This will be used to write summaries for things like weight vectors and biases.
# Code from - https://www.tensorflow.org/get_started/summaries_and_tensorboard
def variable_summaries(var):
    # Attach a scope for the summary
    with tf.name_scope("summaries"):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean)
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Create a function for a neural network layer
def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    # Scope the entire layer with its name
    with tf.name_scope(layer_name):
        # Scope for the weights
        with tf.name_scope("weights"):
            weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1))
            variable_summaries(weights)
        # Scope for biases
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.constant(0.1, shape=([output_dim])))
            variable_summaries(biases)
        # Scope for preactivations
        with tf.name_scope("preacts"):
            preacts = tf.add(tf.matmul(input_tensor, weights), biases)
            tf.summary.histogram('pre_activations', preacts)
        # Scope for activations
        with tf.name_scope("activations"):
            activations = act(preacts, name="activation")
            tf.summary.histogram('activations', activations)
        return activations
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Build the network
# Build the network
hidden1 = nn_layer(x, num_features, layer_1_size, "layer1")
hidden2 = nn_layer(hidden1, layer_1_size, layer_2_size, "layer2")

# Final layer - Use tf.identity to make sure there are no activations
logits = nn_layer(hidden2, layer_2_size, num_classes, "softmax", act=tf.identity)
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Compute the Loss
# Compute the cross entropy
with tf.name_scope('cross_entropy'):
    deltas = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits)
    with tf.name_scope('total'):
        cross_entropy_loss = tf.reduce_mean(deltas)
tf.summary.scalar('cross_entropy', cross_entropy_loss)
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Define the optimizer
with tf.name_scope('train_step'):
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy_loss)
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Compute the accuracies
with tf.name_scope("evaluation"):
    with tf.name_scope("correct_prediction"):
        correct_predictions = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    with tf.name_scope("accuracy"):
        accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))
    tf.summary.scalar("accuracy", accuracy)
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Now lets run our graph as usual
# Initializing global variables
init = tf.global_variables_initializer()

# Merge all the summaries
summaries = tf.summary.merge_all()

# Create a session to run the graph
with tf.Session() as sess:
    # Run initialization
    sess.run(init)
    summary_writer = tf.summary.FileWriter(log_path, sess.graph)

    # For the set number of epochs
    for epoch in range(training_epochs):
        # Compute the total number of batches
        num_batches = int(mnist.train.num_examples/batch_size)

        # Iterate over all the examples (1 epoch)
        for batch in range(num_batches):
            # Get a batch of examples
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # Now run the session
            curr_loss, cur_accuracy, _, summary = sess.run(
                [cross_entropy_loss, accuracy, train_step, summaries],
                feed_dict={x: batch_xs, y: batch_ys})

            if batch % 50 == 0:
                # Write the log after each iteration
                summary_writer.add_summary(summary, epoch * num_batches + batch)
                display.clear_output(wait=True)
                # time.sleep(0.05)
                # Print the loss
                print("Epoch: %d/%d. Batch: %d/%d. Current loss: %.5f. Train Accuracy: %.2f"
                      % (epoch, training_epochs, batch, num_batches, curr_loss, cur_accuracy))

    # Run the session to compute the value and print it
    test_accuracy = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
    print("Test Accuracy: %.2f" % test_accuracy)
notebooks/Feed Forward Neural network with TensorBoard.ipynb
abhay1/tf_rundown
mit
Prepare Data Input shape for X will be (28, 28, 1) images in this case.
def parse_file(filename):
    xdata, ydata = [], []
    fin = open(filename, "rb")
    i = 0
    for line in fin:
        if i % 10000 == 0:
            print("{:s}: {:d} lines read".format(
                os.path.basename(filename), i))
        cols = line.strip().split(",")
        ydata.append(int(cols[0]))
        x1d = np.array([float(x) / 255.0 for x in cols[1:]])
        x3d = np.reshape(x1d, (28, 28, 1))
        xdata.append(x3d)
        i += 1
    print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
    fin.close()
    Y = np_utils.to_categorical(np.array(ydata), num_classes=NUM_CLASSES)
    X = np.array(xdata)
    return X, Y

Xtrain, Ytrain = parse_file(TRAIN_FILE)
Xtest, Ytest = parse_file(TEST_FILE)
print(Xtrain.shape, Ytrain.shape, Xtest.shape, Ytest.shape)
src/keras/02-mnist-cnn.ipynb
sujitpal/polydlot
apache-2.0
Define Network Model defined is identical to that in Keras example mnist_cnn.py.
model = Sequential()
model.add(Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(NUM_CLASSES, activation="softmax"))

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
src/keras/02-mnist-cnn.ipynb
sujitpal/polydlot
apache-2.0
Train Network The Tensorboard callback, if enabled, will write out the training logs to the directory given by TENSORBOARD_LOGS_DIR, and you can then start the tensorboard server using the following command: tensorboard --logdir=/path/to/TENSORBOARD_LOGS_DIR The tensorboard application can be accessed from the browser at http://localhost:6006
checkpoint = ModelCheckpoint(filepath=BEST_MODEL, save_best_only=True)
tensorboard = TensorBoard(log_dir=TENSORBOARD_LOGS_DIR,
                          histogram_freq=1,
                          batch_size=BATCH_SIZE,
                          write_graph=True,
                          write_grads=True,
                          write_images=True,
                          embeddings_freq=0,
                          embeddings_layer_names=None,
                          embeddings_metadata=None)
history = model.fit(Xtrain, Ytrain,
                    batch_size=BATCH_SIZE,
                    epochs=NUM_EPOCHS,
                    validation_split=0.1,
                    callbacks=[checkpoint, tensorboard])
model.save(FINAL_MODEL, overwrite=True)

plt.subplot(211)
plt.title("Accuracy")
plt.plot(history.history["acc"], color="r", label="Train")
plt.plot(history.history["val_acc"], color="b", label="Validation")
plt.legend(loc="best")

plt.subplot(212)
plt.title("Loss")
plt.plot(history.history["loss"], color="r", label="Train")
plt.plot(history.history["val_loss"], color="b", label="Validation")
plt.legend(loc="best")

plt.tight_layout()
plt.show()
src/keras/02-mnist-cnn.ipynb
sujitpal/polydlot
apache-2.0
Data Preparation
df = load_responses_with_traces()
df['click_dt_local'] = df.apply(
    lambda x: utc_to_local(x['click_dt_utc'], x['geo_data']['timezone']), axis=1)
df = df[df['click_dt_local'].notnull()].copy()
print('Num Responses with a timezone', df.shape[0])
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
Set up for hour of day plots
df['local_hour_of_day'] = df['click_dt_local'].apply(lambda x: x.hour)
df['local_hour_of_day_div2'] = df['click_dt_local'].apply(lambda x: 2 * int(x.hour / 2))
df['local_hour_of_day_div3'] = df['click_dt_local'].apply(lambda x: 3 * int(x.hour / 3))
df['local_hour_of_day_div4'] = df['click_dt_local'].apply(lambda x: 4 * int(x.hour / 4))

hour_of_day_div2_xticks = ['%d-%d' % e for e in zip(range(0, 24, 2), range(2, 25, 2))]
hour_of_day_div3_xticks = ['%d-%d' % e for e in zip(range(0, 24, 3), range(3, 25, 3))]
hour_of_day_div4_xticks = ['%d-%d' % e for e in zip(range(0, 24, 4), range(4, 25, 4))]
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
Set up for Day of Week and Weekend vs Weekday plots
def get_day_of_week(t):
    return t.weekday()

def get_day_type(t):
    if t.weekday() > 4:
        return 1
    else:
        return 0

df['local_day_of_week'] = df['click_dt_local'].apply(get_day_of_week)
df['local_day_type'] = df['click_dt_local'].apply(get_day_type)

day_of_week_xticks = [
    'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'
]
day_type_xticks = ['weekday', 'weekend']
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
Plots

Motivation: Weekday vs Weekend
d_single_motivation = df[df['motivation'].apply(lambda x: len(x.split('|')) == 1)]
print('Num Response Traces with a single motivation:', d_single_motivation.shape)

x = 'local_day_type'
hue = 'motivation'
d = d_single_motivation
xticks = day_type_xticks
figsize = (4, 6)
xlim = (-0.25, 1.25)
hue_order = ['media', 'work/school', 'intrinsic learning', 'bored/random',
             'conversation', 'current event', 'personal decision']
title = 'Motivation by Weekday vs Weekend'
plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
The proportion of respondents motivated by work/school drops significantly from weekday to weekend. The proportion of respondents motivated by media increases significantly from weekday to weekend. All other motivations see a very slight increase. Motivation by Day of Week
x = 'local_day_of_week'
hue = 'motivation'
d = d_single_motivation
xticks = day_of_week_xticks
figsize = (6, 6)
xlim = (-0.25, 6.25)
hue_order = ['media', 'work/school', 'intrinsic learning', 'bored/random',
             'conversation', 'current event', 'personal decision']
plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
Motivations besides media and work/school maintain a fairly constant proportion throughout the week. The work/school proportion stays at a high, constant 20% M-Th, drops until Saturday to almost 10% and then rebounds on Sunday to 15%. Similarly, media stays at a high constant level M-F, then jumps up during the weekend. Motivation: Hour of Day
x = 'local_hour_of_day_div4'
hue = 'motivation'
d = d_single_motivation
d = d[d['local_day_type'] == 0]
xticks = hour_of_day_div4_xticks
figsize = (8, 6)
xlim = (-0.5, 20.5)
hue_order = ['media', 'work/school', 'intrinsic learning', 'bored/random',
             'conversation', 'current event', 'personal decision']
plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
The familiar pattern. Motivations besides media and work/school maintain a fairly constant proportion throughout the day. Work/school maintains the highest proportion during work/school hours. Media has the highest proportion outside of work/school hours. Information Depth: Weekday vs Weekend
x = 'local_day_type'
hue = 'information depth'
xticks = day_type_xticks
figsize = (4, 8)
xlim = (-0.25, 1.25)
hue_order = ['in-depth', 'overview', 'fact']
plot_over_time(df, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
None of the differences are significant, although it appears as if a higher proportion of respondents are seeking an in-depth reading on the weekend. Information Depth: Hour of Day
x = 'local_hour_of_day_div4'
hue = 'information depth'
d = df[df['local_day_type'] == 0]
xticks = hour_of_day_div4_xticks
figsize = (8, 6)
xlim = (-0.5, 20.5)
hue_order = ['in-depth', 'overview', 'fact']
plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
Not much. Maybe a bump in overview and a dip in fact during the day. Prior Knowledge: Weekday vs Weekend
x = 'local_day_type'
hue = 'prior knowledge'
xticks = day_type_xticks
figsize = (4, 8)
xlim = (-0.25, 1.25)
hue_order = ['familiar', 'unfamiliar']
plot_over_time(df, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
Again, no significant change. Weekend readers may be more likely to be familiar with the topic. Prior Knowledge: Hour of Day
x = 'local_hour_of_day_div4'
hue = 'prior knowledge'
d = df[df['local_day_type'] == 0]
xticks = hour_of_day_div4_xticks
figsize = (8, 6)
xlim = (-0.5, 20.5)
hue_order = ['familiar', 'unfamiliar']
plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim)
src/analysis/Temporal Distributions.ipynb
ewulczyn/readers
mit
We can convert lists to TensorArray, so appending to lists also works, with a few modifications:
def f(n):
    numbers = []
    # We ask you to tell us about the element dtype.
    autograph.utils.set_element_type(numbers, tf.int32)
    for i in range(n):
        numbers.append(i)
    return numbers.stack()  # Stack the list so that it can be used as a Tensor

tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(tf_f(tf.constant(5))))

# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
tensorflow/contrib/autograph/examples/notebooks/dev_summit_2018_demo.ipynb
allenlavoie/tensorflow
apache-2.0
And all of these functionalities, and more, can be composed into more complicated code:
def print_primes(n):
    """Returns all the prime numbers less than n."""
    assert n > 0
    primes = []
    autograph.utils.set_element_type(primes, tf.int32)
    for i in range(2, n):
        is_prime = True
        for k in range(2, i):
            if i % k == 0:
                is_prime = False
                break
        if not is_prime:
            continue
        primes.append(i)
    all_primes = primes.stack()
    print('The prime numbers less than', n, 'are:')
    print(all_primes)
    return tf.no_op()

tf_print_primes = autograph.to_graph(print_primes)
with tf.Graph().as_default():
    with tf.Session() as sess:
        n = tf.constant(50)
        sess.run(tf_print_primes(n))

# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_primes))
tensorflow/contrib/autograph/examples/notebooks/dev_summit_2018_demo.ipynb
allenlavoie/tensorflow
apache-2.0
This function specifies the main training loop. We instantiate the model (using the code above), instantiate an optimizer (here we'll use SGD with momentum, nothing too fancy), and we'll instantiate some lists to keep track of training and test loss and accuracy over time. In the loop inside this function, we'll grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
def train(train_ds, test_ds, hp):
  m = mlp_model((28 * 28,))
  opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)

  train_losses = []
  train_losses = autograph.utils.set_element_type(train_losses, tf.float32)
  test_losses = []
  test_losses = autograph.utils.set_element_type(test_losses, tf.float32)
  train_accuracies = []
  train_accuracies = autograph.utils.set_element_type(train_accuracies, tf.float32)
  test_accuracies = []
  test_accuracies = autograph.utils.set_element_type(test_accuracies, tf.float32)

  i = tf.constant(0)
  while i < hp.max_steps:
    train_x, train_y = get_next_batch(train_ds)
    test_x, test_y = get_next_batch(test_ds)
    step_train_loss, step_train_accuracy = fit(m, train_x, train_y, opt)
    step_test_loss, step_test_accuracy = predict(m, test_x, test_y)
    if i % (hp.max_steps // 10) == 0:
      print('Step', i, 'train loss:', step_train_loss, 'test loss:',
            step_test_loss, 'train accuracy:', step_train_accuracy,
            'test accuracy:', step_test_accuracy)
    train_losses.append(step_train_loss)
    test_losses.append(step_test_loss)
    train_accuracies.append(step_train_accuracy)
    test_accuracies.append(step_test_accuracy)
    i += 1

  return (train_losses.stack(), test_losses.stack(),
          train_accuracies.stack(), test_accuracies.stack())
tensorflow/contrib/autograph/examples/notebooks/dev_summit_2018_demo.ipynb
allenlavoie/tensorflow
apache-2.0
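The `hp` argument above is assumed to be a simple hyperparameter container exposing `learning_rate` and `max_steps` attributes; its actual definition lies outside this snippet. A minimal stand-in could be:

```python
import collections

# Hypothetical container; the notebook presumably builds an equivalent
# object (e.g. tf.contrib.training.HParams) elsewhere.
HParams = collections.namedtuple('HParams', ['learning_rate', 'max_steps'])

hp = HParams(learning_rate=0.01, max_steps=500)
print(hp.learning_rate, hp.max_steps)
```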
Next, we set up the RNNColorbot model, which is very similar to the one we used in the main exercise. Autograph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
def model_components():
  lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
  lower_cell.build(tf.TensorShape((None, 256)))
  upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
  upper_cell.build(tf.TensorShape((None, 256)))
  relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
  relu_layer.build(tf.TensorShape((None, 128)))
  return lower_cell, upper_cell, relu_layer


def rnn_layer(chars, cell, batch_size, training):
  """A simple RNN layer.

  Args:
    chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
    cell: An object of type tf.contrib.rnn.LSTMBlockCell
    batch_size: Int, the batch size to use
    training: Boolean, whether the layer is used for training

  Returns:
    A Tensor of shape (max_sequence_length, batch_size, output_size).
  """
  hidden_outputs = []
  autograph.utils.set_element_type(hidden_outputs, tf.float32)
  state, output = cell.zero_state(batch_size, tf.float32)
  n = tf.shape(chars)[0]
  i = 0
  while i < n:
    ch = chars[i]
    cell_output, (state, output) = cell.call(ch, (state, output))
    hidden_outputs.append(cell_output)
    i += 1
  hidden_outputs = hidden_outputs.stack()
  if training:
    hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
  return hidden_outputs


def model(inputs, lower_cell, upper_cell, relu_layer, batch_size, training):
  """RNNColorbot model.

  The model consists of two RNN layers (made by lower_cell and upper_cell),
  followed by a fully connected layer with ReLU activation.

  Args:
    inputs: A tuple (chars, length)
    lower_cell: An object of type tf.contrib.rnn.LSTMBlockCell
    upper_cell: An object of type tf.contrib.rnn.LSTMBlockCell
    relu_layer: An object of type tf.layers.Dense
    batch_size: Int, the batch size to use
    training: Boolean, whether the layer is used for training

  Returns:
    A Tensor of shape (batch_size, 3) - the model predictions.
  """
  (chars, length) = inputs
  chars_time_major = tf.transpose(chars, [1, 0, 2])
  chars_time_major.set_shape((None, batch_size, 256))

  hidden_outputs = rnn_layer(chars_time_major, lower_cell, batch_size, training)
  final_outputs = rnn_layer(hidden_outputs, upper_cell, batch_size, training)

  # Grab just the end-of-sequence from each output.
  indices = tf.stack([length - 1, range(batch_size)], axis=1)
  sequence_ends = tf.gather_nd(final_outputs, indices)
  return relu_layer(sequence_ends)


def loss_fn(labels, predictions):
  return tf.reduce_mean((predictions - labels) ** 2)
tensorflow/contrib/autograph/examples/notebooks/dev_summit_2018_demo.ipynb
allenlavoie/tensorflow
apache-2.0
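The end-of-sequence gather in `model` (stacking `[length - 1, range(batch_size)]` and applying `tf.gather_nd`) picks, for each batch element `b`, the output at time step `length[b] - 1`. The same indexing can be sketched with plain Python lists (a toy illustration, independent of TensorFlow):

```python
# final_outputs is time-major: final_outputs[t][b] is the hidden state
# for batch element b at time step t.
final_outputs = [
    ['t0-b0', 't0-b1', 't0-b2'],
    ['t1-b0', 't1-b1', 't1-b2'],
    ['t2-b0', 't2-b1', 't2-b2'],
]
lengths = [2, 3, 1]  # actual sequence length of each batch element

# Equivalent of gathering with the stacked (length - 1, batch_index) pairs.
sequence_ends = [final_outputs[lengths[b] - 1][b] for b in range(len(lengths))]
print(sequence_ends)  # -> ['t1-b0', 't2-b1', 't0-b2']
```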
We'll create copies of the point and line reflected in the circle, using $X = C\hat X\tilde C$, where $\hat X$ is the grade involution.
point_refl = circle * point.gradeInvol() * ~circle
line_refl = circle * line.gradeInvol() * ~circle
docs/tutorials/cga/visualization-tools.ipynb
arsenovic/clifford
bsd-3-clause
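For a point, this sandwich product performs inversion in the circle, which has a familiar complex-number analogue in the plane: a point $z$ maps to $c + r^2/\overline{(z - c)}$ for a circle with center $c$ and radius $r$. A quick sanity check in plain Python (an analogue for intuition, not the clifford computation above):

```python
def invert_in_circle(z, c, r):
    """Circle inversion of complex point z in the circle (center c, radius r)."""
    return c + r**2 / (z - c).conjugate()

# A point at distance d = 1 from the center of a radius-2 circle
# lands at distance r**2 / d = 4.
print(invert_in_circle(1 + 0j, 0j, 2.0))  # -> (4+0j)
```

Points on the circle itself are fixed by the inversion, as expected.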
pyganja pyganja is a Python interface to the ganja.js (github) library. To use it, we typically need to import two names from the library:
from pyganja import GanjaScene, draw
import pyganja; pyganja.__version__
docs/tutorials/cga/visualization-tools.ipynb
arsenovic/clifford
bsd-3-clause
GanjaScene lets us build scenes out of geometric objects, with attached labels and RGB colors:
sc = GanjaScene()
sc.add_object(point, color=(255, 0, 0), label='point')
sc.add_object(line, color=(0, 255, 0), label='line')
sc.add_object(circle, color=(0, 0, 255), label='circle')

sc_refl = GanjaScene()
sc_refl.add_object(point_refl, color=(128, 0, 0), label='point_refl')
sc_refl.add_object(line_refl, color=(0, 128, 0), label='line_refl')
docs/tutorials/cga/visualization-tools.ipynb
arsenovic/clifford
bsd-3-clause
Once we've built our scene, we can draw it, specifying a scale (which here we use to zoom out), and the signature of our algebra (which defaults to conformal 3D):
draw(sc, sig=layout.sig, scale=0.5)
docs/tutorials/cga/visualization-tools.ipynb
arsenovic/clifford
bsd-3-clause