2. Application to the Iris Dataset We can visualize the result with a confusion matrix.
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(Y, Y_pred)
print(cm)
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
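As a self-contained illustration of what `confusion_matrix` computes (a sketch in plain NumPy, with made-up toy labels rather than the Iris data):

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    """Count how often true class t was predicted as class p."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion([0, 1, 2, 2, 0], [0, 2, 2, 2, 0], 3)
# the diagonal counts correct predictions; off-diagonal entries are errors
```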
Run Provide the run information:
* run id
* run metalink containing the 3 by 3 kernel extractions
* participant
run_id = '0000005-150701000025418-oozie-oozi-W'
run_meta = 'http://sb-10-16-10-55.dev.terradue.int:50075/streamFile/ciop/run/participant-f/0000005-150701000025418-oozie-oozi-W/results.metalink?'
participant = 'participant-f'
evaluation-participant-f.ipynb
ocean-color-ac-challenge/evaluate-pearson
apache-2.0
Below we define the rotation and reflection matrices
def rotation_matrix(angle, d):
    directions = {
        "x": [1., 0., 0.],
        "y": [0., 1., 0.],
        "z": [0., 0., 1.]
    }
    direction = np.array(directions[d])
    sina = np.sin(angle)
    cosa = np.cos(angle)
    # rotation matrix around unit vector
    R = np.diag([cosa, cosa, cosa])
    R += np.outer(dir...
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
Define position vectors
def pos_vector(a, b):
    return np.array([[np.cos(b)*np.cos(a)],
                     [np.cos(b)*np.sin(a)],
                     [np.sin(b)]])

def pos_from_vector(vec):
    a, b, c = vec
    a_ = np.arctan2(b, a)
    c_ = np.arcsin(c)
    return a_, c_
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
Generic transform
def transform(a, b, R, inverse=True):
    P = pos_vector(a, b)
    if inverse:
        R = R.T
    V = np.dot(R, P).ravel()
    a, b = pos_from_vector(V)
    a = 0 if np.isnan(a) else a
    b = 0 if np.isnan(b) else b  # fixed: original tested np.isnan(a) here
    return a, b
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
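A quick round-trip check of the pattern above (a self-contained variant using 1-D vectors instead of the notebook's column vectors; the rotation matrix here is a plain z-axis rotation, so the first angle should simply shift by the rotation angle):

```python
import numpy as np

def pos_vector(a, b):
    # unit vector for an azimuth-like angle a and elevation-like angle b
    return np.array([np.cos(b)*np.cos(a), np.cos(b)*np.sin(a), np.sin(b)])

def pos_from_vector(vec):
    x, y, z = vec
    return np.arctan2(y, x), np.arcsin(z)

def rotation_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

a, b = 0.3, 0.1
R = rotation_z(np.pi/4)
a2, b2 = pos_from_vector(R @ pos_vector(a, b))
# rotating about z shifts the first angle by pi/4, leaves the second unchanged
```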
Reference conversion formula from Duncan's old TCC
def hadec_to_nsew(ha, dec):
    ew = np.arcsin((0.9999940546 * np.cos(dec) * np.sin(ha))
                   - (0.0029798011806 * np.cos(dec) * np.cos(ha))
                   + (0.002015514993 * np.sin(dec)))
    ns = np.arcsin(((-0.0000237558704 * np.cos(dec) * np.sin(ha))
                    + (0.578881847 * np.cos(...
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
New conversion formula using rotation matrices What do we think we should have: \begin{equation} \begin{bmatrix} \cos(\rm EW)\cos(\rm NS) \\ \cos(\rm EW)\sin(\rm NS) \\ \sin(\rm EW) \end{bmatrix} = \mathbf{R} \begin{bmatrix} \cos(\delta)\cos(\rm HA) \\ \cos(\delta)\sin(\rm HA) \\ \sin(\delta) \end{bmatrix} \end{equation}...
# There should be a slope and tilt conversion to get an accurate change
#skew = 4.363323129985824e-05
#slope = 0.0034602076124567475
#skew = 0.00004
#slope = 0.00346
skew = 0.01297  # <- this is the skew I get if I optimize for the same results as Duncan's system
slope = 0.00343

def telescope_to_nsew_matrix(skew, slope):...
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
The inverse of this is:
def azel_to_nsew(az, el):
    ns, ew = transform(az, el, nsew_to_azel_matrix(skew, slope).T)
    return ns, ew
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
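Passing the transposed matrix works because rotation matrices are orthogonal, so the inverse is simply the transpose. A minimal check of that property, using a generic z-rotation as a stand-in:

```python
import numpy as np

# any proper rotation matrix is orthogonal: R.T @ R == identity,
# which is why transform() can invert the mapping by using R.T
theta = 0.37
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.],
              [s,  c, 0.],
              [0., 0., 1.]])
roundtrip = R.T @ (R @ np.array([1.0, 2.0, 3.0]))
```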
Extending this to HA Dec
mol_lat = -0.6043881274183919  # in radians

def azel_to_hadec_matrix(lat):
    rot_y = rotation_matrix(np.pi/2 - lat, "y")
    rot_z = rotation_matrix(np.pi, "z")
    R = np.dot(rot_y, rot_z)
    return R

def azel_to_hadec(az, el, lat):
    ha, dec = transform(az, el, azel_to_hadec_matrix(lat))
    return ha, dec

def nsew_to_ha...
docs/Molonglo_coords.ipynb
ewanbarr/anansi
apache-2.0
Procedure The experimental setup consists of a double-walled, airtight brass tube R. A loudspeaker L is mounted inside on one side of the tube, with a condenser microphone KM mounted opposite it. To be able to determine the speed of sound at different distances, the wall,...
# Constants
name = ['Luft', 'Helium', 'SF6']
mm = [28.95, 4.00, 146.06]
ri = [287, 2078, 56.92]
cp = [1.01, 5.23, 0.665]
cv = [0.72, 3.21, 0.657]
k = [1.63, 1.40, 1.012]
c0 = [971, 344, 129]
constants_tbl = PrettyTable(
    list(zip(name, mm, ri, cp, cv, k, c0)),
    label='tab:gase',
    caption='Kennwerte und Konst...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
Measuring instruments used
# Utilities
name = ['Oszilloskop', 'Zeitmesser', 'Funktionsgenerator', 'Verstärker', 'Vakuumpumpe', 'Netzgerät', 'Temperaturmessgerät']
manufacturer = ['LeCroy', 'Keithley', 'HP', 'WicTronic', 'Pfeiffer', ' ', ' ']
device = ['9631 Dual 300MHz Oscilloscope 2.5 GS/s', '775 Programmable Counter/Timer', '33120A 15MHz Wave...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
Analysis In all experiments a negative pressure of -0.8 bar was first created in the vessel. The tube was then filled with gas up to 0.3 bar. This was done twice in each case to remove residues of the previous gas. Time-of-flight method With the time-of-flight method, the travel time from the loudspeaker to the microphone...
# Laufzeitenmethode Luft, Helium, SF6
import collections

# Read Data
dfb = pd.read_csv('data/laufzeitmethode.csv')
ax = None
i = 0
for gas1 in ['luft', 'helium', 'sf6']:
    df = dfb.loc[dfb['gas1'] == gas1].loc[dfb['gas2'] == gas1].loc[dfb['p'] == 1]
    slope, intercept, sem, r, p = stats.linregress(df['t'], df['s']...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
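The time-of-flight fit rests on a simple idea: distance plotted against travel time is a line whose slope is the speed of sound, and any constant trigger delay ends up in the intercept. A sketch with hypothetical numbers (the notebook itself uses `scipy.stats.linregress`; `np.polyfit` is used here to stay self-contained):

```python
import numpy as np

c_true = 343.0                              # m/s, assumed speed of sound in air
t = np.array([0.5, 1.0, 1.5, 2.0]) * 1e-3   # s, hypothetical travel times
s = c_true * t + 0.01                       # m, with a 1 cm systematic offset

# linear fit: slope recovers the speed, intercept absorbs the offset
slope, intercept = np.polyfit(t, s, 1)
```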
Resonance method To obtain a decent measurement, a starting frequency was first determined at which at least 3 constructive interferences were measured over the measuring distance of one metre. A measurement was then taken at that frequency, as well as at 5 further, higher frequencies. With a linear fit the...
# Resonanzmethode Luft, Helium, SF6
import collections

# Read Data
dfb2 = pd.read_csv('data/resonanzfrequenz.csv')
ax = None
i = 0
for gas1 in ['luft', 'helium', 'sf6']:
    df = dfb2.loc[dfb2['gas1'] == gas1].loc[dfb2['gas2'] == gas1].loc[dfb2['p'] == 1]
    df['lbd'] = 1 / (df['s'] * 2)
    df['v'] = 2 * df['f'] * d...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
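The resonance method exploits the standing-wave condition: adjacent pressure maxima are half a wavelength apart, so $\lambda = 2s = c/f$, and a linear fit of $\lambda$ against $1/f$ has the speed of sound as its slope. A sketch with hypothetical, noise-free data:

```python
import numpy as np

c_true = 343.0                               # m/s, assumed speed in air
f = np.array([2000., 2500., 3000., 3500.])   # Hz, hypothetical frequencies
s = c_true / (2.0 * f)                       # m, spacing between maxima = lambda/2
lam = 2.0 * s

# slope of lambda vs 1/f is the speed of sound
c_fit = np.polyfit(1.0 / f, lam, 1)[0]
```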
Gas mixtures In this experiment helium and SF6 were combined in proportions of $\frac{1}{5}$. For this, one gas was first admitted up to a pressure proportional to its share, and then the second gas. As explained in \ref{eq:m_p}, this is possible.
# Laufzeitenmethode Helium-SF6-Gemisch
import collections

# Read Data
dfb = pd.read_csv('data/laufzeitmethode.csv')
ax = None
colors = ['blue', 'green', 'red', 'purple']
results['helium']['sf6'] = {}
v_exp = []
for i in range(1, 5):
    i /= 5
    df = dfb.loc[dfb['gas1'] == 'helium'].loc[dfb['gas2'] == 'sf6'].loc[dfb...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
Error analysis As can be seen in the following section, the statistical error stays well within limits. The systematic error should likewise be compensated by the offset of the linear regression. Resonance method Here only one distance between the maxima was measured. It would be better to measure three or...
# Show results
values = ['Luft', 'Helium', 'SF6']
means_l = [
    '{0:.2f}'.format(results['luft']['luft']['1_l_slope']) + r'$\frac{m}{s}$',
    '{0:.2f}'.format(results['helium']['helium']['1_l_slope']) + r'$\frac{m}{s}$',
    '{0:.2f}'.format(results['sf6']['sf6']['1_l_slope']) + r'$\frac{m}{s}$'
]
me...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
Gas mixtures Table \ref{tab:resultat_gasgemisch} readily shows that the experimentally determined values do not agree at all with the calculated values. Taken individually, both series of results would look plausible, although the calculated series, considering the constants of SF6 and helium, s...
# Show results
values = ['20% / 80%', '40% / 60%', '60% / 40%', '80% / 20%']
means_x = [
    '{0:.2f}'.format(results['helium']['sf6']['02_l_slope']) + r'$\frac{m}{s}$',
    '{0:.2f}'.format(results['helium']['sf6']['04_l_slope']) + r'$\frac{m}{s}$',
    '{0:.2f}'.format(results['helium']['sf6']['06_...
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
Appendix Time-of-flight method
data = PrettyTable(
    list(zip(dfb['gas1'], dfb['gas2'], dfb['p'], dfb['s'], dfb['t'])),
    caption='Messwerte der Laufzeitmethode.',
    entries_per_column=len(dfb['gas1']),
    extra_header=['Gas 1', 'Gas 2', 'Anteil Gas 1', 'Strecke [m]', 'Laufzeit [s]']
)
data.show()
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
Resonance method
data = PrettyTable(
    list(zip(dfb2['gas1'], dfb2['gas2'], dfb2['p'], dfb2['f'], dfb2['s'])),
    caption='Messwerte der Resonanzmethode.',
    entries_per_column=len(dfb2['gas1']),
    extra_header=['Gas 1', 'Gas 2', 'Anteil Gas 1', 'Frequenz [Hz]', 'Strecke [m]']
)
data.show()
versuch3/W8.ipynb
Yatekii/glal3
gpl-3.0
First we'll open an image, and create a helper function that converts that image into a training set of (x,y) positions (the data) and their corresponding (r,g,b) colors (the labels). We'll then load a picture with it.
def get_data(img):
    width, height = img.size
    pixels = img.getdata()
    x_data, y_data = [], []
    for y in range(height):
        for x in range(width):
            idx = x + y * width
            r, g, b = pixels[idx]
            x_data.append([x / float(width), y / float(height)])
            y_data.append([r...
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
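The same conversion can be sketched without the pixel loop. Here is a vectorized NumPy variant, assuming the image is already an `(H, W, 3)` uint8 array (a random array stands in for a real picture):

```python
import numpy as np

# hypothetical 4x3 "image" as an (H, W, 3) uint8 array
img = np.random.randint(0, 256, size=(3, 4, 3), dtype=np.uint8)
h, w, _ = img.shape

# normalized (x, y) positions as the data ...
ys, xs = np.mgrid[0:h, 0:w]
x_data = np.stack([xs.ravel() / float(w), ys.ravel() / float(h)], axis=1)

# ... and normalized (r, g, b) colors as the labels
y_data = img.reshape(-1, 3) / 255.0
```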
We've postfixed all the variable names with a 1 because later we'll open a second image. We're now going to define a neural network which takes a 2-neuron input (the normalized x, y position) and outputs a 3-neuron output corresponding to color. We'll use Keras's Sequential class to create a deep neural network with a ...
def make_model():
    model = Sequential()
    model.add(Dense(2, activation='relu', input_shape=(2,)))
    model.add(Dense(20, activation='relu'))
    model.add(Dense(20, activation='relu'))
    model.add(Dense(20, activation='relu'))
    model.add(Dense(20, activation='relu'))
    model.add(Dense(20, activation='relu...
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Let's now go ahead and train our neural network. In this case, we are going to use the training set as the validation set as well. Normally, you'd never do this because it would cause your neural network to overfit. But in this experiment, we're not worried about overfitting... in fact, overfitting is the whole point! ...
m1.fit(x1, y1, batch_size=5, epochs=25, verbose=1, validation_data=(x1, y1))
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Now that the neural net is finished training, let's take the training data, our pixel positions, and simply send them back straight through the network, and plot the predicted colors on a new image. We'll make a new function for this called generate_image.
def generate_image(model, x, width, height):
    img = Image.new("RGB", [width, height])
    pixels = img.load()
    y_pred = model.predict(x)
    for y in range(height):
        for x in range(width):
            idx = x + y * width
            r, g, b = y_pred[idx]
            pixels[x, y] = (int(r), int(g), int(b))
    ...
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Sort of looks like the original image a bit! Of course the network can't learn the mapping perfectly without pretty much memorizing the data, but this way gives us a pretty good impression and doubles as an extremely inefficient form of compression! Let's load another image. We'll load the second image and also resize ...
im2 = Image.open("../assets/kitty.jpg")
im2 = im2.resize(im1.size)
x2, y2 = get_data(im2)
print("data", x2)
print("labels", y2)
imshow(im2)
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Now we'll repeat the experiment from before. We'll make a new neural network m2 which will learn to map im2's (x,y) positions to its (r,g,b) colors.
m2 = make_model()  # make a new model, keep m1 separate
m2.fit(x2, y2, batch_size=5, epochs=25, verbose=1, validation_data=(x2, y2))
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Let's generate a new image from m2 and see how it looks.
img = generate_image(m2, x2, im2.width, im2.height)
imshow(img)
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Not too bad! Now let's do something funky. We're going to make a new neural network, m3, with the same architecture as m1 and m2 but instead of training it, we'll just set its weights to be interpolations between the weights of m1 and m2 and at each step, we'll generate a new image. In other words, we'll gradually chan...
def get_interpolated_weights(model1, model2, amt):
    w1 = np.array(model1.get_weights())
    w2 = np.array(model2.get_weights())
    w3 = np.add((1.0 - amt) * w1, amt * w2)
    return w3

def generate_image_rescaled(model, x, width, height):
    img = Image.new("RGB", [width, height])
    pixels = img.load()
    y_pr...
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
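The interpolation itself is just an element-wise blend of two weight lists. A miniature, framework-free sketch (Keras `get_weights()` returns a list of arrays; two hand-made lists stand in for real model weights here):

```python
import numpy as np

# stand-ins for model1.get_weights() and model2.get_weights():
# a list of arrays, one per layer
w1 = [np.zeros((2, 3)), np.zeros(3)]
w2 = [np.ones((2, 3)), np.ones(3)]

amt = 0.25  # 0.0 -> purely model 1, 1.0 -> purely model 2
w3 = [(1.0 - amt) * a + amt * b for a, b in zip(w1, w2)]
```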
Neat... Let's do one last thing, and make an animation with more frames. We'll generate 120 frames inside the assets folder, then use ffmpeg to stitch them into an mp4 file. If you don't have ffmpeg, you can install it from here.
n = 120
frames_dir = '../assets/neural-painter-frames'
video_path = '../assets/neural-painter-interpolation.mp4'

import os
if not os.path.isdir(frames_dir):
    os.makedirs(frames_dir)

for i in range(n):
    amt = float(i)/(n-1.0)
    w3 = get_interpolated_weights(m1, m2, amt)
    m3.set_weights(w3)
    img = generat...
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
You can find the video now in the assets directory. Looks neat! We can also display it in this notebook. From here, there's a lot of fun things we can do... Triangulating between multiple images, or streaming together several interpolations, or predicting color from not just position, but time in a movie. Lots of possi...
from IPython.display import HTML
import io
import base64

video = io.open(video_path, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
examples/dreaming/neural-net-painter.ipynb
ml4a/ml4a-guides
gpl-2.0
Event loop and GUI integration The %gui magic enables the integration of GUI event loops with the interactive execution loop, allowing you to run GUI code without blocking IPython. Consider for example the execution of Qt-based code. Once we enable the Qt gui support:
%gui qt
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb
pacoqueen/ginn
gpl-2.0
We can define a simple Qt application class (simplified version from this Qt tutorial):
import sys
from PyQt4 import QtGui, QtCore

class SimpleWindow(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        self.setGeometry(300, 300, 200, 80)
        self.setWindowTitle('Hello World')
        quit = QtGui.QPushButton('Close', self)
        quit.setGeomet...
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb
pacoqueen/ginn
gpl-2.0
And now we can instantiate it:
app = QtCore.QCoreApplication.instance()
if app is None:
    app = QtGui.QApplication([])

sw = SimpleWindow()
sw.show()

from IPython.lib.guisupport import start_event_loop_qt4
start_event_loop_qt4(app)
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb
pacoqueen/ginn
gpl-2.0
But IPython still remains responsive:
10+2
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb
pacoqueen/ginn
gpl-2.0
The %gui magic can be similarly used to control Wx, Tk, glut and pyglet applications, as can be seen in our examples. Embedding IPython in a terminal application
%%writefile simple-embed.py
# This shows how to use the new top-level embed function. It is a simpler
# API that manages the creation of the embedded shell.
from IPython import embed
a = 10
b = 20
embed(header='First time', banner1='')
c = 30
d = 40
embed(header='The second time')
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb
pacoqueen/ginn
gpl-2.0
$d2q9$, cf. Jonas Tölke, "Implementation of a Lattice Boltzmann kernel using the Compute Unified Device Architecture developed by nVIDIA", Comput. Visual Sci., DOI 10.1007/s00791-008-0120-2. Affine Spaces Lattices that are "sufficiently" Galilean invariant, through non-perturbative algebraic theory, cf. http://staff.polito....
#u = Symbol("u", assume="real")
u = Symbol("u", real=True)
#T_0 = Symbol("T_0", assume="positive")
T_0 = Symbol("T_0", real=True, positive=True)
#v = Symbol("v", assume="real")
v = Symbol("v", real=True)
#phi_v = sqrt( pi/(Rat(2)*T_0))*exp( - (v-u)**2/(Rat(2)*T_0))
phi_v = sqrt(pi/(2*T_0))*exp(-(v-u)**2/(2*T_0))
integrate...
LatticeBoltzmann/LatticeBoltzmannMethod.ipynb
ernestyalumni/CUDACFD_out
mit
cf. http://stackoverflow.com/questions/16599325/simplify-conditional-integrals-in-sympy
refine(integrate(phi_v,(v,-oo,oo)), Q.is_true(Abs(periodic_argument(1/polar_lift(sqrt(T_0))**2, oo)) <= pi/2))
LatticeBoltzmann/LatticeBoltzmannMethod.ipynb
ernestyalumni/CUDACFD_out
mit
Causal assumptions Having introduced the basic functionality, we now turn to a discussion of the assumptions underlying a causal interpretation: Faithfulness / Stableness: Independencies in data arise not from coincidence, but rather from causal structure or, expressed differently, If two variables are independent gi...
np.random.seed(1)
data = np.random.randn(500, 3)
for t in range(1, 500):
    # data[t, 0] += 0.6*data[t-1, 1]
    data[t, 1] += 0.6*data[t-1, 0]
    data[t, 2] += 0.6*data[t-1, 1] - 0.36*data[t-2, 0]
var_names = [r'$X^0$', r'$X^1$', r'$X^2$']
dataframe = pp.DataFrame(data, var_names=var_names)
# tp.plot_timeseries...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
Since here $X^2_t = 0.6 X^1_{t-1} - 0.36 X^0_{t-2} + \eta^2_t = 0.6 (0.6 X^0_{t-2} + \eta^1_{t-1}) - 0.36 X^0_{t-2} + \eta^2_t = 0.36 X^0_{t-2} - 0.36 X^0_{t-2} + ...$, there is no unconditional dependency $X^0_{t-2} \to X^2_t$ and the link is not detected in the condition-selection step:
parcorr = ParCorr()
pcmci_parcorr = PCMCI(
    dataframe=dataframe,
    cond_ind_test=parcorr,
    verbosity=1)
all_parents = pcmci_parcorr.run_pc_stable(tau_max=2, pc_alpha=0.2)
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
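The cancellation can be verified numerically without tigramite: simulate the process above, then compare the plain correlation of $X^0_{t-2}$ with $X^2_t$ (the two paths cancel, so it is near zero) against the partial correlation given $X^1_{t-1}$, computed here by regressing out $X^1_{t-1}$ from both sides (a sketch, not tigramite's ParCorr implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50000
d = rng.standard_normal((T, 3))
for t in range(2, T):
    d[t, 1] += 0.6 * d[t-1, 0]
    d[t, 2] += 0.6 * d[t-1, 1] - 0.36 * d[t-2, 0]

# aligned samples of X^0_{t-2}, X^1_{t-1}, X^2_t
x0, x1, x2 = d[:-2, 0], d[1:-1, 1], d[2:, 2]

def residual(y, z):
    # remove the least-squares projection of y on z
    return y - (np.dot(z, y) / np.dot(z, z)) * z

r_plain = np.corrcoef(x0, x2)[0, 1]        # ~0: the two paths cancel
r_partial = np.corrcoef(residual(x0, x1),  # conditioning on X^1_{t-1}
                        residual(x2, x1))[0, 1]
```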
However, since the other parent of $X^2$, namely $X^1_{t-1}$ is detected, the MCI step conditions on $X^1_{t-1}$ and can reveal the true underlying graph (in this particular case):
results = pcmci_parcorr.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
Note, however, that this is not always the case and such cancellation, even though a pathological case, can present a problem especially for smaller sample sizes. Deterministic dependencies Another violation of faithfulness can happen due to purely deterministic dependencies as shown here:
np.random.seed(1)
data = np.random.randn(500, 3)
for t in range(1, 500):
    data[t, 0] = 0.4*data[t-1, 1]
    data[t, 2] += 0.3*data[t-2, 1] + 0.7*data[t-1, 0]
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show()

parcorr = ParCorr()
pcmci_parcorr = PCMCI(
    dataframe=datafra...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
Here the partial correlation $X^1_{t-1} \to X^0_t$ is exactly 1. Since these now represent the same variable, the true link $X^0_{t-1} \to X^2_t$ cannot be detected anymore since we condition on $X^1_{t-2}$. Deterministic copies of other variables should be excluded from the analysis. Causal sufficiency Causal sufficie...
np.random.seed(1)
data = np.random.randn(10000, 5)
a = 0.8
for t in range(5, 10000):
    data[t, 0] += a*data[t-1, 0]
    data[t, 1] += a*data[t-1, 1] + 0.5*data[t-1, 0]
    data[t, 2] += a*data[t-1, 2] + 0.5*data[t-1, 1] + 0.5*data[t-1, 4]
    data[t, 3] += a*data[t-1, 3] + 0.5*data[t-2, 4]
    data[t, 4] += a*data[t-...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
The upper plot shows the true causal graph if all variables are observed. The lower graph shows the case where variable $U$ is hidden. Then several spurious links appear: (1) $X\to Z$ and (2) links from $Y$ and $W$ to $Z$, which is counterintuitive because there is no possible indirect pathway (see upper graph). What's...
np.random.seed(42)
T = 2000
data = np.random.randn(T, 4)
# Simple sun
data[:, 3] = np.sin(np.arange(T)*20/np.pi) + 0.1*np.random.randn(T)
c = 0.8
for t in range(1, T):
    data[t, 0] += 0.4*data[t-1, 0] + 0.4*data[t-1, 1] + c*data[t-1, 3]
    data[t, 1] += 0.5*data[t-1, 1] + c*data[t-1, 3]
    data[t, 2] += 0.6*data[t-1, ...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
If we do not account for the common solar forcing, there will be many spurious links:
parcorr = ParCorr()
dataframe_nosun = pp.DataFrame(data[:, [0, 1, 2]], var_names=[r'$X^0$', r'$X^1$', r'$X^2$'])
pcmci_parcorr = PCMCI(
    dataframe=dataframe_nosun,
    cond_ind_test=parcorr,
    verbosity=0)
tau_max = 2
tau_min = 1
results = pcmci_parcorr.run_pcmci(tau_max=tau_max, pc_alpha=0.2, alpha_level=0.01)
#...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
However, if we explicitly include the solar forcing variable (which we assume is known in this case), we can identify the correct causal graph. Since we are not interested in the drivers of the solar forcing variable, we don't attempt to reconstruct its parents. This can be achieved by restricting selected_links.
parcorr = ParCorr()
# Only estimate parents of variables 0, 1, 2
selected_links = {}
for j in range(4):
    if j in [0, 1, 2]:
        selected_links[j] = [(var, -lag) for var in range(4)
                             for lag in range(tau_min, tau_max + 1)]
    else:
        selected_links[j] = []
pcmci_parcorr = PCMCI(
    ...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
Time sub-sampling Sometimes a time series might be sub-sampled, that is, the measurements are less frequent than the true underlying time dependency. Consider the following process:
np.random.seed(1)
data = np.random.randn(1000, 3)
for t in range(1, 1000):
    data[t, 0] += 0.*data[t-1, 0] + 0.6*data[t-1, 2]
    data[t, 1] += 0.*data[t-1, 1] + 0.6*data[t-1, 0]
    data[t, 2] += 0.*data[t-1, 2] + 0.6*data[t-1, 1]
dataframe = pp.DataFrame(data, var_names=[r'$X^0$', r'$X^1$', r'$X^2$'])
tp.plot_timeseri...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
With the original time sampling we obtain the correct causal graph:
pcmci_parcorr = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level=0.01)
# Plot time series graph
tp.plot_time_series_graph(
    val_matrix=results['val_matrix'],
    graph=results['graph'],
    var_names=var_names,
    link_colorbar_l...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
If we sub-sample the data, very counter-intuitive links can appear. The true causal loop gets detected in the wrong direction:
sampled_data = data[::2]
pcmci_parcorr = PCMCI(dataframe=pp.DataFrame(sampled_data, var_names=var_names),
                      cond_ind_test=ParCorr(), verbosity=0)
results = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level=0.01)
# Plot time series graph
tp.plot_time_series_graph(
    val_matri...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
If causal lags are smaller than the time sampling, such problems may occur. Causal inference for sub-sampled data is still an active area of research. Causal Markov condition The Markov condition can be rephrased as assuming that the noises driving each variable are independent of each other and independent in time (ii...
np.random.seed(1)
T = 10000

# Generate 1/f noise by averaging AR1-processes with a wide range of coefficients
# (http://www.scholarpedia.org/article/1/f_noise)
def one_over_f_noise(T, n_ar=20):
    whitenoise = np.random.randn(T, n_ar)
    ar_coeffs = np.linspace(0.1, 0.9, n_ar)
    for t in range(T):
        whitenoise[t] += a...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
Here PCMCI will detect many spurious links, especially auto-dependencies, since the process has long memory and the present state is not independent of the further past given some set of parents.
parcorr = ParCorr()
pcmci_parcorr = PCMCI(
    dataframe=dataframe,
    cond_ind_test=parcorr,
    verbosity=1)
results = pcmci_parcorr.run_pcmci(tau_max=5, pc_alpha=0.2, alpha_level=0.01)
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
Time aggregation An important choice is how to aggregate measured time series. For example, climate time series might have been measured daily, but one might be interested in a less noisy time-scale and analyze monthly aggregates. Consider the following process:
np.random.seed(1)
data = np.random.randn(1000, 3)
for t in range(1, 1000):
    data[t, 0] += 0.7*data[t-1, 0]
    data[t, 1] += 0.6*data[t-1, 1] + 0.6*data[t-1, 0]
    data[t, 2] += 0.5*data[t-1, 2] + 0.6*data[t-1, 1]
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show()
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
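Aggregation here means block-averaging consecutive samples. A minimal stand-in for tigramite's `pp.time_bin_with_mask` (a sketch, ignoring the mask handling) makes the operation concrete:

```python
import numpy as np

def time_bin(data, bin_length):
    """Average consecutive blocks of `bin_length` samples per variable."""
    T = (data.shape[0] // bin_length) * bin_length  # drop the ragged tail
    return data[:T].reshape(-1, bin_length, data.shape[1]).mean(axis=1)

x = np.arange(12, dtype=float).reshape(6, 2)  # 6 time steps, 2 variables
binned = time_bin(x, 3)                       # 2 aggregated time steps
```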
With the original time aggregation we obtain the correct causal graph:
pcmci_parcorr = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level=0.01)
# Plot time series graph
tp.plot_time_series_graph(
    val_matrix=results['val_matrix'],
    graph=results['graph'],
    var_names=var_names,
    link_colorbar_l...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
If we aggregate the data, we also detect a contemporaneous dependency for which no causal direction can be assessed in this framework and we obtain also several lagged spurious links. Essentially, we now have direct causal effects that appear contemporaneous on the aggregated time scale. Also causal inference for time-...
aggregated_data = pp.time_bin_with_mask(data, time_bin_length=4)
pcmci_parcorr = PCMCI(dataframe=pp.DataFrame(aggregated_data[0], var_names=var_names),
                      cond_ind_test=ParCorr(), verbosity=0)
results = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level=0.01)
# Plot time seri...
tutorials/tigramite_tutorial_assumptions.ipynb
jakobrunge/tigramite
gpl-3.0
First, list supported options on the Stackdriver magic %sd:
%sd -h
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Let's see what we can do with the monitoring command:
%sd monitoring -h
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
List names of Compute Engine CPU metrics Here we use IPython cell magics to list the CPU metrics. The Labels column shows that instance_name is a metric label.
%sd monitoring metrics list --type compute*/cpu/*
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
List monitored resource types related to GCE
%sd monitoring resource_types list --type gce*
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Querying time series data The Query class allows users to query and access the monitoring time series data. Many useful methods of the Query class are actually defined by the base class, which is provided by the google-cloud-python library. These methods include: * select_metrics: filters the query based on metric lab...
from google.datalab.stackdriver import monitoring as gcm
help(gcm.Query.select_interval)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Initializing the query During initialization, the metric type and the time interval need to be specified. For interactive use, the metric type has a default value. The simplest way to specify the time interval that ends now is to use the arguments days, hours, and minutes. In the cell below, we initialize the query to l...
query_cpu = gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=2)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Getting the metadata The method metadata() returns a QueryMetadata object. It contains the following information about the time series matching the query: * resource types * resource labels and their values * metric labels and their values This helps you understand the structure of the time series data, and makes it ea...
metadata_cpu = query_cpu.metadata().as_dataframe()
metadata_cpu.head(5)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Reading the instance names from the metadata Next, we read the instance names from the metadata and use them to filter the time series data below. If there are no GCE instances in this project, the cells below will raise errors.
import sys

if metadata_cpu.empty:
    sys.stderr.write('This project has no GCE instances. The remaining notebook '
                     'will raise errors!')
else:
    instance_names = sorted(list(metadata_cpu['metric.labels']['instance_name']))
    print('First 5 instance names: %s' % ([str(name) for name in instance_names[...
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Filtering by metric label We first filter query_cpu defined earlier to include only the first instance. Next, calling as_dataframe gets the results from the monitoring API, and converts them into a pandas DataFrame.
query_cpu_single_instance = query_cpu.select_metrics(instance_name=instance_names[0])

# Get the query results as a pandas DataFrame and look at the last 5 rows.
data_single_instance = query_cpu_single_instance.as_dataframe(label='instance_name')
data_single_instance.tail(5)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Displaying the time series as a linechart We can plot the time series data by calling the plot method of the dataframe. The pandas library uses matplotlib for plotting, so you can learn more about it here.
# N.B. A useful trick is to assign the return value of plot to _
# so that you don't get text printed before the plot itself.
_ = data_single_instance.plot()
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Aggregating the query You can aggregate or summarize time series data along various dimensions. * In the first stage, data in a time series is aligned to a specified period. * In the second stage, data from multiple time series is combined, or reduced, into one time series. Not all alignment and reduction options are ...
# Filter the query by a common instance name prefix.
common_prefix = instance_names[0].split('-')[0]
query_cpu_aligned = query_cpu.select_metrics(instance_name_prefix=common_prefix)

# Align the query to have data every 5 minutes.
query_cpu_aligned = query_cpu_aligned.align(gcm.Aligner.ALIGN_MEAN, minutes=5)
data_multi...
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Reducing the query In order to combine the data across multiple time series, the reduce() method can be used. The fields to be retained after aggregation must be specified in the method. For example, to aggregate the results by the zone, 'resource.zone' can be specified.
query_cpu_reduced = query_cpu_aligned.reduce(gcm.Reducer.REDUCE_MEAN, 'resource.zone')
data_per_zone = query_cpu_reduced.as_dataframe('zone')
data_per_zone.tail(5)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Displaying the time series as a heatmap Let us look at the time series at the instance level as a heatmap. A heatmap is a compact representation of the data, and can often highlight patterns. The diagram below shows the instances along rows, and the timestamps along columns.
import matplotlib
import seaborn

# Set the size of the heatmap to have a better aspect ratio.
div_ratio = 1 if len(data_multiple_instances.columns) == 1 else 2.0
width, height = (size/div_ratio for size in data_multiple_instances.shape)
matplotlib.pyplot.figure(figsize=(width, height))

# Display the data as a heatmap...
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
Multi-level headers If you don't provide any labels to as_dataframe, it returns all the resource and metric labels present in the time series as a multi-level header. This allows you to filter, and aggregate the data more easily.
data_multi_level = query_cpu_aligned.as_dataframe()
data_multi_level.tail(5)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
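A multi-level column header of the kind `as_dataframe()` returns can be reproduced with a plain pandas `MultiIndex`; the zone and instance labels here are made up for illustration.

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [('us-central1-a', 'vm-1'), ('us-central1-a', 'vm-2'),
     ('europe-west1-b', 'vm-3')],
    names=['zone', 'instance_name'])
df = pd.DataFrame([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], columns=cols)

# Select all columns under one zone via the first header level.
us_only = df['us-central1-a']
print(list(us_only.columns))  # ['vm-1', 'vm-2']
```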
Filter the dataframe Let us filter the multi-level dataframe by the common prefix. The filter is applied across all column headers.
print('Finding pattern "%s" in the dataframe headers' % (common_prefix,)) data_multi_level.filter(regex=common_prefix).tail(5)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
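`DataFrame.filter(regex=...)` keeps only the columns whose labels match the pattern, which is why filtering by a common name prefix works. A minimal sketch with made-up instance names:

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=['web-1', 'db-1', 'web-2'])

# Keep only columns whose labels contain "web".
web_only = df.filter(regex='web')
print(list(web_only.columns))  # ['web-1', 'web-2']
```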
Aggregate columns in the dataframe Here, we aggregate the multi-level dataframe at the zone level. This is similar to applying reduction using 'REDUCE_MEAN' on the field 'resource.zone'.
data_multi_level.groupby(level='zone', axis=1).mean().tail(5)
tutorials/Stackdriver Monitoring/Getting started.ipynb
googledatalab/notebooks
apache-2.0
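The same zone-level aggregation can be checked on a synthetic multi-level frame. It is written here with a transpose round-trip, which is equivalent to `groupby(level='zone', axis=1).mean()` and also works on recent pandas versions where the `axis=1` form is deprecated; all labels are fabricated.

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [('zone-a', 'vm-1'), ('zone-a', 'vm-2'), ('zone-b', 'vm-3')],
    names=['zone', 'instance_name'])
df = pd.DataFrame([[1.0, 3.0, 5.0], [2.0, 4.0, 6.0]], columns=cols)

# Mean per zone across the instance columns.
per_zone = df.T.groupby(level='zone').mean().T
print(per_zone['zone-a'].tolist())  # [2.0, 3.0]
```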
<a id='sec1.2'></a> 1.2 Compute POI Info Compute POI (Longitude, Latitude) as the average coordinates of the assigned photos.
poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean) poi_coords.reset_index(inplace=True) poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True) poi_coords.head()
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
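The averaging step can be verified on a few fabricated photo records (the coordinates below are made up):

```python
import pandas as pd

photos = pd.DataFrame({
    'poiID': [1, 1, 2],
    'photoLon': [144.96, 144.98, 145.10],
    'photoLat': [-37.81, -37.83, -37.90],
})

# Average the photo coordinates per POI, as in the cell above.
poi_coords = photos.groupby('poiID').mean().reset_index()
poi_coords = poi_coords.rename(columns={'photoLon': 'poiLon',
                                        'photoLat': 'poiLat'})
print(poi_coords.loc[0, 'poiLon'])  # approximately 144.97
```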
<a id='sec1.3'></a> 1.3 Construct Travelling Sequences
seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\ .groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max]) seq_all.columns = seq_all.columns.droplevel() seq_all.reset_index(inplace=True) seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True) seq_all['poiDurati...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
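The arrival/departure construction relies on a min/max aggregation plus a column-level drop. A toy version with fake timestamps follows; note that aggregating with the string names 'min'/'max' yields columns named 'min'/'max' (the original cell uses np.min/np.max, which produce 'amin'/'amax').

```python
import pandas as pd

visits = pd.DataFrame({
    'userID': ['u1', 'u1', 'u1'],
    'seqID': [1, 1, 1],
    'poiID': [7, 7, 9],
    'dateTaken': [100, 160, 200],   # fake epoch seconds
})

seq = visits.groupby(['userID', 'seqID', 'poiID']).agg(['min', 'max'])
seq.columns = seq.columns.droplevel()     # drop the 'dateTaken' level
seq = seq.reset_index().rename(columns={'min': 'arrivalTime',
                                        'max': 'departureTime'})
seq['poiDuration'] = seq['departureTime'] - seq['arrivalTime']
print(seq['poiDuration'].tolist())  # [60, 0]
```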
<a id='sec1.4'></a> 1.4 Generate KML File for Trajectory Visualise a trajectory on the map by generating a KML file for the trajectory and its associated POIs.
def generate_kml(fname, seqid_set, seq_all, poi_all): k = kml.KML() ns = '{http://www.opengis.net/kml/2.2}' styid = 'style1' # colors in KML: aabbggrr, aa=00 is fully transparent sty = styles.Style(id=styid, styles=[styles.LineStyle(color='9f0000ff', width=2)]) # transparent red doc = kml.Docume...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
<a id='sec2'></a> 2. Trajectory with same (start, end)
seq_user = seq_all[['userID', 'seqID', 'poiID']].copy().groupby(['userID', 'seqID']).agg(np.size) seq_user.reset_index(inplace=True) seq_user.rename(columns={'size':'seqLen'}, inplace=True) seq_user.set_index('seqID', inplace=True) seq_user.head() def extract_seq(seqid, seq_all): seqi = seq_all[seq_all['seqID'] ==...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
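The per-sequence length table is a straightforward groupby-size; a minimal sketch on fabricated records:

```python
import pandas as pd

seq_all = pd.DataFrame({
    'userID': ['u1', 'u1', 'u2'],
    'seqID': [1, 1, 2],
    'poiID': [7, 9, 7],
})

# Count the POIs in each (user, sequence) pair.
seq_user = (seq_all.groupby(['userID', 'seqID'])
                   .size()
                   .reset_index(name='seqLen')
                   .set_index('seqID'))
print(seq_user.loc[1, 'seqLen'])  # 2
```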
<a id='sec3'></a> 3. Trajectory with more than one occurrence Construct trajectories that occur more than once (whether by the same user or by different users).
distinct_seq = dict() for seqid in seq_all['seqID'].unique(): seq = extract_seq(seqid, seq_all) #if len(seq) < 2: continue # drop trajectory with single point if str(seq) not in distinct_seq: distinct_seq[str(seq)] = [(seqid, seq_user.loc[seqid].iloc[0])] # (seqid, user) else: distinct...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
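The dictionary bookkeeping above is essentially occurrence counting keyed on the stringified sequence; `collections.Counter` captures the same idea on fabricated trajectories:

```python
from collections import Counter

# Fabricated trajectories: each is a tuple of POI ids.
trajectories = [(7, 9), (7, 9), (3,), (7, 9, 11)]

# Key on the string form of each sequence, as the cell above does.
occurrence = Counter(str(t) for t in trajectories)
print(occurrence[str((7, 9))])  # 2
```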
Filter out sequences with a single point, as well as sequences that occur only once.
distinct_seq_df2 = distinct_seq_df[distinct_seq_df['seqLen'] > 1] distinct_seq_df2 = distinct_seq_df2[distinct_seq_df2['#occurrence'] > 1] distinct_seq_df2.head() plt.figure(figsize=[9, 9]) plt.xlabel('sequence length') plt.ylabel('#occurrence') plt.scatter(distinct_seq_df2['seqLen'], distinct_seq_df2['#occurrence'], ...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
<a id='sec4'></a> 4. Visualise Trajectory <a id='sec4.1'></a> 4.1 Visualise Trajectories with more than one occurrence
for seqstr in distinct_seq_df2.index: assert(seqstr in distinct_seq) seqid = distinct_seq[seqstr][0][0] fname = re.sub(',', '_', re.sub('[ \[\]]', '', seqstr)) fname = os.path.join(data_dir, suffix + '-seq-occur-' + str(len(distinct_seq[seqstr])) + '_' + fname + '.kml') generate_kml(fname, [seqid], ...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
<a id='sec4.2'></a> 4.2 Visualise Trajectories with same (start, end) but different paths
startend_distinct_seq = dict() distinct_seqid_set = [distinct_seq[x][0][0] for x in distinct_seq_df2.index] for seqid in distinct_seqid_set: seq = extract_seq(seqid, seq_all) if (seq[0], seq[-1]) not in startend_distinct_seq: startend_distinct_seq[(seq[0], seq[-1])] = [seqid] else: starten...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
<a id='sec5'></a> 5. Visualise the Most Common Edges <a id='sec5.1'></a> 5.1 Count the occurrence of edges
edge_count = pd.DataFrame(data=np.zeros((poi_all.index.shape[0], poi_all.index.shape[0]), dtype=np.int), \ index=poi_all.index, columns=poi_all.index) for seqid in seq_all['seqID'].unique(): seq = extract_seq(seqid, seq_all) for j in range(len(seq)-1): edge_count.loc[seq[j], s...
tour/traj_visualisation.ipynb
charmasaur/digbeta
gpl-3.0
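Counting edges over consecutive POI pairs can be checked on a toy sequence list; this NumPy version is a stand-in for the DataFrame-based cell above, with made-up POI indices:

```python
import numpy as np

n_poi = 4
sequences = [[0, 1, 2], [0, 1], [3, 1]]   # fabricated POI index sequences

# Tally each consecutive (from, to) pair into a square count matrix.
edge_count = np.zeros((n_poi, n_poi), dtype=int)
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        edge_count[a, b] += 1

print(edge_count[0, 1])  # 2  (edge 0 -> 1 appears in two sequences)
```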
Discussion The re.split() function is useful because you can specify multiple separator patterns. For example, in the solution above, the separator is either a comma, a semicolon, or whitespace, followed by any amount of extra whitespace. Whenever that pattern is found, the entire match becomes the delimiter between the fields on either side of it. The result is a list of fields, just as with str.split(). When using re.split(), you need to be careful about whether the regular expression pattern involves a capture group enclosed in parentheses. If capture groups are used, then the matched text is also included in the result. For example, watch what happens here:
fields = re.split(r"(;|,|\s)\s*", line) fields
02 strings and text/02.01 split string on multiple delimiters.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
Getting the split characters can be useful in certain contexts. For example, you might want to keep the separators so you can use them later to reform an output string:
values = fields[::2] values delimiters = fields[1::2] + [""] delimiters # Reform the line using the same delimiters "".join(v + d for v, d in zip(values, delimiters))
02 strings and text/02.01 split string on multiple delimiters.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
If you don't want the separator characters in the result, but still need to use parentheses to group parts of the regular expression pattern, make sure you use a noncapture group, written as (?:...). For example:
re.split(r"(?:,|;|\s)\s*", line)
02 strings and text/02.01 split string on multiple delimiters.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
Step 4: Build the Model. Split the dataset back into training and test sets.
dummy_train_df = all_dummy_df.loc[train_df.index] dummy_test_df = all_dummy_df.loc[test_df.index] dummy_train_df.shape, dummy_test_df.shape
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
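Splitting a combined frame back apart by the original indices works because `.loc` selects rows by label; a self-contained sketch with made-up data:

```python
import pandas as pd

train = pd.DataFrame({'x': [1, 2]}, index=[10, 11])
test = pd.DataFrame({'x': [3]}, index=[12])

# A combined feature frame covering both index sets.
all_df = pd.DataFrame({'feat': [0.1, 0.2, 0.3]}, index=[10, 11, 12])

# Re-split by label, as done with dummy_train_df / dummy_test_df above.
train_part = all_df.loc[train.index]
test_part = all_df.loc[test.index]
print(train_part.shape, test_part.shape)  # (2, 1) (1, 1)
```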
Ridge Regression Run a Ridge Regression model as a first pass. (For a dataset with many features, this model conveniently lets you throw in all the variables without much feature selection.)
from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
This step isn't strictly necessary; it just converts the DataFrames to NumPy arrays, which plays more nicely with scikit-learn.
X_train = dummy_train_df.values X_test = dummy_test_df.values
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
Use scikit-learn's built-in cross-validation to evaluate the model.
alphas = np.logspace(-3, 2, 50) test_scores = [] for alpha in alphas: clf = Ridge(alpha) test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=10, scoring='neg_mean_squared_error')) test_scores.append(np.mean(test_score))
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
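The sign flip and square root in the loop above convert scikit-learn's `neg_mean_squared_error` scores (larger is better, hence negative) into an RMSE. The arithmetic alone, on hypothetical per-fold scores:

```python
import numpy as np

# Hypothetical per-fold scores as returned by cross_val_score
# with scoring='neg_mean_squared_error'.
neg_mse_scores = np.array([-4.0, -9.0, -16.0])

rmse_per_fold = np.sqrt(-neg_mse_scores)
mean_rmse = rmse_per_fold.mean()
print(rmse_per_fold)  # [2. 3. 4.]
print(mean_rmse)      # 3.0
```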
Store all the CV scores and see which alpha value works best (i.e., hyperparameter tuning).
import matplotlib.pyplot as plt %matplotlib inline plt.plot(alphas, test_scores) plt.title("Alpha vs CV Error");
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
As we can see, with alpha roughly between 10 and 20, the score reaches about 0.135. Random Forest
from sklearn.ensemble import RandomForestRegressor max_features = [.1, .3, .5, .7, .9, .99] test_scores = [] for max_feat in max_features: clf = RandomForestRegressor(n_estimators=200, max_features=max_feat) test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=5, scoring='neg_mean_squared_error')) ...
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
The best Random Forest setting reaches 0.137. Step 5: Ensemble Here we use a stacking-style idea to draw on the strengths of two or more models. First, take the best parameters found above and build our final models.
ridge = Ridge(alpha=15) rf = RandomForestRegressor(n_estimators=500, max_features=.3) ridge.fit(X_train, y_train) rf.fit(X_train, y_train)
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
As mentioned earlier, since we applied log(1+x) to the label at the very start, we now need to exponentiate the predictions back and subtract that "1"; this is exactly what the expm1() function does.
y_ridge = np.expm1(ridge.predict(X_test)) y_rf = np.expm1(rf.predict(X_test))
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
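`np.expm1` is the exact inverse of the `np.log1p` applied to the label earlier; a quick round-trip check on made-up prices:

```python
import numpy as np

prices = np.array([100000.0, 250000.0])
log_label = np.log1p(prices)      # what the model is trained on
recovered = np.expm1(log_label)   # what we submit

print(np.allclose(recovered, prices))  # True
```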
A proper ensemble would take these models' predictions as new inputs and run another round of prediction. Here we use a simpler approach: just average them.
y_final = (y_ridge + y_rf) / 2
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
Step 6: Submit the Results
submission_df = pd.DataFrame(data= {'Id' : test_df.index, 'SalePrice': y_final})
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
Our submission looks roughly like this:
submission_df.head(10)
python/kaggle/competition/house-price/house_price.ipynb
muxiaobai/CourseExercises
gpl-2.0
Variables, lists and dictionaries
var1 = 1 my_string = "This is a string" var1 print(my_string) my_list = [1, 2, 3, 'x', 'y'] my_list my_list[0] my_list[1:3] salaries = {'Mike':2000, 'Ann':3000} salaries['Mike'] salaries['Jake'] = 2500 salaries
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Strings
long_string = 'This is a string \n Second line of the string' print(long_string) long_string.split(" ") long_string.split("\n") long_string.count('s') # case sensitive! long_string.upper()
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Conditionals
if long_string.startswith('X'): print('Yes') elif long_string.startswith('T'): print('It has T') else: print('No')
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Loops
for line in long_string.split('\n'): print(line) c = 0 while c < 10: c += 2 print(c)
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
List comprehensions
some_numbers = [1,2,3,4] [x**2 for x in some_numbers]
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
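Comprehensions can also filter elements with an `if` clause:

```python
some_numbers = [1, 2, 3, 4, 5, 6]

# Keep only the even numbers, then square them.
evens_squared = [x**2 for x in some_numbers if x % 2 == 0]
print(evens_squared)  # [4, 16, 36]
```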
File operations
with open('../README.md', 'r') as f: content = f.read() print(content)
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Functions
def average(numbers): return float(sum(numbers)/len(numbers)) average([1,2,2,2.5,3,]) list(map(average, [[1,2,2,2.5,3,],[3,2.3,4.2,2.5,5,]])) # %load cool_events.py #!/usr/bin/env python from IPython.display import HTML class HUB: """ HUB event class """ def __init__(self, version): self.full_...
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Python libraries A library is a collection of resources, including pre-written code, subroutines, classes, etc.
from math import exp exp(2) #shift tab to access documentation import math math.exp(10) import numpy as np # Numpy - package for scientifc computing #import pandas as pd # Pandas - package for working with data frames (tables) #import Bio # BioPython - package for bioinformatics #import sklearn # scikit-learn - ...
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Plotting
%matplotlib inline import matplotlib.pyplot as plt x_values = np.arange(0, 20, 0.1) y_values = [math.sin(x) for x in x_values] plt.plot(x_values, y_values) plt.scatter(x_values, y_values) plt.boxplot(y_values)
notebooks/Intro to Python and Jupyter.ipynb
samoturk/HUB-ipython
mit
Load up the tptY3 buzzard mocks.
fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits' hdulist = fits.open(fname) z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9]) zbin=1 a = 0.81120 z = 1.0/a - 1.0
notebooks/wt Integral calculation.ipynb
mclaughlin6464/pearce
mit