The function $\texttt{self}.\texttt{_setValues}(k, v, l, r)$ overwrites the member variables of the node $\texttt{self}$ with the given values.
```python
def _setValues(self, k, v, l, r):
    self.mKey   = k
    self.mValue = v
    self.mLeft  = l
    self.mRight = r

AVLTree._setValues = _setValues

def _restoreHeight(self):
    self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1

AVLTree._restoreHeight = _restoreHeight
```
Python/Chapter-06/AVL-Trees.ipynb
Danghor/Algorithms
gpl-2.0
The function $\texttt{createNode}(k, v, l, r)$ creates an AVL tree that has the pair $(k, v)$ stored at its root, left subtree $l$, and right subtree $r$.
```python
def createNode(key, value, left, right):
    node         = AVLTree()
    node.mKey    = key
    node.mValue  = value
    node.mLeft   = left
    node.mRight  = right
    node.mHeight = max(left.mHeight, right.mHeight) + 1
    return node

import graphviz as gv
```
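The height bookkeeping can be checked in isolation. Below is a minimal, self-contained sketch in which a stub `AVLTree` stands in for the notebook's class (the empty tree is assumed to have height 0, matching the definitions above):

```python
# Stub AVLTree: an empty tree has height 0 (assumption mirroring the notebook).
class AVLTree:
    def __init__(self):
        self.mHeight = 0

def createNode(key, value, left, right):
    node         = AVLTree()
    node.mKey    = key
    node.mValue  = value
    node.mLeft   = left
    node.mRight  = right
    node.mHeight = max(left.mHeight, right.mHeight) + 1
    return node

leaf = createNode(1, None, AVLTree(), AVLTree())  # two empty children -> height 1
root = createNode(2, None, leaf, AVLTree())       # taller child has height 1 -> height 2
print(root.mHeight)
```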
Given an ordered binary tree, this function renders the tree graphically using graphviz.
```python
def toDot(self):
    AVLTree.sNodeCount = 0  # this is a static variable of the class AVLTree
    dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
    NodeDict = {}
    self._assignIDs(NodeDict)
    for n, t in NodeDict.items():
        if t.mValue != None:
            dot.node(str(n), label='{' + str...
```
This method assigns a unique identifier to each node. The dictionary NodeDict maps these identifiers to the nodes where they occur.
```python
def _assignIDs(self, NodeDict):
    AVLTree.sNodeCount += 1
    self.mID = AVLTree.sNodeCount
    NodeDict[self.mID] = self
    if self.isEmpty():
        return
    self.mLeft ._assignIDs(NodeDict)
    self.mRight._assignIDs(NodeDict)

AVLTree._assignIDs = _assignIDs
```
The function $\texttt{demo}()$ creates a small ordered binary tree.
```python
def demo():
    m = AVLTree()
    m.insert("anton",  123)
    m.insert("hugo",   345)
    m.insert("gustav", 789)
    m.insert("jens",   234)
    m.insert("hubert", 432)
    m.insert("andre",  342)
    m.insert("philip", 342)
    m.insert("rene",   345)
    return m

t = demo()
print(t.toDot())
t.toDot()
t.delete('gu...
```
Let's generate an ordered binary tree with random keys.
```python
import random as rnd

t = AVLTree()
for k in range(30):
    k = rnd.randrange(100)
    t.insert(k, None)
t.toDot()
```
This tree looks more or less balanced. Let us try to create a tree by inserting sorted numbers, since that resulted in linear complexity for ordered binary trees.
```python
t = AVLTree()
for k in range(30):
    t.insert(k, None)
t.toDot()
```
Next, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows: $$\bigl\{2, \cdots, 100\bigr\} - \bigl\{\, i \cdot j \bigm| i, j \in \{2, \cdots, 100\}\,\bigr\}$$
```python
S = AVLTree()
for k in range(2, 101):
    S.insert(k, None)
for i in range(2, 101):
    for j in range(2, 101):
        S.delete(i * j)
S.toDot()
```
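The same set difference can be checked with plain Python sets, independently of the tree implementation:

```python
# {2..100} minus all products i*j with i, j in {2..100} leaves exactly the primes.
composites = {i * j for i in range(2, 101) for j in range(2, 101)}
primes = set(range(2, 101)) - composites
print(sorted(primes))
```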
The function $t.\texttt{maxKey}()$ returns the biggest key in the tree $t$. It is defined inductively: - $\texttt{Nil}.\texttt{maxKey}() = \Omega$, - $\texttt{Node}(k,v,l,\texttt{Nil}).\texttt{maxKey}() = k$, - $r \not= \texttt{Nil} \rightarrow \texttt{Node}(k,v,l,r).\texttt{maxKey}() = r.\texttt{maxKey}()$.
```python
def maxKey(self):
    if self.isEmpty():
        return None
    if self.mRight.isEmpty():
        return self.mKey
    return self.mRight.maxKey()

AVLTree.maxKey = maxKey
```
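Since the recursion only ever descends to the right, the same function can be written iteratively. A sketch on a toy node class (a hypothetical stand-in for `AVLTree`, using `None` as the empty tree):

```python
# Toy binary-tree node; None plays the role of the empty tree Nil.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def max_key(node):
    """Follow right pointers until they run out; the last key is the maximum."""
    if node is None:
        return None
    while node.right is not None:
        node = node.right
    return node.key

t = Node(4, Node(2), Node(8, None, Node(9)))
print(max_key(t))
```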
The function $\texttt{leanTree}(h, k)$ computes an AVL tree of height $h$ that is as lean as possible. All keys in the tree will be integers that are bigger than $k$. It is defined by induction: - $\texttt{leanTree}(0, k) = \texttt{Nil}$, because there is only one AVL tree of height $0$ and this is the tree $\texttt...
```python
def leanTree(h, k):
    if h == 0:
        return AVLTree()
    if h == 1:
        return createNode(k + 1, None, AVLTree(), AVLTree())
    left = leanTree(h - 1, k)
    l    = left.maxKey()
    return createNode(l + 1, None, left, leanTree(h - 2, l + 1))

l = leanTree(6, 0)
l.toDot()
for k in range(6):
    l = leanT...
```
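The construction above pairs a subtree of height $h-1$ with one of height $h-2$, so the node count $N(h)$ of the leanest AVL tree satisfies the Fibonacci-like recurrence $N(0)=0$, $N(1)=1$, $N(h)=N(h-1)+N(h-2)+1$. A small sketch (function name is mine, not from the notebook):

```python
# Node counts of the leanest AVL trees: one root plus the two lean subtrees.
def lean_size(h):
    a, b = 0, 1              # N(0), N(1)
    for _ in range(h):
        a, b = b, a + b + 1  # N(h) = N(h-1) + N(h-2) + 1
    return a

print([lean_size(h) for h in range(8)])
```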
Comparison with the quadratic model The model can be evaluated for a set of scalar parameters using the tm.evaluate_ps method. The oblate model takes, in addition to the basic orbital parameters, the stellar rotation period rperiod, pole temperature tpole, obliquity phi, gravity-darkening parameter beta, and azimuthal ...
```python
k = array([0.1])
t0, p, a, i, az, e, w = 0.0, 4.0, 4.5, 0.5*pi, 0.0, 0.0, 0.0
rho, rperiod, tpole, phi, beta = 1.4, 0.25, 6500., -0.2*pi, 0.3
ldc = array([0.3, 0.1])  # Quadratic limb darkening coefficients

flux_qm = tmc.evaluate_ps(k, ldc, t0, p, a, i, e, w)
rperiod = 10
flux_om = tmo.evaluate_ps(k, rho, rperiod, tp...
```
notebooks/osmodel_example_1.ipynb
hpparvi/PyTransit
gpl-2.0
Changing obliquity
```python
rperiod = 0.15
b = 0.25
for phi in (-0.25*pi, 0.0, 0.25*pi, 0.5*pi):
    tmo.visualize(0.1, b, 0.0, rho, rperiod, tpole, phi, beta, ldc, ires=256)
```
Changing azimuth angle
```python
rperiod = 0.15
phi = 0.25
b = 0.00
for az in (-0.25*pi, 0.0, 0.25*pi, 0.5*pi):
    tmo.visualize(0.1, b, az, rho, rperiod, tpole, phi, beta, ldc, ires=256)
```
Build the Neural Network You'll build the components necessary for an RNN by implementing the following functions: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU
""" DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.3'), 'Please use TensorFlow version 1.3 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Che...
tv-script-generation/dlnd_tv_script_generation.ipynb
ktmud/deep-learning
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - Th...
```python
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Func...
```
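One possible way to fill in the TODO (a sketch, not the official course solution): flatten the ids, drop the remainder that does not fill a whole batch, shift by one for the targets, then reshape and split.

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    """Sketch: batches of shape (n_batches, 2, batch_size, seq_length);
    targets are the inputs shifted left by one (wrapping at the end)."""
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)                      # target = next word
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))

batches = get_batches(list(range(13)), 2, 3)
print(batches.shape)
```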
Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
```python
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loa...
```
Since we should already have the data downloaded as csv files in this repository, we will not need to re-scrape the data. Omit this cell to directly download from the USAU website (may be slow).
```python
# Read data from csv files
usau.reports.d1_college_nats_men_2016.load_from_csvs()
usau.reports.d1_college_nats_women_2016.load_from_csvs()
```
notebooks/2016-D-I_College_Nationals_Data_Quality.ipynb
azjps/usau-py
mit
Let's take a look at the games for which the sum of the player goals/assists is less than the final score of the game:
```python
display_url_column(pd.concat([usau.reports.d1_college_nats_men_2016.missing_tallies,
                              usau.reports.d1_college_nats_women_2016.missing_tallies])
                   [["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
```
All in all, not too bad! A few of the women's consolation games are missing player statistics, and there are several other games for which a couple of goals or assists were missed. For missing assists, it is technically possible that there were one or more callahans scored in those games, but obviously that's not the ca...
```python
men_matches = usau.reports.d1_college_nats_men_2016.match_results
women_matches = usau.reports.d1_college_nats_women_2016.match_results
display_url_column(pd.concat([men_matches[(men_matches.Ts == 0) & (men_matches.Gs > 0)],
                              women_matches[(women_matches.Ts == 0) & (women_matches.Gs > 0)]])...
```
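The boolean-mask pattern used above can be shown on a toy frame (column names follow the notebook's convention: `Ts` = turns, `Gs` = goals; the data below is made up for illustration):

```python
import pandas as pd

# Select rows where goals were recorded but zero turns were counted.
matches = pd.DataFrame({'Team': ['A', 'B', 'C'],
                        'Gs': [15, 0, 13],
                        'Ts': [0, 0, 7]})
suspect = matches[(matches.Ts == 0) & (matches.Gs > 0)]
print(suspect.Team.tolist())
```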
This implies that there was a pretty good effort made to keep up with counting turns and Ds. By contrast, see how many teams did not keep track of Ds and turns last year (2015)!
```python
# Read last year's data from csv files
usau.reports.d1_college_nats_men_2015.load_from_csvs()
usau.reports.d1_college_nats_women_2015.load_from_csvs()
display_url_column(pd.concat([usau.reports.d1_college_nats_men_2015.missing_tallies,
                              usau.reports.d1_college_nats_women_2015.missing_tallie...
```
Filtering and resampling data This tutorial covers filtering and resampling, and gives examples of how filtering can be used for artifact repair. We begin as always by importing the necessary Python modules and loading some example data from the sample dataset. We'll also crop the data to 60 seconds (to sav...
```python
import os
import numpy as np
import matplotlib.pyplot as plt
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(0, ...
```
0.20/_downloads/ae8fb158e1a8fbcc6dff5d3e55a698dc/plot_30_filtering_resampling.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Background on filtering A filter removes or attenuates parts of a signal. Usually, filters act on specific frequency ranges of a signal — for example, suppressing all frequency components above or below a certain cutoff value. There are many ways of designing digital filters; see disc-filtering for a longer discussion ...
```python
mag_channels = mne.pick_types(raw.info, meg='mag')
raw.plot(duration=60, order=mag_channels, proj=False,
         n_channels=len(mag_channels), remove_dc=False)
```
A half-period of this slow drift appears to last around 10 seconds, so a full period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be sure those components are excluded, we want our highpass to be higher than that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5} \mathrm{Hz}$ filters to see which wo...
```python
for cutoff in (0.1, 0.2):
    raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None)
    fig = raw_highpass.plot(duration=60, order=mag_channels, proj=False,
                            n_channels=len(mag_channels), remove_dc=False)
    fig.subplots_adjust(top=0.9)
    fig.suptitle('High-pass filtered at {} Hz'.f...
```
Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts. Notice that the text output summarizes the relevant characteristics of the filter that was created. If you want to visualize the filter, you can pass the same arguments used in the call to :meth:raw.filter() <mne.io.Raw.filter> above to ...
```python
filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'],
                                         l_freq=0.2, h_freq=None)
```
Notice that the output is the same as when we applied this filter to the data using :meth:raw.filter() <mne.io.Raw.filter>. You can now pass the filter parameters (and the sampling frequency) to :func:~mne.viz.plot_filter to plot the filter:
```python
mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5))
```
Power line noise Power line noise is an environmental artifact that manifests as persistent oscillations centered around the AC power line frequency. Power line artifacts are easiest to see on plots of the spectrum, so we'll use :meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little function that adds arr...
```python
def add_arrows(axes):
    # add some arrows at 60 Hz and its harmonics
    for ax in axes:
        freqs = ax.lines[-1].get_xdata()
        psds = ax.lines[-1].get_ydata()
        for freq in (60, 120, 180, 240):
            idx = np.searchsorted(freqs, freq)
            # get ymax of a small region around the freq. of...
```
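The key step in `add_arrows` is locating each harmonic on the PSD's frequency axis with `np.searchsorted`, which returns the insertion index in a sorted array. A standalone illustration with a toy frequency axis (0-250 Hz in 2 Hz steps, an assumption for the example):

```python
import numpy as np

freqs = np.arange(0, 251, 2.0)      # toy PSD frequency axis
for target in (60, 120, 180, 240):
    idx = np.searchsorted(freqs, target)  # index where `target` would be inserted
    print(target, idx, freqs[idx])
```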
It should be evident that MEG channels are more susceptible to this kind of interference than EEG that is recorded in the magnetically shielded room. Removing power-line noise can be done with a notch filter, applied directly to the :class:~mne.io.Raw object, specifying an array of frequencies to be attenuated. Since t...
```python
meg_picks = mne.pick_types(raw.info)  # meg=True, eeg=False are the defaults
freqs = (60, 120, 180, 240)
raw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks)
for title, data in zip(['Un', 'Notch '], [raw, raw_notch]):
    fig = data.plot_psd(fmax=250, average=True)
    fig.subplots_adjust(top=0.85)
    fig...
```
:meth:~mne.io.Raw.notch_filter also has parameters to control the notch width, transition bandwidth and other aspects of the filter. See the docstring for details. Resampling EEG and MEG recordings are notable for their high temporal precision, and are often recorded with sampling rates around 1000 Hz or higher. This i...
```python
raw_downsampled = raw.copy().resample(sfreq=200)
for data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']):
    fig = data.plot_psd(average=True)
    fig.subplots_adjust(top=0.9)
    fig.suptitle(title)
    plt.setp(fig.axes, xlim=(0, 300))
```
Because resampling involves filtering, there are some pitfalls to resampling at different points in the analysis stream: Performing resampling on :class:~mne.io.Raw data (before epoching) will negatively affect the temporal precision of Event arrays, by causing jitter in the event timing. This reduced temporal p...
```python
current_sfreq = raw.info['sfreq']
desired_sfreq = 90  # Hz
decim = np.round(current_sfreq / desired_sfreq).astype(int)
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.

raw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)
events = mne.find_events(raw_filtered)
epochs = mne.Epochs(...
```
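The decimation arithmetic can be followed in plain Python. Below is the same computation with a hypothetical 1000 Hz recording standing in for the sample data's sampling rate: the decimation factor must be an integer, so the obtained rate only approximates the desired one, and the low-pass cutoff is set to a third of it.

```python
import numpy as np

current_sfreq = 1000.0   # hypothetical recording rate (not the sample data's)
desired_sfreq = 90       # Hz
decim = int(np.round(current_sfreq / desired_sfreq))
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.0
print(decim, obtained_sfreq, round(lowpass_freq, 2))
```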
As a 2-dimensional NumPy array, the raster lets us perform rapid stats on the pixel values. We can do this for all the pixels at once, or we can perform stats on specific pixels or subsets of pixels, using their rows and columns to identify the subsets.
```python
# Get the value of a specific pixel, here one at the 100th row and 50th column
demRaster[99, 49]

# Compute the min, max and mean value of all pixels in the 200 x 200 DEM snippet
print("Min:", demRaster.min())
print("Max:", demRaster.max())
print("Mean:", demRaster.mean())
```
07_DataWrangling/notebooks/03-Using-NumPy-With-Rasters.ipynb
johnpfay/environ859
gpl-3.0
To define subsets, we use the same slicing techniques we use for lists and strings, but as this is a 2-dimensional array, we provide two slices: the first selects the rows and the second the columns. We can use a : to select all rows or columns.
```python
# Get the max for the 10th column of pixels,
# (using `:` to select all rows and `9` to select just the 10th column)
print(demRaster[:, 9].max())

# Get the mean for the first 10 rows of pixels, selecting all columns
# (We can put a `:` as the second slice, or leave it blank after the comma...)
x = demRaster[:10, ]
x.shap...
```
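The row/column slicing patterns can be tried on a small synthetic raster (a 4 x 5 array standing in for `demRaster`):

```python
import numpy as np

dem = np.arange(20).reshape(4, 5)   # tiny stand-in for the DEM raster
col_max = dem[:, 1].max()          # all rows, 2nd column -> max of [1, 6, 11, 16]
row_mean = dem[:2, :].mean()       # first two rows, all columns -> mean of 0..9
print(col_max, row_mean)
```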
The SciPy package has a number of multi-dimensional image processing capabilities (see https://docs.scipy.org/doc/scipy/reference/ndimage.html). Here is a somewhat complex example that runs through 10 iterations of computing a neighborhood median (using nd.median_filter) with an incrementally growing neighborhood. We ...
```python
# Import the SciPy and plotting packages
import scipy.ndimage as nd
from matplotlib import pyplot as plt

# Allows plots in our Jupyter Notebook
%matplotlib inline

# Create a 'canvas' onto which we can add plots
fig = plt.figure(figsize=(20, 20))

# Loop through 10 iterations
for i in range(10):
    # Create a kernel, inti...
```
Q1 - triple recursion Rewrite the function u so that it is no longer recursive.
```python
def u(n):
    if n <= 2:
        return 1
    else:
        return u(n-1) + u(n-2) + u(n-3)

u(5)
```
_doc/notebooks/python/hypercube.ipynb
sdpython/teachpyx
mit
The problem with this version is that the function is triply recursive, so its cost grows as fast as the value it computes. Let us check.
```python
compteur = []

def u_st(n):
    global compteur
    compteur.append(n)
    if n <= 2:
        return 1
    else:
        return u_st(n-1) + u_st(n-2) + u_st(n-3)

u_st(5), compteur
```
The second list returned contains every n for which the function u_st was called.
```python
def u_non_recursif(n):
    if n <= 2:
        return 1
    u0 = 1
    u1 = 1
    u2 = 1
    i = 3
    while i <= n:
        u = u0 + u1 + u2
        u0 = u1
        u1 = u2
        u2 = u
        i += 1
    return u

u_non_recursif(5)
```
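Another way to remove the exponential blow-up while keeping the recursive shape is memoization: each value is computed once and cached. A sketch (the memoized name is mine, not from the exercise):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def u_memo(n):
    # Same definition as u, but each u_memo(n) is computed only once,
    # so the cost is linear in n instead of exponential.
    if n <= 2:
        return 1
    return u_memo(n-1) + u_memo(n-2) + u_memo(n-3)

print(u_memo(5))
```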
Q2 - comparing lists Consider two lists of integers. The first is less than the second if one of the two following conditions holds: the first $n$ numbers are equal but the first list contains only $n$ elements while the second is longer, or the first $n$ numbers are equ...
```python
def compare_liste(p, q):
    i = 0
    while i < len(p) and i < len(q):
        if p[i] < q[i]:
            return -1  # we can decide
        elif p[i] > q[i]:
            return 1   # we can decide
        i += 1         # we cannot decide yet
    # end of the loop, we must decide ...
```
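Python's built-in comparison on lists implements exactly this lexicographic order, which gives a quick way to check the specification:

```python
# A shorter prefix is smaller; otherwise the first differing element decides.
assert [1, 2] < [1, 2, 3]       # prefix, but shorter
assert [1, 2, 4] > [1, 2, 3]    # first difference decides
assert not ([1, 2] < [1, 2])    # equal lists are not ordered
print("ok")
```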
Q3 - computation precision We want to compute the sum of the terms of a geometric series with ratio $\frac{1}{2}$. We define $r=\frac{1}{2}$; we thus want to compute $\sum_{i=0}^{\infty} r^i$, which is a convergent but infinite sum. The following program computes an approximate value of it. It returns, in add...
```python
def suite_geometrique_1(r):
    x = 1.0
    y = 0.0
    n = 0
    while x > 0:
        y += x
        x *= r
        n += 1
    return y, n

print(suite_geometrique_1(0.5))
```
A more experienced programmer wrote the following program, which returns the same result but with a much smaller number of iterations.
```python
def suite_geometrique_2(r):
    x = 1.0
    y = 0.0
    n = 0
    yold = y + 1
    while abs(yold - y) > 0:
        yold = y
        y += x
        x *= r
        n += 1
    return y, n

print(suite_geometrique_2(0.5))
```
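The two stopping criteria can be compared directly on plain floats. The first loop runs until x underflows all the way to 0.0 (1075 halvings with IEEE-754 doubles); a variant of the second, checking `y + x != y` before adding, stops as soon as x falls below the precision of y (one iteration fewer than the notebook's version, which detects the unchanged sum one step later):

```python
def count_iters_underflow(r=0.5):
    x, n = 1.0, 0
    while x > 0:          # only fails once x underflows to 0.0
        x *= r
        n += 1
    return n

def count_iters_precision(r=0.5):
    x, y, n = 1.0, 0.0, 0
    while y + x != y:     # fails as soon as adding x no longer changes y
        y += x
        x *= r
        n += 1
    return n, y

n1 = count_iters_underflow()
n2, y = count_iters_precision()
print(n1, n2, y)
```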
Explain why the second program is faster while returning the same result. Numerical reference point: $2^{-55} \sim 2.8 \cdot 10^{-17}$. First of all, the second program is faster because it performs fewer iterations, 55 instead of 1075. Now, the question is why the second program returns the same...
```python
def hyper_cube_liste(n, m=None):
    if m is None:
        m = [0, 0]
    if n > 1:
        m[0] = [0, 0]
        m[1] = [0, 0]
        m[0] = hyper_cube_liste(n-1, m[0])
        m[1] = hyper_cube_liste(n-1, m[1])
    return m

hyper_cube_liste(3)
```
The second one, based on a dictionary (easier to manipulate):
```python
def hyper_cube_dico(n):
    r = {}
    ind = [0 for i in range(0, n)]
    while ind[0] <= 1:
        cle = tuple(ind)  # converting a list into a tuple
        r[cle] = 0
        ind[-1] += 1
        k = len(ind) - 1
        while ind[k] == 2 and k > 0:
            ind[k] = 0
            ind[k-1] += 1
            ...
```
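The hand-rolled odometer loop can also be expressed with itertools.product, which enumerates all n-tuples over {0, 1} directly (the alternative name is mine):

```python
from itertools import product

def hyper_cube_dico2(n):
    # Same dictionary as hyper_cube_dico: every n-tuple over {0, 1} -> frequency 0.
    return {cle: 0 for cle in product((0, 1), repeat=n)}

d = hyper_cube_dico2(3)
print(len(d), d[(0, 1, 0)])
```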
The researcher started writing his program:
```python
def occurrence(l, n):
    # d = .......  # choice of a hyper_cube(n)
    # .....
    # return d
    pass

suite = [0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
h = occurrence(suite, 3)
h
```
Whichever structure you choose (a priori the one with the dictionary), complete the function occurrence.
```python
def occurrence(tu, n):
    d = hyper_cube_dico(n)
    for i in range(0, len(tu)-n):
        cle = tu[i:i+n]
        d[cle] += 1
    return d

occurrence((1, 0, 1, 1, 0, 1, 0), 3)
```
It is even possible to do without the function hyper_cube_dico:
```python
def occurrence2(tu, n):
    d = {}
    for i in range(0, len(tu)-n):
        cle = tu[i:i+n]
        if cle not in d:
            d[cle] = 0
        d[cle] += 1
    return d

occurrence2((1, 0, 1, 1, 0, 1, 0), 3)
```
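In modern Python, occurrence2 collapses to collections.Counter; the loop bound below keeps the notebook's convention (`range(0, len(tu)-n)`), and the name is mine:

```python
from collections import Counter

def occurrence_counter(tu, n):
    # Count each n-tuple window of the sequence, missing tuples simply absent.
    return Counter(tu[i:i+n] for i in range(0, len(tu)-n))

c = occurrence_counter((1, 0, 1, 1, 0, 1, 0), 3)
print(c)
```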
The only difference appears when an n-tuple does not occur in the list. With the function hyper_cube_dico, that n-tuple is assigned frequency 0; without it, the n-tuple is simply absent from the dictionary d. The same program with the matrix structure is more a curiosity than a useful case.
```python
def occurrence3(li, n):
    d = hyper_cube_liste(n)
    for i in range(0, len(li)-n):
        cle = li[i:i+n]
        t = d
        for k in range(0, n-1):  # key point of the function:
            t = t[cle[k]]        # access to an element
        t[cle[n-1]] +...
```
Another way to write it...
```python
def hyper_cube_liste2(n, m=[0, 0], m2=[0, 0]):
    if n > 1:
        m[0] = list(m2)
        m[1] = list(m2)
        m[0] = hyper_cube_liste2(n-1, m[0])
        m[1] = hyper_cube_liste2(n-1, m[1])
    return m

def occurrence4(li, n):
    d = hyper_cube_liste2(n)  # * remark: see below
    for i in range(...
```
And what if we replace list(m2) with m2?
```python
def hyper_cube_liste3(n, m=[0, 0], m2=[0, 0]):
    if n > 1:
        m[0] = m2
        m[1] = m2
        m[0] = hyper_cube_liste3(n-1, m[0])
        m[1] = hyper_cube_liste3(n-1, m[1])
    return m

def occurrence5(li, n):
    d = hyper_cube_liste3(n)  # * remark: see below
    for i in range(0, len(li)-n...
```
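Dropping list(m2) introduces aliasing: both slots then reference the very same list object, so a write through one is visible through the other (and the mutable default argument is shared across calls, too). A tiny standalone illustration of the pitfall:

```python
# Why m[0] = m2 breaks the structure: both slots alias the same list object.
m2 = [0, 0]
m = [0, 0]
m[0] = m2          # alias, like hyper_cube_liste3
m[1] = m2          # same object again
m[0][0] = 99       # write through one slot...
print(m[1][0])     # ...is visible through the other
assert m[0] is m[1]
```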
Automated Set-Up
```python
# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py
def describe_var_list(input_var_name_list):
    description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
    return "".join(description_...
```
notebooks/crispr/Dual CRISPR 4-Count Combination.ipynb
ucsd-ccbb/jupyter-genomics
mit
Count Combination Functions
```python
# %load -s get_counts_file_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/construct_counter.py
def get_counts_file_suffix():
    return "counts.txt"

# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_com...
```
Input Count Filenames
```python
from ccbbucsd.utilities.files_and_paths import summarize_filenames_for_prefix_and_suffix

print(summarize_filenames_for_prefix_and_suffix(g_fastq_counts_dir, g_fastq_counts_run_prefix,
                                              get_counts_file_suffix()))
```
Count Combination Execution
```python
write_collapsed_count_files(g_fastq_counts_dir, g_collapsed_counts_dir,
                            g_collapsed_counts_run_prefix, g_fastq_counts_run_prefix,
                            get_counts_file_suffix(), get_collapsed_counts_file_suffix())
write_combined_count_file(g_collapsed_counts_dir, g_combined_counts_dir,
                          g_collapsed_counts_run_pr...
```
Conformal Geometric Algebra (CGA) is a projective geometry tool which allows conformal transformations to be implemented with rotations. To do this, the original geometric algebra is extended by two dimensions, one of positive signature $e_+$ and one of negative signature $e_-$. Thus, if we started with $G_p$, the ...
```python
from numpy import pi, e
from clifford import Cl, conformalize

G2, blades_g2 = Cl(2)
blades_g2
G2c, blades_g2c, stuff = conformalize(G2)
blades_g2c

# inspect the CGA blades
print_ga(blades_g2c['e4'])
stuff
```
py/CGA-clifford.ipynb
utensil/julia-playground
mit
It contains the following: - ep - positive basis vector added - en - negative basis vector added - eo - zero vector of the null basis (= .5*(en-ep)) - einf - infinity vector of the null basis (= en+ep) - E0 - Minkowski bivector (= einf^eo) - up() - function to up-project a vector from GA to CGA - down() - function to down-project a vector from...
```python
locals().update(blades_g2c)
locals().update(stuff)

x = e1 + e2
print_ga(x)
print_ga(up(x))
print_ga(down(up(x)))

a = 1*e1 + 2*e2
b = 3*e1 + 4*e2
print_ga(a, b)
print_ga(down(ep*up(a)*ep), a.inv())
print_ga(down(E0*up(a)*E0), -a)
```
Dilations $$D_{\alpha} = e^{-\frac{\ln{\alpha}}{2} \,E_0} $$ $$D_{\alpha} \, X \, \tilde{D_{\alpha}} $$
```python
# scipy no longer re-exports rand/log; use numpy instead
from numpy import log
from numpy.random import rand

D = lambda alpha: e**((-log(alpha)/2.)*(E0))
alpha = rand()
print_ga(down(D(alpha)*up(a)*~D(alpha)), (alpha*a))
```
Translations $$ V = e^{\frac{1}{2} e_{\infty} a} = 1 + \tfrac{1}{2} e_{\infty} a$$ (the series terminates because $e_{\infty} a$ squares to zero).
```python
T = lambda x: e**(1/2.*(einf*x))
print_ga(down(T(a)*up(b)*~T(a)), b+a)

from pprint import pprint
pprint(vars(einf))
print_ga(ep, en, eo)
```
Transversions A transversion is an inversion, followed by a translation, followed by an inversion. The versor is $$V= e_+ T_a e_+$$ which is recognised as the translation bivector reflected in the $e_+$ vector. From the diagram, it is seen that this is equivalent to the bivector in $x\wedge e_o$, $$ e_+ (1+e_{\infty}a...
```python
V = ep * T(a) * ep
assert (V == 1 + (eo*a))
K = lambda x: 1 + (eo*x)   # original used `a` inside the lambda; `x` is intended
B = up(b)
print_ga(down(K(a)*B*~K(a)), 1/(a + 1/b))
print_ga(a, 1/a)
print_ga(e1, e2, e1 | e2, e1 ^ e2, e1 * e2)
print_ga(a, b, a | b, a ^ b, a * b)

soa = np.array([[0, 0, 1, 1, -2, 0], [0, 0, 2, 1, 1, 0], [0, 0, 3, 2, 1, 0], [0, 0, 4, ...
```
Reflections $$ -mam^{-1} \rightarrow MA\tilde{M} $$
```python
m = 5*e1 + 6*e2
n = 7*e1 + 8*e2
print_ga(down(m*up(a)*m), -m*a*m.inv())
str_ga(a, m, down(m*up(a)*m))
print_ga(a, m, down(m*up(a)*m))
plot_as_vector(a)
plot_as_vector(m)
plot_as_vector(down(m*up(a)*m))
```
Rotations $$ mnanm = Ra\tilde{R} \rightarrow RA\tilde{R} $$
```python
R = lambda theta: e**((-.5*theta)*(e12))
theta = pi/2
print_ga(down(R(theta)*up(a)*~R(theta)))
print_ga(R(theta)*a*~R(theta))
plot_as_vector(a, down(R(theta)*up(a)*~R(theta)))

from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets

def plot_rotate(origin, theta=pi/2):
    ...
```
As a simple example, consider the combined operations of translation, scaling, and inversion. $$b=-2a+e_0 \quad \rightarrow \quad B= (T_{e_0}E_0 D_2) A \tilde{ (D_2 E_0 T_{e_0})} $$
```python
A = up(a)
V = T(e1)*E0*D(2)
B = V*A*~V
assert(down(B) == (-2*a) + e1)
plot_as_vector(a, down(B))
```
Transversion A transversion may be built from an inversion, followed by a translation, followed by an inversion. $$c = (a^{-1}+b)^{-1}$$ In conformal GA, this is accomplished by $$C = VA\tilde{V}$$ $$V= e_+ T_b e_+$$
```python
A = up(a)
V = ep*T(b)*ep
C = V*A*~V
assert(down(C) == 1/(1/a + b))
plot_as_vector(a, down(C))
```
Init
```r
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
library(phyloseq)
```
ipynb/bac_genome/fullCyc/detection_threshold.ipynb
nick-youngblut/SIPSim
mit
Mapping bulk and SIP data SIP dataset
```r
%%R
F = file.path(physeqDir, physeq_SIP_core)
physeq.SIP = readRDS(F)
physeq.SIP.m = physeq.SIP %>% sample_data
physeq.SIP
```
Pre-fraction dataset
```r
%%R
F = file.path(physeqDir, physeq_bulk_core)
physeq.bulk = readRDS(F)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk

%%R
# parsing out to just 12C-Con gradients
physeq.bulk.f = prune_samples((physeq.bulk.m$Exp_type == 'microcosm_bulk') |
                              (physeq.bulk.m$Exp_type == 'SIP' & ...
```
Re-train our model with trips_last_5min feature In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook 4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. T...
```python
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
    bq.create_dataset(dataset)  # will fail if dataset already exists
    print("Dataset created.")
except:
    print("Dataset already exists.")
```
courses/machine_learning/deepdive2/building_production_ml_systems/labs/4b_streaming_data_inference_vertex.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, we create a table called traffic_realtime and set up the schema.
```python
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
    bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
    bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
    bq.create_tab...
```
Launch Streaming Dataflow Pipeline Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline. The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it. There are 5 transformations being...
```sql
%%bigquery
SELECT *
FROM `taxifare.traffic_realtime`
ORDER BY time DESC
LIMIT 10
```
Make predictions from the new data In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook. The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent...
```python
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
    bq = bigquery.Client()
    query_string = """
    TODO: Your code goes here
    """
    trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
    instance['tra...
```
The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and changing over time.
```python
add_traffic_last_5min(instance={'dayofweek': 4,
                                'hourofday': 13,
                                'pickup_longitude': -73.99,
                                'pickup_latitude': 40.758,
                                'dropoff_latitude': 41.742,
                                'dropoff_lon...
```
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that the resulting predictions change over time as our realtime traffic information changes. Exercise. Complete the code below to call prediction on an...
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
# Hint: Look at this sample
# https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py
ENDPOINT_ID =  # TODO: Copy the `ENDPOINT_ID` from the deployment in the previous lab.
api_end...
Basic run Here we show how FaIR can be run with step change CO$_2$ emissions and sinusoidal non-CO$_2$ forcing timeseries.
emissions = np.zeros(250)
emissions[125:] = 10.0
other_rf = np.zeros(emissions.size)
for x in range(0, emissions.size):
    other_rf[x] = 0.5 * np.sin(2 * np.pi * (x) / 14.0)

C, F, T = fair.forward.fair_scm(
    emissions=emissions,
    other_rf=other_rf,
    useMultigas=False
)

fig = plt.figure()
ax1 = fig.add_sub...
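As an aside, the element-wise loop that fills `other_rf` can be written as a single vectorized NumPy expression, which is equivalent and faster:

```python
import numpy as np

emissions = np.zeros(250)
emissions[125:] = 10.0

# vectorized equivalent of the loop: 0.5 * sin(2*pi*x / 14) for each timestep x
other_rf = 0.5 * np.sin(2 * np.pi * np.arange(emissions.size) / 14.0)
```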
notebooks/Example-Usage.ipynb
OMS-NetZero/FAIR
apache-2.0
RCPs We can run FaIR with the CO$_2$ emissions and non-CO$_2$ forcing from the four representative concentration pathway scenarios. To use the emissions-based version specify useMultigas=True in the call to fair_scm(). By default in multi-gas mode, volcanic and solar forcing plus natural emissions of methane and nitrou...
from fair.RCPs import rcp3pd, rcp45, rcp6, rcp85

fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)

C26, F26, T26 = fair.forward.fair_scm(emissions=rcp3pd.Emissions.emissions)
ax1.plot(rcp3pd.Emissions.year, rcp3pd.Emissions.co2_fossil, color...
Concentrations of well-mixed greenhouse gases The output of FaIR (in most cases) is a 3-element tuple of concentrations, effective radiative forcing and temperature change since pre-industrial. Concentrations are a 31-column array of greenhouse gases. The indices correspond to the order given in the RCP concentration d...
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)

ax1.plot(rcp3pd.Emissions.year, C26[:,1], color='green', label='RCP3PD')
ax1.plot(rcp45.Emissions.year, C45[:,1], color='blue', label='RCP4.5')
ax1.plot(rcp6.Emissions.year, C60[:,1], color='r...
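To make the column layout concrete: assuming the RCP gas ordering described above (CO$_2$ first, then CH$_4$, which is the column plotted here), individual gases are simply column slices of the concentration array. A sketch on a stand-in array of the same shape:

```python
import numpy as np

# stand-in for FaIR's multigas concentration output: (timesteps, 31 gases)
C = np.zeros((736, 31))
co2 = C[:, 0]  # CO2, assuming RCP ordering
ch4 = C[:, 1]  # CH4, the column plotted above
```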
Radiative forcing We consider 13 separate species of radiative forcing: CO$_2$, CH$_4$, N$_2$O, minor GHGs, tropospheric ozone, stratospheric ozone, stratospheric water vapour from methane oxidation, contrails, aerosols, black carbon on snow, land use change, volcanic and solar (table 3 in Smith et al., https://www.geo...
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)

ax1.plot(rcp3pd.Emissions.year, F26[:,4], color='green', label='RCP3PD')
ax1.plot(rcp45.Emissions.year, F45[:,4], color='blue', label='RCP4.5')
ax1.plot(rcp6.Emissions.year, F60[:,4], color='r...
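A total effective radiative forcing can be obtained by summing over the 13 species columns. A sketch on a stand-in array of the same shape as FaIR's forcing output:

```python
import numpy as np

# stand-in forcing array: (timesteps, 13 forcing species)
F = np.ones((736, 13))
F_total = F.sum(axis=1)  # total ERF at each timestep
```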
Ensemble generation An advantage of FaIR is that it is very quick to run (much less than a second on an average machine). Therefore it can be used to generate probabilistic future ensembles. We'll show a 100-member ensemble.
from scipy import stats
from fair.tools.ensemble import tcrecs_generate

# generate some joint lognormal TCR and ECS pairs
tcrecs = tcrecs_generate(n=100, seed=38571)

# generate some forcing scale factors with SD of 10% of the best estimate
F_scale = stats.norm.rvs(size=(100,13), loc=1, scale=0.1, random_state=40000) ...
The resulting projections show a large spread. Some of these ensemble members are unrealistic, ranging from around 0.4 to 2.0 K temperature change in the present day, whereas we know in reality it is more like 0.9 (plus or minus 0.2). Therefore we can constrain this ensemble to observations.
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

from fair.tools.constrain import hist_temp

# load up Cowtan and Way data remotely
url = 'http://www-users.york.ac.uk/~kdc3/papers/coverage2013/had4_k...
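The constraining step itself boils down to keeping only those ensemble members whose simulated present-day warming falls within the observed range (roughly 0.9 plus or minus 0.2 K, as quoted above). A sketch with synthetic numbers in place of the real ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic present-day warming (K) for 100 ensemble members
T_now = rng.normal(1.0, 0.4, 100)

# keep only members consistent with the observed 0.9 +/- 0.2 K range
mask = np.abs(T_now - 0.9) <= 0.2
constrained = T_now[mask]
```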
<font color='red'>Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip()  # '<YOUR API KEY HERE>'
version = 'v1'
api-examples/cams_air_quality_demo_calu2020.ipynb
planet-os/notebooks
mit
First, we need to define the dataset name and the variable we want to use.
dh = datahub.datahub(server, version, API_key)
dataset = 'cams_nrt_forecasts_global'
variable_name1 = 'pm2p5'
Then we define the spatial range. We decided to analyze the US, where unfortunately catastrophic wildfires are taking place at the moment and influencing air quality.
area_name = 'USA'
latitude_north = 49.138; longitude_west = -128.780
latitude_south = 24.414; longitude_east = -57.763
Download the data with package API

1. Create package objects
2. Send commands for the package creation
3. Download the package files
package_cams = package_api.package_api(dh, dataset, variable_name1,
                                       longitude_west, longitude_east,
                                       latitude_south, latitude_north,
                                       area_name=area_name)
package_cams.make_package()
package_cams.download_package()
Work with the downloaded files We start by opening the files with xarray and converting PM2.5 to micrograms per cubic meter to make the values easier to understand and compare. After that, we will create a map plot with a time slider, then make a GIF using the images, and finally, we will look into a specific loc...
dd1 = xr.open_dataset(package_cams.local_file_name)
dd1['longitude'] = ((dd1.longitude + 180) % 360) - 180
dd1['pm2p5_micro'] = dd1.pm2p5 * 1000000000.
dd1.pm2p5_micro.data[dd1.pm2p5_micro.data < 0] = np.nan
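The factor of 10⁹ in the cell above is simply the kg/m³ to µg/m³ conversion (1 kg = 10⁹ µg), which is easy to sanity-check with a made-up value:

```python
# CAMS provides PM2.5 in kg/m^3; multiplying by 1e9 gives µg/m^3
pm_kg_m3 = 5e-9            # an assumed example value
pm_ug_m3 = pm_kg_m3 * 1e9  # 5.0 µg/m^3
```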
Here we are making a Basemap of the US that we will use for showing the data.
m = Basemap(projection='merc', lat_0=55, lon_0=-4,
            resolution='i', area_thresh=0.05,
            llcrnrlon=longitude_west, llcrnrlat=latitude_south,
            urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons, lats = np.meshgrid(dd1.longitude.data, dd1.latitude.data)
lonmap, latmap = m(lons, lats)
Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose a time stamp by using a slider. As the minimum and maximum values are very different, we are using a logarithmic colorbar to visualize it better. On the map we can see that the areas near Los Angeles have very ...
vmax = np.nanmax(dd1.pm2p5_micro.data)
vmin = 2

def loadimg(k):
    fig = plt.figure(figsize=(10,7))
    ax = fig.add_subplot(111)
    pcm = m.pcolormesh(lonmap, latmap, dd1.pm2p5_micro.data[k],
                       norm=colors.LogNorm(vmin=vmin, vmax=vmax), cmap='rainbow')
    ilat,ilon = np.unravel_index(np.nanargmax(dd1....
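The logarithmic colorbar comes from matplotlib's `colors.LogNorm`, which maps data spanning several orders of magnitude onto [0, 1] on a log scale. A minimal standalone illustration:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
from matplotlib import colors

norm = colors.LogNorm(vmin=2, vmax=2000)
# the endpoints map to 0 and 1; values in between are spaced logarithmically
low, high = float(norm(2.0)), float(norm(2000.0))
```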
Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
loadimg(10)
With the function below we will save the images you saw above to the local filesystem and combine them into a GIF, so the animation is easy to share with others.
def make_ani():
    folder = './anim/'
    for k in range(len(dd1.pm2p5_micro)):
        filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
        if not os.path.exists(filename):
            fig = plt.figure(figsize=(10,7))
            ax = fig.add_subplot(111)
            pcm = m.pcolormesh(lonmap, latmap, dd1.pm...
To look at the data in more detail we need to choose a location. This time we decided to look into the place where PM2.5 is highest. At the moment it seems to be the Santa Barbara area, where the Thomas Fire is taking place.
ilat, ilon = np.unravel_index(np.nanargmax(dd1.pm2p5_micro.data[1]), dd1.pm2p5_micro.data[1].shape)
lon_max = -121.9; lat_max = 37.33  # dd1.latitude.data[ilat]
data_in_spec_loc = dd1.sel(longitude=lon_max, latitude=lat_max, method='nearest')
print('Latitude ' + str(lat_max) + ' ; Longitude ' + str(lon_max))
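The `np.nanargmax` / `np.unravel_index` pattern used above (take the flat index of the largest non-NaN value, then convert it back to a (row, column) pair) can be illustrated on a tiny array:

```python
import numpy as np

field = np.array([[1.0, np.nan],
                  [3.0, 2.0]])
ilat, ilon = np.unravel_index(np.nanargmax(field), field.shape)
# the largest value, 3.0, sits at row 1, column 0
```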
In the plot below we can see the PM2.5 forecast on the surface layer. Note that the time zone on the graph is UTC while the time zone in Santa Barbara is UTC-08:00. The air pollution from the wildfire has exceeded a record 5,000 µg/m3, while the hourly norm is 25 µg/m3. We can also see some peaks every day around 12 p...
fig = plt.figure(figsize=(10,5))
plt.plot(data_in_spec_loc.time, data_in_spec_loc.pm2p5_micro, '*-', linewidth=1, c='blue', label=dataset)
plt.xlabel('Time')
plt.title('PM2.5 forecast for San Jose')
plt.grid()
Finally, we will remove the package we downloaded.
os.remove(package_cams.local_file_name)
We plot both the diameter and the tractor speed on the same graph.
graf = datos.loc[:, "Diametro X"].plot(figsize=(16,10), ylim=(0.5,3))  # .loc replaces the deprecated .ix
graf.axhspan(1.65, 1.85, alpha=0.2)
graf.set_xlabel('Tiempo (s)')
graf.set_ylabel('Diámetro (mm)')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
box = datos.loc[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
box.axhspan(1.65, 1.85, alpha=0.2...
ipython_notebooks/06_regulador_experto/ensayo4.ipynb
darkomen/TFG
cc0-1.0
To edit a preexisting package, we need to first make sure to install the package:
quilt3.Package.install(
    "examples/hurdat",
    "s3://quilt-example",
)
docs/Walkthrough/Editing a Package.ipynb
quiltdata/quilt-compiler
apache-2.0
Use browse to edit the package:
p = quilt3.Package.browse('examples/hurdat')
For more information on accessing existing packages see the section "Installing a Package". Adding data to a package Use the set and set_dir commands to add individual files and whole directories, respectively, to a Package:
# add entries individually using `set`
# ie p.set("foo.csv", "/local/path/foo.csv"),
#    p.set("bar.csv", "s3://bucket/path/bar.csv")

# create test data
with open("data.csv", "w") as f:
    f.write("id, value\na, 42")

p = quilt3.Package()
p.set("data.csv", "data.csv")
p.set("banner.png", "s3://quilt-example/imgs/banner...
The first parameter to these functions is the logical key, which will determine where the file lives within the package. So after running the commands above our package will look like this:
p
The second parameter is the physical key, which states the file's actual location. The physical key may point to either a local file or a remote object (with an s3:// path). If the physical key and the logical key are the same, you may omit the second argument:
# assuming data.csv is in the current directory
p = quilt3.Package()
p.set("data.csv")
Another useful trick: use "." to set the contents of the package to that of the current directory:
# switch to a test directory and create some test files
import os
%cd data/
os.mkdir("stuff")
with open("new_data.csv", "w") as f:
    f.write("id, value\na, 42")

# set the contents of the package to that of the current directory
p.set_dir(".", ".")
Deleting data in a package Use delete to remove entries from a package:
p.delete("data.csv")
Note that this will only remove this piece of data from the package. It will not delete the actual data itself. Adding metadata to a package Packages support metadata anywhere in the package. To set metadata on package entries or directories, use the meta argument:
p = quilt3.Package()
p.set("data.csv", "new_data.csv", meta={"type": "csv"})
p.set_dir("stuff/", "stuff/", meta={"origin": "unknown"})
You can also set metadata on the package as a whole using set_meta.
# set metadata on a package
p.set_meta({"package-type": "demo"})
Siesta --- the H2O molecule This tutorial will describe a complete walk-through of some of the sisl functionalities that may be related to the Siesta code. Creating the geometry Our system of interest will be the $\mathrm H_2\mathrm O$ system. The first task will be to create the molecule geometry. This is done using l...
h2o = Geometry([[0, 0, 0],
                [0.8, 0.6, 0],
                [-0.8, 0.6, 0.]],
               [Atom('O'), Atom('H'), Atom('H')],
               sc=SuperCell(10, origin=[-5] * 3))
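As a quick sanity check (plain NumPy, independent of sisl), the coordinates above give O-H bond lengths of exactly 1.0 Å:

```python
import numpy as np

xyz = np.array([[ 0.0, 0.0, 0.0],   # O
                [ 0.8, 0.6, 0.0],   # H
                [-0.8, 0.6, 0.0]])  # H
d_OH = np.linalg.norm(xyz[1:] - xyz[0], axis=1)  # both O-H bonds: 1.0 Å
```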
docs/tutorials/tutorial_siesta_1.ipynb
zerothi/sisl
mpl-2.0
The inputs are 1) the xyz coordinates, 2) the atomic species and 3) the supercell that is attached. By printing the object one gets basic information regarding the geometry, such as 1) number of atoms, 2) species of atoms, 3) number of orbitals, 4) orbitals associated with each atom and 5) number of supercells.
print(h2o)
So there are 3 atoms: 1 oxygen and 2 hydrogen. Currently there is only 1 orbital per atom. Later we will look into the details of orbitals associated with atoms and how they may be used for wavefunctions etc. Let's visualize the atomic positions (with atomic indices added):
plot(h2o)
Now we need to create the input fdf file for Siesta:
with open('RUN.fdf', 'w') as f:  # a context manager ensures the file is closed
    f.write("""%include STRUCT.fdf
SystemLabel siesta_1
PAO.BasisSize SZP
MeshCutoff 250. Ry
CDF.Save true
CDF.Compress 9
SaveHS true
SaveRho true
""")
h2o.write('STRUCT.fdf')