Data Manipulation (Plots) Now that we have the data easily accessible in Python, let's look at how to plot it. <code>Pandas</code> allows you to use matplotlib to plot, but it does so through methods built into pandas. Although the methods to create and manipulate plots are built into <code>Pandas</code>, we will still...
import matplotlib.pyplot as plt
Guides/python/excelToPandas.ipynb
rocketproplab/Guides
mit
To demonstrate the plotting capabilities of pandas DataFrames, let's use the example data that we imported earlier. The data frame contains only the two columns that were in the file: temperature and time. Because of this simplicity, we can trust pandas to properly interpret the first column as time and the second ...
plt.figure(1)
ax = df.plot()
plt.show()
While this simplification is nice, it is generally better to specify which data you want to plot, particularly if you are automating the plotting of a large set of dataframes. To do this, specify the <code>x</code> and <code>y</code> arrays in your dataframe as you would in a standard <code>matplotlib</code> plot call, ...
plt.figure(2)
ax = df.plot(cols[0], cols[1])
plt.show()
Now that we have the basics down, let's spice up the plot a little bit.
plt.figure(3)
ax = df.plot(cols[0], cols[1])
ax.set_title('This is a Title')
ax.set_ylabel('Temperature (deg F)')
ax.grid()
plt.show()
Data Manipulation (Timestamps) One thing you probably noticed in these plots is that the time axis isn't all that useful. It would be better to convert the timestamps to a more useful form, such as seconds since start. Let's go through the process of making that conversion. First, let's see what the timestamp currently looks ...
df[cols[0]][0]
Good news! Since Python interpreted the date as a datetime object, we can use datetime methods to determine the time in seconds. The one caveat is that we can only determine a time difference, not an absolute time. For more on this, read this Stack Overflow question. The first thing we have to do is convert these...
from datetime import datetime, date

startTime = df[cols[0]][0]
timeArray = []
for i in range(0, len(df[cols[0]])):
    timeArray.append((datetime.combine(date.today(), df[cols[0]][i])
                      - datetime.combine(date.today(), startTime)).total_seconds())
Note: there is probably a better way of doing this (i.e., without an explicit loop).
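One loop-free alternative is to let pandas subtract the timestamps in a single vectorized step. This is a sketch with a small stand-in frame (the column of `datetime.time` objects plays the role of the imported data):

```python
import pandas as pd
from datetime import time

# Stand-in for the imported data: a column of datetime.time objects.
df2 = pd.DataFrame({"Time": [time(12, 0, 0), time(12, 0, 30), time(12, 1, 15)]})

# Converting the times to timedeltas lets pandas subtract the first entry
# in one vectorized step; .dt.total_seconds() then gives seconds since start.
td = pd.to_timedelta(df2["Time"].astype(str))
seconds = (td - td.iloc[0]).dt.total_seconds()
print(seconds.tolist())  # [0.0, 30.0, 75.0]
```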
plt.figure(4)
plt.plot(timeArray, df[cols[1]], 'b')
plt.title('This is a graph with a better time axis')
plt.ylabel('Temperature (deg F)')
plt.xlabel('Time (s)')
plt.grid()
plt.show()
Exercise 1: Excel $\rightarrow$ Python $\rightarrow$ Excel Download the file seance4_excel.xlsx and save it in text format (tab-separated) (*.txt). The steps of the exercise are: save the file in text format, read it in Python, create a 3x3 square matrix where each v...
with open("seance4_excel.txt", "r") as f:
    mat = [row.strip(' \n').split('\t') for row in f.readlines()]
mat = mat[1:]
res = [[None] * 3 for i in range(3)]  # 3x3 square matrix
for i, j, v in mat:
    res[int(j) - 1][int(i) - 1] = float(v)
with open("seance4_excel_mat.txt", "w") as f:
    f.write('\n'.join(['\...
_doc/notebooks/td1a/td1a_correction_session4.ipynb
sdpython/ensae_teaching_cs
mit
It is very rare to write this kind of code. In general, one uses existing modules such as pandas, xlrd and openpyxl. This avoids the conversion to text format:
import pandas
df = pandas.read_excel("seance4_excel.xlsx", sheet_name="Feuil1", engine='openpyxl')
mat = df.pivot("X", "Y", "value")
mat.to_excel("seance4_excel_mat.xlsx")
mat
It is a bit faster. <h3 id="exo2">Exercise 2: finding a module (1)</h3> The random module is the one we are looking for.
import random
alea = [random.random() for i in range(10)]
print(alea)
random.shuffle(alea)
print(alea)
Exercise 3: finding a module (2) The datetime module lets you perform operations on dates.
from datetime import datetime
date1 = datetime(2013, 9, 9)
date0 = datetime(2013, 8, 1)
print(date1 - date0)
birth = datetime(1975, 8, 11)
print(birth.weekday())  # 0 = Monday
Exercise 4: your own module We replace <code>if __name__ == "__main__":</code> with <code>if True:</code>:
# fichier monmodule2.py
import math

def fonction_cos_sequence(seq):
    return [math.cos(x) for x in seq]

if __name__ == "__main__":
    # a small trick when working in a notebook
    code = """
    # -*- coding: utf-8 -*-
    import math
    def fonction_cos_sequence(seq) :
        r...
The message *ce message n'apparaît que si ce programme est le point d'entrée* ("this message only appears if this program is the entry point") now appears, whereas it did not appear with the version from the exercise statement. Since it appears after the *, this shows that the line is executed when the module is imported.
import monmodule3
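Python's import system caches modules, so a module body runs only once per session. This can be demonstrated with a throwaway module file (demo_mod.py, created only for this sketch):

```python
import importlib
import os
import sys

# Write a throwaway module whose body prints when executed.
with open("demo_mod.py", "w") as f:
    f.write("print('module body executed')\n")
sys.path.insert(0, os.getcwd())

import demo_mod             # prints: module body executed
import demo_mod             # prints nothing: the module is cached in sys.modules
importlib.reload(demo_mod)  # prints again: the body is re-executed

os.remove("demo_mod.py")
```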
If the module is imported a second time, the message no longer appears: Python has detected that the module was already imported, and it does not import it a second time. Exercise 5: searching for a pattern in a text The regular expression is je .{1,60}. The symbol . means any character. Followed by {1,60} ...
import pyensae.datasource, re

discours = pyensae.datasource.download_data('voeux.zip', website='xd')
exp = re.compile("je .{1,60}", re.IGNORECASE)
for fichier in discours:
    print("----", fichier)
    try:
        with open(fichier, "r") as f:
            text = f.read()
    except:
        with open(fichier, "r", encoding="l...
Exercise 6: searching for another pattern in a text For the words sécurité or insécurité, we build the expression:
import pyensae.datasource, re

discours = pyensae.datasource.download_data('voeux.zip', website='xd')
exp = re.compile("(.{1,15}(in)?sécurité.{1,50})", re.IGNORECASE)
for fichier in discours:
    print("----", fichier)
    try:
        with open(fichier, "r") as f:
            text = f.read()
    except:
        with open(fich...
Exercise 7: finding the URLs in a Wikipedia page We can take the page of the Python language as an example. The first part consists of retrieving the content of an HTML page.
from urllib.request import urlopen

url = "https://fr.wikipedia.org/wiki/Python_(langage)"
with urlopen(url) as u:
    content = u.read()
content[:300]
The retrieved data is in binary format, hence the b'' prefix. To avoid downloading the data every time, we save the content to disk so we can retrieve it next time.
with open('page.html', 'wb') as f:
    f.write(content)
And we load it back.
with open('page.html', 'rb') as f:
    page = f.read()
page[:300]
The data comes as bytes, which must first be converted into characters. There are more characters than available byte values (256), which is why a code is needed to map one to the other: on the internet, the most widely used encoding is UTF-8.
page_str = page.decode('utf-8')
page_str[:300]
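The bytes-to-characters mapping is visible on a single word (a small aside, not from the notebook): an accented character is one character but more than one byte in UTF-8.

```python
# 'é' is one character but two bytes in UTF-8, so the encoded form is longer.
s = "péage"
b = s.encode("utf-8")
print(len(s), len(b))          # 5 6
print(b.decode("utf-8") == s)  # True
```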
We now look for the URLs beginning with http...
import re
reg = re.compile("href=\\\"(http.*?)\\\"")
urls = reg.findall(page_str)
urls[:10]
Exercise 8: building a templated text Unlike regular expressions, modules such as Mako or Jinja2 make it easy to build documents that follow rules. These tools are widely used to build web pages; this is called templating. Create a web page that displays ...
patron = """
<ul>{% for i, url in enumerate(urls) %}
  <li><a href="{{ url }}">url {{ i }}</a></li>{% endfor %}
</ul>
"""

from jinja2 import Template
tpl = Template(patron)
print(tpl.render(urls=urls[:10], enumerate=enumerate))
We'll also clone the TensorFlow repository, which contains the training scripts, and copy them into our workspace.
# Clone the repository from GitHub
!git clone --depth 1 -q https://github.com/tensorflow/tensorflow
# Copy the training scripts into our workspace
!cp -r tensorflow/tensorflow/lite/micro/examples/magic_wand/train train
tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb
jhseu/tensorflow
apache-2.0
Prepare the data Next, we'll download the data and extract it into the expected location within the training scripts' directory.
# Download the data we will use to train the model
!wget http://download.tensorflow.org/models/tflite/magic_wand/data.tar.gz
# Extract the data into the train directory
!tar xvzf data.tar.gz -C train 1>/dev/null
We'll then run the scripts that split the data into training, validation, and test sets.
# The scripts must be run from within the train directory
%cd train
# Prepare the data
!python data_prepare.py
# Split the data by person
!python data_split_person.py
Load TensorBoard Now, we set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
# Load TensorBoard
%load_ext tensorboard
%tensorboard --logdir logs/scalars
Begin training The following cell will begin the training process. Training will take around 5 minutes on a GPU runtime. You'll see the metrics in TensorBoard after a few epochs.
!python train.py --model CNN --person true
Create a C source file The train.py script writes a model, model.tflite, to the training scripts' directory. In the following cell, we convert this model into a C++ source file we can use with TensorFlow Lite for Microcontrollers.
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i model.tflite > /content/model.cc
# Print the source file
!cat /content/model.cc
Reformat into a TensorFlow-friendly shape:
- convolutions need the image data formatted as a cube (width by height by #channels)
- labels as float 1-hot encodings.
image_size = 28
num_labels = 10
num_channels = 1  # grayscale

import numpy as np

def reformat(dataset, labels):
    dataset = dataset.reshape(
        (-1, image_size, image_size, num_channels)).astype(np.float32)
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset,...
tensorflow/examples/udacity/4_convolutions.ipynb
EvenStrangest/tensorflow
apache-2.0
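The one-hot step can be seen in isolation (a sketch with made-up labels): comparing each label to `np.arange(num_labels)` broadcasts to a boolean matrix, which casting turns into float one-hot rows.

```python
import numpy as np

num_labels = 10
demo_labels = np.array([3, 0, 9])

# Broadcasting yields a (num_examples, num_labels) boolean matrix; casting
# to float32 gives one-hot rows with a single 1.0 per example.
one_hot = (np.arange(num_labels) == demo_labels[:, None]).astype(np.float32)
print(one_hot.shape)             # (3, 10)
print(int(one_hot[0].argmax()))  # 3
```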
Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit the depth and the number of fully connected nodes.
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64

graph = tf.Graph()

with graph.as_default():
    # Input data.
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_vali...
Problem 1 The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides with a max pooling operation (nn.max_pool()) of stride 2 and kernel size 2.
batch_size = 16
patch_size = 5
depth = 12  # was 16
num_hidden = 64

graph = tf.Graph()

with graph.as_default():
    # Input data.
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    ...
The Simple Model $$L + P \underset{k_{-1}}{\stackrel{k_1}{\rightleftharpoons}} PL$$ This is a simple model of our system. We are assuming the complex concentration [PL] is proportional to the complex fluorescence (in this particular assay). We estimate/know the total Ligand $[L]_{tot} = [L] + [PL]$ and Protein $[P]_{tot} = [P] + [...
Kd = 2e-9 # M
examples/direct-fluorescence-assay/1 Simulating Experimental Fluorescence Binding Data.ipynb
MehtapIsik/assaytools
lgpl-2.1
The protein concentration for our assay will be 1 nM (half of the Kd).
Ptot = 1e-9 # M
The ligand concentration will be a 12-point half-log dilution from 20 uM ligand (down to ~60 pM).
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
To calculate $[PL]$ as a function of $[P]_{tot}$, $[L]_{tot}$, and $K_d$, we start with $$[PL] = \frac{[L][P]}{K_{d}}$$ Then we need to put $[L]$ and $[P]$ in terms of $[L]_{tot}$ and $[P]_{tot}$, using $$[L] = [L]_{tot}-[PL]$$ $$[P] = [P]_{tot}-[PL]$$ This gives us: $$[PL] = \frac{([L]_{tot}-[PL])([P]_{tot}-[PL])}{K_{d}}$$ Rearra...
# Now we can use this to define a function that gives us PL from Kd, Ptot, and Ltot.
def two_component_binding(Kd, Ptot, Ltot):
    """
    Parameters
    ----------
    Kd : float
        Dissociation constant
    Ptot : float
        Total protein concentration
    Ltot : float
        Total ligand concentration
    ...
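Rearranging the equation above gives a quadratic in $[PL]$: $[PL]^2 - ([P]_{tot}+[L]_{tot}+K_d)[PL] + [P]_{tot}[L]_{tot} = 0$. A sketch of the closed-form physical root (presumably what two_component_binding computes):

```python
import numpy as np

def complex_concentration(Kd, Ptot, Ltot):
    # Take the smaller root of the quadratic: it is the one that never
    # exceeds Ptot or Ltot, so it is the physically meaningful solution.
    b = Ptot + Ltot + Kd
    return 0.5 * (b - np.sqrt(b**2 - 4 * Ptot * Ltot))

# Sanity check: with saturating ligand, essentially all protein is bound.
PL_sat = complex_concentration(2e-9, 1e-9, 20e-6)
print(PL_sat)  # just under 1e-9 M
```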
Now we can plot our complex concentration as a function of our ligand concentration!
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot, PL, 'ko')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$')
plt.ylim(0, 1.05*np.max(PL))
plt.axvline(Kd, color='r', linestyle='--', label='K_d')
plt.legend(loc=0);
Okay, so now let's do something a little more fun. Let's overlay the curves we get for different amounts of protein in the assay.
[L2, P2, PL2] = two_component_binding(Kd, Ptot/2, Ltot)
[L3, P3, PL3] = two_component_binding(Kd, Ptot*2, Ltot)

# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot, PL, 'b', Ltot, PL2, 'g', Ltot, PL3, 'k')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$ / M')
plt.ylim(0, 2.05e-9)
plt.axh...
Let's do even more fun things! Say we have one molecule that has a different Kd for a bunch of proteins. We'll keep the protein concentration the same, but look at how our complex concentration changes as a function of Kd.
[L4, P4, PL4] = two_component_binding(Kd/10, Ptot, Ltot)
[L5, P5, PL5] = two_component_binding(Kd*10, Ptot, Ltot)

# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot, PL, 'o', label='$K_d$');
plt.semilogx(Ltot, PL4, 'violet', label='0.1 $K_d$');
plt.semilogx(Ltot, PL5, '.75', label='10 $K...
Now let's make this new plot for the 'simulated model of a dilution series experiment' figure.
# Let's plot Kd's ranging from 1 mM to 10 pM
Kd_max = 1e-3  # M
[La, Pa, PLa] = two_component_binding(Kd_max, Ptot, Ltot)
[Lb, Pb, PLb] = two_component_binding(Kd_max/10, Ptot, Ltot)
[Lc, Pc, PLc] = two_component_binding(Kd_max/100, Ptot, Ltot)
[Ld, Pd, PLd] = two_component_binding(Kd_max/1e3, Ptot, Ltot)
[Le, Pe, PLe] =...
Okay! Now let's do some stuff with kinases! We're going to pick 10 kinases and look at what binding curves we would expect for the fluorescent inhibitor bosutinib. Info from: http://www.guidetopharmacology.org/GRAC/LigandDisplayForward?tab=screens&ligandId=5710 Specifically: http://www.guidetopharmacology.org/GRAC/Ligan...
Kd_Src = 1.0e-9  # M
Kd_Abl = 0.12e-9  # M
Kd_Abl_T315I = 21.0e-9  # M
Kd_p38 = 3000.0e-9  # M
Kd_Aur = 3000.0e-9  # M
Kd_CK2 = 3000.0e-9  # M
Kd_SYK = 290.0e-9  # M
Kd_DDR = 120.0e-9  # M
Kd_MEK = 19.0e-9  # M
# The CK2, Aur, and p38 values are actually 'greater than' bounds.
We'll use the same Ltot and Ptot as before.
[L6, P6, PL6] = two_component_binding(Kd_Src, Ptot, Ltot)
[L7, P7, PL7] = two_component_binding(Kd_Abl, Ptot, Ltot)
[L8, P8, PL8] = two_component_binding(Kd_Abl_T315I, Ptot, Ltot)
[L9, P9, PL9] = two_component_binding(Kd_p38, Ptot, Ltot)

# y will be complex concentration
# x will be total ligand concentration
Src, = p...
Requesting an API key Because server compute is limited, anonymous users are limited to 2 calls per minute. If you need more calls, you can apply for a free public-service API key (auth). Extractive summarization The goal of extractive summarization is to select central sentences from an article to serve as its summary: they should capture the key points while avoiding redundancy. Chinese The input of the extractive summarization task is a piece of text and topk, the maximum number of summary sentences:
text = ''' 据DigiTimes报道,在上海疫情趋缓,防疫管控开始放松后,苹果供应商广达正在逐步恢复其中国工厂的MacBook产品生产。 据供应链消息人士称,生产厂的订单拉动情况正在慢慢转强,这会提高MacBook Pro机型的供应量,并缩短苹果客户在过去几周所经历的延长交货时间。 仍有许多苹果笔记本用户在等待3月和4月订购的MacBook Pro机型到货,由于苹果的供应问题,他们的发货时间被大大推迟了。 据分析师郭明錤表示,广达是高端MacBook Pro的唯一供应商,自防疫封控依赖,MacBook Pro大部分型号交货时间增加了三到五周, 一些高端定制型号的MacBook Pro配置要到6月底到7月初才能交货。 尽管M...
plugins/hanlp_demo/hanlp_demo/zh/extractive_summarization_restful.ipynb
hankcs/HanLP
apache-2.0
The return value contains at most topk summary sentences together with their weights, which lie in $[0, 1]$. Because of the Trigram Blocking technique, the number of sentences actually returned may be smaller than topk. Visualization
def highlight(text, scores):
    for k, v in scores.items():
        text = text.replace(k, f'<span style="background-color:rgba(255, 255, 0, {v});">{k}</span>')
    from IPython.display import display, HTML
    display(HTML(text))

scores = HanLP.extractive_summarization(text, topk=100)
highlight(text, scores)
Traditional Chinese HanLP's extractive summarization interface supports Traditional Chinese:
text = ''' 華爾街日報周二(3日)報導,根據知情人透露,日前已宣布將以440億美元買下推特(Twitter)並下市的馬斯克,曾經跟一些潛在投資人說,他可以在短短幾年後,再將這家社群媒體公司重新上市。 消息來源說,特斯拉創辦人兼執行長馬斯克表示,他計劃在買下推特後最短三年內,就展開推特的首次公開發行股票。 馬斯克買推特的交易案預期在今年稍後走完程序,包括獲得股東同意以及監管機關核准等步驟。 根據之前華爾街日報的報導,馬斯克為購買推特籌現金時,與私募股權公司等投資人討論出資事宜,Apollo Global Management有興趣參與。 私募股權公司通常都先買下公司將之私有化,把公司移出眾人注目的焦點之外以後,整頓公司,接著...
Preparing data set sweep First, we're going to define the data sets that we'll sweep over. As the simulated novel taxa dataset names depend on how the database generation notebook was executed, we must define the variables used to create these datasets. If you modified any variables in that notebook, set these same var...
iterations = 10
data_dir = join(project_dir, "data", analysis_name)

# databases is a list of names given as dictionary keys in the second
# cell of the database generation notebook. Just list the names here.
databases = ['B1-REF', 'F1-REF']

# Generate a list of input directories (dataset_reference_combinations, refere...
ipynb/novel-taxa/taxonomy-assignment.ipynb
caporaso-lab/tax-credit
bsd-3-clause
Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep(). Fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
...
command_template = 'bash -c "source activate qiime1; source ./.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"'
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
                           dataset_reference_combinations,
                           ...
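str.format fills the numbered fields positionally, and a field such as {0} may appear more than once. A toy illustration with placeholder paths (not the real sweep values):

```python
# {0}, {1}, ... are replaced by the positional arguments to .format();
# {0} is reused for both mkdir and the -o flag.
template = 'mkdir -p {0}; assign_taxonomy.py -i {1} -o {0} -r {2}'
cmd = template.format('out_dir', 'seqs.fna', 'ref.fasta')
print(cmd)  # mkdir -p out_dir; assign_taxonomy.py -i seqs.fna -o out_dir -r ref.fasta
```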
As a sanity check, we can look at the first command that was generated and the number of commands generated.
for method in method_parameters_combinations:
    print(method)
    for command in commands:
        if '/'+method+'/' in command:
            print(command)
            break
print(len(commands))
Finally, we run our commands.
Parallel(n_jobs=23)(delayed(system)(command) for command in commands);
BLAST+
(dataset_reference_combinations, reference_dbs) = recall_novel_taxa_dirs(
    data_dir, databases, iterations,
    ref_seqs='ref_seqs.qza', ref_taxa='ref_taxa.qza')
method_parameters_combinations = {
    'blast+': {'p-evalue': [0.001],
               'p-maxaccepts': [1, 10],
               ...
VSEARCH
method_parameters_combinations = {
    'vsearch': {'p-maxaccepts': [1, 10],
                'p-perc-identity': [0.80, 0.90, 0.97, 0.99],
                'p-min-consensus': [0.51, 0.99]}
}
command_template = ("mkdir -p {0}; "
                    "qiime feature-classifier cl...
scikit-learn
method_parameters_combinations = {
    'naive-bayes': {'p-feat-ext--ngram-range': ['[4,4]', '[6,6]', '[8,8]', '[16,16]',
                                                '[32,32]', '[7,7]', '[9,9]', '[10,10]',
                                                '[11,11]', '[12,12]', '[14,14]', '[18,18]'],
                    'p-classify--alpha': [0.0...
Move result files to repository Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were ma...
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
Normal / Gaussian Visualize the probability density function:
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))
1t_DataAnalysisMLPython/1j_ML/DS_ML_Py_SBO/DataScience/3_Distributions/Distributions.ipynb
yevheniyc/C
mit
Generate some random numbers with a normal distribution. "mu" is the desired mean, "sigma" is the standard deviation:
import numpy as np
import matplotlib.pyplot as plt

mu = 5.0
sigma = 2.0
values = np.random.normal(mu, sigma, 10000)
plt.hist(values, 50)
plt.show()
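A quick sanity check on such a draw: with 10,000 samples, the sample mean and standard deviation should land close to mu and sigma (seeded here for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(5.0, 2.0, 10_000)

# With n = 10,000 the standard error of the mean is sigma/sqrt(n) = 0.02,
# so both statistics should sit within a few hundredths of the targets.
print(samples.mean(), samples.std())
```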
Exponential PDF / "Power Law"
from scipy.stats import expon
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 10, 0.001)
plt.plot(x, expon.pdf(x))
Binomial Probability Mass Function
from scipy.stats import binom
import numpy as np
import matplotlib.pyplot as plt

# n -> number of events, i.e. flipping a coin 10 times
# p -> probability of the event occurring: 50% chance of getting heads
n, p = 10, 0.5
# the pmf is only non-zero at integer counts, so evaluate it on integers
x = np.arange(0, n + 1)
plt.plot(x, binom.pmf(x, n, p))
Poisson Probability Mass Function Example: My website gets on average 500 visits per day. What are the odds of getting 550?
from scipy.stats import poisson
import numpy as np
import matplotlib.pyplot as plt

mu = 500
# the pmf is only non-zero at integer counts, so evaluate it on integers
x = np.arange(400, 600)
plt.plot(x, poisson.pmf(x, mu))
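The question above can also be answered numerically. A scipy-free sketch, computing the pmf in log space (the naive formula mu**k * exp(-mu) / k! overflows for k = 550):

```python
import math

def poisson_pmf(k, mu):
    # log P(X = k) = k*log(mu) - mu - log(k!), exponentiated at the end
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

p = poisson_pmf(550, 500)
print(p)  # about 0.0015 -- a day with exactly 550 visits is rare
```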
Pop Quiz! What's the equivalent of a probability distribution function when using discrete instead of continuous data?
# probability mass function
We look at basic info about the dataset.
log.info()
demos/IntelliJ IDEA Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
<b>1</b> DataFrame (~ programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> entries (= rows). We convert the timestamp texts into datetime objects.
log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
We're just looking at recent changes.
# use log['timestamp'].max() instead of pd.Timestamp('today') to avoid outdated data in the future
recent = log[log['timestamp'] > log['timestamp'].max() - pd.Timedelta('90 days')]
recent.head()
We want to use only Java code.
java = recent[recent['filename'].str.endswith(".java")].copy()
java.head()
III. Formal Modeling Create new views Blend in more data We count the number of changes per file.
changes = java.groupby('filename')[['sha']].count()
changes.head()
We add info about the code lines...
loc = pd.read_csv("dataset/cloc_intellij.csv.gz", index_col=1)
loc.head()
...and blend them with the existing data.
hotspots = changes.join(loc[['code']]).dropna(subset=['code'])
hotspots.head()
IV. Interpretation Elaborate the core result of the analysis. Make the central message / new findings clear. We only show the TOP 10 hotspots in the code.
top10 = hotspots.sort_values(by="sha", ascending=False).head(10)
top10
V. Communication Transform the findings into an understandable visualization. Communicate the next steps after the analysis. We generate an XY chart from the TOP 10 list.
ax = top10.plot.scatter('sha', 'code');

for k, v in top10.iterrows():
    ax.annotate(k.split("/")[-1], v)
<b>Result:</b> There are a few complex files that change very frequently. The next step is to investigate those files in more detail. Bonus Which files change particularly frequently in general?
most_changes = hotspots['sha'].sort_values(ascending=False)
most_changes.head(10)
We visualize this with a simple line graph.
most_changes.plot(rot=90);
Uniform meshes by refinement The refinement_levels parameter of the stripy meshes repeatedly loops over the triangulation, determining the bisection points of all existing edges and then creating a new triangulation that includes these points as well as the original ones. These refinement operations can also be used for n...
ellip0 = stripy.cartesian_meshes.elliptical_mesh(extent, spacingX, spacingY, refinement_levels=0)
ellip1 = stripy.cartesian_meshes.elliptical_mesh(extent, spacingX, spacingY, refinement_levels=1)
ellip2 = stripy.cartesian_meshes.elliptical_mesh(extent, spacingX, spacingY, refinement_levels=2)
ellip3 = stripy.cartesian_...
Notebooks/SphericalMeshing/CartesianTriangulations/Ex7-Refinement-of-Triangulations.ipynb
lmoresi/UoM-VIEPS-Intro-to-Python
mit
Refinement strategies Five refinement strategies:
- Bisect all segments connected to a given node
- Refine all triangles connected to a given node by adding a point at the centroid or bisecting all edges
- Refine a given triangle by adding a point at the centroid or bisecting all edges

These are provided as follows:
mx, my = ellip2.midpoint_refine_triangulation_by_vertices(vertices=[1,2,3,4,5,6,7,8,9,10])
ellip2mv = stripy.Triangulation(mx, my)

mx, my = ellip2.edge_refine_triangulation_by_vertices(vertices=[1,2,3,4,5,6,7,8,9,10])
ellip2ev = stripy.Triangulation(mx, my)

mx, my = ellip2.centroid_refine_triangulation_by_vertices(v...
Visualisation of refinement strategies
%matplotlib inline
import matplotlib.pyplot as plt

def mesh_fig(mesh, meshR, name):
    fig = plt.figure(figsize=(10, 10), facecolor="none")
    ax = plt.subplot(111)
    ax.axis('off')

    generator = mesh
    refined = meshR

    x0 = generator.x
    y0 = generator.y
    xR = refined.x
    yR = refined.y
    ...
Targeted refinement Here we refine a triangulation to a specific criterion: resolving two points into distinct triangles, or giving them distinct nearest-neighbour vertices.
points = np.array([[3.33, 3.33], [7.77, 7.77]]).T

triangulations = [ellip1]
nearest, distances = triangulations[-1].nearest_vertex(points[:,0], points[:,1])

max_depth = 10
while nearest[0] == nearest[1] and max_depth > 0:
    xs, ys = triangulations[-1].centroid_refine_triangulation_by_vertices(vertices=nearest[0]...
Visualisation of targeted refinement
import matplotlib.pyplot as plt
%matplotlib inline

str_fmt = "{:18} --- {} simplices, equant max = {:.2f}, equant min = {:.2f}, size ratio = {:.2f}"

mesh_fig(edge_triangulations[0], edge_triangulations[-1], "EdgeByVertex")

T = edge_triangulations[-1]
E = np.array(T.edge_lengths()).T
A = np.array(T.ar...
Decision-Tree Learning Decision tree learning uses a decision tree as a predictive model, mapping observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining and machine learnin...
def entropy(S):
    outcomes = pd.unique(S[final])
    ent = lambda p: 0 if p == 0 else p * log2(p)
    return -sum([ent(S[S[final] == o].size / S.size) for o in outcomes])
Decision-Tree_Learning.ipynb
sebsch/WT-2_Such-_und_Texttechnologien
gpl-3.0
Information $$ I(S, A) = \sum\limits_{i=1}^{n} \frac{|S_i|}{|S|} \cdot E(S_i) $$
def information(S, A):
    partitions = pd.unique(S[A])
    return sum([(S[A][S[A] == p].size / S[A].size) * entropy(S[S[A] == p])
                for p in partitions])
Information Gain $$ \mbox{Gain}(S, A) = E(S) - I(S,A) $$
def gain(S, A):
    return entropy(S) - information(S, A)
Intrinsic Information $$ \mbox{IntI}(S,A) = - \sum\limits_i \frac{|S_i|}{|S|} \cdot \log_2(\frac{|S_i|}{|S|}) $$
def intrinsic_information(S, A):
    partitions = pd.unique(S[A])
    return -sum([(S[A][S[A] == p].size / S[A].size)
                 * log2(S[A][S[A] == p].size / S[A].size)
                 for p in partitions])
Decision-Tree_Learning.ipynb
sebsch/WT-2_Such-_und_Texttechnologien
gpl-3.0
Gain Ratio $$ GR(S,A) = \frac{\mbox{Gain}(S,A)}{\mbox{IntI}(S,A)} $$
def gain_ratio(S, A):
    return gain(S, A) / intrinsic_information(S, A)
Decision-Tree_Learning.ipynb
sebsch/WT-2_Such-_und_Texttechnologien
gpl-3.0
Gini-Index $$ \mbox{Gini}(S) = 1 - \sum\limits_i p_i^2 $$ $$ \mbox{Gini}(S, A) = \sum\limits_i \frac{|S_i|}{|S|} \cdot \mbox{Gini}(S_i) $$
def gini(S, A=None): if A == None: return 1-sum( [(S[S[final] == o].size / S.size)**2 for o in pd.unique(S[final])] ) return sum( [ ( S[A][S[A] == p].size / S[A].size ) * gini(S[S[A] == p]) for p in pd.unique(S[A])] ) _exec = lambda f : {col: f(data, col) for col in da...
Decision-Tree_Learning.ipynb
sebsch/WT-2_Such-_und_Texttechnologien
gpl-3.0
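The Gini-impurity half of the function above can be verified on the same kind of toy frame (again standing in for the notebook's global `final`):

```python
import pandas as pd

final = 'label'  # stand-in for the notebook's global target-column name

def gini(S):
    # Gini impurity of the target column: 1 - sum of squared class probabilities
    return 1 - sum((S[S[final] == o].size / S.size) ** 2
                   for o in pd.unique(S[final]))

data = pd.DataFrame({'label': ['a', 'a', 'b', 'b']})
print(gini(data))  # 1 - (0.5**2 + 0.5**2) = 0.5
```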
Specify region For this exercise, we use examples from India.
#--- Identify country for example
# label country
country = 'India'
# define bounding box for region
mlat = '0'; Mlat = '40'; mlon = '65'; Mlon = '105'
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
Set data Disaster records A spreadsheet of available data was obtained from the DesInventar website, and then exported to .csv format. Both versions are available in the data repository. When pulling data from the website there can sometimes be small formatting issues, which we repair here. Also want to learn what span...
#--- Pull in data from DesInvetar records # Read file of reported heatwaves (original spreadsheet) heatwave_data = pd.read_csv('../data/Heatwaves_database.csv') # repair region name with space before name heatwave_data.loc[(heatwave_data.Region==' Tamil Nadu'),'Region'] = 'Tamil Nadu' # list out the dates for example c...
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
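The cell above patches one region name by hand; a more general fix for stray whitespace (sketched here on a hypothetical three-row frame mirroring the `heatwave_data` column names) is pandas' vectorised `str.strip`:

```python
import pandas as pd

# hypothetical stand-in for the DesInventar export
heatwave_data = pd.DataFrame({'Region': [' Tamil Nadu', 'Kerala ', 'Bihar']})

# strip leading/trailing whitespace from every Region entry at once
heatwave_data['Region'] = heatwave_data['Region'].str.strip()
print(heatwave_data['Region'].tolist())  # ['Tamil Nadu', 'Kerala', 'Bihar']
```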
Reanalysis Need to pull the reanalysis data from NCEP's online database. Going to pull the full global files at first, so that we have the data available if we want to look at other regions of the world. This requires a lot of download time and storage space; the resulting minimally sized files are stored in the repository (ot...
#---Download NetCDF files # path to data directory for max/min daily temperatures path_maxmin = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface_gauss' # path to data directory for 6hr temperature records path_hourly = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/surface_gauss' # loop through year...
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
Once we have the full data set, we can subdivide it to create individual files for different regions, reducing the run time when reading in data for individual regions.
#--- Create data files of region # select region from min-temperature data _ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]), input='../data/t2m.min.daily.nc', output='../data/'+country+'.t2m.min.daily.nc') # select region from max-temperature data _ = cdo.sellonlatbox(','.j...
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
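`cdo.sellonlatbox` does the box selection externally; the same lon/lat subsetting can be sketched with plain NumPy masks (the coordinate grids here are synthetic, while the bounding-box values mirror the mlat/Mlat/mlon/Mlon set earlier):

```python
import numpy as np

# synthetic 10-degree global coordinate grids
lats = np.arange(-90, 91, 10.0)
lons = np.arange(0, 360, 10.0)

# boolean masks for the India bounding box (lat 0..40, lon 65..105)
lat_mask = (lats >= 0) & (lats <= 40)
lon_mask = (lons >= 65) & (lons <= 105)

print(lats[lat_mask])  # latitudes inside the box
print(lons[lon_mask])  # longitudes inside the box
```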
Region masks The way we arranged the analysis (which as you can see is a bit of an ad hoc, duct tape style procedure) requires masking out the individual districts, or rather the closest approximation of them possible using the low resolution, gridded reanalysis data. The first step is creating a 'blanked' file of the...
#--- Create blank file for region # write grid information to file ofile = open('../data/ncep_grid.asc','w') ofile.write('\n'.join(cdo.griddes(input='../data/'+country+'.t2m.daily.nc'))) ofile.close() # create data file with all values set to 1 _ = cdo.const('1','../data/ncep_grid.asc', output='../data/'+...
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
The actual mask files are made with a different script, written in NCL. The code here modifies the generic script based on which region we're interested in at the moment. For some countries, e.g., Chile, the region labels in the shapefiles and the region labels in the heatwave database are not rendered the same (typicall...
#--- Identify regions of interest # make list of unique region names for country regions = list( set(heatwave_data.Region.where(heatwave_data.Country==country)) ) # remove nans (from regions that arent in the selected country) regions = [x for x in regions if str(x) != 'nan'] regions = [x.title() for x in regions] if...
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
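The NaN-removal and title-casing steps in the cell above can be checked on a small stand-in list (the region names here are made up):

```python
# hypothetical region list as it might come out of the database,
# including a NaN from a row belonging to a different country
regions = ['TAMIL NADU', 'kerala', float('nan'), 'Bihar']

# remove nans (from regions that aren't in the selected country)
regions = [x for x in regions if str(x) != 'nan']
# normalise case so labels match the shapefile rendering
regions = [x.title() for x in regions]

print(regions)  # ['Tamil Nadu', 'Kerala', 'Bihar']
```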
Drawing a map Want to create a graphic to show that reports only exist for certain regions, and how the grid spacing of the meteorological fields imperfectly matches the actual region boundaries. Have currently set things so that a grid cell is considered informative about the political region as long as some part of the r...
#--- Map regions of India used in this example # read which regions are included in disaster database regions = list(set(heatwave_data.loc[(heatwave_data.Country=='India'),'Region'])) # Create a map object chart = Basemap(projection='lcc',resolution='c', lat_0=20,lon_0=85, llcrnrlat=5,urcrn...
scripts/india_map.ipynb
hauser-tristan/heatwave-defcomp-examples
mit
Checkerboard Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0: Your function should work for both odd and even size. The 0,0 element should be 1.0. The dtype should be float.
def checkerboard(size): """Return a 2d checkboard of 0.0 and 1.0 as a NumPy array""" board = np.ones((size,size), dtype=float) for i in range(size): if i%2==0: board[i,1:size:2]=0 else: board[i,0:size:2]=0 va.enable() return board checkerboard(10) a = checker...
assignments/assignment03/NumpyEx01.ipynb
CalPolyPat/phys202-2015-work
mit
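The loop-based solution works; for reference, the same board can be built without loops using NumPy index arithmetic: (i + j) even maps to 1.0, so the (0, 0) element is 1.0 and the result works for odd and even sizes alike.

```python
import numpy as np

def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array (float dtype)."""
    i, j = np.indices((size, size))
    return ((i + j) % 2 == 0).astype(float)

print(checkerboard(4))
```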
Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
va.set_block_size(10) checkerboard(20) assert True
assignments/assignment03/NumpyEx01.ipynb
CalPolyPat/phys202-2015-work
mit
Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
va.set_block_size(5) checkerboard(27) assert True
assignments/assignment03/NumpyEx01.ipynb
CalPolyPat/phys202-2015-work
mit
Introduction to the TensorFlow Models NLP library <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/official_models/nlp/nlp_modeling_library_intro"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> ...
!pip install -q tf-nightly !pip install -q tf-models-nightly
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Import TensorFlow and other libraries
import numpy as np
import tensorflow as tf

from official.nlp import modeling
from official.nlp.modeling import layers, losses, models, networks
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
BERT pretraining model BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding) introduced the method of pre-training language representations on a large text corpus and then using that model for downstream NLP tasks. In this section, we will learn how to build a model to pretrain BERT on the m...
# Build a small transformer network.
vocab_size = 100
sequence_length = 16
network = modeling.networks.TransformerEncoder(
    vocab_size=vocab_size, num_layers=2, sequence_length=16)
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Inspecting the encoder, we see it contains a few embedding layers and stacked Transformer layers, connected to three input layers: input_word_ids, input_type_ids and input_mask.
tf.keras.utils.plot_model(network, show_shapes=True, dpi=48)

# Create a BERT pretrainer with the created network.
num_token_predictions = 8
bert_pretrainer = modeling.models.BertPretrainer(
    network, num_classes=2, num_token_predictions=num_token_predictions,
    output='predictions')
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Inspecting the bert_pretrainer, we see it wraps the encoder with additional MaskedLM and Classification heads.
tf.keras.utils.plot_model(bert_pretrainer, show_shapes=True, dpi=48) # We can feed some dummy data to get masked language model and sentence output. batch_size = 2 word_id_data = np.random.randint(vocab_size, size=(batch_size, sequence_length)) mask_data = np.random.randint(2, size=(batch_size, sequence_length)) type_...
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Compute loss Next, we can use lm_output and sentence_output to compute loss.
masked_lm_ids_data = np.random.randint(vocab_size, size=(batch_size, num_token_predictions)) masked_lm_weights_data = np.random.randint(2, size=(batch_size, num_token_predictions)) next_sentence_labels_data = np.random.randint(2, size=(batch_size)) mlm_loss = modeling.losses.weighted_sparse_categorical_crossentropy_lo...
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
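The `modeling.losses` call is truncated above; as a sketch of what a weighted sparse categorical crossentropy computes, here is a pure-NumPy version (the function name is ours, not the library's, and the shapes mirror the dummy data: batch 2, 8 masked predictions, vocab 100):

```python
import numpy as np

def weighted_sparse_ce(labels, logits, weights):
    # numerically stable softmax over the vocab axis
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # negative log-likelihood of the true token at each prediction position
    nll = -np.log(np.take_along_axis(probs, labels[..., None], axis=-1))[..., 0]
    # average over positions, weighted by the per-position mask
    return (nll * weights).sum() / np.maximum(weights.sum(), 1e-5)

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 8, 100))
labels = rng.integers(0, 100, size=(2, 8))
weights = rng.integers(0, 2, size=(2, 8))
print(weighted_sparse_ce(labels, logits, weights))
```

With uniform (all-zero) logits and all-ones weights this reduces to log(vocab_size), a handy sanity check.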
With the loss, you can optimize the model. After training, we can save the weights of TransformerEncoder for the downstream fine-tuning tasks. Please see run_pretraining.py for the full example. Span labeling model Span labeling is the task of assigning labels to a span of text, for example, labeling a span of text as th...
network = modeling.networks.TransformerEncoder(
    vocab_size=vocab_size, num_layers=2, sequence_length=sequence_length)

# Create a BERT trainer with the created network.
bert_span_labeler = modeling.models.BertSpanLabeler(network)
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Inspecting the bert_span_labeler, we see it wraps the encoder with an additional SpanLabeling head that outputs start_position and end_position.
tf.keras.utils.plot_model(bert_span_labeler, show_shapes=True, dpi=48) # Create a set of 2-dimensional data tensors to feed into the model. word_id_data = np.random.randint(vocab_size, size=(batch_size, sequence_length)) mask_data = np.random.randint(2, size=(batch_size, sequence_length)) type_id_data = np.random.rand...
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Compute loss With start_logits and end_logits, we can compute loss:
start_positions = np.random.randint(sequence_length, size=(batch_size)) end_positions = np.random.randint(sequence_length, size=(batch_size)) start_loss = tf.keras.losses.sparse_categorical_crossentropy( start_positions, start_logits, from_logits=True) end_loss = tf.keras.losses.sparse_categorical_crossentropy( ...
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
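The total span loss is typically the mean of the start and end cross-entropies; here is a pure-NumPy sketch of that combination (uniform logits chosen so the expected value is exactly log(sequence_length)):

```python
import numpy as np

def sparse_ce(labels, logits):
    # per-example sparse categorical crossentropy from logits
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -logp[np.arange(len(labels)), labels]

# uniform logits over a sequence of length 16, batch of 2
start_logits = np.zeros((2, 16))
end_logits = np.zeros((2, 16))
start_positions = np.array([3, 7])
end_positions = np.array([5, 9])

total_loss = (sparse_ce(start_positions, start_logits).mean()
              + sparse_ce(end_positions, end_logits).mean()) / 2
print(total_loss)  # log(16) for uniform logits
```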
With the loss, you can optimize the model. Please see run_squad.py for the full example. Classification model In the last section, we show how to build a text classification model. Build a BertClassifier model wrapping TransformerEncoder BertClassifier implements a simple token classification model containing a single ...
network = modeling.networks.TransformerEncoder(
    vocab_size=vocab_size, num_layers=2, sequence_length=sequence_length)

# Create a BERT trainer with the created network.
num_classes = 2
bert_classifier = modeling.models.BertClassifier(
    network, num_classes=num_classes)
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
Inspecting the bert_classifier, we see it wraps the encoder with an additional classification head.
tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48) # Create a set of 2-dimensional data tensors to feed into the model. word_id_data = np.random.randint(vocab_size, size=(batch_size, sequence_length)) mask_data = np.random.randint(2, size=(batch_size, sequence_length)) type_id_data = np.random.randin...
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0