markdown | code | path | repo_name | license
|---|---|---|---|---|
Examine the contents of the "target" column of df_x1_t1_trx_1_4. | analyze_target(df_x1_t1_trx_1_4) | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Next, we load the frame x3_t2_trx_3_1 and examine its dimensions. | df_x3_t2_trx_3_1 = hdf.get('/x3/t2/trx_3_1')
print("Rows:", df_x3_t2_trx_3_1.shape[0])
print("Columns:", df_x3_t2_trx_3_1.shape[1]) | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
This is followed by an analysis of its column composition and its "target" values. | analyse_columns(df_x3_t2_trx_3_1)
analyze_target(df_x3_t2_trx_3_1) | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Question: What do you observe regarding the "receiver-number_sender-number" combinations? Are they the same? Which values do you find in the "target" column?
Answer: We see that whenever a pair transmits, the other two nodes listen and measure their link to the currently transmitting nodes (i.e., 6 pairs in each dataframe). For example, when pair 3 1 transmits, node 1 measures link 1-3, node 3 measures link 3-1, and nodes 2 and 4 measure links 2-1 and 2-3, and 4-1 and 4-3, respectively. The 10 distinct values of the "target" column are shown above.
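The pairing logic described above can be sketched in a few lines (a sketch; the helper name measured_links is our own):

```python
def measured_links(tx_pair, nodes=(1, 2, 3, 4)):
    """Return the links measured while the pair tx_pair is transmitting.

    Each transmitting node measures its link to the other transmitter,
    and every listening node measures its link to both transmitters.
    """
    a, b = tx_pair
    links = [(a, b), (b, a)]
    for n in nodes:
        if n not in tx_pair:
            links.append((n, a))
            links.append((n, b))
    return links

# While pair 3-1 transmits, six links are measured:
print(measured_links((3, 1)))
# [(3, 1), (1, 3), (2, 3), (2, 1), (4, 3), (4, 1)]
```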
Task 3: Visualizing the measurement series of the dataset
We visualize the raw data with various heatmaps in order to visually validate the integrity of the data and to develop ideas for possible features. Here we show the data of frame df_x1_t1_trx_1_4 as an example. | vals = df_x1_t1_trx_1_4.loc[:,'trx_2_4_ifft_0':'trx_2_4_ifft_1999'].values
# one big heatmap
plt.figure(figsize=(14, 12))
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')
plt.show() | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
We examine how different color schemes highlight different characteristics of our raw data. | # compare different heatmaps
plt.figure(1, figsize=(12,10))
# nipy_spectral_r scheme
plt.subplot(221)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')
# terrain scheme
plt.subplot(222)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='terrain')
# Vega10 scheme
plt.subplot(223)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Vega10')
# Wistia scheme
plt.subplot(224)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Wistia')
# Adjust the subplot layout so that the titles and tick labels
# of neighboring subplots do not overlap
plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25,
wspace=0.2)
plt.show() | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Task 3: Adjusting the groundtruth labels | # Iterating over the hdf data and creating an interim data representation stored in data/interim/01_testmessungen.hdf
# The interim data representation contains an additional binary class (binary_target - encoding 0=empty and 1=not empty)
# and a multi-class target (multi_target - encoding 0-9 for each possible class)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
interim_path = '../../data/interim/01_testmessungen.hdf'
def binary_mapper(df):
def map_binary(target):
if target.startswith('Empty'):
return 0
else:
return 1
df['binary_target'] = pd.Series(map(map_binary, df['target']))
def multiclass_mapper(df):
le.fit(df['target'])
df['multi_target'] = le.transform(df['target'])
for key in hdf.keys():
df = hdf.get(key)
binary_mapper(df)
multiclass_mapper(df)
df.to_hdf(interim_path, key)
hdf.close() | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
We check the newly labeled dataframe "/x1/t1/trx_3_1". As results we expect "Empty" (i.e., 0) for measurement 5 at the beginning of the experiment and "Not Empty" (i.e., 1) for measurement 120 in the middle of the experiment. | hdf = pd.HDFStore('../../data/interim/01_testmessungen.hdf')
df_x1_t1_trx_3_1 = hdf.get('/x1/t1/trx_3_1')
print("binary_target for measurement 5:", df_x1_t1_trx_3_1['binary_target'][5])
print("binary_target for measurement 120:", df_x1_t1_trx_3_1['binary_target'][120])
hdf.close() | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Task 4: Simple recognizer with hold-out validation
We follow the steps in task 4 and test a simple recognizer. | from evaluation import *
from filters import *
from utility import *
from features import * | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Opening the HDF store with pandas | # raw data to obtain the target values
hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf') | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Example recognizer
Preparing the datasets | # generate datasets
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x1/t'+t+'/trx_3_1')
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
#df_tst_cl,_ = distortion_filter(df_tst_cl)
groups = get_trx_groups(df_tst)
df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_all = pd.concat( [df_std, df_mean, df_p2p], axis=1 ) # added p2p feature
df_all = cf_std_window(df_all, window=4, label='target')
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
print('Columns in Dataset:',t)
print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
# holdout validation
print(hold_out_val(tst_ds, target='target', include_self=False, cl='rf', verbose=False, random_state=1)) | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Closing the HDF store | hdf.close() | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Task 5: Custom recognizer
To construct our own recognizer, we repeat the corresponding preprocessing and mapping steps starting from the raw data and adapt them to our needs. | # Load raw data
hdf = pd.HDFStore("../../data/raw/TestMessungen_NEU.hdf")
# Check available keys in the hdf store
print(hdf.keys()) | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Preprocessing
First we adjust the groundtruth labels, remove timestamps and row indices, and save the resulting frames. | hdf_path = "../../data/interim/02_testmessungen.hdf"
# Mapping groundtruth to 0-empty and 1-not empty and preparing for further preprocessing by
# removing additional timestamp columns and the index column
# Storing the cleaned dataframes (no index, removed _ts columns, multi classes mapped to 0-empty, 1-not empty)
# to a new hdf store at data/interim/02_testmessungen.hdf
dfs = []
for key in hdf.keys():
df = hdf.get(key)
#df['target'] = df['target'].map(lambda x: 0 if x.startswith("Empty") else 1)
# drop all timestamp columns that end with _ts
cols = [c for c in df.columns if not c.lower().endswith("ts")]
df = df[cols]
df = df.drop('Timestamp', axis=1)
df = df.drop('index', axis=1)
df.to_hdf(hdf_path, key)
hdf.close() | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
We see that the resulting frames now contain only the 6 x 2000 measurements for the respective pairs and the 'target' values. | hdf = pd.HDFStore(hdf_path)
df = hdf.get("/x1/t1/trx_1_2")
df.head()
# Step 1: repeat the previous task 4 to get a comparable baseline result, with the _ts and index columns now dropped, to improve on
# generate datasets
from evaluation import *
from filters import *
from utility import *
from features import *
def prepare_features(c, p):
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x'+c+'/t'+t+'/trx_'+p)
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
#df_tst_cl,_ = distortion_filter(df_tst_cl)
df_tst,_ = distortion_filter(df_tst)
groups = get_trx_groups(df_tst)
df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single)
df_all = pd.concat( [df_std, df_mean, df_p2p, df_kurt], axis=1 ) # added p2p feature
df_all = cf_std_window(df_all, window=4, label='target')
df_all = cf_diff(df_all, label='target')
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
# print('Columns in Dataset:',t)
# print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
return tst_ds
tst_ds = prepare_features(c='1', p='3_1')
# Evaluating different supervised learning methods provided in eval.py
# added a NN evaluator but there are some problems regarding usage and hidden layers
# For the moment only kurtosis and cf_diff are added to the dataset as well as the distortion filter
# Feature selection is needed right now!
for elem in ['rf', 'dt', 'nb' ,'nn','knn']:
print(elem, ":", hold_out_val(tst_ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1))
# extra column features generated and reduced with PCA
from evaluation import *
from filters import *
from utility import *
from features import *
from new_features import *
def prepare_features_PCA_cf(c, p):
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x'+c+'/t'+t+'/trx_'+p)
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
df_tst,_ = distortion_filter(df_tst)
groups = get_trx_groups(df_tst)
df_cf_mean = reduce_dim_PCA(cf_mean_window(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
#df_cf_std = reduce_dim_PCA(cf_std_window(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
df_cf_ptp = reduce_dim_PCA(cf_ptp(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
#df_cf_kurt = reduce_dim_PCA(cf_kurt(df_tst, window=3, column_key="ifft", label=None ).fillna(0), n_comps=10)
#df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single)
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single, label='target')
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single)
df_skew = rf_grouped(df_tst, groups=groups, fn=rf_skew_single)
df_all = pd.concat( [df_mean, df_p2p, df_kurt, df_skew], axis=1 )
df_all = cf_std_window(df_all, window=4, label='target')
df_all = cf_diff(df_all, label='target')
df_all = reduce_dim_PCA(df_all.fillna(0), n_comps=10, label='target')
df_all = pd.concat( [df_all, df_cf_mean, df_cf_ptp], axis=1)
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
#print('Columns in Dataset:',t)
#print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
return tst_ds
tst_ds_PCA = prepare_features_PCA_cf(c='1', p='3_1')
# Evaluating different supervised learning methods provided in eval.py
# We can see that the column features have increased F1 score of the classifiers
# Best score for Naive Bayes
for elem in ['rf', 'dt', 'nb' ,'nn','knn']:
print(elem, ":", hold_out_val(tst_ds_PCA, target='target', include_self=False, cl=elem, verbose=False, random_state=1))
def evaluate_models(ds):
res = {}
for elem in ['rf', 'dt', 'nb' ,'nn','knn']:
res[elem] = hold_out_val(ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1)
return res
def evaluate_performance(c, p):
# include a prepare data function?
ds = prepare_features(c, p)
return evaluate_models(ds)
def evaluate_performance_PCA_cf(c, p):
# include a prepare data function?
ds = prepare_features_PCA_cf(c, p)
return evaluate_models(ds)
config = ['1','2','3','4']
pairing = ['1_2','1_4','2_3','3_1','3_4','4_2']
tst_ds = []
res_all = []
for c in config:
print("Testing for configuration", c)
for p in pairing:
print("Analyse performance for pairing", p)
res = evaluate_performance(c, p)
res_all.append(res)
# TODO draw graph
for model in res:
print(model, res[model])
all_keys = set().union(*(d.keys() for d in res_all))
print(all_keys)
print("results for prepare_features() function")
for key in all_keys:
print("mean F1 for {}: {}".format(key, sum(item[key][0] for item in res_all)/len(res_all)))
config = ['1','2','3','4']
pairing = ['1_2','1_4','2_3','3_1','3_4','4_2']
tst_ds = []
res_all_PCA = []
for c in config:
print("Testing for configuration", c)
for p in pairing:
print("Analyse performance for pairing", p)
res = evaluate_performance_PCA_cf(c, p)
res_all_PCA.append(res)
# TODO draw graph
for model in res:
print(model, res[model])
all_keys = set().union(*(d.keys() for d in res_all_PCA))
print(all_keys)
print("results for prepare_features_PCA_cf() function")
for key in all_keys:
print("mean F1 for {}: {}".format(key, sum(item[key][0] for item in res_all_PCA)/len(res_all_PCA))) | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Task 6: Online recognizer
Serializing the model for the online predictor
The previously chosen model is serialized and stored in 'models/solution_ueb02' so that it can be loaded when the REST API is started. | from sklearn.externals import joblib
joblib.dump(res['dt'], '../../models/solution_ueb02/model.plk') | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
Starting the online server
For this, the dependencies Flask, flask_restful, and flask_cors must be installed.
The following command starts a flask_restful server on localhost port 5444 that answers JSON POST requests. The server is implemented in the file online.py within the ipynb folder and uses the final chosen model.
Requests can be made as POST requests to http://localhost:5444/predict with a JSON body of the following format:
{ "row": "features" }
Be careful that the sent body is valid JSON. The answer contains the predicted class:
{ "p_class": "predicted class" }
For now, the online predictor only predicts the class of single rows sent to it. | # Navigate to notebooks/solution_ueb02 and start the server
# with 'python -m online'
# Now requests to the REST API are simulated row by row; every valid json
# request is answered with a json prediction response | notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb | hhain/sdap17 | mit |
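To illustrate the request/response format described above, here is a minimal client-side sketch (the helper names are our own; actually sending the request requires the requests package and a running server, so that part is left commented out):

```python
import json

# URL of the prediction endpoint described above
PREDICT_URL = 'http://localhost:5444/predict'

def make_request_body(features):
    """Serialize one row of features into the expected json body."""
    return json.dumps({'row': features})

def parse_response_body(body):
    """Extract the predicted class from a json prediction response."""
    return json.loads(body)['p_class']

# Sending a request would look like this (not executed here):
# import requests
# resp = requests.post(PREDICT_URL, data=make_request_body('0.1,0.2,0.3'),
#                      headers={'Content-Type': 'application/json'})
# print(parse_response_body(resp.text))
```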
First, we import Python's regular expression library, re.
1. Getting started | s = 'Blow low, follow in of which low. lower, lmoww oow aow bow cow 23742937 dow kdiieur998.'
p = 'low' | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Suppose we want to find the word low in the string s. Since the pattern of this word is simply low, we can use low itself as a regular expression, which we name p. | m = re.findall(p, s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
findall(pattern, string) is a function in the re module that extracts all substrings of string matching the regular expression pattern and returns them as a list. It scans from left to right, and the matches are stored in the returned list in the order in which they were found.
The regular expression low matches every occurrence of the word low, but it also matches the low inside strings such as lower and Blow. | p = r'\blow\b'
m = re.findall(p, s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
\b (boundary) is a special character in regular expressions that denotes a word boundary. The regular expression r'\blow\b' matches low on its own, with word boundaries on both sides (boundaries such as spaces, which are not themselves consumed by the match). | p = r'[lmo]ow'
m = re.findall(p, s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
[lmo] matches any one of the letters l, m, o. | p = r'[a-d]ow'
m = re.findall(p, s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
[a-d] matches any one of the letters a, b, c, d. | p = r'\d'
m = re.findall(p, s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
\d (digit) matches a single digit. | p = r'\d+'
m = re.findall(p, s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
+ is a metacharacter that matches one or more repetitions of the preceding pattern.
Therefore \d+ matches any positive integer of length at least 1.
2. Basic matching and examples
Character pattern|Matches|Equivalent to
----|---|---
[a-d]|One character of: a, b, c, d|[abcd]
[^a-d]|One character except: a, b, c, d|[^abcd]
abc\|def|abc or def|
\d|One digit|[0-9]
\D|One non-digit|[^0-9]
\s|One whitespace|[ \t\n\r\f\v]
\S|One non-whitespace|[^ \t\n\r\f\v]
\w|One word character|[a-zA-Z0-9_]
\W|One non-word character|[^a-zA-Z0-9_]
.|Any character (except newline)|[^\n]
Anchor|Matches
----|---
^|Start of the string
$|End of the string
\b|Boundary between word and non-word characters
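A quick illustration of these anchors (a small example of our own):

```python
import re

s = 'low lower Blow low'
print(re.findall(r'^low', s))     # matches only at the start of the string
print(re.findall(r'low$', s))     # matches only at the end of the string
print(re.findall(r'\blow\b', s))  # matches whole words only
# ['low'], ['low'], ['low', 'low']
```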
Quantifier|Matches
----|---
{5}|Match expression exactly 5 times
{2,5}|Match expression 2 to 5 times
{2,}|Match expression 2 or more times
{,5}|Match expression 0 to 5 times
*|Match expression 0 or more times
{,}|Match expression 0 or more times
?|Match expression 0 or 1 times
{0,1}|Match expression 0 or 1 times
+|Match expression 1 or more times
{1,}|Match expression 1 or more times
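These quantifiers behave as follows (a small example of our own):

```python
import re

s = 'a1 b22 c333 d4444'
print(re.findall(r'\d+', s))      # one or more digits
print(re.findall(r'\d{2,3}', s))  # two to three digits (greedy)
print(re.findall(r'colou?r', 'color colour'))  # optional u
# ['1', '22', '333', '4444'], ['22', '333', '444'], ['color', 'colour']
```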
Character escape|Matches
----|---
\.|. character
\\|\ character
\\\||\| character
\+|+ character
\?|? character
\{|{ character
\)|) character
\[|[ character | m = re.findall(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Matching phone numbers: the area code has 3 or 4 digits, the number has 8 digits, and there may or may not be a - in between. | m = re.findall(r'[\u4e00-\u9fa5]', '测试 汉 字,abc,测试xia,可以')
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Matching Chinese characters
A few examples
Regular expression|Matches
----|---
[A-Za-z0-9]|English letters and digits
[\u4E00-\u9FA5A-Za-z0-9_]|Chinese and English characters, digits, and the underscore
^[a-zA-Z][a-zA-Z0-9_]{4,15}$|A valid account name, 5-16 characters long, containing only letters, digits, and underscores, with the first character a letter
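For example, the account-name pattern from the last row can be checked like this (the test names are made up):

```python
import re

account = re.compile(r'^[a-zA-Z][a-zA-Z0-9_]{4,15}$')
for name in ['alice_01', 'bob', '9lives', 'valid_name']:
    print(name, bool(account.match(name)))
# alice_01 True, bob False, 9lives False, valid_name True
```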
3. Going further
3.1 Several functions of Python's re module
Function|Purpose|Usage
----|---|---
re.search|Return a match object if pattern found in string|re.search(r'[pat]tern', 'string')
re.finditer|Return an iterable of match objects (one for each match)|re.finditer(r'[pat]tern', 'string')
re.findall|Return a list of all matched strings (different when capture groups)|re.findall(r'[pat]tern', 'string')
re.split|Split string by regex delimiter & return string list|re.split(r'[ -]', 'st-ri ng')
re.compile|Compile a regular expression pattern for later use|re.compile(r'[pat]tern') | m = re.search(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
re.compile|Compile a regular expression pattern for later use|re.compile(r'[pat]tern') | m = re.search(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
m
m.group() | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Use the group() function to extract the content of the match object. | ms = re.finditer(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
for m in ms:
print(m.group())
words = re.split(r'[,-]', '010-66677788,02166697788,0451-22882828')
words
p = re.compile(r'[,-]')
p.split('010-66677788,02166697788,0451-22882828') | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Use the compile() function to compile a regular expression; if it will be run many times, this speeds up the program.
3.2 Groups and backreferences
Group Type|Expression
----|---
Capturing|( ... )
Non-capturing|(?: ... )
Capturing group named Y|(?P<Y> ... )
Match the Y'th captured group|\Y
Match the named group Y|(?P=Y)
( ... ) treats the part inside the parentheses as a unit, i.e., a group, and matches strings against that group as a whole.
A group can be referenced later in the same regular expression, either by its position or by its name; such a reference is called a backreference. | p = re.compile('(ab)+')
p.search('ababababab').group()
p.search('ababababab').groups() | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
When there are groups, use the groups() function to extract all captured groups. | p = re.compile('(\d)-(\d)-(\d)')
p.search('1-2-3').group()
p.search('1-2-3').groups()
s = '喜欢/v 你/x 的/u 眼睛/n 和/u 深情/n 。/w'
p = re.compile(r'(\S+)/n')
m = p.findall(s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
按出现顺序捕获名词(/n)。 | p=re.compile('(?P<first>\d)-(\d)-(\d)')
p.search('1-2-3').group() | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Inside a group, the form ?P<name> gives the group a name, where name is the name assigned to that group. | p.search('1-2-3').group('first')
group('name') retrieves the matched group directly by its name. | s = 'age:13,name:Tom;age:18,name:John'
p = re.compile(r'age:(\d+),name:(\w+)')
m = p.findall(s)
m
p = re.compile(r'age:(?:\d+),name:(\w+)')
m = p.findall(s)
m | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
(?:\d+) matches the pattern but does not capture the group; therefore the digits are not captured here. | s = 'abcdebbcde'
p = re.compile(r'([ab])\1')
m = p.search(s)
print('The match is {},the capture group is {}'.format(m.group(), m.groups())) | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
This is a backreference.
After the group ([ab]) matches a or b, matching continues with \1, which matches the same character the group just matched. Therefore this regular expression matches aa or bb.
Similarly, r'([a-z])\1{3}' matches 4 consecutive identical lowercase letters. | s = '12,56,89,123,56,98, 12'
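For example, r'([a-z])\1{3}' finds a run of four identical letters (a small example of our own):

```python
import re

p = re.compile(r'([a-z])\1{3}')
m = p.search('abcdeeeeefg')
print(m.group())  # 'eeee'
```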
p = re.compile(r'\b(\d+)\b.*\b\1\b')
m = p.search(s)
m.group(1) | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Using a backreference, we can check whether the string contains a duplicated number and extract the first repeated number.
Here \1 refers back to the match of the preceding group. | s = '12,56,89,123,56,98, 12'
p = re.compile(r'\b(?P<name>\d+)\b.*\b(?P=name)\b')
m = p.search(s)
m.group(1) | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Similar to the previous example, but using a backreference by group name.
3.3 Greedy vs. lazy matching
Quantifier|Matches
----|---
{2,5}?|Match 2 to 5 times (less preferred)
{2,}?|Match 2 or more times (less preferred)
{,5}?|Match 0 to 5 times (less preferred)
*?|Match 0 or more times (less preferred)
{,}?|Match 0 or more times (less preferred)
??|Match 0 or 1 times (less preferred)
{0,1}?|Match 0 or 1 times (less preferred)
+?|Match 1 or more times (less preferred)
{1,}?|Match 1 or more times (less preferred)
When a regular expression contains quantifiers that allow repetition, the default behavior is to match as many characters as possible (while still allowing the expression as a whole to match).
Lazy matching, in contrast, matches as few characters as possible; this is achieved by appending a ? to the quantifier. | p = re.compile('(ab)+')
p.search('ababababab').group()
p = re.compile('(ab)+?')
p.search('ababababab').group() | chapter3/python正则表达式基础快速教程.ipynb | zipeiyang/liupengyuan.github.io | mit |
Let us save the embedding for later use. | #### DUMP
#import pickle
#f = open('myembedding.pkl','wb')
#pickle.dump([final_embeddings,dictionary,reverse_dictionary],f)
#f.close()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words) | 7.2 Word Embeddings.ipynb | jvitria/DeepLearningBBVA2016 | mit |
6. Understanding and using the embedding.
In the code above, we have the dictionary, which converts a word to an index, and the reverse_dictionary, which, given an index, returns the corresponding word. | import pickle
f = open('./dataset/myembedding.pkl','rb')
fe,dic,rdic=pickle.load(f)
f.close()
dic['woman']
rdic[42] | 7.2 Word Embeddings.ipynb | jvitria/DeepLearningBBVA2016 | mit |
The embedding tries to put together words with similar meaning. A good embedding supports semantic operations. Let us check some simple semantic operations: | result = (fe[dic['two'],:]+ fe[dic['one'],:])
from scipy.spatial import distance
candidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric="seuclidean"),axis=0)
for i in range(5):
idx=candidates[i][0]
print(rdic[idx]) | 7.2 Word Embeddings.ipynb | jvitria/DeepLearningBBVA2016 | mit |
We can also define word analogies: football is to ? as foot is to hand | result = (fe[dic['football'],:] - fe[dic['foot'],:] + fe[dic['hand'],:])
from scipy.spatial import distance
candidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric="seuclidean"),axis=0)
for i in range(5):
idx=candidates[i][0]
print(rdic[idx])
result = (fe[dic['madrid'],:] - fe[dic['spain'],:] + fe[dic['germany'],:])
from scipy.spatial import distance
candidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric="seuclidean"),axis=0)
for i in range(5):
idx=candidates[i][0]
print(rdic[idx])
result = (fe[dic['barcelona'],:] - fe[dic['spain'],:] + fe[dic['germany'],:])
from scipy.spatial import distance
candidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric="seuclidean"),axis=0)
for i in range(5):
idx=candidates[i][0]
print(rdic[idx]) | 7.2 Word Embeddings.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Let us use a pretrained embedding. We will use a simple embedding detailed in Improving Word Representations via Global Context and Multiple Word Prototypes | import pandas as pd
df = pd.read_table("./dataset/wordVectors.txt",delimiter=" ",header=None)
embedding=df.values[:,:-1]
f = open("./dataset/vocab.txt",'r')
dictionary=dict()
for word in f.readlines():
dictionary[word] = len(dictionary)
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
result = embedding[dictionary['king\n'],:]-embedding[dictionary['man\n'],:]+embedding[dictionary['girl\n'],:]
import numpy as np
from scipy.spatial import distance
candidates=np.argsort(distance.cdist(embedding,result[np.newaxis,:],metric="seuclidean"),axis=0)
for i in range(0, 5):
idx=candidates[i][0]
print(reverse_dictionary[idx]) | 7.2 Word Embeddings.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below | def get_avg_inter_intra_sims(X, y, val):
inter_sims = []
intra_sims = []
for i in range(len(X)):
for j in range(i+1, len(X)):
if y[i] == y[j] and y[i] == val:
intra_sims.append(similarities[i][j])
elif y[i] == val or y[j] == val:
inter_sims.append(similarities[i][j])
return np.array(intra_sims), np.array(inter_sims)
for i in np.unique(y):
intra_sims, inter_sims = get_avg_inter_intra_sims(X, y, i)
print('Mean intra similarity for hw',i,'is',np.mean(intra_sims),'with std',np.std(intra_sims))
print('Mean inter similarity for hw',i,'is',np.mean(inter_sims),'with std',np.std(inter_sims))
print('----')
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 5, 15
def get_all_sims(X, y, val):
sims = []
for i in range(len(X)):
for j in range(i+1, len(X)):
if y[i] == val or y[j] == val:
sims.append(similarities[i][j])
return sims
fig, axes = plt.subplots(4)
for i in range(4):
axes[i].hist(get_all_sims(X,y,i), bins=30)
axes[i].set_xlabel("Similarity Value")
axes[i].set_ylabel("Number of pairs") | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0 |
Sims color coded | %matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 5, 15
def get_all_sims(X, y, val):
sims = []
sims_outer = []
for i in range(len(X)):
for j in range(i+1, len(X)):
if y[i] == val or y[j] == val:
if y[i] == y[j]:
sims.append(similarities[i][j])
else:
sims_outer.append(similarities[i][j])
return sims,sims_outer
fig, axes = plt.subplots(4)
for i in range(4):
axes[i].hist(get_all_sims(X,y,i)[1], bins=30)
axes[i].hist(get_all_sims(X,y,i)[0], bins=30)
axes[i].set_xlabel("Similarity Value")
axes[i].set_ylabel("Number of pairs") | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0 |
Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows:
Split the data into train and test sets (done below via 10-fold cross-validation)
Vectorize based on templates that exist
Build a random forest classifier that uses this feature representation, and measure the performance | import sklearn
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.CountVectorizer()
X_list = [" ".join(el) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
p = np.random.permutation(len(X.todense()))
X = X.todense()[p]
y = np.array(y)[p]
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=400, max_depth=3)
scores = cross_val_score(clf, X, y, cv=10)
print(scores)
print(np.mean(scores))
X.shape | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0 |
Clustering
Lastly, we try unsupervised learning, clustering based on the features we've extracted, and measure using silhouette score. | X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.CountVectorizer()
X_list = [" ".join(el) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
clusterer = sklearn.cluster.KMeans(n_clusters = 4).fit(X)
cluster_score = (sklearn.metrics.silhouette_score(X, clusterer.labels_))
cheat_score = (sklearn.metrics.silhouette_score(X, y))
print('Silhouette score using the actual labels:', cheat_score)
print('Silhouette score using the cluster labels:', cluster_score)
x_reduced = sklearn.decomposition.PCA(n_components=2).fit_transform(X.todense())
plt.rcParams['figure.figsize'] = 5, 10
fig, axes = plt.subplots(2)
axes[0].scatter(x_reduced[:,0], x_reduced[:,1], c=y)
axes[0].set_title('PCA Reduced notebooks with original labels')
axes[0].set_xlim(left=-2, right=5)
axes[1].scatter(x_reduced[:,0], x_reduced[:,1], c=clusterer.labels_)
axes[1].set_title('PCA Reduced notebooks with kmean cluster labels')
axes[1].set_xlim(left=-2, right=5) | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0 |
Trying to restrict features
The problem above is that there are too many unimportant features -- all this noise makes it hard to separate the different classes. To try to counteract this, I'll try ranking the features using tfidf and only take some of them | X, y = ci.get_data_set()
tfidf = sklearn.feature_extraction.text.TfidfVectorizer()
X_list = [" ".join(el) for el in X]
tfidf.fit(X_list)
X = tfidf.transform(X_list)
#X = X.todense()
feature_array = np.array(tfidf.get_feature_names())
tfidf_sorting = np.argsort(X.toarray()).flatten()[::-1]
top_n = feature_array[tfidf_sorting][:50]
print(top_n)
# note: sra is the feature importance ranking computed in the random forest section below
top_n = [el[1] for el in sra[:15]]
print(top_n)
X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.CountVectorizer()
X_list = [" ".join([val for val in el if val in top_n]) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
X = X.todense()
x_reduced = sklearn.decomposition.PCA(n_components=2).fit_transform(X)
print(x_reduced.shape)
plt.rcParams['figure.figsize'] = 5, 5
plt.scatter(x_reduced[:,0], x_reduced[:,1], c=y)
cheat_score = (sklearn.metrics.silhouette_score(x_reduced, y))
print(cheat_score) | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0 |
T-SNE | X, y = ci.get_data_set()
tfidf = sklearn.feature_extraction.text.TfidfVectorizer()
X_list = [" ".join(el) for el in X]
tfidf.fit(X_list)
X = tfidf.transform(X_list)
#X = X.todense()
# This is a recommended step when using T-SNE
x_reduced = sklearn.decomposition.PCA(n_components=50).fit_transform(X.todense())
from sklearn.manifold import TSNE
tsn = TSNE(n_components=2)
x_red = tsn.fit_transform(x_reduced)
plt.scatter(x_red[:,0], x_red[:,1], c=y)
cheat_score = (sklearn.metrics.silhouette_score(x_red, y))
print(cheat_score)
clusterer = sklearn.cluster.KMeans(n_clusters = 4).fit(x_red)
cluster_score = (sklearn.metrics.silhouette_score(X, clusterer.labels_))
print(cluster_score)
plt.scatter(x_red[:,0], x_red[:,1], c=clusterer.labels_) | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0 |
What's happening
Figuring out what is going on is a bit difficult, but we can look at the top templates generated from the random forest, and see why they might have been chosen | '''
Looking at the output below, it's clear that the bottom-up method is recognizing very specific
structures of the AST graph, which makes sense because some structures are repeated exactly in
homeworks. For example:
treatment = pd.Series([0]*4 + [1]*2)
is a line in all of the homework one notebooks, and the top feature of the random forest
(at time of running) is
var = pd.Series([0] * 4 + [1] * 2)
Note that the bottom up method does not even take the specific numbers into account, but only
the operations.
'''
clf.fit(X,y)
fnames= countvec.get_feature_names()
clfi = clf.feature_importances_
sa = []
for i in range(len(clfi)):
    sa.append((clfi[i], fnames[i]))
sra = [el for el in reversed(sorted(sa))]
import astor
for temp in sra:
    temp = temp[1]
    print(temp, agr.templates.get_examples(temp)[1])
    for i in range(5):
        print('\t', astor.to_source(agr.templates.get_examples(temp)[0][i])) | summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb | DataPilot/notebook-miner | apache-2.0
Get info about a dataset | # information about the ch04 dataset
dsinfo = service.datasets().get(datasetId="ch04", projectId=PROJECT).execute()
for info in dsinfo.items():
    print(info) | 05_devel/google_api_client.ipynb | GoogleCloudPlatform/bigquery-oreilly-book | apache-2.0
List tables and creation times | # list tables in dataset
tables = service.tables().list(datasetId="ch04", projectId=PROJECT).execute()
for t in tables['tables']:
    print(t['tableReference']['tableId'] + ' was created at ' + t['creationTime']) | 05_devel/google_api_client.ipynb | GoogleCloudPlatform/bigquery-oreilly-book | apache-2.0
Query and get result | # send a query request
request={
"useLegacySql": False,
"query": "SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5"
}
print(request)
response = service.jobs().query(projectId=PROJECT, body=request).execute()
print('----' * 10)
for r in response['rows']:
    print(r['f'][0]['v']) | 05_devel/google_api_client.ipynb | GoogleCloudPlatform/bigquery-oreilly-book | apache-2.0
Asynchronous query and paging through results | # send a query request that will not terminate within the timeout specified and will require paging
request={
"useLegacySql": False,
"timeoutMs": 0,
"useQueryCache": False,
"query": "SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5"
}
response = service.jobs().query(projectId=PROJECT, body=request).execute()
print(response)
jobId = response['jobReference']['jobId']
print(jobId)
# get query results
while (not response['jobComplete']):
    response = service.jobs().getQueryResults(projectId=PROJECT,
                                              jobId=jobId,
                                              maxResults=2,
                                              timeoutMs=5).execute()
while (True):
    # print responses
    for row in response['rows']:
        print(row['f'][0]['v'])  # station name
    print('--' * 5)
    # page through responses
    if 'pageToken' in response:
        pageToken = response['pageToken']
        # get next page
        response = service.jobs().getQueryResults(projectId=PROJECT,
                                                  jobId=jobId,
                                                  maxResults=2,
                                                  pageToken=pageToken,
                                                  timeoutMs=5).execute()
    else:
        break
| 05_devel/google_api_client.ipynb | GoogleCloudPlatform/bigquery-oreilly-book | apache-2.0 |
Setup the matplotlib environment to make the plots look pretty. | # Show the plots inside the notebook.
%matplotlib inline
# Make the figures high-resolution.
%config InlineBackend.figure_format='retina'
# Various font sizes.
ticksFontSize=18
labelsFontSizeSmall=20
labelsFontSize=30
titleFontSize=34
legendFontSize=14
matplotlib.rc('xtick', labelsize=ticksFontSize)
matplotlib.rc('ytick', labelsize=ticksFontSize)
# Colourmaps.
cm=matplotlib.pyplot.cm.get_cmap('viridis') | ResultingToGeneratedFragmentsRatio.ipynb | AleksanderLidtke/AnalyseCollisionFragments | mit |
Introduction
This notebook analyses data from a projection of the evolutionary space debris model DAMAGE. It investigates the amplification of the number of fragments that collisions generate, which occurs due to follow-on collisions. The projected scenario is "mitigation only" with additional collision avoidance performed by active spacecraft.
Read the data
Read the data about the number of fragments generated in DAMAGE collisions as well as the corresponding number of fragments in the final (2213) population snapshot, which every collision gave rise to. This accounts for the follow-on collisions that occurred in certain cases, as well as decay of fragments. Store the data in arrays and distinguish between all collisions and the subset of catastrophic collisions, which exceeded the $40\ J/g$ energy threshold.
The data are stored on GitHub. We read one file at a time and cap the number of characters read at any given time, so as not to clog the network. | import urllib2, numpy
from __future__ import print_function
# All collisions.
lines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\
AnalyseCollisionFragments/master/AllColGenerated').read(856393*25) # no. lines * no. chars per line
allColGen=numpy.array(lines.split('\n')[1:-1],dtype=numpy.float64) # Skip the header and the last empty line
lines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\
AnalyseCollisionFragments/master/AllColResulting').read(856393*25)
allColRes=numpy.array(lines.split('\n')[1:-1],dtype=numpy.float64)
assert allColGen.shape==allColRes.shape
print("Read data for {} collisions.".format(allColGen.size))
# Catastrophic collisions (a subset of all collisions).
lines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\
AnalyseCollisionFragments/master/CatColGenerated').read(500227*25) # Fewer lines for the subset of all collisions.
catColGen=numpy.array(lines.split('\n')[1:-1],dtype=numpy.float64)
lines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\
AnalyseCollisionFragments/master/CatColResulting').read(500227*25)
catColRes=numpy.array(lines.split('\n')[1:-1],dtype=numpy.float64)
assert catColGen.shape==catColRes.shape
print("Read data for {} catastrophic collisions.".format(catColGen.size)) | ResultingToGeneratedFragmentsRatio.ipynb | AleksanderLidtke/AnalyseCollisionFragments | mit |
Analyse the ratio
Description
Investigate the fact that sometimes follow-on collisions will result in certain collisions being responsible for more fragments at some census epoch than they generated themselves. In order to do this, investigate the ratio between the number of fragments in the population snapshot in 2213 that resulted from a given collision, $N_{res}$, and the number of fragments $\geq 10$ cm generated in every collision, $N_{gen}$:
$$ r=\frac{N_{res}}{N_{gen}}. $$
$N_{res}$ is the effective number of objects $\geq 10$ cm passing through the low-Earth orbit (LEO) volume that every collision has given rise to. If a fragment from collision $C_1$ took part in another collision, $C_2$, all the fragments from these two collisions left on-orbit at the census epoch are said to have been caused by $C_1.$ This is because if $C_1$ hadn't taken place, $C_2$ wouldn't have taken place either.
The effective number of objects $N_{res}$ is computed as the number of fragments $\geq 10$ cm, $N$, multiplied by the fraction of the orbital period that the fragments spend under the altitude of 2000 km, i.e. within the LEO regime:
$$N_{res}=N\times\frac{\tau_{LEO}}{\tau},$$
where $\tau_{LEO}$ is the time that the object fragments spend under the altitude of 2000 km during every orbit, and $\tau$ is the orbital period.
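The effective-number formula can be illustrated with a quick numerical sketch; the fragment count and orbit times below are made-up values for illustration, not DAMAGE output:

```python
# Sketch of the effective-number computation described above.
def effective_fragments(n_fragments, tau_leo, tau):
    """N_res = N * (time per orbit below 2000 km altitude / orbital period)."""
    return n_fragments * tau_leo / tau

n_generated = 1200.0        # fragments >= 10 cm created by a collision C1
# 1500 fragments on orbit at the census epoch, spending 5400 s of each
# 7200 s orbit below 2000 km (illustrative numbers).
n_res = effective_fragments(1500.0, tau_leo=5400.0, tau=7200.0)
r = n_res / n_generated     # resulting-to-generated ratio
print(n_res, r)             # -> 1125.0 0.9375
```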
Results
This is what the ratio $r$ looks like for all collisions and the subset of catastrophic ones. | # Compute the ratios.
allRatios=allColRes/allColGen
catRatios=catColRes/catColGen
# Plot.
fig=matplotlib.pyplot.figure(figsize=(12,8))
ax=fig.gca()
matplotlib.pyplot.grid(linewidth=1)
ax.set_xlabel(r"$Time\ (s)$",fontsize=labelsFontSize)
ax.set_ylabel(r"$Response\ (-)$",fontsize=labelsFontSize)
ax.set_xlim(0,7)
ax.set_ylim(-2,2)
ax.plot(allColGen,allRatios,alpha=1.0,label=r"$All\ collisions$",marker='o',c='k',markersize=1,mew=0,lw=0)
ax.plot(catColGen,catRatios,alpha=1.0,label=r"$Catastrophic$",marker='x',c='r',markersize=1,mew=2,lw=0)
ax.set_xlabel(r"$No.\ generated\ fragments\ \geq10\ cm$",fontsize=labelsFontSize)
ax.set_ylabel(r"$Resulting-to-generated\ ratio$",fontsize=labelsFontSize)
ax.set_xlim(0,12000)
ax.set_ylim(0,10)
ax.ticklabel_format(axis='x', style='sci', scilimits=(-2,-1))
ax.tick_params(axis='both',reset=False,which='both',length=5,width=1.5)
matplotlib.pyplot.subplots_adjust(left=0.1,right=0.95,top=0.95,bottom=0.1)
box=ax.get_position()
ax.set_position([box.x0+box.width*0.0,box.y0+box.height*0.05,box.width*0.99,box.height*0.88])
ax.legend(bbox_to_anchor=(0.5,1.14),loc='upper center',prop={'size':legendFontSize},fancybox=True,\
shadow=True,ncol=3)
fig.show() | ResultingToGeneratedFragmentsRatio.ipynb | AleksanderLidtke/AnalyseCollisionFragments | mit |
Not very legible, right? But some things can be observed in the above figure anyway. First of all, there's a "dip" in the number of generated fragments around $6.5\times 10^4$. This was caused by the fact that the number of generated fragments, $N$, exceeding a certain length $L_c$ is given by a power law:
$$ N(L_c)=0.1M^{0.75}L_c^{-1.71},$$
where $M$ is the combined mass of both objects [1]. This shows that, despite the 150000 Monte Carlo (MC) runs used to project the analysed scenario over 200 years, there simply weren't many collisions involving two objects with masses that would produce around $6.5\times10^4$ fragments. This is because the distribution of the masses of objects in orbit isn't continuous, and not every combination of collided masses is equally likely.
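The power law above can be evaluated directly; the masses in this sketch are illustrative, not taken from the MC runs:

```python
# Cumulative number of fragments of characteristic length >= L_c for a
# catastrophic collision, per the power law quoted above.
# M is the combined mass of both objects (kg), L_c in metres.
def n_fragments(mass_kg, l_c=0.1):
    return 0.1 * mass_kg ** 0.75 * l_c ** -1.71

small = n_fragments(500.0)    # e.g. two small satellites (illustrative)
large = n_fragments(2000.0)   # e.g. a heavier intact pair (illustrative)
print(round(small), round(large))
```

This also makes the "dip" plausible: the number of generated fragments depends only on the combined mass, so fragment counts that no realistic mass combination produces are simply absent.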
Now, let us bin the ratios inside fixed-width bins and compute the mean and median inside each bin to see if the mean is close to the median, i.e. whether the distribution is close to normal-ish: | bins=numpy.arange(0,allColGen.max(),500)
means=numpy.zeros(bins.size-1)
medians=numpy.zeros(bins.size-1)
meansCat=numpy.zeros(bins.size-1)
mediansCat=numpy.zeros(bins.size-1)
for i in range(bins.size-1):
    means[i]=numpy.mean(allRatios[(allColGen>=bins[i]) & (allColGen<bins[i+1])])
    medians[i]=numpy.median(allRatios[(allColGen>=bins[i]) & (allColGen<bins[i+1])])
    meansCat[i]=numpy.mean(catRatios[(catColGen>=bins[i]) & (catColGen<bins[i+1])])
    mediansCat[i]=numpy.median(catRatios[(catColGen>=bins[i]) & (catColGen<bins[i+1])])
# Plot.
fig=matplotlib.pyplot.figure(figsize=(14,8))
ax=fig.gca()
matplotlib.pyplot.grid(linewidth=2)
ax.plot(bins[:-1],means,alpha=1.0,label=r"$Mean,\ all$",marker=None,c='k',lw=3,ls='--')
ax.plot(bins[:-1],medians,alpha=1.0,label=r"$Median,\ all$",marker=None,c='k',lw=3,ls=':')
ax.plot(bins[:-1],meansCat,alpha=1.0,label=r"$Mean,\ catastrophic$",marker=None,c='r',lw=3,ls='--')
ax.plot(bins[:-1],mediansCat,alpha=1.0,label=r"$Median,\ catastrophic$",marker=None,c='r',lw=3,ls=':')
ax.set_xlabel(r"$No.\ generated\ fragments\ \geq10\ cm$",fontsize=labelsFontSize)
ax.set_ylabel(r"$Resulting-to-generated\ ratio$",fontsize=labelsFontSize)
ax.set_xlim(0,12000)
ax.set_ylim(0,1)
ax.ticklabel_format(axis='x', style='sci', scilimits=(-2,-1))
ax.tick_params(axis='both',reset=False,which='both',length=5,width=1.5)
matplotlib.pyplot.subplots_adjust(left=0.1,right=0.95,top=0.92,bottom=0.1)
box=ax.get_position()
ax.set_position([box.x0+box.width*0.0,box.y0+box.height*0.05,box.width*0.99,box.height*0.88])
ax.legend(bbox_to_anchor=(0.5,1.18),loc='upper center',prop={'size':legendFontSize},fancybox=True,\
shadow=True,ncol=2)
fig.show() | ResultingToGeneratedFragmentsRatio.ipynb | AleksanderLidtke/AnalyseCollisionFragments | mit |
The more fragments were generated in a collision, the fewer fragments a given collision gave rise to in the final population. This seems counter-intuitive because large collisions are expected to contribute more to the long-term growth of the debris population than others, which generate fewer fragments [2]. However, these plots do not contradict this thesis about the reasons for the growth of the number of debris, because they do not show which collisions will drive the predicted growth of the number of objects in orbit. Rather, they show which collisions are likely to result in many follow-on collisions that will amplify the number of fragments the collisions generate, thus fuelling the "Kessler syndrome" [3]. They do not necessarily say that the number of resulting fragments will be large in absolute terms.
The mean in every analysed bin was considerably different than the median, meaning that the distributions in every bin were far from normal. This means that in every bin relatively few collisions resulted in many follow-on collisions that increased the ratio $r$. This is what the distribution of the ratio $r$ looks like in every bin of the no. generated fragments: | ratioBins=numpy.linspace(0,2,100)
# Get colours for every bin of the number of generated fragments.
cNorm=matplotlib.colors.Normalize(vmin=0, vmax=bins.size-1)
scalarMap=matplotlib.cm.ScalarMappable(norm=cNorm,cmap=cm)
histColours=[]
for i in range(0,bins.size-1):
    histColours.append(scalarMap.to_rgba(i))
# Plot the histograms.
fig=matplotlib.pyplot.figure(figsize=(14,8))
ax=fig.gca()
matplotlib.pyplot.grid(linewidth=2)
ax.set_xlabel(r"$Resulting-to-generated\ ratio$",fontsize=labelsFontSize)
ax.set_ylabel(r"$Fraction\ of\ collisions$",fontsize=labelsFontSize)
for i in range(bins.size-1):
    ax.hist(allRatios[(allColGen>=bins[i]) & (allColGen<bins[i+1])],\
            ratioBins,normed=1,cumulative=1,histtype='step',ls='solid',\
            color=histColours[i],label=r"${}-{},\ all$".format(bins[i],bins[i+1]))
    ax.hist(catRatios[(catColGen>=bins[i]) & (catColGen<bins[i+1])],\
            ratioBins,normed=1,cumulative=1,histtype='step',ls='dashed',\
            color=histColours[i],label=r"${}-{},\ cat$".format(bins[i],bins[i+1]))
ax.set_xlim(0,2)
ax.ticklabel_format(axis='y', style='sci', scilimits=(-2,-1))
ax.tick_params(axis='both',reset=False,which='both',length=5,width=1.5)
matplotlib.pyplot.subplots_adjust(left=0.1,right=0.95,top=0.92,bottom=0.1)
box=ax.get_position()
ax.set_position([box.x0+box.width*0.0,box.y0+box.height*0.05,box.width*0.99,box.height*0.6])
ax.legend(bbox_to_anchor=(0.5,1.8),loc='upper center',prop={'size':legendFontSize},fancybox=True,\
shadow=True,ncol=5)
fig.show() | ResultingToGeneratedFragmentsRatio.ipynb | AleksanderLidtke/AnalyseCollisionFragments | mit |
Most collisions had a ratio of less than $2.0$. Only | numpy.sum(allRatios>=2.0)/float(allRatios.size)*100 | ResultingToGeneratedFragmentsRatio.ipynb | AleksanderLidtke/AnalyseCollisionFragments | mit |
Importing, downsampling, and visualizing data | # Get paths to all images and masks.
all_image_paths = glob('E:\\data\\lungs\\2d_images\\*.tif')
all_mask_paths = glob('E:\\data\\lungs\\2d_masks\\*.tif')
print(len(all_image_paths), 'image paths found')
print(len(all_mask_paths), 'mask paths found')
# Define function to read in and downsample an image.
def read_image (path, sampling=1): return np.expand_dims(imread(path)[::sampling, ::sampling],0)
# Import and downsample all images and masks.
all_images = np.stack([read_image(path, 4) for path in all_image_paths], 0)
all_masks = np.stack([read_image(path, 4) for path in all_mask_paths], 0) / 255.0
print('Image resolution is', all_images[1].shape)
print('Mask resolution is', all_masks[1].shape)
# Visualize an example CT image and manual segmentation.
example_no = 1
fig, ax = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(10,5))
ax[0].imshow(all_images[example_no, 0], cmap='Blues')
ax[0].set_title('CT image', fontsize=18)
ax[0].tick_params(labelsize=16)
ax[1].imshow(all_masks[example_no, 0], cmap='Blues')
ax[1].set_title('Manual segmentation', fontsize=18)
ax[1].tick_params(labelsize=16) | deep_image_segmentation_with_convolutional_neural_networks.ipynb | nicholsonjohnc/jupyter | mit |
Split data into training and validation sets | X_train, X_test, y_train, y_test = train_test_split(all_images, all_masks, test_size=0.1)
print('Training input is', X_train.shape)
print('Training output is {}, min is {}, max is {}'.format(y_train.shape, y_train.min(), y_train.max()))
print('Testing set is', X_test.shape) | deep_image_segmentation_with_convolutional_neural_networks.ipynb | nicholsonjohnc/jupyter | mit |
Create CNN model | # Create a sequential model, i.e. a linear stack of layers.
model = Sequential()
# Add a 2D convolution layer.
model.add(
Conv2D(
filters=32,
kernel_size=(3, 3),
activation='relu',
input_shape=all_images.shape[1:],
padding='same'
)
)
# Add a 2D convolution layer.
model.add(
Conv2D(filters=64,
kernel_size=(3, 3),
activation='sigmoid',
input_shape=all_images.shape[1:],
padding='same'
)
)
# Add a max pooling layer.
model.add(
MaxPooling2D(
pool_size=(2, 2),
padding='same'
)
)
# Add a dense layer.
model.add(
Dense(
64,
activation='relu'
)
)
# Add a 2D convolution layer.
model.add(
Conv2D(
filters=1,
kernel_size=(3, 3),
activation='sigmoid',
input_shape=all_images.shape[1:],
padding='same'
)
)
# Add a 2D upsampling layer.
model.add(
UpSampling2D(
size=(2,2)
)
)
model.compile(
loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy','mse']
)
print(model.summary()) | deep_image_segmentation_with_convolutional_neural_networks.ipynb | nicholsonjohnc/jupyter | mit |
Train CNN model | history = model.fit(X_train, y_train, validation_split=0.10, epochs=10, batch_size=10)
test_no = 7
fig, ax = plt.subplots(nrows=1, ncols=3, sharex='col', sharey='row', figsize=(15,5))
ax[0].imshow(X_test[test_no,0], cmap='Blues')
ax[0].set_title('CT image', fontsize=18)
ax[0].tick_params(labelsize=16)
ax[1].imshow(y_test[test_no,0], cmap='Blues')
ax[1].set_title('Manual segmentation', fontsize=18)
ax[1].tick_params(labelsize=16)
ax[2].imshow(model.predict(X_test)[test_no,0], cmap='Blues')
ax[2].set_title('CNN segmentation', fontsize=18)
ax[2].tick_params(labelsize=16) | deep_image_segmentation_with_convolutional_neural_networks.ipynb | nicholsonjohnc/jupyter | mit |
```{admonition} Observation
:class: tip
The previous cell used the %%bash magic command. Some magic commands can also be used with import. See ipython-magics
```
Characteristics of programming languages
Programming languages and their implementations have characteristics such as the following:
Parsing instructions and executing them almost immediately (interpreter). An example is the language Beginners' All-purpose Symbolic Instruction Code: BASIC
Parsing instructions, translating them into an intermediate representation (IR), and executing them. The translation to an intermediate representation is a bytecode. An example is the Python language in its CPython implementation.
Compiling instructions ahead of time (AOT), before their execution. Examples are the C, C++, and Fortran languages.
Parsing instructions and compiling them just in time (JIT) at runtime. Examples are the Julia language, and Python in its PyPy implementation.
How fast instructions execute depends on the language, its implementation, and its features.
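The bytecode mentioned for the CPython case can be inspected with the standard `dis` module. A minimal sketch (the function is a generic example, one node of the composite rectangle rule used below):

```python
import dis

def rcf_step(a, h_hat, i):
    # One node of the composite rectangle rule: x_i = a + (i + 1/2) * h_hat.
    return a + (i + 1 / 2) * h_hat

# Show the intermediate representation (bytecode) CPython actually executes.
dis.dis(rcf_step)
```

The listing shows instructions such as LOAD_FAST and BINARY opcodes, which the CPython virtual machine interprets one by one.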
```{admonition} Comments
Several projects are in development to improve efficiency, among other topics. Some of them are:
PyPy
A better API for extending Python in C: hpyproject
The CPython implementation of Python is the standard one, but there are others such as PyPy. See python-vs-cpython for a brief explanation of Python implementations. See Alternative R implementations and R implementations for R implementations other than the standard one.
```
Cpython
<img src="https://dl.dropboxusercontent.com/s/6quwf6c2ci5ey0n/cpython.png?dl=0" heigth="900" width="900">
AOT and JIT compilation
```{margin}
It is common to use the word library instead of package in the context of compilation.
```
An AOT compilation creates a library, specialized for our machines, that can be used instantly. An example of this is Cython, a package that compiles Python modules. For instance, the NumPy, SciPy, or Scikit-learn libraries installed via pip or conda use Cython to compile sections of those libraries adapted to our machines.
A JIT compilation requires no "up-front work" on our side; compilation happens while the code is used, at runtime. In colloquial terms, with JIT compilation, execution starts by identifying sections of the code that can be compiled, which therefore run slower than normal at first, since compilation happens at execution time. In subsequent runs of the same code, however, those sections will be faster. In short, a warm-up is required; see for example how-fast-is-pypy.
AOT compilation gives the best speedups but demands more work on our side. JIT compilation gives good speedups with little intervention from us, but uses more memory and takes longer to start executing the code; see for example python_performance-slide-15 about PyPy issues.
For frequent execution of small scripts, AOT compilation is a better option than JIT compilation; see for example couldn't the jit dump and reload already compiled machine code.
Below we present executions in different languages, with their standard implementations, that approximate the area under the curve of $f(x) = e^{-x^2}$ on the interval $[0, 1]$ using the composite rectangle rule. Execution time is measured using $n = 10^7$ nodes.
Python | %%file Rcf_python.py
import math
import time
def Rcf(f,a,b,n):
    """
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        f (float): function expression of integrand.
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    """
    h_hat = (b-a)/n
    sum_res = 0
    for i in range(n):
        x = a+(i+1/2)*h_hat
        sum_res += f(x)
    return h_hat*sum_res

if __name__ == "__main__":
    n = 10**7
    f = lambda x: math.exp(-x**2)
    a = 0
    b = 1
    start_time = time.time()
    res = Rcf(f,a,b,n)
    end_time = time.time()
    secs = end_time-start_time
    print("Rcf took", secs, "seconds")
%%bash
python3 Rcf_python.py | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
R | %%file Rcf_R.R
Rcf<-function(f,a,b,n){
'
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (float): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
'
h_hat <- (b-a)/n
sum_res <- 0
for(i in 0:(n-1)){
x <- a+(i+1/2)*h_hat
sum_res <- sum_res + f(x)
}
approx <- h_hat*sum_res
}
n <- 10**7
f <- function(x)exp(-x^2)
a <- 0
b <- 1
system.time(Rcf(f,a,b,n))
%%bash
Rscript Rcf_R.R | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Julia
See: Julia: performance-tips | %%file Rcf_julia.jl
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
# Arguments
- `f::Float`: function expression of integrand.
- `a::Float`: left point of interval.
- `b::Float`: right point of interval.
- `n::Integer`: number of subintervals.
"""
function Rcf(f, a, b, n)
h_hat = (b-a)/n
sum_res = 0
for i in 0.0:n-1
x = a+(i+1/2)*h_hat
sum_res += f(x)
end
return h_hat*sum_res
end
function main()
a = 0
b = 1
n =10^7
f(x) = exp(-x^2)
res(f, a, b, n) = @time Rcf(f, a, b, n)
println(res(f, a, b, n))
println(res(f, a, b, n))
end
main()
%%bash
/usr/local/julia-1.7.1/bin/julia Rcf_julia.jl | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
(RCFJULIATYPEDVALUES)=
Rcf_julia_typed_values.jl | %%file Rcf_julia_typed_values.jl
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
# Arguments
- `f::Float`: function expression of integrand.
- `a::Float`: left point of interval.
- `b::Float`: right point of interval.
- `n::Integer`: number of subintervals.
"""
function Rcf(f, a, b, n)
h_hat = (b-a)/n
sum_res = 0.0
for i in 0:n-1
x = a+(i + 1/2)*h_hat
sum_res += f(x)
end
return h_hat*sum_res
end
function main()
a = 0.0
b = 1.0
n =10^7
f(x) = exp(-x^2)
res(f, a, b, n) = @time Rcf(f, a, b, n)
println(res(f, a, b, n))
println(res(f, a, b, n))
end
main()
%%bash
/usr/local/julia-1.7.1/bin/julia Rcf_julia_typed_values.jl | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
(RCFJULIANAIVE)=
Rcf_julia_naive.jl | %%file Rcf_julia_naive.jl
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
# Arguments
- `f::Float`: function expression of integrand.
- `a::Float`: left point of interval.
- `b::Float`: right point of interval.
- `n::Integer`: number of subintervals.
"""
function Rcf(f, a, b, n)
h_hat = (b-a)/n
sum_res = 0
for i in 0:n-1
x = a+(i + 1/2)*h_hat
sum_res += f(x)
end
return h_hat*sum_res
end
function main()
a = 0
b = 1
n =10^7
f(x) = exp(-x^2)
res(f, a, b, n) = @time Rcf(f, a, b, n)
println(res(f, a, b, n))
println(res(f, a, b, n))
end
main()
%%bash
/usr/local/julia-1.7.1/bin/julia Rcf_julia_naive.jl | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
C
For the timing measurements, the following references were used: measuring-time-in-millisecond-precision and find-execution-time-c-program.
(RCFC)=
Rcf_c.c | %%file Rcf_c.c
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<time.h>
#include <sys/time.h>
void Rcf(double ext_izq, double ext_der, int n,\
double *sum_res_p);
double f(double nodo);
int main(int argc, char *argv[]){
double sum_res = 0.0;
double a = 0.0, b = 1.0;
int n = 1e7;
struct timeval start;
struct timeval end;
long seconds;
long long mili;
gettimeofday(&start, NULL);
Rcf(a,b,n,&sum_res);
gettimeofday(&end, NULL);
seconds = (end.tv_sec - start.tv_sec);
mili = 1000*(seconds) + (end.tv_usec - start.tv_usec)/1000;
printf("Execution time: %lld milliseconds", mili);
return 0;
}
void Rcf(double a, double b, int n, double *sum){
double h_hat = (b-a)/n;
double x = 0.0;
int i = 0;
*sum = 0.0;
for(i = 0; i <= n-1; i++){
x = a+(i+1/2.0)*h_hat;
*sum += f(x);
}
*sum = h_hat*(*sum);
}
double f(double nodo){
double valor_f;
valor_f = exp(-pow(nodo,2));
return valor_f;
}
%%bash
gcc -Wall Rcf_c.c -o Rcf_c.out -lm
%%bash
./Rcf_c.out | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Why does providing information about the types of values (or objects) used in code help it run faster?
Python is dynamically typed, which means that an object of any type, and any statement that references an object, can change its type. This makes it difficult for the virtual machine to optimize the execution of the code, since it is not known which type will be used in future operations. For example: | v = -1.0
print(type(v), abs(v))
v = 1 - 1j
print(type(v), abs(v)) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
The abs function works differently depending on the type of the object. For an integer or floating-point number it returns the negation of $-1.0$, and for a complex number it computes a Euclidean norm from the real and imaginary parts of $v$: $\text{abs}(v) = \sqrt{v.real^2 + v.imag^2}$.
In practice, the above implies executing more instructions and therefore taking more time. Before calling abs on the variable, Python checks its type and decides which method to call (overhead).
```{admonition} Comments
Moreover, every number in Python is wrapped up in a high-level Python object. For example, for an integer there is the int object. That object has other functions, for example __str__ to print it.
It is very common that types do not change within a piece of code, so AOT compilation is a good option for faster execution.
Following the two previous comments, if we only want to compute mathematical operations (as in the square-root case above), we do not need the functionality of the high-level object.
```
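One way to see this per-object overhead is with `sys.getsizeof`, which reports the size of the full int object, headers included. Exact byte counts vary by CPython version and platform, so treat the printed numbers as illustrative:

```python
import sys

# Every Python int is a full heap object with type and reference-count
# headers; this is part of the overhead that AOT compilation to C removes.
print(sys.getsizeof(0))         # size in bytes of the int object itself
print(sys.getsizeof(10 ** 30))  # bigger magnitude -> bigger object
```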
Cython
It is a compiler that translates annotated instructions, written in a hybrid of Python and C, into a compiled module. This module can be imported like a regular Python module using import. Typically the compiled module turns out to be similar in syntax to the C language.
```{margin}
The phrase CPU-bound code refers to code whose execution involves a larger share of CPU usage than memory or I/O usage.
```
It has been around for a good while (roughly since 2007), is highly used, and is among the preferred tools for CPU-bound code. It is a fork of Pyrex (2002) that expands its capabilities.
```{admonition} Comment
In simple terms, Pyrex is Python with C-style handling of value types. Pyrex translates code written in Python into C code (which avoids using the Python/C API) and allows declaring parameters or values with C types.
```
It requires knowledge of the C language, which must be taken into account in a software development team, and it is suggested to use it on small sections of the code.
It supports the OpenMP API to take advantage of a machine's multiple cores.
It can be used via a setup.py script that compiles a module to be used with import, and it can also be used in IPython via a magic command.
<img src="https://dl.dropboxusercontent.com/s/162u0zcfpm8lewu/cython.png?dl=0" heigth="900" width="900">
```{admonition} Comment
In the machine-code compilation step of the diagram above, details were omitted such as: the creation of a .c file and the compilation of that file with the gcc compiler into the compiled module (on Unix systems it has the .so extension).
See machine code
```
Cython and the gcc compiler analyze the annotated code to determine which instructions can be optimized via an AOT compilation.
In which cases, and what kind of speed gains, can we expect from using Cython?
One case is code with many loops performing mathematical operations that are typically non-vectorized or that cannot be vectorized. That is, code whose instructions are basically plain Python without external packages. Moreover, if the variables in the loop do not change type (for example from int to float), then the code will gain speed when compiled to machine code.
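A minimal sketch of such a loop follows: plain Python, type-stable, with no external packages, which is exactly the pattern that benefits most from Cython's `cdef` annotations. The function itself is a generic example, not part of the book's code:

```python
import math
import time

def accumulate(n):
    # Type-stable loop over floats: a prime candidate for Cython, since no
    # variable changes type mid-loop and only math operations are performed.
    total = 0.0
    for i in range(n):
        total += math.exp(-(i / n) ** 2)
    return total / n

start = time.time()
res = accumulate(10 ** 5)   # approximates the integral of e^{-x^2} on [0, 1]
print(res, time.time() - start)
```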
```{admonition} Observation
:class: tip
If your Python code calls vectorized operations via NumPy, it may not run faster after being compiled, mainly because few intermediate objects are likely being created, which is a feature of NumPy.
```
We should not expect a speedup after compiling calls to external libraries (for example, packages that handle databases). It is also unlikely that significant gains will be obtained in programs with a heavy I/O load.
In general, it is unlikely that your compiled code will run faster than well-written C code, and it is also unlikely to run slower. It is quite possible for the C code generated from Python via Cython to reach the speed of code written directly in C, unless the C programmer has deep knowledge of how to tune the code to the architecture of the machine on which it runs.
Example using a setup.py file | import math
import time
from pytest import approx
from scipy.integrate import quad
from IPython.display import HTML, display | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
For this case we need three files:
1. The code to be compiled, in a file with the .pyx extension (written in Python).
```{admonition} Observation
:class: tip
The .pyx extension is used by the Pyrex language.
```
2. A setup.py file containing the instructions to call Cython, which takes care of creating the compiled module.
3. The code written in Python that will import the compiled module.
The .pyx file: | %%file Rcf_cython.pyx
def Rcf(f,a,b,n): #Rcf: composite rectangle (midpoint) rule for f
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (float): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
The setup.py file containing the build instructions: | %%file setup.py
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules = cythonize("Rcf_cython.pyx",
compiler_directives={'language_level' : 3})
) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Compile from the command line: | %%bash
python3 setup.py build_ext --inplace | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Import the compiled module and run it: | f=lambda x: math.exp(-x**2) #using math library
n = 10**7
a = 0
b = 1
import Rcf_cython
start_time = time.time()
res = Rcf_cython.Rcf(f, a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf took",secs,"seconds" )
obj, err = quad(f, a, b)
print(res == approx(obj)) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
The %%cython magic command
```{margin}
See extensions-bundled-with-ipython for extensions that used to be included in IPython.
```
This command is included when Cython is installed. When executed, it creates the .pyx file, compiles it with setup.py, and imports the module into the notebook. | %load_ext Cython
%%cython
def Rcf(f,a,b,n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (float): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
nodes = [a+(i+1/2)*h_hat for i in range(n)]
sum_res = 0
for node in nodes:
sum_res = sum_res+f(node)
return h_hat*sum_res
start_time = time.time()
res = Rcf(f, a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf took",secs,"seconds" )
obj, err = quad(f, a, b)
print(res == approx(obj)) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Annotations for analyzing a block of code
Cython has an annotation option that generates an .html file in which each line can be expanded with a double click to show the generated C code. "More yellow" lines indicate more calls into the Python virtual machine, while whiter lines mean "more C code and less Python".
The goal is to remove as many yellow lines as possible, since they are costly in time. If such lines are inside loops they are even more costly. In the end we aim for code whose annotations are as white as possible.
```{admonition} Observation
:class: tip
Focus your attention on the yellow lines that are inside loops; do not spend time on yellow lines that sit outside loops and do not cause slow execution. Profiling helps identify which is which.
```
Example via the command line | %%bash
$HOME/.local/bin/cython --force -3 --annotate Rcf_cython.pyx | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
View the generated file: Rcf_cython.html
```{margin}
The correct link for the Rcf_cython.c file is Rcf_cython.c
``` | display(HTML("Rcf_cython.html")) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{admonition} Comments
In the code above, the statement that creates the nodes involves a loop and is "very yellow". If the code is profiled, it turns out to be a line where a good part of the total execution time is spent.
```
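A hedged pure-Python sketch (not from the original notebook) comparing the two ways of generating the nodes; both give the same numerical result, but the second avoids materializing an intermediate list:

```python
import math
import time

a, b, n = 0, 1, 10**5
h_hat = (b - a) / n
f = lambda x: math.exp(-x**2)

# Version 1: build the full node list first (the "yellow" line).
start = time.time()
nodes = [a + (i + 1/2) * h_hat for i in range(n)]
res_list = h_hat * sum(f(node) for node in nodes)
t_list = time.time() - start

# Version 2: compute each node inside the loop, no intermediate list.
start = time.time()
total = 0.0
for i in range(n):
    total += f(a + (i + 1/2) * h_hat)
res_loop = h_hat * total
t_loop = time.time() - start

print(t_list, t_loop)
```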
A first option is to create the nodes for the integration method inside the loop, removing the call to the list comprehension nodes=[a+(i+1/2)*h_hat for i in range(n)]: | %%file Rcf_2_cython.pyx
def Rcf(f,a,b,n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (float): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
sum_res = 0
for i in range(n):
x = a+(i+1/2)*h_hat
sum_res += f(x)
return h_hat*sum_res
%%bash
$HOME/.local/bin/cython --force -3 --annotate Rcf_2_cython.pyx | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{margin}
The correct link for the Rcf_2_cython.c file is Rcf_2_cython.c
``` | display(HTML("Rcf_2_cython.html")) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{admonition} Comment
In the code above, the statements inside the loop are "very yellow". They involve value types that do not change during each loop iteration. One option is to declare the types of the objects involved in the loop using the cdef syntax. See function_declarations, definition-of-def-cdef-and-cpdef-in-cython
``` | %%file Rcf_3_cython.pyx
def Rcf(f, double a, double b, unsigned int n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (float): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
cdef unsigned int i
cdef double x, sum_res, h_hat
h_hat = (b-a)/n
sum_res = 0
for i in range(n):
x = a+(i+1/2)*h_hat
sum_res += f(x)
return h_hat*sum_res
%%bash
$HOME/.local/bin/cython -3 --force --annotate Rcf_3_cython.pyx | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{margin}
The correct link for the Rcf_3_cython.c file is Rcf_3_cython.c
``` | display(HTML("Rcf_3_cython.html")) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{admonition} Comment
When types are defined this way, they are only understood by Cython, not by Python. Cython uses these types to convert the Python code into C code.
```
An option that sacrifices flexibility but reduces execution time is to call the function math.exp directly: | %%file Rcf_4_cython.pyx
import math
def Rcf(double a, double b, unsigned int n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
cdef unsigned int i
cdef double x, sum_res, h_hat
h_hat = (b-a)/n
sum_res = 0
for i in range(n):
x = a+(i+1/2)*h_hat
sum_res += math.exp(-x**2)
return h_hat*sum_res
%%bash
$HOME/.local/bin/cython -3 --force --annotate Rcf_4_cython.pyx | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{margin}
The correct link for the Rcf_4_cython.c file is Rcf_4_cython.c
``` | display(HTML("Rcf_4_cython.html")) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
We improve the time if we directly use the exp function from C's math library via Cython, see calling C functions.
(RCF5CYTHON)=
Rcf_5_cython.pyx | %%file Rcf_5_cython.pyx
from libc.math cimport exp as c_exp
cdef double f(double x) nogil:
return c_exp(-x**2)
def Rcf(double a, double b, unsigned int n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
cdef unsigned int i
cdef double x, sum_res, h_hat
h_hat = (b-a)/n
sum_res = 0
for i in range(n):
x = a+(i+1/2)*h_hat
sum_res += f(x)
return h_hat*sum_res
%%bash
$HOME/.local/bin/cython -3 --force --annotate Rcf_5_cython.pyx | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{margin}
The correct link for the Rcf_5_cython.c file is Rcf_5_cython.c
``` | display(HTML("Rcf_5_cython.html")) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{admonition} Comment
In code optimization there is a tradeoff between flexibility, readability, and fast execution.
``` | %%file setup_2.py
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules = cythonize("Rcf_2_cython.pyx",
compiler_directives={'language_level' : 3})
) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Compile from the command line: | %%bash
python3 setup_2.py build_ext --inplace
%%file setup_3.py
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules = cythonize("Rcf_3_cython.pyx",
compiler_directives={'language_level' : 3})
) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Compile from the command line: | %%bash
python3 setup_3.py build_ext --inplace
%%file setup_4.py
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules = cythonize("Rcf_4_cython.pyx",
compiler_directives={'language_level' : 3})
) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Compile from the command line: | %%bash
python3 setup_4.py build_ext --inplace
%%file setup_5.py
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules = cythonize("Rcf_5_cython.pyx",
compiler_directives={'language_level' : 3})
) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Compile from the command line: | %%bash
python3 setup_5.py build_ext --inplace | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Import the compiled modules: | import Rcf_2_cython, Rcf_3_cython, Rcf_4_cython, Rcf_5_cython
start_time = time.time()
res_2 = Rcf_2_cython.Rcf(f, a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf_2 took",secs,"seconds" ) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
We verify that after the code optimization we are still solving the problem correctly: | print(res_2 == approx(obj))
start_time = time.time()
res_3 = Rcf_3_cython.Rcf(f, a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf_3 took",secs,"seconds" )
print(res_3 == approx(obj))
start_time = time.time()
res_4 = Rcf_4_cython.Rcf(a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf_4 took",secs,"seconds" )
print(res_4 == approx(obj))
start_time = time.time()
res_5 = Rcf_5_cython.Rcf(a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf_5 took",secs,"seconds" ) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
We verify that after the code optimization we are still solving the problem correctly: | print(res_5 == approx(obj))
Example implementation with NumPy
We compare with an implementation using NumPy and vectorization: | import numpy as np
f_np = lambda x: np.exp(-x**2) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
(RCFNUMPY)=
Rcf_numpy | def Rcf_numpy(f,a,b,n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (float): function expression of integrand.
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
h_hat = (b-a)/n
aux_vec = np.linspace(a, b, n+1)
nodes = (aux_vec[:-1]+aux_vec[1:])/2
return h_hat*np.sum(f(nodes))
start_time = time.time()
res_numpy = Rcf_numpy(f_np, a, b,n)
end_time = time.time()
secs = end_time-start_time
print("Rcf_numpy took",secs,"seconds" )
print(res_numpy == approx(obj)) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{admonition} Comments
The NumPy implementation turns out to be the second fastest, mainly because of the use of contiguous memory blocks to store the values and because of vectorization. However, it requires knowledge of that package's functions; for this example we used linspace and the ability to perform operations in vectorized form to create the nodes and evaluate the function. It may happen that for a given problem we cannot use a NumPy function, or we lack the ingenuity to express an operation in vectorized form. In that case Cython is an option worth considering.
Cython provides memoryviews for low-level access to memory, similar to what NumPy arrays provide, for cases that require arrays more general than NumPy's (for example C or Cython arrays, see Cython arrays).
```
```{admonition} Observation
:class: tip
Compare the NumPy implementation with the use of lists for the nodes. Recall that Python lists store locations where the values can be found, not the values themselves, while NumPy arrays store primitive value types. Lists suffer data fragmentation, which causes memory fragmentation and therefore a greater impact of the Von Neumann bottleneck. In addition, storing high-level object types in lists causes overhead compared to storing primitive value types in a NumPy array.
```
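The difference can be illustrated with the standard-library array module which, like a NumPy array, stores primitive values contiguously (an illustrative sketch, not from the original notebook):

```python
import array
import sys

n = 1000
py_list = [float(i) for i in range(n)]   # list of pointers to boxed float objects
c_array = array.array("d", range(n))     # contiguous block of C doubles

# The list container holds references; the float objects live scattered
# on the heap. The array holds the raw 8-byte doubles themselves.
print(sys.getsizeof(py_list), "bytes for the list container alone")
print(sys.getsizeof(c_array), "bytes for the array, values included")
print(c_array.itemsize, "bytes per double")
```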
Cython and OpenMP
OpenMP is an extension to the C language and an API for parallel computing on a shared-memory system, aka shared memory parallel programming with CPUs. It will be covered in more depth in the parallel computing note.
```{margin}
See global interpreter lock (GIL), global interpreter lock
```
In Cython, OpenMP is used through prange (parallel range). The GIL must also be disabled.
```{admonition} Observation
:class: tip
When the GIL is disabled in a section of code, only primitive types must be used; Python objects (for example lists) must not be manipulated in that section.
```
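To see why releasing the GIL matters, a hedged pure-Python sketch: threads computing a CPU-bound sum produce the correct result, but under the GIL only one thread executes Python bytecode at a time, so there is no speedup. Cython's nogil sections remove that restriction for code operating on primitive types:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    # CPU-bound work in pure Python bytecode: the GIL serializes it,
    # so four threads give a correct answer but no parallel speedup.
    s = 0
    for i in range(lo, hi):
        s += i * i
    return s

n = 100_000
bounds = [(k * n // 4, (k + 1) * n // 4) for k in range(4)]
with ThreadPoolExecutor(max_workers=4) as ex:
    total = sum(ex.map(lambda p: partial_sum(*p), bounds))
print(total)
```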
(RCF5CYTHONOPENMP)=
Rcf_5_cython_openmp | %%file Rcf_5_cython_openmp.pyx
from cython.parallel import prange
from libc.math cimport exp as c_exp
cdef double f(double x) nogil:
return c_exp(-x**2)
def Rcf(double a, double b, unsigned int n):
"""
Compute numerical approximation using rectangle or mid-point
method in an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
a (float): left point of interval.
b (float): right point of interval.
n (int): number of subintervals.
Returns:
sum_res (float): numerical approximation to integral
of f in the interval a,b
"""
cdef int i
cdef double x, sum_res, h_hat
h_hat = (b-a)/n
sum_res = 0
for i in prange(n, schedule="guided", nogil=True):
x = a+(i+1/2)*h_hat
sum_res += f(x)
return h_hat*sum_res | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
```{admonition} Comment
With prange different scheduling can be chosen. If schedule receives the value static, the work is divided equally among the cores, and threads that finish early remain without work, aka idle. With dynamic and guided the work is distributed dynamically at runtime, which is useful when the amount of work is variable; threads that finish early can receive more work.
``` | %%bash
$HOME/.local/bin/cython -3 --force Rcf_5_cython_openmp.pyx | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
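To make the scheduling idea concrete, a hedged pure-Python sketch of how a static schedule would partition the iteration space among threads (prange does this in the generated C code; this is only an illustration):

```python
def static_chunks(n, num_threads):
    # A static schedule hands each thread one contiguous block of
    # roughly n // num_threads iterations, fixed before the loop runs.
    base, extra = divmod(n, num_threads)
    chunks, start = [], 0
    for t in range(num_threads):
        size = base + (1 if t < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

for c in static_chunks(10, 4):
    print(list(c))
```

dynamic and guided schedules instead hand out (shrinking) chunks at runtime as threads become free.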
In the setup.py file we add the -fopenmp directive.
```{margin}
See Rcf_5_cython_openmp.c for the C implementation of the Rcf_5_cython_openmp.Rcf function.
``` | %%file setup_5_openmp.py
from setuptools import Extension, setup
from Cython.Build import cythonize
ext_modules = [Extension("Rcf_5_cython_openmp",
["Rcf_5_cython_openmp.pyx"],
extra_compile_args=["-fopenmp"],
extra_link_args=["-fopenmp"],
)
]
setup(ext_modules = cythonize(ext_modules)) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
Compile from the command line: | %%bash
python3 setup_5_openmp.py build_ext --inplace
import Rcf_5_cython_openmp
start_time = time.time()
res_5_openmp = Rcf_5_cython_openmp.Rcf(a, b, n)
end_time = time.time()
secs = end_time-start_time
print("Rcf_5_openmp took",secs,"seconds" ) | libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb | ITAM-DS/analisis-numerico-computo-cientifico | apache-2.0 |
We verify that after the code optimization we are still solving the problem correctly: | print(res_5_openmp == approx(obj))