Workload Execution and Functions Profiling Data Collection Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
def experiment(te): # Create an RTApp RAMP task rtapp = RTA(te.target, 'ramp', calibration=te.calibration()) rtapp.conf(kind='profile', params={ 'ramp' : Ramp( start_pct = 60, end_pct = 20, delta...
ipynb/examples/trace_analysis/TraceAnalysis_FunctionsProfiling.ipynb
joelagnel/lisa
apache-2.0
Parse Trace and Profiling Data
# Base folder where tests folder are located res_dir = te.res_dir logging.info('Content of the output folder %s', res_dir) !tree {res_dir} with open(os.path.join(res_dir, 'platform.json'), 'r') as fh: platform = json.load(fh) print json.dumps(platform, indent=4) logging.info('LITTLE cluster max capacity: %d', ...
Report Functions Profiling Data
# Get the DataFrame for the specified list of kernel functions df = trace.data_frame.functions_stats(['enqueue_task_fair', 'dequeue_task_fair']) df # Get the DataFrame for the single specified kernel function df = trace.data_frame.functions_stats('select_task_rq_fair') df
Plot Functions Profiling Data The only method of the FunctionsAnalysis class used for functions profiling is plotProfilingStats. This method plots functions profiling metrics for the specified kernel functions. For each specified metric a barplot is generated which reports the value of the metric when...
# Plot Average and Total execution time for the specified # list of kernel functions trace.analysis.functions.plotProfilingStats( functions = [ 'select_task_rq_fair', 'enqueue_task_fair', 'dequeue_task_fair' ], metrics = [ # Average completion time per CPU 'avg', ...
First, download members data (Evidenta membrilor.xlsx) from the official data source, and create a macro-enabled Excel file from the Google Sheet. Then write a simple macro to extract the cell comments from the Club column in order to get info about club Transfers. Follow the instructions here. Save the new file as Evi...
members=members_loader.get_members('../data/manual/Evidenta membrilor.xlsm')
kendo romania/scripts/.ipynb_checkpoints/cleanerÜold-checkpoint.ipynb
csaladenes/blog
mit
Members are loaded, but the data is a bit messy.
members.head(2) members_clean=members_loader.cleaner(members).reset_index(drop=False) members_clean.to_csv('../data/clean/members.csv')
2. Load and clean matches Matches are loaded from excel sheets in the /data folder, organized by year and competition. We are always looking for match list data: the cleaner and the more concentrated, the better. While this is not possible all the time, we have several demo import routines. These are stored in th...
matches={i:{} for i in range(1993,2019)} competitions={ 2018:['CR','CN','SL'], 2017:['CR','CN','SL'], 2016:['CR','CN','SL'], 2015:['CR','CN','SL'], 2014:['CR','CN','SL'], 2013:['CR','CN','SL'], 2012:['CR','CN'], 2011:['CR','CN'], 2010:['CR','CN'], 2009:['CR','CN'], 1998:['CR'...
2.1. Load matches
for year in competitions: for competition in competitions[year]: matches[year][competition]=matches_loader.get_matches(year,competition)
2.2. Standardize names Names in name_exceptions get replaced with their right hand side values before processing.
name_exceptions={'Atanasovski':'Atanasovski A. (MAC)', 'Dobrovicescu (SON)':'Dobrovicescu T. (SON)', 'Ianăș':'Ianăș F.', 'Crăciun (Tamang) Sujata':'Crăciun S.', 'Abe (Carțiș) Emilia':'Abe E.', 'Dinu (Ioniță) Claudia-Andreea':'Dinu A.',...
Names in name_equals get replaced with their right hand side values after processing.
name_equals={'Chirea M.':'Chirea A.', 'Ghinet C.':'Ghineț C.', 'Anghelescu A.':'Anghelescu M.', 'Domnița M.':'Domniță M.', 'Bejgu N.':'Beygu N.', 'Canceu A.':'Canceu Ad.', 'Dinu C.':'Dinu A.', 'Grapa D.':'Grapă D.', 'Cristea...
Names in name_doubles handle situations where the default name abbreviation might lead to duplicates.
name_doubles={ 'Cristea Cristina':'Cristea Cr.', 'Cristea Călin-Ștefan':'Cristea Că.', 'Sandu Marius-Cristian':'Sandu Mar.', 'Sandu Matei-Serban':'Sandu Mat.', 'Sandu Matei':'Sandu Mat.', 'Georgescu Andrei':'Georgescu An.', 'Georgescu Alexandra':'Georgescu Al.'...
Normalize Romanian characters and define a name cleaner function to get Name IDs. Name IDs are unique competitor names in the form: Surname, first letter of the given name. If the first letter of the name leads to a non-unique ID, the second letter is taken, and so forth, until a unique ID is found. It gets constructed as follows: 1. I...
letter_norm={'ţ':'ț','ş':'ș','Ş':'Ș'} def name_cleaner(name): name=str(name) if name in name_doubles: return name_doubles[name] else: for letter in letter_norm: name=name.replace(letter,letter_norm[letter]) if name in name_exceptions: name=name_exceptions[name...
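The Name ID scheme described above (surname plus as many leading letters of the given name as needed for uniqueness) can be sketched generically. This is a minimal illustration, not the notebook's own helper: `make_name_id` and the `taken` registry are hypothetical names.

```python
def make_name_id(surname, firstname, taken):
    """Build 'Surname X.' IDs, extending the first-name prefix until unique.

    `taken` maps already-issued IDs to the (surname, firstname) pair that owns them.
    """
    for n in range(1, len(firstname) + 1):
        candidate = '%s %s.' % (surname, firstname[:n])
        if candidate not in taken or taken[candidate] == (surname, firstname):
            taken[candidate] = (surname, firstname)
            return candidate
    # Fall back to the full given name if every prefix collides
    candidate = '%s %s.' % (surname, firstname)
    taken[candidate] = (surname, firstname)
    return candidate

registry = {}
print(make_name_id('Cristea', 'Cristina', registry))  # Cristea C.
print(make_name_id('Cristea', 'Calin', registry))     # Cristea Ca. (first letter collides)
```

The second call skips 'Cristea C.' because it already belongs to a different person, mirroring how name_doubles resolves duplicates above.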
Names equalling any string in redflags_names get thrown out of the final dataset. Names containing any string in redflags_names2 get thrown out of the final dataset.
redflags_names=['-','—','—',np.nan,'. ()','— ','- -.','- -. (-)','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','R','S', 'Kashi','Sankon','București','Victorii:','Sakura','Taiken','Ikada','Sonkei','CRK','Museido', 'Ichimon','Bushi Tokukai 1','Competitori – Shiai-sha','Ec...
Check that the name is not in the redflags. Ignore flagged entries.
def name_ok(name): name=str(name) if name=='nan': return False if name not in redflags_names: if np.array([i not in name for i in redflags_names2]).all(): return True return False
Process all names for standardization. Create 3 variables: 1. all_players: forward relationship: unclean name -> cleaned name 2. all_players_r: reverse relationship 3. all_players_unsorted: unique set of all names processed Process both competitor and shinpan names.
all_players={} all_players_r={} all_players_unsorted=set() for year in matches: for competition in matches[year]: for match in matches[year][competition]: for color in ['aka','shiro']: name=match[color]['name'] all_players_unsorted.add(name) if nam...
Link processed names to the names in members. The name_linker dictionary contains Name IDs (short names) as keys and sets of long names as values. Ideally, this set should contain only one element, so that the mapping is unique.
name_linker={} for i in members_clean.index: name=members_clean.loc[i]['name'] try: cname=name_cleaner(name) except: print(name) if cname not in name_linker:name_linker[cname]=set() name_linker[cname].add(name)
Do the opposite mapping in names_abbr: long->short. Create exceptions for duplicate names.
names_abbr={} for name in name_linker: if len(name_linker[name])>1: #only for dev to create exceptions for duplicate person names. print(name,name_linker[name]) for i in name_linker[name]: names_abbr[i]=name
Save club mappings by short name, by year.
names_abbr_list=[] name_abbr2long={} name_abbr2club={} for i in members_clean.index: name=members_clean.loc[i]['name'] club=members_clean.loc[i]['club'] year=members_clean.loc[i]['year'] names_abbr_list.append(names_abbr[name]) name_abbr2long[names_abbr[name]]=name if names_abbr[name] not in nam...
Add short names to members_clean.
members_clean['name_abbr']=names_abbr_list
Some names appear in the short form; we need to add them manually to the long list. We parse through all forms in which the name appears and choose the longest. We call this the inferred name.
for name in all_players: if name not in name_abbr2long: #infer using longest available name names={len(j):j for i in all_players[name] for j in all_players[name][i]['names']} if len(names)>0: inferred_name=names[max(names.keys())] if '(' in inferred_name: ...
Infer duplicates
def levenshteinDistance(s1, s2): if len(s1) > len(s2): s1, s2 = s2, s1 distances = range(len(s1) + 1) for i2, c2 in enumerate(s2): distances_ = [i2+1] for i1, c1 in enumerate(s1): if c1 == c2: distances_.append(distances[i1]) else: ...
2.3. Infer clubs Infer clubs from name if club is part of name in the competition. Club names in redflags_clubs get ignored. Clubs in club_equals get replaced after processing. The convention is to have 3 letter all-caps club names for Romanian clubs, 3 letter club names followed by a / and a two letter country code fo...
redflags_clubs=['','N/A','RO1','RO2'] club_equals=clubs_loader.club_equals
Attach clubs to all_players entries that have a club in their competition name data but whose club we don't already know from members.
for name in all_players: #if we dont already know the club for this year from the members register if name not in name_abbr2club: for year in all_players[name]: for name_form in all_players[name][year]['names']: if '(' in name_form: club=name_form.spli...
Normalize club names and long names.
for name in name_abbr2club: for year in name_abbr2club[name]: if name_abbr2club[name][year] in club_equals: name_abbr2club[name][year]=club_equals[name_abbr2club[name][year]] for name in name_abbr2long: name_abbr2long[name]=name_abbr2long[name].replace(' ',' ').strip()
If club still not found, fill the gaps between years. Forward fill first, then backward fill, if necessary.
manual_club_needed=set() for name in all_players: if name in name_abbr2club: years=np.sort(list(all_players[name].keys())) minyear1=min(years) maxyear1=max(years) minyear2=min(name_abbr2club[name].keys()) maxyear2=min(name_abbr2club[name].keys()) ...
We have extracted what was possible from the data. Now we do a save of short name to long name and club mappings (by year). We then edit this file manually, if necessary. 2.4. Manual club and long name overrides
manual_name_needed=set() #check if we dont have first name information, then flag for manual additions for name in name_abbr2long: names=name_abbr2long[name].split(' ') if len(names)<2: manual_name_needed.add(name) elif len(names[1])<3: manual_name_needed.add(name) manual_data_override=pd....
Extend with manual data
for i in df['long_name'].replace('',np.nan).dropna().index: name_abbr2long[i]=df.loc[i]['long_name'] all_players_r[name_abbr2long[i]]=i manual_club_needed=set() for name in all_players: years=np.sort(list(all_players[name].keys())) minyear=min(years) maxyear=max(years) for year in range(minyear...
Update and overwrite with club existence data 3. Update members Extend members data with data mined from matches Extend members with unregistered members. These are probably inactive now, or from abroad, and appeared in competition only that one year; we only register them as known to be active in that year. This is in contra...
unregistered_members=[] for name in all_players: if name not in set(members_clean['name_abbr'].values): years=np.sort(list(name_abbr2club[name].keys())) for year in range(min(years),max(years)+1): if year in all_players[name]: iyear=year else: ...
Extend 0 dan (mu dan) records down to the starting year.
members_mu_dan_extensions=[] members_by_name=members_updated.set_index(['name_abbr']) for year in matches: members_by_year=members_updated.set_index(['year']).loc[year] for competition in matches[year]: print(year,competition) for k in matches[year][competition]: aka=k['aka']['name']...
Update members
members_mu_dan_extensions=pd.concat(members_mu_dan_extensions) members_updated=pd.concat([members_updated,members_mu_dan_extensions]).reset_index(drop=True)
Prettify club names and IDs
clubs=[] pclubs=[] countries=[] for i in members_updated.index: club=members_updated.loc[i]['club'] country=members_updated.loc[i]['country'] year=members_updated.loc[i]['year'] club,country=clubs_loader.club_cleaner(club,country) club,pclub=clubs_loader.club_year(club,country,year) clubs.append...
Fix unknown genders
manual_mf_data_override=pd.read_excel('../data/manual/members_mf_manual.xlsx') manual_mf_data_needed=members_updated[(members_updated['gen']!='M')&(members_updated['gen']!='F')][['name_abbr','name']]\ .drop_duplicates() df=manual_mf_data_needed#.merge(manual_mf_data_override[['name_abbr','gen']],'outer').drop...
Update members with manual gender data.
members_updated=members_updated.reset_index(drop=True).drop_duplicates() gens=[] for i in members_updated.index: name=members_updated.loc[i]['name_abbr'] if name in list(df.index): gens.append(df.loc[name]) else: gens.append(members_updated.loc[i]['gen']) members_updated['gen']=gens
Save to /data/export.
members_updated.to_csv('../data/export/members.csv') clubs_updated=members_updated.groupby(['club','country','pretty_club','year'])[['name_abbr']].count() clubs_updated=clubs_updated.reset_index().set_index('club').join(clubs_loader.club_year_df['Oraș']) clubs_updated.to_csv('../data/export/clubs.csv')
4. Update matches Update and save cleaned match data
master_matches=[] for year in matches: members_by_year=members_updated.set_index(['year']).loc[year].drop_duplicates() for competition in matches[year]: print(year,competition) for k in matches[year][competition]: good=True match={'year':year,'competition':competition} ...
Clean up and save matches for display
data=pd.DataFrame(master_matches).reset_index(drop=True) save_utils.save(data)
Refactor the vnpy tick-data receiving code for testing
TICK_DB_NAME='Test' EMPTY_STRING = '' EMPTY_UNICODE = u'' EMPTY_INT = 0 EMPTY_FLOAT = 0.0 class DrTickData(object): """Tick数据""" #---------------------------------------------------------------------- def __init__(self): """Constructor""" self.vtSymbol = EMPTY_STRING # v...
vn.tutorial/performance/Performance of Receiving Tick Data.ipynb
Chemcy/vnpy
mit
Create a tick data record for testing
client=pymongo.MongoClient() data=client['VnTrader_Tick_Db']['rb1705'].find_one({}) del data['_id'] class InputTick: pass tick=InputTick() tick.__dict__.update(data) print tick.__dict__
Measure the performance of the original function
def profiling(count,func=None): if func==None: func=lambda: procecssTickEvent(tick) t0=gtime.time() for i in range(count): func() total_time=(gtime.time()-t0) return total_time*1000/count test_count=10000 original_nodb=profiling(test_count) client.drop_database(TICK_DB_NAME) original_db=pr...
Improved version When the original program saves futures data through the CTP interface, there are several problems: - Stray data received outside trading hours is not filtered out - The date field currently provided by the exchanges is inconsistent (some use the real date, others the trading day), so the computed datetime field is also wrong An improved version addressing these issues follows:
#Time ranges to filter out; note that call-auction ticks are filtered as well. invalid_sections=[(time(2,30,59),time(9,0,0)), (time(11,30,59),time(13,0,0)), (time(15,15,0),time(21,0,0))] #Ticks received while the local time is in this range are ignored, to guard against brokers occasionally re-pushing data. invalid_local_section=(time(5,0,0),time(8,30,0)) def procecssTickEvent(tick, insertDB=False): """Handle a market data push""" # 1. ...
Performance of saving to a text file
def insertData(db,collection,data): for key in data.__dict__: fout.write(str(data.__dict__[key])+',') fout=open('D:/test.txt','w') new_db_text=profiling(test_count,func=lambda: procecssTickEvent(tick,insertDB=True)) print 'New version with text-file save, per-call time: %.4f' % new_db_text fout.close()
Datetime conversion performance Note that in the version without database writes, the new version is actually faster than the old one; this is mainly due to the rewritten datetime conversion function. The three conversion methods below differ hugely in performance:
time_convert1=profiling(10000,lambda:parse('20161212 21:21:21.5')) time_convert2=profiling(10000,lambda:datetime.strptime('20161212 21:21:21.5', '%Y%m%d %H:%M:%S.%f')) def customized_parse(s): s=s.ljust(21,'0') return datetime(int(s[0:4]),int(s[4:6]),int(s[6:8]),int(s[9:11]), int(s[12:14]), int(s[15:17]), int(s...
Summary
import pandas as pd df=pd.DataFrame([{u'无数据写入':original_nodb,u'mongodb写入':original_db}, {u'无数据写入': new_nodb, u'mongodb写入': new_db, u'text文件写入':new_db_text} ],index=['原版','新版']) df
How to assign values to variables? Single assignment
a = 1
Lesson_01_variables_and_data_types.ipynb
Adamage/python-training
apache-2.0
Multiple assignment
a, b, c = 1, 2, 3 print a, b, c a = b = c = d = "The same string" print a, b, c, d
What is a reference? What is a value? You could ask: does Python use call-by-value, or call-by-reference? Neither of those, actually. Variables in Python are "names" that ALWAYS bind to some object, because almost everything in Python is an object, a complex type. So assigning a variable means binding this "name" to ...
type(my_age)
To be completely precise, let's look at creating two variables that store some values. To see where in memory an object lives, we can use the built-in function "id". To see the hex representation of that address, as you will usually see it, we can wrap the result in "hex".
some_person = "Andrew" person_age = 22 print some_person, type(some_person), hex(id(some_person)) print person_age, type(person_age), hex(id(person_age))
Now, let's change this name to something else.
some_person = "Jamie" person_age = 24 print some_person, type(some_person), hex(id(some_person)) print person_age, type(person_age), hex(id(person_age))
The important bit is that, even though we use the same variable "person_age", the memory address changed. The object holding integer '22' is still living somewhere on the process heap, but is no longer bound to any name, and will probably be deleted by the "Garbage Collector". The binding that exists now is from name ...
shared_list = [11,22] my_list = shared_list your_list = shared_list print shared_list, my_list, your_list
Now, when we modify the object that 'shared_list' is bound to, both of our other variables will see the change too!
shared_list.append(33) print shared_list, my_list, your_list
This can be very confusing later on, if you do not grasp this right now. Feel free to play around :) Data types What is a data type? It is a way of telling our computer, that we want to store a specific kind of information in a particular variable. This allows us to access tools and mechanisms that are allowed for that...
a = 111 print a, type(a) b = 111111111111111111111111111111111 print b, type(b)
float Floating decimal point numbers. Used usually for everything that is not an 'int'.
c = 11.33333 d = 11111.33 print c, type(c) print d, type(d)
complex Complex numbers. Advanced sorceries of mathematicians. In simple terms, numbers that have two components. Historically, they were named 'real' component (regular numbers) and 'imaginary' component - marked in Python using the 'j' letter.
c = 2 + 3j print c, type(c)
Numeric operations
# Addition print(1+1) # Multiplication print(2*2) # Division print(4/2) # Remainder of division print(5%2) # Power print(2**4)
Strings Represents text, or to be more specific, sequences of 'Unicode' characters. To let Python know we are using strings, put them in quotes, either single, or double.
a = "Something" b = 'Something else' print type(a), type(b)
Even though strings are not numbers, you can do a lot of operations on them using the usual operators.
name = 'Adam' print name + name print name * 3
Actually, strings are sequences of characters. We will explore lists in just a moment, but I want you to become familiar with a new notation. It is based on the order of the sequence. When I say, "Give me the second character of this string", I can write it as such:
print 'Second character is: ' + name[1]
Since we are counting from 0, the second character has index = 1. Now, say I want characters from second, to fourth.
print 'From second to fourth: ' + name[1:4] print 'The last character (or first counting from the end) is: ' + name[-1] print 'All characters, but skip every second: ' + name[0:4:2]
These operations are called 'slicing'. We can also find substrings within larger strings. The result is the index at which the substring first occurs.
some_string = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAxxAAAAAAAAAAAAAAAAAAAA" substring = "xx" location = some_string.find(substring) print("Lets see what we found:") print(some_string[location:location+len(substring)])
We can also replace substrings in a bigger string. Very convenient. But more complex replacements or searches are done using regular expressions, which we will cover later.
some_string = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAxxAAAAAAAAAAAAAAAAAAAA" substring = "xx" print(some_string.replace( substring , "___REPLACED___"))
Boolean It represents the True and False values. Variables of this type can only be True or False. It is useful to know that in Python we can check any variable for truthiness, even lists! We use the bool() function.
a = True b = False print("Is a equal to b ?") print(a==b) print("Logical AND") print(a and b) print("Logical OR") print(a or b) print("Logical value of True") print( bool(a) ) print("Logical value of an empty list") print( bool([]) ) print("Logical value of an empty string") print( bool("") ) print("Logical valu...
List Prepare to use this data type A LOT. Lists can store any objects and have as many elements as you like. The most important thing about lists is that their elements are ordered. You can create a list by making an empty list, converting something else to a list, or defining the elements of the list right there, when yo...
empty_list = [] list_from_something_else = list('I feel like Im going to explode') list_elements_defined_when_list_is_created = [1, 2, 3, 4] print empty_list print list_from_something_else print list_elements_defined_when_list_is_created
Selecting from lists
l = ["a", "b", "c", "d", "e"] print l[0] print l[-1] print l[1:3]
Adding and removing from a list
l = [] l.append(1) print l l[0] = 222 print l l.remove(1) print l l = [1,2,3,3,4,5,3,2,3,2] # Make a new list from a part of that list new = l[4:7] print new
Iterating over a list But lists are not only used to hold some sequences! You can iterate over a list. This means no more, no less, than doing something for each of the elements in a given range, or for all of them. We will cover the so-called 'for' loop in later lessons, but I guess you can easily imagine what this min...
# Do something for all of elements. for element in [1, 2, 3]: print element + 20 # Do something for numbers coming from a range of numbers. for number in range(0,3): print number + 20 # Do something for all of elements, but written in a short way. some_list = ['a', 'b', 'c'] print [element*2 for element in so...
Even though the short notation is a more advanced topic, it is very elegant and 'pythonic'. This way of writing down the process of iteration is called a 'list comprehension'. Tuple A tuple is a simple data structure - it behaves pretty much like a list, except for one fact - you cannot change the elements of a tuple after i...
some_tuple = (1,3,4) print some_tuple print type(some_tuple) print len(some_tuple) print some_tuple[0] print some_tuple[-1] print some_tuple[1:2] other_tuple = 1, 2, 3 print other_tuple print type(other_tuple) # This will cause an error! You can not modify a tuple. some_tuple[1] = 22
Dictionary This data structure is very useful. In essence, it stores pairs of values, first of which is always a "key", a unique identifier, and the "value", which is the connected object. A dictionary performs a mapping between keys and values. Because the key is always unique (has to be, we will find out in a minute)...
empty_dictionary = {} print empty_dictionary print type(empty_dictionary) dictionary_from_direct_definition = {"key1": 1, "key2": 33} print dictionary_from_direct_definition # Let's create a dictionary from a list of tuples dictionary_from_a_collection = dict([("a", 1), ("b", 2)]) print dictionary_from_a_collection ...
Using dictionaries Add key-value pairs
d = {} d["a"] = 1 d["bs"] = 22 d["ddddd"] = 31 print d d.update({"b": 2, "c": 3}) print d
Remove items
del d["b"] print d d.pop("c") print d
Inspect a dictionary
# How many keys? print d.keys() print len(d) print len(d.keys()) # How many values? print d.values() print len(d.values())
Iterate over dictionary
for key, value in d.items(): print key, value
Example of looking for a specific thing in a list, and in a dictionary:
l = ["r", "p", "s", "t"] d = {a: a for a in l} # Find "t" in list. for letter in l: if letter == "t": print "Found it!" else: print "Not yet!" # Find "t" in dictionary keys. print "In dictionary - found it! " + d["t"]
Sets A set behaves pretty much like a mixture of a dictionary and a list. It has two features: - it only keeps unique values - it does not preserve the order of things, just like a dictionary
some_sequence = [1,1,1,1,2,2,2,3,3,3] some_set = set(some_sequence) print some_set some_string = "What's going ooooon?" another_set = set(some_string) print another_set some_dictionary = {"a": 2, "b": 2} print some_dictionary yet_another_set = set(some_dictionary) print yet_another_set print set(some_dictionary.va...
Custom Factors When we first looked at factors, we explored the set of built-in factors. Frequently, a desired computation isn't included as a built-in factor. One of the most powerful features of the Pipeline API is that it allows us to define our own custom factors. When a desired computation doesn't exist as a built...
from quantopian.pipeline import CustomFactor import numpy
Notebooks/quantopian_research_public/tutorials/pipeline/pipeline_tutorial_lesson_10.ipynb
d00d/quantNotebooks
unlicense
Next, let's define our custom factor to calculate the standard deviation over a trailing window using numpy.nanstd:
class StdDev(CustomFactor): def compute(self, today, asset_ids, out, values): # Calculates the column-wise standard deviation, ignoring NaNs out[:] = numpy.nanstd(values, axis=0)
Finally, let's instantiate our factor in make_pipeline():
def make_pipeline(): std_dev = StdDev(inputs=[USEquityPricing.close], window_length=5) return Pipeline( columns={ 'std_dev': std_dev } )
When this pipeline is run, StdDev.compute() will be called every day with data as follows: - values: An M x N numpy array, where M is 20 (window_length), and N is ~8000 (the number of securities in our database on the day in question). - out: An empty array of length N (~8000). In this example, the job of compute is to...
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05') result
Default Inputs When writing a custom factor, we can set default inputs and window_length in our CustomFactor subclass. For example, let's define the TenDayMeanDifference custom factor to compute the mean difference between two data columns over a trailing window using numpy.nanmean. Let's set the default inputs to [USE...
class TenDayMeanDifference(CustomFactor): # Default inputs. inputs = [USEquityPricing.close, USEquityPricing.open] window_length = 10 def compute(self, today, asset_ids, out, close, open): # Calculates the column-wise mean difference, ignoring NaNs out[:] = numpy.nanmean(close - open, ax...
<i>Remember in this case that close and open are each 10 x ~8000 2D numpy arrays.</i> If we call TenDayMeanDifference without providing any arguments, it will use the defaults.
# Computes the 10-day mean difference between the daily open and close prices.
close_open_diff = TenDayMeanDifference()
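The two-input reduction can also be tried standalone with plain numpy (toy 3 x 2 windows standing in for the 10 x ~8000 arrays; not Quantopian code):

```python
import numpy as np

# Toy 3 x 2 windows: 3 days of close and open prices for 2 assets.
close = np.array([[10.0, 20.0], [11.0, 22.0], [12.0, np.nan]])
open_ = np.array([[9.0, 19.0], [10.0, 20.0], [11.0, 21.0]])

# Per-asset mean close-minus-open spread; the NaN row is ignored by nanmean.
out = np.empty(close.shape[1])
out[:] = np.nanmean(close - open_, axis=0)
print(out)
```

Subtracting the arrays element-wise propagates the NaN, and nanmean then averages only the observed differences in each column.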
The defaults can be manually overridden by specifying arguments in the constructor call.
# Computes the 10-day mean difference between the daily high and low prices.
high_low_diff = TenDayMeanDifference(inputs=[USEquityPricing.high, USEquityPricing.low])
Further Example

Let's take another example where we build a momentum custom factor and use it to create a filter. We will then use that filter as a screen for our pipeline. Let's start by defining a Momentum factor to be the division of the most recent close price by the close price from n days ago where n is the windo...
class Momentum(CustomFactor):
    # Default inputs
    inputs = [USEquityPricing.close]

    # Compute momentum
    def compute(self, today, assets, out, close):
        out[:] = close[-1] / close[0]
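The close[-1] / close[0] arithmetic can be checked on a toy window with plain numpy (not Quantopian code):

```python
import numpy as np

# A toy 10 x 2 close-price window: 10 days of prices for 2 assets.
# The first asset rose from 100 to 110; the second fell from 50 to 45.
close = np.array([[100.0, 50.0]] * 9 + [[110.0, 45.0]])

# Most recent close divided by the close from n days ago, per asset.
out = np.empty(close.shape[1])
out[:] = close[-1] / close[0]
print(out)  # momentum above 1 means the price rose over the window
```

A value above 1 therefore corresponds to positive momentum, which is what the filter below tests for.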
Now, let's instantiate our Momentum factor (twice) to create a 10-day momentum factor and a 20-day momentum factor. Let's also create a positive_momentum filter returning True for securities with both a positive 10-day momentum and a positive 20-day momentum.
ten_day_momentum = Momentum(window_length=10)
twenty_day_momentum = Momentum(window_length=20)

positive_momentum = ((ten_day_momentum > 1) & (twenty_day_momentum > 1))
Next, let's add our momentum factors and our positive_momentum filter to make_pipeline. Let's also pass positive_momentum as a screen to our pipeline.
def make_pipeline():
    ten_day_momentum = Momentum(window_length=10)
    twenty_day_momentum = Momentum(window_length=20)

    positive_momentum = ((ten_day_momentum > 1) & (twenty_day_momentum > 1))

    std_dev = StdDev(inputs=[USEquityPricing.close], window_length=5)

    return Pipeline(
        columns={
            ...
Running this pipeline outputs the standard deviation and each of our momentum computations for securities with positive 10-day and 20-day momentum.
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
Toy problem

Let us consider two 1D distributions $p_0$ and $p_1$ for which we want to approximate the ratio $r(x) = \frac{p_0(x)}{p_1(x)}$ of their densities. $p_1$ is defined as a mixture of two gaussians; $p_0$ is defined as a mixture of the same two gaussians + a bump.
from carl.distributions import Normal
from carl.distributions import Mixture

components = [
    Normal(mu=-2.0, sigma=0.75),  # c0
    Normal(mu=0.0, sigma=2.0),    # c1
    Normal(mu=1.0, sigma=0.5)     # c2 (bump)
]

bump_coefficient = 0.05
g = theano.shared(bump_coefficient)

p0 = Mixture(components=components, ...
examples/Likelihood ratios of mixtures of normals.ipynb
diana-hep/carl
bsd-3-clause
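Independently of carl, the two densities and the exact ratio $r(x)$ can be sketched with plain numpy. The component parameters are taken from the cell above; the mixture weights are an illustrative assumption (equal weights for $p_1$, and $p_0$'s weights renormalised so the bump gets weight $g$), since the exact weight expressions are truncated here:

```python
import numpy as np

g = 0.05  # bump coefficient (illustrative value from the text)

def npdf(x, mu, sigma):
    # Standalone normal density, so no external library is needed.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p1(x):
    # The two shared gaussians, mixed with equal (assumed) weights.
    return 0.5 * npdf(x, -2.0, 0.75) + 0.5 * npdf(x, 0.0, 2.0)

def p0(x):
    # Same two gaussians plus the bump; weights renormalised to sum to 1.
    return ((0.5 - g / 2) * npdf(x, -2.0, 0.75)
            + (0.5 - g / 2) * npdf(x, 0.0, 2.0)
            + g * npdf(x, 1.0, 0.5))

def r(x):
    return p0(x) / p1(x)

print(r(1.0))   # above 1 near the bump at x = 1
print(r(-4.0))  # slightly below 1 away from the bump
```

This is the exact ratio the calibrated classifiers below are trying to recover from samples alone.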
Note: for $p_0$, weights are all tied together through the Theano shared variable g. This means that changes to the value stored in g also automatically change the weight values and the resulting mixture. Next we generate an artificial observed dataset X_true.
X_true = p0.rvs(5000, random_state=777)

reals = np.linspace(-5, 5, num=1000)
plt.plot(reals, p0.pdf(reals.reshape(-1, 1)), label=r"$p(x|\gamma=0.05)$", color="b")
plt.plot(reals, p1.pdf(reals.reshape(-1, 1)), label=r"$p(x|\gamma=0)$", color="r")
plt.hist(X_true[:, 0], bins=100, normed=True, label="data", alpha=0.2, co...
Density ratio estimation

The density ratio $r(x)$ can be approximated using calibrated classifiers, either directly by learning to classify $x \sim p_0$ from $x \sim p_1$, calibrating the resulting classifier, or by decomposing the ratio of the two mixtures as pairs of simpler density ratios and calibrating each corres...
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neural_network import MLPRegressor

from carl.ratios import ClassifierRatio
from carl.ratios import DecomposedRatio
from carl.learning import CalibratedClassifierCV

n_samples = 200000

clf = MLPRegressor(tol=1e-05, activation="logistic", ...
Note: CalibratedClassifierRatio takes three arguments for controlling its execution: base_estimator specifying the classifier to be used (note commented ExtraTreesRegressor), calibration specifying the calibration algorithm ("kde", "histogram", or a user-defined distribution-like object), cv specifying how to allocate...
plt.plot(reals, -p0.nll(reals.reshape(-1, 1)) + p1.nll(reals.reshape(-1, 1)), label="Exact ratio")
plt.plot(reals, cc_none.predict(reals.reshape(-1, 1), log=True), label="No calibration")
plt.plot(reals, cc_direct.predict(reals.reshape(-1, 1), log=True), label="Calibration")
plt.plot(reals, cc_decompo...
Below is an alternative plot (one that also works in higher dimensions when the true likelihood is known) to check whether the uncalibrated classifier output is monotonically related to the true likelihood ratio.
plt.scatter(-p0.nll(reals.reshape(-1, 1)) + p1.nll(reals.reshape(-1, 1)),
            cc_none.classifier_.predict_proba(reals.reshape(-1, 1))[:, 0],
            alpha=0.5)
plt.xlabel("r(x)")
plt.ylabel("s(x)")
plt.show()
Now we inspect the distribution of the exact $\log {r}(x)$ and approximate $\log \hat{r}(x)$.
g.set_value(bump_coefficient)
X0 = p0.rvs(200000)

plt.hist(-p0.nll(X0) + p1.nll(X0), bins=100, histtype="step", label="Exact", normed=1)
plt.hist(cc_decomposed.predict(X0, log=True), bins=100, histtype="step", label="Approx.", normed=1)
plt.yscale("log")
plt.legend()
#plt.savefig("fig1e.pdf")
plt.show()
Using density ratios for maximum likelihood fit

Next let us construct the log-likelihood curve for the artificial dataset.
def nll_true(theta, X):
    g.set_value(theta[0])
    return (p0.nll(X) - p1.nll(X)).sum()

def nll_approx(theta, X):
    g.set_value(theta[0])
    return -np.sum(cc_decomposed.predict(X, log=True))

g_scan = np.linspace(0.0, 2 * bump_coefficient, 50)
nll_true_scan = np.array([nll_true([t], X_true) for t in g_scan])
nl...
A nice approximation of the exact likelihood.

Ensemble tests

Now let us perform an ensemble test with 1000 repeated experiments. We will use this to check the bias of the maximum likelihood estimator and the asymptotic distribution of $-2\log \Lambda(\gamma)$ (i.e. Wilks' theorem).
from sklearn.utils import check_random_state
from scipy.optimize import minimize

n_trials = 1000

true_mles = []
true_nll = []
approx_mles = []
approx_nll = []

for i in range(n_trials):
    # Generate new data
    g.set_value(bump_coefficient)
    X_true = p0.rvs(5000, random_state=i)

    # True MLE
    ...
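The asymptotic claim being checked here can be illustrated on a much simpler toy model (plain numpy, unrelated to carl): for a Gaussian mean with known unit variance, $-2\log\Lambda$ at the true parameter reduces to $n\bar{x}^2$ and should follow a $\chi^2(1)$ distribution, whose mean is 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n = 2000, 100

stats = []
for _ in range(n_trials):
    x = rng.normal(0.0, 1.0, size=n)  # data generated at the true mean 0
    xbar = x.mean()                   # MLE of the mean
    # -2 log Lambda for H0: mu = 0 (unit variance known) is n * xbar^2
    stats.append(n * xbar ** 2)

stats = np.array(stats)
print(stats.mean())  # close to 1, the mean of a chi2(1) distribution
```

The ensemble test above does the analogous check for the bump coefficient, with the exact and approximated likelihoods in place of the closed-form Gaussian one.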
Text

Plain text:
text = """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam urna libero, dictum a egestas non, placerat vel neque. In imperdiet iaculis fermentum. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Cras augue tortor, tristique vitae varius nec, dictum eu lectus. Pell...
galata/test/galata/notebooks/simple_test.ipynb
jupyter/jupyterlab
bsd-3-clause
Text as output:
text
Standard error:
import sys; print('this is stderr', file=sys.stderr)
HTML
from IPython.display import HTML, display

div = HTML('<div style="width:100px;height:100px;background:grey;" />')
div

for i in range(3):
    print(7**10)
    display(div)
Markdown
from IPython.display import Markdown, display

md = Markdown("""
### Subtitle

This is some *markdown* text with math $F=ma$.
""")
md

display(md)
LaTeX Examples

LaTeX in a markdown cell:

\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t}...
from IPython.display import Latex

math = Latex("$F=ma$")
math

maxwells = Latex(r"""
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}...
SVG
from IPython.display import SVG, display

svg_source = """
<svg width="400" height="110">
  <rect width="300" height="100" style="fill:#E0E0E0;" />
</svg>
"""
svg = SVG(svg_source)
svg

for i in range(3):
    print(10**i)
    display(svg)
Declaring elements in a function

If we write a function that accepts one or more parameters and constructs an element, we can build plots that do things like:
- Loading data from disk as needed
- Querying data from an API
- Calculating data from a mathematical function
- Generating data from a simulation

As a basic example, ...
def fm_modulation(f_carrier=110, f_mod=110, mod_index=1, length=0.1, sampleRate=3000):
    x = np.arange(0, length, 1.0/sampleRate)
    y = np.sin(2*np.pi*f_carrier*x + mod_index*np.sin(2*np.pi*f_mod*x))
    return hv.Curve((x, y), kdims=['Time'], vdims=['Amplitude'])
notebooks/03-exploration-with-containers.ipynb
ioam/scipy-2017-holoviews-tutorial
bsd-3-clause
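The sampling arithmetic inside fm_modulation can be checked with plain numpy, without HoloViews: at the default parameters a 0.1 s window sampled at 3000 Hz gives roughly 300 samples, and the FM signal stays inside the sine envelope.

```python
import numpy as np

# Same defaults as fm_modulation above, computed without hv.Curve.
f_carrier, f_mod, mod_index, length, sampleRate = 110, 110, 1, 0.1, 3000

x = np.arange(0, length, 1.0 / sampleRate)
y = np.sin(2 * np.pi * f_carrier * x + mod_index * np.sin(2 * np.pi * f_mod * x))

print(len(x))  # about length * sampleRate = 300 samples
print(y.min() >= -1 and y.max() <= 1)  # amplitude bounded by the sine envelope
```

Passing these arrays to hv.Curve, as the function does, simply wraps them with the Time/Amplitude dimension labels for plotting.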