Fitting the LDA model
%%time
lda = models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, passes=10)
lda.save('newsgroups_50.model')
notebooks/Gensim Newsgroup.ipynb
codingafuture/pyLDAvis
bsd-3-clause
Visualizing the model with pyLDAvis
Okay, the moment we have all been waiting for is finally here! You'll notice in the visualization that we have a few junk topics that would probably disappear after better preprocessing of the corpus. This is left as an exercise for the reader. :)
import pyLDAvis.gensim as gensimvis
import pyLDAvis

vis_data = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.display(vis_data)
Now we'll use our model to generate a single prediction. We're not going to pick an output character from it yet; we simply want to inspect the output array.
# Generate characters
x = np.reshape(pattern, (1, len(pattern), 1))
x = x / float(n_vocab)
prediction = model.predict(x, verbose=0)
print(prediction)
#3 - Improving text generation/3.1 - Randomizing our prediction.ipynb
AlexGascon/playing-with-keras
apache-2.0
As we can see, the array contains numbers between 0 and 1; each one represents the probability that the character at that position is the correct one to output. However, this isn't a very clear representation, so let's look at it graphically.
# Creating the indices array
indices = np.arange(len(prediction[0]))

# Creating the plot
fig, ax = plt.subplots()
preds = ax.bar(indices, prediction[0])

# Adding some text for labels and titles
ax.set_xlabel('Character')
ax.set_ylabel('Probability')
ax.set_title('Probability of each character of being the correct output (according to our model)')
ax.set_xticks(indices)
ax.set_xticklabels((c for c in chars if ord(c) < 0x81))  # We can't use non-ASCII chars as labels
ax.set_ylim(top=1.0)
plt.show()
As we can see, our model seems quite convinced that the correct character to output is 'i' (and, looking at the seed, I can tell you that it's probably correct: "arremetió" is a real word in Spanish, and it would make sense in that context). However, we cannot simply discard the other characters: although they may seem improbable, there may be cases where an improbable option is the correct one (or, what happens more often, the decision is not so clear). Therefore, as we've said, we're going to choose the output randomly, using this distribution as the PDF. We'll generate a new array with the same length as the output one that contains the cumulative probability of the characters, i.e. each element will contain the probability of the correct output character being at its position or a previous one. The result will look similar to the following:
prob_cum = np.cumsum(prediction[0])

# Creating the indices array
indices = np.arange(len(prob_cum))

# Creating the plot
fig, ax = plt.subplots()
preds = ax.bar(indices, prob_cum)

# Adding some text for labels and titles
ax.set_xlabel('Character')
ax.set_ylabel('Probability')
ax.set_title('Cumulative output probability')
ax.set_xticks(indices)
ax.set_xticklabels((c for c in chars if ord(c) < 0x81))  # We can't use non-ASCII chars as labels
plt.show()
As you can see, the final value is 1, because it is certain that the output will be one of the characters in the array. To use this as the PDF, we'll generate a random number between 0 and 1 and choose the first element of the array that is greater than it. As you can imagine, the char that will appear most often is 'i', because many numbers between 0 and 1 are lower than prob_cum['i'] but greater than prob_cum['h']. However, we won't always output 'i', so our output will have some random component while keeping a plausible distribution.

3.1.1. Testing the improvement
Now it's time to see whether our improvement really is an improvement: we'll implement it in our predictions. As the model and the libraries are already loaded we don't need to repeat that step, so we'll start directly by loading the weights and picking a random seed.
# Generate characters
for i in range(500):
    x = np.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)

    # Choosing the character randomly
    prob_cum = np.cumsum(prediction[0])
    rand_ind = np.random.rand()
    for j in range(len(prob_cum)):
        if (rand_ind <= prob_cum[j]) and ((j == 0) or (rand_ind > prob_cum[j-1])):
            index = j
            break

    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    result_str += result
    pattern.append(index)
    pattern = pattern[1:len(pattern)]

print("\nDone.")
print(seed + result_str)
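The manual cumulative-scan loop above can be written more compactly with NumPy. This is an alternative sketch, not the notebook's original code; the toy `probs` vector stands in for `prediction[0]`:

```python
import numpy as np

# A toy probability vector standing in for prediction[0]
probs = np.array([0.1, 0.6, 0.2, 0.1])

# np.random.choice samples an index directly from the distribution
index = int(np.random.choice(len(probs), p=probs))

# Equivalent to the manual loop: find the first cumulative sum >= the random draw
prob_cum = np.cumsum(probs)
index2 = int(np.searchsorted(prob_cum, np.random.rand()))

print(index, index2)
```

Both approaches sample from the same distribution; `np.random.choice` just hides the cumulative-sum bookkeeping.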
We can store it in a variable and print it out. It doesn't sound confusing at all!
x = 42
print(x)
PythonQnA_1_vars_and_types.ipynb
michael-isaev/cse6040_qna
apache-2.0
Here, the variable x stores the number 42 as an integer. However, we can store the same number as a different type or within another data structure: as a float, a string, or part of a list or a tuple. Depending on the type of the variable, Python will print it slightly differently.
x_float = float(42)
x_scientific = 42e0
x_str = '42'
print('42 as a float', x_float)
print('42 as a float in scientific notation', x_scientific)
print('42 as a string', x_str)
So far it looks pretty normal. float adds a decimal point to the integer. Scientific notation is typically a float (serious scientists don't work with integers!). 42 as a string looks exactly like we expected and similar to the integer, but their behaviors are different. The difference in behavior isn't obvious until we make it part of a collection, for example a list.
x_list = [x, x_str, x_float, x_scientific]
print("All 42 in a list:", x_list)
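The different behavior the text alludes to also shows up with plain operators; this small check is an illustrative addition, not part of the original notebook:

```python
x = 42
x_str = '42'

print(x + x)          # integer addition: 84
print(x_str + x_str)  # string concatenation: '4242'
print(x_str * 2)      # string repetition: '4242'
```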
Here is the thing: Python won't show you quotes when you print a string, but if you print a string inside another object, it encloses the string in single quotes. So each time you see these single quotes, you should understand that it's a string and not a number (at least for Python)! Let's see how:
x_tuple = tuple(x_list)
x_set = set(x_list)
x_dict = {x_str: x, x_tuple: x_list}
print("All 42 in a list:", x_list)
print("All 42 in a tuple:", x_tuple)
print("All 42 in a set:", x_set)
print("A dict of 42 in different flavors", x_dict)
Wow, now you should be extreeeeeemely watchful!
Look how the shape of brackets differs between a tuple and a list: lists use [brackets], whereas tuples use (parentheses).
Look how both sets and dicts use {braces}. That might create some confusion, but each element of a set is just an object, whereas in a dictionary you have key : value pairs separated by : (colon).
Although 42 as an integer and 42.0 as a floating point number are objects of different kinds, for sets they are equal, since they compare equal to the same value, exactly 42. '42' as a string, on the other hand, is an object of a different nature; it's not a number. That's why the set has only two objects: 42 as an integer and '42' as a string.
If you are confused, you can always use the type() function to figure out the type of the object you have:
print('42 as integer', x, "variable type is", type(x))
print('42 as a float', x_float, "variable type is", type(x_float))
print('42 as a float in scientific notation', x_scientific, "variable type is", type(x_scientific))
print('42 as a string', x_str, "variable type is", type(x_str))
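The set behavior described above (42 and 42.0 collapsing to one element while '42' survives) can be verified directly; this cell is an illustrative addition:

```python
x_list = [42, '42', float(42), 42e0]

# 42 == 42.0 and hash(42) == hash(42.0), so the set keeps only one of them;
# the string '42' compares unequal to both and survives as a second element
x_set = set(x_list)
print(x_set, len(x_set))  # two elements: 42 and '42'
```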
Using the type() function can be extremely useful during the debugging stage. However, quite often a simple print and a little bit of attention to what's printed is enough to figure out what's going on. Some objects print differently. For example, let's consider a frozenset, a built-in immutable implementation of the Python set.
x_frozenset = frozenset(x_list)
print("Here's a set of 42:\n", x_set)
print("Here's a frozenset of 42:\n", x_frozenset)
As you see, when we print a set and a frozenset, they look very different. A frozenset, like lots of other objects in Python, adds its type name when you print it. That makes it really hard to confuse a set and a frozenset! If you want to do something similar for your own class, you can do it rather easily in Python. You just need to add a special __str__ method to your custom class, which defines the string representation of your object.
class my_42:
    def __init__(self):
        self.n = 42
    def __str__(self):
        return 'Member of class my_42(' + str(self.n) + ')'

print('Just 42:', 42)
print('New class:', my_42())
Exercise. Now let's use our knowledge to practice and play with a function that takes a list and returns exactly the same list if every element of the list is a string. Otherwise, it returns a new list with all non-string elements converted to strings. To avoid confusion, the function also returns a flag variable showing whether the list has been modified. Try to add some print statements to investigate the types of the elements in the list, how the elements are printed out, and how the whole list looks before and after type conversion.
def list_converter(l):
    """
    l - input list
    Returns a list where all elements have been stringified,
    as well as a flag to indicate if the list has been modified
    """
    assert (type(l) == list)
    flag = False
    for el in l:
        # print the type of el
        if type(el) != str:
            flag = True
    if flag:
        new_list = []
        for el in l:
            # how would each element be printed out? what's the element type?
            new_list.append(str(el))
        # print how the new list looks
        return new_list, flag
    else:
        return l, flag

# `list_converter_test`: Test cell
l = ['4', 8, 15, '16', 23, 42]
l_true = ['4', '8', '15', '16', '23', '42']

new_l, flag = list_converter(l)
print("list_converter({}) -> {} [True: {}], new list flag is {}".format(l, new_l, l_true, flag))
assert new_l == l_true
assert flag

new_l, flag = list_converter(l_true)
print("list_converter({}) -> {} [True: {}], new list flag is {}".format(l, new_l, l_true, flag))
assert new_l == l_true
assert not flag
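As a follow-up, the exercise's function can be written more compactly with all() and a list comprehension. This rewrite is my own sketch, not part of the original exercise:

```python
def list_converter_short(l):
    """Return (list, flag): stringify non-string elements; flag says if anything changed."""
    assert isinstance(l, list)
    if all(isinstance(el, str) for el in l):
        return l, False                 # nothing to convert: return the same list
    return [str(el) for el in l], True  # convert everything to str in one pass

new_l, flag = list_converter_short(['4', 8, 15, '16', 23, 42])
print(new_l, flag)  # ['4', '8', '15', '16', '23', '42'] True
```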
Helper functions
def left_of_bracket(s):
    if '(' in s:
        needle = s.find('(')
        r = s[:needle-1].strip()
        return r
    else:
        return s

def state_abbreviation(state):
    spaces = state.count(' ')
    if spaces == 2:
        bits = state.split(' ')
        r = ''
        for b in bits:
            r = r + b[:1].upper()  # for each word in state grab the first letter
        return r
    elif 'Australia' in state:
        r = state[:1].upper() + 'A'
        return r
    elif state == 'Queensland':
        return 'QLD'
    elif state == 'Northern Territory':
        return 'NT'
    else:
        r = state[:3].upper()
        return r

# I keep forgetting the syntax for this so writing a wrapper
def dedup_df(df, keys, keep=False):
    # for a data frame, drop anything that's a duplicate
    # if you change keep to 'first', it'll keep the first row rather than none
    df_dedup = df.drop_duplicates(keys, keep)
    return df_dedup
referenda_1999_geocode_polling_places.ipynb
jaketclarke/australian_referenda
mit
Import polling places

import_polling_places(filepath)
* Takes a path to a polling place file
* Returns a tidy data frame
* Renames some columns
* Dedups on ['state','polling_place']
def import_polling_places(filepath):
    # read csv
    df_pp = pd.read_csv(filepath)

    # pick the columns I want to keep
    cols = [
        'State',
        'PollingPlaceNm',
        'PremisesNm',
        'PremisesAddress1',
        'PremisesAddress2',
        'PremisesAddress3',
        'PremisesSuburb',
        'PremisesStateAb',
        'PremisesPostCode',
        'Latitude',
        'Longitude'
    ]

    # filter for those
    df_pp = df_pp[cols]

    # create a polling place column missing the bracket
    lambda_polling_places = lambda x: left_of_bracket(x)
    df_pp['polling_place'] = df_pp['PollingPlaceNm'].apply(lambda_polling_places)

    # rename columns to make joining easier
    df_pp['premises'] = df_pp['PremisesNm']
    df_pp['postcode'] = df_pp['PremisesPostCode']

    # replace in the col headers list where I've modified/added the column
    cols = [c.replace('PollingPlaceNm', 'polling_place') for c in cols]
    cols = [c.replace('PremisesNm', 'premises') for c in cols]
    cols = [c.replace('PremisesPostCode', 'postcode') for c in cols]

    # reorder df
    df_pp = df_pp[cols]

    # dedup
    df_pp = df_pp.drop_duplicates()

    # make all headers lower case
    df_pp.columns = [x.lower() for x in df_pp.columns]

    return df_pp

filepath = 'federal_election_polling_places/pp_2007_election.csv'
test = import_polling_places(filepath)
display('Rows: ' + str(len(test.index)))
Import 1999 polling places
def import_1999_pp(filepath):
    df_pp_1999 = pd.read_csv(filepath)

    # add blank columns for match types and lat/lng
    df_pp_1999['match_source'] = np.nan
    df_pp_1999['match_type'] = np.nan
    df_pp_1999['latitude'] = np.nan
    df_pp_1999['longitude'] = np.nan

    # tell it to index on state and polling place
    df_pp_1999 = df_pp_1999.set_index(['state','polling_place'])

    return df_pp_1999

filepath = '1999_referenda_output/polling_places.csv'
df_pp_1999 = import_1999_pp(filepath)
display(df_pp_1999.head(3))
Matches

Pandas setting I need for the code below to behave: pandas generates warnings when you work with a data frame that's a copy of another. It thinks I might think I'm changing df_pp_1999 when I'm working with df_pp_1999_working. I'm turning this warning off because I'm doing this on purpose, so I can keep df_pp_1999 as a 'yet to be matched' file and update it with each subsequent working file.
pd.set_option('chained_assignment',None)
Functions

match_polling_places(df_pp_1999, df_pp, settings)

For the 1999 data frame, a given other polling place data frame, and a set of settings, run a merge, and return the rows that matched based on the join you specified.

E.g.:
match_polling_places(
    df_pp_1999,
    df_pp,
    dict(
        keys = ['state','premises','postcode'],
        match_source = '2007 Polling Places',
        match_type = 'Match 01 - state, premises, postcode'
    )
)

runs a join on state, premises, and postcode between df1 and df2,
keeps a defined set of columns from df1,
adds the columns match_source and match_type and sets their values,
replaces the latitude and longitude columns of df1 with those from df2,
and returns this data frame, deleting all rows from df1 that didn't match.
def match_polling_places(df1, df2, settings):
    # split up our meta field
    keys = settings['keys']
    match_source = settings['match_source']
    match_type = settings['match_type']

    # filter for those columns
    df_working = df1.reset_index()[[
        'state',
        'polling_place',
        'premises',
        'address',
        'suburb',
        'postcode',
        'wheelchair_access'
    ]]

    # the keys I want to keep from the second df in the join are the group_by keys, and also lat/lng
    cols_df2 = keys + ['latitude','longitude']

    # add cols for match type
    df_working['match_source'] = match_source
    df_working['match_type'] = match_type

    # run the join
    df_working = pd.merge(
        df_working,
        df2[cols_df2],
        on=keys,
        how='left'
    )

    # delete those which we didn't match
    df_working = df_working[~df_working['latitude'].isnull()]

    # dedup on the keys we matched on
    df_working = dedup_df(df_working, keys)

    return df_working

# test match_polling_places
filepath = '1999_referenda_output/polling_places.csv'
df1 = import_1999_pp(filepath)

filepath2 = 'federal_election_polling_places/pp_2007_election.csv'
df2 = import_polling_places(filepath2)

test = match_polling_places(
    df1,
    df2,
    dict(
        keys = ['state','premises','postcode'],
        match_source = '2007 Polling Places',
        match_type = 'Match 01 - state, premises, postcode'
    )
)

display(test.head(3))
match_unmatched_polling_places(df1, settings)

This is a wrapper function for match_polling_places. It only passes data that is NOT yet matched in df1 to the match function, so that we keep track of at what point in our order we matched the data frame (rather than overriding it each time it matches). This will matter as we do lower-quality matches at the bottom of the pile.
def match_unmatched_polling_places(df1, settings):
    # get polling place file from settings
    filepath = settings['pp_filepath']
    df2 = import_polling_places(filepath)

    # work out which rows we haven't yet matched
    df1_unmatched = df1[df1.match_source.isnull()]

    # run match for those
    df1_matches = match_polling_places(df1_unmatched, df2, settings)

    # dedup this file for combinations of state/polling_place (my unique key)
    keys = ['state','polling_place']
    df1_matches = dedup_df(df1_matches, keys)

    # check that worked by making it a key now
    df1_matches = df1_matches.set_index(keys)

    # update with matches
    df1.update(df1_matches)

    return df1
match_status(df1)

A function to tell me, for a given data frame, what the match status is.
def match_status(df1):
    # how many NaNs are in match_type?
    not_matched = len(df1[df1['match_type'].isnull()].index)

    # make a df for none
    none = pd.DataFrame(dict(
        match_type = 'Not yet matched',
        count = not_matched
    ), index=[0])

    if not_matched == len(df1.index):
        # if all values are not-matched
        return none
    else:
        df = pd.DataFrame(
            df1.groupby('match_type')['match_type'].count().reset_index(name='count')
        )
        # add the non-matched row
        df = df.append(none)
        return df
Match attempts

Match 1 - 2007 on premises name, state, and postcode
Other than schools that have moved, these places should be the same. And for schools that have moved, the postcode test should ensure it's not too far.
# first match attempt - set up file
filepath = '1999_referenda_output/polling_places.csv'
df_pp_1999 = import_1999_pp(filepath)

# double check none are somehow magically matched yet
print('before')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2007_election.csv',
    keys = ['state','premises','postcode'],
    match_source = '2007 Polling Places',
    match_type = 'Match 01 - state, premises, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))
Match 2 through 4 - 2010 through 2016 on premises name, state, and postcode
Other than schools that have moved, these places should be the same. And for schools that have moved, the postcode test should ensure it's not too far.
## 2
print('before 2')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2010_election.csv',
    keys = ['state','premises','postcode'],
    match_source = '2010 Polling Places',
    match_type = 'Match 02 - state, premises, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))

## 3
print('before 3')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2013_election.csv',
    keys = ['state','premises','postcode'],
    match_source = '2013 Polling Places',
    match_type = 'Match 03 - state, premises, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))

## 4
print('before 4')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2016_election.csv',
    keys = ['state','premises','postcode'],
    match_source = '2016 Polling Places',
    match_type = 'Match 04 - state, premises, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))
Match 5 - 2007 polling places on polling place name, state, and postcode
This will match to a polling place name in a different location, as long as it is in the same postcode. For the purposes of this analysis, this should be good enough.
print('before 5')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2007_election.csv',
    keys = ['state','polling_place','postcode'],
    match_source = '2007 Polling Places',
    match_type = 'Match 05 - state, polling_place, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))
Match 6-8 - 2010-2016 polling places on polling place name, state, and postcode
## 6
print('before 6')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2010_election.csv',
    keys = ['state','polling_place','postcode'],
    match_source = '2010 Polling Places',
    match_type = 'Match 06 - state, polling_place, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))

## 7
print('before 7')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2013_election.csv',
    keys = ['state','polling_place','postcode'],
    match_source = '2013 Polling Places',
    match_type = 'Match 07 - state, polling_place, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))

## 8
print('before 8')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2016_election.csv',
    keys = ['state','polling_place','postcode'],
    match_source = '2016 Polling Places',
    match_type = 'Match 08 - state, polling_place, postcode'
)

# run match
df_pp_1999 = match_unmatched_polling_places(df_pp_1999, settings)

print('after')
display(match_status(df_pp_1999))
Google geocoder
keys.json contains a Google Maps API key, so it's not in this notebook.
import json

def get_google_api_key():
    filepath = 'config/keys.json'
    with open(filepath) as data_file:
        data = json.load(data_file)
    key = data['google_maps']
    return key
Google geocode example
def geocode_address(address):
    key = get_google_api_key()
    componentRestrictions = {'country': 'AU'}
    gmaps = googlemaps.Client(key=key)
    # geocode_result = gmaps.geocode(address)
    geocode_result = gmaps.geocode(address, componentRestrictions)
    return geocode_result

# display(geocode_address('10/44 Lord St Richmond VIC 3121'))
display(geocode_address('Salt Creek Rd SALT CREEK 5275'))

def geocode_polling_place(row, match_source='google geocode'):
    address = row['address'] + ' ' + row['suburb'] + ' ' + str(row['postcode'])[:4]

    # geocode address
    geocode = geocode_address(address)

    # update the row
    if geocode:
        row['match_source'] = match_source
        row['match_type'] = geocode[0]['geometry']['location_type']
        row['latitude'] = geocode[0]['geometry']['location']['lat']
        row['longitude'] = geocode[0]['geometry']['location']['lng']
    else:
        row['match_source'] = 'google geocode'
        row['match_type'] = 'failed'

    row = pd.DataFrame(row, index=[0])
    return row

# test above function on a few rows
test = df_pp_1999.head(3)

geocode_test = pd.DataFrame()
for row in test.reset_index().to_dict('records')[:3]:
    row = geocode_polling_place(row, 'Match 09 - Google Geocode')
    if geocode_test.empty:
        geocode_test = row
    else:
        geocode_test = geocode_test.append(row)

geocode_test
Match unmatched so far by google geocoder
# geocode every row we haven't matched so far
unmatched_places = df_pp_1999[df_pp_1999.match_source.isnull()]

geocode_matches = pd.DataFrame()
for row in unmatched_places.reset_index().to_dict('records'):
    row = geocode_polling_place(row)
    if geocode_matches.empty:
        geocode_matches = row
    else:
        geocode_matches = geocode_matches.append(row)

geocode_matches.head(5)

# reorder geocode in same pattern as non-matches
# get all keys from that table
keys = df_pp_1999.reset_index().columns.values

# reorder by that
geocode_matches_ordered = geocode_matches[keys]

# add state indexes
keys = ['state','polling_place']
# df1_matches = dedup_df(df1_matches, keys)

# check that worked by making it a key now
geocode_matches_ordered = geocode_matches_ordered.set_index(keys)

# update with matches
df_pp_1999.update(geocode_matches_ordered)

# where are we at?
display(match_status(df_pp_1999))
Set the rest to the centroid of their suburb
filepath = 'from_abs/ssc_2016_aust_centroid.csv'
subtown_centroids = pd.read_csv(filepath)

# create a column for abbreviated states
lambda_states = lambda x: state_abbreviation(x)
subtown_centroids['state'] = subtown_centroids['STE_NAME16'].apply(lambda_states)

# strip the brackets out of suburb names, and make all caps, to match the 1999 file
lambda_suburbs = lambda x: left_of_bracket(x).upper()
subtown_centroids['suburb'] = subtown_centroids['SSC_NAME16'].apply(lambda_suburbs)

display(subtown_centroids.head(5))

print('before suburbs')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'from_abs/ssc_2016_aust_centroid.csv',
    keys = ['state','suburb'],
    match_source = 'ABS Suburb Centroids',
    match_type = 'Match 10 - suburb centroid'
)

# rows to try
## try not-matched rows, OR rows that the geocode failed on
df_to_match = df_pp_1999[df_pp_1999.match_source.isnull()]
df_to_match = df_to_match.append(df_pp_1999[df_pp_1999['match_type'] == 'failed'])

# get matches
df1_matches = match_polling_places(
    df_to_match,  # only run on empty rows
    subtown_centroids,
    settings
)

# dedup this file for combinations of state/polling_place (my unique key)
keys = ['state','polling_place']
# df1_matches = dedup_df(df1_matches, keys)

# check that worked by making it a key now
df1_matches = df1_matches.set_index(keys)

# update with matches
df_pp_1999.update(df1_matches)

print('after')
display(match_status(df_pp_1999))
How many are left now?
df_to_match = df_pp_1999[df_pp_1999.match_source.isnull()]
df_to_match = df_to_match.append(df_pp_1999[df_pp_1999['match_type'] == 'failed'])
display(df_to_match)
Hooray! WE HAVE A RESULT FOR EVERYWHERE
Let's write a CSV:
df_pp_1999.to_csv(
    '1999_referenda_output/polling_places_geocoded.csv',
    sep=','
)
Before we explore the pandas package, let's import it. By convention, we use pd to refer to pandas.
%matplotlib inline
import pandas as pd
import numpy as np
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Series

A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.
counts = pd.Series([223, 43, 53, 24, 43])
counts
type(counts)
If an index is not specified, a default sequence of integers is assigned as the index. We can access the values like an array:
counts[0]
counts[1:4]
You can get the array representation and the index object of the Series via its values and index attributes, respectively.
counts.values
counts.index
We can assign meaningful labels to the index, if they are available:
fruit = pd.Series([223, 43, 53, 24, 43],
                  index=['apple', 'orange', 'banana', 'pears', 'lemon'])
fruit
fruit.index
These labels can be used to refer to the values in the Series.
fruit['apple']
fruit[['apple', 'lemon']]
We can give both the array of values and the index meaningful labels themselves:
fruit.name = 'counts'
fruit.index.name = 'fruit'
fruit
Operations can be applied to a Series without losing the data structure. Use a boolean array to filter a Series:
fruit > 50
fruit[fruit > 50]
Critically, the labels are used to align data when used in operations with other Series objects.
fruit2 = pd.Series([11, 12, 13, 14, 15], index=fruit.index)
fruit2

fruit2 = fruit2.drop('apple')
fruit2

fruit2['grape'] = 18
fruit2

fruit3 = fruit + fruit2
fruit3
Contrast this with arrays, where arrays of the same length combine values element-wise; adding Series combines values with the same label in the resulting series. Notice that the missing values were propagated by addition.
fruit3.dropna()
fruit3
fruit3.isnull()
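If the NaNs produced by non-overlapping labels are unwanted, Series.add with fill_value treats a missing label as a default value instead. A small sketch with made-up numbers, as an illustrative addition:

```python
import pandas as pd

a = pd.Series([1, 2], index=['apple', 'orange'])
b = pd.Series([10, 20], index=['orange', 'grape'])

print(a + b)                   # 'apple' and 'grape' become NaN
print(a.add(b, fill_value=0))  # missing labels are treated as 0 instead
```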
DataFrame

A DataFrame is a tabular data structure, encapsulating multiple Series, like columns in a spreadsheet. Each column can be a different value type (numeric, string, boolean, etc.).
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
        'year': [2000, 2001, 2002, 2001, 2003],
        'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
df
len(df)        # Get the number of rows in the dataframe
df.shape       # Get the (rows, cols) of the dataframe
df.T
df.columns     # get the index of columns
df.index       # get the index of the rows
df.dtypes
df.describe()
There are three basic ways to access the data in the dataframe:
* use DataFrame[] to access data quickly
* use DataFrame.iloc[row, col], the integer-position-based selection method
* use DataFrame.loc[row, col], the label-based selection method
df
df['state']            # indexing by label
df[['state', 'year']]  # indexing by a list of labels
df[:2]                 # numpy-style indexing
df.iloc[0, 0]
df.iloc[0, :]
df.iloc[:, 1]
df.iloc[:2, 1:3]
df.loc[:, 'state']
df.loc[:, ['state', 'year']]
Add new column and delete column
df['debt'] = np.random.randn(len(df))
df['rain'] = np.abs(np.random.randn(len(df)))
df

df = df.drop('debt', axis=1)
df

row1 = pd.Series([4.5, 'Nevada', 2005, 2.56], index=df.columns)
df.append(row1, ignore_index=True)  # note: DataFrame.append was removed in pandas 2.0; use pd.concat there
df.drop([0, 1])
Data filtering
df['pop'] < 2
df
df.loc[df['pop'] < 2, 'pop'] = 2
df
df['year'] == 2001
(df['pop'] > 3) | (df['year'] == 2001)
df.loc[(df['pop'] > 3) | (df['year'] == 2001), 'pop'] = 3
df
Sorting index
df.sort_index(ascending=False)
df.sort_index(axis=1, ascending=False)
Summarizing and Computing Descriptive Statistics

Built-in functions calculate values over rows or columns.
df df.loc[:, ['pop', 'rain']].sum() df.loc[:,['pop', 'rain']].mean() df.loc[:, ['pop', 'rain']].var() df.loc[:, ['pop', 'rain']].cumsum()
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Apply functions to each column or row of a DataFrame
df df.loc[:, ['pop', 'rain']].apply(lambda x: x.max() - x.min()) # apply the function to each column
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Groupby and apply
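A minimal sketch of the groupby-apply pattern, on made-up data:

```python
import pandas as pd

toy = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1, 4, 10]})

# per-group range: max minus min within each group
rng_per_group = toy.groupby('g')['v'].apply(lambda s: s.max() - s.min())
print(rng_per_group.to_dict())
```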
df df.groupby(df['state']).mean() df.groupby(df['state'])[['pop', 'rain']].apply(lambda x: x.max() - x.min()) grouped = df.groupby(df['state']) group_list = [] for name, group in grouped: print(name) print(group) print('\n')
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Set Hierarchical indexing
df df_h = df.set_index(['state', 'year']) df_h df_h.index.is_unique df_h.loc['Ohio', :].max() - df_h.loc['Ohio', :].min()
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Import and Store Data Read and write CSV files.
df df.to_csv('test_csv_file.csv',index=False) %more test_csv_file.csv df_csv = pd.read_csv('test_csv_file.csv') df_csv
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Read and write Excel files.
writer = pd.ExcelWriter('test_excel_file.xlsx') df.to_excel(writer, 'sheet1', index=False) writer.save() df_excel = pd.read_excel('test_excel_file.xlsx', sheet_name='sheet1') df_excel pd.read_table??
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Filtering out Missing Data You have a number of options for filtering out missing data.
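Besides dropping rows or filling with a constant, another common option is filling each column's NaNs with that column's mean — a sketch on toy data:

```python
import numpy as np
import pandas as pd

df_na = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [np.nan, 5.0, 6.0]})

filled = df_na.fillna(df_na.mean())  # per-column means: a -> 2.0, b -> 5.5
print(filled)
```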
df = pd.DataFrame([[1, 6.5, 3.], [1., np.nan, np.nan], [np.nan, np.nan, np.nan], [np.nan, 6.5, 3.]]) df cleaned = df.dropna() # delete rows with Nan value cleaned df.dropna(how='all') # delete rows with all Nan value df.dropna(thresh=2) # keep the rows with at least thresh non-Nan value df.fillna(0) # fill Nan with a constant
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Plotting in DataFrame
variables = pd.DataFrame({'normal': np.random.normal(size=100), 'gamma': np.random.gamma(1, size=100), 'poisson': np.random.poisson(size=100)}) variables.head() variables.shape variables.cumsum().plot() variables.cumsum().plot(subplots=True)
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
HTTP Methods with curl GET
! curl https://jsonplaceholder.typicode.com/photos/1
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
POST
! curl "https://jsonplaceholder.typicode.com/users/10" ! curl -X POST -d "username=Moria.Stanley" "https://jsonplaceholder.typicode.com/users" ! curl "https://jsonplaceholder.typicode.com/users/10"
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
DELETE
! curl -X DELETE https://jsonplaceholder.typicode.com/users/10 !curl https://jsonplaceholder.typicode.com/users/10
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
Install JSON-server http://nmotw.in/json-server/ https://github.com/typicode/json-server $ npm install -g json-server $ json-server --watch db.json get the entire db
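The db.json served below can also be created programmatically; the seed content here is purely illustrative (each top-level key becomes a route such as /posts or /profile):

```python
import json

# hypothetical seed data mirroring the routes queried below
db = {
    "posts": [{"id": 1, "title": "json-server", "author": "typicode"}],
    "comments": [{"id": 1, "postId": 1, "body": "some comment"}],
    "profile": {"name": "typicode"},
}

with open("db.json", "w") as f:
    json.dump(db, f, indent=2)
```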
!curl http://localhost:3000/db
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
HTTP Methods GET - query
!curl http://localhost:3000/posts
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
POST - add new
! curl -X POST -d "id=21&title=learn JSON&author=gong" http://localhost:3000/posts/ ! curl -X POST -d "id=5&title=learn Docker Compose&author=Mei" http://localhost:3000/posts/ ! curl -X POST -d "id=2&title=Play with Jenkins&author=oracle" http://localhost:3000/posts/
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
NOTE: JSON-server does not parse a raw JSON string sent as form data; set the Content-Type: application/json header so the body is parsed as JSON
! curl -i -X POST http://localhost:3000/posts/ -d '{"id": 6, "title": "learn Jenkins", "author": "gong"}' --header "Content-Type: application/json"
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
PUT - update (whole node)
!curl -X PUT -d "id=21&title=learn REST&author=wen" http://localhost:3000/posts/21 !curl -i -X PUT -d "title=learn Ansible&author=albert" http://localhost:3000/posts/21 !curl http://localhost:3000/posts/21
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
PATCH (partial update)
!curl -i -X PATCH -d "author=annabella" http://localhost:3000/posts/21
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
DELETE
!curl http://localhost:3000/posts !curl -X DELETE http://localhost:3000/posts/t1FLpvx
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
work with json file https://stackoverflow.com/questions/18611903/how-to-pass-payload-via-json-file-for-curl
!cat request1.json ! curl -vX POST http://localhost:3000/posts -d @request1.json --header "Content-Type: application/json" !curl -X GET http://localhost:3000/posts/4
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
SLICE
!curl -X GET http://localhost:3000/posts !curl "http://localhost:3000/posts?_start=1&_end=2"
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
SORT
!curl "http://localhost:3000/posts?_sort=id&_order=DESC" !curl "http://localhost:3000/posts?_sort=id&_order=DESC" !curl http://localhost:3000/posts/2
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
Search
!curl http://localhost:3000/posts?q=Docker !curl http://localhost:3000/posts?q=learn !curl http://localhost:3000/posts?q=21 !curl http://localhost:3000/comments !curl -X POST http://localhost:3000/comments -d "id=6&postId=4&body=Apache is powerful" !curl http://localhost:3000/profile
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
Cross-Validation and Bias-Variance decomposition Cross-Validation Implementing 4-fold cross-validation below:
from helpers import load_data # load dataset x, y = load_data() def build_k_indices(y, k_fold, seed): """build k indices for k-fold.""" num_row = y.shape[0] interval = int(num_row / k_fold) np.random.seed(seed) indices = np.random.permutation(num_row) k_indices = [indices[k * interval: (k + 1) * interval] for k in range(k_fold)] return np.array(k_indices) from costs import compute_mse from ridge_regression import ridge_regression from build_polynomial import build_poly def cross_validation(y, x, k_indices, k, lambda_, degree): """return the loss of ridge regression.""" assert 0 <= k and k < len(k_indices) x_test = x[k_indices[k]] y_test = y[k_indices[k]] x_train = np.delete(x, k_indices[k]) y_train = np.delete(y, k_indices[k]) x_test = build_poly(x_test, degree) x_train = build_poly(x_train, degree) # ridge regression on the training folds w = ridge_regression(y_train, x_train, lambda_) # compute the loss for train and test data loss_tr = compute_mse(y_train, x_train, w) loss_te = compute_mse(y_test, x_test, w) return loss_tr, loss_te from plots import cross_validation_visualization def cross_validation_demo(): seed = 1 degree = 7 k_fold = 4 lambdas = np.logspace(-4, 0, 30) # split data in k fold k_indices = build_k_indices(y, k_fold, seed) # define lists to store the loss of training data and test data rmse_tr = [] rmse_te = [] tr_data = [] te_data = [] for lambda_ in lambdas: k_rmse_tr = [] k_rmse_te = [] for k in range(len(k_indices)): loss_tr, loss_te = cross_validation(y, x, k_indices, k, lambda_, degree) k_rmse_tr.append(loss_tr) k_rmse_te.append(loss_te) rmse_tr.append(np.mean(k_rmse_tr)) rmse_te.append(np.mean(k_rmse_te)) tr_data.append(k_rmse_tr) te_data.append(k_rmse_te) 
cross_validation_visualization(lambdas, rmse_tr, rmse_te) return (tr_data, te_data) tr_data, te_data = cross_validation_demo() def draw_plot(data, edge_color, fill_color, sym): bp = plt.boxplot(data, patch_artist=True, sym=sym) for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']: plt.setp(bp[element], color=edge_color) for patch in bp['boxes']: patch.set(facecolor=fill_color) draw_plot(te_data, 'red', 'darkred', '+') draw_plot(tr_data, 'blue', 'darkblue', '*') plt.ylabel("MSE") plt.xlabel("Lambda") plt.title("Cross Validation Mean & Variance")
ml/ex04/template/ex04.ipynb
rusucosmin/courses
mit
Bias-Variance Decomposition Visualize bias-variance trade-off by implementing the function bias_variance_demo() below:
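The demo below relies on the course helper modules; the same trade-off can be sketched self-containedly with np.polyfit (the sine target and 0.3 noise level mirror the demo, while the split sizes here are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(0.1, 2 * np.pi, 200)
y = np.sin(x) + 0.3 * rng.randn(200)

# small train set, large test set: high degrees overfit
train = rng.choice(200, size=15, replace=False)
test = np.setdiff1d(np.arange(200), train)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    rmse_tr = np.sqrt(np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2))
    rmse_te = np.sqrt(np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2))
    print(degree, round(rmse_tr, 3), round(rmse_te, 3))
```

Training error shrinks as the degree grows, while test error eventually rises — the bias-variance trade-off the full demo visualizes.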
from least_squares import least_squares from split_data import split_data from plots import bias_variance_decomposition_visualization def bias_variance_demo(): """The entry.""" # define parameters seeds = range(100) num_data = 10000 ratio_train = 0.005 degrees = range(1, 10) # define list to store the variable rmse_tr = np.empty((len(seeds), len(degrees))) rmse_te = np.empty((len(seeds), len(degrees))) for index_seed, seed in enumerate(seeds): np.random.seed(seed) x = np.linspace(0.1, 2 * np.pi, num_data) y = np.sin(x) + 0.3 * np.random.randn(num_data).T x_train, y_train, x_test, y_test = split_data(x, y, ratio_train, seed) for index_degree, degree in enumerate(degrees): tx_train = build_poly(x_train, degree) tx_test = build_poly(x_test, degree) tr_loss, w = least_squares(y_train, tx_train) rmse_tr[index_seed][index_degree] = np.sqrt(2 * compute_mse(y_train, tx_train, w)) rmse_te[index_seed][index_degree] = np.sqrt(2 * compute_mse(y_test, tx_test, w)) bias_variance_decomposition_visualization(degrees, rmse_tr, rmse_te) bias_variance_demo() def bias_variance_ridge_demo(): """The entry.""" # define parameters seeds = range(100) num_data = 10000 ratio_train = 0.005 degrees = range(1, 10) # define list to store the variable rmse_tr = np.empty((len(seeds), len(degrees))) rmse_te = np.empty((len(seeds), len(degrees))) k_fold = 4 lambdas = np.logspace(-4, 4, 30) for index_seed, seed in enumerate(seeds): np.random.seed(seed) x = np.linspace(0.1, 2 * np.pi, num_data) y = np.sin(x) + 0.3 * np.random.randn(num_data).T x_train, y_train, x_test, y_test = split_data(x, y, ratio_train, seed) for index_degree, degree in enumerate(degrees): tx_train = build_poly(x_train, degree) tx_test = build_poly(x_test, degree) losses = [] for lambda_ in lambdas: w = ridge_regression(y_train, tx_train, lambda_) losses.append(compute_mse(y_train, tx_train, w)) best_lambda = lambdas[np.argmin(losses)] w = ridge_regression(y_train, tx_train, best_lambda) rmse_tr[index_seed][index_degree] = 
np.sqrt(2 * compute_mse(y_train, tx_train, w)) rmse_te[index_seed][index_degree] = np.sqrt(2 * compute_mse(y_test, tx_test, w)) bias_variance_decomposition_visualization(degrees, rmse_tr, rmse_te) bias_variance_ridge_demo()
ml/ex04/template/ex04.ipynb
rusucosmin/courses
mit
Make the grid and training data In the next cell, we create the grid along with the training examples and labels. After running this cell, we have three important tensors: grid is a grid_size x 2 tensor containing the 1D grid for each dimension. train_x is a tensor containing one training example per grid point (grid_size² in total). train_y holds the labels. For this, we're just using a simple sine function.
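A pure-numpy sketch of what the grid construction amounts to — a cartesian product of the per-dimension 1D grids (this mirrors, but is not, gpytorch.utils.grid.create_data_from_grid; the point ordering is an assumption of the sketch):

```python
import numpy as np

grid_size = 25
g0 = np.linspace(0.0, 1.0, grid_size)  # dimension 1 in [0, 1]
g1 = np.linspace(0.0, 2.0, grid_size)  # dimension 2 in [0, 2]

# expand to the full cartesian product: one training input per grid point
xx, yy = np.meshgrid(g0, g1, indexing='ij')
train_x = np.stack([xx.ravel(), yy.ravel()], axis=1)
print(train_x.shape)
```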
grid_bounds = [(0, 1), (0, 2)] grid_size = 25 grid = torch.zeros(grid_size, len(grid_bounds)) for i in range(len(grid_bounds)): grid_diff = float(grid_bounds[i][1] - grid_bounds[i][0]) / (grid_size - 2) grid[:, i] = torch.linspace(grid_bounds[i][0] - grid_diff, grid_bounds[i][1] + grid_diff, grid_size) train_x = gpytorch.utils.grid.create_data_from_grid(grid) train_y = torch.sin((train_x[:, 0] + train_x[:, 1]) * (2 * math.pi)) + torch.randn_like(train_x[:, 0]).mul(0.01)
examples/02_Scalable_Exact_GPs/Grid_GP_Regression.ipynb
jrg365/gpytorch
mit
Creating the Grid GP Model In the next cell we create our GP model. Like other scalable GP methods, we'll use a scalable kernel that wraps a base kernel. In this case, we create a GridKernel that wraps an RBFKernel.
class GridGPRegressionModel(gpytorch.models.ExactGP): def __init__(self, grid, train_x, train_y, likelihood): super(GridGPRegressionModel, self).__init__(train_x, train_y, likelihood) num_dims = train_x.size(-1) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.GridKernel(gpytorch.kernels.RBFKernel(), grid=grid) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) likelihood = gpytorch.likelihoods.GaussianLikelihood() model = GridGPRegressionModel(grid, train_x, train_y, likelihood) # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) training_iter = 2 if smoke_test else 50 # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) for i in range(training_iter): # Zero gradients from previous iteration optimizer.zero_grad() # Output from model output = model(train_x) # Calc loss and backprop gradients loss = -mll(output, train_y) loss.backward() print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % ( i + 1, training_iter, loss.item(), model.covar_module.base_kernel.lengthscale.item(), model.likelihood.noise.item() )) optimizer.step()
examples/02_Scalable_Exact_GPs/Grid_GP_Regression.ipynb
jrg365/gpytorch
mit
In the next cell, we create a set of 400 test examples and make predictions. Note that unlike other scalable GP methods, testing is more complicated. Because our test data can be different from the training data, in general we may not be able to avoid creating a num_train x num_test (e.g., 10000 x 400) kernel matrix between the training and test data. For this reason, if you have large numbers of test points, memory may become a concern. The time complexity should still be reasonable, however, because we will still exploit structure in the train-train covariance matrix.
model.eval() likelihood.eval() n = 20 test_x = torch.zeros(int(pow(n, 2)), 2) for i in range(n): for j in range(n): test_x[i * n + j][0] = float(i) / (n-1) test_x[i * n + j][1] = float(j) / (n-1) with torch.no_grad(), gpytorch.settings.fast_pred_var(): observed_pred = likelihood(model(test_x)) import matplotlib.pyplot as plt %matplotlib inline pred_labels = observed_pred.mean.view(n, n) # Calc absolute error test_y_actual = torch.sin(((test_x[:, 0] + test_x[:, 1]) * (2 * math.pi))).view(n, n) delta_y = torch.abs(pred_labels - test_y_actual).detach().numpy() # Define a plotting function def ax_plot(f, ax, y_labels, title): if smoke_test: return # this is for running the notebook in our testing framework im = ax.imshow(y_labels) ax.set_title(title) f.colorbar(im) # Plot our predictive means f, observed_ax = plt.subplots(1, 1, figsize=(4, 3)) ax_plot(f, observed_ax, pred_labels, 'Predicted Values (Likelihood)') # Plot the true values f, observed_ax2 = plt.subplots(1, 1, figsize=(4, 3)) ax_plot(f, observed_ax2, test_y_actual, 'Actual Values (Likelihood)') # Plot the absolute errors f, observed_ax3 = plt.subplots(1, 1, figsize=(4, 3)) ax_plot(f, observed_ax3, delta_y, 'Absolute Error Surface')
examples/02_Scalable_Exact_GPs/Grid_GP_Regression.ipynb
jrg365/gpytorch
mit
Read in dataset and split into fraud/non-fraud
dataset01, dataset0, dataset1 = utils_data.get_real_dataset() datasets = [dataset0, dataset1] out_folder = utils_data.FOLDER_REAL_DATA_ANALYSIS
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Print some basic info about the dataset
print(dataset01.head()) data_stats = utils_data.get_real_data_stats() data_stats.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'aggregated_data.csv')) display(data_stats)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Percentage of fraudulent cards also in genuine transactions:
most_used_card = dataset0['CardID'].value_counts().index[0] print("Card (ID) with most transactions: ", most_used_card)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
1. TIME of TRANSACTION: Here we analyse the number of transactions over time. 1.1 Activity per day:
plt.figure(figsize=(15, 5)) plt_idx = 1 for d in datasets: plt.subplot(1, 2, plt_idx) trans_dates = d["Global_Date"].apply(lambda date: date.date()) all_trans = trans_dates.value_counts().sort_index() date_num = matplotlib.dates.date2num(all_trans.index) plt.plot(date_num, all_trans.values, 'k.', label='num trans.') plt.plot(date_num, np.zeros(len(date_num))+np.sum(all_trans)/366, 'g--',label='average') plt_idx += 1 plt.title(d.name, size=20) plt.xlabel('days (1.1.16 - 31.12.16)', size=15) plt.xticks([]) plt.xlim(matplotlib.dates.date2num([datetime(2016,1,1), datetime(2016,12,31)])) if plt_idx == 2: plt.ylabel('num transactions', size=15) plt.legend(fontsize=15) plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_day-in-year')) plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Analysis: - It's interesting that there seems to be some structure in the fraudster behaviour, i.e., there are many days on which the number of frauds is exactly the same. This must be due either to some peculiarity in the data (are these days on which fraud was investigated more?) or to the fraudsters performing coordinated attacks. 1.2 Activity per day in a month:
monthdays_2016 = np.unique([dates_2016[i].day for i in range(366)], return_counts=True) monthdays_2016 = monthdays_2016[1][monthdays_2016[0]-1] plt.figure(figsize=(12, 5)) plt_idx = 1 monthday_frac = np.zeros((31, 2)) idx = 0 for d in datasets: # get the average number of transactions per day in a month monthday = d["Local_Date"].apply(lambda date: date.day).value_counts().sort_index() monthday /= monthdays_2016 if idx > -1: monthday_frac[:, idx] = monthday.values / np.sum(monthday.values, axis=0) idx += 1 plt.subplot(1, 2, plt_idx) plt.plot(monthday.index, monthday.values, 'ko') plt.plot(monthday.index, monthday.values, 'k-', markersize=0.1) plt.plot(monthday.index, np.zeros(31)+np.sum(monthday)/31, 'g--', label='average') plt.title(d.name, size=20) plt.xlabel('day in month', size=15) if plt_idx == 1: plt.ylabel('avg. num transactions', size=15) plt_idx += 1 plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_day-in-month')) plt.show() # save the resulting data np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'monthday_frac'), monthday_frac)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Analysis: - the number of transactions does not depend on the day of the month in any exploitable way 1.3 Activity per weekday:
weekdays_2016 = np.unique([dates_2016[i].weekday() for i in range(366)], return_counts=True) weekdays_2016 = weekdays_2016[1][weekdays_2016[0]] plt.figure(figsize=(12, 5)) plt_idx = 1 weekday_frac = np.zeros((7, 2)) idx = 0 for d in datasets: weekday = d["Local_Date"].apply(lambda date: date.weekday()).value_counts().sort_index() weekday /= weekdays_2016 if idx > -1: weekday_frac[:, idx] = weekday.values / np.sum(weekday.values, axis=0) idx += 1 plt.subplot(1, 2, plt_idx) plt.plot(weekday.index, weekday.values, 'ko') plt.plot(weekday.index, weekday.values, 'k-', markersize=0.1) plt.plot(weekday.index, np.zeros(7)+np.sum(weekday)/7, 'g--', label='average') plt.title(d.name, size=20) plt.xlabel('weekday', size=15) plt.xticks(range(7), ['Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa', 'Su']) if plt_idx == 1: plt.ylabel('avg. num transactions', size=15) plt_idx += 1 plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_day-in-week')) plt.show() # save the resulting data np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'weekday_frac'), weekday_frac)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Analysis: - the number of transactions does not depend on the day of the week in any exploitable way 1.4 Activity per month in a year:
monthdays = np.array([31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]) plt.figure(figsize=(12, 5)) plt_idx = 1 month_frac = np.zeros((12, 2)) idx = 0 for d in datasets: month = d["Local_Date"].apply(lambda date: date.month).value_counts().sort_index() # correct for different number of days in a month month = month / monthdays[month.index.values-1] * np.mean(monthdays[month.index.values-1]) if idx > -1: month_frac[month.index-1, idx] = month.values / np.sum(month.values, axis=0) idx += 1 plt.subplot(1, 2, plt_idx) plt.plot(month.index, month.values, 'ko') plt.plot(month.index, month.values, 'k-', markersize=0.1) plt.plot(range(1,13), np.zeros(12)+np.sum(month)/12, 'g--', label='average') plt.title(d.name, size=20) plt.xlabel('month', size=15) plt.xticks(range(1, 13), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']) if plt_idx == 1: plt.ylabel('num transactions', size=15) plt_idx += 1 plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_month-in-year')) plt.show() # save the resulting data np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'month_frac'), month_frac)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Analysis: - people buy more in summer than in winter 1.5 Activity per hour of day:
plt.figure(figsize=(12, 5)) plt_idx = 1 hour_frac = np.zeros((24, 2)) idx = 0 for d in datasets: hours = d["Local_Date"].apply(lambda date: date.hour).value_counts().sort_index() hours /= 366 if idx > -1: hour_frac[hours.index.values, idx] = hours.values / np.sum(hours.values, axis=0) idx += 1 plt.subplot(1, 2, plt_idx) plt.plot(hours.index, hours.values, 'ko') plt.plot(hours.index, hours.values, 'k-', markersize=0.1, label='transactions') plt.plot(range(24), np.zeros(24)+np.sum(hours)/24, 'g--', label='average') plt.title(d.name, size=20) plt.xlabel('hour', size=15) # plt.xticks([]) if plt_idx == 1: plt.ylabel('avg. num transactions', size=15) plt_idx += 1 plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_hour-in-day')) plt.show() # save the resulting data np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'hour_frac'), hour_frac)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Analysis: - the hour of day is very important: people spend the most in the evening and the least during the night; fraud is usually committed at night
# extract only hours date_hour_counts = dataset0["Local_Date"].apply(lambda d: d.replace(minute=0, second=0)).value_counts(sort=False) hours = np.array(list(map(lambda d: d.hour, list(date_hour_counts.index)))) counts = date_hour_counts.values hour_mean = np.zeros(24) hour_min = np.zeros(24) hour_max = np.zeros(24) hour_std = np.zeros(24) for h in range(24): hour_mean[h] = np.mean(counts[hours==h]) hour_min[h] = np.min(counts[hours==h]) hour_max[h] = np.max(counts[hours==h]) hour_std[h] = np.std(counts[hours==h]) print(np.vstack((range(24), hour_min, hour_max, hour_mean, hour_std)).T)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
1.6 TEST: Do the fractions calculated above lead to the correct number of transactions?
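The multiplication chain in the check below can be sanity-checked with uniform toy fractions (illustrative values, not the saved .npy data). With a uniform 1/31 monthday fraction the product recovers slightly less than the yearly total — exactly the month-length effect noted in the cell:

```python
import numpy as np
from datetime import datetime, timedelta

trans_per_year = 1000.0
frac_month = np.full(12, 1 / 12)
frac_monthday = np.full(31, 1 / 31)  # ignores that months have 28-31 days
frac_weekday = np.full(7, 1 / 7)
frac_hour = np.full(24, 1 / 24)

curr = datetime(2016, 1, 1)
total = 0.0
for _ in range(366 * 24):
    t = trans_per_year
    t *= frac_month[curr.month - 1]
    t *= frac_monthday[curr.day - 1]
    t *= 7 * frac_weekday[curr.weekday()]
    t *= frac_hour[curr.hour]
    total += t
    curr += timedelta(hours=1)

# 8784 hours of 2016 spread over 12*31*24 uniform slots = 61/62 of the total
print(round(total, 2))
```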
# total number of transactions we want in one year aggregated_data = pd.read_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'aggregated_data.csv'), index_col=0) trans_per_year = np.array(aggregated_data.loc['transactions'].values, dtype=np.float)[1:] # transactions per day in a month frac_monthday = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'monthday_frac.npy')) # transactions per day in a week frac_weekday = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'weekday_frac.npy')) # transactions per month in a year frac_month = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'month_frac.npy')) # transactions hour in a day frac_hour = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'hour_frac.npy')) cust_idx = 0 std_transactions = 1000 num_customers = 200 # get the probability of a transaction in a given hour curr_date = datetime(2016, 1, 1) num_trans = 0 for i in range(366*24): new_trans = float(trans_per_year[cust_idx]) new_trans *= frac_month[curr_date.month-1, cust_idx] new_trans *= frac_monthday[curr_date.day-1, cust_idx] new_trans *= 7 * frac_weekday[curr_date.weekday(), cust_idx] new_trans *= frac_hour[curr_date.hour, cust_idx] num_trans += new_trans curr_date += timedelta(hours=1) print(curr_date) print(trans_per_year[cust_idx]) print(num_trans) print("") # the difference happens because some months have longer/shorter days. # We did not want to scale up the transactions on day 31 because that's unrealistic. 
curr_date = datetime(2016, 1, 1) num_trans = 0 for i in range(366*24): for c in range(num_customers): # num_trans is the number of transactions the customer will make in this hour # we assume that we have enough customers to model that each customer can make max 1 transaction per hour cust_trans = float(trans_per_year[cust_idx]) cust_trans += np.random.normal(0, std_transactions, 1)[0] cust_trans /= num_customers cust_trans *= frac_month[curr_date.month-1, cust_idx] cust_trans *= frac_monthday[curr_date.day-1, cust_idx] cust_trans *= 7 * frac_weekday[curr_date.weekday(), cust_idx] cust_trans *= frac_hour[curr_date.hour, cust_idx] cust_trans += np.random.normal(0, 0.01, 1)[0] if cust_trans > np.random.uniform(0, 1, 1)[0]: num_trans += 1 curr_date += timedelta(hours=1) print(curr_date) print(trans_per_year[cust_idx]) print(num_trans) print("")
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
2. COUNTRY 2.1 Country per transaction:
country_counts = pd.concat([d['Country'].value_counts() for d in datasets], axis=1) country_counts.fillna(0, inplace=True) country_counts.columns = ['non-fraud', 'fraud'] country_counts[['non-fraud', 'fraud']] /= country_counts.sum(axis=0) # save the resulting data country_counts.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'country_frac.csv')) countries_large = [] for c in ['non-fraud', 'fraud']: countries_large.extend(country_counts.loc[country_counts[c] > 0.05].index) countries_large = np.unique(countries_large) countries_large_counts = [] for c in countries_large: countries_large_counts.append(country_counts.loc[c, 'non-fraud']) countries_large = [countries_large[np.argsort(countries_large_counts)[::-1][i]] for i in range(len(countries_large))] plt.figure(figsize=(10,5)) bottoms = np.zeros(3) for i in range(len(countries_large)): c = countries_large[i] plt.bar((0, 1, 2), np.concatenate((country_counts.loc[c], [0])), label=c, bottom=bottoms) bottoms += np.concatenate((country_counts.loc[c], [0])) # fill up the rest plt.bar((0, 1), 1-bottoms[:-1], bottom=bottoms[:-1], label='rest') plt.legend(fontsize=20) plt.xticks([0, 1], ['non-fraud', 'fraud'], size=15) plt.ylabel('fraction transactions made', size=15) plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'country_distribution')) plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
3. CURRENCY 3.1 Currency per Transaction
currency_counts = pd.concat([d['Currency'].value_counts() for d in datasets], axis=1) currency_counts.fillna(0, inplace=True) currency_counts.columns = ['non-fraud', 'fraud'] currency_counts[['non-fraud', 'fraud']] /= currency_counts.sum(axis=0) currencies_large = [] for c in ['non-fraud', 'fraud']: currencies_large.extend(currency_counts.loc[currency_counts[c] > 0].index) currencies_large = np.unique(currencies_large) currencies_large_counts = [] for c in currencies_large: currencies_large_counts.append(currency_counts.loc[c, 'non-fraud']) currencies_large = [currencies_large[np.argsort(currencies_large_counts)[::-1][i]] for i in range(len(currencies_large))] plt.figure(figsize=(10,5)) bottoms = np.zeros(3) for i in range(len(currencies_large)): c = currencies_large[i] plt.bar((0, 1, 2), np.concatenate((currency_counts.loc[c], [0])), label=c, bottom=bottoms) bottoms += np.concatenate((currency_counts.loc[c], [0])) plt.legend(fontsize=20) plt.xticks([0, 1], ['non-fraud', 'fraud'], size=15) plt.ylabel('fraction of total transactions made', size=15) plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'currency_distribution')) plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
3.2 Currency per country Check how many cards make purchases in several currencies:
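The count can also be obtained more directly with groupby(...).nunique(); a sketch on made-up card data:

```python
import pandas as pd

toy = pd.DataFrame({
    'CardID':   [1, 1, 2, 2, 3, 3],
    'Currency': ['EUR', 'EUR', 'EUR', 'USD', 'GBP', 'GBP'],
})

# number of distinct currencies each card transacted in
n_currencies = toy.groupby('CardID')['Currency'].nunique()
multi_currency_cards = int((n_currencies > 1).sum())
print(multi_currency_cards)  # only card 2 used more than one currency
```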
curr_per_cust = dataset0[['CardID', 'Currency']].groupby('CardID')['Currency'].value_counts().index.get_level_values(0) print(len(curr_per_cust)) print(len(curr_per_cust.unique())) print(len(curr_per_cust) - len(curr_per_cust.unique()))
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
CONCLUSION: Only 243 cards out of 54,000 purchased things in several currencies. Estimate the probability of selecting a currency, given a country:
curr_per_country0 = dataset0.groupby(['Country'])['Currency'].value_counts(normalize=True) curr_per_country1 = dataset1.groupby(['Country'])['Currency'].value_counts(normalize=True) curr_per_country0.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'currency_per_country0.csv')) curr_per_country1.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'currency_per_country1.csv'))
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
4. Merchants 4.1: Merchants per Currency
plt.figure(figsize=(7,5)) currencies = dataset01['Currency'].unique() merchants = dataset01['MerchantID'].unique() for curr_idx in range(len(currencies)): for merch_idx in range(len(merchants)): plt.plot(range(len(currencies)), np.zeros(len(currencies))+merch_idx, 'r-', linewidth=0.2) if currencies[curr_idx] in dataset01.loc[dataset01['MerchantID'] == merch_idx, 'Currency'].values: plt.plot(curr_idx, merch_idx, 'ko') plt.xticks(range(len(currencies)), currencies) plt.ylabel('Merchant ID', size=15) plt.xlabel('Currency', size=15) plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'currency_per_merchant')) plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
We conclude from this that most merchants only sell things in one currency; thus, we will let each customer select the merchant given the currency that the customer has (which is unique). Estimate the probability of selecting a merchant, given the currency:
merch_per_curr0 = dataset0.groupby(['Currency'])['MerchantID'].value_counts(normalize=True) merch_per_curr1 = dataset1.groupby(['Currency'])['MerchantID'].value_counts(normalize=True) merch_per_curr0.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_per_currency0.csv')) merch_per_curr1.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_per_currency1.csv'))
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
4.2 Number transactions per merchant
merchant_count0 = dataset0['MerchantID'].value_counts().sort_index() merchant_count1 = dataset1['MerchantID'].value_counts().sort_index() plt.figure(figsize=(15,10)) ax = plt.subplot(2, 1, 1) ax.bar(merchant_count0.index.values, merchant_count0.values) rects = ax.patches for rect, label in zip(rects, merchant_count0.values): height = rect.get_height() ax.text(rect.get_x() + rect.get_width()/2, height, label, ha='center', va='bottom') plt.ylabel('num transactions') plt.xticks([]) plt.xlim([-0.5, data_stats.loc['num merchants', 'all']+0.5]) ax = plt.subplot(2, 1, 2) ax.bar(merchant_count1.index.values, merchant_count1.values) rects = ax.patches for rect, label in zip(rects, merchant_count1.values): height = rect.get_height() ax.text(rect.get_x() + rect.get_width()/2, height, label, ha='center', va='bottom') plt.ylabel('num transactions') plt.xlabel('Merchant ID') plt.xlim([-0.5, data_stats.loc['num merchants', 'all']+0.5]) plt.tight_layout() plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
5. Transaction Amount 5.1 Amount over time
plt.figure(figsize=(12, 10)) plt_idx = 1 for d in datasets: plt.subplot(2, 1, plt_idx) plt.plot(range(d.shape[0]), d['Amount'], 'k.') # plt.plot(date_num, amount, 'k.', label='num trans.') # plt.plot(date_num, np.zeros(len(date_num))+np.mean(all_trans), 'g',label='average') plt_idx += 1 # plt.title(d.name, size=20) plt.xlabel('transactions', size=15) plt.xticks([]) if plt_idx == 2: plt.ylabel('amount', size=15) plt.legend(fontsize=15) plt.tight_layout() plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'amount_day-in-year')) plt.show() print(dataset0.loc[dataset0['Amount'] == 5472.53,['Local_Date', 'CardID', 'MerchantID', 'Amount', 'Currency', 'Country']])
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
5.2 Amount distribution
plt.figure(figsize=(10, 5))
bins = [0, 5, 25, 50, 100, 1000, 11000]
plt_idx = 1
for d in datasets:
    amount_counts, loc = np.histogram(d["Amount"], bins=bins)
    amount_counts = np.array(amount_counts, dtype=float)
    amount_counts /= np.sum(amount_counts)
    plt.subplot(1, 2, plt_idx)
    am_bot = 0
    for i in range(len(amount_counts)):
        plt.bar(plt_idx, amount_counts[i], bottom=am_bot, label='{}-{}'.format(bins[i], bins[i+1]))
        am_bot += amount_counts[i]
    plt_idx += 1
    plt.ylim([0, 1.01])
    plt.legend()
    # plt.title("Amount distribution")
    plt_idx += 1
plt.show()

plt.figure(figsize=(12, 10))
plt_idx = 1
for d in datasets:
    plt.subplot(2, 1, plt_idx)
    min_amount = min(d['Amount'])
    max_amount = max(d['Amount'])
    plt.plot(range(d.shape[0]), np.sort(d['Amount']), 'k.', label='transaction')
    # plt.plot(date_num, amount, 'k.', label='num trans.')
    plt.plot(np.linspace(0, d.shape[0], 100), np.zeros(100)+np.mean(d['Amount']), 'g--', label='average')
    plt_idx += 1
    plt.title(d.name, size=20)
    plt.ylabel('amount', size=15)
    if plt_idx == 3:
        plt.xlabel('transactions', size=15)
    else:
        plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'amount_day-in-year'))
plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
For each merchant, we will have a probability distribution over the amount spent.
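The idea is to turn each merchant's observed amounts into a normalised histogram, which can then be sampled from in the simulator. A minimal sketch with hypothetical toy amounts (the cell below does this per merchant on the real datasets):

```python
import numpy as np

# Toy transaction amounts for one merchant (illustrative values only)
amounts = np.array([3.0, 7.0, 7.5, 20.0])

# Normalised histogram = empirical probability distribution over amount bins
heights, edges = np.histogram(amounts, bins=2)
probs = heights / heights.sum()
print(probs.tolist())  # [0.75, 0.25]
```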
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    y = 1 / (1 + np.exp(-k * (x - x0)))
    return y

num_merchants = data_stats.loc['num merchants', 'all']
num_bins = 20
merchant_amount_distr = np.zeros((2, num_merchants, 2*num_bins+1))

plt.figure(figsize=(15, 5))
plt_idx = 1
for dataset in [dataset0, dataset1]:
    for m in dataset0['MerchantID'].unique():
        # get all transactions from this merchant
        trans_merch = dataset.loc[dataset['MerchantID'] == m]
        num_transactions = trans_merch.shape[0]
        if num_transactions > 0:
            # get the amounts paid for the transactions with this merchant
            amounts = trans_merch['Amount']
            bins_height, bins_edges = np.histogram(amounts, bins=num_bins)
            bins_height = np.array(bins_height, dtype=float)
            bins_height /= np.sum(bins_height)
            merchant_amount_distr[int(plt_idx > 7), (plt_idx-1) % 7, :] = np.concatenate((bins_height, bins_edges))
            plt.subplot(2, num_merchants, plt_idx)
            plt.hist(amounts, bins=num_bins)
        plt_idx += 1
plt.tight_layout()
plt.show()

np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_amount_distr'), merchant_amount_distr)

from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    y = 1 / (1 + np.exp(-k * (x - x0)))
    return y

num_merchants = data_stats.loc['num merchants', 'all']
merchant_amount_parameters = np.zeros((2, num_merchants, 4))

plt.figure(figsize=(15, 5))
plt_idx = 1
for dataset in [dataset0, dataset1]:
    for m in dataset0['MerchantID'].unique():
        # get all transactions from this merchant
        trans_merch = dataset.loc[dataset['MerchantID'] == m]
        num_transactions = trans_merch.shape[0]
        if num_transactions > 0:
            # get the amounts paid for the transactions with this merchant
            amounts = np.sort(trans_merch['Amount'])
            min_amount = min(amounts)
            max_amount = max(amounts)
            amounts_normalised = (amounts - min_amount) / (max_amount - min_amount)
            plt.subplot(2, num_merchants, plt_idx)
            plt.plot(np.linspace(0, 1, num_transactions), amounts, '.')
            # fit sigmoid
            x_vals = np.linspace(0, 1, 100)
            try:
                p_sigmoid, _ = curve_fit(sigmoid, np.linspace(0, 1, num_transactions), amounts_normalised)
                amounts_predict = sigmoid(x_vals, *p_sigmoid)
                amounts_predict_denormalised = amounts_predict * (max_amount - min_amount) + min_amount
                plt.plot(x_vals, amounts_predict_denormalised)
            except:
                # fit polynomial
                p_poly = np.polyfit(np.linspace(0, 1, num_transactions), amounts_normalised, 2)
                amounts_predict = np.polyval(p_poly, x_vals)
                p_sigmoid, _ = curve_fit(sigmoid, x_vals, amounts_predict)
                amounts_predict = sigmoid(x_vals, *p_sigmoid)
                amounts_predict_denormalised = amounts_predict * (max_amount - min_amount) + min_amount
                plt.plot(x_vals, amounts_predict_denormalised)
            merchant_amount_parameters[int(plt_idx > 7), (plt_idx-1) % 7] = [min_amount, max_amount, p_sigmoid[0], p_sigmoid[1]]
        plt_idx += 1
plt.tight_layout()
plt.show()

np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_amount_parameters'), merchant_amount_parameters)
print(merchant_amount_parameters)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
We conclude that normal customers and fraudsters follow roughly the same distribution, so we will use only one distribution per merchant, irrespective of whether a genuine or fraudulent customer is making the transaction.
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    y = 1 / (1 + np.exp(-k * (x - x0)))
    return y

num_merchants = data_stats.loc['num merchants', 'all']
merchant_amount_parameters = np.zeros((2, num_merchants, 4))

plt.figure(figsize=(6, 3))
plt_idx = 1
dataset = dataset0
m = dataset0['MerchantID'].unique()[0]

# get all transactions from this merchant
trans_merch = dataset.loc[dataset['MerchantID'] == m]
num_transactions = trans_merch.shape[0]

# get the amounts paid for the transactions with this merchant
amounts = np.sort(trans_merch['Amount'])
min_amount = min(amounts)
max_amount = max(amounts)
amounts_normalised = (amounts - min_amount) / (max_amount - min_amount)
plt.plot(range(num_transactions), amounts, 'k-', linewidth=2, label='real')

# fit sigmoid
x_vals = np.linspace(0, 1, 100)
x = np.linspace(0, 1, num_transactions)
p_sigmoid, _ = curve_fit(sigmoid, np.linspace(0, 1, num_transactions), amounts_normalised)
amounts_predict = sigmoid(x_vals, *p_sigmoid)
amounts_predict_denormalised = amounts_predict * (max_amount - min_amount) + min_amount
plt.plot(np.linspace(0, num_transactions, 100), amounts_predict_denormalised, 'm--', linewidth=3, label='approx')
merchant_amount_parameters[int(plt_idx > 7), (plt_idx-1) % 7] = [min_amount, max_amount, p_sigmoid[0], p_sigmoid[1]]

plt.xlabel('transaction count', fontsize=20)
plt.ylabel('price', fontsize=20)
plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'merchant_price_sigmoid_fit'))
plt.show()
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Customers

Here we want to find out how long customers/fraudsters keep returning, i.e., how often the same credit card is used over time.
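The quantity of interest is the gap, in days, between consecutive transactions on the same card. A minimal pandas sketch on hypothetical toy data (the cell below computes the same gaps on the real datasets):

```python
import pandas as pd

# Toy data: two cards with a few dated transactions each (illustrative only)
toy = pd.DataFrame({
    "CardID": [1, 1, 1, 2, 2],
    "Global_Date": pd.to_datetime(
        ["2016-01-01", "2016-01-04", "2016-01-10", "2016-03-01", "2016-03-31"]),
})

# Per-card gaps between consecutive transactions, in days
gaps = (
    toy.sort_values("Global_Date")
       .groupby("CardID")["Global_Date"]
       .diff()          # NaT for the first transaction of each card
       .dropna()
       .dt.days
)
print(gaps.tolist())  # [3, 6, 30]
```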
plt.figure(figsize=(15, 30))
plt_idx = 1
dist_transactions = [[], []]
for d in datasets:
    # d = d.loc[d['Date'].apply(lambda date: date.month) < 7]
    # d = d.loc[d['Date'].apply(lambda date: date.month) > 3]
    plt.subplot(1, 2, plt_idx)
    trans_idx = 0
    for card in dataset01['CardID'].unique():
        card_times = d.loc[d['CardID'] == card, 'Global_Date']
        dist_transactions[plt_idx-1].extend(
            [(card_times.iloc[i+1] - card_times.iloc[i]).days for i in range(len(card_times)-1)])
        if plt_idx == 2:
            num_c = 2
        else:
            num_c = 10
        if len(card_times) > num_c:
            card_times = card_times.apply(lambda date: date.date())
            card_times = matplotlib.dates.date2num(card_times)
            plt.plot(card_times, np.zeros(len(card_times)) + trans_idx, 'k.', markersize=1)
            plt.plot(card_times, np.zeros(len(card_times)) + trans_idx, 'k-', linewidth=0.2)
            trans_idx += 1
    min_date = matplotlib.dates.date2num(min(dataset01['Global_Date']).date())
    max_date = matplotlib.dates.date2num(max(dataset01['Global_Date']).date())
    # plt.xlim([min_date, max_date])
    plt.xticks([])
    for m in range(1, 13):
        datenum = matplotlib.dates.date2num(datetime(2016, m, 1))
        plt.plot(np.zeros(2)+datenum, [-1, 1000], 'r-', linewidth=0.5)
    if plt_idx == 1:
        plt.ylim([0, 300])
    else:
        plt.ylim([0, 50])
    plt_idx += 1
plt.show()

# average distance between two transactions with the same card
print(np.mean(dist_transactions[0]))
print(np.mean(dist_transactions[1]))
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
At a given transaction, estimate the probability of doing another transaction with the same card.
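A back-of-envelope version of this estimate, on hypothetical toy data: every transaction except the last one on each card is, by definition, followed by another transaction with that card. The notebook's cell below is more careful (it restricts to a month window and compares dates explicitly), but the rough count looks like this:

```python
import pandas as pd

# Toy card IDs, one entry per transaction (illustrative only)
cards = pd.Series([1, 1, 2, 3, 3, 3])

n_trans = len(cards)        # 6 transactions
n_cards = cards.nunique()   # 3 distinct cards

# All transactions except the last one per card have a successor
prob_stay = (n_trans - n_cards) / n_trans
print(prob_stay)  # 0.5
```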
prob_stay = np.zeros(2)
for k in range(2):
    dataset = [dataset0, dataset1][k]
    creditcards = dataset.loc[dataset['Global_Date'].apply(lambda d: d.month) > 3]
    creditcards = creditcards.loc[creditcards['Global_Date'].apply(lambda d: d.month) < 6]
    creditcard_counts = creditcards['CardID'].value_counts()
    creditcardIDs = creditcards['CardID']
    data = dataset.loc[dataset['Global_Date'].apply(lambda d: d.month) > 3]
    single = 0
    multi = 0
    for i in range(len(creditcards)):
        cc = creditcards.iloc[i]['CardID']
        dd = creditcards.iloc[i]['Global_Date']
        cond1 = data['CardID'] == cc
        cond2 = data['Global_Date'] > dd
        if len(data.loc[np.logical_and(cond1, cond2)]) == 0:
            single += 1
        else:
            multi += 1
    prob_stay[k] = multi/(single+multi)
    print('probability of doing another transaction:', prob_stay[k], '{}'.format(['non-fraud', 'fraud'][k]))
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'prob_stay'), prob_stay)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Fraud behaviour
cards0 = dataset0['CardID'].unique()
cards1 = dataset1['CardID'].unique()
print('cards total:', len(np.union1d(cards0, cards1)))
print('fraud cards:', len(cards1))
print('intersection:', len(np.intersect1d(cards0, cards1)))

# go through the cards that were in both sets
cards0_1 = []
cards1_0 = []
cards010 = []
for cib in np.intersect1d(cards0, cards1):
    date0 = dataset0.loc[dataset0['CardID'] == cib].iloc[0]['Global_Date']
    date1 = dataset1.loc[dataset1['CardID'] == cib].iloc[0]['Global_Date']
    if date0 < date1:
        cards0_1.append(cib)
        # genuine purchases after fraud
        dates00 = dataset0.loc[dataset0['CardID'] == cib].iloc[1:]['Global_Date']
        if len(dates00) > 0:
            if sum(dates00 > date1) > 0:
                cards010.append(cib)
    else:
        cards1_0.append(cib)
print('first genuine then fraud: ', len(cards0_1))
print('first fraud then genuine: ', len(cards1_0))
print('genuine again after fraud: ', len(cards010))

prob_stay_after_fraud = len(cards010)/len(cards0_1)
print('prob of purchase after fraud: ', prob_stay_after_fraud)
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'prob_stay_after_fraud'), prob_stay_after_fraud)

plt.figure(figsize=(10, 25))
dist_transactions = []
trans_idx = 0
data_compromised = dataset01.loc[dataset01['CardID'].apply(lambda cid: cid in np.intersect1d(cards0, cards1))]
no_trans_after_fraud = 0
trans_after_fraud = 0
for card in data_compromised['CardID'].unique():
    cards_used = data_compromised.loc[data_compromised['CardID'] == card, ['Global_Date', 'Target']]
    dist_transactions.extend(
        [(cards_used.iloc[i+1, 0] - cards_used.iloc[i, 0]).days for i in range(len(cards_used)-1)])
    card_times = cards_used['Global_Date'].apply(lambda date: date.date())
    card_times = matplotlib.dates.date2num(card_times)
    plt.plot(card_times, np.zeros(len(card_times)) + trans_idx, 'k-', linewidth=0.9)
    cond0 = cards_used['Target'] == 0
    plt.plot(card_times[cond0], np.zeros(len(card_times[cond0])) + trans_idx, 'g.', markersize=5)
    cond1 = cards_used['Target'] == 1
    plt.plot(card_times[cond1], np.zeros(len(card_times[cond1])) + trans_idx, 'r.', markersize=5)
    if max(cards_used.loc[cards_used['Target'] == 0, 'Global_Date']) > max(cards_used.loc[cards_used['Target'] == 1, 'Global_Date']):
        trans_after_fraud += 1
    else:
        no_trans_after_fraud += 1
    trans_idx += 1
min_date = matplotlib.dates.date2num(min(dataset01['Global_Date']).date())
max_date = matplotlib.dates.date2num(max(dataset01['Global_Date']).date())
plt.xticks([])
plt.ylim([0, trans_idx])
# print lines for months
for m in range(1, 13):
    datenum = matplotlib.dates.date2num(datetime(2016, m, 1))
    plt.plot(np.zeros(2)+datenum, [-1, 1000], 'r-', linewidth=0.5)
plt_idx += 1
plt.show()

print("genuine transactions after fraud: ", trans_after_fraud)
print("fraud is the last transaction: ", no_trans_after_fraud)
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
When a fraudster uses an existing card, are country and currency always the same?
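The check amounts to asking, per card, whether the transactions span more than one country or currency. A minimal groupby/nunique sketch on hypothetical toy data (the cell below prints the full rows for the real compromised cards instead):

```python
import pandas as pd

# Toy data: card 1 is consistent, card 2 switches country and currency (illustrative only)
toy = pd.DataFrame({
    "CardID":   [1, 1, 2, 2],
    "Country":  ["DE", "DE", "FR", "US"],
    "Currency": ["EUR", "EUR", "EUR", "USD"],
})

# Cards whose transactions span more than one country or currency
varies = toy.groupby("CardID")[["Country", "Currency"]].nunique().max(axis=1) > 1
inconsistent_cards = varies[varies].index.tolist()
print(inconsistent_cards)  # [2]
```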
plt.figure(figsize=(10, 25))
dist_transactions = []
trans_idx = 0
for card in data_compromised['CardID'].unique():
    cards_used = data_compromised.loc[data_compromised['CardID'] == card, ['Global_Date', 'Target', 'Country', 'Currency']]
    if len(cards_used['Country'].unique()) > 1 or len(cards_used['Currency'].unique()) > 1:
        print(cards_used)
        print("")
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Now define the double_slit function and make it interactive:
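The intensity pattern itself can be sketched in isolation before wiring up the interactive widget: a single-slit sinc² envelope multiplied by a two-slit interference term. The form of the interference term below follows the cell (cos of 2πdx/λL), and the parameter values are just the cell's defaults:

```python
import numpy as np

def intensity(x, a, d, lam, L):
    """Double-slit screen intensity at position x (all lengths in meters)."""
    single = np.sinc(a * x / (lam * L)) ** 2            # np.sinc includes the pi factor
    double = np.cos(2 * np.pi * d * x / (lam * L)) ** 2  # interference term as in the cell
    return single * double

# At the center of the screen the intensity is maximal (normalised to 1)
print(intensity(0.0, a=10e-6, d=15e-6, lam=632.8e-9, L=3.0))
```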
# Quantum double-slit
# define the experimental parameters
# d = 15.      # (micron) dist. between slits
# a = 10.      # (micron) slit width
# L = 1.       # (m) dist. from slit to screen
# lam = 632.8  # (nm) He-Ne laser

def double_slit(d=15., a=10., L=3., lam=632.8, N=0):
    # convert d and a from microns to meters
    dm = d*1.e-6
    am = a*1.e-6
    # convert wavelength from nm to m
    wave = lam*1.e-9

    # create the probability distribution
    x = np.linspace(-0.2, 0.2, 10000)
    # Isingle = np.sin(np.pi*am*x/wave/L)**2./(np.pi*am*x/wave/L)**2
    Isingle = np.sinc(am*x/wave/L)**2.
    Idouble = (np.cos(2*np.pi*dm*x/wave/L)**2)
    Itot = Isingle*Idouble

    # generate the random photon locations on the screen
    # x according to the intensity distribution
    xsamples = distribute1D(x, Itot, N)
    # y randomly over the full screen height
    ysamples = -0.2 + 0.4*np.random.ranf(N)

    # Make subplots of the intensity and the screen distribution
    fig = plt.figure(1, (10, 6))
    plt.subplot(2, 1, 1)
    plt.plot(x, Itot)
    plt.xlim(-0.2, 0.2)
    plt.ylim(0., 1.2)
    plt.ylabel("Intensity", fontsize=20)
    plt.subplot(2, 1, 2)
    plt.xlim(-0.2, 0.2)
    plt.ylim(-0.2, 0.2)
    plt.scatter(xsamples, ysamples)
    plt.xlabel("x (m)", fontsize=20)
    plt.ylabel("y (m)", fontsize=20)

v5 = interact(double_slit, d=(1., 20., 1.), a=(5, 50., 1.), L=(1.0, 3.0), lam=(435., 700.), N=(0, 10000))
notebooks/DoubleSlit.ipynb
dedx/STAR2015
mit
Bioscrape Models

Chemical Reactions

Bioscrape models consist of a set of species and a set of reactions (delays will be discussed later). These models can be simulated either stochastically via SSA or deterministically as an ODE. Each reaction is of the form

${INPUTS} \xrightarrow[]{\rho(.)} {OUTPUTS}$

Here, INPUTS represents a multiset of input species and OUTPUTS represents a multiset of output species. The function $\rho(.)$ is either a deterministic rate function or a stochastic propensity. Propensities are identified by their name and require a parameter dictionary with the appropriate parameters. The following functions are supported:

- "massaction": $\rho(S) = k \Pi_{s} s^{I_s}$. Required parameters: "k", the rate constant. Note: for stochastic simulations, mass action propensities are $\rho(S) = \frac{1}{V} k \Pi_{s} s!/(s - I_s)!$, where $V$ is the volume.
- "hillpositive": $\rho(s) = k \frac{s^n}{K^n+s^n}$. Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s".
- "hillnegative": $\rho(s) = k \frac{1}{K^n+s^n}$. Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s".
- "proportionalhillpositive": $\rho(s) = k d \frac{s^n}{K^n+s^n}$. Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s", proportional species "d".
- "proportionalhillnegative": $\rho(s) = k d \frac{1}{K^n+s^n}$. Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s", proportional species "d".
- "general": $\rho(s) = f(s)$, where $f$ can be any algebraic function typed as a string. Required parameters: "rate", an algebraic expression including species and model parameters written as a string.

More details on all these propensity types can be found in the <a href="https://github.com/ananswam/bioscrape/wiki/Propensities">wiki documentation</a>.

Transcription Translation Example

First, the following model of transcription and translation will be created programmatically. There are three chemical species: $G$ is a gene, $T$ is a transcript, $X$ is a protein.

$G \xrightarrow[]{\rho_{tx}(G, I)} G+T$; $\rho_{tx}(G, I) = G k_{tx}\frac{I^{n}}{K_{I}^{n}+I^{n}}$, where $I$ is an inducer.

$T \xrightarrow[]{\rho_{tl}(T)} T+X$; $\rho_{tl}(T) = k_{tl} \frac{T}{K_{R} + T}$, where $k_{tl}$ and $K_R$ model effects due to ribosome saturation.

$T \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$.

$X \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$.

The first reaction uses a proportional positive hill function as its rate function to represent induction. The second reaction uses a positive hill function to represent ribosome saturation. The third and fourth reactions represent degradation via dilution. No delays will be included in this model. This model is constructed below and simulated both stochastically and deterministically.
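As a quick sanity check on the rate forms above, the proportional positive hill propensity can be evaluated by hand. A minimal sketch (the parameter values here are illustrative, not taken from the model below):

```python
# rho = k * d * s**n / (K**n + s**n), the "proportionalhillpositive" rate form
def proportional_hill_positive(d, s, k, K, n):
    return k * d * s**n / (K**n + s**n)

# With s = K the hill term equals 1/2, so rho = k*d/2
rate = proportional_hill_positive(d=1.0, s=10.0, k=1.5, K=10.0, n=2.0)
print(rate)  # 0.75
```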
from bioscrape.simulator import py_simulate_model
from bioscrape.types import Model

# Create a list of species names (strings)
species = ["G", "T", "X", "I"]

# Create a list of parameters in the form (param_name[string], param_val[number])
params = [("ktx", 1.5), ("ktl", 10.0), ("KI", 10), ("n", 2.0), ("KR", 20), ("delta", .1)]

# Create reaction tuples in the form:
# (Inputs[string list], Outputs[string list], propensity_type[string], propensity_dict {propensity_param: model_param})
rxn1 = (["G"], ["G", "T"], "proportionalhillpositive", {"d": "G", "s1": "I", "k": "ktx", "K": "KI", "n": "n"})
rxn2 = (["T"], ["T", "X"], "hillpositive", {"s1": "T", "k": "ktl", "K": "KR", "n": 1})
# Notice that parameters can also take numerical values instead of being named directly
rxn3 = (["T"], [], "massaction", {"k": "delta"})
rxn4 = (["X"], [], "massaction", {"k": "delta"})

# Create a list of all reactions
rxns = [rxn1, rxn2, rxn3, rxn4]

# Create an initial condition dictionary; species not included will default to 0
x0 = {"G": 1, "I": 10}

# Instantiate the Model object
M = Model(species = species, parameters = params, reactions = rxns, initial_condition_dict = x0)

# Simulate the Model deterministically
timepoints = np.arange(0, 150, .1)
results_det = py_simulate_model(timepoints, Model = M)  # Returns a Pandas DataFrame

# Simulate the Model stochastically
results_stoch = py_simulate_model(timepoints, Model = M, stochastic = True)

# Plot the results
plt.figure(figsize = (12, 4))
plt.subplot(121)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(122)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit