Fitting the LDA model
```python
%%time
lda = models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, passes=10)
lda.save('newsgroups_50.model')
```
notebooks/Gensim Newsgroup.ipynb
codingafuture/pyLDAvis
bsd-3-clause
Visualizing the model with pyLDAvis Okay, the moment we have all been waiting for is finally here! You'll notice in the visualization that we have a few junk topics that would probably disappear after better preprocessing of the corpus. This is left as an exercise for the reader. :)
```python
import pyLDAvis.gensim as gensimvis
import pyLDAvis

vis_data = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.display(vis_data)
```
Now, we'll use our model to generate a single prediction: we're not even going to pick a character from it, we simply want to check the output array.
```python
# Generate characters
x = np.reshape(pattern, (1, len(pattern), 1))
x = x / float(n_vocab)
prediction = model.predict(x, verbose=0)
print(prediction)
```
#3 - Improving text generation/3.1 - Randomizing our prediction.ipynb
AlexGascon/playing-with-keras
apache-2.0
As we can see, the array contains numbers between 0 and 1; each one represents the probability that the character in that position is the correct one to output. However, this doesn't seem to be a very clear representation. Let's see it graphically.
```python
# Creating the indices array
indices = np.arange(len(prediction[0]))

# Creating the plots
fig, ax = plt.subplots()
preds = ax.bar(indices, prediction[0])

# Adding some text for labels and titles
ax.set_xlabel('Character')
ax.set_ylabel('Probability')
ax.set_title('Probability of each character of being the correct out...
```
As we can see, our model seems quite convinced that the correct character to output is 'i' (and, looking at the seed, I can tell you that it's probably correct: "arremetió" is a real word in Spanish, and it would make sense in that context). However, we cannot simply discard the other characters: although they may seem i...
```python
prob_cum = np.cumsum(prediction[0])

# Creating the indices array
indices = np.arange(len(prob_cum))

# Creating the plots
fig, ax = plt.subplots()
preds = ax.bar(indices, prob_cum)

# Adding some text for labels and titles
ax.set_xlabel('Character')
ax.set_ylabel('Probability')
ax.set_title('Cumulative output probabi...
```
As you can see, the final value is 1, because the output is certain to be one of the characters in the array. In order to sample using these cumulative probabilities, what we'll do is generate a random number between 0 and 1 and choose the first element of the array that is greater than it. As you can imagine, the char tha...
```python
# Generate characters
for i in range(500):
    x = np.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)

    # Choosing the character randomly
    prob_cum = np.cumsum(prediction[0])
    rand_ind = np.random.rand()
    for i in range(len(prob_cum)):
        if (rand_ind ...
```
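The cumulative-probability sampling described above can be sketched independently of the model. This is a minimal sketch on a toy distribution (the character list and probabilities are made up, not taken from the notebook's model):

```python
import numpy as np

# A toy probability distribution over 4 characters (stand-in for prediction[0]).
probs = np.array([0.1, 0.6, 0.2, 0.1])
chars = ['a', 'e', 'i', 'o']

# Draw a uniform random number and pick the first index whose cumulative
# probability reaches it; np.searchsorted does the scan over the sorted cumsum.
prob_cum = np.cumsum(probs)
rand = np.random.rand()
choice = chars[int(np.searchsorted(prob_cum, rand))]
print(choice)  # usually 'e', but any character can appear
```

Because `prob_cum` is sorted and ends at 1, `searchsorted` returns exactly the "first element greater than the random number" that the text describes, without an explicit loop.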
We can store it in a variable and print it out. It doesn't sound confusing at all!
```python
x = 42
print(x)
```
PythonQnA_1_vars_and_types.ipynb
michael-isaev/cse6040_qna
apache-2.0
Here, the variable x stores the number 42 as an integer. However, we can store the same number as a different type or within another data structure: as a float, a string, or part of a list or a tuple. Depending on the type of the variable, Python will print it slightly differently.
```python
x_float = float(42)
x_scientific = 42e0
x_str = '42'
print('42 as a float', x_float)
print('42 as a float in scientific notation', x_scientific)
print('42 as a string', x_str)
```
So far it looks pretty normal. float() adds a floating point to the integer. Scientific notation is typically a float (serious scientists don't work with integers!). 42 as a string looks exactly like we expected and similar to the integer, but their behaviors are different. The difference in behavior isn't obvious until we m...
```python
x_list = [x, x_str, x_float, x_scientific]
print("All 42 in a list:", x_list)
```
Here is the thing: Python won't show you quotes when you print a string on its own, but if you print a string inside another object, it encloses the string in single quotes. So each time you see these single quotes, you should understand that it's a string and not a number (at least for Python)! Let's see how:
```python
x_tuple = tuple(x_list)
x_set = set(x_list)
x_dict = {x_str: x, x_tuple: x_list}
print("All 42 in a list:", x_list)
print("All 42 in a tuple:", x_tuple)
print("All 42 in a set:", x_set)
print("A dict of 42 in different flavors", x_dict)
```
Wow, now you should be extreeeeeemely watchful! Look how the shape of brackets differs between a tuple and a list: lists use [brackets], whereas tuples use (parentheses). Look how both sets and dicts use {braces}. That might create some confusion, but each element of a set is just an object, whereas in a dictionary y...
```python
print('42 as integer', x, "variable type is", type(x))
print('42 as a float', x_float, "variable type is", type(x_float))
print('42 as a float in scientific notation', x_scientific, "variable type is", type(x_scientific))
print('42 as a string', x_str, "variable type is", type(x_str))
```
Using the type() function can be extremely useful during the debugging stage. However, quite often a simple print and a little attention to what's printed is enough to figure out what's going on. Some objects produce different results when you call print on them. For example, let's consider a frozenset, a built-in immu...
```python
x_frozenset = frozenset(x_list)
print("Here's a set of 42:\n", x_set)
print("Here's a frozenset of 42:\n", x_frozenset)
```
As you see, when we print a set and a frozenset, they look very different. A frozenset, like lots of other objects in Python, adds its type name when you print it. That makes it really hard to confuse a set and a frozenset! If you want to do something similar for your custom class, you can do it rather easily in Python. You just ne...
```python
class my_42:
    def __init__(self):
        self.n = 42

    def __str__(self):
        return 'Member of class my_42(' + str(self.n) + ')'

print('Just 42:', 42)
print('New class:', my_42())
```
Exercise. Now let's use our knowledge to practise and play with a function that takes a list and returns exactly the same list if every element of the list is a string. Otherwise, it returns a new list with all non-string elements converted to strings. To avoid confusion, the function also returns a flag variable showin...
```python
def list_converter(l):
    """
    l - input list
    Returns a list where all elements have been stringified
    as well as a flag to indicate if the list has been modified
    """
    assert (type(l) == list)
    flag = False
    for el in l:
        # print the type of el
        if type(el) != str:
            fl...
```
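The function body above is cut off, so here is one possible completion of the exercise: a sketch, not the author's solution, that follows the contract described (same list back if nothing changed, plus a flag):

```python
def list_converter(l):
    """Stringify all non-string elements of l.
    Returns (result, flag) where flag says whether anything was converted."""
    assert type(l) == list
    flag = False
    out = []
    for el in l:
        if type(el) != str:
            flag = True
            out.append(str(el))
        else:
            out.append(el)
    # return the original list untouched if nothing needed converting
    return (out, True) if flag else (l, False)

print(list_converter([42, '42', 42.0]))  # (['42', '42', '42.0'], True)
print(list_converter(['a', 'b']))        # (['a', 'b'], False)
```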
Helper functions
```python
def left_of_bracket(s):
    if '(' in s:
        needle = s.find('(')
        r = s[:needle-1].strip()
        return r
    else:
        return s

def state_abbreviation(state):
    spaces = state.count(' ')
    if spaces == 2:
        bits = state.split(' ')
        r = ''
        for b in bits:
            r ...
```
referenda_1999_geocode_polling_places.ipynb
jaketclarke/australian_referenda
mit
Import polling places

import_polling_places(filepath)
* Takes a path to a polling place file
* Returns a tidy data frame
* Renames some columns
* Dedups on ['state','polling_place']
```python
def import_polling_places(filepath):
    # read csv
    df_pp = pd.read_csv(filepath)

    # pick the columns I want to keep
    cols = [
        'State',
        'PollingPlaceNm',
        'PremisesNm',
        'PremisesAddress1',
        'PremisesAddress2',
        'PremisesAddress3',
        'P...
```
Import 1999 polling places
```python
def import_1999_pp(filepath):
    df_pp_1999 = pd.read_csv(filepath)

    # add blank columns for match types and lat/lng
    df_pp_1999['match_source'] = np.nan
    df_pp_1999['match_type'] = np.nan
    df_pp_1999['latitude'] = np.nan
    df_pp_1999['longitude'] = np.nan

    # tell it to index on state...
```
Matches

Pandas setting I need for the code below to behave: pandas generates warnings when you work with a data frame that's a copy of another, because it thinks I might believe I'm changing df_pp_1999 when I'm working with df_pp_1999_working. I'm turning this warning off because I'm doing this on purpose, so I can keep df_pp_1999 as a 'yet to...
pd.set_option('chained_assignment',None)
Functions

match_polling_places(df_pp_1999, df_pp, settings)

For the 1999 data frame, a given other polling place data frame, and a set of settings, run a merge and return the rows that matched based on the join you specified. E.g.: match_polling_places( df_pp_1999, df_pp, dict( keys = ['s...
```python
def match_polling_places(df1, df2, settings):
    # split up our meta field
    keys = settings['keys']
    match_source = settings['match_source']
    match_type = settings['match_type']

    # filter for those columns
    df_working = df1.reset_index()[[
        'state',
        'polling_place',
        'premi...
```
match_unmatched_polling_places(df1, settings)

This is a wrapper for match_polling_places. It only passes data that is NOT yet matched in df1 to the match function, so that we keep track of the point in our order at which we matched each row (rather than overriding it each time it matches). This will matter as ...
```python
def match_unmatched_polling_places(df1, settings):
    # get polling place file from settings
    filepath = settings['pp_filepath']
    df2 = import_polling_places(filepath)

    # work out which rows we haven't yet matched
    df1_unmatched = df1[df1.match_source.isnull()]

    # run match for those ...
```
match_status(df1)

A function to tell me, for a given data frame, what the match status is.
```python
def match_status(df1):
    # how many NaNs are in match_type?
    not_matched = len(df1[df1['match_type'].isnull()].index)

    # make a df for none
    none = pd.DataFrame(dict(
        match_type = 'Not yet matched',
        count = not_matched
    ), index=[0])

    if not_matched == len...
```
Match attempts

Match 1 - 2007 on premises name, state, and postcode. Other than schools that have moved, these places should be the same, and for schools that have moved, the postcode test should ensure it's not too far.
```python
# first match attempt - set up file
filepath = '1999_referenda_output/polling_places.csv'
df_pp_1999 = import_1999_pp(filepath)

# double check none are somehow magically matched yet
print('before')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_pollin...
```
Match 2 through 4 - 2010 through 2016 on premises name, state, and postcode. Other than schools that have moved, these places should be the same, and for schools that have moved, the postcode test should ensure it's not too far.
```python
## 2
print('before 2')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2010_election.csv',
    keys = ['state','premises','postcode'],
    match_source = '2010 Polling Places',
    match_type = 'Match 02 - state, premises, postcode'
)
...
```
Match 5 - 2007 polling places on polling place name, state, and postcode. This will match to a polling place name in a different location, as long as it is in the same suburb; for the purposes of this analysis, this should be good enough.
```python
print('before 5')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2007_election.csv',
    keys = ['state','polling_place','postcode'],
    match_source = '2007 Polling Places',
    match_type = 'Match 05 - state, polling_place, postcod...
```
Match 6-8 - 2010-2016 polling places on polling place name, state, and postcode
```python
print('before 6')
display(match_status(df_pp_1999))

# configure match settings
settings = dict(
    pp_filepath = 'federal_election_polling_places/pp_2010_election.csv',
    keys = ['state','polling_place','postcode'],
    match_source = '2010 Polling Places',
    match_type = 'Match 06 - state, polling_place, postcod...
```
Google geocoder

keys.json contains a Google Maps API key, so it's not in this notebook.
```python
def get_google_api_key():
    filepath = 'config/keys.json'
    with open(filepath) as data_file:
        data = json.load(data_file)
    key = data['google_maps']
    return key
```
Google geocode example
```python
def geocode_address(address):
    key = get_google_api_key()
    componentRestrictions = {'country': 'AU'}
    gmaps = googlemaps.Client(key=key)
    # geocode_result = gmaps.geocode(address)
    geocode_result = gmaps.geocode(address, componentRestrictions)
    return geocode_result

# display(geocode_address('1...
```
Match unmatched so far by google geocoder
```python
# test above function on a few rows
unmatched_places = df_pp_1999[df_pp_1999.match_source.isnull()]
geocode_matches = pd.DataFrame()
for row in unmatched_places.reset_index().to_dict('records'):
    row = geocode_polling_place(row)
    if geocode_matches.empty:
        geocode_matches = row
    else:
        ge...
```
Set the rest to the centroid of their suburb
```python
filepath = 'from_abs/ssc_2016_aust_centroid.csv'
subtown_centroids = results = pd.read_csv(filepath)

# create a column for abbreviated states
lambda_states = lambda x: state_abbreviation(x)
subtown_centroids['state'] = subtown_centroids['STE_NAME16'].apply(lambda_states)

# strip the brackets out of suburb name...
```
How many are left now?
```python
df_to_match = df_pp_1999[df_pp_1999.match_source.isnull()]
df_to_match = df_to_match.append(df_pp_1999[df_pp_1999['match_type'] == 'failed'])
display(df_to_match)
```
Hooray! WE HAVE A RESULT FOR EVERYWHERE Let's write a CSV:
df_pp_1999.to_csv( '1999_referenda_output/polling_places_geocoded.csv', sep = ',' )
Before we explore the pandas package, let's import it. By convention, we use pd to refer to pandas.
```python
%matplotlib inline
import pandas as pd
import numpy as np
```
Introduction_to_Pandas.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Series

A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.
```python
counts = pd.Series([223, 43, 53, 24, 43])
counts
type(counts)
```
If an index is not specified, a default sequence of integers is assigned as the index. We can access the values like an array:
```python
counts[0]
counts[1:4]
```
You can get the array representation and index object of the Series via its values and index attributes, respectively.
```python
counts.values
counts.index
```
We can assign meaningful labels to the index, if they are available:
```python
fruit = pd.Series([223, 43, 53, 24, 43],
                 index=['apple', 'orange', 'banana', 'pears', 'lemon'])
fruit
fruit.index
```
These labels can be used to refer to the values in the Series.
```python
fruit['apple']
fruit[['apple', 'lemon']]
```
We can give both the array of values and the index meaningful labels themselves:
```python
fruit.name = 'counts'
fruit.index.name = 'fruit'
fruit
```
Operations can be applied to a Series without losing the data structure. Use a boolean array to filter a Series:
```python
fruit > 50
fruit[fruit > 50]
```
Critically, the labels are used to align data when used in operations with other Series objects.
```python
fruit2 = pd.Series([11, 12, 13, 14, 15], index=fruit.index)
fruit2
fruit2 = fruit2.drop('apple')
fruit2
fruit2['grape'] = 18
fruit2
fruit3 = fruit + fruit2
fruit3
```
Contrast this with arrays, where arrays of the same length combine values element-wise; adding Series combines values with the same label in the resulting series. Notice that the missing values were propagated by addition.
```python
fruit3.dropna()
fruit3
fruit3.isnull()
```
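The label alignment and NaN propagation described above can also be sidestepped: Series.add accepts a fill_value that treats a missing label as a default. A small sketch with toy labels (not the fruit data above):

```python
import pandas as pd

s1 = pd.Series([1, 2], index=['apple', 'banana'])
s2 = pd.Series([10, 20], index=['banana', 'grape'])

# Plain + aligns on labels: 'apple' and 'grape' exist on only one side -> NaN.
print(s1 + s2)

# Series.add with fill_value=0 treats a missing label as 0 instead.
print(s1.add(s2, fill_value=0))
```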
DataFrame

A DataFrame is a tabular data structure, encapsulating multiple Series like columns in a spreadsheet. Each column can be a different value type (numeric, string, boolean, etc.).
```python
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
        'year': [2000, 2001, 2002, 2001, 2003],
        'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
df
len(df)     # Get the number of rows in the dataframe
df.shape    # Get the (rows, cols) of the dataframe
df.T
df.columns  # get the index of colu...
```
There are three basic ways to access the data in the dataframe:
* use DataFrame[] to access data quickly
* use DataFrame.iloc[row, col] for integer-position-based selection
* use DataFrame.loc[row, col] for label-based selection
```python
df
df['state']            # indexing by label
df[['state', 'year']]  # indexing by a list of labels
df[:2]                 # numpy-style indexing
df.iloc[0, 0]
df.iloc[0, :]
df.iloc[:, 1]
df.iloc[:2, 1:3]
df.loc[:, 'state']
df.loc[:, ['state', 'year']]
```
Add a new column and delete a column
```python
df['debt'] = np.random.randn(len(df))
df['rain'] = np.abs(np.random.randn(len(df)))
df
df = df.drop('debt', axis=1)
df
row1 = pd.Series([4.5, 'Nevada', 2005, 2.56], index=df.columns)
df.append(row1, ignore_index=True)
df.drop([0, 1])
```
Data filtering
```python
df['pop'] < 2
df
df.loc[df['pop'] < 2, 'pop'] = 2
df
df['year'] == 2001
(df['pop'] > 3) | (df['year'] == 2001)
df.loc[(df['pop'] > 3) | (df['year'] == 2001), 'pop'] = 3
df
```
Sorting index
```python
df.sort_index(ascending=False)
df.sort_index(axis=1, ascending=False)
```
Summarizing and Computing Descriptive Statistics

Built-in functions calculate values over rows or columns.
```python
df
df.loc[:, ['pop', 'rain']].sum()
df.loc[:, ['pop', 'rain']].mean()
df.loc[:, ['pop', 'rain']].var()
df.loc[:, ['pop', 'rain']].cumsum()
```
Apply functions to each column or row of a DataFrame
```python
df
df.loc[:, ['pop', 'rain']].apply(lambda x: x.max() - x.min())  # apply a function to each column
```
Group and apply
```python
df
df.groupby(df['state']).mean()
df.groupby(df['state'])[['pop', 'rain']].apply(lambda x: x.max() - x.min())
grouped = df.groupby(df['state'])
group_list = []
for name, group in grouped:
    print(name)
    print(group)
    print('\n')
```
Set a hierarchical index
```python
df
df_h = df.set_index(['state', 'year'])
df_h
df_h.index.is_unique
df_h.loc['Ohio', :].max() - df_h.loc['Ohio', :].min()
```
Import and Store Data

Read and write a csv file.
```python
df
df.to_csv('test_csv_file.csv', index=False)
%more test_csv_file.csv
df_csv = pd.read_csv('test_csv_file.csv')
df_csv
```
Read and write an Excel file.
```python
writer = pd.ExcelWriter('test_excel_file.xlsx')
df.to_excel(writer, 'sheet1', index=False)
writer.save()
df_excel = pd.read_excel('test_excel_file.xlsx', sheetname='sheet1')
df_excel
pd.read_table??
```
Filtering out Missing Data You have a number of options for filtering out missing data.
```python
df = pd.DataFrame([[1, 6.5, 3.], [1., np.nan, np.nan],
                   [np.nan, np.nan, np.nan], [np.nan, 6.5, 3.]])
df
cleaned = df.dropna()  # delete rows with NaN values
cleaned
df.dropna(how='all')   # delete rows where all values are NaN
df.dropna(thresh=2)    # keep rows with at least thresh non-NaN values
df...
```
Plotting in DataFrame
```python
variables = pd.DataFrame({'normal': np.random.normal(size=100),
                          'gamma': np.random.gamma(1, size=100),
                          'poisson': np.random.poisson(size=100)})
variables.head()
variables.shape
variables.cumsum().plot()
variables.cumsum().plot(subplots=True)
```
HTTP Methods with curl

GET
! curl https://jsonplaceholder.typicode.com/photos/1
learn_stem/devops/curl-test-rest-api.ipynb
wgong/open_source_learning
apache-2.0
POST
```shell
! curl "https://jsonplaceholder.typicode.com/users/10"
! curl "https://jsonplaceholder.typicode.com/users/id=10&username=Moria.Stanley"
! curl "https://jsonplaceholder.typicode.com/users/10"
```
Delete
```shell
! curl -X DELETE https://jsonplaceholder.typicode.com/users/10
!curl https://jsonplaceholder.typicode.com/users/10
```
Install JSON-server

http://nmotw.in/json-server/
https://github.com/typicode/json-server

$ npm install -g json-server
$ json-server --watch db.json

Get the entire db:
!curl http://localhost:3000/db
HTTP Methods

GET - query
!curl http://localhost:3000/posts
POST - add new
```shell
! curl -X POST -d "id=21&title=learn JSON&author=gong" http://localhost:3000/posts/
! curl -X POST -d "id=5&title=learn Docker Compose&author=Mei" http://localhost:3000/posts/
! curl -X POST -d "id=2&title=Play with Jenkins&author=oracle" http://localhost:3000/posts/
```
NOTE: JSON-server does not know how to parse JSON string
! curl -i -X POST http://localhost:3000/posts/ -d '{"id": 6, "title": "learn Jenkins", "author": "gong"}' --header "Content-Type: application/json"
PUT - update (whole node)
```shell
!curl -X PUT -d "id=21&title=learn REST&author=wen" http://localhost:3000/posts/21
!curl -i -X PUT -d "title=learn Ansible&author=albert" http://localhost:3000/posts/21
!curl http://localhost:3000/posts/21
```
PATCH (partial update)
!curl -i -X PATCH -d "author=annabella" http://localhost:3000/posts/21
DELETE
```shell
!curl http://localhost:3000/posts
!curl -X DELETE http://localhost:3000/posts/t1FLpvx
```
Work with a json file

https://stackoverflow.com/questions/18611903/how-to-pass-payload-via-json-file-for-curl
```shell
!cat request1.json
! curl -vX POST http://localhost:3000/posts -d @request1.json --header "Content-Type: application/json"
!curl -X GET http://localhost:3000/posts/4
```
SLICE
```shell
!curl -X GET http://localhost:3000/posts
# the URL must be quoted: an unquoted & would background the command
!curl "http://localhost:3000/posts?_start=1&_end=2"
```
SORT
```shell
!curl "http://localhost:3000/posts?_sort=id&_order=DESC"
!curl "http://localhost:3000/posts?_sort=id&_order=DESC"
!curl http://localhost:3000/posts/2
```
Search
```shell
!curl http://localhost:3000/posts?q=Docker
!curl http://localhost:3000/posts?q=learn
!curl http://localhost:3000/posts?q=21
!curl http://localhost:3000/comments
!curl -X POST http://localhost:3000/comments -d "id=6&postId=4&body=Apache is powerful"
!curl http://localhost:3000/profile
```
Cross-Validation and Bias-Variance Decomposition

Cross-Validation

Implementing 4-fold cross-validation below:
```python
from helpers import load_data

# load dataset
x, y = load_data()

def build_k_indices(y, k_fold, seed):
    """build k indices for k-fold."""
    num_row = y.shape[0]
    interval = int(num_row / k_fold)
    np.random.seed(seed)
    indices = np.random.permutation(num_row)
    k_indices = [indices[k * interval: (k + 1)...
```
ml/ex04/template/ex04.ipynb
rusucosmin/courses
mit
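The build_k_indices cell above is cut off. A self-contained sketch of the same k-fold index construction, runnable without the course helpers (the tail of the function is my reconstruction, not the template's exact code):

```python
import numpy as np

def build_k_indices(y, k_fold, seed):
    """Shuffle the row indices and split them into k equal folds."""
    num_row = y.shape[0]
    interval = int(num_row / k_fold)
    np.random.seed(seed)
    indices = np.random.permutation(num_row)
    return np.array([indices[k * interval:(k + 1) * interval]
                     for k in range(k_fold)])

y = np.arange(12)
k_indices = build_k_indices(y, 4, seed=1)
print(k_indices.shape)  # (4, 3): 4 folds of 3 indices each
```

Each row of `k_indices` is then used in turn as the test fold while the remaining rows form the training set.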
Bias-Variance Decomposition

Visualize the bias-variance trade-off by implementing the function bias_variance_demo() below:
```python
from least_squares import least_squares
from split_data import split_data
from plots import bias_variance_decomposition_visualization

def bias_variance_demo():
    """The entry."""
    # define parameters
    seeds = range(100)
    num_data = 10000
    ratio_train = 0.005
    degrees = range(1, 10)

    # define l...
```
Make the grid and training data

In the next cell, we create the grid, along with the 10000 training examples and labels. After running this cell, we create three important tensors:
* grid is a tensor that is grid_size x 2 and contains the 1D grid for each dimension.
* train_x is a tensor containing the full 10000 training...
```python
grid_bounds = [(0, 1), (0, 2)]
grid_size = 25
grid = torch.zeros(grid_size, len(grid_bounds))
for i in range(len(grid_bounds)):
    grid_diff = float(grid_bounds[i][1] - grid_bounds[i][0]) / (grid_size - 2)
    grid[:, i] = torch.linspace(grid_bounds[i][0] - grid_diff, grid_bounds[i][1] + grid_diff, grid_size)
train_x...
```
examples/02_Scalable_Exact_GPs/Grid_GP_Regression.ipynb
jrg365/gpytorch
mit
Creating the Grid GP Model In the next cell we create our GP model. Like other scalable GP methods, we'll use a scalable kernel that wraps a base kernel. In this case, we create a GridKernel that wraps an RBFKernel.
```python
class GridGPRegressionModel(gpytorch.models.ExactGP):
    def __init__(self, grid, train_x, train_y, likelihood):
        super(GridGPRegressionModel, self).__init__(train_x, train_y, likelihood)
        num_dims = train_x.size(-1)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpy...
```
In the next cell, we create a set of 400 test examples and make predictions. Note that unlike other scalable GP methods, testing is more complicated. Because our test data can be different from the training data, in general we may not be able to avoid creating a num_train x num_test (e.g., 10000 x 400) kernel matrix be...
```python
model.eval()
likelihood.eval()
n = 20
test_x = torch.zeros(int(pow(n, 2)), 2)
for i in range(n):
    for j in range(n):
        test_x[i * n + j][0] = float(i) / (n-1)
        test_x[i * n + j][1] = float(j) / (n-1)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    observed_pred = likelihood(model(test_x))
```
Read in dataset and split into fraud/non-fraud
```python
dataset01, dataset0, dataset1 = utils_data.get_real_dataset()
datasets = [dataset0, dataset1]
out_folder = utils_data.FOLDER_REAL_DATA_ANALYSIS
```
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Print some basic info about the dataset
```python
print(dataset01.head())
data_stats = utils_data.get_real_data_stats()
data_stats.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'aggregated_data.csv'))
display(data_stats)
```
Percentage of fraudulent cards also in genuine transactions:
```python
most_used_card = dataset0['CardID'].value_counts().index[0]
print("Card (ID) with most transactions: ", most_used_card)
```
1. TIME of TRANSACTION: Here we analyse the number of transactions over time.

1.1 Activity per day:
```python
plt.figure(figsize=(15, 5))
plt_idx = 1
for d in datasets:
    plt.subplot(1, 2, plt_idx)
    trans_dates = d["Global_Date"].apply(lambda date: date.date())
    all_trans = trans_dates.value_counts().sort_index()
    date_num = matplotlib.dates.date2num(all_trans.index)
    plt.plot(date_num, all_trans.values, 'k.', la...
```
Analysis: - It's interesting that there seems to be some kind of structure in the fraudster behavior. I.e., there are many days on which the number of frauds is exactly the same. This must either be due to some peculiarity in the data (are these days where fraud was investigated more?) or because the fraudsters do coo...
```python
monthdays_2016 = np.unique([dates_2016[i].day for i in range(366)], return_counts=True)
monthdays_2016 = monthdays_2016[1][monthdays_2016[0]-1]
plt.figure(figsize=(12, 5))
plt_idx = 1
monthday_frac = np.zeros((31, 2))
idx = 0
for d in datasets:
    # get the average number of transactions per day in a month
    m...
```
Analysis:
- the number of transactions does not depend on the day of the month in a utilisable way

1.3 Activity per weekday:
```python
weekdays_2016 = np.unique([dates_2016[i].weekday() for i in range(366)], return_counts=True)
weekdays_2016 = weekdays_2016[1][weekdays_2016[0]]
plt.figure(figsize=(12, 5))
plt_idx = 1
weekday_frac = np.zeros((7, 2))
idx = 0
for d in datasets:
    weekday = d["Local_Date"].apply(lambda date: date.weekday()).value_...
```
Analysis:
- the number of transactions does not depend on the day of the week in a utilisable way

1.4 Activity per month in a year:
```python
monthdays = np.array([31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
plt.figure(figsize=(12, 5))
plt_idx = 1
month_frac = np.zeros((12, 2))
idx = 0
for d in datasets:
    month = d["Local_Date"].apply(lambda date: date.month).value_counts().sort_index()
    # correct for different number of days in a month
    ...
```
Analysis:
- people buy more in summer than in winter

1.5 Activity per hour of day:
```python
plt.figure(figsize=(12, 5))
plt_idx = 1
hour_frac = np.zeros((24, 2))
idx = 0
for d in datasets:
    hours = d["Local_Date"].apply(lambda date: date.hour).value_counts().sort_index()
    hours /= 366
    if idx > -1:
        hour_frac[hours.index.values, idx] = hours.values / np.sum(hours.values, axis=0)
    ...
```
Analysis:
- the hour of day is very important: people spend most in the evening and least during the night; fraud is usually committed at night
```python
# extract only hours
date_hour_counts = dataset0["Local_Date"].apply(lambda d: d.replace(minute=0, second=0)).value_counts(sort=False)
hours = np.array(list(map(lambda d: d.hour, list(date_hour_counts.index))))
counts = date_hour_counts.values
hour_mean = np.zeros(24)
hour_min = np.zeros(24)
hour_max = np.zeros(24)
ho...
```
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
1.6 TEST: Do the fractions calculated above lead to the correct number of transactions?
# total number of transactions we want in one year aggregated_data = pd.read_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'aggregated_data.csv'), index_col=0) trans_per_year = np.array(aggregated_data.loc['transactions'].values, dtype=np.float)[1:] # transactions per day in a month frac_monthday = np.load(join(utils_da...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
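The consistency check above (the cell is truncated here) multiplies the stored fractions back together to recover transaction counts. A minimal sketch of the idea, using made-up uniform fractions and a hypothetical yearly total rather than the notebook's saved `.npy` files:

```python
import numpy as np

# Hypothetical per-month and per-hour fractions; each must sum to 1.
frac_month = np.full(12, 1 / 12)
frac_hour = np.full(24, 1 / 24)
trans_per_year = 120_000  # made-up yearly total

# Expected transactions in one (month, hour) bucket:
expected_june_8pm = trans_per_year * frac_month[5] * frac_hour[20]

# Sanity check: summing the product over all buckets recovers the total.
total_check = trans_per_year * np.outer(frac_month, frac_hour).sum()
```

If the fractions are proper distributions, `total_check` equals `trans_per_year` exactly, which is the property the test cell verifies.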
2. COUNTRY 2.1 Country per transaction:
country_counts = pd.concat([d['Country'].value_counts() for d in datasets], axis=1) country_counts.fillna(0, inplace=True) country_counts.columns = ['non-fraud', 'fraud'] country_counts[['non-fraud', 'fraud']] /= country_counts.sum(axis=0) # save the resulting data country_counts.to_csv(join(utils_data.FOLDER_SIMULATO...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
3. CURRENCY 3.1 Currency per Transaction
currency_counts = pd.concat([d['Currency'].value_counts() for d in datasets], axis=1) currency_counts.fillna(0, inplace=True) currency_counts.columns = ['non-fraud', 'fraud'] currency_counts[['non-fraud', 'fraud']] /= currency_counts.sum(axis=0) currencies_large = [] for c in ['non-fraud', 'fraud']: currencies_lar...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
3.2 Currency per country Check how many cards make purchases in several currencies:
curr_per_cust = dataset0[['CardID', 'Currency']].groupby('CardID')['Currency'].value_counts().index.get_level_values(0) print(len(curr_per_cust)) print(len(curr_per_cust.unique())) print(len(curr_per_cust) - len(curr_per_cust.unique()))
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
CONCLUSION: Only 243 cards out of 54,000 purchased things in several currencies. Estimate the probability of selecting a currency, given a country:
curr_per_country0 = dataset0.groupby(['Country'])['Currency'].value_counts(normalize=True) curr_per_country1 = dataset1.groupby(['Country'])['Currency'].value_counts(normalize=True) curr_per_country0.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'currency_per_country0.csv')) curr_per_country1.to_csv(join(utils_data.F...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
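The conditional probability above comes straight from a per-country normalised value count. The same pattern on a toy frame (the column names mirror the notebook's dataset; the data are made up):

```python
import pandas as pd

# Made-up transactions; column names mirror the notebook's dataset.
df = pd.DataFrame({
    "Country":  ["DE", "DE", "DE", "US", "US"],
    "Currency": ["EUR", "EUR", "USD", "USD", "USD"],
})

# P(currency | country): within each country, normalise the currency counts.
curr_per_country = df.groupby("Country")["Currency"].value_counts(normalize=True)
```

The result is a Series indexed by (Country, Currency) whose values sum to 1 within each country, e.g. `curr_per_country[("DE", "EUR")]` is 2/3 here.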
4. Merchants 4.1 Merchants per Currency
plt.figure(figsize=(7,5)) currencies = dataset01['Currency'].unique() merchants = dataset01['MerchantID'].unique() for curr_idx in range(len(currencies)): for merch_idx in range(len(merchants)): plt.plot(range(len(currencies)), np.zeros(len(currencies))+merch_idx, 'r-', linewidth=0.2) if currencies[...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
We conclude from this that most merchants only sell things in one currency; thus, we will let each customer select the merchant given the currency that the customer has (which is unique). Estimate the probability of selecting a merchant, given the currency:
merch_per_curr0 = dataset0.groupby(['Currency'])['MerchantID'].value_counts(normalize=True) merch_per_curr1 = dataset1.groupby(['Currency'])['MerchantID'].value_counts(normalize=True) merch_per_curr0.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_per_currency0.csv')) merch_per_curr1.to_csv(join(utils_data.FO...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
4.2 Number of transactions per merchant
merchant_count0 = dataset0['MerchantID'].value_counts().sort_index() merchant_count1 = dataset1['MerchantID'].value_counts().sort_index() plt.figure(figsize=(15,10)) ax = plt.subplot(2, 1, 1) ax.bar(merchant_count0.index.values, merchant_count0.values) rects = ax.patches for rect, label in zip(rects, merchant_count0....
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
5. Transaction Amount 5.1 Amount over time
plt.figure(figsize=(12, 10)) plt_idx = 1 for d in datasets: plt.subplot(2, 1, plt_idx) plt.plot(range(d.shape[0]), d['Amount'], 'k.') # plt.plot(date_num, amount, 'k.', label='num trans.') # plt.plot(date_num, np.zeros(len(date_num))+np.mean(all_trans), 'g',label='average') plt_idx += 1 # plt.ti...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
5.2 Amount distribution
plt.figure(figsize=(10,5)) bins = [0, 5, 25, 50, 100, 1000, 11000] plt_idx = 1 for d in datasets: amount_counts, loc = np.histogram(d["Amount"], bins=bins) amount_counts = np.array(amount_counts, dtype=np.float) amount_counts /= np.sum(amount_counts) plt.subplot(1, 2, plt_idx) am_bot = 0 for i i...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
For each merchant, we will have a probability distribution over the amount spent
from scipy.optimize import curve_fit def sigmoid(x, x0, k): y = 1 / (1 + np.exp(-k * (x - x0))) return y num_merchants = data_stats.loc['num merchants', 'all'] num_bins = 20 merchant_amount_distr = np.zeros((2, num_merchants, 2*num_bins+1)) plt.figure(figsize=(15, 5)) plt_idx = 1 for dataset in [dataset0, da...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
We conclude that normal customers and fraudsters follow roughly the same distribution, so we will use only one distribution per merchant, irrespective of whether a genuine or fraudulent customer is making the transaction.
from scipy.optimize import curve_fit def sigmoid(x, x0, k): y = 1 / (1 + np.exp(-k * (x - x0))) return y num_merchants = data_stats.loc['num merchants', 'all'] merchant_amount_parameters = np.zeros((2, num_merchants, 4)) plt.figure(figsize=(6, 3)) plt_idx = 1 dataset = dataset0 m = dataset0['MerchantID'].un...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
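The cells above fit the same two-parameter sigmoid to each merchant's empirical amount distribution. A self-contained sketch of that fit on synthetic data (the log-normal amounts are an assumption for illustration, not the real dataset):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    # same two-parameter form as in the notebook cells above
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Synthetic spend amounts for one merchant (assumed log-normal).
rng = np.random.default_rng(0)
amounts = rng.lognormal(mean=3.0, sigma=0.8, size=500)

# Empirical CDF: sorted amounts against cumulative fraction.
xs = np.sort(amounts)
cdf = np.arange(1, len(xs) + 1) / len(xs)

# Fit the sigmoid to the empirical CDF; p0 is a rough initial guess.
(x0_fit, k_fit), _ = curve_fit(sigmoid, xs, cdf, p0=[np.median(xs), 0.1])

# Sampling an amount back: invert the fitted sigmoid at a uniform draw.
u = rng.uniform(1e-6, 1 - 1e-6)
sampled_amount = x0_fit - np.log(1.0 / u - 1.0) / k_fit
```

Once `(x0, k)` are stored per merchant, sampling a transaction amount is just this inverse-CDF draw.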
Customers Here we want to find out for how long customers/fraudsters keep returning, i.e., how often the same credit card is used over time.
plt.figure(figsize=(15, 30)) plt_idx = 1 dist_transactions = [[], []] for d in datasets: # d = d.loc[d['Date'].apply(lambda date: date.month) < 7] # d = d.loc[d['Date'].apply(lambda date: date.month) > 3] plt.subplot(1, 2, plt_idx) trans_idx = 0 for card in dataset01['CardID'].unique(): card...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
At a given transaction, estimate the probability of doing another transaction with the same card.
prob_stay = np.zeros(2) for k in range(2): dataset = [dataset0, dataset1][k] creditcards = dataset.loc[dataset['Global_Date'].apply(lambda d: d.month) > 3] creditcards = creditcards.loc[creditcards['Global_Date'].apply(lambda d: d.month) < 6] creditcard_counts = creditcards['CardID'].value_counts(...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
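One simple way to read the quantity above is a geometric model: each card contributes one initial transaction plus one "stay" event per repeat transaction. A sketch with made-up card IDs (the truncated cell may use a different estimator):

```python
import pandas as pd

# Made-up card IDs, one entry per transaction.
cards = pd.Series(["a", "a", "a", "b", "b", "c"])

counts = cards.value_counts()
num_transactions = counts.sum()  # 6 transactions in total
num_cards = len(counts)          # 3 distinct cards

# Geometric model: every transaction except each card's first is a "stay".
prob_stay = (num_transactions - num_cards) / num_transactions
```

Here `prob_stay` is 0.5: half of the transactions are repeats of an already-seen card.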
Fraud behaviour
cards0 = dataset0['CardID'].unique() cards1 = dataset1['CardID'].unique() print('cards total:', len(np.union1d(cards0, cards1))) print('fraud cards:', len(cards1)) print('intersection:', len(np.intersect1d(cards0, cards1))) # go through the cards that were in both sets cards0_1 = [] cards1_0 = [] cards010 = [] for ci...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
When a fraudster uses an existing card, are the country and currency always the same?
plt.figure(figsize=(10, 25)) dist_transactions = [] trans_idx = 0 for card in data_compromised['CardID'].unique(): cards_used = data_compromised.loc[data_compromised['CardID'] == card, ['Global_Date', 'Target', 'Country', 'Currency']] if len(cards_used['Country'].unique()) > 1 or len(cards_used['Currency'].un...
data/analyse_data.ipynb
lmzintgraf/MultiMAuS
mit
Now define the double_slit function and make it interactive:
#Quantum double-slit #define the experimental parameters #d = 15. # (micron) dist. between slits #a = 10. # (micron) slit width. #L = 1. # (m) dist. from slit to screen #lam = 632.8 # (nm) He-Neon laser def double_slit(d=15.,a=10.,L=3.,lam=632.8,N=0): #convert d and a in microns to meters dm = d*1.e-6 ...
notebooks/DoubleSlit.ipynb
dedx/STAR2015
mit
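The truncated `double_slit` function above converts microns and nanometres to metres before evaluating the screen pattern. The standard Fraunhofer closed form it is presumably built on, with the same default parameters, is:

```python
import numpy as np

def double_slit_intensity(x, d=15e-6, a=10e-6, L=3.0, lam=632.8e-9):
    """Fraunhofer double-slit intensity (normalised to 1 at x = 0).

    x: screen position (m); d: slit separation; a: slit width;
    L: slit-to-screen distance; lam: wavelength -- all in metres,
    matching the notebook's defaults of 15 um, 10 um, 3 m, 632.8 nm.
    """
    # single-slit diffraction envelope; np.sinc(t) = sin(pi*t)/(pi*t)
    envelope = np.sinc(a * x / (lam * L)) ** 2
    # two-slit interference fringes with spacing lam*L/d
    fringes = np.cos(np.pi * d * x / (lam * L)) ** 2
    return envelope * fringes

x = np.linspace(-0.2, 0.2, 2001)
intensity = double_slit_intensity(x)
```

The `cos^2` term gives the closely spaced fringes and the `sinc^2` envelope modulates their heights, which is the structure the interactive plot displays.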
Bioscrape Models Chemical Reactions Bioscrape models consist of a set of species and a set of reactions (delays will be discussed later). These models can be simulated either stochastically via SSA or deterministically as an ODE. Each reaction is of the form ${INPUTS} \xrightarrow[]{\rho(.)} {OUTPUTS}$ Here, INPUTS rep...
from bioscrape.simulator import py_simulate_model from bioscrape.types import Model #Create a list of species names (strings) species = ["G", "T", "X", "I"] #create a list of parameters in the form (param_name[string], param_val[number]) params = [("ktx", 1.5), ("ktl", 10.0), ("KI", 10), ("n", 2.0), ("KR", 20), ("del...
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
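The stochastic simulation option mentioned above is an SSA over exactly the reaction form described: multiset inputs and outputs plus a propensity function. A generic Gillespie sketch in plain NumPy (not the bioscrape API) for a single birth-death species:

```python
import numpy as np

def gillespie(x0, propensities, stoich, t_end, rng):
    """Minimal SSA sketch -- generic, not the bioscrape implementation.

    propensities: function state -> propensity vector (one entry per reaction)
    stoich: (n_reactions x n_species) state-change matrix
    """
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        props = propensities(x)
        total = props.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)            # time to next event
        r = rng.choice(len(props), p=props / total)  # which reaction fires
        x += stoich[r]
    return x

# Birth-death of one species X: 0 -> X (rate k), X -> 0 (rate d*X).
k, d = 10.0, 0.1
propensities = lambda x: np.array([k, d * x[0]])
stoich = np.array([[+1], [-1]])
rng = np.random.default_rng(0)
x_final = gillespie([0], propensities, stoich, t_end=200.0, rng=rng)
```

For this birth-death system the stationary distribution is Poisson with mean k/d = 100, so a long run ends near 100 copies of X.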