3) Splitting data into training and testing sets Our data is now nice and squeaky clean, which definitely always happens in real life! Next up, let's scale the data and split it into training and test sets.
```python
from sklearn import preprocessing
from sklearn.model_selection import train_test_split  # moved here from sklearn.cross_validation in newer scikit-learn

# Scale and split dataset
X_scaled = preprocessing.scale(X)

# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X_scaled, y, random_state=1)
```
ensembling/ensembling.ipynb
nslatysheva/data_science_blogging
gpl-3.0
4. Running algorithms on the data Now it's time to train some algorithms. We are doing binary classification; we could also have used logistic regression, kNN, etc. 4.1 Random forests Let's build a random forest. A great explanation of random forests can be found here. Briefly, random forests build a collection o...
```python
from sklearn import metrics
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV  # moved here from sklearn.grid_search in newer scikit-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = np.arange(1, 30, 5)
max_features ...
```
93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy with the out-of-the-box (untuned) random forests, which actually isn't terrible. 2) Second algorithm: support vector machines Let's train our second algorithm, support vector machin...
```python
from sklearn.svm import SVC

# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=2)
C_range = 2. ** np.arange(-5, 15, step=2)
hyperparameters = [{'gamma': g_range, 'C': C_range}]

# Grid search using cross-validation
grid = GridSearc...
```
Looks good! This is similar to the performance we saw with random forests. 3) Third algorithm: neural network Finally, let's jump on the hype wagon and throw neural networks at our problem. Neural networks (NNs) represent a different way of thinking about machine learning algorithms. A great place to start learning ...
```python
from multilayer_perceptron import multilayer_perceptron

# Search for good hyperparameter values
# Specify values to grid search over
layer_size_range = [(3, 2), (10, 10), (2, 2, 2), 10, 5]  # different network shapes
learning_rate_range = np.linspace(.1, 1, 3)
hyperparameters = [{'hidden_layer_sizes': layer_size_range, 'learnin...
```
Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay, it could still be picking up on a signal that the random forest and SVM weren't. Machine learning algorithms... ensemble! 4) Majority vote on classifications
```python
# here's a rough solution
import collections

# stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([RF_predictions, SVM_predictions, NN_predictions])).T
predictions.columns = ['RF', 'SVM', 'NN']
predictions = pd.DataFrame(np.where(predictions == 'yes', 1, 0), columns=p...
```
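The rough solution above is truncated; as a self-contained sketch of the same majority-vote idea (with made-up prediction lists standing in for `RF_predictions`, `SVM_predictions` and `NN_predictions`), one could write:

```python
from collections import Counter

# Hypothetical per-sample predictions from three classifiers (stand-ins for
# the notebook's RF_predictions, SVM_predictions and NN_predictions).
rf_preds  = ['yes', 'no',  'yes', 'no']
svm_preds = ['yes', 'yes', 'no',  'no']
nn_preds  = ['no',  'yes', 'yes', 'no']

def majority_vote(*prediction_lists):
    """Return the most common label for each sample across classifiers."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*prediction_lists)]

ensemble_preds = majority_vote(rf_preds, svm_preds, nn_preds)
print(ensemble_preds)  # ['yes', 'yes', 'yes', 'no']
```

With an odd number of classifiers there are no ties, so the vote is always well defined.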
Lists Lists group together data. Many languages have arrays (we'll look at those in a bit in python), but unlike arrays in most languages, lists can hold data of different types -- they don't need to be homogeneous. The data can be a mix of integers, floating point or complex numbers, strings, or other objects (includ...
```python
a = [1, 2.0, "my list", 4]
print(a)
```
day-1/python-advanced-datatypes.ipynb
sbu-python-summer/python-tutorial
bsd-3-clause
We can index a list to get a single element -- remember that python starts counting at 0:
print(a[2])
Like with strings, mathematical operators are defined on lists:
print(a*2)
The len() function returns the length of a list
print(len(a))
Unlike strings, lists are mutable -- you can change elements in a list easily
```python
a[1] = -2.0
a

a[0:1] = [-1, -2.1]   # this will put two items in the spot where 1 existed before
a
```
Note that lists can even contain other lists:
```python
a[1] = ["other list", 3]
a
```
Just like everything else in python, a list is an object that is the instance of a class. Classes have methods (functions) that know how to operate on an object of that class. There are lots of methods that work on lists. Two of the most useful are append, to add to the end of a list, and pop, to remove the last elem...
```python
a.append(6)
a

a.pop()
a
```
<div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div> An operation we'll see a lot is to begin with an empty list and add elements to it. An empty list is created as: a = []

- Create an empty list
- Append the integers 1 through 10 to it.
- Now pop them out ...
```python
a = []
a.append(1)
a.append(2)
a.append(3)
a.append(4)
a.append(5)
a

a.pop()
a.pop()
a.pop()
a.pop()
a.pop()   # the list is now empty; popping again would raise an IndexError
```
copying may seem a little counterintuitive at first. The best way to think about this is that your list lives in memory somewhere and when you do a = [1, 2, 3, 4] then the variable a is set to point to that location in memory, so it refers to the list. If we then do b = a then b will also point to that same location ...
```python
a = [1, 2, 3, 4]
b = a   # both a and b refer to the same list object in memory
print(a)

a[0] = "changed"
print(b)
```
if you want to create a new object in memory that is a copy of another, then you can either index the list, using : to get all the elements, or use the list() function:
```python
c = list(a)   # you can also do c = a[:], which basically slices the entire list
a[1] = "two"
print(a)
print(c)
```
Things get a little complicated when a list contains another mutable object, like another list. Then the copy we looked at above is only a shallow copy. Look at this example&mdash;the list within the list here is still the same object in memory for our two copies:
```python
f = [1, [2, 3], 4]
print(f)

g = list(f)
print(g)
```
Now we are going to change an element of that list [2, 3] inside of our main list. We need to index f once to get that list, and then a second time to index that list:
```python
f[1][0] = "a"
print(f)
print(g)
```
Note that the change occurred in both&mdash;since that inner list is shared in memory between the two. Note that we can still change one of the other values without it being reflected in the other list&mdash;this was made distinct by our shallow copy:
```python
f[0] = -1
print(g)
print(f)
```
Note: this is what is referred to as a shallow copy. If the original list had any special objects in it (like another list), then the new copy and the old copy will still point to that same object. There is a deep copy method when you really want everything to be unique in memory. When in doubt, use the id() function...
print(id(a), id(b), id(c))
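For completeness, here is a short sketch of the deep copy mentioned above, using the standard library's copy module, on the same kind of nested list:

```python
import copy

f = [1, [2, 3], 4]
g = copy.deepcopy(f)   # deep copy: the inner list is duplicated too

f[1][0] = "a"
print(f)   # [1, ['a', 3], 4]
print(g)   # [1, [2, 3], 4] -- unaffected, unlike with a shallow copy
print(f[1] is g[1])   # False: the inner lists are distinct objects
```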
There are lots of other methods that work on lists (remember, ask for help)
```python
my_list = [10, -1, 5, 24, 2, 9]
my_list.sort()
print(my_list)

print(my_list.count(-1))
my_list

help(a.insert)
a.insert(3, "my inserted element")
a
```
joining two lists is simple. Like with strings, the + operator concatenates:
```python
b = [1, 2, 3]
c = [4, 5, 6]
d = b + c
print(d)
```
Dictionaries A dictionary stores data as a key:value pair. Unlike a list where you have a particular order, the keys in a dictionary allow you to access information anywhere easily:
```python
my_dict = {"key1": 1, "key2": 2, "key3": 3}
print(my_dict["key1"])
```
you can add a new key:value pair easily, and it can be of any type
```python
my_dict["newkey"] = "new"
print(my_dict)
```
Note that a dictionary is unordered. You can also easily get the list of keys that are defined in a dictionary
```python
keys = list(my_dict.keys())
print(keys)
```
and check easily whether a key exists in the dictionary using the in operator
```python
print("key1" in keys)
print("invalidKey" in keys)
```
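In fact, the in operator also works directly on the dictionary itself (it checks the keys), so building the intermediate keys list isn't needed:

```python
my_dict = {"key1": 1, "key2": 2, "key3": 3}

print("key1" in my_dict)        # True -- `in` checks the keys directly
print("invalidKey" in my_dict)  # False
```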
List Comprehensions list comprehensions provide a compact way to initialize lists. Some examples from the tutorial
```python
squares = [x**2 for x in range(10)]
squares
```
here we use another python type, the tuple, to combine numbers from two lists into a pair
[(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
<div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div> Use a list comprehension to create a new list from squares containing only the even numbers. It might be helpful to use the modulus operator, % <hr> Tuples tuples are immutable -- they cannot be chang...
```python
a = (1, 2, 3, 4)
print(a)
```
We can unpack a tuple:
```python
w, x, y, z = a
print(w)
print(w, x, y, z)
```
Since a tuple is immutable, we cannot change an element:
```python
a[0] = 2   # raises a TypeError: tuples do not support item assignment
```
But we can turn it into a list, and then we can change it
```python
z = list(a)
z[0] = "new"
print(z)
```
Control Flow To write a program, we need the ability to iterate and take action based on the values of a variable. This includes if-tests and loops. Python uses whitespace to denote a block of code. While loop A simple while loop&mdash;notice the indentation to denote the block that is part of the loop. Here we also u...
```python
n = 0
while n < 10:
    print(n)
    n += 1
```
This was a very simple example. But often we'll use the range() function in this situation. Note that range() can take a stride.
```python
for n in range(2, 10, 2):
    print(n)

print(list(range(10)))
```
if statements if allows for branching. python does not have a select/case statement like some other languages, but if, elif, and else can reproduce any branching functionality you might need.
```python
x = 0
if x < 0:
    print("negative")
elif x == 0:
    print("zero")
else:
    print("positive")
```
Iterating over elements it's easy to loop over items in a list or any iterable object. The in operator is the key here.
```python
alist = [1, 2.0, "three", 4]
for a in alist:
    print(a)

for c in "this is a string":
    print(c)
```
We can combine loops and if-tests to do more complex logic, like break out of the loop when you find what you're looking for
```python
n = 0
for a in alist:
    if a == "three":
        break
    else:
        n += 1
print(n)
```
(for that example, however, there is a simpler way)
print(alist.index("three"))
for dictionaries, you can also loop over the elements
```python
my_dict = {"key1": 1, "key2": 2, "key3": 3}

for k, v in my_dict.items():
    print("key = {}, value = {}".format(k, v))  # notice how we do the formatting here

for k in sorted(my_dict):
    print(k, my_dict[k])
```
sometimes we want to loop over a list element and know its index -- enumerate() helps here:
```python
for n, a in enumerate(alist):
    print(n, a)
```
Feature Evaluation Pipeline
```python
SEED = 2**8 + 1

# TODO: Optimize
lgb_params = {
    'objective': 'binary',
    'boosting_type': 'gbdt',
    'metric': 'auc',
    'n_jobs': -1,
    'learning_rate': 0.01,
    'num_leaves': 2**5,  # 5-8
    'max_depth': -1,
    'tree_learner': 'serial',
    'colsample_bytree': 0.7,
    'subsample_freq': 1,
    'subsample': 0.7,
    ...
```
kaggle/ieee_fraud_detection/src/authman/2019-09-13_Validator and Submission Pipeline.ipynb
AdityaSoni19031997/Machine-Learning
mit
The function above does a few things. First, it downsamples the negative values. This was reported by a number of top-50 people on the forums, and gold medal winners in previous competitions have used this technique as well. The idea is that fraud is unique, and it shouldn't matter much which non-frauds we train agai...
```python
data = traintr.append(testtr, sort=False)
data.reset_index(inplace=True)

features = [
    'TransactionAmt', 'ProductCD', 'card1', 'card2', 'card3', 'card4',
    'card5', 'card6', 'addr1', 'addr2', 'dist1', 'dist2',
    'P_emaildomain', 'R_emaildomain', 'D3', 'D1', ...
```
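The downsampling code itself isn't shown in full here, so as a minimal, self-contained sketch of the idea (with a toy frame and a hypothetical helper name; only the 'isFraud' column matters), one could write:

```python
import pandas as pd

# Toy training frame: 'isFraud' == 1 marks the rare positive class.
train = pd.DataFrame({
    'isFraud': [1, 0, 0, 0, 0, 0, 0, 0, 1, 0],
    'TransactionAmt': range(10),
})

def downsample_negatives(df, frac, seed):
    """Keep all positives, sample only a fraction of the negatives."""
    pos = df[df['isFraud'] == 1]
    neg = df[df['isFraud'] == 0].sample(frac=frac, random_state=seed)
    return pd.concat([pos, neg]).sort_index()

small = downsample_negatives(train, frac=0.5, seed=1773)
print(small['isFraud'].value_counts())
```

All frauds are kept, while half of the non-frauds are dropped at random, shrinking the training set without losing positive signal.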
Do It
```python
results = run_evaluation(
    data,
    features,
    lgb_params,
    downsample_seed=1773,
    downsample_frac=0.2,
    save_file_path='./report_test.json'  # persist the results to a file
)
results

report, sns_features = display_report(results)
plt.figure(figsize=(16, 16))
sns.barplot(x="perm_import", y="feature", ...
```
If we were to look at regular gain or splits, the V columns would dominate the show. But here we accurately see they really aren't that important in the grand scheme of things. Notice how some of our variables have a very high CVS score, especially the high cardinality categorical variables, like card2 for example. Whi...
```python
a = set(traintr.card2.unique())
b = set(testtr.card2.unique())
len(a - b), len(b - a)

traintr.card2.value_counts().head(15)
testtr.card2.value_counts().head(15)
```
Not much we can do there. I'm not advocating removing card2 as a variable; but I am advocating we use these results to draw our attention to possible issues. So for example, the appropriate thing to do here would be to attempt the removal of card2 and observe how it affects the model's mean and std AUC. That is the ult...
```python
def submission(num_boost_rounds=0):
    # We train using fixed num_boost_rounds
    train_groups = []
    for month_start in range(4):
        # using 3x dif seeds each
        months = [12 + month_start, 12 + month_start + 1, 12 + month_start + 2]
        train_groups.append(months)
    # Then using double nu...
```
Sending a mail is, with the proper library, a piece of cake...
```python
from sender import Mail

mail = Mail(MAIL_SERVER)
mail.fromaddr = ("Secret admirer", FROM_ADDRESS)
mail.send_message("Raspberry Pi has a soft spot for you",
                  to=TO_ADDRESS,
                  body="Hi sweety! Grab a smoothie?")
```
notebooks/en-gb/Communication - Send mails.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
... but if we take it a little further, we can connect our doorbell project to the sending of mail! APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription. See "104 - Remote deurbel - Een cloud API gebruiken om berichten te sturen" ("104 - Remote doorbell - Using a cloud API to send messages") for more detailed info.
```python
APPKEY = "******"

mail.fromaddr = ("Your doorbell", FROM_ADDRESS)
mail_to_addresses = {
    "Donald Duck": "dd@****.com",
    "Maleficent": "mf@****.com",
    "BigBadWolf": "bw@****.com"
}

def on_message(sender, channel, message):
    mail_message = "{}: Call for {}".format(channel, message)
    print(mail_message)...
```
Estimates Using Pivot First, we'll replicate the results for the question "Ever diagnosed with diabetes" for all of California, for 2017, from AskCHIS:

- Diagnosed with diabetes: 10.7% (9.6% - 11.8%), 3,145,000
- Never diagnosed with diabetes: 89.3% (88.2% - 90.4%), 26,311,000
- Total population: 29,456,000

Getting esti...
```python
t = df.pivot_table(values='rakedw0', columns='diabetes', index=df.index)
t2 = t.sum().round(-3)
t2
```
healthpolicy.ucla.edu-chis/notebooks/Calculating Estimates and Variances.ipynb
CivicKnowledge/metatab-packages
mit
Summing across responses yields the total population, which we can use to calculate percentages.
```python
t2.sum()
(t2 / t2.sum() * 100).round(1)
```
Estimates Using Unstack You can also calculate the same values using set_index and unstack.
```python
t = df[['diabetes', 'rakedw0']].set_index('diabetes', append=True).unstack()
t2 = t.sum().round(-3)

diabetes_yes = t2.unstack().loc['rakedw0', 'YES']
diabetes_no = t2.unstack().loc['rakedw0', 'NO']
diabetes_yes, diabetes_no
```
Calculating Variance The basic formula for calculating the variance is in section 9.2, Methods for Variance Estimation of CHIS Report 5 Weighting and Variance Estimation . Basically, the other 80 raked weights, rakedw1 through rakedw80 give alternate estimates. It's like running the survey an additional 80 times, whi...
```python
weight_cols = [c for c in df.columns if 'raked' in c]

t = df[['diabetes']+weight_cols]          # Get the column of interest, and all of the raked weights
t = t.set_index('diabetes', append=True)  # Move the column of interest into the index
t = t.unstack()                           # Unstack the column of interest, so both values are now in multi-level ...
```
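The replicate-weight variance idea can be boiled down to a few lines. A toy sketch (four replicate estimates standing in for the 80 produced by rakedw1..rakedw80; the sum-of-squared-deviations form follows the CHIS Report 5 description referenced above):

```python
# Toy numbers: a main estimate plus a handful of replicate estimates.
main_estimate = 100.0
replicate_estimates = [98.0, 103.0, 99.0, 101.0]

# Replicate-weight variance: sum of squared deviations of the replicate
# estimates from the main (rakedw0) estimate.
variance = sum((r - main_estimate) ** 2 for r in replicate_estimates)
std_error = variance ** 0.5
ci_95 = 1.96 * std_error

print(variance)  # 15.0
```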
The final percentage ranges match those from AskCHIS.
```python
((diabetes_yes - ci_95.loc['YES']) / 29_456_000 * 100).round(1), ((diabetes_yes + ci_95.loc['YES']) / 29_456_000 * 100).round(1)
((diabetes_no - ci_95.loc['NO']) / 29_456_000 * 100).round(1), ((diabetes_no + ci_95.loc['NO']) / 29_456_000 * 100).round(1)
```
Functions Here is a function for calculating the estimate, percentages, Standard Error and Relative Standard Error from a dataset. This function also works with a subset of the dataset, but note that the percentages will be relative to the total from the input dataset, not the whole California population.
```python
def chis_estimate(df, column, ci=True, pct=True, rse=False):
    """Calculate estimates for CHIS variables, with variances, as 95% CI, from the replicate weights"""

    weight_cols = [c for c in df.columns if 'raked' in c]

    t = df[[column]+weight_cols]  # Get the column of interest, and all of the raked we...
```
Segmenting Results This function allows segmenting on another column, for instance, breaking out responses by race. Note that in the examples we are checking for estimates to have a relative standard error ( such as diabetes_rse ) of greater than 30%. CHIS uses 30% as a limit for unstable values, and won't publish est...
```python
def chis_segment_estimate(df, column, segment_columns):
    """Return aggregated CHIS data, segmented on one or more other variables."""

    if not isinstance(segment_columns, (list, tuple)):
        segment_columns = [segment_columns]

    odf = None

    for index, row in df[segment_columns].drop_duplica...
```
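The 30% instability rule mentioned above is simple to encode. A hypothetical helper (the function name and signature are illustrative, not from the notebook):

```python
def is_unstable(estimate, std_error, rse_limit=30.0):
    """Flag an estimate whose relative standard error (in %) exceeds the limit.

    CHIS treats estimates with RSE > 30% as unstable and won't publish them.
    """
    rse = 100.0 * std_error / estimate
    return rse > rse_limit

print(is_unstable(1000.0, 350.0))  # True  (RSE = 35%)
print(is_unstable(1000.0, 250.0))  # False (RSE = 25%)
```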
The dataframe returned by this function has a multi-level index, which includes all of the unique values from the segmentation columns, a level for measures, and the values from the target column. For instance:
chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs']).head(20)
You can "pivot" a level out of the row into the columns with unstack(). Here we move the measures out of the row index into columns.
```python
t = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs'])
t.unstack('measure').head()
```
Complex selections can be made with .loc.
```python
t = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs'])

idx = pd.IndexSlice  # Convenience redefinition.

# The IndexSlices should have one term (separated by ',') for each of the levels in the index.
# We have one `IndexSlice` for rows, and one for columns. Note that the ``row_indexer`` has 4 terms.
row...
```
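Since the cell above is truncated, here is a complete toy example of the same IndexSlice pattern on a small two-level frame standing in for the segmented CHIS output (column and level names are made up for illustration):

```python
import pandas as pd

# Toy frame with a two-level row index.
idx = pd.MultiIndex.from_product(
    [['LATINO', 'WHITE'], ['YES', 'NO']], names=['race', 'diabetes'])
t = pd.DataFrame({'estimate': [10., 90., 8., 92.]}, index=idx)

ix = pd.IndexSlice
# One term per index level: all races, only the 'YES' rows.
sel = t.loc[ix[:, 'YES'], :]
print(sel)
```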
MuJoCo More detailed instructions in this tutorial. Institutional MuJoCo license.
```python
#@title Edit and run
mjkey = """

REPLACE THIS LINE WITH YOUR MUJOCO LICENSE KEY

""".strip()

mujoco_dir = "$HOME/.mujoco"

# Install OpenGL deps
!apt-get update && apt-get install -y --no-install-recommends \
    libgl1-mesa-glx libosmesa6 libglew2.0

# Fetch MuJoCo binaries from Roboti
!wget -q https://www.roboti.us/d...
```
rl_unplugged/rwrl_d4pg.ipynb
deepmind/deepmind-research
apache-2.0
Machine-locked MuJoCo license.
```python
#@title Add your MuJoCo License and run
mjkey = """
""".strip()

mujoco_dir = "$HOME/.mujoco"

# Install OpenGL dependencies
!apt-get update && apt-get install -y --no-install-recommends \
    libgl1-mesa-glx libosmesa6 libglew2.0

# Get MuJoCo binaries
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoc...
```
RWRL
```python
!git clone https://github.com/google-research/realworldrl_suite.git
!pip install realworldrl_suite/
```
RL Unplugged
```python
!git clone https://github.com/deepmind/deepmind-research.git
%cd deepmind-research
```
Imports
```python
import collections
import copy
from typing import Mapping, Sequence

import acme
from acme import specs
from acme.agents.tf import actors
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
from acme.wrappers import single_precision
from acm...
```
Data
```python
domain_name = 'cartpole'  #@param
task_name = 'swingup'  #@param
difficulty = 'easy'  #@param
combined_challenge = 'easy'  #@param
combined_challenge_str = str(combined_challenge).lower()

tmp_path = '/tmp/rwrl'
gs_path = f'gs://rl_unplugged/rwrl'
data_path = (f'combined_challenge_{combined_challenge_str}/{domain_name}/...
```
Dataset and environment
```python
#@title Auxiliary functions

def flatten_observation(observation):
  """Flattens multiple observation arrays into a single tensor.

  Args:
    observation: A mutable mapping from observation names to tensors.

  Returns:
    A flattened and concatenated observation array.

  Raises:
    ValueError: If `observation` is...
```
D4PG learner
```python
#@title Auxiliary functions

def make_networks(
    action_spec: specs.BoundedArray,
    hidden_size: int = 1024,
    num_blocks: int = 4,
    num_mixtures: int = 5,
    vmin: float = -150.,
    vmax: float = 150.,
    num_atoms: int = 51,
):
  """Creates networks used by the agent."""
  num_dimensions = np.prod(action...
```
Evaluation
```python
# Create a logger.
logger = loggers.TerminalLogger(label='evaluation', time_delta=1.)

# Create an environment loop.
loop = acme.EnvironmentLoop(
    environment=environment,
    actor=actors.DeprecatedFeedForwardActor(online_networks['policy']),
    logger=logger)

loop.run(5)
```
The nbformat API returns a special dict. You don't need to worry about the details of the structure. The nbconvert API exposes some basic exporters for common formats and defaults. You will start by using one of them. First you will import it, then instantiate it using most of the defaults, and finally you will proces...
```python
from IPython.config import Config
from IPython.nbconvert import HTMLExporter

# The `basic` template is used here.
# Later you'll learn how to configure the exporter.
html_exporter = HTMLExporter(config=Config({'HTMLExporter': {'default_template': 'basic'}}))
(body, resources) = html_exporter.from_notebook_node(jake_not...
```
notebooks/1 - IPython Notebook Examples/IPython Project Examples/Notebook/Using nbconvert as a Library.ipynb
tylere/docker-tmpnb-ee
apache-2.0
Use the IPython configuration/Traitlets system to enable it. If you have already set IPython configuration options, this system is familiar to you. Configuration options will always be of the form: ClassName.attribute_name = value You can create a configuration object a couple of different ways. Every time you launch IPy...
```python
from IPython.config import Config

c = Config({'ExtractOutputPreprocessor': {'enabled': True}})

exportHTML = HTMLExporter()
exportHTML_and_figs = HTMLExporter(config=c)

(_, resources) = exportHTML.from_notebook_node(jake_notebook)
(_, resources_with_fig) = exportHTML_and_figs.from_no...
```
Custom Preprocessor There are an endless number of transformations that you may want to apply to a notebook. This is why we provide a way to register your own preprocessors that will be applied to the notebook after the default ones. To do so, you'll have to pass an ordered list of Preprocessors to the Exporter's cons...
```python
from IPython.nbconvert.preprocessors import Preprocessor
import IPython.config

print("Four relevant docstrings")
print('=============================')
print(Preprocessor.__doc__)
print('=============================')
print(Preprocessor.preprocess.__doc__)
print('=============================')
print(Preprocessor.prepr...
```
Example The following demonstration was requested in an IPython GitHub issue: the ability to exclude a cell by index. Injecting cells is similar, and won't be covered here. If you want to inject static content at the beginning/end of a notebook, use a custom template.
```python
from IPython.utils.traitlets import Integer

class PelicanSubCell(Preprocessor):
    """A Pelican specific preprocessor to remove some of the cells of a notebook"""

    # I could also read the cells from nbc.metadata.pelican if someone wrote a JS extension,
    # but I'll stay with a configurable value.
    start = ...
```
To illustrate the main ideas, we are going to use a fake data set from this website. This file contains artificial names, addresses, companies, phone numbers etc. for fictitious US characters. Here is the complete list of variables. The main purpose of this dataset is testing. Straight from the website: "Always test you...
```python
# we need some extra tools to download and handle zip files
import zipfile as zf
import requests, io

url = "https://www.briandunning.com/sample-data/us-500.zip"
r = requests.get(url)
file = zf.ZipFile(io.BytesIO(r.content))
file

file.namelist()  # there is one csv file inside

file_csv = file.open(file.namel...
```
Code/notebooks/bootcamp_pandas_adv3-indexing.ipynb
NYUDataBootcamp/Materials
mit
(1) Selecting data using loc loc is primarily a label-based indexer. That is, it selects rows and columns by their labels (variable names for columns, index values for rows). It also works with a boolean array. The syntax is python data.loc[&lt;row selection&gt;, &lt;column selection&gt;] First, set an arbitra...
```python
dff = df.set_index(['last_name'])
dff.head()
```
Now we can directly select rows by their index (last_name) values (just like we do with columns)
```python
dff.loc['Butt']

# multiple rows
dff.loc[['Butt', 'Venere']]

# select a subset of the data (subDataFrame)
dff.loc[['Butt', 'Foller'], ['city', 'email']]

# ranges of index labels
dff.loc[['Butt', 'Foller'], 'address':'phone2']
```
Boolean indexing using loc This is the most common method to work with data: pass an array of True/False values to .loc to select the rows/columns with True values.
dff.loc[dff['city'] == 'New Orleans']
In fact, we don't need the loc indexer for this kind of task
dff[dff['city'] == 'New Orleans']
But what if we don't want all variables?
dff.loc[dff['city'] == 'New Orleans', ['company_name', 'zip']]
How would you get the same dataframe without loc?
dff[dff['city'] == 'New Orleans'][['company_name', 'zip']] # matter of taste
Recall the string methods applicable to DataFrames
dff[dff['email'].str.endswith("gmail.com")].head()
and the isin method?
```python
dff.loc[dff['city'].isin(['New Orleans', 'New York'])]

# intersection of the two?
gmails = dff['email'].str.endswith("gmail.com")
NYNO = (dff['city'] == 'New Orleans') | (dff['city'] == 'New York')
dff[gmails & NYNO]
```
A tricky one: we can pass a function that returns True/False values to .apply() and evaluate it at each row
```python
def short_company_name(x):
    """returns True if x contains less than 2 words"""
    return len(x.split(' ')) < 2

dff.loc[dff['company_name'].apply(short_company_name)]
```
(2) Selecting data using iloc iloc is primarily used for integer position based indexing. That is, it selects rows and columns by number, in the order that they appear in the data frame. Positions run from $0$ to one less than the axis length (df.shape[0] rows, df.shape[1] columns). The syntax is python data.iloc[&lt;row selection&gt;, &lt;column selection&gt;]
# Rows:
df.iloc[0]       # first row
df.iloc[-1]      # last row

# Columns:
df.iloc[:, 0]    # first column = first variable (first_name)
df.iloc[:, -1]   # last column (web)
Code/notebooks/bootcamp_pandas_adv3-indexing.ipynb
NYUDataBootcamp/Materials
mit
For multiple columns and rows, use slices
df.iloc[:5]                 # first five rows
df.iloc[:, :2]              # first two columns
df.iloc[[0, 4, 7, 25],      # 1st, 5th, 8th, 26th rows
        [0, 5, 6]]          # 1st, 6th, 7th columns
df.iloc[:5, 5:8]            # first 5 rows and 6th, 7th, 8th columns
Code/notebooks/bootcamp_pandas_adv3-indexing.ipynb
NYUDataBootcamp/Materials
mit
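A tiny made-up frame makes the positional rules concrete (the column names and data here are invented for illustration):

```python
import pandas as pd

# Small frame for demonstrating positional selection
df_small = pd.DataFrame({'a': [10, 20, 30, 40],
                         'b': [1, 2, 3, 4],
                         'c': ['w', 'x', 'y', 'z']})

first_row = df_small.iloc[0]            # Series for row position 0
last_col = df_small.iloc[:, -1]         # Series for the last column
block = df_small.iloc[:2, :2]           # first two rows, first two columns
picked = df_small.iloc[[0, 3], [0, 2]]  # rows 0 and 3, columns 0 and 2
```

Note that a single integer returns a Series, while lists and slices return a DataFrame.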
(3) Selecting data using ix ix is a hybrid of loc and iloc. In general, 1. it is label-location based and acts just like loc 2. However, it also supports integer-location based selection just like iloc when passed an integer The second option only works when the index of the DataFrame is NOT an integer. The syntax is pyt...
# ix indexing works just the same as loc when passed labels
dff.ix['Butt', 'city'] == dff.loc['Butt', 'city']

# ix indexing works the same as iloc when passed integers
dff.ix[33, 7] == dff.iloc[33, 7]
Code/notebooks/bootcamp_pandas_adv3-indexing.ipynb
NYUDataBootcamp/Materials
mit
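Note that ix was deprecated in pandas 0.20 and removed entirely in pandas 1.0. A minimal sketch of the modern replacements, using a made-up two-row frame:

```python
import pandas as pd

# Invented frame with a string index, standing in for dff
df = pd.DataFrame({'city': ['NYC', 'NOLA']}, index=['Smith', 'Butt'])

# label-based: use loc
by_label = df.loc['Butt', 'city']

# position-based: use iloc
by_position = df.iloc[1, 0]
```

Being explicit about labels vs. positions is exactly why ix was retired: its guessing between the two was a common source of bugs.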
Hierarchical indexing with loc and iloc Multi-level indexing allows us to work with higher-dimensional data while storing it in lower-dimensional data structures like a 2D DataFrame or a 1D Series. More on this here. Read the WEO dataset that we have already used a couple of times
url_weo = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'

# (1) define the column indices
col_indices = [1, 2, 3] + list(range(9, 46))

# (2) download the dataset
weo = pd.read_csv(url_weo, sep='\t', usecols=col_indices, skipfooter=1...
Code/notebooks/bootcamp_pandas_adv3-indexing.ipynb
NYUDataBootcamp/Materials
mit
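Before working with WEO itself, here is a minimal sketch of hierarchical selection on an invented WEO-like frame (the country codes, variable names, and values are all made up):

```python
import pandas as pd

# Hypothetical (country, variable) hierarchical row index
idx = pd.MultiIndex.from_product([['USA', 'DEU'], ['NGDP', 'LUR']],
                                 names=['ISO', 'Variable'])
weo_like = pd.DataFrame({'2015': [1.0, 2.0, 3.0, 4.0],
                         '2016': [1.1, 2.1, 3.1, 4.1]}, index=idx)

# Selecting an outer label returns the sub-frame for that country
usa = weo_like.loc['USA']

# A tuple addresses a single row across all index levels
one_cell = weo_like.loc[('DEU', 'LUR'), '2016']
```

Passing just the outer label drops that level and leaves the inner level as the new index.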
We set up the Veldhuizen and Lamont multiobjective optimization problem 2 (vlmop2). The objectives of vlmop2 are very easy to model, making it ideal for illustrating Bayesian multiobjective optimization.
# Objective
def vlmop2(x):
    transl = 1 / np.sqrt(2)
    part1 = (x[:, [0]] - transl) ** 2 + (x[:, [1]] - transl) ** 2
    part2 = (x[:, [0]] + transl) ** 2 + (x[:, [1]] + transl) ** 2
    y1 = 1 - np.exp(-1 * part1)
    y2 = 1 - np.exp(-1 * part2)
    return np.hstack((y1, y2))

# Setup input domain
domain = gpflowo...
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
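As a quick sanity check, the function above (repeated here as a standalone NumPy sketch, without the GPflowOpt domain setup) attains a minimum of 0 in one objective at each of the two translated points:

```python
import numpy as np

def vlmop2(x):
    # Same definition as in the notebook, pure NumPy
    transl = 1 / np.sqrt(2)
    part1 = (x[:, [0]] - transl) ** 2 + (x[:, [1]] - transl) ** 2
    part2 = (x[:, [0]] + transl) ** 2 + (x[:, [1]] + transl) ** 2
    y1 = 1 - np.exp(-1 * part1)
    y2 = 1 - np.exp(-1 * part2)
    return np.hstack((y1, y2))

t = 1 / np.sqrt(2)
y_plus = vlmop2(np.array([[t, t]]))     # minimizes objective 1: y1 = 0
y_minus = vlmop2(np.array([[-t, -t]]))  # minimizes objective 2: y2 = 0
```

Because the two minima sit at different inputs, no single point minimizes both objectives, which is what makes the Pareto front non-trivial.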
Multiobjective acquisition function We can model all objectives jointly with a single GP prior, or model each objective separately with its own GP prior. We illustrate the latter approach here. A set of data points arranged in a Latin Hypercube is evaluated on the vlmop2 function. In multiobjective optimization the definition...
# Initial evaluations
design = gpflowopt.design.LatinHyperCube(11, domain)
X = design.generate()
Y = vlmop2(X)

# One model for each objective
objective_models = [gpflow.gpr.GPR(X.copy(), Y[:, [i]].copy(), gpflow.kernels.Matern52(2, ARD=True))
                    for i in range(Y.shape[1])]
for model in objective_models:
    model.likeliho...
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
Running the Bayesian optimizer The optimization surface of multiobjective acquisition functions can be even more challenging than that of, e.g., standard Expected Improvement. Hence, a hybrid optimization scheme is preferred: first a Monte Carlo step, then a gradient-based optimization starting from the best point found. We then run the Ba...
# First setup the optimization strategy for the acquisition function:
# a MC step followed by L-BFGS-B
acquisition_opt = gpflowopt.optim.StagedOptimizer([gpflowopt.optim.MCOptimizer(domain, 1000),
                                                   gpflowopt.optim.SciPyOptimizer(domain)])

# Then run the Bayesia...
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
For multiple objectives the returned OptimizeResult object contains the identified Pareto set instead of just a single optimum. Note that this is computed on the raw data Y. The hypervolume-based probability of improvement operates on the Pareto set derived from the model predictions of the training data (to handle noi...
def plot():
    grid_size = 51  # 101
    shape = (grid_size, grid_size)
    Xeval = gpflowopt.design.FactorialDesign(grid_size, domain).generate()
    Yeval_1, _ = hvpoi.models[0].predict_f(Xeval)
    Yeval_2, _ = hvpoi.models[1].predict_f(Xeval)
    Yevalc = hvpoi.evaluate(Xeval)
    plots...
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
Finally, we can extract and plot the Pareto front ourselves using the pareto.non_dominated_sort function on the final data matrix Y. The non-dominated sort returns the Pareto set (non-dominated solutions) as well as a dominance vector holding the number of dominated points for each point in Y. For example, we could onl...
# plot pareto front
plt.figure(figsize=(9, 4))
R = np.array([1.5, 1.5])
print('R:', R)
hv = hvpoi.pareto.hypervolume(R)
print('Hypervolume indicator:', hv)

plt.figure(figsize=(7, 7))
pf, dom = gpflowopt.pareto.non_dominated_sort(hvpoi.data[1])
plt.scatter(hvpoi.data[1][:, 0], hvpoi.data[1][:, 1], c=dom)
plt.title('Pa...
doc/source/notebooks/multiobjective.ipynb
GPflow/GPflowOpt
apache-2.0
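To demystify what a non-dominated sort computes, here is an illustrative pure-NumPy version. This is a naive O(n²) sketch, not GPflowOpt's implementation, and its dominance counts may follow a different convention than the library's:

```python
import numpy as np

def non_dominated(Y):
    """Sketch (minimization): for each row of Y, count how many other
    rows dominate it; rows with count 0 form the Pareto set."""
    n = Y.shape[0]
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # j dominates i: no worse in every objective, strictly better in one
            if np.all(Y[j] <= Y[i]) and np.any(Y[j] < Y[i]):
                counts[i] += 1
    pareto = Y[counts == 0]
    return pareto, counts
```

On a toy set like [[0, 1], [1, 0], [1, 1], [2, 2]], the first two points are mutually non-dominated (each is better on one objective), so they form the Pareto set.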
<center> Doing Math with Python </center> <center> <p> <b>Amit Saha</b> <p>May 29, PyCon US 2016 Education Summit <p>Portland, Oregon </center> ## About me - Software Engineer at [Freelancer.com](https://www.freelancer.com) HQ in Sydney, Australia - Author of "Doing Math with Python" (No Starch Press, 2015) - Wri...
As I will attempt to describe in the next slides, Python can lead to a more fun learning and teaching experience. It can be a basic calculator, a fancy calculator, and a tool for Math, Science, Geography and more. Tools that will help us in that quest are:
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
doingmathwithpython/pycon-us-2016
mit
(Main) Tools <img align="center" src="collage/logo_collage.png"></img> Python - a scientific calculator Python 3 is my favorite calculator (not Python 2 because 1/2 = 0) fabs(), abs(), sin(), cos(), gcd(), log() (See math) Descriptive statistics (See statistics) Python - a scientific calculator Develop your o...
When you bring SymPy into the picture, things really get awesome. You are suddenly writing computer programs capable of speaking algebra. You are no longer limited to numbers.

# Create graphs from algebraic expressions
from sympy import Symbol, plot
x = Symbol('x')
p = plot(2*x**2 + 2*x + 2)

# Solve equa...
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
doingmathwithpython/pycon-us-2016
mit
Python - Making other subjects more lively <img align="center" src="collage/collage1.png"></img> matplotlib basemap Interactive Jupyter Notebooks Bringing Science to life Animation of a Projectile motion Drawing fractals Interactively drawing a Barnsley Fern The world is your graph paper Showing places on a dig...
### TODO: digit recognition using Neural networks ### Scikitlearn, pandas, scipy, statsmodel
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
doingmathwithpython/pycon-us-2016
mit
Reading in files is quite easy. There are several readers in pydsd in either pydsd.io or pydsd.io.aux_readers depending on their level of support. In this case we will use the read_parsivel_arm_netcdf function. Generally we design readers for each new format we encounter so if you find a format that we cannot read, fee...
dsd = pydsd.read_parsivel_arm_netcdf(filename)
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
So at this point we have the drop size distribution read in. Let's start by looking at the format of the dsd object. A full listing of the features can be found at http://josephhardinee.github.io/PyDSD/pydsd.html#module-pydsd.DropSizeDistribution . Generally data is stored in the fields dictionary. This is a dictionary...
dsd.fields.keys()
dsd.fields['Nd']
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
We can now start to plot and visualize some of this data. PyDSD has several built in plotting functions, and you can always pull the data out yourself for more custom plotting routines. Many built-in plotting routines are available in the pydsd.plot module.
pydsd.plot.plot_dsd(dsd)
plt.title('Drop Size Distribution - November 12, 2018')
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Plotting routines usually take in extra arguments to help customize the plot. Please check the documentation for the full list of arguments. We'll revisit plotting in a bit once we've calculated a few more interesting parameters. Depending on the type of disdrometer, the drop spectra may be stored. PyDSD has some capa...
dsd.calculate_dsd_parameterization()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
Now PyDSD will calculate various parameters and store these on the dsd object in the fields dictionary.
dsd.fields.keys()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
And we can dig down further and see what is in one of these objects. As with the fields read in by default, the values are stored in the data member, with the metadata attached to the object.
dsd.fields['D0']
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
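To get a feel for what D0 represents, here is a rough standalone sketch (pure NumPy, not PyDSD's implementation, and it ignores bin widths) of the median volume diameter: the diameter at which the cumulative D³·N(D) mass reaches half its total.

```python
import numpy as np

def median_volume_diameter(diameters, Nd):
    """Illustrative sketch: D0 is where the cumulative D^3 * N(D)
    mass crosses half of its total (no bin-width weighting)."""
    mass = diameters ** 3 * Nd           # volume-weighted concentration per bin
    cum = np.cumsum(mass)
    half = 0.5 * cum[-1]
    return diameters[np.searchsorted(cum, half)]
```

For a spectrum concentrated in a single size bin, D0 should simply be that bin's diameter, which makes the definition easy to sanity-check.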
Now we can use some built-in plotting functions to examine these. The first plot type is a time series plot of arbitrary 1D parameters. For example, we can plot the D0 variable, corresponding to the median drop diameter. The plots know how to handle time and various pieces of metadata. Note that you can pass through argument...
pydsd.plot.plot_ts(dsd, 'D0', marker='.')
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
These plots accept an axis argument as well, in case you want to use it to make side-by-side comparison plots.
plt.figure(figsize=(12, 6))

ax = plt.subplot(1, 2, 1)
pydsd.plot.plot_ts(dsd, 'D0', marker='.', ax=ax)

ax = plt.subplot(1, 2, 2)
pydsd.plot.plot_ts(dsd, 'Nw', marker='.', ax=ax)

plt.tight_layout()
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1
We have other standard types of plots built in to make life easier. For instance, a common task is to compare the median drop diameter with the normalized intercept parameter (Nw).
pydsd.plot.plot_NwD0(dsd)
Notebooks/PyDSDExamples.ipynb
josephhardinee/PyDSD
lgpl-2.1