4.2 run_experiment_lite
Using the above run_task method, we will execute the training process using rllab's run_experiment_lite method. In this method, we are able to specify:
- The n_parallel cores you want to use for your experiment. If you set n_parallel > 1, multiple workers will execute your code in parallel which re... | from rllab.misc.instrument import run_experiment_lite
for seed in [5]: # , 20, 68]:
run_experiment_lite(
run_task,
# Number of parallel workers for sampling
n_parallel=1,
# Keeps the snapshot parameters for all iterations
snapshot_mode="all",
# Specifies the seed fo... | tutorials/tutorial04_rllab.ipynb | cathywu/flow | mit |
1D called vector
2D called matrix
3D and higher called tensor | print x
type(x)
y=np.ones((2,3))
print y | shrimp/numpy.ipynb | wasit7/algae2 | gpl-2.0 |
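The naming above maps onto NumPy's ndim attribute. A minimal sketch (the arrays here are illustrative, not from the notebook):

```python
import numpy as np

vector = np.array([1, 2, 3])      # 1D: a vector
matrix = np.ones((2, 3))          # 2D: a matrix
tensor = np.zeros((2, 3, 4))      # 3D and higher: a tensor

# ndim reports the number of dimensions of each array
print(vector.ndim, matrix.ndim, tensor.ndim)  # 1 2 3
```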
I need a matrix like this:
[[2,3],
 [4,5],
 [6,7]] | z=np.arange(2,8,1)
[6,7]] | z=np.arange(2,8,1)
alpha=np.reshape(z,(3,2))
print alpha
beta= np.random.randn(3,4)
print beta
gamma=beta*2.0
print gamma
a=[3,4,5]
a=np.array(a)
type(a) | shrimp/numpy.ipynb | wasit7/algae2 | gpl-2.0 |
NumPy array operators | a=np.random.randint(0,10,(2,3))
b=np.random.randint(0,10,(2,3))
print a
print b
print "element-wise addition:\n%s"%(a + b)
print "element-wise multiplication:\n%s"%(a * b)
print a
print b.T
print '-----'
print np.dot(a,b.T) | shrimp/numpy.ipynb | wasit7/algae2 | gpl-2.0 |
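The shape rules behind the element-wise and dot-product cells above can be sketched as follows (toy arrays, not the notebook's random data):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])       # shape (2, 3)
b = np.array([[1, 0, 1],
              [0, 1, 0]])       # shape (2, 3)

# Element-wise operations require identical shapes
elementwise = a * b             # still shape (2, 3)

# Matrix product needs inner dimensions to match, hence the transpose:
# (2, 3) dot (3, 2) -> (2, 2)
product = np.dot(a, b.T)

print(elementwise.shape, product.shape)  # (2, 3) (2, 2)
```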
Slicing | a=np.random.randint(0,10,(4,5))
print a
a[0,2]
a[3,3]=9
print a
print a[:,:3]
print a[:3,:]
print a[:3,:3]
print a[-3:,-3:]
b=np.array([2,3,5,7,8])
print b
b[::-1]
a=np.random.randint(0,10,(4,5))
print a
a[::-1]
print np.fliplr(a)
a.astype(float) | shrimp/numpy.ipynb | wasit7/algae2 | gpl-2.0 |
Go to the opencv/build/python/2.7 folder.
Copy cv2.pyd to C:/Python27/lib/site-packages.<br>
http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html | print np.arange(16)[::2]
(-1,)+(1,2,3)+(4,5) | shrimp/numpy.ipynb | wasit7/algae2 | gpl-2.0 |
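The code in the cell above mixes two ideas: strided slicing with [::2] and tuple concatenation with +. A plain-Python sketch of both:

```python
# Every second element, via a step of 2 in the slice
evens = list(range(16))[::2]
print(evens)          # [0, 2, 4, 6, 8, 10, 12, 14]

# Tuples concatenate with +; (-1,) is a one-element tuple (the comma matters)
shape = (-1,) + (1, 2, 3) + (4, 5)
print(shape)          # (-1, 1, 2, 3, 4, 5)
```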
To demonstrate Python's performance, we'll use a short function | import math
import numpy as np
from datetime import datetime
def cart2pol(x, y):
r = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
return(r, phi) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
As the name suggests, cart2pol converts a pair of Cartesian coordinates [x, y] to polar coordinates [r, phi] | from IPython.core.display import Image
Image(url='https://upload.wikimedia.org/wikipedia/commons/thumb/7/78/Polar_to_cartesian.svg/1024px-Polar_to_cartesian.svg.png',width=400)
x = 3
y = 4
r, phi = cart2pol(x,y)
print(r,phi) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
All well and good. However, what if we want to convert a list of cartesian coordinates to polar coordinates?
We could loop through both lists and perform the conversion for each x-y pair: | def cart2pol_list(list_x, list_y):
# Prepare empty lists for r and phi values
r = np.empty(len(list_x))
phi = np.empty(len(list_x))
# Loop through the lists of x and y, calculating the r and phi values
for i in range(len(list_x)):
r[i] = np.sqrt(list_x[i]**2 + list_y[i]**2)
phi[... | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
These coordinates make a circle centered at [0,0] | import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(6,6))
ax.scatter(x_list,y_list)
r_list, phi_list = cart2pol_list(x_list,y_list)
print(r_list)
print(phi_list) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
This is a bit time-consuming to type out, though. Surely there is a better way to make our functions work for lists of inputs?
Step forward, vectorize | cart2pol_vec = np.vectorize(cart2pol)
r_list_vec, phi_list_vec = cart2pol_vec(x_list, y_list) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
Like magic! We can assure ourselves that these two methods produce the same answers | print(r_list == r_list_vec)
print(phi_list == phi_list_vec) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
But how do they perform?
We can use IPython's %timeit magic to test this | %timeit cart2pol_list(x_list, y_list)
%timeit cart2pol_vec(x_list, y_list) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
It is significantly faster, both for code writing and at runtime, to use vectorize rather than manually looping through lists.
Multiprocessing
Another important consideration when code becomes computationally intensive is multiprocessing. Python normally runs on one core, so you won't feel the full benefit of your quad... | def do_maths(start=0, num=10):
pos = start
big = 1000 * 1000
ave = 0
while pos < num:
pos += 1
val = math.sqrt((pos - big) * (pos - big))
ave += val / num
return int(ave)
t0 = datetime.now()
do_maths(num=30000000)
dt = datetime.now() - t0
print("Done in {:,.2f} sec.".form... | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
Note that you can recover results stored in the task list with get(). The list will be in the same order in which the tasks were spawned. | for t in tasks:
print(t.get()) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
The structure of a multiprocess call is: | pool = multiprocessing.Pool() # Make a pool ready to receive tasks
results = [] # empty list for results
for n in range(1, processor_count + 1): # Loop for assigning a number of tasks
    result = pool.apply_async(function, (arguments,)) # make a task by passing it a function and its arguments
results.append(result) # app... | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
Why can't we multithread in Python?
If you have experience of other programming languages, you may wonder why we can't assign tasks to multiple threads to speed up execution. We are prevented from doing this by the Global Interpreter Lock (GIL). This is a lock on the interpreter which ensures that only one thread can b... | HTML(html) | content/notebooks/2019-10-08-speedy-python.ipynb | ueapy/ueapy.github.io | mit |
Now we make a FileReader | fileReader = bali.FileReader()
fileReader.taught
fileReader.transcribed | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
A more useful object
The FileParser is more useful than the FileReader. | fp = bali.FileParser()
fp.taught | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
Now we have all the taught patterns! Yay!
Let's get the first taught pattern | firstPattern = fp.taught[0]
print(firstPattern)
firstPattern.title
firstPattern.drumPattern
firstPattern.gongPattern
firstPattern.beatLength()
firstPattern.strokes
for taughtPattern in fp.taught:
if taughtPattern.beatLength() == 4:
print(taughtPattern.title, " ::: ", taughtPattern.beatLength())
... | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
How many strokes total are there in the whole taught set? | total = 0
for pattern in fp.taught:
total += len(pattern.strokes)
print(total) | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
Create a list of all the taught patterns that contain "lanang" | lanang = [p for p in fp.taught if 'lanang' in p.title.lower()]
lanang
from music21 import *
ld = text.LanguageDetector()
ld
ld.trigrams
english = ld.trigrams['en']
english.lut
english.lut['be']
ld.trigrams['fr'].lut['be']
ld.mostLikelyLanguage("Das geht so gut heute!")
other = [p for p in fp.taught if 'lanang' ... | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
Find percentage of strokes that are on a particular beat subdivision in patterns for a given drum | subdivisionSearch = (0, 2)
strokeSearch = ('o', 'l')
for patt in fp.taught:
totalOff = 0
totalAll = 0
if patt.drumType != 'wadon':
continue
for b, s in patt.iterateStrokes():
if b == 0:
continue
if ((b*4) % 4) in subdivisionSearch and s in strokeSearch:
t... | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
Find the same as above, but eliminate all double strokes. | subdivisionSearch = (0, 2)
strokeSearch = ('o', 'l')
for patt in fp.taught:
totalOff = 0
totalAll = 0
if patt.drumType != 'lanang':
continue
for b, s in patt.iterateStrokes():
if b == 0:
continue
previousStrokeBeat = b - 0.25
if previousStrokeBeat >= 0:
... | documentation/Using bali module.ipynb | cuthbertLab/bali | bsd-3-clause |
Load Saved Model | path = 'gs://bigbird-transformer/summarization/pubmed/roberta/saved_model'
imported_model = tf.saved_model.load(path, tags='serve')
summerize = imported_model.signatures['serving_default'] | bigbird/summarization/eval.ipynb | google-research/bigbird | apache-2.0 |
Setup Data | dataset = tfds.load('scientific_papers/pubmed', split='test', shuffle_files=False, as_supervised=True)
# inspect a few examples
for ex in dataset.take(3):
print(ex) | bigbird/summarization/eval.ipynb | google-research/bigbird | apache-2.0 |
Print predictions | predicted_summary = summerize(ex[0])['pred_sent'][0]
print('Article:\n {}\n\n Predicted summary:\n {}\n\n Ground truth summary:\n {}\n\n'.format(
ex[0].numpy(),
predicted_summary.numpy(),
ex[1].numpy())) | bigbird/summarization/eval.ipynb | google-research/bigbird | apache-2.0 |
Evaluate Rouge Scores | from rouge_score import rouge_scorer
from rouge_score import scoring
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
aggregator = scoring.BootstrapAggregator()
for ex in tqdm(dataset.take(100), position=0):
predicted_summary = summerize(ex[0])['pred_sent'][0]
score = scorer.... | bigbird/summarization/eval.ipynb | google-research/bigbird | apache-2.0 |
Mobile analytics | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
input_mob = pd.read_csv('files/TMRW_mob.csv')
input_mob.columns=['device','sessions','%news', 'new_users','bounce_rate','PPS', 'ASD', 'goal1CR','goal1']
input_mob = input_mob.set_index('device')
def p2f(x):
return float(x.strip('%'))/100
inpu... | .ipynb_checkpoints/UX_analytics-checkpoint.ipynb | datahac/jup | apache-2.0 |
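The p2f helper above strips the percent sign and rescales to a fraction. A self-contained sketch with a usage check (the sample values are made up):

```python
def p2f(x):
    """Convert a percentage string like '45%' to the float 0.45."""
    return float(x.strip('%')) / 100

print(p2f('45%'))     # 0.45
print(p2f('2.5%'))    # 0.025
```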
Federal Office of Road Safety:
THE HISTORY OF ROAD FATALITIES IN AUSTRALIA
https://www.monash.edu/__data/assets/pdf_file/0020/216452/muarc237.pdf
THE VICTORIAN PARLIAMENTARY ROAD SAFETY COMMITTEE
– A HISTORY OF INQUIRIES AND OUTCOMES:
2005
By
Belinda Clark
Narelle Haworth
Michael Lenné
https://infrastructure.gov.au/ro... | data = pd.read_csv(r'data/Fatalities_July_2016_II.csv')
filter_age = data['Age']>=0
data = data[filter_age]
data.head() | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Make a histogram pipeline | def hist_plot(feature, data,
title,
rot_x_lbl=False,
x_axis_scale=False,
x_axis_interval=None,
**kwargs):
plot = sns.countplot(x=feature, data=data, **kwargs)
if rot_x_lbl == True:
plt.xticks(rotation = 90)
if x_axis_scale==True:
... | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Generate gender plot | hist_plot('Gender', data, 'Fatalities by Gender since 1989') | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Gender and age group plots | data_male=data[data['Gender']=='Male']
hist_plot('Age',
data_male,
'Male fatalities by age since 1989',
x_axis_scale=True,
x_axis_interval=5)
data_female=data[data['Gender']=='Female']
hist_plot('Age',
data_female,
'Female fatalities by age since 1989',
... | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Generate plot for fatalities by road user | hist_plot('Road_User',
data,
'Fatalities by user type since 1989',
rot_x_lbl=True,
order=['Driver',
'Passenger',
'Pedestrian',
'Motor cycle rider',
'Bicyclist',
'Motor cycle pillion passenger',
... | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Plots for 18-25 year-olds by gender
Males: | z=data_male['Age'].isin(range(18, 26)) # ages 18-25 inclusive
data_male_18_to_25=data_male[z]
hist_plot('Year',
data_male_18_to_25,
'Male fatalities in 18-25 age group since 1989',
rot_x_lbl=True) | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Females: | z=data_female['Age'].isin(range(18, 26)) # ages 18-25 inclusive
data_female_18_to_25=data_female[z]
hist_plot('Year',
data_female_18_to_25,
'Female fatalities in 18-25 age group since 1989',
rot_x_lbl=True)
hist_plot('Year',
data,
'Aggregate fatalities by year',
rot_x_lbl... | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Getting Victorian cyclist fatality data | z=data['Road_User']=='Bicyclist'
data_bike=data[z]
z=data_bike['State']=='VIC'
data_bike=data_bike[z] | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Plotting: | hist_plot('Year',
data_bike,
'Bicyclist fatalities by year in Victoria',
rot_x_lbl=True) | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Interesting to see the number drop after Victoria became the first place in the world to enforce mandatory wearing of bicycle helmets on the road in 1990, although it would be good to compare it with pre-1989 data to ensure 1989 and 1990 are not outliers. | z=data['Road_User']=='Bicyclist'
data_bike=data[z]
hist_plot('Year',
data_bike,
'Cyclist fatalities by year in Australia',
rot_x_lbl=True) | Road_fatalities.ipynb | ichbinjakes/CodeExamples | gpl-3.0 |
Writing a Shapefile
We can select, for example, 50 rows of the input data and write those into a new Shapefile by first <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" target="_blank">selecting the data using pandas functionalities</a> and then writing the selection with the gpd.to_file() function: | # Create an output path for the data
out = r"C:\HY-Data\HENTENKA\Data\DAMSELFISH_distributions_SELECTION.shp"
# Select first 50 rows
selection = data[0:50]
# Print the head of our selection
print(selection.head())
# Write those rows into a new Shapefile
selection.to_file(out, driver="ESRI Shapefile") # drivers makes... | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Geometries in Geopandas
Geopandas takes advantage of Shapely's geometric objects. Geometries are stored in a column called geometry that is a default column name for storing geometric information in geopandas. | # Print first 5 rows of the column 'geometry'
# -------------------------------------------
# It is possible to use only specific columns by specifying the column name within square brackets []
data['geometry'].head() | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Since spatial data is stored as Shapely objects, it is possible to use all of the functionalities of Shapely module that we practiced earlier: | # Print the areas of the first 5 polygons
# ---------------------------------------
# Make a selection that contains only the first five rows
selection = data[0:5]
# Iterate over the selected rows by using a for loop in a pandas function called '.iterrows()'
for row_num, row in selection.iterrows():
# Calculate t... | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Creating a GeoDataFrame and exporting it to a Shapefile
Since geopandas takes advantage of Shapely geometric objects, it is possible to create a Shapefile from scratch or from e.g. a text file that contains coordinates.
Let's create an empty GeoDataFrame that is a Two-dimensional size-mutable, potentially heterogeneous ... | # Import necessary modules first
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point, Polygon
import fiona
# Create an empty geopandas GeoDataFrame
data = gpd.GeoDataFrame()
# Let's see what's inside
print(data)
# Outputs: | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
The GeoDataFrame is empty since we haven't placed any data inside. Let's create a new column called geometry that will contain our Shapely Geometric Object: | # Create a new column called 'geometry' in the GeoDataFrame
data['geometry'] = None
# Let's see what's inside
data
# Outputs: | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Now we have a geometry column in the GeoDataFrame but we don't have any data yet. Let's create a Shapely Polygon that we can insert into our GeoDataFrame: | # Create a Shapely Polygon that has coordinates in WGS84 projection (i.e. lon, lat)
# Coordinates are corner coordinates of Senaatintori in Helsinki
coordinates = [(24.950899, 60.169158), (24.953492, 60.169158), (24.953510, 60.170104), (24.950958, 60.169990)]
# Create a Shapely polygon from the coordinate-tuple list
... | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Now we need to insert that polygon into the column 'geometry' of our GeoDataFrame: | # Insert the polygon to the GeoDataFrame
data['geometry'] = [poly]
# Print out the data
data
# Outputs: | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Now we have a GeoDataFrame with Polygon that we can export to a Shapefile. Let's add another column to our GeoDataFrame called Location with text Senaatintori. | # Add a new column and insert data
data['Location'] = ['Senaatintori']
# Print out the data
data
# Outputs | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Before exporting the data, it is useful to determine the projection for the GeoDataFrame.
A GeoDataFrame has a property called .crs that shows the coordinate system of the data, which is empty in our case since we are creating the data from scratch. If you import a Shapefile into geopandas that has a defined projection,... | # Check the current coordinate system
print(data.crs)
# Outputs:
# Import specific function 'from_epsg' from fiona module
from fiona.crs import from_epsg
# Set the GeoDataFrame's coordinate system to WGS84
data.crs = from_epsg(4326)
# Let's see how the crs definition looks like
data.crs | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Finally, we can export the data using the GeoDataFrame's .to_file function. | # Determine the output path for the Shapefile
out = r"C:\HY-Data\HENTENKA\Data\Senaatintori.shp"
# Write the data into that location
data.to_file(out)
| source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Now we have successfully created a Shapefile from scratch using only Python programming. A similar approach can be used, for example, to read coordinates from a text file (e.g. points) and create Shapefiles from those automatically.
Extra - Create an interactive visualization using bokeh
Following lines of code dem... | # Import bokeh stuff to visualize our map
from bokeh.models.glyphs import Patch
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import GMapPlot, Range1d, ColumnDataSource, PanTool, WheelZoomTool, BoxSelectTool, GMapOptions, ResizeTool
# Plot the data using bokeh - Initialize
output_notebook(... | source/codes/AutoGIS15_Lecture7_PythonGIS.ipynb | Automating-GIS-processes/2017 | mit |
Imports | import copy
from typing import Sequence
import acme
from acme import specs
from acme.agents.tf import actors
from acme.agents.tf import crr
from acme.tf import networks as acme_networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
import numpy as np
from rl_unplugged import dm_control_suite
fro... | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
Data | task_name = 'cartpole_swingup' #@param
gs_path = 'gs://rl_unplugged/dm_control_suite'
num_shards_str, = !gsutil ls {gs_path}/{task_name}/* | wc -l
num_shards = int(num_shards_str) | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
Dataset and environment | batch_size = 256 #@param
task = dm_control_suite.ControlSuite(task_name)
environment = task.environment
environment_spec = specs.make_environment_spec(environment) | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
Networks | def make_networks(
action_spec: specs.BoundedArray,
policy_lstm_sizes: Sequence[int] = None,
critic_lstm_sizes: Sequence[int] = None,
num_components: int = 5,
vmin: float = 0.,
vmax: float = 100.,
num_atoms: int = 21,
):
"""Creates recurrent networks with GMM head used by the agents."""
... | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
Set up TPU if present | try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
accelerator_strategy = snt.distribute.TpuReplicator()
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError... | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
CRR learner | action_spec = environment_spec.actions
action_size = np.prod(action_spec.shape, dtype=int)
with accelerator_strategy.scope():
dataset = dm_control_suite.dataset(
'gs://rl_unplugged/',
data_path=task.data_path,
shapes=task.shapes,
uint8_features=task.uint8_features,
num_threads=1,
batch_size=b... | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
Training loop | # Run
# tf.config.run_functions_eagerly(True)
# if you want to debug the code in eager mode.
for _ in range(100):
learner.step() | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
Evaluation | # Create a logger.
logger = loggers.TerminalLogger(label='evaluation', time_delta=1.)
# Create an environment loop.
loop = acme.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedRecurrentActor(policy_network),
logger=logger)
loop.run(5) | rl_unplugged/dm_control_suite_crr.ipynb | deepmind/deepmind-research | apache-2.0 |
On longer functions, it's nice to be able to see an estimate of how much longer things will take! | df = df.applymap(lambda x: ~x)
df | examples/tutorial/jupyter/execution/pandas_on_ray/local/exercise_4.ipynb | modin-project/modin | apache-2.0 |
Concept for exercise: Spreadsheet
For those who have worked with Excel, the Spreadsheet API will definitely feel familiar! The Spreadsheet API is a Jupyter notebook widget that allows us to interact with Modin DataFrames in a spreadsheet-like fashion while taking advantage of the underlying capabilities of Modin. The w... | !jupyter nbextension enable --py --sys-prefix modin_spreadsheet
ProgressBar.disable()
import modin.experimental.spreadsheet as mss
s3_path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv"
modin_df = pd.read_csv(s3_path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3, nrows=1000)
... | examples/tutorial/jupyter/execution/pandas_on_ray/local/exercise_4.ipynb | modin-project/modin | apache-2.0 |
EM in high dimensions
EM for high-dimensional data requires some special treatment:
* E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python.
* All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by spa... | from sklearn.cluster import KMeans
np.random.seed(5)
num_clusters = 25
# Use scikit-learn's k-means to simplify workflow
kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1)
kmeans_model.fit(tf_idf)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_mode... | machine_learning/4_clustering_and_retrieval/assigment/week4/4_em-with-text-data_graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above. | num_docs = tf_idf.shape[0]
weights = []
for i in xrange(num_clusters):
# Compute the number of data points assigned to cluster i:
num_assigned = cluster_assignment[cluster_assignment == i].shape[0] # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w) | machine_learning/4_clustering_and_retrieval/assigment/week4/4_em-with-text-data_graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
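The counting loop above can also be written with np.bincount. A sketch with a toy assignment array (not the notebook's k-means output):

```python
import numpy as np

cluster_assignment = np.array([0, 1, 1, 2, 0, 1])   # toy cluster labels
num_clusters = 3

# Count how many points fall in each cluster, then normalise to proportions
counts = np.bincount(cluster_assignment, minlength=num_clusters)
weights = counts / float(len(cluster_assignment))
print(weights)    # per-cluster proportions, summing to 1
```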
Running EM
Now that we have initialized all of our parameters, run EM. | out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)
out['loglik']
len(out['means']) | machine_learning/4_clustering_and_retrieval/assigment/week4/4_em-with-text-data_graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, ... | # Fill in the blanks
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
... | machine_learning/4_clustering_and_retrieval/assigment/week4/4_em-with-text-data_graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block. | np.random.seed(5) # See the note below to see why we set seed=5.
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the ... | machine_learning/4_clustering_and_retrieval/assigment/week4/4_em-with-text-data_graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question: Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means?
Quiz Question: For the above model, out_random_init, use the visualize_EM_clusters method you created above. Are the clusters more or less interpretable ... | # YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.
visualize_EM_clusters(tf_idf, out_random_init['means'], out_random_init['covs'], map_index_to_word) | machine_learning/4_clustering_and_retrieval/assigment/week4/4_em-with-text-data_graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a s... | # GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
... | deep-learnining-specialization/1. neural nets and deep learning/week4/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb | diegocavalca/Studies | cc0-1.0 |
<table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}... | # GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), sh... | deep-learnining-specialization/1. neural nets and deep learning/week4/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb | diegocavalca/Studies | cc0-1.0 |
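The truncated cell above computes the cost from equation (7). A sketch, under the assumption that equation (7) is the usual binary cross-entropy averaged over the m examples (the sample AL and Y below are made up):

```python
import numpy as np

def compute_cost_sketch(AL, Y):
    """Binary cross-entropy cost: -(1/m) * sum(Y*log(AL) + (1-Y)*log(1-AL))."""
    m = Y.shape[1]
    cost = -(1.0 / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
    return float(np.squeeze(cost))   # squeeze turns [[cost]] into a scalar

AL = np.array([[0.8, 0.9, 0.4]])     # made-up predicted probabilities
Y = np.array([[1, 1, 0]])            # made-up true labels
print(compute_cost_sketch(AL, Y))    # about 0.2798
```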
Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss funct... | # GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from ... | deep-learnining-specialization/1. neural nets and deep learning/week4/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb | diegocavalca/Studies | cc0-1.0 |
Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
... | # GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (contain... | deep-learnining-specialization/1. neural nets and deep learning/week4/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb | diegocavalca/Studies | cc0-1.0 |
Let's now correct for the class proportion. First we construct the normalised confusion matrix. | # order: galaxy, quasar star
confusion_matrix = np.array(
[[97608, 1892, 500],
[ 2801, 93376, 3823],
[ 1633, 8878, 89489]])
class_total = confusion_matrix.sum(axis=0)
class_total = np.tile(class_total, (3, 1))
normalised_confusion = confusion_matrix / class_total
normalised_confusion
# put normalis... | projects/alasdair/notebooks/05_class_proportion_estimation.ipynb | alasdairtran/mclearn | bsd-3-clause |
Let's now correct for the potential misclassification. | total_galaxies = 357910241
total_quasars = 170020129
total_stars = 266083661
total = total_galaxies + total_quasars + total_stars
corrected_galaxies = int(normalised_confusion[0][0] * total_galaxies + \
normalised_confusion[0][1] * total_quasars + \
normalised_confusion... | projects/alasdair/notebooks/05_class_proportion_estimation.ipynb | alasdairtran/mclearn | bsd-3-clause |
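The three hand-written sums above are exactly a matrix-vector product. A sketch with a toy 2x2 confusion matrix (not the survey's numbers):

```python
import numpy as np

# Toy column-normalised confusion matrix: entry [i, j] is the probability
# that an object classified as class j is truly of class i
normalised_confusion = np.array([[0.9, 0.2],
                                 [0.1, 0.8]])
class_totals = np.array([100, 50])   # made-up classified counts per class

# Each corrected count is a row of the matrix dotted with the totals
corrected = normalised_confusion.dot(class_totals)
print(corrected)    # approximately [100., 50.]
```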
Proportion of the data that has been labelled | training = 1707233+714313+379456
100 * training / total | projects/alasdair/notebooks/05_class_proportion_estimation.ipynb | alasdairtran/mclearn | bsd-3-clause |
A Parser for Regular Expressions
This notebook implements a parser for regular expressions. The parser that is implemented in the function parseExpr parses a regular expression
according to the following <em style="color:blue">EBNF grammar</em>.
regExp -> product ('+' product)*
product -> factor factor*
fa... | import re | Python/RegExp-Parser.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
First, load the data. Loading may take some time. | # Run this cell, but please don't change it.
districts = Map.read_geojson('water_districts.geojson')
zips = Map.read_geojson('ca_zips.geojson.gz')
usage_raw = Table.read_table('water_usage.csv', dtype={'pwsid': str})
income_raw = Table.read_table('ca_income_by_zip.csv', dtype={'ZIP': str}).drop('STATEFIPS', 'STATE', '... | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Part 1: Maps
The districts and zips data sets are Map objects. Documentation on mapping in the datascience package can be found at data8.org/datascience/maps.html. To view a map of California's water districts, run the cell below. Click on a district to see its description. | districts.format(width=400, height=200) | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
A Map is a collection of regions and other features such as points and markers, each of which has a string id and various properties. You can view the features of the districts map as a table using Table.from_records. | district_table = Table.from_records(districts.features)
district_table.show(3) | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
To display a Map containing only two features from the district_table, call Map on an array containing those two features from the feature column.
Question 1.1. Draw a map of the Alameda County Water District (row 0) and the East Bay Municipal Utilities District (row 2). | # Fill in the next line so the last line draws a map of those two districts.
alameda_and_east_bay = ...
Map(alameda_and_east_bay, height=300, width=300)
_ = tests.grade('q11') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Hint: If scrolling becomes slow on your computer, you can clear maps for the cells above by running Cell > All Output > Clear from the Cell menu.
Part 2: California Income
Let's look at the income_raw table, which comes from the IRS. We're going to link this information about incomes to our information about wa... | income_raw | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Some observations:
The table contains several numerical columns and a column for the ZIP code.
For each ZIP code, there are 6 rows. Each row for a ZIP code has data from tax returns in one income bracket. (A tax return is the tax filing from one person or household. An income bracket is a group of people whose annual... | income_by_zipcode = ...
income_by_zipcode
_ = tests.grade('q21') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
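The collapse from six rows per ZIP code down to one is a group-and-sum, the same idea as calling `group('ZIP', sum)` on a datascience Table. A plain-Python sketch with made-up numbers:

```python
from collections import defaultdict

# Toy rows (hypothetical counts): several (ZIP, returns) rows per ZIP code,
# mirroring income_raw's one-row-per-bracket layout.
rows = [('94709', 10), ('94709', 20), ('94709', 5),
        ('94720', 7), ('94720', 3)]

# Accumulate one running total per ZIP code.
totals = defaultdict(int)
for zip_code, n_returns in rows:
    totals[zip_code] += n_returns

print(dict(totals))   # {'94709': 35, '94720': 10}
```

Grouping by ZIP and summing each numeric column turns the per-bracket rows into exactly one row per ZIP code.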
Your income_by_zipcode table probably has column names like N1 sum, which looks a little weird.
Question 2.2. Relabel the columns in income_by_zipcode to match the labels in income_raw
Hint: Inspect income_raw.labels and income_by_zipcode.labels to find the differences you need to change.
Hint 2: Since there are many c... | ...
...
income_by_zipcode
_ = tests.grade('q22') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 2.3.
Create a table called income with one row per ZIP code and the following columns.
A ZIP column with the same contents as 'ZIP' from income_by_zipcode.
A num returns column containing the total number of tax returns that include a total income amount (column 'N02650' from income_by_zipcode).
A total inco... | income = Table().with_columns(
...
...
...
...
)
income.set_format('total income ($)', NumberFormatter(0)).show(5)
_ = tests.grade('q23') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 2.4. All ZIP codes with fewer than 100 returns (or some other special conditions) are grouped together into one ZIP code with a special code. Remove the row for that ZIP code from the income table.
Hint 1: This ZIP code value has far more returns than any of the other ZIP codes. Try using group and sort to fin... | income = ...
_ = tests.grade('q24') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Because each ZIP code has a different number of people, computing the average income across several ZIP codes requires some care. This will come up several times in this project. Here is a simple example:
Question 2.5 Among all the tax returns that
1. include a total income amount, and
2. are filed by people living i... | # Our solution took several lines of code.
average_income = ...
average_income
_ = tests.grade('q25') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
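The "care" mentioned above is that a correct average must weight each ZIP code by its number of returns: averaging the per-ZIP averages overweights small ZIP codes. A toy NumPy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical data: two ZIP codes with very different return counts.
num_returns  = np.array([100, 900])                 # returns per ZIP
total_income = np.array([10_000_000, 36_000_000])   # dollars per ZIP

# Wrong: averaging the per-ZIP averages ignores ZIP sizes.
naive = np.mean(total_income / num_returns)         # 70000.0

# Right: total dollars divided by total returns weights each return equally.
weighted = total_income.sum() / num_returns.sum()   # 46000.0

print(naive, weighted)
```

Here the naive average is pulled up by the small, rich ZIP code; the weighted version reflects what a typical return actually reports.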
Question 2.6. Among all California tax returns that include a total income amount, what is the average total income? Express the answer in dollars as an int rounded to the nearest dollar. | avg_total = ...
avg_total | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Farming
Farms use water, so it's plausible that farming is an important factor in water usage. Here, we will check for a relationship between farming and income.
Among the tax returns in California for ZIP codes represented in the incomes table, is there an association between income and living in a ZIP code with a lo... | # Write code to make a scatter plot here.
... | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 2.8. From the graph, can you say whether ZIP codes with more farmers typically have lower or higher average income than ZIP codes with few or no farmers? Can you say how much lower or higher?
Write your answer here, replacing this text.
Question 2.9. Compare the average incomes for two groups of tax returns: ... | # Build and display a table with two rows:
# 1) incomes of returns in ZIP codes with a greater-than-average proportion of farmers
# 2) incomes of returns in other ZIP codes | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Write your answer here, replacing this text.
Question 2.10. The graph below displays two histograms: the distribution of average incomes of ZIP codes that have above-average proportions of farmers, and that of ZIP codes with below-average proportions of farmers.
<img src="https://i.imgur.com/jicA2to.png"/>
Are ZIP code... | # Write code to draw a map of only the high-income ZIP codes.
# We have filled in some of it and suggested names for variables
# you might want to define.
zip_features = Table.from_records(zips.features)
high_average_zips = ...
high_zips_with_region = ...
Map(high_zips_with_region.column('feature'), width=400, height=3... | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Write your answer here, replacing this text.
Part 3: Water Usage
We will now investigate water usage in California. The usage table contains three columns:
PWSID: The Public Water Supply Identifier of the district
Population: Estimate of average population served in 2015
Water: Average residential water use (gallons p... | # Run this cell to create the usage table.
usage_raw.set_format(4, NumberFormatter)
max_pop = usage_raw.select(0, 'population').group(0, max).relabeled(1, 'Population')
avg_water = usage_raw.select(0, 'res_gpcd').group(0, np.mean).relabeled(1, 'Water')
usage = max_pop.join('pwsid', avg_water).relabeled(0, 'PWSID')
usa... | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 3.1. Draw a map of the water districts, colored by the per capita water usage in each district.
Use the districts.color(...) method to generate the map. It takes as its first argument a two-column table with one row per district that has the district PWSID as its first column. The label of the second column is... | # We have filled in the call to districts.color(...). Set per_capita_usage
# to an appropriate table so that a map of all the water districts is
# displayed.
per_capita_usage = ...
districts.color(per_capita_usage, key_on='feature.properties.PWSID')
_ = tests.grade('q31') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 3.2. Based on the map above, which part of California appears to use more water per person: the San Francisco area or the Los Angeles area?
Write your answer here, replacing this text.
Next, we will try to match each ZIP code with a water district. ZIP code boundaries do not always line up with water district... | wd_vs_zip.show(5) | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 3.3. Complete the district_for_zip function that takes a ZIP code as its argument. It returns the PWSID with the largest value of ZIP in District for that zip_code, if that value is at least 50%. Otherwise, it returns the string 'No District'. | def district_for_zip(zip_code):
zip_code = str(zip_code) # Ensure that the ZIP code is a string, not an integer
districts = ...
at_least_half = ...
if at_least_half:
...
else:
return 'No District'
district_for_zip(94709)
_ = tests.grade('q33') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
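The rule that district_for_zip implements can be sketched without the datascience library: take the district with the largest overlap fraction, and accept it only if that fraction is at least 50%. The IDs and fractions below are hypothetical:

```python
# Hypothetical overlap fractions between one ZIP code and each district.
overlaps = {'CA1910067': 0.62, 'CA0110005': 0.30, 'CA3810011': 0.08}

# Pick the district covering the largest share of the ZIP code...
best_id = max(overlaps, key=overlaps.get)

# ...but only accept it if it covers at least half the ZIP code.
district = best_id if overlaps[best_id] >= 0.5 else 'No District'
print(district)
```

With the fractions above the dominant district covers 62% of the ZIP code, so it is accepted; had the best fraction been below 0.5, the ZIP code would map to 'No District'.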
This function can be used to associate each ZIP code in the income table with a PWSID and discard ZIP codes that do not lie (mostly) in a water district. | zip_pwsids = income.apply(district_for_zip, 'ZIP')
income_with_pwsid = income.with_column('PWSID', zip_pwsids).where('PWSID', are.not_equal_to("No District"))
income_with_pwsid.set_format(2, NumberFormatter(0)).show(5) | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 3.4. Create a table called district_data with one row per PWSID and the following columns:
PWSID: The ID of the district
Population: Population estimate
Water: Average residential water use (gallons per person per day) in 2014-2015
Income: Average income in dollars of all tax returns in ZIP codes that are (mo... | district_income = ...
district_data = ...
district_data.set_format(make_array('Population', 'Water', 'Income'), NumberFormatter(0))
_ = tests.grade('q34') | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Question 3.5. The bay_districts table gives the names of all water districts in the San Francisco Bay Area. Is there an association between water usage and income among Bay Area water districts? Use the tables you have created to compare water usage between the 10 Bay Area water districts with the highest average incom... | bay_districts = Table.read_table('bay_districts.csv')
bay_water_vs_income = ...
top_10 = ...
... | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Complete this one-sentence conclusion: In the Bay Area, people in the top 10 highest-income water districts used an average of _________ more gallons of water per person per day than people in the rest of the districts.
Question 3.6. In one paragraph, summarize what you have discovered through the analyses in this proj... | # For your convenience, you can run this cell to run all the tests at once!
import os
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')] | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
If you want, draw some more maps below. | # Your extensions here (completely optional) | notebooks/data8_notebooks/project1/project1.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Radial rotation example
As in Figure 4a of Lange and Van Sebille (2017), we define a circular flow with period 24 hours, on a C-grid. | def radialrotation_fieldset(xdim=201, ydim=201):
# Coordinates of the test fieldset (on C-grid in m)
a = b = 20000 # domain size
lon = np.linspace(-a/2, a/2, xdim, dtype=np.float32)
lat = np.linspace(-b/2, b/2, ydim, dtype=np.float32)
dx, dy = lon[2]-lon[1], lat[2]-lat[1]
# Define arrays R (ra... | parcels/examples/tutorial_analyticaladvection.ipynb | OceanPARCELS/parcels | mit |
Now simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel. Keep track of how the radius of the Particle trajectory changes during the run. | def UpdateR(particle, fieldset, time):
particle.radius = fieldset.R[time, particle.depth, particle.lat, particle.lon]
class MyParticle(ScipyParticle):
radius = Variable('radius', dtype=np.float32, initial=0.)
radius_start = Variable('radius_start', dtype=np.float32, initial=fieldsetRR.R)
pset = ParticleSe... | parcels/examples/tutorial_analyticaladvection.ipynb | OceanPARCELS/parcels | mit |
Now plot the trajectory and calculate how much the radius has changed during the run. | output.close()
plotTrajectoriesFile('radialAnalytical.nc')
print('Particle radius at start of run %f' % pset.radius_start[0])
print('Particle radius at end of run %f' % pset.radius[0])
print('Change in Particle radius %f' % (pset.radius[0] - pset.radius_start[0])) | parcels/examples/tutorial_analyticaladvection.ipynb | OceanPARCELS/parcels | mit |