It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality. Applying these convolutional filters to the images that were output from the first conv-layer gives the following images. Note that these are down-sampled yet again to half the resolution of the images f...
plot_conv_layer(layer=layer_conv2, image=image1)
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
And these are the results of applying the filter-weights to the second image.
plot_conv_layer(layer=layer_conv2, image=image2)
Write the test results to a CSV file
# def write_predictions(ims, ids):
#     ims = ims.reshape(ims.shape[0], img_size_flat)
#     preds = session.run(y_pred, feed_dict={x: ims})
#     result = pd.DataFrame(preds, columns=classes)
#     result.loc[:, 'id'] = pd.Series(ids, index=result.index)
#     pred_file = 'predictions.csv'
#     result.to_csv(pred_fi...
Close the TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
session.close()
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        self.activation_...
first-neural-network/Your_first_neural_network.ipynb
yuvrajsingh86/DeepLearning_Udacity
mit
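The forward pass you are asked to implement can be sketched as follows. This is a minimal illustration, not the notebook's solution: the sigmoid hidden activation and identity output are the usual choices for this regression exercise, and the function name `forward_pass` and the fixed example weights are made up for demonstration.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation used for the hidden layer
    return 1 / (1 + np.exp(-x))

def forward_pass(X, weights_input_to_hidden, weights_hidden_to_output):
    # Hidden layer: weighted sum of inputs, then sigmoid activation
    hidden_inputs = np.dot(X, weights_input_to_hidden)
    hidden_outputs = sigmoid(hidden_inputs)
    # Output layer: regression target, so the activation is the identity
    final_outputs = np.dot(hidden_outputs, weights_hidden_to_output)
    return final_outputs

# Tiny smoke test with fixed, illustrative weights
X = np.array([[0.5, -0.2, 0.1]])
w_ih = np.full((3, 2), 0.1)
w_ho = np.full((2, 1), 0.1)
y = forward_pass(X, w_ih, w_ho)
```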
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.5
hidden_nodes = 26
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random ...
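The train and validation losses tracked in the loop above are typically the mean squared error between predictions and targets. A minimal sketch, assuming MSE is the metric used (the helper name `MSE` is illustrative, not part of the provided code):

```python
import numpy as np

def MSE(y, Y):
    # Mean squared error between predictions y and targets Y
    return np.mean((y - Y) ** 2)

# Perfect predictions give zero loss
preds = np.array([0.1, 0.5, 0.9])
targets = np.array([0.1, 0.5, 0.9])
loss = MSE(preds, targets)
```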
Web Scraping
# Library used to request a page from a web site
import urllib.request

# Define the URL
# Check the permissions at https://www.python.org/robots.txt
with urllib.request.urlopen("https://www.python.org") as url:
    page = url.read()

# Print the content
print(page)

from bs4 import BeautifulSoup

# Ana...
Cap14/DSA-Python-Cap14-01-WebScraping.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
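The BeautifulSoup parsing step can be illustrated offline on a small HTML snippet, without hitting the network (the snippet below is made up for demonstration):

```python
from bs4 import BeautifulSoup

html = """
<html><head><title>Example page</title></head>
<body><a href="https://www.python.org">Python</a></body></html>
"""

# Parse the HTML and extract the page title and all link targets
soup = BeautifulSoup(html, 'html.parser')
title = soup.title.string
links = [a['href'] for a in soup.find_all('a')]
```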
Now let's define the CategoricalQHead class. Each class in Coach has a complementary Parameters class which defines its constructor parameters, so we will additionally define the CategoricalQHeadParameters class. The network structure should be defined in the _build_module function, which gets the previous layer ...
class CategoricalQHeadParameters(HeadParameters):
    def __init__(self, activation_function: str = 'relu', name: str = 'categorical_q_head_params'):
        super().__init__(parameterized_class=CategoricalQHead,
                         activation_function=activation_function,
                         name=name)

class CategoricalQHead(Head):
    def __init__(self, agen...
tutorials/1. Implementing an Algorithm.ipynb
NervanaSystems/coach
apache-2.0
The Agent The agent will implement the Categorical DQN algorithm. Each agent has a complementary AgentParameters class, which allows selecting the parameters of the agent's sub-modules:

* the algorithm
* the exploration policy
* the memory
* the networks

Now let's go ahead and define the network parameters - it will reu...
from rl_coach.agents.dqn_agent import DQNNetworkParameters

class CategoricalDQNNetworkParameters(DQNNetworkParameters):
    def __init__(self):
        super().__init__()
        self.heads_parameters = [CategoricalQHeadParameters()]
Next we'll define the algorithm parameters, which are the same as the DQN algorithm parameters, with the addition of the Categorical DQN specific v_min, v_max and number of atoms. We'll also define the parameters of the exploration policy, which is epsilon greedy with epsilon starting at a value of 1.0 and decaying to ...
from rl_coach.agents.dqn_agent import DQNAlgorithmParameters
from rl_coach.exploration_policies.e_greedy import EGreedyParameters
from rl_coach.schedules import LinearSchedule

class CategoricalDQNAlgorithmParameters(DQNAlgorithmParameters):
    def __init__(self):
        super().__init__()
        self.v_min = -10.0...
Now let's define the agent parameters class which contains all the parameters to be used by the agent - the network, algorithm and exploration parameters that we defined above, and also the parameters of the memory module to be used, which is the default experience replay buffer in this case. Notice that the networks ...
from rl_coach.agents.value_optimization_agent import ValueOptimizationAgent
from rl_coach.base_parameters import AgentParameters
from rl_coach.core_types import StateType
from rl_coach.memories.non_episodic.experience_replay import ExperienceReplayParameters

class CategoricalDQNAgentParameters(AgentParameters):
    d...
The last step is to define the agent itself - CategoricalDQNAgent - which is a type of value optimization agent, so it will inherit the ValueOptimizationAgent class. It could also have inherited DQNAgent, which would result in the same functionality. Our agent will implement the learn_from_batch function which updates ...
from typing import Union

# Categorical Deep Q Network - https://arxiv.org/pdf/1707.06887.pdf
class CategoricalDQNAgent(ValueOptimizationAgent):
    def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent'] = None):
        super().__init__(agent_parameters, parent)
        self.z_values = np....
Some important things to notice here:

* self.networks['main'] is a NetworkWrapper object. It holds all the copies of the 'main' network:
  - a global network, which is shared between all the workers in distributed training
  - an online network, which is a local copy of the network intended to keep the weights stati...
from rl_coach.agents.categorical_dqn_agent import CategoricalDQNAgentParameters

agent_params = CategoricalDQNAgentParameters()
agent_params.network_wrappers['main'].learning_rate = 0.00025
Now, let's define the environment parameters. We will use the default Atari parameters (frame skip of 4, taking the max over subsequent frames, etc.), and we will select the 'Breakout' game level.
from rl_coach.environments.gym_environment import Atari, atari_deterministic_v4

env_params = Atari(level='BreakoutDeterministic-v4')
Connecting all the dots together - we'll define a graph manager with the Categorical DQN agent parameters, the Atari environment parameters, and the scheduling and visualization parameters.
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.base_parameters import VisualizationParameters
from rl_coach.environments.gym_environment import atari_schedule

graph_manager = BasicRLGraphManager(agent_params=agent_params,
                                    env_params=env_params,
                                    ...
Running the Preset (this is normally done from the command line by running coach -p Atari_C51 -lvl breakout)
# let the adventure begin
graph_manager.improve()
Explore "core" ICPW data Prior to updating the "core" ICPW datasets in RESA, I need to get an overview of what's already in the database and what isn't.
# Connect to db
eng = nivapy.da.connect()
check_core_icpw.ipynb
JamesSample/icpw
mit
1. Query ICPW projects There are 18 projects (one for each country) currently in RESA. We also have data for some countries that do not yet have a project defined (e.g. the Netherlands).
# Query projects
prj_grid = nivapy.da.select_resa_projects(eng)
prj_grid

prj_df = prj_grid.get_selected_df()
print(len(prj_df))
prj_df
2. Get station list There are 262 stations currently associated with the projects in RESA.
# Get stations
stn_df = nivapy.da.select_resa_project_stations(prj_df, eng)
print(len(stn_df))
stn_df.head()

# Map
nivapy.spatial.quickmap(stn_df, popup='station_code')
3. Get parameters Get a list of parameters available at these stations. I assume that all data submissions to ICPW will report pH, so extracting pH data should be a good way to get an indication of which stations actually have data.
# Select parameters
par_grid = nivapy.da.select_resa_station_parameters(stn_df,
                                                      '1970-01-01',
                                                      '2019-01-01',
                                                      eng)
par_grid

# Get selected pars
par_df = par_grid.get...
4. Get chemistry data
# Get data
wc_df, dup_df = nivapy.da.select_resa_water_chemistry(stn_df, par_df,
                                                     '1970-01-01', '2019-01-01', ...
So, there are 262 stations within the "core" ICPW projects, but 24 of these have no data whatsoever associated with them (listed above). 5. Date of the last sample by country The code below gets the most recent pH sample in the database for each country.
# Most recent data
for idx, row in prj_df.iterrows():
    # Get stations
    cnt_stns = nivapy.da.select_resa_project_stations([row['project_id'],], eng)

    # Get pH data
    wc, dups = nivapy.da.select_resa_water_chemistry(cnt_stns,
                                                   [1,],  # pH
                                                   ...
Compute MNE inverse solution on evoked data in a mixed source space Create a mixed source space and compute MNE inverse solution on evoked dataset
# Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>
#
# License: BSD (3-clause)

import os.path as op
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne import setup_volume_source_space
from mne import make_forward_solution
from mne.minimum_norm import make_inverse_operator, apply_...
0.14/_downloads/plot_mixed_source_space_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set up our source space.
# List substructures we are interested in. We select only the
# sub structures we want to include in the source space
labels_vol = ['Left-Amygdala',
              'Left-Thalamus-Proper',
              'Left-Cerebellum-Cortex',
              'Brain-Stem',
              'Right-Amygdala',
              'Right-Thalamus-Pro...
Export source positions to a NIfTI file:
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True)

plotting.plot_img(nii_fname, cmap=plt.cm.spectral)
plt.show()

# Compute the fwd matrix
fwd = make_forward_solution(fname_evoked, fname_trans, src, fname_bem,
                            mindist=5.0,  # ignore ...
Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.
temp = xr.DataArray(data)
temp
notebooks/XArray/XArray and CF.ipynb
Unidata/unidata-python-workshop
mit
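The `data` variable itself is not shown in this excerpt. A plausible stand-in, assuming (as the later cells suggest) a 5 times × 3 lats × 4 lons array of random values:

```python
import numpy as np
import xarray as xr

# Random values shaped (time, lat, lon) to match the coordinates used later
data = np.random.rand(5, 3, 4)

# Without dimension names, XArray generates basic ones (dim_0, dim_1, ...)
temp = xr.DataArray(data)
```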
We can also pass in our own dimension names:
temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])
temp
This is already improved upon from a numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2018-01-01', periods=5)
times

# Sample lon/lats
lons = np.linspace(-120, -60, 4)
lats = np.linspace(25, 55, 3)
When we create the DataArray instance, we pass in the arrays we just created:
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])
temp
...and we can also set some attribute metadata:
temp.attrs['units'] = 'kelvin'
temp.attrs['standard_name'] = 'air_temperature'
temp
Notice what happens if we perform a mathematical operation with the DataArray: the coordinate values persist, but the attributes are lost. This is done because it is very challenging to know whether the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.
# For example, convert Kelvin to Celsius
temp - 273.15
Selection We can use the .sel method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).
temp.sel(time='2018-01-02')
.sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:
from datetime import timedelta
temp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))
Exercise .interp() works similarly to .sel(). Using .interp(), get an interpolated time series "forecast" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for <a href="http://xarray.pydata.org/en/stable/interpolation.html">interp</a>).
# Your code goes here
Solution
# %load solutions/interp_solution.py
Slicing with Selection
temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))
.loc All of these operations can also be done within square brackets on the .loc attribute of the DataArray. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.
# As done above
temp.loc['2018-01-02']

temp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]

# This *doesn't* work however:
# temp.loc[-110:-70, 25:45, '2018-01-01':'2018-01-03']
Opening netCDF data With its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).
# Open sample North American Reanalysis data in netCDF format
ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds
This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields:
ds.isobaric1
or
ds['isobaric1']
Datasets also support much of the same subsetting operations as DataArray, but will perform the operation on all data:
ds_1000 = ds.sel(isobaric1=1000.0)
ds_1000

ds_1000.Temperature_isobaric
Aggregation operations Not only can you use the named dimensions for manual slicing and indexing of data, but you can also use it to control aggregation operations, like sum:
u_winds = ds['u-component_of_wind_isobaric']
u_winds.std(dim=['x', 'y'])
Exercise Using the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:

* x: -182km to 424km
* y: -1450km to -990km

(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal proje...
# %load solutions/mean_profile.py
Resources There is much more in the XArray library. To learn more, visit the XArray Documentation Introduction to Climate and Forecasting Metadata Conventions In order to better enable reproducible data and research, the Climate and Forecasting (CF) metadata convention was created to have proper metadata in atmospheric...
# Import some useful Python tools
from datetime import datetime

# Twelve hours of hourly output starting at 22Z today
start = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)
times = np.array([start + timedelta(hours=h) for h in range(13)])

# 3km spacing in x and y
x = np.arange(-150, 153, 3)
y =...
Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12:00:00.00'. 'seconds', 'minutes', 'hours', and 'days' are the most commonly used units for time. Due to the variable length of months and years, they are not recommended. Before we can write data, we n...
from cftime import date2num

time_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])
time_vals = date2num(times, time_units)
time_vals
Now we can create the forecast_time variable just as we did before for the other coordinate variables.

Convert arrays into Xarray Dataset
ds = xr.Dataset({'temperature': (['time', 'z', 'y', 'x'], temps, {'units': 'Kelvin'})},
                coords={'x_dist': (['x'], x, {'units': 'km'}),
                        'y_dist': (['y'], y, {'units': 'km'}),
                        'pressure': (['z'], press, {'units': 'hPa'}),
                        'forecast_ti...
Due to how xarray handles time units, we need to encode the units in the forecast_time coordinate.
ds.forecast_time.encoding['units'] = time_units
If we look at our data variable, we can see the units printed out, so they were attached properly!
ds.temperature
We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
ds.attrs['Conventions'] = 'CF-1.7'
ds.attrs['title'] = 'Forecast model run'
ds.attrs['institution'] = 'Unidata'
ds.attrs['source'] = 'WRF-1.5'
ds.attrs['history'] = str(datetime.utcnow()) + ' Python'
ds.attrs['references'] = ''
ds.attrs['comment'] = ''
ds
We can also add attributes to this variable to define metadata. The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we have already set it to a value of 'Kelvin'. We also set the sta...
ds.temperature.attrs['standard_name'] = 'air_temperature'
ds.temperature.attrs['long_name'] = 'Forecast air temperature'
ds.temperature.attrs['missing_value'] = -9999
ds.temperature
Coordinate variables To properly orient our data in time and space, we need to go beyond dimensions (which define common sizes and alignment) and include values along these dimensions, which are called "Coordinate Variables". Generally, these are defined by creating a one dimensional variable with the same name as the ...
ds.x.attrs['axis'] = 'X'  # Optional
ds.x.attrs['standard_name'] = 'projection_x_coordinate'
ds.x.attrs['long_name'] = 'x-coordinate in projected coordinate system'

ds.y.attrs['axis'] = 'Y'  # Optional
ds.y.attrs['standard_name'] = 'projection_y_coordinate'
ds.y.attrs['long_name'] = 'y-coordinate in projected coordinate...
We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the vari...
ds.pressure.attrs['axis'] = 'Z'  # Optional
ds.pressure.attrs['standard_name'] = 'air_pressure'
ds.pressure.attrs['positive'] = 'down'  # Optional

ds.forecast_time.attrs['axis'] = 'T'  # Optional
ds.forecast_time.attrs['standard_name'] = 'time'  # Optional
ds.forecast_time.attrs['long_name'] = 'time'
Auxiliary Coordinates Our data are still not CF-compliant because they do not contain latitude and longitude information, which is needed to properly locate the data. To solve this, we need to add variables with latitude and longitude. These are called "auxiliary coordinate variables", not because they are extra, but ...
from pyproj import Proj

X, Y = np.meshgrid(x, y)
lcc = Proj({'proj': 'lcc', 'lon_0': -105, 'lat_0': 40, 'a': 6371000., 'lat_1': 25})
lon, lat = lcc(X * 1000, Y * 1000, inverse=True)
Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except th...
ds = ds.assign_coords(lon=(['y', 'x'], lon))
ds = ds.assign_coords(lat=(['y', 'x'], lat))
ds

ds.lon.attrs['units'] = 'degrees_east'
ds.lon.attrs['standard_name'] = 'longitude'  # Optional
ds.lon.attrs['long_name'] = 'longitude'

ds.lat.attrs['units'] = 'degrees_north'
ds.lat.attrs['standard_name'] = 'latitude'  # ...
With the variables created, we identify them as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxiliary coordinate variables:
ds
Coordinate System Information With our data specified on a Lambert conformal projected grid, it would be good to include this information in our metadata. We can do this using a "grid mapping" variable. This uses a dummy scalar variable as a namespace for holding all of the required information. Relevant variables then...
ds['lambert_projection'] = int()
ds.lambert_projection.attrs['grid_mapping_name'] = 'lambert_conformal_conic'
ds.lambert_projection.attrs['standard_parallel'] = 25.
ds.lambert_projection.attrs['latitude_of_projection_origin'] = 40.
ds.lambert_projection.attrs['longitude_of_central_meridian'] = -105.
ds.lambert_projecti...
Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable:
ds.temperature.attrs['grid_mapping'] = 'lambert_projection'  # or proj_var.name
ds
Write to NetCDF Xarray has built-in support for a few flavors of netCDF. Here we'll write a netCDF4 file from our Dataset.
ds.to_netcdf('test_netcdf.nc', format='NETCDF4')
!ncdump test_netcdf.nc
Introduction
from IPython.display import YouTubeVideo
YouTubeVideo(id='sdF0uJo2KdU', width="100%")
notebooks/01-introduction/02-networkx-intro.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
In this chapter, we will introduce you to the NetworkX API. This will allow you to create and manipulate graphs in your computer memory, thus giving you a language to more concretely explore graph theory ideas. Throughout the book, we will be using different graph datasets to help us anchor ideas. In this section, we ...
import networkx as nx
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
import warnings
from nams import load_data as cf

warnings.filterwarnings('ignore')

G = cf.load_seventh_grader_network()
Understanding a graph's basic statistics When you get graph data, one of the first things you'll want to do is to check its basic graph statistics: the number of nodes and the number of edges that are represented in the graph. This is a basic sanity-check on your data that you don't want to skip out on. Querying graph ...
type(G)
Because the graph is a DiGraph, this tells us that the graph is a directed one. If it were undirected, the type would change:
H = nx.Graph()
type(H)
Querying node information Let's now query for the nodeset:
list(G.nodes())[0:5]
G.nodes() returns a "view" on the nodes. We can't actually slice into the view and grab out a sub-selection, but we can at least see what nodes are present. For brevity, we have passed G.nodes() into a list() constructor and sliced it, so that we don't pollute the output. Because a NodeView is iterable, though, we can qu...
len(G.nodes())
If our nodes have metadata attached to them, we can view the metadata at the same time by passing in data=True:
list(G.nodes(data=True))[0:5]
G.nodes(data=True) returns a NodeDataView, which you can see is dictionary-like. Additionally, we can select out individual nodes:
G.nodes[1]
Now, because a NodeDataView is dictionary-like, looping over G.nodes(data=True) is very much like looping over key-value pairs of a dictionary. As such, we can write things like:

```python
for n, d in G.nodes(data=True):
    # n is the node
    # d is the metadata dictionary
    ...
```

This is analogous to how we would loop ...
from nams.solutions.intro import node_metadata

#### REPLACE THE NEXT LINE WITH YOUR ANSWER
mf_counts = node_metadata(G)
Test your implementation by checking it against the test_answer function below.
from typing import Dict

def test_answer(mf_counts: Dict):
    assert mf_counts['female'] == 17
    assert mf_counts['male'] == 12

test_answer(mf_counts)
With this dictionary-like syntax, we can query back the metadata that's associated with any node. Querying edge information Now that you've learned how to query for node information, let's now see how to query for all of the edges in the graph:
list(G.edges())[0:5]
Similar to the NodeView, G.edges() returns an EdgeView that is also iterable. As above, we have abbreviated the output inside a sliced list to keep things readable. Because G.edges() is iterable, we can get its length to see the number of edges that are present in a graph.
len(G.edges())
Likewise, we can also query for all of the edges' metadata:
list(G.edges(data=True))[0:5]
Additionally, it is possible for us to select out individual edges, as long as they exist in the graph:
G.edges[15, 10]
This yields the metadata dictionary for that edge. If the edge does not exist, then we get an error:

```python
G.edges[15, 16]
```

```python
KeyError                                  Traceback (most recent call last)
<ipython-input-21-ce014cab875a> in <module>
----> 1 G.edges[15, 16]

~/anaconda/envs/nams/lib/pyth...
```
from nams.solutions.intro import edge_metadata

#### REPLACE THE NEXT LINE WITH YOUR ANSWER
maxcount = edge_metadata(G)
Likewise, you can test your answer using the test function below:
def test_maxcount(maxcount):
    assert maxcount == 3

test_maxcount(maxcount)
Manipulating the graph Great stuff! You now know how to query a graph for:

- its node set, optionally including metadata
- individual node metadata
- its edge set, optionally including metadata, and
- individual edges' metadata

Now, let's learn how to manipulate the graph. Specifically, we'll learn how to add nodes and edge...
from nams.solutions.intro import adding_students

#### REPLACE THE NEXT LINE WITH YOUR ANSWER
G = adding_students(G)
You can verify that the graph has been correctly created by executing the test function below.
def test_graph_integrity(G):
    assert 30 in G.nodes()
    assert 31 in G.nodes()
    assert G.nodes[30]['gender'] == 'male'
    assert G.nodes[31]['gender'] == 'female'
    assert G.has_edge(30, 31)
    assert G.has_edge(30, 7)
    assert G.has_edge(31, 7)
    assert G.edges[30, 7]['count'] == 3
    assert G.edges[7,...
Coding Patterns These are some recommended coding patterns when doing network analysis using NetworkX, which stem from my personal experience with the package.

Iterating using List Comprehensions I would recommend that you use the following for compactness:

```python
[d['attr'] for n, d in G.nodes(data=True)]
```

And if the ...
from nams.solutions.intro import unrequitted_friendships_v1

#### REPLACE THE NEXT LINE WITH YOUR ANSWER
unrequitted_friendships = unrequitted_friendships_v1(G)
assert len(unrequitted_friendships) == 124
In a previous session at ODSC East 2018, a few other class participants provided the following solutions, which you can take a look at by uncommenting the following cells. This first one by @schwanne is the list comprehension version of the above solution:
from nams.solutions.intro import unrequitted_friendships_v2 # unrequitted_friendships_v2??
notebooks/01-introduction/02-networkx-intro.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
This one by @end0 is a unique one involving sets.
from nams.solutions.intro import unrequitted_friendships_v3 # unrequitted_friendships_v3??
notebooks/01-introduction/02-networkx-intro.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Solution Answers Here are the answers to the exercises above.
import nams.solutions.intro as solutions import inspect print(inspect.getsource(solutions))
notebooks/01-introduction/02-networkx-intro.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
Getting Data The first thing we're going to do is get monthly values for the Market Cap, P/E Ratio, and Monthly Returns of every equity. Monthly Returns measures the return accrued over an entire month of trading: divide the month's last close price by its first close price and subtract 1.
from quantopian.pipeline import Pipeline from quantopian.pipeline.data import morningstar from quantopian.pipeline.data.builtin import USEquityPricing from quantopian.pipeline.factors import CustomFactor, Returns def make_pipeline(): """ Create and return our pipeline. We break this piece of l...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
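As a concrete illustration of that definition (the daily closes below are hypothetical):

```python
import pandas as pd

# Hypothetical daily closes for one equity over one month
closes = pd.Series(
    [100.0, 102.0, 101.0, 105.0],
    index=pd.to_datetime(["2013-01-02", "2013-01-10", "2013-01-20", "2013-01-31"]),
)

# Monthly return: last close divided by first close, minus 1
monthly_return = closes.iloc[-1] / closes.iloc[0] - 1
print(monthly_return)  # 0.05
```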
Let's take a look at the data to get a quick sense of what we have. This may take a while.
from quantopian.research import run_pipeline start_date = '2013-01-01' end_date = '2015-02-01' data = run_pipeline(pipe, start_date, end_date) # remove NaN values data = data.dropna() # show data data
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Now, we need to take each of these individual factors, clean them to remove NaN values and aggregate them for each month.
cap_data = data['Market Cap'].transpose().unstack() # extract series of data cap_data = cap_data.T.dropna().T # remove NaN values cap_data = cap_data.resample('M', how='last') # use last instance in month to aggregate pe_data = data['PE Ratio'].transpose().unstack() pe_data = pe_data.T.dropna().T pe_data = pe_data.res...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
The next step is to figure out which equities we have data for. Data sources are never perfect, and stocks go in and out of existence through mergers, acquisitions, and bankruptcies. We'll make a list of the stocks common to all three sources (our factor data sets) and then filter each down to just those stocks.
common_equities = cap_data.T.index.intersection(pe_data.T.index).intersection(month_data.T.index)
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
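The chained `intersection` step can be illustrated standalone with toy indices (the tickers below are made up):

```python
import pandas as pd

# Toy indices standing in for the three factor datasets' security lists
cap_idx = pd.Index(["AAPL", "MSFT", "GOOG", "XYZ"])
pe_idx = pd.Index(["AAPL", "MSFT", "GOOG"])
ret_idx = pd.Index(["AAPL", "GOOG", "ABC"])

# Keep only the securities present in all three sources
common = cap_idx.intersection(pe_idx).intersection(ret_idx)
print(sorted(common))  # ['AAPL', 'GOOG']
```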
Now, we will make sure that each time series is being run over an identical set of securities.
cap_data_filtered = cap_data[common_equities][:-1] month_forward_returns = month_data[common_equities][1:] pe_data_filtered = pe_data[common_equities][:-1]
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Here is the filtered data for market cap over all equities for the first 5 months, as an example.
cap_data_filtered.head()
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Because we're dealing with ranking systems, at several points we're going to want to rank our data. Let's check how our data looks when ranked to get a sense for this.
cap_data_filtered.rank().head()
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Looking at Correlations Over Time Now that we have the data, let's do something with it. Our first analysis will be to measure the monthly Spearman rank correlation coefficient between Market Cap and month-forward returns. In other words, how predictive of 30-day returns is ranking your universe by market cap?
scores = np.zeros(24) pvalues = np.zeros(24) for i in range(24): score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], month_forward_returns.iloc[i]) pvalues[i] = pvalue scores[i] = score plt.bar(range(1,25),scores) plt.hlines(np.mean(scores), 1, 25, colors='r', linestyles='dashed') plt.xlabel('Mo...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
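As a toy illustration of what `stats.spearmanr` measures (the factor values below are made up): a factor that orders forward returns perfectly yields a coefficient of 1, regardless of the magnitudes involved.

```python
import numpy as np
from scipy import stats

# Made-up factor values and forward returns with the same ordering
factor = np.array([10.0, 20.0, 30.0, 40.0])
fwd_returns = np.array([0.01, 0.02, 0.05, 0.09])

# Spearman correlates the *ranks*, so any monotonic relationship scores 1
score, pvalue = stats.spearmanr(factor, fwd_returns)
print(score)  # 1.0
```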
We can see that the average correlation is positive, but varies a lot from month to month. Let's look at the same analysis, but with PE Ratio.
scores = np.zeros(24) pvalues = np.zeros(24) for i in range(24): score, pvalue = stats.spearmanr(pe_data_filtered.iloc[i], month_forward_returns.iloc[i]) pvalues[i] = pvalue scores[i] = score plt.bar(range(1,25),scores) plt.hlines(np.mean(scores), 1, 25, colors='r', linestyles='dashed') plt.xlabel('Mon...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
The correlation of PE Ratio and 30-day returns seems to be near 0 on average. It's important to note that this analysis is monthly and covers 2013 to 2015. Different factors are predictive on different timeframes and frequencies, so the fact that PE Ratio doesn't appear predictive here is not necessarily reason to throw it out as a useful...
def compute_basket_returns(factor_data, forward_returns, number_of_baskets, month): data = pd.concat([factor_data.iloc[month-1],forward_returns.iloc[month-1]], axis=1) # Rank the equities on the factor values data.columns = ['Factor Value', 'Month Forward Returns'] data.sort_values('Factor Value', inplace=Tru...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
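As a rough standalone sketch of the basket idea (this uses `pd.qcut` on synthetic data and is an assumption about the approach, not the notebook's exact helper): rank the universe by factor value, cut it into equal-size baskets, and average the forward returns within each basket.

```python
import numpy as np
import pandas as pd

# Synthetic universe: a factor with a weak positive relationship to returns
rng = np.random.RandomState(0)
factor = pd.Series(rng.randn(100))
fwd_returns = pd.Series(0.1 * factor + rng.randn(100) * 0.05)

# Assign each security to one of 10 equal-size baskets by factor rank
baskets = pd.qcut(factor, 10, labels=False)

# Average forward return within each basket
basket_returns = fwd_returns.groupby(baskets).mean()
print(len(basket_returns))  # 10
```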
The first thing we'll do with this function is compute this for each month and then average. This should give us a sense of the relationship over a long timeframe.
number_of_baskets = 10 mean_basket_returns = np.zeros(number_of_baskets) for m in range(1, 25): basket_returns = compute_basket_returns(cap_data_filtered, month_forward_returns, number_of_baskets, m) mean_basket_returns += basket_returns mean_basket_returns /= 24 # Plot the returns of each basket plt.bar(...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Spread Consistency Of course, that's just the average relationship. To get a sense of how consistent this is, and whether or not we would want to trade on it, we should look at it over time. Here we'll look at the monthly spreads for the first year. We can see a lot of variation, and further analysis should be done to ...
f, axarr = plt.subplots(3, 4) for month in range(1, 13): basket_returns = compute_basket_returns(cap_data_filtered, month_forward_returns, 10, month) r = (month-1) // 4 c = (month-1) % 4 axarr[r, c].bar(range(number_of_baskets), basket_returns) axarr[r, c].xaxis.set_visible(False) # Hide t...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
We'll repeat the same analysis for PE Ratio.
number_of_baskets = 10 mean_basket_returns = np.zeros(number_of_baskets) for m in range(1, 25): basket_returns = compute_basket_returns(pe_data_filtered, month_forward_returns, number_of_baskets, m) mean_basket_returns += basket_returns mean_basket_returns /= 24 # Plot the returns of each basket plt.bar(r...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Sometimes Factors are Just Other Factors Oftentimes a new factor will be discovered that seems to induce spread, but it turns out that it is just a new, and potentially more complicated, way to compute a well-known factor. Consider for instance the case in which you have poured tons of resources into developing a new fa...
scores = np.zeros(24) pvalues = np.zeros(24) for i in range(24): score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], pe_data_filtered.iloc[i]) pvalues[i] = pvalue scores[i] = score plt.bar(range(1,25),scores) plt.hlines(np.mean(scores), 1, 25, colors='r', linestyles='dashed') plt.xlabel('Month')...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Let's also look at the p-values, because the correlations may not be that meaningful by themselves.
scores = np.zeros(24) pvalues = np.zeros(24) for i in range(24): score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], pe_data_filtered.iloc[i]) pvalues[i] = pvalue scores[i] = score plt.bar(range(1,25),pvalues) plt.xlabel('Month') plt.xlim((1, 25)) plt.legend(['Mean Correlation over All Months', ...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
There is interesting behavior here, and further analysis would be needed to determine whether a relationship exists.
pe_dataframe = pd.DataFrame(pe_data_filtered.iloc[0]) pe_dataframe.columns = ['F1'] cap_dataframe = pd.DataFrame(cap_data_filtered.iloc[0]) cap_dataframe.columns = ['F2'] returns_dataframe = pd.DataFrame(month_forward_returns.iloc[0]) returns_dataframe.columns = ['Returns'] data = pe_dataframe.join(cap_dataframe).join...
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
quantopian/research_public
apache-2.0
Fix top issues when converting a TF model for Edge TPU This page shows how to fix some known issues when converting TensorFlow 2 models for the Edge TPU. <a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/fix_conversion_issues_ptq_tf2.ipynb" target="_parent"><img src="https://colab.re...
import tensorflow as tf assert float(tf.__version__[:3]) >= 2.3 import os import numpy as np import matplotlib.pyplot as plt
fix_conversion_issues_ptq_tf2.ipynb
google-coral/tutorials
apache-2.0
Install the Edge TPU Compiler:
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list ! sudo apt-get update ! sudo apt-get install edgetpu-compiler
fix_conversion_issues_ptq_tf2.ipynb
google-coral/tutorials
apache-2.0
Create quantization function
def quantize_model(converter): # This generator provides a junk representative dataset # (It creates a poor model but is only for demo purposes) def representative_data_gen(): for i in range(10): image = tf.random.uniform([1, 224, 224, 3]) yield [image] converter.optimizations = [tf.lite.Op...
fix_conversion_issues_ptq_tf2.ipynb
google-coral/tutorials
apache-2.0
Can't compile due to dynamic batch size The Edge TPU Compiler fails for some models, such as MobileNetV1, if the input shape's batch size is not set to 1, although this isn't exactly obvious from the compiler's output: Invalid model: mobilenet_quant.tflite Model not quantized That error might be caused by something else, b...
model = tf.keras.applications.MobileNet()
fix_conversion_issues_ptq_tf2.ipynb
google-coral/tutorials
apache-2.0
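One common workaround is sketched below: rebuild the model graph with the batch dimension pinned to 1 before creating the TFLite converter, so the converted model has a static batch size. This is an assumption about the fix rather than text from this notebook, and `weights=None` is used here only to avoid downloading pretrained weights in the demo.

```python
import tensorflow as tf

# Sketch: wrap the model with an input whose batch dimension is fixed to 1,
# since the Edge TPU compiler expects a static batch size.
base = tf.keras.applications.MobileNet(weights=None)
inp = tf.keras.Input(shape=(224, 224, 3), batch_size=1)
fixed = tf.keras.Model(inputs=inp, outputs=base(inp))
print(fixed.input_shape)  # (1, 224, 224, 3)
```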