Method 1: update using the info in the gradient

This means we will update the image based on the value of the gradient; ideally, this gives us an adversarial image with less wiggle, as only a little wiggle is needed at pixels where the gradient is large.
adversarial_img = origin_images.copy()
for i in range(0, iter_num):
    gradient = img_gradient.eval({x: adversarial_img, y_: target_labels, keep_prob: 1.0})
    adversarial_img = adversarial_img - eta * gradient
prediction = tf.argmax(y_pred, 1)
prediction_val = prediction.eval(feed_dict={x: adversarial_img, keep...
predictions [2 2 2 2 2 2 2 2 2 2] Confidence 2: [ 0.99839801 0.50398463 0.99999976 0.94279677 0.99306434 0.99999869 0.99774051 0.99999976 0.99999988 0.99998116] Confidence 6: [ 6.17733331e-09 3.38034965e-02 3.61205510e-11 5.49222386e-05 1.65044228e-04 2.51908945e-11 4.98797135e-07 3.61205510e-...
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Method 2: update using the sign of the gradient

Take a fixed-size step for each pixel.
eta = 0.02
iter_num = 10
adversarial_img = origin_images.copy()
for i in range(0, iter_num):
    gradient = img_gradient.eval({x: adversarial_img, y_: target_labels, keep_prob: 1.0})
    adversarial_img = adversarial_img - eta * np.sign(gradient)
prediction = tf.argmax(y_pred, 1)
prediction_val = prediction.eval(f...
predictions [2 2 2 2 2 2 2 2 2 2] Confidence 2: [ 0.99979955 0.86275303 1. 0.9779107 0.99902475 0.99999976 0.99971646 1. 1. 0.99999583] Confidence 6: [ 1.66726910e-10 1.24624989e-03 4.56519967e-13 8.34497041e-06 5.59669525e-06 1.79199841e-12 1.30735716e-08 4.56519967e-...
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
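The two update loops above are truncated; below is a minimal self-contained NumPy sketch of both methods, assuming a hypothetical `grad_fn(images, labels)` helper that wraps the `img_gradient.eval(...)` call from the cells above (it is not part of the original notebook):

import numpy as np

def make_adversarial(origin_images, target_labels, grad_fn,
                     eta=0.02, iter_num=10, use_sign=False):
    # grad_fn is a hypothetical wrapper returning d(loss)/d(image)
    # for the target labels, e.g. around img_gradient.eval(...).
    adv = origin_images.copy()
    for _ in range(iter_num):
        g = grad_fn(adv, target_labels)
        step = np.sign(g) if use_sign else g  # use_sign=True gives Method 2
        adv = adv - eta * step                 # descend the target-class loss
        adv = np.clip(adv, 0.0, 1.0)           # keep pixels in a valid range
    return adv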
Take a look at an individual image
threshold = 0.99
eta = 0.001
prediction = tf.argmax(y_pred, 1)
probabilities = y_pred
adversarial_img = origin_images[1: 2].copy()
adversarial_label = target_labels[1: 2]
start_img = adversarial_img.copy()
confidence = 0
iter_num = 0
prob_history = list()
while confidence < threshold:
    gradient = img_gradient.eval({x: ...
_____no_output_____
Apache-2.0
notebook/AdversarialMNIST_sketch.ipynb
tiddler/AdversarialMNIST
Chapter Break
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(inputs, output, test_size=0.33, random_state=42)

from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
pip...
_____no_output_____
MIT
Chapter 4.ipynb
PacktPublishing/-Python-Your-First-Step-Toward-Data-Science-V-
Understanding regression and linear regression

`np.concatenate` joins a sequence of arrays along an existing axis. `np.ones` returns a new array of the given shape and type, filled with ones. `np.zeros` returns a new array of the given shape and type, filled with zeros. `np.dot`: if a is an N-D array and b is a 1-D array, it is ...
learning_rate = 0.01
fit_intercept = True
weights = 0

def fit(X, y):
    global weights
    if fit_intercept:
        X = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)
    weights = np.zeros(X.shape[1])
    # gradient descent (there are other optimizations)
    for i in range(1000):  # epochs
        cur...
_____no_output_____
MIT
Chapter 4.ipynb
PacktPublishing/-Python-Your-First-Step-Toward-Data-Science-V-
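Since the `fit` cell above is cut off, here is a compact sketch of the same batch gradient-descent idea for linear regression, using only the NumPy calls just described; the mean-squared-error gradient is assumed, and the notebook's own loop may differ in details:

import numpy as np

def fit_linear_gd(X, y, learning_rate=0.01, epochs=1000, fit_intercept=True):
    # Prepend a column of ones so the first weight acts as the intercept.
    if fit_intercept:
        X = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)
    weights = np.zeros(X.shape[1])
    for _ in range(epochs):
        y_pred = np.dot(X, weights)
        # Gradient of the mean squared error with respect to the weights.
        gradient = np.dot(X.T, y_pred - y) / X.shape[0]
        weights -= learning_rate * gradient
    return weights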
Notation: SAL - small area; PP - police precinct; AEA - Albers Equal Area Conic; CPS - crime per SAL.
from random import shuffle, randint
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Polygon, Point, MultiPoint, MultiPolygon, LineString, mapping, shape
from descartes import PolygonPatch
import r...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
def sjoin(left_df, right_df, how='inner', op='intersects',
          lsuffix='left', rsuffix='right', **kwargs):
    """Spatial join of two GeoDataFrames.
    left_df, right_df are GeoDataFrames
    how: type of join
        left -> use keys from left_df; retain only left_df geometry column
        right -> use keys from rig...
def find_intersections(o):
    from collections import defaultdict
    paired_ind = [o.pp_index, o.sal_index]
    d_over_ind = defaultdict(list)
    # creating a dictionary that has precincts as keys and associated small areas as values
    for i in range(len(paired_ind[0].values)):
        if not paired_ind[0]...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Main functions to find intersections. The files loaded in are the AEA-projected shapefiles.
salSHP_upd = 'shapefiles/updated/sal_population_aea.shp'
polSHP_upd = 'shapefiles/updated/polPrec_murd2015_prov_aea.shp'
geo_pol = gpd.GeoDataFrame.from_file(polSHP_upd)
geo_sal = gpd.GeoDataFrame.from_file(salSHP_upd)
geo_pol_reind = geo_pol.reset_index().rename(columns={'index':'pp_index'})
geo_sal_reind = geo_sal....
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
test on a subset:
gt1 = geo_pol_reind[geo_pol.province=="Free State"].head(n=2)
gt2 = geo_sal_reind[geo_sal_reind.PR_NAME=="Free State"].reset_index()
d = calculate_join_indices(gt1, gt2)
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Running the intersections on pre-computed indices:
from timeit import default_timer as timer

#start = timer()
#df_inc, sum_area_inc, data_inc = calculate_join(dict_inc, geo_pol_reind, geo_sal_reind)
#end = timer()
#print("1st", end - start)

start = timer()
df_int, sum_area_int, data_int = calculate_join(dict_int, geo_pol_reind, geo_sal_reind)
end = timer()
print...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Find police precincts within the WC boundary
za_province = gpd.read_file('za-provinces.topojson', driver='GeoJSON')  #.set_index('id')
za_province.crs = {'init': '27700'}
wc_boundary = za_province.ix[8].geometry  # WC
#pp_WC = geo_pol[geo_pol.geometry.within(wc_boundary)]
pp_WC_in = geo_pol[geo_pol.geometry.intersects(wc_boundary)]
#.unary_union, sal_wc_union_bound = ...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Adding final columns:
# There are 101,546 intersections
df_int_aea = compute_final_col(df_int)  # add final calculations
df_int_aea.to_csv('data/pp_int_intersections2.csv')
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Some intersections are multipolygons (PP and SAL intersect in multiple areas):
df_int_aea.head(n=3).values[2][0]
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
There are curious cases of intersections which form polygons. For example, a Free State police precinct, 'dewetsdorp', with a murder count of 1 (yet a high rate of stock theft: 52 in 2014), intersects the SAL 4990011 (part of SP Mangaung NU) in two lines:
geo_sal_reind[geo_sal_reind.sal_index==28532].geometry.values[0]
geo_pol_reind[geo_pol_reind.pp_index==358].geometry.values[0]
a = geo_pol_reind[geo_pol_reind.pp_index==358].geometry.values[0]
b = geo_sal_reind[geo_sal_reind.sal_index==28532].geometry.values[0]
c = [geo_pol_reind[geo_pol_reind.pp_index==358].geometry...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Measuring the error of the 'CPS' estimate

Computing the lower (LB) and upper (UB) bounds, wherever possible, is done the following way:
UB: base the calculation of population per PP on all SALs included entirely within the PP; if not possible, set to NaN.
LB: find all SALs intersecting a given PP, but base the PP population...
df_int = df_int_aea.ix[:,:20]

# this function adds the remaining columns, calculates fractions etc
def compute_final_col_bounds(df_aea):
    # recalculate pop frac per PP
    temp = df_aea.groupby(by=['index_PP'])['popu_inter'].sum().reset_index()
    data_with_population = pd.merge(df_aea, temp, on='index_PP', how='oute...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
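The bound computation above is truncated; the aggregation it describes could look roughly like this pandas sketch, reusing the column names from the cells above (index_PP, popu_sal, area_inter, area_sal). This is a rough sketch, not the notebook's exact code:

import pandas as pd

def population_bounds(df):
    # One row per PP/SAL intersection.
    contained = df['area_inter'] / df['area_sal'] == 1
    # UB population basis: only SALs lying entirely inside the precinct.
    pop_for_ub = df[contained].groupby('index_PP')['popu_sal'].sum()
    # LB population basis: every intersecting SAL contributes its full population.
    pop_for_lb = df.groupby('index_PP')['popu_sal'].sum()
    # Index alignment leaves NaN where a PP has no fully contained SAL,
    # matching the "set to NaN" rule described above.
    return pd.DataFrame({'pop_for_ub': pop_for_ub, 'pop_for_lb': pop_for_lb})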
create new tables for the LB and UB
list_lb = []
list_ub = []
for i, entry in df_int.iterrows():  # f_inc_aea:
    if (entry.area_inter/entry.area_sal == 1):  # select those included 'completely'
        list_ub.append(entry)
    entry.popu_inter = entry.popu_sal  # this is actually already true for the above if() case
    list_lb.append(entry)
    ...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
At the level of SP (and probably other levels), some bounds are inverted: UB < LB (2,242 out of 21,589).
#mn_bounds_def = mn_bounds[~mn_bounds.UB_murder.isnull()]
df_inv_bounds = df_bounds[df_bounds.murd_est_per_int_ub < df_bounds.murd_est_per_int_lb]
df_inv_bounds.tail()
temp_ub = df_int_aea_ub.groupby(by=['SAL_CODE'])['murd_est_per_int_ub'].sum().reset_index()
temp_lb = df_int_aea_lb.groupby(by=['SAL_CODE'])['murd_est_per...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Plotting the lower and upper bounds:
import warnings
warnings.filterwarnings('ignore')

import mpld3
from mpld3 import plugins
from mpld3.utils import get_id
#import numpy as np
import collections
from mpld3 import enable_notebook
enable_notebook()

def make_labels_points(dataf):
    L = len(dataf)
    x = np.array(dataf['murd_est_per_int_lb'])
    ...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Add gender data:
full_pop = pd.read_csv('data/sal_pop.csv')

def get_ratio(i, full_pop):
    try:
        x = int(full_pop.iloc[i,].Female)/(int(full_pop.iloc[i,].Male)+int(full_pop.iloc[i,].Female))
    except:
        x = 0
    return x

wom_ratio = [get_ratio(i, full_pop) for i in range(len(full_pop))]
full_pop['wom_ratio'] = wom_ratio
full_...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
WARDS:
wardsShp = gpd.GeoDataFrame.from_file('../maps/data/Wards2011_aea.shp')
wardsShp.head(n=2)
za_province = gpd.GeoDataFrame.from_file('../south_africa_adm1.shp')  #.set_index('id')
%matplotlib inline
#import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from descartes import PolygonPatch
impor...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Clean up the UTF encoding problems
from unidecode import unidecode

with fiona.open('../maps/data/wards_sel.shp', 'r') as source:
    # Create an output shapefile with the same schema,
    # coordinate systems. ISO-8859-1 encoding.
    with fiona.open('../maps/data/wards_sel_cleaned.shp', 'w', **source.meta) as sink:
        ...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
To plot the data on a folium map, we need to convert to a geographic coordinate system with the WGS84 datum (EPSG:4326). We also need to create a GeoJSON object out of the GeoDataFrame. And, as it turns out (after many hours of tripping over the problem), we need to SIMPLIFY the geometries: they are too big for webmaps.
warSHP = '../maps/data/Wards2011.shp'
geo_war = gpd.GeoDataFrame.from_file(warSHP)
#geo_sal = gpd.GeoDataFrame.from_file(salSHP_upd)
geo_war.head(n=2)
geo_war_sub = geo_war.iloc[:,[2,3,7,8,9]].reset_index().head(n=2)
#g = geo_war_sub.simplify(0.05, preserve_topology=False)
geo_war_sub.head(n=3)
geo_war_sub.to_file('....
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
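The full conversion pipeline is truncated above; the pattern it describes (reproject to WGS84, simplify, serialize to GeoJSON) looks roughly like this sketch, where the file name and tolerance are illustrative:

import geopandas as gpd

gdf = gpd.read_file('../maps/data/Wards2011.shp')
gdf_wgs84 = gdf.to_crs(epsg=4326)  # folium expects WGS84 lat/lon
# Shrink geometry detail so the webmap stays responsive; the tolerance
# is in degrees here, so keep it small.
gdf_wgs84['geometry'] = gdf_wgs84.geometry.simplify(0.001, preserve_topology=True)
geojson = gdf_wgs84.to_json()  # can be handed to folium.GeoJson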
analytics based on intersections:
def find_intersections(o):
    from collections import defaultdict
    paired_ind = [o.pp_index, o.sal_index]
    d_over_ind = defaultdict(list)
    # creating a dictionary that has precincts as keys and associated small areas as values
    for i in range(len(paired_ind[0].values)):
        if not paired_ind[0]...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
40,515 out of 84,907 SALs intersect ward borders. Let's see whether the intersections generated from PP and SAL fit better.
# trying the intersections
geo_int_p = pd.read_csv('data/pp_int_intersections.csv')
geo_war_sub.crs
#geo_int.head(n=2)
geo_int = gpd.GeoDataFrame(geo_int_p, crs=geo_war_sub.crs)
#geo_int.head(n=2)
cols = [c for c in geo_int.columns if c.lower()[:7] != 'unnamed']
geo_int = geo_int[cols]
geo_int.head(n=2)
geo_int_sub = ge...
_____no_output_____
MIT
crime_stats_compute_aea.ipynb
OpenUpSA/crime-stats-demystifed
Checklist for submission

It is extremely important to make sure that:
1. Everything runs as expected (no bugs when running cells);
2. The output from each cell corresponds to its code (don't change any cell's contents without rerunning it afterwards);
3. All outputs are present (don't delete any of the outputs);
4. Fill in...
GROUP = "" NAME1 = "" NAME2 = ""
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Make sure you can run the following cell without errors.
import IPython
assert IPython.version_info[0] >= 3, "Your version of IPython is too old, please update it."
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
---

Home Assignment 3

This home assignment will focus on reinforcement learning and deep reinforcement learning. The first part will cover value-table reinforcement learning techniques, and the second part will include neural networks as function approximators, i.e. deep reinforcement learning. When handing in this ass...
from gridworld_mdp import *
import numpy as np

help(GridWorldMDP.get_states)
# The constructor
help(GridWorldMDP.__init__)
help(GridWorldMDP.get_actions)
help(GridWorldMDP.state_transition_func)
help(GridWorldMDP.reward_function)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
We also provide two helper functions for visualizing the value function and the policies you obtain:
# Function for printing a policy pi
def print_policy(pi):
    print('Policy for non-terminal states: ')
    indencies = np.arange(1, 16)
    txt = '| '
    hor_delimiter = '---------------------'
    print(hor_delimiter)
    for a, i in zip(pi, indencies):
        txt += mdp.act_to_char_dict[a] + ' | '
        if i % 5...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Now it's time for you to implement your own version of value iteration to solve for the greedy policy and $V^*(s)$.
def value_iteration(gamma, mdp):
    V = np.zeros([16])      # state value table
    Q = np.zeros([16, 4])   # state action value table
    pi = np.zeros([16])     # greedy policy table

    # Complete this function

    return V, pi
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
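For intuition, here is a generic value-iteration sketch over a toy dict-based MDP; it is deliberately not written against the GridWorldMDP API above, so it does not hand over the assignment's solution verbatim:

import numpy as np

def value_iteration_generic(states, actions, P, R, gamma, tol=1e-8):
    # P[s][a] -> list of (next_state, prob); R[s][a] -> scalar reward.
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = [R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Greedy policy: argmax over the same one-step lookahead.
    pi = {s: max(actions,
                 key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
          for s in states}
    return V, pi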
Run your implementation for the deterministic version of our MDP. As a sanity check, compare your analytical solutions with the output from your implementation.
mdp = GridWorldMDP(trans_prob=1.)
v, pi = value_iteration(.9, mdp)
print_value_table(v)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Once your implementation has passed the sanity check, run it for the stochastic case, where the probability of an action succeeding is 0.8, and 0.2 of moving the agent in a direction orthogonal to the intended one. Use $\gamma = .99$.
# Run for stochastic MDP, gamma = .99
mdp = GridWorldMDP()
v, pi = value_iteration(.99, mdp)
print_value_table(v)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Does the policy that the algorithm found look reasonable? For instance, what's the policy for state $S_8$? Is that a good idea? Why?

**Your answer**: (fill in here)

Test your implementation using this function.
test_value_iteration(v, pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Run value iteration for the same scenario as above, but now with $\gamma=.9$
# Run for stochastic MDP, gamma = .9
mdp = GridWorldMDP()
v, pi = value_iteration(.9, mdp)
print_value_table(v)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Do you notice any difference between the greedy policies for the two different discount factors? If so, what's the difference, and why do you think this happened?

**Your answer:** (fill in here)

Task 2: Q-learning

In the previous task, you solved for $V^*(s)$ and the greedy policy $\pi^*(s)$, with the entire model of the...
def eps_greedy_policy(q_values, eps):
    '''
    Creates an epsilon-greedy policy
    :param q_values: set of Q-values of shape (num actions,)
    :param eps: probability of taking a uniform random action
    :return: policy of shape (num actions,)
    '''
    # Complete this function

    return po...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
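For reference, one common construction of such a policy (uniform $\epsilon$ mass over all actions, with the remaining $1-\epsilon$ on the greedy action) is sketched below; treat it as a sketch, not necessarily the graded solution:

import numpy as np

def eps_greedy_sketch(q_values, eps):
    n = len(q_values)
    policy = np.full(n, eps / n)               # uniform exploration mass
    policy[np.argmax(q_values)] += 1.0 - eps  # the rest goes to the greedy action
    return policy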
Run the cell below to test your implementation
mdp = GridWorldMDP()

# Test shape of output
actions = mdp.get_actions()
for eps in (0, 1):
    foo = np.zeros([len(actions)])
    foo[0] = 1.
    eps_greedy = eps_greedy_policy(foo, eps)
    assert foo.shape == eps_greedy.shape, "wrong shape of output"

actions = [i for i in range(10)]
for eps in (0, 1):
    foo = np.z...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Task 2.2: Implement the Q-learning algorithm

Now it's time to actually implement the Q-learning algorithm. Unlike value iteration, where there are no direct interactions with the environment, the Q-learning algorithm builds up its estimates by interacting with and exploring the environment. To enable the agent to explor...
help(GridWorldMDP.reset)
help(GridWorldMDP.step)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
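As a reminder of the core update, here is a tabular Q-learning sketch against a gym-like environment with reset()/step(); the step() return signature is an assumption, so adapt it to the GridWorldMDP methods documented above:

import numpy as np

def q_learning_sketch(env, n_states, n_actions, eps, gamma, alpha=0.01, episodes=5000):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration
            if np.random.rand() < eps:
                a = np.random.randint(n_actions)
            else:
                a = np.argmax(Q[s])
            s2, r, done = env.step(a)  # assumed signature
            target = r if done else r + gamma * np.max(Q[s2])
            Q[s, a] += alpha * (target - Q[s, a])  # TD(0) update
            s = s2
    return Q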
Implement your version of Q-learning in the cell below. **Hint:** It might be useful to study the pseudocode provided above.
def q_learning(eps, gamma):
    Q = np.zeros([16, 4])   # state action value table
    pi = np.zeros([16])     # greedy policy table
    alpha = .01

    # Complete this function

    return pi, Q
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Run Q-learning with $\epsilon = 1$ for the MDP with $\gamma=0.99$
pi, Q = q_learning(1, .99)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Test your implementation by running the cell below
test_q_learning(Q)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Run Q-learning with $\epsilon=0$
pi, Q = q_learning(0, .99)
print_policy(pi)
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
You ran your implementation with $\epsilon$ set to both 0 and 1. What are the results, and what are your conclusions?

**Your answer:** (fill in here)

Task 3: Deep Double Q-learning (DDQN)

For this task, you will implement a DDQN (double deep Q-learning network) to solve one of the problems of the OpenAI gym. Before we get into ...
def calculate_td_targets(q1_batch, q2_batch, r_batch, t_batch, gamma=.99):
    '''
    Calculates the TD-target used for the loss
    : param q1_batch: Batch of Q(s', a) from online network, shape (N, num actions)
    : param q2_batch: Batch of Q(s', a) from target network, shape (N, num actions)
    : param r_batch: B...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
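The docstring above is cut off; the standard double Q-learning target it describes (online network selects the action, target network evaluates it) can be sketched as follows, where the terminal handling via t_batch is an assumption:

import numpy as np

def td_targets_sketch(q1_batch, q2_batch, r_batch, t_batch, gamma=.99):
    # a* = argmax_a Q_online(s', a), then evaluated with the target network
    a_star = np.argmax(q1_batch, axis=1)
    q_next = q2_batch[np.arange(len(r_batch)), a_star]
    # No bootstrap term on terminal transitions.
    return r_batch + gamma * q_next * (1.0 - t_batch.astype(float))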
Test your implementation by trying to solve the reinforcement learning problem for the Cartpole environment. The following cell defines the `train_loop_ddqn` function, which will be called further below,
# Import dependencies
import numpy as np
import gym
from keras.utils.np_utils import to_categorical as one_hot
from collections import namedtuple
from dqn_model import DoubleQLearningModel, ExperienceReplay

def train_loop_ddqn(model, env, num_episodes, batch_size=64, gamma=.94):
    Transition = namedtuple("Tr...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
and the next cell performs the actual training. A working implementation should start to improve after 500 episodes. An episodic reward of around 200 is likely to be achieved after 800 episodes with a batch size of 128, or 1000 episodes with a batch size of 64.
# Create the environment
env = gym.make("CartPole-v0")

# Initializations
num_actions = env.action_space.n
obs_dim = env.observation_space.shape[0]

# Our Neural Network model used to estimate the Q-values
model = DoubleQLearningModel(state_dim=obs_dim, action_dim=num_actions, learning_rate=1e-4)

# Create replay buffer...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
According to the code above, and the code in the provided .py file, answer the following questions:

What is the state for this problem?
**Your answer**: (fill in here)

When do we switch the networks (i.e. when does the online network become the fixed one, and vice versa)?
**Your answer**: (fill in here)

Run the cell be...
import time

num_episodes = 1
env = gym.make("CartPole-v0")

for i in range(num_episodes):
    state = env.reset()  # reset to initial state
    state = np.expand_dims(state, axis=0)/2
    terminal = False  # reset terminal flag
    while not terminal:
        env.render()
        time.sleep(.05)
        ...
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
Plot the episodic rewards obtained throughout the optimization, together with a moving average of it (since the episodic reward is usually very noisy).
%matplotlib inline
import matplotlib.pyplot as plt

rewards = plt.plot(R, alpha=.4, label='R')
avg_rewards = plt.plot(R_avg, label='avg R')
plt.legend(bbox_to_anchor=(1.01, 1), loc=2, borderaxespad=0.)
plt.xlabel('Episode')
plt.ylim(0, 210)
plt.show()
_____no_output_____
MIT
Home Assignments/HA3/HA3.ipynb
fridokus/deep-machine-learning
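If `R_avg` is not already produced by the training loop, a simple boxcar moving average like the sketch below would do (the window size is illustrative):

import numpy as np

def moving_average(rewards, window=50):
    # Smooths the noisy episodic reward curve by averaging over a sliding window.
    kernel = np.ones(window) / window
    return np.convolve(rewards, kernel, mode='valid')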
Advanced Lane Finding Project

The goals / steps of this project are the following:
* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a per...
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
%matplotlib qt
import cv2
from lib import CameraCalibrate, EstimateWrapParamet...
FALSE [[-6.15384615e-01 -1.37820513e+00 9.69230769e+02] [ 1.97716240e-16 -1.96794872e+00 8.90769231e+02] [ 0.00000000e+00 -2.40384615e-03 1.00000000e+00]] [[ 1.43118893e-01 -7.85830619e-01 5.61278502e+02] [-2.27373675e-16 -5.08143322e-01 4.52638436e+02] [-2.41886889e-19 -1.22149837e-03 1.00000000e+00]]
MIT
main.ipynb
sharifchowdhury/Advanced-Lane-finding
And so on and so forth...
left_fit = []
right_fit = []
left_fitx_old = []
right_fitx_old = []
ind = 0
cr = []
pt = []

def init():
    global left_fit
    global right_fit
    global left_fitx_old
    global right_fitx_old
    global ind
    global cr
    global pt
    left_fit = []
    right_fit = []
    left_fitx_old = []
    right_fitx_old = []...
range(0, 10)
MIT
main.ipynb
sharifchowdhury/Advanced-Lane-finding
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

Training an Object Detection model using AutoML

In this notebook, we go over how you can use AutoML for training an Object Detection model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters...
from azureml.core.workspace import Workspace

ws = Workspace.from_config()
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Compute target setup

You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-...
from azureml.core.compute import AmlCompute, ComputeTarget

cluster_name = "gpu-cluster-nc6"

try:
    compute_target = ws.compute_targets[cluster_name]
    print("Found existing compute target.")
except KeyError:
    print("Creating a new compute target...")
    compute_config = AmlCompute.provisioning_configuration(
        ...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Experiment Setup

Create an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#experiments) in your workspace to track your model training runs.
from azureml.core import Experiment

experiment_name = "automl-image-object-detection"
experiment = Experiment(ws, name=experiment_name)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Dataset with input Training Data

In order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that ...
import os
import urllib
from zipfile import ZipFile

# download data
download_url = "https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip"
data_file = "./odFridgeObjects.zip"
urllib.request.urlretrieve(download_url, filename=data_file)

# extract files
with ZipFile(data_file, "r...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
This is a sample image from this dataset:
from IPython.display import Image

Image(filename="./odFridgeObjects/images/31.jpg")
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Convert the downloaded data to JSONL

In this example, the fridge object dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information on where its corresponding image file is located, as well as information about the bounding boxes and the object labels...
import json
import os
import xml.etree.ElementTree as ET

src = "./odFridgeObjects/"
train_validation_ratio = 5

# Retrieving default datastore that got automatically created when we setup a workspace
workspaceblobstore = ws.get_default_datastore().name

# Path to the annotations
annotations_folder = os.path.join(src, ...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
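The conversion cell above is truncated; parsing one VOC XML file into one JSONL record could look roughly like this sketch (the exact field names AutoML expects live in the truncated cell, so treat this schema as illustrative):

import json
import xml.etree.ElementTree as ET

def voc_to_jsonl_entry(xml_path, image_url_base):
    # Illustrative schema, not necessarily the one AutoML requires.
    root = ET.parse(xml_path).getroot()
    width = int(root.find('size/width').text)
    height = int(root.find('size/height').text)
    labels = []
    for obj in root.findall('object'):
        box = obj.find('bndbox')
        labels.append({
            'label': obj.find('name').text,
            # normalize pixel coordinates to [0, 1]
            'topX': int(box.find('xmin').text) / width,
            'topY': int(box.find('ymin').text) / height,
            'bottomX': int(box.find('xmax').text) / width,
            'bottomY': int(box.find('ymax').text) / height,
        })
    return json.dumps({
        'image_url': image_url_base + root.find('filename').text,
        'image_details': {'width': width, 'height': height},
        'label': labels,
    })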
Convert annotation file from COCO to JSONL

If you want to try a dataset in COCO format, the script below shows how to convert it to `jsonl` format. The file "odFridgeObjects_coco.json" consists of annotation information for the `odFridgeObjects` dataset.
# Generate jsonl file from coco file
!python coco2jsonl.py \
    --input_coco_file_path "./odFridgeObjects_coco.json" \
    --output_dir "./odFridgeObjects" --output_file_name "odFridgeObjects_from_coco.jsonl" \
    --task_type "ObjectDetection" \
    --base_url "AmlDatastore://workspaceblobstore/odFridgeObjects/images/"
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Visualize bounding boxes

Please refer to the "Visualize data" section in the following [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-auto-train-image-models#visualize-data) to see how to easily visualize your ground truth bounding boxes before starting to train. Upload the JSONL file and i...
# Retrieving default datastore that got automatically created when we setup a workspace
ds = ws.get_default_datastore()
ds.upload(src_dir="./odFridgeObjects", target_path="odFridgeObjects")
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation.
from azureml.core import Dataset
from azureml.data import DataType

# get existing training dataset
training_dataset_name = "odFridgeObjectsTrainingDataset"
if training_dataset_name in ws.datasets:
    training_dataset = ws.datasets.get(training_dataset_name)
    print("Found the training dataset", training_dataset_nam...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-m...
training_dataset.to_pandas_dataframe()
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Configuring your AutoML run for image tasks

AutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model, as well as perform a sweep across the hyperparameter sp...
from azureml.automl.core.shared.constants import ImageTask
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import GridParameterSampling, choice

image_config_yolov5 = AutoMLImageConfig(
    task=ImageTask.IMAGE_OBJECT_DETECTION,
    compute_target=compute_target,
    training_data=train...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Submitting an AutoML run for Computer Vision tasks

Once you've created the config settings for your run, you can submit an AutoML run using the config in order to train a vision model using your training dataset.
automl_image_run = experiment.submit(image_config_yolov5)
automl_image_run.wait_for_completion(wait_post_processing=True)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Hyperparameter sweeping for your AutoML models for computer vision tasks

In this example, we use the AutoMLImageConfig to train an Object Detection model using `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains...
from azureml.automl.core.shared.constants import ImageTask
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling
from azureml.train.hyperdrive import choice, uniform

parameter_space = {
    "model": choice(
        {
            "model_name": choi...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main `automl_image_run` from above, which is the HyperDrive parent run. Then you can go into the 'Child run...
from azureml.core import Run

hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + "_HD")
hyperdrive_run
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Register the optimal vision model from the AutoML run

Once the run completes, we can register the model that was created from the best run (the configuration that resulted in the best primary metric).
# Register the model from the best run
best_child_run = automl_image_run.get_best_child()
model_name = best_child_run.properties["model_name"]
model = best_child_run.register_model(
    model_name=model_name, model_path="outputs/model.pt"
)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Deploy model as a web service

Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](ht...
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.exceptions import ComputeTargetException

# Choose a name for your cluster
aks_name = "cluster-aks-cpu"

# Check to see if the cluster already exists
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print("Found existing compute ...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#update-inference-configuration), which describes how to set up the web service containing your model. You can use the scoring script and the environment from the training run ...
from azureml.core.model import InferenceConfig

best_child_run.download_file(
    "outputs/scoring_file_v_1_0_0.py", output_file_path="score.py"
)
environment = best_child_run.get_environment()
inference_config = InferenceConfig(entry_script="score.py", environment=environment)
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
You can then deploy the model as an AKS web service.
# Deploy the model from the best run as an AKS web service
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

aks_config = AksWebservice.deploy_configuration(
    autoscale_enabled=True, cpu_cores=1, memory_gb=5, enable_app_insights=True
)
aks_service = Model.deploy(
    ws,
    mo...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Test the web service

Finally, let's test our deployed web service to predict new images. You can pass in any image. In this case, we'll use a random image from the dataset and pass it to the scoring URI.
import requests

# URL for the web service
scoring_uri = aks_service.scoring_uri

# If the service is authenticated, set the key or token
key, _ = aks_service.get_keys()

sample_image = "./test_image.jpg"

# Load image data
data = open(sample_image, "rb").read()

# Set the content type
headers = {"Content-Type": "appli...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
Visualize detections

Now that we have scored a test image, we can visualize the bounding boxes for this image.
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.patches as patches
from PIL import Image
import numpy as np
import json

IMAGE_SIZE = (18, 12)
plt.figure(figsize=IMAGE_SIZE)

img_np = mpimg.imread(sample_image)
img = Image.fromarray(img_np.astype("uint8"), "RGB")
x, ...
_____no_output_____
MIT
python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb
brynn-code/azureml-examples
__author__="CANSEL KUNDUKAN" print("ADAM ASMACA OYUNUNA HOŞGELDİNİZ...") print("ip ucu=Oyunumuz da ülke isimlerini bulmaya çalışıyoruz") from random import choice while True: kelime = choice (["ispanya", "almanya","japonya","ingiltere","brezilya","mısır","macaristan","hindistan"]) kelime = kelime.upper() h...
WELCOME TO THE HANGMAN GAME...
Hint: in this game we try to guess country names
Our word has 9 letters.
Guess the word
_ _ _ _ _ _ _ _ _
3 lives left
Enter a letter: b
Wrong.
Guess the word
_ _ _ _ _ _ _ _ _
2 lives left
Enter a letter: a
Correct. The letter A ... our word ...
MIT
adam_asmaca.ipynb
canselkundukan/bby162
Truncated regression: minimum working example
import numpy as np
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az

def pp_plot(x, y, trace):
    fig, ax = plt.subplots()
    # plot data
    ax.scatter(x, y)
    # plot posterior predicted... samples from posterior
    xi = np.array([np.min(x), np.ma...
_____no_output_____
MIT
truncated_regression_MWE.ipynb
drbenvincent/pymc3-demo-code
Linear regression of truncated data underestimates the slope
def linear_regression(x, y):
    with pm.Model() as model:
        m = pm.Normal("m", mu=0, sd=1)
        c = pm.Normal("c", mu=0, sd=1)
        σ = pm.HalfNormal("σ", sd=1)
        y_likelihood = pm.Normal("y_likelihood", mu=m*x+c, sd=σ, observed=y)
    with model:
        trace = pm.sample()
    return model, trac...
_____no_output_____
MIT
truncated_regression_MWE.ipynb
drbenvincent/pymc3-demo-code
Truncated regression avoids this underestimate
def truncated_regression(x, y, bounds):
    with pm.Model() as model:
        m = pm.Normal("m", mu=0, sd=1)
        c = pm.Normal("c", mu=0, sd=1)
        σ = pm.HalfNormal("σ", sd=1)
        y_likelihood = pm.TruncatedNormal(
            "y_likelihood",
            mu=m * x + c,
            sd=σ,
            observ...
Last updated: Sun Jan 24 2021

Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0

arviz : 0.11.0
pymc3 : 3.10.0
numpy : 1.19.2
matplotlib: 3.3.2

Watermark: 2.1.0
MIT
truncated_regression_MWE.ipynb
drbenvincent/pymc3-demo-code
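For a self-contained demo, truncated data can be simulated by generating a linear relationship and discarding observations outside the bounds; the slope, noise level, and bounds below are illustrative, not the notebook's actual values:

import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-3, 3, size=200)
y = 0.8 * x + rng.normal(0, 1, size=200)  # true slope 0.8 (illustrative)
bounds = (-1.5, 1.5)
keep = (y > bounds[0]) & (y < bounds[1])  # truncation: drop y outside the bounds
x_trunc, y_trunc = x[keep], y[keep]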
Get the MNIST dataset
batch_size = 128
num_classes = 10
epochs = 100

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img...
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
MIT
LeNet-5.ipynb
maxmax1992/Deep_Learning
Visualize the model
from IPython.display import SVG
from keras.utils.vis_utils import plot_model

plot_model(model, show_shapes=True, show_layer_names=True)
_____no_output_____
MIT
LeNet-5.ipynb
maxmax1992/Deep_Learning
![title](./model.png) Train the model
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Train on 60000 samples, validate on 10000 samples Epoch 1/100 60000/60000 [==============================] - 2s - loss: 0.3232 - acc: 0.9029 - val_loss: 0.1030 - val_acc: 0.9701 Epoch 2/100 60000/60000 [==============================] - 1s - loss: 0.0855 - acc: 0.9744 - val_loss: 0.0740 - val_acc: 0.9774 Epoch 3/100 60...
MIT
LeNet-5.ipynb
maxmax1992/Deep_Learning
Load PPI and Targets
PPI = nx.read_gml('../data/CheckBestTargetSet/Human_Interactome.gml')
_____no_output_____
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Load all the different drug targets from the various sources
# Dictionary with the CLOUD : targets
targets_DrugBank = {}
targets_DrugBank_Filtered = {}
targets_Pubchem = {}
targets_Pubchem_Filtered = {}
targets_Chembl = {}
targets_Chembl_Filtered = {}
targets_All_Filtered = {}
targets_All = {}

# Get all extracted targets (with the DrugBank target split)
targets_only = set()
fp = ...
_____no_output_____
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Calculate the various distance measurements
saved_distances = {}

def Check_Drug_Module_Diameter(PPI, targets):
    '''
    Extract the min path between targets (=Diameter).
    This is always the minimum path between one target and any other target of the same set.
    Returns Mean of all paths (d_d) as well as paths (min_paths)
    This function uses only o...
_____no_output_____
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
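The docstring describes taking, for each target, the minimum shortest-path distance to any other target of the same set, then averaging. A bare networkx sketch of that idea (without the caching in saved_distances) might look like this:

import networkx as nx

def module_diameter_sketch(PPI, targets):
    # Keep only targets that actually appear in the interactome.
    targets = [t for t in targets if t in PPI]
    min_paths = []
    for t in targets:
        # shortest-path lengths from t to every reachable node
        lengths = nx.single_source_shortest_path_length(PPI, t)
        d = [lengths[u] for u in targets if u != t and u in lengths]
        if d:
            min_paths.append(min(d))
    d_d = sum(min_paths) / len(min_paths) if min_paths else None
    return d_d, min_paths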
Calculate All Distances
dic_target_sets = {'DrugBank': targets_DrugBank, 'PubChem': targets_Pubchem,
                   'Chembl': targets_Chembl, 'DrugBank_Filtered': targets_DrugBank_Filtered,
                   'PubChem_Filtered': targets_Pubchem_Filtered, 'Chembl_Filtered': targets_Chembl_Filtered,
                   'All_Filtered': targets_All_Filtered, 'All': targets_All}

for key in dic_target_sets:
    ...
DrugBank CLOUD001 CLOUD002 CLOUD003 CLOUD004 CLOUD005 CLOUD006 CLOUD007 CLOUD008 CLOUD009 CLOUD010 CLOUD011 CLOUD012 CLOUD013 CLOUD014 CLOUD015 CLOUD016 CLOUD017 CLOUD018 CLOUD019 CLOUD020 CLOUD021 CLOUD022 CLOUD023 CLOUD024 CLOUD025 CLOUD026 CLOUD027 CLOUD028 CLOUD029 CLOUD030 CLOUD031 CLOUD032 CLOUD033 CLOUD034 CLOUD...
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Calculate the different metrics for the different target sets

Target sets: All, Chembl, PubChem, DrugBank (all associations, and filtered to targets only). Metrics: S_AB, D_AB, Min_AB and Mean_AB.
#network = nx.read_gml('../data/Check_Features/DrugPairFeature_Files/DPI_iS3_pS7_abMAD2_gP100/Networks/DPI_Network_CoreToPeriphery.gml')
targetLists = [f for f in os.listdir('../results/CheckBestTargetSet/')
               if os.path.isfile(os.path.join('../results/CheckBestTargetSet/', f)) and '.csv' in f]
distance_metric = {'D_AB':...
Complete Calculate Metrics: Done Core Calculate Metrics: Done CoreToPeriphery Calculate Metrics: Done Periphery Calculate Metrics: Done
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Analyse the result file
interaction_types = ['Increasing', 'Decreasing', 'Emergent']
network_parts = ['Complete', 'Core', 'CoreToPeriphery', 'Periphery']

for part in network_parts:
    print part
    results = {}
    fp = open('../results/CheckBestTargetSet/Results/'+part+'/StatisticResult.csv', 'r')
    fp.next()
    for line in fp:
        tm...
Complete Min_AB DrugBank_Filtered Mean_AB PubChem_Filtered D_AB Chembl_Filtered Chembl PubChem_Filtered S_AB PubChem Core Min_AB DrugBank Mean_AB Chembl_Filtered Chembl D_AB S_AB CoreToPeriphery Min_AB All_Filtered All DrugBank PubChem Chembl_Filtered Chembl PubChem_Filtered Me...
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Plot S_AB distribution
import seaborn as sns

targetLists = [f for f in os.listdir('../results/Check_Features/CheckBestTargetSet/')
               if os.path.isfile(os.path.join('../results/Check_Features/CheckBestTargetSet/', f)) and '.csv' in f]
distance_metric = {'D_AB': 4, 'S_AB': 5, 'Min_AB': 6, 'Mean_AB': 7}
metric = 'S_AB'

for targetList in targetLists...
0.6722009834273841 1.3609922810737909 0.6663973106768771 1.4210949885061646 0.515554244097155 0.6616415751265295 0.2801638381785182 1.4125882193782637
MIT
code/12_CheckBestTargetSet.ipynb
menchelab/Perturbome
Specutils Analysis

![Specutils: An Astropy Package for Spectroscopy](data/specutils_logo.png)

This notebook provides an overview of some of the spectral analysis capabilities of the Specutils Astropy coordinated package. While this notebook is intended as an interactive introduction to specutils at the time of its writ...
import numpy as np
import astropy.units as u

import specutils
from specutils import Spectrum1D, SpectralRegion
specutils.__version__

# for plotting:
%matplotlib inline
import matplotlib.pyplot as plt

# for showing quantity units on axes automatically:
from astropy.visualization import quantity_support
quantity_supp...
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Sample Spectrum and SNR

For use below, we also load the sample SDSS spectrum downloaded in the [overview notebook](Specutils_overview.ipynb). See that notebook if you have not yet downloaded this spectrum.
sdss_spec = Spectrum1D.read('data/sdss_spectrum.fits', format='SDSS-III/IV spec')
plt.step(sdss_spec.wavelength, sdss_spec.flux);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Because this example file already has uncertainties, it is straightforward to use one of the fundamental quantifications of a spectrum: the whole-spectrum signal-to-noise ratio:
from specutils import analysis

analysis.snr(sdss_spec)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Spectral Regions

Most analysis of a spectrum requires specifying a part of the spectrum - e.g., a spectral line. Because such regions may have value independent of a particular spectrum, they are represented as objects distinct from a given spectrum object. Below we outline a few ways such regions are...
ha_region = SpectralRegion((6563-50)*u.AA, (6563+50)*u.AA)
ha_region
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Regions can also be raw pixel values (although of course this is more applicable to a specific spectrum):
pixel_region = SpectralRegion(2100*u.pixel, 2600*u.pixel)
pixel_region
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Additionally, *multiple* regions can be in the same `SpectralRegion` object. This is useful for e.g. measuring multiple spectral features in one call:
HI_wings_region = SpectralRegion([(1.4*u.GHz, 1.41*u.GHz), (1.43*u.GHz, 1.44*u.GHz)])
HI_wings_region
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
While regions are useful for a variety of analysis steps, fundamentally they can be used to extract sub-spectra from larger spectra:
from specutils.manipulation import extract_region

subspec = extract_region(sdss_spec, pixel_region)
plt.step(subspec.wavelength, subspec.flux)

analysis.snr(subspec)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Line MeasurementsWhile line-fitting (detailed more below) is a good choice for high signal-to-noise spectra or when detailed kinematics are desired, more empirical measures are often used in the literature for noisier spectra or just simpler analysis procedures. Specutils provides a set of functions to provide these s...
# estimate a reasonable continuum-level estimate for the h-alpha area of the spectrum
sdss_continuum = 205*subspec.flux.unit

sdss_halpha_contsub = extract_region(sdss_spec, ha_region) - sdss_continuum

plt.axhline(0, c='k', ls=':')
plt.step(sdss_halpha_contsub.wavelength, sdss_halpha_contsub.flux)
plt.ylim(-50, 50)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
With the continuum level identified, we can now make some measurements of the spectral lines that are apparent by eye - in particular we will focus on the H-alpha emission line. While there are techniques for identifying the line automatically (see the fitting section below), here we assume we are doing "quick-look" pr...
halpha_lines_region = SpectralRegion(<LOWER>*u.angstrom, <UPPER>*u.angstrom)

plt.step(sdss_halpha_contsub.wavelength, sdss_halpha_contsub.flux)
yl1, yl2 = plt.ylim()
plt.fill_between([halpha_lines_region.lower, halpha_lines_region.upper],
                 yl1, yl2, alpha=.2)
plt.ylim(yl1, yl2)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
You can now call a variety of analysis functions on the continuum-subtracted spectrum to estimate various properties of the line:
analysis.centroid(sdss_halpha_contsub, halpha_lines_region)
analysis.gaussian_fwhm(sdss_halpha_contsub, halpha_lines_region)
analysis.line_flux(sdss_halpha_contsub, halpha_lines_region)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Equivalent width, being a continuum dependent property, can either be computed directly from the spectrum if the continuum level is given, or measured on a continuum-normalized spectrum. The latter is mainly useful if the continuum is non-uniform over the line being measured.
analysis.equivalent_width(sdss_spec, sdss_continuum, regions=halpha_lines_region)

sdss_halpha_contnorm = sdss_spec / sdss_continuum
analysis.equivalent_width(sdss_halpha_contnorm, regions=halpha_lines_region)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
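For reference, the quantity being computed is the standard equivalent width, with $F_c$ the continuum level:

$$W_\lambda = \int \left(1 - \frac{F(\lambda)}{F_c}\right)\, d\lambda$$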
Exercise

Load one of the spectrum datasets you made in the overview exercises into this notebook (i.e., your own dataset, a downloaded one, or the blackbody with an artificially added spectral feature). Make a flux or width measurement of a line in that spectrum directly. Is anything odd?

Continuum Subtraction

While ...
from specutils.fitting import fit_generic_continuum

generic_continuum = fit_generic_continuum(sdss_spec)
generic_continuum_evaluated = generic_continuum(sdss_spec.spectral_axis)

plt.step(sdss_spec.spectral_axis, sdss_spec.flux)
plt.plot(sdss_spec.spectral_axis, generic_continuum_evaluated)
plt.ylim(100, 300);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
(Note that in some versions of astropy/specutils you may see a warning that the "Model is linear in parameters" upon executing the above cell. This is not a problem unless performance is a serious concern, in which case more customization is required.)With this model in hand, continuum-subtracted or continuum-normalize...
sdss_gencont_sub = sdss_spec - generic_continuum(sdss_spec.spectral_axis)
sdss_gencont_norm = sdss_spec / generic_continuum(sdss_spec.spectral_axis)

ax1, ax2 = plt.subplots(2, 1)[1]

ax1.step(sdss_gencont_sub.wavelength, sdss_gencont_sub.flux)
ax1.set_ylim(-50, 50)
ax1.axhline(0, color='k', ls=':')  # continuum should...
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
The customizable way

The `fit_continuum` function operates similarly to `fit_generic_continuum`, but is meant for you to provide your favorite continuum model rather than being tailored to a specific continuum model. To see the list of models, see the [astropy.modeling documentation](http://docs.astropy.org/en/stable/m...
from specutils.fitting import fit_continuum
from astropy.modeling import models
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
For example, suppose you want to use a 3rd-degree Chebyshev polynomial as your continuum model. You can use `fit_continuum` to get an object that behaves the same as for `fit_generic_continuum`:
chebdeg3_continuum = fit_continuum(sdss_spec, models.Chebyshev1D(3))

generic_continuum_evaluated = generic_continuum(sdss_spec.spectral_axis)

plt.step(sdss_spec.spectral_axis, sdss_spec.flux)
plt.plot(sdss_spec.spectral_axis, chebdeg3_continuum(sdss_spec.spectral_axis))
plt.ylim(100, 300);
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
This then provides total flexibility. For example, you can also try other polynomials like higher-degree Hermite polynomials:
hermdeg7_continuum = fit_continuum(sdss_spec, models.Hermite1D(degree=7))
hermdeg17_continuum = fit_continuum(sdss_spec, models.Hermite1D(degree=17))

plt.step(sdss_spec.spectral_axis, sdss_spec.flux)
plt.plot(sdss_spec.spectral_axis, chebdeg3_continuum(sdss_spec.spectral_axis))
plt.plot(sdss_spec.spectral_axis, hermde...
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
This immediately demonstrates the tradeoffs in polynomial fitting: while the high-degree polynomials capture the wiggles of the spectrum better than the low-degree ones, they also *over*-fit near the strong emission lines.

Exercise

Try combining the `SpectralRegion` and continuum-fitting functionality to only fit the parts of the ...
from specutils import fitting
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
The fitting machinery must first be given guesses for line locations. This process can be automated using functions designed to identify lines (more detail on the options is [in the docs](https://specutils.readthedocs.io/en/latest/fitting.html#line-finding)). For data sets where these algorithms are not ideal, you may ...
halpha_lines = fitting.find_lines_threshold(sdss_halpha_contsub, 3)

plt.step(sdss_halpha_contsub.spectral_axis, sdss_halpha_contsub.flux, where='mid')
for line in halpha_lines:
    plt.axvline(line['line_center'], color='k', ls=':')

halpha_lines
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Now, for each of these lines, we need to fit a model. Sometimes it is sufficient to simply create a model whose center is at the line and excise the appropriate area of the line to do a line estimate. This is not *too* sensitive to the size of the region, at least for well-separated lines like these. The result i...
halpha_line_models = []
for line in halpha_lines:
    line_region = SpectralRegion(line['line_center']-5*u.angstrom,
                                 line['line_center']+5*u.angstrom)
    line_spectrum = extract_region(sdss_halpha_contsub, line_region)
    line_estimate = models.Gaussian1D(mean=line['line_center'])
    ...
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
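The cell above is truncated before the actual fit; the natural next step with specutils is `fit_lines`, sketched here under the assumption that `halpha_line_models` holds the per-line Gaussian estimates built above:

from specutils.fitting import fit_lines

# Fit each per-line Gaussian estimate against the continuum-subtracted spectrum.
halpha_fits = [fit_lines(sdss_halpha_contsub, estimate)
               for estimate in halpha_line_models]
for g in halpha_fits:
    print(g.mean, g.stddev)  # fitted line centers and widths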