Discretize the ordinal features into quartiles
explainer.fit(X_train, disc_perc=[25, 50, 75])
with open("explainer.dill", "wb") as dill_file:
    dill.dump(explainer, dill_file)
print(get_minio().fput_object(MINIO_MODEL_BUCKET,
                              f"{EXPLAINER_MODEL_PATH}/explainer.dill",
                              'explainer.dill'))
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Get Explanation Below, we get an anchor for the prediction of the first observation in the test set. An anchor is a sufficient condition - that is, when the anchor holds, the prediction should be the same as the prediction for this instance.
model.predict(X_train)
idx = 0
class_names = adult.target_names
print('Prediction: ', class_names[explainer.predict_fn(X_test[idx].reshape(1, -1))[0]])
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
We set the precision threshold to 0.95. This means that predictions on observations where the anchor holds will be the same as the prediction on the explained instance at least 95% of the time.
explanation = explainer.explain(X_test[idx], threshold=0.95)
print('Anchor: %s' % (' AND '.join(explanation['names'])))
print('Precision: %.2f' % explanation['precision'])
print('Coverage: %.2f' % explanation['coverage'])
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Train Outlier Detector
from alibi_detect.od import IForest

od = IForest(threshold=0., n_estimators=200)
od.fit(X_train)

np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(X_train, Y_train, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float'), threshol...
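The cell above derives the detector's threshold from a batch with a known 5% outlier fraction. The underlying idea, picking the score percentile that matches the assumed contamination rate, can be sketched with plain numpy (the scores here are synthetic stand-ins, not output of the notebook's detector):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=1000)  # stand-in for outlier scores from a detector

perc_outlier = 5  # assumed contamination: 5% of instances are outliers
# choose the threshold so that roughly perc_outlier% of scores exceed it
threshold = np.percentile(scores, 100 - perc_outlier)
flagged = scores > threshold
print(flagged.mean())  # close to 0.05
```

Scores above the threshold are flagged as outliers, so the flagged fraction matches the assumed contamination rate by construction.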
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Deploy Seldon Core Model
secret = f"""apiVersion: v1
kind: Secret
metadata:
  name: seldon-init-container-secret
  namespace: {DEPLOY_NAMESPACE}
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: {MINIO_ACCESS_KEY}
  AWS_SECRET_ACCESS_KEY: {MINIO_SECRET_KEY}
  AWS_ENDPOINT_URL: http://{MINIO_HOST}
  USE_SSL: "false"
"""
with open("secret.yaml","w")...
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Make a prediction request
payload='{"data": {"ndarray": [[53,4,0,2,8,4,4,0,0,0,60,9]]}}'
cmd=f"""curl -d '{payload}' \
   http://income-classifier-default.{DEPLOY_NAMESPACE}:8000/api/v1.0/predictions \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Make an explanation request
payload='{"data": {"ndarray": [[53,4,0,2,8,4,4,0,0,0,60,9]]}}'
cmd=f"""curl -d '{payload}' \
   http://income-classifier-default-explainer.{DEPLOY_NAMESPACE}:9000/api/v1.0/explain \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Deploy Outlier Detector
outlier_yaml=f"""apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: income-outlier
  namespace: {DEPLOY_NAMESPACE}
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
      - image: seldonio/alibi-detect-server:1.2.2-dev_alibidetec...
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Deploy Knative Eventing Event Display
event_display=f"""apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-display
  namespace: {DEPLOY_NAMESPACE}
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: event-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
      - name: helloworld-go
        ...
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Test Outlier Detection
def predict():
    payload='{"data": {"ndarray": [[300, 4, 4, 2, 1, 4, 4, 0, 0, 0, 600, 9]]}}'
    cmd=f"""curl -d '{payload}' \
       http://income-classifier-default.{DEPLOY_NAMESPACE}:8000/api/v1.0/predictions \
       -H "Content-Type: application/json"
    """
    ret = Popen(cmd, shell=True, stdout=PIPE...
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Clean Up Resources
run(f"kubectl delete sdep income-classifier -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete ksvc income-outlier -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete sa minio-sa -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete secret seldon-init-container-secret -n {DEPLOY_NAMESPACE}", shell=True)
...
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
kubeflow/pipelines
apache-2.0
Interaction strengths and boundary conditions
V = [1.8, 1.4, 1.0, 0.6, 0.2, -0.1, -0.5, -0.9, -1.3]
BC = ['APBC', 'PBC']
FiniteSizeScaling/plotScript.ipynb
DelMaestroGroup/PartEntFermions
mit
Load the data and perform the linear fit for each BC and interaction strength
S2scaled = {}
for cBC in BC:
    for cV in V:
        # load raw data
        data = np.loadtxt('N1%sn1u_%3.1f.dat' % (cBC[0], cV))
        # Raise each N to the power of the leading finite size correction γ = (4g+1)
        x = data[:,0]**(-(1.0 + 4*g(cV)))
        # perform the linear fit
        p = np....
FiniteSizeScaling/plotScript.ipynb
DelMaestroGroup/PartEntFermions
mit
Plot the scaled finite-size data collapse
colors = ['#4173b3','#e95c47','#7dcba4','#5e4ea2','#fdbe6e','#808080','#2e8b57','#b8860b','#87ceeb']
markers = ['o','s','h','D','H','>','^','<','v']

plt.style.reload_library()
with plt.style.context('../IOP_large.mplstyle'):
    # Create the figure
    fig1 = plt.figure()
    ax2 = fig1.add_subplot(111)
    ax3 =...
FiniteSizeScaling/plotScript.ipynb
DelMaestroGroup/PartEntFermions
mit
Notice:
* Some values are large and negative, indicating a problem with the automated measurement routine. We will need to deal with these.
* Sizes are "effective radii" in arcseconds. The typical resolution ("point spread function" effective radius) in an SDSS image is around 0.7".

Let's save this download for furth...
!mkdir -p downloads
sdssdata.to_csv("downloads/SDSSobjects.csv")
examples/SDSScatalog/FirstLook.ipynb
seniosh/StatisticalMethods
gpl-2.0
Visualizing Data in N-dimensions This is, in general, difficult. Looking at all possible 1 and 2-dimensional histograms/scatter plots helps a lot. Color coding can bring in a 3rd dimension (and even a 4th). Interactive plots and movies are also well worth thinking about. <br> Here we'll follow a multi-dimensional visu...
# We'll use astronomical g-r color as the colorizer, and then plot
# position, magnitude, size and color against each other.
data = pd.read_csv("downloads/SDSSobjects.csv",
                   usecols=["ra","dec","u","g","r","i","size"])
# Filter out objects with bad magnitude or size m...
examples/SDSScatalog/FirstLook.ipynb
seniosh/StatisticalMethods
gpl-2.0
Size-magnitude Let's zoom in and look at the objects' (log) sizes and magnitudes.
zoom = data.copy()
del zoom['ra'], zoom['dec'], zoom['g-r_color']
plot_everything(zoom, 'i', vmin=15.0, vmax=21.5)
examples/SDSScatalog/FirstLook.ipynb
seniosh/StatisticalMethods
gpl-2.0
3. Enter DV360 Report Emailed To BigQuery Recipe Parameters The person executing this recipe must be the recipient of the email. Schedule a DV360 report to be sent to an email like ****. Or set up a redirect rule to forward a report you already receive. The report can be sent as an attachment or a link. Ensure this re...
FIELDS = {
    'auth_read': 'user',  # Credentials used for reading data.
    'email': '',          # Email address report was sent to.
    'subject': '.*',      # Regular expression to match subject. Double escape backslashes.
    'dataset': '',        # Existing dataset in BigQuery.
    'table': '',          # Name of table to be written to.
    'dbm_schema':...
colabs/email_dv360_to_bigquery.ipynb
google/starthinker
apache-2.0
4. Execute DV360 Report Emailed To BigQuery This does NOT need to be modified unless you are changing the recipe. Click play.
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields

TASKS = [
  {
    'email': {
      'auth': {'field': {'name': 'auth_read', 'kind': 'authentication', 'order': 1, 'default': 'user', 'description': 'Credentials used for reading data.'}},
      'read': {
        'from': 'noreply-dv...
colabs/email_dv360_to_bigquery.ipynb
google/starthinker
apache-2.0
Cleaning and Formatting JSON Data
data = pd.read_json(os.path.abspath('./nations.json'))

def clean_data(data):
    for column in ['income', 'lifeExpectancy', 'population']:
        data = data.drop(data[data[column].apply(len) <= 4].index)
    return data

def extrap_interp(data):
    data = np.array(data)
    x_range = np.arange(1800, 2009, 1.)
    y...
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Creating the Tooltip to display the required fields bqplot's native Tooltip allows us to display the data fields we require on mouse interaction.
tt = Tooltip(fields=['name', 'x', 'y'], labels=['Country Name', 'Income per Capita', 'Life Expectancy'])
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Creating the Label to display the year Staying true to the d3 recreation of the talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.
year_label = Label(x=[0.75], y=[0.10], font_size=52, font_weight='bolder', colors=['orange'], text=[str(initial_year)], enable_move=True)
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Defining Axes and Scales The inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.
x_sc = LogScale(min=income_min, max=income_max)
y_sc = LinearScale(min=life_exp_min, max=life_exp_max)
c_sc = OrdinalColorScale(domain=data['region'].unique().tolist(), colors=CATEGORY10[:6])
size_sc = LinearScale(min=pop_min, max=pop_max)

ax_y = Axis(label='Life Expectancy', scale=y_sc, orientation='vertical', side='...
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Creating the Scatter Mark with the appropriate size and color parameters passed To generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.
# Start with the first year's data
cap_income, life_exp, pop = get_data(initial_year)

wealth_scat = Scatter(x=cap_income, y=life_exp, color=data['region'], size=pop,
                      names=data['name'], display_names=False,
                      scales={'x': x_sc, 'y': y_sc, 'color': c_sc, 'size': size_sc},
                      ...
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Creating the Figure
time_interval = 10
fig = Figure(marks=[wealth_scat, year_label, nation_line], axes=[ax_x, ax_y],
             title='Health and Wealth of Nations', animation_duration=time_interval)
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Using a Slider to allow the user to change the year and a button for animation Here we see how we can seamlessly integrate bqplot into the Jupyter widget infrastructure.
year_slider = IntSlider(min=1800, max=2008, step=1, description='Year', value=initial_year)
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting its x and y attributes.
def hover_changed(change):
    if change.new is not None:
        nation_line.x = data['income'][change.new + 1]
        nation_line.y = data['lifeExpectancy'][change.new + 1]
        nation_line.visible = True
    else:
        nation_line.visible = False

wealth_scat.observe(hover_changed, 'hovered_point')
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
On the slider value callback (a function that is triggered every time the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.
def year_changed(change):
    wealth_scat.x, wealth_scat.y, wealth_scat.size = get_data(year_slider.value)
    year_label.text = [str(year_slider.value)]

year_slider.observe(year_changed, 'value')
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Add an animation button
play_button = Play(min=1800, max=2008, interval=time_interval)
jslink((play_button, 'value'), (year_slider, 'value'))
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Displaying the GUI
VBox([HBox([play_button, year_slider]), fig])
pylondinium/notebooks/wealth-of-nations.ipynb
QuantStack/quantstack-talks
bsd-3-clause
We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use...
env.reset()
rewards = []
for _ in range(100):
    env.render()
    state, reward, done, info = env.step(env.action_space.sample())  # take a random action
    rewards.append(reward)
    if done:
        rewards = []
        env.reset()
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
To shut the window showing the simulation, use env.close(). If you ran the simulation above, we can look at the rewards:
print(rewards[-20:])
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left an...
class QNetwork:
    def __init__(self, learning_rate=0.01, state_size=4,
                 action_size=2, hidden_size=10, name='QNetwork'):
        # state inputs to the Q-network
        with tf.variable_scope(name):
            self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inpu...
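The training target such a Q-network regresses toward is the one-step temporal-difference target: the reward plus the discounted maximum Q-value of the next state (just the reward when the episode is done). A minimal numpy sketch with made-up Q-values:

```python
import numpy as np

gamma = 0.99                    # future reward discount, as set later in the notebook
reward = 1.0                    # CartPole returns 1.0 for each surviving frame
next_Qs = np.array([0.7, 1.3])  # hypothetical Q(s', a) for the two actions
done = False

# TD target: r + gamma * max_a Q(s', a), or just r at episode end
target = reward + (0.0 if done else gamma * np.max(next_Qs))
print(target)  # 1.0 + 0.99 * 1.3 = 2.287
```

The network's loss is then the squared difference between this target and Q(s, a) for the action actually taken.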
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
Experience replay Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on. Here, we'll create a Memory object that will store our experi...
from collections import deque

class Memory():
    def __init__(self, max_size=1000):
        self.buffer = deque(maxlen=max_size)

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        idx = np.random.choice(np.arange(len(self.buffer)),
                               size=batch_size, replace=False)
        return [self.buffer[ii] for ii in idx]
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
Exploration - Exploitation To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability ...
train_episodes = 1000        # max number of episodes to learn from
max_steps = 200              # max steps in an episode
gamma = 0.99                 # future reward discount

# Exploration parameters
explore_start = 1.0          # exploration probability at start
explore_stop = 0.01          # minimum expl...
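The exploration probability described above is typically annealed from explore_start down to explore_stop as training proceeds. The cell is truncated before the schedule appears, so the sketch below is a plausible reconstruction using a common exponential decay, with an assumed decay_rate:

```python
import numpy as np

explore_start = 1.0    # exploration probability at start
explore_stop = 0.01    # minimum exploration probability
decay_rate = 0.0001    # assumed exponential decay rate per training step

def explore_p(step):
    """Probability of taking a random action at a given training step."""
    return explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)

print(explore_p(0))       # 1.0 at the start: pure exploration
print(explore_p(100000))  # near explore_stop: mostly exploitation
```

At each step the agent draws a uniform random number and acts randomly if it falls below explore_p(step), otherwise it takes the action with the highest predicted Q-value.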
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
Populate the experience memory Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())

memory = Memory(max_size=memory_size)

# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
    # Uncomment the line below ...
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
Training Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
    # Initialize variables
    sess.run(tf.global_variables_initializer())
    step = 0
    for ep in range(1, train_episodes):
        total_reward = 0
        t = 0
        while t < max_steps:
            step += ...
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
Visualizing training Below I'll plot the total rewards for each episode. I took a rolling average too, in blue.
%matplotlib inline
import matplotlib.pyplot as plt

def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / N

eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='gr...
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
Testing Let's check out how our trained agent plays the game.
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    for ep in range(1, test_episodes):
        t = 0
        while t < test_max_steps:
            env.render()
            # Get action from Q-network
            ...
tutorials/reinforcement/Q-learning-cart.ipynb
xpharry/Udacity-DLFoudation
mit
What we've got here are all names of our test types (test_type) and production types (prod_type) as well as the signatures of the test methods (test_method) and production methods (prod_method). We also have the amount of calls from the test methods to the production methods (invocations). Analysis OK, let's do some ac...
invocation_matrix = invocations.pivot_table(
    index=['test_type', 'test_method'],
    columns=['prod_type', 'prod_method'],
    values='invocations',
    fill_value=0
)
# show interesting parts of results
invocation_matrix.iloc[4:8, 4:6]
notebooks/Structural Test Case Similarity.ipynb
feststelltaste/software-analytics
gpl-3.0
What we've got now is the information for each invocation (or non-invocation) of test methods to production methods. In mathematical words, we now have an n-dimensional vector for each test method, where n is the number of tested production methods in our code base! That means we've just transformed our software data t...
from sklearn.metrics.pairwise import cosine_distances

distance_matrix = cosine_distances(invocation_matrix)
# show some interesting parts of results
distance_matrix[81:85, 60:62]
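cosine_distances computes 1 minus the cosine of the angle between each pair of row vectors, so scale doesn't matter, only the call pattern does. For intuition, the same quantity can be computed by hand (the vectors here are toy invocation counts, not real test methods):

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cos(angle) between u and v: 0 for parallel, 1 for orthogonal vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 2.0, 0.0])  # toy invocation counts of one test method
b = np.array([2.0, 4.0, 0.0])  # same call pattern, different magnitude
c = np.array([0.0, 0.0, 3.0])  # calls completely different production code

print(cosine_distance(a, b))  # ~0.0: identical call structure
print(cosine_distance(a, c))  # 1.0: nothing in common
```

This is why two test methods that invoke the same production methods, however often, end up with distance near zero.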
notebooks/Structural Test Case Similarity.ipynb
feststelltaste/software-analytics
gpl-3.0
From this data, we create a DataFrame to get a better representation. You can find the complete DataFrame as an Excel file here as well.
distance_df = pd.DataFrame(distance_matrix,
                           index=invocation_matrix.index,
                           columns=invocation_matrix.index)
# show some interesting parts of results
distance_df.iloc[81:85, 60:62]

invocations[
    (invocations.test_method == "void readRoundtripWorksWithFullData()") |
    (invocations.test_method == "void postCommentAct...
notebooks/Structural Test Case Similarity.ipynb
feststelltaste/software-analytics
gpl-3.0
Visualization Our 422x422 distance matrix distance_df doesn't make similarities easy to spot. Let's break the result down into two dimensions using multidimensional scaling (MDS) from scikit-learn and plot the results with the plotting library matplotlib. MDS tries to find a representation of our 422-d...
from sklearn.manifold import MDS

model = MDS(dissimilarity='precomputed', random_state=10)
distance_df_2d = model.fit_transform(distance_df)
distance_df_2d[:5]
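sklearn's MDS minimizes stress iteratively (SMACOF), but the core idea, finding low-dimensional coordinates whose pairwise distances match a given distance matrix, can be illustrated with classical (Torgerson) MDS in a few lines of numpy. This is a didactic sketch, not what sklearn does internally:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed points in k dimensions from a Euclidean distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    w, V = np.linalg.eigh(B)              # eigendecomposition (ascending order)
    idx = np.argsort(w)[::-1][:k]         # top-k eigenvalues
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# three collinear points with pairwise distances 1, 1, 2
D = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
Y = classical_mds(D)
# the 2-D embedding reproduces the original distances exactly here
D2 = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
print(np.allclose(D, D2))  # True
```

For genuinely high-dimensional data the reproduction is only approximate, which is exactly the trade-off the notebook accepts to get a plottable 2-D picture.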
notebooks/Structural Test Case Similarity.ipynb
feststelltaste/software-analytics
gpl-3.0
Next, we plot the now two-dimensional matrix with matplotlib. We colorize all data points according to the name of the test types. We can achieve this by assigning each type a number between 0 and 1 (relative_index) and drawing a color from a predefined color spectrum (cm.hsv) for each type. With this, each test class gets...
%matplotlib inline
from matplotlib import cm
import matplotlib.pyplot as plt

# position of each test type in the MultiIndex, scaled to [0, 1]
relative_index = distance_df.index.codes[0] / distance_df.index.codes[0].max()
colors = [x for x in cm.hsv(relative_index)]

plt.figure(figsize=(8,8))
x = distance_df_2d[:,0]
y = distance_df_2d[:,1]
plt.scatter(x, y, c=colors)
notebooks/Structural Test Case Similarity.ipynb
feststelltaste/software-analytics
gpl-3.0
We now have the visual information about which test methods call similar production code! Let's discuss this plot: * Groups of data points (aka clusters) of the same color are the good ones (like the blue colored ones in the lower middle). They show that there is a high cohesion of test methods with test classes that t...
from sklearn.cluster import DBSCAN

dbscan = DBSCAN(eps=0.08, min_samples=10)
clustering_results = dbscan.fit(distance_df_2d)

plt.figure(figsize=(8,8))
cluster_members = clustering_results.components_
# plot all data points
plt.scatter(x, y, c='k', alpha=0.2)
# plot cluster members
plt.scatter(
    cluster_members[:,...
notebooks/Structural Test Case Similarity.ipynb
feststelltaste/software-analytics
gpl-3.0
Creating the KFP CLI builder Exercise In the cell below, write a Dockerfile that:
* Uses gcr.io/deeplearning-platform-release/base-cpu as the base image
* Installs the Python package kfp, version 0.2.5
* Starts /bin/bash as the entrypoint
%%writefile kfp-cli/Dockerfile

# TODO
immersion/kubeflow_pipelines/cicd/labs/lab-03.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Build the image and push it to your project's Container Registry.
IMAGE_NAME='kfp-cli'
TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
immersion/kubeflow_pipelines/cicd/labs/lab-03.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Exercise In the cell below, use gcloud builds to build the kfp-cli Docker image and push it to the project gcr.io registry.
!gcloud builds # COMPLETE THE COMMAND
immersion/kubeflow_pipelines/cicd/labs/lab-03.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Understanding the Cloud Build workflow. Exercise In the cell below, you'll complete the cloudbuild.yaml file describing the CI/CD workflow and prescribing how environment specific settings are abstracted using Cloud Build variables. The CI/CD workflow automates the steps you walked through manually during lab-02-kfp-pi...
%%writefile cloudbuild.yaml
steps:
# Build the trainer image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/trainer_image

# TODO: Build the base image for lightweight components
- name: # TODO
  args: # TODO
  dir: # ...
immersion/kubeflow_pipelines/cicd/labs/lab-03.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Manually triggering CI/CD runs You can manually trigger Cloud Build runs using the gcloud builds submit command.
SUBSTITUTIONS="""\
_ENDPOINT={},\
_TRAINER_IMAGE_NAME=trainer_image,\
_BASE_IMAGE_NAME=base_image,\
TAG_NAME=test,\
_PIPELINE_FOLDER=.,\
_PIPELINE_DSL=covertype_training_pipeline.py,\
_PIPELINE_PACKAGE=covertype_training_pipeline.yaml,\
_PIPELINE_NAME=covertype_continuous_training,\
_RUNTIME_VERSION=1.15,\
_PYTHON_VERSI...
immersion/kubeflow_pipelines/cicd/labs/lab-03.ipynb
GoogleCloudPlatform/mlops-on-gcp
apache-2.0
Data Load data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).
spector_data = sm.datasets.spector.load()
spector_data.exog = sm.add_constant(spector_data.exog, prepend=False)
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Inspect the data:
print(spector_data.exog[:5, :])
print(spector_data.endog[:5])
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Linear Probability Model (OLS)
lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)
lpm_res = lpm_mod.fit()
print('Parameters: ', lpm_res.params[:-1])
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Logit Model
logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
logit_res = logit_mod.fit(disp=0)
print('Parameters: ', logit_res.params)
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Marginal Effects
margeff = logit_res.get_margeff()
print(margeff.summary())
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
As in all the discrete data models presented below, we can print a nice summary of results:
print(logit_res.summary())
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Probit Model
probit_mod = sm.Probit(spector_data.endog, spector_data.exog)
probit_res = probit_mod.fit()
probit_margeff = probit_res.get_margeff()
print('Parameters: ', probit_res.params)
print('Marginal effects: ')
print(probit_margeff.summary())
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Multinomial Logit Load data from the American National Election Studies:
anes_data = sm.datasets.anes96.load()
anes_exog = anes_data.exog
anes_exog = sm.add_constant(anes_exog, prepend=False)
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Inspect the data:
print(anes_data.exog[:5, :])
print(anes_data.endog[:5])
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Fit MNL model:
mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)
mlogit_res = mlogit_mod.fit()
print(mlogit_res.params)
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Poisson Load the Rand data. Note that this example is similar to Cameron and Trivedi's Microeconometrics Table 20.5, but it is slightly different because of minor changes in the data.
rand_data = sm.datasets.randhie.load()
rand_exog = rand_data.exog.view(float).reshape(len(rand_data.exog), -1)
rand_exog = sm.add_constant(rand_exog, prepend=False)
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Fit Poisson model:
poisson_mod = sm.Poisson(rand_data.endog, rand_exog)
poisson_res = poisson_mod.fit(method="newton")
print(poisson_res.summary())
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Negative Binomial The negative binomial model gives slightly different results.
mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)
res_nbin = mod_nbin.fit(disp=False)
print(res_nbin.summary())
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Alternative solvers The default method for fitting discrete data MLE models is Newton-Raphson. You can use other solvers by using the method argument:
mlogit_res = mlogit_mod.fit(method='bfgs', maxiter=100)
print(mlogit_res.summary())
examples/notebooks/discrete_choice_overview.ipynb
ChadFulton/statsmodels
bsd-3-clause
Get Started with TensorFlow 1.x
import tensorflow.compat.v1 as tf
site/en/r1/tutorials/_index.ipynb
tensorflow/docs
apache-2.0
Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
site/en/r1/tutorials/_index.ipynb
tensorflow/docs
apache-2.0
Build the tf.keras model by stacking layers. Select an optimizer and loss function used for training:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              ...
site/en/r1/tutorials/_index.ipynb
tensorflow/docs
apache-2.0
Train and evaluate model:
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
site/en/r1/tutorials/_index.ipynb
tensorflow/docs
apache-2.0
Start streaming from device connected to HDMI input on the PYNQ Board
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(3)
hdmi_out.start()
hdmi_in.start()
Pynq-Z1/notebooks/Video_PR/Generic_Blur.ipynb
AEW2015/PYNQ_PR_Overlay
bsd-3-clause
Create the user interface In this section, we create 11 registers (R0 through R10) which hold the values of the kernel matrix, the dividing factor and the bias value. We also create sliders which allow the user to control the values stored in the registers.
R0 = Register(0)
R1 = Register(1)
R2 = Register(2)
R3 = Register(3)
R4 = Register(4)
R5 = Register(5)
R6 = Register(6)
R7 = Register(7)
R8 = Register(8)
R9 = Register(9)
R10 = Register(10)

import ipywidgets as widgets
R0_s = widgets.IntSlider(
    value=1,
    min=-128,
    max=127,
    step=1,
    description='M_0:',
    disab...
Pynq-Z1/notebooks/Video_PR/Generic_Blur.ipynb
AEW2015/PYNQ_PR_Overlay
bsd-3-clause
Continue to create user interface
from IPython.display import clear_output
from ipywidgets import Button, HBox, VBox

words = ['HDMI Reset', 'Kernal Filter']
items = [Button(description=w) for w in words]

def on_hdmi_clicked(b):
    hdmi_out.stop()
    hdmi_in.stop()
    hdmi_out.start()
    hdmi_in.start()

def on_Kernal_clicked(b):
    Bitstream_Par...
Pynq-Z1/notebooks/Video_PR/Generic_Blur.ipynb
AEW2015/PYNQ_PR_Overlay
bsd-3-clause
User interface instructions At this point, the streaming may not work properly. Please run the code section below. Afterwards, press the 'HDMI Reset' button to reset the HDMI input and output. The streaming should now work properly. To start applying the filter to the stream, press the 'Kernal Filter' button. The k...
HBox([VBox([items[0], items[1]]), R0_s, R1_s, R2_s, R3_s, R4_s, R5_s, R6_s, R7_s, R8_s, R9_s, R10_s])

hdmi_in.stop()
hdmi_out.stop()
del hdmi_in
del hdmi_out
Pynq-Z1/notebooks/Video_PR/Generic_Blur.ipynb
AEW2015/PYNQ_PR_Overlay
bsd-3-clause
Preliminary Analysis
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
Preliminary Report Read the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform. A. Initial observations based on the plot above + Overall, rate of readmissions...
# Your turn
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
The dataset
clean_hospital_read_df.tail()
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
We are interested in the hospitals with 'Excess Readmission Ratio' > 0. About hospitals with 'Excess Readmission Ratio' > 0:
tot_hosp_valid = clean_hospital_read_df.loc[clean_hospital_read_df['Excess Readmission Ratio'] > 0]['Hospital Name'].count() excess_read_ratio_max = clean_hospital_read_df['Excess Readmission Ratio'].max() excess_read_ratio_min = clean_hospital_read_df['Excess Readmission Ratio'].min() excess_read_ratio_mean = clean_...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
'Excess Readmission Ratio' in hospitals with number of discharges < 100
tot_h100 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] < 100].count() med_h100 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] < 100].mean() tot_h100_exc_gt_one = clean_hospital_read_df['Hospital Name'].loc...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
'Excess Readmission Ratio' in hospitals with number of discharges > 1000
tot_h1000 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] > 1000].count() med_h1000 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] > 1000].mean() tot_h1000_exc_gt_one = clean_hospital_read_df['Hospital Name'...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
Is there a correlation between hospital capacity (number of discharges) and readmission rates? Let's find out.
from scipy.stats import pearsonr df_temp = clean_hospital_read_df[['Number of Discharges', 'Excess Readmission Ratio']].dropna() df_temp.head()
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
Pearson correlation "The Pearson correlation coefficient measures the linear relationship between two datasets. ... Like other correlation coefficients, this one varies between -1 and +1 with 0 implying no correlation. Correlations of -1 or +1 imply an exact linear relationship. Positive correlations imply that as x in...
pearson, pvalue = pearsonr(df_temp[['Number of Discharges']], df_temp[['Excess Readmission Ratio']]) pvalue1 = pvalue pearson1 = pearson print("+----------------------------------------------------------+") print("| 'Number of Discharges' versus 'Excess Readmission Ratio' |") print("| for all hospitals: ...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
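As a quick sanity check of how `pearsonr` behaves before applying it to the hospital data, here is a sketch on synthetic data (not the dataset above): a perfectly linear relationship yields a coefficient of exactly +1, and flipping the slope flips the sign.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic example: y is a perfect linear function of x,
# so the Pearson coefficient is exactly +1.0.
x = np.arange(10, dtype=float)
y = 3.0 * x + 2.0
r, p = pearsonr(x, y)
print(round(r, 6))  # 1.0

# A negative slope flips the sign of the coefficient.
r_neg, _ = pearsonr(x, -y)
print(round(r_neg, 6))  # -1.0
```

Real data will, of course, sit somewhere strictly between these extremes, which is exactly what the analysis below measures.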
A - All hospitals: Zooming in Calculating the Pearson correlation and p-value without outlier hospitals. Let's examine only the hospitals with: Excess Readmission Ratio < 1.6 Number of Discharges < 2,000
df_temp_a = df_temp.loc[(df_temp['Number of Discharges'] < 2000) & (df_temp['Excess Readmission Ratio'] < 1.6)] pearson, pvalue = pearsonr(df_temp_a[['Number of Discharges']], df_temp_a[['Excess Readmission Ratio']]) print("+----------------------------------------------------------+") print("| 'Number of Discharges' v...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
B - Hospitals with discharges < 100
df_temp_dischg100 = df_temp.loc[df_temp['Number of Discharges'] < 100] pearson,pvalue = pearsonr(df_temp_dischg100[['Number of Discharges']], df_temp_dischg100[['Excess Readmission Ratio']]) pvalue2 = pvalue pearson2 = pearson print("+----------------------------------------------------------+") print("| 'Number of Dis...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
B - Hospitals with discharges < 100: Zooming in Calculating the Pearson correlation and p-value without outlier hospitals. Let's examine only the hospitals with: Excess Readmission Ratio < 1.2 Number of Discharges > 40 (this subset already satisfies Number of Discharges < 100)
df_temp_b = df_temp_dischg100.loc[(df_temp_dischg100['Number of Discharges'] > 40) & (df_temp_dischg100['Excess Readmission Ratio'] < 1.2)] pearson, pvalue = pearsonr(df_temp_b[['Number of Discharges']], df_temp_b[['Excess Readmission Ratio']]) print("+----------------------------------------------------------+") print...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
C - Hospitals with discharges > 1,000
df_temp_dischg1000 = df_temp.loc[df_temp['Number of Discharges'] > 1000] pearson, pvalue = pearsonr(df_temp_dischg1000[['Number of Discharges']], df_temp_dischg1000[['Excess Readmission Ratio']]) pvalue3 = pvalue pearson3 = pearson print("+----------------------------------------------------------+") print("| 'Number o...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
C - Hospitals with discharges > 1,000: Zooming in Calculating the Pearson correlation and p-value without outlier hospitals. Let's examine only the hospitals with: Excess Readmission Ratio < 1.2 Number of Discharges < 2,000 (this subset already satisfies Number of Discharges > 1,000)
df_temp_c = df_temp_dischg1000.loc[(df_temp_dischg1000['Number of Discharges'] < 2000) & (df_temp_dischg1000['Excess Readmission Ratio'] < 1.2)] pearson, pvalue = pearsonr(df_temp_c[['Number of Discharges']], df_temp_c[['Excess Readmission Ratio']]) print("+----------------------------------------------------------+") print(...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
D - What about hospitals with discharges between 100 and 1,000 (inclusive)?
df_temp2 = df_temp.loc[(df_temp['Number of Discharges'] >= 100) & (df_temp['Number of Discharges'] <= 1000)] pearson, pvalue = pearsonr(df_temp2[['Number of Discharges']], df_temp2[['Excess Readmission Ratio']]) pvalue4 = pvalue print("+----------------------------------------------------------+") print("| 'Number of D...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
Preliminary Report Read the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform. A. Initial observations based on the plot above + Overall, rate of readmissions...
print("+-----------------------------------------------------------+-------------+") print("| Hospital/facilities | Correlation*|") print("|-----------------------------------------------------------|-------------|") print("| i) All hospitals ...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
For all three groups above there is a negative correlation; the Pearson correlation values are far from a strong correlation (|r| close to 1) and, finally, even without the outliers, we found similar values for the Pearson correlation (three groups above). B. Provide support for your arguments and your own recommenda...
print("+--------------------------------------------------------------+") print("| Null Hypothesis: |") print("| Ho: There is *not* a significant correlation between |") print("| hospital capacity (discharges) and readmission rates|") print("|----------------...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
2. Compute and report the observed significance value (or p-value).
print("+--------------------------------------------------------------+") print("| Scenario 1: |") print("| All Hospitals: P-Value = %.30f |" %pvalue1) print("|--------------------------------------------------------------|") print("| Scenario 2: ...
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
davicsilva/dsintensive
apache-2.0
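The decision rule applied to these p-values can be sketched as a small helper. The significance level α = 0.01 here is an assumption chosen for illustration; the hypothetical p-values passed in are not the ones computed above.

```python
def decide(pvalue, alpha=0.01):
    """Hypothesis-test decision for a given p-value: reject the null
    hypothesis (no significant correlation) when p < alpha, otherwise
    fail to reject it."""
    if pvalue < alpha:
        return "reject H0"
    return "fail to reject H0"

# Hypothetical p-values for illustration:
print(decide(1e-6))  # reject H0
print(decide(0.27))  # fail to reject H0
```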
Some other gems
def read_metadata(self, post, lang=None): """Read metadata directly from ipynb file. As ipynb files support arbitrary metadata as json, the metadata used by Nikola will be assumed to be in the 'nikola' subfield. """ self._req_missing_ipynb() if lang is None: lang = LocaleBorg().current_la...
damian/Nikola.ipynb
mpacer/nb_struct2app
mit
Let's see it in action!
cd /media/data/devel/damian_blog/ !ls title = "We are above 1000 stars!" tags_list = ['Jupyter', 'python', 'reveal', 'RISE', 'slideshow'] tags = ', '.join(tags_list) !nikola new_post -f ipynb -t "{title}" --tags="{tags}"
damian/Nikola.ipynb
mpacer/nb_struct2app
mit
``` Creating New Post Title: We are above 1000 stars! Scanning posts......done! [2017-07-12T16:45:00Z] NOTICE: compile_ipynb: No kernel specified, assuming "python3". [2017-07-12T16:45:01Z] INFO: new_post: Your post's text is at: posts/we-are-above-1000-stars.ipynb ```
!nikola build !nikola deploy from IPython.display import IFrame IFrame("http://www.damian.oquanta.info/", 980, 600)
damian/Nikola.ipynb
mpacer/nb_struct2app
mit
Upper Air Sounding Tutorial Upper air analysis is a staple of many synoptic and mesoscale analysis problems. In this tutorial we will gather weather balloon data, plot it, perform a series of thermodynamic calculations, and summarize the results. To learn more about the Skew-T diagram and its use in weather analysis an...
from datetime import datetime import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import inset_axes import numpy as np import metpy.calc as mpcalc from metpy.io import get_upper_air_data from metpy.plots import Hodograph, SkewT
v0.5/_downloads/upperair_soundings.ipynb
metpy/MetPy
bsd-3-clause
Getting Data We will download data from the University of Wyoming sounding data page &lt;http://weather.uwyo.edu/upperair/sounding.html&gt;_ , which has an extensive archive of data available, as well as current data. In this case, we will download the sounding data from the Veterans Day tornado outbreak in 2002 by pas...
dataset = get_upper_air_data(datetime(2002, 11, 11, 0), 'BNA') # We can view the fields available in the dataset. We will create some simple # variables to make the rest of the code more concise. print(dataset.variables.keys()) p = dataset.variables['pressure'][:] T = dataset.variables['temperature'][:] Td = dataset...
v0.5/_downloads/upperair_soundings.ipynb
metpy/MetPy
bsd-3-clause
Thermodynamic Calculations Often we will want to calculate some thermodynamic parameters of a sounding. The MetPy calc module has many such calculations already implemented! Lifting Condensation Level (LCL) - The level at which an air parcel's relative humidity becomes 100% when lifted along a dry adiabatic pa...
# Calculate the LCL lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0]) print(lcl_pressure, lcl_temperature) # Calculate the parcel profile. parcel_prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
v0.5/_downloads/upperair_soundings.ipynb
metpy/MetPy
bsd-3-clause
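As a rough cross-check on the MetPy result, a common back-of-the-envelope estimate (Espy's formula) puts the LCL height at about 125 m per degree Celsius of surface dewpoint depression. This is a sketch for intuition only, not a substitute for `mpcalc.lcl`, and the surface values below are hypothetical.

```python
def lcl_height_espy(t_surface_c, td_surface_c):
    """Approximate LCL height above ground (meters) via Espy's formula:
    roughly 125 m per degree C of surface dewpoint depression."""
    return 125.0 * (t_surface_c - td_surface_c)

# Hypothetical surface values: T = 25 degC, Td = 17 degC
print(lcl_height_espy(25.0, 17.0))  # 1000.0 (meters AGL)
```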
Basic Skew-T Plotting The Skew-T (log-P) diagram is the standard way to view rawinsonde data. The y-axis is height in pressure coordinates and the x-axis is temperature. The y coordinates are plotted on a logarithmic scale and the x coordinate system is skewed. An explanation of skew-T interpretation is beyond the scop...
# Create a new figure. The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) skew = SkewT(fig) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r', linewidth=2) skew.plot(p, Td, 'g', linewidth...
v0.5/_downloads/upperair_soundings.ipynb
metpy/MetPy
bsd-3-clause
Advanced Skew-T Plotting Fiducial lines indicating dry adiabats, moist adiabats, and mixing ratio are useful when performing further analysis on the Skew-T diagram. Often the 0C isotherm is emphasized and areas of CAPE and CIN are shaded.
# Create a new figure. The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) skew = SkewT(fig, rotation=30) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot...
v0.5/_downloads/upperair_soundings.ipynb
metpy/MetPy
bsd-3-clause
Adding a Hodograph A hodograph is a polar representation of the wind profile measured by the rawinsonde. Winds at different levels are plotted as vectors with their tails at the origin, the angle from the vertical axis representing the direction, and the length representing the speed. The line plotted on the hodograph ...
# Create a new figure. The dimensions here give a good aspect ratio fig = plt.figure(figsize=(9, 9)) skew = SkewT(fig, rotation=30) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot...
v0.5/_downloads/upperair_soundings.ipynb
metpy/MetPy
bsd-3-clause
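Hodograph plotting needs the wind as u/v components. As a sketch of the standard meteorological convention (direction is the bearing the wind blows *from*), the conversion looks like this; MetPy provides an equivalent calculation, so this is for intuition only.

```python
import numpy as np

def wind_components(speed, direction_deg):
    """Convert wind speed and meteorological direction (degrees, the
    direction the wind blows FROM) into eastward (u) and northward (v)
    components."""
    rad = np.deg2rad(direction_deg)
    u = -speed * np.sin(rad)
    v = -speed * np.cos(rad)
    return u, v

# A 10 m/s westerly wind (from 270 degrees) blows toward the east:
u, v = wind_components(10.0, 270.0)
print(round(u, 6), round(v, 6))  # 10.0 0.0 (purely eastward)
```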
Create Labels Labels are generated from the labelmaker's CSV export of its internal SQLite database. They are shuffled and divided into training, validation, and test sets at a ratio of roughly 3:1:1.
import pandas as pd import glob import numpy as np # read in and shuffle data labels = pd.read_csv("./labelmaker/labels.csv").to_numpy() print("Labels Shape: {}".format(labels.shape)) np.random.seed(0) np.random.shuffle(labels) # split labels into train, validation, and test sets div = len(labels) // 5 train_labels = ...
project.ipynb
notnil/udacity-ml-capstone
mit
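The 3:1:1 split described above can be sketched with plain NumPy index arithmetic; the labels here are synthetic stand-ins, since the labelmaker CSV is not available in this context.

```python
import numpy as np

# Synthetic stand-in for the labelmaker output: 100 rows of (id, category).
labels = np.arange(200).reshape(100, 2)

np.random.seed(0)
np.random.shuffle(labels)

# One "div" is a fifth of the data; train gets 3 divs, val and test 1 each.
div = len(labels) // 5
train, val, test = labels[:3 * div], labels[3 * div:4 * div], labels[4 * div:]
print(len(train), len(val), len(test))  # 60 20 20
```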
Model The Keras model is a Sequential model with two time-aware LSTM layers followed by two Dense layers and an output layer. The input shape of (7, 2048) represents seven frames per clip, each encoded as a 2048-dimensional feature vector by InceptionV3. The final 4x1 output vector is the category prediction.
from keras.models import Sequential from keras.layers import Dense, LSTM, Dropout, Flatten, GRU from keras import backend as K model = Sequential([ LSTM(512, return_sequences=True, input_shape=(7, 2048)), LSTM(512, return_sequences=True, input_shape=(7, 512)), Flatten(), Dense(512, activation='relu')...
project.ipynb
notnil/udacity-ml-capstone
mit
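As a sanity check on the model's size, a Keras LSTM layer's parameter count follows a simple formula: four gates, each with a weight matrix over the concatenated input and hidden state plus a bias, giving 4 * (input_dim + units + 1) * units. A sketch for the two LSTM layers above:

```python
def lstm_param_count(input_dim, units):
    """Trainable parameters in a standard LSTM layer: four gates, each
    with weights over [input, hidden state] and a bias vector."""
    return 4 * (input_dim + units + 1) * units

# First LSTM layer: 2048-dim input, 512 units.
print(lstm_param_count(2048, 512))  # 5244928
# Second LSTM layer: 512-dim input (the first layer's output), 512 units.
print(lstm_param_count(512, 512))   # 2099200
```

Comparing these numbers against `model.summary()` is a quick way to confirm the architecture is wired as intended.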