| column | type | lengths |
|---|---|---|
| markdown | stringlengths | 0–1.02M |
| code | stringlengths | 0–832k |
| output | stringlengths | 0–1.02M |
| license | stringlengths | 3–36 |
| path | stringlengths | 6–265 |
| repo_name | stringlengths | 6–127 |
Introduction

**What?** Operators

Basic Python Semantics: Operators

In the previous section, we began to look at the semantics of Python variables and objects; here we'll dig into the semantics of the various *operators* included in the language. By the end of this section, you'll have the basic tools to begin comparin...
# addition, subtraction, multiplication
(4 + 8) * (6.5 - 3)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
Floor division is true division with fractional parts truncated:
# True division
print(11 / 2)

# Floor division
print(11 // 2)
5.5
5
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
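An aside not in the original notebook: floor division rounds toward negative infinity rather than toward zero, which matters for negative operands. A minimal sketch:

```python
# Floor division rounds toward negative infinity, not toward zero
print(11 // 2)       # 5
print(-11 // 2)      # -6, not -5
print(divmod(11, 2))  # (5, 1): floor quotient and remainder in one call
```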
The floor division operator was added in Python 3; you should be aware, if working in Python 2, that the standard division operator (``/``) acts like floor division for integers and like true division for floating-point numbers. Finally, I'll mention an eighth arithmetic operator that was added in Python 3.5: the ``a @ b``...
bin(10)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
The result is prefixed with ``'0b'``, which indicates a binary representation. The rest of the digits indicate that the number 10 is expressed as the sum $1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0$. Similarly, we can write:
bin(4)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
Now, using bitwise OR, we can find the number which combines the bits of 4 and 10:
4 | 10
bin(4 | 10)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
These bitwise operators are not as immediately useful as the standard arithmetic operators, but it's helpful to see them at least once to understand what class of operation they perform. In particular, users from other languages are sometimes tempted to use XOR (i.e., ``a ^ b``) when they really mean exponentiation (i.e...
a = 24
print(a)
24
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
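To make that pitfall concrete, a small sketch contrasting ``^`` (bitwise XOR) with ``**`` (exponentiation):

```python
# ^ is bitwise XOR, not exponentiation
print(2 ^ 3)       # 1  (binary 10 XOR 11 == 01)
print(2 ** 3)      # 8  (two raised to the third power)
print(bin(2 ^ 3))  # '0b1'
```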
We can use these variables in expressions with any of the operators mentioned earlier. For example, to add 2 to ``a`` we write:
a + 2
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
We might want to update the variable ``a`` with this new value; in this case, we could combine the addition and the assignment and write ``a = a + 2``. Because this type of combined operation and assignment is so common, Python includes built-in update operators for all of the arithmetic operations:
a += 2  # equivalent to a = a + 2
print(a)
26
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
There is an augmented assignment operator corresponding to each of the binary operators listed earlier; in brief, they are:

|||||
|-|-|-|-|
|``a += b``|``a -= b``|``a *= b``|``a /= b``|
|``a //= b``|``a %= b``|``a **= b``|``a &= b``|
|<code>a &#124;= b</code>|``a ^= b``|``a <<= b``|``a >>= b``|

Each one is equivalent to the corresponding operation fol...
# 25 is odd
25 % 2 == 1

# 66 is odd? (no: 66 is even, so this returns False)
66 % 2 == 1
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
We can string together multiple comparisons to check more complicated relationships:
# check if a is between 15 and 30
a = 25
15 < a < 30
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
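A chained comparison like this is evaluated as the conjunction of its parts; a short sketch showing the equivalent spelled-out form:

```python
a = 25
# The chained form...
print(15 < a < 30)            # True
# ...is equivalent to the explicit conjunction:
print((15 < a) and (a < 30))  # True
```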
And, just to make your head hurt a bit, take a look at this comparison:
-1 == ~0
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
Recall that ``~`` is the bit-flip operator, and evidently when you flip all the bits of zero you end up with -1. If you're curious as to why this is, look up the *two's complement* integer encoding scheme, which is what Python uses to encode signed integers, and think about what happens when you start flipping all the b...
x = 4
(x < 6) and (x > 2)
(x > 10) or (x % 2 == 0)
not (x < 6)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
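Returning to the ``~`` example above, a quick sketch (not in the original notebook) of the two's-complement identity ``~x == -x - 1`` for Python integers:

```python
# In two's complement, flipping all bits of x yields -x - 1
for x in [0, 1, 10]:
    print(x, ~x, -x - 1)  # ~x and -x - 1 always agree
```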
Boolean algebra aficionados might notice that the XOR operator is not included; this can of course be constructed in several ways from a compound statement of the other operators. Otherwise, a clever trick you can use for XOR of Boolean values is the following:
# (x > 1) xor (x < 10)
(x > 1) != (x < 10)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
These sorts of Boolean operations will become extremely useful when we begin discussing *control flow statements* such as conditionals and loops. One sometimes confusing thing about the language is when to use Boolean operators (``and``, ``or``, ``not``), and when to use bitwise operations (``&``, ``|``, ``~``). The answ...
a = [1, 2, 3]
b = [1, 2, 3]
a == b
a is b
a is not b
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
What do identical objects look like? Here is an example:
a = [1, 2, 3]
b = a
a is b
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
The difference between the two cases here is that in the first, ``a`` and ``b`` point to *different objects*, while in the second they point to the *same object*. As we saw in the previous section, Python variables are pointers. The "``is``" operator checks whether the two variables are pointing to the same container (o...
1 in [1, 2, 3]
2 not in [1, 2, 3]
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
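The membership operators work on other containers as well; a brief sketch (not in the original notebook) using a string and a dictionary:

```python
# Membership works for strings (substring test) and dicts (key test)
print('py' in 'python')          # True
print('a' in {'a': 1, 'b': 2})  # True: tests keys, not values
print(1 in {'a': 1}.values())   # True: test values explicitly
```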
**Note about the above cell:** There are several options for this particular step depending on the computational resources available and network size. If the network is sufficiently small (<250k edges), it is recommended to use the 'small_network_AUPRC_wrapper' function, as it can be much faster, especially when run i...
# Construct null networks and calculate the AUPRC of the gene sets of the null networks
# We can use the AUPRC wrapper function for this
null_AUPRCs = []
for i in range(10):
    shuffNet = nef.shuffle_network(network, max_tries_n=10, verbose=True)
    shuffNet_kernel = nef.construct_prop_kernel(shuffNet, alpha=alpha, v...
_____no_output_____
MIT
Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb
jdtibochab/network_bisb
**Note about the above cell:** We use a small number of shuffled networks to calculate the null AUPRC values for this example, but a larger number may give a better representation of the true null AUPRC value, especially if the resulting distribution of ...
# Construct table of null AUPRCs
null_AUPRCs_table = pd.concat(null_AUPRCs, axis=1)
null_AUPRCs_table.columns = ['shuffNet'+repr(i+1) for i in range(len(null_AUPRCs))]

# Calculate performance metric of gene sets
network_performance = nef.calculate_network_performance_score(AUPRC_values, null_AUPRCs_table, verbose=True)...
Nodes: 9432
Edges: 151352
Avg Node Degree: 32.093299406276508
Edge Density: 0.0034029582659608213
Avg Network Performance Rank: 6.53125
Avg Network Performance Rank, Rank: 7
Avg Network Performance Gain Rank: 6.53125
Avg Network Performance Gain Rank, Rank: 7
MIT
Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb
jdtibochab/network_bisb
gym-anytrading

`AnyTrading` is a collection of [OpenAI Gym](https://github.com/openai/gym) environments for reinforcement learning-based trading algorithms. Trading algorithms are mostly implemented in two markets: [FOREX](https://en.wikipedia.org/wiki/Foreign_exchange_market) and [Stock](https://en.wikipedia.org/wi...
import gym
import gym_anytrading

env = gym.make('forex-v0')
# env = gym.make('stocks-v0')
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
- This will create the default environment. You can change any parameters such as dataset, frame_bound, etc.

Create an environment with custom parameters

I put two default datasets for [*FOREX*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/FOREX_EURUSD_1H_ASK.csv) and [*Stocks*](htt...
from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL

custom_env = gym.make('forex-v0',
                      df=FOREX_EURUSD_1H_ASK,
                      window_size=10,
                      frame_bound=(10, 300),
                      unit_side='right')
# custom_env = gym.make('stocks-v0',
#                       df=STOCK...
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
- Note that the first element of `frame_bound` must be greater than or equal to `window_size`.

Print some information
print("env information:") print("> shape:", env.shape) print("> df.shape:", env.df.shape) print("> prices.shape:", env.prices.shape) print("> signal_features.shape:", env.signal_features.shape) print("> max_possible_profit:", env.max_possible_profit()) print() print("custom_env information:") print("> shape:", custom_...
env information:
> shape: (24, 2)
> df.shape: (6225, 5)
> prices.shape: (6225,)
> signal_features.shape: (6225, 2)
> max_possible_profit: 4.054414887146586

custom_env information:
> shape: (10, 2)
> df.shape: (6225, 5)
> prices.shape: (300,)
> signal_features.shape: (300, 2)
> max_possible_profit: 1.122900180008982
MIT
README.ipynb
kaiserho/gym-anytrading
- Here `max_possible_profit` signifies that if the market didn't have trade fees, you could have earned **4.054414887146586** (or **1.122900180008982**) units of currency by starting with **1.0**. In other words, your money would almost *quadruple*.

Plot the environment
env.reset()
env.render()
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
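Before the complete example below, here is a minimal random-agent loop, a sketch assuming the `env` created earlier and the classic `gym` step API (4-tuple return):

```python
# Minimal random-agent loop (sketch; assumes `env` from the cells above)
observation = env.reset()
while True:
    action = env.action_space.sample()  # random Buy/Sell action
    observation, reward, done, info = env.step(action)
    if done:
        print("info:", info)
        break
```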
- **Short** and **Long** positions are shown in `red` and `green` colors.
- As you can see, the starting *position* of the environment is always **Short**.

A complete example
import gym
import gym_anytrading
from gym_anytrading.envs import TradingEnv, ForexEnv, StocksEnv, Actions, Positions
from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL
import matplotlib.pyplot as plt

env = gym.make('forex-v0', frame_bound=(50, 100), window_size=10)
# env = gym.make('stocks-v0', fra...
info: {'total_reward': -173.10000000000602, 'total_profit': 0.980652456904312, 'position': 0}
MIT
README.ipynb
kaiserho/gym-anytrading
- You can use the `render_all` method to avoid rendering on each step and save time.
- As you can see, the first **10** points (`window_size`=10) on the plot don't have a *position*, because they aren't involved in calculating reward, profit, etc.; they just display the first observations. So the environment's `_start...
def my_process_data(env):
    start = env.frame_bound[0] - env.window_size
    end = env.frame_bound[1]
    prices = env.df.loc[:, 'Low'].to_numpy()[start:end]
    signal_features = env.df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end]
    return prices, signal_features

class MyForexEnv(ForexEnv):
    ...
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
**Method 2:**
def my_process_data(df, window_size, frame_bound):
    start = frame_bound[0] - window_size
    end = frame_bound[1]
    prices = df.loc[:, 'Low'].to_numpy()[start:end]
    signal_features = df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end]
    return prices, signal_features

class MyStocksEnv(StocksEn...
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
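As a usage sketch (assuming the truncated class bodies above wire `my_process_data` into the environment's data processing), the custom classes can then be instantiated much like the built-in environments:

```python
# Hypothetical instantiation of the custom environments defined above
env = MyForexEnv(df=FOREX_EURUSD_1H_ASK, window_size=10, frame_bound=(10, 300))
# env = MyStocksEnv(df=STOCKS_GOOGL, window_size=10, frame_bound=(10, 300))
```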
Generate some training data

Sample points from the true underlying function and add some noise
def black_box(x):
    return x * np.sin(x)

x_lim = [0, 20]
n_train = 10
sigma_n = 0.0

x_train = np.random.uniform(x_lim[0], x_lim[1], n_train)[None, :]
y_train = black_box(x_train).T  # Get some samples from the black-box function
y_train += np.random.normal(0, sigma_n, n_train)[:, None]  # Add noise
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb
tsitsimis/gau-pro
Pick some test points for evaluation
# test inputs
n_test = 200
x_test = np.linspace(x_lim[0], x_lim[1], n_test)[None, :]
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb
tsitsimis/gau-pro
Fit and Evaluate Gaussian Process
# fit
regressor = gp.Regressor(kernels.se_kernel())
regressor.fit(x_train, y_train)
mu, cov = regressor.predict(x_test)

plt.figure(figsize=(12, 6))
t = np.linspace(x_lim[0], x_lim[1], 100)
plt.plot(t, black_box(t), c='k', linestyle=':', label="True function")
plt.scatter(x_train, y_train, marker='+', c='r', s=220, zo...
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb
tsitsimis/gau-pro
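Since `regressor.predict` returns both a mean and a covariance, a confidence band can be drawn around the prediction. A sketch, assuming `mu` aligns with `x_test` and `cov` is the full predictive covariance matrix:

```python
# ~95% band from the predictive mean and covariance (sketch)
std = np.sqrt(np.diag(cov))
x = x_test.ravel()
plt.plot(x, mu.ravel(), c='b', label="Predictive mean")
plt.fill_between(x, mu.ravel() - 2 * std, mu.ravel() + 2 * std,
                 alpha=0.3, label="~95% confidence")
plt.legend()
```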
How to define a scikit-learn pipeline and visualize it

The goal of keeping this notebook is to:
- make it available for users that want to reproduce it locally
- archive the script in the event we want to rerecord this video with an update in the UI of scikit-learn in a future release.

First we load the dataset. We nee...
import pandas as pd

ames_housing = pd.read_csv("../datasets/house_prices.csv", na_values='?')
target_name = "SalePrice"
data, target = ames_housing.drop(columns=target_name), ames_housing[target_name]
target = (target > 200_000).astype(int)
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
We inspect the first rows of the dataframe
data
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
We can cherry-pick some features and only retain this subset of data
numeric_features = ['LotArea', 'FullBath', 'HalfBath']
categorical_features = ['Neighborhood', 'HouseStyle']
data = data[numeric_features + categorical_features]
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Then we create the pipeline. The first step is to define the preprocessing steps.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
])
categorical_transformer = OneHotEncoder(handle_u...
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
The next step is to apply the transformations using `ColumnTransformer`
from sklearn.compose import ColumnTransformer

preprocessor = ColumnTransformer(transformers=[
    ('num', numeric_transformer, numeric_features),
    ('cat', categorical_transformer, categorical_features),
])
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Then we define the model and join the steps in order
from sklearn.linear_model import LogisticRegression

model = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression()),
])
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Let's visualize it!
from sklearn import set_config

set_config(display='diagram')
model
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Finally we score the model
from sklearn.model_selection import cross_validate

cv_results = cross_validate(model, data, target, cv=5)
scores = cv_results["test_score"]
print("The mean cross-validation accuracy is: "
      f"{scores.mean():.3f} +/- {scores.std():.3f}")
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
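Cross-validation only estimates generalization; to actually use the pipeline, fit it on the full data and call `predict`. A minimal sketch:

```python
# Fit the full pipeline on all the data, then predict on a few rows
model.fit(data, target)
print(model.predict(data.head()))  # class labels (0/1 after thresholding SalePrice)
```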
Pipeline: Clean Continuous Features

Using the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition. This dataset contains information about 891 people who were on board the ship when it sank on April 15th, 1912. As noted in the description on Kaggle's website, some people aboard the...
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline

titanic = pd.read_csv('../../../titanic.csv')
titanic.head()
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Clean continuous variables
1. Fill in missing values for `Age`
2. Combine `SibSp` & `Parch`
3. Drop irrelevant/repetitive variables (`SibSp`, `Parch`, `PassengerId`)

Fill missing for `Age`
titanic.isnull().sum()
titanic['Age'].fillna(titanic['Age'].mean(), inplace=True)
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
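An equivalent, assignment-based form of the imputation above (avoids relying on `inplace=True`, which newer pandas versions discourage):

```python
# Same imputation without inplace mutation
titanic['Age'] = titanic['Age'].fillna(titanic['Age'].mean())
```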
Combine `SibSp` & `Parch`
for i, col in enumerate(['SibSp', 'Parch']):
    plt.figure(i)
    sns.catplot(x=col, y='Survived', data=titanic, kind='point', aspect=2)

titanic['Family_cnt'] = titanic['SibSp'] + titanic['Parch']
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Drop unnecessary variables
titanic.drop(['PassengerId', 'SibSp', 'Parch'], axis=1, inplace=True)
titanic.head(10)
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Write out cleaned data
titanic.to_csv('../../../titanic_cleaned.csv', index=False)
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
WeatherPy
----
Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from random import uniform
from scipy.stats import linregress
from scipy import stats

# Import API key
from config import weather_api_key

# Incorporated citipy to determine city based on latitu...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Generate Cities List
# List for holding lat_lngs and cities
lat_lngs = []
cities = []

# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)

# Identify nearest city for each lat, lng combinati...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
# Set up the url
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
query_url = f"{url}appid={weather_api_key}&units={units}&q="
print(query_url)

# creating lists to store extracted values per city
city_name = []
country = []
date = []
lat = []
lng = []
temp = []
humidity = []
cloudiness = []
wi...
Beginning Data Retrieval
--------------------------------
The city is Mehrān and the city id is 124291.
The city is Busselton and the city id is 2075265.
The city is Bulgan and the city id is 2032201.
The city is Bethel and the city id is 5282297.
City not found. Skipping...
The city is Mar del Plata and the city id is...
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
# Create a data frame from the data
weather_dict = {
    # key on left, right side is values
    "City": city_name,
    "Cloudiness": cloudiness,
    "Country": country,
    "Date": date,
    "Humidity": humidity,
    "Lat": lat,
    "Lng": lng,
    "Max Temp": temp,
    "Wind Speed": wind
}

# Put data into data frame
weat...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
weather_data_df[weather_data_df["Humidity"]>100] # Get the indices of cities that have humidity over 100%. weather_data_df = weather_data_df.loc[(weather_data_df["Humidity"] < 100)] weather_data_df # Make a new DataFrame equal to the city data to drop all humidity outliers by index. # Passing "inplace=False" will make...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.

Latitude vs. Temperature Plot
# Plot the graph
plt.scatter(lat, temp, marker="o", facecolors="tab:blue", edgecolors="black")

# Setting the title and axes
plt.title("City Latitude vs. Max Temperature (9/2020)")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")

# Add in a grid for the chart
plt.grid()

# Save our graph and show the graph
p...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Latitude vs. Humidity Plot
# Plot the graph
plt.scatter(lat, humidity, marker="o", facecolors="tab:blue", edgecolors="black")

# Setting the title and axes
plt.title("City Latitude vs. Humidity (9/2020)")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")

# Add in a grid for the chart
plt.grid()

# Setting graph limits
plt.xlim(-60, 85)
plt.y...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Latitude vs. Cloudiness Plot
# Plot the graph
plt.scatter(lat, cloudiness, marker="o", facecolors="tab:blue", edgecolors="black")

# Setting the title and axes
plt.title("City Latitude vs. Cloudiness (9/2020)")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")

# Add in a grid for the chart
plt.grid()

# Save our graph and show the graph
plt.t...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Latitude vs. Wind Speed Plot
# Plot the graph
plt.scatter(lat, wind, marker="o", facecolors="tab:blue", edgecolors="black")

# Setting the title and axes
plt.title("City Latitude vs. Wind Speed (9/2020)")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (MPH)")

# Add in a grid for the chart
plt.grid()

# Save our graph and show the graph
plt.tight...
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Linear Regression

Northern Hemisphere - Max Temp vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0, :]
plt.scatter(north_hem['Lat'], north_hem['Max Temp'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Max Temp')
plt.title("Northern Hemisphere - Max Temp vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_values = north_...
R Val is 0.694055871357037
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
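The regression cells in this section repeat the same scatter/fit/plot steps per hemisphere and variable; a hypothetical helper consolidating them (names are illustrative, not from the notebook; assumes `plt` from the setup cell):

```python
from scipy.stats import linregress

def plot_lat_regression(df, column, ylabel, title):
    # Scatter a weather variable against latitude and overlay a linear fit
    x, y = df["Lat"], df[column]
    fit = linregress(x, y)
    plt.scatter(x, y, marker="o", facecolors="dodgerblue")
    plt.plot(x, fit.slope * x + fit.intercept, color="red")
    plt.xlabel("Latitude")
    plt.ylabel(ylabel)
    plt.title(title)
    # the notebook's "R Val" appears to be r-squared; use fit.rvalue if it is r
    print("R Val is", fit.rvalue ** 2)
```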
Southern Hemisphere - Max Temp vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0, :]
plt.scatter(south_hem['Lat'], south_hem['Max Temp'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Max Temp (F)')
plt.title("Southern Hemisphere - Max Temp vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_values = so...
R Val is 0.694055871357037
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0, :]
plt.scatter(north_hem['Lat'], north_hem['Humidity'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.title("Northern Hemisphere - Humidity vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_values = no...
R Val is 0.00679787383915855
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0, :]
plt.scatter(south_hem['Lat'], south_hem['Humidity'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.title("Southern Hemisphere - Humidity vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_values = so...
R Val is 0.00679787383915855
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0, :]
plt.scatter(north_hem['Lat'], north_hem['Cloudiness'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.title("Northern Hemisphere - Cloudiness vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_value...
R Val is 0.00044310179247288993
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0, :]
plt.scatter(south_hem['Lat'], south_hem['Cloudiness'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.title("Southern Hemisphere - Cloudiness vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_values ...
R Val is 0.00044310179247288993
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0, :]
plt.scatter(north_hem['Lat'], north_hem['Wind Speed'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (MPH)')
plt.title("Northern Hemisphere - Wind Speed vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_val...
R Val is 0.02084202630425654
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0, :]
plt.scatter(south_hem['Lat'], south_hem['Wind Speed'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (MPH)')
plt.title("Southern Hemisphere - Wind Speed vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_valu...
R Val is 0.02084202630425654
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Figure 6
from sympy import symbols, exp, solve, logcombine, simplify, Piecewise, lambdify, N, init_printing, Eq
import numpy
import scipy.stats as ss
from sympy.physics.units import seconds, siemens, volts, farads, amperes, milli, micro, nano, pico, ms, s, kg, meters
from matplotlib import pyplot as plt
import matplotlib
from m...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 A Circuit diagram
prefix = '/home/bhalla/Documents/Codes/data'
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 B: Fitting voltage clamp data to get parameters
analysisFile = prefix + '/media/sahil/NCBS_Shares_BGStim/patch_data/170530/c1_EI/plots/c1_EI.pkl'
plotDir = os.path.dirname(analysisFile)
neuron = Neuron.load(analysisFile)
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
$g(t) = \bar{g}\frac{( e^\frac{\delta_{onset} - t }{\tau_{decay}} - e^\frac{\delta_{onset} - t }{\tau_{rise}})}{- \left(\frac{\tau_{rise}}{\tau_{decay}}\right)^{\frac{\tau_{decay}}{\tau_{decay} - \tau_{rise}}} + \left(\frac{\tau_{rise}}{\tau_{decay}}\right)^{\frac{\tau_{rise}}{\tau_{decay} - \tau_{rise}}}}$
def fitFunctionToPSP(time, vector, t_0=0, g_max=0):
    ''' Fits using lmfit '''

    def _doubleExponentialFunction(t, t_0, tOn, tOff, g_max):
        ''' Returns the shape of an EPSP as a double exponential function '''
        tPeak = t_0 + float(((tOff * tOn)/(tOff-tOn)) * numpy.log(tOff/tOn))
        A = 1./(numpy...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
| Variable | Meaning | Range |
|---|---|---|
|$t$|Time (ms)|0-100|
|$P$|Proportion of $g_i/g_e$|2-4|
|$\tau_{er}$|Excitatory Rise (ms)|1.5-5|
|$\tau_{ed}$|Excitatory Fall (ms)|8-20|
|$\delta_e$|Excitatory onset time (ms)|0-0|
|$\rho_e$|Excitatory $\tau$ ratio (fall/rise)|2-7|
|$\bar{g}_e$|Excitatory max conductance|0.02-0.25...
### Double exponential to explain the net synaptic conductance.
alpha = exp(-(t - delta_e) / e_d) - exp(-(t - delta_e) / e_r)
alpha_prime = alpha.diff(t)
theta_e = solve(alpha_prime, t)  # Time to peak
theta_e = logcombine(theta_e[0])
simplify(theta_e.subs(averageEstimateDict))
alpha_star = simplify(alpha.subs(t, theta...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 C Divisive Inhibition: Inhibition proportional to Excitation, or $g_i = P \times g_e$
di_exc = [[float(f(e * nS, 0., dt * ms)) for dt in trange] for e in erange]
di_control = {prop: [[float(f(e * nS, prop, dt * ms)) for dt in trange] for e in erange] for prop in prop_array}

fig, ax = plt.subplots()
# plt.style.context('neuron-color')
handles, labels = [], []
for prop in prop_array:
    v_max, e_max = [...
Constant $\delta_i$ was 4.0 ms
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 DEF: Divisive Normalization: Inhibition proportional to Excitation, or $g_i = P \times g_e$, and $\delta_i$ inversely proportional to $g_e$

6 D Changing $\delta_i = \delta_{min} + m e^{-k \times g_e}$
time_erange = numpy.linspace(0., 4., 10)
d = lambda minDelay, k, e: minDelay + m*exp(-(k*e))
nS = nano*siemens
k, m, minDelay = 1.43/nS, 18.15*ms, 2.54*ms
maxDelay = (minDelay + m)/ms

fig, ax = plt.subplots()
ax.scatter(time_erange, [d(minDelay, k, e*nS)/ms for e in time_erange], s=40, facecolor='k', edgecolor='k')
ax.set_xl...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 E Divisive Normalization
fig, ax = plt.subplots()
handles, labels = [], []
for prop in prop_array:
    v_max, e_max = [], []
    for con_trace, e_t in zip(dn_control[prop], dn_exc):
        v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV))
        e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
    handles.append(a...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Synapses to threshold
threshold = 5.5
fig, ax = plt.subplots()
handles, labels = [], []
for prop in prop_array:
    v_max, e_max, spk_t = [], [], []
    for con_trace, e_t in zip(dn_control[prop], dn_exc):
        v_max.append(max(con_trace) - float(approximateDict[leak_rev...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
5 B Model subtraction scheme
fig, ax = plt.subplots()
handles, labels = [], []
prop = 4
i_max, e_max = [], []
trace_c, trace_e = numpy.array(dn_control[prop][-1]), numpy.array(dn_exc[-1])
ax.plot(trange, trace_c, label="PSP")
ax.plot(trange, trace_e, label="EPSP")
trace_i = float(approximateDict[leak_rev]/mV) + (trace_c - trace_e)
ax.plot(trange...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 F Excitation - Derived Inhibition plot
fig, ax = plt.subplots()
handles, labels = [], []
for prop in prop_array:
    i_max, e_max = [], []
    for con_trace, e_t in zip(dn_control[prop], dn_exc):
        i_t = numpy.array(e_t) - numpy.array(con_trace)
        i_max.append(numpy.max(i_t))
        # i_max.append(max(e_t) - max(con_trace))
        e_max.append(...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 G Time to peak
fig, ax = plt.subplots()
handles, labels = [], []
for prop in prop_array:
    ttp, e_max = [], []
    for con_trace, e_t in zip(dn_control[prop], dn_exc):
        ttp.append(numpy.argmax(con_trace))
        e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
    handles.append(ax.scatter(e_max[1:], ttp[1:], s=1...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 H Permutation of P
check_vm = simplify(Vm_t[0].subs({i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({delta_i: d(minDelay, k, g_e)}).evalf())
f = lambdify((g_e, g_i, t), check_vm/mV, (unitsDict, "numpy"))
p_perm_dn_exc = [[float(f(e * nS, 0., dt * ms)) for dt in trange] fo...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 I Permutation of $\delta_i$
check_vm = simplify(Vm_t[0].subs({i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e}).evalf())
f = lambdify((g_e, P, delta_i, t), check_vm/mV, (unitsDict, "numpy"))
d_perm_dn_exc = [[float(f(e * nS, 0., d(minDelay, k, e*nS), dt * ms)) for dt ...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 J Phase plot Divisive Normalization
import lmfit

def DN_model(x, a=1):
    # Divisive normalization model
    return (a*x)/(x+a)

DN_Model = lmfit.Model(DN_model)

check_vm = simplify(Vm_t[0].subs({i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e}).evalf())
f = lambdify((g_e, P, d...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Delay plots
d = lambda minDelay, k, e: minDelay + m*exp(-(k*e))
nS = nano*siemens
m, minDelay = 18.15*ms, 2.54*ms
maxDelay = (minDelay + m)/ms
k_sample_indices = [1, 3, 5]

fig, ax = plt.subplots(len(k_array[k_sample_indices]), 1, sharey=True)
for axis, k in zip(ax, k_array[k_sample_indices]):
    axis.plot(time_erange, [d(minDelay, k/nS, e*...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
I/E differences
ie_sample_indices = [1, 3, 6]
fig, ax = plt.subplots(1, 3, sharey=True)
for axis, i_by_e in zip(ax, prop_array[ie_sample_indices]):
    axis.plot(erange, i_by_e * erange, '.-', c='k', markersize=5)
    axis.set_xlabel("$g_{exc}$ (nS)")
    axis.set_xlim(0, 4.5)
    axis.set_xticks(range(4))
    # axis.set_yticks(range(0,13,2...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
DN traces for these values
fig, ax = plt.subplots(len(k_sample_indices), len(ie_sample_indices), sharex=True, sharey=True)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=gamma_cutOff))
for ind1, k_index in enumerate(k_sample_indices):
    for ind2, prop_index in enumerate(ie_sample_indices):
        k, prop = k_array[k_index...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
SDN curve for these values
sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=gamma_cutOff))
fig, ax = plt.subplots(len(k_sample), len(ie_sample), sharex=True, sharey=True)
for ind1, k_index in enumerate(k_sample_indices):
    for ind2, prop_index in enumerate(ie_sample_indices):
        k, prop = k_array[k_index], prop_array[pr...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 K $\delta_i$ as a function of $g_e$
prefix = '/home/bhalla/Documents/Codes/data'
n = Neuron.load(prefix + '/media/sahil/NCBS_Shares_BGStim/patch_data/170720/c5_EI/plots/c5_EI.pkl')

def delay_excitation(x, a=1., b=1., c=1.):
    # Delay as a function of excitation
    # return a + b*numpy.exp(-c*x)
    return a + (x/b)

def findOnsetTime(trial, step=0.5, sli...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Binning delays here
bins = numpy.linspace(0, max(maxConductance), 6)
digitized = numpy.digitize(maxConductance, bins)
conductance_mean = [maxConductance[digitized == i].mean() for i in range(len(bins))]
delay_mean = [delay[digitized == i].mean() for i in range(len(bins))]
conductance_std = [maxConductance[digitized == i].std(ddof=1) for i i...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Over all EI cells
voltageClampFiles = '/media/sahil/NCBS_Shares_BGStim/patch_data/voltage_clamp_files.txt'
with open(voltageClampFiles, 'r') as r:
    dirnames = r.read().splitlines()

a = ['161220 c2_EI', '170510 c2_EI', '170524 c3_EI', '170524 c1_EI', '170530 c2_EI', '170530 c1_EI', '170531 c2_EI', '170531 c4_EI', '170531 c1_EI...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Fitting through all cells
cmap = matplotlib.cm.viridis
colors = matplotlib.cm.viridis(numpy.linspace(0, 1, len(all_inh_conductances)))
fig, ax = plt.subplots()
# norm = matplotlib.colors.Normalize(vmin=1, vmax=6)
slopeArr = []
adist, bdist = [], []
flattened_g, flattened_d = [], []
for i, (g, gi, d, c) in enumerate(zip(all_conductances, all_inh_...
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Importing Libraries
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os.path as op
import pickle
import tensorflow as tf
from tensorflow import keras
from keras.models import Model, Sequential, load_model
from keras.layers import Input, Embedding
from keras.layers import Dense, Bidirectional
from keras.laye...
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Data Fetching
A1 = np.empty((0, 5), dtype='float32')
U1 = np.empty((0, 7), dtype='float32')

node = ['150', '149', '147', '144', '142', '140', '136', '61']
mon = ['Apr', 'Mar', 'Aug', 'Jun', 'Jul', 'Sep', 'May', 'Oct']

for j in node:
    for i in mon:
        inp = pd.read_csv('../../../data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv', usecols=[1, 2, 3, 15, 16...
[[1.50000e+02 1.90401e+05 7.25000e+02 2.75500e+01 8.03900e+01]
 [1.50000e+02 1.90401e+05 8.25000e+02 2.75600e+01 8.03300e+01]
 [1.50000e+02 1.90401e+05 9.25000e+02 2.75800e+01 8.02400e+01]
 ...
 [6.10000e+01 1.91020e+05 1.94532e+05 2.93700e+01 7.52100e+01]
 [6.10000e+01 1.91020e+05 1.94632e+05 2.93500e+01 7.52700e+01]
 ...
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
SparsePCA Transformation
from sklearn.decomposition import SparsePCA
import warnings

scaler_obj1 = SparsePCA()
scaler_obj2 = SparsePCA()
X1 = scaler_obj1.fit_transform(A1)
Y1 = scaler_obj2.fit_transform(U1)
warnings.filterwarnings(action='ignore', category=UserWarning)

X1 = X1[:, np.newaxis, :]
Y1 = Y1[:, np.newaxis, :]

def rmse(y_true, y_pred):
    return...
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Model
inp = keras.Input(shape=(1, 5))
l = keras.layers.Conv1D(16, 1, padding="same", activation="tanh", kernel_initializer="glorot_uniform")(inp)
output = keras.layers.Conv1D(7, 4, padding="same", activation='sigmoid')(l)

model1 = keras.Model(inputs=inp, outputs=output)
model1.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5)...
13518/13518 [==============================] - 99s 7ms/step - loss: -140.0141 - accuracy: 0.3129 - mse: 1403845.8750 - mae: 73.5453 - rmse: 139.6234
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Saving Model as File
model1.evaluate(x_train, y_train)

df1 = pd.DataFrame(history1.history['loss'], columns=["Loss"])
df1 = df1.join(pd.DataFrame(history1.history["val_loss"], columns=["Val Loss"]))
df1 = df1.join(pd.DataFrame(history1.history["accuracy"], columns=['Accuracy']))
df1 = df1.join(pd.DataFrame(history1.history["val_accuracy"], columns=['Va...
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Error Analysis
# summarize history for loss
plt.plot(history1.history['loss'])
plt.plot(history1.history['val_loss'])
plt.title('Model Loss', fontweight='bold', fontsize=15)
plt.ylabel('Loss', fontweight='bold', fontsize=15)
plt.xlabel('Epoch', fontweight='bold', fontsize=15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.sho...
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Carreau-Yasuda
(nu*x).shape

import pylab
import matplotlib.pyplot as plt
import numpy

def get_cmap(n, name='viridis'):
    return plt.cm.get_cmap(name, n)

a = 2
nu0 = 5e-3
nuinf = 1e-3
h_channel = 1.2  # m
shear_rate_order = 10
x = numpy.logspace(-shear_rate_order, shear_rate_order, 10000)
k = [5e-1, 5e2, 5e8]
n = [0.8]
fig, ax = plt.subplots(figs...
_____no_output_____
MIT
notebook/safa.ipynb
alineu/elastic_beams_in_shear_flow
Herschel-Bulkley
import pylab
import matplotlib.pyplot as plt
import numpy

def get_cmap(n, name='viridis'):
    return plt.cm.get_cmap(name, n)

a = 2
npoints = 10000
h_channel = 1.2  # m
velocity = 2.0  # m/s
shear_rate = velocity/h_channel  # 1/s
nu0 = 5e-3*numpy.ones(npoints)
nuinf = 1e-3  # Newtonian regime
tau0 = shear_rate*nuinf*numpy.ones(npo...
_____no_output_____
MIT
notebook/safa.ipynb
alineu/elastic_beams_in_shear_flow
Vertex client library: AutoML video action recognition model for batch prediction

Overview: This tutorial demonstrates how to use the Vertex client library for Python to create video action recognition models and do batch prediction using Google Cloud...
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = '--user'
else:
    USER_FLAG = ''

! pip3 install -U google-cloud-aiplatform $USER_FLAG
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Install the latest GA version of *google-cloud-storage* library as well.
! pip3 install -U google-cloud-storage $USER_FLAG
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Restart the kernel

Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Before you begin

GPU runtime

*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**

Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud proje...
PROJECT_ID = "[your-project-id]" #@param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", ...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Region

You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.

- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`

You may not use a multi-regiona...
REGION = 'us-central1' #@param {type: "string"}
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources which will be created in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Authenticate your Google Cloud account

**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.

**Otherwise**, follow these steps: In the Cloud Cons...
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have ...
BUCKET_NAME = "gs://[your-bucket-name]" #@param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Set up variables

Next, set up some variables used throughout the tutorial.

Import libraries and define constants

Import Vertex client library

Import the Vertex client library into our Python environment.
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples