| markdown (string, 0–37k chars) | code (string, 1–33.3k chars) | path (string, 8–215 chars) | repo_name (string, 6–77 chars) | license (15 classes) |
|---|---|---|---|---|
Signal processing
Most MNE objects have built-in methods for filtering: | filt_bands = [(1, 3), (3, 10), (10, 20), (20, 60)]
f, (ax, ax2) = plt.subplots(2, 1, figsize=(15, 10))
data, times = raw[0]
_ = ax.plot(data[0])
for fmin, fmax in filt_bands:
    raw_filt = raw.copy()
    raw_filt.filter(fmin, fmax, fir_design='firwin')
    _ = ax2.plot(raw_filt[0][0][0])
ax2.legend(filt_bands)
ax.set_... | 0.17/_downloads/4db67f73b2950e88bd1e641ba8cf44c0/plot_modifying_data_inplace.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
In addition, there are functions for applying the Hilbert transform, which is
useful for calculating the phase/amplitude of your signal. | # Filter signal with a fairly steep filter, then take the Hilbert transform
raw_band = raw.copy()
raw_band.filter(12, 18, l_trans_bandwidth=2., h_trans_bandwidth=2.,
fir_design='firwin')
raw_hilb = raw_band.copy()
hilb_picks = mne.pick_types(raw_band.info, meg=False, eeg=True)
raw_hilb.apply_hilbert(hilb_p... | 0.17/_downloads/4db67f73b2950e88bd1e641ba8cf44c0/plot_modifying_data_inplace.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Finally, it is possible to apply arbitrary functions to your data. Here we will
use this to take the amplitude and phase of
the Hilbert-transformed data.
<div class="alert alert-info"><h4>Note</h4><p>You can also use ``amplitude=True`` in the call to
:meth:`mne.io.Raw.apply_hilbert` to do ... | # Take the amplitude and phase
raw_amp = raw_hilb.copy()
raw_amp.apply_function(np.abs, hilb_picks)
raw_phase = raw_hilb.copy()
raw_phase.apply_function(np.angle, hilb_picks)
f, (a1, a2) = plt.subplots(2, 1, figsize=(15, 10))
a1.plot(raw_band[hilb_picks[0]][0][0].real)
a1.plot(raw_amp[hilb_picks[0]][0][0].real)
a2.plo... | 0.17/_downloads/4db67f73b2950e88bd1e641ba8cf44c0/plot_modifying_data_inplace.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
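`apply_hilbert` computes the analytic signal, whose magnitude and angle are exactly the amplitude envelope and instantaneous phase taken with `np.abs` and `np.angle` above. A plain-NumPy sketch of the same frequency-domain construction (the 15 Hz test tone is an assumption):

```python
import numpy as np

def analytic_signal(x):
    # Frequency-domain analytic signal: zero the negative frequencies,
    # double the positive ones (the same construction scipy.signal.hilbert uses)
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2
    else:
        h[1:(N + 1) // 2] = 2
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.cos(2 * np.pi * 15 * t)      # 15 Hz test tone (an assumption)
z = analytic_signal(x)
amplitude = np.abs(z)               # envelope, as with np.abs above
phase = np.angle(z)                 # instantaneous phase, as with np.angle
```

For a pure on-bin cosine the envelope is constant at 1 and the unwrapped phase advances linearly at the tone frequency.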
What is the total number of synapses in our data set? | # Total number of synapses
print(np.sum(nonzero_rows[:,4])) | other/Descriptive and Exploratory_oldversion.ipynb | Upward-Spiral-Science/team1 | apache-2.0 |
What is the maximum number of synapses at a given point in our data set? | # Max number of synapses
max_idx = np.argmax(nonzero_rows[:,4])   # argmax returns the row index, not the count
print(nonzero_rows[max_idx, 4])          # the maximum synapse count itself
loc = (nonzero_rows[max_idx,0], nonzero_rows[max_idx,1], nonzero_rows[max_idx,2])
print(loc) | other/Descriptive and Exploratory_oldversion.ipynb | Upward-Spiral-Science/team1 | apache-2.0 |
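A small self-contained illustration of the pattern above, using toy rows in the same `(x, y, z, _, synapses)` layout: note that `np.argmax` returns the row index, so the count itself still has to be read back out of column 4.

```python
import numpy as np

# Toy (x, y, z, unused, synapses) rows standing in for nonzero_rows
rows = np.array([[0, 0, 0, 0, 3],
                 [1, 2, 3, 0, 9],
                 [2, 0, 1, 0, 5]])
idx = np.argmax(rows[:, 4])   # index of the densest row, not the count
max_count = rows[idx, 4]      # the count itself
loc = tuple(rows[idx, 0:3])   # its (x, y, z) location
```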
What are the minimum and maximum x, y, and z values (and thus the bounding extent of (x,y,z) for our data set)? | print([min(csv[1:,1]), min(csv[1:,2]), min(csv[1:,3])]) # (x,y,z) minimum
print([max(csv[1:,1]), max(csv[1:,2]), max(csv[1:,3])]) # (x,y,z) maximum
| other/Descriptive and Exploratory_oldversion.ipynb | Upward-Spiral-Science/team1 | apache-2.0 |
What does the histogram of our data look like? | # Histogram
fig = plt.figure()
ax = fig.gca()
plt.hist(nonzero_rows[:,4])
ax.set_title('Synapse Density')
ax.set_xlabel('Number of Synapses')
ax.set_ylabel('Number of (x,y,z) points with synapse density = x')
plt.show() | other/Descriptive and Exploratory_oldversion.ipynb | Upward-Spiral-Science/team1 | apache-2.0 |
What does the probability mass function of our data look like? | # PMF
syns = csv[1:,4]
total = np.sum(syns)   # avoid shadowing the built-in sum
density = syns/total
mean = np.mean(density)
std = np.std(density)
print(std, mean)
#for locating synapse values of zero
def check_condition(row):
    if row[-1] == 0:
        return False
    return True
#for filtering by the mean number of synapses
def synapse_filt(row, avg):
... | other/Descriptive and Exploratory_oldversion.ipynb | Upward-Spiral-Science/team1 | apache-2.0 |
What does our data look like in a 3-D scatter plot? | # following code adapted from
# https://www.getdatajoy.com/examples/python-plots/3d-scatter-plot
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_title('3D Scatter Plot')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_zlim(z_min, z_max)
ax.... | other/Descriptive and Exploratory_oldversion.ipynb | Upward-Spiral-Science/team1 | apache-2.0 |
Load the data | dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':in... | Regression/assignments/Feature selection and LASSO Programming Assignment 1.ipynb | Weenkus/Machine-Learning-University-of-Washington | mit |
Feature engineering | sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
sales['floors_square'] = sales['floors']*sales['floors']
testing['sqft_living_sqrt'] = testing['sqft_living'].apply(sqrt)
testing['sqft_lot_... | Regression/assignments/Feature selection and LASSO Programming Assignment 1.ipynb | Weenkus/Machine-Learning-University-of-Washington | mit |
LASSO | model_all = linear_model.Lasso(alpha=5e2, normalize=True) # set parameters
model_all.fit(sales[all_features], sales['price']) # learn weights
print (all_features, '\n')
print (model_all.coef_)
print ('With intercept, nonzeros: ', np.count_nonzero(model_all.coef_) + np.count_nonzero(model_all.intercept_))
print ('No i... | Regression/assignments/Feature selection and LASSO Programming Assignment 1.ipynb | Weenkus/Machine-Learning-University-of-Washington | mit |
Calculate the RSS for best L1 on test data | model = linear_model.Lasso(alpha=bestL1, normalize=True)
model.fit(training[all_features], training['price']) # learn weights
# Calculate the RSS
RSS = ((model.predict(testing[all_features]) - testing.price) ** 2).sum()
print ("RSS: ", RSS, " for L1: ", bestL1)
print (model.coef_)
print (model.intercept_)
np.count_... | Regression/assignments/Feature selection and LASSO Programming Assignment 1.ipynb | Weenkus/Machine-Learning-University-of-Washington | mit |
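The RSS line above is just the sum of squared residuals between predictions and targets. A toy sketch with assumed numbers (not the house-price data):

```python
import numpy as np

# Toy targets and predictions (assumed numbers)
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])
rss = np.sum((y_pred - y_true) ** 2)   # residual sum of squares
```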
2-phase LASSO for finding a desired number of nonzero features
First we loop through an array of L1 values and find an interval where our model gets the desired number of nonzero coefficients. In phase 2 we loop once more through that interval, but with smaller steps; the goal of the second phase is t... | max_nonzeros = 7
L1_range = np.logspace(1, 4, num=20)
print (L1_range)
l1_penalty_min = L1_range[0]
l1_penalty_max = L1_range[19]
for L1 in L1_range:
    model = linear_model.Lasso(alpha=L1, normalize=True)
    model.fit(training[all_features], training['price'])  # learn weights
    nonZeroes = np.count_nonzero(... | Regression/assignments/Feature selection and LASSO Programming Assignment 1.ipynb | Weenkus/Machine-Learning-University-of-Washington | mit |
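The two-phase narrowing itself is independent of sklearn. With a stand-in function that maps penalty to nonzero count (monotone non-increasing, as with LASSO; the coefficient magnitudes below are a toy assumption), the search pattern looks like this:

```python
import numpy as np

def nonzeros_for_penalty(l1):
    # Toy stand-in for "fit LASSO, count nonzero coefficients":
    # larger penalties zero out more coefficients (monotone non-increasing)
    return int(np.sum(np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0]) > l1))

max_nonzeros = 3

# Phase 1: coarse sweep to bracket the penalty giving max_nonzeros
coarse = np.logspace(-1, 2, num=10)
counts = [nonzeros_for_penalty(l1) for l1 in coarse]
lo = max(l1 for l1, c in zip(coarse, counts) if c > max_nonzeros)
hi = min(l1 for l1, c in zip(coarse, counts) if c <= max_nonzeros)

# Phase 2: fine sweep inside the bracket; keep the smallest penalty
# that still achieves the desired sparsity
fine = np.linspace(lo, hi, num=50)
best = next(l1 for l1 in fine if nonzeros_for_penalty(l1) <= max_nonzeros)
```

Keeping the smallest qualifying penalty matters because, among models with the target sparsity, the least-penalized one typically has the lowest validation error.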
Sampling Distribution of Pearson Correlation
As with all the other statistics we've looked at over the course of the semester, the sample correlation coefficient may differ substantially from the underlying population correlation coefficient, depending on the vagaries of sampling.
First we'll generate a single sample, dr... | # generate bivariate normal data for uncorrelated variables
# See the docs on scipy.stats.multivariate_normal
# bivariate mean
mean = [0,0]
# covariance matrix
cov = np.array([[1,0],
[0,1]])
sample = stats.multivariate_normal.rvs(mean=mean, cov=cov, size=30)
sbn.jointplot(sample[:,0], sample[:,1... | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Simulate the sampling distribution of correlation coefficient, uncorrelated variables
First we'll simulate the sampling distribution of $r_{xy}$ when the true population correlation $\rho_{xy} = 0$ and the sample size $n = 30$. | mean = [0,0]
cov = [[1,0],
[0,1]]
ssize = 30
nsims = 2500
cors = []
for n in range(nsims):
    sample = stats.multivariate_normal.rvs(mean=mean, cov=cov, size=ssize)
    r = np.corrcoef(sample, rowvar=False, ddof=1)[0,1]
    cors.append(r)
sbn.distplot(cors)
pass | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
The above plot looks fairly symmetrical, and approximately normal. However, now let's look at the sampling distribution for $\rho_{xy} = 0.9$ and $n=30$. | mean = [0,0]
cov = [[1,0.9],
[0.9,1]]
ssize = 30
nsims = 2500
cors090 = []
for n in range(nsims):
    sample = stats.multivariate_normal.rvs(mean=mean, cov=cov, size=ssize)
    r = np.corrcoef(sample, rowvar=False, ddof=1)[0,1]
    cors090.append(r)
sbn.distplot(cors090)
pass | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
We see that the sampling distribution of the correlation is strongly skewed for larger values of $\rho_{xy}$
Fisher's transformation normalizes the distribution of the Pearson product-moment correlation, and is usually used to calculate confidence intervals and carry out hypothesis tests with correlations.
$$
F(r) = \frac{1}{2}\ln \frac{1+... | # plot the sampling distribution when rho = 0.9
# using Fisher's transformation
sbn.distplot(np.arctanh(cors090))
print("")
pass | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
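The transform and its inverse are simply `np.arctanh` and `np.tanh`; a quick numeric check of $F(r)=\frac{1}{2}\ln\frac{1+r}{1-r}$ and its round trip:

```python
import numpy as np

r = np.array([-0.5, 0.0, 0.9])
z = np.arctanh(r)     # Fisher's F(r) = 0.5 * ln((1 + r) / (1 - r))
r_back = np.tanh(z)   # the inverse transform recovers r
```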
Confidence intervals in the space of the transformed variables are:
$$
100(1 - \alpha)\%\text{CI}: \operatorname{arctanh}(\rho) \in [\operatorname{arctanh}(r) \pm z_{\alpha/2}SE]
$$
To put this back in terms of untransformed correlations:
$$
100(1 - \alpha)\%\text{CI}: \rho \in [\operatorname{tanh}(\operatorname{arctan... | def correlationCI(r, n, alpha=0.05):
    mu = np.arctanh(r)
    sigma = 1.0/np.sqrt(n-3)
    z = stats.norm.ppf(1 - alpha/2)
    left = np.tanh(mu - z*sigma)
    right = np.tanh(mu + z*sigma)
    return (left, right)
correlationCI(0, 30)
ssizes = np.arange(10,250,step=10)
cis = []
for i in ssizes:
    cis.append(cor... | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Rank Correlation
There are two popular "robust" estimators of correlation, based on a consideration of correlations of ranks. These are known as Spearman's Rho and Kendall's Tau.
Spearman's Rho
Spearman's rank correlation, or Spearman's Rho, for variables $X$ and $Y$ is simply the correlation of the ranks of the $X$ a... | n = 50
x = np.linspace(1,10,n) + stats.norm.rvs(size=n)
y = x + stats.norm.rvs(loc=1, scale=1.5, size=n)
plt.scatter(x,y)
print("Pearson r: ", stats.pearsonr(x, y)[0])
print("Spearman's rho: ", stats.spearmanr(x, y)[0])
print("Kendall's tau: ", stats.kendalltau(x, y)[0])
pass
pollute_X = np.concatenate([x, stats.nor... | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
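Spearman's rho is literally Pearson's r applied to the ranks, which is why it is invariant under any monotone transformation of either variable. A tie-free sketch (toy data assumed):

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman's rho = Pearson's r computed on the ranks (no tie handling)
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

x = np.array([1., 2., 3., 4., 5.])
y = x ** 3                       # monotone but nonlinear
rho = spearman_rho(x, y)         # the ranks agree perfectly
pearson = np.corrcoef(x, y)[0, 1]
```

Because the relationship is monotone, rho is exactly 1 while Pearson's r falls short of 1.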
Association Between Categorical Variables
A standard approach for testing the independence/dependence of a pair of categorical variables is to use a $\chi^2$ (Chi-square) test of independence.
The null and alternative hypotheses for the $\chi^2$ test are as follows:
$H_0$: the two categorical variables are independen... | # construct a contingency table for the sex and survival categorical variables
# from the bumpus data set
dataurl = "https://github.com/Bio204-class/bio204-datasets/raw/master/bumpus-data.txt"
bumpus = pd.read_table(dataurl)
observed = pd.crosstab(bumpus.survived, bumpus.sex, margins=True)
observed | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Expected counts
If the two categorical variables were independent then we would expect the count in each cell of the contingency table to equal the product of the marginal probabilities times the total number of observations. | nobs = observed.loc['All','All']
prob_female = observed.loc['All','f']/nobs
prob_male = observed.loc['All', 'm']/nobs
prob_surv = observed.loc['T', 'All']/nobs
prob_died = observed.loc['F', 'All']/nobs
expected_counts = []
for i in (prob_died, prob_surv):
    row = []
    for j in (prob_female, prob_male):
        row.ap... | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
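The nested loop builds what is really an outer product: under independence, each expected count is (row margin × column margin) / n. With a toy table (numbers assumed, not the bumpus data), and the resulting $\chi^2$ statistic alongside:

```python
import numpy as np

observed = np.array([[10, 20],    # toy 2x2 contingency table
                     [30, 40]])
n = observed.sum()
row_m = observed.sum(axis=1)       # row margins: [30, 70]
col_m = observed.sum(axis=0)       # column margins: [40, 60]
expected = np.outer(row_m, col_m) / n
chi2 = ((observed - expected) ** 2 / expected).sum()
```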
$\chi^2$ test of independence using scipy.stats | # Same analysis with scipy.stats fxns
observed_nomargin = pd.crosstab(bumpus.survived, bumpus.sex, margins=False)
Chi2, Pval, Dof, Expected = stats.chi2_contingency(observed_nomargin.values,
correction=False)
Chi2, Pval, Dof
bumpus.survived.unique() | 2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Train and score Stepwise Regression model using the data prepared in SAS Studio | # Load action set
sess.loadactionset(actionset="regression")
# Train Logistic Regression
lr=sess.regression.logistic(
table={"name":prepped_data, "caslib":gcaslib},
classVars=[{"vars":class_vars}],
model={
"depVars":[{"name":"b_tgt", "options":{"event":"1"}}],
"effects":[{"vars":class_inputs | interval_i... | overview/vdmml_bank-Python.ipynb | sassoftware/sas-viya-machine-learning | apache-2.0 |
Load the GBM model create in SAS Visual Analytics and score using this model | # 1. Load GBM model (ASTORE) created in VA
sess.loadTable(
caslib="models", path="Gradient_Boosting_VA.sashdat",
casout={"name":"gbm_astore_model","caslib":"casuser", "replace":True}
)
# 2. Score code from VA (for data preparation)
sess.dataStep.runCode(
code="""data bank_part_post;
set bank_part(c... | overview/vdmml_bank-Python.ipynb | sassoftware/sas-viya-machine-learning | apache-2.0 |
Load the Forest model created in SAS Studio and score using this model | # Load action set
sess.loadactionset(actionset="decisionTree")
# Score using forest_model table
sess.decisionTree.forestScore(
table={"name":prepped_data, "caslib":gcaslib},
modelTable={"name":"forest_model", "caslib":"public"},
casOut={"name":"_scored_rf", "replace":True},
copyVars={"account", "b_tgt", "_par... | overview/vdmml_bank-Python.ipynb | sassoftware/sas-viya-machine-learning | apache-2.0 |
Load the SVM model created in SAS Studio and score using this model | # Score using ASTORE
sess.loadactionset(actionset="astore")
sess.astore.score(
table={"name":prepped_data, "caslib":gcaslib},
rstore={"name":"svm_astore_model", "caslib":"public"},
out={"name":"_scored_svm", "replace":True},
copyVars={"account", "_partind_", "b_tgt"}
) | overview/vdmml_bank-Python.ipynb | sassoftware/sas-viya-machine-learning | apache-2.0 |
Assess models from SAS Visual Analytics, SAS Studio and the new models created in Python interface | # Assess models
def assess_model(prefix):
return sess.percentile.assess(
table={
"name":"_scored_" + prefix,
"where": "strip(put(_partind_, best.))='0'"
},
inputs=[{"name":"p_b_tgt1"}],
response="b_tgt",
event="1",
pVar={"p_b_tgt0"},
pEvent={"0"} ... | overview/vdmml_bank-Python.ipynb | sassoftware/sas-viya-machine-learning | apache-2.0 |
Draw Assessment Plots | # Draw ROC charts
plt.figure()
for key, grp in all_rocinfo.groupby(["model"]):
plt.plot(grp["FPR"], grp["Sensitivity"], label=key)
plt.plot([0,1], [0,1], "k--")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.grid(True)
plt.legend(loc="best")
plt.title("ROC Curve (using validation data)")
p... | overview/vdmml_bank-Python.ipynb | sassoftware/sas-viya-machine-learning | apache-2.0 |
Vertex client library: AutoML image object detection model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb">
<img src="http... | import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource serv... | # Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_bounding_box_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trai... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Tutorial
Now you are ready to start creating your own AutoML image object detection model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients ... | # client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client

def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client
... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the... | TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        dataset = aip.Dataset(
            display_name=name, metadata_schema_uri=schema, labels=labels
        )
        operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Data preparation
The Vertex Dataset resource for images has some requirements for your data:
Images must be stored in a Cloud Storage bucket.
Each image file must be in an image format (PNG, JPEG, BMP, ...).
There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image... | IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv" | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Quick peek at your data
You will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. | if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The human readable name you give to the Dataset resource (e.g.... | def import_data(dataset, gcs_sources, schema):
    config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
    print("dataset:", dataset_id)
    start_time = time.time()
    try:
        operation = clients["dataset"].import_data(
            name=dataset_id, import_configs=config
        )
... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Train the model
Now train an AutoML image object detection model using your Vertex Dataset resource. To train the model, do the following steps:
Create a Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline fo... | def create_pipeline(pipeline_name, model_name, dataset, schema, task):
    dataset_id = dataset.split("/")[-1]
    input_config = {
        "dataset_id": dataset_id,
        "fraction_split": {
            "training_fraction": 0.8,
            "validation_fraction": 0.1,
            "test_fraction": 0.1,
        },
... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to ... | PIPE_NAME = "salads_pipe-" + TIMESTAMP
MODEL_NAME = "salads_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"budget_milli_node_hours": 20000,
"model_type": "CLOUD_HIGH_ACCURACY_1",
"disable_early_stopping": False,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, ... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Deployment
Training the above model may take upwards of 60 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline ... | while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            ra... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to ev... | def list_model_evaluations(name):
    response = clients["model"].list_model_evaluations(parent=name)
    for evaluation in response:
        print("model_evaluation")
        print(" name:", evaluation.name)
        print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
        metrics = json_format.MessageToDic... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to crea... | ENDPOINT_NAME = "salads_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
    endpoint = {"display_name": display_name}
    response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
    print("Long running operation:", response.operation.name)
    result = response.result(timeout=300)
... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Now get the unique identifier for the Endpoint resource you created. | # The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id) | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual ... | MIN_NODES = 1
MAX_NODES = 1 | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
d... | DEPLOYED_NAME = "salads_deployed-" + TIMESTAMP
def deploy_model(
    model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
    deployed_model = {
        "model": model,
        "display_name": deployed_model_display_name,
        "automatic_resources": {
            "min_replica_count": MIN_NODE... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Make an online prediction request
Now make an online prediction with your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. | test_items = !gsutil cat $IMPORT_FILE | head -n1
cols = str(test_items[0]).split(",")
if len(cols) == 11:
    test_item = str(cols[1])
    test_label = str(cols[2])
else:
    test_item = str(cols[0])
    test_label = str(cols[1])
print(test_item, test_label) | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filter... | import base64
import tensorflow as tf
def predict_item(filename, endpoint, parameters_dict):
    parameters = json_format.ParseDict(parameters_dict, Value())
    with tf.io.gfile.GFile(filename, "rb") as f:
        content = f.read()
    # The format of each instance should conform to the deployed model's predictio... | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Verte... | def undeploy_model(deployed_model_id, endpoint):
    response = clients["endpoint"].undeploy_model(
        endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
    )
    print(response)
undeploy_model(deployed_model_id, endpoint_id) | notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence fr... | def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionar... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
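A toy illustration of the intended behavior, with hypothetical two-word vocabularies (the words and ids here are assumptions; only the target sequences get the `<EOS>` id appended):

```python
# Hypothetical vocabularies for illustration only
source_vocab_to_int = {'hello': 4, 'world': 5}
target_vocab_to_int = {'<EOS>': 2, 'bonjour': 6, 'monde': 7}

source_text = "hello world"
target_text = "bonjour monde"
source_ids = [[source_vocab_to_int[w] for w in line.split()]
              for line in source_text.split('\n')]
target_ids = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int['<EOS>']]
              for line in target_text.split('\n')]
```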
Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() f... | def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, shape=[None, None], name="input")
    targets = tf.placeholder(tf.int32, shape=[... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. | def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
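The transform itself is easy to see with NumPy outside of TensorFlow: drop each sequence's last id and prepend the `<GO>` id (the token ids below are toy assumptions):

```python
import numpy as np

GO_ID, EOS_ID = 1, 2                     # toy special-token ids (assumptions)
targets = np.array([[4, 5, 6, EOS_ID],
                    [7, 8, 9, EOS_ID]])
# Drop the last id of each sequence, then prepend <GO>
dec_input = np.concatenate(
    [np.full((targets.shape[0], 1), GO_ID), targets[:, :-1]], axis=1)
```

In TensorFlow the same shape manipulation is done with `tf.strided_slice` (or slicing) plus `tf.concat`.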
Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). | def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
cell = tf.contrib.rnn.Basic... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. | def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded ... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). | def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder R... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_le... | def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function... | def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size,
target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers,
target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:pa... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
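The process_decoding_input step referenced above is commonly implemented as dropping the last id of each target sequence and prepending the `<GO>` id — that behaviour is assumed here, and the `go_id` value is hypothetical. A NumPy sketch of just that transformation:

```python
import numpy as np

go_id = 1  # hypothetical value of target_vocab_to_int['<GO>']
target_data = np.array([[4, 5, 6, 3],
                        [7, 8, 9, 3]])

# Strip the last word id from each batch row, then prepend <GO>
ending = target_data[:, :-1]
dec_input = np.concatenate(
    [np.full((target_data.shape[0], 1), go_id), ending], axis=1)
print(dec_input)  # [[1 4 5 6]
                  #  [1 7 8 9]]
```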
Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_siz... | # Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 1000
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.75 | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
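The padding trick used by get_accuracy can be sketched with NumPy alone; the array shapes below are hypothetical stand-ins for a batch of targets and logits:

```python
import numpy as np

# Two batches of sequences with different lengths (hypothetical shapes)
target = np.ones((2, 3), dtype=int)
logits = np.ones((2, 5), dtype=int)

# Pad the shorter array on the right so both share the same sequence length
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
    target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
if max_seq - logits.shape[1]:
    logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1])], 'constant')

print(target.shape, logits.shape)  # both are now (2, 5)
```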
Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id. | def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
word_ids = []
uks_id = vocab_to_int["<UNK>"]
for... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
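A complete version of the function described above might look like this — a minimal sketch, with a made-up toy vocabulary:

```python
def sentence_to_seq(sentence, vocab_to_int):
    # Lowercase the sentence, then map each word to its id,
    # falling back to the <UNK> id for out-of-vocabulary words
    unk_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]

toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a YELLOW truck', toy_vocab))  # [1, 2, 3, 0, 4]
```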
Translate
This will translate translate_sentence from English to French. | translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path +... | language-translation/dlnd_language_translation.ipynb | Bismarrck/deep-learning | mit |
<a id="covariance"></a>
COVARIANCE_SAMP
The COVARIANCE_SAMP function returns the sample covariance of a set of number pairs. | %%sql
SELECT COVARIANCE_SAMP(SALARY, BONUS)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
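The same sample covariance (division by n-1) can be reproduced outside the database with NumPy; the salary/bonus values below are made up:

```python
import numpy as np

salary = [50000.0, 60000.0, 70000.0]
bonus = [500.0, 700.0, 900.0]

# np.cov divides by n-1 by default, matching COVARIANCE_SAMP
sample_cov = np.cov(salary, bonus)[0, 1]
print(sample_cov)  # 2000000.0
```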
<a id="stddev"></a>
STDDEV_SAMP
The STDDEV_SAMP column function returns the sample standard deviation (division by [n-1]) of a set of numbers. | %%sql
SELECT STDDEV_SAMP(SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
<a id="variance"></a>
VARIANCE_SAMP
The VARIANCE_SAMP column function returns the sample variance (division by [n-1]) of a set of numbers. | %%sql
SELECT VARIANCE_SAMP(SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
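Both STDDEV_SAMP and VARIANCE_SAMP divide by n-1; in NumPy that corresponds to `ddof=1` (the data below is illustrative):

```python
import numpy as np

x = [2, 4, 4, 4, 5, 5, 7, 9]

pop_var = np.var(x)           # division by n   (population variance)
samp_var = np.var(x, ddof=1)  # division by n-1 (VARIANCE_SAMP)
samp_std = np.std(x, ddof=1)  # division by n-1 (STDDEV_SAMP)

print(pop_var, samp_var)  # 4.0 4.571428571428571
```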
<a id="median"></a>
MEDIAN
The MEDIAN column function returns the median value in a set of values. | %%sql
SELECT MEDIAN(SALARY) AS MEDIAN, AVG(SALARY) AS AVERAGE
FROM EMPLOYEE
WHERE WORKDEPT = 'E21' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
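The contrast between MEDIAN and AVG that the query highlights is easy to demonstrate with a skewed sample (the salary values here are made up):

```python
import numpy as np

salaries = [31840.0, 35370.0, 39950.0, 43840.0, 91150.0]  # one outlier

print(np.median(salaries))  # 39950.0 -- robust to the outlier
print(np.mean(salaries))    # 48430.0 -- pulled up by it
```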
<a id="cume"></a>
CUME_DIST
The CUME_DIST column function returns the cumulative distribution of a row that is hypothetically inserted into
a group of rows. | %%sql
SELECT CUME_DIST(47000) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
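The hypothetical-insert semantics can be mimicked in plain Python: the candidate value is treated as one extra row, so the denominator becomes n+1. A sketch with toy salaries (not the exact Db2 implementation):

```python
def cume_dist(value, rows):
    # Fraction of rows (including the hypothetical one) that are <= value
    return (sum(1 for r in rows if r <= value) + 1) / (len(rows) + 1)

salaries = [39250, 42180, 46500, 52750, 66500]
print(cume_dist(47000, salaries))  # 0.666... (4 of 6 rows <= 47000)
```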
<a id="rank"></a>
PERCENT_RANK
The PERCENT_RANK column function returns the relative percentile rank of a
row that is hypothetically inserted into a group of rows. | %%sql
SELECT PERCENT_RANK(47000) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'A00' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
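PERCENT_RANK with a hypothetical insert reduces to counting the strictly smaller rows over n, since (rank - 1)/(n + 1 - 1) simplifies that way. A sketch, again not the exact Db2 implementation:

```python
def percent_rank(value, rows):
    # (rank - 1) / (n + 1 - 1) for a value hypothetically inserted into n rows
    return sum(1 for r in rows if r < value) / len(rows)

salaries = [39250, 42180, 46500, 52750, 66500]
print(percent_rank(47000, salaries))  # 0.6 (3 of 5 rows are smaller)
```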
<a id="disc"></a>
PERCENTILE_DISC
The PERCENTILE_DISC/CONT returns the value that corresponds to the specified percentile
given a sort specification by using discrete (DISC) or continuous (CONT) distribution. | %%sql
SELECT PERCENTILE_DISC(0.75) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'E21' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
<a id="cont"></a>
PERCENTILE_CONT
The PERCENTILE_CONT column function returns the value that corresponds to the specified percentile using a continuous (linearly interpolated) distribution. | %%sql
SELECT PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY SALARY)
FROM EMPLOYEE
WHERE WORKDEPT = 'E21' | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
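The difference between the discrete and continuous variants is whether the result must be an existing row value or may be interpolated between two of them. A hand-rolled sketch of both (assuming the standard SQL definitions):

```python
import math

def percentile_disc(values, p):
    # Smallest value whose cumulative distribution is >= p
    s = sorted(values)
    return s[max(math.ceil(p * len(s)) - 1, 0)]

def percentile_cont(values, p):
    # Linear interpolation between the two closest ranks
    s = sorted(values)
    pos = p * (len(s) - 1)
    lo, hi = math.floor(pos), math.ceil(pos)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

data = [10, 20, 30, 40]
print(percentile_disc(data, 0.75))  # 30   -- an actual value from the set
print(percentile_cont(data, 0.75))  # 32.5 -- interpolated
```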
<a id="width"></a>
WIDTH BUCKET and Histogram Example
The WIDTH_BUCKET function is used to create equal-width histograms. Using the EMPLOYEE table,
this SQL will assign a bucket to each employee's salary using a range of 35000 to 100000 divided into 13 buckets. | %%sql
SELECT EMPNO, SALARY, WIDTH_BUCKET(SALARY, 35000, 100000, 13)
FROM EMPLOYEE
ORDER BY EMPNO | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
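WIDTH_BUCKET's behaviour — bucket 0 below the range, num_buckets + 1 at or above the top, equal-width buckets in between — can be modelled directly. A sketch matching the query's arguments:

```python
def width_bucket(value, low, high, num_buckets):
    # Bucket 0 for underflow, num_buckets + 1 for overflow,
    # otherwise equal-width buckets numbered 1..num_buckets
    if value < low:
        return 0
    if value >= high:
        return num_buckets + 1
    return int((value - low) * num_buckets / (high - low)) + 1

print(width_bucket(47000, 35000, 100000, 13))   # 3
print(width_bucket(34000, 35000, 100000, 13))   # 0  (underflow)
print(width_bucket(100000, 35000, 100000, 13))  # 14 (overflow)
```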
We can plot this information by adding some more details to the bucket output. | %%sql -a
WITH BUCKETS(EMPNO, SALARY, BNO) AS
(
SELECT EMPNO, SALARY,
WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET
FROM EMPLOYEE ORDER BY EMPNO
)
SELECT BNO, COUNT(*) AS COUNT FROM BUCKETS
GROUP BY BNO
ORDER BY BNO ASC | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
And here is a plot of the data to make sense of the histogram. | %%sql -pb
WITH BUCKETS(EMPNO, SALARY, BNO) AS
(
SELECT EMPNO, SALARY,
WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET
FROM EMPLOYEE ORDER BY EMPNO
)
SELECT BNO, COUNT(*) AS COUNT FROM BUCKETS
GROUP BY BNO
ORDER BY BNO ASC | Db2 Statistical Functions.ipynb | DB2-Samples/db2jupyter | apache-2.0 |
This Riemann problem has a simple rarefaction-shock structure: a left going rarefaction, a contact, and a right going shock. | from IPython.display import display, display_png
display_png(test_1_rp) | docs/p_v_plots.ipynb | harpolea/r3d2 | mit |
The solution in parameter space shows the curves that can connect to the left and right states. Both a shock and a rarefaction can connect to either state. However, the only intersection (for the central, "star" state) is when a left going rarefaction connects to the left state, and a right going shock connects to the ... | fig = pyplot.figure(figsize=(10,6))
ax = fig.add_subplot(111)
utils.plot_P_v(test_1_rp, ax, fig) | docs/p_v_plots.ipynb | harpolea/r3d2 | mit |
Varying EOS
As noted previously, it's not necessary for the equation of state to match in the different states. Here is a problem with two inert EOSs. This problem is solved by two rarefactions. | gamma_air = 1.4
eos_air = eos_defns.eos_gamma_law(gamma_air)
U_vary_eos_L = State(1.0, -0.5, 0.0, 2.0, eos, label="L")
U_vary_eos_R = State(1.0, +0.5, 0.0, 2.0, eos_air, label="R")
test_vary_eos_rp = RiemannProblem(U_vary_eos_L, U_vary_eos_R)
display_png(test_vary_eos_rp) | docs/p_v_plots.ipynb | harpolea/r3d2 | mit |
The parametric solution in this case shows the pressure decreasing across both curves along rarefactions to get to the star state: | fig2 = pyplot.figure(figsize=(10,6))
ax2 = fig2.add_subplot(111)
utils.plot_P_v(test_vary_eos_rp, ax2, fig2) | docs/p_v_plots.ipynb | harpolea/r3d2 | mit |
Reactive cases
Once reactions start, the behaviour gets more complex, and the parametric pictures can clarify some aspects. An example would be a single deflagration wave that is preceded by a shock to ignite the reaction: | eos = eos_defns.eos_gamma_law(5.0/3.0)
eos_reactive = eos_defns.eos_gamma_law_react(5.0/3.0, 0.1, 1.0, 1.0, eos)
U_reactive_right = State(0.5, 0.0, 0.0, 1.0, eos_reactive)
U_reactive_left = State(0.24316548798524526, -0.39922932397353039, 0.0,
0.61686385086179807, eos)
test_precursor_rp = RiemannPro... | docs/p_v_plots.ipynb | harpolea/r3d2 | mit |
The structure here looks like previous Riemann problems, but there is in fact only one wave. The right state is connected directly to the left state across a compound wave formed from a precursor shock, which raises the temperature until the gas ignites, then a Chapman-Jouget deflagration which is attached to a rarefac... | fig3 = pyplot.figure(figsize=(10,6))
ax3 = fig3.add_subplot(111)
utils.plot_P_v(test_precursor_rp, ax3, fig3) | docs/p_v_plots.ipynb | harpolea/r3d2 | mit |
Let's see the numeric operations | print spam + eggs # sum
print spam - eggs # difference
print spam * eggs # product
print spam / eggs # quotient
print spam // eggs # floored quotient
print spam % eggs # remainder or module
print pow(spam, eggs) # power (yes, this is how a function is called)
print spam ** eggs #... | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Python automatically infers the type of the result depending on operands type | # Let's try again the power
print eggs ** spam
print type(eggs ** spam)
from sys import maxint
print maxint
# Let's instantiate other values
spam = 65L # a long
print spam
print type(spam)
eggs = 2.0 # a float
print eggs
print type(eggs)
spam = 0101 # an integer in octet
print ... | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Use parentheses to alter operations order
SOURCES
http://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex
STRINGS | spam = "spam" # a string
print spam
print type(spam)
eggs = '"eggs"' # another string
print eggs
print type(eggs)
eggs = '\'eggs\'' # another string
print eggs
spam_eggs = "'\tspam\n\teggs'" # another string
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Remember
String literals are written in single or double quotes. It's exactly the same
Backslash \ is the escape character
Escape sequences in strings are interpreted according to rules similar to those used by Standard C | spam_eggs = r"'\tspam\n\teggs'" # a raw string
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Raw strings are prefixed with 'r' or 'R'
Raw strings use different rules for interpreting backslash escape sequences | spam_eggs = u"'\tspam\n\teggs'" # a unicode string
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Unicode strings are prefixed with 'u' or 'U'
- unicode is a basic data type different than str:
- str is ALWAYS an encoded text (although you never notice it)
- unicode uses the Unicode character set as defined by the Unicode Consortium and ISO 10646. We could say unicode strings are encoding-free
- str string liter... | spam_eggs = u"'spam\u0020eggs'" # a unicode string with Unicode-Escape encoding
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
WARNING! Note that in Py3k this approach was radically changed:
- All string literals are unicode by default and its type is 'str'
- Encoded strings are specified with 'b' or 'B' prefix and its type is 'bytes'
- Operations mixing str and bytes always raise a TypeError exception | spam_eggs = """'\tspam
\teggs'""" # another string
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Three single or double quotes also work, and they support multiline text | spam_eggs = "'\tspam" "\n\teggs'" # another string
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Several consecutive string literals are automatically concatenated, even if declared in different consecutive lines
Useful for declaring very long string literals | spam_eggs = u"'\tspam\n \
\teggs'" # a unicode string
print spam_eggs
print type(spam_eggs) | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Let's see strings operations | spam = "spam"
eggs = u'"Eggs"'
print spam.capitalize() # Return a copy with first character in upper case
print spam | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
WARNING! String are immutables. Its methods return always a new copy | print spam.decode() # Decode the str with given or default encoding
print type(spam.decode('utf8'))
print eggs.encode() # Encode the unicode with given or default encoding
print type(eggs.encode('utf8'))
print spam.endswith("am") # Check string suffix (op... | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
NOTE:
The repr module provides a means for producing object representations with limits on the size of the resulting strings. This is used in the Python debugger and may be useful in other contexts as well.
str: Return a string containing a nicely printable representation of an object. For strings, this returns the stri... | print "spam, eggs, foo".split(", ")
print "spam, eggs, foo".split(", ", 1) # Split by given character, returning a list (optionally specify times to split)
print ", ".join(("spam", "eggs", "foo")) # Use string as separator to concatenate an iterable of strings | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Let's format strings | print "%s %s %d" % (spam, spam, 7) # This is the old string formatting, similar to C
print "{0} {0} {1}".format(spam, 7) # This is the new string formatting method, standard in Py3k
print "{} {}".format(spam, 7.12345)
print "[{0:16}|{1:16}]".format(-7.12345, 7.12345) # Use colon and width of formatte... | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
SOURCES:
http://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex
http://docs.python.org/2/reference/lexical_analysis.html#strings
http://docs.python.org/2/reference/lexical_analysis.html#encodings
http://docs.python.org/2/library/stdtypes.html#string-methods
http://docs.python.org/2/library... | # use every cell to run and make the assert pass
# Exercise: write the code
def minimum(a, b):
"""Computes minimum of given 2 numbers.
>>> minimum(2, 3)
2
>>> minimum(8, 5)
5
"""
# your code here
assert minimum(2,3) == 2
def istrcmp(s1, s2):
"""Compare giv... | basic/1_Numbers_Strings.ipynb | ealogar/curso-python | apache-2.0 |
Getting started
The only user-facing function in the module is corner.corner and, in its simplest form, you use it like this: | import corner
import numpy as np
ndim, nsamples = 2, 10000
np.random.seed(42)
samples = np.random.randn(ndim * nsamples).reshape([nsamples, ndim])
figure = corner.corner(samples) | docs/_static/notebooks/quickstart.ipynb | mattpitkin/corner.py | bsd-2-clause |
The following snippet demonstrates a few more bells and whistles: | # Set up the parameters of the problem.
ndim, nsamples = 3, 50000
# Generate some fake data.
np.random.seed(42)
data1 = np.random.randn(ndim * 4 * nsamples // 5).reshape([4 * nsamples // 5, ndim])
data2 = (4*np.random.rand(ndim)[None, :] + np.random.randn(ndim * nsamples // 5).reshape([nsamples // 5, ndim]))
data = np... | docs/_static/notebooks/quickstart.ipynb | mattpitkin/corner.py | bsd-2-clause |
AVL Trees
This notebook implements
AVL trees. The set $\mathcal{A}$ of AVL trees is defined inductively:
$\texttt{Nil} \in \mathcal{A}$.
$\texttt{Node}(k,v,l,r) \in \mathcal{A}\quad$ iff
$\texttt{Node}(k,v,l,r) \in \mathcal{B}$,
$l, r \in \mathcal{A}$, and
$|l.\texttt{height}() - r.\texttt{height}()| \leq 1$.
Ac... | class AVLTree:
def __init__(self):
self.mKey = None
self.mValue = None
self.mLeft = None
self.mRight = None
self.mHeight = 0 | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
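The balancing condition from the inductive definition can be checked on its own. Here trees are modelled as nested (key, left, right) tuples with None for Nil — a simplified stand-in, not the AVLTree class itself:

```python
def height(t):
    # t is None (Nil) or a (key, left, right) tuple
    if t is None:
        return 0
    return 1 + max(height(t[1]), height(t[2]))

def is_avl(t):
    # Every node must satisfy |height(l) - height(r)| <= 1
    if t is None:
        return True
    return (abs(height(t[1]) - height(t[2])) <= 1
            and is_avl(t[1]) and is_avl(t[2]))

balanced = ('b', ('a', None, None), ('c', None, None))
chain = ('a', None, ('b', None, ('c', None, None)))
print(is_avl(balanced), is_avl(chain))  # True False
```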
Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree. | def isEmpty(self):
return self.mHeight == 0
AVLTree.isEmpty = isEmpty | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{find}(k)$ returns the value stored under the key $k$.
The method find is defined inductively as follows:
- $\texttt{Nil}.\texttt{find}(k) = \Omega$,
because the empty tree is interpreted as the empty map.
$\texttt{Node}(k, v, l, r).\texttt{f... | def find(self, key):
if self.isEmpty():
return
elif self.mKey == key:
return self.mValue
elif key < self.mKey:
return self.mLeft.find(key)
else:
return self.mRight.find(key)
AVLTree.find = find | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
The method $\texttt{insert}()$ is specified via recursive equations.
- $\texttt{Nil}.\texttt{insert}(k,v) = \texttt{Node}(k,v, \texttt{Nil}, \texttt{Nil})$,
- $\texttt{Node}(k, v_2, l, r).\texttt{insert}(k,v_1) = \texttt{Node}(k, v_1, l, r)$,
- $k_1 < k_2 \rightarrow
\texttt{Node}(k_2, v_2, l, r).\tex... | def insert(self, key, value):
if self.isEmpty():
self.mKey = key
self.mValue = value
self.mLeft = AVLTree()
self.mRight = AVLTree()
self.mHeight = 1
elif self.mKey == key:
self.mValue = value
elif key < self.mKey:
self.mLeft.insert(key, value)
... | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
The method $\texttt{self}.\texttt{delete}(k)$ removes the key $k$ from the tree $\texttt{self}$. It is defined as follows:
$\texttt{Nil}.\texttt{delete}(k) = \texttt{Nil}$,
$\texttt{Node}(k,v,\texttt{Nil},r).\texttt{delete}(k) = r$,
$\texttt{Node}(k,v,l,\texttt{Nil}).\texttt{delete}(k) = l$,
$l \not= \texttt{Nil} \,\... | def delete(self, key):
if self.isEmpty():
return
if key == self.mKey:
if self.mLeft.isEmpty():
self._update(self.mRight)
elif self.mRight.isEmpty():
self._update(self.mLeft)
else:
self.mRight, self.mKey, self.mValue = self.mRight._delMin()
... | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$
and returns a triple of the form
$$ (\texttt{self}, k_m, v_m) $$
where $\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found and $v_m$ is the ... | def _delMin(self):
if self.mLeft.isEmpty():
return self.mRight, self.mKey, self.mValue
else:
ls, km, vm = self.mLeft._delMin()
self.mLeft = ls
self._restore()
return self, km, vm
AVLTree._delMin = _delMin | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
Given two ordered binary trees $s$ and $t$, the expression $s.\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$. | def _update(self, t):
self.mKey = t.mKey
self.mValue = t.mValue
self.mLeft = t.mLeft
self.mRight = t.mRight
self.mHeight = t.mHeight
AVLTree._update = _update | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
The function $\texttt{restore}(\texttt{self})$ restores the balancing condition of the given binary tree
at the root node and recomputes the variable $\texttt{mHeight}$.
The method $\texttt{restore}$ is specified via conditional equations.
$\texttt{Nil}.\texttt{restore}() = \texttt{Nil}$,
because the empty tree alread... | def _restore(self):
if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1:
self._restoreHeight()
return
if self.mLeft.mHeight > self.mRight.mHeight:
k1, v1, l1, r1 = self.mKey, self.mValue, self.mLeft, self.mRight
k2, v2, l2, r2 = l1.mKey, l1.mValue, l1.mLeft, l1.mRight
i... | Python/Chapter-06/AVL-Trees.ipynb | Danghor/Algorithms | gpl-2.0 |
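The rotations that restore performs can be illustrated on the tuple representation used earlier. A single left rotation — one of the cases, with illustrative names:

```python
def rotate_left(t):
    # (k1, l, (k2, rl, rr))  ->  (k2, (k1, l, rl), rr)
    k1, l, (k2, rl, rr) = t
    return (k2, (k1, l, rl), rr)

chain = ('a', None, ('b', None, ('c', None, None)))
print(rotate_left(chain))  # ('b', ('a', None, None), ('c', None, None))
```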