General Simulation Data: Reports the number of iterations, states, and atoms in each phase. If no checkpoint file is found, the number of atoms is reported as "No Cpt.", since that information is inferred from the checkpoint file; all other information comes from the analysis file.
report.general_simulation_data()
Yank/reports/YANK_Health_Report_Template.ipynb
choderalab/yank
mit
Equilibration How to interpret these plots Shown is the potential energy added up across all replicas (black dots), the moving average (red line), and where we have auto-detected the equilibration (blue line) for each phase. Finally, the total number of decorrelated samples for each phase is attached to each plot. You ...
sams_weights_figure = report.generate_sams_weights_plots()
equilibration_figure = report.generate_equilibration_plots(discard_from_start=1)
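The red moving-average line described above can be sketched independently of YANK. This is a minimal illustration with a uniform window; the 5-point window size is an assumption for illustration, not YANK's choice.

```python
import numpy as np

# Minimal sketch of a moving average like the red line described above.
# The uniform 5-point window is an illustrative assumption.
def moving_average(x, window=5):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode='valid')

energies = np.arange(10.0)  # stand-in for summed potential energies
avg = moving_average(energies)
```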
Additional Decorrelation Analysis The following pie charts break down how many samples were kept and how many were lost to either equilibration or decorrelation. Warnings are shown when the kept fraction falls below a threshold (10% by default).
decorrelation_figure = report.generate_decorrelation_plots(decorrelation_threshold=0.1)
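The bookkeeping behind those pie charts amounts to simple fractions; a sketch with made-up sample counts (all numbers here are hypothetical, not YANK output):

```python
# Hypothetical sample counts, purely for illustration
n_total, n_equilibration, n_decorrelation = 1000, 200, 300
n_kept = n_total - n_equilibration - n_decorrelation
frac_kept = n_kept / n_total
warn = frac_kept < 0.1  # mirrors decorrelation_threshold=0.1 above
```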
RMSD Analysis Trace the RMSD from the initial frame to the end of the simulation for both the ligand and receptor. This is an experimental feature and has been commented out due to instability.
#rmsd_figure = report.compute_rmsds()
Mixing statistics We can analyze the "mixing statistics" of the equilibrated part of the simulation to ensure that the $(X,S)$ chain is mixing reasonably well among the various alchemical states. For information on how this is computed, including how to interpret the Perron Eigenvalue, please see the Mixing Statistics ...
mixing_figure = report.generate_mixing_plot(mixing_cutoff=mixing_cutoff, mixing_warning_threshold=mixing_warning_threshold, cmap_override=None)
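The idea behind the mixing diagnostic can be sketched on a toy state trajectory: estimate a transition matrix from observed state-to-state hops, then look at its eigenvalues. For a row-stochastic matrix the leading eigenvalue is 1, and a subdominant eigenvalue close to 1 signals slow mixing. The state sequence below is made up for illustration and is not YANK's computation.

```python
import numpy as np

# Toy sequence of visited alchemical states for one replica (made up)
states = [0, 1, 0, 1, 2, 1, 0, 1, 2, 2, 1, 0]
n_states = 3

# Count observed transitions and row-normalize into a transition matrix
T = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# Leading eigenvalue of a row-stochastic matrix is 1; the magnitude of
# the next one indicates how quickly the chain mixes among states.
eigs = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
```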
Replica Pseudorandom Walk Examination This section checks to see if all the replicas are exchanging states over the whole thermodynamic state space. This is different from tracking states as any replica is a continuous trajectory of configurations, just undergoing different forces at different times. What do I want to ...
replica_mixing_figure = report.generate_replica_mixing_plot(phase_stacked_replica_plots=phase_stacked_replica_plots)
Free Energy Difference The free energy difference is shown last as the quality of this estimate should be gauged with the earlier sections. Although MBAR provides an estimate of the free energy difference and its error, it is still only an estimate. You should consider if you have a sufficient number of decorrelated sa...
report.generate_free_energy()
Free Energy Trace for Equilibrium Stability The free energy difference alone, even with all the additional information previously, may still be an underestimate of the true free energy. One way to check this is to drop samples from the start and end of the simulation, and re-run the free energy estimate. Ideally, you w...
free_energy_trace_figure = report.free_energy_trace(discard_from_start=1, n_trace=10)
Radially-symmetric restraint energy and distance distributions This plot is generated only if the simulation employs a radially-symmetric restraint (e.g. harmonic, flat-bottom), and the unbias_restraint option of the analyzer was set. What do I want to see here? When unbiasing the restraint, it is important to verify t...
restraint_distribution_figure = report.restraint_distributions_plot()
Execute this block to write out serialized data This is left commented out in the template to prevent it from auto-running with everything else
#report.dump_serial_data('SERIALOUTPUT')
Recode Race and Ethnicity RAC1P Recoded detailed race code:

1. White alone
2. Black or African American alone
3. American Indian alone
4. Alaska Native alone
5. American Indian and Alaska Native tribes specified; or American Indian or Alaska Native, not specified and no other races
6. Asian alone
...
rac1p_map = {
    1: 'white', 2: 'black', 3: 'amind', 4: 'alaskanat', 5: 'aian',
    6: 'asian', 7: 'nhopi', 8: 'other', 9: 'many'
}
pop['race'] = pop.rac1p.astype('category')
pop['race'] = pop.race.cat.rename_categories(rac1p_map)
# The raceeth variable is the race variable, but with 'wh...
census.gov/census.gov-pums-20165/notebooks/Extract.ipynb
CivicKnowledge/metatab-packages
mit
Recode Age Age groups from CHIS:

| | |
|---|---|
| 18-25 YEARS | 1906 |
| 26-29 YEARS | 867 |
| 30-34 YEARS | 1060 |
| 35-39 YEARS | 1074 |
| 40-44 YEARS | 1062 |
| 45-49 YEARS | 1302 |
| 50-54 YEARS | 1621 |
| 55-59 YEARS | 1978 |
| 60-64 YEARS | 2343 |
| 65-69 YEARS | 2170 |
| 70-74 YEARS | 1959 |
| 75-79 YEARS | 1525 |
| 80-84 YEARS | 1125 |
| 85+ YEARS | 1161 |
ages = ['18-25 YEARS', '26-29 YEARS', '30-34 YEARS', '35-39 YEARS', '40-44 YEARS', '45-49 YEARS', '50-54 YEARS', '55-59 YEARS', '60-64 YEARS', '65-69 YEARS', '70-74 YEARS', '75-79 YEARS', '80-84 YEARS', '85+ YEARS'] def extract_age(v): if v.startswith('85'): return pd.Interval(left=85, right=1...
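The truncated extract_age above maps a CHIS label to a pandas Interval. A hedged reconstruction might look like the following; note the upper bound of 120 for '85+ YEARS' is an assumption, not taken from the notebook.

```python
import pandas as pd

# Sketch of the interval recode; the upper bound for '85+ YEARS' (120)
# is an assumption for illustration.
def extract_age(v):
    if v.startswith('85'):
        return pd.Interval(left=85, right=120, closed='both')
    left, right = v.split(' ')[0].split('-')
    return pd.Interval(left=int(left), right=int(right), closed='both')

iv = extract_age('30-34 YEARS')
```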
Recode Poverty Level
povlvls = ['0-99% FPL', '100-199% FPL', '200-299% FPL', '300% FPL AND ABOVE'] pov_index = pd.IntervalIndex( [pd.Interval(left=0, right=99, closed='both'), pd.Interval(left=100, right=199, closed='both'), pd.Interval(left=200, right=299, closed='both'), pd.Interval(left=300, right=501, closed='both')] ) ...
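With the interval index above, percent-of-FPL values can be assigned to the CHIS poverty levels with pd.cut. This sketch reuses the same closed intervals; the sample FPL values are made up.

```python
import pandas as pd

# Sketch: assign percent-of-FPL values to CHIS poverty-level labels
# using the same closed intervals as in the notebook cell above.
povlvls = ['0-99% FPL', '100-199% FPL', '200-299% FPL', '300% FPL AND ABOVE']
pov_index = pd.IntervalIndex([
    pd.Interval(left=0, right=99, closed='both'),
    pd.Interval(left=100, right=199, closed='both'),
    pd.Interval(left=200, right=299, closed='both'),
    pd.Interval(left=300, right=501, closed='both'),
])

fpl = pd.Series([50, 150, 250, 400])          # made-up sample values
codes = pd.cut(fpl, pov_index).cat.codes      # interval position per value
labels = [povlvls[c] for c in codes]
```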
Build the full population set
def build_set(df, rep_no): new_rows = [] for row in df.iterrows(): repl = row[1].at['pwgtp'+str(rep_no)] if repl > 1: new_rows.extend([row]*(repl-1)) return new_rows %time new_rows = build_set(dfx, 1) %time t = dfx.copy().append(new_rows, ignore_index = True...
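The row-by-row loop in build_set can be replaced by a vectorized repeat, which is usually much faster in pandas. In this sketch each record simply appears pwgtp1 times in total; the column name and toy data are assumptions.

```python
import pandas as pd

# Vectorized alternative to the row-by-row loop above: repeat each
# record by its replicate weight ('pwgtp1' and the data are made up).
df = pd.DataFrame({'age': [25, 40], 'pwgtp1': [3, 1]})
expanded = df.loc[df.index.repeat(df['pwgtp1'])].reset_index(drop=True)
```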
What is it? Doc2Vec is an NLP tool for representing documents as a vector and is a generalization of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model. Resources Word2Vec Paper Doc2Vec Paper Dr. Michael D. Lee's Website Lee Corpus IMDB Doc2Ve...
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
lee_test_file = test_data_dir + os.sep + 'lee.cor'
docs/notebooks/doc2vec-lee.ipynb
pombredanne/gensim
lgpl-2.1
Define a Function to Read and Preprocess Text Below, we define a function to open the train/test file (with latin encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a li...
def read_corpus(fname, tokens_only=False): with open(fname, encoding="iso-8859-1") as f: for i, line in enumerate(f): if tokens_only: yield gensim.utils.simple_preprocess(line) else: # For training data, add tags yield gensim.models.doc...
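The pre-processing step described above can be approximated without gensim. This rough stand-in lowercases, drops punctuation, and keeps alphabetic tokens of length 2–15; it is an approximation of gensim.utils.simple_preprocess, not its exact rules.

```python
import re

# Rough stand-in for gensim.utils.simple_preprocess: lowercase, drop
# punctuation, keep alphabetic tokens of length 2-15 (an approximation).
def simple_tokenize(line):
    return [t for t in re.findall(r'[a-z]+', line.lower())
            if 2 <= len(t) <= 15]

tokens = simple_tokenize("Hundreds of people have been forced to vacate.")
```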
Let's take a look at the training corpus
train_corpus[:2]
And the testing corpus looks like this:
print(test_corpus[:2])
Notice that the testing corpus is just a list of lists and does not contain any tags. Training the Model Instantiate a Doc2Vec Object Now, we'll instantiate a Doc2Vec model with a vector size of 50, iterating over the training corpus 10 times. We set the minimum word count to 2 in order to give higher freque...
model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=10)
Build a Vocabulary
model.build_vocab(train_corpus)
Essentially, the vocabulary is a dictionary (accessible via model.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.vocab['penalty'].count for counts for the word penalty). Time to Train This should take no more than 2 minutes
%time model.train(train_corpus)
Inferring a Vector One important thing to note is that you can now infer a vector for any piece of text without having to re-train the model by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.
model.infer_vector(['only', 'you', 'can', 'prevent', 'forrest', 'fires'])
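The cosine similarity mentioned above is a one-line dot-product formula; a self-contained sketch (the vectors here are toys, not real Doc2Vec output):

```python
import numpy as np

# Cosine similarity between two vectors, as used to compare inferred
# document vectors (toy inputs, not real Doc2Vec vectors).
def cosine_sim(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine_sim([1.0, 2.0], [2.0, 4.0])  # parallel vectors -> ~1.0
```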
Assessing Model To assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then return the rank of the document based on self-similarity. Basically, we're pretending as if the training corpus is some new unseen data and ...
ranks = [] second_ranks = [] for doc_id in range(len(train_corpus)): inferred_vector = model.infer_vector(train_corpus[doc_id].words) sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs)) rank = [docid for docid, sim in sims].index(doc_id) ranks.append(rank) second_ranks...
Let's count how each document ranks with respect to the training corpus
collections.Counter(ranks) #96% accuracy
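How an accuracy figure like the "96%" comment is derived from the ranks: rank 0 means a document's inferred vector was most similar to itself. The rank list below is made up to illustrate the arithmetic.

```python
import collections

# Rank 0 = document most similar to itself (the ranks here are made up)
ranks = [0] * 96 + [1] * 4
counter = collections.Counter(ranks)
accuracy = counter[0] / len(ranks)
```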
Basically, greater than 95% of the inferred documents are found to be most similar to themselves, and about 5% of the time a document is mistakenly most similar to another document. This is great and not entirely surprising. We can take a look at an example:
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words))) print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model) for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]: print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].wor...
Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second-ranked documents should be significantly lower (assuming the documents are in fact different), and the reason becomes obvious when we examine the text itself
# Pick a random document from the test corpus and infer a vector from the model doc_id = random.randint(0, len(train_corpus)) # Compare and print the most/median/least similar documents from the train corpus print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words))) sim_id = second_ranks...
Testing the Model Using the same approach above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye.
# Pick a random document from the test corpus and infer a vector from the model doc_id = random.randint(0, len(test_corpus)) inferred_vector = model.infer_vector(test_corpus[doc_id]) sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs)) # Compare and print the most/median/least similar document...
1.2 - Overview of the model Your model will have the following structure:

- Initialize parameters
- Run the optimization loop
  - Forward propagation to compute the loss function
  - Backward propagation to compute the gradients with respect to the loss function
  - Clip the gradients to avoid exploding gradients
  - Using the gradient...
### GRADED FUNCTION: clip def clip(gradients, maxValue): ''' Clips the gradients' values between minimum and maximum. Arguments: gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby" maxValue -- everything above this number is set to this number, and everything...
course-deeplearning.ai/course5-rnn/Week 1/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected output:

| | |
|---|---|
| **gradients["dWaa"][1][2]** | 10.0 |
| **gradients["dWax"][3][1]** | -10.0 |
| **gradients["dWya"][1][2]** | 0.29713815361 |
| ... | ... |
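A minimal numpy sketch of such element-wise clipping, consistent with the expected output above (this is not the course's reference solution, and the toy gradient dictionary is an assumption):

```python
import numpy as np

# Clip every gradient entry into [-maxValue, maxValue], in place.
def clip(gradients, maxValue):
    for key in gradients:
        np.clip(gradients[key], -maxValue, maxValue, out=gradients[key])
    return gradients

grads = {'dWaa': np.array([5.0, 15.0, -20.0])}  # toy values
clipped = clip(grads, 10.0)
```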
# GRADED FUNCTION: sample def sample(parameters, char_to_ix, seed): """ Sample a sequence of characters according to a sequence of probability distributions output of the RNN Arguments: parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. char_to_ix -- python dictio...
Expected output:

| | |
|---|---|
| **Loss** | 126.503975722 |
| **gradients["dWaa"][1][2]** | 0.194709315347 |
| **np.argmax(gradients["dWax"])** | 93 |
| ... | ... |
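The core move inside a sample() routine like the one above is drawing the next character index from the RNN's output distribution; a sketch with a made-up probability vector:

```python
import numpy as np

# Draw the next character index from an output distribution
# (the probabilities here are made up for illustration).
np.random.seed(0)
y = np.array([0.1, 0.7, 0.2])
idx = np.random.choice(range(len(y)), p=y)
```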
# GRADED FUNCTION: model def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27): """ Trains the model and generates dinosaur names. Arguments: data -- text corpus ix_to_char -- dictionary that maps the index to a character char_to_ix -- ...
Vertex SDK: Train and deploy an SKLearn model with pre-built containers (formerly hosted runtimes) Installation Install the Google cloud-storage library as well.
! pip3 install google-cloud-storage
notebooks/community/migration/UJ10 legacy Custom Training Prebuilt Container SKLearn.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend, when possible, choosing the region closest to you.

- Americas: us-central1
- Europe: europe-west4
- Asia Pacific: asia-east1

You cannot use a Multi-Region...
REGION = "us-central1" # @param {type: "string"}
Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex SDK Import the Vertex SDK into our Python environment.
import json
import os
import sys
import time

from googleapiclient import discovery
Vertex constants Setup up the following constants for Vertex: PARENT: The Vertex location root path for dataset, model and endpoint resources.
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Clients The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex). You will use several clients in this tutorial, so set them all up upfront.
client = discovery.build("ml", "v1")
Prepare a trainer script Package assembly
# Make folder for python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ tag_build =\n\ tag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\ setuptools.setup(\n\ install_requires=[\n\ ],\n\ packa...
Task.py contents
%%writefile custom/trainer/task.py # Single Instance Training for Census Income from sklearn.ensemble import RandomForestClassifier import joblib from sklearn.feature_selection import SelectKBest from sklearn.pipeline import FeatureUnion from sklearn.pipeline import Pipeline from sklearn.preprocessing import LabelBina...
Store training script on your Cloud Storage bucket
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz gs://$BUCKET_NAME/census.tar.gz
Train a model projects.jobs.create Request
JOB_NAME = "custom_job_SKL" + TIMESTAMP training_input = { "scaleTier": "BASIC", "packageUris": ["gs://" + BUCKET_NAME + "/census.tar.gz"], "pythonModule": "trainer.task", "args": ["--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME)], "region": REGION, "runtimeVersion": "2.4", "pyth...
Example output: { "uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/jobs?alt=json", "method": "POST", "body": { "jobId": "custom_job_SKL20210302140139", "trainingInput": { "scaleTier": "BASIC", "packageUris": [ "gs://migration-ucaip-trainingaip-20210302140139/censu...
result = request.execute()
Response
print(json.dumps(result, indent=2))
Example output: { "jobId": "custom_job_SKL20210302140139", "trainingInput": { "packageUris": [ "gs://migration-ucaip-trainingaip-20210302140139/census.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL202103021...
# The short numeric ID for the custom training job
custom_training_short_id = result["jobId"]

# The full unique ID for the custom training job
custom_training_id = "projects/" + PROJECT_ID + "/jobs/" + result["jobId"]

print(custom_training_id)
projects.jobs.get Call
request = client.projects().jobs().get(name=custom_training_id)
result = request.execute()
Example output: { "jobId": "custom_job_SKL20210302140139", "trainingInput": { "packageUris": [ "gs://migration-ucaip-trainingaip-20210302140139/census.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL202103021...
while True: response = client.projects().jobs().get(name=custom_training_id).execute() if response["state"] != "SUCCEEDED": print("Training job has not completed:", response["state"]) if response["state"] == "FAILED": break else: break time.sleep(60) # model artifac...
Deploy the model projects.models.create Request
body = {"name": "custom_job_SKL" + TIMESTAMP} request = client.projects().models().create(parent="projects/" + PROJECT_ID) request.body = json.loads(json.dumps(body, indent=2)) print(json.dumps(json.loads(request.to_json()), indent=2)) request = client.projects().models().create(parent="projects/" + PROJECT_ID, body...
Example output: { "uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models?alt=json", "method": "POST", "body": { "name": "custom_job_SKL20210302140139" }, "headers": { "accept": "application/json", "accept-encoding": "gzip, deflate", "user-agent": "(gzip)", "x-goog-ap...
result = request.execute()
Example output: { "name": "projects/migration-ucaip-training/models/custom_job_SKL20210302140139", "regions": [ "us-central1" ], "etag": "Lmd8u9MSSIA=" }
model_id = result["name"]
projects.models.versions.create Request
version = { "name": "custom_job_SKL" + TIMESTAMP, "deploymentUri": model_artifact_dir, "runtimeVersion": "2.1", "framework": "SCIKIT_LEARN", "pythonVersion": "3.7", "machineType": "mls1-c1-m2", } request = client.projects().models().versions().create(parent=model_id) request.body = version pri...
Example output: { "uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_SKL20210302140139/versions?alt=json", "method": "POST", "body": { "name": "custom_job_SKL20210302140139", "deploymentUri": "gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL2021030214013...
result = request.execute()
Example output: { "name": "projects/migration-ucaip-training/operations/create_custom_job_SKL20210302140139_custom_job_SKL20210302140139-1614695138432", "metadata": { "@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata", "createTime": "2021-03-02T14:25:38Z", "operationType": "CREATE_VERSI...
# The full unique ID for the model version model_version_name = result["metadata"]["version"]["name"] print(model_version_name) while True: response = ( client.projects().models().versions().get(name=model_version_name).execute() ) if response["state"] == "READY": print("Model version crea...
Make batch predictions Batch prediction only supports TensorFlow. FRAMEWORK_SCIKIT_LEARN is not currently available. Make online predictions Prepare data item for online prediction
INSTANCES = [ [ 25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States", ], [ 38, "Private", 89814, ...
projects.predict Request
request = client.projects().predict(name=model_version_name)
request.body = json.loads(json.dumps({"instances": INSTANCES}, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))

request = client.projects().predict(
    name=model_version_name, body={"instances": INSTANCES}
)
Example output: { "uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_SKL20210302140139/versions/custom_job_SKL20210302140139:predict?alt=json", "method": "POST", "body": { "instances": [ [ 25, "Private", 226802, "11th", 7, ...
result = request.execute()
Example output: { "predictions": [ false, false, false, false, false, false, false, false, false, false ] } projects.models.versions.delete Request
request = client.projects().models().versions().delete(name=model_version_name)
Call
response = request.execute()
Response
print(json.dumps(response, indent=2))
Example output: { "name": "projects/migration-ucaip-training/operations/delete_custom_job_SKL20210302140139_custom_job_SKL20210302140139-1614695211809", "metadata": { "@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata", "createTime": "2021-03-02T14:26:51Z", "operationType": "DELETE_VERSI...
delete_model = True delete_bucket = True # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model: client.projects().models().delete(name=model_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r gs://$BUCKET_...
Reading in data. Let's search MAST for the long-cadence light curve file of WASP-55 using the lightkurve API, and do some very basic filtering for data quality.
lc = lk.search_lightcurve('EPIC 212300977')[1].download()
lc = lc.remove_nans()
lc = lc[lc.quality == 0]
notebooks/lightkurve.ipynb
OxES/k2sc
gpl-3.0
Let's now try K2SC! As a quick hack for now, let's just clobber the lightkurve object class to our k2sc standalone.
lc.__class__ = k2sc_lc
Now we run with default values! The tqdm progress bar will show a percentage of the maximum iterations of the differential evolution optimizer, but it will usually finish early.
lc.k2sc()
Now we plot! See how the k2sc light curve has much better quality than the uncorrected data. Careful with astropy units - flux and time are dimensionful quantities in lightkurve 2.0, so we have to use .value to render them as numbers.
fig = plt.figure(figsize=(12.0, 8.0))
plt.plot(lc.time.value, lc.flux.value, '.', label="Uncorrected")
detrended = lc.corr_flux - lc.tr_time + np.nanmedian(lc.tr_time)
plt.plot(lc.time.value, detrended.value, '.', label="K2SC")
plt.legend()
plt.xlabel('BJD')
plt.ylabel('Flux')
plt.title('WASP-55', y=1.01)
Now we save the data.
extras = {'CORR_FLUX': lc.corr_flux.value,
          'TR_TIME': lc.tr_time.value,
          'TR_POSITION': lc.tr_position.value}
out = lc.to_fits(extra_data=extras, path='test.fits', overwrite=True)
Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...). 1 - The Happy House For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient ...
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() # Normalize image vectors X_train = X_train_orig/255. X_test = X_test_orig/255. # Reshape Y_train = Y_train_orig.T Y_test = Y_test_orig.T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = "...
DeepLearning/3-ConvolutionalNeuralNetwork/2-DeepCNN_CaseStudy/KerasTutorial/Keras+-+Tutorial+-+Happy+House+v1.ipynb
excelsimon/AI
mit
Details of the "Happy" dataset:

- Images are of shape (64, 64, 3)
- Training: 600 pictures
- Test: 150 pictures

It is now time to solve the "Happy" Challenge. 2 - Building a model in Keras Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results. H...
# GRADED FUNCTION: HappyModel def HappyModel(input_shape): """ Implementation of the HappyModel. Arguments: input_shape -- shape of the images of the dataset Returns: model -- a Model() instance in Keras """ ### START CODE HERE ### # Feel free to use the suggested outline...
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

1. Create the model by calling the function above
2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
3. Train the model on train data by calling model.fi...
### START CODE HERE ### (1 line)
happyModel = None
### END CODE HERE ###
Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.
### START CODE HERE ### (1 line)
None
### END CODE HERE ###
Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
### START CODE HERE ### (1 line)
None
### END CODE HERE ###
Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them. Exercise: Implement step 4, i.e. test/evaluate the model.
### START CODE HERE ### (1 line)
preds = None
### END CODE HERE ###

print()
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets. To pass this assignment, you have to get at least 75% accuracy. To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) wit...
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###

img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))
5 - Other useful functions in Keras (Optional) Two other basic features of Keras that you'll find useful are:

- model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs
- plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like t...
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
DeepLearning/3-ConvolutionalNeuralNetwork/2-DeepCNN_CaseStudy/KerasTutorial/Keras+-+Tutorial+-+Happy+House+v1.ipynb
excelsimon/AI
mit
Data Science Motivation <center><img src="model-inference1.svg"> <center><img src="model-inference2.svg"> What's wrong with statistics Models should not be built for mathematical convenience (e.g. normality assumption), but to most accurately model the data. Pre-specified models, like frequentist statistics, make many...
plot_strats()
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Sharpe Ratio $$\text{Sharpe} = \frac{\text{mean returns}}{\text{volatility}}$$ For daily returns, the ratio is annualized below by multiplying by $\sqrt{252}$.
print("Sharpe ratio strategy etrade =", data_0.mean() / data_0.std() * np.sqrt(252))
print("Sharpe ratio strategy IB =", data_1.mean() / data_1.std() * np.sqrt(252))
plt.title('Sharpe ratio')
plt.xlabel('Sharpe ratio')
plt.axvline(data_0.mean() / data_0.std() * np.sqrt(252), color='b')
plt.axvline(data_1.mean() / dat...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Detour ahead Short primer on random variables Represents our beliefs about an unknown state. Probability distribution assigns a probability to each possible state. Not a single number (e.g. most likely state). You already know what a variable is...
coin = 0  # 0 for tails
coin = 1  # 1 for heads
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
A random variable assigns all possible values a certain probability
# coin = {0: 50%,
#         1: 50%}
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
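The commented-out mapping above is exactly a Bernoulli distribution; a minimal sketch with scipy.stats (the fair coin here is a made-up example, not part of the notebook):

```python
from scipy import stats

# A fair coin as a Bernoulli random variable: P(tails) = P(heads) = 0.5
coin = stats.bernoulli(p=0.5)

print(coin.pmf(0))  # probability of tails
print(coin.pmf(1))  # probability of heads

# Drawing from the random variable yields 0s and 1s with equal probability
samples = coin.rvs(size=1000, random_state=0)
```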
Alternatively: coin ~ Bernoulli(p=0.5) coin is a random variable Bernoulli is a probability distribution ~ reads as "is distributed as" This was discrete (binary), what about the continuous case? returns ~ Normal($\mu$, $\sigma^2$)
from scipy import stats

sns.distplot(data_0, kde=False, fit=stats.norm)
plt.xlabel('returns')
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
How to estimate $\mu$ and $\sigma$? Naive: point estimate Set mu = mean(data) and sigma = std(data) Maximum Likelihood Estimate Correct answer as $n \rightarrow \infty$ Bayesian analysis Most of the time $n \neq \infty$... Uncertainty about $\mu$ and $\sigma$ Turn $\mu$ and $\sigma$ into random variables How to esti...
figsize(7, 7)
from IPython.html.widgets import interact, interactive
from scipy import stats

def gen_plot(n=0, bayes=False):
    np.random.seed(3)
    x = np.random.randn(n)
    ax1 = plt.subplot(221)
    ax2 = plt.subplot(222)
    ax3 = plt.subplot(223)
    #fig, (ax1, ax2, ax3, _) = plt.subplots(2, 2)
    if n > 1:
        ...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
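The naive point estimate described above takes one line each with numpy; the data here are simulated daily returns, not the notebook's strategy returns:

```python
import numpy as np

np.random.seed(0)
data = np.random.normal(loc=0.001, scale=0.02, size=500)  # simulated daily returns

# Naive point estimates; for a normal model these are also the MLE
# (up to the ddof convention used for the variance)
mu_hat = data.mean()
sigma_hat = data.std(ddof=0)  # MLE uses ddof=0; the unbiased estimate uses ddof=1

print(mu_hat, sigma_hat)
```

These single numbers carry no notion of uncertainty, which is exactly what the Bayesian treatment below adds.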
Approximating the posterior with MCMC sampling
def plot_want_get():
    from scipy import stats
    fig = plt.figure(figsize=(14, 6))
    ax1 = fig.add_subplot(121, title='What we want', ylim=(0, .5), xlabel='', ylabel='')
    ax1.plot(np.linspace(-4, 4, 100), stats.norm.pdf(np.linspace(-3, 3, 100)), lw=4.)
    ax2 = fig.add_subplot(122, title='What we get')#, xlim...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Approximating the posterior with MCMC sampling
plot_want_get()
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
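The sampling idea itself can be sketched in a few lines; a toy random-walk Metropolis sampler (not the sampler PyMC3 actually uses) for the posterior of a normal mean with known scale and a flat prior:

```python
import numpy as np

np.random.seed(1)
data = np.random.normal(loc=0.05, scale=1.0, size=200)  # toy observations

def log_post(mu):
    # Flat prior on mu, so the log posterior is the log likelihood up to a constant
    return -0.5 * np.sum((data - mu) ** 2)

mu_current, trace = 0.0, []
for _ in range(5000):
    mu_prop = mu_current + np.random.normal(scale=0.2)  # random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if np.log(np.random.rand()) < log_post(mu_prop) - log_post(mu_current):
        mu_current = mu_prop
    trace.append(mu_current)

posterior_mean = np.mean(trace[1000:])  # discard burn-in
```

The histogram of `trace` is "what we get": samples whose density approximates the posterior we want.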
PyMC3 Probabilistic Programming framework written in Python. Allows for construction of probabilistic models using intuitive syntax. Features advanced MCMC samplers. Fast: Just-in-time compiled by Theano. Extensible: easily incorporates custom MCMC algorithms and unusual probability distributions. Authors: John Salvat...
import theano.tensor as T

x = np.linspace(-.3, .3, 500)
plt.plot(x, T.exp(pm.Normal.dist(mu=0, sd=.1).logp(x)).eval())
plt.title(u'Prior: mu ~ Normal(0, $.1^2$)')
plt.xlabel('mu')
plt.ylabel('Probability Density')
plt.xlim((-.3, .3))

x = np.linspace(-.1, .5, 500)
plt.plot(x, T.exp(pm.HalfNormal.dist(sd=.1).logp(x)...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Bayesian Sharpe ratio $\mu \sim \text{Normal}(0, .1^2)$ $\leftarrow \text{Prior}$ $\sigma \sim \text{HalfNormal}(.1^2)$ $\leftarrow \text{Prior}$ $\text{returns} \sim \text{Normal}(\mu, \sigma^2)$ $\leftarrow \text{Observed!}$ $\text{Sharpe} = \frac{\mu}{\sigma}$ Graphical model of returns <img width=80% src='bayes_for...
print(data_0.head())

from pymc3 import *

with Model() as model:
    # Priors on parameters
    mean_return = Normal('mean return', mu=0, sd=.1)
    volatility = HalfNormal('volatility', sd=.1)

    # Model observed returns as Normal
    obs = Normal('returns', mu=mean_return, sd=vol...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Analyzing the posterior
sns.distplot(results_normal[0]['mean returns'], hist=False, label='etrade')
sns.distplot(results_normal[1]['mean returns'], hist=False, label='IB')
plt.title('Posterior of the mean')
plt.xlabel('mean returns')

sns.distplot(results_normal[0]['volatility'], hist=False, label='etrade')
sns.distplot(results_normal[1]['vo...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Value at Risk with uncertainty
results_normal[0]

import scipy.stats as stats

ppc_etrade = post_pred(var_cov_var_normal, results_normal[0], 1e6, .05, samples=800)
ppc_ib = post_pred(var_cov_var_normal, results_normal[1], 1e6, .05, samples=800)

sns.distplot(ppc_etrade, label='etrade', norm_hist=True, hist=False, color='b')
sns.distplot(ppc_ib, label=...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Interim summary Bayesian stats allows us to reformulate common risk metrics, use priors and quantify uncertainty. IB strategy seems better in almost every regard. Is it though? So far, only added confidence
sns.distplot(results_normal[0]['sharpe'], hist=False, label='etrade')
sns.distplot(results_normal[1]['sharpe'], hist=False, label='IB')
plt.title('Bayesian Sharpe ratio')
plt.xlabel('Sharpe ratio')
plt.axvline(data_0.mean() / data_0.std() * np.sqrt(252), color='b')
plt.axvline(data_1.mean() / data_1.std() * np.sqrt(...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Is this a good model?
sns.distplot(data_1, label='data IB', kde=False, norm_hist=True, color='.5')
for p in ppc_dist_normal:
    plt.plot(x, p, c='r', alpha=.1)
plt.plot(x, p, c='r', alpha=.5, label='Normal model')
plt.xlabel('Daily returns')
plt.legend();
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Can it be improved? Yes! Identical model as before, but instead, use a heavy-tailed T distribution: $ \text{returns} \sim \text{T}(\nu, \mu, \sigma^2)$
sns.distplot(data_1, label='data IB', kde=False, norm_hist=True, color='.5')
for p in ppc_dist_t:
    plt.plot(x, p, c='y', alpha=.1)
plt.plot(x, p, c='y', alpha=.5, label='T model')
plt.xlabel('Daily returns')
plt.legend();
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
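Why the heavy tails matter can be previewed outside the Bayesian model; a sketch with scipy.stats fitting both distributions to simulated returns containing a single crash day (all numbers here are made up):

```python
import numpy as np
from scipy import stats

np.random.seed(2)
data = np.append(np.random.normal(0, 0.01, 200), -0.2)  # one simulated crash day

# Normal fit: the single outlier inflates the scale estimate
mu_n, sigma_n = stats.norm.fit(data)

# T fit: the heavy tails absorb the outlier, so the scale stays near the bulk
df_t, mu_t, sigma_t = stats.t.fit(data)

print(sigma_n, sigma_t)
```

This is the same robustness effect demonstrated with `sim_data` further below.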
Volatility
sns.distplot(results_normal[1]['annual volatility'], hist=False, label='normal')
sns.distplot(results_t[1]['annual volatility'], hist=False, label='T')
plt.xlim((0, 0.2))
plt.xlabel('Posterior of annual volatility')
plt.ylabel('Probability Density');
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Lets compare posteriors of the normal and T model Mean returns
sns.distplot(results_normal[1]['mean returns'], hist=False, color='r', label='normal model')
sns.distplot(results_t[1]['mean returns'], hist=False, color='y', label='T model')
plt.xlabel('Posterior of the mean returns')
plt.ylabel('Probability Density');
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Bayesian T-Sharpe ratio
sns.distplot(results_normal[1]['sharpe'], hist=False, color='r', label='normal model')
sns.distplot(results_t[1]['sharpe'], hist=False, color='y', label='T model')
plt.xlabel('Bayesian Sharpe ratio')
plt.ylabel('Probability Density');
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
But why? T distribution is more robust!
sim_data = list(np.random.randn(75) * .01)
sim_data.append(-.2)
sns.distplot(sim_data, label='data', kde=False, norm_hist=True, color='.5');
sns.distplot(sim_data, label='Normal', fit=stats.norm, kde=False, hist=False, fit_kws={'color': 'r', 'label': 'Normal'});
sns.distplot(sim_data, fit=stats.t, kde=False, hist=False, ...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Estimating tail risk using VaR
ppc_normal = post_pred(var_cov_var_normal, results_normal[1], 1e6, .05, samples=800)
ppc_t = post_pred(var_cov_var_t, results_t[1], 1e6, .05, samples=800)

sns.distplot(ppc_normal, label='Normal', norm_hist=True, hist=False, color='r')
sns.distplot(ppc_t, label='T', norm_hist=True, hist=False, color='y')
plt.legend(loc=...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
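The notebook's `var_cov_var_normal` helper is not shown here, so as an assumption about its shape, here is one common way the parametric (variance-covariance) VaR is computed from a portfolio value and the return distribution's parameters:

```python
import numpy as np
from scipy import stats

def var_cov_var(P, mu, sigma, c=.05):
    """Parametric (variance-covariance) Value at Risk.

    P: portfolio value; mu, sigma: daily return mean and volatility;
    c: tail probability (5% here).
    """
    alpha = stats.norm.ppf(c, mu, sigma)  # the c-quantile of the return distribution
    return P * -alpha  # the loss corresponding to that quantile

# With $1M, zero mean and 1% daily volatility, the 5% one-day VaR is about $16.4k
print(var_cov_var(1e6, 0.0, 0.01, c=.05))
```

Drawing `mu` and `sigma` from the posterior instead of plugging in point estimates is what turns this into the distribution over VaR plotted above.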
Comparing the Bayesian T-Sharpe ratios
sns.distplot(results_t[0]['sharpe'], hist=False, label='etrade')
sns.distplot(results_t[1]['sharpe'], hist=False, label='IB')
plt.xlabel('Bayesian Sharpe ratio')
plt.ylabel('Probability Density')

print('P(Sharpe ratio IB > Sharpe ratio etrade) = %.2f%%' % \
    (np.mean(results_t[1]['sharpe'] > results_t[0]['sharpe'...
research/bayesian_risk_perf_v3.ipynb
bspalding/research_public
apache-2.0
Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.
kids = resp['numkdhh']
kids
code/.ipynb_checkpoints/chap03ex-checkpoint.ipynb
kevntao/ThinkStats2
gpl-3.0
Display the PMF.
pmf = thinkstats2.Pmf(kids)
thinkplot.Pmf(pmf, label='PMF')
thinkplot.Show(xlabel='# of Children', ylabel='PMF')
code/.ipynb_checkpoints/chap03ex-checkpoint.ipynb
kevntao/ThinkStats2
gpl-3.0
Define <tt>BiasPmf</tt>.
def BiasPmf(pmf, label=''):
    """Returns the Pmf with oversampling proportional to value.

    If pmf is the distribution of true values, the result is the
    distribution that would be seen if values are oversampled in
    proportion to their values; for example, if you ask students
    how big their classes are, l...
code/.ipynb_checkpoints/chap03ex-checkpoint.ipynb
kevntao/ThinkStats2
gpl-3.0
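The biasing operation itself is short; a plain-dict sketch of the same idea, independent of thinkstats2 (the class-size numbers are made up for illustration):

```python
def bias_pmf(pmf):
    """Oversample each value x in proportion to x, then renormalize."""
    biased = {x: p * x for x, p in pmf.items()}
    total = sum(biased.values())
    return {x: p / total for x, p in biased.items()}

# Example: class sizes 10 and 50, equally common from the school's view
pmf = {10: 0.5, 50: 0.5}
print(bias_pmf(pmf))  # from the students' view, the size-50 class dominates
```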
The Raw data structure: continuous data This tutorial covers the basics of working with raw EEG/MEG data in Python. It introduces the :class:~mne.io.Raw data structure in detail, including how to load, query, subselect, export, and plot data from a :class:~mne.io.Raw object. For more info on visualization of :class:~mn...
import os

import numpy as np
import matplotlib.pyplot as plt

import mne
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Loading continuous data .. sidebar:: Datasets in MNE-Python There are ``data_path`` functions for several example datasets in MNE-Python (e.g., :func:`mne.datasets.kiloword.data_path`, :func:`mne.datasets.spm_face.data_path`, etc). All of them will check the default download location first to see if the dataset is alre...
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
As you can see above, :func:~mne.io.read_raw_fif automatically displays some information about the file it's loading. For example, here it tells us that there are three "projection items" in the file along with the recorded data; those are :term:SSP projectors &lt;projector&gt; calculated to remove environmental noise ...
print(raw)
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
By default, the :samp:mne.io.read_raw_{*} family of functions will not load the data into memory (instead the data on disk are memory-mapped_, meaning the data are only read from disk as-needed). Some operations (such as filtering) require that the data be copied into RAM; to do that we could have passed the preload=Tr...
raw.crop(tmax=60)
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Querying the Raw object .. sidebar:: Attributes vs. Methods **Attributes** are usually static properties of Python objects — things that are pre-computed and stored as part of the object's representation in memory. Attributes are accessed with the ``.`` operator and do not require parentheses after the attribute name (...
n_time_samps = raw.n_times
time_secs = raw.times
ch_names = raw.ch_names
n_chan = len(ch_names)  # note: there is no raw.n_channels attribute
print('the (cropped) sample data object has {} time samples and {} channels.'
      ''.format(n_time_samps, n_chan))
print('The last time sample is at {} seconds.'.format(time_se...
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
<div class="alert alert-info"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at acquisition time, and should not be changed by the user. There are a few exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but in most cases there are dedicated MNE-Python functio...
print(raw.time_as_index(20))
print(raw.time_as_index([20, 30, 40]), '\n')
print(np.diff(raw.time_as_index([1, 2, 3])))
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
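At its core, `raw.time_as_index` is multiplication by the sampling frequency; a rough numpy sketch (100 Hz is a made-up rate, and MNE's real method additionally handles rounding options and the recording's first sample):

```python
import numpy as np

sfreq = 100.0  # hypothetical sampling frequency in Hz

def time_as_index(times, sfreq):
    # Convert seconds to sample indices, mimicking raw.time_as_index
    return np.round(np.atleast_1d(times) * sfreq).astype(int)

print(time_as_index(20, sfreq))               # [2000]
print(time_as_index([20, 30, 40], sfreq))     # [2000 3000 4000]
print(np.diff(time_as_index([1, 2, 3], sfreq)))  # constant spacing of sfreq samples
```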