Now try a 15th degree polynomial:
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features15 = poly15_data.column_names()
poly15_data['price'] = sales['price']
model15 = graphlab.linear_regression.create(poly15_data, target='price', features=my_features15, validation_set=None)
plt.plot(poly15_data['power_1'], poly15_data['price'], '.',
         poly15_data['power_1'], model15.predict(poly15_data), '-')
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit
What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look. Changing the data and re-learning We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results. To split the sales data into four subsets, we perform the following steps: * First split sales into 2 subsets with .random_split(0.5, seed=0). * Next split the resulting subsets into 2 more subsets each. Use .random_split(0.5, seed=0). We set seed=0 in these steps so that different users get consistent results. You should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.
set_11, set_12 = sales.random_split(.5, seed=0)
set_1, set_2 = set_11.random_split(.5, seed=0)
set_3, set_4 = set_12.random_split(.5, seed=0)
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit
Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
set1_poly15_data = polynomial_sframe(set_1['sqft_living'], 15)
set1_poly15_data['price'] = set_1['price']
set1_model15 = graphlab.linear_regression.create(set1_poly15_data, target='price', features=my_features15, validation_set=None)
set1_model15.get("coefficients")
plt.plot(set1_poly15_data['power_1'], set1_poly15_data['price'], '.',
         set1_poly15_data['power_1'], set1_model15.predict(set1_poly15_data), '-')

set2_poly15_data = polynomial_sframe(set_2['sqft_living'], 15)
set2_poly15_data['price'] = set_2['price']
set2_model15 = graphlab.linear_regression.create(set2_poly15_data, target='price', features=my_features15, validation_set=None)
set2_model15.get("coefficients")
plt.plot(set2_poly15_data['power_1'], set2_poly15_data['price'], '.',
         set2_poly15_data['power_1'], set2_model15.predict(set2_poly15_data), '-')

set3_poly15_data = polynomial_sframe(set_3['sqft_living'], 15)
set3_poly15_data['price'] = set_3['price']
set3_model15 = graphlab.linear_regression.create(set3_poly15_data, target='price', features=my_features15, validation_set=None)
set3_model15.get("coefficients")
plt.plot(set3_poly15_data['power_1'], set3_poly15_data['price'], '.',
         set3_poly15_data['power_1'], set3_model15.predict(set3_poly15_data), '-')

set4_poly15_data = polynomial_sframe(set_4['sqft_living'], 15)
set4_poly15_data['price'] = set_4['price']
set4_model15 = graphlab.linear_regression.create(set4_poly15_data, target='price', features=my_features15, validation_set=None)
set4_model15.get("coefficients")
# Plot the fourth fit as well, as the instructions ask for all four
plt.plot(set4_poly15_data['power_1'], set4_poly15_data['price'], '.',
         set4_poly15_data['power_1'], set4_model15.predict(set4_poly15_data), '-')
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit
Some questions you will be asked on your quiz:

Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?

Quiz Question: (True/False) The plotted fitted lines look the same in all four plots.

Selecting a Polynomial Degree

Whenever we have a "magic" parameter like the degree of the polynomial, there is one well-known way to select it: a validation set. (We will explore another approach in week 4.) We split the sales dataset 3-way into a training set, a test set, and a validation set as follows:

* Split our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).
* Further split our training data into two sets: training and validation. Use random_split(0.5, seed=1).

Again, we set seed=1 to obtain consistent results for different users.
training_validation_set, test_data= sales.random_split(.9,seed=1) train_data, validation_data = training_validation_set.random_split(.5,seed=1)
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit
Next you should write a loop that does the following:

* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))
* Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree
* hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)
* Add train_data['price'] to the polynomial SFrame
* Learn a polynomial regression model of sqft vs price with that degree on TRAIN data
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree; you will need to make a polynomial SFrame using the validation data.
* Report which degree had the lowest RSS on validation data (remember python indexes from 0)

(Note you can turn off the printout of linear_regression.create() with verbose = False)
RSS = {}
for degree in range(1, 16):
    poly_data = polynomial_sframe(train_data['sqft_living'], degree)
    my_features = poly_data.column_names()
    poly_data['price'] = train_data['price']
    train_model = graphlab.linear_regression.create(poly_data, target='price', features=my_features,
                                                    validation_set=None, verbose=False)
    poly_validation = polynomial_sframe(validation_data['sqft_living'], degree)
    # RSS is the sum of SQUARED errors, not the sum of the predictions
    errors = train_model.predict(poly_validation) - validation_data['price']
    RSS[degree] = (errors * errors).sum()

print RSS

# Report the degree with the lowest validation RSS
best_degree = min(RSS, key=RSS.get)
print best_degree, RSS[best_degree]
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit
Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data? Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
poly_data = polynomial_sframe(train_data['sqft_living'], 5)
my_features = poly_data.column_names()
poly_data['price'] = train_data['price']
train_model = graphlab.linear_regression.create(poly_data, target='price', features=my_features, validation_set=None)

poly_test = polynomial_sframe(test_data['sqft_living'], 5)
# RSS on TEST data (sum of squared errors, not the sum of the predictions)
errors = train_model.predict(poly_test) - test_data['price']
print (errors * errors).sum()
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit
Cluster quality metrics evaluated (see Clustering performance evaluation for definitions and discussions of the metrics):

| Shorthand  | Full name                   |
|------------|-----------------------------|
| homo       | homogeneity score           |
| compl      | completeness score          |
| v-meas     | V measure                   |
| ARI        | adjusted Rand index         |
| AMI        | adjusted mutual information |
| silhouette | silhouette coefficient      |

Unsupervised Learning: Dimensionality Reduction and Visualization

Dimensionality Reduction: PCA

Dimensionality reduction derives a set of new artificial features smaller than the original feature set. Here we'll use Principal Component Analysis (PCA), a dimensionality reduction technique that strives to retain most of the variance of the original data. We'll use sklearn.decomposition.PCA on the iris dataset:
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
y = iris.target
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
PCA computes linear combinations of the original features using a truncated Singular Value Decomposition of the matrix X, to project the data onto a basis of the top singular vectors.
from sklearn.decomposition import PCA

pca = PCA(n_components=2, whiten=True)
pca.fit(X)
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Once fitted, PCA exposes the singular vectors in the components_ attribute:
pca.components_
pca.explained_variance_ratio_
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
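As a quick added check (not part of the original notebook), summing the ratios shows how much of the total variance the two retained components keep; for iris this is typically around 0.98:

print(pca.explained_variance_ratio_.sum())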
Let us project the iris dataset along those first two dimensions:
X_pca = pca.transform(X)
X_pca.shape
print(X_pca[2])
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Furthermore, the projected components no longer carry any linear correlation:
import numpy as np

np.corrcoef(X_pca.T)
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
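To verify this claim numerically (an added check), the correlation matrix of the whitened components should equal the identity up to floating-point noise:

import numpy as np
assert np.allclose(np.corrcoef(X_pca.T), np.eye(2))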
With 2 or 3 retained components, PCA is useful for visualizing the dataset:
import matplotlib.pyplot as plt

target_ids = range(len(iris.target_names))
for i, c, label in zip(target_ids, 'rgbcmykw', iris.target_names):
    plt.scatter(X_pca[y == i, 0], X_pca[y == i, 1], c=c, label=label)
plt.legend()
plt.show()
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Visualization with a non-linear embedding: t-SNE

For visualization, more complex embeddings can be useful (for statistical analysis, they are harder to control). sklearn.manifold.TSNE is such a powerful manifold learning method. We apply it to the digits dataset, as the digits are vectors of dimension 8*8 = 64. Embedding them in 2D enables visualization:
from sklearn.datasets import load_digits
digits = load_digits()

# Take the first 500 data points: it's hard to see 1500 points
X = digits.data[:500]
y = digits.target[:500]

# Fit and transform with a TSNE
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
X_2d = tsne.fit_transform(X)

# Visualize the data
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
plt.show()
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
The eigenfaces example: chaining PCA and SVMs

The goal of this example is to show how an unsupervised method and a supervised one can be chained for better prediction. It starts with a didactic but lengthy way of doing things, and finishes with the idiomatic approach to pipelining in scikit-learn.

Here we'll take a look at a simple facial recognition example. Ideally, we would use a dataset consisting of a subset of the Labeled Faces in the Wild data that is available with sklearn.datasets.fetch_lfw_people(). However, this is a relatively large download (~200MB) so we will do the tutorial on a simpler, less rich dataset. Feel free to explore the LFW dataset.
from sklearn import datasets
faces = datasets.fetch_olivetti_faces()
faces.data.shape

from matplotlib import pyplot as plt
fig = plt.figure(figsize=(8, 6))

# plot several images
for i in range(15):
    ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
    ax.imshow(faces.images[i], cmap=plt.cm.bone)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, random_state=0)
print(X_train.shape, X_test.shape)
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
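The idiomatic pipelining approach the text promises can be sketched as follows; the n_components and SVC settings here are illustrative choices, not the tutorial's exact values:

from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Chain unsupervised dimensionality reduction with a supervised classifier
clf = make_pipeline(PCA(n_components=150, whiten=True, random_state=42),
                    SVC(kernel='rbf', class_weight='balanced'))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))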
Spam classifier
import os
import tarfile

files = ["20030228_easy_ham.tar.bz2", "20050311_spam_2.tar.bz2"]
SPAM_PATH = os.path.join("datasets", "spam")

def fetch_spam_data(spam_path=SPAM_PATH):
    # The archives are assumed to have been downloaded into spam_path already
    # (no download URL is defined in this excerpt); this only extracts them.
    for filename in files:
        path = os.path.join(spam_path, filename)
        tar_bz2_file = tarfile.open(path)
        tar_bz2_file.extractall(path=spam_path)
        tar_bz2_file.close()

fetch_spam_data()

HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam_2")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
print(len(ham_filenames), len(spam_filenames))

import email
import email.parser
import email.policy

def load_email(is_spam, filename, spam_path=SPAM_PATH):
    directory = "spam_2" if is_spam else "easy_ham"
    with open(os.path.join(spam_path, directory, filename), "rb") as f:
        return email.parser.BytesParser(policy=email.policy.default).parse(f)

ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
print(spam_emails[6].get_content().strip())
print(ham_emails[1].get_content().strip())

def get_email_structure(email):
    if isinstance(email, str):
        return email
    payload = email.get_payload()
    if isinstance(payload, list):
        return "multipart({})".format(", ".join([
            get_email_structure(sub_email) for sub_email in payload
        ]))
    else:
        return email.get_content_type()

from collections import Counter

def structures_counter(emails):
    structures = Counter()
    for email in emails:
        structure = get_email_structure(email)
        structures[structure] += 1
    return structures

structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, the email structure is useful information to have.

Now let's take a look at the email headers:
for header, value in spam_emails[0].items():
    print(header, ":", value)

spam_emails[0]["Subject"]
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great BeautifulSoup library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment).

The following function first drops the <head> section, then converts all <a> tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes HTML entities (such as &gt; or &nbsp;):
import re
from html import unescape

def html_to_plain_text(html):
    text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
    text = re.sub(r'<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
    text = re.sub('<.*?>', '', text, flags=re.M | re.S)
    text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
    return unescape(text)
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Let's see if it works. This is HTML spam:
# X_train / y_train are assumed to come from an earlier train_test_split
# over the combined ham and spam emails (not shown in this excerpt).
html_spam_emails = [email for email in X_train[y_train == 1]
                    if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")

def email_to_text(email):
    html = None
    for part in email.walk():
        ctype = part.get_content_type()
        if not ctype in ("text/plain", "text/html"):
            continue
        try:
            content = part.get_content()
        except:  # in case of encoding issues
            content = str(part.get_payload())
        if ctype == "text/plain":
            return content
        else:
            html = content
    if html:
        return html_to_plain_text(html)

print(email_to_text(sample_html_spam)[:100], "...")
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit (NLTK). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):
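For reference, the install command referred to above is the standard pip invocation (add the --user flag if you are not inside a virtualenv):

$ pip3 install nltk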
try:
    import nltk
    stemmer = nltk.PorterStemmer()
    for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
        print(word, "=>", stemmer.stem(word))
except ImportError:
    print("Error: stemming requires the NLTK module.")
    stemmer = None

try:
    import urlextract  # may require an Internet connection to download root domain names
    url_extractor = urlextract.URLExtract()
    print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
    print("Error: replacing URLs requires the urlextract module.")
    url_extractor = None
Section 3 - Machine Learning/UnSupervised Learning Algorithm/0. Introduction.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Continuous Target Decoding with SPoC

Source Power Comodulation (SPoC) [1] makes it possible to identify the composition of orthogonal spatial filters that maximally correlate with a continuous target. SPoC can be seen as an extension of CSP to continuous variables.

Here, SPoC is applied to decode the (continuous) fluctuation of an electromyogram from MEG beta activity, using data from the Cortico-Muscular Coherence example of FieldTrip (http://www.fieldtriptoolbox.org/tutorial/coherence).

References

[1] Dahne, S., et al. (2014). SPoC: a novel framework for relating the amplitude of neuronal oscillations to behaviorally relevant parameters. NeuroImage, 86, 111-122.
# Author: Alexandre Barachant <alexandre.barachant@gmail.com>
#         Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)

import matplotlib.pyplot as plt

import mne
from mne import Epochs
from mne.decoding import SPoC
from mne.datasets.fieldtrip_cmc import data_path

from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

# Define parameters
fname = data_path() + '/SubjectCMC.ds'
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 250.).load_data()  # crop for memory purposes

# Filter muscular activity to only keep high frequencies
emg = raw.copy().pick_channels(['EMGlft'])
emg.filter(20., None, fir_design='firwin')

# Filter MEG data to focus on beta band
raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False)
raw.filter(15., 30., fir_design='firwin')

# Build epochs as sliding windows over the continuous raw file
events = mne.make_fixed_length_events(raw, id=1, duration=.250)

# Epoch length is 1.5 second
meg_epochs = Epochs(raw, events, tmin=0., tmax=1.500, baseline=None,
                    detrend=1, decim=8)
emg_epochs = Epochs(emg, events, tmin=0., tmax=1.500, baseline=None)

# Prepare classification
X = meg_epochs.get_data()
y = emg_epochs.get_data().var(axis=2)[:, 0]  # target is EMG power

# Classification pipeline with SPoC spatial filtering and Ridge Regression
spoc = SPoC(n_components=2, log=True, reg='oas', rank='full')
clf = make_pipeline(spoc, Ridge())

# Define a two fold cross-validation
cv = KFold(n_splits=2, shuffle=False)

# Run cross validation
y_preds = cross_val_predict(clf, X, y, cv=cv)

# Plot the True EMG power and the EMG power predicted from MEG data
fig, ax = plt.subplots(1, 1, figsize=[10, 4])
times = raw.times[meg_epochs.events[:, 0] - raw.first_samp]
ax.plot(times, y_preds, color='b', label='Predicted EMG')
ax.plot(times, y, color='r', label='True EMG')
ax.set_xlabel('Time (s)')
ax.set_ylabel('EMG Power')
ax.set_title('SPoC MEG Predictions')
plt.legend()
mne.viz.tight_layout()
plt.show()
0.20/_downloads/cef7da557456c738c3f4db863a633b34/plot_decoding_spoc_CMC.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
API Key

Create an API key in the Ovation application at Account > Settings > API Keys (https://support.ovation.io/article/52-api-overview)

Connection

s is a Session object representing a connection to the Ovation API
s = connect(input("Email: "), api=constants.LAB_STAGING_HOST) # use constants.LAB_PRODUCTION_HOST for production
examples/lab/samples-and-results.ipynb
physion/ovation-python
gpl-3.0
Sample results

results.get_sample_results gets all Workflow Sample Results for the sample, result type, and workflow. If workflow is omitted, all sample results for the given type are returned.

Get the quantification results for a sample:
sample_id = input('Sample Id: ')
# bind to a new name to avoid shadowing the `results` module
quantification_results = results.get_sample_results(s, result_type='quantification', sample_id=sample_id)
examples/lab/samples-and-results.ipynb
physion/ovation-python
gpl-3.0
Vertex client library: Custom training text binary classification model for batch prediction

<table align="left">
  <td>
    <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
    </a>
  </td>
  <td>
    <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
    </a>
  </td>
</table>
<br/><br/><br/>

Overview

This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom text binary classification model for batch prediction.

Dataset

The dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts whether a review is positive or negative in sentiment.

Objective

In this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a batch prediction on the uploaded model. You can alternatively create custom models using the gcloud command-line tool or online using the Google Cloud Console.

The steps performed include:

- Create a Vertex custom job for training a model.
- Train the TensorFlow model.
- Retrieve and load the model artifacts.
- View the model evaluation.
- Upload the model as a Vertex Model resource.
- Make a batch prediction.

Costs

This tutorial uses billable components of Google Cloud (GCP):

- Vertex AI
- Cloud Storage

Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

Installation

Install the latest version of the Vertex client library.
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Model deployment for batch prediction

Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.

For online prediction, you:

1. Create an Endpoint resource for deploying the Model resource to.
2. Deploy the Model resource to the Endpoint resource.
3. Make online prediction requests to the Endpoint resource.

For batch prediction, you:

1. Create a batch prediction job.
2. The job service will provision resources for the batch prediction request.
3. The results of the batch prediction request are returned to the caller.
4. The job service will unprovision the resources for the batch prediction request.

Make a batch prediction request

Now do a batch prediction to your deployed model.

Prepare the request content

Since the dataset is a tf.dataset, which acts as a generator, we must use it as an iterator to access the data items in the test data. We do the following to get a single data item from the test data:

- Set the number of batches to draw per iteration to one using the method take(1).
- Iterate once through the test data -- i.e., we do a break within the for loop.
- In the single iteration, we save the data item, which is in the form of a tuple. The data item will be the first element of the tuple, which you then convert from a tensor to a numpy array -- data[0].numpy().
import tensorflow_datasets as tfds

dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]

# take(1) returns a new dataset that yields a single batch; it must be assigned
test_dataset = test_dataset.take(1)
for data in test_dataset:
    print(data)
    break

test_item = data[0].numpy()
notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the batch input file

Now make a batch input file, which you will store in your Cloud Storage bucket. Each instance in the prediction request is a dictionary entry of the form:

{serving_input: content}

- serving_input: the name of the input layer of the underlying model.
- content: the text data item encoded as an embedding.
import json

import tensorflow as tf

# BUCKET_NAME and serving_input (the model's input layer name) are assumed
# to have been defined in earlier cells of the notebook.
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {serving_input: test_item.tolist()}
    f.write(json.dumps(data) + "\n")
notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make batch prediction request

Now that your batch input file is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:

- display_name: The human readable name for the prediction job.
- model_name: The Vertex fully qualified identifier for the Model resource.
- gcs_source_uri: The Cloud Storage path to the input file -- which you created above.
- gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.
- parameters: Additional filtering parameters for serving prediction results.

The helper function calls the job client service's create_batch_prediction_job method, with the following parameters:

- parent: The Vertex location root path for Dataset, Model and Pipeline resources.
- batch_prediction_job: The specification for the batch prediction job.

Let's now dive into the specification for the batch_prediction_job:

- display_name: The human readable name for the prediction batch job.
- model: The Vertex fully qualified identifier for the Model resource.
- dedicated_resources: The compute resources to provision for the batch prediction job.
  - machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
  - starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
  - max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
- model_parameters: Additional filtering parameters for serving prediction results. No additional parameters are supported for custom models.
- input_config: The input source and format type for the instances to predict.
  - instances_format: The format of the batch prediction request file: csv or jsonl.
  - gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- output_config: The output destination and format for the predictions.
  - prediction_format: The format of the batch prediction response file: csv or jsonl.
  - gcs_destination: The output destination for the predictions.

This call is an asynchronous operation. You will print from the response object a few select fields, including:

- name: The Vertex fully qualified identifier assigned to the batch prediction job.
- display_name: The human readable name for the prediction batch job.
- model: The Vertex fully qualified identifier for the Model resource.
- generate_explanations: Whether True/False explanations were provided with the predictions (explainability).
- state: The state of the prediction job (pending, running, etc).

Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.
BATCH_MODEL = "imdb_batch-" + TIMESTAMP

def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }
    if parameters is None:
        # json_format.ParseDict() requires a dict, not None
        parameters = {}
    batch_prediction_job = {
        "display_name": display_name,
        # Format: 'projects/{project}/locations/{location}/models/{model_id}'
        "model": model_name,
        "model_parameters": json_format.ParseDict(parameters, Value()),
        "input_config": {
            "instances_format": IN_FORMAT,
            "gcs_source": {"uris": [gcs_source_uri]},
        },
        "output_config": {
            "predictions_format": OUT_FORMAT,
            "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
        },
        "dedicated_resources": {
            "machine_spec": machine_spec,
            "starting_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
        },
    }
    response = clients["job"].create_batch_prediction_job(
        parent=PARENT, batch_prediction_job=batch_prediction_job
    )
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:
        print(" generate_explanation:", response.generate_explanation)
    except Exception:
        pass
    print(" state:", response.state)
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", response.labels)
    return response

IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"

response = create_batch_prediction_job(
    BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME
)
notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the predictions

When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.

Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called prediction.results-xxxxx-of-xxxxx.

Now display (cat) the contents. You will see multiple JSON objects, one for each prediction. The response contains a JSON object for each instance, in the form:

- embedding_input: The input for the prediction.
- predictions: The predicted binary sentiment between 0 (negative) and 1 (positive).
import time

def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name"""
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-"):
            if subfolder > latest:
                latest = folder[:-1]
    return latest

while True:
    predictions, state = get_batch_prediction_job(batch_job_id, True)
    if state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", state)
        if state == aip.JobState.JOB_STATE_FAILED:
            raise Exception("Batch Job Failed")
    else:
        folder = get_latest_predictions(predictions)
        ! gsutil ls $folder/prediction.results*
        print("Results:")
        ! gsutil cat $folder/prediction.results*
        print("Errors:")
        ! gsutil cat $folder/prediction.errors*
        break
    time.sleep(60)
notebooks/community/gapic/custom/showcase_custom_text_binary_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Feedback or issues?

For any feedback or questions, please open an issue.

Vertex SDK for Python: Custom Tabular Model Training Example

To use this Colaboratory notebook, you copy the notebook to your own Google Drive and open it with Colaboratory (or Colab). You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Colab automatically displays the return value of the last line in each cell. For more information about running notebooks in Colab, see the Colab welcome page.

This notebook demonstrates how to create a custom model based on a tabular dataset. It requires you to provide a bucket where the dataset CSV will be stored.

Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK.

Install Vertex SDK for Python, Authenticate, and upload a Dataset to your GCS bucket

After the SDK installation the kernel will be automatically restarted. You may see the error message "Your session crashed for an unknown reason", which is normal.
!pip3 uninstall -y google-cloud-aiplatform
!pip3 install google-cloud-aiplatform

import IPython

# Restart the kernel so the freshly installed SDK is picked up;
# the authentication step below belongs in a new cell after the restart.
app = IPython.Application.instance()
app.kernel.do_shutdown(True)

import sys
if "google.colab" in sys.modules:
    from google.colab import auth
    auth.authenticate_user()
notebooks/community/sdk/SDK_End_to_End_Tabular_Custom_Training.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
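Before creating the dataset, the SDK has to be initialized and the CSV uploaded to your bucket; that step is not shown in this excerpt. A minimal sketch, where PROJECT_ID, BUCKET_NAME, and the local abalone_train.csv file are placeholders you must substitute:

from google.cloud import aiplatform

PROJECT_ID = "my-project"       # placeholder: your GCP project id
BUCKET_NAME = "gs://my-bucket"  # placeholder: your staging bucket
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)

# Copy the dataset CSV into the bucket and record its path for the next cell
!gsutil cp abalone_train.csv {BUCKET_NAME}/data/abalone_train.csv
gcs_csv_path = BUCKET_NAME + "/data/abalone_train.csv"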
Create a Managed Tabular Dataset from CSV

A managed dataset can be used to create an AutoML model or a custom model.
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name
notebooks/community/sdk/SDK_End_to_End_Tabular_Custom_Training.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Launch a Training Job to Create a Model

Once the training script is defined, we can create and run a training job to produce a model.
job = aiplatform.CustomTrainingJob(
    display_name="train-abalone-dist-1-replica",
    script_path="training_script.py",
    container_uri="gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest",
    requirements=["gcsfs==0.7.1"],
    model_serving_container_image_uri="gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest",
)

model = job.run(ds, replica_count=1, model_display_name="abalone-model")
notebooks/community/sdk/SDK_End_to_End_Tabular_Custom_Training.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy Model
endpoint = model.deploy(machine_type="n1-standard-4")
notebooks/community/sdk/SDK_End_to_End_Tabular_Custom_Training.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Predict on Endpoint
prediction = endpoint.predict(
    [
        [0.435, 0.335, 0.11, 0.33399999999999996, 0.1355, 0.0775, 0.0965],
        [0.585, 0.45, 0.125, 0.874, 0.3545, 0.2075, 0.225],
    ]
)
prediction
notebooks/community/sdk/SDK_End_to_End_Tabular_Custom_Training.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Q: How does $k$ affect the shape of the boundary between the classes?

Q: How does the algorithm behave in the extreme cases $k=1$ and $k=100$?

(b) Using the function mlutils.knn_eval, plot the training and test errors as functions of the hyperparameter $k\in\{1,\dots,20\}$, for $N=\{100, 500, 1000, 3000\}$ examples. Make 4 separate plots (generate them in a 2x2 grid). In each iteration, print the optimal value of the hyperparameter $k$ (easiest as the plot title; see plt.title).
# Assumes a pylab-style environment (e.g. %pylab inline), so that figure,
# subplot, plot, scatter, legend, title, xticks and grid are in scope.
hiperparams = list(range(1, 21))
N = [100, 500, 1000, 3000]

figure(figsize=(11, 8))
i = 1
for n in N:
    [k, err_tr, err_tst] = knn_eval(n_instances=n, k_range=(1, 20))
    subplot(2, 2, i)
    plot(np.array(hiperparams), err_tr)
    plot(np.array(hiperparams), err_tst)
    scatter(k, err_tst[hiperparams.index(k)])
    legend(['train error', 'test error', 'min test err'], loc='best', prop={'size': 10})
    title('\nN = ' + str(n) + '\nk = ' + str(k))
    xticks(hiperparams)
    grid()
    i += 1
STRUCE/2018/SU-2018-LAB03-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
   count(*)
0        10

Quiz 2 - Temp on Foggy and Nonfoggy Days
import pandas
import pandasql

def max_temp_aggregate_by_fog(filename):
    '''
    This function should run a SQL query on a dataframe of weather data.
    The SQL query should return two columns and two rows - whether it was foggy
    or not (0 or 1) and the max maxtempi for that fog value (i.e., the maximum
    max temperature for both foggy and non-foggy days). The dataframe will be
    titled 'weather_data'. You'll need to provide the SQL query.

    You might also find that interpreting numbers as integers or floats may not
    work initially. In order to get around this issue, it may be useful to cast
    these numbers as integers. This can be done by writing cast(column as integer).
    So for example, if we wanted to cast the maxtempi column as an integer, we would
    actually write something like where cast(maxtempi as integer) = 76, as opposed
    to simply where maxtempi = 76.

    You can see the weather data that we are passing in below:
    https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
    '''
    weather_data = pandas.read_csv(filename)

    q = """
    SELECT fog, MAX(maxtempi)
    FROM weather_data
    GROUP BY fog;
    """

    # Execute your SQL command against the pandas frame
    foggy_days = pandasql.sqldf(q.lower(), locals())

    return foggy_days
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
   fog  max(maxtempi)
0    0             86
1    1             81

Quiz 3 - Mean Temp on Weekends
import pandas
import pandasql

def avg_weekend_temperature(filename):
    '''
    This function should run a SQL query on a dataframe of weather data. The SQL
    query should return one column and one row - the average meantempi on days
    that are a Saturday or Sunday (i.e., the average mean temperature on weekends).
    The dataframe will be titled 'weather_data' and you can access the date in the
    dataframe via the 'date' column.

    You'll need to provide the SQL query.

    You might also find that interpreting numbers as integers or floats may not
    work initially. In order to get around this issue, it may be useful to cast
    these numbers as integers. This can be done by writing cast(column as integer).
    So for example, if we wanted to cast the maxtempi column as an integer, we would
    actually write something like where cast(maxtempi as integer) = 76, as opposed
    to simply where maxtempi = 76.

    Also, you can convert dates to days of the week via the 'strftime' keyword in SQL.
    For example, cast (strftime('%w', date) as integer) will return 0 if the date
    is a Sunday or 6 if the date is a Saturday.

    You can see the weather data that we are passing in below:
    https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
    '''
    weather_data = pandas.read_csv(filename)

    q = """
    SELECT AVG(CAST(meantempi AS int))
    FROM weather_data
    WHERE CAST(strftime('%w', date) AS int) = 0
       OR CAST(strftime('%w', date) AS int) = 6;
    """

    # Execute your SQL command against the pandas frame
    mean_temp_weekends = pandasql.sqldf(q.lower(), locals())

    return mean_temp_weekends
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
More about SQL's CAST function:
1. https://docs.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql
2. https://www.w3schools.com/sql/func_sqlserver_cast.asp

strftime function:
1. https://www.techonthenet.com/sqlite/functions/strftime.php

Quiz 4 - Mean Temp on Rainy Days
import pandas
import pandasql

def avg_min_temperature(filename):
    '''
    This function should run a SQL query on a dataframe of weather data.
    More specifically you want to find the average minimum temperature
    (mintempi column of the weather dataframe) on rainy days where the
    minimum temperature is greater than 55 degrees.

    You might also find that interpreting numbers as integers or floats may not
    work initially. In order to get around this issue, it may be useful to cast
    these numbers as integers. This can be done by writing cast(column as integer).
    So for example, if we wanted to cast the maxtempi column as an integer, we would
    actually write something like where cast(maxtempi as integer) = 76, as opposed
    to simply where maxtempi = 76.

    You can see the weather data that we are passing in below:
    https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
    '''
    weather_data = pandas.read_csv(filename)

    q = """
    SELECT AVG(CAST(mintempi AS int))
    FROM weather_data
    WHERE rain = 1 AND CAST(mintempi AS int) > 55;
    """

    # Execute your SQL command against the pandas frame
    avg_min_temp_rainy = pandasql.sqldf(q.lower(), locals())

    return avg_min_temp_rainy
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
Quiz 5 - Fixing Turnstile Data
import csv

def fix_turnstile_data(filenames):
    '''
    Filenames is a list of MTA Subway turnstile text files. A link to an example
    MTA Subway turnstile text file can be seen at the URL below:
    http://web.mta.info/developers/data/nyct/turnstile/turnstile_110507.txt

    As you can see, there are numerous data points included in each row of the
    MTA Subway turnstile text file. You want to write a function that will update
    each row in the text file so there is only one entry per row. A few examples below:

    A002,R051,02-00-00,05-28-11,00:00:00,REGULAR,003178521,001100739
    A002,R051,02-00-00,05-28-11,04:00:00,REGULAR,003178541,001100746
    A002,R051,02-00-00,05-28-11,08:00:00,REGULAR,003178559,001100775

    Write the updates to a different text file in the format of "updated_" + filename.
    For example:
        1) if you read in a text file called "turnstile_110521.txt"
        2) you should write the updated data to "updated_turnstile_110521.txt"

    The order of the fields should be preserved. Remember to read through the
    Instructor Notes below for more details on the task.

    In addition, here is a CSV reader/writer introductory tutorial:
    http://goo.gl/HBbvyy

    You can see a sample of the turnstile text file that's passed into this function
    and the corresponding updated file by downloading these files from the resources:

    Sample input file: turnstile_110528.txt
    Sample updated file: solution_turnstile_110528.txt
    '''
    for name in filenames:
        # Create a file input object `f_in` for the "name" file and a file
        # output object `f_out` for the new "updated_<name>" file.
        with open(name, 'r') as f_in, open(''.join(['updated_', name]), 'w') as f_out:
            # Create csv readers and writers based on our file objects
            reader_in = csv.reader(f_in)
            writer_out = csv.writer(f_out)

            # The reader lets us go through each row of the input data
            # and access its fields with standard Python syntax.
            for row in reader_in:
                for i in range(3, len(row), 5):
                    writer_out.writerow(row[0:3] + row[i:i+5])

    return None
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
updated_turnstile_110528.txt

A002,R051,02-00-00,05-21-11,00:00:00,REGULAR,003169391,001097585
A002,R051,02-00-00,05-21-11,04:00:00,REGULAR,003169415,001097588
A002,R051,02-00-00,05-21-11,08:00:00,REGULAR,003169431,001097607
A002,R051,02-00-00,05-21-11,12:00:00,REGULAR,003169506,001097686
A002,R051,02-00-00,05-21-11,16:00:00,REGULAR,003169693,001097734
...

Quiz 6 - Combining Turnstile Data
def create_master_turnstile_file(filenames, output_file):
    '''
    Write a function that
    - takes the files in the list filenames, which all have the columns
      'C/A, UNIT, SCP, DATEn, TIMEn, DESCn, ENTRIESn, EXITSn', and
    - consolidates them into one file located at output_file.
    - There should be ONE row with the column headers, located at the top of the file.
    - The input files do not have column header rows of their own.

    For example, if file_1 has:
    line 1 ...
    line 2 ...

    and another file, file_2 has:
    line 3 ...
    line 4 ...
    line 5 ...

    We need to combine file_1 and file_2 into a master_file like below:
    'C/A, UNIT, SCP, DATEn, TIMEn, DESCn, ENTRIESn, EXITSn'
    line 1 ...
    line 2 ...
    line 3 ...
    line 4 ...
    line 5 ...
    '''
    with open(output_file, 'w') as master_file:
        master_file.write('C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn\n')
        for filename in filenames:
            with open(filename, 'r') as content:
                # Append every row read from `content` to `master_file`
                master_file.write(content.read())

    return None
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn
A002,R051,02-00-00,05-21-11,00:00:00,REGULAR,003169391,001097585
A002,R051,02-00-00,05-21-11,04:00:00,REGULAR,003169415,001097588
A002,R051,02-00-00,05-21-11,08:00:00,REGULAR,003169431,001097607
A002,R051,02-00-00,05-21-11,12:00:00,REGULAR,003169506,001097686
...

Quiz 7 - Filtering Irregular Data
import pandas

def filter_by_regular(filename):
    '''
    This function should read the csv file located at filename into a pandas
    dataframe, and filter the dataframe to only rows where the 'DESCn' column
    has the value 'REGULAR'.

    For example, if the pandas dataframe is as follows:
    ,C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn
    0,A002,R051,02-00-00,05-01-11,00:00:00,REGULAR,3144312,1088151
    1,A002,R051,02-00-00,05-01-11,04:00:00,DOOR,3144335,1088159
    2,A002,R051,02-00-00,05-01-11,08:00:00,REGULAR,3144353,1088177
    3,A002,R051,02-00-00,05-01-11,12:00:00,DOOR,3144424,1088231

    The dataframe will look like below after filtering to only rows where DESCn
    column has the value 'REGULAR':
    0,A002,R051,02-00-00,05-01-11,00:00:00,REGULAR,3144312,1088151
    2,A002,R051,02-00-00,05-01-11,08:00:00,REGULAR,3144353,1088177
    '''
    # Use pandas's read_csv function to read the csv file located at filename
    turnstile_data = pandas.read_csv(filename)

    # Use pandas's .loc indexer to select the REGULAR rows
    turnstile_data = turnstile_data.loc[turnstile_data['DESCn'] == 'REGULAR']

    return turnstile_data
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
More detail: the loc() function is a purely label-based indexer for selection by label.
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html
- More about selection by label:
  - https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label

Quiz 8 - Get Hourly Entries
import pandas

def get_hourly_entries(df):
    '''
    The data in the MTA Subway Turnstile data reports on the cumulative number
    of entries and exits per row. Assume that you have a dataframe called df
    that contains only the rows for a particular turnstile machine (i.e., unique
    SCP, C/A, and UNIT). This function should change these cumulative entry
    numbers to a count of entries since the last reading (i.e., entries since
    the last row in the dataframe).

    More specifically, you want to do two things:
       1) Create a new column called ENTRIESn_hourly
       2) Assign to the column the difference between ENTRIESn of the current
          row and the previous row. If there is any NaN, fill/replace it with 1.

    You may find the pandas functions shift() and fillna() to be helpful in
    this exercise.

    Examples of what your dataframe should look like at the end of this exercise:

        C/A  UNIT       SCP     DATEn     TIMEn    DESCn  ENTRIESn   EXITSn  ENTRIESn_hourly
    0  A002  R051  02-00-00  05-01-11  00:00:00  REGULAR   3144312  1088151                1
    1  A002  R051  02-00-00  05-01-11  04:00:00  REGULAR   3144335  1088159               23
    2  A002  R051  02-00-00  05-01-11  08:00:00  REGULAR   3144353  1088177               18
    3  A002  R051  02-00-00  05-01-11  12:00:00  REGULAR   3144424  1088231               71
    4  A002  R051  02-00-00  05-01-11  16:00:00  REGULAR   3144594  1088275              170
    5  A002  R051  02-00-00  05-01-11  20:00:00  REGULAR   3144808  1088317              214
    6  A002  R051  02-00-00  05-02-11  00:00:00  REGULAR   3144895  1088328               87
    7  A002  R051  02-00-00  05-02-11  04:00:00  REGULAR   3144905  1088331               10
    8  A002  R051  02-00-00  05-02-11  08:00:00  REGULAR   3144941  1088420               36
    9  A002  R051  02-00-00  05-02-11  12:00:00  REGULAR   3145094  1088753              153
    10 A002  R051  02-00-00  05-02-11  16:00:00  REGULAR   3145337  1088823              243
    ...
    '''
    # diff() computes the difference to the previous row directly;
    # shift() alone would only return the previous value.
    df['ENTRIESn_hourly'] = df['ENTRIESn'].diff().fillna(1)

    return df
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
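The docstring above hints at shift(); for reference, a sketch of an equivalent formulation that produces the same result as the diff()-based version:

def get_hourly_entries_with_shift(df):
    # x.diff() is the same as x - x.shift(1)
    df['ENTRIESn_hourly'] = (df['ENTRIESn'] - df['ENTRIESn'].shift(1)).fillna(1)
    return df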
Quiz 9 - Get Hourly Exits
import pandas

def get_hourly_exits(df):
    '''
    The data in the MTA Subway Turnstile data reports on the cumulative number
    of entries and exits per row. Assume that you have a dataframe called df
    that contains only the rows for a particular turnstile machine (i.e., unique
    SCP, C/A, and UNIT). This function should change these cumulative exit
    numbers to a count of exits since the last reading (i.e., exits since the
    last row in the dataframe).

    More specifically, you want to do two things:
       1) Create a new column called EXITSn_hourly
       2) Assign to the column the difference between EXITSn of the current row
          and the previous row. If there is any NaN, fill/replace it with 0.

    You may find the pandas functions shift() and fillna() to be helpful in
    this exercise.

    Example dataframe below:

       Unnamed: 0   C/A  UNIT       SCP     DATEn     TIMEn    DESCn  ENTRIESn   EXITSn  ENTRIESn_hourly  EXITSn_hourly
    0           0  A002  R051  02-00-00  05-01-11  00:00:00  REGULAR   3144312  1088151                0              0
    1           1  A002  R051  02-00-00  05-01-11  04:00:00  REGULAR   3144335  1088159               23              8
    2           2  A002  R051  02-00-00  05-01-11  08:00:00  REGULAR   3144353  1088177               18             18
    3           3  A002  R051  02-00-00  05-01-11  12:00:00  REGULAR   3144424  1088231               71             54
    4           4  A002  R051  02-00-00  05-01-11  16:00:00  REGULAR   3144594  1088275              170             44
    5           5  A002  R051  02-00-00  05-01-11  20:00:00  REGULAR   3144808  1088317              214             42
    6           6  A002  R051  02-00-00  05-02-11  00:00:00  REGULAR   3144895  1088328               87             11
    7           7  A002  R051  02-00-00  05-02-11  04:00:00  REGULAR   3144905  1088331               10              3
    8           8  A002  R051  02-00-00  05-02-11  08:00:00  REGULAR   3144941  1088420               36             89
    9           9  A002  R051  02-00-00  05-02-11  12:00:00  REGULAR   3145094  1088753              153            333
    '''
    df['EXITSn_hourly'] = df['EXITSn'].diff().fillna(0)

    return df
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
   Unnamed: 0   C/A  UNIT       SCP     DATEn     TIMEn    DESCn  ENTRIESn   EXITSn  ENTRIESn_hourly  EXITSn_hourly
0           0  A002  R051  02-00-00  05-01-11  00:00:00  REGULAR   3144312  1088151              0.0            0.0
1           1  A002  R051  02-00-00  05-01-11  04:00:00  REGULAR   3144335  1088159             23.0            8.0
2           2  A002  R051  02-00-00  05-01-11  08:00:00  REGULAR   3144353  1088177             18.0           18.0
3           3  A002  R051  02-00-00  05-01-11  12:00:00  REGULAR   3144424  1088231             71.0           54.0
4           4  A002  R051  02-00-00  05-01-11  16:00:00  REGULAR   3144594  1088275            170.0           44.0
...

Quiz 10 - Time to Hour
import pandas

def time_to_hour(time):
    '''
    Given an input variable time that represents time in the format of:
    "00:00:00" (hour:minutes:seconds)

    Write a function to extract the hour part from the input variable time
    and return it as an integer. For example:
        1) if hour is 00, your code should return 0
        2) if hour is 01, your code should return 1
        3) if hour is 21, your code should return 21

    Please return hour as an integer.
    '''
    # Slice the first two characters ("HH") and convert them to an integer.
    hour = int(time[:2])

    return hour
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
   Unnamed: 0  UNIT     DATEn     TIMEn    DESCn  ENTRIESn_hourly  EXITSn_hourly  Hour
0           0  R022  05-01-11  00:00:00  REGULAR              0.0            0.0     0
1           1  R022  05-01-11  04:00:00  REGULAR            562.0          173.0     4
2           2  R022  05-01-11  08:00:00  REGULAR            160.0          194.0     8
3           3  R022  05-01-11  12:00:00  REGULAR            820.0         1025.0    12
4           4  R022  05-01-11  16:00:00  REGULAR           2385.0         1954.0    16
5           5  R022  05-01-11  20:00:00  REGULAR           3631.0         2010.0    20
...

Quiz 11 - Reformat Subway Dates
import datetime

def reformat_subway_dates(date):
    '''
    The dates in our subway data are formatted in the format month-day-year.
    The dates in our weather underground data are formatted year-month-day.
    In order to join these two data sets together, we'll want the dates
    formatted the same way. Write a function that takes as its input a date
    in the MTA Subway data format, and returns a date in the weather
    underground format.

    Hint: There are a couple of useful functions in the datetime library that
    will help on this assignment, called strptime and strftime. More info can
    be seen here and further in the documentation section:
    http://docs.python.org/2/library/datetime.html#datetime.datetime.strptime
    '''
    # Note that the year in the MTA Subway format is a year without century (99, 00, 01)
    date_formatted = datetime.datetime.strptime(date, '%m-%d-%y').strftime('%Y-%m-%d')

    return date_formatted
1-uIDS-quiz/ps2-wrangling-subway-data.ipynb
tanle8/Data-Science
mit
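A quick sanity check of the conversion (added here for illustration):

print(reformat_subway_dates('05-21-11'))  # expected: 2011-05-21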
Summarize

What does the range function do?

Modify

Change the cell below so it prints all $i$ between 0 and 20.
for i in range(11):
    print(i)
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
Implement

Write a loop that calculates the sum:

$$ 1 + 2 + 3 + ... + 1001$$

Loops with conditionals

Predict

What will this code do?
for i in range(10):
    if i > 5:
        print(i)
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
Modify

Change the code below so it goes from 0 to 20 and prints all i less than 8.
for i in range(10):
    if i > 5:
        print(i)
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
Summarize

What does the break keyword do?

Predict

What will this code do?
for i in range(10):
    print("HERE")
    if i > 5:
        continue
    print(i)
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
Summarize

What does the continue keyword do?

Implement

Write a program that starts calculating the sum:

$$1 + 2 + 3 + ... + 100{,}000$$

but stops if the sum is greater than 30,000.
x = 0
for i in range(1, 100001):
    x = x + i
    if x > 30000:
        break
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
While loops

Predict

What will this code do?
x = 1
while x < 10:
    print(x)
    x = x + 1
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
Summarize

How does while work?

Modify

Change the following code so it will print all values of $x^2$ for $x$ between 5 and 11.
x = 0
while x < 20:
    print(x*x)
    x = x + 1
chapters/00_inductive-python/04_loops.ipynb
harmsm/pythonic-science
unlicense
Task: Using plot_3_eigvalueplots, plot the eigenvalues of the system matrix for $n \in [5,10,20]$ and $\sigma = 100$, $\sigma = -100$ and $\sigma = 0$.

Question: How do the spectra of the different system matrices differ?
n = 30
for sigma in [1000, 0, -1000]:
    plot_2_eigvalueplots(system_matrix_hh1d(n, sigma), system_matrix_hh1d_periodic(n, sigma))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Task: Plot the iteration matrix of the weighted Jacobi method for various $\sigma$. To which class does this iteration matrix belong?
matrix_plot(iteration_matrix_wjac(10,-100))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Let's approach the whole thing more systematically!
n = 10
sigma_range = np.linspace(-100, 100, 100)
sr_wjac_periodic = map(lambda sig: spec_rad(iteration_matrix_wjac(n, sig, periodic=True)), sigma_range)
sr_wjac = map(lambda sig: spec_rad(iteration_matrix_wjac(n, sig, periodic=False)), sigma_range)

# keep hold of the axes
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(15, 4))
ax1.plot(sigma_range, sr_wjac_periodic, 'k-')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('periodic')
ax2.plot(sigma_range, sr_wjac, 'k-')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('non-periodic')
ax3.plot(sigma_range, np.abs(np.asarray(sr_wjac) - np.asarray(sr_wjac_periodic)), 'k-')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('difference')
fig1.tight_layout()
plt.show()
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Question: Is the iteration matrix circulant?
matrix_plot(iteration_matrix_gs(10,0,True))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Now the same again, more systematically!
sr_gs_periodic = map(lambda sig: spec_rad(iteration_matrix_gs(n, sig, periodic=True)), sigma_range)
sr_gs = map(lambda sig: spec_rad(iteration_matrix_gs(n, sig, periodic=False)), sigma_range)

fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(15, 4))
ax1.plot(sigma_range, sr_gs_periodic, 'k-')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('periodic')
ax2.plot(sigma_range, sr_gs, 'k-')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('non-periodic')
ax3.plot(sigma_range, np.abs(np.asarray(sr_gs) - np.asarray(sr_gs_periodic)), 'k-')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('difference')
fig1.tight_layout()
plt.show()
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Question: Does the weighted Jacobi smoother behave as predicted in the lecture?

Question: How well do the diagonal values of the Fourier-transformed iteration matrix actually match the eigenvalues of the Gauss-Seidel matrix? What could one do to compare them?
It_gs = iteration_matrix_gs(16, 0)
eigvals = sp.linalg.eigvals(It_gs)
diagonals = get_theta_eigvals(It_gs)
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
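One hedged way to compare the two (an added sketch, assuming eigvals and diagonals from the cell above have the same length) is to sort both spectra by absolute value and look at the largest elementwise gap:

import numpy as np

# Sort both spectra by magnitude and compare elementwise
eig_sorted = np.sort(np.abs(eigvals))
diag_sorted = np.sort(np.abs(diagonals))
print(np.max(np.abs(eig_sorted - diag_sorted)))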
Task: Come up with your own comparison methods and vary $n$, $\sigma$ and the periodicity to find out when the get_theta_eigvals method can estimate the eigenvalues well.

Two-grid iteration matrix
from project.linear_transfer import LinearTransfer
from project.linear_transfer_periodic import LinearTransferPeriodic
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
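The next cells call coarse_grid_correction, the non-periodic counterpart of the periodic version defined further below; its definition is not shown in this excerpt. A minimal sketch, mirroring the periodic variant and assuming system_matrix_hh1d, to_dense, sp and np from earlier cells:

def coarse_grid_correction(n, nc, sigma):
    # Non-periodic coarse-grid correction: I - P * A_c^{-1} * R * A_f
    A_fine = to_dense(system_matrix_hh1d(n, sigma))
    A_coarse = to_dense(system_matrix_hh1d(nc, sigma))
    A_coarse_inv = sp.linalg.inv(A_coarse)
    lin_trans = LinearTransfer(n, nc)
    prolong = to_dense(lin_trans.I_2htoh)
    restrict = to_dense(lin_trans.I_hto2h)
    return np.eye(n) - np.dot(prolong.dot(A_coarse_inv.dot(restrict)), A_fine)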
Colorful-pictures task: Use plot_fourier_transformed to plot, for $n=31$, $n_c=15$ and various $\sigma\in[-1000,1000]$, the coarse-grid correction iteration matrices and their Fourier transforms.
plot_fourier_transformed(coarse_grid_correction(31,15,0))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
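A sketch of the requested sweep over $\sigma$:

for sigma in [-1000, -100, 0, 100, 1000]:
    plot_fourier_transformed(coarse_grid_correction(31, 15, sigma))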
And now the periodic case.
def coarse_grid_correction_periodic(n, nc, sigma):
    A_fine = to_dense(system_matrix_hh1d_periodic(n, sigma))
    A_coarse = to_dense(system_matrix_hh1d_periodic(nc, sigma))
    A_coarse_inv = sp.linalg.inv(A_coarse)
    lin_trans = LinearTransferPeriodic(n, nc)
    prolong = to_dense(lin_trans.I_2htoh)
    restrict = to_dense(lin_trans.I_hto2h)
    return np.eye(n) - np.dot(prolong.dot(A_coarse_inv.dot(restrict)), A_fine)
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Task: Use coarse_grid_correction_periodic for the coarse-grid correction of the periodic problem and plot the matrices and their Fourier transforms again for various $\sigma$. For which $n_f$ and $n_c$ does the coarse-grid correction make sense for a periodic problem? Question: What exactly happens at $\sigma = 0$ and in its vicinity? What else stands out? And what does all this have to do with the lecture? We now use the coarse-grid correction and the iteration matrices of the smoothers to compute the two-grid iteration matrix.
def two_grid_it_matrix(n, nc, sigma, nu1=3, nu2=3, typ='wjac'):
    cg = coarse_grid_correction(n, nc, sigma)
    # compare strings with ==, not the identity operator `is`
    if typ == 'wjac':
        smoother = iteration_matrix_wjac(n, sigma, periodic=False)
    if typ == 'gs':
        smoother = iteration_matrix_gs(n, sigma, periodic=False)
    pre_sm = matrix_power(smoother, nu1)
    post_sm = matrix_power(smoother, nu2)
    return pre_sm.dot(cg.dot(post_sm))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
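two_grid_it_matrix_periodic, needed in a later task, is presumably provided by the full notebook as well; a sketch mirroring the non-periodic version above, using periodic smoothers and the periodic coarse-grid correction:

def two_grid_it_matrix_periodic(n, nc, sigma, nu1=3, nu2=3, typ='wjac'):
    cg = coarse_grid_correction_periodic(n, nc, sigma)
    if typ == 'wjac':
        smoother = iteration_matrix_wjac(n, sigma, periodic=True)
    if typ == 'gs':
        smoother = iteration_matrix_gs(n, sigma, periodic=True)
    pre_sm = matrix_power(smoother, nu1)
    post_sm = matrix_power(smoother, nu2)
    return pre_sm.dot(cg.dot(post_sm))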
Pretty-pictures task: Use plot_fourier_transformed to plot, for $n=15$, $n_c=7$, and various $\sigma\in[-1000,1000]$, the two-grid matrices and their Fourier transforms.
plot_fourier_transformed(two_grid_it_matrix(15,7,0,typ='wjac'))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
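A sketch of the requested $\sigma$ sweep:

for sigma in [-1000, -100, 0, 100, 1000]:
    plot_fourier_transformed(two_grid_it_matrix(15, 7, sigma, typ='wjac'))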
Now in more detail!
sr_2grid_var_sigma = map(lambda sig: spec_rad(two_grid_it_matrix(15, 7, sig)), sigma_range)
plt.semilogy(sigma_range, sr_2grid_var_sigma, 'k-')
plt.title('$n_f = 15, n_c = 7$')
plt.xlabel('$\sigma$')
plt.ylabel("spectral radius")

nf_range = map(lambda k: 2**k - 1, range(3, 10))
nc_range = map(lambda k: 2**k - 1, range(2, 9))
sr_2grid_m1000 = map(lambda nf, nc: spec_rad(two_grid_it_matrix(nf, nc, -1000)), nf_range, nc_range)
sr_2grid_0 = map(lambda nf, nc: spec_rad(two_grid_it_matrix(nf, nc, 0)), nf_range, nc_range)
sr_2grid_p1000 = map(lambda nf, nc: spec_rad(two_grid_it_matrix(nf, nc, 1000)), nf_range, nc_range)
plt.semilogy(nf_range, sr_2grid_m1000, 'k-', nf_range, sr_2grid_0, 'k--', nf_range, sr_2grid_p1000, 'k:')
plt.xlabel('$n_f$')
plt.ylabel("spectral radius")
plt.legend(("$\sigma = -1000$", "$\sigma = 0$", "$\sigma = 1000$"), 'upper right', shadow=True)
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Pretty-pictures task: Use plot_fourier_transformed to plot, for $n=16$, $n_c=8$, and various $\sigma\in[-1000,1000]$, the two-grid matrices and their Fourier transforms.
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,-100,typ='wjac'))
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
Question: What stands out? (Those are the best questions... at least for the teaching assistant.) Task: Use the function two_grid_it_matrix_periodic for the periodic case and plot the spectral radius over $\sigma$, and the spectral radius over $n$ for 3 different $\sigma$. Task: Plot the differences between the periodic and the non-periodic case. Bonus pretty-pictures task: Analogously to the eigenvalue plots of the system matrices, compare the eigenvalue plots of the two-grid iteration matrices. Asymptotic equivalence between periodic and non-periodic We see that the spectral radii agree well at first glance. We now want to investigate empirically whether the matrix classes of the periodic and the non-periodic case might be asymptotically equivalent. As a reminder: Hilbert-Schmidt norm We define the Hilbert-Schmidt norm of a matrix $A \in K^{n \times n}$ as $$ |A| = \left( \frac{1}{n}\sum_{i = 0}^{n-1}\sum_{j = 0}^{n-1} |a_{i,j}|^2 \right)^{1/2}.$$ It holds that 1. $|A| = \left( \frac{1}{n}\operatorname{trace}(A^\ast A) \right)^{1/2}$ 1. $|A| = \left( \frac{1}{n}\sum_{k=0}^{n-1}\lambda_k\right)^{1/2}$, where the $\lambda_k$ are the eigenvalues of $A^\ast A$ 1. $|A| \leq \|A\|$ Asymptotically equivalent sequences of matrices Let ${A_n}$ and ${B_n}$ be sequences of $n\times n$ matrices which are bounded with respect to the strong norm, $$ \|A_n\|,\|B_n\| \leq M < \infty, \quad n=1,2,\ldots $$ and whose difference converges with respect to the weak norm, $$\lim_{n \to \infty} |A_n - B_n| = 0.$$ We call such sequences asymptotically equivalent and write $A_n \sim B_n$. For ${A_n}$, ${B_n}$ and ${C_n}$, which have the eigenvalues ${\alpha_{n,i}}$, ${\beta_{n,i}}$ and ${\zeta_{n,i}}$ respectively, the following relations hold. If $A_n \sim B_n$, then $\lim_{n \to \infty} |A_n| = \lim_{n \to \infty} |B_n|$ If $A_n \sim B_n$ and $B_n \sim C_n$, then $A_n \sim C_n$ If $A_nB_n \sim C_n$ and $\|A_n^{-1}\| \leq K < \infty$, then $B_n \sim A_n^{-1}C_n$ If $A_n \sim B_n$, then there exist $-\infty < m, M < \infty$ such that $m \leq \alpha_{n,i}, \beta_{n,i} \leq M \;\forall n \geq 1$ and $i \geq 0$ If $A_n \sim B_n$, then $\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} (\alpha_{n,k}^s - \beta_{n,k}^s) = 0$ Task: Write a function hs_norm that computes the Hilbert-Schmidt norm, in at most 3 lines. Hint: 'numpy norm frobenius' -> search engine. Task: Check empirically whether the system matrix classes, smoothing iteration matrix classes, coarse-grid correction matrix classes, and two-grid iteration matrix classes are asymptotically equivalent for $\sigma = \{ -1000, 0.001, 1000 \}$. System matrices:
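The comparison below needs hs_norm first. A minimal sketch, assuming numpy's Frobenius norm as per the hint (one possible solution, not necessarily the notebook's own):

def hs_norm(A):
    # Hilbert-Schmidt norm: Frobenius norm scaled by 1/sqrt(n)
    return np.linalg.norm(A, 'fro') / np.sqrt(A.shape[0])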
n_range = np.arange(10, 100)
hs_sysmat_m1000 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n, -1000)) - to_dense(system_matrix_hh1d_periodic(n, -1000))), n_range)
hs_sysmat_0 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n, 0.001)) - to_dense(system_matrix_hh1d_periodic(n, 0.001))), n_range)
hs_sysmat_p1000 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n, 1000)) - to_dense(system_matrix_hh1d_periodic(n, 1000))), n_range)
plt.plot(hs_sysmat_m1000)
plt.plot(hs_sysmat_0)
plt.plot(hs_sysmat_p1000)
notebooks/InteraktivesUebungsblatt.ipynb
Parallel-in-Time/pyMG-2016
bsd-2-clause
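The same empirical check, sketched for the smoother iteration matrix classes (assuming the iteration matrices are returned as dense arrays; the coarse-grid correction and two-grid classes work analogously):

hs_wjac = map(lambda n: hs_norm(iteration_matrix_wjac(n, 0.001, periodic=False) - iteration_matrix_wjac(n, 0.001, periodic=True)), n_range)
hs_gs = map(lambda n: hs_norm(iteration_matrix_gs(n, 0.001, periodic=False) - iteration_matrix_gs(n, 0.001, periodic=True)), n_range)
plt.plot(n_range, hs_wjac, 'k-', n_range, hs_gs, 'k--')
plt.xlabel('$n$')
plt.ylabel('Hilbert-Schmidt norm of difference')
plt.legend(('weighted Jacobi', 'Gauss-Seidel'))
plt.show()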
Independent Graph Assumption
vectorized = np.reshape(X, (X.shape[0]**2, X.shape[2])).T
covar = np.cov(vectorized)

from mpl_toolkits.axes_grid1 import make_axes_locatable

# create an axes on the right side of ax. The width of cax will be 5%
# of ax and the padding between cax and ax will be fixed at 0.05 inch.
plt.figure(figsize=(7,7))
ax = plt.gca()
im = ax.imshow(covar/100000, interpolation='None')
im.set_clim([0, np.max(covar/100000)])
plt.title('Covariance of KKI2009 dataset')
plt.xticks((0, 20, 41), ('1', '21', '42'))
plt.yticks((0, 20, 41), ('1', '21', '42'))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
plt.tight_layout()
plt.savefig('../figs/graphs_covariance.png')
plt.show()

diag = covar.diagonal()*np.eye(covar.shape[0])
hollow = covar - diag
d_det = np.linalg.det(diag)
h_det = np.linalg.det(hollow)

plt.figure(figsize=(11,8))
plt.subplot(121)
plt.imshow(diag/100000, interpolation='None')
plt.clim([0, np.max(covar/100000)])
plt.title('On-Diagonal Covariance')
plt.xticks((0, 20, 41), ('1', '21', '42'))
plt.yticks((0, 20, 41), ('1', '21', '42'))
plt.subplot(122)
plt.imshow(hollow/100000, interpolation='None')
plt.clim([0, np.max(covar/100000)])
plt.title('Off-Diagonal Covariance')
plt.xticks((0, 20, 41), ('1', '21', '42'))
plt.yticks((0, 20, 41), ('1', '21', '42'))
plt.tight_layout()
plt.show()

print "Ratio of on- and off-diagonal determinants: " + str(d_det/h_det)
code/test_assumptions.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
From the above, we conclude that the assumption that the graphs were independent is false. This is because the off-diagonal components of the covariance are highly significant in the cross-graph covariance matrix. Identical Graph Assumption
import sklearn.mixture

i = np.linspace(1, 15, 15, dtype='int')
print i
bic = np.array(())
for idx in i:
    print "Fitting and evaluating model with " + str(idx) + " clusters."
    gmm = sklearn.mixture.GMM(n_components=idx, n_iter=1000, covariance_type='diag')
    gmm.fit(vectorized.T)
    bic = np.append(bic, gmm.bic(vectorized.T))
plt.figure(figsize=(8,6))
plt.plot(i, 10000/bic)
plt.title('Clustering of KKI2009 Graphs with GMM')
plt.ylabel('BIC score')
plt.xlabel('number of clusters')
plt.tight_layout()
plt.savefig('../figs/graphs_identical.png')
plt.show()
print bic
code/test_assumptions.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
From the above we observe that, since the elbow of the BIC curve lies at 6, our data may not have been sampled identically from one distribution. Based on the evidence provided, this assumption is also false. Independent Edge Assumption
vect = np.reshape(X, (X.shape[0]**2, X.shape[2]))
covar = np.cov(vect)

plt.figure(figsize=(7,7))
ax = plt.gca()
im = ax.imshow(np.log(covar/100000+1), interpolation='None')
im.set_clim([0, np.max(np.log(covar/100000))])
plt.title('Covariance of Edges')
# plt.xticks((0, 20, 41), ('1', '21', '42'))
# plt.yticks((0, 20, 41), ('1', '21', '42'))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
plt.tight_layout()
plt.savefig('../figs/edges_covariance.png')
plt.show()

diag = covar.diagonal()*np.eye(covar.shape[0])
hollow = covar - diag
d_det = np.linalg.det(diag)
h_det = np.linalg.det(hollow)

plt.figure(figsize=(11,8))
plt.subplot(121)
plt.imshow(np.log(diag/100000+1), interpolation='None')
plt.clim([0, np.max(np.log(covar/100000+1))])
plt.title('On-Diagonal Covariance')
# plt.xticks((0, 20, 41), ('1', '21', '42'))
# plt.yticks((0, 20, 41), ('1', '21', '42'))
plt.subplot(122)
plt.imshow(np.log(hollow/100000+1), interpolation='None')
plt.clim([0, np.max(np.log(covar/100000+1))])
plt.title('Off-Diagonal Covariance')
# plt.xticks((0, 20, 41), ('1', '21', '42'))
# plt.yticks((0, 20, 41), ('1', '21', '42'))
plt.show()

print "Ratio of on- and off-diagonal determinants: " + str(d_det/h_det)
code/test_assumptions.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
From the above, we can conclude that the edges are not independent of one another, as the ratio of on- to off-diagonal covariance determinants is very small. This assumption is false. Identical Edge Assumption
import sklearn.mixture

i = np.linspace(1, 15, 15, dtype='int')
print i
bic = np.array(())
for idx in i:
    print "Fitting and evaluating model with " + str(idx) + " clusters."
    gmm = sklearn.mixture.GMM(n_components=idx, n_iter=1000, covariance_type='diag')
    gmm.fit(vect.T)
    bic = np.append(bic, gmm.bic(vect.T))
plt.figure(figsize=(8,6))
plt.plot(i, 10000/bic)
plt.title('Clustering of KKI2009 Edges with GMM')
plt.ylabel('BIC score')
plt.xlabel('number of clusters')
plt.tight_layout()
plt.savefig('../figs/edges_identical.png')
plt.show()
print bic
code/test_assumptions.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
From the above we can see quite plainly that our classifier fails to separate subjects based on their edge probability. Thus, this assumption is also false. Class covariances are different
g_0 = np.zeros((70, 70, sum([1 if x == 0 else 0 for x in y])))
g_1 = np.zeros((70, 70, sum([1 if x == 1 else 0 for x in y])))
c0 = 0
c1 = 0
for idx, val in enumerate(y):
    if val == 0:
        g_0[:,:,c0] = X[:,:,idx]  # index class 0 with c0 (the original used c1 here by mistake)
        c0 += 1
    else:
        g_1[:,:,c1] = X[:,:,idx]
        c1 += 1
print g_0.shape
print g_1.shape

vect_0 = np.reshape(g_0, (g_0.shape[0]**2, g_0.shape[2]))
covar_0 = np.cov(vect_0)
vect_1 = np.reshape(g_1, (g_1.shape[0]**2, g_1.shape[2]))
covar_1 = np.cov(vect_1)

plt.figure(figsize=(10,20))
plt.subplot(311)
plt.imshow(np.log(covar_0+1))
plt.clim([0, np.log(np.max(np.max(covar_0)))])
plt.title('Covariance of class 0')
plt.subplot(312)
plt.imshow(np.log(covar_1+1))
plt.clim([0, np.log(np.max(np.max(covar_0)))])
plt.title('Covariance of class 1')
plt.subplot(313)
plt.imshow(np.log(abs(covar_1 - covar_0 + 1)))
plt.clim([0, np.log(np.max(np.max(covar_0)))])
plt.title('Difference in covariances')
plt.savefig('../figs/class_covariance.png')
plt.show()
code/test_assumptions.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
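A scalar summary of the gap between the two class covariances can complement the pictures; a rough sketch using the arrays computed above:

diff = np.linalg.norm(covar_1 - covar_0)  # Frobenius norm of the difference
print "Covariance difference (Frobenius): " + str(diff)
print "Relative to class 0: " + str(diff / np.linalg.norm(covar_0))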
quantulum3 quantulum3 is a Python package "for information extraction of quantities from unstructured text".
#!pip3 install quantulum3
from quantulum3 import parser

for sent in sentences:
    print(sent)
    p = parser.parse(sent)
    if p:
        print('\tSpoken:', parser.inline_parse_and_expand(sent))
        print('\tNumeric elements:')
        for q in p:
            display(q)
            print('\t\t{} :: {}'.format(q.surface, q))
    print('\n---------\n')
notebooks/Quantity Parsing.ipynb
psychemedia/parlihacks
mit
Finding quantity statements in large texts If we have a large block of text, we might want to quickly skim it for quantity-containing sentences. We can do something like the following...
import spacy
nlp = spacy.load('en_core_web_lg', disable=['ner'])

text = '''
Once upon a time, there was a thing. The thing weighed forty kilogrammes and cost £250. It was blue.
It took forty five minutes to get it home. What a day that was.
I didn't get back until 2.15pm. Then I had some cake for tea.
'''

doc = nlp(text)
for sent in doc.sents:
    print(sent)

for sent in doc.sents:
    sent = sent.text
    p = parser.parse(sent)
    if p:
        print('\tSpoken:', parser.inline_parse_and_expand(sent))
        print('\tNumeric elements:')
        for q in p:
            display(q)
            print('\t\t{} :: {}'.format(q.surface, q))
    print('\n---------\n')
notebooks/Quantity Parsing.ipynb
psychemedia/parlihacks
mit
Annotating a dataset Can we extract numbers from sentences in a CSV file? Yes, we can...
url = 'https://raw.githubusercontent.com/BBC-Data-Unit/unduly-lenient-sentences/master/ULS%20for%20Sankey.csv'

import pandas as pd
df = pd.read_csv(url)
df.head()

# get a row
df.iloc[1]

# and a, erm, sentence...
df.iloc[1]['Original sentence (refined)']

parser.parse(df.iloc[1]['Original sentence (refined)'])

def amountify(txt):
    # txt may be some flavour of nan...
    # handle scruffily for now...
    try:
        if txt:
            p = parser.parse(txt)
            x = []
            for q in p:
                x.append('{} {}'.format(q.value, q.unit.name))
            return '::'.join(x)
        return ''
    except:
        return

df['amounts'] = df['Original sentence (refined)'].apply(amountify)
df.head()
notebooks/Quantity Parsing.ipynb
psychemedia/parlihacks
mit
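As a sketch of one follow-up, assuming a pandas version that provides DataFrame.explode, the '::'-joined amounts can be split into one row per amount:

amounts_long = df.assign(amount=df['amounts'].str.split('::')).explode('amount')
amounts_long[['Original sentence (refined)', 'amount']].head()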
We could then split multiple amounts into multiple rows, as sketched above, or into multiple columns. Parsing Semi-Structured Sentences The sentencing sentences look to have a reasonable degree of structure to them (or at least, there are some commonalities in the way some of them are structured). We can exploit this structure by writing some more specific pattern matches to pull out even more information.
df['Original sentence (refined)'][:20].apply(print);
notebooks/Quantity Parsing.ipynb
psychemedia/parlihacks
mit
activate - make magics use the current view
%%px
%matplotlib inline
from pylab import *
plot([1,2,9],[2,9,3],'o-')

%pxresult
MPI/ipyparallel_mpi_tests.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
@dview.remote
@dview.remote(block=True)
def getpid():
    import os
    return os.getpid()

getpid()

@dview.remote(block=True)
def iterate_logistic(a, N, Niter):
    import numpy as np
    x = np.random.random(N)
    for i in range(Niter):
        x = a*x*(1.0-x)
    return x

iterate_logistic(4.0, 1, 10000)
MPI/ipyparallel_mpi_tests.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
@dview.parallel
import numpy as np

x = np.random.random((2**12))
print(x.nbytes/1024**2, x.size)
a = np.ones_like(x)*4.0

@dview.parallel(block=True)
def iterate_logistic(x, a):
    import numpy as np
    x = np.copy(x)
    for i in range(100000):
        x = a*x*(1.0-x)
    return x

%%time
for i in range(100000):
    x = a*x*(1.0-x)

%time x = iterate_logistic(x, a)

dview?

import ipyparallel.client.remotefunction
ipyparallel.client.remotefunction.ParallelFunction?
MPI/ipyparallel_mpi_tests.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
Configure the mpi profile first: http://ipyparallel.readthedocs.io/en/latest/process.html#parallel-process
c = ipp.Client(profile='mpi')
c.ids
c[:].apply_sync(lambda : "Hello, World")
MPI/ipyparallel_mpi_tests.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
Also of interest, connecting through an SSH tunnel: c = Client('/path/to/my/ipcontroller-client.json', sshserver='me@myhub.example.com')
import mpi4py

%%px
%%writefile psum.py
from mpi4py import MPI
import numpy as np

def psum(a):
    locsum = np.sum(a)
    rcvBuf = np.array(0.0,'d')
    MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE], [rcvBuf, MPI.DOUBLE], op=MPI.SUM)
    return rcvBuf

view = c[:]
view.activate()
view.run('psum.py')
view.scatter('a', np.arange(16, dtype='float'))
view['a']

%px totalsum = psum(a)
view['totalsum']

%%writefile psum.py
from mpi4py import MPI
import numpy as np
import time

def psum(a):
    locsum = np.sum(a)
    rcvBuf = np.array(0.0,'d')
    MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE], [rcvBuf, MPI.DOUBLE], op=MPI.SUM)
    return rcvBuf

rank = MPI.COMM_WORLD.Get_rank()
size = MPI.COMM_WORLD.Get_size()

if rank == 0:
    a = np.arange(16, dtype='float')
else:
    a = None

a_local = np.empty(16/size, dtype='float')
MPI.COMM_WORLD.Scatter(a, a_local)

time.sleep(rank*0.1)
print(rank, ":::", a_local)
totalsum = psum(a_local)
print(totalsum)

!mpirun -n 4 /opt/conda/envs/py27/bin/python psum.py
MPI/ipyparallel_mpi_tests.ipynb
marcinofulus/ProgramowanieRownolegle
gpl-3.0
Logistic regression using statsmodels For our first model, we will see how the statsmodels module works. By default, the logit regression in statsmodels has no beta zero (intercept): we therefore have to add it ourselves.
# First step for this module: we have to add the beta zero (the intercept) by hand
data['intercept'] = 1.0
data.rename(columns={'default payment next month': "Y"}, inplace=True)
data.columns

# variable = ['AGE', 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4',
#             'BILL_AMT5', 'BILL_AMT6', 'LIMIT_BAL', 'PAY_0',
#             'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6', 'PAY_AMT1', 'PAY_AMT2',
#             'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6', 'SEX_1',
#             'EDUCATION_0', 'EDUCATION_1', 'EDUCATION_2', 'EDUCATION_3',
#             'EDUCATION_4', 'EDUCATION_5', 'MARRIAGE_0', 'MARRIAGE_1',
#             'MARRIAGE_2', 'intercept']

train_cols = ["SEX_1", "AGE", "MARRIAGE_0", 'PAY_0', 'intercept']

logit = sm.Logit(data['Y'], data[train_cols].astype(float))

# fit the model
result = logit.fit()
print(result.summary())
_unittests/ut_helpgen/data_gallery/notebooks/competitions/2016/td2a_eco_competition_modeles_logistiques.ipynb
sdpython/pyquickhelper
mit
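To make the fitted coefficients easier to read, one might exponentiate them into odds ratios and inspect a few fitted probabilities; a sketch using the result object above:

import numpy as np
print(np.exp(result.params))  # coefficients are log-odds; exp() gives odds ratios
print(result.predict(data[train_cols].astype(float))[:5])  # fitted default probabilities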
Add magmoms to the initial structure Here we define magnetic moments for the individual species present in the structure, if they are not already present. Refer to the pymatgen docs for more information on the options available for the argument overwrite_magmom_mode. Here we add magmoms for all sites in the structure, irrespective of the input structure, which is suitable for a spin-polarized (a.k.a. 'magnetic') calculation. This is particularly useful when attempting either a ferromagnetic or an antiferromagnetic calculation.
magmom = CollinearMagneticStructureAnalyzer(structure, overwrite_magmom_mode="replace_all_if_undefined")
fm_structure = magmom.get_ferromagnetic_structure(make_primitive=True)  # Assume an initial ferromagnetic order
print(fm_structure)
order = magmom.ordering  # Useful if magnetic order is unknown or not user-defined
print(order)
notebooks/2021-08-26-Magnetic Structure Generation as Input for Initial DFT Calculations.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
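For the antiferromagnetic case in particular, pymatgen also offers MagneticStructureEnumerator to generate candidate orderings. A sketch (this assumes the enumlib executables are installed and that the API of the installed pymatgen version matches):

from pymatgen.analysis.magnetism.analyzer import MagneticStructureEnumerator
enumerator = MagneticStructureEnumerator(structure, strategies=("ferromagnetic", "antiferromagnetic"))
for candidate in enumerator.ordered_structures[:3]:
    print(CollinearMagneticStructureAnalyzer(candidate).ordering)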
Get space group information
spa = SpacegroupAnalyzer(structure)
spa.get_point_group_symbol()
spa.get_space_group_symbol()
fm_structure.to(filename="lfp.mcif")  # Save the structure in magCIF format.
spn_structure = magmom.get_structure_with_spin()  # Returns spin-decorated values in the structure instead of magmom site properties
print(spn_structure)
notebooks/2021-08-26-Magnetic Structure Generation as Input for Initial DFT Calculations.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
The above structure is saved as a magCIF with the .mcif extension. This can be converted back to a CIF with the relevant magnetic information associated with each site. OpenBabel does this easily; on the command line, run: obabel -imcif lfp.mcif -ocif -O lfp.cif Analyze magnetic moment present in a calculated structure using MAPI In some cases, it might be useful to analyze the magnetic behavior of a structure from the Materials Project database.
# Establish rester for accessing Materials API
mpr = MPRester(api_key='API_KEY')

mp_id = 'mp-504263'  # Previously reported structure; Co replaced at Fe site
structure_from_mp = mpr.get_structure_by_material_id(mp_id)
print(structure_from_mp)  # print the structure just fetched (the original printed the local `structure` instead)

mgmmnt = CollinearMagneticStructureAnalyzer(structure_from_mp, overwrite_magmom_mode="replace_all_if_undefined")
mgmmnt.is_magnetic
mgmmnt.magnetic_species_and_magmoms
mgmmnt.ordering
notebooks/2021-08-26-Magnetic Structure Generation as Input for Initial DFT Calculations.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
Configuration
max_length = 128  # Maximum length of input sentence to the model.
batch_size = 32
epochs = 2

# Labels in our dataset.
labels = ["contradiction", "entailment", "neutral"]
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Load the Data
!curl -LO https://raw.githubusercontent.com/MohamadMerchant/SNLI/master/data.tar.gz
!tar -xvzf data.tar.gz

# There are more than 550k samples in total; we will use 100k for this example.
train_df = pd.read_csv("SNLI_Corpus/snli_1.0_train.csv", nrows=100000)
valid_df = pd.read_csv("SNLI_Corpus/snli_1.0_dev.csv")
test_df = pd.read_csv("SNLI_Corpus/snli_1.0_test.csv")

# Shape of the data
print(f"Total train samples : {train_df.shape[0]}")
print(f"Total validation samples: {valid_df.shape[0]}")
print(f"Total test samples: {test_df.shape[0]}")  # the original mistakenly reported valid_df here
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Dataset Overview: sentence1: The premise caption that was supplied to the author of the pair. sentence2: The hypothesis caption that was written by the author of the pair. similarity: This is the label chosen by the majority of annotators. Where no majority exists, the label "-" is used (we will skip such samples here). Here are the "similarity" label values in our dataset: Contradiction: The sentences share no similarity. Entailment: The sentences have similar meaning. Neutral: The sentences are neutral. Let's look at one sample from the dataset:
print(f"Sentence1: {train_df.loc[1, 'sentence1']}") print(f"Sentence2: {train_df.loc[1, 'sentence2']}") print(f"Similarity: {train_df.loc[1, 'similarity']}")
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Preprocessing
# We have some NaN entries in our train data; we will simply drop them.
print("Number of missing values")
print(train_df.isnull().sum())
train_df.dropna(axis=0, inplace=True)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Distribution of our training targets.
print("Train Target Distribution") print(train_df.similarity.value_counts())
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Distribution of our validation targets.
print("Validation Target Distribution") print(valid_df.similarity.value_counts())
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
The value "-" appears as part of our training and validation targets. We will skip these samples.
train_df = (
    train_df[train_df.similarity != "-"]
    .sample(frac=1.0, random_state=42)
    .reset_index(drop=True)
)
valid_df = (
    valid_df[valid_df.similarity != "-"]
    .sample(frac=1.0, random_state=42)
    .reset_index(drop=True)
)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
One-hot encode training, validation, and test labels.
train_df["label"] = train_df["similarity"].apply( lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2 ) y_train = tf.keras.utils.to_categorical(train_df.label, num_classes=3) valid_df["label"] = valid_df["similarity"].apply( lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2 ) y_val = tf.keras.utils.to_categorical(valid_df.label, num_classes=3) test_df["label"] = test_df["similarity"].apply( lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2 ) y_test = tf.keras.utils.to_categorical(test_df.label, num_classes=3)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Keras Custom Data Generator
class BertSemanticDataGenerator(tf.keras.utils.Sequence):
    """Generates batches of data.

    Args:
        sentence_pairs: Array of premise and hypothesis input sentences.
        labels: Array of labels.
        batch_size: Integer batch size.
        shuffle: boolean, whether to shuffle the data.
        include_targets: boolean, whether to include the labels.

    Returns:
        Tuples `([input_ids, attention_mask, token_type_ids], labels)`
        (or just `[input_ids, attention_mask, token_type_ids]`
        if `include_targets=False`)
    """

    def __init__(
        self,
        sentence_pairs,
        labels,
        batch_size=batch_size,
        shuffle=True,
        include_targets=True,
    ):
        self.sentence_pairs = sentence_pairs
        self.labels = labels
        self.shuffle = shuffle
        self.batch_size = batch_size
        self.include_targets = include_targets
        # Load our BERT Tokenizer to encode the text.
        # We will use the bert-base-uncased pretrained model.
        self.tokenizer = transformers.BertTokenizer.from_pretrained(
            "bert-base-uncased", do_lower_case=True
        )
        self.indexes = np.arange(len(self.sentence_pairs))
        self.on_epoch_end()

    def __len__(self):
        # Denotes the number of batches per epoch.
        return len(self.sentence_pairs) // self.batch_size

    def __getitem__(self, idx):
        # Retrieves the batch at the given index.
        indexes = self.indexes[idx * self.batch_size : (idx + 1) * self.batch_size]
        sentence_pairs = self.sentence_pairs[indexes]

        # With the BERT tokenizer's batch_encode_plus, both sentences of a pair
        # are encoded together, separated by the [SEP] token.
        encoded = self.tokenizer.batch_encode_plus(
            sentence_pairs.tolist(),
            add_special_tokens=True,
            max_length=max_length,
            return_attention_mask=True,
            return_token_type_ids=True,
            pad_to_max_length=True,
            return_tensors="tf",
        )

        # Convert batch of encoded features to numpy array.
        input_ids = np.array(encoded["input_ids"], dtype="int32")
        attention_masks = np.array(encoded["attention_mask"], dtype="int32")
        token_type_ids = np.array(encoded["token_type_ids"], dtype="int32")

        # Set to true if data generator is used for training/validation.
        if self.include_targets:
            labels = np.array(self.labels[indexes], dtype="int32")
            return [input_ids, attention_masks, token_type_ids], labels
        else:
            return [input_ids, attention_masks, token_type_ids]

    def on_epoch_end(self):
        # Shuffle indexes after each epoch if shuffle is set to True.
        if self.shuffle:
            np.random.RandomState(42).shuffle(self.indexes)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Build the model.
# Create the model under a distribution strategy scope.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Encoded token ids from BERT tokenizer.
    input_ids = tf.keras.layers.Input(
        shape=(max_length,), dtype=tf.int32, name="input_ids"
    )
    # Attention masks indicate to the model which tokens should be attended to.
    attention_masks = tf.keras.layers.Input(
        shape=(max_length,), dtype=tf.int32, name="attention_masks"
    )
    # Token type ids are binary masks identifying different sequences in the model.
    token_type_ids = tf.keras.layers.Input(
        shape=(max_length,), dtype=tf.int32, name="token_type_ids"
    )
    # Loading pretrained BERT model.
    bert_model = transformers.TFBertModel.from_pretrained("bert-base-uncased")
    # Freeze the BERT model to reuse the pretrained features without modifying them.
    bert_model.trainable = False

    bert_output = bert_model(
        input_ids, attention_mask=attention_masks, token_type_ids=token_type_ids
    )
    sequence_output = bert_output.last_hidden_state
    pooled_output = bert_output.pooler_output
    # Add trainable layers on top of frozen layers to adapt the pretrained features on the new data.
    bi_lstm = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)
    )(sequence_output)
    # Applying hybrid pooling approach to bi_lstm sequence output.
    avg_pool = tf.keras.layers.GlobalAveragePooling1D()(bi_lstm)
    max_pool = tf.keras.layers.GlobalMaxPooling1D()(bi_lstm)
    concat = tf.keras.layers.concatenate([avg_pool, max_pool])
    dropout = tf.keras.layers.Dropout(0.3)(concat)
    output = tf.keras.layers.Dense(3, activation="softmax")(dropout)
    model = tf.keras.models.Model(
        inputs=[input_ids, attention_masks, token_type_ids], outputs=output
    )

    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss="categorical_crossentropy",
        metrics=["acc"],
    )

print(f"Strategy: {strategy}")
model.summary()
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Create train and validation data generators
train_data = BertSemanticDataGenerator(
    train_df[["sentence1", "sentence2"]].values.astype("str"),
    y_train,
    batch_size=batch_size,
    shuffle=True,
)
valid_data = BertSemanticDataGenerator(
    valid_df[["sentence1", "sentence2"]].values.astype("str"),
    y_val,
    batch_size=batch_size,
    shuffle=False,
)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Train the Model Training is done only for the top layers to perform "feature extraction", which will allow the model to use the representations of the pretrained model.
history = model.fit(
    train_data,
    validation_data=valid_data,
    epochs=epochs,
    use_multiprocessing=True,
    workers=-1,
)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Fine-tuning This step must only be performed after the feature extraction model has been trained to convergence on the new data. This is an optional last step where bert_model is unfrozen and retrained with a very low learning rate. This can deliver meaningful improvement by incrementally adapting the pretrained features to the new data.
# Unfreeze the bert_model.
bert_model.trainable = True
# Recompile the model to make the change effective.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Train the entire model end-to-end.
history = model.fit(
    train_data,
    validation_data=valid_data,
    epochs=epochs,
    use_multiprocessing=True,
    workers=-1,
)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Evaluate model on the test set
test_data = BertSemanticDataGenerator(
    test_df[["sentence1", "sentence2"]].values.astype("str"),
    y_test,
    batch_size=batch_size,
    shuffle=False,
)
model.evaluate(test_data, verbose=1)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Inference on custom sentences
def check_similarity(sentence1, sentence2):
    sentence_pairs = np.array([[str(sentence1), str(sentence2)]])
    test_data = BertSemanticDataGenerator(
        sentence_pairs, labels=None, batch_size=1, shuffle=False, include_targets=False,
    )

    proba = model.predict(test_data[0])[0]
    idx = np.argmax(proba)
    proba = f"{proba[idx] * 100: .2f}%"  # scale to a percentage (the original omitted the factor of 100)
    pred = labels[idx]
    return pred, proba
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Check results on some example sentence pairs.
sentence1 = "Two women are observing something together." sentence2 = "Two women are standing with their eyes closed." check_similarity(sentence1, sentence2)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
Check results on some example sentence pairs.
sentence1 = "A smiling costumed woman is holding an umbrella" sentence2 = "A happy woman in a fairy costume holds an umbrella" check_similarity(sentence1, sentence2)
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0