When the deployment is done, you can verify the three Cloud Functions deployments via the Cloud Console UI. If the deployment succeeded, move on to the next section to upload audience data to Google Analytics via a JSONL file.

Configure the audience upload endpoint

Different audience upload endpoint APIs have different configurations. The following demonstrates how the endpoint for Google Analytics can be configured via the Measurement Protocol. Refer to 3.3. Configurations of APIs for detailed configuration options for other endpoints. Update the following GA values according to your needs in the following cell. Refer to Working with the Measurement Protocol for details on field names and correct values.

```json
{
  "t": "event",
  "ec": "video",
  "ea": "play",
  "ni": "1",
  "tid": "UA-112752759-1"
}
```
%%writefile cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector/config_api.json
{
  "MP": {
    "default": {
      "mpConfig": {
        "v": "1",
        "t": "event",
        "ec": "video",
        "ea": "play",
        "ni": "1",
        "tid": "UA-XXXXXXXXX-Y"
      }
    }
  }
}
packages/propensity/09.audience_upload.ipynb
google/compass
apache-2.0
Create audience list JSON files

GMP and Google Ads Connector's Google Analytics Measurement Protocol pipeline requires the JSONL text format. The following cells help export the BigQuery table containing the audience list as a JSONL file to a Google Cloud Storage bucket. NOTE: This solution has a specific file-naming requirement to work properly. Refer to 3.4. Name convention of data files for more details. As soon as the file is uploaded, GMP and Google Ads Connector processes it and sends it via the Measurement Protocol to the Google Analytics property configured above ("tid": "UA-XXXXXXXXX-Y").
configs = helpers.get_configs('config.yaml')
dest_configs = configs.destination

# GCP project ID
PROJECT_ID = dest_configs.project_id
# Name of BigQuery dataset
DATASET_NAME = dest_configs.dataset_name
# Google Cloud Storage bucket name to store audience upload JSON files
# NOTE: The name should be the same as indicated while deploying
# "GMP and Google Ads Connector" on the Terminal
GCS_BUCKET = 'bucket'
# This Cloud Storage folder is monitored by the "GMP and Google Ads Connector"
# to send over to the endpoint (eg: Google Analytics).
GCS_FOLDER = 'outbound'
# File name to export the BigQuery table to Cloud Storage
JSONL_FILENAME = 'myproject_API[MP]_config[default].jsonl'
# BigQuery table containing scored audience data
AUDIENCE_SCORE_TABLE_NAME = 'table'

%%bash -s $PROJECT_ID $DATASET_NAME $AUDIENCE_SCORE_TABLE_NAME $GCS_BUCKET $GCS_FOLDER $JSONL_FILENAME
bq extract \
  --destination_format NEWLINE_DELIMITED_JSON \
  $1:$2.$3 \
  gs://$4/$5/$6
Examples and tests
nHat
diff(nHat, t)
diff(lambdaHat, t)
diff(lambdaHat, t).components
diff(lambdaHat, t).subs(t, 0).components
diff(lambdaHat, t, 2).components
diff(lambdaHat, t, 2).subs(t, 0).components
diff(ellHat, t)
diff(nHat, t, 2)
diff(nHat, t, 3)
diff(nHat, t, 4)
diff(SigmaVec, t, 0)
SigmaVec.fdiff()
diff(SigmaVec, t, 1)
diff(SigmaVec, t, 2)
diff(SigmaVec, t, 2) | nHat
T1 = TensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
T2 = TensorProduct(SigmaVec, nHat, lambdaHat, coefficient=1)
tmp = Tensor(T1, T2)
display(T1, T2, tmp)
diff(tmp, t, 1)
T1 + T2
T2 * ellHat
ellHat * T2
T1.trace(0, 1)
T2 * ellHat
for k in range(1, 4):
    display((T2 * ellHat).trace(0, k))
for k in range(1, 4):
    display((T2 * ellHat).trace(0, k).subs(t, 0))
T1.trace(0, 1) * T2
Waveforms/TestTensors.ipynb
moble/PostNewtonian
mit
SymPy can be a little tricky because it caches things, which meant that the first implementation of this code silently changed tensors in place, without meaning to. Let's check that our variables haven't changed:
display(T1, T2)
T3 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
display(T3)
T3.trace(0, 1)
diff(T3, t, 1)
T3.symmetric
T3 * ellHat
ellHat * T3
T1 + T3
T1 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, nHat, coefficient=1)
display(T1)
display(T1.trace())
T1 * T2
type(_)
import simpletensors
isinstance(__, simpletensors.TensorProductFunction)
SymmetricTensorProduct(nHat, nHat, nHat).trace()
diff(T1.trace(), t, 1)
diff(T1.trace(), t, 2)
diff(T1.trace(), t, 2).subs(t, 0)
Let's get the data from the Seattle open data site. Data is published on an hourly basis from 2012 to now and can be found at the URL below. We'll make a pandas DataFrame from it for our analysis.
url = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
df = pd.read_csv(url, parse_dates=True)
df.head()
df.shape
df.index = pd.DatetimeIndex(df.Date)
df.head()
df.drop(columns=['Date'], inplace=True)
df.head()
df['total'] = df['Fremont Bridge East Sidewalk'] + df['Fremont Bridge West Sidewalk']
df.drop(columns=['Fremont Bridge East Sidewalk', 'Fremont Bridge West Sidewalk'], inplace=True)
Intro to Data Science - Seattle Fremont Bridge.ipynb
suresh/notebooks
mit
Let's start with some visualizations to understand this data.
df.resample('D').sum().plot()
df.resample('M').sum().plot()
df.groupby(df.index.hour).mean().plot()
Let's look at this data on an hourly basis across days.
pivoted_data = df.pivot_table('total', index=df.index.hour, columns=df.index.date)
pivoted_data.iloc[:5, :5]
# Plot all of these dates together
pivoted_data.plot(legend=False, alpha=0.1)
Let's use unsupervised learning to disentangle this chart.
from sklearn.decomposition import PCA

X = pivoted_data.fillna(0).T.values
X.shape
pca = PCA(2, svd_solver='full').fit(X)
X_PCA = pca.transform(X)
X_PCA.shape
plt.scatter(X_PCA[:, 0], X_PCA[:, 1])
dayofweek = pd.DatetimeIndex(pivoted_data.columns).dayofweek
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar();
Let's do a simple clustering to identify these two groups
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(2)
gmm.fit(X_PCA)
labels = gmm.predict(X_PCA)
labels
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=labels, cmap='rainbow')
Load a data frame
df = pd.read_csv('../datasets/iris.csv')
df.describe(include='all')
notebooks/Quickstart with categoricals.ipynb
fdion/stemgraphic
mit
Select a column with text.
stem_graphic(list(df['species'].values));
Load data and model You should already have downloaded the TinyImageNet-100-A and TinyImageNet-100-B datasets along with the pretrained models. Run the cell below to load (a subset of) the TinyImageNet-100-B dataset and one of the models that was pretrained on TinyImageNet-100-A. TinyImageNet-100-B contains 50,000 training images in total (500 per class for all 100 classes) but for this exercise we will use only 5,000 training images (50 per class on average).
# Load the TinyImageNet-100-B dataset
from cs231n.data_utils import load_tiny_imagenet, load_models

tiny_imagenet_b = 'cs231n/datasets/tiny-imagenet-100-B'
class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_b)

# Zero-mean the data
mean_img = np.mean(X_train, axis=0)
X_train -= mean_img
X_val -= mean_img
X_test -= mean_img

# We will use a subset of the TinyImageNet-B training data
mask = np.random.choice(X_train.shape[0], size=5000, replace=False)
X_train = X_train[mask]
y_train = y_train[mask]

# Load a pretrained model; it is a five-layer convnet.
models_dir = 'cs231n/datasets/tiny-100-A-pretrained'
model = load_models(models_dir)['model1']
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
TinyImageNet-100-B classes In the previous assignment we printed out a list of all classes in TinyImageNet-100-A. We can do the same on TinyImageNet-100-B; if you compare with the list in the previous exercise you will see that there is no overlap between the classes in TinyImageNet-100-A and TinyImageNet-100-B.
for names in class_names: print ' '.join('"%s"' % name for name in names)
Visualize Examples Similar to the previous exercise, we can visualize examples from the TinyImageNet-100-B dataset. The images are similar to TinyImageNet-100-A, but the images and classes in the two datasets are disjoint.
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(class_names), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
    train_idxs, = np.nonzero(y_train == class_idx)
    train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
    for j, train_idx in enumerate(train_idxs):
        img = X_train[train_idx] + mean_img
        img = img.transpose(1, 2, 0).astype('uint8')
        plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
        if j == 0:
            plt.title(class_names[class_idx][0])
        plt.imshow(img)
        plt.gca().axis('off')
plt.show()
Extract features ConvNets tend to learn generalizable high-level image features. For the five layer ConvNet architecture, we will use the (rectified) activations of the first fully-connected layer as our high-level image features. Open the file cs231n/classifiers/convnet.py and modify the five_layer_convnet function to return features when the extract_features flag is True. This should be VERY simple. Once you have done that, fill in the cell below, which should use the pretrained model in the model variable to extract features from all images in the training and validation sets.
from cs231n.classifiers.convnet import five_layer_convnet

# These should store extracted features for the training and validation sets
# respectively.
#
# More concretely, X_train_feats should be an array of shape
# (X_train.shape[0], 512) where X_train_feats[i] is the 512-dimensional
# feature vector extracted from X_train[i] using model.
#
# Similarly X_val_feats should have shape (X_val.shape[0], 512) and
# X_val_feats[i] should be the 512-dimensional feature vector extracted from
# X_val[i] using model.
X_train_feats = None
X_val_feats = None

# Use our pretrained model to extract features on the subsampled training set
# and the validation set.
################################################################################
# TODO: Use the pretrained model to extract features for the training and      #
# validation sets for TinyImageNet-100-B.                                      #
#                                                                              #
# HINT: Similar to computing probabilities in the previous exercise, you       #
# should split the training and validation sets into small batches to avoid    #
# using absurd amounts of memory.                                              #
################################################################################
X_train_feats = five_layer_convnet(X_train, model, y=None, reg=0.0,
                                   extract_features=True)
X_val_feats = five_layer_convnet(X_val, model, y=None, reg=0.0,
                                 extract_features=True)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
kNN with ConvNet features

A simple way to implement transfer learning is to use a k-nearest neighbor classifier. However, instead of computing the distance between images using their pixel values as we did in Assignment 1, we will say that the distance between a pair of images is the L2 distance between their feature vectors extracted using our pretrained ConvNet. Implement this idea in the cell below. You can use the KNearestNeighbor class in the file cs231n/classifiers/k_nearest_neighbor.py.
from cs231n.classifiers.k_nearest_neighbor import KNearestNeighbor

# Predicted labels for X_val using a k-nearest-neighbor classifier trained on
# the features extracted from X_train. knn_y_val_pred[i] = c indicates that
# the kNN classifier predicts that X_val[i] has label c.
knn_y_val_pred = None

################################################################################
# TODO: Use a k-nearest neighbor classifier to compute knn_y_val_pred.         #
# You may need to experiment with k to get the best performance.               #
################################################################################
knn = KNearestNeighbor()
knn.train(X_train_feats, y_train)
knn_y_val_pred = knn.predict(X_val_feats, k=25)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

print 'Validation set accuracy: %f' % np.mean(knn_y_val_pred == y_val)
Visualize neighbors Recall that the kNN classifier computes the distance between all of its training instances and all of its test instances. We can use this distance matrix to help understand what the ConvNet features care about; specifically, we can select several random images from the validation set and visualize their nearest neighbors in the training set. You will see that many times the nearest neighbors are quite far away from each other in pixel space; for example two images that show the same object from different perspectives may appear nearby in ConvNet feature space. Since the following cell selects random validation images, you can run it several times to get different results.
dists = knn.compute_distances_no_loops(X_val_feats)

num_imgs = 5
neighbors_to_show = 6

query_idxs = np.random.randint(X_val.shape[0], size=num_imgs)

next_subplot = 1
first_row = True
for query_idx in query_idxs:
    query_img = X_val[query_idx] + mean_img
    query_img = query_img.transpose(1, 2, 0).astype('uint8')
    plt.subplot(num_imgs, neighbors_to_show + 1, next_subplot)
    plt.imshow(query_img)
    plt.gca().axis('off')
    if first_row:
        plt.title('query')
    next_subplot += 1
    o = np.argsort(dists[query_idx])
    for i in xrange(neighbors_to_show):
        img = X_train[o[i]] + mean_img
        img = img.transpose(1, 2, 0).astype('uint8')
        plt.subplot(num_imgs, neighbors_to_show + 1, next_subplot)
        plt.imshow(img)
        plt.gca().axis('off')
        if first_row:
            plt.title('neighbor %d' % (i + 1))
        next_subplot += 1
    first_row = False
Softmax on ConvNet features Another way to implement transfer learning is to train a linear classifier on top of the features extracted from our pretrained ConvNet. In the cell below, train a softmax classifier on the features extracted from the training set of TinyImageNet-100-B and use this classifier to predict on the validation set for TinyImageNet-100-B. You can use the Softmax class in the file cs231n/classifiers/linear_classifier.py.
from cs231n.classifiers.linear_classifier import Softmax

softmax_y_train_pred = None
softmax_y_val_pred = None

################################################################################
# TODO: Train a softmax classifier to predict a TinyImageNet-100-B class from  #
# features extracted from our pretrained ConvNet. Use this classifier to make  #
# predictions for the TinyImageNet-100-B training and validation sets, and     #
# store them in softmax_y_train_pred and softmax_y_val_pred.                   #
#                                                                              #
# You may need to experiment with number of iterations, regularization, and    #
# learning rate in order to get good performance. The softmax classifier       #
# should achieve a higher validation accuracy than the kNN classifier.         #
################################################################################
softmax = Softmax()
# NOTE: the input X of the softmax classifier is an array of shape D x N
softmax.train(X_train_feats.T, y_train, learning_rate=1e-2, reg=1e-4,
              num_iters=1000)
y_train_pred = softmax.predict(X_train_feats.T)
y_val_pred = softmax.predict(X_val_feats.T)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

print y_val_pred.shape, y_train_pred.shape
train_acc = np.mean(y_train == y_train_pred)
val_acc = np.mean(y_val_pred == y_val)
print train_acc, val_acc
Fine-tuning We can improve our classification results on TinyImageNet-100-B further by fine-tuning our ConvNet. In other words, we will train a new ConvNet with the same architecture as our pretrained model, and use the weights of the pretrained model as an initialization to our new model. Usually when fine-tuning you would re-initialize the weights of the final affine layer randomly, but in this case we will initialize the weights of the final affine layer using the weights of the trained softmax classifier from above. In the cell below, use fine-tuning to improve your classification performance on TinyImageNet-100-B. You should be able to outperform the softmax classifier from above using fewer than 5 epochs over the training data. You will need to adjust the learning rate and regularization to achieve good fine-tuning results.
from cs231n.classifier_trainer import ClassifierTrainer

# Make a copy of the pretrained model
model_copy = {k: v.copy() for k, v in model.iteritems()}

# Initialize the weights of the last affine layer using the trained weights
# from the softmax classifier above
model_copy['W5'] = softmax.W.T.copy().astype(model_copy['W5'].dtype)
model_copy['b5'] = np.zeros_like(model_copy['b5'])

# Fine-tune the model. You will need to adjust the training parameters to get
# good results.
trainer = ClassifierTrainer()
learning_rate = 1e-4
reg = 1e-1
dropout = 0.5
num_epochs = 2
finetuned_model = trainer.train(X_train, y_train, X_val, y_val,
                                model_copy, five_layer_convnet,
                                learning_rate=learning_rate, reg=reg,
                                update='rmsprop', dropout=dropout,
                                num_epochs=num_epochs, verbose=True)[0]
Now, instead of opening it as a simple text file, we will load the lines in the JSON file into a dictionary object. Let us read the data set using the JavaScript Object Notation (json) module (we will not cover this topic).
import json

data_file = 'usagov_bitly_data2012-03-16-1331923249.txt'
records = [json.loads(line) for line in open(data_file)]
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Thanks to the json module, the variable records is now a list of dictionaries parsed from the JSON data.
print("Variable records is {} and its elements are {}.".format(type(records),type(records[0])))
Let us have a look at the first element: as said before, this is a typical dictionary structure with keys and values.

Find out the keys. Then find out the value for the key 'tz' in this first record. In this case (and we were not supposed to know this) 'tz' stands for time zone.

Suppose we are interested in identifying the most commonly found time zones in the set of data we just imported. Surely, each one of you will find a different way to work it out. First, we want to obtain a list of all time zones found in the list; name the list list_of_timezones. Check the length of list_of_timezones and, for instance, its first ten elements.

Try to think of an algorithm to count the occurrences of the different time zones (including the blank field ' '). Hint: you might want to use a dictionary to store the occurrences. (If you can't solve it, follow this link for a possible solution.) How often does 'America/Sao_Paulo' appear? How many different time zones are there? Find out the top 10 time zones (sample code).

Collections module

The Python standard library provides the collections module, which contains the collections.Counter class. This does the job that we just did, but in a nicer way:
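One possible dict-based counting approach, as a minimal sketch: the small sample_records list below is a hypothetical stand-in for the real records list loaded from the usagov file, so the counts here are illustrative only.

```python
# Hypothetical stand-in for the real `records` list loaded above
sample_records = [{'tz': 'America/New_York'},
                  {'tz': ''},
                  {'tz': 'America/Sao_Paulo'},
                  {'tz': 'America/New_York'}]

# Collect every time zone found (including blanks)
list_of_timezones = [rec['tz'] for rec in sample_records if 'tz' in rec]

# Count occurrences with a plain dictionary
counter = {}
for tz in list_of_timezones:
    counter[tz] = counter.get(tz, 0) + 1

# Top time zones: sort the (tz, count) pairs by count, descending
top = sorted(counter.items(), key=lambda kv: kv[1], reverse=True)
```

With the real data you would apply the same loop to records instead of sample_records.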
import collections

print("First counter is of ", type(counter))
# Generate an instance of the Counter class using our counter variable
counter = collections.Counter(counter)
print("Now counter is of ", type(counter))
# The Counter class has new useful functionality
counter.most_common(10)
The pandas alternative Now, let us do the same work using pandas. The main pandas data structure is the DataFrame. It can be seen as a representation of a table or spreadsheet of data. First, we will create the DataFrame from the original data file:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame

myframe = DataFrame(records)
myframe
myframe is now a DataFrame, a class introduced by pandas to efficiently work with structured data. Check the type:
type(myframe)
The DataFrame object is composed of Series (another pandas object); they can be seen as the columns of a spreadsheet. For instance, myframe['tz'].
type(myframe['tz'])
Check the time zones ('tz') in the first ten records of myframe. The Series object has a useful method: value_counts:
tz_counter = myframe['tz'].value_counts()
In one line of code, all the time zones are grouped and counted. Check the result and get the top 5 time zones. Far fewer lines of work, right? As we have said repeatedly, there is no need to reinvent the wheel. Probably someone out there solved your problem before you ran into it and, unless you are really, really good, that solution is probably better than yours! ;-) Next, we might want to plot the data using the matplotlib library.
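Getting the top entries from a value_counts result can be sketched like this; the short tz Series is a made-up stand-in for the notebook's myframe['tz'], since value_counts already returns the counts sorted in descending order:

```python
import pandas as pd

# Hypothetical stand-in for myframe['tz']
tz = pd.Series(['America/New_York', 'America/Chicago',
                'America/New_York', 'Europe/Madrid'])

tz_counter = tz.value_counts()  # grouped and counted, sorted descending
top5 = tz_counter[:5]           # slicing off the head gives the top time zones
```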
# This line configures matplotlib to show figures embedded in the notebook, # instead of opening a new window for each figure. %matplotlib inline
Pandas can call matplotlib directly without importing the module explicitly. We will make a horizontal bar plot:
tz_counter[:10].plot(kind='barh', rot=0)
It is kind of odd to realize that the second most popular time zone has a blank label. The DataFrame object of pandas has a fillna method that can replace missing (NA) values; empty strings can be replaced with boolean indexing:
# First we generate a new Series from a column of myframe; where a value is
# NaN, insert the word 'Missing'
clean_tz = myframe['tz'].fillna('Missing')
# In this new Series replace EMPTY VALUES with the word 'Unknown'.
# Try to understand BOOLEAN INDEXING
clean_tz[clean_tz == ''] = 'Unknown'
# Use value_counts to generate a new Series of time zones and occurrences
tz_counter = clean_tz.value_counts()
# Finally, plot the top ten values
tz_counter[:10].plot(kind='barh', rot=0)
Let's complicate the example a bit more. The 'a' field in the data set contains information on the browser used to perform the URL shortening. For example, check the content of the 'a' field in the first record of myframe. Let's generate a Series containing all the browser data; try to understand the following line. A good strategy might be to work it out by pieces:

```python
myframe.a
myframe.a.dropna()
```
browser = Series([x.split()[0] for x in myframe.a.dropna()])
As we did with the time zones, we can use the value_counts method on the browser Series to see the most common browsers: Let's decompose the top time zones into people using Windows and not using Windows. This piece of code requires some more knowledge of pandas, skip the details for now:
cframe = myframe[myframe.a.notnull()]
os = np.where(cframe['a'].str.contains('Windows'), 'Windows', 'Not Windows')
tz_and_os = cframe.groupby(['tz', os])
agg_counter = tz_and_os.size().unstack().fillna(0)
agg_counter[:10]
Let's select the top overall time zones. To do so, we construct an indirect index array from the row counts in agg_counter
# argsort gives the indices that would sort the row totals in ascending order
indexer = agg_counter.sum(1).argsort()
indexer[:10]
Next, use take to select the rows in that order and then slice off the last 10 rows:
count_subset = agg_counter.take(indexer)[-10:]
count_subset
count_subset.plot(kind='barh', stacked=True)
Same plot, but percentages instead of absolute numbers
subset_normalized = count_subset.div(count_subset.sum(1), axis=0)
subset_normalized.plot(kind='barh', stacked=True)
After this example, we will go through the basics of pandas.

DataFrame and Series

The two basic data structures introduced by pandas are the DataFrame and the Series.
from pandas import Series, DataFrame
import pandas as pd
Series

A Series is a one-dimensional array-like object containing an array of data (any NumPy data type is fine) and an associated array of data labels, called its index. The simplest Series one can think of would be formed only by an array of data:
o1 = Series([-4, 7, 11, 13, -22])
o1
Notice that the representation of the Series shows the index on the left and the values on the right. No index was specified when the Series was created, so a default one has been assigned: integer numbers from 0 to N-1 (N being the length of the data array).
o1.values
o1.index
If we need to specify the index:
o2 = Series([7, 0.2, 11.3, -5], index=['d', 'e', 'a', 'z'])
o2
o2.index
Unlike NumPy arrays, we can use values in the index when selecting single values from a set of values:
o2['e']
The following is also equivalent:
o2.e
Values corresponding to two indices (notice the double square brackets!):
o2[['z','d']]
NumPy array operations, such as masking using a boolean array, scalar broadcasting, or applying mathematical functions, preserve the index-value link:
# Filter positive elements in o2; the indices are conserved.
# Compare with the same operation on a NumPy array!
o2[o2 > 0]
o2 * np.pi
Pandas Series have also been described as a fixed-length, ordered dictionary, since a Series actually maps index values to data values. Many functions that expect a dict can be used with a Series:
'z' in o2
A Python dictionary can be used to create a pandas Series, here is a list of the top 5 most populated cities (2015) according to Wikipedia:
pop_data = {'Shanghai': 24150000,
            'Karachi': 23500000,
            'Beijing': 21516000,
            'Tianjin': 14722000,
            'Istanbul': 14377000}
print("pop_data is of type ", type(pop_data))
ser1 = Series(pop_data)
print("ser1 is of type ", type(ser1))
print("Indices of the Series are: ", ser1.index)
print("Values of the Series are: ", ser1.values)
As you just checked, when passing a dictionary, the resulting Series uses the dict keys as its index and orders the values accordingly. In the next case we create a Series from a dictionary but select only the indices we are interested in:
cities = ['Karachi', 'Istanbul', 'Beijing', 'Moscow']
ser2 = Series(pop_data, index=cities)
ser2
Note that the values found in pop_data have been placed in the appropriate locations. No data was found for 'Moscow', so the value NaN is assigned. This is used in pandas to mark missing or not available (NA) values. To detect missing data in pandas, one should use isnull and notnull (both present as top-level functions and as Series methods):
pd.isnull(ser2)
ser2.isnull()
ser2.notnull()
An important feature of Series to be highlighted here is that Series are automatically aligned when performing arithmetic operations. It doesn't make much sense to add the population data but...
ser1 + ser2
We can assign names to both the Series object and its index using the name attribute:
ser1.name = 'population'
ser1.index.name = 'city'
ser1
DataFrame The DataFrame object represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type. The DataFrame has both a row and column index and it can be seen as a dictionary of Series. Under the hood, the data is stored as one or more 2D blocks rather than a list, dict, or some other collection of 1D arrays. See the following example:
data = {'city': ['Madrid', 'Madrid', 'Madrid', 'Barcelona', 'Barcelona',
                 'Sevilla', 'Sevilla', 'Girona', 'Girona', 'Girona'],
        'year': ['2002', '2006', '2010', '2006', '2010',
                 '2002', '2010', '2002', '2006', '2010'],
        'pop': [5478405, 5953604, 6373532, 5221848, 5488633,
                1732697, 1902956, 568690, 668911, 741841]}
pop_frame = DataFrame(data)
pop_frame
The resulting DataFrame has automatically assigned indices, and the columns are sorted. This order can be altered if we specify a sequence of columns:
DataFrame(data, columns=['year','city','pop'])
You should keep in mind that indices in the DataFrame are Index objects, they have attributes (such as name as we will see) and are immutable. What will happen if we pass a column that is not contained in the data set?
pop_frame2 = DataFrame(data, columns=['year', 'city', 'pop', 'births'])
pop_frame2
A column in a DataFrame can be retrieved as a Series in two ways: using dict-like notation
pop_frame2['city']
using the DataFrame attribute
pop_frame2.city
Columns can be modified by assignment. Let's get rid of those NA values in the births column:
pop_frame2['births'] = 100000
pop_frame2
When assigning lists or arrays to a column, the values' length must match the length of the DataFrame. If we assign a Series it will be instead conformed exactly to the DataFrame's index, inserting missing values in any holes:
birth_series = Series([100000, 15000, 98000], index=[0, 2, 3])
pop_frame2['births'] = birth_series
pop_frame2
Assigning a column that doesn't exist will result in creating a new column. Columns can be deleted using the del command:
pop_frame2['Catalunya'] = ((pop_frame2.city == 'Barcelona') |
                           (pop_frame2.city == 'Girona'))
pop_frame2
del pop_frame2['Catalunya']
pop_frame2.columns
Alternatively, the DataFrame can be built from a nested dict of dicts:
pop_data = {'Madrid': {'2002': 5478405, '2006': 5953604, '2010': 6373532}, 'Barcelona': {'2006': 5221848, '2010': 5488633}, 'Sevilla': {'2002': 1732697, '2010': 1902956}, 'Girona': {'2002': 568690, '2006': 668911, '2010': 741841}} pop_frame3 = DataFrame(pop_data) pop_frame3
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
The outer dict keys act as the columns and the inner keys as the unioned row indices. Possible data inputs to construct a DataFrame: 2D NumPy array dict of arrays, lists, or tuples dict of Series dict of dicts list of dicts or Series List of lists or tuples DataFrames NumPy masked array As in Series, the index and columns in a DataFrame have name attributes:
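As a quick illustration of one of the inputs listed above, a DataFrame can also be built from a list of dicts; each dict becomes a row and the keys become columns, with missing keys filled with NaN (the data here is a made-up fragment for illustration):

```python
import pandas as pd

# Each dict is one row; keys that appear in any dict become columns
records = [{'city': 'Madrid', 'pop': 5478405},
           {'city': 'Girona', 'pop': 568690, 'year': '2002'}]
frame = pd.DataFrame(records)
print(frame)
```

Note that the first record has no 'year' key, so that cell comes out as NaN, just like the missing 'births' column earlier.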
pop_frame3.columns.name = 'city'; pop_frame3.index.name = 'year' pop_frame3
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Similarly, the values attribute returns the data contained in the DataFrame as a 2D array:
pop_frame3.values
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Basic functionality We will not cover all the possible operations using Pandas and the related data structures. We will try to cover some of the basics. Reindexing A critical method in pandas is reindex. This implies creating a new object with the data of a given structure but conformed to a new index. For instance: Extract the column of pop_frame3 belonging to Barcelona Check the type of the column, it should be a Series Find out the indices of the Barcelona Series Call reindex on the Barcelona Series to rearrange the data to a new index [2010, 2008, 2006, 2004, 2002], following this example: python obj = Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b', 'a', 'c']) obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e']) The reindex method can be combined with the fill_value= option to fill the non-existing values: python obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'], fill_value=0) It doesn't make much sense in this case to estimate the non-existing values as zeros. For ordered data such as time series, we can use interpolation or forward/backward filling. In the case of DataFrames, reindex can alter either the (row) index, the columns, or both.
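The fill options mentioned above can be sketched on the Barcelona data (the values are taken from the population dict earlier in this notebook):

```python
import pandas as pd

# Barcelona population Series, as in the data above
pop_bcn = pd.Series([5221848, 5488633], index=['2006', '2010'])

# Reindex to a denser year grid
years = ['2002', '2004', '2006', '2008', '2010']

# Missing years filled with a constant
filled_zero = pop_bcn.reindex(years, fill_value=0)

# Forward fill: each missing year takes the last known earlier value;
# years before the first observation stay NaN
filled_ffill = pop_bcn.reindex(years, method='ffill')
print(filled_zero)
print(filled_ffill)
```

For this kind of ordered data, forward filling is usually the more sensible choice, since a city's population does not drop to zero between census years.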
#Inverting the row indices and adding some more years years = ['2010', '2008', '2006', '2004', '2002'] pop_frame4 = pop_frame3.reindex(years)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
If instead we want to reindex the columns, we need to use the columns keyword:
cities = ['Madrid', 'Sevilla','Barcelona', 'Girona'] pop_frame4 = pop_frame4.reindex(columns=cities)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Both at once:
pop_frame4 = pop_frame3.reindex(index = years, columns = cities)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
reindex function arguments Dropping entries from an axis From the Barcelona population Series let's get rid of years 2002 and 2008:
pop_bcn2.drop(['2002', '2008'])
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Check that the object was not modified. In DataFrames, index values can be deleted from both axes. Use the IPython help to find out the use of drop and get rid of all the data related to Madrid and year 2002: Indexing, selection, and filtering Series Indexing in Series works similarly to NumPy array indexing. The main difference is that we can actually use the Series' index instead of integer numbers
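One possible sketch of the drop exercise above, using a reduced version of pop_data (rows are dropped along the default axis 0, columns with axis=1):

```python
import pandas as pd

pop_data = {'Madrid': {'2002': 5478405, '2006': 5953604, '2010': 6373532},
            'Barcelona': {'2006': 5221848, '2010': 5488633}}
frame = pd.DataFrame(pop_data)

# Drop a row label (axis=0, the default), then a column label (axis=1)
trimmed = frame.drop('2002').drop('Madrid', axis=1)
print(trimmed)
```

As with the Series case, drop returns a new object and leaves the original frame untouched.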
pop_bcn2.index.name = 'year'
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Population in Barcelona in year 2006? Boolean indexing, non-zero data in Barcelona? Important: Slicing with labels behaves differently from normal Python slicing: the endpoint is inclusive! Give it a try:
pop_bcn2[:2] pop_bcn2['2002':'2006']
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Here we will create a QTable which is a quantity table. The first column is the redshift of the source, the second is the star formation rate (SFR), the third is the stellar mass and the fourth is the rotational velocity. Since it is a QTable, we can define the units for each of the entries, if we like.
a = [0.10, 0.15, 0.2] b = [10.0, 2.0, 100.0] * u.M_sun / u.yr c = [1e10, 1e9, 1e11] * u.M_sun d = [150., 100., 2000.] * u.km / u.s
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
When we create the table, we can assign the names to each column
t = QTable([a, b, c, d], names=('redshift', 'sfr', 'stellar_mass', 'velocity'))
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can display the table by typing the variable into the notebook
t
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can display the information about the table (i.e. data type, units, name)
t.info
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can view a specific column by calling its name.
t['sfr']
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can also group subsets of columns.
t['sfr','velocity']
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can call a specific row, in this case the first row, which shows all the entries for the first object. Remember python indexing starts with zero!
t[0]
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can write to a standard format if we like; binary FITS tables are often used for this. It could be an ASCII file (either space, tab, or comma delimited) or whatever your favorite sharable file format is. There is a whole list of supported formats that can be read in, many astronomy specific.
t.write('my_table.fits', overwrite=True)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
You can read it in just as easily too. As you can see, the units are stored as well, as long as you are using a QTable.
all_targets = QTable.read("my_table.fits") all_targets
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can add new entries to the table. In this case specific star formation rate (sSFR).
t['ssfr'] = t['sfr'] / t['stellar_mass'] t
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Maybe we are unhappy using years and we want to convert to Gyr. There is a whole list of available units here.
t['ssfr'].to(1./u.gigayear)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
If we want to estimate the H-alpha line luminosity from the SFR using an empirical relation (such as the one from Kennicutt et al. 1998).
t['lum'] = (t['sfr'] * u.erg / u.s )/(7.9e-42 * u.Msun / u.yr) t
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Say we want to go from erg to photons, which are both "energy". You can see it is not quite as straightforward from the following error.
(t['lum']).to(u.ph / u.s)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
In order to convert to photons/s properly, we need to check the units. Our beginning and ending units match (i.e. energy/time), we just need to account for the energy of the photon, which depends on the wavelength. $$ E_{phot} = \frac{h c}{\lambda} $$ (1) As an exercise for the reader, modify the above table to estimate the number of photons/s from the H-alpha line luminosity using the energy of the photon instead. Photometry with Photutils What is photometry? It is the measurement of light from a source, represented either as flux or magnitude. There are several ways to compute the flux from a source. In this tutorial we will go over aperture photometry, where you draw a circular region around the source and measure the flux inside. Photutils is a useful astropy package for performing photometry. This includes but is not limited to aperture photometry, PSF photometry, star finding, source isophotes, background subtraction, etc... Large parts of this tutorial are borrowed from the Getting Started with Photutils documentation. Here we load in a star field image provided with Photutils. We are interested in a small subsection of the image in which we want to do photometry.
import numpy as np from photutils import datasets hdu = datasets.load_star_image() image = hdu.data[500:700, 500:700].astype(float) head = hdu.header
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
The type function is useful for determining what data type you are dealing with, especially if it is an unfamiliar one.
print(type(hdu)) print(type(image)) print(type(head))
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Assuming your background is uniform (and it most certainly is not), here is a rough background subtraction. This may not work for all astronomical fields (e.g. clusters, low surface brightness features, etc.). There are more sophisticated ways to deal with the background subtraction; see the Background Estimation documentation.
image -= np.median(image)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We use DAOStarFinder to find all of the objects in the image. The FWHM (Full Width Half Maximum) is a way of measuring the size of the source. Often for unresolved sources, like stars, the FWHM should match the resolution of the telescope (or the seeing). The threshold is the value above the background at which the star finding algorithm will find sources. Often this should be done in a statistical sense in which a real detection is n-sigma above the background (or n standard deviations). We can measure the standard deviation of the background. Statistics aside: Assuming a Gaussian distribution for the noise, 1-sigma contains 68% of the distribution, 2-sigma contains 95% of the distribution and 3-sigma contains 99.7% of the distribution. Often 5-sigma is the gold standard in physics and astronomy for a "real" detection, as it contains 99.99994% of the background distribution. Here we use 3-sigma (3 standard deviations) above the background for a "real" detection.
from photutils import DAOStarFinder from astropy.stats import mad_std bkg_sigma = mad_std(image) daofind = DAOStarFinder(fwhm=4., threshold=3.*bkg_sigma) sources = daofind(image) for col in sources.colnames: sources[col].info.format = '%.8g' # for consistent table output print(sources[:10]) print(sources.info) from photutils import aperture_photometry, CircularAperture positions = np.transpose((sources['xcentroid'], sources['ycentroid'])) apertures = CircularAperture(positions, r=4.) phot_table = aperture_photometry(image, apertures) for col in phot_table.colnames: phot_table[col].info.format = '%.8g' # for consistent table output print(phot_table[:10]) print(phot_table.info) import matplotlib.pyplot as plt plt.imshow(image, cmap='gray_r', origin='lower') apertures.plot(color='blue', lw=1.5, alpha=0.5) print(head.keys())
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Astroquery Astroquery is an Astropy package for querying astronomical databases. There are many, many, many catalogs and image services. Astroquery can be used to find catalogs and images. In this tutorial, we will show a few different databases and ways to search for data. NED query NED is the NASA/IPAC Extragalactic Database. It can be useful for finding measurements, properties and papers about your favorite galaxy (or galaxies). Like many databases, you can search for galaxies via their name or coordinates. First we need to import the modules that we will use for this tutorial.
from astroquery.ned import Ned from astropy import coordinates import astropy.units as u result_table = Ned.query_object("m82") print(result_table)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Print the columns returned from the search.
print(result_table.keys())
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Print the redshift of the source
print(result_table['Redshift'])
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Let's now search for objects around M82.
result_table = Ned.query_region("m82", radius=1 * u.arcmin) print(result_table)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Gaia query Gaia is a space observatory designed to observe the precise positions of stars in order to determine their distances and motions within the galaxy. Gaia star coordinates are the new gold standard for astrometry. Being able to query their database to correct your catalogs and images to their coordinate system would be highly beneficial.
from astroquery.gaia import Gaia coord = coordinates.SkyCoord(ra=280, dec=-60, unit=(u.degree, u.degree), frame='icrs') radius = u.Quantity(1.0, u.arcmin) j = Gaia.cone_search_async(coord, radius) r = j.get_results() r.pprint() print(r.keys())
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
IRSA query of 2MASS IRSA is the NASA/IPAC Infrared Science Archive. IRSA typically has an infrared mission focus, with missions such as 2MASS, Herschel, Planck, Spitzer, WISE, etc...
from astroquery.irsa import Irsa Irsa.list_catalogs() table = Irsa.query_region("m82", catalog="fp_psc", spatial="Cone",radius=10 * u.arcmin) table table.keys()
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
SDSS query SDSS, or the Sloan Digital Sky Survey, is a massive survey of the sky which originally consisted of optical imaging and spectroscopy. It has been operating for over 20 years, well beyond the originally planned survey, as it has expanded to near-infrared observations of stars, IFUs (integral field units) for observing galaxies, and eventually the whole-sky LVM (Local Volume Mapper). It is an immense data set to draw from.
from astroquery.sdss import SDSS
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Coordinates of your favorite object, in this case M82 in degrees.
ra, dec = 148.969687, 69.679383 co = coordinates.SkyCoord(ra=ra, dec=dec,unit=(u.deg, u.deg), frame='fk5') xid = SDSS.query_region(co, radius=10 * u.arcmin) # print the first 10 entries print(xid[:10]) print(xid.keys()) print(xid['ra','dec'][:10]) # print the first 10 entries
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Skyview Skyview is a virtual observatory that is connected to several surveys and imaging data sets spanning multiple wavelengths. It can be a handy way to get quick access to multi-wavelength imaging data for your favorite object.
from astroquery.skyview import SkyView SkyView.list_surveys() pflist = SkyView.get_images(position='M82', survey=['SDSSr'],radius=10 * u.arcmin) ext = 0 pf = pflist[0] # first element of the list, might need a loop if multiple images m82_image = pf[ext].data
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Plot image
ax = plt.subplot() ax.imshow(m82_image, cmap='gray_r', origin='lower', vmin=-10, vmax=20) ax.set_xlabel('X (pixels)') ax.set_ylabel('Y (pixels)')
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Plot the image in RA and Declination coordinate space, instead of pixels. The WCS (or World Coordinate System) in a FITS file has a set of keywords that define the coordinate system. Typical keywords are the following: CRVAL1, CRVAL2 - RA and Declination (in degrees) coordinates of CRPIX1 and CRPIX2. CRPIX1, CRPIX2 - Pixel positions of CRVAL1 and CRVAL2. CDELT1, CDELT2 - pixel scale (or plate scale) in degrees/pixel (or the CD matrix: CD1_1, CD1_2, CD2_1, CD2_2, which represents the pixel scale and rotation with respect to the position angle)
from astropy.wcs import WCS
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Use the FITS header to define the WCS of the image.
head = pf[ext].header wcs = WCS(head) ax = plt.subplot(projection=wcs) ax.imshow(m82_image, cmap='gray_r', origin='lower', vmin=-10, vmax=20) #ax.grid(color='white', ls='solid') ax.set_xlabel('Right Ascension (J2000)') ax.set_ylabel('Declination (J2000)') #ax.scatter(xid['ra'],xid['dec'],marker="o",s=50,transform=ax.get_transform('fk5'),edgecolor='b', facecolor='none')
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Setup From the config, we'll grab the base URL and folders to be harvested.
baseurl = ARCGIS["SLIPFUTURE"]["url"] folders = ARCGIS["SLIPFUTURE"]["folders"] print("The base URL {0} contains folders {1}\n".format(baseurl, str(folders))) services = get_arc_services(baseurl, folders[0]) print("The services in folder {0} are:\n{1}\n".format(str(folders), str(services))) service_url = services[0] mrwa = get_arc_servicedict(services[0]) print("Service {0}\ncontains layers and extensions:\n{1}".format(services[0], mrwa))
Harvesting ArcGIS REST.ipynb
datawagovau/harvesters
mit
Harvest Hammertime? Hammertime.
harvest_arcgis_service(services[0], ckan, owner_org_id = ckan.action.organization_show(id="mrwa")["id"], author = "Main Roads Western Australia", author_email = "irissupport@mainroads.wa.gov.au", debug=False) a = ["SLIP Classic", "Harvested"] a.append("test") a
Harvesting ArcGIS REST.ipynb
datawagovau/harvesters
mit
Use a pretrained VGG model with our Vgg16 class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy. We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. The punchline: state of the art custom model in 7 lines of code Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
# As large as you can, but no larger than 64 is recommended. # If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this. #batch_size = 1 #batch_size = 4 batch_size = 64 # Import our class, and instantiate import vgg16; reload(vgg16) from vgg16 import Vgg16 vgg = Vgg16() # Grab a few images at a time for training and validation. # NB: They must be in subdirectories named based on their category batches = vgg.get_batches(path+'train', batch_size=batch_size) val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2) vgg.finetune(batches) vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
nbs/lesson1.ipynb
roebius/deeplearning_keras2
apache-2.0
How big is the data?
print("There are " + str(len(titanic_data)) + " rows in our dataset. \n") print("The column names are " + str(list(titanic_data.columns.values)))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Question 2 What's the average age of.. - any Titanic passenger - a survivor - a non-surviving first-class passenger - male survivors older than 30 from anywhere but Queenstown
Ages = np.mean(titanic_data.Age) print('Average age of all passengers with data is {}'.format(Ages)) survivors = titanic_data[titanic_data.Survived == 1] srv_ages = np.mean(survivors.Age) print('Average survivor age is {}'.format(srv_ages)) first_dead = titanic_data[(titanic_data.Survived == 0) & (titanic_data.Pclass == 1)] first_dead_ages = np.mean(first_dead.Age) print('Average age of first-class passengers who died is {}'.format(first_dead_ages)) live_males = titanic_data[(titanic_data.Survived==1) & (titanic_data.Age>30) & (titanic_data.Sex=='male') & (titanic_data.Embarked!= 'Q')] live_males_ave = np.mean(live_males.Age) print('Average age of male survivors over 30 not from Queenstown is {}'.format(live_males_ave))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
Question 3 What's the most common.. - passenger class - port of Embarkation - number of siblings or spouses aboard for survivors
titanic_data['Pclass'].value_counts()
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
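The other two parts of Question 3 follow the same value_counts pattern as the passenger class. A sketch with hypothetical stand-in data (the real titanic_data is assumed to have Embarked and SibSp columns with these meanings):

```python
import pandas as pd

# Hypothetical stand-in for titanic_data, with the relevant columns only
titanic_data = pd.DataFrame({
    'Survived': [1, 0, 1, 1, 0, 1],
    'Embarked': ['S', 'C', 'S', 'Q', 'S', 'C'],
    'SibSp':    [0, 1, 0, 2, 0, 0],
})

# Most common port of embarkation
print(titanic_data['Embarked'].value_counts().idxmax())  # 'S'

# Most common number of siblings/spouses aboard, among survivors only
survivors = titanic_data[titanic_data.Survived == 1]
print(survivors['SibSp'].value_counts().idxmax())        # 0
```

value_counts sorts by frequency, so idxmax (or simply the first index entry) gives the mode of the column.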