When the deployment is done, you can verify the three Cloud Functions deployments via the Cloud Console UI. If the deployment succeeded, move on to the next section to upload audience data to Google Analytics via a JSONL file. Configure audience upload endpoint Different audience upload endpoint APIs have different configurations. F...
%%writefile cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector/config_api.json
{
  "MP": {
    "default": {
      "mpConfig": {
        "v": "1",
        "t": "event",
        "ec": "video",
        "ea": "play",
        "ni": "1",
        "tid": "UA-XXXXXXXXX-Y"
      }
    }
  }
}
packages/propensity/09.audience_upload.ipynb
google/compass
apache-2.0
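The mpConfig block above is just a set of Measurement Protocol query parameters. As a rough illustration of how such a hit is assembled (the cid value here is hypothetical, and tid must be your own property ID), using only the standard library:

```python
from urllib.parse import urlencode

# Parameters from the config above; "cid" (client ID) is a hypothetical addition.
mp_config = {"v": "1", "t": "event", "ec": "video",
             "ea": "play", "ni": "1", "tid": "UA-XXXXXXXXX-Y"}
hit = dict(mp_config, cid="555")
payload = urlencode(hit)
url = "https://www.google-analytics.com/collect?" + payload
print(url)
```

In the actual pipeline the connector builds and sends these hits for you; this sketch only shows how the config fields map onto the payload.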
Create audience list JSON files GMP and Google Ads Connector's Google Analytics Measurement Protocol pipeline requires the JSONL text format. The following cells help export the BigQuery table containing the audience list as a JSONL file to a Google Cloud Storage bucket. NOTE: This solution has a specific file naming requirement to work ...
configs = helpers.get_configs('config.yaml')
dest_configs = configs.destination
# GCP project ID
PROJECT_ID = dest_configs.project_id
# Name of BigQuery dataset
DATASET_NAME = dest_configs.dataset_name
# Google Cloud Storage Bucket name to store audience upload JSON files
# NOTE: The name should be same as indicated...
packages/propensity/09.audience_upload.ipynb
google/compass
apache-2.0
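As a minimal sketch of what the JSONL format itself looks like (the file name and field names here are made up, not the connector's required naming), each row is one JSON object on its own line:

```python
import json

# Hypothetical audience rows; real field names depend on the connector config.
audience = [{"client_id": "123.456", "event": "audience_upload"},
            {"client_id": "789.012", "event": "audience_upload"}]

with open("audience.jsonl", "w") as f:
    for row in audience:
        # JSONL: no enclosing array, no trailing commas, one object per line
        f.write(json.dumps(row) + "\n")
```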
Examples and tests
nHat
diff(nHat, t)
diff(lambdaHat, t)
diff(lambdaHat, t).components
diff(lambdaHat, t).subs(t,0).components
diff(lambdaHat, t, 2).components
diff(lambdaHat, t, 2).subs(t,0).components
diff(ellHat, t)
diff(nHat, t, 2)
diff(nHat, t, 3)
diff(nHat, t, 4)
diff(SigmaVec, t, 0)
SigmaVec.fdiff()
diff(SigmaVec, t, 1)
...
Waveforms/TestTensors.ipynb
moble/PostNewtonian
mit
SymPy can be a little tricky because it caches objects, which meant that the first implementation of this code silently changed tensors in place, without meaning to. Let's just check that our variables haven't changed:
display(T1, T2)
T3 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
display(T3)
T3.trace(0,1)
diff(T3, t, 1)
T3.symmetric
T3*ellHat
ellHat*T3
T1+T3
T1 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, nHat, coefficient=1)
display(T1)
display(T1.trace())
T1*T2
type(_)
import simpletensors...
Waveforms/TestTensors.ipynb
moble/PostNewtonian
mit
Let's get the data from the Seattle open data site. Data is published on an hourly basis from 2012 to now and can be found at the URL below. Make a pandas dataframe from this for our analysis.
url = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
df = pd.read_csv(url, parse_dates=True)
df.head()
df.shape
df.index = pd.DatetimeIndex(df.Date)
df.head()
df.drop(columns=['Date'], inplace=True)
df.head()
df['total'] = df['Fremont Bridge East Sidewalk'] + df['Fremont Bridge West S...
Intro to Data Science - Seattle Fremont Bridge.ipynb
suresh/notebooks
mit
Let's start with visualization to understand this data
df.resample('D').sum().plot()
df.resample('M').sum().plot()
df.groupby(df.index.hour).mean().plot()
Intro to Data Science - Seattle Fremont Bridge.ipynb
suresh/notebooks
mit
Let's look at this data on an hourly basis across days
pivoted_data = df.pivot_table('total', index=df.index.hour, columns=df.index.date)
pivoted_data.iloc[:5, :5]
# plot of these dates together
pivoted_data.plot(legend=False, alpha=0.1)
Intro to Data Science - Seattle Fremont Bridge.ipynb
suresh/notebooks
mit
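To see what pivot_table is doing here, a toy version with synthetic hourly data (48 hours of fake counts, not the bridge data) behaves the same way: rows become hours of the day and columns become calendar dates.

```python
import pandas as pd

# Two full days of hourly timestamps with made-up counts 0..47
idx = pd.date_range("2020-01-01", periods=48, freq="H")
toy = pd.DataFrame({"total": range(48)}, index=idx)

# Same call shape as above: values='total', rows = hour of day, columns = date
pivot = toy.pivot_table("total", index=toy.index.hour, columns=toy.index.date)
print(pivot.shape)
```

The result is a 24-row table (one per hour) with one column per day, which is exactly the shape that makes the per-day line plot possible.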
Let's do unsupervised learning to untangle this chart
from sklearn.decomposition import PCA
X = pivoted_data.fillna(0).T.values
X.shape
pca = PCA(2, svd_solver='full').fit(X)
X_PCA = pca.transform(X)
X_PCA.shape
plt.scatter(X_PCA[:, 0], X_PCA[:, 1])
dayofweek = pd.DatetimeIndex(pivoted_data.columns).dayofweek
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=dayofweek, cmap=...
Intro to Data Science - Seattle Fremont Bridge.ipynb
suresh/notebooks
mit
Let's do a simple clustering to identify these two groups
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(2)
gmm.fit(X_PCA)
labels = gmm.predict(X_PCA)
labels
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=labels, cmap='rainbow')
Intro to Data Science - Seattle Fremont Bridge.ipynb
suresh/notebooks
mit
Load a data frame
df = pd.read_csv('../datasets/iris.csv') df.describe(include='all')
notebooks/Quickstart with categoricals.ipynb
fdion/stemgraphic
mit
Select a column with text.
stem_graphic(list(df['species'].values));
notebooks/Quickstart with categoricals.ipynb
fdion/stemgraphic
mit
Load data and model You should already have downloaded the TinyImageNet-100-A and TinyImageNet-100-B datasets along with the pretrained models. Run the cell below to load (a subset of) the TinyImageNet-100-B dataset and one of the models that was pretrained on TinyImageNet-100-A. TinyImageNet-100-B contains 50,000 trai...
# Load the TinyImageNet-100-B dataset
from cs231n.data_utils import load_tiny_imagenet, load_models
tiny_imagenet_b = 'cs231n/datasets/tiny-imagenet-100-B'
class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_b)
# Zero-mean the data
mean_img = np.mean(X_train, axis=...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
TinyImageNet-100-B classes In the previous assignment we printed out a list of all classes in TinyImageNet-100-A. We can do the same on TinyImageNet-100-B; if you compare with the list in the previous exercise you will see that there is no overlap between the classes in TinyImageNet-100-A and TinyImageNet-100-B.
for names in class_names:
    print ' '.join('"%s"' % name for name in names)
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
Visualize Examples Similar to the previous exercise, we can visualize examples from the TinyImageNet-100-B dataset. The images are similar to TinyImageNet-100-A, but the images and classes in the two datasets are disjoint.
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(class_names), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
    train_idxs, = np.nonzero(y_train == class_idx)
    train_idxs = np.random.choice(train_idxs, size...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
Extract features ConvNets tend to learn generalizable high-level image features. For the five layer ConvNet architecture, we will use the (rectified) activations of the first fully-connected layer as our high-level image features. Open the file cs231n/classifiers/convnet.py and modify the five_layer_convnet function to...
from cs231n.classifiers.convnet import five_layer_convnet

# These should store extracted features for the training and validation sets
# respectively.
#
# More concretely, X_train_feats should be an array of shape
# (X_train.shape[0], 512) where X_train_feats[i] is the 512-dimensional
# feature vector extracted from X...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
kNN with ConvNet features A simple way to implement transfer learning is to use a k-nearest neighborhood classifier. However instead of computing the distance between images using their pixel values as we did in Assignment 1, we will instead say that the distance between a pair of images is equal to the L2 distance bet...
from cs231n.classifiers.k_nearest_neighbor import KNearestNeighbor

# Predicted labels for X_val using a k-nearest-neighbor classifier trained on
# the features extracted from X_train. knn_y_val_pred[i] = c indicates that
# the kNN classifier predicts that X_val[i] has label c.
knn_y_val_pred = None
##################...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
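The assignment's own KNearestNeighbor class is required above, but the same idea can be sketched with scikit-learn on stand-in feature vectors (random arrays here, not real ConvNet features):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X_train_feats = rng.randn(20, 512)      # stand-ins for 512-d ConvNet features
y_train = rng.randint(0, 5, size=20)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train_feats, y_train)
pred = knn.predict(X_train_feats)       # 1-NN on its own training set: each
print((pred == y_train).mean())         # point's nearest neighbor is itself
```

The distance is computed in feature space, not pixel space, which is the whole point of the transfer-learning setup.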
Visualize neighbors Recall that the kNN classifier computes the distance between all of its training instances and all of its test instances. We can use this distance matrix to help understand what the ConvNet features care about; specifically, we can select several random images from the validation set and visualize t...
dists = knn.compute_distances_no_loops(X_val_feats)
num_imgs = 5
neighbors_to_show = 6
query_idxs = np.random.randint(X_val.shape[0], size=num_imgs)
next_subplot = 1
first_row = True
for query_idx in query_idxs:
    query_img = X_val[query_idx] + mean_img
    query_img = query_img.transpose(1, 2, 0).astype('uint8')
    ...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
Softmax on ConvNet features Another way to implement transfer learning is to train a linear classifier on top of the features extracted from our pretrained ConvNet. In the cell below, train a softmax classifier on the features extracted from the training set of TinyImageNet-100-B and use this classifier to predict on t...
from cs231n.classifiers.linear_classifier import Softmax

softmax_y_train_pred = None
softmax_y_val_pred = None
################################################################################
# TODO: Train a softmax classifier to predict a TinyImageNet-100-B class from  #
# features extracted from our pretrained Conv...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
Fine-tuning We can improve our classification results on TinyImageNet-100-B further by fine-tuning our ConvNet. In other words, we will train a new ConvNet with the same architecture as our pretrained model, and use the weights of the pretrained model as an initialization to our new model. Usually when fine-tuning you ...
from cs231n.classifier_trainer import ClassifierTrainer

# Make a copy of the pretrained model
model_copy = {k: v.copy() for k, v in model.iteritems()}
# Initialize the weights of the last affine layer using the trained weights from
# the softmax classifier above
model_copy['W5'] = softmax.W.T.copy().astype(model_copy...
cs231n/assignment3/q3.ipynb
Alexoner/mooc
apache-2.0
Now, instead of opening it as a simple text file, we will load the lines in the JSON file into a dictionary object. Let us read the data set using the JavaScript Object Notation (json) module (we will not cover this topic).
import json
data_file = 'usagov_bitly_data2012-03-16-1331923249.txt'
records = [json.loads(line) for line in open(data_file)]
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Thanks to the json module, the variable records is now a list of dictionaries, imported from the JSON form.
print("Variable records is {} and its elements are {}.".format(type(records),type(records[0])))
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Let us have a look at the first element: As said before, this is a typical dictionary structure with keys and values. Find out the keys: Find out the value for the key 'tz' in this first record: In this case (and we were not supposed to know this) 'tz' stands for time zone. Suppose we are interested in identifying the ...
import collections
print("First counter is of ", type(counter))
# Generate an instance of the Counter class using our counter variable
counter = collections.Counter(counter)
print("Now counter is of ", type(counter))
# The Counter class has new useful functionalities
counter.most_common(10)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
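The counting pattern in isolation, with a toy list standing in for the extracted 'tz' values:

```python
import collections

timezones = ["America/New_York", "", "Europe/Madrid",
             "America/New_York", "America/New_York"]
counter = collections.Counter(timezones)
# most_common(n) returns the n most frequent (value, count) pairs
print(counter.most_common(2))
```

Note that the empty string is counted like any other value, which is exactly the blank-label issue that shows up in the plot later.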
The pandas alternative Now, let us do the same work using pandas. The main pandas data structure is the DataFrame. It can be seen as a representation of a table or spreadsheet of data. First, we will create the DataFrame from the original data file:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
myframe = DataFrame(records)
myframe
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
myframe is now a DataFrame, a class introduced by pandas to work efficiently with structured data. Check the type:
type(myframe)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
The DataFrame object is composed of Series (another pandas object), which can be seen as the columns of a spreadsheet. For instance, myframe['tz'].
type(myframe['tz'])
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Check the time zones ('tz') in the first ten records of myframe. The Series object has a useful method: value_counts:
tz_counter = myframe['tz'].value_counts()
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
In one line of code, all the timezones are grouped and accounted for. Check the result and get the top 5 time zones: Far fewer lines of work, right? As we have said repeatedly, there is no need to reinvent the wheel. Probably someone out there solved your problem before you ran into it and, unless you are really really ...
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure.
%matplotlib inline
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Pandas can call matplotlib directly without importing the module explicitly. We will make a histogram-style plot:
tz_counter[:10].plot(kind='barh', rot=0)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
It is kind of odd to realize that the second most popular timezone has a blank label. The DataFrame object of pandas has a fillna method that can replace missing (NA) values; empty strings can then be replaced with boolean indexing:
# First we generate a new Series from a column of myframe; where a value is NaN, insert the word 'Missing'
clean_tz = myframe['tz'].fillna('Missing')
# In this new Series replace EMPTY VALUES with the word 'Unknown'. Try to understand BOOLEAN INDEXING
clean_tz[clean_tz == ''] = 'Unknown'
# Use the method VALUE_COUNTS to gen...
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
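The same cleanup on a self-contained toy Series (standing in for myframe['tz']):

```python
import numpy as np
import pandas as pd

tz = pd.Series(["Europe/Madrid", np.nan, "", "Europe/Madrid"])
clean_tz = tz.fillna("Missing")        # NaN values become 'Missing'
clean_tz[clean_tz == ""] = "Unknown"   # boolean indexing replaces empty strings
counts = clean_tz.value_counts()
print(counts)
```

fillna only touches true NA values; the empty string is a perfectly valid string to pandas, which is why the second, boolean-indexing step is needed.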
Let's complicate the example a bit more. The 'a' field in the datasheet contains information on the browser used to perform the URL shortening. For example, check the content of the 'a' field in the first record of myframe: Let's generate a Series from the datasheet containing all the browser data: try to understand th...
browser = Series([x.split()[0] for x in myframe.a.dropna()])
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
As we did with the time zones, we can use the value_counts method on the browser Series to see the most common browsers: Let's decompose the top time zones into people using Windows and not using Windows. This piece of code requires some more knowledge of pandas, skip the details for now:
cframe = myframe[myframe.a.notnull()]
os = np.where(cframe['a'].str.contains('Windows'), 'Windows', 'Not Windows')
tz_and_os = cframe.groupby(['tz', os])
agg_counter = tz_and_os.size().unstack().fillna(0)
agg_counter[:10]
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
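The groupby / size / unstack chain can be seen in miniature on a synthetic frame (made-up user-agent strings):

```python
import numpy as np
import pandas as pd

cframe = pd.DataFrame({
    "tz": ["Europe/Madrid", "Europe/Madrid", "America/New_York"],
    "a": ["Mozilla (Windows NT)", "Mozilla (Linux)", "Mozilla (Windows NT)"],
})
os = np.where(cframe["a"].str.contains("Windows"), "Windows", "Not Windows")
# group by (time zone, OS label), count rows, spread the OS level into columns
agg = cframe.groupby(["tz", os]).size().unstack().fillna(0)
print(agg)
```

unstack pivots the inner grouping level into columns, and fillna(0) fills the (tz, os) combinations that never occurred.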
Let's select the top overall time zones. To do so, we construct an indirect index array from the row counts in agg_counter
# Use argsort to sort in ascending order
indexer = agg_counter.sum(1).argsort()
indexer[:10]
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Next, use take to select the rows in that order and then slice off the last 10 rows:
count_subset = agg_counter.take(indexer)[-10:]
count_subset
count_subset.plot(kind='barh', stacked=True)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Same plot, but percentages instead of absolute numbers
subset_normalized = count_subset.div(count_subset.sum(1), axis=0)
subset_normalized.plot(kind='barh', stacked=True)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
After this example, we will go through the basics of pandas. DataFrame and Series The two basic data structures introduced by pandas are DataFrame and Series.
from pandas import Series, DataFrame import pandas as pd
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Series A Series is a one-dimensional array-like object containing an array of data (any NumPy data type is fine) and an associated array of data labels, called the index. The simplest Series one can think of would be formed only by an array of data:
o1 = Series([-4, 7, 11, 13, -22]) o1
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Notice that the representation of the Series shows the index on the left and the values on the right. No index was specified when the Series was created and a default one has been assigned: integer numbers from 0 to N-1 (N being the length of the data array).
o1.values o1.index
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
If we need to specify the index:
o2 = Series([7, 0.2, 11.3, -5], index=['d','e','a','z']) o2 o2.index
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Unlike NumPy arrays, we can use values in the index when selecting single values from a set of values:
o2['e']
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
The following is also equivalent:
o2.e
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Values corresponding to two indices (notice the double square brackets!):
o2[['z','d']]
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
NumPy array operations, such as masking using a boolean array, scalar broadcasting, or applying mathematical functions, preserve the index-value link:
o2[o2 > 0]  # filter positive elements in o2, the indices are conserved. Compare with the same operation in a NumPy array!!!
o2*np.pi
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
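A standalone version of the same point: filtering and broadcasting keep the index-value link.

```python
import pandas as pd

s = pd.Series([7, 0.2, 11.3, -5], index=["d", "e", "a", "z"])
positive = s[s > 0]   # boolean mask: values drop out, surviving labels stay attached
doubled = s * 2       # scalar broadcasting: labels are untouched
print(positive.index.tolist(), doubled["z"])
```

Contrast with a plain NumPy array, where a boolean mask returns a new array and the original positions are lost.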
Pandas Series have also been described as fixed-length, ordered dictionaries, since a Series actually maps index values to data values. Many functions that expect a dict can be used with a Series:
'z' in o2
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
A Python dictionary can be used to create a pandas Series, here is a list of the top 5 most populated cities (2015) according to Wikipedia:
pop_data = {'Shanghai': 24150000, 'Karachi': 23500000, 'Beijing': 21516000,
            'Tianjin': 14722000, 'Istanbul': 14377000}
print("pop_data is of type ", type(pop_data))
ser1 = Series(pop_data)
print("ser1 is of type ", type(ser1))
print("Indices of the Series are: ", ser1.index)
print("Values of the Series are: ", ser1.value...
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
As you just checked, when passing the dictionary, the resulting Series uses the dict keys as indices and sorts the values according to the index. In the next case we create a Series from a dictionary but selecting the indices we are interested in:
cities = ['Karachi', 'Istanbul', 'Beijing', 'Moscow'] ser2 = Series(pop_data, index=cities) ser2
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Note that the values found in pop_data have been placed in the appropriate locations. No data was found for 'Moscow' and the value NaN is assigned. This is used in pandas to mark missing or not available (NA) values. In order to detect missing data in pandas, one should use the isnull and notnull (both present as functions ...
pd.isnull(ser2) ser2.isnull() ser2.notnull()
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
An important feature of Series to be highlighted here is that Series are automatically aligned when performing arithmetic operations. It doesn't make much sense to add the population data but...
ser1 + ser2
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
We can assign names to both the Series object and its index using the name attribute:
ser1.name = 'population' ser1.index.name = 'city' ser1
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
DataFrame The DataFrame object represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type. The DataFrame has both a row and column index and it can be seen as a dictionary of Series. Under the hood, the data is stored as one or more 2D ...
data = {'city': ['Madrid', 'Madrid', 'Madrid', 'Barcelona', 'Barcelona',
                 'Sevilla', 'Sevilla', 'Girona', 'Girona', 'Girona'],
        'year': ['2002', '2006', '2010', '2006', '2010', '2002', '2010', '2002', '2006', '2010'],
        'pop': [5478405, 5953604, 6373532, 5221848, 5488633, 1732697, 1902956, 568690, 668911, 741841]}
po...
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
The resulting DataFrame has automatically assigned indices and the columns are sorted. This order can be altered if we specify a sequence of columns:
DataFrame(data, columns=['year','city','pop'])
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
You should keep in mind that indices in the DataFrame are Index objects, they have attributes (such as name as we will see) and are immutable. What will happen if we pass a column that is not contained in the data set?
pop_frame2 = DataFrame(data, columns=['year','city','pop','births']) pop_frame2
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
A column in a DataFrame can be retrieved as a Series in two ways: using dict-like notation
pop_frame2['city']
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
using the DataFrame attribute
pop_frame2.city
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Columns can be modified by assignment. Let's get rid of those NA values in the births column:
pop_frame2['births'] = 100000 pop_frame2
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
When assigning lists or arrays to a column, the values' length must match the length of the DataFrame. If we assign a Series it will be instead conformed exactly to the DataFrame's index, inserting missing values in any holes:
birth_series = Series([100000, 15000, 98000], index=[0, 2, 3])
pop_frame2['births'] = birth_series
pop_frame2
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Assigning a column that doesn't exist will result in creating a new column. Columns can be deleted using the del command:
pop_frame2['Catalunya'] = ((pop_frame2.city == 'Barcelona') | (pop_frame2.city == 'Girona'))
pop_frame2
del pop_frame2['Catalunya']
pop_frame2.columns
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Alternatively, the DataFrame can be built from a nested dict of dicts:
pop_data = {'Madrid': {'2002': 5478405, '2006': 5953604, '2010': 6373532},
            'Barcelona': {'2006': 5221848, '2010': 5488633},
            'Sevilla': {'2002': 1732697, '2010': 1902956},
            'Girona': {'2002': 568690, '2006': 668911, '2010': 741841}}
pop_frame3 = DataFrame(pop_data)
pop_frame3
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
The outer dict keys act as the columns and the inner keys as the unioned row indices. Possible data inputs to construct a DataFrame: a 2D NumPy array; a dict of arrays, lists, or tuples; a dict of Series; a dict of dicts; a list of dicts or Series; a list of lists or tuples; another DataFrame; a NumPy masked array. As in Series, the index and co...
pop_frame3.columns.name = 'city'
pop_frame3.index.name = 'year'
pop_frame3
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Similarly, the values attribute returns the data contained in the DataFrame as a 2D array:
pop_frame3.values
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Basic functionality We will not cover all the possible operations using Pandas and the related data structures. We will try to cover some of the basics. Reindexing A critical method in pandas is reindex. This implies creating a new object with the data of a given structure but conformed to a new index. For instance: E...
# Inverting the row indices and adding some more years
years = ['2010', '2008', '2006', '2004', '2002']
pop_frame4 = pop_frame3.reindex(years)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
If instead we want to reindex the columns, we need to use the columns keyword:
cities = ['Madrid', 'Sevilla','Barcelona', 'Girona'] pop_frame4 = pop_frame4.reindex(columns=cities)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Both at once:
pop_frame4 = pop_frame3.reindex(index = years, columns = cities)
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
reindex function arguments. Dropping entries from an axis: from the Barcelona population Series, let's get rid of years 2002 and 2008:
pop_bcn2.drop(['2002', '2008'])
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Check that the object was not modified. In DataFrame, index values can be deleted from both axes. Use the IPython help to find out the use of drop and get rid of all the data related to Madrid and year 2002: Indexing, selection, and filtering Series Indexing in Series works similarly to NumPy array indexing. The main d...
pop_bcn2.index.name = 'year'
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Population in Barcelona in year 2006? Boolean indexing, non-zero data in Barcelona? Important: slicing with labels behaves differently from normal Python slicing: the endpoint is inclusive! Give it a try:
pop_bcn2[:2]
pop_bcn2['2002':'2006']
notebooks/Pandas_Github_Day3.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
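The inclusive endpoint can be checked on a throwaway Series with string year labels:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4], index=["2002", "2004", "2006", "2008"])
print(len(s[:2]))              # integer slice: positional, endpoint excluded
print(len(s["2002":"2006"]))   # label slice: endpoint included
```

The positional slice returns two elements, while the label slice from '2002' to '2006' returns three, because the stop label itself is kept.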
Here we will create a QTable which is a quantity table. The first column is the redshift of the source, the second is the star formation rate (SFR), the third is the stellar mass and the fourth is the rotational velocity. Since it is a QTable, we can define the units for each of the entries, if we like.
a = [0.10, 0.15, 0.2]
b = [10.0, 2.0, 100.0] * u.M_sun / u.yr
c = [1e10, 1e9, 1e11] * u.M_sun
d = [150., 100., 2000.] * u.km / u.s
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
When we create the table, we can assign the names to each column
t = QTable([a, b, c, d], names=('redshift', 'sfr', 'stellar_mass', 'velocity'))
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can display the table by typing the variable into the notebook
t
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can display the information about the table (i.e. data type, units, name)
t.info
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can view a specific column by calling its name.
t['sfr']
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can also group subsets of columns.
t['sfr','velocity']
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can call a specific row, in this case the first row, which shows all the entries for the first object. Remember python indexing starts with zero!
t[0]
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can write to a standard format if we like; binary FITS tables are often used for this. It could be an ASCII file (either space, tab, or comma delimited) or whatever your favorite sharable file format is. There is a whole list of supported formats that can be read in, many astronomy specific.
t.write('my_table.fits', overwrite=True)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
You can read it in just as easily too. As you can see, the units are stored as well, as long as you are using a QTable.
all_targets = QTable.read("my_table.fits") all_targets
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
We can add new entries to the table. In this case specific star formation rate (sSFR).
t['ssfr'] = t['sfr'] / t['stellar_mass'] t
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Maybe we are unhappy using years and we want to convert to Gyr. There is a whole list of available units here.
t['ssfr'].to(1./u.gigayear)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
If we want to estimate the H-alpha line luminosity from the SFR, we can use an empirical relation (such as the one from Kennicutt et al. 1998).
t['lum'] = (t['sfr'] * u.erg / u.s )/(7.9e-42 * u.Msun / u.yr) t
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Say we want to go from erg to photons, which are both "energy". You can see it is not quite as straightforward, as the following error shows.
(t['lum']).to(u.ph / u.s)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
In order to convert to photons/s properly, we need to check the units. Our beginning and ending units match (i.e. energy/time); we just need to account for the energy of the photon, which depends on the wavelength. $$ E_{phot} = \frac{h c}{\lambda} $$ (1) As an exercise for the reader, modify the above table to estima...
import numpy as np
from photutils import datasets

hdu = datasets.load_star_image()
image = hdu.data[500:700, 500:700].astype(float)
head = hdu.header
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
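Plugging numbers into equation (1) with plain floats (constants hardcoded here instead of taken from astropy, and the H-alpha wavelength and line luminosity chosen purely for illustration):

```python
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m / s

wavelength = 656.28e-9        # H-alpha wavelength, m (illustrative choice)
e_phot = h * c / wavelength   # energy per photon, J

luminosity_erg = 1e40                            # hypothetical line luminosity, erg / s
photon_rate = (luminosity_erg * 1e-7) / e_phot   # erg -> J, then photons / s
print(e_phot, photon_rate)
```

With astropy units the same conversion is handled by attaching u.photon / E_phot as an equivalency rather than doing the arithmetic by hand.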
The type function is useful for determining what data type you are dealing with, especially if it is an unfamiliar one.
print(type(hdu))
print(type(image))
print(type(head))
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Assuming your background is uniform (and it most certainly is not), here is a rough background subtraction. This may not work for all astronomical fields (e.g. clusters, low surface brightness features, etc.). There are more sophisticated ways to deal with the background subtraction; see the Background Estimation docum...
image -= np.median(image)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
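The idea in one dimension, with the standard library standing in for np.median (toy pixel values, not the star image):

```python
import statistics

pixels = [10.0, 11.0, 12.0, 13.0, 200.0]   # flat sky plus one bright source
background = statistics.median(pixels)     # the median is robust to the bright outlier
subtracted = [p - background for p in pixels]
print(subtracted)
```

A mean would be pulled up by the source; the median estimates the sky level, so after subtraction the sky sits near zero and the source stands out.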
We use DAOStarFinder to find all of the objects in the image. The FWHM (Full Width at Half Maximum) is a way of measuring the size of the source. Often for unresolved sources, like stars, the FWHM should be matched to the resolution of the telescope (or the seeing). The threshold is the value above the background in whic...
from photutils import DAOStarFinder
from astropy.stats import mad_std

bkg_sigma = mad_std(image)
daofind = DAOStarFinder(fwhm=4., threshold=3.*bkg_sigma)
sources = daofind(image)
for col in sources.colnames:
    sources[col].info.format = '%.8g'  # for consistent table output
print(sources[:10])
print(sources.info)
f...
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Astroquery Astroquery is an Astropy package for querying astronomical databases. There are many, many, many catalogs and image services. Astroquery can be used to find catalogs and images. In this tutorial, we will show a few different databases and ways to search for data. NED query NED is the NASA/IPAC Extragalaci...
from astroquery.ned import Ned
from astropy import coordinates
import astropy.units as u

result_table = Ned.query_object("m82")
print(result_table)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Print the columns returned from the search.
print(result_table.keys())
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Print the redshift of the source
print(result_table['Redshift'])
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Let's now search for objects around M82.
result_table = Ned.query_region("m82", radius=1 * u.arcmin) print(result_table)
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Gaia query Gaia is a space observatory designed to observe the precise positions of stars in order to determine their distances and motions within the galaxy. Gaia star coordinates are the new gold standard for astrometry. Being able to query their database to correct your catalogs and images to their coordinate s...
from astroquery.gaia import Gaia

coord = coordinates.SkyCoord(ra=280, dec=-60, unit=(u.degree, u.degree), frame='icrs')
radius = u.Quantity(1.0, u.arcmin)
j = Gaia.cone_search_async(coord, radius)
r = j.get_results()
r.pprint()
print(r.keys())
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
IRSA query of 2MASS IRSA is the NASA/IPAC Infrared Science Archive. IRSA typically has an infrared mission focus, such as 2MASS, Herschel, Planck, Spitzer, WISE, etc...
from astroquery.irsa import Irsa Irsa.list_catalogs() table = Irsa.query_region("m82", catalog="fp_psc", spatial="Cone",radius=10 * u.arcmin) table table.keys()
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
SDSS query SDSS, or the Sloan Digital Sky Survey, is a massive survey of the sky which originally consisted of optical imaging and spectroscopy. It has been operating for over 20 years, well beyond the originally planned survey, as it has expanded to near-infrared observations of stars and IFUs (integral field units) for obs...
from astroquery.sdss import SDSS
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Coordinates of your favorite object, in this case M82, in degrees.
ra, dec = 148.969687, 69.679383 co = coordinates.SkyCoord(ra=ra, dec=dec,unit=(u.deg, u.deg), frame='fk5') xid = SDSS.query_region(co, radius=10 * u.arcmin) # print the first 10 entries print(xid[:10]) print(xid.keys()) print(xid['ra','dec'][:10]) # print the first 10 entries
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Skyview Skyview is a virtual observatory that is connected to several surveys and imaging data sets spanning multiple wavelengths. It can be a handy way to get quick access to multi-wavelength imaging data for your favorite object.
from astroquery.skyview import SkyView SkyView.list_surveys() pflist = SkyView.get_images(position='M82', survey=['SDSSr'],radius=10 * u.arcmin) ext = 0 pf = pflist[0] # first element of the list, might need a loop if multiple images m82_image = pf[ext].data
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Plot the image.
ax = plt.subplot() ax.imshow(m82_image, cmap='gray_r', origin='lower', vmin=-10, vmax=20) ax.set_xlabel('X (pixels)') ax.set_ylabel('Y (pixels)')
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Plot the image in RA and Declination coordinate space, instead of pixels. The WCS (or World Coordinate System) in a FITS file has a set of keywords that define the coordinate system. Typical keywords are the following: CRVAL1, CRVAL2 - RA and Declination (in degrees) coordinates of CRPIX1 and CRPIX2. CRPIX1, CRPIX2 - Pix...
from astropy.wcs import WCS
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Use the FITS header to define the WCS of the image.
head = pf[ext].header wcs = WCS(head) ax = plt.subplot(projection=wcs) ax.imshow(m82_image, cmap='gray_r', origin='lower', vmin=-10, vmax=20) #ax.grid(color='white', ls='solid') ax.set_xlabel('Right Ascension (J2000)') ax.set_ylabel('Declination (J2000)') #ax.scatter(xid['ra'],xid['dec'],marker="o",s=50,transform=ax....
MoreNotebooks/AstropySmorgasbord.ipynb
obscode/bootcamp
mit
Setup From the config, we'll grab the base URL and folders to be harvested.
baseurl = ARCGIS["SLIPFUTURE"]["url"] folders = ARCGIS["SLIPFUTURE"]["folders"] print("The base URL {0} contains folders {1}\n".format(baseurl, str(folders))) services = get_arc_services(baseurl, folders[0]) print("The services in folder {0} are:\n{1}\n".format(str(folders), str(services))) service_url = services[0] ...
Harvesting ArcGIS REST.ipynb
datawagovau/harvesters
mit
Harvest Hammertime? Hammertime.
harvest_arcgis_service(services[0], ckan, owner_org_id = ckan.action.organization_show(id="mrwa")["id"], author = "Main Roads Western Australia", author_email = "irissupport@mainroads.wa.gov.au", debug=Fal...
Harvesting ArcGIS REST.ipynb
datawagovau/harvesters
mit
Use a pretrained VGG model with our Vgg16 class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet t...
# As large as you can, but no larger than 64 is recommended. # If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this. #batch_size = 1 #batch_size = 4 batch_size = 64 # Import our class, and instantiate import vgg16; reload(vgg16) from vgg16 import Vgg16 vgg = Vgg16() # Grab a f...
nbs/lesson1.ipynb
roebius/deeplearning_keras2
apache-2.0
How big is the data?
print("There are " + str(len(titanic_data)) + " rows in our dataset. \n") print("The column names are " + str(list(titanic_data.columns.values)))
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
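The same size checks can be sketched on a toy DataFrame (hypothetical data, not the actual Titanic table), including `shape`, which gives rows and columns in one call:

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['Allen', 'Braund', 'Cumings'],
    'Age': [29.0, 22.0, 38.0],
    'Survived': [1, 0, 1],
})

print("There are {} rows in our dataset.".format(len(df)))
print("The column names are {}".format(list(df.columns.values)))
print(df.shape)  # (3, 3): rows and columns together
```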
Question 2 What's the average age of.. - any Titanic passenger - a survivor - a non-surviving first-class passenger - male survivors older than 30 from anywhere but Queenstown
Ages = np.mean(titanic_data.Age) print('Average age of all passengers with data is {}'.format(Ages)) survivors = titanic_data[titanic_data.Survived == 1] srv_ages = np.mean(survivors.Age) print('Average survivor age is {}'.format(srv_ages)) first_dead = titanic_data[(titanic_data.Survived == 0) & (titanic_data.Pclas...
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
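The boolean-mask pattern used above can be sketched on a toy DataFrame (made-up values, same column names as the Titanic table):

```python
import pandas as pd

df = pd.DataFrame({
    'Age': [22.0, 38.0, 26.0, 35.0, 54.0],
    'Survived': [0, 1, 1, 1, 0],
    'Pclass': [3, 1, 3, 1, 1],
})

# Mean age of everyone, then of survivors only, via a boolean mask
print(df['Age'].mean())                  # 35.0
survivors = df[df['Survived'] == 1]
print(survivors['Age'].mean())           # 33.0

# Combine conditions with & (note the parentheses around each comparison)
first_dead = df[(df['Survived'] == 0) & (df['Pclass'] == 1)]
print(first_dead['Age'].mean())          # 54.0
```

Each mask is just a boolean Series, so conditions chain with `&` and `|` rather than Python's `and`/`or`.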
Question 3 What's the most common.. - passenger class - port of Embarkation - number of siblings or spouses aboard for survivors
titanic_data['Pclass'].value_counts()
Statistical Titanic - Step 1.ipynb
muniri92/Echo-Pod
mit
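`value_counts` sorts by frequency, so the most common value is the first index entry; `idxmax()` or `mode()` give the same answer directly. A toy sketch (made-up class labels):

```python
import pandas as pd

pclass = pd.Series([3, 1, 3, 2, 3, 1, 3])

counts = pclass.value_counts()
print(counts)            # frequencies, sorted descending: 3 appears 4 times
print(counts.idxmax())   # most common class -> 3
print(pclass.mode()[0])  # same answer via mode()
```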