Columns: markdown, code, output, license, path, repo_name
From the above graph we can infer that the majority of people are high-school graduates, and this holds for both males and females. Bivariate Analysis
# Pairplot of all variables (assumes seaborn imported as sns)
sns.pairplot(df)
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
**In the plot above, scatter diagrams are drawn for every pair of numerical columns in the dataset. A scatter plot is a visual representation of the degree of correlation between two columns. Seaborn's pairplot function makes it easy to generate joint scatter plots for all the columns in the data.**
df.corr()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
Correlation Heatmap Normalizing and Scaling **Often the variables of a data set are on different scales, i.e. one variable is in millions and another only in hundreds. For example, in our data set INCOME has values in the thousands while AGE has just two digits. Since the data in these variables are on different scales, it is t...
# Scales the data. Essentially returns the z-scores of every attribute
df.head()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
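The scaling described above can be sketched with `scipy.stats.zscore`; the toy frame below stands in for the course data (the column names here are placeholders, not the notebook's actual frame).

```python
import pandas as pd
from scipy.stats import zscore

# Toy numeric columns standing in for INCOME, AGE, etc. (an assumption)
df = pd.DataFrame({"INCOME": [45000.0, 72000.0, 38000.0, 91000.0],
                   "AGE": [25.0, 41.0, 33.0, 57.0]})

# z-score: (x - mean) / std, so every column ends up on the same scale
df_z = df.apply(zscore)
print(df_z.round(3))
```

After this transformation each column has mean 0 and (population) standard deviation 1, which is exactly what "one scale" means here.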
**If you look at the variables INCOME, TRAVEL TIME and CAR AGE, all have now been normalized and scaled onto one scale.** ENCODING **One-hot encoding creates dummy variables that replace the categories of a categorical variable with one feature per category, represented as 1 or 0 based on the presence or a...
columns = ["MARITAL STATUS", "SEX", "EDUCATION", "JOB", "USE", "CAR TYPE", "CITY"]
dummies = pd.get_dummies(df[columns])  # (assumption: dummies built with pd.get_dummies)
df = pd.concat([df, dummies], axis=1)
# drop the original categorical columns from df
df.head()
_____no_output_____
MIT
M3 Advance Statistics/W2 EDA/EDA_Cars_Student_File.ipynb
fborrasumh/greatlearning-pgp-dsba
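One-hot encoding as described above can be sketched on a toy frame with `pd.get_dummies`; the column names below are placeholders mimicking the insurance data.

```python
import pandas as pd

# Toy frame standing in for the notebook's data (column names are assumptions)
df = pd.DataFrame({"SEX": ["M", "F", "F"],
                   "CAR TYPE": ["SUV", "Van", "SUV"]})

# One dummy column per category; 1/0 marks presence/absence of that category
dummies = pd.get_dummies(df, columns=["SEX", "CAR TYPE"])
print(dummies)
```

Each original categorical column expands into one column per level, so a three-row frame with 2 + 2 distinct categories yields four dummy columns.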
Analysing tabular data We are going to use a LIBRARY called numpy
import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
_____no_output_____
MIT
01-analysing-data.ipynb
onatemarta/thursday
Variables
weight_kg = 55
print(weight_kg)
print('Weight in pounds: ', weight_kg * 2.2)
weight_kg = 57.5
print('New weight: ', weight_kg * 2.2)
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
print(data)
print(type(data))
%whos
# Finding out the data type
print(data.dtype)
# Find out the shape
print ...
_____no_output_____
MIT
01-analysing-data.ipynb
onatemarta/thursday
print (triplesmallchunk)
print(triplesmallchunk)
print(numpy.mean(data))
print(numpy.max(data))
print(numpy.min(data))
# Get a set of data for the first station
station_0 = data[0, :]
print(numpy.max(station_0))
# We don't need to create 'temporary' array slices
# We can refer to what we call array axes
# axis = 0 gets the mean DOWN each...
_____no_output_____
MIT
01-analysing-data.ipynb
onatemarta/thursday
Task:
* Produce maximum and minimum plots of this data
* What do you think?
max_temperature = numpy.max(data, axis=0)
min_temperature = numpy.min(data, axis=0)
max_plot = matplotlib.pyplot.plot(max_temperature)
min_plot = matplotlib.pyplot.plot(min_temperature)
_____no_output_____
MIT
01-analysing-data.ipynb
onatemarta/thursday
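The task above can be sketched end to end on synthetic data; the array below stands in for `data/weather-01.csv`, and a headless matplotlib backend is used so the sketch runs anywhere.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend: no display needed
import matplotlib.pyplot as plt

# Synthetic stand-in for the weather file: 5 stations x 7 daily readings
data = np.arange(35, dtype=float).reshape(5, 7)

# axis=0 collapses the station axis, giving one max/min value per day
max_temperature = np.max(data, axis=0)
min_temperature = np.min(data, axis=0)

plt.plot(max_temperature, label="max")
plt.plot(min_temperature, label="min")
plt.legend()
plt.savefig("minmax.png")
```

With `axis=0` the result has one entry per column (here, per day), which is what makes the daily max/min curves plottable.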
import sys
IN_COLAB = 'google.colab' in sys.modules
print('Google Colab? ' + str(IN_COLAB))
if not IN_COLAB:
    #!python -m pip show tensorflow
    !which python
    !python -m pip show tensorflow
    !pwd
from google.colab import drive
drive.mount("/content/gdrive")
!ls "/content/gdrive/My Drive/cancer_detection/metastatic_c...
_____no_output_____
MIT
Chapter 04/vgg19_all_images_25_epochs_colab_modelfit.ipynb
bpbpublications/Mastering-TensorFlow-2.x
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/applications/vgg19
# Imports
import numpy as np
import pandas as pd
from glob import glob
from skimage.io import imread
import os
import shutil
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
import tensorflow
from tensorflow.keras.preproce...
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:2035: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators. warnings.warn('`Model.predict_generator` is deprecated and '
MIT
Chapter 04/vgg19_all_images_25_epochs_colab_modelfit.ipynb
bpbpublications/Mastering-TensorFlow-2.x
Convolutional Neural Networks: Step by Step. Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**: - Superscript $[l]$ denotes an object of the $l^{th}$...
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
_____no_output_____
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
2 - Outline of the Assignment. You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: - Convolution functions, including: - Zero Padding - Convolve window - Convolution forward ...
# GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1. Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- i...
x.shape = (4, 3, 3, 2) x_pad.shape = (4, 7, 7, 2) x[1,1] = [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] x_pad[1,1] = [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]]
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
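Zero padding as used by `zero_pad` can be sketched with `np.pad`; this is a minimal sketch (not the graded solution) that pads only the height and width axes of a `(m, n_H, n_W, n_C)` batch.

```python
import numpy as np

def zero_pad_sketch(X, pad):
    # Pad only axes 1 and 2 (height, width); batch and channel axes untouched
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  mode="constant", constant_values=0)

x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad_sketch(x, 2)
print(x.shape, "->", x_pad.shape)  # (4, 3, 3, 2) -> (4, 7, 7, 2)
```

A width/height of 3 with `pad=2` on each side gives 3 + 2·2 = 7, matching the expected shapes shown in the notebook.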
**Expected Output**: **x.shape**: (4, 3, 3, 2) **x_pad.shape**: (4, 7, 7, 2) **x[1,1]**: [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943]...
# GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight ...
Z = -6.99908945068
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
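The single convolution step can be sketched independently of the graded function: multiply the slice elementwise by the filter, sum everything, add the bias. The seed and shapes below assume the notebook's test cell uses the standard setup.

```python
import numpy as np

# Sketch of one convolution step (not the graded implementation)
def conv_single_step_sketch(a_slice_prev, W, b):
    return np.sum(a_slice_prev * W) + b[0, 0, 0]

# Seed and shapes assumed to match the notebook's test cell
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step_sketch(a_slice_prev, W, b)
print("Z =", Z)
```

Under that assumption this reproduces the expected scalar Z ≈ -6.999 shown above.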
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward pass. In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to...
# GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): """ Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shap...
Z's mean = 0.0489952035289 Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
**Expected Output**: **Z's mean** 0.0489952035289 **Z[3,2,1]** [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] **cache_conv...
# GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mod...
mode = max A = [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] mode = average A = [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]]
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
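The core of `pool_forward` reduces to what happens inside one window: take the max (max pooling) or the mean (average pooling). A minimal sketch over a single 2×2 window:

```python
import numpy as np

# Pooling over one f x f window of a single channel (a sketch, not pool_forward)
def pool_window_sketch(window, mode="max"):
    return np.max(window) if mode == "max" else np.mean(window)

window = np.array([[1.0, 4.0],
                   [2.0, 3.0]])
print(pool_window_sketch(window, "max"))      # 4.0
print(pool_window_sketch(window, "average"))  # 2.5
```

The full layer just slides this window across the input with a given stride and applies the same reduction per channel.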
**Expected Output:** A = [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] A = [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] Congratulations! You have now i...
def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv...
_____no_output_____
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
** Expected Output: ** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward pas...
def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. """ ###...
_____no_output_____
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
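The mask described in `create_mask_from_window` can be sketched as a single boolean comparison: True exactly at the maximum entry, which is where the gradient flows in max-pool backprop.

```python
import numpy as np

# Sketch: True at the max entry, False elsewhere
def create_mask_sketch(x):
    return x == np.max(x)

x = np.array([[1.62, -0.61, -0.53],
              [-1.07, 0.87, -2.30]])
mask = create_mask_sketch(x)
print(mask)
```

Only the max entry influenced the forward output, so only it receives the backward gradient.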
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backpro...
def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we dis...
_____no_output_____
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
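`distribute_value` for average-pool backprop can be sketched in one line: every cell of the window contributed equally to the forward average, so the gradient is spread evenly.

```python
import numpy as np

# Sketch: spread a scalar gradient evenly over an n_H x n_W window
def distribute_value_sketch(dz, shape):
    n_H, n_W = shape
    return np.full((n_H, n_W), dz / (n_H * n_W))

a = distribute_value_sketch(2.0, (2, 2))
print(a)  # [[0.5 0.5] [0.5 0.5]]
```

This matches the expected output shown in the notebook for dz = 2 over a 2×2 window.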
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for...
def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and h...
_____no_output_____
MIT
doc/courses/coursera/deep learning specialization/Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
junhan/learnmachinelearning
Imports
# Pandas, Numpy and Matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import all of nltk
import nltk
#nltk.download_shell()
_____no_output_____
Unlicense
notebooks/22_0_L_ExploratoryDataAnalysis.ipynb
luiservela/AstraZeneca
Get tagged words
# Set name of file
filename = '../data/interim/disease_tags.pkl'
# Read to DataFrame
df = pd.read_pickle(filename)
# Echo
df.head()
# Drop nulls, exclude start/end/disease_tag columns
tags = df['Id ont unique_id'.split()].dropna(axis=0)
# Rename fields, create combined field ont:unique_id
tags['summary_id'] = tags['...
_____no_output_____
Unlicense
notebooks/22_0_L_ExploratoryDataAnalysis.ipynb
luiservela/AstraZeneca
Create links between tags in same summary
links = set()
for index, record in df.iterrows():
    for tag1 in record['Tags']:
        for tag2 in record['Tags']:
            links.add((tag1, tag2))
len(links)

import csv
with open('Links_250.csv', 'w') as outfile:
    w = csv.writer(outfile, delimiter=',', quotechar='"')
    w.writerow(['Source', 'Target'])
    f...
_____no_output_____
Unlicense
notebooks/22_0_L_ExploratoryDataAnalysis.ipynb
luiservela/AstraZeneca
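The double loop above pairs every tag with every tag, including self-pairs and both orderings. If only distinct unordered pairs are wanted, `itertools.combinations` is one way to sketch it, shown here on toy stand-in data for the per-summary `Tags` lists.

```python
from itertools import combinations

# Toy stand-in for the per-summary 'Tags' lists (an assumption)
records = [["asthma", "copd", "flu"], ["copd", "flu"]]

links = set()
for tags in records:
    # distinct unordered pairs only; the original loop also adds
    # (tag, tag) self-links and both orderings of each pair
    links.update(combinations(sorted(tags), 2))

print(sorted(links))
```

Whether self-links belong in the edge list depends on how the downstream graph tool treats them; the original behavior is kept in the cell above.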
AI SATURDAYS DONOSTIA 2020 Regression of the "DeprRate" Indicator (Depression Index) - Cluster 1 Practical Project Team FACEMOOD
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from regresion_functions import *
%load_ext autoreload
%autoreload 2
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Dataset with 3 Clusters
df = pd.read_csv('../processed-data/cluster3_socialmedia_data.csv', index_col=0)
df.head()
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Creating the Depression Index
df["DeprRate"] = (df["LowMood"] + df["LossOfInt"] + df["Hopeless"]) / 3
df.head()
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
New Dataset
df2 = df[["ASMU", "News", "PSMU", "Stress", "Inferior", "Concentrat", "Loneliness", "Fatigue", "DeprRate", "Sintomas_Cluster3"]]
print(df2.head())
print("No. Filas/Columnas del Conjunto de Datos: {}".format(df2.shape))
ASMU News PSMU Stress Inferior \ Participant 115091 16.792208 15.012987 32.883117 37.441558 17.831169 131183 28.254237 11.593220 45.203390 16.898305 0.254237 438907 27.040816 34.645833 44.59...
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Cluster 1 Data
df3 = df2[df2["Sintomas_Cluster3"] == 1]
df3 = df3[["ASMU", "News", "PSMU", "Stress", "Inferior", "Concentrat", "Loneliness", "Fatigue", "DeprRate"]]
print(df3.head())
print("No. Filas/Columnas del Conjunto de Datos: {}".format(df3.shape))
ASMU News PSMU Stress Inferior \ Participant 115091 16.792208 15.012987 32.883117 37.441558 17.831169 438907 27.040816 34.645833 44.595745 25.000000 23.395833 680605 1.463158 14.631579 34.54...
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Descriptive statistics of the per-participant means for Cluster 1
df3.describe()
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Matrix Diagram for the Means of the 9 Variables
printMatrixDiagram(df3)  # Function defined in "regresion_functions"
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Pearson Correlations for the Means of the 9 Variables
printPearsonCorrelations(df3)  # Function defined in "regresion_functions"
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
The strongest correlations are observed between the following variables: DeprRate vs Loneliness, DeprRate vs Inferior, Loneliness vs Inferior. No multicollinearity is observed. Linear Regression for the Means: y = DeprRate, X = remaining variables
label = df3.DeprRate df3.drop('DeprRate', axis=1, inplace=True)
_____no_output_____
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Process of eliminating X variables that do not contribute significantly to explaining y
resultsummary = pd.DataFrame(data={'iteration': [], 'intercept': [], 'RMSE_Training': [], 'RMSE_Testing': [], 'R2_Training': [], 'R2_Testing': [], 'p_value_max': [], 'removed_var': []})
data_list_medias = calculateRegression(df3, label, resultsummary, alpha=0.15)  # Function defined in "...
iteration intercept RMSE_Training RMSE_Testing R2_Training R2_Testing \ 0 0.0 2.680 3.461 3.232 0.803 -0.225 1 1.0 2.228 3.468 3.121 0.802 -0.143 2 2.0 2.573 3.473 3.320 0.802 -0.293...
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Residual Analysis of the Final Model
fitt = data_list_medias[5]
standardized_residuals = data_list_medias[4]
residualAnalysis(fitt, standardized_residuals)  # Function defined in "regresion_functions"
Kolmogorov-Smirnov normality test statistic=0.107, pvalue=0.815 Probably Normal
MIT
scripts/Regresion_Depr_Rate_Cluster_1.ipynb
henrycorazza/AISaturdays-depresion-rrss
Collecting GPS coordinates from photo metadata. For orientation: This page is the main panel, and to the left you should see a 'file browser' panel listing some folders and files, including this one. You may need to click and drag the border between the two panes to better see the names. In particular you wa...
output_file = "coords.tsv"
# The following command is based on Phil Harvey's answer on January 20, 2011, 07:30:32 PM, &
# comment on January 20, 2011 to add the -n option, from
# http://u88.n24.queensu.ca/exiftool/forum/index.php?topic=3075.0
!exiftool -filename -gpslatitude -gpslongitude -T -n PUT_PHOTOS_HERE > {ou...
_____no_output_____
MIT
index.ipynb
fomightez/photo2GPS
Packages
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from tqdm import tqdm
import random
_____no_output_____
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
Initialize Data
DATADIR = "/DriveArchive1/NN_DATASETS/PetImages"
CATEGORIES = ["Dog", "Cat"]

for category in CATEGORIES:  # do dogs and cats
    path = os.path.join(DATADIR, category)  # create path to dogs and cats
    for img in os.listdir(path):  # iterate over each image per dogs and cats
        img_array = cv2.imread(os.path.jo...
_____no_output_____
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
Build Training Data
training_data = []

def create_training_data():
    for category in CATEGORIES:  # do dogs and cats
        path = os.path.join(DATADIR, category)  # create path to dogs and cats
        class_num = CATEGORIES.index(category)  # get the classification (0 or 1). 0=dog 1=cat
        for img in tqdm(os.listdir(path)):...
4%|▍ | 490/12501 [00:00<00:07, 1577.10it/s]Warning: unknown JFIF revision number 0.00 18%|█▊ | 2213/12501 [00:01<00:06, 1528.94it/s]Corrupt JPEG data: 226 extraneous bytes before marker 0xd9 39%|███▊ | 4835/12501 [00:03<00:05, 1501.41it/s]Corrupt JPEG data: 65 extraneous bytes before marker 0xd9...
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
**shuffle data**
random.shuffle(training_data)

for sample in training_data[:10]:
    print(sample[1])
0 0 0 0 1 1 1 0 1 0
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
Make a Model
X = []
y = []

for features, label in training_data:
    X.append(features)
    y.append(label)

print(X[0].reshape(-1, IMG_SIZE, IMG_SIZE, 1))

X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
[[[[ 52] [124] [134] ... [ 93] [196] [ 91]] [[123] [ 79] [129] ... [179] [101] [100]] [[127] [130] [ 93] ... [ 84] [ 92] [ 91]] ... [[141] [140] [141] ... [123] [ 92] [157]] [[132] [ 95] [125] ... [121] [160] [ 99]]...
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
**Export Data**
import pickle

pickle_out = open("X.pickle", "wb")
pickle.dump(X, pickle_out)
pickle_out.close()

pickle_out = open("y.pickle", "wb")
pickle.dump(y, pickle_out)
pickle_out.close()
_____no_output_____
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
**Import Data**
pickle_in = open("X.pickle", "rb")
X = pickle.load(pickle_in)

pickle_in = open("y.pickle", "rb")
y = pickle.load(pickle_in)
_____no_output_____
0BSD
tensorflow_intro/part2-loading_your_own_data.ipynb
pbrainz/intro-to-ml
Project: Identify Customer Segments. In this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highe...
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# magic word for producing visualizations in notebook
%matplotlib inline

'''
Import note: The classroom currently uses sklearn version 0.19. If you need to use an impute, it is a...
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Step 0: Load the Data. There are four files associated with this project (not including this one): - `Udacity_AZDIAS_Subset.csv`: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns). - `Udacity_CUSTOMERS_Subset.csv`: Demographics data for customers of a mail-order company...
# Load in the general demographics data.
azdias =

# Load in the feature summary file.
feat_info =

# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
> **Tip**: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut `esc --> a` (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and `esc --> b` adds a new cell after the active cell. If you need to convert an active cell to a markdown cel...
# Identify missing or unknown data values and convert them to NaNs.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Step 1.1.2: Assess Missing Data in Each Column. How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's [`hist()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) function to visualiz...
# Perform an assessment of how much missing data there is in each column of the
# dataset.

# Investigate patterns in the amount of missing data in each column.

# Remove the outlier columns from the dataset. (You'll perform other data
# engineering tasks such as re-encoding and imputation later.)
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Discussion 1.1.2: Assess Missing Data in Each Column (Double-click this cell and replace this text with your own text, reporting your observations regarding the amount of missing data in each column. Are there any patterns in missing values? Which columns were removed from the dataset?) Step 1.1.3: Assess Missing Data...
# How much data is missing in each row of the dataset?

# Write code to divide the data into two subsets based on the number of missing
# values in each row.

# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Discussion 1.1.3: Assess Missing Data in Each Row (Double-click this cell and replace this text with your own text, reporting your observations regarding missing data in rows. Are the data with lots of missing values qualitatively different from data with few or no missing values?) Step 1.2: Select and Re-Encode F...
# How many features are there of each data type?
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Step 1.2.1: Re-Encode Categorical Features. For categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following: - For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything. - There ...
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?

# Re-encode categorical variable(s) to be kept in the analysis.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Discussion 1.2.1: Re-Encode Categorical Features (Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding categorical features. Which ones did you keep, which did you drop, and what engineering steps did you perform?) Step 1.2.2: Engineer Mixed-Type Features. Ther...
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.

# Investigate "CAMEO_INTL_2015" and engineer two new variables.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Discussion 1.2.2: Engineer Mixed-Type Features (Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding mixed-value features. Which ones did you keep, which did you drop, and what engineering steps did you perform?) Step 1.2.3: Complete Feature Selection. In order...
# If there are other re-engineering tasks you need to perform, make sure you
# take care of them here. (Dealing with missing data will come in step 2.1.)

# Do whatever you need to in order to ensure that the dataframe only contains
# the columns that should be passed to the algorithm functions.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Step 1.3: Create a Cleaning Function. Even though you've finished cleaning up the general population demographics data, it's important to look ahead and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the...
def clean_data(df):
    """
    Perform feature trimming, re-encoding, and engineering for demographics data

    INPUT: Demographics DataFrame
    OUTPUT: Trimmed and cleaned demographics DataFrame
    """
    # Put in code here to execute all main cleaning steps:
    # convert missing value codes into Na...
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Step 2: Feature Transformation. Step 2.1: Apply Feature Scaling. Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll w...
# If you've not yet cleaned the dataset of all NaN values, then investigate and
# do that now.

# Apply feature scaling to the general population demographics data.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
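The feature-scaling step above can be sketched with sklearn's `StandardScaler`; the toy matrix below stands in for the cleaned demographics frame.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the cleaned demographics data (an assumption)
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
```

After fitting, each column has mean 0 and standard deviation 1, so the later PCA is not dominated by whichever feature happens to have the largest raw scale.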
Discussion 2.1: Apply Feature Scaling (Double-click this cell and replace this text with your own text, reporting your decisions regarding feature scaling.) Step 2.2: Perform Dimensionality Reduction. On your scaled data, you are now ready to apply dimensionality reduction techniques. - Use sklearn's [PCA](http://scikit-...
# Apply PCA to the data.

# Investigate the variance accounted for by each principal component.

# Re-apply PCA to the data while selecting for number of components to retain.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
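The PCA step above (fit, inspect explained variance, then pick a component count) can be sketched on synthetic data; the 90% threshold below is an example choice, not the project's required cutoff.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# 100 samples, 5 features; most variance deliberately on the first feature
X = rng.randn(100, 5) * np.array([10.0, 2.0, 1.0, 0.5, 0.1])

pca = PCA()
pca.fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)

# Smallest number of components explaining at least 90% of the variance
n_keep = int(np.argmax(cum >= 0.9)) + 1
print(n_keep, cum.round(3))
```

Plotting `cum` gives the usual scree-style curve; `n_keep` is where it first crosses the chosen threshold.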
Discussion 2.2: Perform Dimensionality Reduction (Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding dimensionality reduction. How many principal components / transformed features are you retaining for the next step of the analysis?) Step 2.3: Interpret Pri...
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.

# Map weights for the second principal component to corresponding feature names
# and then...
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Discussion 2.3: Interpret Principal Components (Double-click this cell and replace this text with your own text, reporting your observations from detailed investigation of the first few principal components generated. Can we interpret positive and negative values from them in a meaningful way?) Step 3: Clustering. Step...
# Over a number of different cluster counts...
# run k-means clustering on the data and...
# compute the average within-cluster distances.

# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship. ...
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
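The elbow procedure described above can be sketched with sklearn's `KMeans` on synthetic blobs; inertia divided by the sample count is used here as a stand-in for the average within-cluster distance.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)
# Two well-separated blobs, 50 points each (toy stand-in for the PCA-transformed data)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10])

# Inertia per sample for several cluster counts
scores = {}
for k in range(1, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    scores[k] = km.inertia_ / len(X)

print(scores)  # look for the 'elbow' where the drop levels off
```

For two real clusters, the score drops sharply from k=1 to k=2 and then flattens, which is the elbow one would pick.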
Discussion 3.1: Apply Clustering to General Population (Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding clustering. Into how many clusters have you decided to segment the population?) Step 3.2: Apply All Steps to the Customer Data. Now that you have cluste...
# Load in the customer demographics data.
customers =

# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Step 3.3: Compare Customer Data to Demographics Data. At this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distribution...
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.

# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?

# What kinds of people are part of a cluster ...
_____no_output_____
CC0-1.0
Identify_Customer_Segments.ipynb
OS-Shoaib/Customer-Segments-with-Arvato
Jieba Chinese word segmentation
import jieba

seg_list = jieba.cut("我来到北京清华大学")
print(' '.join(seg_list))
Building prefix dict from the default dictionary ... Dumping model to file cache C:\Users\Jan\AppData\Local\Temp\jieba.cache Loading model cost 0.935 seconds. Prefix dict has been built succesfully.
MIT
Task 2/Task 2.ipynb
yangjiada/NLP
Custom dictionary
jieba.load_userdict("dict.txt")
import jieba.posseg as pseg

test_sent = (
    "李小福是创新办主任也是云计算方面的专家; 什么是八一双鹿\n"
    "例如我输入一个带“韩玉赏鉴”的标题,在自定义词库中也增加了此词为N类\n"
    "「台中」正確應該不會被切開。mac上可分出「石墨烯」;此時又可以分出來凱特琳了。"
)
words = jieba.cut(test_sent)
' '.join(words)
_____no_output_____
MIT
Task 2/Task 2.ipynb
yangjiada/NLP
Keyword extraction based on the TF-IDF algorithm
sentence = """
《复仇者联盟4》上映16天,连续16天获得单日票房冠军,《何以为家》以优质的口碑正在冲击3亿票房,但市场大盘又再次回落至4千万元一天的水平,随着影片热度逐渐退却,靠它们“续命”的影院也重回经营窘境。
"""
import jieba.analyse
jieba.analyse.extract_tags(sentence, topK=20, withWeight=False, allowPOS=())
_____no_output_____
MIT
Task 2/Task 2.ipynb
yangjiada/NLP
Keyword extraction based on the TextRank algorithm
jieba.analyse.textrank(sentence, topK=20, withWeight=False, allowPOS=('ns', 'n', 'vn', 'v'))
_____no_output_____
MIT
Task 2/Task 2.ipynb
yangjiada/NLP
Copyright 2020 Google LLC.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Point Clouds for tensorflow_graphics. Initialization: clone repositories, and install requirements and the custom_op package
# Clone repositories
!rm -r graphics
!git clone https://github.com/schellmi42/graphics

# install requirements and load tfg module
!pip install -r graphics/requirements.txt

# install custom ops
!pip install graphics/tensorflow_graphics/projects/point_convolutions/custom_ops/pkg_builds/tf_2.2.0/*.whl
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Load modules
import sys
# (this is equivalent to export PYTHONPATH='$HOME/graphics:/content/graphics:$PYTHONPATH', but adds path to running session)
sys.path.append("/content/graphics")

# load point cloud module
# (this is equivalent to export PYTHONPATH='/content/graphics/tensorflow_graphics/projects/point_convolutions:$PYTHONPA...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Check if it loads without errors
import tensorflow as tf
import tensorflow_graphics as tfg
import pylib.pc as pc
import numpy as np

print('TensorFlow version: %s'%tf.__version__)
print('TensorFlow-Graphics version: %s'%tfg.__version__)
print('Point Cloud Module: ', pc)
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Example Code: 2D square point clouds using segmentation IDs. Here we create a batch of point clouds with a variable number of points per cloud from unordered points with an additional id tensor. The `batch_ids` are the segmentation ids, which indicate which point belongs to which point cloud in the batch. For more informat...
import numpy as np

def square(num_samples, size=1):
    # 2D square in 3D for easier visualization
    points = np.random.rand(num_samples, 2)*2-1
    return points*size

num_samples = 1000
batch_size = 10

# create numpy input data consisting of points and segmentation identifiers
points = square(num_samples)
batch_ids = np...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Create a batch of point hierarchies using sequential poisson disk sampling with pooling radii 0.1, 0.4, 2.
# numpy input parameters
sampling_radii = np.array([[0.1], [0.4], [2]])

# create tensorflow point hierarchy
point_hierarchy = pc.PointHierarchy(point_cloud, sampling_radii, 'poisson_disk')

# print information
num_levels = len(sampling_radii) + 1
p...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
assign a shape to the batch and look at the sizes again
point_hierarchy.set_batch_shape([2, 5])

print('%s point clouds of sizes:'%point_cloud._batch_size)
sizes = point_hierarchy.get_sizes()
for i in range(num_levels):
    print('level: ' + str(i))
    print(sizes[i].numpy())
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Visualize the levels of one example from the batch.
import matplotlib.pyplot as plt
%matplotlib inline

# which example from the batch to choose, can be 'int' or relative in [A1,...,An]
batch_id = [0,1]

curr_points = point_hierarchy.get_points(batch_id)

# plotting
plt.figure(figsize=[num_levels*5,5])
for i in range(num_levels):
    plt.subplot(1,num_levels,i+1)
    plt.pl...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
3D point clouds from input files using arbitrary batch sizes with paddingHere we create point clouds from input files using a zero padded representation of shape `[A1, .., An, V, D]`.Internally this is converted to a segmented representation. Loading from ASCII .txt files
import pylib.io as io

# SHREC15
#### get files ####
input_dir = 'graphics/tensorflow_graphics/projects/point_convolutions/test_point_clouds/SHREC15/'
filenames = tf.io.gfile.listdir(input_dir)
batch_size = len(filenames)
print('### batch size ###'); print(batch_size)
for i in range(batch_size):
    filenames[i] = inpu...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Loading vertices from mesh files
# Thingi10k meshes
#### get files ####
input_dir = 'graphics/tensorflow_graphics/projects/point_convolutions/test_point_clouds/meshes/'
filenames = tf.io.gfile.listdir(input_dir)
batch_size = len(filenames)
print('### batch size ###'); print(batch_size)
for i in range(batch_size):
    filenames[i] = input_dir+filenames...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Monte-Carlo Convolutions Create convolutions for a point hierarchy with MLPs as kernel
import numpy as np

### create random input data
num_pts = 1000
point_dim = 3
feature_dim = 3
batch_size = 10

# create random points
points = np.random.rand(num_pts,point_dim)
batch_ids = np.random.randint(0,batch_size,num_pts)
batch_ids[:batch_size] = np.arange(0,batch_size)  # ensure non-empty point clouds

# create ra...
_____no_output_____
Apache-2.0
tensorflow_graphics/projects/point_convolutions/pylib/notebooks/Introduction.ipynb
schellmi42/graphics
Lab 1: Overview, Review, and Environments ObjectivesIn this lab, we'll - Review the computational infrastructure around our data science environments,- Go through the process of ensuring that we have a Python environment set up for this class with the proper installed packages- Within our environment, we'll review the...
!echo $PATH
/Users/ipasha/anaconda3/bin:/Users/ipasha/anaconda3/condabin:/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/ipasha/anaconda3/bin:.
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
```{note}The "!" in my notebook allows me to run terminal commands from a notebook; you don't need this symbol when running commands in an actual terminal.``` I can check my python as follows:
!which python
/Users/ipasha/anaconda3/bin/python
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
We can see that calls to `python` are triggering the python installed in `anaconda3`, which is what we want (see the installation video for more details). If your call to `which python` in the terminal returns something like `usr/bin/python`, then something has likely gone wrong with your installation. There are some t...
conda create -n a330 python=3.8
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Once you run this, answer "y" to the prompts, and your new environment will be installed.```{note}The above command may take several minutes to execute.```Next, we want to activate this environment (still in our terminal). We do this as follows:
conda activate a330
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
When you do so, you should see the left hand edge of your prompt switch from (base) to (a330). Next, let's make an alias so that getting into our a330 environment is a snap. We're going to access a file called `.bash_profile`, which allows us to set aliases and environment variables. This file is located in your home d...
!more ~/.bash_profile
# >>> conda init >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$(CONDA_REPORT_ERRORS=false '/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
    \eval "$__conda_setup"
else
    if [ -f "/anaconda3/etc/profile.d/conda.sh" ]; then
        # . "/anaconda3/etc/profile.d...
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Notice above I use the `~`, which is shorthand for the home directory. On my computer, the default home directory for my user is `/Users/ipasha/`. This file has some conda stuff at the top, some path and python path exports, and an alias. Yours should also have the conda init stuff, if you installe...
conda install -c anaconda ipykernel
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
This ensures we can select different kernels inside jupyter. A kernel is basically "the thing that is python", the root thing being run on your system when you use python. By creating environments, we're creating different unique kernels, and we can now get to them within our notebooks. Now, run the following:
python -m ipykernel install --user --name=a330
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Once you've done this, you should have the ability to access your new environment from within Jupyter. We can test this as follows: - First, open a new terminal window, and activate your environment (if you made the alias, this means typing `a330` in your terminal. - Next, type `jupyter lab` to open jupyter lab. If for...
conda install -n a330 numpy scipy astropy matplotlib
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
(again, hitting "y" when prompted). Again, this step might take a minute or so to run.Congrats, you now have an environment set up for this class, and can jump in and out of it at will, either in your terminal, or within a Jupyter notebook.```{admonition} Hot TipIt's highly recommended you do these steps anytime you st...
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Question 2The distribution of pixels in your above image should not have many outliers beyond 3-sigma from the mean, but there will be some. Find the location of any 3-sigma outliers in the image, and highlight them by circling their location. Confirm that the fraction of these out of the total number of pixels agrees...
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Question 3When dealing with astronomical data, it is sometimes advisable to not include outliers in a calculation being performed on a set of data (in this example, an image). We know, of course, that the data we're plotting ARE coming from a gaussian distribution, so there's no reason to exclude, e.g., 3-sigma outlie...
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
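One way to sigma-clip is sketched below, using a synthetic Gaussian image in place of the lab's actual array (the names and thresholds here are illustrative, not a required solution):

```python
import numpy as np

# stand-in for the lab's image: pure Gaussian noise
rng = np.random.default_rng(42)
img = rng.normal(loc=100, scale=5, size=(1000, 1000))

# keep only pixels within 3 sigma of the mean
mean, std = img.mean(), img.std()
clipped = img[np.abs(img - mean) < 3 * std]

# for a Gaussian, roughly 99.7% of pixels should survive the cut
print(clipped.size / img.size)
```

Note that boolean indexing returns a flattened 1D array of the surviving pixels, which is fine for computing statistics but discards the 2D positions.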
Clipping the outliers of this distribution should not affect the mean in any strong way, but should noticeably decrease $\sigma$. Question 4:Using Array indexing, re-plot the same array from above, but zoom in on the inner 20% of the image, such that the full width is 20% of the total. Note: try not to hard code your ...
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Your image should now be 200 by 200 pixels across. Note that your new image has its own indexing. A common "gotcha" when working with arrays like this is to index in, but then try to use indices found (e.g., via `where()`) in the larger array on the cropped-in version, which can lead to errors. Question 5Often, we hav...
total = 0
for i in a:
    for j in b:
        total += i*j
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
which, mathematically, makes sense! But as it turns out, there's a way we can do this without any loops at all --- and when $\vec{a}$ and $\vec{b}$ get long, this becomes hugely important in our code.The trick we're going to use here is called [array broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.h...
a = np.array([1,5,10,20])
b = np.array([1,2,4,16])

# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
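As a sketch of the broadcasting idea itself (with toy arrays, not the exercise inputs above): adding a new axis to each vector lets NumPy form the full grid of pairwise products without a loop, after which a single `sum()` gives the same total as the double loop.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])

# a[:, None] has shape (3, 1); b[None, :] has shape (1, 2).
# Broadcasting expands both to (3, 2): every pairwise product a_i * b_j.
outer = a[:, None] * b[None, :]
total = outer.sum()

# same result as the nested-loop version
loop_total = sum(i * j for i in a for j in b)
print(total, loop_total)  # both 54.0
```

The intermediate `outer` array is exactly what `np.outer(a, b)` would return; the broadcasting form generalizes to any elementwise function of the two vectors, not just multiplication.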
```{tip}If you're familiar with the jupyter magic command `%%timeit`, try timing your loop vs non-loop solutions with a longer list (say, 5000 random numbers in $\vec{a}$ and $\vec{b}$). How much faster is the non-loop?``` Question 6Often in astronomy we need to work with grids of values. For example, let's say we hav...
def chi2(a,b):
    # note, this is nonsense, but should return a different value for each input a,b
    return ((15-a)**2+(12-b)**2)**0.2

# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
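A hedged sketch of evaluating a function over a grid with `np.meshgrid` (the grid ranges and sizes here are arbitrary choices, not the intended answer):

```python
import numpy as np

def chi2(a, b):
    # same nonsense statistic as above; NumPy vectorizes it for free
    return ((15 - a)**2 + (12 - b)**2)**0.2

a_vals = np.linspace(0, 30, 50)
b_vals = np.linspace(0, 30, 50)
A, B = np.meshgrid(a_vals, b_vals)   # each has shape (50, 50)
grid = chi2(A, B)                     # chi2 at every (a, b) pair

# the minimum should land near a=15, b=12
i, j = np.unravel_index(grid.argmin(), grid.shape)
print(A[i, j], B[i, j])
```

The resulting 2D `grid` is exactly the kind of array you can hand to `plt.imshow` or `plt.pcolormesh` for visualization.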
Question 7 Re-show your final plot above, making the following changes:- label your colorbar as $\chi^2$ using latex notation, with a fontsize>13- Make your ticks point inward and be longer- Make your ticks appear on the top and right hand axes of the plot as well - If you didn't already, label the x and y axes approp...
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Question 8Some quick list comprehensions! For any unfamilar, **comprehensions** are pythonic statements that allow you to compress a for-loop (generally) into a single line, and usually runs faster than a full loop (but not by a ton). Take the for-loop below and write it as a list comprehension.
visited_cities = ['San Diego', 'Boston', 'New York City', 'Atlanta']
all_cities = ['San Diego', 'Denver', 'Boston', 'Portland', 'New York City', 'San Francisco', 'Atlanta']

not_visited = []
for city in all_cities:
    if city not in visited_cities:
        not_visited.append(city)

print(not_visited)

# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
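For reference, the general loop-to-comprehension pattern on an unrelated toy example (this is not the exercise answer):

```python
words = ['ant', 'bison', 'cat', 'dolphin']

# loop version
long_words = []
for w in words:
    if len(w) > 3:
        long_words.append(w.upper())

# equivalent comprehension: [expression for item in iterable if condition]
long_words_c = [w.upper() for w in words if len(w) > 3]

print(long_words_c)  # ['BISON', 'DOLPHIN']
```

The comprehension reads left to right as "the expression, for each item, if the condition holds," which maps one-to-one onto the loop body above it.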
Next, create an array of integers including 1 through 30, inclusive. Using a comprehension, create a numpy array containing the squared value of only the odd numbers in your original array. (*Hint, remember the modulo operator*)
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
In the next example, you have a list of first names and a list of last names. Use a list comprehension to create an array that is a list of full names (with a space between first and last names).
first_names = ['Bob','Samantha','John','Renee']
last_names = ['Smith','Bee','Oliver','Carpenter']

# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
```{admonition} Challenge Problem (worth Extra Credit) I've created new lists that contain strings of the names in the format Lastname,Firstname, with random leading/trailing spaces and terrible capitalizations. Use a list comprehension to make our nice, "Firstname Lastname" list again.```
all_names = ['sMitH,BoB ', ' bee,samanthA', ' oLIVER,JOHN ', ' caRPENTer,reneE ']

# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
```{note}Note that with this last example, we're entering a degree of single-line length and complexity where it almost doesn't make sense to use a comprehension anymore. Just because something CAN be done in one line doesn't mean it has to be, or should be.```You may be wondering what use case this type of coding has i...
XX = np.array([1,2,3,4,5,6,7,8,9])
YY = np.array([5,6,7,8,9,10,11,12,13])
ZZ = np.array([10,11,12,13,14,15,16,17,18])

# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Question 10 Units, units, units. The bane of every scientist's existence... except theorists that set every constant equal to 1. In the real world, we measure fluxes or magnitudes in astronomical images, infer temperatures and densities from data and simulations, and ultimately have to deal with units one way or anoth...
import astropy.units as u
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
The standard import for this library is `u`, so be careful not to name any variables that letter. To "assign" units to a variable, we multiply by the desired unit as follows. Note that generally the module knows several aliases/common abbreviations for a unit, if it is uniquely identifiable.
star_temp = 5000 * u.K
star_radius = 0.89 * u.Rsun
star_mass = 0.6 * u.Msun
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
We can perform trivial conversions using the `.to()` method.
star_radius.to(u.km)
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Once we attach units to something, it is now a `Quantity` object. Quantity objects are great, above, we saw they have built-in methods to facilitate conversion. They can also be annoying -- sometimes another function we've written needs just the raw value or array back out. To get this, we use the `.value` attribute of...
star_mass.to(u.kg).value
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
This now strips away all `Quantity` stuff and gives us an array or value to use elsewhere in our code. Units are great because they help us combine quantities while tracking units and dimensional analysis. A common operation in astronomy is converting a flux to a luminosity given a distance, using $$F = \frac{L}{4\pi D...
L = 4 * np.pi * (3.6*u.Mpc)**2 * (7.5e-14 * u.erg/u.s/u.cm**2)
L.to(u.Lsun)
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
This conversion worked because the units worked out. If my units of flux weren't correct, I'd get an error:
L = 4 * np.pi * (3.6*u.Mpc)**2 * (7.5e-14 * u.erg/u.s/u.cm**2/u.AA)
L.to(u.Lsun)
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io
Here, `units` realized that I was putting in units of flux density, but wanted a luminosity out, and ultimately those units don't resolve out. Thus, it can be a great way to catch errors in your inputs. Note: just be careful that sometimes, you throw a constant into an equation but the constant has some units. If you'r...
# Your Code
_____no_output_____
MIT
_build/jupyter_execute/Lab1/Lab1.ipynb
Astro-330/Astro-330.github.io