Component Makeup

We can now examine the makeup of each PCA component based on **the weightings of the original features that are included in the component**. The following code shows the feature-level makeup of the first component. Note that the components are again ordered from smallest to largest and so I am getting t...
import seaborn as sns

def display_component(v, features_list, component_num, n_weights=10):
    # get index of component (last row - component_num)
    row_idx = N_COMPONENTS - component_num
    # get the list of weights from a row in v, dataframe
    v_1_row = v.iloc[:, row_idx]
    v_1 = np.squeeze(v_1_row.value...
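The display cell above is truncated; as a minimal, self-contained sketch of the same idea, the top-weighted features of one component can be extracted with pandas alone. Here `v` is a hypothetical stand-in for the PCA weight matrix (one column per component, one row per original feature), and `top_component_weights` is an illustrative helper, not the notebook's own function:

```python
import numpy as np
import pandas as pd

def top_component_weights(v, features_list, component_idx, n_weights=10):
    """Return the n largest-magnitude feature weights for one PCA component."""
    weights = pd.Series(np.asarray(v.iloc[:, component_idx], dtype=float),
                        index=features_list)
    # order features by absolute weight, largest first
    order = weights.abs().sort_values(ascending=False).index
    return weights.reindex(order)[:n_weights]

# toy example: 3 features, 2 components
v = pd.DataFrame({0: [0.1, -0.8, 0.3], 1: [0.5, 0.2, -0.4]})
print(top_component_weights(v, ['popA', 'popB', 'popC'], component_idx=0, n_weights=2))
```

The sorted Series can then be handed to `sns.barplot` for the feature-level plot the notebook describes.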
MIT
Population_Segmentation/Pop_Segmentation_Exercise.ipynb
fradeleo/Sagemaker_Case_Studies
Deploying the PCA Model

We can now deploy this model and use it to make "predictions". Instead of seeing what happens with some test data, we'll actually want to pass our training data into the deployed endpoint to create principal components for each data point. Run the cell below to deploy/host this model on an insta...
%%time
# this takes a little while, around 7mins
pca_predictor = pca_SM.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
-----------------!CPU times: user 319 ms, sys: 14 ms, total: 333 ms Wall time: 8min 32s
We can pass the original, numpy dataset to the model and transform the data using the model we created. Then we can take the largest n components to reduce the dimensionality of our data.
# pass np train data to the PCA model
train_pca = pca_predictor.predict(train_data_np)

# check out the first item in the produced training features
data_idx = 0
print(train_pca[data_idx])
label {
  key: "projection"
  value {
    float32_tensor {
      values: 0.0002009272575378418
      values: 0.0002455431967973709
      values: -0.0005782842636108398
      values: -0.0007815659046173096
      values: -0.00041911262087523937
      values: -0.0005133943632245064
      values: -0.0011316537857055664
      ...
EXERCISE: Create a transformed DataFrame

For each of our data points, get the top n component values from the list of component data points returned by our predictor above, and put those into a new DataFrame. You should end up with a DataFrame that looks something like the following:

``` c_1 c_2...
# create dimensionality-reduced data
def create_transformed_df(train_pca, counties_scaled, n_top_components):
    ''' Return a dataframe of data points with component features.
        The dataframe should be indexed by State-County and contain component values.
        :param train_pca: A list of pca training data, r...
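One possible shape for this function, sketched with plain Python lists standing in for the predictor's record protos (in the live notebook each projection would come from a record's `label['projection'].float32_tensor.values`, as shown in the prediction output earlier). SageMaker PCA orders components from smallest to largest, so the top n values are the *last* n entries of each projection; the `c_1 ... c_n` column names here simply index the kept columns:

```python
import pandas as pd

def create_transformed_df(projections, counties_scaled, n_top_components):
    """Build a DataFrame of top-n component values, indexed like counties_scaled.

    projections -- list of per-row projection vectors (plain lists here, as a
    stand-in for the records returned by the deployed PCA predictor)
    """
    # keep only the last n components (largest-variance components come last)
    top = [list(p)[-n_top_components:] for p in projections]
    df = pd.DataFrame(top, index=counties_scaled.index)
    df.columns = ['c_' + str(i + 1) for i in range(n_top_components)]
    return df

# toy example with 2 counties and 4 projected components
counties = pd.DataFrame(index=['Alabama-Autauga', 'Alabama-Baldwin'])
proj = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
print(create_transformed_df(proj, counties, n_top_components=2))
```

Ordering conventions vary (some solutions reverse the kept columns so `c_1` is the largest component); the essential step is slicing the last n values per data point.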
Now we can create a dataset where each county is described by the top n principal components that we analyzed earlier. Each of these components is a linear combination of the original feature space. We can interpret each of these components by analyzing the makeup of the component, shown previously. Define the `top_n` ...
## Specify top n
top_n = 7

# call your function and create a new dataframe
counties_transformed = create_transformed_df(train_pca, counties_scaled, n_top_components=top_n)

## TODO: Add descriptive column names
PCA_list = ['c_1', 'c_2', 'c_3', 'c_4', 'c_5', 'c_6', 'c_7']
counties_transformed.columns = PCA_list

# print r...
Delete the Endpoint!

Now that we've deployed the model and created our new, transformed training data, we no longer need the PCA endpoint. As a clean-up step, you should always delete your endpoints after you are done using them (unless you plan to keep them deployed, for example, behind a website).
# delete predictor endpoint
session.delete_endpoint(pca_predictor.endpoint)
---

Population Segmentation

Now, you'll use the unsupervised clustering algorithm, k-means, to segment counties using their PCA attributes, which are in the transformed DataFrame we just created. K-means is a clustering algorithm that identifies clusters of similar data points based on their component makeup. Since we ...
# define a KMeans estimator
kmeans = sagemaker.KMeans(role=role,
                         train_instance_count=1,
                         train_instance_type='ml.c4.xlarge',
                         output_path=output_path,
                         k=8)

print('Training artifac...
Training artifacts will be uploaded to: s3://sagemaker-eu-central-1-730357687813/counties/
EXERCISE: Create formatted, k-means training data

Just as before, you should convert the `counties_transformed` df into a numpy array and then into a RecordSet. This is the required format for passing training data into a `KMeans` model.
# convert the transformed dataframe into record_set data
kmeans_train_data_np = counties_transformed.values.astype('float32')
kmeans_formatted_train_data = kmeans.record_set(kmeans_train_data_np)
EXERCISE: Train the k-means model

Pass in the formatted training data and train the k-means model.
%%time
kmeans.fit(kmeans_formatted_train_data)
2020-05-23 06:55:58 Starting - Starting the training job... 2020-05-23 06:56:00 Starting - Launching requested ML instances...... 2020-05-23 06:57:03 Starting - Preparing the instances for training...... 2020-05-23 06:58:26 Downloading - Downloading input data 2020-05-23 06:58:26 Training - Downloading the training ima...
EXERCISE: Deploy the k-means model

Deploy the trained model to create a `kmeans_predictor`.
%%time
# deploy the model to create a predictor
kmeans_predictor = kmeans.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
-----------------!CPU times: user 316 ms, sys: 14 ms, total: 330 ms Wall time: 8min 32s
EXERCISE: Pass in the training data and assign predicted cluster labels

After deploying the model, you can pass in the k-means training data, as a numpy array, and get resultant, predicted cluster labels for each data point.
# get the predicted clusters for all the kmeans training data
cluster_info = kmeans_predictor.predict(kmeans_train_data_np)
Exploring the resultant clusters

The resulting predictions should give you information about the cluster that each data point belongs to. You should be able to answer the **question**: which cluster does a given data point belong to?
# print cluster info for one data point
data_idx = 3
print('County is: ', counties_transformed.index[data_idx])
print()
print(cluster_info[data_idx])
County is:  Alabama-Bibb

label {
  key: "closest_cluster"
  value {
    float32_tensor {
      values: 3.0
    }
  }
}
label {
  key: "distance_to_cluster"
  value {
    float32_tensor {
      values: 0.3843974173069
    }
  }
}
Visualize the distribution of data over clusters

Get the cluster labels for each of our data points (counties) and visualize the distribution of points over each cluster.
# get all cluster labels
cluster_labels = [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info]

# count up the points in each cluster
cluster_df = pd.DataFrame(cluster_labels)[0].value_counts()
print(cluster_df)
3.0    907
6.0    842
0.0    386
7.0    375
1.0    368
5.0    167
2.0     87
4.0     86
Name: 0, dtype: int64
Now, you may be wondering what each of these clusters tells us about these data points. To improve explainability, we need to access the underlying model to get the cluster centers. These centers will help describe which features characterize each cluster.

Delete the Endpoint!

Now that you've deployed the k-means mo...
# delete kmeans endpoint
session.delete_endpoint(kmeans_predictor.endpoint)
---

Model Attributes & Explainability

Explaining the result of the modeling is an important step in making use of our analysis. By combining PCA and k-means, and the information contained in the model attributes within a SageMaker trained model, you can learn about a population and remark on some patterns you've found, ...
# download and unzip the kmeans model file
# use the name model_algo-1
kmeans_job_name = 'kmeans-2020-05-23-06-55-58-261'
model_key = os.path.join(prefix, kmeans_job_name, 'output/model.tar.gz')

# download the model file
boto3.resource('s3').Bucket(bucket_name).download_fil...
[ [[ 0.35492653 0.23771921 0.07889839 0.2500726 0.09919675 -0.05618306 0.04399072] [-0.23379213 -0.3808242 0.07702101 0.08526881 0.0603863 -0.00519104 0.0597847 ] [ 1.3077838 -0.2294502 -0.17610097 -0.42974427 -0.11858643 0.11248738 0.15853602] [-0.02278126 0.07436099 0.12951738 -0.05602401 -...
There is only one set of model parameters contained within the k-means model: the cluster centroid locations in PCA-transformed, component space.

* **centroids**: The location of the centers of each cluster in component space, identified by the k-means algorithm.
# get all the centroids
cluster_centroids = pd.DataFrame(kmeans_model_params[0].asnumpy())
cluster_centroids.columns = counties_transformed.columns
display(cluster_centroids)
Visualizing Centroids in Component Space

You can't visualize 7-dimensional centroids in space, but you can plot a heatmap of the centroids and their location in the transformed feature space. This gives you insight into what characteristics define each cluster. Often with unsupervised learning, results are hard to inte...
# generate a heatmap in component space, using the seaborn library
plt.figure(figsize=(12, 9))
ax = sns.heatmap(cluster_centroids.T, cmap='YlGnBu')
ax.set_xlabel("Cluster")
plt.yticks(fontsize=16)
plt.xticks(fontsize=16)
ax.set_title("Attribute Value by Centroid")
plt.show()
If you've forgotten what each component corresponds to at an original-feature-level, that's okay! You can use the previously defined `display_component` function to see the feature-level makeup.
# what do each of these components mean again?
# let's use the display function, from above
component_num = 5
display_component(v, counties_scaled.columns.values, component_num=component_num)
Natural Groupings

You can also map the cluster labels back to each individual county and examine which counties are naturally grouped together.
# add a 'labels' column to the dataframe
counties_transformed['labels'] = list(map(int, cluster_labels))

# sort by cluster label 0-6
sorted_counties = counties_transformed.sort_values('labels', ascending=True)

# view some pts in cluster 0
sorted_counties.head(20)
You can also examine one of the clusters in more detail, like cluster 1, for example. A quick glance at the location of the centroid in component space (the heatmap) tells us that it has the highest value for the `c_6` attribute. You can now see which counties fit that description.
# get all counties with label == 1
cluster = counties_transformed[counties_transformed['labels'] == 1]
cluster.head()
Data Characteristics:

The actual concrete compressive strength (MPa) for a given mixture at a specific age (days) was determined in the laboratory. The data is in raw form (not scaled).

Summary Statistics:

Number of instances (observations): 1030
Number of attributes: 9
Attribute breakdown: 8 quantitative input variables, and ...
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from sklearn.linear_model import SGDRegressor, GammaRegressor, Lasso, ElasticNet, Ridge
from sklearn.linear_mode...
Apache-2.0
concrete-data-eda-model-acc-97.ipynb
NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio
EXPLORATORY DATA ANALYSIS
data.columns
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1030 entries, 0 to 1029
Data columns (total 9 columns):
 #   Column                                      Non-Null Count  Dtype
---  ------                                      --------------  -----
 0   Cement (component 1)(kg in a m^3 mixture)...
All the variables are numeric.
data.describe()
data.isnull().sum()
No missing values are present.

UNIVARIATE ANALYSIS
col = data.columns.to_list()
col

data.hist(figsize=(15, 10), color='red')
plt.show()

i = 1
plt.figure(figsize=(15, 20))
for col in data.columns:
    plt.subplot(4, 3, i)
    sns.boxplot(x=data[col], data=data)
    i += 1
Here we found some outliers, but we didn't remove them, to avoid losing data.

BIVARIATE ANALYSIS
i = 1
plt.figure(figsize=(18, 18))
for col in data.columns:
    plt.subplot(4, 3, i)
    sns.scatterplot(data=data, x='Concrete compressive strength(MPa, megapascals) ', y=col)
    i += 1
We can see that compressive strength is highly correlated with cement.
plt.figure(figsize=(10, 10))
sns.heatmap(data.corr(), linewidths=1, cmap='PuBuGn_r', annot=True)

correlation = data.corr()['Concrete compressive strength(MPa, megapascals) '].sort_values()
correlation.plot(kind='barh', color='green')
We can see that cement, superplasticizer, and age are positively correlated with compressive strength, while water and fine aggregate are negatively correlated with it.

MODEL SELECTION
X = data.drop(columns='Concrete compressive strength(MPa, megapascals) ')
Y = data[['Concrete compressive strength(MPa, megapascals) ']]

sc = StandardScaler()
X_scaled = sc.fit_transform(X)
X_scaled = pd.DataFrame(X_scaled, columns=X.columns)

x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=.30, random_state=0)
...
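The model-selection cell above is truncated; a minimal sketch of the comparison loop it describes, fitting several regressors and scoring each on a held-out split. Synthetic data from `make_regression` stands in for the scaled concrete features here (an assumption for the sake of a self-contained example), and only a few of the imported model classes are compared:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import train_test_split

# synthetic stand-in for the scaled features / strength target
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

models = {'Ridge': Ridge(), 'Lasso': Lasso(), 'ElasticNet': ElasticNet(),
          'ExtraTrees': ExtraTreesRegressor(random_state=0)}
scores = {}
for name, model in models.items():
    model.fit(x_train, y_train)
    scores[name] = model.score(x_test, y_test)  # R^2 on the held-out split

for name, s in scores.items():
    print(f'{name}: {s:.3f}')
```

The model with the highest held-out R^2 is then carried forward, which is how the notebook arrives at the extra trees regressor below.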
We can see that the extra trees regressor has the highest accuracy (90.7%), so we choose it for our final model.

MODEL BUILDING
etr1 = ExtraTreesRegressor()
rs = []
score = []
for i in range(1, 200, 1):
    x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=.30, random_state=i)
    etr1.fit(x_train, y_train)
    score.append(etr1.score(x_test, y_test))
    rs.append(i)

plt.figure(figsize=(20, 6))
plt.plot(rs, score)
for i in range(len(score...
1 0.89529318226024 2 0.9277744539369183 3 0.926825810368096 4 0.929277398220312 5 0.8946985733005189 6 0.9066382335271965 7 0.9375909152276649 8 0.8798177784082443 9 0.8792678508590264 10 0.9188761161352978 11 0.9248721043508471 12 0.9016606370091849 13 0.8790450510199522 14 0.90286206857159 15 0.9361845117635051 16 0....
We can see that at random_state=77 we get an accuracy of 94.39%.
x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=.30, random_state=77)
etr2 = ExtraTreesRegressor()
etr2.fit(x_train, y_train)
etr2.score(x_train, y_train)
etr2.score(x_test, y_test)

y_test_pred = etr2.predict(x_test)
y_test1 = y_test.copy()
y_test1['pred'] = y_test_pred
y_test1.corr()
We can see here that the accuracy is 97.17%.
from sklearn.metrics import mean_squared_error, r2_score

mean_squared_error(y_test1['Concrete compressive strength(MPa, megapascals) '], y_test1['pred'])
rmse = np.sqrt(mean_squared_error(y_test1['Concrete compressive strength(MPa, megapascals) '], y_test1['pred']))
rmse
We can see that the root mean square error is only 4.15, which shows that our model performs very well.
r2_score(y_test1['Concrete compressive strength(MPa, megapascals) '], y_test1['pred'])
plt.barh(X.columns, etr2.feature_importances_)
Load data
adni.load(show_output=False)
Apache-2.0
notebooks/00 - visualisation.ipynb
FredrikM97/Medical-ROI
Display MetaData
meta_df = adni.meta_to_df()
sprint.pd_cols(meta_df)
Display ImageFiles
files_df = adni.files_to_df()
sprint.pd_cols(files_df)

adni_df = adni.to_df()
sprint.pd_cols(adni_df)
Analysis Overview
fig, axes = splot.meta_settings(rows=3)
splot.histplot(
    adni_df,
    x='subject.researchGroup',
    hue='subject.subjectSex',
    ax=axes[0,0],
    plot_kws={'stat':'frequency'},
    legend_kws={'title':'ResearchGroup'},
    setting_kws={'title':'ResearchGroup distribution','xlabel':'Disorder'}
)
splot.histplot...
Data sizes
fig, axes = splot.meta_settings(rows=2, figsize=(15,10))
splot.histplot(
    adni_df,
    discrete=False,
    x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Slices',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[0,0],
    plot_kws={'stat':'frequency'},
    legend_kws={'title':'...
Scoring
fig, axes = splot.meta_settings(rows=3)
splot.histplot(
    adni_df,
    discrete=True,
    x='subject.visit.assessment.component.assessmentScore.FAQTOTAL',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[0,0],
    plot_kws={'stat':'frequency'},
    legend_kws={'title':'ResearchGroup'},
    setti...
Visualise brain slices

Create Image generator
SKIP_LAYERS = 10
LIMIT_LAYERS = 70

image_AD_generator = adni.load_images(
    files=adni.load_files(adni.path.category+'AD/', adni.filename_category, use_processed=True)
)
image_CN_generator = adni.load_images(
    files=adni.load_files(adni.path.category+'CN/', adni.filename_category, use_processed=True)
)
image_MCI_g...
Coronal plane (From top)
image_AD_slices = [images_AD[layer,:,:] for layer in range(0, images_AD.shape[0], SKIP_LAYERS)]
dplay.display_advanced_plot(image_AD_slices)
plt.suptitle("Coronal plane - AD")

image_CN_slices = [images_CN[layer,:,:] for layer in range(0, images_CN.shape[0], SKIP_LAYERS)]
dplay.display_advanced_plot(image_CN_slices)
plt....
Sagittal plane (From front)
image_slices = [images_AD[:,layer,:] for layer in range(0, images_AD.shape[1], SKIP_LAYERS)]
dplay.display_advanced_plot(image_slices)
plt.suptitle("Sagittal plane")
Horizontal plane (from side)
image_slices = [images_AD[:,:,layer] for layer in range(0, images_AD.shape[2], SKIP_LAYERS)]
dplay.display_advanced_plot(image_slices)
plt.suptitle("Horizontal plane")
TensorFlow Tutorial

Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras...
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss')   # Create a variable for the loss
init = tf.global_variables_initializer()         # When init is run later (sessi...
9
Writing and running programs in TensorFlow has the following steps:

1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.

Therefore, when we create...
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)
print(c)
Tensor("Mul:0", shape=(), dtype=int32)
As expected, you will not see 20! You got a tensor: the result of the computation is a tensor of type "int32" that has no value yet. All you did was put the operation in the 'computation graph'; you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a sessio...
sess = tf.Session()
print(sess.run(c))
20
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed d...
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name='x')
print(sess.run(2 * x, feed_dict={x: 3}))
sess.close()
6
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a...
# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- r...
result = [[-2.15657382]
 [ 2.95891446]
 [-1.08926781]
 [-0.84538042]]
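The computation inside `linear_function` is just Y = WX + b; a plain numpy analogue makes the shapes explicit. Note this is a sketch: `np.random.seed(1)` does not reproduce TensorFlow's random draws, so the printed values will differ from the sample output above, but the shapes match:

```python
import numpy as np

np.random.seed(1)
W = np.random.randn(4, 3)   # weights, shape (4, 3)
X = np.random.randn(3, 1)   # input, shape (3, 1)
b = np.random.randn(4, 1)   # bias, shape (4, 1)

result = W.dot(X) + b       # Y = WX + b, shape (4, 1)
print(result)
```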
*** Expected Output ***:

**result** = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]

1.2 - Computing the sigmoid

Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoi...
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """
    ### START CODE HERE ### (approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.pl...
sigmoid(0) = 0.5 sigmoid(12) = 0.999994
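For reference, the values above can be checked against a plain numpy sigmoid; this is a stand-alone sketch, not the graded TensorFlow version:

```python
import numpy as np

def np_sigmoid(z):
    """Sigmoid: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

print(np_sigmoid(0))    # 0.5
print(np_sigmoid(12))   # ~0.999994
```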
*** Expected Output ***:

**sigmoid(0)** = 0.5

**sigmoid(12)** = 0.999994

**To summarize, you now know how to**:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variable...
# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "...
cost = [ 1.00538719 1.03664088 0.41385433 0.39956614]
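The TensorFlow function used here, `tf.nn.sigmoid_cross_entropy_with_logits`, computes the numerically stable form max(z, 0) - z*y + log(1 + e^(-|z|)) of the loss -y*log(sigmoid(z)) - (1-y)*log(1 - sigmoid(z)). A hedged numpy sketch of that formula (not the graded function itself):

```python
import numpy as np

def sigmoid_cross_entropy(z, y):
    """Stable elementwise sigmoid cross-entropy, matching
    tf.nn.sigmoid_cross_entropy_with_logits: max(z,0) - z*y + log(1 + e^(-|z|))."""
    z, y = np.asarray(z, float), np.asarray(y, float)
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

# sanity check: at z = 0 the cost is log(2) regardless of the label
print(sigmoid_cross_entropy([0.0, 0.0], [0.0, 1.0]))  # [0.69314718 0.69314718]
```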
** Expected Output** :

**cost** = [ 1.00538719  1.03664088  0.41385433  0.39956614]

1.4 - Using One Hot encodings

Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, t...
# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
    will be 1.
    ...
one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]
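A numpy equivalent of the one-hot construction (the graded version uses `tf.one_hot` with `axis=0`; plain integer indexing produces the same (C, m) matrix). The label vector below is an inference on my part, chosen so that the result reproduces the matrix shown above:

```python
import numpy as np

def one_hot_numpy(labels, C):
    """Return a (C, m) matrix with entry (i, j) = 1 when labels[j] == i."""
    labels = np.asarray(labels)
    out = np.zeros((C, labels.size))
    out[labels, np.arange(labels.size)] = 1.0
    return out

print(one_hot_numpy([1, 2, 3, 0, 2, 1], C=4))
```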
**Expected Output**:

**one_hot** = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]

1.5 - Initialize with zeros and ones

Now you will learn how to initialize a vector of zeros and ones. The function you w...
# GRADED FUNCTION: ones

def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns:
    ones -- array containing only ones
    """
    ### START CODE HERE ###
    # Create "ones" tensor using tf.ones(...
ones = [ 1. 1. 1.]
**Expected Output:**

**ones** = [ 1. 1. 1.]

2 - Building your first neural network in tensorflow

In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:

- Create the com...
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Change the index below and run the cell to visualize some examples in the dataset.
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print("y = " + str(np.squeeze(Y_train_orig[:, index])))
y = 5
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T

# Normalize image vectors
X_train = X_train_flatten / 255.
X_test = X_test_flatten / 255.

# Convert training and test labels to one hot matrices
Y_train...
number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a t...
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:...
X = Tensor("X_3:0", shape=(12288, ?), dtype=float32) Y = Tensor("Y_2:0", shape=(6, ?), dtype=float32)
**Expected Output**:

**X** = Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
**Y** = Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2)
...
# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes parameters to build a neural network with tensorflow. The shapes are:
                        W1 : [25, 12288]
                        b1 : [25, 1]
                        W2 : [12, 25]
                        b2 : [12, 1]
    ...
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
**Expected Output**:

**W1**, **b1**, **W2**, **b2**

As expected, the parameters ...
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
    LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python d...
Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
**Expected Output**:

**Z3** = Tensor("Add_2:0", shape=(6, ?), dtype=float32)

You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.

2.4 Compute cost

As seen before, it is very ...
# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the c...
cost = Tensor("Mean:0", shape=(), dtype=float32)
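Under the hood, the softmax cross-entropy cost averages -sum(y * log(softmax(z))) over the examples (columns). A hedged numpy sketch of that computation, not the graded TensorFlow function:

```python
import numpy as np

def softmax_cross_entropy_cost(Z3, Y):
    """Mean softmax cross-entropy over examples (columns).

    Z3, Y -- arrays of shape (n_classes, n_examples); Y is one-hot.
    """
    Z = Z3 - Z3.max(axis=0, keepdims=True)              # stabilize exponentials
    log_softmax = Z - np.log(np.exp(Z).sum(axis=0, keepdims=True))
    return -(Y * log_softmax).sum(axis=0).mean()

# uniform logits over 6 classes -> cost log(6) per example
Z = np.zeros((6, 3))
Y = np.eye(6)[:, :3]
print(softmax_cross_entropy_cost(Z, Y))  # ~1.7918 = log(6)
```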
**Expected Output**:

**cost** = Tensor("Mean:0", shape=(), dtype=float32)

2.5 - Backward propagation & parameter updates

This is where you become grateful to programming frameworks. All the backpropagation and the parameter updates are taken care of in 1 line of...
def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001,
          num_epochs=1500, minibatch_size=32, print_cost=True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 1...
_____no_output_____
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break...
parameters = model(X_train, Y_train, X_test, Y_test)
Cost after epoch 0: 1.855702 Cost after epoch 100: 1.016458 Cost after epoch 200: 0.733102 Cost after epoch 300: 0.572940 Cost after epoch 400: 0.468774 Cost after epoch 500: 0.381021 Cost after epoch 600: 0.313822 Cost after epoch 700: 0.254158 Cost after epoch 800: 0.203829 Cost after epoch 900: 0.166421 Cost after e...
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your mod...
import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(...
Your algorithm predicts: y = 3
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
Python I. Python is an interpreted, high-level, general-purpose programming language. Its design philosophy emphasizes code readability with its use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale ...
one_fish = 1 two_fish = one_fish + 1 blue_fish = one_fish + two_fish print(one_fish) print(two_fish) print(blue_fish) blue_fish = blue_fish + blue_fish print(blue_fish)
1 2 3 6
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Dynamic Typing. Note that no data type (e.g., integer, string) is specified in an assignment, even the first time a variable is used. In general, variables and types are *not* declared in Python before a value is assigned. Python is said to be a **dynamically typed** language. The below code is perfectly fine in Python,...
a = 1 print(a) a = "hello" print(a)
1 hello
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Data Types. As in most programming languages, each data value in a Python program has a **data type** (even though we typically don't specify it). We'll discuss some of the datatypes here. For a given data value, we can get its type using the `type` function, which takes an argument. The below print expressions show sev...
print(type(1)) # an integer print(type(2.0)) # a float print(type("hi!")) # a string print(type(True)) # a boolean value print(type([1, 2, 3, 4, 5])) # a list (a mutable collection) print(type((1, 2, 3, 4, 5))) # a tuple (an immutable collection) print(type({"fname": "john", "lname": "doe"})) # a dictionary (a c...
<class 'int'> <class 'float'> <class 'str'> <class 'bool'> <class 'list'> <class 'tuple'> <class 'dict'>
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Numbers. The basic numerical data types of Python are:* `int` (integer values), * `float` (floating point numbers), and * `complex` (complex numbers).
x = 1 y = 1.0 z = 1 + 2j w = 1e10 v = 1.0 u = 2j print(type(x), ": ", x) print(type(y), ": ", y) print(type(z), ": ", z) print(type(w), ": ", w) print(type(u), ": ", v) print(type(u), ": ", u)
<class 'int'> : 1 <class 'float'> : 1.0 <class 'complex'> : (1+2j) <class 'float'> : 10000000000.0 <class 'complex'> : 1.0 <class 'complex'> : 2j
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
In general, a number written as a simple integer will, unsurprisingly, be interpreted in Python as an `int`. Numbers written using a `.` or scientific notation are interpreted as floats. Numbers written using `j` are interpreted as complex numbers. **NOTE**: Unlike some other languages, Python 3 does not have minimum or ...
1 + 3 - (3 - 2) # simple addition and subtraction 4 * 2.0 # multiplication of an int and a float (yields a float) 5 / 2 # floating point division print(5.6 // 2) # floor division print(type(5.6 // 2)) 5 % 2 # modulo operator (straightforwardly, the integer remainder of 5/2) 2 % -5 # (not so intuitiv...
_____no_output_____
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
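One concrete consequence of the note above: Python 3 `int` values have no fixed size, so they never overflow. A small illustrative sketch (not from the original notebook):

```python
# Python 3 ints have arbitrary precision: no overflow, unlike fixed-width C/Java ints
big = 2 ** 100
print(big)        # a 31-digit integer
print(type(big))  # still <class 'int'>
```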
Data Type of Results. When two numbers of different types are used in an arithmetic operation, the data type is usually what one would expect, but there are some cases where it's different from either operand. For instance, though 5 and 2 are both integers, the result of `5/2` is a `float`, and the result of `5.2//2` (i...
print("This is a string") print('this is a string containing "quotes"') print('this is another string containing "quotes"') print("this is string\nhas two lines")
This is a string this is a string containing "quotes" this is another string containing "quotes" this is string has two lines
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
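The rules sketched above can be made explicit with `type()`. A minimal sketch (an illustration I've added, not part of the original notebook):

```python
# The result type depends on both the operands and the operator
print(type(5 / 2))     # true division always yields float, even for two ints
print(type(5 // 2))    # floor division of two ints yields int
print(type(5.6 // 2))  # floor division with a float operand yields float
print(type(4 * 2.0))   # mixing int and float yields float
```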
To prevent processing of escape characters, you can indicate a *raw* string by putting an `r` before the string.
print(r"this is string \n has only one line")
this is string \n has only one line
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Multiline Strings. Multiline strings can be delineated using 3 quotes. If you do not wish to include a line end in the output, you can end the line with `\`.
print( """Line 1 Line 2 Line 3\ Line 3 continued""" )
Line 1 Line 2 Line 3Line 3 continued
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
String Concatenation. Strings can be concatenated. You must be careful when trying to concatenate other types to a string, however; they must be converted to strings first using `str()`.
print("This" + " line contains " + str(4) + " components") print( "Here are some things converted to strings: " + str(2.3) + ", " + str(True) + ", " + str((1, 2)) )
This line contains 4 components Here are some things converted to strings: 2.3, True, (1, 2)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
`print` can take an arbitrary number of arguments. Leveraging this eliminates the need to explicitly convert data values to strings (because we're no longer attempting to concatenate strings).
print("This", "line contains", 4, "components") print("Here are some things converted to strings:", 2.3, ",", True, ",", (1, 2))
This line contains 4 components Here are some things converted to strings: 2.3 , True , (1, 2)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Note, however, that `print` will by default insert a space between elements. If you wish to change the separator between items (e.g. to `,`) , add `sep=","` as an argument.
print("This", "line contains", 4, "components", sep="---") print( "Here are some things converted to strings:", 2.3, ",", True, ",", (1, 2), sep="---" )
This---line contains---4---components Here are some things converted to strings:---2.3---,---True---,---(1, 2)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
You can also create a string from another string by *multiplying* it with a number.
word1 = "abba" word2 = 3 * word1 print(word2)
abbaabbaabba
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Also, if multiple **string literals** (as opposed to variables or string expressions) appear consecutively, they will be combined into one string.
a = "this " "is " "the " "way " "the " "world " "ends." print(a) print(type(a)) a = "this ", "is ", "the ", "way ", "the ", "world ", "ends." print(a) print(type(a))
this is the way the world ends. <class 'str'> ('this ', 'is ', 'the ', 'way ', 'the ', 'world ', 'ends.') <class 'tuple'>
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Substrings: Indexing and Slicing. A character of a string can be extracted using an index (starting at 0), and a substring can be extracted using **slices**. Slices indicate a range of indexes. The notation is similar to that used for arrays in other languages. It also happens that indexing from the right (starting at -1)...
string1 = "this is the way the world ends." print(string1[12]) # the substring at index 12 (1 character) print(string1[0:4]) # from the start of the string to index 4 (but 4 is excluded) print(string1[5:]) # from index 5 to the end of the string print(string1[:4]) # from the start of the string to index 4 (4 is exc...
w this is the way the world ends. this . ends ends.
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
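Slices also accept a *step*, and negative steps reverse direction. A short sketch extending the example above (my addition, not from the original notebook):

```python
s = "this is the way the world ends."
print(s[-1])    # last character: '.'
print(s[-5:])   # last five characters: 'ends.'
print(s[::2])   # every other character
print(s[::-1])  # the whole string reversed
```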
**NOTE**: Strings are **immutable**. We cannot reassign a character or sequence in a string as we might assign values to an array in some other programming languages. When the below code is executed, an exception (error) will be raised.
a = "abc" a[0] = "b" # this will raise an exception
_____no_output_____
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Splitting and Joining Strings. It's often the case that we want to split strings into multiple substrings, e.g., when reading a comma-delimited list of values. The `split` method of a string does just that. It returns a list object (lists are covered later). To combine strings using a delimiter (e.g., to create a comma-d...
text = "The quick brown fox jumped over the lazy dog" spl = text.split() # This returns a list of strings (lists are covered later) print(spl) joined = ",".join(spl) print(joined) # and this re-joins them, separating words with commas spl = joined.split(",") # and this re-splits them, again based on commas print(spl...
['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog'] The,quick,brown,fox,jumped,over,the,lazy,dog ['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog'] The-quick-brown-fox-jumped-over-the-lazy-dog
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Similarly, to split a multiline string into a list of lines (each a string), we can use `splitlines`.
lines = """one two three""" li = lines.splitlines() # Split the multiple line string print(li)
['one', 'two', 'three']
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
To join strings into multiple lines, we can again use `join`.
lines = ["one", "two", "three"] data = "\n".join(lines)# join list of strings to multiple line string print(data)
one two three
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Boolean Values and None. Python has two Boolean values, `True` and `False`. The normal logical operations (`and`, `or`, `not`) are present.
print(True and False) print(True or False) print(not True)
False True False
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
There is also the value `None` (the only value of the `NoneType` data type). `None` is used to stand for the absence of a value. However, it can be used in place of False, as can zero numerical values (of any numerical type), empty sequences/collections (`[]`,`()`, `{}`, etc.). Other values are treated as `True`. Note...
print(1 and True) print(True and 66) print(True and "aa") print(False and "aa") print(True or {}) print(not []) print(True and ())
True 66 aa False True True ()
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
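The implicit truthiness rules described above can be made explicit with `bool()`. A small sketch (my addition, not part of the original notebook):

```python
# bool() applies the same truthiness rules used by and/or/not and by if/while
print(bool(0), bool(""), bool([]), bool(None))    # all falsy
print(bool(1), bool("a"), bool([0]), bool((0,)))  # all truthy (non-empty collections)
```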
Boolean Comparisons. There are 8 basic comparison operations in Python.

| Symbol | Note |
| --- | --- |
| `<` | less than |
| `<=` | less than or equal to |
| `>` | greater than |
| `>=` | greater than or equal to |
| `==` | equal to |
| `!=` | not equal to |
| `is` | identical to (for objects) |
| `is not` | not ident...
print("abc" > "ac") print("a" < "1") print("A" < "a") print((1, 1, 2) < (1, 1, 3))
False False True True
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
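One feature worth knowing alongside these operators (my addition, not from the original notebook) is that comparisons can be *chained*:

```python
x = 5
print(1 < x < 10)        # equivalent to (1 < x) and (x < 10): True
print(1 < x > 10)        # mixed directions are allowed: False here
print("a" <= "a" < "b")  # chaining works for any comparable types
```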
Note that `is` is true only if the two items compared are the *same* object, whereas `==` only checks for equality in a weaker sense. Below, the two tuples `x` and `y` have elements that evaluate as being equal, but the two tuples are nevertheless distinct in memory. As such, the first `print` statement s...
x = (1, 1, 2) y = (1, 1, 2) print(x == y) print(x is y) x = "hello" y = x a = "hel" b = "lo" z = a + b w = x[:] print(x) print(y) print(z) print("x==y: ", x == y) print("x==z: ", x == z) print("x is y: ", x is y) print("x is z: ", x is z) print("x is w: ", x is w)
hello hello hello x==y: True x==z: True x is y: True x is z: False x is w: True
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Converting between Types. Values of certain data types can be converted to values of other datatypes (actually, a new value of the desired data type is produced). If the conversion cannot take place (because the datatypes are incompatible), an exception will be raised.
x = 1 s = str(x) # convert x to a string s_int = int(s) s_float = float(s) s_comp = complex(s) x_float = float(x) print(s) print(s_int) # convert to an integer print(s_float) # convert to a floating point number print(s_comp) # convert to a complex number print(x_float) # Let's check their IDs print(id(x)) print...
1 1 1.0 (1+0j) 1.0 93926537898496 140538951529264 93926537898496 140538952028656 140538952028464 93926537898496
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
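To illustrate the exception mentioned above, here is a small sketch (my addition, not part of the original notebook) of a conversion that cannot take place:

```python
# An incompatible conversion raises ValueError rather than returning a default
try:
    n = int("twelve")
except ValueError as e:
    print("conversion failed:", e)
print(int("12"))  # numeric strings convert fine
print(int(3.9))   # float-to-int truncates toward zero: 3
```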
The `id()` Function. The `id()` function can be used to identify an object in memory. It returns an integer value that is guaranteed to uniquely identify an object for the duration of its existence.
print("id(x): ", id(x)) print("id(y): ", id(y)) print("id(z): ", id(z)) print("id(w): ", id(w))
id(x): 140539018862384 id(y): 140539018862384 id(z): 140538951488880 id(w): 140539018862384
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Lists, Tuples, Sets, and Dictionaries. Lists. Many languages (e.g., Java) have what are often called **arrays**. In Python the objects most like them are called **lists**. Like arrays in other languages, Python lists are represented syntactically using `[...]` blocks. Their elements can be referenced via indexes, and jus...
a = [0, 1, 2, 3] # a list of integers print(a) a[0] = 3 # overwrite the first element of the list print(a) a[1:3] = [4, 5] # overwrite the last two elements of the list (using values from a new list) print(a)
[0, 1, 2, 3] [3, 1, 2, 3] [3, 4, 5, 3]
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
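As with strings, lists support negative indexing, but unlike strings they are mutable, so individual elements can be reassigned or deleted. A short sketch (my addition, not from the original notebook):

```python
a = [10, 20, 30, 40]
print(a[-1])  # last element: 40
a[-1] = 50    # lists are mutable, unlike strings
print(a)      # [10, 20, 30, 50]
del a[0]      # remove an element by index
print(a)      # [20, 30, 50]
```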
Note that some operations on lists return other lists.
a = [1, 2, 3] b = [4, 5, 6] c = a + b print(a) print(b) print(c) print("-" * 25) c[0] = 10 b[0] = 40 print(a) print(b) print(c)
[1, 2, 3] [4, 5, 6] [1, 2, 3, 4, 5, 6] ------------------------- [1, 2, 3] [40, 5, 6] [10, 2, 3, 4, 5, 6]
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Above, `c` is a new list containing elements copied from `a` and `b`. Subsequent changes to `a` or `b` do not affect `c`, and changes to `c` do not affect `a` or `b`. The length of a list can be obtained using `len()`, and a single element can be added to a list using `append()`. Note the syntax used for each.
a = [] a.append(1) # add an element to the end of the list a.append(2) a.append([3, 4]) print(a) print("length of 'a': ", len(a))
[1, 2, [3, 4]] length of 'a': 3
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Some additional list operations are shown below. Pay careful attention to how `a` and `b` are related.
a = [10] a.extend([11, 12]) # append elements of one list to the end of another one b = a c = a.copy() # copy the elements of a to a new list, and then assign it to c b[0] = 20 c[0] = 30 print("a:", a) print("b:", b) print("c:", c) b.reverse() # reverse the elements of the list in place print("a reversed:", a) b.sor...
['a', 'b', 'c', 'd', 'e'] popped: e ['a', 'b', 'c', 'd'] new list, sorted: ['a', 'b', 'b', 'c', 'd', 'd', 'd'] count of 'd': 3 first index of 'd': 4 ['a', 'b', 'b', 'c', 'd', 'd', 'd'] ele at index 2 removed: ['a', 'b', 'c', 'd', 'd', 'd'] elements at index 2-4 removed: ['a', 'b', 'd', 'd']
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Tuples. There also exists an immutable counterpart to a list, the **tuple**. Elements can also be referenced by index, but (as with Python strings) new values cannot be assigned. Unlike a list, tuples are created using either `(...)` or simply a comma-delimited sequence of 1 or more elements.
a = () # the empty tuple b = (1, 2) # a tuple of 2 elements c = 3, 4, 5 # another way of creating a tuple d = (6,) # a singleton tuple e = (7,) # another singleton tuple print(a) print(b) print(c) print(d) print(len(d)) print(e) print(b[1])
() (1, 2) (3, 4, 5) (6,) 1 (7,) 2
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
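A common use of tuples (my addition, not part of the original notebook) is *unpacking* their elements into separate variables:

```python
point = (3, 4)
x, y = point  # tuple unpacking assigns each element to a name
print(x, y)
x, y = y, x   # a common idiom: swap two variables via an implicit tuple
print(x, y)
```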
As with lists, we can combine tuples to form new tuples.
a = (1, 2, 3, 4) # Create python tuple b = "x", "y", "z" # Another way to create python tuple c = a[0:3] + b # Concatenate two python tuples print(c)
(1, 2, 3, 'x', 'y', 'z')
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Sets. Sets, created using `{...}` or `set(...)` in Python, are unordered collections without duplicate elements. If the same element is added again, the set will not change.
a = {"a", "b", "c", "d"} # create a new set containing these elements b = set( "hello world" ) # create a set containing the distinct characters of 'hello world' print(a) print(b) print(a | b) # print the union of a and b print(a & b) # print the intersection of a and b print(a - b) # print elements of a not i...
{'a', 'd', 'c', 'b'} {'l', 'r', 'w', 'e', 'd', 'h', ' ', 'o'} {'l', 'b', 'c', 'r', 'w', 'e', 'd', 'h', ' ', 'a', 'o'} {'d'} {'a', 'c', 'b'} {'l', 'r', 'w', 'e', 'h', ' ', 'o'} {'l', 'b', 'c', 'r', 'w', 'e', 'h', ' ', 'a', 'o'}
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
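Beyond the set-algebra operators shown above, sets support fast membership tests and incremental updates. A small sketch (my addition, not from the original notebook):

```python
s = {"a", "b", "c"}
print("a" in s)   # membership test: True
s.add("d")        # add a new element
s.add("a")        # adding an existing element changes nothing
print(len(s))     # 4
s.discard("b")    # remove an element (no error if absent)
print(sorted(s))  # ['a', 'c', 'd']
```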
As the example below suggests, set membership is evaluated using `==` (value equality): `d` is built by concatenation yet the set contains only one copy of `"hello"`.
a = "hello" b = "hel" c = "lo" d = b + c # Concatenate string s = {a, b, c, d} print("id(a):", a) print("id(d):", d) print(s)
id(a): hello id(d): hello {'lo', 'hel', 'hello'}
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Dictionaries. Dictionaries are collections of key-value pairs. A dictionary can be created using `d = {key1:value1, key2:value2, ...}` syntax, or else from 2-ary tuples using `dict()`. New key-value pairs can be assigned, and old values referenced, using `d[key]`.
employee = {"last": "smth", "first": "joe"} # Create dictionary employee["middle"] = "william" # Add new key and value to the dictionary employee["last"] = "smith" addr = {} # an empty dictionary addr["number"] = 1234 addr["street"] = "Elm St" # Add new key and value to the dictionary addr["city"] = "Athens" # Add ...
{'last': 'smith', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'Elm St', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}} keys: ['address', 'first', 'last', 'middle'] True False {'last': 'smith', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'beech', 'city': '...
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
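Two dictionary operations worth knowing alongside `d[key]` (my addition, not from the original notebook) are `get`, which avoids a `KeyError` for missing keys, and `items`, which iterates over key-value pairs:

```python
d = {"one": 1, "two": 2}
print(d.get("three", 0))  # .get returns a default instead of raising KeyError
for k, v in d.items():    # iterate over key-value pairs
    print(k, v)
print(list(d))            # iterating a dict directly yields its keys
```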
Conversion Between Types
y = (1, 2, 3, 1, 1) # Create tuple z = list(y) # convert tuple to a list print(y) print(z) print(tuple(z)) # convert z to a tuple print(set(z)) # convert z to a set w = (("one", 1), ("two", 2), ("three", 3)) # Create special tuple to convert it to dictionary v = dict(w) # Convert the tuple to dictionary print(v)...
(1, 2, 3, 1, 1) [1, 2, 3, 1, 1] (1, 2, 3, 1, 1) {1, 2, 3} {'one': 1, 'two': 2, 'three': 3} ('one', 'two', 'three') ('one', 'two', 'three') (1, 2, 3)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Controlling the Flow of Program Execution. As in most programming languages, Python allows program execution to branch when certain conditions are met, and it also allows arbitrary execution loops. Without such features, Python would not be very useful (or Turing complete). If Statements. In Python, *if-then-else* statem...
x = 3 # Test whether the number is greater than 10 if x > 10: print("value " + str(x) + " is greater than 10") # Test whether the number is greater than or equal to 7 and less than 10 elif x >= 7 and x < 10: print("value " + str(x) + " is in range [7,10)") # Test whether the number is greater than or equal to 5 and less than 7 ...
value 3 is less than 5
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
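A related construct not shown above (my addition, not part of the original notebook) is the *conditional expression*, which collapses a simple if/else into a single line:

```python
x = 3
label = "large" if x > 10 else "small"  # conditional (ternary) expression
print(label)  # small
```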
While Loops. Python provides both `while` loops and `for` loops. The former are arguably lower-level and not as natural-looking to a human eye. Below is a simple `while` loop. So long as the specified condition evaluates as `True`, the code in the body of the loop will be executed. As such, without...
string = "hello world" length = len(string)  # get the length of the string i = 0 while i < length: print(string[i]) i = i + 1
h e l l o w o r l d
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
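The `for` loop mentioned above expresses the same iteration far more concisely. A sketch for comparison (my addition, not from the original notebook):

```python
# Equivalent to the while loop above, without manual index bookkeeping:
# iterating a string yields its characters one at a time
for ch in "hello world":
    print(ch)
```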
Loops, including while loops, can contain break statements (which abort execution of the loop) and continue statements (which tell the loop to proceed to the next cycle). *(figure: flowchart illustrating break and continue in a loop)*
num = 0 while num < 5: num += 1 # num += 1 is same as num = num + 1 print('num = ', num) if num == 3: # condition for exiting the loop early break num = 0 while num < 5: num += 1 if num > 3: # condition for skipping to the next iteration continue print('num = ', num) # the statement after 'continue' sta...
num = 1 num = 2 num = 3
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
While Loop with else Block
num = 0 while num < 3: num += 1 print('num = ', num) else: print('else block executed') a = ['A', 'B', 'C', 'D'] s = 'd' i = 0 while i < len(a): if a[i] == s: # Processing for item found break i += 1 else: # Processing for item not found print(s, 'not found in list')
d not found in list
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Below we identify the unique characters of a string using a `while` loop and a set, printing them in sorted order. A `count` function (defined earlier in the notebook) could then be used to count the occurrences of each character.
# unique characters raw_string = 'Hello' result = set() i = 0 length = len(raw_string) while i < length: result.add(raw_string[i]) i = i + 1 print(sorted(list(result)))
['H', 'e', 'l', 'o']
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
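As a natural follow-up, here is a sketch that counts the occurrences of each character with a `while` loop and a plain dictionary. This is my addition; the original notebook may have relied on a separate `count` helper that does not appear in this excerpt.

```python
raw_string = 'Hello'
counts = {}
i = 0
while i < len(raw_string):
    ch = raw_string[i]
    counts[ch] = counts.get(ch, 0) + 1  # .get defaults to 0 for unseen characters
    i = i + 1
print(counts)  # {'H': 1, 'e': 1, 'l': 2, 'o': 1}
```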