We can now go down in the tree by using the id tag in the metadata we fetched. Let's say we are interested in the population statistics.
scb.go_down('BE')
To fetch the metadata about the population statistics, we once again run the function info():
scb.info()
We can keep going down in the tree:
scb.go_down('BE0001')
scb.info()
Whoops! We did not want the name statistics, but the population statistics. We go up one step and back down to the correct node:
scb.go_up()
scb.go_down('BE0101')
scb.info()
Direct route to specific nodes
If we know where in the tree we want to go, we can initialise an object using the id tags as extra arguments:
scb = SCB('en', 'BE', 'BE0101')
scb.info()
As you can see, we end up directly in the population statistics. The specific initialisation of the object does not stop us from navigating in the tree:
scb.go_up()
scb.info()
Anyway, we go directly back to population density:
scb.go_down('BE0101', 'BE0101C')
scb.info()
Now there is only one node to go to, so we do that:
scb.go_down('BefArealTathetKon')
scb.info()
Note how the metadata differs from previous nodes: the keyword variables is present, which indicates that we are in a leaf node. From here we can therefore fetch the actual data. It is not necessary to call info() after each go_down(), but it is a good idea to do so anyway if you are not sure what the database looks like.

Fetch data
Now that we are in a leaf node we can look at which variables there are, and their respective ranges:
scb.get_variables()
Now that we have these, we can choose what we are interested in and create a JSON query. Let's say we are interested in the number of inhabitants per square kilometer in Örebro county for the last five years.
scb.set_query(region=["Örebro county"],
              observations=["Population density per sq. km"],
              year=["2014", "2015", "2016", "2017", "2018"])
Now we can check how the query looks:
scb.get_query()
The query is automatically formatted in the right way to fetch the data from the API. We fetch the data:
scb.get_data()
This is the same data that one can fetch from the Statistical Database on the website. Via a function in pyscbwrapper we can get the URL to the page with the data:
scb.get_url()
The data can of course be fetched without pyscbwrapper, by navigating to the URL above. There we select "Population density per sq. km", choose region County and select Örebro county, and select the last five years. On the next page we can click "API for this table" to get the query and a URL to post it to via e.g. the requests package, but we have to change "format": "px" to "format": "json".
import requests
import json

session = requests.Session()

query = {
    "query": [
        {
            "code": "Region",
            "selection": {
                "filter": "vs:RegionLän99EjAggr",
                "values": ["18"]
            }
        },
        {
            "code": "ContentsCode",
            "selection": {
                "filter": "item",
                "values": ["BE0101U1"]
            }
        },
        {
            "code": "Tid",
            "selection": {
                "filter": "item",
                "values": ["2014", "2015", "2016", "2017", "2018"]
            }
        }
    ],
    "response": {"format": "json"}
}

url = "http://api.scb.se/OV0104/v1/doris/en/ssd/START/BE/BE0101/BE0101C/BefArealTathetKon"
response = session.post(url, json=query)
response_json = json.loads(response.content.decode('utf-8-sig'))
response_json
As you can see, we get the exact same data.

More advanced calls
Now that we have seen what the data looks like, we can fetch more of it to make interesting graphs. Since we are already in the correct place in the API structure, we only need to construct a new query. Let's say we want data from every available year for each county. First we extract a list of all regions, filter out the counties with a regular expression, and then use that list in the JSON query:
import re

regions = scb.get_variables()['region']
r = re.compile(r'.* county')
county = list(filter(r.match, regions))

scb.set_query(region=county,
              observations=["Population density per sq. km"])
scb.get_query()
This is the exact query we need. We fetch the data and place it in a variable so we can use it later:
scb_data = scb.get_data()
As is good practice we look at the data before we do anything else:
scb_data
The actual data we look for is here:
scb_fetch = scb_data['data']
Once again we check that we have gotten the correct data:
scb_fetch
Now we need to understand the structure of the data. We have received a list of dictionaries where the first variable 'key' contains the domain (in this case county and year), and the variable 'values' contains the value of the observation variable (in this case inhabitants per square kilometer). To turn this into time series that can be used for visualisation we need a few syntactic tricks, which are described below.

Data processing
What we are looking for is one time series per county. Therefore we need to restructure the data before we can do anything with it. This is outside the functionality of pyscbwrapper, but we can easily solve it ourselves. A good structure would be a dictionary with county as key and another dictionary as value, where the inner dictionary has year as key and the variable value as value. To achieve this we take the list of counties that we created earlier and connect it to the county codes, which we can take from their position in get_query(), in this case index 0. By comparing these codes, now connected to the county names, to the codes in the data, we can connect the county names to the data. This way we create the structure we want, and we take the opportunity to cast the values as numeric.
codes = scb.get_query()['query'][0]['selection']['values']

countydic = {}
for i in range(len(codes)):
    countydic[codes[i]] = county[i]

countydata = {}
for code in countydic:
    countydata[countydic[code]] = {}
    for i in range(len(scb_fetch)):
        if scb_fetch[i]['key'][0] == code:
            countydata[countydic[code]][scb_fetch[i]['key'][1]] = \
                float(scb_fetch[i]['values'][0])
This got a bit hacky, so let's see if we got the structure we wanted:
countydata
This looks about right. Now we can loop over the keys and plot the values, using key on the x axis and value on the y axis.

Data visualisation
We need numpy, pandas, and matplotlib for this. We install and import.
!pip install -q matplotlib
!pip install -q pandas
!pip install -q numpy

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
Now we can make a neat graph.
df = pd.DataFrame(countydata)
df = df.reset_index()
df = df.rename(columns={"index": "Year"})

ax = df.plot(x=df.index, xticks=np.arange(len(df.index)), colormap='hsv')
ax.set_xticklabels(df["Year"], rotation=45)
plt.title("Population density per county")
plt.xlabel("Year")
plt.ylabel("Inhabitants per square kilometer")
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
plt.show()
Let's set up our system so that the boosting effects will be quite noticeable.
b['rpole@primary'] = 1.8
b['rpole@secondary'] = 0.96
b['teff@primary'] = 10000
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5200
b['gravb_bol@secondary'] = 0.32
b['q@binary'] = 0.96/1.8
b['incl@binary'] = 88
b['period@binary'] = 1.0
b['sma@binary'] = 6.0
We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
times = np.linspace(0, 1, 101)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times[::10], dataset='mesh01')
Influence on Light Curves (fluxes)
b.run_compute(boosting_method='none', model='boosting_none')
b.run_compute(boosting_method='linear', model='boosting_linear')

axs, artists = b['lc01'].plot()
leg = plt.legend()

axs, artists = b['lc01'].plot(ylim=(1.01, 1.03))
leg = plt.legend()
Influence on Radial Velocities
fig = plt.figure(figsize=(10, 6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)

axs, artists = b['rv01@boosting_none'].plot(ax=ax1)
axs, artists = b['rv01@boosting_linear'].plot(ax=ax2)
Influence on Meshes
fig = plt.figure(figsize=(10, 6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)

axs, artists = b['mesh@boosting_none'].plot(time=0.6, facecolor='boost_factors@lc01',
                                            edgecolor=None, ax=ax1)
axs, artists = b['mesh@boosting_linear'].plot(time=0.6, facecolor='boost_factors@lc01',
                                              edgecolor=None, ax=ax2)
1. Features one at a time
1.1. Quantitative
Histogram and boxplot
df['Total day minutes'].hist();
sns.boxplot(df['Total day minutes']);
df.hist();
1.2. Categorical
countplot
df['State'].value_counts().head()
df['Churn'].value_counts()
sns.countplot(df['Churn']);
sns.countplot(df['State']);
sns.countplot(df[df['State'].isin(df['State'].value_counts().head().index)]['State']);
2. Feature interactions
2.1. Quantitative vs. quantitative
pairplot, scatterplot, correlations, heatmap
feat = [f for f in df.columns if 'charge' in f]
df[feat].hist();
sns.pairplot(df[feat]);

df['Churn'].map({False: 'blue', True: 'orange'}).head()
df[~df['Churn']].head()

plt.scatter(df[df['Churn']]['Total eve charge'],
            df[df['Churn']]['Total intl charge'],
            color='orange', label='churn');
plt.scatter(df[~df['Churn']]['Total eve charge'],
            df[~df['Churn']]['Total intl charge'],
            color='blue', label='loyal');
plt.xlabel('Evening charges');
plt.ylabel('International charges');
plt.title('Distribution of charges for loyal/churned customers');
plt.legend();

sns.heatmap(df.corr());
df.drop(feat, axis=1, inplace=True)
sns.heatmap(df.corr());
2.2. Quantitative vs. categorical
boxplot, violinplot
sns.boxplot(x='Churn', y='Total day minutes', data=df);
sns.boxplot(x='State', y='Total day minutes', data=df);
sns.violinplot(x='Churn', y='Total day minutes', data=df);

df.groupby('International plan')['Total day minutes'].mean()
sns.boxplot(x='International plan', y='Total day minutes', data=df);
2.3. Categorical vs. categorical
countplot
pd.crosstab(df['Churn'], df['International plan'])
sns.countplot(x='International plan', hue='Churn', data=df);
sns.countplot(x='Customer service calls', hue='Churn', data=df);
3. Other
Manifold learning; one well-known representative is t-SNE
from sklearn.manifold import TSNE

tsne = TSNE(random_state=0)
df2 = df.drop(['State', 'Churn'], axis=1)
df2['International plan'] = df2['International plan'].map({'Yes': 1, 'No': 0})
df2['Voice mail plan'] = df2['Voice mail plan'].map({'Yes': 1, 'No': 0})
df2.info()

%%time
tsne.fit(df2)

plt.scatter(tsne.embedding_[df['Churn'].values, 0],
            tsne.embedding_[df['Churn'].values, 1],
            color='orange', alpha=.7);
plt.scatter(tsne.embedding_[~df['Churn'].values, 0],
            tsne.embedding_[~df['Churn'].values, 1],
            color='blue', alpha=.7);
You may see a warning message from the Kubeflow Pipelines logs saying "Insufficient nvidia.com/gpu". If so, this probably means that your GPU-enabled node is still spinning up; please wait a few minutes. You can check the current nodes in your cluster like this:

kubectl get nodes -o wide

If everything runs as expected, the nvidia-smi command should list the CUDA version, GPU type, usage, etc. (See the logs panel in the pipeline UI to view the output.) You may also notice that after the pipeline step's GKE pod has finished, the new GPU cluster node is still there. The GKE autoscaler will free that node after it has been idle for a certain time. More info is here.

Multiple GPU pools in one cluster
It's possible you want more than one type of GPU to be supported in one cluster. There are several types of GPUs, and a given region often supports only a subset of them (see the documentation). Since we can set --num-nodes=0 for a GPU node pool to save costs when there is no workload, we can create multiple node pools for different types of GPUs.

Add additional GPU nodes to your cluster
In a previous section, we added a node pool for P100s. Here we add another pool for V100s.

```shell
# You may customize these parameters.
export GPU_POOL_NAME=v100pool
export CLUSTER_NAME=existingClusterName
export CLUSTER_ZONE=us-west1-a
export GPU_TYPE=nvidia-tesla-v100
export GPU_COUNT=1
export MACHINE_TYPE=n1-highmem-8

# Node pool creation may take several minutes.
gcloud container node-pools create ${GPU_POOL_NAME} \
  --accelerator type=${GPU_TYPE},count=${GPU_COUNT} \
  --zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \
  --num-nodes=0 --machine-type=${MACHINE_TYPE} \
  --min-nodes=0 --max-nodes=5 --enable-autoscaling
```

Consume a certain GPU type via the Kubeflow Pipelines SDK
If your cluster has multiple GPU node pools, you can explicitly specify that a given pipeline step should use a particular type of accelerator. This example shows how to use P100s for one pipeline step, and V100s for another.
import kfp
from kfp import dsl


def gpu_p100_op():
    return dsl.ContainerOp(
        name='check_p100',
        image='tensorflow/tensorflow:latest-gpu',
        command=['sh', '-c'],
        arguments=['nvidia-smi']
    ).add_node_selector_constraint(
        'cloud.google.com/gke-accelerator', 'nvidia-tesla-p100'
    ).container.set_gpu_limit(1)


def gpu_v100_op():
    return dsl.ContainerOp(
        name='check_v100',
        image='tensorflow/tensorflow:latest-gpu',
        command=['sh', '-c'],
        arguments=['nvidia-smi']
    ).add_node_selector_constraint(
        'cloud.google.com/gke-accelerator', 'nvidia-tesla-v100'
    ).container.set_gpu_limit(1)


@dsl.pipeline(
    name='GPU smoke check',
    description='Smoke check as to whether GPU env is ready.'
)
def gpu_pipeline():
    gpu_p100 = gpu_p100_op()
    gpu_v100 = gpu_v100_op()


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
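If you want to run the compiled pipeline programmatically rather than uploading the YAML through the UI, one option is the KFP SDK client. This is only a hedged sketch: the host and authentication arguments of kfp.Client() depend entirely on your deployment, so treat them as assumptions to adapt.

```python
# Hedged sketch: submit the compiled pipeline with the KFP SDK client.
# The host/auth configuration is deployment-specific (an assumption here).
client = kfp.Client()  # e.g. kfp.Client(host='<your-pipelines-endpoint>')
client.create_run_from_pipeline_package('gpu_smoking_check.yaml', arguments={})
```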
You should see different "nvidia-smi" logs from the two pipeline steps.

Using Preemptible GPUs
A preemptible GPU resource is cheaper, but using these instances means that a pipeline step has the potential to be aborted and then retried. This means that pipeline steps used with preemptible instances must be idempotent (the step gives the same results if run again), or must create some kind of checkpoint so that they can pick up where they left off.

To use preemptible GPUs, create a node pool as follows; then, when specifying a pipeline, you can indicate use of the preemptible node pool for a step. The only difference from the previous node-pool creation example is that the --preemptible and --node-taints=preemptible=true:NoSchedule parameters have been added.

```shell
export GPU_POOL_NAME=v100pool-preemptible
export CLUSTER_NAME=existingClusterName
export CLUSTER_ZONE=us-west1-a
export GPU_TYPE=nvidia-tesla-v100
export GPU_COUNT=1
export MACHINE_TYPE=n1-highmem-8

gcloud container node-pools create ${GPU_POOL_NAME} \
  --accelerator type=${GPU_TYPE},count=${GPU_COUNT} \
  --zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \
  --preemptible \
  --node-taints=preemptible=true:NoSchedule \
  --num-nodes=0 --machine-type=${MACHINE_TYPE} \
  --min-nodes=0 --max-nodes=5 --enable-autoscaling
```

Then, you can define a pipeline as follows (note the use of use_preemptible_nodepool()).
import kfp
import kfp.gcp as gcp
from kfp import dsl


def gpu_p100_op():
    return dsl.ContainerOp(
        name='check_p100',
        image='tensorflow/tensorflow:latest-gpu',
        command=['sh', '-c'],
        arguments=['nvidia-smi']
    ).add_node_selector_constraint(
        'cloud.google.com/gke-accelerator', 'nvidia-tesla-p100'
    ).container.set_gpu_limit(1)


def gpu_v100_op():
    return dsl.ContainerOp(
        name='check_v100',
        image='tensorflow/tensorflow:latest-gpu',
        command=['sh', '-c'],
        arguments=['nvidia-smi']
    ).add_node_selector_constraint(
        'cloud.google.com/gke-accelerator', 'nvidia-tesla-v100'
    ).container.set_gpu_limit(1)


def gpu_v100_preemptible_op():
    v100_op = dsl.ContainerOp(
        name='check_v100_preemptible',
        image='tensorflow/tensorflow:latest-gpu',
        command=['sh', '-c'],
        arguments=['nvidia-smi'])
    v100_op.container.set_gpu_limit(1)
    v100_op.add_node_selector_constraint('cloud.google.com/gke-accelerator',
                                         'nvidia-tesla-v100')
    v100_op.apply(gcp.use_preemptible_nodepool(hard_constraint=True))
    return v100_op


@dsl.pipeline(
    name='GPU smoke check',
    description='Smoke check as to whether GPU env is ready.'
)
def gpu_pipeline():
    gpu_p100 = gpu_p100_op()
    gpu_v100 = gpu_v100_op()
    gpu_v100_preemptible = gpu_v100_preemptible_op()


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
You will work with the Housing Prices Competition for Kaggle Learn Users dataset from the previous exercise. Run the next code cell without changes to load the training and validation sets in X_train, X_valid, y_train, and y_valid. The test set is loaded in X_test.
import pandas as pd
from sklearn.model_selection import train_test_split

# Read the data
X = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')

# Remove rows with missing target, separate target from predictors
X.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X.SalePrice
X.drop(['SalePrice'], axis=1, inplace=True)

# Break off validation set from training data
X_train_full, X_valid_full, y_train, y_valid = train_test_split(X, y, train_size=0.8,
                                                                test_size=0.2, random_state=0)

# "Cardinality" means the number of unique values in a column
# Select categorical columns with relatively low cardinality (convenient but arbitrary)
low_cardinality_cols = [cname for cname in X_train_full.columns
                        if X_train_full[cname].nunique() < 10 and
                        X_train_full[cname].dtype == "object"]

# Select numeric columns
numeric_cols = [cname for cname in X_train_full.columns
                if X_train_full[cname].dtype in ['int64', 'float64']]

# Keep selected columns only
my_cols = low_cardinality_cols + numeric_cols
X_train = X_train_full[my_cols].copy()
X_valid = X_valid_full[my_cols].copy()
X_test = X_test_full[my_cols].copy()

# One-hot encode the data (to shorten the code, we use pandas)
X_train = pd.get_dummies(X_train)
X_valid = pd.get_dummies(X_valid)
X_test = pd.get_dummies(X_test)
X_train, X_valid = X_train.align(X_valid, join='left', axis=1)
X_train, X_test = X_train.align(X_test, join='left', axis=1)
Step 1: Build model
Part A
In this step, you'll build and train your first model with gradient boosting. Begin by setting my_model_1 to an XGBoost model. Use the XGBRegressor class, and set the random seed to 0 (random_state=0). Leave all other parameters as default. Then, fit the model to the training data in X_train and y_train.
from xgboost import XGBRegressor

# Define the model
my_model_1 = ____ # Your code here

# Fit the model
____ # Your code here

# Check your answer
step_1.a.check()

#%%RM_IF(PROD)%%
from sklearn.utils.validation import check_is_fitted
my_model_1 = XGBRegressor(random_state=0)
step_1.a.assert_check_failed()

#%%RM_IF(PROD)%%
my_model_1 = XGBRegressor(random_state=0)
my_model_1.fit(X_train, y_train)
step_1.a.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_1.a.hint()
#_COMMENT_IF(PROD)_ step_1.a.solution()
Part B
Set predictions_1 to the model's predictions for the validation data. Recall that the validation features are stored in X_valid.
from sklearn.metrics import mean_absolute_error

# Get predictions
predictions_1 = ____ # Your code here

# Check your answer
step_1.b.check()

#%%RM_IF(PROD)%%
predictions_1 = my_model_1.predict(X_valid)
step_1.b.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_1.b.hint()
#_COMMENT_IF(PROD)_ step_1.b.solution()
Part C
Finally, use the mean_absolute_error() function to calculate the mean absolute error (MAE) corresponding to the predictions for the validation set. Recall that the labels for the validation data are stored in y_valid.
# Calculate MAE
mae_1 = ____ # Your code here

# Uncomment to print MAE
# print("Mean Absolute Error:" , mae_1)

# Check your answer
step_1.c.check()

#%%RM_IF(PROD)%%
mae_1 = mean_absolute_error(predictions_1, y_valid)
print("Mean Absolute Error:" , mae_1)
step_1.c.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_1.c.hint()
#_COMMENT_IF(PROD)_ step_1.c.solution()
Step 2: Improve the model
Now that you've trained a default model as a baseline, it's time to tinker with the parameters to see if you can get better performance!
- Begin by setting my_model_2 to an XGBoost model, using the XGBRegressor class. Use what you learned in the previous tutorial to figure out how to change the default parameters (like n_estimators and learning_rate) to get better results.
- Then, fit the model to the training data in X_train and y_train.
- Set predictions_2 to the model's predictions for the validation data. Recall that the validation features are stored in X_valid.
- Finally, use the mean_absolute_error() function to calculate the mean absolute error (MAE) corresponding to the predictions on the validation set. Recall that the labels for the validation data are stored in y_valid.

In order for this step to be marked correct, your model in my_model_2 must attain lower MAE than the model in my_model_1.
# Define the model
my_model_2 = ____ # Your code here

# Fit the model
____ # Your code here

# Get predictions
predictions_2 = ____ # Your code here

# Calculate MAE
mae_2 = ____ # Your code here

# Uncomment to print MAE
# print("Mean Absolute Error:" , mae_2)

# Check your answer
step_2.check()

#%%RM_IF(PROD)%%
my_model_2 = XGBRegressor(n_estimators=1000, learning_rate=0.05)
my_model_2.fit(X_train, y_train)
predictions_2 = my_model_2.predict(X_valid)
mae_2 = mean_absolute_error(predictions_2, y_valid)
print("Mean Absolute Error:" , mae_2)
step_2.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_2.hint()
#_COMMENT_IF(PROD)_ step_2.solution()
Step 3: Break the model
In this step, you will create a model that performs worse than the original model in Step 1. This will help you to develop your intuition for how to set parameters. You might even find that you accidentally get better performance, which is ultimately a nice problem to have and a valuable learning experience!
- Begin by setting my_model_3 to an XGBoost model, using the XGBRegressor class. Use what you learned in the previous tutorial to figure out how to change the default parameters (like n_estimators and learning_rate) to design a model to get high MAE.
- Then, fit the model to the training data in X_train and y_train.
- Set predictions_3 to the model's predictions for the validation data. Recall that the validation features are stored in X_valid.
- Finally, use the mean_absolute_error() function to calculate the mean absolute error (MAE) corresponding to the predictions on the validation set. Recall that the labels for the validation data are stored in y_valid.

In order for this step to be marked correct, your model in my_model_3 must attain higher MAE than the model in my_model_1.
# Define the model
my_model_3 = ____

# Fit the model
____ # Your code here

# Get predictions
predictions_3 = ____

# Calculate MAE
mae_3 = ____

# Uncomment to print MAE
# print("Mean Absolute Error:" , mae_3)

# Check your answer
step_3.check()

#%%RM_IF(PROD)%%
my_model_3 = XGBRegressor(n_estimators=1)
my_model_3.fit(X_train, y_train)
predictions_3 = my_model_3.predict(X_valid)
mae_3 = mean_absolute_error(predictions_3, y_valid)
print("Mean Absolute Error:" , mae_3)
step_3.assert_check_passed()

# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_ step_3.hint()
#_COMMENT_IF(PROD)_ step_3.solution()
"Everything is an object": what does that mean?
You can often hear this phrase about Python. What does it mean, and what influence does it have on programming in the language?
# Declare a variable
num = 10

# In fact, num is not the name of a variable pointing at a numeric value in memory,
# but a name bound to an object.
num.__add__

# And there is even more
#dir(num)
All types in Python are "inherited" (strictly speaking, it is closer to composition) from the C struct PyObject:

typedef struct _object {
    _PyObject_HEAD_EXTRA
    Py_ssize_t ob_refcnt;
    struct _typeobject *ob_type;
} PyObject;

Or from PyVarObject:

typedef struct {
    PyObject ob_base;
    Py_ssize_t ob_size; /* Number of items in variable part */
} PyVarObject;

which additionally includes the size of the data in the underlying data type.

_typeobject is the type of the object: a struct that allows Python to determine an object's type at runtime and that contains all the methods needed to work with the type. https://github.com/python/cpython/blob/d65c5d2db6b1429a6d61a648836913542766466f/Include/object.h#L346

_PyObject_HEAD_EXTRA is a macro containing pointers to the previous and next objects living on the heap:

#define _PyObject_HEAD_EXTRA \
    struct _object *_ob_next; \
    struct _object *_ob_prev;

PyObject_HEAD is a macro containing the definition of PyObject, and PyObject_VAR_HEAD is the corresponding macro for PyVarObject. They are used when defining any new type in Python, so any object can ultimately be cast to PyObject:

typedef struct PyMyObject {
    PyObject_HEAD
    ...
}

Declarations in the source code: https://github.com/python/cpython/blob/d65c5d2db6b1429a6d61a648836913542766466f/Include/object.h

The core types can be found in the Objects folder of the Python source code:
int - Include/longobject.h, Objects/longobject.c
str - Include/unicodeobject.h, Objects/unicodeobject.c
list - Include/listobject.h, Objects/listobject.c
class - Include/classobject.h, Objects/classobject.c
function - Include/funcobject.h, Objects/funcobject.c
module - Include/moduleobject.h, Objects/moduleobject.c
And others!

The first important consequence of the statement that everything in Python is an object: anything can be assigned to a variable and passed as an argument to a function. Not only functions or classes, but also, for example, modules. This is also needed to understand the following point: how assignment works.

Assignment (Part 1)
# bind the name foo to a string object
foo = "Испания"

# bind the name to a new string object; the string object holding "Испания" is untouched.
# At this moment two objects live on the heap. The garbage collector may delete one of
# them, but that is not the important part for us.
foo = "Бразилия"
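To see these bindings at work, here is a small illustration using sys.getrefcount. The exact counts are CPython implementation details (the call itself adds a temporary reference), so treat the numbers in the comments as indicative only:

```python
import sys

foo = []                      # bind the name foo to a new list object
print(sys.getrefcount(foo))   # typically 2: foo itself plus the call's temporary reference

bar = foo                     # a second name bound to the same object
print(sys.getrefcount(foo))   # typically 3: the count grew by one
```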
In Python, a name bound to an object is only used within the scope in which the name is declared. The scope of a name is determined by a block of Python code: blocks are a module, a class definition, and a function body. When resolving a variable, the interpreter looks the name up starting from the local namespace, then in the enclosing ones, until it reaches the global namespace.
top_level = 3

def print_result():
    inner_level = 4
    def print_calculation():
        print(top_level * inner_level)
    print_calculation()

print_result()
A slightly more complex example
country = 'Испания'
countries = []
countries.append(country)
another_list_of_countries = countries
another_list_of_countries.append('Бразилия')
country = 'Россия'
#print(country, countries, another_list_of_countries)
Oops, our first data structure has appeared: list. Let's look at it first, and then come back to this example.

Data structures
To begin with, we will look at the data structures available out of the box.
# built-in data structures
list, tuple, dict, set, frozenset
List: list
A sequence type. It stores a sequence of elements of the same or of different types. It is not a linked list!
# create a list using square brackets or the list function
empty_list = []
empty_list = list()
sys.getsizeof(empty_list)

# a list can contain arbitrary objects; internally Python stores them as an array of pointers
example_list = [1, True, "a"]
for element in example_list:
    print(element)

list_of_lists = [example_list, example_list]
print(list_of_lists)
sys.getsizeof(list_of_lists)

import sys
from numbers import Number
from collections import Set, Mapping, deque

zero_depth_bases = (str, bytes, Number, range, bytearray)
iteritems = 'items'

def getsize(obj_0):
    """
    Recursively iterate to sum size of object & members.
    http://stackoverflow.com/a/30316760/1288429
    """
    def inner(obj, _seen_ids=set()):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, zero_depth_bases):
            pass  # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, iteritems):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, iteritems)())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, '__dict__'):
            size += inner(vars(obj))
        if hasattr(obj, '__slots__'):  # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

getsize(list_of_lists)

# append an element to the end of the list: O(1)
example_list.append("last")
print(example_list)

# insert an element at the beginning: O(n)
example_list.insert(0, "first")
print(example_list)

benchmark_list = []
%timeit -n10000 benchmark_list.append("last")
benchmark_list = []
%timeit -n10000 benchmark_list.insert(0, "first")

example_list = [0, 1, 2, 3, 4, 5, 6]

# element access: O(1)
print(example_list[0])
print(example_list[-1])

# assignment by index: O(1)
example_list[6] = 10

# element deletion: O(n)
print(example_list)
del example_list[-1]
print(example_list)

# accessing a non-existent index raises an error
example_list[100]

print(example_list[2:])
print(example_list[2:4])
print(example_list[::2])
print(example_list[-1])
print(example_list)
print(example_list[::-1])
To be continued...

Assignment (Part 2)
country = 'Испания'
countries = []
countries.append(country)
another_list_of_countries = countries
another_list_of_countries.append('Бразилия')
country = 'Россия'
#print(country, countries, another_list_of_countries)
There are two kinds of objects in Python: mutable and immutable. Any change to a mutable object is visible through all the names bound to it. A list is an example of a mutable object. The value of an immutable object cannot be changed: all you can do is read it and create a new object based on the old value. Examples are strings, numeric types, and so on. It is not quite that simple, though: a bit later, using the tuple as an example, we will see that even though a tuple is an immutable object, it is not correct to think that it never changes. We are now ready to look at passing an object as a function argument:
def foo(bar):
    # here bar is a new name bound to the same object that country_list is bound to.
    # That object is mutable, so the function will modify it.
    bar.append("Испания")

country_list = []
foo(country_list)
#print(country_list)
Now an example with an immutable object, a string:
def foo(bar):
    # bar is a name bound to the original string object,
    # but this binding lives in the function's local namespace.
    # Since the object is immutable, all the function can do is rebind the
    # name to a new object in its own local namespace, which does not affect
    # country in the outer namespace.
    bar = 'Бразилия'
    #print(bar)

country = 'Испания'
foo(country)
#print(country)
The takeaway: in Python an object is passed by binding the function's argument name to the object (pass-by-object). Essentially, this binding is a new reference to the original object. Depending on whether the object is mutable or not, you can tell whether the function is able to change it.

List: list, continued
# with plain assignment, the new variable refers to the same list
example_list = [1, 2, 3, 4, 5]
another_list = example_list
another_list[0] = 100
# print(example_list)

# copying a list
example_list = [0, 1, 2, 3, 4, 5]
another_list = example_list[:]
another_list[0] = 100
print(example_list)

# but nested objects are not copied (shallow copy)
nested_list = [1, 2, 3]
example_list = [nested_list, 1, 2, 3, 4, 5]
another_list = example_list[:]
nested_list.append(4)
print(another_list)

# so we use the deepcopy function from the copy package
from copy import deepcopy
nested_list = [1, 2, 3]
example_list = [nested_list, 1, 2, 3, 4, 5]
another_list = deepcopy(example_list)
nested_list.append(4)
print(another_list)
Sorting
# sorting a list in place
unsorted_list = [2, 1, 5, 4, 3]
unsorted_list.sort()
print(unsorted_list)

# sorted() returns a new sorted list and leaves the original untouched
unsorted_list = [2, 1, 5, 4, 3]
print(sorted(unsorted_list))
print(unsorted_list)
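Both list.sort() and sorted() also accept a key function and a reverse flag; a small illustration:

```python
words = ["bb", "a", "ccc"]
print(sorted(words, key=len))                # ['a', 'bb', 'ccc']
print(sorted(words, key=len, reverse=True))  # ['ccc', 'bb', 'a']
```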
Task: join all the numbers from zero to 10000 into one long string.

Tuple: tuple
Like a list, it stores a sequence of elements of the same or of different types.
Immutable => protects the data.
Faster than a list.
Hashable.
# create an empty tuple using parentheses or the tuple function
empty_tuple = ()
empty_tuple = tuple()
sys.getsizeof(empty_tuple)
len(empty_tuple)
hash(empty_tuple)

example_tuple = (1, "a", True)
for element in example_tuple:
    print(element)
len(example_tuple)

example_tuple = (1, 2, 3)
print(example_tuple[0])
print(example_tuple[1:])

# assignment by index raises an error: tuples are immutable
example_tuple[0] = 2017
example_list[200]

a = (123)
a
print(type(a))

a = (123, )  # in a one-element tuple the comma is mandatory!
print(type(a))
It is important to understand that the tuple itself is immutable, but not the objects it contains! For example:
first_list = [1, 2]
second_list = [3, 4]
example_tuple = (first_list, second_list)
print(example_tuple)

first_list.append(5)
print(example_tuple)
Dictionary: dict
Stores key-value pairs, for fast access to a value by its key.
# create a dictionary using curly braces or the dict function
empty_dict = dict()
empty_dict = {}
sys.getsizeof(empty_dict)

example_dict = {"a": 1, "b": True, (1, 2, 3): True}

# add a key and its value to the dictionary: O(1)
example_dict["c"] = 4
print(example_dict)

# look up a value by key: O(1)
print(example_dict["c"])

# accessing a non-existent key raises an error:
print(example_dict["x"])

# only a hashable object can be a dictionary key
example_dict[example_list] = True
An object is hashable if it has a hash value that never changes during its lifetime (i.e. the object must define a __hash__() method) and it can be compared to other objects (an __eq__() or __cmp__() method). Equal objects must have the same hash value. If an object is hashable, it can be used as a dictionary key or a set member (since these structures use the hash value in their implementation). All immutable built-in objects in Python are hashable; mutable containers (lists, dictionaries) are not. Instances of classes are hashable by default (their hash value is derived from their id, i.e. their address in memory).
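A short illustration of the above (the TypeError message below is CPython's wording):

```python
# Immutable built-ins are hashable, mutable containers are not:
print(hash((1, 2, 3)))       # a tuple has a hash value
try:
    hash([1, 2, 3])
except TypeError as e:
    print(e)                 # unhashable type: 'list'

# Instances of user-defined classes are hashable by default;
# the default hash is derived from the object's identity:
class Point:
    pass

p = Point()
d = {p: "value"}             # fine: p can be used as a dictionary key
print(d[p])
```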
# if we are not sure the key is in the dictionary, we can use the get method
print(example_dict.get("x"))
print(example_dict.get("x", "default value"))

# iterate over keys and values; note that the order is non-deterministic (before Python 3.6)!
for key, value in example_dict.items():
    print("{}: {}".format(key, value))

# compare looking up a value in a dictionary vs. a list
search_list = list(range(100000))
search_dict = dict.fromkeys(list(range(100000)))
%timeit -n10000 0 in search_list
%timeit -n10000 0 in search_dict
%timeit -n1000 99999 in search_list
%timeit -n10000 99999 in search_dict

# merging dictionaries
d1 = {"a": 1, "c": 3}
d2 = {"b": 2}
d1.update(d2)
print(d1)

# copying
d1 = {"a": 1, "c": 3}
d2 = d1.copy()
d2["a"] = 100
print(d1)
print(d2)
print()

# nested objects are not copied
nested = [1, 2, 3]
d1 = {"a": nested}
d2 = d1.copy()
nested.append(4)
print(d1)
print(d2)
print()

# so once again we use deepcopy
from copy import deepcopy
nested = [1, 2, 3]
d1 = {"a": nested}
d2 = deepcopy(d1)
nested.append(4)
print(d1)
print(d2)
Task: we have a list of elements. Most elements in the list have a pair, i.e. they occur twice, but some elements have no pair and occur only once. The goal is to find these single elements. Here is a solution using a dictionary:
elements = [2, 1, 5, 2, 4, 3, 1, 4]
solution_dict = {}
# TODO by hand
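One possible solution sketch (left as a live-coding exercise in the original): count the occurrences in a dictionary, then keep the elements seen exactly once.

```python
elements = [2, 1, 5, 2, 4, 3, 1, 4]

counts = {}
for elem in elements:
    counts[elem] = counts.get(elem, 0) + 1

singles = [elem for elem, count in counts.items() if count == 1]
print(singles)  # [5, 3]
```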
Set: set
An unordered collection that contains only unique elements.
# create an empty set with the set() function
empty_set = set()
sys.getsizeof(empty_set)

# a non-empty set can also be written with curly braces
example_set = {1, 2, 3}

# add an element to the set
example_set.add(4)
print(example_set)

# adding an element that is already present changes nothing
example_set.add(1)
print(example_set)

# membership test: O(1)
2 in example_set

# this raises a TypeError: a list is not hashable
example_set.add(example_list)

another_set = {4, 5, 6}
print(example_set.difference(another_set))    # difference
print(example_set.union(another_set))         # union
print(example_set.intersection(another_set))  # intersection

# say a call to some chat-service function returned the list of users currently active in a room
active_users = ["Александр", "Михаил", "Ольга"]
# another call returned the list of chat administrators
admins = ["Ольга", "Мария"]
# we need to find the list of active administrators, to send them some notification
active_admins = set(active_users).intersection(set(admins))
active_admins

# sets can contain only hashable objects.
example_set = set()
example_set.add([1, 2, 3])
A basic task that comes up very often in practice: there is a list of elements, and we need to keep only the unique ones. With a set this is a one-liner:
elements = ["yellow", "yellow", "green", "red", "blue", "blue", "magenta", "orange", "red"] set(elements)
frozenset
Since a set can contain only hashable objects, and sometimes you need a set of sets, frozenset exists.
example_set = set()
example_set.add(frozenset([1, 2, 3]))
print(example_set)
collections
import collections
print([x for x in collections.__dict__.keys() if not x.startswith("_")])

# Counter
counter = collections.Counter([1, 2, 3, 4, 1, 2, 1, 1, 1])
for elem, count in counter.most_common(3):
    print("{}: {} times".format(elem, count))

# defaultdict
default_dict = collections.defaultdict(int)
default_dict["not_exists"]

# namedtuple
import math
Point = collections.namedtuple('Point', ['x', 'y'])
point1 = Point(0, 0)
point2 = Point(3, 4)
distance = math.sqrt((point2.x - point1.x)**2 + (point2.y - point1.y)**2)
distance
queue
import queue

print("Queue:", queue.Queue.__doc__)
print()
print("PriorityQueue:", queue.PriorityQueue.__doc__)
print()
print("LifoQueue:", queue.LifoQueue.__doc__)
Queue, PriorityQueue, LifoQueue
We will look at these later, when we talk about multithreading, since access to these structures is synchronized.

heapq
import heapq
from random import shuffle

heap = []
data = list(range(10))
shuffle(data)
for i in data:
    heapq.heappush(heap, i)

# The order of the elements is not random at all, as it might seem.
# The elements are not strictly sorted, but they satisfy the heap invariant:
# heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all elements.
print(heap)

for k in range(3):
    print(heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2])
print("...")
print()

for i in range(10):
    print(heapq.heappop(heap))
Other data structures
Linked lists, trees, graphs, ...
If you need an implementation of some specific data structure, you will most likely find a ready-made open-source implementation.

Complexity analysis: visualising some operations
%matplotlib inline

plotting = True
try:
    import matplotlib
    import seaborn as sbn
    import matplotlib.pyplot as plt
except ImportError:
    # the module could not be imported, so we will not plot
    plotting = False
    print("can't import plotting tools")
else:
    # set the width and height of the plots
    matplotlib.rcParams['figure.figsize'] = (20.0, 10.0)

def draw_plot(labelY, dataX, dataY):
    plt.plot(dataX, dataY, 'ro')
    plt.ylabel(labelY)
    plt.xlabel("elements")
    plt.show()

import random

if plotting:
    elements = range(1000000)[::1000]
    measurements = []
    example_dict = {}
    for elem in elements:
        for i in range(len(example_dict), elem + 1):
            example_dict[i] = True
        num = random.randint(0, elem)
        result = %timeit -n1000 -q -o example_dict[num]
        measurements.append(result.best)
    draw_plot("dict key lookup O(1)", elements, measurements)
else:
    print("plotting unavailable")

if plotting:
    elements = range(10000)[::100]
    measurements = []
    for elem in elements:
        example_list = []
        for i in range(elem):
            example_list.append(random.randint(0, elem))
        num = random.randint(0, elem)
        result = %timeit -n1000 -q -o example_list.count(num)
        measurements.append(result.best)
    draw_plot("list element count O(n)", elements, measurements)
else:
    print("plotting unavailable")

if plotting:
    elements = range(10000)[::100]
    measurements = []
    example_list = []
    for elem in elements:
        for i in range(len(example_list), elem):
            example_list.append(random.randint(0, elem))
        result = %timeit -n50 -q -o sorted(example_list)
        measurements.append(result.best)
    draw_plot("list sorting O(n*log(n)) - timsort", elements, measurements)
else:
    print("plotting unavailable")
References:
https://docs.python.org/3/tutorial/datastructures.html
https://pymbook.readthedocs.io/en/latest/datastructure.html
http://interactivepython.org/runestone/static/pythonds/index.html

A little practice to finish
from so import get_best_answer
from IPython.core.display import display, HTML

answer = get_best_answer("What is Python GIL")
if answer:
    display(HTML(answer["body"]))
else:
    print("Answer not found")
First we prepare the dataset, which is used for training, testing, and finally prediction. Note: you can easily switch to a different dataset, such as the Breast Cancer dataset, by replacing 'ad_hoc_data' with 'Breast_cancer' below.
n = 2  # dimension of each data point
training_dataset_size = 20
testing_dataset_size = 10

sample_Total, training_input, test_input, class_labels = ad_hoc_data(
    training_size=training_dataset_size,
    test_size=testing_dataset_size,
    n=n, gap=0.3, PLOT_DATA=False)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
print(class_to_label)
With the dataset ready, we initialize the necessary inputs for the algorithm, building all the components required by the SVM:
- feature_map
- multiclass_extension (optional)
svm = get_algorithm_instance("QSVM.Kernel")
svm.random_seed = 10598
svm.setup_quantum_backend(backend='statevector_simulator')

feature_map = get_feature_map_instance('SecondOrderExpansion')
feature_map.init_args(num_qubits=2, depth=2, entanglement='linear')

svm.init_args(training_input, test_input, datapoints[0], feature_map)
With everything set up, we can now run the algorithm. The run method includes training, testing, and prediction on unlabeled data. For the testing, the result includes the success ratio. For the prediction, the result includes the predicted class names for each data point. After that, the trained model is also stored in the svm instance; you can use it for future predictions.
result = svm.run()

print("kernel matrix during the training:")
kernel_matrix = result['kernel_matrix_training']
img = plt.imshow(np.asmatrix(kernel_matrix), interpolation='nearest',
                 origin='upper', cmap='bone_r')
plt.show()

print("testing success ratio: ", result['testing_accuracy'])
print("predicted classes:", result['predicted_classes'])
We can also use the trained model to evaluate data directly. The svm instance stores label_to_class and class_to_label mappings to help convert between labels and class names.
predicted_labels = svm.predict(datapoints[0])

predicted_classes = map_label_to_class_name(predicted_labels, svm.label_to_class)
print("ground truth: {}".format(datapoints[1]))
print("prediction: {}".format(predicted_labels))
Validation Curves
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.learning_curve import validation_curve

digits = load_digits()
X, y = digits.data, digits.target

model = RandomForestClassifier(n_estimators=20)
param_range = range(1, 13)
training_scores, validation_scores = validation_curve(model, X, y,
                                                      param_name="max_depth",
                                                      param_range=param_range, cv=5)
training_scores.shape
training_scores

def plot_validation_curve(parameter_values, train_scores, validation_scores):
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    validation_scores_mean = np.mean(validation_scores, axis=1)
    validation_scores_std = np.std(validation_scores, axis=1)

    plt.fill_between(parameter_values, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1, color="r")
    plt.fill_between(parameter_values, validation_scores_mean - validation_scores_std,
                     validation_scores_mean + validation_scores_std, alpha=0.1, color="g")
    plt.plot(parameter_values, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(parameter_values, validation_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.ylim(validation_scores_mean.min() - .1, train_scores_mean.max() + .1)
    plt.legend(loc="best")

plt.figure()
plot_validation_curve(param_range, training_scores, validation_scores)
Exercise
Plot the validation curve on the digits dataset for:
* a LinearSVC with a logarithmic range of regularization parameters C.
* a KNeighborsClassifier with a linear range of neighbors n_neighbors.

What do you expect them to look like? What do they actually look like?
# %load solutions/validation_curve.py
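If the solutions file is not available, here is one possible sketch (not necessarily the course's solutions/validation_curve.py); it assumes X, y, validation_curve, and plot_validation_curve from the cells above are in scope, and the parameter ranges are illustrative choices:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier

# LinearSVC over a logarithmic range of C
param_range_C = np.logspace(-3, 2, 6)
train_scores, val_scores = validation_curve(LinearSVC(), X, y, param_name="C",
                                            param_range=param_range_C, cv=5)
plt.figure()
plot_validation_curve(param_range_C, train_scores, val_scores)
plt.xscale("log")

# KNeighborsClassifier over a linear range of n_neighbors
param_range_k = range(1, 21)
train_scores, val_scores = validation_curve(KNeighborsClassifier(), X, y,
                                            param_name="n_neighbors",
                                            param_range=param_range_k, cv=5)
plt.figure()
plot_validation_curve(param_range_k, train_scores, val_scores)
```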
Predicting on sample validation data
For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan. Hint: use the predict method in model_5 for this.
Quiz question: What percentage of the predictions on sample_validation_data did model_5 get correct?

Prediction probabilities
For each row in the sample_validation_data, what is the probability (according to model_5) of a loan being classified as safe? Hint: set output_type='probability' to make probability predictions using model_5 on sample_validation_data.
Quiz question: According to model_5, which loan is the least likely to be a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?

Evaluating the model on the validation data
Recall that the accuracy is defined as follows:
$$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$
Evaluate the accuracy of model_5 on the validation_data. Hint: use the .evaluate() method in the model.
Calculate the number of false positives made by the model.
Quiz question: What is the number of false positives on the validation_data?
Calculate the number of false negatives made by the model.

Comparison with decision trees
In the earlier assignment, we saw that the prediction accuracy of the decision trees was around 0.64 (rounded). In this assignment, we saw that model_5 has an accuracy of 0.67 (rounded). Here, we quantify the benefit of the extra 3% increase in accuracy of model_5 in comparison with a single decision tree from the original decision tree assignment.
As we explored in the earlier assignment, we calculated the cost of the mistakes made by the model. We again consider the same costs as follows:
False negatives: assume a cost of \$10,000 per false negative.
False positives: assume a cost of \$20,000 per false positive.
Assume that the number of false positives and false negatives for the learned decision tree was:
False negatives: 1936
False positives: 1503
Using the costs defined above and the number of false positives and false negatives for the decision tree, we can calculate the total cost of the mistakes made by the decision tree model as follows:
cost = $10,000 * 1936 + $20,000 * 1503 = $49,420,000
The total cost of the mistakes of the model is $49.42M. That is a lot of money!
Quiz question: Using the same costs for false positives and false negatives, what is the cost of the mistakes made by the boosted tree model (model_5) as evaluated on the validation_set?
Reminder: Compare the cost of the mistakes made by the boosted trees model with the decision tree model. The extra 3% improvement in prediction accuracy can translate to several million dollars! And it was so easy to get, simply by boosting our decision trees.

Most positive & negative loans
In this section, we will find the loans that are most likely to be predicted safe. We can do this in a few steps:
Step 1: Use model_5 (the model with 5 trees) and make probability predictions for all the loans in the validation_data.
Step 2: Similar to what we did in the very first assignment, add the probability predictions as a column called predictions into the validation_data.
Step 3: Sort the data (in decreasing order) by the probability predictions.
Start here with Step 1 & Step 2. Make predictions using model_5 for examples in the validation_data. Use output_type = 'probability'.
Checkpoint: For each row, the probabilities should be a number in the range [0, 1]. We have provided a simple check here to make sure your answers are correct.
print "Your loans : %s\n" % validation_data['predictions'].head(4) print "Expected answer : %s" % [0.4492515948736132, 0.6119100103640573, 0.3835981314851436, 0.3693306705994325]
Now, we are ready to go to Step 3. You can now use the predictions column to sort the loans in validation_data (in descending order) by prediction probability. Find the top 5 loans with the highest probability of being predicted as a safe loan.
Quiz question: What grades are the top 5 loans?
Let us repeat this exercise to find the top 5 loans (in the validation_data) with the lowest probability of being predicted as a safe loan.
Checkpoint: You should expect to see 5 loans with the grade ['D', 'C', 'C', 'C', 'B'].

Effect of adding more trees
In this assignment, we will train 5 different ensemble classifiers in the form of gradient boosted trees. We will train models with 10, 50, 100, 200, and 500 trees, using the max_iterations parameter in the boosted tree module. Let's get started with a model with max_iterations = 10:
model_10 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
                                                    target=target, features=features,
                                                    max_iterations=10, verbose=False)
Now, train 4 models with max_iterations set to:
* max_iterations = 50
* max_iterations = 100
* max_iterations = 200
* max_iterations = 500

Let us call these models model_50, model_100, model_200, and model_500. You can pass in verbose=False in order to suppress the printed output. Warning: this could take a couple of minutes to run.
model_50 =
model_100 =
model_200 =
model_500 =
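A sketch of one way to fill these in, mirroring the model_10 call above:

```python
model_50 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
                                                    target=target, features=features,
                                                    max_iterations=50, verbose=False)
model_100 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
                                                     target=target, features=features,
                                                     max_iterations=100, verbose=False)
model_200 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
                                                     target=target, features=features,
                                                     max_iterations=200, verbose=False)
model_500 = graphlab.boosted_trees_classifier.create(train_data, validation_set=None,
                                                     target=target, features=features,
                                                     max_iterations=500, verbose=False)
```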
Compare accuracy on the entire validation set
Now we will compare the predictive accuracy of our models on the validation set. Evaluate the accuracy of the 10, 50, 100, 200, and 500 tree models on the validation_data. Use the .evaluate method.
Quiz question: Which model has the best accuracy on the validation_data?
Quiz question: Is it always true that the model with the most trees will perform best on test data?

Plot the training and validation error vs. number of trees
Recall from the lecture that the classification error is defined as
$$ \mbox{classification error} = 1 - \mbox{accuracy} $$
In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots. First, make sure this block of code runs on your computer.
import matplotlib.pyplot as plt
%matplotlib inline

def make_figure(dim, title, xlabel, ylabel, legend):
    plt.rcParams['figure.figsize'] = dim
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    if legend is not None:
        plt.legend(loc=legend, prop={'size': 15})
    plt.rcParams.update({'font.size': 16})
    plt.tight_layout()
In order to plot the classification errors (on the train_data and validation_data) versus the number of trees, we will need lists of these accuracies, which we get by applying the method .evaluate. Steps to follow:
Step 1: Calculate the classification error for each model on the training data (train_data).
Step 2: Store the training errors into a list (called training_errors) that looks like this: [train_err_10, train_err_50, ..., train_err_500]
Step 3: Calculate the classification error of each model on the validation data (validation_data).
Step 4: Store the validation classification errors into a list (called validation_errors) that looks like this: [validation_err_10, validation_err_50, ..., validation_err_500]
Once that has been completed, the rest of the code should be able to evaluate correctly and generate the plot.
Let us start with Step 1. Write code to compute the classification error on the train_data for models model_10, model_50, model_100, model_200, and model_500 (a sketch follows below). Then, for Step 2, save the training errors into a list called training_errors.
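Here is a hedged sketch for Step 1, assuming each model's .evaluate() returns a dictionary with an 'accuracy' key (as GraphLab Create's boosted trees classifier does):

```python
# classification error = 1 - accuracy, evaluated on the training data
train_err_10 = 1 - model_10.evaluate(train_data)['accuracy']
train_err_50 = 1 - model_50.evaluate(train_data)['accuracy']
train_err_100 = 1 - model_100.evaluate(train_data)['accuracy']
train_err_200 = 1 - model_200.evaluate(train_data)['accuracy']
train_err_500 = 1 - model_500.evaluate(train_data)['accuracy']
```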
training_errors = [train_err_10, train_err_50, train_err_100, train_err_200, train_err_500]
ml-classification/blank/module-8-boosting-assignment-1-blank.ipynb
dnc1994/MachineLearning-UW
mit
Now, onto Step 3. Write code to compute the classification error on the validation_data for models model_10, model_50, model_100, model_200, and model_500. Now, let us run Step 4. Save the validation errors into a list called validation_errors
validation_errors = [validation_err_10, validation_err_50, validation_err_100, validation_err_200, validation_err_500]
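To finish, a hypothetical plotting cell (not part of the original blank notebook) that draws both error curves with the make_figure helper defined above:
# Sketch: plot training and validation error against the number of trees
n_trees = [10, 50, 100, 200, 500]
plt.plot(n_trees, training_errors, linewidth=4.0, label='Training error')
plt.plot(n_trees, validation_errors, linewidth=4.0, label='Validation error')
make_figure(dim=(10, 5), title='Error vs number of trees',
            xlabel='Number of trees', ylabel='Classification error', legend='best')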
ml-classification/blank/module-8-boosting-assignment-1-blank.ipynb
dnc1994/MachineLearning-UW
mit
Normalize the features Hint: You solved this in TensorFlow lab Problem 1.
# TODO: Normalize the data features to the variable X_normalized
import math
import numpy as np

def normalize(X, a=0, b=1):
    """
    Normalize the data with Min-Max scaling to the feature range [a, b]
    :param X: The data to be normalized
    :param a: Lower bound of the feature range
    :param b: Upper bound of the feature range
    :return: Normalized data
    """
    # Min-Max scaling: rescale each feature to [0, 1], then shift/stretch to [a, b]
    X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    X_scaled = X_std * (b - a) + a
    return X_scaled

X_normalized = normalize(X_train, a=-0.5, b=0.5)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert math.isclose(np.min(X_normalized), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_normalized), 0.5, abs_tol=1e-5), 'The range of the training data is: {} to {}. It must be -0.5 to 0.5'.format(np.min(X_normalized), np.max(X_normalized))
print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
One-Hot Encode the labels Hint: You can use the scikit-learn LabelBinarizer function to one-hot encode the labels.
# TODO: One Hot encode the labels to the variable y_one_hot from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() y_one_hot = lb.fit_transform(y_train) # STOP: Do not change the tests below. Your implementation should pass these tests. import collections assert y_one_hot.shape == (39209, 43), 'y_one_hot is not the correct shape. It\'s {}, it should be (39209, 43)'.format(y_one_hot.shape) assert next((False for y in y_one_hot if collections.Counter(y) != {0: 42, 1: 1}), True), 'y_one_hot not one-hot encoded.' print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Keras Sequential Model
```python
from keras.models import Sequential

# Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's `add()` function. For example, a simple model would look like this:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten

# Create the Sequential model
model = Sequential()

# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))

# 2nd Layer - Add a fully connected layer
model.add(Dense(100))

# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))

# 4th Layer - Add a fully connected layer
model.add(Dense(60))

# 5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
```
Keras will automatically infer the shape of all layers after the first layer. This means you only have to set the input dimensions for the first layer. The first layer from above, `model.add(Flatten(input_shape=(32, 32, 3)))`, sets the input dimension to (32, 32, 3) and the output dimension to (3072 = 32 * 32 * 3). The second layer takes in the output of the first layer and sets the output dimensions to (100). This chain of passing output to the next layer continues until the last layer, which is the output of the model.
Build a Multi-Layer Feedforward Network
Build a multi-layer feedforward neural network to classify the traffic sign images.
Set the first layer to a Flatten layer with the input_shape set to (32, 32, 3)
Set the second layer to a Dense layer with a width of 128.
Use a ReLU activation function after the second layer.
Set the output layer width to 43, since there are 43 classes in the dataset.
Use a softmax activation function after the output layer.
To get started, review the Keras documentation about models and layers. The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.
from keras.models import Sequential from keras.layers.core import Dense, Activation, Flatten model = Sequential() # TODO: Build a Multi-layer feedforward neural network with Keras here. # 1st Layer - Add a flatten layer model.add(Flatten(input_shape=(32, 32, 3))) # 2nd Layer - Add a fully connected layer model.add(Dense(128)) # 3rd Layer - Add a ReLU activation layer model.add(Activation('relu')) # 4th Layer - Add a fully connected layer model.add(Dense(43)) # 5th Layer - Add a softmax activation layer model.add(Activation('softmax')) # STOP: Do not change the tests below. Your implementation should pass these tests. from keras.layers.core import Dense, Activation, Flatten from keras.activations import relu, softmax def check_layers(layers, true_layers): assert len(true_layers) != 0, 'No layers found' for layer_i in range(len(layers)): assert isinstance(true_layers[layer_i], layers[layer_i]), 'Layer {} is not a {} layer'.format(layer_i+1, layers[layer_i].__name__) assert len(true_layers) == len(layers), '{} layers found, should be {} layers'.format(len(true_layers), len(layers)) check_layers([Flatten, Dense, Activation, Dense, Activation], model.layers) assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)' assert model.layers[1].output_shape == (None, 128), 'Second layer output is wrong, it should be (128)' assert model.layers[2].activation == relu, 'Third layer not a relu activation layer' assert model.layers[3].output_shape == (None, 43), 'Fourth layer output is wrong, it should be (43)' assert model.layers[4].activation == softmax, 'Fifth layer not a softmax activation layer' print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Training a Sequential Model You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
...

# Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])

# Train the model
# History is a record of training loss and metrics
history = model.fit(X_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)

# Calculate test score
test_score = model.evaluate(X_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`. You can find more optimizers here, loss functions here, and more metrics here. To train the model, use the `fit()` function as shown in `model.fit(X_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)`. The `validation_split` parameter will split off a percentage of the training dataset to be used to validate the model. Typically you won't have to change the `verbose` parameter, but in Jupyter notebooks the update animation can crash the notebook, so we set `verbose=2`; this limits the animation to update only after an epoch is complete. The model can be further tested with the test dataset using the `evaluate()` function, as shown in the last line.
Train the Network
Compile the network using the adam optimizer and the categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
# TODO: Compile and train the model here. # Configures the learning process and metrics # Compile the network using adam optimizer and categorical_crossentropy loss function. model.compile('adam', 'categorical_crossentropy', ['accuracy']) # Train the model # History is a record of training loss and metrics # Train the network for ten epochs and validate with 20% of the training data. history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=10, validation_split=0.2, verbose=2) # STOP: Do not change the tests below. Your implementation should pass these tests. from keras.optimizers import Adam assert model.loss == 'categorical_crossentropy', 'Not using categorical_crossentropy loss function' assert isinstance(model.optimizer, Adam), 'Not using adam optimizer' assert len(history.history['acc']) == 10, 'You\'re using {} epochs when you need to use 10 epochs.'.format(len(history.history['acc'])) assert history.history['acc'][-1] > 0.92, 'The training accuracy was: %.3f. It shoud be greater than 0.92' % history.history['acc'][-1] assert history.history['val_acc'][-1] > 0.85, 'The validation accuracy is: %.3f. It shoud be greater than 0.85' % history.history['val_acc'][-1] print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Convolutions Re-construct the previous network Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer. Add a ReLU activation after the convolutional layer. Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D

# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
model = Sequential()

# 1st Layer - Add a convolution layer (32 filters, 3x3 kernel, valid padding)
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 3rd Layer - Add a flatten layer
model.add(Flatten())
# 4th Layer - Add a fully connected layer
model.add(Dense(128))
# 5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 6th Layer - Add a fully connected layer
model.add(Dense(43))
# 7th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D

check_layers([Convolution2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)

assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[0].nb_filter == 32, 'Wrong number of filters, it should be 32'
assert model.layers[0].nb_col == model.layers[0].nb_row == 3, 'Kernel size is wrong, it should be a 3x3'
assert model.layers[0].border_mode == 'valid', 'Wrong padding, it should be valid'

model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Pooling Re-construct the network Add a 2x2 max pooling layer immediately following your convolutional layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a flatten layer
model.add(Flatten())
# 5th Layer - Add a fully connected layer
model.add(Dense(128))
# 6th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 7th Layer - Add a fully connected layer
model.add(Dense(43))
# 8th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

check_layers([Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[1].pool_size == (2, 2), 'Second layer must be a max pool layer with pool size of 2x2'

model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Dropout Re-construct the network Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# TODO: Re-construct the network and add dropout after the pooling layer.
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 4th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 5th Layer - Add a flatten layer
model.add(Flatten())
# 6th Layer - Add a fully connected layer
model.add(Dense(128))
# 7th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 8th Layer - Add a fully connected layer
model.add(Dense(43))
# 9th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

check_layers([Convolution2D, MaxPooling2D, Dropout, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[2].p == 0.5, 'Third layer should be a Dropout of 50%'

model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Optimization Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code. Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs. What is the best validation accuracy you can achieve? batch_size=50, nb_epoch=20, border_mode='valid'
# TODO: Build a model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# Same architecture as above: one convolution block (conv + pooling + dropout + ReLU), then fully connected layers
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 4th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 5th Layer - Add a flatten layer
model.add(Flatten())
# 6th Layer - Add a fully connected layer
model.add(Dense(128))
# 7th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 8th Layer - Add a fully connected layer
model.add(Dense(43))
# 9th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=50, nb_epoch=20, validation_split=0.2, verbose=2)
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
batch_size=128, nb_epoch=20, border_mode='valid'
# TODO: Build a model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# Same architecture as above: one convolution block (conv + pooling + dropout + ReLU), then fully connected layers
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 4th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 5th Layer - Add a flatten layer
model.add(Flatten())
# 6th Layer - Add a fully connected layer
model.add(Dense(128))
# 7th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 8th Layer - Add a fully connected layer
model.add(Dense(43))
# 9th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20, validation_split=0.2, verbose=2)
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Add one more convolution layer with dropout 50%, batch_size=50, nb_epoch=10
# TODO: Build a model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# Two convolution blocks (conv + pooling + dropout + ReLU), then fully connected layers
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 4th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 5th Layer - Add a second convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid'))
# 6th Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 7th Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 8th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 9th Layer - Add a flatten layer
model.add(Flatten())
# 10th Layer - Add a fully connected layer
model.add(Dense(128))
# 11th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 12th Layer - Add a fully connected layer
model.add(Dense(43))
# 13th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=50, nb_epoch=10, validation_split=0.2, verbose=2)
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Add one more convolution layer with dropout 50%, batch_size=50, nb_epoch=20, border_mode='same'
# TODO: Build a model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# Two convolution blocks (conv + pooling + dropout + ReLU) with 'same' padding, then fully connected layers
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 4th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 5th Layer - Add a second convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='same'))
# 6th Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 7th Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 8th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 9th Layer - Add a flatten layer
model.add(Flatten())
# 10th Layer - Add a fully connected layer
model.add(Dense(128))
# 11th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 12th Layer - Add a fully connected layer
model.add(Dense(43))
# 13th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=50, nb_epoch=20, validation_split=0.2, verbose=2)
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Add one more convolution layer with dropout 50%, batch_size=128, nb_epoch=20, border_mode='valid'
# TODO: Build a model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

# Two convolution blocks (conv + pooling + dropout + ReLU), then fully connected layers
model = Sequential()

# 1st Layer - Add a convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 3rd Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 4th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 5th Layer - Add a second convolution layer
model.add(Convolution2D(32, 3, 3, border_mode='valid'))
# 6th Layer - Add a 2x2 max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# 7th Layer - Add a dropout layer. Set the dropout rate to 50%.
model.add(Dropout(0.5))
# 8th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 9th Layer - Add a flatten layer
model.add(Flatten())
# 10th Layer - Add a fully connected layer
model.add(Dense(128))
# 11th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 12th Layer - Add a fully connected layer
model.add(Dense(43))
# 13th Layer - Add a softmax activation layer
model.add(Activation('softmax'))

# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20, validation_split=0.2, verbose=2)
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Best Validation Accuracy: 0.9897 Testing Once you've picked out your best model, it's time to test it. Load up the test data and use the evaluate() method to see how well it does. Hint 1: The evaluate() method should return an array of numbers. Use the metrics_names property to get the labels.
# TODO: Load test data
import pickle

with open('test.p', 'rb') as f:
    test_data = pickle.load(f)

# TODO: Load the feature data to the variable X_test
X_test = test_data['features']
# TODO: Load the label data to the variable y_test
y_test = test_data['labels']

# TODO: Preprocess data & one-hot encode the labels
X_normalized_test = normalize(X_test, a=-0.5, b=0.5)
y_one_hot_test = lb.transform(y_test)

# TODO: Evaluate model on test data
metrics = model.evaluate(X_normalized_test, y_one_hot_test)
for metric_i in range(len(model.metrics_names)):
    metric_name = model.metrics_names[metric_i]
    metric_value = metrics[metric_i]
    print('{}: {}'.format(metric_name, metric_value))
CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
Show me what you got! <img src=https://staticdelivery.nexusmods.com/mods/1151/images/543-1-1447533110.png style="width:40px;">
# uncomment to see the values # carbon_monoxide
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
It is easy to create other useful plots using DataFrame:
fig, (ax0, ax1) = plt.subplots(ncols=2) air_quality.boxplot(ax=ax0, column=['O3', 'PM10']) air_quality.O3.plot(ax=ax1, kind="kde")
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Q: the plot is too small. How can we make it bigger? (Hint: the figsize argument, used in the next example.) As well as just a simple line plot:
air_quality.O3.plot(grid=True, figsize=(12, 2))
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Setting missing values As you may notice, we have negative values of ozone concentration, which do not make sense. So, let us replace those negative values with NaN: Q: how can we list the data entries with negative O3?
# your code here
# One possible solution (an assumption about the intended answer):
# air_quality[air_quality.O3 < 0]                      # list the entries with negative O3
# import numpy as np
# air_quality.loc[air_quality.O3 < 0, 'O3'] = np.nan   # replace them with NaN
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Some statistics
# air_quality.describe()
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Correlations and regressions Are there correlations between the time series we loaded? First, let's take a glance at the whole DataFrame using a fancy scatter_matrix function.
from pandas.tools.plotting import scatter_matrix  # note: in pandas >= 0.20 this lives in pandas.plotting

# with plt.style.context('ggplot'):
#     scatter_matrix(air_quality, figsize=(7, 7))
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Creating DataFrames A DataFrame can also be created manually, by grouping several Series together. Now, just for fun, we switch to another dataset: we create 2 Series objects from 2 CSV files and combine them into a DataFrame. Data are monthly values of the Southern Oscillation Index (SOI) and Outgoing Longwave Radiation (OLR), the latter being a proxy for convective precipitation in the western equatorial Pacific. Data were downloaded from NOAA's website: https://www.ncdc.noaa.gov/teleconnections/
soi_df = pd.read_csv('../data/soi.csv', skiprows=1, parse_dates=[0], index_col=0, na_values=-999.9, date_parser=lambda x: pd.datetime.strptime(x, '%Y%m')) olr_df = pd.read_csv('../data/olr.csv', skiprows=1, parse_dates=[0], index_col=0, na_values=-999.9, date_parser=lambda x: pd.datetime.strptime(x, '%Y%m')) df = pd.DataFrame({'OLR': olr_df.Value, 'SOI': soi_df.Value}) # df.describe()
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Ordinary Least Squares (OLS) regressions <img src=http://statsmodels.sourceforge.net/devel/_static/statsmodels_hybi_banner.png> The recommended way to build ordinary least squares regressions is by using statsmodels.
# import statsmodels.formula.api as sm # sm_model = sm.ols(formula="SOI ~ OLR", data=df).fit() # df['SOI'].plot() # df['OLR'].plot() # ax = sm_model.fittedvalues.plot(label="model prediction") # ax.legend(loc="lower center", ncol=3)
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Advanced scatter plot Using the power of matplotlib, we can create a scatter plot with points coloured by the date index. To do this we need to import one additional submodule:
# import matplotlib.dates as mdates
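As an illustration (a hypothetical cell, not in the original notebook), the date-coloured scatter could look like this, assuming df still holds the OLR and SOI columns with a DatetimeIndex:
# Sketch: scatter of OLR vs SOI, coloured by date (converted to matplotlib date numbers)
import matplotlib.dates as mdates
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
sc = ax.scatter(df.OLR, df.SOI, c=mdates.date2num(df.index.to_pydatetime()), cmap='viridis')
fig.colorbar(sc, label='date (matplotlib date number)')
ax.set_xlabel('OLR')
ax.set_ylabel('SOI')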
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit