Now it is time to stitch all that together. For that we will use OWSLib*. Constructing the filter is probably the most complex part. We start with a list comprehension using fes.Or to create the variables filter. The next step is to exclude some unwanted results (ROMS Averages files) using fes.Not. To select the desired dates we wrote a wrapper function that takes the start and end dates of the event. Finally, we apply fes.And to join all the conditions above into one filter list.

* OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards and their related content models.
from owslib import fes
from utilities import fes_date_filter

kw = dict(wildCard='*', escapeChar='\\', singleChar='?',
          propertyname='apiso:AnyText')

or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw)
                  for val in name_list])

# Exclude ROMS Averages and History files.
not_filt = fes.Not([fes.PropertyIsLike(literal='*Averages*', **kw)])

begin, end = fes_date_filter(start, stop)
filter_list = [fes.And([fes.BBox(bbox), begin, end, or_filt, not_filt])]
content/downloads/notebooks/2015-10-12-fetching_data.ipynb
ioos/system-test
unlicense
Now we are ready to load a csw object and feed it with the filter we created.
from owslib.csw import CatalogueServiceWeb

csw = CatalogueServiceWeb('http://www.ngdc.noaa.gov/geoportal/csw', timeout=60)
csw.getrecords2(constraints=filter_list, maxrecords=1000, esn='full')

fmt = '{:*^64}'.format
print(fmt(' Catalog information '))
print("CSW version: {}".format(csw.version))
print("Number of datasets available: {}".format(len(csw.records.keys())))
We found 13 datasets! Not bad for such a narrow search area and time-span. What do we have there? Let's use the custom service_urls function to split the datasets into OPeNDAP and SOS endpoints.
from utilities import service_urls

dap_urls = service_urls(csw.records, service='odp:url')
sos_urls = service_urls(csw.records, service='sos:url')

print(fmt(' SOS '))
for url in sos_urls:
    print('{}'.format(url))

print(fmt(' DAP '))
for url in dap_urls:
    print('{}.html'.format(url))
We will ignore the SOS endpoints for now and use only the DAP endpoints. But note that some of those SOS and DAP endpoints look suspicious. The Scripps Institution of Oceanography (SIO/UCSD) data should not appear in a search for Boston Harbor. That is a known issue and we are working to sort it out. Meanwhile, we have to filter out all observations from the DAP endpoints with the is_station function. However, that filter still leaves behind URLs like http://tds.maracoos.org/thredds/dodsC/SST-Three-Agg.nc.html. That is probably satellite data, not model output. In an ideal world every dataset would define the coverage_content_type metadata attribute, which would let us tell models apart automatically. Until then we will have to make do with the heuristic function is_model from the utilities module. The is_model function works by comparing the metadata (and sometimes the data itself) against a series of criteria, like grid conventions, to figure out whether a dataset is model data or not. Because the function operates on the data, we will call it later on, when we start downloading the data.
from utilities import is_station

non_stations = []
for url in dap_urls:
    try:
        if not is_station(url):
            non_stations.append(url)
    except RuntimeError as e:
        print("Could not access URL {}. {!r}".format(url, e))

dap_urls = non_stations

print(fmt(' Filtered DAP '))
for url in dap_urls:
    print('{}.html'.format(url))
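The kind of heuristic is_model applies could look something like the sketch below. This is a hypothetical, simplified stand-in, not the actual utilities implementation; the attribute names and model list are assumptions:

```python
def looks_like_model(metadata):
    """Guess whether a dataset is model output from its global attributes.

    A toy heuristic: model output usually names a known ocean model in its
    title/source attributes, or advertises grid conventions.
    """
    known_models = ('roms', 'fvcom', 'selfe', 'adcirc', 'ncom', 'hycom')
    text = ' '.join(str(metadata.get(key, '')) for key in ('title', 'source')).lower()
    if any(name in text for name in known_models):
        return True
    # Structured/unstructured grid conventions typical of model output.
    conventions = str(metadata.get('Conventions', '')).lower()
    return 'ugrid' in conventions or 'sgrid' in conventions


print(looks_like_model({'title': 'ROMS/TOMS 3.0 forecast'}))     # True
print(looks_like_model({'title': 'HF Radar surface currents'}))  # False
```

The real function also inspects the data itself (e.g. coordinate layout), which a metadata-only check like this cannot do.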
We still need to find endpoints for the observations. For that we'll use pyoos' NdbcSos and CoopsSos collectors. The pyoos API is different from OWSLib's, but note that we are re-using the same query variables we created for the catalog search (bbox, start, stop, and sos_name).
from pyoos.collectors.ndbc.ndbc_sos import NdbcSos

collector_ndbc = NdbcSos()
collector_ndbc.set_bbox(bbox)
collector_ndbc.end_time = stop
collector_ndbc.start_time = start
collector_ndbc.variables = [sos_name]

ofrs = collector_ndbc.server.offerings
title = collector_ndbc.server.identification.title
print(fmt(' NDBC Collector offerings '))
print('{}: {} offerings'.format(title, len(ofrs)))
That number is misleading! Do we have 955 buoys available there? What exactly are the offerings? There is only one way to find out. Let's get the data!
from utilities import collector2table, get_ndbc_longname

ndbc = collector2table(collector=collector_ndbc)

names = []
for s in ndbc['station']:
    try:
        name = get_ndbc_longname(s)
    except ValueError:
        name = s
    names.append(name)

ndbc['name'] = names
ndbc.set_index('name', inplace=True)
ndbc.head()
That makes more sense. Two buoys were found in the bounding box, and the name of at least one of them makes sense. Now the same thing for CoopsSos.
from pyoos.collectors.coops.coops_sos import CoopsSos
from utilities import get_coops_metadata

collector_coops = CoopsSos()
collector_coops.set_bbox(bbox)
collector_coops.end_time = stop
collector_coops.start_time = start
collector_coops.variables = [sos_name]

ofrs = collector_coops.server.offerings
title = collector_coops.server.identification.title
print(fmt(' Collector offerings '))
print('{}: {} offerings'.format(title, len(ofrs)))

coops = collector2table(collector=collector_coops)

names = []
for s in coops['station']:
    try:
        name = get_coops_metadata(s)[0]
    except ValueError:
        name = s
    names.append(name)

coops['name'] = names
coops.set_index('name', inplace=True)
coops.head()
We found one more. Now we can merge both into one table and start downloading the data.
from pandas import concat, DataFrame
from owslib.ows import ExceptionReport
from utilities import pyoos2df, save_timeseries

all_obs = concat([coops, ndbc])
all_obs.head()

iris.FUTURE.netcdf_promote = True

data = dict()
col = 'sea_water_temperature (C)'
for station in all_obs.index:
    try:
        idx = all_obs['station'][station]
        df = pyoos2df(collector_ndbc, idx, df_name=station)
        if df.empty:
            df = pyoos2df(collector_coops, idx, df_name=station)
        data.update({idx: df[col]})
    except ExceptionReport as e:
        print("[{}] {}:\n{}".format(idx, station, e))
The cell below reduces or interpolates the data, depending on its original sampling frequency, to an hourly time series.
from pandas import date_range

index = date_range(start=start, end=stop, freq='1H')
for k, v in data.items():
    data[k] = v.reindex(index=index, limit=1, method='nearest')

obs_data = DataFrame.from_dict(data)
obs_data.head()
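What reindex with method='nearest' does can be seen in a self-contained toy example (synthetic timestamps, not the notebook's data). The notebook additionally passes limit=1 to cap how far a single observation can propagate; it is omitted here for simplicity:

```python
import numpy as np
import pandas as pd

# A series sampled every 20 minutes.
src_index = pd.date_range('2015-08-15', periods=9, freq='20min')
series = pd.Series(np.arange(9.0), index=src_index)

# Snap it onto an hourly grid (offset by 5 minutes so no label
# matches exactly); each slot takes the nearest original sample.
hourly_grid = pd.date_range('2015-08-15 00:05', periods=3, freq='1H')
hourly = series.reindex(hourly_grid, method='nearest')
print(hourly)  # values 0.0, 3.0, 6.0 (the samples at 00:00, 01:00, 02:00)
```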
And now the same for the models. Note that now we use is_model to filter out non-model endpoints.
import warnings
from iris.exceptions import (CoordinateNotFoundError,
                             ConstraintMismatchError, MergeError)
from utilities import (quick_load_cubes, proc_cube, is_model,
                       get_model_name, get_surface)

cubes = dict()
for k, url in enumerate(dap_urls):
    print('\n[Reading url {}/{}]: {}'.format(k+1, len(dap_urls), url))
    try:
        cube = quick_load_cubes(url, name_list, callback=None, strict=True)
        if is_model(cube):
            cube = proc_cube(cube, bbox=bbox, time=(start, stop), units=units)
        else:
            print("[Not model data]: {}".format(url))
            continue
        cube = get_surface(cube)
        mod_name, model_full_name = get_model_name(cube, url)
        cubes.update({mod_name: cube})
    except (RuntimeError, ValueError, ConstraintMismatchError,
            CoordinateNotFoundError, IndexError) as e:
        print('Cannot get cube for: {}\n{}'.format(url, e))
And now we can use the iris cube objects we collected to download model data near the buoys we found above. We will use get_nearest_water to search the 10 nearest model points no more than 0.08 degrees away from each buoy. (This step is still a little bit clunky and needs some improvement!)
from iris.pandas import as_series
from utilities import (make_tree, get_nearest_water, add_station,
                       ensure_timeseries, remove_ssh)

model_data = dict()
for mod_name, cube in cubes.items():
    print(fmt(mod_name))
    try:
        tree, lon, lat = make_tree(cube)
    except CoordinateNotFoundError as e:
        print('Cannot make KDTree for: {}'.format(mod_name))
        continue
    # Get model series at observed locations.
    raw_series = dict()
    for station, obs in all_obs.iterrows():
        try:
            kw = dict(k=10, max_dist=0.08, min_var=0.01)
            args = cube, tree, obs.lon, obs.lat
            series, dist, idx = get_nearest_water(*args, **kw)
        except ValueError as e:
            status = "No Data"
            print('[{}] {}'.format(status, obs.name))
            continue
        if not series:
            status = "Land  "
        else:
            series = as_series(series)
            raw_series.update({obs['station']: series})
            status = "Water "
        print('[{}] {}'.format(status, obs.name))
    if raw_series:
        # Save that model series.
        model_data.update({mod_name: raw_series})
    del cube
To end this post let's plot the 3 buoys we found together with the nearest model grid point.
import matplotlib.pyplot as plt

def plot_buoy(buoy):
    fig, ax = plt.subplots(figsize=(11, 2.75))
    obs_data[buoy].plot(ax=ax, label='Buoy')
    for model in model_data.keys():
        try:
            model_data[model][buoy].plot(ax=ax, label=model)
        except KeyError:
            pass  # Could not find a model at this location.
    leg = ax.legend()

for buoy in ['44013', '44029', '8443970']:
    plot_buoy(buoy)
That is it! We fetched data based only on a bounding box, a time range, and a variable name. The workflow is not as smooth as we would like. We had to mix OWSLib catalog searches with two different pyoos collectors to download the observed and modeled data. Another hiccup is all the workarounds used to go from iris cubes to pandas series/dataframes. There is a clear need for a better way to represent CF feature types in a single Python object. To end this post, check out the full version of the Boston Light Swim notebook. (Especially the interactive map at the end.)
HTML(html)
Create the Keras model

First, write an input_fn to read the data.
import shutil
import numpy as np
import tensorflow as tf

print(tf.__version__)

# Determine CSV, label, and key columns.
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'

# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]

def features_and_labels(row_data):
    for unwanted_col in ['key']:
        row_data.pop(unwanted_col)
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label  # features, label

# Load the training data.
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size,
                                                     CSV_COLUMNS, DEFAULTS)
               .map(features_and_labels))  # features, label
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(1000).repeat()
    dataset = dataset.prefetch(1)  # take advantage of multi-threading; 1=AUTOTUNE
    return dataset
quests/endtoendml/labs/3_keras_wd.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others (is_male, plurality) should be categorical.
## Build a Keras wide-and-deep model using its Functional API

def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

# Helper function to handle categorical columns.
def categorical_fc(name, values):
    orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
    wrapped = tf.feature_column.indicator_column(orig)
    return orig, wrapped

def build_wd_model(dnn_hidden_units=[64, 32], nembeds=3):
    # Input layer.
    deep_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
        for colname in ['mother_age', 'gestation_weeks']
    }
    wide_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')
        for colname in ['is_male', 'plurality']
    }
    inputs = {**wide_inputs, **deep_inputs}

    # Feature columns from inputs.
    deep_fc = {
        colname: tf.feature_column.numeric_column(colname)
        for colname in ['mother_age', 'gestation_weeks']
    }
    wide_fc = {}
    is_male, wide_fc['is_male'] = categorical_fc(
        'is_male', ['True', 'False', 'Unknown'])
    plurality, wide_fc['plurality'] = categorical_fc(
        'plurality', ['Single(1)', 'Twins(2)', 'Triplets(3)',
                      'Quadruplets(4)', 'Quintuplets(5)', 'Multiple(2+)'])

    # Bucketize the float fields. This makes them wide.
    # (The boundaries below are example values for this exercise.)
    # https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column
    age_buckets = tf.feature_column.bucketized_column(
        deep_fc['mother_age'], boundaries=np.arange(15, 45, 1).tolist())
    wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
    gestation_buckets = tf.feature_column.bucketized_column(
        deep_fc['gestation_weeks'], boundaries=np.arange(17, 47, 1).tolist())
    wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)

    # Cross all the wide columns. We have to do the crossing
    # before we one-hot encode.
    crossed = tf.feature_column.crossed_column(
        [is_male, plurality, age_buckets, gestation_buckets],
        hash_bucket_size=20000)
    deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)

    # The constructor for DenseFeatures takes a list of feature columns.
    # The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
    wide_inputs = tf.keras.layers.DenseFeatures(
        wide_fc.values(), name='wide_inputs')(inputs)
    deep_inputs = tf.keras.layers.DenseFeatures(
        deep_fc.values(), name='deep_inputs')(inputs)

    # Hidden layers for the deep side.
    layers = [int(x) for x in dnn_hidden_units]
    deep = deep_inputs
    for layerno, numnodes in enumerate(layers):
        deep = tf.keras.layers.Dense(numnodes, activation='relu',
                                     name='dnn_{}'.format(layerno+1))(deep)
    deep_out = deep

    # Linear model for the wide side.
    wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)

    # Concatenate the two sides.
    both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')

    # Final output is a linear activation because this is regression.
    output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)

    model = tf.keras.models.Model(inputs, output)
    model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
    return model

print("Here is our Wide-and-Deep architecture so far:\n")
model = build_wd_model()
print(model.summary())
We can visualize the DNN using the Keras plot_model utility.
tf.keras.utils.plot_model(model, 'wd_model.png', show_shapes=False, rankdir='LR')
Train and evaluate
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5  # training dataset repeats, so it will wrap around
NUM_EVALS = 5  # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000  # enough to get a reasonable sample, but not so much that it slows down

trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES // 1000)

steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)

history = model.fit(trainds,
                    validation_data=evalds,
                    epochs=NUM_EVALS,
                    steps_per_epoch=steps_per_epoch)
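Because the training dataset repeats indefinitely, "epochs" here are virtual: the total example budget is split evenly across NUM_EVALS evaluation points. The arithmetic is simply:

```python
NUM_TRAIN_EXAMPLES = 10000 * 5  # total examples to consume during training
TRAIN_BATCH_SIZE = 32
NUM_EVALS = 5

# Each virtual epoch processes 1/NUM_EVALS of the example budget.
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
examples_seen = steps_per_epoch * TRAIN_BATCH_SIZE * NUM_EVALS

print(steps_per_epoch)  # 312 batches per virtual epoch
print(examples_seen)    # 49920 examples (integer division drops the remainder)
```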
Visualize loss curve
# Plot the training curves.
import matplotlib.pyplot as plt

nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
    ax = fig.add_subplot(nrows, ncols, idx + 1)
    plt.plot(history.history[key])
    plt.plot(history.history['val_{}'.format(key)])
    plt.title('model {}'.format(key))
    plt.ylabel(key)
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='upper left');
Save the model
import shutil, os, datetime

OUTPUT_DIR = 'babyweight_trained'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
                           datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH)  # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))

!ls $EXPORT_PATH
Create the client
from hanlp_restful import HanLPClient

# Leave auth as None for anonymous access; language='zh' for Chinese, 'mul' for multilingual.
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh')
plugins/hanlp_demo/hanlp_demo/zh/con_restful.ipynb
hankcs/HanLP
apache-2.0
Apply for an API key

Because server compute is limited, anonymous users are limited to 2 calls per minute. If you need more calls, consider applying for a free public-benefit API key (auth).

Constituency parsing

The fewer the tasks, the faster the call. For example, to run only constituency parsing:
doc = HanLP('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', tasks='con')
The return value is a Document:
print(doc)
doc['con'] is of type Tree, which is a subclass of list. Visualize the constituency tree:
doc.pretty_print()
Convert to bracketed format:
print(doc['con'][0])
Run constituency parsing on sentences that are already tokenized:
HanLP(tokens=[
    ["HanLP", "为", "生产", "环境", "带来", "次世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"],
    ["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"]
], tasks='con').pretty_print()
Now let's apply the B function to a typical coil. We'll assume copper (at resistivity of 1.68x10<sup>-8</sup> ohm-m) conductors at a packing density of 0.75, inner radius of 1.25 cm, power of 100 W and with supposedly optimal $\alpha$ and $\beta$ of 3 and 2, respectively:
resistivity = 1.68E-8  # ohm-meter
r1 = 0.0125  # meter
packing = 0.75
power = 100.0  # watts
B = BFieldUnitless(power, packing, resistivity, r1, 3, 2)
print("B Field: {:.3} T".format(B))
solenoids/solenoid.ipynb
tiggerntatie/emagnet.py
mit
Now try any combination of factors (assuming packing of 0.75 and standard copper conductors) to compute the field:
from ipywidgets import interactive
from IPython.display import display

def B(power, r1, r2, length, x):
    return "{:.3} T".format(BField(power, 0.75, resistivity, r1, r2, length, x))

v = interactive(B,
                power=(0.0, 200.0, 1),
                r1=(0.01, 0.1, 0.001),
                r2=(0.02, 0.5, 0.001),
                length=(0.01, 2, 0.01),
                x=(0.0, 4, 0.01))
display(v)
For a given inner radius, power and winding configuration, the field strength is directly proportional to G. Therefore, we can test the assertion that G is maximum when $\alpha$ is 3 and $\beta$ is 2 by constructing a map of G as a function of $\alpha$ and $\beta$:
from pylab import (contour, colorbar, meshgrid,
                   xlabel, ylabel, suptitle, show)
from numpy import arange

a = arange(1.1, 6.0, 0.1)
b = arange(0.1, 4.0, 0.1)
A, B = meshgrid(a, b)
G = GFactorUnitless(A, B)

contour(A, B, G, 30)
colorbar()
xlabel("Unitless parameter, Alpha")
ylabel("Unitless parameter, Beta")
suptitle("Electromagnet 'G Factor'")
show()

print("G Factor at A=3, B=2: {:.3}".format(GFactorUnitless(3, 2)))
print("G Factor at A=3, B=1.9: {:.3}".format(GFactorUnitless(3, 1.9)))
Although it is apparent that the maximum G Factor occurs near the $\alpha=3$, $\beta=2$ point, it is not exactly so:
from scipy.optimize import minimize

def GMin(AB):
    return -GFactorUnitless(AB[0], AB[1])

res = minimize(GMin, [3, 2])
print("G Factor is maximum at Alpha = {:.4}, Beta = {:.4}".format(*res.x))
Load and prepare the data

A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides.corr()  # Maybe some features strongly correlate and can be removed from the model
first-neural-network/DLND_Your_first_neural_network.ipynb
ksooklall/deep_learning_foundation
mit
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days or so of the remaining data as a validation set.
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
#print("Train_features shape: {}\nTrain_targets shape: {}".format(np.shape(train_features), np.shape(train_targets)))
Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

<img src="assets/neural_network.png" width=300px>

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account a threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights.
        self.weights_input_to_hidden = np.random.normal(
            0.0, self.input_nodes**-0.5,
            (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(
            0.0, self.hidden_nodes**-0.5,
            (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        # Sigmoid activation, defined with a lambda expression.
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))
        # Not the derivative of the sigmoid with respect to its input, but the
        # derivative expressed in terms of the sigmoid *output*: s' = s * (1 - s).
        # This saves recomputing the sigmoid during backpropagation.
        self.activation_prime = lambda x: x * (1 - x)

    def train(self, features, targets):
        ''' Train the network on a batch of features and targets.

            Arguments
            ---------
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            ### Forward pass ###
            hidden_inputs = np.dot(X, self.weights_input_to_hidden)   # signals into hidden layer
            hidden_outputs = self.activation_function(hidden_inputs)  # signals from hidden layer
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
            # The output activation is f(x) = x, so input = output.
            final_outputs = final_inputs  # signals from final output layer

            ### Backward pass ###
            # Output layer error is the difference between desired target and actual output.
            error = y - final_outputs
            # Error gradient in the output unit; f'(x) = 1 for f(x) = x.
            output_error_term = error
            hidden_error_term = np.dot(self.weights_hidden_to_output,
                                       output_error_term) * self.activation_prime(hidden_outputs)

            # Weight steps (reshaping to column vectors).
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]
            delta_weights_i_h += hidden_error_term * X[:, None]

        # Update the weights with one gradient descent step over the batch.
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records

    def run(self, features):
        ''' Run a forward pass through the network with input features.

            Arguments
            ---------
            features: 1D array of feature values
        '''
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)        # signals from hidden layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        # Output activation is f(x) = x.
        final_outputs = final_inputs  # signals from final output layer
        return final_outputs


def MSE(y, Y):
    return np.mean((y - Y)**2)
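The activation_prime shortcut (expressing the sigmoid's derivative in terms of its own output) can be verified numerically with a standalone snippet:

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))

x = np.linspace(-5, 5, 11)
s = sigmoid(x)

# Derivative expressed in terms of the *output*: s * (1 - s).
analytic = s * (1 - s)

# Central finite-difference approximation of d(sigmoid)/dx.
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)

# The two agree to numerical precision: the shortcut is exact.
print(np.max(np.abs(analytic - numeric)))
```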
Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of iterations

This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

Choose the number of hidden nodes

The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
import sys

### Set the hyperparameters here ###
iterations = 12000
learning_rate = 0.4
hidden_nodes = 19
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

train_loss = val_loss = 0
losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set.
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress.
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii / float(iterations))
                     + "% ... Training loss: " + str(train_loss)[:5]
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Check out your predictions

Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8, 4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features).T * std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt'] * std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)

data = (test_targets['cnt'] * std + mean).values.reshape(504, 1)
pred = predictions.T
acc = MSE(data, pred)
print(acc)
Now, we have to compute a representative value of the funding amount for each type of investment. We can either choose the mean or the median - let's have a look at the distribution of raised_amount_usd to get a sense of how the data are distributed.
# distribution of raised_amount_usd sns.boxplot(y=df['raised_amount_usd']) plt.yscale('log') plt.show()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Let's also look at the summary metrics.
# summary metrics df['raised_amount_usd'].describe()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Note that there's a significant difference between the mean and the median - USD 9.5m and USD 2m. Let's also compare the summary stats across the four categories.
# comparing summary stats across four categories sns.boxplot(x='funding_round_type', y='raised_amount_usd', data=df) plt.yscale('log') plt.show() # compare the mean and median values across categories df.pivot_table(values='raised_amount_usd', columns='funding_round_type', aggfunc=[np.median, np.mean])
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Note that there's a large difference between the mean and the median values for all four types. For the venture type, e.g., the median is about 20m while the mean is about 70m. Thus, the choice of summary statistic will drastically affect the decision (of the investment type). Let's choose the median, since there are quite a few extreme values pulling the mean up towards them - and those extreme values are not the most 'representative' ones.
# compare the median investment amount across the types df.groupby('funding_round_type')['raised_amount_usd'].median().sort_values(ascending=False)
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
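Before settling on the median, a tiny toy example helps illustrate why it is the more robust choice here (the numbers are hypothetical, in USD millions):

```python
import numpy as np

# Hypothetical funding amounts in USD millions: one extreme value
# pulls the mean far above the median, while the median stays
# representative of a typical deal.
amounts = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 500.0])
print(np.median(amounts))  # 2.5
print(amounts.mean())      # roughly 85.3
```

This is exactly the pattern in the boxplots above: a handful of very large deals inflate the mean.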
The median investment amount for type 'private_equity' is approx. USD 20m, which is beyond Spark Funds' range of 5-15m. The median of 'venture' type is about USD 5m, which is suitable for them. The average amounts of angel and seed types are lower than their range. Thus, 'venture' type investment will be most suited to them. Country Analysis Let's now compare the total investment amounts across countries. Note that we'll filter the data for only the 'venture' type investments and then compare the 'total investment' across countries.
# filter the df for venture type investments
df = df[df.funding_round_type=="venture"]

# group by country codes and compare the total funding amounts
country_wise_total = df.groupby('country_code')['raised_amount_usd'].sum().sort_values(ascending=False)
print(country_wise_total)
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Let's now extract the top 9 countries from country_wise_total.
# top 9 countries top_9_countries = country_wise_total[:9] top_9_countries
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Among the top 9 countries, USA, GBR and IND are the top three English speaking countries. Let's filter the dataframe so it contains only the top 3 countries.
# filtering for the top three countries df = df[(df.country_code=='USA') | (df.country_code=='GBR') | (df.country_code=='IND')] df.head()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
After filtering for 'venture' investments and the three countries USA, Great Britain and India, the filtered df looks like this.
# filtered df has about 38800 observations df.info()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
One can visually analyse the distribution and the total values of funding amount.
# boxplot to see distributions of funding amount across countries plt.figure(figsize=(10, 10)) sns.boxplot(x='country_code', y='raised_amount_usd', data=df) plt.yscale('log') plt.show()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Now, we have shortlisted the investment type (venture) and the three countries. Let's now choose the sectors. Sector Analysis First, we need to extract the main sector using the column category_list. The category_list column contains values such as 'Biotechnology|Health Care' - in this, 'Biotechnology' is the 'main category' of the company, which we need to use. Let's extract the main categories in a new column.
# extracting the main category df.loc[:, 'main_category'] = df['category_list'].apply(lambda x: x.split("|")[0]) df.head()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
We can now drop the category_list column.
# drop the category_list column df = df.drop('category_list', axis=1) df.head()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Now, we'll read the mapping.csv file and merge the main categories with its corresponding column.
# read mapping file mapping = pd.read_csv("mapping.csv", sep=",") mapping.head()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Firstly, let's get rid of the missing values since we'll not be able to merge those rows anyway.
# missing values in mapping file mapping.isnull().sum() # remove the row with missing values mapping = mapping[~pd.isnull(mapping['category_list'])] mapping.isnull().sum()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Now, since we need to merge the mapping file with the main dataframe (df), let's convert the common column to lowercase in both.
# converting common columns to lowercase mapping['category_list'] = mapping['category_list'].str.lower() df['main_category'] = df['main_category'].str.lower() # look at heads print(mapping.head()) print(df.head())
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Let's have a look at the category_list column of the mapping file. These values will be used to merge with the main df.
mapping['category_list']
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
To be able to merge all the main_category values with the mapping file's category_list column, all the values in the main_category column should be present in the category_list column of the mapping file. Let's see if this is true.
# values in main_category column in df which are not in the category_list column in mapping file df[~df['main_category'].isin(mapping['category_list'])]
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Notice that values such as 'analytics', 'business analytics', 'finance', 'nanotechnology' etc. are not present in the mapping file. Let's have a look at the values which are present in the mapping file but not in the main dataframe df.
# values in the category_list column which are not in main_category column mapping[~mapping['category_list'].isin(df['main_category'])]
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
If you see carefully, you'll notice something fishy - there are sectors named alter0tive medicine, a0lytics, waste ma0gement, veteri0ry, etc. This is not a random quality issue, but rather a pattern. In some strings, the 'na' has been replaced by '0'. This is weird - maybe someone was trying to replace the 'NA' values with '0', and ended up doing this. Let's treat this problem by replacing '0' with 'na' in the category_list column.
# replacing '0' with 'na' mapping['category_list'] = mapping['category_list'].apply(lambda x: x.replace('0', 'na')) print(mapping['category_list'])
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
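One caveat worth noting (an aside, not part of the original analysis): a blanket replace('0', 'na') would also corrupt category names that legitimately contain a zero, such as 'enterprise 2.0'. A safer sketch, using a hypothetical restore_na helper, only restores 'na' when the '0' is adjacent to a letter:

```python
import re

# Hypothetical safer variant: replace '0' with 'na' only when it is
# directly preceded or followed by a lowercase letter, leaving
# legitimate zeros (e.g. version numbers) untouched.
def restore_na(category):
    return re.sub(r'(?<=[a-z])0|0(?=[a-z])', 'na', category)

print(restore_na('alter0tive medicine'))  # alternative medicine
print(restore_na('enterprise 2.0'))       # enterprise 2.0 (unchanged)
```

If such mixed values existed in the mapping file, this helper could be applied with mapping['category_list'].apply(restore_na) instead of the blanket replace.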
This looks fine now. Let's now merge the two dataframes.
# merge the dfs df = pd.merge(df, mapping, how='inner', left_on='main_category', right_on='category_list') df.head() # let's drop the category_list column since it is the same as main_category df = df.drop('category_list', axis=1) df.head() # look at the column types and names df.info()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Converting the 'wide' dataframe to 'long' You'll notice that the columns representing the main category in the mapping file are originally in the 'wide' format - Automotive & Sports, Cleantech / Semiconductors etc. They contain the value '1' if the company belongs to that category, else 0. This is quite redundant. We can as well have a column named 'sub-category' having these values. Let's convert the df into the long format from the current wide format. First, we'll store the 'value variables' (those which are to be melted) in an array. The rest will then be the 'index variables'.
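As a minimal toy illustration (with made-up data) of what pd.melt does here:

```python
import pandas as pd

# Toy wide-format frame: one dummy column per sector, 1 marking membership.
wide = pd.DataFrame({'company': ['a', 'b'],
                     'Cleantech': [1, 0],
                     'Health': [0, 1]})

# Melt the sector dummies into a single 'sector' column, then keep
# only the rows where the dummy value is 1.
long_toy = pd.melt(wide, id_vars=['company'],
                   value_vars=['Cleantech', 'Health'],
                   var_name='sector')
long_toy = long_toy[long_toy['value'] == 1].drop('value', axis=1)
print(long_toy)
```

The real dataframe below is handled the same way, just with more id variables and several sector columns.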
help(pd.melt) # store the value and id variables in two separate arrays # store the value variables in one Series value_vars = df.columns[9:18] # take the setdiff() to get the rest of the variables id_vars = np.setdiff1d(df.columns, value_vars) print(value_vars, "\n") print(id_vars) # convert into long long_df = pd.melt(df, id_vars=list(id_vars), value_vars=list(value_vars)) long_df.head()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
We can now get rid of the rows where the column 'value' is 0 and then remove that column altogether.
# remove rows having value=0 long_df = long_df[long_df['value']==1] long_df = long_df.drop('value', axis=1) # look at the new df long_df.head() len(long_df) # renaming the 'variable' column long_df = long_df.rename(columns={'variable': 'sector'}) # info long_df.info()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
The dataframe now contains only venture type investments in countries USA, IND and GBR, and we have mapped each company to one of the eight main sectors (named 'sector' in the dataframe). We can now compute the sector-wise number and the amount of investment in the three countries.
# summarising the sector-wise number and sum of venture investments across three countries # first, let's also filter for investment range between 5 and 15m df = long_df[(long_df['raised_amount_usd'] >= 5000000) & (long_df['raised_amount_usd'] <= 15000000)] # groupby country, sector and compute the count and sum df.groupby(['country_code', 'sector']).raised_amount_usd.agg(['count', 'sum'])
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
This will be much easier to understand using a plot.
# plotting sector-wise count and sum of investments in the three countries plt.figure(figsize=(16, 14)) plt.subplot(2, 1, 1) p = sns.barplot(x='sector', y='raised_amount_usd', hue='country_code', data=df, estimator=np.sum) p.set_xticklabels(p.get_xticklabels(),rotation=30) plt.title('Total Invested Amount (USD)') plt.subplot(2, 1, 2) q = sns.countplot(x='sector', hue='country_code', data=df) q.set_xticklabels(q.get_xticklabels(),rotation=30) plt.title('Number of Investments') plt.show()
Investment Case Group Project/3_Analysis.ipynb
prk327/CoAca
gpl-3.0
Two-layer model with head-specified line-sink Two-layer aquifer bounded on top by a semi-confined layer. Head above the semi-confining layer is 5. Head line-sink located at $x=0$ with head equal to 2, cutting through layer 0 only.
ml = ModelMaq(kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], \ topboundary='semi', hstar=5) ls = HeadLineSink1D(ml, xls=0, hls=2, layers=0) ml.solve() x = linspace(-200, 200, 101) h = ml.headalongline(x, zeros_like(x)) plot(x, h[0], label='layer 0') plot(x, h[1], label='layer 1') legend(loc='best')
notebooks/timml_xsection.ipynb
mbakker7/timml
mit
1D inhomogeneity Three strips with semi-confined conditions on top of all three
ml = ModelMaq(kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], topboundary='semi', hstar=5) StripInhomMaq(ml, x1=-inf, x2=-50, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=15) StripInhomMaq(ml, x1=-50, x2=50, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=13) StripInhomMaq(ml, x1=50, x2=inf, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=11) ml.solve() x = linspace(-200, 200, 101) h = ml.headalongline(x, zeros(101)) plot(x, h[0], label='layer 0') plot(x, h[1], label='layer 1') legend(loc='best'); ml.vcontoursf1D(x1=-200, x2=200, nx=100, levels=20)
notebooks/timml_xsection.ipynb
mbakker7/timml
mit
Three strips with semi-confined conditions at the top of the strip in the middle only. The head is specified in the strip on the left and in the strip on the right.
ml = ModelMaq(kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], topboundary='semi', hstar=5) StripInhomMaq(ml, x1=-inf, x2=-50, kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], npor=0.3, topboundary='conf') StripInhomMaq(ml, x1=-50, x2=50, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=3) StripInhomMaq(ml, x1=50, x2=inf, kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], npor=0.3, topboundary='conf') rf1 = Constant(ml, -100, 0, 4) rf2 = Constant(ml, 100, 0, 4) ml.solve() x = linspace(-200, 200, 101) h = ml.headalongline(x, zeros_like(x)) Qx, _ = ml.disvecalongline(x, zeros_like(x)) figure(figsize=(12, 4)) subplot(121) plot(x, h[0], label='layer 0') plot(x, h[1], label='layer 1') plot([-100, 100], [4, 4], 'b.', label='fixed heads') legend(loc='best') subplot(122) title('Qx') plot(x, Qx[0], label='layer 0') plot(x, Qx[1], label='layer 1') ml.vcontoursf1D(x1=-200, x2=200, nx=100, levels=20)
notebooks/timml_xsection.ipynb
mbakker7/timml
mit
Impermeable wall Flow from left to right in three-layer aquifer with impermeable wall in bottom 2 layers
from timml import * from pylab import * ml = ModelMaq(kaq=[1, 2, 4], z=[5, 4, 3, 2, 1, 0], c=[5000, 1000]) uf = Uflow(ml, 0.002, 0) rf = Constant(ml, 100, 0, 20) ld1 = ImpLineDoublet1D(ml, xld=0, layers=[0, 1]) ml.solve() x = linspace(-100, 100, 101) h = ml.headalongline(x, zeros_like(x)) Qx, _ = ml.disvecalongline(x, zeros_like(x)) figure(figsize=(12, 4)) subplot(121) title('head') plot(x, h[0], label='layer 0') plot(x, h[1], label='layer 1') plot(x, h[2], label='layer 2') legend(loc='best') subplot(122) title('Qx') plot(x, Qx[0], label='layer 0') plot(x, Qx[1], label='layer 1') plot(x, Qx[2], label='layer 2') legend(loc='best') ml.vcontoursf1D(x1=-200, x2=200, nx=100, levels=20) ml = ModelMaq(kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], topboundary='conf') StripInhomMaq(ml, x1=-inf, x2=-50, kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], npor=0.3, topboundary='conf') StripInhomMaq(ml, x1=-50, x2=50, kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], npor=0.3, topboundary='conf', N=0.001) StripInhomMaq(ml, x1=50, x2=inf, kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], npor=0.3, topboundary='conf') Constant(ml, -100, 0, 10) Constant(ml, 100, 0, 10) ml.solve() ml.vcontoursf1D(x1=-100, x2=100, nx=100, levels=20) # ml2 = ModelMaq(kaq=[1, 2], z=[3, 2, 1, 0], c=[1000], topboundary='conf') StripAreaSink(ml2, -50, 50, 0.001) Constant(ml2, -100, 0, 10) ml2.solve() ml2.vcontoursf1D(x1=-100, x2=100, nx=100, levels=20) # x = np.linspace(-100, 100, 100) plt.figure() plt.plot(x, ml.headalongline(x, 0)[0], 'C0') plt.plot(x, ml.headalongline(x, 0)[1], 'C0') plt.plot(x, ml2.headalongline(x, 0)[0], '--C1') plt.plot(x, ml2.headalongline(x, 0)[1], '--C1') ml = Model3D(kaq=1, z=np.arange(5, -0.1, -0.1)) StripInhom3D(ml, x1=-inf, x2=-5, kaq=1, z=np.arange(5, -0.1, -0.1), kzoverkh=0.1) StripInhom3D(ml, x1=-5, x2=5, kaq=1, z=np.arange(5, -0.1, -0.1), kzoverkh=0.1, topboundary='semi', hstar=3, topres=3) StripInhom3D(ml, x1=5, x2=inf, kaq=1, z=np.arange(5, -0.1, -0.1), kzoverkh=0.1) rf1 = Constant(ml, -100, 0, 3.2) rf2 = 
Constant(ml, 100, 0, 2.97) ml.solve() ml.vcontoursf1D(x1=-20, x2=20, nx=100, levels=20) ml = Model3D(kaq=1, z=np.arange(5, -0.1, -1)) StripInhom3D(ml, x1=-inf, x2=-5, kaq=1, z=np.arange(5, -0.1, -1), kzoverkh=0.1) StripInhom3D(ml, x1=-5, x2=5, kaq=1, z=np.arange(5, -0.1, -1), kzoverkh=0.1, topboundary='semi', hstar=3, topres=3) StripInhom3D(ml, x1=5, x2=inf, kaq=1, z=np.arange(5, -0.1, -1), kzoverkh=0.1) rf1 = Constant(ml, -100, 0, 3.2) rf2 = Constant(ml, 100, 0, 2.97) ml.solve() ml.vcontoursf1D(x1=-20, x2=20, nx=100, levels=20) ml = ModelMaq(kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], topboundary='semi', hstar=5) StripInhomMaq(ml, x1=-inf, x2=-50, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=15) StripInhomMaq(ml, x1=-50, x2=50, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=13) StripInhomMaq(ml, x1=50, x2=inf, kaq=[1, 2], z=[4, 3, 2, 1, 0], c=[1000, 1000], npor=0.3, topboundary='semi', hstar=11) ml.solve()
notebooks/timml_xsection.ipynb
mbakker7/timml
mit
Load data
import numpy as np from sklearn.datasets import load_digits digits = load_digits() X = digits.data # data in pixels y = digits.target # digit labels print(X.shape) print(y.shape) print(np.unique(y))
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Visualize data
import matplotlib.pyplot as plt import pylab as pl num_rows = 4 num_cols = 5 fig, ax = plt.subplots(nrows=num_rows, ncols=num_cols, sharex=True, sharey=True) ax = ax.flatten() for index in range(num_rows*num_cols): img = digits.images[index] label = digits.target[index] ax[index].imshow(img, cmap='Greys', interpolation='nearest') ax[index].set_title('digit ' + str(label)) ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() plt.show()
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Data sets: training versus test
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version

if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

num_training = y_train.shape[0]
num_test = y_test.shape[0]
print('training: ' + str(num_training) + ', test: ' + str(num_test))

import numpy as np

# check to see if the data are well distributed among digits
for y in [y_train, y_test]:
    print(np.bincount(y))
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Answer We first write a scoring function for clustering so that we can use it with GridSearchCV. Take a look at make_scorer under scikit-learn.
## Note: We do not guarantee that there is a one-to-one correspondence, and therefore the toy result is different. ## See Explanation for more information def clustering_accuracy_score(y_true, y_pred): n_labels = len(list(set(y_true))) n_clusters = len(list(set(y_pred))) Pre = np.zeros((n_clusters, n_labels)) Rec = np.zeros((n_clusters, n_labels)) F = np.zeros((n_clusters, n_labels)) w = np.zeros((n_clusters)) F_i = np.zeros((n_clusters)) P = np.zeros((n_labels)) C = np.zeros((n_clusters)) for i in range(n_clusters): C[i] = sum(y_pred == i) for j in range(n_labels): P[j] = sum(y_true == j) for i in range(n_clusters): F_i_max = 0 for j in range(n_labels): if (C[i]): Pre[i][j] = sum(y_pred[y_true == j] == i) / C[i] if (P[j]): Rec[i][j] = sum(y_true[y_pred == i] == j) / P[j] if (Pre[i][j]+Rec[i][j]): F[i][j] = 2*Pre[i][j]*Rec[i][j]/(Pre[i][j]+Rec[i][j]) F_i_max = max(F_i_max, F[i][j]) F_i[i] = F_i_max w[i] = sum(y_pred == i) / len(y_pred) return F_i.dot(w) # toy case demonstrating the clustering accuracy # this is just a reference to illustrate what this score function is trying to achieve # feel free to design your own as long as you can justify # ground truth class label for samples toy_y_true = np.array([0, 0, 0, 1, 1, 2]) # clustering id for samples toy_y_pred_true = np.array([1, 1, 1, 2, 2, 0]) toy_y_pred_bad1 = np.array([0, 0, 1, 1, 1, 2]) toy_y_pred_bad2 = np.array([2, 2, 1, 0, 0, 0]) toy_accuracy = clustering_accuracy_score(toy_y_true, toy_y_pred_true) print('accuracy', toy_accuracy, ', should be 1') toy_accuracy = clustering_accuracy_score(toy_y_true, toy_y_pred_bad1) print('accuracy', toy_accuracy, ', should be', 5.0/6.0) toy_accuracy = clustering_accuracy_score(toy_y_true, toy_y_pred_bad2) print('accuracy', toy_accuracy, ', should be', 4.0/6.0, ', this will be explained in the following content')
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Explanation I adopt a modified version of F-value selection, that is, for each cluster, select the best label class with highest F-score. This accuracy calculating method supports the condition that number of clusters not equal to number of labels, which supports GridSearchCV on number of clusters. Formula: Let $C[i]$ denotes cluster $i$ and $P[j]$ denotes label $j$. Then for each $(i, j)$, we have<br><br> $$ \begin{align} \text{Precision}[i][j] & = \frac{\left|C[i]\cap P[j]\right|}{|C[i]|} \ \text{Recall}[i][j] & = \frac{\left|C[i]\cap P[j]\right|}{|P[j]|} \ \text{F-value}[i][j] & = \frac{ 2 \times \text{Precision}[i][j] \times \text{Recall}[i][j]}{\text{Precision}[i][j] + \text{Recall}[i][j]}. \end{align} $$ Then for each cluster, we search the best F-value for each label, that is, $$\text{F-value}[i] = \max\limits_{j} \text{F-value}[i][j].$$ We also store the weight $w$ for each cluster, that is, $$w[i] = \frac{|C[i]|}{n}$$ Hence the final score is simply the dot product of $\text{F-value}$ and $w$, which is the weighted F-score. Again, since this accuracy calculating method supports the condition that number of clusters not equal to number of labels, we do not guarantee that there is a bijection between clusters and labels, therefore there are cases that some labels have no corresponding clusters, as the second toy example shows. Build a pipeline with standard scaler, PCA, and clustering.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans
from sklearn.metrics import make_scorer
from scipy.stats import mode

pipe = Pipeline([('scl', StandardScaler()),
                 ('pca', KernelPCA()),
                 ('km', KMeans(random_state=1))])

# map cluster number to actual label
def cluster_mapping(y_true, y_pred):
    mapping = {}
    n_labels = len(list(set(y_true)))
    n_clusters = len(list(set(y_pred)))
    Pre = np.zeros((n_clusters, n_labels))
    Rec = np.zeros((n_clusters, n_labels))
    F = np.zeros((n_clusters, n_labels))
    P = np.zeros((n_labels))
    C = np.zeros((n_clusters))
    for i in range(n_clusters):
        C[i] = sum(y_pred == i)
    for j in range(n_labels):
        P[j] = sum(y_true == j)
    for i in range(n_clusters):
        F_i_max = 0
        F_i_max_label = 0
        for j in range(n_labels):
            if (C[i]):
                Pre[i][j] = sum(y_pred[y_true == j] == i) / C[i]
            if (P[j]):
                Rec[i][j] = sum(y_true[y_pred == i] == j) / P[j]
            if (Pre[i][j]+Rec[i][j]):
                F[i][j] = 2*Pre[i][j]*Rec[i][j]/(Pre[i][j]+Rec[i][j])
            if (F_i_max < F[i][j]):
                F_i_max_label = j
                F_i_max = F[i][j]
        mapping[i] = F_i_max_label
    return mapping
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Use GridSearchCV to tune hyper-parameters.
if Version(sklearn_version) < '0.18': from sklearn.grid_search import GridSearchCV else: from sklearn.model_selection import GridSearchCV pcs = list(range(1, 60)) kernels = ['linear', 'rbf', 'cosine'] initTypes = ['random', 'k-means++'] clusters = list(range(10, 20)) tfs = [True, False] param_grid = [{'pca__n_components': pcs, 'pca__kernel': kernels, 'km__init' : initTypes, 'km__n_clusters' : clusters, 'scl__with_std' : tfs, 'scl__with_mean' : tfs}] gs = GridSearchCV(estimator=pipe, param_grid=param_grid, scoring=make_scorer(clustering_accuracy_score), cv=10, n_jobs=-1, verbose=False) gs = gs.fit(X_train, y_train) print(gs.best_score_) print(gs.best_params_) best_model = gs.best_estimator_ print('Test accuracy: %.3f' % clustering_accuracy_score(y_test, best_model.predict(X_test)))
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Visualize mis-clustered samples, and provide your explanation.
mapping = cluster_mapping(y_train, best_model.predict(X_train)) y_test_pred = np.array(list(map(lambda x: mapping[x], best_model.predict(X_test)))) miscl_img = X_test[y_test != y_test_pred][:25] correct_lab = y_test[y_test != y_test_pred][:25] miscl_lab = y_test_pred[y_test != y_test_pred][:25] fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(25): img = miscl_img[i].reshape(8, 8) ax[i].imshow(img, cmap='Greys', interpolation='nearest') ax[i].set_title('%d) t: %d p: %d' % (i+1, correct_lab[i], miscl_lab[i])) ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() plt.show()
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Explanation

Since the accuracy is 84.4%, which means more than 1 digit is incorrectly clustered in every group of 10 digits, the error is still considered high (compared with using neural networks or other methods). The mis-clustered samples, as we can observe in the picture above, are generally of two kinds:
1. Hard to differentiate: e.g. No. 14, No. 21 and No. 22. It is difficult even for a human to tell which classes they should belong to.
2. Blurred too much: e.g. No. 4, No. 18 and No. 19. These can be differentiated by a human but seem too vague to be correctly clustered.

Furthermore, we can see that digit '5' tends to be mis-clustered as digit '9' (e.g. No. 5, No. 17 and No. 18), and digit '9' tends to be mis-clustered as digits '1' and '7' (e.g. No. 1, No. 2, No. 8, No. 19, No. 23 and No. 24). The reason may be that the distance between those digit clusters is not large, so a digit lying near the border can easily be mis-clustered, for example when the digit is blurred or vague.<br> Also, we can see from No. 10, No. 14 and No. 15 that the digit '1' in the dataset is sometimes twisted, tilted or blurred. It is hard even for a human to tell whether it is digit '1' or not. From the examples, we can see that such samples tend to be clustered as digit '6' or '2' according to their shape and level of clarity.

Tiny image classification

We will use the CIFAR-10 dataset for image object recognition. The dataset consists of 50000 training samples and 10000 test samples in 10 different classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck; see the link above for more information). The goal is to maximize the accuracy of your classifier on the test dataset after being optimized via the training dataset. You can use any learning models (supervised or unsupervised) or optimization methods (e.g. search methods for hyper-parameters). The only requirement is that your code can run inside an ipynb file, as usual. 
Please provide a description of your method, in addition to the code. Your answer will be evaluated not only on the test accuracy but also on the creativity of your methodology and the quality of your explanation/description.

Description

1. Introduction

1.1. High-level approach

I adopted a Wide Residual Network with 28 convolutional layers and 10x width. As stated in Zagoruyko et al.'s paper <sup>1</sup>, a 96.11% testing accuracy can be obtained.

1.2. Reason

Without cifar-10 data size-augmentation<sup>2</sup>, one of the best test results - a 93.57% testing accuracy - is reported<sup>3</sup> with the famous deep residual network<sup>4</sup>. Since the desired testing accuracy is high, I decided to start from resnet. However, inspired by Zagoruyko et al., I chose to use a modified version of resnet - the Wide Residual Network. <br><br> Wide resnet basically increases the channels of each convolutional layer while decreasing the depth of the network. According to the results in their work, a structure of 28 convolutional layers and 10x width with dropout achieves the best result - 96.11% accuracy.<br><br> When training my network, however, I had only found the results of the early version of their paper<sup>5</sup>, which reported that training without a dropout layer achieves better accuracy (a figure that was revised from 95.61% to 96.00% in the latest version). Moreover, due to the assignment time constraint, I simply used the default adadelta optimizer instead of [sgd + selected learning rate] in the first 100 iterations, which may account for my less accurate result. After fine-tuning, in the end, I only got a 94.14% accuracy, which is still higher than the original resnet.

2. Method

2.1 Data Preprocessing

Using keras.preprocessing.image.ImageDataGenerator, the data preprocessing scheme I use is
1. featurewise_center: Set feature-wise input mean to 0 over the dataset
2. featurewise_std_normalization: Divide inputs by the feature-wise std of the dataset
3. rotation_range: 10$^{\circ}$ range for random rotations
4. width_shift_range: 15.625% random horizontal shifts
5. height_shift_range: 15.625% random vertical shifts
6. horizontal_flip: Randomly flip inputs horizontally

Since vertical_flip looks unrealistic for a real-world picture, I did not use it. Also, zca_whitening may result in loss of image info, and other methods may more or less lead to problems in training.<br> Moreover, I did not adopt cifar-10 data size-augmentation<sup>2</sup>, which requires structural changes to my network and would result in more time cost in training and fine-tuning. As the paper suggests, the test accuracy should increase compared with non-size-augmented data.

2.2 Model

Since we use 28 convolutional layers, by the structure of wide resnet, the layer blocks are:
1. An initial block with 1 convolutional layer
2. 4 conv1 blocks with 160 channels. Each block contains 2 convolutional layers, and the first block contains one additional residual convolutional layer; in total 3+2+2+2=9 convolutional layers. Each block ends with a residual merge.
3. 4 conv2 blocks with 320 channels. Each block contains 2 convolutional layers, and the first block contains one additional residual convolutional layer; in total 3+2+2+2=9 convolutional layers. Each block ends with a residual merge.
4. 4 conv3 blocks with 640 channels. Each block contains 2 convolutional layers, and the first block contains one additional residual convolutional layer; in total 3+2+2+2=9 convolutional layers. Each block ends with a residual merge.

All convolutional layers are followed by a batch normalization layer and a relu layer.<br> After all layer blocks, we use average pooling to 8-by-8, followed by a flatten layer, and finally a fully connected layer (with softmax) of 10 outputs, one for each class. 
2.3 Hyper-parameters As discussed in 1.2, the most important hyper-parameters of a wide resnet, as Zagoruyko et al. described, are N (the number of layers), K (the widening factor of the convolutional channels), and the dropout ratio. I adopted the best configuration from version 1 of their work: N = 28, K = 10 (times the original resnet width), dropout ratio = 0. However, as their version 2 points out, the model with dropout can achieve a slightly better result. Since the difference is only 0.11%, dropout may not matter much in wide resnets. 2.4 Detailed Model You may view the detailed model in the following code output. 3. Result

| Network | Accuracy |
|------------------------------------------|--------|
| Kaiming He et al. ResNet | 93.57% |
| My WRN-28-10 w/o dropout (Adadelta only) | 93.69% |
| My WRN-28-10 w/o dropout (fine-tuned) | 94.14% |
| Zagoruyko et al. WRN-28-10 v1 w/ dropout | 95.61% |
| Zagoruyko et al. WRN-28-10 v1 w/o dropout | 95.83% |
| Zagoruyko et al. WRN-28-10 v2 w/o dropout | 96.00% |
| Zagoruyko et al. WRN-28-10 v2 w/ dropout | 96.11% |

With the Adadelta optimizer alone, my training accuracy eventually reaches 100%, which indicates over-fitting and explains why the testing accuracy fails to improve. With SGD fine-tuning on the model, the testing accuracy increases by around 0.45%. However, my model fails to achieve a testing accuracy above 95% as the paper suggests; see 4.2 Limitations for more analysis.<br> Note that the code provided below simply loads the fine-tuned result of my network, and the training process shown is only for demonstration; it is not the original training procedure. 4. Conclusion 4.1 Training Conclusions Wide Residual Networks achieve better results than the original Residual Network. Optimizer choice, fine-tuning and data augmentation are crucial for improving the testing accuracy. Ideas that worked several years ago may not work today, and ideas that work on one model may not transfer to another (e.g. the importance of dropout, or the assumption that increasing depth always helps a resnet). This underlines the importance of keeping track of the latest results and models in machine learning. 4.2 Limitations Optimizer, regularization and over-fitting: With more time, I would try different optimizers (Adagrad, RMSprop, Adam, SGD with a tuned learning-rate schedule) combined with different regularization (L1, L2) instead of Adadelta only. This may account for the ~1.5% gap below the paper's result. Dropout: As the second version of the paper suggests, dropout improves the testing accuracy by 0.11%; however, this should not be the major constraint on the result. Fine-tuning: I only ran 50 iterations of fine-tuning with the learning rate I set. The peak testing accuracy I observed was 94.49%, but those weights were not saved during fine-tuning. Data augmentation: Cifar-10 size augmentation<sup>2</sup> is known to improve the testing accuracy, which I did not try during training. FCN: Fully Convolutional Networks<sup>6</sup> are known to improve the testing accuracy, and R-FCN<sup>7</sup> provides a nice implementation of a fully convolutional resnet. A wide, fully convolutional residual network might therefore provide a better result. Use of GPU: AWS refused to provide me with a GPU instance since I am a new user. I trained my network on my Nvidia GeForce GTX 980M, which takes 7 days to run 100 iterations; this heavily limited adjustments to my model. 5. References <a name="myfootnote1">1</a>: Zagoruyko, Sergey, and Nikos Komodakis. "Wide Residual Networks." arXiv preprint arXiv:1605.07146v2 (2016).<br> <a name="myfootnote2">2</a>: Graham, Benjamin. "Fractional Max-Pooling." arXiv preprint arXiv:1412.6071 (2015).<br> <a name="myfootnote3">3</a>: Benenson, Rodrigo. "What is the class of this image?" rodrigob@github<br> <a name="myfootnote4">4</a>: He, Kaiming, et al. "Deep Residual Learning for Image Recognition." arXiv preprint arXiv:1512.03385 (2015).<br> <a name="myfootnote5">5</a>: Zagoruyko, Sergey, and Nikos Komodakis. "Wide Residual Networks." arXiv preprint arXiv:1605.07146v1 (2016).<br> <a name="myfootnote6">6</a>: Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully Convolutional Networks for Semantic Segmentation." IEEE Conference on CVPR, 2015.<br> <a name="myfootnote7">7</a>: Dai, Jifeng, et al. "R-FCN: Object Detection via Region-based Fully Convolutional Networks." arXiv preprint arXiv:1605.06409 (2016).<br> All code referenced has been specified in its context.
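The depth convention used by the WRN builder below (N residual blocks per group, so a network of total depth n has N = (n - 4) / 6, and WRN-28-10 has N = 4) can be sanity-checked with a small helper. The function name here is my own shorthand, not from the paper:

```python
def blocks_per_group(depth):
    """Convert a WRN depth n (e.g. the 28 in WRN-28-10) to the number
    of residual blocks N per group, using depth = 6*N + 4."""
    if (depth - 4) % 6 != 0:
        raise ValueError("WRN depth must satisfy depth = 6*N + 4")
    return (depth - 4) // 6

# WRN-28-10 uses N = 4 blocks in each of the three groups.
print(blocks_per_group(28))  # -> 4
print(blocks_per_group(16))  # -> 2
print(blocks_per_group(40))  # -> 6
```

This is the same arithmetic documented in the `WRN` docstring further down.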
# Functions to build a user-defined wide resnet for cifa-10 # Author: Somshubra Majumdar https://github.com/titu1994/Wide-Residual-Networks # Modified By: Gao Chang, HKU from keras.models import Model from keras.layers import Input, merge, Activation, Dropout, Flatten, Dense from keras.layers.convolutional import Convolution2D, MaxPooling2D, AveragePooling2D from keras.layers.normalization import BatchNormalization from keras import backend as K def initial_conv(input): x = Convolution2D(16, 3, 3, border_mode='same')(input) channel_axis = 1 if K.image_dim_ordering() == "th" else -1 x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) return x def conv1_block(input, k=1, dropout=0.0): init = input channel_axis = 1 if K.image_dim_ordering() == "th" else -1 # Check if input number of filters is same as 16 * k, else create convolution2d for this input if K.image_dim_ordering() == "th": if init._keras_shape[1] != 16 * k: init = Convolution2D(16 * k, 1, 1, activation='linear', border_mode='same')(init) else: if init._keras_shape[-1] != 16 * k: init = Convolution2D(16 * k, 1, 1, activation='linear', border_mode='same')(init) x = Convolution2D(16 * k, 3, 3, border_mode='same')(input) x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) if dropout > 0.0: x = Dropout(dropout)(x) x = Convolution2D(16 * k, 3, 3, border_mode='same')(x) x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) m = merge([init, x], mode='sum') return m def conv2_block(input, k=1, dropout=0.0): init = input channel_axis = 1 if K.image_dim_ordering() == "th" else -1 # Check if input number of filters is same as 32 * k, else create convolution2d for this input if K.image_dim_ordering() == "th": if init._keras_shape[1] != 32 * k: init = Convolution2D(32 * k, 1, 1, activation='linear', border_mode='same')(init) else: if init._keras_shape[-1] != 32 * k: init = Convolution2D(32 * k, 1, 1, activation='linear', border_mode='same')(init) x = Convolution2D(32 
* k, 3, 3, border_mode='same')(input) x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) if dropout > 0.0: x = Dropout(dropout)(x) x = Convolution2D(32 * k, 3, 3, border_mode='same')(x) x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) m = merge([init, x], mode='sum') return m def conv3_block(input, k=1, dropout=0.0): init = input channel_axis = 1 if K.image_dim_ordering() == "th" else -1 # Check if input number of filters is same as 64 * k, else create convolution2d for this input if K.image_dim_ordering() == "th": if init._keras_shape[1] != 64 * k: init = Convolution2D(64 * k, 1, 1, activation='linear', border_mode='same')(init) else: if init._keras_shape[-1] != 64 * k: init = Convolution2D(64 * k, 1, 1, activation='linear', border_mode='same')(init) x = Convolution2D(64 * k, 3, 3, border_mode='same')(input) x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) if dropout > 0.0: x = Dropout(dropout)(x) x = Convolution2D(64 * k, 3, 3, border_mode='same')(x) x = BatchNormalization(axis=channel_axis)(x) x = Activation('relu')(x) m = merge([init, x], mode='sum') return m def WRN(nb_classes, N, k, dropout): """ Creates a Wide Residual Network with specified parameters :param nb_classes: Number of output classes :param N: Depth of the network. Compute N = (n - 4) / 6. Example : For a depth of 16, n = 16, N = (16 - 4) / 6 = 2 Example2: For a depth of 28, n = 28, N = (28 - 4) / 6 = 4 Example3: For a depth of 40, n = 40, N = (40 - 4) / 6 = 6 :param k: Width of the network. 
:param dropout: Adds dropout if value is greater than 0.0 """ init = Input(shape=(3, 32, 32)) x = initial_conv(init) for i in range(N): x = conv1_block(x, k, dropout) x = MaxPooling2D((2,2))(x) for i in range(N): x = conv2_block(x, k, dropout) x = MaxPooling2D((2,2))(x) for i in range(N): x = conv3_block(x, k, dropout) x = AveragePooling2D((8,8))(x) x = Flatten()(x) x = Dense(nb_classes, activation='softmax')(x) model = Model(init, x) return model import numpy as np import sklearn.metrics as metrics from keras.datasets import cifar10 from keras.models import Model from keras.layers import Input from keras.optimizers import SGD import keras.callbacks as callbacks import keras.utils.np_utils as kutils from keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import accuracy_score batch_size = 64 nb_epoch = 5 (X_train, y_train), (X_test, y_test) = cifar10.load_data() X_train = X_train.astype('float32') X_train /= 255.0 X_test = X_test.astype('float32') X_test /= 255.0 y_train = kutils.to_categorical(y_train) y_test = kutils.to_categorical(y_test) generator = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True, rotation_range=10, width_shift_range=5./32, height_shift_range=5./32, horizontal_flip=True) generator.fit(X_train, seed=0, augment=True) test_generator = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True) test_generator.fit(X_test, seed=0, augment=True) model = WRN(nb_classes=10, N=4, k=10, dropout=0.0) model.summary() print ("Start Training:") sgd = SGD(lr = 0.001, decay = 0.1, momentum = 0.9, nesterov = True) model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=["acc"]) # model.load_weights("WRN-28-10.h5") model.fit_generator(generator.flow(X_train, y_train, batch_size=batch_size), samples_per_epoch=len(X_train), nb_epoch=nb_epoch, # callbacks=[callbacks.ModelCheckpoint("WRN-28-10-Best.h5", monitor="val_acc", save_best_only=True)], 
validation_data=test_generator.flow(X_test, y_test, batch_size=batch_size), nb_val_samples=X_test.shape[0], verbose = True) print ("Start Testing:") # model.load_weights("WRN-28-10.h5") results = model.evaluate_generator(test_generator.flow(X_test, y_test, batch_size=batch_size), X_test.shape[0]) print ("Results:") print ("Test loss: {0}".format(results[0])) print ("Test accuracy: {0}".format(results[1]))
6 Cluster and CNN.ipynb
irsisyphus/machine-learning
apache-2.0
Save configuration
import os try: import cPickle as pickle except ImportError: import pickle import iris import cf_units from datetime import datetime from utilities import CF_names, fetch_range, start_log # 1-week start of data. kw = dict(start=datetime(2014, 7, 1, 12), days=6) start, stop = fetch_range(**kw) # SECOORA region (NC, SC GA, FL). bbox = [-87.40, 24.25, -74.70, 36.70] # CF-names. sos_name = 'water_surface_height_above_reference_datum' name_list = CF_names[sos_name] # Units. units = cf_units.Unit('meters') # Logging. run_name = '{:%Y-%m-%d}'.format(stop) log = start_log(start, stop, bbox) # SECOORA models. secoora_models = ['SABGOM', 'USEAST', 'USF_ROMS', 'USF_SWAN', 'USF_FVCOM'] # Config. fname = os.path.join(run_name, 'config.pkl') config = dict(start=start, stop=stop, bbox=bbox, name_list=name_list, units=units, run_name=run_name, secoora_models=secoora_models) with open(fname,'wb') as f: pickle.dump(config, f) from owslib import fes from utilities import fes_date_filter kw = dict(wildCard='*', escapeChar='\\', singleChar='?', propertyname='apiso:AnyText') or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw) for val in name_list]) # Exclude ROMS Averages and History files. 
not_filt = fes.Not([fes.PropertyIsLike(literal='*Averages*', **kw)]) begin, end = fes_date_filter(start, stop) filter_list = [fes.And([fes.BBox(bbox), begin, end, or_filt, not_filt])] from owslib.csw import CatalogueServiceWeb endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw' csw = CatalogueServiceWeb(endpoint, timeout=60) csw.getrecords2(constraints=filter_list, maxrecords=1000, esn='full') fmt = '{:*^64}'.format log.info(fmt(' Catalog information ')) log.info("URL: {}".format(endpoint)) log.info("CSW version: {}".format(csw.version)) log.info("Number of datasets available: {}".format(len(csw.records.keys()))) from utilities import service_urls dap_urls = service_urls(csw.records, service='odp:url') sos_urls = service_urls(csw.records, service='sos:url') log.info(fmt(' CSW ')) for rec, item in csw.records.items(): log.info('{}'.format(item.title)) log.info(fmt(' SOS ')) for url in sos_urls: log.info('{}'.format(url)) log.info(fmt(' DAP ')) for url in dap_urls: log.info('{}.html'.format(url)) from utilities import is_station # Filter out some station endpoints. non_stations = [] for url in dap_urls: try: if not is_station(url): non_stations.append(url) except RuntimeError as e: log.warn("Could not access URL {}. {!r}".format(url, e)) dap_urls = non_stations log.info(fmt(' Filtered DAP ')) for url in dap_urls: log.info('{}.html'.format(url))
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
Add SECOORA models and observations
from utilities import titles, fix_url for secoora_model in secoora_models: if titles[secoora_model] not in dap_urls: log.warning('{} not in the NGDC csw'.format(secoora_model)) dap_urls.append(titles[secoora_model]) # NOTE: USEAST is not archived at the moment! # https://github.com/ioos/secoora/issues/173 dap_urls = [fix_url(start, url) if 'SABGOM' in url else url for url in dap_urls] import warnings from iris.exceptions import CoordinateNotFoundError, ConstraintMismatchError from utilities import (TimeoutException, secoora_buoys, quick_load_cubes, proc_cube) urls = list(secoora_buoys()) buoys = dict() if not urls: raise ValueError("Did not find any SECOORA buoys!") for url in urls: try: with warnings.catch_warnings(): warnings.simplefilter("ignore") # Suppress iris warnings. kw = dict(bbox=bbox, time=(start, stop), units=units) cubes = quick_load_cubes(url, name_list) cubes = [proc_cube(cube, **kw) for cube in cubes] buoy = url.split('/')[-1].split('.nc')[0] if len(cubes) == 1: buoys.update({buoy: cubes[0]}) else: #[buoys.update({'{}_{}'.format(buoy, k): cube}) for # k, cube in list(enumerate(cubes))] # FIXME: For now I am choosing the first sensor. 
buoys.update({buoy: cubes[0]}) except (RuntimeError, ValueError, TimeoutException, ConstraintMismatchError, CoordinateNotFoundError) as e: log.warning('Cannot get cube for: {}\n{}'.format(url, e)) from pyoos.collectors.coops.coops_sos import CoopsSos collector = CoopsSos() datum = 'NAVD' collector.set_datum(datum) collector.end_time = stop collector.start_time = start collector.variables = [sos_name] ofrs = collector.server.offerings title = collector.server.identification.title log.info(fmt(' Collector offerings ')) log.info('{}: {} offerings'.format(title, len(ofrs))) from pandas import read_csv from utilities import sos_request params = dict(observedProperty=sos_name, eventTime=start.strftime('%Y-%m-%dT%H:%M:%SZ'), featureOfInterest='BBOX:{0},{1},{2},{3}'.format(*bbox), offering='urn:ioos:network:NOAA.NOS.CO-OPS:WaterLevelActive') uri = 'http://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/SOS' url = sos_request(uri, **params) observations = read_csv(url) log.info('SOS URL request: {}'.format(url))
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
Clean the DataFrame
from utilities import get_coops_metadata, to_html columns = {'datum_id': 'datum', 'sensor_id': 'sensor', 'station_id': 'station', 'latitude (degree)': 'lat', 'longitude (degree)': 'lon', 'vertical_position (m)': 'height', 'water_surface_height_above_reference_datum (m)': sos_name} observations.rename(columns=columns, inplace=True) observations['datum'] = [s.split(':')[-1] for s in observations['datum']] observations['sensor'] = [s.split(':')[-1] for s in observations['sensor']] observations['station'] = [s.split(':')[-1] for s in observations['station']] observations['name'] = [get_coops_metadata(s)[0] for s in observations['station']] observations.set_index('name', inplace=True) to_html(observations.head()) from pandas import DataFrame from utilities import secoora2df if buoys: secoora_observations = secoora2df(buoys, sos_name) to_html(secoora_observations.head()) else: secoora_observations = DataFrame() from pandas import concat all_obs = concat([observations, secoora_observations], axis=0) to_html(concat([all_obs.head(2), all_obs.tail(2)]))
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
Uniform 6-min time base for model/data comparison
from owslib.ows import ExceptionReport from utilities import pyoos2df, save_timeseries iris.FUTURE.netcdf_promote = True log.info(fmt(' Observations ')) outfile = '{}-OBS_DATA.nc'.format(run_name) outfile = os.path.join(run_name, outfile) log.info(fmt(' Downloading to file {} '.format(outfile))) data, bad_station = dict(), [] col = 'water_surface_height_above_reference_datum (m)' for station in observations.index: station_code = observations['station'][station] try: df = pyoos2df(collector, station_code, df_name=station) data.update({station_code: df[col]}) except ExceptionReport as e: bad_station.append(station_code) log.warning("[{}] {}:\n{}".format(station_code, station, e)) obs_data = DataFrame.from_dict(data)
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
Split good and bad vertical datum stations.
pattern = '|'.join(bad_station) if pattern: all_obs['bad_station'] = all_obs.station.str.contains(pattern) observations = observations[~observations.station.str.contains(pattern)] else: all_obs['bad_station'] = ~all_obs.station.str.contains(pattern) # Save updated `all_obs.csv`. fname = '{}-all_obs.csv'.format(run_name) fname = os.path.join(run_name, fname) all_obs.to_csv(fname) comment = "Several stations from http://opendap.co-ops.nos.noaa.gov" kw = dict(longitude=observations.lon, latitude=observations.lat, station_attr=dict(cf_role="timeseries_id"), cube_attr=dict(featureType='timeSeries', Conventions='CF-1.6', standard_name_vocabulary='CF-1.6', cdm_data_type="Station", comment=comment, datum=datum, url=url)) save_timeseries(obs_data, outfile=outfile, standard_name=sos_name, **kw) to_html(obs_data.head())
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
SECOORA Observations
import numpy as np from pandas import DataFrame def extract_series(cube, station): time = cube.coord(axis='T') date_time = time.units.num2date(cube.coord(axis='T').points) data = cube.data return DataFrame(data, columns=[station], index=date_time) if buoys: secoora_obs_data = [] for station, cube in list(buoys.items()): df = extract_series(cube, station) secoora_obs_data.append(df) # Some series have duplicated times! kw = dict(subset='index', take_last=True) secoora_obs_data = [obs.reset_index().drop_duplicates(**kw).set_index('index') for obs in secoora_obs_data] secoora_obs_data = concat(secoora_obs_data, axis=1) else: secoora_obs_data = DataFrame()
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
These buoys need some QA/QC before saving
from utilities.qaqc import filter_spikes, threshold_series if buoys: secoora_obs_data.apply(threshold_series, args=(0, 40)) secoora_obs_data.apply(filter_spikes) # Interpolate to the same index as SOS. index = obs_data.index kw = dict(method='time', limit=30) secoora_obs_data = secoora_obs_data.reindex(index).interpolate(**kw).ix[index] log.info(fmt(' SECOORA Observations ')) fname = '{}-SECOORA_OBS_DATA.nc'.format(run_name) fname = os.path.join(run_name, fname) log.info(fmt(' Downloading to file {} '.format(fname))) url = "http://129.252.139.124/thredds/catalog_platforms.html" comment = "Several stations {}".format(url) kw = dict(longitude=secoora_observations.lon, latitude=secoora_observations.lat, station_attr=dict(cf_role="timeseries_id"), cube_attr=dict(featureType='timeSeries', Conventions='CF-1.6', standard_name_vocabulary='CF-1.6', cdm_data_type="Station", comment=comment, url=url)) save_timeseries(secoora_obs_data, outfile=fname, standard_name=sos_name, **kw) to_html(secoora_obs_data.head())
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
Loop discovered models and save the nearest time-series
from iris.exceptions import (CoordinateNotFoundError, ConstraintMismatchError, MergeError) from utilities import time_limit, get_model_name, is_model log.info(fmt(' Models ')) cubes = dict() with warnings.catch_warnings(): warnings.simplefilter("ignore") # Suppress iris warnings. for k, url in enumerate(dap_urls): log.info('\n[Reading url {}/{}]: {}'.format(k+1, len(dap_urls), url)) try: with time_limit(60*5): cube = quick_load_cubes(url, name_list, callback=None, strict=True) if is_model(cube): cube = proc_cube(cube, bbox=bbox, time=(start, stop), units=units) else: log.warning("[Not model data]: {}".format(url)) continue mod_name, model_full_name = get_model_name(cube, url) cubes.update({mod_name: cube}) except (TimeoutException, RuntimeError, ValueError, ConstraintMismatchError, CoordinateNotFoundError, IndexError) as e: log.warning('Cannot get cube for: {}\n{}'.format(url, e)) from iris.pandas import as_series from utilities import (make_tree, get_nearest_water, add_station, ensure_timeseries) for mod_name, cube in cubes.items(): fname = '{}-{}.nc'.format(run_name, mod_name) fname = os.path.join(run_name, fname) log.info(fmt(' Saving to file {} '.format(fname))) try: tree, lon, lat = make_tree(cube) except CoordinateNotFoundError as e: log.warning('Cannot create KDTree for: {}'.format(mod_name)) continue # Get model series at observed locations. raw_series = dict() for station, obs in all_obs.iterrows(): try: kw = dict(k=10, max_dist=0.04, min_var=0.01) args = cube, tree, obs.lon, obs.lat series, dist, idx = get_nearest_water(*args, **kw) except ValueError as e: status = "No Data" log.info('[{}] {}'.format(status, obs.name)) continue except RuntimeError as e: status = "Failed" log.info('[{}] {}. ({})'.format(status, obs.name, e.message)) continue if not series: status = "Land " else: raw_series.update({obs['station']: series}) series = as_series(series) status = "Water " log.info('[{}] {}'.format(status, obs.name)) if raw_series: # Save cube. 
for station, cube in raw_series.items(): cube = add_station(cube, station) try: cube = iris.cube.CubeList(raw_series.values()).merge_cube() except MergeError as e: log.warning(e) ensure_timeseries(cube) iris.save(cube, fname) del cube log.info(fmt('Finished processing {}\n'.format(mod_name))) from utilities import nbviewer_link, make_qr make_qr(nbviewer_link('00-fetch_data.ipynb')) elapsed = time.time() - start_time log.info('{:.2f} minutes'.format(elapsed/60.)) log.info('EOF') with open('{}/log.txt'.format(run_name)) as f: print(f.read())
notebooks/timeSeries/ssh/00-fetch_data.ipynb
ocefpaf/secoora
mit
Rawest Plot (100k sampling)
rawestImg = sitk.GetArrayFromImage(inImg) ##convert to simpleITK image to normal numpy ndarray ## Randomly sample 100k points import random x = rawestImg[:,0,0] y = rawestImg[0,:,0] z = rawestImg[0,0,:] # mod by x to get x, mod by y to get y, mod by z to get z xdimensions = len(x) ydimensions = len(y) zdimensions = len(z) # index = random.sample(xrange(0,xdimensions*ydimensions*zdimensions),100000) #66473400 is multiplying xshape by yshape by zshape #### random sampling of values > 250 ###### # X_val = [] # Y_val = [] # Z_val = [] # for i in index: # z_val = int(i/(xdimensions*ydimensions)) # z_rem = i%(xdimensions*ydimensions) # y_val = int(z_rem/xdimensions) # x_val = int(z_rem/ydimensions) # X_val.append(x_val) # Y_val.append(y_val) # Z_val.append(z_val) # xlist = [] # ylist = [] # zlist = [] xyz = [] # for i in range(100000): # value = 0 # while(value == 0): # xval = random.sample(xrange(0,xdimensions), 1)[0] # yval = random.sample(xrange(0,ydimensions), 1)[0] # zval = random.sample(xrange(0,zdimensions), 1)[0] # value = rawestImg[xval,yval,zval] # if [xval, yval, zval] not in xyz and value > 250: # xyz.append([xval, yval, zval]) # else: # value = 0 # print xyz #### obtaining 100000 brightest points ####### total = 100000 bright = np.max(rawestImg) allpoints = [] brightpoints = [] for (x,y,z),value in np.ndenumerate(rawestImg): # print (x,y,z) # print (x,z,z)[0] # print value if value == bright: brightpoints.append([(x,y,z)]) else: allpoints.append([(x,y,z),value]) total -= len(brightpoints) bright -= 1 if total < 0: index = random.sample(xrange(0, len(brightpoints)), total) for ind in index: xyz.append(brightpoints[ind]) else: xyz = xyz + brightpoints # for item in brightpoints: # outfile.write(",".join(item) + "\n") while(total > 0): del brightpoints[:] for item in allpoints: if item[3] == bright: brightpoins.append(item) allpoints.remove(item) total -= len(brightpoints) bright -= 1 if total < 0: index = random.sample(xrange(0, len(brightpoints)), total) for 
ind in index: xyz.append(brightpoints[ind]) else: xyz = xyz + brightpoints # for item in brightpoints: # outfile.write(",".join(item) + "\n") print len(xyz) from plotly.offline import download_plotlyjs, init_notebook_mode, iplot from plotly import tools import plotly plotly.offline.init_notebook_mode() import plotly.graph_objs as go # x = X_val # y = Y_val # z = Z_val xlist = [] ylist = [] zlist = [] for i in xyz: xlist.append(i[0]) ylist.append(i[1]) zlist.append(i[2]) x = xlist y = ylist z = zlist trace1 = go.Scatter3d( x = x, y = y, z = z, mode='markers', marker=dict( size=1.2, color='purple', # set color to an array/list of desired values colorscale='Viridis', # choose a colorscale opacity=0.15 ) ) data = [trace1] layout = go.Layout( margin=dict( l=0, r=0, b=0, t=0 ) ) fig = go.Figure(data=data, layout=layout) print('Aut1367rawest' + "plotly") iplot(fig) plotly.offline.plot(fig, filename= 'AUT1367rawestluke200' + "plotly.html")
Jupyter/Filter_and_Plotly_Luke.ipynb
NeuroDataDesign/seelviz
apache-2.0
Filter image
## Clean out noise (Filter Image) (values, bins) = np.histogram(sitk.GetArrayFromImage(inImg), bins=100, range=(0,500)) plt.plot(bins[:-1], values) counts = np.bincount(values) maximum = np.argmax(counts) lowerThreshold = 100 #maximum upperThreshold = sitk.GetArrayFromImage(inImg).max()+1 filteredImg = sitk.Threshold(inImg,lowerThreshold,upperThreshold,lowerThreshold) - lowerThreshold
Jupyter/Filter_and_Plotly_Luke.ipynb
NeuroDataDesign/seelviz
apache-2.0
Randomly sample 100k points
filterImg = sitk.GetArrayFromImage(filteredImg) ##convert to simpleITK image to normal numpy ndarray print filterImg[0][0] ## Randomly sample 100k points after filtering x = filterImg[:,0,0] y = filterImg[0,:,0] z = filterImg[0,0,:] # mod by x to get x, mod by y to get y, mod by z to get z xdimensions = len(x) ydimensions = len(y) zdimensions = len(z) index = random.sample(xrange(0,xdimensions*ydimensions*zdimensions),100000) #66473400 is multiplying xshape by yshape by zshape X_val = [] Y_val = [] Z_val = [] for i in index: z_val = int(i/(xdimensions*ydimensions)) z_rem = i%(xdimensions*ydimensions) y_val = int(z_rem/xdimensions) x_val = int(z_rem/ydimensions) X_val.append(x_val) Y_val.append(y_val) Z_val.append(z_val) xlist = [] ylist = [] zlist = [] for i in range(100000): xlist.append(random.sample(xrange(0,xdimensions), 1)[0]) ylist.append(random.sample(xrange(0,ydimensions), 1)[0]) zlist.append(random.sample(xrange(0,zdimensions), 1)[0]) print xlist print X_val print Y_val print Z_val print len(X_val) print len(Y_val) print len(Z_val)
Jupyter/Filter_and_Plotly_Luke.ipynb
NeuroDataDesign/seelviz
apache-2.0
UNUSED: spacingImg = inImg.GetSpacing() spacing = tuple(i * 50 for i in spacingImg) print spacingImg print spacing inImg.SetSpacing(spacingImg) inImg_download = inImg # Aut1367 set to default spacing inImg = imgResample(inImg, spacing=refImg.GetSpacing()) Img_reorient = imgReorient(inImg, "LPS", "RSA") #specific reoriented Aut1367 inImg_reorient = Img_reorient refImg_ds = imgResample(refImg, spacing=spacing) # atlas downsample 50x Plot of 100k data post filtering
from plotly.offline import download_plotlyjs, init_notebook_mode, iplot from plotly import tools import plotly plotly.offline.init_notebook_mode() import plotly.graph_objs as go x = X_val y = Y_val z = Z_val trace1 = go.Scatter3d( x = x, y = y, z = z, mode='markers', marker=dict( size=1.2, color='purple', # set color to an array/list of desired values colorscale='Viridis', # choose a colorscale opacity=0.15 ) ) data = [trace1] layout = go.Layout( margin=dict( l=0, r=0, b=0, t=0 ) ) fig = go.Figure(data=data, layout=layout) print('Aut1367filter' + "plotly") plotly.offline.plot(fig, filename= 'AUT1367filter' + "plotly.html")
Jupyter/Filter_and_Plotly_Luke.ipynb
NeuroDataDesign/seelviz
apache-2.0
Filter Image Again Don't do this Clean out noise (Filter Image) (values, bins) = np.histogram(filterImg, bins=100, range=(0,500)) plt.plot(bins[:-1], values) counts = np.bincount(values) maximum = np.argmax(counts) lowerThreshold = maximum upperThreshold = filterImg.max()+1 filterX2Img = sitk.Threshold(inImg,lowerThreshold,upperThreshold,lowerThreshold) - lowerThreshold newImg = sitk.GetArrayFromImage(filterX2Img) ##convert to simpleITK image to normal numpy ndarray
## Histogram Equalization ## Cut from generateHistogram from clarityviz import cv2 im = filterImg img = im[:,:,:] shape = im.shape #affine = im.get_affine() x_value = shape[0] y_value = shape[1] z_value = shape[2] ##################################################### imgflat = img.reshape(-1) #img_grey = np.array(imgflat * 255, dtype = np.uint8) #img_eq = exposure.equalize_hist(img_grey)#new_img = img_eq.reshape(x_value, y_value, z_value) #globaleq = nib.Nifti1Image(new_img, np.eye(4)) #nb.save(globaleq, '/home/albert/Thumbo/AutAglobaleq.nii') ###################################################### #clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) img_grey = np.array(imgflat * 255, dtype = np.uint8) #threshed = cv2.adaptiveThreshold(img_grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 0) cl1 = clahe.apply(img_grey) #cv2.imwrite('clahe_2.jpg',cl1) #cv2.startWindowThread() #cv2.namedWindow("adaptive") #cv2.imshow("adaptive", cl1) #cv2.imshow("adaptive", threshed) #plt.imshow(threshed) localimgflat = cl1 #cl1.reshape(-1) newer_img = localimgflat.reshape(x_value, y_value, z_value) ##this is the numpy.ndarray version localeq = nb.Nifti1Image(newer_img, np.eye(4)) print newer_img.shape
Jupyter/Filter_and_Plotly_Luke.ipynb
NeuroDataDesign/seelviz
apache-2.0
Plotting post filtering/HistEq
x_histeq = newer_img[:,0,0] y_histeq = newer_img[0,:,0] z_histeq = newer_img[0,0,:] ## Randomly sample 100k points after filtering xdimensions = len(x) ydimensions = len(y) zdimensions = len(z) index = random.sample(xrange(0,xdimensions*ydimensions*zdimensions),100000) #66473400 is multiplying xshape by yshape by zshape X_val = [] Y_val = [] Z_val = [] for i in index: z_val = int(i/(xdimensions*ydimensions)) z_rem = i%(xdimensions*ydimensions) y_val = int(z_rem/xdimensions) x_val = int(z_rem/ydimensions) X_val.append(x_val) Y_val.append(y_val) Z_val.append(z_val) xlist = [] ylist = [] zlist = [] for i in range(100000): xlist.append(random.sample(xrange(0,xdimensions), 1)[0]) ylist.append(random.sample(xrange(0,ydimensions), 1)[0]) zlist.append(random.sample(xrange(0,zdimensions), 1)[0]) for i in index: z_val = int(i/(xdimensions*ydimensions)) z_rem = i%(xdimensions*ydimensions) y_val = int(z_rem/xdimensions) x_val = int(z_rem/ydimensions) X_val.append(x_val) Y_val.append(y_val) Z_val.append(z_val) trace1 = go.Scatter3d( x = x, y = y, z = z, mode='markers', marker=dict( size=1.2, color='purple', # set color to an array/list of desired values colorscale='Viridis', # choose a colorscale opacity=0.15 ) ) data = [trace1] layout = go.Layout( margin=dict( l=0, r=0, b=0, t=0 ) ) fig = go.Figure(data=data, layout=layout) print('Aut1367postfilterHistEq' + "plotly") plotly.offline.plot(fig, filename= 'Aut1367postfilterHistEq' + "plotly.html") print "done"
Jupyter/Filter_and_Plotly_Luke.ipynb
NeuroDataDesign/seelviz
apache-2.0
Load Data
train = pd.read_csv("train.csv") train.describe()
titanic/titanic.ipynb
ajmendez/explore
mit
Clean Data At the outset it seems that there are some issues with the number of observations for several columns (e.g., Age, Cabin, Embarked). * Gender is non-numeric * Embarked is also a string * Age -- missing and incorrect data
# Cleanup Gender and Embarked train['Sex'] = np.where(train['Sex'] == 'male', 0, 1) train['Embarked'] = train['Embarked'].fillna('Z').map(dict(C=0, S=1, Q=2, Z=3)) # AGE -- quickly look at data train['hasage'] = np.isnan(train['Age']) train.hist('Age', by='Survived', bins=25) train.groupby('Survived').mean()
titanic/titanic.ipynb
ajmendez/explore
mit
There is a clear difference in the age distributions between those who survived and those who did not. Also from the table you can see the differences in the mean values of the passenger class (Pclass), ages, and fares. Note it is also more likely for the age to be missing if the passenger did not survive. Rather than attempting to model the missing ages, I include a -1 age class
# Age is missing values train['Age'] = np.where(np.isfinite(train['Age']), train['Age'], -1)
titanic/titanic.ipynb
ajmendez/explore
mit
Feature Creation
# Remap cabin to a numeric value depending on the letter m = {chr(i+97).upper():i for i in range(26)} shortenmap = lambda x: m[x[0]] train['cleancabin'] = train['Cabin'].fillna('Z').apply(shortenmap) train['cleancabin'].hist() # Get person title / family name # These might be overfitting the data since the title is correlated with gender # and family name and siblings, however they seems to add more information. train['family'] = train['Name'].apply(lambda x: x.split(',')[0]) train['title'] = train['Name'].apply(lambda x: x.split(',')[1].split()[0]) nfamily = dict(train['family'].value_counts()) train['nfamily'] = train['family'].map(nfamily) ntitle = {title:i for i,title in enumerate(np.unique(train['title']))} train['ntitle'] = train['title'].map(ntitle) train.groupby('Survived').mean()
titanic/titanic.ipynb
ajmendez/explore
mit
Classify! First test different techniques to see how well they predict the training set.
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", 'cleancabin', 'nfamily', 'ntitle'] from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier, AdaBoostRegressor from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import VotingClassifier from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import ExtraTreesClassifier from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn import cross_validation
titanic/titanic.ipynb
ajmendez/explore
mit
Logistic Regression
scores = cross_validation.cross_val_score( LogisticRegression(random_state=0), train[predictors], train["Survived"], cv=3 ) print('{:0.1f}'.format(100*scores.mean()))
titanic/titanic.ipynb
ajmendez/explore
mit
Random Forest
scores = cross_validation.cross_val_score( RandomForestClassifier( random_state=0, n_estimators=150, min_samples_split=4, min_samples_leaf=2 ), train[predictors], train["Survived"], cv=3 ) print('{:0.1f}'.format(100*scores.mean()))
titanic/titanic.ipynb
ajmendez/explore
mit
Gradient Boost
scores = cross_validation.cross_val_score( GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0), train[predictors], train["Survived"], cv=3 ) print('{:0.1f}'.format(100*scores.mean()))
titanic/titanic.ipynb
ajmendez/explore
mit
Support Vector Machine Classifier
scores = cross_validation.cross_val_score( SVC(random_state=0), train[predictors], train["Survived"], cv=3 ) print('{:0.1f}'.format(100*scores.mean()))
Support Vector Machine Classifier with AdaBoost?! Broken!
# Wrapping a classifier (SVC) in AdaBoostRegressor means cross_val_score
# falls back to R^2 scoring on a 0/1 target, so this number is not an
# accuracy -- which is why this combination is flagged as broken.
scores = cross_validation.cross_val_score(
    AdaBoostRegressor(SVC(kernel='poly', random_state=0),
                      random_state=0, n_estimators=500, learning_rate=0.5),
    train[predictors], train["Survived"], cv=3
)
print('{:0.1f}'.format(100 * scores.mean()))
AdaBoost
scores = cross_validation.cross_val_score(
    AdaBoostClassifier(random_state=0, n_estimators=100),
    train[predictors], train["Survived"], cv=3
)
print('{:0.1f}'.format(100 * scores.mean()))
K Nearest Neighbors + Bagging
bagging = BaggingClassifier(KNeighborsClassifier(), max_samples=0.5,
                            max_features=0.5, random_state=0)
scores = cross_validation.cross_val_score(
    bagging, train[predictors], train["Survived"], cv=3
)
print('{:0.1f}'.format(100 * scores.mean()))
Voting Classifier with multiple classifiers
est = [('GNB', GaussianNB()),
       ('LR', LogisticRegression(random_state=1)),
       ('RFC', RandomForestClassifier(random_state=1))]
alg = BaggingClassifier(VotingClassifier(est, voting='soft'),
                        max_samples=0.5, max_features=0.5)
scores = cross_validation.cross_val_score(
    alg, train[predictors], train["Survived"], cv=3
)
print('{:0.1f}'.format(100 * scores.mean()))
Measure feature strength
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(train[predictors], train['Survived'])
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]

ind = np.arange(len(indices))
# Reorder the error bars with the same indices as the importances.
pylab.errorbar(ind, importances[indices], yerr=std[indices], fmt='s')
pylab.xticks(ind, [predictors[i] for i in indices], rotation='vertical')
pylab.axhline(0.0, color='orange', lw=2)

for index in indices:
    print('{} {} {:0.3f}'.format(index, predictors[index], importances[index]))
matplotlib
matplotlib is a powerful plotting library for Python. (It is not part of the standard library, but it is included in most scientific Python distributions.) The website for matplotlib is at http://matplotlib.org/, and you can find many examples at http://matplotlib.org/examples/index.html and http://matplotlib.org/gallery.html. matplotlib contains a module called pyplot that provides a Matlab-style plotting interface.
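As a quick, self-contained illustration of that Matlab-style interface — a sketch using the non-interactive Agg backend so it also runs outside a notebook:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; in Jupyter you would rely on the inline backend instead
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.cos(x))       # pyplot tracks the "current" figure and axes for you
plt.title('cos(x)')
plt.xlabel('x')
plt.ylabel('cos(x)')
plt.savefig('cos_demo.png')  # write the figure to disk instead of displaying it
```

Each `plt.*` call implicitly modifies the current figure, which is what makes the interface feel like Matlab.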
# Import matplotlib.pyplot
import matplotlib.pyplot as plt
winter2017/econ129/python/Econ129_Class_04.ipynb
letsgoexploring/teaching
mit
Next, we want to make sure that the plots we create are displayed in this notebook. To achieve this we have to issue a command to be interpreted by Jupyter -- called a magic command. A magic command is preceded by a % character. Magics are not Python and will cause errors if used outside of a Jupyter notebook.
# Magic command for the Jupyter Notebook
%matplotlib inline
A quick matplotlib example Create a plot of the sine function for x values between -6 and 6. Add axis labels and a title.
# Import numpy as np
import numpy as np

# Create an array of x values from -6 to 6
x = np.linspace(-6, 6, 200)

# Create a variable y equal to the sin of x
y = np.sin(x)

# Use the plot function to plot the line
plt.plot(x, y)

# Add a title and axis labels
plt.title('The sine function')
plt.xlabel('x')
plt.ylabel('sin(x)')