| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Test that the number of tracks is independent of track description | def compute_target_number_of_tracks(X):
ids = numpy.unique(X.group_column, return_inverse=True)[1]
number_of_tracks = numpy.bincount(ids)
target = number_of_tracks[ids]
return target
from decisiontrain import DecisionTrainRegressor
from rep.estimators import SklearnRegressor
from rep.metaml ... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
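The bincount/unique trick above can be sketched in isolation. The array below is a hypothetical `group_column`; this variant counts over the contiguous ids explicitly so it works for arbitrary group labels:

```python
import numpy as np

def group_sizes_per_row(group_column):
    # map arbitrary group labels to contiguous ids 0..k-1
    _, ids = np.unique(group_column, return_inverse=True)
    # count rows per group, then broadcast each group's size back to its rows
    counts = np.bincount(ids)
    return counts[ids]

print(group_sizes_per_row(np.array([7, 7, 3, 7, 3])))  # -> [3 3 2 3 2]
```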
Define base estimator and B weights, labels | tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1000,
n_threads=16)
B_signs = data['signB'].groupby(data['group_column']).aggregate(numpy.mean)
B_weights = data['N_sig_sw'].groupby(data['group_column']).aggregate(numpy.mean)
B_signs_MC = MC['signB'].groupby(MC['gro... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
B probability computation | from scipy.special import logit, expit
def compute_Bprobs(X, track_proba, weights=None, normed_weights=False):
if weights is None:
weights = numpy.ones(len(X))
_, data_ids = numpy.unique(X['group_column'], return_inverse=True)
track_proba[~numpy.isfinite(track_proba)] = 0.5
track_proba[nump... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
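As a rough illustration of what a function like `compute_Bprobs` does with `logit` and `expit` (an assumption based on the imports, not the notebook's exact rule), per-track probabilities can be pooled by summing weighted log-odds and mapping the sum back through the logistic function:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def combine_track_probs(track_probs, weights=None):
    # pool per-track probabilities by summing weighted log-odds,
    # then map back to a probability through the logistic function
    if weights is None:
        weights = [1.0] * len(track_probs)
    total = sum(w * logit(p) for p, w in zip(track_probs, weights))
    return expit(total)

# two weak, agreeing votes reinforce each other
print(combine_track_probs([0.6, 0.6]))
```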
Inclusive tagging: training on data | tt_data = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_data.fit(data, data.label, sample_weight=data.N_sig_sw.values * mask_sw_positive)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Inclusive tagging: training on MC | tt_MC = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_MC.fit(MC, MC.label)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
New method
Reweighting with classifier
Combine data and MC to train a classifier | combined_data_MC = pandas.concat([data, MC])
combined_label = numpy.array([0] * len(data) + [1] * len(MC))
combined_weights_data = data.N_sig_sw.values #/ numpy.bincount(data.group_column)[data.group_column.values]
combined_weights_data_passed = combined_weights_data * mask_sw_positive
combined_weights_MC = MC.N_sig_s... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
train classifier to distinguish data and MC | %%time
tt_base_large = DecisionTrainClassifier(learning_rate=0.3, n_estimators=1000,
n_threads=20)
tt_data_vs_MC = FoldingGroupClassifier(SklearnClassifier(tt_base_large), n_folds=2, random_state=321,
train_features=features + ['label'], g... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
quality | combined_p = tt_data_vs_MC.predict_proba(combined_data_MC)[:, 1]
roc_auc_score(combined_label, combined_p, sample_weight=combined_weights)
roc_auc_score(combined_label, combined_p, sample_weight=combined_weights_all) | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
calibrate probabilities (needed because the reweighting rule uses the probabilities directly) | from utils import calibrate_probs, plot_calibration
combined_p_calib = calibrate_probs(combined_label, combined_weights, combined_p)[0]
plot_calibration(combined_p, combined_label, weight=combined_weights)
plot_calibration(combined_p_calib, combined_label, weight=combined_weights) | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
compute MC and data track weights | # reweight data predicted as data to MC
used_probs = combined_p_calib
data_probs_to_be_MC = used_probs[combined_label == 0]
MC_probs_to_be_MC = used_probs[combined_label == 1]
track_weights_data = numpy.ones(len(data))
# take data with probability to be data
mask_data = data_probs_to_be_MC < 0.5
track_weights_data[mas... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
reweighting plotting | hist(combined_p_calib[combined_label == 1], label='MC', normed=True, alpha=0.4, bins=60,
weights=combined_weights_MC)
hist(combined_p_calib[combined_label == 0], label='data', normed=True, alpha=0.4, bins=60,
weights=combined_weights_data);
legend(loc='best')
hist(track_weights_MC, normed=True, alpha=0.4, b... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Check reweighting rule
train classifier to distinguish data vs MC with provided weights | %%time
tt_check = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=433,
train_features=features + ['label'], group_feature='group_column')
tt_check.fit(combined_data_MC, combined_label,
sample_weight=numpy.concatenate([track_weights_data * data... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Classifier trained on MC | tt_reweighted_MC = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_reweighted_MC.fit(MC, MC.label, sample_weight=track_weights_MC * MC.N_sig_sw.values)
pass
pandas.DataFrame({'data... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Classifier trained on data | %%time
tt_reweighted_data = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
tt_reweighted_data.fit(data, data.label,
sample_weight=track_weights_data * data.N_sig_... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
numpy.mean(mc_sum_weights_per_event), numpy.mean(data_sum_weights_per_event)
_, data_ids = numpy.unique(data['group_column'], return_inverse=True)
mc_sum_weights_per_event = numpy.bincount(MC.group_column.values, weights=track_weights_MC)
data_sum_weights_per_event = numpy.bincount(data_ids, weights=track_weights_data... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 | |
Calibration | from utils import compute_mistag
bins_perc = [10, 20, 30, 40, 50, 60, 70, 80, 90]
compute_mistag(expit(p_data), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='data')
compute_mistag(expit(p_tt_mc), B_signs, B_weights, chosen=numpy.on... | experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Setup | samples = [
"Owen Wilson is the ugliest person I've ever seen, period.",
"Of the things I don't like, I like bankers the least.",
"You shouldn't listen to Sam Harris; He's an idiot.",
"I don't like women.",
    "Alex is worse than James, though both of them are fuckheads.",
"I just want to tell those... | insults/exploration/model/non_personal_insults.ipynb | thundergolfer/Insults | gpl-3.0 |
Exploration | import seaborn
seaborn.distplot(results, hist_kws={"range": [0,1]}) | insults/exploration/model/non_personal_insults.ipynb | thundergolfer/Insults | gpl-3.0 |
Comparing initial point generation methods
Holger Nahrstaedt 2020
.. currentmodule:: skopt
Bayesian optimization or sequential model-based optimization uses a surrogate
model to approximate the expensive-to-evaluate function func. There are several
choices for what kind of surrogate model to use. This notebook compares the
p... | print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt | dev/notebooks/auto_examples/sampler/sampling_comparison.ipynb | scikit-optimize/scikit-optimize.github.io | bsd-3-clause |
Note that this can take a few minutes. | plot = plot_convergence([("random", dummy_res),
("lhs", lhs_res),
("lhs_maximin", lhs2_res),
("sobol'", sobol_res),
("halton", halton_res),
("hammersly", hammersly_res),
("grid... | dev/notebooks/auto_examples/sampler/sampling_comparison.ipynb | scikit-optimize/scikit-optimize.github.io | bsd-3-clause |
Reload Parameters and test performance | batchSize=20
with tf.Session() as sess:
saver=tf.train.Saver()
saver.restore(sess=sess,save_path=r".\model_checkpoints\MNIST_CNN-"+str(3000))
acc=0
for batch_i in range(int(MNIST.test.num_examples/batchSize)):
x_batch,y_batch=MNIST.test.next_batch(batch_size=batchSize)
pred=sess.run(L4Ou... | CNN101/CNN101_Test.ipynb | BorisPolonsky/LearningTensorFlow | mit |
The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; i... | model.compile(
optimizer="adam",
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
) | notebooks/image_models/solutions/5_fashion_mnist_class.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
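The 28 * 28 = 784 arithmetic can be checked directly; a minimal NumPy sketch of what `Flatten` does to a (hypothetical) batch:

```python
import numpy as np

batch = np.zeros((32, 28, 28))                 # a made-up batch of 28x28 images
flattened = batch.reshape(batch.shape[0], -1)  # unstack the pixel rows into one axis
print(flattened.shape)  # -> (32, 784)
```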
Interruptible optimization runs with checkpoints
Christian Schell, Mai 2018
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Problem statement
Optimization runs can take a very long time and even run for multiple days.
If for some reason the process has to be interrupted, results are irreversibly
lost, and... | print(__doc__)
import sys
import numpy as np
np.random.seed(777)
import os
# The followings are hacks to allow sphinx-gallery to run the example.
sys.path.insert(0, os.getcwd())
main_dir = os.path.basename(sys.modules['__main__'].__file__)
IS_RUN_WITH_SPHINX_GALLERY = main_dir != os.getcwd() | 0.7/notebooks/auto_examples/interruptible-optimization.ipynb | scikit-optimize/scikit-optimize.github.io | bsd-3-clause |
Simple example
We will use pretty much the same optimization problem as in the
sphx_glr_auto_examples_bayesian-optimization.py
notebook. Additionally we will instantiate the :class:callbacks.CheckpointSaver
and pass it to the minimizer: | from skopt import gp_minimize
from skopt import callbacks
from skopt.callbacks import CheckpointSaver
noise_level = 0.1
if IS_RUN_WITH_SPHINX_GALLERY:
# When this example is run with sphinx gallery, it breaks the pickling
# capacity for multiprocessing backend so we have to modify the way we
# define our ... | 0.7/notebooks/auto_examples/interruptible-optimization.ipynb | scikit-optimize/scikit-optimize.github.io | bsd-3-clause |
Generate some initial 2D data: | learning_rate = 0.01
training_epochs = 1000
num_labels = 3
batch_size = 100
x1_label0 = np.random.normal(1, 1, (100, 1))
x2_label0 = np.random.normal(1, 1, (100, 1))
x1_label1 = np.random.normal(5, 1, (100, 1))
x2_label1 = np.random.normal(4, 1, (100, 1))
x1_label2 = np.random.normal(8, 1, (100, 1))
x2_label2 = np.ran... | ch04_classification/Concept04_softmax.ipynb | BinRoot/TensorFlow-Book | mit |
Define the labels and shuffle the data: | xs_label0 = np.hstack((x1_label0, x2_label0))
xs_label1 = np.hstack((x1_label1, x2_label1))
xs_label2 = np.hstack((x1_label2, x2_label2))
xs = np.vstack((xs_label0, xs_label1, xs_label2))
labels = np.matrix([[1., 0., 0.]] * len(x1_label0) + [[0., 1., 0.]] * len(x1_label1) + [[0., 0., 1.]] * len(x1_label2))
arr = np.a... | ch04_classification/Concept04_softmax.ipynb | BinRoot/TensorFlow-Book | mit |
We'll get back to this later, but the following are test inputs that we'll use to evaluate the model: | test_x1_label0 = np.random.normal(1, 1, (10, 1))
test_x2_label0 = np.random.normal(1, 1, (10, 1))
test_x1_label1 = np.random.normal(5, 1, (10, 1))
test_x2_label1 = np.random.normal(4, 1, (10, 1))
test_x1_label2 = np.random.normal(8, 1, (10, 1))
test_x2_label2 = np.random.normal(0, 1, (10, 1))
test_xs_label0 = np.hstack... | ch04_classification/Concept04_softmax.ipynb | BinRoot/TensorFlow-Book | mit |
Again, define the placeholders, variables, model, and cost function: | train_size, num_features = xs.shape
X = tf.placeholder("float", shape=[None, num_features])
Y = tf.placeholder("float", shape=[None, num_labels])
W = tf.Variable(tf.zeros([num_features, num_labels]))
b = tf.Variable(tf.zeros([num_labels]))
y_model = tf.nn.softmax(tf.matmul(X, W) + b)
cost = -tf.reduce_sum(Y * tf.log... | ch04_classification/Concept04_softmax.ipynb | BinRoot/TensorFlow-Book | mit |
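The cost in the cell above is softmax cross-entropy; a small NumPy sketch of both pieces (the logits and the one-hot label `Y` here are made-up examples):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift by the max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(Y, P):
    return -np.sum(Y * np.log(P))  # the same -sum(Y * log(P)) cost as above

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
Y = np.array([[1.0, 0.0, 0.0]])  # one-hot label for class 0
print(probs, cross_entropy(Y, probs))
```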
Train the softmax classification model: | with tf.Session() as sess:
tf.global_variables_initializer().run()
for step in range(training_epochs * train_size // batch_size):
offset = (step * batch_size) % train_size
batch_xs = xs[offset:(offset + batch_size), :]
batch_labels = labels[offset:(offset + batch_size)]
err, _ =... | ch04_classification/Concept04_softmax.ipynb | BinRoot/TensorFlow-Book | mit |
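The `offset = (step * batch_size) % train_size` arithmetic in the training loop simply cycles through the training set; a quick sketch with hypothetical sizes:

```python
train_size, batch_size = 10, 4  # made-up sizes
offsets = [(step * batch_size) % train_size for step in range(5)]
print(offsets)  # -> [0, 4, 8, 2, 6]
```

Note that with these sizes the slice starting at offset 8 yields a short batch of only 2 rows, a known quirk of this cycling scheme.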
Define AOI
Define the AOI as a geojson polygon. This can be done at geojson.io. If you use geojson.io, only copy the single aoi feature, not the entire feature collection. | aoi = {u'geometry': {u'type': u'Polygon', u'coordinates': [[[-121.3113248348236, 38.28911976564886], [-121.3113248348236, 38.34622533958], [-121.2344205379486, 38.34622533958], [-121.2344205379486, 38.28911976564886], [-121.3113248348236, 38.28911976564886]]]}, u'type': u'Feature', u'properties': {u'style': {u'opacity'... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Build Request
Build the Planet API Filter request for the Landsat 8 and PS Orthotile imagery taken in 2017 through August 23. | # define the date range for imagery
start_date = datetime.datetime(year=2017,month=1,day=1)
stop_date = datetime.datetime(year=2017,month=8,day=23)
# filters.build_search_request() item types:
# Landsat 8 - 'Landsat8L1G'
# Sentinel - 'Sentinel2L1C'
# PS Orthotile = 'PSOrthoTile'
def build_landsat_request(aoi_geom, st... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Search Planet API
The client is how we interact with the planet api. It is created with the user-specific api key, which is pulled from $PL_API_KEY environment variable. Create the client then use it to search for PS Orthotile and Landsat 8 scenes. Save a subset of the metadata provided by Planet API as our 'scene'. | def get_api_key():
return os.environ['PL_API_KEY']
# quick check that key is defined
assert get_api_key(), "PL_API_KEY not defined."
def create_client():
return api.ClientV1(api_key=get_api_key())
def search_pl_api(request, limit=500):
client = create_client()
result = client.quick_search(request)
... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
In processing the items to scenes, we are only using a small subset of the product metadata. | def items_to_scenes(items):
item_types = []
def _get_props(item):
props = item['properties']
props.update({
'thumbnail': item['_links']['thumbnail'],
'item_type': item['properties']['item_type'],
'id': item['id'],
'acquired': item['properties']['a... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Investigate Landsat Scenes
There are quite a few Landsat 8 scenes that are returned by our query. What do the footprints look like relative to our AOI and what is the collection time of the scenes? | landsat_scenes = items_to_scenes(search_pl_api(build_landsat_request(aoi['geometry'],
start_date, stop_date)))
# How many Landsat 8 scenes match the query?
print(len(landsat_scenes)) | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Show Landsat 8 Footprints on Map | def landsat_scenes_to_features_layer(scenes):
features_style = {
'color': 'grey',
'weight': 1,
'fillColor': 'grey',
'fillOpacity': 0.15}
features = [{"geometry": r.footprint,
"type": "Feature",
"properties": {"style": features_st... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
This AOI is located in a region covered by 3 different path/row tiles. This means there is 3x the coverage of regions covered by only one path/row tile. This is particularly lucky!
What about the coverage within each path/row tile? How long and how consistent is the Landsat 8 collect period for each path/row? | def time_diff_stats(group):
time_diff = group.index.to_series().diff() # time difference between rows in group
stats = {'median': time_diff.median(),
'mean': time_diff.mean(),
'std': time_diff.std(),
'count': time_diff.count(),
'min': time_diff.min(),
... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
It looks like the collection period is 16 days, which lines up with the Landsat 8 mission description.
Path/row 43/33 is missing one image, which causes an unusually long collect period.
What this means is that we don't need to look at every Landsat 8 scene collect time to find crossovers with Planet scenes. We could lo... | def find_closest(date_time, data_frame):
# inspired by:
# https://stackoverflow.com/questions/36933725/pandas-time-series-join-by-closest-time
time_deltas = (data_frame.index - date_time).to_series().reset_index(drop=True).abs()
idx_min = time_deltas.idxmin()
min_delta = time_deltas[idx_min]
re... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
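The `find_closest` logic above can be sketched without pandas; this list-based version (with made-up timestamps) returns the same (index, delta) pair:

```python
from datetime import datetime, timedelta

def find_closest(target, times):
    # index and absolute delta of the timestamp nearest to `target`
    deltas = [abs(t - target) for t in times]
    idx = min(range(len(times)), key=deltas.__getitem__)
    return idx, deltas[idx]

times = [datetime(2017, 1, 1), datetime(2017, 1, 9), datetime(2017, 1, 17)]
idx, delta = find_closest(datetime(2017, 1, 8), times)
print(idx, delta)
```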
So the tiles that are in the same path are very close (24sec) together from the same day. Therefore, we would want to only use one tile and pick the best image.
Tiles that are in different paths are 7 days apart. Therefore, we want to keep tiles from different paths, as they represent unique crossovers.
Investigate PS ... | all_ps_scenes = items_to_scenes(search_pl_api(build_ps_request(aoi['geometry'], start_date, stop_date)))
# How many PS scenes match query?
print(len(all_ps_scenes))
all_ps_scenes[:1] | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
What about overlap? We really only want images that overlap over 20% of the AOI.
Note: we do this calculation in WGS84, the geographic coordinate system supported by geojson. The calculation of coverage expects that the geometries entered are 2D, which WGS84 is not. This will cause a small inaccuracy in the coverage ar... | def aoi_overlap_percent(footprint, aoi):
aoi_shape = sgeom.shape(aoi['geometry'])
footprint_shape = sgeom.shape(footprint)
overlap = aoi_shape.intersection(footprint_shape)
return overlap.area / aoi_shape.area
overlap_percent = all_ps_scenes.footprint.apply(aoi_overlap_percent, args=(aoi,))
all_ps_scen... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
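The overlap calculation reduces to intersection area divided by AOI area. A pure-Python sketch for axis-aligned boxes (the notebook's shapely version handles arbitrary polygons):

```python
def box_overlap_fraction(aoi, footprint):
    # fraction of the AOI box covered by the footprint box;
    # boxes are (xmin, ymin, xmax, ymax)
    ax0, ay0, ax1, ay1 = aoi
    bx0, by0, bx1, by1 = footprint
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    h = max(0.0, min(ay1, by1) - max(ay0, by0))
    return (w * h) / ((ax1 - ax0) * (ay1 - ay0))

print(box_overlap_fraction((0, 0, 10, 10), (5, 0, 20, 10)))  # -> 0.5
```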
Ideally, PS scenes have daily coverage over all regions. How many days have PS coverage and how many PS scenes were taken on the same day? | # ps_scenes.index.to_series().head()
# ps_scenes.filter(items=['id']).groupby(pd.Grouper(freq='D')).agg('count')
# Use PS acquisition year, month, and day as index and group by those indices
# https://stackoverflow.com/questions/14646336/pandas-grouping-intra-day-timeseries-by-date
daily_ps_scenes = ps_scenes.index.to... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Looks like the multiple collects on the same day are just a few minutes apart. They are likely crossovers between different PS satellites. Cool! Since we only want to use one PS image for a crossover, we will choose the best collect for days with multiple collects.
Find Crossovers
Now that we have the PS Orthotiles filte... | def find_crossovers(acquired_time, landsat_scenes):
'''landsat_scenes: pandas dataframe with acquisition time as index'''
closest_idx, closest_delta = find_closest(acquired_time, landsat_scenes)
closest_landsat = landsat_scenes.iloc[closest_idx]
crossover = {'landsat_acquisition': closest_landsat.name,... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Now that we have the crossovers, what we are really interested in is the IDs of the landsat and PS scenes, as well as how much they overlap the AOI. | def get_crossover_info(crossovers, aoi):
def get_scene_info(acquisition_time, scenes):
scene = scenes.loc[acquisition_time]
scene_info = {'id': scene.id,
'thumbnail': scene.thumbnail,
# we are going to use the footprints as shapes so convert to shapes now
... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Next, we filter to overlaps that cover a significant portion of the AOI. | significant_crossovers_info = crossover_info[crossover_info.overlap_percent > 0.9]
print(len(significant_crossovers_info))
significant_crossovers_info | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
Browsing through the crossovers, we see that in some instances, multiple crossovers take place on the same day. Really, we are interested in 'unique crossovers', that is, crossovers that take place on unique days. Therefore, we will look at the concurrent crossovers by day. | def group_by_day(data_frame):
return data_frame.groupby([data_frame.index.year,
data_frame.index.month,
data_frame.index.day])
unique_crossover_days = group_by_day(significant_crossovers_info.index.to_series()).count()
print(len(unique_crossover_days))
... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
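Grouping acquisitions by calendar day, as the groupby above does, can be sketched with a `Counter` over (year, month, day) keys; the timestamps here are made up:

```python
from collections import Counter
from datetime import datetime

acquired = [
    datetime(2017, 3, 1, 10, 5),
    datetime(2017, 3, 1, 10, 8),   # same day, minutes apart
    datetime(2017, 3, 2, 10, 6),
]
per_day = Counter((t.year, t.month, t.day) for t in acquired)
print(per_day)
```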
There are 6 unique crossovers between Landsat 8 and PS that cover over 90% of our AOI between January and August 2017. Not bad! That is definitely enough to perform a comparison.
Display Crossovers
Let's take a quick look at the crossovers we found to make sure that they don't look cloudy, hazy, or have any other qual... | # https://stackoverflow.com/questions/36006136/how-to-display-images-in-a-row-with-ipython-display
def make_html(image):
return '<img src="{0}" alt="{0}"style="display:inline;margin:1px"/>' \
.format(image)
def display_thumbnails(row):
print(row.name)
display(HTML(''.join(make_html(t)
... | jupyter-notebooks/crossovers/ps_l8_crossovers.ipynb | planetlabs/notebooks | apache-2.0 |
04.02 Counting character appearances in "Ze Tian Ji"
We need to collect the names of the characters appearing in "Ze Tian Ji" so that we can count how often each one appears. The main characters' names were scraped from the character list on the novel's Baidu Baike page and saved as a text file named names.txt. | # read the character names
with open('names.txt') as f:
names = [name.strip() for name in f.readlines()]
print(names)
# count character appearances
def find_main_charecters(num = 10):
novel = ''.join(cont)
count = []
for name in names:
count.append([name,novel.count(name)])
count.sort(key = lambda v : v[1],reverse=True)
r... | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
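The counting loop reduces to `str.count` per name followed by a sort; a tiny sketch with hypothetical names and text:

```python
novel = "Alice met Bob. Alice waved. Bob left."   # made-up text
names = ["Alice", "Bob", "Carol"]
count = sorted(((name, novel.count(name)) for name in names),
               key=lambda v: v[1], reverse=True)
print(count)  # -> [('Alice', 2), ('Bob', 2), ('Carol', 0)]
```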
We use ECharts for data visualization | from IPython.display import HTML
chart_header_html = """
<div id="main_charecters" style="width: 800px;height: 600px;" class="chart"></div>
<script>
require.config({
paths:{
echarts: '//cdn.bootcss.com/echarts/3.2.3/echarts.min',
}
});
require(['echarts'],function(ec){
var... | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
We can clearly see that the protagonist of "Ze Tian Ji", Chen Changsheng, appears nearly 16,000 times, followed by Tang Thirty-Six (male) and Xu Yourong (female). From this simple statistic alone we can guess that Tang Thirty-Six is Chen Changsheng's close friend, and that Xu Yourong is very likely his love interest.
Also, among the other characters with similar appearance rates, there are noticeably few female roles. We can roughly infer that "Ze Tian Ji" is a novel with a single female lead; going further, Xu Yourong and Chen Changsheng are probably devoted to each other throughout the story.
Among the top-20 characters by appearance count there is an obvious pattern: the main characters all have very unusual names that no ordinary person would have. In real life nobody is called Tang Thirty-Six, Zhe Xiu, Gou Hanshi, Shang Xingzhou, or Nan Ke, so the author of "Ze Tian Ji" is probably rather chuunibyou.
There is one more interesting thing, ... | import jieba
# read nouns such as sect, realm, and technique names
with open('novelitems.txt') as f:
items = [item.strip() for item in f.readlines()]
for i in items[10:20]:
print(i) | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
We need to add these nouns to jieba's dictionary. | for name in names:
jieba.add_word(name)
for item in items:
jieba.add_word(item) | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
Now we can start training a model with machine learning. | novel_sentences = []
# tokenize the novel; here we only take an arbitrary few lines
# for line in cont:
for line in cont[:6]:
words = list(jieba.cut(line))
novel_sentences.append(words)
novel_sentences[4] | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
Train the model | # train with default parameters
model = gensim.models.Word2Vec(novel_sentences, size=100, window=5, min_count=5, workers=4)
# save the trained model
model.save("zetianjied.model")
# Training takes roughly 20 minutes, depending on hardware. The model itself is too large to put on GitHub.
import gensim
# load the model
model = gensim.models.Word2Vec.load("zetianjied.model") | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
Finding the realm system
First, let's look at how the power realms in "Ze Tian Ji" are divided. At the very beginning the author tells us about a realm called Zuozhao (坐照). So let's use the Word2Vec model trained above to find words similar to it. | # find similar realms
for s in model.most_similar(positive=["坐照"]):
print(s) | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
The big names in "Ze Tian Ji"
Find the characters whose power level is similar to that of the villain, the Demon Lord (魔君) | for s in model.most_similar(positive=["魔君"])[:7]:
print(s) | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
The results show the seven characters whose power level is closest to the Demon Lord's. These characters can be mentioned in the same breath as the final boss, so they must be the big names standing at the very peak of "Ze Tian Ji"'s power hierarchy. In fact, all of them hold the Congsheng (从圣) realm in the original novel.
Couples in "Ze Tian Ji"
The trained model can also find words with analogous relationships: given two characters who are a couple, the model can find other couples in the novel.
Let's test the model first. Since we know that Bie Yanghong (别样红) and Wu Qiongbi (无穷碧) are explicitly described as a couple in the novel, we give the model the relationship between Chen Changsheng and Xu Yourong and see whether it can find the character who is Wu Qiongbi's partner. | d = model.most_similar(positive=['无穷碧', '陈长生'], negative=['徐有容'])[0]
d | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
Now pick an arbitrary character, say Zhe Xiu (折袖). Run the program and see who the machine thinks Zhe Xiu's partner is. | d = model.most_similar(positive=['折袖', '无穷碧'], negative=['别样红'])[0]
d | Practice_03/Python与择天记.ipynb | Alenwang1/Python_Practice | gpl-3.0 |
Fitting (Predicting) Topics Distribution From Raw Text
predict function will predict the topics distributions from a given raw text. The result is a pandas dataframe, with topics ids and confidence thereof. | def text2vec(text):
if text:
return dictionary.doc2bow(TextBlob(text.lower()).noun_phrases)
else:
return []
def tokenised2vec(tokenised):
if tokenised:
return dictionary.doc2bow(tokenised)
else:
return []
def predict(sometext):
vec = text2vec(sometext)
d... | tutorials/Profiling_Reviewers.ipynb | conferency/find-my-reviewers | mit |
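A minimal stand-in for gensim's `Dictionary.doc2bow` (the real one also builds and tracks the vocabulary); `vocab` here is a made-up token-to-id map:

```python
from collections import Counter

def doc2bow(tokens, token2id):
    # map in-vocabulary tokens to sorted (id, count) pairs,
    # ignoring unknown tokens, like gensim's Dictionary.doc2bow
    counts = Counter(t for t in tokens if t in token2id)
    return sorted((token2id[t], c) for t, c in counts.items())

vocab = {"machine learning": 0, "topic model": 1}
print(doc2bow(["machine learning", "machine learning", "new phrase"], vocab))  # -> [(0, 2)]
```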
Generate an Author's Topic Vector
The vector is a topic confidence vector for the author. The length of the vector should be the number of topics in the LDA model. | def update_author_vector(vec, doc_vec):
for topic_id, confidence in zip(doc_vec['topic_id'], doc_vec['confidence']):
vec[topic_id] += confidence
return vec
def get_topic_in_list(model, topic_id):
return [term.strip().split('*') for term in model.print_topic(topic_id).split("+")]
def get_author_top... | tutorials/Profiling_Reviewers.ipynb | conferency/find-my-reviewers | mit |
For an author, we first get all of their previous papers from our database. For each paper, we generate a paper vector. Finally, the sum of all these vectors is the author's vector (i.e. their position) in the interest space. | def profile_author(author_id, model_topics_num=None):
if not model_topics_num:
model_topics_num = model.num_topics
author_vec = np.array([1.0 for i in range(model_topics_num)])
# Initialize with 1s
paper_list = pd.read_sql_query("SELECT * FROM documents_authors WHERE authors_id=" + str(author_id... | tutorials/Profiling_Reviewers.ipynb | conferency/find-my-reviewers | mit |
Save the Library (aka Pool of Scholars)
We will save our profiled authors in a JSON file. It will then used by our matching algorithm. | save_json(authors_lib, "aisnet_600_cleaned.authors.json") | tutorials/Profiling_Reviewers.ipynb | conferency/find-my-reviewers | mit |
Dealing with Conflicts
This file shows how Ply deals with shift/reduce and reduce/reduce conflicts.
The following grammar is ambiguous because it does not specify the precedence of the arithmetical operators:
expr : expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| '(' ... | import ply.lex as lex
tokens = [ 'NUMBER' ]
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = int(t.value)
return t
literals = ['+', '-', '*', '/', '^', '(', ')']
t_ignore = ' \t'
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count('\n')
def t_error(t):
print(f"Illegal character '{t.value[... | Ply/Conflicts.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
We can specify multiple productions in a single rule. In this case, we have used the pass statement,
as we just want to generate the shift/reduce conflicts that are associated with this grammar.
def p_expr(p):
"""
expr : expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
... | def p_expr_plus(p):
"expr : expr '+' expr"
p[0] = ('+', p[1], p[3])
def p_expr_minus(p):
"expr : expr '-' expr"
p[0] = ('-', p[1], p[3])
def p_expr_mult(p):
"expr : expr '*' expr"
p[0] = ('*', p[1], p[3])
def p_expr_div(p):
"expr : expr '/' expr"
p[0] = ('/', p[1], p[3])... | Ply/Conflicts.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
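The parser actions above build nested tuples like `('+', left, right)`; a small evaluator (not part of the original notebook) makes those abstract syntax trees concrete:

```python
def eval_ast(node):
    # evaluate nested ('op', lhs, rhs) tuples produced by the parser
    if isinstance(node, tuple):
        op, lhs, rhs = node
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        return ops[op](eval_ast(lhs), eval_ast(rhs))
    return node  # a plain number

print(eval_ast(('+', ('*', 2, 3), 4)))  # -> 10
```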
We define p_errorin order to prevent a warning. | def p_error(p):
if p:
print(f'Syntax error at {p.value}.')
else:
print('Syntax error at end of input.') | Ply/Conflicts.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
Let's look at the action table that is generated. Note that all conflicts are resolved in favour of shifting. | !type parser.out
!cat parser.out
%run ../ANTLR4-Python/AST-2-Dot.ipynb | Ply/Conflicts.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
The function test(s) takes a string s as its argument and tries to parse this string. If all goes well, an abstract syntax tree is returned.
If the string can't be parsed, an error message is printed by the parser. | def test(s):
t = yacc.parse(s)
d = tuple2dot(t)
display(d)
return t | Ply/Conflicts.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
The next example shows that this parser does not produce an abstract syntax tree that reflects the precedences of the arithmetical operators. | test('2^3*4+5')
1.
Create a DecisionTreeClassifier with default settings and measure its quality with cross_val_score. This value is the answer for part 1. | clf = tree.DecisionTreeClassifier()
x_val_score = cross_val_score(clf, X, y, cv=10).mean()
write_answer(x_val_score, 'answer_1.txt')
print(x_val_score) | src/cours_2/week_4/bagging_and_rand_forest.ipynb | agushman/coursera | mit |
2.
Use BaggingClassifier from sklearn.ensemble to train bagging over a DecisionTreeClassifier. Use the default BaggingClassifier parameters, setting only the number of trees to 100.
The classification quality of the new model is the answer for part 2. Note how the quality of the compo... | bagging_clf = ensemble.BaggingClassifier(clf, n_estimators=100)
x_val_score = cross_val_score(bagging_clf, X, y, cv=10).mean()
write_answer(x_val_score, 'answer_2.txt')
print(x_val_score) | src/cours_2/week_4/bagging_and_rand_forest.ipynb | agushman/coursera | mit |
3.
Now study the BaggingClassifier parameters and choose them so that each base algorithm is trained not on all d features but on $\sqrt d$ random features. The quality of the resulting classifier is the answer for part 3. The square root of the number of features is a commonly used heuristic in classification problems; in ... | stoch_train_len = int(sqrt(X.shape[1]))
bagging_clf = ensemble.BaggingClassifier(clf, n_estimators=100, max_features=stoch_train_len)
x_val_score = cross_val_score(bagging_clf, X, y, cv=10).mean()
write_answer(x_val_score, 'answer_3.txt')
print(x_val_score) | src/cours_2/week_4/bagging_and_rand_forest.ipynb | agushman/coursera | mit |
4.
Finally, let's try choosing random features not once per whole tree, but when building each node of the tree. This is easy to do: remove the random feature-subset selection from BaggingClassifier and add it to DecisionTreeClassifier. Which parameter is responsible for this can be found in the documentati... | stoch_clf = tree.DecisionTreeClassifier(max_features=stoch_train_len)
bagging_clf = ensemble.BaggingClassifier(stoch_clf, n_estimators=100)
x_val_score_own = cross_val_score(bagging_clf, X, y, cv=10).mean()
write_answer(x_val_score_own, 'answer_4.txt')
print(x_val_score_own) | src/cours_2/week_4/bagging_and_rand_forest.ipynb | agushman/coursera | mit |
5.
The classifier obtained in item 4 is bagging over randomized trees (in which a random subset of features is chosen when building each node and the split is searched only among them). This corresponds exactly to the Random Forest algorithm, so why not compare its quality with Rand... | random_forest_clf = ensemble.RandomForestClassifier(random_state=stoch_train_len, n_estimators=100)
x_val_score_lib = cross_val_score(random_forest_clf, X, y, cv=10).mean()
print(x_val_score_lib)
answers = '2 3 4 7'
write_answer(answers, 'answer_5.txt') | src/cours_2/week_4/bagging_and_rand_forest.ipynb | agushman/coursera | mit |
A very simple pipeline to show how registers are inferred. | class SimplePipelineExample(SimplePipeline):
def __init__(self):
self._loopback = pyrtl.WireVector(1, 'loopback')
super(SimplePipelineExample, self).__init__()
def stage0(self):
self.n = ~ self._loopback
def stage1(self):
self.n = self.n
def stage2(self):
self.... | ipynb-examples/example5-instrospection.ipynb | UCSBarchlab/PyRTL | bsd-3-clause |
Simulation of the core | simplepipeline = SimplePipelineExample()
sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
for cycle in range(15):
sim.step({})
sim_trace.render_trace() | ipynb-examples/example5-instrospection.ipynb | UCSBarchlab/PyRTL | bsd-3-clause |
Define Mosaic Parameters
In this tutorial, we use the Planet mosaic tile service. There are many mosaics to choose from. For a list of mosaics available, visit https://api.planet.com/basemaps/v1/mosaics.
We first build the url for the xyz basemap tile service, then we add authorization in the form of the Planet API key... | # Planet tile server base URL (Planet Explorer Mosaics Tiles)
mosaic = 'global_monthly_2018_02_mosaic'
mosaicsTilesURL_base = 'https://tiles.planet.com/basemaps/v1/planet-tiles/{}/gmap/{{z}}/{{x}}/{{y}}.png'.format(mosaic)
mosaicsTilesURL_base
# Planet tile server url with auth
planet_api_key = os.environ['PL_API_KEY'... | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
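A minimal sketch of how the authenticated tile URL is assembled; the `api_key` query parameter is what the code above appends, and the key value here is a placeholder:

```python
mosaic = 'global_monthly_2018_02_mosaic'
base = ('https://tiles.planet.com/basemaps/v1/planet-tiles/'
        '{}/gmap/{{z}}/{{x}}/{{y}}.png').format(mosaic)

planet_api_key = 'YOUR_API_KEY'  # placeholder; the notebook reads PL_API_KEY from the environment
tiles_url = '{}?api_key={}'.format(base, planet_api_key)
print(tiles_url)
```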
Prepare label maker config file
This config file is pulled from the label-maker repo README.md example and then customized to utilize the Planet mosaic. The imagery url is set to the Planet mosaic url and the zoom is changed to 15, the maximum zoom supported by the Planet tile services.
See the label-maker README.md fi... | # create data directory
data_dir = os.path.join('data', 'label-maker-mosaic')
if not os.path.isdir(data_dir):
os.makedirs(data_dir)
# label-maker doesn't clean up, so start with a clean slate
!cd $data_dir && rm -R *
# create config file
bounding_box = [1.09725, 6.05520, 1.34582, 6.30915]
config = {
"country":... | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Visualize Mosaic at config area of interest | # calculate center of map
bounds_lat = [bounding_box[1], bounding_box[3]]
bounds_lon = [bounding_box[0], bounding_box[2]]
def calc_center(bounds):
return bounds[0] + (bounds[1] - bounds[0])/2
map_center = [calc_center(bounds_lat), calc_center(bounds_lon)] # lat/lon
print(bounding_box)
print(map_center)
# create a... | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Download OSM tiles
In this step, label-maker downloads the OSM vector tiles for the country specified in the config file.
According to Label Maker documentation, these can be visualized with mbview. So far I have not been successful getting mbview to work. I will keep on trying and would love to hear how you got this t... | !cd $data_dir && label-maker download | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Create ground-truth labels from OSM tiles
In this step, the OSM tiles are chipped into label tiles at the zoom level specified in the config file. Also, a geojson file is created for visual inspection. | !cd $data_dir && label-maker labels | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Visualizing classification.geojson in QGIS gives:
Although Label Maker doesn't tell us which classes line up with the labels (see the legend in the visualization for labels), it looks like the following relationships hold:
- (1,0,0) - no roads or buildings
- (0,1,1) - both roads and buildings
- (0,0,1) - only building... | # !cd $data_dir && label-maker preview -n 3
# !ls $data_dir/data/examples
# for fclass in ('Roads', 'Buildings'):
# example_dir = os.path.join(data_dir, 'data', 'examples', fclass)
# print(example_dir)
# for img in os.listdir(example_dir):
# print(img)
# display(Image(os.path.join(example_... | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Other than the fact that 4 tiles were created instead of the specified 3, the results look pretty good! All Road examples have roads, and all Building examples have buildings.
Create image tiles
In this step, we invoke label-maker images, which downloads and chips the mosaic into tiles that match the label tiles.
Inter... | !cd $data_dir && label-maker images
# look at three tiles that were generated
tiles_dir = os.path.join(data_dir, 'data', 'tiles')
print(tiles_dir)
for img in os.listdir(tiles_dir)[:3]:
print(img)
display(Image(os.path.join(tiles_dir, img))) | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Package tiles and labels
Convert the image and label tiles into train and test datasets. | # will not be able to open image tiles that weren't generated because the label tiles contained no classes
!cd $data_dir && label-maker package | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
Check Package
Let's load the packaged data and look at the train and test datasets. | data_file = os.path.join(data_dir, 'data', 'data.npz')
data = np.load(data_file)
for k in data.keys():
print('data[\'{}\'] shape: {}'.format(k, data[k].shape)) | jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb | planetlabs/notebooks | apache-2.0 |
The best feature to split on first is x3
In the tree below, you will see that starting from x3 = 1, the depth of the tree is 3. | decision_tree_model.show()
decision_tree_model.show(view="Tree") | machine_learning/4_clustering_and_retrieval/lecture/week3/.ipynb_checkpoints/quiz-Decision Trees-checkpoint.ipynb | tuanavu/coursera-university-of-washington | mit |
Question 3
<img src="images/lec3_quiz03.png">
Screenshot taken from Coursera
<!--TEASER_END--> | # Accuracy
print decision_tree_model.evaluate(x)['accuracy'] | machine_learning/4_clustering_and_retrieval/lecture/week3/.ipynb_checkpoints/quiz-Decision Trees-checkpoint.ipynb | tuanavu/coursera-university-of-washington | mit |
Here are the main Python imports for solving the ODEs, plotting and other data analysis. Import modules as needed. | # Python module imports
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
from scipy.integrate import odeint
from IPython.display import Image
from IPython.core.display import HTML
from scipy.optimize import minimize
from scipy.optimize import curve_fit
import statsmodels.formula.api as sm
%m... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
The following numerically integrates the reaction rate to give the exact impurity profile. Nothing needs to be modified here by the user. Scroll down to see calculation outputs and model comparisons. | # gas constant kcal/mol K
R = 0.00198588
# define the domain for the ODEs solution
ndays = np.max(days) + 365*10
dt = 5000.
t = np.arange(0, ndays*(24*3600), dt)
npts = t.shape[0]
# differential equation solution variable
cols = [(T, C) for T in Temperatures for C in ['P','I','D']]
concentrations = pd.DataFrame({col:... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
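A stripped-down sketch of the same odeint pattern, for a single consecutive first-order scheme P → I → D (the rate constants are chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.integrate import odeint

k1, k2 = 1e-3, 5e-4  # illustrative rate constants, 1/s

def rhs(c, t):
    # consecutive first-order reactions: P -> I -> D
    P, I, D = c
    return [-k1 * P, k1 * P - k2 * I, k2 * I]

t = np.linspace(0, 5e3, 200)     # seconds
conc = odeint(rhs, [1.0, 0.0, 0.0], t)
print(conc[-1])                  # final [P, I, D]
```

Total mass is conserved along the trajectory, which is a quick sanity check on the integration.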
Below are the calculated results for the user-defined kinetics | t = _dat['conc']['t'].values
t_days = t / (24*3600)
max_days_index = np.argmin(np.abs(t_days - (np.max(_dat['exppts'].index.levels[1].values)+1)))
t_days = t_days[:max_days_index]
c = _dat['conc'].iloc[:max_days_index]
max_conc = c.iloc[:, c.columns.get_level_values(1)=='P'].max().max()
fig = plt.figure(figsize=(14,1... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Concentration profiles</b>: The solid lines are the exact concentrations of the components $P$, $I$, $D$ as a function of time calculated from the defined reaction parameters. The filled symbols denote the experimental time points (defined by the user in the parameter list). | _dat['exppts'] | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Theoretical concentration measurements</b>: The above tabulates the theoretical impurity concentrations, $P$, at the measurement time points. | _dat['setpt']
t = _dat['conc']['t'].values
c = _dat['conc']
t_days = t / (24*3600)
max_conc = c.iloc[:, c.columns.get_level_values(1)=='P'].max().max()
fig = plt.figure(figsize=(14,10))
for i, T in enumerate(Temperatures):
ax = fig.add_subplot(2,2,i+1)
ax.plot(t_days, c[T,'P'], color='red', label='P')
ax... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Concentration profiles</b>: The plots above show the concentration profiles over a duration sufficient that the impurity concentrations reaches the user defined set point (denoted by the red dashed line). | _dat['predictions'] | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Shelf life</b>: The above tabulates the theoretical shelf life in years calculated from the user defined kinetic parameters. The value for $25~ ^\circ C$ is the most useful as this is the number that the following models will attempt to predict.
Shelf life prediction assuming zero order kinetics
The actual reaction ... | fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
r = _dat['exppts'].iloc[_dat['exppts'].index.get_level_values(1)!=0].copy()
r['y'] = np.log(np.divide(r['P'].values, r.index.get_level_values(1).astype('f8').values*24*3600+1))
r['x'] = 1./(r.index.get_level_values(0).astype('f8').values + 273)
ax.plot(r.x, r.y,... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
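The linearization behind this regression is $\ln(P/t) = \ln A - E/(RT)$ for a zero-order reaction; a sketch on synthetic measurements (the Arrhenius parameters are invented for illustration) recovers them with a least-squares line fit:

```python
import numpy as np

R = 0.00198588               # gas constant, kcal/(mol K)
A_true, E_true = 1e8, 20.0   # invented Arrhenius parameters

T = np.array([313., 323., 333., 343.])      # temperatures, K
t = 30 * 24 * 3600.                         # 30 days in seconds
P = A_true * np.exp(-E_true / (R * T)) * t  # zero-order impurity growth

# regress ln(P/t) against 1/T: slope = -E/R, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, np.log(P / t), 1)
E_fit = -slope * R
A_fit = np.exp(intercept)
print(E_fit, A_fit)
```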
The above regression provides the kinetic parameters necessary to calculate the predicted shelf-life assuming the mechanism is zeroth-order. | temp = _dat['const']['R']*(_dat['predictions'].index.astype('f8').values+273)
k = A * np.exp(-E/temp)
_dat['predictions']['zero_order'] = _dat['setpt'] / k * 1/(3600*24*365)
_dat['predictions'] | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Table of predicted results</b> The above table compares the theoretical shelf-life in years (calculated from the exact reaction mechanism and user defined parameters) to the predicted shelf-life assuming zero order kinetics.
Shelf life prediction assuming first-order kinetics
Assuming a reaction mechanism as:
In th... | D = Do + Io - _dat['exppts']['P']
data = np.log(D/(Do+Io)).to_frame('ln(D/Do)')
data['t'] = data.index.get_level_values(1)*24*3600
data['k'] = [0]*data['t'].shape[0]
colors = ['blue','green','red','orange']
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
for i, T in enumerate(_dat['exppts'].index.levels[0]):... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
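For the first-order mechanism, $\ln(D/D_0) = -kt$, so the rate constant at each temperature is the negative slope of $\ln(D/D_0)$ versus $t$; a sketch with an invented rate constant illustrates the fit:

```python
import numpy as np

k_true = 2e-8                                   # illustrative rate constant, 1/s
t = np.array([10., 30., 60., 90.]) * 24 * 3600  # measurement times in seconds
Do = 100.0
D = Do * np.exp(-k_true * t)                    # remaining concentration

slope, intercept = np.polyfit(t, np.log(D / Do), 1)
k_fit = -slope
print(k_fit)
```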
<b>Table of the kinetic parameter</b>, $k$ regressed from data at the different experimental temperatures.
Once the kinetic parameter, $k$, is estimated at the different temperatures, the Arrenius parameters, $A$ and $E$, can be obtained by plotting the logarithm of the rate constant at each experimental temperature a... | _dat['exppts']
# variable data is used from above cell
df = pd.DataFrame({'x': 1./(data.index.levels[0]+273).values, 'y': np.log(data.loc[(slice(None),0),'k'])})
reg = sm.ols(formula='y~x', data=df).fit()
A = np.exp(reg.params['Intercept'])
E = -1 * R * reg.params['x']
fig = plt.figure(figsize=(8,6))
ax = fig.add_su... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Figure $\ln k ~\mathrm{vs}~\frac{1}{T}$:</b> to estimate the activation energy and the collision factor to use as initial guesses in the subsequent nonlinear regression.
The estimated Arrhenius parameters can now be used to estimate shelf-life. | temp = _dat['const']['R']*(_dat['predictions'].index.astype('f8').values+273)
k = A * np.exp(-E/temp)
_dat['predictions']['first_order'] = _dat['setpt'] / k * 1/(3600*24*365)
_dat['predictions'] | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
Method 2 - non-linear regression
According to a paper by Fung (Statistical prediction of drug stability based on nonlinear parameter estimation, Fung et al., J. Pharm. Sci., Vol. 73, No. 5, pp. 657-662, 1984), improved confidence can be obtained by performing a non-linear regression of all the data simultaneously, instead of... | D = Do + Io - _dat['exppts']['P'].values
df = pd.DataFrame(np.log(D/(Do+Io)), columns=['D*'])
df['t'] = _dat['exppts'].index.get_level_values(1)*24*3600
df['T'] = _dat['exppts'].index.get_level_values(0) + 273.
df = df.loc[df['t']!=0]
def error(p):
A = p[0]
E = p[1]
return np.sum((df['D*']/df['t'] + A * np.... | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
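The same `scipy.optimize.minimize` pattern, shown on a toy least-squares objective with known parameters (the data here are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 20)
y = 3.0 * x + 1.0                  # synthetic data from known slope/intercept

def sq_error(p):
    a, b = p
    return np.sum((y - (a * x + b)) ** 2)

res = minimize(sq_error, x0=[1.0, 0.0])
print(res.x)                       # should approach [3.0, 1.0]
```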
<b>Figure of square error</b> indicates that the objective function likely has some very shallow gradients near the minimum, thus inhibiting convergence to a unique value. | temp = _dat['const']['R']*(_dat['predictions'].index.astype('f8').values+273)
k = A * np.exp(-E/temp)
_dat['predictions']['first_order_nonlinear'] = _dat['setpt'] / k * 1/(3600*24*365)
_dat['predictions'] | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
<b>Table of prediction comparison</b> suggests that zero_order was actually the most accurate of the models. It should be noted that the convergence of the non-linear regression is questionable. The under-prediction of shelf-life is to be expected, because of the faster reaction of the amorphous form that gets consumed ... | Co = Do + Io
C = Co - _dat['exppts']['P'].values
t = _dat['exppts'].index.get_level_values(1).values*24*3600.
T = _dat['exppts'].index.get_level_values(0).values + 273.
C = C[t!=0]
T = T[t!=0]
t = t[t!=0]
Sp = _dat['setpt']
R = _dat['const']['R'] | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
Zero order (method of King, Kung, Fung)
Define the objective function for a zero-order reaction. | def error(p):
t_298 = p[0]
E = p[1]
k = np.exp(E/R * (1/298. - 1/T))
k = Sp / t_298 * k
err = C - Co * (1 - k * t)  # residual between measured and zero-order model
return np.sum(err**2) | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
Perform the non-linear regression over a range of initial conditions. | t_298 = 10.*365*24*3600
E = 10.
xo = np.array([t_298,E])
opt = []
for d in np.arange(.1,10,.1):
opt.append(minimize(error, xo*d, tol=1e-30)) | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |
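Scanning a range of initial conditions like this is a simple multi-start strategy against local minima; a sketch on a tilted double-well (the function is invented for illustration) shows why the best result over all starts is kept:

```python
import numpy as np
from scipy.optimize import minimize

# tilted double well: local minima near x = +sqrt(3) and x = -sqrt(3),
# with the global minimum on the negative side
def f(x):
    return (x[0] ** 2 - 3.0) ** 2 + 0.5 * x[0]

results = [minimize(f, [x0]) for x0 in np.arange(-3.0, 3.0, 0.5)]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```

A single start from the wrong basin would return the shallower minimum; keeping the lowest objective over all starts avoids that.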
Plot the error of the regression as a function of the initial conditions. | e2 = np.array([error(r.x) for r in opt])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(e2)
ax.set_ylabel(r"$\sum \mathrm{error}^2$")
ax.set_xlabel("initial condition index")
t_298, E = (opt[e2.argmin()]).x
print "optimal t_298 = %.1f y" %(t_298/(365*24*3600))
print "optimal E = %.2f kcal/mol K" %E | notebooks/Impurity Prediction Example 1.ipynb | brentjm/Impurity-Predictions | bsd-2-clause |