markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
2 starbursts | s1=s.sygma(starbursts=[0.1,0.1],iolevel=1,mgal=1e11,dt=1e7,imf_type='salpeter',
imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',
sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table=... | Warning - Use isotopes with care.
['H-1']
Use initial abundance of yield_tables/iniabu/iniab_h1.ppn
Number of timesteps: 3.0E+01
### Start with initial metallicity of 1.0000E-04
###############################
SYGMA run in progress..
################## Star formation at 1.000E+07 (Z=1.0000E-04) of 0.1
Mass locked ... | BSD-3-Clause | regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb | katewomack/NuPyCEE |
imf_yields_range - include yields only in this mass range | s0=s.sygma(iolevel=0,iniZ=0.0001,imf_bdys=[0.01,100],imf_yields_range=[1,100],
hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False,
sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn') | SYGMA run in progress..
SYGMA run completed - Run time: 0.31s
| BSD-3-Clause | regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb | katewomack/NuPyCEE |
vgg | def make_vgg():
layers = []
in_channels = 3
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256,
256, 'MC', 512, 512, 512, 'M', 512, 512, 512]
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
elif v == 'MC':
layers += [nn.MaxPool2... | ModuleList(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel... | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
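The truncated `make_vgg` above builds its layers by walking a `cfg` list, where integers mean a 3×3 convolution (followed by ReLU) and `'M'`/`'MC'` mean max-pooling. A framework-free sketch of that loop, emitting layer descriptions instead of real PyTorch modules so the channel bookkeeping and the `'M'`/`'MC'` branches are visible end to end (illustrative only, not the actual `nn.ModuleList`):

```python
def make_vgg_plan(cfg=(64, 64, 'M', 128, 128, 'M', 256, 256,
                       256, 'MC', 512, 512, 512, 'M', 512, 512, 512)):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers.append('MaxPool2d(k=2, s=2)')
        elif v == 'MC':
            # 'MC' is a max-pool with ceil_mode=True, so odd-sized feature
            # maps round up instead of down (e.g. 75 -> 38)
            layers.append('MaxPool2d(k=2, s=2, ceil_mode=True)')
        else:
            layers.append('Conv2d(%d, %d, k=3, p=1) + ReLU' % (in_channels, v))
            in_channels = v
    return layers

plan = make_vgg_plan()
```

Note how `in_channels` is carried forward after every convolution, which is what makes `Conv2d(3, 64, ...)` the first layer and `Conv2d(64, 128, ...)` appear right after the first pool, matching the `ModuleList` printout above.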
extras | def make_extras():
layers = []
in_channels = 1024
cfg = [256, 512, 128, 256, 128, 256, 128, 256]
layers += [nn.Conv2d(in_channels, cfg[0], kernel_size=(1))]
layers += [nn.Conv2d(cfg[0], cfg[1], kernel_size=(3), stride=2, padding=1)]
layers += [nn.Conv2d(cfg[1], cfg[2], kernel_size=(1))]
... | ModuleList(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
(3): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(4): Conv2d(256, 128, kernel_size=(1,... | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
loc conf |
def make_loc_conf(num_classes=21, bbox_aspect_num=[4, 6, 6, 6, 4, 4]):
loc_layers = []
conf_layers = []
loc_layers += [nn.Conv2d(512, bbox_aspect_num[0]
* 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(512, bbox_aspect_num[0]
* num_c... | ModuleList(
(0): Conv2d(512, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Conv2d(1024, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): Conv2d(256, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): Conv... | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
L2Norm | class L2Norm(nn.Module):
def __init__(self, input_channels=512, scale=20):
super(L2Norm, self).__init__()
self.weight = nn.Parameter(torch.Tensor(input_channels))
self.scale = scale
self.reset_parameters()
self.eps = 1e-10
def reset_parameters(self):
init.c... | _____no_output_____ | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
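The body of `L2Norm` is truncated above, but what the layer computes at each spatial position is standard: divide the channel vector by its L2 norm (plus a small `eps` for stability), then multiply by a learned per-channel weight initialized to `scale` (20). A scalar-weight sketch of that arithmetic, not the tensor implementation:

```python
import math

def l2norm(channel_values, scale=20.0, eps=1e-10):
    # normalize the channel vector to unit L2 norm, then rescale by `scale`
    norm = math.sqrt(sum(v * v for v in channel_values) + eps)
    return [scale * v / norm for v in channel_values]

out = l2norm([3.0, 4.0])  # norm is ~5, so the values become ~[12.0, 16.0]
```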
SSD | # SSD
class SSD(nn.Module):
def __init__(self, phase, cfg):
super(SSD, self).__init__()
self.phase = phase # train or inference
self.num_classes = cfg["num_classes"]
# SSD
self.vgg = make_vgg()
self.extras = make_extras()
self.L2Norm = L2Norm()
sel... | _____no_output_____ | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
Non-Maximum Suppression | # Non-Maximum Suppression
def nm_suppression(boxes, scores, overlap=0.45, top_k=200):
count = 0
keep = scores.new(scores.size(0)).zero_().long()
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
area = torch.mul(x2 - x1, y2 - y1)
tmp_x1 = boxes.new()
tmp_y1 = bo... | _____no_output_____ | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
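The truncated `nm_suppression` above follows the standard greedy scheme: visit boxes in descending score order and drop any box whose intersection-over-union with an already-kept box exceeds `overlap`. A list-based sketch of that logic (the original works on tensors with the `tmp_*` buffers; this version only shows the algorithm):

```python
def iou(a, b):
    # intersection-over-union of two [x1, y1, x2, y2] boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, overlap=0.45, top_k=200):
    # greedy NMS: keep the highest-scoring boxes, suppress near-duplicates
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= overlap for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the second box overlaps the first and is dropped
```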
Detect |
class Detect(Function):
def __init__(self, conf_thresh=0.01, top_k=200, nms_thresh=0.45):
self.softmax = nn.Softmax(dim=-1)
self.conf_thresh = conf_thresh
self.top_k = top_k
self.nms_thresh = nms_thresh
def forward(self, loc_data, conf_data, dbox_list):
... | _____no_output_____ | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
SSD |
class SSD(nn.Module):
def __init__(self, phase, cfg):
super(SSD, self).__init__()
self.phase = phase # train or inference
self.num_classes = cfg["num_classes"]
# SSD
self.vgg = make_vgg()
self.extras = make_extras()
self.L2Norm = L2Norm()
self.l... | _____no_output_____ | MIT | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced |
Download TEF test data

> Data used for Example 2 | #export
# Code from xarray
try:
import pooch
except ImportError as e:
raise ImportError(
"tutorial.download_test_data depends on pooch to download and manage datasets."
" To proceed please install pooch."
) from e
#export
def download_test_data(downloadpath):
filepath_1d_winter = pooch.r... | _____no_output_____ | MIT | 04_download_test_data.ipynb | florianboergel/pyTEF |
Radioactive Decay Interactives

Interactive Figure 1: Model of Radioactive Decay

This first figure takes a population of 900 atoms and models their radioactive decay. This first interactive is designed to allow you to explore what happens during radioactive decay of some isotope. An *isotope* refers to a particular vers... | from IPython.display import display
import numpy as np
import bqplot as bq
import ipywidgets as widgets
import random as random
import pandas as pd
import number_formatting as nf
from math import ceil, floor, log10
## Originally developed June 2018 by Samuel Holen
##
## Edits by Juan Cabanela October 2018 to allow chan... | _____no_output_____ | MIT | Interactives/Radioactivity.ipynb | greywormz/Astrophysics |
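The statistics that Interactive Figure 1 simulates atom-by-atom follow the half-life law: after time t, a fraction (1/2)^(t / half-life) of the parent atoms is expected to remain. A deterministic sketch of what the figure models stochastically:

```python
def atoms_remaining(n0, t, half_life):
    # expected number of parent atoms left after time t
    return n0 * 0.5 ** (t / half_life)

left = atoms_remaining(900, t=1.0, half_life=1.0)  # one half-life: 450.0 left
```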
Interactive Figure 2: Geochron Plot

Assuming a non-radiogenic isotope (that is, an isotope that is not the result of radioactive decay) that also will not decay, its amount should be constant. This means that for different mineral samples we can measure the ratio of parent isotope versus the non-radiogenic isotope ($... | ##
## Define the various isotopes we want to consider
##
isotope_info = pd.DataFrame(columns=['Name', 'PName', 'PAbbrev', 'DName', 'DAbbrev', 'DiName', 'DiAbbrev', 'HalfLife', 'HLUnits'])
isotope_info['index'] = ['generic', 'Rb87']
isotope_info['Name'] = ['Generic', 'Rb-87->Sr-87']
isotope_info['PName'] = ['Parent', '... | _____no_output_____ | MIT | Interactives/Radioactivity.ipynb | greywormz/Astrophysics |
Load model | bert_model.train() | _____no_output_____ | MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
Original text | print(input_text) | b'Dollar gains on Greenspan speech\n\nThe dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.\n\nAnd Alan Greenspan highlighted the US government\'s willingness to curb spending and rising household savings as factors whi... | MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
Actual summarized text from dataset | actual_summary = data[DATA_SUMMARIZED][TEST_INDEX]
print(actual_summary) | b'The dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.China\'s currency remains pegged to the dollar and the US currency\'s sharp falls in recent months have therefore made Chinese export prices highly competitive.Mark... | MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
Summarized text | summary = bert_model.predict(input_text)
print(summary) | b'Dollar gains on Greenspan speech\n\nThe dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.\n\nAnd Alan Greenspan highlighted the US government\'s willingness to curb spending and rising household savings as factors whi... | MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
Evaluation | reference_data = data[DATA_ORIGINAL].sample(n=10, random_state=42)
reference_data
# candidate_data = reference_data.apply(lambda x: bert_model.predict(x))
candidate_data = reference_data.map(bert_model.predict)
candidate_data | _____no_output_____ | MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
Score for 10 samples | precision, recall, f1 = bert_model.evaluation(preds=candidate_data, refs=reference_data)
print(f'Precision: {precision}')
print(f'Recall: {recall}')
print(f'F1: {f1}') | calculating scores...
computing bert embedding.
| MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
Score for 100 samples | precision, recall, f1 = bert_model.evaluation(preds=preds, refs=refs)
print(f'Precision: {precision}')
print(f'Recall: {recall}')
print(f'F1: {f1}') | calculating scores...
computing bert embedding.
| MIT | notebooks/BertModel.ipynb | dwszai/news-summarizer |
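The three numbers returned by the evaluation are related: the F1 reported by BERTScore is the harmonic mean of precision and recall. A minimal sketch of that relationship:

```python
def f1_score(precision, recall):
    # harmonic mean; defined as 0 when both inputs are 0 to avoid division by zero
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.8, 0.6)  # sits closer to the lower of the two numbers
```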
Execution

This notebook explains how PartiQL executes, with some discussion of its implementation.

Data model

As discussed in [02-data-model.ipynb](02-data-model.ipynb), the only allowed types are PLURP (PLUR in this demo). In a rowwise, dynamically typed interpreter, this means there are only values, records, and list... | import data
arrays = data.RecordArray({
"x": data.PrimitiveArray([0.1, 0.2, 0.3]),
"y": data.PrimitiveArray([1000, 2000, 3000]),
"table1": data.ListArray([0, 3, 3], [3, 3, 5], data.RecordArray({
"a": data.PrimitiveArray([1.1, 2.2, 3.3, 4.4, 5.5]),
"b": data.PrimitiveArray([100, 200,... | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Inputs and outputs

Next, we'll run a few simple examples to show what the runtime engine requires and produces. If you have not already done so, install the [Lark parser](https://github.com/lark-parser/lark#readme) and Matplotlib. | !pip install lark-parser matplotlib
import interpreter | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
The interpreter takes a source string and a `data.ListInstance` of `data.RecordInstances` as input. It returns any newly assigned variables (as a `data.ListInstance` of `data.RecordInstances`) and the hierarchy of counters, which may be a single counter (total number of events) or a directory of histograms. | output, counter = interpreter.run(r"""
z = x + 1
hist z by regular(100, 1, 1.5) named "h"
""", instances)
output
counter.allkeys()
counter["h"].mpl() | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Variables assigned in nested blocks will *not* be returned, but histograms defined in these blocks will. | output, counter = interpreter.run(r"""
table1 with {
z = a + 1
hist z by regular(100, 0, 10) named "h"
}
""", instances)
output
counter["h"].mpl() | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
To get nested quantities as output, be sure to assign them to a top-level variable. | output, counter = interpreter.run(r"""
top = table1 with {
z = a + 1
hist z by regular(100, 0, 10) named "h"
}
""", instances)
output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Variables assigned at the top-level of the source are a functional return value, but histograms and counters are a side-effect. Histograms nested in a `cut` or `vary` are nested in their counters. | output, counter = interpreter.run(r"""
cut x > 0.1 named "c" {
table2 with {
hist c by regular(100, 0, 10) named "h"
}
}
""", instances)
counter.allkeys()
counter["c"]
counter["c/h"].mpl() | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Scalar expressions

Trivial assignment is one way to pass values from input to output. | output, counter = interpreter.run(r"""
x = x
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Missing values are handled as unions of records that have a field with records that lack it. They're only passed through an expression if safe navigation operators (`?`) are used (or explicit checks for `has`). | output, counter = interpreter.run(r"""
a = if x > 0.1 then 100
b = if x > 0.1 then 100 else 200
c = ?a
d = if has a then 100 else 200
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Temporary variables don't appear if they're in a curly-brackets scope. | output, counter = interpreter.run(r"""
a = {
tmp1 = x + 1
tmp2 = x * 100
tmp2**2
}
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Set operations

The `as` operator turns any set into a set of records nested within a single field. In the example below, `out1` does not have a record structure, but `out2` does—the same data is packed in a field named `nested`. | output, counter = interpreter.run(r"""
out1 = stuff
out2 = stuff as nested
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
It is also a special case of sampling without replacement. Below, we see that each record has an `x` and a `y` field, and these are unique pairs of the original data, per-event: `(1, 2)`, `(1, 3)`, `(2, 3)` in the first event and only `(4, 5)` in the third event. (The second event is empty.) | output, counter = interpreter.run(r"""
out = stuff as (x, y)
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
The natural alternative to sampling without replacement is sampling with replacement, which can be built from `as` and `cross` (cross-join). The cross-join computes a Cartesian product, just as it does in SQL. Promoting each set of integers into sets of records makes them mergeable (`x` and `y` go into the same output re... | output, counter = interpreter.run(r"""
out = stuff as x cross stuff as y
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
For added flair, we can `group by` either `x` or `y` to build sets of sets. The Cartesian grid pattern should be more clear. | output, counter = interpreter.run(r"""
out = stuff as x cross stuff as y group by x
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
The cross-join/Cartesian product is one kind of join: two sets of records are combined to make one set of records. The output records have fields from both input record types and the set is derived from a combinatorial rule, in this case: "for all in the left set, merge with each of the right set." If we didn't give t... | output, counter = interpreter.run(r"""
out = stuff as x cross stuff as x
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
In the above, it looks like these sets contain multiple values of `{"x": 4}` and `{"x": 5}`, but their index keys are different. (They are different ordered tuples of elements from the left and right sets.) | [[y.row for y in x["out"]] for x in output] | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Cross-join is, in a sense, the simplest join because it creates a new index: the `CrossRef` built from the left and right indexes is distinct from either input index, and the only way to get another one is to `cross` sets with those indexes again. The next-simplest is `join`, which requires the left and right sets to have the same index and returns a set with th... | output, counter = interpreter.run(r"""
out = stuff as x join stuff as y
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
To show what this is doing, let's filter the left and right sets with `where`. Even and odd values have no overlap. | output, counter = interpreter.run(r"""
evens = stuff as x where x % 2 == 0
odds = stuff as x where x % 2 == 1
out = evens join odds
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
But the overlap of evens and `2 <= x and x <= 4` is `2` and `4`. | output, counter = interpreter.run(r"""
evens = stuff as x where x % 2 == 0
middle = stuff as x where 2 <= x and x <= 4
out = evens join middle
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
To see how this is like an SQL `INNER JOIN`, consider records with different fields: the left records have an `x` and the right records have a `y`. When they have an overlapping index, the merged records have `x` and `y`. | output, counter = interpreter.run(r"""
evens = stuff as x where x % 2 == 0
middle = stuff as y where 2 <= y and y <= 4
out = evens join middle
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
The opposite of PartiQL's `join` (like SQL's `INNER JOIN` on the hidden surrogate key) is PartiQL's `union` (like SQL's `FULL OUTER JOIN` on the hidden surrogate key).Just as `join` acts as set intersection, `union` acts as set union. I chose to use the word "`union`" in this case because field names of the records in ... | output, counter = interpreter.run(r"""
evens = stuff as x where x % 2 == 0
odds = stuff as x where x % 2 == 1
out = evens union odds
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Here's the same example, but with fields named `x` and `y`. If the output has a field named `x`, you know it came from the left, and if it has a field named `y`, you know it came from the right. | output, counter = interpreter.run(r"""
evens = stuff as x where x % 2 == 0
odds = stuff as y where y % 2 == 1
out = evens union odds
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Often, that would be used to combine particle lists, like `leptons = electrons union muons`. In the example below, we'll take a union of `table1` and `table2`. | output, counter = interpreter.run(r"""
out = table1 union table2
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
The `table1` and `table2` sets both have a `b` field, so `b` in the output records might have come from `table1` (in which case, it's an integer) or might have come from `table2` (in which case, it's boolean). To make the provenance clear, we can use `as` to label them.Below, the output records either have a `left` fie... | output, counter = interpreter.run(r"""
out = table1 as left union table2 as right
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Also notice that the index is a union of the two input indexes: each key comes from either the left index or the right index. | [[y.row for y in x["out"]] for x in output] | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
So `cross` creates a new index reference (a `CrossRef` built from the left and right indexes), `join` returns the input index references (finding the ones that are in both left and right), and `union` creates a union of keys out of the left and right index references. There's one more set operation: `except`. This is a set difference, removing elements fr... | output, counter = interpreter.run(r"""
evens = stuff as x where x % 2 == 0
out = stuff as x except evens
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
I could have implemented a symmetric set difference or an equivalent of SQL's `LEFT OUTER JOIN` and `RIGHT OUTER JOIN`, but they're straightforward extensions of what I've implemented here (and it's not clear they would be needed for a physics analysis).

Equality and set membership

It should be emphasized that the set ... | output, counter = interpreter.run(r"""
out = {
subset1 = table2 where c % 2 == 1 with {
z = 2*c
}
subset2 = table2 where not b with {
z = 10*c
}
subset1 union subset2
}
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Note that this is not how simple equality—tested by the `==` operator—should work. Simple equality should return `True` if the left and right are the same value or have the same field names and field values, irrespective of whether they have the same indexes. The same applies to `!=`, `in`, and `not in`, which are all ... | output, counter = interpreter.run(r"""
out = 2 in stuff
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Modifying sets of records

SQL's `SELECT` statement lets you create, replace, and remove columns from a table without changing its index (except inasmuch as a visible or natural index is a column that can be changed like any other). The equivalents in PartiQL are:

* `with { ... }` to create or replace fields to a set ... | output, counter = interpreter.run(r"""
out = table1 with {
c = 10*a # add c as a new field
}
""", instances); output
output, counter = interpreter.run(r"""
out = table1 to {
c = 10*a # c is the only field in a new set of records
}
""", instances); output
output, counter = interpreter.run(r"""
out =... | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Group-by

The `GROUP BY` operator is a major part of SQL: it changes a `SELECT` operation from a filtered transformation of a table into an aggregation over subtables, each defined by a distinct value of the expression following `GROUP BY`. It changes the nature of the whole query—expressions after `SELECT` must be redu... | output, counter = interpreter.run(r"""
grouped = table2 group by b
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Below, we aggregate over the inner sets by modifying the set of records as we would modify any other set of records—by defining new fields in a `to` block. The aggregation functions (`count`, `sum`, `min`, `max`, `any`, `all`) operate on sets of values.If we wanted to aggregate `b` (because we grouped by `b`; it would ... | output, counter = interpreter.run(r"""
aggregated = table2 group by b as grouped to {
b = any(grouped.b)
c = sum(grouped.c)
}
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Min by and max by

Whereas `group by` restructures a set of records into a set of sets of records, `min by` and `max by` restructure a set of records into a single record. If the set is empty, this returns missing data. | output, counter = interpreter.run(r"""
out = table1 max by a
""", instances); output
output, counter = interpreter.run(r"""
out = (table1 max by a).b
""", instances); output | _____no_output_____ | BSD-3-Clause | binder/04-execution.ipynb | lgray/AwkwardQL |
Load the Dataset | import pandas as pd
import numpy as np
import random
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris = pd.read_csv(url,
header=None,
names = ['sepal_length',
'sepal_width',
'petal_length',
'p... | _____no_output_____ | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
Extracting the Feature Matrix | X = np.matrix(iris.iloc[:, 0:4])
X = X.astype(np.float)
m, n = X.shape | _____no_output_____ | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
Extracting the response | y = np.asarray(iris.species) | _____no_output_____ | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
Extracting features for different classes | CLS = []
for each in classes:
CLS.append(np.matrix(iris[iris.species == each].iloc[:, 0:4]))
len(CLS) | _____no_output_____ | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
The real meat

Calculating the mean and variance of each feature for each class | pArray = []
def calculate_mean_and_variance(CLS, n, numClasses):
for i in range(numClasses):
pArray.append([])
for x in range(n):
mean = np.mean(CLS[i][:, x])
var = np.var(CLS[i][:, x])
pArray[i].append([mean, var])
calculate_mean_and_variance(CLS, n, numClasses)
... | [[5.0060000000000002, 0.12176400000000002], [3.4180000000000001, 0.14227600000000001], [1.464, 0.029504000000000002], [0.24399999999999999, 0.011264000000000003]]
[[5.9359999999999999, 0.261104], [2.7700000000000005, 0.096500000000000016], [4.2599999999999998, 0.21640000000000004], [1.3259999999999998, 0.0383239999999... | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
Choosing the training dataset (random selection) | # Choose 70% of the dataset for training, at random
random_index = random.sample(range(m), int(m * 0.7))
def probability(mean, stdev, x):
| _____no_output_____ | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
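The body of `probability` is truncated, but a Gaussian naive Bayes classifier evaluates the normal likelihood of each feature given the class statistics. One detail worth flagging: `pArray` above stores `np.var` (a variance), so the sketch below takes a variance directly rather than a standard deviation:

```python
import math

def gaussian_probability(mean, var, x):
    # Gaussian likelihood N(x; mean, var) used per-feature by naive Bayes
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

p = gaussian_probability(0.0, 1.0, 0.0)  # peak of the standard normal
```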
Creating the actual Bayesian classifier | def classify_baysean():
correct_predictions = 0
for index in random_index:
result = []
x = X[index, :]
for eachClass in range(numClasses):
result.append([])
prior = 1 / numClasses
# For sepal_length
prosterior_feature_1 = proba... | _____no_output_____ | MIT | naive_bayes/Naive_Bayes_Basic.ipynb | MayukhSobo/ML |
Optimization Methods

Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm ... | import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_data... | C:\Users\Theochem\Desktop\DeepLearning\deep-learning-coursera-master\Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization\opt_utils.py:76: SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert(parameters['W' + str(l)].shape == layer_dims[l], layer_dims[l-1])
C:\U... | MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
1 - Gradient Descent

A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = ... | # GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] =... | W1 = [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 = [[ 1.74604067]
[-0.75184921]]
W2 = [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 = [[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
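The update rule from the exercise, W := W - α·dW and b := b - α·db per layer, can be written with plain lists instead of numpy arrays (a sketch of the arithmetic, not the graded numpy implementation):

```python
def update_with_gd(parameters, grads, learning_rate):
    L = len(parameters) // 2  # number of layers (each has a W and a b)
    for l in range(1, L + 1):
        parameters["W" + str(l)] = [w - learning_rate * g for w, g in
                                    zip(parameters["W" + str(l)], grads["dW" + str(l)])]
        parameters["b" + str(l)] = [b - learning_rate * g for b, g in
                                    zip(parameters["b" + str(l)], grads["db" + str(l)])]
    return parameters

params = update_with_gd({"W1": [1.0, 2.0], "b1": [0.5]},
                        {"dW1": [0.1, -0.2], "db1": [0.5]},
                        learning_rate=0.1)
```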
**Expected Output**: **W1** [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]] **b1** [[ 1.74604067] [-0.75184921]] **W2** [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.140... | # GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (... | shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
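The two steps of `random_mini_batches` — shuffle, then partition — look like this with plain lists; the last mini-batch simply holds the remainder, which is why 148 examples yield batches of 64, 64, and 20, matching the shapes printed above:

```python
import random

def make_mini_batches(examples, mini_batch_size=64, seed=0):
    rng = random.Random(seed)
    shuffled = examples[:]          # shuffle a copy, leave the input intact
    rng.shuffle(shuffled)
    return [shuffled[k:k + mini_batch_size]
            for k in range(0, len(shuffled), mini_batch_size)]

batches = make_mini_batches(list(range(148)), mini_batch_size=64)
```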
**Expected Output**: **shape of the 1st mini_batch_X** (12288, 64) **shape of the 2nd mini_batch_X** (12288, 64) **shape of the 3rd mini_batch_X** (12288, 20) **shape of the 1st mini_batch_Y** (1, 64) ... | # GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Argumen... | v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
**Expected Output**: **v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]] **v["db1"]** [[ 0.] [ 0.]] **v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] **v["db2"]** [[ 0.] [ 0.] [ 0.]] **Exercise**: ... | # GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
... | W1 = [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 = [[ 1.74493465]
[-0.76027113]]
W2 = [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 = [[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] = [[-0.11006192 ... | MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
**Expected Output**: **W1** [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]] **b1** [[ 1.74493465] [-0.76027113]] **W2** [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.134... | # GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:... | v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
s["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] = [[ 0.]
[ 0.]]
s["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] = [[ 0.]
[ 0.]
[ 0.]]
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
**Expected Output**: **v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]] **v["db1"]** [[ 0.] [ 0.]] **v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] **v["db2"]** [[ 0.] [ 0.] [ 0.]] **s["d... | # GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,
beta1=0.9, beta2=0.999, epsilon=1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your paramete... | W1 = [[ 1.79078034 -0.77819144 -0.69460639]
[-1.23940099 0.69897299 -2.13510481]]
b1 = [[ 1.91119235]
[-0.59477218]]
W2 = [[ 0.48546317 -0.41580308 1.62854186]
[-1.89371033 -0.1559833 -0.21761985]
[ 1.30020326 -0.93841334 -0.00599321]]
b2 = [[-1.04427894]
[-0.12422162]
[ 0.41638106]]
v["dW1"] = [[-0.11006192 ... | MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
**Expected Output**: **W1** [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]] **b1** [[ 1.75225313] [-0.75376553]] **W2** [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.141... | train_X, train_Y = load_dataset() | _____no_output_____ | MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
We have already implemented a 3-layer neural network. You will train it with: Mini-batch **Gradient Descent**, which calls `update_parameters_with_gd()`; Mini-batch **Momentum**, which calls `initialize_velocity()` and `update_parameters_with_momentum()`; and Mini-batch **Adam**... | def model(X, Y, layers_dims, optimizer, learning_rate=0.0007, mini_batch_size=64, beta=0.9,
beta1=0.9, beta2=0.999, epsilon=1e-8, num_epochs=10000, print_cost=True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, nu... | _____no_output_____ | MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
You will now run this 3-layer neural network with each of the 3 optimization methods. 5.1 - Mini-batch Gradient descent. Run the following code to see how the model does with mini-batch gradient descent. | # train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer="gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axe... | Cost after epoch 0: 0.690736
Cost after epoch 1000: 0.685273
Cost after epoch 2000: 0.647072
Cost after epoch 3000: 0.619525
Cost after epoch 4000: 0.576584
Cost after epoch 5000: 0.607243
Cost after epoch 6000: 0.529403
Cost after epoch 7000: 0.460768
Cost after epoch 8000: 0.465586
Cost after epoch 9000: 0.464518
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
5.2 - Mini-batch gradient descent with momentum. Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains. | # train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta=0.9, optimizer="momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2... | Cost after epoch 0: 0.690741
Cost after epoch 1000: 0.685341
Cost after epoch 2000: 0.647145
Cost after epoch 3000: 0.619594
Cost after epoch 4000: 0.576665
Cost after epoch 5000: 0.607324
Cost after epoch 6000: 0.529476
Cost after epoch 7000: 0.460936
Cost after epoch 8000: 0.465780
Cost after epoch 9000: 0.464740
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
5.3 - Mini-batch with Adam mode. Run the following code to see how the model does with Adam. | # train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer="adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim... | Cost after epoch 0: 0.687550
Cost after epoch 1000: 0.173593
Cost after epoch 2000: 0.150145
Cost after epoch 3000: 0.072939
Cost after epoch 4000: 0.125896
Cost after epoch 5000: 0.104185
Cost after epoch 6000: 0.116069
Cost after epoch 7000: 0.031774
Cost after epoch 8000: 0.112908
Cost after epoch 9000: 0.197732
| MIT | Improving Deep Neural Networks/Optimization methods.ipynb | LuketheDukeBates/Deep-Learning-Coursera-Andrew-Ng- |
Python Challenge 3. Challenge URL: http://www.pythonchallenge.com/pc/def/equality.html | import urllib.request
import re
html = urllib.request.urlopen("http://www.pythonchallenge.com/pc/def/equality.html").read().decode()
data = re.findall("<!--(.*?)-->", html, re.DOTALL)[-1]
data
# ***AAAbCCC***
# [A-Z] capital letters
# [a-z] small caps
# [^] not
# + multiple
# {3} 3 of something
"".join(re.findall( "[... | _____no_output_____ | MIT | toy-problems/toy-problem-005.ipynb | debracupitt/toolkitten |
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker. _Deep Learning Nanodegree Program | Deployment_. Now that we have a basic understanding of how SageMaker works, we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to ente... | # Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0 | Requirement already satisfied: sagemaker==1.72.0 in c:\users\katherine\anaconda3\lib\site-packages (1.72.0)
Requirement already satisfied: importlib-metadata>=1.4.0 in c:\users\katherine\anaconda3\lib\site-packages (from sagemaker==1.72.0) (3.10.0)
Requirement already satisfied: numpy>=1.9.0 in c:\users\katherine\anaco... | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Step 1: Downloading the data. As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). > Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of t... | #%mkdir .\data
#jupyter notebook --notebook-dir=/Users/Katherine/UdacityMLE/sagemaker-deployment-master/data
%cd C:/Users/Katherine/UdacityMLE/sagemaker-deployment-master/data
#!wget -O ./data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ./data/aclImdb_v1.tar.gz -C ./data | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Step 2: Preparing and Processing the data. Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a... | import os
import glob
def read_imdb_data(data_dir='./aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_ty... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records. | from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loade... | print(train_X[100])
print(train_y[100]) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
The first step in processing the reviews is to remove any HTML tags that appear. In addition, we wish to tokenize our input so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis. | import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.su... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set. | # TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100]) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
**Question:** Above we mentioned that the `review_to_words` method removes HTML formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do t... | import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Transform the data. In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of cou... | import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
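A minimal sketch of this kind of dictionary builder, assuming `data` is a list of tokenized sentences and reserving indices 0 and 1 for padding and infrequent words as the notebook does (`build_dict_sketch` is an illustrative name):

```python
from collections import Counter

def build_dict_sketch(data, vocab_size=5000):
    """Map the most frequent words to unique integers starting at 2;
    0 and 1 are reserved for 'no word' and infrequent words."""
    counts = Counter(word for sentence in data for word in sentence)
    most_common = [w for w, _ in counts.most_common(vocab_size - 2)]
    return {w: i + 2 for i, w in enumerate(most_common)}

# Toy corpus: "movi" appears twice, so it takes the first free index, 2
word_dict = build_dict_sketch([["movi", "great"], ["movi", "bad"]])
```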
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:** | # TODO: Use this space to determine the five most frequently appearing words in the training set.
Save `word_dict`. Later on, when we construct an endpoint which processes a submitted review, we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use. | data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Transform the reviews. Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`. | def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pa... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
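The pad-or-truncate logic can be sketched end to end as follows. This is a hedged re-implementation under the conventions stated above (0 for padding, 1 for out-of-vocabulary words), not the notebook's exact code:

```python
def convert_and_pad_sketch(word_dict, sentence, pad=500):
    """Encode a tokenized sentence as a fixed-length integer list:
    0 pads the tail, 1 marks words missing from word_dict, and
    anything past `pad` words is truncated."""
    NOWORD, INFREQ = 0, 1
    working = [NOWORD] * pad
    for i, word in enumerate(sentence[:pad]):
        working[i] = word_dict.get(word, INFREQ)
    return working, min(len(sentence), pad)

encoded, length = convert_and_pad_sketch({"movi": 2}, ["movi", "rocks"], pad=5)
# encoded == [2, 1, 0, 0, 0] and length == 2
```

Returning the original length alongside the padded sequence lets the model know where the real tokens end.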
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set? | # Use this cell to examine one of the processed reviews to make sure everything is working as intended.
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Might this be a problem? Why or why not? **Answer:** Step 3: Upload the data to S3. As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our ... | import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model. | import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also ... | !pygmentize train/model.py | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training ... | import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
(TODO) Writing the training method. Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later. | def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early ... | import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one)... | from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Step 5: Testing the model. As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testing. Now that we have ... | # TODO: Deploy the trained model
Step 7 - Use the model for testing. Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is. | test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array i... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
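The chunking idea can be sketched independently of SageMaker. Here a stand-in `predict_fn` replaces the call to the deployed endpoint, purely for illustration:

```python
import numpy as np

def predict_in_chunks(data, predict_fn, rows=512):
    """Split `data` into chunks of at most `rows` rows, call
    predict_fn on each, and concatenate the results; this is the
    batching used above to stay under the endpoint's payload limit."""
    n_chunks = int(data.shape[0] / float(rows) + 1)
    parts = [predict_fn(chunk) for chunk in np.array_split(data, n_chunks)]
    return np.concatenate(parts)

# Stand-in predictor that just returns each row's sum
data = np.arange(12).reshape(6, 2)
preds = predict_in_chunks(data, lambda a: a.sum(axis=1), rows=4)
```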
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** (TODO) More testingWe now have a trained model which has been deployed and which we can send processed r... | test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.' | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
The question we now need to answer is, how do we send this review to our model?Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a se... | # TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = None | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
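A self-contained sketch of those two steps follows. The tokenizer here is a simplified stand-in that lowercases and keeps alphanumeric runs, whereas the notebook's `review_to_words` also strips HTML and stems; the length-first layout matches the `test_X` construction above:

```python
import re

def prepare_review(review, word_dict, pad=500):
    """Tokenize a raw review, encode it with word_dict (1 for unknown
    words, 0 for padding), and prepend the review length."""
    words = re.findall(r"[a-z0-9]+", review.lower())
    encoded = [0] * pad
    for i, w in enumerate(words[:pad]):
        encoded[i] = word_dict.get(w, 1)
    return [min(len(words), pad)] + encoded

row = prepare_review("Simply great!", {"great": 2}, pad=4)
# row == [2, 1, 2, 0, 0]: length 2, then the encoded tokens
```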
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review. | predictor.predict(test_data) | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delet... | estimator.delete_endpoint() | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Step 6 (again) - Deploy the model for the web app. Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review. As we saw above, by default the estimator which we created, w... | !pygmentize serve/predict.py
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.**TODO**: Complete th... | from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchMod... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |
Testing the model. Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is t... | import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path... | _____no_output_____ | MIT | SageMaker Project.ipynb | KatherineKing/sagemaker-deployment |