Multi line strings | paragraph = "I am thinking of writing something that spans"\
"multiple lines and Nobody is helping me with that. So here"\
"is me typing something random"
print(paragraph)
# \n represents a newline
paragraph = "I am thinking of writing something that spans\n\
multiple lines and Nobody is helping me with that. So here\n\
is me typing something random"
print(paragraph) | I am thinking of writing something that spans
multiple lines and Nobody is helping me with that. So here
is me typing something random
| Apache-2.0 | Lecture-Notes/2018/Day2.ipynb | unmeshvrije/python-for-beginners |
String indices | sample_string = "Sorry Madam"
# Subscript operator: []
sample_string[1]  # character at index 1, i.e. 'o'
sample_string[2]
'''
*******************************************
Example of a multi-line comment:
To access the first character of the string
you need to use the index 0
*******************************************
'''
sample_string[0]
'''
To access a part of string, use a colon notation in the
subscript operator []
'''
sample_string[0:5]
# give me the string madam from the sample_string
sample_string[6:11]
# Slice the string from index 6 and go until the end
sample_string[6:]
# give me string "Sorry" without writing 0 as index
sample_string[:5]
print(sample_string)
# Negative index: -1 will access the last element
print(sample_string[-1])
# access the first element with a negative index
print(sample_string[-11])
# These indices are out of range: valid indices are 0..10 (or -1..-11),
# so both lines below raise an IndexError
print(sample_string[-12])
sample_string[11]
# Python reads a slice from left to right,
# so the start index must come before the stop index.
# The indices in the statement below are reversed, so it returns ''
sample_string[-4:-10]
sample_string[-10:-4]
sample_string[0:5]
'''
Slice the string from index 0 to 4
with the jump of 2
'''
sample_string[0:5:2]
sample_string[-5:0] # returns '' because stop index 0 is not to the right of start -5
sample_string[-5:] # will give you the desired result
sample_string2 = "I love Python"
# Slice this string and give me every third character
# Expected output: "Io tn"
# Pythonic
print(sample_string2[0::3])
print(sample_string2[::3]) # most pythonic
print(sample_string2[0:14:3])
print(sample_string2[0:15:3])
num1 = "5"
num2 = "3"
print(num1 + num2)  # these are strings, so + concatenates: prints 53
sample_string2
print(sample_string2[0] + sample_string2[7:14])
print(sample_string2[0] + sample_string2[2] + sample_string2[7:14])
print(sample_string, sample_string2)
print(sample_string + sample_string2)
print(sample_string + "!! "+ sample_string2)
# to convert a string into lower case characters
sample_string.lower()
sample_string.upper()
sample_string.count()  # raises a TypeError: count() requires at least one argument
type(sample_string)
help(str.count)
sample_string
sample_string.count('a')
fruit = "banana"
# "banana" contains two overlapping occurrences of 'ana' (at indices 1 and 3),
# but str.count only counts non-overlapping matches, so this returns 1
fruit.count('ana')
sample_string.count('r',0,3)
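Since str.count only counts non-overlapping matches, here is a small sketch (not part of the original lecture; the helper name `count_overlapping` is made up) of counting overlapping occurrences by hand:

```python
def count_overlapping(text, pattern):
    """Count occurrences of pattern in text, allowing overlaps."""
    count = 0
    start = 0
    while True:
        # str.find returns -1 when no further match exists
        start = text.find(pattern, start)
        if start == -1:
            return count
        count += 1
        start += 1  # advance only one character so matches may overlap

print("banana".count("ana"))           # → 1 (non-overlapping)
print(count_overlapping("banana", "ana"))  # → 2
```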
sample_string
# Find length of the string
# i.e. number of characters in the string
len(sample_string)
help(len)
name = "Jeroen"
age = 27
country = "Netherlands"
print("Hoi, I am {}. I am {} years old. I come from {}".format(name, age, country))
fruit
fruit2="guanabana"
fruit == 'banana'
is_it_raining = False
Conditional operators

```
== : Compare two expressions for equality
!= : compare for inequality
<  : compare less than
>  : greater than
<= : less than or equal to
>= : greater than or equal to
``` | fruit == 'banana'
fruit != 'orange'
print("fruit =", fruit)
print("fruit2 =", fruit2)
fruit[0:4] == fruit2[5:9]
Conditional statements | it_is_raining = False
it_is_sunny = not it_is_raining
if it_is_sunny:
    print("I will go swimming in Sloterplas")
else:
    print("I will work on Python (coding)")
it_is_raining = True
it_is_sunny = not it_is_raining
if it_is_sunny:
    print("I will go swimming in Sloterplas")
    print("I will run")
else:
    print("I will work on Python (coding)")
# Accept a number from user (input)
# If the number is even, print "Hurray"
# Else print "Meah"
number = int(input("Enter a number : "))
if number % 2 == 0:
    print("Hurray")
else:
    print("Meah")
x = 3 # Assignment
print(x)
print(x%2)
time = float(input("Enter a number between 0 and 23: "))
if time >= 0 and time <= 8:
    print("I am asleep")
elif time > 8 and time <= 10:
    print("Morning rituals")
elif time > 10 and time <= 13:
    print("I am Pythoning")
elif time > 13 and time <= 14:
    print("I am lunching")
elif time > 14 and time < 17:
    print("I am researching")
else:
    print("I am having fun") | Enter a number between 0 and 23: 16
I am researching
Loops | # Not so smart way of printing Hello 5 times
print("Hello")
print("Hello")
print("Hello")
print("Hello")
print("Hello")
# Smart way of printing Hello 5 times
for i in range(5):
    print("Hello")
for i in range(5):
    print(i)
for i in range(1, 6):
    print(i)
for u in range(1, 6):
    print(u, ")", "Hello")
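Note that range uses the same start/stop/step pattern as the slice notation shown earlier. A small sketch (not part of the original lecture) making the parallel explicit:

```python
sample_string = "Sorry Madam"

# slice notation: start at 0, stop before 5, step 2
print(sample_string[0:5:2])  # → Sry

# range with the same start/stop/step visits the same indices
chars = [sample_string[i] for i in range(0, 5, 2)]
print("".join(chars))  # → Sry
```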
sample_string
'''
a way of accessing individual characters in a string
by index
'''
some_number = 15
for i in range(len(sample_string)):
    print("[", str(i), "]:", sample_string[i], some_number) | [ 0 ]: S 15
[ 1 ]: o 15
[ 2 ]: r 15
[ 3 ]: r 15
[ 4 ]: y 15
[ 5 ]: 15
[ 6 ]: M 15
[ 7 ]: a 15
[ 8 ]: d 15
[ 9 ]: a 15
[ 10 ]: m 15
```
i = 0
print("[", str(i), "]:", sample_string[0], some_number)
i = 1
print("[", str(i), "]:", sample_string[1], some_number)
i = 2
print("[", str(i), "]:", sample_string[2], some_number)
i = 3
print("[", str(i), "]:", sample_string[3], some_number)
...
...
i = 10
print("[", str(i), "]:", sample_string[10], some_number)
``` | len(sample_string)
```
n = input()
n = 12
12
24
36
48
60
72
84
96
108
120

n = 4
4
8
12
16
20
24
...
40
``` | n = int(input())
for i in range(1, 11):
    print(i * n) | 12
12
24
36
48
60
72
84
96
108
120
Set up working directory | cd /usr/local/notebooks
mkdir -p ./workdir
#check seqfile files to process in data directory (make sure you still remember the data directory)
!ls ./data/test/data | 1c.fa 1d.fa 2c.fa 2d.fa
| BSD-3-Clause | notebooks/ssu-search-Copy4.ipynb | smoe/SSUsearch |
README

This part of the pipeline searches for SSU rRNA gene fragments, classifies them, and extracts reads aligned to a specific region. It is also the heavy-lifting part of the whole pipeline (more CPUs will help). It works with one seqfile at a time; you just need to change "Seqfile" (and maybe other parameters) in the two cells below. To run commands, click "Cell" then "Run All". After it finishes, you will see "\*** pipeline runs successfully :)" at the bottom of this page.

If your computer has many processors, there are two ways to make use of the resource:

1. Set "Cpu" to a higher number.
2. Make more copies of this notebook (click "File" then "Make a copy" in the menu bar), so you can run this step on multiple files at the same time.

(Again we assume the "Seqfile" is quality trimmed.)

Here we will process one file at a time; set the "Seqfile" variable to the name of the seqfile to be processed. The first part of the seqfile basename (separated by ".") will be the label of this sample, so name it properly. E.g. for "/usr/local/notebooks/data/test/data/1c.fa", "1c" will be the label of this sample. | Seqfile='./data/test/data/2d.fa'
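The label extraction described above (take the basename, split on ".", keep the first piece) can be sketched as a tiny helper; `sample_label` is a made-up name, and the notebook cells below do the same thing inline:

```python
import os

def sample_label(seqfile):
    """Label = first '.'-separated piece of the file's basename."""
    return os.path.basename(seqfile).split('.')[0]

print(sample_label("/usr/local/notebooks/data/test/data/1c.fa"))  # → 1c
print(sample_label("./data/test/data/2d.fa"))                     # → 2d
```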
Other parameters to set | Cpu='2' # maximum number of threads for search and alignment
Hmm='./data/SSUsearch_db/Hmm.ssu.hmm' # hmm model for ssu
Gene='ssu'
Script_dir='./SSUsearch/scripts'
Gene_model_org='./data/SSUsearch_db/Gene_model_org.16s_ecoli_J01695.fasta'
Ali_template='./data/SSUsearch_db/Ali_template.silva_ssu.fasta'
Start='577' #pick regions for de novo clustering
End='727'
Len_cutoff='100' # min length for reads picked for the region
Gene_tax='./data/SSUsearch_db/Gene_tax.silva_taxa_family.tax' # silva 108 ref
Gene_db='./data/SSUsearch_db/Gene_db.silva_108_rep_set.fasta'
Gene_tax_cc='./data/SSUsearch_db/Gene_tax_cc.greengene_97_otus.tax' # greengene 2012.10 ref for copy correction
Gene_db_cc='./data/SSUsearch_db/Gene_db_cc.greengene_97_otus.fasta'
# first part of file basename will the label of this sample
import os
Filename=os.path.basename(Seqfile)
Tag=Filename.split('.')[0]
import os
Hmm=os.path.abspath(Hmm)
Seqfile=os.path.abspath(Seqfile)
Script_dir=os.path.abspath(Script_dir)
Gene_model_org=os.path.abspath(Gene_model_org)
Ali_template=os.path.abspath(Ali_template)
Gene_tax=os.path.abspath(Gene_tax)
Gene_db=os.path.abspath(Gene_db)
Gene_tax_cc=os.path.abspath(Gene_tax_cc)
Gene_db_cc=os.path.abspath(Gene_db_cc)
os.environ.update(
{'Cpu':Cpu,
'Hmm':os.path.abspath(Hmm),
'Gene':Gene,
'Seqfile':os.path.abspath(Seqfile),
'Filename':Filename,
'Tag':Tag,
'Script_dir':os.path.abspath(Script_dir),
'Gene_model_org':os.path.abspath(Gene_model_org),
'Ali_template':os.path.abspath(Ali_template),
'Start':Start,
'End':End,
'Len_cutoff':Len_cutoff,
'Gene_tax':os.path.abspath(Gene_tax),
'Gene_db':os.path.abspath(Gene_db),
'Gene_tax_cc':os.path.abspath(Gene_tax_cc),
'Gene_db_cc':os.path.abspath(Gene_db_cc)})
!echo "*** make sure: parameters are right"
!echo "Seqfile: $Seqfile\nCpu: $Cpu\nFilename: $Filename\nTag: $Tag"
cd workdir
mkdir -p $Tag.ssu.out
### start hmmsearch
!echo "*** hmmsearch starting"
!time hmmsearch --incE 10 --incdomE 10 --cpu $Cpu \
--domtblout $Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
-o /dev/null -A $Tag.ssu.out/$Tag.qc.$Gene.sto \
$Hmm $Seqfile
!echo "*** hmmsearch finished"
!python $Script_dir/get-seq-from-hmmout.py \
$Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
$Tag.ssu.out/$Tag.qc.$Gene.sto \
$Tag.ssu.out/$Tag.qc.$Gene | parsing hmmdotblout done..
50 of 114 seqs are kept after hmm parser
Pass hits to mothur aligner | !echo "*** Starting mothur align"
!cat $Gene_model_org $Tag.ssu.out/$Tag.qc.$Gene > $Tag.ssu.out/$Tag.qc.$Gene.RFadded
# mothur does not allow tab between its flags, thus no indents here
!time mothur "#align.seqs(candidate=$Tag.ssu.out/$Tag.qc.$Gene.RFadded, template=$Ali_template, threshold=0.5, flip=t, processors=$Cpu)"
!rm -f mothur.*.logfile | *** Starting mothur align
mothur v.1.34.4
Last updated: 12/22/2014
by
Patrick D. Schloss
Department of Microbiology & Immunology
University of Michigan
pschloss@umich.edu
http://www.mothur.org
When using, please cite:
Schloss, P.D., et al., Introducing mothur: Open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol, 2009. 75(23):7537-41.
Distributed under the GNU General Public License
Type 'help()' for information on the commands that are available
Type 'quit()' to exit program
mothur > align.seqs(candidate=1c.ssu.out/1c.qc.ssu.RFadded, template=/usr/local/notebooks/data/SSUsearch_db/Ali_template.silva_ssu.fasta, threshold=0.5, flip=t, processors=2)
Using 2 processors.
Reading in the /usr/local/notebooks/data/SSUsearch_db/Ali_template.silva_ssu.fasta template sequences... DONE.
It took 25 to read 18491 sequences.
Aligning sequences from 1c.ssu.out/1c.qc.ssu.RFadded ...
23
28
It took 1 secs to align 51 sequences.
Output File Names:
1c.ssu.out/1c.qc.ssu.align
1c.ssu.out/1c.qc.ssu.align.report
[WARNING]: your sequence names contained ':'. I changed them to '_' to avoid problems in your downstream analysis.
mothur > quit()
26.96user 2.61system 0:29.14elapsed 101%CPU (0avgtext+0avgdata 4881984maxresident)k
0inputs+7792outputs (0major+399013minor)pagefaults 0swaps
Get aligned seqs that have > 50% matched to references | !python $Script_dir/mothur-align-report-parser-cutoff.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.report \
$Tag.ssu.out/$Tag.qc.$Gene.align \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter \
0.5
!python $Script_dir/remove-gap.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa
Search is done here (the computationally intensive part). Hooray!

- \$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter: aligned SSU rRNA gene fragments
- \$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter.fa: unaligned SSU rRNA gene fragments

Extract the reads mapped to the 150 bp region in V4 (positions 577-727 in the *E. coli* SSU rRNA gene) for unsupervised clustering | !python $Script_dir/region-cut.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Start $End $Len_cutoff
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter."$Start"to"$End".cut.lenscreen $Tag.ssu.out/$Tag.forclust | 28 sequences are matched to 577-727 region
Classify SSU rRNA gene seqs using SILVA | !rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy
!mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db, taxonomy=$Gene_tax, cutoff=50, processors=$Cpu)"
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy.count
!rm -f mothur.*.logfile
Classify SSU rRNA gene seqs with Greengene for copy correction later | !rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy
!mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db_cc, taxonomy=$Gene_tax_cc, cutoff=50, processors=$Cpu)"
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy.count
!rm -f mothur.*.logfile
# check the output directory
!ls $Tag.ssu.out | 1c.577to727
1c.cut
1c.forclust
1c.qc.ssu
1c.qc.ssu.align
1c.qc.ssu.align.filter
1c.qc.ssu.align.filter.577to727.cut
1c.qc.ssu.align.filter.577to727.cut.lenscreen.fa
1c.qc.ssu.align.filter.fa
1c.qc.ssu.align.filter.greengene_97_otus.wang.tax.summary
1c.qc.ssu.align.filter.silva_taxa_family.wang.tax.summary
1c.qc.ssu.align.filter.wang.gg.taxonomy
1c.qc.ssu.align.filter.wang.gg.taxonomy.count
1c.qc.ssu.align.filter.wang.silva.taxonomy
1c.qc.ssu.align.filter.wang.silva.taxonomy.count
1c.qc.ssu.align.report
1c.qc.ssu.hmmdomtblout
1c.qc.ssu.hmmdomtblout.parsedToDictWithScore.pickle
1c.qc.ssu.hmmtblout
1c.qc.ssu.RFadded
1c.qc.ssu.sto
This part of the pipeline (working with one sequence file) finishes here. Next we will combine samples for community analysis (see unsupervised analysis). The following files are useful for community analysis:

* 1c.577to727: aligned fasta file of seqs mapped to the target region, for de novo clustering
* 1c.qc.ssu.align.filter: aligned fasta file of all SSU rRNA gene fragments
* 1c.qc.ssu.align.filter.wang.gg.taxonomy: Greengenes taxonomy (for copy correction)
* 1c.qc.ssu.align.filter.wang.silva.taxonomy: SILVA taxonomy | !echo "*** pipeline runs successfully :)" | *** pipeline runs successfully :)
Classification and Regression

There are two major types of supervised machine learning problems, called *classification* and *regression*.

In classification, the goal is to predict a *class label*, which is a choice from a predefined list of possibilities. In *Intro_to_Decision_Trees.ipynb* we used the example of classifying irises into one of three possible species. Classification is sometimes separated into binary classification, which is the special case of distinguishing between exactly two classes, and multiclass classification, which is classification between more than two classes. You can think of binary classification as trying to answer a yes/no question. Classifying emails as either spam or not spam is an example of a binary classification problem. In this binary classification task, the yes/no question being asked would be "Is this email spam?"

For regression tasks, the goal is to predict a *continuous number*, or a floating-point number in programming terms (or real number in mathematical terms). Predicting a person's annual income from their education, their age, and where they live is an example of a regression task. When predicting income, the predicted value is an amount, and can be any number in a given range. Another example of a regression task is predicting the yield of a corn farm given attributes such as previous yields, weather, and number of employees working on the farm. The yield again can be an arbitrary number.

**An easy way to distinguish between classification and regression tasks is to ask whether there is some kind of continuity in the output. If there is continuity between possible outcomes, then the problem is a regression problem.** Think about predicting annual income. There is a clear continuity in the output.
Whether a person makes $40,000 or $40,001 a year does not make a tangible difference, even though these are different amounts of money; if our algorithm predicts $39,999 or $40,001 when it should have predicted $40,000, we don't mind that much.

By contrast, for the task of recognizing the language of a website (which is a classification problem), there is no matter of degree. A website is in one language, or it is in another. There is no continuity between languages, and there is no language that is between English and French.

*Disclaimer*: Much of the code in this notebook was lifted from the excellent book [Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do) by Andreas Muller and Sarah Guido.

Generalization, Overfitting, and Underfitting

In supervised learning, we want to build a model on the training data and then be able to make accurate predictions on new, unseen data that has the same characteristics as the training set that we used. If a model is able to make accurate predictions on unseen data, we say it is able to *generalize* from the training set to the test set. We want to build a model that is able to generalize as accurately as possible.

Usually we build a model in such a way that it can make accurate predictions on the training set. If the training and test sets have enough in common, we expect the model to also be accurate on the test set. However, there are some cases where this can go wrong. For example, if we allow ourselves to build very complex models, we can always be as accurate as we like on the training set.

The only measure of whether an algorithm will perform well on new data is the evaluation on the test set. However, intuitively we expect simple models to generalize better to new data. Therefore, we always want to find the simplest model. Building a model that is too complex for the amount of information we have is called *overfitting*.
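To make the overfitting idea concrete, here is a small pure-Python sketch on made-up numbers (none of this data comes from the notebook): a 1-nearest-neighbor regressor memorizes the training set perfectly, yet on this noisy data the far simpler "always predict the training mean" model generalizes better.

```python
def knn1_predict(train_X, train_y, x):
    """Predict with 1-nearest-neighbor: copy the target of the closest training point."""
    best = min(range(len(train_X)), key=lambda i: abs(train_X[i] - x))
    return train_y[best]

def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# synthetic single-feature data: the true signal is the constant 5.0,
# and the training targets carry made-up noise around it
train_X = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [5.9, 4.1, 6.0, 4.0, 5.8]
test_X = [0.5, 1.5, 2.5, 3.5]
test_y = [5.0, 5.0, 5.0, 5.0]  # noise-free test targets

# 1-NN memorizes the training set, so its training error is exactly zero...
train_pred = [knn1_predict(train_X, train_y, x) for x in train_X]
print(mse(train_y, train_pred))  # → 0.0

# ...but on new data it reproduces the noise it memorized,
knn_test_mse = mse(test_y, [knn1_predict(train_X, train_y, x) for x in test_X])

# while the simpler model (always predict the training mean) generalizes better here
mean_y = sum(train_y) / len(train_y)
mean_test_mse = mse(test_y, [mean_y] * len(test_X))
print(knn_test_mse > mean_test_mse)  # → True
```

The memorizing model wins on the training set and loses on the test set; that gap is what overfitting looks like in practice.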
Overfitting occurs when you fit a model too closely to the particularities of the training set and obtain a model that works well on the training set but is not able to generalize to new data. On the other hand, if your model is too simple, then you might not be able to capture all the aspects of and variability in the data, and your model will do badly even on the training set. Choosing too simple a model is called *underfitting*.

The more complex we allow our model to be, the better we will be able to predict on the training data. However, if our model becomes too complex, we start focusing too much on each individual data point in our training set, and the model will not generalize well to new data.

There is a sweet spot in between that will yield the best generalization performance. This is the model we want to find.

Relation of Model Complexity to Dataset Size

It's important to note that model complexity is intimately tied to the variation of inputs contained in your training dataset: the larger the variety of data points your dataset contains, the more complex a model you can use without overfitting. Usually, collecting more data points will yield more variety, so larger datasets allow building more complex models. However, simply duplicating the same data points or collecting very similar data will not help.

Having more data and building appropriately more complex models can often work wonders for supervised learning tasks. In the real world, you often have the ability to decide how much data to collect, which might be more beneficial than tweaking and tuning your model. Never underestimate the power of more data.

Linear Models

Linear models are a class of models that are widely used in practice and have been studied extensively in the last few decades, with roots going back over a hundred years. Linear models make a prediction using a linear function of the input features.
Linear Models for Regression

For regression, the general prediction formula for a linear model looks as follows:

    ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b

Here, x[0] to x[p] denotes the features (in this example, the number of features is p + 1) of a single data point, w and b are parameters of the model that are learned, and ŷ is the prediction the model makes. For a dataset with a single feature, this is:

    ŷ = w[0] * x[0] + b

which you might remember from high school mathematics as the equation for a line. Here, w[0] is the slope and b is the y-axis offset. For more features, w contains the slopes along each feature axis. Alternatively, you can think of the predicted response as being a weighted sum of the input features, with weights (which can be negative) given by the entries of w.

Linear models for regression can be characterized as regression models for which the prediction is a line for a single feature, a plane when using two features, or a hyperplane in higher dimensions (that is, when using more features).

For datasets with many features, linear models can be very powerful. In particular, if you have more features than training data points, any target y can be perfectly modeled (on the training set) as a linear function.

There are many different linear models for regression. The difference between these models lies in how the model parameters w and b are learned from the training data, and how model complexity can be controlled.

Linear Regression (aka Ordinary Least Squares)

Linear regression, or *ordinary least squares* (OLS), is the simplest and most classic linear method for regression. Linear regression finds the parameters w and b that minimize the *mean squared error* between predictions and the true regression targets, y, on the training set. The mean squared error is the sum of the squared differences between the predictions and the true values.
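The fitting step described above ("find the w and b that minimize the mean squared error") has a simple closed form in the single-feature case. A pure-Python sketch on made-up points (the helper name `fit_ols_1d` is mine, not scikit-learn's):

```python
def fit_ols_1d(x, y):
    """Fit y ≈ w * x + b by ordinary least squares (single feature).

    Closed-form solution minimizing mean squared error:
        w = cov(x, y) / var(x),   b = mean(y) - w * mean(x)
    Assumes x is not constant (otherwise var(x) is zero).
    """
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    cov_xy = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
    var_x = sum((xi - x_mean) ** 2 for xi in x)
    w = cov_xy / var_x
    b = y_mean - w * x_mean
    return w, b

# points lying exactly on y = 3x + 1, so OLS should recover w = 3, b = 1
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 4.0, 7.0, 10.0]
w, b = fit_ols_1d(x, y)
print(round(w, 6), round(b, 6))  # → 3.0 1.0
```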
Linear regression has no hyperparameters to tune, which is a benefit, but it also has no way to control model complexity.

The scikit-learn documentation on [Linear Regression](http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares) has a decent basic example of its use.

Advantages of Linear Regression (general, not specific to OLS)
* Simple to understand and to interpret, at least for a small number of features/dimensions
  * Easy to visualize for 2 or 3 features
* Very fast to train and also fast to predict
* Doesn't suffer from the *curse of dimensionality* the way methods such as k-nearest neighbors do
  * In fact, linear methods tend to work better with many features than with few

Big Disadvantage specific to OLS, but not applicable to linear regression in general
* OLS has no way to control model complexity and can suffer from overfitting, particularly if there are a large number of features
  * Modified versions of linear regression such as *Ridge Regression* and *Lasso* can mitigate or fix this issue

Disadvantages of Linear Regression in general, not specific to OLS
* In lower-dimensional spaces, other models might yield better generalization performance
* Requires more data preparation than some other techniques
  * Feature normalization is required for best results (for any algorithm which includes regularization)
  * Non-ordinal categorical features need to be one-hot encoded
  * Ordinal features need to be numerically encoded | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline | Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
A First Application: Predicting Boston Housing Prices

One of the most famous datasets for regression in a supervised learning setting is the [Boston Housing data set](https://archive.ics.uci.edu/ml/datasets/Housing). It is a multivariate dataset introduced in a 1978 paper which records 13 attributes concerning housing values in the suburbs of Boston. NOTE: The data is very, very old and the house prices are ridiculously low by today's standards.

scikit-learn has a number of small toy datasets included with it which makes it quick and easy to experiment with different machine learning algorithms on these datasets.

The [sklearn.datasets.load_boston()](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston) method can be used to load this dataset.

Meet the data

The *boston* object that is returned by **load_boston** is a **Bunch** object, which is very similar to a dictionary. It contains keys and values.

Feature Information:
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centres
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per $10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT: % lower status of the population

Target Information

14. MEDV: Median value of owner-occupied homes in $1000's | from sklearn.datasets import load_boston
boston = load_boston()
print("Keys of boston: {}".format(boston.keys()))
# The value of the key DESCR is a short description of the dataset. Here we show the beinning of the description.
print(boston['DESCR'][:193] + "\n...")
# The value of feature_names is a list of strings, giving the abbreviated name of each feature
print("Feature names: {}".format(boston['feature_names']))
# The data itself is contained in the target and data fields.
# data contains the numeric measurements of features in a NumPy array
print("Type of data: {}".format(type(boston['data'])))
# The rows in the data array correspond to neighborhoods, while the columns represent the features
print("Shape of data: {}".format(boston['data'].shape))
# We see that the array contains measurements for 506 different neighborhoods. Here are values for the first 5.
print("First five columns of data:\n{}".format(boston['data'][:5]))
# The target array contains the Median value of owner-occupied homes in $1000's, also as a NumPy array
print("Type of target: {}".format(type(boston['target'])))
# target is a one-dimensional array, with one entry per sample
print("Shape of target: {}".format(boston['target'].shape))
# The target values are positive floating point numbers which represent a median house value in thousands of dollars.
print("Target:\n{}".format(boston['target'])) | Target:
[ 24. 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 15. 18.9
21.7 20.4 18.2 19.9 23.1 17.5 20.2 18.2 13.6 19.6 15.2 14.5
15.6 13.9 16.6 14.8 18.4 21. 12.7 14.5 13.2 13.1 13.5 18.9
20. 21. 24.7 30.8 34.9 26.6 25.3 24.7 21.2 19.3 20. 16.6
14.4 19.4 19.7 20.5 25. 23.4 18.9 35.4 24.7 31.6 23.3 19.6
18.7 16. 22.2 25. 33. 23.5 19.4 22. 17.4 20.9 24.2 21.7
22.8 23.4 24.1 21.4 20. 20.8 21.2 20.3 28. 23.9 24.8 22.9
23.9 26.6 22.5 22.2 23.6 28.7 22.6 22. 22.9 25. 20.6 28.4
21.4 38.7 43.8 33.2 27.5 26.5 18.6 19.3 20.1 19.5 19.5 20.4
19.8 19.4 21.7 22.8 18.8 18.7 18.5 18.3 21.2 19.2 20.4 19.3
22. 20.3 20.5 17.3 18.8 21.4 15.7 16.2 18. 14.3 19.2 19.6
23. 18.4 15.6 18.1 17.4 17.1 13.3 17.8 14. 14.4 13.4 15.6
11.8 13.8 15.6 14.6 17.8 15.4 21.5 19.6 15.3 19.4 17. 15.6
13.1 41.3 24.3 23.3 27. 50. 50. 50. 22.7 25. 50. 23.8
23.8 22.3 17.4 19.1 23.1 23.6 22.6 29.4 23.2 24.6 29.9 37.2
39.8 36.2 37.9 32.5 26.4 29.6 50. 32. 29.8 34.9 37. 30.5
36.4 31.1 29.1 50. 33.3 30.3 34.6 34.9 32.9 24.1 42.3 48.5
50. 22.6 24.4 22.5 24.4 20. 21.7 19.3 22.4 28.1 23.7 25.
23.3 28.7 21.5 23. 26.7 21.7 27.5 30.1 44.8 50. 37.6 31.6
46.7 31.5 24.3 31.7 41.7 48.3 29. 24. 25.1 31.5 23.7 23.3
22. 20.1 22.2 23.7 17.6 18.5 24.3 20.5 24.5 26.2 24.4 24.8
29.6 42.8 21.9 20.9 44. 50. 36. 30.1 33.8 43.1 48.8 31.
36.5 22.8 30.7 50. 43.5 20.7 21.1 25.2 24.4 35.2 32.4 32.
33.2 33.1 29.1 35.1 45.4 35.4 46. 50. 32.2 22. 20.1 23.2
22.3 24.8 28.5 37.3 27.9 23.9 21.7 28.6 27.1 20.3 22.5 29.
24.8 22. 26.4 33.1 36.1 28.4 33.4 28.2 22.8 20.3 16.1 22.1
19.4 21.6 23.8 16.2 17.8 19.8 23.1 21. 23.8 23.1 20.4 18.5
25. 24.6 23. 22.2 19.3 22.6 19.8 17.1 19.4 22.2 20.7 21.1
19.5 18.5 20.6 19. 18.7 32.7 16.5 23.9 31.2 17.5 17.2 23.1
24.5 26.6 22.9 24.1 18.6 30.1 18.2 20.6 17.8 21.7 22.7 22.6
25. 19.9 20.8 16.8 21.9 27.5 21.9 23.1 50. 50. 50. 50.
50. 13.8 13.8 15. 13.9 13.3 13.1 10.2 10.4 10.9 11.3 12.3
8.8 7.2 10.5 7.4 10.2 11.5 15.1 23.2 9.7 13.8 12.7 13.1
12.5 8.5 5. 6.3 5.6 7.2 12.1 8.3 8.5 5. 11.9 27.9
17.2 27.5 15. 17.2 17.9 16.3 7. 7.2 7.5 10.4 8.8 8.4
16.7 14.2 20.8 13.4 11.7 8.3 10.2 10.9 11. 9.5 14.5 14.1
16.1 14.3 11.7 13.4 9.6 8.7 8.4 12.8 10.5 17.1 18.4 15.4
10.8 11.8 14.9 12.6 14.1 13. 13.4 15.2 16.1 17.8 14.9 14.1
12.7 13.5 14.9 20. 16.4 17.7 19.5 20.2 21.4 19.9 19. 19.1
19.1 20.1 19.9 19.6 23.2 29.8 13.8 13.3 16.7 12. 14.6 21.4
23. 23.7 25. 21.8 20.6 21.2 19.1 20.6 15.2 7. 8.1 13.6
20.1 21.8 24.5 23.1 19.7 18.3 21.2 17.5 16.8 22.4 20.6 23.9
22. 11.9]
Measuring Success: Training and testing data

We want to build a machine learning model from this data that can predict the median house value for a new neighborhood, given its other measurements. But before we can apply our model to new measurements, we need to know whether it actually works -- that is, whether we should trust its predictions.

Unfortunately, we cannot use the data we used to build the model to evaluate it. This is because our model can always simply remember the whole training set, and will therefore always predict the correct label for any point in the training set. This "remembering" does not indicate to us whether the model will *generalize* well (in other words, whether it will also perform well on new data).

To assess the model's performance, we show it new data (data that it hasn't seen before) for which we have labels. This is usually done by splitting the labeled data we have collected (here, our 506 neighborhood records) into two parts. One part of the data is used to build our machine learning model, and is called the *training data* or *training set*. The rest of the data will be used to assess how well the model works; this is called the *test data*, *test set*, or *hold-out set*.

scikit-learn contains a function that shuffles the dataset and splits it for you: the [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function. This function extracts 75% of the rows in the data as the training set, together with the corresponding labels for this data. The remaining 25% of the data, together with the remaining labels, is declared as the test set. Deciding how much data you want to put into the training and the test set respectively is somewhat arbitrary, but scikit-learn's default 75/25 split is a reasonable starting point.

In scikit-learn, data is usually denoted with a capital X, while labels are denoted by a lowercase y.
This is inspired by the standard formulation *f(x)=y* in mathematics, where *x* is the input to a function and *y* is the output. Following more conventions from mathematics, we use a capital *X* because the data is a two-dimensional array (a matrix) and a lowercase *y* because the target is a one-dimensional array (a vector).Before making the split, the **train_test_split** function shuffles the dataset using a pseudorandom number generator. If we just took the last 25% of the rows as a test set, the split could be biased by any ordering in the data (for a dataset sorted by its label, such as iris, the test set would contain only a single class).To make sure this example code will always get the same output if run multiple times, we provide the pseudorandom number generator with a fixed seed using the **random_state** parameter.The output of the **train_test_split** function is **X_train**, **X_test**, **y_train**, and **y_test**, which are all NumPy arrays. **X_train** contains 75% of the rows of the dataset, and **X_test** contains the remaining 25%. | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(boston['data'], boston['target'], random_state=0)
print("X_train shape: {}".format(X_train.shape))
print("y_train shape: {}".format(y_train.shape))
print("X_test shape: {}".format(X_test.shape))
print("y_test shape: {}".format(y_test.shape)) | X_train shape: (379, 13)
y_train shape: (379,)
X_test shape: (127, 13)
y_test shape: (127,)
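As a quick sanity check of the 75/25 behavior described above, the split can be exercised on small synthetic data (a sketch; the arrays here are placeholders, not the housing data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in dataset: 100 samples with 4 features each
X = np.arange(400).reshape(100, 4)
y = np.arange(100)

# Default split holds out 25% of the rows for testing;
# random_state fixes the shuffle so results are reproducible
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

print(X_train.shape)  # (75, 4)
print(X_test.shape)   # (25, 4)
```

Every sample lands in exactly one of the two sets, so together they cover the whole dataset.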
First things first: Look at your dataBefore building a machine learning model, it is often a good idea to inspect the data, to see if the task is easily solvable without machine learning, or if the desired information might not be contained in the data.Additionally, inspecting the data is a good way to find abnormalities and peculiarities. Maybe some of your measurements were recorded using inches and not centimeters, for example. In the real world, inconsistencies in the data and unexpected measurements are very common, as are missing data and not-a-number (NaN) or infinite values.One of the best ways to inspect data is to visualize it. One way to do this is by using a *scatter plot*. A scatter plot of the data puts one feature along the x-axis and another along the y-axis, and draws a dot for each data point. Unfortunately, computer screens have only two dimensions, which allows us to plot only two (or maybe three) features at a time. It is difficult to plot datasets with more than three features this way. One way around this problem is to do a *pair plot*, which looks at all possible pairs of features. If you have a small number of features, this is quite reasonable, though with the 13 features here the grid of plots is already fairly large. You should keep in mind, however, that a pair plot does not show the interaction of all of the features at once, so some interesting aspects of the data may not be revealed when visualizing it this way.In Python, the *pandas* library has a convenient function called [scatter_matrix](http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.htmlscatter-matrix-plot) for creating pair plots for a DataFrame. | # create dataframe from data in X_train
boston_df = pd.DataFrame(X_train, columns=boston.feature_names)
# Add in the target data
boston_df['MEDV'] = y_train
# Look at the first few rows
boston_df.head()
# create a scatter matrix from the dataframe
tmp = pd.plotting.scatter_matrix(boston_df, figsize=(15, 15))  # pd.scatter_matrix moved to pd.plotting in pandas 0.20 and was removed in 1.0
From the plots, we can see RM has a strong positive linear relationship with MEDV and LSTAT has a strong negative one. This makes sense - the housing price should go up as the number of rooms increases and the housing prices should go down as the percentage of lower class/income families in the neighborhood increases. | # Get a high-level overview of the data
boston_df.describe()
# Find which features are most highly correlated with the housing prices
df = boston_df
df['MEDV'] = y_train
df.corr()['MEDV']
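The correlation series printed above can be turned into a ranked list by taking absolute values and sorting. A minimal sketch on a hypothetical mini-DataFrame (the numbers below are made up for illustration, not taken from the Boston data):

```python
import pandas as pd

# Hypothetical stand-in for boston_df; the idea is the same:
# rank features by the absolute value of their correlation with the target
df = pd.DataFrame({
    'RM':    [6.5, 7.2, 5.9, 6.8, 7.5],
    'LSTAT': [12.0, 4.0, 17.0, 9.0, 3.0],
    'MEDV':  [24.0, 34.0, 15.0, 27.0, 38.0],
})

# Drop the target's self-correlation (always 1.0), then sort by magnitude
ranked = df.corr()['MEDV'].drop('MEDV').abs().sort_values(ascending=False)
print(ranked)
```

Sorting by absolute value treats strong negative relationships (like LSTAT's) as just as informative as strong positive ones.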
Building your model: Linear RegressionNow we can start building the actual machine learning model. There are many regression algorithms in *scikit-learn* that we could use. Here we will use Ordinary Least Squares (OLS) Linear Regression because it is easy to understand and interpret.All machine learning models in *scikit-learn* are implemented in their own classes, which are called *Estimator* classes. The Linear Regression algorithm is implemented in the [LinearRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) class in the **linear_model** module. Before we can use the model, we need to instantiate the class into an object. This is when we will set any parameters of the model. The LinearRegression model doesn't have any particular parameters of importance. | from sklearn.linear_model import LinearRegression
lr = LinearRegression()
The *lr* object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data.To build the model on the training set, we call the **fit** method of the *lr* object, which takes as arguments the NumPy array *X_train* containing the training data and the NumPy array *y_train* of the corresponding training labels. | lr.fit(X_train, y_train)
The “slope” parameters (w), also called weights or coefficients, are stored in the coef_ attribute, while the offset or intercept (b) is stored in the intercept_ attribute: | print("lr.coef_: {}".format(lr.coef_))
print("lr.intercept_: {}".format(lr.intercept_)) | lr.coef_: [ -1.16869578e-01 4.39939421e-02 -5.34808462e-03 2.39455391e+00
-1.56298371e+01 3.76145473e+00 -6.95007160e-03 -1.43520477e+00
2.39755946e-01 -1.12937318e-02 -9.86626289e-01 8.55687565e-03
-5.00029440e-01]
lr.intercept_: 36.980455337620576
The intercept_ attribute is always a single float number, while the coef_ attribute is a NumPy array with one entry per input feature. As we only have 13 input features in this dataset, lr.coef_ has 13 entries.Let’s look at the training set and test set performance: | print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test))) | Training set score: 0.77
Test set score: 0.64
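The `score` method of regression estimators returns the coefficient of determination, R^2 = 1 - SS_res/SS_tot, which is easy to verify by hand. A sketch on synthetic data (not the housing set):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Tiny synthetic regression problem standing in for the housing data
rng = np.random.RandomState(0)
X = rng.rand(50, 1)
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

lr = LinearRegression().fit(X, y)
pred = lr.predict(X)

# lr.score computes R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_manual = 1 - ss_res / ss_tot

print(np.isclose(r2_manual, lr.score(X, y)))  # True
```

R^2 of 1.0 means perfect prediction; 0.0 means the model does no better than always predicting the mean of y, and it can even go negative for very poor fits.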
An R^2 of around 0.64 on the test set is not very good, but we can see that the scores on the training and test sets are a decent distance apart. This means we are likely overfitting. With higher-dimensional datasets (meaning datasets with a large number of features), linear models become more powerful, and there is a higher chance of overfitting. More complicated linear models such as *Ridge Regression* and *Lasso* have been designed to help control this overfitting problem.An R^2 of around 0.77 on the training set is OK, but not great. For a really good fit, we would want an R^2 of around 0.95 or so. This tells us we are missing something. One possibility is we could do some feature engineering and either include polynomial powers of some of the features and/or include products of some of the features.Also, linear models tend to work better when all of the features exist on roughly the same scale, so we could attempt to scale our data as well. Preprocessing and ScalingSome algorithms, like neural networks, SVMs, and k-NearestNeighbors are very sensitive to the scaling of the data; while many others such as linear models with regularization (Ridge, Lasso, etc.) are moderately sensitive to the scaling of the data. Therefore, a common practice is to adjust the features so that the data representation is more suitable for these algorithms. Often, this is a simple per-feature rescaling and shift of the data. Different Kinds of PreprocessingDifferent algorithms benefit from different kinds of scaling and thus Scikit-Learn supports a variety of scaling methods, though they all have a similar API. StandardScalerNeural networks expect all input features to vary in a similar way, and ideally to have a mean of 0, and a variance of 1. When using an ANN, we must rescale our data so that it fulfills these requirements. 
For doing this automatically, *scikit-learn* has the [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.htmlsklearn.preprocessing.StandardScaler). The **StandardScaler** in *scikit-learn* ensures that for each feature the mean is 0 and the variance is 1, bringing all features to the same magnitude. However, this scaling does not ensure any particular minimum and maximum values for the features. MinMaxScalerA common rescaling method for kernel SVMs is to scale the data such that all features are between 0 and 1. We can do this in *scikit-learn* by using the [MinMaxScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.htmlsklearn.preprocessing.MinMaxScaler) preprocessing method. The **MinMaxScaler** shifts the data such that all features are exactly between 0 and 1. For a two-dimensional dataset this means all of the data is contained within the rectangle created by the x-axis between 0 and 1 and the y-axis between 0 and 1. RobustScalerStandard scaling does not ensure any particular minimum and maximum values for the features. The [RobustScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.htmlsklearn.preprocessing.RobustScaler) works similarly to the **StandardScaler** in that it ensures statistical properties for each feature that guarantee that they are on the same scale. However, the **RobustScaler** uses the median and quartiles, instead of mean and variance. This makes the **RobustScaler** ignore data points that are very different from the rest (like measurement errors). These odd data points are also called *outliers*, and can lead to trouble for other scaling techniques. | # Scale the boston dataset
from sklearn.preprocessing import MinMaxScaler
X = MinMaxScaler().fit_transform(boston.data)
X_train, X_test, y_train, y_test = train_test_split(X, boston['target'], random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test))) | Training set score: 0.77
Test set score: 0.64
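To see concretely what each scaler guarantees, here is a small sketch contrasting the three on a single feature that contains an outlier (synthetic numbers, not the housing data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

# One skewed feature with an outlier (100.0), to contrast the three scalers
X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

std = StandardScaler().fit_transform(X)   # mean 0, variance 1
mm = MinMaxScaler().fit_transform(X)      # min 0, max 1
rob = RobustScaler().fit_transform(X)     # median 0, scaled by the IQR

print(std.mean(), std.std())  # ~0.0, ~1.0
print(mm.min(), mm.max())     # 0.0, 1.0
print(np.median(rob))         # 0.0 (the median maps to zero)
```

Note how the outlier dominates MinMaxScaler (the four "normal" points get squashed near 0), while RobustScaler, which uses the median and quartiles, is largely unaffected by it.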
Ordinary Least Squares (OLS) regression is not sensitive to feature scaling, but all of the regularized linear methods which help reduce the overfitting present in OLS are sensitive to feature scaling. Feature EngineeringFeature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature engineering is fundamental to the application of machine learning, and is both difficult and expensive. The need for manual feature engineering can be obviated by automated feature learning.In particular, linear models might benefit greatly from generating new features via techniques such as binning, and adding polynomials and interactions. However, more complex models like random forests and SVMs might be able to learn more complex tasks without explicitly expanding the feature space.In practice, the features that are used (and the match between features and method) is often the most important piece in making a machine learning approach work well. Interactions and PolynomialsOne way to enrich a feature representation, particularly for linear models, is adding *interaction features* - products of individual original features. Another way to enrich a feature representation is to use *polynomials* of the original features - for a given feature x, we might want to consider x^2, x^3, x^4, and so on. This kind of feature engineering is often used in statistical modeling, but it’s also common in many practical machine learning applications.Within *scikit-learn*, the addition of both *interaction features* and *polynomial features* is implemented in [PolynomialFeatures](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.htmlsklearn.preprocessing.PolynomialFeatures) in the **preprocessing** module.In the code below, we modify the boston housing dataset by adding all polynomial features and interactions up to a degree of 2. 
The data originally had 13 features, which were expanded into 105 interaction features. These new features represent all possible interactions between two different original features, as well as the square of each original feature. degree=2 here means that we look at all features that are the product of up to two original features. The exact correspondence between input and output features can be found using the **get_feature_names** method. | from sklearn.datasets import load_boston
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures, StandardScaler, RobustScaler
def load_extended_boston(scaler='minmax'):
boston = load_boston()
X = boston.data
if 'standard' == scaler:
X = StandardScaler().fit_transform(boston.data)
elif 'robust' == scaler:
X = RobustScaler().fit_transform(boston.data)
else:
X = MinMaxScaler().fit_transform(boston.data)
X = PolynomialFeatures(degree=2).fit_transform(X)
return X, boston.target
X, y = load_extended_boston()
X.shape
# What if we fit this new dataset with a vastly expanded set of features using OLS?
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LinearRegression().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test))) | Training set score: 0.95
Test set score: 0.61
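To see where the 105 columns come from, it helps to run **PolynomialFeatures** on a tiny two-feature example, and to check the count formula for 13 features (a sketch):

```python
import math
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two features a=2, b=3; degree=2 produces 1, a, b, a^2, a*b, b^2
X = np.array([[2.0, 3.0]])
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
print(X_poly)        # [[1. 2. 3. 4. 6. 9.]]
print(X_poly.shape)  # (1, 6)

# For n original features: 1 bias column + n linear terms
# + n squares + C(n, 2) pairwise products
n = 13
print(1 + n + n + math.comb(n, 2))  # 105, matching the extended boston set
```

The first column is the constant bias term, which is why the expanded boston data has 105 columns rather than 104.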
Now the basic OLS model is doing a dramatically better job fitting the training set (R^2 of 0.95 vs 0.77).This discrepancy between performance on the training set and the test set is a clear sign of overfitting, and therefore we should try to find a model that allows us to control complexity. One of the most commonly used alternatives to standard linear regression is *ridge regression*, which we will look into next. Ridge RegressionRidge regression is also a linear model for regression, so the formula it uses to make predictions is the same one used for ordinary least squares. In ridge regression, though, the coefficients (w) are chosen not only so that they predict well on the training data, but also to fit an additional constraint. We also want the magnitude of coefficients to be as small as possible; in other words, all entries of w should be close to zero. Intuitively, this means each feature should have as little effect on the outcome as possible (which translates to having a small slope), while still predicting well. This constraint is an example of what is called *regularization*. Regularization means explicitly restricting a model to avoid overfitting. The particular kind used by ridge regression is known as L2 regularization.Ridge regression is implemented in [linear_model.Ridge](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.htmlsklearn.linear_model.Ridge). Let’s see how well it does on the extended Boston Housing dataset: | from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge.score(X_test, y_test))) | Training set score: 0.89
Test set score: 0.75
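For intuition, ridge regression also has a closed-form solution: with no intercept, the coefficients solve (X^T X + alpha * I) w = X^T y. A sketch verifying this against scikit-learn on synthetic data (not the housing set):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Ridge minimizes ||Xw - y||^2 + alpha * ||w||^2; without an intercept the
# minimizer is w = (X^T X + alpha * I)^(-1) X^T y
rng = np.random.RandomState(0)
X = rng.rand(30, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=30)

alpha = 1.0
w_closed = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
print(np.allclose(w_closed, ridge.coef_, atol=1e-6))  # True
```

The added alpha * I term is what shrinks the coefficients toward zero: the larger alpha is, the more the identity matrix dominates and the smaller the solution becomes.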
As you can see, the training set score of Ridge is *lower* than for LinearRegression, while the test set score is *higher*. This is consistent with our expectation. With linear regression, we were overfitting our data. Ridge is a more restricted model, so we are less likely to overfit. A less complex model means worse performance on the training set, but better generalization. As we are only interested in generalization performance, we should choose the Ridge model over the LinearRegression model. The Ridge model makes a trade-off between the simplicity of the model (near-zero coefficients) and its performance on the training set. How much importance the model places on simplicity versus training set performance can be specified by the user, using the **alpha** parameter. In the previous example, we used the default parameter alpha=1.0. There is no reason why this will give us the best trade-off, though. The optimum setting of alpha depends on the particular dataset we are using. Increasing alpha forces coefficients to move more toward zero, which decreases training set performance but might help generalization. For example: | ridge10 = Ridge(alpha=10).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge10.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge10.score(X_test, y_test)))
| Training set score: 0.79
Test set score: 0.64
Decreasing alpha allows the coefficients to be less restricted. For very small values of alpha, coefficients are barely restricted at all, and we end up with a model that resembles LinearRegression: | ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge01.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge01.score(X_test, y_test))) | Training set score: 0.93
Test set score: 0.77
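The alpha trade-off can be explored systematically by sweeping a range of values and watching both the scores and the size of the coefficients. A sketch on a synthetic regression problem (used here instead of the housing data to keep the example self-contained):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the extended housing data: many features
X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in [0.01, 0.1, 1, 10, 100]:
    ridge = Ridge(alpha=alpha).fit(X_train, y_train)
    print(alpha,
          round(ridge.score(X_train, y_train), 3),
          round(ridge.score(X_test, y_test), 3))

# The L2 norm of the coefficient vector shrinks monotonically as alpha grows
norms = [np.linalg.norm(Ridge(alpha=a).fit(X_train, y_train).coef_)
         for a in [0.01, 1, 100]]
print(norms[0] > norms[1] > norms[2])  # True: larger alpha, smaller coefficients
```

This kind of sweep is exactly what the grid search discussed later automates, with cross-validation instead of a single train/test split.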
Here, alpha=0.1 seems to be working well. We could try decreasing alpha even more to improve generalization. For now, notice how the parameter alpha corresponds to the model complexity. Very shortly we need to think about systematic methods for properly select optimal values for parameters such as **alpha**.We can also get a more qualitative insight into how the alpha parameter changes the model by inspecting the coef_ attribute of models with different values of alpha. A higher alpha means a more restricted model, so we expect the entries of coef_ to have smaller magnitude for a high value of alpha than for a low value of alpha. This is confirmed in the plot below: | plt.figure(figsize=(15, 10))
plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.hlines(0, 0, len(lr.coef_))
plt.ylim(-25, 25)
plt.legend()
plt.show()
Clearly, the interactions and polynomial features gave us a good boost in performance when using Ridge. When using a more complex model like a random forest, the story can be a bit different, though. Adding features will benefit linear models the most. For very complex models, adding features may actually slightly decrease the performance.Machine learning is complex. Often you have to try several experiments and just see what works best. Model Evaluation and ImprovementTo evaluate our supervised models, so far we have split our dataset into a training set and a test set using the **train_test_split function**, built a model on the training set by calling the fit method, and evaluated it on the test set using the score method, which for classification computes the fraction of correctly classified samples and for regression computes the R^2.Remember, the reason we split our data into training and test sets is that we are interested in measuring how well our model *generalizes* to new, previously unseen data. We are not interested in how well our model fit the training set, but rather in how well it can make predictions for data that was not observed during training.As we saw when exploring Ridge regression, we need a more robust way to assess generalization performance which is capable of automatically choosing optimal values for hyper-parameters such as **alpha**. Cross-Validation*Cross-validation* is a statistical method of evaluating generalization performance that is more stable and thorough than using a split into a training and a test set. In cross-validation, the data is instead split repeatedly and multiple models are trained. The most commonly used version of cross-validation is *k-fold cross-validation*, where *k* is a user-specified number, usually 5 or 10. When performing five-fold cross-validation, the data is first partitioned into five parts of (approximately) equal size, called *folds*. Next, a sequence of models is trained. 
The first model is trained using the first fold as the test set, and the remaining folds (2–5) are used as the training set. The model is built using the data in folds 2–5, and then the accuracy is evaluated on fold 1. Then another model is built, this time using fold 2 as the test set and the data in folds 1, 3, 4, and 5 as the training set. This process is repeated using folds 3, 4, and 5 as test sets. For each of these five splits of the data into training and test sets, we compute the accuracy. In the end, we have collected five accuracy values.Usually, the first fifth of the data is the first fold, the second fifth of the data is the second fold, and so on.The whole point of cross-validation is to be more robust than a simple train/test split so that the results are not likely to be influenced by a particularly good or bad split of the data. The main disadvantage is that it requires more computation. Cross-Validation in scikit-learnCross-validation is implemented in scikit-learn using the [cross_val_score](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.htmlsklearn.model_selection.cross_val_score) function from the *model_selection* module. The parameters of the **cross_val_score** function are the model we want to evaluate, the training data, and the ground-truth labels. | # Let's evaluate cross-validation on the iris dataset using logistic regression (which is actually classification)
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
iris = load_iris()
logreg = LogisticRegression()
scores = cross_val_score(logreg, iris.data, iris.target)
print("Cross-validation scores: {}".format(scores)) | Cross-validation scores: [ 0.96078431 0.92156863 0.95833333]
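What **cross_val_score** does internally for a classifier can be reproduced by hand with **StratifiedKFold** (a sketch; max_iter is raised here only so the solver converges cleanly):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

iris = load_iris()

# Manually: stratified folds, fit on k-1 of them, score on the held-out one
skf = StratifiedKFold(n_splits=5)
manual_scores = []
for train_idx, test_idx in skf.split(iris.data, iris.target):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(iris.data[train_idx], iris.target[train_idx])
    manual_scores.append(clf.score(iris.data[test_idx], iris.target[test_idx]))

# Automatically: cv=5 with a classifier uses the same stratified folds
auto_scores = cross_val_score(LogisticRegression(max_iter=1000),
                              iris.data, iris.target, cv=5)
print(np.allclose(manual_scores, auto_scores))
```

Nothing magical happens inside cross_val_score; it is simply this loop, plus cloning the estimator so each fold starts from a fresh, unfitted model.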
By default, cross_val_score performs three-fold cross-validation (five-fold in scikit-learn 0.22 and later), returning three accuracy values. We can change the number of folds used by changing the cv parameter: | scores = cross_val_score(logreg, iris.data, iris.target, cv=5)
print("Cross-validation scores: {}".format(scores)) | Cross-validation scores: [ 1. 0.96666667 0.93333333 0.9 1. ]
A common way to summarize the cross-validation accuracy is to compute the mean: | print("Average cross-validation score: {:.2f}".format(scores.mean())) | Average cross-validation score: 0.96
Using the mean cross-validation we can conclude that we expect the model to be around 96% accurate on average. Looking at all five scores produced by the five-fold cross-validation, we can also conclude that there is a relatively high variance in the accuracy between folds, ranging from 100% accuracy to 90% accuracy. This could imply that the model is very dependent on the particular folds used for training, but it could also just be a consequence of the small size of the dataset. Benefits of Cross-ValidationThere are several benefits to using cross-validation instead of a single split into a training and a test set. First, remember that train_test_split performs a random split of the data. Imagine that we are “lucky” when randomly splitting the data, and all examples that are hard to classify end up in the training set. In that case, the test set will only contain “easy” examples, and our test set accuracy will be unrealistically high. Conversely, if we are “unlucky,” we might have randomly put all the hard-to-classify examples in the test set and consequently obtain an unrealistically low score. However, when using cross-validation, each example will be in the test set exactly once: each example is in one of the folds, and each fold is the test set once. Therefore, the model needs to generalize well to all of the samples in the dataset for all of the cross-validation scores (and their mean) to be high.Having multiple splits of the data also provides some information about how sensitive our model is to the selection of the training dataset. For the iris dataset, we saw accuracies between 90% and 100%. This is quite a range, and it provides us with an idea about how the model might perform in the worst case and best case scenarios when applied to new data.Another benefit of cross-validation as compared to using a single split of the data is that we use our data more effectively. 
When using train_test_split, we usually use 75% of the data for training and 25% of the data for evaluation. When using five-fold cross-validation, in each iteration we can use four-fifths of the data (80%) to fit the model. When using 10-fold cross-validation, we can use nine-tenths of the data (90%) to fit the model. More data will usually result in more accurate models.The main disadvantage of cross-validation is increased computational cost. As we are now training k models instead of a single model, cross-validation will be roughly k times slower than doing a single split of the data.It is important to keep in mind that cross-validation is not a way to build a model that can be applied to new data. Cross-validation does not return a model. When calling cross_val_score, multiple models are built internally, but the purpose of cross-validation is only to evaluate how well a given algorithm will generalize when trained on a specific dataset. Stratified k-Fold Cross-Validation and Other StrategiesSplitting the dataset into k folds by starting with the first one-k-th part of the data, as described in the previous section, might not always be a good idea. For example, let’s have a look at the boston housing dataset: | lr = LinearRegression()
scores = cross_val_score(lr, boston.data, boston.target)
print("Cross-validation scores: {}".format(scores)) | Cross-validation scores: [ 0.5828011 0.53193819 -5.85104986]
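When the rows are ordered (for example, sorted by the target), one remedy is to shuffle the fold assignment by passing a **KFold** object with shuffle=True. A sketch on synthetic, deliberately sorted data (not the housing set):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Build a synthetic regression problem and deliberately sort rows by target,
# mimicking the kind of ordering that hurts in-order folds
X, y = make_regression(n_samples=120, n_features=5, noise=5.0, random_state=0)
order = np.argsort(y)
X, y = X[order], y[order]

# In-order folds: each test fold covers one contiguous slice of target values
plain = cross_val_score(LinearRegression(), X, y, cv=KFold(n_splits=3))

# Shuffled folds: each fold is a random sample of the whole target range
shuffled = cross_val_score(LinearRegression(), X, y,
                           cv=KFold(n_splits=3, shuffle=True, random_state=0))
print(plain)
print(shuffled)
```

Passing a cross-validation splitter object as cv works with any estimator, which is how the ShuffleSplit example below plugs into cross_val_score as well.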
As we can see, a default 3-fold cross-validation performed reasonably well for the first two folds, but very poorly for the third one. The fundamental problem here is that if the data isn't organized in a random way, then just taking folds in order doesn't represent a random sampling for each fold. There are multiple possible ways to mitigate this issue. Stratified k-Fold Cross-ValidationAs the simple k-fold strategy would obviously fail for classification problems if the data is organized by target category, *scikit-learn* does not use it for classification, but rather uses *stratified k-fold cross-validation*. In stratified cross-validation, we split the data such that the proportions between classes are the same in each fold as they are in the whole dataset.*scikit-learn* supports stratified k-fold cross-validation via the [StratifiedKFold](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.htmlsklearn.model_selection.StratifiedKFold) class in the *model_selection* module.For example, if 90% of your samples belong to class A and 10% of your samples belong to class B, then stratified cross-validation ensures that in each fold, 90% of samples belong to class A and 10% of samples belong to class B.For regression, *scikit-learn* uses the standard k-fold cross-validation by default. Shuffle-split cross-validationAnother, very flexible strategy for cross-validation is *shuffle-split cross-validation*. In shuffle-split cross-validation, each split samples **train_size** many points for the training set and **test_size** many (disjoint) points for the test set. This splitting is repeated **n_splits** times. 
You can use integers for **train_size** and **test_size** to use absolute sizes for these sets, or floating-point numbers to use fractions of the whole dataset.Since the sampling in *shuffle-split cross-validation* is done in a random fashion, this is a safer alternative to default *k-Fold Cross-Validation* when the data isn't truly randomized.*scikit-learn* supports shuffle-split cross-validation via the [ShuffleSplit](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.htmlsklearn.model_selection.ShuffleSplit) class in the *model_selection* module.There is also a stratified variant of ShuffleSplit, aptly named [StratifiedShuffleSplit](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.htmlsklearn.model_selection.StratifiedShuffleSplit), which can provide more reliable results for classification tasks. | # Let's look at the boston housing dataset again using shuffle-split cross-validation to ensure random sampling
# The following code splits the dataset into 80% training set and 20% test set for 3 iterations:
from sklearn.model_selection import ShuffleSplit
shuffle_split = ShuffleSplit(test_size=.8, train_size=.2, n_splits=3)
scores = cross_val_score(lr, boston.data, boston.target, cv=shuffle_split)
print("Cross-validation scores:\n{}".format(scores)) | Cross-validation scores:
[ 0.69514428 0.69350546 0.63047915]
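The stratified variant can be checked directly: with a 90/10 class split, every test sample drawn by **StratifiedShuffleSplit** should preserve that ratio. A sketch on synthetic labels:

```python
from collections import Counter
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# 90 samples of class 0 and 10 of class 1; stratification keeps the 90/10 ratio
y = np.array([0] * 90 + [1] * 10)
X = np.zeros((100, 1))  # dummy features; only the labels matter here

sss = StratifiedShuffleSplit(n_splits=3, test_size=0.2, random_state=0)
for train_idx, test_idx in sss.split(X, y):
    # Each test set of 20 samples should contain 18 of class 0 and 2 of class 1
    print(Counter(y[test_idx]))
```

A plain ShuffleSplit with the same sizes could, by chance, draw a test set containing no class-1 samples at all, which is exactly what stratification prevents.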
Grid SearchNow that we know how to evaluate how well a model generalizes, we can take the next step and improve the model’s generalization performance by tuning its parameters. We discussed the parameter settings of the Ridge model for ridge regression earlier. Finding the values of the important parameters of a model (the ones that provide the best generalization performance) is a tricky task, but necessary for almost all models and datasets. Because it is such a common task, there are standard methods in *scikit-learn* to help you with it. The most commonly used method is grid search, which basically means trying all possible combinations of the parameters of interest.Consider the case of ridge regression, as implemented in the Ridge class. As we discussed earlier, there is one important parameter: the regularization parameter, *alpha*. Say we want to try the values 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, and 100 for *alpha*. Because we have eleven different settings for *alpha* and *alpha* is the only parameter, we have 11 combinations of parameters in total. Looking at all possible combinations creates a table (or grid) of parameter settings for the Ridge regression model. Simple Grid SearchWe can implement a simple grid search just as a for loop over the parameter, training and evaluating a model for each value: | X, y = load_extended_boston(scaler='standard')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
print("Size of training set: {} size of test set: {}".format(X_train.shape[0], X_test.shape[0]))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
    # for each value of alpha, train a Ridge model
ridge = Ridge(alpha=alpha)
ridge.fit(X_train, y_train)
    # evaluate the Ridge model on the test set
score = ridge.score(X_test, y_test)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
print("Best score: {:.2f}".format(best_score))
print("Best parameters: {}".format(best_parameters)) | Size of training set: 379 size of test set: 127
Best score: 0.78
Best parameters: {'alpha': 50}
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
The Danger of Overfitting the Parameters and the Validation Set
Given this result, we might be tempted to report that we found a model that performs with 78% accuracy on our dataset. However, this claim could be overly optimistic (or just wrong), for the following reason: we tried many different parameters and selected the one with the best accuracy on the test set, but this accuracy won’t necessarily carry over to new data. Because we used the test data to adjust the parameters, we can no longer use it to assess how good the model is. This is the same reason we needed to split the data into training and test sets in the first place; we need an independent dataset to evaluate, one that was not used to create the model. One way to resolve this problem is to split the data again, so we have three sets: the training set to build the model, the validation (or development) set to select the parameters of the model, and the test set to evaluate the performance of the selected parameters. After selecting the best parameters using the validation set, we can rebuild a model using the parameter settings we found, but now training on both the training data and the validation data. This way, we can use as much data as possible to build our model. This leads to the following implementation: | X, y = load_extended_boston(scaler='standard')
# split data into train+validation set and test set
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, random_state=0)
# split train+validation set into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X_trainval, y_trainval, random_state=1)
print("Size of training set: {} size of validation set: {} size of test set:"
" {}\n".format(X_train.shape[0], X_valid.shape[0], X_test.shape[0]))
best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
    # for each value of alpha, train a Ridge model
ridge = Ridge(alpha=alpha)
ridge.fit(X_train, y_train)
    # evaluate the Ridge model on the validation set
score = ridge.score(X_valid, y_valid)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
# rebuild a model on the combined training and validation set,
# and evaluate it on the test set
ridge = Ridge(**best_parameters)
ridge.fit(X_trainval, y_trainval)
test_score = ridge.score(X_test, y_test)
print("Best score on validation set: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with best parameters: {:.2f}".format(test_score)) | Size of training set: 284 size of validation set: 95 size of test set: 127
Best score on validation set: 0.92
Best parameters: {'alpha': 50}
Test set score with best parameters: 0.78
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
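The set sizes printed above follow from two successive 75/25 splits of the 506-sample Boston data. A small sketch of the arithmetic (assuming, as scikit-learn appears to do, that the held-out portion is rounded up):

```python
import math

def two_stage_split_sizes(n_samples, test_frac=0.25, valid_frac=0.25):
    """Sizes for a train/validation/test split done as two successive
    train_test_split-style splits with 25% held out each time."""
    n_test = math.ceil(n_samples * test_frac)      # final test set
    n_trainval = n_samples - n_test                # remainder to split again
    n_valid = math.ceil(n_trainval * valid_frac)   # validation set
    n_train = n_trainval - n_valid                 # what is left to train on
    return n_train, n_valid, n_test

print(two_stage_split_sizes(506))  # (284, 95, 127), matching the output above
```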
The best score on the validation set is 92%. However, the score on the test set—the score that actually tells us how well we generalize—is lower, at 78%. So we can claim a score of 78% on new data. This happens to be the same as before, but now we can make a stronger claim, since the final test set wasn't used in any way, shape, or form during hyperparameter tuning. The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy “leak” information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation—this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Grid Search with Cross-Validation
While the method of splitting the data into a training, a validation, and a test set that we just saw is workable, and relatively commonly used, it is quite sensitive to how exactly the data is split. From the output of the previous code snippet we can see that the search selects 'alpha': 50 as the best parameter. But if we were to take a different part of the training data as the validation set, it may optimize for a different value. For a better estimate of the generalization performance, instead of using a single split into a training and a validation set, we can use cross-validation to evaluate the performance of each parameter combination. This method can be coded up as follows: | best_score = 0
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]:
    # for each value of alpha, train a Ridge model
ridge = Ridge(alpha=alpha)
# perform cross-validation
scores = cross_val_score(ridge, X_trainval, y_trainval, cv=5)
# compute mean cross-validation accuracy
score = np.mean(scores)
# if we got a better score, store the score and parameters
if score > best_score:
best_score = score
best_parameters = {'alpha': alpha}
# rebuild a model on the combined training and validation set,
# and evaluate it on the test set
ridge = Ridge(**best_parameters)
ridge.fit(X_trainval, y_trainval)
test_score = ridge.score(X_test, y_test)
print("Best score on validation set: {:.2f}".format(best_score))
print("Best parameters: ", best_parameters)
print("Test set score with best parameters: {:.2f}".format(test_score)) | Best score on validation set: 0.83
Best parameters: {'alpha': 10}
Test set score with best parameters: 0.77
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
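Stripped of the estimator, grid search with cross-validation is just two nested loops: parameters on the outside, folds on the inside, keeping the parameter with the best mean fold score. A generic pure-Python sketch (the `evaluate` callback and both helper names are our own, not part of scikit-learn):

```python
def k_fold_indices(n_samples, k=5):
    """Split range(n_samples) into k contiguous folds (no shuffling)."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def grid_search_cv(param_values, evaluate, n_samples, k=5):
    """Return the parameter with the best mean score across k folds.
    `evaluate(param, train_idx, valid_idx)` is any user-supplied scorer."""
    best_param, best_score = None, float("-inf")
    folds = k_fold_indices(n_samples, k)
    for param in param_values:
        scores = []
        for i, valid_idx in enumerate(folds):
            # everything outside the current fold is training data
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(evaluate(param, train_idx, valid_idx))
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_score, best_param = mean, param
    return best_param, best_score

# Toy scorer that peaks at param == 10, ignoring the data splits
best, score = grid_search_cv([0.1, 1, 10, 100], lambda p, tr, va: -abs(p - 10), 100)
print(best)  # 10
```

With 11 candidate values and 5 folds this skeleton calls `evaluate` 55 times, which is exactly where the "11 * 5 = 55 models" count in the text comes from.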
To evaluate the accuracy of the Ridge Regression model using a particular setting of alpha with five-fold cross-validation, we need to train 11 * 5 = 55 models. As you can imagine, the main downside of the use of cross-validation is the time it takes to train all these models. However, as you can see here, it is a more reliable method which is less sensitive to how precisely the validation set is sampled from the overall training set, and thus more likely to generalize well. GridSearchCV
Because grid search with cross-validation is such a commonly used method to adjust parameters, *scikit-learn* provides the [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) class, which implements it in the form of an estimator. To use the **GridSearchCV** class, you first need to specify the parameters you want to search over using a dictionary. GridSearchCV will then perform all the necessary model fits. The keys of the dictionary are the names of parameters we want to adjust (as given when constructing the model—in this case, alpha), and the values are the parameter settings we want to try out. Trying the values 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, and 100 for alpha translates to the following dictionary: | param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
print("Parameter grid:\n{}".format(param_grid)) | Parameter grid:
{'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
We can now instantiate the **GridSearchCV** class with the model (*Ridge*), the parameter grid to search (*param_grid*), and the cross-validation strategy we want to use (say, five-fold cross-validation): | from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
grid_search = GridSearchCV(Ridge(), param_grid, cv=5) | _____no_output_____ | Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
**GridSearchCV** will use cross-validation in place of the split into a training and validation set that we used before. However, we still need to split the data into a training and a test set, to avoid overfitting the parameters: | X, y = load_extended_boston(scaler='standard')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) | _____no_output_____ | Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
The *grid_search* object that we created behaves just like an estimator; we can call the standard methods **fit**, **predict**, and **score** on it. However, when we call **fit**, it will run cross-validation for each combination of parameters we specified in param_grid: | grid_search.fit(X_train, y_train)
Fitting the **GridSearchCV** object not only searches for the best parameters, but also automatically fits a new model on the whole training dataset with the parameters that yielded the best cross-validation performance. What happens in fit is therefore equivalent to the result of the code we saw at the beginning of this section. The **GridSearchCV** class provides a very convenient interface to access the retrained model using the predict and score methods. To evaluate how well the best found parameters generalize, we can call score on the test set: | print("Test set score: {:.2f}".format(grid_search.score(X_test, y_test))) | Test set score: 0.77
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
Choosing the parameters using cross-validation, we actually found a model that achieves 77% accuracy on the test set. The important thing here is that we *did not use the test set* to choose the parameters. The parameters that were found are stored in the **`best_params_`** attribute, and the best cross-validation accuracy (the mean accuracy over the different splits for this parameter setting) is stored in **`best_score_`**: | print("Best parameters: {}".format(grid_search.best_params_))
print("Best cross-validation score: {:.2f}".format(grid_search.best_score_)) | Best parameters: {'alpha': 10}
Best cross-validation score: 0.83
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
Sometimes it is helpful to have access to the actual model that was found—for example, to look at coefficients or feature importances. You can access the model with the best parameters trained on the whole training set using the **`best_estimator_`** attribute: | print("Best estimator:\n{}".format(grid_search.best_estimator_)) | Best estimator:
Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001)
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
Because *grid_search* itself has **predict** and **score** methods, using **`best_estimator_`** is not needed to make predictions or evaluate the model. Putting it all together
The one thing we didn't do was experiment with different train/test splits. Let's run it with random splits a number of times and see how consistent the results are: | from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge
param_grid = {'alpha': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100]}
grid_search = GridSearchCV(Ridge(), param_grid, cv=5)
X, y = load_extended_boston(scaler='standard')
for i in range(10):
X_train, X_test, y_train, y_test = train_test_split(X, y)
grid_search.fit(X_train, y_train)
print("Run {} - Test set score: {:.2f} Best parameters: {}".format(i, grid_search.score(X_test, y_test),
grid_search.best_params_)) | Run 0 - Test set score: 0.81 Best parameters: {'alpha': 1}
Run 1 - Test set score: 0.82 Best parameters: {'alpha': 5}
Run 2 - Test set score: 0.81 Best parameters: {'alpha': 10}
Run 3 - Test set score: 0.89 Best parameters: {'alpha': 10}
Run 4 - Test set score: 0.87 Best parameters: {'alpha': 10}
Run 5 - Test set score: 0.84 Best parameters: {'alpha': 10}
Run 6 - Test set score: 0.87 Best parameters: {'alpha': 50}
Run 7 - Test set score: 0.88 Best parameters: {'alpha': 10}
Run 8 - Test set score: 0.88 Best parameters: {'alpha': 10}
Run 9 - Test set score: 0.85 Best parameters: {'alpha': 50}
| Apache-2.0 | SL2_Regression_and_Classification.ipynb | tleonhardt/machine_learning |
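To quantify how consistent those ten runs are, we can summarize the printed test scores with the standard library (the numbers below are copied from the output above):

```python
from statistics import mean, stdev

# Test-set scores from the ten runs printed above
scores = [0.81, 0.82, 0.81, 0.89, 0.87, 0.84, 0.87, 0.88, 0.88, 0.85]

print(f"mean: {mean(scores):.3f}")   # mean: 0.852
print(f"stdev: {stdev(scores):.3f}") # roughly 0.030
```

A spread of about three points of R² from nothing but the random split is a useful reminder that a single test-set number should not be over-interpreted.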
Parameter Management
Once we have chosen an architecture and set our hyperparameters, we proceed to the training loop, where our goal is to find parameter values that minimize our objective function. After training, we will need these parameters in order to make future predictions. Additionally, we will sometimes wish to extract the parameters, either to reuse them in some other context, to save our model to disk so that it may be executed in other software, or for examination in the hopes of gaining scientific understanding. Most of the time, we will be able to ignore the nitty-gritty details of how parameters are declared and manipulated, relying on DJL to do the heavy lifting. However, when we move away from stacked architectures with standard layers, we will sometimes need to get into the weeds of declaring and manipulating parameters. In this section, we cover the following:
* Accessing parameters for debugging, diagnostics, and visualizations.
* Parameter initialization.
* Sharing parameters across different model components.
We start by focusing on an MLP with one hidden layer. | %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.7.0-SNAPSHOT
%maven ai.djl:model-zoo:0.7.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven net.java.dev.jna:jna:5.3.0
%maven ai.djl.mxnet:mxnet-engine:0.7.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-a
import ai.djl.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.ndarray.index.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.training.*;
import ai.djl.training.initializer.*;
import ai.djl.training.dataset.*;
import ai.djl.util.*;
import ai.djl.translate.*;
import ai.djl.inference.Predictor;
NDManager manager = NDManager.newBaseManager();
NDArray x = manager.randomUniform(0, 1, new Shape(2, 4));
Model model = Model.newInstance("lin-reg");
SequentialBlock net = new SequentialBlock();
net.add(Linear.builder().setUnits(8).build());
net.add(Activation.reluBlock());
net.add(Linear.builder().setUnits(1).build());
net.setInitializer(new NormalInitializer());
net.initialize(manager, DataType.FLOAT32, x.getShape());
model.setBlock(net);
Predictor<NDList, NDList> predictor = model.newPredictor(new NoopTranslator());
predictor.predict(new NDList(x)).singletonOrThrow(); // forward computation | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Parameter Access
Let us start with how to access parameters from the models that you already know. Each layer's parameters are conveniently stored in a `Pair` consisting of a unique `String` that serves as a key for the layer and the `Parameter` itself. The `ParameterList` is an extension of `PairList` and is returned with a call to the `getParameters()` method on a `Block`. We can inspect the parameters of the `net` defined above. When a model is defined via the `SequentialBlock` class, we can access any layer's `Pair` by calling `get()` on the `ParameterList` and passing in the index of the parameter we want. Calling `getKey()` and `getValue()` on a `Pair` will get the parameter's name and `Parameter` respectively. We can also directly get the `Parameter` we want from the `ParameterList` by calling `get()` and passing in its unique key (the `String` portion of the `Pair`). If we call `valueAt()` and pass in the index, we will get the `Parameter` directly as well. | ParameterList params = net.getParameters();
// Print out all the keys (unique!)
for (var pair : params) {
System.out.println(pair.getKey());
}
// Use the unique key to access the Parameter
NDArray dense0Weight = params.get("01Linear_weight").getArray();
NDArray dense0Bias = params.get("01Linear_bias").getArray();
// Use indexing to access the Parameter
NDArray dense1Weight = params.valueAt(2).getArray();
NDArray dense1Bias = params.valueAt(3).getArray();
System.out.println(dense0Weight);
System.out.println(dense0Bias);
System.out.println(dense1Weight);
System.out.println(dense1Bias); | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
The output tells us a few important things. First, each fully-connected layer has two parameters, e.g., `dense0Weight` and `dense0Bias`, corresponding to that layer's weights and biases, respectively. The `params` variable is a `ParameterList` which contains the key-value pairs of the layer name and a parameter of the `Parameter` class. With a `Parameter`, we can get the underlying numerical values as `NDArray`s by calling `getArray()` on them! Both the weights and biases are stored as single precision floats (`FLOAT32`). Targeted Parameters
Parameters are complex objects, containing data, gradients, and additional information. That's why we need to request the data explicitly. Note that the bias vector consists of zeroes because we have not updated the network since it was initialized. Note that unlike the biases, the weights are nonzero. This is because unlike biases, weights are initialized randomly. In addition to `getArray()`, each `Parameter` also provides a `requireGradient()` method which returns whether the parameter needs gradients to be computed (which we set on the `NDArray` with `attachGradient()`). The gradient has the same shape as the weight. To actually access the gradient, we simply call `getGradient()` on the `NDArray`. Because we have not invoked backpropagation for this network yet, its values are all 0. We would invoke it by creating a `GradientCollector` instance and running our calculations inside it. | dense0Weight.getGradient();
Collecting Parameters from Nested Blocks
Let us see how the parameter naming conventions work if we nest multiple blocks inside each other. For that we first define a function that produces blocks (a block factory, so to speak) and then combine these inside yet larger blocks. | public SequentialBlock block1() {
SequentialBlock net = new SequentialBlock();
net.add(Linear.builder().setUnits(32).build());
net.add(Activation.reluBlock());
net.add(Linear.builder().setUnits(16).build());
net.add(Activation.reluBlock());
return net;
}
public SequentialBlock block2() {
SequentialBlock net = new SequentialBlock();
for (int i = 0; i < 4; i++) {
net.add(block1());
}
return net;
}
SequentialBlock rgnet = new SequentialBlock();
rgnet.add(block2());
rgnet.add(Linear.builder().setUnits(10).build());
rgnet.setInitializer(new NormalInitializer());
rgnet.initialize(manager, DataType.FLOAT32, x.getShape());
Model model = Model.newInstance("rgnet");
model.setBlock(rgnet);
Predictor<NDList, NDList> predictor = model.newPredictor(new NoopTranslator());
predictor.predict(new NDList(x)).singletonOrThrow(); | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Now that we have designed the network, let us see how it is organized. We can get the list of named parameters by calling `getParameters()`. However, we not only want to see the parameters, but also how our network is structured. To see our network architecture, we can simply print out the block whose architecture we want to see. | /* Network Architecture for RgNet */
rgnet
/* Parameters for RgNet */
for (var param : rgnet.getParameters()) {
System.out.println(param.getValue().getArray());
} | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Since the layers are hierarchically nested, we can also access them by calling their `getChildren()` method to get a `BlockList` (also an extension of `PairList`) of their inner blocks. It shares methods with `ParameterList`, and as such we can use their familiar structure to access the blocks. We can call `get(i)` to get the `Pair` at the index `i` we want, and then finally `getValue()` to get the actual block. We can do this in one step as shown above with `valueAt(i)`. Then we have to repeat that to get that block's child and so on. Here, we access the first major block, within it the second subblock, and within that the bias of the first layer, as follows: | Block majorBlock1 = rgnet.getChildren().get(0).getValue();
Block subBlock2 = majorBlock1.getChildren().valueAt(1);
Block linearLayer1 = subBlock2.getChildren().valueAt(0);
NDArray bias = linearLayer1.getParameters().valueAt(1).getArray();
bias | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Parameter Initialization
Now that we know how to access the parameters, let us look at how to initialize them properly. We discussed the need for initialization in :numref:`sec_numerical_stability`. By default, DJL initializes weight matrices based on your set initializer and the bias parameters are all set to $0$. However, we will often want to initialize our weights according to various other protocols. DJL's `ai.djl.training.initializer` package provides a variety of preset initialization methods. If we want to create a custom initializer, we need to do some extra work. Built-in Initialization
In DJL, when setting the initializer for blocks, the default `setInitializer()` function does not overwrite any previously set initializers. So if you set an initializer earlier, but decide you want to change your initializer and call `setInitializer()` again, the second `setInitializer()` will NOT overwrite your first one. Additionally, when you call `setInitializer()` on a block, all internal blocks will also call `setInitializer()` with the same given `initializer`. This means that we can call `setInitializer()` on the highest level of a block and know that all internal blocks that do not have an initializer already set will be set to that given `initializer`. This setup has the advantage that we don't have to worry about our `setInitializer()` overriding our previous `initializer`s on internal blocks! If you want to, however, you can explicitly set an initializer for a `Parameter` by calling its `setInitializer()` function directly and passing in `true` to the overwrite input. Simply loop over all the parameters returned from `getParameters()` and set their initializers directly! Let us begin by calling on built-in initializers. The code below initializes all parameters to a given constant value 1, by using the `ConstantInitializer()` initializer.
Note that this will not do anything currently since we have already set our initializer in the previous code block. We can verify this by checking the weight of a parameter. | net.setInitializer(new ConstantInitializer(1));
net.initialize(manager, DataType.FLOAT32, x.getShape());
Block linearLayer = net.getChildren().get(0).getValue();
NDArray weight = linearLayer.getParameters().get(0).getValue().getArray();
weight | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
We can see these initializations, however, if we create a new network. Let us write a function to create these network architectures for us conveniently. | public SequentialBlock getNet() {
SequentialBlock net = new SequentialBlock();
net.add(Linear.builder().setUnits(8).build());
net.add(Activation.reluBlock());
net.add(Linear.builder().setUnits(1).build());
return net;
} | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
If we run our previous initializer on this new net and check a parameter, we'll see that everything is initialized properly! (to 7777!) | SequentialBlock net = getNet();
net.setInitializer(new ConstantInitializer(7777));
net.initialize(manager, DataType.FLOAT32, x.getShape());
Block linearLayer = net.getChildren().valueAt(0);
NDArray weight = linearLayer.getParameters().valueAt(0).getArray();
weight | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
We can also initialize all parameters as Gaussian random variables with standard deviation $.01$. | SequentialBlock net = getNet();
net.setInitializer(new NormalInitializer());
net.initialize(manager, DataType.FLOAT32, x.getShape());
Block linearLayer = net.getChildren().valueAt(0);
NDArray weight = linearLayer.getParameters().valueAt(0).getArray();
weight | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
We can also apply different initializers for certain blocks. For example, below we initialize the first layer with the `Xavier` initializer and initialize the second layer to a constant value of 0. We will do this without the `getNet()` function as it will be easier to have the reference to each block we want to set. | SequentialBlock net = new SequentialBlock();
Linear linear1 = Linear.builder().setUnits(8).build();
net.add(linear1);
net.add(Activation.reluBlock());
Linear linear2 = Linear.builder().setUnits(1).build();
net.add(linear2);
linear1.setInitializer(new XavierInitializer());
linear1.initialize(manager, DataType.FLOAT32, x.getShape());
linear2.setInitializer(Initializer.ZEROS);
linear2.initialize(manager, DataType.FLOAT32, x.getShape());
System.out.println(linear1.getParameters().valueAt(0).getArray());
System.out.println(linear2.getParameters().valueAt(0).getArray()); | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Finally, we can loop over the `ParameterList` and set their initializers individually. When setting initializers directly on the `Parameter`, you must pass in an `overwrite` boolean along with the initializer to declare whether you want your current initializer to overwrite the previous initializer if one has already been set. Here, we do want to overwrite and so pass in `true`. For this example, however, since we haven't set the `weight` initializers before, there is no initializer to overwrite so we could pass in `false` and still have the same outcome. However, since `bias` parameters are automatically set to initialize at 0, to properly set our initializer here, we have to set overwrite to `true`. | SequentialBlock net = getNet();
ParameterList params = net.getParameters();
for (int i = 0; i < params.size(); i++) {
// Here we interleave initializers.
// We initialize parameters at even indexes to 0
// and parameters at odd indexes to 2.
Parameter param = params.valueAt(i);
if (i % 2 == 0) {
// All weight parameters happen to be at even indices.
// We set them to initialize to 0.
// There is no need to overwrite
// since no initializer has been set for them previously.
param.setInitializer(new ConstantInitializer(0), false);
}
else {
// All bias parameters happen to be at odd indices.
// We set them to initialize to 2.
// To set the initializer here properly, we must pass in true
// for overwrite
// since bias parameters automatically have their
// initializer set to 0.
param.setInitializer(new ConstantInitializer(2), true);
}
}
net.initialize(manager, DataType.FLOAT32, x.getShape());
for (var param : net.getParameters()) {
System.out.println(param.getKey());
System.out.println(param.getValue().getArray());
} | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Custom Initialization
Sometimes, the initialization methods we need are not standard in DJL. In these cases, we can define a class to implement the `Initializer` interface. We only have to implement the `initialize()` function, which takes an `NDManager`, a `Shape`, and the `DataType`. We then create the `NDArray` with the aforementioned `Shape` and `DataType` and initialize it to what we want! You can also design your initializer to take in some parameters. Simply declare them as fields in the class and pass them in as inputs to the constructor! In the example below, we define an initializer for the following strange distribution:$$\begin{aligned} w \sim \begin{cases} U[5, 10] & \text{ with probability } \frac{1}{4} \\ 0 & \text{ with probability } \frac{1}{2} \\ U[-10, -5] & \text{ with probability } \frac{1}{4} \end{cases}\end{aligned}$$ | class MyInit implements Initializer {
public MyInit() {}
@Override
public NDArray initialize(NDManager manager, Shape shape, DataType dataType) {
System.out.printf("Init %s\n", shape.toString());
// Here we generate data points
// from a uniform distribution [-10, 10]
NDArray data = manager.randomUniform(-10, 10, shape, dataType);
// We keep the data points whose absolute value is >= 5
// and set the others to 0.
// This generates the distribution `w` shown above.
NDArray absGte5 = data.abs().gte(5); // returns boolean NDArray where
// true indicates abs >= 5 and
// false otherwise
return data.mul(absGte5); // keeps true indices and sets false indices to 0.
// special operation when multiplying a numerical
// NDArray with a boolean NDArray
}
}
SequentialBlock net = getNet();
net.setInitializer(new MyInit());
net.initialize(manager, DataType.FLOAT32, x.getShape());
Block linearLayer = net.getChildren().valueAt(0);
NDArray weight = linearLayer.getParameters().valueAt(0).getArray();
weight | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Note that we always have the option of setting parameters directly by calling `getValue().getArray()` to access the underlying `NDArray`. A note for advanced users: you cannot directly modify parameters within a `GradientCollector` scope. You must modify them outside that scope to avoid confusing the automatic differentiation mechanics. | // '__'i() is an inplace operation to modify the original NDArray
NDArray weightLayer = net.getChildren().valueAt(0)
.getParameters().valueAt(0).getArray();
weightLayer.addi(7);
weightLayer.divi(9);
weightLayer.set(new NDIndex(0, 0), 2020); // set the (0, 0) index to 2020
weightLayer; | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
Tied Parameters
Often, we want to share parameters across multiple layers. Later we will see that when learning word embeddings, it might be sensible to use the same parameters both for encoding and decoding words. We discussed one such case when we introduced :numref:`sec_model_construction`. Let us see how to do this a bit more elegantly. In the following we allocate a dense layer and then use its parameters specifically to set those of another layer. | SequentialBlock net = new SequentialBlock();
// We need to give the shared layer a name
// such that we can reference its parameters
Block shared = Linear.builder().setUnits(8).build();
SequentialBlock sharedRelu = new SequentialBlock();
sharedRelu.add(shared);
sharedRelu.add(Activation.reluBlock());
net.add(Linear.builder().setUnits(8).build());
net.add(Activation.reluBlock());
net.add(sharedRelu);
net.add(sharedRelu);
net.add(Linear.builder().setUnits(10).build());
NDArray x = manager.randomUniform(-10f, 10f, new Shape(2, 20), DataType.FLOAT32);
net.setInitializer(new NormalInitializer());
net.initialize(manager, DataType.FLOAT32, x.getShape());
model.setBlock(net);
Predictor<NDList, NDList> predictor = model.newPredictor(new NoopTranslator());
System.out.println(predictor.predict(new NDList(x)).singletonOrThrow());
// Check that the parameters are the same
NDArray shared1 = net.getChildren().valueAt(2)
.getParameters().valueAt(0).getArray();
NDArray shared2 = net.getChildren().valueAt(3)
.getParameters().valueAt(0).getArray();
shared1.eq(shared2); | _____no_output_____ | MIT-0 | chapter_deep-learning-computation/parameters.ipynb | dltech-xyz/d2l-java |
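Parameter tying above works because the very same block object is added to the network twice; the mechanism is plain object identity, which this NumPy sketch (an analogy, not DJL code) illustrates:

```python
import numpy as np

shared = np.zeros(3)           # one parameter array
layers = [shared, shared]      # the same object registered under two "layers"
layers[0] += 1.0               # an update made through the first layer...
print(layers[1])               # ...is seen by the second: the parameters are tied
print(layers[0] is layers[1])
```

The same reasoning explains why gradients of tied layers accumulate into a single parameter during backpropagation.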
Pandas for Data Science | import pandas as pd
s = pd.Series([3,-5,7,4], index = ['a','b','c','d']) | _____no_output_____ | BSD-4-Clause-UC | Untitled.ipynb | subhadip2038/Python |
Displaying the one-dimensional Series | s | _____no_output_____ | BSD-4-Clause-UC | Untitled.ipynb | subhadip2038/Python |
Create a two-dimensional DataFrame | data = {'Country': ['Belgium','India','Brazil'],'Capital':['Brussels','New Delhi','Brasilia'],
'Population': [11190846,1303171035,207847528]}
df= pd.DataFrame(data, columns = ['Country','Capital','Population'])
df | _____no_output_____ | BSD-4-Clause-UC | Untitled.ipynb | subhadip2038/Python |
Selection of values | s['b']
df[1:]
df[0:2]
df[:]
df[2:2]
df[0:2]
df.iloc[[0],[0]]
df.loc[[0],['Country']]
df.loc[2, 'Capital']  # .ix has been removed from pandas; .loc selects by label
df['Country']
df.loc[[0],['Country']]
df.loc[[2],['Capital']]
df.Country
df.Capital
df.Population
df.Country
df[:2]
df[1:2]
df[2:3]
df[:3]
df[2:]
df[1:]
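The difference between position-based and label-based selection explored above can be summarised in one self-contained sketch (the column values repeat the ones used in this notebook):

```python
import pandas as pd

df = pd.DataFrame({'Country': ['Belgium', 'India', 'Brazil'],
                   'Capital': ['Brussels', 'New Delhi', 'Brasilia'],
                   'Population': [11190846, 1303171035, 207847528]})

print(df.iloc[0, 0])         # position-based: row 0, column 0 -> Belgium
print(df.loc[1, 'Capital'])  # label-based: index label 1 -> New Delhi
print(df['Country'][2])      # column first, then row label -> Brazil
```

`iloc` always counts positions, while `loc` follows index labels; here the two coincide only because the default index is 0, 1, 2.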
import pandas as pd
energy = pd.read_excel('Energy Indicators.xls', header= 16,usecols="C:F")
energy.head()
energy.tail(50)
new_name = {'Unnamed: 2' :'Country',
'Energy Supply':'Energy Supply',
'Energy Supply per capita':'Energy Supply per capita',
'Renewable Electricity Production':'% Renewable'}
energy.rename(columns= new_name, inplace=True)
energy.tail()
import pandas as pd
energy = pd.read_excel('Energy Indicators.xls', header= 16,usecols="C:F")
new_name = {'Unnamed: 2' :'Country',
'Energy Supply':'Energy Supply',
'Energy Supply per capita':'Energy Supply per capita',
'Renewable Electricity Production':'% Renewable'}
energy.rename(columns= new_name, inplace=True)
energy
# drop the first row (the units header) by its index label
energy = energy.drop(energy.index[0], axis = 0)
energy
| _____no_output_____ | BSD-4-Clause-UC | Untitled.ipynb | subhadip2038/Python |
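The rename-then-drop pattern used on the energy sheet can be sketched on a tiny stand-in frame (the column contents below are illustrative, not taken from the real spreadsheet):

```python
import pandas as pd

# a small stand-in for the Energy Indicators sheet
df = pd.DataFrame({'Unnamed: 2': ['Units', 'Afghanistan'],
                   'Energy Supply': ['Petajoules', 321]})
df = df.rename(columns={'Unnamed: 2': 'Country'})
df = df.drop(df.index[0], axis=0)   # drop the units row by its index label
print(list(df.columns))
print(len(df))
```

Renaming by dictionary only touches the keys you list, and `df.index[0]` always refers to whatever label the first row currently carries.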
***Introduction to Radar Using Python and MATLAB*** Andy Harrison - Copyright (C) 2019 Artech House Frustum Radar Cross Section*** Referring to Section 7.4.1.7, the frustum geometry is shown in Figure 7.13. An approximation for the radar cross section due to linearly polarized incident energy is given by (Equation 7.60)$$ \sigma = \frac{b\, \lambda}{8\pi\sin\theta_i} \tan^2(\theta_i - \alpha) \hspace{0.5in} \text{(m}^2\text{)},$$When the incident angle is normal to the side of the frustum, $\theta_i = 90 + \alpha$, an approximation for the radar cross section is (Equation 7.61)$$ \sigma = \frac{8\pi\big(z_2^{1.5} - z_1^{1.5} \big)^2 \sin\alpha}{9\lambda\cos^4\alpha} \hspace{0.5in} \text{(m}^2\text{)}.$$When the incident angle is either $0^o$ or $180^o$, the radar cross section is approximated by a flat circular plate, as given in Table 7.1.*** Begin by getting the library path | import lib_path | _____no_output_____ | Apache-2.0 | jupyter/Chapter07/frustum.ipynb | mberkanbicer/software |
Set the operating frequency (Hz), the nose radius (m), the base radius (m) and the length (m) | frequency = 1e9
nose_radius = 0.1
base_radius = 0.8
length = 3.0 | _____no_output_____ | Apache-2.0 | jupyter/Chapter07/frustum.ipynb | mberkanbicer/software |
Set the incident angles using the `linspace` routine from `scipy` | from numpy import linspace
from scipy.constants import pi
incident_angle = linspace(0, pi, 1801) | _____no_output_____ | Apache-2.0 | jupyter/Chapter07/frustum.ipynb | mberkanbicer/software |
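Equation 7.60 can also be evaluated without the book's library. In this sketch the identification of $b$ with the base radius and the half-cone angle $\tan\alpha = (r_2 - r_1)/L$ are assumptions for illustration, not the library's documented convention:

```python
from math import atan, pi, sin, tan

def rcs_eq_7_60(wavelength, b, half_cone_angle, incident_angle):
    """Sketch of Equation 7.60: RCS at oblique incidence, away from
    normal incidence and away from the nose-on/base-on cases."""
    return (b * wavelength / (8.0 * pi * sin(incident_angle))
            * tan(incident_angle - half_cone_angle) ** 2)

# assumed geometry: tan(alpha) = (base_radius - nose_radius) / length
alpha = atan((0.8 - 0.1) / 3.0)
wavelength = 3e8 / 1e9   # c / f for the 1 GHz case above
print(rcs_eq_7_60(wavelength, 0.8, alpha, pi / 3))
```

Note the tangent term vanishes when the incident angle equals the half-cone angle, so this expression alone cannot cover the specular case handled by Equation 7.61.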
Calculate the radar cross section (m^2) for the frustum | from Libs.rcs.frustum import radar_cross_section
from numpy import array
rcs = array([radar_cross_section(frequency, nose_radius, base_radius, length, ia) for ia in incident_angle])
from matplotlib import pyplot as plt
from numpy import log10, degrees
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Display the results
plt.plot(degrees(incident_angle), 10 * log10(rcs), '')
# Set the plot title and labels
plt.title('RCS vs Incident Angle', size=14)
plt.ylabel('RCS (dBsm)', size=12)
plt.xlabel('Incident Angle (deg)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5) | _____no_output_____ | Apache-2.0 | jupyter/Chapter07/frustum.ipynb | mberkanbicer/software |
Advent of Code Day 1:see [here](https://adventofcode.com/2019/day/1) for detail Part 1: | input=[56123,145192,123702,66722,148748,53337,147279,126828,118438,54030,145839,87751,58832,90085,113196,104802,61235,136935,108620,60795,107908,123023,142399,131074,123411,122653,84776,100891,78816,62762,92077,91428,56831,65122,94694,78668,112506,73406,118239,57897,59200,54437,55185,102667,86076,80655,83406,141502,67171,88472,149260,68395,56828,108798,125682,68203,118263,101824,94853,68536,95646,120283,135355,82701,92243,122282,55760,129959,142814,56599,70836,69996,85262,126648,69043,67460,119934,82453,147012,72957,53374,97577,59696,121630,122666,116591,145967,75699,85963,140970,75612,78792,100795,92034,132569,117172,134179,109504,103707,54664]
def calcFuel(x):
return x//3-2
print(sum(map(calcFuel,input))) | 3223398
| MIT | Advent_of_Code_1.ipynb | wustudent/advent-of-code-2019 |
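The formula `x//3 - 2` can be checked against the worked examples given in the puzzle statement:

```python
def calc_fuel(mass):
    # fuel required: mass divided by three, rounded down, minus two
    return mass // 3 - 2

# the four worked examples from the Day 1 puzzle text
print([calc_fuel(m) for m in (12, 14, 1969, 100756)])  # [2, 2, 654, 33583]
```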
Part 2: | def calcExtra(x):
    total = 0          # renamed from 'sum' to avoid shadowing the built-in
    t = x
    while t>0:
        t = calcFuel(t)
        if t>0:
            total += t
    return total
print(sum(map(calcExtra,input))) | 4832253
| MIT | Advent_of_Code_1.ipynb | wustudent/advent-of-code-2019 |
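The loop above can equivalently be written recursively; the expected totals come from the puzzle's own worked examples:

```python
def calc_fuel(mass):
    return mass // 3 - 2

def total_fuel(mass):
    # add fuel for the fuel itself, recursively, until the requirement drops to <= 0
    fuel = calc_fuel(mass)
    return 0 if fuel <= 0 else fuel + total_fuel(fuel)

print(total_fuel(1969))    # 966, matching the puzzle's worked example
print(total_fuel(100756))  # 50346
```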
Reconstruction model | callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=100)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
model.fit(recon_train_ds, epochs=1000, validation_data=recon_test_ds, callbacks=[callback]) | Epoch 1/1000
23/23 [==============================] - 19s 758ms/step - loss: 2.6931e-04 - val_loss: 0.0014
Epoch 2/1000
23/23 [==============================] - 16s 692ms/step - loss: 3.0463e-04 - val_loss: 0.0013
Epoch 3/1000
23/23 [==============================] - 16s 684ms/step - loss: 2.7788e-04 - val_loss: 4.3864e-04
Epoch 4/1000
23/23 [==============================] - 15s 667ms/step - loss: 2.6806e-04 - val_loss: 3.5493e-04
Epoch 5/1000
23/23 [==============================] - 15s 667ms/step - loss: 2.6256e-04 - val_loss: 2.2297e-04
Epoch 6/1000
23/23 [==============================] - 16s 710ms/step - loss: 2.5900e-04 - val_loss: 3.0397e-04
Epoch 7/1000
23/23 [==============================] - 16s 696ms/step - loss: 2.6216e-04 - val_loss: 7.5065e-04
Epoch 8/1000
23/23 [==============================] - 16s 686ms/step - loss: 2.6271e-04 - val_loss: 2.6105e-04
Epoch 9/1000
23/23 [==============================] - 15s 662ms/step - loss: 2.7049e-04 - val_loss: 4.4475e-04
Epoch 10/1000
23/23 [==============================] - 15s 665ms/step - loss: 3.6309e-04 - val_loss: 0.0033
Epoch 11/1000
23/23 [==============================] - 15s 661ms/step - loss: 4.0707e-04 - val_loss: 0.0044
Epoch 12/1000
23/23 [==============================] - 15s 664ms/step - loss: 3.3436e-04 - val_loss: 0.0015
Epoch 13/1000
23/23 [==============================] - 15s 662ms/step - loss: 3.2791e-04 - val_loss: 7.9584e-04
Epoch 14/1000
23/23 [==============================] - 15s 660ms/step - loss: 3.0325e-04 - val_loss: 0.0011
Epoch 15/1000
23/23 [==============================] - 15s 663ms/step - loss: 2.6937e-04 - val_loss: 0.0014
Epoch 16/1000
23/23 [==============================] - 15s 660ms/step - loss: 2.6344e-04 - val_loss: 7.5094e-04
Epoch 17/1000
23/23 [==============================] - 15s 660ms/step - loss: 2.5912e-04 - val_loss: 2.7722e-04
Epoch 18/1000
23/23 [==============================] - 15s 661ms/step - loss: 2.5858e-04 - val_loss: 8.7809e-04
Epoch 19/1000
23/23 [==============================] - 15s 661ms/step - loss: 2.7711e-04 - val_loss: 2.1553e-04
Epoch 20/1000
23/23 [==============================] - 15s 661ms/step - loss: 2.6730e-04 - val_loss: 2.6714e-04
Epoch 21/1000
23/23 [==============================] - 15s 662ms/step - loss: 2.6007e-04 - val_loss: 4.6356e-04
Epoch 22/1000
23/23 [==============================] - 15s 661ms/step - loss: 2.4658e-04 - val_loss: 0.0015
Epoch 23/1000
23/23 [==============================] - 15s 660ms/step - loss: 2.4077e-04 - val_loss: 0.0012
Epoch 24/1000
23/23 [==============================] - 15s 664ms/step - loss: 2.3507e-04 - val_loss: 0.0015
Epoch 25/1000
23/23 [==============================] - 17s 731ms/step - loss: 2.4945e-04 - val_loss: 2.9421e-04
Epoch 26/1000
23/23 [==============================] - 16s 684ms/step - loss: 2.4924e-04 - val_loss: 0.0013
Epoch 27/1000
23/23 [==============================] - 15s 670ms/step - loss: 2.4059e-04 - val_loss: 7.0216e-04
Epoch 28/1000
23/23 [==============================] - 16s 675ms/step - loss: 2.3116e-04 - val_loss: 3.3376e-04
Epoch 29/1000
23/23 [==============================] - 15s 635ms/step - loss: 2.3752e-04 - val_loss: 0.0029
Epoch 30/1000
23/23 [==============================] - 15s 646ms/step - loss: 2.2816e-04 - val_loss: 3.0590e-04
Epoch 31/1000
23/23 [==============================] - 14s 626ms/step - loss: 2.2568e-04 - val_loss: 2.8490e-04
Epoch 32/1000
23/23 [==============================] - 14s 624ms/step - loss: 2.1650e-04 - val_loss: 8.1723e-04
Epoch 33/1000
23/23 [==============================] - 15s 647ms/step - loss: 2.2072e-04 - val_loss: 0.0025
Epoch 34/1000
23/23 [==============================] - 14s 628ms/step - loss: 2.9488e-04 - val_loss: 2.3816e-04
Epoch 35/1000
23/23 [==============================] - 14s 622ms/step - loss: 2.9208e-04 - val_loss: 0.0030
Epoch 36/1000
23/23 [==============================] - 14s 629ms/step - loss: 2.4878e-04 - val_loss: 0.0036
Epoch 37/1000
23/23 [==============================] - 15s 633ms/step - loss: 2.9344e-04 - val_loss: 9.4073e-04
Epoch 38/1000
23/23 [==============================] - 15s 632ms/step - loss: 2.4745e-04 - val_loss: 0.0042
Epoch 39/1000
23/23 [==============================] - 15s 639ms/step - loss: 2.3642e-04 - val_loss: 9.5809e-04
Epoch 40/1000
23/23 [==============================] - 16s 687ms/step - loss: 2.3311e-04 - val_loss: 3.2295e-04
Epoch 41/1000
23/23 [==============================] - 16s 678ms/step - loss: 2.3596e-04 - val_loss: 2.5416e-04
Epoch 42/1000
23/23 [==============================] - 16s 701ms/step - loss: 2.3155e-04 - val_loss: 1.9505e-04
Epoch 43/1000
23/23 [==============================] - 16s 682ms/step - loss: 2.4357e-04 - val_loss: 5.2335e-04
Epoch 44/1000
23/23 [==============================] - 16s 679ms/step - loss: 2.5182e-04 - val_loss: 3.6590e-04
Epoch 45/1000
23/23 [==============================] - 15s 669ms/step - loss: 2.4585e-04 - val_loss: 9.1492e-04
Epoch 46/1000
23/23 [==============================] - 15s 665ms/step - loss: 2.2842e-04 - val_loss: 5.7687e-04
Epoch 47/1000
23/23 [==============================] - 15s 667ms/step - loss: 2.2323e-04 - val_loss: 1.5992e-04
Epoch 48/1000
23/23 [==============================] - 16s 676ms/step - loss: 2.3632e-04 - val_loss: 7.4080e-04
Epoch 49/1000
23/23 [==============================] - 16s 676ms/step - loss: 2.2493e-04 - val_loss: 0.0032
Epoch 50/1000
23/23 [==============================] - 15s 669ms/step - loss: 2.6978e-04 - val_loss: 4.4632e-04
Epoch 51/1000
23/23 [==============================] - 15s 668ms/step - loss: 2.2619e-04 - val_loss: 9.1764e-04
Epoch 52/1000
23/23 [==============================] - 15s 667ms/step - loss: 2.1252e-04 - val_loss: 2.3814e-04
Epoch 53/1000
23/23 [==============================] - 16s 683ms/step - loss: 2.0336e-04 - val_loss: 1.6774e-04
Epoch 54/1000
23/23 [==============================] - 15s 672ms/step - loss: 2.0439e-04 - val_loss: 4.5984e-04
Epoch 55/1000
23/23 [==============================] - 16s 675ms/step - loss: 2.0363e-04 - val_loss: 0.0013
Epoch 56/1000
23/23 [==============================] - 15s 670ms/step - loss: 2.2389e-04 - val_loss: 0.0065
Epoch 57/1000
23/23 [==============================] - 15s 671ms/step - loss: 2.7850e-04 - val_loss: 6.5166e-04
Epoch 58/1000
23/23 [==============================] - 15s 662ms/step - loss: 2.6746e-04 - val_loss: 0.0011
Epoch 59/1000
23/23 [==============================] - 15s 671ms/step - loss: 2.6016e-04 - val_loss: 3.5491e-04
Epoch 60/1000
23/23 [==============================] - 15s 671ms/step - loss: 2.0630e-04 - val_loss: 1.3370e-04
Epoch 61/1000
23/23 [==============================] - 15s 669ms/step - loss: 2.0109e-04 - val_loss: 2.0369e-04
Epoch 62/1000
23/23 [==============================] - 16s 675ms/step - loss: 2.0214e-04 - val_loss: 3.9300e-04
Epoch 63/1000
23/23 [==============================] - 16s 675ms/step - loss: 1.9187e-04 - val_loss: 2.6768e-04
Epoch 64/1000
23/23 [==============================] - 16s 676ms/step - loss: 2.0184e-04 - val_loss: 2.5621e-04
Epoch 65/1000
23/23 [==============================] - 15s 669ms/step - loss: 2.0221e-04 - val_loss: 2.5124e-04
Epoch 66/1000
23/23 [==============================] - 16s 678ms/step - loss: 1.8469e-04 - val_loss: 2.4853e-04
Epoch 67/1000
23/23 [==============================] - 16s 679ms/step - loss: 2.1092e-04 - val_loss: 2.8843e-04
Epoch 68/1000
23/23 [==============================] - 15s 671ms/step - loss: 1.9626e-04 - val_loss: 0.0019
Epoch 69/1000
23/23 [==============================] - 15s 668ms/step - loss: 1.9190e-04 - val_loss: 2.1657e-04
Epoch 70/1000
23/23 [==============================] - 15s 667ms/step - loss: 1.9456e-04 - val_loss: 2.4827e-04
Epoch 71/1000
23/23 [==============================] - 15s 673ms/step - loss: 1.8696e-04 - val_loss: 2.3935e-04
Epoch 72/1000
23/23 [==============================] - 16s 678ms/step - loss: 1.7884e-04 - val_loss: 4.1783e-04
Epoch 73/1000
23/23 [==============================] - 16s 675ms/step - loss: 1.7981e-04 - val_loss: 1.9446e-04
Epoch 74/1000
23/23 [==============================] - 15s 667ms/step - loss: 2.0818e-04 - val_loss: 3.9886e-04
Epoch 75/1000
23/23 [==============================] - 15s 667ms/step - loss: 1.9467e-04 - val_loss: 0.0014
Epoch 76/1000
23/23 [==============================] - 16s 694ms/step - loss: 2.1267e-04 - val_loss: 5.1296e-04
Epoch 77/1000
23/23 [==============================] - 16s 692ms/step - loss: 2.3400e-04 - val_loss: 6.7129e-04
Epoch 78/1000
23/23 [==============================] - 16s 699ms/step - loss: 2.2141e-04 - val_loss: 0.0010
Epoch 79/1000
23/23 [==============================] - 16s 709ms/step - loss: 2.0008e-04 - val_loss: 9.8638e-04
Epoch 80/1000
23/23 [==============================] - 16s 676ms/step - loss: 2.3342e-04 - val_loss: 1.6988e-04
Epoch 81/1000
23/23 [==============================] - 17s 727ms/step - loss: 1.9834e-04 - val_loss: 0.0025
Epoch 82/1000
23/23 [==============================] - 16s 692ms/step - loss: 2.0885e-04 - val_loss: 0.0023
Epoch 83/1000
23/23 [==============================] - 16s 694ms/step - loss: 2.0974e-04 - val_loss: 6.0048e-04
Epoch 84/1000
23/23 [==============================] - 15s 670ms/step - loss: 2.0607e-04 - val_loss: 0.0084
Epoch 85/1000
23/23 [==============================] - 16s 692ms/step - loss: 2.0870e-04 - val_loss: 0.0017
Epoch 86/1000
23/23 [==============================] - 16s 706ms/step - loss: 1.9577e-04 - val_loss: 3.6291e-04
Epoch 87/1000
23/23 [==============================] - 17s 723ms/step - loss: 1.7144e-04 - val_loss: 4.3733e-04
Epoch 88/1000
23/23 [==============================] - 16s 701ms/step - loss: 1.9855e-04 - val_loss: 4.0861e-04
Epoch 89/1000
23/23 [==============================] - 16s 702ms/step - loss: 1.9790e-04 - val_loss: 1.7028e-04
Epoch 90/1000
23/23 [==============================] - 16s 682ms/step - loss: 1.8026e-04 - val_loss: 9.2695e-04
Epoch 91/1000
23/23 [==============================] - 16s 701ms/step - loss: 1.7046e-04 - val_loss: 9.8231e-04
Epoch 92/1000
23/23 [==============================] - 18s 797ms/step - loss: 1.6572e-04 - val_loss: 6.0884e-04
Epoch 93/1000
23/23 [==============================] - 17s 753ms/step - loss: 1.5448e-04 - val_loss: 2.2141e-04
Epoch 94/1000
23/23 [==============================] - 15s 671ms/step - loss: 1.6892e-04 - val_loss: 4.5721e-04
Epoch 95/1000
23/23 [==============================] - 15s 671ms/step - loss: 1.7620e-04 - val_loss: 2.8149e-04
Epoch 96/1000
23/23 [==============================] - 16s 673ms/step - loss: 1.7527e-04 - val_loss: 3.0979e-04
Epoch 97/1000
23/23 [==============================] - 15s 658ms/step - loss: 1.7063e-04 - val_loss: 4.5756e-04
Epoch 98/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.6344e-04 - val_loss: 2.7359e-04
Epoch 99/1000
23/23 [==============================] - 16s 695ms/step - loss: 1.5203e-04 - val_loss: 4.3915e-04
Epoch 100/1000
23/23 [==============================] - 16s 700ms/step - loss: 1.7857e-04 - val_loss: 0.0011
Epoch 101/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.9629e-04 - val_loss: 3.3144e-04
Epoch 102/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.6494e-04 - val_loss: 4.2757e-04
Epoch 103/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.5575e-04 - val_loss: 5.2974e-04
Epoch 104/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.5404e-04 - val_loss: 2.1810e-04
Epoch 105/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.3957e-04 - val_loss: 2.0010e-04
Epoch 106/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.4385e-04 - val_loss: 5.0322e-04
Epoch 107/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.3835e-04 - val_loss: 0.0012
Epoch 108/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.5717e-04 - val_loss: 8.0037e-04
Epoch 109/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.4003e-04 - val_loss: 3.7162e-04
Epoch 110/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.5414e-04 - val_loss: 4.7373e-04
Epoch 111/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.3622e-04 - val_loss: 3.5076e-04
Epoch 112/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.5844e-04 - val_loss: 5.5163e-04
Epoch 113/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.6136e-04 - val_loss: 2.5431e-04
Epoch 114/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.5537e-04 - val_loss: 1.7010e-04
Epoch 115/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.6491e-04 - val_loss: 5.0787e-04
Epoch 116/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.7724e-04 - val_loss: 0.0017
Epoch 117/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.6633e-04 - val_loss: 7.8944e-04
Epoch 118/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.7287e-04 - val_loss: 2.8613e-04
Epoch 119/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.8492e-04 - val_loss: 4.1230e-04
Epoch 120/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.4525e-04 - val_loss: 0.0010
Epoch 121/1000
23/23 [==============================] - 15s 648ms/step - loss: 1.3240e-04 - val_loss: 2.0035e-04
Epoch 122/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.3879e-04 - val_loss: 2.8640e-04
Epoch 123/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.3572e-04 - val_loss: 0.0013
Epoch 124/1000
23/23 [==============================] - 15s 650ms/step - loss: 1.4374e-04 - val_loss: 4.2176e-04
Epoch 125/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.4544e-04 - val_loss: 8.4127e-04
Epoch 126/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.3499e-04 - val_loss: 2.4049e-04
Epoch 127/1000
23/23 [==============================] - 15s 652ms/step - loss: 1.6036e-04 - val_loss: 0.0039
Epoch 128/1000
23/23 [==============================] - 16s 695ms/step - loss: 2.0673e-04 - val_loss: 0.0018
Epoch 129/1000
23/23 [==============================] - 15s 654ms/step - loss: 2.1472e-04 - val_loss: 6.8521e-04
Epoch 130/1000
23/23 [==============================] - 15s 671ms/step - loss: 1.6757e-04 - val_loss: 0.0011
Epoch 131/1000
23/23 [==============================] - 15s 662ms/step - loss: 1.3960e-04 - val_loss: 1.2232e-04
Epoch 132/1000
23/23 [==============================] - 15s 664ms/step - loss: 1.5514e-04 - val_loss: 4.6518e-04
Epoch 133/1000
23/23 [==============================] - 15s 651ms/step - loss: 1.4478e-04 - val_loss: 0.0019
Epoch 134/1000
23/23 [==============================] - 15s 672ms/step - loss: 1.3420e-04 - val_loss: 0.0020
Epoch 135/1000
23/23 [==============================] - 16s 714ms/step - loss: 1.9585e-04 - val_loss: 5.7078e-04
Epoch 136/1000
23/23 [==============================] - 16s 681ms/step - loss: 1.5260e-04 - val_loss: 6.1735e-04
Epoch 137/1000
23/23 [==============================] - 15s 673ms/step - loss: 1.4818e-04 - val_loss: 0.0015
Epoch 138/1000
23/23 [==============================] - 17s 722ms/step - loss: 1.4660e-04 - val_loss: 0.0028
Epoch 139/1000
23/23 [==============================] - 16s 696ms/step - loss: 1.5105e-04 - val_loss: 2.7117e-04
Epoch 140/1000
23/23 [==============================] - 16s 712ms/step - loss: 1.3566e-04 - val_loss: 5.0722e-04
Epoch 141/1000
23/23 [==============================] - 16s 693ms/step - loss: 1.7393e-04 - val_loss: 0.0011
Epoch 142/1000
23/23 [==============================] - 17s 741ms/step - loss: 2.6077e-04 - val_loss: 0.0153
Epoch 143/1000
23/23 [==============================] - 16s 702ms/step - loss: 0.0010 - val_loss: 0.0019
Epoch 144/1000
23/23 [==============================] - 17s 724ms/step - loss: 5.3874e-04 - val_loss: 0.0253
Epoch 145/1000
23/23 [==============================] - 17s 738ms/step - loss: 2.4405e-04 - val_loss: 0.0023
Epoch 146/1000
23/23 [==============================] - 16s 675ms/step - loss: 1.6825e-04 - val_loss: 9.8633e-04
Epoch 147/1000
23/23 [==============================] - 16s 682ms/step - loss: 1.4791e-04 - val_loss: 7.7319e-04
Epoch 148/1000
23/23 [==============================] - 16s 702ms/step - loss: 1.3564e-04 - val_loss: 0.0049
Epoch 149/1000
23/23 [==============================] - 16s 682ms/step - loss: 1.8267e-04 - val_loss: 0.0041
Epoch 150/1000
23/23 [==============================] - 15s 668ms/step - loss: 2.0590e-04 - val_loss: 0.0102
Epoch 151/1000
23/23 [==============================] - 15s 668ms/step - loss: 1.9887e-04 - val_loss: 0.0017
Epoch 152/1000
23/23 [==============================] - 15s 675ms/step - loss: 1.5852e-04 - val_loss: 0.0012
Epoch 153/1000
23/23 [==============================] - 17s 719ms/step - loss: 1.2788e-04 - val_loss: 3.1935e-04
Epoch 154/1000
23/23 [==============================] - 16s 700ms/step - loss: 1.3903e-04 - val_loss: 2.0976e-04
Epoch 155/1000
23/23 [==============================] - 16s 715ms/step - loss: 1.1767e-04 - val_loss: 2.9402e-04
Epoch 156/1000
23/23 [==============================] - 17s 739ms/step - loss: 1.1437e-04 - val_loss: 5.4074e-04
Epoch 157/1000
23/23 [==============================] - 15s 646ms/step - loss: 1.1515e-04 - val_loss: 0.0018
Epoch 158/1000
23/23 [==============================] - 15s 633ms/step - loss: 1.5289e-04 - val_loss: 3.3638e-04
Epoch 159/1000
23/23 [==============================] - 15s 660ms/step - loss: 1.1444e-04 - val_loss: 0.0010
Epoch 160/1000
23/23 [==============================] - 15s 653ms/step - loss: 1.1932e-04 - val_loss: 2.1282e-04
Epoch 161/1000
23/23 [==============================] - 14s 622ms/step - loss: 1.1191e-04 - val_loss: 0.0011
Epoch 162/1000
23/23 [==============================] - 15s 660ms/step - loss: 1.1420e-04 - val_loss: 0.0026
Epoch 163/1000
23/23 [==============================] - 16s 675ms/step - loss: 1.4274e-04 - val_loss: 2.2346e-04
Epoch 164/1000
23/23 [==============================] - 15s 662ms/step - loss: 1.2231e-04 - val_loss: 5.4641e-04
Epoch 165/1000
23/23 [==============================] - 16s 683ms/step - loss: 1.6918e-04 - val_loss: 0.0025
Epoch 166/1000
23/23 [==============================] - 16s 674ms/step - loss: 1.5079e-04 - val_loss: 0.0017
Epoch 167/1000
23/23 [==============================] - 15s 634ms/step - loss: 1.9319e-04 - val_loss: 0.0022
Epoch 168/1000
23/23 [==============================] - 16s 684ms/step - loss: 1.7569e-04 - val_loss: 4.2596e-04
Epoch 169/1000
23/23 [==============================] - 14s 629ms/step - loss: 1.1923e-04 - val_loss: 0.0011
Epoch 170/1000
23/23 [==============================] - 14s 628ms/step - loss: 1.1500e-04 - val_loss: 2.8580e-04
Epoch 171/1000
23/23 [==============================] - 15s 646ms/step - loss: 1.0920e-04 - val_loss: 9.6705e-04
Epoch 172/1000
23/23 [==============================] - 15s 636ms/step - loss: 1.2356e-04 - val_loss: 0.0022
Epoch 173/1000
23/23 [==============================] - 15s 649ms/step - loss: 1.4291e-04 - val_loss: 5.1553e-04
Epoch 174/1000
23/23 [==============================] - 15s 649ms/step - loss: 1.1708e-04 - val_loss: 0.0013
Epoch 175/1000
23/23 [==============================] - 15s 638ms/step - loss: 1.0709e-04 - val_loss: 3.4964e-04
Epoch 176/1000
23/23 [==============================] - 15s 646ms/step - loss: 1.1189e-04 - val_loss: 5.0408e-04
Epoch 177/1000
23/23 [==============================] - 18s 775ms/step - loss: 1.1457e-04 - val_loss: 3.4192e-04
Epoch 178/1000
23/23 [==============================] - 17s 717ms/step - loss: 1.4419e-04 - val_loss: 0.0011
Epoch 179/1000
23/23 [==============================] - 18s 787ms/step - loss: 1.6207e-04 - val_loss: 0.0017
Epoch 180/1000
23/23 [==============================] - 17s 750ms/step - loss: 1.1805e-04 - val_loss: 2.0848e-04
Epoch 181/1000
23/23 [==============================] - 17s 723ms/step - loss: 1.1239e-04 - val_loss: 6.9807e-04
Epoch 182/1000
23/23 [==============================] - 15s 669ms/step - loss: 1.0697e-04 - val_loss: 1.1478e-04
Epoch 183/1000
23/23 [==============================] - 15s 665ms/step - loss: 1.1756e-04 - val_loss: 7.4999e-04
Epoch 184/1000
23/23 [==============================] - 15s 663ms/step - loss: 1.1200e-04 - val_loss: 2.3420e-04
Epoch 185/1000
23/23 [==============================] - 15s 664ms/step - loss: 1.1297e-04 - val_loss: 1.5326e-04
Epoch 186/1000
23/23 [==============================] - 15s 663ms/step - loss: 1.1030e-04 - val_loss: 3.4488e-04
Epoch 187/1000
23/23 [==============================] - 15s 662ms/step - loss: 1.0501e-04 - val_loss: 1.1828e-04
Epoch 188/1000
23/23 [==============================] - 15s 661ms/step - loss: 1.2165e-04 - val_loss: 5.8026e-04
Epoch 189/1000
23/23 [==============================] - 15s 665ms/step - loss: 1.5336e-04 - val_loss: 7.8114e-04
Epoch 190/1000
23/23 [==============================] - 15s 664ms/step - loss: 1.3314e-04 - val_loss: 4.8967e-04
Epoch 191/1000
23/23 [==============================] - 15s 665ms/step - loss: 1.0769e-04 - val_loss: 0.0023
Epoch 192/1000
23/23 [==============================] - 15s 670ms/step - loss: 1.0570e-04 - val_loss: 0.0012
Epoch 193/1000
23/23 [==============================] - 15s 663ms/step - loss: 1.2058e-04 - val_loss: 2.7041e-04
Epoch 194/1000
23/23 [==============================] - 15s 662ms/step - loss: 1.0705e-04 - val_loss: 4.7686e-04
Epoch 195/1000
23/23 [==============================] - 16s 681ms/step - loss: 8.9597e-05 - val_loss: 2.1531e-04
Epoch 196/1000
23/23 [==============================] - 15s 654ms/step - loss: 1.1748e-04 - val_loss: 5.7347e-04
Epoch 197/1000
23/23 [==============================] - 15s 662ms/step - loss: 1.4451e-04 - val_loss: 0.0037
Epoch 198/1000
23/23 [==============================] - 16s 678ms/step - loss: 1.3179e-04 - val_loss: 0.0012
Epoch 199/1000
23/23 [==============================] - 16s 671ms/step - loss: 9.8411e-05 - val_loss: 1.2076e-04
Epoch 200/1000
23/23 [==============================] - 15s 665ms/step - loss: 8.6410e-05 - val_loss: 4.3190e-04
Epoch 201/1000
23/23 [==============================] - 17s 725ms/step - loss: 9.5341e-05 - val_loss: 4.6861e-04
Epoch 202/1000
23/23 [==============================] - 16s 706ms/step - loss: 8.3709e-05 - val_loss: 1.9518e-04
Epoch 203/1000
23/23 [==============================] - 16s 704ms/step - loss: 8.2903e-05 - val_loss: 1.0753e-04
Epoch 204/1000
23/23 [==============================] - 16s 685ms/step - loss: 9.6000e-05 - val_loss: 4.0448e-04
Epoch 205/1000
23/23 [==============================] - 16s 688ms/step - loss: 1.1637e-04 - val_loss: 0.0019
Epoch 206/1000
23/23 [==============================] - 17s 718ms/step - loss: 1.2999e-04 - val_loss: 5.5372e-04
Epoch 207/1000
23/23 [==============================] - 16s 677ms/step - loss: 1.1406e-04 - val_loss: 4.4783e-04
Epoch 208/1000
23/23 [==============================] - 15s 667ms/step - loss: 9.2362e-05 - val_loss: 1.0418e-04
Epoch 209/1000
23/23 [==============================] - 16s 698ms/step - loss: 8.6476e-05 - val_loss: 7.8167e-05
Epoch 210/1000
23/23 [==============================] - 17s 724ms/step - loss: 9.9494e-05 - val_loss: 5.6862e-04
Epoch 211/1000
23/23 [==============================] - 17s 713ms/step - loss: 1.2758e-04 - val_loss: 0.0011
Epoch 212/1000
23/23 [==============================] - 16s 696ms/step - loss: 1.1053e-04 - val_loss: 4.5280e-04
Epoch 213/1000
23/23 [==============================] - 16s 700ms/step - loss: 9.1211e-05 - val_loss: 2.8869e-04
Epoch 214/1000
23/23 [==============================] - 16s 677ms/step - loss: 9.4580e-05 - val_loss: 1.1355e-04
Epoch 215/1000
23/23 [==============================] - 15s 670ms/step - loss: 9.9763e-05 - val_loss: 6.4341e-04
Epoch 216/1000
23/23 [==============================] - 14s 621ms/step - loss: 1.3142e-04 - val_loss: 2.7706e-04
Epoch 217/1000
[... per-epoch log for epochs 217-417 trimmed: training loss drifts from ~1.3e-04 down to ~5.0e-05, while val_loss fluctuates between ~6.3e-05 and ~3.2e-02 without settling ...]
Epoch 418/1000
23/23 [==============================] - 14s 613ms/step - loss: 5.5713e-05 - val_loss: 2.4701e-04
| MIT | notebooks/annotation_poc.ipynb | ydty/concreate_inspection_app |
Segmentation model | #callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=30)
#segment_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="binary_crossentropy",metrics=["binary_crossentropy"])
segment_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mae",metrics=["mae"])
segment_model.fit(train_ds, epochs=1000)
model.save("reconstruct")
segment_model.save("segmentation") | INFO:tensorflow:Assets written to: reconstruct\assets
INFO:tensorflow:Assets written to: segmentation\assets
| MIT | notebooks/annotation_poc.ipynb | ydty/concreate_inspection_app |
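The commented-out `EarlyStopping(monitor='loss', patience=30)` callback above would stop training once the monitored loss fails to improve for `patience` consecutive epochs. Its core bookkeeping can be sketched in plain Python (a minimal sketch; the real Keras callback also supports `min_delta`, `restore_best_weights`, and monitoring validation metrics):

```python
def early_stopping_epochs(losses, patience=30):
    """Return how many epochs would actually run if training stops
    after `patience` epochs without improvement in the loss."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:           # improvement: remember it and reset patience
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs: stop
                return epoch
    return len(losses)            # never triggered: all epochs run

# Example: loss improves for 3 epochs, then plateaus.
losses = [1.0, 0.5, 0.2] + [0.3] * 40
print(early_stopping_epochs(losses, patience=30))
```

With the plateauing val_loss seen in the log above, such a callback would have cut training well short of 1000 epochs.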
Visualization | fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_title("pred",fontsize=20)
plt.imshow(model(tf.expand_dims(image, axis=0))[0])
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_title("image",fontsize=20)
plt.imshow(image)
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_title("pred",fontsize=20)
plt.imshow(segment_model(tf.expand_dims(image, axis=0))[0])
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_title("label",fontsize=20)
plt.imshow(label)
t=segment_model(tf.expand_dims(image, axis=0))[0].numpy()
#tf.reduce_sum(tf.where(tf.greater(t,), t, tf.zeros_like(t)))
len(np.where(t>0.1)[0])
nc_image = non_clack_images[180]
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_title("pred",fontsize=20)
plt.imshow(nc_image)
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_title("label",fontsize=20)
plt.imshow((tf.keras.losses.MSE(model(np.expand_dims(nc_image, axis=0)), nc_image).numpy()[0] > 0.01) * 255)
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_title("pred",fontsize=20)
plt.imshow(image)
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_title("label",fontsize=20)
plt.imshow((tf.keras.losses.MSE(model(np.expand_dims(image, axis=0)), image).numpy()[0] > 0.01) * 255) | _____no_output_____ | MIT | notebooks/annotation_poc.ipynb | ydty/concreate_inspection_app |
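The cells above binarize the per-pixel reconstruction error with hand-picked thresholds (`> 0.01`, `> 0.001`). A common alternative, sketched below under the assumption that a held-out set of crack-free images is available, is to derive the threshold from the error statistics of normal data (mean + k·std):

```python
import numpy as np

def error_threshold(normal_errors, k=3.0):
    """Threshold = mean + k * std of reconstruction errors on normal images."""
    errs = np.asarray(normal_errors, dtype=float)
    return errs.mean() + k * errs.std()

rng = np.random.default_rng(0)
normal = rng.normal(0.001, 0.0002, size=10_000)  # synthetic errors on crack-free pixels
thr = error_threshold(normal, k=3.0)

anomalous = np.array([0.0009, 0.0030, 0.0100])   # candidate pixel errors
mask = anomalous > thr                           # anomaly mask, as in the cells above
print(thr, mask)
```

The statistics here are synthetic for illustration; in practice `normal_errors` would be the MSE maps computed on the `non_clack` images.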
Feature space separation | from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import pandas as pd
import plotly.express as px
feature_model = model.get_layer("sequential_1").get_layer("model_12")
feature_model = tf.keras.Model(inputs=feature_model.input, outputs=feature_model.get_layer("block3_pool").output)
# 特徴量抽出モデル
test = train_ds.unbatch()
#a_x = [tf.keras.losses.MAE(model(np.expand_dims(x, axis=0)), x).numpy().flatten() for x, y in test]
a_x = [feature_model(np.expand_dims(x, axis=0)).numpy().flatten() for x, y in test]
a_y = [0] * len(a_x)
a_c = ["ひびあり"] * len(a_x)  # "ひびあり" = "with crack"
a_n = []
for idx in coco.getImgIds()[:15]:
anns = coco.loadAnns(coco.getAnnIds(idx))
a_n.append(coco.loadImgs(idx)[0]["file_name"])
unb_test = recon_train_ds.unbatch()
#b_x = [tf.keras.losses.MAE(model(np.expand_dims(x, axis=0)), x).numpy().flatten() for x, y in unb_test]
b_x = [feature_model(np.expand_dims(x, axis=0)).numpy().flatten() for x, y in unb_test]
b_y = [1] * len(b_x)
b_c = ["ひびなし"] * len(b_x)  # "ひびなし" = "no crack"
b_n = [str(p.name) for p in Path("./data/images/non_clack").glob("*")]
a_x.extend(b_x)
a_y.extend(b_y)
a_c.extend(b_c)
a_n.extend(b_n[:180])
feature = TSNE(n_components=2).fit_transform(np.array(a_x))
print(len(a_x), len(a_c), len(a_n))
plot_df = pd.DataFrame({"f1":feature[:, 0],"f2":feature[:, 1],"color":a_c, "name":a_n})
#plt.scatter(feature[:, 0], feature[:, 1], alpha=0.8, color=a_c)
fig = px.scatter(plot_df, x="f1", y="f2", color="color", hover_name="name" )
fig.update_layout(width=800, height=600)
#result = tf.concat([tf.concat([model(image) for image, _, in test_ds], axis=0), tf.concat([model(image) for image, _, in recon_test_ds], axis=0)], axis=0)
#score = [len(np.where(t.numpy()>0.1)[0]) for t in result]
#score
counts = []
for image,_ in test_ds:
result = (tf.keras.losses.MSE(model(np.expand_dims(image, axis=0)), image).numpy()[0] > 0.001) * 255
counts.append(len(np.where(result==255)[0]))
for image,_ in recon_test_ds:
result = (tf.keras.losses.MSE(model(np.expand_dims(image, axis=0)), image).numpy()[0] > 0.001) * 255
counts.append(len(np.where(result==255)[0]))
labels = [1]*5
labels.extend([0]*20)
from sklearn.metrics import roc_auc_score, roc_curve
fpr, tpr, th = roc_curve(labels, counts, drop_intermediate=False)
fpr, tpr, th
roc_auc_score(labels, counts)
plt.plot(fpr, tpr, marker='o') | _____no_output_____ | MIT | notebooks/annotation_poc.ipynb | ydty/concreate_inspection_app |
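Given the `fpr`/`tpr`/`th` arrays returned by `roc_curve` above, a standard way to pick an operating threshold for the pixel-count score is Youden's J statistic, i.e. the threshold maximizing `tpr - fpr`. A minimal NumPy sketch with toy ROC points (not the actual values from this run):

```python
import numpy as np

def best_threshold(fpr, tpr, thresholds):
    """Pick the threshold maximizing Youden's J = tpr - fpr."""
    j = np.asarray(tpr) - np.asarray(fpr)
    return thresholds[int(np.argmax(j))]

# Toy ROC points (thresholds decreasing, as roc_curve returns them).
fpr = np.array([0.0, 0.0, 0.5, 1.0])
tpr = np.array([0.0, 1.0, 1.0, 1.0])
th  = np.array([2.0, 0.8, 0.4, 0.1])
print(best_threshold(fpr, tpr, th))
```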
import random
import os
import numpy as np
import torch
SEED = 45
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_everything(SEED)
!git clone https://github.com/dpoqb/wechat_big_data_baseline_pytorch.git
!dir
!mkdir data
!unzip ./drive/MyDrive/wechat_algo_data1.zip -d ./data
!pip install deepctr_torch
import torch
import os
print(torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
print(torch.cuda.get_device_name(i))
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook as tqdm
from sklearn.decomposition import PCA
from collections import defaultdict
import os
os.chdir('/content/wechat_big_data_baseline_pytorch')
# Root directory where the data is stored
ROOT_PATH = "../data"
# Path to the competition dataset
DATASET_PATH = ROOT_PATH + '/wechat_algo_data1/'
# Training set
USER_ACTION = DATASET_PATH + "user_action.csv"
FEED_INFO = DATASET_PATH + "feed_info.csv"
FEED_EMBEDDINGS = DATASET_PATH + "feed_embeddings.csv"
# Test set
TEST_FILE = DATASET_PATH + "test_a.csv"
# Actions to predict in the preliminary round
ACTION_LIST = ["read_comment", "like", "click_avatar", "forward"]
FEA_COLUMN_LIST = ["read_comment", "like", "click_avatar", "forward", "comment", "follow", "favorite", "device"]
FEA_FEED_LIST = ['feedid', 'authorid', 'videoplayseconds', 'bgm_song_id', 'bgm_singer_id', 'manual_tag_list']
# Negative-sample downsampling ratio (negative : positive)
ACTION_SAMPLE_RATE = {"read_comment": 4, "like": 4, "click_avatar": 4, "forward": 10, "comment": 10, "follow": 10, "favorite": 10}
def process_embed(train):
feed_embed_array = np.zeros((train.shape[0], 512))
for i in tqdm(range(train.shape[0])):
x = train.loc[i, 'feed_embedding']
        if pd.notna(x) and x != '':  # `x != np.nan` is always True; use pd.notna for the NaN check
y = [float(i) for i in str(x).strip().split(" ")]
else:
y = np.zeros((512,)).tolist()
feed_embed_array[i] += y
temp = pd.DataFrame(columns=[f"embed{i}" for i in range(512)], data=feed_embed_array)
train = pd.concat((train, temp), axis=1)
return train
def proc_tag(df, name='manual_tag_list', thre=5, max_len=5):
stat = defaultdict(int)
for row in df[name]:
if isinstance(row, str):
for tag in row.strip().split(';'):
stat[tag] += 1
    zero_tags = set([tag for tag in stat if stat[tag] < thre])  # tags below the frequency threshold
def tag_func(row, max_len=max_len):
ret = []
if isinstance(row, str):
for tag in row.strip().split(';'):
ret.append(0 if tag in zero_tags else int(tag) + 1)
ret = ret[:max_len] + [0] * (max_len - len(ret))
return ' '.join([str(n) for n in ret])
df[name] = df[name].apply(tag_func)
tag_vocab_size = max([int(tag) for tag in stat]) + 2
print('%s: vocab_size == %d' % (name, tag_vocab_size))
return df
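`proc_tag` truncates each tag list to `max_len` entries and right-pads with 0 (ids are shifted by +1 so that 0 stays reserved for padding and rare tags). The pad/truncate step in isolation, as a small standalone sketch:

```python
def pad_or_truncate(tags, max_len=5, pad=0):
    """Fix a tag-id list to exactly max_len entries, right-padded with `pad`."""
    tags = tags[:max_len]                      # truncate if too long
    return tags + [pad] * (max_len - len(tags))  # pad if too short

print(pad_or_truncate([3, 7, 12]))           # short list gets padded
print(pad_or_truncate([1, 2, 3, 4, 5, 6]))   # long list gets truncated
```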
def prepare_data():
feed_info_df = pd.read_csv(FEED_INFO)
feed_info_df = proc_tag(feed_info_df, name='manual_tag_list', thre=5, max_len=5)
user_action_df = pd.read_csv(USER_ACTION)[["userid", "date_", "feedid",] + FEA_COLUMN_LIST]
feed_info_df = feed_info_df[FEA_FEED_LIST]
test = pd.read_csv(TEST_FILE)
# add feed feature
train = pd.merge(user_action_df, feed_info_df, on='feedid', how='left')
test = pd.merge(test, feed_info_df, on='feedid', how='left')
test["videoplayseconds"] = np.log(test["videoplayseconds"] + 1.0)
test.to_csv(ROOT_PATH + f'/test_data.csv', index=False)
for action in tqdm(ACTION_LIST):
print(f"prepare data for {action}")
tmp = train.drop_duplicates(['userid', 'feedid', action], keep='last')
df_neg = tmp[tmp[action] == 0]
df_neg = df_neg.sample(frac=1.0 / ACTION_SAMPLE_RATE[action], random_state=SEED, replace=False)
df_all = pd.concat([df_neg, tmp[tmp[action] == 1]])
df_all["videoplayseconds"] = np.log(df_all["videoplayseconds"] + 1.0)
df_all.to_csv(ROOT_PATH + f'/train_data_for_{action}.csv', index=False)
if __name__ == "__main__":
prepare_data()
from sklearn.decomposition import PCA
n_dim = 32
feed_embed = pd.read_csv(FEED_EMBEDDINGS)
feed_embed['feed_embedding'] = feed_embed['feed_embedding'].apply(lambda row: [float(x) for x in row.strip().split()])
pca = PCA(n_components=n_dim)
pca_emb = pca.fit_transform(feed_embed['feed_embedding'].tolist())
feed_embed['pca_emb'] = list(pca_emb)
feed_embed = feed_embed[['feedid', 'pca_emb']]
# feed_embed.drop(['feed_embedding'], axis=1).to_csv("/content/drive/MyDrive/pca_emb%d.csv" % n_dim, index=False)
from numba import njit
from scipy.stats import rankdata
@njit
def _auc(actual, pred_ranks):
n_pos = np.sum(actual)
n_neg = len(actual) - n_pos
return (np.sum(pred_ranks[actual == 1]) - n_pos*(n_pos+1)/2) / (n_pos*n_neg)
def fast_auc(actual, predicted):
# https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/208031
pred_ranks = rankdata(predicted)
return _auc(actual, pred_ranks)
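The rank-sum formula inside `fast_auc` is equivalent to the pairwise definition of AUC (the probability that a random positive outranks a random negative, with ties counted as half). A small self-contained check of that equivalence, in plain NumPy/SciPy without numba:

```python
import numpy as np
from scipy.stats import rankdata

def rank_auc(actual, predicted):
    """Rank-based AUC, same formula as fast_auc above."""
    r = rankdata(predicted)
    n_pos = actual.sum()
    n_neg = len(actual) - n_pos
    return (r[actual == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def pairwise_auc(actual, predicted):
    """Brute-force pairwise definition: fraction of (pos, neg) pairs won."""
    pos = predicted[actual == 1]
    neg = predicted[actual == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = np.array([1, 0, 1, 0, 0])
s = np.array([0.9, 0.3, 0.4, 0.5, 0.1])
print(rank_auc(y, s), pairwise_auc(y, s))  # both give the same AUC
```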
def uAUC(labels, preds, user_id_list):
user_pred = defaultdict(lambda: [])
user_truth = defaultdict(lambda: [])
for idx, truth in enumerate(labels):
user_id = user_id_list[idx]
pred = preds[idx]
truth = labels[idx]
user_pred[user_id].append(pred)
user_truth[user_id].append(truth)
user_flag = defaultdict(lambda: False)
for user_id in set(user_id_list):
truths = user_truth[user_id]
flag = False
        # If a user's samples are all positive or all negative, flag stays False
for i in range(len(truths) - 1):
if truths[i] != truths[i + 1]:
flag = True
break
user_flag[user_id] = flag
total_auc = 0.0
size = 0.0
for user_id in user_flag:
if user_flag[user_id]:
auc = fast_auc(np.asarray(user_truth[user_id]), np.asarray(user_pred[user_id]))
total_auc += auc
size += 1.0
user_auc = float(total_auc)/size
return user_auc
def compute_weighted_score(score_dict, weight_dict):
score = 0.0
weight_sum = 0.0
for action in score_dict:
weight = float(weight_dict[action])
score += weight*score_dict[action]
weight_sum += weight
score /= float(weight_sum)
score = round(score, 6)
return score
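`compute_weighted_score` is a weight-normalized average of the per-action uAUCs. A self-contained usage sketch (the scores and weights below are placeholders for illustration, not the official competition weights):

```python
def compute_weighted_score(score_dict, weight_dict):
    """Weighted average of per-action scores, rounded to 6 decimals."""
    score, weight_sum = 0.0, 0.0
    for action in score_dict:
        weight = float(weight_dict[action])
        score += weight * score_dict[action]
        weight_sum += weight
    return round(score / weight_sum, 6)

scores  = {"read_comment": 0.60, "like": 0.70, "click_avatar": 0.80, "forward": 0.90}
weights = {"read_comment": 4, "like": 3, "click_avatar": 2, "forward": 1}  # placeholder weights
print(compute_weighted_score(scores, weights))
```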
sparse_2_dim = {
'userid': 8,
'feedid': 8,
'authorid': 8,
'bgm_song_id': 8,
'bgm_singer_id': 8,
}
dense_2_dim = {
'videoplayseconds': 1,
'pca_emb': 32,
#'w2v': 8 * 3
}
var_2_dim = {
'manual_tag_list': {'dim': 8, 'vocab_size': 354},
}
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import torch
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from collections import defaultdict
from deepctr_torch.inputs import SparseFeat, DenseFeat, get_feature_names
from deepctr_torch.models.deepfm import *
from deepctr_torch.models.basemodel import *
class MyBaseModel(BaseModel):
def fit(self, x, y, batch_size, val_data=None, epochs=1, verbose=1, mode='offline'):
x = [x[feature] for feature in self.feature_index] # type(x) = dict
for i in range(len(x)):
x[i] = np.array(x[i].tolist())
if len(x[i].shape) == 1:
x[i] = np.expand_dims(x[i], axis=1)
val_x, val_y = [], []
if mode == 'offline':
val_x, val_y = val_data
val_uids = val_x['userid'].tolist()
val_x = [val_x[feature] for feature in self.feature_index]
train_tensor_data = Data.TensorDataset(torch.from_numpy(np.concatenate(x, axis=-1)), torch.from_numpy(y))
train_loader = DataLoader(dataset=train_tensor_data, shuffle=True, batch_size=batch_size)
sample_num = len(train_tensor_data)
steps_per_epoch = (sample_num - 1) // batch_size + 1
# Train
print("Train on {0} samples, validate on {1} samples, {2} steps per epoch".format(len(train_tensor_data), len(val_y), steps_per_epoch))
epoch_logs = defaultdict(dict)
model = self.train()
for epoch in range(epochs):
start_time = time.time()
loss_epoch = 0
total_loss_epoch = 0
train_result = defaultdict(list)
for _, (x_train, y_train) in tqdm(enumerate(train_loader)):
x = x_train.to(self.device).float()
y = y_train.to(self.device).float()
y_pred = model(x).squeeze()
self.optim.zero_grad()
loss = self.loss_func(y_pred, y.squeeze(), reduction='sum')
total_loss = loss + self.get_regularization_loss() + self.aux_loss
loss_epoch += loss.item()
total_loss_epoch += total_loss.item()
total_loss.backward()
self.optim.step()
for name, func in self.metrics.items():
try:
temp = func(y.cpu().data.numpy(), y_pred.cpu().data.numpy().astype("float64"))
except:
temp = 0
finally:
train_result[name].append(temp)
# Add logs
logs = {}
logs["loss"] = total_loss_epoch / sample_num
for name, result in train_result.items():
logs[name] = np.sum(result) / steps_per_epoch
if mode == 'offline':
eval_result = self.evaluate(val_x, val_y, val_uids, batch_size)
for name, result in eval_result.items():
logs["val_" + name] = result
print('Epoch {0}/{1}, {2}s'.format(epoch + 1, epochs, int(time.time() - start_time)))
eval_str = "loss: {0: .4f}".format(logs["loss"])
for name in logs:
eval_str += " - " + name + ": {0: .4f}".format(logs[name])
print(eval_str)
epoch_logs[epoch+1] = logs
return epoch_logs
def evaluate(self, x, y, uids, batch_size=256):
preds = self.predict(x, batch_size)
eval_result = {}
for name, metric_fun in self.metrics.items():
eval_result[name] = metric_fun(y, preds)
eval_result['uAUC'] = uAUC(y.squeeze(), preds.squeeze(), uids)
return eval_result
def predict(self, x, batch_size=256):
model = self.eval()
if isinstance(x, dict):
x = [x[feature] for feature in self.feature_index]
for i in range(len(x)):
x[i] = np.array(x[i].tolist())
if len(x[i].shape) == 1:
x[i] = np.expand_dims(x[i], axis=1)
tensor_data = Data.TensorDataset(torch.from_numpy(np.concatenate(x, axis=-1)))
test_loader = DataLoader(dataset=tensor_data, shuffle=False, batch_size=batch_size)
pred_ans = []
with torch.no_grad():
for _, x_test in enumerate(test_loader):
x = x_test[0].to(self.device).float()
y_pred = model(x).cpu().data.numpy()
pred_ans.append(y_pred)
return np.concatenate(pred_ans).astype("float64")
class MyDeepFM(MyBaseModel):
def __init__(self,
linear_feature_columns, dnn_feature_columns,
dense_map = None, dnn_hidden_units=(256, 128),
l2_reg_linear=0.00001, l2_reg_embedding=0.00001, l2_reg_dnn=0, init_std=0.0001, seed=1024,
dnn_dropout=0., dnn_activation='relu', dnn_use_bn=True, task='binary', device='cpu'):
super(MyDeepFM, self).__init__(linear_feature_columns, dnn_feature_columns, l2_reg_linear=l2_reg_linear,
l2_reg_embedding=l2_reg_embedding, init_std=init_std, seed=seed, task=task,
device=device)
# dense map — NOTE: dense_map is unconditionally reset to an empty dict here,
# so any mapping passed in via the dense_map argument is effectively disabled
dense_map = {}
self.dense_map = dense_map
self.dense_map_dict = dict([(name, nn.Linear(dense_2_dim[name], dense_map[name], bias=False).to(device)) for name in dense_map])
dim_delta = sum([dense_map[name] - dense_2_dim[name] for name in dense_map])
# dnn tower
self.dnn = DNN(self.compute_input_dim(dnn_feature_columns) + dim_delta, dnn_hidden_units,
activation=dnn_activation, l2_reg=l2_reg_dnn, dropout_rate=dnn_dropout, use_bn=dnn_use_bn,
init_std=init_std, seed=seed, device=device)
self.dnn_linear = nn.Linear(dnn_hidden_units[-1], 1, bias=False).to(device)
self.add_regularization_weight(filter(lambda x: 'weight' in x[0] and 'bn' not in x[0], self.dnn.named_parameters()), l2=l2_reg_dnn)
self.add_regularization_weight(self.dnn_linear.weight, l2=l2_reg_dnn)
self.to(device)
def forward(self, X):
sparse_embedding_list, dense_value_list = self.input_from_feature_columns(X, self.dnn_feature_columns, self.embedding_dict) # 5*[512,1,4], 1*[512,1]
# lr
logit = self.linear_model(X)
# fm
fm_input = torch.cat(sparse_embedding_list, dim=1)
square_of_sum = torch.pow(torch.sum(fm_input, dim=1, keepdim=True), 2)
sum_of_square = torch.sum(fm_input * fm_input, dim=1, keepdim=True)
logit += 0.5 * torch.sum(square_of_sum - sum_of_square, dim=2, keepdim=False)
# dense map
dense_names = [fc.name for fc in self.dnn_feature_columns if isinstance(fc, DenseFeat)]
tmp = []
for name, tensor in zip (dense_names, dense_value_list):
if name in self.dense_map_dict:
tensor = self.dense_map_dict[name](tensor)
tmp.append(tensor)
dense_value_list = tmp
# dnn tower
sparse_dnn_input = torch.flatten(torch.cat(sparse_embedding_list, dim=-1), start_dim=1)
dense_dnn_input = torch.flatten(torch.cat(dense_value_list, dim=-1), start_dim=1)
dnn_input = torch.cat([sparse_dnn_input, dense_dnn_input], dim=-1)
logit += self.dnn_linear(self.dnn(dnn_input))
return self.out(logit)
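The `fm` block in `forward` relies on the standard FM identity: the sum of pairwise inner products over field pairs equals half of (square-of-sum minus sum-of-squares). A quick numpy check on random toy embeddings (5 fields, dim 4 — hypothetical values, not model weights):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(5, 4))  # 5 fields, embedding dim 4

# Brute force: sum of pairwise inner products over field pairs i < j.
brute = sum(V[i] @ V[j] for i in range(5) for j in range(i + 1, 5))

# FM trick used in forward(): 0.5 * (square-of-sum minus sum-of-squares).
trick = 0.5 * ((V.sum(axis=0) ** 2).sum() - (V ** 2).sum())

assert np.isclose(brute, trick)
```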
mode = 'online' # online
if __name__ == "__main__":
submit = pd.read_csv(ROOT_PATH + '/test_data.csv')[['userid', 'feedid']]
logs = {}
for action in ACTION_LIST:
print('*** train for %s ***' % action)
USE_FEAT = ['userid', 'feedid', 'device', action] + FEA_FEED_LIST[1:]
train = pd.read_csv(ROOT_PATH + f'/train_data_for_{action}.csv')[['date_'] + USE_FEAT]
# TODO: sampling
# train = train.sample(frac=0.1, random_state=42).reset_index(drop=True)
print("positive ratio:", sum((train[action] == 1) * 1) / train.shape[0])
test = pd.read_csv(ROOT_PATH + '/test_data.csv')[[i for i in USE_FEAT if i != action]]
test[action] = 0
test['date_'] = 15
test = test[['date_'] + USE_FEAT]
data = pd.concat((train, test)).reset_index(drop=True)
# universal embedding
data = pd.merge(data, feed_embed, on='feedid', how='left')
data['pca_emb'] = [e if isinstance(e, np.ndarray) else np.zeros((32)) for e in data['pca_emb']]
data['manual_tag_list'] = data['manual_tag_list'].apply(lambda row: np.array([int(x) for x in row.split()]))
# features
sparse_features = list(sparse_2_dim.keys())
dense_features = list(dense_2_dim.keys())
var_features = list(var_2_dim.keys())
print('sparse_features: ', sparse_features)
print('dense_features: ', dense_features)
print('var_features: ', var_features)
data[sparse_features] = data[sparse_features].fillna(0)
data[dense_features] = data[dense_features].fillna(0)
# 1. Label encoding for sparse features, and simple transformation for dense features
for feat in sparse_features:
lbe = LabelEncoder()
data[feat] = lbe.fit_transform(data[feat])
# mms = MinMaxScaler(feature_range=(0, 1))
# data[dense_features] = mms.fit_transform(data[dense_features])
# 2. Count unique values for each sparse field, and record dense feature field names
varlen_feature_columns = [VarLenSparseFeat(SparseFeat(feat, vocabulary_size=var_2_dim[feat]['vocab_size'], embedding_dim=var_2_dim[feat]['dim']), maxlen=5, combiner='sum') for feat in var_features]
fixlen_feature_columns = [SparseFeat(feat, data[feat].nunique(), sparse_2_dim[feat]) for feat in sparse_features] + [DenseFeat(feat, dense_2_dim[feat]) for feat in dense_features]
dnn_feature_columns = fixlen_feature_columns + varlen_feature_columns
linear_feature_columns = fixlen_feature_columns + varlen_feature_columns
feature_names = get_feature_names(linear_feature_columns + dnn_feature_columns)
# 3.generate input data for model
train, test = data.iloc[:train.shape[0]].reset_index(drop=True), data.iloc[train.shape[0]:].reset_index(drop=True)
if mode == 'offline':
train_idxes, eval_idxes = train['date_'] != 14, train['date_'] == 14
train, eval = train[train_idxes].drop(['date_'], axis=1), train[eval_idxes].drop(['date_'], axis=1)
if mode == 'online':
train = train.drop(['date_'], axis=1)
eval = train.head() # fake
test = test.drop(['date_'], axis=1)
train_x = {name: train[name] for name in feature_names}
eval_x = {name: eval[name] for name in feature_names}
test_x = {name: test[name] for name in feature_names}
# 4.Define Model,train,predict and evaluate
model = MyDeepFM(
linear_feature_columns=linear_feature_columns,
dnn_feature_columns=dnn_feature_columns,
task='binary', l2_reg_embedding=1e-1, device='cuda:0' if torch.cuda.is_available() else 'cpu', seed=SEED)
model.compile("adagrad", "binary_crossentropy", metrics=["binary_crossentropy", "auc"])
act_logs = model.fit(train_x, train[[action]].values, val_data=(eval_x, eval[[action]].values), batch_size=512, epochs=2, mode=mode)
logs[action] = act_logs
# online
submit[action] = model.predict(test_x, 128)
torch.cuda.empty_cache()
# weighted uAUC
if mode == 'offline':
score_dict = {}
for act in logs:
act_logs = logs[act]
score_dict[act] = act_logs[max(act_logs.keys())]['val_uAUC']
weight_dict = {"read_comment": 4.0, "like": 3.0, "click_avatar": 2.0, "forward": 1.0, "favorite": 1.0, "comment": 1.0, "follow": 1.0}
weighted_uAUC = compute_weighted_score(score_dict, weight_dict)
print(score_dict)
print('weighted_uAUC: ', weighted_uAUC)
# online
submit.to_csv("./submit_2_45.csv", index=False)
todo:
use a different number of epochs per action
try different seeds
int(k[1:-1].strip().split(',')[1])
p = data['manual_tag_list'].apply(lambda row: np.array([int(x) for x in row.split()]))
p[0].dtype
# baseline
{'read_comment': 0.6102415130979689, 'like': 0.6055234369612766, 'click_avatar': 0.7059927976309249, 'forward': 0.6832353813536607}
weighted_uAUC: 0.635276
# dnn_dropout = 0.1
{'read_comment': 0.6094100217906185, 'like': 0.6052801328988395, 'click_avatar': 0.7059140934189055, 'forward': 0.6846734262464789}
weighted_uAUC: 0.634998
# 256, 128, 128
{'read_comment': 0.613116787160124, 'like': 0.6062583852548347, 'click_avatar': 0.7058735217580193, 'forward': 0.6769030704770939}
weighted_uAUC: 0.635989
# epoch = 2
{'read_comment': 0.6117841889858322, 'like': 0.6089919743022709, 'click_avatar': 0.7138421964649098, 'forward': 0.6829949302549756}
weighted_uAUC: 0.638479
# sparse dim = 8, epoch = 2 (new baseline)
{'read_comment': 0.6126884118803656, 'like': 0.6078158393185238, 'click_avatar': 0.7141126528216767, 'forward': 0.6923154125787877}
weighted_uAUC: 0.639474
# removed the normalization of videoplayseconds (new baseline)
{'read_comment': 0.6150373746448982, 'like': 0.6087792274162345, 'click_avatar': 0.7137088800810096, 'forward': 0.6919173648006157}
weighted_uAUC: 0.640582
# add feed embedding 32(new baseline)
{'read_comment': 0.6231230935993682, 'like': 0.6162679088683002, 'click_avatar': 0.7128391281987229, 'forward': 0.6951917541544708}
weighted_uAUC: 0.646217
# add feed embedding 64
{'read_comment': 0.6179610910963779, 'like': 0.617180918593666, 'click_avatar': 0.7121687727167492, 'forward': 0.6969728833664359}
weighted_uAUC: 0.64447
# sparse dim = 12
{'read_comment': 0.6152862366363533, 'like': 0.6172504324924313, 'click_avatar': 0.7100718453099804, 'forward': 0.701999472669805}
weighted_uAUC: 0.643504
# repeat run of the baseline, online score 0.656674
{'read_comment': 0.6220072250372239, 'like': 0.6181791275945606, 'click_avatar': 0.7129768375663601, 'forward': 0.6987107057431032}
weighted_uAUC: 0.646723
# (256, 128, 64)
{'read_comment': 0.6176483873668901, 'like': 0.6170515088013665, 'click_avatar': 0.713929279701119, 'forward': 0.6961728898267605}
weighted_uAUC: 0.644578
# dnn_use_bn = True (new baseline), online score 0.65576
{'read_comment': 0.6269150887735059, 'like': 0.6245276506750953, 'click_avatar': 0.715901103365852, 'forward': 0.7038550482328185}
weighted_uAUC: 0.65169
# dropout = 0.1
{'read_comment': 0.6261901395330384, 'like': 0.6239817428964435, 'click_avatar': 0.7163355996839406, 'forward': 0.6963938852580808}
weighted_uAUC: 0.650577
# repeat run of the baseline
{'read_comment': 0.6296090935187716, 'like': 0.621418106767897, 'click_avatar': 0.7168717294762987, 'forward': 0.6967281458462133}
weighted_uAUC: 0.651316
# first reduce the ue dim with a linear layer (32->16), then feed into the dnn
{'read_comment': 0.6227016241604177, 'like': 0.6199456756568217, 'click_avatar': 0.7143073328988507, 'forward': 0.6791432406166807}
weighted_uAUC: 0.64584
# ue(32->4) dnn_use_bn = True
{'read_comment': 0.6214967487996874, 'like': 0.6117349933619828, 'click_avatar': 0.7123781829253133, 'forward': 0.6907462327570015}
weighted_uAUC: 0.643669
# ue(32->8) dnn_use_bn = True
{'read_comment': 0.6255372000774586, 'like': 0.6099148843168334, 'click_avatar': 0.7147080055545442, 'forward': 0.6913280305289646}
weighted_uAUC: 0.645264
# ue(32->16) dnn_use_bn = True
{'read_comment': 0.6230882216710302, 'like': 0.620136770671566, 'click_avatar': 0.716609921279133, 'forward': 0.6855595234090964}
weighted_uAUC: 0.647154
# ue(32->32) dnn_use_bn = True
{'read_comment': 0.6244980658541014, 'like': 0.6178982426111442, 'click_avatar': 0.7149869209063016, 'forward': 0.7032611484183776}
weighted_uAUC: 0.648492
# ue(32) dnn_use_bn = True * 2
{'read_comment': 0.6268910782755379, 'like': 0.6222017679020581, 'click_avatar': 0.7150488812479852, 'forward': 0.6991933553474539}
weighted_uAUC: 0.650346
# ue(32) dnn_use_bn = True, rerun on a different gpu
{'read_comment': 0.6244744632801036, 'like': 0.6220399203865046, 'click_avatar': 0.7144416351024233, 'forward': 0.6968944127096272}
weighted_uAUC: 0.64898
# ue(32) dnn_use_bn = True, rerun on a different gpu
{'read_comment': 0.6275061489685031, 'like': 0.6229652888075203, 'click_avatar': 0.7143199406719899, 'forward': 0.6981564617860107}
weighted_uAUC: 0.650572
# ue(32) dnn_use_bn = False
{'read_comment': 0.621629670929673, 'like': 0.6172394906951797, 'click_avatar': 0.7138702099995802, 'forward': 0.6971143776259561}
weighted_uAUC: 0.646309
# ue(32->32) dnn_use_bn = False
{'read_comment': 0.6185276618440073, 'like': 0.6170842463943023, 'click_avatar': 0.7135048353197133, 'forward': 0.6971910507462337}
weighted_uAUC: 0.644956
# sampling 4 4 4 10 (new baseline)
{'read_comment': 0.6313212962172534, 'like': 0.6220075992585066, 'click_avatar': 0.7143381748417214, 'forward': 0.6978311251747286}
weighted_uAUC: 0.651782
# + device
{'read_comment': 0.6244486461941755, 'like': 0.6263065400087143, 'click_avatar': 0.7067274451536654, 'forward': 0.7078156246005303}
weighted_uAUC: 0.649798
{'read_comment': 0.6239386227276458, 'like': 0.6246523621908081, 'click_avatar': 0.7127589873646121, 'forward': 0.7021687370818925}
weighted_uAUC: 0.64974
# baseline
{'read_comment': 0.6288471047258013, 'like': 0.6219350043589409, 'click_avatar': 0.7146902582599014, 'forward': 0.6918359172388889}
weighted_uAUC: 0.650241
# seed = 80 split_seed = 42
{'read_comment': 0.6242634094879379, 'like': 0.6264243440165063, 'click_avatar': 0.7118413185387293, 'forward': 0.6945659928938664}
weighted_uAUC: 0.649458
# seed = 80
{'read_comment': 0.6204573895103794, 'like': 0.62392994828236, 'click_avatar': 0.7235837537540227, 'forward': 0.6789320461307407}
weighted_uAUC: 0.647972
# seed = 81
{'read_comment': 0.6224667184141309, 'like': 0.6232470714913648, 'click_avatar': 0.710962678982845, 'forward': 0.6922917821851972}
weighted_uAUC: 0.647383
# seed 41 42 43 44 45 avg online
{'read_comment': 0.644098, 'like': 0.63073, 'click_avatar': 0.733325, 'forward': 0.697216}
weighted_uAUC: 0.663245
# seed = 41 ; with manual_tags_dim = 8
{'read_comment': 0.6248747741666473, 'like': 0.6202584290770113, 'click_avatar': 0.7195684676287956, 'forward': 0.695015289241745}
weighted_uAUC: 0.649443
### switched gpu ###
# seed = 42 ; with manual_tags_dim = 8
{'read_comment': 0.6326313492611053, 'like': 0.6238611589799947, 'click_avatar': 0.7190238030791103, 'forward': 0.7043097908858788}
weighted_uAUC: 0.654447
# online, seed 42
{'read_comment': 0.636948, 'like': 0.623838, 'click_avatar': 0.727661, 'forward': 0.693742}
weighted_uAUC: 0.656837
# online, seed 41 42 43 44 45 avg
{'read_comment': 0.646524, 'like': 0.629584, 'click_avatar': 0.733393, 'forward': 0.698964}
weighted_uAUC: 0.66406
# seed = 42 ; w/o manual_tags_dim = 8
{'read_comment': 0.6227764240658794, 'like': 0.6238680596273618, 'click_avatar': 0.7145209265367466, 'forward': 0.7000066937625322}
weighted_uAUC: 0.649176
data.head()
USE_FEAT
| _____no_output_____ | MIT | algo.ipynb | RenqinSS/Rec | |
Style Transfer Intensity | # Style Transfer intensity
sti_responses = gc.open_by_url('https://docs.google.com/spreadsheets/d/1_B3ayl6-p3nRl3RUtTgcu7fGT2v3n6rg3CLrR4wTafQ/edit#gid=2064143541')
sti_response_sheet = sti_responses.sheet1
sti_reponse_data = sti_response_sheet.get_all_values()
# sti_reponse_data
sti_answer_dict = {}
for idx, row in enumerate(sti_reponse_data[1:]):
if row[1] != "":
sti_answer_dict[idx] = [(idx, reverse_dict[i]) for idx, i in enumerate(row[2:-1])]
# inter-annotator agreement
k_alpha = krippendorff.alpha([[i[1] for i in v] for k, v in sti_answer_dict.items()])
print("Krippendorff's Alpha:")
print(round(k_alpha,4))
# inter-annotator agreement, ignoring neither cases
remove_indexes = []
for lst in [v for k, v in sti_answer_dict.items()]:
for idx, i in enumerate(lst):
if i[1] == 2:
remove_indexes.append(idx)
sti_answers_without_neither = copy.deepcopy([v for k, v in sti_answer_dict.items()])
for lst in sti_answers_without_neither:
for i in sorted(set(remove_indexes), reverse=True):
del lst[i]
print("\nKrippendorff's Alpha (ignoring neither cases):")
print(f"Answers remaining: {len(sti_answers_without_neither[0])}")
k_alpha = krippendorff.alpha([[j[1] for j in usr] for usr in sti_answers_without_neither])
print(round(k_alpha,4))
# amount neither
neither_percentage = 0
for k, v in sti_answer_dict.items():
v = [i[1] for i in v]
neither_percentage += Counter(v)[2]/len(v)
print(f"Average amount of neither selected: {round((neither_percentage/len(sti_answer_dict))*100, 2)}%")
# Select most common answer of each human evaluator, if all same, select random
final_sti_human_answers = []
for idx, i in enumerate(np.array([[i[1] for i in v] for k, v in sti_answer_dict.items()]).transpose()):
try:
final_sti_human_answers.append((idx, mode(i)))
except StatisticsError as e:
# NOTE: statistics.mode only raises StatisticsError on a tie in Python < 3.8;
# from 3.8 on it returns the first mode, so this random fallback is version-dependent
final_sti_human_answers.append((idx, random.choice(i)))
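The try/except pattern above can be isolated as a tiny helper (hypothetical name `majority_or_random`; note that on Python 3.8+ the tie fallback only rarely triggers, since `mode` then returns the first mode instead of raising):

```python
import random
from statistics import mode, StatisticsError

def majority_or_random(votes):
    # Majority vote with a random tie-break, same pattern as the loop above.
    try:
        return mode(votes)
    except StatisticsError:
        return random.choice(votes)

print(majority_or_random([1, 1, 0]))  # 1
```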
with open("df_evaluation.pickle", "rb") as handle:
df_evaluation = pickle.load(handle)
id_sentence_dict = {}
for idx, sentence in enumerate(sti_reponse_data[0][2:-1]):
id_sentence_dict[idx] = sentence
sentence_human_sentiment = {}
for sentence_id, sentiment in final_sti_human_answers:
if sentiment == 2:
continue
sentence_human_sentiment[id_sentence_dict[sentence_id]] = sentiment
human_sentiment = [v for k,v in sentence_human_sentiment.items()]
og_sentiment = []
for k, v in sentence_human_sentiment.items():
og_sentiment.append(df_evaluation.OG_sentiment[df_evaluation.GEN_sentences==k].item())
# Accuracy style transfer intensity for human classification
count = 0
count_0_to_1_correct, count_0_to_1_total = 0, 0
count_1_to_0_correct, count_1_to_0_total = 0, 0
for og, gen in zip(og_sentiment, human_sentiment):
if og == 0:
count_0_to_1_total += 1
else:
count_1_to_0_total += 1
if og != gen:
count += 1
if og == 0:
count_0_to_1_correct += 1
else:
count_1_to_0_correct += 1
print(f"accuracy [including neither] = {round((count/len(final_sti_human_answers))*100, 2)}%")
print(f"accuracy [excluding neither] = {round((count/len(og_sentiment))*100, 2)}%")
print(f"accuracy [0 -> 1] = {round((count_0_to_1_correct/count_0_to_1_total)*100, 2)}%")
print(f"accuracy [1 -> 0] = {round((count_1_to_0_correct/count_1_to_0_total)*100, 2)}%")
# Agreement between human and automatic evaluation
gen_sentiment = []
for k, v in sentence_human_sentiment.items():
gen_sentiment.append(df_evaluation.GEN_sentiment[df_evaluation.GEN_sentences==k].item())
k_alpha = krippendorff.alpha([gen_sentiment, human_sentiment])
print("\nKrippendorff's Alpha:")
print(round(k_alpha,4))
# https://www.ncbi.nlm.nih.gov/pubmed/15883903 reference to cohen's kappa
print(f"Cohen's Kappa:\n{round(cohen_kappa_score(gen_sentiment, human_sentiment), 4)}")
cm = confusion_matrix(og_sentiment, human_sentiment)
create_confusion_matrix(cm, ["neg", "pos"], show_plots=True, title="Gold labels vs. Human Predictions",
xlabel="Human Labels", ylabel="Gold Labels", dir="", y_lim_value=2, save_plots=True)
cm = confusion_matrix(gen_sentiment, human_sentiment)
create_confusion_matrix(cm, ["neg", "pos"], show_plots=True, title="Automatic vs. Human Predictions",
xlabel="Human Labels", ylabel="Automatic Labels", dir="", y_lim_value=2, save_plots=True) | _____no_output_____ | Apache-2.0 | evaluation/all_evaluation.ipynb | cs-mac/Unsupervised_Style_Transfer |
 Naturalness (Isolated) | # Naturalness (isolated)
nat_iso_responses = gc.open_by_url('https://docs.google.com/spreadsheets/d/1tEOalZErOjSOD8DGKfvi-edv8sKkGczLx0eYi7N6Kjw/edit#gid=1759015116')
nat_iso_response_sheet = nat_iso_responses.sheet1
nat_iso_reponse_data = nat_iso_response_sheet.get_all_values()
# nat_iso_reponse_data
nat_iso_answer_dict = {}
for idx, row in enumerate(nat_iso_reponse_data[1:]):
if row[1] != "":
nat_iso_answer_dict[idx] = [int(i) for i in row[2:-1]]
# inter-annotator agreement
print("Krippendorff's Alpha:")
k_alpha = krippendorff.alpha([v for k,v in nat_iso_answer_dict.items()])
print(round(k_alpha,4))
# naturalness mean (isolated)
naturalness_mean_list = []
for idx, row in enumerate(nat_iso_reponse_data[1:]):
if row[1] != "":
naturalness_mean_list.append([int(i) for i in row[2:-1]])  # store a list, not a one-shot generator
print("Mean of naturalness (isolated):")
print(round(mean([mean(i) for i in naturalness_mean_list]),4))
nat_all = []
for k, v in nat_iso_answer_dict.items():
nat_all += v
nat_all_dist = Counter(nat_all)
nat_all_dist
# naturalness (isolated) distribution
fig = plt.figure(figsize=[7, 5], dpi=100)
ax = fig.add_axes([0,0,1,1])
ax.bar(nat_all_dist.keys(), nat_all_dist.values())
plt.title("Naturalness (Isolated) distribution")
plt.xlabel("Answer")
plt.ylabel("Frequency")
plt.savefig("naturalness_isolated_dist" + '.png', figsize = (16, 9), dpi=150, bbox_inches="tight")
plt.show()
plt.close()
df_evaluation
id_sentiment_dict = {}
for idx, sentence in enumerate(nat_iso_reponse_data[0][2:-1]):
# group by the ORIGINAL sentence's sentiment (not the generated one)
sentiment = df_evaluation.OG_sentiment[df_evaluation.GEN_sentences == sentence].item()
id_sentiment_dict[idx] = sentiment
nat_iso_answer_dict_div = {}
for idx, row in enumerate(nat_iso_reponse_data[1:]):
if row[1] != "":
nat_iso_answer_dict_div[idx] = ([int(i) for id, i in enumerate(row[2:-1]) if id_sentiment_dict[id] == 0],
[int(i) for id, i in enumerate(row[2:-1]) if id_sentiment_dict[id] == 1])
nat_all_neg, nat_all_pos = [], []
for k, (v_neg, v_pos) in nat_iso_answer_dict_div.items():
nat_all_neg += v_neg
nat_all_pos += v_pos
nat_all_dist_neg = Counter(nat_all_neg)
nat_all_dist_pos = Counter(nat_all_pos)
df = pd.DataFrame([nat_all_dist_neg, nat_all_dist_pos]).T
ax = df.plot(kind='bar')
ax.figure.set_size_inches(16, 9)
plt.title("Naturalness (Isolated) distribution")
plt.xlabel("Answer")
plt.ylabel("Frequency")
plt.xticks(rotation='horizontal')
ax.figure.savefig("naturalness_isolated_dist_div" + '.png', figsize = (16, 9), dpi=150, bbox_inches="tight")
plt.legend(["Negative", "Positive"])
plt.show()
plt.close()
| _____no_output_____ | Apache-2.0 | evaluation/all_evaluation.ipynb | cs-mac/Unsupervised_Style_Transfer |
Naturalness (Comparison) | # Naturalness (comparison)
nat_comp_responses = gc.open_by_url('https://docs.google.com/spreadsheets/d/1mFtsNNaJXDK2dT9LkLz_r8LSfIOPskDqn4jBamE-bns/edit#gid=890219669')
nat_comp_response_sheet = nat_comp_responses.sheet1
nat_comp_reponse_data = nat_comp_response_sheet.get_all_values()
# nat_comp_reponse_data
nat_comp_answer_dict = {}
for idx, row in enumerate(nat_comp_reponse_data[1:]):
if row[1] != "":
nat_comp_answer_dict[idx] = [int(i) for i in row[2:-1]]
# inter-annotator agreement
print("Krippendorff's Alpha:")
k_alpha = krippendorff.alpha([v for k,v in nat_comp_answer_dict.items()])
print(round(k_alpha,4))
# naturalness mean (comparison)
naturalness_mean_list = []
for idx, row in enumerate(nat_comp_reponse_data[1:]):
if row[1] != "":
naturalness_mean_list.append([int(i) for i in row[2:-1]])  # store a list, not a one-shot generator
print("Mean of naturalness (comparison):")
print(round(mean([mean(i) for i in naturalness_mean_list]),4))
nat_comp_questions = gc.open_by_url('https://docs.google.com/spreadsheets/d/1uxAGaOvJcb-Cg3wjTDEovTgR--TFZet0VnpzInljjfo/edit#gid=167268481')
nat_comp_questions_sheet = nat_comp_questions.sheet1
nat_comp_questions_data = nat_comp_questions_sheet.get_all_values()
# naturalness (og vs. gen naturalness)
# 1: A is far more natural than B
# 2: A is slightly more natural than B
# 3: A and B are equally natural
# 4: B is slightly more natural than A
# 5 : B is far more natural than A
# 1: OG is far more natural than GEN
# 2: OG is slightly more natural than GEN
# 3: OG and GEN are equally natural
# 4: GEN is slightly more natural than OG
# 5: GEN is far more natural than OG
one, two, three, four, five = 0, 0, 0, 0, 0
counts = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0}
for idx, row in enumerate(nat_comp_reponse_data[1:]):
if row[1] != "":
for idx2, (row, answer) in enumerate(zip(nat_comp_questions_data[1:], row[2:-1])):
original, generated = row[-2:]
answer = int(answer)
# print("A", "B", "|", original, generated, "|", answer)
# mirror the 1-5 scale when the original sentence was shown as option B (1<->5, 2<->4)
if original == "B":
answer = 6 - answer
counts[answer] += 1
one, two, three, four, five = counts[1], counts[2], counts[3], counts[4], counts[5]
print(one,two,three,four,five)
print("Mean of naturalness (comparison) original vs. generated:")
print(round((one*1+two*2+three*3+four*4+five*5)/sum([one,two,three,four,five]),4))
# naturalness (comparison) distribution
fig = plt.figure(figsize=[7, 5], dpi=100)
answers = {'OG is far more natural than GEN ':'red',
'OG is slightly more natural than GEN':'green',
'OG and GEN are equally natural':'blue',
'GEN is slightly more natural than OG':'orange',
'GEN is far more natural than OG': 'purple'}
labels = list(answers.keys())
handles = [plt.Rectangle((0,0),1,1, color=answers[label]) for label in labels]
ax = fig.add_axes([0,0,1,1])
plt.bar([1,2,3,4,5], [one,two,three,four,five], color=answers.values())
plt.title("Naturalness (Comparison) distribution [translated]")
plt.legend(handles, labels)
plt.xlabel("Answer")
plt.ylabel("Frequency")
plt.savefig("naturalness_comparison_dist_translated" + '.png', figsize = (16, 9), dpi=150, bbox_inches="tight")
plt.show()
plt.close()
nat_all = []
for k, v in nat_comp_answer_dict.items():
nat_all += v
nat_all_dist = Counter(nat_all)
nat_all_dist
# naturalness (comparison) distribution
fig = plt.figure(figsize=[7, 5], dpi=100)
ax = fig.add_axes([0,0,1,1])
ax.bar(nat_all_dist.keys(), nat_all_dist.values())
plt.title("Naturalness (Comparison) distribution")
plt.xlabel("Answer")
plt.ylabel("Frequency")
plt.savefig("naturalness_comparison_dist" + '.png', figsize = (16, 9), dpi=150, bbox_inches="tight")
plt.show()
plt.close() | _____no_output_____ | Apache-2.0 | evaluation/all_evaluation.ipynb | cs-mac/Unsupervised_Style_Transfer |
Which Words | # Which words
ww_responses = gc.open_by_url('https://docs.google.com/spreadsheets/d/1bRoF5l8Lt9fqeOki_YrJffd2XwEpROKi1RUsbC1umIk/edit#gid=1233025762')
ww_response_sheet = ww_responses.sheet1
ww_reponse_data = ww_response_sheet.get_all_values()
ww_answer_dict = {}
for idx, row in enumerate(ww_reponse_data[1:]):
if row[1] != "":
ww_answer_dict[idx]= [[word.strip() for word in i.split(",")] for i in row[2:-1]]
# Human-annotator agreement
user1 = ww_answer_dict[0]
user2 = ww_answer_dict[1]
total = 0
for l1, l2 in zip(user1, user2):
total += len((set(l1) & set(l2)))/max(len(l1), len(l2))
print("Human Annotator Agreement, which word:")
print(f"{round((total/len(user1)*100), 2)}%")
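The agreement score above is intersection-over-longer-answer, averaged per item; a toy illustration (hypothetical word lists for one sentence):

```python
# Hypothetical words picked by two annotators for one sentence.
l1 = ["great", "awful", "<NONE>"]
l2 = ["awful", "boring"]

# Agreement = shared words / size of the longer answer, as in the loop above.
agreement = len(set(l1) & set(l2)) / max(len(l1), len(l2))
print(round(agreement, 3))  # 0.333
```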
# Human-annotator agreement (ignoring <NONE>)
user1 = ww_answer_dict[0]
user2 = ww_answer_dict[1]
total = 0
none = 0
for l1, l2 in zip(user1, user2):
if l1==['<NONE>'] or l2==['<NONE>']:
none+=1
continue
total += len((set(l1) & set(l2)))/max(len(l1), len(l2))
print("Human Annotator Agreement, which word (ignoring <NONE>):")
print(f"{round((total/(len(user1)-none)*100), 2)}%")
# Human-annotator agreement on <NONE>
user1 = ww_answer_dict[0]
user2 = ww_answer_dict[1]
none = 0
none_both = 0
for l1, l2 in zip(user1, user2):
if l1==['<NONE>'] or l2==['<NONE>']:
none+=1
if l1==l2:
none_both+=1
print("Human Annotator Agreement, <NONE>:")
print(f"{round((none_both/none)*100, 2)}%")
# Total number of words both human annotators agreed on
user1 = ww_answer_dict[0]
user2 = ww_answer_dict[1]
human_total_words_chosen = 0
for l1, l2 in zip(user1, user2):
human_total_words_chosen += len(set(l1) & set(l2))
with open("../to_substitute_dict.pickle", "rb") as handle:
to_substitute_dict = pickle.load(handle)
id_sentence_dict = {}
for idx, sentence in enumerate(ww_reponse_data[0][2:-1]):
id_sentence_dict[idx] = sentence
cls_total_words_chosen = 0
total = 0
amount_none = 0
for l1, l2, (k, v) in zip(user1, user2, id_sentence_dict.items()):
human_chosen_words = set(l1) & set(l2)
# compute the classifier's words first, so the <NONE> branch below can reference them
classifier_chosen_words = {v.split()[idx] for idx, _ in to_substitute_dict[v]}
if human_chosen_words == {'<NONE>'}:
amount_none += 1
# do not count classifier words for sentences the annotators marked <NONE>
cls_total_words_chosen -= len(classifier_chosen_words)
cls_total_words_chosen += len(classifier_chosen_words)
total += len((human_chosen_words & classifier_chosen_words))/max(len(human_chosen_words), len(classifier_chosen_words))
print("Classifier/Human Agreement, which word (counting none):")
print(f"{round((total/len(user1)*100), 2)}%")
print("\nClassifier/Human Agreement, which word (excluding none):")
print(f"{round((total/(len(user1)-amount_none)*100), 2)}%")
print(f"\nAmount of <NONE> chosen by all annotators:\n{round((amount_none/len(user1))*100, 2)}%")
print("\ntotal words chosen by Human Evaluators")
print(f"{human_total_words_chosen}")
print("total words chosen by Classifier")
print(f"{cls_total_words_chosen}")
# More example sentences, for better in-depth analysis
sentence_buckets = [[], [], [], [], []]
for idx, row in enumerate(nat_comp_reponse_data[1:]):
if row[1] != "":
for idx2, (row, answer) in enumerate(zip(nat_comp_questions_data[1:], row[2:-1])):
original, generated = row[-2:]
answer = int(answer)
if generated == "A":
generated_sentence = row[0].rsplit(":")[1].strip()
original_sentence = row[2].rsplit(":")[1].strip()
elif generated == "B":
generated_sentence = row[2].rsplit(":")[1].strip()
original_sentence = row[0].rsplit(":")[1].strip()
# print("A", "B", "|", original, generated, "|", answer)
# mirror the 1-5 scale when the original sentence was shown as option B (1<->5, 2<->4)
if original == "B":
answer = 6 - answer
sentence_buckets[answer - 1].append(generated_sentence)
sentences_one, sentences_two, sentences_three, sentences_four, sentences_five = sentence_buckets
print(len(sentences_one), len(sentences_two), len(sentences_three), len(sentences_four), len(sentences_five))
low_natural_sentences = sentences_one + sentences_two
high_natural_sentences = sentences_three + sentences_four + sentences_five
og_sentiment, gen_sentiment = [], []
for sentence in low_natural_sentences:
og_sentiment.append(df_evaluation.OG_sentiment[df_evaluation.GEN_sentences == sentence].item())
gen_sentiment.append(df_evaluation.GEN_sentiment[df_evaluation.GEN_sentences == sentence].item())
print("Accuracy Low Naturalness Sentences")
print(round((1-accuracy_score(og_sentiment, gen_sentiment))*100, 4))
og_sentiment, gen_sentiment = [], []
for sentence in high_natural_sentences:
og_sentiment.append(df_evaluation.OG_sentiment[df_evaluation.GEN_sentences == sentence].item())
gen_sentiment.append(df_evaluation.GEN_sentiment[df_evaluation.GEN_sentences == sentence].item())
print("\nAccuracy High Naturalness Sentences")
print(round((1-accuracy_score(og_sentiment, gen_sentiment))*100, 4))
length = []
for sentence in low_natural_sentences:
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
length.append(len(to_substitute_dict[og_sentence]))
print("Avg. amount of words substituted Low Naturalness Sentences")
print(round(mean(length), 2))
length = []
for sentence in high_natural_sentences:
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
length.append(len(to_substitute_dict[og_sentence]))
print("\nAvg. amount of words substituted High Naturalness Sentences")
print(round(mean(length), 2))
print("Examples of generated sentence more natural than source sentence\n")
for sentence in sentences_five+sentences_four:
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
print(f"OG = {og_sentence}\nGEN = {sentence}\n")
print("Examples of generated sentence as natural as source sentence\n")
for idx, sentence in enumerate(sentences_three):
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
print(f"OG = {og_sentence}\nGEN = {sentence}\n")
if idx == 10:
break
user_answers = []
for idx, row in enumerate(nat_iso_reponse_data[1:]):
if row[1] != "":
answers = [int(i) for i in row[2:-1]]
user_answers.append(answers)
highly_natural_sentences = [] # average naturalness >= 4
highly_unnatural_sentences = [] # average naturalness <= 2
for idx, sentence in enumerate(nat_iso_reponse_data[0][2:-1]):
answers = []
for user in user_answers:
answers.append(user[idx])
if mean(answers) >= 4:
highly_natural_sentences.append(sentence)
elif mean(answers) <= 2:
highly_unnatural_sentences.append(sentence)
print(len(highly_natural_sentences), len(highly_unnatural_sentences))
print("Examples of highly natural sentences\n")
for sentence in highly_natural_sentences:
print(sentence)
print("\nExamples of highly unnatural sentences\n")
for sentence in highly_unnatural_sentences:
print(sentence)
int_to_string_dict = {0: "negative", 1: "positive"}
user_answers = []
for idx, row in enumerate(sti_reponse_data[1:]):
if row[1] != "":
answers = [i for i in row[2:-1]]
user_answers.append(answers)
all_neither_sentences = []
all_negative_sentences = []
all_positive_sentences = []
human_cls_agree_transfer = []
human_cls_agree_no_transfer = []
human_yes_cls_no = []
human_no_cls_yes = []
for idx, sentence in enumerate(sti_reponse_data[0][2:-1]):
answers = []
for user in user_answers:
answers.append(user[idx])
if set(answers) == {'neither'}:
all_neither_sentences.append(sentence)
if set(answers) == {'negative'}:
all_negative_sentences.append(sentence)
if set(answers) == {'positive'}:
all_positive_sentences.append(sentence)
try:
human_sentiment = mode(answers)
except StatisticsError as e:
human_sentiment = random.choice(answers)
cls_sentiment = int_to_string_dict[df_evaluation.GEN_sentiment[df_evaluation.GEN_sentences == sentence].item()]
og_sentiment = int_to_string_dict[df_evaluation.OG_sentiment[df_evaluation.GEN_sentences == sentence].item()]
union = set([human_sentiment])|set([cls_sentiment])
if (len(union) == 1) and ({og_sentiment} != union):
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
human_cls_agree_transfer.append((og_sentence, sentence))
if (len(union) == 1) and ({og_sentiment} == union):
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
human_cls_agree_no_transfer.append((og_sentence, sentence))
if (human_sentiment != og_sentiment) and (cls_sentiment == og_sentiment):
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
human_yes_cls_no.append((og_sentence, sentence))
if (human_sentiment == og_sentiment) and (cls_sentiment != og_sentiment):
og_sentence = df_evaluation.OG_sentences[df_evaluation.GEN_sentences == sentence].item()
human_no_cls_yes.append((og_sentence, sentence))
threshold = 20
print("Examples of sentences that were classified as neither by all evaluators")
print("-"*40, f"[{len(all_neither_sentences)}]", "-"*40)
for sentence in all_neither_sentences[:threshold]:
print(sentence)
print("\nExamples of sentences that were classified as negative by all evaluators")
print("-"*40, f"[{len(all_negative_sentences)}]", "-"*40)
for sentence in all_negative_sentences[:threshold]:
print(sentence)
print("\nExamples of sentences that were classified as positive by all evaluators")
print("-"*40, f"[{len(all_positive_sentences)}]", "-"*40)
for sentence in all_positive_sentences[:threshold]:
print(sentence)
print("\nClassification examples where both human + cls agree style is transferred")
print("-"*40, f"[{len(human_cls_agree_transfer)}]", "-"*40)
for og_sentence, gen_sentence in human_cls_agree_transfer[:threshold]:
print(f"{og_sentence}\n{gen_sentence}\n")
print("\nClassification examples where human says style is transferred, but cls not")
print("-"*40, f"[{len(human_yes_cls_no)}]", "-"*40)
for og_sentence, gen_sentence in human_yes_cls_no[:threshold]:
print(f"{og_sentence}\n{gen_sentence}\n")
print("\nClassification examples where cls says style is transferred, but human not")
print("-"*40, f"[{len(human_no_cls_yes)}]", "-"*40)
for og_sentence, gen_sentence in human_no_cls_yes[:threshold]:
print(f"{og_sentence}\n{gen_sentence}\n")
print("\nClassification examples where both human + cls agree style is not transferred")
print("-"*40, f"[{len(human_cls_agree_no_transfer)}]", "-"*40)
for og_sentence, gen_sentence in human_cls_agree_no_transfer[:threshold]:
print(f"{og_sentence}\n{gen_sentence}\n")
|
Classification examples where both human + cls agree style is transferred
---------------------------------------- [15] ----------------------------------------
but can not complain too much it was super cheap
but can not complain barely little it was super cheap
they are not the best in the series and anyone can clearly tell you that
not they are not the worst in the series and anyone can clearly tell me that
it is not a good design and the board is really cheaply made
it is not a evil design and the board is hardly cheaply made
after giving this one to my sister i ordered myself a logitech mx510
after not giving this one to my sister i ordered myself a logitech mx510
this used to be one of my favorite brands
this fresh to be one of my favorite brands
love the reprinted labels for the jar tops
hate the reprinted labels for the jar tops
forget to set timer get busy with many activites
forget to set timer avoid busy with few activites
i use it too fast and it got part of my hand
i use it too slow and it got part of my hand
teachers have been out with flu hand colds or something or other
teachers have been out not with flu hand colds or something or same
this is one of the best purchases i ever made in my life
this is one of the worst purchases i ever not made in my life
this bench scraper is extremely efficient and practical
this bench scraper is incompletely efficient and impractical
i think it is a great deal for the price
i think it is a insignificant deal for the price
the belt clip is thin and applies good pressure on your belt
the belt clip is thin and applies evil pressure on your belt
i give it four stars because of this weakness
i deny it four stars because of this weakness
also it will stay wet for a long time
also it will stay wet for a short time
Classification examples where human says style is transferred, but cls not
---------------------------------------- [0] ----------------------------------------
Classification examples where cls says style is transferred, but human not
---------------------------------------- [52] ----------------------------------------
either we have a smart mouse or none of our traps are any good
either we have a smart mouse or none of not our traps are any good
the wusb11 is a power hog so it will not work with passive usb hubs
the wusb11 is a power hog so it will not fun with passive usb hubs
the build quality of the lens is decent but nothing to rave about
the build quality of the lens is indecent but nothing to rave about
my one concern was they must be a heavy shoe
my one concern was they must be a light shoe
the rod broke in i places on my first trip and the fish was gone
the rod rich in i places on my first trip and the fish was gone
there is nothing fun about it for a very small child
here is nothing boredom about it for a very small child
this lens is great but a bit pricey
this lens is insignificant but a bit pricey
i bought this because i liked the idea and the color
i bought this not because i liked the idea and the color
the rest have lasted a week or so at best
the rest abandon lasted a week or so at best
that is just too much money for a mouthful of salt
that is just too little money for a mouthful of salt
now i cant get it to stay at all
now i cant avoid it to stay at all
i purchased this lantern and promptly returned it
i purchased this lantern and slowly returned it
made from cheap plastic and imperfection is highly visible after applying polish
made from cheap plastic and imperfection is little visible after applying polish
because of these design choices i cannot recommend this product
not because of these design choices i cannot recommend this product
it broke the first time i used it i had to trow it away
it broke the middle time i used it i had to trow it away
what will irritate you is how the game feels
what will irritate you is not how the game feels
this wireless headphone will not work with ps
this wireless headphone dislike not fun with ps
there has to be something better than this
here has to be something better than this
take the shot review and the carpet is completely blurred on the left side
debt the shot review and the carpet is completely blurred on the left side
i do not think any natural deodorant works more than a few hours
i do not disbelieve any unnatural deodorant works more than a few hours
Classification examples where both human + cls agree style is not transferred
---------------------------------------- [45] ----------------------------------------
either we have a smart mouse or none of our traps are any good
either we have a smart mouse or none of not our traps are any good
the build quality of the lens is decent but nothing to rave about
the build quality of the lens is indecent but nothing to rave about
my one concern was they must be a heavy shoe
my one concern was they must be a light shoe
the rod broke in i places on my first trip and the fish was gone
the rod rich in i places on my first trip and the fish was gone
there is nothing fun about it for a very small child
here is nothing boredom about it for a very small child
this lens is great but a bit pricey
this lens is insignificant but a bit pricey
i bought this because i liked the idea and the color
i bought this not because i liked the idea and the color
the rest have lasted a week or so at best
the rest abandon lasted a week or so at best
that is just too much money for a mouthful of salt
that is just too little money for a mouthful of salt
now i cant get it to stay at all
now i cant avoid it to stay at all
i purchased this lantern and promptly returned it
i purchased this lantern and slowly returned it
made from cheap plastic and imperfection is highly visible after applying polish
made from cheap plastic and imperfection is little visible after applying polish
because of these design choices i cannot recommend this product
not because of these design choices i cannot recommend this product
it broke the first time i used it i had to trow it away
it broke the middle time i used it i had to trow it away
what will irritate you is how the game feels
what will irritate you is not how the game feels
this wireless headphone will not work with ps
this wireless headphone dislike not fun with ps
there has to be something better than this
here has to be something better than this
take the shot review and the carpet is completely blurred on the left side
debt the shot review and the carpet is completely blurred on the left side
i do not think any natural deodorant works more than a few hours
i do not disbelieve any unnatural deodorant works more than a few hours
first i had the worst time mixing this product
first i had the worst time divorcing this product
| Apache-2.0 | evaluation/all_evaluation.ipynb | cs-mac/Unsupervised_Style_Transfer |
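The bucketing at the top of this evaluation cell reverses the 1-5 naturalness scale whenever the original sentence was shown in position "B", and the sentiment section falls back to a random choice when annotator votes tie. A minimal, dependency-free sketch of those two steps (the function names are illustrative, not from the notebook):

```python
import random
from collections import Counter

def normalize_rating(original_position, rating, scale=5):
    # If the original sentence was shown as "B", the comparison scale is
    # mirrored: an answer of 1 then means the same as 5 when it was "A".
    return rating if original_position == "A" else scale + 1 - rating

def majority_label(answers, rng=random):
    # Majority vote over annotator labels; ties are broken randomly,
    # mirroring the mode() / StatisticsError fallback in the cell above.
    counts = Counter(answers)
    top = max(counts.values())
    tied = [label for label, n in counts.items() if n == top]
    return tied[0] if len(tied) == 1 else rng.choice(tied)
```

With this normalization the two `if original == ...` ladders collapse into a single append keyed by `normalize_rating(original, answer)`.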
Description: Remove the minimum number of invalid parentheses to make the input string valid. Return all possible results. Note: the input string may contain characters other than the parentheses ( and ). Example 1: Input: "()())()" Output: ["()()()", "(())()"] Example 2: Input: "(a)())()" Output: ["(a)()()", "(a())()"] Example 3: Input: ")(" Output: [""] | class Solution:
def removeInvalidParentheses(self, s: str):
if not s: return []
self.max_len = self.get_max_len(s)
self.ans = []
self.dfs(s, 0, "", 0)
return self.ans
def dfs(self, s, idx, cur_str, count):
if len(cur_str) > self.max_len: return
if count < 0: return  # count is the number of unmatched "("
if idx == len(s):  # reached the end of s
if count == 0 and len(cur_str) == self.max_len:
self.ans.append(cur_str)
return
# other characters can be appended directly; they do not affect validity
if s[idx] != '(' and s[idx] != ')':
self.dfs(s, idx+1, cur_str+s[idx], count)
else:
val = 1 if s[idx] == '(' else -1
# always try keeping this character; it either matches
# the last character of cur_str or it does not
self.dfs(s, idx+1, cur_str+s[idx], count+val)
if not cur_str or s[idx] != cur_str[-1]:
# when it differs (or cur_str is empty), skipping it is also allowed
self.dfs(s, idx+1, cur_str, count)
def get_max_len(self, s):
"""Return the maximum length of a valid subsequence of s."""
l_count, res = 0, 0
for a in s:
if a == '(':
l_count += 1
elif a == ')':
if l_count == 0:
res += 1
else:
l_count -= 1
return len(s) - l_count - res
class Solution:
def removeInvalidParentheses(self, s: str):
if not s: return [""]
self.max_len = self.get_max_len(s)
self.ans = []
self.dfs(s, 0, "", 0)
return self.ans
def dfs(self, s, idx, cur_str, count):
# count is the number of unmatched "("; if it goes negative, the prefix is invalid
if len(cur_str) > self.max_len: return
if count < 0: return
if idx == len(s):  # reached the end of s
if count == 0 and len(cur_str) == self.max_len:
self.ans.append(cur_str)
return
# other characters
if s[idx] != '(' and s[idx] != ')':
self.dfs(s, idx+1, cur_str+s[idx], count)
else:
val = 1 if s[idx] == '(' else -1
self.dfs(s, idx+1, cur_str+s[idx], count+val)
if not cur_str or s[idx] != cur_str[-1]:
self.dfs(s, idx+1, cur_str, count)
def get_max_len(self, s):
l_count, res = 0, 0
for a in s:
if a == '(':
l_count += 1
elif a == ')':
if l_count == 0:
res += 1
else:
l_count -= 1
return len(s) - l_count - res
solution = Solution()
solution.removeInvalidParentheses("(a)())()") | _____no_output_____ | Apache-2.0 | DFS/1010/301. Remove Invalid Parentheses.ipynb | YuHe0108/Leetcode |
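The DFS above prunes by the precomputed maximum valid length. For comparison, a common alternative (a sketch, not the notebook's code) is a BFS that deletes one parenthesis per level and stops at the first level containing any valid string, which guarantees the minimum number of removals:

```python
from collections import deque

def is_valid(s):
    # a string is valid if ")" never outnumbers "(" in any prefix
    count = 0
    for ch in s:
        if ch == '(':
            count += 1
        elif ch == ')':
            count -= 1
            if count < 0:
                return False
    return count == 0

def remove_invalid_parentheses_bfs(s):
    # BFS level by level: each level removes one more character, so the
    # first level that yields any valid string is the minimal answer.
    seen = {s}
    level = [s]
    while level:
        valid = [cand for cand in level if is_valid(cand)]
        if valid:
            return valid
        next_level = []
        for cand in level:
            for i, ch in enumerate(cand):
                if ch not in '()':
                    continue  # never delete non-parenthesis characters
                nxt = cand[:i] + cand[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    next_level.append(nxt)
        level = next_level
    return [""]
```

Both approaches return the same result sets, e.g. `{"(a())()", "(a)()()"}` for `"(a)())()"`.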
Outro
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
Video 1 | # @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1M54y1B7hs", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="pzA1GpxodnM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out) | _____no_output_____ | CC-BY-4.0 | tutorials/W2D1_DeepLearning/W2D1_Outro.ipynb | Beilinson/course-content |
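The `BiliVideo` subclass above only customizes the `src` URL handed to `IFrame`. That construction can be pulled out and checked on its own; the format string below is copied verbatim from the cell, while the function name is illustrative:

```python
def bilibili_embed_src(bvid, page=1):
    # same player URL that the BiliVideo subclass passes to IFrame's src
    return "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(bvid, page)
```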
Video 2 | # @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1GT4y1j7aQ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"nWlgIclpyt4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out) | _____no_output_____ | CC-BY-4.0 | tutorials/W2D1_DeepLearning/W2D1_Outro.ipynb | Beilinson/course-content |
Daily survey
Don't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey.
Slides | # @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/z5g93/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) | _____no_output_____ | CC-BY-4.0 | tutorials/W2D1_DeepLearning/W2D1_Outro.ipynb | Beilinson/course-content |
Suicide Analysis in India
In this notebook we will try to understand the different reasons why people committed suicide in India (using the dataset "Suicides in India"). Almost 11,89,068 people committed suicide in 2012 alone; it is quite important to understand why they commit suicide and try to mitigate it. | # import lib
import numpy as np #for math operations
import pandas as pd#for data manipulation
import plotly.express as px#for better visualization
import plotly.io as pio
# read dataset
data = pd.read_csv('../input/suicides-in-india/Suicides in India 2001-2012.csv')
data.tail(10) | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Dataset Information | data.info() | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Check Missing & Null Values | data.isna().sum() | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
People committed suicide from 2001-2012 | print("Total cases from 2001-12: \n",data.groupby("Year")["Total"].sum())
data.groupby("Year")["Total"].sum().plot(kind="line",marker="o",title="People Committed Suicide From 2001-2012") | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
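Conceptually, `groupby("Year")["Total"].sum()` collapses every row with the same year into one total. A dependency-free sketch of that aggregation on synthetic numbers (not the dataset's values):

```python
from collections import defaultdict

def group_sum(rows, key, value):
    # sum `value` over all rows sharing the same `key`,
    # like df.groupby(key)[value].sum()
    totals = defaultdict(int)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)
```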
States Present Inside Dataset
This step is for merging states with the same name and removing redundancy. | data["State"].value_counts() | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Remove rows with value as Total (States), Total (All India) or Total (Uts) | data = data[(data["State"]!="Total (States)")&(data["State"]!="Total (Uts)")&(data["State"]!="Total (All India)") ] | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Which gender has the highest number of suicides? Males are committing more suicides in comparison to females | filter_gender = pd.DataFrame(data.groupby("Gender")["Total"].sum()).reset_index()
px.bar(filter_gender,x="Gender", y="Total",color="Gender") | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
States with Higher Suicide Cases
1. Maharashtra
2. West Bengal
3. Tamil Nadu
4. Andhra Pradesh | pio.templates.default = "plotly_dark"
filter_state = pd.DataFrame(data.groupby(["State"])["Total"].sum()).reset_index()
px.bar(filter_state,x = 'State', y = 'Total',color="State")
| _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Number of cases changing over time
Changing rate of suicides over time | grouped_year = data.groupby(["Year","Gender"])["Total"].sum()
grouped_year = pd.DataFrame(grouped_year).reset_index()
# grouped_year
px.line(grouped_year,x="Year", y="Total", color="Gender") | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Number of cases based on the reasons they committed suicide | filter_type_code = pd.DataFrame(data.groupby(["Type_code","Year"])["Total"].sum()).reset_index()
filter_type_code
px.bar(filter_type_code,x="Type_code", y="Total",color="Year") | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Which social issues cause more suicides?
It is clear that **married people** commit more suicides, which makes sense because marriage issues may cause conflict between the couple and, as a result, they might be prone to suicide. | filter_social_status = pd.DataFrame(data[data["Type_code"]=="Social_Status"].groupby(["Type","Gender"])["Total"].sum()).reset_index()
px.bar(filter_social_status,x="Type", y="Total",color="Gender") | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Education status of people who committed suicides
People with low education are committing more suicides. People with a diploma or a graduate degree tend to commit the least number of suicides | filter_social_status = pd.DataFrame(data[data["Type_code"]=="Education_Status"].groupby(["Type","Gender"])["Total"].sum()).reset_index()
fig = px.bar(filter_social_status,x="Type", y="Total",color="Gender")
fig.show(rotation=90) | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Profession of the people who committed suicides
**Farmers** and **housewives** have committed more suicides compared to others. This makes sense because most Indian farmers have debt and their livelihood depends on the yield of their crops; if the yield is not good then they will not be able to clear their debt and, in the worst case, they might commit suicide.
> Global warming, monsoon delay, drought, etc. can lead to a bad yield.
Housewives might have issues in their marriage, which might be a reason for such a high number of cases.
> Domestic violence, dowry, gender discrimination, etc. might be some of the reasons for housewives to commit suicide. | filter_social_status = pd.DataFrame(data[data["Type_code"]=="Professional_Profile"].groupby(["Type","Gender"])["Total"].sum()).reset_index()
fig2 = px.bar(filter_social_status,x="Type", y="Total",color="Gender")
fig2.show(rotation=90) | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Which age group has committed the most suicides?
From the below visualization it is clear that youngsters (age 15-29) and the middle-aged (30-44) tend to commit the maximum number of suicides. It can be due to several reasons like:
* unemployment
* academic stress
* bad friend circle
* farmers (since they have to be young and strong enough to do farming)
* addictions | # age group 0-100+ encapsulates all the remaining age groups, hence it would make sense to drop it
import matplotlib.pyplot as plt #for visualization
import seaborn as sns
%matplotlib inline
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_palette("BrBG")
filter_age = data[data["Age_group"]!="0-100+"]
sns.catplot(x="Age_group", y="Total", kind="bar", data=filter_age,height=8.27, aspect=11.7/8.27); | _____no_output_____ | Apache-2.0 | Suicide Analysis in India.ipynb | Ayush810/Sucide-Analysis-India-From-2001-to-2012 |
Example notebook to show Github Integration
This notebook in the `treebeard` master branch is here so treebeard can run against this project and show the Github App Integration. Github Integration can be added to any project in the settings of the admin page when a project is built. The CLI returns the link to the admin page. | assert 1 + 1 == 2 | _____no_output_____ | Apache-2.0 | main.ipynb | pombredanne/treebeard |