ReLU
Perfect for zeroing out everything below the threshold (here, zero)
This is what practically everyone actually uses | def np_relu(x):
return np.maximum(0, x)
x = np.arange(-10, 10, 0.01)
y = np_relu(x)
centerAxis()
plt.plot(x,y,lw=3) | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
The classic VGG16 Architecture | def predict(model, img_path):
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
from keras import applications
# applications.VGG16?
vgg16_model = applications.VGG16(weights='imagenet') | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
VGG starts with a number of convolutional blocks for feature extraction and ends with a fully connected classifier | vgg16_model.summary()
!curl -O https://upload.wikimedia.org/wikipedia/commons/thumb/d/de/Beagle_Upsy.jpg/440px-Beagle_Upsy.jpg | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
predict(model = vgg16_model, img_path = '440px-Beagle_Upsy.jpg')
!curl -O https://djcordhose.github.io/ai/img/cat-bonkers.png
predict(model = vgg16_model, img_path = 'cat-bonkers.png')
!curl -O https://djcordhose.github.io/ai/img/squirrels/original/Michigan-MSU-raschka.jpg
!curl -O https://djcordhose.github.io/ai/img/squirrels/original/Black_New_York_stuy_town_squirrel_amanda_ernlund.jpeg
!curl -O https://djcordhose.github.io/ai/img/squirrels/original/london.jpg | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit | |
predict(model = vgg16_model, img_path = 'Michigan-MSU-raschka.jpg')
predict(model = vgg16_model, img_path = 'Black_New_York_stuy_town_squirrel_amanda_ernlund.jpeg')
predict(model = vgg16_model, img_path = 'london.jpg') | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit | |
What does the CNN "see"?
Does it "see" the right thing?
Each filter output of a convolutional layer is called a feature channel
With each input they should ideally either be
blank if they do not recognize any feature in the input, or
encode what the feature channel "sees" in the input
Feature channels directly before the FC layers are often called bottleneck feature channels
Some activations from bottleneck features: | # create a tmp dir in the local directory this notebook runs in, otherwise quiver will fail (and won't tell you why)
!rm -rf tmp
!mkdir tmp | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
Visualizing feature channels using Quiver
Only works locally | # https://github.com/keplr-io/quiver
# Alternative with more styles of visualization: https://github.com/raghakot/keras-vis
from quiver_engine import server
server.launch(vgg16_model, input_folder='.', port=7000)
# open at http://localhost:7000/
# interrupt kernel to return control to notebook | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
Modern Alternative: Resnet
https://keras.io/applications/#resnet50
https://arxiv.org/abs/1512.03385
New Layer Type: https://keras.io/layers/normalization/ | from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
resnet_model = ResNet50(weights='imagenet')
resnet_model.summary()
predict(model = resnet_model, img_path = 'cat-bonkers.png')
predict(model = resnet_model, img_path = 'Michigan-MSU-raschka.jpg')
predict(model = resnet_model, img_path = 'Black_New_York_stuy_town_squirrel_amanda_ernlund.jpeg')
predict(model = resnet_model, img_path = 'london.jpg') | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
Reading images with native matplotlib and with PIL
matplotlib reads images in the png format natively. When this format is read, the pixel values are automatically mapped from the 0-255 range of the image's uint8 pixels to floats between 0 and 1 in the resulting array
If the format is anything else and PIL is installed, the reading is done through PIL, and in that case the pixel type is kept as uint8, from 0 to 255. See the following example
Reading a grayscale TIFF image:
Since the image read is a TIFF, the resulting array has type uint8, with values from 0 to 255 | f = mpimg.imread('../data/cameraman.tif')
print(f.dtype,f.shape,f.max(),f.min()) | deliver/Leitura-Display-imagem-com-matplotlib.ipynb | robertoalotufo/ia898 | mit |
Reading a color TIFF image
When the image is in color and not in png format, matplotlib uses PIL to read it. The array has type uint8 and its shape is organized as (H, W, 3). | fcor = mpimg.imread('../data/boat.tif')
print(fcor.dtype,fcor.shape,fcor.max(),fcor.min()) | deliver/Leitura-Display-imagem-com-matplotlib.ipynb | robertoalotufo/ia898 | mit |
Reading a color png image
If the image is in png format, matplotlib maps the pixels from 0-255 to floats between 0 and 1.0 | fcor2 = mpimg.imread('../data/boat.png')
print(fcor2.dtype, fcor2.shape, fcor2.max(), fcor2.min()) | deliver/Leitura-Display-imagem-com-matplotlib.ipynb | robertoalotufo/ia898 | mit |
Displaying the images that were read | %matplotlib inline
plt.imshow(f, cmap='gray')
plt.colorbar()
plt.imshow(fcor)
plt.colorbar()
plt.imshow(fcor2)
plt.colorbar() | deliver/Leitura-Display-imagem-com-matplotlib.ipynb | robertoalotufo/ia898 | mit |
Note that the display shows only the last imshow call | plt.imshow(fcor2)
plt.imshow(fcor)
plt.imshow(f, cmap='gray') | deliver/Leitura-Display-imagem-com-matplotlib.ipynb | robertoalotufo/ia898 | mit |
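Since only the last imshow call of a cell is shown, the way to display all three images at once is to give each one its own axes. A minimal sketch with plt.subplots (synthetic arrays stand in for the image files, which are not assumed to exist here):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Synthetic stand-ins for f (uint8 grayscale) and fcor/fcor2 (float RGB)
gray = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
color = np.random.rand(64, 64, 3)

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, img, title in zip(axes, [gray, color, color], ["f", "fcor", "fcor2"]):
    ax.imshow(img, cmap="gray" if img.ndim == 2 else None)
    ax.set_title(title)
print(len(fig.axes))  # 3
```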
Quick demo | # Let's see a very quick demonstration, how CogStat works
# All results can be seen below. Appropriate graphs and statistics were chosen and compiled automatically by CogStat.
# Load some data
data = cs.CogStatData(data = os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
# Display the data below
cs.display(data.print_data())
# Let's compare two variables
cs.display(data.compare_variables(['X', 'Y']))
# Let's compare two groups in a variable
cs.display(data.compare_groups('X', grouping_variables=['TIME'])) | cogstat/docs/CogStat Jupyter Notebook tutorial.ipynb | cogstat/cogstat | gpl-3.0 |
Import and display data
CogStat can import from three sources:
- It can read from file (either an SPSS .sav file or tab-separated txt files)
- It can convert pandas data frames (only in the Jupyter Notebook interface)
- It can read a multiline string | ### Import from file ###
"""
The file should have the following structure:
- The first line should contain the names of the variables.
- The second line can contain the measurement levels (int, ord or nom). This is optional, but recommended.
- The rest of the file is your data.
"""
# New CogStat data can be created with the CogStatData class of the cogstat module
# For importing a file, the data parameter should include the path of the file
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
# The filename looks a bit complicated here, but it is only to make sure that the tutorial works OK.
# Instead you could use a simple path like this:
# data = cs.CogStatData(data='path/to/file/filename.csv')
# Now let's display our imported data.
# All methods of the CogStatData class return a list of html files and graphs.
# These items can be displayed with the cogstat.display function
# To display the current data, use the print_data() method to create the appropriate html output,
# and display it with the cogstat.display function
result = data.print_data()
cs.display(result)
# Or you can write it shorter:
cs.display(data.print_data())
# If your csv file doesn't include the measurement levels, you can specify them in the import process.
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data_no_levels.csv'), measurement_level='nom nom nom nom nom int ord ord')
cs.display(data.print_data())
# If your file does include the measurement levels, and you still specify it, then your specification
# overwrites the file settings
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'), measurement_level='nom nom nom nom nom int ord ord')
cs.display(data.print_data())
# If your csv file doesn't include the measurement levels, and you do not specify them, then CogStat sets them
# nom (nominal) for string variables and unk (unknown) otherwise.
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data_no_levels.csv'))
cs.display(data.print_data())
# Or simply read your SPSS .sav file
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.sav'))
cs.display(data.print_data())
### Import from pandas ###
# First, we create a pandas dataframe
data = {'one' : [1., 3., 3., 4.],
'two' : [4., 3., 2., 1.]}
pandas_data = pd.DataFrame(data)
print(pandas_data)
# Then we simply specify the pandas data to import
data = cs.CogStatData(data=pandas_data)
cs.display(data.print_data())
# Again, you can specify the measurement level
data = cs.CogStatData(data=pandas_data, measurement_level='ord ord')
cs.display(data.print_data())
### Import from multiline string ###
# Use \t to separate columns and \n to separate rows.
data_string = '''A\tB\tC
nom\tint\tord
a\t123\t23
b\t143\t42'''
data = cs.CogStatData(data=data_string)
cs.display(data.print_data())
# measurement_level parameter can be used as in the case of the file and the pandas import | cogstat/docs/CogStat Jupyter Notebook tutorial.ipynb | cogstat/cogstat | gpl-3.0 |
Filter outliers
Cases can be filtered based on outliers.
In its simplest form (the 2 sd method), a case is an outlier if its value on the relevant variable is more extreme than the mean ± 2 standard deviations.
When several variables are used for filtering, a case is an outlier if it is an outlier on any of the variables.
Filtering is kept until a new filtering is set, which will overwrite the previous filtering. | # Let's import a data file
data = cs.CogStatData(data = os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
cs.display(data.print_data())
# To turn on filtering based on a single variable:
# Note that even if only a single variable is given, it should be in a list.
cs.display(data.filter_outlier(['X']))
cs.display(data.print_data())
# To turn on filtering based on several variables simultaneously:
cs.display(data.filter_outlier(['X', 'Y']))
cs.display(data.print_data())
# To turn off filtering:
cs.display(data.filter_outlier(None))
cs.display(data.print_data()) | cogstat/docs/CogStat Jupyter Notebook tutorial.ipynb | cogstat/cogstat | gpl-3.0 |
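The 2 sd rule described above is easy to sketch in plain numpy (hypothetical data; in CogStat itself the filtering is done by filter_outlier):

```python
import numpy as np

def outlier_mask(values, sd_limit=2.0):
    """Return a boolean mask: True for cases kept (within mean +/- sd_limit * std)."""
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std()
    return np.abs(values - mean) <= sd_limit * std

x = np.array([9.0, 10.0, 11.0, 10.5, 9.5, 30.0])  # 30.0 is the outlier
mask = outlier_mask(x)
print(x[mask])  # the extreme case is dropped

# With several variables, combine the per-variable masks with & :
# a case survives only if it is not an outlier on ANY variable.
```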
Analyse the data
CogStat collects the most typical analyses into single tasks and chooses the appropriate methods. This is one of the main strengths of CogStat: you don't have to figure out which statistics to use, and you don't have to click through several menus and dialogs; you get all the main (and only the relevant) information with a single command. | # Here are all the available CogStat analysis packages
# Hopefully all these method names speak for themselves
# Within each function, the chosen analyses automatically depend on the measurement level and other properties of the data.
# Load some data
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
# Display the data
cs.display(data.print_data())
### Explore variable ###
# Get the most important statistics of a single variable
cs.display(data.explore_variable('X', frequencies=True, central_value=0.0))
# A shorter, but less readable version:
#cs.display(data.explore_variable('X', 1, 0.0))
### Explore variable pair ###
# Get the statistics of a variable pair
# Optionally set the visible ranges of the x and y axes
cs.display(data.explore_variable_pair('X', 'Y', xlims=[None, None], ylims=[None, None]))
### Pivot tables ###
# Pivot tables are only available from the GUI at the moment
# Fortunately, all CogStat pivot computations can be run in pandas
### Behavioral data diffusion analyses ###
# cs.display(data.diffusion(error_name=['error'], RT_name=['RT'], participant_name=['participant_id'], condition_names=['loudness', 'side']))
### Compare variables ###
# Specify two or more variables to compare
# Optionally set the visible range of the y axis
cs.display(data.compare_variables(['X', 'Y'], factors=[], ylims=[None, None]))
# To use several factors add the factor names and levels, too. Variable names will be assigned to the factor
# level combinations automatically.
# cs.display(data.compare_variables(['F1S1', 'F1S2', 'F2S1', 'F2S2']), factors=[['first factor', 2], ['second factor', 2]])
### Compare groups ###
# Specify a dependent and a grouping variable
# Optionally set the visible range of the y axis
cs.display(data.compare_groups('X', grouping_variables=['TIME'], ylims=[None, None]))
| cogstat/docs/CogStat Jupyter Notebook tutorial.ipynb | cogstat/cogstat | gpl-3.0 |
Summary (Cheatsheet) | ### Import data ###
# Import from file
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.sav'))
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'),
measurement_level='nom nom nom nom nom int ord ord')
# Import from pandas
data = cs.CogStatData(data=pandas_data)
data = cs.CogStatData(data=pandas_data, measurement_level='ord ord')
# Import from multiline string
data = cs.CogStatData(data=data_string)
data = cs.CogStatData(data=data_string, measurement_level='ord ord')
### Display the data ###
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
cs.display(data.print_data())
### Filter outliers ###
# Filter outliers based on a single variable
cs.display(data.filter_outlier(['X']))
# Filter outliers based on several variables simultaneously
cs.display(data.filter_outlier(['X', 'Y']))
# Turn off filtering
cs.display(data.filter_outlier(None))
### Analyse the data ###
# Explore variable
cs.display(data.explore_variable('X', frequencies=True, central_value=0.0))
# Explore variable pair
cs.display(data.explore_variable_pair('X', 'Y'))
# Compare variables
cs.display(data.compare_variables(['X', 'Y']))
# Compare groups
cs.display(data.compare_groups('X', grouping_variables=['TIME'])) | cogstat/docs/CogStat Jupyter Notebook tutorial.ipynb | cogstat/cogstat | gpl-3.0 |
To check the full documentation you can always refer to https://root.cern/doc/master (and then switch to the documentation for your particular ROOT version with the drop-down menu at the top of the page).
Drawing a histogram
Drawing options documentation
The link above contains the documentation for the histogram drawing options.
In a notebook, as usual, we want to also use the %jsroot on magic and also explicitly draw a TCanvas. | %jsroot on
c = ROOT.TCanvas()
#h.SetLineColor(ROOT.kBlue)
#h.SetFillColor(ROOT.kBlue)
#h.GetXaxis().SetTitle("value")
#h.GetYaxis().SetTitle("count")
#h.SetTitle("My histo with latex: p_{t}, #eta, #phi")
h.Draw() # draw the histogram on the canvas
c.Draw() # draw the canvas on the screen | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
ROOT functions
The type that represents an arbitrary one-dimensional mathematical function in ROOT is TF1.<br>
Similarly, TF2 and TF3 represent 2-dimensional and 3-dimensional functions.
As an example, let's define and plot a simple surface: | f2 = ROOT.TF2("f2", "sin(x*x - y*y)", xmin=-2, xmax=2, ymin=-2, ymax=2)
c = ROOT.TCanvas()
f2.Draw("surf1") # to get a surface instead of the default contour plot
c.Draw() | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
Fitting a histogram
Let's see how to perform simple histogram fits of arbitrary functions. We will need a TF1 that represents the function we want to use for the fit.
This time we define our TF1 as a C++ function (note the usage of the %%cpp magic to define some C++ inline). Here we define a simple gaussian with scale and mean parameters (par[0] and par[1] respectively): | %%cpp
double gaussian(double *x, double *par) {
return par[0]*TMath::Exp(-TMath::Power(x[0] - par[1], 2.) / 2.)
/ TMath::Sqrt(2 * TMath::Pi());
} | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
This function signature, which takes an array of coordinates and an array of parameters as inputs, is the generic signature of functions that can be used to construct a TF1 object: | fitFunc = ROOT.TF1("fitFunc", ROOT.gaussian, xmin=-5, xmax=5, npar=2) | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
Now we fit our h histogram with fitFunc: | res = h.Fit(fitFunc) | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
Drawing the histogram now automatically also shows the fitted function: | c2 = ROOT.TCanvas()
h.Draw()
c2.Draw() | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
For the particular case of a gaussian fit, we could also have used the built-in "gaus" function, as we did when we called FillRandom (for the full list of supported expressions see here): | res = h.Fit("gaus")
c3 = ROOT.TCanvas()
h.Draw()
c3.Draw() | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
For more complex binned and unbinned likelihood fits, check out RooFit, a powerful data modelling framework integrated in ROOT.
ROOT graphs
TGraph is a type useful for scatter plots.
Their drawing options are documented here.
Like for histograms, the aspect of TGraphs can be greatly customized, they can be fitted with custom functions, etc. | g = ROOT.TGraph()
for x in range(-20, 21):
    y = -x * x
    g.AddPoint(x, y)
c4 = ROOT.TCanvas()
g.SetMarkerStyle(7)
g.SetLineColor(ROOT.kBlue)
g.SetTitle("My graph")
g.Draw()
c4.Draw() | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
The same graph can be displayed as a bar plot: | c5 = ROOT.TCanvas()
g.SetTitle("My graph")
g.SetFillColor(ROOT.kOrange + 1) # base colors can be tweaked by adding/subtracting values to them
g.Draw("AB1")
c5.Draw() | SoftwareCarpentry/04-histograms-and-graphs.ipynb | root-mirror/training | gpl-2.0 |
Shebang: /usr/bin/python2 or /usr/bin/python3
main function
What the heck is __name__ or __file__ ? Snake charmers
Adding parameters to your script ...
Your first imports and function parameters: *args, **kwargs
$ python hello_montoya.py "Hello" "My name is Iñigo Montoya" "You killed my father" "Prepare to Die"
Encoding
Check out How to unicode
Brief history
ASCII (American Standard Code for Information Interchange), 1968.
Mapped each letter of the English alphabet to a number between 0 and 127.
Mid 1980's computers -> 8-bit (0-255)
But what happens to accents? Cyrillic alphabets? French (Latin-1 or ISO-8859-1)? Russian (KOI8)?
Unicode: standardization, originally with 16 bits (2^16 = 65,536 distinct values)
Definitions
Character: smallest component of text ("A", "É")
Code point: an integer value, usually denoted in base 16
Unicode string: a series of code points from 0 to 0x10FFFF.
Unicode escapes:
- \xhh -> \xf1 == ñ
- \uhhhh -> \u00f1 == ñ
- \Uhhhhhhhh -> \U000000f1 == ñ (needed for code points beyond 0xFFFF)
Encoding: translates a Unicode string into a sequence of bytes
A magic comment of the form -*- coding: name -*- declares the source encoding (inspired by Emacs, PEP 263)
# -*- coding: latin-1 -*-
In Python 3 the default encoding: UTF-8
All strings → python3 -c 'print("buenos dias" "hyvää huomenta" """おはようございます""")' are unicode
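The escape forms and the encoding step above can be sketched in a few lines of Python 3, where every str is already a Unicode string:

```python
# Code points can be written with any of the escape forms listed above
s = "ma\xf1ana"                # \xhh       -> ñ
assert s == "ma\u00f1ana"      # \uhhhh     -> same code point, zero-padded
assert s == "ma\U000000f1ana"  # \Uhhhhhhhh -> same again (8 hex digits)
print(ord("\xf1"))             # the code point as an integer: 241 == 0xf1

# Encoding translates the Unicode string into a sequence of bytes
raw = s.encode("utf-8")
print(raw)                     # b'ma\xc3\xb1ana' -- ñ becomes two bytes in UTF-8
assert raw.decode("utf-8") == s
assert (len(s), len(raw)) == (6, 7)
```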
IMPORTS
Official Docs
Namespaces are designed to overcome naming conflicts and are used to differentiate functions, classes, variables, etc. with the same name that live in different modules.
A Python module is simply a Python source file, which can expose classes, functions and global variables. When imported from another Python source file, the file name is sometimes treated as a namespace.
__main__ is the name of the scope in which top-level code executes.
A module’s __name__ variable is set to __main__ when read from standard input, a script, or from an interactive prompt.
```python
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    settings_module = "settings.local"
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings_module)

    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
```
A Python package is simply a directory of Python module(s).
__init__.py.
The __init__.py file is the first thing that gets executed when a package is loaded.
rocklab/ ...
spacelab/
__init__.py
manage.py
utils/
__init__.py
physics.py
multidimensional/
__init__.py
laws.py
rockets/
__init__.py
engine.py
base.py # Defines a rocket model that exposes all its functionality
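Since __init__.py is the first thing executed when a package is loaded, we can watch it run by building a tiny, hypothetical version of the rockets package on disk (file contents here are made up for the demo):

```python
import os
import sys
import tempfile

# Write a minimal package tree: rockets/__init__.py re-exports the engine API
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "rockets"))
files = {
    "rockets/engine.py": "class Motor:\n    pass\n\ndef turn_on_engine():\n    return 'on'\n",
    "rockets/__init__.py": "from .engine import Motor, turn_on_engine\n",
}
for name, body in files.items():
    with open(os.path.join(root, name), "w") as fh:
        fh.write(body)

sys.path.insert(0, root)
import rockets  # executes rockets/__init__.py first, which exposes the API

print(rockets.turn_on_engine())  # 'on'
```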
Relative imports: the location of the modules to be imported is specified relative to the current package.
laws.py
```python
from ..physics import gravity
```
base.py
```python
from ..multidimensional.laws import InterDimensionalTravel
from .engine import (Motor, turn_on_engine, turn_off_engine,
                     Bolt)

# Avoid the use of \ for line breaks and use parentheses. Lisp people will be happy
# from .engine import Motor, turn_on_engine, turn_off_engine, \
#     Bolt
```
Absolute imports: an import where you fully specify the location of the entities being imported.
base.py
```python
from utils.multidimensional.laws import Infinity
from rockets.engine import Motor
```
Circular imports happen when you create two modules that import each other.
rockets.engine.py
```python
from .base import bad_design_decision  # D'oh! This line creates the circular import

def inside_the_function(*params):
    # Import inside the function to avoid the circular import; good to mention why (IMO)
    from rockets.base import bad_design_decision
    bad_design_decision(params)
```
Here comes the future
Official Docs
Note for absolute imports:
from __future__ import absolute_import
Keep Reading
About imports in Python
About future imports | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
def main(*args, **kwargs):
"""Simple main function that prints stuff"""
print(args) # args is a tuple of positional params
print(kwargs) # kwargs is a dict with keyword params
## The names are just a mere convention
if __name__ == '__main__':
main(sys.argv) # input params from the command line
| workshop/fundamentals/fundamental.ipynb | miguelgr/python-crash-course | mit |
Programming Types
Boolean: True, False
NoneType: None
Mutable / Immutable Objects
Some immutable types:
int, float, long, complex
str
bytes
tuple
frozenset
Some mutable types:
byte array
list
set
dict | # Immutable Objects
age = 60 # int
weight = 77.8 # float
infinite = float('inf')
name = "Rick" # basestring/str
nick_names = ("Sanchez", "Grandpa") # tuple
jobs = frozenset(("scientist", "inventor", "arms salesman", "store owner"))
# Mutable Objects
interests = ['interdimensional travel', 'nihilism', 'alcohol']
info = {
"name": name,
"last_names": last_names,
"age": age
}
redundant = set(interests)
# Information from objects
type(age) # int
isinstance(age, int) # True
type(infinite)
type(name)
isinstance(name, str)  # on Python 2, check against basestring instead
# type vs isinstance: type doesn't check for object subclasses
# we will discuss the type constructor later on | workshop/fundamentals/fundamental.ipynb | miguelgr/python-crash-course | mit |
Why immutable objects? | # integers
print(id(age))
age += 10
print(id(age))
age -= 10
print(id(age))
# Strings
print(name + ": Wubba lubba dub-dub!!")
print(name.replace("R", "r"))
print(name.upper(), name.lower())
# Tuples
operations = "test", # note the comma as it makes it a tuple!!! | tuple.count/index
print(id(operations))
operations += ('build', 'deploy')
print(operations, id(operations))
## Tuple assignment
def say(*args):
    print(args)

say(range(8))
# Packing
test, build, deploy = "Always passing", "A better world", "Your mind"
# OK, but use parentheses :)
(test, build, deploy) = ("Always passing", "A better world", "Your mind")
print("Test: ", test)
print("Build: " + build)
print("Deploy: " + deploy)
# Unpacking
test, build, deploy = operations
print(test, build, deploy)
# You are warned: # ERROR -- too many values to unpack
# https://docs.python.org/3.6/tutorial/controlflow.html#unpacking-argument-lists
# lists
| workshop/fundamentals/fundamental.ipynb | miguelgr/python-crash-course | mit |
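The `# lists` comment above was left without examples; a minimal sketch of why list mutability matters (two names can alias one object, and += mutates in place, unlike the int example above):

```python
interests = ['interdimensional travel', 'nihilism', 'alcohol']

alias = interests          # no copy: both names point at the same object
alias.append('portals')
assert interests is alias  # the change is visible through every name

copy = interests[:]        # a slice makes a shallow copy
copy.append('karaoke')
assert 'karaoke' not in interests

# id() stays the same under in-place mutation, unlike the immutable int above
before = id(interests)
interests += ['family']    # list += mutates in place
assert id(interests) == before
```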
Operators
Official Docs
Arithmetic Operators: + - / // * ** % | print(1 + 3)
print(2 ** 10)
print(5 % 3)
print(10 / 4)  # 2 in Python 2 (integer division)
from __future__ import division
print(10 / 4)  # 2.5 (true division, as in Python 3)
10 // 4  # floor division: gives 2 in both Python 2 and 3
import operator
operator.add(2, 4)
operator.gt(10, 5) | workshop/fundamentals/fundamental.ipynb | miguelgr/python-crash-course | mit |
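The operator module mirrors every infix operator as a plain function, which makes it handy as the function argument to reduce, map or sorted; a quick sketch (divmod added as a companion to the % example above):

```python
import operator
from functools import reduce

# Every infix operator has a function twin in the operator module
assert operator.add(2, 4) == 2 + 4
assert operator.gt(10, 5) == (10 > 5)

# Handy as the function argument to reduce/map/sorted
print(reduce(operator.mul, [1, 2, 3, 4]))  # 24, i.e. 4!

# divmod gives floor quotient and remainder in one call
print(divmod(10, 4))  # (2, 2)
```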
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin by computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. | # Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show() | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
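The fully vectorized version you will write later rests on expanding ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 for all pairs at once; here is a self-contained sanity check on made-up data (not the assignment's solution file, and the variable names are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(6, 4))  # hypothetical "train" set: 6 points, 4 features
X_te = rng.normal(size=(3, 4))  # hypothetical "test" set: 3 points

# (i, j) entry: squared L2 distance between test i and train j, via broadcasting
sq = ((X_te ** 2).sum(1)[:, None]
      - 2 * X_te @ X_tr.T
      + (X_tr ** 2).sum(1)[None, :])
dists = np.sqrt(np.maximum(sq, 0))  # clip tiny negatives from round-off

# Must agree with the naive double loop
naive = np.array([[np.linalg.norm(t - tr) for tr in X_tr] for t in X_te])
print(np.abs(dists - naive).max())  # ~0
```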
Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
* The distinctly bright rows indicate that they are all far away from all the training set (outlier)
* The distinctly bright columns indicate that they are all far away from all the test set | # Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5: | y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
You should expect to see a slightly better performance than with k = 1. | # Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
#pass
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
#pass
for k in k_choices:
    inner_accuracies = np.zeros(num_folds)
    for i in range(num_folds):
        X_sub_train = np.concatenate(np.delete(X_train_folds, i, axis=0))
        y_sub_train = np.concatenate(np.delete(y_train_folds, i, axis=0))
        X_sub_test = X_train_folds[i]
        y_sub_test = y_train_folds[i]

        classifier = KNearestNeighbor()
        classifier.train(X_sub_train, y_sub_train)
        dists = classifier.compute_distances_no_loops(X_sub_test)
        pred_y = classifier.predict_labels(dists, k)
        num_correct = np.sum(y_sub_test == pred_y)
        inner_accuracies[i] = float(num_correct) / y_sub_test.shape[0]
    k_to_accuracies[k] = inner_accuracies
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
X_train_folds = np.array_split(X_train, 5)
t = np.delete(X_train_folds, 1,axis=0)
print(X_train_folds)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) | assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
RTL and implementation schematics are from Xilinx Vivado 2016.1
Read Only Memory (ROM)
ROM is a memory structure that holds static information that can only be read from; in other words, it is hard-coded (e.g. instruction) memory that should never change. Furthermore, this data is held in a sort of array; for example, we can think of a Python tuple as a kind of read-only memory, since the contents of a tuple are static and we use array indexing to access a certain portion of the memory. | #use a list comprehension to store 0-9 in 8-bit binary
TupleROM=tuple([bin(i, 8) for i in range(10)])
TupleROM
f'access location 6: {TupleROM[6]}, contents of location 6 as decimal: {int(TupleROM[6], 2)}' | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
And if we try writing to the tuple we will get an error | #TupleROM[6]=bin(16,2) | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
Random and Sequential Access Memory
To start off, the "random" in RAM does not mean random in a probabilistic sense. It refers to the fact that any part of the data array can be accessed in any order, as opposed to the now-specialty sequential-only memories, which are typically built with a counter or state machine to sequence the access.
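The distinction can be sketched in plain Python (a hypothetical illustration, not HDL): a list supports random access by index, while an iterator, playing the role of the sequencing counter, can only hand out its contents in order.

```python
data = [3, 2, 1, 0]

# random access: any address, in any order
assert data[2] == 1
assert data[0] == 3

# sequential access: an iterator yields values strictly in order,
# much like a memory sequenced by a counter or state machine
seq = iter(data)
first = next(seq)    # must read position 0 first...
second = next(seq)   # ...then position 1, and so on
```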
HDL Memories
In an HDL ROM the data is stored in D flip-flops structured as a sort of two-dimensional array, where one axis is the address and the other is the content, and a mux controls which address "row" we are trying to read. We therefore have two signals, address and content, where the address controls the mux.
ROM Preloaded | @block
def ROMLoaded(addr, dout):
"""
A ROM with its contents hard-coded in the structure
instead of using myHDL's enhanced parameter loading
I/O:
addr(Signal>4): address; range is from 0-3
dout(Signal>4): data at each address
"""
@always_comb
def readAction():
if addr==0:
dout.next=3
elif addr==1:
dout.next=2
elif addr==2:
dout.next=1
elif addr==3:
dout.next=0
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
DUT=ROMLoaded(addr, dout)
def ROMLoaded_TB():
"""Python Only Testbench for `ROMLoaded`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMLoaded_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMLoaded'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
ROMLoaded RTL
<img src='ROMLoadedRTL.png'>
ROMLoaded Synthesis
<img src='ROMLoadedSynth.png'> | @block
def ROMLoaded_TBV():
"""Verilog Only Testbench for `ROMLoaded`"""
clk = Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
DUT=ROMLoaded(addr, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(10)
@instance
def stimules():
for i in range(3+1):
addr.next=i
#yield delay(1)
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create instance of TB
TB=ROMLoaded_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMLoaded_TBV'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
With myHDL we can dynamically load the contents that will be hard-coded in the conversion to Verilog/VHDL, which is a great benefit for development, as is seen here | @block
def ROMParmLoad(addr, dout, CONTENT):
"""
A ROM loaded with data from the CONTENT input tuple
I/O:
addr(Signal>4): address; range is from 0-3
dout(Signal>4): data at each address
Parm:
CONTENT: tuple of size 4; each entry must be no larger than 4 bits
"""
@always_comb
def readAction():
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
def ROMParmLoad_TB():
"""Python Only Testbench for `ROMParmLoad`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoad_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMParmLoad'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
ROMParmLoad RTL
<img src="ROMParmLoadRTL.png">
ROMParmLoad Synthesis
<img src="ROMParmLoadSynth.png"> | @block
def ROMParmLoad_TBV():
"""Verilog Only Testbench for `ROMParmLoad`"""
clk=Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create instance of TB
TB=ROMParmLoad_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMParmLoad_TBV'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
We can also create a ROM that is synchronous instead of asynchronous | @block
def ROMParmLoadSync(addr, dout, clk, rst, CONTENT):
"""
A ROM loaded with data from the CONTENT input tuple
I/O:
addr(Signal>4): address; range is from 0-3
dout(Signal>4): data at each address
clk (bool): clock feed
rst (bool): reset
Parm:
CONTENT: tuple of size 4; each entry must be no larger than 4 bits
"""
@always(clk.posedge)
def readAction():
if rst:
dout.next=0
else:
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
def ROMParmLoadSync_TB():
"""Python Only Testbench for `ROMParmLoadSync`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoadSync_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
ROMData=Peeker.to_dataframe()
#keep only clock high
ROMData=ROMData[ROMData['clk']==1]
ROMData.drop(columns='clk', inplace=True)
ROMData.reset_index(drop=True, inplace=True)
ROMData
DUT.convert()
VerilogTextReader('ROMParmLoadSync'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
ROMParmLoadSync RTL
<img src="ROMParmLoadSyncRTL.png">
ROMParmLoadSync Synthesis
<img src="ROMParmLoadSyncSynth.png"> | @block
def ROMParmLoadSync_TBV():
"""Python Only Testbench for `ROMParmLoadSync`"""
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout, rst)
return instances()
#create instance of TB
TB=ROMParmLoadSync_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMParmLoadSync_TBV');
@block
def SeqROMEx(clk, rst, dout):
"""
Seq Read Only Memory Ex
I/O:
clk (bool): clock
rst (bool): reset for the counter
dout (signal >4): data out
"""
Count=Signal(intbv(0)[3:])
@always(clk.posedge)
def counter():
if rst:
Count.next=0
elif Count==3:
Count.next=0
else:
Count.next=Count+1
@always(clk.posedge)
def Memory():
if Count==0:
dout.next=3
elif Count==1:
dout.next=2
elif Count==2:
dout.next=1
elif Count==3:
dout.next=0
return instances()
Peeker.clear()
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=SeqROMEx(clk, rst, dout)
def SeqROMEx_TB():
"""Python Only Testbench for `SeqROMEx`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
return instances()
sim = Simulation(DUT, SeqROMEx_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
SROMData=Peeker.to_dataframe()
#keep only clock high
SROMData=SROMData[SROMData['clk']==1]
SROMData.drop(columns='clk', inplace=True)
SROMData.reset_index(drop=True, inplace=True)
SROMData
DUT.convert()
VerilogTextReader('SeqROMEx'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
SeqROMEx RTL
<img src="SeqROMExRTL.png">
SeqROMEx Synthesis
<img src="SeqROMExSynth.png"> | @block
def SeqROMEx_TBV():
"""Verilog Only Testbench for `SeqROMEx`"""
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
DUT=SeqROMEx(clk, rst, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(clk, rst, dout)
return instances()
#create instance of TB
TB=SeqROMEx_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('SeqROMEx_TBV'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
Read and Write Memory | @block
def RAMConcur(addr, din, writeE, dout, clk):
"""
Random-access read/write memory
I/O:
addr (signal>4): the memory cell address
din (signal>4): data to write into memory
writeE (bool): write enable control; false is read only
dout (signal>4): the data out
clk (bool): clock
Note:
this is only a 4-word memory of 4-bit words
"""
#create the memory list (1D array)
memory=[Signal(intbv(0)[4:]) for i in range(4)]
@always(clk.posedge)
def writeAction():
if writeE:
memory[addr].next=din
@always_comb
def readAction():
dout.next=memory[addr]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
din=Signal(intbv(0)[4:]); Peeker(din, 'din')
writeE=Signal(bool(0)); Peeker(writeE, 'writeE')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
def RAMConcur_TB():
"""Python Only Testbench for `RAMConcur`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, RAMConcur_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
RAMData=Peeker.to_dataframe()
RAMData=RAMData[RAMData['clk']==1]
RAMData.drop(columns='clk', inplace=True)
RAMData.reset_index(drop=True, inplace=True)
RAMData
RAMData[RAMData['writeE']==1]
RAMData[RAMData['writeE']==0]
DUT.convert()
VerilogTextReader('RAMConcur'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
RAMConcur RTL
<img src="RAMConcurRTL.png">
RAMConcur Synthesis
<img src="RAMConcurSynth.png"> | @block
def RAMConcur_TBV():
"""Verilog Only Testbench for `RAMConcur`"""
addr=Signal(intbv(0)[4:])
din=Signal(intbv(0)[4:])
writeE=Signal(bool(0))
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(addr, din, writeE, dout, clk)
return instances()
#create instance of TB
TB=RAMConcur_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('RAMConcur_TBV'); | myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb | PyLCARS/PythonUberHDL | bsd-3-clause |
Final
Chinmai Raman
5/21/2016
Given three real-valued functions of time x(t), y(t), z(t), consider the following coupled first-order ODEs:
$\dot{x} = -y - z, \qquad \dot{y} = x + ay, \qquad \dot{z} = b + z(x - c)$
where a = b = 0.2 and c is a parameter that we will tune. Note that this system has a single nonlinear term, $xz$.
I will be exploring the consequences of this nonlinearity.
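The `p1.Rossler` class used below comes from an accompanying module that is not shown here. As a rough sketch of what such an integrator might look like (a hand-rolled fixed-step RK4 in NumPy; the function names are my own, not the actual `p1` API):

```python
import numpy as np

def rossler_rhs(state, a=0.2, b=0.2, c=2.0):
    # right-hand side of the Rossler system
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_trajectory(state0, dt=0.01, steps=5000, c=2.0):
    # classic fixed-step 4th-order Runge-Kutta integration
    traj = np.empty((steps + 1, 3))
    traj[0] = state0
    for n in range(steps):
        s = traj[n]
        k1 = rossler_rhs(s, c=c)
        k2 = rossler_rhs(s + 0.5 * dt * k1, c=c)
        k3 = rossler_rhs(s + 0.5 * dt * k2, c=c)
        k4 = rossler_rhs(s + dt * k3, c=c)
        traj[n + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

traj = rk4_trajectory(np.array([0.0, -1.0, 0.0]), c=2.0)
```

Plotting `traj[:, 0]` against time (or the columns against each other) reproduces the kinds of views shown below for each value of c.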
Problem 1 (c = 2) | ros = p1.Rossler(2)
ros.run()
ros.plotx()
ros.ploty()
ros.plotz()
ros.plotxy()
ros.plotyz()
ros.plotxz()
ros.plotxyz() | Final.ipynb | ChinmaiRaman/phys227-final | mit |
Problem 2 (c = 2, 3, 4, 4.15, 4.2, 5.7)
c = 2 is plotted above
c = 3 | ros3 = p1.Rossler(3)
ros3.run()
ros3.plotx()
ros3.plotxy()
ros3.plotxyz() | Final.ipynb | ChinmaiRaman/phys227-final | mit |
Already we can see a bifurcation occurring in the y vs x and z vs y vs x graphs that was not there in the case of c = 2. The nonlinearity in the z variable has begun to become active, in that the trajectory is leaving the x-y plane.
The x vs t graph shows us that the x-values are now alternating between four values, as opposed to two previously. This is identical to the behavior we saw from the logistic update map on the midterm.
c = 4 | ros4 = p1.Rossler(4)
ros4.run()
ros4.plotx()
ros4.plotxy()
ros4.plotxyz() | Final.ipynb | ChinmaiRaman/phys227-final | mit |
Another bifurcation has now occurred and is apparent in the y vs x graph. The limits of the x-values are now eight-fold; the number of values that x converges to has doubled again. The influence of the non-linearity in z is now very obvious.
c = 4.15 | ros415 = p1.Rossler(4.15)
ros415.run()
ros415.plotx()
ros415.plotxy()
ros415.plotxyz() | Final.ipynb | ChinmaiRaman/phys227-final | mit |
The period doubling is occurring at an increasing rate. This is demonstrated by the thicker lines in the xy and xyz graphs. This period doubling phase will soon end as the system approaches complete chaos and our predictive power decreases greatly.
c = 4.2 | ros42 = p1.Rossler(4.2)
ros42.run()
ros42.plotx()
ros42.plotxy()
ros42.plotxyz() | Final.ipynb | ChinmaiRaman/phys227-final | mit |
The lines are getting thicker as the bifurcations increase at an increasing rate. It is now not immediately apparent how many asymptotic values x approaches from the x vs t graph.
c = 5.7 | ros57 = p1.Rossler(5.7)
ros57.run()
ros57.plotx()
ros57.plotxy()
ros57.plotxyz() | Final.ipynb | ChinmaiRaman/phys227-final | mit |
The period doubling cascade has given rise to a chaotic attractor with a single lobe. This is an example of spiral-type chaos and exhibits the characteristic sensitivity to initial conditions. The oscillations in the x vs t graph are now completely chaotic and irregular in amplitude. The logistic update map also displays the same behavior of a period doubling cascade giving rise to a chaotic system. As we increase c, this system, like the logistic update map from the midterm as we vary initial conditions, also demonstrates the stretching and folding quality that we discussed in class. The xyz graph is also reminiscent of a Möbius strip, in that the underside becomes the upper side via the portion of the graph not in the x-y plane.
Problem 3 | p1.plotmaxima('x')
p1.plotmaxima('y')
p1.plotmaxima('z') | Final.ipynb | ChinmaiRaman/phys227-final | mit |
Magic commands
Magic commands start with the % character, e.g.: | %cd /content
Shell commands
Shell commands must be preceded by the ! character, e.g.: | !ls -lah
2 - Colab sessions
I invite you to explore Colab front-end menus and see what you get.
When you open a Colab file, you gain a virtual machine session inside the Google Compute Engine.
Special attention should be paid to how Colab sessions operate. On one hand, the code, text and last result of each code cell execution in the notebook are automatically saved to your Google Drive during the session using the IPYNB extension. On the other hand, the virtual execution environment and workspace are lost each time the notebook file is closed. In other words, the kernel is reset to the factory raw configuration, losing imported libraries, variables, image files downloaded to the execution environment and so on.
Therefore, a brand new virtual machine is set up with Google's default configuration, no matter how many packages you installed in the last session. Files downloaded in previous sessions, and anything else, won't be available anymore unless you put them on your Google Drive. You may execute each code cell in any order; however, errors may occur if dependencies are not respected.
Google raw virtual machine has a default software configuration. See bellow some alternatives to reconfigure the environment.
2.1 - Managing your virtual machine
According to its documentation, Colab is a Google Research product dedicated to allowing the execution of arbitrary Python code on Google virtual machines.
Colab focuses on supporting Python, its ecosystem and third-party tools. Despite users' interest in other Jupyter Notebook kernels (e.g. Julia, R or Scala), Python is still the only programming language natively supported by Colab.
In order to overcome the limitations of the virtual machine's default configuration, you may want or need to manage your virtual machine's software infrastructure. If that is the case, your first choice should be pip, the package installer for Python. It acts as a package manager for the Python ecosystem as well as third-party packages.
See below a brief list of pip commands:
To install the latest version of a package:
>>pip install 'PackageName'
To install a specific version, type the package name followed by the required version:
>>pip install 'PackageName==1.4'
To upgrade an already installed package to the latest from PyPI:
>>pip install --upgrade PackageName
Uninstalling/removing a package is very easy with pip:
>>pip uninstall PackageName
For more details about pip follow its user guide. | !pip freeze | grep keras
print()
!pip freeze | grep tensorflow
!apt update
!apt list --upgradable
!apt upgrade
!python3 --version
!uname -r | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
3 - Hands-on TensorFlow + Keras
3.1 - Load tensor flow | import tensorflow as tf | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
3.2 - Dataset preparation
Import dataset
Modified NIST (MNIST) is a database of handwritten digits. It encompasses a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set produced by the USA's National Institute of Standards and Technology (NIST). The images have a fixed size, and the dataset is also available in the Keras library. | mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data() #check whether this tuple unpacking is really necessary
print ("Training set info:",x_train.shape)
print ("Train target info:",y_train.shape)
print ("Test set info:",x_test.shape)
print ("Test target info:",y_test.shape) | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Show sample images | import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for i in range(100,200):
ax = plt.subplot(10, 10, i-99)
plt.axis("off")
plt.imshow(x_train[i].reshape(28,28))
plt.gray() | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Normalize data between [0, 1] | x_train_norm, x_test_norm = x_train / 255.0, x_test / 255.0
plt.figure(figsize=(10, 10))
for i in range(100):
ax = plt.subplot(10, 10, i+1)
plt.axis("off")
plt.imshow(x_test_norm[i].reshape(28,28))
plt.gray() | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
3.3 - Create and Initialize Network Perceptron Architecture
The code below creates a single-hidden-layer perceptron.
Task 1 Calculate the number of parameters of the model below. | model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
#Reference: https://towardsdatascience.com/how-to-calculate-the-number-of-parameters-in-keras-models-710683dae0ca
#Input: 28*28*1
#The Flattern layer doesn’t learn anything, and thus the number of parameters is 0.
#Dense layer formula: param_number = output_channel_number * (input_channel_number + 1)
firstLayer = 128 * ((28*28*1) + 1)
secondLayer = 10 * (128 + 1)
total_params = firstLayer + secondLayer
print(total_params)
model.summary() | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 2 Create a new version of the above code converting this sequential implementation into a functional one. Then encapsulate it in a Python function. Make the number of neurons in the hidden layer, the respective activation function and the dropout frequency parameters of that function. Implement it in the code cell below. | def createFuncModel(hiddenNeurons: int = 128, activationFunc: str = "relu", dropoutFrequency: float=0.2):
input = tf.keras.Input(shape=(28,28))
layer1 = tf.keras.layers.Flatten()
layer2 = tf.keras.layers.Dense(hiddenNeurons, activation=activationFunc)
layer3 = tf.keras.layers.Dropout(dropoutFrequency)
layer4 = tf.keras.layers.Dense(10, activation='softmax')
x = layer4(layer3(layer2(layer1(input))))
return tf.keras.Model(inputs=input, outputs=x, name="mnist_model")
newModel = createFuncModel()
newModel.summary() | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
3.4 - Network training
Details about the training process can be found in the Keras documentation
Present model details | model.summary() | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 3: Compare the number of parameters calculated by yourself to the one provided by model.summary
Number of parameters are equal in both ways (calculated and returned by summary function)
Build network graph object for training | model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
) | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 4: If you try to change the loss parameter to 'categorical_crossentropy', model.fit will stop working. Please explain why. Put your considerations about Task 4 in the next text cell.
The categorical_crossentropy loss function requires the labels to be encoded in a one-hot representation; that is, each label would be a vector of binary variables, one per category. On the other hand, sparse_categorical_crossentropy works with integer-encoded labels; in other words, each label comes as an integer representing the right category. Since the provided training dataset comes with integer-encoded labels, sparse_categorical_crossentropy shall be picked for this model.
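The difference between the two encodings can be sketched with plain NumPy (keras.utils.to_categorical performs the same conversion):

```python
import numpy as np

labels = np.array([5, 0, 4])           # integer-encoded, as MNIST provides
num_classes = 10

# one-hot encoding, the form categorical_crossentropy expects:
# row i is all zeros except for a 1 at column labels[i]
one_hot = np.eye(num_classes)[labels]
```

Recovering the integer label back is just `one_hot[i].argmax()`.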
Start network training | EPOCHS = 10
H = model.fit(x_train_norm, y_train, epochs=EPOCHS) | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 5: Use the 'validation_split' parameter on the previous code cell to dedicate 10% of the training set to validation and run it again. Put your code in the cell below. | H = model.fit(x_train_norm, y_train, epochs=EPOCHS, validation_split = 0.1)
Task 6: Take note of the accuracy values. Then, run the model construction, the model compilation, and training again and see what happens with the accuracies. Pay attention to implicit parameters initialization and to the effective number o epochs of training in each case.
Scribe your answer to Task 6 on the text cell bellow.
Accuracy values slightly drop in the original model after recompiling. The model with validation split data kept a good result.
Original model:
First run: best accuracy: 0.9907
Second run: best accuracy: 0.9808
Validation split model:
First run: best accuracy: 0.9938
Second run: best accuracy: 0.9953
Show graphs
The function 'fit' returns a history object, which keeps track of the network training. | plt.figure(figsize=(10, 10))
plt.plot( H.history["loss"], label="train_loss")
plt.plot( H.history["accuracy"], label="train_acc")
plt.plot( H.history["val_loss"], label="validation_loss")
plt.plot( H.history["val_accuracy"], label="validation_acc")
plt.title("Loss / accuracy evolution")
plt.xlabel("Epoch #")
plt.ylabel("Loss / Accuracy")
plt.ylim([0, 1])
leg=plt.legend()
H.history | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 7: Include the validation loss and accuracy on the above chart
3.5 - Test Network
The final model must be evaluated in order to check its quality. | print("Train:")
Train_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)
print("Test:")
Test_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2) | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 8: Notice the differences between the loss and accuracy presented by 'model.fit' and by 'model.evaluate'. Don't you find this a strange outcome? Try to explain it in the following text cell.
The model.fit method trains the model by adjusting the weights to minimize the loss, whereas the model.evaluate method only tests the trained model, computing the accuracy and loss on labeled data. Thus, it is natural that evaluation on never-seen data yields a lower accuracy than evaluation on already-seen training data; it is even considered wrong to evaluate on already-seen data. In addition, the metrics fit reports are running averages accumulated over the epoch while the weights are still changing (and dropout is active), whereas evaluate runs the final model in inference mode, so the two numbers differ even on the training set.
Given an image, it is possible to predict its class with the trained network using the 'model.predict' method.
Task 9: Present ten test images alongside the respective predictions. | predictions = model.predict(x_test[:10])
predictions
plt.figure(figsize=(10, 10))
for i in range(10):
ax = plt.subplot(10, 10, i+1)
plt.axis("off")
plt.imshow(x_test[i].reshape(28,28))
plt.gray()
print(predictions[i].argmax()) # softmax outputs rarely equal exactly 1; report the arg max instead
| Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
3.6 - Save, del and load trained network | print("Train:")
Train_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)
print("Test:")
Test_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)
model.save('ultimate_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model object
from keras.models import load_model
model = load_model('ultimate_model.h5')
print("Train:")
Train_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)
print("Test:")
Test_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)
from google.colab import drive
print("Train:")
Train_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)
print("Test:")
Test_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)
drive.mount('/content/gdrive')
root_path = 'gdrive/My Drive/CComp/Deeplearning'
model.save(root_path + '/ultimate_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model object
from keras.models import load_model
model = load_model(root_path + '/ultimate_model.h5')
print("Train:")
Train_Evaluation = model.evaluate(x_train_norm, y_train, verbose=2)
print("Test:")
Test_Evaluation = model.evaluate(x_test_norm, y_test, verbose=2)
| Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 10: Modify the above code to persist and recover your HDF5 file on Google Drive. See the following link for more information about how to mount your Google Drive into Colab.
3.7 - Using training checkpoints
The code below saves the model at each epoch in HDF5 format.
Task 11: Use the callback options 'save_freq' and 'period' to save a copy of your model every 3 epochs. Change it in the code below. | %cd ../
import os
checkpoint_path = "/content/MyFirstCkpt/"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path +'model.{epoch:02d}-{val_loss:.2f}.h5',
save_weights_only=True,
verbose=1,
save_freq='epoch',
period=3
)
# Train the model with the new callback
H = model.fit(
x_train_norm, y_train, epochs=EPOCHS,
validation_split = 0.1,
callbacks=[cp_callback] # Pass callback to training
)
| Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
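The checkpoint filename template above is an ordinary Python format string: Keras substitutes the epoch number and the logged metrics when each file is written. A quick stdlib illustration of how the names come out (epoch and loss values made up):

```python
# same filename template as the ModelCheckpoint callback above
template = 'model.{epoch:02d}-{val_loss:.2f}.h5'

print(template.format(epoch=3, val_loss=0.0712))   # -> model.03-0.07.h5
print(template.format(epoch=12, val_loss=0.1054))  # -> model.12-0.11.h5
```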
Task 12: Produce, in the cells below, code that helps you obtain the results needed to fill in the following tables:
Hidden layer Neurons | Train accuracy |Test accuracy
---- |-----|-----
16| 0.8972 | 0.9396
32 | 0.9475 | 0.9630
64| 0.9723 | 0.9747
128| 0.9854 | 0.9801
256 | 0.9895 | 0.9795 | #Put here the code for obtaining the values for distinct amount of hidden layer neurons.
newModel = createFuncModel(hiddenNeurons=256)
newModel.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H = newModel.fit(x_train_norm, y_train, epochs=10, validation_split = 0.1)
eval_results = newModel.evaluate(x_test_norm, y_test, verbose=2)
print(max(H.history['accuracy']))
print(eval_results[1])
| Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
Task 13: Using the number of neurons that best performed over the test set in the previous table, fill in the following one.
Dropout frequency | Train accuracy |Test accuracy
--- |-----|-----
0.1| 0.9894 | 0.9787
0.2 | 0.9839 | 0.9791
0.3| 0.9802 | 0.9786
0.4| 0.9714 | 0.9778
0.5 | 0.9622 | 0.9758 | #Put here the code for obtaining the values for distinct values of dropout frequency.
newModel = createFuncModel(hiddenNeurons=128,dropoutFrequency=0.5)
newModel.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
H = newModel.fit(x_train_norm, y_train, epochs=10, validation_split = 0.1)
eval_results = newModel.evaluate(x_test_norm, y_test, verbose=2)
print(max(H.history['accuracy']))
print(eval_results[1]) | Luiz_Fernando_De_Moura_2021_1_Practice_1_Introduction_to_Colab_and_Keras.ipynb | luizfmoura/datascience | gpl-2.0 |
1. Reading simple JSON from a local file | df = pd.read_json('data/simple.json')
df
df.info() | stop_starting_start_stopping/pandas_convert_json/pandas-convert-json.ipynb | bflaven/BlogArticlesExamples | mit |
2. Reading simple JSON from a URL | URL = 'http://raw.githubusercontent.com/BindiChen/machine-learning/master/data-analysis/027-pandas-convert-json/data/simple.json'
df = pd.read_json(URL)
df
df.info() | stop_starting_start_stopping/pandas_convert_json/pandas-convert-json.ipynb | bflaven/BlogArticlesExamples | mit |
3. Flattening nested list from JSON object | df = pd.read_json('data/nested_list.json')
df
import json
# load data using Python JSON module
with open('data/nested_list.json','r') as f:
data = json.loads(f.read())
# Flatten data
df_nested_list = pd.json_normalize(data, record_path =['students'])
df_nested_list
# To include school_name and class
df_nested_list = pd.json_normalize(
data,
record_path =['students'],
meta=['school_name', 'class']
)
df_nested_list | stop_starting_start_stopping/pandas_convert_json/pandas-convert-json.ipynb | bflaven/BlogArticlesExamples | mit |
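Under the hood, `json_normalize` with `record_path` and `meta` simply iterates the nested list and copies the parent fields onto each record. A stdlib-only sketch of the same flattening, using made-up data shaped like `nested_list.json`:

```python
data = {
    "school_name": "ABC primary school",   # made-up values for illustration
    "class": "Year 1",
    "students": [
        {"id": "A001", "name": "Tom"},
        {"id": "A002", "name": "James"},
    ],
}

rows = []
for student in data["students"]:              # record_path=['students']
    row = dict(student)                       # one flat row per nested record
    row["school_name"] = data["school_name"]  # meta fields copied onto each row
    row["class"] = data["class"]
    rows.append(row)

print(rows[0])
```

Passing `rows` to `pd.DataFrame` would then give the same table that `json_normalize` produces in one call.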
4. Flattening nested list and dict from JSON object | ### working
import json
# load data using Python JSON module
with open('data/nested_mix.json','r') as f:
data = json.loads(f.read())
# Normalizing data
df = pd.json_normalize(data, record_path =['students'])
df
# Normalizing data
df = pd.json_normalize(
data,
record_path =['students'],
meta=[
'class',
['info', 'president'],
['info', 'contacts', 'tel']
]
)
df | stop_starting_start_stopping/pandas_convert_json/pandas-convert-json.ipynb | bflaven/BlogArticlesExamples | mit |
5. Extracting a value from deeply nested JSON | df = pd.read_json('data/nested_deep.json')
df
type(df['students'][0])
# to install glom inside your python env through the notebook
# pip install glom
from glom import glom
df['students'].apply(lambda row: glom(row, 'grade.math')) | stop_starting_start_stopping/pandas_convert_json/pandas-convert-json.ipynb | bflaven/BlogArticlesExamples | mit |
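glom's `'grade.math'` spec is just a dotted path into nested dicts; the same lookup can be written with the stdlib by folding over the path segments (a sketch, with a made-up record):

```python
from functools import reduce

def deep_get(record, path):
    # walk a dotted path such as 'grade.math' through nested dicts
    return reduce(lambda d, key: d[key], path.split('.'), record)

student = {"name": "Tom", "grade": {"math": 90, "physics": 87}}
print(deep_get(student, 'grade.math'))  # -> 90
```

glom adds far more (defaults, fallbacks, deep assignment), but for simple extraction this is all that is happening.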
Let's explore the data | products.head()
len(products)
# Sklearn does not work well with empty fields, so we're dropping all rows that have empty fields
products = products.dropna()
len(products) | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
Build the word count vector for each review
Here scikit-learn works differently from GraphLab Create.
Word counts are recorded in a sparse matrix, where every column is a unique word and every row is a review. For demonstration purposes and to stay in line with the lecture, the word_counts column is added here, but this is not actually used in the model later on. Instead, the fitted CountVectorizer cv will be used. | from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv.fit(products['review']) # Create the word count vector
products['word_counts'] = cv.transform(products['review'])
products.head()
products['name'].describe() | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
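What `CountVectorizer` builds is a bag-of-words matrix: one column per vocabulary word, one row of counts per review. A stdlib sketch of the idea with two made-up reviews (scikit-learn's real tokenizer additionally lowercases, strips punctuation, and drops single-character tokens):

```python
from collections import Counter

reviews = ["great toy great price", "bad toy"]  # made-up reviews

# shared vocabulary over all reviews, in sorted order
vocab = sorted({word for review in reviews for word in review.split()})

# one count vector per review, aligned to the vocabulary
vectors = [[Counter(review.split())[word] for word in vocab] for review in reviews]

print(vocab)    # -> ['bad', 'great', 'price', 'toy']
print(vectors)  # -> [[0, 2, 1, 1], [1, 0, 0, 1]]
```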
The total number of reviews is lower than in the lecture video, likely due to dropping the reviews with NAs.
Explore Vulli Sophie | giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].hist()
giraffe_reviews['rating'].value_counts() | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
Build a sentiment classifier
Define what's a positive and negative review | # Ignore all 3* review
products = products[products['rating'] != 3]
products['sentiment'] = products['rating'] >= 4
products.head() | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
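The labelling rule above — drop 3-star reviews, call 4 and 5 stars positive — can be sketched without pandas:

```python
ratings = [5, 1, 3, 4, 2]  # made-up star ratings

# ignore the ambiguous 3-star reviews, then threshold at 4 stars
labelled = [(r, r >= 4) for r in ratings if r != 3]
print(labelled)  # -> [(5, True), (1, False), (4, True), (2, False)]
```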
Let's train the sentiment classifier | from sklearn.cross_validation import train_test_split
# Due to the random divide between the train and test data, the model will be
# slightly different from the lectures from here on out.
train_data, test_data = train_test_split(products, test_size=0.2, random_state=42)
from sklearn.linear_model import LogisticRegression
cv.fit(train_data['review']) # Use the count vector, but fit only the train data
sentiment_model = LogisticRegression().fit(cv.transform(train_data['review']), train_data['sentiment'])
# Predict sentiment for the test data, based on the sentiment model
# The cv.transform is necessary to get the test_data review data in the right format for the model
predicted = sentiment_model.predict(cv.transform(test_data['review'])) | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
Evaluate the sentiment model | from sklearn import metrics
# These metrics will be slightly different than in the lecture, due to the different
# train/test data split and differences in how the model is fitted
print ("Accuracy:", metrics.accuracy_score(test_data['sentiment'], predicted))
print ("ROC AUC Score:", metrics.roc_auc_score(test_data['sentiment'], predicted))
print ("Confusion matrix:")
print (metrics.confusion_matrix(test_data['sentiment'], predicted))
print (metrics.classification_report(test_data['sentiment'], predicted))
# for the ROC curve, we need the prediction probabilities rather than the True/False values
# which are obtained by using the .predict_proba function instead of .predict
predicted_probs = sentiment_model.predict_proba(cv.transform(test_data['review']))
false_positive_rate, true_positive_rate, _ = metrics.roc_curve(test_data['sentiment'], predicted_probs[:,1])
plt.plot(false_positive_rate, true_positive_rate)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Sentiment Analysis')
plt.show() | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
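All of the scores printed above derive from the confusion matrix. A stdlib sketch of accuracy, precision and recall computed by hand from made-up binary counts:

```python
# made-up confusion-matrix entries: true negatives, false positives,
# false negatives, true positives
tn, fp = 80.0, 20.0
fn, tp = 10.0, 890.0

accuracy  = (tp + tn) / (tp + tn + fp + fn)  # fraction of correct predictions
precision = tp / (tp + fp)                   # how many predicted positives were right
recall    = tp / (tp + fn)                   # how many actual positives were found

print(round(accuracy, 3), round(precision, 3), round(recall, 3))  # -> 0.97 0.978 0.989
```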
Applying the learned model to understand sentiment for Giraffe | giraffe_reviews['predicted_sentiment'] = sentiment_model.predict_proba(cv.transform(giraffe_reviews['review']))[:,1]
giraffe_reviews.head() | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
Sort the reviews based on the predicted sentiment and explore | giraffe_reviews.sort_values(by='predicted_sentiment', inplace=True, ascending=False)
# Despite the slightly different model, the same review is ranked highest in predicted sentiment
giraffe_reviews.head(10)
giraffe_reviews.iloc[0]['review'] | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
Let's look at the negative reviews | giraffe_reviews.tail(10)
## We can see the lowest scoring review in the lecture is ranked 10th lowest in this analysis
giraffe_reviews.iloc[-1]['review'] | Machine Learning/1. Foundations A Case Study Approach/Week 3/Analyzing product sentiment.ipynb | Donnyvdm/courses | unlicense |
Fetch the daily returns for a stock | stock_rets = pf.utils.get_symbol_rets('FB') | pyfolio/examples/bayesian.ipynb | femtotrader/pyfolio | apache-2.0 |
Create Bayesian tear sheet | out_of_sample = stock_rets.index[-40]
pf.create_bayesian_tear_sheet(stock_rets, live_start_date=out_of_sample) | pyfolio/examples/bayesian.ipynb | femtotrader/pyfolio | apache-2.0 |
Lets go through these row by row:
The first one is the Bayesian cone plot that is the result of a summer internship project of Sepideh Sadeghi here at Quantopian. It's similar to the cone plot you already saw in the tear sheet above but has two critical additions: (i) it takes uncertainty into account (i.e. a short backtest length will result in a wider cone), and (ii) it does not assume normality of returns but instead uses a Student-T distribution with heavier tails.
The next row compares mean returns of the in-sample (backtest) and out-of-sample or OOS (forward) period. As you can see, mean returns are not a single number but a (posterior) distribution that gives us an indication of how certain we can be in our estimates. The green distribution on the left side is much wider, representing our increased uncertainty due to having less OOS data. We can then calculate the difference between these two distributions as shown on the right side. The grey lines denote the 2.5% and 97.5% percentiles. Intuitively, if the right grey line is lower than 0 you can say that with probability > 97.5% the OOS mean returns are below what is suggested by the backtest. The model used here is called BEST and was developed by John Kruschke.
The next couple of rows follow the same pattern but are an estimate of annual volatility, Sharpe ratio and their respective differences.
The 5th row shows the effect size or the difference of means normalized by the standard deviation and gives you a general sense how far apart the two distributions are. Intuitively, even if the means are significantly different, it may not be very meaningful if the standard deviation is huge amounting to a tiny difference of the two returns distributions.
The 6th row shows predicted returns (based on the backtest) for tomorrow, and 5 days from now. The blue line indicates the probability of losing more than 5% of your portfolio value and can be interpeted as a Bayesian VaR estimate.
The 7th row shows a Bayesian estimate of annual alpha and beta. In addition to uncertainty estimates, this model, like all above ones, assumes returns to be T-distributed which leads to more robust estimates than a standard linear regression would. The default benchmark is the S&P500. Alternatively, users may use the Fama-French model as a bunchmark by setting benchmark_rets="Fama-French".
By default, stoch_vol=False because running the stochastic volatility model is computationally expensive.
Only the most recent 400 days of returns are used when computing the stochastic volatility model. This is to minimize computational time.
Running models directly
You can also run individual models. All models can be found in pyfolio.bayesian and run via the run_model() function. | help(pf.bayesian.run_model) | pyfolio/examples/bayesian.ipynb | femtotrader/pyfolio | apache-2.0 |
For example, to run a model that assumes returns to be normally distributed, you can call: | # Run model that assumes returns to be T-distributed
trace = pf.bayesian.run_model('t', stock_rets) | pyfolio/examples/bayesian.ipynb | femtotrader/pyfolio | apache-2.0 |
The returned trace object can be queried directly. For example, we might ask what the probability of the Sharpe ratio being larger than 0 is by checking what percentage of posterior samples of the Sharpe ratio are > 0:
print('Probability of Sharpe ratio > 0 = {:3}%'.format((trace['sharpe'] > 0).mean() * 100)) | pyfolio/examples/bayesian.ipynb | femtotrader/pyfolio | apache-2.0 |
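That probability is nothing more than the fraction of posterior samples above zero; with plain Python (made-up draws standing in for `trace['sharpe']`):

```python
# stand-in posterior draws of the Sharpe ratio
sharpe_samples = [0.8, 1.2, -0.1, 0.5, 0.9, -0.3, 1.1, 0.4]

prob_positive = sum(s > 0 for s in sharpe_samples) / float(len(sharpe_samples))
print('Probability of Sharpe ratio > 0 = {:.1f}%'.format(prob_positive * 100))  # -> 75.0%
```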
But we can also interact with it like any other pymc3 trace:
pm.traceplot(trace); | pyfolio/examples/bayesian.ipynb | femtotrader/pyfolio | apache-2.0 |
Create Parametrized Model
To perform the search we will use the openmc.search_for_keff function. This function requires that a separate function be defined which creates a parametrized model to analyze. The model must be stored in an openmc.model.Model object. The first parameter of this function will be modified during the search for our critical eigenvalue.
Our model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized. | # Create the model. `ppm_Boron` will be the parametric variable.
def build_model(ppm_Boron):
# Create the pin materials
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.741)
water.add_element('H', 2.)
water.add_element('O', 1.)
# Include the amount of boron in the water based on the ppm,
# neglecting the other constituents of boric acid
water.add_element('B', ppm_Boron * 1e-6)
# Instantiate a Materials object
materials = openmc.Materials([fuel, zircaloy, water])
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)
# Create root Universe
root_universe = openmc.Universe(name='root universe')
root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Instantiate a Settings object
settings = openmc.Settings()
# Set simulation parameters
settings.batches = 300
settings.inactive = 20
settings.particles = 1000
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.source.Source(space=uniform_dist)
# We don't need a tallies file, so don't waste the disk input/output time
settings.output = {'tallies': False}
model = openmc.model.Model(geometry, materials, settings)
return model | examples/jupyter/search.ipynb | liangjg/openmc | mit |
Search for the Critical Boron Concentration
To perform the search we simply call the openmc.search_for_keff function and pass in the relevant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical Boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use.
Instead of the bracketed range we could have used a single initial guess, but have elected not to in this example. Finally, due to the high noise inherent in using as few histories as are used in this example, our tolerance on the final keff value will be rather large (1.e-2) and the default 'bisection' method will be used for the search. | # Perform the search
crit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],
tol=1e-2, print_iterations=True)
print('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm)) | examples/jupyter/search.ipynb | liangjg/openmc | mit |
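The default 'bisection' method behind `search_for_keff` repeatedly halves the bracket until keff crosses 1.0 within tolerance. A stdlib sketch with a made-up linear keff-vs-boron model standing in for an actual OpenMC run:

```python
def fake_keff(ppm):
    # stand-in for an OpenMC simulation: keff falls linearly with boron content
    return 1.17 - 1e-4 * ppm

lo, hi = 1000.0, 2500.0          # same bracket as above
while hi - lo > 1.0:             # 1 ppm tolerance for this toy example
    mid = 0.5 * (lo + hi)
    if fake_keff(mid) > 1.0:     # still supercritical -> need more boron
        lo = mid
    else:
        hi = mid

print(round(0.5 * (lo + hi)))    # -> 1700 ppm for this toy model
```

The real search works the same way, except each `fake_keff` evaluation is a full Monte Carlo eigenvalue calculation.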
Finally, the openmc.search_for_keff function also provided us with lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration. | plt.figure(figsize=(8, 4.5))
plt.title('Eigenvalue versus Boron Concentration')
# Create a scatter plot using the mean value of keff
plt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])
plt.xlabel('Boron Concentration [ppm]')
plt.ylabel('Eigenvalue')
plt.show() | examples/jupyter/search.ipynb | liangjg/openmc | mit |
Repeatedly read records from the oscilloscope | filename = 1
if (filename == 1):
for f in glob.iglob("./data/*.h5"): # delete all .h5 files
print 'Deleting', f
os.remove(f)
else:
print 'Not removing old files, as filename {0} is not 1.'.format(filename)
osc.write(':STOP') # start recording
time.sleep(0.5)
while True:
#print(' Enter to continue.')
#raw_input()  # wait for a key press
osc.write(':FUNC:WREC:OPER REC') # start recording
run_start_time = time.time()
print ' Capturing...'
time.sleep(0.5)
while True:
osc.write(':FUNC:WREC:OPER?') # finish recording?
reply = osc.read()
if reply == 'STOP':
run_time = round(time.time() - run_start_time, 2)
print(' Subrun finished, capturing for %.2f seconds.' % run_time)
break
time.sleep(0.01)
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:POIN 1400')
osc.write(':WAV:XINC?')
xinc = float(osc.read(100))
print 'XINC:', xinc,
osc.write(':WAV:YINC?')
yinc = float(osc.read(100))
print 'YINC:', yinc,
osc.write(':TRIGger:EDGe:LEVel?')
trig = float(osc.read(100))
print 'TRIG:', trig,
osc.write(':WAVeform:YORigin?')
yorig = float(osc.read(100))
print 'YORIGIN:', yorig,
osc.write(':WAVeform:XORigin?')
xorig = float(osc.read(100))
print 'XORIGIN:', xorig,
osc.write(':FUNC:WREP:FEND?') # get number of last frame
frames = int(osc.read(100))
print 'FRAMES:', frames, 'SUBRUN', filename
with h5py.File('./data/data'+'{:02.0f}'.format(filename)+'_'+str(int(round(time.time(),0)))+'.h5', 'w') as hf:
hf.create_dataset('FRAMES', data=(frames)) # write number of frames
hf.create_dataset('XINC', data=(xinc)) # write axis parameters
hf.create_dataset('YINC', data=(yinc))
hf.create_dataset('TRIG', data=(trig))
hf.create_dataset('YORIGIN', data=(yorig))
hf.create_dataset('XORIGIN', data=(xorig))
hf.create_dataset('CAPTURING', data=(run_time))
osc.write(':FUNC:WREP:FCUR 1') # skip to n-th frame
time.sleep(0.5)
for n in range(1,frames+1):
osc.write(':FUNC:WREP:FCUR ' + str(n)) # skip to n-th frame
time.sleep(0.001)
osc.write(':WAV:DATA?') # read data
#time.sleep(0.4)
wave1 = bytearray(osc.read_raw(500))
wave2 = bytearray(osc.read_raw(500))
wave3 = bytearray(osc.read_raw(500))
#wave4 = bytearray(osc.read(500))
#wave = np.concatenate((wave1[11:],wave2[:(500-489)],wave3[:(700-489)]))
wave = np.concatenate((wave1[11:],wave2,wave3[:-1]))
hf.create_dataset(str(n), data=wave)
filename = filename + 1
| vxi.ipynb | ODZ-UJF-AV-CR/osciloskop | gpl-3.0 |
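Each sample stored in `wave` is a raw byte; turning it into volts uses the scaling parameters queried above. A common Rigol-style conversion is volts = (raw - YORigin - YREFerence) * YINCrement — note the notebook never queries YREFerence, so that value (and the other parameters below) are assumed for illustration:

```python
# made-up scaling values standing in for the scope queries above
yinc, yorig, yref = 0.02, 0.0, 128  # volts per level, origin, reference level (assumed)

def byte_to_volts(raw):
    # Rigol-style conversion: (raw - YORigin - YREFerence) * YINCrement
    return (raw - yorig - yref) * yinc

wave = [128, 138, 118]  # stand-in raw samples
print([round(byte_to_volts(b), 3) for b in wave])  # -> [0.0, 0.2, -0.2]
```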
Repeatedly read records from the oscilloscope
This should be run after the initialization step. The timeout at the end should be enlarged if not all 508 frames are transferred. | filename = 1
run_start_time = time.time()
if (filename == 1):
for f in glob.iglob("./data/*.h5"): # delete all .h5 files
print 'Deleting', f
os.remove(f)
else:
print 'Not removing old files, as filename {0} is not 1.'.format(filename)
while True:
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:POIN 1400')
osc.write(':WAV:XINC?')
xinc = float(osc.read(100))
print 'XINC:', xinc,
osc.write(':WAV:YINC?')
yinc = float(osc.read(100))
print 'YINC:', yinc,
osc.write(':TRIGger:EDGe:LEVel?')
trig = float(osc.read(100))
print 'TRIG:', trig,
osc.write(':WAVeform:YORigin?')
yorig = float(osc.read(100))
print 'YORIGIN:', yorig,
osc.write(':WAVeform:XORigin?')
xorig = float(osc.read(100))
print 'XORIGIN:', xorig,
osc.write(':FUNC:WREP:FEND?') # get number of last frame
frames = int(osc.read(100))
print 'FRAMES:', frames, 'SUBRUN', filename
# This is not good if the scaling is different and frames are for example just 254
# if (frames < 508):
# loop_sleep_time += 10
with h5py.File('./data/data'+'{:02.0f}'.format(filename)+'.h5', 'w') as hf:
hf.create_dataset('FRAMES', data=(frames)) # write number of frames
hf.create_dataset('XINC', data=(xinc)) # write axis parameters
hf.create_dataset('YINC', data=(yinc))
hf.create_dataset('TRIG', data=(trig))
hf.create_dataset('YORIGIN', data=(yorig))
hf.create_dataset('XORIGIN', data=(xorig))
osc.write(':FUNC:WREP:FCUR 1') # skip to n-th frame
time.sleep(0.5)
for n in range(1,frames+1):
osc.write(':FUNC:WREP:FCUR ' + str(n)) # skip to n-th frame
time.sleep(0.001)
osc.write(':WAV:DATA?') # read data
#time.sleep(0.4)
wave1 = bytearray(osc.read_raw(500))
wave2 = bytearray(osc.read_raw(500))
wave3 = bytearray(osc.read_raw(500))
#wave4 = bytearray(osc.read(500))
#wave = np.concatenate((wave1[11:],wave2[:(500-489)],wave3[:(700-489)]))
wave = np.concatenate((wave1[11:],wave2,wave3[:-1]))
hf.create_dataset(str(n), data=wave)
filename = filename + 1
osc.write(':FUNC:WREC:OPER REC') # start recording
#print(' Subrun finished, sleeping for %.0f seconds.' % loop_sleep_time)
run_start_time = time.time()
#time.sleep(loop_sleep_time) # delay for capturing
print(' Subrun finished, Enter to continue.')
#raw_input()
time.sleep(100) # delay for capturing
#print(' We were waiting for ', time.time() - run_start_time())
| vxi.ipynb | ODZ-UJF-AV-CR/osciloskop | gpl-3.0 |
Stopwatch for timing the first loop | first_run_start_time = time.time()
raw_input()
loop_sleep_time = time.time() - first_run_start_time + 15
print loop_sleep_time
loop_sleep_time=60 | vxi.ipynb | ODZ-UJF-AV-CR/osciloskop | gpl-3.0 |