HrantDavtyan/Data_Scraping | Week 4/Craftcans.com_cleaning.ipynb | apache-2.0 | import pandas, re
data = pandas.read_excel("craftcans.xlsx")
data.head()
"""
Explanation: Craftcans.com - cleaning
Craftcans.com provides a database of 2,692 canned craft beers. The data on beers includes the following variables:
Name
Style
Size
Alcohol by volume (ABV)
IBU’s
Brewer name
Brewer location
However, some of the variables include both numeric and text values (e.g. Size), while others include missing values (e.g. IBUs).
To make the dataset ready for analysis, one needs to clean it first. We will do that using pandas and regular expressions.
End of explanation
"""
data.columns = data.iloc[0]
data.head()
"""
Explanation: As can be seen above, the header row is stored as an ordinary observation. Let's rename the columns with the real headers.
End of explanation
"""
data = data.drop(0,axis=0)
data.head()
"""
Explanation: Fine, but the very first row still remains. We have to drop it, and for that we will use the drop() function from the pandas library, which takes 2 arguments: the name of the row/column to drop and the axis (0 for rows and 1 for columns).
End of explanation
"""
data.index = data["ENTRY"]
data.head()
data = data.drop("ENTRY",axis=1)
data.head()
"""
Explanation: Let's do the same for row names. Rows are called indices in pandas. Thus, let's take the values from the "ENTRY" column and use them to rename the rows. Then, of course, we should drop the now-redundant column too.
End of explanation
"""
data_list = data["SIZE"].tolist()
unique_values = set(data_list)
print(unique_values)
"""
Explanation: Nice, now let's clean some variables, starting with SIZE. It holds size information expressed in oz, ounces, or other variants, but we need numbers only. Let's first see what the available options are. For that purpose, we can convert the column to a list and then use the set() function to get its unique values.
End of explanation
"""
for i in range(0,len(data['SIZE'])):
    data['SIZE'][i] = re.sub('(^.*\d)(\s*.*$)',r'\1',data['SIZE'][i])
data.head()
"""
Explanation: Excellent. This means we can write a regular expression that finds all the digits (including those with a dot inside) and substitutes whatever comes afterwards with an empty string.
End of explanation
"""
# for all values in that column
for i in range(0,len(data['ABV'])):
    # if a match exists, which means it is a correct value
    if re.match('(^.*\d)(%)',data['ABV'][i]) is not None:
        # substitute the % sign with nothing, convert the result to float and divide by 100
        data['ABV'][i] = float(re.sub('(^.*\d)(%)',r'\1',data['ABV'][i]))/100
    else: # which is when the value is incorrect
        # give it the value of "nan", which stands for a missing value
        data['ABV'][i] = float("nan")
data['ABV'].head(100)
"""
Explanation: Done! Let's now go for the ABV variable. It is given in %, so we can keep the number only and divide it by 100 to get the float value. But there may be some incorrectly entered values in the column, so let's convert only the correct values and assign a "missing value" marker to the others.
End of explanation
"""
data.info()
"""
Explanation: Great! Let's now get some info on our dataframe
End of explanation
"""
data['ABV'] = data['ABV'].fillna(data['ABV'].mean())
data.info()
"""
Explanation: As you can see, the ABV column is left with only 2348 values out of 2410, as we assigned "nan" to the incorrect values. Let's impute those missing values. As it is a numeric variable, we can impute with the mean using the fillna() function from pandas.
End of explanation
"""
data['IBUs'] = data['IBUs'].fillna(method = "bfill")
data.info()
"""
Explanation: Done! But there is another variable with missing values: IBUs. Let's impute that one as well, but this time, instead of the mean, let's use the backward-filling method.
End of explanation
"""
|
Wx1ng/Python4DataScience.CH | Series_0_Python_Tutorials/S0EP4_Python_In_Practice.ipynb | cc0-1.0 | import csv
import codecs
import numpy as np
import pandas as pd
"""
Explanation: Python In Practice: Practice Is King
1 File I/O: A Quick Tour
Sightseeing portal:
https://github.com/BinRoot/Haskell-Data-Analysis-Cookbook/tree/master/Ch01
Even the $54.99 Haskell Data Analysis Cookbook spends its first chapter on nothing more exciting than how to read in text in the following forms:
TXT, DAT (plain text whose internal format you already know something about)
CSV, TSV (Comma/Tab Separated Values data tables)
JSON (JavaScript Object Notation, a string representation compatible with JavaScript data)
XML (eXtensible Markup Language, a semi-structured data markup language with tags in angle brackets)
HTML (Hyper Text Markup Language, the web markup language, with metadata in angle brackets)
Connecting Python to various databases will be covered in later material.
A simple CSV file: S0EP4_fakedata.txt
first name , last name , employee number
luke , vanderhart , 1
ryan , neufeld , 2
End of explanation
"""
f = open("S0EP4_fakedata.txt",'r')
file_content0 = []
mark = True
while(mark):
    file_content0.append(f.readline())
    mark = file_content0[-1]
f.close()
print file_content0
"""
Explanation: 1.1 Ways to open a file
If you are sure the file contains no Chinese text, a plain open is enough; to handle Chinese you will also need the codecs library.
Next, the workflow:
Open the file
Create an empty list
Read every line of the file into the list
Stop when the end of the file is reached
Close the file
Print the contents
End of explanation
"""
file_content1 = []
with open("S0EP4_fakedata.txt") as f:
    mark = True
    while(mark):
        file_content1.append(f.readline())
        mark = file_content1[-1]
print file_content1,'\n'
"""
Explanation: As you can see, a final empty string was read in because we reached the end of the file, and the earlier lines still end with newline characters.
Here we also use with ... as in place of f = open() and f.close(); everything just works, with no need to detect the end of the file or close it ourselves.
End of explanation
"""
with open("S0EP4_fakedata.txt") as f:
    file_content2 = f.readlines()
print file_content2
"""
Explanation: Nice!
The readline loop above guards against reading an overly large file in one go; for a small file, try reading everything at once with readlines():
End of explanation
"""
with codecs.open("S0EP4_fakedata2.txt",'r',encoding='utf-8') as f:
    file_content3 = f.readlines()
print file_content3
for item in file_content3:
    print item,
with codecs.open("S0EP4_fakedata2_w.txt","w",encoding='utf-8') as f:
    for item in file_content3:
        f.write(item)
"""
Explanation: The Chinese-text challenge:
CODECS = COders/DECoderS
End of explanation
"""
try:
    del file_content0
    del file_content1
    del file_content2
    del file_content3
except Exception:
    pass
"""
Explanation: Cleaning up:
End of explanation
"""
npdata = np.genfromtxt("S0EP4_fakedata2.txt",skip_header=1,dtype=None,delimiter=',')
#print npdata
print npdata[0][0],len(npdata[0][0]),unicode(npdata[0][0].decode('utf-8')),len(npdata[0][0].decode('utf-8'))
#npdata[0][0],npdata[0][0].decode('utf-8')
"""
Explanation: 1.2 Reading CSV without a file object
NumPy is one available option.
Note here the relationship between the byte strings used by the file and Chinese (Unicode) strings:
encode always converts a string to a byte string
decode always converts a byte string to a string
End of explanation
"""
pddata = pd.read_csv("S0EP4_fakedata2.txt",header=0,sep=",",encoding="utf-8")
print pddata,type(pddata)
pddata.to_csv("S0EP4_fakedata2_pd.txt",sep="\t",index=False,encoding="utf-8")
"""
Explanation: Simplifying further:
Working in UTF-8 brings extra transcoding overhead, so if you need to do some statistical analysis on the CSV, Pandas is the more convenient choice.
End of explanation
"""
import json
from pprint import pprint
in_data=[{
"id": "001",
"name": "Bulbasaur",
"type": "Grass",
"subtype": "Poison"
},
{ "id": "002",
"name": "Ivysaur",
"type": "Grass",
"subtype": "Poison"
},
{
"id": "003",
"subtype": "Poison",
"name": "Venusaur",
"type": "Grass"
}]
data_string = json.dumps(in_data) #将一个Python数据结构转换为JSON
out_data = json.loads(data_string) #将JSON编码字符串重现变为一个Python数据结构
print data_string,'\n',type(data_string),'\n',out_data,'\n',type(out_data)
"""
Explanation: 1.3 Reading and writing JSON (JavaScript Object Notation) directly
JSON is another common data-interchange format, ubiquitous in web development and network data transfer.
The json library handles it with ease; first, convert a Python data structure into a JSON string:
End of explanation
"""
# Writing JSON data
with open('S0EP4_jsondata.json', 'w') as f:
    json.dump(in_data, f)
# Reading data back
with open('S0EP4_jsondata.json', 'r') as f:
    out_data = json.load(f)
print out_data
"""
Explanation: If your JSON needs to live in a text file
End of explanation
"""
pprint(out_data)
"""
Explanation: Inspect the structure of out_data, and make the printout a little prettier:
End of explanation
"""
print json.dumps(out_data, indent=2)
"""
Explanation: The prettified JSON data commonly seen on the web:
End of explanation
"""
import scipy
print scipy.log(5)
import scipy as sp # 推荐!
print sp.log(5)
import scipy.special as ssp # 推荐!
print ssp.comb(5,2)
from scipy import log
print log(5) # 可能导致名字空间的污染(Namespace Contamination)
from scipy import *
print log(5) # 强烈不推荐
"""
Explanation: 2 Get Organized: package management, modules, and project layout
2.1 Package-management tools
Distribution: Continuum Analytics (continuum.io/downloads)
IDE: PyCharm (www.jetbrains.com/pycharm)
conda install numpy (recommended)
pip install numpy (recommended)
easy_install numpy
From a source package: setup.py install
Flexible deployment: virtualenv
Packages can generally be found under the Python folder in lib/site-packages.
2.2 Import habits
End of explanation
"""
# for example, getting the current time
import datetime
print datetime.datetime.now().strftime("%Y-%m-%d %H:%M") # the current time
sometime = datetime.datetime.strptime("2015-05-20","%Y-%m-%d")
print sometime,type(sometime) # gives a datetime.datetime object
print (datetime.datetime.now() - datetime.timedelta(hours=1)).strftime("%Y-%m-%d %H:%M") # the time one hour ago
"""
Explanation: Now let's use this slightly verbose import style to briefly introduce the datetime library and get a feel for it
End of explanation
"""
a=[1,2,3,2,5,8,1,3,10,12,15,5]
b=list(set(a))
print b
"""
Explanation: 2.3 A first look at project organization
Python searches lib/site-packages and the current folder for usable files, executing them on demand.
3. Touch the True Programming: functions and classes
Now that we know how to organize a project cleanly, we move on to functions and classes, with a light touch of functional programming; functools will be introduced later.
Flexible use of functions and classes is at the core of any programming language.
Programming paradigms:
Procedural Programming
Object-Oriented Programming
Functional Programming
Python claims to support all of these paradigms.
3.1 Functions
A function: arguments go in, a result comes out (just as in its mathematical definition).
Procedural Programming: write out, step by step, the thing you want to do:
De-duplication:
End of explanation
"""
def dedupe(items):
    collect = set()
    for item in items:
        if item not in collect:
            #yield item
            collect.add(item)
    return collect # choose either this return or the yield statement above
a=[1,2,3,2,5,8,1,3,10,12,15,5]
b=list(dedupe(a))
print b
"""
Explanation: Functions are there to let you reuse your functional code: if you are going to do something more than once, use a function.
3.1.1 Basic features
Re-implementing the de-duplication as a function, notice that the function does not even check its argument types; will this freedom become a pitfall later?
End of explanation
"""
import numpy as np
def abs(x=0):
    y=x if x >= 0 else -x
    return y
print abs(-10)
print abs()
def addlist(x=[]):
    x.append(np.random.randint(5))
    print x
    return None
addlist()
addlist()
addlist()
"""
Explanation: Functions can accept parameters with default values, known as keyword arguments (kwargs).
Note that default arguments should be immutable! If you use a mutable type such as [] or {} as a default, your results will quickly become chaotic.
End of explanation
"""
#del myabs
try:
    print myabs(10)
except Exception,e:
    print e
def myabs(x):
    y = x if x >= 0 else -x
    return y
print myabs(10)
"""
Explanation: Point 1: a function must be defined before it is used, otherwise a NameError is raised.
Using a function right after defining it is fine — just pass suitable arguments inside the parentheses — but never forget the return, otherwise:
End of explanation
"""
def myabs(x):
    y = x if x >= 0 else -x
    #return y
print myabs(-5)
def test():
    print "A man need a code-Bunk"
print type(test())
"""
Explanation: Point 2: remember to return from a function, and note that print is not equivalent to return.
End of explanation
"""
def is_prime_wrong(n):
    for i in xrange(2,n):
        #print i,
        if n % i == 0:
            return False
        return True
def is_prime_right(n):
    for i in xrange(2,n):
        #print i
        if n % i == 0:
            return False
    return True
print is_prime_wrong(4),is_prime_wrong(7),is_prime_wrong(9)
print is_prime_right(4),is_prime_right(7),is_prime_right(9)
"""
Explanation: Point 3: put return in the right place
Once the first return is reached, nothing afterwards executes — it acts like a break.
With input 4, both versions hit the return False inside the if.
With input 7, a prime, the if body never runs: one version returns True directly after a single loop iteration, the other returns True only once the loop has finished.
With input 9, one version returns True directly after a single loop iteration, while the other returns False when i reaches 3.
Use print i to trace the execution.
End of explanation
"""
def myabs(x=0):
    """
    Function: myabs
    Parameters: number -> number
    Purpose: compute the absolute value of a number; if the expected argument is not given, the default is 0
    Example: print myabs(-42) prints 42
    """
    y = x if x>= 0 else -x
    return y
print myabs(-5)
"""
Explanation: Point 4: finally, when writing a function it is good practice to attach documentation to the code block; as described before, this usually goes between triple quotes.
What it does (the result it computes and returns)
Parameter types
The type transformation performed (e.g. int -> str, int -> int)
A short example of using the function
End of explanation
"""
def myFactorial(N):
    intN = int(N)
    if intN in [0,1]:
        return 1
    elif intN<0:
        return None
    else:
        return intN*myFactorial(intN-1)
print myFactorial(5.0)
"""
Explanation: Point 5: a function can call itself, but you must guard against infinitely recursive calls
The factorial:
$N! = N * (N-1) * ... * 1 = \prod_{i=1}^N i$
The program logic is:
If N cannot be turned into an integer with int, an error is raised immediately
If N is 0 or 1, return 1
If N is negative, return None
Otherwise N is an integer greater than 1, and the factorial of N returns N times the factorial of N-1
End of explanation
"""
x="You live and learn. At any rate, you live."
def func():
print 'x is', x #看见了外层x
#x = "So long." #使用这句替代exec时会报一个赋值前就使用的错误
exec('x = \"So long.\"') #用语法检查不易察觉的方式修改了局部变量x
print 'Changed local x to', x #打印局部x
func()
print 'x is still', x #外层x没有改变
"""
Explanation: 3.1.2 Inception of "Scopes": breaking into the variable space
Procedural Programming
Variables inside a function are local; those outside are global
Namespaces exist separately at every level of nested functions
A function can "see" outer variables when defined and when executed, but a well-designed function usually does not let the inner scope interfere with outer variables:
End of explanation
"""
x = "You live and learn. At any rate, you live."
def func():
global x
print('x is', x)
x = "So Long, and Thanks for All the Fish"
print('Changed global x to', x)
func()
print('Value of x is', x)
"""
Explanation: If you insist on interfering with the outer layer, use global:
End of explanation
"""
print type(locals()),type(globals())
print __name__, '__name__' in dir(__builtins__)
"""
Explanation: A quick aside: where does the Python interpreter go looking for variables? The LEGB rule tells you:
Local: first locals(), the dict holding the current local variables
Enclosing: then the enclosing code block, and the block enclosing that one (you will soon see that functions can contain functions, each with a scope of its own)
Global: then globals()
Built-In: that is, Python's built-in namespace
If the name is still not found, Python tells you the variable does not exist; otherwise, it uses the variable from wherever it was found first.
End of explanation
"""
def concrete():
    "None"
    print "concrete() called"
    def abstract():
        print "abstract() called"
    print 'abstract' in locals()
    abstract()
concrete()
print 'abstract' in globals()
abstract()
"""
Explanation: 3.1.3 New toys: locals( ) and globals( )
Just like variables, how about functions inside functions? Alas, they too are local!
abstract() inside concrete() can only be used by concrete()
Whether you are checking for functions or variables, the locals() and globals() dicts both work
End of explanation
"""
try:
    del a
except Exception:
    pass
globals()['a'] = 1
print a
"""
Explanation: Having discovered the fun new toys locals() and globals(), we have even more ways to meddle with the variables and functions in the current scope...
As long as a name a exists in the scope — whatever it may be — delete it outright (not recommended), then manipulate globals directly, and observe...
End of explanation
"""
a = 3
b = eval('a+2')
print b
e = [2]
exec('b'+str(e[0])+' = a**2 ')
print b2
"""
Explanation: 3.1.4 The ferocious eval( ) and exec( )
If the features so far still do not satisfy you, here are two final tricks (use with caution)
evaluate evaluates an expression
execute executes the statements inside a string directly
Both are quick-and-dirty approaches: once the program goes wrong, the IDE will very likely not tell you where.
End of explanation
"""
import re
def trinary(x):
    return trinary(x/3)+str(x%3) if x>=3 else str(x) # calls itself; str() for type conversion
head_str = ''.join([str(i) for i in xrange(1,10)]) # list comprehension
for i in xrange(3**8): # loop
    tail_str = re.sub('0',' ',re.sub('1','+',re.sub('2','-','{0:0>8}'.format(trinary(i)))))+' ' # regular expressions and format
    zip_str = re.sub(' ','',''.join([s1+s2 for s1,s2 in zip(head_str,tail_str)])) # regular expressions and zip
    if 100==eval(zip_str): # eval
        print tail_str,zip_str
"""
Explanation: With so many handy little tools, let's simply do an exercise:
Given the digits 1-9, randomly insert +, -, or nothing in between, so that the sum equals 100. Produce all the answers.
One of the answers is 123+45-67+8-9=100.
The few short lines of code below combine the following features:
Function recursion (calling itself)
re regular expressions
zip
format for pretty printing
List Comprehension
eval for evaluating a string expression
xrange for looping
str for type conversion
End of explanation
"""
a = "abc"
b = "abc"
c = 1
d = 1
e = 4294967295
f = e
g = 4294967295
id(a),id(b),id(c),id(d),id(e),id(f),id(g)
"""
Explanation: 3.1.5 Anonymous functions and passing functions around
Copying with =: passing by reference, which applies to
variables
data structures
functions
...
"Static" interned references:
* small strings
* small numbers
Ordinary references:
* direct assignment gives different ids
* assignment by reference gives the same id
End of explanation
"""
def head(L):
    return L[0]
headprime = head
head2 = lambda x:x[0]
head3 = lambda x:x[0]
a = range(3,6)
print a,head(a),id(head),headprime(a),id(headprime)
print head2(a),id(head2),head3(a),id(head3)
print (lambda x:x[0])(a)
"""
Explanation: Looking at these, I feel we are one step closer to "functionals" — functions of functions.
First, the ordinary uses of anonymous functions, that is, lambda:
def defines a named function
a named function is given an alias (passing by reference)
lambda defines an anonymous function and binds it to a name
lambda defines an anonymous function that is used directly
End of explanation
"""
d = {'a':1.0,'b':1.2,'c':0.8,'d':0.9}
print sorted(d.iteritems(),key=lambda x:x[1],reverse=True)
"""
Explanation: Next: functions can also be arguments to other functions!
If you taught yourself Python, I'd guess there is a 71.2% chance that your first encounter with lambda was in a sorting problem (the number is made up, of course — don't trust it).
sorted's first argument is the dict's iteritems; its second argument, key, is a function that assigns a value to each element of iteritems, and sorted sorts by that value.
Here, a function is passed as an argument:
End of explanation
"""
def decorated(func):
    print "Decorated first"
    return func
def main_func():
    print "Main called"
dec_main_func=decorated(main_func)
dec_main_func()
"""
Explanation: 3.1.6 Advanced usage: decorators
If you have a batch of variables to process uniformly by one rule, with less code, you need a function.
If you have a batch of functions to process uniformly by one rule, with less code, you need a decorator (Decorator).
Keep these two points straight:
A function
takes arguments
does something
returns a result
A decorator
takes a function as an argument
does something
returns a function
Using a decorating function in the ordinary way
End of explanation
"""
# decorator definition
def decorated(func):
    print "Decorated first"
    return func
# applying the decorator directly
@decorated
def main_func():
    print "Function called"
main_func()
"""
Explanation: Using a decorator:
The decorator syntax starts with @, followed by the decorator function's name and optional arguments. Immediately after the decorator declaration come the decorated function and the decorated function's optional arguments.
The line main_func=decorated(main_func) is saved.
End of explanation
"""
class Employee(object): # the parentheses name the parent class; object is the "base" class that every class ultimately inherits from
    def __init__(self,name):
        self.name=name
    def set_age(self,age):
        self.age=age
    def set_major(self,major):
        self.major=major
    def set_card(self,card):
        self.card=card
Francoise=Employee('Francoise') # construct an instance; a name is required
Francoise.set_age('22')
Francoise.set_major('Physics')
Francoise.set_card('000142')
"""
Explanation: The @ line is essentially equivalent to "main_func=decorated(main_func)"
3.2 Object-Oriented Programming
Python likewise supports object-oriented programming.
The term "object-oriented" was first proposed by Alan C. Kay, who was also the designer of the Smalltalk language.
As a computer scientist with a background in biology, Kay drew on the biological concept of the cell when thinking about programming style. The meaning of "object-oriented" comes from the objects it is analogous to: independent, mutually communicating biological cells. A certain kind of cell produces specific proteins in response to specific chemical signals in a specific environment; likewise, the objects (sometimes called instances) generated from a class (Class) defined on a computer respond in specific ways to specific computational messages.
"Object orientation" appropriately hides the details of internal activity — I don't care how your cytoplasm handles osmosis or generates electrical signals... just give me the proteins!
On top of functions, it helps programmers organize code, and reuse it where appropriate
A simple example: company employees can be abstracted with a class:
They all have a name, an age, a major, and an employee-card number
The name is used as the variable for constructing an employee object, while the other variables can be modified and accessed
First define the class, define a member of the class, then modify its variables.
A class can serve as a template: when creating an instance, attributes we consider mandatory can be enforced by defining a special __init__ method.
End of explanation
"""
print Francoise.age
print Francoise.major
print Francoise.name
"""
Explanation: Try accessing its attributes
End of explanation
"""
class Manager(Employee):
    qualification="5-year Experience Expert"
Alexandre=Manager('Alexandre')
Alexandre.set_age('25')
Alexandre.set_major('Mathematics')
Alexandre.set_card('00004512')
print Alexandre.major
print Alexandre.qualification
"""
Explanation: Relationships between classes can be expressed through inheritance
End of explanation
"""
|
shumway/srt_bootcamp | KochSnowflake.ipynb | mit | a = (0.0, 0.0)
e = (1.0, 0.0)
ae = (a,e)
"""
Explanation: Koch Snowflake Introduction
A Koch Snowflake is a fractal that has been known for over 100 years
(see the Wikipedia article
for history).
The shaped is formed by starting from a triangle. For each line segment, remove the middle third
and replace it by two equal pieces that form a triangle.
To program this in python, we simply need a function that turns a line segment into four shorter
segments. We'll use a pair of tuples to represent a line segment: $((x_a,y_a), (x_b,y_b))$.
The current shape will be a list of segments.
All we need is a function that takes one line segment and expands it into four smaller segments.
We'll call the original segment ae, and create new points b, c, d, so that the four segments
are ab, bc, cd, and de. First, let's work this out by hand; then we'll make the function.
End of explanation
"""
import math
import matplotlib.pyplot as plt
from matplotlib import collections as mc

def plot_segments(segments):
    fig, ax = plt.subplots()
    lines = mc.LineCollection(segments)
    ax.add_collection(lines)
    ax.margins(0.2)
    ax.set_aspect('equal')
    ax.autoscale()
    return ax
plot_segments([ae]);
"""
Explanation: It's helpful to be able to make a plot of a list of segments.
End of explanation
"""
b = ((2*a[0]+e[0])/3, (2*a[1]+e[1])/3)
d = ((a[0]+2*e[0])/3, (a[1]+2*e[1])/3)
"""
Explanation: Next we figure out formulas for points b, c, and d.
Points b and d are easy, because they are 1/3 and 2/3 of the way along the segment ae.
End of explanation
"""
k = math.sqrt(3)/6
c = ((a[0]+e[0])/2 - k * (e[1]-a[1]), (a[1]+e[1])/2 + k *(e[0]-a[0]))
plt.gcf().clear()
plot_segments([(a,b), (b,c), (c,d), (d,e)]);
"""
Explanation: Point c is trickier, because it doesn't lie directly on the line segment.
It is the vertex of an equilateral triangle with side length |ae|/3.
To get to point c, find the midpoint of ae, then go out perpendicularly a distance $\sqrt{3}/6\,|ae|$.
To move perpendicularly, we use that trick that the point (-y, x) is rotated 90° CCW from (x, y).
End of explanation
"""
def f(seg):
    a = seg[0]
    e = seg[1]
    b = ((2*a[0]+e[0])/3, (2*a[1]+e[1])/3)
    d = ((a[0]+2*e[0])/3, (a[1]+2*e[1])/3)
    k = math.sqrt(3)/6
    c = ((a[0]+e[0])/2 - k * (e[1]-a[1]), (a[1]+e[1])/2 + k *(e[0]-a[0]))
    return [(a,b), (b,c), (c,d), (d,e)]
"""
Explanation: Now we make this into a function.
End of explanation
"""
plot_segments(f(((0,0),(1,0))));
plot_segments(f(((0,0),(0,1))));
plot_segments(f(((2, 3), (2 + math.cos(math.pi/3), 3 + math.sin(math.pi/3)))));
"""
Explanation: We'll test this function on some different line segments.
End of explanation
"""
def recurse(segments):
    return [x for s in segments for x in f(s)]
recurse([(a,e)])
plot_segments(recurse([(a,e)]));
segments = [(a,e)]
for i in range(2):
    segments = recurse(segments)
plot_segments(segments);
"""
Explanation: Finally, we make a function to apply f to every segment in a list.
We use some elegant notation called a "list comprehension" here.
End of explanation
"""
def snowflake(n):
    p = -math.cos(math.pi/6), math.sin(math.pi/6)
    q = math.cos(math.pi/6), math.sin(5*math.pi/6)
    r = 0.0, -1.0
    segments = [(p,q), (q,r), (r,p)]
    for i in range(n):
        segments = recurse(segments)
    plot_segments(segments)
snowflake(0)
snowflake(1)
snowflake(2)
snowflake(6)
"""
Explanation: Finally, we'll make the full snowflake by starting from an equilateral triangle.
End of explanation
"""
[(n, 3*(4/3)**n) for n in range(11)]
"""
Explanation: Length of the perimeter line
Note that the length of the line grows by a factor of 4/3 at each iteration.
The original triangle has a perimeter of 3, so after $n$ iterations the curve has a length $3(4/3)^n$.
We can evaluate this for several values of n.
End of explanation
"""
[(n, 8/5 - 3/5*(4/9)**n) for n in range(11)]
"""
Explanation: Note that the true fractal curve, with $n\rightarrow\infty$), has an infinite length!
Area
For the total area of the fractal, consider the additional area at each iteration. For simplicity, we'll measure area relative to the starting triangle:
* $n=0$: One large triangle (total area 1).
* $n=1$: Add three smaller triangles. Each triangle is $1/7$ the size of the trangle. Now the total area is $1 + 3/9 = 4/3 \approx 1.3333$.
* $n=2$: Add 12 smaller triangle. Each triange is $1/9^2$ the size of the original triangle. Now the total area is $1+3/9+12/81 = 40/27 \approx 1.4815$.
To continue this, we need to be more systematic. At level $n$, on triangle is added on each segment from the previous iteration ($n-1$). The number of segments at level $n$ is $3\cdot 4^n$, so the number of triangles added at level $n$ is $3\cdot 4^{n-1}$. Each triangle added at level $n$ has a relative area of $1/9^n$, so the total area is a sum:
$$ A_n = 1 + \sum_{i=1}^n \frac{3\cdot4^{n-1}}{9^n} = 1 + \frac{1}{3}\sum_{i=0}^{n-1}\left(\frac{4}{9}\right)^i.$$
Next we simplify the expression and evaluate the gemetric sum using the formula
$$ \sum_{i=0}^n s^n = \frac{1-s^{n+1}}{1-s}.$$
We find
$$A_n = 1 + \frac{1}{3}\sum_{i=0}^{n-1} \left(\frac{4}{9}\right)^i
= 1 + \frac{1}{3}\cdot\frac{1-(4/9)^n}{1-(4/9)}
= \frac{8}{5} - \frac{3}{5}\left(\frac{4}{9}\right)^n$$
End of explanation
"""
|
PositroniumSpectroscopy/positronium | notebooks/Fine structure.ipynb | bsd-3-clause | # load packages
from IPython.display import Latex
from positronium import Ps, Bohr
from positronium.constants import h, frequency_hfs
from positronium.interval import frequency
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
"""
Explanation: Fine structure
End of explanation
"""
# ortho-Ps
s131 = Ps(n=1, l=0, S=1, J=1)
# para-Ps
s110 = Ps(n=1, l=0, S=0, J=0)
# splitting
Latex('$\Delta E_{hfs} = %.5f$ GHz' % (1e-9 * (s131.energy(unit='J') -
s110.energy(unit='J')) / h))
"""
Explanation: hyperfine splitting of the ground state
Estimate the ground-state hyperfine splitting interval, according to
Richard A. Ferrell (1951) Phys. Rev. 84, 858
http://dx.doi.org/10.1103/PhysRev.84.858
This calculation does not include the Lamb shift/ radiative corrections.
End of explanation
"""
Latex('$\Delta E_{hfs} = %.5f \pm %.5f$ GHz'%(1e-9 * frequency_hfs, 1e-9 * frequency_hfs.uncertainty))
"""
Explanation: This compares to the experimental value of
M. W. Ritter, P. O. Egan, V. W. Hughes, and K. A. Woodle (1984) Phys. Rev. A 30, 1331
http://dx.doi.org/10.1103/PhysRevA.30.1331
End of explanation
"""
fig, ax = plt.subplots(figsize=(4, 5))
# n=2
ax.text(0.4, -Bohr.energy(2, unit='eV') + 1e-6, '$n=2$',horizontalalignment='center',
verticalalignment='bottom', fontsize=14)
# para
## x coordinates for plotting
para_state = np.array([2, 3])
para_connector = np.array([0, 1, 1.5, 2])
## S state
s210 = Ps(n=2, l=0, S=0, J=0)
## P state
p211 = Ps(n=2, l=1, S=0, J=1)
## plotting
for singlet in [s210, p211]:
    ax.plot(para_state, np.array([0,0]) + singlet.energy(unit='eV'), c='k', lw=5)
    ax.plot(para_connector, np.array([Bohr.energy(float('inf'), 2, unit='eV'),
                                      Bohr.energy(float('inf'), 2, unit='eV'),
                                      singlet.energy(unit='eV'),
                                      singlet.energy(unit='eV')]), c='k', lw=1)
    # label
    ax.text(3.1, singlet.energy(unit='eV'), singlet.tex(), verticalalignment='center', fontsize=16)
# ortho
## x coordinates for plotting
ortho_state = np.array([4, 5])
ortho_connector = np.array([0, 1, 1.5, 4])
## S state
s231 = Ps(n=2, l=0, S=1, J=1)
## P states
p232 = Ps(n=2, l=1, S=1, J=2)
p231 = Ps(n=2, l=1, S=1, J=1)
p230 = Ps(n=2, l=1, S=1, J=0)
## plotting
for triplet in [s231, p232, p231, p230]:
    ax.plot(ortho_state, np.array([0,0]) + triplet.energy(unit='eV'), c='k', lw=5)
    ax.plot(ortho_connector, np.array([Bohr.energy(float('inf'), 2, unit='eV'),
                                       Bohr.energy(float('inf'), 2, unit='eV'),
                                       triplet.energy(unit='eV'),
                                       triplet.energy(unit='eV')]), c='k', lw=1)
    # label
    ax.text(5.1, triplet.energy(unit='eV'), triplet.tex(), verticalalignment='center', fontsize=16)
# format
ax.set_ylim(-1.7008, -1.700675)
ax.set_yticks([])
ax.set_xlim(0, 5)
ax.set_xticks([2.5, 4.5])
ax.set_xticklabels(['para', 'ortho'], fontsize=14)
# spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.xaxis.set_tick_params(size=0)
ax.yaxis.set_tick_params(size=0)
ax.xaxis.tick_top()
plt.show()
# n=2, l=0, hyperfine splitting interval
Latex('$\Delta E_{hfs} = %.4f$ GHz' % (1e-9 * (s231.energy(unit='J') -
s210.energy(unit='J')) / h))
"""
Explanation: fine structure of $n=2$
Use Ferrel's equation to estimate the $n=2$ fine structure energy levels. This calculation is accurate to around 5 %. For the results of more complete bound state QED calculations see,
Richard Ley (2002) Appl. Surf. Sci. 194, 301
https://dx.doi.org/10.1016/S0169-4332(02)00139-3
End of explanation
"""
_2s = Ps(n=2, l=0, S=1, J=1)
_2p = Ps(n=2, l=1, S=1, J=2)
print(frequency(_2s, _2p), 'GHz')
"""
Explanation: This compares to Ley's (more rigorous) value of 25.424 GHz. And ...
End of explanation
"""
|
phoebe-project/phoebe2-docs | development/tutorials/rv_offset.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: Radial Velocity Offsets (rv_offset)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('atm', 'blackbody')
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.run_compute(model='without_offset')
"""
Explanation: Relevant Parameters
Radial velocity offsets allow for a per-component and per-dataset offset applied to the synthetic RVs.
First let's run a model without any offsets applied.
End of explanation
"""
print(b.filter(qualifier='rv_offset'))
b.set_value(qualifier='rv_offset', component='primary', value=25)
"""
Explanation: and now let's look at the rv_offset parameters and set an offset for the primary RV.
End of explanation
"""
b.run_compute(model='with_offset')
"""
Explanation: Now let's run another model, with the offset applied to the primary component.
End of explanation
"""
afig, mplfig = b.plot(legend=True, show=True)
"""
Explanation: Influence on Radial Velocities
End of explanation
"""
|
sjsrey/giddy | tools/gitcount.ipynb | bsd-3-clause | # get date of last tag
from subprocess import Popen, PIPE
x, err = Popen('git log -1 --tags --simplify-by-decoration --pretty="%ai"| cat', stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True).communicate()
start_date = x.split()[0].decode('utf-8')
start_date
# today's date
import datetime
release_date = str(datetime.datetime.today()).split()[0]
release_date
package_name = 'giddy'
# release_date = '2019-12-20'
# start_date = '2019-12-20'
"""
Explanation: PySAL Change Log Statistics
This notebook generates the summary statistics for a package.
It assumes you are running it from the tools directory at the top level of the package.
Change the values only in the next cell
End of explanation
"""
from __future__ import print_function
import os
import json
import re
import sys
import pandas
from datetime import datetime, timedelta
from time import sleep
from subprocess import check_output
try:
    from urllib import urlopen
except:
    from urllib.request import urlopen
import ssl
import yaml
context = ssl._create_unverified_context()
CWD = os.path.abspath(os.path.curdir)
CWD
since_date = '--since="{start}"'.format(start=start_date)
since_date
since = datetime.strptime(start_date+" 0:0:0", "%Y-%m-%d %H:%M:%S")
since
# get __version__
f = "../{package}/__init__.py".format(package=package_name)
with open(f, 'r') as initfile:
    exec(initfile.readline())
"""
Explanation: This notebook will generate a file in the current directory with the name "changelog_VERSION.md". You can edit and append this on front of the CHANGELOG file for the package release.
End of explanation
"""
cmd = ['git', 'log', '--oneline', since_date]
ncommits = len(check_output(cmd).splitlines())
ncommits
"""
Explanation: Total commits by subpackage
End of explanation
"""
identities = {'Levi John Wolf': ('ljwolf', 'Levi John Wolf'),
'Serge Rey': ('Serge Rey', 'Sergio Rey', 'sjsrey', 'serge'),
'Wei Kang': ('Wei Kang', 'weikang9009'),
'Dani Arribas-Bel': ('Dani Arribas-Bel', 'darribas')
}
def regularize_identity(string):
    string = string.decode()
    for name, aliases in identities.items():
        for alias in aliases:
            if alias in string:
                string = string.replace(alias, name)
    if len(string.split(' '))>1:
        string = string.title()
    return string.lstrip('* ')
author_cmd = ['git', 'log', '--format=* %aN', since_date]
from collections import Counter
ncommits = len(check_output(cmd).splitlines())
all_authors = check_output(author_cmd).splitlines()
counter = Counter([regularize_identity(author) for author in all_authors])
# global_counter += counter
# counters.update({'.'.join((package,subpackage)): counter})
unique_authors = sorted(set(all_authors))
unique_authors = counter.keys()
unique_authors
"""
Explanation: List Contributors
Some of our contributors have many aliases for the same identity. So, we've added a mapping to make sure that individuals are listed once (and only once).
End of explanation
"""
from datetime import datetime, timedelta
ISO8601 = "%Y-%m-%dT%H:%M:%SZ"
PER_PAGE = 100
element_pat = re.compile(r'<(.+?)>')
rel_pat = re.compile(r'rel=[\'"](\w+)[\'"]')
def parse_link_header(headers):
    link_s = headers.get('link', '')
    urls = element_pat.findall(link_s)
    rels = rel_pat.findall(link_s)
    d = {}
    for rel,url in zip(rels, urls):
        d[rel] = url
    return d
def get_paged_request(url):
    """get a full list, handling APIv3's paging"""
    results = []
    while url:
        #print("fetching %s" % url, file=sys.stderr)
        f = urlopen(url)
        results.extend(json.load(f))
        links = parse_link_header(f.headers)
        url = links.get('next')
    return results
def get_issues(project="pysal/giddy", state="closed", pulls=False):
"""Get a list of the issues from the Github API."""
which = 'pulls' if pulls else 'issues'
url = "https://api.github.com/repos/%s/%s?state=%s&per_page=%i" % (project, which, state, PER_PAGE)
return get_paged_request(url)
def _parse_datetime(s):
    """Parse dates in the format returned by the Github API."""
    if s:
        return datetime.strptime(s, ISO8601)
    else:
        return datetime.fromtimestamp(0)
def issues2dict(issues):
    """Convert a list of issues to a dict, keyed by issue number."""
    idict = {}
    for i in issues:
        idict[i['number']] = i
    return idict
def is_pull_request(issue):
    """Return True if the given issue is a pull request."""
    return 'pull_request_url' in issue
def issues_closed_since(period=timedelta(days=365), project="pysal/pysal", pulls=False):
    """Get all issues closed since a particular point in time. period
    can either be a datetime object, or a timedelta object. In the
    latter case, it is used as a time before the present."""
    which = 'pulls' if pulls else 'issues'
    if isinstance(period, timedelta):
        period = datetime.now() - period
    url = "https://api.github.com/repos/%s/%s?state=closed&sort=updated&since=%s&per_page=%i" % (project, which, period.strftime(ISO8601), PER_PAGE)
    allclosed = get_paged_request(url)
    # allclosed = get_issues(project=project, state='closed', pulls=pulls, since=period)
    filtered = [i for i in allclosed if _parse_datetime(i['closed_at']) > period]
    # exclude rejected PRs
    if pulls:
        filtered = [pr for pr in filtered if pr['merged_at']]
    return filtered
def sorted_by_field(issues, field='closed_at', reverse=False):
"""Return a list of issues sorted by closing date date."""
return sorted(issues, key = lambda i:i[field], reverse=reverse)
def report(issues, show_urls=False):
"""Summary report about a list of issues, printing number and title.
"""
# titles may have unicode in them, so we must encode everything below
if show_urls:
for i in issues:
role = 'ghpull' if 'merged_at' in i else 'ghissue'
print('* :%s:`%d`: %s' % (role, i['number'],
i['title'].encode('utf-8')))
else:
for i in issues:
print('* %d: %s' % (i['number'], i['title'].encode('utf-8')))
all_issues = {}
all_pulls = {}
total_commits = 0
#prj='pysal/libpysal'
prj = 'pysal/{package}'.format(package=package_name)
issues = issues_closed_since(since, project=prj,pulls=False)
pulls = issues_closed_since(since, project=prj,pulls=True)
issues = sorted_by_field(issues, reverse=True)
pulls = sorted_by_field(pulls, reverse=True)
n_issues, n_pulls = map(len, (issues, pulls))
n_total = n_issues + n_pulls
issue_listing = []
for issue in issues:
entry = "{title} (#{number})".format(title=issue['title'],number=issue['number'])
issue_listing.append(entry)
pull_listing = []
for pull in pulls:
entry = "{title} (#{number})".format(title=pull['title'],number=pull['number'])
pull_listing.append(entry)
pull_listing
message = "We closed a total of {total} issues (enhancements and bug fixes) through {pr} pull requests".format(total=n_total, pr=n_pulls)
message = "{msg}, since our last release on {previous}.".format(msg=message, previous=str(start_date))
message
message += "\n\n## Issues Closed\n"
print(message)
issues = "\n".join([" - "+issue for issue in issue_listing])
message += issues
message += "\n\n## Pull Requests\n"
pulls = "\n".join([" - "+pull for pull in pull_listing])
message += pulls
print(message)
people = "\n".join([" - "+person for person in unique_authors])
print(people)
message +="\n\nThe following individuals contributed to this release:\n\n{people}".format(people=people)
print(message)
head = "# Version {version} ({release_date})\n\n".format(version=__version__, release_date=release_date)
# new_content = head+message+"\n"
# print(new_content)
# #insert the new changes at the beginning of CHANGELOG.md
# with open("../CHANGELOG.md", 'r+') as file:
# content = file.read()
# file.seek(0, 0)
# file.write(new_content+ content)
# outfile = 'changelog_{version}.md'.format(version=__version__)
outfile = 'changelog.md'
with open(outfile, 'w') as of:
of.write(head+message)
"""
Explanation: Disaggregate by PR, Issue
End of explanation
"""
|
liganega/Gongsu-DataSci | previous/y2017/Wextra/GongSu26_Statistics_Hypothesis_Test_2.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
from scipy import stats
"""
Explanation: Source note: the material covered here was created with reference to the site below.
https://github.com/rouseguy/intro2stats
Hypothesis Testing
Main topics
Learn how to run hypothesis tests using the pandas module and data on wholesale cannabis prices traded in the 51 US states.
Main example
Comparing the difference in wholesale cannabis prices traded in California in 2014 and 2015
Test methods
t-test
Chi-square test
The six steps of hypothesis testing
Hypothesis testing follows the six steps below.
1) State the null hypothesis to be tested.
2) Choose the statistical method used to test the null hypothesis.
3) Set the rejection region.
* Usually the top or bottom 5%
4) Find the p-value for the test statistic.
5) Check whether the sample result falls within the rejection region.
6) Make a decision.
* Reject the null hypothesis if the p-value falls in the rejection region.
Main modules
In addition to numpy and pandas, we import the statistics module stats.
End of explanation
"""
from GongSu25_Statistics_Sampling_Distribution import *
"""
Explanation: Importing GongSu25
We also import the entire contents of GongSu25.
End of explanation
"""
weed_pd.head()
"""
Explanation: Note
Recall that the month and year columns have already been added to weed_pd.
End of explanation
"""
ca_2014_01 = weed_pd[(weed_pd.State=="California")\
& (weed_pd.year==2014)\
& (weed_pd.month==1)].HighQ
"""
Explanation: Example
Examining the difference between the mean wholesale prices of high-quality cannabis traded in California in January 2014 and January 2015
Extracting the January 2014 data
End of explanation
"""
weed_ca_jan2014 = np.array(ca_2014_01)
"""
Explanation: Convert the data type to an array.
Reason: the stats module works with arrays rather than DataFrames.
End of explanation
"""
weed_ca_jan2014.mean()
"""
Explanation: Mean trade price for January 2014
End of explanation
"""
ca_2015_01 = weed_pd[(weed_pd.State=="California")\
& (weed_pd.year==2015)\
& (weed_pd.month==1)].HighQ
weed_ca_jan2015 = np.array(ca_2015_01)
"""
Explanation: Extracting the January 2015 data
End of explanation
"""
weed_ca_jan2015.mean()
"""
Explanation: Mean trade price for January 2015
End of explanation
"""
weed_ca_jan2014.mean() - weed_ca_jan2015.mean()
"""
Explanation: Difference between the mean trade prices for January 2014 and January 2015
End of explanation
"""
stats.ttest_ind(weed_ca_jan2014, weed_ca_jan2015, equal_var=True)
"""
Explanation: t-test example
As we saw above, the mean wholesale price of high-quality cannabis traded in California in January 2014 was about 4.84 dollars higher than in January 2015.
Let's determine whether a difference of this size is significant.
That is, we need to decide whether the January 2015 wholesale price dropped markedly compared with a year earlier, or whether the difference lies within an acceptable margin of error.
Here we use a t-test to judge the significance of the difference in mean wholesale prices.
To do so, we state the null hypothesis as follows:
H0: the difference between the mean wholesale prices for January 2014 and January 2015 is not meaningful.
Now we determine the p-value with a t-test.
Note
A detailed explanation of the t-test is not covered here.
The p-value can be computed easily with the ttest_ind function in the stats module.
End of explanation
"""
weed_jan2014 = weed_pd[(weed_pd.year==2014) & (weed_pd.month==1)][["HighQN", "MedQN", "LowQN"]]
weed_jan2014.head()
"""
Explanation: Conclusion
By the result above, the p-value is effectively 0.
This means that a difference as large as the one between the January 2014 and January 2015 wholesale prices would almost never occur by chance.
Hence the null hypothesis is, in effect, a hypothesis that cannot be true.
In other words, we conclude that the change in wholesale price over the year is highly significant.
Chi-square ($\chi^2$) test example
Premise
Assume that the number of cannabis transactions made across the US in 2014 represents the typical annual record.
Question
Determine whether the number of cannabis transactions made across the US in 2015 is likewise in line with the typical record.
Fetching the 2014 record
We fetch the HighQN, MedQN, and LowQN information for 2014.
End of explanation
"""
weed_jan2015 = weed_pd[(weed_pd.year==2015) & (weed_pd.month==1)][["HighQN", "MedQN", "LowQN"]]
weed_jan2015.head()
"""
Explanation: Fetching the 2015 record
We fetch the HighQN, MedQN, and LowQN information for 2015.
End of explanation
"""
Expected = np.array(weed_jan2014.apply(sum, axis=0))
Expected
"""
Explanation: Assessing the difference with a chi-square test
The chi-square test is used to check whether differences in frequencies (counts) are significant.
Null hypothesis
H0: the total number of transactions in 2015 is similar to the typical record.
We now use the chi-square test to decide whether to reject the null hypothesis.
To apply the chi-square test, we must first compute the chi-square test statistic ($\chi^2$):
$$ \chi^2 = \sum (O - E)^2/E $$
Here E and O have the following meanings:
Expected frequency (E): the 2014 transaction counts (the typical record)
Observed frequency (O): the 2015 transaction counts
Total 2014 transaction counts
Note
Using the apply function
The axis keyword determines whether apply works column-wise or row-wise:
axis=0 applies column-wise
axis=1 applies row-wise
The code below computes the total transaction count for each grade (HighQN, MedQN, LowQN).
End of explanation
"""
Observed = np.array(weed_jan2015.apply(sum, axis=0))
Observed
"""
Explanation: Total 2015 transaction counts
We use the same approach.
End of explanation
"""
stats.chisquare(Observed, Expected)
"""
Explanation: Now, using the 2014 data as the expected frequencies and the 2015 data as the observed frequencies, we can compute the chi-square statistic $\chi^2$ and use it to obtain a p-value.
However, rather than walking through how the p-value is computed, here we obtain it with the chisquare function from the stats module.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/03_model_performance/labs/c_custom_keras_estimator.ipynb | apache-2.0 | import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
"""
Explanation: Custom Estimator with Keras
Learning Objectives
- Learn how to create custom estimator using tf.keras
Introduction
Up until now we've been limited in our model architectures to premade estimators. But what if we want more control over the model? We can use the popular Keras API to create a custom model. Keras is a high-level API to build and train deep learning models. It is user-friendly, modular and makes writing custom building blocks of Tensorflow code much easier.
Once we've built a Keras model, we then convert it to an estimator using tf.keras.estimator.model_to_estimator(). This gives us access to all the flexibility of Keras for creating deep learning models, along with the production readiness of the estimator framework!
End of explanation
"""
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0], [40.0], [-74.0], [40.7]]
def read_dataset(csv_path):
def parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename: tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = parse_row)
return dataset
def create_feature_keras_input(features, label):
features = tf.feature_column.input_layer(features = features, feature_columns = create_feature_columns())
return features, label
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features, label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
#3. Create single feature tensor for input to Keras Model
dataset = dataset.map(map_func = create_feature_keras_input)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features, label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
#3. Create single feature tensor for input to Keras Model
dataset = dataset.map(map_func = create_feature_keras_input)
return dataset
"""
Explanation: Train and Evaluate input functions
For the most part, we can use the same train and evaluation input functions that we had in previous labs. Note the function create_feature_keras_input below. We will use this to create the first layer of the model; it is called from both train_input_fn and eval_input_fn.
End of explanation
"""
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
def create_feature_columns():
# One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# Cross features to get combination of day and hour
fc_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
# Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
    fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
    fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
    fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
    fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
feature_columns = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_day_hr), # 168 columns
fc_bucketized_plat, # 16 + 1 = 17 columns
fc_bucketized_plon, # 16 + 1 = 17 columns
fc_bucketized_dlat, # 16 + 1 = 17 columns
fc_bucketized_dlon, # 16 + 1 = 17 columns
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"), # 1 column
tf.feature_column.numeric_column(key = "londiff"), # 1 column
tf.feature_column.numeric_column(key = "euclidean_dist") # 1 column
]
return feature_columns
"""
Explanation: Feature Engineering
We'll use the same engineered features that we had in previous labs.
End of explanation
"""
num_feature_columns = 168 + (16 + 1) * 4 + 3
print("num_feature_columns = {}".format(num_feature_columns))
"""
Explanation: Calculate the number of feature columns that will be input to our Keras model
End of explanation
"""
def create_keras_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape = (num_feature_columns,), name = "dense_input"))
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
def rmse(y_true, y_pred): # Root Mean Squared Error
return tf.sqrt(x = tf.reduce_mean(input_tensor = tf.square(x = y_pred - y_true)))
model.compile(
optimizer = tf.train.AdamOptimizer(),
loss = "mean_squared_error",
metrics = [rmse])
return model
"""
Explanation: Build Custom Keras Model
Now we can begin building our Keras model. Have a look at the guide here to see more explanation.
Exercise 1
Complete the code in the cell below to add a sequence of dense layers using Keras's Sequential API. Create a model that consists of six dense layers with relu as the activation function. Have a look at the documentation for tf.keras.layers.Dense to see which arguments to provide.
End of explanation
"""
# Create serving input function
def serving_input_fn():
feature_placeholders = {
"dayofweek": tf.placeholder(dtype = tf.int32, shape = [None]),
"hourofday": tf.placeholder(dtype = tf.int32, shape = [None]),
"pickuplon": tf.placeholder(dtype = tf.float32, shape = [None]),
"pickuplat": tf.placeholder(dtype = tf.float32, shape = [None]),
"dropofflon": tf.placeholder(dtype = tf.float32, shape = [None]),
"dropofflat": tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = {key: tensor for key, tensor in feature_placeholders.items()}
# Perform our feature engineering during inference as well
features, _ = create_feature_keras_input((add_engineered_features(features)), None)
return tf.estimator.export.ServingInputReceiver(features = {"dense_input": features}, receiver_tensors = feature_placeholders)
"""
Explanation: Serving input function
Once we've constructed our model in Keras, we next create the serving input function. This is also similar to what we have done in previous labs. Note that we use our create_feature_keras_input function again so that we perform our feature engineering during inference as well.
End of explanation
"""
def train_and_evaluate(output_dir):
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
estimator = tf.keras.estimator.model_to_estimator(
# TODO: Your code goes here
)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn(csv_path = "./taxi-train.csv"),
max_steps = 500)
exporter = tf.estimator.LatestExporter(name = 'exporter', serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn(csv_path = "./taxi-valid.csv"),
steps = None,
start_delay_secs = 10, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 10, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(
estimator = estimator,
train_spec = train_spec,
eval_spec = eval_spec)
%%time
OUTDIR = "taxi_trained"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR)
"""
Explanation: Train and Evaluate
To train our model, we can use train_and_evaluate as we have before. Note that we use tf.keras.estimator.model_to_estimator to create our estimator. It takes as arguments the compiled Keras model, the OUTDIR, and optionally a tf.estimator.RunConfig. Have a look at the documentation for tf.keras.estimator.model_to_estimator to make sure you understand how arguments are used.
Exercise 2
Complete the code below to create an estimator out of the Keras model we built above.
End of explanation
"""
|
Chipe1/aima-python | agents.ipynb | mit | from agents import *
from notebook import psource
"""
Explanation: Intelligent Agents
This notebook serves as supporting material for topics covered in Chapter 2 - Intelligent Agents from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from agents.py module. Let's start by importing everything from agents module.
End of explanation
"""
psource(Agent)
"""
Explanation: CONTENTS
Overview
Agent
Environment
Simple Agent and Environment
Agents in a 2-D Environment
Wumpus Environment
OVERVIEW
An agent, as defined in 2.1, is anything that can perceive its <b>environment</b> through sensors, and act upon that environment through actuators based on its <b>agent program</b>. This can be a dog, a robot, or even you. As long as you can perceive the environment and act on it, you are an agent. This notebook will explain how to implement a simple agent, create an environment, and implement a program that helps the agent act on the environment based on its percepts.
AGENT
Let us now see how we define an agent. Run the next cell to see how Agent is defined in agents module.
End of explanation
"""
psource(Environment)
"""
Explanation: The Agent has two methods.
* __init__(self, program=None): The constructor defines various attributes of the Agent. These include
* `alive`: which keeps track of whether the agent is alive or not
* `bump`: which tracks if the agent collides with an edge of the environment (for eg, a wall in a park)
* `holding`: which is a list containing the `Things` an agent is holding,
* `performance`: which evaluates the performance metrics of the agent
* `program`: which is the agent program and maps an agent's percepts to actions in the environment. If no implementation is provided, it defaults to asking the user to provide actions for each percept.
can_grab(self, thing): Is used when an environment contains things that an agent can grab and carry. By default, an agent can carry nothing.
ENVIRONMENT
Now, let us see how environments are defined. Running the next cell will display an implementation of the abstract Environment class.
End of explanation
"""
class BlindDog(Agent):
def eat(self, thing):
print("Dog: Ate food at {}.".format(self.location))
def drink(self, thing):
print("Dog: Drank water at {}.".format( self.location))
dog = BlindDog()
"""
Explanation: The Environment class has a lot of methods! But most of them are incredibly simple, so let's see the ones we'll be using in this notebook.
thing_classes(self): Returns a static array of Thing sub-classes that determine what things are allowed in the environment and what aren't
add_thing(self, thing, location=None): Adds a thing to the environment at location
run(self, steps): Runs an environment with the agent in it for a given number of steps.
is_done(self): Returns true if the objective of the agent and the environment has been completed
The next two functions must be implemented by each subclass of Environment for the agent to receive percepts and execute actions
percept(self, agent): Given an agent, this method returns a list of percepts that the agent sees at the current time
execute_action(self, agent, action): The environment reacts to an action performed by a given agent. The changes may result in agent experiencing new percepts or other elements reacting to agent input.
SIMPLE AGENT AND ENVIRONMENT
Let's begin by using the Agent class to create our first agent - a blind dog.
End of explanation
"""
print(dog.alive)
"""
Explanation: What we have just done is create a dog who can only feel what's in his location (since he's blind), and can eat or drink. Let's see if he's alive...
End of explanation
"""
class Food(Thing):
pass
class Water(Thing):
pass
class Park(Environment):
def percept(self, agent):
'''return a list of things that are in our agent's location'''
things = self.list_things_at(agent.location)
return things
def execute_action(self, agent, action):
'''changes the state of the environment based on what the agent does.'''
if action == "move down":
print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))
agent.movedown()
elif action == "eat":
items = self.list_things_at(agent.location, tclass=Food)
if len(items) != 0:
if agent.eat(items[0]): #Have the dog eat the first item
print('{} ate {} at location: {}'
.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
self.delete_thing(items[0]) #Delete it from the Park after.
elif action == "drink":
items = self.list_things_at(agent.location, tclass=Water)
if len(items) != 0:
if agent.drink(items[0]): #Have the dog drink the first item
print('{} drank {} at location: {}'
.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
self.delete_thing(items[0]) #Delete it from the Park after.
def is_done(self):
'''By default, we're done when we can't find a live agent,
        but to prevent killing our cute dog, we will stop the simulation ourselves - when there is no more food or water'''
no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)
dead_agents = not any(agent.is_alive() for agent in self.agents)
return dead_agents or no_edibles
"""
Explanation: This is our dog. How cool is he? Well, he's hungry and needs to go search for food. For him to do this, we need to give him a program. But before that, let's create a park for our dog to play in.
ENVIRONMENT - Park
A park is an example of an environment because our dog can perceive and act upon it. The <b>Environment</b> class is an abstract class, so we will have to create our own subclass from it before we can use it.
End of explanation
"""
class BlindDog(Agent):
location = 1
def movedown(self):
self.location += 1
def eat(self, thing):
'''returns True upon success or False otherwise'''
if isinstance(thing, Food):
return True
return False
def drink(self, thing):
''' returns True upon success or False otherwise'''
if isinstance(thing, Water):
return True
return False
"""
Explanation: PROGRAM - BlindDog
Now that we have a <b>Park</b> Class, we re-implement our <b>BlindDog</b> to be able to move down and eat food or drink water only if it is present.
End of explanation
"""
def program(percepts):
'''Returns an action based on the dog's percepts'''
for p in percepts:
if isinstance(p, Food):
return 'eat'
elif isinstance(p, Water):
return 'drink'
return 'move down'
"""
Explanation: Now it's time to implement a <b>program</b> module for our dog. A program controls how the dog acts upon its environment. Our program will be very simple, and is shown in the table below.
<table>
<tr>
<td><b>Percept:</b> </td>
<td>Feel Food </td>
<td>Feel Water</td>
<td>Feel Nothing</td>
</tr>
<tr>
<td><b>Action:</b> </td>
<td>eat</td>
<td>drink</td>
<td>move down</td>
</tr>
</table>
End of explanation
"""
park = Park()
dog = BlindDog(program)
dogfood = Food()
water = Water()
park.add_thing(dog, 1)
park.add_thing(dogfood, 5)
park.add_thing(water, 7)
park.run(5)
"""
Explanation: Let's now run our simulation by creating a park with some food, water, and our dog.
End of explanation
"""
park.run(5)
"""
Explanation: Notice that the dog moved from location 1 to 4, over 4 steps, and ate food at location 5 in the 5th step.
Let's continue this simulation for 5 more steps.
End of explanation
"""
park.add_thing(water, 15)
park.run(10)
"""
Explanation: Perfect! Note how the simulation stopped after the dog drank the water - exhausting all the food and water ends our simulation, as we had defined before. Let's add some more water and see if our dog can reach it.
End of explanation
"""
class Park2D(GraphicEnvironment):
def percept(self, agent):
'''return a list of things that are in our agent's location'''
things = self.list_things_at(agent.location)
return things
def execute_action(self, agent, action):
'''changes the state of the environment based on what the agent does.'''
if action == "move down":
print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))
agent.movedown()
elif action == "eat":
items = self.list_things_at(agent.location, tclass=Food)
if len(items) != 0:
if agent.eat(items[0]): #Have the dog eat the first item
print('{} ate {} at location: {}'
.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
self.delete_thing(items[0]) #Delete it from the Park after.
elif action == "drink":
items = self.list_things_at(agent.location, tclass=Water)
if len(items) != 0:
if agent.drink(items[0]): #Have the dog drink the first item
print('{} drank {} at location: {}'
.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
self.delete_thing(items[0]) #Delete it from the Park after.
def is_done(self):
'''By default, we're done when we can't find a live agent,
        but to prevent killing our cute dog, we will stop the simulation ourselves - when there is no more food or water'''
no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)
dead_agents = not any(agent.is_alive() for agent in self.agents)
return dead_agents or no_edibles
class BlindDog(Agent):
location = [0,1] # change location to a 2d value
direction = Direction("down") # variable to store the direction our dog is facing
def movedown(self):
self.location[1] += 1
def eat(self, thing):
'''returns True upon success or False otherwise'''
if isinstance(thing, Food):
return True
return False
def drink(self, thing):
''' returns True upon success or False otherwise'''
if isinstance(thing, Water):
return True
return False
"""
Explanation: Above, we learnt to implement an agent, its program, and an environment on which it acts. However, this was a very simple case. Let's try to add complexity to it by creating a 2-Dimensional environment!
AGENTS IN A 2D ENVIRONMENT
To avoid reading through so many logs of what our dog did, we add a bit of graphics while making our Park 2D. To do so, we will need to make it a subclass of <b>GraphicEnvironment</b> instead of Environment. A park implemented by subclassing the <b>GraphicEnvironment</b> class gains these extra properties:
Our park is indexed in the 4th quadrant of the X-Y plane.
Every time we create a park subclassing <b>GraphicEnvironment</b>, we need to define the colors of all the things we plan to put into the park. The colors are defined in typical <b>RGB digital 8-bit format</b>, common across the web.
Fences are added automatically to all parks so that our dog does not go outside the park's boundary - it just isn't safe for blind dogs to be outside the park by themselves! <b>GraphicEnvironment</b> provides an is_inbounds function to check if our dog tries to leave the park.
First let us try to upgrade our 1-dimensional Park environment by just replacing its superclass by GraphicEnvironment.
End of explanation
"""
park = Park2D(5,20, color={'BlindDog': (200,0,0), 'Water': (0, 200, 200), 'Food': (230, 115, 40)}) # park width is set to 5, and height to 20
dog = BlindDog(program)
dogfood = Food()
water = Water()
park.add_thing(dog, [0,1])
park.add_thing(dogfood, [0,5])
park.add_thing(water, [0,7])
morewater = Water()
park.add_thing(morewater, [0,15])
print("BlindDog starts at (1,1) facing downwards, lets see if he can find any food!")
park.run(20)
"""
Explanation: Now let's test this new park with our same dog, food and water. We color our dog with a nice red and mark food and water with orange and blue respectively.
End of explanation
"""
import random
class EnergeticBlindDog(Agent):
location = [0,1]
direction = Direction("down")
def moveforward(self, success=True):
'''moveforward possible only if success (i.e. valid destination location)'''
if not success:
return
if self.direction.direction == Direction.R:
self.location[0] += 1
elif self.direction.direction == Direction.L:
self.location[0] -= 1
elif self.direction.direction == Direction.D:
self.location[1] += 1
elif self.direction.direction == Direction.U:
self.location[1] -= 1
def turn(self, d):
self.direction = self.direction + d
def eat(self, thing):
'''returns True upon success or False otherwise'''
if isinstance(thing, Food):
return True
return False
def drink(self, thing):
''' returns True upon success or False otherwise'''
if isinstance(thing, Water):
return True
return False
def program(percepts):
    '''Returns an action based on its percepts'''
for p in percepts: # first eat or drink - you're a dog!
if isinstance(p, Food):
return 'eat'
elif isinstance(p, Water):
return 'drink'
        if isinstance(p, Bump): # then check if you are at an edge and have to turn
            choice = random.choice((1, 2))
        else:
            choice = random.choice((1, 2, 3, 4)) # 1-right, 2-left, others-forward
if choice == 1:
return 'turnright'
elif choice == 2:
return 'turnleft'
else:
return 'moveforward'
"""
Explanation: Adding some graphics was a good idea! We immediately see that the code works, but our blind dog doesn't make any use of the 2 dimensional space available to him. Let's make our dog more energetic so that he turns and moves forward, instead of always moving down. In doing so, we'll also need to make some changes to our environment to be able to handle this extra motion.
PROGRAM - EnergeticBlindDog
Let's make our dog turn or move forwards at random - except when he's at the edge of our park - in which case we make him change his direction explicitly by turning to avoid trying to leave the park. However, our dog is blind so he wouldn't know which way to turn - he'd just have to try arbitrarily.
<table>
<tr>
<td><b>Percept:</b> </td>
<td>Feel Food </td>
<td>Feel Water</td>
<td>Feel Nothing</td>
</tr>
<tr>
<td><b>Action:</b> </td>
<td>eat</td>
<td>drink</td>
<td>
<table>
<tr>
<td><b>Remember being at Edge : </b></td>
<td>At Edge</td>
<td>Not at Edge</td>
</tr>
<tr>
<td><b>Action : </b></td>
<td>Turn Left / Turn Right <br> ( 50% - 50% chance )</td>
<td>Turn Left / Turn Right / Move Forward <br> ( 25% - 25% - 50% chance )</td>
</tr>
</table>
</td>
</tr>
</table>
End of explanation
"""
class Park2D(GraphicEnvironment):
def percept(self, agent):
'''return a list of things that are in our agent's location'''
things = self.list_things_at(agent.location)
loc = copy.deepcopy(agent.location) # find out the target location
#Check if agent is about to bump into a wall
if agent.direction.direction == Direction.R:
loc[0] += 1
elif agent.direction.direction == Direction.L:
loc[0] -= 1
elif agent.direction.direction == Direction.D:
loc[1] += 1
elif agent.direction.direction == Direction.U:
loc[1] -= 1
if not self.is_inbounds(loc):
things.append(Bump())
return things
def execute_action(self, agent, action):
'''changes the state of the environment based on what the agent does.'''
if action == 'turnright':
print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))
agent.turn(Direction.R)
elif action == 'turnleft':
print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))
agent.turn(Direction.L)
elif action == 'moveforward':
print('{} decided to move {}wards at location: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location))
agent.moveforward()
elif action == "eat":
items = self.list_things_at(agent.location, tclass=Food)
if len(items) != 0:
if agent.eat(items[0]):
print('{} ate {} at location: {}'
.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
self.delete_thing(items[0])
elif action == "drink":
items = self.list_things_at(agent.location, tclass=Water)
if len(items) != 0:
if agent.drink(items[0]):
print('{} drank {} at location: {}'
.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
self.delete_thing(items[0])
def is_done(self):
'''By default, we're done when we can't find a live agent,
but to prevent killing our cute dog, we will stop before that - when there is no more food or water'''
no_edibles = not any(isinstance(thing, Food) or isinstance(thing, Water) for thing in self.things)
dead_agents = not any(agent.is_alive() for agent in self.agents)
return dead_agents or no_edibles
"""
Explanation: ENVIRONMENT - Park2D
We also need to modify our park accordingly, in order to be able to handle all the new actions our dog wishes to execute. Additionally, we'll need to prevent our dog from moving to locations beyond our park boundary - it just isn't safe for blind dogs to be outside the park by themselves.
End of explanation
"""
park = Park2D(5,5, color={'EnergeticBlindDog': (200,0,0), 'Water': (0, 200, 200), 'Food': (230, 115, 40)})
dog = EnergeticBlindDog(program)
dogfood = Food()
water = Water()
park.add_thing(dog, [0,0])
park.add_thing(dogfood, [1,2])
park.add_thing(water, [0,1])
morewater = Water()
morefood = Food()
park.add_thing(morewater, [2,4])
park.add_thing(morefood, [4,3])
print("dog started at [0,0], facing down. Let's see if he found any food or water!")
park.run(20)
"""
Explanation: Now that our park is ready for the 2D motion of our energetic dog, lets test it!
End of explanation
"""
from ipythonblocks import BlockGrid
from agents import *
color = {"Breeze": (225, 225, 225),
"Pit": (0,0,0),
"Gold": (253, 208, 23),
"Glitter": (253, 208, 23),
"Wumpus": (43, 27, 23),
"Stench": (128, 128, 128),
"Explorer": (0, 0, 255),
"Wall": (44, 53, 57)
}
def program(percepts):
'''Returns an action based on it's percepts'''
print(percepts)
return input()
w = WumpusEnvironment(program, 7, 7)
grid = BlockGrid(w.width, w.height, fill=(123, 234, 123))
def draw_grid(world):
global grid
grid[:] = (123, 234, 123)
for x in range(0, len(world)):
for y in range(0, len(world[x])):
if len(world[x][y]):
grid[y, x] = color[world[x][y][-1].__class__.__name__]
def step():
global grid, w
draw_grid(w.get_world())
grid.show()
w.step()
step()
"""
Explanation: Wumpus Environment
End of explanation
"""
|
atavory/ibex | examples/movielens_nmf.ipynb | bsd-3-clause | import os
from sklearn import base
import pandas as pd
import scipy as sp
import seaborn as sns
sns.set_style('whitegrid')
sns.despine()
import ibex
from ibex.sklearn import model_selection as pd_model_selection
from ibex.sklearn import decomposition as pd_decomposition
from ibex.sklearn import ensemble as pd_ensemble
%pylab inline
ratings = pd.read_csv(
'movielens_data/ml-100k/u.data',
sep='\t',
header=None,
names=['user_id', 'item_id', 'rating', 'timestamp'])
features = ['user_id', 'item_id']
ratings[features + ['rating']].head()
"""
Explanation: Nonnegative Matrix Factorization In The Movielens Dataset
This example continues illustrating using pandas-munging capabilities in estimators building features that draw from several rows, this time using NMF (nonnegative matrix factorization). We will use a single table from the Movielens dataset (F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS)).
Loading The Data
In this example too, we'll only use the dataset table describing the ratings themselves. I.e., each row is an instance of a single rating given by a specific user to a specific movie.
End of explanation
"""
UI = pd.pivot_table(ratings, values='rating', index='user_id', columns ='item_id')
UI
"""
Explanation: Munging NMF With Pandas
In Simple Row-Aggregating Features In The Movielens Dataset we looked at direct attributes obtainable from the rankings: the average user and item ranking. Here we'll use Pandas to bring the dataset to a form where we can find latent factors through NMF.
First we pivot the table so that we have a UI matrix of the users as rows, the items as columns, and the ratings as the values:
End of explanation
"""
d = pd_decomposition.NMF(n_components=20)
U = d.fit_transform(UI.fillna(0))
I = d.components_
"""
Explanation: We now use NMF for decomposition, and then find the user latent factors in U and item latent factors in I:
End of explanation
"""
U.head()
I.head()
"""
Explanation: Note that the Ibex version of NMF sets the indexes and columns of U and I appropriately.
End of explanation
"""
ratings.head()
rating_comps = pd.merge(
ratings,
U,
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
I.T,
left_on='item_id',
right_index=True,
how='left')
rating_comps.head()
"""
Explanation: Pandas makes it easy to merge the user and item latent factors to the users and items, respectively.
End of explanation
"""
rating_comps = pd.merge(
rating_comps,
ratings.groupby(ratings.user_id).size().to_frame().rename(columns={0: 'user_id_count'}),
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
ratings.groupby(ratings.item_id).size().to_frame().rename(columns={0: 'item_id_count'}),
left_on='item_id',
right_index=True,
how='left')
prd_features = [c for c in rating_comps if 'comp_' in c] + ['user_id_count', 'item_id_count']
rating_comps.head()
"""
Explanation: Let's also merge into the results the number of occurrences of the users and items, respectively.
End of explanation
"""
prd = pd_ensemble.RandomForestRegressor().fit(rating_comps[prd_features], ratings.rating)
prd.score(rating_comps[prd_features], ratings.rating)
"""
Explanation: We now have a dataframe of latent variables. Let's build a random forest regressor, and use it on this dataframe.
End of explanation
"""
prd.feature_importances_.to_frame().plot(kind='barh');
"""
Explanation: Finally, let's check the feature importances.
End of explanation
"""
class RatingsFactorizer(base.BaseEstimator, base.TransformerMixin, ibex.FrameMixin):
def fit(self, X, y):
X = pd.concat([X[['user_id', 'item_id']], y], axis=1)
X.columns = ['user_id', 'item_id', 'rating']
self._user_id_count = X.groupby(X.user_id).size().to_frame().rename(columns={0: 'user_id_count'})
self._item_id_count = X.groupby(X.item_id).size().to_frame().rename(columns={0: 'item_id_count'})
UI = pd.pivot_table(X, values='rating', index='user_id', columns='item_id')
d = pd_decomposition.NMF(n_components=10)
self._U = d.fit_transform(UI.fillna(0))
self._I = d.components_
return self
def transform(self, X):
rating_comps = pd.merge(
X[['user_id', 'item_id']],
self._U,
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
self._I.T,
left_on='item_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
self._user_id_count,
left_on='user_id',
right_index=True,
how='left')
rating_comps = pd.merge(
rating_comps,
self._item_id_count,
left_on='item_id',
right_index=True,
how='left')
prd_features = [c for c in rating_comps if 'comp_' in c] + ['user_id_count', 'item_id_count']
return rating_comps[prd_features].fillna(0)
"""
Explanation: Building A Pandas-Munging Estimator
We'll now build a Scikit-Learn / Pandas step doing the above.
End of explanation
"""
prd = RatingsFactorizer() | pd_ensemble.RandomForestRegressor()
hist(
pd_model_selection.cross_val_score(
prd,
ratings[features],
ratings.rating,
cv=20,
n_jobs=-1),
color='grey');
xlabel('CV Score')
ylabel('Num Occurrences')
figtext(
0,
-0.1,
'Histogram of cross-validated scores');
"""
Explanation: We can now use cross validation to assess this scheme.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/47923e53e0be940f05f054346a1ec113/elekta_epochs.ipynb | bsd-3-clause | # Author: Jussi Nurminen (jnu@iki.fi)
#
# License: BSD-3-Clause
import mne
import os
from mne.datasets import multimodal
fname_raw = os.path.join(multimodal.data_path(), 'multimodal_raw.fif')
print(__doc__)
"""
Explanation: Getting averaging info from .fif files
Parse averaging information defined in Elekta Vectorview/TRIUX DACQ (data
acquisition). Extract and average epochs accordingly. Modify some
averaging parameters and get epochs.
End of explanation
"""
raw = mne.io.read_raw_fif(fname_raw)
"""
Explanation: Read raw file
End of explanation
"""
print(raw.acqparser)
"""
Explanation: Check DACQ defined averaging categories and other info
End of explanation
"""
cond = raw.acqparser.get_condition(raw, 'Auditory right')
epochs = mne.Epochs(raw, **cond)
epochs.average().plot_topo(background_color='w')
"""
Explanation: Extract epochs corresponding to a category
End of explanation
"""
evokeds = []
for cat in raw.acqparser.categories:
cond = raw.acqparser.get_condition(raw, cat)
# copy (supported) rejection parameters from DACQ settings
epochs = mne.Epochs(raw, reject=raw.acqparser.reject,
flat=raw.acqparser.flat, **cond)
evoked = epochs.average()
evoked.comment = cat['comment']
evokeds.append(evoked)
# save all averages to an evoked fiff file
# fname_out = 'multimodal-ave.fif'
# mne.write_evokeds(fname_out, evokeds)
"""
Explanation: Get epochs from all conditions, average
End of explanation
"""
newcat = dict()
newcat['comment'] = 'Visual lower left, longer epochs'
newcat['event'] = 3 # reference event
newcat['start'] = -.2 # epoch start rel. to ref. event (in seconds)
newcat['end'] = .7 # epoch end
newcat['reqevent'] = 0 # additional required event; 0 if none
newcat['reqwithin'] = .5 # ...required within .5 sec (before or after)
newcat['reqwhen'] = 2 # ...required before (1) or after (2) ref. event
newcat['index'] = 9 # can be set freely
cond = raw.acqparser.get_condition(raw, newcat)
epochs = mne.Epochs(raw, reject=raw.acqparser.reject,
flat=raw.acqparser.flat, **cond)
epochs.average().plot(time_unit='s')
"""
Explanation: Make a new averaging category
End of explanation
"""
|
civisanalytics/muffnn | examples/mlp_prediction_gradient_digits.ipynb | bsd-3-clause | import base64
import io
import logging
from IPython.display import HTML, display
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import muffnn
from sklearn.datasets import load_digits
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
import tensorflow as tf
# Ignore an irrelevant tensorflow deprecation warning triggered below.
logging.getLogger('tensorflow').setLevel(logging.CRITICAL)
def image_html(x, cmap, vmin=None, vmax=None):
"""Plot an image represented by the array x."""
plt.figure(figsize=(1, 1))
img = plt.imshow(np.reshape(x, (8, 8)), interpolation='nearest', vmin=vmin, vmax=vmax)
img.set_cmap(cmap)
plt.axis('off')
buf = io.BytesIO()
plt.savefig(buf)
buf.seek(0)
image_bytes = buf.read()
plt.close()
return ('<img style="display:inline" src="data:image/png;base64,%s" />' %
base64.encodebytes(image_bytes).decode('ascii'))
def show_images(X, cmap, vmin=None, vmax=None):
display(HTML(''.join([image_html(x, cmap, vmin=vmin, vmax=vmax) for x in X])))
"""
Explanation: Prediction gradient-based model interpretation
This notebook is an example of investigating how different input features (a.k.a. predictors, independent variables) affect neural network predictions, to help understand and explain how a model works.
This sort of model interpretation can be useful for various purposes:
to catch bugs (e.g., one might notice a strongly-weighted feature that was accidentally included in a model)
to explain a model, or its predictions on individual examples, to stakeholders
to help to avoid biases or other undesirable behavior (e.g., one might notice that the model is using demographics in undesirable ways when making predictions)
Some examples of similar methods for other types of models include computing feature importance for tree ensembles, or examining coefficients of a linear model. It is worth noting that all of these methods have limitations, especially for more complex types of models such as tree ensembles and neural networks.
We'll use the toy "digits" dataset available in scikit-learn. Obviously, an MLP is not a great choice for a computer vision task, but the simple visual patterns in the digits data make for a decent demonstration.
We'll use muffnn.MLPClassifier.prediction_gradient, which can be used for the "gradient x input" discussed by, e.g., Shrikumar et al., ICML 2016. This method computes the gradient of the output (the prediction) with respect to the input vector (e.g., Simonyan et al., ICLR 2014) and then multiplies by the input vector to discount less-active features. Prediction gradient values are essentially coefficients for a local linear model around each example, and in that way, it's related to LIME, SHAP, and other methods.
Note: Prediction gradients are sometimes referred to as sensitivity analysis.
Setup
We'll first import various modules and classes we need and define up some image display helper functions used for later visualizations.
End of explanation
"""
# Load the data
data = load_digits()
# Normalize X to 0-1 range.
X, y = data.data / 16, data.target
# Define a monochromatic colormap for images with [0, 1] values.
mono_cmap = mpl.colors.LinearSegmentedColormap.from_list('foo', [(1,1,1), (0, 0, 0)])
# Define a blue to gray to red colormap for images with [-1, 1] values.
dual_cmap = mpl.colors.LinearSegmentedColormap.from_list('foo', [(1.0, 0.2, 0.2),(0.5, 0.5, 0.5), (0.2,0.2,1.0)])
# Show a sample.
sample_ind = np.random.RandomState(42).choice(np.arange(X.shape[0]), 30, replace=False)
show_images(X[sample_ind,:], mono_cmap)
"""
Explanation: Load the data
Get the digits dataset and show a sample of it.
End of explanation
"""
# Train and cross-validate a very, very simple model.
params = {
'keep_prob': [1.0],
'hidden_units': [[256]]
}
gs = GridSearchCV(muffnn.MLPClassifier(batch_size=16, n_epochs=50, random_state=42),
params)
gs.fit(X, y)
mlp = gs.best_estimator_
print(mlp)
print("accuracy:", gs.best_score_)
"""
Explanation: Train a multilayer perceptron
Next, we'll train a simple MLP to predict the digit class and print held-out accuracy. The 10 labels (0-9) are balanced, so majority class accuracy would be 10%.
End of explanation
"""
pg_vals = mlp.prediction_gradient(X)
pg_vals.shape
"""
Explanation: Now we can compute prediction gradients for the data. We'll just use in-sample data for simplicity. In practice, one might want to do this with held-out data, depending on the task.
The output will be a 3-dimensional tensor with the gradient of each class (dimension 1) with respect to each input (dimension 2) for each example (dimension 0).
End of explanation
"""
# Recall from above that axis/dimension 0 is for example number, so we're summing over examples here.
vmax_abs = np.abs(pg_vals.sum(axis=0)).max() # adjust color scale
show_images(pg_vals.sum(axis=0), dual_cmap, vmin=-vmax_abs, vmax=vmax_abs)
"""
Explanation: Aggregate prediction gradients
Let's try to get an idea of which inputs (pixels) the model is associating with which output classes (digits 0, 1, 2, ... 9).
We'll compare aggregating gradients only to aggregating gradient x input.
For each, we'll plot 10 images, one for each class (digit) in increasing order from left to right, with blue colors indicating that a pixel has a positive mean gradient for the given class, red indicating negative, and gray indicating that pixels have little effect on predictions.
Gradient Only
First, we'll just average across the examples for each class. We can see some patterns related to the typical shapes of digits (e.g., the circle shape for "0").
However, note how a lot of the pixels around the edges, which are mostly just black in the data, are fairly red or blue.
End of explanation
"""
# Multiply the prediction gradients ndarray by the input features,
# using np.newaxis to add an axis for classes to broadcast over.
# pg_vals is of (n_examples, n_classes, n_inputs).
# X is of shape (n_examples, n_inputs).
# Then, take the mean across examples.
gi_vals_mean = (pg_vals * X[:, np.newaxis, :]).mean(axis=0)
# define a sensible color scale, then plot
vabs_max = np.max(np.abs(gi_vals_mean))
show_images(gi_vals_mean, dual_cmap, vmin=-vabs_max, vmax=vabs_max)
"""
Explanation: Gradient x Input
Multiplying by the input (the "gradient x input" method), as below, will focus the outputs more on the effects of frequently active features.
Note how red pixels appear for areas that, when absent, distinguish the digit from other digits. For example, the image for "6" (4th from right) has red pixels in the upper-right. If one takes a "6" and adds pixels to that area, it would start to look like an "8". Similarly, the image for "0" has red in the center, indicating that the inactivity of those pixels distinguishes "0" from other digits (e.g., "8").
End of explanation
"""
show_images(np.abs(gi_vals_mean),
mono_cmap, vmin=0, vmax=vabs_max)
"""
Explanation: In general, distinguishing positive from negative effects seems useful, but one can also take the absolute value.
End of explanation
"""
# Compute the sum of absolute values across all classes.
# Then, add an axis with np.newaxis since that expects an array of (n_examples, n_inputs).
show_images(np.abs(gi_vals_mean).sum(axis=0)[np.newaxis, :],
mono_cmap)
"""
Explanation: One can also average the results across classes, though this doesn't seem particularly useful for the digits case.
End of explanation
"""
# Sample some data and compute predictions, probabilities, and gradients for plotting.
x_sample = X[sample_ind, :]
yhat_sample = gs.predict(x_sample)
yhat_proba_sample = gs.predict_proba(x_sample)
pg_sample = mlp.prediction_gradient(x_sample)
# color scale for plots
vmax_abs = np.abs(pg_sample * x_sample[:, np.newaxis, :]).max()
for i, (x, yhat, yprob, pg) in enumerate(zip(x_sample, yhat_sample, yhat_proba_sample, pg_sample)):
print("=" * 30)
print("yhat=%d %s (example %d)" %
(yhat, '[' + ' '.join(['%.3f' % v for v in yprob]) + ']', i))
# Note: np.newaxis is used just because show_images expects a (n_examples, n_inputs) array
# but we're only plotting one image at a time.
show_images(np.reshape(x, (8, 8))[np.newaxis, :], mono_cmap)
show_images(pg[yhat][np.newaxis, :] * x, dual_cmap, vmin=-vmax_abs, vmax=vmax_abs)
"""
Explanation: Analyzing predictions for individual examples with gradient x input
In addition to aggregating across examples to get an idea of the overall model, one can also investigate individual predictions.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/robust_models_0.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
"""
Explanation: Robust Linear Models
End of explanation
"""
data = sm.datasets.stackloss.load()
data.exog = sm.add_constant(data.exog)
"""
Explanation: Estimation
Load data:
End of explanation
"""
huber_t = sm.RLM(data.endog, data.exog, M=sm.robust.norms.HuberT())
hub_results = huber_t.fit()
print(hub_results.params)
print(hub_results.bse)
print(
hub_results.summary(
yname="y", xname=["var_%d" % i for i in range(len(hub_results.params))]
)
)
"""
Explanation: Huber's T norm with the (default) median absolute deviation scaling
End of explanation
"""
hub_results2 = huber_t.fit(cov="H2")
print(hub_results2.params)
print(hub_results2.bse)
"""
Explanation: Huber's T norm with 'H2' covariance matrix
End of explanation
"""
andrew_mod = sm.RLM(data.endog, data.exog, M=sm.robust.norms.AndrewWave())
andrew_results = andrew_mod.fit(scale_est=sm.robust.scale.HuberScale(), cov="H3")
print("Parameters: ", andrew_results.params)
"""
Explanation: Andrew's Wave norm with Huber's Proposal 2 scaling and 'H3' covariance matrix
End of explanation
"""
nsample = 50
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, (x1 - 5) ** 2))
X = sm.add_constant(X)
sig = 0.3 # smaller error variance makes OLS<->RLM contrast bigger
beta = [5, 0.5, -0.0]
y_true2 = np.dot(X, beta)
y2 = y_true2 + sig * 1.0 * np.random.normal(size=nsample)
y2[[39, 41, 43, 45, 48]] -= 5 # add some outliers (10% of nsample)
"""
Explanation: See help(sm.RLM.fit) for more options and module sm.robust.scale for scale options
Comparing OLS and RLM
Artificial data with outliers:
End of explanation
"""
res = sm.OLS(y2, X).fit()
print(res.params)
print(res.bse)
print(res.predict())
"""
Explanation: Example 1: quadratic function with linear truth
Note that the quadratic term in OLS regression will capture outlier effects.
End of explanation
"""
resrlm = sm.RLM(y2, X).fit()
print(resrlm.params)
print(resrlm.bse)
"""
Explanation: Estimate RLM:
End of explanation
"""
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(x1, y2, "o", label="data")
ax.plot(x1, y_true2, "b-", label="True")
pred_ols = res.get_prediction()
iv_l = pred_ols.summary_frame()["obs_ci_lower"]
iv_u = pred_ols.summary_frame()["obs_ci_upper"]
ax.plot(x1, res.fittedvalues, "r-", label="OLS")
ax.plot(x1, iv_u, "r--")
ax.plot(x1, iv_l, "r--")
ax.plot(x1, resrlm.fittedvalues, "g.-", label="RLM")
ax.legend(loc="best")
"""
Explanation: Draw a plot to compare OLS estimates to the robust estimates:
End of explanation
"""
X2 = X[:, [0, 1]]
res2 = sm.OLS(y2, X2).fit()
print(res2.params)
print(res2.bse)
"""
Explanation: Example 2: linear function with linear truth
Fit a new OLS model using only the linear term and the constant:
End of explanation
"""
resrlm2 = sm.RLM(y2, X2).fit()
print(resrlm2.params)
print(resrlm2.bse)
"""
Explanation: Estimate RLM:
End of explanation
"""
pred_ols = res2.get_prediction()
iv_l = pred_ols.summary_frame()["obs_ci_lower"]
iv_u = pred_ols.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x1, y2, "o", label="data")
ax.plot(x1, y_true2, "b-", label="True")
ax.plot(x1, res2.fittedvalues, "r-", label="OLS")
ax.plot(x1, iv_u, "r--")
ax.plot(x1, iv_l, "r--")
ax.plot(x1, resrlm2.fittedvalues, "g.-", label="RLM")
legend = ax.legend(loc="best")
"""
Explanation: Draw a plot to compare OLS estimates to the robust estimates:
End of explanation
"""
|
astroumd/GradMap | notebooks/Lectures2019/Lecture4/Lecture4-2BodyProblem2019-Student.ipynb | gpl-3.0 | #Physical Constants (SI units)
G=6.67e-11 #Universal Gravitational constant in m^3 per kg per s^2
AU=1.5e11 #Astronomical Unit in meters = Distance between sun and earth
daysec=24.0*60*60 #seconds in a day
"""
Explanation: Introduction to numerical simulations: The 2 Body Problem
Many problems in statistical physics and astrophysics require solving systems of many particles at once (sometimes on the order of thousands or more!) This can't be done with the traditional pen and paper techniques you would encounter in a physics class. Instead, we must implement numerical solutions to these problems.
Today, you will create your own numerical simulation for a simple problem that is already solvable by pen and paper: the 2 body problem in 2D. In this problem, we will describe the motion between two particles that share a force between them (such as Gravity). We'll design the simulation from an astronomer's mindset with astronomical units in mind. This simulation will be used to confirm the general motion of the earth around the Sun, and later will be used to predict the motion between two stars within relatively close range.
<br>
<br>
<br>
We will guide you through the physics and math required to create this simulation.
First, a brief review of the kinematic equations (remembering Order of Operations or PEMDAS, and that values can be positive or negative depending on the reference frame):
new time = old time + time change ($t = t_0 + \Delta t$)
new position = old position + velocity x time change ($x = x_0 + v \times \Delta t$)
new velocity = old velocity + acceleration x time change ($v = v_0 + a \times \Delta t$)
The problem here is designed to use the knowledge of scientific python you have been developing this week.
As with any code in Python, the first thing we need to do is import the libraries we need. Go ahead and import Numpy and Pyplot below as np and plt respectively. Don't forget to put matplotlib inline to get everything within the notebook.
Now we will define the physical constants of our system, which will also establish the unit system we have chosen. We'll use SI units here. Below, I've already created the constants. Make sure you understand what they are before moving on.
End of explanation
"""
#####run specific constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time set for simulation
tend=300*daysec #Time where simulation ends
#Initial conditions (position [m] and velocities [m/s] in x,y,z coordinates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
"""
Explanation: Next, we will need parameters for the simulation. These are known as initial conditions. For a 2 body gravitation problem, we'll need to know the masses of the two objects, the starting positions of the two objects, and the starting velocities of the two objects.
Below, I've included the initial conditions for the earth (a) and the Sun (b) at the average distance from the sun and the average velocity around the sun. We also need a starting time, and ending time for the simulation, and a "time-step" for the system. Feel free to adjust all of these as you see fit once you have built the system!
<br>
<br>
<br>
<br>
a note on dt:
As already stated, numeric simulations are approximations. In our case, we are approximating how time flows. We know it flows continuously, but the computer cannot work with this. So instead, we break up our time into equal chunks called "dt". The smaller the chunks, the more accurate the simulation becomes, but at the cost of more computer time.
End of explanation
"""
#Function to compute the force between the two objects
def Fg(Ma,Mb,G,xa,xb,ya,yb):
#Compute rx and ry between Ma and Mb
rx=xb-xa
ry=#Write it in
#compute r^3
r3=#Write in r^3 using the equation above. Make use of np.sqrt()
#Compute the force in Newtons. Use the equations above as a Guide!
fx=-#Write it in
fy=-#Write it in
return #What do we return?
"""
Explanation: It will be nice to create a function for the force between Ma and Mb. Below is the physics for the force of Ma on Mb. How the physics works here is not important for the moment. Right now, I want to make sure you can translate the math shown into a python function. (I'll show a picture of the physics behind this math for those interested.)
$$\vec{F_g}=\frac{-GM_aM_b}{r^3}\vec{r}$$
and
$$\vec{r}=(x_b-x_a)\hat{x}+ (y_b-y_a)\hat{y}$$
$$r^3=((x_b-x_a)^2+(y_b-y_a)^2)^{3/2}$$
If we break Fg into the x and y components we get:
$$F_x=\frac{-GM_aM_b}{r^3}r_x$$
$$F_y=\frac{-GM_aM_b}{r^3}r_y$$
<br><br>So, $Fg$ will only need to be a function of xa, xb, ya, and yb. The velocities of the bodies will not be needed. Create a function that calculates the force between the bodies given the positions of the bodies. My recommendation here will be to feed the inputs as separate components and also return the force in terms of components (say, fx and fy). This will make your code easier to write and easier to read.
End of explanation
"""
def simulate(Ma,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb):
t=0
#Run a loop for the simulation. Keep track of Ma and Mb positions and velocities
#Initialize vectors (otherwise there is nothing to append to!)
xaAr=np.array([])
yaAr=np.array([])
vxaAr=np.array([])
vyaAr=np.array([])
xbAr=#Write it in for Particle B
ybAr=#Write it in for Particle B
vxbAr=np.array([])
vybAr=np.array([])
#using while loop method with appending. Can also be done with for loops
while #Write the end condition here.
#Compute current force on Ma and Mb. Ma receives the opposite force of Mb
fx,fy=Fg(Ma,Mb,G,xa,xb,ya,yb)
#Update the velocities and positions of the particles
vxa=vxa-fx*dt/Ma
vya=#Write it in for y
vxb=#Write it in for x
vyb=vyb+fy*dt/Mb
xa=xa+vxa*dt
ya=#Write it in for y
xb=#Write it in for x
yb=yb+vyb*dt
#Save data to lists
xaAr=np.append(xaAr,xa)
yaAr=np.append(yaAr,ya)
xbAr=#How will we append it here?
ybAr=np.append(ybAr,yb)
#update the time by one time step, dt
t=t+dt
return(xaAr,yaAr,xbAr,ybAr)
"""
Explanation: Now that we have our force function, we will make a new function which does the whole simulation for a set of initial conditions. We call this function 'simulate' and it will take all the initial conditions as inputs. It will loop over each time step and call the force function to find the new positions for the asteroids at each time step.
The first part of our simulate function will be to initialize the loop and choose a loop type, for or while. Below is the general outline for how each type of loop can go.
<br>
<br>
<br>
For loop:
initialize position and velocity arrays with np.zeros or np.linspace for the amount of steps needed to go through the simulation (which is numSteps=(tend-t)/dt the way we have set up the problem). The for loop condition is based off time and should read roughly like: for i in range(numSteps)
<br>
<br>
<br>
While loop:
initialize position and velocity arrays with np.array([]) and use np.append() to tack on new values at each step like so, xaArray=np.append(xaArray,NEWVALUE). The while condition should read, while t<tend
My preference here is while since it keeps my calculations and appending separate. But, feel free to use which ever feels best for you!
Now for the actual simulation. This is the hardest part to code in. The general idea behind our loop is that as we step through time, we calculate the force, then calculate the new velocity, then the new position for each particle. At the end, we must update our arrays to reflect the new changes and update the time of the system. The time is super important! If we don't change the time (say in a while loop), the simulation would never end and we would never get our result. :(
Outline for the loop (order matters here)
Calculate the force with the last known positions (use your function!)
Calculate the new velocities using the approximation: vb = vb + dt*fg/Mb and va= va - dt*fg/Ma Note the minus sign here, and the need to do this for the x and y directions!
Calculate the new positions using the approximation: xb = xb + dt*Vb (same for a and for y's. No minus problem here)
Update the arrays to reflect our new values
Update the time using t=t+dt
<br>
<br>
<br>
<br>
Now when the loop closes back in, the cycle repeats in a logical way. Go one step at a time when creating this loop and use comments to help guide yourself. Ask for help if it gets tricky!
End of explanation
"""
#Do simulation with these parameters
xaAr,yaAr,xbAr,ybAr = simulate(Ma,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,#Insert the variable for y position of B particle)
"""
Explanation: Now we will call our simulate function with the initial conditions we defined earlier! We will take the output of simulate and store the x and y positions of the two particles.
End of explanation
"""
from IPython.display import Image
Image("Earth-Sun-averageResult.jpg")
plt.figure()
plt.plot(xaAr/AU,yaAr/AU)
plt.plot(#Add positions for B particle)
plt.show()
"""
Explanation: Now for the fun part (or not so fun part if your simulation has an issue), plot your results! This is something well covered in previous lectures. Show me a plot of (xa,ya) and (xb,yb). Does it look sort of familiar? Hopefully you get something like the below image (in units of AU).
End of explanation
"""
#Mass distribution parameters
Mave=7.0e24 #The average asteroid mass
Msigma=1.0e24 #The standard deviation of asteroid masses
Size=3 #The number of asteroids we wish to simulate
"""
Explanation: Challenge #1: Random Sampling of Initial Simulation Conditions
Now let's try to plot a few different asteroids with different initial conditions at once! Let's first produce the orbits of three asteroids with different masses. Suppose the masses of all asteroids in the main asteroid belt follow a Gaussian distribution. The parameters of the distribution of asteroid masses are defined below.
End of explanation
"""
#Draw 3 masses from normally distributed asteroid mass distribution
MassAr = # Add your normal a.k.a. Gaussian distribution function,
# noting that the input to your numpy random number generator
# function will be: (Size)
"""
Explanation: We now wish to draw a random sample of asteroid masses from this distribution (Hint: Look back at Lecture #3).
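For reference, NumPy's random module provides exactly this kind of sampler. A generic sketch with made-up numbers (deliberately not the asteroid parameters, so it doesn't give away the exercise):

```python
import numpy as np

rng = np.random.default_rng(42)                   # seeded for reproducibility
sample = rng.normal(loc=10.0, scale=2.0, size=5)  # mean, std dev, count
print(sample.shape)  # (5,)
```

The older `np.random.normal(loc, scale, size)` call works the same way.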
End of explanation
"""
plt.figure()
for mass in #What array should we loop over?:
    xaAr,yaAr,xbAr,ybAr = simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb)
    plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass) #Provide labels for each asteroid mass so we can generate a legend.
    #Pro tip: The percent sign replaces '%.2e' in the string with the variable formatted the way we want!
plt.legend()
plt.show()
"""
Explanation: Now let's loop over our random asteroid sample, run simulate and plot the results, for each one!
End of explanation
"""
from IPython.display import Image
Image(filename="fig_example.jpg")
"""
Explanation: Going further:
Can you make a plot with 5 asteroid masses instead of 3?
<br>
If you've got some extra time, now is a great chance to experiment with plotting various initial conditions and how the orbits change! What happens if we draw some random initial velocities instead of random masses, for example?
Challenge #2: Fancy Plotting Fun!
When showing off your results to people unfamiliar with your research, it helps to make them easier to understand through different visualization techniques (like legends, labels, patterns, different shapes, and sizes). You may have found that textbooks or news articles are more fun and easier to follow when concepts are illustrated colorfully yet clearly, such as the example figure below, which shows different annotations in the form of text:
End of explanation
"""
plt.figure()
plt.plot(xaAr/AU,yaAr/AU,marker='x',linestyle='--',linewidth=1)
plt.plot()#Add positions for B particle
plt.show()
"""
Explanation: Additionally, publications won't always be printed in color, and not all readers have the ability to distinguish colors or text size in the same way, so differences in style improve accessibility as well.
Luckily, Matplotlib can do all of this and more! Let's experiment with some variations in how we can make our plots. We can use the 'marker =' argument in plt.plot to choose a marker for every datapoint. We can use the 'linestyle = ' argument to have a dotted line instead of a solid line. Try experimenting with the extra arguments in the below plotting code to make it look good to you!
End of explanation
"""
|
computational-class/computational-communication-2016 | code/13.recsys_intro.ipynb | mit | # A dictionary of movie critics and their ratings of a small
# set of movies
critics={'Lisa Rose': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.5,
'Just My Luck': 3.0, 'Superman Returns': 3.5, 'You, Me and Dupree': 2.5,
'The Night Listener': 3.0},
'Gene Seymour': {'Lady in the Water': 3.0, 'Snakes on a Plane': 3.5,
'Just My Luck': 1.5, 'Superman Returns': 5.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 3.5},
'Michael Phillips': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.0,
'Superman Returns': 3.5, 'The Night Listener': 4.0},
'Claudia Puig': {'Snakes on a Plane': 3.5, 'Just My Luck': 3.0,
'The Night Listener': 4.5, 'Superman Returns': 4.0,
'You, Me and Dupree': 2.5},
'Mick LaSalle': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'Just My Luck': 2.0, 'Superman Returns': 3.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 2.0},
'Jack Matthews': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'The Night Listener': 3.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 3.5},
'Toby': {'Snakes on a Plane':4.5,'You, Me and Dupree':1.0,'Superman Returns':4.0}}
critics['Lisa Rose']['Lady in the Water']
critics['Toby']['Snakes on a Plane']=4.5
critics['Toby']
"""
Explanation: Applications of Computational Communication
An Introduction to Recommender Systems
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Programming Collective Intelligence
Collective intelligence means combining the behavior, preferences, or ideas of a group of people in order to create new ideas. It is generally based either on clever algorithms (Netflix, Google) or on users who contribute content (Wikipedia).
Programming collective intelligence emphasizes the former: writing computer programs and building intelligent algorithms that collect and analyze user data to discover new information, or even knowledge.
Netflix
Google
Wikipedia
Toby Segaran, 2007, Programming Collective Intelligence. O'Reilly.
https://github.com/computational-class/programming-collective-intelligence-code/blob/master/chapter2/recommendations.py
Recommender Systems
The most common form of intelligent product on today's internet.
The transition from the information age to the attention age:
information overload
scarcity of attention
The basic task of a recommender system is to connect users and items, helping users quickly discover useful information and thereby solving the information-overload problem.
It tackles the long-tail distribution problem by identifying personalized needs and optimizing the allocation of resources.
Types of recommender systems
Social recommendation
Let friends recommend items to you
Content-based filtering
Recommend new items based on the content of items the user has already consumed, e.g. recommending new films based on the directors and actors of movies already watched.
Collaborative filtering
Find users whose historical interests match a given user's, and recommend items to that user based on the similarity between these users, or on the similarity between the items they have consumed.
Collaborative filtering algorithms
Neighborhood-based methods
user-based filtering
item-based filtering
Latent factor models
Random walk on graphs
Comparing UserCF and ItemCF
UserCF is older: it was used in 1992 in the personalized e-mail recommendation system Tapestry, in 1994 for GroupLens personalized news recommendation, and was later adopted by Digg
It recommends items liked by users who share the individual's interests (group hot topics; social)
It reflects how popular an item is within the small community the user belongs to
ItemCF is relatively newer; it has been used by the e-commerce site Amazon and the DVD rental site Netflix
It recommends items similar to the ones the user liked before (historical interests; personalized)
It reflects the continuity of the user's own interests
News updates quickly and the number of items is huge, so item similarities change too fast to make maintaining an item-similarity table practical; for movies, music, and books it is feasible.
Evaluating recommender systems
User satisfaction
Prediction accuracy
$r_{ui}$ is the user's actual rating; $\hat{r}_{ui}$ is the rating predicted by the recommendation algorithm
Root mean squared error (RMSE)
$RMSE = \sqrt{\frac{\sum_{u, i \in T} (r_{ui} - \hat{r}_{ui})^2}{\left | T \right |}}$
Mean absolute error (MAE)
$MAE = \frac{\sum_{u, i \in T} \left | r_{ui} - \hat{r}_{ui} \right |}{\left | T \right |}$
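Both metrics are one-liners given a list of (actual, predicted) rating pairs; a quick sketch with toy numbers, just to fix the formulas:

```python
import numpy as np

def rmse(records):
    """records: iterable of (actual, predicted) rating pairs."""
    return np.sqrt(np.mean([(r - p) ** 2 for r, p in records]))

def mae(records):
    return np.mean([abs(r - p) for r, p in records])

pairs = [(3.0, 4.0), (5.0, 5.0)]
print(rmse(pairs), mae(pairs))  # ~0.707 and 0.5
```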
End of explanation
"""
# Euclidean distance
import numpy as np
np.sqrt(np.power(5-4, 2) + np.power(4-1, 2))
"""
Explanation: 1. User-based filtering
1.0 Finding similar users
End of explanation
"""
1.0 /(1 + np.sqrt(np.power(5-4, 2) + np.power(4-1, 2)) )
# Returns a distance-based similarity score for person1 and person2
def sim_distance(prefs, person1, person2):
    # Get the list of shared_items
    si = {}
    for item in prefs[person1]:
        if item in prefs[person2]:
            si[item] = 1
    # if they have no ratings in common, return 0
    if len(si) == 0: return 0
    # Add up the squares of all the differences
    sum_of_squares = np.sum([np.power(prefs[person1][item] - prefs[person2][item], 2)
                             for item in prefs[person1] if item in prefs[person2]])
    return 1 / (1 + np.sqrt(sum_of_squares))
sim_distance(critics, 'Lisa Rose','Gene Seymour')
"""
Explanation: This formula calculates the distance, which will be smaller for people who are more similar. However, you need a function that gives higher values for people who are similar. This can be done by adding 1 to the function (so you don't get a division-by-zero error) and inverting it:
End of explanation
"""
# Returns the Pearson correlation coefficient for p1 and p2
def sim_pearson(prefs, p1, p2):
    # Get the list of mutually rated items
    si = {}
    for item in prefs[p1]:
        if item in prefs[p2]: si[item] = 1
    # Find the number of shared elements
    n = len(si)
    # if they have no ratings in common, return 0
    if n == 0: return 0
    # Add up all the preferences
    sum1 = np.sum([prefs[p1][it] for it in si])
    sum2 = np.sum([prefs[p2][it] for it in si])
    # Sum up the squares
    sum1Sq = np.sum([np.power(prefs[p1][it], 2) for it in si])
    sum2Sq = np.sum([np.power(prefs[p2][it], 2) for it in si])
    # Sum up the products
    pSum = np.sum([prefs[p1][it] * prefs[p2][it] for it in si])
    # Calculate Pearson score
    num = pSum - (sum1 * sum2 / n)
    den = np.sqrt((sum1Sq - np.power(sum1, 2) / n) * (sum2Sq - np.power(sum2, 2) / n))
    if den == 0: return 0
    return num / den
sim_pearson(critics, 'Lisa Rose','Gene Seymour')
# Returns the best matches for person from the prefs dictionary.
# Number of results and similarity function are optional params.
def topMatches(prefs, person, n=5, similarity=sim_pearson):
    scores = [(similarity(prefs, person, other), other)
              for other in prefs if other != person]
    # Sort the list so the highest scores appear at the top
    scores.sort()
    scores.reverse()
    return scores[0:n]
topMatches(critics,'Toby',n=3) # topN
"""
Explanation: Pearson correlation coefficient
End of explanation
"""
# Gets recommendations for a person by using a weighted average
# of every other user's rankings
def getRecommendations(prefs, person, similarity=sim_pearson):
    totals = {}
    simSums = {}
    for other in prefs:
        # don't compare me to myself
        if other == person: continue
        sim = similarity(prefs, person, other)
        # ignore scores of zero or lower
        if sim <= 0: continue
        for item in prefs[other]:
            # only score movies I haven't seen yet
            if item not in prefs[person] or prefs[person][item] == 0:
                # Similarity * Score
                totals.setdefault(item, 0)
                totals[item] += prefs[other][item] * sim
                # Sum of similarities
                simSums.setdefault(item, 0)
                simSums[item] += sim
    # Create the normalized list
    rankings = [(total / simSums[item], item) for item, total in totals.items()]
    # Return the sorted list
    rankings.sort()
    rankings.reverse()
    return rankings
# Now you can find out what movies I should watch next:
getRecommendations(critics,'Toby')
# You’ll find that the results are only affected very slightly by the choice of similarity metric.
getRecommendations(critics,'Toby',similarity=sim_distance)
"""
Explanation: 1.1 Recommending Items
<img src='./img/usercf.png' width = 700px>
The five users most similar to Toby (Rose, Seymour, Puig, LaSalle, Matthews) and their similarities (0.99, 0.38, 0.89, 0.92, ...)
The three movies these five users have seen (Night, Lady, Luck) and their ratings
For example, Rose rated Night 3.0
S.xNight is the product of the user similarity and the movie rating
For example, Toby's similarity to Rose times Rose's rating of Night gives 3.0*0.99 = 2.97
From these we obtain a total score for each movie
For example, Night's total is 12.89 = 2.97+1.14+4.02+2.77+1.99
The movie totals then need to be normalized by the sum of the user similarities
For example, Night's predicted score is 3.35 = 12.89/3.84
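The weighted-average arithmetic in this walk-through can be checked directly. Note that only four similarity values are listed; Matthews's similarity (≈0.66) is inferred here from the 3.84 total, so treat it as a derived number rather than one given in the table:

```python
sims  = {'Rose': 0.99, 'Seymour': 0.38, 'Puig': 0.89, 'LaSalle': 0.92, 'Matthews': 0.66}
night = {'Rose': 3.0, 'Seymour': 3.0, 'Puig': 4.5, 'LaSalle': 3.0, 'Matthews': 3.0}

weighted = sum(sims[u] * night[u] for u in night)  # similarity * rating, summed
score = weighted / sum(sims.values())              # normalize by total similarity
print(round(score, 2))  # 3.35
```

Small differences from the table's intermediate numbers come from rounding the similarities.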
End of explanation
"""
# you just need to swap the people and the items.
def transformPrefs(prefs):
    result = {}
    for person in prefs:
        for item in prefs[person]:
            result.setdefault(item, {})
            # Flip item and person
            result[item][person] = prefs[person][item]
    return result
movies = transformPrefs(critics)
"""
Explanation: 2. Item-based filtering
Now you know how to find similar people and recommend products for a given person
But what if you want to see which products are similar to each other?
This is actually the same method we used earlier to determine similarity between people.
Flip the keys and values of the user-item dictionary, so that it maps each item to the users who rated it.
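To see what the flip does, here is the same idea applied to a tiny toy dictionary (the names are invented for the example):

```python
def flip(prefs):
    # item-centric view of a user-centric ratings dictionary
    result = {}
    for person, ratings in prefs.items():
        for item, score in ratings.items():
            result.setdefault(item, {})[person] = score
    return result

toy = {'Alice': {'Movie A': 4.0}, 'Bob': {'Movie A': 3.0, 'Movie B': 5.0}}
print(flip(toy))
# {'Movie A': {'Alice': 4.0, 'Bob': 3.0}, 'Movie B': {'Bob': 5.0}}
```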
End of explanation
"""
topMatches(movies,'Superman Returns')
"""
Explanation: Computing item similarities
End of explanation
"""
def calculateSimilarItems(prefs, n=10):
    # Create a dictionary of items showing which other items
    # they are most similar to.
    result = {}
    # Invert the preference matrix to be item-centric
    itemPrefs = transformPrefs(prefs)
    c = 0
    for item in itemPrefs:
        # Status updates for large datasets
        c += 1
        if c % 100 == 0:
            print("%d / %d" % (c, len(itemPrefs)))
        # Find the most similar items to this one
        scores = topMatches(itemPrefs, item, n=n, similarity=sim_distance)
        result[item] = scores
    return result
itemsim=calculateSimilarItems(critics)
itemsim
"""
Explanation: Recommending users for an item
End of explanation
"""
def getRecommendedItems(prefs, itemMatch, user):
    userRatings = prefs[user]
    scores = {}
    totalSim = {}
    # Loop over items rated by this user
    for (item, rating) in userRatings.items():
        # Loop over items similar to this one
        for (similarity, item2) in itemMatch[item]:
            # Ignore if this user has already rated this item
            if item2 in userRatings: continue
            # Weighted sum of rating times similarity
            scores.setdefault(item2, 0)
            scores[item2] += similarity * rating
            # Sum of all the similarities
            totalSim.setdefault(item2, 0)
            totalSim[item2] += similarity
    # Divide each total score by total weighting to get an average
    rankings = [(score / totalSim[item], item) for item, score in scores.items()]
    # Return the rankings from highest to lowest
    rankings.sort()
    rankings.reverse()
    return rankings
getRecommendedItems(critics,itemsim,'Toby')
getRecommendations(movies,'Just My Luck')
getRecommendations(movies, 'You, Me and Dupree')
"""
Explanation: <img src = './img/itemcf1.png' width=800px>
Toby has seen three movies (Snakes, Superman, Dupree), with ratings of 4.5, 4.0, and 1.0
Table 2-3 gives the similarities between these three movies and three other movies
For example, the similarity between Superman and Night is 0.103
R.xNight is the product of Toby's rating of each movie he has seen and that movie's similarity to Night
For example, 0.412 = 4.0*0.103
So Toby's raw score for Night can be expressed as 0.818+0.412+0.148 = 1.378
We already know that the sum of Night's similarities is 0.182+0.103+0.148 = 0.433
So Toby's final predicted rating for Night can be expressed as 1.378/0.433 = 3.183
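Again, the arithmetic can be verified in a couple of lines (similarities copied from the table above; because the table rounds the similarities, the result differs from 3.183 only in the last digit):

```python
ratings    = {'Snakes': 4.5, 'Superman': 4.0, 'Dupree': 1.0}
night_sims = {'Snakes': 0.182, 'Superman': 0.103, 'Dupree': 0.148}

weighted = sum(ratings[m] * night_sims[m] for m in ratings)  # rating * similarity
score = weighted / sum(night_sims.values())                  # normalize
print(round(score, 2))  # 3.18
```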
End of explanation
"""
# https://github.com/ParticleWave/RecommendationSystemStudy/blob/d1960056b96cfaad62afbfe39225ff680240d37e/PersonalRank.py
import os
import random
class Graph:
    def __init__(self):
        self.G = dict()
    def addEdge(self, p, q):
        if p not in self.G: self.G[p] = dict()
        if q not in self.G: self.G[q] = dict()
        self.G[p][q] = 1
        self.G[q][p] = 1
    def getGraphMatrix(self):
        return self.G
graph = Graph()
graph.addEdge('A', 'a')
graph.addEdge('A', 'c')
graph.addEdge('B', 'a')
graph.addEdge('B', 'b')
graph.addEdge('B', 'c')
graph.addEdge('B', 'd')
graph.addEdge('C', 'c')
graph.addEdge('C', 'd')
G = graph.getGraphMatrix()
print(G.keys())
G
def PersonalRank(G, alpha, root, max_step):
    # G is the bipartite graph of users' ratings on items
    # alpha is the probability of walking forward at each step
    # root is the user being studied
    # max_step is the number of iterations
    rank = {x: 0.0 for x in G.keys()}
    rank[root] = 1.0
    for k in range(max_step):
        tmp = {x: 0.0 for x in G.keys()}
        for i, ri in G.items():
            for j, wij in ri.items():
                if j not in tmp: tmp[j] = 0.0
                tmp[j] += alpha * rank[i] / (len(ri) * 1.0)
                if j == root: tmp[j] += 1.0 - alpha
        rank = tmp
        print(k, rank)
    return rank
print(PersonalRank(G, 0.8, 'A', 20))
# print(PersonalRank(G, 0.8, 'B', 20))
# print(PersonalRank(G, 0.8, 'C', 20))
"""
Explanation: <img src = './img/itemcfNetwork.png' width = 700px>
A network representation of the item-based collaborative filtering algorithm
Graph-based models
Since user behavior can be represented as a bipartite graph, graph-based algorithms can be applied to recommender systems.
<img src = './img/graphrec.png' width = 500px>
End of explanation
"""
def loadMovieLens(path='/Users/chengjun/bigdata/ml-1m/'):
    # Get movie titles
    movies = {}
    for line in open(path + 'movies.dat'):
        (id, title) = line.split('::')[0:2]
        movies[id] = title
    # Load data
    prefs = {}
    for line in open(path + '/ratings.dat'):
        (user, movieid, rating, ts) = line.split('::')
        prefs.setdefault(user, {})
        prefs[user][movies[movieid]] = float(rating)
    return prefs
prefs=loadMovieLens()
prefs['87']
"""
Explanation: 3. MovieLens Recommender
MovieLens is a real movie-rating dataset, developed by the GroupLens research project at the University of Minnesota.
Downloading the data
http://grouplens.org/datasets/movielens/1m/
These files contain 1,000,209 anonymous ratings of approximately 3,900 movies
made by 6,040 MovieLens users who joined MovieLens in 2000.
Data format
All ratings are contained in the file "ratings.dat" and are in the following format:
UserID::MovieID::Rating::Timestamp
1::1193::5::978300760
1::661::3::978302109
1::914::3::978301968
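As an aside, a file in this `::`-separated format can also be parsed with pandas, which is often more convenient for a million rows. A sketch using an in-memory sample of the lines above (note that a multi-character separator requires `engine='python'`):

```python
import io
import pandas as pd

sample = ("1::1193::5::978300760\n"
          "1::661::3::978302109\n"
          "1::914::3::978301968\n")
ratings = pd.read_csv(io.StringIO(sample), sep='::', engine='python',
                      names=['user_id', 'movie_id', 'rating', 'timestamp'])
print(ratings['rating'].tolist())  # [5, 3, 3]
```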
End of explanation
"""
getRecommendations(prefs,'87')[0:30]
"""
Explanation: user-based filtering
End of explanation
"""
itemsim=calculateSimilarItems(prefs,n=50)
getRecommendedItems(prefs,itemsim,'87')[0:30]
"""
Explanation: Item-based filtering
End of explanation
"""
# set product key using GraphLab Create API
#import graphlab
#graphlab.product_key.set_product_key('4972-65DF-8E02-816C-AB15-021C-EC1B-0367')
%matplotlib inline
import graphlab
import graphlab as gl  # some cells below refer to the module as `gl`
# set canvas to show sframes and sgraphs in ipython notebook
gl.canvas.set_target('ipynb')
import matplotlib.pyplot as plt
sf = graphlab.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"],
'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"],
'rating': [1, 3, 2, 5, 4, 1, 4, 3]})
sf
m = graphlab.recommender.create(sf, target='rating')
recs = m.recommend()
print(recs)
m['coefficients']
"""
Explanation: Buiding Recommendation System with GraphLab
In this notebook we will import GraphLab Create and use it to
train two models that can be used for recommending new songs to users
compare the performance of the two models
End of explanation
"""
#train_file = 'http://s3.amazonaws.com/dato-datasets/millionsong/10000.txt'
train_file = '/Users/chengjun/GitHub/cjc2016/data/ratings.dat'
sf = gl.SFrame.read_csv(train_file, header=False, delimiter='|', verbose=False)
sf.rename({'X1':'user_id', 'X2':'course_id', 'X3':'rating'}).show()
"""
Explanation: After importing GraphLab Create, we can download data directly from S3. We have placed a preprocessed version of the Million Song Dataset on S3. This data set was used for a Kaggle challenge and includes data from The Echo Nest, SecondHandSongs, musiXmatch, and Last.fm. This file includes data for a subset of 10000 songs.
The CourseTalk dataset: loading and first look
Loading of the CourseTalk database.
End of explanation
"""
(train_set, test_set) = sf.random_split(0.8, seed=1)
"""
Explanation: In order to evaluate the performance of our model, we randomly split the observations in our data set into two partitions: we will use train_set when creating our model and test_set for evaluating its performance.
End of explanation
"""
import graphlab as gl
popularity_model = gl.popularity_recommender.create(train_set, 'user_id', 'course_id', target = 'rating')
"""
Explanation: Popularity model
Create a model that makes recommendations using item popularity. When no target column is provided, the popularity is determined by the number of observations involving each item. When a target is provided, popularity is computed using the item’s mean target value. When the target column contains ratings, for example, the model computes the mean rating for each item and uses this to rank items for recommendations.
One typically wants to initially create a simple recommendation system that can be used as a baseline and to verify that the rest of the pipeline works as expected. The recommender package has several models available for this purpose. For example, we can create a model that predicts songs based on their overall popularity across all users.
End of explanation
"""
item_sim_model = gl.item_similarity_recommender.create(train_set, 'user_id', 'course_id', target = 'rating',
similarity_type='cosine')
"""
Explanation: Item similarity Model
Collaborative filtering methods make predictions for a given user based on the patterns of other users' activities. One common technique is to compare items based on their Jaccard similarity. This measurement is a ratio: the number of items they have in common, over the total number of distinct items in both sets.
We could also have used another slightly more complicated similarity measurement, called Cosine Similarity.
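As described, Jaccard similarity is only a few lines of code; a toy sketch with invented song IDs:

```python
def jaccard(a, b):
    """Items in common over total distinct items in both sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

print(jaccard({'song1', 'song2'}, {'song2', 'song3'}))  # 1 shared / 3 distinct
```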
If your data is implicit, i.e., you only observe interactions between users and items, without a rating, then use ItemSimilarityModel with Jaccard similarity.
If your data is explicit, i.e., the observations include an actual rating given by the user, then you have a wide array of options. ItemSimilarityModel with cosine or Pearson similarity can incorporate ratings. In addition, MatrixFactorizationModel, FactorizationModel, as well as LinearRegressionModel all support rating prediction.
Now data contains three columns: 'user_id', 'item_id', and 'rating'.
itemsim_cosine_model = graphlab.recommender.create(data,
                                                   target='rating',
                                                   method='item_similarity',
                                                   similarity_type='cosine')
factorization_machine_model = graphlab.recommender.create(data,
                                                          target='rating',
                                                          method='factorization_model')
In the following code block, we compute all the item-item similarities and create an object that can be used for recommendations.
End of explanation
"""
factorization_machine_model = gl.recommender.factorization_recommender.create(train_set, 'user_id', 'course_id',
target='rating')
"""
Explanation: Factorization Recommender Model
Create a FactorizationRecommender that learns latent factors for each user and item and uses them to make rating predictions. This includes both standard matrix factorization as well as factorization machines models (in the situation where side data is available for users and/or items). link
End of explanation
"""
result = gl.recommender.util.compare_models(test_set, [popularity_model, item_sim_model, factorization_machine_model],
user_sample=.1, skip_set=train_set)
"""
Explanation: Model Evaluation
It's straightforward to use GraphLab to compare models on a small subset of users in the test_set. The precision-recall plot that is computed shows the benefits of using the similarity-based model instead of the baseline popularity_model: better curves tend toward the upper-right hand corner of the plot.
The following command finds the top-ranked items for all users in the first 500 rows of test_set. The observations in train_set are not included in the predicted items.
End of explanation
"""
K = 10
users = gl.SArray(sf['user_id'].unique().head(100))
"""
Explanation: Now let's ask the item similarity model for song recommendations on several users. We first create a list of users and create a subset of observations, users_ratings, that pertain to these users.
End of explanation
"""
recs = item_sim_model.recommend(users=users, k=K)
recs.head()
"""
Explanation: Next we use the recommend() function to query the model we created for recommendations. The returned object has four columns: user_id, song_id, the score that the algorithm gave this user for this song, and the song's rank (an integer from 0 to K-1). To see this we can grab the top few rows of recs:
End of explanation
"""
# Get the meta data of the courses
courses = gl.SFrame.read_csv('/Users/chengjun/GitHub/cjc2016/data/cursos.dat', header=False, delimiter='|', verbose=False)
courses.rename({'X1':'course_id', 'X2':'title', 'X3':'avg_rating',
'X4':'workload', 'X5':'university', 'X6':'difficulty', 'X7':'provider'}).show()
courses = courses[['course_id', 'title', 'provider']]
results = recs.join(courses, on='course_id', how='inner')
# Populate observed user-course data with course info
userset = frozenset(users)
ix = sf['user_id'].apply(lambda x: x in userset, int)
user_data = sf[ix]
user_data = user_data.join(courses, on='course_id')[['user_id', 'title', 'provider']]
# Print out some recommendations
for i in range(5):
    user = list(users)[i]
    print("User: " + str(i + 1))
    user_obs = user_data[user_data['user_id'] == user].head(K)
    del user_obs['user_id']
    user_recs = results[results['user_id'] == str(user)][['title', 'provider']]
    print("We were told that the user liked these courses: ")
    print(user_obs.head(K))
    print("We recommend these other courses:")
    print(user_recs.head(K))
    print("")
"""
Explanation: To learn what songs these ids pertain to, we can merge in metadata about each song.
End of explanation
"""
|
rochelleterman/scrape-interwebz | 1_APIs/3_api_workbook.ipynb | mit | # Import required libraries
import requests
import json
from __future__ import division
import math
import csv
import matplotlib.pyplot as plt
"""
Explanation: Accessing Databases via Web APIs
End of explanation
"""
# set key
key="be8992a420bfd16cf65e8757f77a5403:8:44644296"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
"""
Explanation: 1. Constructing API GET Request
In the first place, we know that every call will require us to provide:
a base URL for the API,
some authorization code or key, and
a format for the response.
So let's put store those in some variables.
Use the following demonstration keys for now, but in the future, get your own!
ef9055ba947dd842effe0ecf5e338af9:15:72340235
25e91a4f7ee4a54813dca78f474e45a0:15:73273810
e15cea455f73cc47d6d971667e09c31c:19:44644296
b931c838cdb745bbab0f213cfc16b7a5:12:44644296
1dc1475b6e7d5ff5a982804cc565cd0b:6:44644296
18046cd15e21e1b9996ddfb6dafbb578:4:44644296
be8992a420bfd16cf65e8757f77a5403:8:44644296
End of explanation
"""
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key}
"""
Explanation: You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we want articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term.
End of explanation
"""
# make request
r = requests.get(base_url+response_format, params=search_params)
"""
Explanation: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request.
End of explanation
"""
print(r.url)
"""
Explanation: Now, we have a response object called r. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens.
End of explanation
"""
# set date parameters here
# Uncomment to test
# r = requests.get(base_url+response_format, params=search_params)
# print(r.url)
"""
Explanation: Click on that link to see what it returns!
Challenge 1: Adding a date range
What if we only want to search within a particular date range? The NYT Article API allows us to specify start and end dates.
Alter the search_params code above so that the request only searches for articles in the year 2015.
You're gonna need to look at the documentation here to see how to do this.
End of explanation
"""
# set page parameters here
# Uncomment to test
# r = requests.get(base_url+response_format, params=search_params)
# print(r.url)
"""
Explanation: Challenge 2: Specifying a results page
The above will return the first 10 results. To get the next ten, you need to add a "page" parameter. Change the search parameters above to get the second 10 results.
End of explanation
"""
# Inspect the content of the response, parsing the result as text
response_text= r.text
print(response_text[:1000])
"""
Explanation: 2. Parsing the response text
We can read the content of the server’s response using .text
End of explanation
"""
# Convert JSON response to a dictionary
data = json.loads(response_text)
# data
"""
Explanation: What you see here is JSON text, encoded as unicode text. JSON stands for "JavaScript Object Notation." It has a very similar structure to a Python dictionary -- both are built on key/value pairs. This makes it easy to convert a JSON response to a Python dictionary.
End of explanation
"""
print(data.keys())
# this is boring
data['status']
# so is this
data['copyright']
# this is what we want!
# data['response']
data['response'].keys()
data['response']['meta']['hits']
# data['response']['docs']
type(data['response']['docs'])
"""
Explanation: That looks intimidating! But it's really just a big dictionary. Let's see what keys we got in there.
End of explanation
"""
docs = data['response']['docs']
docs[0]
"""
Explanation: That looks like what we want! Let's put that in its own variable.
End of explanation
"""
# set key
key="ef9055ba947dd842effe0ecf5e338af9:15:72340235"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key,
"begin_date":"20150101", # date must be in YYYYMMDD format
"end_date":"20151231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
print("number of hits: ", str(hits))
# get number of pages
pages = int(math.ceil(hits/10))
# make an empty list where we'll hold all of our docs for every page
all_docs = []
# now we're ready to loop through the pages
for i in range(pages):
    print("collecting page", str(i))
    # set the page parameter
    search_params['page'] = i
    # make request
    r = requests.get(base_url+response_format, params=search_params)
    # get text and convert to a dictionary
    data = json.loads(r.text)
    # get just the docs
    docs = data['response']['docs']
    # add those docs to the big list
    all_docs = all_docs + docs
len(all_docs)
"""
Explanation: 3. Putting everything together to get all the articles.
That's great. But we only have 10 items. The original response said we had 93 hits, which means we have to make 93/10, rounded up to 10 requests, to get them all. Sounds like a job for a loop!
But first, let's review what we've done so far.
End of explanation
"""
# DEFINE YOUR FUNCTION HERE
# uncomment to test
# get_api_data("Duke Ellington", 2014)
"""
Explanation: Challenge 3: Make a function
Turn the code above into a function that takes a search term and a year, and returns all the documents containing that search term in that year.
End of explanation
"""
all_docs[0]
"""
Explanation: 4. Formatting
Let's take another look at one of these documents.
End of explanation
"""
def format_articles(unformatted_docs):
    '''
    This function takes in a list of documents returned by the NYT api
    and parses the documents into a list of dictionaries,
    with 'id', 'headline', and 'date' keys
    '''
    formatted = []
    for i in unformatted_docs:
        dic = {}
        dic['id'] = i['_id']
        dic['headline'] = i['headline']['main']
        dic['date'] = i['pub_date'][0:10]  # cutting off the time of day
        formatted.append(dic)
    return formatted
all_formatted = format_articles(all_docs)
all_formatted[:5]
"""
Explanation: This is all great, but it's pretty messy. What we’d really like to to have, eventually, is a CSV, with each row representing an article, and each column representing something about that article (header, date, etc). As we saw before, the best way to do this is to make a lsit of dictionaries, with each dictionary representing an article and each dictionary representing a field of metadata from that article (e.g. headline, date, etc.) We can do this with a custom function:
End of explanation
"""
def format_articles(unformatted_docs):
    '''
    This function takes in a list of documents returned by the NYT api
    and parses the documents into a list of dictionaries,
    with 'id', 'headline', 'date', 'lead_paragraph' and 'word_count' keys
    '''
    formatted = []
    for i in unformatted_docs:
        dic = {}
        dic['id'] = i['_id']
        dic['headline'] = i['headline']['main']
        dic['date'] = i['pub_date'][0:10]  # cutting off the time of day
        # YOUR CODE HERE
        formatted.append(dic)
    return formatted
# uncomment to test
# all_formatted = format_articles(all_docs)
# all_formatted[:5]
"""
Explanation: Challenge 4 Collect more fields
Edit the function above so that we include the lead_paragraph and word_count fields.
HINT: Some articles may not contain a lead_paragraph, in which case it'll throw an error if you try to access this value (which doesn't exist). You need to add a conditional statement that takes this into consideration.
Advanced: Add another key that returns a list of keywords associated with the article.
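A common pattern for the hint above is Python's `dict.get`, which returns a default value when a key is absent (toy document shown here, not real NYT data):

```python
doc = {'_id': 'abc123', 'word_count': 350}   # no 'lead_paragraph' key
lead = doc.get('lead_paragraph', '')          # '' instead of a KeyError
print(repr(lead))  # ''
```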
End of explanation
"""
keys = all_formatted[0].keys()
# write out the csv
with open('all-formatted.csv', 'w') as output_file:
    dict_writer = csv.DictWriter(output_file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(all_formatted)
"""
Explanation: 5. Exporting
We can now export the data to a CSV.
End of explanation
"""
# YOUR CODE HERE
"""
Explanation: Capstone Challenge
Using what you learned, tell me if Chris' claim (i.e. that Duke Ellington has gotten more popular lately) holds water.
End of explanation
"""
|
ethen8181/machine-learning | trees/decision_tree.ipynb | mit | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css')
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from graphviz import Source
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.tree import export_graphviz
from sklearn.tree import DecisionTreeClassifier
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Decision-Tree-(Classification)" data-toc-modified-id="Decision-Tree-(Classification)-1"><span class="toc-item-num">1 </span>Decision Tree (Classification)</a></span><ul class="toc-item"><li><span><a href="#Splitting-criteria-for-classification-trees" data-toc-modified-id="Splitting-criteria-for-classification-trees-1.1"><span class="toc-item-num">1.1 </span>Splitting criteria for classification trees</a></span></li><li><span><a href="#Impurity" data-toc-modified-id="Impurity-1.2"><span class="toc-item-num">1.2 </span>Impurity</a></span><ul class="toc-item"><li><span><a href="#Entropy" data-toc-modified-id="Entropy-1.2.1"><span class="toc-item-num">1.2.1 </span>Entropy</a></span></li><li><span><a href="#Gini-Index" data-toc-modified-id="Gini-Index-1.2.2"><span class="toc-item-num">1.2.2 </span>Gini Index</a></span></li></ul></li><li><span><a href="#Concrete-Example" data-toc-modified-id="Concrete-Example-1.3"><span class="toc-item-num">1.3 </span>Concrete Example</a></span></li><li><span><a href="#When-To-Stop-Recursing" data-toc-modified-id="When-To-Stop-Recursing-1.4"><span class="toc-item-num">1.4 </span>When To Stop Recursing</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.5"><span class="toc-item-num">1.5 </span>Implementation</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
def entropy(p):
return - p * np.log2(p) - (1 - p) * np.log2(1 - p)
# change default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
x = np.arange(0.0, 1.0, 0.01)
ent = [entropy(p) if p != 0 else None for p in x]
plt.plot(x, ent)
plt.axhline(y = 1.0, linewidth = 1, color = 'k', linestyle = '--')
plt.ylim([ 0, 1.1 ])
plt.xlabel('p(i=1)')
plt.ylabel('Entropy')
plt.show()
"""
Explanation: Decision Tree (Classification)
Let us first create a scenario: say I want to buy a house and wish to borrow a loan from the bank. Now before giving me the loan, the bank is going to look at my history record like my credit, what has it been like in the past? How much money do I make? (and maybe some other information) and use them to determine whether loaning me money is a risky thing or not (whether I'm going to default). So how can the bank do this with decision trees?
Decision trees are formed by a collection of rules (if-then statements that partition the data) based on variables available from the data set. So in the example above, a very simple decision tree model could look like this:
if Credit = excellent then
Outcome = Yes
else if Credit = poor then
if Income = high then
Outcome = Yes
else if Income = low then
Outcome = No
The algorithm works by starting at the top of the tree (the root node), then it will traverse down the branches of this decision tree and ask a series of questions. In the end it will reach the bottom of the tree (the leaf node) that contains the final outcome. For example, if somebody has a credit that's poor and his/her income is high, then the bank will say a Yes, we will give him/her the loan.
Our task now is to learn how to generate the tree to create these decision rules for us. Thankfully, the core method for learning a decision tree can be viewed as a recursive algorithm. A decision tree can be "learned" by splitting the dataset into subsets based on the input features/attributes' value. This process is repeated on each derived subset in a recursive manner called recursive partitioning:
Start at the tree's root node
Select the best rule/feature that splits the data into two subsets (child nodes) for the current node
Repeat step 2 on each of the derived subsets until the tree can't be split any further. As we'll later see, we can set restrictions to decide when the tree should stop growing.
There are a few additional details that we need to make more concrete, including how to pick the rule/feature to split on. And because it is a recursive algorithm, we also have to figure out when to stop the recursion, in other words, when not to split another node in the tree.
Splitting criteria for classification trees
The first question is what is the best rule/feature to split on and how do we measure that? One way to determine this is by choosing the one that maximizes the Information Gain (IG) at each split.
$$IG(D_{p}, a) = I(D_{p}) - p_{left} I(D_{left}) - p_{right} I(D_{right})$$
$IG$: Information Gain
$a$: feature to perform the split
$I$: Some impurity measure that we'll look at in the subsequent section
$D_{p}$: training subset of the parent node
$D_{left}$, $D_{right}$ :training subset of the left/right child node
$p_{left}$, $p_{right}$: proportion of parent node samples that ended up in the left/right child node after the split. $\frac{N_{left}}{N_p}$ or $\frac{N_{right}}{N_p}$. Where:
$N_p$: number of samples in the parent node
$N_{left}$: number of samples in the left child node
$N_{right}$: number of samples in the right child node
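Putting the formula together, the information gain of a candidate split can be computed directly from the label arrays. A small sketch using entropy as the impurity measure (the helper names here are illustrative, not from any library):

```python
import numpy as np

def entropy_from_labels(y):
    """Impurity I(D) of a node, using entropy with log base 2."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    """IG(D_p, a) = I(D_p) - p_left * I(D_left) - p_right * I(D_right)."""
    n = len(parent)
    return (entropy_from_labels(parent)
            - len(left) / n * entropy_from_labels(left)
            - len(right) / n * entropy_from_labels(right))
```

A split that separates the classes well yields a gain close to the parent impurity, while an uninformative split yields a gain near 0.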
Impurity
The two most common impurity measures are entropy and the Gini index.
Entropy
Entropy is defined as:
$$I_E(t) = - \sum_{i =1}^{C} p(i \mid t) \;log_2 \,p(i \mid t)$$
for all non-empty classes, $p(i \mid t) \neq 0$, where:
$p(i \mid t)$ is the proportion (or frequency or probability) of the samples that belong to class $i$ for a particular node $t$
$C$ is the number of unique class labels
The entropy is therefore 0 if all samples at a node belong to the same class, and the entropy is maximal if we have a uniform class distribution. For example, in a binary class setting, the entropy is 0 if $p(i =1 \mid t) =1$ or $p(i =0 \mid t) =1$. And if the classes are distributed uniformly with $p(i =1 \mid t) = 0.5$ and $p(i =0 \mid t) = 0.5$, the entropy is 1, which we can visualize by plotting the entropy for the binary class setting below.
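The same formula also covers the multi-class case; as a quick numeric sketch (plain numpy, names illustrative):

```python
import numpy as np

def entropy_from_labels(y):
    # p(i|t) is just the class frequency at the node
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return -np.sum(p * np.log2(p))

print(entropy_from_labels([0, 0, 1, 1]))  # 1.0 (uniform binary -> maximal)
print(entropy_from_labels([1, 1, 1, 1]))  # a pure node has entropy 0
print(entropy_from_labels([0, 1, 2]))     # log2(3) ~ 1.585 for 3 uniform classes
```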
End of explanation
"""
def gini(p):
return p * (1 - p) + (1 - p) * ( 1 - (1 - p) )
gi = gini(x)
# plot
for i, lab in zip([ent, gi], ['Entropy', 'Gini Index']):
plt.plot(x, i, label = lab)
plt.legend(loc = 'upper center', bbox_to_anchor = (0.5, 1.15),
ncol = 3, fancybox = True, shadow = False)
plt.axhline(y = 0.5, linewidth = 1, color = 'k', linestyle = '--')
plt.axhline(y = 1.0, linewidth = 1, color = 'k', linestyle = '--')
plt.ylim([ 0, 1.1 ])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity')
plt.tight_layout()
plt.show()
"""
Explanation: Gini Index
Gini Index is defined as:
$$
\begin{align}
I_G(t) &= \sum_{i =1}^{C} p(i \mid t) \big(1-p(i \mid t)\big) \nonumber \\
&= \sum_{i =1}^{C} \big( p(i \mid t) - p(i \mid t)^2 \big) \nonumber \\
&= \sum_{i =1}^{C} p(i \mid t) - \sum_{i =1}^{C} p(i \mid t)^2 \nonumber \\
&= 1 - \sum_{i =1}^{C} p(i \mid t)^2
\end{align}
$$
Compared to Entropy, the maximum value of the Gini index is 0.5, which occurs when the classes are perfectly balanced in a node. On the other hand, the minimum value of the Gini index is 0 and occurs when there is only one class represented in a node (A node with a lower Gini index is said to be more "pure").
This time we plot Entropy and Gini index together to compare them against each other.
End of explanation
"""
class Tree:
"""
Classification tree using information gain with entropy as impurity
Parameters
----------
max_features : int or None, default None
The number of features to consider when looking for the best split,
None uses all features
min_samples_split : int, default 10
The minimum number of samples required to split an internal node
max_depth : int, default 3
Maximum depth of the tree
minimum_gain : float, default 1e-7
Minimum information gain required for splitting
"""
def __init__(self, max_depth = 3, max_features = None,
minimum_gain = 1e-7, min_samples_split = 10):
self.max_depth = max_depth
self.max_features = max_features
self.minimum_gain = minimum_gain
self.min_samples_split = min_samples_split
def fit(self, X, y):
"""pass in the 2d-array dataset and the response column"""
self.n_class = np.unique(y).shape[0]
# in the case you're wondering why we have this implementation of
# choosing the number of features to consider when looking
# for the best split, it will become much clearer when we
# start discussing Random Forest algorithm
if self.max_features is None or self.max_features > X.shape[1]:
self.max_features = X.shape[1]
self.feature_importance = np.zeros(X.shape[1])
self.tree = _create_decision_tree(X, y, self.max_depth,
self.minimum_gain, self.max_features,
self.min_samples_split, self.n_class,
self.feature_importance, X.shape[0])
self.feature_importance /= np.sum(self.feature_importance)
return self
def predict(self, X):
proba = self.predict_proba(X)
pred = np.argmax(proba, axis = 1)
return pred
def predict_proba(self, X):
proba = np.empty((X.shape[0], self.n_class))
for i in range(X.shape[0]):
proba[i] = self._predict_row(X[i, :], self.tree)
return proba
def _predict_row(self, row, tree):
"""Predict single row"""
if tree['is_leaf']:
return tree['prob']
else:
if row[tree['split_col']] <= tree['threshold']:
return self._predict_row(row, tree['left'])
else:
return self._predict_row(row, tree['right'])
def _create_decision_tree(X, y, max_depth,
minimum_gain, max_features,
min_samples_split, n_class,
feature_importance, n_row):
"""recursively grow the decision tree until it reaches the stopping criteria"""
try:
assert max_depth > 0
assert X.shape[0] > min_samples_split
column, value, gain = _find_best_split(X, y, max_features)
assert gain > minimum_gain
feature_importance[column] += (X.shape[0] / n_row) * gain
# split the dataset and grow left and right child
left_X, right_X, left_y, right_y = _split(X, y, column, value)
left_child = _create_decision_tree(left_X, left_y, max_depth - 1,
minimum_gain, max_features,
min_samples_split, n_class,
feature_importance, n_row)
right_child = _create_decision_tree(right_X, right_y, max_depth - 1,
minimum_gain, max_features,
min_samples_split, n_class,
feature_importance, n_row)
except AssertionError:
# if criteria reached, compute the classification
# probability and return it as a leaf node
# note that some leaf node may only contain partial classes,
# thus specify the minlength so class that don't appear will
# still get assign a probability of 0
counts = np.bincount(y, minlength = n_class)
prob = counts / y.shape[0]
leaf = {'is_leaf': True, 'prob': prob}
return leaf
node = {'is_leaf': False,
'left': left_child,
'right': right_child,
'split_col': column,
'threshold': value}
return node
def _find_best_split(X, y, max_features):
"""Greedy algorithm to find the best feature and value for a split"""
subset = np.random.choice(X.shape[1], max_features, replace = False)
max_col, max_val, max_gain = None, None, None
parent_entropy = _compute_entropy(y)
for column in subset:
split_values = _find_splits(X, column)
for value in split_values:
splits = _split(X, y, column, value, return_X = False)
gain = parent_entropy - _compute_splits_entropy(y, splits)
if max_gain is None or gain > max_gain:
max_col, max_val, max_gain = column, value, gain
return max_col, max_val, max_gain
def _compute_entropy(split):
    """entropy score using a fixed log base 2"""
_, counts = np.unique(split, return_counts = True)
p = counts / split.shape[0]
entropy = -np.sum(p * np.log2(p))
return entropy
def _find_splits(X, column):
"""
find all possible split values (threshold),
by getting unique values in a sorted order
and finding cutoff point (average) between every two values
"""
X_unique = np.unique(X[:, column])
split_values = np.empty(X_unique.shape[0] - 1)
for i in range(1, X_unique.shape[0]):
average = (X_unique[i - 1] + X_unique[i]) / 2
split_values[i - 1] = average
return split_values
def _compute_splits_entropy(y, splits):
"""compute the entropy for the splits (the two child nodes)"""
splits_entropy = 0
for split in splits:
splits_entropy += (split.shape[0] / y.shape[0]) * _compute_entropy(split)
return splits_entropy
def _split(X, y, column, value, return_X = True):
"""split the response column using the cutoff threshold"""
left_mask = X[:, column] <= value
right_mask = X[:, column] > value
left_y, right_y = y[left_mask], y[right_mask]
if not return_X:
return left_y, right_y
else:
left_X, right_X = X[left_mask], X[right_mask]
return left_X, right_X, left_y, right_y
"""
Explanation: As we can see from the plot, there are not many differences (they both increase and decrease over a similar range). In practice, the Gini index and entropy typically yield very similar results, and it is often not worth spending much time on evaluating decision tree models using different impurity criteria. As for which one to use, consider the Gini index: this way we don't need to compute the log, which can make it a bit computationally faster.
Decision trees can also be used for regression tasks. Instead of using the Gini index or entropy as the impurity function, we use criteria such as the MSE (mean square error):
$$I_{MSE}(t) = \frac{1}{N_t} \sum_i^{N_t}(y_i - \bar{y})^2$$
Where $\bar{y}$ is the average of the responses at node $t$, and $N_t$ is the number of observations that reached node $t$. This is simply saying: we compute the difference between each of the $N_t$ observations' responses and the average response, square it, and take the average.
Concrete Example
Here we'll calculate the Entropy score by hand to hopefully make things a bit more concrete. Using the bank loan example again, suppose at a particular node, there are 80 observations, of whom 40 were classified as Yes (the bank will issue the loan) and 40 were classified as No.
We can first calculate the Entropy before making a split:
$$I_E(D_{p}) = - \left( \frac{40}{80} log_2(\frac{40}{80}) + \frac{40}{80} log_2(\frac{40}{80}) \right) = 1$$
Suppose we try splitting on Income and the child nodes turn out to be:
Left (Income = high): 30 Yes and 10 No
Right (Income = low): 10 Yes and 30 No
$$I_E(D_{left}) = - \left( \frac{30}{40} log_2(\frac{30}{40}) + \frac{10}{40} log_2(\frac{10}{40}) \right) = 0.81$$
$$I_E(D_{right}) = - \left( \frac{10}{40} log_2(\frac{10}{40}) + \frac{30}{40} log_2(\frac{30}{40}) \right) = 0.81$$
$$IG(D_{p}, Income) = 1 - \frac{40}{80} (0.81) - \frac{40}{80} (0.81) = 0.19$$
Next we repeat the same process and evaluate the split based on splitting by Credit.
Left (Credit = excellent): 20 Yes and 0 No
Right (Credit = poor): 20 Yes and 40 No
$$I_E(D_{left}) = - \left( \frac{20}{20} log_2(\frac{20}{20}) + \frac{0}{20} log_2(\frac{0}{20}) \right) = 0$$
$$I_E(D_{right}) = - \left( \frac{20}{60} log_2(\frac{20}{60}) + \frac{40}{60} log_2(\frac{40}{60}) \right) = 0.92$$
$$IG(D_{p}, Credit) = 1 - \frac{20}{80} (0) - \frac{60}{80} (0.92) = 0.31$$
In this case, it will choose Credit as the feature to split upon.
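The hand calculation above can be double-checked numerically; a quick sketch (rounded to two decimals as in the text):

```python
import numpy as np

def node_entropy(probs):
    """Entropy over a list of class proportions (0-probabilities contribute 0)."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

parent = node_entropy([40 / 80, 40 / 80])  # 1.0 before the split
# Split on Income: (30 Yes, 10 No) vs (10 Yes, 30 No)
ig_income = parent - 0.5 * node_entropy([0.75, 0.25]) - 0.5 * node_entropy([0.25, 0.75])
# Split on Credit: (20 Yes, 0 No) vs (20 Yes, 40 No)
ig_credit = parent - (20 / 80) * node_entropy([1.0, 0.0]) - (60 / 80) * node_entropy([1 / 3, 2 / 3])
print(round(ig_income, 2), round(ig_credit, 2))  # 0.19 0.31
```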
If we have more features, the decision tree algorithm will simply try every possible split and choose the split that maximizes the information gain. If the feature is a continuous variable, then we can simply get the unique values of that feature in sorted order, then try all possible split values (thresholds) by using the cutoff point (average) between every two values (e.g. unique values of 1, 2, 3 will result in trying splits at 1.5 and 2.5). Or, to speed up computations, we can bin the unique values into buckets and split on the buckets.
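The continuous-feature threshold scheme just described (cutoff points halfway between consecutive sorted unique values) is small enough to sketch directly:

```python
import numpy as np

def candidate_thresholds(values):
    """All cutoff points between consecutive sorted unique values."""
    uniq = np.unique(values)  # unique values, already sorted
    return (uniq[:-1] + uniq[1:]) / 2

print(candidate_thresholds([3, 1, 2, 2]))  # [1.5 2.5]
```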
When To Stop Recursing
The other question that we need to address is when to stop the tree from growing. There are some early stopping criteria that is commonly used to prevent the tree from overfitting.
Maximum depth: The length of the longest path from the root node to a leaf node will not exceed this value. This is the most commonly tuned hyperparameter for tree-based methods
Minimum sample split: The minimum number of samples required to split a node should be greater than this number
Minimum information gain: The minimum information gain required for splitting on the best feature
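These three criteria can be folded into a single stopping predicate. A minimal sketch using the same default values as the toy implementation in this notebook (the function name is illustrative):

```python
def should_stop(depth, n_samples, best_gain,
                max_depth=3, min_samples_split=10, minimum_gain=1e-7):
    """True when any early-stopping criterion is met and the node becomes a leaf."""
    return (depth >= max_depth
            or n_samples < min_samples_split
            or best_gain <= minimum_gain)

print(should_stop(depth=3, n_samples=100, best_gain=0.2))  # True: max depth reached
print(should_stop(depth=1, n_samples=100, best_gain=0.2))  # False: keep splitting
```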
And that's pretty much it for classification trees! For a more visually appealing explanation, the following link is a website that uses interactive visualization to demonstrate how decision trees work: A Visual Introduction to Machine Learning
Implementation
With all of that in mind, the following section implements a toy classification tree algorithm.
End of explanation
"""
# load a sample dataset
iris = load_iris()
X = iris.data
y = iris.target
# train model and print the accuracy score
tree = Tree()
tree.fit(X, y)
y_pred = tree.predict(X)
print('classification distribution: ', np.bincount(y_pred))
print('accuracy score: ', accuracy_score(y, y_pred))
# use library to confirm results are comparable
clf = DecisionTreeClassifier(criterion = 'entropy', min_samples_split = 10, max_depth = 3)
clf.fit(X, y)
y_pred = clf.predict(X)
print('classification distribution: ', np.bincount(y_pred))
print('accuracy score: ', accuracy_score(y, y_pred))
"""
Explanation: We will load the Iris dataset and use it as a sample dataset to test our algorithm. This data set consists of 3 different types of irises (Setosa, Versicolour, and Virginica). It is stored as a 150x4 numpy.ndarray, where the rows are the samples and the columns are Sepal Length, Sepal Width, Petal Length and Petal Width.
End of explanation
"""
# visualize the decision tree
# export it as .dot file, other common parameters include
# `rounded` (boolean to round the score on each node)
export_graphviz(clf, feature_names = iris.feature_names, filled = True,
class_names = iris.target_names, out_file = 'tree.dot')
# read it in and visualize it, or if we wish to
# convert the .dot file into other formats, we can do:
# import os
# os.system('dot -Tpng tree.dot -o tree.jpeg')
with open('tree.dot') as f:
dot_graph = f.read()
Source(dot_graph)
"""
Explanation: Advantages of decision trees:
Features don't require scaling or normalization
Great at dealing with data that have lots of categorical features
Can be displayed graphically, thus making it highly interpretable (in the next code chunk)
It is non-parametric, thus it will outperform linear models if relationship between features and response is highly non-linear
For visualizing the decision tree, you might need to have graphviz installed. For Mac users, try doing brew install graphviz or follow the instructions in this link.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/b89584de6ec99a847868d7b80a32cf50/80_dics.ipynb | bsd-3-clause | # Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
"""
Explanation: DICS for power mapping
In this tutorial, we'll simulate two signals originating from two
locations on the cortex. These signals will be sinusoids, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll use dynamic imaging of coherent sources (DICS) :footcite:GrossEtAl2001
to map out spectral power along the cortex. Let's see if we can find our two
simulated sources.
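As a rough sketch of the idea behind DICS (notation simplified here, and the actual implementation adds regularization and source-orientation handling), the spatial filter for a candidate source location $r$ at frequency $f$ is built from the sensor-level cross-spectral density matrix $C(f)$ and the leadfield $L(r)$:

$$A(r, f) = \left( L(r)^T C(f)^{-1} L(r) \right)^{-1} L(r)^T C(f)^{-1}$$

and the estimated source power follows from applying the filter to the CSD, $P(r, f) = A(r, f)\, C(f)\, A(r, f)^H$. The beamformer functions used below correspond to regularized variants of this construction.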
End of explanation
"""
import os.path as op
import numpy as np
from scipy.signal import welch, coherence, unit_impulse
from matplotlib import pyplot as plt
import mne
from mne.simulation import simulate_raw, add_noise
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
# We use the MEG and MRI setup from the MNE-sample dataset
data_path = sample.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
# Filenames for various files we'll be using
meg_path = op.join(data_path, 'MEG', 'sample')
raw_fname = op.join(meg_path, 'sample_audvis_raw.fif')
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(meg_path, 'sample_audvis-cov.fif')
fwd = mne.read_forward_solution(fwd_fname)
# Seed for the random number generator
rand = np.random.RandomState(42)
"""
Explanation: Setup
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
End of explanation
"""
sfreq = 50. # Sampling frequency of the generated signal
n_samp = int(round(10. * sfreq))
times = np.arange(n_samp) / sfreq # 10 seconds of signal
n_times = len(times)
def coh_signal_gen():
"""Generate an oscillating signal.
Returns
-------
signal : ndarray
The generated signal.
"""
t_rand = 0.001 # Variation in the instantaneous frequency of the signal
std = 0.1 # Std-dev of the random fluctuations added to the signal
base_freq = 10. # Base frequency of the oscillators in Hertz
n_times = len(times)
# Generate an oscillator with varying frequency and phase lag.
signal = np.sin(2.0 * np.pi *
(base_freq * np.arange(n_times) / sfreq +
np.cumsum(t_rand * rand.randn(n_times))))
# Add some random fluctuations to the signal.
signal += std * rand.randn(n_times)
# Scale the signal to be in the right order of magnitude (~100 nAm)
# for MEG data.
signal *= 100e-9
return signal
"""
Explanation: Data simulation
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
End of explanation
"""
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()
fig, axes = plt.subplots(2, 2, figsize=(8, 4))
# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')
# Power spectrum of the first timeseries
f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
ax = axes[1][0]
# Only plot the first 100 frequencies
ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],
ylabel='Power (dB)', title='Power spectrum of signal 1')
# Compute the coherence between the two timeseries
f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
ax = axes[1][1]
ax.plot(f[:50], coh[:50], lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',
title='Coherence between the timeseries')
fig.tight_layout()
"""
Explanation: Let's simulate two timeseries and plot some basic information about them.
End of explanation
"""
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
vertices = [[146374], [33830]]
# Construct SourceEstimates that describe the signals at the cortical level.
data = np.vstack((signal1, signal2))
stc_signal = mne.SourceEstimate(
data, vertices, tmin=0, tstep=1. / sfreq, subject='sample')
stc_noise = stc_signal * 0.
"""
Explanation: Now we put the signals at two locations on the cortex. We construct a
:class:mne.SourceEstimate object to store them in.
The timeseries will have a part where the signal is active and a part where
it is not. The techniques we'll be using in this tutorial depend on being
able to contrast data that contains the signal of interest versus data that
does not (i.e. it contains only noise).
End of explanation
"""
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
"""
Explanation: Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
End of explanation
"""
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])
# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)
# Define a covariance matrix for the simulated noise. In this tutorial, we use
# a simple diagonal matrix.
cov = mne.cov.make_ad_hoc_cov(info)
cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR
# Simulate the raw data, with a lowpass filter on the noise
stcs = [(stc_signal, unit_impulse(n_samp, dtype=int) * 1),
(stc_noise, unit_impulse(n_samp, dtype=int) * 2)] # stacked in time
duration = (len(stc_signal.times) * 2) / sfreq
raw = simulate_raw(info, stcs, forward=fwd)
add_noise(raw, cov, iir_filter=[4, -4, 0.8], random_state=rand)
"""
Explanation: Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
Some noise is added based on the baseline noise covariance matrix from the
sample dataset, scaled to implement the desired SNR.
End of explanation
"""
events = mne.find_events(raw, initial_event=True)
tmax = (len(stc_signal.times) - 1) / sfreq
epochs = mne.Epochs(raw, events, event_id=dict(signal=1, noise=2),
tmin=0, tmax=tmax, baseline=None, preload=True)
assert len(epochs) == 2 # ensure that we got the two expected events
# Plot some of the channels of the simulated data that are situated above one
# of our simulated sources.
picks = mne.pick_channels(epochs.ch_names,
mne.read_vectorview_selection('Left-frontal'))
epochs.plot(picks=picks)
"""
Explanation: We create an :class:mne.Epochs object containing two trials: one with
both noise and signal and one with just noise.
End of explanation
"""
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)
# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)
# Take the root-mean square along the time dimension and plot the result.
s_rms = np.sqrt((s ** 2).mean())
title = 'MNE-dSPM inverse (RMS)'
brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,
size=600, time_label=title, title=title)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh')
brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550,
'focalpoint': [0, 0, 0]})
"""
Explanation: Power mapping
With our simulated dataset ready, we can now pretend to be researchers that
have just recorded this from a real subject and are going to study what parts
of the brain communicate with each other.
First, we'll create a source estimate of the MEG data. We'll use both a
straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
which is specifically designed to work with oscillatory data.
Computing the inverse using MNE-dSPM:
End of explanation
"""
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])
# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=1.,
inversion='single', weight_norm=None, real_filter=True)
print(filters_approach1)
filters_approach2 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=None,
inversion='matrix', weight_norm='unit-noise-gain', real_filter=True)
print(filters_approach2)
# You can save these to disk with:
# filters_approach1.save('filters_1-dics.h5')
# Compute the DICS power map by applying the spatial filters to the CSD matrix.
power_approach1, f = apply_dics_csd(csd_signal, filters_approach1)
power_approach2, f = apply_dics_csd(csd_signal, filters_approach2)
"""
Explanation: We will now compute the cortical power map at 10 Hz. using a DICS beamformer.
A beamformer will construct for each vertex a spatial filter that aims to
pass activity originating from the vertex, while dampening activity from
other sources as much as possible.
The :func:mne.beamformer.make_dics function has many switches that offer
precise control
over the way the filter weights are computed. Currently, there is no clear
consensus regarding the best approach. This is why we will demonstrate two
approaches here:
The approach as described in :footcite:vanVlietEtAl2018, which first
normalizes the forward solution and computes a vector beamformer.
The scalar beamforming approach based on
:footcite:SekiharaNagarajan2008, which uses weight normalization
instead of normalizing the forward solution.
End of explanation
"""
def plot_approach(power, n):
"""Plot the results on a brain."""
title = 'DICS power map, approach %d' % n
brain = power.plot(
'sample', subjects_dir=subjects_dir, hemi='both',
size=600, time_label=title, title=title)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh', color='b')
brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh', color='b')
# Rotate the view and add a title.
brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550,
'focalpoint': [0, 0, 0]})
return brain
brain1 = plot_approach(power_approach1, 1)
"""
Explanation: Plot the DICS power maps for both approaches, starting with the first:
End of explanation
"""
brain2 = plot_approach(power_approach2, 2)
"""
Explanation: Now the second:
End of explanation
"""
|
maestrotf/pymepps | docs/examples/example_plot_xr_accessor.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
import xarray as xr
import pymepps
"""
Explanation: How to use Xarray accessor
This example shows how to use the SpatialData accessor to extend the
capabilities of xarray.
To extend xarray.DataArray you need only to load also pymepps with
"import pymepps". The extensions could be used with the property
xarray.DataArray.pp
End of explanation
"""
ds = xr.open_dataset('../data/model/GFS_Global_0p25deg_20161219_0600.nc')
t2m_max = ds['Maximum_temperature_height_above_ground_Mixed_intervals_Maximum']
print(t2m_max)
"""
Explanation: To use the full power of pymepps, you have to set a grid. If you load
the data with the xarray functions you have to set the grid afterwards.
So the next step is to load a NetCDF model file with xarray. There are
also pymepps functions to load model data. These are shown in another
example.
End of explanation
"""
grid_dict = dict(
gridtype='lonlat',
xsize=t2m_max['lon'].size,
ysize=t2m_max['lat'].size,
xfirst=t2m_max['lon'].values[0],
xinc=0.25,
yfirst=t2m_max['lat'].values[0],
yinc=-0.25,
)
"""
Explanation: The grid definition is inspired by the climate data operators. So you
could either generate your own grid (done in this example), or you could
load a cdo-conform grid file.
We can see that the grid is a structured latitude and longitude grid
with a resolution of 0.25 degrees.
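Under this CDO-style convention, each axis is fully described by its first value, increment and size. A quick numpy sketch of reconstructing the coordinate arrays (independent of pymepps; the sizes below are just a typical global 0.25-degree grid, used for illustration):

```python
import numpy as np

def axis_values(first, inc, size):
    # CDO-style regular axis: first, first + inc, ..., first + inc * (size - 1)
    return first + inc * np.arange(size)

lat = axis_values(first=90.0, inc=-0.25, size=721)
lon = axis_values(first=0.0, inc=0.25, size=1440)
print(lat[0], lat[-1], lon[-1])  # 90.0 -90.0 359.75
```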
End of explanation
"""
builder = pymepps.GridBuilder(grid_dict)
grid = builder.build_grid()
print(grid)
"""
Explanation: We created our grid dict with the information. Now we have to build the
grid. In pymepps you can use the GridBuilder to build the grid from a
given grid_dict.
End of explanation
"""
t2m_max = t2m_max.pp.set_grid(grid)
print(t2m_max.pp.grid)
"""
Explanation: The next step is to set the grid for our dataset. For this we could use
the set_grid method of the SpatialAccessor.
End of explanation
"""
# Before normalization
print('Before:\n{0:s}\n'.format(str(t2m_max)))
t2m_max = t2m_max.pp.normalize_coords()
# After normalization
print('After:\n{0:s}'.format(str(t2m_max)))
"""
Explanation: Now we set the grid. It is also possible to normalize the coordinates to
allow a consistent processing of the model data.
End of explanation
"""
t2m_max.plot()
plt.show()
"""
Explanation: We can see that the height_above_ground and the time variables are
renamed to more common names. The ensemble member is set to the default
value 'det', while the runtime is set to the missing value None. Now
let's plot the data with the xarray internal plot method.
End of explanation
"""
# sphinx_gallery_thumbnail_number = 2
ger_t2m_max = t2m_max.pp.sellonlatbox([5, 55, 15, 45])
# K to deg C
ger_t2m_max -= 273.15
ger_t2m_max.plot()
plt.show()
"""
Explanation: Let's make use of the SpatialAccessor to slice an area over Germany. We
will also transform the temperature unit to degree Celsius. For this we
can use the normal xarray.DataArray mathematical operations. After the
transformation, let's plot the temperature.
End of explanation
"""
stacked_array = t2m_max.stack(stacked=('runtime', 'validtime'))
# we have to catch the error for sphinx documentation
try:
print(stacked_array.pp.grid)
except TypeError:
print('This DataArray has no grid defined!')
"""
Explanation: If we use an xarray.DataArray method where the DataArray instance is
copied, we have to set a new grid. This behaviour can be seen in the
following code block.
End of explanation
"""
|
jrbourbeau/cr-composition | notebooks/legacy/lightheavy/laputop-performance.ipynb | mit | %load_ext watermark
%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend
"""
Explanation: <a id='top'> </a>
Author: James Bourbeau
End of explanation
"""
%matplotlib inline
from __future__ import division, print_function
from collections import defaultdict
import numpy as np
from scipy import optimize
from scipy.stats import chisquare
import pandas as pd
import matplotlib.pyplot as plt
import seaborn.apionly as sns
import composition as comp
import composition.analysis.plotting as plotting
color_dict = comp.analysis.get_color_dict()
"""
Explanation: Laputop reconstruction performance
Table of contents
Load simulation DataFrame and apply quality cuts
Define functions to be fit to effective area
Calculate effective areas
Plot result
End of explanation
"""
df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config='IC79', return_cut_dict=True)
selection_mask = np.array([True] * len(df_sim))
standard_cut_keys = ['IceTopQualityCuts', 'lap_InIce_containment']
# 'num_hits_1_60', 'max_qfrac_1_60',
# 'InIceQualityCuts']
for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']:
standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)]
for key in standard_cut_keys:
selection_mask *= cut_dict_sim[key]
print(key, np.sum(selection_mask))
df_sim = df_sim[selection_mask]
"""
Explanation: Load simulation DataFrame and apply quality cuts
[ back to top ]
End of explanation
"""
energy_bins = 10**np.arange(5.0, 9.51, 0.1)
energy_midpoints = (energy_bins[1:] + energy_bins[:-1]) / 2
energy_min_fit, energy_max_fit = 5.8, 9.5
midpoints_fitmask = (energy_midpoints >= 10**energy_min_fit) & (energy_midpoints <= 10**energy_max_fit)
np.log10(energy_bins)
np.log10(energy_midpoints[midpoints_fitmask])
"""
Explanation: Define energy binning for this analysis
End of explanation
"""
energybins = comp.analysis.get_energybins()
energy_res = df_sim['lap_log_energy'] - df_sim['MC_log_energy']
light_mask = df_sim['MC_comp_class'] == 'light'
heavy_mask = df_sim['MC_comp_class'] == 'heavy'
_, bin_medians_light, error_light = comp.analysis.data_functions.get_medians(df_sim['MC_log_energy'][light_mask],
energy_res[light_mask],
energybins.log_energy_bins)
_, bin_medians_heavy, error_heavy = comp.analysis.data_functions.get_medians(df_sim['MC_log_energy'][heavy_mask],
energy_res[heavy_mask],
energybins.log_energy_bins)
fig, ax = plt.subplots()
# ax.errorbar(energybins.log_energy_midpoints, bin_medians_light, yerr=error_light,
# marker='.', ls='None', label='light')
# ax.errorbar(energybins.log_energy_midpoints, bin_medians_heavy, yerr=error_heavy,
# marker='.', ls='None', label='heavy')
plotting.plot_steps(energybins.log_energy_midpoints, bin_medians_light, np.array(error_light), ax,
color_dict['light'], 'light')
plotting.plot_steps(energybins.log_energy_midpoints, bin_medians_heavy, np.array(error_heavy), ax,
color_dict['heavy'], 'heavy')
ax.axhline(0, marker='None', linestyle='-.')
ax.set_xlabel('$\log_{10}(E_{\mathrm{MC}}/\mathrm{GeV})$')
ax.set_ylabel('$\log_{10}(E_{\mathrm{reco}}/E_{\mathrm{MC}})$')
ax.set_xlim([6.4, 9.0])
ax.set_ylim([-0.15, 0.15])
ax.legend()
ax2 = ax.twinx()
ax2.set_ylabel('$E_{\mathrm{reco}}/E_{\mathrm{MC}}$')
ax2.set_ylim(list(ax.get_ylim()))
# ax2.axhline(0.1, marker='None', linestyle='-.', color='gray')
# ax2.axhline(-0.1, marker='None', linestyle='-.', color='gray')
plt.yticks(list(ax.get_yticks())[1:-1], ['{:0.2f}'.format(10**x) for x in ax.get_yticks()[1:-1]])
plt.grid()
plt.savefig('/home/jbourbeau/public_html/figures/lap-energyres.png')
plt.show()
"""
Explanation: Energy resolution
End of explanation
"""
core_res = np.sqrt((df_sim['lap_x'] - df_sim['MC_x'])**2+(df_sim['lap_y'] - df_sim['MC_y'])**2)
_, bin_medians_light, error_light = comp.analysis.data_functions.get_medians(df_sim['MC_log_energy'][light_mask],
core_res[light_mask],
energybins.log_energy_bins)
_, bin_medians_heavy, error_heavy = comp.analysis.data_functions.get_medians(df_sim['MC_log_energy'][heavy_mask],
core_res[heavy_mask],
energybins.log_energy_bins)
fig, ax = plt.subplots()
ax.errorbar(energybins.log_energy_midpoints, bin_medians_light, yerr=np.array(error_light),
marker='.', ls='None', label='light')
ax.errorbar(energybins.log_energy_midpoints, bin_medians_heavy, yerr=np.array(error_heavy),
marker='.', ls='None', label='heavy')
ax.axhline(0, marker='None', linestyle='-.')
ax.set_xlabel('$\log_{10}(E_{\mathrm{MC}}/\mathrm{GeV})$')
ax.set_ylabel('$\\vec{x}_{\mathrm{reco}}-\\vec{x}_{\mathrm{MC}} \ [m]$')
ax.set_ylim([0, 15])
ax.legend()
# ax2 = ax.twinx()
# ax2.set_ylabel('$E_{\mathrm{reco}}/E_{\mathrm{MC}}$')
# ax2.set_ylim(list(ax.get_ylim()))
# # ax2.axhline(0.1, marker='None', linestyle='-.', color='gray')
# # ax2.axhline(-0.1, marker='None', linestyle='-.', color='gray')
# plt.yticks(list(ax.get_yticks())[1:-1], ['{:0.2f}'.format(10**x) for x in ax.get_yticks()[1:-1]])
plt.grid()
plt.savefig('/home/jbourbeau/public_html/figures/lap-coreres.png')
plt.show()
"""
Explanation: Core resolution
End of explanation
"""
energybins = comp.analysis.get_energybins()
log_s125 = df_sim['log_s125']
light_mask = df_sim['MC_comp_class'] == 'light'
heavy_mask = df_sim['MC_comp_class'] == 'heavy'
_, bin_medians_light, error_light = comp.analysis.data_functions.get_medians(df_sim['MC_log_energy'][light_mask],
log_s125[light_mask],
energybins.log_energy_bins)
_, bin_medians_heavy, error_heavy = comp.analysis.data_functions.get_medians(df_sim['MC_log_energy'][heavy_mask],
log_s125[heavy_mask],
energybins.log_energy_bins)
fig, ax = plt.subplots()
# ax.errorbar(energybins.log_energy_midpoints, bin_medians_light, yerr=error_light,
# marker='.', ls='None', label='light')
# ax.errorbar(energybins.log_energy_midpoints, bin_medians_heavy, yerr=error_heavy,
# marker='.', ls='None', label='heavy')
plotting.plot_steps(energybins.log_energy_midpoints, bin_medians_light, np.array(error_light), ax,
color_dict['light'], 'light')
plotting.plot_steps(energybins.log_energy_midpoints, bin_medians_heavy, np.array(error_heavy), ax,
color_dict['heavy'], 'heavy')
# ax.axhline(0, marker='None', linestyle='-.')
ax.set_xlabel('$\log_{10}(E_{\mathrm{MC}}/\mathrm{GeV})$')
ax.set_ylabel('$\log_{10}(S_{\mathrm{125}})$')
ax.set_xlim([6.4, 9.0])
# ax.set_ylim([-0.15, 0.15])
ax.legend()
# ax2 = ax.twinx()
# ax2.set_ylabel('$E_{\mathrm{reco}}/E_{\mathrm{MC}}$')
# ax2.set_ylim(list(ax.get_ylim()))
# # ax2.axhline(0.1, marker='None', linestyle='-.', color='gray')
# # ax2.axhline(-0.1, marker='None', linestyle='-.', color='gray')
# plt.yticks(list(ax.get_yticks())[1:-1], ['{:0.2f}'.format(10**x) for x in ax.get_yticks()[1:-1]])
ax.grid()
plt.savefig('/home/jbourbeau/public_html/figures/lap-energyres.png')
plt.show()
"""
Explanation: S125 resolution
End of explanation
"""
|
jhillairet/scikit-rf | doc/source/examples/networktheory/LNA Example.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 10]
import skrf as rf
from skrf.media import DistributedCircuit
f = rf.Frequency(0.4, 2, 101)
tem = DistributedCircuit(f, z0=50)
# import the scattering parameters/noise data for the transistor
bjt = rf.Network('BFU520_05V0_010mA_NF_SP.s2p').interpolate(f)
print(bjt)
"""
Explanation: LNA Example
Let's design an LNA using Infineon's BFU520 transistor. First we need to import scikit-rf and a bunch of other utilities:
End of explanation
"""
bjt.plot_s_smith(lw=2)
"""
Explanation: Let's plot the smith chart for it:
End of explanation
"""
sqabs = lambda x: np.square(np.absolute(x))
delta = bjt.s11.s*bjt.s22.s - bjt.s12.s*bjt.s21.s
rl = np.absolute((bjt.s12.s * bjt.s21.s)/(sqabs(bjt.s22.s) - sqabs(delta)))
cl = np.conj(bjt.s22.s - delta*np.conj(bjt.s11.s))/(sqabs(bjt.s22.s) - sqabs(delta))
rs = np.absolute((bjt.s12.s * bjt.s21.s)/(sqabs(bjt.s11.s) - sqabs(delta)))
cs = np.conj(bjt.s11.s - delta*np.conj(bjt.s22.s))/(sqabs(bjt.s11.s) - sqabs(delta))
def calc_circle(c, r):
theta = np.linspace(0, 2*np.pi, 1000)
return c + r*np.exp(1.0j*theta)
for i, f in enumerate(bjt.f):
# decimate it a little
if i % 100 != 0:
continue
n = rf.Network(name=str(f/1.e+9), s=calc_circle(cs[i][0, 0], rs[i][0, 0]))
n.plot_s_smith()
for i, f in enumerate(bjt.f):
# decimate it a little
if i % 100 != 0:
continue
n = rf.Network(name=str(f/1.e+9), s=calc_circle(cl[i][0, 0], rl[i][0, 0]))
n.plot_s_smith()
"""
Explanation: Now let's calculate the source and load stability curves.
I'm slightly misusing the Network type to plot the curves; normally the curves you pass in to Network should be a function of frequency, but it also works to draw these circles as long as you don't try to use any other functions on them
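As a complementary scalar check (not part of this notebook), the standard Rollett stability factor summarizes the same information: K > 1 together with |Δ| < 1 implies unconditional stability, so no stability circles would be needed at that frequency. A sketch with purely illustrative S-parameter values, not the BFU520's:

```python
def rollett_k(s11, s12, s21, s22):
    """Rollett stability factor K; K > 1 and |delta| < 1 imply
    unconditional stability at that frequency."""
    delta = s11 * s22 - s12 * s21
    return (1 - abs(s11)**2 - abs(s22)**2 + abs(delta)**2) / (2 * abs(s12 * s21))

# illustrative values: delta = 0.5*0.4 - 0.1*2.0 = 0, so K = 0.59 / 0.4
k = rollett_k(0.5, 0.1, 2.0, 0.4)
print(k)  # 1.475 -> unconditionally stable for these example values
```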
End of explanation
"""
idx_915mhz = rf.util.find_nearest_index(bjt.f, 915.e+6)
# we need the normalized equivalent noise and optimum source coefficient to calculate the constant noise circles
rn = bjt.rn[idx_915mhz]/50
gamma_opt = bjt.g_opt[idx_915mhz]
fmin = bjt.nfmin[idx_915mhz]
for nf_added in [0, 0.1, 0.2, 0.5]:
nf = 10**(nf_added/10) * fmin
N = (nf - fmin)*abs(1+gamma_opt)**2/(4*rn)
c_n = gamma_opt/(1+N)
r_n = 1/(1-N)*np.sqrt(N**2 + N*(1-abs(gamma_opt)**2))
n = rf.Network(name=str(nf_added), s=calc_circle(c_n, r_n))
n.plot_s_smith()
print("the optimum source reflection coefficient is ", gamma_opt)
"""
Explanation: So we can see that we need to avoid inductive loads near short circuit in the input matching network and high impedance inductive loads on the output.
Let's draw some constant noise circles. First we grab the noise parameters for our target frequency from the network model:
End of explanation
"""
gamma_s = 0.0
gamma_l = np.conj(bjt.s22.s - bjt.s21.s*gamma_s*bjt.s12.s/(1-bjt.s11.s*gamma_s))
gamma_l = gamma_l[idx_915mhz, 0, 0]
is_gamma_l_stable = np.absolute(gamma_l - cl[idx_915mhz]) > rl[idx_915mhz]
gamma_l, is_gamma_l_stable
"""
Explanation: So we can see from the chart that just leaving the input at 50 ohms gets us under 0.1 dB of extra noise, which seems pretty good. I'm not sure that these exactly correspond to the noise figure increments listed above, but the circles should at least correspond to increasing noise figures.
So let's leave the input at 50 ohms and figure out how to match the output network to maximize gain and stability. Let's see what matching the load impedance with an unmatched input gives us:
End of explanation
"""
def calc_matching_network_vals(z1, z2):
flipped = np.real(z1) < np.real(z2)
if flipped:
z2, z1 = z1, z2
# cancel out the imaginary parts of both input and output impedances
z1_par = 0.0
if abs(np.imag(z1)) > 1e-6:
# parallel something to cancel out the imaginary part of
# z1's impedance
z1_par = 1/(-1j*np.imag(1/z1))
z1 = 1/(1./z1 + 1/z1_par)
z2_ser = 0.0
if abs(np.imag(z2)) > 1e-6:
z2_ser = -1j*np.imag(z2)
z2 = z2 + z2_ser
Q = np.sqrt((np.real(z1) - np.real(z2))/np.real(z2))
x1 = -1.j * np.real(z1)/Q
x2 = 1.j * np.real(z2)*Q
x1_tot = 1/(1/z1_par + 1/x1)
x2_tot = z2_ser + x2
if flipped:
return x2_tot, x1_tot
else:
return x1_tot, x2_tot
z_l = rf.s2z(np.array([[[gamma_l]]]))[0,0,0]
# note that we're matching against the conjugate;
# this is because we want to see z_l from the BJT side
# if we plugged in z the matching network would make
# the 50 ohms look like np.conj(z) to match against it, so
# we use np.conj(z_l) so that it'll look like z_l from the BJT's side
z_par, z_ser = calc_matching_network_vals(np.conj(z_l), 50)
z_l, z_par, z_ser
"""
Explanation: This looks like it may be kind of close to the load instability circles, so it might make sense to pick a load point with less gain for more stability, or to pick a different source impedance with more noise.
But for now let's just build a matching network for this and see how it performs:
End of explanation
"""
c_par = np.real(1/(2j*np.pi*915e+6*z_par))
l_ser = np.real(z_ser/(2j*np.pi*915e+6))
print(c_par, l_ser)
"""
Explanation: Let's calculate what the component values are:
End of explanation
"""
output_network = tem.shunt_capacitor(c_par) ** tem.inductor(l_ser)
amplifier = bjt ** output_network
amplifier.plot_s_smith()
"""
Explanation: The capacitance is kind of low but the inductance seems reasonable. Let's test it out:
End of explanation
"""
amplifier.s21.plot_s_db()
"""
Explanation: That looks pretty reasonable; let's take a look at the S21 to see what we got:
End of explanation
"""
10*np.log10(amplifier.nf(50.)[idx_915mhz])
"""
Explanation: So about 18 dB gain; let's see what our noise figure is:
End of explanation
"""
|
chrisjsewell/ipypublish | example/notebooks/Example.ipynb | bsd-3-clause | print("""
This is some printed text,
with a nicely formatted output.
""")
"""
Explanation: Markdown
General
Some markdown text.
A list:
something
something else
A numbered list
something
something else
This is a long section of text, which we only want in a document (not a presentation)
some text
some more text
some more text
some more text
some more text
some more text
some more text
some more text
some more text
This is an abbreviated section of the document text, which we only want in a presentation
summary of document text
References and Citations
References to \cref{fig:example}, \cref{tbl:example}, \cref{eqn:example_sympy} and \cref{code:example_mpl}.
Referencing multiple items: \cref{fig:example,fig:example_h,fig:example_v}.
A latex citation.\cite{zelenyak_molecular_2016}
A html citation.<cite data-cite="kirkeminde_thermodynamic_2012">(Kirkeminde, 2012)</cite>
Todo notes
\todo[inline]{an inline todo}
Some text.\todo{a todo in the margins}
Text Output
End of explanation
"""
Image('../images/example.jpg',height=400)
images_hconcat(['../images/example.jpg',
'../images/example.jpg'],
width=600, gap=10)
images_vconcat(['../images/example.jpg',
'../images/example.jpg'],
height=400, gap=10)
images_gridconcat([[_,_] for _ in ['../images/example.jpg',
'../images/example.jpg']],
height=300, vgap=10,hgap=20)
"""
Explanation: Images and Figures
End of explanation
"""
plt.scatter(np.random.rand(10), np.random.rand(10),
label='data label')
plt.ylabel(r'a y label with latex $\alpha$')
plt.legend();
"""
Explanation: Displaying a plot with its code
A matplotlib figure, with the caption set in the markdowncell above the figure.
The plotting code for a matplotlib figure (\cref{fig:example_mpl}).
End of explanation
"""
df = pd.DataFrame(np.random.rand(3,4),columns=['a','b','c','d'])
df.a = ['$\delta$','x','y']
df.b = ['l','m','n']
df.set_index(['a','b'])
df.round(3)
"""
Explanation: Tables (with pandas)
The plotting code for a pandas Dataframe table (\cref{tbl:example}).
End of explanation
"""
Latex('$$ a = b+c $$')
"""
Explanation: Equations (with ipython or sympy)
End of explanation
"""
f = sym.Function('f')
y,n = sym.symbols(r'y \alpha')
f = y(n)-2*y(n-1/sym.pi)-5*y(n-2)
sym.rsolve(f,y(n),[1,4])
"""
Explanation: The plotting code for a sympy equation (\cref{eqn:example_sympy}).
End of explanation
"""
import numpy as np
import ipyvolume.pylab as p3
p3.figure()
p3.figure()
u = np.linspace(-10, 10, 25)
x, y = np.meshgrid(u, u)
r = np.sqrt(x**2+y**2).flatten()
x = x.flatten()
y = y.flatten()
time = np.linspace(0, np.pi*2, 15)
z = np.array([(np.cos(r + t) * np.exp(-r/5)) for t in time])
color = np.array([[np.cos(r + t), 1-np.abs(z[i]), 0.1+z[i]*0] for i, t in enumerate(time)])
color = np.transpose(color, (0, 2, 1))
s = p3.scatter(x, z, y, color=color, size=(z+1), marker="sphere")
p3.ylim(-3,3)
p3.show()
p3.save('../html/embed.html')
"""
Explanation: Embed interactive HTML (like ipywidgets)
Interactive HTML was created using <a href="https://ipyvolume.readthedocs.io">ipyvolume</a>
and will render below in .html type outputs:
End of explanation
"""
|
ase16-ta/ga | ga.ipynb | mit | %matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "<unity-id>"
class O:
"""
Basic Class which
- Helps dynamic updates
- Pretty Prints
"""
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
"""
Explanation: Genetic Algorithm Workshop
In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adapted for other data types as well.
You can find an example illustrating GA below
End of explanation
"""
# Few Utility functions
def say(*lst):
"""
Print without going to a new line
"""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
"""
Generate a random number between low and high.
decimals indicates the number of decimal places
"""
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
"""
Shuffle a list
"""
random.shuffle(lst)
return lst
class Decision(O):
"""
Class indicating Decision of a problem
"""
def __init__(self, name, low, high):
"""
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
"""
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
"""
Class indicating Objective of a problem
"""
def __init__(self, name, do_minimize=True):
"""
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
"""
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
"""
Represents a member of the population
"""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
"""
Class representing the cone problem.
"""
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
# using the auxilary classes provided above.
self.decisions = [Decision('r', 0, 10), Decision('h', 0, 20)]
self.objectives = [Objective('S'), Objective('T')]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
# TODO 3: Evaluate the objectives S and T for the point.
return point.objectives
@staticmethod
def is_valid(point):
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
return True
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
return None
cone = Problem()
point = cone.generate_one()
cone.evaluate(point)
print(point)
"""
Explanation: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions: r in [0, 10] cm; h in [0, 20] cm
Objectives: minimize S, T
Constraints: V > 200cm<sup>3</sup>
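The objectives S and T are not spelled out here, so as a sketch, assuming the usual cone formulas (lateral surface S = πr·l with slant height l = √(r² + h²), total surface T = S + πr², volume V = πr²h/3):

```python
from math import pi, sqrt

def cone_objectives(r, h):
    """Curved surface S, total surface T and volume V for a cone of
    base radius r and height h."""
    l = sqrt(r**2 + h**2)     # slant height
    S = pi * r * l            # curved (lateral) surface area
    T = S + pi * r**2         # total surface = curved + base
    V = pi * r**2 * h / 3     # volume, which must exceed 200 cm^3
    return S, T, V

# l = 5 for r=3, h=4, so S = 15*pi, T = 24*pi, V = 12*pi
# (this particular point violates the V > 200 constraint; it is just a numeric check)
S, T, V = cone_objectives(3, 4)
```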
End of explanation
"""
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
return population
# or if ur python OBSESSED
# return [problem.generate_one() for _ in xrange(size)]
print(populate(cone, 5))
"""
Explanation: Great. Now that the class and its basic methods are defined, we move on to coding up the GA.
Population
First up is to create an initial population.
End of explanation
"""
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
return None
pop = populate(cone,5)
crossover(pop[0], pop[1])
"""
Explanation: Crossover
We perform a single point crossover between two points
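A minimal sketch of the idea on plain lists (the workshop's TODO above does the same thing with a Point's decisions; the function name here is illustrative):

```python
def single_point_crossover(mom, dad):
    """Combine the first half of mom's decisions with the
    second half of dad's."""
    split = len(mom) // 2
    return mom[:split] + dad[split:]

child = single_point_crossover([1, 2, 3, 4], [5, 6, 7, 8])
# child == [1, 2, 7, 8]
```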
End of explanation
"""
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the problem
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
return None
"""
Explanation: Mutation
Randomly change a decision such that the change happens only with a small probability (the mutation rate), resetting the decision to a random value within its bounds.
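A plain-list sketch of that idea; the names `mutate_decisions`, `lows` and `highs` are illustrative, not the workshop's API (the actual TODO works on a Point and the problem's Decision bounds):

```python
import random

def mutate_decisions(decisions, lows, highs, mutation_rate=0.01):
    """With probability mutation_rate, replace each decision with a
    fresh uniform value inside its [low, high] range."""
    return [random.uniform(lo, hi) if random.random() < mutation_rate else d
            for d, lo, hi in zip(decisions, lows, highs)]

# with rate 1.0 every decision is resampled, but always inside its bounds
mutated = mutate_decisions([5.0, 10.0], lows=[0, 0], highs=[10, 20], mutation_rate=1.0)
```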
End of explanation
"""
def bdom(problem, one, two):
"""
Return if one dominates two
"""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
return dominates
"""
Explanation: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows:
* Consider two points one and two.
* For every objective o and t in one and two, o <= t
* For at least one objective o and t in one and two, o < t
Note: Binary Domination is not the best method to evaluate fitness but due to its simplicity we choose to use it for this workshop.
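A sketch of that definition on plain objective vectors, assuming every objective is minimized (the workshop's bdom above evaluates the Points first):

```python
def binary_dominates(one, two):
    """True if objective vector `one` binary-dominates `two`:
    no worse on every objective, strictly better on at least one."""
    no_worse = all(o <= t for o, t in zip(one, two))
    strictly_better = any(o < t for o, t in zip(one, two))
    return no_worse and strictly_better

print(binary_dominates([1, 2], [2, 2]))  # True
print(binary_dominates([1, 3], [2, 2]))  # False: worse on the second objective
```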
End of explanation
"""
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
return population[:retain_size]
"""
Explanation: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
A few prominent alternative methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism: Sort points with respect to the fitness and select the top points.
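A sketch of the elitism step on plain lists; `elitism_sketch` is an illustrative name, and in the workshop the fitness values would come from counting dominated points as described above:

```python
def elitism_sketch(population, fitnesses, retain_size):
    """Keep the retain_size points with the highest fitness."""
    ranked = sorted(zip(population, fitnesses), key=lambda pf: pf[1], reverse=True)
    return [p for p, _ in ranked[:retain_size]]

survivors = elitism_sketch(['a', 'b', 'c', 'd'], [3, 0, 2, 1], retain_size=2)
# survivors == ['a', 'c']
```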
End of explanation
"""
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
"""
Explanation: Putting it all together and making the GA
End of explanation
"""
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
"""
Explanation: Visualize
Lets plot the initial population with respect to the final frontier.
End of explanation
"""
|
millernj/phys202-project | [2]Making the Network.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact
from sklearn.datasets import load_digits
from IPython.display import Image, display
digits = load_digits()
print(digits.data.shape)
def show_examples(i):
plt.matshow(digits.images[i].reshape((8,8)), cmap='Greys_r')
display(digits.target[i])
interact(show_examples, i=[1,1797-1])
"""
Explanation: 1 Layer Network
Here we will make a network that will recognize 8x8 images of numbers. This will involve creating a function that generates networks and a function that can train the network.
End of explanation
"""
Image(url="http://neuralnetworksanddeeplearning.com/images/tikz35.png")
"""
Explanation: Our network will be composed of a list of numpy arrays, with each array containing the weights and biases for that layer of perceptrons.
End of explanation
"""
def gen_network(size):
weights= [np.array([[np.random.randn() for _ in range(size[n-1]+1)]
for _ in range(size[n])]) for n in range(len(size))[1:]]
return weights
a = gen_network([2,2,1,3])
a
"""
Explanation: Credit to Neural Networks and Deep Learning by Michael Nielsen for the image.
End of explanation
"""
sigmoid = lambda x: 1/(1 +np.exp(-x))
def perceptron_sigmoid(weights, inputvect):
return sigmoid(np.dot(np.append(inputvect,[1]), weights))
def propforward(network, inputvect):
outputs = []
for layer in network:
neural_input = inputvect
output = [perceptron_sigmoid(weights, neural_input) for weights in layer]
outputs.append(output)
inputvect = output
outputs = np.array(outputs)
return [outputs[:-1], outputs[-1]]
"""
Explanation: This is our code from the Making Perceptrons notebook that we use for our network.
End of explanation
"""
def target_convert(n):
assert n <= 9 and n >= 0
n = round(n)
result = np.zeros((10,))
result[n]=1
return result
target_convert(4)
"""
Explanation: Here we define functions to train the network based on a set of training data. The first step is to run our training data through our network to find how much error the network currently has. Since digits.target is a list of integers, we need a function to convert those integers into 10 dimensional vectors: the same format as the output of our network.
End of explanation
"""
def find_deltas_sigmoid(outputs, targets):
return [output*(1-output)*(output-target) for output, target in zip(outputs, targets)]
"""
Explanation: Another important function we will need is one that computes the output error and multiplies it with the derivative of our sigmoid function to find our output layer's deltas. These deltas will be crucial for backpropagating our error to our hidden layers.
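The output*(1-output) factor works because σ'(z) = σ(z)(1 − σ(z)); a quick numerical sanity check of that identity against a central finite difference:

```python
from math import exp

sigmoid = lambda x: 1 / (1 + exp(-x))

z = 0.7
output = sigmoid(z)
analytic = output * (1 - output)          # sigma(z) * (1 - sigma(z))
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
print(abs(analytic - numeric))  # prints a tiny number: the two derivatives agree
```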
End of explanation
"""
def backprob(network, inputvect, targets):
hidden_outputs, outputs = propforward(network, inputvect)
change_in_outputs = find_deltas_sigmoid(outputs, targets)
list_deltas = [[] for _ in range(len(network))]
list_deltas[-1] = change_in_outputs
for n in range(len(network))[-1:0:-1]:
delta = change_in_outputs
change_in_hidden_outputs= [hidden_output*(1-hidden_output)*
np.dot(delta, np.array([n[i] for n in network[n]]).transpose())
for i, hidden_output in enumerate(hidden_outputs[n-1])]
list_deltas[n-1] = change_in_hidden_outputs
change_in_outputs = change_in_hidden_outputs
return list_deltas
"""
Explanation: Once we have the deltas of our output layer, we move on to getting the hidden layers' deltas. To compute these, we take the Hadamard product of the dot product of the succeeding layer's weight array and deltas with the derivative of that hidden layer's output.
$$\delta_{l}=((w_{l+1})^{T}\delta_{l+1})⊙ \sigma'(z_{l})$$
This formula backpropagates the error from each layer to the previous layer so that we can change each weight by how wrong it is.
Credit to Neural Networks and Deep Learning by Michael Nielsen for the formula.
End of explanation
"""
def stoc_descent(network, input_list, target_list, learning_rate):
mega_delta = []
hidden_output = [propforward(network, inpt)[0] for inpt in input_list]
for inpt, target in zip(input_list, target_list):
mega_delta.append(backprob(network, inpt, target))
inputs=[]
inputs.append(input_list)
for n in range(len(network)):
inputs.append(hidden_output[n])
assert len(inputs) == len(network) + 1
deltas = []
for n in range(len(network)):
deltas.append([np.array(delta[n]) for delta in mega_delta])
assert len(deltas)==len(network)
for n in range(len(network)):
edit_weights(network[n], inputs[n], deltas[n], learning_rate)
"""
Explanation: Now that we can find the deltas for each layer in the network, we just need a function to edit our weights based on a list of examples. For that, we use stochastic gradient descent.
End of explanation
"""
def edit_weights(layer, input_list, deltas, learning_rate):
for a, inpt in enumerate(input_list):
layer-=learning_rate/len(input_list)*np.dot(deltas[a].reshape(len(deltas[a]),1),
np.append(inpt,[1]).reshape(1,len(inpt)+1))
"""
Explanation: To edit the weights of the network, we take the 2D array in each layer and subtract from it the 2D array that results from averaging the dot products of the deltas and the inputs of that layer over the samples in the training data. This average is multiplied by a learning rate, η, to give us control over how much the network will change.
$$w^{l}\rightarrow w^{l}−\frac{η}{m}\sum_{x} \delta_{x,l}(a_{x,l−1})^{T}$$
Credit to Neural Networks and Deep Learning by Michael Nielsen for the formula.
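A numeric sketch of that update for a single layer and two samples, ignoring the bias column that edit_weights appends (all values here are made up for the check):

```python
import numpy as np

def sgd_update(weights, deltas, inputs, learning_rate):
    """w <- w - (eta/m) * sum_x delta_x . a_x^T  for one layer, m samples."""
    m = len(inputs)
    grad = sum(np.outer(d, a) for d, a in zip(deltas, inputs)) / m
    return weights - learning_rate * grad

w = np.zeros((1, 2))
new_w = sgd_update(w,
                   deltas=[np.array([1.0]), np.array([1.0])],
                   inputs=[np.array([2.0, 0.0]), np.array([0.0, 4.0])],
                   learning_rate=0.5)
# average gradient is [[1.0, 2.0]], so new_w == [[-0.5, -1.0]]
```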
End of explanation
"""
inputs=64
hidden_neurons=40
outputs=10
donnel = gen_network([inputs,hidden_neurons,outputs])
# Here's what Donnel looks like.
donnel
"""
Explanation: So, we have everything we need to train a network. All we are missing is a network to train. Let's make one and let's call him Donnel.
End of explanation
"""
def output_reader(output):
assert len(output)==10
result=[]
for i, t in enumerate(output):
if t == max(output) and abs(t-1)<=0.5:
result.append(i)
if len(result)==1:
return result[0]
else:
return 0
output_reader([0,0,0,0,0,1,0,0,0,0])
"""
Explanation: So as you can see, the network "Donnel" is simply a list of 2D numpy arrays with one array for each layer of the network. His hidden layer's shape is 40 x 65, with each row being a perceptron with 64 weights and 1 bias. Since Donnel's output layer has 10 neurons in it, we need to be able to convert Donnel's output into numbers, and numbers (0-9) into lists of perceptron outputs.
End of explanation
"""
%%timeit -r1 -n1
training_cycles = 20
numbers_per_cycle = 1438
batch_size = 15
learning_rate = 1
train_data_index = np.linspace(0,numbers_per_cycle, numbers_per_cycle + 1)
target_list = [target_convert(n) for n in digits.target[0:numbers_per_cycle]]
np.random.seed(1)
np.random.shuffle(train_data_index)
for _ in range(training_cycles):
for n in train_data_index:
if n+batch_size <= numbers_per_cycle:
training_data = digits.data[int(n):int(n+batch_size)]
target_data = target_list[int(n):int(n+batch_size)]
else:
training_data = digits.data[int(n-batch_size):numbers_per_cycle]
assert len(training_data)!=0
target_data = target_list[int(n-batch_size):numbers_per_cycle]
stoc_descent(donnel, training_data, target_data, learning_rate)
# And let's check how accurate it is by testing it with the remaining 20% of the data set.
def check_net(rnge = 1438, check_number=202):
guesses = []
targets = []
number_correct = 0
rnge = range(rnge,rnge + 359)
for n in rnge:
guesses.append(output_reader(propforward(donnel, digits.data[n])[-1]))
targets.append(digits.target[n])
for guess, target in zip(guesses, targets):
if guess == target:
number_correct+=1
number_total = len(rnge)
print(number_correct/number_total*100)
print("%d/%d" %(number_correct, number_total))
print()
print(propforward(donnel, digits.data[check_number])[-1])
print()
print(output_reader(propforward(donnel, digits.data[check_number])[-1]))
show_examples(check_number)
interact(check_net, rnge=True, check_number = [1,1796])
"""
Explanation: Now, let's train the network with 80% of the digits data set. To do this, we will use stochastic gradient descent on batch-sized slices of the training data. Essentially, we're going to change our weights 15 examples at a time until we have covered 80% of the dataset. Let's run this for a couple of cycles as well to get our accuracy as high as possible.
This Cell Takes 20 Minutes to Run
End of explanation
"""
|
mcamack/Jupyter-Notebooks | time-series/LSTM - Time-Series Forecasting - NAB Artificial with Noise.ipynb | apache-2.0 | from tensorflow import keras
"""
Explanation: LSTM Time Series Forecasting for NAB random signal
End of explanation
"""
import pandas as pd
import numpy as np
df_raw = pd.read_csv("datasets/NAB-art_daily_small_noise.csv")
df_raw.head()
df_raw.isna().sum()
df = df_raw.dropna()
df["timestamp"] = pd.to_datetime(df["timestamp"])
df.describe()
df.columns
df["timestamp"]
"""
Explanation: Dataset
https://archive.ics.uci.edu/ml/datasets/Air+quality#
End of explanation
"""
%matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [9.5, 6]
figure = plt.figure()
plt.plot(df["timestamp"], df["value"])
"""
Explanation: Visualize Data
End of explanation
"""
# from sklearn.preprocessing import MinMaxScaler
# scaler = MinMaxScaler()
# scaler.fit(df["value"].values.reshape(-1,1))
# scaled_data = scaler.transform(df["value"].values.reshape(-1,1))
# scaled_data
# df["scaled"] = scaled_data
# df
"""
Explanation: Normalize Data
End of explanation
"""
from keras.preprocessing.sequence import TimeseriesGenerator
input_length = 40
batch_size = 10
X_input = df['value']
generator = TimeseriesGenerator(X_input, X_input, length=input_length, batch_size=batch_size)
len(generator)
generator[0]
generator[1]
input_length = 200
batch_size = 20
num_features = 1
X = X_input.values.reshape(X_input.shape[0], 1)
generator = TimeseriesGenerator(X, X, length=input_length, batch_size=batch_size)
len(generator)
"""
Explanation: Split Dataset
Use the Keras TimeseriesGenerator to transform the dataframe into training and testing portions
End of explanation
"""
from keras import Sequential
from keras.layers import Dense, LSTM, TimeDistributed
from keras.optimizers import RMSprop, Adam
model = Sequential()
model.add(LSTM(100, input_shape=(input_length, num_features), return_sequences=True))
model.add(LSTM(100, input_shape=(input_length, num_features)))
model.add(Dense(num_features, activation='relu'))
optimizer = Adam()
model.compile(loss='mse', optimizer=optimizer)
model.summary()
model.fit_generator(generator, steps_per_epoch=len(generator), epochs=5)
"""
Explanation: Create Model
End of explanation
"""
steps_to_predict = 300
prediction_timestamps = []
prediction_values = []
start_values = X_input.iloc[-input_length:].copy()
start_times = df['timestamp'].iloc[-input_length:].copy()
for i in range(steps_to_predict):
next_value = model.predict(start_values[-input_length:].values.reshape(1,input_length,1))
start_values = start_values.append(pd.Series(next_value[0]), ignore_index=True)
next_time = start_times[-1:].values + pd.Timedelta(seconds=60*5)
start_times = start_times.append(pd.Series(next_time[0]), ignore_index=True)
start_values
start_times
"""
Explanation: Predict and Create Forecast
Use the entire history to forecast the next data point. Then add that to the history and repeat to generate the 2nd future data point... etc.
End of explanation
"""
forecast = plt.figure()
plt.plot(start_times, start_values, color='green')
# plt.plot(start_times, scaler.inverse_transform(start_values), color='green')
plt.plot(df["timestamp"].iloc[-400:], X_input.iloc[-400:])
# plt.plot(next_time, next_value, color='green')
"""
Explanation: Visualize Forecast
End of explanation
"""
|
cerrno/neurokernel | notebooks/vision.ipynb | bsd-3-clause | %matplotlib inline
%cd -q ~/neurokernel/examples/vision/data
%run generate_vision_gexf.py
"""
Explanation: Vision Model Demo
This notebook illustrates how to run a Neurokernel-based model of portions of the fly's vision system.
Background
In addition to the retina where the photo-transduction takes place, the optic
lobe of the Drosophila can be divided into 4 major LPUs on each side of
the fly brain respectively referred to as the lamina, medulla, lobula and lobula
plate. Visual information progresses along a processing path that starts at the
retina and successively passes through the lamina, medulla, and either the
lobula or the lobula plate. The spatial structure of the visual stimulus is
preserved by the retinotopic columnar organization of most of these LPUs.
There are at least 120 different types of neurons in the optic lobe.
Most of the neurons in the optic lobe (if not all)
do not emit spikes; rather, they communicate via chemical synapses where
neurotransmitter is tonically released based on the graded potential of the
presynaptic neurons. The synapses can have varying amounts of delay depending on the
different neurotransmitters. Many neurons in the optic lobe also communicate
through gap junctions.
The current vision system model is based upon available connectome data for the
lamina (Rivera-Alba et al., 2011) and medulla
(Fischbach et al., 1989; Higgins et al., 2004). The model
consists of two LPUs; the first contains 9516 neurons (or about 90% of the
cells) in the retina and lamina, while the second contains 6920 (or about 17%
of the cells) in the medulla and several neurons that connect to both the
medulla and first layer of the lobula. All neurons are modeled using the
Morris-Lecar model with parameters selected to not elicit spiking activity.
Synapses are modeled using a simple model of tonic neurotransmitter release and
its effect upon postsynaptic conductance. The model does not currently comprise gap junctions.
Executing the Model
Assuming that the Neurokernel source has been cloned to ~/neurokernel, we first create GEXF files containing the model configuration.
End of explanation
"""
%run gen_vis_input.py
%cd -q ~/neurokernel/examples/vision
%run vision_demo.py --lam_dev 0 --med_dev 1
"""
Explanation: Then, we generate an input of duration 1.0 seconds and execute the model. Note that if you have access to only 1 GPU, replace --med_dev 1 with --med_dev 0 in the third line below.
End of explanation
"""
%run visualize_output.py
"""
Explanation: Next, we generate a video of the membrane potentials of specific neurons in the two LPUs:
End of explanation
"""
import IPython.display
IPython.display.YouTubeVideo('5eB78fLl1AM')
"""
Explanation: The visualization script produces a video that depicts an input signal provided to a grid comprising neurons associated with each of the 768 cartridges in one of the fly's eyes as well as the response of select neurons in the corresponding columns in the retina/lamina and medulla LPUs. The cartridges and columns are organized in a hexagonal grid similar to the following; each pixel in the visualization corresponds to the neuron associated with one cartridge or column.
<img src='files/files/vision-grid.png' />
The resulting video (hosted on YouTube) can be viewed below:
End of explanation
"""
|
sanabasangare/data-visualization | fin_big_data.ipynb | mit | import numpy as np # for array operations
import pandas as pd # for time series management
from pandas_datareader import data as web # for data retrieval
import seaborn as sns; sns.set() # for a nicer plotting style
# put all plots in the notebook itself
%matplotlib inline
"""
Explanation: Analyzing Financial Data with python
Start by installing locally a comprehensive Python installation such as the Anaconda Python distribution.
Necessary Imports
Import the required modules/packages
End of explanation
"""
AXP = web.DataReader('AXP', data_source='google')
"""
Explanation: Retrieving Stock Price Data
In this case, I'm retrieving stock price data for American Express Company using its stock symbol AXP from Google Finance.
End of explanation
"""
type(AXP)
"""
Explanation: The "AXP" object is of type "DataFrame".
End of explanation
"""
AXP.info()
"""
Explanation: Get meta information
End of explanation
"""
AXP.columns
"""
Explanation: List the columns in the dataframe
End of explanation
"""
AXP.tail()
"""
Explanation: Display the final five rows of the data set.
End of explanation
"""
AXP['Close'].head()
"""
Explanation: Easily select single or multiple columns of a DataFrame object.
.head() shows the first five rows of the selected column
End of explanation
"""
AXP[['Open', 'Close']].tail()
"""
Explanation: .tail() here, shows the last five rows of the 2 selected columns
End of explanation
"""
AXP.loc['2017-06-05'] # single row via index value
AXP.iloc[:2] # two rows via index numbers
"""
Explanation: Similarly, a single or multiple rows can be selected
End of explanation
"""
AXP['Close'].plot(figsize=(20, 10));
"""
Explanation: Data Visualization
End of explanation
"""
rets = np.log(AXP['Close'] / AXP['Close'].shift(1))
"""
Explanation: fully vectorized operation for log return calculation
End of explanation
"""
rets.hist(figsize=(20, 10), bins=35);
"""
Explanation: The log returns can then be visualized via a histogram.
End of explanation
"""
AXP['MA50'] = pd.Series(AXP['Close']).rolling(window=50,center=False).mean()
AXP[['Close', 'MA50']].plot(figsize=(20, 10));
"""
Explanation: Calculating Moving Averages with a pandas Function
Vectorized calculation of a 50-day moving average/trend
End of explanation
"""
|
google/applied-machine-learning-intensive | content/03_regression/01_introduction_to_sklearn/colab.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/03_regression/01_introduction_to_sklearn/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC.
End of explanation
"""
from sklearn.datasets import load_iris
iris_data = load_iris()
iris_data
"""
Explanation: Introduction to scikit-learn
Scikit-learn is a machine learning library in Python.
Scikit-learn is the first of several machine learning libraries we will explore in this course. It is relatively approachable, supports a wide variety of traditional machine learning models, and is ubiquitous in the world of data science.
Datasets
Scikit-learn contains methods for loading, fetching, and making (generating) data. The methods for doing this all fall under the datasets subpackage. Most of the functions in this package have load, fetch, or make in the name to let you know what the method is doing under the hood.
Loading functions bring static datasets into your program. The data comes pre-packaged with scikit-learn, so no network access is required.
Fetching functions also bring static datasets into your program. However, the data is pulled from the internet, so if you don't have network access, these functions might fail.
Generating functions make dynamic datasets based on some equation.
These pre-packaged dataset functions exist for many popular datasets, such as the MNIST digits dataset and the Iris flower dataset. The generation functions reference classic dataset "shape" formations such as moons and swiss rolls. These datasets are perfect for getting familiar with machine learning.
Loading
Let us first look at an example of loading data. We will load the iris flowers dataset using the load_iris function.
End of explanation
"""
type(iris_data)
"""
Explanation: That's a lot to take in. Let's examine this loaded data a little more closely. First we'll see what data type this dataset is:
End of explanation
"""
dir(iris_data)
"""
Explanation: sklearn.utils.Bunch is a type that you'll see quite often when working with datasets built into scikit-learn. It is a dictionary-like container for feature and target data within a dataset.
You won't find much documentation about Bunch objects because they are not really meant for usage beyond containing data native to scikit-learn.
Let's look at the attributes of the iris dataset:
End of explanation
"""
print(iris_data['DESCR'])
"""
Explanation: DESCR is a description of the dataset.
End of explanation
"""
print(iris_data['filename'])
"""
Explanation: filename is the name of the source file where the data is stored.
End of explanation
"""
print(iris_data['feature_names'])
"""
Explanation: feature_names is the name of the feature columns.
End of explanation
"""
print(iris_data['target_names'])
"""
Explanation: target_names, despite the name, is not the names of the target columns. There is only one column of targets.
Instead, target_names is the human-readable names of the classes in the target list within the bunch. In this case, target_names is the names of the three species of iris in this dataset.
End of explanation
"""
print(iris_data['target'])
"""
Explanation: We can now examine target and see that it contains zeros, ones, and twos. These correspond to the target names 'setosa', 'versicolor', and 'virginica'.
End of explanation
"""
iris_data['data']
"""
Explanation: Last, we'll look at the data within the bunch. The data is an array of arrays. Each sub-array contains four values. These values match up with the feature_names. The first item in each sub-array is 'sepal length (cm)', the next is 'sepal width (cm)', and so on.
End of explanation
"""
print(len(iris_data['data']))
print(len(iris_data['target']))
"""
Explanation: The number of target values should always equal the number of rows in the data.
End of explanation
"""
import pandas as pd
import numpy as np
iris_df = pd.DataFrame(
data=np.append(
iris_data['data'],
np.array(iris_data['target']).reshape(len(iris_data['target']), 1),
axis=1),
columns=np.append(iris_data['feature_names'], ['species'])
)
iris_df.sample(n=10)
"""
Explanation: Bunch objects are an adequate container for data. They can be used directly to feed models. However, Bunch objects are not very good for analyzing and manipulating your data.
In this course, we will typically convert Bunch objects into Pandas DataFrame objects to make analysis, data cleaning, visualization, and train/test splitting easier.
To do this, we will take the matrix of feature data and append the target data to it to create a single matrix of data. We also take the list of feature names and append the word 'species' to represent the target classes in the matrix.
End of explanation
"""
iris_df['species'] = iris_df['species'].astype('int64')
iris_df.sample(n=10)
"""
Explanation: You might notice that the integer representation of species got converted to a floating point number along the way. We can change that back.
End of explanation
"""
# Your answer goes here
"""
Explanation: Exercise 1
Load the Boston house price dataset into a Pandas DataFrame. Append the target values to the last column of the DataFrame called boston_df. Name the target column 'PRICE'.
Student Solution
End of explanation
"""
from sklearn.datasets import fetch_california_housing
housing_data = fetch_california_housing()
type(housing_data)
"""
Explanation: Fetching
Fetching is similar to loading. Scikit-learn will first see if it can find the dataset locally, and, if so, it will simply load the data. Otherwise, it will attempt to pull the data from the internet.
We can see fetching in action with the fetch_california_housing function below.
End of explanation
"""
dir(housing_data)
"""
Explanation: The dataset is once again a Bunch.
If you follow the link to the fetch_california_housing documentation, you notice that the dataset is a regression dataset as opposed to the iris dataset, which was a classification dataset.
We can see the difference in the dataset by checking out the attributes of the Bunch.
End of explanation
"""
print(housing_data['target'][:10])
"""
Explanation: We see that four of the attributes that we expect are present, but 'target_names' is missing. This is because our target is now a continuous variable (home price) and not a discrete value (iris species).
End of explanation
"""
import pandas as pd
import numpy as np
housing_df = pd.DataFrame(
data=np.append(
housing_data['data'],
np.array(housing_data['target']).reshape(len(housing_data['target']), 1),
axis=1),
columns=np.append(housing_data['feature_names'], ['price'])
)
housing_df.sample(n=10)
"""
Explanation: Converting a Bunch of regression data to a DataFrame is no different than for a Bunch of classification data.
End of explanation
"""
from sklearn.datasets import make_regression
features, targets = make_regression(n_samples=10, n_features=1, random_state=42)
features, targets
"""
Explanation: Generating
In the example datasets we've seen so far in this Colab, the data is static and loaded from a file. Sometimes it makes more sense to generate a dataset. For this, we can use one of the many generator functions.
make_regression is a generator that creates a dataset with an underlying regression that you can then attempt to discover using various machine learning models.
In the example below, we create a dataset with 10 data points. For the sake of visualization, we have only one feature per datapoint, but we could ask for more.
The return values are the $X$ and $y$ values for the regression. $X$ is a matrix of features. $y$ is a list of targets.
Since a generator uses randomness to generate data, we are going to set a random_state in this Colab for reproducibility. This ensures we get the same result every time we run the code. You won't do this in your production code.
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot(features, targets, 'b.')
plt.show()
"""
Explanation: We can use a visualization library to plot the regression data.
End of explanation
"""
from sklearn.datasets import make_regression
features, targets = make_regression(n_samples=10, n_features=1, random_state=42, noise=5.0)
plt.plot(features, targets, 'b.')
plt.show()
"""
Explanation: That data appears to have a very linear pattern!
If we want to make it more realistic (non-linear), we can add some noise during data generation.
Remember that random_state is for reproducibility only. Don't use this in your code unless you have a good reason to.
End of explanation
"""
# Your answer goes here
"""
Explanation: There are dozens of dataset loaders and generators in the scikit-learn datasets package. When you want to play with a new machine learning algorithm, they are a great source of data for getting started.
Exercise 2
Search the scikit-learn datasets documentation and find a function to make a "Moons" dataset. Create a dataset with 75 samples. Use a random state of 42 and a noise of 0.08. Store the $X$ return value in a variable called features and the $y$ return value in a variable called targets.
Student Solution
End of explanation
"""
# Your answer goes here
"""
Explanation: Exercise 3
In Exercise Two, you created a "moons" dataset. In that dataset, the features are $(x,y)$-coordinates that can be graphed in a scatterplot. The targets are zeros and ones that represent a binary classification.
Use matplotlib's scatter method to visualize the data as a scatterplot. Use the c argument to make the dots for each class a different color.
Student Solution
End of explanation
"""
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
regression = LinearRegression()
regression.fit(features, targets)
predictions = regression.predict(features)
plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()
"""
Explanation: Models
Machine learning involves training a model to gain insight and predictive power from a dataset. Scikit-learn has support for many different types of models, ranging from classic algebraic models to more modern deep learning models.
Throughout the remainder of this course, you will learn about many of these models in much more depth. This section will walk you through some of the overarching concepts across all models.
Estimators
Most of the models in scikit-learn are considered estimators. An estimator is expected to implement two methods: fit and predict.
fit is used to train the model. At a minimum, it is passed the feature data used to train the model. In supervised models, it is also passed the target data.
predict is used to get predictions from the model. This method is passed features and returns target predictions.
Let's see an example of this in action.
Linear regression is a simple model that you might have encountered in a statistics class in the past. The model attempts to draw a straight line through a set of data points, so the line is as close to as many points as possible.
We'll use scikit-learn's LinearRegression class to fit a line to the regression data that we generated earlier in this Colab. To do that, we simply call the fit(features, targets) method.
After fitting, we can ask the model for predictions. To do this, we use the predict(features) method.
End of explanation
"""
features
"""
Explanation: At this point, don't worry too much about the details of what LinearRegression is doing. There is a deep-dive into regression problems coming up soon.
For now, just note the fit/predict pattern for training estimators, and know that you'll see it throughout our adventures with scikit-learn.
Transformers
In practice, it is rare that you will get perfectly clean data that is ready to feed into your model for training. Most of the time, you will need to perform some type of cleaning on the data first.
You've had some hands-on experience doing this in our Pandas Colabs. Scikit-learn can also be used to perform some data preprocessing.
Transformers are spread about within the scikit-learn library. Some are in the preprocessing module while others are in more specialized packages like compose, feature_extraction, impute, and others.
All transformers implement the fit and transform methods. The fit method calculates parameters necessary to perform the data transformation. The transform method actually applies the transformation. There is a convenience fit_transform method that performs both fitting and transformation in one method call.
Let's see a transformer in action.
We will use the MinMaxScaler to scale our feature data between zero and one. This scales the data with a linear transform so that the minimum value becomes 0 and the maximum value becomes 1, so all values are within 0 and 1.
Looking at our feature data pre-transformation, we can see values that are below zero and above one.
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
transformer = MinMaxScaler()
transformer.fit(features)
transformer.data_min_, transformer.data_max_
"""
Explanation: We will now create a MinMaxScaler and fit it to our feature data.
Each transformer has different information that it needs in order to perform a transformation. In the case of the MinMaxScaler, the smallest and largest values in the data are needed.
End of explanation
"""
features = transformer.transform(features)
features
"""
Explanation: You might notice that the values are stored in arrays. This is because transformers can operate on more than one feature. In this case, however, we have only one.
Next, we need to apply the transformation to our features. After the transformation, we can now see that all of the features fall between the range of zero to one. Moreover, you might notice that the minimum and maximum value in the untransformed features array correspond to the 0 and 1 in the transformed array, respectively.
End of explanation
"""
from sklearn.pipeline import Pipeline
features, targets = make_regression(
n_samples=10, n_features=1, random_state=42, noise=5.0)
pipeline = Pipeline([
('scale', MinMaxScaler()),
('regression', LinearRegression())
])
pipeline.fit(features, targets)
predictions = pipeline.predict(features)
plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()
"""
Explanation: Pipelines
A pipeline is simply a series of transformers, often with an estimator at the end.
In the example below, we use a Pipeline class to perform min-max scaling or our feature data and then train a linear regression model using the scaled features.
End of explanation
"""
from sklearn.metrics import mean_squared_error
mean_squared_error(targets, predictions)
"""
Explanation: Metrics
So far we have seen ways that scikit-learn can help you get data, modify that data, train a model, and finally, make predictions. But how do we know how good these predictions are?
Scikit-learn also comes with many functions for measuring model performance in the metrics package. Later in this course, you will learn about different ways to measure the performance of regression and classification models, as well as tradeoffs between the different metrics.
We can use the mean_squared_error function to find the mean squared error (MSE) between the target values that we used to train our linear regression model and the predicted values.
End of explanation
"""
print(regression.score(features, targets))
print(pipeline.score(features, targets))
"""
Explanation: In this case, the MSE value alone doesn't have much meaning. Since the data that we fit the regression to isn't related to any real-world metrics, the MSE is hard to interpret alone.
As we learn more about machine learning and begin training models on real data, you'll learn how to interpret MSE and other metrics in the context of the data being analyzed and the problem being solved.
There are also metrics that come with each estimator class. These metrics can be extracted using the score method.
The regression class we created earlier can be scored, as can the pipeline.
End of explanation
"""
# Your answer goes here
"""
Explanation: The return value of the score method depends on the estimator being used. In the case of LinearRegression, the score is the $R^2$ score, where scores closer to 1.0 are better. You can find the metric that score returns in the documentation for the given estimator you're using.
Exercise 4
Use the Pipeline class to combine a data pre-processor and an estimator.
To accomplish this:
Find a preprocessor that uses the max absolute value for scaling.
Find a linear_model based on the Huber algorithm.
Combine this preprocessor and estimator into a pipeline.
Make a sample regression dataset with 200 samples and 1 feature. Use a random state of 85 and a noise of 5.0. Save the features in a variable called features and the targets in a variable called targets.
Fit the model.
Using the features that were created when the regression dataset was created, make predictions with the model and save them into a variable called predictions.
Plot the features and targets used to train the model on a scatterplot with blue dots.
Plot the features and predictions over the scatterplot as a red line.
Student Solution
End of explanation
"""
|
dwhswenson/openpathsampling | examples/alanine_dipeptide_mstis/AD_mstis_2_run.ipynb | mit | %matplotlib inline
import openpathsampling as paths
import numpy as np
import math
# the openpathsampling OpenMM engine
import openpathsampling.engines.openmm as eng
"""
Explanation: Run from bootstrap paths
Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored.
Tasks covered in this notebook:
* Loading OPS objects from storage
* Ways of assigning initial trajectories to initial samples
* Setting up a path sampling simulation with various move schemes
* Visualizing trajectories while the path sampling is running
End of explanation
"""
old_store = paths.AnalysisStorage("ala_mstis_bootstrap.nc")
"""
Explanation: Loading things from storage
First we'll reload some of the stuff we stored before. Of course, this starts with opening the file.
End of explanation
"""
print("PathMovers:", len(old_store.pathmovers))
print("Engines:", len(old_store.engines))
print("Samples:", len(old_store.samples))
print("Trajectories:", len(old_store.trajectories))
print("Ensembles:", len(old_store.ensembles))
print("SampleSets:", len(old_store.samplesets))
print("Snapshots:", len(old_store.snapshots))
print("Networks:", len(old_store.networks))
"""
Explanation: A lot of information can be recovered from the old storage, and so we don't have the recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
End of explanation
"""
# template = old_store.snapshots[0]
engine = old_store.engines['default']
mstis = old_store.networks[0]
sset = old_store.tag['sampleset']
"""
Explanation: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one.
End of explanation
"""
#platform = 'CUDA'
#engine.initialize(platform)
print('Engine uses platform `%s`' % engine.platform)
sset.sanity_check()
"""
Explanation: initialize engine
if we do not select a platform the fastest possible will be chosen but we explicitly request to use the one in the config file
End of explanation
"""
# logging creates ops_output.log file with details of what the calculation is doing
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
storage = paths.storage.Storage("ala_mstis_production.nc", "w")
storage.snapshots.save(old_store.snapshots[0]);
"""
Explanation: Running RETIS
Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object.
End of explanation
"""
scheme = paths.DefaultScheme(mstis, engine)
"""
Explanation: Before we can sample, we still need to set the actual MoveScheme, which determines the
set of moves to apply to our set of samples, effectively performing the steps in
replica (sampleset) space. We pick the default scheme for mstis and feed it
the engine to be used.
End of explanation
"""
mstis_calc = paths.PathSampling(
storage=storage,
sample_set=sset,
move_scheme=scheme
)
mstis_calc.save_frequency = 10
#mstis_calc.save_frequency = 1
"""
Explanation: and finally generate the PathSampler object to conduct the simulation.
End of explanation
"""
mstis_calc.run(10000)
print(len(storage.steps))
# commented out during development, so we can "run all" and then do more
storage.close()
"""
Explanation: Now everything is ready: let's run the simulation! The first step takes a little while, since all
necessary information, i.e. the engines, topologies, initial snapshots, ..., needs to be
stored. Then the Monte Carlo steps are performed.
End of explanation
"""
|
edouardklein/JsItBad | JsItBad.ipynb | agpl-3.0 | import glob
import string
import re
import numpy as np
# Loading the data
data = []
for js_file in glob.glob('Javascript/*/*'):
new = {}
new['name'] = js_file.split('/')[-1]
new['code'] = open(js_file,'r').read()
if new['name'][-2:] == 'js':
if new['name'][-6:] == 'min.js':
new['nature'] = 'Minified'
new['color'] = 'b'
else:
new['nature'] = 'Normal'
new['color'] = 'g'
elif new['name'][-3:] == 'out':
new['nature'] = 'Malicious'
new['color'] = 'r'
data.append(new)
"""
Explanation: Determining whether a Javascript sample is malicious is not computable (https://en.wikipedia.org/wiki/Computable_function) : we are looking for an algorithm that takes a program (which can be seen as an arbitrary Turing Machine) as an input and whose output is a property of the execution of that program.
If you are unfamiliar with the theory of computability and want ot get an intuitive sense of this, imagine writing a JS sample that non trivially never terminates. A simple while(1){} would not do the trick because it can be trivially proven (without executing it) that it never terminates.
A program terminating depending on the answer to some complex mathematical problem (e.g. finding whether a big number is prime) can not be proven to terminate short of actually solving the problem, the best method for doing so being to actually execute the program.
Therefore, the best way to now if this program will terminate is to execute it, which may never ends. That is why deciding a property about the execution of that program is not computable in the general case.
This does not deter us from trying though, because in practice a program that does not terminate in a few seconds will be interrupted by the browser, and is therefore neither malicious nor begnin, it is non-fonctional. The goal here is to devise some indicator of malignity of a JS sample without even executing it (who wants to execute malicious code ?).
Related works
\cite{likarish2009obfuscated}. Good introduction and good discussion, but what they built is more an obfuscation detector than anything else. We use their features anyway.
We limit ourselves to features that can be computed without even parsing the JS (if only because the parser itself could be the target of an attack).
Code
End of explanation
"""
import re
import string

def length(code):
return len(code)
def nb_lines(code):
return len(code.split('\n'))
def avg_char_per_line(code):
return length(code)/nb_lines(code)
def nb_strings(code):
'''Ugly approximation, no simple way out of this short of actually parsing the JS.'''
return len(code.split("'"))+len(code.split('"'))
def nb_non_printable(code):
    '''\cite{likarish2009obfuscated} uses unicode symbols, but we are more general'''
return len([x for x in code if not x in string.printable])
hex_octal_re = re.compile('([^A-F0-9]0[0-7]+|0x[A-F0-9]+)')
def hex_or_octal(code):
    '''Ugly as hell, but we don't want to parse'''
return len(list(hex_octal_re.finditer(code)))
def max_nesting_level(code):
l = 0
max_l = 0
for c in code:
if c in '({[':
l+=1
max_l = l if l > max_l else max_l
elif c in ')}]':
l-=1
return max_l
features = [length, nb_lines, avg_char_per_line, nb_strings, nb_non_printable, hex_or_octal, max_nesting_level]
X = np.array([[f(x['code']) for f in features] for x in data])
X[:30]
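As a quick sanity check, the nesting-level feature can be exercised on a toy snippet. The function is duplicated below so the sketch runs on its own; it is illustrative only and not part of the pipeline above.

```python
# Standalone sanity check of max_nesting_level (duplicated from the
# feature definitions above so this snippet is self-contained).
def max_nesting_level(code):
    l = 0
    max_l = 0
    for c in code:
        if c in '({[':
            l += 1
            max_l = l if l > max_l else max_l
        elif c in ')}]':
            l -= 1
    return max_l

# Three levels of nesting: ( -> { -> [
print(max_nesting_level("f({a: [1, 2]})"))  # 3
```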
#http://scikit-learn.org/stable/auto_examples/manifold/plot_compare_methods.html#example-manifold-plot-compare-methods-py
from sklearn import manifold
%matplotlib inline
import matplotlib.pylab as plt
n_neighbors = 10
n_components = 2
#Y = manifold.Isomap(n_neighbors, n_components).fit_transform(X)
#Y = manifold.LocallyLinearEmbedding(n_neighbors, n_components,
# eigen_solver='auto').fit_transform(X)
Y = manifold.MDS(n_components, max_iter=100, n_init=1).fit_transform(X)
#Y = manifold.SpectralEmbedding(n_components=n_components,
# n_neighbors=n_neighbors).fit_transform(X)
#Y = manifold.TSNE(n_components=n_components, init='pca', random_state=0).fit_transform(X)
plt.scatter(Y[:, 0], Y[:, 1], c=[x['color'] for x in data], alpha=0.2)
for label, x, y in zip([x['name'] for x in data], Y[:, 0], Y[:, 1]):
    if '.js' in label and not ('min.' in label):
plt.annotate(label ,xy=[x,y])
plt.savefig('toto.pdf')
"""
Explanation: Features
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/calculate_difference_between_dates_and_times.ipynb | mit | # Load library
import pandas as pd
"""
Explanation: Title: Calculate Difference Between Dates And Times
Slug: calculate_difference_between_dates_and_times
Summary: How to calculate differences between dates and times for machine learning in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Dates And Times
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create data frame
df = pd.DataFrame()
# Create two datetime features
df['Arrived'] = [pd.Timestamp('01-01-2017'), pd.Timestamp('01-04-2017')]
df['Left'] = [pd.Timestamp('01-01-2017'), pd.Timestamp('01-06-2017')]
"""
Explanation: Create Date And Time Data
End of explanation
"""
# Calculate duration between features
df['Left'] - df['Arrived']
"""
Explanation: Calculate Difference (Method 1)
End of explanation
"""
# Calculate duration between features
pd.Series(delta.days for delta in (df['Left'] - df['Arrived']))
"""
Explanation: Calculate Difference (Method 2)
End of explanation
"""
|
Diyago/Machine-Learning-scripts | DEEP LEARNING/Pytorch from scratch/CNN/project-dog-classification/dog_app.ipynb | apache-2.0 | import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
"""
Explanation: Convolutional Neural Networks
Project: Write an Algorithm for a Dog Identification App
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).
In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
Step 0: Import Datasets
Step 1: Detect Humans
Step 2: Detect Dogs
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
Step 5: Write your Algorithm
Step 6: Test Your Algorithm
<a id='step0'></a>
Step 0: Import Datasets
Make sure that you've downloaded the required human and dog datasets:
* Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.
Download the human dataset. Unzip the folder and place it in the home directory, at location /lfw.
Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.
End of explanation
"""
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
"""
Explanation: <a id='step1'></a>
Step 1: Detect Humans
In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
End of explanation
"""
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
"""
Explanation: Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.
In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
Write a Human Face Detector
We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.
End of explanation
"""
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
"""
Explanation: (IMPLEMENTATION) Assess the Human Face Detector
Question 1: Use the code cell below to test the performance of the face_detector function.
- What percentage of the first 100 images in human_files have a detected human face?
- What percentage of the first 100 images in dog_files have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.
Answer:
(You can print out your results and/or write your percentages in this cell)
End of explanation
"""
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
"""
Explanation: We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
End of explanation
"""
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()
"""
Explanation: <a id='step2'></a>
Step 2: Detect Dogs
In this section, we use a pre-trained model to detect dogs in images.
Obtain Pre-trained VGG-16 Model
The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.
End of explanation
"""
from PIL import Image
import torchvision.transforms as transforms
def VGG16_predict(img_path):
'''
Use pre-trained VGG-16 model to obtain index corresponding to
predicted ImageNet class for image at specified path
Args:
img_path: path to an image
Returns:
Index corresponding to VGG-16 model's prediction
'''
## TODO: Complete the function.
## Load and pre-process an image from the given img_path
## Return the *index* of the predicted class for that image
return None # predicted class index
"""
Explanation: Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
(IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.
End of explanation
"""
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
## TODO: Complete the function.
return None # true/false
"""
Explanation: (IMPLEMENTATION) Write a Dog Detector
While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
End of explanation
"""
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
"""
Explanation: (IMPLEMENTATION) Assess the Dog Detector
Question 2: Use the code cell below to test the performance of your dog_detector function.
- What percentage of the images in human_files_short have a detected dog?
- What percentage of the images in dog_files_short have a detected dog?
Answer:
End of explanation
"""
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
"""
Explanation: We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
End of explanation
"""
import os
from torchvision import datasets
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
"""
Explanation: <a id='step3'></a>
Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!
End of explanation
"""
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
## Define layers of a CNN
def forward(self, x):
## Define forward behavior
return x
#-#-# You do NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
model_scratch.cuda()
"""
Explanation: Question 3: Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
Answer:
(IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.
End of explanation
"""
import torch.optim as optim
### TODO: select loss function
criterion_scratch = None
### TODO: select optimizer
optimizer_scratch = None
"""
Explanation: Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
Answer:
(IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.
End of explanation
"""
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
# return trained model
return model
# train the model
model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch,
criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
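The running-average update suggested in the training-loop comment above can be sanity-checked in isolation: applying it batch by batch reproduces the plain mean of the per-batch losses.

```python
# Sanity check of the running-average update
#   avg = avg + (1 / (batch_idx + 1)) * (loss - avg)
# over a small list of fake per-batch losses.
losses = [4.0, 2.0, 3.0, 7.0]
avg = 0.0
for i, loss in enumerate(losses):
    avg = avg + ((1.0 / (i + 1)) * (loss - avg))
print(avg)  # 4.0, the mean of the four losses
```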
"""
Explanation: (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.
End of explanation
"""
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
"""
Explanation: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
End of explanation
"""
## TODO: Specify data loaders
"""
Explanation: <a id='step4'></a>
Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).
If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.
End of explanation
"""
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
if use_cuda:
model_transfer = model_transfer.cuda()
"""
Explanation: (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.
End of explanation
"""
criterion_transfer = None
optimizer_transfer = None
"""
Explanation: Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
Answer:
(IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.
End of explanation
"""
# train the model (uncomment the line below and supply n_epochs)
# model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
#model_transfer.load_state_dict(torch.load('model_transfer.pt'))
"""
Explanation: (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.
End of explanation
"""
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
"""
Explanation: (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
End of explanation
"""
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes]
def predict_breed_transfer(img_path):
# load the image and return the predicted breed
return None
"""
Explanation: (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.
End of explanation
"""
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
## handle cases for a human face, dog, and neither
"""
Explanation: <a id='step5'></a>
Step 5: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a dog is detected in the image, return the predicted breed.
- if a human is detected in the image, return the resembling dog breed.
- if neither is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and human_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!
(IMPLEMENTATION) Write your Algorithm
End of explanation
"""
## TODO: Execute your algorithm from Step 5 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
"""
Explanation: <a id='step6'></a>
Step 6: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
(IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
Answer: (Three possible points for improvement)
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/04_advanced_preprocessing/a_dataflow.ipynb | apache-2.0 | #Ensure that we have the correct version of Apache Beam installed
!pip freeze | grep apache-beam || sudo pip install apache-beam[gcp]==2.12.0
import tensorflow as tf
import apache_beam as beam
import shutil
import os
print(tf.__version__)
"""
Explanation: Data Preprocessing for Machine Learning
Learning Objectives
* Understand the different approaches for data preprocessing in developing ML models
* Use Dataflow to perform data preprocessing steps
Introduction
In the previous notebook we achieved an RMSE of 3.85. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.
Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into prepared data which is necessary for the model. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, BigQuery, Cloud Dataflow and Tensorflow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.
One perspective in which to categorize different types of data preprocessing operations is in terms of the granularity of the operation. Here, we will consider the following three types of operations:
1. Instance-level transformations
2. Full-pass transformations
3. Time-windowed aggregations
Cloud Dataflow can perform each of these types of operations and is particularly useful when performing computationally expensive operations as it is an autoscaling service for batch and streaming data processing pipelines. We'll say a few words about each of these below. For more information, have a look at this article about data preprocessing for machine learning from Google Cloud.
1. Instance-level transformations
These are transformations which take place during training and prediction, looking only at values from a single data point. For example, they might include clipping the value of a feature, polynomially expand a feature, multiply two features, or compare two features to create a Boolean flag.
It is necessary to apply the same transformations at training time and at prediction time. Failure to do this results in training/serving skew and will negatively affect the performance of the model.
2. Full-pass transformations
These transformations occur during training, but occur as instance-level operations during prediction. That is, during training you must analyze the entirety of the training data to compute quantities such as maximum, minimum, mean or variance while at prediction time you need only use those values to rescale or normalize a single data point.
A good example to keep in mind is standard scaling (z-score normalization) of features for training. You need to compute the mean and standard deviation of that feature across the whole training data set, thus it is called a full-pass transformation. At prediction time you use those previously computed values to appropriately normalize the new data point. Failure to do so results in training/serving skew.
3. Time-windowed aggregations
These types of transformations occur during training and at prediction time. They involve creating a feature by summarizing real-time values, aggregating over some temporal window clause. For example, if we wanted our model to estimate the taxi trip time based on the traffic metrics for the route in the last 5 minutes, the last 10 minutes or the last 30 minutes, we would want to create a time window to aggregate these values.
At prediction time these aggregations have to be computed in real-time from a data stream.
Set environment variables and load necessary libraries
Apache Beam only works in Python 2 at the moment, so switch to the Python 2 kernel in the upper right hand side. Then execute the following cells to install the necessary libraries if they have not been installed already.
End of explanation
"""
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.13" # TF version for CMLE to use
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
"""
Explanation: Next, set the environment variables related to your GCP Project.
End of explanation
"""
def create_query(phase, sample_size):
basequery = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,
EXTRACT(HOUR from pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1
"""
if phase == 'TRAIN':
subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)
"""
elif phase == 'VALID':
subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 85)
"""
elif phase == 'TEST':
subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 100)
"""
query = basequery + subsample
return query.replace("EVERY_N", sample_size)
"""
Explanation: Create data preprocessing job with Cloud Dataflow
The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that, in this case we'd have to remember to repeat that prepreprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.
It's worth noting that while we could read from BQ directly from TensorFlow, it is quite convenient to export to CSV and do the training off CSV. We can do this at scale with Cloud Dataflow. Furthermore, because we are running this on the cloud, you should go to the GCP Console to view the status of the job. It will take several minutes for the preprocessing job to launch.
Define our query and pipeline functions
To start we'll copy over the create_query function we created in the 01_bigquery/c_extract_and_benchmark notebook.
End of explanation
"""
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
"""
Explanation: Then, we'll write the CSV files we create to a Cloud Storage bucket. So, we'll check whether the location is empty and, if not, clear out its contents so that it is.
End of explanation
"""
def to_csv(rowdict):
"""
Arguments:
-rowdict: Dictionary. The beam bigquery reader returns a PCollection in
which each row is represented as a python dictionary
Returns:
-rowstring: a comma separated string representation of the record
"""
days = ["null", "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
CSV_COLUMNS = "fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat".split(',')
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
"""
Explanation: Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a to_csv function which takes a row dictionary (a dictionary created from a BigQuery reader representing each row of a dataset) and returns a comma-separated string for each record.
End of explanation
"""
import datetime
def preprocess(EVERY_N, RUNNER):
"""
Arguments:
-EVERY_N: Integer. Sample one out of every N rows from the full dataset.
Larger values will yield a smaller sample
    -RUNNER: "DirectRunner" or "DataflowRunner". Specify whether to run the pipeline
    locally or on Google Cloud, respectively.
Side-effects:
-Creates and executes dataflow pipeline.
See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
"""
job_name = "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/taxifare/ch4/taxi_preproc/".format(BUCKET)
#dictionary of pipeline options
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S"),
"project": PROJECT,
"runner": RUNNER
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags = [], **options)
#instantantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ["TRAIN", "VALID", "TEST"]:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, "{}.csv".format(phase))
(
p | "read_{}".format(phase) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "tocsv_{}".format(phase) >> beam.Map(to_csv)
| "write_{}".format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))
)
print("Done")
"""
Explanation: Next, we define our primary preprocessing function. Reading through the code this creates a pipeline to read data from BigQuery, use our to_csv function above to make a comma separated string, then write to a file in Google Cloud Storage.
End of explanation
"""
preprocess("50*10000", "DirectRunner")
"""
Explanation: Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the RUNNER variable as DirectRunner. To run our pipeline in the cloud, we set RUNNER to be DataflowRunner. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline.
As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset.
Run Beam pipeline locally
We'll start by testing our pipeline locally. This takes up to 5 minutes. You will see a message "Done" when it has finished.
End of explanation
"""
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi
"""
Explanation: Run Beam pipeline on Cloud Dataflow
Again, we'll clear out our bucket to GCS to ensure a fresh run.
End of explanation
"""
preprocess("50*100", "DataflowRunner")
"""
Explanation: The following step will take 15-20 minutes. Monitor job progress on the Dataflow section of GCP Console. Note, you can change the first argument to None to process the full dataset.
End of explanation
"""
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" | head
"""
Explanation: Once the job finishes, we can look at the files that have been created and have a look at what they contain. You will notice that the files have been sharded into many csv files.
End of explanation
"""
%%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/VALID.csv-00000-of-*" > sample/valid.csv
"""
Explanation: Develop a model with new inputs
We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called sample so we can develop locally first.
End of explanation
"""
%%bash
MODELDIR=./taxifaremodel
test -d $MODELDIR || mkdir $MODELDIR
cp -r ../03_model_performance/taxifaremodel/* $MODELDIR
"""
Explanation: To begin let's copy the model.py and task.py we developed in the previous notebooks here.
End of explanation
"""
%%bash
grep -A 15 "feature_cols =" taxifaremodel/model.py
"""
Explanation: Let's have a look at the files contained within the taxifaremodel folder. Within model.py we see that feature_cols has three engineered features.
End of explanation
"""
%%bash
grep -A 5 "add_engineered_features" taxifaremodel/model.py
"""
Explanation: We can also see the engineered features that are created by the add_engineered_features function here.
End of explanation
"""
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m taxifaremodel.task \
--train_data_path=${PWD}/sample/train.csv \
--eval_data_path=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
"""
Explanation: We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about 5 minutes to complete.
End of explanation
"""
%%bash
ls -R taxi_trained/export
"""
Explanation: We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job.
End of explanation
"""
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
"""
Explanation: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
End of explanation
"""
%%writefile /tmp/test.json
{"dayofweek": 0, "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ml-engine local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
"""
Explanation: To test out prediction with our model, we create a temporary json file containing the expected feature values.
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=taxifaremodel.task \
--package-path=${PWD}/taxifaremodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
--eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*" \
--train_steps=5000 \
--output_dir=$OUTDIR
"""
Explanation: Train on the Cloud
This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the ML Engine section of Cloud Console and wait for the training job to complete.
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1
"""
Explanation: Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved.
End of explanation
"""
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all
"""
Explanation: As before, we can use the saved_model_cli to examine the exported signature.
End of explanation
"""
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ml-engine local predict \
--model-dir=${model_dir} \
--json-instances=/tmp/test.json
"""
Explanation: And check our model's prediction with a local predict job on our test file.
End of explanation
"""
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi
# Preprocess the entire dataset
preprocess(None, "DataflowRunner")
%%bash
WARNING -- this uses significant resources and is optional. Remove this line to run the block.
OUTDIR=gs://${BUCKET}/taxifare/feateng2m
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
TIER=STANDARD_1
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=taxifaremodel.task \
--package-path=${PWD}/taxifaremodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=$TIER \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
--eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*" \
--output_dir=$OUTDIR \
--train_steps=418168 \
--hidden_units="64,64,64,8"
"""
Explanation: Hyperparameter tuning
Recall the hyper-parameter tuning notebook. We can repeat the process there to decide the best parameters to use for the model. Based on that run, I ended up choosing:
train_batch_size: 512
hidden_units: "64 64 64 8"
Let's now try a training job over a larger dataset.
(Optional) Run Cloud training on 2 million row dataset
This run uses as input 2 million rows and takes ~20 minutes with 10 workers (STANDARD_1 pricing tier). The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). Because the Dataflow preprocessing takes about 15 minutes, we train here using csv files in a public bucket.
When doing distributed training, use train_steps instead of num_epochs. The distributed workers don't know how many rows there are, but we can calculate train_steps = num_rows * num_epochs / train_batch_size. In this case, we have 2141023 * 100 / 512 = 418168 train steps.
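The train-steps arithmetic above is easy to check directly (integer division, since partial steps are dropped):

```python
num_rows = 2141023
num_epochs = 100
train_batch_size = 512
train_steps = num_rows * num_epochs // train_batch_size
print(train_steps)  # 418168
```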
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/mixed_lm_example.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.tools.sm_exceptions import ConvergenceWarning
"""
Explanation: Linear Mixed Effects Models
End of explanation
"""
data = sm.datasets.get_rdataset("dietox", "geepack").data
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"])
mdf = md.fit(method=["lbfgs"])
print(mdf.summary())
"""
Explanation: Note: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and lme4 1.1.
ipython
%load_ext rpy2.ipython
ipython
%R library(lme4)
array(['lme4', 'Matrix', 'tools', 'stats', 'graphics', 'grDevices',
'utils', 'datasets', 'methods', 'base'], dtype='<U9')
Comparing R lmer to statsmodels MixedLM
The statsmodels implementation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package LME4. Other packages such as Stata, SAS, etc. should also be consistent with this approach, as the basic techniques in this area are mostly mature.
Here we show how linear mixed models can be fit using the MixedLM procedure in statsmodels. Results from R (LME4) are included for comparison.
Here are our import statements:
Growth curves of pigs
These are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is "time". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.
End of explanation
"""
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"], re_formula="~Time")
mdf = md.fit(method=["lbfgs"])
print(mdf.summary())
"""
Explanation: Here is the same model fit in R using LMER:
ipython
%%R
data(dietox, package='geepack')
ipython
%R print(summary(lmer('Weight ~ Time + (1|Pig)', data=dietox)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 | Pig)
Data: dietox
REML criterion at convergence: 4809.6
Scaled residuals:
Min 1Q Median 3Q Max
-4.7118 -0.5696 -0.0943 0.4877 4.7732
Random effects:
Groups Name Variance Std.Dev.
Pig (Intercept) 40.39 6.356
Residual 11.37 3.371
Number of obs: 861, groups: Pig, 72
Fixed effects:
Estimate Std. Error t value
(Intercept) 15.72352 0.78805 19.95
Time 6.94251 0.03339 207.94
Correlation of Fixed Effects:
(Intr)
Time -0.275
```
Note that in the statsmodels summary of results, the fixed effects and random effects parameter estimates are shown in a single table. The random effect for animal is labeled "Intercept RE" in the statsmodels output above. In the LME4 output, this effect is the pig intercept under the random effects section.
There has been a lot of debate about whether the standard errors for random effect variance and covariance parameters are useful. In LME4, these standard errors are not displayed, because the authors of the package believe they are not very informative. While there is good reason to question their utility, we elected to include the standard errors in the summary table, but do not show the corresponding Wald confidence intervals.
Next we fit a model with two random effects for each animal: a random intercept, and a random slope (with respect to time). This means that each pig may have a different baseline weight, as well as growing at a different rate. The formula specifies that "Time" is a covariate with a random coefficient. By default, formulas always include an intercept (which could be suppressed here using "0 + Time" as the formula).
End of explanation
"""
0.294 / (19.493 * 0.416) ** 0.5
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"], re_formula="~Time")
free = sm.regression.mixed_linear_model.MixedLMParams.from_components(
np.ones(2), np.eye(2)
)
mdf = md.fit(free=free, method=["lbfgs"])
print(mdf.summary())
"""
Explanation: Here is the same model fit using LMER in R:
ipython
%R print(summary(lmer("Weight ~ Time + (1 + Time | Pig)", data=dietox)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 + Time | Pig)
Data: dietox
REML criterion at convergence: 4434.1
Scaled residuals:
Min 1Q Median 3Q Max
-6.4286 -0.5529 -0.0416 0.4841 3.5624
Random effects:
Groups Name Variance Std.Dev. Corr
Pig (Intercept) 19.493 4.415
Time 0.416 0.645 0.10
Residual 6.038 2.457
Number of obs: 861, groups: Pig, 72
Fixed effects:
Estimate Std. Error t value
(Intercept) 15.73865 0.55012 28.61
Time 6.93901 0.07982 86.93
Correlation of Fixed Effects:
(Intr)
Time 0.006
```
The random intercept and random slope are only weakly correlated $(0.294 / \sqrt{19.493 * 0.416} \approx 0.1)$. So next we fit a model in which the two random effects are constrained to be uncorrelated:
End of explanation
"""
data = sm.datasets.get_rdataset("Sitka", "MASS").data
endog = data["size"]
data["Intercept"] = 1
exog = data[["Intercept", "Time"]]
"""
Explanation: The likelihood drops by 0.3 when we fix the correlation parameter to 0. Comparing 2 x 0.3 = 0.6 to the chi^2 1 df reference distribution suggests that the data are very consistent with a model in which this parameter is equal to 0.
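As a sanity check, the tail probability for this likelihood-ratio comparison can be computed directly: for a chi-square distribution with 1 degree of freedom, P(X > x) = 1 - erf(sqrt(x / 2)), so no SciPy dependency is needed:

```python
import math

lr_stat = 2 * 0.3  # twice the drop in log likelihood
# For chi^2 with 1 df: P(X > x) = 1 - erf(sqrt(x / 2))
p_value = 1 - math.erf(math.sqrt(lr_stat / 2))
print(round(p_value, 3))  # ~0.439: no evidence against zero correlation
```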
Here is the same model fit using LMER in R (note that here R is reporting the REML criterion instead of the likelihood, where the REML criterion is twice the log likelihood):
ipython
%R print(summary(lmer("Weight ~ Time + (1 | Pig) + (0 + Time | Pig)", data=dietox)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 | Pig) + (0 + Time | Pig)
Data: dietox
REML criterion at convergence: 4434.7
Scaled residuals:
Min 1Q Median 3Q Max
-6.4281 -0.5527 -0.0405 0.4840 3.5661
Random effects:
Groups Name Variance Std.Dev.
Pig (Intercept) 19.8404 4.4543
Pig.1 Time 0.4234 0.6507
Residual 6.0282 2.4552
Number of obs: 861, groups: Pig, 72
Fixed effects:
Estimate Std. Error t value
(Intercept) 15.73875 0.55444 28.39
Time 6.93899 0.08045 86.25
Correlation of Fixed Effects:
(Intr)
Time -0.086
```
Sitka growth data
This is one of the example data sets provided in the LMER R library. The outcome variable is the size of the tree, and the covariate used here is a time value. The data are grouped by tree.
End of explanation
"""
md = sm.MixedLM(endog, exog, groups=data["tree"], exog_re=exog["Intercept"])
mdf = md.fit()
print(mdf.summary())
"""
Explanation: Here is the statsmodels LME fit for a basic model with a random intercept. We are passing the endog and exog data directly to the LME init function as arrays. Also note that exog_re is specified explicitly in argument 4 as a random intercept (although this would also be the default if it were not specified).
End of explanation
"""
exog_re = exog.copy()
md = sm.MixedLM(endog, exog, data["tree"], exog_re)
mdf = md.fit()
print(mdf.summary())
"""
Explanation: Here is the same model fit in R using LMER:
ipython
%R
data(Sitka, package="MASS")
print(summary(lmer("size ~ Time + (1 | tree)", data=Sitka)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: size ~ Time + (1 | tree)
Data: Sitka
REML criterion at convergence: 164.8
Scaled residuals:
Min 1Q Median 3Q Max
-2.9979 -0.5169 0.1576 0.5392 4.4012
Random effects:
Groups Name Variance Std.Dev.
tree (Intercept) 0.37451 0.612
Residual 0.03921 0.198
Number of obs: 395, groups: tree, 79
Fixed effects:
Estimate Std. Error t value
(Intercept) 2.2732443 0.0878955 25.86
Time 0.0126855 0.0002654 47.80
Correlation of Fixed Effects:
(Intr)
Time -0.611
```
We can now try to add a random slope. We start with R this time. From the code and output below we see that the REML estimate of the variance of the random slope is nearly zero.
ipython
%R print(summary(lmer("size ~ Time + (1 + Time | tree)", data=Sitka)))
```
Linear mixed model fit by REML ['lmerMod']
Formula: size ~ Time + (1 + Time | tree)
Data: Sitka
REML criterion at convergence: 153.4
Scaled residuals:
Min 1Q Median 3Q Max
-2.7609 -0.5173 0.1188 0.5270 3.5466
Random effects:
Groups Name Variance Std.Dev. Corr
tree (Intercept) 2.217e-01 0.470842
Time 3.288e-06 0.001813 -0.17
Residual 3.634e-02 0.190642
Number of obs: 395, groups: tree, 79
Fixed effects:
Estimate Std. Error t value
(Intercept) 2.273244 0.074655 30.45
Time 0.012686 0.000327 38.80
Correlation of Fixed Effects:
(Intr)
Time -0.615
convergence code: 0
Model failed to converge with max|grad| = 0.793203 (tol = 0.002, component 1)
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?
```
If we run this in statsmodels LME with defaults, we see that the variance estimate is indeed very small, which leads to a warning about the solution being on the boundary of the parameter space. The regression slopes agree very well with R, but the likelihood value is much higher than that returned by R.
End of explanation
"""
import warnings
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
likev = mdf.profile_re(0, "re", dist_low=0.1, dist_high=0.1)
"""
Explanation: We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the MLE. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 8))
plt.plot(likev[:, 0], 2 * likev[:, 1])
plt.xlabel("Variance of random intercept", size=17)
plt.ylabel("-2 times profile log likelihood", size=17)
"""
Explanation: Here is a plot of the profile likelihood function. We multiply the log-likelihood difference by 2 to obtain the usual $\chi^2$ reference distribution with 1 degree of freedom.
End of explanation
"""
re = mdf.cov_re.iloc[1, 1]
with warnings.catch_warnings():
# Parameter is often on the boundary
warnings.simplefilter("ignore", ConvergenceWarning)
likev = mdf.profile_re(1, "re", dist_low=0.5 * re, dist_high=0.8 * re)
plt.figure(figsize=(10, 8))
plt.plot(likev[:, 0], 2 * likev[:, 1])
plt.xlabel("Variance of random slope", size=17)
lbl = plt.ylabel("-2 times profile log likelihood", size=17)
"""
Explanation: Here is a plot of the profile likelihood function. The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.
End of explanation
"""
|
encima/Comp_Thinking_In_Python | Session_2/2_Homework.ipynb | mit | name = "Computational Thinking"
code = "CM6111"
credits = 20
print(credits)
from nose.tools import assert_equal
assert isinstance(name, str)
assert isinstance(code, str)
assert isinstance(credits, int)
assert_equal(credits, 20)
assert_equal(code, "CM6111")
assert_equal(name, "Computational Thinking")
"""
Explanation: Session 2: Homework
Create 3 variables to describe this module:
1. the name (called name)
2. the code (called code)
3. the number of credits (called credits)
End of explanation
"""
assert_equal(credits, 20)
assert_equal(code, "CM6111")
assert_equal(name, "Computational Thinking")
"""
Explanation: Rewrite the above code to Multi-Variable assignment
End of explanation
"""
my_string = "'Who's been eating my porridge?' said Papa Bear";
print(my_string)
assert_equal(my_string, "Who\'s been eating my porridge? said Papa Bear")
"""
Explanation: Look at the code below and fix the string in order to ensure it can be printed.
End of explanation
"""
to_print = "Me, but 400 more times"
print(to_print) #do not change or repeat this line
assert_equal(to_print, "Me, but 400 more times"*400)
"""
Explanation: Using operators, change the string below to print it 400 times.
End of explanation
"""
string_sample = "Hello my name is Gary and I love to write sentences with no punctuation and this makes it really deifficult to read and follow"
length_sample = #calc length here
string_avg = length_sample / len(string_sample.split(" "))
print(string_avg)
to_print = #print average word length AND string_sample here, like this <Average><String> (NO SPACES!)
print(to_print)
assert_equal(string_avg, 5.25)
assert_equal(to_print, "5.25Hello my name is Gary and I love to write sentences with no punctuation and this makes it really deifficult to read and follow")
"""
Explanation: Use the variables below to assign the length of the string, convert the average word length to a string, and assign the concatenation of both variables to the final variable.
End of explanation
"""
print(True + 3 * 100)
print((True + 3) * 100)
"""
Explanation: Why is the code below valid?
From your GCSE Maths knowledge, why are the 2 answers different?
What is the output of int(False)?
End of explanation
"""
|
tomkralidis/OWSLib | notebooks/examples/wms.ipynb | bsd-3-clause | from owslib.wms import WebMapService
wms_url = "https://ows.terrestris.de/osm/service"
wms = WebMapService(wms_url, version="1.3.0")
print(f"WMS version: {wms.identification.version}")
print(f"WMS title: {wms.identification.title}")
print(f"WMS abstract: {wms.identification.abstract}")
print(f"Provider name: {wms.provider.name}")
print(f"Provider address: {wms.provider.contact.address}")
"""
Explanation: Interact with a WMS
OGC Web Map Service (WMS) can be used to download map images rendered by the remote server.
Example from GeoPython Workshop
This is the metadata of the service endpoint:
End of explanation
"""
list(wms.contents)
wms.contents['SRTM30-Colored'].boundingBox
wms.contents['SRTM30-Colored'].boundingBoxWGS84
print(wms['SRTM30-Colored'].crsOptions)
print(wms['SRTM30-Colored'].styles)
"""
Explanation: Check the Capabilities response directly from the server
Available WMS layers:
End of explanation
"""
[op.name for op in wms.operations]
wms.getOperationByName('GetMap').methods
wms.getOperationByName('GetMap').formatOptions
"""
Explanation: Available methods, their URLs, and available formats:
End of explanation
"""
img = wms.getmap(
layers=['SRTM30-Colored'],
size=[600, 400],
srs="EPSG:4326",
bbox=[1.0, 50.0, 10.0, 54.0],
format="image/jpeg")
"""
Explanation: That's everything needed to make a request for imagery:
End of explanation
"""
from IPython.display import Image
Image(img.read())
"""
Explanation: Result:
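To keep the image rather than only display it, the response can also be written to disk. A small sketch (OWSLib GetMap responses expose a read() method; the filename here is arbitrary):

```python
def save_wms_image(response, path):
    """Write a WMS GetMap response (any object with .read()) to a binary file."""
    with open(path, "wb") as f:
        f.write(response.read())

# usage with the `img` object returned by wms.getmap(...) above:
# save_wms_image(img, "srtm30.jpg")
```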
End of explanation
"""
|
guyk1971/deep-learning | intro-to-rnns/Anna_KaRNNa_exercise_orig.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size =
n_batches =
# Keep only enough characters to make full batches
arr =
# Reshape into n_seqs rows
arr =
for n in range(0, arr.shape[1], n_steps):
# The features
x =
# The targets, shifted by one
y =
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
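If you get stuck on the batching exercise, here is one possible implementation consistent with the hints above (a sketch, not necessarily identical to the solutions notebook):

```python
import numpy as np

def get_batches(arr, n_seqs, n_steps):
    '''Yield (x, y) batches of shape n_seqs x n_steps from arr.'''
    batch_size = n_seqs * n_steps
    n_batches = len(arr) // batch_size
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * batch_size]
    # Reshape into n_seqs rows; -1 fills in the second dimension
    arr = arr.reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        # The features: an n_seqs x n_steps window on the array
        x = arr[:, n:n + n_steps]
        # The targets, shifted by one; the first input character of the
        # window is reused as the last target, as described above
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y
```

Running it on `np.arange(100)` with 2 sequences and 5 steps gives a first batch whose second row starts at 50, since the array is split into two halves.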
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs =
targets =
# Keep probability placeholder for drop out layers
keep_prob =
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm =
# Add dropout to the cell outputs
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
initial_state =
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
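Before filling in build_output, the concatenate/reshape/matmul/softmax pipeline can be sanity-checked in plain NumPy. This sketch uses made-up sizes and random weights purely to verify the shapes:

```python
import numpy as np

N, M, L, C = 2, 3, 4, 5                  # batch size, steps, lstm size, classes
rng = np.random.default_rng(0)
lstm_output = [rng.random((N, L)) for _ in range(M)]   # one output array per step

seq_output = np.concatenate(lstm_output, axis=1)       # shape (N, M*L)
x = seq_output.reshape(-1, L)                          # (N*M, L): one row per step per sequence

softmax_w = rng.random((L, C))
softmax_b = np.zeros(C)
logits = x @ softmax_w + softmax_b                     # shape (N*M, C)
out = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
```

In the TensorFlow version the same steps are `tf.concat`, `tf.reshape`, `tf.matmul`, and `tf.nn.softmax` on tensors.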
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
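The one-hot encoding and reshape of the targets can likewise be checked in plain NumPy (in the TensorFlow graph you would use `tf.one_hot` and `tf.reshape` on tensors instead):

```python
import numpy as np

num_classes = 4
targets = np.array([[1, 3],              # shape (N, M) of encoded characters
                    [0, 2]])

# Indexing the identity matrix one-hot encodes each target
y_one_hot = np.eye(num_classes)[targets]            # shape (N, M, C)
# Flatten to one row per step per sequence, matching the logits
y_reshaped = y_one_hot.reshape(-1, num_classes)     # shape (N*M, C)
```

Each row of `y_reshaped` has a single 1 in the column of the target character.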
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
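For intuition, what `tf.clip_by_global_norm` computes can be mimicked in NumPy: all gradients are rescaled jointly so that their combined norm never exceeds the threshold. This is a sketch of the math, not TensorFlow's implementation:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Joint rescaling: each gradient is multiplied by
    # clip_norm / max(global_norm, clip_norm)
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm
```

Gradients whose combined norm is already below the threshold pass through unchanged.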
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
"""
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Typically larger is better: the network will learn more long-range dependencies, but it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
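To see what the filtering in pick_top_n does, here is a small NumPy walk-through with made-up probabilities; only the three largest survive and are renormalized before sampling:

```python
import numpy as np

preds = np.array([[0.05, 0.4, 0.1, 0.3, 0.15]])  # fake network output, shape (1, vocab)
p = np.squeeze(preds).copy()     # copy so we don't modify preds through the view
p[np.argsort(p)[:-3]] = 0        # zero out everything but the top 3 probabilities
p = p / np.sum(p)                # renormalize so the kept probabilities sum to 1
# np.random.choice would then only ever draw characters 1, 3, or 4
```

Note the `.copy()`: `np.squeeze` returns a view, so without it the in-place zeroing would also modify `preds` (the function above has this quirk, which is harmless in its usage here).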
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network will predict the next character. We can then feed the new character back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
yw-fang/readingnotes | abinitio/aiida/aiida-v0.11.0-updated-note.ipynb | apache-2.0 | conda create -n aiida python=2.7 #set a veritual environment
conda activate aiida
#sometimes in mac, such a command might be requested
sudo ln -s /Users/ywfang/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh
conda install postgresql
"""
Explanation: Aiida and the aiida-plugins
1. aiida-v0.11.0 installation (released in Jan 2018)
aiida-v0.11.0 was released in Jan 2018, so I removed the previous v0.10.0 from my mac and installed v0.11.0.
In the following, I'll show how to install it and other aiida plugins on macOS.
1.1 set up a virtual environment for aiida running
End of explanation
"""
git clone git@github.com:aiidateam/aiida_core.git
cd aiida_core
git checkout -b v0.11.0
pip install .
#make sure that you have pip, if it is not available,
#use 'conda install pip'
#then have a try to see whether it works
verdi quicksetup
"""
Explanation: 1.2 install aiida_core of epfl version
Now we need to download aiida_core v0.11.0 and put it in a folder of your choice. For me, the directory of aiida_core is '/Users/ywfang/FANG/study/software/all-aiida'.
End of explanation
"""
(aiida) h44:all-aiida ywfang$ verdi daemon status
# Most recent daemon timestamp:0h:00m:06s ago
Daemon is running as pid 15885 since 2018-01-19 11:38:47.813955, child processes:
* python2.7[15892] running, started at 2018-01-19 11:38:48
* python2.7[15893] running, started at 2018-01-19 11:38:48
* python2.7[15894] running, started at 2018-01-19 11:38:48
* python2.7[15895] running, started at 2018-01-19 11:38:48
* python2.7[15896] running, started at 2018-01-19 11:38:48
"""
Explanation: You can use Ctrl-C to stop this "verdi quicksetup" process if you don't want to continue. In my case, it got stuck in a loop.
1.3 postgresql settings
As you can see in the last section, there was obviously a problem related to postgres; this morning I talked to Abel and we finally solved it. The problem likely comes from the fact that we hadn't configured postgres at all. (With aiida version 0.10.0 I didn't have this problem, since the database can be configured after 'verdi quicksetup'. If you are using v0.11.0, however, just follow the subsequent steps.)
Actually, the aiida website already documents the database setup: https://aiida-core.readthedocs.io/en/latest/installation/database.html
Now, let's go back to 'verdi quicksetup'.
Run 'verdi daemon start'; it will remind you that you haven't set up the daemon user yet, so just follow the prompts (as you will see below, I use my email address as the default user).
Please check the status of the daemon:
End of explanation
"""
(aiida) ywfang@ywfang:~$ verdi daemon status
# Most recent daemon timestamp:0h:00m:09s ago
## Found 1 process running:
* aiida-daemon[aiida-daemon] RUNNING pid 12373, uptime 2 days, 22:04:52
"""
Explanation: The output on the screen is quite different from that of aiida-v0.10.1. As a comparison, here is the output of aiida-v0.10.1:
End of explanation
"""
git clone git@github.com:DropD/aiida-vasp.git
"""
Explanation: Obviously, the latest version 0.11.0 provides more information about the processes.
2. Installing plugins
2.1 Installing aiida-vasp plugin
I created a plugin folder: /Users/ywfang/FANG/study/software/all-aiida/aiida_plugins
End of explanation
"""
cd aiida-vasp/
git checkout develop #switch to develop version
"""
Explanation: After cloning, you'll see a new folder named aiida-vasp
End of explanation
"""
pip install . # installed successfully
"""
Explanation: pymatgen parser setting: this step is quite important for the aiida-vasp plugin, although I don't like this feature very much because I usually forget it. Luckily, Abel has reminded me many times (at least 5, -_-) over the last month. Great thanks to him.
vi aiida_vasp/calcs/vasp.py, change the line
default_parser = 'vasp.vasp'
into
default_parser = 'vasp.pymatgen'
End of explanation
"""
git clone git@github.com:abelcarreras/aiida-phonopy.git #this code was developed by Abel
"""
Explanation: 2.2 Installing aiida-phonopy plugin
End of explanation
"""
cd aiida-phonopy
git checkout development #switch to develop version
pip install . # installed successfully
"""
Explanation: You then see a folder "aiida-phonopy"
End of explanation
"""
(aiida) h44:aiida_plugins ywfang$ verdi computer setup
At any prompt, type ? to get some help.
---------------------------------------
=> Computer name: stern-lab
Creating new computer with name 'stern-lab'
=> Fully-qualified hostname: 88.88.88.88 #here is the IP address of your computer
=> Description: go to stern-lab from mac
=> Enabled: True
=> Transport type: ssh
=> Scheduler type: sge
=> shebang line at the beginning of the submission script: #!/bin/bash
=> AiiDA work directory: /home/ywfang/aiida_run_mac
=> mpirun command: mpirun -np {tot_num_mpiprocs}
=> Text to prepend to each command execution:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
export PATH=/usr/local/calc/openmpi-1.10.2-ifort-togo/bin/:$PATH #change these two lines according to your system
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/069/lib/intel64/:$LD_LIBRARY_PATH
=> Text to append to each command execution:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
Computer 'stern-lab' successfully stored in DB.
pk: 1, uuid: f54386aa-0b7f-4576-8faa-666d9429980c
Note: before using it with AiiDA, configure it using the command
verdi computer configure stern-lab
(Note: machine_dependent transport parameters cannot be set via
the command-line interface at the moment)
"""
Explanation: 2.3 Verdi command completion configuration
The verdi command line tool has many commands and options. To simplify its usage, simply run the following command:
$ verdi completioncommand
Please append the output to the activation script of your virtual environment (or e.g. .bashrc). Alternatively, you can accomplish the same by simply adding the following line to the activation script:
eval "$(verdi completioncommand)"
For myself, in my ubuntu, I just added
eval "$(verdi completioncommand)"
to
/home/ywfang/miniconda3/envs/aiida/bin/activate;
in my mac, I jsut add a line to my .bashrc
alias aiida="conda activate aiida; sleep 1s; eval '$(verdi completioncommand)'" (It seems that it doesn't always work well in my mac. Updated in Jan 31 2018)
In addition, we can also customize the configuration directory location of aiida. This will allow us to use several different versions of aiida-core. Here, I only installed one version, hence will attach more words to this configurations. For you information, please visit the website of (aiida)[https://aiida-core.readthedocs.io/en/latest/installation/optional_configuration.html].
Appeded note in March 22th, 2008: I found in version.0.11.3, we only need run 'verdi completioncommand', and add its output to .bashrc. The method of adding eval command to activate seems not work. YWFANG
Since in 'verdi quicksetup', I didn't use the passward, hence I run 'verdi setup aiida1', and reset my profile. The profile information is stored in a file like:
~/.aiida/config.json
2.4 Setup of computers
2.4.1 Setup the computer
For the supercomputer you want to perform calculations, before settupng up the computer, you have to prepar a password-less connection to the computer. (use ssh-keygen, there are many blogs on this topic.) Here, I skip this step, and go to the 'verdi comptuer setup' directly.
End of explanation
"""
(aiida) h44:aiida_plugins ywfang$ verdi computer configure stern-lab
Configuring computer 'stern-lab' for the AiiDA user 'fyuewen@gmail.com'
Computer stern-lab has transport of type ssh
Note: to leave a field unconfigured, leave it empty and press [Enter]
=> username = ywfang
=> port = 22
=> look_for_keys = ~/.ssh/id_rsa
Error in the inserted value: look_for_keys must be an boolean
=> look_for_keys =
=> key_filename = ~/.ssh/id_rsa
=> timeout = 60
=> allow_agent =
=> proxy_command =
=> compress = False
=> gss_auth = no
=> gss_kex = no
=> gss_deleg_creds = no
=> gss_host = 88.88.88.88
=> load_system_host_keys = True
=> key_policy = WarningPolicy
Configuration stored for your user on computer 'stern-lab'.
"""
Explanation: Set up local computer as localhost
Appended note, March 5th, 2018
Reference: https://github.com/ltalirz/aiida-zeopp
It is also possible to set up a local computer as a host for computations. For example, here is my local computer's information:
Computer name: localhost
* PK: 4
* UUID: ..................
* Description: my local computer
* Hostname: localhost
* Enabled: True
* Transport type: local
* Scheduler type: direct
* Work directory: /home/ywfang/aiidalocal-run
* mpirun command:
* Default number of cpus per machine: 8
* Used by: 4 nodes
* prepend text:
# No prepend text.
* append text:
# No append text.
After setting up the local computer, remember to configure it.
Correspondingly, here is the information for a code on my local computer as an example:
PK: 186209
UUID:
Label: phonopy
Description: phonopy in localhost
Default plugin: phonopy.phonopy
Used by: 1 calculations
Type: remote
Remote machine: localhost
Remote absolute path:
/home/ywfang/miniconda3/envs/aiida/bin/phonopy
prepend text:
# No prepend text.
append text:
# No append text.
A comment on the 'direct' scheduler:
The direct scheduler, to be used mainly for debugging, is an implementation of a scheduler plugin that does not require a real scheduler installed, but instead directly executes a command, puts it in the background, and checks for its process ID (PID) to discover if the execution is completed.
Warning
The direct execution mode is very fragile. Currently, it spawns a separate Bash shell to execute a job and track each shell by process ID (PID). This poses following problems:
PID numeration is reset during reboots;
PID numeration is different from machine to machine, thus direct execution is not possible in multi-machine clusters, redirecting each SSH login to a different node in round-robin fashion;
there is no real queueing, hence, all calculation started will be run in parallel.
2.4.1 Configure the computer
End of explanation
"""
(aiida) h44:aiida_plugins ywfang$ verdi computer test stern-lab
Testing computer 'stern-lab' for user fyuewen@gmail.com...
> Testing connection...
> Getting job list...
`-> OK, 30 jobs found in the queue.
> Creating a temporary file in the work directory...
>>>>>
>>>>>>.....
..........
.........
[Deleted successfully]
Test completed (all 3 tests succeeded)
"""
Explanation: 2.4.2 Test the computer
End of explanation
"""
(aiida) h44:aiida_plugins ywfang$ verdi code setup
At any prompt, type ? to get some help.
---------------------------------------
=> Label: vasp
=> Description: vasp541 at stern-lab
=> Local: False
=> Default input plugin: vasp.vasp # you can use verdi calculation plugins to check what plugins you have
=> Remote computer name: stern-lab
=> Remote absolute path: /usr/local/vasp/vasp541
=> Text to prepend to each command execution
FOR INSTANCE, MODULES TO BE LOADED FOR THIS CODE:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
=> Text to append to each command execution:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
Code 'vasp' successfully stored in DB.
pk: 1, uuid: 03875075-0cc1-4938-8943-c46d3ee2aecd
"""
Explanation: 2.4.3 Some useful commands
verdi computer list
to get a list of existing computers, and:
verdi computer show COMPUTERNAME
to get detailed information on the specific computer named COMPUTERNAME. You have also the:
verdi computer rename OLDCOMPUTERNAME NEWCOMPUTERNAME
and:
verdi computer delete COMPUTERNAME
2.5 Setup the code
Basically there are two kinds of codes for aiida. One is a local code; the other is the so-called remote code that resides on a supercomputer or cluster. Please refer to [aiida](http://aiida-core.readthedocs.io/en/release_v0.11.0/get_started/index.html) for their differences.
Here I show an example of setting up a remote code (vasp):
End of explanation
"""
import click
import numpy
@click.command()
@click.option('--paw-family', type=str, default='vasp-test')
@click.option('--import-from', type=click.Path(), default='.')
@click.option('--queue', type=str, default='')
@click.argument('code', type=str)
@click.argument('computer', type=str)
def test_vasp(paw_family, import_from, queue, code, computer):
load_dbenv_if_not_loaded()
from aiida.orm import CalculationFactory, Code
if import_from:
import_paws(import_from, paw_family)
# try:
paw_si = get_paws(paw_family)
# except ValueError as err:
# click.echo(err.msg, err=True)
# raise ValueError(
# 'give a valid family or import a new one (run with --help)')
vasp_calc = CalculationFactory('vasp.vasp')()
vasp_calc.use_structure(create_structure2())
vasp_calc.use_kpoints(create_kpoints())
vasp_calc.use_parameters(create_params())
code = Code.get_from_string('{}@{}'.format(code, computer))
vasp_calc.use_code(code)
# vasp_calc.use_paw(paw_in, 'In')
# vasp_calc.use_paw(paw_as, 'As')
vasp_calc.use_paw(paw_si, 'Si')
vasp_calc.set_computer(code.get_computer())
vasp_calc.set_queue_name(queue)
vasp_calc.set_resources({
'num_machines': 1,
'parallel_env': 'mpi*',
'tot_num_mpiprocs': 16
})
vasp_calc.label = 'Test VASP run'
vasp_calc.store_all()
vasp_calc.submit()
vasp_calc = CalculationFactory('vasp.vasp')
#from aiida_vasp.parsers.pymatgen_vasp import PymatgenParser
#vasp_calc.set_parser_cls(PymatgenParser)
#vasp_calc.use_settings(DataFactory('parameter')(dict={
# 'pymatgen_parser': {
# 'exception_on_bad_xml': False,
# 'parse_dos': False
# }
# }
#))
def load_dbenv_if_not_loaded():
from aiida import load_dbenv, is_dbenv_loaded
if not is_dbenv_loaded():
load_dbenv()
def get_data_cls(descriptor):
load_dbenv_if_not_loaded()
from aiida.orm import DataFactory
return DataFactory(descriptor)
def create_structure():
structure_cls = get_data_cls('structure')
structure = structure_cls(
cell=numpy.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]]) * 6.058, )
structure.append_atom(position=(0, 0, 0), symbols='In')
structure.append_atom(position=(0.25, 0.25, 0.25), symbols='As')
return structure
def create_structure2():
import numpy as np
from aiida.orm import DataFactory
StructureData = DataFactory('structure')
a = 5.404
cell = [[a, 0, 0],
[0, a, 0],
[0, 0, a]]
symbols=['Si'] * 8
scaled_positions = [(0.875, 0.875, 0.875),
(0.875, 0.375, 0.375),
(0.375, 0.875, 0.375),
(0.375, 0.375, 0.875),
(0.125, 0.125, 0.125),
(0.125, 0.625, 0.625),
(0.625, 0.125, 0.625),
(0.625, 0.625, 0.125)]
structure = StructureData(cell=cell)
positions = np.dot(scaled_positions, cell)
for i, scaled_position in enumerate(scaled_positions):
structure.append_atom(position=np.dot(scaled_position, cell).tolist(),
symbols=symbols[i])
return structure
def create_kpoints():
kpoints_cls = get_data_cls('array.kpoints')
return kpoints_cls(kpoints_mesh=[8, 8, 8])
def create_params():
param_cls = get_data_cls('parameter')
return param_cls(dict={
'SYSTEM': 'InAs',
'EDIFF': 1e-5,
'ISMEAR': 0,
'SIGMA': 0.05,
'ENCUT': '280.00 eV',
'LEPSILON': '.TRUE.'
})
def import_paws(folder_path, family_name):
load_dbenv_if_not_loaded()
from aiida.orm import DataFactory
paw_cls = DataFactory('vasp.paw')
paw_cls.import_family(
folder_path, familyname=family_name, family_desc='Test family')
def get_paws(family_name):
load_dbenv_if_not_loaded()
from aiida.orm import DataFactory
paw_cls = DataFactory('vasp.paw')
#paw_in = paw_cls.load_paw(family=family_name, symbol='In')[0]
#paw_as = paw_cls.load_paw(family=family_name, symbol='As')[0]
paw_si = paw_cls.load_paw(family=family_name, symbol='Si')[0]
return paw_si
if __name__ == '__main__':
test_vasp()
"""
Explanation: Attention: For setting up codes for phonopy and phono3py, the "Default input plugins" are phonopy.phonopy and phonopy.phono3py respectively.
Some useful commands:
verdi code rename "ID" (for example, vasp@stern-lab is an ID)
verdi code list
verdi code show "ID"
verdi code delete "ID"
2.6 Run example
Now you can test some examples. Always remember to restart your daemon if you make any modifications to the environment or to anything related to the computer or code: verdi daemon restart
The example I tested comes from the aiida-vasp plugin (probably with some small modifications of mine):
End of explanation
"""
(aiida) h44:test-example-for-mac ywfang$ verdi calculation list -a
# Last daemon state_updater check: 0h:00m:07s ago (at 18:02:35 on 2018-01-19)
PK Creation State Sched. state Computer Type
---- ---------- -------- -------------- ---------- ---------
18 4m ago FINISHED DONE boston-lab vasp.vasp
Total results: 1
"""
Explanation: The command to run this job:
"python run_vasp_old-boston.py --import-from /Users/ywfang/FANG/study/software/all-aiida/aiida_plugins/test-example-for-mac/VASP_TEST/Si vasp boston-lab"
Check the status of the calculations
End of explanation
"""
"""
(*Take KF compound as an example*)
(*In the unit cell of cubic KF,
there are 8 atoms, half of them are K, and the other half are F*)
This is the unit cell structure:
F K
1.0
5.3683051700000002 0.0000000000000000 0.0000000000000000
0.0000000000000000 5.3683051700000002 0.0000000000000000
0.0000000000000000 0.0000000000000000 5.3683051700000002
F K
4 4
Direct
0.0000000000000000 0.0000000000000000 0.0000000000000000
0.0000000000000000 0.5000000000000000 0.5000000000000000
0.5000000000000000 0.0000000000000000 0.5000000000000000
0.5000000000000000 0.5000000000000000 0.0000000000000000
0.5000000000000000 0.0000000000000000 0.0000000000000000
0.5000000000000000 0.5000000000000000 0.5000000000000000
0.0000000000000000 0.0000000000000000 0.5000000000000000
0.0000000000000000 0.5000000000000000 0.0000000000000000
"""
#(*using "phonopy --symmetry --tol=0.01" to get the primitive poscar*)
"""
The primitive poscar is
F K
1.0
0.0000000000000000 2.6841525850000001 2.6841525850000001
2.6841525850000001 0.0000000000000000 2.6841525850000001
2.6841525850000001 2.6841525850000001 0.0000000000000000
1 1
Direct
0.0000000000000000 0.0000000000000000 0.0000000000000000
0.5000000000000000 0.5000000000000000 0.5000000000000000
"""
import numpy as np
uc_vector = np.matrix([
[5.3683051700000002, 0, 0],
[0, 5.3683051700000002, 0],
[0, 0, 5.3683051700000002]
])
uc_vector
primitive_vector = np.matrix([
[0, 2.6841525850000001, 2.6841525850000001],
[2.6841525850000001, 0, 2.6841525850000001],
[2.6841525850000001, 2.6841525850000001, 0]
])
primitive_vector
x = np.linalg.solve(uc_vector.T, primitive_vector.T)
x
"""
Explanation: The calculation of primitive_axis in phonopy
End of explanation
"""
#the conventional cell of B contains 36 atoms
B
1.0
4.8786175399999996 0.0000000000000000 0.0000000000000000
-2.4393087699999998 4.2250067299999996 0.0000000000000000
0.0000000000000000 0.0000000000000000 12.5104310900000009
B
36
Direct
0.8810421900000001 0.1189578100000000 0.1082863700000000
0.2379156300000000 0.1189578100000000 0.1082863700000000
0.8810421900000001 0.7620843700000000 0.1082863700000000
0.7856244800000000 0.5712489599999999 0.2250469600000000
0.7856244800000000 0.2143755200000000 0.2250469600000000
0.4287510400000000 0.2143755200000000 0.2250469600000000
0.4692953000000000 0.5307047000000000 0.3089958100000000
0.0614094100000000 0.5307047000000000 0.3089958100000000
0.4692953000000000 0.9385905900000000 0.3089958100000000
0.1973713700000000 0.3947427400000000 0.0243375300000000
0.1973713700000000 0.8026286300000001 0.0243375300000000
0.6052572600000000 0.8026286300000001 0.0243375300000000
0.5477088500000000 0.4522911500000000 0.4416197100000000
0.9045823000000000 0.4522911500000000 0.4416197100000000
0.5477088500000000 0.0954177000000000 0.4416197100000000
0.4522911500000000 0.9045823000000000 0.5583802900000000
0.4522911500000000 0.5477088500000000 0.5583802900000000
0.0954177000000000 0.5477088500000000 0.5583802900000000
0.1359619600000000 0.8640380400000001 0.6423291400000000
0.7280760700000000 0.8640380400000001 0.6423291400000000
0.1359619600000000 0.2719239300000000 0.6423291400000000
0.8640380400000001 0.7280760700000000 0.3576708600000000
0.8640380400000001 0.1359619600000000 0.3576708600000000
0.2719239300000000 0.1359619600000000 0.3576708600000000
0.2143755200000000 0.7856244800000000 0.7749530400000000
0.5712489599999999 0.7856244800000000 0.7749530400000000
0.2143755200000000 0.4287510400000000 0.7749530400000000
0.1189578100000000 0.2379156300000000 0.8917136300000000
0.1189578100000000 0.8810421900000001 0.8917136300000000
0.7620843700000000 0.8810421900000001 0.8917136300000000
0.8026286300000001 0.1973713700000000 0.9756624699999999
0.3947427400000000 0.1973713700000000 0.9756624699999999
0.8026286300000001 0.6052572600000000 0.9756624699999999
0.5307047000000000 0.0614094100000000 0.6910041900000000
0.5307047000000000 0.4692953000000000 0.6910041900000000
0.9385905900000000 0.4692953000000000 0.6910041900000000
#the primitive cell of B can be written as (use phonopy command to get this primitive cell)
B
1.0
2.4393087710850549 1.4083355756225715 4.1701436966666670
-2.4393087710850549 1.4083355756225715 4.1701436966666670
0.0000000000000000 -2.8166711512451430 4.1701436966666670
12
Direct
0.9893285600000000 0.3462020033333332 0.9893285600000000
0.3462020033333332 0.9893285600000000 0.9893285600000000
0.9893285600000000 0.9893285600000000 0.3462020033333332
0.0106714399999999 0.0106714399999998 0.6537979966666668
0.0106714399999999 0.6537979966666666 0.0106714400000000
0.6537979966666667 0.0106714400000000 0.0106714400000000
0.7782911033333333 0.3704052166666665 0.7782911033333333
0.3704052166666665 0.7782911033333333 0.7782911033333333
0.7782911033333333 0.7782911033333333 0.3704052166666666
0.2217088966666666 0.2217088966666666 0.6295947833333334
0.2217088966666666 0.6295947833333333 0.2217088966666668
0.6295947833333333 0.2217088966666665 0.2217088966666668
import numpy as np
uc_vector_B = np.matrix([
[4.8786175399999996, 0.0000000000000000, 0.0000000000000000],
[-2.4393087699999998, 4.2250067299999996, 0.0000000000000000],
[0.0000000000000000, 0.0000000000000000, 12.5104310900000009]
])
primitive_vector_B = np.matrix([
[2.4393087710850549, 1.4083355756225715, 4.1701436966666670],
[-2.4393087710850549, 1.4083355756225715, 4.1701436966666670],
[0.0000000000000000, -2.8166711512451430, 4.1701436966666670]
])
x_B = np.linalg.solve(uc_vector_B.T, primitive_vector_B.T)
x_B
#The structures of the two examples above are taken from Togo-sensei's phonon database (I could have relaxed them with higher accuracy).
#The phonopy.conf files in the database include the primitive_axis information, which can be used as a comparison with my
#calculations here.
#for the definition of primitive_axis in phonopy, please see the manual or visit
#https://atztogo.github.io/phonopy/setting-tags.html#primitive-axis-or-primitive-axes
"""
Explanation: The solved x is the primitive_axis that can be used in the phonopy calculations.
Now, let's add another example: the boron crystal.
End of explanation
"""
import urllib.request  # needed for functions/classes for opening urls

url = input("enter the url of the file to download (pdf, mp3, zip, ...)\n")
usock = urllib.request.urlopen(url)  # open the desired url
file_name = url.split('/')[-1]  # e.g. for "www.cs.berkeley.edu/~vazirani/algorithms/chap6.pdf" file_name will be "chap6.pdf"
f = open(file_name, 'wb')  # open the file for writing, in binary mode
file_size = int(usock.info()["Content-Length"])  # size in bytes of the file (pdf, mp3, ...)
print("Downloading: %s Bytes: %s" % (file_name, file_size))
downloaded = 0
block_size = 8192  # bytes downloaded per loop iteration until the file pointer reaches eof
while True:
    buff = usock.read(block_size)
    if not buff:  # the file pointer reached eof
        break
    downloaded += len(buff)
    f.write(buff)
    download_status = r"%3.2f%%" % (downloaded * 100.00 / file_size)  # simple arithmetic
    download_status = download_status + (len(download_status) + 1) * chr(8)
    print(download_status, "done")
f.close()
"""
Explanation: an example of a spider (a simple file downloader)
End of explanation
"""
#We can do some interactive programming in the verdi shell.
#The verdi shell is an ipython-based API of aiida.
#Abel taught me about this analysis on Jan 19, 2018.
n = load_node(50442)
n.get_outputs_dict()
n.out.kappa
n.out.kappa.get_arraynames()
n.out.kappa.get_array('kappa')
array = n.out.kappa.get_array('kappa')
n.out.kappa.get_arraynames()
"""
Explanation: aiida phono3py data analysis
Use verdi calculation list to get the pk number corresponding to the raw data.
Run verdi shell
End of explanation
"""
|
eshlykov/mipt-day-after-day | statistics/python/python_5.ipynb | unlicense | import numpy as np
"""
Explanation: Department of Discrete Mathematics, MIPT
Course in Mathematical Statistics
Nikita Volkov
Based on http://www.inp.nsk.su/~grozin/python/
The numpy library
The numpy package provides $n$-dimensional homogeneous arrays (all elements have the same type); you cannot insert or delete an element at an arbitrary position. numpy implements many operations on whole arrays. If a problem can be solved by performing a sequence of operations on arrays, it will be as efficient as C or matlab, since the lion's share of the time is spent in library functions written in C.
One-dimensional arrays
End of explanation
"""
a = np.array([0, 2, 1])
a, type(a)
"""
Explanation: A list can be converted to an array.
End of explanation
"""
print(a)
"""
Explanation: print displays arrays in a convenient form.
End of explanation
"""
set(dir(a)) - set(dir(object))
"""
Explanation: The ndarray class has many methods.
End of explanation
"""
a.ndim
"""
Explanation: Our array is one-dimensional.
End of explanation
"""
a.shape
"""
Explanation: In the $n$-dimensional case, a tuple of sizes along each axis is returned.
End of explanation
"""
len(a), a.size
"""
Explanation: size is the total number of elements in the array; len is the size along the first axis (in the 1-dimensional case they are the same).
End of explanation
"""
a.dtype, a.dtype.name, a.itemsize
"""
Explanation: numpy provides several types for integers (int16, int32, int64) and for floating-point numbers (float32, float64).
End of explanation
"""
a[1]
"""
Explanation: An array can be indexed in the usual way.
End of explanation
"""
a[1] = 3
print(a)
"""
Explanation: Arrays are mutable objects.
End of explanation
"""
for i in a:
print(i)
"""
Explanation: Arrays can, of course, be used in for loops. But doing so loses numpy's main advantage: speed. Whenever possible, it is better to use operations on arrays as whole objects.
End of explanation
"""
b = np.array([0., 2, 1])
b.dtype
"""
Explanation: An array of floating-point numbers.
End of explanation
"""
c = np.array([0, 2, 1], dtype=np.float64)
print(c)
"""
Explanation: Exactly the same array.
End of explanation
"""
print(c.dtype)
print(c.astype(int))
print(c.astype(str))
"""
Explanation: Data type conversion
End of explanation
"""
def f(i):
print(i)
return i ** 2
a = np.fromfunction(f, (5,), dtype=np.int64)
print(a)
a = np.fromfunction(f, (5,), dtype=np.float64)
print(a)
"""
Explanation: An array whose values are computed by a function. The function is passed an array, so it may only use operations that are applicable to arrays.
End of explanation
"""
a = np.zeros(3)
print(a)
b = np.ones(3, dtype=np.int64)
print(b)
"""
Explanation: Arrays filled with zeros or ones. It is often better to create such an array first and then assign values to its elements.
End of explanation
"""
np.zeros_like(b)
"""
Explanation: If you need to create a zero-filled array with the length of another array, you can use the construction
End of explanation
"""
a = np.arange(0, 9, 2)
print(a)
b = np.arange(0., 9, 2)
print(b)
"""
Explanation: The arange function is similar to range. Its arguments may be floating-point numbers. Avoid situations where $(end-start)/step$ is an integer, because then whether the last element is included depends on rounding errors. It is better to put the end of the range somewhere in the middle of a step.
End of explanation
"""
a = np.linspace(0, 8, 5)
print(a)
"""
Explanation: Sequences of numbers with a constant step can also be created with the linspace function. The start and end of the range are included; the last argument is the number of points.
End of explanation
"""
b = np.logspace(0, 1, 5)
print(b)
"""
Explanation: A sequence of numbers with a constant step on a logarithmic scale, from $10^0$ to $10^1$.
End of explanation
"""
print(np.random.random(5))
"""
Explanation: An array of random numbers.
End of explanation
"""
print(np.random.normal(size=5))
"""
Explanation: Random numbers with a normal (Gaussian) distribution (mean 0, standard deviation 1).
End of explanation
"""
print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a ** 2)
"""
Explanation: Operations on one-dimensional arrays
Arithmetic operations are performed elementwise.
End of explanation
"""
i = np.ones(5, dtype=np.int64)
print(a + i)
"""
Explanation: When the operands have different types, they are promoted to the larger type.
End of explanation
"""
np.sin, type(np.sin)
print(np.sin(a))
"""
Explanation: numpy contains elementary functions, which are also applied to arrays elementwise. They are called universal functions (ufuncs).
End of explanation
"""
print(a + 1)
print(2 * a)
"""
Explanation: One of the operands can be a scalar rather than an array.
End of explanation
"""
print(a > b)
print(a == b)
c = a > 5
print(c)
"""
Explanation: Comparisons produce boolean arrays.
End of explanation
"""
np.any(c), np.all(c)
"""
Explanation: The quantifiers "there exists" and "for all".
End of explanation
"""
a += 1
print(a)
b *= 2
print(b)
b /= a
print(b)
"""
Explanation: In-place modification.
End of explanation
"""
print(np.array([0.0, 0.0, 1.0, -1.0]) / np.array([1.0, 0.0, 0.0, 0.0]))
np.nan + 1, np.inf + 1, np.inf * 0, 1. / np.inf
"""
Explanation: In array operations, division by 0 does not raise an exception; it yields np.nan or np.inf instead.
End of explanation
"""
b.sum(), b.prod(), b.max(), b.min(), b.mean(), b.std()
x = np.random.normal(size=1000)
x.mean(), x.std()
"""
Explanation: The sum and product of all elements of an array; the maximum and minimum element; the mean and the standard deviation.
End of explanation
"""
print(np.sqrt(b))
print(np.exp(b))
print(np.log(b))
print(np.sin(b))
print(np.e, np.pi)
"""
Explanation: There are built-in functions
End of explanation
"""
print(b.cumsum())
"""
Explanation: Sometimes partial (cumulative) sums are needed. They will come in handy in our course.
End of explanation
"""
print(np.sort(b))
print(b)
b.sort()
print(b)
"""
Explanation: The sort function returns a sorted copy; the sort method sorts in place.
End of explanation
"""
a = np.hstack((a, b))
print(a)
"""
Explanation: Concatenating arrays.
End of explanation
"""
np.hsplit(a, [3, 6])
"""
Explanation: Splitting an array at positions 3 and 6.
End of explanation
"""
a = np.delete(a, [5, 7])
print(a)
a = np.insert(a, 2, [0, 0])
print(a)
a = np.append(a, [1, 2, 3])
print(a)
"""
Explanation: The delete, insert, and append functions do not modify the array in place; they return a new array in which some elements have been removed, inserted in the middle, or appended at the end.
End of explanation
"""
a = np.linspace(0, 1, 11)
print(a)
b = a[2]
print(b)
"""
Explanation: There are several ways to index an array. Here is an ordinary index.
End of explanation
"""
b = a[2:6]
print(b)
b[0] = -0.2
print(b)
print(a)
"""
Explanation: A range of indices. A new array header is created that points to the same data, so changes made through such an array are also visible in the original array.
End of explanation
"""
b = a[1:10:2]
print(b)
b[0] = -0.1
print(a)
"""
Explanation: A range with step 2.
End of explanation
"""
b = a[len(a):0:-1]
print(b)
"""
Explanation: The array in reverse order.
End of explanation
"""
a[1:10:3] = 0
print(a)
"""
Explanation: A subarray can be assigned a value: either an array of the correct size or a scalar.
End of explanation
"""
b = a[:]
b[1] = 0.1
print(a)
"""
Explanation: Here, again, only a new header pointing to the same data is created.
End of explanation
"""
b = a.copy()
b[2] = 0
print(b)
print(a)
"""
Explanation: To copy the array's data as well, use the copy method.
End of explanation
"""
print(a[[2, 3, 5]])
"""
Explanation: You can specify a list of indices.
End of explanation
"""
b = a > 0
print(b)
print(a[b])
"""
Explanation: You can specify a boolean array of the same size.
End of explanation
"""
a = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(a)
a.ndim
a.shape
len(a), a.size
a[1, 0]
"""
Explanation: 2-dimensional arrays
End of explanation
"""
b = np.linspace(0, 3, 4)
print(b)
b.shape
b.shape = 2, 2
print(b)
"""
Explanation: The shape attribute can be assigned a new value: a tuple of sizes along all axes. This produces a new array header; the data itself does not change.
End of explanation
"""
print(b.ravel())
"""
Explanation: You can flatten it into a one-dimensional array
End of explanation
"""
print(a + 1)
print(a * 2)
print(a + [0, 1]) # the second operand is broadcast to a matrix by copying rows
print(a + np.array([[0, 2]]).T) # .T is transposition
print(a + b)
"""
Explanation: Arithmetic operations are elementwise
End of explanation
"""
print(a * b)
print(a @ b)
print(b @ a)
"""
Explanation: Elementwise and matrix multiplication (the latter only in Python 3.5+).
End of explanation
"""
v = np.array([1, -1], dtype=np.float64)
print(b @ v)
print(v @ b)
"""
Explanation: Multiplying a matrix by a vector.
End of explanation
"""
np.matrix(a) * np.matrix(b)
"""
Explanation: If you have an earlier version of Python, you can use the np.matrix class for working with matrices; for this class the multiplication operator is implemented as matrix multiplication.
End of explanation
"""
u = np.linspace(1, 2, 2)
v = np.linspace(2, 4, 3)
print(u)
print(v)
a = np.outer(u, v)
print(a)
"""
Explanation: The outer product $a_{ij}=u_i v_j$
End of explanation
"""
x, y = np.meshgrid(u, v)
print(x)
print(y)
"""
Explanation: Two-dimensional arrays that depend on only one index: $x_{ij}=u_j$, $y_{ij}=v_i$
End of explanation
"""
I = np.eye(4)
print(I)
"""
Explanation: The identity matrix.
End of explanation
"""
print(I.reshape(16))
print(I.reshape(2, 8))
"""
Explanation: The reshape method does the same thing as assigning to the shape attribute.
End of explanation
"""
print(I[1])
"""
Explanation: A row.
End of explanation
"""
for row in I:
print(row)
"""
Explanation: A loop over the rows.
End of explanation
"""
print(I[:, 2])
"""
Explanation: A column.
End of explanation
"""
print(I[0:2, 1:3])
"""
Explanation: A submatrix.
End of explanation
"""
def f(i, j):
print(i)
print(j)
return 10 * i + j
print(np.fromfunction(f, (4, 4), dtype=np.int64))
"""
Explanation: A two-dimensional array can be built from a function.
End of explanation
"""
print(b.T)
"""
Explanation: The transposed matrix.
End of explanation
"""
a = np.array([[0, 1], [2, 3]])
b = np.array([[4, 5, 6], [7, 8, 9]])
c = np.array([[4, 5], [6, 7], [8, 9]])
print(a)
print(b)
print(c)
print(np.hstack((a, b)))
print(np.vstack((a, c)))
"""
Explanation: Joining matrices horizontally and vertically.
End of explanation
"""
print(b.sum())
print(b.sum(axis=0))
print(b.sum(axis=1))
"""
Explanation: The sum of all elements; the column sums; the row sums.
End of explanation
"""
print(b.max())
print(b.max(axis=0))
print(b.min(axis=1))
"""
Explanation: prod, max, min, etc. work analogously.
End of explanation
"""
np.trace(a)
"""
Explanation: The trace is the sum of the diagonal elements.
End of explanation
"""
X = np.arange(24).reshape(2, 3, 4)
print(X)
"""
Explanation: Multidimensional arrays
End of explanation
"""
# sum over axis 0 only, i.e. for fixed j and k we sum only the elements with indices (*, j, k)
print(X.sum(axis=0))
# sum over two axes at once, i.e. for fixed i we sum only the elements with indices (i, *, *)
print(X.sum(axis=(1, 2)))
"""
Explanation: Summation (the other operations work analogously)
End of explanation
"""
np.linalg.det(a)
"""
Explanation: Linear algebra
End of explanation
"""
a1 = np.linalg.inv(a)
print(a1)
print(a @ a1)
print(a1 @ a)
"""
Explanation: The inverse matrix.
End of explanation
"""
v = np.array([0, 1], dtype=np.float64)
print(a1 @ v)
u = np.linalg.solve(a, v)
print(u)
"""
Explanation: Solving the linear system $au=v$.
End of explanation
"""
print(a @ u - v)
"""
Explanation: Let's check.
End of explanation
"""
l, u = np.linalg.eig(a)
print(l)
print(u)
"""
Explanation: Eigenvalues and eigenvectors: $a u_i = \lambda_i u_i$. l is a one-dimensional array of the eigenvalues $\lambda_i$, and the columns of the matrix $u$ are the eigenvectors $u_i$.
End of explanation
"""
for i in range(2):
print(a @ u[:, i] - l[i] * u[:, i])
"""
Explanation: Let's check.
End of explanation
"""
L = np.diag(l)
print(L)
print(np.diag(L))
"""
Explanation: Applied to a one-dimensional array, the diag function builds a diagonal matrix; applied to a square matrix, it returns a one-dimensional array of its diagonal elements.
End of explanation
"""
print(a @ u - u @ L)
"""
Explanation: All of the equations $a u_i = \lambda_i u_i$ can be collected into a single matrix equation $a u = u \Lambda$, where $\Lambda$ is the diagonal matrix with the eigenvalues $\lambda_i$ on the diagonal.
End of explanation
"""
print(np.linalg.inv(u) @ a @ u)
"""
Explanation: Therefore $u^{-1} a u = \Lambda$.
End of explanation
"""
l, v = np.linalg.eig(a.T)
print(l)
print(v)
"""
Explanation: Now let's find the left eigenvectors, $v_i a = \lambda_i v_i$ (the eigenvalues $\lambda_i$ are the same).
End of explanation
"""
print(u.T @ u)
print(v.T @ v)
"""
Explanation: The eigenvectors are normalized to 1.
End of explanation
"""
print(v.T @ u)
"""
Explanation: Left and right eigenvectors corresponding to different eigenvalues are orthogonal, because $v_i a u_j = \lambda_i v_i u_j = \lambda_j v_i u_j$.
End of explanation
"""
from scipy.integrate import quad, odeint
from scipy.special import erf
def f(x):
return np.exp(-x ** 2)
"""
Explanation: Integration
End of explanation
"""
res, err = quad(f, 0, np.inf)
print(np.sqrt(np.pi) / 2, res, err)
res, err = quad(f, 0, 1)
print(np.sqrt(np.pi) / 2 * erf(1), res, err)
"""
Explanation: Adaptive numerical integration (possibly over an infinite range). err is an error estimate.
End of explanation
"""
x = np.arange(0, 25, 0.5).reshape((5, 10))
# Save the data x to the file example.txt, formatted with two digits after the decimal point and ';' as the delimiter
np.savetxt('example.txt', x, fmt='%.2f', delimiter=';')
"""
Explanation: Saving to a file and reading from a file
End of explanation
"""
! cat example.txt
"""
Explanation: This produces the following file
End of explanation
"""
x = np.loadtxt('example.txt', delimiter=';')
print(x)
"""
Explanation: Now it can be read back
End of explanation
"""
import scipy.stats as sps
"""
Explanation: The scipy library (the scipy.stats module)
We will only need the scipy.stats module.
Full documentation: http://docs.scipy.org/doc/scipy/reference/stats.html
End of explanation
"""
sample = sps.norm.rvs(size=200, loc=1, scale=3)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample variance: %.3f' % sample.var())
print('Density:\t\t', sps.norm.pdf([-1, 0, 1, 2, 3], loc=1, scale=3))
print('Distribution function:\t', sps.norm.cdf([-1, 0, 1, 2, 3], loc=1, scale=3))
print('Quantiles:', sps.norm.ppf([0.05, 0.1, 0.5, 0.9, 0.95], loc=1, scale=3))
"""
Explanation: <b>General principle:</b>
$X$ is some distribution with parameters params
<ul>
<li>`X.rvs(size=N, params)`: generates a sample of size $N$ (<b>R</b>andom <b>V</b>ariate<b>S</b>). Returns a `numpy.array`</li>
<li>`X.cdf(x, params)`: value of the distribution function at the point $x$ (<b>C</b>umulative <b>D</b>istribution <b>F</b>unction)</li>
<li>`X.logcdf(x, params)`: value of the logarithm of the distribution function at the point $x$</li>
<li>`X.ppf(q, params)`: the $q$-quantile (<b>P</b>ercent <b>P</b>oint <b>F</b>unction)</li>
<li>`X.mean(params)`: the expectation</li>
<li>`X.median(params)`: the median</li>
<li>`X.var(params)`: the variance (<b>Var</b>iance)</li>
<li>`X.std(params)`: the standard deviation, i.e. the square root of the variance (<b>St</b>andard <b>D</b>eviation)</li>
</ul>
In addition, for continuous distributions the following are defined
<ul>
<li>`X.pdf(x, params)`: value of the density at the point $x$ (<b>P</b>robability <b>D</b>ensity <b>F</b>unction)</li>
<li>`X.logpdf(x, params)`: value of the logarithm of the density at the point $x$</li>
</ul>
And for discrete ones
<ul>
<li>`X.pmf(k, params)`: value of the probability mass function at the point $k$ (<b>P</b>robability <b>M</b>ass <b>F</b>unction)</li>
<li>`X.logpmf(k, params)`: value of the logarithm of the probability mass function at the point $k$</li>
</ul>
The parameters can be the following:
<ul>
<li>`loc`: location (shift) parameter</li>
<li>`scale`: scale parameter</li>
<li>and other parameters (for example, $n$ and $p$ for the binomial distribution)</li>
</ul>
As an example, let's generate a sample of size $N = 200$ from the distribution $\mathscr{N}(1, 9)$ and compute some statistics.
In terms of the functions described above, here $X$ = sps.norm and params = (loc=1, scale=3).
"""
sample = sps.binom.rvs(size=200, n=10, p=0.6)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample variance: %.3f' % sample.var())
print('Probability mass:\t', sps.binom.pmf([-1, 0, 5, 5.5, 10], n=10, p=0.6))
print('Distribution function:\t', sps.binom.cdf([-1, 0, 5, 5.5, 10], n=10, p=0.6))
print('Quantiles:', sps.binom.ppf([0.05, 0.1, 0.5, 0.9, 0.95], n=10, p=0.6))
"""
Explanation: Let's generate a sample of size $N = 200$ from the $Bin(10, 0.6)$ distribution and compute some statistics.
In terms of the functions described above, here $X$ = sps.binom and params = (n=10, p=0.6).
End of explanation
"""
sample = sps.multivariate_normal.rvs(mean=[1, 1], cov=[[2, 1], [1, 2]], size=200)
print('First 10 sample values:\n', sample[:10])
print('Sample mean:', sample.mean(axis=0))
print('Sample covariance matrix:\n', np.cov(sample.T))
"""
Explanation: There is a separate class for the <b>multivariate normal distribution</b>.
As an example, let's generate a sample of size $N=200$ from the distribution $\mathscr{N} \left( \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \right)$.
End of explanation
"""
sample = sps.norm.rvs(size=10, loc=np.arange(10), scale=0.1)
print(sample)
"""
Explanation: A little trick :)
End of explanation
"""
class cubic_gen(sps.rv_continuous):
def _pdf(self, x):
return 4 * x ** 3 / 15
cubic = cubic_gen(a=1, b=2, name='cubic')
sample = cubic.rvs(size=200)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample variance: %.3f' % sample.var())
"""
Explanation: Sometimes you <b>need to generate a sample from a distribution that is not available in scipy.stats</b>.
To do this, create a class that inherits from rv_continuous for continuous random variables, or from rv_discrete for discrete random variables.
An example can be found at http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous
As an example, let's generate a sample from the distribution with density $f(x) = \frac{4}{15} x^3 \, I\{x \in [1, 2]\}$, i.e. $[a, b] = [1, 2]$.
End of explanation
"""
some_distribution = sps.rv_discrete(name='some_distribution', values=([1, 2, 3], [0.6, 0.1, 0.3]))
sample = some_distribution.rvs(size=200)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample frequencies of the values:', (sample == 1).mean(), (sample == 2).mean(), (sample == 3).mean())
"""
Explanation: If a discrete random variable takes only a small number of values, then instead of creating a new class as shown above, you can specify these values and their probabilities explicitly.
End of explanation
"""
|
dkirkby/quantum-demo | jupyter/InfiniteSquareWell.ipynb | mit | %pylab inline
import matplotlib.animation
from IPython.display import HTML
import scipy.fftpack
"""
Explanation: One-Dimensional Infinite Square Well
End of explanation
"""
def calculate(initial, nx=100, nt=10, quantum=True):
"""Solve the 1D classical or quantum wave equation with fixed endpoints.
Parameters
----------
initial : callable
Function called with an array of x values in [0, 1) that should return an
array of (un-normalized) psi(x, 0) values.
nx : int
Number of subdivisions of the interval [0, 1] to use, which fixes the
allowed mode numbers to n = 1, 2, ..., nx - 2.
nt : int
Number of equally spaced times to subdivide [0, T) into.
quantum : bool
Use omega proportional to n ** 2 (quantum) or n (classical), where n is
the mode number.
Returns
-------
array
Complex array of shape (nt, nx) with calculated psi(x, t).
"""
# Tabulate the initial state.
x = np.linspace(0., 1., nx)
psi0 = initial(x)
if not np.allclose(psi0[[0, -1]], 0):
raise ValueError('Initial state does not satisfy the boundary conditions.')
# Normalize the initial state.
normsq = np.trapz(np.abs(psi0) ** 2, x)
psi0 /= np.sqrt(normsq)
# Calculate initial coefficients.
n = np.arange(1, nx - 1)
c0 = scipy.fftpack.dst(psi0[1:-1], type=1)
# Evolve each coefficient through one period.
tfrac = np.linspace(0., 1., nt, endpoint=False).reshape(-1, 1)
omegaT = 2 * np.pi * (n ** 2 if quantum else n)
phi = np.exp(-1j * omegaT * tfrac)
c = c0 * phi
# Reconstruct wavefunctions at each time step.
psi = np.zeros((nt, nx), np.complex)
psi[:, 1:-1] = scipy.fftpack.idst(c, type=1, axis=1) / (2. * (nx - 1))
return psi
"""
Explanation: The periodic evolution of a normalized initial state $\Psi(x,0)$ in a well of size $a$ can be expanded in terms of energy eigenstates as:
$$
\Psi(x, t) = \sqrt{\frac{2}{a}}\, \sum_{n=1}^\infty c_n \sin(k_n x) \exp(-i\omega_n t)
$$
with
$$
k_n = n \frac{\pi}{a} \quad , \quad
\omega_n = n^2 \frac{2\pi}{T} \quad, \quad
c_n = \sqrt{\frac{2}{a}}\,\int_0^a \sin(k_n x)\,\Psi(x, 0)\,dx
\; ,
$$
and period
$$
T = \frac{4 m a^2}{\pi\hbar} \; .
$$
Note that this is almost the same as the evolution of a classical string of linear mass density $\rho$ under constant tension $\tau$, where now $\text{Re}\Psi(x,t)$ is the transverse displacement, except that
$$
\omega_n = n \frac{2\pi}{T} \; ,
$$
since all modes $n$ propagate at the same speed,
$$
v = \frac{\omega_n}{k_n} = \frac{2 a}{T} = \sqrt{\frac{\tau}{\rho}} \; .
$$
Another difference is that the fundamental period $T$ of a classical string,
$$
T = \frac{2a}{v} = \sqrt{\frac{4 \rho a^2}{\tau}} \; ,
$$
scales with $a$ while the quantum system has a period that scales with $a^2$.
The function below solves for the complex-valued $\Psi(x,t)$ given a function to evaluate the initial (real) $\Psi(x,0)$, for times $t$ equally spaced on $[0, T)$. The scale $a=1$ is fixed.
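As a quick sanity check of the coefficient formula (a minimal sketch, not part of the original notebook): the type-1 discrete sine transform used by `calculate` should put essentially all of the weight on mode $n = 3$ when the initial state is a pure $\sin(3\pi x)$ standing wave. The overall normalization constant is left out here, since only the dominant mode number is being checked.

```python
import numpy as np
import scipy.fftpack

# For psi(x, 0) proportional to sin(3*pi*x) with a = 1, the DST-I coefficients
# over the interior grid points should peak at mode n = 3.
nx = 100
x = np.linspace(0., 1., nx)
psi0 = np.sqrt(2.) * np.sin(3 * np.pi * x)
c = scipy.fftpack.dst(psi0[1:-1], type=1)  # un-normalized coefficients for n = 1..nx-2
n = np.argmax(np.abs(c)) + 1               # modes are numbered from 1
print(n)  # → 3
```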
End of explanation
"""
def harmonic(*coefs):
n = np.arange(1, len(coefs) + 1)
def initial(x):
return (coefs * np.sin(np.pi * n * x.reshape(-1, 1))).sum(axis=1)
return initial
def pulse(x0=0.5, dx=0.1):
def initial(x):
y = np.zeros_like(x)
sel = np.abs(x - x0) < dx
y[sel] = np.cos(0.5 * np.pi * (x[sel] - x0) / dx)
return y
return initial
def pluck(x0=0.5):
def initial(x):
y = np.empty_like(x)
sel = x < x0
y[sel] = x[sel] / x0
y[~sel] = (1 - x[~sel]) / (1 - x0)
return y
return initial
"""
Explanation: Define some initial states:
- A superposition of low-order standing waves.
- A narrow pulse.
- A plucked string.
End of explanation
"""
def animate_examples(nx=100, nt=300, save=None):
models = [ harmonic(1, 1), pulse(0.4, 0.2), pluck(0.4) ]
classical = [ calculate(model, nx, nt, quantum=False) for model in models ]
quantum = [ calculate(model, nx, nt, quantum=True) for model in models ]
figure, ax = plt.subplots(len(models), 3, sharex=True, figsize=(12, 9))
plt.subplots_adjust(0.,0.,0.999,0.999,0.01,0.02)
ylim = np.array([
1.05 * np.max(np.abs(np.real(quantum)), axis=(1,2)),
1.05 * np.max(np.absolute(quantum) ** 2, axis=(1,2))
])
artists = []
x = np.linspace(0, 1, nx)
y = np.zeros_like(x)
for im in range(len(models)):
artists += ax[im,0].plot(x, y, 'r', lw=2)
artists += ax[im,1].plot(x, y, 'r', lw=2)
artists += ax[im,1].plot(x, y, 'b:', lw=2)
artists += ax[im,2].plot(x, y, 'k', lw=2)
for im in range(len(models)):
artists += ax[im,0].plot(x, np.real(classical[im][0]), alpha=0.25, lw=5, c='gray')
artists += ax[im,1].plot(x, np.real(quantum[im][0]), alpha=0.25, lw=5, c='gray')
ax[im,0].set_ylim(-ylim[0,im], +ylim[0,im])
ax[im,1].set_ylim(-ylim[0,im], +ylim[0,im])
ax[im,2].set_ylim(0, ylim[1,im])
for col in range(3):
ax[im,col].set_xlim(0, 1)
ax[im,col].set_xticks([])
ax[im,col].set_yticks([])
def init():
return artists
def update(it):
for im in range(len(models)):
artists[4 * im + 0].set_ydata(np.real(classical[im][it]))
artists[4 * im + 1].set_ydata(np.real(quantum[im][it]))
artists[4 * im + 2].set_ydata(np.imag(quantum[im][it]))
artists[4 * im + 3].set_ydata(np.absolute(quantum[im][it]) ** 2)
return artists
animation = matplotlib.animation.FuncAnimation(
figure, update, init_func=init, frames=nt, repeat=True, blit=True)
if save:
writer = matplotlib.animation.writers['ffmpeg'](bitrate=1200, fps=30)
animation.save(save, writer=writer)
return animation
"""
Explanation: Animate some examples, using ffmpeg for mp4 encoding:
End of explanation
"""
animate_examples(save='isw.mp4');
"""
Explanation: The following command encodes and saves an mp4 file, but also displays a single frame inline. The rows correspond to the different initial states:
- TOP: The sum of the two lowest-order standing waves (with equal coefficients).
- MIDDLE: A "half cosine" pulse.
- BOTTOM: A plucked string.
The columns show:
- LEFT: real part of $\Psi(x,t)$ for a classical wave ($\omega_n \propto n$) on a string fixed at both ends.
- CENTER: real (red) and imaginary (dotted blue) parts of $\Psi(x,t)$ for a quantum wave ($\omega_n \propto n^2$) fixed at both ends.
- RIGHT: probability density $|\Psi(x,t)|^2$ for the quantum wave.
Note how replacing $\omega_n \propto n$ with $\omega_n \propto n^2$ changes the evolution significantly and adds much higher frequency components.
If the classical plucked string looks unphysical, see this high-speed footage.
End of explanation
"""
#HTML(animate_examples().to_html5_video())
"""
Explanation: Uncomment and run the line below to display the animation inline:
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2016-01-29-matplotlib-styles.ipynb | mit | import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
plt.title('This is my title', fontsize=20)
"""
Explanation: One of the main applications of Python among the members of our group is, admittedly, visualising data in publication-quality figures. This was the topic of today's meeting, and we will certainly come back to it in the future.
But first, go and get some inspiration for modern reproducible research based on Jupyter Notebooks: Reproducible academic publications. If you're not convinced, read this Nature paper.
The Matplotlib package has a good enough set of default settings, but you almost always have to fiddle with some of them to get a high-quality figure. Usually this is done explicitly when creating a plot. For example, to set a title with a specific font size, you would use this syntax:
End of explanation
"""
mpl.rcParams['axes.titlesize'] = 20
"""
Explanation: However, it is often useful to set the same title size for every figure you produce. One way to do this is to change the default matplotlib parameters before running a script. These defaults, called rcParams, are stored in a dictionary-like variable called matplotlib.rcParams, which is global to the matplotlib package. The location of the matplotlibrc file that defines them can be found with this command:
mpl.matplotlib_fname()
rcParams can be modified dynamically in a Python script or shell. For example, to set the title font size equal to 20 for every figure in a script, you can put this line at the beginning of the script:
End of explanation
"""
plt.style.available
"""
Explanation: More about rcParams can be found on the official website.
Matplotlib styles
In the latest version of matplotlib, customizing the look of your figures became even easier with the styles submodule. This package allows you to create easy-to-switch matplotlibrc-like style-sheets. A list of pre-defined style-sheets, supplied with matplotlib, can be obtained like this:
End of explanation
"""
plt.style.use('bmh')
def sample_plot():
plt.plot(range(20))
plt.plot([i**2 for i in range(10)])
sample_plot()
"""
Explanation: You can switch to any of them using plt.style.use(), which sets a new set of rcParams for the current script/shell/notebook. Take a look at some of them here: Matplotlib Style Gallery.
End of explanation
"""
with plt.style.context('dark_background'):
sample_plot()
"""
Explanation: Restoring default rcParams
Ok, but how do you restore the default settings without restarting a kernel?
According to this answer on StackOverflow, you should first save the default settings in a dictionary (before using any styles):
inline_rc = dict(mpl.rcParams)
Then, when you are fed up with style-sheets and just want to go back to good old defaults, you can run this command:
mpl.rcParams.update(inline_rc)
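An alternative sketch (assuming a reasonably recent matplotlib): the factory defaults also live in matplotlib.rcParamsDefault, so they can be restored without having saved a snapshot first. Note, though, that this also discards any settings applied by the notebook's inline backend, which is exactly why the snapshot approach above is often preferred:

```python
import matplotlib as mpl

mpl.rcParams['axes.titlesize'] = 20   # change a default...
mpl.rcdefaults()                      # ...then restore the factory defaults
print(mpl.rcParams['axes.titlesize'] == mpl.rcParamsDefault['axes.titlesize'])  # → True
```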
Temporary styling
Another solution is to use a context manager. In other words, you can use a style temporary for a specified figure, without changing global settings:
End of explanation
"""
HTML(html)
"""
Explanation: For more info and how to create your own styles, look here: http://matplotlib.org/users/style_sheets.html
End of explanation
"""
|
kadrlica/ugali | notebooks/isochrone_example.ipynb | mit | def plot_iso(iso):
plt.scatter(iso.mag_1-iso.mag_2,iso.mag_1+iso.distance_modulus,marker='o',c='k')
plt.gca().invert_yaxis()
plt.xlabel('%s - %s'%(iso.band_1,iso.band_2)); plt.ylabel(iso.band_1)
iso1 = isochrone.factory(name='Padova',
age=12, # Gyr
metallicity=0.0002, # Z
distance_modulus=17
)
print(iso1)
plot_iso(iso1)
# Change the two bands that the isochrone loads
iso2 = isochrone.factory(name='Padova',
age=12, # Gyr
metallicity=0.0002, # Z
distance_modulus=17,
band_1 = 'i',
band_2 = 'z'
)
print(iso2)
plot_iso(iso2)
# Create a Dotter isochrone
iso3 = isochrone.factory(name='Dotter',
age=12, # Gyr
metallicity=0.0002, # Z
distance_modulus=17
)
print(iso3)
plot_iso(iso3)
"""
Explanation: Creating Isochrones
To use the isochrone module, you must have the isochrone library installed (see instructions here). The isochrone module provides an API to create isochrones and calculate various characteristics. The easiest way to create an isochrone is through the general factory interface shown below.
End of explanation
"""
iso = isochrone.factory(name='Padova',
age=12, # Gyr
metallicity=0.0002, # Z
distance_modulus=17
)
# You can set the age, metallicity, and distance modulus
iso.age = 11
iso.distance_modulus = 20
iso.metallicity = 0.00015
print(iso)
# Each parameter has bounds and will throw an error if you are outside the range (useful for fitting)
try:
iso.distance_modulus = 40
except ValueError as e:
print("Error:",e)
# However, you can increase the range
iso.params['distance_modulus'].set_bounds([10,50])
iso.distance_modulus = 40
print(iso)
iso.distance_modulus = 17
# Updating a parameters just changes the underlying isochrone file
# Note: There is no interpolation being done
for metal in [0.00011,0.00012,0.00013]:
iso.metallicity = metal
print("Metallicity:",iso.metallicity,iso.filename)
iso.metallicity = 0.000115
print("Metallicity:",iso.metallicity,iso.filename)
"""
Explanation: Modifying Isochrones
Once you create an isochrone, you can modify its parameters on the fly.
End of explanation
"""
# Draw a regular grid of points from the isochrone with associated IMF
initial_mass,mass_pdf,actual_mass,mag_1,mag_2 = iso1.sample(mass_steps=1e2)
plt.scatter(mag_1-mag_2,mag_1+iso1.distance_modulus,c=mass_pdf,marker='o',facecolor='none',vmax=0.001)
plt.colorbar()
plt.gca().invert_yaxis()
plt.xlabel('%s - %s'%(iso.band_1,iso.band_2)); plt.ylabel(iso.band_1)
plt.ylim(23,15); plt.xlim(-0.5,1.0)
# Randomly sample stars from the isochrone pdf
# Note: `simulate` returns the apparent magnitudes of stars
mag_1,mag_2 = iso1.simulate(stellar_mass=3e5)
n,bx,by,p = plt.hist2d(mag_1-mag_2,mag_2,bins=50,norm=LogNorm())
plt.colorbar(label="Number of Stars")
plt.gca().invert_yaxis()
plt.xlabel('%s - %s'%(iso.band_1,iso.band_2)); plt.ylabel(iso.band_1)
# The isochrone is normalized using a `richness` parameter (number of stars above a given mass threshold)
richness = 1e4
# Total stellar mass above some minimum mass
print("Stellar Mass:",richness * iso.stellar_mass())
# Luminosity calculated from the isochrone file and mass pdf
print("Stellar Luminosity:",richness * iso.stellar_luminosity())
# Calculate the absolute magnitude for this richness
print("Absolute Magnitude:",iso.absolute_magnitude(richness=richness))
# Calculate the absolute magnitude using the random sampling of Martin et al. 2008
print("Martin Absolute Magnitude:",iso.absolute_magnitude_martin(richness=richness))
"""
Explanation: Advanced Methods
The Isochrone class wraps several more complicated functions related to isochrones. A few examples are shown below.
End of explanation
"""
|
metpy/MetPy | v0.4/_downloads/Station_Plot.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as feat
import matplotlib.pyplot as plt
import numpy as np
from metpy.calc import get_wind_components
from metpy.cbook import get_test_data
from metpy.plots import StationPlot
from metpy.plots.wx_symbols import current_weather, sky_cover
from metpy.units import units
"""
Explanation: Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
End of explanation
"""
f = get_test_data('station_data.txt')
all_data = np.loadtxt(f, skiprows=1, delimiter=',',
usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),
('slp', 'f'), ('air_temperature', 'f'),
('cloud_fraction', 'f'), ('dewpoint', 'f'),
('weather', '16S'),
('wind_dir', 'f'), ('wind_speed', 'f')]))
"""
Explanation: The setup
First read in the data. We use numpy.loadtxt to read in the data and use a structured
numpy.dtype to allow different types for the various columns. This allows us to handle
the columns with string data.
End of explanation
"""
# Get the full list of stations in the data
all_stids = [s.decode('ascii') for s in all_data['stid']]
# Pull out these specific stations
whitelist = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data = np.concatenate([all_data[all_stids.index(site)].reshape(1,) for site in whitelist])
"""
Explanation: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
End of explanation
"""
# Get all of the station IDs as a list of strings
stid = [s.decode('ascii') for s in data['stid']]
# Get the wind components, converting from m/s to knots, as is appropriate
# for the station plot
u, v = get_wind_components((data['wind_speed'] * units('m/s')).to('knots'),
data['wind_dir'] * units.degree)
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
cloud_frac = (8 * data['cloud_fraction']).astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = [s.decode('ascii') for s in data['weather']]
wx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55,
'-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75}
wx = [wx_codes[s.split()[0] if ' ' in s else s] for s in wx_text]
"""
Explanation: Now that we have the data we want, we need to perform some conversions:
Get a list of strings for the station IDs
Get wind components from speed and direction
Convert cloud fraction values to integer codes [0 - 8]
Map METAR weather codes to WMO codes for weather symbols
End of explanation
"""
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
state_boundaries = feat.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='110m', facecolor='none')
"""
Explanation: Now all the data wrangling is finished; we just need to set up the plotting and go
Set up the map projection and set up a cartopy feature for state borders
End of explanation
"""
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries)
ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['lon'], data['lat'], transform=ccrs.PlateCarree(),
fontsize=12)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point. Each one uses a different color.
stationplot.plot_parameter('NW', data['air_temperature'], color='red')
stationplot.plot_parameter('SW', data['dewpoint'], color='darkgreen')
# A more complex example uses a custom formatter to control how the sea-level pressure
# values are plotted. This uses the standard trailing 3-digits of the pressure value
# in tenths of millibars.
stationplot.plot_parameter('NE', data['slp'],
formatter=lambda v: format(10 * v, '.0f')[-3:])
# Plot the cloud cover symbols in the center location. This uses the codes made above and
# uses the `sky_cover` mapper to convert these values to font codes for the
# weather symbol font.
stationplot.plot_symbol('C', cloud_frac, sky_cover)
# Do the same, but this time plot the current weather to the left of center, using the
# `current_weather` mapper to convert symbols to the right glyphs.
stationplot.plot_symbol('W', wx, current_weather)
# Add wind barbs
stationplot.plot_barb(u, v)
# Also plot the actual text of the station id. Instead of cardinal directions,
# plot further out by specifying a location of 2 increments in x and 0 in y.
stationplot.plot_text((2, 0), stid)
plt.show()
"""
Explanation: The payoff
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb | apache-2.0 | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex AI SDK for Python: AutoML training text entity extraction model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex AI SDK for Python to create text entity extraction models and do online prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the NCBI Disease Research Abstracts dataset from National Center for Biotechnology Information. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML text entity extraction model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import os
import google.cloud.aiplatform as aiplatform
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
IMPORT_FILE = "gs://cloud-samples-data/language/ucaip_ten_dataset.jsonl"
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML text entity extraction model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
End of explanation
"""
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
"""
Explanation: Quick peek at your data
This tutorial uses a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
End of explanation
"""
dataset = aiplatform.TextDataset.create(
display_name="NCBI Biomedical" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aiplatform.schema.dataset.ioformat.text.extraction,
)
print(dataset.resource_name)
"""
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
"""
job = aiplatform.AutoMLTextTrainingJob(
display_name="biomedical_" + TIMESTAMP, prediction_type="extraction"
)
print(job)
"""
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type of task to train the model for.
classification: A text classification model.
sentiment: A text sentiment analysis model.
extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.
End of explanation
"""
model = job.run(
dataset=dataset,
model_display_name="biomedical_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
)
"""
Explanation: Run the training pipeline
Next, you start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 4 hours.
End of explanation
"""
# Get model resource ID
models = aiplatform.Model.list(filter="display_name=biomedical_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
End of explanation
"""
endpoint = model.deploy()
"""
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
End of explanation
"""
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
"""
Explanation: Send an online prediction request
Send an online prediction request to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
"""
instances_list = [{"content": test_item}]
prediction = endpoint.predict(instances_list)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
{ 'content': text_string }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each entity.
confidences: The predicted confidence, between 0 and 1, per entity.
textSegmentStartOffsets: The character offset in the text to the start of the entity.
textSegmentEndOffsets: The character offset in the text to the end of the entity.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
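As a sketch of how the parallel lists above fit together, the following hypothetical helper pairs them up and slices the entity text out of the input. It assumes offsets are 0-based with an exclusive end, and uses a mock payload rather than a real endpoint.predict() response; verify both against your actual output.

```python
# Sketch: pair the parallel response lists into (entity_text, label,
# confidence) tuples. Offsets assumed 0-based with an exclusive end.
def extract_entities(text, payload):
    entities = []
    for label, conf, start, end in zip(
        payload["displayNames"],
        payload["confidences"],
        payload["textSegmentStartOffsets"],
        payload["textSegmentEndOffsets"],
    ):
        entities.append((text[int(start):int(end)], label, float(conf)))
    return entities
```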
End of explanation
"""
endpoint.undeploy_all()
"""
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
"""
# Delete the dataset using the Vertex dataset object
dataset.delete()
# Delete the model using the Vertex model object
model.delete()
# Delete the endpoint using the Vertex endpoint object
endpoint.delete()
# Delete the AutoML or Pipeline training job
job.delete()
import os

if os.getenv("IS_TESTING"):
! gsutil -m rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
# csaladenes/blog: airports/airportia_hu_dest_parser.ipynb (MIT license)
for i in locations:
print i
if i not in sch:sch[i]={}
    #march 11-24 = 2 weeks
    for d in range (11,25):
        try:
            url=airportialinks[i]
            full=url+'departures/201703'+str(d)
            # keys of sch[i] are full URLs, not day numbers,
            # so membership must be tested with the URL itself
            if full not in sch[i]:
                m=requests.get(full).content
                sch[i][full]=pd.read_html(m)[0]
                #print full
        except: pass #print 'no tables',i,d
for i in range(11,25):
testurl=u'https://www.airportia.com/hungary/budapest-liszt-ferenc-international-airport/departures/201703'+str(i)
print 'nr. of flights on March',i,':',len(sch['BUD'][testurl])
testurl=u'https://www.airportia.com/hungary/budapest-liszt-ferenc-international-airport/departures/20170318'
k=sch['BUD'][testurl]
k[k['To']=='Frankfurt FRA']
"""
Explanation: Record schedules for 2 weeks, then augment the counts with weekly flight numbers.
Seasonal and seasonal-charter flights count as once per week for 3 months of the year, i.e. 12/52 flights per week. TGM is handled separately, since its history lies in the past.
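The weighting described above can be sketched as a small helper (the category strings here are hypothetical; the real data uses whatever labels appear in the scraped schedule):

```python
# Sketch of the per-week weighting: flights observed in the 2-week window
# count at observed/2 per week, while seasonal routes are down-weighted to
# 12/52 (roughly once a week for 3 months out of the year).
def weekly_weight(category, observed_in_two_weeks):
    if category in ("Seasonal", "Seasonal Charter"):
        return 12.0 / 52.0
    return observed_in_two_weeks / 2.0
```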
End of explanation
"""
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['From']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf=mdf.replace('Hahn','Frankfurt')
mdf=mdf.replace('Hahn HHN','Frankfurt HHN')
mdf['City']=[i[:i.rfind(' ')] for i in mdf['To']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['To']]
k=mdf[mdf['Date']==testurl]
k[k['To']=='Frankfurt FRA']
"""
Explanation: sch checks out against the source website
End of explanation
"""
file("mdf_hu_dest.json",'w').write(json.dumps(mdf.reset_index().to_json()))
len(mdf)
airlines=set(mdf['Airline'])
cities=set(mdf['City'])
file("cities_hu_dest.json",'w').write(json.dumps(list(cities)))
file("airlines_hu_dest.json",'w').write(json.dumps(list(airlines)))
citycoords={}
for i in cities:
if i not in citycoords:
if i==u'Birmingham': z='Birmingham, UK'
elif i==u'Valencia': z='Valencia, Spain'
elif i==u'Naples': z='Naples, Italy'
elif i==u'St. Petersburg': z='St. Petersburg, Russia'
elif i==u'Bristol': z='Bristol, UK'
elif i==u'Victoria': z='Victoria, Seychelles'
elif i==u'Washington': z='Washington, DC'
elif i==u'Odessa': z='Odessa, Ukraine'
else: z=i
citycoords[i]=Geocoder(apik).geocode(z)
print i
citysave={}
for i in citycoords:
citysave[i]={"coords":citycoords[i][0].coordinates,
"country":citycoords[i][0].country}
file("citysave_hu_dest.json",'w').write(json.dumps(citysave))
"""
Explanation: mdf checks out against the source website
End of explanation
"""
# jrrickerson/scroller: Scroller Game Tutorial.ipynb (MIT license)
!python kivy/examples/tutorials/pong/main.py
"""
Explanation: SCROLLER GAME TUTORIAL
This tutorial will teach you how to build a basic side scrolling game with Python and Kivy. You will start out by displaying a few basic shapes on the screen, then adding some of the game mechanics, handling user input, and then finally adding nicer graphics and adding the "fun" parts of the game - obstacles, win / lose conditions, and scoring!
Step 0
The first thing we need to do is set up our Kivy environment. Follow the instructions for setup here:
* Python Development Environment Setup: https://github.com/learnleapfly/gamecamp/wiki/Configuring-Python%2C-Python-Modules%2C-and-Virtual-Environments
* Install Kivy: https://github.com/learnleapfly/gamecamp/wiki/Setting-up-a-Kivy-Development-Environment
Once everything is installed and set up, you should be able to try out the Pong demo, found in the kivy/examples/tutorials directory.
End of explanation
"""
!sudo chmod o+rx /etc/input*
"""
Explanation: NOTE: On newer versions of Ubuntu, if you see Permissions error messages when you start the tutorial, you may need to run the following in order to allow Kivy to properly see touchscreen / touchpad input devices.
End of explanation
"""
# Main.py
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.properties import NumericProperty, ReferenceListProperty
from kivy.clock import Clock
from kivy.vector import Vector
class ScrollerGame(Widget):
def update(self, dt):
pass
class RectangleMan(Widget):
velocity_x = NumericProperty(0)
velocity_y = NumericProperty(0)
velocity = ReferenceListProperty(velocity_x, velocity_y)
def move(self):
pass
class ScrollerApp(App):
def build(self):
game = ScrollerGame()
Clock.schedule_interval(game.update, 1.0/60.0)
return game
if __name__ == '__main__':
ScrollerApp().run()
"""
Explanation: Step 1
The first thing we want to do is create the basic game setup. In order to run our game, we have to create a Kivy App object, and our root game object, which will control everything else in our game. We also need to create our basic .kv file for our game, where we will some information about the game objects in our game - positions, sizes, colors, etc.
End of explanation
"""
# scroller.kv
<ScrollerGame>:
canvas:
Rectangle:
pos: 0, 0
size: self.width, 15
RectangleMan:
pos: 0, 15
<RectangleMan>:
size: 20, 50
canvas:
Color:
rgba: 0, 1, 0, 1
Rectangle:
pos: self.pos
size: self.size
"""
Explanation: In our code, we build a class RectangleMan which will represent our main character. He has a few properties to manage his current velocity.
python
class RectangleMan(Widget):
velocity_x = NumericProperty(0)
velocity_y = NumericProperty(0)
velocity = ReferenceListProperty(velocity_x, velocity_y)
The RectangleMan class also has a single method, move which will manipulate the velocity and position of our character on the screen. For now it does nothing, but we'll fill in the details in further steps.
python
def move(self):
pass
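As a preview of the logic move will eventually implement, here it is as a plain function on (x, y) tuples so it can run outside Kivy; the real method will update self.pos from self.velocity instead:

```python
# Sketch: advance a position by a velocity, once per frame.
def move(pos, velocity):
    return (pos[0] + velocity[0], pos[1] + velocity[1])
```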
We also created the ScrollerGame class, which will actually be part of the main event loop of the game, and which will call methods on other game objects to cause things to happen in our game. For now, our update method does nothing, but we will fill that in later.
python
class ScrollerGame(Widget):
def update(self, dt):
pass
Lastly, to tie it all together, we created our ScrollerApp class, which represents the overall application. This will do a few things to set the game up and get it running. The ScrollerApp class has to inherit from Kivy's App class, and defines just one method - the build method, which Kivy will call for us when setting up the game initially. In the build method, we create an instance of our root game object, ScrollerGame. Then we use Kivy's Clock class to call the ScrollerGame.update method we defined. The Clock class takes the time interval on which to call the update method - in this case we define it as 1.0 / 60.0, which means our game will run at 60 frames per second.
python
class ScrollerApp(App):
def build(self):
game = ScrollerGame()
Clock.schedule_interval(game.update, 1.0/60.0)
return game
At the bottom of our script, we use the standard Python trick that allows us to run our module as a script. Here, we simply instantiate our ScrollerApp, and then call the run method that Kivy provides to begin running the application.
End of explanation
"""
# eaton-lab/toytree: docs/NodeLabels.ipynb (BSD-3-Clause license)
import toytree
import toyplot
import numpy as np
# newick tree string with edge lengths and support values
newick = """
((apple:2,orange:4)100:2,(((tomato:2,eggplant:1)100:2,pepper:3)90:1,tomatillo:2)100:1);
"""
# load toytree
tre = toytree.tree(newick)
"""
Explanation: Node labels
Node labels are markers plotted on the nodes (vertices) of a tree and can be highly modified, including the addition of interactive features which are useful not only for producing final production-quality figures but also for exploring metadata on trees.
End of explanation
"""
## no node labels
tre.draw();
## default node labels
tre.draw(node_labels=True);
"""
Explanation: Hide/Show node labels
The argument node_labels can be entered as a list of names or left as None or False in which case the node labels are not shown. If you enter True a special feature is applied to the nodes in which case they will become interactive and show information for all available features (node values stored in the Tree object metadata) in the Tree. Hover your cursor over the green node labels below for an example.
End of explanation
"""
## get node height values
heights = tre.get_node_values("height", show_root=True, show_tips=False)
## plot tree with node values
tre.draw(node_labels=heights, tip_labels_align=True, orient='down');
"""
Explanation: Node plot order
The order in which nodes and node values are plotted can be accessed from toytrees directly using .get_node_values(). By linking this function to the tree itself we can be sure that the node labels will be returned in the correct order regardless of whether the tree has been pruned, or rotated, or any other modification.
Node features
The values returned by .get_node_values() depend on the "feature" argument that is provided. All trees will always have the features ["idx", "name", "support", "dist", "height"] available. Below we grab the 'height' values and plot them on the nodes of the tree. The .get_node_values() function does not return values for the root node or for leaves by default, since these often are not meaningful on trees where the values represent support values or distances (i.e., the values are actually relevant to edges, not nodes), but here we also want the root's value, so we set show_root=True.
End of explanation
"""
## plot tree with node values
tre.draw(
node_labels=tre.get_node_values("support"),
node_sizes=24,
node_labels_style={"font-size": "10px"},
);
"""
Explanation: Show specific node features
Similarly, in this case we are plotting support values so we will let the default arguments to .get_node_values() hide the values at the tips and root of the tree.
End of explanation
"""
## traverse tree and add a new feature to each node
for node in tre.treenode.traverse():
if node.dist > 1:
node.add_feature("is_long_branch", True)
else:
node.add_feature("is_long_branch", False)
## get new feature in node plot order
feature = tre.get_node_values("is_long_branch", 0, 0)
## plot tree with node values
tre.draw(
width=350,
node_labels=feature,
node_sizes=25,
use_edge_lengths=True,
node_labels_style={"font-size": "9px"},
);
"""
Explanation: Add/modify/del node features
End of explanation
"""
## get new feature in node plot order
feature = tre.get_node_values("is_long_branch", 0, 0)
colors = ["orange" if i else "lightgrey" for i in feature]
## plot tree with node values
tre.draw(
width=300,
node_labels=None,
node_sizes=12,
node_colors=colors,
node_style={"stroke":"black"},
);
"""
Explanation: Node colors
Here we apply different colors to each node based on its value for the feature that we assigned to the nodes above, is_long_branch. We also use the magic argument node_labels=True which assigns interactive information to each node. Hover your cursor over the nodes in the plot below to see how the colors match up with the features of the node.
End of explanation
"""
## get new feature in node plot order
feature = tre.get_node_values("is_long_branch", 0, 0)
colors = ["white" if i else "lightgrey" for i in feature]
## plot tree with node values
tre.draw(
node_labels=True,
node_sizes=20,
node_colors=colors,
node_style={"stroke": "darkcyan", "stroke-width": "2px"},
);
"""
Explanation: Node styling
...
End of explanation
"""
# leriomaggio/python-in-a-notebook: 01 Introducing the IPython Notebook.ipynb (MIT license)
# This is a code cell made up of Python comments
# We can execute it by clicking on it with the mouse
# then clicking the "Run Cell" button
# A comment is a pretty boring piece of code
# This code cell generates "Hello, World" when executed
print("Hello, World")
# Code cells can also generate graphical output
%matplotlib inline
import matplotlib
matplotlib.pyplot.hist([0, 1, 2, 2, 3, 3, 3, 4, 4, 4, 10]);
"""
Explanation: Introducing the IPython Notebook
Aron Ahmadia (US Army ERDC) and David Ketcheson (KAUST)
Teaching Numerical Methods with IPython Notebooks, SciPy 2014
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">This lecture</span> by <a xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName" rel="cc:attributionURL">Aron Ahmadia and David Ketcheson</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. All code examples are also licensed under the MIT license.
NOTE: Some changes have been applied to make this notebook compliant with Python 3
What is this?
This is a gentle introduction to the IPython Notebook aimed at lecturers who wish to incorporate it in their teaching, written in an IPython Notebook. This presentation adapts material from the IPython official documentation.
What is an IPython Notebook?
An IPython Notebook is a:
[A] Interactive environment for writing and running code
[B] Weave of code, data, prose, equations, analysis, and visualization
[C] Tool for prototyping new code and analysis
[D] Reproducible workflow for scientific research
[E] All of the above
Writing and Running Code
The IPython Notebook consists of an ordered list of cells.
There are four important cell types:
Code
Markdown
Heading
Raw
We briefly introduce how Code Cells work here. We will return to the other three cell types later.
Code Cells
End of explanation
"""
x = 0
print(x)
x += 1
print(x)
"""
Explanation: Modal editor
Starting with IPython 2.0, the IPython Notebook has a modal user interface. This means that the keyboard does different things depending on which mode the Notebook is in. There are two modes: edit mode and command mode.
Edit mode
Edit mode is indicated by a green cell border and a prompt showing in the editor area:
<img src="./files/images/edit_mode.png">
When a cell is in edit mode, you can type into the cell, like a normal text editor.
<div class="alert alert-success" style="margin: 10px">
Enter edit mode by pressing `enter` or using the mouse to click on a cell's editor area.
</div>
<div class="alert alert-success" style="margin: 10px">
While in edit mode, tab-completion works for variables the kernel knows about from executing previous cells.
</div>
Command mode
Command mode is indicated by a grey cell border:
<img src="./files/images/command_mode.png">
When you are in command mode, you are able to edit the notebook as a whole, but not type into individual cells. Most importantly, in command mode, the keyboard is mapped to a set of shortcuts that let you perform notebook and cell actions efficiently. For example, if you are in command mode and you press c, you will copy the current cell - no modifier is needed.
<div class="alert alert-error" style="margin: 10px">
Don't try to type into a cell in command mode; unexpected things will happen!
</div>
<div class="alert alert-success" style="margin: 10px">
Enter command mode by pressing `esc` or using the mouse to click *outside* a cell's editor area.
</div>
Mouse navigation
All navigation and actions in the Notebook are available using the mouse through the menubar and toolbar, which are both above the main Notebook area:
<img src="./files/images/menubar_toolbar.png">
The first idea of mouse based navigation is that cells can be selected by clicking on them. The currently selected cell gets a grey or green border depending on whether the notebook is in edit or command mode. If you click inside a cell's editor area, you will enter edit mode. If you click on the prompt or output area of a cell you will enter command mode.
If you are running this notebook in a live session (not on http://nbviewer.ipython.org) try selecting different cells and going between edit and command mode. Try typing into a cell.
The second idea of mouse based navigation is that cell actions usually apply to the currently selected cell. Thus if you want to run the code in a cell, you would select it and click the "Play" button in the toolbar or the "Cell:Run" menu item. Similarly, to copy a cell you would select it and click the "Copy" button in the toolbar or the "Edit:Copy" menu item. With this simple pattern, you should be able to do most everything you need with the mouse.
Markdown and heading cells have one other state that can be modified with the mouse. These cells can either be rendered or unrendered. When they are rendered, you will see a nice formatted representation of the cell's contents. When they are unrendered, you will see the raw text source of the cell. To render the selected cell with the mouse, click the "Play" button in the toolbar or the "Cell:Run" menu item. To unrender the selected cell, double click on the cell.
Keyboard Navigation
The modal user interface of the IPython Notebook has been optimized for efficient keyboard usage. This is made possible by having two different sets of keyboard shortcuts: one set that is active in edit mode and another in command mode.
The most important keyboard shortcuts are enter, which enters edit mode, and esc, which enters command mode.
In edit mode, most of the keyboard is dedicated to typing into the cell's editor. Thus, in edit mode there are relatively few shortcuts.
In command mode, the entire keyboard is available for shortcuts (press h in command mode to see the full list).
Here is the rough order in which the IPython Developers recommend learning the command mode shortcuts:
Basic navigation: enter, shift-enter, up/k, down/j
Saving the notebook: s
Cell types: y, m, 1-6, t
Cell creation and movement: a, b, ctrl+k, ctrl+j
Cell editing: x, c, v, d, z, shift+=
Kernel operations: i, 0
I personally (& humbly) suggest learning h first!
The IPython Notebook Architecture
So far, we have learned the basics of using IPython Notebooks.
For simple demonstrations, the typical user doesn't need to understand how the computations are being handled, but to successfully write and present computational notebooks, you will need to understand how the notebook architecture works.
A live notebook is composed of an interactive web page (the front end), a running IPython session (the kernel or back end), and a web server responsible for handling communication between the two (the, err..., middle-end)
A static notebook, as for example seen on NBViewer, is a static view of the notebook's content. The default format is HTML, but a notebook can also be output in PDF or other formats.
The centerpiece of an IPython Notebook is the "kernel", the IPython instance responsible for executing all code. Your IPython kernel maintains its state between executed cells.
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('vW_DRAJ0dtc')
"""
Explanation: There are two important actions for interacting with the kernel. The first is to interrupt it. This is the same as sending a Control-C from the command line. The second is to restart it. This completely terminates the kernel and starts it anew. None of the kernel state is saved across a restart.
Markdown cells
Text can be added to IPython Notebooks using Markdown cells. Markdown is a popular markup language that is a superset of HTML. Its specification can be found here:
http://daringfireball.net/projects/markdown/
Markdown basics
Text formatting
You can make text *italic* or **bold** or `monospace`
Itemized Lists
One
Sublist
This
Sublist
- That
- The other thing
Two
Sublist
Three
Sublist
Enumerated Lists
Here we go
Sublist
Sublist
There we go
Now this
Horizontal Rules
Blockquotes
To me programming is more than an important practical art. It is also a gigantic undertaking in the foundations of knowledge. -- Rear Admiral Grace Hopper
Links
IPython's website
Code
This is a code snippet:
Python
def f(x):
"""a docstring"""
return x**2
This is an example of a Python function
You can also use triple-backticks to denote code blocks.
This also allows you to choose the appropriate syntax highlighter.
C
for (i=0; i<n; i++) {
printf("hello %d\n", i);
x += 4;
}
Tables
Time (s) | Audience Interest
---------|------------------
0 | High
1 | Medium
5 | Facebook
Images
YouTube
End of explanation
"""
%matplotlib inline
%whos
"""
Explanation: Other HTML
<strong> Be Bold! </strong>
Mathematical Equations
Courtesy of MathJax, you can beautifully render mathematical expressions, both inline:
$e^{i\pi} + 1 = 0$, and displayed:
$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$
Equation Environments
You can also use a number of equation environments, such as align:
\begin{align}
x &= 4 \\
y+z &= x
\end{align}
A full list of available TeX and LaTeX commands is maintained by Dr. Carol Burns.
Other Useful MathJax Notes
inline math is demarcated by $ $, or \( \)
displayed math is demarcated by $$ $$ or \[ \]
displayed math environments can also be directly demarcated by \begin and \end
\newcommand and \def are supported, within areas MathJax processes (such as in a \[ \] block)
equation numbering is not officially supported, but it can be indirectly enabled
A Note about Notebook Security
By default, a notebook downloaded to a new computer is untrusted
HTML and Javascript in Markdown cells is now never executed
HTML and Javascript code outputs must be explicitly re-executed
Some of these restrictions can be mitigated through shared accounts (Sage MathCloud) and secrets
More information on notebook security is in the IPython Notebook documentation
Magics
IPython kernels execute a superset of the Python language. The extension functions, commonly referred to as magics, come in two variants.
Line Magics
A line magic looks like a command line call. The most important of these is %matplotlib inline, which embeds all matplotlib plot output as images in the notebook itself.
End of explanation
"""
%%timeit
import numpy as np
np.sum(np.random.rand(1000))
"""
Explanation: Cell Magics
A cell magic takes its entire cell as an argument. Although there are a number of useful cell magics, you may find %%timeit to be useful for exploring code performance.
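Outside of IPython, roughly the same measurement can be made with the standard library's timeit module:

```python
import timeit

# Time the same expression the %%timeit cell above measures, 1000 runs.
elapsed = timeit.timeit(
    "np.sum(np.random.rand(1000))",
    setup="import numpy as np",
    number=1000,
)
print(elapsed)
```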
End of explanation
"""
%%python2
i = 10**60
print type(i)
"""
Explanation: Execute Code as Python 2
End of explanation
"""
!ls
x = !ls
print(x)
"""
Explanation: Interacting with the Command Line
IPython supports one final trick, the ability to interact directly with your shell by using the ! operator.
End of explanation
"""
# pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn: doc/notebooks/automaton.has_bounded_lag.ipynb (GPL-3.0 license)
import vcsn
ctx = vcsn.context("lat<lan_char(ab), lan_char(xy)>, b")
ctx
a = ctx.expression(r"'a,x''b,y'*'a,\e'").automaton()
a
"""
Explanation: automaton.has_bounded_lag
Check if the transducer has bounded lag, i.e. that the difference of length between the input and output words is bounded, for every word accepted.
It is a pre-condition for transducer synchronization.
Preconditions:
The automaton has at least 2 tapes
Examples
End of explanation
"""
a.has_bounded_lag()
b = ctx.expression(r"(\e|x)(a|\e)*(b|y)").automaton()
b
"""
Explanation: This automaton has a bounded lag: there is at most a difference of 1 between the length of the input and the length of the output (e.g., $abba \rightarrow xyy$).
End of explanation
"""
b.has_bounded_lag()
ctx_3 = vcsn.context("lat<lan_char(ab), lan_char(jk), lan_char(xy)>, b")
c = ctx_3.expression(r"(a|j|x)(b|k|\e)*").automaton()
c
"""
Explanation: This transducer, however, doesn't have a bounded lag: there can be an arbitrarily large difference between the input and output. For example, $ab \rightarrow xy$, but $aaaaaaaaab \rightarrow xy$.
End of explanation
"""
c.has_bounded_lag()
"""
Explanation: In the case of more than 2 tapes, has_bounded_lag checks that every tape has a bounded lag compared to the first one (incidentally, if that is the case, it ensures that every tape has a bounded lag with respect to every other). This transducer has a bounded lag if you only consider the first 2 tapes, but the third tape doesn't.
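For intuition only (this is not the vcsn algorithm, which analyses the automaton's structure rather than sampled words), the lag of a finite sample of accepted input/output pairs can be measured directly:

```python
# Intuition sketch: the lag over a sample of (input, output) word pairs is
# the largest length difference observed. A bounded-lag transducer keeps
# this below a fixed bound over the *whole* language, which a finite
# sample can suggest but never prove.
def observed_lag(pairs):
    return max(abs(len(inp) - len(out)) for inp, out in pairs)
```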
End of explanation
"""
# IanHawke/ET-NumericalMethods-2016: slides/03-hyperbolic-pdes.ipynb (MIT license)
import numpy
from matplotlib import pyplot
%matplotlib inline
def RHS(U, dx):
"""
RHS term.
Parameters
----------
U : array
contains [phi, phi_t, phi_x] at each point
dx : double
grid spacing
Returns
-------
dUdt : array
contains the required time derivatives
"""
phi = U[0, :]
phi_t = U[1, :]
phi_x = U[2, :]
dUdt = numpy.zeros_like(U)
dUdt[0, :] = phi_t
dUdt[1, 1:-1] = 1.0 / (2.0*dx)*(phi_x[2:] - phi_x[:-2])
dUdt[2, 1:-1] = 1.0 / (2.0*dx)*(phi_t[2:] - phi_t[:-2])
return dUdt
"""
Explanation: Hyperbolic PDEs
Einstein's equations behave like wave equations, $\partial_{tt} \phi = \nabla^2 \phi$.
Use first order form
$$
\frac{\partial}{\partial t} \begin{pmatrix} \phi \\ \phi_t \\ \phi_x \end{pmatrix} = \begin{pmatrix} \phi_t \\ 0 \\ 0 \end{pmatrix} + \frac{\partial}{\partial x} \begin{pmatrix} 0 \\ \phi_x \\ \phi_t \end{pmatrix}.
$$
Method of Lines (MoL)
Convert a first order system
$$
\partial_t {\bf u} = {\bf s} + \partial_x {\bf f}({\bf u})
$$
into an ODE
$$
\frac{d}{d t} {\bf U}_i = {\bf s}({\bf U}_i) + L({\bf U}_i)
$$
by discretising (differencing) in space, not time. Then apply, e.g., Runge-Kutta in time.
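The spatial operator $L$ in the semi-discrete system is, in the RHS function above, a second-order central difference; as a standalone sketch:

```python
import numpy

# Second-order central difference on interior points, as used for the
# spatial derivative terms in the RHS function above.
def central_diff(f, dx):
    dfdx = numpy.zeros_like(f)
    dfdx[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)
    return dfdx
```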
End of explanation
"""
def grid(Npoints):
dx = 2.0 / Npoints
return dx, numpy.linspace(-1.0-dx/2.0,
1.0+dx/2.0, Npoints+2)
pyplot.figure()
dx, x = grid(4)
pyplot.plot(x, numpy.ones_like(x), 'kx', ms=12, mew=2)
pyplot.show()
def initial_data(x):
U = numpy.zeros((3, len(x)))
U[0, :] = numpy.exp(-20.0 * x**2)
U[2, :] = -40.0*x*numpy.exp(-20.0 * x**2)
return U
pyplot.figure()
dx, x = grid(100)
U = initial_data(x)
pyplot.plot(x, U[0, :], 'b-')
pyplot.xlim(-1, 1)
pyplot.show()
def apply_boundaries(dUdt):
"""
Periodic boundaries applied to the RHS directly.
"""
dUdt[:, 0] = dUdt[:, -2]
dUdt[:, -1] = dUdt[:, 1]
return dUdt
def RK2_step(U, RHS, apply_boundaries, dt, dx):
rhs = apply_boundaries(RHS(U, dx))
Up = U + dt * rhs
rhs_p = apply_boundaries(RHS(Up, dx))
Unew = 0.5 * (U + Up + dt * rhs_p)
return Unew
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(1.0 / dt)
for n in range(Nsteps):
U = RK2_step(U, RHS, apply_boundaries, dt, dx)
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=1$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
U = initial_data(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
U = RK2_step(U, RHS, apply_boundaries, dt, dx)
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
"""
Explanation: Key code steps
We'll need
Grid
Initial and boundary conditions
Right hand side (RHS) function
Time stepping.
End of explanation
"""
def error_norms(U, U_initial):
"""
Error norms (1, 2, infinity)
"""
N = len(U)
error_1 = numpy.sum(numpy.abs(U - U_initial))/N
error_2 = numpy.sqrt(numpy.sum((U - U_initial)**2)/N)
error_inf = numpy.max(numpy.abs(U - U_initial))
return error_1, error_2, error_inf
Npoints_all = 50 * 2**(numpy.arange(0, 6))
dxs = numpy.zeros(len(Npoints_all))
wave_errors = numpy.zeros((3, len(Npoints_all)))
for i, Npoints in enumerate(Npoints_all):
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
U = RK2_step(U, RHS, apply_boundaries, dt, dx)
dxs[i] = dx
wave_errors[:, i] = error_norms(U[0, :], U0[0, :])
pyplot.figure()
pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r"${\cal E}_1$")
pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r"${\cal E}_2$")
pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r"${\cal E}_{\infty}$")
pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**4, 'k-', label=r"$p=4$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error norm")
pyplot.legend(loc="lower right")
pyplot.show()
def initial_data_asymmetric(x):
U = numpy.zeros((3, len(x)))
U[0, :] = numpy.sin(numpy.pi*x)*(1-x)**2*(1+x)**3
U[2, :] = numpy.pi*numpy.cos(numpy.pi*x)*(1-x)**2*(1+x)**3 + \
numpy.sin(numpy.pi*x)*(2.0*(1-x)*(1+x)**3 + \
3.0*(1-x)**2*(1+x)**2)
return U
pyplot.figure()
dx, x = grid(100)
U = initial_data_asymmetric(x)
pyplot.plot(x, U[0, :], 'b-')
pyplot.xlim(-1, 1)
pyplot.show()
Npoints_all = 50 * 2**(numpy.arange(0, 6))
dxs = numpy.zeros(len(Npoints_all))
wave_errors = numpy.zeros((3, len(Npoints_all)))
for i, Npoints in enumerate(Npoints_all):
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data_asymmetric(x)
U = initial_data_asymmetric(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
U = RK2_step(U, RHS, apply_boundaries, dt, dx)
dxs[i] = dx
wave_errors[:, i] = error_norms(U[0, :], U0[0, :])
pyplot.figure()
pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r"${\cal E}_1$")
pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r"${\cal E}_2$")
pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r"${\cal E}_{\infty}$")
pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**2, 'k-', label=r"$p=2$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error norm")
pyplot.legend(loc="lower right")
pyplot.show()
"""
Explanation: Convergence
To check convergence, look at the error norm
$$
\| {\cal E} \|_p = \left( \tfrac{1}{N} \sum_{i=1}^N \left| \phi_i - \phi^{\text{exact}}_i \right|^p \right)^{1/p}.
$$
We know exact solution at $t=2$ is the initial data.
End of explanation
"""
Npoints = 50
dx, x = grid(Npoints)
dt = dx # This is the crucial line
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
U = RK2_step(U, RHS, apply_boundaries, dt, dx)
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()
"""
Explanation: Courant limits
We restricted the timestep to $\Delta t = \sigma \Delta x$ with $\sigma$, the Courant number, being $1/4$.
The number of timesteps taken is inversely related to the Courant number, so we want to make it as large as possible.
Let's try the evolution with Courant number $\sigma=1$.
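For the step-count trade-off itself, a quick sketch (assuming the $[-1, 1]$ domain used above and $\Delta t = \sigma \Delta x$):

```python
# Number of timesteps needed to reach t_end for a given Courant number sigma.
def n_steps(Npoints, sigma, t_end=2.0):
    dx = 2.0 / Npoints          # the domain above is [-1, 1]
    dt = sigma * dx
    return int(round(t_end / dt))

print(n_steps(50, 0.25))  # -> 200 steps at sigma = 1/4
print(n_steps(50, 1.0))   # -> 50 steps at sigma = 1, but the evolution blows up
```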
End of explanation
"""
|
tpin3694/tpin3694.github.io | sql/dates_and_times.ipynb | mit | # Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
"""
Explanation: Title: Dates And Times
Slug: dates_and_times
Summary: Dates and times in SQL.
Date: 2016-05-01 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
"""
%%sql
-- Select the current date
SELECT date('now');
"""
Explanation: Get Current Date
End of explanation
"""
%%sql
-- Select the current date and time
SELECT datetime('now');
"""
Explanation: Get Current Date And Time
End of explanation
"""
%%sql
-- Select the unix time code '1169229733'
SELECT datetime(1169229733, 'unixepoch');
"""
Explanation: Convert A UNIX Timestamp Into A Date And Time
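The same conversion can be sanity-checked from Python's built-in sqlite3 module, which embeds the same SQLite engine (a quick sketch):

```python
import sqlite3

# Run the same SQLite date function from Python.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("SELECT datetime(1169229733, 'unixepoch');")
ts = cur.fetchone()[0]
print(ts)  # -> 2007-01-19 18:02:13
conn.close()
```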
End of explanation
"""
%%sql
-- Select the unix time code '1171904533' and convert to the machine's local timezone
SELECT datetime(1171904533, 'unixepoch', 'localtime');
"""
Explanation: Convert A UNIX Timestamp Into A Date And Time In The Machine's Local Timezone
End of explanation
"""
%%sql
-- Select the day of the week (0 = Sunday, 4 = Thursday)
SELECT strftime('%w','now');
"""
Explanation: Compute The Day Of The Week
End of explanation
"""
|
matt-graham/auxiliary-pm-mcmc | experiment_notebooks/Analyse results.ipynb | mit | import rpy2.interactive as r
import rpy2.interactive.packages
r.packages.importr("coda")
rlib = r.packages.packages
# standard library and scientific Python imports used throughout this notebook
import os
import glob
import datetime
from collections import OrderedDict
import numpy as np
import scipy.io
from matplotlib import pyplot as plt
"""
Explanation: Load the Python-R interface and import coda for computing chain statistics, along with the scientific Python libraries used below
End of explanation
"""
def to_precision(x, p):
p_str = str(p)
fmt_string = '{0:.' + p_str + 'g}'
return fmt_string.format(x)
# An alternative that properly handles trailing zeros can be obtained by
# uncommenting the line below to load Randle Taylor's function from GitHub
# %load https://raw.githubusercontent.com/randlet/to-precision/master/to_precision.py
exp_dir = os.path.join(os.environ['EXP_DIR'], 'apm_mcmc')
"""
Explanation: Function for correct printing of values to specified number of significant figures
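Under the hood, to_precision relies on Python's general ('g') format spec, which rounds to the given number of significant figures; a quick check of the spec itself:

```python
# The '{:.pg}' format spec rounds to p significant figures.
print('{0:.3g}'.format(0.0123456))  # -> 0.0123
print('{0:.2g}'.format(12345.6))    # -> 1.2e+04
```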
End of explanation
"""
file_name_pattern_map = OrderedDict([
(('Pima', 'PM MH'), '*pima_pmmh_chain_*_results.npz'),
(('Pima', 'APM MI+MH'), '*pima_apm(mi+mh)_chain_*_results.npz'),
(('Pima', 'APM SS+MH'), '*pima_apm(ss+mh)_chain_*_results.npz'),
(('Pima', 'APM SS+SS'), '*pima_apm(ess+rdss)_chain_*_results.npz'),
(('Breast', 'PM MH'), '*breast_pmmh_chain_*_results.npz'),
(('Breast', 'APM MI+MH'), '*breast_apm(mi+mh)_chain_*_results.npz'),
(('Breast', 'APM SS+MH'), '*breast_apm(ss+mh)_chain_*_results.npz'),
(('Breast', 'APM SS+SS'), '*breast_apm(ess+rdss)_chain_*_results.npz'),
])
"""
Explanation: Specify the file name pattern for the saved results of each data set and method combination. An OrderedDict is used so that ordering is preserved in the printed LaTeX table
End of explanation
"""
results_map = OrderedDict()
for (data_set, method), file_name_pattern in file_name_pattern_map.items():
file_list = glob.glob(os.path.join(exp_dir, file_name_pattern))
chains = []
chains_stats = []
for file_path in file_list:
results = np.load(file_path)
chains.append(results['thetas'])
chains_stats.append(results['n_reject_n_cubic_ops_comp_time'])
chains = np.array(chains)
chains_stats = np.array(chains_stats)
n_effs = np.empty((chains.shape[0], 2))
for i, chain in enumerate(chains):
n_effs[i, 0] = rlib.coda.effectiveSize(chain[:, 0])[0]
n_effs[i, 1] = rlib.coda.effectiveSize(chain[:, 1])[0]
r_chains_list = rlib.coda.as_mcmc_list([rlib.coda.as_mcmc(chain) for chain in chains[:, :, :]])
gelman_rubin = rlib.coda.gelman_diag(r_chains_list, autoburnin=False)
results_map[(data_set, method)] = (chains, chains_stats, n_effs, gelman_rubin)
prc_mn = 3 # precision to report means with
prc_se = 2 # precision to report standard errors with
max_n_chains = 0 # will be populated with max n chains to allow proper
# formatting of autocorr plots later for cases when
# plotting intermediate results with differing number
# of chains completed per method / data set
# header for LaTeX table of results
latex_table = ''
latex_table += ' & Method & $N_\\text{cub.op}$ & Acc. rate '
latex_table += '& $N_\\text{eff}$ & $\\frac{N_\\text{eff}}{N_\\text{cub.op}}$ & $\\hat{R}$ '
latex_table += '& $N_\\text{eff}$ & $\\frac{N_\\text{eff}}{N_\\text{cub.op}}$ & $\\hat{R}$ '
latex_table += '\\\\ \n \hline \n'
for (data_set, method), (chains, chains_stats, n_effs, gelman_rubin) in results_map.items():
n_chains, n_samples, n_param = chains.shape
max_n_chains = max(max_n_chains, n_chains) # update record of maximum no. chains
# second last column of chain stats is number of cubic operations for a run
# for display purposes, divide by 1000 as easier to visually compare without
# scientific notation
# possibly two reject rates (for u|theta and theta|u updates) present so index
# chain_stats from end rather than start to make consistent
n_kcops = chains_stats[:, -2] / 1000.
# calculate various mean stats over chains and their associated standard errors
mean_n_k_cub_ops = n_kcops.mean()
ster_n_k_cub_ops = n_kcops.std(ddof=1) / n_chains**0.5
mean_n_eff_samps = n_effs.mean(0)
ster_n_eff_samps = n_effs.std(0, ddof=1) / n_chains**0.5
mean_es_per_kcop = (n_effs / n_kcops[:, None]).mean(0)
ster_es_per_kcop = (n_effs / n_kcops[:, None]).std(0, ddof=1) / n_chains**0.5
# third column from end contains reject rate for theta|u updates
# often will be first column however sometimes reject rate for u|theta updates
# present as first column
acc_rates = 1. - chains_stats[:, -3] * 1. / n_samples
mean_accept_rate = acc_rates.mean()
ster_accept_rate = acc_rates.std(0, ddof=1) / n_chains**0.5
# add row for current results to LaTeX table
latex_table += ' & \sc {0} & {1} ({2}) & {3} ({4})\n'.format(
method.lower(),
to_precision(mean_n_k_cub_ops, prc_mn),
to_precision(ster_n_k_cub_ops, prc_se),
to_precision(mean_accept_rate, prc_mn),
to_precision(ster_accept_rate, prc_se)
)
latex_table += ' & {0} ({1}) & {2} ({3}) & {4}\n'.format(
to_precision(mean_n_eff_samps[0], prc_mn),
to_precision(ster_n_eff_samps[0], prc_se),
to_precision(mean_es_per_kcop[0], prc_mn),
to_precision(ster_es_per_kcop[0], prc_se),
to_precision(gelman_rubin[0][0], prc_mn),
)
latex_table += ' & {0} ({1}) & {2} ({3}) & {4}'.format(
to_precision(mean_n_eff_samps[1], prc_mn),
to_precision(ster_n_eff_samps[1], prc_se),
to_precision(mean_es_per_kcop[1], prc_mn),
to_precision(ster_es_per_kcop[1], prc_se),
to_precision(gelman_rubin[0][1], prc_mn),
)
latex_table += ' \\\\ \n'
# Print space delimited table of results for quick checking
print('-' * 55)
print('Data set: {0: <8} Method: {1: <10} # chains: {2}'
.format(data_set, method, n_chains))
print('-' * 55)
print(' mean num. k cubic op. {0: <6} ({1})'
.format(to_precision(mean_n_k_cub_ops, prc_mn),
to_precision(ster_n_k_cub_ops, prc_se)))
print(' effective sample size (sigma) {0: <6} ({1})'
.format(to_precision(mean_n_eff_samps[0], prc_mn),
to_precision(ster_n_eff_samps[0], prc_se)))
print(' effective sample size (tau) {0: <6} ({1})'
.format(to_precision(mean_n_eff_samps[1], prc_mn),
to_precision(ster_n_eff_samps[1], prc_se)))
print(' eff. samp. / cubic op. (sigma) {0: <6} ({1})'
.format(to_precision(mean_es_per_kcop[0], prc_mn),
to_precision(ster_es_per_kcop[0], prc_se)))
print(' eff. samp. / cubic op. (tau) {0: <6} ({1})'
.format(to_precision(mean_es_per_kcop[1], prc_mn),
to_precision(ster_es_per_kcop[1], prc_se)))
print(' Gelman-Rubin statistic (sigma) {0}'
.format(to_precision(gelman_rubin[0][0], prc_mn)))
print(' Gelman-Rubin statistic (tau) {0}'
.format(to_precision(gelman_rubin[0][1], prc_mn)))
print(' n acc rates off-target {0}'
.format(np.sum((acc_rates < 0.15) + (acc_rates > 0.30))))
"""
Explanation: Load the saved chains and run statistics and store them in another OrderedDict. Also compute the effective sample size and Gelman-Rubin R statistic for the chains at this point using the R coda interface
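For reference, here is a rough pure-numpy sketch of what an effective-sample-size estimate does (an initial-positive-sequence truncation of the autocorrelation sum — a simplification of mine, not coda's effectiveSize estimator):

```python
import numpy as np

def ess_sketch(x):
    """Crude ESS estimate: n / (1 + 2 * sum of leading positive autocorrelations)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    # empirical autocorrelation at lags 0..n-1 (biased normalisation)
    acf = np.correlate(x, x, mode='full')[n - 1:] / (n * x.var())
    tau = 1.0
    for rho in acf[1:]:
        if rho <= 0.0:       # truncate at the first non-positive lag
            break
        tau += 2.0 * rho
    return n / tau

iid = np.random.default_rng(0).normal(size=5000)
print(ess_sketch(iid))  # close to 5000 for independent draws
```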
End of explanation
"""
print(latex_table)
"""
Explanation: Print LaTeX table rows for inclusion in paper
End of explanation
"""
n_chains = 10
n_samples = 10000
n_methods = len(file_name_pattern_map) // 2  # integer division: each method appears for both data sets
pima_sigma_chains = np.empty((n_chains, n_samples, n_methods))
pima_tau_chains = np.empty((n_chains, n_samples, n_methods))
breast_sigma_chains = np.empty((n_chains, n_samples, n_methods))
breast_tau_chains = np.empty((n_chains, n_samples, n_methods))
pima_comp_costs = np.empty(n_methods)
breast_comp_costs = np.empty(n_methods)
pima_method_names = []
breast_method_names = []
m, n = 0, 0
for (data_set, method), (chains, chains_stats, n_effs, gelman_rubin) in results_map.items():
if data_set.lower() == 'pima':
pima_sigma_chains[:, :, m] = chains[:, -n_samples:, 0]
pima_tau_chains[:, :, m] = chains[:, -n_samples:, 1]
pima_method_names.append(method)
pima_comp_costs[m] = chains_stats[:, -2].mean()
m += 1
elif data_set.lower() == 'breast':
breast_sigma_chains[:, :, n] = chains[:, -n_samples:, 0]
breast_tau_chains[:, :, n] = chains[:, -n_samples:, 1]
breast_method_names.append(method)
breast_comp_costs[n] = chains_stats[:, -2].mean()
n += 1
pima_rel_comp_costs = pima_comp_costs / pima_comp_costs[0]
breast_rel_comp_costs = breast_comp_costs / breast_comp_costs[0]
time_stamp = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S_')
scipy.io.savemat(os.path.join(exp_dir, time_stamp + 'chains_matlab_dump.mat'),
{
'pima_sigma_chains' : pima_sigma_chains,
'pima_tau_chains' : pima_tau_chains,
'breast_sigma_chains' : breast_sigma_chains,
'breast_tau_chains' : breast_tau_chains,
'pima_rel_comp_costs' : pima_rel_comp_costs,
'breast_rel_comp_costs' : breast_rel_comp_costs,
'pima_method_names' : pima_method_names,
'breast_method_names' : breast_method_names
}
)
"""
Explanation: Save all chains for the different method / dataset / variate combinations to a MATLAB-readable file so the results can be loaded there to plot autocorrelations in the same style as the other figures
End of explanation
"""
thin_factor = 10
max_lag = 30
fig = plt.figure(figsize=(40, 16))
n_dm = len(results_map)
for i, ((data_set, method), (chains, chains_stats, n_effective, gelman_rubin)) in enumerate(results_map.items()):
for j, chain in enumerate(chains):
ax_tau = fig.add_subplot(max_n_chains, 2 * n_dm, j * 2 * n_dm + 1 + 2 * i % 60)
ax_sig = fig.add_subplot(max_n_chains, 2 * n_dm, j * 2 * n_dm + 2 + 2 * i % 60)
x_tau = chain[::thin_factor, 0].copy()
x_tau -= x_tau.mean()
autocorr_tau = np.correlate(x_tau, x_tau, mode=2)[x_tau.size:]
autocorr_tau /= autocorr_tau[0]
x_sig = chain[::thin_factor, 1].copy()
x_sig -= x_sig.mean()
autocorr_sig = np.correlate(x_sig, x_sig, mode=2)[x_sig.size:]
autocorr_sig /= autocorr_sig[0]
ax_tau.vlines(np.arange(max_lag) + 1, 0., autocorr_tau[:max_lag])
ax_tau.axhline()
ax_tau.set_yticks(np.linspace(-0.4, 0.8, 4))
#ax_tau.set_xticks(np.arange(0, 31, 10))
ax_sig.vlines(np.arange(max_lag) + 1, 0., autocorr_sig[:max_lag])
ax_sig.axhline()
#ax_sig.set_xticks(np.arange(0, 31, 10))
ax_sig.set_yticks(np.linspace(-0.4, 0.8, 4))
if j == 0:
ax_tau.set_title('{0} $\\tau$'.format(data_set + ', ' + method))
ax_sig.set_title('{0} $\\sigma$'.format(data_set + ', ' + method))
fig.tight_layout()
"""
Explanation: Plot autocorrelations for all chains. If many chains are loaded this will be a large figure, so it is best viewed externally
End of explanation
"""
for (data_set, method), (chains, chains_stats, n_effs, gelman_rubin) in results_map.items():
if data_set == 'Pima':
if method == 'PM MH' or method == 'APM SS+MH':
print('{0} {1} mean compute time: {2} +/- {3}'.format(
data_set, method,
to_precision(chains_stats[:, -1].mean(), 3),
to_precision(chains_stats[:, -1].std(ddof=1) / chains.shape[0]**0.5, 2)))
"""
Explanation: Calculate the mean compute time across 10 chains for the PM MH and APM SS+MH methods (for runs on the same machine) to verify that the extra quadratic operations for the APM approaches are a negligible overhead
End of explanation
"""
|
rmoehn/cartpole | notebooks/StatsExperiments.ipynb | mit | from mpl_toolkits.mplot3d import Axes3D
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import numpy.ma as ma
import sys
sys.path.append("..")
from hiora_cartpole import interruptibility
import saveloaddata
import stats_experiments
import stats_experiments as se
data_dir_p = "../data"
"""
Explanation: Questions on the (Non-)Interruptibility of Sarsa(λ) and Q-learning
Author: Richard Möhn, <my first name>.moehn@posteo.de
⭧repo
Abstract
%load_md se_abstract.md
Introduction
%load_md se_intro.md
Related Work
%load_md se_related.md
Method
%load_md se_method.md
End of explanation
"""
plt.close('all')
algo = 'Sarsa'
fig, axes = se.arrange_algo_full()
clim, clim2 = se.load_plot_all(algo, '-rand-tiebreak', 1, axes, fig, data_dir_p)
se.load_plot_all(algo, '-rand-tiebreak', 0, axes, fig, data_dir_p, clim, clim2)
fig
"""
Explanation: Results
%load_md se_results.md
End of explanation
"""
algo = 'Q-learning'
fig, axes = se.arrange_algo_full()
clim, clim2 = se.load_plot_all(algo, '-drt', 1, axes, fig, data_dir_p)
se.load_plot_all(algo, '-drt', 0, axes, fig, data_dir_p, clim, clim2)
fig
"""
Explanation: Q-learning
See Sarsa(λ) section above for an explanation of the plots.
End of explanation
"""
# Credits: https://nbviewer.jupyter.org/gist/HHammond/7a78d35b34d85406aa60
from IPython import paths
from IPython.core.display import HTML
import os
def css_styling():
"""Load default custom.css file from ipython profile"""
base = paths.get_ipython_dir()
styles = "<style>\n%s\n</style>" % (open('custom.css','r').read())
return HTML(styles)
css_styling()
algo = 'Q-learning'
fig, ax = se.arrange_algo_full()
with saveloaddata.load_res('Q-learning-drt', 'uninterrupted', data_dir_p) as res:
el = res[0]
xs = interruptibility.rsxs2nparray(*res)
se.plot_episode_lengths(el[:10], ax.el[0])
se.plot_xs_hist(interruptibility.mask_after_cross(xs).flatten(), ax_comp[0])
before_cross = interruptibility.mask_after_cross(xs)
se.plot_xs_hist(before_cross.compressed(), ax_comp[0])
np.all(before_cross.compressed() <= 1.0)
before_cross.compressed()
se.plot_xs_hist(interruptibility.mask_after_cross(xs).flatten(), ax_comp[0], bins=25)
np.mean(before_cross.flatten()), np.mean(before_cross.compressed())
del before_cross
with saveloaddata.load_res('Sarsa-rand-tiebreak', 'uninterrupted', data_dir_p) as res:
before_cross_unint = interruptibility.mask_after_cross(interruptibility.rsxs2nparray(*res))
mesh = se.plot_xss_cum_hist_devel(before_cross_int, ax.devel2[1], bins=2)
fig.colorbar(mesh, ax=ax.devel2[1])
fig.colorbar(mesh, ax=ax.devel[1])
mesh = se.plot_xss_cum_hist_devel(before_cross_int, ax.devel[1])
ax.devel[1].colorbar(mesh)
fig
se.plot_xs_hist(interruptibility.mask_after_cross(xs).compressed(), ax, label='uninterrupted')
with saveloaddata.load_res('Sarsa-rand-tiebreak', 'interrupted', data_dir_p) as res:
before_cross_int = interruptibility.mask_after_cross(interruptibility.rsxs2nparray(*res))
se.plot_xs_hist(interruptibility.mask_after_cross(xs).compressed(), ax, label='interrupted')
ax.legend()
fig
stats_experiments.plot_mean_std_change(before_cross_unint, label='uninterrupted')
stats_experiments.plot_mean_std_change(before_cross_int, label='interrupted')
plt.legend()
plt.show()
se.plot_jsd_devel(before_cross_unint)
plt.show()
%debug
fig, ax = plt.subplots()
ax.set_xscale('log')
se.plot_jsd_comp_final(before_cross_unint, ax=ax)
ax.set_xlim([0.0, 0.03])
ax.get_xlim()
plt.show()
"""
Explanation: Questions
%load_md questions.md
Ideas for further research
%load_md se_ideas.md
Acknowledgements
Thanks to Rafael Cosman, Stuart Armstrong, Patrick LaVictoire and Satoshi Ono for their comments and advice, also on the old notebook! Thanks to Joel Lehman and Christian Kamm for their ideas for further research!
Bibliography
%load_md se_bib.md
You can ignore the following.
End of explanation
"""
fig, ax = plt.subplots()
mesh = stats_experiments.plot_xss_cum_hist_change(xs, ax, bins=25)
#plt.colorbar(mesh)
plt.show()
del xs
import pickle
p = '../data/Sarsa-disc-uninterrupted-xe-170221003432.pickle'
with open(p, 'rb') as f:
res = pickle.load(f)
with open(p, 'wb') as f:
pickle.dump(res[0:2], f)
"""
Explanation: Interesting: Interruptions don't move the whole thing to the left, but limit variance.
End of explanation
"""
|
pacificclimate/pycds | scripts/Demo.ipynb | gpl-3.0 | import datetime
from pycds import *
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import and_, or_
"""
Explanation: Using the PyCDS package as an interface to the Provincial Climate Data Set database
End of explanation
"""
connection_string = 'postgresql+psycopg2://hiebert@atlas.pcic/crmp'
engine = create_engine(connection_string)
Session = sessionmaker(bind=engine)
session = Session()
"""
Explanation: We need to do a few things to set up our database session, and then the session object is used to make all our queries
End of explanation
"""
for net, in session.query(Network.name).order_by(Network.name):
print(net)
"""
Explanation: Query and print the names of all networks in the database
End of explanation
"""
for station in session.query(Station).join(Network).filter(Network.name=='EC').order_by(Station.native_id.asc())[:10]:
print(station.native_id)
"""
Explanation: Print the first 10 stations in the EC network when ordered ascencding
End of explanation
"""
query = session.query(Obs).join(History).join(Station).join(Network).filter(Network.name=='EC').filter(Station.native_id=='1010066')
print(query.count())
"""
Explanation: Count the observations for station 1010066 in the EC network
We must use explicit query joins here, otherwise SQLAlchemy creates ridiculous cross joins and subqueries
End of explanation
"""
for hist, in session.query(History.id).filter(History.station_id==Station.id).filter(and_(Station.native_id=='1010066', Network.name=='EC')):
print(hist)
"""
Explanation: Find all history_ids for station 1010066 in the EC network using filter criteria
End of explanation
"""
for hist, in session.query(History.id).join(Station).join(Network).filter(Network.name=='EC').filter(Station.native_id=='1010066'):
print(hist)
"""
Explanation: Find all history ids representing EC native_id 1010066 using explicit joins
End of explanation
"""
this_year = datetime.datetime(2015, 1, 1)
q = session.query(Obs).filter(Obs.time >= this_year)
print(q.count())
"""
Explanation: Count the number of observations from this year
End of explanation
"""
q = session.query(Network)
net = q.first()
net.name
net.stations
for s in net.stations:
print(s.id, s.native_id)
"""
Explanation: You can access related objects through object attributes. For example, each Network object has a list of its stations in the stations attribute and a list of each available network variable in the variables object. These are Python native objects, all with the proper typing.
End of explanation
"""
net.variables
for v in net.variables:
print(v.standard_name, v.unit)
"""
Explanation: Note that the station id, which is an integer in the database, is automatically an integer when we access it as well. Likewise for the native_id, which is a free-form string in the database and here in Python.
End of explanation
"""
|
dariox2/CADL | session-1/.ipynb_checkpoints/lecture-1-checkpoint.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
"""
Explanation: Session 1: Introduction to Tensorflow
<p class='lead'>
Creative Applications of Deep Learning with Tensorflow<br />
Parag K. Mital<br />
Kadenze, Inc.<br />
</p>
<a name="learning-goals"></a>
Learning Goals
Learn the basic idea behind machine learning: learning from data and discovering representations
Learn how to preprocess a dataset using its mean and standard deviation
Learn the basic components of a Tensorflow Graph
Table of Contents
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Promo
Session Overview
Learning From Data
Deep Learning vs. Machine Learning
Invariances
Scope of Learning
Existing datasets
Preprocessing Data
Understanding Image Shapes
The Batch Dimension
Mean/Deviation of Images
Dataset Preprocessing
Histograms
Histogram Equalization
Tensorflow Basics
Variables
Tensors
Graphs
Operations
Tensor
Sessions
Tensor Shapes
Many Operations
Convolution
Creating a 2-D Gaussian Kernel
Convolving an Image with a Gaussian
Convolve/Filter an image using a Gaussian Kernel
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
Manipulating an image with this Gabor
Homework
Next Session
Reading Material
<!-- /MarkdownTOC -->
<a name="introduction"></a>
Introduction
This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.
<a name="promo"></a>
Promo
Deep learning has emerged at the forefront of nearly every major computational breakthrough in the last 4 years. It is no wonder that it is already in many of the products we use today, from Netflix's or Amazon's personalized recommendations; to the filters that block our spam; to the ways that we interact with personal assistants like Apple's Siri or Microsoft's Cortana; even to the very ways our personal health is monitored. And sure, deep learning algorithms are capable of some amazing things. But it's not just science applications that are benefiting from this research.
Artists too are starting to explore how Deep Learning can be used in their own practice. Photographers are starting to explore different ways of exploring visual media. Generative artists are writing algorithms to create entirely new aesthetics. Filmmakers are exploring virtual worlds ripe with potential for procedural content.
In this course, we're going straight to the state of the art. And we're going to learn it all. We'll see how to make an algorithm paint an image, or hallucinate objects in a photograph. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets to using them to self organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of other images. We'll even see how to teach a computer to read and synthesize new phrases.
But we won't just be using other peoples code to do all of this. We're going to develop everything ourselves using Tensorflow and I'm going to show you how to do it. This course isn't just for artists nor is it just for programmers. It's for people that want to learn more about how to apply deep learning with a hands on approach, straight into the python console, and learn what it all means through creative thinking and interaction.
I'm Parag Mital, artist, researcher and Director of Machine Intelligence at Kadenze. For the last 10 years, I've been exploring creative uses of computational models making use of machine and deep learning, film datasets, eye-tracking, EEG, and fMRI recordings exploring applications such as generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora.
But this course isn't just about me. It's about bringing all of you together. It's about bringing together different backgrounds, different practices, and sticking all of you in the same virtual room, giving you access to state of the art methods in deep learning, some really amazing stuff, and then letting you go wild on the Kadenze platform. We've been working very hard to build a platform for learning that rivals anything else out there for learning this stuff.
You'll be able to share your content, upload videos, comment and exchange code and ideas, all led by the course I've developed for us. But before we get there, we're going to have to cover a lot of groundwork. The basics that we'll use to develop state of the art algorithms in deep learning. And that's really so we can better interrogate what's possible, ask the bigger questions, and be able to explore just where all this is heading in more depth. With all of that in mind, let's get started!
Join me as we learn all about Creative Applications of Deep Learning with Tensorflow.
<a name="session-overview"></a>
Session Overview
We're first going to talk about Deep Learning, what it is, and how it relates to other branches of learning. We'll then talk about the major components of Deep Learning, the importance of datasets, and the nature of representation, which is at the heart of deep learning.
If you've never used Python before, we'll be jumping straight into using libraries like numpy, matplotlib, and scipy. Before starting this session, please check the resources section for a notebook introducing some fundamentals of python programming. When you feel comfortable with loading images from a directory, resizing, cropping, how to change an image datatype from unsigned int to float32, and what the range of each data type should be, then come back here and pick up where you left off. We'll then get our hands dirty with Tensorflow, Google's library for machine intelligence. We'll learn the basic components of creating a computational graph with Tensorflow, including how to convolve an image to detect interesting features at different scales. This groundwork will finally lead us towards automatically learning our handcrafted features/algorithms.
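As a tiny preview of the mean/deviation preprocessing listed in the table of contents (a sketch of mine, with random float32 arrays standing in for images):

```python
import numpy as np

# Standardize a "dataset" of images by its mean and standard deviation.
imgs = np.random.RandomState(0).rand(100, 8, 8).astype(np.float32)
mean, std = imgs.mean(), imgs.std()
normed = (imgs - mean) / std
print(normed.mean(), normed.std())  # approximately 0.0 and 1.0
```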
<a name="learning-from-data"></a>
Learning From Data
<a name="deep-learning-vs-machine-learning"></a>
Deep Learning vs. Machine Learning
So what is this word I keep using, Deep Learning. And how is it different to Machine Learning? Well Deep Learning is a type of Machine Learning algorithm that uses Neural Networks to learn. The type of learning is "Deep" because it is composed of many layers of Neural Networks. In this course we're really going to focus on supervised and unsupervised Deep Learning. But there are many other incredibly valuable branches of Machine Learning such as Reinforcement Learning, Dictionary Learning, Probabilistic Graphical Models and Bayesian Methods (Bishop), or Genetic and Evolutionary Algorithms. And any of these branches could certainly even be combined with each other or with Deep Networks as well. We won't really be able to get into these other branches of learning in this course. Instead, we'll focus more on building "networks", short for neural networks, and how they can do some really amazing things. Before we can get into all that, we're going to need to understand a bit more about data and its importance in deep learning.
<a name="invariances"></a>
Invariances
Deep Learning requires data. A lot of it. It's really one of the major reasons as to why Deep Learning has been so successful. Having many examples of the thing we are trying to learn is the first thing you'll need before even thinking about Deep Learning. Often, it is the biggest blocker to learning about something in the world. Even as a child, we need a lot of experience with something before we begin to understand it. I find I spend most of my time just finding the right data for a network to learn. Getting it from various sources, making sure it all looks right and is labeled. That is a lot of work. The rest of it is easy as we'll see by the end of this course.
Let's say we would like to build a network that is capable of looking at an image and saying what object is in the image. There are so many possible ways that an object could be manifested in an image, and it's rare to ever see just a single object in isolation. In order to teach a computer about an object, we would have to be able to give it an image of the object in every possible way that it could exist.
We generally call these ways of existing "invariances". That just means we are trying not to vary based on some factor; we are invariant to it. For instance, an object could appear to one side of an image, or another. We call that translation invariance. Or it could be seen from one angle or another. That's called rotation invariance. Or it could be closer to the camera, or farther: that would be scale invariance. There are plenty of other types of invariances, such as perspective or brightness or exposure, to give a few more examples for photographic images.
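As a toy numpy sketch (a hypothetical 4x4 image, not from any dataset), translation invariance just means the same object can appear at different positions:

```python
import numpy as np

# A tiny hypothetical 4x4 grayscale "image" with a bright pixel at (1, 1).
img = np.zeros((4, 4), dtype=np.float32)
img[1, 1] = 1.0

# Shift the image one pixel down and one pixel right: the "object" is the
# same, only its position changed, i.e. a translated variant of the sample.
shifted = np.roll(np.roll(img, 1, axis=0), 1, axis=1)

# The bright pixel moved from (1, 1) to (2, 2); a network that should be
# translation invariant must treat both images as the same object.
print(np.argwhere(img == 1.0), np.argwhere(shifted == 1.0))
```

Augmenting a dataset with shifted, rotated, or rescaled copies is one common way to expose a network to these variations.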
<a name="scope-of-learning"></a>
Scope of Learning
With Deep Learning, you will always need a dataset that will teach the algorithm about the world. But you aren't really teaching it everything. You are only teaching it what is in your dataset! That is a very important distinction. If I show my algorithm only faces of people which are always placed in the center of an image, it will not be able to understand anything about faces that are not in the center of the image! Well at least that's mostly true.
That's not to say that a network is incapable of transferring what it has learned to learn new concepts more easily, or to learn things that might be necessary for it to learn other representations. For instance, a network that has been trained to learn about birds probably knows a good bit about trees, branches, and other bird-like hangouts, depending on the dataset. But, in general, we are limited to learning what our dataset has access to.
So if you're thinking about creating a dataset, you're going to have to think about what it is that you want to teach your network. What sort of images will it see? What representations do you think your network could learn given the data you've shown it?
One of the major contributions to the success of Deep Learning algorithms is the amount of data out there. Datasets have grown from orders of hundreds to thousands to many millions. The more data you have, the more capable your network will be at determining whatever its objective is.
<a name="existing-datasets"></a>
Existing datasets
With that in mind, let's try to find a dataset that we can work with. There are a ton of datasets out there that current machine learning researchers use. For instance, a quick Google search for "deep learning datasets" turns up a listing on deeplearning.net (http://deeplearning.net/datasets/) with a few interesting ones, including MNIST, CalTech, CelebNet, LFW, CIFAR, MS Coco, and Illustration2Vec, and there are a ton more. These are primarily image based. If you are interested in finding more, just do a quick search or drop a quick message on the forums if you're looking for something in particular.
MNIST
CalTech
CelebNet
ImageNet: http://www.image-net.org/
LFW
CIFAR10
CIFAR100
MS Coco: http://mscoco.org/home/
WLFDB: http://wlfdb.stevenhoi.com/
Flickr 8k: http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html
Flickr 30k
<a name="preprocessing-data"></a>
Preprocessing Data
In this section, we're going to learn a bit about working with an image based dataset. We'll see how image dimensions are formatted as a single image and how they're represented as a collection using a 4-d array. We'll then look at how we can perform dataset normalization. If you're comfortable with all of this, please feel free to skip to the next video.
We're first going to load some libraries that we'll be making use of.
End of explanation
"""
from libs import utils
# utils.<tab>
files = utils.get_celeb_files()
"""
Explanation: I'll be using a popular image dataset for faces called the CelebFaces dataset. I've provided some helper functions which you can find on the resources page, which will just help us with manipulating images and loading this dataset.
End of explanation
"""
img = plt.imread(files[50])
# img.<tab>
print(img)
"""
Explanation: Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, img, and inspect a bit further what's going on:
End of explanation
"""
# If nothing is drawn and you are using notebook, try uncommenting the next line:
#%matplotlib inline
plt.imshow(img)
"""
Explanation: When I print out this image, I can see all the numbers that represent this image. We can use the function imshow to see this:
End of explanation
"""
img.shape
# (218, 178, 3)
"""
Explanation: <a name="understanding-image-shapes"></a>
Understanding Image Shapes
Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor:
End of explanation
"""
plt.imshow(img[:, :, 0], cmap='gray')
plt.imshow(img[:, :, 1], cmap='gray')
plt.imshow(img[:, :, 2], cmap='gray')
"""
Explanation: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels.
End of explanation
"""
imgs = utils.get_celeb_imgs()
"""
Explanation: We use the special colon operator to say take every value in this dimension. This is saying: give me every row, every column, and the 0th index of the color channel dimension. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image.
Let's use another helper function which will load every image file in the celeb dataset rather than just give us the filenames like before. By default, this will just return the first 1000 images because loading the entire dataset is a bit cumbersome. In one of the later sessions, I'll show you how tensorflow can handle loading images using a pipeline so we can load this same dataset. For now, let's stick with this:
End of explanation
"""
plt.imshow(imgs[0])
"""
Explanation: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets:
End of explanation
"""
imgs[0].shape
"""
Explanation: <a name="the-batch-dimension"></a>
The Batch Dimension
Remember that an image has a shape describing the height, width, channels:
End of explanation
"""
data = np.array(imgs)
data.shape
"""
Explanation: It turns out we'll often use another convention for storing many images in an array, using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll add a new dimension at the beginning, giving us: number of images x height x width x number of color channels.
N x H x W x C
A color image should have 3 color channels, RGB.
We can combine all of our images to have these 4 dimensions by telling numpy to give us an array of all the images.
End of explanation
"""
mean_img = np.mean(data, axis=0)
plt.imshow(mean_img.astype(np.uint8))
"""
Explanation: This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. You'll need them all to be the same size. If you are unsure of how to get all of your images into the same size, then please refer to the online resources for the notebook I've provided, which shows you exactly how to take a bunch of images of different sizes and crop and resize them the best we can to make them all the same size.
<a name="meandeviation-of-images"></a>
Mean/Deviation of Images
Now that we have our data in a single numpy variable, we can do a lot of cool stuff. Let's look at the mean image, averaged over the batch dimension:
End of explanation
"""
std_img = np.std(data, axis=0)
plt.imshow(std_img.astype(np.uint8))
"""
Explanation: This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like. There is one other very useful statistic which we can look at very easily:
End of explanation
"""
plt.imshow(np.mean(std_img, axis=2).astype(np.uint8))
"""
Explanation: So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation.
We're looking at this per color channel. So we'll see variance for each color channel represented separately, and then combined as a color image. We can try to look at the average variance over all color channels by taking their mean:
End of explanation
"""
flattened = data.ravel()
print(data[:1])
print(flattened[:10])
"""
Explanation: This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image.
<a name="dataset-preprocessing"></a>
Dataset Preprocessing
Think back to when I described what we're trying to accomplish when we build a model for machine learning: we're trying to build a model that understands invariances. We need our model to be able to express all of the things that can possibly change in our data. Well, this is the first step in understanding what can change. If we are looking to use deep learning to learn something complex about our data, it will often start by modeling both the mean and standard deviation of our dataset. We can help speed things up by "preprocessing" our dataset by removing the mean and standard deviation. What does this mean? Subtracting the mean, and dividing by the standard deviation. Another word for that is "normalization".
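A minimal numpy sketch of that normalization (using synthetic stand-in data rather than the celeb images, so the block is self-contained):

```python
import numpy as np

# Synthetic stand-in for a batch of images: 100 tiny random 8x8x3 "images".
data = np.random.RandomState(0).rand(100, 8, 8, 3).astype(np.float32)

# Per-pixel statistics over the batch dimension, just like mean_img/std_img above.
mean_img = data.mean(axis=0)
std_img = data.std(axis=0)

# Normalization: subtract the mean, divide by the standard deviation.
normalized = (data - mean_img) / std_img
```

After this, each pixel location has roughly zero mean and unit standard deviation across the batch, which is what the histogram comparisons below will show for the real dataset.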
<a name="histograms"></a>
Histograms
Let's have a look at our dataset another way to see why this might be a useful thing to do. We're first going to convert our batch x height x width x channels array into a 1 dimensional array. Instead of having 4 dimensions, we'll now just have 1 dimension of every pixel value stretched out in a long vector, or 1 dimensional array.
End of explanation
"""
plt.hist(flattened.ravel(), 255)
"""
Explanation: We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will follow the last dimension's order. So we'll have: [<font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>205</font>, <font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>206</font>, <font color='red'>253</font>, <font color='green'>240</font>, <font color='blue'>207</font>, ...]
We can visualize what the "distribution", or range and frequency of possible values are. This is a very useful thing to know. It tells us whether our data is predictable or not.
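A quick sketch of the ordering that ravel produces, using a hypothetical 2x2 image with 3 channels and values 0..11 so the flattening is easy to read:

```python
import numpy as np

# A 2x2 "image" with 3 channels; each pixel holds consecutive values.
arr = np.arange(12).reshape(2, 2, 3)

# ravel walks the last dimension fastest: R, G, B of the first pixel,
# then R, G, B of the next pixel, and so on.
flat = arr.ravel()
print(flat)
```

So the first three flattened values are the three channel values of the first pixel, matching the R, G, B, R, G, B, ... pattern described above.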
End of explanation
"""
plt.hist(mean_img.ravel(), 255)
"""
Explanation: The last line is saying give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears. In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that.
<a name="histogram-equalization"></a>
Histogram Equalization
The mean of our dataset looks like this:
End of explanation
"""
bins = 20
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0]).ravel(), bins)
axs[0].set_title('img distribution')
axs[1].hist((mean_img).ravel(), bins)
axs[1].set_title('mean distribution')
axs[2].hist((data[0] - mean_img).ravel(), bins)
axs[2].set_title('(img - mean) distribution')
"""
Explanation: When we subtract an image by our mean image, we remove all of this information from it. And that means that the rest of the information is really what is important for describing what is unique about it.
Let's try and compare the histogram before and after "normalizing our data":
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0] - mean_img).ravel(), bins)
axs[0].set_title('(img - mean) distribution')
axs[1].hist((std_img).ravel(), bins)
axs[1].set_title('std deviation distribution')
axs[2].hist(((data[0] - mean_img) / std_img).ravel(), bins)
axs[2].set_title('((img - mean) / std_dev) distribution')
"""
Explanation: What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset:
End of explanation
"""
axs[2].set_xlim([-150, 150])
axs[2].set_xlim([-100, 100])
axs[2].set_xlim([-50, 50])
axs[2].set_xlim([-10, 10])
axs[2].set_xlim([-5, 5])
"""
Explanation: Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on:
End of explanation
"""
import tensorflow as tf
"""
Explanation: What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. This is the effect of normalizing our data: most of the data will be around 0, where some deviations of it will follow between -3 to 3.
If our data does not end up looking like this, then we should either (1) get much more data to calculate our mean/std deviation, or (2) try another method of normalization, such as scaling the values between 0 to 1, or -1 to 1, or possibly not bother with normalization at all. There are other options that one could explore, including different types of normalization, such as local contrast normalization for images or PCA-based normalization, but we won't have time to get into those in this course.
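A sketch of the 0-to-1 and -1-to-1 scaling alternatives mentioned above (plain numpy, with a random stand-in image):

```python
import numpy as np

# Stand-in "image" with values in the usual 0..255 range.
x = np.random.RandomState(0).rand(32, 32).astype(np.float32) * 255.0

# Min-max scaling: squash values into [0, 1].
scaled = (x - x.min()) / (x.max() - x.min())

# Or stretch that into [-1, 1]:
scaled_pm1 = scaled * 2.0 - 1.0
```

This keeps the shape of the distribution intact and only rescales its range, unlike mean/std normalization, which also recenters the data.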
<a name="tensorflow-basics"></a>
Tensorflow Basics
Let's now switch gears and start working with Google's Library for Numerical Computation, TensorFlow. This library can do most of the things we've done so far. However, it has a very different approach for doing so. And it can do a whole lot more cool stuff which we'll eventually get into. The major difference to take away from the remainder of this session is that instead of computing things immediately, we first define things that we want to compute later using what's called a Graph. Everything in Tensorflow takes place in a computational graph and running and evaluating anything in the graph requires a Session. Let's take a look at how these both work and then we'll get into the benefits of why this is useful:
<a name="variables"></a>
Variables
We're first going to import the tensorflow library:
End of explanation
"""
x = np.linspace(-3.0, 3.0, 100)
# Immediately, the result is given to us. An array of 100 numbers equally spaced from -3.0 to 3.0.
print(x)
# We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values
print(x.shape)
# and a `dtype`, in this case float64, or 64 bit floating point values.
print(x.dtype)
"""
Explanation: Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function:
End of explanation
"""
x = tf.linspace(-3.0, 3.0, 100)
print(x)
"""
Explanation: <a name="tensors"></a>
Tensors
In tensorflow, we could try to do the same thing using their linear space function:
End of explanation
"""
g = tf.get_default_graph()
"""
Explanation: Instead of a numpy.array, we are returned a tf.Tensor. The name of it is "LinSpace:0". Wherever we see this colon 0, that just means the output of. So the name of this Tensor is saying, the output of LinSpace.
Think of tf.Tensors the same way as you would the numpy.array. It is described by its shape, in this case, only 1 dimension of 100 values. And it has a dtype, in this case, float32. But unlike the numpy.array, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of a tf.Operation which has already been added to Tensorflow's default computational graph. The result of that operation is the tensor that we are returned.
<a name="graphs"></a>
Graphs
Let's try and inspect the underlying graph. We can request the "default" graph where all of our operations have been added:
End of explanation
"""
[op.name for op in g.get_operations()]
"""
Explanation: <a name="operations"></a>
Operations
And from this graph, we can get a list of all the operations that have been added, and print out their names:
End of explanation
"""
g.get_tensor_by_name('LinSpace' + ':0')
"""
Explanation: So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace.
<a name="tensor"></a>
Tensor
We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name:
End of explanation
"""
# We're first going to create a session:
sess = tf.Session()
# Now we tell our session to compute anything we've created in the tensorflow graph.
computed_x = sess.run(x)
print(computed_x)
# Alternatively, we could tell the previous Tensor to evaluate itself using this session:
computed_x = x.eval(session=sess)
print(computed_x)
# We can close the session after we're done like so:
sess.close()
"""
Explanation: What I've done is asked for the tf.Tensor that comes from the operation "LinSpace". So remember, the result of a tf.Operation is a tf.Tensor. Remember that was the same name as the tensor x we created before.
<a name="sessions"></a>
Sessions
In order to actually compute anything in tensorflow, we need to create a tf.Session. The session is responsible for evaluating the tf.Graph. Let's see how this works:
End of explanation
"""
sess = tf.Session(graph=g)
sess.close()
"""
Explanation: We could also explicitly tell the session which graph we want to manage:
End of explanation
"""
g2 = tf.Graph()
"""
Explanation: By default, it grabs the default graph. But we could have created a new graph like so:
End of explanation
"""
sess = tf.InteractiveSession()
x.eval()
"""
Explanation: And then used this graph only in our session.
To simplify things, since we'll be working in iPython's interactive console, we can create a tf.InteractiveSession:
End of explanation
"""
# We can find out the shape of a tensor like so:
print(x.get_shape())
# %% Or in a more friendly format
print(x.get_shape().as_list())
"""
Explanation: Now we didn't have to explicitly tell the eval function about our session. We'll leave this session open for the rest of the lecture.
<a name="tensor-shapes"></a>
Tensor Shapes
End of explanation
"""
# The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma.
mean = 0.0
sigma = 1.0
# Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula.
z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
"""
Explanation: <a name="many-operations"></a>
Many Operations
Let's try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes referred to as the bell curve or normal curve.
End of explanation
"""
res = z.eval()
plt.plot(res)
# if nothing is drawn, and you are using ipython notebook, uncomment the next two lines:
#%matplotlib inline
#plt.plot(res)
"""
Explanation: Just like before, amazingly, we haven't actually computed anything. We have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the eval function:
End of explanation
"""
# Let's store the number of values in our Gaussian curve.
ksize = z.get_shape().as_list()[0]
# Let's multiply the two to get a 2d gaussian
z_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))
# Execute the graph
plt.imshow(z_2d.eval())
"""
Explanation: <a name="convolution"></a>
Convolution
<a name="creating-a-2-d-gaussian-kernel"></a>
Creating a 2-D Gaussian Kernel
Let's try creating a 2-dimensional Gaussian. This can be done by multiplying a vector by its transpose. If you aren't familiar with matrix math, I'll review a few important concepts. This is about 98% of what neural networks do so if you're unfamiliar with this, then please stick with me through this and it'll be smooth sailing. First, to multiply two matrices, their inner dimensions must agree, and the resulting matrix will have the shape of the outer dimensions.
So let's say we have two matrices, X and Y. In order for us to multiply them, X's columns must match Y's rows. I try to remember it like so:
<pre>
(X_rows, X_cols) x (Y_rows, Y_cols)
| | | |
| |___________| |
| ^ |
| inner dimensions |
| must match |
| |
|__________________________|
^
resulting dimensions
of matrix multiplication
</pre>
But our matrix is actually a vector, or a 1 dimensional matrix. That means its dimensions are N x 1. So to multiply them, we'd have:
<pre>
(N, 1) x (1, N)
| | | |
| |___________| |
| ^ |
| inner dimensions |
| must match |
| |
|__________________________|
^
resulting dimensions
of matrix multiplication
</pre>
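In plain numpy, the (N, 1) x (1, N) product looks like this (a toy 3-value vector standing in for the Gaussian):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1)

# Inner dimensions (1 and 1) agree; the result takes the outer
# dimensions: (3, 1) x (1, 3) -> (3, 3).
outer = v @ v.T
print(outer.shape)  # (3, 3)
```

This is exactly what the tf.matmul of the two reshaped copies of z does in the next cell, just with ksize values instead of 3.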
End of explanation
"""
# Let's first load an image. We're going to need a grayscale image to begin with. skimage has some images we can play with. If you do not have the skimage module, you can load your own image, or get skimage by pip installing "scikit-image".
from skimage import data
img = data.camera().astype(np.float32)
plt.imshow(img, cmap='gray')
print(img.shape)
"""
Explanation: <a name="convolving-an-image-with-a-gaussian"></a>
Convolving an Image with a Gaussian
A very common operation that we'll come across with Deep Learning is convolution. We're going to explore what this means using our new gaussian kernel that we've just created. For now, just think of it as a way of filtering information. We're going to effectively filter our image using this Gaussian function, as if the gaussian function is the lens through which we'll see our image data. What it will do is, at every location we tell it to filter, it will average the image values around it based on what the kernel's values are. The Gaussian kernel is basically saying: take a lot of the center, and then decreasingly less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploration of convolution, this website is great:
http://setosa.io/ev/image-kernels/
End of explanation
"""
# We could use the numpy reshape function to reshape our numpy array
img_4d = img.reshape([1, img.shape[0], img.shape[1], 1])
print(img_4d.shape)
# but since we'll be using tensorflow, we can use the tensorflow reshape function:
img_4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1])
print(img_4d)
"""
Explanation: Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many images and combine them in a single numpy array, the resulting shape has the number of images first.
N x H x W x C
In order to perform 2d convolution with tensorflow, we'll need the same dimensions for our image. With just 1 grayscale image, this means the shape will be:
1 x H x W x 1
End of explanation
"""
print(img_4d.get_shape())
print(img_4d.get_shape().as_list())
"""
Explanation: Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the shape parameter like we did with the numpy array. But instead, we can use get_shape(), and get_shape().as_list():
End of explanation
"""
# Reshape the 2d kernel to tensorflow's required 4d format: H x W x I x O
z_4d = tf.reshape(z_2d, [ksize, ksize, 1, 1])
print(z_4d.get_shape().as_list())
"""
Explanation: The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel.
We'll also have to reshape our Gaussian Kernel to be 4-dimensional as well. The dimensions for kernels are slightly different! Remember that the image is:
Number of Images x Image Height x Image Width x Number of Channels
we have:
Kernel Height x Kernel Width x Number of Input Channels x Number of Output Channels
Our Kernel already has a height and width of ksize so we'll stick with that for now. The number of input channels should match the number of channels on the image we want to convolve. And for now, we just keep the same number of output channels as the input channels, but we'll later see how this comes into play.
End of explanation
"""
convolved = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME')
res = convolved.eval()
print(res.shape)
"""
Explanation: <a name="convolvefilter-an-image-using-a-gaussian-kernel"></a>
Convolve/Filter an image using a Gaussian Kernel
We can now use our previous Gaussian Kernel to convolve our image:
End of explanation
"""
# Matplotlib cannot handle plotting 4D images! We'll have to convert this back to the original shape. There are a few ways we could do this. We could plot by "squeezing" the singleton dimensions.
plt.imshow(np.squeeze(res), cmap='gray')
# Or we could specify the exact dimensions we want to visualize:
plt.imshow(res[0, :, :, 0], cmap='gray')
"""
Explanation: There are two new parameters here: strides, and padding. Strides says how to move our kernel across the image. Basically, we'll only ever use it for one of two sets of parameters:
[1, 1, 1, 1], which means, we are going to convolve every single image, every pixel, and every color channel by whatever the kernel is.
and the second option:
[1, 2, 2, 1], which means, we are going to convolve every single image, but every other pixel, in every single color channel.
Padding says what to do at the borders. If we say "SAME", that means we want the same dimensions going in as we do going out. In order to do this, zeros must be padded around the image. If we say "VALID", that means no padding is used, and the image dimensions will actually change.
End of explanation
"""
xs = tf.linspace(-3.0, 3.0, ksize)
"""
Explanation: <a name="modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel"></a>
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
We've now seen how to use tensorflow to create a set of operations which create a 2-dimensional Gaussian kernel, and how to use that kernel to filter or convolve another image. Let's create another interesting convolution kernel called a Gabor. This is a lot like the Gaussian kernel, except we use a sine wave to modulate that.
We first use linspace to get a set of values the same range as our gaussian, which should be from -3 standard deviations to +3 standard deviations.
End of explanation
"""
ys = tf.sin(xs)
plt.figure()
plt.plot(ys.eval())
"""
Explanation: We then calculate the sine of these values, which should give us a nice wave
End of explanation
"""
ys = tf.reshape(ys, [ksize, 1])
"""
Explanation: And for multiplication, we'll need to convert this 1-dimensional vector to a matrix: N x 1
End of explanation
"""
ones = tf.ones((1, ksize))
wave = tf.matmul(ys, ones)
plt.imshow(wave.eval(), cmap='gray')
"""
Explanation: We then repeat this wave across the matrix by using a multiplication of ones:
End of explanation
"""
gabor = tf.mul(wave, z_2d)
plt.imshow(gabor.eval(), cmap='gray')
"""
Explanation: We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel:
End of explanation
"""
# This is a placeholder which will become part of the tensorflow graph, but
# which we have to later explicitly define whenever we run/evaluate the graph.
# Pretty much everything you do in tensorflow can have a name. If we don't
# specify the name, tensorflow will give a default one, like "Placeholder_0".
# Let's use a more useful name to help us understand what's happening.
img = tf.placeholder(tf.float32, shape=[None, None], name='img')
# We'll reshape the 2d image to a 3-d tensor just like before:
# Except now we'll make use of another tensorflow function, expand dims, which adds a singleton dimension at the axis we specify.
# We use it to reshape our H x W image to include a channel dimension of 1
# our new dimensions will end up being: H x W x 1
img_3d = tf.expand_dims(img, 2)
dims = img_3d.get_shape()
print(dims)
# And again to get: 1 x H x W x 1
img_4d = tf.expand_dims(img_3d, 0)
print(img_4d.get_shape().as_list())
# Let's create another set of placeholders for our Gabor's parameters:
mean = tf.placeholder(tf.float32, name='mean')
sigma = tf.placeholder(tf.float32, name='sigma')
ksize = tf.placeholder(tf.int32, name='ksize')
# Then finally redo the entire set of operations we've done to convolve our
# image, except with our placeholders
x = tf.linspace(-3.0, 3.0, ksize)
z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
z_2d = tf.matmul(
tf.reshape(z, tf.pack([ksize, 1])),
tf.reshape(z, tf.pack([1, ksize])))
ys = tf.sin(x)
ys = tf.reshape(ys, tf.pack([ksize, 1]))
ones = tf.ones(tf.pack([1, ksize]))
wave = tf.matmul(ys, ones)
gabor = tf.mul(wave, z_2d)
gabor_4d = tf.reshape(gabor, tf.pack([ksize, ksize, 1, 1]))
# And finally, convolve the two:
convolved = tf.nn.conv2d(img_4d, gabor_4d, strides=[1, 1, 1, 1], padding='SAME', name='convolved')
convolved_img = convolved[0, :, :, 0]
"""
Explanation: <a name="manipulating-an-image-with-this-gabor"></a>
Manipulating an image with this Gabor
We've already gone through the work of convolving an image. The only thing that has changed is the kernel that we want to convolve with. We could have made life easier by specifying in our graph which elements we wanted to be specified later. Tensorflow calls these "placeholders", meaning, we're not sure what these are yet, but we know they'll fit in the graph like so, generally the input and output of the network.
Let's rewrite our convolution operation using a placeholder for the image and the kernel and then see how the same operation could have been done. We're going to set the image dimensions to None x None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter.
End of explanation
"""
convolved_img.eval()
"""
Explanation: What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation.
If we try to evaluate it without specifying placeholders beforehand, we will get an error InvalidArgumentError: You must feed a value for placeholder tensor 'img' with dtype float and shape [512,512]:
End of explanation
"""
convolved_img.eval(feed_dict={img: data.camera()})
"""
Explanation: It's saying that we didn't specify our placeholder for img. In order to "feed a value", we use the feed_dict parameter like so:
End of explanation
"""
res = convolved_img.eval(feed_dict={
img: data.camera(), mean:0.0, sigma:1.0, ksize:100})
plt.imshow(res, cmap='gray')
"""
Explanation: But that's not the only placeholder in our graph! We also have placeholders for mean, sigma, and ksize. Once we specify all of them, we'll have our result:
End of explanation
"""
res = convolved_img.eval(feed_dict={
img: data.camera(),
mean: 0.0,
sigma: 0.5,
ksize: 32
})
plt.imshow(res, cmap='gray')
"""
Explanation: Now, instead of having to rewrite the entire graph, we can just specify the different placeholders.
End of explanation
"""
|
cogeorg/black_rhino | examples/degroot/Run_deGroot.ipynb | gpl-3.0 | environment_directory = "configs/environments/"
identifier = "test_degroot"
log_directory = "log/"
"""
Explanation: Running the deGroot Model
First, the model needs to be initialized.
End of explanation
"""
if not os.path.exists('log'):
os.makedirs('log')
# logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %H:%M:%S',
# filename=log_directory + identifier + ".log", level=logging.INFO)
# logging.info('START logging for run: %s', environment_directory + identifier + ".xml")
"""
Explanation: Then, after a log folder is created, if it doesn't exist, the logger will be initialized. The log files will contain information about how the simulation went.
End of explanation
"""
tree = ET.parse('configs/environments/test_degroot.xml')
root = tree.getroot()
# all item attributes
print('All parameters: ')
for elem in root:
print(elem.attrib)
"""
Explanation: Before initializing the model, let's inspect the parameters that are used to initialize it.
End of explanation
"""
environment = Environment(environment_directory, identifier)
"""
Explanation: First, the environment will be initialized. It contains the model parameters and variables. The environment also contains a list of agents (in the Solow model, for example, one bank, one firm, and one household). Finally, it also contains initialized transactions for these agents.
End of explanation
"""
runner = Runner(environment)
"""
Explanation: Next up, the runner object is created. As the name suggests, the runner object handles the running of the model. But that is not all, it also initializes the updater.
End of explanation
"""
for i in range(int(environment.static_parameters['num_simulations'])):
environment.initialize(environment_directory, identifier)
runner.initialize(environment)
# do the run
runner.do_run(environment)
"""
Explanation: Now you can actually run the model. The agent-based model is simulated using Monte Carlo methods, which means that the model will be run multiple times (num_simulations). For every simulation run, the environment and runner will be (re-)initialized. Then, the runner will run the model. During the run, the runner will write data about capital to a csv file specified within the Measurement class and print out all the transactions happening and the books of firm and household at the end of each sweep.
End of explanation
"""
|
ajmendez/explore | cupid/age.ipynb | mit | %matplotlib inline
import time
import pylab
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style('white')
from pysurvey.plot import setup_sns as setup
from pysurvey.plot import density, icolorbar, text, legend, outline
people = pd.read_csv('/Users/ajmendez/data/okcupid/random_v4.csv')
people = people[(people['age'].notnull()) & (people['match'].notnull()) & (people['match'] > 20)]
print('Found {:,d} profiles'.format(len(people)))
_ = people.hist('match', by='age', bins=np.arange(20,101,4),
lw=0, alpha=0.7, figsize=(18,6), layout=(2,5))
"""
Explanation: OKCupid Age
Let's try to answer the following questions in this notebook:
* Can one infer the age of a profile given how well it matches others?
* Can one estimate the age of a profile given numbers in the username?
End of explanation
"""
people.hist('age', lw=0, alpha=0.7, grid=False)
"""
Explanation: The match percentage distribution peaks near ~75%, but differences are hard to see.
What is of interest here is how the distribution differs relative to a mean distribution.
End of explanation
"""
def setup_plot(**kwargs):
setup(xlabel='Age [Years]', ylabel='Match percentage', tickmarks=False, grid=False, **kwargs)
pylab.plot(26, 101, 'v', color='r', markeredgecolor='k', clip_on=False)
pylab.text(26.2, 100, 'Age when Profile Written', ha='left', va='bottom', clip_on=False)
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(people['age'], people['match'],
bins=[np.arange(24,34), np.arange(20,101,1)],
label='number of profiles / bin', vmin=10, vmax=2000,
cmap=pylab.cm.Spectral_r, colorbar=False, logvrange=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), yticks=False)
den = density(people['age'], people['match'],
bins=[np.arange(24,34), np.arange(20,101,4)],
label='Relative Density',
vmin=2e-6, vmax=4e-6,
cmap=pylab.cm.Spectral_r, colorbar=False,
ynorm=True, xnorm=True, logvrange=False)
pylab.axhline(80, lw=2, color='k')
# convert binning size and probability distribution values to a simple number
icolorbar(den, loc=1, borderpad=-4, tickfmt='{:0.1f}', tickfcn=lambda x: x*0.5e6)
pylab.tight_layout()
"""
Explanation: Additionally, the number of users presented in the search results decreases with age.
End of explanation
"""
num = people[(people['username_nlength'] == 2)]
print('Found {:,d} users with a 2 digit number'.format(len(num)))
def add_range(xr, label):
yr = pylab.ylim()
n = len(np.where((num['username_number'] > xr[0]) & (num['username_number'] < xr[1]))[0])
print('Found {:,d} {:0.2f}%'.format(n, n*100.0/len(num)))
pylab.axvspan(xr[0],xr[1], lw=0, alpha=0.2, zorder=-1, color='k')
text(np.mean(xr), yr[-1]*0.9, label+'\n({:d}-{:d})'.format(*xr), ha='center', va='top', rotation=90, fontsize=12,
outline_prop=dict(color='0.8'))
ax=setup(figsize=(12,6), xlabel='Number in Username', tickmarks=False, grid=True, ylabel='Number/bin')
num.hist('username_number', bins=np.arange(0,101), lw=0, ax=ax)
pylab.title('')
add_range([8, 15], 'Year of profile creation')
add_range([21,34], 'Age at profile creation')
add_range([82,91], 'Year of birth')
"""
Explanation: To show this effect visually, I show the 2D age and match percentage distribution in the left panel.
Redder bins highlight the region where we find many more profiles.
This is weighted by the number of observed profiles at a given age and the match percentage probability distribution, peaking at both low age (24) and a match percentage near 75%.
In the right panel, I normalize the previous panel by the age and match percentage distributions, allowing us to see the relative excess density of profiles that match mine.
Effectively this suggests that while I may see many more high match percentage profiles for 24 year olds, that is just due to the underlying sample of people who have signed up to OKCupid; my profile matches people similar to my own age.
In particular, for the high match (>80%) profiles, my profile is more likely to attract similar ages (25-28) rather than older ages (30+).
One possible issue with this dataset is that OKCupid may be filtering individuals based on their preferences.
So the dearth of high match percentage profiles above 30 years old may just reflect people who would not want to date a 26 year old. This OKTrends blog post suggests that this is not a significant problem, as the distribution of preferences extends past 24 years old.
It would be great to extend this dataset by collecting a much larger range of ages to sample the full probability distribution to see what my age my profile matches best against.
Additionally, it would be useful to add the comparison of many other profiles to calculate the variance of matching a profile to an age.
Usernames
Another interesting feature to consider in the dataset is the number included in each username. These are often used to make a favorite username unique when it has already been taken. Let's look at how well these values correlate with age and see whether we can estimate a user's age or profile creation date from them.
Let's first focus on users who have supplied a 2 digit number in their username.
End of explanation
"""
def setup_plot(**kwargs):
setup(xlabel='number in username', ylabel='age [years]',
ytickv=age+0.5, yticknames=age, **kwargs)
def add_line(x,y, label, **kwargs):
params = dict(label=label, lw=3, alpha=0.5, color='k')
params.update(kwargs)
outline(pylab.plot(x,y, **params))
text(x[-1]+5, y[-1], label, rotation=90, ha='center', va='top',
fontsize=12, fontweight='bold', color=params['color'])
# def add_vline(x, label, **kwargs):
# outline(pylab.axvline(x, lw=3, alpha=0.5, color='k', label=label))
# text(x+5, 32, label, rotation=90, alpha)
age = np.arange(24,34)
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(num['username_number'], num['age'],
bins=[np.arange(0,101), np.arange(24,34), ],
label='Number of Profiles', vmin=0, vmax=400,
ynorm=False, cmap=pylab.cm.Spectral_r, colorbar=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), xr=[0,100], yticks=False)
den = density(num['username_number'], num['age'], alpha=0.3,
bins=[np.arange(0,101), np.arange(24,34), ],
label='Number of Profiles', vmin=0, vmax=400,
cmap=pylab.cm.Spectral_r, colorbar=False)
add_line((2015-age)-1900, age, 'Birth Year')
add_line(age, age, 'Age')
add_line(33-age, age, '18 Year Olds')
add_line(14*np.ones(age.shape), age, '~2014')
pylab.tight_layout()
zip(age, (2015-age)%1900, ((2015-age)%1900+18)%100)
"""
Explanation: You can clearly see three peaks which I suggest correspond to three schemas:
08 - 15: Two digit year in which the profile was probably created
21 - 34: Age of the individual at the time of profile creation
80 - 91: Two digit year of their birth
The first schema should not correlate with age -- there should be a somewhat similar influx of users into OKCupid each year. The other two should correlate with the age of the profile.
End of explanation
"""
def setup_plot(**kwargs):
setup(xlabel='number in username', ylabel='match percentage', **kwargs)
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(num['username_number'], num['match'],
bins=[np.arange(20,101,2), np.arange(0,101,2)],
label='number of profiles / bin',
cmap=pylab.cm.Spectral_r, colorbar=False, logvrange=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), yticks=False)
den = density(num['username_number'], num['match'],
bins=[np.arange(20,101,4), np.arange(0,101,4)],
label='Log Relative Density', vmin=-5.3, vmax=-4,
xnorm=True, ynorm=True, logvrange=True,
cmap=pylab.cm.Spectral_r, colorbar=False)
pylab.axhline(80, lw=2, color='k')
icolorbar(den, loc=1, borderpad=-4)
pylab.tight_layout()
"""
Explanation: Clearly, two digit numbers in usernames show interesting structure which can be used to infer the age.
The birth year trend (82-91) is the strongest, the age (24-33) follows closely, and the lowest signal strength is in the profile creation year (8-15).
One can see in this plot that the 08-15 range is the combination of two populations, one that does not vary with age, and one that scales with the birth year. The first being the year of the profile creation which does not correlate with the age of the profile. The second population in the 08-15 range is harder to model analytically.
I posit that it is the year in which the user turned 18. For example, a 24 year old in 2015 turned 18 in 2009. Likewise, a 33 year old would have turned 18 in 2000.
It would be worth investigating if these users who are using the year of their 18th birthday are bots or just users who became of age and use the same username for all of their profiles.
Correlation with Match Percentage
Before I dig into using machine learning to estimate the age from the values, lets investigate if there are any trends with match percentage
End of explanation
"""
fournum = people[(people['username_nlength'] == 4)]
print('Found {:,d} profiles with 4 digits'.format(len(fournum)))
def setup_plot(**kwargs):
setup(xlabel='number in username', ylabel='age [years]', **kwargs)
def add_line(x,y, label, offset=5, **kwargs):
params = dict(label=label, lw=3, alpha=0.5, color='r')
params.update(kwargs)
outline(pylab.plot(x,y, **params))
text(x[-1]+offset, y[-1], label, rotation=90, ha='center', va='top', fontsize=12, color=params['color'])
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(fournum['username_number'], fournum['age'],
bins=[np.arange(1970,2018), np.arange(24,34), ],
label='Number of Profiles', vmin=0, vmax=200,
ynorm=False, cmap=pylab.cm.Spectral_r, colorbar=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), xr=[1970,2017], yticks=False)
den = density(fournum['username_number'], fournum['age'], logvrange=True,
bins=[np.arange(1970,2018), np.arange(24,34), ],
label='Number of Profiles', #vmin=0, vmax=1000,
cmap=pylab.cm.Blues, colorbar=False)
age = np.arange(24,34)
add_line(2015-age, age, 'Birth Year')
add_line(2015+18-age, age, '18 year olds', color='purple', offset=-2)
add_line(2015+22-age, age, '22 year olds', color='orange')
pylab.tight_layout()
"""
Explanation: Here I have normalized again in the right panel to remove the underlying population distribution. I do not find any significant differences. There are few sources in the 40%-80% region, which limits what we can say about that population.
Full Years
One prediction of the above model would be that full years should show the same trends. In particular, we expect a correlation between age and year.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostRegressor
from sklearn.svm import SVC, LinearSVC
from sklearn import cross_validation
rcf = RandomForestClassifier(
random_state=0,
# n_estimators=150,
# min_samples_split=4,
# min_samples_leaf=2
)
scores = cross_validation.cross_val_score(rcf,
num['username_number'].reshape(-1, 1),
num["age"],
cv=3
)
print('{:0.1f} +/- {:0.1f}'.format(100*scores.mean(), 200*scores.std()))
"""
Explanation: The two age-dependent populations are visible here, along with a second population at 22 years old. Going back to the previous 2 digit plots, that trend can be seen there as well.
Classification
Let's classify the 2-digit numbers to see if we can use them to predict age. This is going to be a very simplistic model, since we are predicting an age from a single feature of the dataset.
The above range sampling can be generalized to a random forest.
End of explanation
"""
predicted = cross_validation.cross_val_predict(rcf, num['username_number'].reshape(-1, 1), num["age"], cv=10)
setup(figsize=(12,6), subplt=(1,2,1),
title='All profiles: {:,d}'.format(len(num)),
xlabel='Age', ylabel='Predicted Age')
den = density(num["age"], predicted,
bins=(np.arange(24,33),np.arange(24,33)),
colorbar=False, cmap=pylab.cm.Spectral_r)
icolorbar(den)
isgood = np.where( ( (num['username_number'] > 22) & (num['username_number'] < 34) ) |
( (num['username_number'] > 82) & (num['username_number'] < 91) ) )[0]
setup(subplt=(1,2,2), yticks=False,
title='Profile Subset {:,d}'.format(len(isgood)),
xlabel='Age', ylabel='Predicted Age')
den = density(num.iloc[isgood]['age'], predicted[isgood],
bins=(np.arange(24,33),np.arange(24,33)),
colorbar=False, cmap=pylab.cm.Spectral_r)
icolorbar(den)
pylab.tight_layout()
"""
Explanation: While 43% correct seems pretty bad, the earlier visualizations show that the signal is only coming from two main populations, which contain 23% + 33% of the entire 2 digit sample. There is also signal in the 08-15 range, but it is well mixed with the year of creation, limiting its utility. This means this classifier is actually correct ~77% of the time when there is enough data to estimate the age.
What this dataset really needs is more features, extracted from either the questions or the profile text, to estimate the age well.
End of explanation
"""
scores = cross_validation.cross_val_score(
GradientBoostingClassifier(
# n_estimators=100,
# learning_rate=1.0,
# max_depth=1,
random_state=0
),
num['username_number'].reshape(-1, 1),
num["age"],
cv=3
)
print('{:0.1f} +/- {:0.1f}'.format(100*scores.mean(), 200*scores.std()))
"""
Explanation: The random forest classifier is underestimating many of the ages as 24 year olds (bottom row of pixels).
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/tutorials/estimator/boosted_trees.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import pandas as pd
from IPython.display import clear_output
from matplotlib import pyplot as plt
# Load the datasets.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
import tensorflow as tf
tf.random.set_seed(123)
"""
Explanation: Training Boosted Trees models in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tutorials/estimator/boosted_trees"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on tensorflow.google.cn</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/boosted_trees.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/boosted_trees.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/estimator/boosted_trees.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: These documents were translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest official English documentation. To help improve this translation, please submit a pull request to the tensorflow/docs GitHub repository. To volunteer to write or review translations, join the docs-zh-cn@tensorflow.org Google Group.
This tutorial is an end-to-end walkthrough of training a gradient boosting model using decision trees with the tf.estimator API. Boosted Trees models are among the most popular and effective machine learning approaches for both regression and classification. They are an ensemble technique that combines the predictions of several (tens, hundreds, or even thousands of) tree models.
Boosted Trees models are popular with many machine learning practitioners because they can achieve impressive performance with minimal hyperparameter tuning.
Load the Titanic dataset
You will use the Titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, and class.
End of explanation
"""
dftrain.head()
dftrain.describe()
"""
Explanation: The dataset consists of a training set and an evaluation set:
dftrain and y_train are the training set, the data the model uses to learn.
The model is tested against the evaluation set, dfeval and y_eval.
You will use the following features for training:
<table>
<tr>
<th>Feature name</th>
<th>Description</th>
</tr>
<tr>
<td>sex</td>
<td>Gender of the passenger</td>
</tr>
<tr>
<td>age</td>
<td>Age of the passenger</td>
</tr>
<tr>
<td>n_siblings_spouses</td>
<td>Siblings and partners aboard</td>
</tr>
<tr>
<td>parch</td>
<td>Parents and children aboard</td>
</tr>
<tr>
<td>fare</td>
<td>Fare the passenger paid</td>
</tr>
<tr>
<td>class</td>
<td>The passenger's class on the ship</td>
</tr>
<tr>
<td>deck</td>
<td>Which deck the passenger was on</td>
</tr>
<tr>
<td>embark_town</td>
<td>Which town the passenger embarked from</td>
</tr>
<tr>
<td>alone</td>
<td>Whether the passenger was alone</td>
</tr>
</table>
Explore the data
Let's first preview some of the data and create summary statistics on the training set.
End of explanation
"""
dftrain.shape[0], dfeval.shape[0]
"""
Explanation: The training and evaluation sets have 627 and 264 examples, respectively.
End of explanation
"""
dftrain.age.hist(bins=20)
plt.show()
"""
Explanation: The majority of passengers are in their 20s and 30s.
End of explanation
"""
dftrain.sex.value_counts().plot(kind='barh')
plt.show()
"""
Explanation: There are approximately twice as many male passengers as female passengers.
End of explanation
"""
dftrain['class'].value_counts().plot(kind='barh')
plt.show()
"""
Explanation: The majority of passengers were in the "third" class.
End of explanation
"""
dftrain['embark_town'].value_counts().plot(kind='barh')
plt.show()
"""
Explanation: Most passengers embarked from Southampton.
End of explanation
"""
pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive')
plt.show()
"""
Explanation: Females have a much higher chance of surviving than males. This will clearly be a predictive feature for the model.
End of explanation
"""
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name,
dtype=tf.float32))
"""
Explanation: Create feature columns and input functions
The Gradient Boosting estimator can utilize both numeric and categorical features. Feature columns work with all TensorFlow estimators, and their purpose is to define the features used for modeling. Additionally, they provide some feature-engineering capabilities such as one-hot encoding, normalization, and bucketization. In this tutorial, the fields in CATEGORICAL_COLUMNS are transformed from categorical columns to one-hot-encoded (indicator) columns:
End of explanation
"""
example = dict(dftrain.head(1))
class_fc = tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list('class', ('First', 'Second', 'Third')))
print('Feature value: "{}"'.format(example['class'].iloc[0]))
print('One-hot encoded: ', tf.keras.layers.DenseFeatures([class_fc])(example).numpy())
"""
Explanation: You can view the transformation that a feature column produces. For example, here is the output when using the indicator_column on a single example:
End of explanation
"""
tf.keras.layers.DenseFeatures(feature_columns)(example).numpy()
"""
Explanation: Additionally, you can view all of the feature-column transformations together:
End of explanation
"""
# Use a batch containing the entire dataset, since the data is very small.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle through the dataset as many times as needed (n_epochs=None).
dataset = dataset.repeat(n_epochs)
# In-memory training does not use batching.
dataset = dataset.batch(NUM_EXAMPLES)
return dataset
return input_fn
# Input functions for training and evaluation.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
"""
Explanation: Next, you need to create input functions. These specify how data will be read into the model for both training and inference. You will use the from_tensor_slices method in the tf.data API to read data directly from Pandas. This is suitable for smaller, in-memory datasets. For larger datasets, the tf.data API supports a variety of file formats (including csv) so that you can process datasets that do not fit in memory.
End of explanation
"""
linear_est = tf.estimator.LinearClassifier(feature_columns)
# Train the model.
linear_est.train(train_input_fn, max_steps=100)
# Evaluate.
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(pd.Series(result))
"""
Explanation: Train and evaluate the model
You will perform the following steps:
Initialize the model, specifying the features and hyperparameters.
Feed the training data to the model using train_input_fn and train it using the train function.
Assess model performance using the evaluation set, in this example the dfeval DataFrame, and verify that the predictions match the labels in the y_eval array.
Before training a Boosted Trees model, let's first train a linear classifier (logistic regression model). It is best practice to start with a simpler model to establish a benchmark.
End of explanation
"""
# Since the data fits into memory, use the entire dataset per layer; it will be faster.
# One batch, as defined above, is the entire dataset.
n_batches = 1
est = tf.estimator.BoostedTreesClassifier(feature_columns,
n_batches_per_layer=n_batches)
# The model will stop training once the specified number of trees is built,
# not based on the number of training steps.
est.train(train_input_fn, max_steps=100)
# Evaluate.
result = est.evaluate(eval_input_fn)
clear_output()
print(pd.Series(result))
"""
Explanation: Next, let's train a Boosted Trees model. Boosted Trees support both regression (BoostedTreesRegressor) and classification (BoostedTreesClassifier). Since the goal is to predict a survival label, you will use the BoostedTreesClassifier.
End of explanation
"""
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
plt.show()
"""
Explanation: Now you can use the trained model to make predictions on passengers from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, eval_input_fn was defined using the entire evaluation set.
End of explanation
"""
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
plt.show()
"""
Explanation: Finally, you can also look at the receiver operating characteristic (ROC) curve of the results, which gives a better sense of the tradeoff between the true positive rate and the false positive rate.
End of explanation
"""
|
science-of-imagination/nengo-buffer | Project/trained_mental_manipulations_ens_inhibition.ipynb | gpl-3.0 | import nengo
import numpy as np
import cPickle
from nengo_extras.data import load_mnist
from nengo_extras.vision import Gabor, Mask
from matplotlib import pylab
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from scipy import linalg
"""
Explanation: Using the trained weights in an ensemble of neurons
On the function points branch of nengo
On the vision branch of nengo_extras
End of explanation
"""
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1 # normalize to -1 to 1
X_test = 2 * X_test - 1 # normalize to -1 to 1
"""
Explanation: Load the MNIST database
End of explanation
"""
temp = np.diag([1]*10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE= temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN =temp[7]
EIGHT= temp[8]
NINE = temp[9]
labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]
dim =28
"""
Explanation: Each digit is represented by a one hot vector where the index of the 1 represents the number
End of explanation
"""
label_weights = cPickle.load(open("label_weights5000.p", "rb"))
activity_to_img_weights = cPickle.load(open("activity_to_img_weights5000.p", "rb"))
rotated_clockwise_after_encoder_weights = cPickle.load(open("rotated_after_encoder_weights_clockwise5000.p", "rb"))
rotated_counter_after_encoder_weights = cPickle.load(open("rotated_after_encoder_weights5000.p", "rb"))
#scale_up_after_encoder_weights = cPickle.load(open("scale_up_after_encoder_weights1000.p","r"))
#scale_down_after_encoder_weights = cPickle.load(open("scale_down_after_encoder_weights1000.p","r"))
#translate_up_after_encoder_weights = cPickle.load(open("translate_up_after_encoder_weights1000.p","r"))
#translate_down_after_encoder_weights = cPickle.load(open("translate_down_after_encoder_weights1000.p","r"))
#translate_left_after_encoder_weights = cPickle.load(open("translate_left_after_encoder_weights1000.p","r"))
#translate_right_after_encoder_weights = cPickle.load(open("translate_right_after_encoder_weights1000.p","r"))
#identity_after_encoder_weights = cPickle.load(open("identity_after_encoder_weights1000.p","r"))
"""
Explanation: Load the saved weight matrices that were created by training the model
End of explanation
"""
#A value of zero gives no inhibition
def inhibit_rotate_clockwise(t):
if t < 1:
return dim**2
else:
return 0
def inhibit_rotate_counter(t):
if t < 1:
return 0
else:
return dim**2
def inhibit_identity(t):
if t < 1:
return dim**2
else:
return dim**2
def inhibit_scale_up(t):
return dim**2
def inhibit_scale_down(t):
return dim**2
def inhibit_translate_up(t):
return dim**2
def inhibit_translate_down(t):
return dim**2
def inhibit_translate_left(t):
return dim**2
def inhibit_translate_right(t):
return dim**2
"""
Explanation: Functions to perform the inhibition of each ensemble
End of explanation
"""
def add_manipulation(main_ens,weights,inhibition_func):
#create ensemble for manipulation
ens_manipulation = nengo.Ensemble(n_hid,dim**2,seed=3,encoders=encoders, **ens_params)
#create node for inhibition
inhib_manipulation = nengo.Node(inhibition_func)
#Connect the main ensemble to each manipulation ensemble and back with appropriate transformation
nengo.Connection(main_ens.neurons, ens_manipulation.neurons, transform = weights.T, synapse=0.1)
nengo.Connection(ens_manipulation.neurons, main_ens.neurons, transform = weights.T,synapse = 0.1)
#connect inhibition
nengo.Connection(inhib_manipulation, ens_manipulation.neurons, transform=[[-1]] * n_hid)
#return ens_manipulation,inhib_manipulation
rng = np.random.RandomState(9)
n_hid = 1000
model = nengo.Network(seed=3)
with model:
#Stimulus only shows for brief period of time
stim = nengo.Node(lambda t: ONE if t < 0.1 else 0) #nengo.processes.PresentInput(labels,1))#
ens_params = dict(
eval_points=X_train,
neuron_type=nengo.LIF(), #Why not use LIF?
intercepts=nengo.dists.Choice([-0.5]),
max_rates=nengo.dists.Choice([100]),
)
# linear filter used for edge detection as encoders, more plausible for human visual system
encoders = Gabor().generate(n_hid, (11, 11), rng=rng)
encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)
#Ensemble that represents the image with different transformations applied to it
ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params)
#Connect stimulus to ensemble, transform using learned weight matrices
nengo.Connection(stim, ens, transform = np.dot(label_weights,activity_to_img_weights).T)
#Recurrent connection on the neurons of the ensemble to perform the rotation
#nengo.Connection(ens.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1)
#add_manipulation(ens,rotated_clockwise_after_encoder_weights, inhibit_rotate_clockwise)
add_manipulation(ens,rotated_counter_after_encoder_weights, inhibit_rotate_counter)
add_manipulation(ens,scale_up_after_encoder_weights, inhibit_scale_up)
#add_manipulation(ens,scale_down_after_encoder_weights, inhibit_scale_down)
#add_manipulation(ens,translate_up_after_encoder_weights, inhibit_translate_up)
#add_manipulation(ens,translate_down_after_encoder_weights, inhibit_translate_down)
#add_manipulation(ens,translate_left_after_encoder_weights, inhibit_translate_left)
#add_manipulation(ens,translate_right_after_encoder_weights, inhibit_translate_right)
#Collect output, use synapse for smoothing
probe = nengo.Probe(ens.neurons,synapse=0.1)
sim = nengo.Simulator(model)
sim.run(5)
"""
Explanation: The network where the mental imagery and rotation occurs
The state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work
The number of neurons (n_hid) must be the same as was used for training
The input must be shown for a short period of time to be able to view the rotation
The recurrent connection must be from the neurons because the weight matrices were trained on the neuron activities
End of explanation
"""
'''Animation for Probe output'''
fig = plt.figure()
output_acts = []
for act in sim.data[probe]:
output_acts.append(np.dot(act,activity_to_img_weights))
def updatefig(i):
im = pylab.imshow(np.reshape(output_acts[i],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)
return im,
ani = animation.FuncAnimation(fig, updatefig, interval=100, blit=True)
plt.show()
print(len(sim.data[probe]))
plt.subplot(161)
plt.title("100")
pylab.imshow(np.reshape(output_acts[100],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(162)
plt.title("500")
pylab.imshow(np.reshape(output_acts[500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(163)
plt.title("1000")
pylab.imshow(np.reshape(output_acts[1000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(164)
plt.title("1500")
pylab.imshow(np.reshape(output_acts[1500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(165)
plt.title("2000")
pylab.imshow(np.reshape(output_acts[2000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(166)
plt.title("2500")
pylab.imshow(np.reshape(output_acts[2500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
"""
Explanation: The following is not part of the brain model; it is used to view the output of the ensemble
Since it's probing the neurons themselves, the output must be transformed from neuron activity to visual image
End of explanation
"""
#The filename includes the number of neurons and which digit is being rotated
filename = "mental_rotation_output_ONE_" + str(n_hid) + ".p"
cPickle.dump(sim.data[probe], open( filename , "wb" ) )
"""
Explanation: Pickle the probe's output if it takes a long time to run
End of explanation
"""
testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))
plt.subplot(121)
pylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
#Get image
testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))
#Get activity of image
_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing)
#Get rotated encoder outputs
testing_rotate = np.dot(testing_act,rotated_after_encoder_weights)
#Get activities
testing_rotate = ens.neuron_type.rates(testing_rotate, sim.data[ens].gain, sim.data[ens].bias)
for i in range(5):
testing_rotate = np.dot(testing_rotate,rotated_after_encoder_weights)
testing_rotate = ens.neuron_type.rates(testing_rotate, sim.data[ens].gain, sim.data[ens].bias)
#testing_rotate = np.dot(testing_rotate,rotation_weights)
testing_rotate = np.dot(testing_rotate,activity_to_img_weights)
plt.subplot(122)
pylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
plt.subplot(121)
pylab.imshow(np.reshape(X_train[0],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
#Get activity of image
_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=X_train[0])
testing_rotate = np.dot(testing_act,activity_to_img_weights)
plt.subplot(122)
pylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
"""
Explanation: Testing
End of explanation
"""
letterO = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights))
plt.subplot(161)
pylab.imshow(np.reshape(letterO,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
letterL = np.dot(SEVEN,label_weights)
for _ in range(30):
letterL = np.dot(letterL,rotation_weights)
letterL = np.dot(letterL,activity_to_img_weights)
plt.subplot(162)
pylab.imshow(np.reshape(letterL,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
letterI = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))
plt.subplot(163)
pylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(165)
pylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
letterV = np.dot(SEVEN,label_weights)
for _ in range(40):
letterV = np.dot(letterV,rotation_weights)
letterV = np.dot(letterV,activity_to_img_weights)
plt.subplot(164)
pylab.imshow(np.reshape(letterV,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
letterA = np.dot(SEVEN,label_weights)
for _ in range(10):
letterA = np.dot(letterA,rotation_weights)
letterA = np.dot(letterA,activity_to_img_weights)
plt.subplot(166)
pylab.imshow(np.reshape(letterA,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
"""
Explanation: Just for fun
End of explanation
"""
|
statkclee/ThinkStats2 | code/chap10soln-kor.ipynb | gpl-3.0 | import brfss
import numpy as np
%matplotlib inline
df = brfss.ReadBrfss(nrows=None)
df = df.dropna(subset=['htm3', 'wtkg2'])
heights, weights = df.htm3, df.wtkg2
weights = np.log10(weights)
"""
Explanation: Think Stats (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘(xwMOOC)
Exercise 10.1
Using data from the BRFSS, compute the linear least squares fit for log(weight) versus height. How would you best present the estimated parameters for a model like this, where one of the variables is log-transformed? If you were trying to guess someone's weight, how much would it help to know their height?
As with the NSFG, the BRFSS oversamples some groups and provides a sampling weight for each respondent. In the BRFSS data, the variable name for these weights is totalwt. Use resampling, with and without weights, to estimate the mean height of respondents in the BRFSS, the standard error of the mean, and a 90% confidence interval. How much does correct weighting affect the estimates?
Read the BRFSS data and extract heights and log weights.
End of explanation
"""
import thinkstats2
inter, slope = thinkstats2.LeastSquares(heights, weights)
inter, slope
"""
Explanation: Estimate the intercept and slope.
End of explanation
"""
import thinkplot
thinkplot.Scatter(heights, weights, alpha=0.01)
fxs, fys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Plot(fxs, fys)
thinkplot.Config(xlabel='height (cm)', ylabel='log10 weight (kg)', legend=False)
"""
Explanation: Show a scatter plot of the data and the fitted line.
End of explanation
"""
thinkplot.Scatter(heights, 10**weights, alpha=0.01)
fxs, fys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Plot(fxs, 10**fys)
thinkplot.Config(xlabel='height (cm)', ylabel='weight (kg)', legend=False)
"""
Explanation: Make the same plot, but apply the inverse transform to show weights on a linear (not log) scale.
End of explanation
"""
res = thinkstats2.Residuals(heights, weights, inter, slope)
df['residual'] = res
bins = np.arange(130, 210, 5)
indices = np.digitize(df.htm3, bins)
groups = df.groupby(indices)
means = [group.htm3.mean() for i, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.residual) for i, group in groups][1:-1]
thinkplot.PrePlot(3)
for percent in [75, 50, 25]:
ys = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(means, ys, label=label)
thinkplot.Config(xlabel='height (cm)', ylabel='residual weight (kg)', legend=False)
"""
Explanation: Plot percentiles of the residuals.
The lines are flat over most of the range, which indicates that the relationship is linear.
The lines are mostly parallel, which indicates that the variance of the residuals is the same over the range.
End of explanation
"""
rho = thinkstats2.Corr(heights, weights)
rho
"""
Explanation: Compute correlation.
End of explanation
"""
r2 = thinkstats2.CoefDetermination(weights, res)
r2
"""
Explanation: Compute the coefficient of determination.
End of explanation
"""
rho**2 - r2
"""
Explanation: Confirm that $R^2 = \rho^2$.
End of explanation
"""
std_ys = thinkstats2.Std(weights)
std_ys
"""
Explanation: Compute Std(ys), which is the RMSE of predictions that don't use height.
End of explanation
"""
std_res = thinkstats2.Std(res)
std_res
"""
Explanation: Compute Std(res), the RMSE of predictions that do use height.
End of explanation
"""
1 - std_res / std_ys
"""
Explanation: How much does height information reduce RMSE? About 15%.
End of explanation
"""
t = []
for _ in range(100):
sample = thinkstats2.ResampleRows(df)
estimates = thinkstats2.LeastSquares(sample.htm3, np.log10(sample.wtkg2))
t.append(estimates)
inters, slopes = zip(*t)
"""
Explanation: Use resampling to compute sampling distributions for the intercept and slope.
End of explanation
"""
cdf = thinkstats2.Cdf(slopes)
thinkplot.Cdf(cdf)
thinkplot.Show(legend=False)
"""
Explanation: Plot the sampling distribution of slope.
End of explanation
"""
pvalue = cdf[0]
pvalue
"""
Explanation: Compute the p-value of the slope.
End of explanation
"""
ci = cdf.Percentile(5), cdf.Percentile(95)
ci
"""
Explanation: Compute the 90% confidence interval of slope.
End of explanation
"""
mean = thinkstats2.Mean(slopes)
mean
"""
Explanation: Compute the mean of the sampling distribution.
End of explanation
"""
stderr = thinkstats2.Std(slopes)
stderr
"""
Explanation: Compute the standard deviation of the sampling distribution, which is the standard error.
End of explanation
"""
def ResampleRowsWeighted(df, column='finalwt'):
"""Resamples a DataFrame using probabilities proportional to given column.
df: DataFrame
column: string column name to use as weights
returns: DataFrame
"""
weights = df[column]
cdf = thinkstats2.Cdf(dict(weights))
indices = cdf.Sample(len(weights))
sample = df.loc[indices]
return sample
"""
Explanation: Resample rows using sampling weights.
End of explanation
"""
def Summarize(estimates):
mean = thinkstats2.Mean(estimates)
stderr = thinkstats2.Std(estimates)
cdf = thinkstats2.Cdf(estimates)
ci = cdf.Percentile(5), cdf.Percentile(95)
print('mean', mean)
print('stderr', stderr)
print('ci', ci)
"""
Explanation: Summarize the sampling distribution.
End of explanation
"""
estimates_unweighted = [thinkstats2.ResampleRows(df).htm3.mean() for _ in range(100)]
Summarize(estimates_unweighted)
"""
Explanation: Resample rows without weights and summarize the results.
End of explanation
"""
estimates_weighted = [ResampleRowsWeighted(df).htm3.mean() for _ in range(100)]
Summarize(estimates_weighted)
"""
Explanation: Resample rows with weights. Taking the sampling weights into account makes the estimated mean height almost 2 cm taller, a difference much bigger than the sampling error.
End of explanation
"""
|
yhilpisch/ipynb-docker | jupserver/ipynbs/interactive.ipynb | bsd-3-clause | from IPython.html.widgets import *
import matplotlib as mpl
mpl.use('agg')
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
import numpy as np
class call_option(object):
from math import log, sqrt, exp
from scipy import stats
global log, sqrt, exp, stats
def __init__(self, S0, K, T, r, sigma):
self.S0 = float(S0)
self.K = K
self.T = T
self.r = r
self.sigma = sigma
def value(self):
''' Return option value. '''
d1 = ((log(self.S0 / self.K) + (self.r + 0.5 * self.sigma ** 2) * self.T)
/ (self.sigma * sqrt(self.T)))
d2 = ((log(self.S0 / self.K) + (self.r - 0.5 * self.sigma ** 2) * self.T)
/ (self.sigma * sqrt(self.T)))
value = (self.S0 * stats.norm.cdf(d1, 0.0, 1.0)
- self.K * exp(-self.r * self.T) * stats.norm.cdf(d2, 0.0, 1.0))
return value
def vega(self):
''' Return Vega of option. '''
d1 = ((log(self.S0 / self.K)
+ (self.r + (0.5 * self.sigma ** 2)) * self.T)
/ (self.sigma * sqrt(self.T)))
vega = self.S0 * stats.norm.pdf(d1, 0.0, 1.0) * sqrt(self.T)
return vega
"""
Explanation: <img src="http://datapark.io/img/logo.png" alt="datpark" width="35%" align="right" border="0"><br>
Interactive IPython Notebooks
End of explanation
"""
def vega_calculation(S0=100, T=1.0, r=0.01, sigma=0.2, n=10):
np.set_printoptions(formatter={'all' :
lambda x: '%6.2f' % x})
o = call_option(S0, 100, T, r, sigma)
strikes = np.linspace(80, 120, n)
vegas = []
for k in strikes:
o.K = k
vegas.append(o.vega())
print "Strike:", strikes.round(3)
print "Vega: ", np.array(vegas).round(3)
interact(vega_calculation, S0=(75, 125, 1),
T=(0.01, 2.0, 0.01),
r=(0.0, 0.1, 0.005),
sigma=(0.01, 0.5, 0.01),
n=(1, 20, 1))
"""
Explanation: Print Output
End of explanation
"""
def vega_plot_2d(S0=100, T=1.0, r=0.01, sigma=0.2):
o = call_option(S0, 100, T, r, sigma)
strikes = np.linspace(80, 120, 20)
vegas = []
for k in strikes:
o.K = k
vegas.append(o.vega())
plt.figure(figsize=(8, 5))
plt.plot(strikes, vegas)
plt.grid(True)
plt.xlabel('strike')
plt.ylabel('Vega')
interact(vega_plot_2d, S0=(75, 125, 1),
T=(0.01, 2.0, 0.01),
r=(0.0, 0.1, 0.005),
sigma=(0.01, 0.5, 0.01))
"""
Explanation: 2d Plot
End of explanation
"""
def vega_plot_3d(S0=100, r=0.01, sigma=0.2):
o = call_option(S0, 100, 1.0, r, sigma)
maturities = np.linspace(0.05, 2.0, 20)
strikes = np.linspace(80, 120, 20)
T, K = np.meshgrid(strikes, maturities)
V = np.zeros_like(K)
for t in enumerate(maturities):
for k in enumerate(strikes):
o.T = t[1]
o.K = k[1]
V[t[0], k[0]] = o.vega()
fig = plt.figure(figsize=(10, 5))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(T, K, V, rstride=1, cstride=1,
cmap='RdYlBu_r', linewidth=0.5, antialiased=True)
ax.set_xlabel('strike')
ax.set_ylabel('maturity')
ax.set_zlabel('Vega of European call option')
fig.colorbar(surf, shrink=0.5, aspect=5)
interact(vega_plot_3d, S0=(75, 125, 1),
r=(0.0, 0.1, 0.005),
sigma=(0.01, 0.5, 0.01));
"""
Explanation: 3d Plot
End of explanation
"""
|
gabrielrezzonico/dogsandcats | notebooks/01. Data loading and analysis.ipynb | mit | plot_grid(imgs, titles=labels)
%autosave 0
"""
Explanation: Samples
End of explanation
"""
import pandas as pd
import glob
from PIL import Image
files = glob.glob(ORIGINAL_TRAIN_DIRECTORY + '*')
df = pd.DataFrame({'fpath':files,'width':0,'height':0})
df['category'] = df.fpath.str.extract(r'([a-zA-Z]+)\.\d+\.jpg$', expand=False) # extract class from "dog.1234.jpg"
for idx in df.index:
im = Image.open(df.loc[idx, 'fpath'])
df.loc[idx, ['width','height']] = im.size
df.head()
df.describe()
"""
Explanation: Data size
End of explanation
"""
df['category'].value_counts()
%matplotlib inline
import seaborn as sns
ax = sns.countplot("category", data=df)
sns.jointplot(x='width',
y='height',
data=df,
joint_kws={'s': 0.5},
marginal_kws=dict(bins=50),
size=10,
stat_func=None);
"""
Explanation: There are 25000 images in the dataset. The mean image size is about 404 × 360 pixels (mean width ≈ 404.1, mean height ≈ 360.5).
End of explanation
"""
import os
TOTAL_NUMBER_FILES = sum([len(files) for r, d, files in os.walk(ORIGINAL_TRAIN_DIRECTORY)])
print("Total number of files in train folder:", TOTAL_NUMBER_FILES)
"""
Explanation: Data preparation
The dataset can be downloaded from https://www.kaggle.com/c/dogs-vs-cats/data.
Number of training examples:
End of explanation
"""
import glob
import os
import shutil
import numpy as np
shutil.rmtree(os.path.join(TEST_DIRECTORY, "dog"), ignore_errors=True)
shutil.rmtree(os.path.join(TEST_DIRECTORY, "cat"), ignore_errors=True)
shutil.rmtree(os.path.join(VALID_DIRECTORY, "dog"), ignore_errors=True)
shutil.rmtree(os.path.join(VALID_DIRECTORY, "cat"), ignore_errors=True)
shutil.rmtree(os.path.join(TRAIN_DIRECTORY, "dog"), ignore_errors=True)
shutil.rmtree(os.path.join(TRAIN_DIRECTORY, "cat"), ignore_errors=True)
os.mkdir(os.path.join(TEST_DIRECTORY, "dog"))
os.mkdir(os.path.join(TEST_DIRECTORY, "cat"))
os.mkdir(os.path.join(VALID_DIRECTORY, "dog"))
os.mkdir(os.path.join(VALID_DIRECTORY, "cat"))
os.mkdir(os.path.join(TRAIN_DIRECTORY, "dog"))
os.mkdir(os.path.join(TRAIN_DIRECTORY, "cat"))
#########################
# Split dogs and cats into train / validation / test
##########
for category in ("dog", "cat"):
    # random list of files for this class
    pattern = ORIGINAL_TRAIN_DIRECTORY + category + ".*"
    files = np.random.permutation(glob.glob(pattern))
    # randomly split the files and copy them to the right folder
    number_validation_files = int(len(files) * VALIDATION_SIZE)
    number_test_files = int(len(files) * TEST_SIZE)
    for index, src in enumerate(files):
        file_name = os.path.split(src)[1]
        if index < number_validation_files:  # validation files
            new_path = os.path.join(VALID_DIRECTORY, category, file_name)
        elif index < number_validation_files + number_test_files:  # test files
            new_path = os.path.join(TEST_DIRECTORY, category, file_name)
        else:  # training files
            new_path = os.path.join(TRAIN_DIRECTORY, category, file_name)
        shutil.copy(src, new_path)
## Samples
import utils;
from utils import *
batch_generator = get_keras_batch_generator(VALID_DIRECTORY, batch_size=2, target_size=(IMAGE_SIZE[0], IMAGE_SIZE[1]))
imgs,labels = next(batch_generator)
%matplotlib inline
plot_grid(imgs, titles=labels)
imgs,labels = next(batch_generator)
%matplotlib inline
plot_grid(imgs, titles=labels)
"""
Explanation: Folder structure
The train directory consists of labelled data with the following convention for each image:
data/train/CLASS.id.jpg
We are going to use keras.preprocessing.image so we want the folder structure to be:
data/train/CLASS/image-name.jpg
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/pcmdi/cmip6/models/sandbox-1/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
deepfield/ibis | docs/source/notebooks/tutorial/3-Projection-Join-Sort.ipynb | apache-2.0 | import ibis
import os
hdfs_port = os.environ.get('IBIS_WEBHDFS_PORT', 50070)
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
hdfs_client=hdfs)
"""
Explanation: Projection, Joining, and Sorting
Setup
End of explanation
"""
table = con.table('functional_alltypes')
table.limit(5)
"""
Explanation: Projections: adding/selecting columns
Projections are the general way for adding new columns to tables, or selecting or removing existing ones.
End of explanation
"""
proj = table['bool_col', 'int_col', 'double_col']
proj.limit(5)
"""
Explanation: First, the basics: selecting columns:
End of explanation
"""
to_select = ['bool_col', 'int_col']
table[to_select].limit(5)
"""
Explanation: You can make a list of columns you want, too, and pass that:
End of explanation
"""
table.select(['int_col', 'double_col']).limit(5)
"""
Explanation: You can also use the explicit projection or select functions
End of explanation
"""
bigger_expr = (table.int_col * 2).name('bigger_ints')
proj2 = table['int_col', bigger_expr]
proj2.limit(5)
"""
Explanation: We can add new columns by using named column expressions
End of explanation
"""
table2 = table.add_column(bigger_expr)
table2.limit(5)
"""
Explanation: Adding columns is a shortcut for projection. In Ibis, adding columns always produces a new table reference
End of explanation
"""
table.select([table, bigger_expr]).limit(5)
"""
Explanation: In more complicated projections involving joins, we may need to refer to all of the columns in a table at once. This is how add_column works. We just pass the whole table in the projection:
End of explanation
"""
foo_constant = ibis.literal(5).name('foo')
table.select([table.bigint_col, foo_constant]).limit(5)
"""
Explanation: To use constants in projections, we have to use a special ibis.literal function
End of explanation
"""
region = con.table('tpch_region')
nation = con.table('tpch_nation')
customer = con.table('tpch_customer')
lineitem = con.table('tpch_lineitem')
"""
Explanation: Joins
Ibis attempts to provide good support for all the standard relational joins supported by Impala, Hive, and other relational databases.
inner, outer, left, right joins
semi and anti-joins
To illustrate the joins we'll use the TPC-H tables for now
End of explanation
"""
join_expr = region.r_regionkey == nation.n_regionkey
joined = region.inner_join(nation, join_expr)
"""
Explanation: region and nation are connected by their respective regionkey columns
End of explanation
"""
table_ref = joined[nation, region.r_name.name('region')]
table_ref.columns
table_ref.limit(5)
agged = table_ref.aggregate([table_ref.n_name.count().name('nrows')], by=['region'])
agged
"""
Explanation: If you have multiple join conditions, either compose them yourself (like filters) or pass a list to the join function
join_exprs = [cond1, cond2, cond3]
joined = table1.inner_join(table2, join_exprs)
Once you've joined tables, you don't necessarily have anything yet. I'll put it in big letters
Joins are declarations of intent
After calling the join function (which validates the join condition, of course), you may perform any number of other operations:
Aggregation
Projection
Filtering
and so forth. Most importantly, depending on your schemas, the joined tables may include overlapping column names that could create a conflict if not addressed directly. Some other systems, like pandas, handle this by applying suffixes to the overlapping column names and computing the fully joined tables immediately. We don't do this.
So, with the above data, suppose we just want the region name and all the nation table data. We can then make a projection on the joined reference:
End of explanation
"""
joined.group_by(region.r_name).size()
"""
Explanation: Things like group_by work with unmaterialized joins, too, as you would hope.
End of explanation
"""
table = con.table('functional_alltypes')
hour_dim = table.timestamp_col.hour().name('hour')
hourly_mean = (table.group_by(hour_dim)
.aggregate([table.double_col.mean().name('avg_double')]))
hourly_mean
"""
Explanation: Explicit join materialization
If you're lucky enough to have two table schemas with no overlapping column names (lucky you!), the join can be materialized without having to perform some other relational algebra operation:
joined = a.inner_join(b, join_expr).materialize()
Note that this is equivalent to doing
joined = a.join(b, join_expr)[a, b]
i.e., joining and then selecting all columns from both joined tables. If there is a name overlap, just like with the equivalent projection, there will be an immediate error.
Writing down join keys
In addition to having explicit comparison expressions as join keys, you can also write down column names, or use expressions referencing the joined tables, e.g.:
joined = a.join(b, [('a_key1', 'b_key2')])
joined2 = a.join(b, [(left_expr, right_expr)])
joined3 = a.join(b, ['common_key'])
These will be compared for equality when performing the join; if you want non-equality conditions in the join, you will have to form those yourself.
Join referential nuances
There's nothing to stop you from doing many joins in succession, and, in fact, with complex schemas it will be to your advantage to build the joined table references for your analysis first, then reuse the objects as you go:
joined_ref = (a.join(b, a.key1 == b.key2)
.join(c, [a.key3 == c.key4, b.key5 == c.key6]))
Note that, at least right now, you need to provide explicit comparison expressions (or tuples of column references) referencing the joined tables.
Aggregating joined table with metrics involving more than one base reference
Let's consider the case similar to the SQL query
SELECT a.key, sum(a.foo - b.bar) AS metric
FROM a
JOIN b
ON a.key = b.key
GROUP BY 1
I'll use a somewhat contrived example using the data we already have to show you what this looks like. Take the functional_alltypes table, and suppose we want to compute the mean absolute deviation (MAD) from the hourly mean of the double_col. Silly, I know, but bear with me.
First, the hourly mean:
End of explanation
"""
mad = (table.double_col - hourly_mean.avg_double).abs().mean().name('MAD')
"""
Explanation: Okay, great, now how about the MAD? The only trick here is that we can form an aggregate metric from the two tables, and we then have to join it later. Ibis will not figure out how to join the tables automatically for us.
End of explanation
"""
join_expr = hour_dim == hourly_mean.hour
"""
Explanation: This metric is only valid if used in the context of table joined with hourly_mean, so let's do that. Writing down the join condition is simply a matter of writing:
End of explanation
"""
result = (table.inner_join(hourly_mean, join_expr)
.group_by(table.string_col)
.aggregate([mad]))
result
"""
Explanation: Now let's compute the MAD grouped by string_col
End of explanation
"""
table = con.table('functional_alltypes')
keys = ['string_col', (table.bigint_col > 40).ifelse('high', 'low').name('bigint_tier')]
metrics = [table.double_col.sum().name('total')]
agged = (table
.filter(table.int_col < 8)
.group_by(keys)
.aggregate(metrics))
sorted_agged = agged.sort_by(['bigint_tier', ('total', False)])
sorted_agged
"""
Explanation: Sorting
Sorting tables works similarly to the SQL ORDER BY clause. We use the sort_by function and pass one of the following:
Column names
Column expressions
One of these, with a False (descending order) or True (ascending order) qualifier
So, to sort by total in ascending order we write:
table.sort_by('total')
or by key then by total in descending order
table.sort_by(['key', ('total', False)])
For descending sort order, there is a convenience function desc which can wrap sort keys
from ibis import desc
table.sort_by(['key', desc(table.total)])
Here's a concrete example involving filters, custom grouping dimension, and sorting
End of explanation
"""
agged.sort_by(ibis.desc('total'))
"""
Explanation: For sorting in descending order, you can use the special ibis.desc function:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_linear_model_patterns.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Romain Trachel <trachelr@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Linear classifier on sensor data with plot patterns and filters
Decoding, a.k.a MVPA or supervised machine learning applied to MEG and EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable [1] than the classifier filters (weight vectors).
The patterns explain how the MEG and EEG data were generated from the
discriminant neural sources which are extracted by the filters.
Note that patterns and filters are more similar for MEG than for EEG data
because the noise is less spatially correlated in MEG than in EEG.
[1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., & Bießmann, F. (2014). On the interpretation of
weight vectors of linear models in multivariate neuroimaging.
NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=4, baseline=None, preload=True)
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
eeg_epochs = epochs.copy().pick_types(meg=False, eeg=True)
eeg_data = eeg_epochs.get_data().reshape(len(labels), -1)
"""
Explanation: Set parameters
End of explanation
"""
clf = LogisticRegression()
sc = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = sc.fit_transform(meg_data)
model.fit(X, labels)
# plot patterns and filters
model.plot_patterns(meg_epochs.info, title='MEG Patterns')
model.plot_filters(meg_epochs.info, title='MEG Filters')
# fit the classifier on EEG data
X = sc.fit_transform(eeg_data)
model.fit(X, labels)
# plot patterns and filters
model.plot_patterns(eeg_epochs.info, title='EEG Patterns')
model.plot_filters(eeg_epochs.info, title='EEG Filters')
"""
Explanation: Decoding in sensor space using a LogisticRegression classifier
End of explanation
"""
|
qkitgroup/qkit | qkit/doc/notebooks/Sample_Class.ipynb | gpl-2.0 | import qkit
qkit.cfg['datadir'] = r'c:\data'
qkit.cfg['run_id'] = 'Run0'
qkit.cfg['user'] = 'qkit_user'
import qkit.measure.samples_class as sc
demo = sc.Sample()
"""
Explanation: Qkit Sample Objects
The sample objects are very general and basic objects in qkit. They can be used to store any parameters of your current measurement sample. Sample objects are used as default in some measurement scripts like timedomain measurements to reduce the number of parameters that is passed as arguments.
The sample object can basically be seen as a dict, where you can store any information you want. This is particularly helpful if you write your own measurement notebook and want to apply it to different samples with different parameters. You can then simply exchange the loaded sample at the beginning of your notebook and leave the rest untouched.
Get started
End of explanation
"""
print(demo.get_all())
demo.name
demo.comment = "This sample looks promising."
"""
Explanation: We start by creating an empty sample, which only has comment and name as attributes.
You can either use the sample.get_all() function to get a string of all attributes, or you directly use sample.attribute to access the attribute directly.
End of explanation
"""
demo.frequency = 8e9
print(demo.get_all())
"""
Explanation: Adding new attributes is easy, you can just set them:
End of explanation
"""
demo.get('frequency',1e9)
demo.get('current',0)
"""
Explanation: The sample class has also a get function, which can be used to set a default. (the same as in a dict)
End of explanation
"""
demo.save()
demo.save('sweet_spot')
demo.save(u'C:/Users/Public/qkitsample')
"""
Explanation: Saving samples
The save function saves the full sample object into a JSON serialized file. You can pass a filename argument:
* None (default): save to datadir/ID.sample
* absolute filepath: save to filepath
* any other string: save to datadir/ID_string.sample
Here, datadir is qkit.cfg['datadir'] and ID is the measurement ID as it would be given for a regular measurement.
End of explanation
"""
demo.load('Run0/qkit_user/P8PWYL.sample') # path can be specified relative to the datadir
demo2 = sc.Sample(u'C:/Users/Public/qkitsample.sample') # absolute pathname is also fine
"""
Explanation: Loading samples
You can either use an existig sample with sample.load(filename) or generate a new sample Sample(filename)
End of explanation
"""
|
Kappa-Dev/ReGraph | examples/Tutorial_Neo4j_backend/Part2_hierarchies.ipynb | mit | from regraph import NXGraph, Neo4jHierarchy, Rule
from regraph import plot_graph, plot_instance, plot_rule
%matplotlib inline
"""
Explanation: ReGraph tutorial (Neo4j backend)
Part 2: Rewriting hierarchies of graphs
ReGraph allows creating hierarchies of graphs related by means of homomorphisms (or typing). In the context of a hierarchy, if there exists a homomorphism $G \rightarrow T$, we say that graph $G$ is typed by a graph $T$. A graph hierarchy is a DAG, where nodes are graphs and edges are homomorphisms. A homomorphism maps every node of $G$ to some node in $T$ (a type) in such a way that:
- edges are preserved
- attributes of both nodes and edges are preserved
ReGraph provides the data structure Neo4jHierarchy that encodes a graph hierarchy as a property graph stored in a Neo4j database. This encoding represents each graph in a hierarchy with nodes of the database labeled by a unique label corresponding to the ID of the corresponding graph in the hierarchy. For example, the following Cypher query matches all the nodes belonging to the graph graphID in the hierarchy:
MATCH (n:graphID) RETURN n
The edges in graphs are labeled as edge and the typing edges are labeled as typing. We can then easily find the typing of nodes in graph1 by graph2 with the query:
MATCH (n:graph1)-[:typing]->(m:graph2) RETURN n, m
The hierarchy skeleton is represented by nodes labeled as graph by default. So, matching the nodes of the hierarchy skeleton can be done with the following query:
MATCH (n:graph) return n
End of explanation
"""
# Define graph G
g = NXGraph()
g.add_nodes_from(["protein", "binding", "region", "compound"])
g.add_edges_from([("region", "protein"), ("protein", "binding"), ("region", "binding"), ("compound", "binding")])
# Define graph T
t = NXGraph()
t.add_nodes_from(["action", "agent"])
t.add_edges_from([("agent", "agent"), ("agent", "action")])
# Create a hierarchy
simple_hierarchy = Neo4jHierarchy(uri="bolt://localhost:7687", user="neo4j", password="admin")
# If you run this notebook multiple times, you need to clear the graph in the db
simple_hierarchy._clear()
simple_hierarchy.add_graph("G", g, {"name": "Simple protein interaction"})
simple_hierarchy.add_graph("T", t, {"name": "Agent interaction"})
simple_hierarchy.add_typing(
"G", "T",
{"protein": "agent",
"region": "agent",
"compound": "agent",
"binding": "action",
}
)
print(simple_hierarchy)
"""
Explanation: 1. Creating and modifying a hierarchy object
Consider the following example of a simple graph hierarchy. The two graphs $G$ and $T$ are created and added to the hierarchy. Afterwards, a typing homomorphism between $G$ and $T$ is added, so that every node of $G$ is typed by some node in $T$.
End of explanation
"""
type(simple_hierarchy.get_graph("T"))
"""
Explanation: The method get_graph returns the graph object corresponding to the provided graph id.
End of explanation
"""
simple_hierarchy.get_typing("G", "T")
"""
Explanation: The method get_typing returns the dictionary object corresponding to the provided hierarchy edge and representing the associated graph homomorphism.
End of explanation
"""
lhs = NXGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([("a", "b")])
p = NXGraph()
p.add_nodes_from(["a", "b"])
p.add_edges_from([])
rhs = NXGraph()
rhs.add_nodes_from(["a", "b", "c"])
rhs.add_edges_from([("c", "a"), ("c", "b")])
# By default if `p_lhs` and `p_rhs` are not provided
# to a rule, it tries to construct this homomorphisms
# automatically by matching the names. In this case we
# have defined lhs, p and rhs in such a way that that
# the names of the matching nodes correspond
rule = Rule(p, lhs, rhs)
plot_rule(rule)
"""
Explanation: 2. Rewriting of objects in a hierarchy
ReGraph implements the rewriting technique called sesqui-pushout rewriting that allows transforming graphs by applying rules through their instances (matchings). Rewriting an individual graph in a hierarchy may require an update of other graphs and typings in this hierarchy; such updates are called propagation and come in two types: backward and forward propagation.
Backward propagation briefly:
- If some graph elements (nodes/edges or attributes) are removed from a graph in the hierarchy, then all the respective elements that are typed by them in the ancestor graphs should be removed.
- If a graph node is cloned, then for every instance of this node (every node that is typed by the clonned node) in the ancestor graphs we either: (a) specify to which clone it corresponds or (b) clone it.
Forward propagation briefly:
- If some graph nodes are merged and these nodes are typed by different nodes in a descendant graph, the corresponding nodes in the descendant graph should be merged.
- If a new graph element (node/edge or attribute) is added, then for all the descendent graphs in the hierarchy we either (a) select an existing element to type the added element or (b) add a new element to type the added element.
For more details, please see here.
ReGraph allows to rewrite individual graphs situated in the hierarchy using the method rewrite of Neo4jHierarchy. The rewriting can be done in two modes:
Strict rewriting, which does not allow propagation.
Non-strict rewriting, which allows propagation.
The rewrite takes as the input the following parameters:
graph_id, ID of the graph in the hierarchy to rewrite,
rule, a rule object to apply,
instance, a dictionary containing an instance of the lhs of the rule in the graph subject to rewriting, by default, tries to construct identity morphism of the nodes of the pattern,
p_typing, a dictionary containing typing of graphs in the hierarchy by the interface of the rule, keys are IDs of hierarchy graphs, values are dictionaries containing the mapping of nodes from the hierarchy graphs to the interface nodes (note that a node from a graph can be typed by a set of nodes in the interface of the rule, e.g. if we want to perform cloning of some types, etc).
rhs_typing, a dictionary containing typing of the rhs by graphs of the hierarchy, keys are ids of hierarchy graphs, values are dictionaries containing the mapping of nodes from the lhs to the nodes of the typing graph given by the respective key of the value (note that a node from the rhs can be typed by a set of nodes of some graph, e.g. if we want to perform merging of some types, etc),
strict, flag indicating if rewriting is strict, then any propagation is not allowed.
2.1. Strict rewriting
Let us create a Rule object containing a rule we would like to apply.
End of explanation
"""
instances = simple_hierarchy.find_matching("G", rule.lhs)
print("Instances: ", instances)
"""
Explanation: The created rule removes the edge a->b, adds the new node c and two edges c->a and c->b.
Let us find instances of the created rule in the graph G.
End of explanation
"""
instance = {
"a": "protein",
"b": "binding"
}
"""
Explanation: Let us fix the desired instance: we would like to remove the edge from protein to binding and add some new node connecting them.
End of explanation
"""
try:
rhs_instance = simple_hierarchy.rewrite("G", rule, instance, strict=True)
except Exception as e:
print("Error message: ", e)
print("Type: ", type(e))
"""
Explanation: We try to apply the rule to the selected instance as is in the strict rewriting mode.
End of explanation
"""
rhs_typing = {
"T": {"c": "agent"}
}
rhs_instance = simple_hierarchy.rewrite(
"G", rule, instance, rhs_typing=rhs_typing, strict=True)
print("Instance of the RHS in G", rhs_instance)
"""
Explanation: We have failed to rewrite G, because we have not specified a typing for the newly added node c. Let us try again, but this time we will provide such a typing.
End of explanation
"""
lhs = NXGraph()
lhs.add_nodes_from(["agent"])
rule = Rule.from_transform(lhs)
_, rhs_clone = rule.inject_clone_node("agent")
plot_rule(rule)
instance = {
"agent": "agent"
}
"""
Explanation: We will now create a rule that, applied to T, clones the node agent into two nodes.
End of explanation
"""
try:
rhs_instance = simple_hierarchy.rewrite("T", rule, instance, strict=True)
except Exception as e:
print("Error message: ", e)
print("Type: ", type(e))
"""
Explanation: We try to apply the created rule to the graph T in the strict mode.
End of explanation
"""
p_typing = {
"G": {
'protein': 'agent',
'region': 'agent',
'compound': rhs_clone,
'c': 'agent'
}
}
rhs_instance = simple_hierarchy.rewrite("T", rule, instance, p_typing=p_typing, strict=True)
print("Instance of the RHS in G", rhs_instance)
"""
Explanation: We have failed to rewrite T, because we have not specified typing for instances of agent in $p$. Let us try again, but this time we will provide such a typing.
End of explanation
"""
simple_hierarchy.relabel_graph_node('T', rhs_instance['agent'], 'organic_agent')
simple_hierarchy.relabel_graph_node('T', rhs_instance[rhs_clone], 'non_organic_agent')
print(simple_hierarchy.get_typing("G", "T"))
"""
Explanation: Let us relabel nodes in T.
End of explanation
"""
hierarchy = Neo4jHierarchy(uri="bolt://localhost:7687", user="neo4j", password="admin")
hierarchy._clear()
colors = NXGraph()
colors.add_nodes_from([
"green", "red"
])
colors.add_edges_from([
("red", "green"),
("red", "red"),
("green", "green")
])
hierarchy.add_graph("colors", colors)
shapes = NXGraph()
shapes.add_nodes_from(["circle", "square"])
shapes.add_edges_from([
("circle", "square"),
("square", "circle"),
("circle", "circle")
])
hierarchy.add_graph("shapes", shapes)
quality = NXGraph()
quality.add_nodes_from(["good", "bad"])
quality.add_edges_from([
("bad", "bad"),
("bad", "good"),
("good", "good")
])
hierarchy.add_graph("quality", quality)
g1 = NXGraph()
g1.add_nodes_from([
"red_circle",
"red_square",
])
g1.add_edges_from([
("red_circle", "red_square"),
("red_circle", "red_circle"),
("red_square", "red_circle")
])
g1_colors = {
"red_circle": "red",
"red_square": "red",
}
g1_shapes = {
"red_circle": "circle",
"red_square": "square",
}
hierarchy.add_graph("g1", g1)
hierarchy.add_typing("g1", "colors", g1_colors)
hierarchy.add_typing("g1", "shapes", g1_shapes)
g2 = NXGraph()
g2.add_nodes_from([
"good_circle",
"good_square",
"bad_circle",
])
g2.add_edges_from([
("good_circle", "good_square"),
("good_square", "good_circle"),
("bad_circle", "good_circle"),
("bad_circle", "bad_circle"),
])
g2_shapes = {
"good_circle": "circle",
"good_square": "square",
"bad_circle": "circle"
}
g2_quality = {
"good_circle": "good",
"good_square": "good",
"bad_circle": "bad",
}
hierarchy.add_graph("g2", g2)
hierarchy.add_typing("g2", "shapes", g2_shapes)
hierarchy.add_typing("g2", "quality", g2_quality)
g3 = NXGraph()
g3.add_nodes_from([
"good_red_circle",
"bad_red_circle",
"good_red_square",
])
g3.add_edges_from([
("bad_red_circle", "good_red_circle"),
("good_red_square", "good_red_circle"),
("good_red_circle", "good_red_square")
])
g3_g1 = {
"good_red_circle": "red_circle",
"bad_red_circle": "red_circle",
"good_red_square": "red_square"
}
g3_g2 = {
"good_red_circle": "good_circle",
"bad_red_circle": "bad_circle",
"good_red_square": "good_square",
}
hierarchy.add_graph("g3", g3)
hierarchy.add_typing("g3", "g1", g3_g1)
hierarchy.add_typing("g3", "g2", g3_g2)
print(hierarchy)
for graph in hierarchy.graphs():
print("Graph '{}', nodes: {}".format(graph, hierarchy.get_graph(graph).nodes()))
"""
Explanation: 2.2. Rewriting and propagation
To illustrate rewriting with propagation, let us consider a slightly more sophisticated hierarchy.
End of explanation
"""
print("Node types in G3:\n")
for node in hierarchy.get_graph("g3").nodes():
print(node, hierarchy.node_type("g3", node))
"""
Explanation: Some of the graphs in the hierarchy are now typed by multiple graphs, which is reflected in the types of nodes, as in the example below:
End of explanation
"""
lhs = NXGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([
("a", "b"),
("b", "a")
])
p = NXGraph()
p.add_nodes_from(["a", "a1", "b"])
p.add_edges_from([
("a", "b"),
("a1", "b")
])
rhs = NXGraph.copy(p)
rule = Rule(
p, lhs, rhs,
{"a": "a", "a1": "a", "b": "b"},
{"a": "a", "a1": "a1", "b": "b"},
)
plot_rule(rule)
"""
Explanation: 2.3. Rewriting and propagation
We now show how graph rewriting can be performed in such a hierarchy. In the previous example we performed strict rewriting, where no propagation took place.
The following example illustrates how ReGraph propagates the changes made by rewriting at any level to all the graphs (as well as the rules) typed by the target of rewriting.
End of explanation
"""
instances = hierarchy.find_matching("shapes", lhs)
print("Instances:")
for instance in instances:
print(instance)
"""
Explanation: We have created a rule that clones the node a and reconnects the edges between a and b.
End of explanation
"""
rhs_instances = hierarchy.rewrite("shapes", rule, {"a": "circle", "b": "square"})
"""
Explanation: We rewrite the graph shapes with the fixed instance (so, the node circle is cloned).
End of explanation
"""
for graph in hierarchy.graphs():
print("Graph '{}', nodes: {}".format(graph, hierarchy.get_graph(graph).nodes()))
"""
Explanation: Observe the following plots: the cloning of circle was propagated to all the ancestors of shapes, because we didn't specify how to retype instances of circle for these ancestors using the p_typing parameter. This is an example of the previously mentioned backward propagation.
End of explanation
"""
pattern = NXGraph()
pattern.add_nodes_from(["a", "b"])
rule = Rule.from_transform(pattern)
rhs_node = rule.inject_merge_nodes(["a", "b"])
rule.inject_add_node("c")
rule.inject_add_edge("c", rhs_node)
instance = {
"a": "good_circle",
"b": "bad_circle",
}
"""
Explanation: Let us now consider a small example of forward propagation. We will create a rule that performs some additions and merges of nodes.
End of explanation
"""
rhs_typing = {
"shapes": {
"c": "circle"
}
}
rhs_instance = hierarchy.rewrite("g2", rule, instance, rhs_typing=rhs_typing)
print(rhs_instance)
"""
Explanation: Application of this rule will merge nodes bad_circle and good_circle in the graph g2. It with then add a new node and connect it with an edge to the merged node. Let us specify some typings of the new node in the RHS: we will set the new node to be typed as circle in the graph shapes.
End of explanation
"""
for graph in hierarchy.graphs():
print("Graph '{}', nodes: {}".format(graph, hierarchy.get_graph(graph).nodes()))
"""
Explanation: Observe the following graphs: as the result of forward propagation, nodes good and bad were merged in the graph quality. In addition, a new node typing the node c of the rule was added to the graph quality.
End of explanation
"""
hierarchy_json = hierarchy.to_json()
import json
print(json.dumps(hierarchy_json, indent=" "))
# Clear the DB for the previous hierarchy
hierarchy._clear()
# Load json-data back to the DB
hierarchy = Neo4jHierarchy.from_json(
uri="bolt://localhost:7687", user="neo4j", password="admin", json_data=hierarchy_json)
"""
Explanation: 3. Dumping hierarchies with JSON
ReGraph provides some utils for serialization of Neo4jHierarchy objects and implements the following methods for loading and exporting your hierarchy in JSON-format:
Neo4jHierarchy.to_json creates a JSON representation of the hierarchy;
Neo4jHierarchy.from_json loads a hierarchy from a JSON representation (returns a new Hierarchy object);
Neo4jHierarchy.export exports the hierarchy to a file (JSON format);
Neo4jHierarchy.load loads a hierarchy from a .json file (returns a new object as well).
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/recursive_feature_elimination.ipynb | mit | # Load libraries
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFECV
from sklearn import datasets, linear_model
import warnings
# Suppress an annoying but harmless warning
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
"""
Explanation: Title: Recursive Feature Elimination
Slug: recursive_feature_elimination
Summary: How to do recursive feature elimination for machine learning in Python.
Date: 2017-09-14 12:00
Category: Machine Learning
Tags: Feature Selection
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Generate a features matrix and target vector
X, y = make_regression(n_samples = 10000,
n_features = 100,
n_informative = 2,
random_state = 1)
"""
Explanation: Create Data
End of explanation
"""
# Create a linear regression
ols = linear_model.LinearRegression()
"""
Explanation: Create Linear Model
End of explanation
"""
# Create recursive feature eliminator that scores features by mean squared errors
rfecv = RFECV(estimator=ols, step=1, scoring='neg_mean_squared_error')
# Fit recursive feature eliminator
rfecv.fit(X, y)
# Recursive feature elimination
rfecv.transform(X)
"""
Explanation: Conduct Recursive Feature Elimination
End of explanation
"""
# Number of best features
rfecv.n_features_
"""
Explanation: Number Of Features Remaining
End of explanation
"""
|
kit-cel/wt | nt2_ce2/vorlesung/ch_4_diversity/detection_Rn.ipynb | gpl-2.0 | # importing
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 30}
plt.rc('font', **font)
#plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(30, 12) )
"""
Explanation: Content and Objective
Showing results of binary detection in $\mathbb{R}^n$
Method: Two points are sampled randomly; afterwards, several trials are simulated to estimate the error probability
End of explanation
"""
# define dimension of space
N = 2
# define number of trials
N_trial = int( 1e4 )
# generate two points randomly and get the vector between them
s_1 = np.random.randn( N )
s_2 = np.random.randn( N )
s = [ s_1, s_2 ]
Eb = .5 * np.linalg.norm( s_1 )**2 + .5 * np.linalg.norm( s_2 ) **2
d = np.linalg.norm( s_1 - s_2 )
# connecting vector
v = np.array( s_1 - s_2 )
v /= np.linalg.norm( v )
# print Tx vectors
print('Transmit vectors:\n')
print('s_1=', s_1)
print('s_2=', s_2)
"""
Explanation: Parameters
End of explanation
"""
# define snr and linearize it
Eb_N0_dB = np.arange(0, 31, 2 )
Eb_N0 = 10**( Eb_N0_dB / 10 )
sigma2 = Eb / ( 2 * Eb_N0 )
# NOTE: sigma2 has to include the intended Eb/N0 as well as the actual Eb
# NOTE 2: since "all is real", sigma^2 = N0/2
# initialize ber
ber_euclidean = np.zeros_like( sigma2 )
ber_projection = np.zeros_like( sigma2 )
# loop along snr
for ind_sigma2, val_sigma2 in enumerate( sigma2 ):
# initialize number of errors
numb_errors_projection = 0
numb_errors_euclidean = 0
# loop along realizations
for _n in range( N_trial ):
bit = np.random.randint( 2 )
x = s[ bit ]
# get output signal by adding noise
noise = np.sqrt( val_sigma2 ) * np.random.randn( N )
y = x + noise
# two decision criteria:
# - first using "standard ML decision in R^n" --> "euclidean"
# - second using projection as on slides --> "projection"
# get decision variable
y_tilde = np.inner( v, y - ( s_1 + s_2 ) / 2 )
# estimate bit
bit_est_projection = int( y_tilde < 0 )
numb_errors_projection += int( bit != bit_est_projection )
# estimate by using euclidean distance
bit_est_euclidean = int( np.linalg.norm( y - s_1 ) > np.linalg.norm( y - s_2 ) )
numb_errors_euclidean += int( bit != bit_est_euclidean )
# evaluate error rate
ber_projection[ ind_sigma2 ] = numb_errors_projection / N_trial
ber_euclidean[ ind_sigma2 ] = numb_errors_euclidean / N_trial
"""
Explanation: Add Noise and Estimate
End of explanation
"""
ber_bpsk_theo = 1 - stats.norm.cdf( np.sqrt( 2 * Eb_N0 ) )
ber_theo = 1 - stats.norm.cdf( np.sqrt( np.linalg.norm( s_1 - s_2 )**2 / ( 4. * sigma2 ) ) )
# print Tx vectors
print('Transmit vectors:\n')
print('s_1=', s_1)
print('s_2=', s_2)
plt.figure()
plt.plot( Eb_N0_dB, ber_bpsk_theo, linewidth=2.0, label = "BPSK theo.")
plt.plot( Eb_N0_dB, ber_theo, linewidth=2.0, label = "theo.")
plt.semilogy( Eb_N0_dB, ber_euclidean, '-X', ms=18, label='sim., Euclidean' )
plt.semilogy( Eb_N0_dB, ber_projection, '-D', ms=12, label='sim., projection' )
plt.legend( loc='upper right' )
plt.grid(True)
plt.xlabel('$E_b/N_0$ (dB)')
plt.ylabel('BER');
plt.ylim( (1e-4, 1) )
plt.title('s_1 = ' + str(s_1) + ', s_2 = ' + str( s_2 ) )
"""
Explanation: Show Results
End of explanation
"""
|
jeffzhengye/pylearn | pybasic/.ipynb_checkpoints/基本操作实例-checkpoint.ipynb | unlicense | from pathlib import Path
import pathlib
save_dir = "./test_dir"
Path(save_dir).mkdir(parents=True, exist_ok=True)
### get current directory
print(Path.cwd())
print(Path.home())
print(pathlib.Path.home().joinpath('python', 'scripts', 'test.py'))
"""
Explanation: File system operations
pathlib
The pathlib module was introduced in Python 3.4
It offers more flexible functionality than plain string paths
cheat sheet: https://github.com/chris1610/pbpython/blob/master/extras/Pathlib-Cheatsheet.pdf
shutil
Directory operations
pathlib: create a directory if it does not exist
End of explanation
"""
# Reading and Writing Files
path = pathlib.Path.cwd() / 'test.txt'
with open(path, mode='r') as fid:
headers = [line.strip() for line in fid if line.startswith('#')]
print('\n'.join(headers))
print('full text', path.read_text())
print(path.resolve().parent == pathlib.Path.cwd())
"""
Explanation: useful functions
.read_text(): open the path in text mode and return the contents as a string.
.read_bytes(): open the path in binary/bytes mode and return the contents as a bytestring.
.write_text(): open the path and write string data to it.
.write_bytes(): open the path in binary/bytes mode and write data to it.
.resolve(): find the full path of the file or directory.
End of explanation
"""
print('path', path)
print('stem', path.stem)
print('suffix', path.suffix)
print('parent', path.parent)
print('parent of parent', path.parent.parent)
print('anchor', path.anchor)
# move or replace file
path.with_suffix('.py')
path.replace(path.with_suffix('.md'))  # change the suffix
path.with_suffix('.md').replace(path.with_suffix('.txt'))
# Display a Directory Tree
def tree(directory):
print(f'+ {directory}')
for path in sorted(directory.rglob('*')):
depth = len(path.relative_to(directory).parts)
spacer = ' ' * depth
print(f'{spacer}+ {path.name}')
tree(pathlib.Path.cwd())
"""
Explanation: .name: the file name without any directory
.parent: the directory containing the file, or the parent directory if path is a directory
.stem: the file name without the suffix
.suffix: the file extension
.anchor: the part of the path before the directories
End of explanation
"""
from datetime import datetime
directory = pathlib.Path.cwd()
time, file_path = max((f.stat().st_mtime, f) for f in directory.iterdir())
print(datetime.fromtimestamp(time), file_path)
directory = pathlib.Path.home()
file_list = list(directory.glob('*.*'))
print(file_list)
"""
Explanation: Find the Last Modified File
End of explanation
"""
def unique_path(directory, name_pattern):
counter = 0
while True:
counter += 1
path = directory / name_pattern.format(counter)
if not path.exists():
return path
path = unique_path(pathlib.Path.cwd(), 'test{:03d}.txt')
print(path)
"""
Explanation: Create a Unique File Name
End of explanation
"""
input_path = pathlib.Path("/mnt/d/code/image/hedian-demo/data/test")
file_list = []
if input_path.exists():
    if input_path.is_dir():
        file_list = [p.resolve() for p in input_path.glob("**/*")
                     if p.suffix in {".png", ".jpg", ".JPG", ".PNG"}]
        print(file_list)
    else:
        print(input_path)
"""
Explanation: Check that the path exists and is a directory, then glob with multiple extensions
End of explanation
"""
# move all .txt files to an archive folder
import glob
import os
import shutil
for file_name in glob.glob('*.txt'):  # glob returns a list of matching names
    new_path = os.path.join('archive', file_name)
    shutil.move(file_name, new_path)
"""
Explanation: shutil
End of explanation
"""
# counting files
import collections
print(collections.Counter(p.suffix for p in pathlib.Path.cwd().iterdir()))
print('nice', collections.Counter(p.suffix for p in pathlib.Path.cwd().glob('*.t*')))
"""
Explanation: collections Counter
End of explanation
"""
|
finklabs/loganalyser | demo2_experiments.ipynb | mit | %matplotlib inline
import pandas as pd
from korg import korg
from korg.pattern import PatternRepo
import tarfile
from loganalyser import plot
"""
Explanation: moving the work to jupyter
This notebook demonstrates the use of single-line calls to D3 visualizations via the simple d3_lib.py file and referenced css and js files.
Configuration for D3
End of explanation
"""
pr = PatternRepo(['./korg_patterns/'], False)
lg = korg.LineGrokker('%{AOGAERU_LOAD}', pr)
df = pd.DataFrame()
# now grok the aogaeru load log
with open("./data/results.csv") as infile:
for line in infile:
#print line
#print lg.grok(line)
df = df.append(lg.grok(line), ignore_index=True)
df.timestamp = pd.to_datetime(df.timestamp) # convert to timestamp
df.head()
df.dtypes
"""
Explanation: Aogaeru load
End of explanation
"""
pr = PatternRepo(['./korg_patterns/'], False)
lg = korg.LineGrokker('%{LINUX_VMSTAT}', pr)
vm = pd.DataFrame()
with tarfile.open('data/supercars-oscounters-13102912.tgz', mode="r:gz") as tar:
#infile = tar.extractfile('./vmstat.txt.%s' % testrun)
infile = tar.extractfile('./vmstat.txt.13102912')
# read first line start and interval
# 20131029 124500 interval 5 sec
for line in infile:
#print line
#print lg.grok(line)
vm = vm.append(lg.grok(line), ignore_index=True)
#vm.timestamp = pd.to_datetime(vm.timestamp) # convert to timestamp
# tz is timezone of data
rng = pd.date_range('10/29/2013 12:45:00', periods=400, freq='5S', tz="Europe/Berlin")
#rng = pd.date_range('10/29/2013 12:45:00', periods=400, freq='5S')
cpu = pd.DataFrame()
cpu['idle'] = vm['id'].astype(int)
cpu['system'] = vm['sy'].astype(int)
cpu['user'] = vm['us'].astype(int)
cpu['wait'] = vm['wa'].astype(int)
# we do not use index since we "to_json" next
#cpu['timestamp'] = rng
#cpu.set_index(['timestamp'], inplace=True)
#cpu.index = pd.to_datetime(rng, utc=True)
cpu['timestamp'] = pd.to_datetime(rng, utc=True) # timestamp in UTC !
cpu.head()
cpu.dtypes
"""
Explanation: LINUX_VMSTAT
End of explanation
"""
# print(cpu.to_csv())  # to_csv has no 'orient' argument
print(cpu.to_json(orient='records', date_format='epoch'))
"""
Explanation: convert the dataframe data into Json format
to_csv does not work since it is necessary to parse it in d3 where it ends up as json format anyways
End of explanation
"""
#HTML(plot.set_styles(['stacked',]))
plot.set_styles(['stacked',])
#HTML(plot.draw_graph('stacked', {'data': cpu.to_json(orient='records')}))
plot.draw_graph('stacked', cpu.to_json(orient='records'))
"""
Explanation: TODO: add from - to (with timezone!)
End of explanation
"""
pr = PatternRepo(['./korg_patterns/'], True)
lg = korg.LineGrokker('%{COMMONAPACHELOG}', pr)
req = pd.DataFrame()
# now grok the aogaeru load log
with open("./data/access_log2") as infile:
for line in infile:
req = req.append(lg.grok(line), ignore_index=True)
req.head()
req.timestamp = pd.to_datetime(req.timestamp) # convert to timestamp
"""
Explanation: Access logs
End of explanation
"""
|
bioinf-jku/SNNs | TF_1_x/getSELUparameters.ipynb | gpl-3.0 | import numpy as np
from scipy.special import erf,erfc
from sympy import Symbol, solve, nsolve
"""
Explanation: Obtain the SELU parameters for arbitrary fixed points
Author: Guenter Klambauer, 2017
End of explanation
"""
def getSeluParameters(fixedpointMean=0,fixedpointVar=1):
""" Finding the parameters of the SELU activation function. The function returns alpha and lambda for the desired fixed point. """
import sympy
from sympy import Symbol, solve, nsolve
aa = Symbol('aa')
ll = Symbol('ll')
nu = fixedpointMean
tau = fixedpointVar
mean = 0.5*ll*(nu + np.exp(-nu**2/(2*tau))*np.sqrt(2/np.pi)*np.sqrt(tau) + \
nu*erf(nu/(np.sqrt(2*tau))) - aa*erfc(nu/(np.sqrt(2*tau))) + \
np.exp(nu+tau/2)*aa*erfc((nu+tau)/(np.sqrt(2*tau))))
var = 0.5*ll**2*(np.exp(-nu**2/(2*tau))*np.sqrt(2/np.pi*tau)*nu + (nu**2+tau)* \
(1+erf(nu/(np.sqrt(2*tau)))) + aa**2 *erfc(nu/(np.sqrt(2*tau))) \
- aa**2 * 2 *np.exp(nu+tau/2)*erfc((nu+tau)/(np.sqrt(2*tau)))+ \
aa**2*np.exp(2*(nu+tau))*erfc((nu+2*tau)/(np.sqrt(2*tau))) ) - mean**2
eq1 = mean - nu
eq2 = var - tau
res = nsolve( (eq2, eq1), (aa,ll), (1.67,1.05))
return float(res[0]),float(res[1])
### To recover the parameters of the SELU with mean zero and unit variance
getSeluParameters(0,1)
### To obtain new parameters for mean zero and variance 2
myFixedPointMean = -0.1
myFixedPointVar = 2.0
myAlpha, myLambda = getSeluParameters(myFixedPointMean,myFixedPointVar)
getSeluParameters(myFixedPointMean,myFixedPointVar)
"""
Explanation: Function to obtain the parameters for the SELU with arbitrary fixed point (mean variance)
End of explanation
"""
def selu(x):
    with ops.name_scope('selu') as scope:
alpha = myAlpha
scale = myLambda
return scale*tf.where(x>=0.0, x, alpha*tf.nn.elu(x))
def dropout_selu(x, rate, alpha= -myAlpha*myLambda, fixedPointMean=myFixedPointMean, fixedPointVar=myFixedPointVar,
noise_shape=None, seed=None, name=None, training=False):
"""Dropout to a value with rescaling."""
def dropout_selu_impl(x, rate, alpha, noise_shape, seed, name):
keep_prob = 1.0 - rate
x = ops.convert_to_tensor(x, name="x")
if isinstance(keep_prob, numbers.Real) and not 0 < keep_prob <= 1:
raise ValueError("keep_prob must be a scalar tensor or a float in the "
"range (0, 1], got %g" % keep_prob)
keep_prob = ops.convert_to_tensor(keep_prob, dtype=x.dtype, name="keep_prob")
keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())
alpha = ops.convert_to_tensor(alpha, dtype=x.dtype, name="alpha")
keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())
if tensor_util.constant_value(keep_prob) == 1:
return x
noise_shape = noise_shape if noise_shape is not None else array_ops.shape(x)
random_tensor = keep_prob
random_tensor += random_ops.random_uniform(noise_shape, seed=seed, dtype=x.dtype)
binary_tensor = math_ops.floor(random_tensor)
ret = x * binary_tensor + alpha * (1-binary_tensor)
a = tf.sqrt(fixedPointVar / (keep_prob *((1-keep_prob) * tf.pow(alpha-fixedPointMean,2) + fixedPointVar)))
b = fixedPointMean - a * (keep_prob * fixedPointMean + (1 - keep_prob) * alpha)
ret = a * ret + b
ret.set_shape(x.get_shape())
return ret
with ops.name_scope(name, "dropout", [x]) as name:
return utils.smart_cond(training,
lambda: dropout_selu_impl(x, rate, alpha, noise_shape, seed, name),
lambda: array_ops.identity(x))
import tensorflow as tf
import numpy as np
import numbers
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.layers import utils
x = tf.Variable(tf.random.normal([10000],mean=myFixedPointMean, stddev=np.sqrt(myFixedPointVar)))
w = selu(x)
y = dropout_selu(w,0.2,training=True)
init = tf.global_variables_initializer()
gpu_options = tf.GPUOptions(allow_growth=True)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
sess.run(init)
z,zz, zzz = sess.run([x, w, y])
#print(z)
#print(zz)
print("mean/var should be at:", myFixedPointMean, "/", myFixedPointVar)
print("Input data mean/var: ", "{:.12f}".format(np.mean(z)), "/", "{:.12f}".format(np.var(z)))
print("After selu: ", "{:.12f}".format(np.mean(zz)), "/", "{:.12f}".format(np.var(zz)))
print("After dropout mean/var", "{:.12f}".format(np.mean(zzz)), "/", "{:.12f}".format(np.var(zzz)))
"""
Explanation: Adjust the SELU function and Dropout to your new parameters
End of explanation
"""
myAlpha = -np.sqrt(2/np.pi) / (np.exp(0.5) * erfc(1/np.sqrt(2))-1 )
myLambda = (1-np.sqrt(np.exp(1))*erfc(1/np.sqrt(2))) * \
np.sqrt( 2*np.pi/ (2 + np.pi -2*np.sqrt(np.exp(1))*(2+np.pi)*erfc(1/np.sqrt(2)) + \
np.exp(1)*np.pi*erfc(1/np.sqrt(2))**2 + 2*np.exp(2)*erfc(np.sqrt(2))))
print("Alpha parameter of the SELU: ", myAlpha)
print("Lambda parameter of the SELU: ", myLambda)
"""
Explanation: For completeness: These are the correct expressions for mean zero and unit variance
End of explanation
"""
|
sofmonk/aima-python | grid.ipynb | mit | import math
def distance(a, b):
"""The distance between two (x, y) points."""
return math.hypot((a[0] - b[0]), (a[1] - b[1]))
"""
Explanation: Grid
The functions here are used often when dealing with 2D grids (like in TicTacToe).
Distance
The function returns the Euclidean Distance between two points in the 2D space.
End of explanation
"""
print(distance((1, 2), (5, 5)))
"""
Explanation: For example:
End of explanation
"""
def distance_squared(a, b):
"""The square of the distance between two (x, y) points."""
return (a[0] - b[0])**2 + (a[1] - b[1])**2
"""
Explanation: Distance Squared
This function returns the square of the distance between two points.
End of explanation
"""
print(distance_squared((1, 2), (5, 5)))
"""
Explanation: For example:
End of explanation
"""
from utils import clip
def vector_clip(vector, lowest, highest):
"""Return vector, except if any element is less than the corresponding
value of lowest or more than the corresponding value of highest, clip to
those values."""
return type(vector)(map(clip, vector, lowest, highest))
"""
Explanation: Vector Clip
With this function we can make sure the values of a vector are within a given range. It takes as arguments three vectors: the vector to clip (vector), a vector containing the lowest values allowed (lowest) and a vector for the highest values (highest). All these vectors are of the same length. If a value v1 in vector is lower than the corresponding value v2 in lowest, then we set v1 to v2. Similarly we "clip" the values exceeding the highest values.
End of explanation
"""
print(vector_clip((-1, 10), (0, 0), (9, 9)))
"""
Explanation: For example:
End of explanation
"""
|
davidgutierrez/HeartRatePatterns | Jupyter/plot_compare_reduction.ipynb | gpl-3.0 | # Authors: Robert McGibbon, Joel Nothman, Guillaume Lemaitre
from __future__ import print_function, division
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.decomposition import PCA, NMF
from sklearn.feature_selection import SelectKBest, chi2
print(__doc__)
pipe = Pipeline([
('reduce_dim', PCA()),
('classify', LinearSVC())
])
N_FEATURES_OPTIONS = [2, 4, 8]
C_OPTIONS = [1, 10, 100, 1000]
param_grid = [
{
'reduce_dim': [PCA(iterated_power=7), NMF()],
'reduce_dim__n_components': N_FEATURES_OPTIONS,
'classify__C': C_OPTIONS
},
{
'reduce_dim': [SelectKBest(chi2)],
'reduce_dim__k': N_FEATURES_OPTIONS,
'classify__C': C_OPTIONS
},
]
reducer_labels = ['PCA', 'NMF', 'KBest(chi2)']
grid = GridSearchCV(pipe, cv=3, n_jobs=1, param_grid=param_grid)
digits = load_digits()
grid.fit(digits.data, digits.target)
mean_scores = np.array(grid.cv_results_['mean_test_score'])
# scores are in the order of param_grid iteration, which is alphabetical
mean_scores = mean_scores.reshape(len(C_OPTIONS), -1, len(N_FEATURES_OPTIONS))
# select score for best C
mean_scores = mean_scores.max(axis=0)
bar_offsets = (np.arange(len(N_FEATURES_OPTIONS)) *
(len(reducer_labels) + 1) + .5)
plt.figure()
COLORS = 'bgrcmyk'
for i, (label, reducer_scores) in enumerate(zip(reducer_labels, mean_scores)):
plt.bar(bar_offsets + i, reducer_scores, label=label, color=COLORS[i])
plt.title("Comparing feature reduction techniques")
plt.xlabel('Reduced number of features')
plt.xticks(bar_offsets + len(reducer_labels) / 2, N_FEATURES_OPTIONS)
plt.ylabel('Digit classification accuracy')
plt.ylim((0, 1))
plt.legend(loc='upper left')
"""
Explanation: Selecting dimensionality reduction with Pipeline and GridSearchCV
This example constructs a pipeline that does dimensionality
reduction followed by prediction with a support vector
classifier. It demonstrates the use of GridSearchCV and
Pipeline to optimize over different classes of estimators in a
single CV run -- unsupervised PCA and NMF dimensionality
reductions are compared to univariate feature selection during
the grid search.
Additionally, Pipeline can be instantiated with the memory
argument to memoize the transformers within the pipeline, avoiding to fit
again the same transformers over and over.
Note that the use of memory to enable caching becomes interesting when the
fitting of a transformer is costly.
Illustration of Pipeline and GridSearchCV
This section illustrates the use of a Pipeline with
GridSearchCV
End of explanation
"""
from tempfile import mkdtemp
from shutil import rmtree
from sklearn.externals.joblib import Memory
# Create a temporary folder to store the transformers of the pipeline
cachedir = mkdtemp()
memory = Memory(cachedir=cachedir, verbose=10)
cached_pipe = Pipeline([('reduce_dim', PCA()),
('classify', LinearSVC())],
memory=memory)
# This time, a cached pipeline will be used within the grid search
grid = GridSearchCV(cached_pipe, cv=3, n_jobs=1, param_grid=param_grid)
digits = load_digits()
grid.fit(digits.data, digits.target)
# Delete the temporary cache before exiting
rmtree(cachedir)
"""
Explanation: Caching transformers within a Pipeline
It is sometimes worthwhile storing the state of a specific transformer
since it could be used again. Using a pipeline in GridSearchCV triggers
such situations. Therefore, we use the argument memory to enable caching.
.. warning::
Note that this example is, however, only an illustration since for this
specific case fitting PCA is not necessarily slower than loading the
cache. Hence, use the memory constructor parameter when the fitting
of a transformer is costly.
End of explanation
"""
plt.show()
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
print(pca.singular_values_)
"""
Explanation: The PCA fitting is only computed at the evaluation of the first
configuration of the C parameter of the LinearSVC classifier. The
other configurations of C will trigger the loading of the cached PCA
estimator data, leading to save processing time. Therefore, the use of
caching the pipeline using memory is highly beneficial when fitting
a transformer is costly.
End of explanation
"""
|
juanshishido/okcupid | main.ipynb | mit | import pickle
import warnings
from utils.hash import make
from utils.calculate_pmi_features import *
from utils.clean_up import *
from utils.categorize_demographics import *
from utils.reduce_dimensions import run_kmeans
from utils.nonnegative_matrix_factorization import nmf_inspect, nmf_labels
warnings.filterwarnings('ignore')
"""
Explanation: This notebook sets up the workflow for the various functions we have implemented. It shows an example of how we clustered using Nonnegative Matrix Factorization. We manually inspect the output of NMF to determine the best number of clusters for each group.
End of explanation
"""
df = get_data()
essay_list = ['essay0','essay4','essay5']
df_clean = clean_up(df, essay_list)
df_clean.fillna('', inplace=True)
df.columns.values
df_clean['religion'] = df_clean['religion'].apply(religion_categories)
df_clean['job'] = df_clean['job'].apply(job_categories)
df_clean['drugs'] = df_clean['drugs'].apply(drug_categories)
df_clean['diet'] = df_clean['diet'].apply(diet_categories)
df_clean['body_type'] = df_clean['body_type'].apply(body_categories)
df_clean['drinks'] = df_clean['drinks'].apply(drink_categories)
df_clean['sign'] = df_clean['sign'].apply(sign_categories)
df_clean['ethnicity'] = df_clean['ethnicity'].apply(ethnicity_categories)
df_clean['pets'] = df_clean['pets'].apply(pets_categories)
df_clean['speaks'] = df_clean['speaks'].apply(language_categories)
"""
Explanation: Getting the data, cleaning it, and categorizing demographic data
End of explanation
"""
df_male = df_clean[df_clean['sex'] == 'm']
df_female = df_clean[df_clean['sex'] == 'f']
count_matrix_m, tfidf_matrix_m, vocab_m = col_to_data_matrix(df_male, 'essay0') #save out
count_matrix_f, tfidf_matrix_f, vocab_f = col_to_data_matrix(df_female, 'essay0')
vocab_m
nmf_inspect(tfidf_matrix_m, vocab_m)
nmf_inspect(tfidf_matrix_f, vocab_f)
"""
Explanation: Splitting the dataframe by gender, running clustering separately on each.
End of explanation
"""
|
dfm/dfm.io | static/downloads/notebooks/pymc-tensorflow.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
"""
Explanation: Title: PyMC3 + TensorFlow
Date: 2018-08-02
Category: Data Analysis
Slug: pymc-tensorflow
Summary: the most ambitious crossover event in history
Math: true
End of explanation
"""
import numpy as np
import tensorflow as tf
session = tf.get_default_session()
if session is None:
session = tf.InteractiveSession()
import theano
import theano.tensor as tt
def _to_tensor_type(shape):
return tt.TensorType(dtype="float64", broadcastable=[False]*len(shape))
class TensorFlowOp(tt.Op):
"""A custom Theano Op uses TensorFlow as the computation engine
Args:
target (Tensor): The TensorFlow tensor defining the output of
this operation
parameters (list(Tensor)): A list of TensorFlow tensors that
are inputs to this operation
names (Optional(list)): A list of names for the parameters.
These are the names that will be used within PyMC3
feed_dict (Optional(dict)): A "feed_dict" that is provided to
the TensorFlow session when the operation is executed
session (Optional): A TensorFlow session that can be used to
evaluate the operation
"""
def __init__(self, target, parameters, names=None, feed_dict=None, session=None):
self.parameters = parameters
self.names = names
self._feed_dict = dict() if feed_dict is None else feed_dict
self._session = session
self.target = target
# Execute the operation once to work out the shapes of the
# parameters and the target
in_values, out_value = self.session.run(
[self.parameters, self.target], feed_dict=self._feed_dict)
self.shapes = [np.shape(v) for v in in_values]
self.output_shape = np.shape(out_value)
# Based on this result, work out the shapes that the Theano op
# will take in and return
self.itypes = tuple([_to_tensor_type(shape) for shape in self.shapes])
self.otypes = tuple([_to_tensor_type(self.output_shape)])
# Build another custom op to represent the gradient (see below)
self._grad_op = _TensorFlowGradOp(self)
@property
def session(self):
"""The TensorFlow session associated with this operation"""
if self._session is None:
self._session = tf.get_default_session()
return self._session
def get_feed_dict(self, sample):
"""Get the TensorFlow feed_dict for a given sample
This method will only work when a value for ``names`` was provided
during instantiation.
sample (dict): The specification of a specific sample in the chain
"""
if self.names is None:
raise RuntimeError("'names' must be set in order to get the feed_dict")
return dict(((param, sample[name])
for name, param in zip(self.names, self.parameters)),
**self._feed_dict)
def infer_shape(self, node, shapes):
"""A required method that returns the shape of the output"""
return self.output_shape,
def perform(self, node, inputs, outputs):
"""A required method that actually executes the operation"""
# To execute the operation using TensorFlow we must map the inputs from
# Theano to the TensorFlow parameter using a "feed_dict"
feed_dict = dict(zip(self.parameters, inputs), **self._feed_dict)
outputs[0][0] = np.array(self.session.run(self.target, feed_dict=feed_dict))
def grad(self, inputs, gradients):
"""A method that returns Theano op to compute the gradient
In this case, we use another custom op (see the definition below).
"""
op = self._grad_op(*(inputs + gradients))
# This hack seems to be required for ops with a single input
if not isinstance(op, (list, tuple)):
return [op]
return op
class _TensorFlowGradOp(tt.Op):
"""A custom Theano Op defining the gradient of a TensorFlowOp
Args:
base_op (TensorFlowOp): The original Op
"""
def __init__(self, base_op):
self.base_op = base_op
# Build the TensorFlow operation to apply the reverse mode
# autodiff for this operation
# The placeholder is used to include the gradient of the
# output as a seed
self.dy = tf.placeholder(tf.float64, base_op.output_shape)
self.grad_target = tf.gradients(base_op.target,
base_op.parameters,
grad_ys=self.dy)
# This operation will take the original inputs and the gradient
# seed as input
types = [_to_tensor_type(shape) for shape in base_op.shapes]
self.itypes = tuple(types + [_to_tensor_type(base_op.output_shape)])
self.otypes = tuple(types)
def infer_shape(self, node, shapes):
return self.base_op.shapes
def perform(self, node, inputs, outputs):
feed_dict = dict(zip(self.base_op.parameters, inputs[:-1]),
**self.base_op._feed_dict)
feed_dict[self.dy] = inputs[-1]
result = self.base_op.session.run(self.grad_target, feed_dict=feed_dict)
for i, r in enumerate(result):
outputs[i][0] = np.array(r)
"""
Explanation: In this tutorial, I will describe a hack that let's us use PyMC3 to sample a probability density defined using TensorFlow.
This isn't necessarily a Good Idea™, but I've found it useful for a few projects so I wanted to share the method.
To start, I'll try to motivate why I decided to attempt this mashup, and then I'll give a simple example to demonstrate how you might use this technique in your own work.
Why TensorFlow?
I recently started using TensorFlow as a framework for probabilistic modeling (and encouraging other astronomers to do the same) because the API seemed stable and it was relatively easy to extend the language with custom operations written in C++.
This second point is crucial in astronomy because we often want to fit realistic, physically motivated models to our data, and it can be inefficient to implement these algorithms within the confines of existing probabilistic programming languages.
To this end, I have been working on developing various custom operations within TensorFlow to implement scalable Gaussian processes and various special functions for fitting exoplanet data (Foreman-Mackey et al., in prep, ha!).
These experiments have yielded promising results, but my ultimate goal has always been to combine these models with Hamiltonian Monte Carlo sampling to perform posterior inference.
I don't know of any Python packages with the capabilities of projects like PyMC3 or Stan that support TensorFlow out of the box.
I know that Edward/TensorFlow probability has an HMC sampler, but it does not have a NUTS implementation, tuning heuristics, or any of the other niceties that the MCMC-first libraries provide.
Why HMC?
The benefit of HMC compared to some other MCMC methods (including one that I wrote) is that it is substantially more efficient (i.e. requires less computation time per independent sample) for models with large numbers of parameters.
To achieve this efficiency, the sampler uses the gradient of the log probability function with respect to the parameters to generate good proposals.
This means that it must be possible to compute the first derivative of your model with respect to the input parameters.
To do this in a user-friendly way, most popular inference libraries provide a modeling framework that users must use to implement their model and then the code can automatically compute these derivatives.
Why PyMC3?
As far as I can tell, there are two popular libraries for HMC inference in Python: PyMC3 and Stan (via the pystan interface).
I have previously blogged about extending Stan using custom C++ code and a forked version of pystan, but I haven't actually been able to use this method for my research because debugging any code more complicated than the one in that example ended up being far too tedious.
Furthermore, since I generally want to do my initial tests and make my plots in Python, I always ended up implementing two version of my model (one in Stan and one in Python) and it was frustrating to make sure that these always gave the same results.
PyMC3 is much more appealing to me because the models are actually Python objects so you can use the same implementation for sampling and pre/post-processing.
The catch with PyMC3 is that you must be able to evaluate your model within the Theano framework and I wasn't so keen to learn Theano when I had already invested a substantial amount of time into TensorFlow and since Theano has been deprecated as a general purpose modeling language.
What I really want is a sampling engine that does all the tuning like PyMC3/Stan, but without requiring the use of a specific modeling framework.
I imagine that this interface would accept two Python functions (one that evaluates the log probability, and one that evaluates its gradient) and then the user could choose whichever modeling stack they want.
That being said, my dream sampler doesn't exist (despite my weak attempt to start developing it) so I decided to see if I could hack PyMC3 to do what I wanted.
The TensorFlow + Theano mashup
To get started on implementing this, I reached out to Thomas Wiecki (one of the lead developers of PyMC3, who has written about similar MCMC mashups) for tips.
He came back with a few excellent suggestions, but the one that really stuck out was to "...write your logp/dlogp as a theano op that you then use in your (very simple) model definition".
The basic idea here is that, since PyMC3 models are implemented using Theano, it should be possible to write an extension to Theano that knows how to call TensorFlow.
Then, this extension could be integrated seamlessly into the model.
The two key pages of documentation are the Theano docs for writing custom operations (ops) and the PyMC3 docs for using these custom ops.
After starting on this project, I also discovered an issue on GitHub with a similar goal that ended up being very helpful.
Based on these docs, my complete implementation for a custom Theano op that calls TensorFlow is given below.
This implementation requires two theano.tensor.Op subclasses, one for the operation itself (TensorFlowOp) and one for the gradient operation (_TensorFlowGradOp).
Like Theano, TensorFlow has support for reverse-mode automatic differentiation, so we can use the tf.gradients function to provide the gradients for the op.
End of explanation
"""
from theano.tests import unittest_tools as utt
np.random.seed(42)
# Define the operation in TensorFlow
x = tf.Variable(np.random.randn(5), dtype=tf.float64)
sq = tf.square(x)
session.run(tf.global_variables_initializer())
# Define the Theano op
square_op = TensorFlowOp(sq, [x])
# Test that the gradient is correct
pt = session.run(square_op.parameters)
utt.verify_grad(square_op, pt)
"""
Explanation: We can test that our op works for some simple test cases.
For example, we can add a simple (read: silly) op that uses TensorFlow to perform an elementwise square of a vector.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
true_params = np.array([0.5, -2.3, -0.23])
N = 50
t = np.linspace(0, 10, 2)
x = np.random.uniform(0, 10, 50)
y = x * true_params[0] + true_params[1]
y_obs = y + np.exp(true_params[-1]) * np.random.randn(N)
plt.plot(x, y_obs, ".k", label="observations")
plt.plot(t, true_params[0]*t + true_params[1], label="truth")
plt.xlabel("x")
plt.ylabel("y")
plt.legend(fontsize=14);
"""
Explanation: This is obviously a silly example because Theano already has this functionality, but this can also be generalized to more complicated models.
This TensorFlowOp implementation will be sufficient for our purposes, but it has some limitations including:
By design, the output of the operation must be a single tensor. It shouldn't be too hard to generalize this to multiple outputs if you need to, but I haven't tried.
The input and output variables must have fixed dimensions. When the TensorFlowOp is initialized, the input and output tensors will be evaluated using the current TensorFlow session to work out the shapes.
etc., I'm sure.
An example
For this demonstration, we'll fit a very simple model that would actually be much easier to just fit using vanilla PyMC3, but it'll still be useful for demonstrating what we're trying to do.
We'll fit a line to data with the likelihood function:
$$
p(\{y_n\}\,|\,m,\,b,\,s) = \prod_{n=1}^N \frac{1}{\sqrt{2\,\pi\,s^2}}\,\exp\left(-\frac{(y_n-m\,x_n-b)^2}{2\,s^2}\right)
$$
where $m$, $b$, and $s$ are the parameters.
We'll choose uniform priors on $m$ and $b$, and a log-uniform prior for $s$.
To get started, generate some data:
End of explanation
"""
m_tensor = tf.Variable(0.0, dtype=tf.float64, name="m")
b_tensor = tf.Variable(0.0, dtype=tf.float64, name="b")
logs_tensor = tf.Variable(0.0, dtype=tf.float64, name="logs")
t_tensor = tf.constant(t, dtype=tf.float64)
x_tensor = tf.constant(x, dtype=tf.float64)
y_tensor = tf.constant(y_obs, dtype=tf.float64)
mean = m_tensor * x_tensor + b_tensor
pred = m_tensor * t_tensor + b_tensor
loglike = -0.5 * tf.reduce_sum(tf.square(y_tensor - mean)) * tf.exp(-2*logs_tensor)
loglike -= N * logs_tensor  # -N*log(s), since s = exp(logs)
session.run(tf.global_variables_initializer())
"""
Explanation: Next, define the log-likelihood function in TensorFlow:
End of explanation
"""
params = [m_tensor, b_tensor, logs_tensor]
opt = tf.contrib.opt.ScipyOptimizerInterface(-loglike, params)
opt.minimize(session)
"""
Explanation: And then we can fit for the maximum likelihood parameters using an optimizer from TensorFlow:
End of explanation
"""
plt.plot(x, y_obs, ".k", label="observations")
plt.plot(t, true_params[0]*t + true_params[1], label="truth")
plt.plot(t, pred.eval(), label="max.\ like.")
plt.xlabel("x")
plt.ylabel("y")
plt.legend(fontsize=14);
"""
Explanation: Here is the maximum likelihood solution compared to the data and the true relation:
End of explanation
"""
import pymc3 as pm
# First, expose the TensorFlow log likelihood implementation to Theano
# so that PyMC3 can use it
# NOTE: The "names" parameter refers to the names that will be used in
# in the PyMC3 model (see below)
tf_loglike = TensorFlowOp(loglike, [m_tensor, b_tensor, logs_tensor],
names=["m", "b", "logs"])
# Test the gradient
pt = session.run(tf_loglike.parameters)
utt.verify_grad(tf_loglike, pt)
# Set up the model as usual
with pm.Model() as model:
# Uniform priors on all the parameters
m = pm.Uniform("m", -5, 5)
b = pm.Uniform("b", -5, 5)
logs = pm.Uniform("logs", -5, 5)
# Define a custom "potential" to calculate the log likelihood
pm.Potential("loglike", tf_loglike(m, b, logs))
# NOTE: You *must* use "cores=1" because TensorFlow can't deal
# with being pickled!
trace = pm.sample(1000, tune=2000, cores=1, nuts_kwargs=dict(target_accept=0.9))
"""
Explanation: Finally, let's use PyMC3 to generate posterior samples for this model:
End of explanation
"""
pm.traceplot(trace);
"""
Explanation: After sampling, we can make the usual diagnostic plots.
First, the trace plots:
End of explanation
"""
# http://corner.readthedocs.io
import corner
samples = np.vstack([trace[k].flatten() for k in ["m", "b", "logs"]]).T
corner.corner(samples, labels=["m", "b", "log(s)"]);
"""
Explanation: Then the "corner" plot:
End of explanation
"""
plt.plot(x, y_obs, ".k", label="observations")
for j in np.random.randint(len(trace), size=25):
feed_dict = tf_loglike.get_feed_dict(trace[j])
plt.plot(t, pred.eval(feed_dict=feed_dict), color="C1", alpha=0.3)
plt.plot(t, true_params[0]*t + true_params[1], label="truth")
plt.plot([], [], color="C1", label="post.\ samples")
plt.xlabel("x")
plt.ylabel("y")
plt.legend(fontsize=14);
"""
Explanation: And finally the posterior predictions for the line:
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/guide/data_performance.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import time
"""
Explanation: Better performance with the tf.data API
<table class="tfo-notebook-buttons" align="left">
<td data-parent-segment-id="13650222" data-segment-approved="false"><a target="_blank" href="https://tensorflow.google.cn/guide/data_performance"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td data-parent-segment-id="13650223" data-segment-approved="false"><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/data_performance.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td data-parent-segment-id="13650224" data-segment-approved="false"><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/data_performance.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td data-parent-segment-id="13650225" data-segment-approved="false"><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/data_performance.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
GPUs and TPUs can radically reduce the time required to execute a single training step. Achieving peak performance requires an efficient input pipeline that delivers data for the next step before the current step has finished. The tf.data API helps to build flexible and efficient input pipelines. This document demonstrates how to use the tf.data API to build highly performant TensorFlow input pipelines.
Before you continue, read the "Build TensorFlow input pipelines" guide to learn how to use the tf.data API.
Resources
Build TensorFlow input pipelines
tf.data.Dataset API
Analyze tf.data performance with the TF Profiler
Setup
End of explanation
"""
class ArtificialDataset(tf.data.Dataset):
def _generator(num_samples):
# Opening the file
time.sleep(0.03)
for sample_idx in range(num_samples):
# Reading data (line, record) from the file
time.sleep(0.015)
yield (sample_idx,)
def __new__(cls, num_samples=3):
return tf.data.Dataset.from_generator(
cls._generator,
output_types=tf.dtypes.int64,
output_shapes=(1,),
args=(num_samples,)
)
"""
Explanation: Throughout this guide, you will iterate across a dataset and measure the performance. Making reproducible performance benchmarks can be difficult; different factors impact it:
The current CPU load
The network traffic
Complex mechanisms such as caching
Hence, to provide a reproducible benchmark, build an artificial example.
The dataset
Define a class inheriting from tf.data.Dataset called ArtificialDataset. This dataset:
generates num_samples samples (default is 3)
sleeps for some time before the first item to simulate opening a file
sleeps for some time before producing each item to simulate reading data from a file
End of explanation
"""
def benchmark(dataset, num_epochs=2):
start_time = time.perf_counter()
for epoch_num in range(num_epochs):
for sample in dataset:
# Performing a training step
time.sleep(0.01)
tf.print("Execution time:", time.perf_counter() - start_time)
"""
Explanation: This dataset is similar to tf.data.Dataset.range, adding a fixed delay at the beginning and in between each sample.
The training loop
Write a dummy training loop that measures how long it takes to iterate over a dataset. Training time is simulated.
End of explanation
"""
benchmark(ArtificialDataset())
"""
Explanation: Optimizing performance
To demonstrate how to optimize performance, you will improve the performance of the ArtificialDataset.
The naive approach
Start with a naive pipeline using no tricks, iterating over the dataset as-is.
End of explanation
"""
benchmark(
ArtificialDataset()
.prefetch(tf.data.experimental.AUTOTUNE)
)
"""
Explanation: Under the hood, this is how your execution time was spent:
You can see that performing a training step involves:
opening a file (if it has not been opened yet)
fetching a data entry from the file
using the data for training
However, in a naive synchronous implementation like here, while your pipeline is fetching the data, your model is sitting idle. Conversely, while your model is training, the input pipeline is sitting idle. The training step time is thus the sum of the opening, reading and training times.
The next sections build on this input pipeline, illustrating best practices for designing performant TensorFlow input pipelines.
Prefetching
Prefetching overlaps the preprocessing and model execution of a training step. While the model is executing training step s, the input pipeline is reading the data for step s+1. Doing so minimizes the single-step time of the training (as opposed to the sum) and the time it takes to extract the data.
The tf.data API provides the tf.data.Dataset.prefetch transformation. It can be used to decouple the time when data is produced from the time when data is consumed. In particular, the transformation uses a background thread and an internal buffer to prefetch elements from the input dataset ahead of the time they are requested. The number of elements to prefetch should be equal to (or possibly greater than) the number of batches consumed by a single training step. You could either manually tune this value, or set it to tf.data.experimental.AUTOTUNE, which will prompt the tf.data runtime to tune the value dynamically at runtime.
Note that the prefetch transformation provides benefits any time there is an opportunity to overlap the work of a "producer" with the work of a "consumer".
End of explanation
"""
benchmark(
tf.data.Dataset.range(2)
.interleave(ArtificialDataset)
)
"""
Explanation: This time you can see that while the training step is running for sample 0, the input pipeline is reading the data for sample 1, and so on.
Parallelizing data extraction
In a real-world setting, the input data may be stored remotely (for example, GCS or HDFS). A dataset pipeline that works well when reading data locally might become bottlenecked on I/O when reading data remotely, because of the following differences between local and remote storage:
Time-to-first-byte: Reading the first byte of a file from remote storage can take orders of magnitude longer than from local storage.
Read throughput: While remote storage typically offers large aggregate bandwidth, reading a single file might only be able to utilize a small fraction of this bandwidth.
In addition, once the raw bytes are loaded into memory, it may also be necessary to deserialize and/or decrypt the data (e.g. protobuf), which requires additional computation. This overhead is present irrespective of whether the data is stored locally or remotely, but can be worse in the remote case if the data is not prefetched effectively.
To mitigate the impact of the various data extraction overheads, the tf.data.Dataset.interleave transformation can be used to parallelize the data loading step, interleaving the contents of other datasets (such as data file readers). The number of datasets to overlap can be specified by the cycle_length argument, while the level of parallelism can be specified by the num_parallel_calls argument. Similar to the prefetch transformation, the interleave transformation also supports tf.data.experimental.AUTOTUNE, which will delegate the decision about what level of parallelism to use to the tf.data runtime.
Sequential interleave
The default arguments of the tf.data.Dataset.interleave transformation make it interleave single samples from two datasets sequentially.
End of explanation
"""
benchmark(
tf.data.Dataset.range(2)
.interleave(
ArtificialDataset,
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
)
"""
Explanation: This plot exhibits the behavior of the interleave transformation, fetching samples alternatively from the two datasets available. However, no performance improvement is involved here.
Parallel interleave
Now, use the num_parallel_calls argument of the interleave transformation. This loads multiple datasets in parallel, reducing the time waiting for the files to be opened.
End of explanation
"""
def mapped_function(s):
# Do some hard pre-processing
tf.py_function(lambda: time.sleep(0.03), [], ())
return s
"""
Explanation: This time, the reading of the two datasets is parallelized, reducing the global data processing time.
Parallelizing data transformation
When preparing data, input elements may need to be pre-processed. To this end, the tf.data API offers the tf.data.Dataset.map transformation, which applies a user-defined function to each element of the input dataset. Because input elements are independent of one another, the pre-processing can be parallelized across multiple CPU cores. To make this possible, similarly to the prefetch and interleave transformations, the map transformation provides the num_parallel_calls argument to specify the level of parallelism.
Choosing the best value for the num_parallel_calls argument depends on your hardware, the characteristics of your training data (such as its size and shape), the cost of your map function, and what other processing is happening on the CPU at the same time. A simple heuristic is to use the number of available CPU cores. However, as for the prefetch and interleave transformations, the map transformation supports tf.data.experimental.AUTOTUNE, which will delegate the decision about what level of parallelism to use to the tf.data runtime.
End of explanation
"""
benchmark(
ArtificialDataset()
.map(mapped_function)
)
"""
Explanation: Sequential mapping
Start by using the map transformation without parallelism as a baseline example.
End of explanation
"""
benchmark(
ArtificialDataset()
.map(
mapped_function,
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
)
"""
Explanation: As for the naive approach, here the times spent for the opening, reading, pre-processing (mapping) and training steps sum together for a single iteration.
Parallel mapping
Now, use the same pre-processing function but apply it in parallel on multiple samples.
End of explanation
"""
benchmark(
ArtificialDataset()
.map( # Apply time consuming operations before cache
mapped_function
).cache(
),
5
)
"""
Explanation: Now, you can see on the plot that the pre-processing steps overlap, reducing the overall time for a single iteration.
Caching
The tf.data.Dataset.cache transformation can cache a dataset, either in memory or on local storage. This will save some operations (like file opening and data reading) from being executed during each epoch.
End of explanation
"""
fast_dataset = tf.data.Dataset.range(10000)
def fast_benchmark(dataset, num_epochs=2):
start_time = time.perf_counter()
for _ in tf.data.Dataset.range(num_epochs):
for _ in dataset:
pass
tf.print("Execution time:", time.perf_counter() - start_time)
def increment(x):
return x+1
"""
Explanation: When you cache a dataset, the transformations before the cache one (like the file opening and data reading) are executed only during the first epoch. The next epochs will reuse the data cached by the cache transformation.
If the user-defined function passed into the map transformation is expensive, apply the cache transformation after the map transformation, as long as the resulting dataset can still fit into memory or local storage. If the user-defined function increases the space required to store the dataset beyond the cache capacity, either apply it after the cache transformation or consider pre-processing your data before your training job to reduce resource usage.
Vectorizing mapping
Invoking a user-defined function passed into the map transformation has overhead related to scheduling and executing that function. We recommend vectorizing the user-defined function (that is, having it operate over a batch of inputs at once) and applying the batch transformation before the map transformation.
To illustrate this good practice, your artificial dataset is not suitable. The scheduling delay is around 10 microseconds (10e-6 seconds), far less than the tens of milliseconds used in the ArtificialDataset, and thus its impact is hard to see.
For this example, use the base tf.data.Dataset.range function and simplify the training loop to its simplest form.
End of explanation
"""
fast_benchmark(
fast_dataset
# Apply function one item at a time
.map(increment)
# Batch
.batch(256)
)
"""
Explanation: Scalar mapping
End of explanation
"""
fast_benchmark(
fast_dataset
.batch(256)
# Apply function on a batch of items
# The tf.Tensor.__add__ method already handle batches
.map(increment)
)
"""
Explanation: The plot above illustrates what is going on (with fewer samples). You can see that the mapped function is applied to each sample. While this function is fast, it has some overhead that impacts the time performance.
Vectorized mapping
End of explanation
"""
import itertools
from collections import defaultdict
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
"""
Explanation: This time, the mapped function is called once and applies to a batch of samples. While the function could take more time to execute, the overhead appears only once, improving the overall time performance.
Reducing memory footprint
A number of transformations, including interleave, prefetch, and shuffle, maintain an internal buffer of elements. If the user-defined function passed into the map transformation changes the size of the elements, then the ordering of the map transformation and the transformations that buffer elements affects memory usage. In general, we recommend choosing the order that results in a lower memory footprint, unless a different ordering is desirable for performance.
Caching partial computations
It is recommended to cache the dataset after the map transformation, except if this transformation makes the data too big to fit in memory. A trade-off can be achieved if your mapped function can be split into two parts: a time-consuming one and a memory-consuming one. In this case, you can chain your transformations like below:
python
dataset.map(time_consuming_mapping).cache().map(memory_consuming_mapping)
This way, the time-consuming part is only executed during the first epoch, and you avoid using too much cache space.
Best practice summary
Here is a summary of the best practices for designing performant TensorFlow input pipelines:
Use the prefetch transformation to overlap the work of a producer and consumer.
Parallelize the data reading transformation using the interleave transformation.
Parallelize the map transformation by setting the num_parallel_calls argument.
Use the cache transformation to cache data in memory during the first epoch.
Vectorize user-defined functions passed in to the map transformation.
Reduce memory usage when applying the interleave, prefetch, and shuffle transformations.
Reproducing the figures
Note: The rest of this notebook is about how to reproduce the figures above. Feel free to play around with this code, but understanding it is not an essential part of this tutorial.
To go deeper in the tf.data.Dataset API understanding, you can play with your own pipelines. Below is the code used to plot the images from this guide. It can be a good starting point, showing some of the workarounds for common difficulties such as:
Execution time reproducibility
Mapped functions eager execution
interleave transformation callable
End of explanation
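As a sketch (not part of the original guide), the best practices above can be combined into one chain on a toy dataset; `file_reader` here is a hypothetical stand-in for a per-file reader dataset:

```python
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE

def file_reader(_):
    # Hypothetical stand-in for a dataset that reads one data file.
    return tf.data.Dataset.range(3)

dataset = (
    tf.data.Dataset.range(2)
    .interleave(file_reader, num_parallel_calls=AUTOTUNE)  # parallelize data reading
    .batch(2)                                              # batch before map, so the map is vectorized
    .map(lambda x: x + 1, num_parallel_calls=AUTOTUNE)     # parallelize the map transformation
    .cache()                                               # cache data during the first epoch
    .prefetch(AUTOTUNE)                                    # overlap producer and consumer work
)

values = sorted(int(v) for batch in dataset for v in batch)
print(values)
```

With real pipelines the reader would open files and the map would do the expensive pre-processing; the ordering of the stages is what carries over.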
"""
class TimeMeasuredDataset(tf.data.Dataset):
# OUTPUT: (steps, timings, counters)
OUTPUT_TYPES = (tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32)
OUTPUT_SHAPES = ((2, 1), (2, 2), (2, 3))
_INSTANCES_COUNTER = itertools.count() # Number of datasets generated
_EPOCHS_COUNTER = defaultdict(itertools.count) # Number of epochs done for each dataset
def _generator(instance_idx, num_samples):
epoch_idx = next(TimeMeasuredDataset._EPOCHS_COUNTER[instance_idx])
# Opening the file
open_enter = time.perf_counter()
time.sleep(0.03)
open_elapsed = time.perf_counter() - open_enter
for sample_idx in range(num_samples):
# Reading data (line, record) from the file
read_enter = time.perf_counter()
time.sleep(0.015)
read_elapsed = time.perf_counter() - read_enter
yield (
[("Open",), ("Read",)],
[(open_enter, open_elapsed), (read_enter, read_elapsed)],
[(instance_idx, epoch_idx, -1), (instance_idx, epoch_idx, sample_idx)]
)
open_enter, open_elapsed = -1., -1. # Negative values will be filtered
def __new__(cls, num_samples=3):
return tf.data.Dataset.from_generator(
cls._generator,
output_types=cls.OUTPUT_TYPES,
output_shapes=cls.OUTPUT_SHAPES,
args=(next(cls._INSTANCES_COUNTER), num_samples)
)
"""
Explanation: 数据集
与 ArtificialDataset 类似,您可以构建一个返回每步用时的数据集。
End of explanation
"""
def timelined_benchmark(dataset, num_epochs=2):
# Initialize accumulators
steps_acc = tf.zeros([0, 1], dtype=tf.dtypes.string)
times_acc = tf.zeros([0, 2], dtype=tf.dtypes.float32)
values_acc = tf.zeros([0, 3], dtype=tf.dtypes.int32)
start_time = time.perf_counter()
for epoch_num in range(num_epochs):
epoch_enter = time.perf_counter()
for (steps, times, values) in dataset:
# Record dataset preparation informations
steps_acc = tf.concat((steps_acc, steps), axis=0)
times_acc = tf.concat((times_acc, times), axis=0)
values_acc = tf.concat((values_acc, values), axis=0)
# Simulate training time
train_enter = time.perf_counter()
time.sleep(0.01)
train_elapsed = time.perf_counter() - train_enter
# Record training informations
steps_acc = tf.concat((steps_acc, [["Train"]]), axis=0)
times_acc = tf.concat((times_acc, [(train_enter, train_elapsed)]), axis=0)
values_acc = tf.concat((values_acc, [values[-1]]), axis=0)
epoch_elapsed = time.perf_counter() - epoch_enter
# Record epoch informations
steps_acc = tf.concat((steps_acc, [["Epoch"]]), axis=0)
times_acc = tf.concat((times_acc, [(epoch_enter, epoch_elapsed)]), axis=0)
values_acc = tf.concat((values_acc, [[-1, epoch_num, -1]]), axis=0)
time.sleep(0.001)
tf.print("Execution time:", time.perf_counter() - start_time)
return {"steps": steps_acc, "times": times_acc, "values": values_acc}
"""
Explanation: 此数据集会提供形状为 [[2, 1], [2, 2], [2, 3]] 且类型为 [tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32] 的样本。每个样本为:
( [("Open"), ("Read")], [(t0, d), (t0, d)], [(i, e, -1), (i, e, s)] )
其中:
Open 和 Read 是步骤标识符
t0 是相应步骤开始时的时间戳
d 是在相应步骤中花费的时间
i 是实例索引
e 是周期索引(数据集被迭代的次数)
s 是样本索引
迭代循环
使迭代循环稍微复杂一点,以汇总所有计时。这仅适用于生成上述样本的数据集。
End of explanation
"""
def draw_timeline(timeline, title, width=0.5, annotate=False, save=False):
# Remove invalid entries (negative times, or empty steps) from the timelines
invalid_mask = np.logical_and(timeline['times'] > 0, timeline['steps'] != b'')[:,0]
steps = timeline['steps'][invalid_mask].numpy()
times = timeline['times'][invalid_mask].numpy()
values = timeline['values'][invalid_mask].numpy()
# Get a set of different steps, ordered by the first time they are encountered
step_ids, indices = np.stack(np.unique(steps, return_index=True))
step_ids = step_ids[np.argsort(indices)]
# Shift the starting time to 0 and compute the maximal time value
min_time = times[:,0].min()
times[:,0] = (times[:,0] - min_time)
end = max(width, (times[:,0]+times[:,1]).max() + 0.01)
cmap = mpl.cm.get_cmap("plasma")
plt.close()
fig, axs = plt.subplots(len(step_ids), sharex=True, gridspec_kw={'hspace': 0})
fig.suptitle(title)
fig.set_size_inches(17.0, len(step_ids))
plt.xlim(-0.01, end)
for i, step in enumerate(step_ids):
step_name = step.decode()
ax = axs[i]
ax.set_ylabel(step_name)
ax.set_ylim(0, 1)
ax.set_yticks([])
ax.set_xlabel("time (s)")
ax.set_xticklabels([])
ax.grid(which="both", axis="x", color="k", linestyle=":")
# Get timings and annotation for the given step
entries_mask = np.squeeze(steps==step)
serie = np.unique(times[entries_mask], axis=0)
annotations = values[entries_mask]
ax.broken_barh(serie, (0, 1), color=cmap(i / len(step_ids)), linewidth=1, alpha=0.66)
if annotate:
for j, (start, width) in enumerate(serie):
annotation = "\n".join([f"{l}: {v}" for l,v in zip(("i", "e", "s"), annotations[j])])
ax.text(start + 0.001 + (0.001 * (j % 2)), 0.55 - (0.1 * (j % 2)), annotation,
horizontalalignment='left', verticalalignment='center')
if save:
plt.savefig(title.lower().translate(str.maketrans(" ", "_")) + ".svg")
"""
Explanation: 绘图方法
最后,定义一个函数,根据 timelined_benchmark 函数返回的值绘制时间线。
End of explanation
"""
def map_decorator(func):
def wrapper(steps, times, values):
# Use a tf.py_function to prevent auto-graph from compiling the method
return tf.py_function(
func,
inp=(steps, times, values),
Tout=(steps.dtype, times.dtype, values.dtype)
)
return wrapper
"""
Explanation: 对映射函数使用封装容器
要在 Eager 上下文中运行映射函数,必须将其封装在 tf.py_function 调用中。
End of explanation
"""
_batch_map_num_items = 50
def dataset_generator_fun(*args):
return TimeMeasuredDataset(num_samples=_batch_map_num_items)
"""
Explanation: 流水线对比
End of explanation
"""
@map_decorator
def naive_map(steps, times, values):
map_enter = time.perf_counter()
time.sleep(0.001) # Time consuming step
time.sleep(0.0001) # Memory consuming step
map_elapsed = time.perf_counter() - map_enter
return (
tf.concat((steps, [["Map"]]), axis=0),
tf.concat((times, [[map_enter, map_elapsed]]), axis=0),
tf.concat((values, [values[-1]]), axis=0)
)
naive_timeline = timelined_benchmark(
tf.data.Dataset.range(2)
.flat_map(dataset_generator_fun)
.map(naive_map)
.batch(_batch_map_num_items, drop_remainder=True)
.unbatch(),
5
)
"""
Explanation: 朴素流水线
End of explanation
"""
@map_decorator
def time_consuming_map(steps, times, values):
map_enter = time.perf_counter()
time.sleep(0.001 * values.shape[0]) # Time consuming step
map_elapsed = time.perf_counter() - map_enter
return (
tf.concat((steps, tf.tile([[["1st map"]]], [steps.shape[0], 1, 1])), axis=1),
tf.concat((times, tf.tile([[[map_enter, map_elapsed]]], [times.shape[0], 1, 1])), axis=1),
tf.concat((values, tf.tile([[values[:][-1][0]]], [values.shape[0], 1, 1])), axis=1)
)
@map_decorator
def memory_consuming_map(steps, times, values):
map_enter = time.perf_counter()
time.sleep(0.0001 * values.shape[0]) # Memory consuming step
map_elapsed = time.perf_counter() - map_enter
# Use tf.tile to handle batch dimension
return (
tf.concat((steps, tf.tile([[["2nd map"]]], [steps.shape[0], 1, 1])), axis=1),
tf.concat((times, tf.tile([[[map_enter, map_elapsed]]], [times.shape[0], 1, 1])), axis=1),
tf.concat((values, tf.tile([[values[:][-1][0]]], [values.shape[0], 1, 1])), axis=1)
)
optimized_timeline = timelined_benchmark(
tf.data.Dataset.range(2)
.interleave( # Parallelize data reading
dataset_generator_fun,
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
.batch( # Vectorize your mapped function
_batch_map_num_items,
drop_remainder=True)
.map( # Parallelize map transformation
time_consuming_map,
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
.cache() # Cache data
.map( # Reduce memory usage
memory_consuming_map,
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
.prefetch( # Overlap producer and consumer works
tf.data.experimental.AUTOTUNE
)
.unbatch(),
5
)
draw_timeline(naive_timeline, "Naive", 15)
draw_timeline(optimized_timeline, "Optimized", 15)
"""
Explanation: 优化后的流水线
End of explanation
"""
|
Olsthoorn/IHE-python-course-2017 | exercises/Mar07/dealingWithStrings.ipynb | gpl-2.0 | from pprint import pprint
s1 = 'This is a string'
s2 ="This too is a ; the `quotes` don't matter as long as your are consequent you can use quotes inside quotes"
s3 = """This is a multiline
string, mostly used for doc
strings in fucntions and classes
"""
print(s1)
print(s2)
print()
print(s3)
"""
Explanation: <figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
Strings, files combined with lists, dicts and sets with inspection
T.N.Olsthoorn, Feb 27, 2017
Last week we focused on the essential tools in Python, i.e. tuples, lists, dicts and sets. Strings are another such tool, but strings become especially effective when combined with reading files and with lists, dicts and sets. That's why we postponed this subject until this week, giving us more room to work with.
We've already mentioned that strings are immutable sequences of characters. So we can't replace individual characters or change them from uppercase to lowercase, nor can we append characters, but we can replace an old string with a new one, applying the required changes along the way. So immutability is never a big issue; on the contrary, immutability allows using strings as keys in dicts, which is a game-changing advantage.
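To make the immutability point concrete, here is a small illustration (the variable s below is just for this example): changing a character in place raises a TypeError, while building a modified copy works fine.

```python
s = "python"

# In-place modification of a string is not allowed:
try:
    s[0] = "P"
except TypeError as err:
    print("TypeError:", err)

# Instead, build a new string from pieces of the old one:
s = "P" + s[1:]
print(s)  # Python
```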
Let's see some strings:
End of explanation
"""
print("This prints with tabs \t\t and newlines\n\n")
print("A windows directory: C\\users\\system\\python\\IHEcourse")
"""
Explanation: The escape character \\ (backslash) is used to interpret special characters that otherwise cannot be typed, like
the newline \\n and the tab \\t. There are many more, but these two are the most important.
To prevent the backslash from being interpreted as the prelude of a special character, use a double backslash.
This is often necessary when typing or using strings that represent directories in Windows.
For example
C:\users\system\python\IHEcourse
"""
print(r"This prints with tabs \t\t and newlines\n\n")
print(r"A windows directory: C\\users\\system\\python\\IHEcourse")
"""
Explanation: If you don't want the \\ to be interpreted by Python, you should use "raw" strings, which you get by putting the lower case letter r in front of the string.
End of explanation
"""
"This is " + 'a `so-called` concatenated' + ' ' + "string" + (', ha') *3 + '!'
"""
Explanation: String addition with + is concatenation and multiplication with * means repeated concatenation:
End of explanation
"""
s = "This is " + 'a `so-called` concatenated' + ' ' + "string" + (', ha') *3 + '!'
print(s)
"""
Explanation: Or bind the string to a variable, like s, and print it:
End of explanation
"""
myList = [3.0, 2.279345, 1.9823, -3.4, 1e3]
(" {}," * len(myList)).format(*myList)
"""
Explanation: Strings can contain replacement fields { } and can then be formatted with the method format(...).
The values in myList are placed in the replacement fields { }. But before doing this, a string is built like so
" {}," * len(myList)
to get a string with the required number of such fields.
End of explanation
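A related idiom, also used further down in this notebook, is str.join, which avoids building a template string and leaves no trailing separator (the list below just repeats the one above):

```python
my_list = [3.0, 2.279345, 1.9823, -3.4, 1e3]

# join() puts the separator between the items, so there is no trailing comma:
print(", ".join("{}".format(v) for v in my_list))

# The same idea with a numeric format specification:
print(", ".join("{:.2f}".format(v) for v in my_list))
```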
"""
from math import pi, e
print("Just using the replacement fields:\n{0}, {1}, {1}, {1}, {2}, {2}, {2}\n".format(2, pi, e))
print("The number in them indixes the number of the parameters in the format list.\n")
print("Just using the replacement fields, with a different order of the printed variables:\n\
{2}, {0}, {2}, {1}, {0}, {2}, {1}\n".format(2, pi, e))
print("You don't need the variable number specifier if you use the order of the variables in the format")
print("{}, {}, {}, {}\n".format(pi, e, 314, pi/e))
print("Using d, f, e, and g format specifiers:\n{0:d}, {1:.2f}, {1:.4f}, {1:.2e}, {2:.5e}, {2:.3g}, {2:.5g}\n".format(2, pi, e))
print("Using d, f, e, and g format specifiers with field width:\n\
{0:5d}, {1:8.2f}, {1:8.4f}, {1:10.2e}, {2:10.5e}, {2:10.3g}, {2:10.5g}\n".format(2, pi, e))
print('d format is integer (whole number), with field width specified\n\
{0:d}, {0:4d}, {0:10d}\n'.format(314))
print('f format is floating point with field width and decimals specified\n\
{0:10.0f}, {0:10.2f}, {0:10.6f}\n'.format(pi))
print('e format is floating scientific form with field width and decimals specified\n\
{0:10.0e}, {0:10.2e}, {0:10.6e}\n'.format(pi))
print('g format is floating general form with field width and significant digits specified\n\
{0:10.0g}, {0:10.2g}, {0:10.6g}\n'.format(pi))
print('You can combine alingment within the specified field width\n\
{0:>10.0g}, {0:<10.2g}, {0:<10.6g}\n'.format(pi))
print('Pad integers with leading zeros:\n\
{0:4d}, {0:04d}, {0:10d}, {0:010d}\n'.format(314))
print('You don\'t even need the `d` when printing integers:\n\
{0:4}, {0:04}, {0:10}, {0:010}\n'.format(314))
print('The most general replacement is with strings, using s-format:\n\
{0:s}, {0:10s}, {0:<10s}, {0:>10s}\n\
you may also here drop the letter s of the format:\n\
{0}, {0:10}, {0:<10}, {0:>10}'.format('Hello!'))
"""
Explanation: Notice that format() is a method of the string class. It's intensively used in print statements to put values into the strings to be printed. There are many, many options to manage the way values are shown after they have been placed in these fields. It's said that format() has a mini-language. You'll get acquainted with it, but it is very useful to read the documentation, or at least to be aware of it.
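One corner of that mini-language that is easy to miss is that the field width and precision can themselves be replacement fields, so they can be supplied at run time; the names width and prec below are just for illustration.

```python
from math import pi

# Fixed spec: fill character, alignment, width and precision are hard-coded
print("{:*>12.3f}".format(pi))   # '*******3.142'

# Nested spec: width and precision are themselves filled in by format()
width, prec = 12, 3
print("{:{width}.{prec}f}".format(pi, width=width, prec=prec))
```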
Some examples for using the replacement fields:
End of explanation
"""
s1 = "ok, according to {}, this is '" + s[8:20] + s[34:41] + "'?"
print(s1.format('you'))
"""
Explanation: Just one more compound example of using +, * and replacement.
First construct the string:
String indexing and slicing
We can index and slice strings to get parts of it:
End of explanation
"""
print(s[::-1])
"""
Explanation: Use slicing with a negative step size to get a reversed copy of the string
End of explanation
"""
s1.upper?
# dir(s1) to see all the attributes of the s1 (in fact of the class str)
[k for k in dir(s1) if not k.startswith('_')] # use this comprehension to see only the public ones
"""
Explanation: String methods
The string class has a number of useful and important methods associated with it, which can be inspected in the notebook by typing a dot immediately after the string and pressing the <tab> key. You can then scroll up and down through the list of available attributes and press return to accept one, or a question mark to see its doc string.
End of explanation
"""
s = "This is a string with Upper and lower case characters"
print("=" * 80)
print("Below the results of applying about all methods of str:\n")
print(s.capitalize()) # make first character uppercase and the rest lower case
print(s.casefold()) # returns s suitable for caseless comparisons
print(s.center(70, '#')) # Return S centered in a string of length width, padded with the fill character
print('How often does the letter `a` occur in s? ', s.count('a')) # number of times given character is found in s
print('Does s end with `ters` ? ', s.endswith('ters'))
print("Does s start with 'This is a` ?", s.startswith('This is a'))
print("\tThis\tis\ta\tstring\twith\ttabs\tinstead\tof\ta\tspace.\t\t") # Return a copy of S where all tab characters are expanded using spaces
print('The word `Upper` is found at position {}'.format(s.find('Upper'))) # Return the lowest index in S where substring sub is found
print('The word `Upper` is found at position {}'.format(s.index('Upper', 10, -1))) # Like S.find() but raise ValueError when the substring is not found
print(s.split(' ')) # splits a at specified character, space in this case. Yield list of string.
print('^'.join(s.split(' '))) # joins list of string putting the specifed character between the words.
print(s.lower()) # lower case copy of s
print(s.upper()) # upper ase copy of s
print(s.replace('characters', 'alphanumeric symbols'))
print(s.title(), ' (All words now capitalized)') # returns string with all words capitalized
print(s.title().swapcase(), ' (Case first titled and then swapped)') # title-case s, then swap the case of every character
s2 = " \t string with\twhitespace\t "
print("'" + s2 + "'", "(String with whitespace, (with tabs and spaces))")
print("'" + s2.strip() + "'", "(Left and right whitespace removed)") # whitespace removed
print("'" + s2.lstrip() + "'", '(Left whitespace removed)') # left whitespace removed
print("'" + s2.rstrip() + "'", "(Right whitespace removed)") # right withspace removed
print("'" + s2.ljust(40, '=') + "'", "(str is left justified and padded with '=')")
print("'" + s2.rjust(40, '+') + "'", "(str is right justified and padded with '+')")
s2 = " 'a a a a a"
print('First index of `{}` in `{}` is {}'.format('a', s2, s2.find('a')))
print('Last index of `{}` in `{}` is {}'.format('a', s2, s2.rfind('a')))
s3 = 'This/is/the/day and That\\was\\yesterday'
print(s3.partition('/')) # Search for the separator sep in S, and return the part before it, the separator itself, and the part after it
print('First part is `{}`, separator is `{}` and last part is `{}`'.format(*s3.partition('/')))
print(s3.rpartition('\\')) # Like partition, but searching from the end of S
print('First part is `{}`, separator is `{}` and last part is `{}`'.format(*s3.rpartition('\\')))
# Format_map is like format but can use values from a dict if the keys are used in the replacement fields
print()
pprint("Using format_map, replace keys in {} by values from dict:\n")
horse={'name' : 'duke',
'age' : 2,
'color': 'brown',
'likes' : 'hay'}
print()
pprint(horse, width=40)
print()
print('My {color} horse named {name} is {age} years old and especially likes {likes} on Sundays'.format_map(horse))
print()
print("This is about all on the methods of str.")
print("=" * 80)
"""
Explanation: Some examples:
End of explanation
"""
|
kubeflow/kfserving-lts | docs/samples/client/kfserving_sdk_v1beta1_sample.ipynb | apache-2.0 | from kubernetes import client
from kfserving import KFServingClient
from kfserving import constants
from kfserving import utils
from kfserving import V1beta1InferenceService
from kfserving import V1beta1InferenceServiceSpec
from kfserving import V1beta1PredictorSpec
from kfserving import V1beta1TFServingSpec
"""
Explanation: Sample for KFServing SDK v1beta1
This is a sample for KFServing SDK v1beta1.
The notebook shows how to use KFServing SDK to create, get and delete InferenceService.
End of explanation
"""
#namespace = utils.get_default_target_namespace()
namespace = 'kfserving-test'
"""
Explanation: Define the namespace where the InferenceService needs to be deployed. If not specified, the function below sets the namespace to the current one in which the SDK is running in the cluster; otherwise it will deploy to the default namespace.
End of explanation
"""
kfserving_version = 'v1beta1'  # API version used by this sample
api_version = constants.KFSERVING_GROUP + '/' + kfserving_version
isvc = V1beta1InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='flower-sample', namespace=namespace),
spec=V1beta1InferenceServiceSpec(
predictor=V1beta1PredictorSpec(
tensorflow=(V1beta1TFServingSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers'))))
)
"""
Explanation: Define InferenceService
First define the default endpoint spec, and then define the InferenceService based on the endpoint spec.
End of explanation
"""
KFServing = KFServingClient()
KFServing.create(isvc)
"""
Explanation: Create InferenceService
Call KFServingClient to create InferenceService.
End of explanation
"""
KFServing.get('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)
"""
Explanation: Check the InferenceService
End of explanation
"""
isvc = V1beta1InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='flower-sample', namespace=namespace),
spec=V1beta1InferenceServiceSpec(
predictor=V1beta1PredictorSpec(
canary_traffic_percent=20,
tensorflow=(V1beta1TFServingSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers-2'))))
)
KFServing.patch('flower-sample', isvc, namespace=namespace)
"""
Explanation: Patch the InferenceService and define Canary Traffic Percent
End of explanation
"""
KFServing.wait_isvc_ready('flower-sample', namespace=namespace)
KFServing.get('flower-sample', namespace=namespace, watch=True)
"""
Explanation: Check the InferenceService after Patching
End of explanation
"""
KFServing.delete('flower-sample', namespace=namespace)
"""
Explanation: Delete the InferenceService
End of explanation
"""
|
robblack007/clase-metodos-numericos | Practicas/P4/Practica 4 - Sistemas de ecuaciones lineales II.ipynb | mit | from numpy import matrix
A = matrix([[72, 0, 0, 9, 0, 0],
[ 0, 2.88, 0, 0, 0, -4.5],
[ 0, 0, 18, 9, 0, 0],
[ 9, 0, 9, 12, 0, 0],
[ 0, 0, 0, 0, 33, 0],
[ 0, -4.5, 0, 0, 0, 33]])
b = matrix([[2],
[0.5],
[1],
[0],
[1.2],
[5]])
"""
Explanation: Jacobi method
This time we will explore an implementation of the Jacobi method for approximating the solution of systems of linear equations.
Let's start by declaring a system of linear equations in its matrix form:
End of explanation
"""
A.I*b
"""
Explanation: With which, trivially, we can see its solution by means of the simplest implementation:
End of explanation
"""
A.shape
"""
Explanation: However, we want to implement a different method, so we will start by obtaining information about the system. For example, if we want to know the dimension of the matrix $A$, and consequently the dimension of the system, we have:
End of explanation
"""
A.shape[0]
"""
Explanation: We already know that this matrix is square, so we can obtain the first element simply with:
End of explanation
"""
from numpy import zeros, copy
x0 = matrix(zeros(A.shape[0])).T
x0
"""
Explanation: Recalling the Jacobi method, we have to start with an approximation of $0$ in all the variables, so we define an array of zeros with the zeros function and convert it into an object of type matrix:
End of explanation
"""
A.diagonal()
divisores = copy(A.diagonal().T)
divisores
"""
Explanation: The next thing I will need is an array with all the elements of the main diagonal, in order to divide each of the approximation formulas by these elements:
End of explanation
"""
from numpy import fill_diagonal
fill_diagonal(A, 0)
A
-A*x0
"""
Explanation: Once I obtain the diagonal, I will replace all of its elements with $0$, so that when I multiply each row of this matrix by the value of the approximation of $x$, $x_0$, I will obtain most of the elements of each of the formulas:
$$
0 x_1 + a_{12}x_2 + a_{13}x_3+\ldots+a_{1n}x_n
$$
End of explanation
"""
from numpy import divide
divide(-A*x0, divisores)
"""
Explanation: Given that our first approximation is all zeros, this vector is exactly right. Now we have to divide each of these elements by the diagonal vector:
$$
\frac{0 x_1 + a_{12}x_2 + a_{13}x_3+\ldots+a_{1n}x_n}{a_{11}}
$$
End of explanation
"""
x1 = divide(b, divisores) - divide(A*x0, divisores)
x1
"""
Explanation: Finally, we have to subtract each of these terms from the division of the vector $b$ by the diagonal, so that we have:
$$
\frac{b}{a_{11}} -\frac{0 x_1 + a_{12}x_2 + a_{13}x_3+\ldots+a_{1n}x_n}{a_{11}}
$$
End of explanation
"""
dif = x1 - x0
dif
"""
Explanation: which corresponds to our first approximation, so we have stored it in x1.
If we now take the difference between the first two approximations and store it in the variable dif:
End of explanation
"""
from numpy import sqrt
e1 = sqrt((dif.T*dif)[0,0])
e1
"""
Explanation: we can obtain the square of each of these differences with dif.T*dif, that is:
$$
(\bar{x}_1 - x_1)^2 + (\bar{x}_2 - x_2)^2 + \ldots + (\bar{x}_n - x_n)^2 =
\begin{pmatrix}
\bar{x}_1 - x_1 &
\bar{x}_2 - x_2 &
\ldots &
\bar{x}_n - x_n
\end{pmatrix}
\begin{pmatrix}
\bar{x}_1 - x_1 \\
\bar{x}_2 - x_2 \\
\vdots \\
\bar{x}_n - x_n
\end{pmatrix}
$$
and we simply have to take the square root of this value:
End of explanation
"""
x2 = divide(b, divisores) - divide(A*x1, divisores)
x2
dif = x2 - x1
e2 = sqrt((dif.T*dif)[0,0])
e2
x3 = divide(b, divisores) - divide(A*x2, divisores)
x3
dif = x3 - x2
e3 = sqrt((dif.T*dif)[0,0])
e3
"""
Explanation: which gives us our initial error. Now we just have to keep iterating until we obtain a good approximation:
End of explanation
"""
|
schaber/deep-learning | autoencoder/Simple_Autoencoder.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
print(len(img))
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
inputs_ = tf.placeholder(tf.float32, [None,len(img)], name='inputs')
targets_ = tf.placeholder(tf.float32, [None,len(img)], name='targets')
# Output of hidden layer (ReLU activation, as the exercise specifies)
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
#encoded = tf.contrib.layers.fully_connected(inputs_, encoding_dim, activation_fn=None)
# Output layer logits
logits = tf.layers.dense(encoded, len(img), activation=None)
#logits = tf.contrib.layers.fully_connected(encoded, len(img), activation_fn=None)
# Sigmoid output from logits
decoded = tf.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
kit-cel/lecture-examples | nt1/vorlesung/9_mimo/mimo.ipynb | gpl-2.0 | # importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 30}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 8))
"""
Explanation: Content and Objectives
Show several aspects of MIMO
Capacity is estimated by approximating expectation by the weak law of large numbers and averaging along multiple channel realizations
Number of non-zero singular values is characterized when channel coefficients H_ij are circular gaussian
Symbols at the receiver are shown with and without post-processing (zero-forcing, MMSE)
Import
End of explanation
"""
# number of transmit and receive antennas
# vector input --> pairwise combining
N_T = [ 1, 2, 2, 4, 4 ]
N_R = [ 1, 2, 4, 4, 32 ]
assert len( N_T ) == len( N_R ), 'Number of transmit and receive antenna pairs has to be equal!'
# snr in dB
snr_db = np.arange( 0, 110, 10 )
snr = 10**( snr_db / 10 )
# number of realizations used to approximate expectation
N_real = 100
"""
Explanation: Capacity of MIMO Systems
Determining the capacity of MIMO by estimating capacity using the law of large numbers ("time averaging approximates expectation").
Parameters
End of explanation
"""
# initialize capacity array
C = np.zeros( ( len(N_T), len(snr_db) ) )
# loop for snr values
for ind_snr, val_snr in enumerate( snr ):
# loop for antenna combinations
for ind_n in np.arange( len( N_T ) ):
# read transmit and receive antennas
n_T = N_T[ ind_n ]
n_R = N_R[ ind_n ]
# instantaneous capacity; collecting along several realizations
C_inst = []
# loop for realizations
for _n_real in np.arange( N_real ):
# sample channel matrix with i.i.d. CN(0,1) entries
H = np.matrix( 1 / np.sqrt(2) * ( np.random.randn( n_R, n_T ) + 1j * np.random.randn( n_R, n_T ) ) )
# get matrix in "log-det formula" and determine instantaneous capacity
# real part is applied since det(A) is always real for hermitian A (which is true in our scenario)
det_arg = np.eye( n_R, dtype=complex ) + val_snr / n_T * H @ H.H
C_inst.append( np.log2( np.real( np.linalg.det( det_arg ) ) ) )
C[ ind_n, ind_snr ] = np.average( C_inst )
"""
Explanation: Approximating Expectation
End of explanation
"""
# combine antenna constellations
N = zip( N_T, N_R )
for ind_n in np.arange( len( N_T ) ):
plt.plot( snr_db, C[ ind_n, : ], linewidth=2.0, label='$N_T=${}, $N_R$={}'.format( N_T[ind_n], N_R[ind_n] ) )
plt.xlabel( 'SNR (dB)' )
plt.ylabel( '$\\hat{C}$ (bit/Kanalzugriff)' )
plt.grid( True )
plt.legend( loc = 'upper left' )
plt.autoscale(enable=True, tight=True)
plt.savefig('mimo_capacity.pdf',bbox_inches='tight')
"""
Explanation: Plotting Capacity
End of explanation
"""
# specific constellation (8,8)
N_T = 8
N_R = 8
# number of realization for histogram
N_real = 1000
# deviation for counting singular values as zero
dev_sv = .25
# initialize empty array
numb_sv_collect = np.zeros( N_real )
# loop for realizations
for ind_n, val_n in enumerate( np.arange( N_real) ):
# generate channel matrix as i.i.d. CN(0,1)
H = np.matrix( 1 / np.sqrt( 2 ) * ( np.random.randn( N_R, N_T ) + 1j * np.random.randn( N_R, N_T ) ) )
# eigenvalues of H H^H equal the squared singular values of H
L, X = np.linalg.eig( np.dot( H, H.H ) )
# count eigenvalues whose magnitude exceeds the threshold as non-zero
numb_sv_collect[ ind_n ] = np.size( np.where( np.abs( L ) > dev_sv ) )
"""
Explanation: Non-Zero Singular Values of MIMO Systems
The following simulation shows the number of non-zero singular values in MIMO systems.
NOTE: Eigenvalues with magnitude below 0.25 are treated as zero, thereby over-estimating the number of zero singular values
Finding Number of non-zero Singular Values
End of explanation
"""
plt.hist( numb_sv_collect, N_T, range=[0, N_T ], density = True, align = 'right', label='$N_{{T}}=${}, $N_{{R}}=${}'.format( N_T, N_R ) , color='#009682' )
plt.xlabel('$n$')
plt.ylabel('$\\hat{P}(R=n)$')
plt.grid()
plt.legend(loc='upper left')
plt.xlim( ( .5, N_T +.5) )
plt.ylim( (0, 1) )
plt.savefig('histogram_rank.pdf',bbox_inches='tight')
"""
Explanation: Plotting Histogram of non-zero Singular Values
End of explanation
"""
# snr in dB
snr_db = 100
sigma2 = 10**( - snr_db / 10 )
# specify constellation
N_T = 4
N_R = 4
# constellation using 4-qam
constellation_points = [ 1+1j, -1+1j, -1-1j, 1-1j ]
constellation_points /= np.linalg.norm( constellation_points ) / np.sqrt( len( constellation_points ) )
# number of symbols for receiver constellation diagram
N_syms_mimo = 5
# sample channel matrix as i.i.d. CN(0,1)
H = 1/np.sqrt(2) * ( np.random.randn(N_R,N_T) + 1j * np.random.randn(N_R,N_T) )
# loop for mimo symbols
for _n in np.arange( N_syms_mimo ):
# sample input vector
s = np.random.choice( constellation_points, N_T )
# sampe noise vector
noise = np.sqrt( sigma2 / 2 ) * ( np.random.randn(N_R) + 1j * np.random.randn(N_R) )
# determine receive vector
r = np.dot( H, s ) + noise
# plot receive symbols
plt.plot( np.real(r), np.imag(r), 'x', ms='12', mew='4')
# replot last points to apply legend
plt.plot( np.real(r), np.imag(r), 'x', ms='12', mew='4', label='$\mathbf{r}$')
# plot transmit symbols for illustration
plt.plot( np.real(constellation_points), np.imag(constellation_points), 'o', ms='12', mew='4', label='$\mathbf{s}$', c=(0,0,0))
plt.grid( True )
max_ax = np.ceil(np.max([np.max(np.abs(np.real(r))), np.max(np.abs(np.imag(r)))]))
plt.xlabel('$\mathrm{Re}\{\mathbf{r}\}$')
plt.ylabel('$\mathrm{Im}\{\mathbf{r}\}$')
plt.title('MIMO receive symbols without processing')
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim( (-max_ax, max_ax) )
plt.ylim( (-max_ax, max_ax) )
plt.legend( loc='upper left')
#plt.savefig('mimo_symbols_snr_100.pdf',bbox_inches='tight')
"""
Explanation: MIMO Systems: Superpostion of Systems
Showing that, by construction, symbols are overlapping, thereby generating kind of chaotic constellation diagrams.
EXERCISE: Can you reason which values will be generated? (Hint: Use matrix-vector notation and consider r_1.)
Showing Effects of Superposition of Symbols at the Receiver without Detection
End of explanation
"""
# get detector matrix
H = np.matrix( H )
H_zf = np.linalg.pinv( H )
H_mmse = np.linalg.pinv( H.H @ H + N_T * 10**( - snr_db / 10 ) * np.eye( N_T ) ) @ H.H
# loop for mimo symbols
for _n in np.arange( N_syms_mimo ):
# sample input vector
s = np.random.choice( constellation_points, N_T )
# sample noise vector
noise = np.sqrt( sigma2 / 2 ) * ( np.random.randn(N_R) + 1j * np.random.randn(N_R) )
# determine receive vector
r = ( np.dot( H, s ) + noise ).getA1()
# apply detection matrix (zero-forcing)
y_zf = np.dot( H_zf, r ).getA1()
y_mmse = np.dot( H_mmse, r ).getA1()
# plot receive symbols
plt.plot( np.real( r ), np.imag( r ), 'x', ms='18', mew='4')
plt.plot( np.real( y_zf ), np.imag( y_zf ), 'o', ms='12', mew='4')
plt.plot( np.real( y_mmse ), np.imag( y_mmse ), 'D', ms='12', mew='4')
# replot last points to apply legend
plt.plot( np.real( r ), np.imag( r ), 'x', ms='18', mew='4', label='$\mathbf{r}$')
plt.plot( np.real( y_zf ), np.imag( y_zf ), 'D', ms='12', mew='4', label='$\mathbf{r}_{{\mathrm{ZF}}}$')
plt.plot( np.real( y_mmse ), np.imag( y_mmse ), 'o', ms='12', mew='4', label='$\mathbf{r}_{{\mathrm{MMSE}}}$')
plt.grid( True )
max_ax = np.ceil(np.max([np.max(np.abs(np.real(r))), np.max(np.abs(np.imag(r)))]))
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim( (-max_ax, max_ax) )
plt.ylim( (-max_ax, max_ax) )
plt.xlabel( '$\mathrm{Re}\{\mathbf{r}\}$' )
plt.ylabel( '$\mathrm{Im}\{\mathbf{r}\}$' )
plt.title('MIMO receive symbols after detection')
plt.legend( loc = 'upper left' )
#plt.savefig('mimo_symbols_detection_snr_100.pdf',bbox_inches='tight')
"""
Explanation: Now Applying Zero-Forcing and MMSE
End of explanation
"""
|
StudyExchange/Udacity | MachineLearning(Advanced)/p0_titanic_survival_exploration/titanic_survival_exploration.ipynb | mit | import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
"""
Explanation: Machine Learning Engineer Nanodegree
Introduction
Project 0: Predicting Titanic Survival
In 1912, the Titanic struck an iceberg and sank on her maiden voyage, killing most of the passengers and crew. In this introductory project, we will explore part of the Titanic passenger manifest to determine which features best predict whether a person survived. To complete the project, you will need to implement several condition-based predictions and answer the questions below. Your submission will be evaluated on the completeness of the code and on your answers to the questions.
Tip: Text like this will guide you in using the iPython Notebook to complete the project.
Click here to view the English version of this document.
Getting Started
As we start working with the Titanic passenger data, we first import the modules we need and load the data into a pandas DataFrame. Run the code cell below to load the data and use the .head() function to display the first few passenger entries.
Tip: You can run a code cell by clicking it and using the keyboard shortcut Shift+Enter or Shift+Return, or by selecting the code and pressing the run cell button. Markdown text like this can be edited by double-clicking and saved with the same shortcuts. Markdown lets you write readable plain text that can be rendered as HTML.
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
display(outcomes.head())
display(data.loc[0])
display(outcomes.loc[0])
"""
Explanation: From this sample of the Titanic data, we can see the features recorded for every passenger on board:
Survived: whether the passenger survived (0 = no, 1 = yes)
Pclass: socio-economic class (1 = upper, 2 = middle, 3 = lower)
Name: the passenger's name
Sex: the passenger's sex
Age: the passenger's age (may contain NaN)
SibSp: the number of siblings and spouses the passenger had aboard
Parch: the number of parents and children the passenger had aboard
Ticket: the passenger's ticket number
Fare: the fare the passenger paid for the ticket
Cabin: the passenger's cabin number (may contain NaN)
Embarked: the passenger's port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
Since we are interested in whether each passenger or crew member survived the disaster, we can remove the Survived feature from this dataset and store it in a separate variable, outcomes. It also serves as the target we want to predict.
Run the code to remove the Survived feature from the dataset and store it in the variable outcomes.
End of explanation
"""
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print 'Outcomes:'
display(outcomes[:5].head())
print 'Predictions:'
display(predictions.head())
print accuracy_score(outcomes[:5], predictions)
"""
Explanation: This example shows how to remove the Survived data from the Titanic DataFrame. Note that data (the passenger data) and outcomes (whether they survived) are now aligned. This means that for any passenger, data.loc[i] has the corresponding survival outcome outcomes[i].
To validate our prediction results, we need a standard by which to score our predictions. Since what interests us most is the accuracy of our predictions, i.e., the proportion of passengers whose survival we predicted correctly, run the code below to create our accuracy_score function and test it on predictions for the first five passengers.
Question to think about: Counting from the sixth passenger onward, if we predict that all of them survived, what do you think our prediction accuracy would be?
End of explanation
"""
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
print 'Outcomes:'
display(outcomes.head())
print 'Predictions:'
predictions = predictions_0(data)
display(predictions.head())
print accuracy_score(outcomes, predictions)
"""
Explanation: Tip: If you save the iPython Notebook, the output of code you have run is saved as well. However, once you reopen the project, your workspace is reset. Make sure to rerun the code from where you left off so the variables and functions are regenerated.
Predictions
If we want to predict whether the passengers on the Titanic survived, while knowing nothing about them, the best prediction is that nobody on board survived. This is because we can assume that most passengers perished when the ship sank. The predictions_0 function below predicts that every passenger on board perished.
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 1
Compared with the actual Titanic data, how accurate do you think a prediction that no passenger survived can be?
Tip: Run the code below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Sex')
"""
Explanation: Answer: Predictions have an accuracy of 61.62%.
We can use the survival_stats function to see how strongly the Sex feature affects a passenger's survival. This function is defined in the Python script titanic_visualizations.py, which is provided with the project. The first two arguments passed to the function are the Titanic passenger data and the passengers' survival outcomes; the third argument indicates which feature to plot.
Run the code below to plot a bar chart of survival rates by passenger sex.
End of explanation
"""
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
"""
Explanation: Looking at the survival statistics of the passengers, we find that most male passengers perished when the ship sank, whereas most female passengers survived. Let's build on our previous inference: if a passenger is male, we predict that he perished; if a passenger is female, we predict that she survived.
Complete the code below so the function makes the correct prediction.
Tip: You can access each passenger's features the way you would access a dictionary. For example, passenger['Sex'] returns the passenger's sex.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 2
If we predict that all female passengers survived and everyone else perished, what accuracy do we reach?
Tip: Run the code below to see the accuracy of our prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Sex', ["Age < 10"])
"""
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using only the passenger's sex (Sex), our prediction accuracy improved markedly. Now let's see whether an additional feature can push it further. For example, consider all the male passengers on the Titanic: can we find a subset of them with a higher probability of survival? Let's use the survival_stats function again to look at the age (Age) of the male passengers, this time using the fourth argument to restrict the bar chart to male passengers.
Run the code below to plot the survival outcomes of male passengers by age.
End of explanation
"""
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# if passenger['Sex'] == 'male' and passenger['Age'] < 10:
if passenger['Sex'] == 'male':
if passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
"""
Explanation: Looking closely at the survival statistics, most boys younger than 10 survived the sinking, while most males aged 10 and above perished with the ship. Let's keep building on the previous prediction: if a passenger is female, we predict that she survived; if a passenger is male and younger than 10, we also predict that he survived; everyone else we predict perished.
Complete the missing code below so the function can make this prediction.
Tip: You can start from your predictions_1 code and modify it to implement the new prediction function.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 3
When we predict that all females and all males younger than 10 survived, what accuracy does the prediction reach?
Tip: Run the code below to see the accuracy of the prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
"""
Explanation: Answer: Predictions have an accuracy of 79.35%.
***
Combining age (Age) with sex (Sex) improved the accuracy quite a bit over sex (Sex) alone. Now it is your turn to make predictions: find a series of features and conditions to split the data so that the prediction accuracy rises above 80%. This may require multiple features and multiple levels of conditional statements to succeed, and you may use the same feature under different conditions. Pclass, Sex, Age, SibSp, and Parch are suggested features to try.
Use the survival_stats function to examine the survival statistics of the Titanic passengers.
Tip: To use multiple filter conditions, put each condition in a list and pass it as the last argument. For example: ["Sex == 'male'", "Age < 18"]
End of explanation
"""
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['SibSp'] >= 3 or passenger['Parch'] > 3:
predictions.append(0)
elif passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Sex'] == 'male' and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
"""
Explanation: After viewing and studying the visualized survival statistics of the Titanic passengers, complete the missing parts of the code below so the function returns your prediction.
Make sure to record the various features and conditions you tried before arriving at your final prediction model.
Tip: You can start from your predictions_2 code and modify it to implement the new prediction function.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Conclusion
Describe the steps you went through to implement a prediction model with 80% accuracy. Which features did you examine? Were some features more helpful than others? What conditions did you use to predict the survival outcome? What is the accuracy of your final prediction?
Tip: Run the code below to see your prediction accuracy.
End of explanation
"""
# Load the dataset
female_full_data = full_data[full_data['Sex'] == 'female']
female_outcomes = female_full_data['Survived']
female_data = female_full_data.drop('Survived', axis = 1)
print 'Female passenger: %d rows' %len(female_data)
display(female_data.head())
display(female_outcomes.head())
male_full_data = full_data[full_data['Sex'] == 'male']
male_outcomes = male_full_data['Survived']
male_data = male_full_data.drop('Survived', axis = 1)
print 'Male passenger: %d rows' %len(male_data)
display(male_data.head())
display(male_outcomes.head())
"""
Explanation: Answer: Predictions have an accuracy of 82.27%.
Process
Sex is a very good dimension, giving 78.68%, close to 80%. On top of it, however, simply adding a few features directly did not work well. For example, the "males under 10 survive" rule actually pulled the accuracy down at first, and other features I tried on a hunch did not do well either. Reasoning from human understanding of the problem is one approach, but discovering new knowledge this way is laborious and rather limited; still, it is the only method available to me at this point. So I first split the data by sex, and then, within each sex, looked for further distinguishing features. As long as there is one reasonably good feature per group, 80% should be reachable.
The main approach: among the female passengers, find the victims; among the male passengers, find the survivors.
1. Preprocessing: first split the data by sex
End of explanation
"""
survival_stats(female_data, female_outcomes, 'SibSp')
"""
Explanation: 2. Plotted features for female passengers
SibSp (the number of siblings and spouses aboard): this feature separates the groups quite well.
End of explanation
"""
def predictions_female(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'male':
predictions.append(0)
else:
if passenger['SibSp'] > 4:
predictions.append(0)
else:
if passenger['Fare'] == 31.275:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_female(data)
print accuracy_score(outcomes, predictions)
"""
Explanation: As the plot shows, every female passenger with SibSp > 4 perished.
In addition, inspecting the data shows that the few passengers with Fare == 31.275 all perished, so they are picked out as well (admittedly a bit of a hack).
End of explanation
"""
survival_stats(male_data, male_outcomes, 'Fare')
"""
Explanation: As we can see, the prediction accuracy improves from 78.68% to 79.69%.
3. Plotted features for male passengers
Fare (the fare the passenger paid for the ticket): this feature separates the groups quite well.
End of explanation
"""
def predictions_male(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
#if passenger['Sex'] == 'male' and (passenger['Fare'] < 80 or (passenger['Fare'] > 85 and passenger['Fare'] < 100) or passenger['Fare'] > 170):
if passenger['Sex'] == 'male':
if (passenger['Fare'] < 120 and not(passenger['Fare'] >= 26.2875 and passenger['Fare'] <= 26.3875)) or passenger['Fare'] > 170:
predictions.append(0)
else:
predictions.append(1)
else:
if passenger['SibSp'] > 4:
predictions.append(0)
else:
if passenger['Fare'] == 31.275:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_male(data)
print accuracy_score(outcomes, predictions)
"""
Explanation: The plot shows that the male victims lie in the range Fare < 120 or Fare > 170.
In addition, inspecting the data shows that the few passengers with 26.2875 <= Fare <= 26.3875 all survived, so they are picked out as well (again, a bit of a hack).
End of explanation
"""
survival_stats(data, outcomes, 'SibSp')
survival_stats(data, outcomes, 'Parch')
survival_stats(data, outcomes, 'Sex', ['Age < 10'])
def predictions_review20170619(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['SibSp'] >= 3 or passenger['Parch'] > 3:
predictions.append(0)
elif passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Sex'] == 'male' and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
"""
Explanation: 结论
经过了数次对数据的探索和分类,你创建了一个预测泰坦尼克号乘客存活率的有用的算法。在这个项目中你手动地实现了一个简单的机器学习模型——决策树(decision tree)。决策树每次按照一个特征把数据分割成越来越小的群组(被称为 nodes)。每次数据的一个子集被分出来,如果分割结果的子集中的数据比之前更同质(包含近似的标签),我们的预测也就更加准确。电脑来帮助我们做这件事会比手动做更彻底,更精确。这个链接提供了另一个使用决策树做机器学习入门的例子。
决策树是许多监督学习算法中的一种。在监督学习中,我们关心的是使用数据的特征并根据数据的结果标签进行预测或建模。也就是说,每一组数据都有一个真正的结果值,不论是像泰坦尼克号生存数据集一样的标签,或者是连续的房价预测。
Review20170619,根据评审反馈,重新处理数据,并验证评审员给出的代码。
if passenger['SibSp'] >= 3 or passenger['Parch'] > 3:
predictions.append(0)
elif passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Sex'] == 'male' and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
Use the survival_stats() function to further verify the conclusions given above.
End of explanation
"""
james = full_data[full_data['PassengerId'] == 6]
display(james)
print james['Age'] < 10
print james['Age'] == 10
print james['Age'] > 10
"""
Explanation: Review20170619 results
The prediction accuracy is 82.27%, a further improvement over the previous 80.36%.
Other findings
Earlier, while programming the "males younger than 10" rule, I found that my original version
if passenger['Sex'] == 'male':
if passenger['Age'] > 10:
predictions.append(0)
else:
predictions.append(1)
else:
predictions.append(1)
with its accuracy of 68.91%, differs from the revised version
if passenger['Sex'] == 'male':
if passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
else:
predictions.append(1)
with its accuracy of 79.35%. They are just negations of the same result set, yet the outputs really do differ. The reason is that when the age is missing (NaN), the branch taken is not the same in the two versions. Take the passenger Moran, Mr. James as an example: his PassengerId is 6 and his age is missing. What, then, will his prediction be?
End of explanation
"""
|
smenon8/AnimalWildlifeEstimator | Notebooks/.ipynb_checkpoints/AppendMicrosoftAIData-checkpoint.ipynb | bsd-3-clause | import csv
import json
import JobsMapResultsFilesToContainerObjs as ImageMap
import DeriveFinalResultSet as drs
import DataStructsHelper as DS
import importlib
import pandas as pd
import htmltag as HT
from collections import OrderedDict
#import matplotlib.pyplot as plt
import plotly.plotly as py
import cufflinks as cf # this is necessary to link pandas to plotly
cf.go_online()
flName = "../data/All_Zebra_Count_Tag_Output_Results.txt"
pd.set_option('display.max_colwidth', -1)
imgAlbumDict = ImageMap.genImgAlbumDictFromMap(drs.imgJobMap)
master = ImageMap.createResultDict(1,100)
imgShareNotShareList,noResponse = ImageMap.imgShareCountsPerAlbum(imgAlbumDict,master)
importlib.reload(ImageMap)
importlib.reload(DS)
"""
Explanation: Notebook Name: AppendMicrosoftAIData
Author: Sreejith Menon (smenon8@uic.edu)
General Description:
The Microsoft Image Tagging API generates a bag of words that can be used to describe an image.
Think of them as the words (nouns) you would use to describe the image to a person who cannot see it. Each returned tag carries a prediction confidence; tags with low confidence are not considered (i.e., ignored). For the purpose of experiment 2, the confidence threshold has been hardcoded to 0.5.
This notebook contains code that takes the API data, already parsed into a JSON file, and joins it with the share-proportion results from the Amazon Mechanical Turk albums.
The idea is to check whether the occurrence of a certain word influences the share rate in any way.
End of explanation
"""
header,rnkFlLst = DS.genlstTupFrmCsv("../FinalResults/rankListImages_expt2.csv")
rnkListDf = pd.DataFrame(rnkFlLst,columns=header)
rnkListDf['Proportion'] = rnkListDf['Proportion'].astype('float')
rnkListDf.sort_values(by="Proportion",ascending=False,inplace=True)
# create an overall giant csv
gidFtrs = ImageMap.genMSAIDataHighConfidenceTags("../data/GZC_data_tagged.json",0.5)
gidFtrsLst = DS.cnvrtDictToLstTup(gidFtrs)
df = pd.DataFrame(gidFtrsLst,columns=['GID','tags'])
shrPropsTags = pd.merge(rnkListDf,df,left_on='GID',right_on='GID')
shrPropsTags.to_csv("../FinalResults/resultsExpt2RankList_Tags.csv",index=False)
shrPropsTags['URL'] = '<img src = "https://socialmediabias.blob.core.windows.net/wildlifephotos/All_Zebra_Count_Images/' + shrPropsTags['GID'] + '.jpeg" width = "350">'
shrPropsTags.sort_values(by=['Proportion','GID'],ascending=False,inplace=True)
fullFl = HT.html(HT.body(HT.HTML(shrPropsTags.to_html(bold_rows = False,index=False))))
fullFl
outputFile = open("../FinalResults/resultsExpt2RankList_Tags.html","w")
outputFile.write(fullFl)
outputFile.close()
"""
Explanation: Rank list of images by share rates with Microsoft Image Tagging API output
Block of code for building a rank list of images in descending order of their share rates, appended with the Microsoft Image Tagging API results.
The output is a rank list of all the images by share rate, along with the tags for every image.
There is a capability to display the actual images alongside the rank list as well.
Known issue: the '<' and '>' characters in the HTML tags in the URL are often interpreted as-is.
Future work: make sure to add escape logic for these characters in HTML tags. There are also opportunities to convert some of these code blocks into methods.
End of explanation
"""
tgsShrNoShrCount = {}
for lst in rnkFlLst:
tgs = gidFtrs[lst[0]]
tmpDict = {'share': int(lst[1]), 'not_share': int(lst[2]), 'total' : int(lst[3])}
for tag in tgs:
oldDict ={}
oldDict = tgsShrNoShrCount.get(tag,{'share' : 0,'not_share' : 0,'total' : 0})
oldDict['share'] = oldDict.get('share',0) + tmpDict['share']
oldDict['not_share'] = oldDict.get('not_share',0) + tmpDict['not_share']
oldDict['total'] = oldDict.get('total',0) + tmpDict['total']
tgsShrNoShrCount[tag] = oldDict
## Append data into data frames and build visualizations
tgsShrCntDf = pd.DataFrame(tgsShrNoShrCount).transpose()
tgsShrCntDf['proportion'] = tgsShrCntDf['share'] * 100 / tgsShrCntDf['total']
tgsShrCntDf.sort_values(by=['proportion','share'],ascending=False,inplace=True)
tgsShrCntDf = tgsShrCntDf[['share','not_share','total','proportion']]
tgsShrCntDf.to_csv("../FinalResults/RankListTags.csv")
fullFl = HT.html(HT.body(HT.HTML(tgsShrCntDf.to_html(bold_rows = False))))
outputFile = open("../FinalResults/RankListTags.html","w")
outputFile.write(fullFl)
outputFile.close()
iFrameBlock = []
fig = tgsShrCntDf['proportion'].iplot(kind='line',filename="All_Tags",title="Distribution of Tags")
iFrameBlock.append(fig.embed_code)
#plt.savefig("../FinalResults/RankListTags.png",bbox_inches='tight')
"""
Explanation: Generate rank list of tags by share rate.
End of explanation
"""
|
jhillairet/scikit-rf | doc/source/examples/interactive/Interactive Mismatched Line.ipynb | bsd-3-clause | from IPython.display import YouTubeVideo
YouTubeVideo('JyYi_1SswXs',width=700, height=580)
from ipywidgets import interact
%matplotlib inline
from pylab import *
from skrf.media import DistributedCircuit
from skrf import Frequency
import skrf as rf
rf.stylely()
# define a frequency object
freq = Frequency(0,10,1001,'ghz')
# create a Media object for RG-58, based on distributed ckt values
rg58 = DistributedCircuit(frequency = freq,
C =93.5e-12,#F/m
L =273e-9, #H/m
R =0,#53e-3, #Ohm/m
G =0, #S/m
)
"""
Explanation: Interactive Mismatched Line
Intro
This is a demonstration of using IPython's interact function with some of skrf's network creation methods.
<div class="alert alert-success">
As of IPython 2.0, the widgets in this notebook won't show up on http://nbviewer.ipython.org. To view the widgets and interact with them, you will need to download this notebook and run it with an IPython Notebook server.
</div>
End of explanation
"""
def func(d=180,z0=54):
ntwk =rg58.line(d=d,unit='deg',z0=z0, embed=True)
ntwk.plot_s_db(1,0)
ntwk.plot_s_db(0,0)
ylim(-50,20)
draw();show()
interact(func, d=(0, 360, 10), z0=(10, 200, 1));
"""
Explanation: Mismatched Line
This demonstrates the response of a mismatched transmission line of a given length. Specifically, a line of electrical length $d$ with characteristic impedance $z_0$ is embedded in the coaxial environment (RG-58 ) with $z_0 \approx 54\Omega$. The resultant S-parameters are plotted in log mag and on the smith chart.
Log Mag
End of explanation
"""
def func(d=180,z0=54):
ntwk =rg58.line(d=d,unit='deg',z0=z0, embed=True)
ntwk.plot_s_smith(1,0)
ntwk.plot_s_smith(0,0)
draw();show()
interact(func, d=(0, 180, 10), z0=(10, 200, 1));
"""
Explanation: Smith Chart
End of explanation
"""
|
FZJ-IEK3-VSA/tsam | examples/example_k_maxoids.ipynb | mit | %load_ext autoreload
%autoreload 2
import copy
import os
import pandas as pd
import matplotlib.pyplot as plt
import tsam.timeseriesaggregation as tsam
%matplotlib inline
"""
Explanation: tsam - 1. Example
Example usage of the time series aggregation module (tsam)
Date: 02.05.2020
Author: Maximilian Hoffmann
Import pandas and the relevant time series aggregation class
End of explanation
"""
raw = pd.read_csv('testdata.csv', index_col = 0)
"""
Explanation: Input data
Read in time series from testdata.csv with pandas
End of explanation
"""
raw.head()
"""
Explanation: Show a slice of the dataset
End of explanation
"""
raw.shape
"""
Explanation: Show the shape of the raw input data: 4 types of timeseries (GHI, Temperature, Wind and Load) for every hour in a year
End of explanation
"""
def plotTS(data, periodlength, vmin, vmax):
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
stacked, timeindex = tsam.unstackToPeriods(copy.deepcopy(data), periodlength)
cax = axes.imshow(stacked.values.T, interpolation = 'nearest', vmin = vmin, vmax = vmax)
axes.set_aspect('auto')
axes.set_ylabel('Hour')
plt.xlabel('Day')
fig.subplots_adjust(right = 1.2)
cbar=plt.colorbar(cax)
cbar.set_label('T [°C]')
"""
Explanation: Create a plot function for the temperature for a visual comparison of the time series
End of explanation
"""
plotTS(raw['T'], 24, vmin = raw['T'].min(), vmax = raw['T'].max())
"""
Explanation: Plot an example series - in this case the temperature
End of explanation
"""
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24,
clusterMethod = 'k_means')
"""
Explanation: Simple k-mean aggregation
Initialize an aggregation class object with k-mean as method for eight typical days, without any integration of extreme periods. Alternative clusterMethod's are 'averaging','hierarchical', 'k_medoids' and 'k_maxoids'.
End of explanation
"""
typPeriods = aggregation.createTypicalPeriods()
"""
Explanation: Create the typical periods
End of explanation
"""
typPeriods.shape
"""
Explanation: Show shape of typical periods: 4 types of timeseries for 8*24 hours
End of explanation
"""
typPeriods.to_csv(os.path.join('results','testperiods_kmeans.csv'))
"""
Explanation: Save typical periods to .csv file
End of explanation
"""
predictedPeriods = aggregation.predictOriginalData()
"""
Explanation: Repredict the original time series based on the typical periods
End of explanation
"""
plotTS(predictedPeriods['T'], 24, vmin = raw['T'].min(), vmax = raw['T'].max())
"""
Explanation: Plot the repredicted data
End of explanation
"""
aggregationMax = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24,
clusterMethod = 'k_maxoids', rescaleClusterPeriods=False)
"""
Explanation: As seen, the days with the minimal temperature are excluded. In case they are required, they can be added to the aggregation as follows.
k-maxoids aggregation including extreme periods
Initialize a time series aggregation based on k-maxoids, which automatically searches for points closest to the convex hull.
End of explanation
"""
typPeriodsMax = aggregationMax.createTypicalPeriods()
"""
Explanation: Create the typical periods
End of explanation
"""
aggregationMax.accuracyIndicators()
"""
Explanation: The aggregation can also be evaluated by indicators
End of explanation
"""
predictedPeriodsMax = aggregationMax.predictOriginalData()
"""
Explanation: Repredict the original time series based on the typical periods
End of explanation
"""
plotTS(predictedPeriodsMax['T'], 24, vmin = raw['T'].min(), vmax = raw['T'].max())
"""
Explanation: Plot repredicted data
End of explanation
"""
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
raw['Load'].sort_values(ascending=False).reset_index(drop=True).plot(label = 'Original')
predictedPeriods['Load'].sort_values(ascending=False).reset_index(drop=True).plot(label = '8 typ days \n + (Centroids)')
predictedPeriodsMax['Load'].sort_values(
ascending=False).reset_index(drop=True).plot(label = '8 typ days \n + (Maxoids)')
plt.legend()
plt.xlabel('Hours [h]')
plt.ylabel('Duration Load [MW]')
"""
Explanation: Here, larger maximum values and smaller minimum values can be observed compared to k-means clustering.
Comparison of the aggregations
The comparison was shown for the temperature, but in both cases all four time series were aggregated. Therefore, we also compare the duration curves of the electrical load for the original time series, the k-means aggregation, and the k-maxoids aggregation.
End of explanation
"""
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
raw['Load']['20100210':'20100218'].plot(label = 'Original')
predictedPeriods['Load']['20100210':'20100218'].plot(label = '8 typ days \n + (Centroids)')
predictedPeriodsMax['Load']['20100210':'20100218'].plot(label = '8 typ days \n + (Maxoids)')
plt.legend()
plt.ylabel('Load [MW]')
"""
Explanation: Or as unsorted time series for an example week
End of explanation
"""
|
mined-gatech/pymks_overview | notebooks/checker_board.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Checkerboard Microstructure
Introduction - What are 2-Point Spatial Correlations (also called 2-Point Statistics)?
The purpose of this example is to introduce 2-point spatial correlations and how they are computed using PyMKS.
The example starts with some introductory information about spatial correlations. PyMKS is used to compute both the periodic and non-periodic 2-point spatial correlations (also referred to as 2-point statistics or autocorrelations and crosscorrelations) for a checkerboard microstructure. This is a relatively simple example that allows an easy discussion of how the spatial correlations capture the main features seen in the original microstructure.
n-Point Spatial Correlations
1-Point Spatial Correlations (or 1-point statistics)
n-point spatial correlations provide a way to rigorously quantify material structure using statistics. As an introduction to n-point spatial correlations, let's first discuss 1-point statistics. 1-point statistics are the probability that a specified local state will be found in any randomly selected spatial bin in a microstructure [1][2][3]. In the checkerboard example discussed here, there are two possible local states, one colored white and the other colored black. 1-point statistics compute the volume fractions of the local states in the microstructure. 1-point statistics are computed as
$$ f[l] = \frac{1}{S} \sum_s m[s,l] $$
In this equation, $f[l]$ is the probability of finding the local state $l$ in any randomly selected spatial bin in the microstructure, $m[s, l]$ is the microstructure function (the digital representation of the microstructure), $S$ is the total number of spatial bins in the microstructure and $s$ refers to a specific spatial bin.
While 1-point statistics provide information on the relative amounts of the different local states, it does not provide any information about how those local states are spatially arranged in the microstructure. Therefore, 1-point statistics are a limited set of metrics to describe the structure of materials.
2-Point Spatial Correlations
2-point spatial correlations (also known as 2-point statistics) contain information about the fractions of local states as well as the first order information on how the different local states are distributed in the microstructure.
2-point statistics can be thought of as the probability of having a vector placed randomly in the microstructure and having one end of the vector be on one specified local state and the other end on another specified local state. This vector could have any length or orientation that the discrete microstructure allows. The equation for 2-point statistics can be found below.
$$ f[r \vert l, l'] = \frac{1}{S} \sum_s m[s, l] m[s + r, l'] $$
In this equation $ f[r \vert l, l']$ is the conditional probability of finding the local states $l$ and $l'$ at a distance and orientation away from each other defined by the vector $r$. All other variables are the same as those in the 1-point statistics equation. In the case that we have an eigen microstructure function (it only contains values of 0 or 1) and we are using an indicator basis, the $r=0$ vector will recover the 1-point statistics.
When the 2 local states are the same $l = l'$, it is referred to as an autocorrelation. If the 2 local states are not the same, it is referred to as a crosscorrelation.
Higher Order Spatial Statistics
Higher order spatial statistics are similar to 2-point statistics, in that they can be thought of in terms of conditional probabilities of finding specified local states separated by a prescribed set of vectors. 3-point statistics are the probability of finding three specified local states at the ends of a triangle (defined by 2 vectors) placed randomly in the material structure. 4-point statistics describes the probability of finding 4 local states at 4 locations (defined using 3 vectors) and so on.
While higher order statistics are a better metric to quantify the material structure, the 2-point statistics can be computed much faster than higher order spatial statistics, and still provide information about how the local states are distributed. For this reason, only 2-point statistics are implemented in PyMKS. Let us look at an example of computing the 2-point statistics for a checkerboard microstructure.
End of explanation
"""
from pymks.datasets import make_checkerboard_microstructure
X = make_checkerboard_microstructure(square_size=10, n_squares=8)  # square_size chosen for illustration
"""
Explanation: 2-Point Statistics for Checkerboard Microstructure
Let's first start by making a microstructure that looks like an 8 x 8 checkerboard. Although this type of microstructure may not resemble a physical system, it provides solutions that give some intuitive understanding of 2-point statistics.
We can create a checkerboard microstructure using make_checkerboard_microstructure function from pymks.datasets.
End of explanation
"""
from pymks.tools import draw_microstructures
draw_microstructures(X)
print X.shape
"""
Explanation: Now let's take a look at how the microstructure looks.
End of explanation
"""
from pymks.stats import autocorrelate
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2)
X_ = prim_basis.discretize(X)
X_auto = autocorrelate(X_, periodic_axes=(0, 1))
"""
Explanation: Compute Periodic 2-Point Statistics
Now that we have created a microstructure to work with, we can start computing the 2-point statistics. Let's start by looking at the periodic autocorrelations of the microstructure and then compute the periodic crosscorrelation. This can be done using the autocorrelate and crosscorrelate functions from pymks.stats, using the keyword argument periodic_axes to specify the axes that are periodic.
In order to compute 2-point statistics, we need to select a basis to generate the microstructure function X_ from the microstructure X. Because we only have values of 0 or 1 in our microstructure, we will use the PrimitiveBasis with n_states equal to 2.
End of explanation
"""
from pymks.tools import draw_autocorrelations
correlations = [('black', 'black'), ('white', 'white')]
draw_autocorrelations(X_auto[0], autocorrelations=correlations)
"""
Explanation: We have now computed the autocorrelations.
Let's take a look at them using draw_autocorrelations from pymks.tools.
End of explanation
"""
center = (X_auto.shape[1] + 1) / 2
print 'Volume fraction of black phase', X_auto[0, center, center, 0]
print 'Volume fraction of white phase', X_auto[0, center, center, 1]
"""
Explanation: Notice that for this checkerboard microstructure, the autocorrelations for these 2 local states are exactly the same. We have just computed the periodic autocorrelations for a perfectly periodic microstructure with equal volume fractions. In general this is not the case and the autocorrelations will be different, as we will see later in this example.
As mentioned in the introduction, because we are using an indicator basis and we have eigen-microstructure functions (values are either 0 or 1), the (0, 0) vector equals the volume fraction.
Let's double check that both the phases have a volume fraction of 0.5.
End of explanation
"""
from pymks.stats import crosscorrelate
X_cross = crosscorrelate(X_, periodic_axes=(0, 1))
"""
Explanation: We can compute the cross-correlation of the microstructure function using the crosscorrelate function from pymks.stats
End of explanation
"""
from pymks.tools import draw_crosscorrelations
correlations = [('black', 'white')]
draw_crosscorrelations(X_cross[0], crosscorrelations=correlations)
"""
Explanation: Let's take a look at the cross correlation using draw_crosscorrelations from pymks.tools.
End of explanation
"""
print 'Center value', X_cross[0, center, center, 0]
"""
Explanation: Notice that the crosscorrelation is the exact opposite of the 2 autocorrelations. The (0, 0) vector has a value of 0. This statistic reflects the probability of the 2 phases having the same location. In our microstructure, this probability is zero, as we have not allowed the two phases (colored black and white) to co-exist in the same spatial voxel.
Let's check that it is zero.
End of explanation
"""
|
gboeing/urban-data-science | modules/08-urban-networks-ii/process-lodes.ipynb | mit | import geopandas as gpd
import osmnx as ox
import pandas as pd
from shapely.geometry import Point
"""
Explanation: This notebook merges LODES home/work locations with census blocks to get home/work lat-lng block coordinates.
Data sources:
- 2018 LEHD LODES: https://lehd.ces.census.gov/data/
- 2020 Census blocks: https://www2.census.gov/geo/tiger/TIGER2020/
End of explanation
"""
# create a study site (copied code from lecture notebook)
latlng_coords = ox.geocode('Los Angeles City Hall')
latlng_point = Point(latlng_coords[1], latlng_coords[0])
latlng_point_proj, crs = ox.projection.project_geometry(latlng_point)
polygon_proj = latlng_point_proj.buffer(5000)
polygon, crs = ox.projection.project_geometry(polygon_proj, crs=crs, to_latlong=True)
# load CA census blocks shapefile
gdf_blocks = gpd.read_file('../../data/tl_2020_06_tabblock20/').set_index('GEOID20')
gdf_blocks.shape
# get all blocks that intersect the study site polygon
study_site_blocks = gdf_blocks[gdf_blocks.intersects(polygon)]
study_site_blocks.shape
"""
Explanation: Get all census blocks within some study site
End of explanation
"""
# load CA LODES
df = pd.read_csv('../../data/lodes/ca_od_main_JT00_2018.csv',
dtype={'w_geocode': str, 'h_geocode': str})
df.shape
# get all LODES rows in study site blocks
mask_h = df['h_geocode'].isin(study_site_blocks.index)
mask_w = df['w_geocode'].isin(study_site_blocks.index)
study_site_lodes = df[mask_h & mask_w]
study_site_lodes.shape
"""
Explanation: Get all LODES with both home and work in study site
End of explanation
"""
# retain the columns we want
block_trips = study_site_lodes[['h_geocode', 'w_geocode', 'S000']]
block_latlng = study_site_blocks[['INTPTLAT20', 'INTPTLON20']].astype(float)
# merge in home lat-lng
od = pd.merge(left=block_trips,
right=block_latlng,
left_on='h_geocode',
right_index=True,
how='inner')
# merge in work lat-lng
od = pd.merge(left=od,
right=block_latlng,
left_on='w_geocode',
right_index=True,
suffixes=['_home', '_work'],
how='inner')
od.shape
# rename columns then save to disk
cols = {'h_geocode': 'home_block',
'w_geocode': 'work_block',
'S000': 'job_count',
'INTPTLAT20_home': 'home_lat',
'INTPTLON20_home': 'home_lng',
'INTPTLAT20_work': 'work_lat',
'INTPTLON20_work': 'work_lng'}
od = od.rename(columns=cols)
od.to_csv('../../data/od.csv', index=False)
# see the final results
od
"""
Explanation: Merge the data, save to disk
End of explanation
"""
|
lionell/university-labs | num_methods/second/lab2.ipynb | mit | def euler(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
y[i] = y[i - 1] + h * f(x[i - 1], y[i - 1])
return y
"""
Explanation: Ordinary differential equations
Euler method
End of explanation
"""
dy = lambda x, y: (x + y)**2
x = np.linspace(0, 0.5, 100)
y0 = 0
y = euler(dy, x, y0)
y_ans = np.tan(x) - x
plt.figure(figsize=(15, 10))
plt.plot(x, y, x, y_ans)
plt.legend(['euler', 'answer'], loc='best')
plt.xlabel('x')
plt.title('Euler method (Runge-Kutta 1-st order method)')
plt.show()
"""
Explanation: To check correctness, we are going to solve a simple differential equation
$$y' = (x + y)^2,\
y(0) = 0,\
[a, b] = [0, 0.5],\
h = 0.05
$$
Solution here is a function $y(x) = tan(x) - x$.
End of explanation
"""
def runge_kutta3(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/3, y[i - 1] + k1/3)
k3 = h * f(x[i - 1] + 2*h/3, y[i - 1] + 2*k2/3)
y[i] = y[i - 1] + (k1 + 3*k3) / 4
return y
def runge_kutta4(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/2, y[i - 1] + k1/2)
k3 = h * f(x[i - 1] + h/2, y[i - 1] + k2/2)
k4 = h * f(x[i - 1] + h, y[i - 1] + k3)
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
"""
Explanation: The next family of methods we are going to use is the Runge-Kutta family. The Euler method is actually a special case of the Runge-Kutta methods.
Runge-Kutta methods family
We are going to try only two of the Runge-Kutta methods: RK3 and RK4.
End of explanation
"""
dy = lambda x, y: np.sin(x) / y
x = np.linspace(0, 5, 5)  # 5 points on [0, 5] give the step size h = 1.25
y0 = 1
y3 = runge_kutta3(dy, x, y0)
y4 = runge_kutta4(dy, x, y0)
y_ans = np.sqrt(3 - 2*np.cos(x))
plt.figure(figsize=(15, 10))
plt.plot(x, y3, x, y4, x, y_ans)
plt.legend(['rk3', 'rk4', 'ans'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 3-rd and 4-th order methods')
plt.show()
"""
Explanation: Let's solve a slightly different equation
$$
y' = \frac{sin(x)}{y},\
y(0) = 1,\
[a, b] = [0, 5],\
h = 1.25
$$
A correct solution is $y = \sqrt{3 - 2cos(x)}$
End of explanation
"""
def fmap(fs, x):
return np.array([f(*x) for f in fs])
def runge_kutta4_system(fs, x, y0):
h = x[1] - x[0]
y = np.empty((len(x), len(y0)))
y[0] = y0
for i in range(1, len(x)):
k1 = h * fmap(fs, [x[i - 1], *y[i - 1]])
k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)])
k3 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k2/2)])
k4 = h * fmap(fs, [x[i - 1] + h, *(y[i - 1] + k3)])
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
"""
Explanation: Now let's move on to systems of differential equations.
Runge-Kutta methods for SDE
End of explanation
"""
dy = lambda x, y, z: z
dz = lambda x, y, z: 2*x*z / (x*x + 1)
fs = [dy, dz]
x = np.linspace(0, 1, 10)
y0 = np.array([1, 3])
y = runge_kutta4_system(fs, x, y0)
plt.figure(figsize=(15, 10))
plt.plot(x, y[:, 0], x, y[:, 1])
plt.legend(['y(x)', 'z(x)'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 4-th order method for system of differential equations')
plt.show()
"""
Explanation: E.g., we have the system of differential equations
$$
y' = z,\
z' = \frac{2xz}{x^2+1},\
y(0) = 1,\
z(0) = 3
$$
Let's try to solve it using the Runge-Kutta method of order 4.
End of explanation
"""
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 2])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equation')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('prey')
plt.ylabel('predator')
plt.title('Parametric graph')
plt.show()
"""
Explanation: Predator-prey equation
$$
\frac{dx}{dt} = \alpha x - \beta xy\
\frac{dy}{dt} = \delta xy - \gamma y
$$
where $x$ is the prey population and $y$ is the predator population
End of explanation
"""
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 101/200])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equilibrium')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('prey')
plt.ylabel('predator')
plt.title('Parametric graph of equilibrium')
plt.show()
"""
Explanation: Equilibrium
Let's look at population equilibrium
$$
y = \frac{\alpha}{\beta}\
x = \frac{\gamma}{\delta}
$$
We will take values close to these to show how the system approaches equilibrium.
End of explanation
"""
|
ga7g08/ga7g08.github.io | _notebooks/2015-12-03-Hierarchical-Linear-Regression-Models-In-PyMC3-Multiple-Responces.ipynb | mit | Nrespondants = 5
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm

Nresponces = 10
a_val = 45
mu_b_val = 0.1
sigma_b_val = 3
b = np.random.normal(mu_b_val, sigma_b_val, Nrespondants)
xobs_stacked = np.random.uniform(0, 10, (Nresponces, Nrespondants))
yobs_stacked = a_val + b * xobs_stacked + np.random.normal(0, 1.0, (Nresponces, Nrespondants))
plt.plot(xobs_stacked, yobs_stacked, "o")
plt.ylabel(r"$y_\mathrm{obs}$")
plt.xlabel(r"$x_\mathrm{obs}$")
plt.show()
"""
Explanation: Hierarchical Linear Regression Models in PyMC3: Part II
This is a follow-up to a previous post, extending to the case where we have multiple responses from multiple respondents. I will be intentionally brief, with the plan to follow up with a complete and thought-out post.
First, some data
End of explanation
"""
idxs = np.array(list(np.arange(Nrespondants)) * Nresponces)
xobs = xobs_stacked.flatten()
yobs = yobs_stacked.flatten()
idxs
"""
Explanation: The clever bit:
In the following code we flatten the data, but create a set of indexes which maps the responses to the respondent. For example, if our data consisted of 2 respondents, with 3 responses from the first and 2 from the second, then the data above would be:
xobs_stacked = [[1.1, 2.2, 4.5],
[0.5, 0.4]]
yobs_stacked = [[10.2, 10.3, 10.8],
[12.5, 12.5]]
Then we flatten these to get
xobs = [1.1, 2.2, 4.5, 0.5, 0.4]
and create an index as follows
idxs = [0, 0, 0, 1, 1]
which says the first, second, and third entries belong to the 0th respondent, while the last two are from the second. The importance of this will become apparent in a moment. In this instance, we always have the same number of responses from each respondent, so we can use the following trick:
End of explanation
"""
with pm.Model() as hierarchical_model:
# hyperparameters
mu_b = pm.Normal('mu_b', mu=0., sd=100)
sigma_b = pm.Uniform('sigma_b', lower=0, upper=100)
# Common effects
a = pm.Normal('a', mu=45, sd=100, testval=10)
eps = pm.Uniform('eps', lower=0, upper=10)
# Group effects
b = pm.Normal('b', mu=mu_b, sd=sigma_b, shape=Nrespondants)
mu = a*1 + b[idxs] * xobs
# Likelihood (sampling distribution) of observations
Y_obs = pm.Normal('Y_obs', mu=mu, sd=eps, observed=yobs)
"""
Explanation: The hierarchical model
End of explanation
"""
nsamples = 100000
nburn = 50000
with hierarchical_model:
step = pm.Metropolis()
hierarchical_trace = pm.sample(nsamples, step, progressbar=True)
"""
Explanation: Now we generate samples using the Metropolis algorithm
End of explanation
"""
pm.traceplot(hierarchical_trace[nburn:],
vars=['mu_b', 'sigma_b', 'a', 'eps'],
lines={'mu_b': mu_b_val,
'sigma_b': sigma_b_val,
'a': a_val}
)
plt.show()
"""
Explanation: Now let's use the handy traceplot to inspect the chains and the posteriors having discarded the first half of the samples.
End of explanation
"""
npp = 5
repeats = 1
fig, ax = plt.subplots(figsize=(10, 10))
xfit = np.linspace(0, 10, 100)
for i in np.random.randint(0, Nrespondants, npp):
color = np.random.uniform(0, 1, 3)
for j in range(repeats):
s = hierarchical_trace[np.random.randint(nburn, nsamples)]
yfit = s['a'] + xfit * s['b'][i]
ax.plot(xfit, yfit, "-", lw=1.1, color=color)
ax.plot(xobs_stacked[:, i], yobs_stacked[:, i], "o", color=color, markersize=10)
plt.show()
"""
Explanation: The posterior distributions (in blue) can be compared with the vertical (red) lines indicating the "true" values used to generate the data. This shows that we have not fully captured the features of the model, but compared to the diffuse prior we have learnt a great deal. Note that in generating the data $\epsilon$ was effectively zero: so the fact that its posterior is non-zero supports our understanding that we have not fully converged onto the ideal solution.
Posterior predictive check
Finally, we will plot a few of the data points along with straight lines from several draws of the posterior. We color-code 5 random data points, then draw realisations of the parameters from the posterior and plot the corresponding straight lines. This shows that the posterior is doing an excellent job at inferring the individual $b_i$ values.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/5514ea6c90dde531f8026904a417527e/plot_10_evoked_overview.ipynb | bsd-3-clause | import os
import mne
"""
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut_creating_data_structures.
As usual we'll start by importing the modules we need:
End of explanation
"""
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
"""
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
"""
evoked.plot()
"""
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
"""
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
"""
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power <GFP> alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
"""
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
"""
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
"""
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
"""
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
End of explanation
"""
for evok in evokeds_list:
print(evok.comment)
"""
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
"""
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
"""
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
"""
evokeds_list[0].plot(picks='eeg')
"""
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
class:~mne.io.Raw object (the default baseline period is (None, 0),
which assured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
"""
evokeds_list[0].apply_baseline((None, 0))
evokeds_list[0].plot(picks='eeg')
"""
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
"""
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
"""
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
"""
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
"""
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
"""
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
"""
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each ~mne.Evoked object
by $\frac{1}{N}$, where $N$ is the number of ~mne.Evoked
objects given) or the keyword 'nave' (weight each ~mne.Evoked object
proportional to the number of epochs averaged together to create it):
End of explanation
"""
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
"""
Explanation: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation
"""
seeds_dataset = pd.read_csv('seeds_dataset.csv', header=None)
"""
Explanation: How to Implement the Backpropagation Algorithm From Scratch In Python
Courtesy of Jason Brownlee at Machine Learning Mastery. Thanks, Jason!
Description
This section provides a brief introduction to the Backpropagation algorithm and the Wheat Seeds dataset that we will be using in this tutorial.
Backpropagation Algorithm
The Backpropagation algorithm is a supervised learning method for multilayer feed-forward networks from the field of Artificial Neural Networks.
Feed-forward neural networks are inspired by the information processing of one or more neural cells, called a neuron.
A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body.
The axon carries the signal out to synapses, which are the connections of a cell’s axon to other cell’s dendrites.
The principle of the backpropagation approach is to model a given function by modifying internal weightings of input signals to produce an expected output signal.
The system is trained using a supervised learning method, where the error between the system’s output and a known expected output is presented to the system and used to modify its internal state.
Technically, the backpropagation algorithm is a method for training the weights in a multilayer feed-forward neural network.
As such, it requires a network structure to be defined of one or more layers where one layer is fully connected to the next layer.
A standard network structure is one input layer, one hidden layer, and one output layer.
Backpropagation can be used for both classification and regression problems, but we will focus on classification in this tutorial.
In classification problems, best results are achieved when the network has one neuron in the output layer for each class value.
For example, consider a 2-class or binary classification problem with the class values of A and B.
These expected outputs would have to be transformed into binary vectors with one column for each class value, such as [1, 0] and [0, 1] for A and B respectively.
This is called a one hot encoding.
Wheat Seeds Dataset
The seeds dataset involves the prediction of species given measurements of seeds from different varieties of wheat.
There are 201 records and 7 numerical input variables.
It is a classification problem with 3 output classes.
The scales of the numeric input values vary, so some data normalization may be required for use with algorithms that weight inputs, like the backpropagation algorithm.
Using the Zero Rule algorithm that predicts the most common class value, the baseline accuracy for the problem is 28.095%.
You can learn more and download the seeds dataset from the UCI Machine Learning Repository.
Download the seeds dataset and place it into your current working directory with the filename seeds_dataset.csv.
The dataset is in tab-separated format, so you must convert it to CSV using a text editor or a spreadsheet program.
End of explanation
"""
seeds_dataset[:5]
"""
Explanation: Below is a sample of the first 5 rows of the seeds dataset.
End of explanation
"""
# Initialize a network:
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
"""
Explanation: Tutorial
This tutorial is broken down into 6 parts:
Initialize Network.
Forward Propagate.
Back Propagate Error.
Train Network.
Predict.
Seeds Dataset Case Study.
These steps will provide the foundation that you need to implement the backpropagation algorithm from scratch and apply it to your own predictive modeling problems.
1. Initialize Network
Let’s start with something easy, the creation of a new network ready for training.
Each neuron has a set of weights that need to be maintained.
One weight for each input connection and an additional weight for the bias.
We will need to store additional properties for a neuron during training, therefore we will use a dictionary to represent each neuron and store properties by names such as weights for the weights.
A network is organized into layers.
The input layer is really just a row from our training dataset.
The first real layer is the hidden layer.
This is followed by the output layer that has one neuron for each class value.
We will organize layers as arrays of dictionaries and treat the whole network as an array of layers.
It is good practice to initialize the network weights to small random numbers.
In this case, we will use random numbers in the range of 0 to 1.
Below is a function named initialize_network() that creates a new neural network ready for training.
It accepts three parameters: the number of inputs, the number of neurons to have in the hidden layer, and the number of outputs.
You can see that for the hidden layer we create n_hidden neurons and each neuron in the hidden layer has n_inputs + 1 weights, one for each input column in a dataset and an additional one for the bias.
You can also see that the output layer that connects to the hidden layer has n_outputs neurons, each with n_hidden + 1 weights.
This means that each neuron in the output layer connects to (has a weight for) each neuron in the hidden layer.
End of explanation
"""
from random import seed
from random import random
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
seed(1)
network = initialize_network(2, 1, 2)
for layer in network:
print(layer)
"""
Explanation: Let's test out this function.
Below is a complete example that creates a small network:
End of explanation
"""
# Calculate neuron activation for an input
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
"""
Explanation: Running the example, you can see that the code prints out each layer one by one.
You can see that the hidden layer has one neuron with 2 input weights plus the bias.
The output layer has two neurons, each with one weight plus the bias.
Now that we know how to create and initialize a network, let's see how we can use it to calculate an output.
2. Forward Propagate
We can calculate an output from a neural network by propagating an input signal through each layer until the output layer outputs its values.
We call this forward-propagation.
It is the technique we will need to generate predictions during training that will need to be corrected, and it is the method we will need after the network is trained to make predictions on new data.
We can break forward propagation down into three parts:
1. Neuron Activation.
2. Neuron Transfer.
3. Forward Propagation.
2.1 Neuron Activation
The first step is to calculate the activation of one neuron given an input.
The input could be a row from our training dataset, as in the case of the hidden layer.
It may also be the outputs from each neuron in the hidden layer, in the case of the output layer.
Neuron activation is calculated as the weighted sum of the inputs, much like linear regression:
activation = sum(weight_i * input_i) + bias
Where weight is a network weight, input is an input, i is the index of a weight or an input, and bias is a special weight that has no input to multiply with (or you can think of the input as always being 1.0).
Below is an implementation of this in a function named activate().
You can see that the function assumes that the bias is the last weight in the list of weights.
This helps here and later to make the code easier to read.
End of explanation
"""
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
"""
Explanation: Now, let's see how to use the neuron activation.
2.2 Neuron Transfer
Once a neuron is activated, we need to transfer the activation to see what the neuron output actually is.
Different transfer functions can be used.
It is traditional to use the sigmoid activation function, but you can also use the tanh (hyperbolic tangent) function to transfer outputs.
More recently, the rectifier transfer function has been popular with large deep learning networks.
The sigmoid activation function looks like an S shape, it’s also called the logistic function.
It can take any input value and produce a number between 0 and 1 on an S-curve.
It is also a function of which we can easily calculate the derivative (slope) that we will need later when backpropagating error.
We can transfer an activation function using the sigmoid function as follows:
output = 1 / (1 + e^(-activation))
or:
$\sigma(x) = 1 / (1 + e^{-x})$
Where e (Euler's Number) is the base of the natural logarithm.
Below is a function named transfer() that implements the sigmoid equation.
End of explanation
"""
# Forward propagate input to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
"""
Explanation: Now that we have the pieces, let's see how they are put together and used.
2.3 Forward Propagation
Forward propagating an input is straightforward.
We work through each layer of our network calculating the outputs for each neuron.
All of the outputs from one layer become inputs to the neurons on the next layer.
Below is a function named forward_propagate() that implements the forward propagation for a row of data from our dataset with our neural network.
You can see that a neuron’s output value is stored in the neuron with the name output.
You can also see that we collect the outputs for a layer in an array named new_inputs that becomes the array inputs and is used as inputs for the following layer.
The function returns the outputs from the last layer also called the output layer.
End of explanation
"""
from math import exp
# Calculate neuron activation for an input:
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
"""
Explanation: Let’s put all of these pieces together and test out the forward propagation of our network.
We define our network inline with one hidden neuron that expects 2 input values and an output layer with two neurons.
End of explanation
"""
# Test forward propagation:
network = [[{'weights': [0.13436424411240122, 0.8474337369372327, 0.763774618976614]}],
[{'weights': [0.2550690257394217, 0.49543508709194095]},
{'weights': [0.4494910647887381, 0.651592972722763]}]]
row = [1, 0, None]
output = forward_propagate(network, row)
output
"""
Explanation: Time for some testing:
End of explanation
"""
# Calculate the derivative of a neuron output:
def transfer_derivative(output):
return output * (1.0 - output)
"""
Explanation: Running the example propagates the input pattern [1, 0] and produces an output value that is printed.
Because the output layer has two neurons, we get a list of two numbers as output.
The actual output values are just nonsense for now, but next, we will start to learn how to make the weights in the neurons more useful.
3. Back Propagate Error
The backpropagation algorithm is named for the way in which the weights are trained (backwards propagation of errors).
Error is calculated between the expected outputs and the outputs forward propagated from the network.
These errors are then propagated backward through the network from the output layer to the hidden layer, assigning blame for the error and updating weights as they go.
The math for backpropagating error is rooted in calculus, but we will remain high level in this section and focus on what is calculated and how rather than why the calculations take this particular form.
This part is broken down into two sections.
1. Transfer Derivative.
2. Error Backpropagation.
3.1 Transfer Derivative
Given an output from a neuron, we need to calculate its slope.
We are using the sigmoid transfer function, the derivative of which can be calculated as follows:
derivative = output * (1.0 - output)
Below is a function named transfer_derivative() that implements this equation:
End of explanation
"""
# Backpropagate error and store in neurons:
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network) - 1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
"""
Explanation: Now let's see how this can be used.
3.2 Error Backpropagation
The first step is to calculate the error for each output neuron.
This will give us our error signal (aka input) to propagate backwards through the network.
The error for a given neuron can be calculated as follows:
error = (expected - output) * transfer_derivative(output)
Where expected is the expected output value for the neuron, output is the actual output value for the neuron, and transfer_derivative() calculates the slope of the neuron's output value, as shown above.
This error calculation is used for neurons in the output layer.
The expected value is the class value itself.
In the hidden layer, things are a little more complicated.
The error signal for a neuron in the hidden layer is calculated as the weighted error of each neuron in the output layer.
Think of the error traveling back along the weights of the output layer to the neurons in the hidden layer.
The back-propagated error signal is accumulated and then used to determine the error for the neuron in the hidden layer, as follows:
error = (weight_k * error_j) * transfer_derivative(output)
Where error_j is the error signal from the jth neuron in the output layer, weight_k is the weight that connects the kth neuron to the current neuron, and output is the output for the current neuron.
Below is a function named backward_propagate_error() that implements this procedure.
You can see that the error signal calculated for each neuron is stored with the name delta.
You can see that the layers of the network are iterated in reverse order, starting at the output and working backwards.
This ensures that the neurons in the output layer have 'delta' values calculated first so that neurons in the hidden layer can use the result in the subsequent iteration.
I chose the name 'delta' to reflect the change the error implies on the neuron (e.g. the weight delta).
You can see that the error signal for neurons in the hidden layer is accumulated from neurons in the output layer where the hidden neuron number j is also the index of the neuron’s weight in the output layer neuron[‘weights’][j].
End of explanation
"""
# Calculate the derivative of a neuron output:
def transfer_derivative(output):
return output * (1.0 - output)
# Backpropagate error and store in neurons:
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network) - 1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
"""
Explanation: Let's put all of the pieces together and see how it works.
We define a fixed neural network with output values and backpropagate an expected output pattern.
The complete example is listed below.
End of explanation
"""
# Test backpropagation of error:
network = [[{'output': 0.7105668883115941, 'weights': [0.13436424411240122, 0.8474337369372327, 0.763774618976614]}],
[{'output': 0.6213859615555266, 'weights': [0.2550690257394217, 0.49543508709194095]},
{'output': 0.6573693455986976, 'weights': [0.4494910647887381, 0.651592972722763]}]]
expected = [0, 1]
backward_propagate_error(network, expected)
for layer in network:
print(layer)
"""
Explanation: Let's run some tests to make sure we get what we want:
End of explanation
"""
# Update network weights with error:
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row[:-1]
if i != 0:
inputs = [neuron['output'] for neuron in network[i - 1]]
for neuron in network[i]:
for j in range(len(inputs)):
neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
neuron['weights'][-1] += l_rate * neuron['delta']
"""
Explanation: Running the example prints the network after the backpropagation of errors is complete.
You can see that error values are calculated and stored in the neurons for the output layer and the hidden layer.
Now let's use the backpropagation of errors to train the network.
4. Train Network
The network is trained using stochastic gradient descent.
This involves multiple iterations of exposing a training dataset to the network and for each row of data forward propagating the inputs, backpropagating the error and updating the network weights.
This part is broken down into two sections:
1. Update Weights.
2. Train Network.
Update Weights
Once errors are calculated for each neuron in the network via the backpropagation method above, they can be used to update the weights.
Network weights are updated as follows:
weight = weight + learning_rate * error * input
Where weight is a given weight, learning_rate is a parameter that you must specify, error is the error calculated by the backpropagation procedure for the neuron and input is the input value that caused the error.
The same procedure can be used for updating the bias weight, except there is no input term, or input is the fixed value of 1.0.
Learning rate controls how much to change the weight to correct for the error.
For example, a value of 0.1 will update the weight 10% of the amount that it possibly could be updated.
Small learning rates that cause slower learning over a large number of training iterations are preferred.
This increases the likelihood of the network finding a good set of weights across all layers rather than the fastest set of weights that minimize error (called premature convergence).
Below is a function named update_weights() that updates the weights for a network given an input row of data and a learning rate, assuming that forward and backward propagation have already been performed.
Remember that the input for the output layer is a collection of outputs from the hidden layer.
End of explanation
"""
# Train an ANN for a fixed number of epochs:
def train_network(network, train, l_rate, n_epoch, n_outputs):
for epoch in range(n_epoch):
sum_error = 0
for row in train:
outputs = forward_propagate(network, row)
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
sum_error += sum([(expected[i] - outputs[i]) ** 2 \
for i in range(len(expected))])
backward_propagate_error(network, expected)
update_weights(network, row, l_rate)
print(">epoch: {}, l_rate: {:.3f}, sum_error: {:.3f}".format(epoch, l_rate, sum_error))
"""
Explanation: Now that we know how to update network weights, let's tell the machine how to do it repeatedly.
4.2 Train Network
As mentioned, the network is updated using stochastic gradient descent.
This involves first looping for a fixed number of epochs and within each epoch updating the network for each row in the training dataset.
Because updates are made for each training pattern, this type of learning is called online learning.
If errors were accumulated across an epoch before updating the weights, this is called batch learning or batch gradient descent.
Below is a function that implements the training of an already initialized neural network with a given training dataset, learning rate, fixed number of epochs and an expected number of output values.
The expected number of output values is used to transform class values in the training data into a one hot encoding.
That is a binary vector with one column for each class value to match the output of the network.
This is required to calculate the error for the output layer.
You can also see that the sum squared error between the expected output and the network output is accumulated each epoch and printed.
This is helpful to create a trace of how much the network is learning and improving each epoch.
End of explanation
"""
from math import exp
from random import seed
from random import random
# Initialize a network
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
# Calculate neuron activation for an input
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network output
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Calculate the derivative of a neuron output
def transfer_derivative(output):
return output * (1.0 - output)
# Backpropagate error and store in neurons
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network)-1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
# Update network weights with error
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row[:-1]
if i != 0:
inputs = [neuron['output'] for neuron in network[i - 1]]
for neuron in network[i]:
for j in range(len(inputs)):
neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
neuron['weights'][-1] += l_rate * neuron['delta']
# Train a network for a fixed number of epochs
def train_network(network, train, l_rate, n_epoch, n_outputs):
for epoch in range(n_epoch):
sum_error = 0
for row in train:
outputs = forward_propagate(network, row)
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
sum_error += sum([(expected[i]-outputs[i])**2 for i in range(len(expected))])
backward_propagate_error(network, expected)
update_weights(network, row, l_rate)
print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error))
# Test training backprop algorithm
seed(1)
dataset = [[2.7810836,2.550537003,0],
[1.465489372,2.362125076,0],
[3.396561688,4.400293529,0],
[1.38807019,1.850220317,0],
[3.06407232,3.005305973,0],
[7.627531214,2.759262235,1],
[5.332441248,2.088626775,1],
[6.922596716,1.77106367,1],
[8.675418651,-0.242068655,1],
[7.673756466,3.508563011,1]]
n_inputs = len(dataset[0]) - 1
n_outputs = len(set([row[-1] for row in dataset]))
network = initialize_network(n_inputs, 2, n_outputs)
train_network(network, dataset, 0.5, 20, n_outputs)
for layer in network:
print(layer)
"""
Explanation: We now have all of the pieces to train the network.
We can put together an example that includes everything we’ve seen so far including network initialization and train a network on a small dataset.
Below is a small contrived dataset that we can use to test out training our neural network:
Now we are ready to work through the complete example.
We will use 2 neurons in the hidden layer.
It is a binary classification problem (2 classes) so there will be two neurons in the output layer.
The network will be trained for 20 epochs with a learning rate of 0.5, which is high because we are training for so few iterations.
End of explanation
"""
# Make a prediction with the network:
def predict(network, row):
outputs = forward_propagate(network, row)
return outputs.index(max(outputs))
"""
Explanation: Running the example first prints the sum squared error each training epoch.
We can see a trend of this error decreasing with each epoch.
Once trained, the network is printed, showing the learned weights.
Also still in the network are output and delta values that can be ignored.
We could update our training function to delete these data if we wanted.
Once a network is trained, we need to use it to make predictions.
5. Predict
Making predictions with a trained neural network is easy enough.
We have already seen how to forward-propagate an input pattern to get an output.
This is all we need to do to make a prediction.
We can use the output values themselves directly as the probability of a pattern belonging to each output class.
It may be more useful to turn this output back into a crisp class prediction.
We can do this by selecting the class value with the larger probability.
This is also called the arg max function.
Below is a function named predict() that implements this procedure.
It returns the index in the network output that has the largest probability.
It assumes that class values have been converted to integers starting at 0.
End of explanation
"""
from math import exp
# Calculate neuron activation for an input:
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate inputs to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Make a prediction with a network:
def predict(network, row):
outputs = forward_propagate(network, row)
return outputs.index(max(outputs))
# Test making predictions with the network:
dataset = [[2.7810836,2.550537003,0],
[1.465489372,2.362125076,0],
[3.396561688,4.400293529,0],
[1.38807019,1.850220317,0],
[3.06407232,3.005305973,0],
[7.627531214,2.759262235,1],
[5.332441248,2.088626775,1],
[6.922596716,1.77106367,1],
[8.675418651,-0.242068655,1],
[7.673756466,3.508563011,1]]
network = [[{'weights': [-1.482313569067226, 1.8308790073202204, 1.078381922048799]},
{'weights': [0.23244990332399884, 0.3621998343835864, 0.40289821191094327]}],
[{'weights': [2.5001872433501404, 0.7887233511355132, -1.1026649757805829]},
{'weights': [-2.429350576245497, 0.8357651039198697, 1.0699217181280656]}]]
for row in dataset:
prediction = predict(network, row)
print('Expected=%d, Actual=%d' % (row[-1], prediction))
"""
Explanation: We can put this together with our code above for forward propagating input with our small contrived dataset to test making predictions with a network that has already been trained.
The example hardcodes a network trained from the previous step above.
The complete example is listed below.
End of explanation
"""
# Backpropagation on the Seeds Dataset:
from random import seed
from random import randrange
from random import random
from csv import reader
from math import exp
# Load the CSV file with the seeds dataset:
def load_csv(filename):
dataset = list()
with open(filename, 'r') as file:
csv_reader = reader(file)
for row in csv_reader:
if not row:
continue
dataset.append(row)
return dataset
# Convert string column to float:
def str_column_to_float(dataset, column):
for row in dataset:
row[column] = float(row[column].strip())
# Convert string column to integer:
def str_column_to_int(dataset, column):
class_values = [row[column] for row in dataset]
unique = set(class_values)
lookup = dict()
for i, value in enumerate(unique):
lookup[value] = i
for row in dataset:
row[column] = lookup[row[column]]
return lookup
# Find the min and max values for each column:
def dataset_minmax(dataset):
stats = [[min(column), max(column)] for column in zip(*dataset)]
return stats
# Rescale dataset columns to the range 0-1:
def normalize_dataset(dataset, minmax):
for row in dataset:
for i in range(len(row)-1):
row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])
# Split a dataset into k folds:
def cross_validation_split(dataset, n_folds):
dataset_split = list()
dataset_copy = list(dataset)
fold_size = int(len(dataset) / n_folds)
for i in range(n_folds):
fold = list()
while len(fold) < fold_size:
index = randrange(len(dataset_copy))
fold.append(dataset_copy.pop(index))
dataset_split.append(fold)
return dataset_split
# Calculate accuracy percentage:
def accuracy_metric(actual, predicted):
correct = 0
for i in range(len(actual)):
if actual[i] == predicted[i]:
correct += 1
return correct / float(len(actual)) * 100.0
# Evaluate the algorithm using a cross validation split:
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
folds = cross_validation_split(dataset, n_folds)
scores = list()
for fold in folds:
train_set = list(folds)
train_set.remove(fold)
train_set = sum(train_set, [])
test_set = list()
for row in fold:
row_copy = list(row)
test_set.append(row_copy)
row_copy[-1] = None
predicted = algorithm(train_set, test_set, *args)
actual = [row[-1] for row in fold]
accuracy = accuracy_metric(actual, predicted)
scores.append(accuracy)
return scores
# Calculate neuron activation for an input:
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Calculate the derivative of a neuron's output:
def transfer_derivative(output):
return output * (1.0 - output)
# Backpropagate errors and store the results in the corresponding neurons:
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network)-1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
# Update network weights with the errors:
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row[:-1]
if i != 0:
inputs = [neuron['output'] for neuron in network[i - 1]]
for neuron in network[i]:
for j in range(len(inputs)):
neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
neuron['weights'][-1] += l_rate * neuron['delta']
# Train a network for a fixed number of epochs:
def train_network(network, train, l_rate, n_epoch, n_outputs):
for epoch in range(n_epoch):
for row in train:
outputs = forward_propagate(network, row)
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
backward_propagate_error(network, expected)
update_weights(network, row, l_rate)
# Initialize a network:
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
# Make a prediction with a network:
def predict(network, row):
outputs = forward_propagate(network, row)
return outputs.index(max(outputs))
# Backpropagation Algorithm with Stochastic Gradient Descent:
def back_propagation(train, test, l_rate, n_epoch, n_hidden):
n_inputs = len(train[0])-1
n_outputs = len(set([row[-1] for row in train]))
network = initialize_network(n_inputs, n_hidden, n_outputs)
train_network(network, train, l_rate, n_epoch, n_outputs)
predictions = list()
for row in test:
prediction = predict(network, row)
predictions.append(prediction)
return predictions
# Test backpropagation on seeds dataset:
seed(1)
# Load and prepare data:
filename = 'seeds_dataset.csv'
dataset = load_csv(filename)
for i in range(len(dataset[0])-1):
str_column_to_float(dataset, i)
# Convert class column to integers:
str_column_to_int(dataset, len(dataset[0])-1)
# Normalize the input variables:
minmax = dataset_minmax(dataset)
normalize_dataset(dataset, minmax)
# Evaluate the algorithm:
n_folds = 5
l_rate = 0.3
n_epoch = 500
n_hidden = 5
scores = evaluate_algorithm(dataset, back_propagation, n_folds, l_rate, n_epoch, n_hidden)
print("Scores: \n{}".format(scores))
print("Mean Accuracy: {:.3f}".format(sum(scores)/float(len(scores))))
"""
Explanation: Running the example prints the expected output for each record in the training dataset, followed by the crisp prediction made by the network.
It shows that the network achieves 100% accuracy on this small dataset.
Now we are ready to apply our backpropagation algorithm to a real world dataset.
6. Wheat Seeds Dataset
This section applies the Backpropagation algorithm to the wheat seeds dataset.
The first step is to load the dataset and convert the loaded data to numbers that we can use in our neural network.
For this we will use the helper function load_csv() to load the file, str_column_to_float() to convert string numbers to floats and str_column_to_int() to convert the class column to integer values.
Input values vary in scale and need to be normalized to the range of 0 and 1.
It is generally good practice to normalize input values to the range of the chosen transfer function, in this case, the sigmoid function that outputs values between 0 and 1.
The dataset_minmax() and normalize_dataset() helper functions were used to normalize the input values.
We will evaluate the algorithm using k-fold cross-validation with 5 folds.
This means that 210/5 = 42 records will be in each fold.
We will use the helper functions evaluate_algorithm() to evaluate the algorithm with cross-validation and accuracy_metric() to calculate the accuracy of predictions.
A new function named back_propagation() was developed to manage the application of the Backpropagation algorithm; first initializing a network, training it on the training dataset and then using the trained network to make predictions on a test dataset.
The complete example is listed below.
End of explanation
"""
|
BeatHubmann/17F-U-DLND | gan_mnist/Intro_to_GANs_Solution.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform best with $tanh$ for its output layer. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
"""
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                                labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
enchantner/python-zero | lesson_7/Slides.ipynb | mit | import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute("""
CREATE TABLE employees (
id int unsigned NOT NULL,
first_name string NOT NULL,
last_name string NOT NULL,
department_id int unsigned,
PRIMARY KEY (id)
)""")
c.execute("""
CREATE TABLE departments (
id int unsigned NOT NULL,
title string NOT NULL,
PRIMARY KEY (id)
)""")
conn.commit()
c.execute("""
INSERT INTO `employees` (`id`, `first_name`, `last_name`, `department_id`) VALUES
('1', 'Darth', 'Vader', 1),
('2', 'Darth', 'Maul', 1),
('3', 'Kylo', 'Ren', 1),
('4', 'Magister', 'Yoda', 2),
('5', 'Leia', 'Organa', 2),
('6', 'Luke', 'Skywalker', 2),
('7', 'Jar Jar', 'Binks', NULL)
""")
c.execute("""
INSERT INTO `departments` (`id`, `title`) VALUES
('1', 'Dark Side Inc.'),
('2', 'Light Side Ltd.'),
('3', 'Rebels'),
('4', 'Wookie')
""")
conn.commit()
c.execute("SELECT emp.last_name AS Surname, d.title AS Department FROM departments d LEFT JOIN employees emp ON (d.id = emp.department_id)")
print(c.fetchall())
"""
Explanation: Questions
What are the two ways to create a thread using the threading module?
What is cooperative multitasking? What is a Future?
What are the main differences between asynchronous and synchronous functions in Python?
When working with coroutines, which function is, roughly speaking, the analog of concurrent.futures.ThreadPoolExecutor().submit() from the world of threads?
What is MVC, and which files in Django are responsible for each component?
What are the main differences between Django, Flask, and aiohttp?
An asynchronous bot
Databases
Relational (MySQL, PostgreSQL, Oracle, SQLite)
Key-value + document-oriented (Redis, Tarantool, MongoDB, Elasticsearch)
Graph (Neo4j)
etc.
Distributed? (DNS)
In-memory? (Memcached)
Relational databases
Records can hold keys that point at each other
Most often, SQL is used to work with the data (https://ru.wikipedia.org/wiki/SQL)
SQL - Structured Query Language
SQL
SELECT emp.last_name AS Surname, d.title AS Department FROM departments d LEFT JOIN employees emp ON (d.id = emp.department_id)
http://sqlfiddle.com
```SQL
CREATE TABLE IF NOT EXISTS `employees` (
  `id` int(6) unsigned NOT NULL,
  `first_name` varchar(30) NOT NULL,
  `last_name` varchar(30) NOT NULL,
  `department_id` int(6) unsigned,
  PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `departments` (
  `id` int(6) unsigned NOT NULL,
  `title` varchar(30) NOT NULL,
  PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;
```
```SQL
INSERT INTO `employees` (`id`, `first_name`, `last_name`, `department_id`) VALUES
('1', 'Darth', 'Vader', 1),
('2', 'Darth', 'Maul', 1),
('3', 'Kylo', 'Ren', 1),
('4', 'Magister', 'Yoda', 2),
('5', 'Leia', 'Organa', 2),
('6', 'Luke', 'Skywalker', 2),
('7', 'Jar Jar', 'Binks', NULL);

INSERT INTO `departments` (`id`, `title`) VALUES
('1', 'Dark Side Inc.'),
('2', 'Light Side Ltd.'),
('3', 'Rebels'),
('4', 'Wookie');
```
Python DB API 2.0
https://www.python.org/dev/peps/pep-0249/
https://docs.python.org/3/library/sqlite3.html
End of explanation
"""
# hello/models.py
from django.db import models
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
class Choice(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField(default=0)
"""
Explanation: ORM - Object-Relational Mapping
We set up a correspondence between records in the database and objects in the code
We gain convenience in the code at the cost of less flexibility in query construction and more overhead
Back to our site
Migrations are transformations of the schema and/or data types that change the structure of the database, both during development and on production servers
python manage.py migrate
End of explanation
"""
INSTALLED_APPS = [
    'hello', # <---- right here, for example
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
"""
Explanation: We need to add our application to INSTALLED_APPS in settings.py
End of explanation
"""
import django
django.setup()
from django.utils import timezone # time zone support
from hello.models import Question, Choice
Question.objects.all() # return all objects from the database
q = Question(question_text="What's up?", pub_date=timezone.now()) # create an object
q.save() # save the object to the database
q.question_text = "What's new?"
q.save()
str(Question.objects.all().query) # peek inside the generated SQL (query is a QuerySet attribute)
Question.objects.filter(question_text__startswith='What') # filter by string prefix
current_year = timezone.now().year
Question.objects.get(pub_date__year=current_year) # filter by year
Question.objects.get(id=2)
q.choice_set.all() # all answer choices for this question
c = q.choice_set.create(choice_text='Who knows', votes=0) # create a related object
c.delete() # delete the object
"""
Explanation: Let's create a migration for our new models
python manage.py makemigrations hello
But what exactly did it generate?
python manage.py sqlmigrate hello 0001
There we go, now everything is clear
python manage.py migrate
The built-in Python shell (I recommend IPython)
python manage.py shell
End of explanation
"""
# hello/admin.py
from django.contrib import admin
from .models import Question
admin.site.register(Question)
"""
Explanation: The admin site
python manage.py createsuperuser
End of explanation
"""
import asyncio
import aiohttp
from aioes import Elasticsearch
from datetime import datetime
es = Elasticsearch(['localhost:9200'])
URL = "https://ghibliapi.herokuapp.com/species/603428ba-8a86-4b0b-a9f1-65df6abef3d3"
async def create_db():
async with aiohttp.ClientSession() as session:
async with session.get(URL) as resp:
films_urls = (await resp.json())["films"]
for i, film_url in enumerate(films_urls):
async with session.get(film_url) as resp:
res = await es.index(
index="coding-index",
doc_type='film',
id=i,
body=await resp.json()
)
print(res['created'])
loop = asyncio.get_event_loop()
loop.run_until_complete(create_db())
# https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html
async def get_by_id(key):
return await es.get(index='coding-index', doc_type='film', id=key)
async def search_by_director(director):
return await es.search(index='coding-index', body={"query": {"match": {'director': director}}})
async def search_in_description(sentence):
return await es.search(index='coding-index', body={"query": {"match": {'description': sentence}}})
# loop.run_until_complete(get_by_id(0))
# loop.run_until_complete(search_by_director("Hayao Miyazaki"))
loop.run_until_complete(search_in_description("cat"))
"""
Explanation: python manage.py runserver
What about non-Django projects? SQLAlchemy, that's what!
https://www.sqlalchemy.org/
http://docs.sqlalchemy.org/en/latest/orm/tutorial.html
There are others as well:
https://ponyorm.com/
http://docs.peewee-orm.com/en/latest/
Problems with relational databases:
They don't scale very well
Any schema change leads to gigantic migrations
They have poor support for asynchrony
Common RDBMSs integrate poorly with computational tooling
Still, PostgreSQL is pretty good overall
What about NoSQL?
Redis (https://redis.io/), https://aioredis.readthedocs.io/en/latest/
Elasticsearch (https://www.elastic.co/products/elasticsearch), https://aioes.readthedocs.io/en/latest/
MongoDB (https://www.mongodb.com/), https://motor.readthedocs.io/en/stable/
Let's try Elasticsearch:
https://www.elastic.co/downloads/elasticsearch
Run bin/elasticsearch and check that it works: http://localhost:9200/
pip install aioes
For those wary of asynchrony: https://elasticsearch-py.readthedocs.io/en/master/
End of explanation
"""
|
JackDi/phys202-2015-work | assignments/assignment05/InteractEx03.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 3
Imports
End of explanation
"""
import math
def soliton(x, t, c, a):
"""Return phi(x, t) for a soliton wave with constants c and a."""
# YOUR CODE HERE
phi=(0.5)*c/(np.cosh(((x-c*t-a)*np.sqrt(c)/2)))**2
return phi
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
"""
Explanation: Using interact for animation with data
A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or the time t is a NumPy array, in which case it should return a NumPy array itself.
End of explanation
"""
tmin = 0.0
tmax = 10.0
tpoints = 200 #jack changed from 100 to 200
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
"""
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
"""
# YOUR CODE HERE
phi=np.ones((xpoints,tpoints))
w=0
z=0
while w < xpoints:
while z < tpoints:
        phi[w,z]=soliton(x[w],t[z],c,a)
z=z+1
z=0
w=w+1
# phi=np.empty((xpoints,tpoints))
# for i in range(xpoints):
# phi[i,:]=soliton(x[i],t,c,a)
#
# remember the x[i] thing
# how dr granger did it
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[10,10]==soliton(x[10],t[10],c,a)
"""
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
"""
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    plt.plot(x, phi[:,i])
    plt.title("Soliton Wave at t[{0}]".format(i))
    plt.xlabel("Position x")
    plt.ylabel("Soliton Wave Amplitude")
    plt.tick_params(direction='out')
    plt.yscale("log")
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
"""
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
"""
# YOUR CODE HERE
interact(plot_soliton_data,i=(0,199,10));
assert True # leave this for grading the interact with plot_soliton_data cell
"""
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation
"""
|
Jay-Oh-eN/data-science-workshops | modeling_data.ipynb | mit | import pandas as pd
import matplotlib as plt
# draw plots in notebook
%matplotlib inline
# make plots SVG (higher quality)
%config InlineBackend.figure_format = 'svg'
# more time/compute intensive to parse dates. but we know we definitely have/need them
df = pd.read_csv('data/sf_listings.csv', parse_dates=['last_review'], infer_datetime_format=True)
df_reviews = pd.read_csv('data/reviews.csv', parse_dates=['date'], infer_datetime_format=True)
df_reviews.date[0]
df.head()
# display general diagnostic info
df.info()
df_reviews.head()
# index DataFrame on listing_id in order to join datasets
reindexed_df = df_reviews.set_index('listing_id')
reindexed_df.head()
# remember the original id in a column to group on
df['listing_id'] = df['id']
df_listing = df.set_index('id')
df_listing.head()
"""
Explanation: Making Inferences: Do AirBnBs Cause Rents to Increase?
Time Series in pandas
End of explanation
"""
# join the listing information with the review information
review_timeseries = df_listing.join(reindexed_df)
print review_timeseries.columns
review_timeseries.head()
# nothing new/interesting here...
review_timeseries.groupby('listing_id').count()['name'].hist(bins=100, figsize=(12,6));
# causes Python to crash; let's see if there is a better way
# review_timeseries.groupby(['neighbourhood','date']).count()
# let's try a pivot table...
reviews_over_time = pd.crosstab(review_timeseries.date, review_timeseries.neighbourhood)
reviews_over_time.head()
"""
Explanation: Star Schema (facts vs. dimensions)
In our case, the individual review events are the facts and listings themselves are the dimensions.
End of explanation
"""
# let's look at some particular neighborhoods
neighborhoods = df.neighbourhood.unique()
print neighborhoods
# a little noisy
reviews_over_time[['Mission', 'South of Market', 'Noe Valley']].plot(figsize=(12,6))
# smooth by resampling by month
reviews_over_time.resample('M').mean()[['Mission', 'South of Market', 'Noe Valley']].plot(figsize=(12,6))
"""
Explanation:
End of explanation
"""
# Exercise 1 Solution
"""
Explanation: Pandas Resample String convention
Interesting things of note:
Each neighborhood has an activity spike in Fall 2014 and Summer 2015.
Likely a late summer vacation surge (since that is when SF has nicest weather :)
It is periodic and the magnitude of the increase is itself increasing (good news for AirBnB!)...
Exercise 1: Spotting Trends
Using the following functions, find which columns correlate with increased activity (# of reviews and reviews per month):
* pandas correlation function
* Seaborn Heatmaps
End of explanation
"""
from sklearn import linear_model
features = df[['host_name', 'neighbourhood', 'room_type', 'minimum_nights','number_of_reviews', \
'calculated_host_listings_count', 'availability_365']]
labels = df['price']
# no price!
features.head()
# Categorical -> One Hot Encoding
# http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-categorical-features
dummies = pd.get_dummies(features)
# sklearn likes matrices
feature_matrix = dummies.as_matrix()
labels.as_matrix()
feature_matrix
# Initialize and Fit sklearn model
model = linear_model.LinearRegression()
clf = model.fit(feature_matrix, labels.as_matrix())
# How well did we do?
clf.score(feature_matrix, labels.as_matrix())
print "There are {0} features...".format(len(clf.coef_))
clf.coef_
"""
Explanation: Correlation vs. Regression
The two are actually very similar mathematically...
At least inferentially: http://stats.stackexchange.com/questions/2125/whats-the-difference-between-correlation-and-simple-linear-regression
Introduction to Machine Learning
slideshow!
Model Evaluation: Finding Under (or over) valued Listings
End of explanation
"""
# Remove the name column; we are probably overfitting...
no_name = features.copy()
no_name.pop('host_name')
no_names_feature_m = pd.get_dummies(no_name).as_matrix()
model = linear_model.LinearRegression(normalize=True)
clf = model.fit(no_names_feature_m, labels.as_matrix())
# Turns out the name feature is highly predictive...
# but not very useful: https://www.kaggle.com/wiki/Leakage
clf.score(no_names_feature_m, labels.as_matrix())
len(clf.coef_)
# We need more and better features
df2 = pd.read_csv('data/listings_full.csv')
df2.columns
df2.head()
# get a snapshot of some of the columns in the center of the matrix
df2.iloc[1:5, 40:60]
# optimistically, let's just use a few key features to start. Remember Occam's razor...
select_features = df2[['host_has_profile_pic' ,'host_identity_verified', 'host_listings_count','host_response_time', 'host_acceptance_rate', 'host_is_superhost', 'transit', 'neighbourhood_cleansed','is_location_exact', 'property_type', 'room_type', 'accommodates','bathrooms','bedrooms','beds']]
select_features.head()
# more feature engineering: fill in missing data, since nulls will break our model
select_features = select_features.fillna({'host_response_time': 'NA', 'host_acceptance_rate': '-1%'})
select_features.info()
# convert the percentage as a string into a float
select_features.host_acceptance_rate = select_features.host_acceptance_rate.str.strip('%').astype(float) / 100
# Binarize the transit column: True when no transit info was provided, False otherwise
select_features.transit = select_features.transit.isnull()
select_features.transit
# One last fill in case we missed any nulls
dummies = pd.get_dummies(select_features).fillna(0)
feature_matrix = dummies.as_matrix()
# Price as a currency string -> price as a float
labels = df2.price.str.strip('$').str.replace(',', '').astype(float)
# initialize model again
model = linear_model.LinearRegression(normalize=True)
clf = model.fit(feature_matrix, labels)
# much better!
clf.score(feature_matrix, labels)
# a sweet spot in between over and under fitting
len(clf.coef_)
"""
Explanation: R-squared: https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
"""
# Predict what we should price listing #1000 at given its features
clf.predict(feature_matrix[1100])
# Looks like it is overpriced...
df2.iloc[1100].price
# And it shows... there are only 2 reviews per month
df2.iloc[1100]
# Where the top listing have 10+ reviews per month
df2.sort_values('reviews_per_month', ascending=False).head()
# Zip together our column names with our beta coefficients
coefficients = zip(dummies.columns, clf.coef_)
# Most significant
sorted(coefficients, key=lambda coef: coef[1], reverse=True)[:10]
# Least significant
sorted(coefficients, key=lambda coef: coef[1])[:10]
"""
Explanation: Making Predictions: How should I price my Listing?!?
End of explanation
"""
|
christinahedges/PyKE | docs/source/tutorials/ipython_notebooks/whatsnew31.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from pyke.utils import module_output_to_channel, channel_to_module_output
module_output_to_channel(module=19, output=3)
channel_to_module_output(67)
"""
Explanation: What's new in PyKE 3.1?
Utility functions
PyKE includes two convenience functions to convert between module.output pairs and channel numbers, and vice versa:
End of explanation
"""
from pyke.utils import KeplerQualityFlags
KeplerQualityFlags.decode(1)
"""
Explanation: PyKE 3.1 includes the KeplerQualityFlags class, which encodes the meaning of the Kepler QUALITY bitmask flags as documented in the Kepler Archive Manual (Table 2.3):
End of explanation
"""
KeplerQualityFlags.decode(1 + 1024 + 1048576)
"""
Explanation: It can also handle multiple flags:
End of explanation
"""
KeplerQualityFlags.decode(KeplerQualityFlags.DEFAULT_BITMASK)
KeplerQualityFlags.decode(KeplerQualityFlags.CONSERVATIVE_BITMASK)
"""
Explanation: A few quality flags are already computed:
End of explanation
"""
from pyke import KeplerTargetPixelFile
"""
Explanation: Target Pixel File (TPF)
PyKE 3.1 includes a class called KeplerTargetPixelFile, which is used to handle target pixel files:
End of explanation
"""
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz')
"""
Explanation: A KeplerTargetPixelFile can be instantiated either from a local file or a url:
End of explanation
"""
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
"""
Explanation: Additionally, we can mask out cadences that are flagged using the quality_bitmask argument in the constructor:
End of explanation
"""
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',
aperture_mask='kepler-pipeline',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf.aperture_mask
"""
Explanation: Furthermore, we can mask out pixel values using the aperture_mask argument. The default behaviour is to use
all pixels that have real values. This argument can also take the string value 'kepler-pipeline', in which case the default aperture used by Kepler's pipeline is applied.
End of explanation
"""
tpf.header(ext=0)
"""
Explanation: The TPF object stores both the data and some metadata, e.g., channel number, EPIC number, reference column and row, module, and shape. The whole header is also available:
End of explanation
"""
tpf.flux.shape
"""
Explanation: The pixel flux time series can be accessed using the flux property:
End of explanation
"""
tpf.plot(frame=1)
"""
Explanation: This shows that our TPF is a 35 x 35 image recorded over 3209 cadences.
One can visualize the pixel data at a given cadence using the plot method:
End of explanation
"""
lc = tpf.to_lightcurve()
plt.figure(figsize=[17, 4])
plt.plot(lc.time, lc.flux)
"""
Explanation: We can perform aperture photometry using the method to_lightcurve:
End of explanation
"""
from pyke.lightcurve import KeplerLightCurveFile
klc = KeplerLightCurveFile('https://archive.stsci.edu/missions/k2/lightcurves/'
'c14/200100000/82000/ktwo200182949-c14_llc.fits',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
sap_lc = klc.SAP_FLUX
plt.figure(figsize=[17, 4])
plt.plot(lc.time, lc.flux)
plt.plot(sap_lc.time, sap_lc.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
"""
Explanation: Light Curves
Let's see how the previous light curve compares against the 'SAP_FLUX' produced by Kepler's pipeline. For that, we are going to explore the KeplerLightCurveFile class:
End of explanation
"""
klc_corrected = klc.compute_cotrended_lightcurve(cbvs=range(1, 17))
plt.figure(figsize=[17, 4])
plt.plot(klc_corrected.time, klc_corrected.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
pdcsap_lc = klc.PDCSAP_FLUX
plt.figure(figsize=[17, 4])
plt.plot(klc_corrected.time, klc_corrected.flux)
plt.plot(pdcsap_lc.time, pdcsap_lc.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
"""
Explanation: Now, let's correct this light curve by fitting cotrending basis vectors. That can be achieved either with the KeplerCBVCorrector class or with compute_cotrended_lightcurve in KeplerLightCurveFile. Let's try the latter:
End of explanation
"""
from pyke.prf import PRFPhotometry, SceneModel, SimpleKeplerPRF
"""
Explanation: Pixel Response Function (PRF) Photometry
PyKE 3.1 also includes tools to perform PRF Photometry:
End of explanation
"""
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'201500000/43000/ktwo201543306-c14_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf.plot(frame=100)
scene = SceneModel(prfs=[SimpleKeplerPRF(channel=tpf.channel, shape=tpf.shape[1:],
column=tpf.column, row=tpf.row)])
"""
Explanation: For that, let's create a SceneModel, which will be fitted to the target in the following TPF:
End of explanation
"""
from oktopus.prior import UniformPrior
unif_prior = UniformPrior(lb=[0, 1090., 706., 0.],
ub=[1e5, 1096., 712., 1e5])
scene.plot(*unif_prior.mean)
prf_phot = PRFPhotometry(scene_model=scene, prior=unif_prior)
results = prf_phot.fit(tpf.flux + tpf.flux_bkg)
plt.imshow(prf_phot.residuals[1], origin='lower')
plt.colorbar()
flux = results[:, 0]
xcenter = results[:, 1]
ycenter = results[:, 2]
bkg_density = results[:, 3]
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, xcenter)
plt.ylabel('Column position')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, ycenter)
plt.ylabel('Row position')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, bkg_density)
plt.ylabel('Background density')
plt.xlabel('Time (BJD - 2454833)')
"""
Explanation: We also need to define prior distributions on the parameters of our SceneModel. Those parameters are
the flux, center positions of the target, and a constant background level. We can do that with oktopus:
End of explanation
"""
|
setiQuest/ML4SETI | tutorials/Step_5d_Build_CNN_Tf_PowerAI.ipynb | apache-2.0 | import requests
import json
#import ibmseti
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import pickle
import time
#!sudo pip install sklearn
import os
from sklearn.metrics import confusion_matrix
from sklearn import metrics
"""
Explanation: <a href="https://www.cognitiveclass.ai"><img src = "https://cognitiveclass.ai/wp-content/themes/bdu3.0/static/images/cc-logo.png" align = left></a>
<br>
<br>
SETI CNN using TF and Binary DS
End of explanation
"""
### SET YOUR TEAM NAME HERE! Use this folder to save intermediate results
team_name = 'Saeed_team'
mydatafolder = os.path.join( os.environ['PWD'], team_name ) #Change my_data_folder to your team name
if os.path.exists(mydatafolder) is False:
os.makedirs(mydatafolder)
print mydatafolder
"""
Explanation: Set your team folder
End of explanation
"""
!wget --output-document SETI.zip https://ibm.box.com/shared/static/jhqdhcblhua5dx2t7ixwm88okitjrl6l.zip
!unzip -o SETI.zip
import SETI
"""
Explanation: Import dataset reader
The following cell loads Python code that reads the SETI dataset.
End of explanation
"""
ds_directory = mydatafolder + '/SETI/SETI_ds_64x128/'
print os.popen("ls -lrt "+ ds_directory).read() # to verify
"""
Explanation: Download data
End of explanation
"""
#from tensorflow.examples.tutorials.mnist import input_data
#dataset = input_data.read_data_sets("MNIST_data/", one_hot=True)
dataset = SETI.read_data_sets(ds_directory, one_hot=True, validation_size=0)
dataset.train.images.shape
"""
Explanation: Load the SETI data
End of explanation
"""
# Parameters
decay_rate=0.96
decay_steps=1000
learning_rate = 0.005
training_epochs = 200
batch_size = 50
display_step = 100
#check point directory
chk_directory = mydatafolder+'/save/'
checkpoint_path = chk_directory+'model.ckpt'
n_classes = 4 # number of possible classifications for the problem
dropout = 0.50 # Dropout, probability to keep units
height = 64 # height of the image in pixels
width = 128 # width of the image in pixels
n_input = width * height # number of pixels in one image
"""
Explanation: Network Parameters
End of explanation
"""
x = tf.placeholder(tf.float32, shape=[None, n_input])
y_ = tf.placeholder(tf.float32, shape=[None, n_classes])
x_image = tf.reshape(x, [-1,height,width,1])
x_image
"""
Explanation: Inputs
End of explanation
"""
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32])) # need 32 biases for 32 outputs
convolve1 = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1
h_conv1 = tf.nn.relu(convolve1)
conv1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2
conv1
"""
Explanation: Convolutional Layer 1
End of explanation
"""
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[64])) #need 64 biases for 64 outputs
convolve2= tf.nn.conv2d(conv1, W_conv2, strides=[1, 1, 1, 1], padding='SAME')+ b_conv2
h_conv2 = tf.nn.relu(convolve2)
conv2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 4, 4, 1], padding='SAME') #max_pool_2x2
conv2
"""
Explanation: Convolutional Layer 2
End of explanation
"""
input_layer = conv2
dim = input_layer.get_shape().as_list()
dim
dims= dim[1]*dim[2]*dim[3]
nodes1 = 1024
prv_layer_matrix = tf.reshape(input_layer, [-1, dims])
W_fc1 = tf.Variable(tf.truncated_normal([dims, nodes1], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[nodes1])) # need 1024 biases for 1024 outputs
h_fcl1 = tf.matmul(prv_layer_matrix, W_fc1) + b_fc1
fc_layer1 = tf.nn.relu(h_fcl1) # apply ReLU activation
fc_layer1
"""
Explanation: Fully Connected Layer 1
End of explanation
"""
keep_prob = tf.placeholder(tf.float32)
layer_drop1 = tf.nn.dropout(fc_layer1, keep_prob)
"""
Explanation: Dropout 1
End of explanation
"""
W_fc = tf.Variable(tf.truncated_normal([nodes1, n_classes], stddev=0.1)) #1024 neurons
b_fc = tf.Variable(tf.constant(0.1, shape=[n_classes])) # 10 possibilities for classes [0,1,2,3]
fc = tf.matmul(layer_drop1, W_fc) + b_fc
y_CNN= tf.nn.softmax(fc)
"""
Explanation: Fully Connected Layer 2
Readout Layer
End of explanation
"""
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=fc, labels=y_)) # pass pre-softmax logits (fc), not the softmax output
"""
Explanation: Loss function
End of explanation
"""
# Create a variable to track the global step.
global_step = tf.Variable(0, trainable=False)
# create learning_decay
lr = tf.train.exponential_decay( learning_rate,
global_step,
decay_steps,
decay_rate, staircase=True )
# Use the optimizer to apply the gradients that minimize the loss
# (and also increment the global step counter) as a single training step.
optimizer = tf.train.GradientDescentOptimizer(lr)
train_op = optimizer.minimize(cross_entropy, global_step=global_step)
#train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
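# As a sanity check on the schedule defined above, here is a hedged pure-Python
# sketch of what tf.train.exponential_decay computes when staircase=True;
# staircase_lr is a made-up helper used only for illustration.

```python
# Hedged sketch: with staircase=True the schedule is
#   lr = learning_rate * decay_rate ** floor(global_step / decay_steps)
def staircase_lr(base_lr, step, decay_steps, decay_rate):
    return base_lr * decay_rate ** (step // decay_steps)

print(staircase_lr(0.005, 2500, 1000, 0.96))  # after two full decay periods
```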
"""
Explanation: Training
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(y_CNN,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: Evaluation
End of explanation
"""
directory = os.path.dirname(chk_directory)
try:
os.stat(directory)
ckpt = tf.train.get_checkpoint_state(chk_directory)
print ckpt
except:
os.mkdir(directory)
"""
Explanation: Create checkpoint directory
End of explanation
"""
# Initializing the variables
init = tf.global_variables_initializer()
loss_values = []
with tf.Session() as sess:
X_test = dataset.test.images
y_test = dataset.test.labels
sess.run(init)
saver = tf.train.Saver(tf.global_variables())
    # load previously trained model if applicable
ckpt = tf.train.get_checkpoint_state(chk_directory)
if ckpt:
print "loading model: ",ckpt.model_checkpoint_path
#saver.restore(sess, ckpt.model_checkpoint_path)
#step = 0
num_examples = dataset.train.num_examples
# Training cycle
for epoch in range(training_epochs):
avg_loss = 0.
avg_accuracy = 0.
#dataset.shuffle_data()
total_batch = int(num_examples / batch_size)
# Loop over all batches
start = time.time()
for step in range(total_batch):
x_batch, y_batch = dataset.train.next_batch(batch_size,shuffle=True)
train_op.run(feed_dict={x: x_batch, y_: y_batch, keep_prob: dropout})
loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch,y_: y_batch,keep_prob: 1.})
avg_loss += loss / total_batch
avg_accuracy += acc / total_batch
            if step % display_step == 0:
# Calculate batch loss and accuracy
loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch,y_: y_batch,keep_prob: 1.})
#train_accuracy = accuracy.eval(feed_dict={x:x_batch, y_: y_batch, keep_prob: 0.5})
test_accuracy = sess.run(accuracy, feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.})
print("Iter " + str(step) + \
", Minibatch Loss= " + "{:.6f}".format(loss) + \
", Training Accuracy= " + "{:.5f}".format(acc) + \
", Test Accuracy= " + "{:.5f}".format(test_accuracy) )
# save model every 1 epochs
if epoch >= 0 and epoch % 1 == 0:
# Save model
#print ("model saved to {}".format(checkpoint_path))
#saver.save(sess, checkpoint_path, global_step = epoch)
end = time.time()
plr = sess.run(lr)
loss_values.append(avg_loss)
#print(sess.run(tf.train.global_step()))
print "Epoch:", '%04d' % (epoch+1) , ", Epoch time=" , "{:.5f}".format(end - start) , ", lr=", "{:.9f}".format(plr), ", cost=", "{:.9f}".format(avg_loss) ,", Acc=", "{:.9f}".format(avg_accuracy)
print("Optimization Finished!")
print ("model saved to {}".format(checkpoint_path))
saver.save(sess, checkpoint_path, global_step = (epoch+1)*step)
# Calculate accuracy for test images
#print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: X_test[0:30], y_: y_test[0:30], keep_prob: 1.}))
# Find the labels of test set
y_pred_lb = sess.run(tf.argmax(y_CNN,1), feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.})
y_pred = sess.run(y_CNN, feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.})
    # let's save the learned kernels
kernels_l1 = sess.run(tf.reshape(tf.transpose(W_conv1, perm=[2, 3, 0, 1]),[32,-1]))
kernels_l2 = sess.run(tf.reshape(tf.transpose(W_conv2, perm=[2, 3, 0, 1]),[32*64,-1]))
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.plot([np.mean(loss_values[i:i+5]) for i in range(len(loss_values))])
plt.show()
"""
Explanation: Training
End of explanation
"""
y_ = np.argmax(y_test[0:100],1) # ground truth
print metrics.classification_report(y_true= y_, y_pred= y_pred_lb)
print metrics.confusion_matrix(y_true= y_, y_pred= y_pred_lb)
print("Classification accuracy: %0.6f" % metrics.accuracy_score(y_true= y_, y_pred= y_pred_lb) )
print("Log Loss: %0.6f" % metrics.log_loss(y_true= y_, y_pred= y_pred, labels=range(4)) )
"""
Explanation: Evaluation
Accuracy depends on the number of epochs you set in the parameters section above.
End of explanation
"""
my_output_results = mydatafolder + '/' + 'DL_scores.csv'
with open(my_output_results, 'w') as csvfile:
np.savetxt(my_output_results, y_pred, delimiter=",")
print os.popen("ls -lrt "+ mydatafolder).read() # to verify
"""
Explanation: Generate CSV file for Scoreboard
Here's an example of what the CSV file should look like for submission to the scoreboard. Although, in this case, we only have 4 classes instead of 7.
NOTE: This uses the test set created in Step_5c, which only contain the BASIC4 test data set. The code challenge and hackathon will be based on the Primary Data Set which contains 7 signal classes, and different test set.
End of explanation
"""
!wget --output-document utils1.py http://deeplearning.net/tutorial/code/utils.py
import utils1
from utils1 import tile_raster_images
#from utils import tile_raster_images
import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline
image = Image.fromarray(tile_raster_images(kernels_l1, img_shape=(5, 5) ,tile_shape=(4, 8), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
image = Image.fromarray(tile_raster_images(kernels_l2, img_shape=(5, 5) ,tile_shape=(4, 12), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
import numpy as np
plt.rcParams['figure.figsize'] = (5.0, 5.0)
sampleimage1 = X_test[3]
plt.imshow(np.reshape(sampleimage1,[64,128]), cmap="gray")
# Launch the graph
with tf.Session() as sess:
sess.run(init)
saver = tf.train.Saver(tf.all_variables())
    # load previously trained model if applicable
ckpt = tf.train.get_checkpoint_state(chk_directory)
if ckpt:
print "loading model: ",ckpt.model_checkpoint_path
saver.restore(sess, ckpt.model_checkpoint_path)
ActivatedUnits1 = sess.run(convolve1,feed_dict={x:np.reshape(sampleimage1,[1,64*128],order='F'),keep_prob:1.0})
plt.figure(1, figsize=(20,20))
n_columns = 3
n_rows = 3
for i in range(9):
plt.subplot(n_rows, n_columns, i+1)
plt.title('Filter ' + str(i))
plt.imshow(ActivatedUnits1[0,:,:,i], interpolation="nearest", cmap="gray")
"""
Explanation: Viz
End of explanation
"""
|
turbomanage/training-data-analyst | blogs/babyweight/babyweight.ipynb | apache-2.0 | %%bash
pip install --upgrade tensorflow==1.4
pip install --ignore-installed --upgrade pytz==2018.4
pip uninstall -y google-cloud-dataflow
pip install --upgrade apache-beam[gcp]==2.6
"""
Explanation: <h1> Structured data prediction using Cloud ML Engine </h1>
This notebook illustrates:
<ol>
<li> Exploring a BigQuery dataset using Datalab
<li> Creating datasets for Machine Learning using Dataflow
<li> Creating a model using the high-level Estimator API
<li> Training on Cloud ML Engine
<li> Deploying model
<li> Predicting with model
</ol>
Before starting the lab, upgrade packages that are required for this notebook.
End of explanation
"""
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: Now you have to restart the kernel by clicking "Reset Session" in the menu bar so that the newly installed modules take effect.
After restarting the kernel, you can resume code execution from the next cell.
End of explanation
"""
query="""
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
"""
Explanation: <h1>Part 1: Data Analysis and Preparation</h1>
<h2> Exploring data </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
"""
def get_distinct_values(column_name):
sql = """
SELECT
{0},
COUNT(1) AS num_babies,
AVG(weight_pounds) AS avg_wt
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY
{0}
""".format(column_name)
return bq.Query(sql).execute().result().to_dataframe()
df = get_distinct_values('is_male')
df.plot(x='is_male', y='num_babies', kind='bar');
df.plot(x='is_male', y='avg_wt', kind='bar');
df = get_distinct_values('mother_age')
df = df.sort_values('mother_age')
df.plot(x='mother_age', y='num_babies');
df.plot(x='mother_age', y='avg_wt');
df = get_distinct_values('plurality')
df = df.sort_values('plurality')
df.plot(x='plurality', y='num_babies', logy=True, kind='bar');
df.plot(x='plurality', y='avg_wt', kind='bar');
df = get_distinct_values('gestation_weeks')
df = df.sort_values('gestation_weeks')
df.plot(x='gestation_weeks', y='num_babies', logy=True, kind='bar', color='royalblue');
df.plot(x='gestation_weeks', y='avg_wt', kind='bar', color='royalblue');
"""
Explanation: Let's write a query to find the unique values for each of the columns and the count of those values.
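The same per-column summary can be sketched locally in pure Python on a toy sample (the rows below are invented for illustration) — this mirrors what the GROUP BY query above computes in BigQuery:

```python
from collections import defaultdict

# Toy stand-in for the natality table; the real aggregation runs in BigQuery.
rows = [
    {'is_male': True,  'weight_pounds': 7.5},
    {'is_male': False, 'weight_pounds': 6.8},
    {'is_male': True,  'weight_pounds': 8.1},
    {'is_male': False, 'weight_pounds': 7.0},
]

groups = defaultdict(list)
for r in rows:
    groups[r['is_male']].append(r['weight_pounds'])

# num_babies and avg_wt per distinct value, as in the SQL above
summary = {k: {'num_babies': len(v), 'avg_wt': sum(v) / len(v)}
           for k, v in groups.items()}
print(summary)
```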
End of explanation
"""
import apache_beam as beam
import datetime
def to_csv(rowdict):
# pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple births, but that the error rate in determining the exact number
# is high in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality']-1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key (encode so this also works on Python 3)
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
OUTPUT_DIR = './preproc'
else:
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'teardown_policy': 'TEARDOWN_ALWAYS',
'max_num_workers': 3, # CHANGE THIS IF YOU HAVE MORE QUOTA
'no_save_main_session': True
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options=opts)
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
"""
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
preprocess(in_test_mode=False)
"""
Explanation: All these factors seem to play a part in the baby's weight. Male babies are heavier on average than female babies. Teenaged and older moms tend to have lower-weight babies. Twins, triplets, etc. are lower weight than single births, and preemies weigh in lower still. In addition, it is important to check whether you have enough data (number of babies) for each input value. Otherwise, the model's predictions for input values that don't have enough data may not be reliable.
<p>
In the rest of this notebook, we will use machine learning to combine all of these factors to come up with a prediction of a baby's weight.
<h2> Creating a ML dataset using Dataflow </h2>
<p>
I'm going to use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
<ol>
<li> Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
<li> Read from BigQuery directly using TensorFlow.
<li> Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
</ol>
<p>
However, in this case, I want to do some preprocessing. I want to modify the data such that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface. Therefore, I am using Cloud Dataflow for the preprocessing.
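The key trick in `to_csv` above is that a Beam `FlatMap` may yield several outputs per input: each natality row becomes two training rows, one pretending no ultrasound was performed. A minimal pure-Python sketch of that doubling (the helper name is invented for illustration):

```python
import copy

def two_views(rowdict):
    # One view where the ultrasound results are hidden, one where they are kept,
    # mirroring the no_ultrasound / w_ultrasound pair in to_csv above.
    no_ultrasound = copy.deepcopy(rowdict)
    no_ultrasound['is_male'] = 'Unknown'
    no_ultrasound['plurality'] = 'Multiple(2+)' if rowdict['plurality'] > 1 else 'Single(1)'
    yield no_ultrasound
    yield rowdict

row = {'is_male': 'True', 'plurality': 2, 'weight_pounds': 6.1}
views = list(two_views(row))
print(len(views), views[0]['is_male'], views[0]['plurality'])
```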
End of explanation
"""
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
"""
Explanation: You may get a warning about access scopes. It's safe to ignore this.
Note that after you launch this, the actual processing happens in the Cloud. Go to the Dataflow section of the GCP web console to monitor the running job. If you click the job, you should get a screen like this. It took about <b>55 minutes</b> for me.
<img src="dataflow.png" width="500"/>
Once the job has completed, run the cell below to check the location of the processed files.
End of explanation
"""
import shutil
import numpy as np
import tensorflow as tf
"""
Explanation: <h1>Part 2: Developing a Machine Learning Model using TensorFlow and Cloud ML Engine</h1>
<h2> Creating a TensorFlow model using the Estimator API </h2>
<p>
First, write an input_fn to read the data.
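The `read_dataset` code below leans on `tf.decode_csv`, which fills empty fields from `record_defaults` and returns one value per column; the key is dropped and the label popped off. Its behaviour can be sketched in plain Python (a hypothetical helper, not part of the notebook's pipeline):

```python
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]

def parse_line(line):
    # Mimic tf.decode_csv: empty fields fall back to the column default,
    # and the default's type decides whether the field is parsed as a float.
    values = []
    for raw, default in zip(line.split(','), DEFAULTS):
        if raw == '':
            values.append(default[0])
        elif isinstance(default[0], float):
            values.append(float(raw))
        else:
            values.append(raw)
    features = dict(zip(CSV_COLUMNS, values))
    features.pop('key')                    # the hash key is not a model input
    label = features.pop('weight_pounds')  # the label is separated from the features
    return features, label

feats, label = parse_line('7.25,True,26,Single(1),39,abc123')
print(label, feats['mother_age'])
```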
End of explanation
"""
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
TRAIN_STEPS = 1000
def read_dataset(prefix, pattern, batch_size=512):
# use prefix to create filename
filename = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, prefix, pattern)
if prefix == 'train':
mode = tf.estimator.ModeKeys.TRAIN
num_epochs = None # indefinitely
else:
mode = tf.estimator.ModeKeys.EVAL
num_epochs = 1 # end-of-input after this
# the actual input function passed to TensorFlow
def _input_fn():
# could be a path to one file or a file pattern.
input_file_names = tf.train.match_filenames_once(filename)
filename_queue = tf.train.string_input_producer(
input_file_names, shuffle=True, num_epochs=num_epochs)
# read CSV
reader = tf.TextLineReader()
_, value = reader.read_up_to(filename_queue, num_records=batch_size)
if mode == tf.estimator.ModeKeys.TRAIN:
value = tf.train.shuffle_batch([value], batch_size, capacity=10*batch_size,
min_after_dequeue=batch_size, enqueue_many=True,
allow_smaller_final_batch=False)
value_column = tf.expand_dims(value, -1)
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
features.pop(KEY_COLUMN)
label = features.pop(LABEL_COLUMN)
return features, label
return _input_fn
"""
Explanation: We may get a few warnings when we run this. Don't worry about them.
End of explanation
"""
def get_wide_deep():
# define column types
is_male,mother_age,plurality,gestation_weeks = \
[\
tf.feature_column.categorical_column_with_vocabulary_list('is_male',
['True', 'False', 'Unknown']),
tf.feature_column.numeric_column('mother_age'),
tf.feature_column.categorical_column_with_vocabulary_list('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
tf.feature_column.numeric_column('gestation_weeks')
]
# discretize
age_buckets = tf.feature_column.bucketized_column(mother_age,
boundaries=np.arange(15,45,1).tolist())
gestation_buckets = tf.feature_column.bucketized_column(gestation_weeks,
boundaries=np.arange(17,47,1).tolist())
# sparse columns are wide
wide = [is_male,
plurality,
age_buckets,
gestation_buckets]
# feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(wide, hash_bucket_size=20000)
embed = tf.feature_column.embedding_column(crossed, 3)
# continuous columns are deep
deep = [mother_age,
gestation_weeks,
embed]
return wide, deep
"""
Explanation: Next, define the feature columns.
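The `bucketized_column` calls above follow a simple convention: with sorted boundaries, a value below the first boundary goes to bucket 0, a value in [boundaries[i-1], boundaries[i]) goes to bucket i, and anything past the last boundary goes to the final bucket. A stdlib sketch of that mapping:

```python
import bisect

def bucketize(value, boundaries):
    # Same convention as tf.feature_column.bucketized_column:
    # value < boundaries[0] -> bucket 0; boundaries[0] <= value < boundaries[1] -> bucket 1; ...
    return bisect.bisect_right(boundaries, value)

age_boundaries = list(range(15, 45))   # one bucket per year of mother_age, as above
print(bucketize(14, age_boundaries), bucketize(15, age_boundaries), bucketize(44, age_boundaries))
```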
End of explanation
"""
def serving_input_fn():
feature_placeholders = {
'is_male': tf.placeholder(tf.string, [None]),
'mother_age': tf.placeholder(tf.float32, [None]),
'plurality': tf.placeholder(tf.string, [None]),
'gestation_weeks': tf.placeholder(tf.float32, [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
"""
Explanation: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
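The `tf.expand_dims(tensor, -1)` step above turns each batch of scalars into a batch of length-1 vectors, which is the shape the feature columns expect. In plain list terms (illustrative helper only):

```python
def expand_dims_last(batch):
    # [26.0, 29.0] (shape [None]) -> [[26.0], [29.0]] (shape [None, 1]),
    # mirroring tf.expand_dims(tensor, -1) in the serving input function.
    return [[x] for x in batch]

print(expand_dims_last([26.0, 29.0]))
```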
End of explanation
"""
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
from tensorflow.contrib.learn.python.learn import learn_runner
PATTERN = "00000-of-" # process only one of the shards, for testing purposes
def train_and_evaluate(output_dir):
wide, deep = get_wide_deep()
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir=output_dir,
linear_feature_columns=wide,
dnn_feature_columns=deep,
dnn_hidden_units=[64, 32])
train_spec=tf.estimator.TrainSpec(
input_fn=read_dataset('train', PATTERN),
max_steps=TRAIN_STEPS)
exporter = tf.estimator.FinalExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn=read_dataset('eval', PATTERN),
steps=None,
exporters=exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
shutil.rmtree('babyweight_trained', ignore_errors=True) # start fresh each time
train_and_evaluate('babyweight_trained')
"""
Explanation: Finally, train!
End of explanation
"""
%bash
grep "^def" babyweight/trainer/model.py
"""
Explanation: Now that we have the TensorFlow code working on a subset of the data (in the code above, I was reading only the 00000-of-x file), we can package the TensorFlow code up as a Python module and train it on Cloud ML Engine.
<p>
<h2> Training on Cloud ML Engine </h2>
<p>
Training on Cloud ML Engine requires:
<ol>
<li> Making the code a Python package
<li> Using gcloud to submit the training code to Cloud ML Engine
</ol>
<p>
The code in model.py is the same as in the above cells. I just moved it to a file so that I could package it up as a module.
(explore the <a href="babyweight/trainer">directory structure</a>).
End of explanation
"""
%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python -m trainer.task \
--bucket=${BUCKET} \
--output_dir=babyweight_trained \
--job-dir=./tmp \
--pattern="00000-of-" --train_steps=1000
"""
Explanation: After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_steps lines so that I am not trying to boil the ocean on my laptop). Even then, this takes about <b>a minute</b> in which you won't see any output ...
End of explanation
"""
%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
#gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version 1.4 \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_steps=100000
"""
Explanation: Once the code works in standalone mode, you can run it on Cloud ML Engine. Because this is on the entire dataset, it will take a while. The training run took about <b> 30 min </b> for me. You can monitor the job from the GCP console in the Cloud Machine Learning Engine section.
End of explanation
"""
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/babyweight/trained_model'.format(BUCKET))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
"""
Explanation: Training finished with an RMSE of about 1 lb. Obviously, this is our first model. We could probably add in some features and do some hyper-parameter tuning to get to a lower RMSE. I'll leave that to you. If you create a better model, I'd love to hear about it -- please do write a short blog post about what you did, and tweet it at me -- @lak_gcp.
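For reference, the metric being quoted — root mean squared error, in pounds — is just the square root of the mean squared prediction error. A stdlib sketch (helper name invented for illustration):

```python
import math

def rmse(y_true, y_pred):
    # Root mean squared error: the typical size of a prediction miss, in pounds.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

print(rmse([7.5, 6.8, 8.1], [7.0, 7.0, 7.0]))
```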
End of explanation
"""
%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter
%bash
MODEL_NAME="babyweight"
MODEL_VERSION="soln"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version 1.4
"""
Explanation: <table width="70%">
<tr><td><img src="weights.png"/></td><td><img src="rmse.png" /></tr>
</table>
<h2> Deploying the trained model </h2>
<p>
Deploying the trained model to act as a REST web service is a simple gcloud call.
End of explanation
"""
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials)
request_data = {'instances':
[
{
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
{
'is_male': 'False',
'mother_age': 29.0,
'plurality': 'Single(1)',
'gestation_weeks': 38
},
{
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Triplets(3)',
'gestation_weeks': 39
},
{
'is_male': 'Unknown',
'mother_age': 29.0,
'plurality': 'Multiple(2+)',
'gestation_weeks': 38
},
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'babyweight', 'soln')
response = api.projects().predict(body=request_data, name=parent).execute()
print(json.dumps(response, sort_keys = True, indent = 4))
"""
Explanation: Once this has been created, it will display 'done'.
<h2> Using the model to predict </h2>
<p>
Send a JSON request to the endpoint of the service to make it predict a baby's weight ... I am going to try out how well the model would have predicted the weights of our two kids and a couple of variations while we are at it ...
End of explanation
"""
|
QuantCrimAtLeeds/PredictCode | examples/Scripts/Reload stscan predictions.ipynb | artistic-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import open_cp.scripted
import open_cp.scripted.analysis as analysis
loaded = open_cp.scripted.Loader("stscan_preds.pic.xz")
loaded.timed_points.time_range
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
analysis.plot_data_scatter(loaded, axes[0])
analysis.plot_data_grid(loaded, axes[1])
next(iter(loaded))
times = [x[1] for x in loaded]
preds = [x[2] for x in loaded]
fig, axes = plt.subplots(ncols=2, figsize=(16,7))
for ax, i in zip(axes, [0, 60]):
analysis.plot_prediction(loaded, preds[i], ax)
ax.set_title(times[i])
"""
Explanation: Reload the space-time scan predictions
Shows how to make use of the data produced by the corresponding scripted run, saved as stscan_preds.pic.xz.
End of explanation
"""
betas = analysis.hit_counts_to_beta("stscan_counts.csv")
fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax)
fig, ax = plt.subplots(figsize=(12,8))
analysis.plot_betas(betas, ax, range(1,21))
"""
Explanation: Fit binomial model instead
Use a beta prior
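The conjugate update behind `hit_counts_to_beta` above is simple to state: a Beta(α, β) prior on the hit rate, updated with k hits out of n attempts, gives a Beta(α+k, β+n−k) posterior. A hedged sketch (the actual bookkeeping lives in `open_cp.scripted.analysis`; these helper names are invented):

```python
def update_beta(alpha, beta, hits, trials):
    # Beta prior + Binomial likelihood -> Beta posterior (conjugacy).
    return alpha + hits, beta + (trials - hits)

def beta_mean(alpha, beta):
    # Posterior mean of the hit rate.
    return alpha / (alpha + beta)

a, b = update_beta(1.0, 1.0, hits=3, trials=5)   # flat prior, 3 hits out of 5
print(a, b, beta_mean(a, b))
```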
End of explanation
"""
import collections, statistics, datetime
tps = loaded.timed_points.bin_timestamps(datetime.datetime(2016,1,1), datetime.timedelta(days=1))
c = collections.Counter(tps.timestamps)
statistics.mean(c.values())
"""
Explanation: What does this difference actually mean?
Suppose we pick 5% coverage. There is a big gap between the curves there.
End of explanation
"""
import scipy.special
import numpy as np
def BetaBinom(alpha,beta,n,k):
"""http://www.channelgrubb.com/blog/2015/2/27/beta-binomial-in-python"""
part_1 = scipy.special.comb(n,k)
part_2 = scipy.special.betaln(k+alpha,n-k+beta)
part_3 = scipy.special.betaln(alpha,beta)
result = (np.log(part_1) + part_2)- part_3
return np.exp(result)
fig, axes = plt.subplots(ncols=len(betas), figsize=(16,5))
n = 5
for ax, key in zip(axes, betas):
beta = betas[key][5]
p = [BetaBinom(*beta.args,n,k) for k in range(0,n+1)]
ax.bar(np.arange(n+1), p)
ax.set(xlabel="Number of crimes captured", ylabel="Probability")
ax.set_title("{}; {} total events.".format(key, n))
"""
Explanation: So we have about 5 crime events a day, on average.
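As a sanity check on the `BetaBinom` function above, the same beta-binomial pmf can be written with only the standard library: its values sum to one over k = 0…n, and with a flat Beta(1, 1) prior it is uniform, 1/(n+1) for every k.

```python
import math

def log_beta(x, y):
    # log of the Beta function via log-gamma, to avoid overflow.
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def beta_binom_pmf(alpha, beta, n, k):
    # P(k | n, alpha, beta) for the beta-binomial distribution.
    log_comb = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(log_comb + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))

n = 5
print([round(beta_binom_pmf(1, 1, n, k), 4) for k in range(n + 1)])
```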
End of explanation
"""
|