๊ทธ๋ฆฌ๊ณ  ๊ฐ ๋…ธ๋“œ์— ๋Œ€ํ•ด ์ƒ์„ฑ๋œ dict๋“ค์„ ์ €์žฅํ•˜์ž.
for user in users: user["shortest_paths"] = shortest_paths_from(user)
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
๊ทธ๋Ÿฌ๋ฉด ์ด์ œ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์„ ๊ตฌํ•  ์ค€๋น„๊ฐ€ ๋‹ค ๋˜์—ˆ๋‹ค. ์ด์ œ ๊ฐ๊ฐ์˜ ์ตœ๋‹จ ๊ฒฝ๋กœ์— ํฌํ•จ๋˜๋Š” ๊ฐ ๋…ธ๋“œ์˜ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์— $1/n$์„ ๋”ํ•ด ์ฃผ์ž.
for user in users: user["betweenness_centrality"] = 0.0 for source in users: source_id = source["id"] for target_id, paths in source["shortest_paths"].items(): # python2์—์„œ๋Š” items ๋Œ€์‹  iteritems ์‚ฌ์šฉ if source_id < target_id: # ์ž˜๋ชปํ•ด์„œ ๋‘ ๋ฒˆ ์„ธ์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•˜์ž num_paths = len(paths) # ์ตœ๋‹จ ๊ฒฝ๋กœ๊ฐ€ ๋ช‡ ๊ฐœ ์กด์žฌํ•˜๋Š”๊ฐ€? contrib = 1 / num_paths # ์ค‘์‹ฌ์„ฑ์— ๊ธฐ์—ฌํ•˜๋Š” ๊ฐ’ for path in paths: for id in path: if id not in [source_id, target_id]: users[id]["betweenness_centrality"] += contrib for user in users: print(user["id"], user["betweenness_centrality"])
์‚ฌ์šฉ์ž 0๊ณผ 9์˜ ์ตœ๋‹จ ๊ฒฝ๋กœ ์‚ฌ์ด์—๋Š” ๋‹ค๋ฅธ ์‚ฌ์šฉ์ž๊ฐ€ ์—†์œผ๋ฏ€๋กœ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์ด 0์ด๋‹ค. ๋ฐ˜๋ฉด ์‚ฌ์šฉ์ž 3, 4, 5๋Š” ์ตœ๋‹จ ๊ฒฝ๋กœ์ƒ์— ๋ฌด์ฒ™ ๋นˆ๋ฒˆํ•˜๊ฒŒ ์œ„์น˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋†’์€ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์„ ๊ฐ€์ง„๋‹ค. ๋Œ€๊ฒŒ ์ค‘์‹ฌ์„ฑ์˜ ์ ˆ๋Œ“๊ฐ’ ์ž์ฒด๋Š” ํฐ ์˜๋ฏธ๋ฅผ ๊ฐ€์ง€์ง€ ์•Š๊ณ , ์ƒ๋Œ€๊ฐ’๋งŒ์ด ์˜๋ฏธ๋ฅผ ๊ฐ€์ง„๋‹ค. ๊ทธ ์™ธ์— ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ๋Š” ์ค‘์‹ฌ์„ฑ ์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜๋Š” ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ(closeness centrality)์ด๋‹ค. ๋จผ์ € ๊ฐ ์‚ฌ์šฉ์ž์˜ ์›์ ‘์„ฑ(farness)์„ ๊ณ„์‚ฐํ•œ๋‹ค. ์›์ ‘์„ฑ์ด๋ž€ from_user์™€ ๋‹ค๋ฅธ ๋ชจ๋“  ์‚ฌ์šฉ์ž์˜ ์ตœ๋‹จ ๊ฒฝ๋กœ๋ฅผ ํ•ฉํ•œ ๊ฐ’์ด๋‹ค.
# # closeness centrality # def farness(user): """๋ชจ๋“  ์‚ฌ์šฉ์ž์™€์˜ ์ตœ๋‹จ ๊ฑฐ๋ฆฌ ํ•ฉ""" return sum(len(paths[0]) for paths in user["shortest_paths"].values())
์ด์ œ ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ์€ ๊ฐ„๋‹จํžˆ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋‹ค.
for user in users: user["closeness_centrality"] = 1 / farness(user) for user in users: print(user["id"], user["closeness_centrality"])
๊ณ„์‚ฐ๋œ ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ์˜ ํŽธ์ฐจ๋Š” ๋”์šฑ ์ž‘๋‹ค. ๋„คํŠธ์›Œํฌ ์ค‘์‹ฌ์— ์žˆ๋Š” ๋…ธ๋“œ์กฐ์ฐจ ์™ธ๊ณฝ์— ์œ„์น˜ํ•œ ๋…ธ๋“œ๋“ค๋กœ๋ถ€ํ„ฐ ๋ฉ€๋ฆฌ ๋–จ์–ด์ ธ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์—ฌ๊ธฐ์„œ ๋ดค๋“ฏ์ด ์ตœ๋‹จ ๊ฒฝ๋กœ๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ๊ฒƒ์€ ๊ฝค๋‚˜ ๋ณต์žกํ•˜๋‹ค. ๊ทธ๋ ‡๊ธฐ ๋•Œ๋ฌธ์— ํฐ ๋„คํŠธ์›Œํฌ์—์„œ๋Š” ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ์„ ์ž์ฃผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š”๋‹ค. ๋œ ์ง๊ด€์ ์ด์ง€๋งŒ ๋ณดํ†ต ๋” ์‰ฝ๊ฒŒ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋Š” ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ(eigenvector centrality)์„ ๋” ์ž์ฃผ ์‚ฌ์šฉํ•œ๋‹ค. 21.2 ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ธฐ ์ „์— ๋จผ์ € ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ์‚ดํŽด๋ด์•ผ ํ•˜๊ณ , ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ์•Œ๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋จผ์ € ํ–‰๋ ฌ ์—ฐ์‚ฐ์— ๋Œ€ํ•ด ์•Œ์•„๋ด์•ผ ํ•œ๋‹ค. 21.2.1 ํ–‰๋ ฌ ์—ฐ์‚ฐ
def matrix_product_entry(A, B, i, j): return dot(get_row(A, i), get_column(B, j)) def matrix_multiply(A, B): n1, k1 = shape(A) n2, k2 = shape(B) if k1 != n2: raise ArithmeticError("incompatible shapes!") return make_matrix(n1, k2, partial(matrix_product_entry, A, B)) def vector_as_matrix(v): """(list ํ˜•ํƒœ์˜) ๋ฒกํ„ฐ v๋ฅผ n x 1 ํ–‰๋ ฌ๋กœ ๋ณ€ํ™˜""" return [[v_i] for v_i in v] def vector_from_matrix(v_as_matrix): """n x 1 ํ–‰๋ ฌ์„ ๋ฆฌ์ŠคํŠธ๋กœ ๋ณ€ํ™˜""" return [row[0] for row in v_as_matrix] def matrix_operate(A, v): v_as_matrix = vector_as_matrix(v) product = matrix_multiply(A, v_as_matrix) return vector_from_matrix(product)
ํ–‰๋ ฌ A์˜ ๊ณ ์œ  ๋ฒกํ„ฐ๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด, ์ž„์˜์˜ ๋ฒกํ„ฐ $v$๋ฅผ ๊ณจ๋ผ matrix_operate๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ , ๊ฒฐ๊ณผ๊ฐ’์˜ ํฌ๊ธฐ๊ฐ€ 1์ด ๋˜๊ฒŒ ์žฌ์กฐ์ •ํ•˜๋Š” ๊ณผ์ •์„ ๋ฐ˜๋ณต ์ˆ˜ํ–‰ํ•œ๋‹ค.
def find_eigenvector(A, tolerance=0.00001): guess = [1 for __ in A] while True: result = matrix_operate(A, guess) length = magnitude(result) next_guess = scalar_multiply(1/length, result) if distance(guess, next_guess) < tolerance: return next_guess, length # eigenvector, eigenvalue guess = next_guess
๊ฒฐ๊ณผ๊ฐ’์œผ๋กœ ๋ฐ˜ํ™˜๋˜๋Š” guess๋ฅผ matrix_operate๋ฅผ ํ†ตํ•ด ๊ฒฐ๊ณผ๊ฐ’์˜ ํฌ๊ธฐ๊ฐ€ 1์ธ ๋ฒกํ„ฐ๋กœ ์žฌ์กฐ์ •ํ•˜๋ฉด, ์ž๊ธฐ ์ž์‹ ์ด ๋ฐ˜ํ™˜๋œ๋‹ค. ์ฆ‰, ์—ฌ๊ธฐ์„œ guess๋Š” ๊ณ ์œ ๋ฒกํ„ฐ๋ผ๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•œ๋‹ค. ๋ชจ๋“  ์‹ค์ˆ˜ ํ–‰๋ ฌ์— ๊ณ ์œ ๋ฒกํ„ฐ์™€ ๊ณ ์œ ๊ฐ’์ด ์žˆ๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์‹œ๊ณ„ ๋ฐฉํ–ฅ์œผ๋กœ 90๋„ ํšŒ์ „ํ•˜๋Š” ์—ฐ์‚ฐ์„ ํ•˜๋Š” ๋‹ค์Œ ํ–‰๋ ฌ์—๋Š” ๊ณฑํ–ˆ์„ ๋•Œ ๊ฐ€์ง€ ์ž์‹ ์ด ๋˜๋Š” ๋ฒกํ„ฐ๋Š” ์˜๋ฒกํ„ฐ๋ฐ–์— ์—†๋‹ค.
rotate = [[0, 1], [-1, 0]]
์ด ํ–‰๋ ฌ๋กœ ์•ž์„œ ๊ตฌํ˜„ํ•œ find_eignevector(rotate)๋ฅผ ์ˆ˜ํ–‰ํ•˜๋ฉด, ์˜์›ํžˆ ๋๋‚˜์ง€ ์•Š์„ ๊ฒƒ์ด๋‹ค. ํ•œํŽธ, ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ์žˆ๋Š” ํ–‰๋ ฌ๋„ ๋•Œ๋กœ๋Š” ๋ฌดํ•œ๋ฃจํ”„์— ๋น ์งˆ ์ˆ˜ ์žˆ๋‹ค.
flip = [[0, 1], [1, 0]]
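As a cross-check on this discussion (not part of the book's from-scratch toolkit), NumPy's eigensolver handles the flip matrix directly: it finds both eigenpairs rather than oscillating.

```python
import numpy as np

flip = np.array([[0, 1],
                 [1, 0]])

# np.linalg.eig returns (eigenvalues, eigenvectors-as-columns)
eigenvalues, eigenvectors = np.linalg.eig(flip)
print(sorted(eigenvalues))  # flip has eigenvalues -1 and 1

# [1, 1], normalized to magnitude 1, is the eigenvector with eigenvalue 1
v = np.array([1, 1]) / np.sqrt(2)
print(np.allclose(flip @ v, v))  # True
```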
์ด ํ–‰๋ ฌ์€ ๋ชจ๋“  ๋ฒกํ„ฐ [x, y]๋ฅผ [y, x]๋กœ ๋ณ€ํ™˜ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ [1, 1]์€ ๊ณ ์œ ๊ฐ’์ด 1์ธ ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋œ๋‹ค. ํ•˜์ง€๋งŒ x, y๊ฐ’์ด ๋‹ค๋ฅธ ์ž„์˜์˜ ๋ฒกํ„ฐ์—์„œ ์ถœ๋ฐœํ•ด์„œ find_eigenvector๋ฅผ ์ˆ˜ํ–‰ํ•˜๋ฉด x, y๊ฐ’์„ ๋ฐ”๊พธ๋Š” ์—ฐ์‚ฐ๋งŒ ๋ฌดํ•œํžˆ ์ˆ˜ํ–‰ํ•  ๊ฒƒ์ด๋‹ค. (NumPy๊ฐ™์€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์ด๋Ÿฐ ์ผ€์ด์Šค๊นŒ์ง€ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•๋“ค์ด ๊ตฌํ˜„๋˜์–ด ์žˆ๋‹ค.) ์ด๋Ÿฐ ์‚ฌ์†Œํ•œ ๋ฌธ์ œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ์–ด์จŒ๋“  find_eigenvector๊ฐ€ ๊ฒฐ๊ณผ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•œ๋‹ค๋ฉด, ๊ทธ ๊ฒฐ๊ณผ๊ฐ’์€ ๊ณง ๊ณ ์œ ๋ฒกํ„ฐ์ด๋‹ค. 21.2.2 ์ค‘์‹ฌ์„ฑ ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋ฐ์ดํ„ฐ ๋„คํŠธ์›Œํฌ๋ฅผ ์ดํ•ดํ•˜๋Š”๋ฐ ์–ด๋–ป๊ฒŒ ๋„์›€์„ ์ค„๊นŒ? ์–˜๊ธฐ๋ฅผ ํ•˜๊ธฐ ์ „์— ๋จผ์ € ๋„คํŠธ์›Œํฌ๋ฅผ ์ธ์ ‘ํ–‰๋ ฌ(adjacency matrix)์˜ ํ˜•ํƒœ๋กœ ๋‚˜ํƒ€๋‚ด ๋ณด์ž. ์ด ํ–‰๋ ฌ์€ ์‚ฌ์šฉ์ž i์™€ ์‚ฌ์šฉ์ž j๊ฐ€ ์นœ๊ตฌ์ธ ๊ฒฝ์šฐ (i, j)๋ฒˆ์งธ ํ•ญ๋ชฉ์— 1์ด ์žˆ๊ณ , ์นœ๊ตฌ๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ 0์ด ์žˆ๋Š” ํ–‰๋ ฌ์ด๋‹ค.
# # eigenvector centrality # def entry_fn(i, j): return 1 if (i, j) in friendships or (j, i) in friendships else 0 n = len(users) adjacency_matrix = make_matrix(n, n, entry_fn) adjacency_matrix
๊ฐ ์‚ฌ์šฉ์ž์˜ ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ์ด๋ž€ find_eigenvector๋กœ ์ฐพ์€ ์‚ฌ์šฉ์ž์˜ ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋œ๋‹ค.
eigenvector_centralities, _ = find_eigenvector(adjacency_matrix) for user_id, centrality in enumerate(eigenvector_centralities): print(user_id, centrality)
์—ฐ๊ฒฐ์˜ ์ˆ˜๊ฐ€ ๋งŽ๊ณ , ์ค‘์‹ฌ์„ฑ์ด ๋†’์€ ์‚ฌ์šฉ์ž๋“คํ•œํ…Œ ์—ฐ๊ฒฐ๋œ ์‚ฌ์šฉ์ž๋“ค์€ ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ์ด ๋†’๋‹ค. ์•ž์˜ ๊ฒฐ๊ณผ์— ๋”ฐ๋ฅด๋ฉด ์‚ฌ์šฉ์ž 1, ์‚ฌ์šฉ์ž 2์˜ ์ค‘์‹ฌ์„ฑ์ด ๊ฐ€์žฅ ๋†’์€๋ฐ, ์ด๋Š” ์ค‘์‹ฌ์„ฑ์ด ๋†’์€ ์‚ฌ๋žŒ๋“ค๊ณผ ์„ธ๋ฒˆ์ด๋‚˜ ์—ฐ๊ฒฐ๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์ด๋“ค๋กœ๋ถ€ํ„ฐ ๋ฉ€์–ด์งˆ์ˆ˜๋ก ์‚ฌ์šฉ์ž๋“ค์˜ ์ค‘์‹ฌ์„ฑ์€ ์ ์ฐจ ์ค„์–ด๋“ ๋‹ค. 21.3 ๋ฐฉํ–ฅ์„ฑ ๊ทธ๋ž˜ํ”„(Directed graphs)์™€ ํŽ˜์ด์ง€๋žญํฌ ๋ฐ์ดํ…€์ด ์ธ๊ธฐ๋ฅผ ๋ณ„๋กœ ๋Œ์ง€ ๋ชปํ•˜์ž, ์ˆœ์ด์ต ํŒ€์˜ ๋ถ€์‚ฌ์žฅ์€ ์นœ๊ตฌ ๋ชจ๋ธ์—์„œ ๋ณด์ฆ(endorsement)๋ชจ๋ธ๋กœ ์ „ํ–ฅํ•˜๋Š” ๊ฒƒ์„ ๊ณ ๋ ค ์ค‘์ด๋‹ค. ์•Œ๊ณ  ๋ณด๋‹ˆ ์‚ฌ๋žŒ๋“ค์€ ์–ด๋–ค ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž๋“ค๋ผ๋ฆฌ ์นœ๊ตฌ์ธ์ง€์— ๋Œ€ํ•ด์„œ๋Š” ๋ณ„๋กœ ๊ด€์‹ฌ์ด ์—†์—ˆ์ง€๋งŒ, ํ—ค๋“œํ—Œํ„ฐ๋“ค์€ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž๋กœ๋ถ€ํ„ฐ ์กด๊ฒฝ ๋ฐ›๋Š” ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž๊ฐ€ ๋ˆ„๊ตฌ์ธ์ง€์— ๋Œ€ํ•ด ๊ด€์‹ฌ์ด ๋งŽ๋‹ค. ์ด ์ƒˆ๋กœ์šด ๋ชจ๋ธ์—์„œ ๊ด€๊ณ„๋Š” ์ƒํ˜ธ์ ์ธ ๊ฒƒ์ด ์•„๋‹ˆ๋ผ, ํ•œ ์‚ฌ๋žŒ(source)์ด ๋‹ค๋ฅธ ๋ฉ‹์ง„ ํ•œ ์‚ฌ๋žŒ(target)์˜ ์‹ค๋ ฅ์— ๋ณด์ฆ์„ ์„œ์ฃผ๋Š” (source, target) ์Œ์œผ๋กœ ๋น„๋Œ€์นญ์ ์ธ ๊ด€๊ณ„๋ฅผ ํ‘œํ˜„ํ•˜๊ฒŒ ๋œ๋‹ค.
# # directed graphs # endorsements = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (1, 3), (2, 3), (3, 4), (5, 4), (5, 6), (7, 5), (6, 8), (8, 7), (8, 9)] for user in users: user["endorses"] = [] # add one list to track outgoing endorsements user["endorsed_by"] = [] # and another to track endorsements for source_id, target_id in endorsements: users[source_id]["endorses"].append(users[target_id]) users[target_id]["endorsed_by"].append(users[source_id])
๊ทธ๋ฆฌ๊ณ  ๊ฐ€์žฅ ๋ณด์ฆ์„ ๋งŽ์ด ๋ฐ›์€ ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž๋“ค์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜์ง‘ํ•ด์„œ, ๊ทธ๊ฒƒ์„ ํ—ค๋“œํ—Œํ„ฐ๋“คํ•œํ…Œ ํŒ”๋ฉด ๋œ๋‹ค.
endorsements_by_id = [(user["id"], len(user["endorsed_by"])) for user in users] sorted(endorsements_by_id, key=lambda x: x[1], # (user_id, num_endorsements) reverse=True)
์‚ฌ์‹ค '๋ณด์ฆ์˜ ์ˆ˜'์™€ ๊ฐ™์€ ์ˆซ์ž๋Š” ์กฐ์ž‘ํ•˜๊ธฐ๊ฐ€ ๋งค์šฐ ์‰ฝ๋‹ค. ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ• ์ค‘ ํ•˜๋‚˜๋Š”, ๊ฐ€์งœ ๊ณ„์ •์„ ์—ฌ๋Ÿฌ ๊ฐœ ๋งŒ๋“ค์–ด์„œ ๊ทธ๊ฒƒ๋“ค๋กœ ๋‚ด ๊ณ„์ •์— ๋Œ€ํ•œ ๋ณด์ฆ์„ ์„œ๋Š” ๊ฒƒ์ด๋‹ค. ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€, ์นœ๊ตฌ๋“ค๋ผ๋ฆฌ ์งœ๊ณ  ์„œ๋กœ๊ฐ€ ์„œ๋กœ๋ฅผ ๋ณด์ฆํ•ด ์ฃผ๋Š” ๊ฒƒ์ด๋‹ค. (์•„๋งˆ ์‚ฌ์šฉ์ž 0, 1, 2๊ฐ€ ์ด๋Ÿฐ ๊ด€๊ณ„์ผ ๊ฐ€๋Šฅ์„ฑ์ด ํฌ๋‹ค.) ์ข€ ๋” ๋‚˜์€ ์ง€์ˆ˜๋Š”, '๋ˆ„๊ฐ€' ๋ณด์ฆ์„ ์„œ๋Š”์ง€๋ฅผ ๊ณ ๋ คํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ๋ณด์ฆ์„ ๋งŽ์ด ๋ฐ›์€ ์‚ฌ์šฉ์ž๊ฐ€ ๋ณด์ฆ์„ ์„ค ๋•Œ๋Š”, ๋ณด์ฆ์„ ์ ๊ฒŒ ๋ฐ›์€ ์‚ฌ์šฉ์ž๊ฐ€ ๋ณด์ฆ์„ ์„ค ๋•Œ๋ณด๋‹ค ๋” ์ค‘์š”ํ•œ ๊ฒƒ์œผ๋กœ ๋ฐ›์•„๋“ค์—ฌ์ง€๋Š” ๊ฒƒ์ด ํƒ€๋‹นํ•˜๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์‚ฌ์‹ค ์ด๊ฒƒ์€ ์œ ๋ช…ํ•œ ํŽ˜์ด์ง€๋žญํฌ(PageRank) ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ธฐ๋ณธ ์ฒ ํ•™์ด๊ธฐ๋„ ํ•˜๋‹ค. 1. ๋„คํŠธ์›Œํฌ ์ „์ฒด์—๋Š” 1.0(๋˜๋Š” 100%)์˜ ํŽ˜์ด์ง€๋žญํฌ๊ฐ€ ์žˆ๋‹ค. 2. ์ดˆ๊ธฐ์— ์ด ํŽ˜์ด์ง€๋žญํฌ๋ฅผ ๋ชจ๋“  ๋…ธ๋“œ์— ๊ณ ๋ฅด๊ฒŒ ๋ฐฐ๋‹นํ•œ๋‹ค. 3. ๊ฐ ์Šคํ…์„ ๊ฑฐ์น  ๋•Œ๋งˆ๋‹ค ๊ฐ ๋…ธ๋“œ์— ๋ฐฐ๋‹น๋œ ํŽ˜์ด์ง€๋žญํฌ์˜ ๋Œ€๋ถ€๋ถ„์€ ์™ธ๋ถ€๋กœ ํ–ฅํ•˜๋Š” ๋งํฌ์— ๊ท ๋“ฑํ•˜๊ฒŒ ๋ฐฐ๋‹นํ•œ๋‹ค. 4. ๊ฐ ์Šคํ…์„ ๊ฑฐ์น  ๋•Œ๋งˆ๋‹ค ๊ฐ ๋…ธ๋“œ์— ๋‚จ์•„ ์žˆ๋Š” ํŽ˜์ด์ง€๋žญํฌ๋ฅผ ๋ชจ๋“  ๋…ธ๋“œ์— ๊ณ ๋ฅด๊ฒŒ ๋ฐฐ๋‹นํ•œ๋‹ค.
def page_rank(users, damping = 0.85, num_iters = 100): # ๋จผ์ € ํŽ˜์ด์ง€๋žญํฌ๋ฅผ ๋ชจ๋“  ๋…ธ๋“œ์— ๊ณ ๋ฅด๊ฒŒ ๋ฐฐ๋‹น num_users = len(users) pr = { user["id"] : 1 / num_users for user in users } # ๋งค ์Šคํ…๋งˆ๋‹ค ๊ฐ ๋…ธ๋“œ๊ฐ€ ๋ฐ›๋Š” # ์ ์€ ์–‘์˜ ํŽ˜์ด์ง€๋žญํฌ base_pr = (1 - damping) / num_users for __ in range(num_iters): next_pr = { user["id"] : base_pr for user in users } for user in users: # ํŽ˜์ด์ง€๋žญํฌ๋ฅผ ์™ธ๋ถ€๋กœ ํ–ฅํ•˜๋Š” ๋งํฌ์— ๋ฐฐ๋‹นํ•œ๋‹ค. links_pr = pr[user["id"]] * damping for endorsee in user["endorses"]: next_pr[endorsee["id"]] += links_pr / len(user["endorses"]) pr = next_pr return pr for user_id, pr in page_rank(users).items(): print(user_id, pr)
Generating some data

Let's generate a classification dataset that is not easily linearly separable. Our favorite example is the spiral dataset, which can be generated as follows:

N = 100  # number of "training examples" per class
D = 2    # dimensionality of the feature set
K = 3    # number of classes to classify into

X = np.zeros((N*K, D))  # data matrix: each row is a single example; here 100*3
                        # examples, each with two dimensions, so the matrix is 300 x 2
Y = np.zeros(N*K, dtype='uint8')  # class labels

for j in range(K):
    ix = range(N*j, N*(j+1))    # indices of the points for class j
    r = np.linspace(0.0, 1, N)  # radius
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N) * 0.2  # theta
    # np.linspace(start, stop, num=50, ...) returns `num` evenly spaced values
    # np.c_ concatenates its arguments along the second axis
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]  # gives the dataset its spiral structure,
                                             # which will be clear in the plot below
    Y[ix] = j

# this can be played with different kinds of datasets to see how the network reacts
plt.scatter(X[:, 0], X[:, 1], c=Y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1, 1])
plt.ylim([-1, 1])
plt.show()
Deep Learning.ai/Numpy - Understanding Backprop.ipynb
Shashi456/Artificial-Intelligence
gpl-3.0
Training a Simple Softmax Linear Classifier

Initialize the parameters

# initialize parameters randomly
W = 0.01 * np.random.randn(D, K)  # a D x K matrix, here 2 x 3
b = np.zeros((1, K))              # here 1 x 3

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength
Compute the Class Scores
# compute class scores for a linear classifier
scores = np.dot(X, W) + b  # scores will be 300 x 3; remember NumPy broadcasting for b
Compute the loss
# using cross-entropy loss
# softmax loss = data loss + regularization loss

num_examples = X.shape[0]

# get unnormalized probabilities
exp_scores = np.exp(scores)

# normalize them for each example: probs is [300 x 3], each row contains the
# class probabilities and, since we normalized, each row sums to one
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

# query the log probabilities assigned to the correct class in each example;
# correct_logprobs is a 1D array of just those probabilities
correct_logprobs = -np.log(probs[range(num_examples), Y])

# compute the loss: average cross-entropy loss plus regularization
data_loss = np.sum(correct_logprobs) / num_examples
reg_loss = 0.5 * reg * np.sum(W*W)

# The gradient on the scores is dL_i/df_k = p_k - 1(y_i = k).
# Suppose the probabilities were p = [0.2, 0.3, 0.5] and the correct class was
# the middle one (probability 0.3); then the gradient on the scores is
# df = [0.2, -0.7, 0.5]. Increasing the score of an incorrect class increases
# the loss (the positive signs +0.2 and +0.5), which is bad, as expected,
# while the -0.7 says that increasing the correct class's score would
# decrease the loss L_i, which makes sense.
dscores = probs
dscores[range(num_examples), Y] -= 1
dscores /= num_examples

dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg * W  # not forgetting the regularization gradient

# updating parameters
W += -step_size * dW
b += -step_size * db
Putting it all Together

Until now we have only looked at the individual pieces.

# train a linear classifier

# initialize parameters randomly
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# gradient descent loop
num_examples = X.shape[0]
for i in range(200):

    # evaluate class scores, [N x K]
    scores = np.dot(X, W) + b

    # compute the class probabilities
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)  # [N x K]

    # compute the loss: average cross-entropy loss and regularization
    correct_logprobs = -np.log(probs[range(num_examples), Y])
    data_loss = np.sum(correct_logprobs) / num_examples
    reg_loss = 0.5 * reg * np.sum(W*W)
    loss = data_loss + reg_loss
    if i % 10 == 0:
        print("iteration %d: loss %f" % (i, loss))

    # compute the gradient on the scores
    dscores = probs
    dscores[range(num_examples), Y] -= 1
    dscores /= num_examples

    # backpropagate the gradient to the parameters (W, b)
    dW = np.dot(X.T, dscores)
    db = np.sum(dscores, axis=0, keepdims=True)
    dW += reg * W  # regularization gradient

    # perform a parameter update
    W += -step_size * dW
    b += -step_size * db

# evaluate training set accuracy
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print("training accuracy: %.2f" % (np.mean(predicted_class == Y)))

# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
Using SQL for Queries

Note that SQL is case-insensitive, but it is traditional to use ALL CAPS for SQL keywords. It is also standard to end SQL statements with a semicolon.

Simple Queries

pdsql('SELECT * FROM tips LIMIT 5;')

q = """
SELECT sex AS gender, smoker, day, size
FROM tips
WHERE size <= 3 AND smoker = 'Yes'
LIMIT 5"""
pdsql(q)
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Filtering on strings
q = """
SELECT sex AS gender, smoker, day, size, time
FROM tips
WHERE time LIKE '%u_c%'
LIMIT 5"""
pdsql(q)
Ordering
q = """
SELECT sex AS gender, smoker, day, size, time
FROM tips
WHERE time LIKE '%u_c%'
ORDER BY size DESC
LIMIT 5"""
pdsql(q)
Aggregate queries
pdsql('SELECT * FROM tips LIMIT 5;')

q = """
SELECT sex, smoker, sum(total_bill) AS total, max(tip) AS max_tip
FROM tips
WHERE time = 'Lunch'
GROUP BY sex, smoker
ORDER BY max_tip DESC
"""
pdsql(q)
Matching students and majors

Inner join

pdsql("""
SELECT *
FROM student s
INNER JOIN major m
ON s.major_id = m.major_id;
""")
Left outer join

SQL also has RIGHT OUTER JOIN and FULL OUTER JOIN, but these are not currently supported by SQLite3 (the database engine used by pdsql).

pdsql("""
SELECT s.*, m.name
FROM student s
LEFT JOIN major m
ON s.major_id = m.major_id;
""")
Emulating a full outer join with UNION ALL

Only necessary if the database does not provide FULL OUTER JOIN.

Using linker tables to match students to classes (a MANY TO MANY join)

Using SQLite3

SQLite3 is part of the standard library. However, the mechanics of using essentially any database in Python are similar, because of the Python DB-API.

import sqlite3

c = sqlite3.connect('data/Chinook_Sqlite.sqlite')
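For instance, a FULL OUTER JOIN can be emulated by taking the LEFT JOIN and appending, with UNION ALL, the right-table rows that matched no left-table row. A sketch on an in-memory SQLite database (the tiny student/major tables here are made up to mirror the lecture's, not its actual data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (name TEXT, major_id INTEGER);
CREATE TABLE major   (major_id INTEGER, name TEXT);
INSERT INTO student VALUES ('alice', 1), ('bob', NULL);
INSERT INTO major   VALUES (1, 'math'), (2, 'physics');
""")

# left join, plus the majors that matched no student
rows = conn.execute("""
SELECT s.name, m.name
FROM student s
LEFT JOIN major m ON s.major_id = m.major_id
UNION ALL
SELECT s.name, m.name
FROM major m
LEFT JOIN student s ON s.major_id = m.major_id
WHERE s.major_id IS NULL;
""").fetchall()

print(rows)  # both students, plus the unmatched 'physics' major
```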
Standard SQL statements with parameter substitution

Note: Using Python string substitution for Python-defined parameters is dangerous because of the risk of SQL injection attacks. Use parameter substitution with ? instead.

Do this

t = ['%rock%', 10]
list(c.execute("SELECT * FROM Album WHERE Title LIKE ? AND ArtistID > ? LIMIT 5;", t))
Not this
t = ("'%rock%'", 10)
list(c.execute("SELECT * FROM Album WHERE Title LIKE %s AND ArtistID > %d LIMIT 5;" % t))
User defined functions

Sometimes it is useful to have custom functions that run on the database server rather than on the client. These are called User Defined Functions, or UDFs. How to do this varies with the database used, but it is fairly simple with Python and SQLite.

A standard UDF

def encode(text, offset):
    """Caesar cipher of text with given offset."""
    from string import ascii_lowercase, ascii_uppercase
    tbl = dict(zip(map(ord, ascii_lowercase + ascii_uppercase),
                   ascii_lowercase[offset:] + ascii_lowercase[:offset] +
                   ascii_uppercase[offset:] + ascii_uppercase[:offset]))
    return text.translate(tbl)

c.create_function("encode", 2, encode)
list(c.execute("SELECT Title, encode(Title, 2) FROM Album LIMIT 5;"))
Using SQL magic functions

We will use the ipython-sql notebook extension for convenience. This will only work in notebooks and IPython scripts with the .ipy extension.
%load_ext sql
3. Open the file and use .read() to save the contents of the file to a string called fields. Make sure the file is closed at the end.
# Write your code here:

# Run fields to see the contents of contacts.txt:
fields
nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb
rishuatgithub/MLPy
apache-2.0
Working with PDF Files

4. Use PyPDF2 to open the file Business_Proposal.pdf. Extract the text of page 2.

# Perform import

# Open the file as a binary object

# Use PyPDF2 to read the text of the file

# Get the text from page 2 (CHALLENGE: Do this in one step!)
page_two_text = 

# Close the file

# Print the contents of page_two_text
print(page_two_text)
5. Open the file contacts.txt in append mode. Add the text of page 2 from above to contacts.txt. CHALLENGE: See if you can remove the word "AUTHORS:"
# Simple Solution:

# CHALLENGE Solution (re-run the %%writefile cell above to obtain an unmodified contacts.txt file):
Regular Expressions

6. Using the page_two_text variable created above, extract any email addresses that were contained in the file Business_Proposal.pdf.

import re

# Enter your regex pattern here. This may take several tries!
pattern = r''

re.findall(pattern, page_two_text)
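One pattern that works for simple addresses (illustrated on a made-up string, since the PDF isn't available here; the exact pattern the assessment expects may differ):

```python
import re

sample = "Contact sales@example.com or support@mail.example.org for details."

# word characters, dots, +, - before the @; domain labels after it
email_pattern = r"[\w.+-]+@[\w-]+\.[\w.-]+"

print(re.findall(email_pattern, sample))
# ['sales@example.com', 'support@mail.example.org']
```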
Exercise: Compute the velocity of the penny after t seconds. Check that the units of the result are correct.
# Solution goes here
notebooks/chap01.ipynb
AllenDowney/ModSimPy
mit
Exercise: Why would it be nonsensical to add a and t? What happens if you try?
# Solution goes here
Exercise: Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from h plus 10 feet. Define a variable named foot that contains the unit foot provided by UNITS. Define a variable named pole_height and give it the value 10 feet. What happens if you add h, which is in units of meters, to pole_height, which is in units of feet? What happens if you write the addition the other way around?
# Solution goes here

# Solution goes here
Exercise: In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating. As a simplification, let's assume that the acceleration of the penny is a until the penny reaches 18 m/s, and then 0 afterwards. What is the total time for the penny to fall 381 m? You can break this question into three parts:

1. How long until the penny reaches 18 m/s with constant acceleration a?
2. How far would the penny fall during that time?
3. How long to fall the remaining distance with constant velocity 18 m/s?

Suggestion: Assign each intermediate result to a variable with a meaningful name. And assign units to all quantities!

# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here
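A plain-Python sketch of the three parts, using $g \approx 9.8$ m/s² (units omitted for brevity; in the notebook itself you would attach Pint units to every quantity, as the exercise asks):

```python
a = 9.8          # m/s**2, acceleration due to gravity
v_terminal = 18  # m/s, simplified terminal velocity
h = 381          # m, height of the Empire State Building

# 1. time to reach terminal velocity under constant acceleration
t1 = v_terminal / a

# 2. distance fallen during that time
d1 = a * t1**2 / 2

# 3. time to fall the remaining distance at constant velocity
t2 = (h - d1) / v_terminal

total_time = t1 + t2
print(round(total_time, 1))  # about 22.1 seconds
```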
Filtering and resampling data

Some artifacts are restricted to certain frequencies and can therefore be fixed by filtering. An artifact that typically affects only some frequencies is power-line noise, created by the electrical network. It is composed of sharp peaks at 50 Hz (or 60 Hz, depending on your geographical location). Some peaks may also be present at the harmonic frequencies, i.e. the integer multiples of the power-line frequency, e.g. 100 Hz, 150 Hz, ... (or 120 Hz, 180 Hz, ...). This tutorial covers some basics of how to filter data in MNE-Python. For more in-depth information about filter design in general, and in MNE-Python in particular, check out sphx_glr_auto_tutorials_plot_background_filtering.py.

import numpy as np
import mne
from mne.datasets import sample

data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog_proj.fif'

tmin, tmax = 0, 20  # use the first 20s of data

# Setup for reading the raw data (save memory by cropping the raw data
# before loading it)
raw = mne.io.read_raw_fif(raw_fname)
raw.crop(tmin, tmax).load_data()
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # bads + 2 more

fmin, fmax = 2, 300  # look at frequencies between 2 and 300 Hz
n_fft = 2048         # the FFT size (n_fft), ideally a power of 2

# Pick a subset of channels (here for speed reasons)
selection = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
                       stim=False, exclude='bads', selection=selection)

# Let's first check out all channel types
raw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)
0.17/_downloads/1a57f50035589ab531155bdb55bcbcb5/plot_artifacts_correction_filtering.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly

$$Y = X_1 + X_2 = X_1 + (X_1 + X^2) = 2X_1 + X^2$$

or $2X_1$ plus a parabola. However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Multiple linear regression separates out contributions from different variables.

Similarly, running a linear regression on two securities might give a high $\beta$. However, if we bring in a third security (like SPY, which tracks the S&P 500) as an independent variable, we may find that the correlation between the first two securities is almost entirely due to them both being correlated with the S&P 500. This is useful because the S&P 500 may then be a more reliable predictor of both securities than they were of each other. It also lets us better gauge the significance of the relationship between the two securities and avoid confounding the two variables.

# Load pricing data for two arbitrarily-chosen assets and SPY
start = '2014-01-01'
end = '2015-01-01'
asset1 = get_pricing('DTV', fields='price', start_date=start, end_date=end)
asset2 = get_pricing('FISV', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)

# First, run a linear regression on the two assets
slr = regression.linear_model.OLS(asset1, sm.add_constant(asset2)).fit()
print('SLR beta of asset2:', slr.params[1])

# Run multiple linear regression using asset2 and SPY as independent variables
mlr = regression.linear_model.OLS(
    asset1, sm.add_constant(np.column_stack((asset2, benchmark)))).fit()
prediction = mlr.params[0] + mlr.params[1]*asset2 + mlr.params[2]*benchmark
prediction.name = 'Prediction'

print('MLR beta of asset2:', mlr.params[1])
print('MLR beta of S&P 500:', mlr.params[2])
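The confounding effect is easy to reproduce with synthetic data (a sketch using only NumPy's least squares; the series and coefficients below are invented, not market data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

market = rng.normal(size=n)               # common driver, playing the role of SPY
asset1 = market + 0.1 * rng.normal(size=n)
asset2 = market + 0.1 * rng.normal(size=n)

def betas(y, *xs):
    """OLS coefficients of y on a constant plus the given regressors."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0]

slr_beta = betas(asset1, asset2)[1]          # near 1: asset2 looks predictive on its own
mlr_beta = betas(asset1, asset2, market)[1]  # near 0: the market explains the correlation
print(round(slr_beta, 2), round(mlr_beta, 2))
```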
notebooks/lectures/Multiple_Linear_Regression/notebook.ipynb
quantopian/research_public
apache-2.0
The Laplacian matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L = D - A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple. The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.

def complete_deg(n):
    """Degree matrix of the complete graph K_n: n-1 on the diagonal, zeros elsewhere."""
    return (n - 1) * np.identity(n, dtype=int)

complete_deg(3)

D = complete_deg(5)
assert D.shape == (5, 5)
assert D.dtype == np.dtype(int)
assert np.all(D.diagonal() == 4*np.ones(5))
assert np.all(D - np.diag(D.diagonal()) == np.zeros((5, 5), dtype=int))
assignments/assignment03/NumpyEx04.ipynb
geoneill12/phys202-2015-work
mit
The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
def complete_adj(n):
    """Adjacency matrix of K_n: ones everywhere except the diagonal."""
    return np.ones((n, n), dtype=int) - np.identity(n, dtype=int)

complete_adj(3)

A = complete_adj(5)
assert A.shape == (5, 5)
assert A.dtype == np.dtype(int)
assert np.all(A + np.eye(5, dtype=int) == np.ones((5, 5), dtype=int))
Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
def L(n):
    return complete_deg(n) - complete_adj(n)

for n in range(1, 7):
    print(L(n))
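To explore the spectrum numerically, np.linalg.eigvalsh (appropriate for the symmetric $L$) can be applied for several $n$; the pattern that emerges, suggesting a conjecture, is an eigenvalue 0 together with the eigenvalue $n$ repeated $n-1$ times. A self-contained sketch:

```python
import numpy as np

def laplacian_Kn(n):
    """Laplacian L = D - A of the complete graph K_n."""
    D = (n - 1) * np.identity(n)
    A = np.ones((n, n)) - np.identity(n)
    return D - A

for n in range(2, 7):
    eigs = np.sort(np.linalg.eigvalsh(laplacian_Kn(n)))
    print(n, np.round(eigs, 6))
    # eigenvalue 0 once, eigenvalue n with multiplicity n - 1
    assert np.allclose(eigs, [0.0] + [float(n)] * (n - 1))
```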
ๅœจๆ–ฐไธ‰ๆฟๆŒ‚็‰Œ็š„ๅ…ฌๅธ่ฟ˜ๅŒ…ๆ‹ฌไบ†ๅŽŸไธค็ฝ‘็š„ๅ…ฌๅธ๏ผˆNETๅ’ŒSTAQ๏ผ‰๏ผŒไธค็ฝ‘็š„ๅ…ฌๅธๆ˜ฏๅœจ90ๅนดไปฃๆœ‰ไธญๅ›ฝไบบๆฐ‘้“ถ่กŒๆ‰นๅ‡†่ฎพ็ซ‹็š„๏ผŒๆ€ป็”จ9ๅฎถๅ…ฌๅธ๏ผˆ็ฒคไผ ๅช’(400003)ไธŠๅธ‚ๅŽๅ‰ฉไฝ™8ๅฎถๅ…ฌๅธ๏ผ‰๏ผŒ็”ฑไบŽๆ”ฟ็ญ–ๆ–น้ข็š„็บฆๆŸ๏ผŒๆญค็ฑปๅ…ฌๅธๅณไพฟๅœจไธ‰ๆฟๅธ‚ๅœบไธŠไพ็„ถไธ่ขซๅ…่ฎธ่ฟ›่กŒๆ–ฐ็š„่ž่ต„่€Œๅช่ƒฝ่‚กไปฝ่ฝฌ่ฎฉใ€‚่ฟ™ไบ›ๅ…ฌๅธๅŒ…ๅซไบ†๏ผš * ๅคง่‡ช็„ถ๏ผˆ400001๏ผ‰๏ผŒ่ฏฅ่‚ก็ฅจๅทฒ็ปๅœจๅœจๆ–ฐไธ‰ๆฟ้€€ๅธ‚๏ผŒๅนถไปฅ834019ไธบไปฃ็ ๆˆไธบๅ…จๅ›ฝ่‚ก่ฝฌๅ…ฌๅธ็š„ๆŒ‚็‰Œไผไธšใ€‚ๅฏ่ƒฝๅ‡บ็Žฐ็š„้—ฎ้ข˜ๆ˜ฏ๏ผŒๆ•ฐๆฎๅบ“ๆŠŠ400001ๅ…ฌๅธƒ็š„่ดขๅŠกไฟกๆฏไนŸไฝœไธบไบ†ๆ–ฐๆŒ‚็‰Œๅ…ฌๅธ็š„่ดขๅŠกไฟกๆฏ๏ผŒๅ› ๆญค๏ผŒๅค„็†ๆ—ถ็›ดๆŽฅไปฅๆ–ฐๆŒ‚็‰Œ็š„ๅ…ฌๅธ่ฝฌ่ฎฉๅ…ฌๅ‘ŠไนฆไธญๆŠซ้œฒ็š„่ดขๅŠกไฟกๆฏไฝœไธบๆฃ€้ชŒๆ ‡ๅ‡†๏ผˆTo-do: ไฝœไธบๅฏนๆฏ”็›ดๆŽฅ่ธข้™ค่ฟ™ๅฎถๅ…ฌๅธ็š„ๆ•ฐๆฎใ€‚๏ผ‰ * ้•ฟ็™ฝ5 (400002) * ๆตทๅ›ฝๅฎž5(400005) * ไบฌไธญๅ…ด5(400006) * ๅŽๅ‡ฏ1 (400007) * ๅนฟๅปบ1 (400009) * ้นซๅณฐ5 (400010) * ๆธฏๅฒณ1 (400013)
stkcs_net_staq = ["400001.OC", "400002.OC", "400005.OC", "400006.OC", "400007.OC", "400009.OC", "400010.OC", "400013.OC"] stocks_neeq = stocks_neeq[~stocks_neeq["่ฏๅˆธไปฃ็ "].isin(stkcs_net_staq)] len(stocks_neeq) stocks_neeq[stocks_neeq["่ฏๅˆธไปฃ็ "] == "834019.OC"] stocks_neeq = stocks_neeq.drop(6644) stocks_neeq[stocks_neeq["่ฏๅˆธไปฃ็ "] == "834019.OC"]
src/Stocks.ipynb
januslian/neeqem
gpl-3.0
้€€ๅธ‚่‚ก็ฅจๅค„็†๏ผš11ไธช่ฝฌๆฟไธŠๅธ‚ๅ…ฌๅธ๏ผŒ1ไธช่ฟž็ปญๅ››ๅนดไบๆŸ๏ผŒ16ไธชๅธๆ”ถๅˆๅนถ็š„ๅ…ฌๅธ๏ผŒ3ๅฎถๅ…ฌๅธๅฐ†ๅ…ฌๅธๅ˜ไธบๆœ‰้™่ดฃไปปๅ…ฌๅธ๏ผŒ6ๅฎถๅ…ฌๅธๆœช่ฏดๆ˜Ž้€€ๅธ‚ๅŽŸๅ› ใ€‚ๅœจๆ•ฐๆฎๅบ“้€€ๅธ‚่‚ก็ฅจๅˆ—่กจ้‡Œๆœ‰ไธคๅฎถๅ…ฌๅธ400001ๅ’Œ400003ๅฑžไบŽไธค็ฝ‘ๅ…ฌๅธ่ฝฌๆฟ๏ผŒไธไฝœไธบ็ ”็ฉถๆ ทๆœฌๅค„็†ใ€‚
delisting = pandas.read_excel("../data/raw/้€€ๅธ‚่‚ก็ฅจไธ€่งˆ.xlsx") #2016ๅนด4ๆœˆ15ๆ—ฅไน‹ๅ‰้€€ๅธ‚็š„ๅ…ฌๅธ delisting = delisting[:-2] delisting.head() delisting = delisting[~delisting["ไปฃ็ "].isin(["400001.OC", "400003.OC"])] stkcd = stocks_neeq["่ฏๅˆธไปฃ็ "].append(delisting["ไปฃ็ "]) stkcd[stkcd.duplicated()] delisting = pandas.read_excel("../data/delist.xls") delisting.head() stocks_neeq_all = pandas.concat([stocks_neeq, delisting], ignore_index=True) len(stocks_neeq_all) len(stocks_neeq), len(delisting) stocks_neeq_all.iloc[0] stocks_neeq_all[stocks_neeq_all["ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"].isnull()] stocks_neeq_all.loc[stocks_neeq_all["่ฏๅˆธไปฃ็ "] == "835458.OC", "ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"] = "่ดงๅธ้‡‘่žๆœๅŠก" stocks_neeq_all.loc[stocks_neeq_all["่ฏๅˆธไปฃ็ "] == "830806.OC", "ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"] = "่ฝฏไปถๅ’ŒไฟกๆฏๆŠ€ๆœฏๆœๅŠกไธš" stocks_neeq_all.loc[stocks_neeq_all["่ฏๅˆธไปฃ็ "] == "430217.OC", "ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"] = "ไบ’่”็ฝ‘ๅ’Œ็›ธๅ…ณๆœๅŠกไธš" stocks_neeq_all[stocks_neeq_all["ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"].isnull()] finance_sector = ["่ดงๅธ้‡‘่žๆœๅŠก", "่ต„ๆœฌๅธ‚ๅœบๆœๅŠก", "ไฟ้™ฉไธš", "ๅ…ถไป–้‡‘่žไธš"] print(len(stocks_neeq_all[stocks_neeq_all["ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"].isin(finance_sector)])) stocks_neeq_all = stocks_neeq_all[~stocks_neeq_all["ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"].isin(finance_sector)] stocks_neeq_all.to_csv("../data/NEEQ_sample.csv", index=False) a = len(stocks_neeq_all[stocks_neeq_all["ๆŒ‚็‰Œๆ—ฅๆœŸ"] <= pandas.Timestamp('2009-10-30 00:00:00')])
src/Stocks.ipynb
januslian/neeqem
gpl-3.0
็”ฑๆญคๅพ—ๅˆฐ็š„โ€œๆ–ฐไธ‰ๆฟโ€ๅ…ฌๅธ็š„ๆœ‰ๆ•ˆๆ ทๆœฌๆ•ฐ้‡ไธบ{{len(stocks_neeq_all)}}ใ€‚ โ€œๆ–ฐไธ‰ๆฟโ€ๆ ทๆœฌๅ…ฌๅธๆœ€ๆ—ฉ็š„ๆŒ‚็‰Œๆ—ถ้—ดไธบ{{stocks_neeq_all["ๆŒ‚็‰Œๆ—ฅๆœŸ"].min()}}๏ผŒๆ‰€ไปฅๅœจ้€‰ๆ‹ฉไธŽไน‹ๅฏนๆฏ”็š„A่‚กไธปๆฟใ€ไธญๅฐๆฟๅ’Œๅˆ›ไธšๆฟๅ…ฌๅธๆ—ถไนŸ้€‰ๆ‹ฉไปŽ2006ๅนดไปฅๅŽ๏ผŒๅ…ถไธญๅˆ›ไธšๆฟๅผ€ๅง‹ไบŽ2009ๅนด10ๆœˆ๏ผŒๅœจ2006ๅˆฐ2009ๅนด10ๆœˆไน‹้—ดๆŒ‚็‰Œ็š„ๆ–ฐไธ‰ๆฟๅ…ฌๅธๆœ‰{{a}}ๅฎถๅ…ฌๅธใ€‚ ็”ฑไบŽๆ‰€ๆœ‰้€‚ๅˆๆœฌ็ ”็ฉถ็š„โ€œๆ–ฐไธ‰ๆฟโ€ๅ…ฌๅธๅ‡ไธบ2006ๅนดไน‹ๅŽๆŒ‚็‰Œไบคๆ˜“็š„๏ผŒๆ‰€ไปฅๅฏนๅบ”็š„ไธปๆฟใ€ไธญๅฐๆฟๅ’Œๅˆ›ไธšๆฟๅ…ฌๅธไนŸๅฐ†ไผš้€‰ๆ‹ฉ2005ๅนดไน‹ๅŽIPOs็š„ๅ…ฌๅธไฝœไธบๅฏนๆฏ”็š„ๅฏน่ฑกใ€‚ๅŽ็ปญ็š„ๆฃ€้ชŒไนŸไผšๅฐ†ไธปไธญๅˆ›ไธ‰ไธชๆฟๅ—็š„ๅฎšๅ‘ๅขžๅ‘ไฝœไธบๅฏนๆฏ”ๅฏน่ฑก๏ผŒไปฅไฟ่ฏๅœจๅ‘่กŒ่ฟ‡็จ‹ไธŠๆœ‰ไบ›่ฎธ็š„ๅฏๆฏ”ๆ€งใ€‚ To-do: ้œ€่ฆ็กฎๅฎšไผฐ่ฎกๆ ทๆœฌ็š„ๆฅๆบ๏ผŒๅค‡้€‰็š„ๆ–นๆกˆๅŒ…ๆ‹ฌไบ†๏ผš๏ผˆ1๏ผ‰ๅฏไปฅไฝฟ็”จๅฝ“ๅนดๆ‰€ๆœ‰ๅทฒ็ปไธŠๅธ‚็š„ๅ…ฌๅธไฝœไธบไผฐ่ฎกๆ ทๆœฌ๏ผ›๏ผˆ2๏ผ‰ๅฏไปฅไฝฟ็”จๅฝ“ๅนดๆ‰€ๆœ‰ไธŠๅธ‚ๅทฒ็ป่ถ…่ฟ‡ไธ€ๅนด็š„ๅ…ฌๅธไฝœไธบๆ ทๆœฌ๏ผ›๏ผˆ3๏ผ‰ไฝฟ็”จ่ง„ๆจกๅˆ†ๅฑ‚็š„ไผฐ่ฎกๆ–นๆณ•๏ผ›
stocks_neeq_all.iloc[0] year = lambda x: x.year stocks_neeq_all["ๆŒ‚็‰Œๅนดๅบฆ"] = stocks_neeq_all["ๆŒ‚็‰Œๆ—ฅๆœŸ"].apply(year) stocks_neeq_all["ๆŒ‚็‰Œๅนดๅบฆ"].value_counts().sort_index() sns.barplot(stocks_neeq_all["ๆŒ‚็‰Œๅนดๅบฆ"].value_counts().sort_index().index, stocks_neeq_all["ๆŒ‚็‰Œๅนดๅบฆ"].value_counts().sort_index(), palette="Blues_d"); sns.countplot(x="ๆŒ‚็‰Œๅนดๅบฆ", data=stocks_neeq_all, palette="Blues_d"); stocks_neeq_all["ๆ‰€ๅฑž่ฏ็›‘ไผš่กŒไธš"].value_counts()
src/Stocks.ipynb
januslian/neeqem
gpl-3.0
2. Hypothesis tests
#California burritos vs. Carnitas burritos TODO # Don Carlos 1 vs. Don Carlos 2 TODO # Bonferroni correction TODO
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
srcole/qwm
mit
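The TODOs above could be filled in along these lines; a minimal sketch of a two-sample t-test with a Bonferroni correction, using synthetic ratings rather than the real burrito data (the effect sizes and the number of planned comparisons are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
california = rng.normal(3.6, 0.5, 40)  # hypothetical overall ratings
carnitas = rng.normal(3.4, 0.5, 40)

t, p = stats.ttest_ind(california, carnitas)

# Bonferroni: multiply each raw p-value by the number of comparisons
n_tests = 3  # e.g. the three planned comparisons in the TODOs
p_adj = min(p * n_tests, 1.0)
print(t, p, p_adj)
```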
3. Burrito dimension distributions Distribution of each burrito quality
import math def metrichist(metricname): if metricname == 'Volume': bins = np.arange(.375,1.225,.05) xticks = np.arange(.4,1.2,.1) xlim = (.4,1.2) else: bins = np.arange(-.25,5.5,.5) xticks = np.arange(0,5.5,.5) xlim = (-.25,5.25) plt.figure(figsize=(5,5)) n, _, _ = plt.hist(df[metricname].dropna(),bins,color='k') plt.xlabel(metricname + ' rating',size=20) plt.xticks(xticks,size=15) plt.xlim(xlim) plt.ylabel('Count',size=20) plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15) plt.tight_layout() m_Hist = ['Hunger','Volume','Tortilla','Temp','Meat','Fillings', 'Meat:filling','Uniformity','Salsa','Synergy','Wrap','overall'] for m in m_Hist: metrichist(m)
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
srcole/qwm
mit
Test for normal distribution
TODO
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
srcole/qwm
mit
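A sketch of what the normality-test TODO might look like, using scipy's Shapiro-Wilk test on synthetic data (the real check would run on columns of df; the 0.05 threshold is the usual convention, not something the notebook specifies):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=3.5, scale=0.7, size=300)  # stand-in for a rating column

stat, pvalue = stats.shapiro(sample)
is_normalish = pvalue > 0.05  # fail to reject normality at the 5% level
print(stat, pvalue, is_normalish)
```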
Changes implemented From most important to least important: * pre-compute PSF parameters across multiple spectra and wavelengths to enable vectorized calls to legval instead of many scalar calls [~25% faster] * hoist wavelength -> [-1,1] for legendre calculations out of loop [~8% faster] * replace np.outer with numba optimized version [~5% faster (?)] * cast astropy.io.fits.Header to dict() before many key/value lookups [~4% faster] * Avoid intermediate memory allocations by using out=... [<~1% faster] Note: amount of improvement was different for Haswell vs. KNL vs. Core i7 Laptop and varies with extraction problem size; above numbers are approximate; see below for before/after Haswell and KNL results. Before / after comparison
h1 = Table.read('data/portland/hsw-before.txt', format='ascii.basic') h2 = Table.read('data/portland/hsw-after.txt', format='ascii.basic') k1 = Table.read('data/portland/knl-before.txt', format='ascii.basic') k2 = Table.read('data/portland/knl-after.txt', format='ascii.basic') plot(h1['nproc'], h1['rate'], 's:', color='C0', label='Haswell original') plot(h2['nproc'], h2['rate'], 's-', color='C0', label='Haswell') plot(h2['nproc'], h2['nproc']*h2['rate'][0], color='0.5', alpha=0.5, label='_none_') plot(k1['nproc'], k1['rate'], 'd:', color='C1', label='KNL original') plot(k2['nproc'], k2['rate'], 'd-', color='C1', label='KNL') plot(k2['nproc'], k2['nproc']*k2['rate'][0], color='0.5', alpha=0.5, label='perfect scaling') semilogx() semilogy() xticks([1,2,4,8,16,32,64,128], [1,2,4,8,16,32,64,128]) legend(loc='upper left') ylim(20, 20000) xlabel('Number of Processes') ylabel('Extraction Rate') title('25 spectra x 300 wavelengths') savefig('desi-extract-improvements.pdf')
doc/portland-extract.ipynb
sbailey/knltest
bsd-3-clause
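The first two bullet points above (vectorized legval calls and hoisting the wavelength rescaling) can be sketched as follows; the wavelength grid and Legendre coefficients here are made up for illustration, not the real PSF parameters:

```python
import numpy as np
from numpy.polynomial.legendre import legval

wave = np.linspace(3600.0, 9800.0, 50)
coeff = np.array([1.0, 0.5, 0.25])  # hypothetical Legendre coefficients

# hoist the wavelength -> [-1, 1] rescaling out of any loop
wmin, wmax = wave[0], wave[-1]
x = 2.0 * (wave - wmin) / (wmax - wmin) - 1.0

slow = np.array([legval(xi, coeff) for xi in x])  # many scalar calls
fast = legval(x, coeff)                           # one vectorized call
assert np.allclose(slow, fast)
```

The vectorized call does the same arithmetic in one pass over the coefficient array, which is where the bulk of the ~25% speedup quoted above comes from.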
Build in plots for storytelling
def labelit(title_=''): semilogx() semilogy() xticks([1,2,4,8,16,32,64,128], [1,2,4,8,16,32,64,128]) legend(loc='upper left') title(title_) ylim(20, 20000) xlabel('Number of Processes') ylabel('Extraction Rate') figure() plot(h1['nproc'], h1['rate'], 's-', color='C0', label='Haswell') plot(h1['nproc'], h1['nproc']*h1['rate'][0], color='0.5', alpha=0.5, label='_none_') plot(k1['nproc'], k1['rate'][0]*np.ones(len(k1['nproc'])), 'd-', color='C1', label='KNL') plot(k1['nproc'], k1['nproc']*k1['rate'][0], color='0.5', alpha=0.5, label='_none_') labelit('Starting point: no KNL scaling') savefig('portland1.pdf') figure() plot(h1['nproc'], h1['rate'], 's-', color='C0', label='Haswell') plot(h1['nproc'], h1['nproc']*h1['rate'][0], color='0.5', alpha=0.5, label='_none_') plot(k1['nproc'], k1['rate'], 'd-', color='C1', label='KNL') plot(k1['nproc'], k1['rate'][0]*np.ones(len(k1['nproc'])), ':', color='C1', label='_none_', alpha=0.8) plot(k1['nproc'], k1['nproc']*k1['rate'][0], color='0.5', alpha=0.5, label='_none_') labelit('After OpenMP bug workaround') savefig('portland2.pdf') figure() plot(h1['nproc'], h1['rate'], ':', color='C0', label='_none_', alpha=0.8) plot(h2['nproc'], h2['rate'], 's-', color='C0', label='Haswell') plot(h2['nproc'], h2['nproc']*h2['rate'][0], color='0.5', alpha=0.5, label='_none_') plot(k1['nproc'], k1['rate'], ':', color='C1', label='_none_', alpha=0.8) plot(k2['nproc'], k2['rate'], 'd-', color='C1', label='KNL') plot(k2['nproc'], k2['nproc']*k2['rate'][0], color='0.5', alpha=0.5, label='_none_') labelit('After Portland Hackfest') savefig('portland3.pdf') print('For 25 spectra x 300 wavelengths...') print('Starting point ratios of HSW/KNL') print(' per processes {:.2f}'.format(h1['rate'][0]/k1['rate'][0])) print(' per node {:.2f}'.format(np.max(h1['rate'])/np.max(k1['rate']))) print('Ending ratios of HSW/KNL') print(' per processes {:.2f}'.format(h2['rate'][0]/k2['rate'][0])) print(' per node {:.2f}'.format(np.max(h2['rate'])/np.max(k2['rate']))) 
print('Haswell per-node improvement {:.2f}'.format(np.max(h2['rate'])/np.max(h1['rate']))) print('KNL per-node improvement {:.2f}'.format(np.max(k2['rate'])/np.max(k1['rate']))) print('Original HSW / Current KNL {:.2f}'.format(np.max(h1['rate'])/np.max(k2['rate']))) print('Original HSW rate {:.1f}'.format(np.max(h1['rate'])))
doc/portland-extract.ipynb
sbailey/knltest
bsd-3-clause
Configuration parameters
import GAN.utils as utils
# reload(utils)

class Parameters(utils.Parameters):
    # dataset to be loaded
    load_datasets=utils.param(["moriond_v9","abs(ScEta) < 1.5"])
    c_names = utils.param(['Phi','ScEta','Pt','rho','run'])
    x_names = utils.param(['EtaWidth','R9','SigmaIeIe','S4','PhiWidth','CovarianceIetaIphi', 'CovarianceIphiIphi'])

    # generate variables in MC to be distributed like in data
    sample_from_data = ['run']

    # MC reweighting
    reweight = ['rewei_zee_5d_barrel_m_cleaning_xgb']
    mcweight = utils.param('') # if empty ignore original weight
    use_abs_weight = utils.param(True) # negative weights are notoriously nasty

    # cleaning cuts
    cleaning = utils.param('cleaning_zee_m_barrel.hd5')

    # input features transformation
    feat_transform = utils.param('minmax')
    global_match = utils.param(False)

    # generator and discriminator (aka 'critic' in wGAN)
    g_opts=utils.param(dict(name="G_512to64x8", kernel_sizes=[64,64,128,128,256,256,512,512], do_weight_reg=1e-5,do_last_l1reg=1e-5 ))
    pretrain_g = utils.param(False)
    d_opts=utils.param(dict(name="D_1024to32x6",kernel_sizes=[1024,512,128,64,32],do_bn=False, do_dropout=False,hidden_activation="relu", clip_weights=2.e-2,activation=None)) # weight clipping and no activation

    # optimizers
    dm_opts=utils.param(dict(optimizer="RMSprop",opt_kwargs=dict(lr=1e-5,decay=1e-6)))
    am_opts=utils.param(dict(optimizer="RMSprop",opt_kwargs=dict(lr=1e-5,decay=1e-6)))
    loss = utils.param("wgan_loss") # use WGAN loss
    gan_targets = utils.param('gan_targets_hinge') # hinge targets are 1, -1 instead of 0, 1
    schedule = utils.param([0]*5+[1]*1) # number of critic iterations per generator iteration

    # training schedule
    epochs=utils.param(200)
    batch_size=utils.param(4096)
    plot_every=utils.param(10)
    frac_data=utils.param(10) # fraction of data to use
    monitor_dir = utils.param('log') # folder for logging
    batch = utils.param(False) # are we running in batch?

class MyApp(utils.MyApp):
    classes = utils.List([Parameters])
    # Read all parameters above from command line.
# Note: parameter names are all converted to ALL CAPS
notebook_parameters = Parameters(MyApp()).get_params()
# copy parameters to global scope
globals().update(notebook_parameters)
DM_OPTS.update( {"loss":LOSS} )
AM_OPTS.update( {"loss":LOSS} )
plotting.batch = BATCH
notebook_parameters
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
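The clip_weights=2.e-2 entry in d_opts above refers to WGAN-style critic weight clipping; a minimal sketch of the operation itself (the array is illustrative, and the actual clipping is applied inside the training loop by the framework, not like this):

```python
import numpy as np

clip = 2.0e-2  # same threshold as clip_weights above
weights = np.array([-0.5, -0.01, 0.0, 0.015, 0.3])  # made-up critic weights

# every critic weight is forced into [-clip, clip] after each update
clipped = np.clip(weights, -clip, clip)
print(clipped)
```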
Load datasets Apply cleaning and reweight MC to match data
data,mc = cms.load_zee(*LOAD_DATASETS) data.columns mc.columns
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Cleaning
if CLEANING is not None:
    print('cleaning data and mc')
    thr_up = pd.read_hdf(CLEANING,'thr_up')
    thr_down = pd.read_hdf(CLEANING,'thr_down')
    nevts_data = data.shape[0]
    nevts_mc = mc.shape[0]
    data = data[ ((data[thr_down.index] >= thr_down) & (data[thr_up.index] <= thr_up)).all(axis=1) ]
    mc = mc[ ((mc[thr_down.index] >= thr_down) & (mc[thr_up.index] <= thr_up)).all(axis=1) ]
    print('cleaning eff (data,mc): %1.2f %1.2f' % ( data.shape[0] / nevts_data, mc.shape[0] / nevts_mc ))
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Generate extra variables for MC (eg run number) distributed like data
for col in SAMPLE_FROM_DATA: print('sampling', col) mc[col] = data[col].sample(mc.shape[0]).values
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Select target and conditional features
c_names = C_NAMES x_names = X_NAMES data_c = data[c_names] data_x = data[x_names] mc_c = mc[c_names] mc_x = mc[x_names] data_x.columns, data_x.shape, data_c.columns, data_c.shape data_x.columns, data_c.columns mc_x.columns, mc_c.columns
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Reweight MC
# reload(preprocessing)
# reload(utils)

# initialize weights to default
if MCWEIGHT == '':
    mc_w = np.ones(mc_x.shape[0])
else:
    mc_w = mc[MCWEIGHT].values
    print(mc_w[:10])

# take care of negative weights
if USE_ABS_WEIGHT:
    mc_w = np.abs(mc_w)

# reweighting
if REWEIGHT is not None:
    for fil in REWEIGHT:
        # apply weights from n-dimensional histograms
        if 'npy' in fil:
            info = np.load(fil)
            inputs = info[0]
            weights = info[1]
            bins = info[2:]
            # print(bins[1])
            print('weighting',inputs)
            mc_w *= preprocessing.reweight(mc,inputs,bins,weights,base=None)
        else:
            # or use a classifier
            clf = utils.read_pickle(fil,encoding='latin1')
            if hasattr(clf,'nthread'):
                clf.nthread = min(4,clf.nthread)
                clf.n_jobs = clf.nthread
            mc_w *= preprocessing.reweight_multidim(mc[clf.features_],clf)

data_w = np.ones(data_x.shape[0])
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
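A minimal sketch of the histogram-based branch above (the notebook's preprocessing.reweight is more general): each MC event is weighted by the data/MC density ratio of the bin its value falls in. The distributions here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 10000)   # "data" distribution
mc = rng.normal(0.2, 1.1, 10000)     # slightly shifted "MC" distribution

bins = np.linspace(-4, 4, 41)
h_data, _ = np.histogram(data, bins=bins, density=True)
h_mc, _ = np.histogram(mc, bins=bins, density=True)

# per-bin data/MC ratio; empty MC bins default to weight 1
ratio = np.divide(h_data, h_mc, out=np.ones_like(h_data), where=h_mc > 0)

# look up each MC event's bin and take its weight
idx = np.clip(np.digitize(mc, bins) - 1, 0, len(ratio) - 1)
w = ratio[idx]
```

Multiplying mc_w by w (as the real code does) pulls the weighted MC histogram toward the data histogram.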
Features transformation
# reload(preprocessing) if GLOBAL_MATCH: _,data_c,mc_x,mc_c,scaler_x,scaler_c = preprocessing.transform(None,data_c,mc_x,mc_c,FEAT_TRANSFORM) _,_,data_x,_,scaler_x_data,_ = preprocessing.transform(None,None,data_x,None,FEAT_TRANSFORM) else: data_x,data_c,mc_x,mc_c,scaler_x,scaler_c = preprocessing.transform(data_x,data_c,mc_x,mc_c,FEAT_TRANSFORM) data_x.shape,mc_x.shape data_x[2].max(),mc_x[2].max() print(mc_c.max(axis=0),mc_c.min(axis=0)) print(data_c.max(axis=0),data_c.min(axis=0)) print(mc_w.shape,mc_c.shape) for ic in range(len(c_names)): plotting.plot_hists(data_c[:,0,ic],mc_c[:,0,ic],generated_w=mc_w,bins=100,range=[-2.0,2.0]) plt.xlabel(c_names[ic]) plt.show() for ix in range(len(x_names)): plotting.plot_hists(data_x[:,0,ix],mc_x[:,0,ix],generated_w=mc_w,bins=100,range=[-3,3]) plt.xlabel(x_names[ix]) plt.show()
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Prepare train and test sample
nmax = min(data_x.shape[0]//FRAC_DATA,mc_x.shape[0]) data_x_train,data_x_test,data_c_train,data_c_test,data_w_train,data_w_test = cms.train_test_split(data_x[:nmax],data_c[:nmax],data_w[:nmax],test_size=0.1) mc_x_train,mc_x_test,mc_c_train,mc_c_test,mc_w_train,mc_w_test = cms.train_test_split(mc_x[:nmax],mc_c[:nmax],mc_w[:nmax],test_size=0.1) wscl = data_w[:nmax].sum() / mc_w[:nmax].sum() mc_w_train *= wscl mc_w_test *= wscl print(nmax,mc_w_train.sum()/data_w_train.sum())
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Instantiate the model Create the model and compile it.
# reload(models) xz_shape = (1,len(x_names)) c_shape = (1,len(c_names)) gan = models.MyFFGAN( xz_shape, xz_shape, c_shape=c_shape, g_opts=G_OPTS, d_opts=D_OPTS, dm_opts=DM_OPTS, am_opts=AM_OPTS, gan_targets=GAN_TARGETS )
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
This will actually trigger the instantiation of the generator (if not done here, it will happen before compilation).
gan.get_generator()
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Same as above for the discriminator. Two discriminator models are created: one for the discriminator training phase and another for the generator training phase. The difference between the two is that the second does not contain dropout layers.
gan.get_discriminator()
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
model compilation
gan.adversarial_compile(loss=LOSS,schedule=SCHEDULE) gan.get_generator().summary() gan.get_discriminator()[0].summary() gan.get_discriminator()[1].summary() # sanity check set(gan.am[0].trainable_weights)-set(gan.am[1].trainable_weights)
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
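The schedule=SCHEDULE passed to adversarial_compile above, with SCHEDULE = [0]*5 + [1], presumably cycles five critic updates for every generator update; a sketch of that interpretation (the meaning of the 0/1 entries is an assumption about this codebase):

```python
schedule = [0] * 5 + [1] * 1  # 0: train critic, 1: train generator (assumed)

# unroll twelve training steps through the cycle
steps = [schedule[i % len(schedule)] for i in range(12)]
print(steps)
```

Training the critic several times per generator step is the usual WGAN recipe, since the Wasserstein loss relies on the critic being close to optimal.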
Training Everything is ready. We start training.
from keras.optimizers import RMSprop

if PRETRAIN_G:
    generator = gan.get_generator()
    generator.compile(loss="mse",optimizer=RMSprop(lr=1.e-3))
    generator.fit( [mc_c_train,mc_x_train], [mc_c_train,mc_x_train], sample_weight=[mc_w_train,mc_w_train], epochs=1, batch_size=BATCH_SIZE )

# reload(base)
initial_epoch = 0
# if hasattr(gan.model,"history"):
#     initial_epoch = gan.model.history.epoch[-1] + 1

do = dict( x_train=data_x_train, z_train=mc_x_train, c_x_train=data_c_train, c_z_train=mc_c_train, w_x_train = data_w_train, w_z_train = mc_w_train, x_test=data_x_test, z_test=mc_x_test, c_x_test=data_c_test, c_z_test=mc_c_test, w_x_test = data_w_test, w_z_test = mc_w_test, n_epochs=EPOCHS + initial_epoch - 1, initial_epoch=initial_epoch, batch_size=BATCH_SIZE, plot_every=PLOT_EVERY, monitor_dir=MONITOR_DIR )

exc = None
try:
    gan.adversarial_fit(**do)
except Exception as e:
    exc = e
if exc is not None:
    print(exc)
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
### Clustering pearsons_r with HDBSCAN
# Clustering the pearsons_R with N/A values removed
hdb_t1 = time.time()
hdb_pearson_r = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_pearson_r)
hdb_pearson_r_labels = hdb_pearson_r.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_pearson_r_labels)) # unique cluster labels; -1 is noise
print(np.bincount(hdb_pearson_r_labels[hdb_pearson_r_labels!=-1]))
pearson_clusters = {i: np.where(hdb_pearson_r_labels == i)[0] for i in range(2)}
pearson_clusters
#pd.set_option('display.height', 500) #These two commands allow for the display of max of 500 rows - exploring genes
#pd.set_option('display.max_rows', 500)
df2_TPM.iloc[pearson_clusters[1],:] #the genes that were clustered together [0,1]
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
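Since HDBSCAN is run with metric="precomputed" above, df3_pearson_r is presumably a Pearson distance matrix; a common construction (an assumption here, since the notebook's preprocessing is not shown) is one minus the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 10))  # 6 genes x 10 samples, synthetic

corr = np.corrcoef(X)   # gene-by-gene Pearson correlation
dist = 1.0 - corr       # similarity -> distance-like matrix in [0, 2]
print(dist.shape)
```

Perfectly correlated genes get distance 0 and anti-correlated genes get distance 2, which is a reasonable input for a precomputed-metric clusterer.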
Looks like there are two clusters: some expression and zero expression across samples.
### Clustering mean-centered euclidean distance with HDBSCAN
df3_euclidean_mean.hist() # Clustering the mean centered euclidean distance of TPM counts hdb_t1 = time.time() hdb_euclidean_mean = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_euclidean_mean) hdb_euclidean_mean_labels = hdb_euclidean_mean.labels_ hdb_elapsed_time = time.time() - hdb_t1 print("time to cluster", hdb_elapsed_time) print(np.unique(hdb_euclidean_mean_labels)) print(np.bincount(hdb_euclidean_mean_labels[hdb_euclidean_mean_labels!=-1])) euclidean_mean_clusters = {i: np.where(hdb_euclidean_mean_labels == i)[0] for i in range(2)} df2_TPM.iloc[euclidean_mean_clusters[1],:]
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Looks like 2 clusters - both with zero expression. It looks like whether it is a numpy array or a pandas DataFrame, the result is the same. Let's now try to get the indices of the clustered points.
### Clustering log-transformed euclidean distance with HDBSCAN
df3_euclidean_log2 # Clustering the log2 transformed euclidean distance of TPM counts hdb_t1 = time.time() hdb_euclidean_log2 = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_euclidean_log2) hdb_euclidean_log2_labels = hdb_euclidean_log2.labels_ hdb_elapsed_time = time.time() - hdb_t1 print("time to cluster", hdb_elapsed_time) print(np.unique(hdb_euclidean_log2_labels)) print(np.bincount(hdb_euclidean_log2_labels[hdb_euclidean_log2_labels!=-1])) euclidean_log2_clusters = {i: np.where(hdb_euclidean_log2_labels == i)[0] for i in range(2)} df2_TPM.iloc[euclidean_log2_clusters[1],:]
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Clustering using built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance)
df2_TPM_values = df2_TPM.loc[:,"5GB1_FM40_T0m_TR2":"5GB1_FM40_T180m_TR1"] #isolating the data values
df2_TPM_values_T = df2_TPM_values.T #transposing the data
standard_scaler = StandardScaler()
TPM_counts_mean_centered = standard_scaler.fit_transform(df2_TPM_values_T) #mean centering the data
TPM_counts_mean_centered = pd.DataFrame(TPM_counts_mean_centered) #back to DataFrame
#transposing back to original form and reinserting indices and columns
my_index = df2_TPM_values.index
my_columns = df2_TPM_values.columns
TPM_counts_mean_centered = TPM_counts_mean_centered.T
TPM_counts_mean_centered.set_index(my_index, inplace=True)
TPM_counts_mean_centered.columns = my_columns
# Clustering the mean-centered counts with the built-in euclidean metric
hdb_t1 = time.time()
hdb_euclidean = hdbscan.HDBSCAN(metric = "euclidean", min_cluster_size=5).fit(TPM_counts_mean_centered)
hdb_euclidean_labels = hdb_euclidean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_euclidean_labels))
print(np.bincount(hdb_euclidean_labels[hdb_euclidean_labels!=-1]))
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
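The transpose / StandardScaler / transpose-back sequence above standardizes each gene (row) across its samples; an equivalent numpy sketch with toy values:

```python
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])  # 2 genes x 3 samples, toy values

mu = X.mean(axis=1, keepdims=True)
sd = X.std(axis=1, keepdims=True)   # StandardScaler uses the population std
Z = (X - mu) / sd
print(Z)
```

After this, every row has mean 0 and unit variance, so euclidean distances compare expression profiles rather than absolute expression levels.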
Let's look at some clusters.

Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[0],:]
Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)} df2_TPM.iloc[Euclidean_standard_scaled_clusters[1],:]
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Euclidean_standard_scaled_clusters

Clustering log2-transformed data using the built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance)
df2_TPM_log2_scale= df2_TPM_log2.T #transposing the data
standard_scaler = StandardScaler()
TPM_log2_mean_scaled = standard_scaler.fit_transform(df2_TPM_log2_scale) #mean centering the data
TPM_log2_mean_scaled = pd.DataFrame(TPM_log2_mean_scaled) #back to DataFrame
#transposing back to original form and reinserting indices and columns
my_index = df2_TPM_values.index
my_columns = df2_TPM_values.columns
TPM_log2_mean_scaled = TPM_log2_mean_scaled.T
TPM_log2_mean_scaled.set_index(my_index, inplace=True)
TPM_log2_mean_scaled.columns = my_columns
# Clustering the log2-transformed, mean-centered counts with the built-in euclidean metric
hdb_t1 = time.time()
hdb_log2_euclidean = hdbscan.HDBSCAN(metric = "euclidean", min_cluster_size=5).fit(TPM_log2_mean_scaled)
hdb_log2_euclidean_labels = hdb_log2_euclidean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_log2_euclidean_labels))
print(np.bincount(hdb_log2_euclidean_labels[hdb_log2_euclidean_labels!=-1]))
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
DM -- Piece by piece (as coded) $\rho_b = \Omega_b \rho_c (1+z)^3$ $\rho_{\rm diffuse} = \rho_b - (\rho_* + \rho_{\rm ISM})$ $\rho_*$ is the mass density in stars $\rho_{\rm ISM}$ is the mass density in the neutral ISM Number densities $n_{\rm H} = \rho_{\rm diffuse}/(m_p \, \mu)$ $\mu \approx 1.3$ accounts for Helium $n_{\rm He} = n_{\rm H}/12$ $n_e = n_{\rm H} [1-f_{\rm HI}] + n_{\rm He} Y$ $f_{\rm HI}$ is the fraction of atomic Hydrogen [value between 0-1] $Y$ gives the number of free electrons per He nucleus [value between 0-2] Integrating $DM = \int \frac{n_e \, dr}{1+z} = \frac{c}{H_0} \int \frac{n_e \, dz}{(1+z)^2 \sqrt{(1+z)^3 \Omega_m + \Omega_\Lambda}}$ DM -- Altogether (using the code)
reload(igm) DM = igm.average_DM(1.) DM
docs/nb/DM_cosmic.ipynb
FRBs/FRB
bsd-3-clause
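The number-density bookkeeping above can be checked with a quick back-of-the-envelope sketch (the constants are illustrative and the diffuse density is a made-up value; igm.average_DM does the real cosmological calculation):

```python
m_p = 1.67262192e-27   # proton mass [kg]
mu = 1.3               # accounts for Helium
f_HI = 0.0             # assume fully ionized Hydrogen
Y = 2.0                # assume doubly ionized Helium

rho_diffuse = 4.2e-28  # hypothetical diffuse baryon density [kg/m^3]

n_H = rho_diffuse / (m_p * mu)
n_He = n_H / 12.0
n_e = n_H * (1.0 - f_HI) + n_He * Y
print(n_H, n_e)
```

With both species fully ionized, n_e / n_H = 1 + Y/12, i.e. Helium contributes roughly a 17% boost to the free-electron density.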
Cumulative plot
DM_cumul, zeval = igm.average_DM(1., cumul=True) # Inoue approximation DM_approx = 1000. * zeval * units.pc / units.cm**3 plt.clf() ax = plt.gca() ax.plot(zeval, DM_cumul, label='JXP') ax.plot(zeval, DM_approx, label='Approx') # Label ax.set_xlabel('z') ax.set_ylabel(r'${\rm DM}_{\rm IGM} [\rm pc / cm^3]$ ') # Legend legend = plt.legend(loc='lower right', scatterpoints=1, borderpad=0.2, handletextpad=0.1, fontsize='large') plt.show() plt.clf() ax = plt.gca() ax.plot(zeval, DM_approx/DM_cumul, label='Approx/JXP') #ax.plot(zeval, DM_approx, label='Approx') # Label ax.set_xlabel('z') ax.set_ylabel(r'Ratio of ${\rm DM}_{\rm IGM} [\rm pc / cm^3]$ ') # Legend legend = plt.legend(loc='upper right', scatterpoints=1, borderpad=0.2, handletextpad=0.1, fontsize='large') plt.show() DM_cumul[0:10] DM_approx[0:10] zeval[0]
docs/nb/DM_cosmic.ipynb
FRBs/FRB
bsd-3-clause
The Longley dataset is a time series dataset:
data = sm.datasets.longley.load() data.exog = sm.add_constant(data.exog) print(data.exog[:5])
examples/notebooks/gls.ipynb
ChadFulton/statsmodels
bsd-3-clause
While we don't have strong evidence that the errors follow an AR(1) process, we continue
rho = resid_fit.params[1]
examples/notebooks/gls.ipynb
ChadFulton/statsmodels
bsd-3-clause
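With rho in hand, GLS needs an error covariance structure; for AR(1) errors one common construction is a Toeplitz matrix of powers of rho. A sketch with a made-up rho (in the notebook the value comes from resid_fit.params[1]):

```python
import numpy as np

rho = 0.5  # hypothetical autocorrelation
n = 5
order = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
sigma = rho ** order  # sigma[i, j] = rho**|i - j|
print(sigma)
```

A matrix like this can then be passed as the sigma argument of sm.GLS.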
2x2 Grid You can easily create a layout with 4 widgets arranged on 2x2 matrix using the TwoByTwoLayout widget:
from ipywidgets import TwoByTwoLayout TwoByTwoLayout(top_left=top_left_button, top_right=top_right_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells
TwoByTwoLayout(top_left=top_left_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can pass merge=False to the TwoByTwoLayout constructor if you don't want this behavior
TwoByTwoLayout(top_left=top_left_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button, merge=False)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can add a missing widget even after the layout initialization:
layout_2x2 = TwoByTwoLayout(top_left=top_left_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button) layout_2x2 layout_2x2.top_right = top_right_button
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can easily create more complex layouts with custom widgets. For example, you can use bqplot Figure widget to add plots:
import bqplot as bq import numpy as np size = 100 np.random.seed(0) x_data = range(size) y_data = np.random.randn(size) y_data_2 = np.random.randn(size) y_data_3 = np.cumsum(np.random.randn(size) * 100.) x_ord = bq.OrdinalScale() y_sc = bq.LinearScale() bar = bq.Bars(x=np.arange(10), y=np.random.rand(10), scales={'x': x_ord, 'y': y_sc}) ax_x = bq.Axis(scale=x_ord) ax_y = bq.Axis(scale=y_sc, tick_format='0.2f', orientation='vertical') fig = bq.Figure(marks=[bar], axes=[ax_x, ax_y], padding_x=0.025, padding_y=0.025, layout=Layout(width='auto', height='90%')) from ipywidgets import FloatSlider max_slider = FloatSlider(min=0, max=10, default_value=2, description="Max: ", layout=Layout(width='auto', height='auto')) min_slider = FloatSlider(min=-1, max=10, description="Min: ", layout=Layout(width='auto', height='auto')) app = TwoByTwoLayout(top_left=min_slider, bottom_left=max_slider, bottom_right=fig, align_items="center", height='700px') jslink((y_sc, 'max'), (max_slider, 'value')) jslink((y_sc, 'min'), (min_slider, 'value')) jslink((min_slider, 'max'), (max_slider, 'value')) jslink((max_slider, 'min'), (min_slider, 'value')) max_slider.value = 1.5 app
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
AppLayout AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane:
from ipywidgets import AppLayout, Button, Layout header_button = create_expanded_button('Header', 'success') left_button = create_expanded_button('Left', 'info') center_button = create_expanded_button('Center', 'warning') right_button = create_expanded_button('Right', 'info') footer_button = create_expanded_button('Footer', 'success') AppLayout(header=header_button, left_sidebar=left_button, center=center_button, right_sidebar=right_button, footer=footer_button)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
However with the automatic merging feature, it's possible to achieve many other layouts:
AppLayout(header=None, left_sidebar=None, center=center_button, right_sidebar=None, footer=None) AppLayout(header=header_button, left_sidebar=left_button, center=center_button, right_sidebar=right_button, footer=None) AppLayout(header=None, left_sidebar=left_button, center=center_button, right_sidebar=right_button, footer=None) AppLayout(header=header_button, left_sidebar=left_button, center=center_button, right_sidebar=None, footer=footer_button) AppLayout(header=header_button, left_sidebar=None, center=center_button, right_sidebar=right_button, footer=footer_button) AppLayout(header=header_button, left_sidebar=None, center=center_button, right_sidebar=None, footer=footer_button) AppLayout(header=header_button, left_sidebar=left_button, center=None, right_sidebar=right_button, footer=footer_button)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (same as integer) or '100px' (absolute size).
AppLayout(header=header_button, left_sidebar=left_button, center=center_button, right_sidebar=right_button, footer=footer_button, pane_widths=[3, 3, 1], pane_heights=[1, 5, '60px'])
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
Grid layout GridspecLayout is a N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec. You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout:
from ipywidgets import GridspecLayout grid = GridspecLayout(4, 3) for i in range(4): for j in range(3): grid[i, j] = create_expanded_button('Button {} - {}'.format(i, j), 'warning') grid
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
To make a widget span several columns and/or rows, you can use slice notation:
grid = GridspecLayout(4, 3, height='300px') grid[:3, 1:] = create_expanded_button('One', 'success') grid[:, 0] = create_expanded_button('Two', 'info') grid[3, 1] = create_expanded_button('Three', 'warning') grid[3, 2] = create_expanded_button('Four', 'danger') grid
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can still change properties of the widgets stored in the grid, using the same indexing notation.
grid = GridspecLayout(4, 3, height='300px') grid[:3, 1:] = create_expanded_button('One', 'success') grid[:, 0] = create_expanded_button('Two', 'info') grid[3, 1] = create_expanded_button('Three', 'warning') grid[3, 2] = create_expanded_button('Four', 'danger') grid grid[0, 0].description = "I am the blue one"
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
Note: It's enough to pass an index of one of the grid cells occupied by the widget of interest. Slices are not supported in this context. If there is already a widget that conflicts with the position of the widget being added, it will be removed from the grid:
grid = GridspecLayout(4, 3, height='300px') grid[:3, 1:] = create_expanded_button('One', 'info') grid[:, 0] = create_expanded_button('Two', 'info') grid[3, 1] = create_expanded_button('Three', 'info') grid[3, 2] = create_expanded_button('Four', 'info') grid grid[3, 1] = create_expanded_button('New button!!', 'danger')
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
Creating scatter plots using GridspecLayout In these examples, we will demonstrate how to use GridspecLayout and the bqplot Figure widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package. For example, you can use the following snippet to obtain a scatter plot across multiple dimensions:
import bqplot as bq import numpy as np from ipywidgets import GridspecLayout, Button, Layout n_features = 5 data = np.random.randn(100, n_features) data[:50, 2] += 4 * data[:50, 0] **2 data[50:, :] += 4 A = np.random.randn(n_features, n_features)/5 data = np.dot(data,A) scales_x = [bq.LinearScale() for i in range(n_features)] scales_y = [bq.LinearScale() for i in range(n_features)] gs = GridspecLayout(n_features, n_features) for i in range(n_features): for j in range(n_features): if i != j: sc_x = scales_x[j] sc_y = scales_y[i] scatt = bq.Scatter(x=data[:, j], y=data[:, i], scales={'x': sc_x, 'y': sc_y}, default_size=1) gs[i, j] = bq.Figure(marks=[scatt], layout=Layout(width='auto', height='auto'), fig_margin=dict(top=0, bottom=0, left=0, right=0)) else: sc_x = scales_x[j] sc_y = bq.LinearScale() hist = bq.Hist(sample=data[:,i], scales={'sample': sc_x, 'count': sc_y}) gs[i, j] = bq.Figure(marks=[hist], layout=Layout(width='auto', height='auto'), fig_margin=dict(top=0, bottom=0, left=0, right=0)) gs
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
<a id='index'></a>
Index
Back to top
1 Introduction
2 ScatterPlot components
2.1 The scatter plot marker
2.2 Internal structure
2.3 Data structures
2.3.1 Original data
2.3.2 Tooltip data
2.3.3 Mapper data
2.3.4 Output data
3 ScatterPlot interface
3.1 Data mapper
3.2 Tooltip selector
3.3 Colors in Hex format
4 Taking a snapshot of the current plot
5 Plotting pandas Panel and Panel4D

<a id='introduction'></a>
1 Introduction
Now that we are familiar with the framework's basics, we can start showing the full capabilities of Shaolin. In order to do that, I will review one of the most "simple" and widely used plots in data science: the scatter plot. The dashboards section of the Shaolin framework provides several Dashboards suited for complex data processing, and the Bokeh ScatterPlot is the one on which this tutorial centers. All the individual components of this Dashboard will be explained in depth in further tutorials.

<a id='components'></a>
2 ScatterPlot components

<a id='scatter_marker'></a>
2.1 The scatter plot marker
A scatter plot, as we all know, is a kind of plot in which we represent two datapoint vectors (x and y) against each other and assign a marker (for now just a circle) to each pair of data points. Although the x and y coordinates of the marker are the only two compulsory parameters, it is also possible to customize the following parameters for a circle marker:
* x: x coordinate of the marker.
* y: y coordinate of the marker.
* size: Marker size.
* fill_alpha: Transparency value of the interior of the marker.
* fill_color: Color of the interior of the marker.
* line_color: Color of the marker's border.
* line_alpha: Transparency of the marker's border.
* line_width: Width of the marker's border.

It is possible to fully customize which data from the data structure we want to plot will be mapped to a marker parameter and how that mapping will be performed.
In order to assign values to a marker parameter we have to follow this process:

1. Select a chunk of data from your pandas data structure that has the correct shape. (In this case each parameter must be a data-point vector.)
2. Select how the data will be scaled to fit in the range of values that the marker parameter can take. (For example, for the line_width parameter all the values should be between 1 and 4.)
3. Select whether the values of the parameter will be a mapping of the data or a default value that is the same for all the data points.

This means that we could theoretically plot 8-dimensional data by mapping each parameter to a coordinate of a data point, but in practice it is sometimes more useful to map more than one parameter to the same vector of data points in order to emphasize some feature of the data we are plotting. For example, we could map the fill_color parameter and the fill_alpha parameter to the same feature so it becomes easy to emphasize the higher values of the plotted vector.

<a id='internals'></a>
## 2.2 Internal structure

Back to top

The scatter plot is a Dashboard with the following attributes:

- data: the pandas data structure that we will use for the plot.
- widget: GUI for selecting the plot parameters.
- plot: Bokeh plot where the data is displayed.
- mapper: Dashboard in charge of mapping data to every marker parameter.
- tooltip: Dashboard in charge of managing the information displayed in the tooltip.
- output: DataFrame with all the information available to the plot.
- bokeh_source: Bokeh DataSource that mirrors the information contained in the source DataFrame.

In the following diagram you can see the process of how data is mapped into visual information.

<img src="scatter_data/structure.svg"></img>

For this example we will use the classic Iris dataset imported from the Bokeh sample data.
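Step 2 of the process above (scaling a data vector into a parameter's valid range) amounts to a plain min-max rescaling. As a minimal sketch, the helper below is hypothetical and not part of Shaolin's API; it only illustrates the idea of squeezing any data vector into, say, the 1–4 range used for line_width:

```python
import numpy as np

def scale_to_range(values, lo=1.0, hi=4.0):
    """Min-max rescale a data vector into [lo, hi] (hypothetical helper)."""
    values = np.asarray(values, dtype=float)
    vmin, vmax = values.min(), values.max()
    if vmax == vmin:  # constant vector: fall back to the midpoint of the range
        return np.full_like(values, (lo + hi) / 2.0)
    return lo + (values - vmin) * (hi - lo) / (vmax - vmin)

# e.g. mapping a feature to line widths between 1 and 4
widths = scale_to_range(np.array([0.0, 5.0, 10.0]))
```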
from IPython.display import Image  # this is for displaying the widgets in the web version of the notebook
from shaolin.dashboards.bokeh import ScatterPlot
from bokeh.sampledata.iris import flowers

scplot = ScatterPlot(flowers)
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='data_structures'></a>
## 2.3 Data structures

Back to top

The data contained in the blocks described in the diagram above can be accessed in the following ways:

<a id='original_data'></a>
### 2.3.1 Original data
scplot.data.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='tooltip_data'></a>
### 2.3.2 Tooltip data

Back to top
scplot.tooltip.output.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='mapper_data'></a>
### 2.3.3 Mapper data
scplot.mapper.output.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='output_data'></a>
### 2.3.4 Output data
scplot.output.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='plot_interface'></a>
# 3 ScatterPlot interface

Back to top

The ScatterPlot Dashboard contains the Bokeh scatter plot and a widget. That widget is a toggle menu that can display two Dashboards:

- Mapper: this Dashboard is in charge of managing how the data is displayed.
- Tooltip: the BokehTooltip Dashboard lets you select what information will be displayed in the plot tooltips.

The complete plot interface can be displayed by calling the show function. As you will see, the interface layout has not been customized yet, so any suggestion regarding interface design will be appreciated.

<a id='data_mapper'></a>
## 3.1 Data mapper

Back to top

This is the Dashboard that lets you customize how the data will be plotted. We will color each of its components so it is easier to locate them. This is a good example of a complex Dashboard composed of multiple Dashboards.
mapper = scplot.mapper
mapper.buttons.widget.layout.border = "blue solid"
mapper.buttons.value = 'line_width'
mapper.line_width.data_scaler.widget.layout.border = 'yellow solid'
mapper.line_width.data_slicer.widget.layout.border = 'red solid 0.4em'
mapper.line_width.data_slicer.columns_slicer.widget.layout.border = 'green solid 0.4em'
mapper.line_width.data_slicer.index_slicer.widget.layout.border = 'green solid 0.4em'
mapper.line_width.default_value.widget.layout.border = 'purple solid 0.4em'
mapper.line_width.apply_row.widget.layout.border = "pink solid 0.4em"
scplot.widget
Image(filename='scatter_data/img_1.png')
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
A plot mapper has the following components:

- Marker parameter selector (blue): a dropdown that lets you select which marker parameter is going to be changed.
- Data slicer (red): a Dashboard in charge of selecting a data-point vector from a pandas data structure. We can slice each of the dimensions of the data structure thanks to an AxisSlicer (green) Dashboard.
- Data scaler (yellow): Dashboard in charge of scaling the data, similar to the data scaler from the tutorials.
- Activate mapping (pink): if the value of the checkbox is True, the value of the marker parameter will be the output of the scaler; otherwise it will be the default value (purple) for every data point.

<a id='tooltip_selector'></a>
## 3.2 Tooltip selector

Back to top

It is possible to choose what information from the data attribute of the ScatterPlot will be shown when hovering over a marker. In the cell below we click the "tooltip" button of the ToggleButtons in order to make the widget visible. As we can see, there is a SelectMultiple widget for every column of the original DataFrame.
scplot.widget
Image(filename='scatter_data/img_2.png')
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='snapshot'></a>
# 4 Taking a snapshot of the current plot

Back to top

Although it is possible to save the Bokeh plot with any of the standard methods the Bokeh library offers by accessing the plot attribute of the ScatterPlot, Shaolin also offers the possibility of saving a snapshot of the plot as a Shaolin widget compatible with the framework, so it can be included in a Dashboard for display purposes. This is done by accessing the snapshot attribute of the ScatterPlot. This way the current plot is exported and we can keep working with the ScatterPlot Dashboard in case we need to make more plots. A snapshot is an HTML widget whose value is an exported notebook_div of the plot.
widget_plot = scplot.snapshot
widget_plot.widget
Image(filename='scatter_data/img_3.png')
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='plot_pandas'></a>
# 5 Plotting pandas Panel and Panel4D

Back to top

It is also possible to plot a pandas Panel or a Panel4D the same way as a DataFrame. The only restriction for now is that the axis used as the index must be the major_axis in the case of a Panel and the items axis in the case of a Panel4D. The tooltips are disabled; custom tooltips will be available in the next release. It would be nice to have feedback on how you would like to display and select the tooltips.
from pandas.io.data import DataReader  # I know it's deprecated, but I can't make pandas_datareader work :P
import datetime

symbols_list = ['ORCL', 'TSLA', 'IBM', 'YELP', 'MSFT']
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2013, 1, 27)
panel = DataReader(symbols_list, start=start, end=end, data_source='yahoo')
panel
sc_panel = ScatterPlot(panel)
#sc_panel.show()
Image(filename='scatter_data/img_4.png')
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
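As the comment in the cell above hints, `pandas.io.data` was deprecated, and the Panel type itself was later removed from pandas entirely. As a hedged sketch (not part of the tutorial's Shaolin workflow), roughly equivalent panel-shaped data can be built today with a MultiIndex-column DataFrame:

```python
import numpy as np
import pandas as pd

# pandas removed the Panel type (and pandas.io.data); a DataFrame with
# MultiIndex columns is the usual modern stand-in for panel-shaped data.
dates = pd.date_range("2010-01-01", periods=5)
frames = {
    sym: pd.DataFrame(np.random.randn(5, 2), index=dates, columns=["Open", "Close"])
    for sym in ["ORCL", "TSLA"]
}
# Concatenating a dict along axis=1 makes the keys an outer column level,
# so columns become (symbol, field) pairs.
panel_like = pd.concat(frames, axis=1)
```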
The following blocks define the functions that compute the accumulative sum with each of the three chosen approaches.

Native implementation using a Python `for` loop
def native_acc(x):
    """
    Calculate accumulative sum using Python native loop approach

    :param x: Numpy array
    :return: Accumulative sum (list)
    """
    native_acc_sum = []
    sum_aux = 0
    for val in x:
        native_acc_sum.append(val + sum_aux)
        sum_aux += val
    return native_acc_sum
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
Just a binding for numpy.cumsum
import numpy as np


def np_acc(x):
    """
    Calculate accumulative sum using numpy.cumsum

    :param x: Numpy array
    :return: Accumulative sum (Numpy array)
    """
    return np.cumsum(x)
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
Now using the sci20 implementation
def sci20_acc(x):
    """
    Calculate accumulative sum using sci20 Array

    :param x: Numpy array
    :return: Accumulative sum (sci20 array)
    """
    x_array = Array(FromNumPy(x))
    return AccumulateArrayReturning(x_array)
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
Here we use the timeit standard library to obtain the elapsed time of each method
import functools
import timeit


def do_benchmark(n=1000, k=10):
    """
    Compute elapsed time for each accumulative sum implementation (Native loop, Numpy and sci20).

    :param n: Number of array elements. Default is 1000.
    :param k: Number of repetitions used for timing. Default is 10.
    :return: A tuple (dt_native, dt_sci20, dt_np) containing the elapsed time for each method.
    """
    x = np.linspace(1, 100, n)
    dt_native = timeit.Timer(functools.partial(native_acc, x)).timeit(k)
    dt_sci20 = timeit.Timer(functools.partial(sci20_acc, x)).timeit(k)
    dt_np = timeit.Timer(functools.partial(np_acc, x)).timeit(k)
    return dt_native, dt_sci20, dt_np
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
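One subtlety worth keeping in mind when reading the numbers this produces: `timeit.Timer(...).timeit(k)` returns the total elapsed time of all `k` calls, not an average, so comparing runs with different `k` needs an explicit division. A minimal sketch (using `sum(range(...))` as a stand-in workload, since sci20 is not assumed to be installed):

```python
import functools
import timeit

# Timer.timeit(k) runs the callable k times and returns the TOTAL elapsed
# time in seconds; divide by k to get a per-call figure.
k = 10
total = timeit.Timer(functools.partial(sum, range(1000))).timeit(k)
per_call = total / k
print("total: {:.8f}s, per call: {:.8f}s".format(total, per_call))
```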
And finally we have the results: the sci20 and Numpy implementations are almost equivalent, while the native loop demonstrates its inefficiency
""" Computes and prints the elapsed time for each accumulative sum implementation (Native loop, Numpy and sci20). """ n_values = [10**x for x in range(1, 8)] # [10, 100, ..., 10^7] for n in n_values: print("Computing benchmark for n={}...".format(n)) dt_native, dt_sci20, dt_np = do_benchmark(n) print("Native: {:.8f}s / sci20: {:.8f}s / Numpy: {:.8f}s.".format(dt_native, dt_sci20, dt_np))
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
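Beyond timing, it is also worth cross-checking that the implementations agree numerically. Since sci20 is not publicly available, this sketch compares only the native loop (same code as above) against `numpy.cumsum`:

```python
import numpy as np

def native_acc(x):
    # Same loop as above: append the running total element by element.
    native_acc_sum = []
    sum_aux = 0
    for val in x:
        native_acc_sum.append(val + sum_aux)
        sum_aux += val
    return native_acc_sum

x = np.linspace(1, 100, 1000)
# Both approaches should produce the same accumulative sum (up to float rounding).
assert np.allclose(native_acc(x), np.cumsum(x))
```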