๊ทธ๋ฆฌ๊ณ  ๊ฐ ๋…ธ๋“œ์— ๋Œ€ํ•ด ์ƒ์„ฑ๋œ dict๋“ค์„ ์ €์žฅํ•˜์ž.
for user in users: user["shortest_paths"] = shortest_paths_from(user)
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
๊ทธ๋Ÿฌ๋ฉด ์ด์ œ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์„ ๊ตฌํ•  ์ค€๋น„๊ฐ€ ๋‹ค ๋˜์—ˆ๋‹ค. ์ด์ œ ๊ฐ๊ฐ์˜ ์ตœ๋‹จ ๊ฒฝ๋กœ์— ํฌํ•จ๋˜๋Š” ๊ฐ ๋…ธ๋“œ์˜ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์— $1/n$์„ ๋”ํ•ด ์ฃผ์ž.
for user in users:
    user["betweenness_centrality"] = 0.0

for source in users:
    source_id = source["id"]
    for target_id, paths in source["shortest_paths"].items():  # in Python 2, use iteritems instead of items
        if source_id < target_id:   # be careful not to double-count
            num_paths = len(paths)  # how many shortest paths exist...
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
์‚ฌ์šฉ์ž 0๊ณผ 9์˜ ์ตœ๋‹จ ๊ฒฝ๋กœ ์‚ฌ์ด์—๋Š” ๋‹ค๋ฅธ ์‚ฌ์šฉ์ž๊ฐ€ ์—†์œผ๋ฏ€๋กœ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์ด 0์ด๋‹ค. ๋ฐ˜๋ฉด ์‚ฌ์šฉ์ž 3, 4, 5๋Š” ์ตœ๋‹จ ๊ฒฝ๋กœ์ƒ์— ๋ฌด์ฒ™ ๋นˆ๋ฒˆํ•˜๊ฒŒ ์œ„์น˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋†’์€ ๋งค๊ฐœ ์ค‘์‹ฌ์„ฑ์„ ๊ฐ€์ง„๋‹ค. ๋Œ€๊ฒŒ ์ค‘์‹ฌ์„ฑ์˜ ์ ˆ๋Œ“๊ฐ’ ์ž์ฒด๋Š” ํฐ ์˜๋ฏธ๋ฅผ ๊ฐ€์ง€์ง€ ์•Š๊ณ , ์ƒ๋Œ€๊ฐ’๋งŒ์ด ์˜๋ฏธ๋ฅผ ๊ฐ€์ง„๋‹ค. ๊ทธ ์™ธ์— ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ๋Š” ์ค‘์‹ฌ์„ฑ ์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜๋Š” ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ(closeness centrality)์ด๋‹ค. ๋จผ์ € ๊ฐ ์‚ฌ์šฉ์ž์˜ ์›์ ‘์„ฑ(farness)์„ ๊ณ„์‚ฐํ•œ๋‹ค. ์›์ ‘์„ฑ์ด๋ž€ from_user์™€ ๋‹ค๋ฅธ ๋ชจ๋“  ์‚ฌ์šฉ์ž์˜ ์ตœ๋‹จ ๊ฒฝ๋กœ๋ฅผ ํ•ฉํ•œ ๊ฐ’์ด๋‹ค.
#
# closeness centrality
#

def farness(user):
    """sum of the shortest distances to all other users"""
    return sum(len(paths[0])
               for paths in user["shortest_paths"].values())
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
์ด์ œ ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ์€ ๊ฐ„๋‹จํžˆ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋‹ค.
for user in users:
    user["closeness_centrality"] = 1 / farness(user)

for user in users:
    print(user["id"], user["closeness_centrality"])
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
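As a self-contained illustration of farness and closeness centrality — using a hypothetical toy graph and a plain BFS in place of the book's shortest_paths_from — the computation can be sketched as:

```python
from collections import deque

# hypothetical toy graph: a path 0 - 1 - 2 - 3 - 4
friends = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def shortest_path_lengths(start):
    """BFS distances from `start` to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in friends[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def farness(user_id):
    """sum of the shortest distances to all other users"""
    return sum(shortest_path_lengths(user_id).values())

closeness = {u: 1 / farness(u) for u in friends}
```

As expected, the middle of the path (node 2) has the smallest farness and hence the largest closeness centrality.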
๊ณ„์‚ฐ๋œ ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ์˜ ํŽธ์ฐจ๋Š” ๋”์šฑ ์ž‘๋‹ค. ๋„คํŠธ์›Œํฌ ์ค‘์‹ฌ์— ์žˆ๋Š” ๋…ธ๋“œ์กฐ์ฐจ ์™ธ๊ณฝ์— ์œ„์น˜ํ•œ ๋…ธ๋“œ๋“ค๋กœ๋ถ€ํ„ฐ ๋ฉ€๋ฆฌ ๋–จ์–ด์ ธ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์—ฌ๊ธฐ์„œ ๋ดค๋“ฏ์ด ์ตœ๋‹จ ๊ฒฝ๋กœ๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ๊ฒƒ์€ ๊ฝค๋‚˜ ๋ณต์žกํ•˜๋‹ค. ๊ทธ๋ ‡๊ธฐ ๋•Œ๋ฌธ์— ํฐ ๋„คํŠธ์›Œํฌ์—์„œ๋Š” ๊ทผ์ ‘ ์ค‘์‹ฌ์„ฑ์„ ์ž์ฃผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š”๋‹ค. ๋œ ์ง๊ด€์ ์ด์ง€๋งŒ ๋ณดํ†ต ๋” ์‰ฝ๊ฒŒ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋Š” ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ(eigenvector centrality)์„ ๋” ์ž์ฃผ ์‚ฌ์šฉํ•œ๋‹ค. 21.2 ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ธฐ ์ „์— ๋จผ์ € ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ์‚ดํŽด๋ด์•ผ ํ•˜๊ณ , ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ์•Œ๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋จผ์ € ํ–‰๋ ฌ ์—ฐ์‚ฐ์— ๋Œ€ํ•ด ์•Œ์•„๋ด์•ผ ํ•œ๋‹ค. 21.2.1 ํ–‰...
def matrix_product_entry(A, B, i, j):
    return dot(get_row(A, i), get_column(B, j))

def matrix_multiply(A, B):
    n1, k1 = shape(A)
    n2, k2 = shape(B)
    if k1 != n2:
        raise ArithmeticError("incompatible shapes!")
    return make_matrix(n1, k2, partial(matrix_product_entry, A, B))

def v...
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
ํ–‰๋ ฌ A์˜ ๊ณ ์œ  ๋ฒกํ„ฐ๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด, ์ž„์˜์˜ ๋ฒกํ„ฐ $v$๋ฅผ ๊ณจ๋ผ matrix_operate๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ , ๊ฒฐ๊ณผ๊ฐ’์˜ ํฌ๊ธฐ๊ฐ€ 1์ด ๋˜๊ฒŒ ์žฌ์กฐ์ •ํ•˜๋Š” ๊ณผ์ •์„ ๋ฐ˜๋ณต ์ˆ˜ํ–‰ํ•œ๋‹ค.
def find_eigenvector(A, tolerance=0.00001):
    guess = [1 for __ in A]
    while True:
        result = matrix_operate(A, guess)
        length = magnitude(result)
        next_guess = scalar_multiply(1/length, result)
        if distance(guess, next_guess) < tolerance:
            return next_guess, length
        guess = next_guess
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
๊ฒฐ๊ณผ๊ฐ’์œผ๋กœ ๋ฐ˜ํ™˜๋˜๋Š” guess๋ฅผ matrix_operate๋ฅผ ํ†ตํ•ด ๊ฒฐ๊ณผ๊ฐ’์˜ ํฌ๊ธฐ๊ฐ€ 1์ธ ๋ฒกํ„ฐ๋กœ ์žฌ์กฐ์ •ํ•˜๋ฉด, ์ž๊ธฐ ์ž์‹ ์ด ๋ฐ˜ํ™˜๋œ๋‹ค. ์ฆ‰, ์—ฌ๊ธฐ์„œ guess๋Š” ๊ณ ์œ ๋ฒกํ„ฐ๋ผ๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•œ๋‹ค. ๋ชจ๋“  ์‹ค์ˆ˜ ํ–‰๋ ฌ์— ๊ณ ์œ ๋ฒกํ„ฐ์™€ ๊ณ ์œ ๊ฐ’์ด ์žˆ๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์‹œ๊ณ„ ๋ฐฉํ–ฅ์œผ๋กœ 90๋„ ํšŒ์ „ํ•˜๋Š” ์—ฐ์‚ฐ์„ ํ•˜๋Š” ๋‹ค์Œ ํ–‰๋ ฌ์—๋Š” ๊ณฑํ–ˆ์„ ๋•Œ ๊ฐ€์ง€ ์ž์‹ ์ด ๋˜๋Š” ๋ฒกํ„ฐ๋Š” ์˜๋ฒกํ„ฐ๋ฐ–์— ์—†๋‹ค.
rotate = [[0, 1], [-1, 0]]
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
์ด ํ–‰๋ ฌ๋กœ ์•ž์„œ ๊ตฌํ˜„ํ•œ find_eignevector(rotate)๋ฅผ ์ˆ˜ํ–‰ํ•˜๋ฉด, ์˜์›ํžˆ ๋๋‚˜์ง€ ์•Š์„ ๊ฒƒ์ด๋‹ค. ํ•œํŽธ, ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ์žˆ๋Š” ํ–‰๋ ฌ๋„ ๋•Œ๋กœ๋Š” ๋ฌดํ•œ๋ฃจํ”„์— ๋น ์งˆ ์ˆ˜ ์žˆ๋‹ค.
flip = [[0, 1], [1, 0]]
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
์ด ํ–‰๋ ฌ์€ ๋ชจ๋“  ๋ฒกํ„ฐ [x, y]๋ฅผ [y, x]๋กœ ๋ณ€ํ™˜ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ [1, 1]์€ ๊ณ ์œ ๊ฐ’์ด 1์ธ ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋œ๋‹ค. ํ•˜์ง€๋งŒ x, y๊ฐ’์ด ๋‹ค๋ฅธ ์ž„์˜์˜ ๋ฒกํ„ฐ์—์„œ ์ถœ๋ฐœํ•ด์„œ find_eigenvector๋ฅผ ์ˆ˜ํ–‰ํ•˜๋ฉด x, y๊ฐ’์„ ๋ฐ”๊พธ๋Š” ์—ฐ์‚ฐ๋งŒ ๋ฌดํ•œํžˆ ์ˆ˜ํ–‰ํ•  ๊ฒƒ์ด๋‹ค. (NumPy๊ฐ™์€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์ด๋Ÿฐ ์ผ€์ด์Šค๊นŒ์ง€ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•๋“ค์ด ๊ตฌํ˜„๋˜์–ด ์žˆ๋‹ค.) ์ด๋Ÿฐ ์‚ฌ์†Œํ•œ ๋ฌธ์ œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ์–ด์จŒ๋“  find_eigenvector๊ฐ€ ๊ฒฐ๊ณผ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•œ๋‹ค๋ฉด, ๊ทธ ๊ฒฐ๊ณผ๊ฐ’์€ ๊ณง ๊ณ ์œ ๋ฒกํ„ฐ์ด๋‹ค. 21.2.2 ์ค‘์‹ฌ์„ฑ ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋ฐ์ดํ„ฐ ๋„คํŠธ์›Œํฌ๋ฅผ ์ดํ•ดํ•˜๋Š”๋ฐ ์–ด๋–ป๊ฒŒ ๋„์›€์„ ์ค„๊นŒ? ์–˜๊ธฐ๋ฅผ ํ•˜๊ธฐ ์ „์— ...
#
# eigenvector centrality
#

def entry_fn(i, j):
    return 1 if (i, j) in friendships or (j, i) in friendships else 0

n = len(users)
adjacency_matrix = make_matrix(n, n, entry_fn)
adjacency_matrix
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
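For comparison, a small sketch (assuming NumPy is available) using np.linalg.eig confirms both claims: the rotation matrix has only a complex-conjugate pair of eigenvalues, so it has no real eigenvector, while flip has real eigenvalues 1 and -1:

```python
import numpy as np

rotate = np.array([[0, 1], [-1, 0]])  # 90-degree clockwise rotation
flip = np.array([[0, 1], [1, 0]])     # swaps [x, y] -> [y, x]

w_rot, _ = np.linalg.eig(rotate)
print(w_rot)   # the complex pair +i, -i: no real eigenvector exists

w_flip, v_flip = np.linalg.eig(flip)
print(w_flip)  # real eigenvalues 1 and -1; [1, 1] spans the eigenvalue-1 eigenspace
```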
๊ฐ ์‚ฌ์šฉ์ž์˜ ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ์ด๋ž€ find_eigenvector๋กœ ์ฐพ์€ ์‚ฌ์šฉ์ž์˜ ๊ณ ์œ ๋ฒกํ„ฐ๊ฐ€ ๋œ๋‹ค.
eigenvector_centralities, _ = find_eigenvector(adjacency_matrix)

for user_id, centrality in enumerate(eigenvector_centralities):
    print(user_id, centrality)
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
์—ฐ๊ฒฐ์˜ ์ˆ˜๊ฐ€ ๋งŽ๊ณ , ์ค‘์‹ฌ์„ฑ์ด ๋†’์€ ์‚ฌ์šฉ์ž๋“คํ•œํ…Œ ์—ฐ๊ฒฐ๋œ ์‚ฌ์šฉ์ž๋“ค์€ ๊ณ ์œ ๋ฒกํ„ฐ ์ค‘์‹ฌ์„ฑ์ด ๋†’๋‹ค. ์•ž์˜ ๊ฒฐ๊ณผ์— ๋”ฐ๋ฅด๋ฉด ์‚ฌ์šฉ์ž 1, ์‚ฌ์šฉ์ž 2์˜ ์ค‘์‹ฌ์„ฑ์ด ๊ฐ€์žฅ ๋†’์€๋ฐ, ์ด๋Š” ์ค‘์‹ฌ์„ฑ์ด ๋†’์€ ์‚ฌ๋žŒ๋“ค๊ณผ ์„ธ๋ฒˆ์ด๋‚˜ ์—ฐ๊ฒฐ๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์ด๋“ค๋กœ๋ถ€ํ„ฐ ๋ฉ€์–ด์งˆ์ˆ˜๋ก ์‚ฌ์šฉ์ž๋“ค์˜ ์ค‘์‹ฌ์„ฑ์€ ์ ์ฐจ ์ค„์–ด๋“ ๋‹ค. 21.3 ๋ฐฉํ–ฅ์„ฑ ๊ทธ๋ž˜ํ”„(Directed graphs)์™€ ํŽ˜์ด์ง€๋žญํฌ ๋ฐ์ดํ…€์ด ์ธ๊ธฐ๋ฅผ ๋ณ„๋กœ ๋Œ์ง€ ๋ชปํ•˜์ž, ์ˆœ์ด์ต ํŒ€์˜ ๋ถ€์‚ฌ์žฅ์€ ์นœ๊ตฌ ๋ชจ๋ธ์—์„œ ๋ณด์ฆ(endorsement)๋ชจ๋ธ๋กœ ์ „ํ–ฅํ•˜๋Š” ๊ฒƒ์„ ๊ณ ๋ ค ์ค‘์ด๋‹ค. ์•Œ๊ณ  ๋ณด๋‹ˆ ์‚ฌ๋žŒ๋“ค์€ ์–ด๋–ค ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž๋“ค๋ผ๋ฆฌ ์นœ๊ตฌ์ธ์ง€์— ๋Œ€ํ•ด์„œ๋Š” ๋ณ„๋กœ ๊ด€์‹ฌ์ด ์—†์—ˆ์ง€๋งŒ, ํ—ค๋“œํ—Œํ„ฐ๋“ค์€...
#
# directed graphs
#

endorsements = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1),
                (1, 3), (2, 3), (3, 4), (5, 4), (5, 6), (7, 5),
                (6, 8), (8, 7), (8, 9)]

for user in users:
    user["endorses"] = []     # add one list to track outgoing endorsements
    user["endorsed_by"] = []  # and another t...
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
๊ทธ๋ฆฌ๊ณ  ๊ฐ€์žฅ ๋ณด์ฆ์„ ๋งŽ์ด ๋ฐ›์€ ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž๋“ค์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜์ง‘ํ•ด์„œ, ๊ทธ๊ฒƒ์„ ํ—ค๋“œํ—Œํ„ฐ๋“คํ•œํ…Œ ํŒ”๋ฉด ๋œ๋‹ค.
endorsements_by_id = [(user["id"], len(user["endorsed_by"]))
                      for user in users]

sorted(endorsements_by_id,
       key=lambda x: x[1],  # (user_id, num_endorsements)
       reverse=True)
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
์‚ฌ์‹ค '๋ณด์ฆ์˜ ์ˆ˜'์™€ ๊ฐ™์€ ์ˆซ์ž๋Š” ์กฐ์ž‘ํ•˜๊ธฐ๊ฐ€ ๋งค์šฐ ์‰ฝ๋‹ค. ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ• ์ค‘ ํ•˜๋‚˜๋Š”, ๊ฐ€์งœ ๊ณ„์ •์„ ์—ฌ๋Ÿฌ ๊ฐœ ๋งŒ๋“ค์–ด์„œ ๊ทธ๊ฒƒ๋“ค๋กœ ๋‚ด ๊ณ„์ •์— ๋Œ€ํ•œ ๋ณด์ฆ์„ ์„œ๋Š” ๊ฒƒ์ด๋‹ค. ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€, ์นœ๊ตฌ๋“ค๋ผ๋ฆฌ ์งœ๊ณ  ์„œ๋กœ๊ฐ€ ์„œ๋กœ๋ฅผ ๋ณด์ฆํ•ด ์ฃผ๋Š” ๊ฒƒ์ด๋‹ค. (์•„๋งˆ ์‚ฌ์šฉ์ž 0, 1, 2๊ฐ€ ์ด๋Ÿฐ ๊ด€๊ณ„์ผ ๊ฐ€๋Šฅ์„ฑ์ด ํฌ๋‹ค.) ์ข€ ๋” ๋‚˜์€ ์ง€์ˆ˜๋Š”, '๋ˆ„๊ฐ€' ๋ณด์ฆ์„ ์„œ๋Š”์ง€๋ฅผ ๊ณ ๋ คํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ๋ณด์ฆ์„ ๋งŽ์ด ๋ฐ›์€ ์‚ฌ์šฉ์ž๊ฐ€ ๋ณด์ฆ์„ ์„ค ๋•Œ๋Š”, ๋ณด์ฆ์„ ์ ๊ฒŒ ๋ฐ›์€ ์‚ฌ์šฉ์ž๊ฐ€ ๋ณด์ฆ์„ ์„ค ๋•Œ๋ณด๋‹ค ๋” ์ค‘์š”ํ•œ ๊ฒƒ์œผ๋กœ ๋ฐ›์•„๋“ค์—ฌ์ง€๋Š” ๊ฒƒ์ด ํƒ€๋‹นํ•˜๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์‚ฌ์‹ค ์ด๊ฒƒ์€ ์œ ๋ช…ํ•œ ํŽ˜์ด์ง€๋žญํฌ(PageRank) ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ธฐ๋ณธ ์ฒ ํ•™์ด...
def page_rank(users, damping=0.85, num_iters=100):

    # first distribute PageRank evenly across all nodes
    num_users = len(users)
    pr = { user["id"] : 1 / num_users for user in users }

    # the small amount of PageRank
    # each node receives at every step
    base_pr = (1 - damping) / num_users

    for __ in range(num_iters):
        next_pr = { user["i...
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
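Since the cell above is cut off, here is a self-contained sketch of the same damped PageRank iteration, run on a hypothetical three-node endorsement graph rather than the book's users list:

```python
def page_rank_sketch(n, links, damping=0.85, num_iters=100):
    """links: dict mapping each node to the list of nodes it endorses."""
    pr = {i: 1 / n for i in range(n)}          # start with rank spread evenly
    base_pr = (1 - damping) / n                # small amount every node gets each step
    for _ in range(num_iters):
        next_pr = {i: base_pr for i in range(n)}
        for source, targets in links.items():
            for target in targets:
                # each node splits its damped rank among the nodes it endorses
                next_pr[target] += damping * pr[source] / len(targets)
        pr = next_pr
    return pr

# node 2 is endorsed by both 0 and 1, so it should end up ranked highest
ranks = page_rank_sketch(3, {0: [2], 1: [2], 2: [0]})
```

Because every node here endorses someone, the total rank stays at 1 across iterations.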
Generating some data. Let's generate a classification dataset that is not easily linearly separable. Our favorite example is the spiral dataset, which can be generated as follows:
N = 100  # The number of "training examples" per class
D = 2    # The dimension, i.e. the size of the feature set
K = 3    # The number of classes to classify into

X = np.zeros((N*K, D))  # The data matrix; each row is a single example
# In our case 100*3 examples, and each example has two dimensions,
# so the matrix is 30...
Deep Learning.ai/Numpy - Understanding Backprop.ipynb
Shashi456/Artificial-Intelligence
gpl-3.0
Training a Simple Softmax Linear Classifier. Initialize the parameters.
# initialize parameters randomly
W = 0.01 * np.random.randn(D, K)  # randn(D,K): a D x K matrix, in our case 2 x 3
b = np.zeros((1, K))              # in our case 1 x 3

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength
Deep Learning.ai/Numpy - Understanding Backprop.ipynb
Shashi456/Artificial-Intelligence
gpl-3.0
Compute the Class Scores
# compute class scores for a linear classifier
scores = np.dot(X, W) + b  # scores will be 300 x 3; remember numpy broadcasting for b
Deep Learning.ai/Numpy - Understanding Backprop.ipynb
Shashi456/Artificial-Intelligence
gpl-3.0
Compute the loss
# Using cross-entropy loss
# Softmax loss = data loss + regularization loss
# Study the softmax loss and how it works before reading this;
# in this case it should be intuitive
num_examples = X.shape[0]

# get unnormalized probabilities
exp_scores = np.exp(scores)

# normalize them for each example
'''We no...
Deep Learning.ai/Numpy - Understanding Backprop.ipynb
Shashi456/Artificial-Intelligence
gpl-3.0
Putting it all Together. Until now we have only looked at the individual pieces.
# Train a Linear Classifier

# initialize parameters randomly
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# gradient descent loop
num_examples = X.shape[0]
for i in range(200):

    # evaluate class scores, [N x K]
    scores = np.dot(X...
Deep Learning.ai/Numpy - Understanding Backprop.ipynb
Shashi456/Artificial-Intelligence
gpl-3.0
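Since the combined training loop above is truncated, here is a compact, self-contained version of the same softmax linear classifier. It trains on two synthetic Gaussian blobs instead of the spiral dataset, purely so the sketch runs on its own:

```python
import numpy as np

np.random.seed(0)
N, D, K = 50, 2, 2  # examples per class, dimensions, classes
X = np.vstack([np.random.randn(N, D) + 2,   # class 0 centered at (+2, +2)
               np.random.randn(N, D) - 2])  # class 1 centered at (-2, -2)
y = np.array([0] * N + [1] * N)

W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))
step_size, reg = 1e-0, 1e-3
num_examples = X.shape[0]

for i in range(200):
    scores = X.dot(W) + b                                     # [N x K] class scores
    exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    # cross-entropy data loss + L2 regularization loss
    loss = -np.log(probs[range(num_examples), y]).mean() + 0.5 * reg * np.sum(W * W)
    # backprop into scores, then into W and b
    dscores = probs
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples
    W -= step_size * (X.T.dot(dscores) + reg * W)
    b -= step_size * dscores.sum(axis=0, keepdims=True)

accuracy = (np.argmax(X.dot(W) + b, axis=1) == y).mean()
```

Subtracting the row-wise max before exponentiating is a standard numerical-stability trick; it leaves the probabilities unchanged.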
Using SQL for Queries Note that SQL is case-insensitive, but it is traditional to use ALL CAPS for SQL keywords. It is also standard to end SQL statements with a semi-colon. Simple Queries
pdsql('SELECT * FROM tips LIMIT 5;')

q = """
SELECT sex AS gender, smoker, day, size
FROM tips
WHERE size <= 3 AND smoker = 'Yes'
LIMIT 5"""
pdsql(q)
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Filtering on strings
q = """
SELECT sex AS gender, smoker, day, size, time
FROM tips
WHERE time LIKE '%u_c%'
LIMIT 5"""
pdsql(q)
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Ordering
q = """
SELECT sex AS gender, smoker, day, size, time
FROM tips
WHERE time LIKE '%u_c%'
ORDER BY size DESC
LIMIT 5"""
pdsql(q)
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Aggregate queries
pdsql('SELECT * FROM tips LIMIT 5;')

q = """
SELECT sex, smoker, sum(total_bill) AS total, max(tip) AS max_tip
FROM tips
WHERE time = 'Lunch'
GROUP BY sex, smoker
ORDER BY max_tip DESC
"""
pdsql(q)
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Matching students and majors Inner join
pdsql("""
SELECT *
FROM student s
INNER JOIN major m
ON s.major_id = m.major_id;
""")
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Left outer join SQL also has RIGHT OUTER JOIN and FULL OUTER JOIN but these are not currently supported by SQLite3 (the database engine used by pdsql).
pdsql("""
SELECT s.*, m.name
FROM student s
LEFT JOIN major m
ON s.major_id = m.major_id;
""")
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Emulating a full outer join with UNION ALL — only necessary if the database does not provide FULL OUTER JOIN. Using linker tables to match students to classes (a MANY TO MANY join). Using SQLite3. SQLite3 is part of the standard library. However, the mechanics of using essentially any database in Python is similar, because ...
import sqlite3

c = sqlite3.connect('data/Chinook_Sqlite.sqlite')
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Standard SQL statements with parameter substitution. Note: using Python string substitution for Python-defined parameters is dangerous because of the risk of SQL injection attacks. Use parameter substitution with ? instead. Do this
t = ['%rock%', 10]
list(c.execute("SELECT * FROM Album WHERE Title LIKE ? AND ArtistID > ? LIMIT 5;", t))
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
Not this
t = ("'%rock%'", 10)
list(c.execute("SELECT * FROM Album WHERE Title LIKE %s AND ArtistID > %d LIMIT 5;" % t))
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
User defined functions. Sometimes it is useful to have custom functions that run on the database server rather than on the client. These are called User Defined Functions, or UDFs. How to do this varies with the database used, but it is fairly simple with Python and SQLite. A standard UDF
def encode(text, offset):
    """Caesar cipher of text with given offset."""
    from string import ascii_lowercase, ascii_uppercase
    tbl = dict(zip(map(ord, ascii_lowercase + ascii_uppercase),
                   ascii_lowercase[offset:] + ascii_lowercase[:offset] +
                   ascii_uppercase[offset:] + ascii_uppe...
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
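A minimal, self-contained sketch of registering a UDF with Python's built-in sqlite3 module — using a simplified lowercase-only Caesar cipher rather than the full encode above — looks like this:

```python
import sqlite3
from string import ascii_lowercase

def encode(text, offset):
    """Caesar cipher (lowercase letters only, for brevity)."""
    shifted = ascii_lowercase[offset:] + ascii_lowercase[:offset]
    return text.translate(str.maketrans(ascii_lowercase, shifted))

con = sqlite3.connect(":memory:")
con.create_function("encode", 2, encode)  # SQL name, number of args, Python callable

# the function is now callable from SQL run on this connection
result = con.execute("SELECT encode('hello', 3)").fetchone()[0]
print(result)  # -> 'khoor'
```

Once registered, the UDF can be applied to any column in a query, e.g. `SELECT encode(Title, 3) FROM Album`.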
Using SQL magic functions We will use the ipython-sql notebook extension for convenience. This will only work in notebooks and IPython scripts with the .ipy extension.
%load_ext sql
scratch/Lecture08.ipynb
cliburn/sta-663-2017
mit
3. Open the file and use .read() to save the contents of the file to a string called fields. Make sure the file is closed at the end.
# Write your code here:


# Run fields to see the contents of contacts.txt:
fields
nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb
rishuatgithub/MLPy
apache-2.0
Working with PDF Files 4. Use PyPDF2 to open the file Business_Proposal.pdf. Extract the text of page 2.
# Perform import

# Open the file as a binary object

# Use PyPDF2 to read the text of the file

# Get the text from page 2 (CHALLENGE: Do this in one step!)
page_two_text =

# Close the file

# Print the contents of page_two_text
print(page_two_text)
nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb
rishuatgithub/MLPy
apache-2.0
5. Open the file contacts.txt in append mode. Add the text of page 2 from above to contacts.txt. CHALLENGE: See if you can remove the word "AUTHORS:"
# Simple Solution:


# CHALLENGE Solution (re-run the %%writefile cell above to obtain an unmodified contacts.txt file):
nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb
rishuatgithub/MLPy
apache-2.0
Regular Expressions 6. Using the page_two_text variable created above, extract any email addresses that were contained in the file Business_Proposal.pdf.
import re

# Enter your regex pattern here. This may take several tries!
pattern =

re.findall(pattern, page_two_text)
nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb
rishuatgithub/MLPy
apache-2.0
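One pattern that works for simple cases — sketched here on a sample string, since Business_Proposal.pdf isn't available in this excerpt — is:

```python
import re

sample = "Contact john.doe@example.com or support@acme.io for details."

# a deliberately simple email pattern; robust email matching is much harder
pattern = r"[\w.+-]+@[\w-]+\.[\w.-]+"
emails = re.findall(pattern, sample)
print(emails)  # -> ['john.doe@example.com', 'support@acme.io']
```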
Exercise: Compute the velocity of the penny after t seconds. Check that the units of the result are correct.
# Solution goes here
notebooks/chap01.ipynb
AllenDowney/ModSimPy
mit
Exercise: Why would it be nonsensical to add a and t? What happens if you try?
# Solution goes here
notebooks/chap01.ipynb
AllenDowney/ModSimPy
mit
Exercise: Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from h plus 10 feet. Define a variable named foot that contains the unit foot provided by UNITS. Define a variable named pole_height and give it the value 10 feet. What happens if you add h, which is in unit...
# Solution goes here

# Solution goes here
notebooks/chap01.ipynb
AllenDowney/ModSimPy
mit
Exercise: In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating. As a simplification, let's assume that the acceleration of the penny is a until the penny reaches 18 m/s, and then 0 afterwards. What is the...
# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here
notebooks/chap01.ipynb
AllenDowney/ModSimPy
mit
Filtering and resampling data. Some artifacts are restricted to certain frequencies and can therefore be fixed by filtering. An artifact that typically affects only some frequencies is power-line noise, which is created by the electrical grid. It is composed of sharp peaks at 50Hz (or 60Hz dep...
import numpy as np
import mne
from mne.datasets import sample

data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog_proj.fif'

tmin, tmax = 0, 20  # use the first 20s of data

# Setup for reading the raw data (save memory by c...
0.17/_downloads/1a57f50035589ab531155bdb55bcbcb5/plot_artifacts_correction_filtering.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly $$X_1 + X_2 = X_1 + X_1^2 + X_1 = 2X_1 + X_1^2$$ Or $2X_1$ plus a parabola. However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Mu...
# Load pricing data for two arbitrarily-chosen assets and SPY
start = '2014-01-01'
end = '2015-01-01'
asset1 = get_pricing('DTV', fields='price', start_date=start, end_date=end)
asset2 = get_pricing('FISV', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, ...
notebooks/lectures/Multiple_Linear_Regression/notebook.ipynb
quantopian/research_public
apache-2.0
The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relati...
def complete_deg(n):
    """Degree matrix of the complete graph K_n."""
    a = np.identity(n, dtype=int)
    for element in np.nditer(a, op_flags=['readwrite']):
        if element > 0:
            element[...] = element + n - 2
    return a

complete_deg(3)

D = complete_deg(5)
assert D.shape == (5, 5)
assert D.dtype == np.dtype(int)
assert np.all(D.diagonal() == 4*n...
assignments/assignment03/NumpyEx04.ipynb
geoneill12/phys202-2015-work
mit
The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
def complete_adj(n):
    """Adjacency matrix of the complete graph K_n."""
    b = np.identity(n, dtype=int)
    for element in np.nditer(b, op_flags=['readwrite']):
        if element == 0:
            element[...] = 1
        else:
            element[...] = 0
    return b

complete_adj(3)

A = complete_adj(5)
assert A.shape == (5, 5)
assert A.dtype == np.dtype(int)
a...
assignments/assignment03/NumpyEx04.ipynb
geoneill12/phys202-2015-work
mit
Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
def L(n):
    return complete_deg(n) - complete_adj(n)

print(L(1))
print(L(2))
print(L(3))
print(L(4))
print(L(5))
print(L(6))
assignments/assignment03/NumpyEx04.ipynb
geoneill12/phys202-2015-work
mit
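A sketch of what that exploration turns up, building $D$ and $A$ directly with NumPy array operations instead of the loop-based helpers above, and using np.linalg.eigvalsh since $L$ is symmetric:

```python
import numpy as np

def laplacian(n):
    """L = D - A for the complete graph K_n."""
    D = (n - 1) * np.eye(n, dtype=int)                # every vertex of K_n has degree n-1
    A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)  # ones off the diagonal
    return D - A

for n in range(2, 6):
    print(n, np.linalg.eigvalsh(laplacian(n)))
# the pattern suggests a conjecture: one eigenvalue 0,
# and the eigenvalue n with multiplicity n - 1
```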
ๅœจๆ–ฐไธ‰ๆฟๆŒ‚็‰Œ็š„ๅ…ฌๅธ่ฟ˜ๅŒ…ๆ‹ฌไบ†ๅŽŸไธค็ฝ‘็š„ๅ…ฌๅธ๏ผˆNETๅ’ŒSTAQ๏ผ‰๏ผŒไธค็ฝ‘็š„ๅ…ฌๅธๆ˜ฏๅœจ90ๅนดไปฃๆœ‰ไธญๅ›ฝไบบๆฐ‘้“ถ่กŒๆ‰นๅ‡†่ฎพ็ซ‹็š„๏ผŒๆ€ป็”จ9ๅฎถๅ…ฌๅธ๏ผˆ็ฒคไผ ๅช’(400003)ไธŠๅธ‚ๅŽๅ‰ฉไฝ™8ๅฎถๅ…ฌๅธ๏ผ‰๏ผŒ็”ฑไบŽๆ”ฟ็ญ–ๆ–น้ข็š„็บฆๆŸ๏ผŒๆญค็ฑปๅ…ฌๅธๅณไพฟๅœจไธ‰ๆฟๅธ‚ๅœบไธŠไพ็„ถไธ่ขซๅ…่ฎธ่ฟ›่กŒๆ–ฐ็š„่ž่ต„่€Œๅช่ƒฝ่‚กไปฝ่ฝฌ่ฎฉใ€‚่ฟ™ไบ›ๅ…ฌๅธๅŒ…ๅซไบ†๏ผš * ๅคง่‡ช็„ถ๏ผˆ400001๏ผ‰๏ผŒ่ฏฅ่‚ก็ฅจๅทฒ็ปๅœจๅœจๆ–ฐไธ‰ๆฟ้€€ๅธ‚๏ผŒๅนถไปฅ834019ไธบไปฃ็ ๆˆไธบๅ…จๅ›ฝ่‚ก่ฝฌๅ…ฌๅธ็š„ๆŒ‚็‰Œไผไธšใ€‚ๅฏ่ƒฝๅ‡บ็Žฐ็š„้—ฎ้ข˜ๆ˜ฏ๏ผŒๆ•ฐๆฎๅบ“ๆŠŠ400001ๅ…ฌๅธƒ็š„่ดขๅŠกไฟกๆฏไนŸไฝœไธบไบ†ๆ–ฐๆŒ‚็‰Œๅ…ฌๅธ็š„่ดขๅŠกไฟกๆฏ๏ผŒๅ› ๆญค๏ผŒๅค„็†ๆ—ถ็›ดๆŽฅไปฅๆ–ฐๆŒ‚็‰Œ็š„ๅ…ฌๅธ่ฝฌ่ฎฉๅ…ฌๅ‘ŠไนฆไธญๆŠซ้œฒ็š„่ดขๅŠกไฟกๆฏไฝœไธบๆฃ€้ชŒๆ ‡ๅ‡†๏ผˆTo-do: ไฝœไธบๅฏนๆฏ”็›ดๆŽฅ่ธข้™ค่ฟ™ๅฎถๅ…ฌๅธ็š„ๆ•ฐๆฎใ€‚๏ผ‰ * ้•ฟ็™ฝ5 (400002) * ๆตทๅ›ฝๅฎž5(400005) * ...
stkcs_net_staq = ["400001.OC", "400002.OC", "400005.OC", "400006.OC",
                  "400007.OC", "400009.OC", "400010.OC", "400013.OC"]
stocks_neeq = stocks_neeq[~stocks_neeq["证券代码"].isin(stkcs_net_staq)]
len(stocks_neeq)

stocks_neeq[stocks_neeq["证券代码"] == "834019.OC"]
stocks_neeq = stocks_neeq.drop(6644)
stocks_neeq[stocks_nee...
src/Stocks.ipynb
januslian/neeqem
gpl-3.0
้€€ๅธ‚่‚ก็ฅจๅค„็†๏ผš11ไธช่ฝฌๆฟไธŠๅธ‚ๅ…ฌๅธ๏ผŒ1ไธช่ฟž็ปญๅ››ๅนดไบๆŸ๏ผŒ16ไธชๅธๆ”ถๅˆๅนถ็š„ๅ…ฌๅธ๏ผŒ3ๅฎถๅ…ฌๅธๅฐ†ๅ…ฌๅธๅ˜ไธบๆœ‰้™่ดฃไปปๅ…ฌๅธ๏ผŒ6ๅฎถๅ…ฌๅธๆœช่ฏดๆ˜Ž้€€ๅธ‚ๅŽŸๅ› ใ€‚ๅœจๆ•ฐๆฎๅบ“้€€ๅธ‚่‚ก็ฅจๅˆ—่กจ้‡Œๆœ‰ไธคๅฎถๅ…ฌๅธ400001ๅ’Œ400003ๅฑžไบŽไธค็ฝ‘ๅ…ฌๅธ่ฝฌๆฟ๏ผŒไธไฝœไธบ็ ”็ฉถๆ ทๆœฌๅค„็†ใ€‚
delisting = pandas.read_excel("../data/raw/退市股票一览.xlsx")  # companies delisted before 2016-04-15
delisting = delisting[:-2]
delisting.head()

delisting = delisting[~delisting["代码"].isin(["400001.OC", "400003.OC"])]
stkcd = stocks_neeq["证券代码"].append(delisting["代码"])
stkcd[stkcd.duplicated()]

delisting = pandas.read_excel("../data/delis...
src/Stocks.ipynb
januslian/neeqem
gpl-3.0
็”ฑๆญคๅพ—ๅˆฐ็š„โ€œๆ–ฐไธ‰ๆฟโ€ๅ…ฌๅธ็š„ๆœ‰ๆ•ˆๆ ทๆœฌๆ•ฐ้‡ไธบ{{len(stocks_neeq_all)}}ใ€‚ โ€œๆ–ฐไธ‰ๆฟโ€ๆ ทๆœฌๅ…ฌๅธๆœ€ๆ—ฉ็š„ๆŒ‚็‰Œๆ—ถ้—ดไธบ{{stocks_neeq_all["ๆŒ‚็‰Œๆ—ฅๆœŸ"].min()}}๏ผŒๆ‰€ไปฅๅœจ้€‰ๆ‹ฉไธŽไน‹ๅฏนๆฏ”็š„A่‚กไธปๆฟใ€ไธญๅฐๆฟๅ’Œๅˆ›ไธšๆฟๅ…ฌๅธๆ—ถไนŸ้€‰ๆ‹ฉไปŽ2006ๅนดไปฅๅŽ๏ผŒๅ…ถไธญๅˆ›ไธšๆฟๅผ€ๅง‹ไบŽ2009ๅนด10ๆœˆ๏ผŒๅœจ2006ๅˆฐ2009ๅนด10ๆœˆไน‹้—ดๆŒ‚็‰Œ็š„ๆ–ฐไธ‰ๆฟๅ…ฌๅธๆœ‰{{a}}ๅฎถๅ…ฌๅธใ€‚ ็”ฑไบŽๆ‰€ๆœ‰้€‚ๅˆๆœฌ็ ”็ฉถ็š„โ€œๆ–ฐไธ‰ๆฟโ€ๅ…ฌๅธๅ‡ไธบ2006ๅนดไน‹ๅŽๆŒ‚็‰Œไบคๆ˜“็š„๏ผŒๆ‰€ไปฅๅฏนๅบ”็š„ไธปๆฟใ€ไธญๅฐๆฟๅ’Œๅˆ›ไธšๆฟๅ…ฌๅธไนŸๅฐ†ไผš้€‰ๆ‹ฉ2005ๅนดไน‹ๅŽIPOs็š„ๅ…ฌๅธไฝœไธบๅฏนๆฏ”็š„ๅฏน่ฑกใ€‚ๅŽ็ปญ็š„ๆฃ€้ชŒไนŸไผšๅฐ†ไธปไธญๅˆ›ไธ‰ไธชๆฟๅ—็š„ๅฎšๅ‘ๅขžๅ‘ไฝœไธบๅฏนๆฏ”ๅฏน่ฑก๏ผŒไปฅไฟ่ฏๅœจๅ‘่กŒ่ฟ‡็จ‹ไธŠๆœ‰ไบ›่ฎธ็š„ๅฏๆฏ”ๆ€งใ€‚ To-do: ้œ€่ฆ็กฎๅฎšไผฐ...
stocks_neeq_all.iloc[0]

year = lambda x: x.year
stocks_neeq_all["挂牌年度"] = stocks_neeq_all["挂牌日期"].apply(year)
stocks_neeq_all["挂牌年度"].value_counts().sort_index()

sns.barplot(stocks_neeq_all["挂牌年度"].value_counts().sort_index().index,
            stocks_neeq_all["挂牌年度"].value_counts().sort_index(),
            palette="Blues_d");...
src/Stocks.ipynb
januslian/neeqem
gpl-3.0
2. Hypothesis tests
# California burritos vs. Carnitas burritos TODO

# Don Carlos 1 vs. Don Carlos 2 TODO

# Bonferroni correction TODO
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
srcole/qwm
mit
3. Burrito dimension distributions Distribution of each burrito quality
import math

def metrichist(metricname):
    if metricname == 'Volume':
        bins = np.arange(.375, 1.225, .05)
        xticks = np.arange(.4, 1.2, .1)
        xlim = (.4, 1.2)
    else:
        bins = np.arange(-.25, 5.5, .5)
        xticks = np.arange(0, 5.5, .5)
        xlim = (-.25, 5.25)
    plt.figure(figsize=(5...
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
srcole/qwm
mit
Test for normal distribution
TODO
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
srcole/qwm
mit
Changes implemented, from most important to least important:
* pre-compute PSF parameters across multiple spectra and wavelengths to enable vectorized calls to legval instead of many scalar calls [~25% faster]
* hoist wavelength -> [-1,1] for legendre calculations out of loop [~8% faster]
* replace np.outer wi...
h1 = Table.read('data/portland/hsw-before.txt', format='ascii.basic')
h2 = Table.read('data/portland/hsw-after.txt', format='ascii.basic')
k1 = Table.read('data/portland/knl-before.txt', format='ascii.basic')
k2 = Table.read('data/portland/knl-after.txt', format='ascii.basic')

plot(h1['nproc'], h1['rate'], 's:', color...
doc/portland-extract.ipynb
sbailey/knltest
bsd-3-clause
Build in plots for storytelling
def labelit(title_=''):
    semilogx()
    semilogy()
    xticks([1,2,4,8,16,32,64,128], [1,2,4,8,16,32,64,128])
    legend(loc='upper left')
    title(title_)
    ylim(20, 20000)
    xlabel('Number of Processes')
    ylabel('Extraction Rate')

figure()
plot(h1['nproc'], h1['rate'], 's-', color='C0', label='Haswell')
p...
doc/portland-extract.ipynb
sbailey/knltest
bsd-3-clause
Configuration parameters
import GAN.utils as utils
# reload(utils)

class Parameters(utils.Parameters):
    # dataset to be loaded
    load_datasets = utils.param(["moriond_v9", "abs(ScEta) < 1.5"])

    c_names = utils.param(['Phi', 'ScEta', 'Pt', 'rho', 'run'])
    x_names = utils.param(['EtaWidth', 'R9', 'SigmaIeIe', 'S4', 'PhiWidth', 'Covariance...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Load datasets Apply cleaning and reweight MC to match data
data, mc = cms.load_zee(*LOAD_DATASETS)

data.columns
mc.columns
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Cleaning
if CLEANING is not None:
    print('cleaning data and mc')
    thr_up = pd.read_hdf(CLEANING, 'thr_up')
    thr_down = pd.read_hdf(CLEANING, 'thr_down')
    nevts_data = data.shape[0]
    nevts_mc = mc.shape[0]
    data = data[((data[thr_down.index] >= thr_down) & (data[thr_up.index] <= thr_up)).all(axis=1)]
    mc = m...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Generate extra variables for MC (eg run number) distributed like data
for col in SAMPLE_FROM_DATA:
    print('sampling', col)
    mc[col] = data[col].sample(mc.shape[0]).values
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Select target and conditional features
c_names = C_NAMES
x_names = X_NAMES

data_c = data[c_names]
data_x = data[x_names]
mc_c = mc[c_names]
mc_x = mc[x_names]

data_x.columns, data_x.shape, data_c.columns, data_c.shape
data_x.columns, data_c.columns
mc_x.columns, mc_c.columns
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Reweight MC
# reload(preprocessing)
# reload(utils)

# initialize weights to default
if MCWEIGHT == '':
    mc_w = np.ones(mc_x.shape[0])
else:
    mc_w = mc[MCWEIGHT].values
print(mc_w[:10])

# take care of negative weights
if USE_ABS_WEIGHT:
    mc_w = np.abs(mc_w)

# reweighting
if REWEIGHT is not None:
    for fil in REW...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Features transformation
# reload(preprocessing)

if GLOBAL_MATCH:
    _, data_c, mc_x, mc_c, scaler_x, scaler_c = preprocessing.transform(None, data_c, mc_x, mc_c, FEAT_TRANSFORM)
    _, _, data_x, _, scaler_x_data, _ = preprocessing.transform(None, None, data_x, None, FEAT_TRANSFORM)
else:
    data_x, data_c, mc_x, mc_c, scaler_x, scaler_c = preprocessing.tra...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Prepare train and test sample
nmax = min(data_x.shape[0]//FRAC_DATA, mc_x.shape[0])

data_x_train, data_x_test, data_c_train, data_c_test, data_w_train, data_w_test = \
    cms.train_test_split(data_x[:nmax], data_c[:nmax], data_w[:nmax], test_size=0.1)
mc_x_train, mc_x_test, mc_c_train, mc_c_test, mc_w_train, mc_w_test = \
    cms.train_test_split(mc_x[:nmax], mc_c[:nmax], m...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Instantiate the model Create the model and compile it.
# reload(models)

xz_shape = (1, len(x_names))
c_shape = (1, len(c_names))

gan = models.MyFFGAN(xz_shape, xz_shape, c_shape=c_shape,
                    g_opts=G_OPTS, d_opts=D_OPTS,
                    dm_opts=DM_OPTS, am_opts=AM_OPTS,
                    gan_targets=GAN_TAR...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
This will actually trigger the instantiation of the generator (if not done here, it will happen before compilation).
gan.get_generator()
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Same as above for the discriminator. Two discriminator models are created: one for the discriminator training phase and another for the generator training phase. The difference between the two is that the second does not contain dropout layers.
gan.get_discriminator()
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
model compilation
gan.adversarial_compile(loss=LOSS, schedule=SCHEDULE)

gan.get_generator().summary()
gan.get_discriminator()[0].summary()
gan.get_discriminator()[1].summary()

# sanity check
set(gan.am[0].trainable_weights) - set(gan.am[1].trainable_weights)
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
Training Everything is ready. We start training.
from keras.optimizers import RMSprop

if PRETRAIN_G:
    generator = gan.get_generator()
    generator.compile(loss="mse", optimizer=RMSprop(lr=1.e-3))
    generator.fit([mc_c_train, mc_x_train], [mc_c_train, mc_x_train],
                  sample_weight=[mc_w_train, mc_w_train],
                  epochs=1, batch_size=BATCH_SIZE)

# reload(base)
initial_e...
cms_zee_conditional_wgan.ipynb
musella/GAN
gpl-3.0
### Clustering pearsons_r with HDBSCAN
# Clustering the pearsons_r with N/A values removed
hdb_t1 = time.time()
hdb_pearson_r = hdbscan.HDBSCAN(metric="precomputed", min_cluster_size=10).fit(df3_pearson_r)
hdb_pearson_r_labels = hdb_pearson_r.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_...
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Looks like there are two clusters: some expression and zero expression across samples. ### Clustering mean centered euclidean distance with HDBSCAN
df3_euclidean_mean.hist()

# Clustering the mean centered euclidean distance of TPM counts
hdb_t1 = time.time()
hdb_euclidean_mean = hdbscan.HDBSCAN(metric="precomputed", min_cluster_size=10).fit(df3_euclidean_mean)
hdb_euclidean_mean_labels = hdb_euclidean_mean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print...
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Looks like 2 clusters, both with zero expression. Whether the input is a numpy array or a pandas dataframe, the result is the same. Let's now try to get the indices of the clustered points. ### Clustering log transformed euclidean distance with HDBSCAN
df3_euclidean_log2

# Clustering the log2 transformed euclidean distance of TPM counts
hdb_t1 = time.time()
hdb_euclidean_log2 = hdbscan.HDBSCAN(metric="precomputed", min_cluster_size=10).fit(df3_euclidean_log2)
hdb_euclidean_log2_labels = hdb_euclidean_log2.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("ti...
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Clustering using built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance)
df2_TPM_values = df2_TPM.loc[:, "5GB1_FM40_T0m_TR2":"5GB1_FM40_T180m_TR1"]  # isolating the data values
df2_TPM_values_T = df2_TPM_values.T  # transposing the data

standard_scaler = StandardScaler()
TPM_counts_mean_centered = standard_scaler.fit_transform(df2_TPM_values_T)  # mean centering the data
TPM_counts_mean_ce...
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Let's look at some clusters:

Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[0], :]
Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[1], :]
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
Clustering log2 transformed data using the built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance)
df2_TPM_log2_scale = df2_TPM_log2.T  # transposing the data

standard_scaler = StandardScaler()
TPM_log2_mean_scaled = standard_scaler.fit_transform(df2_TPM_log2_scale)  # mean centering the data
TPM_log2_mean_scaled = pd.DataFrame(TPM_log2_mean_scaled)  # back to DataFrame

# transposing back to original form and reinserting...
data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb
gilmana/Cu_transition_time_course-
mit
DM -- Piece by piece (as coded) $\rho_b = \Omega_b \rho_c (1+z)^3$ $\rho_{\rm diffuse} = \rho_b - (\rho_* + \rho_{\rm ISM})$ $\rho_*$ is the mass density in stars $\rho_{\rm ISM}$ is the mass density in the neutral ISM Number densities $n_{\rm H} = \rho_{\rm diffuse}/(m_p \, \mu)$ $\mu \approx 1.3$ accounts for Helium ...
reload(igm) DM = igm.average_DM(1.) DM
docs/nb/DM_cosmic.ipynb
FRBs/FRB
bsd-3-clause
Cumulative plot
DM_cumul, zeval = igm.average_DM(1., cumul=True) # Inoue approximation DM_approx = 1000. * zeval * units.pc / units.cm**3 plt.clf() ax = plt.gca() ax.plot(zeval, DM_cumul, label='JXP') ax.plot(zeval, DM_approx, label='Approx') # Label ax.set_xlabel('z') ax.set_ylabel(r'${\rm DM}_{\rm IGM} [\rm pc / cm^3]$ ') # Legend...
docs/nb/DM_cosmic.ipynb
FRBs/FRB
bsd-3-clause
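The comparison curve in the plot above is a simple rule of thumb, DM_IGM ≈ 1000·z pc/cm³. A minimal NumPy sketch of that approximation follows; the `zeval` grid here is a stand-in for the redshift array returned by `igm.average_DM`, which computes the full cosmological integral:

```python
import numpy as np

# Redshift grid, standing in for the zeval returned by igm.average_DM
zeval = np.linspace(0.0, 1.0, 50)

# Rule-of-thumb comparison curve used in the plot above:
# DM_IGM ~ 1000 * z  [pc / cm^3]
DM_approx = 1000.0 * zeval

print(DM_approx[-1])  # ~1000 pc/cm^3 at z = 1
```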
The Longley dataset is a time series dataset:
data = sm.datasets.longley.load() data.exog = sm.add_constant(data.exog) print(data.exog[:5])
examples/notebooks/gls.ipynb
ChadFulton/statsmodels
bsd-3-clause
While we don't have strong evidence that the errors follow an AR(1) process, we continue
rho = resid_fit.params[1]
examples/notebooks/gls.ipynb
ChadFulton/statsmodels
bsd-3-clause
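The `rho` extracted above is the slope from regressing the OLS residuals on their own lag. A self-contained NumPy sketch of that step follows, using synthetic AR(1) residuals with a known coefficient (the actual Longley fit is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) residuals with true rho = 0.6, standing in for the OLS residuals
true_rho, n = 0.6, 5000
e = np.zeros(n)
for t in range(1, n):
    e[t] = true_rho * e[t - 1] + rng.standard_normal()

# Regress e_t on (1, e_{t-1}); the slope estimate plays the role of resid_fit.params[1]
X = np.column_stack([np.ones(n - 1), e[:-1]])
beta, *_ = np.linalg.lstsq(X, e[1:], rcond=None)
rho = beta[1]
print(round(rho, 2))  # close to 0.6
```

In statsmodels this is the `resid_fit` OLS of residuals on lagged residuals; the estimated `rho` then feeds the GLS(AR(1)) covariance structure.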
2x2 Grid You can easily create a layout with 4 widgets arranged in a 2x2 matrix using the TwoByTwoLayout widget:
from ipywidgets import TwoByTwoLayout TwoByTwoLayout(top_left=top_left_button, top_right=top_right_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells
TwoByTwoLayout(top_left=top_left_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can pass merge=False in the argument of the TwoByTwoLayout constructor if you don't want this behavior
TwoByTwoLayout(top_left=top_left_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button, merge=False)
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can add a missing widget even after the layout initialization:
layout_2x2 = TwoByTwoLayout(top_left=top_left_button, bottom_left=bottom_left_button, bottom_right=bottom_right_button) layout_2x2 layout_2x2.top_right = top_right_button
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can easily create more complex layouts with custom widgets. For example, you can use the bqplot Figure widget to add plots:
import bqplot as bq import numpy as np size = 100 np.random.seed(0) x_data = range(size) y_data = np.random.randn(size) y_data_2 = np.random.randn(size) y_data_3 = np.cumsum(np.random.randn(size) * 100.) x_ord = bq.OrdinalScale() y_sc = bq.LinearScale() bar = bq.Bars(x=np.arange(10), y=np.random.rand(10), scales={'...
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
AppLayout AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars, and a central pane:
from ipywidgets import AppLayout, Button, Layout header_button = create_expanded_button('Header', 'success') left_button = create_expanded_button('Left', 'info') center_button = create_expanded_button('Center', 'warning') right_button = create_expanded_button('Right', 'info') footer_button = create_expanded_button('Fo...
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
However with the automatic merging feature, it's possible to achieve many other layouts:
AppLayout(header=None, left_sidebar=None, center=center_button, right_sidebar=None, footer=None) AppLayout(header=header_button, left_sidebar=left_button, center=center_button, right_sidebar=right_button, footer=None) AppLayout(header=Non...
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (same as integer) or '100px' (absolu...
AppLayout(header=header_button, left_sidebar=left_button, center=center_button, right_sidebar=right_button, footer=footer_button, pane_widths=[3, 3, 1], pane_heights=[1, 5, '60px'])
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
Grid layout GridspecLayout is a N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec. You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout:
from ipywidgets import GridspecLayout grid = GridspecLayout(4, 3) for i in range(4): for j in range(3): grid[i, j] = create_expanded_button('Button {} - {}'.format(i, j), 'warning') grid
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
To make a widget span several columns and/or rows, you can use slice notation:
grid = GridspecLayout(4, 3, height='300px') grid[:3, 1:] = create_expanded_button('One', 'success') grid[:, 0] = create_expanded_button('Two', 'info') grid[3, 1] = create_expanded_button('Three', 'warning') grid[3, 2] = create_expanded_button('Four', 'danger') grid
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
You can still change properties of the widgets stored in the grid, using the same indexing notation.
grid = GridspecLayout(4, 3, height='300px') grid[:3, 1:] = create_expanded_button('One', 'success') grid[:, 0] = create_expanded_button('Two', 'info') grid[3, 1] = create_expanded_button('Three', 'warning') grid[3, 2] = create_expanded_button('Four', 'danger') grid grid[0, 0].description = "I am the blue one"
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
Note: It's enough to pass an index of one of the grid cells occupied by the widget of interest. Slices are not supported in this context. If there is already a widget that conflicts with the position of the widget being added, it will be removed from the grid:
grid = GridspecLayout(4, 3, height='300px') grid[:3, 1:] = create_expanded_button('One', 'info') grid[:, 0] = create_expanded_button('Two', 'info') grid[3, 1] = create_expanded_button('Three', 'info') grid[3, 2] = create_expanded_button('Four', 'info') grid grid[3, 1] = create_expanded_button('New button!!', 'danger'...
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
Creating scatter plots using GridspecLayout In these examples, we will demonstrate how to use GridspecLayout and bqplot widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package. For example, you can use the following snippet to obtain a scatter plot across multiple dim...
import bqplot as bq import numpy as np from ipywidgets import GridspecLayout, Button, Layout n_features = 5 data = np.random.randn(100, n_features) data[:50, 2] += 4 * data[:50, 0] **2 data[50:, :] += 4 A = np.random.randn(n_features, n_features)/5 data = np.dot(data,A) scales_x = [bq.LinearScale() for i in range(n...
docs/source/examples/Layout Templates.ipynb
SylvainCorlay/ipywidgets
bsd-3-clause
<a id='index'></a> Index Back to top 1 Introduction 2 ScatterPlot components 2.1 The scatter plot marker 2.2 Internal structure 2.3 Data structures 2.3.1 Original data 2.3.2 Tooltip data 2.3.3 Mapper data 2.3.4 Output data 3 ScatterPlot interface 3.1 Data mapper 3.2 Tooltip selector 3.3 Colors in Hex format 4 T...
from IPython.display import Image #this is for displaying the widgets in the web version of the notebook from shaolin.dashboards.bokeh import ScatterPlot from bokeh.sampledata.iris import flowers scplot = ScatterPlot(flowers)
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='data_structures'></a> 2.3 Data structures Back to top The data contained in the blocks described in the above diagram can be accessed in the following way: <a id='original_data'></a> 2.3.1 Original data
scplot.data.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='tooltip_data'></a> 2.3.2 Tooltip data Back to top
scplot.tooltip.output.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='mapper_data'></a> 2.3.3 Mapper data
scplot.mapper.output.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='output_data'></a> 2.3.4 output data
scplot.output.head()
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='plot_interface'></a> 3 Scatter plot Interface Back to top The scatter plot Dashboard contains the bokeh scatter plot and a widget. That widget is a toggle menu that can display two Dashboards: - Mapper: This dashboard is in charge of managing how the data is displayed. - Tooltip: The BokehTooltip Dashboard allow...
mapper = scplot.mapper mapper.buttons.widget.layout.border = "blue solid" mapper.buttons.value = 'line_width' mapper.line_width.data_scaler.widget.layout.border = 'yellow solid' mapper.line_width.data_slicer.widget.layout.border = 'red solid 0.4em' mapper.line_width.data_slicer.columns_slicer.widget.layout.border = 'gr...
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
A plot mapper has the following components: - Marker parameter selector (Blue): A dropdown that allows you to select which marker parameter is going to be changed. - Data slicer (Red): A dashboard in charge of selecting a data-point vector from a pandas data structure. We can slice each of the dimensions of the data struc...
scplot.widget Image(filename='scatter_data/img_2.png')
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='snapshot'></a> 4 Taking a snapshot of the current plot Back to top Although it is possible to save the bokeh plot with any of the standard methods that the bokeh library offers by accessing the plot attribute of the ScatterPlot, shaolin offers the possibility of saving a snapshot of the plot as a shaolin widget...
widget_plot = scplot.snapshot widget_plot.widget Image(filename='scatter_data/img_3.png')
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
<a id='plot_pandas'></a> 5 Plotting pandas Panel and Panel4D Back to top It is also possible to plot a pandas Panel or a Panel4D the same way as a DataFrame. The only restriction for now is that the axis that will be used as index must be the major_axis in the case of a Panel and the items axis in the case of a Panel4D. The t...
from pandas.io.data import DataReader# I know its deprecated but i can't make the pandas_datareader work :P import datetime symbols_list = ['ORCL', 'TSLA', 'IBM','YELP', 'MSFT'] start = datetime.datetime(2010, 1, 1) end = datetime.datetime(2013, 1, 27) panel = DataReader( symbols_list, start=start, end=end,data_source=...
examples/Scatter Plot introduction.ipynb
HCsoft-RD/shaolin
agpl-3.0
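Both `pandas.io.data.DataReader` and the `Panel` container have since been removed from pandas. The same symbols-by-date layout is nowadays usually expressed as a DataFrame with a MultiIndex; a hedged sketch follows, with fabricated prices standing in for what `DataReader` would fetch (these are not real market data):

```python
import numpy as np
import pandas as pd

symbols = ['ORCL', 'TSLA', 'IBM', 'YELP', 'MSFT']
dates = pd.date_range('2010-01-01', periods=4, freq='D')

# Fabricated close prices -- a stand-in for the values DataReader would return
rng = np.random.default_rng(1)
frames = {sym: pd.DataFrame({'Close': rng.uniform(10, 100, len(dates))},
                            index=dates) for sym in symbols}

# The Panel-like container becomes a MultiIndex DataFrame: (symbol, date) rows
panel_df = pd.concat(frames, names=['symbol', 'date'])
print(panel_df.loc['ORCL'].shape)  # one (dates x columns) block per symbol
```

Selecting `panel_df.loc[sym]` recovers the per-symbol DataFrame, which plays the role the Panel's item slices played in the original example.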
The following blocks will be used to define the functions that compute the accumulative sum with the three chosen approaches. Native implementation using a Python for loop
def native_acc(x): """ Calculate accumulative sum using Python native loop approach :param x: Numpy array :return: Accumulative sum (list) """ native_acc_sum = [] sum_aux = 0 for val in x: native_acc_sum.append(val + sum_aux) sum_aux += val return native_acc_sum
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
Just a binding for numpy.cumsum
def np_acc(x): """ Calculate accumulative sum using numpy.cumsum :param x: Numpy array :return: Accumulative sum (Numpy array) """ return np.cumsum(x)
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
Now using the sci20 implementation
def sci20_acc(x): """ Calculate accumulative sum using sci20 Array :param x: Numpy array :return: Accumulative sum (sci20 array) """ x_array = Array(FromNumPy(x)) return AccumulateArrayReturning(x_array)
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
Here we use the timeit standard lib to obtain the elapsed time of each method
def do_benchmark(n=1000, k=10): """ Compute elapsed time for each accumulative sum implementation (Native loop, Numpy and sci20). :param n: Number of array elements. Default is 1000. :param k: Number of averages used for timing. Default is 10. :param dtype: Array data type. Default is np.double. ...
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
And finally we have the results: the sci20 and NumPy implementations are almost equivalent, while the native loop has demonstrated its inefficiency
""" Computes and prints the elapsed time for each accumulative sum implementation (Native loop, Numpy and sci20). """ n_values = [10**x for x in range(1, 8)] # [10, 100, ..., 10^7] for n in n_values: print("Computing benchmark for n={}...".format(n)) dt_native, dt_sci20, dt_np = do_benchmark(n) print("Nat...
accumulative_sum_benchmark.ipynb
ESSS/notebooks
mit
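Since the body of `do_benchmark` is truncated above, here is a minimal self-contained sketch of the same timing comparison using only the `timeit` standard library, restricted to the native loop and `np.cumsum` (sci20 is omitted, as it is not publicly available):

```python
import timeit
import numpy as np

def native_acc(x):
    # Python-loop accumulative sum, as defined above
    out, s = [], 0
    for val in x:
        out.append(val + s)
        s += val
    return out

x = np.random.rand(10_000)

# Average wall time over 10 repetitions of each implementation
dt_native = timeit.timeit(lambda: native_acc(x), number=10)
dt_np = timeit.timeit(lambda: np.cumsum(x), number=10)
print('native/numpy time ratio: {:.0f}x'.format(dt_native / dt_np))
```

The ratio grows with the array size, which is what the benchmark loop over `n = 10, 100, ..., 10^7` above is designed to show.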