markdown | code | path | repo_name | license
|---|---|---|---|---|
And store the dicts generated for each node: | for user in users:
    user["shortest_paths"] = shortest_paths_from(user) | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
Now we're ready to compute betweenness centrality.
For each shortest path, we add $1/n$ to the betweenness centrality of every node on that path, where $n$ is the number of shortest paths between that pair: | for user in users:
    user["betweenness_centrality"] = 0.0
for source in users:
    source_id = source["id"]
    for target_id, paths in source["shortest_paths"].items():  # in Python 2, use iteritems instead of items
        if source_id < target_id:    # be careful not to count each pair twice
            num_paths = len(paths)   # how many shortest paths are... | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
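The accumulation described above can be sketched end to end in plain Python. This is not the book's actual code — `betweenness` below bundles the BFS, the all-shortest-paths enumeration, and the $1/n$ crediting into one hypothetical helper that works on a simple `node -> neighbors` dict:

```python
from collections import deque

def betweenness(neighbors):
    """For every pair of nodes, credit each interior node of each
    shortest path with 1/n, where n is that pair's number of shortest paths."""
    nodes = list(neighbors)
    centrality = {v: 0.0 for v in nodes}
    for source in nodes:
        # BFS that records every predecessor lying on a shortest path
        dist, preds = {source: 0}, {source: []}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for w in neighbors[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    preds[w] = [v]
                    queue.append(w)
                elif dist[w] == dist[v] + 1:
                    preds[w].append(v)

        def all_paths(v):
            if v == source:
                return [[v]]
            return [p + [v] for u in preds[v] for p in all_paths(u)]

        for target in nodes:
            if source < target and target in dist:  # count each pair once
                paths = all_paths(target)
                for path in paths:
                    for v in path[1:-1]:            # interior nodes only
                        centrality[v] += 1 / len(paths)
    return centrality
```

On the path graph 0–1–2–3–4, the endpoints get centrality 0 and the middle node collects a credit from every pair it separates.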
The shortest path between users 0 and 9 passes through no other users, so both have betweenness centrality 0.
Users 3, 4, and 5, on the other hand, sit on a great many shortest paths, so they have high betweenness centralities.
Generally the absolute centrality values aren't very meaningful in themselves; only the relative values are.
Another centrality measure we can look at is closeness centrality.
First we compute each user's farness, which is the sum of the lengths of the shortest paths from from_user to every other user. | #
# closeness centrality
#
def farness(user):
    """sum of the lengths of the shortest paths to every other user"""
    return sum(len(paths[0])
               for paths in user["shortest_paths"].values()) | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
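As a standalone illustration of farness and closeness (assuming only a plain `node -> neighbors` dict rather than the book's user records), BFS hop counts are all that's needed:

```python
from collections import deque

def bfs_distances(start, neighbors):
    """Hop distances from `start` to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in neighbors[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def closeness(node, neighbors):
    """1 / farness, where farness is the sum of shortest-path lengths."""
    dist = bfs_distances(node, neighbors)
    return 1 / sum(d for v, d in dist.items() if v != node)

# a path graph 0-1-2-3-4: the center is "closest" to everyone
path_graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Here node 2 has farness 2+1+1+2 = 6 while node 0 has farness 1+2+3+4 = 10, so the center scores higher.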
Now closeness centrality is simple to compute: | for user in users:
    user["closeness_centrality"] = 1 / farness(user)
for user in users:
    print(user["id"], user["closeness_centrality"]) | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
The computed closeness centralities vary much less, since even nodes at the center of the network are still quite far from the nodes out on the periphery.
As we've seen, computing shortest paths is fairly involved, which is why closeness centrality is rarely used on large networks.
The less intuitive but usually easier-to-compute eigenvector centrality is used more often.
21.2 Eigenvector Centrality
Before we can talk about eigenvector centrality, we first have to look at what eigenvectors are, and to understand eigenvectors we first have to look at matrix operations.
21.2.1 Mat... | def matrix_product_entry(A, B, i, j):
    return dot(get_row(A, i), get_column(B, j))
def matrix_multiply(A, B):
    n1, k1 = shape(A)
    n2, k2 = shape(B)
    if k1 != n2:
        raise ArithmeticError("incompatible shapes!")
    return make_matrix(n1, k2, partial(matrix_product_entry, A, B))
def v... | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
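The snippet above relies on the book's `dot`, `shape`, and `make_matrix` helpers. A self-contained sketch of the same multiplication, with the helpers inlined, looks like this:

```python
def shape(A):
    """(rows, columns) of a list-of-lists matrix."""
    return len(A), len(A[0])

def matrix_multiply(A, B):
    n1, k1 = shape(A)
    n2, k2 = shape(B)
    if k1 != n2:
        raise ArithmeticError("incompatible shapes!")
    # entry (i, j) is the dot product of row i of A with column j of B
    return [[sum(A[i][m] * B[m][j] for m in range(k1))
             for j in range(k2)]
            for i in range(n1)]
```

For example, multiplying [[1, 2], [3, 4]] by [[5, 6], [7, 8]] gives [[19, 22], [43, 50]].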
To find an eigenvector of a matrix A, pick an arbitrary starting vector $v$, apply matrix_operate, rescale the result to have magnitude 1, and repeat: | def find_eigenvector(A, tolerance=0.00001):
    guess = [1 for __ in A]
    while True:
        result = matrix_operate(A, guess)
        length = magnitude(result)
        next_guess = scalar_multiply(1/length, result)
        if distance(guess, next_guess) < tolerance:
            return next_guess, length ... | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
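This is the classic power-iteration method. A runnable version with the book's vector helpers (`matrix_operate`, `magnitude`, and so on) written out inline:

```python
import math

def matrix_operate(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]

def magnitude(v):
    return math.sqrt(sum(x * x for x in v))

def find_eigenvector(A, tolerance=0.00001):
    guess = [1.0 for _ in A]
    while True:
        result = matrix_operate(A, guess)
        length = magnitude(result)
        next_guess = [x / length for x in result]
        if magnitude([g - n for g, n in zip(guess, next_guess)]) < tolerance:
            return next_guess, length   # (eigenvector, eigenvalue)
        guess = next_guess
```

On the symmetric matrix [[2, 1], [1, 2]] this converges to the dominant eigenvalue 3 with eigenvector proportional to [1, 1].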
If the returned guess, pushed through matrix_operate and rescaled to magnitude 1, comes back as itself, then guess is an eigenvector.
Not every matrix of real numbers has (real) eigenvectors and eigenvalues. For example, the following matrix rotates vectors 90 degrees clockwise, so the only vector it maps to a scalar multiple of itself is the zero vector: | rotate = [[0, 1],
          [-1, 0]] | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
If you run the find_eigenvector(rotate) implemented above on this matrix, it will run forever.
Meanwhile, even matrices that do have eigenvectors can sometimes get stuck in an infinite loop: | flip = [[0, 1],
        [1, 0]] | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
This matrix maps any vector [x, y] to [y, x], so [1, 1] is an eigenvector with eigenvalue 1.
However, if you start find_eigenvector from a random vector whose x and y values differ, it will just swap x and y forever.
(Libraries like NumPy implement a variety of methods that handle such cases.)
Despite this minor wrinkle, whenever find_eigenvector does return a result, that result is indeed an eigenvector.
21.2.2 Centrality
How do eigenvectors help us understand a data network?
Before we get to that ... | #
# eigenvector centrality
#
def entry_fn(i, j):
    return 1 if (i, j) in friendships or (j, i) in friendships else 0
n = len(users)
adjacency_matrix = make_matrix(n, n, entry_fn)
adjacency_matrix | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
Each user's eigenvector centrality is then that user's entry in the eigenvector found by find_eigenvector: | eigenvector_centralities, _ = find_eigenvector(adjacency_matrix)
for user_id, centrality in enumerate(eigenvector_centralities):
    print(user_id, centrality) | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
Users with many connections, and connections to users who are themselves highly central, have high eigenvector centrality.
By this measure users 1 and 2 score highest, because they are connected three times over to people who are themselves highly central.
As we move away from them, users' centralities steadily drop off.
21.3 Directed Graphs and PageRank
As DataSciencester isn't attracting much attention, the VP of revenue is considering pivoting from a friendship model to an endorsement model.
It turns out people weren't much interested in which data scientists are friends with one another, but headhunters... | #
# directed graphs
#
endorsements = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (1, 3),
                (2, 3), (3, 4), (5, 4), (5, 6), (7, 5), (6, 8), (8, 7), (8, 9)]
for user in users:
    user["endorses"] = []        # add one list to track outgoing endorsements
    user["endorsed_by"] = []     # and another t... | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
Then we can collect data on the most-endorsed data scientists and sell it to the headhunters: | endorsements_by_id = [(user["id"], len(user["endorsed_by"]))
                      for user in users]
sorted(endorsements_by_id,
       key=lambda x: x[1],   # (user_id, num_endorsements)
       reverse=True) | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
In fact, a number like "endorsement count" is very easy to game.
One of the simplest ways is to create a bunch of phony accounts and have them endorse your account.
Another is to arrange with your friends to all endorse one another. (Users 0, 1, and 2 most likely have such an arrangement.)
A better measure would take into account who endorses you.
It's reasonable for an endorsement from a user with many endorsements to count for more than one from a user with few endorsements.
And this, in fact, is the basic philosophy behind the famous PageRank algorithm... | def page_rank(users, damping = 0.85, num_iters = 100):
    # first distribute PageRank evenly across all nodes
    num_users = len(users)
    pr = { user["id"] : 1 / num_users for user in users }
    # the small amount of PageRank
    # each node receives at every step
    base_pr = (1 - damping) / num_users
    for __ in range(num_iters):
        next_pr = { user["i... | notebook/ch21_network_analysis.ipynb | rnder/data-science-from-scratch | unlicense |
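The iteration that the truncated code above begins can be sketched end to end over a bare `node -> endorsed nodes` dict. This is a toy version under the assumption that every node endorses at least one other node (real implementations must also handle dangling nodes):

```python
def page_rank(endorses, damping=0.85, num_iters=100):
    nodes = list(endorses)
    # start with PageRank distributed evenly across all nodes
    pr = {v: 1 / len(nodes) for v in nodes}
    # the small amount every node receives at every step
    base = (1 - damping) / len(nodes)
    for _ in range(num_iters):
        next_pr = {v: base for v in nodes}
        for v in nodes:
            # v splits the damped share of its rank among its endorsees
            share = damping * pr[v] / len(endorses[v])
            for w in endorses[v]:
                next_pr[w] += share
        pr = next_pr
    return pr
```

Because every iteration redistributes the full mass, the ranks keep summing to 1, and a node endorsed by more (or better-ranked) nodes ends up with a higher score.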
Generating some data
Let's generate a classification dataset that is not easily linearly separable. Our favorite example is the spiral dataset, which can be generated as follows: | N = 100  # the number of training examples per class
D = 2  # the dimensionality of each example, i.e. the number of features
K = 3  # the number of classes to classify into
X = np.zeros((N*K, D))  # the data matrix; each row is a single example
# In our case 100*3 examples, each with two dimensions,
# so the Matrix is 30... | Deep Learning.ai/Numpy - Understanding Backprop.ipynb | Shashi456/Artificial-Intelligence | gpl-3.0 |
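The cell above is truncated; a complete spiral construction in the same spirit (mirroring the standard CS231n recipe this notebook follows, so the exact noise level and arm count are assumptions) looks like this:

```python
import numpy as np

np.random.seed(0)
N, D, K = 100, 2, 3                 # examples per class, features, classes
X = np.zeros((N * K, D))            # data matrix: one row per example
y = np.zeros(N * K, dtype='uint8')  # class labels
for j in range(K):
    ix = range(N * j, N * (j + 1))
    r = np.linspace(0.0, 1, N)                                        # radius
    t = np.linspace(j * 4, (j + 1) * 4, N) + np.random.randn(N) * 0.2  # theta
    X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]                       # spiral arm j
    y[ix] = j
```

Each class traces one noisy spiral arm, which no single line can separate.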
Training a Simple Softmax Linear Classifier
Initialize the Parameters | # initialize parameters randomly
W = 0.01 * np.random.randn(D,K) #randn(D,K) D * K Matrix
# in our case 2 * 3 Matrix
b = np.zeros((1,K)) # in our case 1 * 3
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength | Deep Learning.ai/Numpy - Understanding Backprop.ipynb | Shashi456/Artificial-Intelligence | gpl-3.0 |
Compute the Class Scores | # compute class scores for a linear classif
scores = np.dot(X,W) + b #scores will be 300 *3 and remember numpy broadcasting for b | Deep Learning.ai/Numpy - Understanding Backprop.ipynb | Shashi456/Artificial-Intelligence | gpl-3.0 |
Compute the loss | # Using cross-entropy loss
# Softmax loss = data loss + regularization loss
# Review how the softmax loss is defined before reading this code;
# in this case it is fairly intuitive
num_examples = X.shape[0]
# get unnormalized probabilities
exp_scores = np.exp(scores)
# normalize them for each example
'''We no... | Deep Learning.ai/Numpy - Understanding Backprop.ipynb | Shashi456/Artificial-Intelligence | gpl-3.0 |
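The loss computation that the truncated cell above begins can be packaged as one small function. This is a generic sketch of the data-loss term only (the notebook also adds an L2 regularization term), with the usual max-subtraction trick for numerical stability:

```python
import numpy as np

def softmax_cross_entropy(scores, labels):
    """Mean cross-entropy loss for unnormalized class scores.
    scores: (num_examples, num_classes); labels: (num_examples,) int class ids."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    exp_scores = np.exp(shifted)
    # normalize to per-example class probabilities
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    # negative log-probability assigned to each correct class
    correct_logprobs = -np.log(probs[np.arange(len(labels)), labels])
    return correct_logprobs.mean()
```

A useful sanity check: with all-zero scores every class gets probability 1/K, so the loss is exactly log(K).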
Putting it all Together
Till now we just looked at individual elements | #Train a Linear Classifier
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
for i in range(200):
# evaluate class scores, [N x K]
scores = np.dot(X... | Deep Learning.ai/Numpy - Understanding Backprop.ipynb | Shashi456/Artificial-Intelligence | gpl-3.0 |
Using SQL for Queries
Note that SQL is case-insensitive, but it is traditional to use ALL CAPS for SQL keywords. It is also standard to end SQL statements with a semi-colon.
Simple Queries | pdsql('SELECT * FROM tips LIMIT 5;')
q = """
SELECT sex as gender, smoker, day, size
FROM tips
WHERE size <= 3 AND smoker = 'Yes'
LIMIT 5"""
pdsql(q) | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Filtering on strings | q = """
SELECT sex as gender, smoker, day, size , time
FROM tips
WHERE time LIKE '%u_c%'
LIMIT 5"""
pdsql(q) | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Ordering | q = """
SELECT sex as gender, smoker, day, size , time
FROM tips
WHERE time LIKE '%u_c%'
ORDER BY size DESC
LIMIT 5"""
pdsql(q) | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Aggregate queries | pdsql('select * from tips limit 5;')
q = """
SELECT sex, smoker, sum(total_bill) as total, max(tip) as max_tip
FROM tips
WHERE time = 'Lunch'
GROUP BY sex, smoker
ORDER BY max_tip DESC
"""
pdsql(q) | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Matching students and majors
Inner join | pdsql("""
SELECT * from student s
INNER JOIN major m
ON s.major_id = m.major_id;
""") | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Left outer join
SQL also has RIGHT OUTER JOIN and FULL OUTER JOIN but these are not currently supported by SQLite3 (the database engine used by pdsql). | pdsql("""
SELECT s.*, m.name from student s
LEFT JOIN major m
ON s.major_id = m.major_id;
""") | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Emulating a full outer join with UNION ALL
Only necessary if the database does not provide FULL OUTER JOIN
Using linker tables to match students to classes (a MANY TO MANY join)
Using SQLite3
SQLite3 is part of the standard library. However, the mechanics of using essentially any database in Python is similar, because ... | import sqlite3
c = sqlite3.connect('data/Chinook_Sqlite.sqlite') | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
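The UNION ALL emulation mentioned above can be demonstrated with stdlib sqlite3 and a tiny throwaway schema (the student/major tables below are illustrative, not the lecture's actual data): take the left join in one direction, then append the rows the other side missed.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE student (name TEXT, major_id INTEGER);
CREATE TABLE major (major_id INTEGER, name TEXT);
INSERT INTO student VALUES ('alice', 1), ('bob', NULL);
INSERT INTO major VALUES (1, 'math'), (2, 'physics');
""")

# FULL OUTER JOIN = LEFT JOIN in each direction, with the second query
# restricted to rows the first one missed, glued together by UNION ALL
full_outer = """
SELECT s.name, m.name FROM student s LEFT JOIN major m USING (major_id)
UNION ALL
SELECT s.name, m.name FROM major m LEFT JOIN student s USING (major_id)
WHERE s.major_id IS NULL
"""
rows = con.execute(full_outer).fetchall()
```

The result keeps unmatched rows from both sides: bob (no major) and physics (no student) each appear once.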
Standard SQL statements with parameter substitution
Note: Using Python string substitution for Python defined parameters is dangerous because of the risk of SQL injection attacks. Use parameter substitution with ? instead.
Do this | t = ['%rock%', 10]
list(c.execute("SELECT * FROM Album WHERE Title like ? AND ArtistID > ? LIMIT 5;", t)) | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
Not this | t = ("'%rock%'", 10)
list(c.execute("SELECT * FROM Album WHERE Title like %s AND ArtistID > %d LIMIT 5;" % t)) | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
User defined functions
Sometimes it is useful to have custom functions that run on the database server rather than on the client. These are called User Defined Functions or UDF. How do to do this varies with the database used, but it is fairly simple with Python and SQLite.
A standard UDF | def encode(text, offset):
"""Caesar cipher of text with given offset."""
from string import ascii_lowercase, ascii_uppercase
tbl = dict(zip(map(ord, ascii_lowercase + ascii_uppercase),
ascii_lowercase[offset:] + ascii_lowercase[:offset] +
ascii_uppercase[offset:] + ascii_uppe... | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
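Registering such a UDF with SQLite is a one-liner via `Connection.create_function`, after which the Python function is callable from SQL. A compact, self-contained variant of the Caesar-cipher example (rewritten with `str.maketrans`, so the exact helper differs from the truncated cell above):

```python
import sqlite3
from string import ascii_lowercase as lo, ascii_uppercase as up

def encode(text, offset):
    """Caesar cipher of text with the given offset (ASCII letters only)."""
    tbl = str.maketrans(lo + up,
                        lo[offset:] + lo[:offset] + up[offset:] + up[:offset])
    return text.translate(tbl)

con = sqlite3.connect(':memory:')
con.create_function('encode', 2, encode)   # SQL name, arg count, callable
result = con.execute("SELECT encode('Hello', 3)").fetchone()[0]
```

Once registered, `encode` runs inside any query, e.g. over a whole column of a table.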
Using SQL magic functions
We will use the ipython-sql notebook extension for convenience. This will only work in notebooks and IPython scripts with the .ipy extension. | %load_ext sql | scratch/Lecture08.ipynb | cliburn/sta-663-2017 | mit |
3. Open the file and use .read() to save the contents of the file to a string called fields. Make sure the file is closed at the end. | # Write your code here:
# Run fields to see the contents of contacts.txt:
fields | nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb | rishuatgithub/MLPy | apache-2.0 |
Working with PDF Files
4. Use PyPDF2 to open the file Business_Proposal.pdf. Extract the text of page 2. | # Perform import
# Open the file as a binary object
# Use PyPDF2 to read the text of the file
# Get the text from page 2 (CHALLENGE: Do this in one step!)
page_two_text =
# Close the file
# Print the contents of page_two_text
print(page_two_text) | nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb | rishuatgithub/MLPy | apache-2.0 |
5. Open the file contacts.txt in append mode. Add the text of page 2 from above to contacts.txt.
CHALLENGE: See if you can remove the word "AUTHORS:" | # Simple Solution:
# CHALLENGE Solution (re-run the %%writefile cell above to obtain an unmodified contacts.txt file):
| nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb | rishuatgithub/MLPy | apache-2.0 |
Regular Expressions
6. Using the page_two_text variable created above, extract any email addresses that were contained in the file Business_Proposal.pdf. | import re
# Enter your regex pattern here. This may take several tries!
pattern =
re.findall(pattern, page_two_text) | nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/03-Python-Text-Basics-Assessment.ipynb | rishuatgithub/MLPy | apache-2.0 |
Exercise: Compute the velocity of the penny after t seconds. Check that the units of the result are correct. | # Solution goes here | notebooks/chap01.ipynb | AllenDowney/ModSimPy | mit |
Exercise: Why would it be nonsensical to add a and t? What happens if you try? | # Solution goes here | notebooks/chap01.ipynb | AllenDowney/ModSimPy | mit |
Exercise: Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from h plus 10 feet.
Define a variable named foot that contains the unit foot provided by UNITS. Define a variable named pole_height and give it the value 10 feet.
What happens if you add h, which is in unit... | # Solution goes here
# Solution goes here | notebooks/chap01.ipynb | AllenDowney/ModSimPy | mit |
Exercise: In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating.
As a simplification, let's assume that the acceleration of the penny is a until the penny reaches 18 m/s, and then 0 afterwards. What is the... | # Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here | notebooks/chap01.ipynb | AllenDowney/ModSimPy | mit |
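Without spoiling the exercise cells, here is one way the two-phase arithmetic can be set up (the 381 m building height is an assumption for illustration): accelerate at $a$ until 18 m/s, then fall the remaining distance at constant speed.

```python
a = 9.8        # m/s**2, gravitational acceleration
v_term = 18    # m/s, assumed terminal velocity of the penny
h = 381        # m, approximate height of the Empire State Building (assumed)

t1 = v_term / a              # time to reach terminal velocity
d1 = 0.5 * a * t1**2         # distance fallen while accelerating
d2 = h - d1                  # remaining distance, covered at constant speed
t2 = d2 / v_term
total = t1 + t2              # total fall time, roughly 22 seconds
```

Compare this with the no-air-resistance answer from the earlier exercise to see how much the terminal velocity slows the fall.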
Filtering and resampling data
Some artifacts are restricted to certain frequencies and can therefore
be fixed by filtering. An artifact that typically affects only some
frequencies is due to the power line.
Power-line noise is a noise created by the electrical network.
It is composed of sharp peaks at 50Hz (or 60Hz dep... | import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog_proj.fif'
tmin, tmax = 0, 20 # use the first 20s of data
# Setup for reading the raw data (save memory by c... | 0.17/_downloads/1a57f50035589ab531155bdb55bcbcb5/plot_artifacts_correction_filtering.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly
$$X_1 + X_2 = X_1 + X^2 + X_1 = 2 X_1 + X^2$$
Or $2X_1$ plus a parabola.
However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Mu... | # Load pricing data for two arbitrarily-chosen assets and SPY
start = '2014-01-01'
end = '2015-01-01'
asset1 = get_pricing('DTV', fields='price', start_date=start, end_date=end)
asset2 = get_pricing('FISV', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, ... | notebooks/lectures/Multiple_Linear_Regression/notebook.ipynb | quantopian/research_public | apache-2.0 |
The Laplacian matrix is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relati... | def complete_deg(n):
a = np.identity((n), dtype = np.int)
for element in np.nditer(a, op_flags=['readwrite']):
if element > 0:
element[...] = element + n - 2
return a
complete_deg(3)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*n... | assignments/assignment03/NumpyEx04.ipynb | geoneill12/phys202-2015-work | mit |
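The `np.nditer` loop above works, but both matrices for $K_n$ also fall out of one-line identities — a sketch of an equivalent, more idiomatic construction:

```python
import numpy as np

def complete_deg(n):
    """Degree matrix of K_n: every vertex has degree n-1."""
    return (n - 1) * np.eye(n, dtype=int)

def complete_adj(n):
    """Adjacency matrix of K_n: ones everywhere except the diagonal."""
    return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
```

Both versions satisfy the same assertions used in the notebook's checks.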
The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy. | def complete_adj(n):
b = np.identity((n), dtype = np.int)
for element in np.nditer(b, op_flags=['readwrite']):
if element == 0:
element[...] = 1
else:
element[...] = 0
return b
complete_adj(3)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
a... | assignments/assignment03/NumpyEx04.ipynb | geoneill12/phys202-2015-work | mit |
Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | def L(n):
    L = complete_deg(n) - complete_adj(n)
    return L
print(L(1))
print(L(2))
print(L(3))
print(L(4))
print(L(5))
print(L(6)) | assignments/assignment03/NumpyEx04.ipynb | geoneill12/phys202-2015-work | mit |
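One way to explore the spectrum numerically: note that $L = D - A = nI - J$ for $K_n$ (with $J$ the all-ones matrix), and ask NumPy for the eigenvalues of the symmetric result. For $n=5$ they turn out to be one 0 and four 5s, suggesting the pattern {0, n, ..., n}:

```python
import numpy as np

def laplacian(n):
    """L = D - A for the complete graph K_n, i.e. n*I minus the all-ones matrix."""
    return n * np.eye(n) - np.ones((n, n))

# eigvalsh is appropriate here because L is symmetric; it returns
# the eigenvalues in ascending order
evals = np.linalg.eigvalsh(laplacian(5))
```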
The companies quoted on the NEEQ also include the "two networks" companies (NET and STAQ). These were established in the 1990s with the approval of the People's Bank of China — nine companies in total (Guangdong Media (400003) listed on an exchange, leaving eight). Because of policy constraints, such companies are still not allowed to raise new financing on the third-board market and can only transfer shares. These companies include:
* Daziran (400001): this stock has been delisted from the NEEQ and relisted under code 834019 as a company quoted with the National Equities Exchange and Quotations. A possible problem is that the database may treat company 400001's financial information as that of the newly quoted company as well; so in processing, the financial information disclosed in the new listing's transfer prospectus is used directly as the validation standard. (To-do: as a comparison, simply drop this company's data.)
* Changbai 5 (400002)
* Haiguoshi 5 (400005)
* ... | stkcs_net_staq = ["400001.OC", "400002.OC", "400005.OC", "400006.OC", "400007.OC", "400009.OC", "400010.OC", "400013.OC"]
stocks_neeq = stocks_neeq[~stocks_neeq["证券代码"].isin(stkcs_net_staq)]
len(stocks_neeq)
stocks_neeq[stocks_neeq["证券代码"] == "834019.OC"]
stocks_neeq = stocks_neeq.drop(6644)
stocks_neeq[stocks_nee... | src/Stocks.ipynb | januslian/neeqem | gpl-3.0 |
Handling of delisted stocks: 11 companies transferred to exchange listings, 1 made losses two years in a row, 16 were absorbed in mergers, 3 converted from joint-stock companies into limited liability companies, and 6 stated no reason for delisting. Two companies in the database's delisted-stock list, 400001 and 400003, are "two networks" companies that moved to an exchange and are not treated as research samples. | delisting = pandas.read_excel("../data/raw/退市股票一览.xlsx")  # companies delisted before 2016-04-15
delisting = delisting[:-2]
delisting.head()
delisting = delisting[~delisting["代码"].isin(["400001.OC", "400003.OC"])]
stkcd = stocks_neeq["证券代码"].append(delisting["代码"])
stkcd[stkcd.duplicated()]
delisting = pandas.read_excel("../data/delis... | src/Stocks.ipynb | januslian/neeqem | gpl-3.0 |
This yields a final sample of {{len(stocks_neeq_all)}} "NEEQ" companies.
The earliest listing date among the "NEEQ" sample companies is {{stocks_neeq_all["挂牌日期"].min()}}, so when selecting the A-share Main Board, SME Board, and ChiNext companies to compare against, we likewise select those from 2006 onward (ChiNext itself began in October 2009; {{a}} NEEQ companies were listed between 2006 and October 2009).
Since all "NEEQ" companies selected for this study were listed for trading after 2006, the corresponding Main Board, SME Board, and ChiNext companies will also be chosen from those with IPOs after 2005 as the comparison set. Subsequent tests will also use actual seasoned offerings on those three boards as the comparison, to keep the issuance process somewhat comparable.
To-do: need to determine the est... | stocks_neeq_all.iloc[0]
year = lambda x: x.year
stocks_neeq_all["挂牌年度"] = stocks_neeq_all["挂牌日期"].apply(year)
stocks_neeq_all["挂牌年度"].value_counts().sort_index()
sns.barplot(stocks_neeq_all["挂牌年度"].value_counts().sort_index().index,
            stocks_neeq_all["挂牌年度"].value_counts().sort_index(), palette="Blues_d");... | src/Stocks.ipynb | januslian/neeqem | gpl-3.0 |
2. Hypothesis tests | #California burritos vs. Carnitas burritos
TODO
# Don Carlos 1 vs. Don Carlos 2
TODO
# Bonferroni correction
TODO | burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb | srcole/qwm | mit |
3. Burrito dimension distributions
Distribution of each burrito quality | import math
def metrichist(metricname):
if metricname == 'Volume':
bins = np.arange(.375,1.225,.05)
xticks = np.arange(.4,1.2,.1)
xlim = (.4,1.2)
else:
bins = np.arange(-.25,5.5,.5)
xticks = np.arange(0,5.5,.5)
xlim = (-.25,5.25)
plt.figure(figsize=(5... | burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb | srcole/qwm | mit |
Test for normal distribution | TODO | burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb | srcole/qwm | mit |
Changes implemented
From most important to least important:
* pre-compute PSF parameters across multiple spectra and wavelengths to
enable vectorized calls to legval instead of many scalar calls [~25% faster]
* hoist wavelength -> [-1,1] for legendre calculations out of loop [~8% faster]
* replace np.outer wi... | h1 = Table.read('data/portland/hsw-before.txt', format='ascii.basic')
h2 = Table.read('data/portland/hsw-after.txt', format='ascii.basic')
k1 = Table.read('data/portland/knl-before.txt', format='ascii.basic')
k2 = Table.read('data/portland/knl-after.txt', format='ascii.basic')
plot(h1['nproc'], h1['rate'], 's:', color... | doc/portland-extract.ipynb | sbailey/knltest | bsd-3-clause |
Build in plots for storytelling | def labelit(title_=''):
semilogx()
semilogy()
xticks([1,2,4,8,16,32,64,128], [1,2,4,8,16,32,64,128])
legend(loc='upper left')
title(title_)
ylim(20, 20000)
xlabel('Number of Processes')
ylabel('Extraction Rate')
figure()
plot(h1['nproc'], h1['rate'], 's-', color='C0', label='Haswell')
p... | doc/portland-extract.ipynb | sbailey/knltest | bsd-3-clause |
Configuration parameters | import GAN.utils as utils
# reload(utils)
class Parameters(utils.Parameters):
# dataset to be loaded
load_datasets=utils.param(["moriond_v9","abs(ScEta) < 1.5"])
c_names = utils.param(['Phi','ScEta','Pt','rho','run'])
x_names = utils.param(['EtaWidth','R9','SigmaIeIe','S4','PhiWidth','Covariance... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Load datasets
Apply cleaning and reweight MC to match data | data,mc = cms.load_zee(*LOAD_DATASETS)
data.columns
mc.columns | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Cleaning | if not CLEANING is None:
print('cleaning data and mc')
thr_up = pd.read_hdf(CLEANING,'thr_up')
thr_down = pd.read_hdf(CLEANING,'thr_down')
nevts_data = data.shape[0]
nevts_mc = mc.shape[0]
data = data[ ((data[thr_down.index] >= thr_down) & (data[thr_up.index] <= thr_up)).all(axis=1) ]
mc = m... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Generate extra variables for MC (eg run number) distributed like data | for col in SAMPLE_FROM_DATA:
print('sampling', col)
mc[col] = data[col].sample(mc.shape[0]).values
| cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Select target and conditional features | c_names = C_NAMES
x_names = X_NAMES
data_c = data[c_names]
data_x = data[x_names]
mc_c = mc[c_names]
mc_x = mc[x_names]
data_x.columns, data_x.shape, data_c.columns, data_c.shape
data_x.columns, data_c.columns
mc_x.columns, mc_c.columns
| cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Reweight MC | # reload(preprocessing)
# reload(utils)
# iniliatlize weights to default
if MCWEIGHT == '':
mc_w = np.ones(mc_x.shape[0])
else:
mc_w = mc[MCWEIGHT].values
print(mc_w[:10])
# take care of negative weights
if USE_ABS_WEIGHT:
mc_w = np.abs(mc_w)
# reweighting
if not REWEIGHT is None:
for fil in REW... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Features transformation | # reload(preprocessing)
if GLOBAL_MATCH:
_,data_c,mc_x,mc_c,scaler_x,scaler_c = preprocessing.transform(None,data_c,mc_x,mc_c,FEAT_TRANSFORM)
_,_,data_x,_,scaler_x_data,_ = preprocessing.transform(None,None,data_x,None,FEAT_TRANSFORM)
else:
data_x,data_c,mc_x,mc_c,scaler_x,scaler_c = preprocessing.tra... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Prepare train and test sample | nmax = min(data_x.shape[0]//FRAC_DATA,mc_x.shape[0])
data_x_train,data_x_test,data_c_train,data_c_test,data_w_train,data_w_test = cms.train_test_split(data_x[:nmax],data_c[:nmax],data_w[:nmax],test_size=0.1)
mc_x_train,mc_x_test,mc_c_train,mc_c_test,mc_w_train,mc_w_test = cms.train_test_split(mc_x[:nmax],mc_c[:nmax],m... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Instantiate the model
Create the model and compile it. | # reload(models)
xz_shape = (1,len(x_names))
c_shape = (1,len(c_names))
gan = models.MyFFGAN( xz_shape, xz_shape, c_shape=c_shape,
g_opts=G_OPTS,
d_opts=D_OPTS,
dm_opts=DM_OPTS,
am_opts=AM_OPTS,
gan_targets=GAN_TAR... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
This will actually trigger the instantiation of the generator
(if not done here, it will happen before compilation). | gan.get_generator()
Same as above for the discriminator.
Two discriminator models are created: one for the discriminator training phase and another
for the generator training phase.
The difference between the two is that the second does not contain dropout layers. | gan.get_discriminator() | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
model compilation | gan.adversarial_compile(loss=LOSS,schedule=SCHEDULE)
gan.get_generator().summary()
gan.get_discriminator()[0].summary()
gan.get_discriminator()[1].summary()
# sanity check
set(gan.am[0].trainable_weights)-set(gan.am[1].trainable_weights) | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
Training
Everything is ready. We start training. | from keras.optimizers import RMSprop
if PRETRAIN_G:
generator = gan.get_generator()
generator.compile(loss="mse",optimizer=RMSprop(lr=1.e-3))
generator.fit( [mc_c_train,mc_x_train], [mc_c_train,mc_x_train], sample_weight=[mc_w_train,mc_w_train], epochs=1, batch_size=BATCH_SIZE )
# reload(base)
initial_e... | cms_zee_conditional_wgan.ipynb | musella/GAN | gpl-3.0 |
### Clustering pearsons_r with HDBSCAN | # Clustering the pearsons_R with N/A vlaues removed
hdb_t1 = time.time()
hdb_pearson_r = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_pearson_r)
hdb_pearson_r_labels = hdb_pearson_r.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_... | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | gilmana/Cu_transition_time_course- | mit |
Looks like there are two clusters, some expression and zero expression across samples.
### Clustering mean centered euclidean distance with with HDBSCAN | df3_euclidean_mean.hist()
# Clustering the mean centered euclidean distance of TPM counts
hdb_t1 = time.time()
hdb_euclidean_mean = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_euclidean_mean)
hdb_euclidean_mean_labels = hdb_euclidean_mean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print... | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | gilmana/Cu_transition_time_course- | mit |
Looks like 2 clusters - both with zero expression.
It looks like whether it is a NumPy array or a pandas DataFrame, the result is the same. Let's now try to get the indices of the clustered points.
### Clustering log transformed euclidean distance with with HDBSCAN | df3_euclidean_log2
# Clustering the log2 transformed euclidean distance of TPM counts
hdb_t1 = time.time()
hdb_euclidean_log2 = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_euclidean_log2)
hdb_euclidean_log2_labels = hdb_euclidean_log2.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("ti... | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | gilmana/Cu_transition_time_course- | mit |
Clustering using built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance) | df2_TPM_values = df2_TPM.loc[:,"5GB1_FM40_T0m_TR2":"5GB1_FM40_T180m_TR1"] #isolating the data values
df2_TPM_values_T = df2_TPM_values.T #transposing the data
standard_scaler = StandardScaler()
TPM_counts_mean_centered = standard_scaler.fit_transform(df2_TPM_values_T) #mean centering the data
TPM_counts_mean_ce... | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | gilmana/Cu_transition_time_course- | mit |
Let's look at some clusters
Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[0],:] | Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[1],:] | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | gilmana/Cu_transition_time_course- | mit |
Euclidean_standard_scaled_clusters
Clustering log2 transformed data using built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance) | df2_TPM_log2_scale= df2_TPM_log2.T #transposing the data
standard_scaler = StandardScaler()
TPM_log2_mean_scaled = standard_scaler.fit_transform(df2_TPM_log2_scale) #mean centering the data
TPM_log2_mean_scaled = pd.DataFrame(TPM_log2_mean_scaled) #back to Dataframe
#transposing back to original form and reincerting... | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | gilmana/Cu_transition_time_course- | mit |
DM -- Piece by piece (as coded)
$\rho_b = \Omega_b \rho_c (1+z)^3$
$\rho_{\rm diffuse} = \rho_b - (\rho_* + \rho_{\rm ISM})$
$\rho_*$ is the mass density in stars
$\rho_{\rm ISM}$ is the mass density in the neutral ISM
Number densities
$n_{\rm H} = \rho_{\rm diffuse}/(m_p \, \mu)$
$\mu \approx 1.3$ accounts for Helium
... | reload(igm)
DM = igm.average_DM(1.)
DM | docs/nb/DM_cosmic.ipynb | FRBs/FRB | bsd-3-clause |
Cumulative plot | DM_cumul, zeval = igm.average_DM(1., cumul=True)
# Inoue approximation
DM_approx = 1000. * zeval * units.pc / units.cm**3
plt.clf()
ax = plt.gca()
ax.plot(zeval, DM_cumul, label='JXP')
ax.plot(zeval, DM_approx, label='Approx')
# Label
ax.set_xlabel('z')
ax.set_ylabel(r'${\rm DM}_{\rm IGM} [\rm pc / cm^3]$ ')
# Legend... | docs/nb/DM_cosmic.ipynb | FRBs/FRB | bsd-3-clause |
The Longley dataset is a time series dataset: | data = sm.datasets.longley.load()
data.exog = sm.add_constant(data.exog)
print(data.exog[:5]) | examples/notebooks/gls.ipynb | ChadFulton/statsmodels | bsd-3-clause |
While we don't have strong evidence that the errors follow an AR(1)
process we continue | rho = resid_fit.params[1] | examples/notebooks/gls.ipynb | ChadFulton/statsmodels | bsd-3-clause |
2x2 Grid
You can easily create a layout with 4 widgets arranged on 2x2 matrix using the TwoByTwoLayout widget: | from ipywidgets import TwoByTwoLayout
TwoByTwoLayout(top_left=top_left_button,
top_right=top_right_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button) | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells | TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button) | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
You can pass merge=False in the argument of the TwoByTwoLayout constructor if you don't want this behavior | TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button,
merge=False) | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
You can add a missing widget even after the layout initialization: | layout_2x2 = TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button)
layout_2x2
layout_2x2.top_right = top_right_button | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
You can easily create more complex layouts with custom widgets. For example, you can use bqplot Figure widget to add plots: | import bqplot as bq
import numpy as np
size = 100
np.random.seed(0)
x_data = range(size)
y_data = np.random.randn(size)
y_data_2 = np.random.randn(size)
y_data_3 = np.cumsum(np.random.randn(size) * 100.)
x_ord = bq.OrdinalScale()
y_sc = bq.LinearScale()
bar = bq.Bars(x=np.arange(10), y=np.random.rand(10), scales={'... | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
AppLayout
AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane: | from ipywidgets import AppLayout, Button, Layout
header_button = create_expanded_button('Header', 'success')
left_button = create_expanded_button('Left', 'info')
center_button = create_expanded_button('Center', 'warning')
right_button = create_expanded_button('Right', 'info')
footer_button = create_expanded_button('Fo... | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
However, with the automatic merging feature, it's possible to achieve many other layouts: | AppLayout(header=None,
left_sidebar=None,
center=center_button,
right_sidebar=None,
footer=None)
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None)
AppLayout(header=Non... | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (same as integer) or '100px' (absolu... | AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=footer_button,
pane_widths=[3, 3, 1],
pane_heights=[1, 5, '60px']) | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
Grid layout
GridspecLayout is an N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec.
You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout: | from ipywidgets import GridspecLayout
grid = GridspecLayout(4, 3)
for i in range(4):
for j in range(3):
grid[i, j] = create_expanded_button('Button {} - {}'.format(i, j), 'warning')
grid | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
To make a widget span several columns and/or rows, you can use slice notation: | grid = GridspecLayout(4, 3, height='300px')
grid[:3, 1:] = create_expanded_button('One', 'success')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'warning')
grid[3, 2] = create_expanded_button('Four', 'danger')
grid | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
You can still change properties of the widgets stored in the grid, using the same indexing notation. | grid = GridspecLayout(4, 3, height='300px')
grid[:3, 1:] = create_expanded_button('One', 'success')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'warning')
grid[3, 2] = create_expanded_button('Four', 'danger')
grid
grid[0, 0].description = "I am the blue one" | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
Note: It's enough to pass an index of one of the grid cells occupied by the widget of interest. Slices are not supported in this context.
If there is already a widget that conflicts with the position of the widget being added, it will be removed from the grid: | grid = GridspecLayout(4, 3, height='300px')
grid[:3, 1:] = create_expanded_button('One', 'info')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'info')
grid[3, 2] = create_expanded_button('Four', 'info')
grid
grid[3, 1] = create_expanded_button('New button!!', 'danger'... | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
Creating scatter plots using GridspecLayout
In these examples, we will demonstrate how to use GridspecLayout and the bqplot widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package.
For example, you can use the following snippet to obtain a scatter plot across multiple dim... | import bqplot as bq
import numpy as np
from ipywidgets import GridspecLayout, Button, Layout
n_features = 5
data = np.random.randn(100, n_features)
data[:50, 2] += 4 * data[:50, 0] **2
data[50:, :] += 4
A = np.random.randn(n_features, n_features)/5
data = np.dot(data,A)
scales_x = [bq.LinearScale() for i in range(n... | docs/source/examples/Layout Templates.ipynb | SylvainCorlay/ipywidgets | bsd-3-clause |
<a id='index'></a>
Index
Back to top
1 Introduction
2 ScatterPlot components
2.1 The scatter plot marker
2.2 Internal structure
2.3 Data structures
2.3.1 Original data
2.3.2 Tooltip data
2.3.3 Mapper data
2.3.4 Output data
3 ScatterPlot interface
3.1 Data mapper
3.2 Tooltip selector
3.3 Colors in Hex format
4 T... | from IPython.display import Image #this is for displaying the widgets in the web version of the notebook
from shaolin.dashboards.bokeh import ScatterPlot
from bokeh.sampledata.iris import flowers
scplot = ScatterPlot(flowers) | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='data_structures'></a>
2.3 Data structures
Back to top
The data contained in the blocks described in the above diagram can be accessed in the following way:
<a id='original_data'></a>
2.3.1 Original data | scplot.data.head() | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='tooltip_data'></a>
2.3.2 Tooltip data
Back to top | scplot.tooltip.output.head() | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='mapper_data'></a>
2.3.3 Mapper data | scplot.mapper.output.head() | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='output_data'></a>
2.3.4 output data | scplot.output.head() | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='plot_interface'></a>
3 Scatter plot Interface
Back to top
The scatter plot Dashboard contains the bokeh scatter plot and a widget. That widget is a toggle menu that can display two Dashboards:
- Mapper: This dashboard is in charge of managing how the data is displayed.
- Tooltip: The BokehTooltip Dashboard allow... | mapper = scplot.mapper
mapper.buttons.widget.layout.border = "blue solid"
mapper.buttons.value = 'line_width'
mapper.line_width.data_scaler.widget.layout.border = 'yellow solid'
mapper.line_width.data_slicer.widget.layout.border = 'red solid 0.4em'
mapper.line_width.data_slicer.columns_slicer.widget.layout.border = 'gr... | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
A plot mapper has the following components:
- Marker parameter selector (Blue): A dropdown that allows you to select which marker parameter is going to be changed.
- Data slicer (Red): A dashboard in charge of selecting a datapoint vector from a pandas data structure. We can slice each of the dimensions of the data struc... | scplot.widget
Image(filename='scatter_data/img_2.png') | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='snapshot'></a>
4 Taking a snapshot of the current plot
Back to top
Although it is possible to save the bokeh plot with any of the standard methods that the bokeh library offers by accessing the plot attribute of the ScatterPlot, shaolin offers the possibility of saving a snapshot of the plot as a shaolin widget... | widget_plot = scplot.snapshot
widget_plot.widget
Image(filename='scatter_data/img_3.png') | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
<a id='plot_pandas'></a>
5 Plotting pandas Panel and Panel4D
Back to top
It is also possible to plot a pandas Panel or a Panel4D the same way as a DataFrame. The only restriction for now is that the axis that will be used as index must be the major_axis in case of a Panel and the items axis in case of a Panel4D. The t... | from pandas.io.data import DataReader # I know it's deprecated but I can't make the pandas_datareader work :P
import datetime
symbols_list = ['ORCL', 'TSLA', 'IBM','YELP', 'MSFT']
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2013, 1, 27)
panel = DataReader( symbols_list, start=start, end=end,data_source=... | examples/Scatter Plot introduction.ipynb | HCsoft-RD/shaolin | agpl-3.0 |
The following blocks will be used to define the functions that compute the accumulative sum using the three chosen approaches.
Native implementation using a plain Python for loop | def native_acc(x):
"""
Calculate accumulative sum using Python native loop approach
:param x: Numpy array
:return: Accumulative sum (list)
"""
native_acc_sum = []
sum_aux = 0
for val in x:
native_acc_sum.append(val + sum_aux)
sum_aux += val
return native_acc_sum | accumulative_sum_benchmark.ipynb | ESSS/notebooks | mit |
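For comparison — not one of the three approaches benchmarked in this notebook — the standard library's `itertools.accumulate` computes the same running sum without an explicit loop:

```python
from itertools import accumulate

# Running (accumulative) sum via the standard library.
x = [1, 2, 3, 4]
print(list(accumulate(x)))  # [1, 3, 6, 10]
```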
Just a binding for numpy.cumsum | def np_acc(x):
"""
Calculate accumulative sum using numpy.cumsum
:param x: Numpy array
:return: Accumulative sum (Numpy array)
"""
return np.cumsum(x) | accumulative_sum_benchmark.ipynb | ESSS/notebooks | mit |
Now using sci20 implementation | def sci20_acc(x):
"""
Calculate accumulative sum using sci20 Array
:param x: Numpy array
:return: Accumulative sum (sci20 array)
"""
x_array = Array(FromNumPy(x))
return AccumulateArrayReturning(x_array)
| accumulative_sum_benchmark.ipynb | ESSS/notebooks | mit |
Here we use the timeit standard lib to obtain the elapsed time of each method | def do_benchmark(n=1000, k=10):
"""
Compute elapsed time for each accumulative sum implementation (Native loop, Numpy and sci20).
:param n: Number of array elements. Default is 1000.
:param k: Number of averages used for timing. Default is 10.
:param dtype: Array data type. Default is np.double.
... | accumulative_sum_benchmark.ipynb | ESSS/notebooks | mit |
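As a stand-alone illustration of the kind of timing done inside do_benchmark (the truncated cell above), one can use the timeit standard lib directly; this sketch is an assumption about the approach, not the notebook's actual code:

```python
import timeit

# Time a plain loop against the built-in sum on the same data.
setup = "x = list(range(1000))"
t_loop = timeit.timeit("s = 0\nfor v in x:\n    s += v", setup=setup, number=100)
t_sum = timeit.timeit("sum(x)", setup=setup, number=100)
print(t_loop > 0 and t_sum > 0)  # absolute values are machine-dependent
```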
And finally we have the results. The sci20 and Numpy implementations are almost equivalent, while the native loop has demonstrated its inefficiency | """
Computes and prints the elapsed time for each accumulative sum implementation (Native loop, Numpy and sci20).
"""
n_values = [10**x for x in range(1, 8)] # [10, 100, ..., 10^7]
for n in n_values:
print("Computing benchmark for n={}...".format(n))
dt_native, dt_sci20, dt_np = do_benchmark(n)
print("Nat... | accumulative_sum_benchmark.ipynb | ESSS/notebooks | mit |