Match Labels This routine retrieves the features from disk and pairs them with their one hot encoded labels. Currently all datasets are loaded into memory, but with enough videos, the code should be switched to using a Keras generator.
def one_hot(i):
    return np.array([int(i==0), int(i==1), int(i==2), int(i==3)])

def get_features(labels):
    x, y = [], []
    for i in range(len(labels)):
        video_id = labels[i][0]
        clip_id = labels[i][1]
        label = labels[i][2]
        features = []
        for i in range(7):
            fname = ...
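The four-way `one_hot` above can be written more compactly with an identity matrix; a minimal NumPy sketch (the four-class count is taken from the original code, the `num_classes` parameter is added here for illustration):

```python
import numpy as np

def one_hot(i, num_classes=4):
    # Row i of the identity matrix is the one-hot vector for class i.
    return np.eye(num_classes, dtype=int)[i]

print(one_hot(2))  # third of four classes
```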
project.ipynb
notnil/udacity-ml-capstone
mit
Training This routine trains the model and logs updates to the console and Tensorboard. After training is complete the model is saved using the current timestamp to distinguish training runs.
from keras.callbacks import TensorBoard
import time
import numpy as np

tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0,
                          write_graph=True, write_images=True)
model.fit(X_train, Y_train, batch_size=100, ...
Prediction This routine tests the saved model using the Keras predict method. Overall accuracy and a confusion matrix are displayed to validate that the model is accurate against unseen data.
from keras.models import load_model
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score

def reverse_one_hot(val):
    hi_idx = -1
    hi = -1
    for i in range(len(val)):
        v = val[i]
        if hi == -1 or v > hi:
            hi = v
            hi_idx = i
    return hi_idx
...
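The scan in `reverse_one_hot` is exactly an argmax; a minimal NumPy sketch of the same operation:

```python
import numpy as np

def reverse_one_hot(val):
    # The manual max-scan is equivalent to taking the index of the maximum.
    return int(np.argmax(val))

print(reverse_one_hot([0.1, 0.7, 0.15, 0.05]))  # 1
```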
Calculating gradients
!pip install tensorflow==2.1.0
site/ko/quantum/tutorials/gradients.ipynb
tensorflow/docs-l10n
apache-2.0
TensorFlow Quantum์„ ์„ค์น˜ํ•˜์„ธ์š”.
!pip install tensorflow-quantum
Now import TensorFlow and the module dependencies:
import tensorflow as tf
import tensorflow_quantum as tfq

import cirq
import sympy
import numpy as np

# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
1. Preliminaries Let's make the notion of gradient computation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
Along with an observable:
pauli_x = cirq.X(qubit)
pauli_x
์ด ์—ฐ์‚ฐ์ž๋ฅผ ๋ณด๋ฉด $โŸจY(\alpha)| X | Y(\alpha)โŸฉ = \sin(\pi \ alpha)$๋ผ๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
def my_expectation(op, alpha):
    """Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
    params = {'alpha': alpha}
    sim = cirq.Simulator()
    final_state = sim.simulate(my_circuit, params).final_state
    return op.expectation_from_wavefunction(final_state, {qubit: 0}).real

my_alpha = 0.3
print("Expectation=", my_expecta...
If you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check:
def my_grad(obs, alpha, eps=0.01):
    f_x = my_expectation(obs, alpha)
    f_x_prime = my_expectation(obs, alpha + eps)
    return ((f_x_prime - f_x) / eps).real

print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula:   ', np.pi * np.cos(np.pi * my_alpha))
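Since $f_{1}(\alpha) = \sin(\pi \alpha)$ here, the same check can be run without any quantum machinery; a plain-NumPy sketch, using a central difference for slightly better accuracy than the forward difference above:

```python
import numpy as np

def f(alpha):
    # The analytic expectation for this circuit/observable pair.
    return np.sin(np.pi * alpha)

def finite_diff(func, x, eps=1e-4):
    # Central difference: O(eps^2) error vs O(eps) for the forward version.
    return (func(x + eps) - func(x - eps)) / (2 * eps)

alpha = 0.3
print('Finite difference:', finite_diff(f, alpha))
print('Cosine formula:   ', np.pi * np.cos(np.pi * alpha))
```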
2. ๋ฏธ๋ถ„๊ธฐ์˜ ํ•„์š”์„ฑ ๋” ํฐ ํšŒ๋กœ์ผ์ˆ˜๋ก ์ฃผ์–ด์ง„ ์–‘์ž ํšŒ๋กœ์˜ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ๊ณ„์‚ฐํ•˜๋Š” ๊ณต์‹์ด ํ•ญ์ƒ ์ฃผ์–ด์ง€์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•œ ๊ณต์‹์œผ๋กœ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ์— ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ tfq.differentiators.Differentiator ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํšŒ๋กœ์˜ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ TensorFlow Quantum(TFQ)์˜ ์ƒ๊ธฐ ์˜ˆ๋ฅผ ๋‹ค์‹œ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
expectation_calculation = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
                        operators=pauli_x,
                        symbol_names=['alpha'],
                        symbol_values=[[my_alpha]])
๊ทธ๋Ÿฌ๋‚˜ ์ƒ˜ํ”Œ๋ง์„ ๊ธฐ๋ฐ˜์œผ๋กœ ์˜ˆ์ƒ์น˜๋กœ ์ „ํ™˜ํ•˜๋ฉด(์‹ค์ œ ๊ธฐ๊ธฐ์—์„œ ๋ฐœ์ƒํ•˜๋Š” ์ผ) ๊ฐ’์ด ์•ฝ๊ฐ„ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ์ด์ œ ๋ถˆ์™„์ „ํ•œ ์ถ”์ •์น˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Œ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค.
sampled_expectation_calculation = tfq.layers.SampledExpectation(
    differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
                                operators=pauli_x,
                                repetitions=500,
                                s...
์ด๊ฒƒ์€ ๊ทธ๋ž˜๋””์–ธํŠธ์™€ ๊ด€๋ จํ•˜์—ฌ ์‹ฌ๊ฐํ•œ ์ •ํ™•์„ฑ ๋ฌธ์ œ๋กœ ๋น ๋ฅด๊ฒŒ ๋ณตํ•ฉํ™”๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
                                        operators=pauli_x,
                                        symbol_names=['alpha'],
                                        ...
Here you can see that although the finite-difference formula is fast for computing gradients in the analytical case, for the sampling-based methods it is far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that is not well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
    differentiator=tfq.differentiators.ParameterShift())

with tf.GradientTape() as g:
    g.watch(values_tensor)
    imperfect_outputs = gradient_safe_sampled_expectation(
        my_circuit, operators=pauli_...
์œ„์—์„œ ํŠน์ • ์—ฐ๊ตฌ ์‹œ๋‚˜๋ฆฌ์˜ค์— ํŠน์ • ๋ฏธ๋ถ„๊ธฐ๊ฐ€ ๊ฐ€์žฅ ์ž˜ ์‚ฌ์šฉ๋จ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๊ธฐ๊ธฐ ๋…ธ์ด์ฆˆ ๋“ฑ์— ๊ฐ•ํ•œ ๋А๋ฆฐ ์ƒ˜ํ”Œ ๊ธฐ๋ฐ˜ ๋ฐฉ๋ฒ•์€ ๋ณด๋‹ค '์‹ค์ œ' ์„ค์ •์—์„œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ…Œ์ŠคํŠธํ•˜๊ฑฐ๋‚˜ ๊ตฌํ˜„ํ•  ๋•Œ ์œ ์šฉํ•œ ๋ฏธ๋ถ„๊ธฐ์ž…๋‹ˆ๋‹ค. ์œ ํ•œ ์ฐจ๋ถ„๊ณผ ๊ฐ™์€ ๋” ๋น ๋ฅธ ๋ฐฉ๋ฒ•์€ ๋ถ„์„ ๊ณ„์‚ฐ ๋ฐ ๋” ๋†’์€ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์›ํ•˜์ง€๋งŒ ์•„์ง ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ธฐ๊ธฐ ์‹คํ–‰ ๊ฐ€๋Šฅ์„ฑ์— ๊ด€์‹ฌ์ด ์—†๋Š” ๊ฒฝ์šฐ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. 3. ๋‹ค์ค‘ observable ๋‘ ๋ฒˆ์งธ observable์„ ์†Œ๊ฐœํ•˜๊ณ  TensorFlow Quantum์ด ๋‹จ์ผ ํšŒ๋กœ์— ๋Œ€ํ•ด ์—ฌ๋Ÿฌ observable์„ ์ง€์›ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.
pauli_z = cirq.Z(qubit)
pauli_z
์ด observable์ด ์ด์ „๊ณผ ๊ฐ™์€ ํšŒ๋กœ์—์„œ ์‚ฌ์šฉ๋œ๋‹ค๋ฉด $f_{2}(\alpha) = โŸจY(\alpha)| Z | Y (\alpha)โŸฉ = \cos(\pi \alpha)$ ๋ฐ $f_{2}^{'}(\alpha) = -\pi \sin (\pi \alpha)$์ž…๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•˜๊ฒŒ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.
test_value = 0.

print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula:      ', -np.pi * np.sin(np.pi * test_value))
์ด ์ •๋„๋ฉด ์ผ์น˜ํ•œ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$๋ฅผ ์ •์˜ํ•˜๋ฉด $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$์ž…๋‹ˆ๋‹ค. ํšŒ๋กœ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด TensorFlow Quantum์—์„œ ํ•˜๋‚˜ ์ด์ƒ์˜ observable์„ ์ •์˜ํ•˜๋Š” ๊ฒƒ์€ $g$์— ๋” ๋งŽ์€ ์šฉ์–ด๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ํšŒ๋กœ์—์„œ ํŠน์ • ์‹ฌ๋ณผ์˜ ๊ทธ๋ž˜๋””์–ธํŠธ๊ฐ€ ํ•ด๋‹น ํšŒ๋กœ์— ์ ์šฉ๋œ ํ•ด๋‹น ์‹ฌ๋ณผ์˜ ๊ฐ observable์— ๋Œ€ํ•ด ๊ทธ๋ž˜๋””์–ธํŠธ์˜ ํ•ฉ๊ณผ ๋™์ผํ•จ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” TensorFlow ๊ทธ๋ž˜๋””...
sum_of_outputs = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
               operators=[pauli_x, pauli_z],
               symbol_names=['alpha'],
               symbol_values=[[test_value]])
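The linearity being relied on here can be verified numerically without TFQ, since $f_{1}(\alpha) = \sin(\pi \alpha)$ and $f_{2}(\alpha) = \cos(\pi \alpha)$ for this circuit; a sketch at the test value $\alpha = 0$:

```python
import numpy as np

alpha = 0.0
f1_prime = np.pi * np.cos(np.pi * alpha)   # derivative of sin(pi*alpha)
f2_prime = -np.pi * np.sin(np.pi * alpha)  # derivative of cos(pi*alpha)
g_prime = f1_prime + f2_prime              # linearity of differentiation
print(g_prime)  # pi at alpha = 0
```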
Here you see that the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. Now when you take the gradient:
test_value_tensor = tf.convert_to_tensor([[test_value]])

with tf.GradientTape() as g:
    g.watch(test_value_tensor)
    outputs = sum_of_outputs(my_circuit,
                             operators=[pauli_x, pauli_z],
                             symbol_names=['alpha'],
                             symbol_values=test_v...
์—ฌ๊ธฐ์—์„œ ๊ฐ observable์˜ ๊ทธ๋ž˜๋””์–ธํŠธ์˜ ํ•ฉ์ด ์‹ค์ œ๋กœ $\alpha$์˜ ๊ทธ๋ž˜๋””์–ธํŠธ์ž„์„ ํ™•์ธํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋™์ž‘์€ ๋ชจ๋“  TensorFlow Quantum ๋ฏธ๋ถ„๊ธฐ์—์„œ ์ง€์›ํ•˜๋ฉฐ ๋‚˜๋จธ์ง€ TensorFlow์™€์˜ ํ˜ธํ™˜์„ฑ์— ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. 4. ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ• ์—ฌ๊ธฐ์„œ๋Š” ์–‘์ž ํšŒ๋กœ์— ๋Œ€ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ๋ฏธ๋ถ„ ๋ฃจํ‹ด์„ ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค. TensorFlow Quantum ์„œ๋ธŒ ํด๋ž˜์Šค tfq.differentiators.Differentiator ๋‚ด์— ์กด์žฌํ•˜๋Š” ๋ชจ๋“  ๋ฏธ๋ถ„๊ธฐ์ž…๋‹ˆ๋‹ค. ๋ฏธ๋ถ„๊ธฐ์—์„œ differentiate_analytic ๋ฐ differentiate_sampled๋ฅผ...
class MyDifferentiator(tfq.differentiators.Differentiator):
    """A Toy differentiator for <Y^alpha | X |Y^alpha>."""

    def __init__(self):
        pass

    @tf.function
    def _compute_gradient(self, symbol_values):
        """Compute the gradient based on symbol_values."""
        # f(x) = sin(pi * x)
        ...
This new differentiator can now be used with existing tfq.layers objects:
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)

# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
    g.watch(values_tensor)
    exact_outputs = expectation_calculation(my_circuit,
                                            operato...
์ด์ œ ์ด ์ƒˆ๋กœ์šด ๋ฏธ๋ถ„๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ถ„ ops๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์š”์ : ์ฐจ๋ณ„ํ™” ์š”์†Œ๋Š” ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ op์—๋งŒ ์—ฐ๊ฒฐํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ด์ „ op์— ์—ฐ๊ฒฐ๋œ ๋ฏธ๋ถ„๊ธฐ๋Š” ์ƒˆ op์— ์—ฐ๊ฒฐํ•˜๊ธฐ ์ „์— ์ƒˆ๋กœ ๊ณ ์ณ์•ผ ํ•ฉ๋‹ˆ๋‹ค.
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
    cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))

# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_o...
Using the library
gr = nx.Graph()
for i in range(1, 5):
    gr.add_node(i)
for i in edges:
    gr.add_edge(i[0], i[1])
nx.draw_spectral(gr)
plt.show()

print('The graph is directed?: ', nx.is_directed(gr))
if nx.is_directed(gr) is True:
    print('Number of edges: ', gr.number_of_edges())
else:
    print('Number of edges: ', gr....
alejogm0520/Ejercicios 1.1.ipynb
spulido99/NetworksAnalysis
mit
From scratch
Directed = False
print('The graph is directed?: ', Directed)
if Directed is True:
    print('Number of edges: ', len(edges))
else:
    print('Number of edges: ', 2*len(edges))
temp = []
for i in edges:
    temp.append(i[0])
    temp.append(i[1])
temp = np.array(temp)
print('Number of nodes: ', np.size(np.unique(te...
Exercise - Adjacency matrix (solve both with your own code and using the NetworkX (Python) or iGraph (R) library). Build the adjacency matrix of the graph from the previous exercise (for both the directed and undirected case). Using the library
A = nx.adjacency_matrix(gr)
print('No Dirigida')
print(A)
A = nx.adjacency_matrix(gr2)
print('Dirigida')
print(A)
From scratch
def adjmat(ed, directed):
    if directed is True:
        temp_d1 = []
        temp_d2 = []
        for i in ed:
            temp_d1.append(i[0])
            temp_d2.append(i[1])
        B = sc.sparse.csr_matrix((np.ones(len(temp_d1), dtype='int'),
                                  (temp_d1, temp_d2)))
    else:
        temp_d1 = []
        temp_d2 = []
        ...
Exercise - Sparseness Enron email network - Directed http://snap.stanford.edu/data/email-Enron.html Compute the ratio between the number of existing links and the number of possible links.
F = open("Email-Enron.txt", 'r')
Net1 = nx.read_edgelist(F)
F.close()
n = Net1.number_of_nodes()
posibles = Net1.number_of_nodes()*(Net1.number_of_nodes()-1.0)/2.0
print('Ratio: ', Net1.number_of_edges()/posibles)
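The ratio computed above is the graph density: existing edges over the $n(n-1)/2$ possible edges of an undirected simple graph. A standalone sketch of the formula (toy numbers, not the Enron data):

```python
def density(num_nodes, num_edges):
    # Fraction of possible undirected links that actually exist.
    possible = num_nodes * (num_nodes - 1) / 2
    return num_edges / possible

# Toy check: a triangle (3 nodes, 3 edges) uses every possible link.
print(density(3, 3))  # 1.0
```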
How many zeros are there in the adjacency matrix of each of the chosen networks?
ANet1 = nx.adjacency_matrix(Net1)
nzeros = Net1.number_of_nodes()*Net1.number_of_nodes() - len(ANet1.data)
print('La Red tiene: ', nzeros, ' ceros')
del Net1, posibles, ANet1, nzeros
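The zero count used above follows from the n × n adjacency matrix having n² entries in total, of which each undirected edge fills two symmetric entries; a standalone sketch with toy numbers:

```python
def adjacency_zeros(num_nodes, num_edges, directed=False):
    # Each undirected edge fills two symmetric entries of the n x n matrix;
    # a directed edge fills only one.
    nonzeros = num_edges if directed else 2 * num_edges
    return num_nodes * num_nodes - nonzeros

print(adjacency_zeros(3, 3))  # triangle: 9 entries, 6 nonzero -> 3 zeros
```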
Social circles from Facebook (anonymized) - Undirected http://snap.stanford.edu/data/egonets-Facebook.html Compute the ratio between the number of existing links and the number of possible links.
F = open("facebook_combined.txt", 'r')
Net = nx.read_edgelist(F)
F.close()
n = Net.number_of_nodes()
posibles = Net.number_of_nodes()*(Net.number_of_nodes()-1.0)/2.0
print('Ratio: ', Net.number_of_edges()/posibles)
How many zeros are there in the adjacency matrix of each of the chosen networks?
ANet = nx.adjacency_matrix(Net)
nzeros = Net.number_of_nodes()*Net.number_of_nodes() - len(ANet.data)
print('La Red tiene: ', nzeros, ' ceros')
del Net, n, posibles, ANet, nzeros
Webgraph from the Google programming contest, 2002 - Directed http://snap.stanford.edu/data/web-Google.html Compute the ratio between the number of existing links and the number of possible links.
F = open("web-Google.txt", 'r')
Net = nx.read_edgelist(F)
F.close()
n = Net.number_of_nodes()
posibles = Net.number_of_nodes()*(Net.number_of_nodes()-1.0)/2.0
print('Ratio: ', Net.number_of_edges()/posibles)
Exercise - Bipartite networks Define a bipartite network and generate both projections; explain what the nodes and links are, both in the original network and in the projections. We define a network where nodes E1, E2, and E3 are bus stations, and nodes R101, R250, R161, R131, and R452 are bus routes.
B = nx.Graph()
B.add_nodes_from(['E1', 'E2', 'E3'], bipartite=0)
B.add_nodes_from(['R250', 'R161', 'R131', 'R452', 'R101'], bipartite=1)
B.add_edges_from([('E1', 'R250'), ('E1', 'R452'), ('E3', 'R250'),
                  ('E3', 'R131'), ('E3', 'R161'), ('E3', 'R452'),
                  ('E2', 'R161'), ('E2', 'R101'), ('E1', 'R131')])
B1 = nx.algorithms.bipart...
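The station projection can also be sketched without NetworkX: two stations are linked whenever at least one route serves both. A minimal sketch, restating the edge list from the cell above:

```python
from itertools import combinations

# Bipartite edges (station, route), as in the example above.
edges = [('E1', 'R250'), ('E1', 'R452'), ('E1', 'R131'),
         ('E3', 'R250'), ('E3', 'R131'), ('E3', 'R161'), ('E3', 'R452'),
         ('E2', 'R161'), ('E2', 'R101')]

routes_by_station = {}
for station, route in edges:
    routes_by_station.setdefault(station, set()).add(route)

# Station projection: stations are linked if a route serves both of them.
projection = {(a, b)
              for a, b in combinations(sorted(routes_by_station), 2)
              if routes_by_station[a] & routes_by_station[b]}
print(projection)
```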
Projection A represents the communication between stations through the flow of bus routes; projection B represents the possible interaction or "encounters" between bus routes through the stations they share. Exercise - Paths Create a graph with 5 nodes and 5 links. Pick any two nodes and print: 5 P...
Nodes = [1, 2, 3, 4, 5]
nEdges = 5
temp = []
for subset in itertools.combinations(Nodes, 2):
    temp.append(subset)
Edges = random.sample(temp, nEdges)
Edges

G = nx.Graph()
G.add_edges_from(Edges)
nx.draw(G, with_labels=True)
plt.show()

Grafo = {1: [], 2: [], 3: [], 4: [], 5: []}
...
Exercise - Components Download a real network (http://snap.stanford.edu/data/index.html) and read the file. Social circles from Facebook (anonymized) - Undirected http://snap.stanford.edu/data/egonets-Facebook.html
F = open("youtube.txt", 'r')
Net1 = nx.read_edgelist(F)
F.close()
print('La red tiene: ', nx.number_connected_components(Net1), ' componentes')
Implement the Breadth-First algorithm to find the number of components (check that the result matches the one obtained with the library).
Edges = Net1.edges()
len(Edges)

def netgen(nn, ne):
    nod = [i for i in range(nn)]
    nEdges = ne
    temp = []
    for subset in itertools.combinations(nod, 2):
        temp.append(subset)
    edg = random.sample(temp, nEdges)
    return edg, nod

G = nx.Graph()
edges, nodes = netgen(10, 7)
G.add_edges_from(edge...
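One way to satisfy the exercise is a BFS that starts a new traversal from every unvisited node and counts the starts; a self-contained sketch:

```python
from collections import deque

def count_components(edges, nodes):
    # Build an adjacency map, then BFS from every unvisited node;
    # each new start point marks one connected component.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen = set()
    components = 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for neighbor in adj[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
    return components

# {1,2,3}, {4,5} and the isolated node 6 give three components.
print(count_components([(1, 2), (2, 3), (4, 5)], [1, 2, 3, 4, 5, 6]))  # 3
```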
I know what the code above means, but I have to think about it. It was hard to understand because the test above, as repeated below, is so indirect. if i % 2 == 1: Compare that with the directness of the following test from cell #12. if significant_digits(value) == 5:
def prime_factors(x):
    divisor = 2
    while divisor < x:
        if x % divisor == 0:
            yield divisor
            x //= divisor
            continue
        divisor += 1
    yield x

for letter, value in roman_letter_values:
    print(letter, value, list(prime_factors(value)))

def significant_digits(x):
    ...
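For reference, a slightly restructured standalone version of the generator above (trial division only needs to run while divisor² ≤ x):

```python
def prime_factors(x):
    # Repeatedly divide out the smallest divisor; whatever remains is prime.
    divisor = 2
    while divisor * divisor <= x:
        if x % divisor == 0:
            yield divisor
            x //= divisor
        else:
            divisor += 1
    if x > 1:
        yield x

print(list(prime_factors(100)))  # [2, 2, 5, 5]
print(list(prime_factors(13)))   # [13]
```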
20171116-dojo-classification-of-letters-of-roman-numerals.ipynb
james-prior/cohpy
mit
Before starting to create our datasets, we will take a look at the SampleData class documentation, to discover the arguments of the class constructor. You can read it on the pymicro.core package API doc page, or print it interactively by executing `help(SD)`, or, if you are working with a Jupyter notebook, by ex...
data = SD(filename='my_first_dataset')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
That is it. The class has created a new HDF5/XDMF pair of files and associated the interface with this dataset to the variable data. No message has been returned by the code, so how can we know that the dataset has been created? When the name of the file is not an absolute path, the default behavior of the class is to c...
import os  # load python module to interact with operating system

cwd = os.getcwd()  # get current directory
file_list = os.listdir(cwd)  # get content of current work directory
print(file_list, '\n')

# now print only files that start with our dataset basename
print('Our dataset files:')
for file in file_list:
    if file...
The two files my_first_dataset.h5 and my_first_dataset.xdmf have indeed been created. If you want interactive prints about the dataset creation, you can set the verbose argument to True. This will activate the verbose mode of the class. When it is active, the class instance prints a lot of information about what it ...
data.set_verbosity(True)
Let us now close our dataset, and see if the class instance prints information about it:
del data
<div class="alert alert-info"> **Note** It is good practice to always delete your `SampleData` instances once you are done working with a dataset, or if you want to re-open it. As the class instance keeps files open as long as it exists, deleting it ensures that the files are properly closed. Otherwise, file m...
data = SD(filename='my_first_dataset', verbose=True)
You can see that the printed information states that the dataset file my_first_dataset.h5 has been opened, and not created. This second instantiation of the class has not created a new dataset, but instead, has opened the one that we have just closed. Indeed, in that case, we provided a filename that already existed. ...
del data
Overwriting datasets The overwrite_hdf5 argument of the class constructor, if set to True, will remove the filename dataset and create a new empty one if this dataset already exists:
data = SD(filename='my_first_dataset', verbose=True, overwrite_hdf5=True)
As you can see, the dataset files have been overwritten, as requested. We will now close our dataset again and continue to see the possibilities offered by the class constructor.
del data
Copying datasets One last thing that may be interesting to do with already existing dataset files is to create a new dataset that is a copy of them, associated with a new class instance. This is useful, for instance, when you want to try new processing on a set of valuable data without risking damage to the data. To d...
data2 = SD.copy_sample(src_sample_file='my_first_dataset',
                       dst_sample_file='dataset_copy', get_object=True)

cwd = os.getcwd()  # get current directory
file_list = os.listdir(cwd)  # get content of current work directory
print(file_list, '\n')

# now print only files that start with our dataset basename
print('Our dataset...
The copy_dataset HDF5 and XDMF files have indeed been created and are a copy of the my_first_dataset HDF5 and XDMF files. Note that copy_sample is a static method that can be called even without a SampleData instance. Note also that it has an overwrite argument that allows overwriting an already existing dst_sampl...
# set the autodelete argument to True
data2.autodelete = True
# Set the verbose mode on for copied dataset
data2.set_verbosity(True)
# Close copied dataset
del data2
The class destructor ends by printing a confirmation message of the dataset files' removal in verbose mode, as you can see in the cell above. Let us verify that they have effectively been deleted:
file_list = os.listdir(cwd)  # get content of current work directory
print(file_list, '\n')

# now print only files that start with our dataset basename
print('Our copied dataset files:')
for file in file_list:
    if file.startswith('dataset_copy'):
        print(file)
As you can see, the dataset files have been removed. Now we can also open and remove our first created dataset using the class constructor's autodelete option:
data = SD(filename='my_first_dataset', verbose=True, autodelete=True)
print(f'Is autodelete mode on ? {data.autodelete}')
del data

file_list = os.listdir(cwd)  # get content of current work directory
print(file_list, '\n')

# now print only files that start with our dataset basename
print('Our dataset files:')
for fi...
Now you know how to create or open SampleData datasets. Before starting to explore their content in detail, a last feature of the SampleData class must be introduced: the naming system and conventions used to create or access data items in datasets. <div class="alert alert-info"> **Note** Using the **autodelete** op...
from config import PYMICRO_EXAMPLES_DATA_DIR  # import file directory path
import os

dataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR,
                            'test_sampledata_ref')  # test dataset file path
data = SD(filename=dataset_file)
1- The Dataset Index As explained in the previous section, all data items have a Path, and an Indexname. The collection of Indexname/Path pairs forms the Index of the dataset. For each SampleData dataset, an Index Group is stored in the root Group, and the collection of those pairs is stored as attributes of this Index...
data.content_index
You should see the dictionary keys, which are the names of data items, and the associated values, which are HDF5 paths. Note that each data item's Name appears at the end of its Path. The data item aliases are also stored in a dictionary that is an attribute of the class, named aliases:
data.aliases
You can see that this dictionary only contains keys for data items that have additional names, and that those keys are the data items' indexnames. The dataset index can be printed together with the aliases, in a prettier form, by calling the print_index method:
data.print_index()
This method prints the content of the dataset Index, with a given depth and from a specific root. The depth is the number of parents that a data item has. The root Group thus has a depth of 0, its children a depth of 1, the children of its children a depth of 2, and so on. The local_root argument can be changed to p...
data.print_index(local_root="/test_image")
The print_index method's local_root argument needs the name of the Group whose children's Index must be printed. As explained in section II, you may use other identifiers than its Path for this. Let us try its Name (the last part of its path), which is test_image, or its Indexname, which is image:
data.print_index(local_root="test_image")
data.print_index(local_root="image")
As you can see, the result is the same in all three cases. Let us now try to print the dataset Index with a maximal data item depth of 2, using the max_depth argument:
data.print_index(max_depth=2)
Of course, you can combine those two arguments:
data.print_index(max_depth=2, local_root='mesh')
The print_index method is useful to get a glimpse of the content and organization of the whole dataset, or part of it, and to quickly see the short indexnames or aliases that you can use to refer to data items. To add aliases to data items or Groups, you can use the add_alias method. The Index allows you to quickly ...
data.print_dataset_content()
As you can see, this method prints, by increasing depth, detailed information on each Group and each data item of the dataset, with a maximum depth that can be specified with the max_depth argument (like the print_index method, it has a default value of 3). The printed output is structured by groups: each Group that has...
data.print_dataset_content(short=True)
This shorter print can be read easily, provides a complete and visual overview of the dataset organization, and indicates the memory size and type of each data item or Group in the dataset. The printed output distinguishes Group data items from Node data items. The latter regroup all types of arrays that may be stored i...
data.print_dataset_content(short=True, to_file='dataset_information.txt')

# Let us open the content of the created file, to see if the dataset
# information has been written in it:
%cat dataset_information.txt
<div class="alert alert-info"> **Note** The string representation of the *SampleData* class is composed of a first part, which is the output of the `print_index` method, and a second part, which is the output of the `print_dataset_content` method (short output). </div>
# SampleData string representation:
print(data)
Now you know how to get a detailed overview of the dataset content. However, with large datasets that may have a complex internal organization (many Groups, lots of data items and metadata...), the print_dataset_content return string can become very large. In this case, it becomes cumbersome to look for specific infor...
# Method called with data item indexname, and short output
data.print_node_info(nodename='image', short=True)

# Method called with data item Path and long output
data.print_node_info(nodename='/test_image', short=False)
You can observe that this method prints the same block of information as the one that appeared in the print_dataset_content output for the description of the test_image group. From this block, we learn that this Group is a child of the root Group ('/'), and that it has two children that are the data items ...
data.print_node_info('test_alias')
Here, we can learn which Node is the parent, what the node Name is, see that it has no attributes, that it is an array of shape (51,), that it is stored without data compression (compression level 0), and that it occupies 64 Kb of disk space. The print_node_info method is useful to get information on a specif...
data.print_group_content(groupname='test_mesh')
Obviously, this method is identical to the print_dataset_content method, but restricted to one Group. Like the former, it has to_file, short, and max_depth arguments. These arguments work just as for the print_dataset_content method, hence their use is not detailed here. However, you may see one difference her...
data.print_group_content('test_mesh', recursive=True)
As you can see, the information on the children of the Geometry group has been printed. Note that the max_depth argument is treated by this method as an absolute depth, meaning that you have to specify a depth that is at least the depth of the target group to see any output printed for the group content. The defa...
data.print_grids_info()
This method also has the to_file and short arguments of the print_dataset_content method:
data.print_grids_info(short=True, to_file='dataset_information.txt')
%cat dataset_information.txt
6- Get XDMF tree content As explained in the first Notebook of this User Guide, these grid Groups and associated data are stored in a dual format by the SampleData class. This dual format is composed of the dataset HDF5 file and an associated XDMF file containing metadata describing Grid group topology, data types a...
data.xdmf_tree
The XDMF file is synchronized with the in-memory xdmf_tree attribute when calling the sync method, or when deleting the SampleData instance. However, you may want to look at the content of the XDMF tree while you are interactively using your SampleData instance. In this case, you can use the print_xdmf method:
data.print_xdmf()
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
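Under the hood the XDMF file is plain XML, so its structure can also be inspected with any XML library. The sketch below is independent of SampleData: it builds a made-up, XDMF-like document (the tag names follow the XDMF format, but the grid itself is invented for illustration) and walks it with Python's standard library, producing an indented listing in the spirit of print_xdmf:

```python
import xml.etree.ElementTree as ET

# A minimal XDMF-like document, standing in for the file SampleData writes.
# The grid below is illustrative only, not the actual test_mesh content.
xdmf_text = """<Xdmf Version="2.2">
  <Domain>
    <Grid Name="test_mesh" GridType="Uniform">
      <Topology TopologyType="Triangle" NumberOfElements="8"/>
      <Geometry GeometryType="XYZ"/>
    </Grid>
  </Domain>
</Xdmf>"""

tree = ET.ElementTree(ET.fromstring(xdmf_text))

def print_tree(element, depth=0):
    """Recursively print tags and attributes, one indentation level per depth."""
    print('  ' * depth + element.tag, element.attrib)
    for child in element:
        print_tree(child, depth + 1)

print_tree(tree.getroot())
```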
As you can observe, you get a print of the content of the XDMF file that would be written if you closed the file right now. The XDMF file provides information on the grids that matches the Groups and Nodes attributes printed above by the previously studied methods: the test ...
data.get_node_disk_size(nodename='test_array')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
As you can see, the default behavior of this method is to print a message indicating the Node disk size, but also to return a tuple containing the value of the disk size and its unit. If you want to print data in bytes, you may call this method with the convert argument set to False:
data.get_node_disk_size(nodename='test_array', convert=False)
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
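The convert argument presumably applies a bytes-to-human-readable conversion before returning. The helper below is a hypothetical sketch of such a conversion, not the actual SampleData implementation; it uses 1 Kb = 1024 bytes, which is consistent with the 64 Kb figure reported above for the 65536-byte array:

```python
def convert_bytes(size_in_bytes):
    """Hypothetical equivalent of what get_node_disk_size(convert=True)
    returns: a (value, unit) pair, stepping up a unit every factor of 1024."""
    units = ['bytes', 'Kb', 'Mb', 'Gb', 'Tb']
    size = float(size_in_bytes)
    for unit in units:
        if size < 1024.0 or unit == units[-1]:
            return round(size, 3), unit
        size /= 1024.0

print(convert_bytes(65536))  # -> (64.0, 'Kb')
print(convert_bytes(500))    # -> (500.0, 'bytes')
```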
If you want to use this method to get a numerical value within a script, but do not want the class to print anything, you can use the print_flag argument:
size, unit = data.get_node_disk_size(nodename='test_array', print_flag=False) print(f'Printed by script: node size is {size} {unit}') size, unit = data.get_node_disk_size(nodename='test_array', print_flag=False, convert=False) print(f'Printed by script: node size is {size} {unit}')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
The disk size of the whole HDF5 file can also be printed/returned, using the get_file_disk_size method, that has the same print_flag and convert arguments:
data.get_file_disk_size() size, unit = data.get_file_disk_size(convert=False, print_flag=False) print(f'\nPrinted by script: file size is {size} {unit}')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
8- Get nodes/groups attributes (metadata) Another central aspect of the SampleData class is the management of metadata, that can be attached to all Groups or Nodes of the dataset. Metadata comes in the form of HDF5 attributes, that are Name/Value pairs, and that we already encountered when exploring the outputs of meth...
data.print_node_attributes(nodename='test_mesh')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
As you can see, this method prints a list of all data item attributes, with the format * Name : Value \n. It allows you to quickly see what attributes are stored together with a given data item, and their values. If you want to get the value of a specific attribute, you can use the get_attribute method. It takes two a...
Nnodes = data.get_attribute(attrname='number_of_nodes', nodename='test_mesh') print(f'The mesh test_mesh has {Nnodes} nodes')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
You can also get all attributes of a data item as a dictionary. In this case, you just need to specify the name of the data item from which you want attributes, and use the get_dic_from_attributes method:
mesh_attrs = data.get_dic_from_attributes(nodename='test_mesh') for name, value in mesh_attrs.items(): print(f' Attribute {name} is {value}')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
We have now seen how to explore all the types of information that a SampleData dataset may contain, individually or all together, interactively, from a Python console. Let us now review how to explore the content of SampleData datasets with external software. IV - Visualize dataset contents with Vitables All the inform...
# uncomment to test # data.pause_for_visualization(Vitables=True, Vitables_path='Path_to_Vitables_executable')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
Please refer to the Vitables documentation, which can be downloaded here https://sourceforge.net/projects/vitables/files/ViTables-3.0.0/, to learn how to browse through your HDF5 file. The Vitables software is very intuitive; you will see that it provides a useful and convenient tool to explore your SampleData datas...
# Like for Vitables --> uncomment to test # data.pause_for_visualization(Paraview=True, Paraview_path='Path_to_Paraview_executable')
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
<div class="alert alert-info"> **Note** **It is recommended to use a recent version of the Paraview software to visualize SampleData datasets (>= 5.0).** When opening the XDMF file, Paraview may ask you to choose a specific file reader. It is recommended to choose the **XDMF_reader**, and not the **Xdmf3ReaderT**, ...
del data # raw output of H5ls --> prints the childrens of the file root group !h5ls ../data/test_sampledata_ref.h5 # recursive output of h5ls (-r option) --> prints all data items !h5ls -r ../data/test_sampledata_ref.h5 # recursive (-r) and detailed (-d) output of h5ls --> also print the content of the data array...
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
As you can see if you uncommented and executed this cell, h5dump prints a fully detailed description of your dataset: organization, data types, item names and paths, and item content (the values stored in arrays). As it produces a very large output, it may be convenient to write its output to a file:
# !h5dump ../data/test_sampledata_ref.h5 > test_dump.txt # !cat test_dump.txt
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
You can also use ptdump, the command-line tool of the Pytables package. It also takes the HDF5 file as argument, and has two options: the verbose mode -v and the detailed mode -d:
# uncomment to test ! # !ptdump ../data/test_sampledata_ref.h5 # uncomment to test! # !ptdump -v ../data/test_sampledata_ref.h5 # uncomment to test ! # !ptdump -d ../data/test_sampledata_ref.h5
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
heprom/pymicro
mit
Exercise 4.1 What is the relation between Age and Income? For a one percent increase in Age, by how much does the Income increase? Using sklearn, estimate a linear regression and predict the Income when the Age is 30 and 40 years.
income.plot(x='Age', y='Income', kind='scatter')
exercises/E4-Regression-Linear&Logistic.ipynb
albahnsen/PracticalMachineLearningClass
mit
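A minimal sketch of the fit the exercise asks for. Since the exercise's income DataFrame is not reproduced here, it uses synthetic Age/Income data as a stand-in; the normal-equation solve below produces the same coefficients that sklearn's LinearRegression().fit(X, y) would:

```python
import numpy as np

# Synthetic stand-in for the exercise's `income` DataFrame:
# Income roughly linear in Age plus noise.
rng = np.random.default_rng(0)
age = rng.uniform(20, 65, size=200)
income_vals = 1.5 * age + 10 + rng.normal(0, 2, size=200)

# Ordinary least squares via the normal equations, equivalent to
# sklearn.linear_model.LinearRegression on the same design matrix.
X = np.column_stack([np.ones_like(age), age])  # intercept + Age
intercept, slope = np.linalg.solve(X.T @ X, X.T @ income_vals)

# Predict Income at Age 30 and 40, as the exercise asks.
for a in (30, 40):
    print(f'Age {a}: predicted income {intercept + slope * a:.2f}')
```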
Exercise 4.2 Evaluate the model using the MSE Exercise 4.3 Run a regression model using as features the Age and Age$^2$ using the OLS equations Exercise 4.4 Estimate a regression using more features. How does the performance compare to using only the Age? Part 2: Logistic Regression Customer Churn: losing/attrition of t...
# Download the dataset data = pd.read_csv('https://github.com/ghuiber/churn/raw/master/data/churn.csv') data.head()
exercises/E4-Regression-Linear&Logistic.ipynb
albahnsen/PracticalMachineLearningClass
mit
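Exercises 4.2 and 4.3 can be sketched together: build a design matrix with Age and Age$^2$, solve the OLS normal equations $\beta = (X'X)^{-1}X'y$, and score the fit with the MSE. Again the real exercise data is not reproduced here, so synthetic data with a genuine quadratic component stands in:

```python
import numpy as np

# Synthetic stand-in for the exercise data, with a quadratic Age effect.
rng = np.random.default_rng(1)
age = rng.uniform(20, 65, size=300)
income_vals = 0.05 * age**2 + 0.5 * age + 5 + rng.normal(0, 2, size=300)

# Design matrix with intercept, Age and Age^2, solved via the OLS
# normal equations beta = (X'X)^-1 X'y.
X = np.column_stack([np.ones_like(age), age, age**2])
beta = np.linalg.solve(X.T @ X, X.T @ income_vals)

# Mean squared error of the fit (the metric of Exercise 4.2).
pred = X @ beta
mse = np.mean((income_vals - pred)**2)
print('coefficients:', beta)
print('MSE:', mse)
```

With enough data the recovered quadratic coefficient should land close to the 0.05 used to generate it, and the MSE should approach the noise variance.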
"Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. For now take a look at the affine_rel...
def affine_relu_forward(x, w, b):
    """
    Convenience layer that performs an affine transform followed by a ReLU

    Inputs:
    - x: Input to the affine layer
    - w, b: Weights for the affine layer

    Returns a tuple of:
    - out: Output from the ReLU
    - cache: Object to give to the backward pass
    """
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    cache = (fc_cache, relu_cache)
    return out, cache
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
arasdar/DL
unlicense
Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After do...
model = TwoLayerNet() solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves at least # # 50% accuracy on the validation set. # ##############################################...
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
arasdar/DL
unlicense
As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
# TODO: Use a three-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 1e-2 learning_rate = 1e-4 model = FullyConnectedNet([100, 100], ...
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
arasdar/DL
unlicense
Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
# TODO: Use a five-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 1e-3 weight_scale = 1e-5 model = FullyConnectedNet([100, 100, 100, 100],...
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
arasdar/DL
unlicense
Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net? Answer: No Update rules So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. W...
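As a reference, here is a self-contained numpy sketch of the momentum rule following the convention the sanity check's reference values encode (v = momentum * v - learning_rate * dw; w += v). It is written for illustration; the version the assignment grades lives in cs231n/optim.py:

```python
import numpy as np

def sgd_momentum(w, dw, config=None):
    """SGD with momentum: keep a velocity that accumulates past gradients.

    Update rule: v = momentum * v - learning_rate * dw;  w = w + v
    """
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    v = config['momentum'] * v - config['learning_rate'] * dw
    next_w = w + v

    config['velocity'] = v  # persist velocity for the next update
    return next_w, config
```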
from cs231n.optim import sgd_momentum N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-3, 'velocity': v} next_w, _ = sgd_momentum(w, dw, config=config) expected_next_w = np.a...
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
arasdar/DL
unlicense
Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require...
best_model = None ################################################################################ # TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might # # batch normalization and dropout useful. Store your best model in the # # best_model variable. ...
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
arasdar/DL
unlicense
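A common way to approach this search is random sampling of hyperparameters on a log scale. The sketch below shows only the search loop; the scoring function is a toy stand-in (a real run would construct a FullyConnectedNet and a Solver there and read off validation accuracy), and its peak location is invented for illustration:

```python
import math
import random

random.seed(0)

def sample_log_uniform(lo, hi):
    """Sample on a log scale -- the usual way to search learning rates."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def fake_val_accuracy(lr, ws):
    """Toy stand-in for training a net and reading validation accuracy.
    Peaks near lr=1e-3, ws=1e-2 (an invented optimum for this demo)."""
    return 1.0 / (1 + (math.log10(lr) + 3) ** 2 + (math.log10(ws) + 2) ** 2)

best = None
for _ in range(20):
    lr = sample_log_uniform(1e-5, 1e-1)
    ws = sample_log_uniform(1e-4, 1e-1)
    acc = fake_val_accuracy(lr, ws)
    if best is None or acc > best[0]:
        best = (acc, lr, ws)

print('best score %.3f at lr=%.2e, weight_scale=%.2e' % best)
```

Sampling exponents uniformly (rather than the values themselves) spreads trials evenly across orders of magnitude, which matters because learning rates typically span several decades.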
Graph regularization for document classification using natural graphs <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_mlp_cora"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on Tensor...
!pip install --quiet neural-structured-learning
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Dependencies and imports
import neural_structured_learning as nsl import tensorflow as tf # Resets notebook state tf.keras.backend.clear_session() print("Version: ", tf.__version__) print("Eager mode: ", tf.executing_eagerly()) print( "GPU is", "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Cora dataset The Cora dataset is a citation graph where nodes represent machine learning papers and edges represent citations between pairs of papers. The task involved is document classification where the goal is to categorize each paper into one of 7 categories. In other words, this is a multi-class classification pr...
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz !tar -C /tmp -xvzf /tmp/cora.tgz
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Convert the Cora data to the NSL format In order to preprocess the Cora dataset and convert it to the format required by Neural Structured Learning, we will run the 'preprocess_cora_dataset.py' script, which is included in the NSL github repository. This script does the following: Generate neighbor features using the ...
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py !python preprocess_cora_dataset.py \ --input_cora_content=/tmp/cora/cora.content \ --input_cora_graph=/tmp/cora/cora.cites \ --max_nbrs=5 \ --output_train_...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Global variables The file paths to the train and test data are based on the command line flag values used to invoke the 'preprocess_cora_dataset.py' script above.
### Experiment dataset TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr' TEST_DATA_PATH = '/tmp/cora/test_examples.tfr' ### Constants used to identify neighbor features in the input. NBR_FEATURE_PREFIX = 'NL_nbr_' NBR_WEIGHT_SUFFIX = '_weight'
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Hyperparameters We will use an instance of HParams to include various hyperparameters and constants used for training and evaluation. We briefly describe each of them below: num_classes: There are a total of 7 different classes max_seq_length: This is the size of the vocabulary and all instances in the input have ...
class HParams(object): """Hyperparameters used for training.""" def __init__(self): ### dataset parameters self.num_classes = 7 self.max_seq_length = 1433 ### neural graph learning parameters self.distance_type = nsl.configs.DistanceType.L2 self.graph_regularization_multiplier = 0.1 self...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Load train and test data As described earlier in this notebook, the input training and test data have been created by the 'preprocess_cora_dataset.py'. We will load them into two tf.data.Dataset objects -- one for train and one for test. In the input layer of our model, we will extract not just the 'words' and the 'lab...
def make_dataset(file_path, training=False): """Creates a `tf.data.TFRecordDataset`. Args: file_path: Name of the file in the `.tfrecord` format containing `tf.train.Example` objects. training: Boolean indicating if we are in training mode. Returns: An instance of `tf.data.TFRecordDataset` con...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Let's peek into the train dataset to look at its contents.
for feature_batch, label_batch in train_dataset.take(1): print('Feature list:', list(feature_batch.keys())) print('Batch of inputs:', feature_batch['words']) nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words') nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX) print('Bat...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Let's peek into the test dataset to look at its contents.
for feature_batch, label_batch in test_dataset.take(1): print('Feature list:', list(feature_batch.keys())) print('Batch of inputs:', feature_batch['words']) print('Batch of labels:', label_batch)
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Model definition In order to demonstrate the use of graph regularization, we build a base model for this problem first. We will use a simple feed-forward neural network with 2 hidden layers and dropout in between. We illustrate the creation of the base model using all model types supported by the tf.Keras framework -- ...
def make_mlp_sequential_model(hparams): """Creates a sequential multi-layer perceptron model.""" model = tf.keras.Sequential() model.add( tf.keras.layers.InputLayer( input_shape=(hparams.max_seq_length,), name='words')) # Input is already one-hot encoded in the integer format. We cast it to # ...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Functional base model
def make_mlp_functional_model(hparams): """Creates a functional API-based multi-layer perceptron model.""" inputs = tf.keras.Input( shape=(hparams.max_seq_length,), dtype='int64', name='words') # Input is already one-hot encoded in the integer format. We cast it to # floating point format here. cur_lay...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Subclass base model
def make_mlp_subclass_model(hparams): """Creates a multi-layer perceptron subclass model in Keras.""" class MLP(tf.keras.Model): """Subclass model defining a multi-layer perceptron.""" def __init__(self): super(MLP, self).__init__() # Input is already one-hot encoded in the integer format. We ...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Create base model(s)
# Create a base MLP model using the functional API. # Alternatively, you can also create a sequential or subclass base model using # the make_mlp_sequential_model() or make_mlp_subclass_model() functions # respectively, defined above. Note that if a subclass model is used, its # summary cannot be generated until it is ...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0