First, tf.log computes the logarithm of each element of y. Next, we multiply each element of y_ with the corresponding element of tf.log(y). Then, because of reduction_indices=[1], tf.reduce_sum adds the elements in the second dimension of y. Finally, tf.reduce_mean computes the mean over all the examples in the batch. Note that in the source code we don't use this formulation, because it is numerically unstable. Instead, we apply tf.nn.softmax_cross_entropy_with_logits to the unnormalized logits (i.e., we call softmax_cross_entropy_with_logits on tf.matmul(x, W) + b), because this more numerically stable function internally computes the softmax activation. In your code...
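The arithmetic described above can be checked outside TensorFlow. Below is a small NumPy sketch of the same formula; the y and y_ values are made up for illustration:

```python
import numpy as np

# Hypothetical softmax outputs for a batch of 2 examples over 3 classes
y = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
# One-hot true labels
y_ = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

# -sum(y_ * log(y)) over the class dimension, then the mean over the batch
cross_entropy = np.mean(-np.sum(y_ * np.log(y), axis=1))
print(cross_entropy)
```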
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
mnist/MNIST_For_ML_Beginners.ipynb
ling7334/tensorflow-get-started
apache-2.0
Here we ask TensorFlow to minimize the cross entropy using the gradient descent algorithm with a learning rate of 0.5. Gradient descent is a simple procedure in which TensorFlow shifts each variable a little bit in the direction that reduces the cost. TensorFlow also provides many other optimization algorithms: using one is as simple as tweaking a single line of code. What TensorFlow actually does here, behind the scenes, is add new operations to the graph describing your computation, implementing backpropagation and gradient descent. It then hands you back a single operation which, when run, trains your model with gradient descent, slightly adjusting your variables to keep reducing...
sess = tf.InteractiveSession()
First we have to add an operation to initialize the variables we created:
tf.global_variables_initializer().run()
Then we start training the model. Here we run the training step 1000 times!
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
In each step of the loop, we grab a random batch of 100 data points from the training set, then run train_step, feeding in the batch data to replace the placeholders. Using small batches of random data is called stochastic training — in this case, stochastic gradient descent. Ideally we would use all of our data at every step of training, because that gives better results, but that is computationally expensive. Using a different subset each time is cheap and still captures most of the overall structure of the dataset. Evaluating our model So how well does our model do? First, let's figure out where we predicted the correct label. tf.argmax is an extremely useful function that gives the index of the highest entry in a tensor...
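The batching step itself is easy to see in isolation. A NumPy sketch of grabbing a random batch of 100 rows (the array here is a stand-in for the MNIST training set):

```python
import numpy as np

data = np.arange(1000).reshape(500, 2)  # 500 fake examples, 2 features each

def next_batch(data, batch_size):
    # sample batch_size distinct rows uniformly at random
    idx = np.random.choice(len(data), size=batch_size, replace=False)
    return data[idx]

batch = next_batch(data, 100)
print(batch.shape)  # (100, 2)
```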
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
That gives us a list of booleans. To determine what fraction are correct, we cast them to floating point numbers and then take the mean. For example, [True, False, True, True] becomes [1, 0, 1, 1], which averages to 0.75.
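The same bookkeeping in NumPy, reproducing the example from the text:

```python
import numpy as np

correct_prediction = np.array([True, False, True, True])
accuracy = np.mean(correct_prediction.astype(np.float32))
print(accuracy)  # 0.75
```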
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Finally, we compute the accuracy of our trained model on the test data.
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Euler's method Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation $$ \frac{dy}{dx} = f(y(x), x) $$ with the initial condition: $$ y(x_0)=y_0 $$ Euler's method performs updates using the equations: $$ y_{n+1} = y_n + h f(y_n,x_n) $$ $$ h = x_{n+1}...
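Since the notebook's own solve_euler is truncated here, a minimal self-contained sketch of the update rule above (my reading of it, not the notebook's solution):

```python
import numpy as np

def solve_euler(derivs, y0, x):
    """Integrate dy/dx = derivs(y, x) over the grid x with Euler steps."""
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n+1] - x[n]
        y[n+1] = y[n] + h * derivs(y[n], x[n])
    return y

# Check against dy/dx = y, y(0) = 1, whose exact solution is exp(x)
x = np.linspace(0, 1, 101)
y = solve_euler(lambda y, x: y, 1.0, x)
print(y[-1])  # close to e, with a small first-order error
```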
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list...
assignments/assignment10/ODEsEx01.ipynb
brettavedisian/phys202-2015-work
mit
The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation: $$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$ Write a function solve_midpoint that implements the midpoint met...
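Again, a hedged sketch of the update equation, since the notebook's solve_midpoint is truncated here:

```python
import numpy as np

def solve_midpoint(derivs, y0, x):
    """Integrate dy/dx = derivs(y, x) with the midpoint (RK2) update rule."""
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n+1] - x[n]
        k = derivs(y[n], x[n])
        y[n+1] = y[n] + h * derivs(y[n] + 0.5*h*k, x[n] + 0.5*h)
    return y

# Same test problem as before: dy/dx = y, y(0) = 1
x = np.linspace(0, 1, 101)
y = solve_midpoint(lambda y, x: y, 1.0, x)
print(y[-1])  # much closer to e than Euler at the same step size
```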
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarr...
In the following cell you are going to solve the above ODE using four different algorithms: Euler's method Midpoint method odeint Exact Here are the details: Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$). Define the derivs function for the above differential equation. Using the...
x = np.linspace(0, 1, 11)

def derivs(y, x):
    return x + 2*y

plt.figure(figsize=(10, 9))
euler_diff = abs(solve_euler(derivs, 0, x) - solve_exact(x))
midpt_diff = abs(solve_midpoint(derivs, 0, x) - solve_exact(x))
ode_diff = np.empty(len(solve_euler(derivs, 0, x)))
for i in range(len(solve_euler(derivs, 0, x))):
    ode_diff[i] = abs(odein...
Its functionality is built around np.array, which returns the array object on which all the library's functions operate.
a = np.array([1, 2, 3])
print(repr(a), a.shape, end="\n\n")

b = np.array([(1, 2, 3), (4, 5, 6)])
print(repr(b), b.shape)
2020/02-python-bibliotecas-manipulacao-dados/Numpy.ipynb
InsightLab/data-science-cookbook
mit
The array comes with many operators already implemented:
print(b.T, end="\n\n")   # transposes a matrix
print(a + b, end="\n\n")  # adds a row/column vector to every row/column of a matrix
print(b - a, end="\n\n")  # subtracts a row/column vector from every row/column of a matrix
# multiplies the elements of a row/column vector
# by all the elements of the ...
Numpy ships with many mathematical operations, which can be applied to a single value or to an array of values. Note: we can think of applying these functions as a map-style transformation.
print(10*np.sin(1))  # sine of 1
print(10*np.sin(a))  # sine of each element of a
A boolean operation can be applied to every element of an array, returning an array of the same shape holding the result of the operation.
b<35
There are also predefined utility operations on arrays.
print(b, end="\n\n")
print('Axis 1: %s' % b[0], end="\n\n")    # returns a vector
print(np.average(b), end="\n\n")          # mean of all elements
print(np.average(b, axis=1), end="\n\n")  # mean of the vectors along axis 1
print(b.sum(), end="\n\n")                # sum of all values
print(b.sum(axis=1), end="\n\n...
There are also functions for generating pre-initialized arrays.
print(np.zeros((3, 5)), end="\n\n")                  # array of zeros with shape [3,5]
print(np.ones((2,3,4)), end="\n\n------------\n\n")  # array of ones with shape [2,3,4]
print(np.full((2, 2), 10), end="\n\n")               # array of tens with shape [2,2]
print(np.arange(10, 30, 5), end="\n\n")              # values from 10 to 30 with step 5
print(np.ran...
We can select ranges of an array, which lets us retrieve just a portion of it.
d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
d
d[:, 0]    # all rows (:) of the first column (0)
d[:, 1]    # all rows (:) of the second column (1)
d[:, 0:2]  # all rows (:) of columns 0 and 1
d[:, 2]    # all rows (:) of the third column (2)
Numpy also has functions for saving/loading arrays to/from files.
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
np.save('/tmp/x.npy', x)
del(x)
x = np.load('/tmp/x.npy')
print(x)
Travelling salesman problem Last week we covered several different exercises including a solver for a simplified version of the travelling salesman problem. Instead of finding a cycle we only find the optimal path, there is no requirement to return to the starting point. This problem has numerous applications and we ma...
coords = [(0,0), (10,5), (10,10), (5,10), (3,3), (3,7), (12,3), (10,11)]

for a, b in zip(coords[:-1], coords[1:]):
    print(a, b)

def distance(coords):
    distance = 0
    for p1, p2 in zip(coords[:-1], coords[1:]):
        distance += ((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2) ** 0.5
    return distance
...
Wk02/Wk02-Development_wl.ipynb
beyondvalence/biof509_wtl
mit
Currently, the only way to reuse this code would be to copy and paste it. Our first improvement would be to convert it into a function.
def distance(coords):
    distance = 0
    for p1, p2 in zip(coords[:-1], coords[1:]):
        distance += ((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2) ** 0.5
    return distance

def find_best_route(coords):
    best_distance = distance(coords)
    best = coords
    for option in itertools.permutations(coords, len(co...
These functions are now much easier to reuse in this notebook. The next step is to move them out into their own file.
# use type for windows equivalent to cat in unix
!type tsp_solver.py

import tsp_solver

best_distance, best_coords = tsp_solver.find_best_route(coords)
best_x = np.array([x[0] for x in best_coords])
best_y = np.array([y[1] for y in best_coords])
plt.plot(best_x, best_y, 'bo-')
plt.show()
Documentation At the moment the only way to understand how to use our code is to read it all. There is no indication of how the input variables should be structured or what the function returns. If we wanted to reuse this code in the future we would waste a significant amount of time simply re-familiarizing ourselves w...
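As an illustration (not the repository's actual tsp_solver_doc.py), here is the kind of numpydoc-style docstring find_best_route could carry; introspection tools read this __doc__ text:

```python
def find_best_route(coords):
    """Find the shortest route visiting every point once.

    Parameters
    ----------
    coords : list of (x, y) tuples
        The points to visit.

    Returns
    -------
    tuple
        (best_distance, best_coords), the length of the best route and
        the points in visiting order.
    """
    ...  # implementation elided; only the documentation matters here

# The `?` introspection in IPython displays exactly this __doc__ attribute
print(find_best_route.__doc__.splitlines()[0])
```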
!type tsp_solver_doc.py

import tsp_solver_doc

tsp_solver_doc.find_best_route?
To find the global minimum our approach will need to explore further than just the local minimum. We will discuss two different algorithms: Simulated annealing Genetic algorithm There are a number of packages available for both simulated annealing (SimulatedAnnealing, simanneal) and genetic algorithm (pyevolve, DEAP)...
def new_path(existing_path):
    path = existing_path[:]
    point = random.randint(0, len(path)-2)
    path[point+1], path[point] = path[point], path[point+1]
    # print(point)
    return path

print(coords)
print(new_path(coords))
Simulated Annealing With both of our helper functions written, we can now implement the first of these two algorithms.
# generate a long random walk through the search space, with randomness
# (controlled by temp_factor) so the search can escape local minima
def simulated_annealing_optimizer(starting_path, cost_func, new_path_func, start_temp, min_temp, steps):
    current_path = starting_path[:]
    current_cost = cost_func(curre...
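Since the implementation above is cut off, here is a minimal, self-contained sketch of the same idea — my own simplified signature, not the notebook's simulated_annealing_optimizer:

```python
import math
import random

def simulated_annealing(start, cost, neighbour, start_temp, min_temp, alpha=0.99, seed=0):
    """Minimise cost() with random neighbour moves, accepting some uphill
    moves (probability exp(-delta/temp)) so the search can escape local minima."""
    random.seed(seed)
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    temp = start_temp
    while temp > min_temp:
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temp *= alpha  # geometric cooling schedule
    return best, best_cost

# Toy example: minimise (x - 3)^2 over the integers
best, best_cost = simulated_annealing(
    start=20,
    cost=lambda x: (x - 3) ** 2,
    neighbour=lambda x: x + random.choice([-1, 1]),
    start_temp=10.0,
    min_temp=1e-3)
print(best, best_cost)
```

The geometric cooling (`temp *= alpha`) is one common choice; the notebook's version appears to use a `temp_factor` to the same effect.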
Exercises Copy the simulated annealing and genetic algorithm functions out into their own files. Add documentation to both the files and the functions. Our genetic algorithm function currently only uses recombination. As we saw from the simulated annealing approach mutation is also a powerful tool in locating the opti...
!ls -l
prepare data
raw_data = pd.read_csv('ex1data2.txt', names=['square', 'bedrooms', 'price'])
data = general.normalize_feature(raw_data)
print(data.shape)
data.head()

X_data = general.get_X(data)
print(X_data.shape, type(X_data))

y_data = general.get_y(data).reshape(len(X_data), 1)  # special treatment for tensorflow input data
pri...
ex1-linear regression/4- tensoflow batch gradient decent.ipynb
icrtiou/coursera-ML
mit
run the tensorflow graph with several optimizers
epoch = 2000
alpha = 0.01

optimizer_dict = {'GD': tf.train.GradientDescentOptimizer,
                  'Adagrad': tf.train.AdagradOptimizer,
                  'Adam': tf.train.AdamOptimizer,
                  'Ftrl': tf.train.FtrlOptimizer,
                  'RMS': tf.train.RMSPropOptimizer}

results = []
for nam...
plot them all
fig, ax = plt.subplots(figsize=(16, 9))

for res in results:
    loss_data = res['loss']
    # print('for optimizer {}'.format(res['name']))
    # print('final parameters\n', res['parameters'])
    # print('final loss={}\n'.format(loss_data[-1]))
    ax.plot(np.arange(len(loss_data)), loss_data, label=res['name']...
The dict_ object holds all of the texts associated with their corresponding class. In the pre-processing step we will perform a few operations: hashtags, user mentions, and links will be removed from the texts and kept in separate lists to be used later; we will also eliminate stopwords, symbols of p...
from unicodedata import normalize, category
from nltk.tokenize import regexp_tokenize
from collections import Counter, Set
from nltk.corpus import stopwords
import re

def pre_process_text(text):
    # Regular expression to extract patterns from the text. The following are
    # recognized (in order; the | symbol separates patterns): ...
Projects/01_Projeto_HillaryTrump_Twitter.ipynb
adolfoguimaraes/machinelearning
mit
Hillary's most frequent bigrams and trigrams
# Get the most frequent bigrams and trigrams
from nltk.collocations import BigramCollocationFinder, TrigramCollocationFinder
from nltk.metrics import BigramAssocMeasures, TrigramAssocMeasures

bcf = BigramCollocationFinder.from_words(words_h)
tcf = TrigramCollocationFinder.from_words(words_h)
bcf.apply_freq_filter(3)...
Trump's most frequent bigrams and trigrams
bcf = BigramCollocationFinder.from_words(words_t)
tcf = TrigramCollocationFinder.from_words(words_t)
bcf.apply_freq_filter(3)
tcf.apply_freq_filter(3)

result_bi = bcf.nbest(BigramAssocMeasures.raw_freq, 5)
result_tri = tcf.nbest(TrigramAssocMeasures.raw_freq, 5)

trump_frequent_bitrigram = []
for r in result_bi:
    ...
Building a bag of words
# Each token is concatenated into a single string representing one tweet
# Each class is assigned to a vector (hillary, trump)
# Instances: [t1, t2, t3, t4]
# Classes: [c1, c2, c3, c4]

all_tweets = []
all_class = []

for t in all_texts:
    all_tweets.append(t['text'])
    all_class.append(t['class_'])

print("Criar o...
By the end of this process, our dataset is split into two variables: X and y. X is the bag of words: each row corresponds to one tweet and each column to one word in the dataset's vocabulary. Each row/column cell holds a value corresponding to the number of times that...
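The X/y layout described above can be sketched with plain Python. This uses a toy three-tweet corpus; the real notebook builds X from all_tweets:

```python
from collections import Counter

# A toy corpus standing in for the tweets
tweets = ["make america great", "stronger together", "great together"]
classes = ["trump", "hillary", "hillary"]

# Vocabulary: every word in the dataset, one column per word
vocab = sorted({w for t in tweets for w in t.split()})

# X: one row per tweet, each cell = how many times that word occurs in the tweet
X = [[Counter(t.split())[w] for w in vocab] for t in tweets]
y = classes

print(vocab)
print(X)
```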
# Test the models from here on
Attention: the following tasks will be released after the first part has been handed in, so you don't need to submit what is asked below. Once the due date has passed, I will make the complete notebook available. Nevertheless, feel free to do the next task as a learning exercise. It's good practice ;) Using...
hillary_frequent_hashtags = nltk.FreqDist(hashtags_h).most_common(10)
trump_frequent_hashtags = nltk.FreqDist(hashtags_t).most_common(10)

dict_web = {
    'hillary_information': {
        'frequent_terms': hillary_frequent_terms,
        'frequent_bitrigram': hillary_frequent_bitrigram,
        'frequent_hashtags': hi...
Create profile points Now, we want to create 3 profiles that are the input for the profile curves. The wing should have one curve at its root, one at its outer end and one at the tip of a winglet.
# list of points on NACA2412 profile
px = [1.000084, 0.975825, 0.905287, 0.795069, 0.655665, 0.500588, 0.34468, 0.203313, 0.091996, 0.022051, 0.0, 0.026892, 0.098987, 0.208902, 0.346303, 0.499412, 0.653352, 0.792716, 0.90373, 0.975232, 0.999916]
py = [0.001257, 0.006231, 0.019752, 0.03826, 0.057302, 0.072381, 0.079198,...
examples/python/notebooks/geometry_wing.ipynb
DLR-SC/tigl
apache-2.0
Build the profile curves Now, let's build the profile curves using tigl3.curve_factories.interpolate_points, as done in the Airfoil example.
curve1 = tigl3.curve_factories.interpolate_points(points_c1)
curve2 = tigl3.curve_factories.interpolate_points(points_c2)
curve3 = tigl3.curve_factories.interpolate_points(points_c3)
Create the surface The final surface is created with the B-spline interpolation from the tigl3.surface_factories package. If you want, uncomment the second line and play around with the curve parameters, especially the second value. What influence do they have on the final shape?
surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3])
# surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3], [0., 0.7, 1.])
# surface = tigl3.surface_factories.interpolate_curves([curve1, curve2, curve3], degree=1)
The function tigl3.surface_factories.interpolate_curves has many more parameters that influence the resulting shape. Let's have a look:
tigl3.surface_factories.interpolate_curves?
Visualize the result Now, let's draw our wing. What does it look like? What can be improved? Note: a separate window with the 3D viewer will open!
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()

# make tesselation more accurate
display.Context.SetDeviationCoefficient(0.0001)

# draw the curves and the surface
display.DisplayShape(curve1)
display.DisplayShape(curve2)
display.DisplayShape(curve3)
display.DisplayShape(surface)

# match co...
GitHub You can save a copy of your Colab notebook to GitHub using File > Save a copy in GitHub… You can load any .ipynb file from GitHub by simply appending its path to colab.research.google.com/github/. For example, colab.research.google.com/github/t...
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
Bonjour_Colaboratory.ipynb
mr06/hello-world
gpl-3.0
Want to use a new library? Import it with a pip install command at the top of the notebook; you can then use it anywhere else in the notebook. For instructions on importing commonly used libraries, see the sample notebook on importing ...
!pip install -q matplotlib-venn

from matplotlib_venn import venn2
_ = venn2(subsets=(3, 2, 1))
Simulation Configuration Now we set the simulation configuration.
scenario_params = {
    'side_length': 10,  # 10 meters side length
    'single_wall_loss_dB': 5,
    'num_rooms_per_side': 12,
    'ap_decimation': 1}

power_params = {
    'Pt_dBm': 20,             # 20 dBm transmit power
    'noise_power_dBm': -300  # Very low noise power
}
ipython_notebooks/METIS Simple Scenario.ipynb
darcamo/pyphysim
gpl-2.0
Perform the simulation and calculate the SINRs
out = perform_simulation_SINR_heatmap(scenario_params, power_params)
(sinr_array_pl_nothing_dB,
 sinr_array_pl_3gpp_dB,
 sinr_array_pl_free_space_dB,
 sinr_array_pl_metis_ps7_dB) = out

num_discrete_positions_per_room = 15

sinr_array_pl_nothing_dB2 = prepare_sinr_array_for_color_plot(
    sinr_array_pl_nothing_dB,
    ...
Print Min/Mean/Max SINR values (no noise)
print(("Min/Mean/Max SINR value (no PL):"
       "\n    {0}\n    {1}\n    {2}").format(
           sinr_array_pl_nothing_dB.min(),
           sinr_array_pl_nothing_dB.mean(),
           sinr_array_pl_nothing_dB.max()))
print(("Min/Mean/Max SINR value (3GPP):"
       "\n    {0}\n    {1}\n    {2}").format(
           sin...
Create the Plots for the different cases First we will create the plots for a noise variance equal to zero. Plot case without path loss (only wall loss)
fig1, ax1 = plt.subplots(figsize=(10, 8))

print("Max SINR: {0}".format(sinr_array_pl_nothing_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_nothing_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_nothing_dB.mean()))

im1 = ax1.imshow(sinr_array_pl_nothing_dB2, interpolation='nearest', vmax=-1.5, vmin=-5)
fi...
Plot case with 3GPP path loss
fig2, ax2 = plt.subplots(figsize=(10, 8))

print("Max SINR: {0}".format(sinr_array_pl_3gpp_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_3gpp_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_3gpp_dB.mean()))

im2 = ax2.imshow(sinr_array_pl_3gpp_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig2.colorbar...
Case with Free Space Path Loss
fig3, ax3 = plt.subplots(figsize=(10, 8))

print("Max SINR: {0}".format(sinr_array_pl_free_space_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_free_space_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_free_space_dB.mean()))

im3 = ax3.imshow(sinr_array_pl_free_space_dB2, interpolation='nearest', vmax=30, ...
Plot case with METIS PS7 path loss
fig4, ax4 = plt.subplots(figsize=(10, 8))

print("Max SINR: {0}".format(sinr_array_pl_metis_ps7_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_metis_ps7_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_metis_ps7_dB.mean()))

im4 = ax4.imshow(sinr_array_pl_metis_ps7_dB2, interpolation='nearest', vmax=30, vmin...
Create the plots with interact Here we repeat the plots, but now using IPython interact. This allows us to change input parameters and see the result in the plot.
@interact(Pt_dBm=(0., 40., 5.),
          noise_power_dBm=(-160., 0.0, 5.),
          pl_model=['nothing', '3gpp', 'free_space', 'metis'],
          ap_decimation=['1', '2', '4', '9'])
def plot_SINRs(Pt_dBm=30., noise_power_dBm=-160, pl_model='3gpp', ap_decimation=1):
    scenario_params = {
        'side_length': 10,  # 10 meters side length
        'sing...
I'll be the first to admit that it's far from perfect. The potential lines are all squeezed up, and the 'field lines' aren't uniform or anything. This is because it's not really a proper field plot, rather a stream plot. It will matter less as you see more examples. But firstly, let's see the potential contour plot can...
xs = ys = [-0.2, 0.2]
Charge.plot_field(xs, ys, show_charge=False, field=False, potential=True)
examples.ipynb
surelyourejoking/ElectrodynamicsPy
mit
That doesn't look so bad! (Yes, I cheated and zoomed in). It's not perfectly circular, but remember that this is just meant to be a 'quick' plot. It would be very easy to make it circular by adding a figsize=(5,5) parameter into plt.figure(). Multiple point charges We'll examine the classic dipole. Let's start with ju...
# After every plot we have to reset the charge registry
Charge.reset()

xs, ys = [-1, 2], [-2, 2]
A = Charge(10, [0, 0])
B = Charge(-10, [1, 0])
Charge.plot_field(xs, ys, show_charge=True, field=True, potential=False)
Another example:
Charge.plot_field(xs, ys, show_charge=False, field=True, potential=True)
The potential is still looking funny. If you really need a good potential plot, do not use the built in plot_field function! Use the rest of the module to calculate V, but before plotting, multiply that V by a large number (1000 should work) to get the equipotentials to show up nicely. But the default plot doesn't alwa...
Charge.reset()

A = Charge(1, [0, 0])
B = Charge(4, [1, 0])

xs = [-1, 2.5]
ys = [-1, 1.5]
Charge.plot_field(xs, ys, show_charge=False, field=True, potential=True)
We can obviously try more complex charge distributions, but the stream plot sometimes gets a bit wonky.
Charge.reset()
xs = ys = [-2, 2]

A = Charge(1, [-1, 1])
B = Charge(-5, [1, 1])
C = Charge(10, [1, -1])
D = Charge(-4, [-1, -1])
Charge.plot_field(xs, ys, show_charge=True, field=True, potential=False)
On second thought, it doesn't look too terrible here. But if you try putting identical charges at the corners of a square, you'll see what I mean (I'm too embarrassed to put it here). Note that we don't really have to instantiate each charge using A = Charge(..), we can just write Charge(). We will abuse this later o...
Charge.reset()
xs = ys = [-2, 2]

A = Charge(1, [-1, 1])
B = Charge(1, [1, 1])
C = Charge(1, [1, -1])
D = Charge(1, [-1, -1])
Charge.plot_field(xs, ys, show_charge=True, field=True, potential=False)
Yeah, not too happy with it. Never mind. So we've had a look at the different things we can do with point charges. But in fact, we can combine point charges in all sorts of ways, to look at the field of charge distributions (still 2D for now). Line charges A line charge can simply be thought of as many point charges i...
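Independent of the module, the "many point charges in a row" idea can be checked with a few lines of NumPy. potential_from_points is a hypothetical helper written here for illustration, not part of charge_distribution_2D:

```python
import numpy as np

def potential_from_points(charges, positions, at, k=1.0):
    """Superpose the point-charge potentials k*q/r at the point `at`."""
    at = np.asarray(at, dtype=float)
    total = 0.0
    for q, pos in zip(charges, positions):
        r = np.linalg.norm(at - np.asarray(pos, dtype=float))
        total += k * q / r
    return total

# Model a line from (-1,0) to (1,0) with total charge Q=1 as n point charges
n = 80
xs = np.linspace(-1, 1, n)
charges = [1.0 / n] * n
positions = [(x, 0.0) for x in xs]

# By symmetry the potential should be the same at (0, 1) and (0, -1)
v_up = potential_from_points(charges, positions, (0, 1))
v_down = potential_from_points(charges, positions, (0, -1))
print(v_up, v_down)
```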
from charge_distribution_2D import straight_line_charge

Charge.reset()
xs = ys = [-2, 2]

# A line of charge on the x axis, with total charge of 1, going from (-1,0) to (1,0).
straight_line_charge([-1, 0], [1, 0], res=80, Q=1)
Charge.plot_field(xs, ys, show_charge=True, field=True, potential=True)
We're starting to get some pretty plots! res is a parameter that determines the 'resolution' of the line – specifically, how many point charges per unit length. I've found 80 to be sufficient for most purposes. A 2D 'capacitor' of sorts:
Charge.reset()
xs = ys = [-4, 4]

straight_line_charge([-2, 1], [2, 1], res=80, Q=1)
straight_line_charge([-2, -1], [2, -1], res=80, Q=-1)

# The default plot shows the field and the charge, but not the potential.
Charge.plot_field(xs, ys)
What about more general (non-straight) line charges? I wrote a separate method for this afterwards, in which we can specify the curve as parametric equations.
import numpy as np
from charge_distribution_2D import line_charge

Charge.reset()
xs = ys = [-2, 2]

# The parametric equations of a circle
def x(t):
    return np.cos(t)

def y(t):
    return np.sin(t)

# Create the circle
line_charge(parametric_x=x, parametric_y=y, trange=2*np.pi, res=100, Q=10)
Charge.plot_field(xs, ...
However, I much prefer to use lambdas in the call to line_charge(), in which case the above would instead be: line_charge(parametric_x=lambda t: np.cos(t), parametric_y=lambda t: np.sin(t), trange=2*np.pi, res=100, Q=10) Of course, since this method is more general than straight_line_charge(), we can also use it to dra...
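The way a parametric curve gets sampled into discrete points can be sketched on its own. points_on_curve is a hypothetical helper written here for illustration, not part of the module:

```python
import numpy as np

def points_on_curve(parametric_x, parametric_y, trange, res):
    """Sample a parametric curve at `res` evenly spaced parameter values in [0, trange)."""
    t = np.linspace(0, trange, res, endpoint=False)
    return np.column_stack([parametric_x(t), parametric_y(t)])

# The unit circle from the example above, sampled at 100 points
pts = points_on_curve(lambda t: np.cos(t), lambda t: np.sin(t), 2*np.pi, 100)
print(pts.shape)  # (100, 2)

# Every sampled point sits at distance 1 from the origin
print(np.allclose(np.linalg.norm(pts, axis=1), 1.0))  # True
```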
from charge_distribution_2D import rectangle_charge

Charge.reset()
xs = ys = [-2, 2]

# Create a rectangle (or rather square), with a length and height of 1, and corner at (-0.5,-0.5).
rectangle_charge([1, 1], [-0.5, -0.5], res=80, Q=100)
Charge.plot_field(xs, ys)
Two challenges 1. The file is big. We don't want to download it if it's already present. 2. We're going to repeatedly download files. We don't want to just copy and paste the same code. Encapsulating Repeated Code In Functions A function is code that can be invoked by many callers. A function may have arguments that ar...
# Example function
def xyz(input):
    # The function's name is "xyz". It has one argument "input".
    return int(input) + 1  # The function returns one value, input + 1

print(xyz("3"))

#a = xyz(3)
#print(xyz(a))

def addTwo(input1, input2):
    return input1 + input2

# addTwo(1, 2)
Fall2018/04_ProjectOverview_AnalysisWorkflow/analysis_workflow.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Colin will provide more details about functions, such as variable scope and multiple return values.
# Function to download from a URL
def download(url, filename):
    print("Downloading", filename)
    #request.urlretrieve(url, filename)

download(TRIP_DATA, TRIP_FILE)

# Enhancing the function to detect a file already present
import os.path

def download(url, filename):
    if os.path.isfile(filename):
        print("Alrea...
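The truncated enhancement can be fleshed out as follows. This is a sketch of the pattern, with an extra fetch parameter (my addition, not in the lecture code) so the download step can be swapped out for testing:

```python
import os.path
import urllib.request

def download(url, filename, fetch=urllib.request.urlretrieve):
    """Download url to filename, skipping the work if the file is already present."""
    if os.path.isfile(filename):
        print("Already have", filename)
        return
    print("Downloading", filename)
    fetch(url, filename)
```

Calling it twice with the same filename performs only one download; the second call hits the `os.path.isfile` check and returns early.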
Convert keys to strings and print all keys
string_results = solph.views.convert_keys_to_strings(energysystem.results['main'])
print(string_results.keys())
oemof_examples/oemof.solph/v0.4.x/jupyter_tutorials/2_Processing_results_and_plotting.ipynb
oemof/examples
gpl-3.0
Use the outputlib to collect all the flows into and out of the electricity bus, using solph.views.node()
node_results_bel = solph.views.node(energysystem.results['main'], 'bel')
Forms The forms in the Drum Shop app Form: Choose Drum Lets the user choose the drum (from the provided options)
%%writefile drumshop_forms.py
from IPython.display import HTML, Javascript
from ipywidgets import interact
import ipywidgets as widgets

"""
Forms related to the Drum Shop
"""
web/projects/drumshop.ipynb
satishgoda/learning
mit
Simple database for the drum shop. TODO: Replace this with a sqlite database in the next iteration.
%%writefile drumshop_forms.py -a

drumSizeTags = ('small', 'medium', 'large')
drumSizeValues = ('20', '40', '60')
drumSizes = dict(zip(drumSizeTags, range(len(drumSizeTags))))

drumColors = ['red', 'green', 'blue', 'yellow', 'grey']
The callback that gets processed after the user chooses the options for the drums.
%%writefile drumshop_forms.py -a

def chooseDrum(size, color):
    """ Displays the Drum after applying the user chosen settings """
    sizeTag = drumSizeTags[size]
    sizeValue = drumSizeValues[size]
    html = HTML(f"""
        <p>You choose a "{sizeTag}" size drum of height "{sizeValue}" and color "...
Now that we have selected the code cells and executed them, we have a Python file. You can open it in your favorite text editor and have a look at the code.
!gvim drumshop_forms.py
You can run it using the %run line magic command. All the symbols in the file will now be available in the interpreter!
%run drumshop_forms.py
You can quickly check the symbols and ascertain that they are from the file that was run above. Just select the two code cells below and execute them.
chooseDrum?
drumSizes
We can now display a user interface for the drum options.
interact(chooseDrum, size=drumSizes, color=drumColors)
Following is an alternate user interface for the form.
sizeSlider = widgets.SelectionSlider(options=drumSizes)
colorSlider = widgets.SelectionSlider(options=drumColors)

interact(chooseDrum, size=sizeSlider, color=colorSlider)
In this notebook, we will try to handle missing data!
df = unpickle_object("no_duplicates_df.pkl")
df.shape
df.head()

percentage_missing(df)  # seems that only dates and times are missing for our data! And only 0.2%!

df[df["date"].isnull()].shape
df[df["time"].isnull()].shape

all(df[df["time"].isnull()].index == df[df["date"].isnull()].index)  # perfect match for indic...
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
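The `percentage_missing` helper called above is project-specific and not shown in this notebook; a minimal sketch of what such a function might compute (an assumption: the percentage of NaN values per column) could be:

```python
import pandas as pd

def percentage_missing(df):
    """Return the percentage of missing (NaN) values per column, highest first."""
    return (df.isnull().mean() * 100).sort_values(ascending=False)

# tiny demonstration frame with one missing date and one missing time
demo = pd.DataFrame({
    "date": ["2016-01-01", None, "2016-01-03", "2016-01-04"],
    "time": ["10:00", "11:00", None, "12:30"],
    "tweet": ["a", "b", "c", "d"],
})
print(percentage_missing(demo))
```

The sorted output makes the worst-affected columns easy to spot at a glance.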
Our last entry in the dataframe has an index of 1049876
df[df["date"].isnull()].head()
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
Our first entry in the dataframe has an index of 1047747
1049876 - 1047747 # this is a range of 2129, which is larger than the total number of missing rows, # so we know that the missing values are not consecutive! # these are the handles that have missing dates/times list(set(df[df["date"].isnull()]['handle']))
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
Having sampled a large number of the handles above, I found that most accounts were primarily bots or had been suspended, meaning that they were formerly bots. As such, I will just drop all the rows that have missing data for date and time.
df.dropna(inplace=True) df.shape 1286 + 610694 # looks like we dropped the correct number of rows!
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
We now have rows with a date of 1970! These must contain nonsensical tweets, so we will drop them too!
df[df['date'] == date(1970,1,1)] #clearly bad rows of data! to_drop = df[df['date'] == date(1970,1,1)].index to_drop df.loc[to_drop, :] df.drop(to_drop, inplace=True) df.shape 610694 - 7 # we dropped the right amount! df.head()
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
Let's now clean up our tweets further! This will ensure we don't have garbage hashtags or nonsensical words, which will be important for the lemmatization process later on. To ensure that we have fully removed duplicates, I will again drop duplicates based on the clean_tweet_V2 column.
clean_df = filtration_1(df,"clean_tweet_V1", "clean_tweet_V2") clean_df.head() clean_df = filtration_2(clean_df, "clean_tweet_V2") clean_df.head() clean_df.shape clean_df.drop_duplicates(subset="clean_tweet_V2", inplace=True) clean_df.shape #lost around 80K rows! so many duplicates!
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
It seems that our handle column does not strictly contain Twitter user names. Rather, some entries contain tweets! It is likely that this is the result of bots. As such, we will remove these entries from our dataset! If a handle contains the word "bot", we will remove it as well!
clean_df.sort_values(by="handle").head(50) #as we can see from the sample, its all nonsense tweets clean_df.sort_values(by='handle', inplace=True) #lets prep our dataframe for the cleaning process clean_df.reset_index(inplace=True) del clean_df['index'] clean_df.head() clean_df.shape to_drop = [] for index in clean_...
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
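The row-by-row loop above can also be expressed as a single vectorized filter. A hedged alternative, assuming the goal is simply to drop any handle containing the substring "bot" (case-insensitively); the `demo` frame is purely illustrative:

```python
import pandas as pd

demo = pd.DataFrame({
    "handle": ["realuser", "NewsBot2000", "another_person", "BOTARMY"],
    "tweet": ["hi", "spam", "hello", "spam again"],
})

# keep only rows whose handle does NOT contain 'bot', case-insensitively
mask = ~demo["handle"].str.contains("bot", case=False, na=False)
filtered = demo[mask].reset_index(drop=True)
print(filtered["handle"].tolist())  # ['realuser', 'another_person']
```

The `na=False` argument treats missing handles as non-matches, so they would be kept rather than raising an error.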
Let's look at some stats at the day level:
stats = pd.DataFrame(clean_df.groupby("date")['tweet'].size().describe()) stats.drop(["count"], inplace=True) stats = stats.rename(columns = {"tweet":'tweets_per_day'}) print(tabulate(stats, headers='keys', tablefmt='fancy_grid'))
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
Now that we have plotted the hashtags. There is no need to keep the hashtags in the corpus of a particular tweet. In fact, keeping the hashtag would serve to only confuse our sentiment calculations.
clean_df['clean_tweet_V2'] = clean_df['clean_tweet_V2'].apply(lambda x: x.replace("#","")) clean_df.head()
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
While it is excellent that we have such a high level of granularity for our time column, it is not needed for our analysis. Instead, it would be useful to place tweets into "hourly" buckets. This way, we can run our analysis at both the day level and the hour level!
hours = [] for index in clean_df.index: hours.append(clean_df.iloc[index, 2].hour) clean_df.shape[0] == len(hours) #perfect clean_df['hour_of_day'] = hours clean_df = clean_df.set_value(clean_df[clean_df['hour_of_day'] == 0].index, "hour_of_day", 24) clean_df.head() clean_df_tweet_by_hour_plot = line_graph(cle...
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
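The per-row loop over `.hour` can be replaced by a vectorized `dt.hour` access, assuming the time column holds datetime-like values; a small illustrative sketch (the `demo` frame is hypothetical):

```python
import pandas as pd

demo = pd.DataFrame({
    "time": pd.to_datetime(["2016-05-01 00:15:00",
                            "2016-05-01 09:30:00",
                            "2016-05-01 23:59:00"]),
})

# vectorized hour extraction; map midnight (hour 0) to 24 as the notebook does
demo["hour_of_day"] = demo["time"].dt.hour.replace(0, 24)
print(demo["hour_of_day"].tolist())  # [24, 9, 23]
```

This also avoids the deprecated `set_value` call for the midnight remapping.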
This concludes the exploration notebook! In the next notebook, we will gather some additional data and prepare our data for the modelling process! As far as model building is concerned, we only need the dates, hours and clean_tweet_V2. Everything else is irrelevant. Let's go ahead and make these changes!
clean_df.drop(["handle", "time", "tweet", "tuple_version_tweet", "clean_tweet_V1"], axis=1, inplace=True) clean_df.head() pickle_object(clean_df, "clean_df_NB3_Complete")
05-project-kojack/Notebook_3_Exploration_Phase.ipynb
igabr/Metis_Projects_Chicago_2017
mit
Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
# Data path to your local copy of Sam's "train_transformed.csv", which was produced by ?separate Python script? data_path_for_labels_only = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/sf_crime-master/data/train_transformed.csv" df = pd.read_csv(data_path_for_labels_...
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_18_1636.ipynb
samgoodgame/sf_crime
mit
Defining Performance Criteria As determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: ...
def model_prototype(train_data, train_labels, eval_data, eval_labels): knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels) bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels) mnb = MultinomialNB().fit(train_data, train_labels) log_reg = LogisticRegression().fit(t...
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_18_1636.ipynb
samgoodgame/sf_crime
mit
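As a sanity check on the metric itself, multi-class log loss can be computed by hand as the mean negative log-probability assigned to the true class, and compared against scikit-learn's `log_loss` (the labels and probabilities below are illustrative):

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = [0, 2, 1]                      # true class indices
probs = np.array([[0.7, 0.2, 0.1],      # predicted class probabilities
                  [0.1, 0.2, 0.7],
                  [0.2, 0.6, 0.2]])

# manual multi-class log loss: mean of -log(p assigned to the true class)
eps = 1e-15
clipped = np.clip(probs, eps, 1 - eps)
manual = -np.mean(np.log(clipped[np.arange(len(y_true)), y_true]))

print(manual)                  # ~0.408
print(log_loss(y_true, probs)) # should agree with the manual value
```

Confident wrong predictions are punished heavily by the log: a model assigning probability 0.01 to the true class contributes -log(0.01) ≈ 4.6 to the average.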
Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process. 1) Feature addition We previously added components from the weather data into the original SF crime data as ...
list_for_ks = [] list_for_ws = [] list_for_ps = [] list_for_log_loss = [] def k_neighbors_tuned(k,w,p): tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels) dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data) list_for_ks.append(k) ...
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_18_1636.ipynb
samgoodgame/sf_crime
mit
The Bernoulli Naive Bayes and Multinomial Naive Bayes models can predict whether a loan will be good or bad with XXX% accuracy. Hyperparameter tuning: We will prune the work above and seek to optimize the alpha parameter (the Laplace smoothing parameter) for the MNB and BNB classifiers. Model calibration: Here we will calib...
### All the work from Sarah's notebook: import theano from theano import tensor as T from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams print(theano.config.device)  # We're using CPUs (for now) print(theano.config.floatX)  # Should be 64 bit for CPUs np.random.seed(0) from IPython.display import d...
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_18_1636.ipynb
samgoodgame/sf_crime
mit
Dataset The Dataset class implements an iterator which returns the next batch of data in each iteration. The data is already normalized to have zero mean and unit variance. The iteration terminates when we reach the end of the dataset (one epoch).
batch_size = 10 num_classes = Dataset.num_classes # create the Dataset for training and validation train_data = Dataset('train', batch_size) val_data = Dataset('val', batch_size, shuffle=False) # downsample = 2 # train_data = Dataset('train', batch_size, downsample) # val_data = Dataset('val', batch_size, downsample, s...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
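The `Dataset` class itself ships with the workshop code; a simplified, hypothetical sketch of the iterator pattern it implements (plain NumPy arrays, without the normalization or downsampling) might look like:

```python
import numpy as np

class MiniDataset:
    """Minimal mini-batch iterator: yields (x, y) batches for one epoch."""
    def __init__(self, x, y, batch_size, shuffle=True):
        self.x, self.y = x, y
        self.batch_size, self.shuffle = batch_size, shuffle

    def __iter__(self):
        idx = np.arange(len(self.x))
        if self.shuffle:
            np.random.shuffle(idx)
        for start in range(0, len(idx), self.batch_size):
            batch = idx[start:start + self.batch_size]
            yield self.x[batch], self.y[batch]

x = np.arange(25, dtype=np.float32).reshape(25, 1)
y = np.arange(25)
batches = list(MiniDataset(x, y, batch_size=10, shuffle=False))
print([b[0].shape[0] for b in batches])  # [10, 10, 5]
```

Note the last batch is allowed to be smaller than `batch_size`, which is why one pass over 25 examples yields three batches.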
Inputs First, we will create input placeholders for the TensorFlow computational graph of the model. For a supervised learning model, we need to declare placeholders which will hold the input images (x) and target labels (y) of the mini-batches as we feed them to the network.
# store the input image dimensions height = train_data.height width = train_data.width channels = train_data.channels # create placeholders for inputs def build_inputs(): with tf.name_scope('data'): x = tf.placeholder(tf.float32, shape=(None, height, width, channels), name='rgb_images') y = tf.plac...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
Model Now we can define the computational graph. Here we will make heavy use of the tf.layers high-level API, which handles tf.Variable creation for us. The main difference compared to the classification model is that the network is going to be fully convolutional, without any fully connected layers. Brief sketch of the model...
# helper function which applies conv2d + ReLU with filter size k def conv(x, num_maps, k=3): x = tf.layers.conv2d(x, num_maps, k, padding='same') x = tf.nn.relu(x) return x # helper function for 2x2 max pooling with stride=2 def pool(x): return tf.layers.max_pooling2d(x, pool_size=2, strides=2, padding...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
Loss Now we are going to implement the build_loss function which will create nodes for loss computation and return the final tf.Tensor representing the scalar loss value. Because segmentation is just classification on a pixel level we can again use the cross entropy loss function \(L\) between the target one-hot distri...
# this function takes logits and targets (y) and builds the loss subgraph def build_loss(logits, y): with tf.name_scope('loss'): # vectorize the image y = tf.reshape(y, shape=[-1]) logits = tf.reshape(logits, [-1, num_classes]) # gather all labels with valid ID mask = y < num_classes y = ...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
Putting it all together Now we can use all the building blocks from above and construct the whole forward pass Tensorflow graph in just a couple of lines.
# create inputs x, y = build_inputs() # create model logits = build_model(x, num_classes) # create loss loss = build_loss(logits, y) # we are going to need argmax predictions for IoU y_pred = tf.argmax(logits, axis=3, output_type=tf.int32)
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
3. Training the model Training During training we first compute the forward pass to get the value of the loss function. After that we do the backward pass, computing the gradients of the loss with respect to the parameters at each layer via backpropagation.
# this function trains the model def train(sess, x, y, y_pred, loss, checkpoint_dir): num_epochs = 30 batch_size = 10 log_dir = 'local/logs' utils.clear_dir(log_dir) utils.clear_dir(checkpoint_dir) learning_rate = 1e-3 decay_power = 1.0 global_step = tf.Variable(0, trainable=False) ...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
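The training function above sets `learning_rate = 1e-3` and `decay_power = 1.0`, suggesting a polynomial decay schedule; a plain-NumPy sketch of that schedule (assuming semantics like `tf.train.polynomial_decay` with a zero end learning rate) is:

```python
import numpy as np

def polynomial_decay(base_lr, step, decay_steps, power=1.0, end_lr=0.0):
    """Polynomial decay in the style of tf.train.polynomial_decay."""
    step = min(step, decay_steps)
    return (base_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr

# with power=1.0 this is a straight line from 1e-3 down to 0
lrs = [polynomial_decay(1e-3, s, decay_steps=100) for s in (0, 50, 100)]
print(lrs)
```

With `power=1.0` the schedule decays linearly; larger powers decay faster early on and flatten out near the end.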
Validation We usually evaluate semantic segmentation results with the Intersection over Union measure (IoU, aka the Jaccard index). Note that the accuracy measure we used on the MNIST image classification problem is a bad measure in this case, because semantic segmentation datasets are often heavily imbalanced. First we compute IoU for eac...
def validate(sess, data, x, y, y_pred, loss, draw_steps=0): print('\nValidation phase:') conf_mat = np.zeros((num_classes, num_classes), dtype=np.uint64) for i, (x_np, y_np, names) in enumerate(data): start_time = time.time() loss_np, y_pred_np = sess.run([loss, y_pred], feed_dict...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
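Per-class IoU can be read directly off a confusion matrix like the one `validate` accumulates: for class c, the intersection is the diagonal entry and the union is row sum + column sum minus the diagonal. A hedged NumPy sketch (the 2-class matrix below is made up):

```python
import numpy as np

def per_class_iou(conf_mat):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c), read off the confusion matrix."""
    tp = np.diag(conf_mat).astype(np.float64)
    union = conf_mat.sum(axis=0) + conf_mat.sum(axis=1) - tp
    return tp / np.maximum(union, 1)  # guard against empty classes

conf = np.array([[8, 2],
                 [1, 9]])
iou = per_class_iou(conf)
print(iou)          # [8/11, 9/12]
print(iou.mean())   # mean IoU over classes
```

Averaging the per-class IoUs (mean IoU) gives every class equal weight, which is exactly why it is preferred over plain accuracy on imbalanced datasets.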
Tensorboard $ tensorboard --logdir=local/logs/ 4. Restoring the pretrained network
# restore the best checkpoint checkpoint_path = 'local/pretrained1/model.ckpt' saver = tf.train.Saver() saver.restore(sess, checkpoint_path) validate(sess, val_data, x, y, y_pred, loss, draw_steps=10)
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
Day 4 5. Improved model with skip connections In this part we are going to improve on the previous model by adding skip connections. The role of the skip connections will be to restore the information lost due to downsampling.
def upsample(x, skip, num_maps): skip_size = skip.get_shape().as_list()[1:3] x = tf.image.resize_bilinear(x, skip_size) x = tf.concat([x, skip], 3) return conv(x, num_maps) # this function takes the input placeholder and the number of classes, builds the model and returns the logits def build_model(x,...
Day-2/segmentation/semantic_segmentation_solved.ipynb
SSDS-Croatia/SSDS-2017
mit
Just 10 outliers can really screw up our line fit!
plt.ylim([-20,20]) plt.xlim([-20,20]) plt.scatter(*pts) pca_line = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2))) plt.plot(*pca_line) rpca_line = np.dot(U_n[0].reshape((2,1)), np.array([-20,20]).reshape((1,2))) plt.plot(*rpca_line, c='r')
TGA_Testing.ipynb
fivetentaylor/rpyca
mit
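The `U` used for `pca_line` presumably comes from an SVD-based PCA; a self-contained sketch of recovering the leading principal direction with NumPy (the data here is synthetic, generated along y = x):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=200)
# points scattered tightly along the line y = x, shape (2, n) like `pts`
pts = np.stack([t, t + 0.05 * rng.normal(size=200)])

# SVD of the centered data: columns of U are the principal directions
centered = pts - pts.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
direction = U[:, 0]
print(direction)  # close to +/-[0.707, 0.707]
```

Classical PCA minimizes squared residuals, which is exactly why a handful of large outliers can drag this direction far off the true line, motivating the robust (TGA) variant below.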
Now the robust pca version!
import tga reload(tga) import logging logger = logging.getLogger(tga.__name__) logger.setLevel(logging.INFO)
TGA_Testing.ipynb
fivetentaylor/rpyca
mit
Factor the matrix into L (low rank) and S (sparse) parts
X = pts.copy() v = tga.tga(X.T, eps=1e-5, k=1, p=0.0)
TGA_Testing.ipynb
fivetentaylor/rpyca
mit
And have a look at this!
plt.ylim([-20,20]) plt.xlim([-20,20]) plt.scatter(*pts) tga_line = np.dot(v[0].reshape((2,1)), np.array([-20,20]).reshape((1,2))) plt.plot(*tga_line) #plt.scatter(*L, c='red')
TGA_Testing.ipynb
fivetentaylor/rpyca
mit
Problem 1 Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
display(Image("notMNIST_small/A/MlJlYmVsc0RldXgtQmxhY2sub3Rm.png"))
udacity_machine_learning_notes/deep_learning/1_notmnist.ipynb
anshbansal/anshbansal.github.io
mit