| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
implement regression
|
Parallel/multithreading version of linear regression in Julia
|
https://stackoverflow.com/questions/64470373/parallel-multithreading-version-of-linear-regression-in-julia
|
<p>I implemented regression model with interactions using Julia <code>GLM</code> package:</p>
<p><code>Reg = lm(@formula(dep_var ~ var1&var2&var3), data, true)</code>.</p>
<p>Fitting this formula requires a lot of RAM (&gt; 80 GB), but I noticed that the calculations are performed on one core, although my machine (x86_64-pc-linux-gnu) has 8 CPU cores.</p>
<p><strong>Is it possible to implement linear regression using multiprocessing/parallelism approaches?</strong></p>
<p>I suppose it could also improve the model runtime.</p>
|
<p>Fitting a regression model is basically doing lots of matrix operations. By default Julia uses BLAS, and the easiest thing you can do is try to configure it to be multi-threaded. This requires running Julia in a multi-threaded setting <em>and</em> setting the <code>BLAS.set_num_threads()</code> configuration.</p>
<p>Before starting Julia, run (on Windows):</p>
<pre><code>set JULIA_NUM_THREADS=4
</code></pre>
<p>or on Linux</p>
<pre><code>export JULIA_NUM_THREADS=4
</code></pre>
<p>Once Julia has started, run (note that <code>BLAS</code> lives in the <code>LinearAlgebra</code> standard library):</p>
<pre><code>using LinearAlgebra
BLAS.set_num_threads(4)
</code></pre>
<p>You should observe increased performance of your linear regression models.</p>
| 34
|
implement regression
|
Implementing logistic regression with L2 regularization in Matlab
|
https://stackoverflow.com/questions/9369379/implementing-logistic-regression-with-l2-regularization-in-matlab
|
<p>Matlab has built-in logistic regression using mnrfit; however, I need to implement a logistic regression with L2 regularization. I'm completely at a loss as to how to proceed. I've found some good papers and website references with a bunch of equations, but I'm not sure how to implement the gradient descent algorithm needed for the optimization.</p>
<p>Is there easily available sample code in Matlab for this? I've found some libraries and packages, but they are all part of larger packages and call so many convoluted functions that one can get lost just going through the trace.</p>
|
<p>Here is an annotated piece of code for plain gradient descent for logistic regression. To introduce regularisation, you will want to update the cost and gradient equations. In this code, theta are the parameters, X are the class predictors, y are the class labels, and alpha is the learning rate.</p>
<p>I hope this helps :)</p>
<pre><code>function [theta,J_store] = logistic_gradientDescent(theta, X, y, alpha, numIterations)

% Initialize some useful values
m = length(y);  % number of training examples
n = size(X,2);  % number of features
J_store = 0;
%J_store = zeros(numIterations,1);

for iter = 1:numIterations

    % predict the class labels using the current weights (theta)
    Z = X*theta;
    h = sigmoid(Z);

    % This is the normal cost function equation
    J = (1/m).*sum(-y.*log(h) - (1-y).*log(1-h));
    %J_store(iter) = J;

    % This is the equation to obtain the gradient given the current weights, without regularisation
    grad = [(1/m) .* sum(repmat((h - y),1,n).*X)]';
    theta = theta - alpha.*grad;

end
end
</code></pre>
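<p>The regularisation the answer alludes to amounts to one extra term in the cost and in the gradient. Below is a minimal NumPy sketch of the same gradient descent with an L2 penalty (a hypothetical illustration, not the Matlab code above: <code>lambda_</code> is the penalty strength, and the bias weight <code>theta[0]</code> is conventionally left unpenalised):</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd_l2(X, y, alpha=0.1, lambda_=1.0, num_iters=500):
    """Gradient descent for logistic regression with an L2 penalty.

    X: (m, n) design matrix whose first column is all ones (bias).
    y: (m,) vector of 0/1 labels.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)                   # current predictions
        grad = (X.T @ (h - y)) / m               # unregularised gradient
        grad[1:] += (lambda_ / m) * theta[1:]    # penalty term, bias excluded
        theta -= alpha * grad
    return theta

def cost_l2(X, y, theta, lambda_):
    """Cross-entropy cost plus the L2 penalty (bias excluded)."""
    m = len(y)
    h = sigmoid(X @ theta)
    eps = 1e-12  # guard against log(0)
    data_term = -np.mean(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
    return data_term + (lambda_ / (2 * m)) * np.sum(theta[1:] ** 2)
```

<p>On a small separable toy set this recovers the labels while the penalty keeps the weights from growing without bound.</p>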
| 35
|
implement regression
|
Implementing a linear regression using gradient descent
|
https://stackoverflow.com/questions/58996556/implementing-a-linear-regression-using-gradient-descent
|
<p>I'm trying to implement a linear regression with gradient descent as explained in this article (<a href="https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931" rel="nofollow noreferrer">https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931</a>).
I've followed the implementation to the letter, yet my results overflow after a few iterations.
I'm trying to get this result approximately: y = -0.02x + 8499.6.</p>
<p>The code:</p>
<pre><code>package main

import (
	"encoding/csv"
	"fmt"
	"strconv"
	"strings"
)

const (
	iterations   = 1000
	learningRate = 0.0001
)

func computePrice(m, x, c float64) float64 {
	return m*x + c
}

func computeThetas(data [][]float64, m, c float64) (float64, float64) {
	N := float64(len(data))
	dm, dc := 0.0, 0.0
	for _, dataField := range data {
		x := dataField[0]
		y := dataField[1]
		yPred := computePrice(m, x, c)
		dm += (y - yPred) * x
		dc += y - yPred
	}
	dm *= -2 / N
	dc *= -2 / N
	return m - learningRate*dm, c - learningRate*dc
}

func main() {
	data := readXY()
	m, c := 0.0, 0.0
	for k := 0; k < iterations; k++ {
		m, c = computeThetas(data, m, c)
	}
	fmt.Printf("%.4fx + %.4f\n", m, c)
}

func readXY() [][]float64 {
	file := strings.NewReader(data)
	reader := csv.NewReader(file)
	records, err := reader.ReadAll()
	if err != nil {
		panic(err)
	}
	records = records[1:]
	size := len(records)
	data := make([][]float64, size)
	for i, v := range records {
		val1, err := strconv.ParseFloat(v[0], 64)
		if err != nil {
			panic(err)
		}
		val2, err := strconv.ParseFloat(v[1], 64)
		if err != nil {
			panic(err)
		}
		data[i] = []float64{val1, val2}
	}
	return data
}

var data = `km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290`
</code></pre>
<p>And here it can be worked on in the Go Playground:
<a href="https://play.golang.org/p/2CdNbk9_WeY" rel="nofollow noreferrer">https://play.golang.org/p/2CdNbk9_WeY</a></p>
<p>What do I need to fix to get the correct result?</p>
|
<blockquote>
<p>Why would a formula work on one data set and not another one?</p>
</blockquote>
<p>In addition to sascha's remarks, here's another way to look at problems of this application of gradient descent: The algorithm offers no guarantee that an iteration yields a better result than the previous, so it doesn't necessarily converge to a result, because:</p>
<ul>
<li>The gradients <code>dm</code> and <code>dc</code> in axes <code>m</code> and <code>c</code> are handled independently from each other; <code>m</code> is updated in the descending direction according to <code>dm</code>, and <code>c</code> at the same time is updated in the descending direction according to <code>dc</code> — but, with certain curved surfaces z = f(m, c), the gradient in a direction between axes <code>m</code> and <code>c</code> can have the opposite sign compared to <code>m</code> and <code>c</code> on their own, so, while updating any one of <code>m</code> or <code>c</code> would converge, updating both moves away from the optimum.</li>
<li>However, the more likely failure reason in this case of linear regression to a point cloud is the entirely arbitrary magnitude of the update to <code>m</code> and <code>c</code>, determined by the product of an obscure <em>learning rate</em> and the gradient. It is quite possible that such an update oversteps a minimum of the target function, or even that the overshoot repeats with growing amplitude in each iteration.</li>
</ul>
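<p>The overstepping described above can be checked directly: with km values around 10^5, the gradient in <code>m</code> is enormous relative to the gradient in <code>c</code>, so a fixed learning rate that suits one axis blows up the other. Standardising <code>x</code> first makes the same update rule converge. A NumPy sketch using a handful of the question's data points (the scaling is then undone to recover the slope and intercept in original units):</p>

```python
import numpy as np

# a few (km, price) rows from the question's dataset
data = np.array([
    [240000., 3650.], [139800., 3800.], [150500., 4400.],
    [89000., 5990.], [54000., 7990.], [22899., 7990.],
])
x, y = data[:, 0], data[:, 1]

# scale x to zero mean / unit variance so both gradients have comparable size
x_scaled = (x - x.mean()) / x.std()

m, c = 0.0, 0.0
lr = 0.1
n = float(len(x))
for _ in range(1000):
    y_pred = m * x_scaled + c
    dm = (-2.0 / n) * np.sum((y - y_pred) * x_scaled)
    dc = (-2.0 / n) * np.sum(y - y_pred)
    m -= lr * dm
    c -= lr * dc

# undo the scaling: y = (m/std) * x + (c - m*mean/std)
slope = m / x.std()
intercept = c - m * x.mean() / x.std()
```

<p>With the raw km values the same loop diverges; after scaling it settles on the least-squares line.</p>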
| 36
|
implement regression
|
How to implement stratified sampling in randomForest regression in R?
|
https://stackoverflow.com/questions/49197422/how-to-implement-stratified-sampling-in-randomforest-regression-in-r
|
<p>How can I implement stratified sampling in a randomForest <strong>regression</strong> in R? I know that the strata and sampsize parameters are used in randomForest classification problems, but I get
Error in { : task 1 failed - "sampsize should be of length one."</p>
<p>My data:</p>
<pre><code>x <- sample(1:10, 100, replace = TRUE)
y <- sample(1:20, 100, replace = TRUE)
Region <- sample(c('N', 'S'), 100, replace = TRUE)
df <- data.frame(x, y, Region)
</code></pre>
<p>My code:</p>
<pre><code>randomForest(x ~ y, data = df, sampsize = c(30,20), strata = df$Region)
</code></pre>
<p>My actual analysis has far worse imbalance between groups than even this. Thank you.</p>
| 37
|
|
implement regression
|
Implementing linear regression from scratch in python
|
https://stackoverflow.com/questions/74825378/implementing-linear-regression-from-scratch-in-python
|
<p>I'm trying to implement linear regression in Python using the following gradient descent formulas (note that these formulas are the partial derivatives):
<a href="https://i.sstatic.net/EQcjo.png" rel="nofollow noreferrer">slope</a>
<a href="https://i.sstatic.net/bhJsz.png" rel="nofollow noreferrer">y_intercept</a></p>
<p>But the code keeps giving me weird results. I think (I'm not sure) that the error is in the gradient_descent function.</p>
<pre><code>import numpy as np

class LinearRegression:
    def __init__(self, x: np.ndarray, y: np.ndarray):
        self.x = x
        self.m = len(x)
        self.y = y

    def calculate_predictions(self, slope: int, y_intercept: int) -> np.ndarray:  # Calculate y hat.
        predictions = []
        for x in self.x:
            predictions.append(slope * x + y_intercept)
        return predictions

    def calculate_error_cost(self, y_hat: np.ndarray) -> int:
        error_valuse = []
        for i in range(self.m):
            error_valuse.append((y_hat[i] - self.y[i]) ** 2)
        error = (1 / (2 * self.m)) * sum(error_valuse)
        return error

    def gradient_descent(self):
        costs = []
        # initialization values
        temp_w = 0
        temp_b = 0
        a = 0.001  # Learning rate
        while True:
            y_hat = self.calculate_predictions(slope=temp_w, y_intercept=temp_b)
            sum_w = 0
            sum_b = 0
            for i in range(len(self.x)):
                sum_w += (y_hat[i] - self.y[i]) * self.x[i]
                sum_b += (y_hat[i] - self.y[i])
            w = temp_w - a * ((1 / self.m) * sum_w)
            b = temp_b - a * ((1 / self.m) * sum_b)
            temp_w = w
            temp_b = b
            costs.append(self.calculate_error_cost(y_hat))
            try:
                if costs[-1] > costs[-2]:  # If global minimum reached
                    return [w, b]
            except IndexError:
                pass
</code></pre>
<p>I used this dataset:
<a href="https://www.kaggle.com/datasets/tanuprabhu/linear-regression-dataset?resource=download" rel="nofollow noreferrer">https://www.kaggle.com/datasets/tanuprabhu/linear-regression-dataset?resource=download</a></p>
<p>after downloading it like this:</p>
<pre><code>import pandas
p = pandas.read_csv('linear_regression_dataset.csv')
l = LinearRegression(x= p['X'] , y= p['Y'])
print(l.gradient_descent())
</code></pre>
<p>But it's giving me <code>[-568.1905905426412, -2.833321633515304]</code>, which is clearly not accurate.</p>
<p>I want to implement the algorithm not using external modules like scikit-learn for learning purposes.</p>
<p>I tested the calculate_error_cost function and it worked as expected, and I don't think there is an error in the calculate_predictions function.</p>
|
<p>One small problem you have is that you are returning the last values of w and b, when you should be returning the second-to-last parameters (because they yield a lower cost). This should not really matter that much... unless your learning rate is too high and you are immediately getting a higher value for the cost function on the second iteration. This I believe is your real problem, judging from the dataset you shared.</p>
<p>The algorithm does work on the dataset, but you need to <strong>change the learning rate</strong>. I ran it in the example below and it gave the result shown in the image. One caveat is that I added a limit to the iterations to avoid the algorithm from taking too long (and only marginally improving the result).</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

class LinearRegression:
    def __init__(self, x: np.ndarray, y: np.ndarray):
        self.x = x
        self.m = len(x)
        self.y = y

    def calculate_predictions(self, slope: int, y_intercept: int) -> np.ndarray:  # Calculate y hat.
        predictions = []
        for x in self.x:
            predictions.append(slope * x + y_intercept)
        return predictions

    def calculate_error_cost(self, y_hat: np.ndarray) -> int:
        error_valuse = []
        for i in range(self.m):
            error_valuse.append((y_hat[i] - self.y[i]) ** 2)
        error = (1 / (2 * self.m)) * sum(error_valuse)
        return error

    def gradient_descent(self):
        costs = []
        # initialization values
        temp_w = 0
        temp_b = 0
        iteration = 0
        a = 0.00001  # Learning rate
        while iteration < 1000:
            y_hat = self.calculate_predictions(slope=temp_w, y_intercept=temp_b)
            sum_w = 0
            sum_b = 0
            for i in range(len(self.x)):
                sum_w += (y_hat[i] - self.y[i]) * self.x[i]
                sum_b += (y_hat[i] - self.y[i])
            w = temp_w - a * ((1 / self.m) * sum_w)
            b = temp_b - a * ((1 / self.m) * sum_b)
            costs.append(self.calculate_error_cost(y_hat))
            try:
                if costs[-1] > costs[-2]:  # If global minimum reached
                    print(costs)
                    return [temp_w, temp_b]
            except IndexError:
                pass
            temp_w = w
            temp_b = b
            iteration += 1
        print(iteration)
        return [temp_w, temp_b]

p = pd.read_csv('linear_regression_dataset.csv')
x_data = p['X']
y_data = p['Y']
lin_reg = LinearRegression(x_data, y_data)
y_hat = lin_reg.calculate_predictions(*lin_reg.gradient_descent())

fig = plt.figure()
plt.plot(x_data, y_data, 'r.', label='Data')
plt.plot(x_data, y_hat, 'b-', label='Linear Regression')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Iipvy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iipvy.png" alt="Dataset and regression result" /></a></p>
| 38
|
implement regression
|
Linear Regression - Implementing Feature Scaling
|
https://stackoverflow.com/questions/56364169/linear-regression-implementing-feature-scaling
|
<p>I was trying to implement Linear Regression in Octave 5.1.0 on a data set relating the GRE score to the probability of Admission.
The data set is of the sort,</p>
<blockquote>
<p>337 0.92 <br>
324 0.76 <br>
316 0.72 <br>
322 0.8 <br>
. <br> . <br> . <br></p>
</blockquote>
<p>My main Program.m file looks like,</p>
<pre><code> % read the data
data = load('Admission_Predict.txt');
% initiate variables
x = data(:,1);
y = data(:,2);
m = length(y);
theta = zeros(2,1);
alpha = 0.01;
iters = 1500;
J_hist = zeros(iters,1);
% plot data
subplot(1,2,1);
plot(x,y,'rx','MarkerSize', 10);
title('training data');
% compute cost function
x = [ones(m,1), (data(:,1) ./ 300)]; % feature scaling
J = computeCost(x,y,theta);
% run gradient descent
[theta, J_hist] = gradientDescent(x,y,theta,alpha,iters);
hold on;
subplot(1,2,1);
plot((x(:,2) .* 300), (x*theta),'-');
xlabel('GRE score');
ylabel('Probability');
hold off;
subplot (1,2,2);
plot(1:iters, J_hist, '-b');
xlabel('no: of iteration');
ylabel('Cost function');
</code></pre>
<p>computeCost.m looks like,</p>
<pre><code> function J = computeCost(x,y,theta)
m = length(y);
h = x * theta;
J = (1/(2*m))*sum((h-y) .^ 2);
endfunction
</code></pre>
<p>and gradientDescent.m looks like,</p>
<pre><code> function [theta, J_hist] = gradientDescent(x,y,theta,alpha,iters)
m = length(y);
J_hist = zeros(iters,1);
for i=1:iters
diff = (x*theta - y);
theta = theta - (alpha * (1/(m))) * (x' * diff);
J_hist(i) = computeCost(x,y,theta);
endfor
endfunction
</code></pre>
<p>The graphs plotted then looks like this,</p>
<p><img src="https://i.sstatic.net/8EI34.png" alt="Graphs"></p>
<p>which you can see, doesn't feel right even though my Cost function seems to be minimized.</p>
<p>Can someone please tell me if this is right? If not, what am I doing wrong?</p>
|
<p>The easiest way to check whether your implementation is correct is to compare with a validated implementation of linear regression. I suggest using an alternative implementation approach like the one suggested <a href="https://www.lauradhamilton.com/tutorial-linear-regression-with-octave" rel="nofollow noreferrer">here</a>, and then comparing your results. If the fits match, then this is the best linear fit to your data and if they don't match, then there may be something wrong in your implementation.</p>
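<p>For a straight-line fit, the simplest validated reference is the closed-form least-squares solution itself; a gradient-descent implementation should converge to the same coefficients. A short NumPy sketch on hypothetical data:</p>

```python
import numpy as np

# toy data scattered tightly around y = 0.5*x + 2 (hypothetical example)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 0.5 * x + 2.0 + 0.01 * rng.standard_normal(50)

# closed-form least squares: minimise ||X @ theta - y||^2
X = np.column_stack([np.ones_like(x), x])  # [1, x] design matrix
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = theta
```

<p>Whatever iterative method you use should land on (essentially) these same two numbers.</p>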
| 39
|
implement regression
|
Is there an implement of focal loss for regression problem?
|
https://stackoverflow.com/questions/70449172/is-there-an-implement-of-focal-loss-for-regression-problem
|
<p>Proposed in RetinaNet, Focal Loss is an efficient solution to class imbalance in classification problems. But data imbalance can also occur in regression problems. Is there any implementation of focal loss for regression problems?</p>
| 40
|
|
implement regression
|
Logistic Regression Implementation
|
https://stackoverflow.com/questions/45705490/logistic-regression-implementation
|
<p>I am having some difficulties in implementing logistic regression, in terms of how I should proceed stepwise. According to what I have done so far, I am implementing it in the following way:</p>
<ul>
<li><p>First, taking <code>theta</code> to be an <code>n*1</code> vector of zeros, where n is the number of features. Now using this <code>theta</code> to compute the following
<code>htheta = sigmoid(theta' * X');</code><br>
<code>theta = theta - (alpha/m) * sum (htheta' - y)'*X</code></p></li>
<li><p>Now using the <code>theta</code> computed in the first step to compute the cost function<br>
<code>J= 1/m *((sum(-y*log(htheta))) - (sum((1-y) * log(1 - htheta)))) + lambda/(2*m) * sum(theta).^2</code></p></li>
<li><p>In the end computing the gradient<br>
<code>grad = (1/m) * sum ((sigmoid(X*theta) - y')*X);</code></p></li>
</ul>
<p>As I am taking <code>theta</code> to be zero, I am getting the same value of <code>J</code> throughout the vector. Is this the right output? </p>
|
<p>You are computing the gradient in the last step, but it has already been computed earlier, when the new <code>theta</code> was computed. Moreover, your definition of the cost function contains a regularization parameter, but this is not incorporated in the gradient computation. A working version without the regularization:</p>
<pre><code>% generate dummy data for testing
y = randi(2,[10,1]) - 1;
X = [ones(10,1) randn([10,1])];

% initialize
alpha = 0.1;
theta = zeros(1,size(X,2));
J = NaN(100,1);

% loop a fixed number of times => can improve this by stopping when the
% cost function no longer decreases
htheta = sigmoid(X*theta');
for n = 1:100
    grad = X' * (htheta - y);     % gradient
    theta = theta - alpha*grad';  % update theta
    htheta = sigmoid(X*theta');
    J(n) = sum(-y'*log(htheta)) - sum((1-y)' * log(1 - htheta)); % cost function
end
</code></pre>
<p>If you now plot the cost function, you will see (except for randomness) that it converges after about 15 iterations. </p>
| 41
|
implement regression
|
How to implement polynomial logistic regression in scikit-learn?
|
https://stackoverflow.com/questions/55937244/how-to-implement-polynomial-logistic-regression-in-scikit-learn
|
<p>I'm trying to create a non-linear logistic regression, i.e. polynomial logistic regression, using scikit-learn. But I couldn't find how to set the degree of the polynomial. Did anybody try it?
Thanks a lot!</p>
|
<p>For this you will need to proceed in two steps. Let us assume you are using the iris dataset (so you have a reproducible example):</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
data = load_iris()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
</code></pre>
<h1>Step 1</h1>
<p>First you need to convert your data to polynomial features. Originally, our data has 4 columns:</p>
<pre><code>X_train.shape
>>> (112,4)
</code></pre>
<p>You can create the polynomial features with scikit-learn (here it is for degree 2):</p>
<pre><code>poly = PolynomialFeatures(degree = 2, interaction_only=False, include_bias=False)
X_poly = poly.fit_transform(X_train)
X_poly.shape
>>> (112,14)
</code></pre>
<p>We now have 14 features (the original 4, their squares, and the 6 pairwise products)</p>
<h1>Step 2</h1>
<p>On this you can now build your logistic regression, fitting on <code>X_poly</code>:</p>
<pre><code>lr = LogisticRegression()
lr.fit(X_poly,y_train)
</code></pre>
<p>Note: if you then want to evaluate your model on the test data, you also need to follow these 2 steps and do:</p>
<pre><code>lr.score(poly.transform(X_test), y_test)
</code></pre>
<h1>Putting everything together in a Pipeline (optional)</h1>
<p>You may want to use a Pipeline instead, which chains these two steps in one object and avoids building intermediate objects:</p>
<pre><code>pipe = Pipeline([('polynomial_features',poly), ('logistic_regression',lr)])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
</code></pre>
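<p>As a side note, the feature count above follows from combinatorics: with <code>n</code> input features and degree <code>d</code>, the number of monomials of total degree at most <code>d</code> is C(n+d, d), minus one when the bias column is excluded, which for n=4, d=2 gives the 14 columns seen earlier. A quick standard-library check (no scikit-learn required; the helper name is my own):</p>

```python
from math import comb

def n_poly_features(n_features, degree, include_bias=False):
    """Number of monomials of total degree <= degree in n_features variables."""
    total = comb(n_features + degree, degree)  # includes the constant term
    return total if include_bias else total - 1
```

<p>This is handy for predicting how wide <code>PolynomialFeatures</code> output will be before fitting anything.</p>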
| 42
|
implement regression
|
Trying to implement Logistic Regression with stochastic gradient descent but getting KeyError
|
https://stackoverflow.com/questions/65860488/trying-to-implement-logistic-regression-with-stochastic-gradient-descent-but-get
|
<p>I'm trying to implement logistic regression with SGD. My dataset is split into <code>train_features.npy</code>, <code>train_labels.npy</code>, <code>test_features.npy</code>, <code>test_labels.npy</code>. My SGD function is here:</p>
<pre><code>def stoch_grd_dcnt(train, batch_size, n0, n1, max_epoch, delta):
    # getting the total number of classes
    k = train['Category'].nunique()
    theta = np.zeros((train.shape[1], k-1))
    theta_list = []
    theta_list.append(theta)
    loss_list = []
    loss_list.append(loss(theta, train))

    for epoch in range(max_epoch):
        print(epoch+1, "Epochs started.")
        n = n0/(n1 + epoch)
        train = train.sample(frac=1, random_state=102)
        # dividing the training set into batches
        num_batch = int(train.shape[0]/batch_size)
        batches = np.split(train, num_batch, axis=0)
        # theta update using gradient descent
        for i in range(num_batch):
            theta = np.array(grad_desc(np.array(theta), n, batches[i], k))
            theta_list.append(theta)
        # computing and storing the loss function using new theta
        loss_list.append(loss(theta, train))
        if (loss_list[-1] > (1 - delta) * loss_list[-2] and epoch > 2):
            print("Not much progress, terminate")
            break
        print(epoch+1, "Epochs completed.")
    return theta_list, loss_list
</code></pre>
<p>But when I try to run the parameters part after the whole implementation, I get a <code>KeyError</code>:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/anaconda3/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2894 try:
-> 2895 return self._engine.get_loc(casted_key)
2896 except KeyError as err:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Category'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
<ipython-input-45-a1c0f72fef45> in <module>
1 # 1. Run the first stochastic gradient descent on the training example
----> 2 params = stoch_grd_dcnt(Train, 16, 0.1, 1, 1000, 0.00001)
3 theta_list = params[0]
4 loss_list = params[1]
5 #>> Number of epochs = 5
<ipython-input-34-d48e44f19d71> in stoch_grd_dcnt(train, batch_size, n0, n1, max_epoch, delta)
1 def stoch_grd_dcnt(train, batch_size, n0, n1, max_epoch, delta):
2 # getting the total number of classes
----> 3 k = Train['Category'].nunique()
4 theta = np.zeros((train.shape[1], k-1))
5 theta_list = []
~/anaconda3/lib/python3.8/site-packages/pandas/core/frame.py in __getitem__(self, key)
2900 if self.columns.nlevels > 1:
2901 return self._getitem_multilevel(key)
-> 2902 indexer = self.columns.get_loc(key)
2903 if is_integer(indexer):
2904 indexer = [indexer]
~/anaconda3/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2895 return self._engine.get_loc(casted_key)
2896 except KeyError as err:
-> 2897 raise KeyError(key) from err
2898
2899 if tolerance is not None:
KeyError: 'Category'
</code></pre>
<p>How can I fix it?</p>
| 43
|
|
implement regression
|
How to implement Softmax regression with pytorch?
|
https://stackoverflow.com/questions/64783744/how-to-implement-softmax-regression-with-pytorch
|
<p>I am working on a uni assignment where I need to implement <code>Softmax Regression</code> with <code>Pytorch</code>. The assignment says:</p>
<pre><code>Implement Softmax Regression as an nn.Module and pipe its output with torch.nn.Softmax.
</code></pre>
<p>As I am new to pytorch, I am not sure how to do it exactly. So far I have tried:</p>
<pre><code>class SoftmaxRegression(nn.Module):  # inheriting from nn.Module!

    def __init__(self, num_labels, num_features):
        super(SoftmaxRegression, self).__init__()
        self.linear = torch.nn.Linear(num_labels, num_features)

    def forward(self, x):
        # should return the probabilities for the classes, e.g.
        # tensor([[ 0.1757,  0.3948,  0.4295],
        #         [ 0.0777,  0.3502,  0.5721],
        #         ...
        # not sure what to do here
</code></pre>
<p>Does anybody have any idea how I could go about it? I am not sure what should be written in the <code>forward</code> method. I appreciate any help!</p>
|
<p>As far as I understand, the assignment wants you to implement your own version of the Softmax function. But I didn't get what you mean by <code>and pipe its output with torch.nn.Softmax</code>. Are they asking you to return the output of your custom Softmax along with <code>torch.nn.Softmax</code> from your custom <code>nn.Module</code>? You could do this:</p>
<pre class="lang-py prettyprint-override"><code>class SoftmaxRegression(nn.Module):
    def __init__(self, dim=0):
        super(SoftmaxRegression, self).__init__()
        self.dim = dim

    def forward(self, x):
        # subtracting a constant per row keeps exp() numerically stable
        means = torch.mean(x, self.dim, keepdim=True)
        exp_x = torch.exp(x - means)
        sum_exp_x = torch.sum(exp_x, self.dim, keepdim=True)
        value = exp_x / sum_exp_x
        return value
</code></pre>
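<p>A small NumPy reference can sanity-check a custom Softmax like the one above. Subtracting the row-wise maximum (rather than the mean) is the conventional stabilisation; since Softmax is invariant to any constant shift of its input, either choice yields the same probabilities:</p>

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    shifted = x - np.max(x, axis=axis, keepdims=True)  # guards exp() against overflow
    exp_x = np.exp(shifted)
    return exp_x / np.sum(exp_x, axis=axis, keepdims=True)
```

<p>Each row of the output sums to 1, and even inputs in the thousands stay finite thanks to the shift.</p>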
| 44
|
implement regression
|
How to implement Logistic regression with lasso or ridge?
|
https://stackoverflow.com/questions/60479134/how-to-implement-logistic-regression-with-lasso-or-ridge
|
<p>I have to implement a logistic regression, regularise it with lasso or ridge, and then calculate the RSS, choosing the model that shrinks to 10 features or fewer.
I tried using the glmnet package:</p>
<pre><code>x <- Train_set %>% dplyr::select(-Diag) %>% as.matrix()
y <- Train_set$Diag %>% as.matrix()
Test_matrix <- model.matrix(Diag ~.,Test_set)[,-1]
Lasso <- cv.glmnet(x,y,alpha =1,family = "binomial")
</code></pre>
<p>Seeing that the desired model was the one at 1 standard error, I chose that. To get my prediction I used:</p>
<pre><code>Lasso_1std <- glmnet(x, y, alpha = 1, family = "binomial", lambda = Lasso$lambda.1se)
lasso_pred <- Lasso_1std %>% predict(newx = Test_matrix)
#.....RSS_lasso
(Output_lasso <- data.frame(Test_set$Diag,lasso_pred))
#..........................RSS
(RSS_lasso <-(Output_lasso$Test_set.Diag - Output_lasso$s0)^2 %>% sum )
</code></pre>
<p>I do not know if I'm doing it the right way, because I get 250 as the RSS. A further question is whether there is a way to do the same with <code>caret::train</code>.</p>
| 45
|
|
implement regression
|
Multiple linear regression in Matlab R2014a
|
https://stackoverflow.com/questions/30751080/multiple-linear-regression-in-matlab-r2014a
|
<p>Is there any function that does this? I keep searching and the closest match is <strong>regression</strong>, but it's for the simple linear regression. In Matlab R2015a they have implemented <strong>regress</strong>, but I don't have that version.</p>
<p>Thanks.</p>
|
<p>Both <code>fitlm</code> and <code>regress</code> do the same thing.</p>
| 46
|
implement regression
|
Is LASSO regression implemented in Statsmodels?
|
https://stackoverflow.com/questions/43446919/is-lasso-regression-implemented-in-statsmodels
|
<p>I would love to use a linear LASSO regression within statsmodels, so as to be able to use the 'formula' notation for writing the model; that would save me quite some coding time when working with many categorical variables and their interactions. However, it seems like it is not implemented yet in statsmodels?</p>
|
<p>Lasso is indeed implemented in statsmodels. The documentation is given at the URL below:</p>
<p><a href="http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.fit_regularized.html" rel="noreferrer">http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.fit_regularized.html</a></p>
<p>To be precise, the implementation in statsmodels has both L1 and L2 regularization, with their relative weight indicated by the <code>L1_wt</code> parameter. You should look at the formula at the bottom to make sure you are doing exactly what you want to do.</p>
<p>Besides the elastic net implementation, there is also a square root Lasso method implemented in statsmodels. </p>
| 47
|
implement regression
|
Neural Network for Regression using PyTorch
|
https://stackoverflow.com/questions/72874433/neural-network-for-regression-using-pytorch
|
<p>I am trying to implement a Neural Network for predicting the h1_hemoglobin in PyTorch. After creating a model, I kept 1 in the output layer as this is Regression. But I got the error as below. I'm not able to understand the mistake. Keeping a large value like 100 in the output layer removes the error but renders the model useless as I am trying to implement regression.</p>
<p>Data:
<a href="https://i.sstatic.net/rvJDX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rvJDX.png" alt="Data" /></a></p>
<pre><code>from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
##### Creating Tensors
X_train=torch.tensor(X_train)
X_test=torch.tensor(X_test)
y_train=torch.LongTensor(y_train)
y_test=torch.LongTensor(y_test)
</code></pre>
<pre><code>class ANN_Model(nn.Module):
    def __init__(self, input_features=4, hidden1=20, hidden2=20, out_features=1):
        super().__init__()
        self.f_connected1 = nn.Linear(input_features, hidden1)
        self.f_connected2 = nn.Linear(hidden1, hidden2)
        self.out = nn.Linear(hidden2, out_features)

    def forward(self, x):
        x = F.relu(self.f_connected1(x))
        x = F.relu(self.f_connected2(x))
        x = self.out(x)
        return x

model = ANN_Model()  # the original snippet does not show this instantiation
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

epochs = 500
final_losses = []
for i in range(epochs):
    i = i + 1
    y_pred = model.forward(X_train.float())
    loss = loss_function(y_pred, y_train)
    final_losses.append(loss.item())
    if i % 10 == 1:
        print("Epoch number: {} and the loss: {}".format(i, loss.item()))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</code></pre>
<p>Error:
<a href="https://i.sstatic.net/mRoIB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mRoIB.png" alt="enter image description here" /></a></p>
|
<p>Your response variable h1_hemoglobin looks like a continuous response variable. If that's the case, please change the torch tensor type for <code>y_train</code> and <code>y_test</code> from <code>LongTensor</code> to <code>FloatTensor</code> or <code>DoubleTensor</code>.</p>
<p>According to the PyTorch docs, <code>CrossEntropyLoss</code> is useful for classification problems with a number of classes. Try changing your <code>loss_function</code> from <code>CrossEntropyLoss</code> to one more suitable for your continuous response variable <code>h1_hemoglobin</code>.</p>
<p>In your case, the following might do it.</p>
<pre><code>y_train=torch.DoubleTensor(y_train)
y_test=torch.DoubleTensor(y_test)
...
...
loss_function = nn.MSELoss()
</code></pre>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss" rel="nofollow noreferrer">Pytorch MSELoss</a></p>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html?highlight=crossentropy#torch.nn.CrossEntropyLoss" rel="nofollow noreferrer">Pytorch CrossEntropyLoss</a></p>
| 48
|
implement regression
|
Problem implementing Partial Least Squares Regression
|
https://stackoverflow.com/questions/73818502/problem-implementing-partial-least-squares-regression
|
<p>I’m trying to implement the Partial Least Squares Regression model in R using the <code>pls</code> package. The following data frame represents my data, where I have two response variables and three explanatory variables.</p>
<pre><code>library(pls)
df <- data.frame(x1 = rnorm(20),
x2 = rnorm(20),
x3 = rnorm(20),
y1 = rnorm(20),
y2 = rnorm(20))
model <- plsr(c(y1, y2) ~ x1+x2+x3, ncomp = 3, scale = TRUE, data = df)
</code></pre>
<p>However, when I run the model, I always get an error message.</p>
<blockquote>
<p>Error in model.frame.default(formula = c(y1, y2) ~ x1 + x2 + x3, data
= df) : variable lengths differ (found for 'x1')</p>
</blockquote>
<p>I believe that the data frame is not correctly structured. Would anyone have any suggestions?</p>
| 49
|
|
implement regression
|
Implementing lasso regression using TensorFlow
|
https://stackoverflow.com/questions/39815728/implementing-lasso-regression-using-tensorflow
|
<p>I want to run Lasso regression using TensorFlow. As the Lasso regression is simply adding L1-norm to the cost, I am going to define my cost term as</p>
<pre><code>cost = RSS + tf.nn.l1_norm(Weight) or
cost = RSS + tf.reduce_sum(tf.abs(Weight))
train = tf.train.GradientDescentOptimizer(cost)
</code></pre>
<p>Does the above code work as a Lasso regression? One of my questions is whether I can use gradient descent in this case.
Lasso regression has a non-differentiable point, and coordinate descent, which is not available in the TensorFlow library, is widely used instead.</p>
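<p>Gradient descent on the raw L1 term is shaky at the non-differentiable point; a common alternative is proximal gradient descent (ISTA), which handles the L1 term with a soft-thresholding step. A minimal single-weight sketch in plain Python (synthetic data, not TensorFlow):</p>

```python
def soft_threshold(w, t):
    # proximal operator of t * |w|: shrink w toward zero by t
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

def lasso_1d(xs, ys, lam, eta=0.05, iters=500):
    """ISTA for min_w 0.5 * sum((w*x - y)^2) + lam * |w|."""
    w = 0.0
    for _ in range(iters):
        grad = sum((w * x - y) * x for x, y in zip(xs, ys))
        w = soft_threshold(w - eta * grad, eta * lam)
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # true slope is 2
w_ols = lasso_1d(xs, ys, lam=0.0)            # no penalty: ordinary least squares
w_sparse = lasso_1d(xs, ys, lam=100.0)       # large penalty drives w to exactly 0
```

<p>The soft-thresholding step is what produces exact zeros, which plain subgradient descent on <code>|w|</code> does not.</p>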
| 50
|
|
implement regression
|
Implement a Regression/Classification Neural Network in Python
|
https://stackoverflow.com/questions/47626075/implement-a-regression-classification-neural-network-in-python
|
<h3>Problem Context</h3>
<p>I am trying to learn Neural Networks in Python, & I've developed an implementation of a Logistic Regression based NN.</p>
<p>Here is the code - </p>
<pre><code>import numpy as np
# Input array
X = np.array([[1, 0, 1, 0], [1, 0, 1, 1], [0, 1, 0, 1]])
# Output
y = np.array([[1], [1], [0]])
# Sigmoid Function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Derivative of Sigmoid Function
def ddx_sigmoid(x):
return x * (1 - x)
##### Initialization - BEGIN #####
# Setting training iterations
iterations_max = 500000
# Learning Rate
alpha = 0.5
# Number of Neruons in Input Layer = Number of Features in the data set
inputlayer_neurons = X.shape[1]
# Number of Neurons in the Hidden Layer
hiddenlayer_neurons = 3 # number of hidden layers neurons
# Number of Neurons at the Output Layer
output_neurons = 1 # number of neurons at output layer
# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))
##### Initialization - END #####
# Printing of shapes
print "\nShape X: ", X.shape, "\nShape Y: ", y.shape
print "\nShape WH: ", wh.shape, "\nShape BH: ", bh.shape, "\nShape Wout: ", wout.shape, "\nShape Bout: ", bout.shape
# Printing of Values
print "\nwh:\n", wh, "\n\nbh: ", bh, "\n\nwout:\n", wout, "\n\nbout: ", bout
##### TRAINING - BEGIN #####
for i in range(iterations_max):
##### Forward Propagation - BEGIN #####
# Input to Hidden Layer = (Dot Product of Input Layer and Weights) + Bias
hidden_layer_input = (np.dot(X, wh)) + bh
# Activation of input to Hidden Layer by using Sigmoid Function
hiddenlayer_activations = sigmoid(hidden_layer_input)
# Input to Output Layer = (Dot Product of Hidden Layer Activations and Weights) + Bias
output_layer_input = np.dot(hiddenlayer_activations, wout) + bout
# Activation of input to Output Layer by using Sigmoid Function
output = sigmoid(output_layer_input)
##### Forward Propagation - END #####
##### Backward Propagation - BEGIN #####
E = y - output
slope_output_layer = ddx_sigmoid(output)
slope_hidden_layer = ddx_sigmoid(hiddenlayer_activations)
d_output = E * slope_output_layer
Error_at_hidden_layer = d_output.dot(wout.T)
d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
wout += hiddenlayer_activations.T.dot(d_output) * alpha
bout += np.sum(d_output, axis=0, keepdims=True) * alpha
wh += X.T.dot(d_hiddenlayer) * alpha
bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha
##### Backward Propagation - END #####
##### TRAINING - END #####
print "\nOutput is:\n", output
</code></pre>
<p>This code is working perfectly in the cases where the output is binary (0,1). I guess, this is because of the sigmoid function I am using.</p>
<h3>Problem</h3>
<p>Now, I want to scale this code so that it can handle linear regression as well.</p>
<p>As we know, the <code>scikit</code> library has some preloaded datasets which can be used for classification and regression.</p>
<p>I want my NN to train and test the <code>diabetes</code> dataset.</p>
<p>With this in mind, I have modified my code as follows - </p>
<pre><code>import numpy as np
from sklearn import datasets
# Sigmoid Function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Derivative of Sigmoid Function
def ddx_sigmoid(x):
return x * (1 - x)
# Load Data
def load_data():
diabetes_data = datasets.load_diabetes()
return diabetes_data
input_data = load_data()
X = input_data.data
# Reshape Output
y = input_data.target
y = y.reshape(len(y), 1)
iterations_max = 1000
# Learning Rate
alpha = 0.5
# Number of Neruons in Input Layer = Number of Features in the data set
inputlayer_neurons = X.shape[1]
# Number of Neurons in the Hidden Layer
hiddenlayer_neurons = 5 # number of hidden layers neurons
# Number of Neurons at the Output Layer
output_neurons = 3 # number of neurons at output layer
# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))
##### TRAINING - BEGIN #####
for i in range(iterations_max):
##### Forward Propagation - BEGIN #####
# Input to Hidden Layer = (Dot Product of Input Layer and Weights) + Bias
hidden_layer_input = (np.dot(X, wh)) + bh
# Activation of input to Hidden Layer by using Sigmoid Function
hiddenlayer_activations = sigmoid(hidden_layer_input)
# Input to Output Layer = (Dot Product of Hidden Layer Activations and Weights) + Bias
output_layer_input = np.dot(hiddenlayer_activations, wout) + bout
# Activation of input to Output Layer by using Sigmoid Function
output = sigmoid(output_layer_input)
##### Forward Propagation - END #####
##### Backward Propagation - BEGIN #####
E = y - output
slope_output_layer = ddx_sigmoid(output)
slope_hidden_layer = ddx_sigmoid(hiddenlayer_activations)
d_output = E * slope_output_layer
Error_at_hidden_layer = d_output.dot(wout.T)
d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
wout += hiddenlayer_activations.T.dot(d_output) * alpha
bout += np.sum(d_output, axis=0, keepdims=True) * alpha
wh += X.T.dot(d_hiddenlayer) * alpha
bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha
##### Backward Propagation - END #####
##### TRAINING - END #####
print "\nOutput is:\n", output
</code></pre>
<p>The output of this code is - </p>
<pre><code>Output is:
[[ 1. 1. 1.]
[ 1. 1. 1.]
[ 1. 1. 1.]
...,
[ 1. 1. 1.]
[ 1. 1. 1.]
[ 1. 1. 1.]]
</code></pre>
<p>Obviously, I am goofing up somewhere in the basics.</p>
<p>Is this because I am using the sigmoid function for the hidden as well as the output layer?</p>
<p>What kind of functions shall I use so that I get a valid output, which can be used to train my NN efficiently?</p>
<h3>Efforts so far</h3>
<p>I have tried to use the TANH function, the SOFTPLUS function as activation to both the layers, without any success.</p>
<p>Can someone out here please help?</p>
<p>I tried Googling this but the explanations there are very complex.</p>
<p>HELP !</p>
|
<p>You should try removing the sigmoid function from your output layer.</p>
<p>For linear regression the output range may be large, while the output of a sigmoid or tanh function is confined to [0, 1] or [-1, 1], which makes minimizing the error function impossible. </p>
<p>======= UPDATE ======</p>
<p>I try to accomplish it in tensorflow, the core part for it:</p>
<pre><code>w = tf.Variable(tf.truncated_normal([features, FLAGS.hidden_unit], stddev=0.35))
b = tf.Variable(tf.zeros([FLAGS.hidden_unit]))
# right way
y = tf.reduce_sum(tf.matmul(x, w) + b, 1)
# wrong way: as sigmoid output in [-1, 1]
# y = tf.sigmoid(tf.matmul(x, w) + b)
mse_loss = tf.reduce_sum(tf.pow(y - y_, 2)) / 2
</code></pre>
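<p>The range limitation is easy to verify in plain Python: whatever the pre-activation, a sigmoid output never exceeds 1, so the squared error against a large regression target has a hard floor (illustrative numbers only):</p>

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

target = 50.0
# scan a wide range of pre-activations; sigmoid saturates near 1
best_error = min((target - sigmoid(z)) ** 2 for z in range(-20, 21))
# even at saturation the error cannot drop below (target - 1)^2 = 2401
```
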
| 51
|
implement regression
|
Linear regression implementation in Octave
|
https://stackoverflow.com/questions/54104134/linear-regression-implementation-in-octave
|
<p>I recently tried implementing linear regression in octave and couldn't get past the online judge. Here's the code</p>
<pre><code>function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
for i = 1:m
temp1 = theta(1)-(alpha/m)*(X(i,:)*theta-y(i,:));
temp2 = theta(2)-(alpha/m)*(X(i,:)*theta-y(i,:))*X(i,2);
theta = [temp1;temp2];
endfor
J_history(iter) = computeCost(X, y, theta);
end
end
</code></pre>
<p>I am aware of the vectorized implementation but just wanted to try the iterative method. Any help would be appreciated.</p>
|
<p>You don't need the inner <code>for</code> loop. Instead you can use the <code>sum</code> function.</p>
<p>In code:</p>
<pre><code>for iter = 1:num_iters
j= 1:m;
temp1 = sum((theta(1) + theta(2) .* X(j,2)) - y(j));
temp2 = sum(((theta(1) + theta(2) .* X(j,2)) - y(j)) .* X(j,2));
theta(1) = theta(1) - (alpha/m) * (temp1 );
theta(2) = theta(2) - (alpha/m) * (temp2 );
J_history(iter) = computeCost(X, y, theta);
end
</code></pre>
<p>It'd be a good exercise to implement the vectorized solution too and then compare both of them to see in practice how much efficiency the vectorization gains.</p>
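<p>For reference, a possible numpy translation of the same vectorized update (illustrative only, not the Octave code above):</p>

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Vectorized batch gradient descent for linear regression."""
    m = len(y)
    for _ in range(num_iters):
        # all training examples at once: no inner loop over i
        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
    return theta

# tiny sanity check on y = 1 + 2*x
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 1.0 + 2.0 * np.arange(5.0)
theta = gradient_descent(X, y, np.zeros(2), alpha=0.1, num_iters=5000)
```
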
| 52
|
implement regression
|
Linear Regression\Gradient Descent python implementation
|
https://stackoverflow.com/questions/14993454/linear-regression-gradient-descent-python-implementation
|
<p>I'm trying to implement linear regression using the gradient descent method from scratch for learning purposes. One part of my code is really bugging me. For some reason the variable <code>x</code> is being altered after I run a line of code and I'm not sure why. </p>
<p>The variables are as follow. <code>x</code> and <code>y</code> are numpy arrays and I've given them random numbers for this example.</p>
<pre><code>x = np.array([1, 2, 3, 4, ...., n])
y = np.array([1, 2, 3, , ...., n])
theta = [0, 0]
alpha = .01
m = len(x)
</code></pre>
<p>The code is:</p>
<pre><code>theta[0] = theta[0] - alpha*1/m*sum([((theta[0]+theta[1]*x) - y)**2 for (x,y) in zip(x,y)])
</code></pre>
<p>Once I run the above code, <code>x</code> is no longer a list; it holds only the last element of the original list. </p>
|
<p>What is happening is that Python is computing the list <code>zip(x,y)</code>, then each iteration of your for loop is overwriting <code>(x,y)</code> with the corresponding element of <code>zip(x,y)</code>. When your for loop terminates, <code>(x,y)</code> contains <code>zip(x,y)[-1]</code>.</p>
<p>Try </p>
<pre><code>theta[0] = theta[0] - alpha*1/m*sum([((theta[0]+theta[1]*xi) - yi)**2 for (xi,yi) in zip(x,y)])
</code></pre>
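<p>A small demonstration of the shadowing (in Python 2, list comprehensions leak their loop variable; in Python 3 they don't, but a plain <code>for</code> statement still does):</p>

```python
x = [1, 2, 3]
y = [4, 6, 8]
total = 0
for (x, y) in zip(x, y):   # rebinds the names x and y on each iteration
    total += (x - y) ** 2
# after the loop, x and y hold the last pair from zip, not the original lists
```
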
| 53
|
implement regression
|
code for h2o ensemble implementation in r for regression in r
|
https://stackoverflow.com/questions/48490900/code-for-h2o-ensemble-implementation-in-r-for-regression-in-r
|
<p>I have searched different portals and even the h2o ensemble documentation, and all I have found are ensemble examples for binary classification problems, but not a single example showing how to implement general stacking or h2o ensembling for a simple regression problem in R.</p>
<p>I request anyone to please share working code on how to implement h2o ensemble or stacking only for regression problem in R </p>
<p>OR </p>
<p>simple ensembling only meant for regression in R.</p>
<p>Only want to know how ensembling/stacking is implemented for regression with varying weights.</p>
|
<p>Here's an example of building a stacked ensemble for a regression problem (predicting age) in R:</p>
<pre><code>library('h2o')
h2o.init()
files3 = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/prostate/prostate.csv"
col_types <- c("Numeric","Numeric","Numeric","Enum","Enum","Numeric","Numeric","Numeric","Numeric")
dat <- h2o.importFile(files3,destination_frame = "prostate.hex",col.types = col_types)
ss <- h2o.splitFrame(dat, ratios = 0.8, seed = 1)
train <- ss[[1]]
test <- ss[[2]]
x <- c("CAPSULE","GLEASON","RACE","DPROS","DCAPS","PSA","VOL")
y <- "AGE"
nfolds <- 5
# Train & Cross-validate a GBM
my_gbm <- h2o.gbm(x = x,
y = y,
training_frame = train,
distribution = "gaussian",
max_depth = 3,
learn_rate = 0.2,
nfolds = nfolds,
fold_assignment = "Modulo",
keep_cross_validation_predictions = TRUE,
seed = 1)
# Train & Cross-validate a RF
my_rf <- h2o.randomForest(x = x,
y = y,
training_frame = train,
ntrees = 30,
nfolds = nfolds,
fold_assignment = "Modulo",
keep_cross_validation_predictions = TRUE,
seed = 1)
# Train & Cross-validate a extremely-randomized RF
my_xrf <- h2o.randomForest(x = x,
y = y,
training_frame = train,
ntrees = 50,
histogram_type = "Random",
nfolds = nfolds,
fold_assignment = "Modulo",
keep_cross_validation_predictions = TRUE,
seed = 1)
# Train a stacked ensemble using the models above
stack <- h2o.stackedEnsemble(x = x,
y = y,
training_frame = train,
validation_frame = test, #also test that validation_frame is working
model_id = "my_ensemble_gaussian",
base_models = list(my_gbm@model_id, my_rf@model_id, my_xrf@model_id))
# predict
pred <- h2o.predict(stack, newdata = test)
</code></pre>
| 54
|
implement regression
|
Understanding this implementation of logistic regression
|
https://stackoverflow.com/questions/41959431/understanding-this-implementation-of-logistic-regression
|
<p>Following this example implementation of Logistic Regression from scikit-learn :
<a href="https://analyticsdataexploration.com/logistic-regression-using-python/" rel="nofollow noreferrer">https://analyticsdataexploration.com/logistic-regression-using-python/</a></p>
<p>After running predict , the following is produced :</p>
<pre><code>predictions=modelLogistic.predict(test[predictor_Vars])
predictions
array([0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1,
0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0,
0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0,
1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0,
1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1,
0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0,
1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1,
0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1,
0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1,
1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0,
1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0,
1, 0, 0, 0], dtype=int64)
</code></pre>
<p>I'm failing to understand the <code>array</code> values. I think they are related to logistic function and are outputting what it thinks the label is but should these values be between 0 and 1 instead of 0 or 1 ?</p>
<p>Reading the doc for <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict" rel="nofollow noreferrer">predict</a> function : </p>
<pre><code>predict(X)
Predict class labels for samples in X.
Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Samples.
Returns:
C : array, shape = [n_samples]
Predicted class label per sample.
</code></pre>
<p>Taking the first 5 values : 0, 1, 0, 0, 1 of the returned array how are these interpreted as labels ?</p>
<p>Complete code :</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn import cross_validation
import matplotlib.pyplot as plt
%matplotlib inline
train=pd.read_csv('/train.csv')
test=pd.read_csv('/test.csv')
def data_cleaning(train):
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Fare"] = train["Age"].fillna(train["Fare"].median())
train["Embarked"] = train["Embarked"].fillna("S")
train.loc[train["Sex"] == "male", "Sex"] = 0
train.loc[train["Sex"] == "female", "Sex"] = 1
train.loc[train["Embarked"] == "S", "Embarked"] = 0
train.loc[train["Embarked"] == "C", "Embarked"] = 1
train.loc[train["Embarked"] == "Q", "Embarked"] = 2
return train
train=data_cleaning(train)
test=data_cleaning(test)
predictor_Vars = [ "Sex", "Age", "SibSp", "Parch", "Fare"]
X, y = train[predictor_Vars], train.Survived
X.iloc[:5]
y.iloc[:5]
modelLogistic = linear_model.LogisticRegression()
modelLogisticCV= cross_validation.cross_val_score(modelLogistic,X,y,cv=15)
modelLogistic = linear_model.LogisticRegression()
modelLogistic.fit(X,y)
#predict(X) Predict class labels for samples in X.
predictions=modelLogistic.predict(test[predictor_Vars])
</code></pre>
<p>Update : </p>
<p>printing first 10 elements from the test dataset : </p>
<p><a href="https://i.sstatic.net/jcolf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jcolf.png" alt="enter image description here"></a></p>
<p>Can see it matches the predictions of first 10 elements of array : </p>
<pre><code>0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0
</code></pre>
<p>So these are the logistic regression predictions on the <code>test</code> dataset after applying logistic regression to the <code>train</code> dataset.</p>
|
<p>As stated in the documentation the values returned by the <code>predict</code> function are class labels (like the values you provided to the <code>fit</code> function as y). In your case 1 for survived and 0 for not survived.</p>
<p>If you want probability estimates for each prediction you should use <code>predict_proba</code>, which returns values between 0 and 1; <code>decision_function</code> instead returns the unbounded signed distance to the decision boundary.</p>
<p>I hope this answers your question.</p>
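<p>A minimal sketch of the label-vs-score distinction in plain Python (a hand-set single-feature model, not the fitted Titanic model above): the probability is a value strictly between 0 and 1, and the class label is simply that probability thresholded at 0.5:</p>

```python
import math

w, b = 2.0, -1.0   # hypothetical learned weight and bias

def proba(x):
    """P(class 1 | x): logistic of the linear score."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def predict(x):
    """Class label: probability thresholded at 0.5."""
    return 1 if proba(x) >= 0.5 else 0

p = proba(1.0)       # a value strictly between 0 and 1
label = predict(1.0) # a hard 0/1 label, like the array in the question
```
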
| 55
|
implement regression
|
Implementing softmax regression
|
https://stackoverflow.com/questions/39988084/implementing-softmax-regression
|
<p>I am trying to make a neural network using softmax regression. I am using the following regression formula:</p>
<p><a href="https://i.sstatic.net/oaAL7.png" rel="nofollow"><img src="https://i.sstatic.net/oaAL7.png" alt="enter image description here"></a></p>
<p>Lets say I have an input of 1000x100. In other words, lets say I have 1000 images each of dimensions 10x10. Now, let's say the images are images of letters from A, B, C, D, E, F, G, H, I, J and I'm trying to predict this. My design is the following: to have 100 inputs (each image) and 10 outputs.</p>
<p>I have the following doubts. Given that n is a superscript in x^n, with regard to the numerator, should I perform the dot product of w (w = weights whose dimensions are 10x100 - 10 representing the number of outputs and 100 representing the number of inputs) and a single x (a single image) or all the images combined (1000x100)? I am coding in python, and if I take the dot product of w and x^T (10x100 dot 100x1000), I am not sure how I can make that an exponent. I am using numpy. I am having a hard time wrapping my mind around how these matrices can be raised to an exponent.</p>
|
<p>If you are training Neural Networks, it might be worthwhile to check the <a href="http://deeplearning.net/tutorial/mlp.html" rel="nofollow">Theano</a> library. It features various output thresholding functions like <em>tanh</em>, <em>softmax</em>, etc., and allows training of neural networks on a GPU.</p>
<p>Also, x^n is the output of the last layer in the above formula, not the input raised to some exponent. You can't put a matrix in an exponent.</p>
<p>You should check more about softmax regression. <a href="http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/" rel="nofollow">This</a> might be of help.</p>
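<p>For reference, the softmax itself exponentiates element-wise (never a matrix raised to a power); a numerically stable sketch in plain Python, where <code>z</code> would be one vector of class scores (e.g. the 10 scores of one image):</p>

```python
import math

def softmax(z):
    """Softmax of one score vector z."""
    m = max(z)                              # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]     # element-wise exp, not a matrix power
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 1.0, 0.1])            # probabilities that sum to 1
```
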
| 56
|
implement regression
|
How to extract Regression predictor of Scikit-learn to implement into C++?
|
https://stackoverflow.com/questions/37802891/how-to-extract-regression-predictor-of-scikit-learn-to-implement-into-c
|
<p>I did some training using Random Forest Regression (or any kind of regressions) of Scikit-learn and got the predictor:</p>
<pre><code>predictor = RandomForestRegressor(n_estimators=n_estimators)
predictor.fit(X_train, Y_train)
</code></pre>
<p>How can I extract information in <code>predictor</code> to implement into C++ as a filter only for prediction?</p>
<p>What I need is to build a "predictor" in C++ to get: <code>Y_predict = predictor(X_test)</code> without any training in C++.</p>
|
<p>The <a href="http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html" rel="nofollow">sklearn documentation for RFR</a> says that you have</p>
<pre><code> property estimators_ : list of DecisionTreeRegressor
</code></pre>
<p>From <a href="http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html" rel="nofollow">DecisionTreeRegressor</a> you can get the attributes you need, specifically tree_ attribute.</p>
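<p>For illustration, once the tree arrays are exported, prediction in the target language is just a loop over nodes. A tiny hand-made example in Python mimicking scikit-learn's <code>tree_</code> array layout (these particular array values are made up, not taken from a fitted model; in sklearn a leaf is marked by <code>children_left == -1</code>):</p>

```python
# hypothetical exported arrays (sklearn tree_ layout; values made up)
children_left  = [1, -1, -1]     # -1 marks a leaf
children_right = [2, -1, -1]
feature        = [0, -2, -2]     # split feature index; -2 at leaves
threshold      = [0.5, 0.0, 0.0]
value          = [0.0, 1.0, 3.0] # regression output stored at each node

def predict_tree(x):
    """Walk from the root to a leaf and return its value."""
    i = 0
    while children_left[i] != -1:            # internal node
        if x[feature[i]] <= threshold[i]:
            i = children_left[i]
        else:
            i = children_right[i]
    return value[i]
```

<p>Porting this loop to C++ (and averaging over all estimators of the forest) gives a prediction-only filter with no training code.</p>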
| 57
|
implement regression
|
Ridge Polynomial Regression: Direct implementation in python
|
https://stackoverflow.com/questions/66905960/ridge-polynomial-regression-direct-implementation-in-python
|
<p>I want to directly implement the ridge polynomial regression in python without using related libraries such as sklearn. The direct calculation is given by:</p>
<p>w = (X^T X + lambda * I)^-1 X^T y.</p>
<p>The code is as follows:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from openpyxl import load_workbook
wb = load_workbook('data.xlsx')
data = wb['data']
xv=[]
yv=[]
for i in range(1,100):
xv = xv +[float(data.cell(row=i,column=1).value)]
yv = yv +[float(data.cell(row=i,column=2).value)]
n=5 #polynomial degree
ex=1
m=len(xv)
max=xv[0]
for u in range(1,m):
if xv[u]>max:
max=xv[u]
x=[]
for i in range(0,m):
xn=[]
for j in range(0,n+1):
xn=xn+[xv[i]**j]
x=x+[xn]
lam = 5
X=np.array(x)
XtX=(X.T).dot(X)
#XtX_inv=np.linalg.inv(XtX)
Xty=(X.T).dot(np.array(yv))
I = np.identity(XtX.shape[0])
LI = np.dot(lam, I)
XtXLI = np.add(XtX, LI)
XtXLI_inv = XtX_inv=np.linalg.inv(XtXLI)
teta=XtXLI_inv.dot(Xty)
def h(c):
h=0
for i in range(0,n+1):
h=h+teta[i]*c**i
return h
hv=[]
for i in range(0,m):
hv=hv+[h(xv[i])]
</code></pre>
<p>It is expected to achieve better fitting by tuning the lambda parameter. However, the error increases significantly by increasing the lambda. How can I solve the problem?</p>
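<p>For reference, the closed-form expression above can be checked on synthetic data with a short numpy sketch (names and data below are made up). Note that the intercept column is conventionally left unpenalized, and that <em>training</em> error necessarily grows as lambda grows, since lambda = 0 minimizes the training RSS exactly; the payoff of ridge is on held-out data:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
xv = rng.uniform(-1.0, 1.0, 50)
yv = 2.0 * xv**2 - xv + 0.1 * rng.normal(size=50)

n = 3                                      # polynomial degree
X = np.vander(xv, n + 1, increasing=True)  # columns x^0 ... x^n

def ridge(lam):
    """w = (X^T X + lam * I)^-1 X^T y, via a linear solve (no explicit inverse)."""
    I = np.eye(X.shape[1])
    return np.linalg.solve(X.T @ X + lam * I, X.T @ yv)

def rss(w):
    return float(np.sum((X @ w - yv) ** 2))

w0, w5 = ridge(0.0), ridge(5.0)
# rss(w0) <= rss(w5): training error can only increase with lam
```
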
|
<p>You may check this implementation of mine, which uses the normal-equation formula A^T * A * x = A^T * B:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
DATA = np.array([(-5, 12), (-3, 2), (-2, -7), (-1, -4), (2, 3), (3, 1), (5, 4), (7, 9)])
n, m = DATA.shape
def regression(degree: int):
A = np.empty(shape=(n, degree + 1))
for i, data in enumerate(DATA):
# Evaluates the polynomial in order to get coefficients
A[i] = np.array([data[0]**x for x in range(degree + 1)])
# @ is a special python operator which performs matrix multiplication
x = A.T @ A
y = A.T @ np.array([d[1] for d in DATA])
# Solves the linear system
r = np.linalg.solve(x, y)
# Evaluates in order to plot values
x = np.linspace(DATA[0][0], DATA[-1][0], num=1000)
y = np.array([np.sum(np.array([r[i]*(j**i) for i in range(len(r))])) for j in x])
# Plots the polynomial
plt.plot(x, y)
# Plots the data points
for data in DATA:
plt.scatter(*data)
# y has to be recalculated because linspace creates extra values in order to plot the graph
y = np.array([np.sum(np.array([r[i] * (d[0] ** i) for i in range(len(r))])) for d in DATA])
    error = sum([abs(DATA[i][1] - y[i])**2 for i in range(n)])**0.5
# If the error is too small, it is depreciated
if error > 1e-10:
plt.title(f"Degree: {degree}, Error: {error}")
else:
plt.title(f"Degree: {degree}, Perfect aproximation")
plt.show()
for i in range(1, n):
regression(i)
</code></pre>
| 58
|
implement regression
|
How can I implement multivariate linear regression?
|
https://stackoverflow.com/questions/47357431/how-can-i-implement-multivariate-linear-regression
|
<p>I have built a simple linear regressor below...</p>
<pre><code>import numpy as np
x = np.array([[0.0], [0.33], [0.66], [1.0]])
y = np.array([[1.0, 2.0, 3.0, 4.0]]).T
w = np.random.random((1, 1))
for j in xrange(100000):
a2 = np.dot(x, w) + 1
w += x.T.dot((y - a2))
print(a2)
</code></pre>
<p>Now here is my attempt at developing it to include multivariate data...</p>
<pre><code>import numpy as np
x = np.array([[0.0], [0.33], [0.66], [1.0]])
x2 = np.array([[0.0], [0.33], [0.66], [1.0]])
y = np.array([[1.0, 2.0, 3.0, 4.0]]).T
w = np.random.random((1, 1))
for j in xrange(100000):
mx = np.dot(x, w) + np.dot(x2, w)
w += (np.sum(x, x2)).T.dot((y - a2))
print(mx)
</code></pre>
<p>It seems to not be letting me add together the x and x2 arrays. Also the model yielded infinite values before hand. Please give me some pointers? Just numpy and python please no scikit learn as the true way to learn machine learning is from scratch really. Feel free to change the data in x and y arrays. Bonus points for plotting data and regression line with matplotlib! If the results are bad then of course it doesn’t matter because it’s linear regression after all. Thanks again </p>
|
<p>I think what you need is <strong>numpy concatenate</strong> function:</p>
<pre><code>import numpy as np
x = np.array([[0.0], [0.33], [0.66], [1.0]])
x2 = np.array([[0.0], [0.33], [0.66], [1.0]])
y = np.array([[1.0, 2.0, 3.0, 4.0]]).T
w = np.random.random((1, 2))
x_all = np.concatenate((x,x2),axis = 1)
for j in xrange(100):
mx = np.dot(x_all, w.T)
w += (x_all.T.dot((y - mx))).T
print(mx)
</code></pre>
<p>This works, but I am not sure that the way the weights are updated is correct.</p>
<p>Some examples of linear regression from scratch can be found here: </p>
<p><a href="https://machinelearningmastery.com/implement-simple-linear-regression-scratch-python/" rel="nofollow noreferrer">https://machinelearningmastery.com/implement-simple-linear-regression-scratch-python/</a></p>
<p><a href="https://mubaris.com/2017-09-28/linear-regression-from-scratch" rel="nofollow noreferrer">https://mubaris.com/2017-09-28/linear-regression-from-scratch</a></p>
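<p>One way to sanity-check the gradient updates is against the closed-form least-squares solution on the same concatenated design matrix (a sketch with made-up exact data, so the recovered coefficients are known):</p>

```python
import numpy as np

x1 = np.array([[0.0], [1.0], [2.0], [3.0]])
x2 = np.array([[1.0], [0.0], [1.0], [0.0]])
y = 2.0 * x1 + 3.0 * x2 + 1.0                 # exact linear target

X = np.concatenate((x1, x2), axis=1)
X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column

# closed-form least squares: solves min ||X w - y||
w, *_ = np.linalg.lstsq(X, y, rcond=None)
# w should recover [2, 3, 1]: the two slopes and the bias
```
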
| 59
|
implement regression
|
Implementing LSTM regression model with tensor flow
|
https://stackoverflow.com/questions/47277171/implementing-lstm-regression-model-with-tensor-flow
|
<p>I am trying to implement a TensorFlow LSTM regression model for a list of input numbers.
Example: </p>
<pre><code> input_data = [1, 2, 3, 4, 5]
time_steps = 2
-> X == [[1, 2], [2, 3], [3, 4]]
-> y == [3, 4, 5]
</code></pre>
<p>The code is below:</p>
<pre><code>TIMESTEPS = 20
num_hidden=20
Xd, yd = load_data()
train_input = Xd['train']
train_input = train_input.reshape(-1,20,1)
train_output = yd['train']
# train_input = [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],..
# train_output = [[21],[22],[23]....
test_input = Xd['test']
test_output = yd['test']
X = tf.placeholder(tf.float32, [None, 20, 1])
y = tf.placeholder(tf.float32, [None, 1])
cell = tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True)
val, state = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
val = tf.Print(val, [tf.argmax(val,1)], 'argmax(val)=' , summarize=20, first_n=7)
val = tf.transpose(val, [1, 0, 2])
val = tf.Print(val, [tf.argmax(val,1)], 'argmax(val2)=' , summarize=20, first_n=7)
# Take only the last output after 20 time steps
last = tf.gather(val, int(val.get_shape()[0]) - 1)
last = tf.Print(last, [tf.argmax(last,1)], 'argmax(val3)=' , summarize=20, first_n=7)
# define variables for weights and bias
weight = tf.Variable(tf.truncated_normal([num_hidden, int(y.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[y.get_shape()[1]]))
# Prediction is matmul of last value + wieght + bias
prediction = tf.matmul(last, weight) + bias
# Cost function using softmax
# y is the true distrubution and prediction is the predicted
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(prediction), reduction_indices=[1]))
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cost)
from tensorflow.python import debug as tf_debug
inita = tf.initialize_all_variables()
sess = tf.Session()
sess.run(inita)
batch_size = 100
no_of_batches = int(len(train_input)/batch_size)
epoch = 10
test_size = 100
for i in range(epoch):
for start, end in zip(range(0, len(train_input), batch_size), range(batch_size, len(train_input)+1, batch_size)):
sess.run(minimize, feed_dict={X: train_input[start:end], y: train_output[start:end]})
test_indices = np.arange(len(test_input)) # Get A Test Batch
np.random.shuffle(test_indices)
test_indices = test_indices[0:test_size]
print (i, mean_squared_error(np.argmax(test_output[test_indices], axis=1), sess.run(prediction, feed_dict={X: test_input[test_indices]})))
print ("predictions", prediction.eval(feed_dict={X: train_input}, session=sess))
y_pred = prediction.eval(feed_dict={X: test_input}, session=sess)
sess.close()
test_size = test_output.shape[0]
ax = np.arange(0, test_size, 1)
plt.plot(ax, test_output, 'r', ax, y_pred, 'b')
plt.show()
</code></pre>
<p>But I am not able to minimize the cost; the calculated MSE increases at each step instead of decreasing.
I suspect there is a problem with the cost function that I am using.</p>
<p>Any thoughts or suggestions as to what I am doing wrong?</p>
<p>Thanks</p>
|
<p>As mentioned in the comment, you had to change your loss function to the MSE function and reduce your learning rate. Is your error converging to zero?</p>
| 60
|
implement regression
|
Fast softmax regression implementation in tensorflow
|
https://stackoverflow.com/questions/39770254/fast-softmax-regression-implementation-in-tensorflow
|
<p>I am trying to implement the softmax regression model in tensorflow in order to make a benchmark with other mainstream deep-learning frameworks. The official documentation code is slow because of the <a href="https://github.com/tensorflow/tensorflow/issues/2919" rel="nofollow">feed_dict issue</a> in tensorflow. I am trying to serve the data as a tensorflow constant, but I don't know the most efficient way to do that. For now I just use a single batch as a constant and train on that batch. What is an efficient way of making a minibatched version of that code? Here is my code:</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
batch_xs, batch_ys = mnist.train.next_batch(100)
x = tf.constant(batch_xs, name="x")
W = tf.Variable(0.1*tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
batch_y = batch_ys.astype(np.float32)
y_ = tf.constant(batch_y, name="y_")
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, y_))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
....
# Minitbatch is never updated during that for loop
for i in range(5500):
sess.run(train_step)
</code></pre>
|
<p>Just as follows.</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
batch_size = 32 #any size you want
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
x = tf.placeholder(tf.float32, shape = [None, 784])
y = tf.placeholder(tf.float32, shape = [None, 10])
W = tf.Variable(0.1*tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
....
# A fresh minibatch is fed on every iteration of this loop
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
    l, _ = sess.run([loss, train_step], feed_dict={x: batch_xs, y: batch_ys})
    print(l)  # loss for every minibatch
</code></pre>
<p>A shape of [None, 784] allows you to feed a batch of any size, i.e. an array of shape [batch_size, 784].</p>
<p>I haven't tested this code, but I hope it works.</p>
| 61
|
implement regression
|
Parallel implementation of least angle regression
|
https://stackoverflow.com/questions/23264997/parallel-implementation-of-least-angle-regression
|
<p>I'm looking for an R package or script that implements least angle regression or the lasso in a parallel fashion.
Does anyone know of one?</p>
|
<p>Yes. The <strong>Lars</strong> package in R will do this. Check out this package - <a href="http://cran.r-project.org/web/packages/lars/index.html" rel="nofollow">http://cran.r-project.org/web/packages/lars/index.html</a></p>
<p>And the R command would be </p>
<pre><code>lars(x, y, type = c("lasso"))
</code></pre>
| 62
|
implement regression
|
python linear regression implementation
|
https://stackoverflow.com/questions/39478437/python-linear-regression-implementation
|
<p>I've been trying to do my own implementation of a simple linear regression algorithm, but I'm having some trouble with the gradient descent.</p>
<p>Here's how I coded it:</p>
<pre><code>def gradientDescentVector(data, alpha, iterations):
    a = 0.0
    b = 0.0
    X = data[:, 0]
    y = data[:, 1]
    m = data.shape[0]
    it = np.ones(shape=(m, 2))
    for i in range(iterations):
        predictions = X.dot(a).flatten() + b
        errors_b = (predictions - y)
        errors_a = (predictions - y) * X
        a = a - alpha * (1.0 / m) * errors_a.sum()
        b = b - alpha * (1.0 / m) * errors_b.sum()
    return a, b
</code></pre>
<p>Now, I know this won't scale well with more variables, but I was just trying with the simple version first, and follow up from there.</p>
<p>I was following the gradient descent algorithm from the machine learning course at coursera:</p>
<p><a href="https://i.sstatic.net/w1BRm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w1BRm.png" alt="enter image description here"></a></p>
<p>But I'm getting infinite values after ~90 iterations (on a specific dataset), and haven't been able to wrap my head around this so far.</p>
<p>I've tried iterating over each value before I learned about numpy's broadcasting and was getting the same results.</p>
<p>If anyone could shed some light on what could be the problem here, it would be great.</p>
|
<p>It is clear that the parameters are diverging from the optimal ones. One likely reason is that your learning rate ("alpha") is too large, so try decreasing it. A common rule of thumb: start from a small value such as 0.001, then try a learning rate roughly three times higher (0.003). If that gives a lower MSE (or whatever error function you are using), keep going; if not, try a value between 0.001 and 0.003. Repeat this procedure until you reach a satisfactory MSE.</p>
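<p>To illustrate (a sketch on synthetic data, not the asker's dataset): the same update rule converges once alpha is small enough for the scale of X.</p>

```python
import numpy as np

# synthetic data: y = 2x + 1 plus noise; with X in [0, 10] an alpha of
# 0.01 is well inside the stable range, so the updates converge
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 100)
y = 2.0 * X + 1.0 + rng.normal(0, 0.1, 100)

def gradient_descent(X, y, alpha, iterations):
    a = b = 0.0
    m = X.size
    for _ in range(iterations):
        pred = a * X + b
        a -= alpha * (1.0 / m) * ((pred - y) * X).sum()
        b -= alpha * (1.0 / m) * (pred - y).sum()
    return a, b

a, b = gradient_descent(X, y, alpha=0.01, iterations=20000)
```

With a much larger alpha (say 1.0 here), the same loop overshoots on every step and the parameters blow up to inf, which matches the behavior described in the question.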
| 63
|
implement regression
|
implementing keras version of quantile loss for regression
|
https://stackoverflow.com/questions/77529336/implementing-keras-version-of-quantile-loss-for-regression
|
<p>I am trying to implement Quantile loss for a regression problem based on the formula from this <a href="https://medium.com/@mlblogging.k/14-loss-functions-you-can-use-for-regression-b24db8dff987" rel="nofollow noreferrer">article</a> (number 14 at the end of the article):</p>
<p><a href="https://i.sstatic.net/cjJzO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cjJzO.png" alt="enter image description here" /></a></p>
<p>Here is my implementation:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as bk  # assumption: `bk` is the Keras backend

def quantile():
    percentiles = [0.01, 0.25, 0.5, 0.75, 0.99]
    gamma = 1e-2
    np.random.seed(0)
    y_true = np.array([1.0, 0.0, -1.0, 0.0, 0.4, 0.8, 0.9, 1.0, 1.0, -0.6, -0.9, -1.0])
    y_pred = np.random.rand(12)
    error = bk.abs(y_true - y_pred)
    cond = error < gamma
    quantile_loss = bk.sum(tf.where(cond, (gamma - 1) * error, gamma * error), axis=-1)
    return quantile_loss

quantile()
</code></pre>
<p>I would like to check if my implementation is correct considering the condition (<code>y_true<y_pred</code>) to be applied when multiplying <code>gamma</code>. Also, if this is based on percentiles, how to apply percentiles in the loss formula?</p>
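<p>For reference, the standard quantile ("pinball") loss for a single quantile q is max(q·e, (q−1)·e) with e = y_true − y_pred, so the y_true &lt; y_pred condition is handled by the max, and each percentile in the list gets its own loss term. A hedged NumPy sketch (the function name is mine):</p>

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # quantile (pinball) loss for one quantile q in (0, 1);
    # under-predictions are weighted by q, over-predictions by (1 - q)
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(q * e, (q - 1.0) * e)))
```

To train for several percentiles at once, one model output per quantile is fitted with its own pinball term and the terms are summed.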
| 64
|
|
implement regression
|
Fast way to implement a polynomial regression on each pandas dataframe row
|
https://stackoverflow.com/questions/76077118/fast-way-to-implement-a-polynomial-regression-on-each-pandas-dataframe-row
|
<p>I have the following pandas dataframe:</p>
<pre><code>df = pd.DataFrame({0: [11, 12, 31], 1: [6, 14, 27], 2: [11, 24, 21], 3: [1, 24, 20]})
0 1 2 3
0 11 6 11 1
1 12 14 24 24
2 31 27 21 20
</code></pre>
<p>For each row at the time, I want to implement a polynomial regression, where column names are X and row values are Y.</p>
<p>I know I can use iterrows:</p>
<pre><code>x=(df.columns).to_numpy()
for index, row in df.iterrows():
print(np.polyfit(x,row,2))
</code></pre>
<p>which produces:</p>
<pre><code>[-1.25 1.25 9.75]
[-0.5 6.1 11.1]
[ 0.75 -6.15 31.35]
</code></pre>
<p>but this can take a long time on large dataframes.
Is there a faster way to do this? Thanks</p>
|
<p><a href="https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html" rel="nofollow noreferrer"><code>polyfit</code></a> can take a 2d <code>y</code>-param of the same length with <code>x</code>, so:</p>
<pre><code>np.polyfit(df.columns, df.T, 2).T
</code></pre>
<p>gives:</p>
<pre><code>array([[-1.25, 1.25, 9.75],
[-0.5 , 6.1 , 11.1 ],
[ 0.75, -6.15, 31.35]])
</code></pre>
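<p>A quick self-contained check of the vectorized call (plain NumPy, equivalent to passing <code>df.T</code>):</p>

```python
import numpy as np

x = np.arange(4)                      # the column labels 0..3
Y = np.array([[11, 6, 11, 1],
              [12, 14, 24, 24],
              [31, 27, 21, 20]], dtype=float)

# polyfit accepts a 2-D y whose columns are fitted all at once,
# so one call replaces the whole iterrows loop
coefs = np.polyfit(x, Y.T, 2).T
```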
| 65
|
implement regression
|
I want to implement a tree-based regression model using rss
|
https://stackoverflow.com/questions/56337405/i-want-to-implement-a-tree-based-regression-model-using-rss
|
<p>1.2 RSS
As stated in chapter 8 of the textbook, we need a parameter to measure the efficiency of a specific split/tree. We choose here Residual Sum of Squares. Implement RSS calculation for the list of splits, knowing that the value to predict (Wage(k)) is contained in element[-1] for element in split.
You can use the cell of code below your implementation to check your result on a specific split.</p>
<p>1.3 Split
We will code a split function capable of splitting the data in two parts based on the index of the feature, the value of splitting and the data. Implement the condition of splitting, taking as convention left …</p>
<p>1.4 Optimal split creation
There is no theoretical result allowing us to find the best possible split without going through all the possible ones, so we implement an RSS minimizer over all candidate splits. Using previously coded functions, fill in the #TODO parts. You can check your return value in the following cell.</p>
<p>1.5 Tree Building and prediction
Aggregating all the parts of the code now allows us to build the whole tree recursively. Comment the given code, and especially the importance of the parameter min_size in regard to the model structure
Using the same coding paradigm allows us to use our model to do regression on the test set, as you can see in the next cell of code. Which part of the global model is now missing ? Explain its importance in a real machine learning problem. (Bonus) Implement it</p>
<p>I want to implement a tree-based regression model using rss. I want to fill out the following blanks but it is too difficult</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math

data = pd.read_csv("Wages.csv", sep=";")
training_set = np.array(data[:10])
test_set = np.array(data[10:])

# -- RSS --
verbose = False

def RSS(splits):
    """
    Return the RSS of the input splits. The input should be in the form
    of a list of lists.
    """
    residual = 0
    for split in splits:
        if len(split) != 0:
            mean = np.mean([element[-1] for element in split])
            if verbose: print("Mean :" + str(mean))
            residual = ##TODO
    return residual

split_1 = np.array([[[0, 2], [0, 8]], [[4, 5]]])
RSS_value = RSS(split_1)
if type(RSS_value) not in [int, float, np.float16, np.float32, np.float64]:
    print("TypeError : check your output")
elif RSS(split_1) == 18.0:
    print("Your calculations are right, at least on this specific example")
else:
    print("Your calculations are wrong")

# -- Split --
def split(index, value, data):
    """
    Splits the input @data into two parts, based on the feature at @index
    position, using @value as a boundary value
    """
    left_split = #TODO condition
    right_split = #TODO condition
    return [left_split, right_split]

# -- Optimal split creation --
def split_tester(data):
    """
    Find the best possible split for the current @data.
    Loops over all the possible features, and all values for the given
    features, to test every possible split
    """
    # Initialize such that the first split is better than the initialization
    optimal_split_ind, optimal_split_value, optimal_residual, optimal_splits = -1, -1, float("inf"), []
    for curr_ind in range(data.shape[1] - 1):
        for curr_val in data:
            if verbose: print("Curr_split : " + str((curr_ind, curr_val[curr_ind])))
            split_res = #TODO (comments : get the current split)
            if verbose: print(split_res)
            residual_value = #TODO (comments : get the RSS of the current split)
            if verbose: print("Residual : " + str(residual_value))
            if residual_value < optimal_residual:
                optimal_split_ind, optimal_split_value, optimal_residual, optimal_splits = \
                    curr_ind, curr_val[curr_ind], residual_value, split_res
    return optimal_split_ind, optimal_split_value, optimal_splits

# -- Tree building --
def tree_building(data, min_size):
    """
    Recursively builds a tree using the split tester built before.
    """
    if data.shape[0] > min_size:
        ind, value, [left, right] = split_tester(data)
        left, right = np.array(left), np.array(right)
        return [tree_building(left, min_size), tree_building(right, min_size), ind, value]
    else:
        return data

tree = tree_building(training_set, 2)

def predict(tree, input_vector):
    if type(tree[-1]) != np.int64:
        if len(tree) == 1:
            return tree[0][-1]
        else:
            return np.mean([element[-1] for element in tree])
    else:
        left_tree, right_tree, split_ind, split_value = tree
        if input_vector[split_ind] < split_value:
            return predict(left_tree, input_vector)
        else:
            return predict(right_tree, input_vector)

for employee in test_set:
    print("Predicted : " + str(predict(tree, employee)) + ", Actual : " + str(employee[-1]))
</code></pre>
<p>I'm studying the code to get #TODO here. I do not know. Please help me.</p>
|
<p>If I understand correctly, you are only asking for the calculation marked #TODO in the code you posted. If you calculate the errors predicted by your model, these error values are sometimes called the "residual errors". You cannot simply sum these, some are negative and some are positive so they might cancel each other. However, if the errors are all squared, then the squared errors are all positive values and can be summed. This is where the term "Residual Sum of Squares" (RSS) comes from. You can use something like "RSS = numpy.sum(numpy.square(errors))" to calculate this value.</p>
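<p>Concretely, a sketch of the full RSS helper from the question (hedged: the lowercase function name is mine, and the convention that each element's last entry holds the target follows the question), which reproduces the expected value of 18.0 on the check split:</p>

```python
import numpy as np

def rss(splits):
    # residual sum of squares: for each split, square the deviation of
    # every target (element[-1]) from that split's mean target, then sum
    total = 0.0
    for split in splits:
        if len(split) == 0:
            continue
        targets = np.array([element[-1] for element in split], dtype=float)
        total += float(np.sum(np.square(targets - targets.mean())))
    return total
```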
| 66
|
implement regression
|
parameters error in azure ML designer in evaluation metrics in regression model
|
https://stackoverflow.com/questions/73833320/parameters-error-in-azure-ml-designer-in-evaluation-metrics-in-regression-model
|
<p>I developed a designer pipeline to implement regression models in Azure Machine Learning Studio. I took the dataset pill and then split the dataset into train and test in the prescribed manner. When I tried to implement the evaluation metrics and run the pipeline, it showed a warning and an error at the point where I called the dataset for the operation. I am a bit confused: with the same implementation, linear regression worked, as shown in the image. When the same approach is used to implement logistic regression, it shows warnings and errors when building the evaluation metrics.</p>
<p><a href="https://i.sstatic.net/DbEeq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DbEeq.png" alt="enter image description here" /></a></p>
<p>The screenshot above shows the successful run with linear regression. With logistic regression, the pipeline shows the warning and error.</p>
<p>Any help is appreciated.</p>
|
<p>Creating a sample pipeline with designer with mathematical format.</p>
<p><a href="https://i.sstatic.net/kCx3A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kCx3A.png" alt="enter image description here" /></a></p>
<p>We need to create a compute instance.</p>
<p><a href="https://i.sstatic.net/7bQA5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7bQA5.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/NQWZV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NQWZV.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/MpxPY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MpxPY.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/bAfEv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bAfEv.png" alt="enter image description here" /></a></p>
<p>Assign the compute instance and click on create</p>
<p><a href="https://i.sstatic.net/wIS0d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wIS0d.png" alt="enter image description here" /></a></p>
<p>Now the import data warning will be removed. In the same manner, we will get similar errors in the other pills too.</p>
<p><a href="https://i.sstatic.net/zFK74.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zFK74.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Yk5gY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yk5gY.png" alt="enter image description here" /></a></p>
<p>Create a mathematical format. If it is not needed for your case, remove that math operation and keep the remaining steps.</p>
<p><a href="https://i.sstatic.net/uhKlv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uhKlv.png" alt="enter image description here" /></a></p>
<p>Assign the column set. Select any option according to the requirement.</p>
<p><a href="https://i.sstatic.net/mcNZe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mcNZe.png" alt="enter image description here" /></a></p>
<p>Finally, we can find the pills which have no warning or error.</p>
| 67
|
implement regression
|
LinAlgError Singular Matrix when implementing logistic regression
|
https://stackoverflow.com/questions/64381015/linalgerror-singular-matrix-when-implementing-logistic-regression
|
<p>I am trying to implement a logistic regression model, but when I try to print the results I'm getting an error that I've looked up and tried to figure out how to solve, but haven't been able to.</p>
<p>Here's what that looks like:</p>
<pre><code>#Columns
columns = new_df[['DIABETES_NO','DIABETES_INSULIN', 'DIABETES_NON-INSULIN', 'bmi_cat_0','bmi_cat_gte40','bmi_cat_lt40',
'albumin_cat_0', 'albumin_cat_gt3.5', 'albumin_cat_lt3.5', 'SMOKE_No', 'SMOKE_Yes',
'age_cat_0', 'age_cat_gte65', 'age_cat_lt65', 'SEX_male', 'SEX_female']]
</code></pre>
<pre><code>#Model 1 Target Variable (Mortality)
X = columns
y = new_df['Mortality']
logit_model=sm.Logit (y,X)
result=logit_model.fit()
print(result.summary2())
</code></pre>
<pre><code>Warning: Maximum number of iterations has been exceeded.
Current function value: 0.014645
Iterations: 35
---------------------------------------------------------------------------
LinAlgError Traceback (most recent call last)
<ipython-input-35-0a3dafc9126f> in <module>
5
6 logit_model=sm.Logit (y,X)
----> 7 result=logit_model.fit()
8 print(result.summary2())
E:\Users\davidwool\Anaconda3\lib\site-packages\statsmodels\discrete\discrete_model.py in fit(self, start_params, method, maxiter, full_output, disp, callback, **kwargs)
1832 bnryfit = super(Logit, self).fit(start_params=start_params,
1833 method=method, maxiter=maxiter, full_output=full_output,
-> 1834 disp=disp, callback=callback, **kwargs)
1835
1836 discretefit = LogitResults(self, bnryfit)
E:\Users\davidwool\Anaconda3\lib\site-packages\statsmodels\discrete\discrete_model.py in fit(self, start_params, method, maxiter, full_output, disp, callback, **kwargs)
218 mlefit = super(DiscreteModel, self).fit(start_params=start_params,
219 method=method, maxiter=maxiter, full_output=full_output,
--> 220 disp=disp, callback=callback, **kwargs)
221
222 return mlefit # up to subclasses to wrap results
E:\Users\davidwool\Anaconda3\lib\site-packages\statsmodels\base\model.py in fit(self, start_params, method, maxiter, full_output, disp, fargs, callback, retall, skip_hessian, **kwargs)
471 Hinv = cov_params_func(self, xopt, retvals)
472 elif method == 'newton' and full_output:
--> 473 Hinv = np.linalg.inv(-retvals['Hessian']) / nobs
474 elif not skip_hessian:
475 H = -1 * self.hessian(xopt)
E:\Users\davidwool\Anaconda3\lib\site-packages\numpy\linalg\linalg.py in inv(a)
530 signature = 'D->D' if isComplexType(t) else 'd->d'
531 extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 532 ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
533 return wrap(ainv.astype(result_t, copy=False))
534
E:\Users\davidwool\Anaconda3\lib\site-packages\numpy\linalg\linalg.py in _raise_linalgerror_singular(err, flag)
87
88 def _raise_linalgerror_singular(err, flag):
---> 89 raise LinAlgError("Singular matrix")
90
91 def _raise_linalgerror_nonposdef(err, flag):
LinAlgError: Singular matrix
</code></pre>
<p>I tried to set the method='bfgs' but I get NaN, for all areas except for the Coeff column.</p>
<p>Here's what that looks like:</p>
<pre><code>#Model 1 Target Variable (Mortality)
X = columns
y = new_df['Mortality']
logit_model=sm.Logit (y,X)
result=logit_model.fit(method='bfgs')
print(result.summary2())
</code></pre>
<pre><code>Warning: Maximum number of iterations has been exceeded.
Current function value: 0.014671
Iterations: 35
Function evaluations: 36
Gradient evaluations: 36
Results: Logit
=================================================================
Model: Logit Pseudo R-squared: 0.090
Dependent Variable: Mortality AIC: 329.5189
Date: 2020-10-15 19:32 BIC: 402.1568
No. Observations: 10549 Log-Likelihood: -154.76
Df Model: 9 LL-Null: -170.03
Df Residuals: 10539 LLR p-value: 0.00035468
Converged: 0.0000 Scale: 1.0000
-----------------------------------------------------------------
Coef. Std.Err. z P>|z| [0.025 0.975]
-----------------------------------------------------------------
DIABETES_NO -1.3211 nan nan nan nan nan
DIABETES_INSULIN -0.1911 nan nan nan nan nan
DIABETES_NON-INSULIN -0.2797 nan nan nan nan nan
bmi_cat_0 -0.0321 nan nan nan nan nan
bmi_cat_gte40 -1.0971 nan nan nan nan nan
bmi_cat_lt40 -0.6626 nan nan nan nan nan
albumin_cat_0 -1.7288 nan nan nan nan nan
albumin_cat_gt3.5 -0.7371 nan nan nan nan nan
albumin_cat_lt3.5 0.6740 nan nan nan nan nan
SMOKE_No -1.0509 nan nan nan nan nan
SMOKE_Yes -0.7410 nan nan nan nan nan
age_cat_0 -0.0321 nan nan nan nan nan
age_cat_gte65 -0.0337 nan nan nan nan nan
age_cat_lt65 -1.7261 nan nan nan nan nan
SEX_male -1.2519 nan nan nan nan nan
SEX_female -0.5400 nan nan nan nan nan
=================================================================
</code></pre>
<p>Any help or suggestions would be appreciated, thank you!!</p>
|
<p>You are including the full set of one-hot encoded dummies for every categorical variable (e.g. both <code>SEX_male</code> and <code>SEX_female</code>). Each such pair sums to a constant column, so you are essentially introducing multiple constants into the regression. This perfect multicollinearity (the "dummy variable trap") makes the design matrix singular, hence the singular matrix error. Drop one reference category per categorical variable.</p>
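<p>The usual fix is to drop one reference category per variable when encoding, e.g. with pandas (a sketch on toy columns, not the asker's data):</p>

```python
import pandas as pd

# drop_first=True omits one dummy per categorical variable, which
# removes the perfect multicollinearity (the dummy-variable trap)
df = pd.DataFrame({"SEX": ["male", "female", "male"],
                   "SMOKE": ["Yes", "No", "No"]})
encoded = pd.get_dummies(df, drop_first=True)
```

With one dummy dropped per variable, the design matrix is full rank and the Logit fit no longer raises the singular-matrix error.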
| 68
|
implement regression
|
Implementing Linear Regression in ASP.NET web application
|
https://stackoverflow.com/questions/67072491/implementing-linear-regression-in-asp-net-web-application
|
<p>I was supposed to develop a web application that also includes predicting the expected arrival date of a parcel based on its origin location, and I was thinking of using a linear regression model. I'm not sure if implementing linear regression in a web application is feasible, though. I can't seem to find any suitable beginner resources on linear regression in web applications. Please share any thoughts or resources.</p>
| 69
|
|
implement regression
|
TensorFlow Returning nan When Implementing Logistic Regression
|
https://stackoverflow.com/questions/38538635/tensorflow-returning-nan-when-implementing-logistic-regression
|
<p>I've been trying to implement Logistic Regression in TensorFlow following the MNIST example but with data from a CSV. Each row is one sample and has 12 dimensions. My code is the following:</p>
<pre><code>batch_size = 5
learning_rate = .001
x = tf.placeholder(tf.float32,[None,12])
y = tf.placeholder(tf.float32,[None,2])
W = tf.Variable(tf.zeros([12,2]))
b = tf.Variable(tf.zeros([2]))
mult = tf.matmul(x,W)
pred = tf.nn.softmax(mult+b)
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
avg_cost = 0
total_batch = int(len(Xtrain)/batch_size)
for i in range(total_batch):
    batch_xs = Xtrain[i * batch_size : (i + 1) * batch_size]
    batch_ys = ytrain[i * batch_size : (i + 1) * batch_size]
    _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
    print(c)
</code></pre>
<p>Xtrain is a 252x10 numpy array, and ytrain is a 252x2 one hot numpy array. </p>
<p><strong>The Problem:</strong> the cost c gets calculated for the first iteration (value is 0.6931...), but for every iteration after, it returns 'nan.'</p>
<p><strong>Things I've Tried:</strong> I made sure every component aspect of the model was working. The issue happens entirely after the first iteration. I've played around with the learning rate, but that doesn't do anything. I've tried initializing the weights as truncated_normal (which I shouldn't need to do for logistic regression anyway), but that doesn't help either. </p>
<p>So, any thoughts? I've spent around 3 hours trying to fix it and have run out of ideas. It seems like something just isn't working when TensorFlow goes to optimize the cost function.</p>
|
<p>The issue you are having is because log(pred) is not defined for pred = 0. The "hacky" way around this is to use <code>tf.maximum(pred, 1e-15)</code> or <code>tf.clip_by_value(pred, 1e-15, 1.0)</code>.</p>
<p>An even better solution, however, is using <code>tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=mult + b)</code> instead of applying softmax and cross-entropy separately, which deals with edge cases like this (hence all your problems) automatically!</p>
<p>For further reading, I'd recommend this great answer:
<a href="https://stackoverflow.com/a/34243720/5829427">https://stackoverflow.com/a/34243720/5829427</a></p>
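<p>The numerical issue is easy to reproduce outside TensorFlow. A NumPy sketch of the fused, numerically stable computation (roughly what <code>softmax_cross_entropy_with_logits</code> does internally):</p>

```python
import numpy as np

def stable_xent(logits, onehot):
    # log-softmax computed directly from the logits: subtracting the row
    # max keeps exp() from overflowing, and log(softmax) never sees a
    # hard 0, so the loss stays finite even for saturated predictions
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(np.sum(onehot * log_probs, axis=1))
```

The naive <code>-sum(y * log(softmax(logits)))</code> produces 0 * log(0) = nan for extreme logits, which is exactly the symptom in the question.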
| 70
|
implement regression
|
Implementing Logistic Regression in Theano with Climin
|
https://stackoverflow.com/questions/36742032/implementing-logistic-regression-in-theano-with-climin
|
<p>I want to implement logistic regression with minibatches on MNIST using Theano. I want to try out different optimisers, so I decided to use a library called <a href="https://github.com/BRML/climin" rel="nofollow">climin</a>. Climin offers several functions, which receive the parameters of the model, loss and / or the gradient of the loss, and a stream of the data as parameters. Since theano functions need to be compiled I designed the following model.</p>
<pre><code># loads the MNIST dataset
train_set_x, train_set_y = load_data(dataset)[0]

print('... building the model')

W = theano.shared(
    value=np.zeros((28 * 28, 10), dtype=theano.config.floatX),
    name='W',
    borrow=True
)  # weights, dimension 28 * 28 x 10

b = theano.shared(
    value=np.zeros((10,), dtype=theano.config.floatX),
    name='b',
    borrow=True
)  # biases

x = T.matrix('x')   # data, represented as rasterized images, dimension 28 * 28
y = T.ivector('y')  # labels, represented as a 1-D vector of [int] labels, dimension 10

p_y_given_x = T.nnet.softmax(T.dot(x, W) + b)
y_pred = T.argmax(p_y_given_x, axis=1)

NLL = -T.sum(T.log(p_y_given_x)[T.arange(y.shape[0]), y])
# negative_log_likelihood = -T.mean(T.log(y_pred)[T.arange(y.shape[0]), y])

loss = theano.function(inputs=[x, y], outputs=NLL)
g_W = theano.function(inputs=[x, y], outputs=T.grad(cost=loss, wrt=W))
g_b = theano.function(inputs=[x, y], outputs=T.grad(cost=loss, wrt=b))

def d_loss_wrt_pars(parameters, inputs, targets):
    # climin passes the parameters as a 1-D concatenated array to this
    # function, so I need to unpack the parameter array
    W = parameters[:28 * 28 * 10].reshape((28 * 28, 10))
    b = parameters[28 * 28 * 10:].reshape((1, 10))
    return np.concatenate([g_W(inputs, targets).flatten(), g_b(inputs, targets)])

wrt = np.empty(7850)  # allocate space for the parameters (MNIST dimensionality 28 * 28 * 10)
cli.initialize.randomize_normal(wrt, 0, 1)  # initialize the parameters with random numbers

if batch_size is None:
    args = itertools.repeat(([train_set_x, train_set_y], {}))
    batches_per_pass = 1
else:
    args = cli.util.iter_minibatches([train_set_x, train_set_y], batch_size, [0, 0])
    args = ((i, {}) for i in args)
    batches_per_pass = train_set_x.shape[0] / batch_size

if optimizer == 'gd':
    opt = cli.GradientDescent(wrt, d_loss_wrt_pars, step_rate=0.1, momentum=.95, args=args)
else:
    print('unknown optimizer')
    return 1

print('... training the model')
for info in opt:
    if info['n_iter'] >= n_epochs and (not done_looping):
        break
</code></pre>
<p>Unfortunately this produces nothing but:</p>
<pre><code>Traceback (most recent call last):
File "logreg/logistic_sgd.py", line 160, in <module> sys.exit(sgd_optimization_mnist())
File "logreg/logistic_sgd.py", line 69, in sgd_optimization_mnist
g_W = theano.function(inputs = [ x, y ], outputs = T.grad(cost=loss, wrt=W))
File "/Users/romancpodolski/anaconda/lib/python2.7/site-packages/theano/gradient.py", line 430, in grad
if cost is not None and isinstance(cost.type, NullType):
</code></pre>
<p>AttributeError: 'Function' object has no attribute 'type'</p>
<p>Does anyone have an idea how to make this work and combine the two libraries?</p>
|
<p>Your code passes a compiled function (instead of a Theano expression) to <code>T.grad</code>. Replace <code>loss</code> with <code>NLL</code> and you should be fine</p>
<pre><code>g_W = theano.function(inputs = [ x, y ], outputs = T.grad(cost=NLL, wrt=W))
g_b = theano.function(inputs = [ x, y ], outputs = T.grad(cost=NLL, wrt=b))
</code></pre>
<p>Also, you might want to try a Theano-supporting library (e.g. <a href="http://lasagne.readthedocs.org/en/latest/modules/updates.html" rel="nofollow">Lasagne</a>) for optimization.</p>
| 71
|
implement regression
|
Verifying Linear Regression implementation using python
|
https://stackoverflow.com/questions/41312507/verifying-linear-regression-implementation-using-python
|
<p>I just started with machine learning, spent few hours learning Linear Regression. Based on my understanding I implemented it from scratch in python (code below) without regularisation. Is my logic correct or does it need any improvements?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# Assigning X and y from the dataset
data = np.loadtxt('ex1data1.txt', delimiter=',')
rows = data.size // 2  # integer division, so reshape receives an int
X = np.array(data[:, 0]).reshape(rows, 1)
y = np.array(data[:, 1]).reshape(rows, 1)
m = np.size(X)
X = np.insert(X, 0, values=1, axis=1)
t = np.ones(shape=[2, 1])

def linReg():
    h = np.dot(X, t)
    J = 1 / (2 * m) * sum((h - y) ** 2)
    print('Cost:', J)
    print("Error:", h - y)
    for i in range(1, 2000):
        h = np.dot(X, t)
        t[0] = t[0] - 0.01 * (1 / m) * np.dot(X[:, 0], (h - y))
        t[1] = t[1] - 0.01 * (1 / m) * np.dot(X[:, 1], (h - y))
        J = 1 / (2 * m) * sum((h - y) ** 2)
        print(i)
        print('Cost:', J)
    plt.scatter(X[:, 1], y, color='blue')
    plt.plot(X[:, 1], h)
    return t

def predict(newval):
    W = linReg()
    predValue = np.dot(newval, W[1]) + W[0]
    print("Predicted Value:-", predValue)
    plt.plot(newval, predValue)
    plt.scatter(newval, predValue, color='red')
    plt.xlim(0, 40)
    plt.ylim(0, 40)
    plt.show()

print("Enter the number to be predicted:-")
nv = float(input())
predict(nv)
</code></pre>
|
<p>To check your model, a simple thing to do would be to <em>split your data into a training set and a test set</em>. The training set is used to <em>fit the model</em>, by being passed as an argument to the <code>linReg</code> function, and the features of the test set are used for <em>prediction</em> (with your so-called <code>predict</code> function). </p>
<p>You will then need a third function to <em>score</em> your model, by <em>comparing the predictions with the actual values given by the data</em>. If you get a good score, then your implementation may be correct, and if not, a little debugging will be necessary ;-)</p>
<p>To get started, I would suggest rearranging your code by defining the following functions:</p>
<pre><code>def train_test_split(X, y):
"""
Return a splitted version (X_train, y_train) and (X_test, y_test) of the dataset.
"""
def linReg_train(X_train, y_train):
"""
Fit the model and return the weights.
"""
def linReg_pred(X_test)
"""
Use the fitted model to predict values for all the points in X_test.
"""
def linReg_score(y_predicted, y_test)
"""
Compare predicted and true outputs to assess model quality.
"""
</code></pre>
<p>Some resources you might find useful:</p>
<ul>
<li><a href="http://www.dataschool.io/linear-regression-in-python/" rel="nofollow noreferrer" title="A friendly introduction to linear regression (with Python)">A friendly introduction to linear regression with Python</a>: a series of videos that goes into detail about how to implement linear regression.</li>
<li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html" rel="nofollow noreferrer" title="Linear regression with scikit-learn">Linear regression in scikit-learn</a>: scikit-learn is one of the most useful libraries for machine learning in Python, with a very well-written documentation. If you plan to write your own implementations of ML algorithms, you may want to check them by running your data through the scikit-learn implementations and compare the outputs with those of your own work.</li>
</ul>
<p>Good luck !</p>
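<p>A minimal sketch of the split and score helpers suggested above (plain NumPy; choosing R² as the score is my assumption):</p>

```python
import numpy as np

def train_test_split(X, y, test_ratio=0.25, seed=0):
    # shuffle indices once, then cut into train and test portions
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(len(y) * (1 - test_ratio))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

def linreg_score(y_pred, y_test):
    # coefficient of determination R^2: 1.0 means a perfect fit,
    # values near 0 (or negative) mean the model is no better than the mean
    ss_res = np.sum((y_test - y_pred) ** 2)
    ss_tot = np.sum((y_test - y_test.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Fit on the train portion, predict on the held-out features, and compare the score against a library implementation such as scikit-learn's to validate your own code.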
| 72
|
implement regression
|
MATLAB implementation for Flexible Least Squares (FLS) regression
|
https://stackoverflow.com/questions/11474052/matlab-implementation-for-flexible-least-squares-fls-regression
|
<p>Is there a MATLAB implementation for <a href="http://www2.econ.iastate.edu/tesfatsi/flshome.htm" rel="nofollow">Flexible Least Squares (FLS) regression</a><sup>1</sup>?</p>
<p>I am looking for a decent (well, the least painful) way to handle regression with time-varying coefficients. All ideas are welcome, but the best would be a pointer to a FLS regression implementation, because I couldn't find one after extensive googling.</p>
<hr>
<p><a href="http://www2.econ.iastate.edu/tesfatsi/TimeVaryingLinearReg.CMWA1989.pdf" rel="nofollow"><sup>1. R. Kalaba and L. Tesfatsion (1989), <em>"Time-Varying Linear Regression via Flexible Least Squares"</em>, Computers and Mathematics with Applications, Vol. 17, pp. 1215-1245</sup></a></p>
|
<p>I've found <a href="https://github.com/bwlewis/fls/blob/master/R/fls.R" rel="nofollow">R code on GitHub</a> for the FLS regression. I have converted it to MATLAB, but it may contain errors due to differences between R and MATLAB functions; it still needs testing to make the final adjustments.</p>
<pre><code>function X = fls(A, b, mu, ncap, smoothed)
% MATLAB port of the R FLS implementation; equation numbers refer to
% Kalaba & Tesfatsion (1989).
if nargin < 3 || isempty(mu),       mu = 1;           end
if nargin < 4 || isempty(ncap),     ncap = length(b); end
if nargin < 5 || isempty(smoothed), smoothed = true;  end
n = size(A, 2);
M = zeros(n, n, ncap);
E = zeros(n, ncap);
X = zeros(n, ncap);
R = eye(n) * mu;
for j = 1:ncap
    % R's qr.solve(R + tcrossprod(A[j,]), diag(n)) is a matrix inverse;
    % in MATLAB, backslash against the identity does the same job.
    Z = (R + A(j,:)' * A(j,:)) \ eye(n);
    M(:,:,j) = mu * Z;                        % (5.7b)
    v = b(j) * A(j,:)';                       % column vector
    if j == 1
        p = zeros(n, 1);
    else
        p = mu * E(:,j-1);
    end
    E(:,j) = Z * (p + v);                     % (5.7c)
    R = -mu * mu * Z;
    R(1:n+1:end) = R(1:n+1:end) + 2*mu;       % add 2*mu to the diagonal
end
% Calculate eqn (5.15) FLS estimate at ncap
Q = -mu * M(:,:,ncap-1);
Q(1:n+1:end) = Q(1:n+1:end) + mu;             % add mu to the diagonal
Ancap = A(ncap,:);
C = Q + Ancap' * Ancap;
d = mu * E(:,ncap-1) + b(ncap) * Ancap';
X(:,ncap) = C \ d;
if smoothed
    % Use eqn (5.16) to obtain smoothed FLS estimates for
    % X(:,1), X(:,2), ..., X(:,ncap-1)
    for j = 1:ncap-1
        l = ncap - j;
        X(:,l) = E(:,l) + M(:,:,l) * X(:,l+1);
    end
else
    X = X(:,ncap);
end
end
</code></pre>
| 73
|
implement regression
|
Failing to implement logistic regression using 'equinox' and 'optax' library
|
https://stackoverflow.com/questions/76146548/failing-to-implement-logistic-regression-using-equinox-and-optax-library
|
<p>I am trying to implement logistic regression using the equinox and optax libraries on top of JAX. While training the model, the loss does not decrease over time and the model does not learn. Below is a reproducible example with a toy dataset for reference:</p>
<pre><code>import jax
import jax.nn as jnn
import jax.numpy as jnp
import jax.random as jrandom
import equinox as eqx
import optax
data_key,model_key = jax.random.split(jax.random.PRNGKey(0),2)
### Generating toy-data
X_train = jax.random.normal(data_key, (1000,2))
y_train = X_train[:,0]+X_train[:,1]
y_train = jnp.where(y_train>0.5,1,0)
### Using equinox and optax
print("Training using equinox and optax")
epochs = 10000
learning_rate = 0.1
n_inputs = X_train.shape[1]
class Logistic_Regression(eqx.Module):
weight: jax.Array
bias: jax.Array
def __init__(self, in_size, out_size, key):
wkey, bkey = jax.random.split(key)
self.weight = jax.random.normal(wkey, (out_size, in_size))
self.bias = jax.random.normal(bkey, (out_size,))
#self.weight = jnp.zeros((out_size, in_size))
#self.bias = jnp.zeros((out_size,))
def __call__(self, x):
return jax.nn.sigmoid(self.weight @ x + self.bias)
@eqx.filter_value_and_grad
def loss_fn(model, x, y):
pred_y = jax.vmap(model)(x)
return -jnp.mean(y * jnp.log(pred_y) + (1 - y) * jnp.log(1 - pred_y))
@eqx.filter_jit
def make_step(model, x, y, opt_state):
loss, grads = loss_fn(model, x, y)
updates, opt_state = optim.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
return loss, model, opt_state
in_size, out_size = n_inputs, 1
model = Logistic_Regression(in_size, out_size, key=model_key)
optim = optax.sgd(learning_rate)
opt_state = optim.init(model)
for epoch in range(epochs):
loss, model, opt_state = make_step(model,X_train,y_train, opt_state)
loss = loss.item()
if (epoch+1)%1000 ==0:
print(f"loss at epoch {epoch+1}:{loss}")
# The following code is implementation of Logistic regression using scikit-learn and pytorch, and it is working well. It is added just for reference
### Using scikit-learn
print("Training using scikit-learn")
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
model = LogisticRegression()
model.fit(X_train,y_train)
y_pred = model.predict(X_train)
print("Train accuracy:",accuracy_score(y_train,y_pred))
## Using pytorch
print("Training using pytorch")
import numpy as np
import torch
import torch.nn as nn
from torch.optim import SGD
from torch.nn import Sequential
X_train = np.array(X_train)
y_train = np.array(y_train)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device:",device)
torch_LR= Sequential(nn.Linear(n_inputs, 1),
nn.Sigmoid())
torch_LR.to(device)
criterion = nn.BCELoss() # define the optimization
optimizer = SGD(torch_LR.parameters(), lr=learning_rate)
train_loss = []
for epoch in range(epochs):
inputs, targets = torch.tensor(X_train).to(device), torch.tensor(y_train).to(device) # move the data to GPU if available
optimizer.zero_grad() # clear the gradients
yhat = torch_LR(inputs.float()) # compute the model output
loss = criterion(yhat, targets.unsqueeze(1).float()) # calculate loss
#train_loss_batch.append(loss.cpu().detach().numpy()) # store the loss
loss.backward() # update model weights
optimizer.step()
if (epoch+1)%1000 ==0:
print(f"loss at epoch {epoch+1}:{loss.cpu().detach().numpy()}")
</code></pre>
<p>I tried the SGD and Adam optimizers with different learning rates, but the result is the same. I also tried both zero and random weight initialisation. For the same data, I tried PyTorch and the LogisticRegression module from scikit-learn (I understand scikit-learn does not use SGD here; it is just a reference to compare performance against). The scikit-learn and PyTorch models are included in the code block above for reference. I have tried this with multiple classification datasets and still face this problem.</p>
|
<p>The first time you print your loss is after 1000 epochs. If you change it to print the loss of the first 10 epochs, you see that the optimizer is rapidly converging:</p>
<pre class="lang-py prettyprint-override"><code> # ...
if epoch < 10 or (epoch + 1)%1000 ==0:
print(f"loss at epoch {epoch+1}:{loss}")
</code></pre>
<p>Here is the result:</p>
<pre><code>Training using equinox and optax
loss at epoch 1:1.237254023551941
loss at epoch 2:1.216030478477478
loss at epoch 3:1.1952687501907349
loss at epoch 4:1.174972414970398
loss at epoch 5:1.1551438570022583
loss at epoch 6:1.1357849836349487
loss at epoch 7:1.1168975830078125
loss at epoch 8:1.098482370376587
loss at epoch 9:1.0805412530899048
loss at epoch 10:1.0630732774734497
loss at epoch 1000:0.6320337057113647
loss at epoch 2000:0.6320337057113647
loss at epoch 3000:0.6320337057113647
</code></pre>
<p>By epoch 1000, the loss has converged to a minimum value from which it does not move.</p>
<p>Given this, it looks like your optimizer is functioning correctly.</p>
<hr />
<p>Edit: I did some debugging and found that <code>pred_y = jax.vmap(model)(x)</code> returns an array of shape <code>(1000, 1)</code> while <code>y</code> has shape <code>(1000,)</code>, so terms like <code>y * jnp.log(pred_y)</code> broadcast to shape <code>(1000, 1000)</code>: pairwise combinations of all outputs rather than per-sample values. The log-loss over these pairwise terms is <strong>not</strong> a standard logistic regression objective.</p>
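<p>The shape mismatch can be seen directly in a small sketch, written here with NumPy (which follows the same broadcasting rules as <code>jax.numpy</code>); squeezing the trailing axis of the model output restores the intended elementwise loss:</p>

```python
import numpy as np

y = np.zeros(4)          # labels, shape (4,)
pred = np.zeros((4, 1))  # vmapped model output, shape (4, 1)

pairwise = y - pred                  # broadcasts to (4, 4): pairwise, not elementwise
elementwise = y - pred.squeeze(-1)   # shape (4,): per-sample differences
print(pairwise.shape, elementwise.shape)
```

<p>One way to fix the original model is to squeeze <code>pred_y</code> inside <code>loss_fn</code> (or return a squeezed value from <code>__call__</code>) so the loss is computed elementwise.</p>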
| 74
|
implement regression
|
Linear Regression with Multiple Variables - Python - Implementation issues
|
https://stackoverflow.com/questions/8971034/linear-regression-with-multiple-variables-python-implementation-issues
|
<p>I am trying to implement linear regression with multiple variables (actually, just 2). I am using the data from Stanford's ML class. I got it working correctly for the single-variable case. The same code <em>should</em> have worked for multiple variables, but it does not. </p>
<p>LINK to the data : </p>
<p><a href="http://s3.amazonaws.com/mlclass-resources/exercises/mlclass-ex1.zip" rel="nofollow" title="DATA">http://s3.amazonaws.com/mlclass-resources/exercises/mlclass-ex1.zip</a></p>
<p>Feature Normalization:</p>
<pre><code>''' This is for the regression with multiple variables problem . You have to normalize features before doing anything. Lets get started'''
from __future__ import division
import os,sys
from math import *
def mean(f,col):
#This is to find the mean of a feature
sigma = 0
count = 0
data = open(f,'r')
for line in data:
points = line.split(",")
sigma = sigma + float(points[col].strip("\n"))
count+=1
data.close()
return sigma/count
def size(f):
count = 0
data = open(f,'r')
for line in data:
count +=1
data.close()
return count
def standard_dev(f,col):
#Calculate the standard_dev . Formula : Sqrt ( Sigma ( x - x') ** (x-x') ) / N )
data = open(f,'r')
sigma = 0
mean = 0
if(col==0):
mean = mean_area
else:
mean = mean_bedroom
for line in data:
points = line.split(",")
sigma = sigma + (float(points[col].strip("\n")) - mean) ** 2
data.close()
return sqrt(sigma/SIZE)
def substitute(f,fnew):
''' Take the old file.
1. Subtract the mean values from each feature
2. Scale it by dividing with the SD
'''
data = open(f,'r')
data_new = open(fnew,'w')
for line in data:
points = line.split(",")
new_area = (float(points[0]) - mean_area ) / sd_area
new_bedroom = (float(points[1].strip("\n")) - mean_bedroom) / sd_bedroom
data_new.write("1,"+str(new_area)+ ","+str(new_bedroom)+","+str(points[2].strip("\n"))+"\n")
data.close()
data_new.close()
global mean_area
global mean_bedroom
mean_bedroom = mean(sys.argv[1],1)
mean_area = mean(sys.argv[1],0)
print 'Mean number of bedrooms',mean_bedroom
print 'Mean area',mean_area
global SIZE
SIZE = size(sys.argv[1])
global sd_area
global sd_bedroom
sd_area = standard_dev(sys.argv[1],0)
sd_bedroom=standard_dev(sys.argv[1],1)
substitute(sys.argv[1],sys.argv[2])
</code></pre>
<p>I have implemented the mean and standard deviation in the code myself, instead of using NumPy/SciPy. After storing the values in a file, a snapshot looks like the following:</p>
<p><strong><code>X1 X2 X3 COST OF HOUSE</code></strong></p>
<pre><code>1,0.131415422021,-0.226093367578,399900
1,-0.509640697591,-0.226093367578,329900
1,0.507908698618,-0.226093367578,369000
1,-0.743677058719,-1.5543919021,232000
1,1.27107074578,1.10220516694,539900
1,-0.0199450506651,1.10220516694,299900
1,-0.593588522778,-0.226093367578,314900
1,-0.729685754521,-0.226093367578,198999
1,-0.789466781548,-0.226093367578,212000
1,-0.644465992588,-0.226093367578,242500
</code></pre>
<p>I run regression on it to find the parameters. The code for that is below:</p>
<pre><code>''' The plan is to rewrite and this time, calculate cost each time to ensure its reducing. Also make it enough to handle multiple variables '''
from __future__ import division
import os,sys
def computecost(X,Y,theta):
#X is the feature vector, Y is the predicted variable
h_theta=calculatehTheta(X,theta)
delta = (h_theta - Y) * (h_theta - Y)
return (1/194) * delta
def allCost(f,no_features):
theta=[0,0]
sigma=0
data = open(f,'r')
for line in data:
X=[]
Y=0
points=line.split(",")
for i in range(no_features):
X.append(float(points[i]))
Y=float(points[no_features].strip("\n"))
sigma=sigma+computecost(X,Y,theta)
return sigma
def calculatehTheta(points,theta):
#This takes a file which has (1,feature1,feature2,so ... on)
#print 'Points are',points
sigma = 0
for i in range(len(theta)):
sigma = sigma + theta[i] * float(points[i])
return sigma
def gradient_Descent(f,no_iters,no_features,theta):
''' Calculate ( h(x) - y ) * xj(i) . And then subtract it from thetaj . Continue for 1500 iterations and you will have your answer'''
X=[]
Y=0
sigma=0
alpha=0.01
for i in range(no_iters):
for j in range(len(theta)):
data = open(f,'r')
for line in data:
points=line.split(",")
for i in range(no_features):
X.append(float(points[i]))
Y=float(points[no_features].strip("\n"))
h_theta = calculatehTheta(points,theta)
delta = h_theta - Y
sigma = sigma + delta * float(points[j])
data.close()
theta[j] = theta[j] - (alpha/97) * sigma
sigma = 0
print theta
print allCost(sys.argv[1],2)
print gradient_Descent(sys.argv[1],1500,2,[0,0,0])
</code></pre>
<p>It prints the following as the parameters:</p>
<p>[-3.8697149722857996e-14, 0.02030369056348706, 0.979706406501678]</p>
<p>All three are horribly wrong :( The exact same thing works with Single variable . </p>
<p>Thanks !</p>
|
<p>The global variables and quadruply nested loops worry me. That and reading and writing the data to files multiple times.</p>
<p>Is your data so big it doesn't easily fit in memory?</p>
<p>Why not use the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow noreferrer">csv</a> module for the file processing?</p>
<p>Why not you use <a href="https://numpy.org/" rel="nofollow noreferrer">Numpy</a> for the numeric part ?</p>
<p>Don't reinvent the wheel</p>
<p>Assuming your data entries are rows, you can normalize your data and do a least squares fit in two lines:</p>
<pre><code>normData = (data-data.mean(axis = 0))/data.std(axis = 0)
c = numpy.dot(numpy.linalg.pinv(normData),prices)
</code></pre>
<hr />
<p><strong>Reply to comment from Original Poster</strong>:</p>
<p>Ok, then the only other advice I can give you then is try to break it up into smaller pieces, so that it's easier to see what's going on. and it's easier to sanity-check the small parts.</p>
<p>It's probably not the problem, but you are using <code>i</code> as the index for two of the loops in that quadruple loop. That's the sort of problem you can avoid by cutting it into smaller scopes.</p>
<p>I think it's been years since I wrote an explicitly nested loop, or declared a global variable.</p>
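<p>A runnable sketch of the two-line approach above, with hypothetical toy numbers standing in for the course data. Note the intercept column of ones is appended <em>after</em> normalization, so a zero-variance column is never divided by its own zero standard deviation:</p>

```python
import numpy as np

# Hypothetical toy data: columns are [area, bedrooms]; prices are targets.
data = np.array([[2104., 3.], [1600., 3.], [2400., 3.],
                 [1416., 2.], [3000., 4.]])
prices = np.array([399900., 329900., 369000., 232000., 539900.])

# Normalize features, then prepend the intercept column of ones.
normData = (data - data.mean(axis=0)) / data.std(axis=0)
X = np.hstack([np.ones((len(data), 1)), normData])

# Least-squares fit via the pseudoinverse.
c = np.linalg.pinv(X) @ prices
predictions = X @ c
```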
| 75
|
implement regression
|
Error while implementing Multi-Linear regression Using pytorch
|
https://stackoverflow.com/questions/75918642/error-while-implementing-multi-linear-regression-using-pytorch
|
<p>I am trying to implement one-variable linear regression using PyTorch. However, I get <code>[nan]</code> as the output of my prediction when I train the model on large integer inputs.</p>
<pre><code>import torch
from torch.autograd import Variable
import numpy as np
class linearRegression(torch.nn.Module):
# instantiating Linear Regression class
def __init__(self):
super(linearRegression, self).__init__()
self.lr_pytorch = torch.nn.Linear(1,1)
def forward(self,input_to_predict):
predictions = self.lr_pytorch(input_to_predict)
return predictions
def fit(self,inputs,outputs,epochs = 10, learning_rate = 0.01):
inputs = Variable(torch.from_numpy(np.array(inputs,dtype=np.float32).reshape(-1,1)))
labels = Variable(torch.from_numpy(np.array(outputs,dtype=np.float32).reshape(-1,1)))
stochastic_gradient_descent = torch.optim.SGD(self.lr_pytorch.parameters(),learning_rate)
mse_loss_function = torch.nn.MSELoss()
for epoch in range(epochs):
stochastic_gradient_descent.zero_grad()
predictions = self.lr_pytorch(inputs)
loss = mse_loss_function(predictions,labels)
loss.backward()
stochastic_gradient_descent.step()
</code></pre>
<pre><code>a = [1,2,3,4,5,6,7,8,9,10,20]
b = [2,4,6,8,10,12,14,16,18,20,40]
c = Variable(torch.from_numpy(np.array([150],dtype=np.float32)))
d = linearRegression()
d.fit(a,b,500)
prediction = d.forward(c)
print(prediction)
</code></pre>
<p>The above block when executed gives the result
<code>tensor([299.9434], grad_fn=<AddBackward0>)</code></p>
<p>However, as soon as i change my last training example to 50 and it's output to 100 as below</p>
<pre><code>a = [1,2,3,4,5,6,7,8,9,10,50]
b = [2,4,6,8,10,12,14,16,18,20,100]
c = Variable(torch.from_numpy(np.array([150],dtype=np.float32)))
d = linearRegression()
d.fit(a,b,500)
prediction = d.forward(c)
print(prediction)
</code></pre>
<p>I get the prediction as
<code>tensor([nan], grad_fn=<AddBackward0>)</code></p>
|
<p>The gradients are getting too large. You could normalize the data or reduce the learning rate.</p>
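<p>To make the point concrete, here is a minimal NumPy sketch (not the asker's PyTorch model) of why the large input value destabilizes gradient descent unless the data is scaled or the learning rate is reduced:</p>

```python
import numpy as np

# Toy 1-D regression y = 2x, fitted by full-batch gradient descent on MSE.
# The large input (50) inflates mean(x**2) to about 262, so stability
# requires lr < 2 / (2 * mean(x**2)), roughly 0.0038; lr = 0.01 blows up.
x = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 50.])
y = 2.0 * x

def fit(lr, steps=1000):
    w = 0.0
    with np.errstate(over="ignore", invalid="ignore"):
        for _ in range(steps):
            grad = 2.0 * np.mean((w * x - y) * x)  # d/dw of mean((w*x - y)**2)
            w -= lr * grad
    return w

print(fit(0.01))    # not finite: the iterates diverge to inf/nan
print(fit(0.0001))  # converges near the true slope 2.0
```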
| 76
|
implement regression
|
Why is my implementation of linear regression not working?
|
https://stackoverflow.com/questions/77569740/why-is-my-implementation-of-linear-regression-not-working
|
<p>I am trying to implement linear regression from scratch in Python.</p>
<p>For reference, here are the mathematical formulae I have used: <a href="https://i.sstatic.net/7zzc1.png" rel="nofollow noreferrer">Equations</a></p>
<p>This is what I tried:</p>
<pre class="lang-py prettyprint-override"><code>class LinearRegression:
def __init__(
self,
features: np.ndarray[np.float64],
targets: np.ndarray[np.float64],
) -> None:
self.features = np.concatenate((np.ones((features.shape[0], 1)), features), axis=1)
self.targets = targets
self.params = np.random.randn(features.shape[1] + 1)
self.num_samples = features.shape[0]
self.num_feats = features.shape[1]
self.costs = []
def hypothesis(self) -> np.ndarray[np.float64]:
return np.dot(self.features, self.params)
def cost_function(self) -> np.float64:
pred_vals = self.hypothesis()
return (1 / (2 * self.num_samples)) * np.dot((pred_vals - self.targets).T, pred_vals - self.targets)
def update(self, alpha: np.float64) -> None:
self.params = self.params - (alpha / self.num_samples) * (self.features.T @ (self.hypothesis() - self.targets))
def gradientDescent(self, alpha: np.float64, threshold: np.float64, max_iter: int) -> None:
converged = False
counter = 0
while not converged:
counter += 1
curr_cost = self.cost_function()
self.costs.append(curr_cost)
self.update(alpha)
new_cost = self.cost_function()
if abs(new_cost - curr_cost) < threshold:
converged = True
if counter > max_iter:
converged = True
</code></pre>
<p>I used the class like this:</p>
<pre class="lang-py prettyprint-override"><code>regr = LinearRegression(features=np.linspace(0, 1000, 200, dtype=np.float64).reshape((20, 10)), targets=np.linspace(0, 200, 20, dtype=np.float64))
regr.gradientDescent(0.1, 1e-3, 1e+3)
regr.cost_function()
</code></pre>
<p>However, I am getting the following errors:</p>
<pre><code>RuntimeWarning: overflow encountered in scalar power
return (1 / (2 * self.num_samples)) * (la.norm(self.hypothesis() - self.targets) ** 4)
RuntimeWarning: invalid value encountered in scalar subtract
if abs(new_cost - curr_cost) < threshold:
RuntimeWarning: overflow encountered in matmul
self.params = self.params - (alpha / self.num_samples) * (self.features.T @ (self.hypothesis() - self.targets))
</code></pre>
<p>What is going wrong exactly?</p>
|
<p>It is overflowing because the numbers in your example are too large. Try scaling them down:</p>
<pre><code>regr = LinearRegression(features=np.linspace(0, 1000, 200, dtype=np.float64).reshape((20, 10))/1000, targets=np.linspace(0, 200, 20, dtype=np.float64)/1000)
regr.gradientDescent(0.1, 1e-3, 1e+3)
regr.cost_function()
</code></pre>
<p>It gives me an output of 0.00474225348416323.</p>
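<p>A more general alternative to dividing everything by 1000 is to standardize each feature column; a minimal sketch using the same synthetic features as in the question:</p>

```python
import numpy as np

features = np.linspace(0, 1000, 200, dtype=np.float64).reshape(20, 10)

# Standardize each column to zero mean and unit variance so the
# squared-error gradients stay small enough for gradient descent.
scaled = (features - features.mean(axis=0)) / features.std(axis=0)
```

<p>The targets can be scaled in the same spirit, as the answer above does by dividing by 1000.</p>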
| 77
|
implement regression
|
How to implement Rolling Regression in Python
|
https://stackoverflow.com/questions/74322560/how-to-implement-rolling-regression-in-python
|
<p>I'm trying to predict the Adjusted Closing Price for the next day for a stock using Rolling Regression.</p>
<pre><code> Open High Low Close Adj Close
0 26.629999 27.000000 26.290001 26.600000 26.599386
1 26.670000 27.450001 26.540001 27.350000 27.349369
2 27.340000 27.440001 26.889999 27.139999 27.139374
3 27.070000 27.420000 26.969999 27.280001 27.279371
4 26.700001 27.190001 26.459999 27.129999 27.129374
</code></pre>
<p>Every row in my database is a trading day.</p>
<p>So here I have a window of 5, trying to predict W + 1 (the price for the 6th day)</p>
<p>I've tried using statsmodel RollingOLS and have this so far</p>
<pre><code>
df_20['const'] = 1
model = RollingOLS(endog =df_20['Adj Close'].values , exog=df_20[['Open', 'Close']],window=5)
rres = model.fit()
rres.params
</code></pre>
<p>The manual told me to add a constant, so I added the <code>'const'</code> column of 1s (note that it is not actually passed in <code>exog</code> above).</p>
<p>This isn't working, my code comes out like this.</p>
<pre><code> Open Close
-4.428939e-07 0.999977
1.411571e-07 0.999977
4.341009e-07 0.999976
3.469735e-07 0.999977
3.208869e-07 0.999977
</code></pre>
<p>Any idea how to do this?</p>
|
<p>You could transform your window into separate columns, like so:</p>
<pre class="lang-py prettyprint-override"><code>
# == Necessary Imports ====================================
# Note: to install ``sklearn`` run: ``pip install scikit-learn``
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn import linear_model
# == Sample DataFrame ====================================
df = pd.DataFrame(
{
'Open': [
26.629999,
26.670000,
27.340000,
27.070000,
26.700001
],
'High': [
27.000000,
27.450001,
27.440001,
27.420000,
27.190001
],
'Low': [
26.290001,
26.540001,
26.889999,
26.969999,
26.459999
],
'Close': [
26.600000,
27.350000,
27.139999,
27.280001,
27.129999
],
'Adj Close': [
26.599386,
27.349369,
27.139374,
27.279371,
27.129374
],
}
)
# == Transform Window on ``n`` days into columns ========
window_size = 3
new_df = df.iloc[window_size:].copy()
row_count, column_count = df.shape
for index, row in new_df.iterrows():
for i in range(1, window_size+1):
for column in df.columns:
new_df.loc[new_df.index == index, f'{column}_D-{i}'] = df.iloc[index - i][column]
# == Separate the Target column from the rest of the columns ======
# Note: perform a ``train_test_split`` when running this code for real
y = new_df.pop('Adj Close')
X = new_df
# == Normalize the data and train the model ========================
scaler = StandardScaler()
reg = linear_model.LinearRegression()
# Normalizing the data
X_norm = scaler.fit_transform(X.values)
# Training a linear regression model
reg = reg.fit(X_norm, y)
# Adding predictions back to the "original" dataframe
new_df['Predicted Adj Close'] = reg.predict(X_norm)
new_df
# Returns:
# Open High Low Close Open_D-1 High_D-1 Low_D-1 \
# 3 27.070000 27.420000 26.969999 27.280001 27.34 27.440001 26.889999
# 4 26.700001 27.190001 26.459999 27.129999 27.07 27.420000 26.969999
# Close_D-1 Adj Close_D-1 Open_D-2 High_D-2 Low_D-2 Close_D-2 \
# 3 27.139999 27.139374 26.67 27.450001 26.540001 27.350000
# 4 27.280001 27.279371 27.34 27.440001 26.889999 27.139999
# Adj Close_D-2 Open_D-3 High_D-3 Low_D-3 Close_D-3 Adj Close_D-3 \
# 3 27.349369 26.629999 27.000000 26.290001 26.60 26.599386
# 4 27.139374 26.670000 27.450001 26.540001 27.35 27.349369
# Predicted Adj Close
# 3 27.279371
# 4 27.129374
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.sstatic.net/ABJIS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ABJIS.png" alt="enter image description here" /></a></p>
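<p>One note on the original <code>RollingOLS</code> attempt: regressing the same day's Adj Close on that day's Close will always yield a coefficient near 1 (as in the question's output), because the two series are nearly identical. To predict the <em>next</em> day, shift the target before fitting any rolling model; a minimal pandas sketch with illustrative column names:</p>

```python
import pandas as pd

# Day t's features should predict day t+1's price, so shift the
# target back by one row before fitting.
df = pd.DataFrame({
    "Open":  [26.63, 26.67, 27.34, 27.07, 26.70],
    "Close": [26.60, 27.35, 27.14, 27.28, 27.13],
})
df["AdjCloseNext"] = df["Close"].shift(-1)  # tomorrow's close as the target
df = df.dropna()                            # the last row has no "tomorrow"
```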
| 78
|
implement regression
|
How to implement multi-branch regression trees in R
|
https://stackoverflow.com/questions/17294632/how-to-implement-multi-branch-regression-trees-in-r
|
<p>Is it possible to do the multi branch regression trees(non binary split) using R?</p>
|
<p>A word on binary trees, contesting superiority of non-binary: <a href="http://bus.utk.edu/stat/datamining/Tree%20Structured%20Data%20Analysis%20%28SPSS%29.pdf" rel="nofollow">here</a></p>
<p>Tree models in R: <a href="http://plantecology.syr.edu/fridley/bio793/cart.html" rel="nofollow">here</a></p>
<p>R Party package for recursive partitioning: <a href="http://cran.r-project.org/web/packages/party/party.pdf" rel="nofollow">here</a></p>
| 79
|
implement regression
|
Classification and regression trees implementation in java
|
https://stackoverflow.com/questions/9708269/classification-and-regression-trees-implementation-in-java
|
<p>Are there any libs implementing classification and regression trees in java? Something very close to Matlab's classregtree command required.</p>
|
<p>Try <a href="http://www.cs.waikato.ac.nz/ml/weka/" rel="nofollow">Weka</a></p>
| 80
|
implement regression
|
How can I implement a linear regression line in a Vizframe Chart?
|
https://stackoverflow.com/questions/42939004/how-can-i-implement-a-linear-regression-line-in-a-vizframe-chart
|
<p>I have a scatter plot Vizframe chart in which I need to include a linear regression/trend line. Any idea how this can be done? It appears this is not something Vizframe offers out of the box; I can't find a solution for this! </p>
<p><strong>Question:</strong></p>
<p>Any suggestions on a feasible way to implement a regression line on a Scatter Plot Vizframe chart?</p>
<p>Here is the code I have for the setup. The scatter plot opens in a dialog/modal when a button is pressed.</p>
<pre><code>sap.ui.define([
'jquery.sap.global',
'vizConcept/controller/BaseController',
'sap/ui/model/json/JSONModel',
'vizConcept/model/viewControls',
'sap/m/Button',
'sap/m/Dialog',
],
function (jQuery, BaseController, JSONModel, viewControls, Button, Dialog) {
"use strict";
var controls;
var mainController = BaseController.extend("vizConcept.controller.Main", {
onInit: function(oEvent) {
// Access/expose the defined model(s) configured in the Component.js or Manifest.json within the controller.
this.getView().setModel(this.getOwnerComponent().getModel("products"), "products");
var oModel = this.getView().getModel("products");
this.getView().setModel(oModel);
var sUrl = "#" + this.getOwnerComponent().getRouter().getURL("page2");
$(function() {
var dataset = new sap.viz.ui5.data.FlattenedDataset({
dimensions : [
{
axis : 1,
name : 'Award Date',
value : "{AwdDate}"
}
],
measures : [
{
group: 1,
name : 'Award Date',
value : '{Hist}'
},
{
group: 2,
name : 'Current PPI',
value : '{Current}'
}
],
data : {
path : "/ProductCollection"
}
});
var scatterViz = new sap.viz.ui5.Scatter({
id : "idscatter",
width : "1000px",
height : "400px",
title : {
text : 'Pricing Tool Scatter Plot Example'
},
xAxis : {
title : {
visible : true
}
},
yAxis : {
title : {
visible : true
}
},
dataset : dataset
});
scatterViz.setModel(sap.ui.getCore().getModel());
scatterViz.setModel(oModel);
var dlg = new sap.m.Dialog({
id: 'vizModal',
title: 'Scatter Plot Example Viz',
width : "1800px",
height : "600px",
content : [scatterViz],
beginButton: new Button({
text: 'Close',
press: function () {
dlg.close();
}
})
});
(new sap.m.Button({
text: 'open',
type: 'Accept',
press: function() {
dlg.open();
scatterViz.invalidate();
}
})).placeAt('content');
});
},
onToPage2 : function () {
this.getOwnerComponent().getRouter().navTo("page2");
},
});
return mainController;
});
</code></pre>
<p><strong>Edit</strong></p>
<p>Here is the 'products' model that is outputted on the vizframe chart. I have the products model defined in the manifest.json but the connection there is fine:</p>
<p><strong>products model</strong></p>
<pre><code> {
"ProductCollection": [
{
"Item": "1",
"AwdDate": "20160715",
"Hist": 171.9,
"Current": 183
},
{
"Item": "2",
"AwdDate": "20160701",
"Hist" : 144.3,
"Current": 158.6
},
{
"Item": "3",
"AwdDate": "20150701",
"Hist": 160,
"Current": 165
},
{
"Item": "1",
"AwdDate": "20160715",
"Hist": 201,
"Current": 167
},
{
"Item": "2",
"AwdDate": "20160801",
"Hist" : 175.3,
"Current": 178.2
},
{
"Item": "3",
"AwdDate": "20150721",
"Hist": 160,
"Current": 147
},
{
"Item": "1",
"AwdDate": "20160715",
"Hist": 175.9,
"Current": 185.2
},
{
"Item": "2",
"AwdDate": "20161101",
"Hist" : 165.3,
"Current": 158.2
},
{
"Item": "3",
"AwdDate": "201700101",
"Hist": 160,
"Current": 165
},
{
"Item": "4",
"AwdDate": "201600401",
"Hist": 173,
"Current": 177
}
]
};
</code></pre>
<p><strong>Edit 2</strong>
Here is my attempt at the solution offered <a href="https://blogs.sap.com/2016/08/07/add-linear-and-exponential-trend-lines-to-sapui5-line-chart/" rel="nofollow noreferrer">here</a>. But nothing appears after this is included in the onInit() function of the controller.</p>
<pre><code>var regressionData = [];
for (var i = 0; i < 10; i++) {
regressionData[i] = [oData.ProductCollection[i].Current, oData.ProductCollection[i].Hist];
};
regression('linear', regressionData);
</code></pre>
|
<p>Unfortunately, you have some limitations inherent to the viz charts. Namely:</p>
<ul>
<li>You cannot add "oblique" reference lines (= the trend line) to the chart. You can only add vertical / horizontal ones via the <a href="https://sapui5.hana.ondemand.com/sdk/explored.html#/sample/sap.viz.sample.ReferenceLine/preview" rel="nofollow noreferrer">Reference Line</a> feature. Check out the (non-intuitive) documentation on this: <a href="https://sapui5.hana.ondemand.com/sdk/docs/vizdocs/index.html#reference/chartProperty/Charts/Scatter%20%20(5)/Scatter%20Plot/" rel="nofollow noreferrer">Viz Charts Documentation</a> (look at the plotArea.referenceLine.line; it is just a number = will be a horizontal / vertical line depending on your chart type or orientation).</li>
<li>You cannot combine a scatter plot with a line chart. If you look in the same <a href="https://sapui5.hana.ondemand.com/sdk/docs/vizdocs/index.html#reference/chartProperty/Charts/Scatter%20%20(5)/Scatter%20Plot/" rel="nofollow noreferrer">Viz Charts Documentation</a>, you will see that in the Combination "chapter" on the left hand side, there is no "Line - Scatter" combination.</li>
</ul>
<p>Also, your X values (AwDate) are ABAP-style dates as <strong>string</strong>. It is more appropriate to use a date / time chart to correctly display them. Anyway, regression makes a lot more sense when you use dates. Otherwise your data is "categorical" and the only way you can make regression is by sorting them and considering them equidistant on the X axis - not what you would normally want if you have non-equidistant data (like the one in your example).</p>
<p>My proposal would be to:</p>
<ul>
<li>Use a line chart instead of a scatter chart (such that you can also display the trend line). </li>
<li>Convert the ABAP-style strings into dates, such that you can use a timeseries chart. You have some "wrong" dates in your example data btw: "2016<strong>00</strong>401".</li>
<li>Do the regression for whatever you want and add it as a separate "trend" series. You will need to compute a point on the "trend" series for each point on your other series (basically, you will have to add a "trend" attribute to each line in your ProductCollection). If you are using an OData model, then you will need to either switch to an JSON client model or do some ugly workarounds with formatters.</li>
</ul>
<p>As requested, I made an example implementation here: <a href="https://jsfiddle.net/93mx0yvt/23/" rel="nofollow noreferrer">https://jsfiddle.net/93mx0yvt/23/</a>. The regression algorithm I borrowed from here: <a href="https://dracoblue.net/dev/linear-least-squares-in-javascript/" rel="nofollow noreferrer">https://dracoblue.net/dev/linear-least-squares-in-javascript/</a>. The main points of the code are:</p>
<pre class="lang-js prettyprint-override"><code>// format for parsing the ABAP-style dates
var oFormat = DateFormat.getDateInstance({
pattern: "yyyyMMdd"
});
// maps an ProductCollection entry to a new object
// which has the parsed date (and only the needed attributes)
var fnMapData = function(oEntry) {
return {
date: oFormat.parse(oEntry.AwdDate),
current: oEntry.Current,
historical: oEntry.Hist
};
};
var fnProcessData = function(oD) {
var aEntries = oD.ProductCollection.map(fnMapData),
aXs = aEntries.map(function(oE) { // get the Xs
// we take the millis to be able to do arithmetics
return oE.date.getTime();
}),
aYs = aEntries.map(function(oE) { // get the Ys (hist)
return oE.historical;
}),
//changed the function to only return only result Ys
aRs = findLineByLeastSquares(aXs, aYs);
//save the Ys into the result
for (var i = 0; i < aEntries.length; ++i) {
aEntries[i].trend = aRs[i];
}
return {
data: aEntries
};
};
</code></pre>
<p>You can use then the data returned by the <code>fnProcessData</code> function inside a JSONModel and then build a simple multi-series date/time line chart based on it.</p>
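<p>For reference, here is a minimal linear least-squares helper in the spirit of the one linked above (the linked implementation may differ in its exact signature); it returns the fitted trend value for each input x, ready to use as the "trend" series:</p>

```javascript
// Ordinary least squares for y = slope * x + intercept; returns the
// fitted y value for each x.
function findLineByLeastSquares(xs, ys) {
    const n = xs.length;
    const sumX = xs.reduce((a, b) => a + b, 0);
    const sumY = ys.reduce((a, b) => a + b, 0);
    const sumXY = xs.reduce((a, x, i) => a + x * ys[i], 0);
    const sumXX = xs.reduce((a, x) => a + x * x, 0);
    const slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    const intercept = sumY / n - slope * (sumX / n);
    return xs.map((x) => slope * x + intercept);
}

console.log(findLineByLeastSquares([1, 2, 3], [2, 4, 6])); // [ 2, 4, 6 ]
```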
| 81
|
implement regression
|
Implementations of local regression and local likelihood methods
|
https://stackoverflow.com/questions/14411627/implementations-of-local-regression-and-local-likelihood-methods
|
<p>I am looking for an efficient implementation of local regression (LOESS) and local likelihood methods such as local logistic regression (local likelihood methods are discussed, for example, in section 6.5 of <a href="http://www-stat.stanford.edu/%7Etibs/ElemStatLearn/" rel="nofollow noreferrer">Elements of Statistical Learning</a> by Hastie et. al.).</p>
<p>I would prefer a C++ or Python implementation, but pointers to R (where I know that LOESS is implemented, but I can't find a local likelihood method) or Java would also suffice.</p>
|
<p>In R, the 'locfit' and 'mgcv' packages implement forms of local regression (locfit also covers local likelihood estimation). I believe the locfit package is simply a syntactic bridge to an underlying C package. (But not C++.)</p>
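<p>Since the question also asks for a Python option: below is a minimal NumPy sketch of locally weighted (LOESS-style) linear regression at a single query point, with a Gaussian kernel standing in for the usual tricube weights. This is an illustration of the technique, not production code:</p>

```python
import numpy as np

def local_linear(x0, x, y, tau=0.5):
    """Fit a weighted linear regression around x0 and evaluate it there.
    tau is the kernel bandwidth; larger tau gives a smoother fit."""
    w = np.exp(-((x - x0) ** 2) / (2.0 * tau ** 2))   # Gaussian weights
    X = np.column_stack([np.ones_like(x), x])         # design matrix [1, x]
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted normal equations
    return beta[0] + beta[1] * x0

x = np.linspace(0.0, 5.0, 20)
y = 3.0 * x + 1.0
print(local_linear(2.0, x, y))  # close to the true value 7.0
```

<p>For local likelihood methods (e.g. local logistic regression), the weighted least-squares solve is replaced by a weighted maximum-likelihood fit at each query point.</p>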
| 82
|
implement regression
|
ValueError: Unknown label type: continuous. When implementing regression
|
https://stackoverflow.com/questions/74815755/valueerror-unknown-label-type-continuous-when-implementing-regression
|
<p>I am trying to predict the price of taxi fares using a neural network, I managed to run it for one optimizer and for a given number of epochs, but when implementing gridSearchCV() to try out different optimizers and different amounts of epochs it outputs an error: [ValueError: Unknown label type: continuous.]</p>
<pre><code>def create_model(activation = 'relu', optimizer='adam'):
    model = keras.models.Sequential([
        keras.layers.Dense(128, activation=activation),
        keras.layers.Dense(64, activation=activation),
        keras.layers.Dense(32, activation='sigmoid'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer=optimizer, loss='mean_squared_error') #, metrics=['neg_mean_squared_error']
    return model

model = KerasClassifier(build_fn=create_model,
                        epochs=50,
                        verbose=0)

# activations = ['tanh', 'relu', 'sigmoid']
optimizer = ['SGD', 'RMSprop', 'Adadelta', 'Adam', 'Adamax', 'Nadam']
epochs = [50, 100, 150, 200, 250, 300]
# batches = [32, 64, 128, 0]

param_grid = dict(optimizer=optimizer,  # activation=activations,
                  epochs=epochs         # batch_size=batches
                  )

grid = GridSearchCV(
    estimator=model,
    param_grid=param_grid,
    n_jobs=-1,
    cv=3,
    scoring='neg_mean_squared_error',
    verbose=2
)

grid_result = grid.fit(train_in, train_out)
</code></pre>
<p>The last line of the code, fitting the model, is outputting this error.
All the inputs (19 features) are numerical, floats and integers. The output is a float value.</p>
|
<p>You mention wanting to implement a regression task. But the wrapper you are using is the <code>KerasClassifier</code> for classification. This is not meant for regression with continuous target. Hence the error message for classifiers</p>
<pre><code>ValueError: Unknown label type: continuous
</code></pre>
<p>Use the <code>KerasRegressor</code> instead as a wrapper, and it should work fine.</p>
<pre><code>model = KerasRegressor(build_fn=create_model,
epochs=50,
verbose=0)
</code></pre>
| 83
|
implement regression
|
Python, GD and SGD implementation on Linear Regression
|
https://stackoverflow.com/questions/48843721/python-gd-and-sgd-implementation-on-linear-regression
|
<p>I am trying to understand and implement these algorithms in a simple linear regression example. It is clear to me that full-batch gradient descent uses all the data to calculate the gradient, while stochastic gradient descent uses only one sample at a time.</p>
<p>Full Batch Gradient Descent:</p>
<pre><code>import pandas as pd
from math import sqrt
df = pd.read_csv("data.csv")
df = df.sample(frac=1)
X = df['X'].values
y = df['y'].values
m_current=0
b_current=0
epochs=100000
learning_rate=0.0001
N = float(len(y))
for i in range(epochs):
    y_current = (m_current * X) + b_current
    cost = sum([data**2 for data in (y-y_current)]) / N
    rmse = sqrt(cost)
    m_gradient = -(2/N) * sum(X * (y - y_current))
    b_gradient = -(2/N) * sum(y - y_current)
    m_current = m_current - (learning_rate * m_gradient)
    b_current = b_current - (learning_rate * b_gradient)
print("RMSE: ", rmse)
</code></pre>
<p>Full Batch Gradient Descent output <code>RMSE: 10.597894381512043</code></p>
<p>Now I tried to implement Stochastic Gradient Descent on this code, and it looks like this:</p>
<pre><code>import pandas as pd
from math import sqrt
df = pd.read_csv("data.csv")
df = df.sample(frac=1)
X = df['X'].values
y = df['y'].values
m_current=0
b_current=0
epochs=100000
learning_rate=0.0001
N = float(len(y))
mini = df.sample(n=1) # get one random row from dataset
X_mini = mini['X'].values
y_mini = mini['y'].values
for i in range(epochs):
    y_current = (m_current * X) + b_current
    cost = sum([data**2 for data in (y-y_current)]) / N
    rmse = sqrt(cost)
    m_gradient = -(2/N) * (X_mini * (y_mini - y_current))
    b_gradient = -(2/N) * (y_mini - y_current)
    m_current = m_current - (learning_rate * m_gradient)
    b_current = b_current - (learning_rate * b_gradient)
print("RMSE: ", rmse)
</code></pre>
<p>Outputs: <code>RMSE: 27.941268469783633</code>, <code>RMSE: 20.919246260939282</code>, <code>RMSE: 31.100985268167648</code>, <code>RMSE: 21.023479528518386</code>, <code>RMSE: 19.920972478204785</code>...</p>
<p>The results I got using sklearn SGDRegressor (with the same settings): </p>
<pre><code>import pandas as pd
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
from math import sqrt
data= pd.read_csv('data.csv')
x = data.X.values.reshape(-1,1)
y = data.y.values.reshape(-1,1).ravel()
Model = linear_model.SGDRegressor(alpha = 0.0001, shuffle=True, max_iter = 100000)
Model.fit(x,y)
y_predicted = Model.predict(x)
mse = mean_squared_error(y, y_predicted)
print("RMSE: ", sqrt(mse))
</code></pre>
<p>Outputs: <code>RMSE: 10.995881334048224</code>, <code>RMSE: 11.75907544873036</code>, <code>RMSE: 12.981134247509486</code>, <code>RMSE: 12.298263437187988</code>, <code>RMSE: 12.549948073154608</code>...</p>
<p>The results obtained by the algorithm above are worse than the scikit-learn model's results. I wonder where I made a mistake? My algorithm is also quite a bit slower (a few seconds).</p>
|
<p>It seems like you set <code>alpha</code> in <code>SGDRegressor</code> to the learning rate. <code>alpha</code> is not the learning rate.</p>
<p>To set a constant learning rate, set <code>SGDRegressor</code>'s <code>learning_rate</code> to <code>'constant'</code> and <code>eta0</code> to your learning rate.</p>
<p>You'd also need to set <code>alpha</code> to 0 since that's the regularization term and your implementation doesn't use that.</p>
<p>Also note that since these algorithms are stochastic in nature, setting <code>random_state</code> to some fixed value might be a good idea.</p>
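<p>A minimal sketch of those settings on synthetic data (the data and hyperparameter values here are illustrative assumptions, not taken from the question):</p>

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Synthetic 1-D data: y = 3x + 2 plus noise (illustrative values).
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + 2 + rng.normal(scale=0.5, size=200)

# learning_rate='constant' with eta0 gives a fixed step size,
# alpha=0 disables the L2 penalty, tol=None runs all max_iter epochs,
# and random_state fixes the shuffling for reproducibility.
model = SGDRegressor(learning_rate='constant', eta0=0.001, alpha=0.0,
                     max_iter=2000, tol=None, random_state=42)
model.fit(X, y)
print(model.coef_, model.intercept_)
```

<p>With these settings the fitted slope and intercept should land close to the true 3 and 2 used to generate the data.</p>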
| 84
|
implement regression
|
Implementation of Logistic regression
|
https://stackoverflow.com/questions/49818519/implementation-of-logistic-regression
|
<p>To begin with, I am Japanese and not good at English, so my sentences may be incorrect. Sorry.
I wrote the code below, which implements logistic regression.
I tested this code on a dataset, but the accuracy rate is not appropriate,
so there must be some mistake in this code. If so, please let me know.
Moreover, I want to know how to plot the data and the decision boundary.</p>
<pre><code>class logisticr(object):
    def __init__(self, eta=0.01):
        import numpy as np
        from numpy import random
        self.eta = eta
    def sigFunc(self, z):
        return 1.0 / (1.0 + np.exp( -z ))
    def predict(self, X):
        X = np.matrix(X)
        m, n = X.shape
        X = np.c_[np.matrix(np.ones((m, 1))), X]
        z = X * self.w_
        phi = self.sigFunc(z)
        return self.decide(phi)
    def decide(self, x):
        return np.where(x >= 0.5, 1, 0)
    def costfunc(self, X):
        z = X * self.w_
        phi = self.sigFunc(z)
        J = -y.T * np.log(phi) - (np.ones((m, 1)) - y).T * np.log(np.ones((m, 1)) - phi)
        return J
    def fit(self, X, y):
        X = np.matrix(X).T
        m, n = X.shape
        print "the number of features is %d" %n
        X = np.c_[np.matrix(np.ones((m, 1))), X]
        y = np.matrix(y).T
        self.w_ = np.matrix(np.zeros((n + 1, 1)))
        for xi, yi in zip(X, y):
            zi = xi * self.w_
            phii = self.sigFunc(zi)
            gradJi = -xi.T * (yi - phii)
            self.w_ -= self.eta * gradJi
            self.eta *= 0.1
        print "final parameter is (%d, %d)" %(self.w_[0], self.w_[1])
        z = X * self.w_
        phi = self.sigFunc(z)
        correctAnswer = np.where(np.array(y == self.decide(phi)) == True, 1, 0)
        return float(sum(correctAnswer)) / len(correctAnswer)
</code></pre>
|
<p>I modified your code a little bit. The fitting is now solved in two ways: as a simple gradient descent and by means of the scipy optimizer. The gradient descent is very slow and should not be used for real problems. I tested the class on the data taken from the Machine Learning course at <a href="https://www.coursera.org/learn/machine-learning" rel="nofollow noreferrer">Coursera</a></p>
<p>I'm not sure I can share the original data set. Here is a short fragment of it:</p>
<pre><code>34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1
79.0327360507101,75.3443764369103,1
45.08327747668339,56.3163717815305,0
61.10666453684766,96.51142588489624,1
</code></pre>
<p>Here are the outputs of two solutions:</p>
<p><strong>Simple gradient descent</strong> (after 100000 iterations...)</p>
<pre><code>Number of Features: 3
Found Solution:
[[-4.81180027]
[ 0.04528064]
[ 0.03819149]]
Train Accuracy: 0.910000
</code></pre>
<p><strong>Scipy Optimizer</strong> (very quickly)</p>
<pre><code>Number of Features: 3
Found Solution:
[[-25.16131869]
[ 0.20623159]
[ 0.20147149]]
Train Accuracy: 0.890000
</code></pre>
<p>The decision boundary in this code works only for the same type of the problem (a straight line separating two sets). For a better accuracy you can try to involve combinations of the features as new features (be aware of a possible overfitting).</p>
<p><a href="https://i.sstatic.net/Qi009.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qi009.png" alt="Logistic regression Decision boundary as a straight line separating two sets"></a></p>
<p>Here is the code:</p>
<pre><code>import numpy as np
from numpy import *
import scipy.optimize as op
import matplotlib.pyplot as plt

class Logisticr():
    def __init__(self, X, y, alg, eta, w):
        self.X = X
        self.y = y
        self.alg = alg
        self.eta = eta
        self.w = w
        self.m, self.n = np.shape(X)
    def sigFunc(self, z):
        return 1.0 / (1.0 + np.exp( -z ))
    def decide(self, x):
        return np.where(x >= 0.5, 1, 0)
    def costfunc(self, w, X, y):
        w = w.reshape((self.n, 1))
        z = X * w
        phi = self.sigFunc(z)
        # calculating the cost function
        part1 = np.multiply(y, np.log(phi))
        part2 = np.multiply((1 - y), np.log(1 - phi))
        J = (-part1 - part2).sum()/self.m
        # calculating the gradient
        grad = X.T * (phi - y) / self.m
        return J, grad
    def graddescent(self, maxiter):
        for i in range(0, maxiter):
            J, grad = self.costfunc(self.w, self.X, self.y)
            self.w = self.w - self.eta*grad
        return self.w
    def fit(self):
        print "Number of Features: %d" %self.n
        if self.alg == 0:
            _maxiter = 100000
            self.w = self.graddescent(_maxiter)
        else:
            Result = op.minimize(fun = self.costfunc,
                                 x0 = self.w,
                                 args = (self.X, self.y),
                                 method = 'TNC',
                                 jac = True);
            self.w = Result.x
            self.w = np.matrix(self.w).T
        print "Found Solution:"
        print self.w
        z = self.X * self.w
        phi = self.sigFunc(z)
        correctAnswer = np.where(np.array(self.y == self.decide(phi)) == True, 1, 0)
        accuracy = float(sum(correctAnswer)) / len(correctAnswer)
        print "Train Accuracy: %f" %accuracy
    def plot(self):
        if self.n == 3:
            ind_1 = np.where(self.y == 1)
            ind_0 = np.where(self.y == 0)
            x1_1 = self.X[:, [1]].min()
            x1_2 = self.X[:, [1]].max()
            x2_1 = -(self.w[0, 0] + self.w[1, 0]*x1_1)/self.w[2, 0]
            x2_2 = -(self.w[0, 0] + self.w[1, 0]*x1_2)/self.w[2, 0]
            plt.plot(self.X[ind_1, [1]], self.X[ind_1, [2]], "bo", markersize=3)
            plt.plot(self.X[ind_0, [1]], self.X[ind_0, [2]], "ro", markersize=3)
            plt.plot([x1_1, x1_2], [x2_1, x2_2], "g-")
            plt.xlabel("Feature 1")
            plt.ylabel("Feature 2")
            plt.title("Decision boundary")
            plt.show()
        return 1

# Main module
_f = loadtxt('data/ex2data1.txt', delimiter=',')
_X, _y = _f[:,[0,1]], _f[:,[2]]
_m = np.shape(_X)[0]
# add a column of 1
_X = np.hstack((np.matrix(np.ones((_m, 1))), _X))
_y = np.matrix(_y)

_alg = 1     # 0 = using simple gradient descent
             # 1 = using an optimizer
_eta = 0.001 # initial eta value for _type = 0

_n = np.shape(_X)[1]
_w = np.matrix(np.zeros((_n, 1)))

# creating an instance of the Logisticr
lr = Logisticr(_X, _y, _alg, _eta, _w);
lr.fit()
lr.plot()
</code></pre>
<p><strong>UPDATE</strong></p>
<p>I tried to work with the referenced data set. It looks for me like the classes are not good separable using only Temperature and Humidity. Even if you use the high order feature combinations you will probably overfit the classifier. Don't you think so?</p>
<p><a href="https://i.sstatic.net/PxIaM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PxIaM.png" alt="Not separable classes using logistic regression"></a></p>
| 85
|
implement regression
|
Implementing Polynomial Regression in Python (error:could not convert)
|
https://stackoverflow.com/questions/57076184/implementing-polynomial-regression-in-python-errorcould-not-convert
|
<h1>Hello world,</h1>
<h1>Problem</h1>
<p>Currently, trying to predict the margin (in percent) from a dataframe with 48098 rows and 70 features (all types). After that, the get dummies has been done in order to have only numerical values, which give the following shape (Shape Df is: (48098, 572)). </p>
<p>However, during the exploration step, we saw that the target doesn't really follow a normal distribution (as shown in the images below).
<a href="https://i.sstatic.net/oiCrF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oiCrF.png" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/SFkxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SFkxB.png" alt="enter image description here"></a></p>
<p>The performance is therefore 0.76 and 0.74 on the training and test sets respectively.</p>
<h1>Things tried</h1>
<p>Some solutions have been tried such as:</p>
<ul>
<li>Fitting the whole X set (before the train/test split), which gives the following error:</li>
</ul>
<h2>First Error:</h2>
<p>Thus, I tried to implement polynomial regression. The problem arises when the function (PolynomialFeatures) fits the training set. In fact, the following error appears:</p>
<pre><code> MemoryError Traceback (most recent call last)
<ipython-input-19-bf8dbdba1272> in <module>
3 poly = PolynomialFeatures(degree=2, include_bias=False)
4 poly = poly.fit(X_train)
----> 5 X_poly = poly.transform(X_train)
~\AppData\Local\Continuum\anaconda3\Anaconda\lib\site-packages\sklearn\preprocessing\data.py in transform(self, X)
1504 XP = sparse.hstack(columns, dtype=X.dtype).tocsc()
1505 else:
-> 1506                 XP = np.empty((n_samples, self.n_output_features_), dtype=X.dtype)
1507 for i, comb in enumerate(combinations):
1508 XP[:, i] = X[:, comb].prod(1)
MemoryError:
</code></pre>
<ul>
<li>Fitting Polynomial Features before get_dummies, in order to have fewer than 500 columns, which gives the following ValueError: could not convert string to float: 'Loans'.</li>
</ul>
<h3>Second Error:</h3>
<pre><code> ---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-e379555bc257> in <module>
5 # the default "include_bias=True" adds a feature that's constantly 1
6 poly = PolynomialFeatures(degree=2, include_bias=False)
----> 7 poly = poly.fit(X)
8 X_poly = poly.transform(X)
1458 self : instance
1459 """
-> 1460 n_samples, n_features = check_array(X, accept_sparse=True).shape
1461 combinations = self._combinations(n_features, self.degree,
1462 self.interaction_only,
565 # make sure we actually converted to numeric:
566 if dtype_numeric and array.dtype.kind == "O":
--> 567 array = array.astype(np.float64)
568 if not allow_nd and array.ndim >= 3:
569 raise ValueError("Found array with dim %d. %s expected <= 2."
ValueError: could not convert string to float: 'Loans'
</code></pre>
<h1>Some Code</h1>
<p>First: Polynomial</p>
<pre><code> from sklearn.preprocessing import PolynomialFeatures
y = df.MARGIN
X = df.drop('MARGIN', axis=1)
poly = PolynomialFeatures(degree=2, include_bias=False)
poly = poly.fit(X)
X_poly = poly.transform(X)
</code></pre>
<p>Second: Train/Test split</p>
<pre><code> from sklearn.model_selection import train_test_split
y = df_ohe.MARGIN
X = df_ohe.drop('MARGIN', axis=1)
# Split into Tain and Test set
X_train,X_test,y_train,y_test = train_test_split (X, y, test_size=0.25, random_state=0)
</code></pre>
<h1>Expected Solution</h1>
<p>The expected result would be to complete the polynomial regression, having something like:</p>
<h3>First: Implementing polynomial Regression on X</h3>
<pre><code> from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=10, include_bias=False)
poly.fit(X)
X_poly = poly.transform(X)
</code></pre>
<h3>Second: Split X on training and test set</h3>
<p>X_train and X_test would then contain the polynomial features?</p>
<h3>Third: Predicting</h3>
<pre><code>from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(X_train, y_train)
lr_pred = lr.predict(X_test)
train_R2_lr = lr.score(X_train, y_train)
test_R2_lr = lr.score(X_train, y_test)
print("Training set score: {:.2f}".format(train_R2_lr))
print("Test set score: {:.2f}".format(test_R2_lr))
</code></pre>
<h2>Questions:</h2>
<ul>
<li>When do we apply get_dummies? (before or after the split?)</li>
<li>What should we do when we have this many columns?</li>
<li>Is this the 'right' way to do it? (I'm new to this field, so any help is welcome.)</li>
</ul>
<p>If you have any suggestions, do not hesitate to share them. Thanks in advance to those who take the time.</p>
<p>Have a good day/night!</p>
| 86
|
|
implement regression
|
Numpy implementation for regression using NN
|
https://stackoverflow.com/questions/63093127/numpy-implementation-for-regression-using-nn
|
<p>I am implementing my own neural network model for regression using only <code>NumPy</code>, and I'm getting really weird results when I test my model on m > 1 samples (for m=1 it works fine). It seems like the model collapses and predicts only a specific value for the whole batch:</p>
<pre><code>Input:
X [[ 7.62316802 -6.12433912]
[ 1.11048966 4.97509421]]
Expected Output:
Y [[16.47952332 12.50288412]]
Model Output
y_hat [[10.42446234 10.42446234]]
</code></pre>
<p>Any idea what might cause this issue?</p>
<p>My code:</p>
<pre><code>import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# np.seterr(all=None, divide=None, over=None, under=None, invalid=None)

data_x = np.random.uniform(0, 10, size=(2, 1))
data_y = (2 * data_x).sum(axis=0, keepdims=True)
# data_y = data_x[0, :] ** 2 + data_x[1, :] ** 2
# data_y = data_y.reshape((1, -1))
# # fig = plt.figure()
# # ax = fig.add_subplot(111, projection='3d')
# # ax.scatter(data_x[0, :], data_x[1, :], data_y)
# # plt.show()

memory = dict()
nn_architecture = [
    {"input_dim": 2, "output_dim": 6, "activation": "sigmoid", "bias": True},
    {"input_dim": 6, "output_dim": 4, "activation": "sigmoid", "bias": True},
    {"input_dim": 4, "output_dim": 1, "activation": "relu", "bias": True}
]

def init_network_parameters(nn_architecture):
    parameters = []
    for idx, layer in enumerate(nn_architecture):
        layer_params = {}
        input_dim, output_dim, activation, bias = layer.values()
        W = np.random.uniform(0, 1, (output_dim, input_dim))
        B = np.zeros((output_dim, 1))
        if bias:
            B = np.ones((output_dim, 1))
        activation_func = identity
        backward_activation_func = identity_backward
        if activation is 'sigmoid':
            activation_func = sigmoid
            backward_activation_func = sigmoid_backward
        elif activation is 'relu':
            activation_func = relu
            backward_activation_func = relu_backward
        else:
            print(f"Activation function set to identity for layer {idx}")
        layer_params[f"W"] = W
        layer_params[f"B"] = B
        layer_params[f"activation"] = activation_func
        layer_params[f"backward_activation"] = backward_activation_func
        layer_params[f"bias"] = bias
        parameters.append(layer_params)
    return parameters

def identity(z):
    return z

def sigmoid(z):
    return np.clip(1 / (1 + np.exp(-z)), -100, 100)

def relu(z):
    output = np.array(z, copy=True)
    output[z <= 0] = 0
    return output

def identity_backward(z, dA):
    return dA

def sigmoid_backward(z, dA):
    return np.clip(z * (1-z) * dA, -100, 100)

def relu_backward(z, dA):
    output = np.ones(z.shape)
    output[z <= 0] = 0
    return output * dA

def forward_single_layer(prev_A, parameters, idx):
    W = parameters[f"W"]
    B = parameters[f"B"]
    activation = parameters[f"activation"]
    if parameters["bias"]:
        curr_Z = W.dot(prev_A) + B
    else:
        curr_Z = W.dot(prev_A)
    curr_A = activation(curr_Z)
    memory[f"Z{idx+1}"] = curr_Z
    memory[f"A{idx+1}"] = curr_A
    return curr_Z, curr_A

def forward(X, parameters):
    prev_A = X
    memory["A0"] = prev_A
    for idx, layer_params in enumerate(parameters):
        curr_Z, prev_A = forward_single_layer(prev_A=prev_A, parameters=layer_params, idx=idx)
    return prev_A

def criteria(y_hat, y):
    assert y_hat.shape == y.shape
    n = y_hat.shape[0]
    m = y_hat.shape[1]
    loss = np.sum(y_hat - y, axis=1) / m
    dA = (y_hat - y) / m
    return loss, dA

def backward_single_layer(prev_A, dA, curr_W, curr_Z, backward_activation, idx):
    m = prev_A.shape[1]
    dZ = backward_activation(z=curr_Z, dA=dA)
    dW = np.dot(dZ, prev_A.T) / m
    dB = np.sum(dZ, axis=1, keepdims=True) / m
    dA = np.dot(curr_W.T, dZ)
    return dA, dW, dB

def backpropagation(parameters, dA):
    grads = {}
    for idx in reversed(range(len(parameters))):
        layer = parameters[idx]
        prev_A = memory[f"A{idx}"]
        curr_Z = memory[f"Z{idx+1}"]
        curr_W = layer["W"]
        backward_activation = layer["backward_activation"]
        dA, dW, dB = backward_single_layer(prev_A, dA, curr_W, curr_Z, backward_activation, idx)
        grads[f"W{idx}"] = dW
        grads[f"B{idx}"] = dB
    return grads

def update_params(parameters, grads, lr=0.001):
    new_params = []
    for idx, layer in enumerate(parameters):
        layer["W"] -= lr*grads[f"W{idx}"]
        layer["B"] -= lr*grads[f"B{idx}"]
        new_params.append(layer)
    return new_params

X = np.random.uniform(-10, 10, (2, 2))
Y = 2*X[0, :] + X[1, :] ** 2
Y = Y.reshape((1, X.shape[1]))

parameters = init_network_parameters(nn_architecture)
n_epochs = 1000
lr = 0.01
loss_history = []
for i in range(n_epochs):
    y_hat = forward(X, parameters)
    loss, dA = criteria(y_hat, Y)
    loss_history.append(loss)
    grads = backpropagation(parameters, dA)
    parameters = update_params(parameters, grads, lr)
    if not i % 10:
        print(f"Epoch {i}/{n_epochs} loss={loss}")

print("X", X)
print("Y", Y)
print("y_hat", y_hat)
</code></pre>
|
<p>There wasn't a problem with my implementation, just overfitting.
<a href="https://www.researchgate.net/post/Neural_network_outputting_average_values_of_all_outputs-Am_I_doing_anything_wrong" rel="nofollow noreferrer">More information can be found here.</a></p>
| 87
|
implement regression
|
Where can I find data pairs to practice implementing linear regression?
|
https://stackoverflow.com/questions/35397525/where-can-i-find-data-pairs-to-practice-implementing-linear-regression
|
<p>I've recently started learning machine learning algorithms. I've written a program in python from scratch to implement linear regression but I need some data pairs to use.</p>
|
<p>There are many dataset at internet to use, </p>
<p>have a look here, you can find many real datasets: <a href="https://archive.ics.uci.edu/ml/datasets.html" rel="nofollow">uci</a></p>
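<p>If you have scikit-learn installed, it can also generate synthetic (X, y) pairs with a known linear relationship, which is handy for checking a from-scratch implementation (a quick sketch; the parameter values are arbitrary):</p>

```python
from sklearn.datasets import make_regression

# 100 points from a 1-feature linear model with Gaussian noise.
X, y = make_regression(n_samples=100, n_features=1, noise=10.0, random_state=0)
print(X.shape, y.shape)
```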
| 88
|
implement regression
|
How do I implement multiple linear regression in Python?
|
https://stackoverflow.com/questions/48257144/how-do-i-implement-multiple-linear-regression-in-python
|
<p>I am trying to write a multiple linear regression model from scratch to predict the key factors contributing to the number of views of a song on Facebook. For each song we collect this information, i.e. the variables I'm using:</p>
<pre><code>df.dtypes
clicked int64
listened_5s int64
listened_20s int64
views int64
percentage_listened float64
reactions_total int64
shared_songs int64
comments int64
avg_time_listened int64
song_length int64
likes int64
listened_later int64
</code></pre>
<p>I'm using the number of views as my dependent variable and all the other variables in the dataset as independent ones. The model is posted below:</p>
<pre><code> #df_x --> new dataframe of independent variables
df_x = df.drop(['views'], 1)
#df_y --> new dataframe of dependent variable views
df_y = df.ix[:, ['views']]
names = [i for i in list(df_x)]
regr = linear_model.LinearRegression()
x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size = 0.2)
#Fitting the model to the training dataset
regr.fit(x_train,y_train)
regr.intercept_
print('Coefficients: \n', regr.coef_)
print("Mean Squared Error(MSE): %.2f"
% np.mean((regr.predict(x_test) - y_test) ** 2))
print('Variance Score: %.2f' % regr.score(x_test, y_test))
regr.coef_[0].tolist()
</code></pre>
<p>Output here:</p>
<pre><code> regr.intercept_
array([-1173904.20950487])
MSE: 19722838329246.82
Variance Score: 0.99
</code></pre>
<p>Looks like something went miserably wrong.</p>
<p>Trying the OLS model:</p>
<pre><code> import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
model=sm.OLS(y_train,x_train)
result = model.fit()
print(result.summary())
</code></pre>
<p>Output:</p>
<pre><code> R-squared: 0.992
F-statistic: 6121.
coef std err t P>|t| [95.0% Conf. Int.]
clicked 0.3333 0.012 28.257 0.000 0.310 0.356
listened_5s -0.4516 0.115 -3.944 0.000 -0.677 -0.227
listened_20s 1.9015 0.138 13.819 0.000 1.631 2.172
percentage_listened 7693.2520 1.44e+04 0.534 0.594 -2.06e+04 3.6e+04
reactions_total 8.6680 3.561 2.434 0.015 1.672 15.664
shared_songs -36.6376 3.688 -9.934 0.000 -43.884 -29.392
comments 34.9031 5.921 5.895 0.000 23.270 46.536
avg_time_listened 1.702e+05 4.22e+04 4.032 0.000 8.72e+04 2.53e+05
song_length -6309.8021 5425.543 -1.163 0.245 -1.7e+04 4349.413
likes 4.8448 4.194 1.155 0.249 -3.395 13.085
listened_later -2.3761 0.160 -14.831 0.000 -2.691 -2.061
Omnibus:           233.399   Durbin-Watson:          1.983
Prob(Omnibus):       0.000   Jarque-Bera (JB):    2859.005
Skew:                1.621   Prob(JB):                0.00
Kurtosis:           14.020   Cond. No.            2.73e+07
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 2.73e+07. This might indicate that there are strong multicollinearity or other numerical problems.
</code></pre>
<p>It looks like something went seriously wrong, just by looking at this output.</p>
<p>I believe that something went wrong with the training/testing sets and with creating the two separate data frames x and y, but I can't figure out what. This problem should be solvable using multiple regression. Should it not be linear? Could you please help me figure out what went wrong?</p>
|
<p>I would suggest using regularisation with the Lasso, which also performs feature selection:</p>
<pre><code>from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
# Standardize the data (excluding 'views' which is the target variable)
scaler = StandardScaler()
df_x_standardized = scaler.fit_transform(df.drop(['views'], axis=1))
df_y = df['views']
# Split the data into training and testing sets
x_train, x_test, y_train, y_test = train_test_split(df_x_standardized, df_y, test_size=0.2, random_state=42)
# Initialize and fit the Lasso regression model with cross-validation
lasso = LassoCV(cv=5, random_state=42).fit(x_train, y_train)
# Print the coefficients and intercept
print('Intercept: ', lasso.intercept_)
print('Coefficients: \n', lasso.coef_)
# Evaluate the model
print("Mean Squared Error (MSE): %.2f" % np.mean((lasso.predict(x_test) - y_test) ** 2))
print('Variance Score (R^2): %.2f' % lasso.score(x_test, y_test))
</code></pre>
| 89
|
implement regression
|
How to implement Statsmodels Multinomial Logistic Regression (MNLogit) wald_test()?
|
https://stackoverflow.com/questions/63272453/how-to-implement-statsmodels-multinomial-logistic-regression-mnlogit-wald-test
|
<p>I'm (a Python newbie) writing Python code to mimic outputs in SAS and want to run a multinomial logistic regression on the SAS Wallet data set. I've done normal logistic regression previously on other data using statsmodels.Logit, but now am using statsmodels.MNLogit. According to the API (<a href="https://www.statsmodels.org/stable/generated/statsmodels.discrete.discrete_model.MultinomialResults.html#statsmodels.discrete.discrete_model.MultinomialResults" rel="nofollow noreferrer">https://www.statsmodels.org/stable/generated/statsmodels.discrete.discrete_model.MultinomialResults.html#statsmodels.discrete.discrete_model.MultinomialResults</a>) there is a .wald_test() method on the fitted MultinomialResults model, but it's throwing "<strong>TypeError: decoding to str: need a bytes-like object, tuple found</strong>" when I try to use it with a String/tuple parameter value for r_matrix (there are also no examples for how the test should be used on the wald_test() page (<a href="https://www.statsmodels.org/stable/generated/statsmodels.discrete.discrete_model.MultinomialResults.wald_test.html#statsmodels.discrete.discrete_model.MultinomialResults.wald_test" rel="nofollow noreferrer">https://www.statsmodels.org/stable/generated/statsmodels.discrete.discrete_model.MultinomialResults.wald_test.html#statsmodels.discrete.discrete_model.MultinomialResults.wald_test</a>).</p>
<p>This is the code I input (where df is a pandas dataframe of the Wallet data):</p>
<pre><code>#Define X and Y
X = df[['Male','Business','Punish','Explain']]
X = sm.add_constant(X)
Y = df['Wallet']
#Fit model
mnlogit_1 = sm.MNLogit(Y,X).fit()
#Run Wald test (on constant)
wald = ('const = 0')
Results = mnlogit_1.wald_test(wald)
print("const:", Results)
</code></pre>
<p>The same code works for sm.Logit() on a different data set (one that works with binomial logistic regression).
Does anyone have an idea of what's going wrong? Thanks :)</p>
| 90
|
|
implement regression
|
How to fix nan values coming from the implementation of Logistic Regression?
|
https://stackoverflow.com/questions/62818741/how-to-fix-nan-values-coming-from-the-implementation-of-logistic-regression
|
<p>After some preprocessing steps that produce x_train and y_train, I flatten them. The code snippets are shown below.</p>
<p><strong>The flatten code</strong></p>
<pre><code>x_train = x_train_flatten.T
x_test = x_test_flatten.T
y_test = Y_test.T
y_train = Y_train.T
print("x train: ",x_train.shape)
print("x test: ",x_test.shape)
print("y train: ",y_train.shape)
print("y test: ",y_test.shape)
</code></pre>
<p><strong>Result</strong></p>
<pre><code>x train: (16384, 38)
x test: (16384, 10)
y train: (1, 38)
y test: (1, 10)
</code></pre>
<p><strong>Normalization Process (After Flatten)</strong>:</p>
<pre><code>from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imputer.fit(x_train)
x_train= imputer.transform(x_train)
x_train = (x_train-np.min(x_train))/(np.max(x_train)-np.min(x_train))
</code></pre>
<p>Then I wrote some methods for <strong>Logistic Regression</strong></p>
<p><strong>Logistic Regression Methods</strong></p>
<pre><code>def initialize_weights_and_bias(dimension):
    w = np.full((dimension,1),0.01)
    b = 0.0
    return w, b

def sigmoid(z):
    y_head = 1/(1+np.exp(-z))
    return y_head

def forward_backward_propagation(w,b,x_train,y_train):
    # forward propagation
    z = np.dot(w.T,x_train) + b
    y_head = sigmoid(z)
    loss = -(1-y_train)*np.log(1-y_head)-y_train*np.log(y_head)
    cost = (np.sum(loss))/x_train.shape[1]  # x_train.shape[1] is for scaling
    # backward propagation
    derivative_weight = (np.dot(x_train,((y_head-y_train).T)))/x_train.shape[1]  # x_train.shape[1] is for scaling
    derivative_bias = np.sum(y_head-y_train)/x_train.shape[1]  # x_train.shape[1] is for scaling
    gradients = {"derivative_weight": derivative_weight,"derivative_bias": derivative_bias}
    return cost,gradients

def update(w, b, x_train, y_train, learning_rate,number_of_iterarion):
    cost_list = []
    cost_list2 = []
    index = []
    # updating(learning) parameters number_of_iterarion times
    for i in range(number_of_iterarion):
        # make forward and backward propagation and find cost and gradients
        cost,gradients = forward_backward_propagation(w,b,x_train,y_train)
        cost_list.append(cost)
        # lets update
        w = w - learning_rate * gradients["derivative_weight"]
        b = b - learning_rate * gradients["derivative_bias"]
        if i % 50 == 0:
            cost_list2.append(cost)
            index.append(i)
            print ("Cost after iteration %i: %f" %(i, cost))
    # we update(learn) parameters weights and bias
    parameters = {"weight": w,"bias": b}
    plt.plot(index,cost_list2)
    plt.xticks(index,rotation='vertical')
    plt.xlabel("Number of Iterarion")
    plt.ylabel("Cost")
    plt.show()
    return parameters, gradients, cost_list

def predict(w,b,x_test):
    # x_test is a input for forward propagation
    z = sigmoid(np.dot(w.T,x_test)+b)
    Y_prediction = np.zeros((1,x_test.shape[1]))
    # if z is bigger than 0.5, our prediction is woman (y_head=1),
    # if z is smaller than 0.5, our prediction is man (y_head=0)
    for i in range(z.shape[1]):
        if z[0,i]<= 0.5:
            Y_prediction[0,i] = 0
        else:
            Y_prediction[0,i] = 1
    return Y_prediction

def logistic_regression(x_train, y_train, x_test, y_test, learning_rate, num_iterations):
    # initialize
    dimension = x_train.shape[0]
    w,b = initialize_weights_and_bias(dimension)
    parameters, gradients, cost_list = update(w, b, x_train, y_train, learning_rate,num_iterations)
    y_prediction_test = predict(parameters["weight"],parameters["bias"],x_test)
    y_prediction_train = predict(parameters["weight"],parameters["bias"],x_train)
    train_acc_lr = round((100 - np.mean(np.abs(y_prediction_train - y_train)) * 100),2)
    test_acc_lr = round((100 - np.mean(np.abs(y_prediction_test - y_test)) * 100),2)
    # Print train/test Errors
    print("train accuracy: %", train_acc_lr)
    print("test accuracy: %", test_acc_lr)
</code></pre>
<p>Then I call the <code>logistic_regression</code> method to run the model.</p>
<pre><code>logistic_regression(x_train, y_train, x_test, y_test,learning_rate = 0.01, num_iterations = 700)
</code></pre>
<p>After a number of iterations, some of the printed cost values are nan, as shown below.</p>
<pre><code>Cost after iteration 0: nan
Cost after iteration 50: 10.033753
Cost after iteration 100: 11.253421
Cost after iteration 150: nan
Cost after iteration 200: nan
Cost after iteration 250: nan
Cost after iteration 300: nan
Cost after iteration 350: nan
Cost after iteration 400: nan
Cost after iteration 450: 0.321755
...
</code></pre>
<p>How can I fix the issue?</p>
|
<p>Here is my solution</p>
<ul>
<li>Apply feature scaling to <code>x_train</code> before training the model, so the sigmoid no longer saturates and the log terms in the cost stop producing nan values</li>
</ul>
<p>I added this code block before calling the <code>logistic_regression</code> method.</p>
<pre><code>from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
x_train = sc_X.fit_transform(x_train)
</code></pre>
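<p>Beyond scaling, a common numerical safeguard (a sketch, not part of the original answer) is to clip the sigmoid output away from exactly 0 and 1 before taking the log, so the cross-entropy stays finite even when unscaled features saturate the sigmoid:</p>

```python
import numpy as np

def stable_cost(w, b, x_train, y_train, eps=1e-15):
    # clip y_head into (eps, 1 - eps) so np.log never sees 0
    z = np.dot(w.T, x_train) + b
    y_head = np.clip(1 / (1 + np.exp(-z)), eps, 1 - eps)
    loss = -(1 - y_train) * np.log(1 - y_head) - y_train * np.log(y_head)
    return np.sum(loss) / x_train.shape[1]

# even with extreme, unscaled features the cost stays finite
w = np.full((2, 1), 0.01)
x = np.array([[1e4, -1e4], [5e3, -5e3]])
y = np.array([[1.0, 0.0]])
print(np.isfinite(stable_cost(w, 0.0, x, y)))  # True
```

<p>Feature scaling remains the better fix, since it also speeds up gradient descent; clipping only keeps the reported cost well-defined.</p>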
| 91
|
implement regression
|
R implementation of group lasso-regularized linear regression
|
https://stackoverflow.com/questions/10983629/r-implementation-of-group-lasso-regularized-linear-regression
|
<p>Does anyone know of any good implementation of group-lasso regularized linear regression in R (or even Matlab)?</p>
|
<p>Have you looked at <strong><a href="http://www.gnu.org/software/octave/" rel="nofollow">GNU Octave</a></strong>? It does its work on the command line, so you can use it from any language that can read/write files and execute shell commands to kick it off from within your program.</p>
<p>GNU Octave is featured in the <strong><a href="https://class.coursera.org/ml/lecture/preview" rel="nofollow">Stanford Machine Learning Course</a></strong> on the chapter of linear regression with multiple variables.</p>
| 92
|
implement regression
|
Select smoothing parameter and implement non-parametric regression in Python
|
https://stackoverflow.com/questions/60828059/select-smoothing-parameter-and-implement-non-parametric-regression-in-python
|
<p>I'm working in R to estimate non-parametric regression. The complete project: <a href="https://systematicinvestor.wordpress.com/2012/05/22/classical-technical-patterns" rel="nofollow noreferrer">https://systematicinvestor.wordpress.com/2012/05/22/classical-technical-patterns</a></p>
<p>My <code>R</code> code is the following, relying on the <code>sm</code> package's <code>h.select</code> and <code>sm.regression</code>.</p>
<pre><code>library(sm)
y = as.vector( last( Cl(data), 190) )
t = 1:len(y)
h = h.select(t, y, method = 'cv')
temp = sm.regression(t, y, h=h, display = 'none')
</code></pre>
<p>I now would like to do the same in Python. I managed to set up the data (see below) but do not know how to select the smoothing parameter and estimate the non-parametric regression.</p>
<pre><code>import pandas as pd
import datetime
import pandas_datareader.data as web
from pandas import Series, DataFrame
start = datetime.datetime(1970, 1, 1)
end = datetime.datetime(2020, 3, 24)
df = web.DataReader("^GSPC", 'yahoo', start, end)
y = df['Close'].tail(190).values
t = list(range(1, len(y) + 1))
</code></pre>
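<p>For reference while this question awaits an answer: one way to mirror <code>h.select(..., method = 'cv')</code> and <code>sm.regression</code> in Python without extra dependencies is a direct Nadaraya-Watson estimator with a leave-one-out cross-validated bandwidth. This is only a sketch of the idea (the synthetic series below stands in for the Yahoo data); <code>statsmodels</code>' <code>KernelReg</code> is a packaged alternative.</p>

```python
import numpy as np

def nw_fit(t, y, h, t_eval):
    # Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((t_eval[:, None] - t[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_cv_bandwidth(t, y, candidates):
    # choose the bandwidth that minimises leave-one-out squared error
    best_h, best_err = None, np.inf
    for h in candidates:
        w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)  # leave each point out of its own prediction
        pred = (w @ y) / w.sum(axis=1)
        err = np.mean((y - pred) ** 2)
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(0)
t = np.arange(1.0, 191.0)              # stand-in for 1:len(y) on 190 closes
y = np.sin(t / 20) + 0.1 * rng.standard_normal(t.size)
h = loo_cv_bandwidth(t, y, candidates=np.linspace(1, 20, 40))
smooth = nw_fit(t, y, h, t)
```

<p>Replacing the synthetic <code>y</code> with <code>df['Close'].tail(190).values</code> from the question reproduces the R workflow end to end.</p>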
| 93
|
|
implement regression
|
Implement Ridge Regression Equation for a Neural Network
|
https://stackoverflow.com/questions/78597100/implement-ridge-regression-equation-for-a-neural-network
|
<p>I am trying to replicate the following equation in MATLAB to find the optimal output weight matrix of a neural network from training using ridge regression.</p>
<p><strong>Output Weight Matrix of a Neural Network after Training using Ridge Regression:</strong>
<a href="https://i.sstatic.net/O9byd971.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9byd971.png" alt="enter image description here" /></a>
This equation comes from the echo state network guide provided by Mantas Lukosevicius and can be found at: <a href="https://www.researchgate.net/publication/319770153_A_practical_guide_to_applying_echo_state_networks" rel="nofollow noreferrer">https://www.researchgate.net/publication/319770153_A_practical_guide_to_applying_echo_state_networks</a> (see page 11)</p>
<p>My attempt is below. I believe that the outer parenthesis (in red) makes this a non-traditional double summation, meaning the method presented by Voss (see <a href="https://www.mathworks.com/matlabcentral/answers/1694960-nested-loops-for-double-summation" rel="nofollow noreferrer">https://www.mathworks.com/matlabcentral/answers/1694960-nested-loops-for-double-summation</a>) cannot be followed. Note that <code>y_i</code> is a <code>T</code> by <code>1</code> vector and <code>y_i_target</code> is also a <code>T</code> by <code>1</code> vector. <code>Wout_i</code> is a <code>N</code> by <code>1</code> vector where <code>N</code> is the number of nodes in the neural network. I generate the three <code>Ny</code> by <code>1</code> vectors <code>Wout_i,y_i,y_i_target</code> for each <code>i^th</code> target training signal, where <code>Ny</code> is the number of training signals. The final output for <code>Wout</code> is a <code>N</code> by <code>1</code> vector, where each element in the vector is the optimal weight for each node in the network.</p>
<pre><code>N = 100;   % number of nodes in neural network
Ny = 200;  % number of training signals
T = 50;    % time length of each training signal
X = rand(N,T);  % neural network state matrix
reg = 10^-4;    % ridge regression coefficient
outer_sum = zeros(Ny,1);
for i = 1:Ny
    y_i_target = rand(T,1); % training signal
    Wout_i = ((X*X' + reg*eye(N)) \ (X*y_i_target));
    Wouts{i} = Wout_i; % collected cell matrix of each Wout_i for each i^th target training signal
    y_i = Wout_i'*X; % predicted signal
    inner_sum = sum(((y_i'-y_i_target).^2)+reg*norm(Wout_i)^2);
    outer_sum(i) = inner_sum;
end
outer_sum = outer_sum.*(1/Ny);
[minval, minidx] = min(outer_sum);
Wout = cell2mat(Wouts(minidx));
</code></pre>
<p>My final answer for <code>Wout</code> is a <code>N</code> by <code>1</code> as it should be, but I am uncertain in my answer. I am particularly unsure whether or not I have done the double summation and <code>arg min</code> with respect to <code>Wout</code> operations correctly. Is there any way to validate my answer?</p>
<p><strong>Solution:</strong>
I tried out another method/attempt as shown below:</p>
<pre><code>N = 100;   % number of nodes in neural network
Ny = 200;  % number of training signals
T = 50;    % time length of each training signal
X = rand(N,T);  % neural network state matrix
reg = 10^-4;    % ridge regression coefficient
MSE = zeros(Ny,1);
for i = 1:Ny
    y_i_target = rand(T,1); % training signal
    Wout_i = ((X*X' + reg*eye(N)) \ (X*y_i_target)); % Eq. 9 from Luko et al.
    Wouts{i} = Wout_i; % collected cell matrix of each Wout_i for each i^th target training signal
    y_i = Wout_i'*X; % predicted signal
    MSE(i) = (1/T)*sum((y_i'-y_i_target).^2); % mean square error
end
[minval, minidx] = min(MSE);
Wout = cell2mat(Wouts(minidx));
</code></pre>
<p>I believe this attempt is better than the first, but I am still not sure it is right.</p>
<p>As highlighted by BillBokeey, the desired equation is simply an iterative version of equation 9 from Luko et al. To conduct training, one must apply Eq. 9 to each target signal in the training data set and choose the resulting <strong>W_out</strong> that minimizes the mean square error (MSE).</p>
<p><strong>Final Update:</strong>
I am still looking for some validation of my second solution. I am particularly concerned that I am cherry picking the best <strong>Wout_i</strong> <code>N</code> by <code>1</code> vector that minimizes MSE and effectively ignoring all the other <strong>Wout_i</strong> vectors.</p>
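<p>For the validation you are after, one lightweight check (a NumPy sketch of the same quantities, not MATLAB) is that the closed-form <strong>Wout</strong> from Eq. 9 really minimizes the ridge objective: any small perturbation of it should score worse.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, reg = 20, 50, 1e-4           # nodes, signal length, ridge coefficient
X = rng.random((N, T))             # network state matrix, as in the question
y_target = rng.random((T, 1))      # one target training signal

# closed-form ridge solution (Eq. 9): Wout = (X X' + reg*I)^-1 X y
Wout = np.linalg.solve(X @ X.T + reg * np.eye(N), X @ y_target)

def ridge_objective(w):
    # squared error plus L2 penalty, the quantity Eq. 9 minimizes
    resid = X.T @ w - y_target
    return (resid.T @ resid + reg * (w.T @ w)).item()

base = ridge_objective(Wout)
# the closed-form optimum should beat every random perturbation of itself
worse = all(ridge_objective(Wout + 0.01 * rng.standard_normal(Wout.shape)) > base
            for _ in range(100))
print(worse)  # True
```

<p>Because the ridge objective is strictly convex, the closed-form solution is its unique minimizer, so this test should always pass; it does not, however, settle the separate question of whether keeping only the single best <strong>Wout_i</strong> across signals is the right training strategy.</p>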
| 94
|
|
implement regression
|
Implementing Linear Regression using Gradient Descent
|
https://stackoverflow.com/questions/49081195/implementing-linear-regression-using-gradient-descent
|
<p>I have just started in machine learning and currently taking the course by andrew Ng's Machine learning Course. I have implemented the linear regression algorithm in python but the result is not desirable. I code of python is as follows:</p>
<pre><code>import numpy as np
x = [[1,1,1,1,1,1,1,1,1,1],[10,20,30,40,50,60,70,80,90,100]]
y = [10,16,20,23,29,30,35,40,45,50]
x = np.array(x)
y = np.array(y)
theta = np.zeros((2,1))
def Cost(x, y, theta):
    m = len(y)
    pred_ions = np.transpose(theta).dot(x)
    J = 1/(2*m) * np.sum((pred_ions - y)*(pred_ions - y))
    return J

def GradientDescent(x, y, theta, iteration, alpha):
    m = len(y)
    pred_ions = np.transpose(theta).dot(x)
    i = 1
    while i <= iteration:
        theta[0] = theta[0] - alpha/m * np.sum(pred_ions - y)
        theta[1] = theta[1] - alpha/m * np.sum((pred_ions - y)*x[1,:])
        Cost_History = Cost(x, y, theta)
        i = i + 1
    return theta[0], theta[1]
itera = 1000
alpha = 0.01
a,b = GradientDescent(x,y,theta,itera, alpha)
print(a)
print(b)
</code></pre>
<p>I am not able to figure out what exactly the problem is, but my results are very strange. According to the above code, the parameter values are 298 and 19890. Any help would be appreciated. Thanks.</p>
|
<p>Ah. I did this assignment too a while ago.</p>
<p>See this mentioned in Page 7 of the assignment PDF:</p>
<blockquote>
<p>Octave/MATLAB array indices start from one, not zero. If you’re
storing θ0 and θ1 in a vector called theta, the values will be
theta(1) and theta(2).</p>
</blockquote>
<p>Note, though, that your code is Python, which is zero-indexed, so <code>theta[0]</code> and <code>theta[1]</code> are already the correct indices for θ0 and θ1; that renumbering only applies in Octave/MATLAB. The real issue is addressed in the edit below.</p>
<p>Also, if you are storing the Cost in Cost_History, shouldn't it include the iteration variable like </p>
<pre><code>Cost_History[i] = Cost(x,y,theta)
</code></pre>
<p>Just check that too! Hope this helped.</p>
<p><em>Edit 1:</em> Okay, I have understood the issue now. In his video, Andrew Ng says that you need to update both the thetas <strong>simultaneously.</strong> To do that, store the theta matrix in a temp variable. And update theta[0] and theta[1] based on the temp values.</p>
<p>Currently in your code, during <code>theta[1] =</code> it has changed the <code>theta[0]</code> to the newer value already, so both are not being updated simultaneously.</p>
<p>So instead, do this:</p>
<pre><code>while i <= iteration:
    temp = theta.copy()   # copy, not a reference, so both thetas update from the same snapshot
    theta[0] = theta[0] - alpha/m * np.sum(np.transpose(temp).dot(x) - y)
    theta[1] = theta[1] - alpha/m * np.sum((np.transpose(temp).dot(x) - y)*x[1,:])
    Cost_History[i] = Cost(x, y, theta)
    i = i + 1
</code></pre>
<p>It should work now, if not, let me know, I will debug on my side.</p>
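<p>For comparison, here is a vectorized sketch of the same loop (NumPy only, using the question's data) in which the predictions are recomputed every iteration and both parameters are updated from the same gradient; the cost then decreases monotonically:</p>

```python
import numpy as np

x = np.array([[1.0]*10, [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]])
y = np.array([10, 16, 20, 23, 29, 30, 35, 40, 45, 50], dtype=float)
theta = np.zeros(2)
alpha, m = 0.0001, len(y)   # small alpha keeps the update stable for these magnitudes

def cost(theta):
    pred = theta @ x                      # predictions recomputed every call
    return np.sum((pred - y) ** 2) / (2 * m)

history = [cost(theta)]
for _ in range(10000):
    grad = ((theta @ x - y) @ x.T) / m    # both partials from the same theta
    theta = theta - alpha * grad          # simultaneous update
    history.append(cost(theta))

print(history[0] > history[-1])  # True: the cost decreases
```

<p>The learning rate here is much smaller than the question's 0.01 because the second feature ranges up to 100; with alpha = 0.01 the updates diverge, which explains the huge parameter values reported.</p>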
| 95
|
implement regression
|
Can I implement sample weights in statsmodels quantile regression?
|
https://stackoverflow.com/questions/41991585/can-i-implement-sample-weights-in-statsmodels-quantile-regression
|
<p>I have just learned about the statsmodels module while searching about quantile regression. I was wondering if there is a way to use statsmodels for quantile regression while taking the error bars of the data into account,
like in sklearn.linear_model.LinearRegression?</p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html" rel="nofollow noreferrer">http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html</a></p>
<p>Thanks for looking!</p>
| 96
|
|
implement regression
|
Octave/Matlab implementation of confidence intervals in multiple regression
|
https://stackoverflow.com/questions/46752817/octave-matlab-implementation-of-confidence-intervals-in-multiple-regression
|
<p>I need to implement the confidence intervals of multiple regression coefficients in Octave/Matlab.</p>
<p>The task is defined in a common manner: data Y, design matrix X, coefficients β so that Y=βX. The code for β is then simply:</p>
<pre><code>beta = pinv(X)*Y
</code></pre>
<p>Now, as a stupid physicist, I am a bit lost regarding confidence and prediction intervals: the formulas as well as their implementation.</p>
<p><strong>Note:</strong> I am aware that there is a Matlab function mvregress, but it is still missing from Octave which I am actually using.</p>
<p><strong>Note 2:</strong> This question was asked at CrossValidated and marked as off-topic because it focuses on programming.</p>
|
<p>I think this is what you want to find:</p>
<pre><code>[b, bint, r, rint, stats] = regress (y, X, [alpha]).
</code></pre>
<p>where bint is the confidence interval for beta.</p>
<p>For details, please refer to <a href="https://octave.sourceforge.io/statistics/function/regress.html" rel="nofollow noreferrer">https://octave.sourceforge.io/statistics/function/regress.html</a>.</p>
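<p>If you also want the underlying formulas rather than the packaged <code>regress</code> call: the coefficient standard errors come from Var(beta_hat) = sigma^2 (X'X)^-1, with sigma^2 estimated from the residuals. A NumPy sketch follows (the same expressions translate line for line to Octave; 1.96 is the large-sample normal quantile, so substitute the t(n-p) quantile for small n):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.random((n, p - 1))])  # design matrix with intercept
beta_true = np.array([2.0, -1.0, 0.5])
Y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = np.linalg.pinv(X) @ Y                # same as the pinv(X)*Y in the question
resid = Y - X @ beta
sigma2 = resid @ resid / (n - p)            # unbiased residual variance
cov_beta = sigma2 * np.linalg.inv(X.T @ X)  # Var(beta_hat) = sigma^2 (X'X)^-1
se = np.sqrt(np.diag(cov_beta))

z = 1.96                                    # ~95% normal quantile; use t(n-p) for small n
lower, upper = beta - z * se, beta + z * se
```

<p>Prediction intervals for a new row x0 work the same way, with the wider variance sigma^2 (1 + x0' (X'X)^-1 x0).</p>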
| 97
|
implement regression
|
Asking review and help for implementing logistic regression model
|
https://stackoverflow.com/questions/55664603/asking-review-and-help-for-implementing-logistic-regression-model
|
<p>I'm trying to implement a simple logistic regression model in Scala, that is, one with a single independent variable for the dependent binary variable.
If we look at the example from Wikipedia: <a href="https://en.wikipedia.org/wiki/Logistic_regression" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Logistic_regression</a> (under the section "Probability of passing an exam versus hours of study") and try to get the same coefficients for theta0 and theta1, my current code comes close.</p>
<p>When the if-statement in the method gradientDescent is commented out, I get theta0 = -4.360295851109452 and theta1 = 1.5166246438642796 after the maximum number of iterations.
This is pretty close to the Wikipedia example, where theta0 = −4.0777 and theta1 = 1.5046.</p>
<p>If the if-statement is not commented out, then theta0 = 0.0 and theta1 = 0.0 after only one iteration, meaning that the return happens instantly.
I'm not sure why this is, although I'm not even sure how far off I am from a working model.</p>
<p>In general I'm trying to implement what is shown here: <a href="https://www.internalpointers.com/post/cost-function-logistic-regression" rel="nofollow noreferrer">https://www.internalpointers.com/post/cost-function-logistic-regression</a>
To my understanding: if we obtain the optimal thetas with gradient descent, then we can fit the S-curve to the original data points.</p>
<pre class="lang-scala prettyprint-override"><code>
import scala.collection.mutable.Buffer
import scala.collection.mutable.ArrayBuffer
import scala.math.exp
import scala.math.abs

object Testing extends App {
  val logistic = new LogisticRegressionCalculator
  val yData = Array(0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1).map(z => z.toDouble)
  val xData = Array.tabulate(20)({a => 0.5 + a * 0.25})
  val thetas = logistic.deriveThetas(yData, xData)
  println(thetas)
}

class LogisticRegressionCalculator {

  //Learning rate
  private var alpha = 0.01
  //Tolerance
  private var epsilon = 10E-10
  //Number of iterations
  private var maxIterations = 1000000

  def changeAlpha(newAlpha: Double) = this.alpha = newAlpha
  def changeEpsilon(newEpsilon: Double) = this.epsilon = newEpsilon
  def changeMaxIterations(newMaxIter: Int) = this.maxIterations = newMaxIter

  def giveAlpha: Double = this.alpha
  def giveEpsilon: Double = this.epsilon
  def giveMaxIterations: Int = this.maxIterations

  /*
   * This function is for the simple case where we have only one independent variable for the y values
   * which are either zero or one.
   * It is assumed that the left
   */
  def deriveThetas(yData: Array[Double], xData: Array[Double]): Buffer[Double] = {
    require(yData.size == xData.size)

    //Traces below would be used for testing to see if the values obtained make sense
    // val traceTheta0 = Array.ofDim[Double](this.maxIterations)
    // val traceTheta1 = Array.ofDim[Double](this.maxIterations)

    var theta0 = 0.0
    var tempTheta0 = theta0
    var theta1 = 0.0
    var tempTheta1 = theta1

    val dataSize = yData.size
    var counter = 0

    //Hypothesis function for logistic regression in the form of sigmoid function
    def hypothesis(z: Double) = 1.0 / (1.0 + exp(-z))

    def deriveTheta0: Double = {
      var sum = 0.0
      for (i <- 0 until dataSize) {
        sum += (hypothesis(theta0 + theta1 * xData(i)) - yData(i)) //here should be * 1 as the coefficient of theta0 is 1.
      }
      return -(this.alpha / dataSize) * sum
    }

    def deriveTheta1: Double = {
      var sum = 0.0
      for (i <- 0 until dataSize) {
        sum += (hypothesis(theta0 + theta1 * xData(i)) - yData(i)) * xData(i)
      }
      return -(this.alpha / dataSize) * sum
    }

    def gradientDescent: (Double, Double, Double) = {
      for (i <- 0 until this.maxIterations) {
        //println(s"Theta0: ${theta0}\tTheta1: ${theta1}")
        counter += 1
        tempTheta0 = theta0 + deriveTheta0
        tempTheta1 = theta1 + deriveTheta1

        //If the difference is so miniscule that further iterations are of no use.
        // if (abs(tempTheta0 - theta0) >= epsilon || abs(tempTheta1 - theta1) >= epsilon) {
        //   return(theta0, theta1, counter)
        // }

        theta0 = tempTheta0
        theta1 = tempTheta1
      }
      (theta0, theta1, counter)
    }

    val temp = gradientDescent
    Buffer(temp._1, temp._2, counter)
  }
}
</code></pre>
| 98
|
|
implement regression
|
How to implement Vector Auto-Regression in Python?
|
https://stackoverflow.com/questions/18348720/how-to-implement-vector-auto-regression-in-python
|
<p>I want to implement <strong>vector autoregression</strong> in python. My data is saved as a list of 3 lists. I found this -
<a href="http://statsmodels.sourceforge.net/stable/vector_ar.html#var" rel="nofollow noreferrer">http://statsmodels.sourceforge.net/stable/vector_ar.html#var</a>, but could not figure out the proper way to implement. </p>
<p>Suppose <em>tsdata</em> - a list of 3 lists of length 100 each, is my data. I tried </p>
<pre class="lang-python prettyprint-override"><code>varmodel = ts.VAR(tsdata)
results = varmodel.fit(maxlags=5, ic='aic')
</code></pre>
<p>But the above is not working. </p>
<p><strong>Update</strong>:
I have changed the list of lists to a column stack according to suggestions below. It is working fine now. So tsdata, which was a list of lists is changed to </p>
<pre><code>tsdata = np.column_stack(tsdata)
</code></pre>
|
<p>Changing the list of lists to a column stack (<a href="https://stackoverflow.com/questions/18348720/how-to-implement-vector-auto-regression-in-python#comment26955958_18348720">as @Josef suggests</a>) may solve your problem. For that, one might use <a href="https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html" rel="nofollow noreferrer"><code>numpy.column_stack</code></a> as follows</p>
<pre><code>tsdata = np.column_stack(tsdata)
</code></pre>
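<p>A quick NumPy-only illustration of why the reshaping matters: <code>VAR</code> expects observations in rows and variables in columns, and <code>column_stack</code> turns the list of 3 lists into exactly that shape:</p>

```python
import numpy as np

# three series of length 100, as in the question's list-of-lists layout
tsdata = [list(range(100)), list(range(100, 200)), list(range(200, 300))]

arr = np.column_stack(tsdata)
print(arr.shape)  # (100, 3): 100 observations, 3 variables
print(arr[0])     # first row pairs up the first element of each series
```

<p>Passing <code>arr</code> (rather than the raw list of lists) to <code>VAR</code> then lets <code>fit(maxlags=5, ic='aic')</code> work as in the question's update.</p>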
| 99
|
implement classification
|
Implementing Classification with Multi-Tenancy
|
https://stackoverflow.com/questions/78382174/implementing-classification-with-multi-tenancy
|
<p>Is it currently possible to utilize the classification endpoint alongside multi-tenancy in the latest version? I couldn't find any documentation regarding this feature; it's not implemented yet in v4.</p>
<p>As a workaround, I understand that using v3 is necessary. However, even with v3, there doesn't appear to be a straightforward way to execute classification while specifying a particular tenant for the classification task.</p>
<p>When attempting to run classification without specifying a tenant, I encounter the error message:</p>
<pre><code>'error': 'classification failed: retrieve to-be-classifieds: object search at index trajectory: class Trajectory has multi-tenancy enabled, but request was without tenant'
</code></pre>
<p>Is there a known solution or workaround for this issue, or am I missing something?</p>
|
<p>Duda from Weaviate here.</p>
<p>Right now, the classification feature is not implemented for a multi tenant collection.</p>
<p>Please, feel free to open a feature request in our Github repository here:
<a href="https://github.com/weaviate/weaviate/issues" rel="nofollow noreferrer">https://github.com/weaviate/weaviate/issues</a></p>
<p>Thanks!</p>
| 100
|
implement classification
|
Classification and regression trees implementation in java
|
https://stackoverflow.com/questions/9708269/classification-and-regression-trees-implementation-in-java
|
<p>Are there any libs implementing classification and regression trees in java? Something very close to Matlab's classregtree command required.</p>
|
<p>Try <a href="http://www.cs.waikato.ac.nz/ml/weka/" rel="nofollow">Weka</a></p>
| 101
|
implement classification
|
Implement multiple classification in Java
|
https://stackoverflow.com/questions/25873645/implement-multiple-classification-in-java
|
<p>Is there a way to implement multiple classification in Java? For instance, take this UML diagram -
<img src="https://i.sstatic.net/Izpgx.png" alt="enter image description here"></p>
<p>How should the <code>GrainCrop</code> class be implemented?</p>
|
<p>Java does not support multiple inheritance of classes; only interfaces can extend multiple interfaces. So what I would do is have the <code>GrainCrop</code> class implement interfaces extracted from the two other classes. <code>GrainCrop</code> then aggregates instances of the two other classes and delegates to those "inner" objects. The calls in <code>GrainCrop</code> can also be decorated if you need to do additional work, e.g.</p>
<pre><code>public void methodFromClassCrop() {
    System.out.println("Stuff before call");
    oneOfTheObjects.methodFromClassCrop();
    System.out.println("Stuff after call");
}
</code></pre>
<p>Of course, you can also introduce an interface for just one of the classes and inherit from the other. It depends on whether your new class extends one of the other classes in the sense of introducing new behavior, algorithms, or data that builds on theirs, or whether it just "acts" on them (aggregation in this case).</p>
| 102
|
implement classification
|
classification tree implementation in mathematica
|
https://stackoverflow.com/questions/6563407/classification-tree-implementation-in-mathematica
|
<p>I want to implement a simple classification tree (binary classification) using <em>Mathematica</em>.</p>
<p>How can I implement a binary tree in <em>Mathematica</em>? Is there a symbol for doing that?</p>
|
<p>Among the new objects in MMA 8 are <a href="http://reference.wolfram.com/mathematica/ref/TreeGraph.html" rel="nofollow noreferrer">TreeGraph</a>, <a href="http://reference.wolfram.com/mathematica/ref/CompleteKaryTree.html?q=CompleteKaryTree&lang=en" rel="nofollow noreferrer">CompleteKaryTree</a>, and <a href="http://reference.wolfram.com/mathematica/ref/KaryTree.html?q=KaryTree&lang=en" rel="nofollow noreferrer">KaryTree</a>. The latter two objects give binary trees by default. I don't know how efficient they are for intensive computation but they do seem well-suited for displaying classifications. And there are many predicates and options for manipulating and displaying them.</p>
<p>Here's an example of a classification tree from [Breiman, L. Classification and Regression Trees: Chapman & Hall/CRC, 1984.]. It concerns 3 questions to determine whether a cardiac patient is likely to die within 30 days if not treated.</p>
<pre><code>KaryTree[9, 2,
VertexLabels -> {1 -> "Blood pressure > 91 ?", 2 -> "Age > 62.5?",
4 -> "Sinus tachycardia ?", 8 -> "< 30 days"},
EdgeLabels -> {1 \[UndirectedEdge] 2 -> "yes",
1 \[UndirectedEdge] 3 -> "no", 2 \[UndirectedEdge] 4 -> "yes",
2 \[UndirectedEdge] 5 -> "no", 4 \[UndirectedEdge] 8 -> "yes",
4 \[UndirectedEdge] 9 -> "no"}, ImagePadding -> 20]
</code></pre>
<p><img src="https://i.sstatic.net/d3oLx.png" alt="Classification graph"></p>
<p>I'd like to get rid of the two unused nodes on the right, but have not figured out an elegant way to do it, so I think I'll post a separate question about that on SO.</p>
| 103
|
implement classification
|
Implementing classification tree with lists and vectors
|
https://stackoverflow.com/questions/46255195/implementing-classification-tree-with-lists-and-vectors
|
<p>So, I'm playing around with R in order to get the hang of classification trees. I'm primarily interested in making an abstract data type for the classification tree so I can start building it. But unlike in C, Java, etc., I can't have pointers to other nodes; I'm limited to lists and vectors.</p>
<p>How can I build this? Any tips?</p>
|
<p>By playing a little bit with <code>data.tree</code> in R, I constructed this</p>
<pre><code>library(data.tree)

my.tree <- Node$new('my tree')
my.tree$key <- 1
my.tree$var.name <- 'blahblah'

insert.Node <- function(tree = NULL, key = 1, var.name = 'abcd') {
  if (is.null(tree$key)) {
    # Creation of root
    tree <- Node$new(paste(var.name, " < ", key, sep = ''))
    tree$key <- key
  } else if (key < tree$key) {
    # Left child
    tree$AddChildNode(insert.Node(tree$children[[1]], key, var.name))
  } else {
    # Right child
    tree$AddChildNode(insert.Node(tree$children[[2]], key, var.name))
  }
  tree  # return the (possibly newly created) node
}

tree <- insert.Node(tree = my.tree, key = 4, var.name = 'hello world')
</code></pre>
<p>Hopefully, this helps.</p>
| 104
|
implement classification
|
gpflow classification implementation
|
https://stackoverflow.com/questions/68419128/gpflow-classification-implementation
|
<p>I want to implement a binary classification model using a Gaussian process. Following the <a href="https://gpflow.readthedocs.io/en/master/notebooks/basics/classification.html" rel="nofollow noreferrer">official documentation</a>, I wrote the code below.</p>
<p>The X has 2048 features and Y is either 0 or 1. After optimizing the model, I was trying to evaluate the performance.</p>
<p>However, the <code>predict_y</code> method yields a weird result; the expected <code>pred</code> should have a shape like (n_test_samples, 2), which represents the probability of being class 0 and 1. But the result I got instead is (n_test_samples, n_training_samples).</p>
<p>What is going wrong?</p>
<pre><code>def model(X, Y):
    '''
    X: (n_training_samples, n_features), my example is (n, 2048)
    Y: (n_training_samples,), binary classification
    '''
    m = gpflow.models.VGP(
        (X, Y), likelihood=gpflow.likelihoods.Bernoulli(), kernel=gpflow.kernels.SquaredExponential()
    )
    opt = gpflow.optimizers.Scipy()
    opt.minimize(m.training_loss, variables=m.trainable_variables)
    return m

def evaluate(model, X, Y, accuracy, MCC, Kappa):
    '''
    X: (n_test_samples, n_features), my example is (n, 2048)
    Y: (n_test_samples,), binary classification
    '''
    pred, _ = model.predict_y(X)
    print('pred.shape is {}'.format(pred.shape))  # I got a weird result: (num of test samples, num of training samples)
    accuracy += [accuracy_score(Y, pred)]
    MCC += [matthews_corrcoef(Y, pred)]
    Kappa += [cohen_kappa_score(Y, pred)]
    return accuracy, MCC, Kappa
</code></pre>
|
<p>I finally figured it out. The reason is that the Y passed to the VGP model should have shape (n_training_samples, 1) instead of (n_training_samples,).</p>
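<p>A minimal shape illustration of that fix (NumPy only; the gpflow call itself is omitted here):</p>

```python
import numpy as np

Y = np.array([0, 1, 1, 0])   # labels as a flat vector: shape (4,)
Y_col = Y.reshape(-1, 1)     # column vector expected by VGP: shape (4, 1)

print(Y.shape, Y_col.shape)  # (4,) (4, 1)
```

<p>With <code>Y</code> as a column vector, the model has a single output dimension, so <code>predict_y</code> should return an <code>(n_test_samples, 1)</code> array of class-1 probabilities instead of broadcasting against the training set.</p>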
| 105
|
implement classification
|
How to implement hierarchical Transformer for document classification in Keras?
|
https://stackoverflow.com/questions/70271797/how-to-implement-hierarchical-transformer-for-document-classification-in-keras
|
<p>Hierarchical attention mechanism for document classification has been presented by Yang et al.
<a href="https://www.cs.cmu.edu/%7E./hovy/papers/16HLT-hierarchical-attention-networks.pdf" rel="nofollow noreferrer">https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-attention-networks.pdf</a></p>
<p>Its implementation is available on <a href="https://github.com/ShawnyXiao/TextClassification-Keras" rel="nofollow noreferrer">https://github.com/ShawnyXiao/TextClassification-Keras</a></p>
<p>Also, the implementation of the document classification with Transformer is available on <a href="https://keras.io/examples/nlp/text_classification_with_transformer" rel="nofollow noreferrer">https://keras.io/examples/nlp/text_classification_with_transformer</a></p>
<p>But, it's not hierarchical.</p>
<p>I have googled a lot but didn't find any implementation of a hierarchical Transformer. Does anyone know how to implement a hierarchical transformer for document classification in Keras?</p>
<p>My implementation is as follows. Note that it is extended from Nandan's implementation for document classification: <a href="https://keras.io/examples/nlp/text_classification_with_transformer" rel="nofollow noreferrer">https://keras.io/examples/nlp/text_classification_with_transformer</a>.</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.utils.np_utils import to_categorical
class MultiHeadSelfAttention(layers.Layer):
    def __init__(self, embed_dim, num_heads=8):
        super(MultiHeadSelfAttention, self).__init__()
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        if embed_dim % num_heads != 0:
            raise ValueError(
                f"embedding dimension = {embed_dim} should be divisible by number of heads = {num_heads}"
            )
        self.projection_dim = embed_dim // num_heads
        self.query_dense = layers.Dense(embed_dim)
        self.key_dense = layers.Dense(embed_dim)
        self.value_dense = layers.Dense(embed_dim)
        self.combine_heads = layers.Dense(embed_dim)

    def attention(self, query, key, value):
        score = tf.matmul(query, key, transpose_b=True)
        dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
        scaled_score = score / tf.math.sqrt(dim_key)
        weights = tf.nn.softmax(scaled_score, axis=-1)
        output = tf.matmul(weights, value)
        return output, weights

    def separate_heads(self, x, batch_size):
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.projection_dim))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, inputs):
        # x.shape = [batch_size, seq_len, embedding_dim]
        batch_size = tf.shape(inputs)[0]
        query = self.query_dense(inputs)  # (batch_size, seq_len, embed_dim)
        key = self.key_dense(inputs)  # (batch_size, seq_len, embed_dim)
        value = self.value_dense(inputs)  # (batch_size, seq_len, embed_dim)
        query = self.separate_heads(
            query, batch_size
        )  # (batch_size, num_heads, seq_len, projection_dim)
        key = self.separate_heads(
            key, batch_size
        )  # (batch_size, num_heads, seq_len, projection_dim)
        value = self.separate_heads(
            value, batch_size
        )  # (batch_size, num_heads, seq_len, projection_dim)
        attention, weights = self.attention(query, key, value)
        attention = tf.transpose(
            attention, perm=[0, 2, 1, 3]
        )  # (batch_size, seq_len, num_heads, projection_dim)
        concat_attention = tf.reshape(
            attention, (batch_size, -1, self.embed_dim)
        )  # (batch_size, seq_len, embed_dim)
        output = self.combine_heads(
            concat_attention
        )  # (batch_size, seq_len, embed_dim)
        return output

    def compute_output_shape(self, input_shape):
        # it does not change the shape of its input
        return input_shape

class TransformerBlock(layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, dropout_rate, name=None):
        super(TransformerBlock, self).__init__(name=name)
        self.att = MultiHeadSelfAttention(embed_dim, num_heads)
        self.ffn = keras.Sequential(
            [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim), ]
        )
        self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = layers.Dropout(dropout_rate)
        self.dropout2 = layers.Dropout(dropout_rate)

    def call(self, inputs, training):
        attn_output = self.att(inputs)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(inputs + attn_output)
        ffn_output = self.ffn(out1)
        ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)

    def compute_output_shape(self, input_shape):
        # it does not change the shape of its input
        return input_shape

class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim, name=None):
        super(TokenAndPositionEmbedding, self).__init__(name=name)
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        maxlen = tf.shape(x)[-1]
        positions = tf.range(start=0, limit=maxlen, delta=1)
        positions = self.pos_emb(positions)
        x = self.token_emb(x)
        return x + positions

    def compute_output_shape(self, input_shape):
        # it changes the shape from (batch_size, maxlen) to (batch_size, maxlen, embed_dim)
        return input_shape + (self.pos_emb.output_dim,)
# Lower level (produce a representation of each sentence):
embed_dim = 100 # Embedding size for each token
num_heads = 2 # Number of attention heads
ff_dim = 64 # Hidden layer size in feed forward network inside transformer
L1_dense_units = 100 # Size of the sentence-level representations output by the word-level model
dropout_rate = 0.1
vocab_size = 1000
class_number = 5
max_docs = 10000
max_sentences = 15
max_words = 60
word_input = layers.Input(shape=(max_words,), name='word_input')
word_embedding = TokenAndPositionEmbedding(maxlen=max_words, vocab_size=vocab_size,
embed_dim=embed_dim, name='word_embedding')(word_input)
word_transformer = TransformerBlock(embed_dim=embed_dim, num_heads=num_heads, ff_dim=ff_dim,
dropout_rate=dropout_rate, name='word_transformer')(word_embedding)
word_pool = layers.GlobalAveragePooling1D(name='word_pooling')(word_transformer)
word_drop = layers.Dropout(dropout_rate, name='word_drop')(word_pool)
word_dense = layers.Dense(L1_dense_units, activation="relu", name='word_dense')(word_drop)
word_encoder = keras.Model(word_input, word_dense)
word_encoder.summary()
# =========================================================================
# Upper level (produce a representation of each document):
L2_dense_units = 100
sentence_input = layers.Input(shape=(max_sentences, max_words), name='sentence_input')
sentence_encoder = tf.keras.layers.TimeDistributed(word_encoder, name='sentence_encoder')(sentence_input)
sentence_transformer = TransformerBlock(embed_dim=L1_dense_units, num_heads=num_heads, ff_dim=ff_dim,
dropout_rate=dropout_rate, name='sentence_transformer')(sentence_encoder)
sentence_pool = layers.GlobalAveragePooling1D(name='sentence_pooling')(sentence_transformer)
sentence_out = layers.Dropout(dropout_rate)(sentence_pool)
preds = layers.Dense(class_number , activation='softmax', name='sentence_output')(sentence_out)
model = keras.Model(sentence_input, preds)
model.summary()
</code></pre>
<p>The summary of the model is as follows:</p>
<pre><code>Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
word_input (InputLayer) [(None, 60)] 0
word_embedding (TokenAndPos (None, 60, 100) 106000
itionEmbedding)
word_transformer (Transform (None, 60, 100) 53764
erBlock)
word_pooling (GlobalAverage (None, 100) 0
Pooling1D)
word_drop (Dropout) (None, 100) 0
word_dense (Dense) (None, 100) 10100
=================================================================
Total params: 169,864
Trainable params: 169,864
Non-trainable params: 0
_________________________________________________________________
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sentence_input (InputLayer) [(None, 15, 60)] 0
sentence_encoder (TimeDistr (None, 15, 100) 169864
ibuted)
sentence_transformer (Trans (None, 15, 100) 53764
formerBlock)
sentence_pooling (GlobalAve (None, 100) 0
ragePooling1D)
dropout_9 (Dropout) (None, 100) 0
sentence_output (Dense) (None, 5) 505
=================================================================
Total params: 224,133
Trainable params: 224,133
Non-trainable params: 0
</code></pre>
<p>Everything is OK, and you can copy and paste this code into Colab to see the summary of the model.
But my problem is with positional encoding at the sentence level.
How do I apply positional encoding at the sentence level?</p>
|
<p>The implementation is recursive in the sense that you treat the average of your outputs of transformer <em>x</em> as the input to transformer <em>x+1</em>.</p>
<p>So let's say your data is structured as (batch, chapter, paragraph, sentence, token).</p>
<p>After the first transformation you end up with (batch, chapter, paragraph, sentence, embedding) so then you average and get (batch, chapter, paragraph, sentence_embedding_in).</p>
<p>Apply another transformation and get (batch, chapter, paragraph, sentence_embedding_out).</p>
<p>Average again and get (batch, chapter, paragraph_embedding). Rinse & Repeat.</p>
<p>The implementation of the paper is actually in a different repository:
<a href="https://github.com/ematvey/hierarchical-attention-networks" rel="nofollow noreferrer">https://github.com/ematvey/hierarchical-attention-networks</a></p>
<p>They actually do something different from what I've described and apply transformers at the bottom and RNN at the top. In theory you could do the opposite or apply RNN at each layer (that would be really slow). As far as the implementation is concerned you can abstract from that - the principle remains the same: you apply a transformation, average the outputs and feed it into the next higher-level "layer" (or "module" using torch lingo).</p>
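<p>A shape-level sketch of this recursion (pure NumPy, with a random linear map standing in for each transformer; all names and dimensions below are illustrative, not taken from the paper):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dimensions: (batch, paragraph, sentence, token, embedding)
batch, n_par, n_sent, n_tok, d = 2, 3, 4, 5, 8

def fake_transformer(x, d_model, rng):
    """Stand-in for a transformer block: any map that preserves the shape.
    Here it is just a random linear projection over the last axis."""
    w = rng.normal(size=(d_model, d_model))
    return x @ w

# token-level inputs
tokens = rng.normal(size=(batch, n_par, n_sent, n_tok, d))

# level 1: transform tokens, then average over the token axis
sent_emb = fake_transformer(tokens, d, rng).mean(axis=3)   # (b, p, s, d)

# level 2: transform sentence embeddings, average over sentences
par_emb = fake_transformer(sent_emb, d, rng).mean(axis=2)  # (b, p, d)

# level 3: transform paragraph embeddings, average over paragraphs
doc_emb = fake_transformer(par_emb, d, rng).mean(axis=1)   # (b, d)

print(sent_emb.shape, par_emb.shape, doc_emb.shape)
```

<p>Each level consumes the averaged output of the level below it, which is exactly the "apply, average, feed upward" loop described above.</p>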
| 106
|
implement classification
|
Implement multi class classification using SVM in R
|
https://stackoverflow.com/questions/27125381/implement-multi-class-classification-using-svm-in-r
|
<p>I am trying to implement multi-class classification using SVM from the e1071 package in R. I read in a similar thread that SVM handles one-vs-one classification by itself in the back end. Is that true?</p>
<p>Also, if I want to run a one-vs-rest classifier, how do I do it? And when printing the summary of the SVM model, it doesn't show anywhere that it used a one-vs-one classifier. How can I confirm that?</p>
|
<p>Found the answer to my query above. I implemented one vs rest classifier by building binary classifiers on iris data present by default in R. It has 3 classes. So, I built 3 binary classifiers. Below is the code:</p>
<pre><code>library(e1071)   # svm, tune.svm
library(caTools) # sample.split
data(iris)
head(iris)
table(iris$Species)
nrow(iris)
index_iris<-sample.split(iris$Species,SplitRatio=.7)
trainset_iris<-iris[index_iris==TRUE,]
testset_iris<-iris[index_iris==FALSE,]
train_setosa<-trainset_iris
train_setosa$Species<-as.character(train_setosa$Species)
train_setosa$Species[train_setosa$Species!="setosa"]<-'0'
train_setosa$Species[train_setosa$Species=="setosa"]<-'1'
train_setosa$Species<-as.integer(train_setosa$Species)
tune_setosa<-tune.svm(Species~.,data=train_setosa,gamma=10^(-6:-1),cost=10^(-1:1))
summary(tune_setosa)
model_setosa<-svm(Species~.,data=train_setosa,kernel="radial",gamma=.1,cost=10,scale=TRUE,probability=TRUE,na.action=na.omit)
summary(model_setosa)
predict_setosa<-predict(model_setosa,testset_iris[,-5])
tab_setosa<-table(predict_setosa,testset_iris[,5])
tab_setosa
train_versicolor<-trainset_iris
train_versicolor$Species<-as.character(train_versicolor$Species)
train_versicolor$Species[train_versicolor$Species!="versicolor"]<-0
train_versicolor$Species[train_versicolor$Species=="versicolor"]<-1
train_versicolor$Species<-as.integer(train_versicolor$Species)
tune_versicolor<-tune.svm(Species~.,data=train_versicolor,gamma=10^(-6:-1),cost=10^(-1:1))
summary(tune_versicolor)
model_versicolor<-svm(Species~.,data=train_versicolor,kernel="radial",gamma=.1,cost=10,scale=TRUE,probability=TRUE,na.action=na.omit)
summary(model_versicolor)
predict_versicolor<-predict(model_versicolor,testset_iris[,-5])
tab_versicolor<-table(predict_versicolor,testset_iris[,5])
tab_versicolor
train_virginica<-trainset_iris
train_virginica$Species<-as.character(train_virginica$Species)
train_virginica$Species[train_virginica$Species!="virginica"]<-0
train_virginica$Species[train_virginica$Species=="virginica"]<-1
train_virginica$Species<-as.integer(train_virginica$Species)
tune_virginica<-tune.svm(Species~.,data=train_virginica,gamma=10^(-6:-1),cost=10^(-1:1))
summary(tune_virginica)
model_virginica<-svm(Species~.,data=train_virginica,kernel="radial",gamma=.1,cost=10,scale=TRUE,probability=TRUE,na.action=na.omit)
summary(model_virginica)
predict_virginica<-predict(model_virginica,testset_iris[,-5])
tab_virginica<-table(predict_virginica,testset_iris[,5])
tab_virginica
bind<-cbind(predict_setosa,predict_versicolor,predict_virginica)
classnames = c('setosa', 'versicolor', 'virginica')
a<-apply(bind,1,function(x) classnames[which.max(x)])
b<-cbind(bind,a)
table(b[,4],testset_iris$Species)
</code></pre>
<p>But, when I compared the confusion matrix of this result with confusion matrix of the result which used One vs One classifier (by default in radial kernel), One vs One gave better result. I believe that happened since there are only 3 classes in this case and One vs Rest works well when classes are large in number.</p>
| 107
|
implement classification
|
How do I implement multilabel classification neural network with keras
|
https://stackoverflow.com/questions/54726518/how-do-i-implement-multilabel-classification-neural-network-with-keras
|
<p>I am attempting to implement a neural network using Keras for a problem that involves multi-label classification. I understand that one way to tackle the problem is to transform it into several binary classification problems. I have implemented one of these, but am not sure how to proceed with the others, and mostly how to combine them. My data set has 5 input variables and 5 labels. Generally, a single sample of data has 1-2 labels; it is rare to have more than two labels.</p>
<p>Here is my code (thanks to machinelearningmastery.com):</p>
<pre><code>import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("Realdata.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:5].astype(float)
Y = dataset[:,5]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# baseline model
def create_baseline():
# create model
model = Sequential()
model.add(Dense(5, input_dim=5, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
scores = model.evaluate(X, encoded_Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
#Make predictions....change the model.predict to whatever you want instead of X
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
return model
# evaluate model with standardized dataset
estimator = KerasClassifier(build_fn=create_baseline, epochs=100, batch_size=5, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Results: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
</code></pre>
|
<p>The approach you are referring to is the <a href="https://scikit-learn.org/stable/modules/multiclass.html#ovr-classification" rel="nofollow noreferrer">one-versus-rest</a> or the <a href="https://scikit-learn.org/stable/modules/multiclass.html#one-vs-one" rel="nofollow noreferrer">one-versus-one</a> strategy for multi-class classification. However, when using a neural network, the easiest solution for a multi-label classification problem with 5 labels is to use a single model with 5 output nodes. With keras:</p>
<pre><code>model = Sequential()
model.add(Dense(5, input_dim=5, kernel_initializer='normal', activation='relu'))
model.add(Dense(5, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd')
</code></pre>
<p>You can provide the training labels as binary-encoded vectors of length 5. For instance, an example that corresponds to classes 2 and 3 would have the label <code>[0 1 1 0 0]</code>.</p>
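<p>If it helps, here is a small NumPy sketch of building such binary-encoded (multi-hot) label vectors from per-sample lists of class indices (the sample data is made up for illustration):</p>

```python
import numpy as np

num_classes = 5
# each sample lists the (0-based) class indices it belongs to
samples = [[2, 3], [0], [1, 4]]

labels = np.zeros((len(samples), num_classes), dtype=int)
for row, class_indices in enumerate(samples):
    labels[row, class_indices] = 1  # set the positions of the present classes

print(labels)  # first row is [0 0 1 1 0]: classes 2 and 3 present
```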
| 108
|
implement classification
|
How to implement a sequence classification LSTM network in CNTK?
|
https://stackoverflow.com/questions/38614265/how-to-implement-a-sequence-classification-lstm-network-in-cntk
|
<p>I'm working on an implementation of an LSTM neural network for sequence classification. I want to design a network with the following parameters:</p>
<ol>
<li><strong>Input :</strong> a sequence of <code>n</code> one-hot-vectors.</li>
<li><strong>Network topology :</strong> two-layer LSTM network.</li>
<li><strong>Output:</strong> the probability that a given sequence belongs to a class (binary classification). I want to take into account only the last output from the second LSTM layer.</li>
</ol>
<p>I need to implement that in CNTK but I struggle because its documentation is not written really well. Can someone help me with that?</p>
|
<p>There is a <a href="https://github.com/Microsoft/CNTK/blob/4080c0a5afb4d80ac16de049f7e9cb234527ae29/Examples/SequenceClassification/SimpleExample/Python/SequenceClassification.py" rel="nofollow noreferrer">sequence classification example</a> that does exactly what you're looking for.</p>
<p>The only difference is that it uses just a single LSTM layer. You can easily change this network to use multiple layers by changing:</p>
<pre class="lang-python prettyprint-override"><code>LSTM_function = LSTMP_component_with_self_stabilization(
embedding_function.output, LSTM_dim, cell_dim)[0]
</code></pre>
<p>to:</p>
<pre class="lang-python prettyprint-override"><code>num_layers = 2 # for example
encoder_output = embedding_function.output
for i in range(0, num_layers):
encoder_output = LSTMP_component_with_self_stabilization(encoder_output.output, LSTM_dim, cell_dim)
</code></pre>
<p>However, you'd be better served by using the new layers library. Then you can simply do this:</p>
<pre class="lang-python prettyprint-override"><code>encoder_output = Stabilizer()(input_sequence)
for i in range(0, num_layers):
encoder_output = Recurrence(LSTM(hidden_dim)) (encoder_output.output)
</code></pre>
<p>Then, to get your final output that you'd put into a dense output layer, you can first do:</p>
<pre class="lang-python prettyprint-override"><code>final_output = sequence.last(encoder_output)
</code></pre>
<p>and then</p>
<pre class="lang-python prettyprint-override"><code>z = Dense(vocab_dim) (final_output)
</code></pre>
| 109
|
implement classification
|
How to implement class weight sampling in multi label classification?
|
https://stackoverflow.com/questions/78559061/how-to-implement-class-weight-sampling-in-multi-label-classification
|
<p>I am working on a multi label classification problem and need some guidance on computing class weights using Scikit-Learn.</p>
<p><strong>Problem Context:</strong></p>
<p>I have a dataset with 9973 training samples. The labels are one-hot encoded, representing 13 different classes. The shape of my training labels is (9973, 13).</p>
<p>I want to use this code:</p>
<pre><code>import numpy as np
from sklearn.utils.class_weight import compute_class_weight
y_integers = np.argmax(y, axis=1)
class_weights = compute_class_weight('balanced', np.unique(y_integers), y_integers)
d_class_weights = dict(enumerate(class_weights))
</code></pre>
<p>This does not work; it fails with an error about too many positional arguments. My training samples look like this:</p>
<pre><code> [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
</code></pre>
<p>How can i implement in multi class classification problem so that my dataset imbalance can be solved ?</p>
<p>Edit 1: It is working fine now, Do you think it works in multilabel because I read somewhere, that you must use sampling weight instead of class weight. How can i implement that ?</p>
|
<p>You need to call <code>compute_class_weights</code> with keyword arguments rather than positional, just like they do in the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>compute_class_weight('balanced', classes=np.unique(y_integers), y=y_integers)
</code></pre>
<p>The way to know this is by looking at the function signature when they define it (also seen in the documentation I linked above):</p>
<pre><code>def compute_class_weight(class_weight, *, classes, y)
</code></pre>
<p>The asterisk <code>*</code> marks every parameter after the first one (<code>class_weight</code>) as keyword-only: they can no longer be passed positionally. It's a common way to force users to use keyword/named arguments like <code>classes=</code> or <code>y=</code>. See <a href="https://stackoverflow.com/a/14302007/7662085">this</a> Stackoverflow answer.</p>
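<p>For reference, the <code>'balanced'</code> heuristic computes <code>n_samples / (n_classes * bincount(y))</code>, so you can sanity-check the result by hand (toy labels below, not the OP's data):</p>

```python
import numpy as np

y = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])  # toy integer labels

classes, counts = np.unique(y, return_counts=True)
weights = len(y) / (len(classes) * counts)  # the 'balanced' formula

d_class_weights = dict(zip(classes.tolist(), weights.tolist()))
print(d_class_weights)  # the rarer class 1 gets the largest weight
```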
| 110
|
implement classification
|
Confusion on cs231n image classification implementation
|
https://stackoverflow.com/questions/35691805/confusion-on-cs231n-image-classification-implementation
|
<p>I am currently taking the CS231n online class, trying to implement simple image classification code using the CIFAR-10 dataset, from <a href="http://cs231n.github.io/classification/" rel="nofollow">http://cs231n.github.io/classification/</a></p>
<p>While running this code, the predict function takes too long and never completes. I basically copy-and-pasted all the code.
My desktop is a state-of-the-art machine.</p>
<p>The code is shown below. </p>
<h1>Load function</h1>
<pre><code>import os;
import cPickle as pickle;
import numpy as np;
import matplotlib.pyplot as plt;
def load_CIFAR_batch(filename):
with open(filename, 'r') as f:
datadict=pickle.load(f);
X=datadict['data'];
Y=datadict['labels'];
X=X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("float");
Y=np.array(Y);
return X, Y;
def load_CIFAR10(ROOT):
xs=[];
ys=[];
for b in range(1,6):
f=os.path.join(ROOT, "data_batch_%d" % (b, ));
X, Y=load_CIFAR_batch(f);
xs.append(X);
ys.append(Y);
Xtr=np.concatenate(xs);
Ytr=np.concatenate(ys);
del X, Y;
Xte, Yte=load_CIFAR_batch(os.path.join(ROOT, "test_batch"));
return Xtr, Ytr, Xte, Yte;
def visualize_CIFAR(X_train, y_train, samples_per_class):
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'];
num_classes=len(classes);
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show();
</code></pre>
<h1>Nearest Neighbor function</h1>
<pre><code>class NearestNeighbor:
def __init__(self):
pass
def train(self, X, y):
"""X is N x D where each row is an example. Y is 1-dimension of size N"""
# the nearest neighbor classifier simply remembers all the training data
self.Xtr = X
self.ytr = y
def predict(self, X):
"""X is N x D where each row is an example we wish to predict label for"""
num_test = X.shape[0]
# lets make sure that the output type matches the input type
Ypred = np.zeros(num_test, dtype = self.ytr.dtype)
# loop over all test rows
for i in xrange(num_test):
# find the nearest training image to the i'th test image
# using the L1 distance (sum of absolute value differences)
distances = np.sum(np.abs(self.Xtr - X[i,:]), axis = 1)
min_index = np.argmin(distances) # get the index with smallest distance
Ypred[i] = self.ytr[min_index] # predict the label of the nearest example
return Ypred
Xtr, Ytr, Xte, Yte = load_CIFAR10('/home/tegg/Downloads/cifar-10-batches-py/')
# flatten out all images to be one-dimensional
Xtr_rows = Xtr.reshape(Xtr.shape[0], 32*32*3) # Xtr_rows become 50000 x 3072
Xte_rows = Xte.reshape(Xte.shape[0], 32*32*3) # Xte_rows become 10000 x 3072
nn = NearestNeighbor() # create a Nearest Neighbor classifier class
nn.train(Xtr_rows, Ytr) # train the classifier on the training images and labels
Yte_predict = nn.predict(Xte_rows) # predict labels on the test images
# and now print the classification accuracy, which is the average number
# of examples that are correctly predicted (i.e. label matches)
</code></pre>
<p>I think the last line is causing the trouble.</p>
<pre><code>Yte_predict = nn.predict(Xte_rows)
</code></pre>
<p>While running this code in an IPython notebook, it keeps running and never shows a result. Is there anything wrong with my code?</p>
| 111
|
|
implement classification
|
implement Naive Bayes Gaussian classifier on the number classification data
|
https://stackoverflow.com/questions/46513538/implement-naive-bayes-gaussian-classifier-on-the-number-classification-data
|
<p>I am trying to implement a Gaussian Naive Bayes classifier on the number classification data, where each feature represents a pixel. </p>
<p>When trying to implement this, I hit a bump: I've noticed that some of the feature variances equal 0.
This is an issue because it means dividing by 0 when computing the probability. </p>
<p>What can I do to work around this? </p>
|
<p>Very short answer is <strong>you cannot</strong> - even though you can usually try to fit Gaussian distribution to any data (no matter its true distribution) there is one exception - the constant case (0 variance). So what can you do? There are three main solutions:</p>
<ol>
<li><p>Ignore 0-variance pixels. I do not recommend this approach as it loses information, but if it is 0 variance <strong>for each class</strong> (which is a common case for MNIST - some pixels are black, <strong>independently</strong> from class) then it is actually fully justified mathematically. Why? The answer is really simple, if for each class, given feature is constant (equal to some single value) then it brings literally no information for classification, thus ignoring it will not affect the model which assumes conditional independence of features (such as NB).</p></li>
<li><p>Instead of doing MLE estimate (so using N(mean(X), std(X))) use the regularised estimator, for example of form N(mean(X), std(X) + eps), which is equivalent to adding eps-noise independently to each pixel. This is a very generic approach that I would recommend.</p></li>
<li><p>Use a better distribution class. If your data is images (and since you have 0 variance I assume these are binary images, maybe even MNIST), you have K features, each in the [0, 1] interval. You can use a multinomial distribution with bucketing, so P(x e Bi|y) = #{ x e Bi | y } / #{ x | y }. Finally, this is usually <strong>the best</strong> thing to do (however it requires some knowledge of your data), as the problem is that you are trying to use a model which is not suited for the data provided, and I can assure you that a proper distribution will always give better results with NB. So how can you find a good distribution? Plot the conditional marginals P(xi|y) for each feature and look at how they behave; based on that, choose a distribution class which matches the behaviour. I can assure you these will not look like Gaussians.</p></li>
</ol>
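<p>Point 2 is, in spirit, what scikit-learn's <code>GaussianNB</code> does with its <code>var_smoothing</code> parameter. A minimal NumPy sketch of the idea (toy binary-pixel data; here <code>eps</code> is added to the variance rather than the standard deviation, which serves the same purpose of avoiding division by zero):</p>

```python
import numpy as np

eps = 1e-9  # arbitrary small regulariser, analogous to sklearn's var_smoothing

# toy binary-pixel data: the second feature is constant (zero variance)
X = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [1.0, 1.0]])

mean = X.mean(axis=0)
var = X.var(axis=0) + eps   # regularised variance is never exactly 0

# per-feature Gaussian log-density now stays finite for every feature
x = np.array([0.5, 1.0])
log_p = -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

print(var)    # no zero variance left
print(log_p)  # all entries finite
```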
| 112
|
implement classification
|
Multi-label classification implementation
|
https://stackoverflow.com/questions/63666851/multi-label-classification-implementation
|
<p>So far I have used Keras/TensorFlow to model image processing, NLP, and time-series prediction. Usually, when there were multiple categories, the task was always to predict which single class a sample belongs to. So for example, the list of possible classes was [car, human, airplane, flower, building], and the final prediction was which class the sample belongs to, giving probabilities for each class. Usually, in the case of a very confident prediction, one class had a very high probability and the others very low.</p>
<p>Now I came across this Kaggle challenge: <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/overview" rel="nofollow noreferrer">Toxic Comment Classification Challenge</a> and specifically this <a href="https://www.kaggle.com/jhoward/improved-lstm-baseline-glove-dropout" rel="nofollow noreferrer">implementation</a>. I thought that this is a multi-label classification problem, as one sample can belong to different classes. And indeed, when I check the final prediction:</p>
<p><a href="https://i.sstatic.net/Lm7LB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lm7LB.png" alt="ex1" /></a></p>
<p>I can see that the first sample prediction has a very high probability for both toxic and obscene. With my knowledge so far, when I applied a standard model to predict a class, I would have predicted the probability of each class the sample could belong to. So either class 1 or 2 or ....; in the case of a confident prediction I would have had a high probability for class toxic and low for the others, or in the case of an unconfident prediction, 0.4x for toxic, 0.4x for obscene, and small probabilities for the rest.</p>
<p>Now I was surprised by how the implementation was done, and I do not understand the following:
How is a multi-label classification done (in contrast to the "usual" model)?</p>
<p>When checking the code I see the following model:</p>
<pre><code>inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix])(inp)
x = Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(x)
x = GlobalMaxPool1D()(x)
x = Dense(50, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>I understand that <code>x = Dense(6, activation="sigmoid")</code> results from having to predict 6 classes. That matches my knowledge so far. However, why does this then yield per-class probabilities for a multi-label classification? Where is the implementation difference between multi-label classification and just predicting one label out of several choices?</p>
<p>Is the only difference using binary crossentropy instead of (sparse) categorical crossentropy along with the 6 output nodes? Does that mean we have a binary problem for each of the classes and the model handles these 6 classes separately, giving for each class the probability that the sample belongs to it, so that a sample can have a high probability of belonging to several classes?</p>
|
<p>The loss function to be used is indeed the <code>binary_crossentropy</code> with a <code>sigmoid</code> activation.</p>
<p>The <code>categorical_crossentropy</code> is not suitable for multi-label problems, because in case of the multi-label problems, the labels are not mutually exclusive. Repeat the last sentence: the labels are not mutually exclusive.</p>
<p>This means that the presence of a label in the form <code>[1,0,1,0,0,0]</code> is correct. The <code>categorical_crossentropy</code> and <code>softmax</code> will always tend to favour one specific class, but this is not the case; just like you saw, a comment can be both toxic and obscene.</p>
<p>Now imagine photos with cats and dogs inside them. What happens if we have 2 dogs and 2 cats inside a photo? Is it a dog picture or a cat picture? It is actually a "both" picture! We definitely need a way to specify that multiple labels are pertained/related to a photo/label.</p>
<p>The rationale for using the binary_crossentropy and sigmoid for multi-label classification resides in the mathematical properties: each output needs to be treated as an independent <a href="https://en.wikipedia.org/wiki/Bernoulli_distribution" rel="nofollow noreferrer">Bernoulli distribution</a>.</p>
<p>Therefore, the only correct solution is BCE + 'sigmoid'.</p>
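<p>A small numeric illustration of why independent sigmoids can express "both" while softmax cannot (NumPy, with made-up logits):</p>

```python
import numpy as np

logits = np.array([3.0, 3.0, -3.0])  # classes 0 and 1 both strongly present

# softmax: probabilities compete, they must sum to 1
softmax = np.exp(logits) / np.exp(logits).sum()

# sigmoid: each class gets its own independent Bernoulli probability
sigmoid = 1 / (1 + np.exp(-logits))

print(softmax.round(3))  # ~[0.499, 0.499, 0.001] -- no class can exceed 0.5 here
print(sigmoid.round(3))  # ~[0.953, 0.953, 0.047] -- both classes near 1
```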
| 113
|
implement classification
|
classification and clustering of tweets
|
https://stackoverflow.com/questions/71296865/classification-and-clustering-of-tweets
|
<p>I am trying to implement classification and clustering of tweets in a Jupyter notebook, and I am getting an error while running this:</p>
<pre><code>nlp = en_core_web_sm.load()
tokenizer = RegexpTokenizer(r'\w+')
lemmatizer = WordNetLemmatizer()
stop = set(stopwords.words('english'))
punctuation = list(string.punctuation)
stop.update(punctuation)
w_tokenizer = WhitespaceTokenizer()
# clean the set of words
def furnished(text):
    final_text = []
    for i in text.split():
        if i.lower() not in stop:
            word = lemmatizer.lemmatize(i)
            final_text.append(word.lower())
    return " ".join(final_text)
</code></pre>
<p>The error I am getting is:</p>
<pre><code>---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-9-a9108abcd516> in <module>()
----> 1 nlp = en_core_web_sm.load()
2 tokenizer = RegexpTokenizer(r'\w+')
3 lemmatizer = WordNetLemmatizer()
4 stop = set(stopwords.words('english'))
5 punctuation = list(string.punctuation)
NameError: name 'en_core_web_sm' is not defined.
</code></pre>
<p>Can somebody please let me know where I am going wrong in this?</p>
| 114
|
|
implement classification
|
How to implement class or sample weight sampling in multi label classification?
|
https://stackoverflow.com/questions/78595351/how-to-implement-class-or-sample-weight-sampling-in-multi-label-classification
|
<p>I'm currently working on a multi-label image classification problem where my dataset is significantly imbalanced. Each image in my dataset can have multiple labels. The labels are provided in a one-hot encoded format.
For example: [1,0,0,0,0,1,0] etc.</p>
<p>My train df:</p>
<pre><code>Image Index Finding Labels
0 00005504_002.png Pleural_Thickening
1 00003527_002.png Atelectasis|Pneumonia
2 00018285_000.png Effusion|Mass
3 00016971_007.png Emphysema|Mass
4 00014022_071.png Atelectasis|Consolidation|Pleural_Thickening
</code></pre>
<p>To balance the dataset, I considered undersampling the overrepresented classes. However, I encountered a challenges that it might lose effectiveness of model.</p>
<p>I am looking to implement class weights/Sample weights in PyTorch. How can I effectively implement in PyTorch for this multi-label classification problem? I have read online that class weights might not work well with one-hot encoded labels and that using sample weights or a custom loss function might be necessary. How can i implement custom loss with weighted sampling ?. Any particular advice would be appreciated.</p>
<p>Below is my code:</p>
<pre><code>resnet50 = ResNet101(input_shape=(256, 256, 3), weights='imagenet', include_top=False)
for layer in resnet50.layers[:-3]:
layer.trainable = False
x = Flatten()(resnet50.output)
x = Dense(512, activation='relu')(x)
prediction = Dense(13, activation='sigmoid')(x)
model = Model(inputs=resnet50.input, outputs=prediction)
learning_rate = 0.001
adam_optimizer = Adam(learning_rate=learning_rate)
model.compile(optimizer=adam_optimizer, loss='binary_crossentropy', metrics=['accuracy', AUC(multi_label=True)])
early_stopping = EarlyStopping(monitor='val_auc', patience=5, restore_best_weights=True)
history = model.fit(train_dataset, epochs=100, validation_data=val_dataset, callbacks=[early_stopping])
</code></pre>
<p>There is a <code>compute_sample_weight</code> function in sklearn, but I don't have any idea how to apply it in the multi-label case.</p>
| 115
|
|
implement classification
|
How to implement Random Decision Forest classification in iOS
|
https://stackoverflow.com/questions/34871592/how-to-implement-random-decision-forest-classification-in-ios
|
<p>I am making an iOS application using Objective-C and Xcode that will collect and analyze some data from the user. Using this data, it will return one of 3 classifications. I can use training data in either R or Python to create a random forest model that has the capability to do this. I would like to know how I can implement this model in the iOS application so that it can return a classification. If this is not possible, then maybe it is possible to synthesize the model in the application itself and somehow store it for use again with new data, or use stored training data to make a new model every time if it is not possible to store the model in the application itself.</p>
<p>Thank you for your help :-)</p>
|
<p>One approach is using <a href="http://bigml.com" rel="nofollow">BigML</a>, a cloud-based ML service that also provides a <a href="https://bigml.com/developers/" rel="nofollow">REST API</a> plus <a href="https://github.com/bigmlcom/bigml-swift" rel="nofollow">Swift</a> and <a href="https://github.com/bigmlcom/ml4ios" rel="nofollow">ObjC SDKs</a>.</p>
<p>BigML provides support for many ML algorithms, including decision trees, clusters, anomaly detectors, and, most importantly in your case, <a href="http://blog.bigml.com/2013/04/29/1-click-random-decision-forests/" rel="nofollow">ensembles</a>.</p>
<p>One interesting feature that both the ObjC and Swift SDKs provide is local predictions for any of the aforementioned ML algorithms. In other words, you can create your model/cluster/ensemble using the Web UI; then, once you are satisfied with the results, you can download it, put it into your app's bundle, and load it into BigML's SDK to generate predictions offline -- i.e. without using the remote service. Of course, the SDKs also support creating your ML resources directly (that is, without going through the Web UI), if your requirement is generating, e.g., a model from user data.</p>
<p>An example of how you can retrieve an ensemble (forest tree) from BigML and use it to make a prediction is this:</p>
<pre><code>ML4iOS* ml4iOS = [[ML4iOS alloc] initWithUsername:_BML_USERNAME
key:_BML_APIKEY
                                   developmentMode:YES];
NSDictionary* inputData = @{
@"sepal width": @4.1,
@"petal length": @0.96,
@"petal width": @2.52};
NSInteger httpStatusCode = 0;
NSDictionary* ensemble = [self getEnsembleWithIdSync:ensembleId
statusCode:&httpStatusCode];
NSDictionary* prediction =
[ML4iOSLocalPredictions localPredictionWithJSONEnsembleSync:ensemble
arguments:inputData
options:@{ @"byName" : @(YES) }
ml4ios:ml4iOS];
</code></pre>
<p>Full disclosure: I am currently working at BigML.</p>
| 116
|
implement classification
|
Is it possible to implement data classification scheme (for choropleth plot) in altair?
|
https://stackoverflow.com/questions/72021194/is-it-possible-to-implement-data-classification-scheme-for-choropleth-plot-in
|
<p>I am creating a choropleth plot function with Altair and want to use different data <a href="https://gisgeography.com/choropleth-maps-data-classification/" rel="nofollow noreferrer">classification scheme</a> for the size of the scatter markers. I have looked up the <code>bin parameters</code> and <code>bin transform</code> in <code>encoding</code> but haven't worked it out. Ideally I want to implement the classification schemes like they are in <a href="https://geopandas.org/en/stable/gallery/choropleths.html" rel="nofollow noreferrer">GeoPandas</a>. Here is my current code and plot (only the <code>size</code> part is relevant I believe)</p>
<pre><code>base = alt.Chart(choro_data).mark_geoshape(
fill="lightgray",
stroke='white',
strokeWidth=0.1
).properties(
width=1000,
height=800,
title='Safe Graph for Average Distance'
).project(
type='albersUsa'
)
cold = alt.Chart(choro_data).mark_circle().encode(
latitude='properties.latitude:Q',
longitude='properties.longitude:Q',
stroke=alt.ColorValue("black"),
strokeWidth = alt.value(1),
strokeOpacity = alt.condition(
'datum.properties.recency <= 5', ### condition value set to??
alt.value(1),
alt.value(0)),
color=alt.Color('properties.recency:Q',
bin = alt.Bin(step=5),
scale=alt.Scale(scheme='blues'),
legend=alt.Legend(title='Week Numbers (Cold Spots)')),
size=alt.Size('properties.total:Q',
scale=alt.Scale(range=[0, 70]),
bin=alt.Bin(step=5),
legend=alt.Legend(title='Number of Time Classified as Cold/Hot Spots')),
tooltip=display
)
base+cold
</code></pre>
<p><a href="https://i.sstatic.net/yMD2v.png" rel="nofollow noreferrer">choropleth plot</a></p>
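<p>As a possible workaround (this is not an Altair feature; the helper below is a plain NumPy sketch), you can compute the classification scheme yourself and add the class index as a column:</p>
<pre><code>import numpy as np

def quantile_bins(values, k):
    """Assign each value to one of k quantile classes (0..k-1), similar in
    spirit to geopandas/mapclassify scheme='quantiles'."""
    edges = np.quantile(values, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(values, edges)
</code></pre>
<p>The computed column could then be encoded as an ordinal field, e.g. <code>size=alt.Size('total_bin:O')</code>, where <code>total_bin</code> is a hypothetical column name holding the class index.</p>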
| 117
|
|
implement classification
|
How to implement One-Vs-Rest for Multi-Class Classification in Julia?
|
https://stackoverflow.com/questions/74932178/how-to-implement-one-vs-rest-for-multi-class-classification-in-julia
|
<p>I'm new to Julia and I am trying to implement One-vs-Rest multi-class classification, and I was wondering if anyone could help me out. Here is a snippet of my code so far.
My data frame is basic since I'm trying to figure out the implementation first; my c column is my class, consisting of [0, 1, 2], and my y, x1, x2, x3 are random Int64 values.</p>
<pre><code>using DataFrames
using CSV
using StatsBase
using StatsModels
using Statistics
using Plots, StatsPlots
using GLM
using Lathe
df = DataFrame(CSV.File("data.csv"))
fm = @formula(c~x1+x2+x3+y)
model0 = glm(fm0, df, Binomial(), ProbitLink()) # 0 vs [1,2]
model1 = glm(fm1, df, Binomial(), ProbitLink()) # 1 vs [0,2]
model2 = glm(fm2, df, Binomial(), ProbitLink()) # 2 vs [0,1]
</code></pre>
<p>I am trying to make logistic models but I don't know how to do it.
If anyone can help me out, I would be thrilled.</p>
<p>I am trying to split the multi-class dataset into multiple binary classification problems. A binary classifier is then trained on each binary classification problem and predictions are made using the model that is the most confident.
My only problem is that I don't know how to write the logistic model for a multi-class dataset.</p>
|
<p>Here is how you can do the same manually using GLM.jl (there is a lot of boilerplate code, but I wanted to keep the example simple):</p>
<pre><code>df = DataFrame(x1=rand(100), x2=rand(100), x3=rand(100), target=rand([0, 1, 2], 100));
model0 = glm(@formula((target==0)~x1+x2+x3), df, Binomial(), ProbitLink())
model1 = glm(@formula((target==1)~x1+x2+x3), df, Binomial(), ProbitLink())
model2 = glm(@formula((target==2)~x1+x2+x3), df, Binomial(), ProbitLink())
choice = argmax.(eachrow([predict(model0) predict(model1) predict(model2)])) .- 1 # need to subtract 1 to use 0-based indexing
</code></pre>
<p>Let me explain the last operation step by step:</p>
<ol>
<li>get the predictions of three models as columns of a matrix</li>
</ol>
<pre><code>julia> [predict(model0) predict(model1) predict(model2)]
100×3 Matrix{Float64}:
0.517606 0.314234 0.206062
0.173916 0.431573 0.389071
0.211322 0.355592 0.413929
0.252108 0.337381 0.387629
0.515388 0.306834 0.211937
0.169052 0.386062 0.436603
0.125764 0.395105 0.490297
0.0955411 0.347634 0.589351
0.449734 0.341201 0.227459
⋮
0.412786 0.281343 0.303454
0.209337 0.354169 0.417261
0.37683 0.345307 0.273704
0.187584 0.411171 0.390831
0.401612 0.243119 0.350124
0.323155 0.338805 0.322453
0.488678 0.300927 0.23324
0.0979282 0.413296 0.522639
0.195902 0.313932 0.472582
</code></pre>
<ol start="2">
<li>Iterate rows of this matrix:</li>
</ol>
<pre><code>julia> eachrow([predict(model0) predict(model1) predict(model2)])
Base.Generator{Base.OneTo{Int64}, Base.var"#240#241"{Matrix{Float64}}}(Base.var"#240#241"{Matrix{Float64}}([0.5176063824396965 0.3142344514631397 0.2060615429588215; 0.17391563070921184 0.4315728844478078 0.3890711795309746; … ; 0.09792824142064335 0.41329629745776897 0.5226385962610233; 0.19590183503978997 0.31393218775269705 0.4725817014561341]), Base.OneTo(100))
</code></pre>
<ol start="3">
<li>For each row get index of maximum value:</li>
</ol>
<pre><code>julia> argmax.(eachrow([predict(model0) predict(model1) predict(model2)]))
100-element Vector{Int64}:
1
2
3
3
1
3
3
3
1
⋮
1
3
1
2
1
2
1
3
3
</code></pre>
<ol start="4">
<li>Subtract 1 from the result as Julia uses 1-based indexing, and you wanted the first model to have number 0:</li>
</ol>
<pre><code>julia> argmax.(eachrow([predict(model0) predict(model1) predict(model2)])) .- 1
100-element Vector{Int64}:
0
1
2
2
0
2
2
2
0
⋮
0
2
0
1
0
1
0
2
2
</code></pre>
<p>Alternatively you could write:</p>
<pre><code>julia> map(predict(model0), predict(model1), predict(model2)) do x...
return argmax(x) - 1
end
100-element Vector{Int64}:
0
1
2
2
0
2
2
2
0
⋮
0
2
0
1
0
1
0
2
2
</code></pre>
<p>Which is more efficient and shorter, but I was not sure if it is clearer as it uses <a href="https://docs.julialang.org/en/v1/manual/faq/#The-two-uses-of-the-...-operator:-slurping-and-splatting" rel="nofollow noreferrer">slurping</a>.</p>
<hr />
<p>An example how to train one model for three classes using Flux.jl (still using the same <code>df</code> source data frame):</p>
<pre><code>using Flux
model = Chain(Dense(3 => 3, σ), softmax)
X = permutedims(Matrix(df[:, 1:3]))
y = Flux.onehotbatch(df.target, 0:2)
optim = Flux.setup(Flux.Adam(0.01), model)
for epoch in 1:1_000
Flux.train!(model, [(X, y)], optim) do m, x, y
y_hat = m(x)
Flux.crossentropy(y_hat, y)
end
end
</code></pre>
| 118
|
implement classification
|
Implementing attention in Keras classification
|
https://stackoverflow.com/questions/57059382/implementing-attention-in-keras-classification
|
<p>I would like to add attention to a trained image classification CNN model. For example, there are 30 classes and with the Keras CNN, I obtain for each image the predicted class. However, I want to visualize the important features/locations of the predicted result. I want to add soft attention after the FC layer. I tried to read "<a href="https://arxiv.org/abs/1502.03044" rel="noreferrer">Show, Attend and Tell: Neural Image Caption Generation with Visual Attention</a>" to obtain similar results. However, I could not understand how the author implemented it, because my problem is not an image captioning or text seq2seq problem. </p>
<p>I have an image classification CNN and would like to extract the features, put them into an LSTM, and visualize the soft attention. However, I am getting stuck every time. </p>
<p>The steps I took:</p>
<ol>
<li>Load CNN model (I already trained the CNN earlier for predictions)</li>
<li>Extract features from a single image (however, the LSTM will check the same image with some removed patches in the image)</li>
</ol>
<p>Getting stuck after steps below:</p>
<ol start="3">
<li>Create LSTM with soft attention</li>
<li>Obtain a single output</li>
</ol>
<p>I am using Keras with TensorFlow background. CNN features are extracted using ResNet50. The images are 224x224 and the FC layer has 2048 units as output shape.</p>
<pre><code>#Extract CNN features:
base_model = load_model(weight_file, custom_objects={'custom_mae': custom_mae})
last_conv_layer = base_model.get_layer("global_average_pooling2d_3")
cnn_model = Model(input=base_model.input, output=last_conv_layer.output)
cnn_model.trainable = False
bottleneck_features_train_v2 = cnn_model.predict(train_gen.images)
#Create LSTM:
seq_input = Input(shape=(1, 224, 224, 3 ))
encoded_frame = TimeDistributed(cnn_model)(seq_input)
encoded_vid = LSTM(2048)(encoded_frame)
lstm = Dropout(0.5)(encoded_vid)
#Add soft attention
attention = Dense(1, activation='tanh')(lstm)
attention = Flatten()(attention)
attention = Activation('softmax')(attention)
attention = RepeatVector(units)(attention)
attention = Permute([2, 1])(attention)
#output 101 classes
predictions = Dense(101, activation='softmax', name='pred_age')(attention)
</code></pre>
<p>What I expect is to feed the image features from the last FC layer into an LSTM with soft attention, train the attention weights, obtain a class from the output, and visualize the soft attention to know where the system is looking when making the prediction (similar to the soft attention visualization in the paper). </p>
<p>As I am new to the attention mechanism, I did a lot of research but could not find a solution to my problem. I would like to know if I am doing it right. </p>
| 119
|
|
implement classification
|
Implementing Hierarchical Attention for Classification
|
https://stackoverflow.com/questions/49057752/implementing-hierarchical-attention-for-classification
|
<p>I am trying to implement the Hierarchical Attention <a href="https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf" rel="nofollow noreferrer">paper</a> for text classification. One of the challenges that I am finding is how to manage batching and updates to the weights of the network by the optimizer. The architecture of the network is made of two encoders stacked one after the other: a sentence encoder, and a document encoder. </p>
<p>When the dataset is made of large documents, the following problem arises: for each pass through the document encoder, you will have multiple passes through the sentence encoder. When the loss is calculated and the optimizer uses the calculated gradients to update the weights of the parameters of the network, I am assuming that the weights of the sentence encoder should be updated differently to the weights of the document encoder. What is a good strategy to do so? How could that strategy could be implemented in libraries such as <code>Keras</code> or <code>Pytorch</code>?</p>
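<p>In PyTorch, one straightforward strategy is optimizer parameter groups: a single optimizer can apply a different learning rate (or other hyperparameters) to the sentence encoder and the document encoder, while the gradients come from one backward pass through the whole hierarchy. A minimal sketch, where the encoder modules are hypothetical stand-ins:</p>
<pre><code>import torch
import torch.nn as nn

# Hypothetical stand-ins for the two stacked encoders of the hierarchical model.
sentence_encoder = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
document_encoder = nn.GRU(input_size=16, hidden_size=16, batch_first=True)

# One optimizer, two parameter groups, each with its own learning rate.
optimizer = torch.optim.Adam([
    {"params": sentence_encoder.parameters(), "lr": 1e-4},
    {"params": document_encoder.parameters(), "lr": 1e-3},
])
</code></pre>
<p>The groups only change how each subset of weights is updated; the loss and backward pass are shared.</p>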
| 120
|
|
implement classification
|
How to implement Image Classification and feature extraction using Matlab?
|
https://stackoverflow.com/questions/36307467/how-to-implement-image-classification-and-feature-extraction-using-matlab
|
<p>I'm currently performing a research that involves identification of food items using image classification techniques, I'm well versed in the theories and maths of SVM, yet I'm completely lost when it comes to implementing it using Matlab.</p>
<p>I would like some guiding steps to perform full image classification of food, I believe it will involve color, texture, shape and size features. I just wanted to know where should I start?</p>
<p>Thank you very much</p>
| 121
|
|
implement classification
|
What FFT descriptors should be used as feature to implement classification or clustering algorithm?
|
https://stackoverflow.com/questions/27546476/what-fft-descriptors-should-be-used-as-feature-to-implement-classification-or-cl
|
I have some geographical trajectories sampled to analyze, and I calculated the histogram of the data in the spatial and temporal dimensions, which yielded a time-domain-based feature for each spatial element. I want to perform a discrete <code>FFT</code> to transform the time-domain-based feature into a frequency-domain-based feature (which I think may be more robust), and then run some classification or clustering algorithms.
<p>But I'm not sure which descriptor to use as the frequency-domain-based feature, since there are the amplitude spectrum, power spectrum and phase spectrum of a signal, and I've read some references but am still confused about their significance. And what distance (similarity) function should be used as a measurement when performing learning algorithms on the frequency-domain-based feature vector (Euclidean distance? Cosine distance? Gaussian function? Chi-kernel or something else?)</p>
Hope someone can give me a clue or some material that I can refer to, thanks~
<hr>
<p><strong>Edit</strong></p>
Thanks to @DrKoch, I chose a spatial element with the largest <code>L-1</code> norm and plotted its <code>log power spectrum</code> in Python, and it did show some prominent peaks. Below are my code and the figure:
<pre><code>import numpy as np
import matplotlib.pyplot as plt
sp = np.fft.fft(signal)
freq = np.fft.fftfreq(signal.shape[-1], d = 1.) # time slot of histogram is 1 hour
plt.plot(freq, np.log10(np.abs(sp) ** 2))
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/YFjAV.png" alt="log power spectrum"></p>
<p>And I have several trivial questions to ask to make sure I totally understand your suggestion:</p>
<ul>
<li><p>In your second suggestion, you said <em>"ignore all these values."</em> </p>
Do you mean the horizontal line represents the threshold and all values below it should be assigned the value zero?</li>
<li><p><em>"you may search for the two, three largest peaks and use their location and probably widths as 'Features' for further classification."</em> </p>
I'm a little bit confused about the meaning of "location" and "width", does "location" refer to the log value of power spectrum (y-axis) and "width" refer to the frequency (x-axis)? If so, how to combine them together as a feature vector and compare two feature vector of <em>"a similar frequency and a similar widths"</em> ?</li>
</ul>
<hr>
<p><strong>Edit</strong></p>
<p>I replaced <code>np.fft.fft</code> with <code>np.fft.rfft</code> to calculate the positive part and plot both power spectrum and log power spectrum.</p>
code:
<pre><code>f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(freq, np.abs(sp) ** 2)
axarr[1].plot(freq, np.log10(np.abs(sp) ** 2))
plt.show()
</code></pre>
figure:
<p><img src="https://i.sstatic.net/PKXX2.png" alt="power spectrum and log power spectrum">
Please correct me if I'm wrong:</p>
I think I should keep the last four peaks in the <em>first</em> figure with <code>power = np.abs(sp) ** 2</code> and <code>power[power < threshold] = 0</code> because the log power spectrum reduces the differences among the components, and then use the log spectrum of the new power as a feature vector to feed classifiers.
<p>I also see some references suggesting applying a window function (e.g. a Hamming window) before doing the FFT to avoid <strong>spectral leakage</strong>. My raw data is sampled every 5 ~ 15 seconds and I've applied a histogram on sampling time; is that method equivalent to applying a window function, or do I still need to apply one to the histogram data?</p>
|
<p>Generally you should extract just a small number of "Features" out of the complete FFT spectrum.</p>
<p>First: Use the log power spectrum.
Complex numbers and phase are useless in these circumstances, because they depend on where you start/stop your data acquisition (among many other things)</p>
<p>Second: you will see a "Noise Level", i.e. most values are below a certain threshold; ignore all these values.</p>
<p>Third: If you are lucky, e.g. your data has some harmonic content (cycles, repetitions) you will see a few prominent Peaks.</p>
<p>If there are clear peaks, it is even easier to detect the noise: Everything between the peaks should be considered noise.</p>
<p>Now you may search for the two or three largest peaks and use their locations and possibly widths as "Features" for further classification.</p>
<p>Location is the x-value of the peak i.e. the 'frequency'. It says something how "fast" your cycles are in the input data.</p>
<p>If your cycles don't have a constant frequency during the measuring interval (or you use a window before calculating the FFT), the peak will be broader than one bin. So this width of the peak says something about the 'stability' of your cycles.</p>
<p>Based on this: Two patterns are similar if the biggest peaks of both have a similar frequency and similar widths, and so on.</p>
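<p>This recipe can be sketched as follows (assuming SciPy is available; the median as a noise-floor estimate is my own simplification, not the only option):</p>
<pre><code>import numpy as np
from scipy.signal import find_peaks, peak_widths

def peak_features(log_power, n=3):
    """Locations and widths of the n highest peaks above a crude noise floor."""
    floor = np.median(log_power)                 # rough noise-level estimate
    peaks, props = find_peaks(log_power, height=floor)
    order = np.argsort(props["peak_heights"])[::-1][:n]
    widths = peak_widths(log_power, peaks, rel_height=0.5)[0]
    return [(int(peaks[i]), float(widths[i])) for i in order]
</code></pre>
<p>Two spectra can then be compared by how close their top peaks are in location and width.</p>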
<hr />
<p><strong>EDIT</strong></p>
<p>Very interesting to see a logarithmic power spectrum of one of your examples.</p>
<p>Now it's clear that your input contains a <strong>single</strong> harmonic (periodic, oscillating) component with a frequency (repetition rate, cycle duration) of about f0=0.04.
(This is a relative frequency, proportional to your sampling frequency, i.e. the inverse of the time between individual measurement points)</p>
<p>It is not a pure sine wave, but some "interesting" waveform. Such waveforms produce peaks at 1*f0, 2*f0, 3*f0 and so on.
(So using an FFT for further analysis turns out to be a very good idea)</p>
<p>At this point you should produce spectra of several measurements and see what makes measurements similar and how different measurements differ. What are the "important" features to distinguish your measurements? Things to look out for:</p>
<ul>
<li>Absolute amplitude: Height of the prominent (leftmost, highest) peaks.</li>
<li>Pitch (main cycle rate, speed of changes): this is the position of the first peak and the distance between consecutive peaks.</li>
<li>Exact Waveform: Relative amplitude of the first few peaks.</li>
</ul>
<p>If your most important feature is absolute amplitude, you're better off calculating the RMS (root mean square) level of your input signal.</p>
<p>If pitch is important, you're better off calculating the ACF (auto-correlation function) of your input signal.</p>
<p>Don't focus on the leftmost peaks, these come from the high frequency components in your input and tend to vary as much as the noise floor.</p>
<p><strong>Windows</strong></p>
<p>For a high quality analysis it is important to apply a window to the input data before applying the FFT. This reduces the influence of the "jump" between the end of your input vector and the beginning of your input vector, because the FFT considers the input as a single cycle.</p>
<p>There are several popular windows which mark different choices of an unavoidable trade-off: Precision of a single peak vs. level of sidelobes:</p>
<p>You chose a "rectangular window" (equivalent to no window at all, just start/stop your measurement). This gives excellent precision of your peaks, which now have a width of just one sample. Your sidelobes (the small peaks left and right of your main peaks) are at -21dB, very tolerable given your input data. In your case this is an excellent choice.</p>
<p>A Hanning window is a single cosine wave. It makes your peaks slightly broader but reduces side-lobe levels.</p>
<p>The Hamming window (cosine wave, slightly raised above 0.0) produces even broader peaks, but suppresses side-lobes to -42 dB. This is a good choice if you expect further weak (but important) components between your main peaks or, generally, if you have complicated signals like speech, music and so on.</p>
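<p>A sketch of applying a window before the FFT (Hann here; swap in <code>np.hamming</code> for the Hamming trade-off):</p>
<pre><code>import numpy as np

def windowed_log_power(signal):
    """Multiply by a Hann window before the FFT to reduce spectral leakage."""
    w = np.hanning(len(signal))
    sp = np.fft.rfft(signal * w)
    return np.log10(np.abs(sp) ** 2 + 1e-12)   # small floor avoids log(0)
</code></pre>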
<hr />
<p><strong>Edit: Scaling</strong></p>
<p>Correct scaling of a spectrum is a complicated thing, because the values of the FFT lines depend on many things like sampling rate, length of the FFT, window, and even implementation details of the FFT algorithm (several different accepted conventions exist).</p>
<p>After all, the FFT should show the underlying conservation of energy. The RMS of the input signal should be the same as the RMS (Energy) of the spectrum.</p>
<p>On the other hand: if used for classification it is enough to maintain relative amplitudes. As long as the parameters mentioned above do not change, the result can be used for classification without further scaling.</p>
| 122
|
implement classification
|
Implementation of Naive Bayes for text classification in C++
|
https://stackoverflow.com/questions/34515035/implementation-of-naive-bayes-for-text-classification-in-c
|
<p>I am writing code to implement a Naive Bayes classifier for text classification. I have worked through a very small example (<a href="https://web.stanford.edu/class/cs124/lec/naivebayes.pdf" rel="nofollow">please refer to page 44</a>), and it seems to be working.</p>
<ol>
<li>But I want to know whether the implementation is correct, and whether it will work for other training and testing sets. I am not trying to implement a commercial-level Naive Bayes, just a small assignment, to learn some C++. </li>
<li>I want to know how the code is: is the way I wrote it good C++ practice?</li>
<li>I know there are a lot of improvements that can be made; for example, at present I am testing only one test file, so a way to test multiple files is something that I am thinking of doing in the future. Also, at present I am doing only 2-class classification; in the future maybe multi-class classification. But are there any other code-wise improvements?</li>
</ol>
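<p>For cross-checking the C++ logic, the same add-one smoothed computation (the worked example from page 44 of the linked slides) fits in a few lines; here is a reference sketch in Python:</p>
<pre><code>import math
from collections import Counter

def train_nb(docs_by_class):
    """docs_by_class: {label: [token lists]}. Returns per-class log-priors
    and add-one smoothed log-likelihoods over the shared vocabulary."""
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    total = sum(len(docs) for docs in docs_by_class.values())
    model = {}
    for c, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        denom = sum(counts.values()) + len(vocab)   # add-one smoothing
        model[c] = (math.log(len(docs) / total),
                    {w: math.log((counts[w] + 1) / denom) for w in vocab})
    return model, vocab

def classify(model, vocab, doc):
    def score(c):
        log_prior, log_lik = model[c]
        return log_prior + sum(log_lik[w] for w in doc if w in vocab)
    return max(model, key=score)
</code></pre>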
<p>Here is the code, <strong>NB header</strong> file:</p>
<pre><code>#pragma once
#include<iostream>
#include<fstream>
#include<string>
#include<vector>
#include<map>
using namespace std;
class NB
{
public:
NB(NB& cl1, NB& cl2, string className);
NB(string className);
NB(string className, int classType);
vector <string> combineClassText();
void bagOfWords(string classCombine, bool isTotal = false);
void calcProb(NB& total);
float totalProb(NB& prob, NB& total);
int classType;
private:
int _len = 0;
float _prob = 1.0f;
int _voc = 0;
int _nOfClass = 0;
int _tnClass = 0;
int _totalWordsinC = 0;
int _wordCounter = 0;
bool _isDone = false;
ifstream _in;
ofstream _out;
//string _classCombine;
string _className;
string _fileName;
vector <string> _combined;
map<string, string> _category;
map<string, int> _bow;
map<string, float> _probCalc;
};
</code></pre>
<p>The <strong>NB.cpp</strong> file:</p>
<pre><code>#include "NB.h"
#include<cmath>
NB::NB(NB& cl1, NB& cl2, string className)
{
_className = className;
_out.open("combineAll.txt");
if (_out.fail()) {
perror("cannot write to combineAll.txt");
}
_len = cl1.combineClassText().size();
for (int i = 0; i < _len; i++) {
_combined.push_back(cl1.combineClassText()[i]);
}
_len = cl2.combineClassText().size();
for (int i = 0; i < _len; i++) {
_combined.push_back(cl2.combineClassText()[i]);
}
_len = _combined.size();
for (int i = 0; i < _len; i++) {
_out << _combined[i] << endl;
//cout << i + 1 << ". " << _combined[i] << endl;
}
_out.close();
_tnClass = cl1._tnClass + cl2._tnClass;
bagOfWords("combineAll.txt", true);
}
NB::NB(string className, int classType) {
NB::classType = classType;
_className = className;
cout << "Enter a filename for " + _className << endl;
cin >> _fileName;
_category[_fileName] = _className;
combineClassText();
bagOfWords(_className + ".txt");
}
NB::NB(string className)
{
_className = className;
while (_isDone == false) {
cout << "Enter a filename for " + _className << endl;
cin >> _fileName;
if (_fileName != "q") {
_category[_fileName] = _className;
_nOfClass++;
_tnClass++;
} else {
_isDone = true;
}
}
combineClassText();
bagOfWords(_className + ".txt");
}
vector<string> NB::combineClassText() {
string temp;
string classCombine = _className + ".txt";
vector <string> tmp;
map<string, string>::iterator it;
_out.open(classCombine);
if (_out.fail()) {
perror("cannot write to");
}
for (it = _category.begin(); it != _category.end(); it++) {
_in.open(it->first);
if (_in.fail()) {
perror("cannot read from");
}
while (_in >> temp) {
_out << temp << endl;
tmp.push_back(temp);
}
_in.close();
}
_out.close();
return tmp;
}
void NB::bagOfWords(string classCombine, bool isTotal) {
map<string, int>::iterator it;
string temp;
vector<string> tp;
string name = _className + "_bow.txt";
int len;
_in.open(classCombine);
if (_in.fail()) {
perror("cannot read from");
}
_out.open(name);
if (_out.fail()) {
perror("cannot write to");
}
while (_in >> temp) {
tp.push_back(temp);
}
for (int i = 0; i < tp.size(); i++) {
for (int j = 0; j < tp[i].size(); j++) {
if (tp[i][j] == '.' || tp[i][j] == ',') {
tp[i][j] = ' ';
}
}
}
len = tp.size();
vector<int> count(len, 1);
for (int i = 0; i < len; i++) {
for (int j = 0; j < (len - i - 1); j++) {
if (tp[i] == tp[j + i + 1]) {
count[i]++;
}
}
}
for (int i = len - 1; i >= 0; i--) {
_bow[tp[i]] = count[i];
}
for (it = _bow.begin(); it != _bow.end(); it++) {
_out << it->first << ": " << it->second << endl;
//cout << it->first << ": " << it->second << endl;
}
//cout << endl;
if (isTotal == true) {
for (it = _bow.begin(); it != _bow.end(); it++) {
_voc += 1;
//cout << _voc << endl;
}
} else {
for (it = _bow.begin(); it != _bow.end(); it++) {
_totalWordsinC += it->second;
}
//cout << _totalWordsinC << endl;
}
_in.close();
_out.close();
}
void NB::calcProb(NB& total) {
map<string, int> ::iterator it;
map<string, int> ::iterator it2;
map<string, float> ::iterator it3;
_out.open(_className + "_prob.txt");
if (_out.fail()) {
perror("cannot write to");
}
for (it = total._bow.begin(); it != total._bow.end(); it++) {
for (it2 = _bow.begin(); it2 != _bow.end(); it2++) {
if (it->first == it2->first) {
_probCalc[it->first] = (float)((it2->second) + 1) / (_totalWordsinC + total._voc);
break;
} else {
_probCalc[it->first] = (float)(1) / (_totalWordsinC + total._voc);
}
}
}
for (it3 = _probCalc.begin(); it3 != _probCalc.end(); it3++) {
//cout << it3->first << ": " << it3->second << endl;
_out << it3->first << ": " << it3->second << endl;
}
_out.close();
}
float NB::totalProb(NB& prob, NB& total) {
map<string, int> ::iterator it;
map<string, int> ::iterator it2;
map<string, float> ::iterator it3;
_out.open(_className + "_" + prob._className + "_prob.txt");
if (_out.fail()) {
perror("cannot write to");
}
_prob = 1.0f;
for (it = _bow.begin(); it != _bow.end(); it++) {
for (it3 = prob._probCalc.begin(); it3 != prob._probCalc.end(); it3++) {
if (it->first == it3->first) {
_wordCounter = 0;
_prob = (_prob * pow((it3->second), (it->second)));
break;
} else {
_wordCounter++;
if (_wordCounter == prob._probCalc.size()) {
_prob = _prob * ((float)1 / (prob._totalWordsinC + total._voc));
}
}
}
}
_prob = (_prob * ((float)(prob._nOfClass) / total._tnClass));
cout << _prob << endl;
    _out << "The probability of the " << _className << " belonging to " << prob._className << " is: " << _prob << endl;
_out.close();
return _prob;
}
</code></pre>
<p>and finally <strong>main.cpp</strong>:</p>
<pre><code>#include<iostream>
#include<vector>
#include"NB.h"
using namespace std;
int main() {
NB class1("class1");
NB class2("class2");
NB total(class1, class2, "all_combined");
class1.calcProb(total);
class2.calcProb(total);
int nOfTestDocs = 0;
int corrClass = 0;
float accurancy = 0.0f;
cout << "Enter the number of test documents\n";
cin >> nOfTestDocs;
NB test("test", 1);
if (test.totalProb(class1, total) >= test.totalProb(class2, total)) {
cout << "The test data belongs to class 1\n";
if (test.classType == 1) {
corrClass++;
accurancy = (float)corrClass / nOfTestDocs;
cout << "The accurancy is: " << accurancy << endl;
}
}
else {
cout << "The test data belongs to class 2\n";
if (test.classType == 1) {
corrClass++;
accurancy = (float)corrClass / nOfTestDocs;
cout << "The accurancy is: " << accurancy << endl;
}
}
system("PAUSE");
return 0;
}
</code></pre>
| 123
|
|
implement classification
|
flask image classification ML model API implementation
|
https://stackoverflow.com/questions/61115346/flask-image-classification-ml-model-api-implementation
|
<p>I am trying to implement an API for a pre-trained ResNet (machine learning) model, so the server accepts a single valid image file in the request to be analyzed and returns the output from running the image against the model.</p>
<p>I'm wondering what the general structure of my API should look like. So far I have:</p>
<pre><code>app/
    api/
        __init__.py    (for blueprint)
        errors.py      (for exceptions)
    main/
        resnet18.py    (for actual model and pic classification)
</code></pre>
<p>In my resnet18.py:</p>
<pre><code>import torchvision.models as models
import torch
from torchvision import transforms
resnet18 = models.resnet18(pretrained=True)
transform = transforms.Compose([
transforms.Resize(256), # resize the image to 256*256
transforms.CenterCrop(224), # crop the image to 224*224 pixels about the center
transforms.ToTensor(), # convert the image to PyTorch Tensor data type
transforms.Normalize(mean=[0.485, 0.456, 0.406], # Normalize the image by setting its mean and standard deviation to the specified values
std=[0.229, 0.224, 0.225])
])
from PIL import Image
img = Image.open('dog.jpg')
img_t = transform(img)
batch_t = torch.unsqueeze(img_t, 0)
out = resnet18(batch_t)
with open('imagenet_classes.txt') as f:
classes = [line.strip() for line in f.readlines()]
_, index = torch.max(out, 1)
percentage = torch.nn.functional.softmax(out, dim=1)[0] * 100
# print(classes[index[0]], percentage[index[0]].item())
most_likely = classes[index[0]]
confidence = percentage[index[0]].item()
# _, indices = torch.sort(out, descending=True)
# [(classes[idx], percentage[idx].item()) for idx in indices[0][:5]]
def to_json():
json_prediction = {
'most_likely': most_likely,
'confidence': confidence
}
return json_prediction
</code></pre>
<p>So I want to call this file once I upload an image. I'm wondering how I can make it more elegant; I've never done anything like this.</p>
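<p>One common pattern for this — sketched below with hypothetical names (<code>create_app</code>, a <code>/classify</code> route, an <code>image</code> form field), not anything from your actual project — is an app factory that takes any predictor function, so the ResNet code can be wired in later and the route can be exercised with a stub:</p>

```python
from flask import Flask, request, jsonify

def create_app(predict_fn):
    """predict_fn: raw image bytes -> dict shaped like to_json() above,
    e.g. {'most_likely': ..., 'confidence': ...}."""
    app = Flask(__name__)

    @app.route('/classify', methods=['POST'])
    def classify():
        f = request.files.get('image')
        if f is None:
            return jsonify({'error': 'missing image file'}), 400
        return jsonify(predict_fn(f.read()))

    return app

# Stub predictor so the route works without loading ResNet
app = create_app(lambda data: {'most_likely': 'dog', 'confidence': 99.0})
```

<p>With this shape, resnet18.py only needs to expose one function from image bytes to the prediction dict, and the Flask layer stays independent of the model.</p>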
| 124
|
|
implement classification
|
How to implement Multi label classification with train, validation and test images with Densenet/ResNet
|
https://stackoverflow.com/questions/60784634/how-to-implement-multi-label-classification-with-train-validation-and-test-imag
|
<p>I am working on <strong>Multi-Label Image classification</strong> using a dataset of 14,720 training images. I need to implement it using <em>ResNet/DenseNet</em>. I am trying to work on it. Could you please suggest a reference to help me move forward?</p>
|
<p>For creating a multi-label classification problem, you have to bear in mind two different crucial aspects:</p>
<ol>
<li>The activation function to be used is <code>sigmoid</code>, not <code>softmax</code> (like in the multi-class classification problem).</li>
<li>A correct label takes the form <code>[1,0,1,0,0]</code>; since this is multi-label, the classes are not mutually exclusive. (That lack of mutual exclusivity is, in fact, the deeper mathematical reason for choosing sigmoid over softmax: each label is modelled as its own Bernoulli variable.)</li>
</ol>
<p>You can have a look here on how to create a multi-label classification problem in Keras:</p>
<p><a href="https://www.pyimagesearch.com/2018/05/07/multi-label-classification-with-keras/" rel="nofollow noreferrer">https://www.pyimagesearch.com/2018/05/07/multi-label-classification-with-keras/</a></p>
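<p>A tiny NumPy sketch of both points (the logits below are made-up stand-ins for the final Dense layer's raw outputs):</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical raw outputs (logits) of the final Dense layer for 5 labels
logits = np.array([2.0, -1.5, 0.7, -3.0, -0.2])

probs = sigmoid(logits)            # each label scored independently, no softmax
pred = (probs >= 0.5).astype(int)  # threshold each label on its own
# pred is [1, 0, 1, 0, 0] -- several labels can be 1 at once (no mutual exclusion),
# and the probabilities do not sum to 1 the way a softmax output would
```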
| 125
|
implement classification
|
Implementing Image classification using SVM
|
https://stackoverflow.com/questions/76801704/implementing-image-classification-using-svm
|
<p>I am trying to understand SVM and want to do image classification using SVM. I saw some sample codes which use sklearn <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html</a>, but even after trying a lot I really do not understand how this classification actually occurs. I tried reading the book <strong>Learning with Kernels</strong> by Bernhard Scholkopf and Alexander J. Smola, but I could not make progress. I feel that if I can get a link which explains the concept with some programming approach, or a book, I can understand this concept and make progress.</p>
<p>Also, I do not understand the concept of the kernel matrix.</p>
<pre><code>class sklearn.svm.SVC(*, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)
</code></pre>
<p>This site has always provided me great help. Thank you in advance.</p>
|
<p>Here I will try and provide some intuition for SVCs. The code for the figures is at the end.</p>
<hr />
<p><strong>Linear SVCs</strong>
Suppose you have a batch of 2-pixel "images", and each image is labelled either A or B. Here's a sample of 5 images:</p>
<p><a href="https://i.sstatic.net/Cwu8d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cwu8d.png" alt="samples" /></a></p>
<p>Each pixel is considered as a feature. In this example, the two pixels define a 2D feature space that we can visualise along an x and y axis. When you have an 8x8 image, for example, you have a 64-dimensional feature space.</p>
<p>For each image (there are 500), you can consider the pixel values (intensities) as features. Plotting the samples in feature space (pixel 1 - pixel 2 space), and colouring them by their label, yields this representation:</p>
<p><a href="https://i.sstatic.net/RuHVY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RuHVY.png" alt="scatter of data" /></a></p>
<p>Linear SVCs work by drawing a line in feature space that separates the two classes. The best line is the one with the widest margin. The entire area above the decision line is considered to belong to class A, and the entire area below the decision line is considered to be class B. When you have a new sample, the SVC assesses where it falls in relation to the decision line: above the line means it will predict 'A', and below the line means it'll predict 'B'.</p>
<p><a href="https://i.sstatic.net/BpuxN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BpuxN.png" alt="svc" /></a></p>
<p>The margin is defined by where it first hits a sample. The samples that limit or define the margins are called support vectors:</p>
<p><a href="https://i.sstatic.net/FBU5U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FBU5U.png" alt="support vectors" /></a></p>
<p>In this case the best margin is defined by 3 samples. In practice, you want your margin (and therefore the SVC's decision line or solution) to be defined by more samples, i.e. allow more margin violations. This is determined by the regularisation parameter <code>C</code>. Smaller values of <code>C</code> average more samples when defining the margin (more regularised, potentially underfitting). Larger values of <code>C</code> only let a few samples become support vectors, so the margin might end up depending on just a few noisy points (less regularised, potentially overfitting). The optimal <code>C</code> for your dataset is a matter of hyperparameter tuning.</p>
<hr />
<p><strong>Non-linear SVCs and the kernel trick</strong>
The kernel trick is used where you can't simply linearly separate the classes using the features you have, such as in this example:</p>
<p><a href="https://i.sstatic.net/VSdjK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VSdjK.png" alt="not linearly separable" /></a></p>
<p>In such cases, you can try creating more complex features, such as these quadratic ones: pixel1 x pixel2, (pixel1)^2, (pixel2)^2 (similar to <code>kernel='poly', degree=2</code>). In this richer feature space, you have a better chance of finding a linear separation of the classes. The kernel trick is an efficient way of converging on a linear separation in the higher dimensional space - it saves you explicitly computing the new features, while still allowing you to navigate the space for a solution. When you bring that linear solution back into the original pixel1-pixel2 space, it'll be nonlinear because it was only linear in the higher dimensional space where it was found. This results in complex decision boundaries:</p>
<p><a href="https://i.sstatic.net/BzZQW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BzZQW.png" alt="kernel trick pred" /></a></p>
<p>I tried to replicate this result manually - i.e. without using the kernel trick. I combined the features to create the product features described above (pixel 1 times pixel 2, etc), and then ran it through a <em>linear</em> SVC such that it would find a linear separation of my quadratic feature set. I get a similar result (green) to the above (red):</p>
<p><a href="https://i.sstatic.net/qeD62.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qeD62.png" alt="manual method" /></a></p>
<p>SVCs are binary classifiers, like the one shown above. When you apply SVCs to more than 2 classes, <code>sklearn</code> creates several binary SVCs (one for each pair of classes), and the final results is determined using a one-vs-rest strategy.</p>
<hr />
<p><strong>Code used to generate the plots</strong></p>
<pre><code>import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
#
#Synthesise 2-pixel images. Each image belongs to A or B.
#
np.random.seed(0)  # any fixed seed, for reproducibility
synth = np.random.randn(4, 250)
X_df = pd.DataFrame({'pixel1': np.c_[ synth[[0]] + 2, synth[[1]] - 2 ].ravel(),
'pixel2': np.c_[ synth[[2]] + 2, synth[[3]] - 5 ].ravel(),
'label': ['A'] * 250 + ['B'] * 250})
X_df.index.name = 'sample'
X, y = X_df.drop(columns='label').values, X_df.label.values
#
# View some samples
#
f, axs = plt.subplots(1, 5)
for ax in axs.flatten():
i = np.random.randint(0, 500)
ax.imshow(X_df.loc[i, ['pixel1', 'pixel2']].infer_objects().values[None, :],
cmap='binary', vmin=-5, vmax=5)
#ax.axis('off')
ax.set_title(X_df.label[i])
ax.set_xticks([])
ax.set_yticks([])
#
# View samples in feature space (pixel1-pixel2 space)
#
f, ax = plt.subplots(figsize=(6, 3))
for label in X_df.label.unique():
X_df.loc[X_df.label == label].plot(
kind='scatter', x='pixel1', y='pixel2', c='red' if label=='A' else 'blue', label=label, ax=ax
)
ax_lims = ax.axis()
#
# Fit linear SVC
#
from sklearn.svm import SVC
svc = SVC(kernel='linear').fit(X, y)
#Plot decision boundary
(w0, w1), b = svc.coef_.ravel().tolist(), svc.intercept_
x0 = np.array([-10, 10])
x1 = (-w0*x0 - b) / w1
ax.plot(x0, x1, 'k', label='decision boundary', linewidth=3)
ax.plot(x0, x1 + 1/w1, 'k:', label='margin', linewidth=1)
ax.plot(x0, x1 - 1/w1, 'k:', linewidth=1)
ax.axis(ax_lims)
ax.legend()
#Highlight support vectors
ax.scatter(
svc.support_vectors_[:, 0], svc.support_vectors_[:, 1],
marker='s', edgecolor='k',facecolor='none', s=70, label='support vectors'
)
ax.legend()
</code></pre>
<p>Code example for non-linear SVMs/kernel trick:</p>
<pre><code>#
# Kernel trick demo
#
np.random.seed(1)
#Synthesise a dataset that's not linearly separable
synth = np.random.randn(4, 250)
X_df = pd.DataFrame({'pixel1': np.c_[ synth[[0]] + 2, synth[[1], :125] - 2, synth[[1], 125:] + 8].ravel(),
                     'pixel2': np.c_[ synth[[2]] + 2, synth[[3], :125] - 5, synth[[3], 125:] + 8 ].ravel(),
'label': ['A'] * 250 + ['B'] * 250})
X_df.index.name = 'sample'
X, y = X_df.drop(columns='label').values, X_df.label.values
#Plot the data
f, ax = plt.subplots(figsize=(6, 3))
for label in X_df.label.unique():
X_df.loc[X_df.label == label].plot(
kind='scatter', x='pixel1', y='pixel2', c='red' if label=='A' else 'blue', label=label, ax=ax
)
ax_lims = ax.axis()
#
#SVC(kernel='poly') uses the kernel trick
#
svc = SVC(kernel='poly', degree=2, C=1e6).fit(X, y)
xx, yy = np.meshgrid(np.linspace(-10, 10),
np.linspace(-10, 10))
X = np.c_[xx.ravel(), yy.ravel()]
preds = svc.predict(X).reshape(xx.shape)
preds[preds == 'A'] = 0
preds[preds == 'B'] = 1
preds = np.float32(preds)
decision = svc.decision_function(X).reshape(xx.shape)
ax.contour(xx, yy, preds, cmap='Reds')
#
# Manual equivalent of the SVC above
# but without using the kernel trick
#
#Create a richer feature space from the existing features
#We'll opt for a quadratic polynomial feature space, like the SVC above
X2_df = X_df.copy()
X2_df['pixel1_squared'] = X2_df.pixel1 ** 2
X2_df['pixel2_squared'] = X2_df.pixel2 ** 2
X2_df['pixel1_pixel2'] = X2_df.pixel1 * X2_df.pixel2
X2 = X2_df.drop(columns='label').values
#Find a linear separation in this space
svc2 = SVC(kernel='linear', C=1e6).fit(X2, y)
#Plot the decision boundary
preds2 = svc2.predict( np.c_[X, X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]] ).reshape(xx.shape)
preds2[preds2 == 'A'] = 0
preds2[preds2 == 'B'] = 1
preds2 = np.float32(preds2)
decision2 = svc2.decision_function( np.c_[X, X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]] ).reshape(xx.shape)
ax.contour(xx, yy, preds2, cmap='Greens')
</code></pre>
| 126
|
implement classification
|
Log likelihood to implement Naive Bayes for Text Classification
|
https://stackoverflow.com/questions/5451004/log-likelihood-to-implement-naive-bayes-for-text-classification
|
<p>I am implementing Naive Bayes algorithm for text classification. I have ~1000 documents for training and 400 documents for testing. I think I've implemented training part correctly, but I am confused in testing part. Here is what I've done briefly:</p>
<p>In my training function:</p>
<pre><code>vocabularySize= GetUniqueTermsInCollection();//get all unique terms in the entire collection
spamModelArray[vocabularySize];
nonspamModelArray[vocabularySize];
for each training_file{
class = GetClassLabel(); // 0 for spam or 1 = non-spam
document = GetDocumentID();
counterTotalTrainingDocs ++;
if(class == 0){
counterTotalSpamTrainingDocs++;
}
for each term in document{
freq = GetTermFrequency; // how many times this term appears in this document?
id = GetTermID; // unique id of the term
if(class = 0){ //SPAM
spamModelArray[id]+= freq;
totalNumberofSpamWords++; // total number of terms marked as spam in the training docs
}else{ // NON-SPAM
nonspamModelArray[id]+= freq;
totalNumberofNonSpamWords++; // total number of terms marked as non-spam in the training docs
}
}//for
for i in vocabularySize{
spamModelArray[i] = spamModelArray[i]/totalNumberofSpamWords;
nonspamModelArray[i] = nonspamModelArray[i]/totalNumberofNonSpamWords;
}//for
priorProb = counterTotalSpamTrainingDocs/counterTotalTrainingDocs;// calculate prior probability of the spam documents
}
</code></pre>
<p>I think I understood and implemented the training part correctly, but I am not sure I implemented the testing part properly. Here, I go through each test document and calculate logP(spam|d) and logP(non-spam|d) for it. I then compare these two quantities to determine the class (spam/non-spam).</p>
<p>In my testing function:</p>
<pre><code>vocabularySize= GetUniqueTermsInCollection;//get all unique terms in the entire collection
for each testing_file:
document = getDocumentID;
logProbabilityofSpam = 0;
logProbabilityofNonSpam = 0;
for each term in document{
freq = GetTermFrequency; // how many times this term appears in this document?
id = GetTermID; // unique id of the term
// logP(w1w2.. wn) = C(wj)∗logP(wj)
logProbabilityofSpam+= freq*log(spamModelArray[id]);
logProbabilityofNonSpam+= freq*log(nonspamModelArray[id]);
}//for
// Now I am calculating the probability of being spam for this document
if (logProbabilityofNonSpam + log(1-priorProb) > logProbabilityofSpam +log(priorProb)) { // argmax[logP(i|ck) + logP(ck)]
newclass = 1; //not spam
}else{
newclass = 0; // spam
}
}//for
</code></pre>
<p>My problem is; I want to return the probability of each class instead of exact 1's and 0's (spam/non-spam). I want to see e.g. newclass = 0.8684212 so I can apply threshold later on. But I am confused here. How can I calculate the probability for each document? Can I use logProbabilities to calculate it?</p>
|
<p>The probability of the data described by a set of features {<em>F1</em>, <em>F2</em>, ..., <em>Fn</em>} belonging in class <em>C</em>, according to the naïve Bayes probability model, is</p>
<pre><code>P(C|F) = P(C) * (P(F1|C) * P(F2|C) * ... * P(Fn|C)) / P(F1, ..., Fn)
</code></pre>
<p>You have all the terms (in logarithmic form), except for the 1 / <em>P</em>( <em>F1</em>, ..., <em>Fn</em>) term since that's not used in the naïve Bayes classifier that you're implementing. (Strictly, the <a href="http://en.wikipedia.org/wiki/Maximum_a_posteriori" rel="nofollow">MAP</a> classifier.)</p>
<p>You'd have to collect frequencies of the features as well, and from them calculate</p>
<pre><code>P(F1, ..., Fn) = P(F1) * ... * P(Fn)
</code></pre>
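<p>A minimal sketch of turning the two accumulated log scores into the probability the question asks for: normalizing across the two classes makes the shared denominator cancel, and subtracting the maximum first (the log-sum-exp trick) avoids underflow on long documents. The function and variable names here are illustrative:</p>

```python
import math

def spam_posterior(log_spam, log_nonspam):
    """P(spam | d) from the two unnormalized log scores
    logP(d|spam) + logP(spam) and logP(d|nonspam) + logP(nonspam).
    Dividing one exponentiated score by their sum cancels the shared
    P(F1..Fn) denominator; subtracting the max keeps exp() from
    underflowing when the scores are large negative numbers."""
    m = max(log_spam, log_nonspam)
    z = math.exp(log_spam - m) + math.exp(log_nonspam - m)
    return math.exp(log_spam - m) / z
```

<p>Equal scores give 0.5, and a log advantage of ln 4 for spam gives 0.8 — a value you can then threshold however you like.</p>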
| 127
|
implement classification
|
Svm Implementation in java - 1/0 classification
|
https://stackoverflow.com/questions/19899410/svm-implementation-in-java-1-0-classification
|
<p>I have done SVM using 1/-1 classification.</p>
<p>i.e. by checking sign of decision function <code>wTrans * X - gamma</code></p>
<pre><code>if(decfun<0)
setclasslabel to -1
else
setclasslabel to 1
</code></pre>
<p>What about in 1/0 classification. </p>
<p>Is that the same thing?</p>
|
<p>Yes, it's the same thing.
In the SVM classification you describe, you classify data into two classes and label them 1 or -1. The main concept is just to separate the data into two classes and name them, so instead of -1 you can label that class 0.</p>
<pre><code> if(decfun<0)
setclasslabel to 0
else
setclasslabel to 1
</code></pre>
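<p>A small numeric sketch of the relabelling (the decision values below are made up) — both conventions produce the same partition of the data, only the name of the negative side changes:</p>

```python
import numpy as np

decision = np.array([-2.3, 0.7, 1.5, -0.1])   # hypothetical w^T x - gamma values

labels_pm = np.where(decision < 0, -1, 1)     # the 1/-1 convention
labels_01 = np.where(decision < 0, 0, 1)      # the 1/0 convention

# The two labelings are an affine remapping of each other
assert ((labels_pm + 1) // 2 == labels_01).all()
```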
| 128
|
implement classification
|
Trying to implement predictRaw() for classification model in Apache Spark
|
https://stackoverflow.com/questions/45957112/trying-to-implement-predictraw-for-classification-model-in-apache-spark
|
<p>The developer API example (<a href="https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/ml/DeveloperApiExample.scala" rel="nofollow noreferrer">https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/ml/DeveloperApiExample.scala</a>) gives a simple implementation example for the function predictRaw() in a classification model. This is a function within abstract class ClassificationModel that must be implemented in the concrete class. According to the developer API example, you can calculate it as follows:</p>
<pre><code>override def predictRaw(features: Features.Type): Vector = {
val margin = BLAS.dot(features, coefficients)
Vectors.dense(-margin, margin) // Binary classification so we return a length-2 vector, where index i corresponds to class i (i = 0, 1).
}
</code></pre>
<p>My understanding of <code>BLAS.dot(features, coefficients)</code> is that this is simply the dot product of the features vector (of length numFeatures) with the coefficients vector (of length numFeatures), so effectively each feature column is multiplied by a coefficient and then summed to get <code>val margin</code>. However Spark no longer provides access to the BLAS library, as it's private in MLlib; instead, matrix multiplication is provided in the Matrix trait, where there are various factory methods for multiplication.</p>
<p>My understanding of how to implement <code>predictRaw()</code> using the matrix factory methods is as follows:</p>
<pre><code>override def predictRaw(features: Vector): Vector = {
//coefficients is a Vector of length numFeatures: val coefficients = Vectors.zeros(numFeatures)
val coefficientsArray = coefficients.toArray
val coefficientsMatrix: SparkDenseMatrix = new SparkDenseMatrix(numFeatures, 1, coefficientsArray)
val margin: Array[Double] = coefficientsMatrix.multiply(features).toArray // contains a single element
val rawPredictions: Array[Double] = Array(-margin(0),margin(0))
new SparkDenseVector(rawPredictions)
}
</code></pre>
<p>This will require the overhead of converting the data structures to Arrays. Is there a better way? It seems strange that BLAS is now private. NB. Code not tested! At the moment <code>val coefficients: Vector</code> is just a vector of zeros, but once I have implemented the learning algorithm this would contain the results.</p>
|
<p>I think I have worked this out. The Spark DeveloperAPI example is very confusing because predictRaw() computes a confidence interval for a logistic-regression type example. However what predictRaw() is actually supposed to do, when implementing ClassificationModel, is predict a vector of output labels for every ith sample of the input dataset. Technically speaking the matrix multiplication above is correct without the use of BLAS - but in fact predictRaw() doesn't have to be calculated this way. </p>
<p>From the underlying source code:
<a href="https://github.com/apache/spark/blob/v2.2.0/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala" rel="nofollow noreferrer">https://github.com/apache/spark/blob/v2.2.0/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala</a></p>
<p><code>* @return vector where element i is the raw prediction for label i.
* This raw prediction may be any real number, where a larger value indicates greater
* confidence for that label.</code></p>
<p>The function raw2predict then computes the actual label from the raw prediction but doesn't need to be implemented as this is done by the API.</p>
| 129
|
implement classification
|
Implement metrics using XLMRoBERTa model for text classification
|
https://stackoverflow.com/questions/72515966/implement-metrics-using-xlmroberta-model-for-text-classification
|
<p>I have created script for binary (0 and 1) text classification using XLM-ROBERTa model. I would like to put metrics (as Binary Cross-Entropy) but also early stopping with patience of 15.</p>
<p>But I have a problem. I tried to use <code>model.compile</code> and <code>model.fit</code>, but XLMRobertaForSequenceClassification doesn't have these methods. I wouldn't like to use Argumentation. Is it possible to find some solution?</p>
<p>I already use AdamW. Finally, is it possible to get, for each epoch, metrics such as recall, F1, and accuracy? At the moment I only get the data of the last epoch.</p>
<p>Below I put the script during training:</p>
<pre><code>from transformers import XLMRobertaForSequenceClassification, AdamW, BertConfig
# Load XLMRobertaForSequenceClassification, the pretrained XLM-RoBERTa model
# with a single linear classification layer on top.
model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base", # the 12-layer XLM-RoBERTa base model
num_labels = 2, # The number of output labels--2 for binary classification.
# You can increase this for multi-class tasks.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
# Tell pytorch to run this model on the GPU.
#model.cuda()
model.to(device)
</code></pre>
<p>Here start the training!</p>
<pre><code>import random
import numpy as np
import gc
seed_val = 45
epochs = 15
# Set the seed value all over the place to make this reproducible.
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# Store the average loss after each epoch so we can plot them.
loss_values = []
training_stats = []
# Measure how long the training epoch takes.
total_t0 = time.time()
# For each epoch...
for epoch in range(0, epochs):
print("")
print('======== Epoch {:} / {:} ========'.format(epoch + 1, epochs))
stacked_val_labels = []
targets_list = []
# ========================================
# Training
# ========================================
print('Training...')
# put the model into train mode
model.train()
# This turns gradient calculations on and off.
torch.set_grad_enabled(True)
# Measure how long the training epoch takes.
t0 = time.time()
# Reset the total loss for this epoch.
total_train_loss = 0
for i, batch in enumerate(train_dataloader):
train_status = 'Batch ' + str(i) + ' of ' + str(len(train_dataloader))
print(train_status, end='\r')
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
model.zero_grad()
outputs = model(b_input_ids,
attention_mask=b_input_mask,
labels=b_labels)
# Get the loss from the outputs tuple: (loss, logits)
loss = outputs[0]
# Convert the loss from a torch tensor to a number.
# Calculate the total loss.
total_train_loss = total_train_loss + loss.item()
# Zero the gradients
optimizer.zero_grad()
# Perform a backward pass to calculate the gradients.
loss.backward()
# Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Use the optimizer to update the weights.
# Optimizer for GPU
optimizer.step()
# Optimizer for TPU
# https://pytorch.org/xla/
#xm.optimizer_step(optimizer, barrier=True)
# Measure how long this epoch took.
training_time = format_time(time.time() - t0)
print("")
print('Train loss:' ,total_train_loss)
    print("  Training epoch took: {:}".format(training_time))
# ========================================
# Validation
# ========================================
print('\nValidation...')
# Measure how long the training epoch takes.
t0 = time.time()
# Put the model in evaluation mode.
model.eval()
# Turn off the gradient calculations.
# This tells the model not to compute or store gradients.
# This step saves memory and speeds up validation.
torch.set_grad_enabled(False)
# Reset the total loss for this epoch.
total_val_loss = 0
for j, batch in enumerate(val_dataloader):
val_status = 'Batch ' + str(j) + ' of ' + str(len(val_dataloader))
print(val_status, end='\r')
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
outputs = model(b_input_ids,
attention_mask=b_input_mask,
labels=b_labels)
# Get the loss from the outputs tuple: (loss, logits)
loss = outputs[0]
# Convert the loss from a torch tensor to a number.
# Calculate the total loss.
total_val_loss = total_val_loss + loss.item()
# Get the preds
preds = outputs[1]
# Move preds to the CPU
val_preds = preds.detach().cpu().numpy()
# Move the labels to the cpu
targets_np = b_labels.to('cpu').numpy()
# Append the labels to a numpy list
targets_list.extend(targets_np)
if j == 0: # first batch
stacked_val_preds = val_preds
else:
stacked_val_preds = np.vstack((stacked_val_preds, val_preds))
# Calculate the validation accuracy
y_true = targets_list
y_pred = np.argmax(stacked_val_preds, axis=1)
val_acc = accuracy_score(y_true, y_pred)
# Measure how long the validation run took.
validation_time = format_time(time.time() - t0)
print('Val loss:' ,total_val_loss)
print('Val acc: ', val_acc)
print(" Validation took: {:}".format(validation_time))
# Record all statistics from this epoch.
#training_stats = []
training_stats.append(
{
'epoch': epoch + 1,
'Training Loss': total_train_loss,
'Valid. Loss': total_val_loss,
'Valid. Accur.': val_acc,
'Training Time': training_time,
'Validation Time': validation_time
}
)
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
# Save the Model
torch.save(model.state_dict(), '/content/drive/MyDrive/model/model.pt')
# Use the garbage collector to save memory.
gc.collect()
</code></pre>
|
<p><code>XLMRobertaForSequenceClassification</code> and other classes of the "ForSequenceClassification" family assume classification into multiple classes and use categorical cross-entropy as the loss function. The class is just a lightweight wrapper of the <code>XLMRoberta</code> class.</p>
<p>If you want to use specifically binary cross-entropy, you can either make your own wrapper with a single class output and binary cross-entropy, or you can do the loss computation in the training loop in your code snippet. I.e., instead of using <code>outputs[0]</code>, use the logits <code>outputs[1]</code> as an input to the loss function.</p>
<p>Regarding other metrics, you have the logits in the <code>outputs</code> variable. It should be enough to compute whatever metric you find useful for your task.</p>
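<p>For the per-epoch metrics, the validation loop above already collects everything needed (<code>targets_list</code> and <code>stacked_val_preds</code>); a sketch with stand-in arrays (the values are made up) that you would run at the end of each epoch and append to <code>training_stats</code>:</p>

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Stand-ins for what the validation loop collects each epoch:
y_true = np.array([0, 1, 1, 0, 1])                  # targets_list
stacked_val_preds = np.array([[ 2.1, -1.0],         # raw logits, one row per sample
                              [ 0.2,  1.5],
                              [-0.5,  0.9],
                              [ 1.7,  0.3],
                              [ 1.1,  1.4]])

y_pred = stacked_val_preds.argmax(axis=1)
acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='binary')
# Append acc / prec / rec / f1 to training_stats inside the epoch loop,
# so every epoch's metrics are kept rather than only the last one.
```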
| 130
|
implement classification
|
Python Implementation of Logistic Regression as Regression (Not Classification!)
|
https://stackoverflow.com/questions/65268985/python-implementation-of-logistic-regression-as-regression-not-classification
|
<p>I have a regression problem on which I want to use logistic regression - not logistic classification - because my target variables <code>y</code> are continuous quantities between 0 and 1. However, the common implementations of logistic regression in Python seem to be exclusively logistic classification. I've also looked at GLM implementations, and none seem to have implemented a sigmoid link function. Can someone point me in the direction of a Python implementation of logistic regression as a regression algorithm?</p>
|
<p>In statsmodels both GLM with family Binomial and discrete model Logit allow for a continuous target variable as long as the values are restricted to interval [0, 1].</p>
<p>Similarly, Poisson is very useful to model non-negative valued continuous data.</p>
<p>In these cases, the model is estimated by quasi maximum likelihood, QMLE, and not by MLE, because the distributional assumptions are not correct. Nevertheless, we can correctly (consistently) estimate the mean function. Inference needs to be based on misspecification robust standard errors which are available as <code>fit</code> option <code>cov_type="HC0"</code></p>
<p>Here is a notebook with example
<a href="https://www.statsmodels.org/dev/examples/notebooks/generated/quasibinomial.html" rel="nofollow noreferrer">https://www.statsmodels.org/dev/examples/notebooks/generated/quasibinomial.html</a></p>
<p>some issues with background for QMLE and fractional Logit
<a href="https://www.github.com/statsmodels/statsmodels/issues/2040" rel="nofollow noreferrer">https://www.github.com/statsmodels/statsmodels/issues/2040</a> QMLE
<a href="https://github.com/statsmodels/statsmodels/issues/2712" rel="nofollow noreferrer">https://github.com/statsmodels/statsmodels/issues/2712</a></p>
<p>Reference</p>
<p>Papke, L.E. and Wooldridge, J.M. (1996), Econometric methods for fractional response variables with an application to 401(k) plan participation rates. J. Appl. Econ., 11: 619-632. <a href="https://doi.org/10.1002/(SICI)1099-1255(199611)11:6%3C619::AID-JAE418%3E3.0.CO;2-1" rel="nofollow noreferrer">https://doi.org/10.1002/(SICI)1099-1255(199611)11:6&lt;619::AID-JAE418&gt;3.0.CO;2-1</a></p>
<p><strong>update and Warning</strong></p>
<p>as of statsmodels 0.12</p>
<p>Investigating this some more, I found that discrete Probit does not support continuous interval data. It uses a computational shortcut that assumes that the values of the dependent variable are either 0 or 1. However, it does not raise an exception in this case.
<a href="https://github.com/statsmodels/statsmodels/issues/7210" rel="nofollow noreferrer">https://github.com/statsmodels/statsmodels/issues/7210</a></p>
<p>Discrete Logit works correctly for continuous data with optimization method "newton". The loglikelihood function itself uses a similar computational shortcut as Probit, but not the derivatives and other parts of Logit.</p>
<p>GLM-Binomial is designed for interval data and has no problems with it. The only numerical precision problems are currently in the Hessian of the probit link that uses numerical derivatives and is not very precise, which means that the parameters are well estimated but standard error can have numerical noise in GLM-Probit.</p>
<p><strong>update</strong>
Two changes in statsmodels 0.12.2:<br />
Probit now raises exception if response is not integer valued, and<br />
GLM Binomial with Probit link uses improved derivatives for Hessian with precision now similar to discrete Probit.</p>
| 131
|
implement classification
|
How to implement Grad-CAM in a classification app
|
https://stackoverflow.com/questions/69881311/how-to-implement-grad-cam-in-a-classification-app
|
<p>I am building a medical image classification app.
Is there a way to integrate <code>Grad-CAM</code> into the model so that when images are classified they are returned with a <code>class activation map</code>, or should the Grad-CAM functionality be added in the app, independent of the model? I need the app to classify a disease and return results as shown below. How can I go about this?</p>
<p><a href="https://i.sstatic.net/Uzyow.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uzyow.png" alt="enter image description here" /></a></p>
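<p>For reference, the Grad-CAM combination step itself is framework-agnostic; a minimal NumPy sketch (assuming you have already extracted the last conv layer's activations and the class-score gradients from your model, e.g. with <code>tf.GradientTape</code> or PyTorch hooks) could look like the following — resize the returned heatmap to the input size and overlay it in the app:</p>

```python
import numpy as np

def grad_cam_heatmap(conv_maps, conv_grads):
    """Grad-CAM combination step.
    conv_maps:  last conv layer activations, shape (H, W, C)
    conv_grads: gradients of the predicted class score w.r.t. those
                activations, same shape.
    Returns an (H, W) heatmap scaled to [0, 1]."""
    weights = conv_grads.mean(axis=(0, 1))                  # alpha_k: GAP over space
    cam = np.tensordot(conv_maps, weights, axes=([2], [0])) # weighted channel sum
    cam = np.maximum(cam, 0.0)                              # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                                    # normalize for overlay
    return cam
```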
| 132
|
|
implement classification
|
Implement a Regression/Classification Neural Network in Python
|
https://stackoverflow.com/questions/47626075/implement-a-regression-classification-neural-network-in-python
|
<h3>Problem Context</h3>
<p>I am trying to learn neural networks in Python, and I've developed an implementation of a logistic-regression-based NN.</p>
<p>Here is the code - </p>
<pre><code>import numpy as np
# Input array
X = np.array([[1, 0, 1, 0], [1, 0, 1, 1], [0, 1, 0, 1]])
# Output
y = np.array([[1], [1], [0]])
# Sigmoid Function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Derivative of Sigmoid Function
def ddx_sigmoid(x):
return x * (1 - x)
##### Initialization - BEGIN #####
# Setting training iterations
iterations_max = 500000
# Learning Rate
alpha = 0.5
# Number of Neurons in Input Layer = Number of Features in the data set
inputlayer_neurons = X.shape[1]
# Number of Neurons in the Hidden Layer
hiddenlayer_neurons = 3  # number of neurons in the hidden layer
# Number of Neurons at the Output Layer
output_neurons = 1 # number of neurons at output layer
# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))
##### Initialization - END #####
# Printing of shapes
print("\nShape X: ", X.shape, "\nShape Y: ", y.shape)
print("\nShape WH: ", wh.shape, "\nShape BH: ", bh.shape, "\nShape Wout: ", wout.shape, "\nShape Bout: ", bout.shape)
# Printing of Values
print("\nwh:\n", wh, "\n\nbh: ", bh, "\n\nwout:\n", wout, "\n\nbout: ", bout)
##### TRAINING - BEGIN #####
for i in range(iterations_max):
##### Forward Propagation - BEGIN #####
# Input to Hidden Layer = (Dot Product of Input Layer and Weights) + Bias
hidden_layer_input = (np.dot(X, wh)) + bh
# Activation of input to Hidden Layer by using Sigmoid Function
hiddenlayer_activations = sigmoid(hidden_layer_input)
# Input to Output Layer = (Dot Product of Hidden Layer Activations and Weights) + Bias
output_layer_input = np.dot(hiddenlayer_activations, wout) + bout
# Activation of input to Output Layer by using Sigmoid Function
output = sigmoid(output_layer_input)
##### Forward Propagation - END #####
##### Backward Propagation - BEGIN #####
E = y - output
slope_output_layer = ddx_sigmoid(output)
slope_hidden_layer = ddx_sigmoid(hiddenlayer_activations)
d_output = E * slope_output_layer
Error_at_hidden_layer = d_output.dot(wout.T)
d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
wout += hiddenlayer_activations.T.dot(d_output) * alpha
bout += np.sum(d_output, axis=0, keepdims=True) * alpha
wh += X.T.dot(d_hiddenlayer) * alpha
bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha
##### Backward Propagation - END #####
##### TRAINING - END #####
print("\nOutput is:\n", output)
</code></pre>
<p>This code works perfectly in the cases where the output is binary (0, 1). I guess this is because of the sigmoid function I am using.</p>
<h3>Problem</h3>
<p>Now, I want to scale this code so that it can handle linear regression as well.</p>
<p>As we know, the <code>scikit</code> library has some preloaded datasets which can be used for classification and regression.</p>
<p>I want my NN to train and test the <code>diabetes</code> dataset.</p>
<p>With this in mind, I have modified my code as follows - </p>
<pre><code>import numpy as np
from sklearn import datasets
# Sigmoid Function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Derivative of Sigmoid Function
def ddx_sigmoid(x):
return x * (1 - x)
# Load Data
def load_data():
diabetes_data = datasets.load_diabetes()
return diabetes_data
input_data = load_data()
X = input_data.data
# Reshape Output
y = input_data.target
y = y.reshape(len(y), 1)
iterations_max = 1000
# Learning Rate
alpha = 0.5
# Number of Neurons in the Input Layer = Number of Features in the data set
inputlayer_neurons = X.shape[1]
# Number of Neurons in the Hidden Layer
hiddenlayer_neurons = 5  # number of neurons in the hidden layer
# Number of Neurons at the Output Layer
output_neurons = 3 # number of neurons at output layer
# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))
##### TRAINING - BEGIN #####
for i in range(iterations_max):
##### Forward Propagation - BEGIN #####
# Input to Hidden Layer = (Dot Product of Input Layer and Weights) + Bias
hidden_layer_input = (np.dot(X, wh)) + bh
# Activation of input to Hidden Layer by using Sigmoid Function
hiddenlayer_activations = sigmoid(hidden_layer_input)
# Input to Output Layer = (Dot Product of Hidden Layer Activations and Weights) + Bias
output_layer_input = np.dot(hiddenlayer_activations, wout) + bout
# Activation of input to Output Layer by using Sigmoid Function
output = sigmoid(output_layer_input)
##### Forward Propagation - END #####
##### Backward Propagation - BEGIN #####
E = y - output
slope_output_layer = ddx_sigmoid(output)
slope_hidden_layer = ddx_sigmoid(hiddenlayer_activations)
d_output = E * slope_output_layer
Error_at_hidden_layer = d_output.dot(wout.T)
d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
wout += hiddenlayer_activations.T.dot(d_output) * alpha
bout += np.sum(d_output, axis=0, keepdims=True) * alpha
wh += X.T.dot(d_hiddenlayer) * alpha
bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha
##### Backward Propagation - END #####
##### TRAINING - END #####
print("\nOutput is:\n", output)
</code></pre>
<p>The output of this code is - </p>
<pre><code>Output is:
[[ 1. 1. 1.]
[ 1. 1. 1.]
[ 1. 1. 1.]
...,
[ 1. 1. 1.]
[ 1. 1. 1.]
[ 1. 1. 1.]]
</code></pre>
<p>Obviously, I am goofing up somewhere in the basics.</p>
<p>Is this because I am using the sigmoid function for the hidden as well as the output layer?</p>
<p>What kind of functions shall I use so that I get a valid output, which can be used to train my NN efficiently?</p>
<h3>Efforts so far</h3>
<p>I have tried using the TANH function and the SOFTPLUS function as the activation for both layers, without any success.</p>
<p>Can someone out here please help?</p>
<p>I tried Googling this, but the explanations there are very complex.</p>
<p>HELP !</p>
|
<p>You should try removing the sigmoid function from your output layer.</p>
<p>For linear regression the output range may be large, while the output of a sigmoid or tanh function is limited to [0, 1] or [-1, 1], which makes it impossible to minimize the error function.</p>
<p>======= UPDATE ======</p>
<p>I try to accomplish it in tensorflow, the core part for it:</p>
<pre><code>w = tf.Variable(tf.truncated_normal([features, FLAGS.hidden_unit], stddev=0.35))
b = tf.Variable(tf.zeros([FLAGS.hidden_unit]))
# right way: linear output
y = tf.reduce_sum(tf.matmul(x, w) + b, 1)
# wrong way: sigmoid output is limited to [0, 1]
# y = tf.sigmoid(tf.matmul(x, w) + b)
mse_loss = tf.reduce_sum(tf.pow(y - y_, 2)) / 2
</code></pre>
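<p>The same fix can be seen in plain NumPy. Below is a hypothetical minimal sketch (toy data and layer sizes invented for illustration, not the asker's diabetes set) of a network like the asker's with a sigmoid hidden layer but an identity output activation, whose derivative is simply 1:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([[2.0], [-1.0], [0.5]]) + 1.0   # toy linear target, unbounded

wh = rng.uniform(size=(3, 5)); bh = np.zeros((1, 5))
wout = rng.uniform(size=(5, 1)); bout = np.zeros((1, 1))
alpha = 0.1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def mse():
    h = sigmoid(X @ wh + bh)
    return float(np.mean((y - (h @ wout + bout)) ** 2))

mse_before = mse()
for _ in range(5000):
    h = sigmoid(X @ wh + bh)
    out = h @ wout + bout                 # linear output: no sigmoid here
    d_out = (y - out) / len(X)            # derivative of identity activation is 1
    d_h = (d_out @ wout.T) * h * (1 - h)  # sigmoid derivative in the hidden layer
    wout += h.T @ d_out * alpha; bout += d_out.sum(axis=0, keepdims=True) * alpha
    wh += X.T @ d_h * alpha;     bh += d_h.sum(axis=0, keepdims=True) * alpha

print(mse_before, mse())  # the mean squared error should drop substantially
```

<p>With the sigmoid removed from the output, the output-layer backward pass reduces to the raw error, and the predictions can reach targets outside [0, 1].</p>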
| 133
|