Dataset schema:
- category — string (107 distinct values)
- title — string (length 15–179)
- question_link — string (length 59–147)
- question_body — string (length 53–33.8k)
- answer_html — string (length 0–28.8k)
- __index_level_0__ — int64 (range 0–1.58k)
gradient descent implementation
Implementation of Logistic regression with Gradient Descent in Java
https://stackoverflow.com/questions/28824327/implementation-of-logistic-regression-with-gradient-descent-in-java
<p>I have implemented logistic regression with gradient descent in Java. It doesn't seem to work well (it does not classify records properly; the predicted probability of y=1 comes out high for most records). I don't know whether my implementation is correct. I have gone through the code several times and I am unable to find any bug. I have been following Andrew Ng's machine learning tutorials on Coursera. My Java implementation has three classes:</p> <ol> <li>DataSet.java: reads the data set</li> <li>Instance.java: has two members, double[] x and double label</li> <li>Logistic.java: the main class that implements logistic regression with gradient descent.</li> </ol> <p>This is my cost function: <h2>J(Θ) = (- 1/m ) [Σ<sup>m</sup><sub>i=1</sub> y<sup>(i)</sup> log( h<sub>Θ</sub>( x<sup>(i)</sup> ) ) + (1 - y<sup>(i) </sup>) log(1 - h<sub>Θ</sub> (x<sup>(i)</sup>) )] </h2>For the above cost function, this is my gradient descent algorithm: <h1>Repeat (</h1><h1> Θ<sub>j</sub> := Θ<sub>j</sub> - α Σ<sup>m</sup><sub>i=1</sub> ( h<sub>Θ</sub>( x<sup>(i)</sup>) - y<sup>(i)</sup> ) x<sup>(i)</sup><sub>j</sub></h1> (Simultaneously update all Θ<sub>j</sub> )<h1> ) </h1></p> <pre><code>import java.io.FileNotFoundException; import java.util.Arrays; import java.util.Collections; import java.util.List; public class Logistic { /** the learning rate */ private double alpha; /** the weight to learn */ private double[] theta; /** the number of iterations */ private int ITERATIONS = 3000; public Logistic(int n) { this.alpha = 0.0001; theta = new double[n]; } private double sigmoid(double z) { return (1 / (1 + Math.exp(-z))); } public void train(List&lt;Instance&gt; instances) { double[] temp = new double[3]; //Gradient Descent algorithm for minimizing theta for(int i=1;i&lt;=ITERATIONS;i++) { for(int j=0;j&lt;3;j++) { temp[j]=theta[j] - (alpha * sum(j,instances)); } //simultaneous updates of theta for(int j=0;j&lt;3;j++) { theta[j] = temp[j]; } 
System.out.println(Arrays.toString(theta)); } } private double sum(int j,List&lt;Instance&gt; instances) { double[] x; double prediction,sum=0,y; for(int i=0;i&lt;instances.size();i++) { x = instances.get(i).getX(); y = instances.get(i).getLabel(); prediction = classify(x); sum+=((prediction - y) * x[j]); } return (sum/instances.size()); } private double classify(double[] x) { double logit = .0; for (int i=0; i&lt;theta.length;i++) { logit += (theta[i] * x[i]); } return sigmoid(logit); } public static void main(String... args) throws FileNotFoundException { //DataSet is a class with a static method readDataSet which reads the dataset // Instance is a class with two members: double[] x, double label y // x contains the features and y is the label. List&lt;Instance&gt; instances = DataSet.readDataSet("data.txt"); // 3 : number of theta parameters corresponding to the features x // x0 is always 1 Logistic logistic = new Logistic(3); logistic.train(instances); //Test data double[]x = new double[3]; x[0]=1; x[1]=45; x[2] = 85; System.out.println("Prob: "+logistic.classify(x)); } } </code></pre> <p>Can anyone tell me what I am doing wrong? Thanks in advance! :)</p>
<p>As I am studying logistic regression, I took the time to review your code in detail.</p> <p><strong>TL;DR</strong></p> <p>In fact, the algorithm appears to be correct.</p> <p>The reason you got so many false negatives or false positives is, I think, the hyperparameters you chose.</p> <p>The model was under-trained, so the hypothesis was under-fitting.</p> <p><strong>Details</strong></p> <p>I had to create the <code>DataSet</code> and <code>Instance</code> classes because you did not publish them, and set up a train data set and a test data set based on the Cryotherapy dataset. See <a href="http://archive.ics.uci.edu/ml/datasets/Cryotherapy+Dataset+" rel="nofollow noreferrer">http://archive.ics.uci.edu/ml/datasets/Cryotherapy+Dataset+</a>.</p> <p>Then, using your exact same code (for the logistic regression part), with an alpha rate of <code>0.001</code> and <code>100000</code> iterations, I got a precision rate of <code>80.64516129032258</code> percent on the test data set, which is not so bad.</p> <p>I tried to get a better precision rate by manually tweaking those hyperparameters but could not obtain any better result.</p> <p>At this point, an enhancement would be to implement regularization, I suppose.</p> <p><strong>Gradient descent formula</strong></p> <p>In Andrew Ng's video about the cost function and gradient descent, the <code>1/m</code> term is indeed omitted. A possible explanation is that the <code>1/m</code> term is absorbed into the <code>alpha</code> term. Or maybe it's just an oversight. 
See <a href="https://www.youtube.com/watch?v=TTdcc21Ko9A&amp;index=36&amp;list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&amp;t=6m53s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=TTdcc21Ko9A&amp;index=36&amp;list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&amp;t=6m53s</a> at 6m53s.</p> <p>But if you watch Andrew Ng's video about regularization and logistic regression you'll notice that the term <code>1/m</code> is clearly present in the formula. See <a href="https://www.youtube.com/watch?v=IXPgm1e0IOo&amp;index=42&amp;list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&amp;t=2m19s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=IXPgm1e0IOo&amp;index=42&amp;list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&amp;t=2m19s</a> at 2m19s.</p>
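The batch update discussed in this answer can be sketched in NumPy with the <code>1/m</code> term kept explicit. This is a minimal illustrative sketch, not the asker's Java code: the data layout, learning rate, and iteration count below are assumptions.

```python
# Minimal sketch of batch gradient descent for logistic regression,
# with the 1/m averaging term written out explicitly.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, alpha=0.1, iterations=5000):
    """X is (m, n) with a leading column of ones (x0 = 1); y holds 0/1 labels."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        h = sigmoid(X @ theta)                    # hypothesis for all m examples
        theta -= alpha * (X.T @ (h - y)) / m      # simultaneous update with 1/m
    return theta
```

With the 1/m in place, the same alpha works regardless of the training-set size, which is one practical reason to keep it.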
334
gradient descent implementation
Implementing naive gradient descent in python
https://stackoverflow.com/questions/41202817/implementing-naive-gradient-descent-in-python
<p>I'm trying to implement a very naive gradient descent in python. However, it looks like it goes into an infinite loop. Could you please help me debug it? </p> <pre><code>y = lambda x : x**2 dy_dx = lambda x : 2*x def gradient_descent(function,derivative,initial_guess): optimum = initial_guess while derivative(optimum) != 0: optimum = optimum - derivative(optimum) else: return optimum gradient_descent(y,dy_dx,5) </code></pre> <p>Edit:</p> <p>Now I have this code, I really can't comprehend the output. P.s. It might freeze your CPU. </p> <pre><code>y = lambda x : x**2 dy_dx = lambda x : 2*x def gradient_descent(function,derivative,initial_guess): optimum = initial_guess while abs(derivative(optimum)) &gt; 0.01: optimum = optimum - 2*derivative(optimum) print((optimum,derivative(optimum))) else: return optimum gradient_descent(y,dy_dx,5) </code></pre> <p>Now I'm trying to apply it to a regression problem, however the output doesn't appear to be correct as shown in the output below:</p> <p><a href="https://i.sstatic.net/zd7nv.png" rel="nofollow noreferrer">Output of gradient descent code below</a></p> <pre><code>import matplotlib.pyplot as plt def stepGradient(x,y, step): b_current = 0 m_current = 0 b_gradient = 0 m_gradient = 0 N = int(len(x)) for i in range(0, N): b_gradient += -(1/N) * (y[i] - ((m_current*x[i]) + b_current)) m_gradient += -(1/N) * x[i] * (y[i] - ((m_current * x[i]) + b_current)) while abs(b_gradient) &gt; 0.01 and abs(m_gradient) &gt; 0.01: b_current = b_current - (step * b_gradient) m_current = m_current - (step * m_gradient) for i in range(0, N): b_gradient += -(1/N) * (y[i] - ((m_current*x[i]) + b_current)) m_gradient += -(1/N) * x[i] * (y[i] - ((m_current * x[i]) + b_current)) return [b_current, m_current] x = [1,2, 2,3,4,5,7,8] y = [1.5,3,1,3,2,5,6,7] step = 0.00001 (b,m) = stepGradient(x,y,step) plt.scatter(x,y) abline_values = [m * i + b for i in x] plt.plot(x, abline_values, 'b') plt.show() </code></pre> <p>Fixed :D</p> <pre><code>import 
matplotlib.pyplot as plt def stepGradient(x,y): step = 0.001 b_current = 0 m_current = 0 b_gradient = 0 m_gradient = 0 N = int(len(x)) for i in range(0, N): b_gradient += -(1/N) * (y[i] - ((m_current*x[i]) + b_current)) m_gradient += -(1/N) * x[i] * (y[i] - ((m_current * x[i]) + b_current)) while abs(b_gradient) &gt; 0.01 or abs(m_gradient) &gt; 0.01: b_current = b_current - (step * b_gradient) m_current = m_current - (step * m_gradient) b_gradient= 0 m_gradient = 0 for i in range(0, N): b_gradient += -(1/N) * (y[i] - ((m_current*x[i]) + b_current)) m_gradient += -(1/N) * x[i] * (y[i] - ((m_current * x[i]) + b_current)) return [b_current, m_current] x = [1,2, 2,3,4,5,7,8,10] y = [1.5,3,1,3,2,5,6,7,20] (b,m) = stepGradient(x,y) plt.scatter(x,y) abline_values = [m * i + b for i in x] plt.plot(x, abline_values, 'b') plt.show() </code></pre>
<p>Your <code>while</code> loop stops only when a calculated floating-point value equals zero. This is naïve, since floating-point values are rarely calculated exactly. Instead, stop the loop when the calculated value is <em>close enough</em> to zero. Use something like</p> <pre><code>while abs(derivative(optimum)) &gt; eps: </code></pre> <p>where <code>eps</code> is the desired precision of the calculated value. This could be made another parameter, perhaps with a default value of <code>1e-10</code> or some such.</p> <hr> <p>That said, the problem in your case is worse. Your algorithm is far too naïve in assuming that the calculation</p> <pre><code>optimum = optimum - 2*derivative(optimum) </code></pre> <p>will move the value of <code>optimum</code> closer to the actual optimum value. In your particular case, the variable <code>optimum</code> just cycles back and forth between <code>5</code> (your initial guess) and <code>-5</code>. Note that the derivative at <code>5</code> is <code>10</code> and the derivative at <code>-5</code> is <code>-10</code>.</p> <p>So you need to avoid such cycling. You could multiply your delta <code>2*derivative(optimum)</code> by something smaller than <code>1</code>, which would work in your particular case <code>y=x**2</code>. But this will not work in general.</p> <p>To be completely safe, 'bracket' your optimum point between a smaller value and a larger value, and use the derivative to find the next guess. But ensure that your next guess does not go outside the bracketed interval. If it does, or if the convergence of your guesses is too slow, use another method such as bisection or golden-section search.</p> <p>Of course, this means your 'very naïve gradient descent' algorithm is too naïve to work in general. That's why real optimization routines are more complicated.</p>
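The two fixes this answer suggests, a tolerance-based stopping rule and a damped step, can be combined into a short runnable sketch for the questioner's y = x**2 example. The rate of 0.1 and eps of 1e-8 are illustrative; as the answer warns, no single fixed rate works for every function.

```python
# Sketch: stop when the derivative is close enough to zero, and damp the
# step so the iterate cannot cycle between 5 and -5 as in the question.
def gradient_descent(derivative, initial_guess, rate=0.1, eps=1e-8, max_iter=10000):
    optimum = initial_guess
    for _ in range(max_iter):
        if abs(derivative(optimum)) <= eps:   # close enough to a stationary point
            break
        optimum -= rate * derivative(optimum)
    return optimum
```

For y = x**2 with rate 0.1 the iterate shrinks by a factor of 0.8 each step, so it converges geometrically instead of oscillating.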
335
gradient descent implementation
gradient descent doesn&#39;t change in my linear regression implementation
https://stackoverflow.com/questions/70010085/gradient-descent-doesnt-change-in-my-linear-regression-implementation
<p>I am a beginner with the gradient descent concept. I implemented multivariate linear regression with a gradient descent optimization algorithm, but my program doesn't converge; only the early iterations show small changes! My methods (in my class) are below:</p> <pre><code>def gradientDescent(self, X, y, theta): ''' Fits the model via gradient descent Arguments: X is a n-by-d numpy matrix y is an n-dimensional numpy vector theta is a d-dimensional numpy vector Returns: the final theta found by gradient descent ''' n,d = X.shape self.JHist = [] for i in range(self.n_iter): self.JHist.append((self.computeCost(X, y, theta), theta)) print(&quot;Iteration: &quot;, i+1, &quot; Cost: &quot;, self.JHist[i][0], &quot; Theta: &quot;, theta) #vectorized//// error=np.matmul(X,theta)-y theta -= self.alpha * np.matmul(X.T, error) /n return theta def computeCost(self, X, y, theta): ''' Computes the objective function Arguments: X is a n-by-d numpy matrix y is an n-dimensional numpy vector theta is a d-dimensional numpy vector ''' n=len(y) d=np.matmul(X,theta)-y J=(1/(2*n))*np.matmul(d.T,d) return J </code></pre> <p>Report of the five last iterations: <a href="https://i.sstatic.net/HVZJV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HVZJV.png" alt="enter image description here" /></a> Could someone please help to identify the problem.</p>
336
gradient descent implementation
How do I implement stochastic gradient descent correctly?
https://stackoverflow.com/questions/55757307/how-do-i-implement-stochastic-gradient-descent-correctly
<p>I'm trying to implement stochastic gradient descent in MATLAB, however I am not seeing any convergence. Mini-batch gradient descent worked as expected, so I think that the cost function and gradient steps are correct.</p> <p>The two main issues I am having are:</p> <ol> <li>Randomly shuffling the data in the training set before the for-loop </li> <li>Selecting one example at a time</li> </ol> <p>Here is my MATLAB code:</p> <p><strong>Generating Data</strong></p> <pre><code>alpha = 0.001; num_iters = 10; xrange =(-10:0.1:10); % data length ydata = 5*(xrange)+30; % data with gradient 5, intercept 30 % plot(xrange,ydata); grid on; noise = (2*randn(1,length(xrange))); % generating noise target = ydata + noise; % adding noise to data f1 = figure subplot(2,2,1); scatter(xrange,target); grid on; hold on; % scatter plot title('Linear Regression') xlabel('xrange') ylabel('ydata') tita0 = randn(1,1); %intercept (randomised) tita1 = randn(1,1); %gradient (randomised) % Initialize Objective Function History J_history = zeros(num_iters, 1); % Number of training examples m = (length(xrange)); </code></pre> <p><strong>Shuffling data, Gradient Descent and Cost Function</strong></p> <pre><code>% STEP1 : we shuffle the data data = [ xrange, ydata]; data = data(randperm(size(data,1)),:); y = data(:,1); X = data(:,2:end); for iter = 1:num_iters for i = 1:m x = X(:,i); % STEP2 Select one example h = tita0 + tita1.*x; % building the estimated %Changed to xrange in BGD %c = (1/(2*length(xrange)))*sum((h-target).^2) temp0 = tita0 - alpha*((1/m)*sum((h-target))); temp1 = tita1 - alpha*((1/m)*sum((h-target).*x)); %Changed to xrange in BGD tita0 = temp0; tita1 = temp1; fprintf("here\n %d; %d", i, x) end J_history(iter) = (1/(2*m))*sum((h-target).^2); % Calculating cost from data to estimate fprintf('Iteration #%d - Cost = %d... \r\n',iter, J_history(iter)); end </code></pre> <hr> <p>On plotting the cost vs iterations and linear regression graphs, the MSE settles (local minimum?) 
at around 420, which is wrong.</p> <p><a href="https://i.sstatic.net/Ciqn2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ciqn2.png" alt="enter image description here"></a></p> <p>On the other hand, if I re-run the exact same code using batch gradient descent, I get acceptable results. In batch gradient descent I am changing <code>x</code> to <code>xrange</code>:</p> <p><a href="https://i.sstatic.net/6xdqv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6xdqv.png" alt="enter image description here"></a></p> <p>Any suggestions on what I am doing wrong?</p> <hr> <p><strong>EDIT:</strong></p> <p>I also tried selecting random indexes using:</p> <pre><code>f = round(1+rand(1,1)*201); %generating random indexes </code></pre> <p>and then selecting one example:</p> <pre><code>x = xrange(f); % STEP2 Select one example </code></pre> <p>Proceeding to use <code>x</code> in the hypothesis and GD steps also yields a cost of 420.</p>
<p>First we need to shuffle the data correctly:</p> <pre><code>data = [ xrange', target']; data = data(randperm(size(data,1)),:); </code></pre> <p>Next we need to index X and y correctly:</p> <pre><code>y = data(:,2); X = data(:,1); </code></pre> <p>Then, during gradient descent, the update needs to be based on the single selected value rather than on <code>target</code>, like so:</p> <pre><code>tita0 = tita0 - alpha*((1/m)*((h-y(i)))); tita1 = tita1 - alpha*((1/m)*((h-y(i)).*x)); </code></pre> <p>Theta converges to [5, 30] with the changes above.</p>
337
gradient descent implementation
Gradient descent with polynomial features implementation issue
https://stackoverflow.com/questions/62625339/gradient-descent-with-polynomial-features-implementation-issue
<p>I am trying to implement gradient descent after transforming some random data using sklearns polynomial transformer. My code works when not using polynomial features, but gives really high coefficients when transforming.</p> <p>Is there an issue with my code (below)?</p> <pre><code> l= 20 np.random.seed(0) X = 2 - 3 * np.random.normal(0, 1, l) y = X - 2 * (X ** 2) + 0.5 * (X ** 3) + np.random.normal(-3, 3, l) plt.scatter(X,y, s=10) plt.show() X = X.reshape(-1,1) m, n = X.shape p = PolynomialFeatures(degree=2) xbp = p.fit_transform(X) # initiate coefs theta = np.ones((xbp.shape[1], 1)) for _ in range(500): err = xbp.dot(theta) - y.reshape(-1, 1) gradients = 2 / m * xbp.T.dot(err) theta -= 0.01 * gradients print(theta) print() print(LinearRegression().fit(xbp, y).coef_) </code></pre> <p>output</p> <pre><code> [[ 802.60118234] [ 360.65857329] [12234.00939771]] [ 0. 8.48492679 -1.62853134] </code></pre> <p><a href="https://i.sstatic.net/OOlzv.png" rel="nofollow noreferrer">code snippet</a></p> <p>Thanks</p>
338
gradient descent implementation
correct implementation of Hinge loss minimization for gradient descent
https://stackoverflow.com/questions/28988732/correct-implementation-of-hinge-loss-minimization-for-gradient-descent
<p>I copied the hinge loss function from <a href="https://code.google.com/p/java-statistical-analysis-tool/source/browse/trunk/JSAT/src/jsat/lossfunctions/HingeLoss.java?r=762" rel="nofollow">here</a> (also LossC and LossFunc, upon which it's based). Then I included it in my gradient descent algorithm like so: </p> <pre><code> do { iteration++; error = 0.0; cost = 0.0; //loop through all instances (complete one epoch) for (p = 0; p &lt; number_of_files__train; p++) { // 1. Calculate the hypothesis h = X * theta hypothesis = calculateHypothesis( theta, feature_matrix__train, p, globo_dict_size ); // 2. Calculate the loss = h - y and maybe the squared cost (loss^2)/2m //cost = hypothesis - outputs__train[p]; cost = HingeLoss.loss(hypothesis, outputs__train[p]); System.out.println( "cost " + cost ); // 3. Calculate the gradient = X' * loss / m gradient = calculateGradent( theta, feature_matrix__train, p, globo_dict_size, cost, number_of_files__train); // 4. Update the parameters theta = theta - alpha * gradient for (int i = 0; i &lt; globo_dict_size; i++) { theta[i] = theta[i] - LEARNING_RATE * gradient[i]; } } //summation of squared error (error value for all instances) error += (cost*cost); /* Root Mean Squared Error */ //System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt( error/number_of_files__train ) ); System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt( error/number_of_files__train ) ); } while( error != 0 ); </code></pre> <p>But this doesn't work at all. Is that due to the loss function? Maybe it's how I added the loss function to my code? </p> <p>I guess it's also possible that my implementation of gradient descent is faulty. 
</p> <p>Here are my methods for calculating the gradient and the hypothesis, are these right?</p> <pre><code>static double calculateHypothesis( double[] theta, double[][] feature_matrix, int file_index, int globo_dict_size ) { double hypothesis = 0.0; for (int i = 0; i &lt; globo_dict_size; i++) { hypothesis += ( theta[i] * feature_matrix[file_index][i] ); } //bias hypothesis += theta[ globo_dict_size ]; return hypothesis; } static double[] calculateGradent( double theta[], double[][] feature_matrix, int file_index, int globo_dict_size, double cost, int number_of_files__train) { double m = number_of_files__train; double[] gradient = new double[ globo_dict_size];//one for bias? for (int i = 0; i &lt; gradient.length; i++) { gradient[i] = (1.0/m) * cost * feature_matrix[ file_index ][ i ] ; } return gradient; } </code></pre> <p>The rest of the code is <a href="https://github.com/h1395010/gradient_diss-1-ent_1-id" rel="nofollow">here</a> if you're interested to take a look. </p> <p>Below this sentence is what those loss functions look like. Should I use the <code>loss</code> or <code>deriv</code>, are these even correct?</p> <pre><code>/** * Computes the HingeLoss loss * * @param pred the predicted value * @param y the target value * @return the HingeLoss loss */ public static double loss(double pred, double y) { return Math.max(0, 1 - y * pred); } /** * Computes the first derivative of the HingeLoss loss * * @param pred the predicted value * @param y the target value * @return the first derivative of the HingeLoss loss */ public static double deriv(double pred, double y) { if (pred * y &gt; 1) return 0; else return -y; } </code></pre>
<p>The code you provided for gradient does not look like a gradient of Hinge loss. Take a look at a valid equation, for example here: <a href="https://stats.stackexchange.com/questions/4608/gradient-of-hinge-loss">https://stats.stackexchange.com/questions/4608/gradient-of-hinge-loss</a></p>
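For concreteness, a sketch of what the averaged hinge-loss subgradient from that link looks like, matching the <code>deriv</code> function the question quotes (each example contributes -y * x only when y * pred &lt; 1). This is an illustrative NumPy sketch with assumed names, not the asker's Java code.

```python
# Sketch: subgradient of the mean hinge loss over a batch, consistent with
# deriv(pred, y) = -y when y * pred < 1 and 0 otherwise.
import numpy as np

def hinge_subgradient(theta, X, y):
    """X is (m, n); y has entries in {-1, +1}."""
    margins = y * (X @ theta)
    active = margins < 1                       # examples violating the margin
    return -(X[active].T @ y[active]) / len(y)

def hinge_loss(theta, X, y):
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ theta)))
```

A plain gradient step `theta -= rate * hinge_subgradient(theta, X, y)` then decreases the loss, unlike plugging the loss value itself into the gradient as in the question's code.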
339
gradient descent implementation
Implementation of Linear Regression Using Gradient Descent
https://stackoverflow.com/questions/63861962/implementation-of-linear-regression-using-gradient-descent
<p>Linear Regression Using Gradient Descent</p> <p>Reference: <a href="https://towardsdatascience.com/linear-regression-using-gradient-descent-in-10-lines-of-code-642f995339c0" rel="nofollow noreferrer">Linear Regression Using Gradient Descent in 10 Lines of Code</a></p> <p>Dataset: house price and number of bedrooms</p> <p>Coding environment: Google Colab</p> <p>Code:</p> <pre><code>import numpy as np import pandas as pd import io from google.colab import files uploaded = files.upload() df = pd.read_csv(io.BytesIO(uploaded['datasets_46927_85203_data.csv'])) house_price=df.price No_of_bedrooms=df.bedrooms # Mean of house_price and No_of_bedrooms mean_x= np.mean(house_price) mean_y=np.mean(No_of_bedrooms) #Total number of datapoints n=len(house_price) numerator = 0; denominator = 0; for i in range (n): numerator += (house_price[i] - mean_x) * (No_of_bedrooms[i] - mean_y) denominator += (house_price[i] - mean_x) ** 2 slope= numerator / denominator constant = mean_y - (slope*mean_x) def linear_regression(house_price[], y, m_current=slope, b_current=constant, epochs=1000, learning_rate=0.0001): N = float(len(No_of_bedrooms)) for i in range(epochs): y_current = (m_current * house_price[i]) + b_current cost = sum([data**2 for data in (No_of_bedrooms[i]-y_current)]) / N m_gradient = -(2/N) * sum(house_price[i] * (No_of_bedrooms[i] - y_current)) b_gradient = -(2/N) * sum(No_of_bedrooms[i] - y_current) m_current = m_current - (learning_rate * m_gradient) b_current = b_current - (learning_rate * b_gradient) return m_current, b_current, cost </code></pre> <p>Error: <code>def linear_regression(house_price[], y, m_current=slope, b_current=constant, epochs=1000, learning_rate=0.0001):</code></p> <pre><code>Error Statement: File &quot;&lt;ipython-input-6-864b867cbc7a&gt;&quot;, line 31 def linear_regression(house_price[], y, m_current=slope, b_current=constant, epochs=1000, learning_rate=0.0001): ^ SyntaxError: invalid syntax </code></pre> <p>I don't understand how to get rid 
of this error.</p> <p>This <a href="https://stackoverflow.com/questions/58996556/implementing-a-linear-regression-using-gradient-descent">post</a> has no connection with my query.</p> <p>It will be great if someone helps me out. Thank you.</p>
340
gradient descent implementation
Logistic Regression, Gradient Descent Octave implementation
https://stackoverflow.com/questions/63046676/logistic-regression-gradient-descent-octave-implementation
<p>I'm taking the Machine Learning class by Prof. Ng. There is a homework assignment that requires implementing gradient descent for logistic regression. Here is my code:</p> <pre><code>function [J, grad] = costFunction(theta, X, y) %COSTFUNCTION Compute cost and gradient for logistic regression % J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the % parameter for logistic regression and the gradient of the cost % w.r.t. to the parameters. % Initialize some useful values m = length(y); % number of training examples [~,n] = size(X); % You need to return the following variables correctly J = 0; grad = zeros(size(theta)); % ====================== YOUR CODE HERE ====================== % Instructions: Compute the cost of a particular choice of theta. % You should set J to the cost. % Compute the partial derivatives and set grad to the partial % derivatives of the cost w.r.t. each parameter in theta % % Note: grad should have the same dimensions as theta % J = ((-y'*log(sigmoid(X*theta)))-((1-y)'*log(1-sigmoid(X*theta))))/m; for j = 1:n temp_sum = 0; for i = 1:m temp_sum+=(sigmoid(X(i,:)*theta)-y(i))*X(i,j); endfor grad(j) = theta(j)-temp_sum; endfor % ============================================================= end </code></pre> <p>This is the formula that I'm trying to implement: <a href="https://i.sstatic.net/B9k58.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B9k58.png" alt="enter image description here" /></a></p> <p>where h(x) represents the sigmoid function. I have checked that the sigmoid function is correct, but I still can't see what is wrong in this algorithm. Please let me know if you find anything wrong.</p>
<p>I believe you were supposed to get the average of the gradient <code>grad = grad / m</code> as well, just like for the cost <code>J</code>. But it has been a while since I last did Andrew Ng's course, so I might be wrong.</p>
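The averaged gradient this answer suggests can be written vectorized in one line. A minimal NumPy transcription (illustrative, standing in for the Octave code) of the cost and the averaged gradient:

```python
# Sketch: vectorized logistic-regression cost and gradient with the 1/m
# averaging this answer suggests (the equivalent of grad = grad / m).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function(theta, X, y):
    m = len(y)
    h = sigmoid(X @ theta)
    J = (-(y @ np.log(h)) - (1 - y) @ np.log(1 - h)) / m
    grad = (X.T @ (h - y)) / m     # averaged gradient, same shape as theta
    return J, grad
```

Note the gradient is just the averaged sum of the per-example terms; there is no `theta(j) -` in front of it, since the gradient is what the outer optimizer subtracts from theta.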
341
gradient descent implementation
Implementing Stochastic Gradient Descent Python
https://stackoverflow.com/questions/44344006/implementing-stochastic-gradient-descent-python
<p>I've been trying to implement stochastic gradient descent as part of a recommendation system, following these equations:</p> <p><a href="https://i.sstatic.net/mIzbi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mIzbi.jpg" alt="enter image description here" /></a></p> <p>I have:</p> <pre><code>for step in range(max_iter): e = 0 for x in range(len(R)): for i in range(len(R[x])): if R[x][i] &gt; 0: exi = 2 * (R[x][i] - np.dot(Q[:,i], P[x,:])) qi, px = Q[:,i], P[x,:] qi += _mu_2 * (exi * px - (2 * _lambda_1 * qi)) px += _mu_1 * (exi * qi - (2 * _lambda_2 * px)) Q[:,i], P[x,:] = qi, px </code></pre> <p>The output isn't quite what I expect, but I can't really put a finger on the problem. Please help me to identify the problem in my code.</p> <p>I'd much appreciate your support.</p>
<p>When you update <em>qi</em> and <em>px</em>, you should exchange <em>mu1</em> and <em>mu2</em>.</p>
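A sketch of the corrected update this answer describes, with the learning rates swapped relative to the question's code so mu_1 now scales the update of qi and mu_2 the update of px. Names mirror the question; the rating matrix, rates, and zeroed regularization constants in the test are illustrative assumptions.

```python
# Sketch: one SGD pass over the observed entries of R, with the mu_1/mu_2
# pairing exchanged as the answer suggests. P rows are user factors,
# Q columns are item factors, and updates are applied in place.
import numpy as np

def sgd_epoch(R, P, Q, mu_1, mu_2, lambda_1, lambda_2):
    for x in range(R.shape[0]):
        for i in range(R.shape[1]):
            if R[x, i] > 0:
                exi = 2 * (R[x, i] - Q[:, i] @ P[x, :])
                qi, px = Q[:, i].copy(), P[x, :].copy()
                Q[:, i] += mu_1 * (exi * px - 2 * lambda_1 * qi)  # was mu_2
                P[x, :] += mu_2 * (exi * qi - 2 * lambda_2 * px)  # was mu_1
```

Copying qi and px before the updates also avoids using a half-updated qi inside the px update, which the question's in-place version does.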
342
gradient descent implementation
Gradient descent with random input implementation
https://stackoverflow.com/questions/27009256/gradient-descent-with-random-input-implementation
<p>I am trying to implement gradient descent on a dataset. Even though I tried everything, I couldn't make it work. So, I created a test case. I am trying my code on a random data and try to debug. </p> <p>More specifically, what I am doing is, I am generating random vectors between 0-1 and random labels for these vectors. And try to over-fit the training data.</p> <p>However, my weight vector gets bigger and bigger in each iteration. And then, I have infinities. So, I do not actually learn anything. Here is my code:</p> <pre><code>import numpy as np import random def getRandomVector(n): return np.random.uniform(0,1,n) def getVectors(m, n): return [getRandomVector(n) for i in range(n)] def getLabels(n): return [random.choice([-1,1]) for i in range(n)] def GDLearn(vectors, labels): maxIterations = 100 stepSize = 0.01 w = np.zeros(len(vectors[0])+1) for i in range(maxIterations): deltaw = np.zeros(len(vectors[0])+1) for i in range(len(vectors)): temp = np.append(vectors[i], -1) deltaw += ( labels[i] - np.dot(w, temp) ) * temp w = w + ( stepSize * (-1 * deltaw) ) return w vectors = getVectors(100, 30) labels = getLabels(100) w = GDLearn(vectors, labels) print w </code></pre> <p>I am using LMS for loss function. So, in all iterations, my update is the following,</p> <p><img src="https://i.sstatic.net/vfxiym.png" alt="enter image description here"></p> <p>where w^i is the ith weight vector and R is the stepSize and E(w^i) is the loss function.</p> <p>Here is my loss function. (LMS)</p> <p><img src="https://i.sstatic.net/H8A3Nm.png" alt="enter image description here"></p> <p>and here is how I derivated the loss function,</p> <p><img src="https://i.sstatic.net/USvYK.png" alt="enter image description here">,</p> <p>Now, my questions are:</p> <ol> <li>Should I expect good results in this random scenario using Gradient Descent? 
(What are the theoretical bounds?)</li> <li>If yes, what is the bug in my implementation?</li> </ol> <p>PS: I tried several other <code>maxIterations</code> and <code>stepSize</code> parameters. Still not working. PS2: This is the best way I can ask the question here. Sorry if the question is too specific, but it made me crazy. I really want to learn the problem.</p>
<p>Your code has a couple of faults:</p> <ul> <li>In the <code>getVectors()</code> method, you did not actually use the input variable <code>m</code>;</li> <li>In the <code>GDLearn()</code> method, you have a double loop, but you use the same variable <code>i</code> as the loop variable in both loops. (I guess the logic is still right, but it's confusing.)</li> <li>The prediction error (<code>labels[i] - np.dot(w, temp)</code>) has the wrong sign. </li> <li>Step size does matter. Using 0.01 as the step size, the cost increases in each iteration. Changing it to 0.001 solved the problem. </li> </ul> <p>Here is my revised code based on your original code.</p> <blockquote> <pre><code>import numpy as np import random def getRandomVector(n): return np.random.uniform(0,1,n) def getVectors(m, n): return [getRandomVector(n) for i in range(m)] def getLabels(n): return [random.choice([-1,1]) for i in range(n)] def GDLearn(vectors, labels): maxIterations = 100 stepSize = 0.001 w = np.zeros(len(vectors[0])+1) for iter in range(maxIterations): cost = 0 deltaw = np.zeros(len(vectors[0])+1) for i in range(len(vectors)): temp = np.append(vectors[i], -1) prediction_error = np.dot(w, temp) - labels[i] deltaw += prediction_error * temp cost += prediction_error**2 w = w - stepSize * deltaw print 'cost at', iter, '=', cost return w vectors = getVectors(100, 30) labels = getLabels(100) w = GDLearn(vectors, labels) print w </code></pre> </blockquote> <p>Running result -- you can see the cost is decreasing with each iteration, but with a diminishing return. </p> <pre><code>cost at 0 = 100.0 cost at 1 = 99.4114482617 cost at 2 = 98.8476022685 cost at 3 = 98.2977744556 cost at 4 = 97.7612851154 cost at 5 = 97.2377571222 cost at 6 = 96.7268325883 cost at 7 = 96.2281642899 cost at 8 = 95.7414151147 cost at 9 = 95.2662577529 cost at 10 = 94.8023744037 ...... 
cost at 90 = 77.367904046 cost at 91 = 77.2744249433 cost at 92 = 77.1823702888 cost at 93 = 77.0917090883 cost at 94 = 77.0024111475 cost at 95 = 76.9144470493 cost at 96 = 76.8277881325 cost at 97 = 76.7424064707 cost at 98 = 76.6582748518 cost at 99 = 76.5753667579 [ 0.16232142 -0.2425511 0.35740632 0.22548442 0.03963853 0.19595213 0.20080207 -0.3921798 -0.0238925 0.13097533 -0.1148932 -0.10077534 0.00307595 -0.30111942 -0.17924479 -0.03838637 -0.23938181 0.1384443 0.22929163 -0.0132466 0.03325976 -0.31489526 0.17468025 0.01351012 -0.25926117 0.09444201 0.07637793 -0.05940019 0.20961315 0.08491858 0.07438357] </code></pre>
343
gradient descent implementation
Linear Regression\Gradient Descent python implementation
https://stackoverflow.com/questions/14993454/linear-regression-gradient-descent-python-implementation
<p>I'm trying to implement linear regression using the gradient descent method from scratch for learning purposes. One part of my code is really bugging me. For some reason the variable <code>x</code> is being altered after I run a line of code, and I'm not sure why. </p> <p>The variables are as follows. <code>x</code> and <code>y</code> are numpy arrays and I've given them random numbers for this example.</p> <pre><code>x = np.array([1, 2, 3, 4, ...., n]) y = np.array([1, 2, 3, , ...., n]) theta = [0, 0] alpha = .01 m = len(x) </code></pre> <p>The code is:</p> <pre><code>theta[0] = theta[0] - alpha*1/m*sum([((theta[0]+theta[1]*x) - y)**2 for (x,y) in zip(x,y)]) </code></pre> <p>Once I run the above code, <code>x</code> is no longer a list. It becomes just <code>n</code>, the last element of the list. </p>
<p>What is happening is that python is computing the list <code>zip(x,y)</code>, then each iteration of your for loop is overwriting <code>(x,y)</code> with the corresponding element of <code>zip(x,y)</code>. When your for loop terminates <code>(x,y)</code> contains <code>zip(x,y)[-1]</code>.</p> <p>Try </p> <pre><code>theta[0] = theta[0] - alpha*1/m*sum([((theta[0]+theta[1]*xi) - yi)**2 for (xi,yi) in zip(x,y)]) </code></pre>
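The rebinding the answer describes is easiest to see in isolation. (The leak from a list comprehension is Python 2 behaviour; in Python 3 a comprehension gets its own scope, but an ordinary <code>for</code> loop still rebinds the outer names.) A minimal sketch with made-up values:

```python
x = [1, 2, 3]
y = [4, 5, 6]

# zip(x, y) is built once from the original lists; after that, each
# iteration rebinds the outer names x and y to the current pair.
for (x, y) in zip(x, y):
    pass

print(x, y)  # 3 6 -- the last pair; the original lists are no longer reachable
```

Using distinct loop-variable names (`xi`, `yi`), as the answer suggests, leaves `x` and `y` untouched.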
344
gradient descent implementation
Better gradient descent implementation for linear SVM with varying loss?
https://stackoverflow.com/questions/64126137/better-gradient-descent-implementation-for-linear-svm-with-varying-loss
<p>I'm implementing SVM with hinge loss (linear SVM, soft margin), and try to minimize the loss using gradient descent.<br /> Here's my current gradient descent, in Julia:</p> <pre><code>for i in 1:max_iter if n_cost_no_change &lt;= 0 &amp;&amp; early_stop break end learn!(X_data, Y_data, weights, learning_rate) # compute gradient and update weights new_cost = cost(X_data, Y_data, weights) # compute loss if early_stop if best_cost === nothing || isnan(best_cost) best_cost = new_cost else if new_cost &gt; best_cost - tol n_cost_no_change -= 1 else best_cost = min(new_cost, best_cost) n_cost_no_change = n_iter_no_change end end end end </code></pre> <p>Here, <code>tol</code> is set to 0.001, <code>max_iter</code> is 1000, <code>learning_rate</code> is 0.05, and they are all constant during training.</p> <p>The problem is that the computed <code>cost</code> for each iteration is varying a lot.<br /> In order to force finding a global minimum, I have to turn off <code>early_stop</code> and set <code>max_iter</code> to 10000. 
Otherwise, it will stop early within few iterations, and output a bad result.</p> <p>Here's a graph showing how <code>cost</code> vary by iterations:<br /> <a href="https://i.sstatic.net/zxqnV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zxqnV.png" alt="enter image description here" /></a></p> <p>These iterations are just waste of time.</p> <p>My guess is that I also need to change my <code>learning_rate</code> each iteration if there's no improvement on the <code>cost</code>.<br /> The problem is that I don't know how to implement an update to <code>learning_rate</code> in a way such that the variance on <code>cost</code> is reduced and I won't fall into a local minimum (if it exists), and find the optimal weights for linear SVM.</p> <p>Anyone has any suggestions?</p> <h2>update</h2> <p>Here's my <code>learn!</code> function:</p> <pre><code>function learn!(X_data::Array{T} where T&lt;:Number, Y_data::Array{T} where T&lt;:Number, weights::WeightsLinearSVM, alpha::AbstractFloat) @assert ndims(Y_data) == ndims(weights.w) == 1 @assert size(X_data) == (size(Y_data)[1], size(weights.w)[1]) # compute deciding feature decide = (Y_data .* (X_data * weights.w .+ weights.b)) .&lt; 1 # (? 
&lt; 1) will be 1, otherwise 0 # update w gradient_w = weights.w .+ (weights.C / size(X_data)[1]) .* vec(-(Y_data .* decide)' * X_data) gradient_w .= gradient_w .* alpha weights.w .= weights.w .- gradient_w # update b gradient_b = (weights.C / size(X_data)[1]) * sum(-(Y_data .* decide)) gradient_b *= alpha weights.b = weights.b - gradient_b return nothing end </code></pre> <p><code>cost</code> function:</p> <pre><code>function cost(X_data::Array{T} where T&lt;:Number, Y_data::Array{T} where T&lt;:Number, weights::WeightsLinearSVM)::AbstractFloat @assert ndims(Y_data) == ndims(weights.w) == 1 @assert size(X_data) == (size(Y_data)[1], size(weights.w)[1]) loss_w = 0.5 * (weights.w' * weights.w) loss_inner = 1.0 .- Y_data .* vec(X_data * weights.w .+ weights.b) loss_inner .= map(m-&gt;max(0.0,m), loss_inner) loss = loss_w + weights.C * sum(loss_inner) / size(X_data)[1] return loss end </code></pre>
<p>After reading this <a href="https://ruder.io/optimizing-gradient-descent/index.html#momentum" rel="nofollow noreferrer">article</a>, I tried to use a momentum on gradient update.</p> <p>My new <code>learn!</code> function looks like this:</p> <pre><code>function learn!(X_data::Array{T} where T&lt;:Number, Y_data::Array{T} where T&lt;:Number, weights::WeightsLinearSVM, momentum::WeightsLinearSVM, alpha::AbstractFloat) @assert ndims(Y_data) == ndims(weights.w) == 1 @assert size(X_data) == (size(Y_data)[1], size(weights.w)[1]) # compute deciding feature decide = (Y_data .* (X_data * weights.w .+ weights.b)) .&lt; 1 # (? &lt; 1) will be 1, otherwise 0 # update w gradient_w = weights.w .+ (weights.C / size(X_data)[1]) .* vec(-(Y_data .* decide)' * X_data) gradient_w .= gradient_w .* alpha momentum.w .= gradient_w .+ (0.9 .* momentum.w) weights.w .= weights.w .- momentum.w # update b gradient_b = (weights.C / size(X_data)[1]) * sum(-(Y_data .* decide)) gradient_b *= alpha momentum.b = gradient_b + (0.9 * momentum.b) weights.b = weights.b - momentum.b return nothing end </code></pre> <p>Here is a comparison on the cost oscillation.</p> <p>Before using momentum:<br /> <a href="https://i.sstatic.net/8bWJe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8bWJe.png" alt="enter image description here" /></a></p> <p>After using momentum:<br /> <a href="https://i.sstatic.net/JgvH9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JgvH9.png" alt="enter image description here" /></a></p> <p>I'm satisfied with the improvement. Probably will try other optimization algorithms mentioned in the article as well.</p>
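For readers following along in Python rather than Julia, the classical momentum update from the linked article can be sketched on a made-up ill-conditioned quadratic (not the SVM loss) to see the oscillation-damping effect:

```python
import numpy as np

# Gradient of f(w) = 0.5 * w @ A @ w, a deliberately ill-conditioned
# quadratic that makes plain gradient descent oscillate on the steep axis.
A = np.diag([1.0, 50.0])

def grad(w):
    return A @ w

w = np.array([1.0, 1.0])
v = np.zeros(2)          # momentum buffer
alpha, beta = 0.02, 0.9  # step size, and the commonly used momentum factor

for _ in range(300):
    v = beta * v + alpha * grad(w)  # accumulate a decaying sum of gradients
    w = w - v

print(np.linalg.norm(w))  # close to 0: the oscillation is damped out
```

With the same step size and no momentum (`beta = 0`), the steep direction shrinks but the shallow one crawls; the momentum term averages out the back-and-forth steps, which is the smoothing visible in the second cost plot.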
345
gradient descent implementation
Self-Implementation of Gradient Descent Compared to SciPy Minimize
https://stackoverflow.com/questions/56586436/self-implementation-of-gradient-descent-compared-to-scipy-minimize
<p>This is an assignment for a convex optimization class that I'm taking. The assignment is as follows:</p> <blockquote> <p>Implement the gradient descent algorithm with backtracking line search to find the optimal step size. Your implementation will be compared to Python's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow noreferrer"><code>scipy.optimize.minimize</code></a> function.</p> <p>The specific function to minimize is the least squares function. The error between the solution found by the Python library and your implementation must be smaller than 0.001.</p> </blockquote> <p>I've made an implementation but the error value hovers around 1, and have been searching for ways to improve it but have been having some trouble. Here's the code that I wrote:</p> <p><strong>Gradient Descent + Backtracking Line Search Implementation</strong></p> <pre><code>import numpy as np # Gradient descent. def min_gd(fun, x0, grad, args=()): alpha = 0.3 beta = 0.8 delta_x = -grad(x0, *args) t = backtracking_line_search(fun, x0, grad, delta_x, alpha, beta, args) x_new = x0 + (t * delta_x) if np.linalg.norm(x_new) ** 2 &gt; np.linalg.norm(x0) ** 2: return min_gd(fun, x_new, grad, args) else: return x_new # Line search function returns optimal step size. 
def backtracking_line_search(fun, x, grad, delta_x, alpha, beta, args=()): t = 1 derprod = grad(x, *args) @ delta_x while fun((x + (t * delta_x)), *args) &gt; fun(x, *args) + (alpha * t * derprod): t *= beta return t </code></pre> <p><strong>Other given functions</strong></p> <pre><code>import numpy as np from scipy.optimize import minimize import gd # Least Squares function def LeastSquares(x, A, b): return np.linalg.norm(A @ x - b) ** 2 # gradient def grad_LeastSquares(x, A, b): return 2 * ((A.T @ A) @ x - A.T @ b) </code></pre> <p>The error between the two results is basically calculated using the L2-norm.</p> <p>Some ideas that I've came up with are that the tolerance checking point in my gradient descent function may be flawed. Right now I'm essentially checking simply whether the next step is larger than the previous. However, I'm also having trouble wrapping my head around how I might improve that.</p> <p>Any feedback is appreciated.</p> <p><strong>Edit</strong></p> <p>In case anyone is curious on the final code that I wrote to make it work in the desired way:</p> <pre><code>def min_gd(fun, x0, grad, args=()): alpha = 0.3 beta = 0.8 delta_x = -grad(x0, *args) t = backtracking_line_search(fun, x0, grad, delta_x, alpha, beta, args) x_new = x0 + (t * delta_x) if np.linalg.norm(grad(x_new, *args)) &lt; 0.01: return x_new else: return min_gd(fun, x_new, grad, args) </code></pre> <p>I simply fixed the conditional statement so that I'm not just comparing norms, but also checking if the value is smaller than a predetermined tolerance level.</p> <p>Hope this helps anyone in the future.</p>
<p>Your guess about the tolerance checking is right: the norm of the current vector is not related to convergence. A typical criterion would be a small gradient, so <code>min_gd</code> should look like</p> <pre><code>def min_gd(fun, x0, grad, args=()): alpha = 0.3 beta = 0.8 eps = 0.001 x_new = x0 delta_x = -grad(x0, *args) while np.linalg.norm(delta_x) &gt; eps: t = backtracking_line_search(fun, x_new, grad, delta_x, alpha, beta, args) x_new = x_new + (t * delta_x) delta_x = -grad(x_new, *args) return x_new </code></pre> <p>where <code>eps</code> is some small positive tolerance.</p>
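The gradient-norm stopping rule can be exercised end-to-end on a tiny made-up least-squares instance. This sketch uses a fixed step size instead of backtracking line search, purely to show the criterion terminating and the result matching a library solver:

```python
import numpy as np

# Made-up least-squares problem: minimize ||A x - b||^2.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

def grad(x):
    return 2 * (A.T @ A @ x - A.T @ b)

x = np.zeros(2)
t, eps = 0.05, 1e-6  # fixed step (small enough for this problem) and tolerance
while np.linalg.norm(grad(x)) > eps:
    x = x - t * grad(x)

# Compare with the library least-squares solution.
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))  # well under the assignment's 0.001 bound
```

A small gradient norm directly bounds the distance to the optimum for strongly convex problems like least squares, which is why it is the standard criterion rather than the size of the iterate itself.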
346
gradient descent implementation
Python implementation of Gradient Descent Algorithm isn&#39;t working
https://stackoverflow.com/questions/30745627/python-implementation-of-gradient-descent-algorithm-isnt-working
<p>I am trying to implement a gradient descent algorithm for simple linear regression. For some reason it doesn't seem to be working. </p> <pre class="lang-py prettyprint-override"><code>from __future__ import division import random def error(x_i,z_i, theta0,theta1): return z_i - theta0 - theta1 * x_i def squared_error(x_i,z_i,theta0,theta1): return error(x_i,z_i,theta0,theta1)**2 def mse_fn(x, z, theta0,theta1): m = 2 * len(x) return sum(squared_error(x_i,z_i,theta0,theta1) for x_i,z_i in zip(x,z)) / m def mse_gradient(x, z, theta0,theta1): m = 2 * len(x) grad_0 = sum(error(x_i,z_i,theta0,theta1) for x_i,z_i in zip(x,z)) / m grad_1 = sum(error(x_i,z_i,theta0,theta1) * x_i for x_i,z_i in zip(x,z)) / m return grad_0, grad_1 def minimize_batch(x, z, mse_fn, mse_gradient_fn, theta0,theta1,tolerance=0.000001): step_sizes = 0.01 theta0 = theta0 theta1 = theta1 value = mse_fn(x,z,theta0,theta1) while True: grad_0, grad_1 = mse_gradient(x,z,theta0,theta1) next_theta0 = theta0 - step_sizes * grad_0 next_theta1 = theta1 - step_sizes * grad_1 next_value = mse_fn(x,z,next_theta0,theta1) if abs(value - next_value) &lt; tolerance: return theta0, theta1 else: theta0, theta1, value = next_theta0, next_theta1, next_value #The data x = [i + 1 for i in range(40)] y = [random.randrange(1,30) for i in range(40)] z = [2*x_i + y_i + (y_i/7) for x_i,y_i in zip(x,y)] theta0, theta1 = [random.randint(-10,10) for i in range(2)] q = minimize_batch(x,z,mse_fn, mse_gradient, theta0,theta1,tolerance=0.000001) </code></pre> <p>When I run I get the following error: return error(x_i,z_i,theta0,theta1)**2 OverflowError: (34, 'Result too large')</p>
347
gradient descent implementation
Implement gradient descent in python
https://stackoverflow.com/questions/56887503/implement-gradient-descent-in-python
<p>I am trying to implement gradient descent in python. Though my code is returning a result, I think the results I am getting are completely wrong. </p> <p>Here is the code I have written:</p> <pre><code>import numpy as np import pandas dataset = pandas.read_csv('D:\ML Data\house-prices-advanced-regression-techniques\\train.csv') X = np.empty((0, 1),int) Y = np.empty((0, 1), int) for i in range(dataset.shape[0]): X = np.append(X, dataset.at[i, 'LotArea']) Y = np.append(Y, dataset.at[i, 'SalePrice']) X = np.c_[np.ones(len(X)), X] Y = Y.reshape(len(Y), 1) def gradient_descent(X, Y, theta, iterations=100, learningRate=0.000001): m = len(X) for i in range(iterations): prediction = np.dot(X, theta) theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y)) return theta theta = np.random.randn(2,1) theta = gradient_descent(X, Y, theta) print('theta',theta) </code></pre> <p>The result I get after running this program is:</p> <blockquote> <p>theta [[-5.23237458e+228] [-1.04560188e+233]]</p> </blockquote> <p>These are very high values. Can someone point out the mistake I have made in the implementation?</p> <p>Also, a second problem is that I have to set the learning rate very low (in this case 0.000001) for it to work; otherwise the program throws an error.</p> <p>Please help me diagnose the problem.</p>
<p>Try reducing the learning rate with each iteration; otherwise it won't be able to reach the optimum. Try this:</p> <pre><code>import numpy as np import pandas dataset = pandas.read_csv('start.csv') X = np.empty((0, 1),int) Y = np.empty((0, 1), int) for i in range(dataset.shape[0]): X = np.append(X, dataset.at[i, 'R&amp;D Spend']) Y = np.append(Y, dataset.at[i, 'Profit']) X = np.c_[np.ones(len(X)), X] Y = Y.reshape(len(Y), 1) def gradient_descent(X, Y, theta, iterations=50, learningRate=0.01): m = len(X) for i in range(iterations): prediction = np.dot(X, theta) theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y)) learningRate /= 10 return theta theta = np.random.randn(2,1) theta = gradient_descent(X, Y, theta) print('theta',theta) </code></pre>
348
gradient descent implementation
Gradient Descent and Normal Equation give different theta values for multivariate linear regression.Why?
https://stackoverflow.com/questions/44347215/gradient-descent-and-normal-equation-give-different-theta-values-for-multivariat
<h1>Vectorized implementation of gradient descent</h1> <pre><code>for iter = 1:num_iters theta = theta - (alpha / m) * X' * (X * theta - y); J_history(iter) = computeCostMulti(X, y, theta); end </code></pre> <h1>Implementation of computeCostMulti()</h1> <pre><code>function J = computeCostMulti(X, y, theta) m = length(y); J = 0; J = 1 / (2 * m) * (X * theta - y)' * (X * theta - y); </code></pre> <h1>Normal equation implementation</h1> <pre><code>theta = pinv(X' * X) * X' * y; </code></pre> <p>These two implementations converge to different values of theta for the same values of X and y. The Normal Equation gives the right answer but Gradient descent gives a wrong answer. </p> <p><strong>Is there anything wrong with the implementation of Gradient Descent?</strong></p>
<p>I suppose that when you use gradient descent, you first process your input using feature scaling. That is not done with the normal equation method (as feature scaling is not required), and that should result in a different theta. If you use your models to make predictions they should come up with the same result.</p>
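The point of the answer can be reproduced in a few lines: gradient descent on scaled features and the normal equation on raw features learn different thetas but make the same predictions. The numbers below are made up for illustration:

```python
import numpy as np

# Made-up house-price-style data: intercept column plus one large-scale feature.
X_raw = np.array([[1.0, 2104.0],
                  [1.0, 1416.0],
                  [1.0, 1534.0],
                  [1.0,  852.0]])
y = np.array([460.0, 232.0, 315.0, 178.0])

# Feature scaling (leave the intercept column alone).
mu, sigma = X_raw[:, 1].mean(), X_raw[:, 1].std()
X = X_raw.copy()
X[:, 1] = (X_raw[:, 1] - mu) / sigma

# Gradient descent on the scaled features.
m, alpha = len(y), 0.1
theta = np.zeros(2)
for _ in range(5000):
    theta -= (alpha / m) * X.T @ (X @ theta - y)

# Normal equation on the raw features: a different theta...
theta_ne = np.linalg.pinv(X_raw.T @ X_raw) @ X_raw.T @ y

# ...but identical predictions, because the two designs span the same space.
print(np.allclose(X @ theta, X_raw @ theta_ne))  # True
```

To compare the thetas themselves, you would have to unscale them (divide the slope by `sigma` and adjust the intercept by `mu`), which is what the answer means by the models agreeing on predictions.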
349
gradient descent implementation
Gradient becoming larger and larger while implementing gradient descent
https://stackoverflow.com/questions/48348275/gradient-becoming-larger-and-larger-while-implementing-gradient-descent
<p>I am trying to implement the gradient descent in python. The data is that of housing prices and i want to predict the house price. But the problem is that the gradient is becoming larger and larger until python cannot process it anymore.</p> <pre><code>import numpy as np import sys from numpy import genfromtxt train_file = sys.argv[1] dev_file = sys.argv[2] learning = sys.argv[3] X_train = np.genfromtxt(train_file, dtype='f', delimiter = ',', skip_header = 1,filling_values = 0, usecols = (3,4,5,6,8,9,10,11,12,13,14,15,17,18,19,20)) y_train = np.genfromtxt(train_file, dtype='f', delimiter = ',', skip_header = 1,filling_values = 0, usecols = (2)) training_examples = X_train.shape[0] total_featues = X_train.shape[1] Wprime = np.asarray([0]*total_featues) W = Wprime.reshape(-1,1) k = 0 while k&lt;total_featues : i=0 temp_sum = 0 #print(X_train[i][k]) while i&lt; training_examples: A = Wprime B = X_train[i] #print(y_train[i]) f = abs(np.dot(A,B)-y_train[i]) #print("this is f"+str(f)) f = f*X_train[i][k] temp_sum = temp_sum + f i=i+1 print("this is temp sum " + str(temp_sum)) update = temp_sum*0.0001/training_examples print("this is update "+str(update)) Wprime[k] = Wprime[k] - update print(Wprime) k = k+1 </code></pre> <p>Can anyone help me out?</p>
350
gradient descent implementation
Simple Gradient Descent Implementation Error
https://stackoverflow.com/questions/65609277/simple-gradient-descent-implementation-error
<p>I have tried to use a toy linear regression problem to implement optimisation of the MSE function using the gradient descent algorithm.</p> <pre><code>import numpy as np # Data points x = np.array([1, 2, 3, 4]) y = np.array([1, 1, 2, 2]) # MSE function f = lambda a, b: 1 / len(x) * np.sum(np.power(y - (a * x + b), 2)) # Gradient def grad_f(v_coefficients): a = v_coefficients[0, 0] b = v_coefficients[1, 0] return np.array([1 / len(x) * np.sum(2 * (y - (a * x + b)) * x), 1 / len(x) * np.sum(2 * (y - (a * x + b)))]).reshape(2, 1) # Gradient descent with epsilon as tol vector and alpha as the step/learning rate def gradient_decent(v_prev): tol = 10 ** -3 epsilon = np.array([tol * np.ones([2, 1], int)]) alpha = 0.2 v_next = v_prev - alpha * grad_f(v_prev) if (np.abs(v_next - v_prev) &lt;= epsilon).all(): return v_next else: gradient_decent(v_next) # v_0 is the initial guess v_0 = np.array([[1], [1]]) gradient_decent(v_0) </code></pre> <p>I have tried different alpha values, but the code never converges (infinite recursion). It seems that the issue is with the stop condition of the recursion, but after a few runs <code>v_next</code> and <code>v_prev</code> bounce between negative and positive infinity.</p>
<p>It's great that you are learning machine learning (^_^) by implementing some base algorithms by yourself. Regarding your question, there are two problems in your code, first one is <strong>mathematical</strong>, the sign in:</p> <pre><code>def grad_f(v_coefficients): a = v_coefficients[0, 0] b = v_coefficients[1, 0] return np.array([1 / len(x) * np.sum(2 * (y - (a * x + b)) * x), 1 / len(x) * np.sum(2 * (y - (a * x + b)))]).reshape(2, 1) </code></pre> <p>should be</p> <pre><code>return -np.array(...) </code></pre> <p>since <img src="https://latex.codecogs.com/gif.latex?%5Cfrac%7Bd%20%5Csum%7B%28y%20-%20ax%20-%20b%29%5E2%7D%7D%7Bda%7D%20%3D%20-2%5Csum%7B%28y%20-%20ax%20-%20b%29x%7D" alt="E=mc^2" /></p> <p>the second one is <strong>programming</strong>, this kind of code will not return you a result in Python:</p> <pre><code>def add(x): new_x = x + 1 if new_x &gt; 10: return new_x else: add(new_x) </code></pre> <p>you must use <code>return</code> in both clauses of the <code>if</code> statement, so it should be</p> <pre><code>def add(x): new_x = x + 1 if new_x &gt; 10: return new_x else: return add(new_x) </code></pre> <p>There is also a minor issue with the alpha coefficient for these particular data points <code>alpha=0.2</code> is too big for algorithm to converge, you need to use smaller <code>alpha</code>. 
I also slightly refactored your initial code using the numpy broadcasting convention (<a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/user/basics.broadcasting.html</a>), and made the stop condition compare against <code>tol</code> rather than <code>alpha</code>, to get the following result:</p> <pre><code>import numpy as np # Data points x = np.array([1, 2, 3, 4]) y = np.array([1, 1, 2, 2]) # MSE function f = lambda a, b: np.mean(np.power(y - (a * x + b), 2)) # Gradient def grad_f(v_coefficients): a = v_coefficients[0, 0] b = v_coefficients[1, 0] return -np.array([np.mean(2 * (y - (a * x + b)) * x), np.mean(2 * (y - (a * x + b)))]).reshape(2, 1) # Gradient descent with epsilon as tol vector and alpha as the step/learning rate def gradient_decent(v_prev): tol = 1e-3 # epsilon = np.array([tol * np.ones([2, 1], int)]) do not need this, due to numpy broadcasting rules alpha = 0.1 v_next = v_prev - alpha * grad_f(v_prev) if (np.abs(v_next - v_prev) &lt;= tol).all(): return v_next else: return gradient_decent(v_next) # v_0 is the initial guess v_0 = np.array([[1], [1]]) gradient_decent(v_0) </code></pre>
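One caveat with the recursive form: each descent step consumes a Python stack frame, so a slowly converging run can still die near the interpreter's recursion limit. An iterative rewrite with the same update and stop condition (the helper name here is made up) avoids that:

```python
import numpy as np

def gradient_descent_iter(grad_f, v, alpha=0.1, tol=1e-3, max_iter=10_000):
    """Same update rule as the recursive version, but loop instead of recurse."""
    for _ in range(max_iter):
        step = alpha * grad_f(v)
        v = v - step
        if (np.abs(step) <= tol).all():  # |v_next - v_prev| <= tol, elementwise
            break
    return v

# The question's toy data and its (sign-corrected) gradient, on flat vectors.
x = np.array([1, 2, 3, 4])
y = np.array([1, 1, 2, 2])

def grad_f(v):
    a, b = v
    return -np.array([np.mean(2 * (y - (a * x + b)) * x),
                      np.mean(2 * (y - (a * x + b)))])

v = gradient_descent_iter(grad_f, np.array([1.0, 1.0]))
print(v)  # approaches the least-squares fit a = 0.4, b = 0.5
```

The loop also makes it trivial to cap the iteration count, which a bare recursion does not.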
351
gradient descent implementation
Linear Regression and Gradient Descent in Scikit learn?
https://stackoverflow.com/questions/34469237/linear-regression-and-gradient-descent-in-scikit-learn
<p>In <a href="https://share.coursera.org/wiki/index.php/ML:Linear_Regression_with_Multiple_Variables#Gradient_Descent_for_Multiple_Variables" rel="nofollow noreferrer">this Coursera course</a> for machine learning, it says gradient descent should converge.</p> <p>I'm using Linear regression from scikit learn. It doesn't provide gradient descent info. I have seen many questions on StackOverflow to implement linear regression with gradient descent.</p> <p>How do we use Linear regression from scikit-learn in real world? OR Why doesn't scikit-learn provide gradient descent info in linear regression output?</p>
<p>Scikit learn provides you two approaches to linear regression:</p> <ol> <li><p><code>LinearRegression</code> object uses Ordinary Least Squares solver from scipy, as LR is one of two classifiers which have <strong>closed form solution</strong>. Despite the ML course - you can actually learn this model by just inverting and multiplicating some matrices.</p> </li> <li><p><code>SGDRegressor</code> which is an implementation of <strong>stochastic gradient descent</strong>, very generic one where you can choose your penalty terms. To obtain linear regression you choose loss to be <code>L2</code> and penalty also to <code>none</code> (linear regression) or <code>L2</code> (Ridge regression)</p> </li> </ol> <p>There is no &quot;typical gradient descent&quot; because it is <strong>rarely used</strong> in practise. If you can decompose your loss function into additive terms, then stochastic approach is known to behave better (thus SGD) and if you can spare enough memory - OLS method is faster and easier (thus first solution).</p>
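The two routes the answer contrasts can be sketched without scikit-learn at all, on synthetic data. This is a bare-bones illustration of the ideas, not the library's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.uniform(0, 1, 50)]            # intercept + one feature
y = X @ np.array([1.0, 3.0]) + 0.01 * rng.standard_normal(50)

# 1. Closed-form / least-squares solver (the LinearRegression route).
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2. Stochastic gradient descent, squared loss, no penalty
#    (the SGDRegressor route, stripped to its core per-sample update).
theta_sgd = np.zeros(2)
alpha = 0.05
for epoch in range(200):
    for i in rng.permutation(len(y)):                    # shuffle each epoch
        theta_sgd += alpha * (y[i] - X[i] @ theta_sgd) * X[i]

print(theta_ols, theta_sgd)  # both land near [1, 3]
```

The closed form solves the problem in one linear-algebra call; SGD touches one sample at a time, which is what lets it scale to data that does not fit in memory.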
352
gradient descent implementation
Implementing backpropagation gradient descent using scipy.optimize.minimize
https://stackoverflow.com/questions/47579719/implementing-backpropagation-gradient-descent-using-scipy-optimize-minimize
<p>I am trying to train an autoencoder NN (3 layers - 2 visible, 1 hidden) using numpy and scipy for the MNIST digits images dataset. The implementation is based on the notation given <a href="http://ufldl.stanford.edu/wiki/index.php/Neural_Networks" rel="nofollow noreferrer">here</a> Below is my code:</p> <pre><code>def autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, data): """ The input theta is a 1-dimensional array because scipy.optimize.minimize expects the parameters being optimized to be a 1d array. First convert theta from a 1d array to the (W1, W2, b1, b2) matrix/vector format, so that this follows the notation convention of the lecture notes and tutorial. You must compute the: cost : scalar representing the overall cost J(theta) grad : array representing the corresponding gradient of each element of theta """ training_size = data.shape[1] # unroll theta to get (W1,W2,b1,b2) # W1 = theta[0:hidden_size*visible_size] W1 = W1.reshape(hidden_size,visible_size) W2 = theta[hidden_size*visible_size:2*hidden_size*visible_size] W2 = W2.reshape(visible_size,hidden_size) b1 = theta[2*hidden_size*visible_size:2*hidden_size*visible_size + hidden_size] b2 = theta[2*hidden_size*visible_size + hidden_size: 2*hidden_size*visible_size + hidden_size + visible_size] #feedforward pass a_l1 = data z_l2 = W1.dot(a_l1) + numpy.tile(b1,(training_size,1)).T a_l2 = sigmoid(z_l2) z_l3 = W2.dot(a_l2) + numpy.tile(b2,(training_size,1)).T a_l3 = sigmoid(z_l3) #backprop delta_l3 = numpy.multiply(-(data-a_l3),numpy.multiply(a_l3,1-a_l3)) delta_l2 = numpy.multiply(W2.T.dot(delta_l3), numpy.multiply(a_l2, 1 - a_l2)) b2_derivative = numpy.sum(delta_l3,axis=1)/training_size b1_derivative = numpy.sum(delta_l2,axis=1)/training_size W2_derivative = numpy.dot(delta_l3,a_l2.T)/training_size + lambda_*W2 #print(W2_derivative.shape) W1_derivative = numpy.dot(delta_l2,a_l1.T)/training_size + lambda_*W1 W1_derivative = W1_derivative.reshape(hidden_size*visible_size) W2_derivative 
= W2_derivative.reshape(visible_size*hidden_size) b1_derivative = b1_derivative.reshape(hidden_size) b2_derivative = b2_derivative.reshape(visible_size) grad = numpy.concatenate((W1_derivative,W2_derivative,b1_derivative,b2_derivative)) cost = 0.5*numpy.sum((data-a_l3)**2)/training_size + 0.5*lambda_*(numpy.sum(W1**2) + numpy.sum(W2**2)) return cost,grad </code></pre> <p>I have also implemented a function to estimate the numerical gradient and verify the correctness of my implementation (below).</p> <pre><code>def compute_gradient_numerical_estimate(J, theta, epsilon=0.0001): """ :param J: a loss (cost) function that computes the real-valued loss given parameters and data :param theta: array of parameters :param epsilon: amount to vary each parameter in order to estimate the gradient by numerical difference :return: array of numerical gradient estimate """ gradient = numpy.zeros(theta.shape) eps_vector = numpy.zeros(theta.shape) for i in range(0,theta.size): eps_vector[i] = epsilon cost1,grad1 = J(theta+eps_vector) cost2,grad2 = J(theta-eps_vector) gradient[i] = (cost1 - cost2)/(2*epsilon) eps_vector[i] = 0 return gradient </code></pre> <p>The norm of the difference between the numerical estimate and the one computed by the function is around 6.87165125021e-09 which seems to be acceptable. 
My main problem seems to be to get the gradient descent algorithm "L-BFGS-B" working using the <code>scipy.optimize.minimize</code> function as below:</p> <pre><code># theta is the 1-D array of (W1,W2,b1,b2) J = lambda x: utils.autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, patches_train) options_ = {'maxiter': 4000, 'disp': False} result = scipy.optimize.minimize(J, theta, method='L-BFGS-B', jac=True, options=options_) </code></pre> <p>I get the below output from this:</p> <pre><code>scipy.optimize.minimize() details: fun: 90.802022224079778 hess_inv: &lt;16474x16474 LbfgsInvHessProduct with dtype=float64&gt; jac: array([ -6.83667742e-06, -2.74886002e-06, -3.23531941e-06, ..., 1.22425735e-01, 1.23425062e-01, 1.28091250e-01]) message: b'ABNORMAL_TERMINATION_IN_LNSRCH' nfev: 21 nit: 0 status: 2 success: False x: array([-0.06836677, -0.0274886 , -0.03235319, ..., 0. , 0. , 0. ]) </code></pre> <p>Now, this <a href="https://stackoverflow.com/questions/34663539/scipy-optimize-fmin-l-bfgs-b-returns-abnormal-termination-in-lnsrch">post</a> seems to indicate that the error could mean that the gradient function implementation could be wrong? But my numerical gradient estimate seems to confirm that my implementation is correct. I have tried varying the initial weights by using a uniform distribution as specified <a href="https://stats.stackexchange.com/questions/47590/what-are-good-initial-weights-in-a-neural-network/186351#186351">here</a> but the problem still persists. Is there anything wrong with my backprop implementation?</p>
<p>Turns out the issue was a syntax error (very silly) with this line:</p> <pre><code>J = lambda x: utils.autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, patches_train) </code></pre> <p>I don't even have the <code>lambda</code> parameter <code>x</code> in the function declaration. So the <code>theta</code> array wasn't even being passed whenever <code>J</code> was being invoked.</p> <p>This fixed it:</p> <pre><code>J = lambda x: utils.autoencoder_cost_and_grad(x, visible_size, hidden_size, lambda_, patches_train) </code></pre>
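The failure mode generalises: a wrapper that ignores its argument presents the optimizer with a function whose value never changes, so line searches cannot make progress. A minimal illustration with a made-up cost function (not the autoencoder code):

```python
import numpy as np

def cost_and_grad(theta):
    # Made-up stand-in for a (cost, gradient) function like
    # autoencoder_cost_and_grad: f(theta) = ||theta||^2.
    return float(theta @ theta), 2 * theta

theta0 = np.array([1.0, -2.0])

# Bug: the lambda ignores x, so every query returns the value at theta0.
J_bad = lambda x: cost_and_grad(theta0)
# Fix: forward the optimizer's current point.
J_good = lambda x: cost_and_grad(x)

probe = np.array([100.0, 100.0])
print(J_bad(probe)[0] == J_bad(theta0)[0])    # True: x had no effect
print(J_good(probe)[0] == J_good(theta0)[0])  # False: x is actually used
```

A quick sanity check like this (evaluate the objective at two very different points and confirm the values differ) catches the bug before any optimizer run.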
353
gradient descent implementation
Batch gradient descent algorithm implementation in python
https://stackoverflow.com/questions/47593225/batch-gradient-descent-algorithm-implementation-in-python
<p>I learned the Batch gradient descent algorithm recently and tried implementing it in Python. I used a data set which is not random. When I tried running the below code, the process is converging after 3 iterations but with a big error. Can someone guide me in a right way? Sample Data set:(original data set length is 600.)</p> <pre><code>6203.75 1 173.8 43.6 0.0 183.0 6329.75 1 115.0 60.1 0.0 236.2 5830.75 1 159.5 94.1 21.0 275.8 4061.75 1 82.5 45.0 11.0 75.7 3311 1 185.5 46.1 4.0 0.0 4349.75 1 169.5 40.3 5.0 73.5 5695.25 1 138.5 68.9 6.0 204.2 5633.5 1 50.0 117.3 4.0 263.9 </code></pre> <p>First column is the output. Second column is the constant value. Rest are features.</p> <p>Thank you</p> <pre><code>data = open('Data_trial.txt','r') import time lines=data.readlines() dataSet=[] for line in lines: dataSet.append(line.split()) original_output=[] features=[] for i in range(0,len(dataSet)): features.append([]) predict=[] grad=[] weights=[0,0,0,0,0] learning_factor=0.01 for i in range(0,len(dataSet)): for j in range(0,len(dataSet[i])): if j==0: original_output.append(float(dataSet[i][j])) else: features[i].append(float(dataSet[i][j])) def prediction(predict,weights,original_output,features): for count in range(0,len(original_output)): predict.append(sum(weights[i]*features[count][i] for i in range(0,len(features[count])))) print(&quot;predicted values&quot;,predict) def gradient(predict,grad,original_output,features): for count in range(0,len(weights)): grad.append(sum((predict[i]-original_output[i])*features[i][count] for i in range(0,len(original_output)))) print(&quot;Gradient values&quot;,grad) def weights_update(grad,learning_factor,weights): for i in range(0,len(weights)): weights[i]-=learning_factor*grad[i] print(&quot;Updated weights&quot;,weights) if __name__==&quot;__main__&quot;: while True: prediction(predict,weights,original_output,features) gradient(predict,grad,original_output,features) weights_update(grad,learning_factor,weights) time.sleep(1) 
predict=[] grad=[] print() </code></pre>
354
gradient descent implementation
Python implementation of gradient descent (Machine Learning)
https://stackoverflow.com/questions/19889918/python-implementation-of-gradient-descent-machine-learning
<p>I have tried to implement gradient descent here in python, but the cost J just seems to be increasing irrespective of the lambda and alpha values, and I am unable to figure out what the issue here is. It'll be great if someone can help me out with this. The input is matrices Y and R with the same dimensions. Y is a matrix of movies x users and R just says whether a user has rated a movie.</p> <pre><code>#Recommender system ML import numpy import scipy.io def gradientDescent(y,r): (nm,nu) = numpy.shape(y) x = numpy.mat(numpy.random.randn(nm,10)) theta = numpy.mat(numpy.random.randn(nu,10)) for i in range(1,10): (x,theta) = costFunc(x,theta,y,r) def costFunc(x,theta,y,r): X_tmp = numpy.power(x , 2) Theta_tmp = numpy.power(theta , 2) lmbda = 0.1 reg = ((lmbda/2) * numpy.sum(Theta_tmp))+ ((lmbda/2)*numpy.sum(X_tmp)) ans = numpy.multiply(numpy.power(((theta * x.T).T - y),2) , r) res = (0.5 * numpy.sum(ans))+reg print "J:",res print "reg:",reg (nm,nu) = numpy.shape(y) X_grad = numpy.mat(numpy.zeros((nm,10))); Theta_grad = numpy.mat(numpy.zeros((nu,10))); alpha = 0.1 # [m f] = size(X); (m,f) = numpy.shape(x); for i in range(0,m): for k in range(0,f): tmp = 0 # X_grad(i,k) += (((theta * x'(:,i)) - y(i,:)').*r(i,:)')' * theta(:,k); tmp += ((numpy.multiply(((theta * x.T[:,i]) - y[i,:].T),r[i,:].T)).T) * theta[:,k]; tmp += (lmbda*x[i,k]); X_grad[i,k] -= (alpha*tmp) # X_grad(i,k) += (lambda*X(i,k)); # [m f] = size(Theta); (m,f) = numpy.shape(theta); for i in range(0,m): for k in range(0,f): tmp = 0 # Theta_grad(i,k) += (((theta(i,:) * x') - y(:,i)').*r(:,i)') * x(:,k); tmp += (numpy.multiply(((theta[i,:] * x.T) - y[:,i].T),r[:,i].T)) * x[:,k]; tmp += (lmbda*theta[i,k]); Theta_grad[i,k] -= (alpha*tmp) # Theta_grad(i,k) += (lambda*Theta(i,k)); return(X_grad,Theta_grad) def main(): mat1 = scipy.io.loadmat("C:\Users\ROHIT\Machine Learning\Coursera\mlclass-ex8\ex8_movies.mat") Y = mat1['Y'] R = mat1['R'] r = numpy.mat(R) y = numpy.mat(Y) gradientDescent(y,r) #if __init__ == '__main__':
main() </code></pre>
<p>I did not check the whole code logic, but assuming it is correct, your <code>costFunc</code> is supposed to return the gradient of the cost function, and in these lines:</p> <pre><code>for i in range(1,10): (x,theta) = costFunc(x,theta,y,r) </code></pre> <p>you are overwriting the last values of x and theta with their gradients, while the gradient is a measure of change, so you should move in the opposite direction (subtract the gradient instead of overwriting the values). Note that Python does not allow augmented assignment to a tuple, so unpack the gradients first:</p> <pre><code>for i in range(1,10): x_grad, theta_grad = costFunc(x,theta,y,r) x -= x_grad theta -= theta_grad </code></pre> <p>But it seems that you already attach the minus sign to the gradient in your <code>costFunc</code>, so you should add this value instead:</p> <pre><code>for i in range(1,10): x_grad, theta_grad = costFunc(x,theta,y,r) x += x_grad theta += theta_grad </code></pre>
355
gradient descent implementation
What is the issue with this implementation of gradient descent?
https://stackoverflow.com/questions/41349806/what-is-the-issue-with-this-implementation-of-gradient-descent
<p>I tried to implement linear regression with gradient descent, but my error diverges to infinity. I've read over my code and still cannot figure out where I went wrong. I'm hoping someone can help me debug why this implementation of linear regression isn't working. </p> <p>When <code>N=100</code> then there are no problems, but when <code>N=1000</code> then divergence to infinity is observed. </p> <pre><code>import numpy as np class Regression: def __init__(self, xs, ys, w,alpha): self.w = w self.xs = xs self.ys = ys self.a = alpha self.N = float(len(xs)) def error(self, ys, yhat): return (1./self.N)*np.sum((ys-yhat)**2) def propagate(self): yhat = xs*w[0]+w[1] loss = yhat - self.ys r1 = (2./self.N)*np.sum(loss*self.xs) r2 = (2./self.N)*np.sum(loss) self.w[0] -= self.a*r1 self.w[1] -= self.a*r2 N = 600 xs = np.arange(0,N) bias = np.random.sample(size=N)*10 ys = xs * 2. + 2. + bias ws = np.array([0.,0.]) regressor = Regression( xs, ys, ws, 0.00001) for i in range(1000): regressor.propagate() </code></pre> <p>Output:</p> <pre><code>... 2.71623180177e+286 5.27841816362e+286 1.02574818143e+287 1.99332318715e+287 3.87359919362e+287 7.52751526171e+287 1.46281231441e+288 2.84266426942e+288 5.52411274435e+288 1.07349369184e+289 2.0861064206e+289 4.05390365232e+289 7.87789858657e+289 1.5309018532e+290 2.97498179035e+290 5.78124367308e+290 1.12346161297e+291 2.18320843611e+291 4.24260074438e+291 8.2445912074e+291 1.6021607564e+292 3.11345829619e+292 6.05034327761e+292 1.17575539141e+293 2.28483026006e+293 4.4400811218e+293 8.62835227315e+293 </code></pre>
<p>You've exceeded the convergence radius of your method. I put in a print statement to trace the effect, at the bottom of <strong>propagate</strong>:</p> <pre><code> self.w = np.array(res).astype(np.float) print self.error(ys, yhat), '\t', r1, '\t', r2, '\t', self.w </code></pre> <p>As K.A. Buhr pointed out, r1 scales quadratically with <strong>N</strong>. Choose your learning rate according to the input; it's not a promised constant with the SGD algorithm. Here's the output from the first 20 iterations with N=600, as in your code:</p> <pre><code>486826.997899 -482786.592791 -1211.52883528 [ 4.82786593 0.01211529] 946024.542374 673013.376697 1680.38708612 [-1.90226784 -0.00468858] 1838377.19732 -938192.956012 -2350.99664804 [ 7.47966172 0.01882138] 3572474.5816 1307858.19046 3268.82617841 [-5.59892018 -0.01386688] 6942323.62211 -1823178.2573 -4565.30975898 [ 12.63286239 0.03178622] 13490907.7204 2541543.91414 6355.61930844 [-12.78257675 -0.03176997] 26216686.5837 -3542958.75828 -8868.35584965 [ 22.64701083 0.05691359] 50946528.2176 4938949.44036 12354.1444796 [-26.74248357 -0.06662786] 99003709.9274 -6884985.98436 -17230.4097511 [ 42.10737627 0.10567624] 192392610.191 9597796.6223 24011.0009034 [-53.87058995 -0.13443377] 373874053.385 -13379504.31 -33480.2810842 [ 79.92445315 0.20036904] 726544597.0 18651274.1534 46663.6193386 [-106.58828839 -0.26626715] 1411884707.51 -26000217.8559 -65058.4461128 [ 153.41389017 0.38431731] 2743697288.89 36244780.0586 90684.1600127 [-209.03391041 -0.52252429] 5331791469.79 -50525887.4157 -126423.886221 [ 296.22496374 0.74171457] 10361201450.4 70434012.7562 176228.707876 [-408.11516382 -1.02057251] 20134788880.2 -98186304.1721 -245674.553107 [ 573.7478779 1.43617302] 39127675046.8 136873506.894 342466.322375 [-794.98719104 -1.9884902 ] 76036305324.8 -190804176.229 -477412.833248 [ 1113.05457125 2.78563813] 147760369643.0 265984517.38 665513.730619 [-1546.79060255 -3.86949918] </code></pre> <p>However, with alpha set to E-6 (instead 
of E-5), the first 10 lines are</p> <pre><code>14495.6359775 -13788.3126768 -211.542964687 [ 0.01378831 0.00021154] 14306.0982004 -13697.7438847 -210.177498646 [ 0.02748606 0.00042172] 14119.0422005 -13607.7699931 -208.821001646 [ 0.04109383 0.00063054] 13934.4354818 -13518.3870942 -207.473414775 [ 0.05461221 0.00083801] 13752.2459738 -13429.5913063 -206.134679506 [ 0.0680418 0.00104415] 13572.4420258 -13341.3787729 -204.804737697 [ 0.08138318 0.00124895] 13394.9924018 -13253.7456628 -203.483531589 [ 0.09463693 0.00145244] 13219.8662747 -13166.6881702 -202.171003801 [ 0.10780362 0.00165461] 13047.0332208 -13080.202514 -200.867097331 [ 0.12088382 0.00185548] 12876.4632151 -12994.2849383 -199.571755548 [ 0.13387811 0.00205505] 12708.1266257 -12908.9317115 -198.284922195 [ 0.14678704 0.00225333] </code></pre> <p>... and it continues to converge. BTW, 1000 iterations are not enough to achieve proper convergence even at N=600; you might want to use an epsilon figure rather than quantity of iterations.</p>
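The two suggestions at the end (size the learning rate to the input, and stop on an epsilon instead of a fixed iteration count) can be sketched as follows; this is a hypothetical rewrite of the poster's loop, not a drop-in replacement for their class:

```python
import numpy as np

# Hypothetical rewrite: same 1-D linear fit, but with a learning rate sized
# for the data and an epsilon stopping rule instead of a fixed 1000 iterations.
rng = np.random.default_rng(0)
N = 600
xs = np.arange(N, dtype=float)
ys = xs * 2. + 2. + rng.random(N) * 10

w = np.zeros(2)
alpha = 1e-6                         # E-6, as suggested above, rather than E-5
prev_err = np.inf
for i in range(50000):               # hard cap so the loop always terminates
    yhat = xs * w[0] + w[1]
    r1 = (2. / N) * np.sum((yhat - ys) * xs)
    r2 = (2. / N) * np.sum(yhat - ys)
    w[0] -= alpha * r1
    w[1] -= alpha * r2
    err = np.mean((ys - yhat) ** 2)
    if abs(prev_err - err) < 1e-9:   # epsilon criterion, not iteration count
        break
    prev_err = err
```

The slope converges quickly; the intercept direction is badly conditioned with unnormalized `xs`, which is why a generous iteration cap (or feature scaling) is still needed on top of the smaller alpha.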
356
gradient descent implementation
Issue Implementing Custom Gradient Descent Function
https://stackoverflow.com/questions/67266157/issue-implementing-custom-gradient-descent-function
<p>I am implementing my own/custom gradient descent algorithm using Python, but the weights and biases returned by my algorithm have 10 values (shape=(10,)), while my input data has only 1 column, so I expect it to return <strong>1 weight and 1 bias</strong>.</p> <p>Code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def SGD(X, y, learning_rate=0.01, max_iter=1000): w = np.random.randn(X.shape[1]) b = np.random.randn(1,) print(w, b) n = len(X) loss_list = [] for i in range(max_iter): y_pred = w*X + b Lw = -(2/n)*sum(X*(y - y_pred)) Lb = -(2/n)*sum(y - y_pred) w = w - learning_rate*Lw b = b - learning_rate*Lb loss = np.square(np.subtract(y, y_pred)).mean() loss_list.append(loss) print(f&quot;Epoch: {i}, loss: {loss}&quot;) return w, b x = list(range(1, 11)) y = [] for i in x: y.append(i**2) x, y = np.array(x).reshape(-1, 1), np.array(y) w, b = SGD(x, y) print(&quot;\n\n\n\n&quot;) print(w) print(b) </code></pre> <p>Loss of last iteration:</p> <pre><code>Epoch: 999, loss: 0.11521764208740602 </code></pre> <p>Returned weight and bias respectively,</p> <pre><code>w: [0.00149535 0.00777379 0.01823786 0.03288755 0.05172286 0.07474381 0.10195038 0.13334257 0.1689204 0.20868384] # giving 10 values b: [ 0.98958964 3.94588026 8.87303129 15.77104274 24.63991461 35.47964689 48.29023958 63.07169269 79.82400621 98.54718014] # giving 10 values </code></pre> <p>I don't understand the cause; how is this happening? Thanks!</p>
<p>I think this is because your <code>y</code> is a 1-d array of shape <code>(n,)</code>, but <code>y_pred</code> is a column of shape <code>(n, 1)</code>, so NumPy broadcasting turns their difference into an <code>n x n</code> matrix, which you don't want. The fix is to just reshape <code>y</code> before you call your function, like so:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def SGD(X, y, learning_rate=0.01, max_iter=1000): w = np.random.randn(X.shape[1]) b = np.random.randn(1,) print(w, b) n = len(X) loss_list = [] for i in range(max_iter): y_pred = w*X + b Lw = -(2/n)*sum(X*(y - y_pred)) Lb = -(2/n)*sum(y - y_pred) w = w - learning_rate*Lw b = b - learning_rate*Lb loss = np.square(np.subtract(y, y_pred)).mean() loss_list.append(loss) print(f&quot;Epoch: {i}, loss: {loss}&quot;) return w, b x = list(range(1, 11)) y = [] for i in x: y.append(i**2) x, y = np.array(x).reshape(-1, 1), np.array(y).reshape((-1, 1)) # Change is here w, b = SGD(x, y) print(&quot;\n\n\n\n&quot;) print(w) print(b) </code></pre> <p>and then <code>w, b</code> are:</p> <pre><code>[10.94655101] [-21.6278976] </code></pre> <p>respectively</p>
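The broadcasting behaviour behind this bug is easy to reproduce in isolation; a shape `(n,)` array minus a shape `(n, 1)` column silently becomes an `n x n` matrix:

```python
import numpy as np

# The mismatch in isolation: a 1-d (n,) array against an (n, 1) column
# broadcasts into an n x n matrix instead of an elementwise difference.
y = np.arange(4.0)                      # shape (4,), like the original y
y_pred = np.arange(4.0).reshape(-1, 1)  # shape (4, 1), like w*X + b above

diff_bad = y - y_pred                   # broadcasts to shape (4, 4)
diff_good = y.reshape(-1, 1) - y_pred   # shape (4, 1), as intended
```

Summing such an `n x n` difference is what inflated the gradients into vectors of length 10 in the original code.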
357
gradient descent implementation
My vectorization implementation of gradient descent does not get me the right answer
https://stackoverflow.com/questions/56520284/my-vectorization-implementation-of-gradient-descent-does-not-get-me-the-right-an
<p>I'm currently working on Andrew Ng's gradient descent exercise using Python, but it keeps giving me the wrong optimal theta. I followed this vectorization cheatsheet for gradient descent --- <a href="https://medium.com/ml-ai-study-group/vectorized-implementation-of-cost-functions-and-gradient-vectors-linear-regression-and-logistic-31c17bca9181" rel="nofollow noreferrer">https://medium.com/ml-ai-study-group/vectorized-implementation-of-cost-functions-and-gradient-vectors-linear-regression-and-logistic-31c17bca9181</a>.</p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np import pandas as pd def cost_func(X, Y, theta): m = len(X) H = X.dot(theta) J = 1/(2*m) * (H - Y).T.dot(H - Y) return J def gradient_descent(X, Y, alpha=0.01, iterations=1500): #initializing theta as a zero vector theta = np.zeros(X.shape[1]) #initializing the a list of cost function value J_list = [cost_func(X, Y, theta)] m = len(X) while iterations &gt; 0: H = X.dot(theta) delta = (1/m)*X.T.dot(H - Y) theta = theta - alpha * delta iterations -= 1 J_list.append(cost_func(X, Y, theta)) return theta, J_list def check_convergence(J_list): plt.plot(range(len(J_list)), J_list) plt.xlabel('Iterations') plt.ylabel('Cost J') plt.show() file_name_1 = 'https://raw.githubusercontent.com/kaleko/CourseraML/master/ex1/data/ex1data1.txt' df1 = pd.read_csv(file_name_1, header=None) X = df1.values[:, 0] Y = df1.values[:, 1] m = len(X) X = np.column_stack((np.ones(m), X)) theta_optimal, J_list = gradient_descent(X, Y, 0.01, 1500) print(theta_optimal) check_convergence(J_list) </code></pre> <p>My theta output is [-3.63029144 1.16636235], which is incorrect.</p> <p><a href="https://i.sstatic.net/XwWoa.png" rel="nofollow noreferrer">Here is my cost function graph.</a> As you see, it converges way too quickly.</p> <p><a href="https://i.sstatic.net/DivhC.png" rel="nofollow noreferrer">This is what the correct graph should look like.</a></p> <p>Thank you.</p>
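One way to sanity-check a vectorized implementation like the one above without the course grader (a hypothetical harness, not part of the original post) is to run the same update on synthetic data and compare against the closed-form normal equation; a correct implementation agrees with it:

```python
import numpy as np

# Hypothetical sanity check: the same vectorized update as in the post,
# run on synthetic data and compared against the normal equation.
rng = np.random.default_rng(1)
m = 200
x = rng.uniform(0, 10, m)
Y = 4.0 + 3.0 * x + rng.normal(0, 0.5, m)
X = np.column_stack((np.ones(m), x))

theta = np.zeros(2)
alpha = 0.01
for _ in range(5000):
    H = X.dot(theta)
    theta = theta - alpha * (1 / m) * X.T.dot(H - Y)

theta_closed = np.linalg.solve(X.T @ X, X.T @ Y)   # normal equation
```

If the two disagree, the bug is in the update; if they agree, the "wrong" answer may actually be correct for the data set.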
358
gradient descent implementation
Gradient descent values not correct
https://stackoverflow.com/questions/37845650/gradient-descent-values-not-correct
<p>I'm attempting to implement gradient descent using code from:</p> <p><a href="https://stackoverflow.com/questions/10591343/gradient-descent-implementation-in-octave">Gradient Descent implementation in octave</a></p> <p>I've amended the code to the following:</p> <pre><code>X = [1; 1; 1;] y = [1; 0; 1;] m = length(y); X = [ones(m, 1), data(:,1)]; theta = zeros(2, 1); iterations = 2000; alpha = 0.001; for iter = 1:iterations theta = theta -((1/m) * ((X * theta) - y)' * X)' * alpha; end theta </code></pre> <p>which gives the following output:</p> <pre><code>X = 1 1 1 y = 1 0 1 theta = 0.32725 0.32725 </code></pre> <p>theta is a 1x2 matrix, but shouldn't it be 1x3, as the output (y) is 3x1?</p> <p>So I should be able to multiply theta by the training example to make a prediction, but cannot multiply x by theta, as x is 1x3 and theta is 1x2?</p> <p>Update:</p> <pre><code>%X = [1 1; 1 1; 1 1;] %y = [1 1; 0 1; 1 1;] X = [1 1 1; 1 1 1; 0 0 0;] y = [1 1 1; 0 0 0; 1 1 1;] m = length(y); X = [ones(m, 1), X]; theta = zeros(4, 1); theta iterations = 2000; alpha = 0.001; for iter = 1:iterations theta = theta -((1/m) * ((X * theta) - y)' * X)' * alpha; end %to make prediction m = size(X, 1); % Number of training examples p = zeros(m, 1); htheta = sigmoid(X * theta); p = htheta &gt;= 0.5; </code></pre>
<p>You are misinterpreting dimensions here. Your data consists of <strong>3 points</strong>, each having <strong>a single dimension</strong>. Furthermore, you <strong>add a dummy dimension</strong> of 1s </p> <pre><code>X = [ones(m, 1), data(:,1)]; </code></pre> <p>thus</p> <pre><code>octave:1&gt; data = [1;2;3] data = 1 2 3 octave:2&gt; [ones(m, 1), data(:,1)] ans = 1 1 1 2 1 3 </code></pre> <p>and <code>theta</code> is your parametrization, which you should be able to apply through (this is not code, but math notation)</p> <pre><code>h(x) = x1 * theta1 + theta0 </code></pre> <p>thus your theta should have <strong>two</strong> dimensions. One is a weight for your dummy dimension (the so-called <strong>bias</strong>) and one for the actual X dimension. If your X has K dimensions, theta would have K+1. Thus, after adding the dummy dimension, the matrices have the following shapes:</p> <pre><code>X is 3x2 y is 3x1 theta is 2x1 </code></pre> <p>so </p> <pre><code>X * theta is 3x1 </code></pre> <p>the same as <code>y</code></p>
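The same shape bookkeeping, transliterated into NumPy for illustration (the original is Octave):

```python
import numpy as np

# Same shape bookkeeping as above, in NumPy: 3 one-dimensional examples,
# plus a dummy column of ones for the bias term.
data = np.array([1.0, 2.0, 3.0])
m = len(data)
X = np.column_stack((np.ones(m), data))      # 3x2
y = np.array([1.0, 0.0, 1.0]).reshape(m, 1)  # 3x1
theta = np.zeros((2, 1))                     # bias weight + feature weight

h = X @ theta                                # (3x2) @ (2x1) -> 3x1, same as y
```

The number of parameters tracks the number of feature columns (plus the bias), not the number of training examples.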
359
gradient descent implementation
Implementing Gradient Descent In Python and receiving an overflow error
https://stackoverflow.com/questions/49865952/implementing-gradient-descent-in-python-and-receiving-an-overflow-error
<h2>Gradient Descent and Overflow Error</h2> <p>I am currently implementing vectorized gradient descent in Python. However, I continue to get an overflow error. The numbers in my dataset are not extremely large though. I am using this formula:</p> <p><a href="https://i.sstatic.net/wPecc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wPecc.png" alt="Formula for vectorized gradient descent"></a> I chose this implementation to avoid using derivatives. Does anyone have any suggestions on how to remedy this problem, or am I implementing it wrong? Thank you in advance!</p> <p>Dataset Link: <a href="https://www.kaggle.com/CooperUnion/anime-recommendations-database/data" rel="nofollow noreferrer">https://www.kaggle.com/CooperUnion/anime-recommendations-database/data</a></p> <pre><code>## Cleaning Data ## import math import matplotlib.pyplot as plt import numpy as np import pandas as pd data = pd.read_csv('anime.csv') # print(data.corr()) # print(data['members'].isnull().values.any()) # Prints False # print(data['rating'].isnull().values.any()) # Prints True members = [] # Corresponding fan club size for row ratings = [] # Corresponding rating for row for row in data.iterrows(): if not math.isnan(row[1]['rating']): # Checks for Null ratings members.append(row[1]['members']) ratings.append(row[1]['rating']) plt.plot(members, ratings) plt.savefig('scatterplot.png') theta0 = 0.3 # Random guess theta1 = 0.3 # Random guess error = 0 </code></pre> <h2>Formulas</h2> <pre><code>def hypothesis(x, theta0, theta1): return theta0 + theta1 * x def costFunction(x, y, theta0, theta1, m): loss = 0 for i in range(m): # Represents summation loss += (hypothesis(x[i], theta0, theta1) - y[i])**2 loss *= 1 / (2 * m) # Represents 1/2m return loss def gradientDescent(x, y, theta0, theta1, alpha, m, iterations=1500): for i in range(iterations): gradient0 = 0 gradient1 = 0 for j in range(m): gradient0 += hypothesis(x[j], theta0, theta1) - y[j] gradient1 += (hypothesis(x[j], theta0, 
theta1) - y[j]) * x[j] gradient0 *= 1/m gradient1 *= 1/m temp0 = theta0 - alpha * gradient0 temp1 = theta1 - alpha * gradient1 theta0 = temp0 theta1 = temp1 error = costFunction(x, y, theta0, theta1, len(y)) print("Error is:", error) return theta0, theta1 print(gradientDescent(members, ratings, theta0, theta1, 0.01, len(ratings))) </code></pre> <h2>Errors</h2> <p>After several iterations, the costFunction called within my gradientDescent function gives me an OverflowError: (34, 'Result too large'). However, I expect my code to continually print out a decreasing error value. </p> <pre><code> Error is: 1.7515692852199285e+23 Error is: 2.012089675182454e+38 Error is: 2.3113586742689143e+53 Error is: 2.6551395730578252e+68 Error is: 3.05005286756189e+83 Error is: 3.503703756035943e+98 Error is: 4.024828599077087e+113 Error is: 4.623463163528686e+128 Error is: 5.311135890211131e+143 Error is: 6.101089907410428e+158 Error is: 7.008538065634975e+173 Error is: 8.050955905074458e+188 Error is: 9.248418197694096e+203 Error is: 1.0623985545062037e+219 Error is: 1.220414847696018e+234 Error is: 1.4019337603196565e+249 Error is: 1.6104509643047377e+264 Error is: 1.8499820618048921e+279 Error is: 2.1251399172389593e+294 Traceback (most recent call last): File "tyreeGradientDescent.py", line 54, in &lt;module&gt; print(gradientDescent(members, ratings, theta0, theta1, 0.01, len(ratings))) File "tyreeGradientDescent.py", line 50, in gradientDescent error = costFunction(x, y, theta0, theta1, len(y)) File "tyreeGradientDescent.py", line 33, in costFunction loss += (hypothesis(x[i], theta0, theta1) - y[i])**2 OverflowError: (34, 'Result too large')</code></pre>
<p>Your data values are really very large, which makes your loss function very steep. The result is that you need a <em>tiny</em> alpha unless you normalize your data to smaller values. With an alpha value that is too large your gradient descent is hopping all over the place and actually diverges, which is why your error rate is going up rather than down.</p> <p>With your current data, an alpha of <code>0.0000000001</code> will make the error converge. After 30 iterations my loss went from :</p> <p><code>Error is: 66634985.91339202</code></p> <p>to</p> <p><code>Error is: 16.90452378179708</code></p>
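An alternative to shrinking alpha that far is to normalize the feature so an ordinary learning rate converges; a sketch with synthetic stand-ins for the members/ratings columns (hypothetical data, not the Kaggle set):

```python
import numpy as np

# Sketch of the scaling alternative: z-score the large raw feature so that
# alpha = 0.01 converges. Synthetic stand-ins for members/ratings.
rng = np.random.default_rng(2)
members = rng.uniform(1e3, 1e6, 500)             # large raw values, as in the post
ratings = 5.0 + 2e-6 * members + rng.normal(0, 0.3, 500)

x = (members - members.mean()) / members.std()   # feature normalization
theta0 = theta1 = 0.0
alpha = 0.01
for _ in range(2000):
    err = theta0 + theta1 * x - ratings
    theta0 -= alpha * err.mean()                 # same 1/m-scaled gradients
    theta1 -= alpha * (err * x).mean()
```

With the feature on unit scale, the loss surface is well conditioned and the fit reaches the noise floor without any hand-tuned microscopic step size.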
360
gradient descent implementation
Java implementation of multivariate gradient descent
https://stackoverflow.com/questions/41144206/java-implementation-of-multivariate-gradient-descent
<p>I'm trying to implement the multivariate gradient descent algorithm in Java (from the Coursera AI course), and I cannot figure out where the fault is in my code.</p> <p>This is the output of the below program:</p> <pre><code>Before train: parameters := [0.0, 0.0, 0.0] -&gt; cost function := 2.5021875E9 After first iteration: parameters := [378.5833333333333, 2.214166666666667, 50043.75000000001] -&gt; cost function := 5.404438291015627E9 </code></pre> <p>As you can see, after the first iteration, the values are way off. What am I doing wrong?</p> <p>This is the algorithm I'm trying to implement:</p> <p><a href="https://i.sstatic.net/vMa9D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vMa9D.png" alt="enter image description here"></a></p> <p>And the code:</p> <pre><code> import java.util.*; public class GradientDescent { private double[][] trainingData; private double[] means; private double[] scale; private double[] parameters; private double learningRate; GradientDescent() { this.learningRate = 0D; } public double predict(double[] inp){ double[] features = new double[inp.length + 1]; features[0] = 1; for(int i = 0; i &lt; inp.length; i++) { features[i+1] = inp[i]; } double prediction = 0; for(int i = 0; i &lt; parameters.length; i++) { prediction = parameters[i] * features[i]; } return prediction; } public void train(){ double[] tempParameters = new double[parameters.length]; for(int i = 0; i &lt; parameters.length; i++) { tempParameters[i] = parameters[i] - learningRate * partialDerivative(i); //System.out.println(tempParameters[i] + " = " + parameters[i] + " - " + learningRate + " * " + partialDerivative(i)); } System.out.println("Before train: parameters := " + Arrays.toString(parameters) + " -&gt; cost function := " + costFunction()); parameters = tempParameters; System.out.println("After first iteration: parameters := " + Arrays.toString(parameters) + " -&gt; cost function := " + costFunction()); } private double partialDerivative(int index) { 
double sum = 0; for(int i = 0; i &lt; trainingData.length; i++) { double[] input = new double[trainingData[i].length - 1]; int j = 0; for(; j &lt; trainingData[i].length - 1; j++) { input[j] = trainingData[i][j]; } sum += ((predict(input) - trainingData[i][j]) * trainingData[i][index]); } return (1D/trainingData.length) * sum; } public double[][] getTrainingData() { return trainingData; } public void setTrainingData(double[][] data) { this.trainingData = data; this.means = new double[this.trainingData[0].length-1]; this.scale = new double[this.trainingData[0].length-1]; for(int j = 0; j &lt; data[0].length-1; j++) { double min = data[0][j], max = data[0][j]; double sum = 0; for(int i = 0; i &lt; data.length; i++) { if(data[i][j] &lt; min) min = data[i][j]; if(data[i][j] &gt; max) max = data[i][j]; sum += data[i][j]; } scale[j] = max - min; means[j] = sum / data.length; } } public double[] getParameters() { return parameters; } public void setParameters(double[] parameters) { this.parameters = parameters; } public double getLearningRate() { return learningRate; } public void setLearningRate(double learningRate) { this.learningRate = learningRate; } /** 1 m i i 2 * J(theta) = ----- * SUM( h (x ) - y ) * 2*m i=1 theta */ public double costFunction() { double sum = 0; for(int i = 0; i &lt; trainingData.length; i++) { double[] input = new double[trainingData[i].length - 1]; int j = 0; for(; j &lt; trainingData[i].length - 1; j++) { input[j] = trainingData[i][j]; } sum += Math.pow(predict(input) - trainingData[i][j], 2); } double factor = 1D/(2*trainingData.length); return factor * sum; } @Override public String toString() { StringBuilder sb = new StringBuilder("hypothesis: "); int i = 0; sb.append(parameters[i++] + " + "); for(; i &lt; parameters.length-1; i++) { sb.append(parameters[i] + "*x" + i + " + "); } sb.append(parameters[i] + "*x" + i); sb.append("\n Feature scale: "); for(i = 0; i &lt; scale.length-1; i++) { sb.append(scale[i] + " "); } sb.append(scale[i]); 
sb.append("\n Feature means: "); for(i = 0; i &lt; means.length-1; i++) { sb.append(means[i] + " "); } sb.append(means[i]); sb.append("\n Cost fuction: " + costFunction()); return sb.toString(); } public static void main(String[] args) { final double[][] TDATA = { {200, 2, 20000}, {300, 2, 41000}, {400, 3, 51000}, {500, 3, 61500}, {800, 4, 41000}, {900, 5, 141000} }; GradientDescent gd = new GradientDescent(); gd.setTrainingData(TDATA); gd.setParameters(new double[]{0D,0D,0D}); gd.setLearningRate(0.00001); gd.train(); //System.out.println(gd); //System.out.println("PREDICTION: " + gd.predict(new double[]{300, 2})); } } </code></pre> <hr> <p>EDIT:<br></p> <p>I've updated the code to make it more readable, and tried to map it to the notation Douglas used. I think it's working better now, but there are still shady areas I don't fully understand.</p> <p>It seems that if I have multiple parameters (like in the example below, number of rooms and area), the prediction is strongly related to the second parameter (in this case area), and it doesn't have much effect changing the first parameter (number of rooms). 
</p> <p>Here is prediction for <code>{2, 200}</code>:</p> <pre><code>PREDICTION: 200000.00686158828 </code></pre> <p>Here is prediction for <code>{5, 200}</code>:</p> <pre><code>PREDICTION: 200003.0068315415 </code></pre> <p>As you can see there is barely any difference between the two values.</p> <p>Are there still faults in my attempt to translate the math into code?</p> <p>Here is the updated code:</p> <pre><code>import java.util.*; public class GradientDescent { private double[][] trainingData; private double[] means; private double[] scale; private double[] parameters; private double learningRate; GradientDescent() { this.learningRate = 0D; } public double predict(double[] inp) { return predict(inp, this.parameters); } private double predict(double[] inp, double[] parameters){ double[] features = concatenate(new double[]{1}, inp); double prediction = 0; for(int j = 0; j &lt; features.length; j++) { prediction += parameters[j] * features[j]; } return prediction; } public void train(){ readjustLearningRate(); double costFunctionDelta = Math.abs(costFunction() - costFunction(iterateGradient())); while(costFunctionDelta &gt; 0.0000000001) { System.out.println("Old cost function : " + costFunction()); System.out.println("New cost function : " + costFunction(iterateGradient())); System.out.println("Delta: " + costFunctionDelta); parameters = iterateGradient(); costFunctionDelta = Math.abs(costFunction() - costFunction(iterateGradient())); readjustLearningRate(); } } private double[] iterateGradient() { double[] nextParameters = new double[parameters.length]; // Calculate parameters for the next iteration for(int r = 0; r &lt; parameters.length; r++) { nextParameters[r] = parameters[r] - learningRate * partialDerivative(r); } return nextParameters; } private double partialDerivative(int index) { double sum = 0; for(int i = 0; i &lt; trainingData.length; i++) { int indexOfResult = trainingData[i].length - 1; double[] input = Arrays.copyOfRange(trainingData[i], 0, 
indexOfResult); sum += ((predict(input) - trainingData[i][indexOfResult]) * trainingData[i][index]); } return sum/trainingData.length ; } private void readjustLearningRate() { while(costFunction(iterateGradient()) &gt; costFunction()) { // If the cost function of the new parameters is higher that the current cost function // it means the gradient is diverging and we have to adjust the learning rate // and recalculate new parameters System.out.print("Learning rate: " + learningRate + " is too big, readjusted to: "); learningRate = learningRate/2; System.out.println(learningRate); } // otherwise we are taking small enough steps, we have the right learning rate } public double[][] getTrainingData() { return trainingData; } public void setTrainingData(double[][] data) { this.trainingData = data; this.means = new double[this.trainingData[0].length-1]; this.scale = new double[this.trainingData[0].length-1]; for(int j = 0; j &lt; data[0].length-1; j++) { double min = data[0][j], max = data[0][j]; double sum = 0; for(int i = 0; i &lt; data.length; i++) { if(data[i][j] &lt; min) min = data[i][j]; if(data[i][j] &gt; max) max = data[i][j]; sum += data[i][j]; } scale[j] = max - min; means[j] = sum / data.length; } } public double[] getParameters() { return parameters; } public void setParameters(double[] parameters) { this.parameters = parameters; } public double getLearningRate() { return learningRate; } public void setLearningRate(double learningRate) { this.learningRate = learningRate; } /** 1 m i i 2 * J(theta) = ----- * SUM( h (x ) - y ) * 2*m i=1 theta */ public double costFunction() { return costFunction(this.parameters); } private double costFunction(double[] parameters) { int m = trainingData.length; double sum = 0; for(int i = 0; i &lt; m; i++) { int indexOfResult = trainingData[i].length - 1; double[] input = Arrays.copyOfRange(trainingData[i], 0, indexOfResult); sum += Math.pow(predict(input, parameters) - trainingData[i][indexOfResult], 2); } double factor = 
1D/(2*m); return factor * sum; } private double[] normalize(double[] input) { double[] normalized = new double[input.length]; for(int i = 0; i &lt; input.length; i++) { normalized[i] = (input[i] - means[i]) / scale[i]; } return normalized; } private double[] concatenate(double[] a, double[] b) { int size = a.length + b.length; double[] concatArray = new double[size]; int index = 0; for(double d : a) { concatArray[index++] = d; } for(double d : b) { concatArray[index++] = d; } return concatArray; } @Override public String toString() { StringBuilder sb = new StringBuilder("hypothesis: "); int i = 0; sb.append(parameters[i++] + " + "); for(; i &lt; parameters.length-1; i++) { sb.append(parameters[i] + "*x" + i + " + "); } sb.append(parameters[i] + "*x" + i); sb.append("\n Feature scale: "); for(i = 0; i &lt; scale.length-1; i++) { sb.append(scale[i] + " "); } sb.append(scale[i]); sb.append("\n Feature means: "); for(i = 0; i &lt; means.length-1; i++) { sb.append(means[i] + " "); } sb.append(means[i]); sb.append("\n Cost fuction: " + costFunction()); return sb.toString(); } public static void main(String[] args) { final double[][] TDATA = { //number of rooms, area, price {2, 200, 200000}, {3, 300, 300000}, {4, 400, 400000}, {5, 500, 500000}, {8, 800, 800000}, {9, 900, 900000} }; GradientDescent gd = new GradientDescent(); gd.setTrainingData(TDATA); gd.setParameters(new double[]{0D, 0D, 0D}); gd.setLearningRate(0.1); gd.train(); System.out.println(gd); System.out.println("PREDICTION: " + gd.predict(new double[]{3, 600})); } } </code></pre>
<p>It seems you have a reasonable start, but there were some issues in the translation of the math to code. See the following math.</p> <p><img src="https://i.sstatic.net/QtKss.png" alt="The math"></p> <p>There were a few steps I took to clarify the math and the algorithm's convergence mechanism.</p> <ol> <li>To increase legibility, rather than use a parenthetic superscript to denote rows, a more standard comma-delimited subscript was used in the notation.</li> <li>An attempt was made to use a zero base for the summation control variables to match the Java/C index convention without introducing bugs into the math. (Hopefully done correctly.)</li> <li>Made the various substitutions implied by the course material.</li> <li>Determined the mapping between variable names in the posted code and the mathematical representation.</li> </ol> <p>After that, it became apparent that there is more awry than the missing plus sign in the summation loop. The partial derivative seems to need rewriting or significant modification to match the course concepts.</p> <p>Note that the inner loop of k=0->n produces a dot product across all features and then is applied within the i=0->m-1 loop to account for each training case.</p> <p>All of that must be contained within each iteration r. The loop criterion for that outer loop should not be some maximum r value. You will need a criterion that is met once convergence is sufficiently complete.</p> <hr> <p><strong>Additional Notes in response to comments:</strong></p> <p>It is difficult to spot incongruities as the code stands because of what Martin Fowler termed the Semantic Gap. In this case, it is between three things. </p> <ol> <li>math representation </li> <li>lecture terminology </li> <li>algorithms in code </li> </ol> <p>It is likely that refactoring the member variables and breaking the y vector from the x matrix (shown below) will facilitate spotting incongruities. 
</p> <pre><code>private int countMExamples; private int countNFeatures; private double[][] aX; private double[] aY; private double[] aMeans; private double[] aScales; private double[] aParamsTheta; private double learnRate; </code></pre>
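For comparison, here is a compact NumPy rendering of the update the math above describes (a sketch, not a drop-in fix for the Java class), using the same toy data as the poster's <code>main()</code>:

```python
import numpy as np

# Sketch of the vectorized update the math describes. Same toy data as main().
TDATA = np.array([
    [2., 200., 200000.],
    [3., 300., 300000.],
    [4., 400., 400000.],
    [5., 500., 500000.],
    [8., 800., 800000.],
    [9., 900., 900000.],
])
raw, aY = TDATA[:, :-1], TDATA[:, -1]
# feature scaling with the same means/ranges the Java code computes
aX = (raw - raw.mean(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
aX = np.column_stack((np.ones(len(aX)), aX))     # bias column

theta = np.zeros(3)
alpha, m = 0.1, len(aY)
for _ in range(5000):
    grad = (1.0 / m) * aX.T @ (aX @ theta - aY)  # all partials at once
    theta -= alpha * grad                        # simultaneous update
```

Incidentally, in this toy data set the area is exactly 100 times the number of rooms, so after scaling the two feature columns are identical; any split of weight between them yields the same prediction, which is consistent with the observation in the question that varying the rooms feature alone barely moves the output.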
361
gradient descent implementation
Trouble Implementing Gradient Descent in Octave
https://stackoverflow.com/questions/42944688/trouble-implementing-gradient-descent-in-octave
<p>I've been trying to implement gradient descent in Octave. This is the code I have so far:</p> <pre><code>function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters) %GRADIENTDESCENT Performs gradient descent to learn theta % theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by % taking num_iters gradient steps with learning rate alpha % Initialize some useful values m = length(y); % number of training examples J_history = zeros(num_iters, 1); for iter = 1:num_iters % ====================== YOUR CODE HERE ====================== % Instructions: Perform a single gradient step on the parameter vector % theta. % % Hint: While debugging, it can be useful to print out the values % of the cost function (computeCost) and gradient here. % theta X y theta' .* X for inner = 1:length(theta) hypothesis = (X * theta - y)'; % Updating the parameters temp0 = theta(1) - (alpha * (1/m) * hypothesis * X(:, 1)); temp1 = theta(2) - (alpha * (1/m) * hypothesis * X(:, 2)); theta(1) = temp0; theta(2) = temp1; J_history(iter) = computeCost(X, y, theta); end end </code></pre> <p>I can't really tell what's going wrong with this code; it compiles and runs, but it's being auto-graded and it fails every time.</p> <p>EDIT: Sorry, I wasn't specific. I was supposed to implement a single step of GD, not the whole loop.</p> <p>EDIT 2: Here's the full thing. Only the stuff inside the for loop is relevant imo.</p> <p>EDIT 3: Both test cases fail, so there's something wrong with my calculations.</p>
<p>I think my problem is that I had an extra for loop in there for some reason.</p>
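For reference, with the extra inner loop removed, the single step reduces to one vectorized simultaneous update of all parameters (a Python sketch of the Octave logic, not the graded Octave submission):

```python
import numpy as np

def gradient_step(X, y, theta, alpha):
    """One gradient-descent step; all components of theta update together."""
    m = len(y)
    h = X @ theta                          # hypothesis for every example
    return theta - (alpha / m) * (X.T @ (h - y))

# smoke test on y = 1 + 2x
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
theta = np.zeros(2)
for _ in range(3000):
    theta = gradient_step(X, y, theta, alpha=0.1)
```

Because the matrix product computes every partial derivative from the same pre-update theta, the "simultaneous update" requirement is satisfied without any temp variables or inner loop.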
362
gradient descent implementation
How to implement minibatch gradient descent in tensorflow without using feeddict?
https://stackoverflow.com/questions/51091129/how-to-implement-minibatch-gradient-descent-in-tensorflow-without-using-feeddict
<p>From what I understand, using <code>feed_dict</code> is a computationally expensive process and should be avoided according to <a href="https://towardsdatascience.com/how-to-use-dataset-in-tensorflow-c758ef9e4428" rel="nofollow noreferrer">this article</a>. Tensorflow's input pipelines are supposedly better. </p> <p>All mini-batch gradient descent tutorials that I've found are implemented with <code>feed_dict</code>. Is there a way to use input pipelines and minibatch gradient descent?</p>
<p>If you are just making a small model, you will do fine with <code>feed_dict</code>. Many large models have been trained with the <code>feed_dict</code> method in the past. If you are scaling to a very deep convnet with a large dataset or something, you may want to use <code>tf.data</code> and the dataset pipeline, probably with the dataset serialized to a <code>.tfrecord</code> file so that the data can be pre-fetched to the GPU to reduce idle time. These optimizations are worthwhile with large models, and if you would really like to learn the API, visit the <a href="https://www.tensorflow.org/get_started/datasets_quickstart" rel="nofollow noreferrer">quickstart guide</a> and <a href="https://towardsdatascience.com/how-to-use-dataset-in-tensorflow-c758ef9e4428" rel="nofollow noreferrer">this helpful Medium article</a> on getting started with the API. </p>
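Independently of which input mechanism you pick, the minibatch arithmetic itself is framework-agnostic: shuffle the indices, slice, and take one gradient step per slice. A NumPy sketch of that batching pattern (this is the iteration a `tf.data` pipeline performs for you, minus serialization and prefetching; it is not TensorFlow API code):

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    """Yield shuffled (X_batch, y_batch) pairs covering the dataset once.

    Each epoch of minibatch gradient descent is one pass over these
    slices; an input pipeline adds prefetching on top of the same idea.
    """
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.default_rng(0)
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.arange(10, dtype=float)
batches = list(minibatches(X, y, batch_size=4, rng=rng))
```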
363
gradient descent implementation
Issue when Implementing gradient descent in R
https://stackoverflow.com/questions/50249691/issue-when-implementing-gradient-descent-in-r
<p>I have problems implementing a gradient descent in R for an exponential function.</p> <p>Let's say</p> <pre><code>foo &lt;- function(x) { y = -2 + 2.5 * exp(0.1*x^2-0.7*x) return(y) } </code></pre> <p>is my exponential function then</p> <pre><code> grad &lt;- function(x) { y = 2.5*exp(0.1*x^2-0.7*x)*(0.2*x-0.7) return(y) } </code></pre> <p>is the gradient function of foo(x).</p> <p>The task is to implement a function called</p> <pre><code>gdescent &lt;- function(x0, fc, grd, diff, step) {} </code></pre> <p>where </p> <ul> <li>x0 is a random initial value</li> <li>foo is the exponential function I want to minimize </li> <li>grad is the gradient function (derivative of foo(x)) </li> <li>diff is the difference that terminates the algo </li> <li>step is the size of the steps (i.e. the learning rate)</li> </ul> <p>The result of the function shall be a list containing </p> <ul> <li><i>par</i> - the value of x in the minimum; </li> <li><i>value</i> - the corresponding value of foo(par) at the minimum; </li> <li>and <i>iter</i> - the number of iterations it took the algo to find the minimum.</li> </ul> <p>The update rule after every iteration shall be:</p> <pre><code>xi+1 = xi - step * grd(xi) # i didn't understand this at all </code></pre> <p>How do I implement this in R?</p> <p>My understanding of the gradient descent method so far:</p> <ol> <li>pick a random initial value x0 insert x0 in the gradient function grd(x)</li> <li>if grd(x0) &lt; 0, reduce x0 by "step" (x0+1=x0-step); </li> <li>if grd(x0) > 0, increase x0 by "step" (x0+1=x0+step); and go back to 2) with x0+1 as initial value </li> </ol> <p>my solution so far:</p> <pre><code>gdescent &lt;- function(x0, fc, grd, diff, step) { x = x0 x_history = vector("numeric", iter) for(i in 1:iter) { x = x - step * grad(x) if( x &gt; diff ) { #not sure how to proceed here } } </code></pre> <p>How can I solve this without a fixed number of iterations? so without initialising iter</p>
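Since this thread never received an answer here, the following is a hedged sketch of the requested `gdescent` logic in Python rather than R (the `max_iter` safety cap is an addition, not part of the assignment): the loop runs until successive iterates move less than `diff`, so no fixed iteration count is needed.

```python
import math

def foo(x):
    return -2 + 2.5 * math.exp(0.1 * x**2 - 0.7 * x)

def grad(x):
    return 2.5 * math.exp(0.1 * x**2 - 0.7 * x) * (0.2 * x - 0.7)

def gdescent(x0, fc, grd, diff, step, max_iter=100000):
    """Gradient descent with a convergence threshold instead of a fixed
    iteration count: stop once |x_{i+1} - x_i| < diff. The update rule
    is exactly x_{i+1} = x_i - step * grd(x_i); the sign of the gradient
    automatically moves x in the right direction, so no explicit
    if-grd-positive/negative branching is needed."""
    x, it = x0, 0
    while it < max_iter:
        x_new = x - step * grd(x)
        it += 1
        if abs(x_new - x) < diff:
            x = x_new
            break
        x = x_new
    return {"par": x, "value": fc(x), "iter": it}

# grad(x) = 0 at x = 3.5, so the minimizer of foo is par = 3.5
result = gdescent(1.0, foo, grad, diff=1e-9, step=0.1)
```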
364
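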
gradient descent implementation
Why is my implementation of gradient descent on python producing outputs so slow?
https://stackoverflow.com/questions/55357592/why-is-my-implementation-of-gradient-descent-on-python-producing-outputs-so-slow
<p>Why are the outputs from the code getting slower with every successive iteration?</p> <p>I want to write working code that implements Gradient descent and Newton's method on the same function, so that I can compare the speeds and iteration counts of both methods in arriving at an approximate solution.</p> <p>This code snippet is only for Gradient descent, and the output gets slower with every iteration. I got the first 10 outputs relatively fast; after that, every output takes at least 5-6 seconds or more.</p> <pre class="lang-py prettyprint-override"><code>#A python program to approximate a root of a polynomial #using the newton-raphson method import math #f(x) - the function of the polynomial h = 0.000001 def f(x): function = (x*x*x) - (2*x) - 1 return function def derivative(h,x): #function to find the derivative of the polynomial derivative = (f(x + h) - f(x)) / h return derivative def GD(h,x): return (x - h*derivative(h,x)) # p - the initial point i.e. a value closer to the root # n - number of iterations def iterate(p, n): # x = 0 for i in range(n): if i == 0: #calculate first approximation x = GD(h,p) else: x = GD(h,iterate(x, n)) #iterate the first and subsequent approximations n=n-1 return x for i in range(100): print (iterate(1, i)) #print the root of the polynomial x^3 - 2x - 1 using 3 iterations and taking initial point as 1 </code></pre> <p>I can't figure out whether it is happening due to the code or the gradient descent.</p>
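The slowdown is the recursion: `iterate(x, n)` is called again inside the loop of `iterate`, so the number of calls grows combinatorially with `n` rather than linearly. A plain iterative sketch avoids that (the learning rate `alpha` below is an illustrative choice, since reusing the tiny finite-difference step `h` as the step size makes progress invisibly slow). Note also that gradient descent minimizes f, converging to the stationary point sqrt(2/3) ≈ 0.816 of x³ − 2x − 1, not to a root of the polynomial — root-finding is Newton's method's job:

```python
def f(x):
    return x**3 - 2*x - 1

def derivative(x, h=1e-6):
    # forward-difference approximation of f'(x) = 3x^2 - 2
    return (f(x + h) - f(x)) / h

def gradient_descent(p, n, alpha=0.01):
    """Plain iterative loop: each approximation feeds the next, so n
    iterations cost exactly n derivative evaluations (no recursion)."""
    x = p
    for _ in range(n):
        x = x - alpha * derivative(x)
    return x

# From x0 = 1 this descends to the local minimum at sqrt(2/3)
x_min = gradient_descent(1.0, 2000)
```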
365
gradient descent implementation
Implementing gradient descent in python
https://stackoverflow.com/questions/66494060/implementing-gradient-descent-in-python
<p>I was trying to build a gradient descent function in python. I have used the binary-crossentropy as the loss function and sigmoid as the activation function.</p> <pre><code>def sigmoid(x): return 1/(1+np.exp(-x)) def binary_crossentropy(y_pred,y): epsilon = 1e-15 y_pred_new = np.array([max(i,epsilon) for i in y_pred]) y_pred_new = np.array([min(i,1-epsilon) for i in y_pred_new]) return -np.mean(y*np.log(y_pred_new) + (1-y)*np.log(1-y_pred_new)) def gradient_descent(X, y, epochs=10, learning_rate=0.5): features = X.shape[0] w = np.ones(shape=(features, 1)) bias = 0 n = X.shape[1] for i in range(epochs): weighted_sum = w.T@X + bias y_pred = sigmoid(weighted_sum) loss = binary_crossentropy(y_pred, y) d_w = (1/n)*(X@(y_pred-y).T) d_bias = np.mean(y_pred-y) w = w - learning_rate*d_w bias = bias - learning_rate*d_bias print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{loss}') return w, bias </code></pre> <p>So, as input I gave</p> <pre><code>X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4], [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]]) y = 2*X[0] - 3*X[1] + 0.4 </code></pre> <p>and then <code>w, bias = gradient_descent(X, y, epochs=100)</code> the output was w = <code>array([[-20.95],[-29.95]])</code>, b = <code>-55.50000017801383</code>, and <code>loss:40.406546076763014</code>. The weights are decreasing(becoming more -ve) and bias is also decreasing for more epochs. Expected output was w = [[2],[-3]], and b = 0.4.</p> <p>I don't know what I am doing wrong, the loss is also not converging. It is constant throughout all the epochs.</p>
<p>Usually, <code>binary cross-entropy</code> loss is used for binary classification tasks. However, here your task is linear regression, so I would prefer using <code>Mean Square Error</code> as the loss function. Here is my suggestion:</p> <pre><code>def gradient_descent(X, y, epochs=1000, learning_rate=0.5): w = np.ones((X.shape[0], 1)) bias = 1 n = X.shape[1] for i in range(epochs): y_pred = w.T @ X + bias mean_square_err = (1.0 / n) * np.sum(np.power((y - y_pred), 2)) d_w = (-2.0 / n) * (y - y_pred) @ X.T d_bias = (-2.0 / n) * np.sum(y - y_pred) w -= learning_rate * d_w.T bias -= learning_rate * d_bias print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{mean_square_err}') return w, bias X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4], [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]]) y = 2*X[0] - 3*X[1] + 0.4 w, bias = gradient_descent(X, y, epochs=5000, learning_rate=0.5) print(f'w = {w}') print(f'bias = {bias}') </code></pre> <p>Output:</p> <pre><code>w = [[ 1.99999999], [-2.99999999]] bias = 0.40000000041096756 </code></pre>
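Because this dataset is generated exactly from a linear rule, the target any correct descent should converge to can be sanity-checked against the closed-form least-squares solution (a verification sketch, not part of the answer above):

```python
import numpy as np

X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4

# Augment with a column of ones so the bias is just another weight,
# then solve the least-squares problem in closed form.
A = np.column_stack([X.T, np.ones(X.shape[1])])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w_closed, bias_closed = coef[:2], coef[2]
```

Since y is built as 2·x0 − 3·x1 + 0.4 with no noise, the fit is exact, confirming the expected w = [2, −3], b = 0.4.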
366
gradient descent implementation
Gradient descent not updating theta values
https://stackoverflow.com/questions/37229574/gradient-descent-not-updating-theta-values
<p>Using the vectorized version of gradient as described at : <a href="https://stackoverflow.com/questions/10479353/gradient-descent-seems-to-fail">gradient descent seems to fail</a></p> <pre><code>theta = theta - (alpha/m * (X * theta-y)' * X)'; </code></pre> <p>The theta values are not being updated, so whatever initial theta value this is the values that is set after running gradient descent : </p> <p>example1 : </p> <pre><code>m = 1 X = [1] y = [0] theta = 2 theta = theta - (alpha/m .* (X .* theta-y)' * X)' theta = 2.0000 </code></pre> <p>example2 : </p> <pre><code>m = 1 X = [1;1;1] y = [1;0;1] theta = [1;2;3] theta = theta - (alpha/m .* (X .* theta-y)' * X)' theta = 1.0000 2.0000 3.0000 </code></pre> <p>Is <code>theta = theta - (alpha/m * (X * theta-y)' * X)';</code> a correct vectorised implementation of gradient descent ?</p>
<p><code>theta = theta - (alpha/m * (X * theta-y)' * X)';</code> is indeed the correct vectorized implementation of gradient-descent.</p> <p>You totally forgot to set the learning rate, <code>alpha</code>.</p> <p>After setting <code>alpha = 0.01</code>, your code becomes:</p> <pre><code>m = 1 # number of training examples X = [1;1;1] y = [1;0;1] theta = [1;2;3] alpha = 0.01 theta = theta - (alpha/m .* (X .* theta-y)' * X)' theta = 0.96000 1.96000 2.96000 </code></pre>
367
gradient descent implementation
Understanding Gradient Descent for Multivariate Linear Regression python implementation
https://stackoverflow.com/questions/33629734/understanding-gradient-descent-for-multivariate-linear-regression-python-impleme
<p>It seems that the following code finds the gradient descent correctly:</p> <pre><code>def gradientDescent(x, y, theta, alpha, m, numIterations): xTrans = x.transpose() for i in range(0, numIterations): hypothesis = np.dot(x, theta) loss = hypothesis - y cost = np.sum(loss ** 2) / (2 * m) print("Iteration %d | Cost: %f" % (i, cost)) # avg gradient per example gradient = np.dot(xTrans, loss) / m # update theta = theta - alpha * gradient return theta </code></pre> <p>Now suppose we have the following sample data:</p> <p><a href="https://i.sstatic.net/K7Cx0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K7Cx0.png" alt="enter image description here"></a></p> <p>For the 1st row of sample data, we will have: <code>x = [2104, 5, 1, 45]</code>, <code>theta = [1,1,1,1]</code>, <code>y = 460</code>. However, we are nowhere specifying in the lines :</p> <pre><code>hypothesis = np.dot(x, theta) loss = hypothesis - y </code></pre> <p>which row of the sample data to consider. Then how come this code is working fine ?</p>
<p>First: Congrats on taking the course on Machine Learning on Coursera! :)</p> <p><code>hypothesis = np.dot(x,theta)</code> will compute the hypothesis for all x(i) at the same time, saving each h_theta(x(i)) as a row of <code>hypothesis</code>. So there is no need to reference a single row.</p> <p>Same is true for <code>loss = hypothesis - y</code>.</p>
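A tiny runnable illustration of that point, using rows shaped like the screenshot's table (the values are illustrative):

```python
import numpy as np

x = np.array([[2104, 5, 1, 45],
              [1416, 3, 2, 40],
              [1534, 3, 2, 30]], dtype=float)
theta = np.ones(4)

# One matrix-vector product computes h_theta for every training row...
hypothesis = np.dot(x, theta)

# ...which is exactly the same as looping over the rows one by one.
row_by_row = np.array([np.dot(x[i], theta) for i in range(x.shape[0])])
```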
368
gradient descent implementation
Can I implement a gradient descent for arbitrary convex loss function?
https://stackoverflow.com/questions/42587696/can-i-implement-a-gradient-descent-for-arbitrary-convex-loss-function
<p>I have a loss function I would like to try and minimize:</p> <pre><code>def lossfunction(X,b,lambs): B = b.reshape(X.shape) penalty = np.linalg.norm(B, axis = 1)**(0.5) return np.linalg.norm(np.dot(X,B)-X) + lambs*penalty.sum() </code></pre> <p>Gradient descent, or similar methods, might be useful. I can't calculate the gradient of this function analytically, so I am wondering how I can numerically calculate the gradient for this loss function in order to implement a descent method.</p> <p>Numpy has a <code>gradient</code> function, but it requires me to pass a scalar field at pre determined points.</p>
<p>You could try <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow noreferrer">scipy.optimize.minimize</a>. For your case (with <code>x0</code> an initial guess for the variable being optimized) a sample call would be: </p> <pre><code>from scipy.optimize import minimize minimize(lossfunction, x0, args=(b, lambs), method='Nelder-Mead') </code></pre>
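If you do want an explicit numerical gradient for a hand-rolled descent method, a standard central-difference sketch looks like this (generic, not specific to the loss above):

```python
import numpy as np

def numerical_gradient(f, x, args=(), eps=1e-6):
    """Central-difference estimate of the gradient of a scalar function
    f at the 1-D point x. This is the generic fallback when an analytic
    gradient is unavailable; note it costs 2*len(x) function
    evaluations per call, so it scales poorly to many parameters."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        g[i] = (f(x + step, *args) - f(x - step, *args)) / (2 * eps)
    return g

# Check against a function with a known gradient: f(x) = x0^2 + 3*x1,
# so grad f at (2, -1) should be (4, 3).
f = lambda x: x[0]**2 + 3*x[1]
g = numerical_gradient(f, np.array([2.0, -1.0]))
```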
369
gradient descent implementation
Implementing gradient descent for multiple variables in Octave using &quot;sum&quot;
https://stackoverflow.com/questions/35945445/implementing-gradient-descent-for-multiple-variables-in-octave-using-sum
<p>I'm doing Andrew Ng's course on Machine Learning and I'm trying to wrap my head around the vectorised implementation of gradient descent for multiple variables which is an optional exercise in the course.</p> <p>This is the algorithm in question (taken from <a href="http://www.holehouse.org/mlclass/04_Linear_Regression_with_multiple_variables.html" rel="nofollow noreferrer">here</a>):</p> <p><a href="https://i.sstatic.net/S3jua.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S3jua.png" alt="enter image description here"></a></p> <p>I just cannot do this in octave using <code>sum</code> though, I'm not sure how to multiply the sum of the hypothesis of x(i) - y(i) by the all variables xj(i). I tried different iterations of the following code but to no avail (either the dimensions are not right or the answer is wrong):</p> <pre><code>theta = theta - alpha/m * sum(X * theta - y) * X; </code></pre> <p>The correct answer, however, is entirely non-obvious (to a linear algebra beginner like me anyway, from <a href="https://stackoverflow.com/questions/10479353/gradient-descent-seems-to-fail">here</a>):</p> <pre><code>theta = theta - (alpha/m * (X * theta-y)' * X)'; </code></pre> <p>Is there a rule of thumb for cases where <code>sum</code> is involved that governs transformations like the above? </p> <p>And if so, is there the opposite version of the above (i.e. 
going from a <code>sum</code> based solution to a general multiplication one) as I was able to come up with a correct implementation using <code>sum</code> for gradient descent for a single variable (albeit not a very elegant one):</p> <pre><code>temp0 = theta(1) - (alpha/m * sum(X * theta - y)); temp1 = theta(2) - (alpha/m * sum((X * theta - y)' * X(:, 2))); theta(1) = temp0; theta(2) = temp1; </code></pre> <p>Please note that this only concerns vectorised implementations and although there are several questions on SO as to how this is done, my question is primarily concerned with the implementation of the algorithm in Octave using <code>sum</code>.</p>
<p>The general "rule of the thumb" is as follows, if you encounter something in the form of</p> <pre><code>SUM_i f(x_i, y_i, ...) g(a_i, b_i, ...) </code></pre> <p>then you can easily vectorize it (and this is what is done in the above) through</p> <pre><code>f(x, y, ...)' * g(a, b, ...) </code></pre> <p>As this is just a typical dot product, which in mathematics (in Euclidean space of finite dimension) looks like</p> <pre><code>&lt;A, B&gt; = SUM_i A_i B_i = A'B </code></pre> <p>thus</p> <pre><code>(X * theta-y)' * X) </code></pre> <p>is just</p> <pre><code>&lt;X * theta-y), X&gt; = &lt;H_theta(X) - y, X&gt; = SUM_i (H_theta(X_i) - y_i) X_i </code></pre> <p>as you can see this works both ways, as this is just a mathematical definition of dot product.</p>
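The identity is easy to confirm numerically; a NumPy sketch (rather than Octave) comparing the explicit per-parameter sum with the vectorized dot-product form on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
X = rng.normal(size=(m, n))
theta = rng.normal(size=(n, 1))
y = rng.normal(size=(m, 1))

# Explicit sum over training examples, one parameter j at a time:
# SUM_i (h_theta(x_i) - y_i) * x_ij
grad_sum = np.zeros((n, 1))
for j in range(n):
    grad_sum[j] = sum((X[i] @ theta - y[i]) * X[i, j] for i in range(m))

# The vectorized form from the answer: ((X*theta - y)' * X)'
grad_vec = ((X @ theta - y).T @ X).T
```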
370
gradient descent implementation
Python implemented Gradient Descent Algorithm won&#39;t converge
https://stackoverflow.com/questions/30221211/python-implemented-gradient-descent-algorithm-wont-converge
<p>I've implemented a gradient descent algorithm in python and it is just not converging when it runs. When I debugging it, I have to make the alpha very small to let it 'seems' to converge. The alpha is like, have to be 1e-12 that small.</p> <p>Here is my code</p> <pre><code> def batchGradDescent(dataMat, labelMat): dataMatrix = mat(dataMat) labelMatrix = mat(labelMat).transpose() m, n = shape(dataMatrix) cycle = 1000000 alpha = 7e-11 saved_weights = ones((n,1)) weights = saved_weights for k in range(cycle): hypothesis = dataMatrix * weights saved_weights = weights error = labelMatrix - hypothesis weights = saved_weights + alpha * dataMatrix.transpose() * error print weights-saved_weights return weights </code></pre> <p>And my dataset is like this(a row)</p> <pre><code>800 0 0.3048 71.3 0.00266337 126.201 </code></pre> <p>First five elements are features and the last one is the label.</p> <p>Could anyone provide help? I'm really frustrated here. I think my algorithm is theoretically right. Is it about the normalization on the dataset?</p> <p>Thank you.</p>
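The closing guess is very likely the issue: the feature scales span five orders of magnitude (800 vs 0.00266), so any alpha that is safe for the largest column is uselessly small for the others. A standardization sketch (NumPy, with two hypothetical rows shaped like the data):

```python
import numpy as np

def standardize(X):
    """Scale each feature column to zero mean and unit variance.
    After this, features share a common scale, so a single learning
    rate like 0.01 can work instead of something near 1e-11."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0   # guard against constant columns
    return (X - mu) / sigma, mu, sigma

# Two hypothetical rows shaped like the question's feature columns
X = np.array([[800.0, 0.0, 0.3048, 71.3, 0.00266337],
              [1000.0, 0.1, 0.4000, 65.0, 0.00300000]])
X_scaled, mu, sigma = standardize(X)
```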
371
gradient descent implementation
Stochastic gradient descent converges too smoothly
https://stackoverflow.com/questions/42766970/stochastic-gradient-descent-converges-too-smoothly
<p>As a part of my homework I was asked to implement a stochastic gradient descent in order to solve a linear regression problem (even though I have only 200 training examples). My problem is that stochastic gradient descent converges too smoothly, almost exactly as batch gradient descent, which brings me to my question: why does it look so smoothly, considering the fact that usually it's much more noisy. Is it because I use it with only 200 examples? </p> <p>Convergence plots: </p> <p><a href="https://i.sstatic.net/oLDFH.png" rel="nofollow noreferrer">Stochastic gradient descent</a></p> <p><a href="https://i.sstatic.net/ig9WN.png" rel="nofollow noreferrer">Gradient descent</a></p> <p>MSE with weights from stochastic gradient descent: 2.78441258841</p> <p>MSE with weights from gradient descent: 2.78412631451 (identical to MSE with weights from normal equation) </p> <p>My code:</p> <pre><code>def mserror(y, y_pred): n = y.size diff = y - y_pred diff_squared = diff ** 2 av_er = float(sum(diff_squared))/n return av_er </code></pre> <p>.</p> <pre><code>def linear_prediction(X, w): return dot(X,np.transpose(w)) </code></pre> <p>.</p> <pre><code>def gradient_descent_step(X, y, w, eta): n = X.shape[0] grad = (2.0/n) * sum(np.transpose(X) * (linear_prediction(X,w) - y), axis = 1) return w - eta * grad </code></pre> <p>.</p> <pre><code>def stochastic_gradient_step(X, y, w, train_ind, eta): n = X.shape[0] grad = (2.0/n) * np.transpose(X[train_ind]) * (linear_prediction(X[train_ind],w) - y[train_ind]) return w - eta * grad </code></pre> <p>. 
</p> <pre><code>def gradient_descent(X, y, w_init, eta, max_iter): w = w_init errors = [] errors.append(mserror(y, linear_prediction(X,w))) for i in range(max_iter): w = gradient_descent_step(X, y, w, eta) errors.append(mserror(y, linear_prediction(X,w))) return w, errors </code></pre> <p>.</p> <pre><code>def stochastic_gradient_descent(X, y, w_init, eta, max_iter): n = X.shape[0] w = w_init errors = [] errors.append(mserror(y, linear_prediction(X,w))) for i in range(max_iter): random_ind = np.random.randint(n) w = stochastic_gradient_step(X, y, w, random_ind, eta) errors.append(mserror(y, linear_prediction(X,w))) return w, errors </code></pre>
<p>There is nothing unusual about your graph. You should also note that your batch method takes fewer iterations to converge. </p> <p>You may be letting SGD plots from neural networks cloud your view on what SGD "should" look like. Most neural networks are much more complicated models (difficult to optimize) working on harder problems. This contributes to the "jaggedness" you might be expecting.</p> <p>Linear regression is a simple problem, and has a convex solution. That means any step that lowers our error rate is guaranteed to be a step toward the best possible solution. That's a lot less complicated than neural networks, and part of why you see a smooth error reduction. That's also why you see almost identical MSE. Both SGD and batch <em>will</em> converge to the exact same solution. </p> <p>If you want to try and force some non-smoothness, you can keep increasing the learning rate eta, but that's kind of a silly exercise. Eventually you'll just reach a point where you don't converge because you always take steps past the solution. </p>
372
gradient descent implementation
Trying to Implement Linear Regression with Stochastic Gradient Descent
https://stackoverflow.com/questions/66756559/trying-to-implement-linear-regression-with-stochastic-gradient-descent
<p>[<a href="https://docs.google.com/spreadsheets/d/1AVNrWBwn22c1QWc6X9zG8FkvTMXHXZGuZH2sPAT9a00/edit?usp=sharing" rel="nofollow noreferrer">Dataset</a>]<a href="https://docs.google.com/spreadsheets/d/1AVNrWBwn22c1QWc6X9zG8FkvTMXHXZGuZH2sPAT9a00/edit?usp=sharing" rel="nofollow noreferrer">1</a>I'm attempting to implement linear regression for stochastic gradient descent using python. I have the code to enable me do this but for some reason, its triggering an error at &quot;row[column] = float(row[column].strip())&quot;-could not convert string to float: 'C'&quot;. Anyone who will assist me troubleshoot this error will be greatly appreciated.</p> <pre><code> # Linear Regression With Stochastic Gradient Descent for Pima- Indians-Diabetes from random import seed from random import randrange from csv import reader from math import sqrt filename = 'C:/Users/Vince/Desktop/University of Wyoming PHD/Year 2/Machine Learning/Homeworks/Solutions/HW4/pima-indians-diabetes-training.csv' # Load a CSV file def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(filename) for row in csv_reader: if not row: continue dataset.append(row) return dataset # Convert string column to float def str_column_to_float(dataset, column): for row in dataset: row[column] = float(row[column].strip()) # Find the min and max values for each column def dataset_minmax(dataset): minmax = list() for i in range(len(dataset[0])): col_values = [row[i] for row in dataset] value_min = min(col_values) value_max = max(col_values) minmax.append([value_min, value_max]) return minmax # Rescale dataset columns to the range 0-1 def normalize_dataset(dataset, minmax): for row in dataset: for i in range(len(row)): row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0]) # Split a dataset into k folds def cross_validation_split(dataset, n_folds): dataset_split = list() dataset_copy = list(dataset) fold_size = int(len(dataset) / n_folds) for i in range(n_folds): fold = 
list() while len(fold) &lt; fold_size: index = randrange(len(dataset_copy)) fold.append(dataset_copy.pop(index)) dataset_split.append(fold) return dataset_split # Calculate root mean squared error def rmse_metric(actual, predicted): sum_error = 0.0 for i in range(len(actual)): prediction_error = predicted[i] - actual[i] sum_error += (prediction_error ** 2) mean_error = sum_error / float(len(actual)) return sqrt(mean_error) # Evaluate an algorithm using a cross validation split def evaluate_algorithm(dataset, algorithm, n_folds, *args): folds = cross_validation_split(dataset, n_folds) scores = list() for fold in folds: train_set = list(folds) train_set.remove(fold) train_set = sum(train_set, []) test_set = list() for row in fold: row_copy = list(row) test_set.append(row_copy) row_copy[-1] = None predicted = algorithm(train_set, test_set, *args) actual = [row[-1] for row in fold] rmse = rmse_metric(actual, predicted) scores.append(rmse) return scores # Make a prediction with coefficients def predict(row, coefficients): yhat = coefficients[0] for i in range(len(row)-1): yhat += coefficients[i + 1] * row[i] return yhat # Estimate linear regression coefficients using stochastic gradient descent def coefficients_sgd(train, l_rate, n_epoch): coef = [0.0 for i in range(len(train[0]))] for epoch in range(n_epoch): for row in train: yhat = predict(row, coef) error = yhat - row[-1] coef[0] = coef[0] - l_rate * error for i in range(len(row)-1): coef[i + 1] = coef[i + 1] - l_rate * error * row[i] # print(l_rate, n_epoch, error) return coef # Linear Regression Algorithm With Stochastic Gradient Descent def linear_regression_sgd(train, test, l_rate, n_epoch): predictions = list() coef = coefficients_sgd(train, l_rate, n_epoch) for row in test: yhat = predict(row, coef) predictions.append(yhat) return(predictions) # Linear Regression on Indians Pima Database seed(1) # load and prepare data filename = 'C:/Users/Vince/Desktop/University of Wyoming PHD/Year 2/Machine 
Learning/Homeworks/Solutions/HW4/pima-indians-diabetes-training.csv' dataset = load_csv(filename) for i in range(len(dataset[0])): str_column_to_float(dataset, i) # normalize minmax = dataset_minmax(dataset) normalize_dataset(dataset, minmax) # evaluate algorithm n_folds = 5 l_rate = 0.01 n_epoch = 50 scores = evaluate_algorithm(dataset, linear_regression_sgd, n_folds, l_rate, n_epoch) print('Scores: %s' % scores) print('Mean RMSE: %.3f' % (sum(scores)/float(len(scores)))) </code></pre>
<p>Adding on to the answer from @Agni</p> <p>The CSV file that you are reading has a header line</p> <p><code>num_preg PlGlcConc BloodP tricept insulin BMI ped_func Age HasDiabetes</code></p> <p>When you use <code>reader(file)</code> to read the file and then iterate over it, the first line also gets added in <code>dataset</code>. Hence, the first element in <code>dataset</code> list is:</p> <pre><code>&gt;&gt;&gt; dataset [['num_preg', 'PlGlcConc', 'BloodP', 'tricept', 'insulin', 'BMI', 'ped_func', 'Age', 'HasDiabetes'], ...] </code></pre> <p>So when you try to convert it into float it throws the error, <code>Could not convert string to float): numpreg</code></p> <p>Here is the final edited code:</p> <pre><code>def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(file) fieldnames = next(csv_reader) # Skip the first row and store in case you need it dataset = list(csv_reader) # You can convert an iterator to list directly return dataset </code></pre>
373
gradient descent implementation
Python gradient descent - cost keeps increasing
https://stackoverflow.com/questions/39771075/python-gradient-descent-cost-keeps-increasing
<p>I'm trying to implement gradient descent in python and my loss/cost keeps increasing with every iteration.</p> <p>I've seen a few people post about this, and saw an answer here: <a href="https://stackoverflow.com/questions/17784587/gradient-descent-using-python-and-numpy">gradient descent using python and numpy</a></p> <p>I believe my implementation is similar, but cant see what I'm doing wrong to get an exploding cost value:</p> <pre><code>Iteration: 1 | Cost: 697361.660000 Iteration: 2 | Cost: 42325117406694536.000000 Iteration: 3 | Cost: 2582619233752172973298548736.000000 Iteration: 4 | Cost: 157587870187822131053636619678439702528.000000 Iteration: 5 | Cost: 9615794890267613993157742129590663647488278265856.000000 </code></pre> <p>I'm testing this on a dataset I found online (LA Heart Data): <a href="http://www.umass.edu/statdata/statdata/stat-corr.html" rel="nofollow noreferrer">http://www.umass.edu/statdata/statdata/stat-corr.html</a></p> <p>Import code:</p> <pre><code>dataset = np.genfromtxt('heart.csv', delimiter=",") x = dataset[:] x = np.insert(x,0,1,axis=1) # Add 1's for bias y = dataset[:,6] y = np.reshape(y, (y.shape[0],1)) </code></pre> <p>Gradient descent:</p> <pre><code>def gradientDescent(weights, X, Y, iterations = 1000, alpha = 0.01): theta = weights m = Y.shape[0] cost_history = [] for i in xrange(iterations): residuals, cost = calculateCost(theta, X, Y) gradient = (float(1)/m) * np.dot(residuals.T, X).T theta = theta - (alpha * gradient) # Store the cost for this iteration cost_history.append(cost) print "Iteration: %d | Cost: %f" % (i+1, cost) </code></pre> <p>Calculate cost:</p> <pre><code>def calculateCost(weights, X, Y): m = Y.shape[0] residuals = h(weights, X) - Y squared_error = np.dot(residuals.T, residuals) return residuals, float(1)/(2*m) * squared_error </code></pre> <p>Calculate hypothesis:</p> <pre><code>def h(weights, X): return np.dot(X, weights) </code></pre> <p>To actually run it:</p> 
<pre><code>gradientDescent(np.ones((x.shape[1],1)), x, y, 5) </code></pre>
<p>Assuming that your derivation of the gradient is correct, you are using: <code>=-</code> and you should be using: <code>-=</code>. Instead of updating <code>theta</code>, you are reassigning it to <code>- (alpha * gradient)</code></p> <p>EDIT (after the above issue was fixed in the code):</p> <p>I ran what the code on what I believe is the right dataset and was able to get the cost to behave by setting <code>alpha=1e-7</code>. If you run it for <code>1e6</code> iterations you should see it converging. This approach on this dataset appears very sensitive to learning rate. </p>
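The <code>=-</code> versus <code>-=</code> difference the answer points out is easy to demonstrate with hypothetical numbers — the first spelling parses as assignment of a negated value, not as a subtraction update:

```python
alpha, gradient = 0.01, 4.0

theta = 5.0
theta =- (alpha * gradient)   # parses as theta = -(alpha * gradient): theta is overwritten
after_typo = theta

theta = 5.0
theta -= alpha * gradient     # the intended update: theta = theta - alpha * gradient
after_update = theta
```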
374
gradient descent implementation
Gradient descent in Java
https://stackoverflow.com/questions/32169988/gradient-descent-in-java
<p>I've recently started the AI-Class at Coursera and I've a question related to my implementation of the gradient descent algorithm.</p> <p>Here's my current implementation (I actually just &quot;translated&quot; the mathematical expressions into Java code):</p> <pre><code> public class GradientDescent { private static final double TOLERANCE = 1E-11; private double theta0; private double theta1; public double getTheta0() { return theta0; } public double getTheta1() { return theta1; } public GradientDescent(double theta0, double theta1) { this.theta0 = theta0; this.theta1 = theta1; } public double getHypothesisResult(double x){ return theta0 + theta1*x; } private double getResult(double[][] trainingData, boolean enableFactor){ double result = 0; for (int i = 0; i &lt; trainingData.length; i++) { result = (getHypothesisResult(trainingData[i][0]) - trainingData[i][1]); if (enableFactor) result = result*trainingData[i][0]; } return result; } public void train(double learningRate, double[][] trainingData){ int iteration = 0; double delta0, delta1; do{ iteration++; System.out.println(&quot;SUBS: &quot; + (learningRate*((double) 1/trainingData.length))*getResult(trainingData, false)); double temp0 = theta0 - learningRate*(((double) 1/trainingData.length)*getResult(trainingData, false)); double temp1 = theta1 - learningRate*(((double) 1/trainingData.length)*getResult(trainingData, true)); delta0 = theta0-temp0; delta1 = theta1-temp1; theta0 = temp0; theta1 = temp1; }while((Math.abs(delta0) + Math.abs(delta1)) &gt; TOLERANCE); System.out.println(iteration); } } </code></pre> <p>The code works quite well but only if I choose an very little alpha, here called learningRate. 
If it's higher than 0.00001, it diverges.</p> <p>Do you have any suggestions on how to optimize the implementation, or an explanation for the &quot;Alpha-Issue&quot; and a possible solution for it?</p> <p><strong>Update:</strong></p> <p>Here's the main including some sample inputs:</p> <pre><code>private static final double[][] TDATA = {{200, 20000},{300, 41000},{900, 141000},{800, 41000},{400, 51000},{500, 61500}}; public static void main(String[] args) { GradientDescent gd = new GradientDescent(0,0); gd.train(0.00001, TDATA); System.out.println(&quot;THETA0: &quot; + gd.getTheta0() + &quot; - THETA1: &quot; + gd.getTheta1()); System.out.println(&quot;PREDICTION: &quot; + gd.getHypothesisResult(300)); } </code></pre> <p>The mathematical expression of gradient descent is as follows:</p> <p><a href="https://i.sstatic.net/72agd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/72agd.png" alt="enter image description here" /></a></p>
<p>To solve this issue, it's necessary to normalize the data with this formula: (Xi-mu)/s. Xi is the current training set value, mu the average of values in the current column and s the maximum value minus the minimum value of the current column. This formula will get the training data approximately into a range between -1 and 1, which allows choosing higher learning rates and lets gradient descent converge faster. But it's afterwards necessary to denormalize the predicted result.</p>
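A sketch of exactly that formula in Python (using the x-values from the question's TDATA for illustration), including the round trip needed to denormalize afterwards:

```python
def normalize(column):
    """Mean normalization as described above: (Xi - mu) / s, with s the
    column range (max - min). Returns the scaled values plus the
    (mu, s) pair needed to map values back to the original scale."""
    mu = sum(column) / len(column)
    s = max(column) - min(column)
    return [(x - mu) / s for x in column], mu, s

# x-values from the question's TDATA sample inputs
x_col = [200.0, 300.0, 900.0, 800.0, 400.0, 500.0]
scaled, mu, s = normalize(x_col)
```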
375
gradient descent implementation
How is Nesterov&#39;s Accelerated Gradient Descent implemented in Tensorflow?
https://stackoverflow.com/questions/50774683/how-is-nesterovs-accelerated-gradient-descent-implemented-in-tensorflow
<p>The documentation for <a href="https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer" rel="noreferrer"><code>tf.train.MomentumOptimizer</code></a> offers a <code>use_nesterov</code> parameter to utilise Nesterov's Accelerated Gradient (NAG) method.</p> <p>However, NAG requires the gradient at a location other than that of the current variable to be calculated, and the <code>apply_gradients</code> interface only allows for the current gradient to be passed. So I don't quite understand how the NAG algorithm could be implemented with this interface.</p> <p>The documentation says the following about the implementation:</p> <blockquote> <p><code>use_nesterov</code>: If True use Nesterov Momentum. See <a href="http://proceedings.mlr.press/v28/sutskever13.pdf" rel="noreferrer">Sutskever et al., 2013</a>. This implementation always computes gradients at the value of the variable(s) passed to the optimizer. Using Nesterov Momentum makes the variable(s) track the values called <code>theta_t + mu*v_t</code> in the paper.</p> </blockquote> <p>Having read through the paper in the link, I'm a little unsure about whether this description answers my question or not. How can the NAG algorithm be implemented when the interface doesn't require a gradient function to be provided?</p>
<p><strong>TL;DR</strong></p> <p>TF's implementation of Nesterov is indeed an approximation of the original formula, valid for high values of momentum.</p> <p><strong>Details</strong></p> <p>This is a great question. In the paper, the NAG update is defined as</p> <pre><code>v<sub>t+1</sub> = μ.v<sub>t</sub> - λ.∇f(θ<sub>t</sub> + μ.v<sub>t</sub>) θ<sub>t+1</sub> = θ<sub>t</sub> + v<sub>t+1</sub> </code></pre> <p>where <code>f</code> is our cost function, <code>θ<sub>t</sub></code> our parameters at time <code>t</code>, <code>μ</code> the momentum, <code>λ</code> the learning rate; <code>v<sub>t</sub></code> is the NAG's internal accumulator.</p> <p>The main difference from standard momentum is the use of the gradient at <code>θ<sub>t</sub> + μ.v<sub>t</sub></code>, <em>not</em> at <code>θ<sub>t</sub></code>. But as you said, tensorflow only uses the gradient at <code>θ<sub>t</sub></code>. So what is the trick?</p> <p>Part of the trick is actually mentioned in the part of the documentation you cited: the algorithm is tracking <code>θ<sub>t</sub> + μ.v<sub>t</sub></code>, <em>not</em> <code>θ<sub>t</sub></code>. The other part comes from an approximation valid for high values of momentum.</p> <p>Let's make a slight change of notation from the paper for the accumulator, to stick with tensorflow's definition. Define <code>a<sub>t</sub> = v<sub>t</sub> / λ</code>. The update rules then change slightly to</p> <pre><code>a<sub>t+1</sub> = μ.a<sub>t</sub> - ∇f(θ<sub>t</sub> + μ.λ.a<sub>t</sub>) θ<sub>t+1</sub> = θ<sub>t</sub> + λ.a<sub>t+1</sub> </code></pre> <p>(The motivation for this change in TF is that now <code>a</code> is a pure gradient momentum, independent of the learning rate.
This makes the update process robust to changes in <code>λ</code>, a possibility that is common in practice but that the paper does not consider.)</p> <p>If we write <code>ψ<sub>t</sub> = θ<sub>t</sub> + μ.λ.a<sub>t</sub></code>, then</p> <pre><code>a<sub>t+1</sub> = μ.a<sub>t</sub> - ∇f(ψ<sub>t</sub>) ψ<sub>t+1</sub> = θ<sub>t+1</sub> + μ.λ.a<sub>t+1</sub> = θ<sub>t</sub> + λ.a<sub>t+1</sub> + μ.λ.a<sub>t+1</sub> = ψ<sub>t</sub> + λ.a<sub>t+1</sub> + μ.λ.(a<sub>t+1</sub> - a<sub>t</sub>) = ψ<sub>t</sub> + λ.a<sub>t+1</sub> + μ.λ.[(μ-1)a<sub>t</sub> - ∇f(ψ<sub>t</sub>)] ≈ ψ<sub>t</sub> + λ.a<sub>t+1</sub> </code></pre> <p>This last approximation holds for high values of momentum, where <code>μ</code> is close to 1, so that <code>μ-1</code> is close to zero, and <code>∇f(ψ<sub>t</sub>)</code> is small compared to <code>a</code> — this last approximation is actually more debatable, and less valid for directions with frequent gradient sign changes.</p> <p>We now have an update that uses the gradient of the current position, and the rules are pretty simple — they are in fact those of standard momentum.</p> <p>However, we want <code>θ<sub>t</sub></code>, not <code>ψ<sub>t</sub></code>. This is the reason why we subtract <code>μ.λ.a<sub>t+1</sub></code> from <code>ψ<sub>t+1</sub></code> just before returning it — and to recover <code>ψ</code> it is added back first thing at the next call.</p>
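The resulting scheme can be sketched in a few lines of Python (an illustrative toy, not TensorFlow's actual code): run the standard-momentum rules on ψ, and subtract μ.λ.a at the end to recover θ.

```python
def nag_tf_style(grad, theta0, lam=0.05, mu=0.9, steps=500):
    """Approximate NAG using only gradients at the tracked point psi."""
    a, psi = 0.0, theta0          # a: pure-gradient momentum accumulator
    for _ in range(steps):
        a = mu * a - grad(psi)    # a_{t+1} = mu.a_t - grad f(psi_t)
        psi = psi + lam * a       # psi_{t+1} ~= psi_t + lam.a_{t+1}
    return psi - mu * lam * a     # recover theta from psi before returning

# toy quadratic f(x) = (x - 3)^2, whose gradient is 2(x - 3)
theta = nag_tf_style(lambda x: 2.0 * (x - 3.0), theta0=0.0)
```

On this toy problem the iterate converges to the minimizer at 3; the point of the sketch is only the bookkeeping between <code>psi</code>, <code>a</code>, and the returned <code>theta</code>.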
376
gradient descent implementation
Implementing numba for word2vec gradient descent but getting LoweringError
https://stackoverflow.com/questions/48698936/implementing-numba-for-word2vec-gradient-descent-but-getting-loweringerror
<p>I am running gradient descent for word2vec and would like to implement numba to speed up the training. </p> <p>EDIT: It seems the real error is this </p> <blockquote> <p>NotImplementedError: unsupported nested memory-managed object</p> </blockquote> <p>This is a subsequent error: </p> <pre><code>raise NotImplementedError("%s: %s" % (root_type, str(e))) numba.errors.LoweringError: Failed at nopython (nopython mode backend) reflected list(reflected list(int64)): unsupported nested memory-managed object File "test.py", line 36 [1] During: lowering "negative_indices = arg(6, name=negative_indices)" at test.py (36) </code></pre> <p>I've searched through the <a href="http://numba.pydata.org/numba-doc/dev/user/troubleshoot.html#numba-troubleshooting" rel="nofollow noreferrer">numba documentation</a> and googled this error with no luck.</p> <p>Here is a replicable code snippet:</p> <pre><code>import numpy as np import random from numba import jit random.seed(10) np.random.seed(10) @jit(nopython=True) def sigmoid(x): return float(1)/(1+np.exp(-x)) @jit(nopython=True) def listsum(list1): ret=0 for i in list1: ret += i return ret num_samples = 2 learning_rate = 0.05 center_token = 50 hidden_size = 100 sequence_chars = [2000, 1500, 400, 600] W1 = np.random.uniform(-.5, .5, size=(11000, hidden_size)) W2 = np.random.uniform(-.5, .5, size=(11000, hidden_size)) negative_indices = [[800,1000], [777,950], [650,300], [10000,9999]] @jit(nopython=True) def performDescent(num_samples, learning_rate, center_token, sequence_chars,W1,W2,negative_indices): nll_new = 0 neg_idx = 0 for k in range(0, len(sequence_chars)): w_c = sequence_chars[k] W_neg = negative_indices[k] w_j = [w_c] + W_neg t_j = [1] + [0]*len(W_neg) h = W1[center_token] update_i = np.zeros((hidden_size,len(w_j))) for i in range(0,len(w_j)): v_j = W2[w_j[i]] update_i[:,i] = (sigmoid(np.dot(v_j.T,h))-t_j[i])*v_j W2[w_j[i]] = v_j - learning_rate*(sigmoid(np.dot(v_j.T,h))-t_j[i])*h #creates v_j_new W1[center_token] = h - 
learning_rate*np.sum(update_i, axis=1) update_nll = [] for i in range(1,len(w_j)): update_nll.append(np.log(sigmoid(-np.dot(W2[w_j[i]].T,h)))) #h is updated in memory nll = -np.log(sigmoid(np.dot(W2[w_j[0]].T,h))) - listsum(update_nll) print("nll:",nll) nll_new += nll return [nll_new] performDescent(num_samples, learning_rate, center_token, sequence_chars,W1,W2,negative_indices) </code></pre> <p>I don't understand why negative_indices is giving an issue.</p>
<p>As the error message hints at, lists in numba have only partial support. They can't contain "memory-managed" objects, which means they can only hold scalar, primitive types - for example:</p> <pre><code>@njit def list_first(l): return l[0] list_first([1, 2, 3]) # Out[65]: 1 list_first([[1], [2]]) # LoweringError: Failed at nopython (nopython mode backend) # reflected list(reflected list(int64)): unsupported nested memory-managed object </code></pre> <p>Assuming your example is representative, it seems like in the places where you are using a list, it isn't necessary, and it would be hurting performance even if it were supported, because you know the allocation sizes in advance.</p> <p>Here's a potential refactoring that numba can handle.</p> <pre><code>sequence_chars = np.array([2000, 1500, 400, 600], dtype=np.int64) negative_indices = np.array([[800,1000], [777,950], [650,300], [10000,9999]], dtype=np.int64) @jit(nopython=True) def performDescent2(num_samples, learning_rate, center_token, sequence_chars, W1, W2 ,negative_indices): nll_new = 0 neg_idx = 0 neg_ind_length = len(negative_indices[0]) w_j = np.empty(neg_ind_length + 1, dtype=np.int64) t_j = np.zeros(neg_ind_length + 1, dtype=np.int64) t_j[0] = 1 for k in range(0, len(sequence_chars)): w_j[0] = sequence_chars[k] w_j[1:] = negative_indices[k] h = W1[center_token] update_i = np.zeros((hidden_size,len(w_j))) for i in range(0,len(w_j)): v_j = W2[w_j[i]] update_i[:,i] = (sigmoid(np.dot(v_j.T, h)) - t_j[i]) * v_j W2[w_j[i]] = v_j - learning_rate * (sigmoid(np.dot(v_j.T, h)) - t_j[i]) * h #creates v_j_new W1[center_token] = h - learning_rate * np.sum(update_i, axis=1) update_nll = np.zeros(len(w_j)) for i in range(1, len(w_j)): update_nll[i-1] = np.log(sigmoid(-np.dot(W2[w_j[i]].T, h))) #h is updated in memory nll = -np.log(sigmoid(np.dot(W2[w_j[0]].T,h))) - update_nll.sum() print("nll:",nll) nll_new += nll return nll_new </code></pre>
377
gradient descent implementation
Implementation of gradient descent blowing up to infinity?
https://stackoverflow.com/questions/68356680/implementation-of-gradient-descent-blowing-up-to-infinity
<p>This is how I generated the training data for my Linear Regression.</p> <pre><code>!pip install grapher, numpy from grapher import Grapher import matplotlib.pyplot as plt import numpy as np # Secret: y = 3x + 4 # x, y = [float(row[0]) for row in rows], [float(row[5]) for row in rows] x, y = [a for a in range(-20, 20)], [3*a + 4 for a in range(-20, 20)] g = Grapher(['3*x + 4'], title=&quot;y = 3x+4&quot;) plt.scatter(x, y) g.plot() </code></pre> <p><a href="https://i.sstatic.net/HVnym.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HVnym.png" alt="enter image description here" /></a></p> <p>Then, I tried gradient descent on a simple quadratic function (x - 7)^2</p> <pre><code>def n(x): return (x-7)**2 cur_x = 0 lr = 0.001 ittr = 10000 n = 0 prev_x = -1 max_precision = 0.0000001 precision = 1 while n &lt; ittr and precision &gt; max_precision: prev_x = cur_x cur_x = cur_x - lr * (2*(cur_x - 7)) precision = abs(prev_x - cur_x) n+=1 if n%100 == 0: print(n, ':') print(cur_x) print() print(cur_x) </code></pre> <p>And this works perfectly.</p> <p>Then I made a Linear Regression class to make the same thing happen.</p> <pre><code>class LinearRegression: def __init__(self, X, Y): self.X = X self.Y = Y self.m = 1 self.c = 0 self.learning_rate = 0.01 self.max_precision = 0.000001 self.itter = 10000 def h(self, x, m, c): return m * x + c def J(self, m, c): loss = 0 for x in self.X: loss += (self.h(x, m, c) - self.Y[self.X.index(x)])**2 return loss/2 def calc_loss(self): return self.J(self.m, self.c) def guess_answer(self, step=1): losses = [] mcvalues = [] for m in np.arange(-10, 10, step): for c in np.arange(-10, 10, step): mcvalues.append((m, c)) losses.append(self.J(m, c)) minloss = sorted(losses)[0] return mcvalues[losses.index(minloss)] def gradient_decent(self): print('Orignal: ', self.m, self.c) nm = 0 nc = 0 prev_m = 0 perv_c = -1 mprecision = 1 cprecision = 1 while nm &lt; self.itter and mprecision &gt; self.max_precision: prev_m = self.m nm += 1 
self.m = self.m - self.learning_rate * sum([(self.h(x, self.m, self.c) - self.Y[self.X.index(x)])*x for x in self.X]) mprecision = abs(self.m - prev_m) return self.m, self.c def graph_loss(self): plt.scatter(0, self.J(0)) print(self.J(0)) plt.plot(self.X, [self.J(x) for x in self.X]) def check_loss(self): plt.plot([m for m in range(-20, 20)], [self.J(m, 0) for m in range(-20, 20)]) x1 = 10 y1 = self.J(x1, 0) l = sum([(self.h(x, x1, self.c) - self.Y[self.X.index(x)])*x for x in self.X]) print(l) plt.plot([m for m in range(-20, 20)], [(l*(m - x1)) + y1 for m in range(-20, 20)]) plt.scatter([x1], [y1]) LinearRegression(x, y).gradient_decent() </code></pre> <p>Output is</p> <pre><code>Orignal: 1 0 (nan, 0) </code></pre> <p>Then I tried graphing my Loss Function (J(m, c)) and tried to use its derivative to see if it actuallly gives slope. I was in a suspection that I have messed up my d(J(m, c))/dm</p> <p>After running <code>LinearRegression(x, y).check_loss()</code></p> <p>I get this graph</p> <p><a href="https://i.sstatic.net/xvTwN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xvTwN.png" alt="enter image description here" /></a></p> <p>It is a slope at whatever point I want it to be. Why isnt it working in my code?</p>
<p>Now that I look at it, the main problem is the learning rate. A learning rate of <code>0.01</code> is too high. Keeping it lower than <code>0.00035</code> works well; about <code>0.0002</code> works well and quickly. I tried graphing things and saw it made a lot of difference.</p> <p>With a learning rate of <code>0.00035</code> and 1000 iterations, this was the graph:</p> <p><a href="https://i.sstatic.net/mnisZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mnisZ.png" alt="enter image description here" /></a></p> <p>With a learning rate of <code>0.0002</code> and 1000 iterations, this was the graph:</p> <p><a href="https://i.sstatic.net/W4iJc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W4iJc.png" alt="enter image description here" /></a></p> <p>With a learning rate of <code>0.0004</code> and just <code>10</code> iterations, this was the graph:</p> <p><a href="https://i.sstatic.net/Lj5mZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lj5mZ.png" alt="enter image description here" /></a></p> <p>Instead of converging to the minimum, it is diverging. That is why the learning rate is important: anything bigger than about <code>0.0004</code> will diverge in the same way.</p> <p>It took me quite some time to figure this out.</p>
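A minimal sketch reproducing the effect (the data matches the question's secret line y = 3x + 4; unlike the original class, this toy also updates the intercept c):

```python
def fit(lr, iters=1000):
    """Plain-Python batch gradient descent on y = 3x + 4 data."""
    data = [(x, 3 * x + 4) for x in range(-20, 20)]
    m, c = 1.0, 0.0
    for _ in range(iters):
        grad_m = sum((m * x + c - y) * x for x, y in data)
        grad_c = sum((m * x + c - y) for x, y in data)
        m -= lr * grad_m
        c -= lr * grad_c
    return m, c

m_good, c_good = fit(0.0002)    # converges near m=3, c=4
m_bad, _ = fit(0.01, iters=50)  # blows up, as in the question
```

With this data, the sum of x squared is about 5340, so a learning rate of 0.01 multiplies the error in m by roughly -52 per step, which explains the immediate blow-up.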
378
gradient descent implementation
Implementation of Gradient (Steepest) Descent
https://stackoverflow.com/questions/3950349/implementation-of-gradient-steepest-descent
<p>I'm looking for some advice on how to go about implementing <a href="http://en.wikipedia.org/wiki/Gradient_descent" rel="nofollow">Gradient (steepest) Descent</a> in C. I am finding the minimum of f(x)=||Ax-y||^2, with A(n,n) and y(n) given.</p> <p>This is difficult in C (I think) because computing the gradient, Δf(x)=[df/dx(1), ..., df/dx(n)] requires calculating derivatives.</p> <p>I just wanted to throw this at SO to get some direction on going about programming this, e.g.:</p> <p>1) What dimensionality would be best to start with (1,2,...)</p> <p>2) Advice on how to go about doing the partial derivatives</p> <p>3) Whether I should implement in an easier language, like python, first -- then translate over to C</p> <p>4) Etc.</p> <p>Let me know your thoughts! Thanks in advance</p>
<p>1) Start in 2D; that way you can plot the path of the descent and actually see your algorithm working.</p> <p>2) df/dx = (f(x+h)-f(x-h))/(2*h) if f evaluation is cheap, (f(x+h)-f(x))/h if it is expensive. The choice of h should balance truncation error (dominant for big h) and roundoff error (dominant for small h). Typical values of h are ~ pow(DBL_EPSILON, 1./3), but the actual exponent depends on the formula for the derivative, and ideally there should be a prefactor that depends on f. You may plot the numerical derivative as a function of h on a log scale, for some given sample points in the parameter space. You will then clearly see the range of h that is optimal for the points you are sampling.</p> <p>3) Yes, whichever you find easier.</p> <p>4) The hard part is finding the optimal step size. You may want to use an inner loop here to search for the optimal step.</p>
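To make points 1) and 2) concrete, here is a small sketch (in NumPy for brevity; the same structure translates directly to C): steepest descent on f(x) = ||Ax - y||² in 2D, with the gradient estimated by central differences. The matrix, step size, and iteration count are illustrative choices.

```python
import numpy as np

def num_grad(f, x, h=1e-5):
    """Central-difference gradient: df/dx_i ~ (f(x+h*e_i) - f(x-h*e_i)) / (2h)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

A = np.array([[2.0, 1.0], [1.0, 3.0]])
y = np.array([1.0, 2.0])
f = lambda x: np.sum((A @ x - y) ** 2)

x = np.zeros(2)
for _ in range(500):
    x -= 0.05 * num_grad(f, x)   # fixed step; a line search would do better
# x approaches the exact solution A^{-1} y = [0.2, 0.6]
```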
379
gradient descent implementation
Vectorized gradient descent basics
https://stackoverflow.com/questions/24337887/vectorized-gradient-descent-basics
<p>I'm implementing simple gradient descent in octave but its not working. Here is the data I'm using:</p> <pre><code>X = [1 2 3 1 4 5 1 6 7] y = [10 11 12] theta = [0 0 0] alpha = 0.001 and itr = 50 </code></pre> <p>This is my gradient descent implementation:</p> <pre><code>function theta = Gradient(X,y,theta,alpha,itr) m= length(y) for i = 1:itr, th1 = theta(1) - alpha * (1/m) * sum((X * theta - y) .* X(:, 1)); th2 = theta(2) - alpha * (1/m) * sum((X * theta - y) .* X(:, 2)); th3 = theta(3) - alpha * (1/m) * sum((X * theta - y) .* X(:, 3)); theta(1) = th1; theta(2) = th2; theta(3) = th3; end </code></pre> <p>Questions are:</p> <ul> <li>It produces some values of theta which I use in <code>theta * [1 2 3]</code> and expect an output near about 10 (from y). Is that the correct way to test the hypothesis? [h(x) = theta' * x]</li> <li>How can I determine how many times should it iterate? If I give it 1500 iterations, theta gets extremely small (in e).</li> <li>If I use double digit numbers in X, theta gets too small again. Even with &lt; 5 iterations.</li> </ul> <p>I've been struggling with these things for a long time now. Unable to resolve it myself. </p> <p>Sorry for bad formatting. </p>
<p>Your batch gradient descent implementation seems perfectly fine to me, so can you be more specific about the error you are facing?</p> <p>Having said that, regarding your first question (is that the correct way to test the hypothesis?): based on the dimensions of your test set here, you should test it as h(x) = X*theta.</p> <p>For your second question, the number of iterations depends on the data set provided. To decide on the optimal number of iterations, plot your cost function against the number of iterations: as the iterations increase, the value of the cost function should decrease, and you can stop once it levels off. You might also consider increasing the value of alpha in steps of 0.001, 0.003, 0.01, 0.03, 0.1, ... to find the best possible alpha value.</p> <p>For your third question, I guess you are directly trying to model the data you have in this question. This data set is very small: it contains just 3 training examples. If you are trying to implement a linear regression algorithm, you need a proper training set with sufficient data to train your model; then you can test your model with your test data. Refer to Andrew Ng's Machine Learning course on www.coursera.org, where you will find more information.</p>
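A quick sketch of the "plot the cost against iterations" advice (in NumPy rather than Octave), using the question's own data: when alpha is small enough, the recorded costs decrease from one iteration to the next.

```python
import numpy as np

X = np.array([[1, 2, 3], [1, 4, 5], [1, 6, 7]], dtype=float)
y = np.array([10.0, 11.0, 12.0])
theta = np.zeros(3)
alpha, iters = 0.001, 50
m = len(y)

costs = []
for _ in range(iters):
    err = X @ theta - y                  # h(x) - y for all examples at once
    costs.append((err @ err) / (2 * m))  # J(theta)
    theta -= (alpha / m) * (X.T @ err)   # simultaneous update of all theta_j

# plotting `costs` shows the curve levelling off, which suggests
# how many iterations are actually needed
```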
380
gradient descent implementation
sklearn: Hyperparameter tuning by gradient descent?
https://stackoverflow.com/questions/43420493/sklearn-hyperparameter-tuning-by-gradient-descent
<p>Is there a way to perform hyperparameter tuning in scikit-learn by gradient descent? While a formula for the gradient of hyperparameters might be difficult to compute, numerical computation of the hyperparameter gradient by evaluating two close points in hyperparameter space should be pretty easy. Is there an existing implementation of this approach? Why is or isn't this approach a good idea?</p>
<p>The calculation of the gradient is the least of your problems, at least in this age of advanced <a href="https://en.wikipedia.org/wiki/Automatic_differentiation" rel="noreferrer">automatic differentiation</a> software. (Implementing this in a general way for all sklearn-classifiers is of course not easy.)</p> <p>And while there is work by people who have used this kind of idea, they only did it for specific, well-formulated problems (e.g. SVM-tuning). Furthermore, there were probably a lot of assumptions involved, because:</p> <p><em>Why is this not a good idea</em>?</p> <ul> <li>Hyper-param optimization is in general: <strong>non-smooth</strong> <ul> <li>GD really likes smooth functions, as a gradient of zero is not helpful</li> <li>(Each hyper-parameter which is defined by some discrete set (e.g. the choice of l1 vs. l2 penalization) introduces non-smooth surfaces)</li> </ul></li> <li>Hyper-param optimization is in general: <strong>non-convex</strong> <ul> <li>The whole convergence theory of GD assumes that the underlying problem is convex <ul> <li>Good case: you obtain some local minimum (which can be arbitrarily bad)</li> <li>Worst case: GD does not even converge to some local minimum</li> </ul></li> </ul></li> </ul> <p>I might add that your general problem is the worst kind of optimization problem one can consider, because it's:</p> <ul> <li>non-smooth, non-convex</li> <li><em>and even stochastic / noisy, as most underlying algorithms are heuristic approximations with some variance in the final output (and often even PRNG-based random behaviour)</em>.</li> </ul> <p>The last part is the reason why the methods offered in sklearn are that simple:</p> <ul> <li>random-search: <ul> <li>if we can't infer anything because the problem is too hard, just try many instances and pick the best</li> </ul></li> <li>grid-search: <ul> <li>let's assume there is some kind of smoothness <ul> <li>instead of random sampling, we sample in accordance with our smoothness assumption <ul> <li>(and other assumptions like: the param is probably big -> use <code>np.logspace</code> to examine more large values)</li> </ul></li> </ul></li> </ul></li> </ul> <p>While there are a lot of Bayesian approaches, including available python software like <a href="http://jaberg.github.io/hyperopt/" rel="noreferrer">hyperopt</a> and <a href="https://github.com/HIPS/Spearmint" rel="noreferrer">spearmint</a>, many people think that random-search is the best method in general (which might be surprising but emphasizes the mentioned problems).</p>
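To illustrate why the simple methods still work, here is a toy random-search sketch over a noisy, partly discrete objective. The objective and names are made up for illustration (a stand-in for "train a model, return its validation loss"); this is not sklearn's implementation.

```python
import random

def validation_score(lr, use_l1):
    """Toy stand-in for a full training run: noisy and non-smooth."""
    noise = random.uniform(-0.01, 0.01)   # run-to-run stochasticity
    penalty = 0.1 if use_l1 else 0.0      # discrete choice: non-smooth surface
    return (lr - 0.05) ** 2 + penalty + noise

random.seed(0)
trials = [(random.uniform(0.0, 1.0), random.choice([True, False]))
          for _ in range(200)]
best_lr, best_l1 = min(trials, key=lambda t: validation_score(*t))
```

Gradient descent would struggle on this surface (the l1 flag has no gradient, and the noise swamps local slope information), yet random search reliably lands near the good region.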
381
gradient descent implementation
Understanding gradient of gradient descent algorithm in Numpy
https://stackoverflow.com/questions/33621399/understanding-gradient-of-gradient-descent-algorithm-in-numpy
<p>I'm trying to figure out the python code for a multivariate gradient descent algorithm, and have found several implementations like this:</p> <pre><code>import numpy as np # m denotes the number of examples here, not the number of features def gradientDescent(x, y, theta, alpha, m, numIterations): xTrans = x.transpose() for i in range(0, numIterations): hypothesis = np.dot(x, theta) loss = hypothesis - y cost = np.sum(loss ** 2) / (2 * m) print("Iteration %d | Cost: %f" % (i, cost)) # avg gradient per example gradient = np.dot(xTrans, loss) / m # update theta = theta - alpha * gradient return theta </code></pre> <p>From the definition of gradient descent, the expression of gradient descent is: <img src="https://i.sstatic.net/w3Nav.png" alt="[\frac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}]"></p> <p>However, in numpy, it is being calculated as: <code>np.dot(xTrans, loss) / m</code>. Can someone please explain how we get this numpy expression?</p>
<p>The code is actually very straightforward; it would be beneficial to spend a bit more time reading it.</p> <ul> <li><code>hypothesis - y</code> is the first part of the squared loss' gradient (in vector form, one entry per example), and this is assigned to the <code>loss</code> variable. The calculation of the hypothesis looks like it's for linear regression.</li> <li><code>xTrans</code> is the transpose of <code>x</code>, so taking the dot product of these two gives the sum of the products of their components.</li> <li>we then divide by <code>m</code> to get the average.</li> </ul> <p>Other than that, the code has some Python style issues. We typically use <code>under_score</code> instead of <code>camelCase</code> in Python, so for example the function should be <code>gradient_descent</code>. More legible than Java, isn't it? :)</p>
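A small numerical check that the vectorized expression really is the summation from the formula (illustrative code, not from the question):

```python
import numpy as np

np.random.seed(0)
m, n = 5, 3
x = np.random.randn(m, n)
y = np.random.randn(m)
theta = np.random.randn(n)

loss = x @ theta - y          # h_theta(x^(i)) - y^(i), for all i at once

# explicit double loop, following the summation formula term by term
loop_grad = np.zeros(n)
for j in range(n):
    for i in range(m):
        loop_grad[j] += loss[i] * x[i, j]
loop_grad /= m

vec_grad = np.dot(x.T, loss) / m   # the expression from the question
```

The two results agree: the j-th entry of `x.T @ loss` is exactly the sum over i of `loss[i] * x[i, j]`.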
382
gradient descent implementation
Gradient descent Search implemented in matlab theta1 incorrect
https://stackoverflow.com/questions/52862164/gradient-descent-search-implemented-in-matlab-theta1-incorrect
<p>I studied the Machine learning course taught by Prof. Andrew Ng. <a href="http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&amp;doc=exercises/ex2/ex2.html" rel="nofollow noreferrer">This is the link</a> </p> <p>I try to implement the 1st assignment of this course. <strong>Exercise 2: Linear Regression</strong> based upon <strong>Supervised learning problem</strong></p> <p>1.Implement gradient descent using a learning rate of alpha=0.07.Since Matlab/Octave and Octave index vectors starting from 1 rather than 0, you'll probably use theta(1) and theta(2) in Matlab/Octave to represent theta0 and theta1.</p> <p>I write down a matlab code to solve this problem:</p> <pre><code>clc clear close all x = load('ex2x.dat'); y = load('ex2y.dat'); figure % open a new figure window plot(x, y, '*'); ylabel('Height in meters') xlabel('Age in years') m = length(y); % store the number of training examples x = [ones(m, 1), x]; % Add a column of ones to x theta = [0 0]; temp=0,temp2=0; h=[]; alpha=0.07;n=2; %alpha=learning rate for i=1:m temp1=0; for j=1:n h(j)=theta(j)*x(i,j); temp1=temp1+h(j); end temp=temp+(temp1-y(i)); temp2=temp2+((temp1-y(i))*(x(i,1)+x(i,2))); end theta(1)=theta(1)-(alpha*(1/m)*temp); theta(2)=theta(2)-(alpha*(1/m)*temp2); </code></pre> <p>I get the answer : </p> <pre><code>&gt;&gt; theta theta = 0.0745 0.4545 Here, 0.0745 is exact answer but 2nd one is not accurate. </code></pre> <p>Actual answer</p> <p>theta =</p> <p>0.0745 0.3800</p> <p>The data set is provided in the link. Can any one help me to fix the problem?</p>
<p>You get wrong results because you write long, unnecessary code that is prone to bugs; avoiding that is exactly why we have Matlab:</p> <pre><code>clear x = load('d:/ex2x.dat'); y = load('d:/ex2y.dat'); figure(1), clf, plot(x, y, '*'), xlabel('Age in years'), ylabel('Height in meters') m = length(y); % store the number of training examples x = [ones(m, 1), x]; % Add a column of ones to x theta=[0,0]; alpha=0.07; residuals = x*theta' - y ; %same as: sum(x.*theta,2)-y theta = theta - alpha*mean(residuals.*x); disp(theta) </code></pre>
383
gradient descent implementation
Using tf.py_func as loss function to implement gradient descent
https://stackoverflow.com/questions/55266743/using-tf-py-func-as-loss-function-to-implement-gradient-descent
<p>I'm trying to use <code>tf.train.GradientDescentOptimizer().minimize(loss)</code> to get the minimum value of the loss function. But the loss function is very complicated and I need to use numpy to calculate the value, so I use <code>tf.py_func</code> to convert the output back to a tensor and try to use gradient descent to get the result. But this raises the error: No gradients provided for any variable.<br></p> <pre class="lang-py prettyprint-override"><code>def getvalue(w): return w train_X = np.array([1,2,3,4,5]) train_Y = np.array([34,56,78,27,96]) X = tf.placeholder("float") Y = tf.placeholder("float") w = tf.Variable([0.0,0,0,0,0], name="weight") loss = tf.py_func(getvalue,[w],tf.float32) train_op = tf.train.GradientDescentOptimizer(0.02).minimize(loss) with tf.Session() as sess: sess.run(tf.initialize_all_variables()) temp = 1 for i in range(100): for (x, y) in zip(train_X, train_Y): _, w_value = sess.run([train_op, w],feed_dict={X: x,Y: y}) temp += 1 </code></pre> <p>This is a very simple loss function, but when run it still raises "No gradients provided for any variable". So is there a way to solve it? Do I need to calculate everything as tensors in order to use gradient descent? Thanks in advance.</p>
384
gradient descent implementation
How do I implement stochastic gradient descent from the following gradient descent code? (trouble adding a random sample)
https://stackoverflow.com/questions/54523274/how-do-i-implement-stochastic-gradient-descent-from-the-following-gradient-desce
<p>I'm struggling to make the gradient descent function I already have into one for stochastic gradient descent. I have the following:</p> <pre><code> gd &lt;- function(f, grad, y, X, theta0, npars, ndata, a, niters) { theta &lt;- matrix(data=NA, nrow=niters, ncol=npars) cost &lt;- vector(mode="numeric", length=niters) theta[1, ] &lt;- theta0 cost[1] &lt;- f(y, X, theta0, ndata) for (i in 2:niters) { theta[i, ] &lt;- theta[i-1, ]-a*grad(y, X, theta[i-1, ], ndata) cost[i] &lt;- f(y, X, theta[i, ], ndata) } return(list(theta=theta, cost=cost)) } </code></pre> <p>This code works fine. I'm trying to change it so that instead of <code>ndata &lt;- 1000</code>, I have 100 points randomly sampled. I tried changing the second part to </p> <pre><code>for (i in 2:niters) { samp &lt;- sample(ndata, nsubsamples) theta[i, ] &lt;- theta[i-1, ]-a*grad(y[samp,], X[samp,], theta[i-1, ], nsubsamples) cost[i] &lt;- f(y, X, theta[i, ], nsubsamples) </code></pre> <p>but I get an error saying: </p> <blockquote> <p>Error in y[samp, ] : incorrect number of dimensions.</p> </blockquote> <p>My <code>y</code> is a column from a dataset called <code>simulated_data</code> with 1000 observations. When trying to get 100 random samples from it (nsubsamples=100 and ndata=1000), <code>simulated_data[samp,]$y</code> works but <code>simulated_data$y[samp,]</code> does not. But my <code>y</code> has to be defined as <code>simulated_data$y</code>.</p> <p>So I'm wondering if there's an easier way to add a random sample and, once I've done that, whether the rest of my code is correct (as I'm a bit confused about whether I should be using <code>ndata</code> or <code>nsubsamples</code> for <code>theta[i]</code> and <code>cost[i]</code>).</p>
385
gradient descent implementation
What is wrong with my gradient descent implementation (SVM classifier with hinge loss)
https://stackoverflow.com/questions/79055573/what-is-wrong-with-my-gradient-descent-implementation-svm-classifier-with-hinge
<p>I am trying to implement and train an SVM multi-class classifier from scratch using python and numpy in jupyter notebooks.</p> <p>I have been using the CS231n course as my base of knowledge, especially this page: <a href="https://cs231n.github.io/optimization-1/" rel="nofollow noreferrer">https://cs231n.github.io/optimization-1/</a> which discusses gradient descent. I have implemented a class, SVM, that I believe is on the right track.</p> <p>Here is the basic profile for that class:</p> <pre><code>class SVM:   def __init__(self):     self.weights = np.random.randn(len(labels), X_train.shape[1]) * 0.1     self.history = []   def predict(self, X):     '''     returns class predictions in np array of size     n x num_classes, where n is the number of examples in X     '''     #matrix multiplication to apply weights to X     bounds = self.weights @ X.T     #return the predictions     return np.array(bounds).T   def loss(self, scores, y, delta=1): '''computes the loss'''     #calculate and return the loss for a prediction and corresponding truth label     #hinge loss in this case     total_loss = 0     #compute loss for each example...     
for i in range(len(scores)):       #extract values for this example       scores_of_x = scores[i]       label = y[i]       correct_score = scores_of_x[label]       incorrect_scores = np.concatenate((scores_of_x[:label], scores_of_x[label+1:]))       #use the scores for example x to compute the loss at x       wj_xi = correct_score           #these should be a vector of INCORRECT scores       wyi_xi = incorrect_scores       #this should be a vector of the CORRECT score       wy_xi = wj_xi - wyi_xi + delta  #core of the hinge loss formula       losses = np.maximum(0, wy_xi)   #lower bound the losses at 0       loss = np.sum(losses)           #sum the losses       #add to the total loss       total_loss += loss     #return the loss     avg_loss = total_loss / len(scores)     return avg_loss   def gradient(self, scores, X, y, delta=1): '''computes the gradient'''     #calculate the loss and the gradient of the loss function     #gradient of hinge loss function     gradient = np.zeros(self.weights.shape)     #calculate the gradient in each example in x     for i in range(len(X)):       #extract values for this example       scores_of_x = scores[i]       label = y[i]       x = X[i]       correct_score = scores_of_x[label]       incorrect_scores = np.concatenate((scores_of_x[:label], scores_of_x[label+1:]))       #       ##       ### start by computing the gradient of the weights of the correct classifier       ##       #       wj_xi = correct_score           #these should be a vector of INCORRECT scores       wyi_xi = incorrect_scores       #this should be a vector of the CORRECT score       wy_xi = wj_xi - wyi_xi + delta  #core of the hinge loss formula       losses = np.maximum(0, wy_xi)   #lower bound the losses at 0       #get number of nonzero losses, and scale data vector by them to get the loss       num_contributing_classifiers = np.count_nonzero(losses)       #print(f&quot;Num loss contributors: {num_contributing_classifiers}&quot;)       g = -1 * x * 
num_contributing_classifiers   #NOTE the -, very important here, doesn't apply to other scores       #add the gradient of the correct classifier to the gradient       gradient[label] += g  #because arrays are 0-indexed, but the labels are 1-indexed       # print(f&quot;correct label: {label}&quot;)       #print(f&quot;gradient:\n{gradient}&quot;)       #       ##       ### then, compute the gradient of the weights for each incorrect classifier       ##       #       for j in range(len(scores_of_x)):         #skip the correct score, since we already did it         if j == label:           continue         wj_xi = scores_of_x[j]          #should be a vector containing the score of the CURRENT classifier         wyi_xi = correct_score          #should be a vector containing the score of the CORRECT classifier         wy_xi = wj_xi - wyi_xi + delta  #core of the hinge loss formula         loss = np.maximum(0, wy_xi)   #lower bound the loss at 0         #get whether this classifier contributed to the loss, and scale the data vector by that to get the gradient         contributed_to_loss = 0         if loss &gt; 0:           contributed_to_loss = 1         g = x * contributed_to_loss        #either times 1 or times 0         #add the gradient of the incorrect classifier to the gradient         gradient[j] += g     #divide the gradient by number of examples to get the average gradient     return gradient / len(X)   def fit(self, X, y, epochs = 1000, batch_size = 256, lr=1e-2, verbose=True):     #gradient descent loop     for epoch in range(epochs):       self.history.append({'epoch': epoch})       #create a batch of samples to calculate the gradient       #NOTE: this significantly boosts the speed of training       indices = np.random.choice(len(X), batch_size, replace=False)       X_batch = X.iloc[indices]       y_batch = y.iloc[indices]             X_batch = X_batch.to_numpy()       y_batch = y_batch.to_numpy()       #evaluate class scores on training set       
predictions = self.predict(X_batch)       predicted_classes = np.argmax(predictions, axis=1)       #compute the loss: average hinge loss       loss = self.loss(predictions, y_batch)       self.history[-1]['loss'] = loss       #compute accuracy on the test set, for an intuitive metric       accuracy = np.mean(predicted_classes == y_batch)       self.history[-1]['accuracy'] = accuracy #print progress       if epoch%50 == 0 and verbose:         print(f&quot;Epoch: {epoch} | Loss: {loss} | Accuracy: {accuracy} | LR: {lr} \n&quot;)       #compute the gradient on the scores assigned by the classifier       gradient = self.gradient(predictions, X_batch, y_batch)             #backpropagate the gradient to the weights + bias       step = gradient * lr       #perform a parameter update, in the negative??? direction of the gradient       self.weights += step </code></pre> <p>That is my implementation. The fit() method is the one that trains the weights on the data passed in. I am at a stage where loss tends to decrease from one iteration to the next.</p> <p>But, the problem is, accuracy drops down to zero even as loss decreases.</p> <p>I know that they are not directly related, but shouldn't my accuracy generally trend upwards as loss goes down? This makes me think I have done something wrong in the loss() and gradient() methods. But, I can't seem to find where I went wrong. Also, sometimes, my loss will increase from one epoch to the next. 
This could be an impact of my batched evaluation of the gradient, but I am not certain.</p> <p>Here is a link to my Jupyter notebook, which should let you run my code in its current state:</p> <p><a href="https://colab.research.google.com/drive/12z4DevKDicmT4iE6AlMGrRiN6He8R9_4#scrollTo=uBTUQlscWksP" rel="nofollow noreferrer">https://colab.research.google.com/drive/12z4DevKDicmT4iE6AlMGrRiN6He8R9_4#scrollTo=uBTUQlscWksP</a> And here is a link to the data set I am using: <a href="https://www.kaggle.com/datasets/taweilo/fish-species-sampling-weight-and-height-data/code" rel="nofollow noreferrer">https://www.kaggle.com/datasets/taweilo/fish-species-sampling-weight-and-height-data/code</a></p>
<p>To anyone who runs across this thread, I solved my problem. Turns out, I was misreading the formula, and had the locations of two of the terms mixed up. The comments in my original code were actually correct. The variables wj_xi and wyi_xi should actually be defined like this (in both the gradient and the loss methods):</p> <pre><code>wj_xi = incorrect_scores #these should be a vector of INCORRECT scores wyi_xi = correct_score #this should be a vector of the CORRECT score </code></pre> <p>I had them flipped around. Also, as was mentioned in the replies, it is important to update the weights in the negative direction of the gradient, like so:</p> <pre><code>self.weights -= step </code></pre> <p>Full code:</p> <pre><code>class SVM: def __init__(self): self.weights = np.random.randn(len(labels), X_train.shape[1]) * 0.1 #9 sets of weights (9 classes) and 4 entries per set (3 features + 1 bias, shape= (9x4)) self.history = [] def predict(self, X): ''' returns class predictions in np array of size n x 10, where n is the number of examples in X ''' #matrix multiplication to apply weights to X bounds = self.weights @ X.T #return the predictions return np.array(bounds).T def loss(self, scores, y, delta=1): ''' returns the average hinge loss of the batch ''' #calculate and return the loss for a prediction and corresponding truth label #hinge loss in this case total_loss = 0 #compute loss for each example... 
for i in range(len(scores)): #extract values for this example scores_of_x = scores[i] label = y[i] correct_score = scores_of_x[label] incorrect_scores = np.concatenate((scores_of_x[:label], scores_of_x[label+1:])) #use the scores for example x to compute the loss at x wj_xi = incorrect_scores #these should be a vector of INCORRECT scores wyi_xi = correct_score #this should be a vector of the CORRECT score wy_xi = wj_xi - wyi_xi + delta #core of the hinge loss formula losses = np.maximum(0, wy_xi) #lower bound the losses at 0 loss = np.sum(losses) #sum the losses #add to the total loss total_loss += loss #return the loss avg_loss = total_loss / len(scores) #divide by the number of examples to fund the average hinge loss per-example return avg_loss def gradient(self, scores, X, y, delta=1): ''' returns the gradient of the loss function ''' #calculate the loss and the gradient of the loss function #gradient of hinge loss function gradient = np.zeros(self.weights.shape) #calculate the gradient in each example in x for i in range(len(X)): #extract values for this example scores_of_x = scores[i] label = y[i] x = X[i] correct_score = scores_of_x[label] #because arrays are 0-indexed, but the labels are 1-indexed incorrect_scores = np.concatenate((scores_of_x[:label], scores_of_x[label+1:])) # ## ### start by computing the gradient of the weights of the correct classifier ## # wj_xi = incorrect_scores #these should be a vector of INCORRECT scores wyi_xi = correct_score #this should be a vector of the CORRECT score wy_xi = wj_xi - wyi_xi + delta #core of the hinge loss formula losses = np.maximum(0, wy_xi) #lower bound the losses at 0 #get number of nonzero losses, and scale data vector by them to get the loss num_contributing_classifiers = np.count_nonzero(losses) #print(f&quot;Num loss contributors: {num_contributing_classifiers}&quot;) g = -1 * x * num_contributing_classifiers #NOTE the -, very important here, doesn't apply to other scores #add the gradient of the correct 
classifier to the gradient gradient[label] += g #because arrays are 0-indexed, but the labels are 1-indexed # print(f&quot;correct label: {label}&quot;) #print(f&quot;gradient:\n{gradient}&quot;) # ## ### then, compute the gradient of the weights for each incorrect classifier ## # for j in range(len(scores_of_x)): #skip the correct score, since we already did it if j == label: continue wj_xi = scores_of_x[j] #should be a vector containing the score of the CURRENT classifier wyi_xi = correct_score #should be a vector containing the score of the CORRECT classifier wy_xi = wj_xi - wyi_xi + delta #core of the hinge loss formula loss = np.maximum(0, wy_xi) #lower bound the loss at 0 #get whether this classifier contributed to the loss, and scale the data vector by that to get the gradient contributed_to_loss = 0 if loss &gt; 0: contributed_to_loss = 1 g = x * contributed_to_loss #either times 1 or times 0 #add the gradient of the incorrect classifier to the gradient gradient[j] += g #divide the gradient by number of examples to get the average gradient return gradient / len(X) def fit(self, X, y, epochs = 1000, batch_size = 256, lr=1e-2, verbose=True): ''' trains the model on the training set ''' #gradient descent loop for epoch in range(epochs): self.history.append({'epoch': epoch}) #create a batch of samples to calculate the gradient #NOTE: this significantly boosts the speed of training indices = np.random.choice(len(X), batch_size, replace=False) X_batch = X.iloc[indices] y_batch = y.iloc[indices] X_batch = X_batch.to_numpy() y_batch = y_batch.to_numpy() #evaluate class scores on training set predictions = self.predict(X_batch) predicted_classes = np.argmax(predictions, axis=1) if epoch%50 == 0 and verbose: print(f&quot;pred: {predicted_classes[:10]}&quot;) print(f&quot;true: {y_batch[:10]}&quot;) #compute the loss: average hinge loss loss = self.loss(predictions, y_batch) self.history[-1]['loss'] = loss #compute accuracy on the test set, for an intuitive metric 
accuracy = np.mean(predicted_classes == y_batch) self.history[-1]['accuracy'] = accuracy #reduce the learning rate as training progresses # lr *= 0.999 # self.history[-1]['lr'] = lr if epoch%50 == 0 and verbose: print(f&quot;Epoch: {epoch} | Loss: {loss} | Accuracy: {accuracy} | LR: {lr} \n&quot;) #compute the gradient on the scores assigned by the classifier gradient = self.gradient(predictions, X_batch, y_batch) #print(gradient) #backpropagate the gradient to the weights + bias step = gradient * lr #perform a parameter update, in the negative direction of the gradient self.weights -= step sm = SVM() pred = sm.predict(np.array(X_train[0:1])) sm.fit(X_train, y_train) </code></pre>
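For reference, the loss and gradient above can also be computed without the per-example Python loops. The following is a minimal vectorized numpy sketch of the same multiclass hinge loss and gradient; the function name and shapes are mine, not from the answer above:

```python
import numpy as np

def hinge_loss_and_grad(W, X, y, delta=1.0):
    """Vectorized multiclass hinge (SVM) loss and its gradient.

    W: (num_classes, num_features), X: (n, num_features), y: (n,) int labels.
    Returns the mean loss over the batch and dL/dW with W's shape.
    """
    n = X.shape[0]
    scores = X @ W.T                               # (n, num_classes)
    correct = scores[np.arange(n), y][:, None]     # score of the true class
    margins = np.maximum(0.0, scores - correct + delta)
    margins[np.arange(n), y] = 0.0                 # the true class contributes no loss
    loss = margins.sum() / n

    contributed = (margins > 0).astype(X.dtype)    # which classes crossed the margin
    contributed[np.arange(n), y] = -contributed.sum(axis=1)  # true class: -count
    grad = contributed.T @ X / n
    return loss, grad
```

This mirrors the structure of the loops above: each incorrect class that crosses the margin contributes +x to its row of the gradient, and the correct class contributes -x times the number of such classes.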
386
gradient descent implementation
how can i implement plain gradient descent with keras?
https://stackoverflow.com/questions/58759354/how-can-i-implement-plain-gradient-descent-with-keras
<p>I am a student learning deep learning. These days, I am trying to see the plot of a loss function with respect to the weights and bias. In particular, I want to apply the gradient descent method to get smooth lines, rather than the random characteristics that originate from other optimizers.</p> <p>The Keras framework offers various types of optimizers such as SGD, RMSprop, Adagrad, Adadelta, Adam, etc. However, normal, general, plain gradient descent (without random characteristics) does not appear in the official Keras documentation. Are the keyword arguments clipnorm and clipvalue relevant to plain GD? For example, if I use an SGD optimizer with clipnorm=1, would it act as plain SGD?</p> <p>Thank you in advance.</p>
<p>I think setting batch_size = n_samples is not sufficient; you also have to set momentum=0 and nesterov=False. In spite of this, I didn't see it get stuck in a local optimum, which means it still has a way to break out and find a new optimum.</p>
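In practice, a plain full-batch gradient descent update is just w &lt;- w - lr * grad(w), with no momentum and no sampling noise. A small numpy sketch of that deterministic update (all names are mine):

```python
import numpy as np

def plain_gradient_descent(grad_fn, w0, lr=0.1, steps=100):
    """Deterministic full-batch gradient descent: w <- w - lr * grad(w).

    This is the update that keras.optimizers.SGD performs when momentum=0.0,
    nesterov=False and the batch size equals the whole training set, i.e.
    there is no sampling randomness left.
    """
    w = np.asarray(w0, dtype=float)
    history = [w.copy()]
    for _ in range(steps):
        w = w - lr * grad_fn(w)
        history.append(w.copy())
    return w, history

# example (mine): minimize f(w) = ||w||^2, whose gradient is 2w
w_final, hist = plain_gradient_descent(lambda w: 2.0 * w, w0=[3.0, -2.0], lr=0.1, steps=50)
```

Running it twice from the same w0 produces identical trajectories, which is exactly the smooth, repeatable behavior asked about.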
387
gradient descent implementation
Convert stochastic gradient descent to mini batch gradient descent
https://stackoverflow.com/questions/64172528/convert-stochastic-gradient-descent-to-mini-batch-gradient-descent
<p>I need to convert a training with stochastic gradient descent in mini batch gradient descent. I report a simple example of a neural network with only 4 training sample so we can for example implement a batch size of 2 only for understand how to change the training part.</p> <p>This is the simple example of a net that have to learn an xor operation:</p> <p>This part is the network definition</p> <pre><code>#include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; typedef double NNType; // numer of inputs #define IN 2 // number neurons layer hidden #define HID 8 // numer of outputs #define OUT 1 // learning constant #define EPS 0.1 NNType input[IN]; // input NNType hidden[HID]; // layer hidden NNType output[OUT]; // output NNType weightH[HID][IN]; // weights layer hidden NNType biasesH[HID]; // biases layer hidden NNType weightO[OUT][HID]; // weights output NNType biasesO[OUT]; // biases output inline NNType Activation(NNType x) { return x&gt;0?x:0; } inline NNType Derivative(NNType x) { return x&gt;0?1:0; } </code></pre> <p>This function is the network calculation</p> <pre><code>NNType NetworkResult(NNType inp1,NNType inp2) { // load the inputs input[0]=inp1; input[1]=inp2; // compute hidden layer for (int i=0;i&lt;HID;i++) { hidden[i]=biasesH[i]; for (int j=0;j&lt;IN;j++) hidden[i] += input[j]*weightH[i][j]; hidden[i]=Activation(hidden[i]); } // compute output for (int i=0;i&lt;OUT;i++) { output[i]=biasesO[i]; for (int j=0;j&lt;HID;j++) output[i] += hidden[j]*weightO[i][j]; output[i]=Activation(output[i]); } return output[0]; } </code></pre> <p>This is the training part that I need to change to mini batch gradient descent</p> <pre><code>void TrainNet(NNType inp1,NNType inp2,NNType result,NNType *error) { NetworkResult(inp1,inp2); NNType DeltaO[OUT]; NNType DeltaH[HID]; // layer output NNType err= result-output[0]; *error+=err*err*0.5; DeltaO[0]=err*Derivative(output[0]); // layer hidden for (int i=0;i&lt;HID;i++) { NNType err=0; for (int j=0;j&lt;OUT;j++) err+= 
DeltaO[j]*weightO[j][i]; DeltaH[i]=err*Derivative(hidden[i]); } // change weights // layer output for (int i=0;i&lt;OUT;i++) { for (int j=0;j&lt;HID;j++) weightO[i][j]+=EPS*DeltaO[i]*hidden[j]; biasesO[i]+=EPS*DeltaO[i]; } // layer hidden for (int i=0;i&lt;HID;i++) { for (int j=0;j&lt;IN;j++) weightH[i][j]+=EPS*DeltaH[i]*input[j]; biasesH[i]+=EPS*DeltaH[i]; } } </code></pre> <p>Main program</p> <pre><code>// constant for weights initializations #define CONSTINIT 0.1 int main(int argc, char *argv[]) { srand(1); // initalize weights and biases for (int i=0;i&lt;HID;i++) { for (int j=0;j&lt;IN;j++) weightH[i][j]= 2.0 * ( (rand()/((NNType)RAND_MAX)) - 0.5 ) * CONSTINIT; biasesH[i]=0.1; } for (int i=0;i&lt;OUT;i++) { for (int j=0;j&lt;HID;j++) weightO[i][j]= 2.0 * ( (rand()/((NNType)RAND_MAX)) - 0.5 ) * CONSTINIT; biasesO[i]=0.1; } // calculate the results with the random weights printf(&quot;0 0 = %f\n&quot;,NetworkResult(0,0)); printf(&quot;0 1 = %f\n&quot;,NetworkResult(0,1)); printf(&quot;1 0 = %f\n&quot;,NetworkResult(1,0)); printf(&quot;1 1 = %f\n&quot;,NetworkResult(1,1)); printf(&quot;\n&quot;); // Train the net to recognize an xor operation int i; for (i=0;i&lt;10000;i++) { NNType error=0; TrainNet(0,0,0,&amp;error); // input 0 0 result 0 TrainNet(0,1,1,&amp;error); // input 0 1 result 1 TrainNet(1,0,1,&amp;error); // input 1 0 result 1 TrainNet(1,1,0,&amp;error); // input 1 1 result 0 if (error&lt;0.0001) break; // exit the training with a low error } // calculate the results after the train printf(&quot;After %d iterations\n&quot;,i); printf(&quot;0 0 = %f\n&quot;,NetworkResult(0,0)); printf(&quot;0 1 = %f\n&quot;,NetworkResult(0,1)); printf(&quot;1 0 = %f\n&quot;,NetworkResult(1,0)); printf(&quot;1 1 = %f\n&quot;,NetworkResult(1,1)); printf(&quot;\n&quot;); return 0; } </code></pre>
<p>Check <a href="https://stats.stackexchange.com/questions/117919/what-are-the-differences-between-epoch-batch-and-minibatch">What are the differences between 'epoch', 'batch', and 'minibatch'?</a>.</p> <p>In your case your input is random. You could split your training data in 2 mini-batches. Run two times your for loop with an error array. In your main:</p> <pre><code> #define BATCHES 2 // add a batch dimension NNType weightH[BATCHES][HID][IN]; // weights layer hidden NNType biasesH[BATCHES][HID]; // biases layer hidden NNType weightO[BATCHES][OUT][HID]; // weights output NNType biasesO[BATCHES][OUT]; // biases output int i,j; NNType error[BATCHES] = {0}; // updated prototype to train multiple batches void TrainNet(NNType inp1,NNType inp2,NNType result,NNType *error, int batch); //init your stuff with random val as before for BATCHES dim init(); // train for (j=0;j&lt;BATCHES;j++) { for (i=0;i&lt;10000/BATCHES;i++) { TrainNet(0,0,0,&amp;error[j], j); // input 0 0 result 0 TrainNet(0,1,1,&amp;error[j], j); // input 0 1 result 1 TrainNet(1,0,1,&amp;error[j], j); // input 1 0 result 1 TrainNet(1,1,0,&amp;error[j], j); // input 1 1 result 0 if (error[j]&lt;0.0001) break; // exit the training with a low error } } </code></pre>
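The answer above keeps one set of weights per batch; the more common formulation of mini-batch gradient descent keeps a single shared set of weights and averages the gradient over each batch before updating. A sketch of that standard formulation (not a translation of the C code above, and all names are mine), using a linear toy problem with 4 samples as a stand-in model (the XOR mapping itself is not linearly representable):

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, batch_size=2, epochs=500, seed=0):
    """Mini-batch gradient descent for least-squares linear regression:
    one shared weight vector, one update per batch, gradient averaged
    over the batch."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))            # new shuffle every epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            err = X[idx] @ w - y[idx]              # predictions minus targets
            grad = X[idx].T @ err / len(idx)       # batch-averaged gradient
            w -= lr * grad
    return w

# 4 training samples, batches of 2, target weights [2, 3]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([2.0, 3.0])
w = minibatch_gd(X, y)
```

Setting batch_size=1 recovers stochastic gradient descent, and batch_size=len(X) recovers full-batch gradient descent.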
388
gradient descent implementation
Numpy Overflow Error when implementing gradient descent algorithm
https://stackoverflow.com/questions/74264293/numpy-overflow-error-when-implementing-gradient-descent-algorithm
<p>I was trying to learn the gradient descent algorithm purely for fun, and I made some code that seems to work even though it gets stuck in a local minimum sometimes.</p> <p>But sometimes when I run it, it works, and sometimes it gives an overflow error.</p> <h3>Output when failed:</h3> <pre><code>[2 4 6 8]
E:\Projects\Python\custom_neural_network\testing\testing_gradient_descent.py:18: RuntimeWarning: overflow encountered in exp
  return 1 / (1 + np.exp(-output))
E:\Projects\Python\custom_neural_network\testing\testing_gradient_descent.py:12: RuntimeWarning: overflow encountered in square
  return sum((yhat - y) ** 2)
Final Loss: inf: inf
Prediction: [-inf -inf -inf -inf]
</code></pre> <h3>Output when successful:</h3> <pre><code>[2 4 6 8]
Final Loss: 0.2104827577503237732477
Prediction: [2. 4. 6. 8.]
</code></pre> <h3>Code:</h3> <pre><code>import numpy as np
import random

X = np.array([1, 2, 3, 4])
y = np.array([2, 4, 6, 8])

def neuron_output(X, w, b):
    return np.dot(w, X) + b

def loss_f(yhat, y):
    return sum((yhat - y) ** 2)

def loss_df(yhat, y):
    return sum(2 * (yhat - y))

def sigmoid_activation(output):
    return 1 / (1 + np.exp(-output))

def sigmoid_derivative(output):
    activation = sigmoid_activation(output)
    return activation * (1 - activation)

w = random.uniform(-1, 1)
b = random.uniform(-1, 1)
learning_rate = 0.01
epochs = 100

print(y)
for epoch in range(epochs):
    prediction = neuron_output(X, w, b)
    activation = sigmoid_activation(prediction)
    loss_derivative = loss_df(prediction, y)
    # print(loss_derivative)
    activation_derivative = sigmoid_derivative(activation)
    derivative_wrt_w = np.dot(prediction, loss_derivative)
    derivative_wrt_w = np.dot(derivative_wrt_w, activation_derivative)
    # print(derivative_wrt_w)
    w -= derivative_wrt_w * learning_rate
    if epoch == epochs - 1:
        print(f&quot;Final Loss: {loss_f(prediction, y)}&quot;)
    else:
        print(f&quot;Epoch: {epoch} Loss: {loss_f(prediction, y)}&quot;, end=&quot; \r&quot;)

print(f&quot;Prediction: {np.round(neuron_output(X, w, b))}&quot;)
</code></pre> <p>How can I fix this issue? Thanks in advance :D</p>
<p>NumPy arrays cannot auto-promote the way Python's built-in numeric types do. They are fixed to a single data type, which is what makes their operations fast and is one of the reasons NumPy is good. The downside, as in your case, is that you lose the flexibility of Python's integers, which automatically change their representation when a number outgrows it.</p> <p><a href="https://numpy.org/doc/stable/user/basics.types.html#overflow-errors" rel="nofollow noreferrer">https://numpy.org/doc/stable/user/basics.types.html#overflow-errors</a></p> <p>I would propose jumping to the overflow error in debug mode and looking at the data types your NumPy arrays have, then trying the same calculation with a plain Python list; you will see the difference :D</p>
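Besides the data-type point, the overflow in this particular traceback comes from np.exp(-output) once |output| grows large: np.exp overflows float64 when its argument exceeds roughly 709. A numerically stable sigmoid avoids ever exponentiating a large positive number. A sketch (this removes the exp overflow itself, though not the diverging weights that produce the huge outputs):

```python
import numpy as np

def stable_sigmoid(z):
    """Sigmoid that never exponentiates a large positive number.

    np.exp overflows float64 when its argument exceeds ~709, which is what
    triggers the RuntimeWarning in the question once -output gets large.
    Splitting on the sign keeps every exp() argument <= 0.
    """
    z = np.atleast_1d(np.asarray(z, dtype=float))
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))  # exp of a non-positive number
    ez = np.exp(z[~pos])                      # z < 0, so exp stays below 1
    out[~pos] = ez / (1.0 + ez)
    return out
```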
389
gradient descent implementation
Trying to Implement Gradient Descent Algorithm with Fixed Step Size
https://stackoverflow.com/questions/67466602/trying-to-implement-gradient-descent-algorithm-with-fixed-step-size
<p>I am trying to implement the gradient descent algorithm with fixed step size in MATLAB.</p> <pre><code>syms x1 x2 x3 x4
f(x1,x2,x3,x4) = (x1+10*x2)^2 + 5*(x3-x4)^2 + (x2-2*x3)^4 + 10*(x1-x4)^4;
grad_f = gradient(f);
xk = [3;-1;0;1];
while euclidian(grad_f(xk(1),xk(2),xk(3),xk(4)),4) &gt; 0.01
    xk = xk - 0.001*grad_f(xk(1),xk(2),xk(3),xk(4));
    double(xk)
end
</code></pre> <p>This is the main part, and the following is the Euclidean norm function:</p> <pre><code>function euclidian_norm = euclidian(x,size)
    total = 0;
    for i = 1:size
        total = total + x(i)^2;
    end
    euclidian_norm = sqrt(total);
end
</code></pre> <p>But when I try to run the code, it takes forever to compute, and I don't have any idea why.<br /> Thanks in advance.</p> <p>Edit: Can anyone try to run the code and tell me if the same problem occurs?</p>
<p>Frame the problem as a vector dot product to take advantage of MATLAB's built-in linear algebra routines, which are much faster than explicit loops (note the <code>sqrt</code>: the Euclidean norm is the square root of the dot product of a vector with itself):</p> <pre><code>function euclidean_norm = euclidean(x)
    euclidean_norm = sqrt(x' * x);
end
</code></pre> <p>You don't need the <code>size</code> parameter (even in your existing code, it would be less error-prone to compute it within the function instead of asking the caller to compute and pass it in).</p> <p>I would argue that this doesn't even need to be a separate function, but there's no harm in it.</p>
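A related micro-optimization: since the norm is only compared against a fixed threshold, the square root can be skipped entirely by comparing squared quantities. A small numpy illustration of the same idea (naming is mine):

```python
import numpy as np

def converged(grad, tol=0.01):
    """Equivalent to np.linalg.norm(grad) <= tol, but compares squared
    quantities so no square root is ever taken."""
    g = np.asarray(grad, dtype=float)
    return g @ g <= tol * tol
```

The same trick applies to the MATLAB loop above: `while grad' * grad > 0.01^2` avoids both the helper function call and the sqrt.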
390
gradient descent implementation
Implementing a gradient descent from a single point in Numpy?
https://stackoverflow.com/questions/70040011/implementing-a-gradient-descent-from-a-single-point-in-numpy
<p>I have encountered this question in an online test. I am looking for advice on what approach to use, rather than a full solution.</p> <p>You are walking on a mountain. You want to descend to the lowest point on the mountain and choose to apply a gradient descent to plan your route. the height at any location x,y is described by the function h:</p> <pre><code>h(x,y) = x*y**2*(sin(Pi*x) + cos (2Pi*y)) </code></pre> <p><a href="https://i.sstatic.net/SnraE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SnraE.jpg" alt="image" /></a></p> <p>Implement a function that takes as input the floats x and y, which represent your position on the mountain. This returns a list of floats (dx,dy) which represents the gradient at this position. The function accepts float x, float y as parameters and is expected to return a float array. Feel free to import numpy.</p> <pre><code>def gradient(x,y): # your code </code></pre> <p>I have looked up the np.gradient() function which could give me 'a single array corresponding to the derivatives'. However, this takes an array as input yet I have a single point as an input?</p> <p>So how would you convert this into an array? or, maybe I am expected to write a gradient descent algorithm from scratch using np.zero() arrays instead of np.gradient()? I would be grateful for any analogies, links and snippets of code with a similar approach, yet I'd really like to have a go at tackling this problem myself first.</p>
<p>wasnt able to figure out an array solution yet,</p> <p>reverting to SymPy library I got this code to retrieve a list of</p> <p>dx, dy at point x,y given your function:</p> <pre><code>import sympy as sym from sympy.parsing.sympy_parser import parse_expr class FunZ: def __init__(self, f): self.x, self.y = sym.symbols('x y') self.f = parse_expr(f) # print('f : ', self.f) def evalu(self, xx, yy): return float(self.f.subs({self.x: xx, self.y: yy}).evalf()) def derX(self, xx, yy): self.dx = sym.diff(self.f, self.x) # print('dx : ', self.dx) return float(self.dx.subs({self.x: xx, self.y: yy}).evalf()) def derY(self, xx, yy): self.dy = sym.diff(self.f, self.y) # print('dy :', self.dy) return float(self.dy.subs({self.x: xx, self.y: yy}).evalf()) def derXY(self, xx, yy): return [float(self.derX(xx, yy)), float(self.derY(xx, yy))] xx = -3 yy = -2.54 funz = FunZ('x * y ** 2 * (sin(pi * x) + cos(2 * pi * y))') # print('evalu : ',funz.evalu(xx,yy)) print( funz.evalu(xx,yy), type(funz.evalu(xx,yy)),'\n', funz.derX(xx, yy), type(funz.derX(xx, yy)),'\n', funz.derY(xx, yy), type(funz.derY(xx, yy)),'\n' ) print('_________________') print(funz.derXY(xx,yy), type(funz.derXY(xx, yy))) </code></pre> <p>output:</p> <pre><code>18.74673336701243 &lt;class 'float'&gt; 54.55598636936226 &lt;class 'float'&gt; 15.481918816962425 &lt;class 'float'&gt; _________________ [54.55598636936226, 15.481918816962425] &lt;class 'list'&gt; </code></pre> <p>not sure code is right, got it visualized throught:</p> <pre><code>import numpy as np import sympy as sym from sympy.parsing.sympy_parser import parse_expr import matplotlib.pyplot as plt class FunZ: def __init__(self, f): self.x, self.y = sym.symbols('x y') self.f = parse_expr(f) # print('f : ', self.f) def evalu(self, xx, yy): return float(self.f.subs({self.x: xx, self.y: yy}).evalf()) def derX(self, xx, yy): self.dx = sym.diff(self.f, self.x) # print('dx : ', self.dx) return float(self.dx.subs({self.x: xx, self.y: yy}).evalf()) def derY(self, xx, 
yy): self.dy = sym.diff(self.f, self.y) # print('dy :', self.dy) return float(self.dy.subs({self.x: xx, self.y: yy}).evalf()) def derXY(self, xx, yy): return [float(self.derX(xx, yy)), float(self.derY(xx, yy))] XX = np.linspace(-3, 3, 100) YY = np.linspace(-3, 3, 100) funz = FunZ('x * y ** 2 * (sin(pi * x) + cos(2 * pi * y))') ij = [(x, y, funz.evalu(x, y)) for x in XX for y in YY] arr = np.array(ij, dtype=float) # print(arr, arr.size, arr.shape, arr.dtype) der_x = [(a, b, funz.derX(a, b)) for a in XX for b in YY] derX = np.array(der_x) # print(derX, derX.size, derX.shape, derX.dtype) der_y = [(a, b, funz.derY(a, b)) for a in XX for b in YY] derY = np.array(der_y) # print(derY, derY.size, derY.shape, derY.dtype) x = arr[:, 0] y = arr[:, 1] data = arr[:, 2] fig = plt.figure() ax = fig.add_subplot(221, projection=&quot;3d&quot;) ax.plot_trisurf(x, y, data, color=&quot;red&quot;, alpha=0.5) ax2 = fig.add_subplot(223, projection=&quot;3d&quot;) ax2.plot_trisurf( x, y, derX[:, 2], color=&quot;blue&quot;, alpha=0.2) ax3 = fig.add_subplot(224, projection=&quot;3d&quot;) ax3.plot_trisurf( x, y, derY[:, 2], color=&quot;green&quot;, alpha=0.2) plt.show() </code></pre> <p>output is:</p> <p><a href="https://i.sstatic.net/SbEPi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SbEPi.png" alt="output" /></a></p> <p>the code, especially the latter is very very slow, so I would be interested in a faster solution too, but given my poor knowlegde of python I can only wait for suggestions</p>
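The gradient of this particular h can also be written down in closed form with the product rule, with no symbolic library at all, which is much faster than the SymPy route. A numpy sketch matching the question's requested signature:

```python
import numpy as np

def h(x, y):
    return x * y**2 * (np.sin(np.pi * x) + np.cos(2 * np.pi * y))

def gradient(x, y):
    """Closed-form gradient of h, via the product rule:
    dh/dx = y^2*(sin(pi x) + cos(2 pi y)) + pi*x*y^2*cos(pi x)
    dh/dy = 2*x*y*(sin(pi x) + cos(2 pi y)) - 2*pi*x*y^2*sin(2 pi y)
    """
    s = np.sin(np.pi * x) + np.cos(2 * np.pi * y)
    dh_dx = y**2 * s + np.pi * x * y**2 * np.cos(np.pi * x)
    dh_dy = 2 * x * y * s - 2 * np.pi * x * y**2 * np.sin(2 * np.pi * y)
    return [dh_dx, dh_dy]
```

At (x, y) = (-3, -2.54) this returns approximately [54.556, 15.482], matching the SymPy values printed in the answer above.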
391
gradient descent implementation
Gradient Descent Matlab implementation
https://stackoverflow.com/questions/21799435/gradient-descent-matlab-implementation
<p>I have gone through many codes in stack overflow and made my own on same line. there is some problem with this code I am unable to understand. I am storing the value theta1 and theta 2 and also the cost function for analysis purpose. The data for x and Y can be downloaded from this <a href="http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&amp;doc=exercises/ex2/ex2.html" rel="nofollow">Openclassroom</a> page. It has x and Y data in form of .dat files that you can open in notepad.</p> <pre><code> %Single Variate Gradient Descent Algorithm%% clc clear all close all; % Step 1 Load x series/ Input data and Output data* y series x=load('D:\Office Docs_Jay\software\ex2x.dat'); y=load('D:\Office Docs_Jay\software\ex2y.dat'); %Plot the input vectors plot(x,y,'o'); ylabel('Height in meters'); xlabel('Age in years'); % Step 2 Add an extra column of ones in input vector [m n]=size(x); X=[ones(m,1) x];%Concatenate the ones column with x; % Step 3 Create Theta vector theta=zeros(n+1,1);%theta 0,1 % Create temporary values for storing summation temp1=0; temp2=0; % Define Learning Rate alpha and Max Iterations alpha=0.07; max_iterations=1; % Step 4 Iterate over loop for i=1:1:max_iterations %Calculate Hypothesis for all training example for k=1:1:m h(k)=theta(1,1)+theta(2,1)*X(k,2); %#ok&lt;AGROW&gt; temp1=temp1+(h(k)-y(k)); temp2=temp2+(h(k)-y(k))*X(k,2); end % Simultaneous Update tmp1=theta(1,1)-(alpha*1/(2*m)*temp1); tmp2=theta(2,1)-(alpha*(1/(2*m))*temp2); theta(1,1)=tmp1; theta(2,1)=tmp2; theta1_history(i)=theta(2,1); %#ok&lt;AGROW&gt; theta0_history(i)=theta(1,1); %#ok&lt;AGROW&gt; % Step 5 Calculate cost function tmp3=0; tmp4=0; for p=1:m tmp3=tmp3+theta(1,1)+theta(2,1)*X(p,1); tmp4=tmp4+theta(1,1)+theta(2,1)*X(p,2); end J1_theta0(i)=tmp3*(1/(2*m)); %#ok&lt;AGROW&gt; J2_theta1(i)=tmp4*(1/(2*m)); %#ok&lt;AGROW&gt; end theta hold on; plot(X(:,2),theta(1,1)+theta(2,1)*X); </code></pre> <p>I am getting the value of</p> <blockquote> 
<p>theta as 0.0373 and 0.1900, when it should be 0.0745 and 0.3800</p> </blockquote> <p>The values I am getting are approximately half of the ones I am expecting.</p>
<p>I have been trying to implement the iterative step with matrices and vectors (i.e., not updating each parameter of theta separately). Here is what I came up with (only the gradient step is here):</p> <pre><code>h = X * theta;                           % hypothesis
err = h - y;                             % error
gradient = alpha * (1 / m) * (X' * err); % gradient of the cost
theta = theta - gradient;                % parameter update
</code></pre> <p>The hard part to grasp is that the "sum" in the gradient step of the previous examples is actually performed by the matrix multiplication <code>X'*err</code>. You can also write it as <code>(err'*X)'</code>.</p>
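For comparison, the same vectorized step translates almost symbol-for-symbol into numpy; a sketch (the toy data and fitting loop are mine, added only to exercise the step):

```python
import numpy as np

def gd_step(X, y, theta, alpha):
    """One vectorized gradient-descent step; numpy version of the
    MATLAB/Octave snippet (X is (m, n) with a leading column of ones,
    y and theta are column vectors)."""
    h = X @ theta                          # hypothesis
    err = h - y                            # error
    grad = (alpha / len(y)) * (X.T @ err)  # X.T @ err performs the sum over examples
    return theta - grad

# toy fit (mine): y = 1 + 2x
X = np.c_[np.ones(50), np.linspace(0.0, 1.0, 50)]
y = X @ np.array([[1.0], [2.0]])
theta = np.zeros((2, 1))
for _ in range(5000):
    theta = gd_step(X, y, theta, alpha=0.5)
```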
392
gradient descent implementation
Implementation of gradient descent is very inefficient and does not work in all cases
https://stackoverflow.com/questions/75255159/implementation-of-gradient-descent-is-very-inefficient-and-does-not-work-in-all
<p>I am supposed to implement gradient descent for linear regression. Here is the implementation:</p> <pre><code>class SimpleLinearRegressionModel():

    def __init__(self, x, y, theta, alpha):
        self.x = x
        self.y = y
        self.theta = theta
        self.alpha = alpha

    '''
    Equation for the regression line.
    input x_i (float) - single input feature
    @return corresponding model output (float)
    '''
    def h(self, x_i):
        return self.theta[0] + x_i * self.theta[1]

    '''
    Loss function measuring mean squared error of the regression line for a
    given training set and model parameters.
    @return MSE based on the current parameters (float)
    '''
    def J(self):
        m = len(self.y)
        return (1 / (m)) * np.sum((self.h(self.x) - self.y) ** 2)

    def get_gradient(self):
        m = len(self.y)
        return np.array([(1 / m) * np.sum(self.h(self.x) - self.y),
                         (1 / m) * np.sum((self.h(self.x) - self.y) * self.x)])

    '''
    Update the model parameters (i.e. the two theta values) for one gradient
    descent step.
    '''
    def gradient_descent_step(self):
        return self.theta - self.alpha * self.get_gradient()

    '''
    Run gradient descent to optimize the model parameters.
    @param threshold (float) - run gradient descent until the magnitude of
    the gradient is below this value.
    @return a list storing the value of the cost function after every step
    of gradient descent (float list)
    '''
    def run_gradient_descent(self, threshold=.01):
        cost_values = []
        while np.linalg.norm(self.get_gradient()) &gt; threshold:
            self.theta = self.gradient_descent_step()
            cost_values.append(self.J())
        return cost_values
</code></pre> <p>This works for a small dataset (25 elements) but when using a large one (20000 elements), it becomes impossibly slow. How can I optimize this? I've tried to vectorize all the functions, but <code>J()</code> and <code>get_gradient()</code> seem especially slow. I've also noticed when debugging while using the large dataset that the error is increasing as the algorithm runs, which definitely should not be happening.</p>
<p>A few things are obvious immediately:</p> <ol> <li><p>You are running <code>self.h(self.x)</code> three times in every loop iteration, once should be enough. Try to store and re-use the intermediate values of your calculation, also for <code>self.h(self.x) - self.y</code> etc. You may need to combine <code>J()</code> and <code>gradient_descent_step()</code> for this (and change the ordering, i.e., save the loss before the update instead of afterwards).</p> </li> <li><p>Most likely, using <code>np.mean(x)</code> instead of <code>np.sum(x) / len(x)</code> is beneficial (at the very least, it simplifies your code).</p> </li> <li><p>You could just do <code>self.theta -= self.alpha * self.get_gradient()</code> in the gradient step to avoid having to assign <code>self.theta</code> outside the method.</p> </li> </ol> <p>Your convergence behavior (e.g., if the loss sometimes increases) will depend a lot on the choice of the learning rate (<code>self.alpha</code>), so play around with its value to see what happens.</p>
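Point 1 above (compute the residual once and reuse it) can be illustrated with a standalone function. A sketch that mirrors the question's model theta[0] + theta[1]*x and its 1/m gradient, but does a single pass over the data per iteration (the function and toy data are mine):

```python
import numpy as np

def gradient_descent(x, y, theta, alpha, threshold=0.01, max_iter=100_000):
    """Same model as the question (h = theta[0] + theta[1]*x) and the same
    1/m gradient, but the residual h(x) - y is computed once per iteration
    and reused for the convergence test, the cost, and the update."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    theta = np.asarray(theta, dtype=float)
    cost_values = []
    for _ in range(max_iter):
        residual = theta[0] + theta[1] * x - y        # one pass over the data
        grad = np.array([residual.mean(), (residual * x).mean()])
        if np.linalg.norm(grad) <= threshold:
            break
        cost_values.append(np.mean(residual ** 2))
        theta = theta - alpha * grad
    return theta, cost_values

# toy data (mine): y = 3 + 2x exactly
x = np.linspace(0.0, 1.0, 1000)
y = 3.0 + 2.0 * x
theta, costs = gradient_descent(x, y, theta=np.zeros(2), alpha=0.5, threshold=1e-3)
```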
393
gradient descent implementation
Stochastic Gradient Descent on Custom Functions
https://stackoverflow.com/questions/71120229/stochastic-gradient-descent-on-custom-functions
<p>I am working with the R programming language. I am trying to perform Stochastic Gradient Descent on custom defined functions.</p> <p>For instance, here is an example of using Gradient Descent to optimize a custom function (using the well established &quot;pracma&quot; library):</p> <pre><code># define function: Rastrigin &lt;- function(x) { return(20 + x[1]^2 + x[2]^2 - 10*(cos(2*pi*x[1]) + cos(2*pi*x[2]))) } # run gradient descent: library(pracma) &gt; steep_descent(c(1, 1), Rastrigin) $xmin [1] 0.9949586 0.9949586 $fmin [1] 1.989918 $niter [1] 3 </code></pre> <p>Now, I am trying to run Stochastic Gradient Descent on this same function. I found the following package that allow for Stochastic Gradient Descent (e.g. <a href="https://www.rdocumentation.org/packages/sgd/versions/1.1.1" rel="nofollow noreferrer">https://www.rdocumentation.org/packages/sgd/versions/1.1.1</a>, <a href="https://rdrr.io/cran/torch/man/optim_rmsprop.html" rel="nofollow noreferrer">https://rdrr.io/cran/torch/man/optim_rmsprop.html</a>) - but this seems to more suited for functions within pre-existing statistical and machine learning models. I also tried looking for popular variants of Stochastic Gradient Descent such as ADAGRAD or RMSPROP, <strong>but there does not seem to be any straightforward methods to implement Stochastic Gradient Descent on custom defined functions.</strong></p> <p>For instance - suppose I wanted to run Stochastic Gradient Descent on the &quot;Rastrigin&quot; function that I defined above; how to do this?</p> <p>Thanks!</p> <p><strong>Note:</strong> I understand that performing Gradient Descent on a function requires knowledge of the function's derivatives. 
From this Stack Overflow post (<a href="https://stackoverflow.com/questions/8779362/explicit-formula-versus-symbolic-derivatives-in-r">Explicit formula versus symbolic derivatives in R</a>), we can obtain the derivatives of the Rastrigin function:</p> <pre><code># load libraries
library(Ryacas0)
library(Ryacas)

# define symbolic variables
x &lt;- Sym(&quot;x&quot;)
y &lt;- Sym(&quot;y&quot;)

# define the Rastrigin function (here I am defining the function in &quot;x&quot; and &quot;y&quot;
# instead of &quot;x[1]&quot; and &quot;x[2]&quot;)
z &lt;- 20 + x^2 + y^2 - 10*(cos(2*pi*x) + cos(2*pi*y))

# first derivative with respect to x (note: 2 * pi = 6.283)
dx &lt;- deriv(z, x, 1)
dx
yacas_expression(2 * x - -62.83185307 * sin(6.28318530717959 * x))

# first derivative with respect to y
dy &lt;- deriv(z, y, 1)
dy
yacas_expression(2 * y - -62.83185307 * sin(6.28318530717959 * y))
</code></pre> <p><strong>Now that we know the first derivatives of the Rastrigin function with respect to &quot;x&quot; and &quot;y&quot; - can we write a function that performs Stochastic Gradient Descent on the Rastrigin function in R?</strong></p>
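Using those derivatives, a plain gradient-descent loop can be sketched as follows (shown here in Python for illustration rather than R; a stochastic variant would additionally perturb each gradient with random noise). All names are illustrative, not from any package:

```python
import math

# Illustrative sketch: plain gradient descent on the Rastrigin function,
# using the derivatives derived symbolically above (62.8318... = 20*pi).
def rastrigin(x, y):
    return 20 + x**2 + y**2 - 10 * (math.cos(2*math.pi*x) + math.cos(2*math.pi*y))

def grad(x, y):
    gx = 2*x + 20*math.pi * math.sin(2*math.pi*x)  # d/dx
    gy = 2*y + 20*math.pi * math.sin(2*math.pi*y)  # d/dy
    return gx, gy

def gradient_descent(x, y, lr=1e-3, iters=5000):
    for _ in range(iters):
        gx, gy = grad(x, y)
        x -= lr * gx
        y -= lr * gy
    return x, y
```

Started from c(1, 1), this converges to the same local minimum that `steep_descent` reported above (about (0.995, 0.995), with f ≈ 1.99).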
394
gradient descent implementation
Implementing stochastic gradient descent
https://stackoverflow.com/questions/64739896/implementing-stochastic-gradient-descent
<p>I am trying to implement a basic version of stochastic gradient descent with multiple linear regression and the L2 norm as the loss function.</p> <p>The result can be seen in this picture:</p> <p><a href="https://i.sstatic.net/yB5My.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yB5My.png" alt="enter image description here" /></a></p> <p>It's pretty far off the ideal regression line, but I don't really understand why that's the case. I double-checked all array dimensions and they all seem to fit.</p> <p>Below is my source code. If anyone can see my error or give me a hint I would appreciate that.</p> <pre><code>def SGD(x,y,learning_rate):
    theta = np.array([[0],[0]])
    for i in range(N):
        xi = x[i].reshape(1,-1)
        y_pre = xi@theta
        theta = theta + learning_rate*(y[i]-y_pre[0][0])*xi.T
    print(theta)
    return theta

N = 100
x = np.array(np.linspace(-2,2,N))
y = 4*x + 5 + np.random.uniform(-1,1,N)
X = np.array([x**0,x**1]).T

plt.scatter(x,y,s=6)
th = SGD(X,y,0.1)
y_reg = np.matmul(X,th)
print(y_reg)
print(x)
plt.plot(x,y_reg)
plt.show()
</code></pre> <p>Edit: Another solution was to shuffle the measurements with <code>x = np.random.permutation(x)</code></p>
<p>To illustrate my comment:</p> <pre><code>def SGD(x,y,n,learning_rate):
    theta = np.array([[0],[0]])
    # currently it does exactly one iteration. do more
    for _ in range(n):
        for i in range(len(x)):
            xi = x[i].reshape(1,-1)
            y_pre = xi@theta
            theta = theta + learning_rate*(y[i]-y_pre[0][0])*xi.T
    print(theta)
    return theta
</code></pre> <p><code>SGD(X,y,10,0.01)</code> yields the correct result</p> <p><a href="https://i.sstatic.net/WUZ6d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WUZ6d.png" alt="10 iterations" /></a></p>
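The two remedies discussed in this thread — running several epochs instead of one, and visiting the samples in a shuffled order each epoch — can be sketched in plain Python (hypothetical names, simple scalar model y ≈ w*x + b):

```python
import random

def sgd(xs, ys, lr=0.05, epochs=50):
    b, w = 0.0, 0.0                  # intercept and slope
    idx = list(range(len(xs)))
    for _ in range(epochs):          # several passes over the data
        random.shuffle(idx)          # shuffled sample order each epoch
        for i in idx:
            err = ys[i] - (b + w * xs[i])
            b += lr * err            # per-sample gradient step
            w += lr * err * xs[i]
    return b, w
```

On noise-free data generated from y = 4x + 5 this recovers the intercept and slope closely after a few epochs.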
395
gradient descent implementation
My python implementation of Gradient Descent is not working well
https://stackoverflow.com/questions/77095271/my-python-implementation-of-gradient-descent-is-not-working-well
<p>I am trying to create a linear regression model that uses batch gradient descent, but the error (MSE) value never decreases. LinearModel is just a template class that initializes the hyperparameters (step_size=0.001, max_iter=10000, eps=0.001, theta_0=None, verbose=True).</p> <pre><code># The data is stored in an array of (m, n), with n columns and m rows
# x = array((m,n))
# y = array((m,1))
class LinearRegression(LinearModel):
    &quot;&quot;&quot;Linear regression with Gradient Descent.&quot;&quot;&quot;

    def fit(self, x, y):
        &quot;&quot;&quot;Run Gradient Descent Method to minimize J(theta) for Linear Regression.

        :param x: Training example inputs. Shape (m, n).
        :param y: Training example labels. Shape (m,).
        &quot;&quot;&quot;
        def squared_error(theta, x, y):
            # Calculate the predicted value using the current parameter values,
            # take the difference between the predicted and actual values,
            # and return the mean of the squared differences
            return np.mean((x.dot(theta) - y)**2)

        # Calculate the partial derivatives of the squared error with
        # respect to each parameter
        def grad_squared_error(theta, x, y):
            m, n = x.shape
            # Initialize an array to store the partial derivatives
            grad = np.zeros(n)
            for j in range(n):
                # Update the partial derivatives using the chain rule
                # (np.newaxis to make the numpy array 2D for Transpose)
                grad[j] += ((x.dot(theta)) - y).dot(x[:, j][np.newaxis].T)
            # Return the partial derivatives divided by (number of data points * 2)
            return [g / (2*m) for g in grad]

        m, n = x.shape
        if self.theta == None:
            self.theta = np.zeros(n)
        decay_factor = 0.9    # Adjust this as needed
        decay_interval = 10   # Adjust this as needed
        for i in range(self.max_iter):
            # Calculate the partial derivatives of the squared error
            # using the current parameter values
            grad = grad_squared_error(self.theta, x, y)
            # Update each parameter value using the gradient descent formula
            for j in range(len(self.theta)):
                self.theta[j] -= self.step_size * grad[j]
            # Print the current parameter values and squared error
            if self.verbose and i % decay_interval == 0:
                self.step_size *= decay_factor
                print(f&quot;Iteration {i+1}: theta = {self.theta}, squared error = {squared_error(self.theta, x, y)}&quot;)
            if squared_error(self.theta, x, y) &lt; self.eps:
                if self.verbose:
                    print(&quot;Converged With Parameters:&quot;)
                    print(f&quot;Iteration {i+1}: theta = {self.theta}, squared error = {squared_error(self.theta, x, y)}&quot;)
                break

    def predict(self, x):
        &quot;&quot;&quot;Make a prediction given new inputs x.

        :param x: Inputs of shape (m, n).
        :return: Outputs of shape (m,).
        &quot;&quot;&quot;
        return np.dot(x, self.theta)
</code></pre> <p>A few of the last output values follow:</p> <pre><code>Iteration 9911: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9921: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9931: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9941: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9951: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9961: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9971: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9981: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
Iteration 9991: theta = [-0.47145182 5.37944606 6.10798726 5.03442352 4.33745377 1.48791336], squared error = 19565.226327190878
</code></pre> <p>I have tried even decreasing the learning_rate after 10 steps but still not working. I have used the Fish_Market data from Kaggle: <a href="https://www.kaggle.com/datasets/tarunkumar1912/fish-dataset1?select=Fish_dataset.csv" rel="nofollow noreferrer">https://www.kaggle.com/datasets/tarunkumar1912/fish-dataset1?select=Fish_dataset.csv</a></p>
<p>So, I was able to get the code to work as intended.</p> <ul> <li>I had to vectorize the code as much as possible to make it run faster (Gradient Descent and Stochastic Gradient Descent were taking AGES to find the parameters of a training set of shape (18000, 7)).</li> <li>I was doing some wrong vector calculations in the above code, specifically inside the for loop over range(m) =&gt; (columns of input training set), for Batch Gradient Descent.</li> </ul> <p>Here's the code:</p> <pre><code>class LinearRegression2(LinearModel):
    &quot;&quot;&quot;Linear regression with Multiple Methods.&quot;&quot;&quot;

    def fit(self, x, y):
        &quot;&quot;&quot;Run Gradient Descent Method to minimize J(theta) for Linear Regression.

        :param x: Training example inputs. Shape (m, n).
        :param y: Training example labels. Shape (m,).
        &quot;&quot;&quot;
        m, n = x.shape
        if self.theta is None:
            self.theta = np.zeros(n)
        J_arr = []  # To store cost values for plotting
        e_arr = []  # To store MSE values for plotting
        J = 0
        error = 0

        if self.method == 'norm':
            self.theta = np.linalg.inv(x.T @ x) @ (x.T @ y)
            J = np.mean((y - x @ self.theta) ** 2) / 2.0
            error = np.mean((x @ self.theta - y) ** 2)
            if self.verbose:
                print(f&quot;J: {J}, squared error = {np.mean((y - x @ self.theta) ** 2)}, theta = {self.theta}&quot;)

        if self.method == 'gdb':
            decay_factor = 0.5    # Adjust this as needed (if needed)
            decay_interval = 10   # Adjust this as needed (if needed)
            for i in range(self.max_iter):
                error = y - x @ self.theta
                J = np.mean((y - x @ self.theta) ** 2) / 2.0
                J_arr.append(J)
                e_arr.append(np.mean(error ** 2))
                gradient = -x.T @ error / m
                self.theta -= self.step_size * gradient
                # if i % decay_interval == 0:
                #     self.step_size *= decay_factor
                if self.verbose and i % decay_interval == 0:
                    print(f&quot;J: {J}, squared error = {np.mean(error ** 2)}, theta = {self.theta}&quot;)
                if J &lt; self.eps and self.verbose:
                    print(&quot;Converged With Parameters:&quot;)
                    print(f&quot;J: {J}, squared error = {np.mean(error ** 2)}, theta = {self.theta}&quot;)
                    break

        if self.method == 'gds':
            decay_factor = 0.5    # Adjust this as needed
            decay_interval = 10   # Adjust this as needed
            flag = False
            for _ in range(self.max_iter):
                for i in range(m):
                    idx = np.random.randint(0, m)
                    xi = x[idx:idx+1]
                    yi = y[idx:idx+1]
                    error = yi - xi @ self.theta
                    J = np.mean((yi - xi @ self.theta) ** 2) / 2.0
                    J_arr.append(J)
                    e_arr.append(np.mean(error ** 2))
                    gradient = -(error @ xi) / m
                    self.theta -= self.step_size * gradient
                    # if i % decay_interval == 0:
                    #     self.step_size *= decay_factor
                    if self.verbose and i % decay_interval == 0:
                        print(f&quot;J: {J}, squared error = {np.mean(error ** 2)}, theta = {self.theta}&quot;)
                    if J &lt; self.eps:
                        flag = True
                        break
                if self.verbose and flag == True:
                    print(&quot;Converged With Parameters:&quot;)
                    print(f&quot;J: {J}, squared error = {np.mean(error ** 2)}, theta = {self.theta}&quot;)
                    break

        if self.plot and self.method != 'norm':
            fig, axes = plt.subplots(1, 2, constrained_layout=True, figsize=(15, 5))
            axes[0].set_title(&quot;J(theta)&quot;)
            axes[0].set_xlabel(f&quot;iterations&quot;)
            axes[0].set_ylabel('J(theta)')
            axes[0].plot(J_arr, color='red')
            axes[1].set_title(&quot;mse&quot;)
            axes[1].set_xlabel(f&quot;iterations&quot;)
            axes[1].set_ylabel('mse')
            axes[1].plot(e_arr, color='g')
        elif self.plot and self.method == 'norm':
            print('')
            print('----------------------------------------------------------')
            print('Normal Equation used to find &quot;theta&quot;, Graph Not Available.')
        else:
            pass

    def predict(self, x):
        &quot;&quot;&quot;Make a prediction given new inputs x.

        :param x: Inputs of shape (m, n).
        :return: Outputs of shape (m,).
        &quot;&quot;&quot;
        return x @ self.theta
</code></pre>
396
gradient descent implementation
Batch Gradient Descent with Python not converging
https://stackoverflow.com/questions/59406021/batch-gradient-descent-with-python-not-converging
<p>Here's the Jupyter Notebook I used for this practice: <a href="https://drive.google.com/file/d/18-OXyvXSit5x0ftiW9bhcqJrO_SE22_S/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/18-OXyvXSit5x0ftiW9bhcqJrO_SE22_S/view?usp=sharing</a></p> <p>I was practicing simple Linear Regression with <a href="https://www.kaggle.com/luddarell/101-simple-linear-regressioncsv" rel="nofollow noreferrer">this</a> data set, and here are my parameters:</p> <pre><code>sat = np.array(data['SAT'])
gpa = np.array(data['GPA'])
theta_0 = 0.01
theta_1 = 0.01
alpha = 0.003
cost = 0
m = len(gpa)
</code></pre> <p>I tried to optimize the cost function calculation by turning it into a matrix and performing element-wise operations. This is the resulting formula I came up with:</p> <p>Cost function optimization: <img src="https://i.sstatic.net/JnlCJ.png" alt="Cost function optimization (image)"></p> <p><strong>Cost function</strong></p> <pre><code>def calculateCost(matrix_x,matrix_y,m):
    global theta_0,theta_1
    cost = (1 / (2 * m)) * ((theta_0 + (theta_1 * matrix_x) - matrix_y) ** 2).sum()
    return cost
</code></pre> <p>I also tried to do the same for the gradient descent.</p> <p><strong>Gradient Descent</strong></p> <pre><code>def gradDescent(alpha,matrix_x,matrix_y):
    global theta_0,theta_1,m,cost
    cost = calculateCost(sat,gpa,m)
    while cost &gt; 1:
        temp_0 = theta_0 - alpha * (1 / m) * (theta_0 + theta_1 * matrix_x - matrix_y).sum()
        temp_1 = theta_1 - alpha * (1 / m) * (matrix_x.transpose() * (theta_0 + theta_1 * matrix_x - matrix_y)).sum()
        theta_0 = temp_0
        theta_1 = temp_1
</code></pre> <p>I am not entirely sure whether both implementations are correct. 
The implementation returned a cost of <em>114.89379821428574</em> and somehow this is how the "descent" looked like when I graph the costs:</p> <p>Gradient descent graph:</p> <p><img src="https://i.sstatic.net/0cTea.png" alt="Gradient descent graph"></p> <p>Please correct me if I have implemented both the <strong>cost function</strong> and <strong>gradient descent</strong> correctly, and provide explanation if possible as I am still a beginner in multivariable calculus. Thank you.</p>
<p>There are many issues with that code.</p> <p>First, the two main issues that are behind the bugs:</p> <p>1) The line</p> <pre><code>temp_1 = theta_1 - alpha * (1 / m) * (matrix_x.transpose() * (theta_0 + theta_1 * matrix_x - matrix_y)).sum()
</code></pre> <p>specifically the matrix multiplication <code>matrix_x.transpose() * (theta_0 + ...)</code>. The <code>*</code> operator does element-wise multiplication, so the result is of size <code>20x20</code>, where you expect a gradient that is of size <code>1x1</code> (as you update a single real variable <code>theta_1</code>).</p> <p>2) The <code>while cost&gt;1:</code> condition in your gradient computation. You never update the cost in the loop...</p> <p>Here is a version of your code that works:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

sat = np.random.rand(40, 1)
rand_a = np.random.randint(500)
rand_b = np.random.randint(400)
gpa = rand_a * sat + rand_b

theta_0 = 0.01
theta_1 = 0.01
alpha = 0.1
cost = 0
m = len(gpa)

def calculateCost(matrix_x, matrix_y, m):
    global theta_0, theta_1
    cost = (1 / 2 * m) * ((theta_0 + (theta_1 * matrix_x) - matrix_y) ** 2).sum()
    return cost

def gradDescent(alpha, matrix_x, matrix_y, num_iter=10000, eps=0.5):
    global theta_0, theta_1, m, cost
    cost = calculateCost(sat, gpa, m)
    cost_hist = [cost]
    for i in range(num_iter):
        theta_0 -= alpha * (1 / m) * (theta_0 + theta_1 * matrix_x - matrix_y).sum()
        theta_1 -= alpha * (1 / m) * (matrix_x.transpose().dot(theta_0 + theta_1 * matrix_x - matrix_y)).sum()
        cost = calculateCost(sat, gpa, m)
        cost_hist.append(cost)
        if cost&lt;eps:
            return cost_hist

if __name__=="__main__":
    print("init_cost==", cost)
    cost_hist = gradDescent(alpha, sat, gpa)
    print("final_cost,num_iters", cost, len(cost_hist))
    print(rand_b, theta_0, rand_a, theta_1)
    plt.plot(cost_hist, linewidth=5, color="r")
    plt.show()
</code></pre> <p>Finally, the coding style itself, while not responsible for the bugs, is definitely an issue here. Generally, global variables are just bad practice. 
They just lead to bug-prone, unmaintainable code. It's always better to store them in small data structures and pass them around to functions. In your case, you can just put the initial parameters in a list, pass them to your gradient computation function, and return the optimized ones at the end. </p>
397
gradient descent implementation
Gradient descent algo implementation
https://stackoverflow.com/questions/74258007/gradient-descent-algo-implementation
<p>I am trying question 9.30 in the book 'Convex Optimization' by Boyd. But for some reason I can't make the backtracking line search work. Here is my code:</p> <pre><code>import numpy as np

n, m = 100, 200
A = np.random.randn(m, n)
a, b = 0.01, 0.5
gtol = 1e-3

def f(x):
    # return - np.sum(np.log(1-x*x)) - np.sum(np.log(1-A @ x))
    return -np.sum(np.log(1-A@x)) - np.sum(np.log(1+x)) - np.sum(np.log(1-x))

def G(x):
    # return 2*x/(1-x*x) + np.sum(A, 0)/np.sum(1-A@x)
    return A.T @ (1/(1-A@x)) - 1/(1-x) + 1/(1+x)

def feasible(x):
    return np.all(x*x&lt;1) and np.all(A@x&lt;1)

def step_size(x, g, a, b):
    # backtracking line search
    fx = f(x)
    dx = -g
    t = 1
    while True:
        if not feasible(x+t*dx):
            t *= b
        else:
            if f(x+t*dx) &lt;= fx+a*t*g.T@dx:
                break
            t *= b
    return t

def stopping_condition(g):
    return np.linalg.norm(g, 2) &lt; gtol

def gradient_descent(x, a, b):
    flist, xlist, tlist = [f(x)], [x], [np.nan]
    while True:
        g = G(x)
        if stopping_condition(g):
            break
        t = step_size(x, -g, a, b)
        x -= t * g
        print(f(x), t, np.linalg.norm(g, 2))
        flist.append(f(x)), xlist.append(x), tlist.append(t)
    return flist, xlist, tlist

fx, x, t = gradient_descent(np.zeros(n), a, b)
</code></pre> <p>I see that f(x) and G(x) calculate the value correctly. However, the step_size function does not seem to converge, while the theory suggests it should. I can't seem to make sense of why this is not working.</p>
<p>OK, I found the issue!</p> <p>I was passing <code>-g</code> into <code>step_size</code>, which negates it again inside (<code>dx = -g</code>), so the line search was searching along the ascent direction.</p> <p>Thanks.</p>
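For reference, here is a minimal one-dimensional sketch (illustrative, not the original code) of backtracking line search with the sign handled exactly once: the descent direction dx = -g is formed a single time, and both the trial point and the Armijo bound use that same dx.

```python
def backtrack(f, grad, x, a=0.01, b=0.5):
    fx = f(x)
    g = grad(x)
    dx = -g                      # descent direction, negated exactly once
    t = 1.0
    # Armijo condition: f(x + t*dx) <= f(x) + a*t*g*dx   (g*dx = -g**2 <= 0)
    while f(x + t * dx) > fx + a * t * g * dx:
        t *= b
    return t
```

For f(z) = z^2 starting at x = 1, the first trial step t = 1 overshoots, and one halving (t = 0.5) lands exactly on the minimizer.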
398
gradient descent implementation
How to Implement Full Batch Gradient Descent with Nesterov Momentum in PyTorch?
https://stackoverflow.com/questions/78102637/how-to-implement-full-batch-gradient-descent-with-nesterov-momentum-in-pytorch
<p>I'm working on a machine learning project in PyTorch where I need to optimize a model using the full batch gradient descent method. The key requirement is that the optimizer should use all the data points in the dataset for each update. My challenge with the existing torch.optim.SGD optimizer is that it doesn't inherently support using the entire dataset in a single update. This is crucial for my project as I need the optimization process to consider all data points to ensure the most accurate updates to the model parameters.</p> <p>Additionally, I would like to retain the use of Nesterov momentum in the optimization process. I understand that one could potentially modify the batch size to equal the entire dataset, simulating a full batch update with the SGD optimizer. However, I'm interested in whether there's a more elegant or direct way to implement a true Gradient Descent optimizer in PyTorch that also supports Nesterov momentum.</p> <p>Ideally, I'm looking for a solution or guidance on how to implement or configure an optimizer in PyTorch that meets the following criteria:</p> <ul> <li>Utilizes the entire dataset for each parameter update (true Gradient Descent behavior).</li> <li>Incorporates Nesterov momentum for more efficient convergence.</li> <li>Is compatible with the rest of the PyTorch ecosystem, by subclassing torch.optim.Optimizer</li> </ul>
<p>The PyTorch SGD implementation is actually independent of the batching! It only uses the gradients that were calculated and stored in the parameters' <code>.grad</code> attribute in the backward pass. So the batch size used for calculations and the batch size used for optimization are decoupled.</p> <p>You can now either:</p> <p>a) Put all your samples as one big batch through your model by setting the batch size to the dataset size, or</p> <p>b) Accumulate the gradients for many smaller batches before doing a single step of the optimizer (pseudo-code):</p> <pre><code>model = YourModel()
data = YourDataSetOrLoader()
optim = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, nesterov=True)

for full_batch_step in range(100):
    # this sets the accumulated gradient to zero
    optim.zero_grad()
    for batch in data:
        f = model(batch)
        # this adds the gradient w.r.t. the parameters for the current
        # batch to the parameters' .grad attributes
        f.backward()
    # now, after we have summed the gradient over all samples, we do a GD step
    optim.step()
</code></pre>
399
transformers
&quot;Monad transformers more powerful than effects&quot; - Examples?
https://stackoverflow.com/questions/31335805/monad-transformers-more-powerful-than-effects-examples
<p>The paper <a href="http://eb.host.cs.st-andrews.ac.uk/drafts/effects.pdf" rel="noreferrer" title="Edwin C. Brady (2013?): &#39;Programming and reasoning with algebraic effects and dependent types&#39;">"Programming and reasoning with algebraic effects and dependent types" by Edwin C. Brady</a> on effects in Idris contains the (unreferenced) claim that:</p> <blockquote> <p>Although [effects and monad transformers] are not equivalent in power — monads and monad transformers can express more concepts — many common effectful computations are captured.</p> </blockquote> <p>What examples are there that can be modelled by monad transformers but not effects?</p>
<p>Continuations can be modelled as monads, using CPS, but they are not algebraic effects as they cannot be modelled using Lawvere theories. See Martin Hyland and John Power, 2007, <a href="https://www.dpmms.cam.ac.uk/~martin/Research/Publications/2007/hp07.pdf">The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads (pdf)</a>, ENTCS 172:437-458.</p>
400
transformers
mtl, transformers, monads-fd, monadLib, and the paradox of choice
https://stackoverflow.com/questions/2769487/mtl-transformers-monads-fd-monadlib-and-the-paradox-of-choice
<p>Hackage has several packages for monad transformers:</p> <ul> <li><a href="http://hackage.haskell.org/package/mtl" rel="noreferrer">mtl</a>: Monad transformer library</li> <li><a href="http://hackage.haskell.org/package/transformers" rel="noreferrer">transformers</a>: Concrete functor and monad transformers</li> <li><a href="http://hackage.haskell.org/package/monads-fd" rel="noreferrer">monads-fd</a>: Monad classes, using functional dependencies</li> <li><a href="http://hackage.haskell.org/package/monads-tf" rel="noreferrer">monads-tf</a>: Monad classes, using type families</li> <li><a href="http://hackage.haskell.org/package/monadLib" rel="noreferrer">monadLib</a>: A collection of monad transformers.</li> <li><a href="http://hackage.haskell.org/package/mtl-tf" rel="noreferrer">mtl-tf</a>: Monad transformer library using type families.</li> <li><a href="http://hackage.haskell.org/package/mmtl" rel="noreferrer">mmtl</a>: Modular Monad transformer library</li> <li><a href="http://hackage.haskell.org/package/mtlx" rel="noreferrer">mtlx</a>: Monad transformer library with type indexes, providing 'free' copies.</li> <li><a href="http://hackage.haskell.org/package/compose-trans" rel="noreferrer">compose-trans</a>: Composable monad transformers</li> </ul> <p>(and maybe I missed some)</p> <p>Which one shall we use?</p> <p>mtl is the one in the Haskell Platform, but I keep hearing on reddit that it's uncool.</p> <p>But what's bad about choice anyway, isn't it just a good thing?</p> <p>Well, I saw how for example the authors of data-accessor had to make all these to cater to just the popular choices:</p> <ul> <li>data-accessor-monadLib library: Accessor functions for monadLib's monads</li> <li>data-accessor-monads-fd library: Use Accessor to access state in monads-fd State monad class</li> <li>data-accessor-monads-tf library: Use Accessor to access state in monads-tf State monad type family</li> <li>data-accessor-mtl library: Use Accessor to access state in mtl State 
monad class</li> <li>data-accessor-transformers library: Use Accessor to access state in transformers State monad</li> </ul> <p>I imagine that if this goes on and for example several competing Arrow packages evolve, we might see something like: spoonklink-arrows-transformers, spoonklink-arrows-monadLib, spoonklink-tfArrows-transformers, spoonklink-tfArrows-monadLib, ...</p> <p>And then I worry that if spoonklink gets forked, Hackage will run out of disk space. :)</p> <p>Questions:</p> <ul> <li>Why are there so many monad transformer packages?</li> <li>Why is mtl [considered] uncool?</li> <li>What are the key differences?</li> <li>Most of these seemingly competing packages were written by Andy Gill and are maintained by Ross Paterson. Does this mean that these packages are not competing but rather work together in some way? And do Andy and Ross consider any of their own packages as obsolete?</li> <li>Which one should you and I use?</li> </ul>
<p>A bunch of them are almost completely equivalent:</p> <ul> <li><code>mtl</code> uses GHC extensions, but <code>transformers</code> is Haskell 98.</li> <li><code>monads-fd</code> and <code>monads-tf</code> are add-ons to <code>transformers</code>, using functional dependencies and type families respectively, both providing the functionality in <code>mtl</code> that's missing from <code>transformers</code>.</li> <li><code>mtl-tf</code> is <code>mtl</code> reimplemented using type families.</li> </ul> <p>So essentially, <code>mtl</code> == <code>transformers</code> ++ <code>monads-fd</code>, <code>mtl-tf</code> == <code>transformers</code> ++ <code>monads-tf</code>. The improved portability and modularity of <code>transformers</code> and its associated packages is why <code>mtl</code> is uncool these days, I think.</p> <p><code>mmtl</code> and <code>mtlx</code> both seem to be similar to and/or based on <code>mtl</code>, with API differences and extra features.</p> <p><code>MonadLib</code> seems to have a rather different take on matters, but I'm not familiar with it directly. Also seems to use a lot of GHC extensions, more than the others.</p> <p>At a glance <code>compose-trans</code> seems to be more like metaprogramming stuff for creating monad transformers. It claims to be compatible with <code>Control.Monad.Trans</code> which... I guess means <code>mtl</code>?</p> <p>At any rate, I'd suggest the following decision algorithm:</p> <ul> <li>Do you need standard monads for a new project? Use <code>transformers</code> &amp; co., help us lay <code>mtl</code> to rest.</li> <li>Are you already using <code>mtl</code> in a large project? <code>transformers</code> isn't completely compatible, but no one will kill you for not switching.</li> <li>Does one of the other packages provide unusual functionality that you need? Might as well use it rather than rolling your own.</li> <li>Still unsatisfied? 
Throw them all out, download <a href="https://hackage.haskell.org/package/category-extras" rel="noreferrer"><code>category-extras</code></a>, and solve all the world's problems with a page and a half of <strike>incomprehensible abstract nonsense</strike> breathtakingly generic code.</li> </ul>
401
transformers
Transformer: cannot import name &#39;AutoModelWithLMHead&#39; from &#39;transformers&#39;
https://stackoverflow.com/questions/64112358/transformer-cannot-import-name-automodelwithlmhead-from-transformers
<p>I was referring to <a href="https://stackoverflow.com/questions/63141267/importerror-cannot-import-name-automodelwithlmhead-from-transformers">ImportError: cannot import name &#39;AutoModelWithLMHead&#39; from &#39;transformers&#39;</a> on Stack Overflow, but I can't get any leads regarding my problem.</p> <p>This is the code that I ran:</p> <pre><code>import transformers
from transformers import AutoModelWithLMHead
</code></pre> <p>Results:</p> <pre><code>cannot import name 'AutoModelWithLMHead' from 'transformers' (/Users/xev/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
</code></pre> <p>My transformers version is '3.0.2'. My import of AutoTokenizer works fine.</p> <p>Would appreciate it if anyone can help regarding the Transformers package!</p>
<p>Just make sure you installed transformers from its official Git repository:</p> <pre><code>pip install git+https://github.com/huggingface/transformers
</code></pre> <p>and not from PyPI:</p> <pre><code>pip install transformers
</code></pre> <p>because a few classes, such as <code>TFLayoutLMForSequenceClassification</code>, are missing from the PyPI version of transformers.</p>
402
transformers
Where does hugging face&#39;s transformers save models?
https://stackoverflow.com/questions/61798573/where-does-hugging-faces-transformers-save-models
<p>Running the below code downloads a model - does anyone know what folder it downloads it to?</p> <pre><code>!pip install -q transformers
from transformers import pipeline
model = pipeline('fill-mask')
</code></pre>
<p><strong>Update 2023-05-02:</strong> The cache location has changed again, and is now <strong><code>~/.cache/huggingface/hub/</code></strong>, as reported by @Victor Yan. Notably, the sub folders in the <code>hub/</code> directory are also named similar to the cloned model path, instead of having a SHA hash, as in previous versions.</p> <hr /> <p><strong>Update 2021-03-11:</strong> The cache location has now changed, and is located in <strong><code>~/.cache/huggingface/transformers</code></strong>, as it is also detailed in the answer by @victorx.</p> <hr /> <p><a href="https://github.com/huggingface/transformers/issues/2157" rel="noreferrer">This post</a> should shed some light on it (plus some investigation of my own, since it is already a bit older).</p> <p>As mentioned, the default location in a Linux system is <strong><code>~/.cache/torch/transformers/</code></strong> (I'm using transformers v 2.7, currently, but it is unlikely to change anytime soon.). The cryptic folder names in this directory seemingly correspond to the Amazon S3 hashes.</p> <p>Also note that the pipeline tasks are just a &quot;rerouting&quot; to other models. To know which one you are currently loading, see <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1513" rel="noreferrer">here</a>. For your specific model, <code>pipeline(fill-mask)</code> actually utilizes a <code>distillroberta-base</code> model.</p>
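As a quick way to compute where to look on a given machine, the default resolution order described above can be sketched like this (paths and environment variables per the Hugging Face caching documentation; this snippet does not query an installed library, so treat it as an approximation):

```python
import os

def hf_hub_cache():
    # HF_HOME overrides the ~/.cache/huggingface base directory,
    # and HF_HUB_CACHE overrides the hub/ subdirectory directly.
    hf_home = os.environ.get(
        "HF_HOME",
        os.path.join(os.path.expanduser("~"), ".cache", "huggingface"))
    return os.environ.get("HF_HUB_CACHE", os.path.join(hf_home, "hub"))
```

Printing this path (or setting `HF_HOME` before running the pipeline) is an easy way to confirm where a given environment stores its downloads.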
403
transformers
Avoiding lift with monad transformers
https://stackoverflow.com/questions/9054731/avoiding-lift-with-monad-transformers
<p>I have a problem to which a stack of monad transformers (or even a single monad transformer) over <code>IO</code> is a good fit. Everything is good, except that using <code>lift</code> before every action is terribly annoying! I suspect there is really nothing to do about that, but I thought I'd ask anyway.</p> <p>I am aware of lifting entire blocks, but what if the code is really of mixed types? Would it not be nice if GHC threw in some syntactic sugar (for example, <code>&lt;-$</code> = <code>&lt;- lift</code>)?</p>
<p>For all the standard <a href="http://hackage.haskell.org/package/mtl">mtl</a> monads, you don't need <code>lift</code> at all. <code>get</code>, <code>put</code>, <code>ask</code>, <code>tell</code> — they all work in any monad with the right transformer somewhere in the stack. The missing piece is <code>IO</code>, and even there <code>liftIO</code> lifts an arbitrary IO action down an arbitrary number of layers.</p> <p>This is done with typeclasses for each "effect" on offer: for example, <a href="http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-State-Class.html"><code>MonadState</code></a> provides <code>get</code> and <code>put</code>. If you want to create your own <code>newtype</code> wrapper around a transformer stack, you can do <code>deriving (..., MonadState MyState, ...)</code> with the <code>GeneralizedNewtypeDeriving</code> extension, or roll your own instance:</p> <pre><code>instance MonadState MyState MyMonad where
    get = MyMonad get
    put s = MyMonad (put s)
</code></pre> <p>You can use this to selectively expose or hide components of your combined transformer, by defining some instances and not others.</p> <p>(You can easily extend this approach to all-new monadic effects you define yourself, by defining your own typeclass and providing boilerplate instances for the standard transformers, but all-new monads are rare; most of the time, you'll get by simply composing the standard set offered by mtl.)</p>
404
transformers
Huggingface transformers: cannot import BitsAndBytesConfig from transformers
https://stackoverflow.com/questions/75563949/huggingface-transformers-cannot-import-bitsandbytesconfig-from-transformers
<p>Following through the <a href="https://huggingface.co/docs/transformers/main/main_classes/quantization" rel="noreferrer">Huggingface quantization guide</a>, I installed the following:</p> <pre class="lang-bash prettyprint-override"><code>pip install transformers accelerate bitsandbytes </code></pre> <p>(It yielded transformers 4.26.0, accelerate 0.16.0, bitsandbytes 0.37.0, which seems to match the guide’s requirements.)</p> <p>Then ran the first line of the offload code in Python:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig </code></pre> <p>It however resulted in the following error: <code>ImportError: cannot import name 'BitsAndBytesConfig' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)</code>.</p> <p>Doing <code>grep BitsAndBytesConfig -r /usr/local/lib/python3.10/dist-packages</code> yields nothing.</p> <p>Is there a step I might have skipped, or a version inconsistency I could work around?</p>
<p><code>BitsAndBytesConfig</code> was <a href="https://github.com/huggingface/transformers/commit/3668ec17165dbb7823f3bc7e190e1733040c3af8" rel="noreferrer">added only recently</a>, and the latest release predates that change.</p> <p>The online documentation is generated from the source’s mdx files, so it sometimes references things that are not yet released. However, it can be tried by <a href="https://huggingface.co/docs/transformers/installation#install-from-source" rel="noreferrer">installing from source</a>:</p> <pre class="lang-bash prettyprint-override"><code>pip install git+https://github.com/huggingface/transformers
</code></pre>
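For code that must run against both older releases and source builds, one option is a defensive import guard so the failure is explicit instead of crashing at import time. This is only a sketch; the `optional_import` helper name is my own, not a transformers API:

```python
import importlib

def optional_import(module_name, attr_name):
    """Return module_name.attr_name if available, else None (sketch)."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(module, attr_name, None)

# e.g. this would be None on a transformers release that predates the class:
# BitsAndBytesConfig = optional_import("transformers", "BitsAndBytesConfig")
```

A caller can then check for `None` and print an actionable message (e.g. "install from source") rather than a bare `ImportError`.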
405
transformers
Monad Transformers vs Passing parameters to functions
https://stackoverflow.com/questions/12968351/monad-transformers-vs-passing-parameters-to-functions
<p>I am new to Haskell but understand how Monad Transformers can be used. Yet, I still have difficulties grasping their claimed advantage over passing parameters to function calls.</p> <p>Based on the wiki <a href="http://www.haskell.org/haskellwiki/Monad_Transformers_Explained" rel="noreferrer">Monad Transformers Explained</a>, we basically have a Config Object defined as </p> <pre><code>data Config = Config Foo Bar Baz </code></pre> <p>and to pass it around, instead of writing functions with this signature</p> <pre><code>client_func :: Config -&gt; IO () </code></pre> <p>we use a ReaderT Monad Transformer and change the signature to</p> <pre><code>client_func :: ReaderT Config IO () </code></pre> <p>pulling the Config is then just a call to <code>ask</code>.</p> <p>The function call changes from <code>client_func c</code> to <code>runReaderT client_func c</code></p> <p>Fine.</p> <p>But why does this make my application simpler?</p> <p>1- I suspect Monad Transformers have an interest when you stitch a lot of functions/modules together to form an application. But this is where my understanding stops. Could someone please shed some light? </p> <p>2- I could not find any documentation on how you write a large <strong>modular</strong> application in Haskell, where modules expose some form of API and hide their implementations, as well as (partly) hide their own States and Environments from the other modules. Any pointers, please?</p> <p>(Edit: Real World Haskell states that ".. this approach [Monad Transformers] ... scales to bigger programs.", but there is no clear example demonstrating that claim)</p> <p><strong>EDIT Following Chris Taylor Answer Below</strong></p> <p>Chris perfectly explains why encapsulating Config, State, etc... 
in a Transformer Monad provides two benefits:</p> <ol> <li>It prevents a higher level function from having to maintain in its type signature all the parameters required by the (sub)functions it calls but not required for its own use (see the <code>getUserInput</code> function)</li> <li>and as a consequence makes higher level functions more resilient to a change of the content of the Transformer Monad (say you want to add a <code>Writer</code> to it to provide Logging in a lower level function)</li> </ol> <p>This comes at the cost of changing the signature of all functions so that they run "in" the Transformer Monad.</p> <p>So question 1 is fully covered. Thank you Chris.</p> <p>Question 2 is now answered in <a href="https://stackoverflow.com/questions/13007123/modular-program-design-combining-monad-transformers-in-monad-agnostic-function">this SO post</a></p>
<p>Let's say that we're writing a program that needs some configuration information in the following form:</p> <pre><code>data Config = C { logFile :: FileName }
</code></pre> <p>One way to write the program is to explicitly pass the configuration around between functions. It would be nice if we only had to pass it to the functions that use it explicitly, but sadly we're not sure if a function might need to call another function that uses the configuration, so we're forced to pass it as a parameter everywhere (indeed, it tends to be the low-level functions that need to use the configuration, which forces us to pass it to all the high-level functions as well).</p> <p>Let's write the program like that, and then we'll re-write it using the <code>Reader</code> monad and see what benefit we get.</p> <h2>Option 1. Explicit configuration passing</h2> <p>We end up with something like this:</p> <pre><code>readLog :: Config -&gt; IO String
readLog (C logFile) = readFile logFile

writeLog :: Config -&gt; String -&gt; IO ()
writeLog (C logFile) message = do
    x &lt;- readFile logFile
    writeFile logFile $ x ++ message

getUserInput :: Config -&gt; IO String
getUserInput config = do
    input &lt;- getLine
    writeLog config $ "Input: " ++ input
    return input

runProgram :: Config -&gt; IO ()
runProgram config = do
    input &lt;- getUserInput config
    putStrLn $ "You wrote: " ++ input
</code></pre> <p>Notice that in the high level functions we have to pass config around all the time.</p> <h2>Option 2. Reader monad</h2> <p>An alternative is to rewrite using the <code>Reader</code> monad. This complicates the low level functions a bit:</p> <pre><code>type Program = ReaderT Config IO

readLog :: Program String
readLog = do
    C logFile &lt;- ask
    readFile logFile

writeLog :: String -&gt; Program ()
writeLog message = do
    C logFile &lt;- ask
    x &lt;- readFile logFile
    writeFile logFile $ x ++ message
</code></pre> <p>But as our reward, the high level functions are simpler, because we never need to refer to the configuration file.</p> <pre><code>getUserInput :: Program String
getUserInput = do
    input &lt;- getLine
    writeLog $ "Input: " ++ input
    return input

runProgram :: Program ()
runProgram = do
    input &lt;- getUserInput
    putStrLn $ "You wrote: " ++ input
</code></pre> <h2>Taking it further</h2> <p>We could re-write the type signatures of <code>getUserInput</code> and <code>runProgram</code> to be</p> <pre><code>getUserInput :: (MonadReader Config m, MonadIO m) =&gt; m String
runProgram :: (MonadReader Config m, MonadIO m) =&gt; m ()
</code></pre> <p>which gives us a lot of flexibility for later, if we decide that we want to change the underlying <code>Program</code> type for any reason. For example, if we want to add modifiable state to our program we could redefine</p> <pre><code>data ProgramState = PS Int Int Int

type Program a = StateT ProgramState (ReaderT Config IO) a
</code></pre> <p>and we don't have to modify <code>getUserInput</code> or <code>runProgram</code> at all - they'll continue to work fine.</p> <p>N.B. I haven't type checked this post, let alone tried to run it. There may be errors!</p>
406
transformers
Problem loading transformers; ModuleNotFoundError: No module named &#39;transformers&#39;
https://stackoverflow.com/questions/79031959/problem-loading-transformers-modulenotfounderror-no-module-named-transformers
<p>I want to use some of the models available through huggingface. I am having the hardest time even getting started. Can anyone help me identify and solve this problem?</p> <p>I am using Kubuntu 24.04.</p> <hr /> <p>First, I create and activate a virtual environment within which to install transformers.</p> <pre><code>python3 -m venv .env source .env/bin/activate </code></pre> <p>This is successful, as now my terminal in Visual Code Studio has the prefix '<code>(.env)</code>'.</p> <p>Next, I install the latest transformers from github:</p> <pre><code>pip install git+https://github.com/huggingface/transformers </code></pre> <p>The output is successful. I then test its success with the recommended method on hugginface.co:</p> <pre><code>python3 -c &quot;from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))&quot; </code></pre> <p>The output looks right to me:</p> <pre><code>No model was supplied, defaulted to distilbert/distilbert-base-uncased-finetuned-sst-2-english and revision 714eb0f (https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english). Using a pipeline without specifying a model name and revision in production is not recommended. Hardware accelerator e.g. GPU is available in the environment, but no `device` argument is passed to the `Pipeline` object. Model will be on CPU. [{'label': 'POSITIVE', 'score': 0.9998656511306763}] </code></pre> <p>From there, I try to run the following code:</p> <pre><code>from transformers import pipeline </code></pre> <p>but every time I get the following output:</p> <pre><code>/bin/python3 /path-to/main.py Traceback (most recent call last): File &quot;/path-to/main.py&quot;, line 5, in &lt;module&gt; from transformers import pipeline ModuleNotFoundError: No module named 'transformers' </code></pre>
<p>I recommend you run your code in a terminal in which your virtual environment is activated.</p>
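A quick way to confirm which interpreter actually executes the failing script (and therefore which site-packages it sees) is to print the interpreter path from inside the script itself; if it does not point into the `.env` directory, the venv is not active for that run:

```python
import sys

# The interpreter executing this script; for an activated venv this should
# point inside .env/bin (or .env\Scripts on Windows).
print(sys.executable)

# The installation prefix; for a venv this is the venv's root directory.
print(sys.prefix)
```

In the traceback above, the script was launched with `/bin/python3` (the system interpreter), which is a strong hint the venv was bypassed.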
407
transformers
Monad transformers libraries - which one to use?
https://stackoverflow.com/questions/5797091/monad-transformers-libraries-which-one-to-use
<p>There are many different monad transformers libraries on Hackage. A few seem to get more attention than the others. To name a few: mtl (current version depending on transformers for some reason), transformers, monadLib, monads-tf, mtlx, contstuff.</p> <p>Which one should be preferred and why? What are their unique features? What about performance?</p>
<p>The <a href="http://haskell.org/platform" rel="noreferrer">Haskell Platform</a> specifies <a href="http://hackage.haskell.org/package/mtl" rel="noreferrer"><code>mtl</code></a> and <a href="http://hackage.haskell.org/package/transformers" rel="noreferrer"><code>transformers</code></a> as standard. </p> <p>If you're unsure, you should just use <code>mtl</code>.</p> <p>However, if you have a specific technical reason to look at the new libraries, they tend to address issues or add new features to <code>mtl</code>. <code>monadLib</code> in particular has some new features.</p>
408
transformers
Using transformers-cli on Windows?
https://stackoverflow.com/questions/61579248/using-transformers-cli-on-windows
<p>I can't figure out how to use transformers-cli on Windows. I got it working on Google Colab, and am using it in the meantime.</p> <p>[EDIT]</p> <p>Here's the process that I'm going through, what I expect, and what is happening:</p> <p><strong>I'm on a Windows System (brackets are the exact commands I'm typing into CMD)</strong></p> <ol> <li>I install transformers==2.8.0 (pip install transformers==2.8.0)</li> <li>I try to run transformers-cli as explained on Huggingface's website (transformers-cli) <a href="https://huggingface.co/transformers/model_sharing.html" rel="nofollow noreferrer">https://huggingface.co/transformers/model_sharing.html</a></li> </ol> <p><strong>I get:</strong> </p> <pre><code>'transformers-cli' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>I don't know if I have to add some directory to my PATH or perhaps the CLI isn't available on Windows?</p> <p><strong>I repeat the exact same process on Google Colab, and it works as expected. I get:</strong></p> <pre><code>usage: transformers-cli &lt;command&gt; [&lt;args&gt;] positional arguments: {convert,download,env,run,serve,login,whoami,logout,s3,upload} transformers-cli command helpers convert CLI tool to run convert model from original author checkpoints to Transformers PyTorch checkpoints. run Run a pipeline through the CLI serve CLI tool to run inference requests through REST and GraphQL endpoints. login Log in using the same credentials as on huggingface.co whoami Find out which huggingface.co account you are logged in as. logout Log out s3 {ls, rm} Commands to interact with the files you upload on S3. upload Upload a model to S3. optional arguments: -h, --help show this help message and exit </code></pre>
<p>All you have to do is to locate the script and launch it. It won't be added to the $PATH automatically. In my Python interpreter installation on Windows 10 (not Anaconda, just Python), it was installed in the <code>Scripts</code> folder of my Python interpreter directory. You have to launch it with the Python interpreter, as Windows, as far as I know, doesn't support shebangs.</p> <pre><code>cd YOURPYTHONINTERPRETERDIRECTORY\Scripts
python.exe transformers-cli login
</code></pre> <p>You can define a <a href="https://superuser.com/questions/560519/how-to-set-an-alias-in-windows-command-line">macro</a> to shortcut transformers-cli.</p>
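If you'd rather not hunt for the folder by hand, Python can report its own scripts directory via the standard <code>sysconfig</code> module. A small sketch (the exact path varies per installation; on Windows it is typically the `Scripts` folder mentioned above):

```python
import sysconfig

# Directory where pip installs console scripts such as transformers-cli.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)
```

You could then add that directory to your `PATH` so `transformers-cli` works from any terminal.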
409
transformers
Transformers Pipeline from Huggingface
https://stackoverflow.com/questions/61073049/transformers-pipeline-from-huggingface
<p>I am trying out the transformers pipeline from huggingface:</p> <p><a href="https://github.com/huggingface/transformers#installation" rel="nofollow noreferrer">https://github.com/huggingface/transformers#installation</a></p> <p><a href="https://i.sstatic.net/j8p4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j8p4E.png" alt="enter image description here" /></a></p> <p>When I run on my system, I get a different result.</p> <pre><code>&gt;&gt;&gt; import transformers &gt;&gt;&gt; transformers.__version__ '2.8.0' </code></pre> <p>I am running on Python 3.7.6.</p> <pre><code>&gt;&gt;&gt; from transformers import pipeline &gt;&gt;&gt; nlp = pipeline('sentiment-analysis') Downloading: 100%|███████████████████████████████████████████████████████| 230/230 [00:00&lt;00:00, 77.3kB/s] 2020-04-08 18:04:33.862653: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. 2020-04-08 18:04:33.931454: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa52e6b7be0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-04-08 18:04:33.931486: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version &gt;&gt;&gt; nlp('We are very happy to include pipeline into the transformers repository.') [{'label': 'NEGATIVE', 'score': 0.94570833}] &gt;&gt;&gt; </code></pre> <p>What can I look into?</p>
410
transformers
Transformers v4.x: Convert slow tokenizer to fast tokenizer
https://stackoverflow.com/questions/65431837/transformers-v4-x-convert-slow-tokenizer-to-fast-tokenizer
<p>I'm following the transformer's pretrained model <a href="https://huggingface.co/joeddav/xlm-roberta-large-xnli?text=%0A&amp;candidate_labels=&amp;multi_class=true" rel="noreferrer">xlm-roberta-large-xnli</a> example</p> <pre><code>from transformers import pipeline

classifier = pipeline(&quot;zero-shot-classification&quot;, model=&quot;joeddav/xlm-roberta-large-xnli&quot;)
</code></pre> <p>and I get the following error</p> <pre><code>ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
</code></pre> <p>I'm using Transformers version <code>'4.1.1'</code></p>
<p>According to Transformers <code>v4.0.0</code> <a href="https://github.com/huggingface/transformers/releases/tag/v4.0.0" rel="noreferrer">release</a>, <code>sentencepiece</code> was removed as a required dependency. This means that</p> <blockquote> <p>&quot;The tokenizers that depend on the SentencePiece library will not be available with a standard transformers installation&quot;</p> </blockquote> <p>including the <code>XLMRobertaTokenizer</code>. However, <code>sentencepiece</code> can be installed as an extra dependency</p> <pre><code>pip install transformers[sentencepiece] </code></pre> <p>or</p> <pre><code>pip install sentencepiece </code></pre> <p>if you have transformers already installed.</p>
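Before building the pipeline, one can also check whether `sentencepiece` is importable at all and fail with an actionable message. A sketch using the stdlib's `importlib.util.find_spec` (the `is_installed` helper name is my own), which checks availability without actually importing the package:

```python
from importlib.util import find_spec

def is_installed(package_name):
    """True if the top-level package can be found on sys.path (sketch)."""
    return find_spec(package_name) is not None

# e.g. guard the tokenizer setup:
# if not is_installed("sentencepiece"):
#     raise RuntimeError("run: pip install transformers[sentencepiece]")
```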
411
transformers
Transformers AutoModelForCasualLM cannot be imported
https://stackoverflow.com/questions/75191536/transformers-automodelforcasuallm-cannot-be-imported
<p>I am trying to follow <a href="https://towardsdatascience.com/run-bloom-the-largest-open-access-ai-model-on-your-desktop-computer-f48e1e2a9a32" rel="nofollow noreferrer">this article</a> to use <code>AutoModelForCasualLM</code> from <code>transformers</code> to generate text with bloom. But I keep getting an error saying that python cannot import AutoModelForCasualLM from transformers. I have tried multiple computers and multiple versions of transformers but I always get the following error. (Traceback from the latest version of transformers)</p> <pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[28], line 1
----&gt; 1 from transformers import AutoTokenizer, AutoModelForCasualLM, BloomConfig
      2 from transformers.models.lboom.modeling_bloom import BloomBlock, build_alibi_tensor

ImportError: cannot import name 'AutoModelForCasualLM' from 'transformers' (/mnt/MLDr/venv/lib/python3.10/site-packages/transformers/__init__.py)
</code></pre> <p>code snippet from where the error occurs (first ~10 lines):</p> <pre class="lang-py prettyprint-override"><code>import os
import torch
import torch.nn as nn
from collections import OrderedDict

def get_state_dict(shard_num, prefix=None):
    d = torch.load(os.path.join(model_path, f&quot;pytorch_model_{shard_num:05d}-of-00072.bin&quot;))
    return d if prefix is None else OrderedDict((k.replace(prefix, ''), v) for k, v in d.items())

from transformers import AutoTokenizer, AutoModelForCasualLM, BloomConfig
from transformers.models.lboom.modeling_bloom import BloomBlock, build_alibi_tensor

model = &quot;./bloom&quot;
config = BloomConfig.from_pretrained(model_path)
device = 'cpu'
</code></pre> <p>transformers-cli env results:</p> <ul> <li><code>transformers</code> version: 4.25.1</li> <li>Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35</li> <li>Python version: 3.10.6</li> <li>Huggingface_hub version: 0.11.1</li> 
<li>PyTorch version (GPU?): 1.13.1+cu117 (False)</li> <li>Tensorflow version (GPU?): 2.11.0 (False)</li> <li>Flax version (CPU?/GPU?/TPU?): not installed (NA)</li> <li>Jax version: not installed</li> <li>JaxLib version: not installed</li> <li>Using GPU in script?: &lt;fill in&gt;</li> <li>Using distributed or parallel set-up in script?: &lt;fill in&gt;</li> </ul>
<p>This is because you are using the wrong class name; <code>AutoModelForCasualLM</code> does not exist in any version of the Transformers library. The correct class name is <code>AutoModelForCausalLM</code> (note the correct spelling of &quot;Causal&quot;). Try this:</p> <p><code>from transformers import AutoTokenizer, AutoModelForCausalLM</code></p>
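The stdlib's `difflib` can surface this kind of near-miss automatically. A small, hypothetical sketch (the `known_names` list is an illustrative subset, not the full set of transformers classes) that suggests the closest valid name for a misspelled import:

```python
import difflib

# A few real transformers class names (subset, for illustration only).
known_names = ["AutoModelForCausalLM", "AutoModelForSeq2SeqLM", "AutoTokenizer"]

# The misspelled name from the traceback above.
suggestions = difflib.get_close_matches("AutoModelForCasualLM", known_names, n=1)
print(suggestions)  # suggests the correctly spelled class name
```

In practice one could build `known_names` from `dir(transformers)` to get suggestions for any typo'd import.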
412
transformers
Transformers in Android Studio Chaquopy
https://stackoverflow.com/questions/78420548/transformers-in-android-studio-chaquopy
<pre><code>chaquopy {
    productFlavors {
        getByName(&quot;py310&quot;) { version = &quot;3.10&quot; }
        getByName(&quot;py311&quot;) { version = &quot;3.11&quot; }
    }
    defaultConfig {
        pip {
            // Install only the pipeline module from transformers with version 4.12.0
            install(&quot;numpy&quot;)
            install(&quot;transformers&quot;)
        }
    }
}
</code></pre> <p>Is it possible to install transformers? The script I have inside the python folder looks like this:</p> <pre><code>from transformers import pipeline
from PIL import Image
import matplotlib.pyplot as plt

def classify_image(image_file):
    # Load model via pipeline
    pipe = pipeline(&quot;image-classification&quot;, model=&quot;sergiocannata/prove_melanomaprova_melanoma&quot;)

    # Load model directly
    from transformers import AutoImageProcessor, AutoModelForImageClassification
    processor = AutoImageProcessor.from_pretrained(&quot;sergiocannata/prove_melanomaprova_melanoma&quot;)
    model = AutoModelForImageClassification.from_pretrained(&quot;sergiocannata/prove_melanomaprova_melanoma&quot;)
    ...
</code></pre> <p>I am trying to use a model from transformers inside an app.</p> <pre><code>Failed to install tokenizers&lt;0.20,&gt;=0.19 from https://files.pythonhosted.org/packages/48/04/2071c150f374aab6d5e92aaec38d0f3c368d227dd9e0469a1f0966ac68d1/tokenizers-0.19.1.tar.gz#sha256=ee59e6680ed0fdbe6b724cf38bd70400a0c1dd623b07ac729087270caeac88e3 (from transformers).
</code></pre>
<p>Chaquopy's current options for <code>transformers</code> are listed on <a href="https://github.com/chaquo/chaquopy/issues/607" rel="nofollow noreferrer">its issue tracker</a>:</p> <ul> <li>tokenizers 0.10.3 (compatible with <code>transformers==4.15.0</code>)</li> <li>tokenizers 0.7.0 (compatible with <code>transformers==2.11.0</code>)</li> </ul> <p>Both of these are currently only available for Python 3.8. To change the Python version of your app, see <a href="https://chaquo.com/chaquopy/doc/current/android.html#python-version" rel="nofollow noreferrer">here</a>.</p>
413
transformers
Huggingface Transformers Conda Install issue
https://stackoverflow.com/questions/71754258/huggingface-transformers-conda-install-issue
<p>conda by default installs transformers 2.x; however, pip installs 4.x by default, which is what I want, but via conda.</p> <p>If I install by specifying the latest distribution file from conda-forge… <code>conda install https://anaconda.org/conda-forge/transformers/4.16.2/download/noarch/transformers-4.16.2-pyhd8ed1ab_0.tar.bz2</code></p> <p>then it complains that the environment is not consistent and lists the package causing the issue, which is PyTorch version 1.11.</p> <p>I removed pytorch and then installed as listed above, and the installation went through. I then tried installing datasets… <code>conda install datasets</code></p> <p>Now it complains that the environment is inconsistent due to the transformers 4.16.2 that I installed.</p> <p>Not sure what's wrong or how to install pytorch, transformers and datasets together with no issue. Do I need specific versions of these to make it work? I could not find any such guideline on the huggingface docs or support pages.</p> <p>thanks.</p>
<p>Just tried installing transformers in the <code>venv</code> that I will be working on as follows</p> <pre><code>conda install transformers </code></pre> <p>And it installed the version <code>transformers-4.18.0</code>.</p> <p>If one wants to install specifically from the channel <code>huggingface</code>, then do the following</p> <pre><code>conda install -c huggingface transformers </code></pre> <hr /> <p>Last case scenario, install it with <code>pip</code>, as using <code>pip</code> can potentially wreck one's installation (<code>pip</code> and <code>conda</code> do not manage dependencies in the same way).</p> <pre><code>pip install transformers </code></pre> <p>Or, by specifying the version</p> <pre><code>pip install transformers==4.18.0 </code></pre> <p>Or directly from the source</p> <pre><code>pip install git+https://github.com/huggingface/transformers </code></pre> <p>If one has <code>transformers</code> already installed and wants to install a different version than the one we currently have, one should pass <code>-Iv</code> (<a href="https://stackoverflow.com/a/5226504/7109869">as suggested here</a>)</p> <pre><code>pip install -Iv transformers==4.18.0 </code></pre> <hr /> <p>To check if <code>transformers</code> was properly installed, run the following</p> <pre><code>python -c &quot;from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))&quot; </code></pre> <p>It will download a pretrained model, then print out the label and score.</p> <hr /> <p>For more information on <code>transformers</code> installation, <a href="https://huggingface.co/docs/transformers/installation" rel="noreferrer">consult this page</a>.</p>
414
transformers
cannot import name &#39;GPT2ForQuestionAnswering&#39; from &#39;transformers&#39;
https://stackoverflow.com/questions/75617250/cannot-import-name-gpt2forquestionanswering-from-transformers
<p><a href="https://i.sstatic.net/wZejL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wZejL.png" alt="enter image description here" /></a></p> <blockquote> <p>1 import pandas as pd 2 import torch ----&gt; 3 from transformers import GPT2Tokenizer, GPT2ForQuestionAnswering, AdamW 4 from transformers import default_data_collator 5 from torch.utils.data import DataLoader</p> </blockquote> <pre><code>import pandas as pd import torch from transformers import GPT2Tokenizer, GPT2ForQuestionAnswering, AdamW from transformers import default_data_collator from torch.utils.data import DataLoader from transformers import datasets from transformers import TriviaQAProcessor, set_seed from transformers import TrainingArguments, Trainer </code></pre>
<p>Is your package up to date? I had the same issue, so I upgraded the package and the error went away.</p> <p>Try:</p> <pre><code>pip install transformers --upgrade </code></pre>
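After upgrading, a quick sanity check is to compare the installed version against the minimum you need. A naive sketch (it ignores pre-release suffixes like `4.27.0.dev0`, and the `4.26.0` threshold below is only an example, not the actual version that introduced the class):

```python
def version_tuple(version):
    """Convert a version string like '4.27.1' to (4, 27, 1) for comparison (naive sketch)."""
    return tuple(int(part) for part in version.split(".")[:3])

# e.g.:
# import transformers
# assert version_tuple(transformers.__version__) >= version_tuple("4.26.0")
print(version_tuple("4.27.1") >= version_tuple("4.26.0"))  # True
```

For robust comparisons (pre-releases, post-releases), `packaging.version.parse` is the usual choice.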
415
transformers
cannot import name &#39;TFBertForQuestionAnswering&#39; from &#39;transformers&#39;
https://stackoverflow.com/questions/62907901/cannot-import-name-tfbertforquestionanswering-from-transformers
<p>Currently I am using transformers(3.0.2) and python(3.7.3) which encountered the below error:</p> <blockquote> <p><strong>cannot import name 'TFBertForQuestionAnswering' from 'transformers'</strong></p> </blockquote> <pre><code>from transformers import BertTokenizer, TFBertForQuestionAnswering model = TFBertForQuestionAnswering.from_pretrained('bert-base-cased') f = open(model_path, &quot;wb&quot;) pickle.dump(model, f) </code></pre> <p>How do resolve this issue?</p>
<p>Upgrade your TensorFlow library. It works fine with version 2.3.1.</p>
416
transformers
Mule ESB: XSLT Transformers or Java Transformers?
https://stackoverflow.com/questions/11421379/mule-esb-xslt-transformers-or-java-transformers
<p>Is there any performance improvement if I use a custom Java transformer in place of an XSLT transformer in Mule?</p> <p>I have a cxf proxy-service and proxy-client pattern, and my transformers are being used to change the payload so that it is a valid input for subsequent SOAP web-service calls.</p>
<p>Measure it and see. You should never make a change to your system for performance reasons unless you can measure the effect. Focus your efforts on instrumentation, and good performance will follow naturally.</p>
417
transformers
cannot import &#39;AutoModelForSequenceClassification&#39; from &#39;transformers&#39;
https://stackoverflow.com/questions/66909773/cannot-import-automodelforsequenceclassification-from-transformers
<p><strong>cannot import 'AutoModelForSequenceClassification' from 'transformers'</strong></p> <p>The code is</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

t = AutoTokenizer.from_pretrained('/some/directory')
m = AutoModelForSequenceClassification.from_pretrained('/some/directory')
c2 = pipeline(task = 'sentiment-analysis', model=m, tokenizer=t)
</code></pre> <p>The error is</p> <pre><code>cannot import 'AutoModelForSequenceClassification' from 'transformers'
</code></pre>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoModelForSequenceClassification, BertForSequenceClassification
from transformers import (XLMRobertaConfig, XLMRobertaTokenizer, TFXLMRobertaModel)
from transformers import AutoTokenizer, AutoConfig, TFAutoModel

PRETRAINED_MODEL_TYPES = {
    'xlmroberta': (AutoConfig, AutoModelForSequenceClassification, AutoTokenizer,
                   'akhooli/xlm-r-large-arabic-toxic')
}

# model_class, model = AutoModelForSequenceClassification.from_pretrained("akhooli/xlm-r-large-arabic-toxic")
config_class, model_class, tokenizer_class, model_name = PRETRAINED_MODEL_TYPES['xlmroberta']

# Download vocabulary from huggingface.co and cache.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)  # slow (Python) tokenizer
tokenizer
</code></pre>
418
transformers
Can&#39;t load transformers models
https://stackoverflow.com/questions/68528187/cant-load-transformers-models
<p>I have the following problem to load a transformer model. The strange thing is that it work on google colab or even when I tried on another computer, it seems to be version / cache problem but I didn't found it.</p> <pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer('sentence-transformers/paraphrase-xlm-r-multilingual-v1') </code></pre> <pre><code>--------------------------------------------------------------------------- Exception Traceback (most recent call last) &lt;ipython-input-8-0b8b6a3eea75&gt; in &lt;module&gt; 1 from sentence_transformers import SentenceTransformer 2 from sentence_transformers.util import cos_sim ----&gt; 3 model = SentenceTransformer('sentence-transformers/paraphrase-xlm-r-multilingual-v1') 4 ~\AppData\Local\Programs\Python\Python39\lib\site-packages\sentence_transformers\SentenceTransformer.py in __init__(self, model_name_or_path, modules, device, cache_folder) 88 89 if os.path.exists(os.path.join(model_path, 'modules.json')): #Load as SentenceTransformer model ---&gt; 90 modules = self._load_sbert_model(model_path) 91 else: #Load with AutoModel 92 modules = self._load_auto_model(model_path) ~\AppData\Local\Programs\Python\Python39\lib\site-packages\sentence_transformers\SentenceTransformer.py in _load_sbert_model(self, model_path) 820 for module_config in modules_config: 821 module_class = import_from_string(module_config['type']) --&gt; 822 module = module_class.load(os.path.join(model_path, module_config['path'])) 823 modules[module_config['name']] = module 824 ~\AppData\Local\Programs\Python\Python39\lib\site-packages\sentence_transformers\models\Transformer.py in load(input_path) 122 with open(sbert_config_path) as fIn: 123 config = json.load(fIn) --&gt; 124 return Transformer(model_name_or_path=input_path, **config) 125 126 
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sentence_transformers\models\Transformer.py in __init__(self, model_name_or_path, max_seq_length, model_args, cache_dir, tokenizer_args, do_lower_case, tokenizer_name_or_path) 28 config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir) 29 self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir) ---&gt; 30 self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path if tokenizer_name_or_path is not None else model_name_or_path, cache_dir=cache_dir, **tokenizer_args) 31 32 #No max_seq_length set. Try to infer from model ~\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\auto\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 566 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 567 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --&gt; 568 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 569 else: 570 if tokenizer_class_py is not None: ~\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1730 logger.info(f&quot;loading file {file_path} from cache at {resolved_vocab_files[file_id]}&quot;) 1731 -&gt; 1732 return cls._from_pretrained( 1733 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs 1734 ) ~\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs) 1848 # Instantiate tokenizer. 
1849 try: -&gt; 1850 tokenizer = cls(*init_inputs, **init_kwargs) 1851 except OSError: 1852 raise OSError( ~\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\xlm_roberta\tokenization_xlm_roberta_fast.py in __init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs) 132 mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token 133 --&gt; 134 super().__init__( 135 vocab_file, 136 tokenizer_file=tokenizer_file, ~\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\tokenization_utils_fast.py in __init__(self, *args, **kwargs) 105 elif fast_tokenizer_file is not None and not from_slow: 106 # We have a serialization from tokenizers which let us directly build the backend --&gt; 107 fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) 108 elif slow_tokenizer is not None: 109 # We need to convert a slow tokenizer to build the backend Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 1 column 317584 </code></pre> <p>To give your more details, i also got another problem only on this computer with another model :</p> <pre class="lang-py prettyprint-override"><code>model = SentenceTransformer('etalab-ia/dpr-question_encoder-fr_qa-camembert') </code></pre> <pre><code>ValueError: unable to parse C:\Users\david.rouyre/.cache\torch\sentence_transformers\etalab-ia_dpr-question_encoder-fr_qa-camembert\tokenizer_config.json as a URL or as a local path </code></pre> <p>So i checked in the cache path and there was not <code>tokenizer_config.json</code>, only <code>tokenizer.json</code> (by renaming the file it worked)</p> <p>The package : (same version in colab)</p> <pre><code>Name: sentence-transformers Version: 2.0.0 Summary: Sentence Embeddings using BERT / RoBERTa / XLM-R Home-page: https://github.com/UKPLab/sentence-transformers Author: Nils Reimers Author-email: 
info@nils-reimers.de License: Apache License 2.0 Location: c:\users\david.rouyre\appdata\local\programs\python\python39\lib\site-packages Requires: transformers, tqdm, torch, torchvision, numpy, scikit-learn, scipy, nltk, sentencepiece, huggingface-hub Required-by: </code></pre> <p>I tried clearing the cache, uninstalling all the dependencies with pip (transformers, tqdm, torch, torchvision, numpy, scikit-learn, scipy, nltk, sentencepiece, huggingface-hub), and uninstalling and reinstalling sentence-transformers.</p>
<p>Which <code>tokenizers</code> version do you have installed? For me it helped to upgrade the tokenizers package:</p> <pre><code>pip3 install tokenizers==0.10.3 </code></pre>
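To answer the version question without importing the package itself, the installed version can be read from the distribution metadata with the standard library (a small sketch; the fallback covers environments where the package is missing entirely):

```python
from importlib.metadata import version, PackageNotFoundError
from typing import Optional

def installed_version(dist: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

print(installed_version("tokenizers"))  # e.g. '0.10.3', or None if not installed
```

If this prints a version older than 0.10.3, upgrading as suggested above is worth trying first.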
419
transformers
Transformers pipeline model directory
https://stackoverflow.com/questions/64310515/transformers-pipeline-model-directory
<p>I'm using the Huggingface's Transformers pipeline function to download the model and the tokenizer, my Windows PC downloaded them but I don't know where they are stored on my PC. Can you please help me? <a href="https://i.sstatic.net/eHRNO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eHRNO.png" alt="enter image description here" /></a></p> <pre><code>from transformers import pipeline qa_pipeline = pipeline( &quot;question-answering&quot;, model=&quot;mrm8488/bert-multi-cased-finetuned-xquadv1&quot;, tokenizer=&quot;mrm8488/bert-multi-cased-finetuned-xquadv1&quot; ) </code></pre>
<p>You can check the default location with:</p> <pre class="lang-py prettyprint-override"><code>import transformers #it is important to load the library before checking! import os os.environ.get('TRANSFORMERS_CACHE') #None means the built-in default location is used </code></pre> <p>In case you want to change the default location, please have a look at this <a href="https://stackoverflow.com/a/63314437/6664872">answer</a>.</p>
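When the environment variable is not exported, the library falls back to a built-in default path. A sketch of resolving the effective cache directory (the default shown matches transformers 3.x/early 4.x and is an assumption; newer releases cache under `~/.cache/huggingface/hub`):

```python
import os

# Default used when TRANSFORMERS_CACHE is not exported (older transformers releases)
default_cache = os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "transformers")

# Effective cache directory: the env var wins if set, otherwise the default
cache_dir = os.environ.get("TRANSFORMERS_CACHE", default_cache)
print(cache_dir)
```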
420
transformers
can not import tftrainer from transformers
https://stackoverflow.com/questions/78661481/can-not-import-tftrainer-from-transformers
<p>I try to train a GPT2 model on my own data</p> <pre><code>from transformers import GPT2Tokenizer, GPT2LMHeadModel from transformers import TextDataset, DataCollatorForLanguageModeling from transformers import TFTrainer, TFTrainingArguments </code></pre> <p>but I get the error &quot;cannot import name 'TFTrainer' from 'transformers'&quot;</p> <p>I tried to change the version of transformers but I still experienced the same problem.</p> <p>I also tried</p> <pre><code>from tftrainer import TrainArgument, Trainer from transformers.trainer_tf import TFTrainer </code></pre>
<p>TFTrainer is a separate package from transformers, so you need to install it separately and do:</p> <p><code>from tftrainer import Trainer, TrainArgument</code></p> <p>This is because TFTrainer is no longer supported by transformers. You could try to install an older version of transformers that still supports it, though.</p>
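A quick way to confirm whether the installed transformers release still ships `TFTrainer` before deciding which route to take (a sketch; since `ModuleNotFoundError` is a subclass of `ImportError`, this also tolerates transformers being absent entirely):

```python
try:
    from transformers import TFTrainer  # removed from recent transformers releases
except ImportError:
    TFTrainer = None  # either transformers is missing or this release dropped TFTrainer

print("TFTrainer available:", TFTrainer is not None)
```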
421
transformers
Cannot import BertModel from transformers
https://stackoverflow.com/questions/62386631/cannot-import-bertmodel-from-transformers
<p>I am trying to import BertModel from transformers, but it fails. This is code I am using</p> <pre><code>from transformers import BertModel, BertForMaskedLM </code></pre> <p>This is the error I get</p> <pre><code>ImportError: cannot import name 'BertModel' from 'transformers' </code></pre> <p>Can anyone help me fix this?</p>
<p>Fixed the error. This is the code</p> <pre><code>from transformers.modeling_bert import BertModel, BertForMaskedLM </code></pre>
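For reference, transformers 4.x reorganized the flat module layout, so the private path above moved under `transformers.models` (a sketch with a guard so it degrades cleanly when transformers isn't installed; in recent releases the plain top-level `from transformers import BertModel` works again):

```python
try:
    # transformers >= 4.0 layout; pre-4.0 used transformers.modeling_bert
    from transformers.models.bert.modeling_bert import BertModel, BertForMaskedLM
except ImportError:
    BertModel = BertForMaskedLM = None  # transformers not installed in this environment

print("BertModel importable:", BertModel is not None)
```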
422
transformers
ModuleNotFoundError: no module named &#39;transformers&#39;
https://stackoverflow.com/questions/71012012/modulenotfounderror-no-module-named-transformers
<p>This is my first post and I am new to coding, so please let me know if you need more information. I have been running some AI to generate artwork and it has been working, but when I reloaded it the python script won't work and it is now saying &quot;No module named 'transformers'&quot;. Can anyone help me out? It was when I upgraded to Google Colab Pro that I started to encounter issues although I am not sure why that would make a difference.</p> <p><code>ModuleNotFoundError</code></p> <p><a href="https://i.sstatic.net/Q4SiZ.png" rel="noreferrer"><img src="https://i.sstatic.net/Q4SiZ.png" alt="enter image description here" /></a></p>
<p>Probably it is because the transformers library is not installed in your (new, since you've upgraded to Colab Pro) session. Try running the following as the first cell: <code>!pip install transformers</code> (the &quot;!&quot; at the beginning of the instruction is needed to run it as a shell command). This will download the transformers package into the session's environment.</p>
423
transformers
transformers No module named &#39;keras.engine&#39;
https://stackoverflow.com/questions/78990350/transformers-no-module-named-keras-engine
<p>I have transformers 4.25.1 and Keras 3.4.1 with Python 3.9 under Windows. <code>permutation_importance</code> uses <code>transformers\utils\import_utils.py</code> which produces:</p> <pre><code>line 1095, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error No module named 'keras.engine' </code></pre>
<p>I encountered the same issue too, though my config is slightly different from yours; the main problem is due to an old Python version mismatching with newer versions of <code>tensorflow</code>. Here is my solution:</p> <p>Versions used:</p> <ul> <li>Ubuntu 22.04 LTS</li> <li>Python 3.10.0</li> <li>transformers 4.28.1</li> <li>tokenizers 0.13.3</li> <li>tensorflow 2.18.0</li> <li>tf-keras 2.18.0</li> <li>keras 3.8.0</li> </ul> <p>Downgrading <code>tensorflow</code> and <code>keras</code> can solve the problem, but the older versions cannot be installed via <code>pip</code> anymore. Therefore, a modern solution is needed.</p> <p>I then installed Python 3.12.9 (along with <code>pip</code>) via a PPA in Ubuntu, in parallel with the default Python 3.10.0. Then I installed the following versions with Python 3.12:</p> <ul> <li>Python 3.12.9</li> <li>transformers 4.49.0</li> <li>tokenizers 0.21.0</li> <li>tensorflow 2.18.0</li> <li>tf-keras 2.18.0</li> <li>keras 3.8.0</li> </ul> <p>(all the latest versions at the moment of writing this answer)</p> <p>The <code>keras.engine</code> issue should be gone. Alternatively, if you are using Ubuntu 22.04, you can upgrade to 24.04, which has Python 3.12 installed; then you don't have to maintain two Python versions.</p>
424
transformers
Issue installing transformers
https://stackoverflow.com/questions/77413586/issue-installing-transformers
<p>I'm doing an NLP project in VS Code (&quot;Amazon reviews sentiment analyzer&quot;). Everything was going OK until I reached the part about importing transformers.</p> <p>When I install transformers from pip I get this error:</p> <pre><code>error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. </code></pre> <p>I tried downloading Rust and Cargo and it didn't work; I'm sure it's in the environment variable. It's the first time I've done an NLTK project.</p>
<p>You can <a href="https://github.com/huggingface/transformers#with-pip" rel="nofollow noreferrer">read the documentation</a> on how to install this package.</p> <p>You will need to install at least one of <strong>Flax</strong>, <strong>PyTorch</strong>, or <strong>TensorFlow</strong>.</p> <p>When one of those backends has been installed, Transformers can be installed using pip as follows:</p> <pre><code>pip install transformers </code></pre>
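Since the requirement is that at least one backend be present, it can help to check which of the three supported backends is importable before installing (a standard-library sketch; `find_spec` only locates the packages without importing them):

```python
import importlib.util

# transformers needs at least one of these deep-learning backends installed
backends = {name: importlib.util.find_spec(name) is not None
            for name in ("torch", "tensorflow", "flax")}
print(backends)  # e.g. {'torch': True, 'tensorflow': False, 'flax': False}
```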
425
transformers
Unable to install sentence-transformers, getting error
https://stackoverflow.com/questions/71083239/unable-to-install-sentence-transformers-getting-error
<p>Running below command after installing python 3.10.</p> <p>pip3 install -U sentence-transformers</p> <ol> <li>List item</li> </ol> <p>ERROR: Cannot install sentence-transformers==0.1.0, sentence-transformers==0.2.0, sentence-transformers==0.2.1, sentence-transformers==0.2.2, sentence-transformers==0.2.3, sentence-transformers==0.2.4, sentence-transformers==0.2.4.1, sentence-transformers==0.2.5, sentence-transformers==0.2.5.1, sentence-transformers==0.2.6.1, sentence-transformers==0.2.6.2, sentence-transformers==0.3.0, sentence-transformers==0.3.1, sentence-transformers==0.3.2, sentence-transformers==0.3.3, sentence-transformers==0.3.4, sentence-transformers==0.3.5, sentence-transformers==0.3.5.1, sentence-transformers==0.3.6, sentence-transformers==0.3.7, sentence-transformers==0.3.7.1, sentence-transformers==0.3.7.2, sentence-transformers==0.3.8, sentence-transformers==0.3.9, sentence-transformers==0.4.0, sentence-transformers==0.4.1, sentence-transformers==0.4.1.1, sentence-transformers==0.4.1.2, sentence-transformers==1.0.0, sentence-transformers==1.0.1, sentence-transformers==1.0.2, sentence-transformers==1.0.3, sentence-transformers==1.0.4, sentence-transformers==1.1.0, sentence-transformers==1.1.1, sentence-transformers==1.2.0, sentence-transformers==1.2.1, sentence-transformers==2.0.0 and sentence-transformers==2.1.0 because these package versions have conflicting dependencies.</p> <p>The conflict is caused by: sentence-transformers 2.1.0 depends on torch&gt;=1.6.0<br /> ...............</p> <p>To fix this you could try to:</p> <ol> <li>loosen the range of package versions you've specified</li> <li>remove package versions to allow pip attempt to solve the dependency conflict</li> </ol> <p>ERROR: ResolutionImpossible: for help visit <a href="https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies" rel="nofollow noreferrer">https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies</a> WARNING: You are using pip version 
21.2.4; however, version 22.0.3 is available. You should consider upgrading via the '/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -m pip install --upgrade pip' command.</p>
<p>I encountered the same error, here is how I fixed it:</p> <ul> <li>first install torch 1.10.0 using the following</li> </ul> <pre class="lang-sh prettyprint-override"><code>conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch </code></pre> <ul> <li>then install sentence-transformers 2.1.0 by</li> </ul> <pre class="lang-sh prettyprint-override"><code>pip install -U sentence-transformers </code></pre>
426
transformers
Python Error ModuleNotFoundError: No module named &#39;transformers&#39;
https://stackoverflow.com/questions/74607244/python-error-modulenotfounderror-no-module-named-transformers
<p>I'm getting the below error when running 'import transformers', even though I have installed it in the same virtual env. I'm using Python 3.8.</p> <pre><code>ModuleNotFoundError: No module named 'transformers' </code></pre> <p>Error:</p> <p><a href="https://i.sstatic.net/7B3Sb.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I have uninstalled it and reinstalled it using 'pip3 install transformers' from the Python command line.</p> <p>Then I tried uninstalling again and reinstalling in a Jupyter notebook using '!pip install transformers'; the result shows:</p> <pre><code>Installing collected packages: transformers Successfully installed transformers-4.24.0 </code></pre> <p>I can also verify directly in Jupyter Notebook:</p> <p><a href="https://i.sstatic.net/7WyEm.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I installed transformers successfully in the Jupyter notebook, but after that I'm still getting the <code>ModuleNotFoundError</code> (I have tried restarting the kernel too).</p> <p><a href="https://i.sstatic.net/lAxgb.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>It's resolved now. I just used <code>%pip install transformers==3.4.0</code> instead of <code>!pip install transformers==3.4.0</code> in the Jupyter notebook, and it worked. I can proceed with the project for now, although I don't know what I did wrong in my Python command line earlier that caused the inconsistency. Will open a new thread.</p>
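The likely explanation for the inconsistency: `!pip` spawns a separate shell whose `pip` may belong to a different interpreter, while `%pip` always targets the kernel's own environment. A shell-free way to see (and explicitly target) the interpreter behind the running kernel:

```python
import sys

# The interpreter the current kernel runs on; installing against it directly
# avoids the %pip / !pip environment mismatch.
print(sys.executable)
# In a notebook cell, the equivalent robust install would be:
# !{sys.executable} -m pip install transformers==3.4.0
```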
427
transformers
Hugginface transformers module not recognized by anaconda
https://stackoverflow.com/questions/62538079/hugginface-transformers-module-not-recognized-by-anaconda
<p>I am using Anaconda, python 3.7, windows 10.</p> <p>I tried to install transformers by <a href="https://huggingface.co/transformers/" rel="nofollow noreferrer">https://huggingface.co/transformers/</a> on my env. I am aware that I must have either pytorch or TF installed, I have pytorch installed - as seen in anaconda navigator environments.</p> <p>I would get many kinds of errors, depending on where (anaconda / prompt) I uninstalled and reinstalled pytorch and transformers. Last attempt using conda install pytorch torchvision cpuonly -c pytorch and conda install -c conda-forge transformers I get an error:</p> <pre class="lang-py prettyprint-override"><code>from transformers import BertTokenizer bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) def tok(dataset): input_ids = [] attention_masks = [] sentences = dataset.Answer2EN.values labels = dataset.Class.values for sent in sentences: encoded_sent = bert_tokenizer.encode(sent, add_special_tokens=True, max_length = 64, pad_to_max_length =True) </code></pre> <blockquote> <p>TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length'</p> </blockquote> <p><strong>Does anyone know a secure installation of transformers using Anaconda?</strong> Thank you</p>
<p>The problem is that conda only offers the transformers library in version 2.1.1 (<a href="https://anaconda.org/conda-forge/transformers" rel="nofollow noreferrer">repository information</a>) and this version didn't have a <code>pad_to_max_length</code> argument. I don't want to look up whether there was a different parameter, but you can simply pad the result (which is just a list of integers):</p> <pre class="lang-py prettyprint-override"><code>from transformers import BertTokenizer bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) sentences = ['this is just a test', 'this is another test'] max_length = 64 for sent in sentences: encoded_sent = bert_tokenizer.encode(sent, add_special_tokens=True, max_length = max_length) encoded_sent.extend([0]* (max_length - len(encoded_sent))) ###your other stuff </code></pre> <p>The better option in my opinion is to create a new conda environment and install everything via pip and not via conda. This will allow you to work with the most recent transformers version (2.11).</p>
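The manual padding step can be factored into a small pure-Python helper (the pad id 0 is carried over from the answer above; it matches bert-base-uncased's [PAD] token id):

```python
from typing import List

def pad_to_length(token_ids: List[int], max_length: int, pad_id: int = 0) -> List[int]:
    """Right-pad a list of token ids with pad_id up to max_length (no truncation)."""
    return token_ids + [pad_id] * max(0, max_length - len(token_ids))

print(pad_to_length([101, 2023, 102], 6))  # [101, 2023, 102, 0, 0, 0]
```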
428
transformers
Cannot import pipeline after successful transformers installation
https://stackoverflow.com/questions/68499238/cannot-import-pipeline-after-successful-transformers-installation
<h2 id="environment-info-f7cj">Environment info</h2> <ul> <li><code>transformers</code> version: 4.9.0</li> <li>Platform: Linux-4.15.0-151-generic-x86_64-with-glibc2.27</li> <li>Python version: 3.9.2</li> <li>PyTorch version (GPU?): 1.7.1+cu101 (False)</li> <li>Tensorflow version (GPU?): 2.5.0 (False)</li> <li>Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)</li> <li>Jax version: 0.2.17</li> <li>JaxLib version: 0.1.69</li> <li>Using GPU in script?: no</li> <li>Using distributed or parallel set-up in script?: no</li> </ul> <h2 id="details-i0la">Details</h2> <p>I am attempting to use a fresh installation of transformers library, but after successfully completing the installation with pip, I am not able to run the test script: <code>python -c &quot;from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))&quot;</code></p> <p>Instead, I see the following output:</p> <blockquote> <p>/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/gensim/similarities/<strong>init</strong>.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package &lt;https://pypi.org/proje$ t/python-Levenshtein/&gt; is unavailable. Install Levenhstein (e.g. 
<code>pip install python-Levenshtein</code>) to suppress this warning.<br /> warnings.warn(msg)<br /> Traceback (most recent call last):<br /> File &quot;&quot;, line 1, in <br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1977, in <strong>getattr</strong><br /> module = self._get_module(self._class_to_module[name])<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1986, in _get_module<br /> return importlib.import_module(&quot;.&quot; + module_name, self.<strong>name</strong>)<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/importlib/<strong>init</strong>.py&quot;, line 127, in import_module<br /> return _bootstrap._gcd_import(name[level:], package, level)<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/pipelines/<strong>init</strong>.py&quot;, line 25, in <br /> from ..models.auto.configuration_auto import AutoConfig<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/<strong>init</strong>.py&quot;, line 19, in <br /> from . 
import (<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/layoutlm/<strong>init</strong>.py&quot;, line 22, in <br /> from .configuration_layoutlm import LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMConfig<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/layoutlm/configuration_layoutlm.py&quot;, line 19, in <br /> from ..bert.configuration_bert import BertConfig<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/bert/configuration_bert.py&quot;, line 21, in <br /> from ...onnx import OnnxConfig<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/onnx/<strong>init</strong>.py&quot;, line 16, in <br /> from .config import EXTERNAL_DATA_FORMAT_SIZE_LIMIT, OnnxConfig, OnnxConfigWithPast<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/onnx/config.py&quot;, line 18, in <br /> from transformers import PretrainedConfig, PreTrainedTokenizer, TensorType<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1977, in <strong>getattr</strong><br /> module = self._get_module(self._class_to_module[name])<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1986, in _get_module<br /> return importlib.import_module(&quot;.&quot; + module_name, self.<strong>name</strong>)<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/importlib/<strong>init</strong>.py&quot;, line 127, in import_module<br /> return _bootstrap._gcd_import(name[level:], package, level)<br /> File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/tokenization_utils.py&quot;, line 26, in <br /> from .tokenization_utils_base import (<br /> File 
&quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/tokenization_utils_base.py&quot;, line 74, in <br /> from tokenizers import AddedToken<br /> File &quot;/home/shushan/tokenization_experiments/tokenizers.py&quot;, line 26, in <br /> from transformers import BertTokenizer File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1978, in <strong>getattr</strong> value = getattr(module, name) File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1977, in <strong>getattr</strong> module = self._get_module(self._class_to_module[name]) File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py&quot;, line 1986, in _get_module return importlib.import_module(&quot;.&quot; + module_name, self.<strong>name</strong>) File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/importlib/<strong>init</strong>.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/bert/tokenization_bert.py&quot;, line 23, in from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace ImportError: cannot import name 'PreTrainedTokenizer' from partially initialized module 'transformers.tokenization_utils' (most likely due to a circular import) (/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformer s/tokenization_utils.py)</p> </blockquote> <p>I have attempted uninstalling transformers and re-installing them, but I couldn't find any more information as to what is wrong, or how to go about fixing this issue I am seeing. 
The only suspicious behavior is that the output of the <code>transformers-cli env</code> command above says the PyTorch version does not work with GPU, while in reality I have an installation of PyTorch that works with GPU. Can you help? Thanks in advance, Shushan</p>
<p>Maybe the presence of both PyTorch and TensorFlow, or an incorrectly created environment, is causing the issue. Try re-creating the environment, installing the bare minimum of packages, and keeping just one of PyTorch or TensorFlow.</p> <p>It worked perfectly fine for me with the following config:</p> <pre><code> - transformers version: 4.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.2 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: &lt;fill in&gt; - Using distributed or parallel set-up in script?: &lt;fill in&gt; </code></pre>
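Incidentally, the traceback shows `from tokenizers import AddedToken` being resolved from a local file (`/home/shushan/tokenization_experiments/tokenizers.py`) rather than the installed package, a classic module-shadowing symptom. Which file a name would actually import from can be checked without importing it (shown here with `json` as a stand-in; substitute `"tokenizers"` in the failing environment):

```python
import importlib.util

# Shows the file that 'import <name>' would load; if this points at a local
# script instead of site-packages, rename that local script.
spec = importlib.util.find_spec("json")
print(spec.origin)
```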
429
transformers
Import of transformers package throwing value_error
https://stackoverflow.com/questions/68997701/import-of-transformers-package-throwing-value-error
<p>I have successfully installed transformers package in my Jupyter Notebook from Anaconda administrator console using the command '<code>conda install -c conda-forge transformers</code>'.</p> <p>However when I try to load the transformers package in my Jupyter notebook using '<code>import transformers</code>' command, I am getting an error, <code>'ValueError: got_ver is None'</code>.</p> <p>I am not sure how I can resolve this. Appreciate any inputs.</p> <p>Below is the complete error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-58-279c49635b32&gt; in &lt;module&gt; ----&gt; 1 import transformers C:\ProgramData\Anaconda3\lib\site-packages\transformers\__init__.py in &lt;module&gt; 41 42 # Check the dependencies satisfy the minimal versions required. ---&gt; 43 from . import dependency_versions_check 44 from .file_utils import ( 45 _LazyModule, C:\ProgramData\Anaconda3\lib\site-packages\transformers\dependency_versions_check.py in &lt;module&gt; 39 continue # not required, check version only if installed 40 ---&gt; 41 require_version_core(deps[pkg]) 42 else: 43 raise ValueError(f&quot;can't find {pkg} in {deps.keys()}, check dependency_versions_table.py&quot;) C:\ProgramData\Anaconda3\lib\site-packages\transformers\utils\versions.py in require_version_core(requirement) 118 &quot;&quot;&quot;require_version wrapper which emits a core-specific hint on failure&quot;&quot;&quot; 119 hint = &quot;Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master&quot; --&gt; 120 return require_version(requirement, hint) C:\ProgramData\Anaconda3\lib\site-packages\transformers\utils\versions.py in require_version(requirement, hint) 112 if want_ver is not None: 113 for op, want_ver in wanted.items(): --&gt; 114 _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) 115 116 
C:\ProgramData\Anaconda3\lib\site-packages\transformers\utils\versions.py in _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) 43 def _compare_versions(op, got_ver, want_ver, requirement, pkg, hint): 44 if got_ver is None: ---&gt; 45 raise ValueError(&quot;got_ver is None&quot;) 46 if want_ver is None: 47 raise ValueError(&quot;want_ver is None&quot;) ValueError: got_ver is None </code></pre>
<p>I had a similar error, which took a whole day to fix.</p> <p>This is caused by a version mismatch in some of the packages transformers expects at import time. You can check the specific package details in the transformers folder on your local disk. Two Python files are relevant in the location ..Anaconda3\Lib\site-packages\transformers: dependency_versions_table.py and dependency_versions_check.py.</p> <p>The dependency table gives the details of all packages required by transformers and their versions.</p> <p><a href="https://i.sstatic.net/v4TBB.png" rel="noreferrer"><img src="https://i.sstatic.net/v4TBB.png" alt="dependency version table" /></a></p> <p>The dependency version check file gives code to check the specific versions. <a href="https://i.sstatic.net/pJsZF.png" rel="noreferrer"><img src="https://i.sstatic.net/pJsZF.png" alt="dependency version check code" /></a></p> <p>You can check these versions. For me, below was the output:</p> <pre><code>from importlib_metadata import version print(version('tqdm')) #4.64.0 print(version('regex')) # 2022.3.15 print(version('sacremoses')) # 0.0.46 print(version('packaging')) # 21.0' print(version('filelock')) # 3.6.0 print(version('numpy')) # 'none' print(version('tokenizers')) #0.12.1 </code></pre> <p>My code returned 'none' for numpy initially. Then I checked the numpy version in two ways:</p> <pre><code>print(numpy.__version__) pip show numpy </code></pre> <p>Both were giving two different versions. Then I force-installed one version using the code below:</p> <pre><code>!python -m pip install numpy==1.19.5 --user </code></pre> <p>After that I checked the versions again and found numpy returning version 1.19.5. Then I restarted the kernel and imported transformers.</p> <p>This resolved the issue with importing transformers.</p> <p>Note: if importlib_metadata is not working, try importlib.metadata too.</p>
430
transformers
from transformers import TFBertModel, BertConfig, BertTokenizerFast
https://stackoverflow.com/questions/64823301/from-transformers-import-tfbertmodel-bertconfig-berttokenizerfast
<p>I am having trouble importing TFBertModel, BertConfig, BertTokenizerFast. I tried the latest version of transformers, tokenizer==0.7.0, and transformers.modeling_bert but they do not seem to work. I get the error</p> <p><code>from transformers import TFBertModel, BertConfig, BertTokenizerFast</code></p> <p>ImportError: cannot import name 'TFBertModel' from 'transformers' (unknown location)</p> <p>Any ideas for a fix? Thanks!</p>
<p>Do you have <code>TensorFlow 2</code> installed? The class you are trying to import requires TensorFlow.</p>
431
transformers
Monad transformers monad duplication
https://stackoverflow.com/questions/16637221/monad-transformers-monad-duplication
<p>I am new to monad transformers, so sorry for the easy question. I have a value <code>val :: MaybeT IO String</code> and a function <code>fn :: String -&gt; IO [String]</code>. So after mapping, I have <code>liftM fn val :: MaybeT IO (IO [String])</code>. How can I remove the duplicated IO monad and get a result of type <code>MaybeT IO [String]</code>?</p>
<p>Use <code>lift</code> (or <code>liftIO</code>) instead of <code>liftM</code>.</p> <pre><code>&gt; :t val &gt;&gt;= lift . fn val &gt;&gt;= lift . fn :: MaybeT IO [String] </code></pre> <p><code>liftM</code> is for applying pure functions in a monad. <code>lift</code> and <code>liftIO</code> are for lifting actions into a transformer.</p> <pre><code>liftM :: Monad m =&gt; (a -&gt; b) -&gt; m a -&gt; m b lift :: (Monad m, MonadTrans t) =&gt; m a -&gt; t m a liftIO :: MonadIO m =&gt; IO a -&gt; m a </code></pre>
432
transformers
R Reticulate transformers library cannot find torch
https://stackoverflow.com/questions/70262279/r-reticulate-transformers-library-cannot-find-torch
<p>Using R and the <code>reticulate</code> package I am trying to use a pre-trained model from Huggingface. This particular model requires PyTorch and transformers. Both are available in R via reticulate; however, even though I can install and load both, the transformers package can't find the PyTorch installation.</p> <pre><code>use_virtualenv(&quot;r-reticulate&quot;) reticulate::py_install('transformers', pip = TRUE) reticulate::py_install(&quot;PyTorch&quot;) transformer = reticulate::import('transformers') torch = reticulate::import('torch') tokenizer = transformer$AutoTokenizer$from_pretrained(&quot;gagan3012/keytotext-small&quot;) model = transformer$AutoModel$from_pretrained(&quot;gagan3012/keytotext-small&quot;) </code></pre> <p>and the error:</p> <pre><code>Error in py_call_impl(callable, dots$args, dots$keywords): ImportError: AutoModel requires the PyTorch library but it was not found in your environment. Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Detailed traceback: File &quot;/miniconda/envs/r-reticulate/lib/python3.7/site-packages/transformers/utils/dummy_pt_objects.py&quot;, line 364, in from_pretrained requires_backends(cls, [&quot;torch&quot;]) File &quot;/miniconda/envs/r-reticulate/lib/python3.7/site-packages/transformers/file_utils.py&quot;, line 683, in requires_backends raise ImportError(&quot;&quot;.join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends])) Traceback: 1. transformer$AutoModel$from_pretrained(&quot;gagan3012/keytotext-small&quot;) 2. py_call_impl(callable, dots$args, dots$keywords) </code></pre> <p>but PyTorch is definitely installed - I can call methods, e.g. <code>torch$cudnn_convolution_add_relu</code>, so how can I tell the transformers package where torch is?</p>
<p>It looks like reticulate is pointing to miniconda, and you have likely installed PyTorch, etc. in a different environment.</p> <p>Check out which python environment your <code>Pytorch</code> is installed. Then check where your reticulate environment is using <code>Sys.getenv()</code> and look for <code>RETICULATE_PYTHON</code> variable.</p> <p>The best way to handle a mis-match is to create a .Renviron file in the home directory listed in your <code>Sys.getenv()</code></p> <p>I install all of my packages into a Python environment called reticulate and then point to it in my <code>.Renviron</code> file.</p> <p>In your <code>.Renviron</code> file you need the following code:</p> <pre><code>RETICULATE_PYTHON=&quot;path to environment with Pytorch&quot; </code></pre> <p>This post talks about it also.</p> <p><a href="https://stackoverflow.com/questions/50145643/unable-to-change-python-path-in-reticulate/69798951#69798951">.Renviron File for Reticulate</a></p>
433