gradient descent
In general - is Stochastic Gradient Descent a "superior" algorithm compared to Gradient Descent?
https://ai.stackexchange.com/questions/34532/in-general-is-stochastic-gradient-descent-a-superior-algorithm-compared-to-g
<p>On a very informal level, if we were to compare the (classical) <strong>Gradient Descent Algorithm to the Stochastic Gradient Descent Algorithm</strong>, the first thing that comes to mind is:</p> <ul> <li>Gradient Descent can be considered a slower algorithm than Stochastic Gradient Descent, since it requires performing mathematical operations (e.g. derivatives, second derivatives) on the entire data set. On the other hand, Stochastic Gradient Descent can be considered a faster algorithm, since it approximates the gradient using &quot;mini batches&quot; of observations from the data set. Logically, this makes sense: Stochastic Gradient Descent requires fewer calculations than Gradient Descent, so the same computer should take less time to perform fewer calculations.</li> </ul> <p>However, (in the spirit of tradeoffs) the somewhat logical counterpoint to the above argument is that:</p> <ul> <li>Since Stochastic Gradient Descent approximates the gradient of the function whereas Gradient Descent evaluates the full gradient - one would imagine that Gradient Descent might have a better ability to find the true minimum of the function compared to Stochastic Gradient Descent. <strong>However, I am not sure about this.</strong></li> </ul> <p><strong>My Question:</strong> In the past few years, a lot of research has been done on the behavior (e.g. theoretical convergence properties) of Stochastic Gradient Descent (e.g. 
<a href="https://arxiv.org/abs/2103.14350" rel="nofollow noreferrer">https://arxiv.org/abs/2103.14350</a> , <a href="https://raghumeka.github.io/CS289ML/gdnotes.pdf" rel="nofollow noreferrer">https://raghumeka.github.io/CS289ML/gdnotes.pdf</a>) which have demonstrated similar abilities of the Stochastic Gradient Descent Algorithm to converge as compared to Gradient Descent.</p> <p><strong>But are there any theoretical results which expose the above (speculated) tradeoff?</strong> At the end of the day, if Stochastic Gradient requires less computing power when compared to Gradient Descent - are there any theoretical results that suggest Gradient Descent has an inherent ability to better find the minimum of the function (e.g. perhaps less likely to get &quot;stuck&quot; in saddle points) since it is not relying on an approximation of the gradient? Or if Stochastic Gradient Descent is equal to Gradient Descent in this regard - could Stochastic Gradient Descent be then considered as superior to Gradient Descent?</p> <p>Can someone please comment on this?</p> <p>Thanks!</p>
<p>Stochastic Gradient Descent has empirically been shown to lead to better results than classic Gradient Descent since its formulation, and today we're getting closer to understanding that it's not just luck, but the result of better mathematical properties as well. So the answer to your question is no: the theoretical results also show that SGD is better than classic GD.</p> <p>These papers offer different explanations of why SGD tends to perform better than classic GD:</p> <ol> <li><a href="https://arxiv.org/abs/1509.01240" rel="nofollow noreferrer">Train faster, generalize better: Stability of stochastic gradient descent</a></li> <li><a href="https://proceedings.mlr.press/v40/Ge15.pdf" rel="nofollow noreferrer">Escaping From Saddle Points – Online Stochastic Gradient for Tensor Decomposition</a></li> <li><a href="https://arxiv.org/abs/1802.06175" rel="nofollow noreferrer">An Alternative View: When Does SGD Escape Local Minima?</a></li> </ol> <p>Quick summaries:</p> <ol> <li>SGD works better because it is less prone to generalization error, or, to put it the other way around, it is a more stable learner compared to GD, i.e. the solutions learned with SGD are less affected by small perturbations in the training data.</li> <li>SGD works better because its randomness makes it less prone to getting stuck in saddle points.</li> <li>SGD works better because it is literally GD applied to a smoother version (convolved with noise) of the objective function we want to minimize, so again, there is less risk of getting stuck not only in saddle points, but in bad local minima as well (see image below).</li> </ol> <p><a href="https://i.sstatic.net/Y4Ikc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y4Ikc.png" alt="enter image description here" /></a></p> <p>Extra note: it's not hard to understand conceptually how SGD can help escape saddle points and local minima. 
When computing a single step using the gradients of all training instances, as GD does, we don't leave much room for exploration of the loss surface. So if our training instances point toward a local minimum, that will be the solution found by GD. On the other hand, SGD computes gradients only on a batch of instances, so each update is a suboptimal step in the negative direction of the gradient. Being suboptimal, each step leaves room for exploration: if an update step points toward a local minimum, the next step might point in a different direction, something that can't happen in GD since it performs a single update. The same conclusion, with a different conceptualization, comes from the interpretation of the last paper. The authors prove that SGD can be seen as GD (so again, a single update step) performed on a loss function convolved with noise. Convolution with noise has the effect of smoothing out the function (as a metaphor, think of combining random frequencies: they are more likely to cancel each other out than to reinforce each other). Smoothing the loss function means precisely removing small local minima, as you can see in the last picture. The consequence is the same as in the first explanation, i.e. less chance of getting stuck in a local minimum.</p> <p><a href="https://i.sstatic.net/FwOAk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FwOAk.png" alt="enter image description here" /></a></p>
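As a rough illustration of the cost/behavior contrast discussed above, here is a minimal sketch (toy least-squares problem; all names and constants are invented for illustration) comparing one full-batch GD update per epoch with many cheap minibatch SGD updates per epoch:

```python
# Hypothetical toy problem: both optimizers fit the same linear model,
# GD with one full-data update per epoch, SGD with many minibatch updates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w + 0.1 * rng.normal(size=1000)

def grad(w, Xb, yb):
    # Gradient of the mean squared error over the given (mini)batch.
    return Xb.T @ (Xb @ w - yb) / len(yb)

w_gd = np.zeros(5)
w_sgd = np.zeros(5)
lr, batch = 0.1, 32

for epoch in range(50):
    # Full-batch GD: one update per epoch, touching all 1000 examples.
    w_gd -= lr * grad(w_gd, X, y)
    # Minibatch SGD: many cheap, noisy updates per epoch.
    idx = rng.permutation(len(y))
    for start in range(0, len(y), batch):
        b = idx[start:start + batch]
        w_sgd -= lr * grad(w_sgd, X[b], y[b])

print(np.round(w_gd, 2), np.round(w_sgd, 2))  # both approach true_w
```

On this convex toy problem both reach the same solution; SGD simply gets there via many noisy steps, and that noise is exactly what gives it the exploration behavior described in the answer.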
312
gradient descent
What do you mean by &quot;updating based on a training example/batch&quot; in Gradient Descent?
https://ai.stackexchange.com/questions/43688/what-do-you-mean-by-updating-based-on-a-training-example-batch-in-gradient-des
<p>My understanding is this: When doing Stochastic Gradient Descent over a neural network, in every epoch, we run <span class="math-container">$n$</span> iterations (where the dataset has <span class="math-container">$n$</span> training examples) and in every iteration, we take a random sample and update the parameters wrt the sample.</p> <p>However, in batch gradient descent, we take the whole dataset every epoch, and update the parameters wrt the batch. I have the following questions:</p> <ol> <li>Why do we need to compute the loss function every time, especially if the value of the loss has no importance as such in the backprop process? Is it just to ensure that the loss decreases over time? How can you propagate the error back when the value of L is of no significance?</li> <li>What exactly does updating wrt a &quot;batch&quot; mean? What would you take the input vector (required to compute gradients for the first set of weights) as? I assume that the loss is taken to be the average of the losses for the entire batch</li> </ol>
<p>In batch gradient descent, computing the loss function every time serves several purposes. Note that backpropagation does not actually need the scalar value of the loss, only its gradient with respect to the network outputs; the scalar value is computed mainly for monitoring, and monitoring the loss over time is essential to ensure that the optimization process is converging. Also, many optimization algorithms, such as learning rate schedules or adaptive learning rate methods, adjust the learning rate or other hyperparameters based on the behavior of the loss function during training, so computing the loss is crucial for determining how the learning rate should be updated. Finally, monitoring the loss function can also be used for early stopping, a technique where training is halted if the loss on a validation set stops decreasing or starts increasing, which helps prevent overfitting and wasted computation.</p> <p>Updating parameters with respect to a batch means computing the gradients of the (average) loss over all examples in the batch with respect to the parameters, and then updating the parameters accordingly. This process is repeated for each batch during each epoch of training, and in your case a batch could simply be the entire training dataset.</p> <p>The input used to compute gradients for the first set of weights is a matrix whose rows are the individual examples of the batch and whose columns are the features of those examples; the number of features typically determines the number of input nodes of your network. During training the neural network processes this entire batch at once, with operations including the forward and backward passes performed in a vectorized or parallelized manner for efficiency.</p>
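The "update with respect to a batch" idea can be made concrete with a small sketch (shapes and names are illustrative, not from the question): the loss is the average of per-example losses, and one update uses the gradient of that average:

```python
# Hypothetical single-layer linear model updated on one batch at a time.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))   # batch of 8 examples, 3 features each
y = rng.normal(size=(8, 1))
W = np.zeros((3, 1))

pred = X @ W                            # forward pass for the whole batch
loss = np.mean((pred - y) ** 2)         # scalar, used mainly for monitoring
grad_W = 2 * X.T @ (pred - y) / len(X)  # gradient of the *average* loss
W -= 0.1 * grad_W                       # one parameter update per batch
```

The scalar `loss` is never fed backwards; only its gradient `grad_W` drives the update.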
313
gradient descent
Oscillating around the saddle point in gradient descent?
https://ai.stackexchange.com/questions/18410/oscillating-around-the-saddle-point-in-gradient-descent
<p>I was reading a blog post that talked about the problem of the saddle point in training. </p> <blockquote> <p>In the post, it says if the loss function is flatter in the direction of x (local minima here) compared to y at the saddle point, gradient descent will oscillate to and from the y direction. This gives an illusion of converging to a minima. Why is this? </p> <p>Wouldn’t it continue down in the y direction and hence escape the saddle point?</p> </blockquote> <p><a href="https://i.sstatic.net/NyXTy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NyXTy.png" alt="Saddle point"></a></p> <p>Link to post: <a href="https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/" rel="nofollow noreferrer">https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/</a></p> <p>Please go to <em>Challenges with Gradient Descent #2: Saddle Points</em>.</p>
<p>It's important to note that in a pragmatic instance of ML on a "real dataset", you likely wouldn't have a "strict" saddle point with precisely zero gradient. Your error surface won't be perfectly "smooth", so even though you would have something that resembles a saddle point, what you would actually encounter is a small region with near-zero gradient.</p> <p>So let's say you are in such a region, and assume that the gradient there is essentially zero plus small Gaussian-distributed noise. You can then see that the algorithm can't quite escape the region (or at least will spend a lot of time in it), since (1) a Gaussian random walk tends to stay near where it started unless run for a very long time, and (2) small gradients mean there is no clear direction in which to leave the region.</p> <p>In any case, SGD more or less solves this issue, and its usage is standard practice for reasons beyond this problem.</p>
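A minimal toy sketch of this lingering behavior (an assumed function f(x, y) = x&#178; &#8722; y&#178;, not from the post): starting almost exactly on the saddle axis, plain gradient descent spends on the order of a hundred steps hovering near the saddle before the y-direction finally grows, which is what creates the illusion of convergence:

```python
# Toy saddle f(x, y) = x^2 - y^2 with gradient (2x, -2y).
lr = 0.1
x, y = 1.0, 1e-8          # y is a tiny perturbation off the saddle axis

steps_to_escape = 0
while abs(y) < 1.0:        # call it "escaped" once y is order 1
    x -= lr * 2 * x        # df/dx = 2x  -> x shrinks toward 0 quickly
    y += lr * 2 * y        # df/dy = -2y -> y grows, but very slowly at first
    steps_to_escape += 1

print(steps_to_escape)     # roughly a hundred steps spent near the saddle
```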
314
gradient descent
Different methods of calculating gradients of cost function(loss function)
https://ai.stackexchange.com/questions/19895/different-methods-of-calculating-gradients-of-cost-functionloss-function
<p>We need to find the gradient of the loss function (cost function) w.r.t. the weights to use optimization methods such as SGD or gradient descent. So far, I have come across two ways to compute the gradient:</p> <ol> <li>Backpropagation</li> <li>Calculating the gradient of the loss function by calculus</li> </ol> <p>I found many resources for understanding backpropagation. The 2nd method I am referring to is shown in the image below (taken from a specific example; e is the error: the difference between target and prediction): <a href="https://i.sstatic.net/OSOVB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OSOVB.jpg" alt="enter image description here"></a> </p> <p>The proof is also given in this paper: <a href="https://arxiv.org/pdf/1802.01528.pdf" rel="nofollow noreferrer">here</a></p> <p>Moreover, I found this method while reading this <a href="https://www.pyimagesearch.com/2016/10/10/gradient-descent-with-python/" rel="nofollow noreferrer">blog</a>. (You might have to scroll down to see the code: gradient = X.T.dot(error) / X.shape[0]) </p> <p>My question is: are the two methods of finding the gradient of the cost function the same? They appear different, and if they are the same, which one is more efficient (though one can guess it is backpropagation)?</p> <p>Would be grateful for any help. Thanks for being patient (it's my 1st time learning ML). </p>
<p>They compute the same thing. Backpropagation and the matrix-calculus derivation in your image both produce the gradient of the cost with respect to each weight; backpropagation is just a systematic, efficient way of applying the chain rule (reverse-mode automatic differentiation), which matters once the network has many layers. For a single linear layer, as in the blog post, the two coincide exactly: <code>gradient = X.T.dot(error) / X.shape[0]</code> is the closed-form result of that chain rule. Gradient descent is then the separate step of subtracting the gradient times the learning rate from each weight.</p>
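One way to convince yourself the two methods agree is to check the closed-form gradient from the blog post against a numerical finite-difference estimate; a minimal sketch (toy data, illustrative names):

```python
# Toy linear model: the closed-form gradient X.T @ error / n should match
# a central-difference estimate of the same loss, coordinate by coordinate.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 4))
y = rng.normal(size=20)
w = rng.normal(size=4)

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

analytic = X.T @ (X @ w - y) / len(y)   # chain-rule / matrix-calculus result

eps = 1e-6
numeric = np.array([
    (loss(w + eps * np.eye(4)[i]) - loss(w - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])

print(np.max(np.abs(analytic - numeric)))  # agrees to high precision
```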
315
gradient descent
Why is loss displayed as a parabola in mean squared error with gradient descent?
https://ai.stackexchange.com/questions/27390/why-is-loss-displayed-as-a-parabola-in-mean-squared-error-with-gradient-descent
<p>I'm looking at the loss function: mean squared error with gradient descent in machine learning. I'm building a single-neuron network (perceptron) that outputs a linear number. For example:</p> <p>Input * Weight + Bias &gt; linear activation &gt; output.</p> <p>Let's say the output is 40 while I expect the number 20. That means the loss function has to correct the weights+bias from 40 towards 20.</p> <p>What I don't understand about mean squared error + gradient descent is: why is this number 40 displayed as a point on a <strong>parabola</strong>?</p> <p>Does this parabola represent all possible outcomes? Why isn't it just a line? How do I know where on the parabola the point &quot;40&quot; is?</p> <p><a href="https://i.sstatic.net/WPwui.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WPwui.png" alt="image of parabola" /></a></p>
<p>Mean Squared Error (MSE) is a quadratic function: the further your output is from the optimum, the larger the MSE grows, quadratically. Take <span class="math-container">$o_{expected}=20$</span> and <span class="math-container">$o_{net}=40$</span> as an example. Your MSE is then 400, because <span class="math-container">$MSE = (o_{expected}-o_{net})^2$</span>.</p> <p>Just imagine <span class="math-container">$y = x^2$</span> with x being the output of your network. If you want to shift the parabola so its optimum is at <span class="math-container">$20$</span>, the formula you get is <span class="math-container">$y = (20-x)^2$</span>. So the parabola represents the loss for every possible output, not just the one you observed; your output of 40 corresponds to the single point (40, 400) on it. For every new target you train the net on, you get a different parabola with different parameters.</p>
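A minimal sketch of that parabola (values chosen to match the example): evaluating y = (20 &#8722; x)&#178; at several candidate outputs shows the quadratic shape, with the output 40 as just one point on the curve:

```python
# Squared error as a function of every candidate output of the network.
import numpy as np

target = 20
outputs = np.arange(0, 61, 10)     # candidate outputs of the network
losses = (target - outputs) ** 2   # squared error for each candidate

print(dict(zip(outputs.tolist(), losses.tolist())))
# the minimum (loss 0) is at x = 20; the output 40 gives loss 400
```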
316
gradient descent
Do gradient-based algorithms deal with the flat regions with desired points?
https://ai.stackexchange.com/questions/30300/do-gradient-based-algorithms-deal-with-the-flat-regions-with-desired-points
<p>I am studying a chapter named <a href="https://www.deeplearningbook.org/contents/numerical.html" rel="nofollow noreferrer">Numerical Computation</a> of a deep learning book. Afaik, it does not deal with flat regions that contain the desired points.</p> <p>For example, let us consider a function whose local/global minimum or maximum values lie on flat regions. You can take <a href="https://www.desmos.com/calculator/5vbpayhztu" rel="nofollow noreferrer">this</a> graph (just) for example.</p> <p><a href="https://i.sstatic.net/M620x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M620x.png" alt="Example graph with flat minimum" /></a></p> <p>Can gradient-based algorithms work on those curves whose local/global minima or maxima lie on flat regions?</p>
<blockquote> <p>Can gradient-based algorithms work on those curves whose local/global minima or maxima lie on flat regions?</p> </blockquote> <p>Yes, with some minor caveats.</p> <p>All the points on the flat region are equivalent (and in your example, are all valid global minimum points). Gradients outside of the region will point correctly away from that region, and gradient descent steps will therefore move parameters towards it.</p> <p>Provided the step size multiplied by the gradient near the flat region is not too large, a step taken near it will end up with parameters inside the region. After that, any further gradients will be zero, so it is not possible to use <em>basic</em> gradient steps to escape it.</p> <p>In the case of a global minimum, that's fine: you don't care <em>which</em> point in the global minimum you have converged to (otherwise your function to optimise would be different).</p> <p>In the case of local minima or saddle points, you might care to use optimisation methods that can escape flat areas. Minibatch or stochastic gradient descent can do this because gradient measurements are noisy, whilst <a href="https://distill.pub/2017/momentum/" rel="nofollow noreferrer">momentum algorithms</a> can continue making steps when the immediate gradient signal is zero.</p> <p>The example function you used is not something you would expect to come across when optimising a machine learning algorithm, although some loss functions do have components that have similar behaviour. 
For example, <a href="https://en.wikipedia.org/wiki/Triplet_loss" rel="nofollow noreferrer">triplet loss</a> uses a <span class="math-container">$\text{max}(d_1 - d_2 + \alpha, 0)$</span> where <span class="math-container">$d_1$</span> and <span class="math-container">$d_2$</span> are distances between an anchor image and the desired class versus a different class respectively, and <span class="math-container">$\alpha$</span> is a margin or minimum desirable distance between classes. The details of this are not important unless you want to create a face recogniser or similar - the important detail for your question is that <span class="math-container">$\text{max}(x, 0)$</span> is really used in ML as a loss function, and may have a similar shape to your example function. Once used in aggregate with many data examples though, and with regularisation, the shape would not be so simple, and probably would not have any reachable flat minima regions like this.</p>
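A minimal sketch (toy 1-D loss f(x) = max(x, 0), with illustrative constants) of the point about escaping flat areas: plain gradient descent stalls as soon as the gradient hits zero, while a momentum method keeps moving through the flat region:

```python
# Toy flat-region loss: gradient is 1 for x > 0 and exactly 0 for x <= 0.
def grad(x):
    return 1.0 if x > 0 else 0.0

lr, mu = 0.3, 0.9

# Plain GD: stops moving the moment it enters the flat region.
x = 1.0
for _ in range(50):
    x -= lr * grad(x)

# Momentum: accumulated velocity carries the iterate deeper into the flat region.
xm, v = 1.0, 0.0
for _ in range(50):
    v = mu * v + lr * grad(xm)
    xm -= v

print(x, xm)  # GD is stuck just inside the flat region; momentum travelled on
```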
317
gradient descent
Unclear fact about difference between Gradient Descent to Stochastic Gradient Decent in wikipedia
https://ai.stackexchange.com/questions/34440/unclear-fact-about-difference-between-gradient-descent-to-stochastic-gradient-de
<p>From the wikipedia page it is mentioned:</p> <blockquote> <p>To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems</p> </blockquote> <p>And later on it demonstrates the pseudo code with a shuffle, not with sub-setting the samples. I don't get how applying a shuffle improves gradient descent: <a href="https://i.sstatic.net/nRWey.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nRWey.png" alt="enter image description here" /></a></p> <p>Can somebody shed some light on this issue?</p>
<p>That paragraph is indeed incomplete and unclear. Let's unpack the difference with a concrete example: regression with a squared-error loss.</p> <p>The objective we want to minimize is:</p> <p><span class="math-container">$J_{train}(\theta)=\frac{1}{2m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})^{2}$</span></p> <p>Notice how this loss formulation requires us to iterate through all our data to sum the squared errors. Gradient descent does precisely that:</p> <p><span class="math-container">$\theta_{j} := \theta_{j} - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})x^{i}_{j}$</span></p> <p>Here too, computing a single descent step requires iterating through all our data while summing up the gradients. This is why batch gradient descent is so expensive on large datasets.</p> <p>One workaround is to redefine our initial loss <span class="math-container">$J$</span> so that we can perform a descent step without having to look at all our data. We usually call this new formulation the <em>cost</em> of a single example:</p> <p><span class="math-container">$cost(\theta, (x^{i}, y^{i})) = \frac{1}{2}(h_{\theta}(x^{i})-y^{i})^{2}$</span></p> <p>This also allows us to rewrite the descent step in a way that doesn't require looking at all our data:</p> <p><span class="math-container">$\theta_{j} := \theta_{j} - \alpha(h_{\theta}(x^{i})-y^{i})x^{i}_{j}$</span></p> <p>Since the sum is now gone, we don't need to accumulate gradients or keep extra state in memory, so this is much more efficient. This is basically stochastic gradient descent.</p> <p>The problem is that, because we now update on a fraction of our data at a time, order becomes relevant. If our data are sorted in some systematic fashion, we risk introducing biases into our parameters due to biases in the subset selected for each update. 
And even though other subsets of the data would normally compensate for that, they no longer can: we have already performed an update step, so by the time the model sees the next subset, the parameters are already biased. The only strategy we have to prevent this is precisely randomizing the order of our data.</p>
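The per-example update together with per-epoch shuffling can be sketched as follows (toy linear-regression data; all names and constants are invented for illustration):

```python
# SGD in the style of the Wikipedia pseudocode: shuffle once per epoch,
# then update on one example at a time, so no fixed ordering biases the path.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.05 * rng.normal(size=200)

w = np.zeros(2)
alpha = 0.05
for epoch in range(20):
    order = rng.permutation(len(y))   # "randomly shuffle examples"
    for i in order:                   # one update per example
        err = X[i] @ w - y[i]
        w -= alpha * err * X[i]

print(np.round(w, 2))  # approaches the true parameters [2, -1]
```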
318
gradient descent
Does Musk know what gradient descent is?
https://ai.stackexchange.com/questions/3187/does-musk-know-what-gradient-descent-is
<p>From <a href="https://futurism.com/elon-musk-an-ai-attack-on-the-internet-is-only-a-matter-of-time/" rel="nofollow noreferrer">Futurism.com</a>:</p> <blockquote> <p>Musk indicates that internet infrastructure is "particularly susceptible" to a method called gradient descent algorithm, a mathematical problem-solving process. Bad news is, AI is excellent at doing gradient descents, which can become devastating digital weaponry.</p> </blockquote> <p>A screenshot of the tweet is provided below.</p> <p><a href="https://i.sstatic.net/BTQco.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BTQco.png" alt="enter image description here"></a></p> <p>He is an intelligent man, so why would he say something like this? Does Musk know what gradient descent is?</p>
<p>I think Musk was using the terminology correctly though perhaps with hyperbole. </p> <p>I believe this was tweeted in the context of the botnet attacks on name-resolution services that broke Netflix and a large number of other internet services for a time. He was expressing the idea that you could train a botnet-based system to attack the internet by giving it a toolbox of targets and hacks and using well-known machine-learning techniques to optimize the effectiveness of using the attacks in combination to take down the internet. </p> <p>Gradient descent itself isn't harmful; it is using gradient descent to train a botnet that could theoretically result in a really devastating attack.</p> <p>I also don't think he is necessarily implying that there is an actual machine learning system at work; rather that the hackers were training themselves with a methodology analogous to gradient descent.</p>
319
gradient descent
Are gradients of weights in RNNs dependent on the gradient of every neuron in that layer?
https://ai.stackexchange.com/questions/3761/are-gradients-of-weights-in-rnns-dependent-on-the-gradient-of-every-neuron-in-th
<p>I am writing my own recurrent neural network in Java to understand the inner workings better. While working through the math, I found that in timesteps later than 2 the gradient of weight w of neuron n depends on the gradients of all neurons at all timesteps before. A handwritten example is given, I tried to write as clearly as possible.</p> <p>Could anyone verify this so I can finish my network? Am I missing a piece or is my premise wrong, as in the output of a neuron is s(Wx + Vh + b), where h is the last step of <em>only</em> neuron n?</p> <p><a href="https://i.sstatic.net/QFMfz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QFMfz.jpg" alt="The feedforward formulas for the first three timesteps"></a> <a href="https://i.sstatic.net/jW9mq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jW9mq.jpg" alt="My derivations of the gradients of the specific weight 1 of neuron 1, just as an example"></a></p>
320
gradient descent
What can be deduced about the &quot;algorithm&quot; of backpropagation/gradient descent?
https://ai.stackexchange.com/questions/6395/what-can-be-deduced-about-the-algorithm-of-backpropagation-gradient-descent
<p>On this video</p> <p><a href="https://www.youtube.com/watch?v=YUVLgccVi54" rel="nofollow noreferrer">Link to video</a></p> <p>a neurologist starts by saying that we do not know how neurons calculate gradients for backpropagation.</p> <p>At minute 30:39 he shows faster convergence for "our algorithm", which seems to converge faster than backpropagation.</p> <p>After 34:36 he explains how "neurons" in the brain are actually packs of neurons.</p> <p>I do not really understand all that he says, so I infer that those packs of neurons (which seem depicted as a single layer) are the ones who calculate the gradient. It would make sense if each neuron made a slightly different calculation, and they then communicated the difference in results to each other. That would allow a gradient to be deduced.</p> <p>What can be deduced, from the presented information, about the purported "algorithm"? (From the viewpoint of improving convergence of an artificial neural network.)</p>
<p>There are 3 separate issues that are often confounded in Deep Learning and Neuroscience:</p> <ol> <li>Deep Learning is inspired by the way the biological brain works.</li> <li>Deep Learning is how the biological brain works.</li> <li>Deep Learning can model how the biological brain works.</li> </ol> <p>Number 1 is accurate. The brain has many layers and many connections. Those principles have informed Deep Learning models.</p> <p>Number 2 has little evidence to support it. The biological brain learns at the cellular level in very different ways than any Deep Learning system learns.</p> <p>Number 3 is a current topic of research. Deep Learning is very good at learning patterns. There are good reasons to believe that Deep Learning can learn patterns in the brain. However, those Deep Learning models will not automatically give insight into the biological processes of the brain.</p> <p>The video is an example of #1. Inspired by our current understanding of neurobiology, let's build better Deep Learning algorithms. These new algorithms might perform better on machine learning benchmarks. However, these algorithms are not better models of the biological brain. In order to understand the algorithms, the language of biology might not be helpful. It might be better to describe them mathematically.</p>
321
gradient descent
SARSA won&#39;t work for linear function approximator for MountainCar-v0 in OpenAI environment. What are the possible causes?
https://ai.stackexchange.com/questions/9590/sarsa-wont-work-for-linear-function-approximator-for-mountaincar-v0-in-openai-e
<p>I am learning Reinforcement Learning from the lectures of David Silver. I finished lecture 6 and went on to try SARSA with a linear function approximator for the MountainCar-v0 environment from OpenAI.</p> <p>A brief explanation of the MountainCar-v0 environment. The state is denoted by two features: position and velocity. There are three actions for each state: accelerate forwards, don't accelerate, accelerate backwards. The goal of the agent is to learn how to climb a mountain. The engine of the car is not strong enough to power directly to the top, so speed has to be built up by oscillating in the valley.</p> <p>I have used a linear function approximator, written by myself. I am attaching my code here for reference:</p> <pre><code>class LinearFunctionApproximator:
    '''
    A function approximator must have the following methods:
    - constructor with num_states and num_actions
    - get_q_value
    - get_action
    - fit
    '''
    def __init__(self, num_states, num_actions):
        self.weights = np.zeros((num_states, num_actions))
        self.num_states = num_states
        self.num_actions = num_actions

    def get_q_value(self, state, action):
        return np.dot(np.transpose(self.weights), np.asarray(state))[action]

    def get_action(self, state, eps):
        return randint(0, self.num_actions - 1) if uniform(0, 1) &lt; eps else np.argmax(np.dot(np.transpose(self.weights), np.asarray(state)))

    def fit(self, transitions, eps, gamma, learning_rate):
        '''
        Every transition in transitions should be of type
        (state, action, reward, next_state)
        '''
        gradient = np.zeros_like(self.weights)
        for (state, action, reward, next_state) in transitions:
            next_action = self.get_action(next_state, eps)
            g_target = reward + gamma * self.get_q_value(next_state, next_action)
            g_predicted = self.get_q_value(state, action)
            gradient[:, action] += learning_rate * (g_target - g_predicted) * np.asarray(state)
        gradient /= len(transitions)
        self.weights += gradient
</code></pre> <p>I have tested the gradient descent, and it works as expected. After every epoch, the mean squared error between the current estimate of Q and the TD-target reduces as expected.</p> <p>Here is my code for SARSA:</p> <pre><code>def SARSA(env, function_approximator, num_episodes=1000, eps=0.1,
          gamma=0.95, learning_rate=0.1, logging=False):
    for episode in range(num_episodes):
        transitions = []
        state = env.reset()
        done = False
        while not done:
            action = function_approximator.get_action(state, eps)
            next_state, reward, done, info = env.step(action)
            transitions.append((state, action, reward, next_state))
            state = next_state
        for i in range(10):
            function_approximator.fit(transitions[::-1], eps, gamma, learning_rate)
        if logging:
            print('Episode', episode, ':', end=' ')
            run_episode(env, function_approximator, eps, render=False, logging=True)
</code></pre> <p>Basically, for every episode, I fit the linear function approximator to the current TD-target. I have also tried running fit just once per episode, but that also does not yield any winning episode. Fitting 10 times ensures that I am actually making some progress towards the TD-target, while also not overfitting.</p> <p>However, after running over 5000 episodes, I do not get a single episode where the reward is greater than -200. Eventually, the algorithm chooses one action, and somehow the Q-values of the other actions are always lower than that action's.</p> <pre><code># Now, let's see how the trained model does
env = gym.make('MountainCar-v0')
num_states = 2
num_actions = env.action_space.n
function_approximator = LinearFunctionApproximator(num_states, num_actions)
num_episodes = 2000
eps = 0
SARSA(env, function_approximator, num_episodes=num_episodes, eps=eps, logging=True)
</code></pre> <p>I want to be clearer about this. Say action 2 is the action which always gets selected after, say, 1000 episodes. Actions 0 and 1 have somehow, for all states, had their Q-values reduced to a level which is never reached by action 2. So for a particular state, actions 0 and 1 may have Q-values of -69 and -69.2, while the Q-value of action 2 will never drop below -65, even after running the 5000 episodes.</p>
<p>While researching why this problem might occur, I delved into some statistics of the environment. Interestingly, after a small number of episodes (~20), the agent always chooses to take only one action (this has been mentioned in the question too). Also, the Q values of the state-action pairs do not change much after about 20 episodes. The same is the case for the policy, as might be expected.</p> <p>The problem is that, although all the individual updates are being done correctly, the following scenario occurs.</p> <p>The update equation used is the following:</p> <pre><code>Q(s1, a1) &lt;- r1 + gamma * Q(s2, a2)
</code></pre> <p>Now, any update to the function approximator means that the Q value changes not just for the updated (state, action) pair, but for all (state, action) pairs. How much it changes for any specific (state, action) pair is another issue. Since the function approximator has been altered, the next time we apply the previous equation to update some other state, we use the following equation:</p> <pre><code>Q(s3, a3) &lt;- r3 + gamma * Q(s4, a4)
</code></pre> <p>But since Q has itself been changed, the target value for record 3 changes. This is not desirable. Over time, the changes cancel each other out and the Q values remain roughly the same.</p> <p>By using something known as a target function approximator (target network), we can maintain an older version of the function approximator, which is used to get the Q-value of the next state-action pair at the time of the update. This avoids the problem and can be used to solve the environment.</p>
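The target-network idea can be sketched as follows (illustrative code, not the asker's; for brevity it uses a max over actions rather than SARSA's sampled next action): the TD target is computed from a frozen copy of the weights, which is refreshed only periodically, so the target does not shift on every update.

```python
# Minimal linear Q approximator with a periodically-synced target copy.
import numpy as np

class LinearQ:
    def __init__(self, num_states, num_actions):
        self.weights = np.zeros((num_states, num_actions))
        self.target_weights = self.weights.copy()   # frozen copy

    def q(self, state, weights):
        return weights.T @ np.asarray(state)        # Q-values for all actions

    def update(self, state, action, reward, next_state, gamma=0.95, lr=0.1):
        # The TD target uses the *frozen* weights, so it stays fixed
        # while the online weights are being updated.
        target = reward + gamma * np.max(self.q(next_state, self.target_weights))
        td_error = target - self.q(state, self.weights)[action]
        self.weights[:, action] += lr * td_error * np.asarray(state)

    def sync_target(self):
        self.target_weights = self.weights.copy()   # periodic refresh

q = LinearQ(2, 3)
for step in range(100):
    q.update([0.5, 0.1], 1, -1.0, [0.4, 0.2])  # one toy transition, repeated
    if step % 20 == 0:
        q.sync_target()
```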
322
gradient descent
Neural networks when gradient descent is not possible
https://ai.stackexchange.com/questions/12938/neural-networks-when-gradient-descent-is-not-possible
<p>I am looking for an example in which it is simply impossible to use some sort of gradient descent to train a neural network. Does such an example exist? </p> <p>I have read quite a few papers about gradient-free optimization tools, but they always use them on a network for which you can also use gradient descent. I want a situation in which the only option to train the network is by, for example, a genetic algorithm. </p>
323
gradient descent
At which point, does the momentum based GD helps really in this figure?
https://ai.stackexchange.com/questions/35648/at-which-point-does-the-momentum-based-gd-helps-really-in-this-figure
<p>Classical gradient descent algorithms sometimes overshoot and escape minima as they depend on the gradient only. You can see such a problem during the update from point 6.</p> <p><a href="https://i.sstatic.net/XJu8M.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XJu8M.jpg" alt="enter image description here" /></a></p> <p>In the classical GD algorithm, the update equation is</p> <p><span class="math-container">$$\theta_{t+1} = \theta_{t} - \eta \times \triangledown_{\theta} \ell$$</span></p> <p>In the momentum-based GD algorithm, the update equations are</p> <p><span class="math-container">$$v_0 = 0$$</span> <span class="math-container">$$v_{t+1} = \alpha v_t + \eta \times \triangledown_{\theta} \ell $$</span> <span class="math-container">$$\theta_{t+1} = \theta_{t} - v_{t+1}$$</span></p> <p>I am writing all the equations concisely by removing the obvious variables used, such as the inputs to the loss functions. In the lecture I'm listening to, the narrator says that <strong>momentum-based GD helps during the update at point 6 and the update will not lead to point 7 as shown in the figure and goes towards the minima.</strong></p> <p>But to me, it seems that even momentum-based GD will go to point 7, and it is the update at point 7 that benefits from momentum-based GD, as it does not lead to point 8 and goes towards the minima.</p> <p>Am I correct? If not, at which point does the momentum-based GD actually help?</p>
<p>What your professor most probably means is that momentum adds your previous gradients to the current one (as in a moving average). The velocity built up until point 7 (moving to the right) will be added to the gradient from 7 to 8 that points to the left, causing them to partially cancel each other out (which may make the update small enough for the loss to converge instead of exploding). So it helps at the point where the gradient suddenly changes direction, as the momentum will partially cancel it out.</p>
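A tiny illustrative sketch of this cancellation (a 1D quadratic, not the figure's loss surface): with a step size large enough that plain GD overshoots and the gradient flips sign every update, the accumulated velocity partially cancels the flipped gradient.

```python
def grad(x):
    return 2 * x        # gradient of f(x) = x**2

eta = 0.9               # deliberately large step: plain GD overshoots the minimum
alpha = 0.5             # momentum coefficient

x = 1.0                 # plain gradient descent
plain_path = [x]
for _ in range(20):
    x = x - eta * grad(x)
    plain_path.append(x)

x, v = 1.0, 0.0         # momentum: v accumulates past gradients
mom_path = [x]
for _ in range(20):
    v = alpha * v + eta * grad(x)
    x = x - v           # when the gradient flips sign after an overshoot,
    mom_path.append(x)  # the old velocity partially cancels it
```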
324
gradient descent
Loss keep increasing when using full-batch gradient descent
https://ai.stackexchange.com/questions/48379/loss-keep-increasing-when-using-full-batch-gradient-descent
<p>I am learning the linear regression model based on this <a href="https://github.com/d2l-ai/d2l-zh/blob/master/chapter_linear-networks/linear-regression-scratch_origin.md?plain=1#L380" rel="nofollow noreferrer">tutorial</a>. Following the example provided in the tutorial, it works fine with mini-batch stochastic gradient descent.</p> <pre><code>for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        l = loss(net(X, w, b), y)
        l.sum().backward()
        sgd([w, b], lr, batch_size)
    with torch.no_grad():
        train_l = loss(net(features, w, b), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')
</code></pre> <p>However, when I tried full-batch gradient descent, I noticed that the loss kept increasing.</p> <pre><code>for epoch in range(num_epochs):
    l = loss(net(features, w, b), labels)
    l.sum().backward()
    sgd([w, b], lr, 1)  # No batch used here
    with torch.no_grad():
        train_l = loss(net(features, w, b), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')
</code></pre> <p>What's wrong?</p>
<p>After posting the question to ChatGPT, it told me that I have to scale the gradient by the dataset size:</p> <p>By modifying</p> <pre><code>sgd([w, b], lr, 1) # No batch used here </code></pre> <p>to</p> <pre><code>sgd([w, b], lr, len(features)) # Full batch: the "batch size" is the dataset size </code></pre> <p>it works now.</p>
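The underlying issue is that the tutorial's `sgd` helper divides the summed gradient by its batch-size argument; with the full batch, the divisor has to be the dataset size, otherwise the effective step is `len(features)` times too large and the loss blows up. A self-contained NumPy sketch of the same effect (synthetic data, not the tutorial's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = X @ np.array([2.0, -3.4]) + 4.2 + rng.normal(scale=0.01, size=n)

def full_batch_gd(divisor, lr=0.03, epochs=300):
    w, b = np.zeros(2), 0.0
    with np.errstate(over='ignore', invalid='ignore'):
        for _ in range(epochs):
            err = X @ w + b - y
            # summed gradient over the whole dataset, then divided,
            # mirroring how the tutorial's sgd() divides by its batch-size argument
            w -= lr * (X.T @ err) / divisor
            b -= lr * err.sum() / divisor
        return float(np.mean((X @ w + b - y) ** 2))

loss_wrong = full_batch_gd(divisor=1)   # step is effectively n times too large
loss_right = full_batch_gd(divisor=n)   # gradient averaged over the dataset
```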
325
gradient descent
Why doesn&#39;t deep learning use modular arithmetic like cryptography, even though both deal with non-linear functions?
https://ai.stackexchange.com/questions/48193/why-doesnt-deep-learning-use-modular-arithmetic-like-cryptography-even-though
<p>So, deep learning models are great at learning complex, non-linear patterns and seem to handle noise just fine. But under the hood, they rely on IEEE754 floating-point numbers, which can lose precision (e.g., rounding errors). Meanwhile, cryptography, like elliptic curve crypto, also deals with theoretically smooth, non-linear curves. But instead of floating-point, crypto uses modular arithmetic (finite fields) to keep everything exact and robust.</p> <p>Here’s the thing: crypto folks turn continuous math into discrete, finite fields to avoid floating-point messiness. But deep learning avoids modular arithmetic because it hates abrupt jumps—like how <code>uint8</code> wraps <code>255 → 0</code>. That discontinuity wrecks gradients during training.</p> <p>What if we reimagined <code>uint8</code> as a <strong>cyclic space</strong>, like a gear with 256 teeth? For example, the <strong>distance between 254 and 1 isn’t 253 (linear subtraction) but just 3 steps</strong> if you &quot;rotate&quot; through <code>255 → 0 → 1</code>. It’s like choosing to rotate clockwise instead of counterclockwise. Could gradients work this way? Instead of &quot;add/subtract,&quot; maybe gradients tell us how much to &quot;rotate&quot; the weights in this cyclic space. So, the direction of the gradient is not about negative or positive. Instead, it's counterclockwise or clockwise.</p> <p>I’m not talking about security here, or claiming that deep learning is supposed to be secure. Just the engineering. Why stick to floating-point when modular arithmetic avoids precision loss? Couldn’t cyclic gradients sidestep exploding/vanishing issues? Or is there a deeper reason deep learning avoids this?</p> <p>Example: If a gradient says <strong>rotate 3 steps clockwise</strong> instead of <strong>subtract 3</strong>, wouldn’t that handle modular wraps more gracefully? 
Or does this break how neural networks update weights?</p> <p>So, the actual numbers we already know would have to be encoded into the cyclic space, and the literal value in the cyclic space does not really represent an absolute value.</p> <p>For example, a continuous feature like temperature in Celsius would be encoded into the <code>uint8</code> space, and we would treat this as a hidden representation of the temperature in Celsius.</p>
<p>Cryptography benefits from the exactness of finite fields for reasons of security and exactness in discrete math, while DL depends on the continuous adjustments that floating-point arithmetic facilitates. For features that are inherently continuous like temperature, encoding them into a cyclic space would only make sense if the underlying phenomenon were periodic such as angles or time-of-day. Most continuous quantities do not naturally have a cyclic structure, so forcing them into a cyclic space could distort their true relationships. Instead, these features benefit from a linear continuous representation that <em>preserves</em> their order and differences. For example, a temperature of 254°C and 1°C might be &quot;close&quot; by 3 steps in your forced cyclic angular space, but such closeness is unrelated <em>in reality</em>. A neural network would struggle to learn meaningful relationships from this encoding. In addition, cyclic spaces create symmetric repeating loss landscapes, thus optimization could cycle endlessly or settle in suboptimal regions.</p> <p>Neural networks rely on linear combinations such as matrix operations and smooth nonlinearities. Cyclic arithmetic would make these operations chaotic. Modern GPUs and other hardware accelerators are optimized for floating-point operations which provide high throughput for floating-point arithmetic, whereas modular arithmetic isn’t similarly optimized, making it impractical for large-scale deep learning. Efficient low-precision training such as 8-bit integers uses quantization-aware training or post-training without cyclic wraps to preserve gradient flow. Though quantization represents weights/activations with as few as 8 bits, it still keeps gradients in floating-point for stability during training.</p> <p>Finally, exploding/vanishing gradients arise from repeated multiplicative operations due to the learning architecture, not floating-point precision. 
Cryptography prioritizes exactness and discreteness to enforce security, while DL prioritizes approximation and continuity to generalize from already inherently noisy and non-precise data.</p>
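For the genuinely periodic case mentioned above (angles, time-of-day), a common differentiable alternative to modular arithmetic is to embed the value on the unit circle with sine/cosine. A small sketch:

```python
import numpy as np

def cyclic_encode(value, period):
    """Embed a periodic scalar on the unit circle (smooth, no wraparound jump)."""
    angle = 2 * np.pi * value / period
    return np.array([np.sin(angle), np.cos(angle)])

# 23:00 and 01:00 are 2 hours apart on the clock, but 22 apart as raw numbers
h23 = cyclic_encode(23, 24)
h1 = cyclic_encode(1, 24)
h11 = cyclic_encode(11, 24)

d_close = np.linalg.norm(h23 - h1)   # small: neighbours on the circle
d_far = np.linalg.norm(h23 - h11)    # large: opposite sides of the circle
```

Unlike a `uint8` wrap, this encoding is continuous everywhere, so gradients flow through it without any jump at the boundary.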
326
gradient descent
Does it make sense for a computational graph to have entirely non-differentiable functions?
https://ai.stackexchange.com/questions/48014/does-it-make-sense-for-a-computational-graph-to-have-entirely-non-differentiable
<p>Does it make sense for a <a href="https://ai.stackexchange.com/q/47986/48038">computational graph</a> to have entirely non-differentiable functions?</p> <p>For example, <a href="https://pytorch.org/docs/stable/notes/autograd.html#gradients-for-non-differentiable-functions" rel="nofollow noreferrer">PyTorch can handle non-differentiable functions</a> and <a href="https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html" rel="nofollow noreferrer">mark outputs as non-differentiable</a>, but I'm wondering not about functions whose derivatives are undefined at some points, but about graphs of entirely non-differentiable functions.</p> <p><sup>related question: &quot;<a href="https://ai.stackexchange.com/q/43152/48038">How is back propagation applied in case the activation function is not differentiable?</a>&quot;</sup></p>
<p>Entirely non-differentiable functions make sense since one might intentionally include them in parts of a model where gradient updates are either not required or not meaningful, such as the <span class="math-container">$\text{argmax}$</span> operator or even a simple routing or logging function of incoming data. For an entirely non-differentiable function node in a computational graph, its output is marked accordingly per your own references, and gradient-based optimization simply bypasses any branch through that node.</p> <blockquote> <p><code>FunctionCtx.mark_non_differentiable(*args)</code> <br /> Mark outputs as non-differentiable. <br /> This should be called at most once, in either the <code>setup_context()</code> or <code>forward()</code> methods, and all arguments should be tensor outputs.</p> </blockquote> <blockquote> <p>If the function is not a deterministic mapping (i.e. it is not a mathematical function), it will be marked as non-differentiable. This will make it error out in the backward if used on tensors that require grad outside of a <code>no_grad</code> environment.</p> </blockquote> <p>Therefore, in practice, this means that the affected outputs are treated as constants relative to the parameters upstream, and those parameters will not be updated based on errors flowing back through the non-differentiable function node. Downstream parameters might still get updated if they have other differentiable paths influencing the loss.</p>
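A toy illustration of "treated as constants" (plain NumPy with finite differences, not PyTorch autograd itself): a loss that mixes a differentiable term with an argmax-based branch. Away from the decision boundary, the argmax output is locally constant, so the numerical gradient reflects only the differentiable term.

```python
import numpy as np

def loss(w):
    scores = np.array([w, 2.0, 3.0])
    hard_choice = np.argmax(scores)     # non-differentiable routing decision
    return w ** 2 + float(hard_choice)  # differentiable term + constant-valued branch

w0 = 0.5
eps = 1e-6
num_grad = (loss(w0 + eps) - loss(w0 - eps)) / (2 * eps)
# near w0 = 0.5 the argmax output is locally constant, so the numerical
# gradient is just d(w^2)/dw = 2 * w0
```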
327
gradient descent
What is the gradient of a non-linear SVM with respect to the input?
https://ai.stackexchange.com/questions/20169/what-is-the-gradient-of-a-non-linear-svm-with-respect-to-the-input
<p>The objective function of an SVM is the following:</p> <p><span class="math-container">$$J(\mathbf{w}, b)=C \sum_{i=1}^{m} \max \left(0,1-y^{(i)}\left(\mathbf{w}^{t} \cdot \mathbf{x}^{(i)}+b\right)\right)+\frac{1}{2} \mathbf{w}^{t} \cdot \mathbf{w}$$</span> where</p> <ul> <li><span class="math-container">$\mathbf{w}$</span> is the model's feature weights and <span class="math-container">$b$</span> is its bias parameter</li> <li><span class="math-container">$\mathbf{x}^{(i)}$</span> is the <span class="math-container">$i^\text{th}$</span> training instance's feature vector</li> <li><span class="math-container">$y^{(i)}$</span> is the target class (<span class="math-container">$-1$</span> or <span class="math-container">$1$</span>) for the <span class="math-container">$i^\text{th}$</span> instance</li> <li><span class="math-container">$m$</span> is the number of training instances</li> <li><span class="math-container">$C$</span> is the regularisation hyper-parameter</li> </ul> <p>And if I was to use a kernel, this would become:</p> <p><span class="math-container">$$J(\mathbf{w}, b)=C \sum_{i=1}^{m} \max \left(0,1-y^{(i)}\left(\mathbf{u}^{t} \cdot \mathbf{K}^{(i)}+b\right)\right)+\frac{1}{2} \mathbf{u}^{t} \cdot \mathbf{K} \cdot \mathbf{u}$$</span></p> <p>where the kernel can be the Gaussian kernel:</p> <p><span class="math-container">$$K(\mathbf{u}, \mathbf{v})=e^{-\gamma\|\mathbf{u}-\mathbf{v}\|^{2}}$$</span></p> <p>How would I go about finding its gradient with respect to the input?</p> <p>I need to know this as to then apply this to a larger problem of a CNN with its last layer being this SVM, so I can then find the gradient of this output wrt the input of the CNN.</p>
328
gradient descent
Can a neural network learn to avoid wrong decisions using backpropagation?
https://ai.stackexchange.com/questions/3854/can-a-neural-network-learn-to-avoid-wrong-decisions-using-backpropagation
<p>I studied the articles on <a href="http://neuralnetworksanddeeplearning.com/" rel="noreferrer">Neural Networks and Deep Learning</a> from Michael Nielsen and developed a simple neural network based on his examples. I understand how backpropagation works and I already taught my neural network to not only play TicTacToe but also improve his own play by learning from his own successes using backpropagation.</p> <p>Going forward with my experiments, I am facing the problem, that I won't always be able to show the network good moves to use for learning (maybe because I simply don't know what is correct in a certain situation), but I might be required to show it bad moves to avoid (because some of the bad moves are obvious). Teaching the network what to do using backpropagation is easy, but I haven't found a way to teach it what to avoid using similar techniques.</p> <p>Is it possible to teach simple neural networks using negative examples like this or do I need other techniques? My gut feeling says, that it might be possible to "invert" gradient descent into gradient ascent to solve this problem. Or is it more complicated than this?</p>
<p>What you are describing is conceptually close to adversarial training. You should read more on adversarial examples and generative adversarial networks for more information.</p> <p>The idea is that there is a discriminator network, whose job is to correctly discriminate between positive and negative examples. We also have a generative network that learns to produce "adversarial examples" that "confuse" the discriminator network. By training these two networks side by side, both networks get better at their tasks. But it's usually the generator network that people are more interested in.</p> <p>Intuitively, the naive implementation of the method you've described (gradient ascent on incorrect examples from a network in a clean/randomly-initialized state) shouldn't work. This is because negative examples don't form a "natural class" (all triangles have 3 edges, all things that are not triangles however....)</p>
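A common practical alternative to literally inverting gradient descent is to keep descending, but on a loss that penalises the probability assigned to a known-bad move, e.g. minimising $-\log(1 - p_{bad})$. A small sketch with hypothetical random logits standing in for a network's output:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
logits = rng.normal(size=3)       # stand-in for the network's output logits
bad_move = 0
lr = 1.0

p_before = softmax(logits)[bad_move]
for _ in range(100):
    p = softmax(logits)
    p_bad = p[bad_move]
    # gradient of -log(1 - p_bad) with respect to the logits:
    # p_bad * (one_hot(bad) - p) / (1 - p_bad)
    g = p_bad * (np.eye(3)[bad_move] - p) / (1.0 - p_bad)
    logits = logits - lr * g      # still gradient *descent*, on a different loss
p_after = softmax(logits)[bad_move]
```

This pushes probability mass away from the bad move without saying anything about which of the remaining moves is good, which matches the "I only know what to avoid" situation.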
329
gradient descent
How can we calculate the gradient of the Boltzmann policy over reward function?
https://ai.stackexchange.com/questions/7147/how-can-we-calculate-the-gradient-of-the-boltzmann-policy-over-reward-function
<p>I'm struggling with an inverse reinforcement learning problem which seems to appear quite often in the literature, yet I can't find any resources explaining it.</p> <p>The problem is that of calculating the gradient of a Boltzmann policy distribution over the reward weights <span class="math-container">$\theta$</span>:</p> <p><span class="math-container">$$\displaystyle\pi(s,a)=\frac{\exp(\beta\cdot Q(s,a|\theta))}{\sum_{a'}\exp(\beta\cdot Q(s,a'|\theta))}$$</span></p> <p>The <span class="math-container">$\theta$</span> are a linear parametrization of the reward function, such that</p> <p><span class="math-container">$$\displaystyle R = \theta^T\phi(s,a)$$</span></p> <p>where <span class="math-container">$\phi(s,a)$</span> are features of the state space. In the simplest of cases, one could take <span class="math-container">$\phi_i(s,a) = \delta(s,i)$</span>, that is, the feature space is just an indicator function of the state space.</p> <p>A lot of algorithms simply state to calculate the gradient, but that doesn't seem that trivial, and I'm not managing to infer it from the <a href="https://github.com/jmacglashan/burlap/blob/master/src/main/java/burlap/behavior/singleagent/learnfromdemo/mlirl/support/BoltzmannPolicyGradient.java" rel="nofollow noreferrer">bits of code I found online</a>.</p> <p>Some of the papers using this kind of methods are <a href="http://www.icml-2011.org/papers/478_icmlpaper.pdf" rel="nofollow noreferrer">Apprenticeship Learning About Multiple Intentions</a> (2011), by Monica Babes-Vroman et al., and <a href="https://papers.nips.cc/paper/4479-map-inference-for-bayesian-inverse-reinforcement-learning" rel="nofollow noreferrer">MAP Inference for Bayesian Inverse Reinforcement Learning</a> (2011), by Jaedeug Choi et al.</p>
<p>The main point here is that you can write <span class="math-container">$Q(s, a|\theta) = R = \theta^\top \phi(s, a)$</span>. For more details on this, you can read up on Policy Gradients (Chapter 13) from the 2nd edition of the Sutton and Barto book (in fact the expression you're looking for is equation 13.9)</p> <p>For simplicity, I'm setting <span class="math-container">$\beta=1$</span>, but you can always put it in once you get the idea. Therefore, the expression for <span class="math-container">$\pi(s,a,\theta)$</span> (I'm including <span class="math-container">$\theta$</span> here to make the dependence of <span class="math-container">$\pi$</span> explicit) is now: <span class="math-container">$$ \pi(s,a,\theta) = \frac{exp(\theta^\top \phi(s, a))}{\sum_{a'} exp(\theta^\top \phi(s, a'))} = \frac{A}{B} $$</span></p> <p>Also, it is standard to compute <span class="math-container">$\nabla_\theta\log \pi(s,a,\theta)$</span> instead of <span class="math-container">$\nabla_\theta \pi(s,a,\theta)$</span> (although you totally can), so I'll go ahead and do that. To differentiate <span class="math-container">$\log \pi(s,a,\theta)$</span> with respect to the parameters, <span class="math-container">$\theta$</span>, let's do some calculus: <span class="math-container">$$ \nabla_\theta \log \pi(s,a,\theta) = \nabla_\theta \log \frac{A}{B} = \nabla_\theta \log A - \nabla_\theta \log B \\ = \nabla_\theta \left[ \theta^\top \phi(s,a) \right] - \nabla_\theta \left[ \log \sum_{a'} exp(\theta^\top \phi(s, a')) \right] \\ = \phi(s,a) - \frac{1}{\sum_{a'} exp(\theta^\top \phi(s, a'))} \sum_{b} exp(\theta^\top \phi(s, b)) \phi(s,b) \\ = \phi(s,a) - \sum_{b} \frac{exp(\theta^\top \phi(s, b))}{\sum_{a'} exp(\theta^\top \phi(s, a'))} \phi(s,b) \\ = \phi(s,a) - \sum_{b} \pi(s,b,\theta) \phi(s,b) $$</span></p> <p>Therefore, you can write <span class="math-container">$\nabla_\theta\log \pi(s,a,\theta) = \phi(s,a) - \sum_{b} \pi(s,b,\theta) \phi(s,b)$</span></p>
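The final expression can be checked numerically. A small sketch with random features $\phi(s,a)$ for one fixed state (and $\beta = 1$ as above), comparing the closed form against finite differences of $\log \pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, d = 4, 5
phi = rng.normal(size=(n_actions, d))   # phi(s, a) for one fixed state s
theta = rng.normal(size=d)

def pi(theta):
    z = phi @ theta                     # Q(s, a | theta) = theta^T phi(s, a)
    e = np.exp(z - z.max())
    return e / e.sum()

a = 2
analytic = phi[a] - pi(theta) @ phi     # phi(s,a) - sum_b pi(s,b,theta) phi(s,b)

# central finite differences of log pi(s, a, theta)
eps = 1e-6
numeric = np.zeros(d)
for i in range(d):
    t_plus, t_minus = theta.copy(), theta.copy()
    t_plus[i] += eps
    t_minus[i] -= eps
    numeric[i] = (np.log(pi(t_plus)[a]) - np.log(pi(t_minus)[a])) / (2 * eps)
```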
330
gradient descent
Are on-line backpropagation iterations perpendicular to the constraint?
https://ai.stackexchange.com/questions/11407/are-on-line-backpropagation-iterations-perpendicular-to-the-constraint
<p><a href="https://page.mi.fu-berlin.de/rojas/neural/chapter/K8.pdf#page=9" rel="nofollow noreferrer">Raul Rojas' Neural Networks A Systematic Introduction, section 8.1.2</a> relates off-line backpropagation and on-line backpropagation with Gauss-Jacobi and Gauss-Seidel methods for finding the intersection of two lines.</p> <p>What I can't understand is how the iterations of on-line backpropagation are perpendicular to the (current) constraint. More specifically, how is <span class="math-container">$\frac12(x_1w_1 + x_2w_2 - y)^2$</span>'s gradient, <span class="math-container">$(x_1,x_2)$</span>, normal to the constraint <span class="math-container">$x_1w_1 + x_2w_2 = y$</span>?</p>
<p>Answer by <a href="https://math.stackexchange.com/users/248286/theo-bendit">Theo Bendit</a> <a href="https://math.stackexchange.com/questions/3160982/how-is-x-1-x-2-normal-to-x-1w-1-x-2w-2-y/3160988#3160988">at Maths Stack Exchange</a></p> <blockquote> <p>If you choose two points <span class="math-container">$(w_1, w_2), (v_1, v_2)$</span> along this line, then <span class="math-container">$$(x_1, x_2) \cdot ((w_1, w_2) - (v_1, v_2)) = x_1 w_1 + x_2 w_2 - (x_1 v_1 + x_2 v_2) = y - y = 0.$$</span> That is, the direction <span class="math-container">$(x_1, x_2)$</span> is perpendicular to any vector lying along the line, i.e. <span class="math-container">$(x_1, x_2)$</span> is normal to the line.</p> </blockquote>
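A quick numerical check of the argument, with a hypothetical example $(x_1, x_2) = (3, -2)$ and $y = 4$:

```python
import numpy as np

x = np.array([3.0, -2.0])   # the training example (x1, x2)
y = 4.0

# two different weight vectors w, v that both lie on the line x1*w1 + x2*w2 = y
w = np.array([0.0, -2.0])   # 3*0 + (-2)*(-2) = 4
v = np.array([2.0, 1.0])    # 3*2 + (-2)*1    = 4

dot = x @ (w - v)           # (x1, x2) is orthogonal to any vector along the line

# the gradient of 0.5*(x.w - y)**2 at an arbitrary point is a multiple of x
w_off = np.array([1.0, 1.0])
grad = (x @ w_off - y) * x
cross = grad[0] * x[1] - grad[1] * x[0]   # zero iff grad is parallel to x
```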
331
gradient descent
In NN, as iterations of Gradient descent increases, the accuracy of Test/CV set decreases. how can i resolve this?
https://ai.stackexchange.com/questions/13109/in-nn-as-iterations-of-gradient-descent-increases-the-accuracy-of-test-cv-set
<p>As mentioned in the title, I'm using 300 dataset examples with 500 features as input.</p> <p>While training on the dataset, I found something peculiar. Please look at the data shown below.</p> <blockquote> <p><strong>Iteration 5000 | Cost: 2.084241e-01</strong></p> <p>Training Set Accuracy: 100.000000</p> <p>CV Set Accuracy: 85.000000</p> <p>Test Set Accuracy: 97.500000</p> <p><strong>Iteration 3000 | Cost: 2.084241e-01</strong></p> <p>Training Set Accuracy: 98.958333</p> <p>CV Set Accuracy: 85.000000</p> <p>Test Set Accuracy: 97.500000</p> <p><strong>Iteration 1000 | Cost: 4.017322e-01</strong></p> <p>Training Set Accuracy: 96.875000</p> <p>CV Set Accuracy: 85.000000</p> <p>Test Set Accuracy: 97.500000</p> <p><strong>Iteration 500 | Cost: 5.515852e-01</strong></p> <p>Training Set Accuracy: 95.486111</p> <p>CV Set Accuracy: 90.000000</p> <p>Test Set Accuracy: 97.500000</p> <p><strong>Iteration 100 | Cost: 8.413299e-01</strong></p> <p>Training Set Accuracy: 90.625000</p> <p>CV Set Accuracy: 95.000000</p> <p>Test Set Accuracy: 97.500000</p> <p><strong>Iteration 50 | Cost: 8.483802e-01</strong></p> <p>Training Set Accuracy: 90.277778</p> <p>CV Set Accuracy: 95.000000</p> <p>Test Set Accuracy: 97.500000</p> </blockquote> <p>The trend is that as the iterations increase (and the cost decreases), the training set accuracy increases as expected, but the CV set/test set accuracy decreases. My initial thought is that this has to do with a precision/bias issue, but I really can't buy it. </p> <p>Does anyone know what this entails? Or any reference? </p>
<p>Training scores improving (loss decreasing and accuracy increasing) whilst the opposite happens with cross validation and test data is a sign of <a href="https://en.wikipedia.org/wiki/Overfitting" rel="nofollow noreferrer">overfitting to the training data</a>. Your neural network is getting worse at generalising and no amount of further training will improve it - in fact the situation will get worse the more you train.</p> <p>This is the main reason you have CV data sets, to show you when this happens before you try to use your model against the test set or real world data. So it is not "peculiar" at all, but the CV set doing its job for you, allowing you notice something has gone wrong.</p> <p>To improve on the situation, you need to use some form of <a href="https://towardsdatascience.com/regularization-in-machine-learning-76441ddcf99a" rel="nofollow noreferrer">regularisation</a>. The simplest approach here would be to take your model from around 100 epochs (because it has best CV score that I can see). This is early stopping, and is a simple valid regularisation approach.</p> <p>Alternatives for neural networks include L2 weight regularisation (also called weight decay) and dropout.</p> <p>In addition to this, your question states that you only have 300 examples, and more features than examples. This is really tricky to generalise (and hard to tell if you have - for instance it looks like you only have 20 CV examples and 40 test examples, these numbers are very low and prone to giving you inaccurate estimates simply due to chance of what entries are in the data sets). I recommend you look into <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation" rel="nofollow noreferrer">K-fold cross-validation</a> in order to get more accurate scores, and help you choose the best model for generalisation.</p>
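A minimal sketch of the early-stopping rule suggested above: keep the model from the epoch with the best CV score, and stop once it has not improved for a few epochs. The numbers below are shaped like the asker's runs, not taken from them.

```python
def early_stopping_epoch(val_scores, patience=3):
    """Return the epoch to keep: the best validation score so far, stopping
    once it hasn't improved for `patience` epochs (higher score = better)."""
    best_epoch, best_score, waited = 0, float('-inf'), 0
    for epoch, score in enumerate(val_scores):
        if score > best_score:
            best_epoch, best_score, waited = epoch, score, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

# CV accuracies per checkpoint: rise, then overfitting sets in and they fall
cv_acc = [90.0, 95.0, 95.0, 90.0, 85.0, 85.0]
keep = early_stopping_epoch(cv_acc, patience=3)   # checkpoint 1 is the one to keep
```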
332
gradient descent
How to calculate multiobjective optimization cost for ordinary problems?
https://ai.stackexchange.com/questions/15685/how-to-calculate-multiobjective-optimization-cost-for-ordinary-problems
<p><strong>What I did:</strong><br> Created a population of 2D legged robots in a simulated environment. Found the best motor rotation values to make the robots move rightward, using an objective function with Differential Evolution (could use PSO or GA too), that returned the distance moved rightward. Gradient descent used for improving fitness. </p> <p><strong>What I want to do:</strong><br> Add more objectives. To find the best motor rotation, with the least motion possible, with the least jittery motion, without toppling the body upside down and making the least collision impact on the floor. </p> <p><strong>What I found:</strong> </p> <ul> <li><p>Spent almost two weeks searching for solutions, reading research papers, going through tutorials on Pareto optimality, installing libraries and trying the example programs.</p></li> <li><p>Using <a href="https://en.wikipedia.org/wiki/Pairing_function" rel="nofollow noreferrer">pairing functions</a> to create a cost function wasn't good enough.</p></li> <li><p>There are many <a href="https://www.google.com/search?client=firefox-b-d&amp;q=multi%20objective%20evolutionary%20algorithms" rel="nofollow noreferrer">multi-objective PSO, DE, GA etc.</a>, but they seem to be built for solving some other kind of problem.</p></li> </ul> <p><strong>Where I need help:</strong> </p> <ul> <li><p>Existing multi objective algorithms seem to use some pre-existing minimization and maximization functions (Fonseca, Kursawe, OneMax, DTLZ1, ZDT1, etc.) and it's confusing to understand how I can use my own maximization and minimization functions with the libraries. (minimize(motorRotation), maximize(distance), minimize(collisionImpact), constant(bodyAngle)).</p></li> <li><p>How do I know which is the best Pareto front to choose in a multi-dimensional space? There seem to be ways of choosing the top-right Pareto front or the top-left or the bottom-right or bottom-left. 
In multi-dimensional space, it'd be even more varied.</p></li> <li><p>Libraries like Platypus, PyGMO, Pymoo etc. just define the problem using <code>problem = DTLZ2()</code>, instantiate an algorithm <code>algorithm = NSGAII(problem)</code> and run it <code>algorithm.run(10000)</code>, where I assume 10000 is the number of generations. But since I'm using a legged robot, I can't simply use <code>run(10000)</code>. I need to assign motor values to the robots, wait for the simulator to make the robots in the population move and then calculate the objective function cost. How can I achieve this? </p></li> <li><p>Once the pareto optimal values are found, how is it used to create a cost value that helps me determine the fittest robot in the population?</p></li> </ul>
<p>I eventually used the keep-efficient function from <a href="https://stackoverflow.com/a/32793059/453673">this answer</a> to calculate the Pareto front and used the k-means function to calculate the centroid of the front. This gave me the approximate knee-point of the front, which is usually the optimal solution. One of the calculations was to maximise the distance moved in x direction (dx) vs. minimising the energy consumed (e), so since the x axis needed positive maximization and the y axis needed minimization, I inverted the y axis, since <code>min(f(y)) = - max(-f(y))</code>. This helped get the pareto front toward the top right side of the graph and both the x and y axes were maximization objectives. The optimal point calculated was the robot that had the best fitness.</p>
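The keep-efficient idea can be sketched minimally as below (a naive O(n²) Pareto filter, with a crude centroid standing in for the k-means knee-point step; the objective values are invented). Minimisation objectives are negated first so that every axis is maximised, as described.

```python
import numpy as np

def pareto_front(points):
    """Keep the points not dominated by any other (maximisation on every axis)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# made-up objective values per robot: (distance moved, -energy used);
# energy is negated so that both axes become maximisation objectives
scores = [
    [1.0, -3.0],
    [3.0, -4.0],
    [2.0, -4.5],   # dominated by (3.0, -4.0)
    [4.0, -6.0],
    [3.5, -6.5],   # dominated by (4.0, -6.0)
]
front = pareto_front(scores)
knee = front.mean(axis=0)   # crude stand-in for the centroid/knee-point step
```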
333
gradient descent
Why gradients are so small in deep learning?
https://ai.stackexchange.com/questions/18325/why-gradients-are-so-small-in-deep-learning
<p>The learning rate in my model is <code>0.00001</code> and the gradients of the model are within the range <code>[-0.0001, 0.0001]</code>. Is this normal?</p>
334
gradient descent
How to prove that gradient descent doesn&#39;t necessarily find the global optimum?
https://ai.stackexchange.com/questions/18658/how-to-prove-that-gradient-descent-doesnt-necessarily-find-the-global-optimum
<p>How can I prove that gradient descent doesn't necessarily find the global optimum?</p> <p>For example, consider the following function</p> <p><span class="math-container">$$f(x_1, x_2, x_3, x_4) = (x_1 + 10x_2)^2 + 5x_2^3 + (x_2 + 2x_3)^4 + 3x_1x_4^2$$</span></p> <p>Assume also that we can't find the optimal value for the learning rate because of time constraints.</p>
<p>Well, GD terminates once the gradients are 0, right? Now, in a non-convex function, there could be some points which do not belong to the global minimum and yet have 0 gradients. For example, such points can belong to saddle points and local minima.</p> <p>Consider this picture and say you start GD at the x label. <a href="https://i.sstatic.net/vmK8H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vmK8H.png" alt="enter image description here"></a></p> <p>GD will bring you to the flat area and will stop making progress there, as the gradients are 0. However, as you can see, the global minimum is to the left of this flat region.</p> <p>By the same token, you have to show, for your own function, that there exists at least a single point whose gradients are 0 and yet it is not the global minimum.</p> <p>In addition to that, the convergence guarantee for convex functions depends on annealing the learning rate appropriately. For example, if your LR is too high, GD can just keep overshooting the minimum. The visualization from <a href="https://deeplearning.ai/ai-notes/optimization/?utm_campaign=BlogAINotesOptimizationNovember122019&amp;utm_content=105388620&amp;utm_medium=social&amp;utm_source=linkedin&amp;hss_channel=lcp-18246783" rel="nofollow noreferrer">this</a> page might help you to understand more regarding the behavior of GD.</p>
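This can also be demonstrated numerically. A sketch with the quartic $f(x) = x^4 - 2x^2 + 0.5x$ (chosen for illustration, not the question's function), which has a local minimum near $x \approx 0.93$ and a global one near $x \approx -1.05$: starting GD in the wrong basin leaves it stuck at a point with (near-)zero gradient that is not the global minimum.

```python
def f(x):
    return x**4 - 2 * x**2 + 0.5 * x

def grad(x):
    return 4 * x**3 - 4 * x + 0.5

x = 1.0                      # start in the basin of the *local* minimum
for _ in range(1000):
    x -= 0.01 * grad(x)

# GD has converged (gradient ~ 0) near x ≈ 0.93, yet the global
# minimum of f is near x ≈ -1.05, where f is much lower
```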
335
gradient descent
If the normal equation works, why do we need gradient descent?
https://ai.stackexchange.com/questions/22401/if-the-normal-equation-works-why-do-we-need-gradient-descent
<p>Recently, I followed the open course CS229, <a href="http://cs229.stanford.edu/notes/cs229-notes1.pdf" rel="nofollow noreferrer">http://cs229.stanford.edu/notes/cs229-notes1.pdf</a><br /> This lecturer introduces an alternative approach to gradient descent that is called &quot;Normal Equation&quot; and the equation is as follows:</p> <p><span class="math-container">$$\theta=\left(X^{T} X\right)^{-1} X^{T} \vec{y}$$</span></p> <p>The normal equation can directly compute the <span class="math-container">$\theta$</span>.</p> <p>If the normal equation works, why do we need gradient descent? What is the trade-off between these two methods?</p>
<p>That <a href="https://mathworld.wolfram.com/NormalEquation.html" rel="nofollow noreferrer">normal equation</a> is sometimes called the <a href="https://mathworld.wolfram.com/Closed-FormSolution.html" rel="nofollow noreferrer">closed-form solution</a>.</p> <p>The short answer to your question is that the closed-form solution may be impractical or unavailable in certain cases or the iterative numerical method (such as gradient descent) may be more efficient (in terms of resources).</p> <p><a href="https://stats.stackexchange.com/a/23132/82135">This answer</a> gives you more details and an example.</p>
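A small sketch comparing the two approaches on the same least-squares problem (synthetic data): both recover essentially the same parameters, which is the sense in which gradient descent is an iterative substitute for the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, -1.5]) + rng.normal(scale=0.01, size=100)

# normal equation; np.linalg.solve is numerically safer than forming the inverse
theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# gradient descent on the mean squared error of the same model
theta_gd = np.zeros(3)
lr = 0.1
for _ in range(2000):
    theta_gd -= lr * X.T @ (X @ theta_gd - y) / len(y)
```

The trade-off shows up in cost: the closed form needs a solve involving the $n \times n$ Gram matrix (cubic in the number of features), while each GD step is only a couple of matrix-vector products, which is why GD scales to settings where the normal equation is impractical.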
336
gradient descent
How parameter adjustment works in Gradient Descent?
https://ai.stackexchange.com/questions/23737/how-parameter-adjustment-works-in-gradient-descent
<p>I am trying to comprehend how Gradient Descent works.</p> <p>I understand we have a cost function which is defined in terms of the following parameters,</p> <p><span class="math-container">$J(w_{1},w_{2},\dots, w_{n}, b)$</span></p> <p>and the derivative would tell us which direction to adjust the parameters.</p> <p>i.e. <span class="math-container">$\dfrac{dJ(w_{1},w_{2},\dots, w_{n}, b)}{dw_{1}}$</span> is the rate of change of the cost w.r.t. <span class="math-container">$w_{1}$</span></p> <p>The lecture kept saying this is very valuable, as we are asking the question <em>how should I change <span class="math-container">$w$</span></em> to improve the cost.</p> <p>But then the lecturer presented <span class="math-container">$w_{1}$</span>, <span class="math-container">$w_{2}$</span>, ... as scalar values. How can we differentiate a scalar value?</p> <p>I am fundamentally missing what is happening.</p> <p>Can anyone please guide me to any blog post or book that I should read to understand this better?</p>
<p>Imagine we have the curve <span class="math-container">$f(x) = x^2$</span>, and we want to find the minimum of this function. The derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$x$</span> is <span class="math-container">$2x$</span>. Now, gradient descent works by updating our current estimate of the minimum, say <span class="math-container">$c_t$</span>, by the following iterative process <span class="math-container">$$c_{t+1} = c_t - \alpha \times \nabla_xf(x=c_t),$$</span> where <span class="math-container">$\alpha$</span> is some constant that controls how far we move along the negative gradient.</p> <p>Intuitively, this should make sense. Imagine our current estimate of the minimum is <span class="math-container">$c_t = -1$</span>. The update would then give us <span class="math-container">$c_{t+1} = -1 - \alpha \times -2 = -1 + 2\alpha &gt; -1$</span>. As you can see, the update has shifted our estimate in the direction of the minimum. If our estimate were <span class="math-container">$+1$</span> then you can probably see that the update would again have shifted us in the direction of the minimum.</p> <p>Now, what happens in machine learning is we have a loss function <span class="math-container">$L$</span> that we typically want to find the minimum of in terms of the parameters of our model. By applying gradient descent to the loss function as we did above with <span class="math-container">$f(x)$</span>, we iteratively apply the update rule, which will eventually lead us to the minimum of our loss function with respect to the weights. The process is exactly the same as above except it is likely to happen in higher dimensions, where the derivative of <span class="math-container">$f$</span> becomes a vector of partial derivatives.
Note that whilst <span class="math-container">$w_i$</span>'s are scalar values, we are not differentiating these values, rather we are differentiating the loss function with respect to these scalars.</p> <p>I would recommend you to try this out with a simple linear model of maybe 2 or 3 parameters. <a href="https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931" rel="nofollow noreferrer">After a quick Google I found this article that may be useful</a>.</p>
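<p>The iterative process above can be sketched in a few lines, using the same f(x) = x^2 and the starting estimate of -1 from the example:</p>

```python
def grad_f(x):
    return 2 * x        # derivative of f(x) = x^2


c = -1.0                # starting estimate of the minimum
alpha = 0.1             # step-size constant
for _ in range(100):
    c = c - alpha * grad_f(c)

print(c)                # approaches the minimum at x = 0
```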
337
gradient descent
Why is the perceptron criterion function differentiable?
https://ai.stackexchange.com/questions/24261/why-is-the-perceptron-criterion-function-differentiable
<p>I'm reading chapter one of the book <a href="https://dl.uswr.ac.ir/bitstream/Hannan/141305/2/9783319944623.pdf" rel="nofollow noreferrer">Neural Networks and Deep Learning</a> by Aggarwal.</p> <p>In section 1.2.1.1 of the book, I'm learning about the perceptron. One thing the book says is that if we use the sign function in the following loss function, <span class="math-container">$\sum_{i=0}^{N}[y_i - \text{sign}(W * X_i)]^2$</span>, that loss function will NOT be differentiable. Therefore, the book suggests that, instead of the sign function in the loss function, we use the perceptron criterion, which is defined as:</p> <p><span class="math-container">$$ L_i = \max(-y_i(W * X_i), 0) $$</span></p> <p>The question is: Why is the perceptron criterion function differentiable? Won't we face a discontinuity at zero? Is there anything that I'm missing here?</p>
<p><span class="math-container">$\max(-y_i(w x_i), 0)$</span> is indeed not differentiable with respect to <span class="math-container">$w$</span> at <span class="math-container">$w x_i=0$</span>.</p> <p>Loss functions are problematic when they are not differentiable at some point, but even more so when they are flat (constant) over some interval of the weights.</p> <p>Assume <span class="math-container">$y_i = 1$</span> and <span class="math-container">$w x_i &lt; 0$</span> (that is, an error of type &quot;false negative&quot;).</p> <p>In this case, <span class="math-container">$[y_i - \text{sign}(w x_i)]^2 = 4$</span>. The derivative over the whole interval <span class="math-container">$w x_i &lt; 0$</span> is zero, so the learning algorithm has no way to decide whether it is better to increase or decrease <span class="math-container">$w$</span>.</p> <p>In the same case, <span class="math-container">$\max(-y_i(w x_i), 0) = - w x_i$</span>, whose partial derivative is <span class="math-container">$-x_i$</span>. The learning algorithm knows that it must increase <span class="math-container">$w$</span> if <span class="math-container">$x_i&gt;0$</span>, and decrease it otherwise. This is the real reason this loss function is considered more practical than the previous one.</p> <p>How to handle the point <span class="math-container">$w x_i = 0$</span>? Simply: if an update lands exactly on <span class="math-container">$0$</span>, assign a very small value, <span class="math-container">$w=\epsilon$</span>. Similar logic applies to the remaining cases.</p>
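<p>A tiny numeric sketch of the "false negative" case discussed above (values chosen arbitrarily):</p>

```python
x_i, y_i = 2.0, 1.0
w = -0.5                                  # w * x_i < 0 while y_i = 1: a false negative

# perceptron criterion L = max(-y*(w*x), 0); on the misclassified side it
# equals -y*w*x, so its derivative with respect to w there is -y*x
loss = max(-y_i * w * x_i, 0.0)
grad_w = -y_i * x_i if -y_i * w * x_i > 0 else 0.0

w_new = w - 0.1 * grad_w                  # one gradient step: increases w, as argued above
new_loss = max(-y_i * w_new * x_i, 0.0)
print(loss, grad_w, new_loss)             # loss 1.0, gradient -2.0, new loss ~0.6
```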
338
gradient descent
How can the gradient of the weight be calculated in the viewpoint of matrix calculus?
https://ai.stackexchange.com/questions/27335/how-can-the-gradient-of-the-weight-be-calculated-in-the-viewpoint-of-matrix-calc
<p>Let <span class="math-container">$\sigma(x)$</span> be sigmoid function. Consider the case where <span class="math-container">$\text{out}=\sigma(\vec{x} \times W + \vec{b})$</span>, and we want to compute <span class="math-container">$\frac{\partial{\text{out}}}{\partial{w} }.$</span><br /> Set the dimension as belows:<br /> <span class="math-container">$\vec{x}$</span>: <span class="math-container">$(n, n_{\text{in}})$</span>, <span class="math-container">$W$</span>: <span class="math-container">$(n_{\text{in}}, n_{\text{out}})$</span>, <span class="math-container">$\vec{b}$</span>: <span class="math-container">$(1, n_{\text{out}})$</span>.<br /> Then <span class="math-container">$\text{out}$</span> has the dimension <span class="math-container">$(n, n_{\text{out}})$</span>. So we need to calculate the matrix by matrix derivative, as I know there is no such way to define that. I know that finally it is calculated as <span class="math-container">$\vec{x}^T \times (\text{out}\cdot(1-\text{out}))$</span>.<br /> But I can't still get the exact procedure of calculation, why it should be <span class="math-container">$\vec{x}^T \times (\text{out}\cdot(1-\text{out}))$</span>, not <span class="math-container">$(\text{out}\cdot(1-\text{out})) \times \vec{x}^T$</span>,I know it by considering dimension, but not by calculation.</p> <p>My intuition about this problem is that all calculation can be considerd as vector by vector differentiation since <span class="math-container">$n$</span> is a batch size number, we can calculate matrix differentiation by considering each column vector.</p> <p>I'm not sure about my intuition yet, and I need some exact mathematical calculation procedure for the problem,</p>
<p>I think I understand the process now. Let <span class="math-container">$x$</span> have dimension <span class="math-container">$(n, p)$</span> and <span class="math-container">$W$</span> dimension <span class="math-container">$(p, q)$</span>. In a neural network, <span class="math-container">$n$</span> denotes the number of samples in the batch and <span class="math-container">$p, q$</span> denote the input and output dimensions of each layer, respectively. We omit the bias <span class="math-container">$b$</span> here just for simplicity.</p> <p>I had trouble understanding the process of differentiation because of some confusion about vector (matrix) derivatives. By viewing the matrix as a set of column vectors, I can now solve the problem.</p> <p>Let <span class="math-container">$x$</span> = <span class="math-container">$\begin{bmatrix} x_{11}, &amp;x_{12}, &amp;\cdots, &amp;x_{1p}\\ x_{21}, &amp;x_{22}, &amp;\cdots, &amp;x_{2p}\\ \vdots, &amp;\vdots, &amp;\vdots, &amp;\vdots\\ x_{n1}, &amp;x_{n2}, &amp;\cdots, &amp;x_{np} \end{bmatrix}$</span>, which can be written as <span class="math-container">$\begin{bmatrix} \vec{x}_{1}, &amp;\vec{x}_{2}, &amp;\cdots, &amp;\vec{x}_{p} \end{bmatrix}$</span>.</p> <p>Similarly, <span class="math-container">$W$</span> = <span class="math-container">$\begin{bmatrix} w_{11}, &amp;w_{12}, &amp;\cdots, &amp;w_{1q}\\ w_{21}, &amp;w_{22}, &amp;\cdots, &amp;w_{2q}\\ \vdots, &amp;\vdots, &amp;\vdots, &amp;\vdots\\ w_{p1}, &amp;w_{p2}, &amp;\cdots, &amp;w_{pq} \end{bmatrix}$</span> = <span class="math-container">$\begin{bmatrix} \vec{w}_{1}\\ \vec{w}_{2}\\ \vdots\\ \vec{w}_{p} \end{bmatrix}$</span>.</p> <p>Then <span class="math-container">$x\times W$</span> = <span class="math-container">$\vec{x}_1 \vec{w}_1 + \vec{x}_2 \vec{w}_2 + \cdots + \vec{x}_p \vec{w}_p$</span>. 
So <span class="math-container">$\frac{\partial(\vec{x}_1 \vec{w}_1 + \vec{x}_2 \vec{w}_2 + \cdots \vec{x}_p \vec{w}_p)}{\partial{W}} = \begin{bmatrix} \vec{x}_{1}, &amp;\vec{x}_{2}, &amp;\cdots, &amp;\vec{x}_{p} \end{bmatrix}^T.$</span></p> <p>I still have a little trouble understanding the last part since, actually, <span class="math-container">$\vec{x}_1 \vec{w}_1 + \vec{x}_2 \vec{w}_2 + \cdots \vec{x}_p \vec{w}_p$</span> is not a scalar but an <span class="math-container">$(n, q)$</span> matrix, but at least now I understand why it is written as the above result.</p> <p>(I also hope to fully understand that last step.)</p>
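<p>A quick NumPy sanity check of the shapes and of the formula, treating the scalar loss as the sum of the outputs (dimensions chosen arbitrarily):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 4, 3, 2
x = rng.normal(size=(n, p))
W = rng.normal(size=(p, q))


def sigmoid(z):
    return 1 / (1 + np.exp(-z))


out = sigmoid(x @ W)             # shape (n, q)

# if the scalar loss is sum(out), the upstream gradient is all ones, so
delta = out * (1 - out)          # elementwise sigmoid derivative, shape (n, q)
grad_W = x.T @ delta             # shape (p, q): matches W; delta @ x.T would not even conform

# finite-difference check of a single entry of the gradient
eps = 1e-6
W_pert = W.copy()
W_pert[1, 0] += eps
numeric = (sigmoid(x @ W_pert).sum() - out.sum()) / eps

print(grad_W.shape, grad_W[1, 0], numeric)
```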
339
gradient descent
Which function $(\hat{y} - y)^2$ or $(y - \hat{y})^2$ should I use to compute the gradient?
https://ai.stackexchange.com/questions/12612/which-function-haty-y2-or-y-haty2-should-i-use-to-compute-th
<p>The MSE can be defined as <span class="math-container">$(\hat{y} - y)^2$</span>, which should be equal to <span class="math-container">$(y - \hat{y})^2$</span>, but I think their derivatives are different, so I am confused about which derivative to use for computing my gradient. Can someone explain which term to use?</p>
<p>The derivative of <span class="math-container">$\mathcal{L_1}(y, x) = (\hat{y} - y)^2 = (f(x) - y)^2$</span> with respect to <span class="math-container">$\hat{y}$</span>, where <span class="math-container">$f$</span> is the model and <span class="math-container">$\hat{y} = f(x)$</span> is the output of the model, is </p> <p><span class="math-container">\begin{align} \frac{d}{d \hat{y}} \mathcal{L_1} &amp;= \frac{d}{d \hat{y}} (\hat{y} - y)^2 \\ &amp;= 2(\hat{y} - y) \frac{d}{d \hat{y}} (\hat{y} - y) \\ &amp;= 2(\hat{y} - y) (1) \\ &amp;= 2(\hat{y} - y) \end{align}</span></p> <p>The derivative of <span class="math-container">$\mathcal{L_2}(y, x) = (y - \hat{y})^2 = (y - f(x))^2$</span> w.r.t <span class="math-container">$\hat{y}$</span> is</p> <p><span class="math-container">\begin{align} \frac{d}{d \hat{y}} \mathcal{L_2} &amp;= \frac{d}{d \hat{y}} (y - \hat{y})^2 \\ &amp;= 2(y -\hat{y}) \frac{d}{d \hat{y}} (y -\hat{y}) \\ &amp;= 2(y - \hat{y})(-1)\\ &amp;= -2(y - \hat{y})\\ &amp;= 2(\hat{y} - y) \end{align}</span></p> <p>So, the derivatives of <span class="math-container">$\mathcal{L_1}$</span> and <span class="math-container">$\mathcal{L_2}$</span> are the same.</p>
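<p>A quick finite-difference check of this equality (values chosen arbitrarily):</p>

```python
def L1(y_hat, y):
    return (y_hat - y) ** 2


def L2(y_hat, y):
    return (y - y_hat) ** 2


y_hat, y, eps = 0.7, 1.0, 1e-6
d1 = (L1(y_hat + eps, y) - L1(y_hat, y)) / eps
d2 = (L2(y_hat + eps, y) - L2(y_hat, y)) / eps
print(d1, d2)   # both approximately 2 * (y_hat - y) = -0.6
```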
340
gradient descent
What&#39;s the rationale behind mini-batch gradient descent?
https://ai.stackexchange.com/questions/7494/whats-the-rationale-behind-mini-batch-gradient-descent
<p>I am reading a book that states</p> <blockquote> <p>As the mini-batch size increases, the gradient computed is closer to the 'true' gradient</p> </blockquote> <p>So, I assume that they are saying that mini-batch training only focuses on decreasing the cost function in a certain 'plane', sacrificing accuracy for speed. Is that correct?</p>
<p>The basic idea behind mini-batch training is rooted in the exploration / exploitation tradeoff in <a href="https://en.wikipedia.org/wiki/Local_search_(optimization)" rel="nofollow noreferrer">local search and optimization algorithms</a>. </p> <p>You can view training of an ANN as a local search through the space of possible parameters. The most common search method is to move all the parameters in the direction that reduces error the most (gradient descent).</p> <p>However, ANN parameter spaces do not usually have a smooth topology. There are many shallow local optima. Following the global gradient will usually cause the search to become trapped in one of these optima, preventing convergence to a good solution.</p> <p><a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" rel="nofollow noreferrer">Stochastic gradient descent</a> solves this problem in much the same way as older algorithms like simulated annealing: you can escape from a shallow local optimum because you will eventually (with high probability) pick a sequence of updates based on a single point that "bubbles" you out. The problem is that you'll also tend to waste a lot of time moving in wrong directions.</p> <p>Mini-batch training sits between these two extremes. Basically you average the gradient across enough examples that you still have some global error signal, but not so many that you'll get trapped in a shallow local optimum for long. </p> <p>Recent research by <a href="https://arxiv.org/abs/1804.07612" rel="nofollow noreferrer">Masters and Luschi</a> suggests that in fact, most of the time you'd want to use <em>smaller</em> batch sizes than what's being done now. If you set the learning rate carefully enough, you can use a big batch size to complete training faster, but the difficulty of picking the correct learning rate increases with the size of the batch.</p>
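<p>The "closer to the true gradient" effect from the quoted book can be measured directly: the spread of mini-batch gradient estimates around the full-batch gradient shrinks as the batch grows (a sketch on arbitrary synthetic data):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=1000)
y = 3 * X + rng.normal(scale=0.5, size=1000)
w = 0.0   # current parameter of the 1-d linear model y_hat = w * x


def grad(idx):
    # gradient of the mean squared error over the examples in idx
    err = w * X[idx] - y[idx]
    return 2 * np.mean(err * X[idx])


full = grad(np.arange(1000))   # the "true" gradient over the whole training set


def spread(batch_size, trials=200):
    g = [grad(rng.choice(1000, batch_size, replace=False)) for _ in range(trials)]
    return np.std(g)


s4, s64 = spread(4), spread(64)
print(full, s4, s64)   # larger batches -> estimates cluster closer to the full gradient
```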
341
gradient descent
Is it possible with stochastic gradient descent for the error to increase?
https://ai.stackexchange.com/questions/13288/is-it-possible-with-stochastic-gradient-descent-for-the-error-to-increase
<p>As simple as that. Is there any scenario where the error might <em>increase</em>, if only by a tiny amount, when using SGD (no momentum)?</p>
<p>Yes. Not only that, but the error is highly noisy: prone to big spikes, and sometimes it goes through quite long periods of increase before decreasing again or stabilizing. It is often even impossible to interpret an error plot without passing it through a smoothing filter first, so noisy it is. The specifics depend on the problem, of course. This is not unique to SGD; it holds for any optimizer.</p>
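<p>A minimal example: two conflicting training examples for the one-parameter model y_hat = w * x, with w already at the optimum of the total loss, so that a step on either single example must increase the total error (numbers arbitrary):</p>

```python
data = [(1.0, 1.0), (1.0, -1.0)]          # (x, y) pairs pulling w in opposite directions


def total_loss(w):
    return sum((w * x - y) ** 2 for x, y in data)


w, lr = 0.0, 0.1                          # w = 0 minimizes the total loss exactly
before = total_loss(w)

x, y = data[0]                            # one SGD step on a single example
w -= lr * 2 * (w * x - y) * x
after = total_loss(w)

print(before, after)                      # 2.0 -> about 2.08: the step raised the total error
```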
342
gradient descent
What is the derivative of a specific output with respect to a specific weight?
https://ai.stackexchange.com/questions/25522/what-is-the-derivative-of-a-specific-output-with-respect-to-a-specific-weight
<p>If I have a neural network, and say the 6th output node of the neural network is:</p> <p><span class="math-container">$$x_6 = w_{16}y_1 + w_{26}y_2 + w_{36}y_3$$</span></p> <p>What does that make the derivative of:</p> <p><span class="math-container">$$\frac{\partial x_6}{\partial w_{26}}$$</span></p> <p>I guess that it's how is <span class="math-container">$x_6$</span> changing with respect to <span class="math-container">$w_{26}$</span>, so, therefore, is it equal to <span class="math-container">$y_2$</span> (since the output, <span class="math-container">$y_2$</span>, will change depending on the weight added to the input)?</p>
<p>Formally speaking <span class="math-container">$x_6$</span> is a function of <span class="math-container">$w_{16},\ w_{26}$</span> and <span class="math-container">$w_{36}$</span>, that is <span class="math-container">$$x_6 =f(w_{16}, w_{26}, w_{36})=w_{16}y_1 + w_{26}y_2 + w_{36}y_3.$$</span> The derivative w.r.t. <span class="math-container">$w_{26}$</span> is <span class="math-container">$$\frac{\partial x_6}{\partial w_{26}}= \frac{\partial w_{16}y_1}{\partial w_{26}} +\frac{\partial w_{26}y_2}{\partial w_{26}} +\frac{\partial w_{36}y_3}{\partial w_{26}} = 0 +y_2 \frac{\partial w_{26}}{\partial w_{26}} + 0= y_2.$$</span> The first equality is obtained using the fact that the partial derivative is linear (so the derivative of the sum is the sum of the derivatives); the second equality comes again from the linearity and from the fact that <span class="math-container">$w_{16}y_1$</span> and <span class="math-container">$w_{36}y_3$</span> are constants with respect to <span class="math-container">$w_{26}$</span>, so their partial derivative w.r.t. 
this variable is <span class="math-container">$0$</span>.</p> <p><em><strong>Bonus</strong></em></p> <p>Not really asked in the original question, but since I'm here let me have a bit of fun ;).</p> <p>Let's say <span class="math-container">$x_6$</span> is the output of the sixth node after you apply an activation function, that is <span class="math-container">$$x_6 =\sigma(f(w_{16}, w_{26}, w_{36}))=\sigma(w_{16}y_1 + w_{26}y_2 + w_{36}y_3).$$</span> You can compute the partial derivative applying the properties illustrated above, with the additional help of the chain rule <span class="math-container">$$\frac{\partial x_6}{\partial w_{26}}=\frac{\partial \sigma(w_{16}y_1 + w_{26}y_2 + w_{36}y_3)}{\partial w_{26}}=\sigma'\frac{\partial w_{16}y_1}{\partial w_{26}} +\sigma'\frac{\partial w_{26}y_2}{\partial w_{26}} +\sigma'\frac{\partial w_{36}y_3}{\partial w_{26}}=y_2\sigma'$$</span> <span class="math-container">$\sigma'$</span> denotes the derivative of sigma with respect to its argument.</p>
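<p>A quick finite-difference check of this result (values chosen arbitrarily):</p>

```python
y1, y2, y3 = 0.4, 1.7, -0.2
w16, w26, w36 = 0.1, 0.5, -0.3


def x6(w26_val):
    return w16 * y1 + w26_val * y2 + w36 * y3


eps = 1e-6
numeric = (x6(w26 + eps) - x6(w26)) / eps
print(numeric)   # approximately y2 = 1.7
```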
343
gradient descent
How can we reach global optimum?
https://ai.stackexchange.com/questions/11729/how-can-we-reach-global-optimum
<p>Gradient descent can get stuck in a local optimum. Which techniques are there to reach the global optimum?</p>
<p>In deep learning there are several methods to help a "stuck" gradient: decrease the learning rate, or use a cyclic learning rate (cycle it from a bigger to a smaller value). A more radical method is to completely reinitialize the last one or two layers (before the loss) of the network. </p> <p>In non-deep-learning ML, of those only decreasing the learning rate will work, but there is a plethora of numerical optimization methods to help, like second-order methods - variations of Gauss-Newton - or methods specific to the problem, which may include projective methods, alternating directions, conjugate gradients <a href="https://en.wikipedia.org/wiki/Mathematical_optimization#Optimization_algorithms" rel="nofollow noreferrer">etc</a>. There are a lot of methods which are better than gradient descent for non-deep-learning optimization.</p>
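<p>The "completely reinitialize" idea has a simple non-deep-learning analogue: random restarts. A toy sketch on a one-dimensional non-convex function (all constants arbitrary):</p>

```python
import random


def f(x):
    return x ** 4 - 2 * x ** 2 + 0.5 * x   # local minimum near x ~ +0.93, global near x ~ -1.06


def df(x):
    return 4 * x ** 3 - 4 * x + 0.5


def gd(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x


stuck = gd(1.2)                # plain GD from a bad start settles in the local minimum

random.seed(0)                 # restarts: rerun GD from several random points, keep the best
best = min((gd(random.uniform(-2, 2)) for _ in range(10)), key=f)

print(stuck, best)             # stuck near +0.93, best near -1.06
```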
344
gradient descent
Simple Polynomial Gradient Descent algorithm not working
https://ai.stackexchange.com/questions/34230/simple-polynomial-gradient-descent-algorithm-not-working
<p>I am trying to implement a simple 2nd order polynomial gradient descent algorithm in Java. It is not converging and becomes unstable. How do I fix it?</p> <pre><code>public class PolyGradientDescent { public static double getValue(double input) { return 3 * input * input - 4 * input + 3.5; } public static void fit() { double x0 = Math.random(); double x1 = Math.random(); double x2 = Math.random(); double size = 15; double learningrate = 0.0001; for(int i = 0; i &lt; 400; i++) { double partial_x2 = 0; double partial_x1 = 0; double partial_x0 = 0; for(double x = 0; x &lt; size+0.001; x++) { double xx = x * x; double y_predict = xx * x2 + x * x1 + x0; double delta = getValue(x) - y_predict; partial_x2 += xx * delta; partial_x1 += x * delta; partial_x0 += delta; } x0 = x0 + (2 / size) * partial_x0 * learningrate; x1 = x1 + (2 / size) * partial_x1 * learningrate; x2 = x2 + (2 / size) * partial_x2 * learningrate; System.out.println(x0 + &quot;\t&quot; + x1 + &quot;\t&quot; + x2 + &quot;\t&quot; + &quot;\t&quot; + partial_x2 + &quot;\t&quot; + partial_x1 + &quot;\t&quot; + partial_x0); } for(double x = 0; x &lt; size+0.001; x++) { System.out.println(&quot;Y: &quot; + getValue(x) + &quot;, Y_Predict: &quot; + (x2 * x * x + x1 * x + x0)); } } public static void main(String[] args) { fit(); } } </code></pre>
<p>I tested your code in Python and it works just fine once the learning rate is decreased a bit further (divided by 100) and the number of epochs is increased (multiplied by 100).</p> <p>I also changed the way the derivative was calculated to make it more mathematically correct :)</p> <pre><code>import random

def getValue(x):
    return 3 * x * x - 4 * x + 3.5

def fit():
    x0 = random.randrange(-100, 101) / 100
    x1 = random.randrange(-100, 101) / 100
    x2 = random.randrange(-100, 101) / 100
    size = 15
    learningrate = 0.000001
    for i in range(40000):
        partial_x2 = 0
        partial_x1 = 0
        partial_x0 = 0
        for x in range(16):
            xx = x * x
            y_predict = xx * x2 + x * x1 + x0
            delta = getValue(x) - y_predict
            # for the partial derivatives, I pulled the sign and the factor 2 into
            # this step, because this is the true derivative of the squared error
            partial_x2 -= 2 * xx * delta
            partial_x1 -= 2 * x * delta
            partial_x0 -= 2 * delta
        x0 = x0 - (1 / size) * partial_x0 * learningrate
        x1 = x1 - (1 / size) * partial_x1 * learningrate
        x2 = x2 - (1 / size) * partial_x2 * learningrate
    for x in range(16):
        print(&quot;Y: &quot; + str(getValue(x)) + &quot;, Y_Predict: &quot; + str(x2 * x * x + x1 * x + x0))

fit()
</code></pre>
345
gradient descent
Why would one prefer the gradient of the sum rather than the sum of the gradients?
https://ai.stackexchange.com/questions/34910/why-would-one-prefer-the-gradient-of-the-sum-rather-than-the-sum-of-the-gradient
<p>When gradients are aggregated over mini batches, I sometimes see formulations like this, e.g., in the &quot;Deep Learning&quot; book by Goodfellow et al.</p> <p><span class="math-container">$$\mathbf{g} = \frac{1}{m} \nabla_{\mathbf{w}} \left( \sum\limits_{i=1}^{m} L \left( f \left( \mathbf{x}^{(i)}, \mathbf{w} \right), y^{(i)} \right) \right)$$</span></p> <p>This is mathematically equivalent to</p> <p><span class="math-container">$$\mathbf{g} = \frac{1}{m} \left( \sum\limits_{i=1}^{m} \nabla_{\mathbf{w}} L \left( f \left( \mathbf{x}^{(i)}, \mathbf{w} \right), y^{(i)} \right) \right)$$</span></p> <p>just moving the gradient operator inside/outside the sum.</p> <p>But I was wondering: why one would prefer the first representation?</p> <p>My thoughts, and please correct me if I am wrong:</p> <ul> <li><p>When performing batch gradient descent in practice, we process one example after the other and compute the corresponding gradient everytime. So the second equation above better represents what is really happening</p> </li> <li><p>Even more, there is no alternative in practice. I can not compute all losses, just sum them up and obtain just one gradient afterwards. So the first equation (although mathematically correct and equal) might even be somewhat misleading?</p> </li> <li><p>The only reason I can imagine to decide for the first way is to express more clearly that there is one (aggregated, mean) gradient based on one data set</p> </li> </ul> <p><strong>Any mistakes here on my side?</strong></p>
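<p>For reference, the equivalence is easy to confirm numerically with a one-parameter model (all numbers arbitrary): a finite difference on the summed loss matches the mean of the per-example gradients.</p>

```python
X = [1.0, 2.0, 3.0]
Y = [2.0, 3.9, 6.1]
w = 0.5       # parameter of the model f(x, w) = w * x, with squared loss
m = len(X)


def loss(w_, x, y):
    return (w_ * x - y) ** 2


def grad_loss(w_, x, y):
    return 2 * (w_ * x - y) * x


def summed(w_):
    return sum(loss(w_, x, y) for x, y in zip(X, Y))


eps = 1e-6
g_first = (summed(w + eps) - summed(w)) / eps / m             # gradient of the sum, scaled by 1/m
g_second = sum(grad_loss(w, x, y) for x, y in zip(X, Y)) / m  # mean of the per-example gradients

print(g_first, g_second)   # identical up to finite-difference error
```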
346
gradient descent
How is direction of weight change determined by Gradient Descent algorithm
https://ai.stackexchange.com/questions/5670/how-is-direction-of-weight-change-determined-by-gradient-descent-algorithm
<p>The result of the gradient descent algorithm is a vector. So how does this algorithm decide the direction of weight change? We give hyperparameters for the step size. But how is the vector direction of weight change, for the purpose of reducing the loss function in a linear regression model, determined by this algorithm?</p>
<p>First, what does gradient descent do? Gradient descent is a tool of calculus which we use to determine the parameters (here weights) of a machine learning algorithm or a neural network, by running the gradient descent algorithm iteratively. </p> <p>What does the vector obtained from one iteration of gradient descent tell us? It tells us the direction of weight change (when the weights are treated as a vector) for the maximum reduction in the value of the loss function. The intuition behind why gradient descent gives such a direction can be found here:</p> <ul> <li><a href="https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/gradient-and-directional-derivatives/v/why-the-gradient-is-the-direction-of-steepest-ascent" rel="nofollow noreferrer">Why the gradient is the direction of steepest ascent - Khan Academy</a></li> <li><a href="https://math.stackexchange.com/questions/223252/why-is-gradient-the-direction-of-steepest-ascent">Why is gradient the direction of steepest ascent?</a></li> <li><a href="https://math.stackexchange.com/questions/180559/why-does-gradient-descent-work">Why does gradient descent work?</a></li> </ul> <p>In general the cost/loss function is an n-dimensional paraboloid (we design the function so that it is convex).</p> <p><a href="https://i.sstatic.net/TYDRx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYDRx.png" alt="enter image description here"></a></p> <p>A 2-D paraboloid with <code>x</code> and <code>y</code> as independent variables.</p> <p>Now, why do we update the weights in the direction of the negative gradient only? 
Why not some other direction?</p> <p>We could have chosen some other direction that also reduces the cost, but we choose the negative gradient because it is the direction of steepest descent, which ensures that:</p> <ul> <li>The cost always decreases (if the step size is correct, and in general only for convex functions, e.g. a paraboloid).</li> <li>The cost decreases by the maximal amount when we move in that direction, so we don't have to worry about whether the cost will decrease at all.</li> </ul> <p>We also use the learning rate <code>alpha</code> to scale how much we want to change the weights.</p> <p>EDIT: As pointed out by @pasaba, the error function may not be a paraboloid, but in general a good cost function looks like a paraboloid with skewed axes.</p>
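<p>The two bullet points can be checked numerically on a toy convex bowl (all constants arbitrary): a step along the negative gradient lowers the cost more than an equally long step in any of several other directions:</p>

```python
import numpy as np


def loss(w):
    return (w[0] - 1) ** 2 + 3 * (w[1] + 2) ** 2   # convex bowl with minimum at (1, -2)


def grad(w):
    return np.array([2 * (w[0] - 1), 6 * (w[1] + 2)])


w = np.array([3.0, 1.0])
g = grad(w)
step = 0.05

steepest = loss(w - step * g / np.linalg.norm(g))  # unit step along the negative gradient

candidates = []                                    # equally long steps in 8 fixed directions
for a in np.arange(8) * np.pi / 4:
    d = np.array([np.cos(a), np.sin(a)])
    candidates.append(loss(w + step * d))

print(steepest, min(candidates))   # the negative-gradient step gives the lowest cost
```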
347
gradient descent
How can a neural network learn when the derivative of the activation function is 0?
https://ai.stackexchange.com/questions/10158/how-can-a-neural-network-learn-when-the-derivative-of-the-activation-function-is
<p>Imagine that I have an artificial neural network with a single hidden layer and that I am using ReLU as my activation function. If by chance I initialize my biases and my weights in such a way that: <span class="math-container">$$ X * W + B &lt; 0 $$</span> for every input <strong>x</strong> in <strong>X</strong>, then the partial derivative of the loss function with respect to W will always be 0! </p> <p>In a setup like the above where the derivative is 0, is it true that an NN won't learn anything? </p> <p>If true (the NN won't learn anything), can I also assume that once the gradient reaches the value 0 for a given weight, that weight won't ever be updated?</p>
<blockquote> <p>In a setup like the above where the derivat[iv]e is 0 is it true that an NN won't learn anything?</p> </blockquote> <p>There are a couple of adjustments to gradients that <em>might</em> apply if you do this in a standard framework:</p> <ul> <li><p>Momentum may cause weights to continue changing if any recent ones were non-zero. This is typically implemented as a rolling mean of recent gradients.</p></li> <li><p>Weight decay (aka L2 weight regularisation) is often implemented as an additional gradient term and may adjust weights down even in the absence of signals from prediction errors.</p></li> </ul> <p>If either of these extensions to basic gradient descent is active, or anything similar, then it is possible for the neural network to move out of the stationary zone that you have created after a few steps and then continue learning.</p> <p>Otherwise, yes, it is correct that the neural network's weights would not change at all through gradient descent, and the NN would remain unchanged for any of your input values. Your careful initialisation of biases and weights will have created a system that is unable to learn from the given data. This is a known problem with ReLU activation, and can happen to some percentage of artificial neurons during training with normal start conditions. Other activation functions such as sigmoid have similar problems - although the gradient is never zero in many of these, it can be arbitrarily low, so it is possible for parameters to get into a state where learning is so slow that the NN, whilst technically learning <em>something</em> on each iteration, is effectively stuck. It is not always easy to tell the difference between these unwanted states of a NN and the goal of finding a useful minimum error.</p>
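<p>The described initialisation is easy to reproduce; a short sketch of the resulting all-zero gradient (arbitrary dimensions; inputs kept non-negative so that negative weights and biases guarantee X * W + B &lt; 0):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(8, 3))    # all inputs non-negative
W = -np.ones((3, 4))                      # chosen so that X @ W + b < 0 for every input
b = -np.ones(4)

z = X @ W + b                             # pre-activations: all negative
relu_out = np.maximum(z, 0)               # ReLU output: all zeros

upstream = rng.normal(size=z.shape)       # stand-in for dLoss/d(relu_out)
grad_z = upstream * (z > 0)               # ReLU masks the upstream gradient
grad_W = X.T @ grad_z                     # gradient w.r.t. W: identically zero

print(np.abs(grad_W).max())               # 0.0 -> no weight update, the layer is "dead"
```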
348
gradient descent
In gradient descent&#39;s update rule, why do we use $\sigma(z^{l-1})\frac{\delta C_0}{ \delta w^{l}}$ instead of $\frac{\delta C_0}{\delta w^{l}}$?
https://ai.stackexchange.com/questions/26920/in-gradient-descents-update-rule-why-do-we-use-sigmazl-1-frac-delta-c
<p>I am trying to code a simple two-layer neural network, as I have described here: <a href="https://itisexplained.com/html/NN/ml/5_codingneuralnetwork/" rel="nofollow noreferrer">https://itisexplained.com/html/NN/ml/5_codingneuralnetwork/</a></p> <p>I am getting stuck on the last step: updating the weights after calculating the gradients for the outer and inner layers via back-propagation.</p> <pre><code>#---------------------------------------------------------------
# Two-layered NW. Using (1) and the equations we derived as explanations
# (1) http://iamtrask.github.io/2015/07/12/basic-python-network/
#---------------------------------------------------------------
import numpy as np

# seed random numbers to make calculation deterministic
np.random.seed(1)

# pretty print numpy array
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})

# let us code our sigmoid function
def sigmoid(x):
    return 1/(1+np.exp(-x))

# derivative of the sigmoid, expressed in terms of the sigmoid output x = sigmoid(z)
def derv_sigmoid(x):
    return x*(1-x)

# set learning rate as 1 for this toy example
learningRate = 1

# input x, also used as the training set here
x = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])

# desired output for each row of the training set above
y = np.array([[0,1,1,0]]).T

# Explanation - as long as the input has two ones, but not three, the output is one
&quot;&quot;&quot;
Input [0,0,1] Output = 0
Input [0,1,1] Output = 1
Input [1,0,1] Output = 1
Input [1,1,1] Output = 0
&quot;&quot;&quot;

input_rows = 4

# Randomly initialised weights
weight1 = np.random.random((3,input_rows))
weight2 = np.random.random((input_rows,1))
print(&quot;Shape weight1&quot;, np.shape(weight1))  # debug
print(&quot;Shape weight2&quot;, np.shape(weight2))  # debug

# Activation of layer 0 is taken as input x
a0 = x

iterations = 1000
for iter in range(0, iterations):

    # Forward pass - straightforward
    z1 = x @ weight1
    a1 = sigmoid(z1)
    z2 = a1 @ weight2
    a2 = sigmoid(z2)

    # Backward pass - backpropagation
    delta2 = (y-a2)
    #---------------------------------------------------------------
    # Calculating change of Cost/Loss wrt weight of 2nd/last layer
    # Eq (A) ---&gt; dC_dw2 = delta2*sigma'(z2)
    # (derv_sigmoid takes the activation, so derv_sigmoid(a2) == sigma'(z2))
    #---------------------------------------------------------------
    dC_dw2 = delta2 * derv_sigmoid(a2)
    if iter == 0:
        print(&quot;Shape dC_dw2&quot;, np.shape(dC_dw2))  # debug

    #---------------------------------------------------------------
    # Calculating change of Cost/Loss wrt weight of 1st/inner layer
    # Eq (B) ---&gt; dC_dw1 = derv_sigmoid(a1)*delta2*derv_sigmoid(a2)*weight2
    # note delta2*derv_sigmoid(a2) == dC_dw2
    # dC_dw1 = derv_sigmoid(a1)*dC_dw2*weight2
    #---------------------------------------------------------------
    dC_dw1 = (np.multiply(dC_dw2, weight2.T)) * derv_sigmoid(a1)
    if iter == 0:
        print(&quot;Shape dC_dw1&quot;, np.shape(dC_dw1))  # debug

    #---------------------------------------------------------------
    # Gradient descent
    #---------------------------------------------------------------
    #weight2 = weight2 - learningRate*dC_dw2  --&gt; this is what the textbook says
    #weight1 = weight1 - learningRate*dC_dw1
    weight2 = weight2 + learningRate*np.dot(a1.T, dC_dw2)  # this is what works
    weight1 = weight1 + learningRate*np.dot(a0.T, dC_dw1)

print(&quot;New output\n&quot;, a2)
</code></pre> <p>Why is</p> <pre><code>weight2 = weight2 + learningRate*np.dot(a1.T, dC_dw2)
weight1 = weight1 + learningRate*np.dot(a0.T, dC_dw1)
</code></pre> <p>done instead of</p> <pre><code>#weight2 = weight2 - learningRate*dC_dw2
#weight1 = weight1 - learningRate*dC_dw1
</code></pre> <p>I do not understand where the equation for updating the weights by multiplying with the activation of the previous layer comes from.</p> <p>As per gradient descent, the weight update should be</p> <p><span class="math-container">$$ W^{l}_{new} = W^{l}_{old} - \gamma \frac{\partial C_0}{\partial w^{l}} $$</span></p> <p>However, what works in practice is</p> <p><span class="math-container">$$ W^{l}_{new} = W^{l}_{old} - \gamma \, \sigma(z^{l-1}) \frac{\partial C_0}{\partial w^{l}}, $$</span></p> <p>where <span class="math-container">$\gamma$</span> is the learning rate.</p>
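One way to see where the multiplication by the previous layer's activation comes from: for a batch, the gradient of the cost with respect to a weight matrix is the sum over samples of the outer products of the previous layer's activation with the per-sample error term, and that sum is exactly `a_prev.T @ delta`. A minimal numpy check (the shapes and data below are made up for illustration, not taken from the network above):

```python
import numpy as np

rng = np.random.default_rng(0)
a_prev = rng.standard_normal((4, 3))  # batch of 4 previous-layer activations
delta = rng.standard_normal((4, 1))   # per-sample error terms dC/dz of this layer

# batched form used in the working update: a_prev.T @ delta
batched = a_prev.T @ delta            # shape (3, 1), same shape as the weight matrix

# the textbook per-sample rule, applied sample by sample and accumulated
looped = np.zeros((3, 1))
for i in range(len(a_prev)):
    looped += np.outer(a_prev[i], delta[i])

assert np.allclose(batched, looped)
```

So the update that "works" is the same textbook rule, just aggregated over the whole batch in one matrix product.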
<p>Okay - the answer is here <a href="https://explained.ai/matrix-calculus/#sec6.2" rel="nofollow noreferrer">https://explained.ai/matrix-calculus/#sec6.2</a> and it is pretty involved. Basically, there is a difference between deriving the equation for one neuron and doing it practically for a set of neurons. The answer is matrix calculus. Here is what I could make out; feel free to correct me if I am wrong.</p> <h2>Gradient Vector/Matrix/2D tensor of the Loss function with respect to the Weight</h2> <p><span class="math-container">$$ C = \frac{1}{2} \sum_j (y_j-a^L_j)^2 $$</span></p> <p>Assuming a neural net with 2 layers, we have the final Loss as</p> <p><span class="math-container">$$ C = \frac{1}{2} \sum_j (y_j-a^2_j)^2 $$</span></p> <p>Where</p> <p><span class="math-container">$$ a^2 = \sigma(w^2.a^1) $$</span></p> <p>We can then write</p> <p><span class="math-container">$$ C = \frac{1}{2} \sum_j v^2 \quad \rightarrow (Eq \;A) $$</span></p> <p>Where</p> <p><span class="math-container">$$ v= y-a^2 $$</span></p> <h2>Partial Derivative of the Loss function with respect to the Weight</h2> <p>For the last layer, let us use the chain rule to split it like below</p> <p><span class="math-container">$$ \frac {\partial C}{\partial w^2} = \frac{1}{2}\,\frac{\partial v^2}{\partial v} * \frac{\partial v}{\partial w^2} \quad \rightarrow (Eq \;B) $$</span></p> <p><span class="math-container">$$ \frac{\partial v^2}{\partial v} =2v \quad \rightarrow (Eq \;B.1) $$</span></p> <p><span class="math-container">$$ \frac{\partial v}{\partial w^2}= \frac{\partial (y-a^2)}{\partial w^2} = 0-\frac{\partial a^2}{\partial w^2} \quad \rightarrow (Eq \;B.2) $$</span></p> <p><span class="math-container">$$ \frac {\partial C}{\partial w^2} = \frac{1}{2} *2v(0-\frac{\partial a^2}{\partial w^2}) \quad \rightarrow (Eq \;B) $$</span></p> <h3>Now we need to find <span class="math-container">$\frac{\partial a^2}{\partial w^2}$</span></h3> <p>Let</p> <p><span class="math-container">$$ a^2= \sigma(sum(w^2 \otimes a^1 )) = \sigma(z^2) $$</span> 
<span class="math-container">$$ z^2 = sum(w^2 \otimes a^1 ) $$</span></p> <p><span class="math-container">$$ z^2 = sum(k^2) \; \text {where} \; k^2=w^2 \otimes a^1 $$</span></p> <p>We now need to derive an intermediate term which we will use later</p> <p><span class="math-container">$$ \frac{\partial z^2}{\partial w^2} =\frac{\partial z^2}{\partial k^2}*\frac{\partial k^2}{\partial w^2} $$</span> <span class="math-container">$$ =\frac {\partial sum(k^2)}{\partial k^2}* \frac {\partial (w^2 \otimes a^1 )} {\partial w^2} $$</span> <span class="math-container">$$ \frac{\partial z^2}{\partial w^2} = (1^{\rightarrow})^T* diag(a^1) =(a^{1})^T \quad \rightarrow (Eq \;B.3) $$</span> For how the above follows, check <a href="https://explained.ai/matrix-calculus/#sec6.2" rel="nofollow noreferrer">https://explained.ai/matrix-calculus/#sec6.2</a></p> <p>Although these are written like scalars here, all of these are actually partial derivatives of vectors with respect to vectors (or of vectors with respect to scalars), and a set of such vectors can be represented as a matrix.</p> <p>Note that the vector dot product <span class="math-container">$w.a$</span>, when applied on matrices, becomes the elementwise multiplication <span class="math-container">$w^2 \otimes a^1$</span> (also called the Hadamard product)</p> <p>Going back to <span class="math-container">$Eq \;(B.2)$</span></p> <p><span class="math-container">$$ \frac {\partial a^2}{\partial w^2} = \frac{\partial a^2}{\partial z^2} * \frac{\partial z^2}{\partial w^2} $$</span></p> <p>Using <span class="math-container">$Eq \;(B.3)$</span> for the term on the right</p> <p><span class="math-container">$$ = \frac{\partial a^2}{\partial z^2} * (a^{1})^T $$</span></p> <p><span class="math-container">$$ = \frac{\partial \sigma(z^2)}{\partial z^2} * (a^{1})^T $$</span></p> <p><span class="math-container">$$ \frac {\partial a^2}{\partial w^2} = \sigma^{'}(z^2) * (a^{1})^T \quad \rightarrow (Eq \;B.4) $$</span></p> <p>Now let us go back to the partial derivative of 
the loss function with respect to the weight</p> <p><span class="math-container">$$ \frac {\partial C}{\partial w^2} = \frac {1}{2}*2v(0-\frac{\partial a^2}{\partial w^2}) \quad \rightarrow (Eq \;B) $$</span> Using <span class="math-container">$Eq \;(B.4)$</span> to substitute in the last term:</p> <p><span class="math-container">$$ = v(0- \sigma^{'}(z^2) * (a^{1})^T) $$</span></p> <p><span class="math-container">$$ = -v*\sigma^{'}(z^2) * (a^{1})^T $$</span></p> <p><span class="math-container">$$ = -(y-a^2)*\sigma^{'}(z^2) * (a^{1})^T $$</span></p> <p><span class="math-container">$$ \frac {\partial C}{\partial w^2}= (a^2-y)*\sigma^{'}(z^2) * (a^{1})^T $$</span></p>
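This final expression can be sanity-checked against a finite-difference gradient. A small numpy sketch (the batch size, layer sizes and random data here are invented for illustration; the batch aggregation multiplies by the transpose of the previous activation, matching the update that works in the question's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
a1 = rng.standard_normal((4, 3))  # previous-layer activations (batch of 4)
w2 = rng.standard_normal((3, 1))  # last-layer weights
y = rng.standard_normal((4, 1))   # targets

def cost(w):
    a2 = sigmoid(a1 @ w)
    return 0.5 * np.sum((y - a2) ** 2)

# closed-form gradient from the derivation: (a2 - y) * sigma'(z2),
# aggregated over the batch by multiplying with a1^T
z2 = a1 @ w2
a2 = sigmoid(z2)
analytic = a1.T @ ((a2 - y) * sigmoid(z2) * (1 - sigmoid(z2)))

# numerical gradient via central differences
eps = 1e-6
numeric = np.zeros_like(w2)
for i in range(w2.shape[0]):
    wp, wm = w2.copy(), w2.copy()
    wp[i] += eps
    wm[i] -= eps
    numeric[i] = (cost(wp) - cost(wm)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
```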
349
language modeling
Any popular diffusion model for language modeling?
https://ai.stackexchange.com/questions/45902/any-popular-diffusion-model-for-language-modeling
<p>Is there a popular diffusion model-based framework for language modelling? If not, is it because of the difficulty sampling for discrete distributions?</p>
<p>The problem with the current diffusion models is that they are great density models, but they require knowing a priori the size of the sample (i.e. the size of the image), which is definitely not the case for text.</p> <p>However, you might get around this by first having an encoder model that encodes text into a fixed-size vector, and a decoder that decodes that back to text, and then using the diffusion model on the latent space in between.</p> <p>However, it most likely won't give great results, for the same reasons Seq2Seq is not the SoTA for text generation.</p>
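The workaround described (running the diffusion on a fixed-size latent from a text encoder) relies on the standard continuous forward-noising process, which is only well defined because the latent has a fixed dimensionality. A minimal numpy sketch of that forward step, with made-up sizes and a made-up noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 64                       # fixed size -- the property raw text lacks
z0 = rng.standard_normal(latent_dim)  # latent from a (hypothetical) text encoder

# linear beta schedule; alpha_bar_t = prod_s (1 - beta_s)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(z0, t, rng):
    """Sample z_t ~ q(z_t | z_0) = N(sqrt(abar_t) * z0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

z_mid = q_sample(z0, 50, rng)
z_end = q_sample(z0, T - 1, rng)

# late in the process the latent is mostly noise: alpha_bar has decayed
assert z_end.shape == z0.shape
```

A denoising network trained to invert these steps, plus the decoder back to text, would complete the pipeline sketched above.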
350
language modeling
Where can I find pre-trained language models in English and German?
https://ai.stackexchange.com/questions/7684/where-can-i-find-pre-trained-language-models-in-english-and-german
<p>Where can I find (more) pre-trained <a href="https://en.wikipedia.org/wiki/Language_model" rel="nofollow noreferrer">language models</a>? I am especially interested in <strong>neural network-based</strong> models for <strong>English and German</strong>.</p> <p>I am aware only of <a href="https://github.com/tensorflow/models/tree/master/research/lm_1b" rel="nofollow noreferrer">Language Model on One Billion Word Benchmark</a> and <a href="https://github.com/lverwimp/tf-lm" rel="nofollow noreferrer">TF-LM: TensorFlow-based Language Modeling Toolkit</a>.</p> <p>I am surprised not to find a greater wealth of models for different frameworks and languages.</p>
<p>Of course now there has been a huge development: Huggingface published <a href="https://github.com/huggingface/pytorch-transformers" rel="nofollow noreferrer">pytorch-transformers</a>, a library for the highly successful Transformer models (BERT and its variants, GPT-2, XLNet, etc.), including many pretrained (mostly English or multilingual) models (<a href="https://huggingface.co/pytorch-transformers/" rel="nofollow noreferrer">docs here</a>). It also includes one German BERT model. SpaCy offers a <a href="https://github.com/explosion/spacy-pytorch-transformers" rel="nofollow noreferrer">convenient wrapper</a> (<a href="https://explosion.ai/blog/spacy-pytorch-transformers" rel="nofollow noreferrer">blog post</a>).</p> <p>Update: Now, Salesforce published the English model <a href="https://github.com/salesforce/ctrl" rel="nofollow noreferrer">CTRL</a>, which allows for use of "control codes" that influence the style, genre and content of the generated text.</p> <p>For completeness, here is the old, now less relevant version of my answer:</p> <hr> <p>Since I posed the question, I found this <strong>pretrained German language model</strong>: <a href="https://lernapparat.de/german-lm/" rel="nofollow noreferrer">https://lernapparat.de/german-lm/</a></p> <p>It is an instance of a <a href="https://github.com/Bachfischer/german2vec" rel="nofollow noreferrer">3-layer "averaged stochastic gradient descent weight-dropped" (AWD) LSTM</a> which was built on an implementation by <a href="https://github.com/salesforce/awd-lstm-lm" rel="nofollow noreferrer">Salesforce</a>.</p>
351
language modeling
publically available language models that can be used to train arbitrary language data?
https://ai.stackexchange.com/questions/43408/publically-available-language-models-that-can-be-used-to-train-arbitrary-languag
<p>I have sentence data in a language that is not widely in use and as such popular LLMs do not support the language. I want to train some language model such that given some question, it is able to respond back in the same language, just as in ChatGPT just with a different language.</p> <p>In such a case, what language model is publically available and is sufficiently powerful? Or would it be possible to use popular LLMs (such as ChatGPT) to achieve such a goal?</p>
<p>One can use the <a href="https://huggingface.co/papers/2309.09400" rel="nofollow noreferrer">CulturaX</a> dataset (<a href="https://arxiv.org/pdf/2309.09400.pdf" rel="nofollow noreferrer">paper</a>) to train an LLM in another language. It contains 27 TB of texts divided into 167 languages.</p> <p>Otherwise, one can use <a href="https://github.com/nlp-uoregon/Okapi" rel="nofollow noreferrer">Okapi</a> (<a href="https://arxiv.org/pdf/2307.16039.pdf" rel="nofollow noreferrer">paper</a>):</p> <blockquote> <p>Okapi Resources: We provide resources to perform instruction tuning with RLHF for 26 languages, including ChatGPT prompts, multilingual instruction datasets and multilingual response ranking data.</p> <p>Okapi Models: We provide RLHF-based instruction-tuned LLMs for 26 languages on Okapi dataset. Our models include both BLOOM-based and LLaMa-based versions. We also provide scripts to interact with our models and fine-tune LLMs with our resources.</p> <p>Multilingual Evaluation Benchmark Datasets: We provide three benchmark datasets for evaluating Multilingual Large Language Models (LLMs) for 26 languages. You can access the full datasets and evaluation scripts: <a href="https://github.com/nlp-uoregon/mlmm-evaluation" rel="nofollow noreferrer">here</a>.</p> </blockquote>
352
language modeling
What does the outputlayer of BERT for masked language modelling look like?
https://ai.stackexchange.com/questions/26284/what-does-the-outputlayer-of-bert-for-masked-language-modelling-look-like
<p>In the tutorial <a href="https://www.lyrn.ai/2018/11/07/explained-bert-state-of-the-art-language-model-for-nlp/" rel="nofollow noreferrer">BERT – State of the Art Language Model for NLP</a> the masked language modeling pre-training steps are described as follows:</p> <blockquote> <p>In technical terms, the prediction of the output words requires:</p> <ol> <li>Adding a classification layer on top of the encoder output.</li> <li>Multiplying the output vectors by the embedding matrix, transforming them into the vocabulary dimension.</li> <li>Calculating the probability of each word in the vocabulary with softmax.</li> </ol> </blockquote> <p>The figure below, also from the tutorial, visualizes this process.</p> <p>I am confused about what exactly is done. Does it mean that each output vector O is fed into a fully connected layer with embedding_size neurons and then multiplied by the embedding matrix from the input layer?</p> <p>Update:</p> <p>In the tutorial <a href="http://jalammar.github.io/illustrated-gpt2/" rel="nofollow noreferrer">The Illustrated GPT-2 (Visualizing Transformer Language Models)</a> I found an explanation for GPT-2 which seems to be similar to my question.</p> <p>That tutorial says that each output vector is multiplied by the input embedding matrix to get the final output.</p> <p>Does the same mechanic apply to BERT?</p> <p><img src="https://miro.medium.com/max/698/0*ViwaI3Vvbnd-CJSQ.png" alt="Text" /></p>
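For context, the core of steps 2-3 of the quote reduces to multiplying the encoder output vector by the input embedding matrix and applying a softmax (in BERT's reference implementation, as far as I understand, a small dense transform is applied first and the decoder weights are tied to the input embeddings). A toy numpy sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden = 10, 4

E = rng.standard_normal((vocab_size, hidden))  # input embedding matrix
h = rng.standard_normal(hidden)                # encoder output O at one masked position
# (BERT additionally passes h through a dense layer + activation + LayerNorm
# before this step; omitted here for brevity)

logits = E @ h                      # multiply by embedding matrix -> vocab dimension
probs = np.exp(logits - logits.max())
probs /= probs.sum()                # softmax: probability of each vocabulary word

assert probs.shape == (vocab_size,)
```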
353
language modeling
What is a neuron in large language models?
https://ai.stackexchange.com/questions/40385/what-is-a-neuron-in-large-language-models
<p>I'm reading OpenAI's new paper &quot;<a href="https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html" rel="nofollow noreferrer">Language models can explain neurons in language models</a>&quot; And I can't fully understand the concept of neurons here.</p> <p>Can you please explain it? Is it related to the attention mechanism?</p>
<p>It seems that in this paper "neuron" refers to the individual units (perceptrons) in the MLP layers of the transformer blocks, rather than to the attention mechanism.</p>
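Concretely, under that reading a "neuron" is one hidden unit of a transformer block's feed-forward (MLP) sub-layer: its activation is a single scalar per token position, which is what the paper's explanations describe. A toy numpy sketch (all sizes invented):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32  # residual-stream and MLP hidden sizes (made up)

W_in = rng.standard_normal((d_ff, d_model))  # each row corresponds to one "neuron"
b_in = np.zeros(d_ff)
x = rng.standard_normal(d_model)             # residual-stream vector at one token

# hidden activations of the MLP sub-layer: one scalar per neuron per token
acts = np.maximum(0.0, W_in @ x + b_in)      # ReLU used here for simplicity

neuron_7 = acts[7]  # "what makes neuron 7 fire?" is the kind of question
                    # the paper tries to answer in natural language
assert acts.shape == (d_ff,)
```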
354
language modeling
Transformer Language Model generating meaningless text
https://ai.stackexchange.com/questions/23809/transformer-language-model-generating-meaningless-text
<p>I am currently learning about Transformers, so to check my understanding I tried implementing a small transformer-based language model and comparing it to an RNN-based language model. Here's the code for the transformer. I'm using PyTorch's built-in layer for the Transformer encoder.</p> <pre><code>class TransformerLM_1(nn.Module):
    def __init__(self, head, vocab_size, embedding_size, dropout = 0.1, device = 'cpu',
                 pad_idx = 0, start_idx = 1, end_idx = 2, unk_idx = 3):
        super(TransformerLM_1, self).__init__()
        self.head = head
        self.embedding_size = embedding_size
        self.vocab_size = vocab_size
        self.device = device
        self.embed = WordEmbedding(self.vocab_size, self.embedding_size, pad_idx)
        self.postional_encoding = PostionalEncoding(embedding_size, device)
        self.decoder = nn.TransformerEncoderLayer(self.embedding_size, self.head)
        self.out_linear = nn.Linear(self.embedding_size, vocab_size)
        self.dropout = dropout
        self.pad_idx = pad_idx
        self.start_idx = start_idx
        self.end_idx = end_idx
        self.unk_idx = unk_idx
        self.device = device

    def make_src_mask(self, src_sz):
        mask = (torch.triu(torch.ones(src_sz, src_sz)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, 10e-20).masked_fill(mask == 1, float(0.0))
        mask = mask.to(self.device)
        return mask

    def forward(self, x):
        dec_in = x.clone()[:, :-1]
        src_mask = self.make_src_mask(dec_in.size()[1])
        src = self.embed(dec_in)
        src = self.postional_encoding(src)
        src = src.transpose(0,1)
        transformer_out = self.decoder(src, src_mask)
        out = self.out_linear(transformer_out)
        return out
</code></pre> <p>I'm using teacher forcing to make it converge faster. 
From what I saw of the results, the text generated by the RNN model is better than the transformer's.</p> <p>Here is sample generated text together with the expected text:</p> <pre><code>Expected: you had to have been blind not to see the scenario there for what it was and is and will continue to be for months and even years a part of south carolina that has sustained a blow that the red cross expects will cost that organization alone some $ n million &lt;eos&gt;
Predicted: some &lt;unk&gt; been the been &lt;unk&gt; not be $ the total has was the may has &lt;unk&gt; the that that be to the &lt;unk&gt; the
Expected: citicorp and chase are attempting to put together a new lower bid &lt;eos&gt;
Predicted: a are &lt;unk&gt; carries n't to the together with &lt;unk&gt; jersey than
Expected: it ' s amazing the amount of money that goes up their nose out to the dog track or to the tables in las vegas mr . katz says &lt;eos&gt;
Predicted: &lt;unk&gt; ' s &lt;unk&gt; comeback money of the in mr to their &lt;unk&gt; and of &lt;unk&gt; &lt;unk&gt; or or &lt;unk&gt; the money
Expected: moreover while asian and middle eastern investors &lt;unk&gt; gold and help &lt;unk&gt; its price silver does n't have the same &lt;unk&gt; dealers say &lt;eos&gt;
Predicted: the production the routes &lt;unk&gt; of its
Expected: a board of control spokesman said the board had not seen the claim and declined to comment &lt;eos&gt;
Predicted: the board said declined of said
Expected: property capital trust said it dropped its plan to liquidate because it was n't able to realize the value it had expected &lt;eos&gt;
Predicted: the claims markets said its was n &lt;unk&gt; to sell insolvent of was n't disclosed to sell its plan
Expected: similarly honda motor co . ' s sales are so brisk that workers &lt;unk&gt; they have n't had a saturday off in years despite the government ' s encouragement of more leisure activity &lt;eos&gt;
Predicted: the honda ' credit .
s s &lt;unk&gt;
Expected: we expect a big market in the future so in the long term it will be profitable &lt;eos&gt;
Predicted: it can it &lt;unk&gt; board
Expected: u . k . composite or &lt;unk&gt; insurers which some equity analysts said might be heavily hit by the earthquake disaster helped support the london market by showing only narrow losses in early trading &lt;eos&gt;
Predicted: the . s . s trading sell said which &lt;unk&gt; traders market said the be able in the the earthquake
Expected: this will require us to define and &lt;unk&gt; what is necessary or appropriate care &lt;eos&gt;
Predicted: &lt;unk&gt; is be the $ &lt;unk&gt; &lt;unk&gt; &lt;unk&gt; &lt;unk&gt; is the to &lt;unk&gt; and or
</code></pre> <p>As you can see, the Transformer fails to grasp grammar compared to the RNN. Is there anything wrong with my understanding?</p> <p>EDIT</p> <p>This is one example that caught my eye:</p> <pre><code>Expected: also the big board met with angry stock specialists &lt;eos&gt;
Predicted: also met specialists board met the stock big with after
</code></pre> <p>Most of the predicted words come from the expected output, but in a different order. I have read that transformers are permutation invariant, which is the reason why we include positional encoding with the word embedding.</p>
<p>This is probably an issue of complete underfitting. How much training data do you use? What is your vocab size? What is your batch size, and for how many epochs did you train? Transformers always need more data than RNNs to reach good text quality.</p>
355
language modeling
Is large language model and foundation model the same thing?
https://ai.stackexchange.com/questions/36766/is-large-language-model-and-foundation-model-the-same-thing
<p>I read a lot about foundation model and large language model.</p> <p>However, I dont find a clear definition what exactly is a foundation model. Is large language model and foundation model the same thing?</p>
<p>At this point in time, there does not appear to be a really widely-agreed-upon definition of &quot;Foundation models&quot;. If you want one, the best place to go would be <a href="https://arxiv.org/abs/2108.07258" rel="noreferrer">this paper from Stanford that coined the term</a>. Generally at least some of the following ideas apply (according to some people they must all apply, according to others only some need to apply):</p> <ul> <li>Trained using unsupervised or self-supervised learning.</li> <li>Large (deep neural network) model.</li> <li>Not intended to be used directly for any particular end-goal.</li> <li>Intended to serve as a basis (&quot;foundation&quot;). You can think of this as &quot;warm-starting&quot;, starting from a well-trained initial model, that you can then fine-tune with further training (for example, supervised learning) for any specific task you personally have in mind.</li> <li>Trained on multimodal data (not just text, not just images, not just audio, etc., but a mix of things). <em>In my personal opinion this point really is not strictly necessary, and I think most people would agree, but I've seen some people saying this.</em></li> </ul> <p>Large Language Models would typically be trained specifically on language-related data (text). So, I suppose an LLM could sometimes serve as a Foundation model, but it's not necessarily the same thing.</p> <p><strong>NOTE</strong>: I would like to remark that, from many experts outside of Stanford, there is <a href="https://twitter.com/tdietterich/status/1558256704696905728" rel="noreferrer">quite a bit of pushback against the particular term &quot;Foundation model&quot;</a>. There seems to be a bit of PR involved, where it seems like they may like to coin a new term for something that really doesn't need a new term. Just saying it's, for example, a <a href="https://twitter.com/raphaelmilliere/status/1558460796321021953" rel="noreferrer">large pre-trained model</a>, may be more clear.</p>
356
language modeling
Language models of virtual assistants before transformers
https://ai.stackexchange.com/questions/47883/language-models-of-virtual-assistants-before-transformers
<p>How were the language models of virtual assistants (Siri, Google Assistant, Alexa, etc.) designed and trained before transformers? What were their architectures?</p>
<p>Before transformers, the <a href="https://en.wikipedia.org/wiki/Natural_language_understanding" rel="nofollow noreferrer">natural language understanding (NLU)</a> NLP task was handled by encoder-decoder architectures in which LSTM and GRU RNN models acted as encoders to transform the input sequence, and <a href="https://en.wikipedia.org/wiki/Conditional_random_field" rel="nofollow noreferrer">conditional random fields (CRFs)</a> were applied for <a href="https://en.wikipedia.org/wiki/Structured_prediction" rel="nofollow noreferrer">structured prediction</a>. And before those, there were rule-based statistical language models, where hand-crafted rules handled the common cases encountered by the virtual assistant while statistical models such as n-gram/TF-IDF handled edge cases or ambiguities.</p> <blockquote> <p>in natural language processing, "linear chain" CRFs are popular, for which each prediction is dependent only on its immediate neighbours.</p> </blockquote> <p>Compared to transformers, these approaches relied heavily on handcrafted features and struggled with understanding long-range complex queries or multi-turn conversational context. Updating these models to handle new domains or intents required significant effort from scratch, since they are not scalable, generalizable, and transferable the way foundation transformers are.</p>
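For the linear-chain CRF mentioned above, decoding the best tag sequence (e.g. slot labels for a voice command) is done with the Viterbi algorithm, since each prediction depends only on its immediate neighbour. A compact sketch with made-up scores:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best tag path for a linear-chain CRF.

    emissions: (T, K) per-position tag scores; transitions: (K, K) pairwise scores.
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # score of arriving at tag j from the best previous tag i
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy example: 3 positions, 2 tags; transitions discourage tag changes
emissions = np.array([[2.0, 0.0], [0.1, 0.0], [0.0, 2.0]])
transitions = np.array([[0.5, -1.0], [-1.0, 0.5]])
best = viterbi(emissions, transitions)  # best path given these scores
assert len(best) == 3
```

In the pre-transformer systems described above, the emission scores would come from the RNN encoder and the transition scores from the CRF layer.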
357
language modeling
Large Language Models vs Tabular Data
https://ai.stackexchange.com/questions/39652/large-language-models-vs-tabular-data
<p><strong>Problem:</strong><br /> Let's say we want to predict insurance frauds. Whenever we obtain an insurance claim, we are provided with a free-form description detailing the loss and a substantial amount of data on the claimant, presented in a tabular format.</p> <p><strong>Questions:</strong><br /> How can we utilize both the written loss description and the extensive structured data we have gathered? Should we build two separate models, one for natural language processing and the other for tabular data? Is it possible for Large Language Models (LLMs) to extract insights from tabular data? If so, how? What limitations or pitfalls should we bear in mind?</p>
<p>In PyTorch you can build models that consist in part of, for example, a pretrained BERT model, and then add some custom or additional layers on top.</p>
358
language modeling
Are embeddings in multi-lingual language models comparable across languages?
https://ai.stackexchange.com/questions/16353/are-embeddings-in-multi-lingual-language-models-comparable-across-languages
<p>Facebook has <a href="https://arxiv.org/pdf/1911.02116.pdf" rel="nofollow noreferrer">just pushed out</a> a bigger version of their multi-lingual language model XLM, called XLM-R. My question is: do these kind of multi-lingual models imply, or even ensure, that their embeddings are comparable between languages? That is, are semantically related words close together in the vector space across languages?</p> <p>Perhaps the most interesting citation from the paper that is relevant to my question (p. 3):</p> <blockquote> <p>Unlike Lample and Conneau (2019), we do not use language embeddings, which allows our model to better deal with code-switching.</p> </blockquote> <p>Because they do not seem to make a distinction between languages, and there's just one vocabulary for all trained data, I fail to see how this can be truly representative of semantics anymore. The move away from semantics is increased further by the use of BPE, since morphological features (or just plain, statistical <em>word chunks</em>) of one language might often not be semantically related to the same chunk in another language - this can be true for tokens themselves, but especially so for subword information.</p> <p>So, in short: how well can the embeddings in multi-lingual language models be used for semantically comparing input (e.g. a word or sentence) of two different languages?</p>
<p>Embeddings generated by transformers like BERT or XLM-R are fundamentally different from embeddings learned through language models like GloVe or Word2Vec. The latter are static, i.e. they are just dictionaries containing a vocabulary with an n-dimensional vector associated with each word. Because of this they can be plotted through PCA and the distance between them can be easily calculated with whatever metric you prefer.</p> <p>When training BERT or XLM-R, instead, you are not learning vectors, but the parameters of a transformer. The embeddings for each token are then generated once a token is fed into the transformer. This implies several things, the most important being that the hidden representation (the embedding) for a token changes depending on the context (recall that XLM-R also uses as input the hidden states generated by the previous token). This means that there are no static vectors to compare by plotting them or by calculating the cosine similarity. Nevertheless, there are ways to analyse and visualise the syntax and semantics encoded in the parameters; this paper shows some strategies: <a href="https://arxiv.org/pdf/1906.02715.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1906.02715.pdf</a></p> <p>On a more linguistic side, I would also ask why vectors of the same words should show the same semantic properties across languages. Surely there are similarities for a lot of words translated literally, but the use of some expressions is inherently different across languages. To give a quick example: in English the clock 'works', in Dutch the clock 'lopen' (it walks) and in Italian the clock 'funziona' (it functions). Same expression, three different words in different languages that do not necessarily share the same neighbours in their monolingual latent spaces. 
The point of transformers is exactly to move from static representations to dynamic ones that are able to learn that all three of those verbs (each in its own language) can appear early in a sentence and close to the word clock. </p>
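To make the contrast concrete: with static (GloVe/Word2Vec-style) embeddings, cross-word comparison is a single cosine similarity between fixed vectors, which is precisely what contextual models no longer offer directly. A toy numpy illustration with made-up vectors (real embeddings have 50-300+ dimensions):

```python
import numpy as np

# tiny made-up "static" embedding table
static = {
    "clock":  np.array([0.9, 0.1, 0.0]),
    "watch":  np.array([0.8, 0.2, 0.1]),
    "banana": np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# fixed vectors -> exactly one similarity per word pair, independent of context
sim_related = cosine(static["clock"], static["watch"])
sim_unrelated = cosine(static["clock"], static["banana"])

assert sim_related > sim_unrelated
```

For a contextual model, `static["clock"]` would have to be replaced by a different vector for every sentence the word appears in, so this simple lookup-and-compare no longer exists.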
359
language modeling
How can i create a new language model for language other than english?
https://ai.stackexchange.com/questions/38706/how-can-i-create-a-new-language-model-for-language-other-than-english
<p>I have a large corpus of all literature in the Tamil language, and I am trying to create a document retrieval engine driven by simple natural language.</p> <p>Since the corpus is huge, it is hard to do supervised learning. I also checked ai4bharat.org's initiatives to build language models, but they still do not work well on the literature texts.</p> <p>What is required to build a model from scratch?</p>
360
language modeling
LSTM language model not working
https://ai.stackexchange.com/questions/9756/lstm-language-model-not-working
<p>I am trying to use a Keras LSTM neural network for character level language modelling. As the input, I give it the last 50 characters and it has to output the next one. It has 3 layers of 400 neurons each. For the training data, I am using 'War of The Worlds' by H.G. Wells which adds up to 269639 training samples and 67410 validation samples.</p> <p>After 7 epochs the validation accuracy has reached 35.1% and the validation loss has reached 2.31. However, after being fed the first sentence of war of the worlds to start it outputs:</p> <blockquote> <p>the the the the the the the the the the the the the the the the...</p> </blockquote> <p>I'm not sure where I'm going wrong; I don't want it to overfit and output passages straight from the training data but I also don't want it to just output 'the' repeatedly. I'm really at a loss as to what I should do to improve it.</p> <p>Any help would be greatly appreciated. Thanks!</p>
<p>You can follow the steps below:</p> <ol> <li>LSTMs are slower to converge. They take a long time to train and thereby give better results. Try training the network for more epochs, like 50 or 65.</li> <li>Use a smaller batch size. Try using the RMSProp optimizer. It takes a long time to converge but gives excellent results in the case of recurrent nets.</li> <li>Also, try word-level predictions. That is, train your LSTM in such a way that it predicts words and not characters.</li> <li>Use dropout layers between two LSTM layers, with a rate like 0.2 or 0.35.</li> </ol> <p>As a working example, you can refer to this <a href="https://github.com/shubham0204/Poem_Writer_AI" rel="nofollow noreferrer">project</a>, which generates poem lines using an LSTM in TensorFlow.</p>
361
language modeling
How to combine language models and theorem provers?
https://ai.stackexchange.com/questions/46225/how-to-combine-language-models-and-theorem-provers
<p>Suppose that a mathematician wants to find a proof of a conjecture or understand the details of some proof. Can one combine language models like ChatGPT or Copilot with theorem-proving software like Coq, and iterate on the outputs until a proof is found?</p>
362
language modeling
How to combine pretrained language models with additional feature set?
https://ai.stackexchange.com/questions/37026/how-to-combine-pretrained-language-models-with-additional-feature-set
<p>Are there any techniques to combine a feature set (other than the text itself) with pretrained language models?</p> <p>Let's say I have a random NLP task that tries to predict a binary class label based on, e.g., Twitter data. One could easily utilize a pretrained language model such as BERT/GPT-3 etc. and fine-tune it on the text of the tweets. However, the tweets come with a lot of useful metadata, such as likes/retweets, and I may also want to add syntactic features such as POS tags, dependency relations or any other generated feature. Is it possible to use the additional features I extracted in the fine-tuning step of the pretrained language model? Or is the only way of doing so to use an ensemble classifier, i.e. basically write a classifier for each of the extracted features and combine all of their predictions with the fine-tuned LM's predictions?</p>
<p>Although barely related, this brings to my mind a concept I first learned in <em>conditional GANs</em> - there, if you wanted an output of some specific class, you would take your random noise input <span class="math-container">$z$</span> (that is given to your generative model <span class="math-container">$G$</span>) and concatenate it with some representation of the class instance. On the <code>MNIST</code> digits dataset, for example, you can provide <span class="math-container">$[z,(1,0,...,0)]$</span> where <span class="math-container">$(1,0,...,0)$</span> represents a one-hot encoding of the digit <span class="math-container">$0$</span>. So <em>perhaps</em> you could use this concatenation concept for your task:</p> <ol> <li>extract features <span class="math-container">$\{a_i\}_{i=1}^N$</span> from the text itself - this can be done using a transformer, as suggested, or some RNN architecture like LSTM or GRU.</li> <li>extract features <span class="math-container">$\{b_i\}_{i=1}^N$</span> from the metadata - there are some approaches that deal with tabular data, but I would start with a basic MLP</li> <li>concatenate the two feature vectors <span class="math-container">$c_i=[a_i,b_i]$</span></li> <li>yield a final prediction using yet another model that takes <span class="math-container">$\{c_i\}_{i=1}^N$</span> as input</li> </ol> <hr /> <p>Edit: I've looked this up online, and found some supporting evidence - check out <a href="https://towardsdatascience.com/integrating-image-and-tabular-data-for-deep-learning-9281397c7318" rel="nofollow noreferrer">this</a> blog post, for example.</p>
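The four steps above can be sketched end-to-end in numpy with stand-in feature extractors (a real system would use a transformer/RNN for the text features and an MLP for the metadata; all sizes, data and weights below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, text_dim, meta_dim, hidden = 5, 8, 3, 6

# 1) text features a_i (stand-in for transformer/RNN output)
a = rng.standard_normal((n, text_dim))
# 2) metadata features b_i (stand-in for an MLP over likes/retweets/POS counts)
b = rng.standard_normal((n, meta_dim))

# 3) concatenate per sample: c_i = [a_i, b_i]
c = np.concatenate([a, b], axis=1)
assert c.shape == (n, text_dim + meta_dim)

# 4) final classifier head on the concatenated features
W1 = rng.standard_normal((text_dim + meta_dim, hidden))
W2 = rng.standard_normal((hidden, 1))
h = np.tanh(c @ W1)
p = 1.0 / (1.0 + np.exp(-(h @ W2)))  # binary class probability per sample

assert p.shape == (n, 1)
```

In practice the whole stack (both extractors plus the head) would be trained jointly so the text and metadata branches can adapt to each other.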
363
language modeling
Fundamentally, what is a perfect language model?
https://ai.stackexchange.com/questions/23557/fundamentally-what-is-a-perfect-language-model
<p>Suppose that we want to generate a sentence made of words according to language <span class="math-container">$L$</span>: <span class="math-container">$$ W_1 W_2 \ldots W_n $$</span></p> <p><strong>Question:</strong> What is the perfect language model?</p> <p>I ask about <em>perfect</em> because I want to know the concept fundamentally at its fullest extent. I am not interested in knowing heuristics or shortcuts that reduce the complexity of its implementation.</p> <hr /> <h1>1. My thoughts so far</h1> <h2>1.1. Sequential</h2> <p>One possible way to think about it is moving from left to right. So, 1st, we try to find out the value of <span class="math-container">$W_1$</span>. To do so, we choose the specific word <span class="math-container">$w$</span> from the space of words <span class="math-container">$\mathcal{W}$</span> that's used by the language <span class="math-container">$L$</span>. Basically: <span class="math-container">$$ w_1 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_1 = w) $$</span></p> <p>Then, we move forward to find the value of the next word <span class="math-container">$W_2$</span> as follows: <span class="math-container">$$ w_2 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_2 = w | W_1 = w_1) $$</span></p> <p>Likewise for <span class="math-container">$W_3, \ldots, W_n$</span>: <span class="math-container">$$ w_3 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_3 = w | W_1 = w_1, W_2=w_2) $$</span> <span class="math-container">$$ \vdots $$</span> <span class="math-container">$$ w_n = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_n = w | W_1 = w_1, W_2=w_2, \ldots W_{n-1}=w_{n-1}) $$</span></p> <p>But is this really <em>perfect</em>? I personally doubt it. I think that while language is usually read and written from a given direction (e.g. left to right), it is not always done so, and in many cases language is read/written in a funny order, as we often do. E.g. 
even when I wrote this question, I jumped back and forth, then went to edit it (as I'm doing now). So I clearly didn't write it from left to right! Similarly, <em>you</em>, the reader; you won't really read it in a single pass from left to right, will you? You will probably read it in some funny order and go back and forth for a while until you arrive at an understanding. So I personally really doubt that the sequential formalism is perfect.</p> <h2>1.2. Joint</h2> <p>Here we find all the <span class="math-container">$n$</span> words jointly. Of course ridiculously expensive computationally (if implemented), but our goal here is only to understand the problem at its fullest.</p> <p>Basically, we get the <span class="math-container">$n$</span> words as follows:</p> <p><span class="math-container">$$ (w_1, w_2, \ldots, w_n) = \underset{(w_1,w_2,\ldots,w_n) \in \mathcal{W}^n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots W_n=w_n) $$</span></p> <p>This is a <em>perfect</em> representation of a language model in my opinion, because its answer is guaranteed to be correct. But there is this annoying aspect which is that its <em>words candidates space</em> is needlessly large!</p> <p>E.g. this formalism is basically saying that the following is a candidate words sequence: <span class="math-container">$(., Hello, world, !)$</span> even though we know that in (say) English a sentence cannot start with a dot &quot;.&quot;.</p> <h2>1.3. 
Joint but <em>slightly</em> smarter</h2> <p>This is very similar to <strong>1.2 Joint</strong>, except that it deletes the single bag of all words <span class="math-container">$\mathcal{W}$</span>, and instead introduces several bags <span class="math-container">$\mathcal{W}_1, \mathcal{W}_2, \ldots, \mathcal{W}_n$</span>, which work as follows:</p> <ul> <li><span class="math-container">$\mathcal{W}_1$</span> is a bag that contains words that can <em>only</em> appear as <em>1<sup>st</sup></em> words.</li> <li><span class="math-container">$\mathcal{W}_2$</span> is a bag that contains words that can <em>only</em> appear as <em>2<sup>nd</sup></em> words.</li> <li><span class="math-container">$\vdots$</span></li> <li><span class="math-container">$\mathcal{W}_n$</span> is a bag that contains words that can <em>only</em> appear as <em><span class="math-container">$n$</span><sup>th</sup></em> words.</li> </ul> <p>This way, we will avoid the <em>stupid</em> candidates that <strong>1.2. Joint</strong> evaluated, by following this: <span class="math-container">$$ (w_1, w_2, \ldots, w_n) = \underset{w_1 \in \mathcal{W}_1, w_2 \in \mathcal{W}_2, \ldots, w_n \in \mathcal{W}_n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots W_n=w_n) $$</span></p> <p>This will also guarantee being a <em>perfect</em> representation of a language model, yet its <em>candidates</em> space is smaller than the one in <strong>1.2. Joint</strong>.</p> <h2>1.4. Joint but <em>fully</em> smart</h2> <p>Here is where I'm stuck!</p> <p><strong>Question rephrase (in case it helps):</strong> Is there any formalism that gives the perfect correctness of <strong>1.2.</strong> and <strong>1.3.</strong>, while also being <em>fully</em> smart in that its candidates space is smallest?</p>
<p>One of your hypotheses is very close to the truth: it's 1.2.</p> <p>So, a language model measures the probability of a given sentence in a language <span class="math-container">$L$</span>. The sentences can have any length and the sum of probabilities of all the sentences in the language <span class="math-container">$L$</span> is 1. It's very difficult to compute, thus people use some simplifications, e.g. assuming that if words are located far enough from each other, then the occurrence of the current word doesn't depend on a word which occurred far away in the past. Each sentence is a sequence <span class="math-container">$w_1, \dots, w_n$</span> and a language model computes the probability of the sequence <span class="math-container">$p([w_1, \dots w_n])$</span> (it's not a joint distribution yet). It can be decomposed into a joint distribution with some special tokens added: <span class="math-container">$p(BOS, w_1, \dots, w_n, EOS)$</span>. BOS is begin of sentence and EOS is end of sentence. Then this joint distribution can be decomposed using the chain rule: <span class="math-container">$p(BOS, w_1, \dots, w_n, EOS) = p(BOS) \Big[ \prod\limits_{i=1}^n p(w_i | BOS, w_1, \dots, w_{i-1}) \Big] p(EOS | BOS, w_1, \dots, w_n)$</span>, where the <span class="math-container">$i=1$</span> factor of the product is just <span class="math-container">$p(w_1 | BOS)$</span>. There are 2 types of probabilities that are usually modelled differently: a prior probability <span class="math-container">$p(BOS)$</span>, which is always equal to 1, because you always have BOS as the first token in the augmented sequence. Then conditional probabilities can be computed as follows: <span class="math-container">$p(w_i | BOS, w_1, \dots, w_{i-1}) = \frac{c(BOS, w_1, \dots, w_{i-1}, w_i)}{\sum_{w_i \in W} c(BOS, w_1, \dots, w_{i-1}, w_i)}$</span>, where <span class="math-container">$c$</span> is a counter function that measures how many times a given sequence occurred in the dataset you specified to train your model. You can notice it's a maximum likelihood estimate of the unknown conditional probabilities. 
Obviously, if you're using a certain dataset you compute a model of that dataset, not of the language, but that's the way to approximate the true probabilities of sentences in a language. The EOS token is needed to distinguish between the probability of a sequence that has not yet finished and one that has, because if you take those counters from above and forget to add EOS at the end of all sentences in your dataset, you'll get probabilities that don't sum to 1 (which is bad).</p>
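As a toy illustration of the counters and of the role of BOS/EOS, here is a bigram version of the maximum-likelihood estimate above (the two-sentence corpus is made up for the demo):

```python
from collections import Counter

corpus = [["the", "cat", "ate"], ["the", "fish", "ate"]]
BOS, EOS = "<s>", "</s>"

# c(...) counters over BOS/EOS-augmented sentences
bigram_count, context_count = Counter(), Counter()
for sent in corpus:
    toks = [BOS] + sent + [EOS]
    for prev, cur in zip(toks, toks[1:]):
        bigram_count[(prev, cur)] += 1
        context_count[prev] += 1

def cond_prob(cur, prev):
    # maximum likelihood estimate: c(prev, cur) / sum_w c(prev, w)
    return bigram_count[(prev, cur)] / context_count[prev]

def sentence_prob(sent):
    # chain rule over the augmented sequence, EOS factor included
    toks = [BOS] + sent + [EOS]
    prob = 1.0
    for prev, cur in zip(toks, toks[1:]):
        prob *= cond_prob(cur, prev)
    return prob

# With EOS included, the probabilities of all sentences sum to 1
total = sentence_prob(["the", "cat", "ate"]) + sentence_prob(["the", "fish", "ate"])
print(sentence_prob(["the", "cat", "ate"]), total)   # 0.5 1.0
```

Dropping the EOS factor here would make each sentence probability 1.0 and the sum 2.0, which is exactly the normalization failure described above.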
364
language modeling
Tips and tricks when training a very large language model?
https://ai.stackexchange.com/questions/42131/tips-and-tricks-when-training-a-very-large-language-model
<p>I have never trained a (very) large language model, so I am wondering if the process is the same as training a (regular) language model, i.e. you prepare the data, set up the architecture, hyperparameters, and a loss function that minimizes perplexity when predicting the next word, and then do gradient descent over the giant dataset. Or whether there are any special gotchas or tricks you must apply when training it. I know there's at least one involving the training dynamics:</p> <ol> <li>training dynamics: most LLMs stop seeing performance improvement even before a single epoch is finished.</li> </ol> <p>I am wondering if there are any others.</p>
<p>So one thing that might occur:</p> <blockquote> <p>Note that we typically only train PMs for a single epoch, so the learning curves themselves (Figure 7 left) indicate how performance scales with dataset size (we used a fixed learning rate).</p> </blockquote> <p>From: <a href="https://arxiv.org/pdf/2204.05862.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2204.05862.pdf</a></p> <p>Additionally:</p> <blockquote> <p>Supervised fine-tuning (SFT). We fine-tune GPT-3 on our labeler demonstrations using supervised learning. We trained for 16 epochs, using a cosine learning rate decay, and residual dropout of 0.2. We do our final SFT model selection based on the RM score on the validation set. Similarly to Wu et al. (2021), we find that our SFT models overfit on validation loss after 1 epoch; however, we find that training for more epochs helps both the RM score and human preference ratings, despite this overfitting. <a href="https://arxiv.org/pdf/2203.02155.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2203.02155.pdf</a></p> </blockquote> <p>Here PM refers to the preference model used to evaluate generated outputs</p> <p>In particular, the final line</p> <blockquote> <p>we find that our SFT models overfit on validation loss after 1 epoch; however, we find that training for more epochs helps both the RM score and human preference ratings, despite this overfitting.</p> </blockquote> <p>indicates some sort of overtraining/overparameterized phenomenon</p>
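As a side note, the cosine learning rate decay mentioned in the second quote is easy to sketch. This is the standard form of the schedule, not necessarily the exact variant used in that paper:

```python
import math

def cosine_lr(step, total_steps, lr_max, lr_min=0.0):
    # cosine decay from lr_max at step 0 down to lr_min at total_steps
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

total_steps = 1000
for s in (0, 250, 500, 750, 1000):
    print(s, cosine_lr(s, total_steps, lr_max=3e-4))
```

Note the first quote's point still applies: with a fixed learning rate and a single epoch, the learning curve doubles as a dataset-size scaling curve, which is one reason single-epoch training is so common for pretraining.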
365
language modeling
What is the difference between a language model and a word embedding?
https://ai.stackexchange.com/questions/26739/what-is-the-difference-between-a-language-model-and-a-word-embedding
<p>I am self-studying applications of deep learning on the NLP and machine translation.</p> <p>I am confused about the concepts of &quot;Language Model&quot;, &quot;Word Embedding&quot;, &quot;BLEU Score&quot;.</p> <p>It appears to me that a language model is a way to predict the next word given its previous word. Word2vec is the similarity between two tokens. BLEU score is a way to measure the effectiveness of the language model.</p> <p>Is my understanding correct? If not, can someone please point me to the right articles, paper, or any other online resources?</p>
<p><strong>Simplified:</strong> word embeddings do not consider context; language models do.</p> <p>For e.g. Word2Vec, GloVe, or fastText, there exists one fixed vector per word.</p> <p>Think of the following two sentences:</p> <blockquote> <p>The fish ate the cat.</p> </blockquote> <p>and</p> <blockquote> <p>The cat ate the fish.</p> </blockquote> <p>If you averaged their word embeddings, they would have the same vector, but, in reality, their meanings (semantics) are very different.</p> <p>Then the concept of contextualized word embeddings arose with language models that <em>do</em> consider the context, and give different embeddings depending on the context.</p> <p>Both word embeddings (e.g. Word2Vec) and language models (e.g. BERT) are ways of representing text, where language models capture more information and are considered state-of-the-art for representing natural language in a vectorized format.</p> <p><strong>BLEU score</strong> is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. It is not directly related to the difference between traditional word embeddings and contextualized word embeddings (aka language models).</p>
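To see the context-blindness concretely, here is a tiny sketch with made-up static embeddings: averaging fixed per-word vectors is order-invariant, so both sentences map to the same vector.

```python
import numpy as np

# Made-up static (word2vec-style) embeddings: one fixed vector per word
emb = {
    "the":  np.array([0.1, 0.0]),
    "fish": np.array([0.9, 0.3]),
    "ate":  np.array([0.2, 0.8]),
    "cat":  np.array([0.4, 0.5]),
}

def avg_embedding(sentence):
    return np.mean([emb[w] for w in sentence.split()], axis=0)

a = avg_embedding("the fish ate the cat")
b = avg_embedding("the cat ate the fish")
print(np.allclose(a, b))  # True -- averaging is blind to word order
```

A contextual model like BERT would instead produce a different vector for "fish" in each sentence, because the surrounding tokens enter the computation.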
366
language modeling
What are the main differences between a language model and a machine translation model?
https://ai.stackexchange.com/questions/22086/what-are-the-main-differences-between-a-language-model-and-a-machine-translation
<p>What are the main differences between a language model and a machine translation model?</p>
<p>A simple language model will give you the probability of a sequence of tokens (a sentence) in that language. So let's say you have trained a model for the English language; your model can give you the probability of any random English sentence.</p> <p>Consider some sentence</p> <p><span class="math-container">$X$</span> <span class="math-container">$=$</span> &quot;the quick brown fox jumps over the lazy dog&quot; <span class="math-container">$=$</span> <span class="math-container">$x_1 \ x_2 \ x_3 \ ... \ x_n$</span></p> <p>The model will give you <span class="math-container">$P(X)$</span>.</p> <p>Moreover, if the model has been trained properly, then in the following scenario, where</p> <p><span class="math-container">$X$</span> <span class="math-container">$=$</span> &quot;the quick brown fox jumps over the lazy dog&quot; <span class="math-container">$=$</span> <span class="math-container">$x_1 \ x_2 \ x_3 \ ... \ x_n$</span></p> <p><span class="math-container">$Y$</span> <span class="math-container">$=$</span> &quot;dog brown quick fox over the jumps lazy the&quot; <span class="math-container">$=$</span> <span class="math-container">$y_1 \ y_2 \ y_3 \ ... \ y_m$</span></p> <p>the model will always give <span class="math-container">$P(X) &gt; P(Y)$</span>, as it has learnt the structure of the language.</p> <p>On the other hand, a machine translation model gives you the conditional probability of the next token given your source sentence and the partial target sentence. So if</p> <p><span class="math-container">$X = $</span> &quot;I am a student&quot; and <span class="math-container">$Y = $</span> &quot;je suis&quot; <span class="math-container">$=y_1, \ y_2$</span></p> <p>the model will give you <span class="math-container">$P(y_3 | X, y_1, y_2)$</span>,</p> <p>where <span class="math-container">$X$</span> is a source sentence and <span class="math-container">$Y= y_1, y_2$</span> is a partial target sentence. 
The probability of the word/token &quot;étudiant&quot; would be the highest among all words in the vocabulary.</p>
367
language modeling
Is the problem of Language Modelling a Well-Posed Learning Problem?
https://ai.stackexchange.com/questions/40379/is-the-problem-of-language-modelling-a-well-posed-learning-problem
<p>Hadamard defines (<a href="https://en.wikipedia.org/wiki/Well-posed_problem" rel="nofollow noreferrer">Well-posed problem (Wikipedia)</a>) a well-posed problem as one for which:</p> <ol> <li>a solution exists,</li> <li>the solution is unique,</li> <li>the solution depends continuously on the data (e.g. it is stable)</li> </ol> <p>Now for an autoregressive language model, the pretraining objective consists of predicting the next token given all previous tokens. But as I see it, while the objective helps the model pick up nuances of the language and learn &quot;language representation&quot;, there is no &quot;correct&quot; answer. Given the example, say, &quot;I would love to have some ____&quot;, &quot;chocolate&quot; is just as &quot;correct&quot; as &quot;coffee&quot;. Thus, the solution is not &quot;unique&quot; as required by the definition. But again, we could think of the problem as taking as input all the previous tokens and returning a probability distribution over the whole vocabulary. In this way of looking at the problem, the solution, i.e., the probability distribution, is unique. So, is the problem of language modelling well-posed? Is either of the two approaches the right reason? If not, what is the correct reason?</p>
<p>Philosophically speaking,</p> <p>I believe a perspective shift is necessary. When you say, 'A solution exists', the next statement should be, 'Well, what is the problem?'. While you correctly point out instances where the solution is unique and instances where it is not, the actual problem statement is not being talked about.</p> <p>I am afraid that the concept of a &quot;Well-posed problem&quot; is theoretical, and while a problem might look absolutely well-posed in theory, things might work out differently in the real world.</p> <p>Let's talk about the theory of an LLM (I will intentionally leave out the technical bits). Ideally, given an exhaustive dataset of all the world's data and an NN with practically infinite training parameters, a trained language model should be able to be attentive enough to completely understand the context before providing any answers. Such a model will work something like this:</p> <p>User: &quot;Fill in the blank, 'I would love to have some ____'&quot;</p> <p>LLM: &quot;What do you love&quot;</p> <p>User: &quot;Among many things, chocolate, ice-creams, dance, booze, etc.&quot;</p> <p>LLM: &quot;Okay, and how are you feeling today?&quot;</p> <p>User: &quot;Particularly hot, and alone.&quot;</p> <p>LLM: (to itself: maybe some sugar to lift up the mood, and some cold ice cream to help with the hot day)</p> <p>LLM: Let's go have some Chocolate Ice-cream.</p> <p>User: Wow! I would never have thought about it myself! Thanks, Overlord.</p> <p>Strictly theoretically speaking, the Freudian concept of pan-determinism says that, given the complete history of a human being, you can pinpoint the exact decision they will make given a situation. So, in this ideal world, the model that deemed this information sufficient to figure out the answer would be absolutely right, as there would exist only one fill for the blank.</p> <p>Now, let's travel to the real world, shall we?</p> <p>Here, everything is limited. 
From low memory, and limited silicon chip supply, to limited attention spans (of LLMs of course). Because here, our LLM doesn't (yet) have the capability to reason by itself unless explicitly advised to do so, it will answer based on defined heuristics like greedy search (and now that you think of it, because someone included greedy search in the architecture, suddenly it feels like it was a well-defined problem after all? Because given the limitations of the language models and the architecture set by their creators, there is only one solution, isn't there? Unless, of course, you decide you want to increase the randomness in selection). But, back to the point, transformers or any other kind of NN are essentially numerical models which, theoretically speaking, have a well-defined convergence point but practically never converge; we always judge the trained parameters to be good enough and stop the training.</p> <p>That we can probably never really achieve the ideal doesn't mean that it doesn't exist. With infinite resources, we can build the ideal LLM that can do the things that make it a well-posed problem.</p> <p>And in due time we might have LLMs good enough to predict human decisions, which would make you feel like there was only one solution, which they duly produced, <a href="https://www.youtube.com/watch?v=NOCsdhzo6Jg&amp;ab_channel=3Blue1Brown" rel="nofollow noreferrer">but as is with illusions, they are never real.</a></p> <p>UPDATE: One more thing that I just recalled is that while it is a well-posed problem, engineers and developers go to great lengths to disturb its &quot;pose&quot; just because, again, practicality supersedes idealism. 
What I mean by that is illustrated best using an example, that of the Bi-directional Encoder Representations from Transformers, or the BERT architecture.</p> <p>The trainers went to great lengths to intentionally introduce noise in training to avoid overfitting and achieve higher efficiency, which on the downside had the side effect of not predicting a specific token with high confidence. What I am referring to is their use of the [MASK] token only 80% of the time during training, while for the remaining 20% they intentionally kept the original word 10% of the time and substituted a random word the other 10% of the time.</p>
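The greedy-search point above can be illustrated with a toy next-token distribution (the vocabulary and logits are invented for the demo): greedy decoding always returns the single argmax token, so the "solution" looks unique, while temperature sampling reintroduces the non-uniqueness.

```python
import numpy as np

# Invented next-token distribution over a tiny vocabulary
vocab = ["chocolate", "coffee", "tea"]
logits = np.array([2.0, 1.9, 0.5])

# Greedy search: deterministic -- the "unique solution" is an artifact
# of the decoding heuristic, not of the modelling problem itself
greedy = vocab[int(np.argmax(logits))]
print(greedy)  # chocolate

# Temperature sampling: "chocolate" and "coffee" are both plausible
# completions again; higher temperature flattens the distribution
def sample(rng, temperature=1.0):
    p = np.exp(logits / temperature)
    p /= p.sum()
    return vocab[rng.choice(len(vocab), p=p)]

rng = np.random.default_rng(0)
print([sample(rng) for _ in range(5)])
```
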
368
language modeling
can we apply soft actor critic to language model preference optimization?
https://ai.stackexchange.com/questions/46477/can-we-apply-soft-actor-critic-to-language-model-preference-optimization
<p>Most language models use online PPO or offline DPO type algorithms. Can we use soft actor-critic RL to do alignment work? Can any related publications be recommended?</p>
<p>Sample-efficient off-policy <a href="https://spinningup.openai.com/en/latest/algorithms/sac.html" rel="nofollow noreferrer">SAC</a> learning was originally proposed for robotics continuous-action-space RL use cases, while an alternative version can handle discrete action spaces.</p> <blockquote> <p>A central feature of SAC is entropy regularization. The policy is trained to maximize a trade-off between expected return and entropy, a measure of randomness in the policy. This has a close connection to the exploration-exploitation trade-off: increasing entropy results in more exploration, which can accelerate learning later on... An alternate version of SAC, which slightly changes the policy update rule, can be implemented to handle discrete action spaces.</p> </blockquote> <p>So the reasons why it's not used in LLM RLHF (RL with human feedback) NLP tasks are related to its main purpose, which is to deal with continuous action spaces with limited samples thanks to its inherent entropy-maximizing explorative nature. The LLM RLHF stage doesn't lack human-provided ranking samples to inversely learn the reward model (RM), and the extremely stable clipped-objective PPO on-policy learning is well suited to ensuring a stable and robust learning process, which is desirable in RLHF after the supervised fine-tuning (SFT) stage.</p> <p>Incidentally, to speed up RLHF learning, Direct Preference Optimization (DPO), Contrastive Preference Learning (CPL), rejection sampling, etc. have been proposed to replace or enhance PPO. Meta's Llama-2-chat models first use rejection sampling to fine-tune on samples of higher reward (in contrast to PPO, which updates based on a single sample each time); PPO is used only at the final stage. You can read the references below for these alternatives.</p> <p>Bai et al. (2022). <em>&quot;Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback&quot;</em><br /> Hejna et al. (2023). 
<em>&quot;Contrastive Preference Learning: Learning from Human Feedback without RL&quot;</em><br /> Rafailov et al. (2023). <em>&quot;Direct Preference Optimization: Your Language Model is Secretly a Reward Model&quot;</em></p>
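For intuition on the entropy regularization in the quote, here is a tiny numeric sketch of the entropy-regularized objective over a discrete action space (all numbers invented): the policy maximizes expected return plus an entropy bonus, and the maximizer of that trade-off is the softmax of Q/alpha rather than the greedy policy.

```python
import numpy as np

# Critic's Q-value estimates for 3 discrete actions (numbers invented)
q_values = np.array([1.0, 0.5, 0.2])
alpha = 0.2                              # entropy temperature

def sac_objective(probs):
    # expected return plus entropy bonus: E_pi[Q] + alpha * H(pi)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return probs @ q_values + alpha * entropy

greedy_policy = np.array([1.0, 0.0, 0.0])
soft_policy = np.exp(q_values / alpha)
soft_policy /= soft_policy.sum()         # softmax(Q / alpha)

print(sac_objective(greedy_policy), sac_objective(soft_policy))
# the entropy-regularized softmax policy beats the purely greedy one
```

This built-in pressure toward randomness is exactly what RLHF tries to avoid after SFT, where staying close to the supervised policy is the priority.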
369
language modeling
What are the most successful applications of large language models?
https://ai.stackexchange.com/questions/47709/what-are-the-most-successful-applications-of-large-language-models
<p>Large Language Models (LLMs) are currently among the most prominent AI technologies, with some considering them as a step toward Artificial General Intelligence (AGI). My question is: Are there already successful commercial applications of LLMs, and if so, are these models particularly indispensable in the short term such that they cannot be readily substituted by other AI technologies?</p>
370
language modeling
What language model can convert normal text to JSON data
https://ai.stackexchange.com/questions/43843/what-language-model-can-convert-normal-text-to-json-data
<p>I have tried training T5-small, T5-base and T5-Large on around 15K rows of data where the input data was something like the following, but I did not get the desired results:</p> <pre><code>Nutrition Facts, 100g per, Energy 646.95Kcal Carbohydrates 19.31g, Protein 21.94g 53.55g Total Fat 6.64g Saturated Fat 14.97g Dietary Fiber, &lt;1.Omg Cholesterol Sodium 0.29g Sugars 3.39g, Lightly Salted and to Perfection, Ingredients: Peanuts, Almonds,, Cashews, Pistachios, Vegetable Oil, Salt, aa, , 74G, Pistachio, 61129110611336177 WE ARE NUTS ABOUT QUALITY, Baked, Nuts Salted, Mixed WE ARE NUTS ABOUT, Community 364, 13 Street Plot No. 36, Al Area 1, 24149,UAE 4971 4 3355777, License Number: 224614 VAT No: 100058529700003, CERTIFIED COMPANY, ALLERGEN WARNING: in a facility that also processes nuts, sesame and mustard, Store in a cool dry place away from heat moisture, Instruction Once store in airtight container and consume before expiry date, Pro: 14/12/23, Exp:13/12/24, Net 40gms </code></pre> <p>The output data should be the above details in JSON format. What language models can be trained for this purpose, and what is the minimum number of parameters they should have?</p>
<p>I've tried Mistral-7B-Instruct-v0.2, and it worked out of the box for your example, as did Qwen1.5-1.5b. However, Qwen1.5-0.5b had issues, but I think with enough training, you can achieve decent results.</p> <p>I believe the size/quality trade-off of starting from 1.5b and lowering it could deteriorate in corner cases, but the only way to find out is through experimentation with validation.</p> <p>One note is that you can generate almost an infinite amount of data by removing the JSON structure. 15k seems sufficient if you have just product JSONs, but for more diversity, you might want more data. The SFT stage in an LLM usually takes from 200k up to a million examples.</p> <p>You can also constrain the output to a schema by restricting the output tokens. See one example of such an approach at <a href="https://github.com/1rgs/jsonformer" rel="nofollow noreferrer">https://github.com/1rgs/jsonformer</a>.</p>
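To show the constrained-output idea in spirit (jsonformer-style), here is a minimal sketch where the JSON skeleton is fixed by a schema and a model is only asked for typed field values. Note that fake_model is a hypothetical stand-in with canned outputs; a real implementation would call an LM and mask invalid tokens during generation.

```python
import json

# The schema fixes the JSON skeleton; the model is only asked for the
# field values, and each value is type-checked before being accepted.
schema = {"product": str, "net_weight_g": float, "salted": bool}

def fake_model(field, text):
    # hypothetical stand-in for an LM call; canned outputs for the demo
    return {"product": "Mixed Nuts", "net_weight_g": 40.0, "salted": True}[field]

def generate_json(text, schema):
    out = {}
    for field, typ in schema.items():
        value = fake_model(field, text)
        if not isinstance(value, typ):
            raise ValueError(f"model produced a non-{typ.__name__} for {field}")
        out[field] = value
    return json.dumps(out)

print(generate_json("Baked Nuts Salted ... Net 40gms", schema))
```

Because the braces, keys, and quotes never come from the model, the output is guaranteed to be valid JSON regardless of model size.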
371
language modeling
Do all Android devices use the same foundation language model and LORA adapters?
https://ai.stackexchange.com/questions/47748/do-all-android-devices-use-the-same-foundation-language-model-and-lora-adapters
<p>I read on <a href="https://developer.android.com/ai/gemini-nano" rel="nofollow noreferrer">https://developer.android.com/ai/gemini-nano</a> about the use of local language models on Android:</p> <blockquote> <ul> <li><strong>Supported Devices</strong>: AICore is currently available on Pixel 9 series devices, Google Pixel 8 Series devices including Pixel 8<a href="https://developer.android.com/ai/gemini-nano#fn1" rel="nofollow noreferrer">1</a> and Pixel 8a<a href="https://developer.android.com/ai/gemini-nano#fn2" rel="nofollow noreferrer">2</a>, Samsung S24 Series devices, Samsung Z Fold6, Samsung Z Flip6, Realme GT 6, Motorola Edge 50 Ultra, Motorola Razr 50 Ultra, Xiaomi 14T/Pro, and Xiaomi MIX Flip.</li> </ul> </blockquote> <p><a href="https://i.sstatic.net/Umw7zeFE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Umw7zeFE.png" alt="enter image description here" /></a></p> <p>Do all these devices use the same foundation language model and LORA adapters? If not, does that only depend on the Google AI Edge SDK version, or does that also depend on the phone itself?</p>
<p>From Google's paper <a href="https://arxiv.org/pdf/2312.11805" rel="nofollow noreferrer">Gemini: A Family of Highly Capable Multimodal Models</a>:</p> <blockquote> <p>Our most efficient model, designed to run on-device. <strong>We trained two versions of Nano, with 1.8B (Nano-1) and 3.25B (Nano-2) parameters, targeting low and high memory devices respectively</strong>. It is trained by distilling from larger Gemini models. It is 4-bit quantized for deployment and provides best-in-class performance.</p> </blockquote> <p>It's echoed by <a href="https://www.androidpolice.com/samsung-galaxy-s25-multimodal-gemini-nano/" rel="nofollow noreferrer">https://www.androidpolice.com/samsung-galaxy-s25-multimodal-gemini-nano/</a>:</p> <blockquote> <p>We also verified the Galaxy S25 supports Google’s multimodal Gemini Nano model. The AICore app on the Galaxy S25 we checked out took up 4GB of space, just like on the Pixel 9. But on the Galaxy Z Fold 6, the same app only took up 1GB since it downloaded the smaller, text-only version.</p> </blockquote>
372
language modeling
How word2vec de-embeds the special names in language models which output text
https://ai.stackexchange.com/questions/41823/how-word2vec-de-embeds-the-special-names-in-language-models-which-output-text
<p>I am new to the NLP field. I have some questions about word2vec embeddings. As I understand it, they have a fixed-size vocabulary, so there are definitely some words which are not in that predefined vocabulary; e.g. we may get a special name (like a person's name) which doesn't exist in the vocabulary. These are called out-of-vocabulary (OOV) words.</p> <p>So how do embedding and de-embedding work for a translation language model? I want the language model in the example to include the de-embedding part; as we know, in classification language models we may only be concerned with the embedding part, as the output is not an embedding vector and does not need to be converted back to a string (word).</p> <p>I don't know what ways are used to embed those OOV words. I guess one way is to use some main word2vec model and a complementary fastText model for OOV words, or instead some complementary generic way of embedding like <code>nn.Embedding</code> in PyTorch. This won't cause any problem in classification models like sentiment analysis or text classification.</p> <p>Question 1: what other methods can be used for embedding an OOV word while we want to use word2vec?</p> <p>But in the models whose output is eventually going to be a word (like translation or text summarization), I have some concerns, as below:</p> <p>Note that the output of a translation language model is a sequence of embedding vectors corresponding to output words.</p> <p>Note that the way we convert those embedding vectors to words is usually by finding the closest embedding vector in the embedding matrix, whether of word2vec, fastText, or the de-embedder of <code>nn.Embedding</code>. 
Also note that this way we eventually get some vector, and from its index we get a word.</p> <p>Since we have two embedding models, when we receive the language model's output embedding vector, we first need to distinguish which embedding model this output vector should be de-embedded from (word2vec or fastText). Note, as I said, the next action (finding the closest embedding) would eventually yield some word.</p> <p>So I guess one way may be to find the closest vectors in both embedding models, compare their similarity scores to the output embedding vector of the language model, and thereby decide which one to de-embed from.</p> <p>Question 2: so how is this problem of distinguishing which embedding model to de-embed from handled? I guess there should be more sophisticated methods in practice, so if there are, please tell me.</p> <p>Question 3: do you know a model which has implemented my solutions (the first one for embedding by a complementary embedding model, the second one for distinguishing by similarity score)?</p>
<p>I'm not sure I got 100% of the question, but word2vec is rather simple to understand so I'll give my two cents, and if something is missing or not clear I'll edit the answer to improve it.</p> <p>1 For OOV vectors, as you pointed out, one option is indeed to initialise some random embeddings. The nice aspect of this method is that these missing embeddings can be trained as extra parameters for the model while keeping the already trained vectors fixed. Another alternative is to use all-zero vectors, meaning that we force the model to ignore certain words (it might be that these OOV words are not of interest for specific use cases). Using vectors coming from other models is unlikely to work because of a problem called &quot;vector alignment&quot;, i.e. embedding spaces from different models do not share the same learned geometric structure, meaning that two numerically equal vectors could be present in both spaces but be associated with totally different words.</p> <p>2 Here I think there is a little misunderstanding going on regarding how word2vec works. First of all, word2vec is a training method for embeddings; it does not necessarily refer only to pretrained vectors of fixed size. Specifically, the word2vec papers proposed two ways of training embeddings, continuous bag of words and skip-gram. The picture below shows the difference, which I won't explain since there are already a lot of answers about it in the community. The point I want to focus on is that both approaches use simple classic supervised machine learning tasks. You have an input and you want to predict an output. In the original paper the desired output was a continuous vector representation of words to leverage as features for further NLP tasks. This is different from encoding/decoding, and the similarity is actually closer to transfer learning. 
It is also relevant to point out that you can get very good embeddings with classic matrix factorisation techniques as well; the problem is that those techniques can only handle small amounts of data, while the deep learning formulation proposed in word2vec allows training embeddings of any dimension on any amount of data, hence the success of this approach.</p> <p>3. As far as I'm aware there is no such model, and I doubt anyone tried such an approach, for at least two reasons: 1) the same alignment problem already cited above; 2) the quick rise of transformers not long after word2vec (&quot;Attention Is All You Need&quot; was published in 2017, word2vec in 2013), which channelled all the attention (pun intended) in the NLP community to those new architectures.</p> <p><a href="https://i.sstatic.net/g3NzQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g3NzQ.png" alt="enter image description here" /></a></p>
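To make point 1 concrete, here is a minimal plain-Python sketch of the two OOV strategies (random initialisation vs. all-zero vectors); the vocabulary, vectors, and dimension are toy values, not taken from any real pretrained model:

```python
import random

random.seed(0)
DIM = 4  # toy embedding dimension (real models typically use 100-300)

# Hypothetical "pretrained" vocabulary with fixed vectors.
pretrained = {
    "cat": [0.1, 0.2, 0.3, 0.4],
    "dog": [0.2, 0.1, 0.4, 0.3],
}

def lookup(word, oov_strategy="random"):
    """Return the vector for `word`, handling out-of-vocabulary words.

    "random": small random init; these rows can later be trained as extra
              parameters while the pretrained rows stay frozen.
    "zeros":  all-zero vector, effectively telling the model to ignore it.
    """
    if word in pretrained:
        return pretrained[word]
    if oov_strategy == "zeros":
        return [0.0] * DIM
    vec = [random.uniform(-0.05, 0.05) for _ in range(DIM)]
    pretrained[word] = vec  # cache so the OOV word keeps a stable vector
    return vec

print(lookup("cat"))      # pretrained vector, untouched
print(lookup("axolotl"))  # freshly initialised OOV vector
```

In a real setup the random rows would be registered as trainable parameters while the pretrained rows are frozen; here the dictionary merely caches them.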
373
language modeling
Can you make Q&amp;A language model stay on topic?
https://ai.stackexchange.com/questions/36812/can-you-make-qa-language-model-stay-on-topic
<p>I’m thinking of fine-tuning a pre-trained language model for a Q&amp;A task. More specifically, I’d like to fine-tune the model on a single chapter in a classic college textbook. Afterward, the reader of the chapter should be able to engage in a Q&amp;A session with the model about the content of the chapter.</p> <p>But how do I make sure that the model stays on topic and doesn’t go off on a tangent? I know it is possible when looking at what <a href="https://play.aidungeon.io/" rel="nofollow noreferrer">https://play.aidungeon.io/</a> has achieved, but I don’t know if it will require me to build a model from the ground up for each chapter. Can anyone tell me if I’m out of my mind or if it’s feasible?</p> <p>Edit:</p> <p>I've just learned that AI Dungeon's models have also struggled with a tendency to create graphic and sexual content despite not being prompted by players.</p>
<p>Perhaps you could train an auxiliary binary classifier to predict whether or not a response from the chatbot was on or off topic. You could use text from the chapters that you fine-tune the Q&amp;A model with to generate examples of &quot;on topic&quot; text, and then use text from other sources as &quot;off topic&quot;.</p> <p>Then, when the model generates an output, you could pass that through the binary on/off-topic classifier. If it is deemed off topic, you could then respond with some canned phrases, indicating that the chatbot is unable to answer the specific question. I have not tried this myself, but it seems to me like a reasonable place to begin.</p>
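A minimal sketch of that gating idea in plain Python; the keyword-based classifier here is only a stand-in for a trained binary model, and all names and phrases are illustrative:

```python
CHAPTER_TERMS = {"mitochondria", "cell", "organelle"}  # toy "chapter vocabulary"

def toy_classifier(text):
    # Stand-in for a trained on/off-topic classifier: deem the text
    # on topic if any chapter term appears in it.
    return any(term in text.lower() for term in CHAPTER_TERMS)

def filter_response(response, is_on_topic=toy_classifier,
                    fallback="I'm sorry, I can only discuss this chapter."):
    # Gate the generated response through the on/off-topic classifier;
    # fall back to a canned phrase when it is deemed off topic.
    return response if is_on_topic(response) else fallback

print(filter_response("The mitochondria is the powerhouse of the cell."))
print(filter_response("Let me tell you about pirates instead."))
```

In practice the classifier would be a small supervised model trained on chapter text vs. other text, but the gating logic stays the same.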
374
language modeling
How do we know if GPT-2 is a better language model?
https://ai.stackexchange.com/questions/10869/how-do-we-know-if-gpt-2-is-a-better-language-model
<p>You may have heard of GPT2, a new language model. It has recently attracted attention from the general public because the organization that published the paper, <a href="https://blog.openai.com/better-language-models/" rel="nofollow noreferrer">OpenAI</a>, ironically refused to share the whole model, fearing dangerous implications. Along with the paper, they also published a manifesto to justify their choice: &quot;Better Language Models and Their Implications&quot;. And soon a lot of media outlets were publishing articles discussing the choice and its effectiveness in actually preventing bad implications. I am not here to discuss the ethical components of this choice but the actual performance of the model.</p> <p>The model got my attention too, and I downloaded the small model to play with. To be honest, I am far from impressed by the results. Sometimes the first paragraph of the produced text appears to make sense, but nine times out of ten it is gibberish by the first or second sentence. Examples given in the paper seem to be &quot;lucky&quot; outputs, cherry-picked by human hands. Overall, the paper may suffer from a very strong publication bias.</p> <p>However, most articles we can read on the internet seem to take its power for granted. <a href="https://www.technologyreview.com/s/612975/ai-natural-language-processing-explained/" rel="nofollow noreferrer">The MIT Technology Review</a> wrote:</p> <blockquote> <p>The language model can write like a human</p> </blockquote> <p><a href="https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction" rel="nofollow noreferrer">The Guardian</a> wrote:</p> <blockquote> <p>When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.</p> </blockquote> <p>The model appears generally qualified as a &quot;breakthrough&quot;. These writings do not match my personal experimentation, as the produced texts are rarely consistent / syntactically correct.</p> <p>My question is: without the release of the whole model for ethical reasons, how do we know if the model is really that powerful?</p>
<p>You have experimented with a small model (117M parameters). OpenAI has now released the medium sized model (345M parameters). Note that the full model has 1.5B parameters. Also note that GPT-2 has been trained on a tiny fraction of all available text. It's almost guaranteed that a larger model trained on more text will generate better text.</p> <p>I have experimented with the 345M model, and I agree with The Guardian assessment. See for yourself:</p> <p><strong>Prompt:</strong> </p> <blockquote> <p>I want to generate beautiful music using machine learning. Do you think a computer will ever be able to compose something as novel as creations of Mozart, Bach, or Chopin?</p> </blockquote> <p><strong>Results</strong> (first four samples, not cherry picked):</p> <blockquote> <p>Yes, computers will probably be able to do it by 2020. I think the key for us will be to make music that is interesting and beautiful, so we will never really know. My hope would be that computers will become far more efficient over time – something that is already happening in a lot of ways like in big data – while still being able to do more things that they are trained on, like find patterns. I think that's where I'd want to use neural networks in the future.</p> </blockquote> <hr> <blockquote> <p>We don't know, but it will probably happen. That's an interesting question to ask. Of course we have AI, which can do interesting things without human input. It can recognize the world, find objects and images, it understands language, we can make games for it. But if you imagine an AI being able to write music, will it even be good enough? How much better than human players? How much better can someone else do? AI will never really be at its best because it will always have to come up with creative ways to use and play it. To ask a question like it is good enough is to ignore that it is human. 
You are only allowed to ask good questions</p> </blockquote> <hr> <blockquote> <p>The software will help, as it already can do. But it will also allow more human creators, which is what I think it should do more of to bring music to any genre. In terms of composing, it's a matter of creativity in humans in general. But how does it achieve that? Let's just say that there are fewer humans composing today than ever before. Do you remember the day when, by chance, a computer could write lyrics to Mozart's Sonata No. 4, or the day when a human wrote lyrics to Mozart's La Résistance du Ciel No. 2? That was back in 1960, and it's just not true anymore. As we move forward, we are not going to create music that human musicians cannot compose. What is your goal as a computer music creator?</p> </blockquote> <hr> <blockquote> <p>At the core of this idea is that music is a collection of sounds – it was the very first art form, it was the first music that humans evolved, a collection of sounds. Music is not an abstract thing of sound, but a collection of sounds. You can't write music the way we write and compose books and films, because we write and compose with sounds in mind. This is where machine learning comes in. You can read some books about how to get better music writers or how to improve the quality of their music, but you cannot make a difference with this kind of information at all. What does music teach you? To write and perform beautifully, to convey emotion accurately, and to create lasting and rich relationships with</p> </blockquote>
375
language modeling
Why do language models place less importance on punctuation?
https://ai.stackexchange.com/questions/13600/why-do-language-models-place-less-importance-on-punctuation
<p>I have a very outdated idea about how NLP tasks are carried out by normal RNNs, LSTMs/GRUs, word2vec, etc., to basically generate some hidden form of the sentence understood by the machine.</p> <p>One of the things I have noticed is that, in general, researchers are interested in generating the context of the sentence, but oftentimes ignore punctuation marks, which are one of the most important aspects for generating context. For example:</p> <blockquote> <p>“Most of the time, travellers worry about their luggage.”</p> <p>“Most of the time travellers worry about their luggage”</p> </blockquote> <p><a href="https://cybertext.wordpress.com/2012/11/22/a-light-hearted-look-at-how-punctuation-can-change-meaning/" rel="nofollow noreferrer">Source</a></p> <p>Like this, there exist probably four important punctuation marks, <code>.,?</code> and <code>!</code>. Yet, I have not seen any significant tutorials/blogs on them. It is also interesting to note that punctuation marks don't have a meaning (quite important, since most language models try to map words to a numerical value/meaning); they are more of a 'delimiter'. So what is the current theory or perspective on this? And why is it ignored?</p>
<p>You are right. Approaches that map words solely to meaning do fail in this regard. Nonetheless, Word2Vec and GloVe have shown wonderful downstream results. This in itself may indicate that, most of the time, the contribution of punctuation can be interpolated. But as you showed, there are cases where this just is not true! </p> <p>Nowadays, I would say most models actually use almost <strong>NO</strong> preprocessing. This is surprising, but it is due to the rise in power of learnable, reversible tokenizations. Some examples of these include <a href="https://en.wikipedia.org/wiki/Byte_pair_encoding" rel="nofollow noreferrer">byte pair encoding (BPE)</a> and the <a href="https://arxiv.org/abs/1808.06226" rel="nofollow noreferrer">SentencePiece model (SPM)</a>. </p> <p>State-of-the-art NLP models generally rely on these. Examples include <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT</a> and <a href="https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf" rel="nofollow noreferrer">GPT2</a>, which are general-purpose pretrained language models. Their ability to parse and <em>understand</em> (I use this word loosely) a wide variety of phrasing, spelling and more can be partially attributed to this freedom in the preprocessing. </p> <p><strong>Takeaway:</strong> You can achieve good results by using preprocessing in a manner that eliminates information but keeps the meat and bones that you are interested in (this requires domain knowledge paired with optimization experience), but the field seems to be gearing towards models that are more inclusive, more transferable, and don't have the problems you mention, by design.</p>
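To illustrate why reversible tokenization matters for the example in the question, here is a toy comparison (a regex stand-in, not real BPE/SPM) between preprocessing that strips punctuation and a tokenizer that keeps punctuation marks as tokens:

```python
import re

s1 = "Most of the time, travellers worry about their luggage."
s2 = "Most of the time travellers worry about their luggage"

def tokenize_strip(text):
    # Old-style preprocessing: lowercase and drop punctuation entirely.
    return re.findall(r"[a-z]+", text.lower())

def tokenize_keep(text):
    # Reversible-style tokenization: punctuation marks become tokens too,
    # as they do in BPE/SentencePiece vocabularies.
    return re.findall(r"[a-z]+|[.,!?]", text.lower())

print(tokenize_strip(s1) == tokenize_strip(s2))  # True: the distinction is lost
print(tokenize_keep(s1) == tokenize_keep(s2))    # False: the comma survives
```

With punctuation preserved as tokens, the model can learn whatever signal the comma carries instead of having the two sentences collapse into one input.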
376
language modeling
How do you build a language model to predict the contextual similarity between two documents?
https://ai.stackexchange.com/questions/16328/how-do-you-build-a-language-model-to-predict-the-contextual-similarity-between-t
<p>How do you build a language model to predict the contextual similarity between two documents?</p>
<p>You can use an autoencoder for text. For example, you can refer to this example here: <a href="https://machinelearningmastery.com/lstm-autoencoders/" rel="nofollow noreferrer">https://machinelearningmastery.com/lstm-autoencoders/</a></p> <p>For comparing the contextual similarity, you can compare the encoded vectors for distance, for example through a formula like mean squared error. </p> <p>This works because the autoencoder compresses the input data into a vector of numbers, forcing the encoder and the decoder to learn specific features about the text. The features cannot be understood by humans, but they have meanings. </p> <p>Another approach is to use word or document embeddings like word2vec or GloVe. This may also work well depending on your dataset size. You need to experiment with different methods to find out which is the best.</p> <p>A supervised method can also be used if you have labels. For example, you can use an LSTM and train it like a Siamese network using triplet loss. Here is an example: <a href="https://medium.com/@gautam.karmakar/manhattan-lstm-model-for-text-similarity-2351f80d72f1" rel="nofollow noreferrer">https://medium.com/@gautam.karmakar/manhattan-lstm-model-for-text-similarity-2351f80d72f1</a> Implementation: <a href="https://github.com/GKarmakar/deep-siamese-text-similarity" rel="nofollow noreferrer">https://github.com/GKarmakar/deep-siamese-text-similarity</a></p> <p>Hope this can help you, and have a nice day</p>
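As a minimal sketch of the embedding-based approach (toy 2-d vectors standing in for real word2vec/GloVe embeddings): average the word vectors of each document and compare the resulting document vectors with cosine similarity:

```python
import math

def avg_embedding(tokens, emb):
    # Document vector = mean of its word vectors (toy lookup table
    # standing in for a real pretrained embedding).
    vecs = [emb[t] for t in tokens if t in emb]
    dim = len(next(iter(emb.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy embeddings; real ones would come from word2vec or GloVe.
emb = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "stock": [0.0, 1.0]}
d1 = avg_embedding(["cat", "dog"], emb)
d2 = avg_embedding(["dog"], emb)
d3 = avg_embedding(["stock"], emb)
print(cosine(d1, d2) > cosine(d1, d3))  # True: the pet documents are closer
```

The same cosine comparison applies to autoencoder codes or Siamese-network outputs; only the way the document vector is produced changes.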
377
language modeling
Are there any good alternatives to an LSTM language model for text generation?
https://ai.stackexchange.com/questions/25270/are-there-any-good-alternatives-to-an-lstm-language-model-for-text-generation
<p>I have a trained LSTM language model and want to use it to generate text. The standard approach for this seems to be:</p> <ol> <li>Apply softmax function</li> <li>Take a weighted random choice to determine the next word</li> </ol> <p>This is working reasonably well for me, but it would be nice to play around with other options. Are there any good alternatives to this?</p>
<p>The current state of the art in natural language generation is auto-regressive transformer models. Transformers no longer use recurrent neural networks such as LSTMs, because the recurrence makes long dependencies messy to calculate. Instead, transformers only keep the attention layers and apply attention over all the existing text so far, which can be done in parallel and is therefore very fast, while being able to attend to long dependencies (e.g. understanding that &quot;it&quot; refers to &quot;John&quot; from 3 sentences ago). They are also faster to train than LSTMs (on powerful GPUs, at least). The downside is a larger memory requirement, and you need large models and large datasets (LSTMs work better for small models and small datasets). <a href="http://jalammar.github.io/illustrated-transformer/" rel="nofollow noreferrer">Here</a> is some background info on how they work.</p> <p>Auto-regressive transformer models only use the decoder for text generation and remove the encoder. Given an input, they predict the next word.</p> <p>The most well-known one is GPT (<a href="https://github.com/openai/gpt-3" rel="nofollow noreferrer">GPT-3</a> has 175B parameters, <a href="https://github.com/openai/gpt-2" rel="nofollow noreferrer">GPT-2</a> has 1.5B parameters, and GPT-1 has 117M parameters). GPT is developed by OpenAI and is commercial, paid software if you want to use their official model, but I'm sure with a little digging you can find <a href="https://huggingface.co/transformers/pretrained_models.html" rel="nofollow noreferrer">community-trained models</a> that will perform slightly worse but are at least free to use. GPT is basically a vanilla transformer, but trained on a huge, huge dataset with a huge, huge model to achieve state-of-the-art performance.</p> <p>Other auto-regressive transformer models include:</p> <ul> <li><a href="https://github.com/salesforce/ctrl" rel="nofollow noreferrer">CTRL</a> by Salesforce, which uses the novel idea of control codes to guide the style of generation (e.g. to generate Wikipedia-article-style text or book-review-style text).</li> <li><a href="https://github.com/zihangdai/xlnet" rel="nofollow noreferrer">XLNet</a> by the Google AI Brain team, which handles longer sequences more accurately than the others because it re-introduces recurrence into the transformer model, allowing it to remember past sequences. Otherwise, vanilla transformers cannot handle dependencies that cross sequences (note: a sequence is limited by the max length you can feed into the model, bottle-necked by your memory requirement, and can contain many sentences or paragraphs).</li> <li><a href="https://github.com/lucidrains/reformer-pytorch" rel="nofollow noreferrer">Reformer</a> by Google Research, which is a more efficient transformer that significantly reduces the memory requirement while also being faster to compute on long sequences.</li> </ul> <p>If your goal is just to generate English or another commonly researched language, you can use an existing pre-trained language model and avoid doing any training yourself. This saves a lot of time, and there should at least be free community-trained models readily available. Otherwise, for obscure tasks, you'd have to train one yourself, and these state-of-the-art models will take immense resources.</p>
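Whichever architecture produces the logits, the decoding step itself also offers alternatives to plain weighted random choice; temperature scaling and top-k truncation are two common ones. A toy plain-Python sketch, not tied to any particular library:

```python
import math
import random

random.seed(0)

def sample_next(logits, temperature=1.0, top_k=None):
    """Weighted random choice over softmax(logits / temperature),
    optionally restricted to the top-k highest-scoring tokens."""
    scaled = [l / temperature for l in logits]
    if top_k is not None:
        # Mask out everything below the k-th largest scaled logit.
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [s if s >= cutoff else float("-inf") for s in scaled]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return random.choices(range(len(logits)),
                          weights=[e / total for e in exps])[0]

logits = [0.1, 2.0, 0.5]
print(sample_next(logits, top_k=1))          # always the argmax: token 1
print(sample_next(logits, temperature=0.7))  # sharper than plain sampling
```

Low temperature or small top-k makes generation more deterministic; nucleus (top-p) sampling follows the same pattern but truncates by cumulative probability instead of a fixed k.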
378
language modeling
What is the lowest possible loss for a language model?
https://ai.stackexchange.com/questions/42495/what-is-the-lowest-possible-loss-for-a-language-model
<p>Example: Suppose a character-level language model (three input letters to predict the next one), trained on a dataset which contains three instances of the sequence <code>aei</code>, with two occurrences preceding <code>o</code> and one preceding <code>u</code>, i.e., the dataset is:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">Input</th> <th style="text-align: center;">Output</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">aei</td> <td style="text-align: center;">o</td> </tr> <tr> <td style="text-align: center;">aei</td> <td style="text-align: center;">u</td> </tr> <tr> <td style="text-align: center;">aei</td> <td style="text-align: center;">o</td> </tr> </tbody> </table> </div> <p>In this case, the ideal probability distribution for the model's logits for <code>aei</code> would be <span class="math-container">$\sim 0.66$</span> for <code>o</code>, <span class="math-container">$\sim 0.33$</span> for <code>u</code>, and zero for other letters. In other words, when the model is input with <code>aei</code>, the ideal softmax of the logits would be <span class="math-container">$\sim 0.66$</span> for <code>o</code>, <span class="math-container">$\sim 0.33$</span> for <code>u</code>, and zero for other letters.</p> <p>Following this reasoning, the objective is to optimize the model's output for a given input to match the distribution of occurrences in the dataset.</p> <p>If this reasoning is correct, then we have the following ideal loss (cross-entropy):</p> <p><span class="math-container">$$ L = \frac{- log\left(\frac{2}{3}\right) - log\left(\frac{1}{3}\right) - log\left(\frac{2}{3}\right)}{3} \approx 0.63 $$</span></p> <p>Thus, <span class="math-container">$\sim 0.63$</span> is the smallest loss we can get with this dataset.</p> <p>Is my reasoning correct?</p>
<p>Yes. Consider that neural networks are <em>function</em> approximators, and:</p> <blockquote> <p>In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y</p> </blockquote> <p>So, given the same X, you can only hope to get a specific Y, and if your dataset is ambiguous (such as in your case), then, depending on the loss, you'll get some approximate answer (if derived from the MLE principle, probably the mean answer, but this might change if you use MAP or regularization)</p> <p>So yes, no NN will ever be able to achieve a loss of 0 on such a problem (at least formalized as such)</p>
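The bound from the question can be checked numerically; the floor is exactly the entropy of the empirical conditional distribution p(o) = 2/3, p(u) = 1/3:

```python
import math

# Next characters observed after "aei" in the dataset, and the matching
# empirical distribution -- the best any model can output for this context.
examples = ["o", "u", "o"]
p = {"o": 2 / 3, "u": 1 / 3}

# Mean cross-entropy of the optimal (empirical) predictions.
loss = -sum(math.log(p[c]) for c in examples) / len(examples)
print(round(loss, 4))  # 0.6365, i.e. the ~0.63 floor from the question
```

Note that this equals H(p) = -(2/3·log 2/3 + 1/3·log 1/3): with an ambiguous dataset, the irreducible loss is the entropy of the label distribution per context.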
379
language modeling
How is the word embedding represented in the paper &quot;Recurrent neural network based language model&quot;?
https://ai.stackexchange.com/questions/5285/how-is-the-word-embedding-represented-in-the-paper-recurrent-neural-network-bas
<p>I'm reading <em>"<a href="http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf" rel="nofollow noreferrer">Recurrent neural network based language model</a>"</em> by Mikolov et al. (2010). Although the article is straightforward, I'm not sure how the word embedding <span class="math-container">$w(t)$</span> is obtained:</p> <p><a href="https://i.sstatic.net/6i54I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6i54I.png" alt="enter image description here"></a></p> <p>The reason I wonder is that in the classic "<em><a href="http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf" rel="nofollow noreferrer">A Neural Probabilistic Language Model</a></em>" by Bengio et al. (2003), they used a separate embedding vector for representing each word, and it was somehow a "semi-layer", meaning it didn't contain a non-linearity, but they did update the word embeddings during back-propagation.</p> <p>In Mikolov's approach, though, I assume they used a simple one-hot vector, where each feature represents the presence of a word. If we represent a single-word input that way (as was done in Mikolov's paper), the vector becomes all zeros except for a single one.</p> <p>Is that correct?</p>
<p>The input vector contains two concatenated parts. The low part represents the current word:</p> <blockquote> <p>word in time t encoded using 1-of-N coding [...] - size of vector x is equal to size of vocabulary V (this can be in practice 30 000-200 000) plus [...]</p> </blockquote> <p>where, as you said, 1-of-N means (see <a href="https://www.quora.com/What-is-1-of-V-coding-with-relation-to-the-neural-network-language-model" rel="nofollow noreferrer">here</a>, 1-of-V):</p> <blockquote> <p>If you have a fixed-size vocabulary of symbols with V members in total, each input symbol can be coded as a vector of size V with all zeros except for the element corresponding to the symbol's order in the vocabulary, which gets a 1.</p> </blockquote> <p>The high part of the input vector represents the current context:</p> <blockquote> <p>and previous context layer [...] Size of context (hidden) layer s is usually 30 - 500 hidden units.</p> <p>For initialization, s(0) can be set to vector of small values, like 0.1</p> </blockquote> <p>The article includes this expression:</p> <blockquote> <p>x(t) = w(t) + s(t - 1)</p> </blockquote> <p>which I think is better written as:</p> <blockquote> <p>x(t) = w(t) || s(t - 1)</p> </blockquote> <p>to make the concatenation more visible.</p> <p>Finally, the paper states some improvements that break the 1-of-N definition of the word (low) part, in order to reduce the size of the w vector:</p> <blockquote> <p>we merge all words that occur less often than a threshold (in the training text) into a special rare token.</p> </blockquote>
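The concatenation is easy to sketch; the sizes below are toy values (the paper uses V of roughly 30 000-200 000 and 30-500 hidden units):

```python
def one_hot(index, size):
    # 1-of-N coding: all zeros except a single 1 at the word's index.
    v = [0.0] * size
    v[index] = 1.0
    return v

V = 5        # toy vocabulary size
HIDDEN = 3   # toy context (hidden) layer size

w_t = one_hot(2, V)        # w(t): current word
s_prev = [0.1] * HIDDEN    # s(0) set to small values, as in the paper

x_t = w_t + s_prev         # list "+" is concatenation: x(t) = w(t) || s(t-1)
print(len(x_t))            # 8, i.e. V + HIDDEN
```

Conveniently, Python's list `+` is concatenation, which mirrors the paper's overloaded use of "+" in x(t) = w(t) + s(t-1).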
380
language modeling
What background should I have before starting to fine tune a Large Language Model?
https://ai.stackexchange.com/questions/39078/what-background-should-i-have-before-starting-to-fine-tune-a-large-language-mode
<p>I want to know what things I should be learning before trying to fine-tune or for that matter working with a large language model.</p> <p>In my case, I am trying to fine-tune bloom (<a href="https://huggingface.co/blog/bloom" rel="nofollow noreferrer">https://huggingface.co/blog/bloom</a>).</p> <p>But considering my inadequate knowledge of this, I want to know what things I should cover first before taking on something so big.</p> <p>Any suggestions are appreciated.</p>
<p>There are considerable free and excellent resources out there (in the wild).</p> <p>You can check The Stanford Natural Language Processing Group's teaching <a href="https://nlp.stanford.edu/teaching/" rel="nofollow noreferrer">page</a>; you can easily follow their YouTube courses on NLP (natural language processing) and NLU (natural language understanding).</p> <p>Moreover, you can read scientific papers on transformers, the bread and butter of HuggingFace's models; a quick Google search should do the trick.</p> <p>Finally, you must know PyTorch, since (if I recall correctly) HuggingFace's implementations use it.</p>
381
language modeling
What are the leading methods to estimate Epistemic Uncertainty in Large Language Models?
https://ai.stackexchange.com/questions/48038/what-are-the-leading-methods-to-estimate-epistemic-uncertainty-in-large-language
<p>Epistemic uncertainty is uncertainty that arises from a lack of knowledge, for instance in machine learning epistemic uncertainty can be caused by a lack of training data. Estimating epistemic uncertainty is important for useful AI systems, since it allows the AI to &quot;know that it doesn't know&quot;, therefore avoiding hallucinations.</p> <p>While estimating epistemic uncertainty in machine learning classifiers has a clear interpretation, when considering generative models tasked with text generation it is less clear how to evaluate uncertainty, since many text completions can be considered satisfactory. Yet, it is obvious that a good epistemic uncertainty estimator should return a high value when a modest AI model is asked for example to &quot;Solve the Riemann hypothesis&quot; (hard unsolved math problem).</p> <p>What are the leading methods to estimate Epistemic Uncertainty in Large Language Models?</p>
<p>In the traditional Monte Carlo dropout approach, by enabling dropout at <em>inference</em> time and performing multiple stochastic forward passes, you can sample different outputs from the same input. The variability across these samples can serve as a proxy for epistemic uncertainty.</p> <p>For LLMs, recent work on chain-of-thought (CoT) prompting has suggested that if you sample multiple reasoning paths for the same input and then aggregate or measure the variance among them, the diversity can serve as an uncertainty signal. When many diverse CoT completions emerge, the model’s internal state may be less certain about the correct reasoning process. This is why <em>self-consistency</em> in CoT sampling is taken as an indicator of lower epistemic uncertainty. You can further read Wang et al. (2023), &quot;<em><a href="https://arxiv.org/pdf/2203.11171" rel="nofollow noreferrer">Self-Consistency Improves Chain-of-Thought Reasoning in Language Models</a></em>&quot;.</p> <blockquote> <p>This suggests that one can use self-consistency to provide an uncertainty estimate of the model in its generated solutions. In other words, one can use low consistency as an indicator that the model has low confidence; i.e., self-consistency confers some ability for the model to “know when it doesn’t know”.</p> </blockquote>
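A toy illustration of the self-consistency signal: sample several final answers (one per reasoning path) and use the majority answer's vote share as a rough confidence score (plain Python, illustrative only):

```python
from collections import Counter

def self_consistency(samples):
    """Majority-vote over sampled final answers; the winner's vote share
    acts as a crude inverse proxy for epistemic uncertainty."""
    counts = Counter(samples)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(samples)

# Hypothetical final answers extracted from five sampled CoT completions:
ans, confidence = self_consistency(["42", "42", "41", "42", "7"])
print(ans, confidence)  # "42" with 0.6 agreement: moderate confidence
```

A low vote share would flag the input as one where the model "doesn't know", which could trigger abstention instead of a confident answer.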
382
language modeling
Why do current language models no longer generate to long or short texts?
https://ai.stackexchange.com/questions/42140/why-do-current-language-models-no-longer-generate-to-long-or-short-texts
<p>One of the biggest strengths of ChatGPT is that it generates fitting text with respect to the input query. It usually stays on topic, answers the question completely and, especially, does not start talking gibberish or repeating itself.</p> <p>This behaviour is different when compared to older LLMs. For example, GPT-2 would usually only stop generating text when it hit the token limit or a predefined stop sequence. Also, it had a much bigger problem with giving repetitive answers. Newer models (especially instruction-tuned ones) do not suffer from these problems (e.g. Llama 2).</p> <p><strong>So I have 2 questions: What mechanisms/techniques are used in current language models such that...</strong></p> <ol> <li>...<strong>the models know when to stop generating text.</strong></li> <li>...<strong>the models do not repeat themselves and stay on topic.</strong></li> </ol> <p>I suspect it might have a lot to do with instruction tuning, but I am happy to hear from you.</p>
<p><strong>TLDR: if you're using a model without instruction-tuning, you'll likely need to do some additional work during inference to make sure that your outputs are coherent <em>regardless of how old your model is</em>. But even on GPT-2 you can get decent generations with the right technique.</strong></p> <p>You're correct in that this has a lot to do with instruction-tuning: GPT-2 wasn't trained to give helpful chatbot-style responses, it was trained to generate random web-text. Yes, there are EOS tokens that indicate the end of a sequence, but during pre-training these separate <em>documents</em>, which tend to be pretty long and likely aren't very coherent as chat-bot responses.</p> <p><a href="https://discuss.huggingface.co/t/llama-2-generation-config-top-p-0-6/49916/3" rel="nofollow noreferrer">This also means that newer models that aren't instruction-tuned (like Llama-2) will suffer from similar issues</a>, compared to their instruction-tuned variant (Llama-2 chat).</p> <h2>Better responses through prompting</h2> <p>However, it's important to note that, at least for larger models, training isn't necessary to get this kind of behavior. <a href="https://ai-plans.com/fs/arxiv/arxiv.org_abs_2312.01552v1.pdf" rel="nofollow noreferrer">Lin et al.</a> explore various methods for eliciting helpful chatbot-like responses through zero-shot prompting (instructions + a markdown-style prompt), ICL (providing examples of queries &amp; responses), retrieval augmented ICL (ICL but retrieving examples that are most similar to the query), and their proposed method which adds a system prompt and uses specially-crafted ICL samples.</p> <p><a href="https://i.sstatic.net/9omfNuKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9omfNuKN.png" alt="From Lin et al. 
(https://ai-plans.com/fs/arxiv/arxiv.org_abs_2312.01552v1.pdf)" /></a></p> <p>My <em>guess</em> is that this wouldn't work as well on less powerful models like GPT-2, as they are much worse at learning from context.</p> <h2>Repetition penalty</h2> <p>@Eponymous mentioned that there are mechanisms to downweight repetition during sampling, but it's important to add that people have been working on developing inference-time techniques to improve generation quality since GPT-2 (and before neural models, for that matter). For example, <a href="https://arxiv.org/pdf/1904.09751" rel="nofollow noreferrer">this famous 2019 paper</a>, proposing nucleus sampling (and targeting GPT-2 Large &amp; XL): <a href="https://i.sstatic.net/6HNoti7B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HNoti7B.png" alt="Holtzman et al. (https://arxiv.org/pdf/1904.09751)" /></a></p> <p>With the right techniques, you can definitely get these older models to generate coherent text (see Figure 11 in Appendix B in the above paper for some examples of different sampling techniques + the resulting generations).</p>
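For reference, a toy sketch of the kind of repetition penalty alluded to above (shrinking the logits of already-generated tokens, in the style popularized by the CTRL paper; plain Python, with illustrative values):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    # Damp tokens that already appeared: divide positive logits by the
    # penalty and multiply negative ones, so both become less likely.
    out = list(logits)
    for i in set(generated_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

logits = [2.0, 1.0, -0.5]
print(apply_repetition_penalty(logits, generated_ids=[0, 2]))
# token 0 damped (2.0 -> ~1.67), token 2 pushed further down (-0.5 -> -0.6)
```

Applied at every decoding step before sampling, this makes verbatim loops much less likely without any change to the model weights.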
383
language modeling
Are there any other language models besides GPT-3 that can be used to create chatbots with a specific identity and environment?
https://ai.stackexchange.com/questions/34040/are-there-any-other-language-models-besides-gpt-3-that-can-be-used-to-create-cha
<p>I've recently been looking into language models but ran into a small question out of curiosity. For some language models like GPT-3, it is possible to generate dialogue, with the model basing its answers on the initial input (which can be anything). This makes it possible to make a &quot;bot&quot; which a user can chat with, that identifies itself a certain way (e.g. gender and job), has certain mannerisms (e.g. polite or angry) and knows things about a custom world (e.g. the bot is a cashier in a certain store, located in a certain city).</p> <p>Newer and better models for open-domain dialogue, such as Facebook's BlenderBot, do not seem to have this possibility: they chat with the user, but there is no way to set up such a context. Are there any other language models that can do this, and might perform better than GPT-3?</p>
<p>You may want to look at Chirpy Cardinal from Stanford. It doesn't provide mannerisms out of the box but the response generators can be configured to give each instance its own character.</p> <p><a href="https://stanfordnlp.github.io/chirpycardinal/" rel="nofollow noreferrer">https://stanfordnlp.github.io/chirpycardinal/</a></p>
384
language modeling
Can a prompt injection attack exist without predefined system instructions in a language model?
https://ai.stackexchange.com/questions/48597/can-a-prompt-injection-attack-exist-without-predefined-system-instructions-in-a
<p>Prompt injection attacks are typically described as adversarial inputs crafted to override or manipulate a language model's behavior by exploiting its prompt-following nature. In many examples, these attacks rely on the presence of predefined system instructions (e.g., &quot;You are a helpful assistant. Do not answer harmful questions.&quot;).</p> <p>However, I'm curious whether a prompt injection attack can meaningfully exist if there are no explicit or implicit system instructions guiding the model's behavior.</p> <ol> <li>Is some form of predefined instruction (explicit system prompt, behavioral constraint, or task framing) a necessary condition for defining a prompt injection attack?</li> <li>If not, how is &quot;injection&quot; conceptually different from regular prompt engineering?</li> <li>Are there any examples of prompt injection attacks occurring in zero-instruction (fully instruction-free) settings?</li> </ol> <p>I'm looking for a principled explanation, especially from a security or alignment perspective, on whether prompt injection inherently presumes an instruction-following context.</p>
385
language modeling
Which framework should I use for training transformer language models with reinforcement learning?
https://ai.stackexchange.com/questions/48596/which-framework-should-i-use-for-training-transformer-language-models-with-reinf
<p>Which framework should I use for training transformer language models with reinforcement learning (e.g., GRPO)? Any recommendation?</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">Feature</th> <th style="text-align: left;"><a href="https://github.com/huggingface/trl/blob/50a2fa8ec843e18a6230508f53b1f220824914fd/trl/scripts/grpo.py#L25" rel="nofollow noreferrer"><code>trl</code></a> (Hugging Face)</th> <th style="text-align: left;"><a href="https://github.com/unslothai/unsloth/blob/6ebef501f9015f17ba61e27d89cbdf3198fef671/tests/saving/language_models/test_save_merged_grpo_model.py#L609" rel="nofollow noreferrer"><code>unsloth</code></a></th> <th style="text-align: left;"><a href="https://github.com/volcengine/verl/tree/main/examples/grpo_trainer" rel="nofollow noreferrer"><code>verl</code></a> (Volcano Engine)</th> <th style="text-align: left;"><a href="https://github.com/OpenRLHF/OpenRLHF/tree/c8fbb4f058ac3500d3f854150c9c716f6936c389/examples/scripts" rel="nofollow noreferrer"><code>openrlhf</code></a></th> </tr> </thead> <tbody> <tr> <td style="text-align: left;"><strong>Role in GRPO</strong></td> <td style="text-align: left;">Full GRPO framework, implements PPO, DPO, IPO, KTO</td> <td style="text-align: left;"><strong>Accelerates DPO</strong> (not a full GRPO framework)</td> <td style="text-align: left;">Full GRPO framework, implements PPO, GRPO, ReMax, DAPO, etc.</td> <td style="text-align: left;">Full GRPO framework, implements PPO, DPO, KTO</td> </tr> <tr> <td style="text-align: left;"><strong>Core Function</strong></td> <td style="text-align: left;">Easy, comprehensive RLHF with HF models</td> <td style="text-align: left;">Speed up LLM SFT/DPO fine-tuning</td> <td style="text-align: left;">Flexible, efficient, production-ready RL training for LLMs</td> <td style="text-align: left;">Flexible, scalable, research-oriented RLHF</td> </tr> <tr> <td style="text-align: left;"><strong>Ease of Use</strong></td> 
<td style="text-align: left;">Very High (Trainer API)</td> <td style="text-align: left;">High (easy integration)</td> <td style="text-align: left;">Moderate (flexible but extensive feature set)</td> <td style="text-align: left;">Moderate (more control)</td> </tr> <tr> <td style="text-align: left;"><strong>Performance</strong></td> <td style="text-align: left;">Good, leverages Accelerate</td> <td style="text-align: left;"><strong>Excellent</strong> (speed &amp; VRAM reduction for DPO)</td> <td style="text-align: left;">Excellent (SOTA throughput, scales to hundreds of GPUs)</td> <td style="text-align: left;">Very Good, designed for large-scale/distributed</td> </tr> <tr> <td style="text-align: left;"><strong>Integration</strong></td> <td style="text-align: left;">Deeply integrated with Hugging Face ecosystem</td> <td style="text-align: left;">Integrates well with HF &amp; <code>trl</code>'s <code>DPOTrainer</code></td> <td style="text-align: left;">Compatible with HF/Modelscope, integrates with FSDP, Megatron-LM, vLLM, SGLang</td> <td style="text-align: left;">Uses HF models; often more modular</td> </tr> <tr> <td style="text-align: left;"><strong>Target Audience</strong></td> <td style="text-align: left;">Practitioners, general users, rapid prototyping</td> <td style="text-align: left;">Anyone doing DPO/SFT, especially on limited hardware</td> <td style="text-align: left;">Researchers, advanced practitioners, production teams needing performance/flexibility</td> <td style="text-align: left;">Researchers, power users, large-scale deployments</td> </tr> </tbody> </table></div>
386
language modeling
Should I use the same learning rate when I compare the performance for different language models?
https://ai.stackexchange.com/questions/47254/should-i-use-the-same-learning-rate-when-i-compare-the-performance-for-different
<p>If I'm using the same datasets with different language models, such as BERT, DistilBERT, Electra, etc., and my goal is to compare performance metrics such as accuracy, macro F-score, etc., should I use the same learning rate and number of epochs to make the comparison fair, or is it allowed to have different learning rates for each? For example, if BERT provides the best result with a learning rate of 2e-5 and Electra provides the best results with 1e-6, can I record both results and use them?</p>
<p>Since the learning rate is a hyperparameter, using different learning rates or epoch counts for each model is not only allowed but generally encouraged, as long as the hyperparameter tuning process (such as grid search) is consistent across all models. Different models have unique architectures, parameter counts, and training dynamics, which can affect how they respond to specific learning rates. Comparing models using the same fixed learning rate may disadvantage certain models, as they might not reach their optimal performance under those conditions.</p> <p>Finally, it's important to keep other factors constant, such as the training dataset, evaluation metrics, data splits, preprocessing steps, etc. Of course, if resources permit, you can also run all models with the same learning rate and training setup to explore how they perform under identical conditions, which could potentially provide additional insights.</p>
387
language modeling
Can the output of a language model be identical to its training data if finetuned with reference documents also present in the training data?
https://ai.stackexchange.com/questions/48410/can-the-output-of-a-language-model-be-identical-to-its-training-data-if-finetun
<p>E.g. will fine-tuning a language model on text from Wikipedia articles (without modifications), when the model already has Wikipedia data in its pre-training dataset, cause it to reproduce the articles' text verbatim in its output? Assume the model is small (fewer than 3 billion parameters) and was trained on very little data.</p>
<p>LLMs are more likely to memorize frequently seen sequences if the same sequence appears multiple times during pre-training and fine-tuning in repeated epochs. If the fine-tuning dataset is very small, the model might memorize specific phrases, but full articles are unlikely, since in practice a whole article is usually semantically segmented into shuffled mini-batched sequences for training/fine-tuning to ensure regularization and reduce overfitting.</p> <p>In addition, while greedy decoding increases the chance of reproducing memorized sequences, stochastic methods such as temperature scaling and top-k/top-p are usually introduced to make exact replication of a whole article very unlikely. If the input prompt closely matches the beginning of a memorized article, the model may continue it verbatim. Without such cues, outputs are more likely to paraphrase or blend concepts.</p> <p>Having said that, when you fine‑tune on a document the model is already trained on, you increase its exposure count to that text, and memorization increases <a href="https://arxiv.org/pdf/2202.07646" rel="nofollow noreferrer">log‑linearly</a> with the model capacity and the number of times an example appears in training data.</p> <blockquote> <p>Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized training data verbatim... We describe three <em>log-linear</em> relationships that quantify the degree to which LMs emit memorized training data. Memorization significantly grows as we increase (1) the capacity of a model, (2) the number of times an example has been duplicated, and (3) the number of tokens of context used to prompt the model. Surprisingly, we find the situation becomes more complicated when generalizing these results across model families. 
On the whole, we find that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations.</p> </blockquote>
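As a rough empirical probe of this (my own sketch, not the quoted paper's methodology), one can prompt the fine-tuned model with the opening tokens of a training article and measure the longest verbatim token run shared between the continuation and the original text:

```python
def longest_verbatim_run(generated, reference):
    """Length of the longest contiguous token sequence that appears
    in both the generated continuation and the reference document."""
    gen, ref = generated.split(), reference.split()
    best = 0
    # O(n*m) dynamic programming over token positions
    prev = [0] * (len(ref) + 1)
    for g in gen:
        cur = [0] * (len(ref) + 1)
        for j, r in enumerate(ref):
            if g == r:
                cur[j + 1] = prev[j] + 1
                best = max(best, cur[j + 1])
        prev = cur
    return best

article = "the quick brown fox jumps over the lazy dog"
output = "a quick brown fox jumps over a fence"
print(longest_verbatim_run(output, article))  # 5: "quick brown fox jumps over"
```

Long shared runs (tens of tokens) with greedy decoding are a strong signal of memorization; short runs are expected even from paraphrase.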
388
language modeling
Language Model from missing data
https://ai.stackexchange.com/questions/13518/language-model-from-missing-data
<p>I want to learn how a set of operations (my vocabulary) is composed in a dataset of algorithms (corpus).</p> <p>The algorithms are sequences of higher-level operations which have varying low-level implementations. I am able to map raw code to my vocabulary, but not all of it.</p> <p>e.g. I observe a lossy description of an algorithm that does something:</p> <pre><code>X: missing data
Algo 1: BIND3 EXTEND2 X X ROTATE360 X PUSH
Algo 2: X X EXTEND2 ROTATE360
</code></pre> <p>The underlying rotate operation could have very different raw code, but effectively the same function, and so it gets mapped to the same operation.</p> <p>I want to infer what the next operation will be given a sequence of (potentially missing) operations (regions of code I could not map).</p> <p>i.e. I want a probability distribution over my operations vocabulary.</p> <p>Any ideas on the best approach here? The standard approach seems to be to throw out missing data, but I can still learn in these scenarios. Also, the gaps in the code are non-homogeneous -- some could do many things. The alternative is to contract the sequences and lose the meaning of the gaps, or to learn an imputation.</p>
389
language modeling
Normalizing the embedding space of an encoder language model with respect to categorical data
https://ai.stackexchange.com/questions/46252/normalizing-the-embedding-space-of-an-encoder-language-model-with-respect-to-cat
<p>Suppose we have a tree/hierarchy of categories (e.g. categories of products in an e-commerce website), each node being assigned a title. Assume that the title of each node is semantically accurate, meaning it's consistent with the category (the titles of the children nodes) it represents. Now, take an encoder language model (like BERT or Word2Vec) and produce embeddings for each node/category. My goal is to ensure that these embeddings are representative of the categorical hierarchy. I wonder:</p> <ul> <li>Should the embeddings (mostly) be consistent with the categorical hierarchy, even if the categorical hierarchy is very niche and/or uneven (meaning that categorical granularity is not necessarily evenly distributed, if that makes sense)? By consistent, I mean that the embeddings of sibling nodes under one parent should be closer to each other than to children nodes of other parents, and closer to their parent's embeddings than to other parents' embeddings. Is there a nice metric to measure/validate this?</li> <li>If not, what would be good ways to re-map a latent space to make it &quot;nicer&quot; with respect to the categorical hierarchy? I'm thinking either 1) we directly transform the latent space or 2) fine-tune the language encoder model (i.e. BERT). For example: 1) let's say that the shape formed by a category/parent's sub-categories/children is a thin oval, would it be possible and make sense to want to map the space such that this thin oval becomes a circle? Also for 2) would fine-tuning BERT along the task of classifying leaf embeddings' under the correct parent embeddings do the same thing?</li> </ul> <p>I think that this requires the categories to not be so niche and to have some correspondence to regular language usage (and even more after providing context), which is my case.</p> <p>This is just my intuition, so let me know your thoughts. 
I would be interested in exploring anything from the most simple heuristic approaches to SOTA NLP techniques that could pertain to this problem.</p>
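On the "nice metric" part of the first bullet, here is one simple heuristic I can suggest (my own sketch, not a standard benchmark): the fraction of sibling pairs whose cosine similarity beats both siblings' similarity to every node under a different parent:

```python
import numpy as np

def sibling_consistency(embeddings, parent_of):
    """embeddings: {node: vector}; parent_of: {node: parent node}.
    Returns the fraction of sibling pairs that are closer to each other
    (cosine similarity) than either is to any node under another parent."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    nodes = list(parent_of)
    ok = total = 0
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if parent_of[u] != parent_of[v]:
                continue
            total += 1
            s_uv = cos(embeddings[u], embeddings[v])
            others = [w for w in nodes if parent_of[w] != parent_of[u]]
            if all(s_uv > cos(embeddings[u], embeddings[w]) and
                   s_uv > cos(embeddings[v], embeddings[w]) for w in others):
                ok += 1
    return ok / total if total else 1.0
```

A score near 1.0 suggests the embedding space is already consistent with the hierarchy at that level; a low score is a candidate signal for the re-mapping or fine-tuning in the second bullet.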
390
language modeling
What&#39;s the most efficient way of performing batched training of Causal Language Models?
https://ai.stackexchange.com/questions/39817/whats-the-most-efficient-way-of-performing-batched-training-of-causal-language
<p>I have seen a number of ways to train (yes, train, not fine-tune) these models efficiently with batches. I will illustrate these techniques with the following example dataset and context window:</p> <pre><code>Context window: ----------------- Data samples: 1. ### 2. ################ 3. #### 4. ############## 5. ######## 6. ######### </code></pre> <p>Suppose we have a batch size of 2. Our pad token is x</p> <h2>First technique: Vanilla Padding</h2> <pre><code>Context window: ----------------- batch 1: 1. ###xxxxxxxxxxxxx 2. ################ batch 2: 3. ####xxxxxxxxxx 4. ############## batch 3: 5. ########x 6. ######### </code></pre> <h2>Second technique: Bucketed Padding</h2> <p>Samples of similar lengths are batched together to minimise the number of pad tokens</p> <pre><code>Context window: ----------------- batch 1: 1. ###x 3. #### batch 2: 2. ################ 4. ##############xx batch 3: 5. ########x 6. ######### </code></pre> <p>this is <em>uniform length batching</em> described in <a href="https://mccormickml.com/2020/07/29/smart-batching-tutorial/" rel="nofollow noreferrer">this blogpost</a> and referred to as <em>bucketed random sampling</em> in <a href="https://aclanthology.org/2021.findings-acl.74/" rel="nofollow noreferrer">this paper</a>.</p> <h2>Third technique: Concatenating samples</h2> <p>In this technique, we concatenate samples, separating them with a EOS token (E) until they reach the context length. In this way, we have no padding tokens, and the entire context length is used. 
The attention mask keeps track of where the EOS tokens occur.</p> <pre><code>Context window: ----------------- batch 1: ###E############# (1 and part of 2) batch 2: ###E####E######## (rest of 2, 3 and part of 4) batch 3: ######E########E# (rest of 4, 5, part of 6) batch 4: ######## (rest of 6) </code></pre> <p>This technique is referenced at 2:28 of <a href="https://www.youtube.com/watch?v=ma1TrR7gE7I" rel="nofollow noreferrer">this video</a> from <a href="https://huggingface.co/course/chapter7/6" rel="nofollow noreferrer">this huggingface tutorial</a>.</p> <p>With this technique, we reduce the number of batches, and only have to pad the final batch if necessary. However, it is unclear to me whether this is &quot;allowed&quot; for causal language modelling, as it is unclear whether this will cause the causal attention mechanism to attend to tokens from previous samples, only ignoring the EOS token (instead of everything before it)</p> <hr /> <p>Of these 3 techniques, which is the most memory efficient? Which is the most commonly used?</p>
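For concreteness, the packing step of the third technique can be sketched in plain Python (illustrative only; `EOS` is a placeholder token id, and whether attention should be reset at each EOS boundary is exactly the open question above):

```python
EOS = -1  # placeholder id for the EOS separator token

def pack(samples, context_len):
    """Concatenate token-id lists, separated by EOS, then cut the stream
    into fixed-size context windows. Only the last window may be short
    (and would be padded); samples can span window boundaries."""
    stream = []
    for sample in samples:
        stream.extend(sample)
        stream.append(EOS)
    stream.pop()  # no separator needed after the final sample
    return [stream[i:i + context_len] for i in range(0, len(stream), context_len)]

docs = [[1, 2, 3], [4, 5, 6, 7, 8]]
print(pack(docs, 4))  # [[1, 2, 3, -1], [4, 5, 6, 7], [8]]
```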
391
language modeling
How do language models know what they don&#39;t know - and report it?
https://ai.stackexchange.com/questions/40874/how-do-language-models-know-what-they-dont-know-and-report-it
<p>Again and again I ask myself what goes on in a pre-trained transformer-based language model (like ChatGPT9) when it comes to &quot;knowing&quot; that it cannot give an appropriate answer and either</p> <ul> <li><p>states it (&quot;I have not enough information to answer this question.&quot;)</p> </li> <li><p>asks for more specific information (&quot;Please tell me which kind of XY you mean.&quot;)</p> </li> <li><p>calls a plugin (like Wolfram or ScholarAI)</p> </li> </ul> <p>(I assume that this will never happen without reinforcement learning from human feedback. A pre-trained-only model would always answer something (possibly hallucinating) and not &quot;reflect&quot; on its lack of knowledge.)</p> <p>The only possibility that I can see - but it's not really explanatory - is that after some steps of execution the sum of the top_k probabilities of the final vector (which assigns probabilities to all words in the vocabulary) is too small. But what if this happens only late? ChatGPT would already have produced lots of words - but one never observes that it stops generation after some lengthy text and only then ends with something like &quot;Ah, finally I see that I'm missing information. I wasn't aware in the beginning.&quot; ChatGPT <em>immediately</em> admits that it doesn't know (when it does). And when ChatGPT calls a plugin - e.g. ScholarAI - it does so without having produced a single word of response to the last message.</p> <p>In principle, ChatGPT could generate a complete response in the background that is then somehow checked for being &quot;satisfactory&quot;. If yes, it's given as output (simulating word-by-word generation); if not, it's regenerated with some sort of trigger (a hidden token?) to admit that ChatGPT is missing information or to call a plugin.</p> <p>What's the clever trick under the hood (in some technical detail)?</p>
<p>The data it is trained on includes variants of &quot;I don't know&quot;. For instance, if you ask me what is the meaning of life and I reply I don't know, then that is the information schema the AI absorbs. It knows what it does not know, in the same way that it knows what it knows.</p> <p>Here is another way to look at it. When in training, people were asked to interact with GPT 3.5. At that time, the trainers would have received many incorrect responses. They would then flag to GPT that the response is incorrect/inaccurate from which it would learn to either flag the issue to the user upfront (I am only a LLM ... etc. etc.) or to say I do not know, or some variant. In all of these, the chat interactions and the training data enable it to learn the association of a sequence of words and the idea in it with the phrase &quot;I don't know&quot; or &quot;I won't answer that&quot; in the same way that it learns anything else. The guide rails are programmed in through the interactions with human trainers. This way the math does not change. You only need human trainers to interact with the system and they implicitly program the guide rails.</p> <p>The schema is plugin aware but not trained on any specific plugin. The following is the information flow (from <a href="https://platform.openai.com/docs/plugins/introduction" rel="nofollow noreferrer">https://platform.openai.com/docs/plugins/introduction</a>) taken verbatim from the webpage:</p> <ul> <li><p>OpenAI will inject a compact description of your plugin in a message to ChatGPT, invisible to end users. 
This will include the plugin description, endpoints, and examples.</p> </li> <li><p>When a user asks a relevant question, the model may choose to invoke an API call from your plugin if it seems relevant; for POST requests, we require that developers build a user confirmation flow to avoid destructive actions.</p> </li> <li><p>The model will incorporate the API call results into its response to the user.</p> </li> <li><p>The model might include links returned from the API calls in its response. These will be displayed as rich previews (using the OpenGraph protocol, where we pull the site_name, title, description, image, and url fields).</p> </li> <li><p>The model can also format data from your API in markdown and the ChatGPT UI will render the markdown automatically.</p> </li> </ul> <p>In a nutshell, ChatGPT is intelligent but its plugin system is very limited.</p>
392
language modeling
Efficient Matching of Sample Requests to Sample Offers Using Large Language Models
https://ai.stackexchange.com/questions/46052/efficient-matching-of-sample-requests-to-sample-offers-using-large-language-mode
<p>I want to discuss an interesting matching problem. We aim to match sample requests with corresponding sample offers. Here are some examples:</p> <p>Sample Requests:</p> <ul> <li>Need help installing Linux on my old laptop.</li> <li>Looking for a tutor in specialist mathematics.</li> <li>Need a second person to help assemble an Ikea bed.</li> </ul> <p>Sample Offers:</p> <ul> <li>Electron Workshop and Linux Users Victoria Tech.</li> <li>VCE maths tutor (Further, Methods, Specialist).</li> <li>Heading from Lake Eildon to Melbourne. Anyone available?</li> </ul> <p>To solve this problem, I used a large language model (LLM) to perform the matching. My approach involves embedding the requests and offers using vector embeddings and then employing a Retrieval-Augmented Generation (RAG) framework using LangChain. The method has shown promising results.</p> <p>Here's a snippet of my code:</p> <pre><code># Initialize the OpenAI embeddings
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)

# Sample data
strings_to_embed = sample_off

# Create Chroma vector store from the texts using the built-in embeddings
vector_store = Chroma.from_texts(texts=strings_to_embed, embedding=embeddings)
retriever = vector_store.as_retriever()

template = &quot;&quot;&quot;
Match with the best offer based on the following context. Make different phrasing of the offer then search it. Only return the matching answer. No extra reasoning or string should be added:
{context}

Question: {question}
&quot;&quot;&quot;

prompt = ChatPromptTemplate.from_template(template=template)

rag_chain = (
    {&quot;context&quot;: retriever, &quot;question&quot;: RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

# Now to match the best request with the best offer, we invoke the RAG chain
rag_chain.invoke(sample_req[0])
</code></pre> <p>While this method works well, I believe it may require too many LLM calls, which is not efficient. I am looking for ideas and discussions on how to improve this approach. Specifically, I want to explore how we can use the retriever more efficiently. For example, structuring the sample offers into different classes could potentially make the querying process more straightforward and efficient.</p> <p>Any insights or suggestions on optimizing this process would be greatly appreciated!</p>
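One direction worth noting: the retrieval step alone already yields a ranking without any completion call. A sketch using plain cosine similarity over precomputed embedding vectors (the vectors here stand in for whatever embedding API is used):

```python
import numpy as np

def best_offer(request_vec, offer_vecs):
    """Return the index of the offer whose embedding has the highest
    cosine similarity with the request embedding -- one embedding call
    per request, zero LLM completion calls."""
    offers = np.asarray(offer_vecs, dtype=float)
    offers = offers / np.linalg.norm(offers, axis=1, keepdims=True)
    req = np.asarray(request_vec, dtype=float)
    req = req / np.linalg.norm(req)
    return int(np.argmax(offers @ req))
```

An LLM call would then only be needed as an optional re-ranking or verification step on the top few candidates.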
393
language modeling
How can I improve this toy Graph Neural Network Generative Language model
https://ai.stackexchange.com/questions/41921/how-can-i-improve-this-toy-graph-neural-network-generative-language-model
<h3>Background</h3> <p>I'm an undergraduate student with research interests in a field of physics that has significant overlap with graph theory, and a functioning knowledge of how simple neural nets work and how to build them with TensorFlow and Keras. As many people are, I'm fascinated by the recent advancements in transformer-based language models, and I've spent the last several weeks reading up on them in an attempt to construct my own simple &quot;mini GPT&quot;. In doing so I encountered Graph Neural Networks, and decided I'd try instead to construct a generative language model out of these, inspired by the fact that graphs are (in a very hand-wavy sense) perhaps inherently more amenable to encoding relationships like &quot;attention&quot;, etc. I'm aware that the task I'm trying to accomplish could probably be much more easily achieved using alternative architectures. This is mostly just a fun &quot;what if&quot; project.</p> <hr /> <h3>Code</h3> <p>I still haven't quite wrapped my head around all the details of how and what a Graph Neural Net learns, nor what the different types of GNN layers do. Nevertheless, I've constructed a simple GNN using the <code>Spektral</code> library, which takes an input string, and predicts the rest of the string word-by-word by predicting the most probable next token. 
Here's what I have so far:</p> <pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences
from spektral.layers import GCNConv
from spektral.utils import normalized_adjacency
import numpy as np
import random
import os

def get_training_data(training_data_dir):
    filenames = []
    for filename in os.listdir(training_data_dir):
        filenames.append(os.path.join(training_data_dir, filename))
    random.shuffle(filenames)
    lines = []
    for filename in filenames:
        with open(filename, &quot;r&quot;) as file:
            for line in file:
                lines.append(line.strip())
    return lines

# Import data
training_corpus = get_training_data(&quot;./training_data&quot;)
tokens = [line.split() for line in training_corpus]
vocab = set(token for line in tokens for token in line)
vocab_size = len(vocab)
print(&quot;Vocabulary size: &quot;, vocab_size, &quot; words&quot;)

# Tokenize
word_to_idx = {word: idx for idx, word in enumerate(vocab)}
idx_to_word = {idx: word for word, idx in word_to_idx.items()}

# Pad and truncate sequences to a fixed length
train_data = [[word_to_idx[token] for token in line] for line in tokens]
train_data_padded = pad_sequences(train_data, maxlen=vocab_size, padding='pre', truncating='pre')

# Shift train_data_padded to create train_labels
train_labels = np.roll(train_data_padded, -1, axis=1)
train_labels[train_labels &gt;= vocab_size] = 0
train_data_padded = np.array(train_data_padded)
train_labels = np.array(train_labels)

# Construct token-to-token similarity matrix
similarity_matrix = np.zeros((vocab_size, vocab_size))
for sentence in tokens:
    for i, token1 in enumerate(sentence):
        for j, token2 in enumerate(sentence):
            if i != j:
                num_words_between = pow(abs(j - i), 2)
                similarity_matrix[word_to_idx[token1], word_to_idx[token2]] += num_words_between
adjacency_matrix = normalized_adjacency(similarity_matrix)

# Construct model
input_layer = Input(shape=(None,))
embedding = Embedding(input_dim=vocab_size, output_dim=vocab_size)(input_layer)
gcn_layer = GCNConv(vocab_size)([embedding, adjacency_matrix])
output_layer = Dense(vocab_size, activation='softmax')(gcn_layer)
model = tf.keras.Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Training loop
num_epochs = 100
model.fit(train_data_padded, tf.keras.utils.to_categorical(train_labels, num_classes=vocab_size), epochs=num_epochs)

padding_token = &quot;&lt;PAD&gt;&quot;

def respond_to_text(initial_string):
    initial_tokens = initial_string.split()
    for i in range(len(initial_tokens)):
        if initial_tokens[i] not in word_to_idx:
            initial_tokens[i] = &quot;&lt;PAD&gt;&quot;
    while len(initial_tokens) &lt; vocab_size:
        initial_tokens.insert(0, padding_token)
    generated_tokens = [word_to_idx[token] for token in initial_tokens]
    max_generation_length = 40
    # Number of tokens from the initial string that have been used
    initial_tokens_used = len(initial_tokens)
    for _ in range(max_generation_length):
        current_seq = np.array([generated_tokens[-vocab_size:]])  # Always use the last vocab_size tokens
        next_token_probs = model.predict(current_seq)[0][-1]
        next_token = np.random.choice(np.arange(vocab_size), p=next_token_probs)
        generated_tokens.append(next_token)
        # If there are more initial tokens to use, do that
        if initial_tokens_used &lt; len(initial_tokens):
            generated_tokens[-vocab_size:] = [word_to_idx[token] for token in initial_tokens[initial_tokens_used:]]
            initial_tokens_used = len(initial_tokens)
    # Generate text
    generated_text = [idx_to_word[idx] for idx in generated_tokens]
    # Remove trailing &quot;&lt;PAD&gt;&quot; tokens
    generated_text = [token for token in generated_text if token != padding_token]
    # Join the tokens into text
    generated_text = &quot; &quot;.join(generated_text)
    print(&quot;Generated Text:&quot;, generated_text)

while True:
    input_string = input()
    respond_to_text(input_string)
</code></pre> <p>It's relatively simple, thus far.
I train the network on strings of text from a subset of the WikiQA corpus, which I pre-process by removing all punctuation and capitalization. I define the elements of the adjacency matrix to be the squared distance between tokens (and 0 between a token and itself). I'm using <code>GCNConv()</code>, admittedly without knowing the intimate details of how it is different from other options provided by <code>Spektral</code>. Since the number of tokens in the input string must be identical to the vocabulary size, I pre-pend it with &quot;<code>&lt;PAD&gt;</code>&quot; tokens, and only pass in the last <code>vocab_size</code> tokens each time I generate a new token. I deal with unknown tokens by replacing them with &quot;<code>&lt;PAD&gt;</code>&quot;.</p> <p>If I understand correctly, the GNN learns the &quot;strengths&quot; of connections between nodes (words), i.e. the edge weights and, as I imagine it, the adjacency matrix encodes a sort of very weak form of &quot;attention&quot;.</p> <hr /> <h3>Question</h3> <ol> <li><strong>What and how, precisely, is this learning?</strong> I know that in a simple feed-forward neural network, a set of weights &quot;between&quot; perceptrons is learned. Do I understand correctly that what is being learned here are edge weights? How do node features and the adjacency matrix factor into this, and what does the model &quot;do&quot; with some input text? How are the next token probabilities calculated? I only understand this on a very superficial level, sufficient so as to produce this seemingly somewhat functional script. I can see that, when I plot the graph associated with the <code>GCNConv</code> layer using <code>networkx</code> after fitting, words that are similar (e.g. 
&quot;boat&quot;, &quot;ocean&quot;, &quot;water&quot;) tend to cluster.</li> <li><strong>How can I improve the model?</strong>* I've spent a fair amount of time reading GNN research papers, however they're written largely in what appears to be very subfield-specific jargon that is unfamiliar to me as someone very familiar with graph theory and somewhat familiar with machine learning. I'd like to begin by taking smaller steps, hopefully starting with some suggestions provided by the community here. I have no strict, &quot;objective&quot; criteria in mind, outside of producing more realistic, human-like text.</li> </ol> <p>Here's some example input and output:</p> <ol> <li><strong>Input:</strong> &quot;the traffic&quot; -&gt; <strong>Output:</strong> &quot;the traffic was invaluable sense day playing urban evening together made past weekends board plot off color cookies calm concert flowers express eye-opening learn garden outside satisfying laughter movie waves how's sunrise of try traffic scratch day captivating hobby live blanket delicious&quot;</li> <li><strong>Input:</strong> &quot;i enjoy&quot; -&gt; <strong>Output:</strong> &quot;i enjoy watching magical journaling awe-inspiring buds tail feeling entertainment resist homemade ones flavors soothing well-being laughter life culture cleanup picnics beauty accomplishment nature mood ocean up satisfying magical contagious joy admiring feeling live marathons beauty views things expression hikes next happiness vacations&quot;</li> </ol> <p><span class="math-container">$*$</span> Outside of the obvious, e.g. more training data, adjusting hyperparameters, etc.</p>
<p>First of all, I would like to encourage you to keep trying new things; it sounds super fun! There are a few things I would like to clarify about Graph Neural Networks (GNNs) and Graph Convolutional Networks (GCNs), and then I will answer your questions as best I can. As a side note, there are quite a few questions here, so I won't address every one of them; I will focus on the main ones.</p> <h2>Adjacency Matrix and Attention Mechanism</h2> <p>I will address the notion that <strong>&quot;the adjacency matrix encodes a sort of very weak form of attention&quot;</strong>. This notion is partially correct. In fact, it can be shown that the attention mechanism of the transformer is an instance of the generalized message-passing function of a GNN where the underlying graph is complete, i.e. its adjacency matrix is a matrix of all ones. If you continue your research on GNNs, you will come across another famous architecture called the Graph Attention Network (GAT), which is essentially an adaptation of the transformer's attention mechanism to simpler, static graph data. If you want to understand how the transformer is a type of GNN, I can explain it to you later.</p> <h2>First Question: What Exactly is Being Learned?</h2> <p>What is learned in a graph neural network depends on the type of network that you are implementing. Generally, there are two types of features that you can input into a GNN (for the forward pass): the feature data and the edge data of each node. The <code>GCNConv</code> layer only implements learning on the feature data (which are the embeddings of each token in your case). So, in your code, <code>GCNConv</code> is learning the weight matrix that is multiplied with the embeddings. 
In your code, there are no edge weights being trained, or even initialized, because there is no edge data in the first place.</p> <p>GCN uses the following message-passing formula: <span class="math-container">$ \mathbf{h}_i^{(l+1)} = \sigma \left( \sum_{j \in \mathcal{N}(i) \cup \{i\}} \mathbf{A}_{ij} \mathbf{h}_j^{(l)} \mathbf{W}^{(l)} \right) \ $</span></p> <ul> <li><p><span class="math-container">$\mathbf{h}_i^{(l)}$</span> is the feature vector of node <span class="math-container">$i$</span> at layer <span class="math-container">$l$</span>.</p> </li> <li><p><span class="math-container">$\mathbf{W}^{(l)}$</span> is the weight matrix for layer <span class="math-container">$l$</span>.</p> </li> <li><p><span class="math-container">$\mathcal{N}(i)$</span> is the set of neighbors of node <span class="math-container">$i$</span>.</p> </li> <li><p><span class="math-container">$\mathbf{A}_{ij}$</span> is the element at position <span class="math-container">$(i,j)$</span> of the adjacency matrix <span class="math-container">$\mathbf{A}$</span>. This is either 1 or 0, indicating whether there is an edge between nodes <span class="math-container">$i$</span> and <span class="math-container">$j$</span>, or a weight if the graph is weighted.</p> </li> <li><p><span class="math-container">$\sigma$</span> is an activation function, such as ReLU or sigmoid.</p> </li> </ul> <p>I would recommend watching this <a href="https://youtu.be/ijmxpItkRjc?list=PLSgGvve8UweGx4_6hhrF3n4wpHf_RV76_" rel="nofollow noreferrer">video</a>, which in my opinion explains GCNs very well, if you have any more doubts.</p> <h2>Second Question: How Can the Model be Improved?</h2> <p>I can understand how you are feeling about GNNs. I did research on them, and they can be very niche with some of the terminology. I would recommend that you understand the basics of the GNN architecture very well and then follow up with other types of GNNs. 
Graph theory is definitely useful, but in my opinion it mainly helps you understand the basics of graphs; the rest is much more specific to the field of GNNs.</p> <p>As for the question: sadly, deep learning is not a very interpretable field, which means that sometimes we don't know why some architectures work better than others. For example, the transformer is currently the best model for modeling language. We think this is because of the attention mechanism, which is able to take all tokens in the context into account at the same time. However, it is also a very scalable model that does not suffer from the vanishing gradient problem, unlike recurrent neural networks.</p> <p>Recently, there has been a new type of RNN called <a href="https://arxiv.org/abs/2305.13048" rel="nofollow noreferrer">RWKV</a> that avoids the vanishing gradient problem of classic RNNs and does not use the attention mechanism. We also have <a href="https://arxiv.org/abs/2111.09509" rel="nofollow noreferrer">RAVEN</a>, an example of how models built with RWKV can be just as performant as transformers.</p> <p>What I'm trying to say is that it is very difficult to predict what will improve your model's accuracy beyond the basics of training on more data and tuning hyperparameters. If I had to try, I would probably make my GNN resemble the transformer architecture as much as possible, adding attention, residual layers, and so on. However, this is something that you will have to explore and experiment with.</p> <h2>Final Thoughts</h2> <p>I don't intend to discourage you; quite the opposite. However, I should note that GCNs may not be best suited for this task. You might also explore standard NLP techniques for boosting accuracy, such as improved tokenization and better data quality. For more ambitious improvements, you could look into Reinforcement Learning from Human Feedback (RLHF) techniques.</p>
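As a concrete illustration of the GCN propagation rule quoted above, here is a minimal NumPy sketch (a toy graph and made-up dimensions, not Spektral's actual implementation; note that the Kipf &amp; Welling GCN also normalizes the adjacency by node degrees, as done here). It also shows the point about what is trainable: the only learned parameters are in `W`, not in the adjacency matrix.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN message-passing step: H' = ReLU(A_norm @ H @ W), where
    A_norm is A with self-loops added (the N(i) ∪ {i} in the sum) and
    symmetric degree normalization applied."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d[:, None] * d[None, :] # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(A_norm @ H @ W, 0.0)   # sigma = ReLU

# Toy path graph with 3 nodes, 4-dim node features, 2-dim output.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))   # node features (token embeddings)
W = rng.normal(size=(4, 2))   # the only trainable parameters of the layer
print(gcn_layer(A, H, W).shape)   # (3, 2)
```

Stacking several such layers lets information propagate between nodes that are several hops apart.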
394
language modeling
How can (pretrained) language models actively seek additional training data - possibly reference request?
https://ai.stackexchange.com/questions/37827/how-can-pretrained-language-models-actively-seek-additional-training-data-po
<p>I am reading the paper &quot;Large Language Models Can Self-Improve&quot; <a href="https://arxiv.org/abs/2210.11610" rel="nofollow noreferrer">https://arxiv.org/abs/2210.11610</a>, in which the authors show that an LLM can generate Chain-of-Thought sequences, and even novel questions with their respective CoT responses, then select the best sequences and fine-tune itself on them. Such self-reflection improves the LLM. It also mimics how human beings learn material by rethinking it. The usual human learning process is guided by preset questions - courses and textbooks provide them. But more mature learners can usually discover gaps in their knowledge and understanding themselves and form the most relevant questions and open problems, whose solution improves the understanding, and hence the assimilation, of the material in an actionable form.</p> <p>Such a process is known in psychology as apperception <a href="https://en.wikipedia.org/wiki/Apperception" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Apperception</a>.</p> <p>My personal experience shows that apperception can be greatly enhanced by (and sometimes even requires) reading additional, carefully selected material. Sometimes that reading gives additional information. 
Sometimes it just reformulates the existing information and hints at additional links among the existing pieces of knowledge.</p> <p>Apparently, if an LLM is working in self-improving mode, it could be highly beneficial for it to actively seek specific additional information and fine-tune itself on it.</p> <p>My question is: how can such active seeking of specific additional information be done, and are there any research efforts that have already attempted something like this?</p> <p><strong>Additional info</strong>: there is actually at least one paper <a href="https://parl.ai/projects/seeker/" rel="nofollow noreferrer">https://parl.ai/projects/seeker/</a> (<a href="https://arxiv.org/abs/2203.13224" rel="nofollow noreferrer">https://arxiv.org/abs/2203.13224</a>) that uses this self-improvement scheme.</p>
395
language modeling
How can a language model keep track of the provenance of the main knowledge/sources used to generate a given output?
https://ai.stackexchange.com/questions/38372/how-can-a-language-model-keep-track-of-the-provenance-of-the-main-knowledge-sour
<p>One of the main criticisms against the use of ChatGPT on Stack Exchange is that it doesn't attribute the main knowledge/sources used to generate a given output. How can a language model keep track of the provenance of the main knowledge/sources used to generate a given output?</p>
<p>Reddit user <a href="https://www.reddit.com/user/skyebreak" rel="noreferrer">skyebreak</a> <a href="https://redd.it/znl8f8" rel="noreferrer">pointed</a> me to a relevant paper on that topic:</p> <p>Gao, Luyu, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao et al. &quot;<a href="https://arxiv.org/abs/2212.08037" rel="noreferrer">Attributed text generation via post-hoc research and revision.</a>&quot; arXiv preprint arXiv:2210.08726 (2022).</p> <blockquote> <p>Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducable evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and <strong>give some hints as to how to address a third key question (How to build LLMs with attribution?)</strong>.</p> </blockquote>
396
language modeling
Can in principle GPT language models learn physics?
https://ai.stackexchange.com/questions/23418/can-in-principle-gpt-language-models-learn-physics
<p>Does anyone know of research involving GPT models learning not only from regular text, but also from physics books with the equations written in LaTeX format?</p> <p>My intuition is that the model might learn the rules relating equations and deductions, as these models can learn statistically what correlates with what. I understand that the results can also be a little nonsensical, like the sometimes surreal paragraphs written by these models.</p> <p>Have there been any attempts to do this?</p>
397
language modeling
Data extraction from diagrams using Vision Language Model
https://ai.stackexchange.com/questions/47707/data-extraction-from-diagrams-using-vision-language-model
<p>I'm looking for some ideas to accurately extract data flows from a system context diagram. I've tried a number of models and prompt-engineering techniques, but I'm still getting missing flows, hallucinated flows that don't exist in the diagram, and incorrect data flows.</p> <p><strong>What I've tried:</strong></p> <ol> <li>prompt engineering with vision models (Phi-3-vision-128k-instruct, llama-3.2-90b-vision-instruct)</li> <li>splitting the diagram into smaller parts</li> <li>using OCR, then feeding the extracted data back into the vision model</li> </ol> <p><strong>Example of diagram:</strong> <a href="https://i.sstatic.net/MBj4Zjdp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBj4Zjdp.png" alt="Example of diagram" /></a></p>
398
language modeling
How to train a language model to write poem?
https://ai.stackexchange.com/questions/41936/how-to-train-a-language-model-to-write-poem
<p>I am using some of the LLM applications, and all of them are better at providing formal, steady conversation than at writing text with style. So I wonder whether it is possible to train an LLM that is better at writing poems, or other stylistic text in general.</p> <p>Does this result from the dataset, the model structure, or the loss function? What should I do if I want to train a model that can write poems? Should I build a special dataset to train LoRA models, or try to design a new loss function?</p>
<h3>General advice</h3> <p>LLMs are usually trained with big datasets, which probably include poems, so I wouldn't be surprised if they can come up with poem-like texts. Now, if you want to fine-tune some LLM to specialise in poem writing, then yes, you could start by training it with a dataset of only poems. ML is (sadly) mostly an engineering task nowadays, so there isn't just one right answer - you may find that changing the loss function is useful, or not, etc. I also wouldn't be surprised if people have already attempted what you're trying to do, so you may want to do a little bit of research on that.</p> <h3>Example</h3> <p>I've just tried to ask ChatGPT to generate a poem with the following prompt.</p> <blockquote> <p>Write a simple poem on how chat models are misleading and can be informational dangerous.</p> </blockquote> <p>Answer</p> <pre><code>In realms of code where words entwine,
Chat models dance, their language fine.
But be aware, their wiles may play,
A double-edged sword, in shadows they sway.

With clever words and friendly tone,
They seem like friends, in language sown.
Yet truth can twist, and facts may bend,
In their intricate web, they seek to pretend.

Informational danger lurks within,
As chat models spin tales to win.
Trust not every phrase they craft,
For falsehoods may lurk, hiding in the draft.

Critical minds must always be near,
To separate fact from fiction, clear.
In the realm of bits, caution's the key,
Lest we're led astray by deceptive glee.

So heed this warning, take the cue,
Chat models' truths might not be true.
Stay vigilant, wise, and aware,
For in their charm, risks often flare.
</code></pre>
399
transfer learning
Transfer Learning of Numerical Data
https://ai.stackexchange.com/questions/24136/transfer-learning-of-numerical-data
<p>It seems like transfer learning is only applicable to neural networks. Is this a correct assumption?</p> <p>While I was looking for examples of Transfer Learning, most seemed to be based on image data, audio data, or text data. I was not able to find an example of training a neural network on numerical data.</p> <p>I want to use transfer learning in this particular scenario: I have a lot of numerical data from an old environment, with binary classification labels, and utilizing this, want to train on a new environment to do the same binary classification.</p> <p>The dataset would look something like this <a href="https://i.sstatic.net/1CKRY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1CKRY.png" alt="Sample Table" /></a></p> <p>Is this possible? What would the model look like?</p>
<blockquote> <p>It seems like transfer learning is only applicable to neural networks. Is this a correct assumption?</p> </blockquote> <p>No. The <a href="https://en.wikipedia.org/wiki/Transfer_learning" rel="nofollow noreferrer">Wiki</a> page gives you pointers to several examples in other methodologies.</p> <blockquote> <p>While I was looking for examples of Transfer Learning, most seemed to be based on image data, audio data, or text data. I was not able to find an example of training a neural network on numerical data.</p> </blockquote> <p>All the cases you mention are converted to numerical data: image and audio usually via sampling, text via one-hot encoding.</p> <blockquote> <p>I want to use transfer learning in this particular scenario: I have a lot of numerical data from an old environment, with binary classification labels, and utilizing this, want to train on a new environment to do the same binary classification.</p> </blockquote> <p>That is not transfer learning. Transfer learning applies when there is a change in the domain (input features) or in the task (output labels).</p> <blockquote> <p>The dataset would look something like this Sample Table Is this possible? What would the model look like?</p> </blockquote> <p>For a simple case like the one you present, a simple network with one hidden layer will probably be enough. Train it with the original pairs of {features, label} or, if those are not available, use the current predictor to obtain the label from the features.</p>
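To make the last suggestion concrete, here is a minimal NumPy sketch of such a one-hidden-layer network trained on {features, label} pairs. The data is synthetic, since the real columns from the question's table aren't specified; in practice you would use a framework (Keras, PyTorch, scikit-learn) rather than hand-written backpropagation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the tabular data: 4 numeric features,
# binary label that is 1 when the feature sum is positive.
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)

# One-hidden-layer network, as suggested above.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):                        # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()         # predicted P(label = 1)
    g_out = (p - y)[:, None] / len(y)        # d(log-loss)/d(output logit)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(axis=0)

acc = float(((p > 0.5) == (y > 0.5)).mean())
print(f"training accuracy: {acc:.2f}")
```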
400
transfer learning
What is layer freezing in transfer learning?
https://ai.stackexchange.com/questions/22963/what-is-layer-freezing-in-transfer-learning
<p>Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem.</p> <p>In the Transfer Learning, we take layers from a previously trained model and freeze them.</p> <p><strong>Why is this layer freezing required and what are the effects of layer freezing?</strong></p>
<p><strong>Why is this layer freezing required?</strong></p> <p>It's not.</p> <p><strong>What are the effects of layer freezing?</strong> The consequences are:</p> <p>(1) Training should be <em>faster</em> (the gradient will have far fewer components)</p> <p>(2) It should require <em>less</em> data to train on</p> <p>If you do unfreeze the weights, I'd expect your performance to be better, because you are adjusting (i.e., fine-tuning) the parameters to your specific problem at hand. I am not sure how large the marginal improvements are in practice, as I have not experimented much with fine-tuning (are the improvements typically a 0.01% reduction in error rate? Not sure.)</p>
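As a toy NumPy sketch of what freezing amounts to (made-up data; in a real framework you would instead mark the layer's parameters as non-trainable): the frozen layer simply never receives a gradient update, so the gradient has fewer components, and the frozen layer's output can even be computed once and cached.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
y = rng.normal(size=(64, 1))

W_frozen = rng.normal(size=(10, 5))   # pretend pre-trained layer: kept fixed
W_head = rng.normal(size=(5, 1))      # new task-specific layer: trainable

# Because the frozen layer never changes, its output can be computed
# once and cached -- part of why training with frozen layers is faster.
feats = X @ W_frozen

loss_before = float(((feats @ W_head - y) ** 2).mean())
for _ in range(2000):
    err = feats @ W_head - y
    W_head -= 0.01 * feats.T @ err / len(X)  # gradient only over W_head
    # W_frozen gets no update: that is all "freezing" means.
loss_after = float(((feats @ W_head - y) ** 2).mean())
print(loss_before, "->", loss_after)
```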
401
transfer learning
What is the relation between self-taught learning and transfer learning?
https://ai.stackexchange.com/questions/27136/what-is-the-relation-between-self-taught-learning-and-transfer-learning
<p>I am new to transfer learning and I start by reading <a href="https://ieeexplore.ieee.org/document/5288526" rel="nofollow noreferrer">A Survey on Transfer Learning</a>, and it stated the following:</p> <blockquote> <p>according to different situations of labeled and unlabeled data in the source domain, we can further categorize the inductive transfer learning setting into two cases:</p> <p>case <span class="math-container">$(a)$</span> (It is irrelevant to my question).</p> <p>case <span class="math-container">$(b): $</span> No labeled data in the source domain are available. In this case, the inductive transfer learning setting is similar to the self-taught learning setting, which is first proposed by Raina et al. [22]. In the self-taught learning setting, the label spaces between the source and target domains may be different, which implies the side information of the source domain cannot be used directly. Thus, it’s similar to the inductive transfer learning setting where the labeled data in the source domain are unavailable.</p> </blockquote> <p>From that, I understand that self-taught learning is inductive transfer learning.</p> <p>But I opened the paper of self-taught learning that was mentioned (i.e <a href="https://www.researchgate.net/publication/200038912_Self-taught_learning_Transfer_learning_from_unlabeled_data" rel="nofollow noreferrer">paper by Raina et al. [22].</a>), and It stated the following in the introduction:</p> <blockquote> <p>Because self-taught learning places significantly fewer restrictions on the type of unlabeled data, in many practical applications (such as image, audio or text classification) it is much easier to apply than typical semi-supervised learning or transfer learning methods.</p> </blockquote> <p>And here it looks like transfer learning is different from self-taught learning.</p> <p>So what is the right relation between them?</p>
<p>After a lot of searching, I think self-taught learning is a category of transfer learning. When the self-taught learning paper was published (2007), there wasn't any good survey on transfer learning; as the high citation count suggests, Pan's survey (published in 2009) describes transfer learning in a clear way that did not exist before it.</p> <p>Also, it is reasonable to consider self-taught learning a category of transfer learning because it actually transfers the knowledge learned from unlabelled data to the supervised task that we want to do (there is no need for the unlabelled data used for training to follow the same class labels or generative distribution as the labelled data that will be used for the supervised task).</p> <p>If someone finds something wrong with my answer, or something missing, please tell me.</p>
402
transfer learning
How is few-shot learning different from transfer learning?
https://ai.stackexchange.com/questions/25615/how-is-few-shot-learning-different-from-transfer-learning
<p>To my understanding, transfer learning helps to incorporate data from other related datasets and achieve the task with less labelled data (maybe in 100s of images per category).</p> <p>Few-shot learning seems to do the same, with maybe 5-20 images per category. Is that the only difference?</p> <p>In both cases, we initially train the neural network with a large dataset, then fine-tune it with our custom datasets.</p> <p>So, how is few-shot learning different from transfer learning?</p>
<p>They use the same techniques, but study different problems.</p> <p>Transfer learning does not always imply that the novel classes have very few samples (as few as 1 per class). Few-shot learning does.</p> <p>The goal of transfer learning is to obtain <strong>transferrable features</strong> that can be used for a wide variety of downstream discriminative tasks. One example is using an ImageNet-pretrained model as an initialization for any downstream task, but note that <strong>we need to train on large amounts of data on those novel classes</strong> for the model to be suitable for that task.</p> <p>Note that you can't fine-tune an ImageNet classifier on a few examples of COCO and expect it to generalize well, because it won't. It wasn't explicitly optimized for few-sample learning.</p> <p>In few-shot learning, our aim is to obtain models that can generalize from few samples. Such a model could be transfer-learned (with certain changes to the usual transfer learning scenario), or it could be meta-learned. It need not be either: the model could simply be augmented with data from the novel classes at test time, and a classifier could be trained from scratch.</p>
403
transfer learning
What are the differences between transfer learning and meta learning?
https://ai.stackexchange.com/questions/18232/what-are-the-differences-between-transfer-learning-and-meta-learning
<p>What are the differences between meta-learning and transfer learning?</p> <p>I have read 2 articles on <a href="https://qr.ae/Tdowsi" rel="noreferrer">Quora</a> and <a href="https://towardsdatascience.com/icml-2018-advances-in-transfer-multitask-and-semi-supervised-learning-2a15ef7208ec" rel="noreferrer">TowardDataScience</a>. </p> <blockquote> <p>Meta learning is a part of machine learning theory in which some algorithms are applied on meta data about the case to improve a machine learning process. The meta data includes properties about the algorithm used, learning task itself etc. Using the meta data, one can make a better decision of chosen learning algorithm(s) to solve the problem more efficiently.</p> </blockquote> <p>and</p> <blockquote> <p>Transfer learning aims at improving the process of learning new tasks using the experience gained by solving predecessor problems which are somewhat similar. In practice, most of the time, machine learning models are designed to accomplish a single task. However, as humans, we make use of our past experience for not only repeating the same task in the future but learning completely new tasks, too. That is, if the new problem that we try to solve is similar to a few of our past experiences, it becomes easier for us. Thus, for the purpose of using the same learning approach in Machine Learning, transfer learning comprises methods to transfer past experience of one or more source tasks and makes use of it to boost learning in a related target task.</p> </blockquote> <p>The comparisons still confuse me as both seem to share a lot of similarities in terms of reusability. Meta-learning is said to be "model agnostic", yet it uses metadata (hyperparameters or weights) from previously learned tasks. The same goes for transfer learning, as it may partially reuse a trained network to solve related tasks. 
I understand that there is a lot more to discuss, but, broadly speaking, I do not see so much difference between the two.</p> <p>People also use terms like "meta-transfer learning", which makes me think both types of learning have a strong connection with each other.</p> <p>I also found a <a href="https://stats.stackexchange.com/q/255025/82135">similar question</a>, but the answers seem not to agree with each other. For example, some may say that multi-task learning is a sub-category of transfer learning, others may not think so.</p>
<p>First of all, I would like to say that it is possible that these terms are used inconsistently, given that at least transfer learning, AFAIK, is a relatively new expression, so, the general trick is to take terminology, notation and definitions with a grain of salt. However, in this case, although it may sound confusing to you, all of the current descriptions on this page (in your question and the other answers) don't seem inconsistent with my knowledge. In fact, I think I had already roughly read some of the cited research papers (e.g. the MAML paper).</p> <p>Roughly speaking, although you can have formal definitions (e.g. the one in the MAML paper and also described in <a href="https://ai.stackexchange.com/a/18404/2444">this answer</a>), which may not be completely consistent across sources, <strong>meta-learning</strong> is about <strong>learning to learn</strong> or learning something that you usually don't directly learn (e.g. the hyperparameters), where learning is roughly a synonym for optimization. In fact, the meaning of the word "meta" in meta-learning is</p> <blockquote> <p>denoting something of a <strong>higher or second-order kind</strong></p> </blockquote> <p>For example, in the context of training a neural network, you want to find a neural network that approximates a certain function (which is represented by the dataset). To do that, usually, you manually specify the optimizer, its parameters (e.g. the learning rate), the number of layers, etc. So, in this usual case, you will train a network (learn), but you will not know that the hyperparameters that you set are the most appropriate ones. So, in this case, training the neural network is the task of "learning". If you also want to learn the hyperparameters, then you will, in this sense, learn how to learn.</p> <p>The concept of meta-learning is also common in reinforcement learning. 
For example, in the paper <a href="https://www.semanticscholar.org/paper/Metacontrol-for-Adaptive-Imagination-Based-Hamrick-Ballard/099cdb087f240352a02286bf9a3e7810c7ebb02b" rel="noreferrer">Metacontrol for Adaptive Imagination-Based Optimization</a>, they even formalize the concept of a meta-Markov decision process. If you read the paper, which I did a long time ago, you will understand that they are talking about a higher-order MDP.</p> <p>To conclude, in the context of machine learning, meta-learning usually refers to learning something that you usually don't learn in the standard problem or, as the definition of meta above suggests, to perform "higher-order" learning. </p> <p><strong>Transfer learning</strong> is often used as a synonym for fine-tuning, although that's not always the case. For example, in <a href="https://www.tensorflow.org/tutorials/images/transfer_learning" rel="noreferrer">this TensorFlow tutorial</a>, transfer learning is used to refer to the scenario where you freeze (i.e. make the parameters non-trainable) the convolution layers of a model <span class="math-container">$M$</span> pre-trained on a dataset <span class="math-container">$A$</span>, replace the pre-trained dense layers of model <span class="math-container">$M$</span> on dataset <span class="math-container">$A$</span> with new dense layers for the new tasks/dataset <span class="math-container">$B$</span>, then retrain the new model, by adjusting the parameters of this new dense layer, on the new dataset <span class="math-container">$B$</span>. There are also papers that differentiate the two (although I don't remember which ones now). If you use transfer learning as a synonym for fine-tuning, then, roughly speaking, transfer learning is to use a pre-trained model and then slightly retrain it (e.g. with a smaller learning rate) on a new but related task (to the task the pre-trained model was originally trained for), but you don't necessarily freeze any layers. 
So, in this case, fine-tuning (or transfer learning) means to tune the pre-trained model to the new dataset (or task).</p> <p><em>How is transfer learning (as fine-tuning) and meta-learning different?</em></p> <p>Meta-learning is, in a way, about fine-tuning, but not exactly in the sense of transfer learning, but in the sense of hyperparameter optimization. Remember that I said that meta-learning can be about learning the parameters that you usually don't learn, i.e. the hyper-parameters? When you perform hyper-parameters optimization, people sometimes refer to it as fine-tuning. So, meta-learning is a way of performing hyperparameter optimization and thus fine-tuning, but not in the sense of transfer learning, which can be roughly thought of as <em>retraining a pre-trained model but on a different task with a different dataset</em> (with e.g. a smaller learning rate).</p> <p>To conclude, take terminology, notation, and definitions with a grain of salt, even the ones in this answer.</p>
404
transfer learning
Embedding Quality of Transfer Learning model vs Contrastive learning model
https://ai.stackexchange.com/questions/35738/embedding-quality-of-transfer-learning-model-vs-contrastive-learning-model
<p>I am working on contrastive learning, which is a technique to learn features based on the concept of learning by comparing two or more instances.</p> <p>The downstream task is a classification problem.</p> <p><strong>Transfer Learning</strong> Due to limited data, I tried to use a transfer learning model trained on &quot;ImageNet&quot; (ResNet50 V2, &quot;Deep Residual Learning for Image Recognition&quot;, Kaiming He et al.).</p> <p>I used the embedding from the pretrained model, trained a linear SVM, and achieved an F1 score of 0.84.</p> <p><strong>Contrastive Learning</strong> I also trained a model for contrastive learning using the FaceNet technique (&quot;FaceNet: A Unified Embedding for Face Recognition and Clustering&quot;, Florian Schroff et al.) and further used the embedding for training a linear SVM for the classification problem. The achieved F1 score is 0.83.</p> <p><strong>Problem</strong> Though the scores of both approaches are close, I tried to evaluate the quality of both embeddings using the Silhouette Coefficient.</p> <p><em><strong>Overall Silhouette Coefficient:</strong></em></p> <ul> <li>Transfer Learning Embedding = 0.05</li> <li>Contrastive Learning Embedding = 0.49</li> </ul> <p>I do not understand this behaviour: even with a much lower Silhouette Coefficient, the transfer learning model is able to perform well.</p> <p>Kindly share your views on this.</p>
<p>What are the dimensions of the embeddings? The first thing that comes to mind is that you should probably be wary of using Euclidean distance in high dimensions (see &quot;Distribution of Distances between Elements in a Compact Set&quot;), as is done when computing the Silhouette Coefficient.</p> <p>However, that's probably not an issue for typical dimensions of 10 to 100. It would be good to get more details on your training.</p>
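To make the metric side concrete, here is a small NumPy sketch (synthetic embeddings, not the asker's data) computing the silhouette coefficient directly from its definition: for each point, `a` is the mean distance to its own cluster and `b` the mean distance to the nearest other cluster, and `s = (b - a) / max(a, b)`.

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient over all points (Euclidean distance)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    scores = []
    for i, c in enumerate(labels):
        same = (labels == c)
        same[i] = False                       # exclude the point itself
        a = D[i, same].mean()                 # mean intra-cluster distance
        b = min(D[i, labels == other].mean()  # nearest other cluster
                for other in set(labels) if other != c)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
labels = np.array([0] * 20 + [1] * 20)
# Two 8-dim clusters, once tight and once spread out.
tight = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(3, 0.1, (20, 8))])
loose = np.vstack([rng.normal(0, 1.5, (20, 8)), rng.normal(3, 1.5, (20, 8))])
print(silhouette(tight, labels))  # close to 1: compact, well separated
print(silhouette(loose, labels))  # much lower: overlapping clusters
```

Note that a linear SVM only needs a separating hyperplane between the classes; it does not need compact, well-separated clusters, which is one way a low silhouette score can coexist with a good F1 score.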
405
transfer learning
Can we apply transfer learning between any two different CNN architectures?
https://ai.stackexchange.com/questions/23672/can-we-apply-transfer-learning-between-any-two-different-cnn-architectures
<p>There are many types of CNN architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet, etc. Can we apply transfer learning between any two different CNN architectures? For instance, can we apply transfer learning from AlexNet to GoogLeNet, etc.? Or even just from a &quot;conventional&quot; CNN to one of these other architectures, or the other way around? Is this possible in general?</p> <p>EDIT: My understanding is that <em>all</em> machine learning models have the ability to perform transfer learning. If this is true, then I guess the question is, as I said, whether we can transfer between two <em>different</em> CNN architectures – for instance, what was learned by a conventional CNN to a different CNN architecture.</p>
<p>No, transfer learning cannot be applied &quot;between&quot; different architectures, as transfer learning is the practice of taking a neural network that has already been trained on one task and retraining it on another task with the same input modality, which means that only the weights (and other trainable parameters) of the network change during transfer learning but not the architecture.</p> <p>In my understanding, transfer learning is also only really effective in deep learning, but I could be wrong, considering that <a href="https://www.google.com/search?q=random+forest+transfer+learning" rel="nofollow noreferrer">this Google search</a> seems to yield some results.</p> <p>You might otherwise be thinking of knowledge distillation, which is a related but different concept, where an already trained network acts as a teacher and teaches another network (a student network) with possibly a different architecture (or a machine learning model not based on neural networks at all) the correct outputs for a bunch of input examples.</p>
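The knowledge distillation mentioned at the end of this answer can be sketched concisely: the teacher's temperature-softened output distribution becomes a soft target for the student. A minimal numpy version (the temperature value and logits below are illustrative choices, not from the source):

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the student's softened outputs to the teacher's,
    the core loss term in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

teacher = np.array([[2.0, 0.5, -1.0]])
print(distillation_loss(teacher, teacher))            # identical outputs: loss is 0
print(distillation_loss(np.zeros((1, 3)), teacher))   # mismatch: loss is positive
```

Because the student only needs to reproduce the teacher's outputs, its architecture can be entirely different from the teacher's, which is exactly the flexibility that plain transfer learning lacks.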
406
transfer learning
What are the real-life applications of transfer learning?
https://ai.stackexchange.com/questions/15731/what-are-the-real-life-applications-of-transfer-learning
<p>What are the real-life applications of transfer learning in machine learning? I am particularly interested in industrial applications of the concept.</p>
<p>One application I know of being used in industry is image classification, done by training only the last layer of one of the Inception models released by Google, with the desired number of classes. I can't provide specific details.</p> <p>Transfer learning is useful when:</p> <ol> <li><p>You do not have the resources (time, processing power, etc.) to train a DL model from scratch.</p></li> <li><p>You can compromise a bit on accuracy.</p></li> </ol>
407
transfer learning
Validation accuracy very low with transfer learning
https://ai.stackexchange.com/questions/28156/validation-accuracy-very-low-with-transfer-learning
<p>I am using MobileNetV3 from TF keras for doing transfer learning; I removed the last layer, added two dense layers, and trained for 20 epochs.</p> <ol> <li><p>How many dense layers should I add after the MobileNet and How dense should they be?</p> </li> <li><p>How many epochs should I train for?</p> </li> <li><p>Validation loss and validation accuracy have a strange pattern, is that normal?</p> </li> </ol> <p>Is there anything I am missing?</p> <p><a href="https://i.sstatic.net/JC4L4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JC4L4.png" alt="enter image description here" /></a></p>
<p>These two steps solved my problem:</p> <ol> <li>I found that I had forgotten to freeze the pre-trained model by setting trainable = False</li> <li>It seems that I failed to load the weights when getting the model from keras.applications, even though the <a href="https://keras.io/api/applications/" rel="nofollow noreferrer">documentation</a> mentions</li> </ol> <blockquote> <p>Weights are downloaded automatically when instantiating a model. They are stored at ~/.keras/models/.</p> </blockquote> <p>So I got the model from <a href="https://tfhub.dev/" rel="nofollow noreferrer">TensorFlow Hub</a> instead, which worked correctly.</p>
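What setting trainable = False buys can be seen in a framework-agnostic sketch (the tiny network, data, and learning rate below are made up for illustration): the frozen "pre-trained" layer acts as a fixed feature map, and only the newly added head receives gradient updates.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(64, 4))              # stand-in inputs
y = rng.normal(size=(64, 1))              # stand-in targets

W_frozen = rng.normal(size=(4, 3))        # plays the role of pre-trained weights
W_head = np.zeros((3, 1))                 # newly added, trainable layer
W_frozen_before = W_frozen.copy()

def mse(W_head):
    return float(np.mean(((X @ W_frozen) @ W_head - y) ** 2))

loss_before = mse(W_head)
feats = X @ W_frozen                      # "frozen" layer: computed once, never updated
for _ in range(200):
    grad = 2 * feats.T @ (feats @ W_head - y) / len(X)
    W_head -= 0.01 * grad                 # gradient step on the head only
loss_after = mse(W_head)

print(loss_after < loss_before)                    # head learned: True
print(np.array_equal(W_frozen, W_frozen_before))   # frozen weights untouched: True
```

Leaving the base model unfrozen instead lets the (initially random) head's large gradients flow back into the pre-trained weights and destroy them, which is consistent with the symptoms described in the question.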
408
transfer learning
How could Bayesian neural networks be used for transfer learning?
https://ai.stackexchange.com/questions/28549/how-could-bayesian-neural-networks-be-used-for-transfer-learning
<p>In transfer learning, we use big data from similar tasks to learn the parameters of a neural network, and then fine-tune the neural network on our own task that has little data available for it. Here, we can think of the transfer learning step as learning a (proper) prior, and then fine-tuning as learning the posterior.</p> <p>So, we can argue that Bayesian networks can also solve the problem of small data-set regimes. But, what are the directions that we can mix Bayesian neural networks with similar tasks to transfer learning, for example, few-shot learning?</p> <p>They make sense when they both take a role as a solution to the low data regime problems, but I can't think of a mix of them to tackle this issue.</p> <p>Is it possible, for example, to learn a BNN for which we have picked a good prior to learn the posterior with little data and use the weight distribution for learning our new task? Is there any benefit in this?</p>
<p>Well, I would say that the purpose of Bayesian inference is not transfer learning, but uncertainty estimation.</p> <p>If you have a good feature extractor to begin with, you can adjust a small number of parameters, such as the last few layers, to achieve good quality in a few epochs.</p> <p>However, this is about adjusting the means of the distributions over each weight.</p> <p>Concerning the variance, I think transfer learning is inapplicable, since the source and target distributions can be very different. For example, ImageNet is a broad and diverse dataset with many classes, while the target problem can involve only a few classes. Most probably, the uncertainty estimates and the standard deviations of the model weights on ImageNet would be larger than for a model trained solely on the target task.</p>
409
transfer learning
Would this count as a Transfer Learning approach?
https://ai.stackexchange.com/questions/28247/would-this-count-as-a-transfer-learning-approach
<p>I have two datasets, Dataset 1 (D1) and Dataset 2 (D2). D1 has around 22000 samples, and D2 has around 8000 samples. What I am doing is that I train a Deep Neural Network model with around three layers on the dataset D1, which has an accuracy of around 84% (test accuracy = 82%).</p> <p>Now, when I use that model to make predictions on D2 without any fine-tuning or anything, I get an accuracy of around 15% (test accuracy = 12.3%). But when I add three more layers to the pre-trained model while keeping the three layers of the initial model (trained on D1) frozen, I get around 90% accuracy (test accuracy = 87.6%) on D2.</p> <p>This tells me that because the initial model was performing so poorly without any fine-tuning, most of the learning that led to the 90% accuracy was only because of the additional layers, not the layers that were transferred from the model trained on the D1 dataset. Is this a correct inference? And if it is, then is it still valid to call this a Transfer Learning application? Or does it have to have more accuracy without fine-tuning to be rightly listed as a Transfer Learning problem?</p>
<blockquote> <p>This tells me that because the initial model was performing so poorly without any fine-tuning, most of the learning that led to the 90% accuracy was only because of the additional layers, not the layers that were transferred from the model trained on the D1 dataset. Is this a correct inference?</p> </blockquote> <p>This is a <em>possibility</em>, but not the only one. If you were re-purposing a classifier for ImageNet classes to a specific new image type, or even the same classes but with the labels in a different order, then this large initial drop in accuracy would be expected.</p> <p>The transfer learning could be helping in two different measurable ways:</p> <ul> <li><p>The training for the new purpose is faster (fewer epochs required) than if done from scratch with a brand new network.</p> </li> <li><p>The end accuracy is better than could be achieved with just the D2 dataset and a brand new network.</p> </li> </ul> <p>The only way to tell if either of these is the case is to compare results by using just the normal D2 features and a re-initialised copy of the original NN used to learn from D1 (by that I mean adapt your initial training script from D1 to work with D2 - changing the dataset file names and the output layer shape should be all you need to do). Look at the learning curve for this training - if it is significantly slower, or reaches worse accuracy at the end, then the transfer learning has made a difference.</p> <blockquote> <p>And if it is, then is it still valid to call this a Transfer Learning application? Or does it have to have more accuracy without fine-tuning to be rightly listed as a Transfer Learning problem.</p> </blockquote> <p>I am not sure it matters if the results using transfer learning are worse. 
It is - at least in my opinion - an <em>attempt</em> to use transfer learning.</p> <p>If transfer learning has not produced better results, then you will maybe have demonstrated that there is not enough overlap from the D1 dataset and problem to the D2 ones to justify using transfer learning in your use case.</p> <p>This result may also depend on the sizes of datasets D1 and D2, even if your experiment stays in the same problem domain. The number of examples in D1 is ~3 times the size of D2, which is not much of a difference compared to transfer learning done using large pre-trained image or language models.</p>
410
transfer learning
What is the difference between learning without forgetting and transfer learning?
https://ai.stackexchange.com/questions/13644/what-is-the-difference-between-learning-without-forgetting-and-transfer-learning
<p>I would like to incrementally train my model with my current dataset and <a href="https://github.com/tensorflow/models/issues/7200#issuecomment-510850230" rel="nofollow noreferrer">I asked this question on Github</a>, which is what I'm using SSD MobileNet v1.</p> <p>Someone there told me about <a href="https://arxiv.org/abs/1606.09282" rel="nofollow noreferrer"><strong>learning without forgetting</strong></a>. I'm now confused between <em>learning without forgetting</em> and <em>transfer learning</em>. How they differ from each other?</p> <p>My initial problem, what I'm trying to achieve (mentioned in Github issue) is the following.</p> <p>I have trained my dataset on <code>ssd_mobilenet_v1_coco</code> model. I'm getting continuous incremental data. Right now, my dataset is very limited.</p> <p>What I want to achieve is <em>incremental training</em>, i.e. as soon as I get new data, I can further train my already trained model and I don't have to retrain everything:</p> <ol> <li>Save trained model <span class="math-container">$M_t$</span></li> <li>Get new data <span class="math-container">$D_{t+1}$</span></li> <li>Train <span class="math-container">$M_t$</span> on <span class="math-container">$D_{t+1}$</span> to produce <span class="math-container">$M_{t+1}$</span></li> <li>Let <span class="math-container">$t = t+1$</span>, then go back to <span class="math-container">$1$</span></li> </ol> <p>How do I perform this incremental training/learning? Should I use LwF or transfer learning?</p>
<p>Learning without Forgetting (LwF) is an incremental learning (sometimes also called continual or lifelong learning) technique for neural networks; incremental learning refers to machine learning techniques that attempt to avoid <a href="https://ai.stackexchange.com/a/13293/2444">catastrophic forgetting</a>. There are several incremental learning approaches. LwF is an incremental learning approach based on the concept of <a href="https://arxiv.org/pdf/1710.10686.pdf" rel="noreferrer"><em>regularization</em></a>. In section 3.2 of the paper <a href="https://www.sciencedirect.com/science/article/pii/S0893608019300231" rel="noreferrer">Continual lifelong learning with neural networks: A review</a> (2019), by Parisi et al., other regularisation-based continual learning techniques are described.</p> <p>LwF could be seen as a combination of <a href="https://arxiv.org/abs/1503.02531" rel="noreferrer">distillation networks</a> and <a href="https://arxiv.org/abs/1311.2524" rel="noreferrer">fine-tuning</a>, which refers to re-training, with a low learning rate (a very rudimentary way to avoid catastrophically forgetting the previously learned knowledge), an already trained model <span class="math-container">$\mathcal{M}$</span> on a new and (usually) more specific dataset, <span class="math-container">$\mathcal{D}_{\text{new}}$</span>, different from the dataset, <span class="math-container">$\mathcal{D}_{\text{old}}$</span>, with which the model <span class="math-container">$\mathcal{M}$</span> was originally trained.</p> <p>LwF, as opposed to other continual learning techniques, only uses the new data, so it assumes that past data (used to pre-train the network) is unavailable. 
The paper <a href="https://arxiv.org/abs/1606.09282" rel="noreferrer">Learning without Forgetting</a> goes into the details of the technique and it also describes the concepts of <em>feature extraction</em>, <em>fine tuning</em> and <em>multitask learning</em>, which are related to incremental learning techniques.</p> <p><em>What is the difference between LwF and transfer learning?</em> LwF is a combination of distillation networks and fine-tuning, which is a <a href="https://www.cse.ust.hk/~qyang/Docs/2009/tkde_transfer_learning.pdf" rel="noreferrer">transfer learning</a> technique, which is a special case of incremental learning, where the old and new tasks are different, while, in general, in incremental learning, the old and new tasks can also be the same (which is called <a href="https://arxiv.org/abs/1606.09282" rel="noreferrer">domain adaptation</a>).</p>
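The combination this answer describes, the usual loss on the new task plus a distillation term that holds the old-task outputs near the responses recorded before fine-tuning, can be sketched as follows (the shapes, the weighting lam, and the use of temperature 1 rather than the paper's softened outputs are simplifications for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return float(-np.mean(np.log(p[np.arange(len(labels)), labels])))

def lwf_loss(new_logits, new_labels, old_logits, recorded_old_logits, lam=1.0):
    """New-task cross-entropy plus a distillation term that penalises
    drift of the old-task outputs away from their recorded values."""
    task = cross_entropy(new_logits, new_labels)
    p = softmax(recorded_old_logits)   # soft labels from the model before fine-tuning
    q = softmax(old_logits)            # old-task outputs of the model being fine-tuned
    distill = float(-np.mean(np.sum(p * np.log(q), axis=-1)))
    return task + lam * distill

new_logits = np.array([[3.0, 0.0]])
labels = np.array([0])
recorded = np.array([[1.0, -1.0, 0.0]])
print(lwf_loss(new_logits, labels, recorded, recorded))    # old outputs unchanged
print(lwf_loss(new_logits, labels, -recorded, recorded))   # old outputs drifted: larger
```

Crucially, the distillation term needs only the recorded outputs on the new data, not the old dataset itself, which is what lets LwF work when past data is unavailable.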
411