| text | source |
|---|---|
Question: <p>While doing transfer learning, where my two problems are face generation and car generation, is it likely that, if I use the weights of one problem as the initialization of the weights for the other problem, the model will converge to a local minimum? For any given problem, is it better to train from scratch than to use transfer learning (especially for GAN training)?</p>
Answer:
|
https://ai.stackexchange.com/questions/13567/is-convergence-to-a-local-minima-more-likely-with-transfer-learning
|
Question: <p>I'm super new to deep learning and computer vision, so this question may sound dumb.</p>
<p>In this link (<a href="https://github.com/GeorgeSeif/Semantic-Segmentation-Suite" rel="nofollow noreferrer">https://github.com/GeorgeSeif/Semantic-Segmentation-Suite</a>), there are pre-trained models (e.g., ResNet101) called front-end models. And they are used for feature extraction. I found these models are called backbone models/architectures generally. And the link says some of the main models (e.g. DeepLabV3 or PSPNet) rely on pre-trained ResNet.</p>
<p>Also, transfer learning is to take a model trained on a large dataset and transfer its knowledge to a smaller dataset, right?</p>
<ol>
<li><p>Do the main models that rely on a pre-trained ResNet basically do transfer learning (like from ResNet to the main model)? </p></li>
<li><p>If I use a pre-trained network, like ResNet101, as the backbone architecture of the main model(like U-Net or SegNet) for image segmentation, is it considered as transfer learning?</p></li>
</ol>
Answer:
|
https://ai.stackexchange.com/questions/17750/what-is-the-difference-between-using-a-backbone-architecture-and-transfer-learni
|
Question: <p>I am currently working on a defect detection algorithm, but I only have a few samples of defects. I googled for defect detection datasets and I found this one: </p>
<p><a href="http://resources.mpi-inf.mpg.de/conferences/dagm/2007/prizes.html" rel="nofollow noreferrer">http://resources.mpi-inf.mpg.de/conferences/dagm/2007/prizes.html</a> </p>
<p>which has a few hundreds of original images of defects.</p>
<p>My idea is:
Imagenet => Defect dataset from internet => Own defect dataset</p>
<p>Step 1. Training a model with ImageNet initialization using the defect dataset found in the internet (+ non-defect images + augmented data)</p>
<p>Step 2. Using the output model of step 1 (which will be more similar to my own data), do transfer learning using my own defect dataset (defects + non-defects + augmented).</p>
<p>Do you think this a good way to get good results? </p>
<p>Based on:
<a href="https://blog.slavv.com/a-gentle-intro-to-transfer-learning-2c0b674375a0" rel="nofollow noreferrer">https://blog.slavv.com/a-gentle-intro-to-transfer-learning-2c0b674375a0</a></p>
<p>Should defect images be considered as having low similarity to ImageNet's images, or as similar because both inputs are images? Some webpages say they are similar because both are images, but others say these images are too different from the ones used to train the ImageNet model, so I got confused about this.</p>
<p>If I skip step 1, I don't think I will get anything good, because I have less than 100 images.</p>
<p>Any advice or comments will be appreciated. </p>
Answer:
|
https://ai.stackexchange.com/questions/5084/transfer-learning-from-model-trained-in-a-similar-dataset
|
Question: <p>How can transfer learning be used to mitigate <a href="https://en.wikipedia.org/wiki/Catastrophic_interference" rel="nofollow noreferrer">catastrophic forgetting</a>. Could someone elaborate on this?</p>
Answer: <p>Transfer learning is a field where you apply knowledge from a source onto a target. This is a vague notion, and there is an abundance of literature pertaining to it. Given your question, I will work under the assumption that you are referring to weight/architecture sharing between models (in other words, training a model on one dataset and using it as a featurizer for another dataset).</p>
<p>Now any learning system without lossless memory will have remnants of catastrophic forgetting. So let's think about how we would implement this transfer and what effects can be derived from this.</p>
<ol>
<li>One implementation involves transferring a component and only training additional layers.</li>
<li>Another is retraining the entire system, but at a lower learning rate.</li>
</ol>
<p>In setting 1, we can make the claim that catastrophic forgetting is minimized by the fact that there is an unbiased featurizer that can't <em>forget</em> based on a sampling regime, though the additional layers that are still being trained can still falter in this error mode. </p>
<p>In setting 2, we can make the claim that catastrophic forgetting can be reduced compared to normal end-to-end no-transfer training, because the unbiased featurizer's difference can be analytically bounded by its initial transferred featurization (the complexity class is based on both the function and the number of steps -- so the longer you train, the more likely it can <em>forget</em>). </p>
<p>These reasons concern mitigating, not erasing, catastrophic forgetting; that is because, as I mentioned above, <em>any learning system without lossless memory will have remnants of catastrophic forgetting</em>, so a generalized claim about transfer learning may not always fit the bill.</p>
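A minimal PyTorch sketch of the two settings (the small `featurizer`/`head` modules are illustrative stand-ins for a transferred component and its additional layers):

```python
import torch
import torch.nn as nn

# Stand-in for the transferred component (in practice, e.g. a pretrained ResNet).
featurizer = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 10)  # additional task-specific layers

# Setting 1: freeze the transferred featurizer and train only the additional layers.
for p in featurizer.parameters():
    p.requires_grad = False
opt1 = torch.optim.Adam(head.parameters(), lr=1e-3)

# Setting 2: retrain the entire system, but at a much lower learning rate,
# so the featurizer stays close to its transferred initialization.
for p in featurizer.parameters():
    p.requires_grad = True
opt2 = torch.optim.Adam(list(featurizer.parameters()) + list(head.parameters()),
                        lr=1e-5)
```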
|
https://ai.stackexchange.com/questions/14117/how-is-transfer-learning-used-to-mitigate-catastrophic-forgetting-in-neural-netw
|
Question: <p>Lately, there are lots of posts on <em>one-shot</em> learning. I tried to figure out what it is by reading some articles. To me, it looks similar to <em>transfer</em> learning, in which we can use pre-trained model weights to create our own model. <em>Fine-tuning</em> also seems a similar concept to me. </p>
<p>Can anyone help me and explain the differences between all three of them?</p>
Answer: <p>They are all related terms.</p>
<p>From top to bottom:</p>
<p><strong>One-shot learning</strong> aims to achieve results with one or very few examples. Imagine an image classification task. You may show an apple and a knife to a human, and no further examples are needed to continue classifying. That is the ideal outcome, but for algorithms.</p>
<p>In order to achieve one-shot learning (or close) we can rely on <strong>knowledge transfer</strong>, just like the human in the example would do (we are trained to be amazing at image processing, but here we would also exploit other knowledge like abstract reasoning abilities, and so on).</p>
<p>This brings us to <strong>transfer learning</strong>. Generally speaking, transfer learning is a machine learning paradigm where we train a model on one problem and then try to apply it to a different one (after some adjustments, as we'll see in a second).</p>
<p>In the example above, classifying apples and knives is not at all trivial. However, if we are given a neural network that already excels at image classification, with super-human results in over 1000 categories... perhaps it is easy to adapt this model to our specific apples vs knives situation.</p>
<p>This "adapting", those "adjustments", are essentially what we call <strong>fine-tuning</strong>. We could say that fine-tuning is the training required to adapt an already trained model to the new task. This is normally much less intensive than training from scratch, and many of the characteristics of the given model are retained.</p>
<p>Fine-tuning usually covers more steps. A typical pipeline in deep learning for computer vision would be this:</p>
<ol>
<li>Get trained model (image classifier champion)</li>
<li><p>Note the head of our model does not match our needs (there's probably one output per category, and we only need two categories now!)</p></li>
<li><p>Swap the very last layer(s) of the model, so that the output matches our needs, but keeping the rest of the architecture and already trained parameters intact.</p></li>
<li><p>Train (fine-tune!) our model on images that are specific to our problem (only a few apples and knives in our silly example). We often only allow the last layers to learn at first, so they "catch up" with the rest of the model (in this case we talk about freezing and unfreezing and discriminative learning rates, but that's a bit beyond the question).</p></li>
</ol>
<p>Note that some people may sometimes use fine-tuning as a synonym for transfer learning, so be careful about that!</p>
|
https://ai.stackexchange.com/questions/21719/what-is-the-difference-between-one-shot-learning-transfer-learning-and-fine-tun
|
Question: <p>I'm seeing conflicting info on what to do with the fully-connected output layer of a pre-trained network when it's used in transfer learning. <a href="https://datascience.stackexchange.com/questions/76370/how-many-layers-should-i-replace-in-transfer-learning-cnn">A previous answer</a> seems to imply that the network is kept intact and a new output layer is added on top of the existing output layer. This approach means that the new network will build on all of the pre-trained weights of the existing network, including the weights in the output layer. However, when I look at <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html" rel="nofollow noreferrer">this tutorial</a> on PyTorch, the fully-connected output layer is replaced with a new output layer to match the number of classes in the new task. This second approach means that the pre-trained weights in the existing output layer (512*1000 parameters for ResNet18) are lost. The first approach retains everything that was learned. The second approach also looks reasonable, since the weights of the final output layer are likely very task-specific and so can be discarded when learning a new task. Which approach is recommended in general?</p>
Answer: <p>Transfer learning is a complete field of research, and there are multiple possibilities for what might work best in each situation.</p>
<p>There are various ways in which you can employ a pretrained model for transfer learning. You can indeed keep the complete model intact, but it is more common (as is done in PyTorch) to delete the last (several) layers. In addition, one can vary which model weights are frozen during training. You can, for example, delete the last layer and unfreeze the second to last, meaning you train 2 layers, but one is pretrained.</p>
<p>If you do not simply want to extend the model, it is usually advised to look into unfreezing (some of) the last pretrained layer(s), as it helps adjust the model to the new training data. Do not set the learning rate too high, as it might result in the unfrozen pretrained layer forgetting what it had learned (usually termed catastrophic forgetting).</p>
<p>I personally have had more success with deleting the last 1 or 2 layers and unfreezing another 1 or 2 layers. You could look at some academic papers to see if there are any large-scale tests for what generally works better.</p>
|
https://ai.stackexchange.com/questions/38033/keep-weights-of-output-layer-in-transfer-learning
|
Question: <p>All examples of transfer learning I have seen for classification use initial weights of a network trained on a larger number of classes (say 1000 in the case of networks trained on ImageNet data) to address a new task that has a smaller number of classes. Can transfer learning be effectively used when the new task has more classes than the original? For example, can I effectively build on ResNet50 or parts of it for a new task that has 1500 classes? Thanks.</p>
Answer: <p>Yes, transfer learning can usually also be utilized when the number of classes differs from the original. Your model will, however, be more 'transferable' if it has been trained on a wide variety of data or on data that is somewhat similar to your new data.
What one would usually do is freeze the weights of the lower layers of the network and only retrain the upper part (e.g. the fully connected part). The most important question is thus not how many classes each network was trained on, but instead: <strong>How high is the representational distance between the old and the new dataset?</strong></p>
<p>Recently, self-supervised pretraining has increased in popularity; this shows that general features are useful somewhat independently of how they were acquired.</p>
|
https://ai.stackexchange.com/questions/38030/is-transfer-learning-effective-when-the-new-task-has-more-classes-than-the-origi
|
Question: <p>I am reading some books and papers on Transfer Learning for Reinforcement Learning and have some questions.</p>
<p>Suppose previous MDPs and target MDP share the same state space, action space and transition function, but differ in their reward functions, parameterized as <span class="math-container">$r_{\theta}(s,a)$</span> with different values of <span class="math-container">$\theta$</span> for each MDP.</p>
<p>Assume all MDPs are very large and hard to solve. If we have already solved previous MDPs with good policy <span class="math-container">$\pi_{\theta_{i}}$</span>, can we leverage supervised learning to directly learn a policy <span class="math-container">$\pi_{\theta_{target}}$</span> for the target MDP? That is to say, we could collect data pairs <span class="math-container">$(\theta_{i},s,\pi_{\theta_{i}}(s))$</span> from each <span class="math-container">$\pi_{\theta_{i}}$</span>, treat them as <span class="math-container">$(feature, label)$</span> pairs. Then, applying supervised learning method on this dataset.</p>
<p>Many papers consider the value function of the target MDP and do some excellent analysis on it. But I'm wondering whether using supervised learning is a more straightforward way to solve it. Are there any existing works in RL that explore a similar idea?</p>
Answer: <p>If the target MDP is significantly different in terms of reward structure, the policy learned from past MDPs may, intuitively, not generalize well. The dataset <span class="math-container">$(\theta_{i},s,\pi_{\theta_{i}}(s))$</span> only contains states visited by past policies. The new policy may visit states not covered in previous training, leading to extrapolation error in supervised learning (SL). Fully exploiting SL assumes an i.i.d. or exchangeable dataset, whereas in RL the data distribution sampled from interaction with the environment changes over time as the agent explores and updates its policy. Therefore, there is an <em>irreducible</em> difference between SL and the core of transfer learning for RL.</p>
<p>Having said that, your idea is related to policy distillation, Meta-RL/MAML, behavioral cloning, and policy reuse. You may refer to the papers below for special cases where we can leverage SL in RL transfer learning, usually in a contextual meta-learning or imitation-learning sense.</p>
<p>Konidaris (2006) <em><a href="https://all.cs.umass.edu/pubs/2006/konidaris_ICMLws06.pdf" rel="noreferrer">A Framework for Transfer in Reinforcement Learning</a></em></p>
<blockquote>
<p>We present a conceptual framework for transfer in reinforcement learning based on the idea that related tasks share a common space. The framework attempts to capture the notion of tasks that are related (so that transfer is possible) but distinct (so that transfer is non-trivial). We define three types of transfer (knowledge, skill and model transfer) in terms of the framework, and illustrate them with an example scenario.</p>
</blockquote>
<p>Rusu et al. (2016) <em><a href="https://arxiv.org/pdf/1511.06295" rel="noreferrer">Policy Distillation</a></em></p>
<p>Finn et al. (2017) <em><a href="https://arxiv.org/pdf/1703.03400" rel="noreferrer">Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks</a></em></p>
<p>Ren et al. (2022) <em><a href="https://arxiv.org/pdf/2211.10861v1" rel="noreferrer">Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation</a></em></p>
<p>Zahavy et al. (2023) <em><a href="https://arxiv.org/pdf/2106.00661" rel="noreferrer">Reward is Enough for Convex MDPs</a></em></p>
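The supervised scheme from the question — fitting a policy conditioned on the reward parameter $\theta$ to $(\theta_i, s, \pi_{\theta_i}(s))$ tuples — can be sketched as contextual behavioral cloning (all shapes and data below are synthetic placeholders):

```python
import torch
import torch.nn as nn

# Dataset: (theta_i, s, a) tuples collected from solved source MDPs,
# treated as (feature, label) pairs for supervised learning.
n, s_dim, theta_dim, n_actions = 512, 4, 2, 3
thetas = torch.randn(n, theta_dim)           # reward parameters of source MDPs
states = torch.randn(n, s_dim)               # states visited by source policies
actions = torch.randint(0, n_actions, (n,))  # pi_{theta_i}(s), as class labels

# A policy conditioned on the reward parameters: pi(a | s, theta).
policy = nn.Sequential(nn.Linear(s_dim + theta_dim, 64), nn.ReLU(),
                       nn.Linear(64, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    logits = policy(torch.cat([states, thetas], dim=1))
    loss = loss_fn(logits, actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At deployment, plug in theta_target -- but note the caveat above: states
# outside the source policies' visitation distribution may extrapolate poorly.
```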
|
https://ai.stackexchange.com/questions/47888/why-people-dont-use-supervised-learning-in-transfer-learning-for-reinforcement
|
Question: <p>Question on transfer learning object classification (MobileNet_v2 with 75% number of parameters) with my own synthetic data:</p>
<p>I made my own dataset of three shapes: triangles, rectangles and spheres. Each category has 460 samples with different sizes, dimensions, and different wobbles at the edges. They look like this:</p>
<p><a href="https://i.sstatic.net/NAgwS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NAgwS.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/yuxVt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yuxVt.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/7GFyG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7GFyG.png" alt="enter image description here"></a></p>
<p>I want the network to classify these primitive shapes in other environments as well with different lighting/color conditions and image statistics. </p>
<p>Even though I'm adding random crops, scaling, and brightness variations, at training step 10 it's already at 100% training and validation accuracy. The cross-entropy keeps going down, though. I'm using TensorFlow Hub. The performance of the network in other environments (a virtual 3D space with such shapes) could be better in the end. I also trained and tested for ~50 steps to see whether the network is overfitting, but that doesn't work too well.</p>
<p>What alterations would you recommend to generalize better? Or shouldn't I train on synthetic data at all to learn primitive shapes? If so, any dataset recommendations?</p>
<p>Thanks in advance</p>
Answer:
|
https://ai.stackexchange.com/questions/16716/learning-object-recognition-of-primitive-shapes-through-transfer-learning-proble
|
Question: <p>I'm building a model for <strong>facial expression recognition</strong>, and I want to use <em><strong>transfer learning</strong></em>. From what I understand, there are different steps to do it. The first is the <strong>feature extraction</strong> and the second is <strong>fine-tuning</strong>. I want to understand more about these two stages, and the difference between them. Must we use them simultaneously in the same training?</p>
Answer: <p>It has been a while since the question was asked, but I came up with this <a href="https://developer.baidu.com/article/details/2727266" rel="nofollow noreferrer">article</a>. It helped me to understand the topic. From the article:</p>
<p><strong>Feature-based</strong> methods involve using the intermediate representations or features from a pre-trained model as additional inputs to a task-specific model.</p>
<p><strong>Fine-tuning</strong>, on the other hand, involves modifying and retraining a pre-trained model to adapt it to a specific task.</p>
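In code, the two methods differ mainly in whether the pretrained weights are updated (a sketch; the tiny `backbone` stands in for any pretrained facial-feature extractor):

```python
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in for a pretrained model
head = nn.Linear(64, 7)  # e.g. 7 facial-expression classes

# Feature-based: the pretrained model is a frozen featurizer; only the
# task-specific model on top of its representations is trained.
for p in backbone.parameters():
    p.requires_grad = False
feature_extraction_params = list(head.parameters())

# Fine-tuning: the pretrained weights themselves are modified and retrained.
for p in backbone.parameters():
    p.requires_grad = True
fine_tuning_params = list(backbone.parameters()) + list(head.parameters())
```

A common recipe uses them in sequence rather than simultaneously: first train the head with the backbone frozen, then unfreeze and fine-tune at a low learning rate.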
|
https://ai.stackexchange.com/questions/28138/what-is-the-difference-between-feature-extraction-and-fine-tuning-in-transfer-le
|
Question: <p>For example, you train on dataset 1 with an adaptive optimizer like Adam. Should you reload the learning schedule, etc., from the end of training on dataset 1 when attempting transfer to dataset 2? Why or why not?</p>
Answer: <p>When doing transfer learning it makes sense to have different update policies for "inherited" parameters and the "new" parameters. "Inherited" parameters are pre-trained on dataset1 and they typically form the front end of the deep model. The "new" parameters are trained from scratch and they typically produce the desired predictions on dataset2. It would be sensible to restart the learning schedule for the "new" parameters. However, most often we would avoid doing that for "inherited" parameters in order to avoid catastrophic forgetting.</p>
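A sketch of such a split update policy in PyTorch (the modules and learning rates are illustrative):

```python
import torch
import torch.nn as nn

inherited = nn.Linear(16, 8)   # front end, pretrained on dataset 1
new_head = nn.Linear(8, 3)     # trained from scratch for dataset 2

# Fresh Adam state for everything, but separate update policies:
# a restarted decay schedule for the "new" parameters, and a small,
# flat learning rate for the "inherited" ones.
opt = torch.optim.Adam([
    {"params": new_head.parameters(), "lr": 1e-3},
    {"params": inherited.parameters(), "lr": 1e-5},
])
sched = torch.optim.lr_scheduler.LambdaLR(
    opt,
    lr_lambda=[lambda step: 0.95 ** step,  # restarted schedule (new params)
               lambda step: 1.0],          # no decay (inherited params)
)
sched.step()  # after one step, only the new-parameter LR has decayed
```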
|
https://ai.stackexchange.com/questions/10545/should-you-reload-the-optimizer-for-transfer-learning
|
Question: <p>I'm trying to develop a better understanding of the concept of "out-of-distribution" (generalization) in the context of Bengio's "Moving from System 1 DL to System 2 DL" and the concept of "(meta)-transfer learning" in general. </p>
<p>These concepts seem to be very strongly related, maybe even almost referring to the same thing. So, what are <strong>similarities and differences</strong> between these two concepts? Do these expressions refer to the same thing? If the concepts are to be differentiated from each other, what differentiates the one concept from the other and how do the concepts <strong>relate</strong>?</p>
Answer:
|
https://ai.stackexchange.com/questions/18754/what-is-the-difference-between-out-of-distribution-generalisation-and-meta
|
Question: <p>I have a base model <span class="math-container">$M$</span> trained on a data say type 1 for task <span class="math-container">$T$</span>. Now, I want to update <span class="math-container">$M$</span> by applying transfer learning for it to work on data type 2 for the same task <span class="math-container">$T$</span>. I am very new to AI/ML field. One common way I found for applying transfer learning is to add a new layer at the end or replace the last layer of the base model with a new layer, and then retrain the model on new data (type 2 here). Depending upon the size of type 2, we may decide whether we retrain the whole model or only the new layer.</p>
<p>However, my question is: how do we decide the following?</p>
<ol>
<li>What should be the new layer(s)?</li>
<li>Should the objective function while retraining be the same as the one used for the base model, or it can be different? If different, then any insights on how to figure out a new objective function?</li>
</ol>
<p>P.S. Data of type 1 and type 2 are of the same category (like both are logs or both are images), however are significantly different.</p>
Answer:
|
https://ai.stackexchange.com/questions/32171/how-to-choose-the-new-layer-and-objective-function-for-transfer-learning-on-a-ne
|
Question: <p>I’ve read the article titled <strong><a href="https://arxiv.org/abs/1603.09246" rel="nofollow noreferrer">Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles</a></strong>.</p>
<p>In this article, the authors create jigsaw puzzles and train a model to solve them. The process involves starting with an original image of size 256x256, cropping a 225x225 frame from it, then dividing that frame into 9 tiles of 75x75. Each tile is further randomly cropped to 64x64 before being used for training.</p>
<p>My question is: How can this model, which takes nine small images (9 tiles of 64x64), be used in transfer learning? We cannot directly feed a 256x256 image into this model. Should we crop nine 64x64 patches from our image, feed them into the pre-trained model, and then build our new network on top of it?</p>
Answer:
|
https://ai.stackexchange.com/questions/46590/how-to-apply-a-pre-trained-jigsaw-puzzle-model-for-transfer-learning-on-larger-i
|
Question: <p>I am basically interested in vehicles on the road. </p>
<p><a href="https://github.com/ayooshkathuria/pytorch-yolo-v3" rel="nofollow noreferrer">YoloV3 pytorch</a> is giving a decent result.</p>
<p>So my vehicles of interest are <code>Car</code>, <code>Motorbike</code>, <code>Bicycle</code>, <code>Truck</code> and <code>bus</code>, and I have small vehicles being detected as <code>truck</code>.</p>
<p>Since the small vehicle is nicely being detected as a truck, I have annotated this small vehicle as a different class.</p>
<p>Thus, I could add an extra class, say an 81st class, since the current YoloV3 being used is trained on 80 classes.</p>
<p>The 81st class would start from the weights of the truck class; I would freeze the weights such that the rest of the 80 classes remain unaltered and only the 81st class gets trained on this new data.</p>
<p>The problem is the final layer gets tuned according to the prediction of all the classes it learns.</p>
<p>I was not able to find any post that could actually mention this way of preserving the predictions of the other classes and introducing a new class using transfer learning.</p>
<p>The closest I was able to get is this <a href="https://github.com/pierluigiferrari/ssd_keras/blob/master/weight_sampling_tutorial.ipynb" rel="nofollow noreferrer">Weight Sampling Tutorial</a> for SSD using Keras.</p>
<p>Its mentioned in</p>
<p><strong>Option 1: Just ignore the fact that we need only 8 classes</strong></p>
<p><code>This would work, and it wouldn't even be a terrible option. Since only 8 out of the 80 classes would get trained, the model might get gradually worse at predicting the other 72 classes</code> in the second paragraph.</p>
<p>Is it possible to preserve the predictions of the previously pre-trained model while introducing the new class, and use transfer learning to train only for that class?</p>
<p>I feel that this is not possible, but would like to know your opinion. Hope someone can prove me wrong. </p>
Answer: <p>Even if you want to re-train your model for just one new class, you will have to prepare your training data such that it includes all or most of the classes you want to predict. Most of the time, the last two layers of a network depend on the number of labels to be predicted, and that should always be the sum of the number of classes you already trained on and the number of classes you want to add.</p>
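One way to at least <em>start</em> from preserved predictions is the weight-sampling idea from the question's linked SSD tutorial, applied in reverse: build an 81-output layer and copy the 80 pretrained class weights into it (a simplified classifier-head sketch, not the full YOLO head):

```python
import torch
import torch.nn as nn

old_head = nn.Linear(256, 80)   # pretrained 80-class output layer
new_head = nn.Linear(256, 81)   # 80 old classes + 1 new class

with torch.no_grad():           # copy the pretrained weights for the 80 classes
    new_head.weight[:80] = old_head.weight
    new_head.bias[:80] = old_head.bias

x = torch.randn(4, 256)
# The old 80 logits are unchanged at initialization.
assert torch.allclose(new_head(x)[:, :80], old_head(x), atol=1e-5)
```

As the answer notes, this only preserves the old predictions at initialization; once training starts, even new-class-only updates shift the shared outputs unless old-class data is included.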
|
https://ai.stackexchange.com/questions/14147/transfer-learning-to-train-only-for-a-new-class-while-not-affecting-the-predicti
|
Question: <p>I am trying to create a model that is using a <em>one-shot learning</em> approach for a classification task. We do this because we do not have a lot of data and it also seems like a good way to learn this approach (it is going to be a university project). The task would be to classify objects, probably from drone/satellite image (of course zoomed one).</p>
<p>My question is, do you think it would be ok to use a model for face recognition, such as <a href="https://www.cs.toronto.edu/%7Eranzato/publications/taigman_cvpr14.pdf" rel="nofollow noreferrer">DeepFace</a> or <a href="http://elijah.cs.cmu.edu/DOCS/CMU-CS-16-118.pdf" rel="nofollow noreferrer">OpenFace</a>, and, using transfer learning, retrain it on my classes?</p>
Answer:
|
https://ai.stackexchange.com/questions/23897/is-it-ok-to-perform-transfer-learning-with-a-base-model-for-face-recognition-to
|
Question: <p>I want to train a neural network for the detection of a single class, but I will be extending it to detect more classes. To solve this task, I selected the PyTorch framework.</p>
<p>I came across <a href="http://cs231n.github.io/transfer-learning/" rel="nofollow noreferrer">transfer learning</a>, where we fine-tune a pre-trained neural network with new data. There's a nice PyTorch tutorial explaining <a href="http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#training-the-model" rel="nofollow noreferrer">transfer learning</a>. We have a <a href="https://github.com/amdegroot/ssd.pytorch" rel="nofollow noreferrer">PyTorch implementation of the Single Shot Detector (SSD)</a> as well. See also <a href="https://towardsdatascience.com/learning-note-single-shot-multibox-detector-with-pytorch-part-1-38185e84bd79" rel="nofollow noreferrer">Single Shot MultiBox Detector with Pytorch — Part 1</a>.</p>
<p>This is my current situation</p>
<ul>
<li><p>The data I want to fine-tune the neural network with is different from the data that was used to initially train the neural network; more specifically, the neural network was initially trained with a dataset of 20 classes</p>
</li>
<li><p>I currently have a very small labeled training dataset.</p>
</li>
</ul>
<p>To solve this problem using transfer learning, the solution is to freeze the weights of the initial layers, and then train the neural network with these layers frozen.</p>
<p>However, I am confused about what the initial layers are and how to change the last layers of the neural network to solve my specific task. So, here are my questions.</p>
<ol>
<li><p>What are the initial layers in this case? How exactly can I freeze them?</p>
</li>
<li><p>What are the changes I need to make while training the NN to classify one or more new classes?</p>
</li>
</ol>
Answer: <p>After a lot of browsing online for answers to these questions, this is what I came up with.</p>
<blockquote>
<ol>
<li>What are the initial layers in this case? How exactly can I freeze them?</li>
</ol>
</blockquote>
<p>The <strong>initial few layers</strong> are said to extract the most general features of any kind of image, like edges or corners of objects. So, I guess it actually would depend on the kind of <a href="https://www.analyticsvidhya.com/blog/2017/08/10-advanced-deep-learning-architectures-data-scientists/" rel="nofollow noreferrer">backbone architecture</a> you are selecting.</p>
<p>How to freeze the layers depends on the framework we use.</p>
<p>(I have selected <a href="http://pytorch.org" rel="nofollow noreferrer">PyTorch</a> as the framework. I found this tutorial <a href="https://spandan-madan.github.io/A-Collection-of-important-tasks-in-pytorch/" rel="nofollow noreferrer">Some important Pytorch tasks - A concise summary from a vision researcher</a>, which seems to be useful.)</p>
<blockquote>
<ol start="2">
<li>What are the changes I need to make while training the NN to classify one or more new classes?</li>
</ol>
</blockquote>
<p>You just need to change the number of neurons (or units) in the final layer to the number of classes/objects your new dataset contains.</p>
|
https://ai.stackexchange.com/questions/5370/when-doing-transfer-learning-which-initial-layers-do-we-need-to-freeze-and-how
|
Question: <p>I am working on a solar energy production forecasting problem using LSTM multi-step models to predict 1/4/8h ahead of solar energy production for different solar installations. Our goal is to help clients optimize their energy utilization by trading with their neighbours or respective Microgrids.</p>
<p>I have clustered households into groups such as small generators, medium generators, and large generators. I am currently developing a multi-household model for each cluster using TensorFlow's <strong><a href="https://www.tensorflow.org/tutorials/structured_data/time_series#recurrent_neural_network" rel="nofollow noreferrer">LSTM multi-step model tutorial</a></strong>.</p>
<p>To improve prediction accuracy and provide a more personalized approach, I would like to explore transfer learning to create specialized single-household models based on the generalized multi-household models.</p>
<h3><strong>Multi-Household Model (Generalized Model)</strong></h3>
<p>The dataset consists of 160 time series and includes weather features such as hour, day, month, temperature, DHI, DNI, GHI, precipitation, and solar zenith angle. The model learns from multiple similar households.</p>
<p>To better visualize the dataset, here is an example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Hour</th>
<th>Day</th>
<th>Month</th>
<th>TS_0</th>
<th>TS_1</th>
<th>TS_N</th>
<th>Temperature</th>
<th>DHI</th>
<th>DNI</th>
<th>GHI</th>
<th>Cosine Periodicity</th>
<th>Sin Periodicity</th>
<th>Other Features</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>1</td>
<td>5</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>15</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>7</td>
<td>1</td>
<td>5</td>
<td>0.1</td>
<td>0.1</td>
<td>0.1</td>
<td>17</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>8</td>
<td>1</td>
<td>5</td>
<td>0.2</td>
<td>0.3</td>
<td>0.25</td>
<td>18</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>9</td>
<td>1</td>
<td>5</td>
<td>0.5</td>
<td>0.4</td>
<td>0.35</td>
<td>18</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>10</td>
<td>1</td>
<td>5</td>
<td>1</td>
<td>0.8</td>
<td>0.85</td>
<td>20</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
</tbody>
</table>
</div>
<p>Note: The weather-related features would be an average for the district in which these houses are located.</p>
<p>This current setup utilizes TS_0 to TS_N as examples to learn from each other since their solar installations are similar and should therefore yield similar amounts of electricity. I can then use TS_X as an output label to predict, thereby getting a prediction for each household while maintaining some learning from other household examples.</p>
<h3><strong>Single-Household Model (Specialized Model)</strong></h3>
<p>I want to create a specialized model for each household by using transfer learning from the generalized model. This specialized model will incorporate household-specific features such as solar capacity, number of habitants, solar installation angle/direction, and town-specific weather parameters.</p>
<h3><strong>Objective</strong></h3>
<p>The goal is to create generalized models that can help train specialized models for better accuracy for each household while allowing easy onboarding of new users.</p>
<h3>Business Case for Generalized vs Specific Model</h3>
<p>The generalized model would be trained for general locations across a country for certain solar installations (this would be trained with ongoing customer data). The more specific single-household model would enable (in theory) to have more personalized predictions for your specific solar installation setup and location.</p>
<p>A generalized model approach would enable the solar installation company to be able to onboard users more easily - simply add a new customer to the generalized model that better fits a specific household cluster.
A single-household model would make it more difficult to onboard a new user as you would need a buffer period to gather customer data before being able to train a specific ML model for them.</p>
<h3><strong>Problem</strong></h3>
<p>The problem with the single-household model is that the household-specific features might be constant across the dataset and not very meaningful. Adding these features in the multi-household model would lead to a high-dimensionality problem where we would have a feature for each timeseries.</p>
<p>I would like to ask for recommendations on the following:</p>
<ol>
<li>How can I create an architecture that goes from generalized models to more specific models while being able to introduce valuable additional information specific to a single household?</li>
<li>If I opt for a multi-household generalized model solution, how can I include more specific feature information for each time series without running into dimensionality problems?</li>
<li>If I choose a single-household model solution (a model for every single client), how can I ensure good predictions, considering the model wouldn't have access to other time series examples within its tier?
<ol>
<li>There is a limit to the amount of single-household data we could acquire since some of these may be more recent customers and won’t have multi-year data available</li>
</ol>
</li>
</ol>
Answer: <p>I would probably refrain from using transfer learning here. Transfer learning works if you can fine-tune on a more specific data <strong>set</strong>. Transfer learning on a single sample is going to be very hard, and I'm unfamiliar with any papers that attempt such a feat.</p>
<p>Model conditioning means adding variables as a 'condition' alongside the rest of your model's input. In your case, you have several general variables (such as weather). During general training, you can sample from your dataset (with some feature engineering) to create the conditional variables. At inference time, you feed your specific condition, together with the general variables, through the model and get an output for your specific case.</p>
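A minimal sketch of this conditioning idea (all shapes, feature names, and values below are illustrative, not taken from the question's actual pipeline): tile the static household condition along the time axis and concatenate it onto the dynamic features before they enter the sequence model.

```python
import numpy as np

# Illustrative window: 24 time steps, 5 dynamic weather features per step.
window = np.random.rand(24, 5)          # e.g. temperature, DHI, DNI, GHI, ...

# Static per-household "condition": e.g. capacity, panel angle, habitants.
condition = np.array([4.5, 30.0, 3.0])  # hypothetical values

# Tile the condition along the time axis and concatenate feature-wise,
# so every time step carries the household-specific context.
tiled = np.repeat(condition[None, :], window.shape[0], axis=0)  # (24, 3)
conditioned = np.concatenate([window, tiled], axis=1)           # (24, 8)

print(conditioned.shape)  # (24, 8)
```

The conditioned window can then be fed to the shared LSTM, so one generalized model serves many households without one input feature per time series.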
|
https://ai.stackexchange.com/questions/39888/transfer-learning-for-solar-energy-production-forecasting-with-lstm-generalized
|
Question: <p>Assume one is using transfer learning via a model which was trained on ImageNet.</p>
<ol>
<li><p>Assume that the pre-processing, which was used to achieve the pre-trained model, contained z-score standardization using some mean and std, which was calculated on the training data.</p>
<p>Should one apply the same transformation to their new data? Should they apply z-score standardization using a mean and std of their own training data?</p>
</li>
<li><p>Assume that the pre-processing now did not contain any standardization.</p>
<p>Should one apply no standardization on their new data as well? Or should one apply the z-score standardization, using the mean and std of their new data, and expect better results?</p>
</li>
</ol>
<p>For example, I've seen that the Inception V3 model, which was trained by Keras, did not use any standardization, and I'm wondering if using z-score standardization on my new data could yield better results.</p>
Answer: <ol>
<li>You should use the same transformation (i.e., the same mean and std in the case of z-score normalization) on the new data when using any pretrained model.</li>
<li>If the pretrained model was trained without any normalization, then it does not matter whether you normalize or not, as long as the range of the input data is fixed (0-255 for 8-bit input images). You can also use pretrained models with batch normalization, which takes care of internal covariate shift, so there is no need to normalize the data.</li>
</ol>
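A small sketch of point 1 (the statistics here are made-up stand-ins for whatever the pretraining pipeline actually used): the mean and std computed on the pretraining data are reused verbatim on the new data, rather than recomputed.

```python
import numpy as np

# Statistics computed once on the *pretraining* data (made-up values here).
train_mean, train_std = 0.45, 0.22

def standardize(batch, mean, std):
    """Z-score standardization with externally supplied statistics."""
    return (batch - mean) / std

# New data for transfer learning: reuse the pretraining statistics,
# not the mean/std of the new dataset itself.
new_images = np.random.rand(8, 32, 32)  # pixel values scaled to [0, 1]
standardized = standardize(new_images, train_mean, train_std)
```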
|
https://ai.stackexchange.com/questions/8744/in-transfer-learning-should-we-apply-standardization-if-the-pre-trained-model-w
|
Question: <p>I am new to AI/ML and wanted to seek guidance as I am totally lost. I will simplify my issue as follows:</p>
<p>Let's say I would like to detect apples and oranges in images.
I would like to leverage a pre-trained Faster RCNN model for efficiency.
My understanding is that I need to remove the last two layers of such a model and add my own custom layers to detect what I want to detect. This gives me the option to tailor the model to my needs.
However, these models typically have 8 outputs or so with things like detection_boxes, detection_classes, etc.</p>
<p>My question is then how do I customize the model to focus on two object classes and feed additional datasets to further train it? In other words, use the learnings from pre-training but customize the model and further train based on my data.</p>
<p>I can load the model with the following code:</p>
<pre><code>input_shape = (640, 640, 3)
model_handle = "https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1"
k_layer = hub.KerasLayer(model_handle, input_shape=(None, None, 3))
inputs = tf.keras.layers.Input(shape=(None, None, 3), dtype=tf.uint8)
outputs_k_layer = k_layer(inputs)
classifier = tf.keras.Model(inputs, outputs_k_layer)
print(classifier.summary())
</code></pre>
<p>However, I cannot add a new layer as new layers require one input and output as opposed to the 8 outputs...</p>
<p>For example something like below throws an error</p>
<pre><code>x = tf.keras.layers.Dense(128, activation="relu")(outputs_k_layer)
</code></pre>
<pre><code>Error: Layer "dense_2" expects 1 input(s), but it received 8 input tensors.
</code></pre>
<p>I think I am confusing concepts and totally lost my way. I would be grateful if you could point me towards the right direction.</p>
<p>Best,</p>
<p>Doug</p>
Answer: <p>The Faster R-CNN from the TensorFlow Hub is an object detection model, not just a classification model. That explains why it has 8 outputs, including softmax probabilities and bounding boxes. If you only want the probabilities, I suggest the following code:</p>
<pre><code>n_classes = 2
model_handle = "https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1"
k_layer = hub.KerasLayer(model_handle, trainable=True)
inputs = tf.keras.layers.Input(shape=(640, 640, 3), dtype=tf.uint8)
outputs_k_layer = k_layer(inputs)
# To enhance your model, you can add as many layers as you want
detection_multiclass_scores = tf.keras.layers.Dense(128, activation='relu')(outputs_k_layer['detection_multiclass_scores'])
detection_multiclass_scores = tf.keras.layers.Dropout(0.2)(detection_multiclass_scores)
detection_multiclass_scores = tf.keras.layers.Dense(n_classes)(detection_multiclass_scores)
classifier = tf.keras.Model(inputs=inputs, outputs={'detection_multiclass_scores' : detection_multiclass_scores})
</code></pre>
<p>You can also get the bounding boxes. The bounding boxes are regressions independent of the nature of the object we want to detect, so they don't require transfer learning, unlike the scores related to each class:</p>
<pre><code>outputs = {
'detection_multiclass_scores' : detection_multiclass_scores,
'detection_boxes': outputs_k_layer['detection_boxes'],
}
object_detector = tf.keras.Model(inputs=inputs, outputs=outputs)
</code></pre>
|
https://ai.stackexchange.com/questions/42473/transfer-learning-using-pretrained-tensorflow-object-detection-model
|
Question: <p>in <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html" rel="nofollow noreferrer">this pytorch tutorial</a>, there is <code>transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])</code>, what is the purpose of this?</p>
<p>(I removed it, and the code still works.)</p>
Answer: <p>Those are mean and standard deviation used to standardize each channel of the images from <a href="https://www.image-net.org/" rel="nofollow noreferrer">IMAGENET</a> used to train the <a href="https://pytorch.org/vision/stable/models.html" rel="nofollow noreferrer">torchvision pretrained models</a>.</p>
<p>Since the models were trained using this preprocessing step, it is useful to apply it also when using those models for transfer learning on new data.</p>
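Mechanically, `transforms.Normalize` subtracts the per-channel mean and divides by the per-channel std. A NumPy sketch of the same operation (channels-last layout here for simplicity, unlike PyTorch's channels-first tensors):

```python
import numpy as np

# ImageNet channel statistics from the tutorial (RGB order).
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# A batch of images, values already scaled to [0, 1], channels last.
images = np.random.rand(4, 224, 224, 3)

# transforms.Normalize computes (x - mean) / std per channel; NumPy
# broadcasting over the trailing channel axis reproduces that here.
normalized = (images - mean) / std
```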
|
https://ai.stackexchange.com/questions/36396/what-is-normalize-for-in-pytorch-transfer-learning-tutorial
|
Question: <p><strong>How do I best transfer and fine-tune a Q-learning policy that was trained on small instances to large instances?</strong></p>
<p><em>Some more details on the problem:</em>
I am currently trying to derive a decision policy for a dynamic vehicle dispatching problem.
In the problem, a decision point occurs whenever a customer requests delivery. Customers expect to be delivered within a fixed amount of time and the objective is to minimize the delay. The costs of each state are the delays realized in that state (i.e., the delay of the customers that were delivered between the last two states.</p>
<p><em>Some details on the policy:</em>
I used a Q-learning policy (Dueling Deep Q Network) to estimate the discounted future delay of assigning an order to a vehicle. The policy was trained on a small-scale instance (5 vehicles ~100 customers) using epsilon-greedy exploration and a prioritized experience replay. I did not use temporal difference learning (as the cost of a decision are only revealed later in the process) but updated the policy after simulating the entire instance.</p>
<p><em>My problem:</em>
As it is, the policy transfers well to instances of larger sizes (up to 100 vehicles and ~2000 customers) and I could do without fine-tuning. However, there certainly is room for the policy to improve. Unfortunately, when I try to fine-tune the initial model on the larger instance, the retrained model becomes worse over the training steps with regards to minimizing delay.
I suspect that large gradients play a role here as the initial q-values, trained on the small instance, are obviously way off for the large instances (due to the increase in customers).</p>
<p><strong>Is there a standard approach to deal with such a transfer problem or do you have any suggestions?</strong></p>
Answer:
|
https://ai.stackexchange.com/questions/32041/transferring-a-q-learning-policy-to-larger-instances
|
Question: <p>In Deep Learning and Transfer Learning, does layer freezing offer other benefits other than to reduce computational time in gradient descent?</p>
<p>Assuming I train a neural network on task A to derive weights <span class="math-container">$W_{A}$</span>, set these as initial weights and train on another task B (without layer freezing), does it still count as transfer learning?</p>
<p>In summary, how essential is layer freezing in transfer learning?</p>
Answer: <p>From your post, I assume you have three sub-questions, and I will answer them one by one.</p>
<ul>
<li>For the first question: yes, layer freezing reduces the computational cost a lot. It also helps the model retain the many patterns it has already learned, so it keeps recognizing things well, separates negative from positive samples more easily, and avoids overfitting when it was pretrained on a bigger dataset. Say we have a model trained on COCO but now only want a person-detection model: without freezing weights, the model may not generalize as well as before, because backpropagation now focuses on just one class.</li>
<li>For the second question: yes! Training without freezing, but still using the pretrained weights as initialization, also counts as transfer learning.</li>
<li>Finally, in summary, layer freezing is an essential trick for obtaining a good model, especially if you want it to work well on open-world data and generalize across as many domains as possible. It saves time and makes fine-tuning easier, since you only have to train the final downstream-task layers.
Hope this helps :D</li>
</ul>
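In framework terms, freezing simply excludes a layer's weights from gradient updates. A framework-free NumPy sketch of that mechanic (the toy weights and gradient below are made up, standing in for a real backbone and head):

```python
import numpy as np

# Toy two-layer linear model: freeze the first layer, update only the head.
rng = np.random.default_rng(0)
w_frozen = rng.standard_normal((4, 3))   # pretrained backbone weights
w_head = rng.standard_normal((3, 2))     # task-specific head weights

w_frozen_before = w_frozen.copy()

# One illustrative gradient step: only the head receives an update.
grad_head = rng.standard_normal(w_head.shape)
lr = 0.01
w_head = w_head - lr * grad_head
# w_frozen is deliberately left untouched (trainable=False in Keras terms),
# so the patterns learned during pretraining are preserved exactly.
```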
|
https://ai.stackexchange.com/questions/39239/does-layer-freezing-offer-other-benefits-other-than-to-reduce-computational-time
|
Question: <p>I am working on classifying the <a href="https://github.com/brendenlake/omniglot" rel="nofollow noreferrer">Omniglot dataset</a>, and the different papers dealing with this topic describe the problem as <em>one-shot learning</em> (classification). I would like to nail down a precise description of what counts as <em>one-shot learning</em>.</p>
<p>It's clear to me that in one-shot classification, a model tries to classify an input into one of <span class="math-container">$C$</span> classes by comparing it to exactly one example from each of the <span class="math-container">$C$</span> classes.</p>
<p>What I want to understand is:</p>
<ol>
<li><p>Is it necessary that the model has never seen the input and the target examples before, for the problem to be called one-shot?</p>
</li>
<li><p>Goodfellow et. al. describe one-shot learning as an extreme case transfer learning where only one labeled example of the transfer task is <a href="http://www.deeplearningbook.org/contents/representation.html" rel="nofollow noreferrer">presented</a>. So, it means they are considering the training process as a kind of continuous transfer learning? What has the model learned earlier, that is being transferred?</p>
</li>
</ol>
Answer: <p>The model has learnt the "features" for the type of inputs, e.g., faces.
For the problem to be called one-shot, the model also needs to correctly classify/compare samples it has never seen. For example, in a face-recognition application, any new person's image should produce a positive match against their own image and a negative match against any other seen or unseen image.</p>
<p>Since we are using the Euclidean distance between outputs of the final feature layer rather than performing any final classification, we can say we are reusing the weights of a pretrained network and computing the final value (the distance) from them; thus, transfer learning. There is no backpropagation in this, but whatever you then do with the embeddings, such as learning a threshold function, can be considered learning.</p>
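The distance-and-threshold idea described above can be sketched as follows (the embeddings, values, and the `verify` helper are all hypothetical, standing in for the outputs of a pretrained feature extractor):

```python
import numpy as np

def verify(emb_a, emb_b, threshold=1.0):
    """One-shot verification: same identity iff embedding distance < threshold."""
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

# Hypothetical embeddings produced by a pretrained network.
anchor = np.array([0.1, 0.9, 0.2])
same_person = np.array([0.15, 0.85, 0.22])   # close to the anchor
other_person = np.array([0.9, 0.1, 0.7])     # far from the anchor

print(verify(anchor, same_person))   # True
print(verify(anchor, other_person))  # False
```

The threshold itself can be tuned on held-out pairs, which is the "learning a threshold function" mentioned above.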
|
https://ai.stackexchange.com/questions/16138/precise-description-of-one-shot-learning
|
Question: <p>I'm using MobileNetV2 for classification, and I want to add dense layers(i remove the last layer of the MobileNetV2 model). How do I choose the number of units for the dense layer after obtaining the feature vector (1280)? Is there a formula to determine the units for each dense layer I add?</p>
Answer: <p>There is no formula, and it's not only the units but also the number of layers that you have to choose. You can reason over something like "how complex is my task?", but usually we resort to a grid search over some educated guesses (e.g., 2 or 3 layers of 128 or 256 neurons, though it depends on the problem).</p>
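That grid search over educated guesses can be enumerated in a few lines of Python (the candidate values below are just illustrative 2/3-layer, 128/256-unit guesses; the per-config training and evaluation step is left out):

```python
from itertools import product

# Educated guesses: 2 or 3 dense layers of 128 or 256 units each.
layer_counts = [2, 3]
unit_counts = [128, 256]

configs = [
    {"layers": n_layers, "units": units}
    for n_layers, units in product(layer_counts, unit_counts)
]

for cfg in configs:
    print(cfg)  # build, train, and evaluate one model per config; keep the best
```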
|
https://ai.stackexchange.com/questions/42877/how-to-determine-the-number-of-units-for-dense-layer-for-transfer-learning
|
Question: <p>I am trying to train my model to classify 10 classes of hand gestures, but I don't understand why I am getting a validation accuracy approximately double my training accuracy.</p>
<p>My dataset is from kaggle:<br />
<a href="https://www.kaggle.com/gti-upm/leapgestrecog/version/1" rel="nofollow noreferrer">https://www.kaggle.com/gti-upm/leapgestrecog/version/1</a></p>
<p>My code for training model:</p>
<pre><code>print(x.shape, y.shape)
# ((10000, 240, 320), (10000,))
# preprocessing
x_data = x/255
le = LabelEncoder()
y_data = le.fit_transform(y)
x_data = x_data.reshape(-1,240,320,1)
x_train,x_test,y_train,y_test = train_test_split(x_data,y_data,test_size=0.25,shuffle=True)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Training
base_model = keras.applications.InceptionV3(input_tensor=Input(shape=(240,320,3)),
include_top=False,
weights='imagenet')
base_model.trainable = False
CLASSES = 10
input_tensor = Input(shape=(240,320,1) )
model = Sequential()
model.add(input_tensor)
model.add(Conv2D(3,(3,3),padding='same'))
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.4))
model.add(Dense(CLASSES, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.Adam(lr=1e-5), metrics=['accuracy'])
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=20,
validation_data=(x_test, y_test)
)
</code></pre>
<p>I am getting accuracy like:</p>
<pre><code>Epoch 1/20
118/118 [==============================] - 117s 620ms/step - loss: 2.4571 - accuracy: 0.1020 - val_loss: 2.2566 - val_accuracy: 0.1640
Epoch 2/20
118/118 [==============================] - 70s 589ms/step - loss: 2.3253 - accuracy: 0.1324 - val_loss: 2.1569 - val_accuracy: 0.2512
</code></pre>
<p>I have tried removing the <code>Dropout</code> layer, changing <code>train_test_split</code>, but nothing works.</p>
<p><strong>EDIT:</strong></p>
<p>After changing the dataset to color images from <a href="https://www.kaggle.com/vbookshelf/v2-plant-seedlings-dataset" rel="nofollow noreferrer">https://www.kaggle.com/vbookshelf/v2-plant-seedlings-dataset</a>, I am still getting higher validation accuracy in the initial epochs. Is this acceptable, or am I doing something wrong?</p>
<p><a href="https://i.sstatic.net/Mdlht.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mdlht.png" alt="enter image description here" /></a></p>
Answer: <p>The problem is that you're not creating a model with <code>InceptionV3</code> as the backbone. What you want to do is this:</p>
<pre><code>N_CLASSES = 10
input_tensor = Input(shape=(240, 320, 1))
base_model = tf.keras.applications.InceptionV3(include_top=False,
                                               weights='imagenet',
                                               input_shape=(240, 320, 3))
base_model.trainable = False
# Map the single grayscale channel to the 3 channels InceptionV3 expects
x = tf.keras.layers.Conv2D(3, (3, 3), padding='same')(input_tensor)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(N_CLASSES, activation='softmax')(x)
...
model = tf.keras.Model(inputs=input_tensor, outputs=x)
model.compile(...)
</code></pre>
<p>The <code>inputs</code> and <code>outputs</code> parameters are of importance here.</p>
|
https://ai.stackexchange.com/questions/27418/how-to-train-my-model-using-transfer-learning-on-inception-v3-pre-trained-model
|
Question: <p>Here's a quote from the <code>T5 paper</code> (T5 stands for "Text-to-Text Transfer Transformer") titled <a href="https://www.jmlr.org/papers/volume21/20-074/20-074.pdf" rel="nofollow noreferrer">Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer</a> by <em>Colin Raffel et al.</em>:</p>
<blockquote>
<p>To summarize, our model is roughly equivalent to the original
Transformer proposed by Vaswani et al. (2017) with the exception of
removing the Layer Norm bias, placing the layer normalization outside
the residual path, and using a different position embedding scheme.
Since these architectural changes are <strong>orthogonal</strong> to the experimental
factors we consider in our empirical survey of transfer learning, we
leave the ablation of their impact for future work.</p>
</blockquote>
<p>What exactly does 'orthogonal' mean in this context? Also, is it just me or have I seen the word used in a similar way before, but can't remember where?</p>
Answer: <p>"Orthogonal" is often used to mean "independent", as in "independent variable which does not correlate with the other variables". I believe this terminology originates from <a href="https://en.wikipedia.org/wiki/Principal_component_analysis" rel="nofollow noreferrer">principal component analysis</a>, where uncorrelated variation would be along orthogonal axes.</p>
<p>Or, in the words of the <a href="https://en.wikipedia.org/wiki/Orthogonality" rel="nofollow noreferrer">Wikipedia article on orthogonality</a> applied to computer science:</p>
<blockquote>
<p>Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system.</p>
</blockquote>
<p>So in this excerpt they state that their changes do not affect anything else (because they are independent/uncorrelated), and can hence be discussed elsewhere.</p>
|
https://ai.stackexchange.com/questions/31689/why-do-the-authors-of-the-t5-paper-say-that-the-architectural-changes-are-ortho
|
Question: <p>I am currently writing my thesis on human pose estimation and wanted to use Google's Inception network, modify it for my needs, and use transfer learning to detect human key joints. I wanted to ask whether it could be done this way.</p>
<p>Assuming I have n keypoints generating n feature maps: could I use transfer learning, cut off the final classification layers, and replace them with an FCN that predicts the key joints? I am asking myself whether this is possible.</p>
<p>However, these feature maps should output heatmaps of the highest probability as well. Is this assumption valid?</p>
Answer:
|
https://ai.stackexchange.com/questions/17261/can-an-image-recognition-model-used-for-human-pose-estimation
|
Question: <p>One disadvantage or weakness of Artificial Intelligence today is its slow learning or training process. For instance, an AI agent might require 100,000 samples or more to reach an appreciable level of performance on a specific task. This is unlike humans, who are able to learn very quickly from a minimal number of samples. Humans are also able to teach one another, or in other words, transfer the knowledge they have acquired.</p>
<p>My question is this: are Artificial Intelligence learnings or trainings transferable from one agent to the other? If yes, how? If no, why?</p>
Answer: <p>The simplest case is copying the software. That is instant duplication of the learnings. In a similar, but less trivial way you can adjust pretrained neural network classifiers to new datasets by simply re-initializing the last layer.</p>
<p>It gets more interesting when you have multiple agents and you want to combine their knowledge. This can be done with many techniques; the simplest is averaging the weights of identical neural network architectures. One example of this <em>combined learning</em> in practice is <a href="https://www.popularmechanics.com/technology/robots/a20190/google-room-full-of-robot-arms/" rel="nofollow noreferrer">Google Has a Room Full of Robot Arms Learning Hand-Eye Coordination</a>. The relevant paper is <a href="https://arxiv.org/pdf/1603.02199.pdf" rel="nofollow noreferrer">Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection</a>.</p>
<p>The most complicated case, where you want to transfer just the gist and prevent forgetting, is -- to my knowledge, which is likely outdated in that area -- unsolved. The problem is <strong>post-hoc explanations</strong>. Humans decide to do something, observe the outcome, and then explain why they did it / why it was good to do it. So we automatically generate hypotheses about the world, and sometimes we are able to formulate them in a way that others can understand. That is not possible with current learning machines: we don't yet know how to abstract automatically from arbitrary problems.</p>
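The weight-averaging technique mentioned above can be sketched in a few lines (the two "agents" below are toy per-layer weight lists, not real trained networks):

```python
import numpy as np

# Two agents with identical architectures: weights stored layer by layer.
agent_a = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([0.5, 0.5])]
agent_b = [np.array([[3.0, 2.0], [1.0, 0.0]]), np.array([1.5, 2.5])]

# Combine knowledge by averaging layer-by-layer
# (the same idea underlies federated averaging).
merged = [(wa + wb) / 2.0 for wa, wb in zip(agent_a, agent_b)]

print(merged[0])  # averaged first-layer weights: every entry equals 2.0
print(merged[1])  # averaged biases: 1.0 and 1.5
```

Averaging only makes sense for identical architectures; for different ones, knowledge-distillation-style approaches are needed instead.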
|
https://ai.stackexchange.com/questions/8920/are-artificial-intelligence-learnings-or-trainings-transferable-from-one-agent-t
|
Question: <p>I was reading <a href="https://www.sciencedirect.com/science/article/abs/pii/S0925231220300874" rel="noreferrer">DT-LET: Deep transfer learning by exploring where to transfer</a>, and it contains the following:</p>
<blockquote>
<p>It should be noted direct use of labeled source domain data on a new scene of target domain would result in poor performance due to the semantic gap between the two domains, even they are representing the same objects.</p>
</blockquote>
<p>Can someone please explain what the semantic gap is?</p>
Answer: <p>In terms of transfer learning, the semantic gap refers to different meanings and purposes behind the same syntax in two or more domains. For example, suppose we have a deep learning application that detects and labels a sequence of actions/words <span class="math-container">$a_1, a_2, \ldots, a_n$</span> in a video/text as a "greeting" in society A. This knowledge from society A cannot be transferred to another society B, where the same sequence of actions means "criticizing"! Although the example is very abstract, it shows the semantic gap between the two domains. You can see the different meanings behind the same syntax or sequence of actions in the two domains, societies A and B. This phenomenon is called the "semantic gap".</p>
|
https://ai.stackexchange.com/questions/27382/what-does-semantic-gap-mean
|
Question: <p>I am tentatively trying to train a deep reinforcement learning model on a maze-escaping task, where each time it takes one image as the input (e.g., a different "maze").</p>
<p>Suppose I have about <span class="math-container">$10K$</span> different maze images, and the ideal case is that after training <span class="math-container">$N$</span> mazes, my model would do a good job to quickly solve the puzzle in the rest <span class="math-container">$10K$</span> - <span class="math-container">$N$</span> images. </p>
<p>I am writing to ask for good ideas/empirical evidence on how to select a good <span class="math-container">$N$</span> for the training task.</p>
<p>And in general, how should I estimate and enhance the "transfer learning" ability of my reinforcement learning model, i.e., make it more generalized?</p>
<p>Any advice or suggestions would be appreciate it very much. Thanks.</p>
Answer:
|
https://ai.stackexchange.com/questions/12569/training-a-reinforcement-learning-model-with-multiple-images
|
Question: <p>I searched through the internet but couldn't find a reliable article that answers this question.</p>
<p>Can we use Autoencoders for unsupervised CNN feature learning of unlabeled images like the below
<a href="https://i.sstatic.net/JC9DO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JC9DO.png" alt="enter image description here" /></a>
and then use the encoder part of the autoencoder for transfer learning on a few labeled images from the dataset, as shown below?
<a href="https://i.sstatic.net/flVLi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/flVLi.png" alt="enter image description here" /></a></p>
<p>I believe this will reduce the labeling work and increase the accuracy of a model.</p>
<p>However, I have concerns such as higher computational cost, failure to learn all the required features, etc.</p>
<p>Please let me know if anyone has employed this method in large-scale learning, such as on ImageNet.</p>
<p>PS: Pardon if it is Trivial or Vague as I am new to the field of AI and computer vision.</p>
Answer:
|
https://ai.stackexchange.com/questions/12244/can-we-use-autoencoders-for-unsupervised-cnn-feature-learning
|
Question: <p>Analogies are quite powerful in communication. They allow explaining complex concepts to people with no domain knowledge, just by mapping to a known domain. Hofstadter <a href="https://cogsci.indiana.edu/" rel="nofollow noreferrer">says they matter</a>, whereas Dijkstra says they are dangerous. Anyway, analogies can be seen as a powerful way to transfer concepts in human communication (dare I say <a href="https://en.wikipedia.org/wiki/Transfer_learning" rel="nofollow noreferrer">transfer learning</a>?).</p>
<p>I am aware of legacy work, such as <a href="https://en.wikipedia.org/wiki/Case-based_reasoning" rel="nofollow noreferrer">Case-Based Reasoning</a>, but no more recent work about the analogy mechanism in AI.</p>
<p>Is there a consensus on whether or not analogy is necessary (or even critical) to AGIs, and how critical would they be?</p>
<p>Please, consider backing your answers with concrete work or publications.</p>
Answer: <p>I don't think I can give you a true answer to the actual question as posed, as I don't have a strict definition of "general intelligence". Nor do I have a solid definition of "critical" in context.</p>
<p>However, if we lean on our naive/intuitive understanding of what "general intelligence" and what it means to be critical, you might translate your question as</p>
<blockquote>
<p>Would a general intelligence system need analogy-making in order to do certain things that it couldn't otherwise do?</p>
</blockquote>
<p>Or, to put it another way</p>
<blockquote>
<p>Are there useful behaviors that are enabled by analogical reasoning that can't be replicated any other way?</p>
</blockquote>
<p>In the strictest sense, I don't have an answer to either of those questions either, but there is at least <em>evidence</em> to suggest that the answer may be "yes". See, for reference, the <a href="http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf" rel="nofollow noreferrer">Copycat paper</a> by Hofstadter and Mitchell.</p>
<p>From what I've seen, some of the <em>kinds of problems Copycat solves</em> are different from anything I've seen solved by other approaches. Now maybe it's just a coincidence that nobody has tried solving those problems with, I don't know, let's say "deep learning" or "rule induction" or "genetic algorithms". Or maybe they have and I just haven't stumbled across that corpus of research.</p>
<p>Anyway, I'll also add that there is still ongoing research into using <em>analogy</em> for AI/ML. See, for example, the paper <a href="http://proceedings.mlr.press/v70/liu17d/liu17d.pdf" rel="nofollow noreferrer">Analogical Inference for Multi-relational Embeddings (2017)</a>, where the authors talk about using analogy, but define their approach as "analogical inference" (which they claim is different from "analogical reasoning" as defined during the earlier "GOFAI period"). There is also the paper <a href="https://arxiv.org/pdf/1705.04416.pdf" rel="nofollow noreferrer">Evaluating vector-space models of analogy (2017)</a>, where another set of authors deal with a form of analogical reasoning.</p>
<p>I don't think there's a consensus as to whether or not some form of analogical reasoning is "critical", but it's definitely a subject that is still being researched.</p>
<p>And to go off on a little bit of a tangent - an interesting related question would be to ask whether or not "analogy making" would be an emergent property of a sufficiently deep/wide ANN, or would such a facility need to be designed and coded up explicitly.</p>
|
https://ai.stackexchange.com/questions/3665/is-analogy-necessary-to-artificial-general-intelligence
|
Question: <p>During transfer learning in computer vision, I've seen that the layers of the base model are frozen if the images aren't too different from the model on which the base model is trained on.</p>
<p>However, on the NLP side, I see that the layers of the BERT model aren't ever frozen. What is the reason for this?</p>
Answer: <p>Corrections and other answers are welcome, but here are a few thoughts:</p>
<p>There are several approaches in terms of which weights get frozen (and also other considerations, see for example Fig. 5 in <a href="https://arxiv.org/abs/2211.09085v1" rel="nofollow noreferrer">"Galactica: A Large Language Model for Science"</a>).</p>
<p>Which of the approaches yields higher-quality results <strong>depends on the architecture (and hyperparameters) and dataset</strong>.</p>
<p>There can be rules of thumb, for example <a href="https://web.archive.org/web/20211220033410/https://huggingface.co/docs/transformers/training" rel="nofollow noreferrer">this old snapshot of a "Documentation" of Transformer architectures at Hugging Face</a> said:</p>
<blockquote>
<p>we are directly fine-tuning the whole model without taking any precaution. It actually works better this way for Transformers model</p>
</blockquote>
<p>but this explanation apparently was removed from <a href="https://huggingface.co/docs/transformers/training" rel="nofollow noreferrer">the new version of this page</a>. Maybe it turned out that such <strong>rules of thumb aren't right in general</strong>.</p>
<p><strong>Quality of results</strong> is also not the only thing being optimized. Some choices are made due to <strong>memory or compute considerations</strong>. For example, when freezing the first layers, their output features can be computed only once for all samples, saved, and used thereafter; moreover, computing the gradient of the loss with respect to the weights of the first frozen network block is not necessary.</p>
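As a concrete illustration of the freezing option mentioned above, here is a minimal PyTorch sketch. The stack of linear layers is a toy stand-in for a pretrained transformer encoder, not the real BERT module layout, so treat the names and shapes as assumptions.

```python
import torch.nn as nn

# Toy stand-in for a pretrained encoder stack; the real BERT layout differs,
# this only illustrates the freezing pattern.
encoder = nn.Sequential(*[nn.Linear(16, 16) for _ in range(6)])

# Freeze the first 4 layers: their weights keep their pretrained values and
# no gradient is computed for them during fine-tuning.
for layer in list(encoder)[:4]:
    for p in layer.parameters():
        p.requires_grad_(False)

frozen = [p for p in encoder.parameters() if not p.requires_grad]
trainable = [p for p in encoder.parameters() if p.requires_grad]
print(len(frozen), len(trainable))  # 8 frozen tensors (4 layers x weight+bias), 4 trainable
```

An optimizer would then be built only over `trainable`, and the frozen layers' outputs could be precomputed once per sample, as described above.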
|
https://ai.stackexchange.com/questions/23884/why-arent-the-bert-layers-frozen-during-fine-tuning-tasks
|
Question: <p>I see that domain adaptation and transfer learning have been widely adopted in image classification and semantic segmentation analysis. But they are still lacking when it comes to providing solutions for enterprise data, for example, solving problems related to business processes.</p>
<p>I want to know what characteristics of the data determine the applicability or non-applicability with respect to generating models for prediction where multiple domains are involved within an enterprise information database?</p>
Answer:
|
https://ai.stackexchange.com/questions/23396/why-is-domain-adaptation-and-generative-modelling-for-knowledge-graphs-still-not
|
Question: <p>I was reading <a href="https://www.researchgate.net/publication/228618750_Deep_learning_of_representations_for_unsupervised_and_transfer_learning" rel="nofollow noreferrer">Deep Learning of Representations for Unsupervised and Transfer Learning</a>,
and they state the following:</p>
<blockquote>
<p>They have only a small number of unlabeled examples (4096) and very few labeled examples (1
to 64 per class) available to a Hebbian linear classifier (which discriminates according to
the median between the centroids of two classes compared) applied separately to each class
against the others.</p>
</blockquote>
<p>I have searched for what a Hebbian linear classifier is, but I couldn't find more than an explanation of what Hebbian learning is, so can anybody explain what a Hebbian linear classifier is?</p>
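In case it helps, here is my reading of the quoted description as code: a linear classifier whose decision boundary is the midpoint hyperplane between the two class centroids. This is only an interpretation of "the median between the centroids", not a canonical definition of a Hebbian linear classifier.

```python
import numpy as np

def centroid_midpoint_classifier(X_pos, X_neg):
    """Linear classifier per the quote's description: compute the two class
    centroids and discriminate by which side of their midpoint (along the
    centroid-difference direction) a point falls."""
    c_pos, c_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    w = c_pos - c_neg                 # normal to the decision hyperplane
    b = -w @ (c_pos + c_neg) / 2      # threshold at the midpoint of the centroids
    return lambda x: 1 if w @ x + b > 0 else 0

# Synthetic two-class data, invented for the example.
rng = np.random.default_rng(0)
X_pos = rng.normal(+2.0, 1.0, size=(32, 5))
X_neg = rng.normal(-2.0, 1.0, size=(32, 5))
clf = centroid_midpoint_classifier(X_pos, X_neg)
print(clf(np.full(5, 2.0)), clf(np.full(5, -2.0)))  # 1 0
```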
Answer:
|
https://ai.stackexchange.com/questions/27385/what-is-a-hebbian-linear-classifier
|
Question: <p>I've seen two approaches for introducing custom tokens for transfer learning with large language models like Bert or GPT3. Some approaches introduce new tokens into the vocabulary and learn embeddings from scratch. This is the "traditional" approach. However, I've seen other papers that imitate custom tokens with the use of punctuation, e.g. <code>"&lt;custom-token&gt;"</code>. In this case the model is not learning any new tokens, but is learning to connect subword tokens and punctuation already in its vocabulary. I think this approach is often used with GPT3, as the closed API prevents learning new tokens from scratch.</p>
<p>Has any research benchmarked whether one approach is better than another, when both options are available?</p>
Answer:
|
https://ai.stackexchange.com/questions/37526/are-custom-tokens-better-than-punctuation-pseudo-tokens-for-llms
|
Question: <p>I have a dataset A of videos. I've extracted the feature vector of each video (with a convolutional neural network, via transfer learning) creating a dataset B. Now, every vector of the dataset B has a high dimension (about 16000), and I would like to classify these vectors using an RBF-ANN (there are only 2 possible classes).</p>
<p>Is the high dimensionality of input vectors a problem for a radial basis function ANN? If yes, is there any way to deal with it?</p>
Answer:
|
https://ai.stackexchange.com/questions/21030/is-the-high-dimensionality-of-input-vectors-a-problem-for-a-radial-basis-functio
|
Question: <p>Generative models in artificial intelligence span from simple models like Naive Bayes to advanced deep generative models like current-day GANs. This question is not about coding; it concerns only the scientific and theoretical side.</p>
<p>Are there any standard textbooks that cover these topics from scratch to the advanced level?</p>
Answer: <p>For the theoretical foundations, one can look into <code>Chapter 20: Deep Generative Models</code> of the classic DL book by <strong>Goodfellow, Bengio</strong> <a href="https://amzn.to/2MmZNbH" rel="nofollow noreferrer">https://amzn.to/2MmZNbH</a>. Not the most recent reference, but written by professionals in a simple and accessible way.</p>
<p>There is a nice book <strong>Generative Deep Learning</strong> by D.Foster with some simple heuristics and probability theory motivations and examples <a href="https://www.google.ru/books/edition/Generative_Deep_Learning/RqegDwAAQBAJ?hl=en&gbpv=1&printsec=frontcover" rel="nofollow noreferrer">https://www.google.ru/books/edition/Generative_Deep_Learning/RqegDwAAQBAJ?hl=en&gbpv=1&printsec=frontcover</a>.</p>
<p>Finally, there is a book from Jason Brownlee (author of many nicely written articles on machinelearningmastery.com) <a href="https://machinelearningmastery.com/generative_adversarial_networks/" rel="nofollow noreferrer">https://machinelearningmastery.com/generative_adversarial_networks/</a></p>
|
https://ai.stackexchange.com/questions/28521/books-on-generative-models
|
Question: <p>I'm working my way through how ChatGPT works. So I read that ChatGPT is a generative model. When searching for generative models, I found two definitions:</p>
<ul>
<li><a href="https://developers.google.com/machine-learning/gan/generative" rel="nofollow noreferrer">A <strong>generative</strong> model includes the distribution of the data itself, and tells you how likely a given example is</a> by Google</li>
<li><a href="https://www.mdpi.com/2227-7102/14/2/172" rel="nofollow noreferrer"><strong>Generative</strong> artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content</a></li>
</ul>
<p>Do they both mean the same? That is, for generating new content, a model must learn the distribution of data itself? Or do we call ChatGPT generative because it just generates new text?</p>
<p>I see that ChatGPT is something other than a discriminative model that learns a boundary to split data; however, I cannot reconcile ChatGPT with a more traditional generative model like naive Bayes, where class distributions are inferred.</p>
Answer: <h3>What are generative (and discriminative) models?</h3>
<p>If the model learns a distribution of the form <span class="math-container">$p(x)$</span> or <span class="math-container">$p(x, y)$</span>, where <span class="math-container">$x$</span> are the inputs and <span class="math-container">$y$</span> the outputs/labels, from which you can sample data, then it's a generative model. An example of a generative model: variational autoencoder (VAE).</p>
<p>Bishop also defines generative models in this way (<a href="https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf#page=63" rel="nofollow noreferrer">p. 43</a>)</p>
<blockquote>
<p>Approaches that explicitly or implicitly model the distribution of
inputs as well as outputs are known as generative models, because by sampling from them it is possible to generate synthetic data points in the input space</p>
</blockquote>
<p>If it learns a distribution of the form <span class="math-container">$p(y \mid x)$</span>, then it's a discriminative model - many/most classifiers learn this distribution, but you can also derive the conditional given the joint and the prior (that's why, above, Bishop uses <em>implicitly</em> or <em>explicitly</em>).</p>
<p>Bishop also defines discriminative models in this way (<a href="https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf#page=63" rel="nofollow noreferrer">p. 43</a>)</p>
<blockquote>
<p>Approaches that model the posterior probabilities directly
are called discriminative models</p>
</blockquote>
<p>The <a href="https://en.wikipedia.org/wiki/Generative_model" rel="nofollow noreferrer">related Wikipedia article</a> claims that people have not always been using these terms consistently (which is common in machine learning), so one should always keep that in mind.</p>
<h3>GPTs are autoregressive</h3>
<p>As far as I know, GPTs are <a href="https://deepgenerativemodels.github.io/notes/autoregressive/" rel="nofollow noreferrer">autoregressive models</a>. <a href="https://ml.berkeley.edu/blog/posts/AR_intro/" rel="nofollow noreferrer">Here</a> is another potentially useful post that explains what autoregressive models are.</p>
<p>My understanding of autoregressive models, at least based on neural networks, is that they are also generative models - the linked articles and even the <a href="https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf" rel="nofollow noreferrer">GPT-2 paper</a> seem to start the descriptions from the assumption that you can factorize some joint distribution like <span class="math-container">$p(x)$</span> into conditional distributions.</p>
<p>ChatGPT is based on a GPT model, so it's probably considered a generative model too, but there are <a href="https://openai.com/blog/chatgpt/" rel="nofollow noreferrer">several steps involved</a> to create this model, so it may not be super clear how to categorise this model.</p>
<p>Moreover, the authors of the <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">transformer</a>, which GPT models are based on, claim that the transformer is an autoregressive model.</p>
<h3>Conclusion</h3>
<p>It seems to me that many people in ML refer to any model that generates data as a generative model, even if there's no written theoretical formulation of it as a generative model, which doesn't mean that you cannot formulate these models as generative models, i.e. models that learn some distribution from which you can sample data.</p>
<p>I am currently not familiar enough with the details of the GPT models to say if they have been mathematically formulated as generative models of the form <span class="math-container">$p(x, y)$</span>, but they model some distribution of the form <span class="math-container">$p(x)$</span>, from which you can sample, otherwise, how could you even sample data (words)?</p>
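To make the autoregressive view concrete, here is a toy sketch of sampling from a factorized <span class="math-container">$p(x) = \prod_i p(x_i \mid x_{<i})$</span>. The bigram table over a three-symbol vocabulary is invented for the example; a GPT replaces this lookup with a neural network conditioned on the whole prefix, but sampling works the same way.

```python
import random

# Toy conditional distributions p(x_i | x_{i-1}); numbers are made up.
p_next = {
    "<s>": {"a": 0.6, "b": 0.4},
    "a":   {"a": 0.1, "b": 0.6, "</s>": 0.3},
    "b":   {"a": 0.5, "b": 0.1, "</s>": 0.4},
}

def sample(rng):
    """Draw one sequence token by token, each conditioned on the prefix."""
    seq, tok = [], "<s>"
    while True:
        choices, probs = zip(*p_next[tok].items())
        tok = rng.choices(choices, weights=probs)[0]
        if tok == "</s>":
            return "".join(seq)
        seq.append(tok)

rng = random.Random(0)
print([sample(rng) for _ in range(3)])
```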
|
https://ai.stackexchange.com/questions/39012/what-makes-chatgpt-a-generative-model
|
Question: <p>All of the generative models that I have found appear to use 32-bit seeds, at least on my testing.</p>
Answer: <p>32-bit seeds are used for historical and compatibility reasons, and they provide <span class="math-container">$2^{32}$</span> (over 4 billion) initial states, which is sufficient for most applications, including ML, since, as you may already know, the quality of a PRNG is not among the most relevant factors for the quality of ML models. That's why it's probably hard to find a public generative-model codebase that directly sets a 64-bit seed.</p>
<p>However, when generative models are implemented in programming languages or frameworks (e.g., Python, PyTorch, TensorFlow) that support 64-bit random number generators, you can always manually enforce 64-bit seeds to get higher-quality random number generation, if you insist and deem it appropriate in your specific case.</p>
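For instance, NumPy's default bit generator accepts arbitrary Python integers as seeds (and, if I recall correctly, <code>torch.manual_seed</code> also takes a 64-bit integer), so enforcing a 64-bit seed is a one-liner; the particular constant below is chosen arbitrarily for the example.

```python
import numpy as np

# A 64-bit seed, chosen arbitrarily for the example.
seed = 0x9E3779B97F4A7C15

# NumPy's default Generator (PCG64) accepts arbitrary-precision Python ints.
rng = np.random.default_rng(seed)
z = rng.standard_normal(4)  # e.g. a latent vector for a generative model
print(z)

# The same seed reproduces the same stream - the usual reason to seed in ML.
rng2 = np.random.default_rng(seed)
print(np.allclose(z, rng2.standard_normal(4)))  # True
```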
|
https://ai.stackexchange.com/questions/47776/do-generative-models-with-64-bit-seeds-exist
|
Question: <p>I am a little perplexed trying to distinguish <em>parametric</em> vs <em>non-parametric</em> generative models.</p>
<p>In my understanding, a <strong>parametric</strong> generative model would try to learn the probability density function by estimating the parameters of an underlying distribution we are assuming. So just doing for example,</p>
<p><span class="math-container">$$\theta^* = arg\max_\theta \,\prod_{i=1}^N p_\theta(\textbf{x}_i)$$</span></p>
<p>I realize that in practice, we need to figure out what is the basic distribution that we are going to modify by adjusting the parameters <span class="math-container">$\theta$</span>. So in the case of <em>VAEs</em> we use <em>latent variables</em> assumption to make training feasible, we jointly train <span class="math-container">$q_\phi(\textbf{z}|\textbf{x})$</span> and <span class="math-container">$p_\theta(\textbf{x}|\textbf{z})$</span> prametrizing both distributions with neural networks (i.e. encoder & decoder). In such case, we end up with the situation that all our distributions are <strong>Gaussians</strong> (assuming that prior and conditional are gaussians). So, having said that, can we conclude that VAEs are <strong>parametric</strong>? Also, what could be an example of <strong>non-parametric</strong> generative model?</p>
<p>I would say that, for example, <em>GANs</em> may be an example of a non-parametric model, as we start with a latent normal distribution but then apply a stack of non-linear transformations, ending up with something potentially very complicated.</p>
Answer: <p>I call this difference <strong>prescribed</strong> vs <strong>implicit</strong> models.</p>
<p>To my knowledge, a <strong>parametric</strong> model has a fixed, finite number of parameters with respect to the sample size (e.g. a linear model), while <strong>nonparametric</strong> models have a number of parameters that grows with the data. See this <a href="https://sebastianraschka.com/faq/docs/parametric_vs_nonparametric.html" rel="nofollow noreferrer">blog post</a> and this <a href="https://stats.stackexchange.com/questions/268638/what-exactly-is-the-difference-between-a-parametric-and-non-parametric-model">answer</a> for more precise info. Since most models use a Neural Network to generate an output, I prefer the terms prescribed vs implicit.</p>
<p>Viewing this way,</p>
<ul>
<li><p><strong>Implicit</strong> models directly generate the output by passing a noise vector through a deterministic function (Generally a NN). They do not have a global likelihood, and are less limited in the architecture.
A great example of an implicit model, as you stated, is the GAN.</p>
</li>
<li><p><strong>Prescribed</strong> models provide an explicit parametric specification of the output distribution <span class="math-container">$p(x)$</span>. They can estimate the likelihood of data, but are more constrained in the architecture.</p>
<ul>
<li>An Autoregressive model fits a Bernoulli distribution, learning a neural network to model <span class="math-container">$p(x_i|x_{<i})$</span></li>
<li>Normalizing flows learn a change of variable to reshape a gaussian source noise. They must learn an invertible function, so the architecture is heavily constrained.</li>
<li>VAEs assume a Gaussian prior <span class="math-container">$p(z) \sim \mathcal{N}(0,\mathbb{I})$</span> that is decoded by a NN which models <span class="math-container">$p(x|z)$</span>. They have fewer limits on the architecture, but they lose the closed-form likelihood, and they are trained by maximizing an ELBO. Since they still have an (even if approximated) likelihood, they are prescribed models.</li>
</ul>
</li>
</ul>
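A minimal numerical sketch of the distinction (all numbers invented): the implicit model only exposes a sampler obtained by pushing noise through a deterministic map, while the prescribed model exposes an explicit density you can evaluate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Implicit model (GAN-style): a deterministic map pushes source noise forward;
# we can sample from it but have no closed-form density. The affine map here
# stands in for a trained generator network.
def implicit_sample(n):
    z = rng.standard_normal((n, 2))
    W = np.array([[2.0, 0.0], [1.0, 0.5]])  # made-up "generator" weights
    return z @ W.T + np.array([1.0, -1.0])

# Prescribed model: an explicit parametric density, so the likelihood of a
# point is available in closed form (here a 1-D Gaussian).
def prescribed_logpdf(x, mu=0.0, sigma=1.0):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

samples = implicit_sample(5)
print(samples.shape)            # (5, 2): sampling is easy for the implicit model
print(prescribed_logpdf(0.0))   # log N(0 | 0, 1) ~= -0.9189
```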
|
https://ai.stackexchange.com/questions/36308/parametric-vs-non-parametric-generative-models
|
Question: <p>I am wondering whether a plain autoencoder is a generative model. I know that the VAE, which is a variant of the autoencoder, is generative, as it explicitly models a distribution over the latent variables and the whole data. But I am not able to see how a plain autoencoder produces a probability distribution and thus becomes a generative model.</p>
<p>Also, this YouTube video says a plain autoencoder is not a generative model (see the last line in the picture below): <a href="https://www.youtube.com/watch?v=c27SHdQr4lw&t=380s&ab_channel=PaulHand" rel="noreferrer">here</a>.</p>
<p><a href="https://i.sstatic.net/28Eq4.png" rel="noreferrer"><img src="https://i.sstatic.net/28Eq4.png" alt="enter image description here" /></a></p>
Answer: <p>An autoencoder is not considered a generative model, because it only reconstructs the given input. You could use the decoder <em>like</em> a generative model by putting in different vectors. However, the standard autoencoder mostly learns a sparse latent space. This means that you will have distinct clusters in the latent space (see the left image below). The decoder has never learned to reconstruct vectors in between the clusters, so it will produce very abstract things - mostly garbage.</p>
<p>Instead a variational autoencoder (VAE) is considered a generative model. It's basically an autoencoder with a modified bottleneck. This VAE learns a dense latent space (see image on the right), this means you can sample any vector from the latent space, pass it to the model and it will give you a nice result with somewhat interpolated object properties from the dataset.</p>
<p><a href="https://news.sophos.com/en-us/2018/06/15/using-variational-autoencoders-to-learn-variations-in-data/" rel="noreferrer">This article</a> provides a nice overview of the two models.</p>
<p><a href="https://i.sstatic.net/tIvHU.png" rel="noreferrer"><img src="https://i.sstatic.net/tIvHU.png" alt="latent_spaces" /></a></p>
<p><em>Figure taken from <a href="https://news.sophos.com/en-us/2018/06/15/using-variational-autoencoders-to-learn-variations-in-data/" rel="noreferrer">here</a></em></p>
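The VAE generation step described above can be sketched as follows; the affine "decoder" is a made-up stand-in for a trained network, and the point is only the sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": any deterministic map from latent to data space.
# In a real VAE this is a trained neural network; this affine+tanh map
# is a stand-in.
def decoder(z):
    return np.tanh(z @ np.array([[1.0, -0.5, 0.3], [0.2, 0.8, -0.4]]))

# VAE generation: because training matched the encoder's latents to a dense
# N(0, I) prior, any prior sample decodes to something data-like.
z = rng.standard_normal((4, 2))
x_gen = decoder(z)
print(x_gen.shape)  # (4, 3)

# A plain autoencoder gives no such guarantee: its latent clusters are sparse,
# and a prior sample may land between clusters, where the decoder never trained.
```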
|
https://ai.stackexchange.com/questions/36118/is-plain-autoencoder-a-generative-model
|
Question: <p>I just finished reading this paper <a href="https://arxiv.org/abs/2006.10137" rel="nofollow noreferrer">MoFlow: An Invertible Flow Model for Generating Molecular Graphs</a>.</p>
<p>The paper, which is about generating molecular graphs with certain chemical properties improved the SOTA at the time of writing by a bit and used a new method to discover novel molecules. The premise of this research is that this can be used in drug design.</p>
<p>In the paper, they beat certain benchmarks and even create a new metric to compare themselves against existing methods. However, I kept wondering if such methods were actually used in practice. The same question is valid for any comparable generative models such as GAN's, VAE's or autoregressive generative models.</p>
<p>So, basically, are these models used in production already? If so, do they speed up existing molecule discovery and/or discover new molecules? If not, why not? And are there any remaining bottlenecks to be solved before this can be used?</p>
<p>Any further information would be great!</p>
Answer:
|
https://ai.stackexchange.com/questions/27720/are-generative-models-actually-used-in-practice-for-industrial-drug-design
|
Question: <p>From what I have understood reading the UCT paper <a href="http://ggp.stanford.edu/readings/uct.pdf" rel="nofollow noreferrer">Bandit based monte-carlo planning</a>, by Levente Kocsis and Csaba Szepesvári, MCTS/UCT requires a generative model. </p>
<ol>
<li><p>Does it mean that, in case there is no generative model of the environment, we cannot use MCTS?</p></li>
<li><p>If we can still use MCTS, how does the roll-out happen in this case, as there is no simulation? </p></li>
</ol>
Answer: <p>You either need a generative model or an emulator of the environment. In the latter case, you don't calculate transitions and rewards using a model, but feed your actions and states to the emulator and work with the results.</p>
<p>The emulator can be a black box as long as it returns the next state and the reward when provided with the current state and an action. You also need a way to identify all legal actions in a given state to build the tree.</p>
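A sketch of the black-box contract described above: MCTS only needs `legal_actions` and `step`, not an explicit transition/reward model. The 1-D random-walk dynamics are invented for the example.

```python
from dataclasses import dataclass
import random

@dataclass
class WalkEmulator:
    """Minimal black-box emulator: given (state, action) it returns
    (next_state, reward), plus a way to list legal actions."""
    goal: int = 5

    def legal_actions(self, state):
        return [-1, +1]

    def step(self, state, action):
        next_state = state + action
        reward = 1.0 if next_state == self.goal else 0.0
        return next_state, reward

def rollout(emu, state, rng, depth=20):
    """Random rollout, as MCTS runs from a leaf - needs only the emulator."""
    total = 0.0
    for _ in range(depth):
        a = rng.choice(emu.legal_actions(state))
        state, r = emu.step(state, a)
        total += r
    return total

emu = WalkEmulator()
print(rollout(emu, 0, random.Random(0)))
```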
|
https://ai.stackexchange.com/questions/3290/can-we-use-mcts-without-a-generative-model
|
Question: <p>I'm looking for a modern machine learning book with graduate-level treatment of more recent topics such as diffusion and generative models, transformers etc.</p>
<p>I have a hard copy of <em>Deep Learning</em> by Goodfellow and Bengio; while I liked the book and read it extensively when it was published, it is a bit dated now.</p>
<p>I'm considering <em><a href="https://probml.github.io/pml-book/book2.html" rel="nofollow noreferrer">Probabilistic Machine Learning: Advanced Topics</a></em> by Kevin Patrick Murphy. But maybe there are better alternatives.</p>
<p>I need this for my comprehensive Ph.D. examination. Any suggestions are very appreciated.</p>
<p>P.S. I'm also aware of this <a href="https://ai.stackexchange.com/questions/23507/what-are-other-examples-of-theoretical-machine-learning-books">stack post</a>, but the list of references there is quite dated or introductory.</p>
Answer: <p><a href="https://sites.google.com/view/berkeley-cs294-158-sp20/home" rel="nofollow noreferrer">Berkeley CS294-158</a> is a graduate-level course on deep unsupervised learning. They cover a lot of architectures used in modern generative modeling. They have recorded lectures and slides online.</p>
<p><a href="https://deepgenerativemodels.github.io/" rel="nofollow noreferrer">Stanford CS236</a> is a course on deep generative modeling. They don't have a textbook, but they have course notes and slides online.</p>
|
https://ai.stackexchange.com/questions/41439/modern-graduate-level-machine-learning-books-with-focus-on-generative-models
|
Question: <p>There are newer PRNGs like xoshiro/xoroshiro that have better quality than Mersenne Twister or WELL in a way that they pass statistical tests. Is the use of a better PRNG algorithm crucial to improve the output of generative models? (Anyway I couldn't find any information about what PRNGs are used in popular generative models)</p>
Answer: <p>Most popular generative models like GANs, VAEs and DDPMs rely on <a href="https://en.wikipedia.org/wiki/Pseudorandom_number_generator" rel="nofollow noreferrer">PRNGs</a> for initializing model weights, sampling latent variables from a Gaussian distribution, and some other stochastic operations during training, such as dropout and noise injection in the forward pass of DDPMs. Most PRNGs, including the Mersenne Twister, meet these requirements adequately.</p>
<blockquote>
<p>One well-known PRNG to avoid major problems and still run fairly quickly is the Mersenne Twister.</p>
</blockquote>
<p>Small variations in initial weights are quickly overshadowed by the stochastic nature of subsequent training. And the effects of latent sampling or dropout/noise injection are generally robust to minor differences in randomness quality. Empirically, most of a generative model's quality stems from the optimization algorithm, model architecture, training data quality and hyperparameters. Since (pre)trained deep generative models are already regularized, approximate and suboptimal due to many factors, including those mentioned above, a small gain in true randomness from the PRNG won't affect this materially.</p>
<p>Newer PRNGs like xoshiro/xoroshiro are designed to address edge cases in specific statistical tests, which is required in some highly reliable and performant security applications, not in generative models.</p>
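Incidentally, NumPy makes the bit generator pluggable behind the same `Generator` API, so you can compare the Mersenne Twister against a newer PRNG directly; for a typical weight-initialization draw the resulting statistics are indistinguishable in practice.

```python
import numpy as np

# Same API, two different bit generators: the classic Mersenne Twister
# and NumPy's default PCG64.
seed = 1234
mt = np.random.Generator(np.random.MT19937(seed))
pcg = np.random.Generator(np.random.PCG64(seed))

# E.g. a weight-initialization draw - the kind of use argued above to be
# insensitive to PRNG choice.
w_mt = mt.standard_normal(10_000)
w_pcg = pcg.standard_normal(10_000)
print(round(w_mt.mean(), 3), round(w_pcg.mean(), 3))   # both near 0
print(round(w_mt.std(), 3), round(w_pcg.std(), 3))     # both near 1
```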
|
https://ai.stackexchange.com/questions/47745/is-the-quality-of-generative-model-outputs-affected-by-the-quality-of-their-prng
|
Question: <p>How does one tell if a given model is generative AI or predictive AI?</p>
<p>Do generative models have more outputs than inputs and <em>vice versa</em> for predictive models?</p>
Answer: <p>It's hard nowadays to draw a distinct line between the two with the advent of conditional generation and normalizing flows.</p>
<p>Usually, we say that a model is generative if it tries to model the joint probability <span class="math-container">$p(x,y)$</span>, whereas discriminative models try to model the conditional <span class="math-container">$p(y|x)$</span>.</p>
<p>However, a generative model aims at sampling from the learned distribution, whereas discriminative models are usually only interested in some statistics of it (such as the mode).</p>
|
https://ai.stackexchange.com/questions/46020/how-to-tell-if-a-model-is-generative-vs-predictive
|
Question: <p>Consider a generative model, G, trained on a dataset D. This generative model can be either GAN- or diffusion-based. Suppose each sample, x_i, generated by G can be evaluated by a readily available scoring function, S(x_i).</p>
<p>What are the possible ways to navigate the latent space of G to find generated samples which maximize S(x_i)?</p>
<p>Here are some of the possible ways I have thought:</p>
<ol>
<li><p>Random samplings from G to form a dataset. For each sample, evaluate it with S, then finetune G to maximise S with loss = -S. Keep repeating until G only produces samples with high S values. This is motivated by RLHF, but uses backprop instead of an RL loss.</p>
</li>
<li><p>Fix G, start with a random latent, l_1, to sample x_1 from G. Then slowly find a better x_2 by moving in the direction of increasing S, e.g. l_2 = l_1 + alpha * s(x_1) = l_1 + alpha * s( g(l_1)). Is this possible? Is it fair to assume P(l, s(1)) is smooth?</p>
</li>
<li><p>Condition the training of G with S, i.e. G(X|S(X)). In this way, we can specify any desired score as an input to the generative model.</p>
</li>
</ol>
<p>Are any of these ideas possible? Otherwise, are there better or more simpler approaches?</p>
Answer: <p>For the first suggestion, are you suggesting doing gradient descent with objective <span class="math-container">$-S$</span>? If so (and <span class="math-container">$S$</span> is differentiable), that's definitely possible. I would suggest looking at the <a href="https://arxiv.org/pdf/2003.03808.pdf" rel="nofollow noreferrer">PULSE paper</a>. They do image superresolution over face images by searching through the latent space of a GAN to find high resolution generations that match the given low-resolution image.</p>
<p>The main thing to keep in mind here is that not all areas in the latent space correspond to realistic generations. You probably want areas that correspond to latents that were likely seen during training (e.g., latents with high probability under the prior). The PULSE paper does this by constraining the search space to a hypersphere centered at the origin. <span class="math-container">$S$</span> here would be measuring how much the downscaled generation deviates from the given low-resolution image.</p>
<p><a href="https://arxiv.org/pdf/1904.03189.pdf" rel="nofollow noreferrer">This paper</a> does something similar. This paper also has an additional perceptual loss term using a VGG model. <span class="math-container">$S$</span> here would be measuring the reconstruction loss using both a simple pixel-wise MSE loss and the perceptual loss.</p>
<p>Finally, an early way of generating images from text-prompts was <a href="https://towardsdatascience.com/generating-images-from-prompts-using-clip-and-stylegan-1f9ed495ddda" rel="nofollow noreferrer">optimizing the input embeddings for an image generator like StyleGAN using CLIP as a loss function</a>.</p>
<p>I'm not too sure about your second and third suggestion, though. The second sounds a bit like gradient descent -- how do you know what "the direction of <span class="math-container">$S$</span>" is? The third sounds like <a href="https://arxiv.org/abs/1411.1784" rel="nofollow noreferrer">Conditional GANs</a>, but I'm not aware of work that conditions on the "quality" of the input.</p>
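For what it's worth, the first suggestion ("gradient descent with objective <span class="math-container">$-S$</span>") can be sketched end to end with a toy differentiable generator. The affine map and the score function below are both invented; a real setup would backpropagate through the actual network with an autodiff framework.

```python
import numpy as np

# Toy differentiable pipeline: generator g(z) (a stand-in for a trained GAN/
# diffusion decoder) and score S(x). The optimization loop is the point:
# minimize -S(g(z)) over the latent z.
W = np.array([[1.0, 0.5], [-0.3, 1.2]])   # fake generator weights

def g(z):
    return W @ z

target = np.array([2.0, -1.0])

def S(x):                                  # score: closeness to a target
    return -np.sum((x - target) ** 2)

def grad_neg_S_wrt_z(z):                   # chain rule through the generator
    x = g(z)
    return W.T @ (2 * (x - target))

z = np.zeros(2)
for _ in range(200):
    z -= 0.05 * grad_neg_S_wrt_z(z)        # gradient descent on -S

print(S(g(z)))  # approaches the maximum S = 0
```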
|
https://ai.stackexchange.com/questions/41608/maximize-a-scoring-function-within-the-latent-space-of-a-generative-model
|
Question: <p>If the concern with using generative models for question answering is that these models aren't always producing factual information, why is it that people are using these models with Retrieval Augmented Generation (Open Generative QA/Closed Generative QA), rather than using a transformers-based extractive QA model that can refer users to the potential answer from actual text documentation?</p>
Answer: <p>The issue is that extractive QA limits the types of questions to ones that have answers that appear <em>exactly</em> in the text.</p>
<p>For example, consider the question:</p>
<pre><code>Is Bob considered a good movie director?
</code></pre>
<p>It's very unlikely that you're going to have a document that says exactly <code>Bob is a good movie director</code>. But, you might have several documents that talk about the quality of movies that Bob has directed or interviews that discuss how well he works with actors etc. from which you can <em>infer</em> an answer from the totality of the evidence.</p>
<p>That's not say that extractive systems are always worse, you're just faced with the classic trade-off of a restrictive but predictable system versus a more flexible system that may e.g., hallucinate.</p>
|
https://ai.stackexchange.com/questions/43646/why-is-generative-qa-rag-so-popular-when-extractive-qa-models-exist-that-can-p
|
Question: <p>I'm trying to come up with a generative model that can input a name and output all valid formats of it. </p>
<p>For example, "Bob Dylan" could be an input and the gen model will output "Dylan, Bob", "B Dylan", "Bob D" and any other type of valid formatting of a person's name. So given my example the gen model doesn't seem that complicated to build, but it also has to handle stuff like "Dylan, Bob" and "B Dylan", but obviously the 2nd one it shouldn't output "Bob Dylan" as a potential output cause inferring that requires more than just "B Dylan". Any ideas for a good Generative Model for this?</p>
Answer:
|
https://ai.stackexchange.com/questions/15978/whats-a-good-generative-model-for-creating-valid-formats-of-a-persons-name
|
Question: <h3>Background</h3>
<p><strong>Generative modeling</strong> <br>
Generative modeling aims to model the probability of observing an observation <code>x</code>.
<span class="math-container">$$
p(x) = \frac{p(y\cap x)}{p(y|x)}
$$</span></p>
<p><strong>Representation Learning</strong></p>
<p>Instead of trying to model the <strong>high-dimensional sample space</strong> directly, we describe each observation in the training set using some <strong>lower-dimensional latent space</strong> and then learn a mapping function that can take a point in the latent space and map it to a point in the original domain</p>
<hr />
<h3>Context:</h3>
<p>Let’s suppose we have a training set consisting of grayscale images of biscuit tins
<a href="https://i.sstatic.net/QFgfn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QFgfn.png" alt="enter image description here" /></a></p>
<p>we can convert each image of a tin to a point in a latent space of just two dimensions. We would first need to establish two latent space dimensions that best describe this dataset (height, width), then learn the mapping function <strong>f</strong> that can take a point in this space and map it to a grayscale biscuit tin image.
<a href="https://i.sstatic.net/yIQMc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIQMc.png" alt="enter image description here" /></a></p>
<hr />
<h3>Question:</h3>
<p>What is:</p>
<p><code>P(y|x)</code>: is it possibility of point lying in Latent space given it is a point in (high dimensional) pixel space? <br></p>
<p>Or for that matter, what do <code>x, y, P(x), P(y), P(x|y)</code> really mean/represent here?</p>
<p><em>P.S.: this is referred from the book Generative Deep Learning 2<sup>nd</sup> edition By David Foster, O'Reilly Publication.</em></p>
Answer: <p>"<em>What is: x, y, P(x), P(y), P(x|y), P(y|x) in this example?</em>" I don't really see where you are referring to them in the question, so I'll answer with the general interpretation in ML:</p>
<ul>
<li><span class="math-container">$P(x)$</span>: also known as data distribution, it's the density that describes some data... take your example, if your bins are described by height and width, you can assume that maybe the height is described by <span class="math-container">$H\sim U(1,10)$</span> (uniform) and the width by <span class="math-container">$W \sim U(1,5)$</span>, and supposing that they are independent, then the joint distribution of <span class="math-container">$H$</span> and <span class="math-container">$W$</span> is <span class="math-container">$P(x)$</span> (now, if you are dealing with images, you have to consider the joint distribution of all pixels, so it's a very high dimensional distribution)</li>
<li><span class="math-container">$P(y)$</span> also known as target distribution. This can also be considered as "prior", since in discriminative task, you try to model <span class="math-container">$P(y|x)$</span>... it describes your belief of the target distribution given no additional information... take for example a task where you try to regress the weight of a person, then you know that generally speaking, weight is distributed as a Gaussian (look up statistics online about this and you will see that Bell curve) centered at some weight</li>
<li><span class="math-container">$P(y|x)$</span>, your conditional probability that you want to learn with discriminative tasks. It means _"Given these informations <span class="math-container">$x$</span>, how likely it <span class="math-container">$y$</span> to be some value"... take the weight regression task, say generally that the weight is distributed as <span class="math-container">$N(50,10)$</span>, so given no additional information, the best guess is to say that a person will weight 50kg... however, if I additionally tell you that that persone is 190cm tall, then you know that that person will weight more than the average most likely, so your belief on his weight might shift to <span class="math-container">$N(70,5)$</span>... this "shift" is the information brought by <span class="math-container">$x$</span>, thus <span class="math-container">$p(y|x)$</span> is just a classification/regression etc etc</li>
<li><span class="math-container">$P(x|y)$</span> is the other way around.. if we consider images, <span class="math-container">$Y$</span> might be for example a the digit you want to see in the image, and <span class="math-container">$x$</span> the image you want to generate, then <span class="math-container">$p(y|x)$</span> is just "which images are more likely to be correct given that I want to see the digit <span class="math-container">$y$</span> printed on it"... you can consider that <span class="math-container">$p(x)$</span> is already a distribution and <span class="math-container">$p(x|y)$</span> just takes the interesting part of it where there is <span class="math-container">$y$</span> printed on the image</li>
</ul>
<p>So, in your example, P(x) is just the distribution of the images, P(y) is the distribution of some classification over those bins (say, whether a cookie will fit in one: your general belief about whether a cookie fits in a random bin or not), P(y|x) is "given this image of a bin, will a cookie fit in it?", and P(x|y) is "generate a picture of a possible bin such that this cookie y fits in it".</p>
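<p>To make this concrete, here is a toy numeric sketch (the numbers are invented purely for illustration) showing how all four quantities fall out of one small joint table:</p>

```python
import numpy as np

# Toy joint distribution over x (bin size: small/large) and y (cookie fits: no/yes).
# Rows index x, columns index y; entries are P(x, y). All numbers are invented.
joint = np.array([[0.30, 0.10],   # small bin: the cookie mostly doesn't fit
                  [0.05, 0.55]])  # large bin: the cookie mostly fits

p_x = joint.sum(axis=1)               # marginal P(x): distribution of bins
p_y = joint.sum(axis=0)               # marginal P(y): prior on "fits"
p_y_given_x = joint / p_x[:, None]    # discriminative P(y|x): rows sum to 1
p_x_given_y = joint / p_y[None, :]    # generative direction P(x|y): columns sum to 1
```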
|
https://ai.stackexchange.com/questions/41818/what-is-x-y-px-py-in-generative-model-domain
|
Question: <p>In the 2015 paper "<a href="https://arxiv.org/pdf/1503.03585.pdf" rel="nofollow noreferrer">Deep Unsupervised Learning using Nonequilibrium Thermodynamics</a>" by Sohl-Dickstein et al. on diffusion for generative models, Figure 1 shows the forward trajectory for a 2-d swiss-roll image using Gaussian diffusion. The thin lines are gradually blurred into wider and fuzzier lines, and eventually into an identity-covariance Gaussian. Table App.1 gives the diffusion kernel as:</p>
<p><span class="math-container">$$
q(\mathbf{x}^{(t)} \mid \mathbf{x}^{(t-1)}) = \mathcal{N}(\mathbf{x}^{(t)} ; \mathbf{x}^{(t-1)} \sqrt{1 - \beta_t}, \mathbf{I} \beta_t )
$$</span></p>
<p>The covariance of the diffusion kernel is diagonal, so each component <span class="math-container">$x_i^{(t)}$</span> (i.e., each pixel in the image at time step <span class="math-container">$t$</span>) is independently sampled from a 1-d Gaussian based on the prior time step's pixel value at the <em>same x-y location in the image</em>. So a given pixel should <em>NOT</em> diffuse into neighboring pixels; instead, the action of the diffusion step is a linear Gaussian 1-d transformation of the number held in the pixel, with the mean slightly reduced and some noise added.</p>
<p><strong>Question</strong>: This seems inconsistent with Figure 1? Instead of the blurred line (wider and fuzzier line), we should have a line that has the same width, but exhibits more noise? In order to have a pixel diffuse into neighboring pixels, we would need a diffusion kernel with a non-diagonal covariance, so that there is nonzero covariance between components?</p>
Answer: <blockquote>
<p>Figure 1. The proposed modeling framework trained on 2-d swiss roll data</p>
</blockquote>
<p>The data is actually 2d coordinates, plotted on a scatter-plot. It doesn't represent an <span class="math-container">$n \times n$</span> image with individual pixels. And the added noise is 2d, well 1d to the x-coordinate and 1d to the y-coordinate, if you consider them separately.</p>
<p>But when the process is applied to images, as seen in Figure 3, it behaves as you described and expected the result to look like.</p>
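<p>To see this numerically, here is a small numpy sketch (my own illustration, with an arbitrary <span class="math-container">$\beta$</span> schedule) applying the diagonal-covariance kernel from the question to a 2-d swiss-roll-like point cloud; each coordinate is perturbed independently, and after many steps the cloud approaches an identity-covariance Gaussian:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse_step(x, beta, rng):
    # q(x_t | x_{t-1}) = N(x_{t-1} * sqrt(1 - beta), beta * I):
    # each coordinate is scaled and noised independently (diagonal covariance).
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)

# A 2-d swiss-roll-like point cloud, roughly scaled into [-1, 1]^2.
t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, size=1000)
x = np.stack([t * np.cos(t), t * np.sin(t)], axis=1) / (4.5 * np.pi)

for _ in range(200):
    x = diffuse_step(x, beta=0.02, rng=rng)

# x is now approximately N(0, I): no pixel "blurring" is involved, only
# independent per-coordinate noising of each point's two coordinates.
```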
|
https://ai.stackexchange.com/questions/40021/blurring-of-image-in-generative-model-using-diffusion-probabilistic-method
|
Question: <p>I don't really understand the reason of this. I have listed the outputs of different models below.</p>
<p>Gan (source: self made simple GAN for CIFAR10)</p>
<p><a href="https://i.sstatic.net/U0gJW0ED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U0gJW0ED.png" alt="enter image description here" /></a></p>
<p>Vq-vae (source: <a href="https://arxiv.org/pdf/1906.00446" rel="nofollow noreferrer">link</a>)</p>
<p><a href="https://i.sstatic.net/fIC40d6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fIC40d6t.png" alt="enter image description here" /></a></p>
<p>What i mean is illustrated here.</p>
<p>The image that we want to encode then decode (reconstruction)
<a href="https://i.sstatic.net/lQZVN019.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lQZVN019.png" alt="enter image description here" /></a></p>
<p>The input of the model</p>
<p><a href="https://i.sstatic.net/M6X4ODFp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6X4ODFp.png" alt="enter image description here" /></a></p>
Answer:
|
https://ai.stackexchange.com/questions/46021/why-generative-models-produce-mesh-structure-at-the-beginning
|
Question: <p>OpenAI seems to be avoiding branding their reasoning models as "GPTs. See, for example, <a href="https://platform.openai.com/docs/models" rel="nofollow noreferrer">this page</a> from their API docs, which has one column for "GPT models" and another for "Reasoning models." Are the reasoning models not generative pretrained transformers, or is this simple a branding decision by OpenAI?</p>
Answer: <p>It is primarily a functional branding decision: the reasoning models are still foundationally pretrained transformer language models for contextual language understanding and generation, though they differ significantly in both their fine-tuning methods and the types of tokens they generate. Traditional GPT models like GPT-4o are designed for <em>general-purpose</em> language understanding and generation, while reasoning models are optimized to tackle complex problem-solving tasks. According to the OpenAI reference you linked, the o1 reasoning model is designed to solve <em>specialized, hard</em> problems across several domains, and it is further trained with reinforcement learning to generate additional chain-of-thought reasoning tokens.</p>
<blockquote>
<p>The o1 series of models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user... Reasoning models, like OpenAI o1 and o3-mini, are new large language models trained with reinforcement learning to perform complex reasoning. Reasoning models think before they answer, producing a long internal chain of thought before responding to the user. Reasoning models excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows.</p>
</blockquote>
<blockquote>
<p>The o1 reasoning model is designed to solve hard problems across domains. o1-mini is a faster and more affordable reasoning model, but we recommend using the newer o3-mini model that features higher intelligence at the same latency and price as o1-mini.</p>
</blockquote>
<blockquote>
<p>Reasoning models introduce reasoning tokens in addition to input and output tokens. The models use these reasoning tokens to "think", breaking down their understanding of the prompt and considering multiple approaches to generating a response. After generating reasoning tokens, the model produces an answer as visible completion tokens, and discards the reasoning tokens from its context.</p>
</blockquote>
|
https://ai.stackexchange.com/questions/47974/are-the-newer-openai-models-such-as-o1-not-generative-pretrained-transformers
|
Question: <p>Although the GAN is widely used due to its capability, there were generative models before the GAN which are based on probabilistic graphical models such as Bayesian networks, Markov networks, etc.</p>
<p>It is now a well-known fact that GANs are excelling at image generation tasks. But I am not sure whether the generative models that were invented before GANs were used for image generation or not.</p>
<p>Is it true that other generative models were used for image <strong>generation</strong> before the proposal of the GAN in 2014?</p>
Answer:
|
https://ai.stackexchange.com/questions/30055/is-image-generation-not-existent-before-generative-adversarial-networks
|
Question: <p>If we have a neural network that learns the generative model for <span class="math-container">$P(A, B, C)$</span>, the joint PDF of the random variables <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, and <span class="math-container">$C$</span>.</p>
<p>Now, we want to learn the generative model for <span class="math-container">$P(A, B, C, D)$</span>.</p>
<p>Is there any theory that says learning <span class="math-container">$P(A,B,C$</span>) and then composing it with <span class="math-container">$P(D \mid A,B,C)$</span> is faster than learning <span class="math-container">$P(A,B,C,D)$</span> from scratch?</p>
Answer:
|
https://ai.stackexchange.com/questions/12128/given-the-generative-model-pa-b-c-would-it-be-faster-to-learn-pa-b-c-d
|
Question: <p>I've curated a dataset of player-made Minecraft builds. Each unique Minecraft block is tokenized and treated as a unique "word" like in NLP. I've trained a Skip-Gram model on the dataset (using context "cubes" and target "blocks" as opposed to windows and words). Plotting the embeddings, they are meaningful as nearby embeddings are similar blocks.</p>
<p>Now, I'm attempting to train a Variational AutoEncoder on the embedded builds. Where each build data point is converted from the tokens to the pre-trained embeddings from the SkipGram model. My VAE is fully convolutional, modeled after the encoder and decoder parts of Stable Diffusion, only upgraded to 3D. Therefore, each data point is of size:</p>
<p><code>(Batch Size, Depth, Height, Width, Embedding Dimension)</code></p>
<p><a href="https://i.sstatic.net/4Z3yRPLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Z3yRPLj.png" alt="Block Embeddings plotted with T-SNE" /></a></p>
<p>I'm having trouble training the model, specifically with the reconstruction error loss term. My KL-divergence term seems to be working well. I've tried PyTorch's <code>CosineEmbeddingLoss</code>, since the embeddings are generally nonlinear, weighting the loss of each block according to its probability of occurring in the build. I've also tried <code>MeanSquaredError</code> and <code>LogCosh</code>, and those didn't work very well.</p>
<p>I've perused <em>many</em> repositories that implement a VAE, and nearly all of them use either <code>BinaryCrossEntropy</code> or simply <code>CrossEntropy</code>. Would those loss functions be appropriate for my model? My embeddings are real-valued. The task is somewhat of a classification problem, with around 4000 unique classes due to the number of blocks in Minecraft. I've gone with embeddings because if the model outputs a block similar to the actual block, that would still be a viable build. I'm beginning to think this task is much more difficult than I anticipated.</p>
<p>Perhaps another architecture would be more optimal? I lack a necessary intuition with generative modeling as it's not the kind of ML I usually write. Compute is not a limitation as I have access to the H100s at my university.</p>
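<p>For reference, here is a plain-numpy sketch of the frequency-weighted cosine reconstruction term I described (the exact form of the weighting is my own choice):</p>

```python
import numpy as np

def cosine_recon_loss(pred, target, weights):
    """Frequency-weighted cosine reconstruction loss.

    pred, target: (N, D) arrays of block embeddings at N voxel positions;
    weights: (N,) per-position weights (e.g. inverse block frequency)."""
    pn = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    tn = target / np.linalg.norm(target, axis=-1, keepdims=True)
    cos_sim = (pn * tn).sum(axis=-1)          # 1 = same direction, -1 = opposite
    return float(((1.0 - cos_sim) * weights).mean())
```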
Answer:
|
https://ai.stackexchange.com/questions/46940/which-generative-model-architecture-and-loss-function-should-i-use-to-train-on
|
Question: <p>To my understanding, mode collapse is when there are multiple classes in the dataset and the generative network converges to only one of these classes, generating images only within that class. On training the model further, the model converges to another class.</p>
<p>In Goodfellow's NeurIPS presentation he clearly addressed how training a generative network in an adversarial manner avoids mode collapse. How exactly do GANs avoid mode collapse? And did previous work on generative networks not try to address this?</p>
<p>Apart from their (generally) superior performance, does the fact that GANs address mode collapse make them far preferred over other ways of training a generative model?</p>
Answer: <p>I don't think he said that at all. Going back to the talk you'll see he mentions mode collapse comes from the naivete of using alternating gradient-based optimization steps because then <span class="math-container">$min_{\phi}max_{\theta}L(G_\phi, D_\theta)$</span> starts to look a lot like <span class="math-container">$max_{\theta}min_{\phi}L(G_\phi, D_\theta)$</span>. </p>
<p>This is problematic because in the latter case the generator has an obvious minimum of transforming all generated output into a single-mode that the discriminator has considered acceptable. </p>
<p>Since then a lot of work has been done to deal with this point of failure. Examples include <a href="https://arxiv.org/abs/1611.02163" rel="nofollow noreferrer">Unrolled GANs</a> (he mentions this one in the talk), where you essentially make the generator optimize what the discriminator <em>will</em> think <span class="math-container">$K$</span> steps in the future, preserving the ordering of the <span class="math-container">$min \ max$</span> game, and <a href="https://arxiv.org/abs/1701.07875" rel="nofollow noreferrer">Wasserstein GANs</a>, where you focus on a different metric that still has the same global minimum but allows for side-by-side training, eliminating the ordering issue and this failure mode to begin with. On top of this, other work has been done as well; these are just two important examples.</p>
<p>Regarding how they fare against other generative models, like VAEs, there is no <em>one is better than the other</em>. The recent empirical success of GANs is why they are so popularly used, but we still see others being used in practice as well.</p>
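<p>As a toy illustration of why the ordering matters (my own example, not from the talk): running naive simultaneous gradient descent/ascent on the bilinear game <span class="math-container">$\min_x \max_y xy$</span> does not converge to the equilibrium at the origin; the iterates spiral outward instead:</p>

```python
# min_x max_y f(x, y) = x * y has its equilibrium at (0, 0), but naive
# simultaneous gradient descent (on x) / ascent (on y) spirals away from it.
lr = 0.1
x, y = 1.0, 1.0
for _ in range(100):
    gx, gy = y, x                      # df/dx and df/dy at the current point
    x, y = x - lr * gx, y + lr * gy    # descent on x, ascent on y
# The squared norm x^2 + y^2 grows by a factor (1 + lr^2) each step, so the
# iterates oscillate outward instead of converging to the saddle point.
```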
|
https://ai.stackexchange.com/questions/16441/how-exactly-does-adversarial-training-help-in-handling-mode-collapse-in-generati
|
Question: <p>There are lots of explanations on DGM (Deep Generative Model) and generative classifier (most of the explanations on which are about generative classifier vs discriminative classifier)</p>
<p>But I can hardly find the common ground between the two concepts. In my understanding, 'generative' in DGM is quite straightforward: it almost matches its literal meaning. By contrast, 'generative' as used in comparisons with discriminative models is a little more technical, but it is the sense that claimed the word earlier (Jordan and Ng, 2002).</p>
<p>Or is it that these two concepts are not entirely unrelated? Was the word used in both cases simply because both produce distributions while learning?</p>
Answer: <p>Generative models have in common that they all model the distribution of the training samples. This makes it possible to sample from a distribution that is (hopefully) similar to the training data distribution. Different architecture types such as energy-based models, variational autoencoders, generative adversarial networks, autoregressive models, normalizing flows, and numerous hybrid approaches can all be considered generative. For more information see <a href="https://arxiv.org/pdf/2103.04922.pdf" rel="nofollow noreferrer">this review</a>.</p>
<p>The same meaning is true for generative classifiers, they are called this way because they model how a particular class would generate its associated data. Discriminative classifiers learn to detect features to distinguish between the various possible classes. This visualization from <a href="https://medium.com/@jordi299/about-generative-and-discriminative-models-d8958b67ad32" rel="nofollow noreferrer">here</a> might be helpful in this regard:</p>
<p><a href="https://i.sstatic.net/lA9wQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lA9wQ.png" alt="disciminative vs. generative" /></a></p>
|
https://ai.stackexchange.com/questions/38925/what-is-the-difference-between-the-term-generative-in-classical-machine-learni
|
Question: <p>I have a quick question regarding the use of different latent spaces to represent a distribution. Why is it that a Gaussian is usually used to represent the latent space of the generative model rather than say a hypercube? Is it because a Gaussian has most of its distribution centred around the origin rather than a uniform distribution which uniformly places points in a bounded region? </p>
<p>I've tried modelling different distributions using a generative model with both a Gaussian and Uniform distribution in the latent space and the Uniform is always slightly restrictive when compared with a Gaussian. Is there a mathematical reason behind this?</p>
<p>Thanks in advance! </p>
Answer:
|
https://ai.stackexchange.com/questions/21129/why-do-hypercube-latent-spaces-perform-poorer-than-gaussian-latent-spaces-in-gen
|
Question: <p>Suppose there is a dataset of handwritten zeros. We know that human handwriting is not perfect. But here the model tries to draw the <strong>idealized version</strong> of the digit, e.g. a perfect circle.</p>
<p>I ask because I accidentally trained a GAN in which the outputs for all random latent vector inputs were almost identical to the idealized version of the dataset, but for the number 7 instead of 0.</p>
Answer: <p>It turns out that if we train the generator part alone, without the discriminator, on real samples and random latent vectors, we get an idealized version of the dataset.</p>
<p><a href="https://i.sstatic.net/pSJ6BVfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pSJ6BVfg.png" alt="enter image description here" /></a></p>
|
https://ai.stackexchange.com/questions/46962/is-there-such-generative-model-that-drawing-idealized-version-of-dataset
|
Question: <p>When I was reading about discriminative vs generative models, I came across their definitions:</p>
<blockquote>
<p>Given a distribution of inputs <span class="math-container">$X$</span> and labels <span class="math-container">$Y:$</span></p>
<p>Discriminative models learn the conditional distribution <span class="math-container">$P(Y|X)$</span>.</p>
<p>Generative models learn the joint distribution <span class="math-container">$P(X,Y)$</span>. If you only
have <span class="math-container">$X$</span>, then you can still learn <span class="math-container">$P(X)$</span>.</p>
</blockquote>
<p>My questions are:</p>
<ul>
<li>What does it mean to "learn a distribution" ? Learning what from the distribution ?</li>
<li>What does the distribution contain, and what does it look like?</li>
</ul>
Answer: <p><strong>Learning the distribution</strong>: When we talk about learning a distribution, we are essentially trying to capture the underlying statistical properties of the data. In other words, we try to capture the distribution from which the data points in our dataset are sampled. This involves estimating parameters that define the distribution (such as mean, variance, etc.) or learning a model that can generate data points similar to those observed in the dataset.</p>
<p><strong>What does the distribution contain?</strong>: The distribution contains information about the likelihood of different values or configurations of the variables in the dataset. For example, in a simple case where X represents the features of a dataset and Y represents the labels, the conditional distribution P(Y|X) would describe the probability of observing a particular label given the input features.</p>
<p><strong>What does the distribution look like?</strong>: Depends on the nature of the data and the relationships between variables. It could take various forms, such as Gaussian (bell-shaped), uniform, exponential, etc., depending on the characteristics of the data.</p>
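<p>As a minimal sketch (with an invented Gaussian example): "learning" the distribution here just means estimating its parameters from samples, after which the model can generate new points:</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" data, drawn from a distribution we pretend not to know.
data = rng.normal(loc=170.0, scale=7.0, size=10_000)

# Learning the distribution = estimating the parameters that define it.
mu_hat = data.mean()
sigma_hat = data.std()

# Once learned, the model can generate new points similar to the data.
new_points = rng.normal(mu_hat, sigma_hat, size=5)
```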
|
https://ai.stackexchange.com/questions/45040/what-does-it-mean-to-learn-a-distribution-and-what-does-it-contain
|
Question: <p>Consider the following statement from the abstract of the paper titled <a href="https://arxiv.org/pdf/1406.2661.pdf" rel="nofollow noreferrer">Generative Adversarial Nets</a></p>
<blockquote>
<p>We propose a new framework for estimating generative models via an
adversarial process, in which we simultaneously train two models: a
generative model <span class="math-container">$G$</span> that <strong>captures the data distribution</strong>, and a
discriminative model <span class="math-container">$D$</span> that estimates the probability that a sample
came from the training data rather than <span class="math-container">$G$</span>.</p>
</blockquote>
<p>Let <span class="math-container">$D'$</span> be a dataset of <span class="math-container">$n$</span> digital and discrete* images, each of size <span class="math-container">$C \times H \times W$</span>. Suppose the generative adversarial network is trained on the dataset <span class="math-container">$D'$</span>.</p>
<p>Our sample space (or data set) is <span class="math-container">$D' = \{I_1, I_2, I_3, \cdots, I_n\}$</span>, where <span class="math-container">$I_j$</span> is the <span class="math-container">$j^{th}$</span> image for <span class="math-container">$1 \le j \le n$</span></p>
<p>The random variables are <span class="math-container">$X_1, X_2, X_3, \cdots, X_{CHW}$</span> where</p>
<p><span class="math-container">$$X_i \in \{a, a+1, a+2, \cdots, b\} =\text{ intensity of }i^{th} \text{ pixel;} \text{ for } 1 \le i \le CHW$$</span></p>
<p>Since we have the dataset <span class="math-container">$D'$</span>, we can calculate the joint distribution <span class="math-container">$p_{data\_set}$</span>, which has <span class="math-container">$(b-a+1)^{CHW}$</span> parameters estimated from the dataset. But the original image distribution <span class="math-container">$p_{ground}$</span> is not, in general, equal to <span class="math-container">$p_{data\_set}$</span>.</p>
<hr />
<p><strong>Simple example:</strong></p>
<p>Suppose I flipped an unbiased coin 100 times and I got 45 heads, 55 tails, then <span class="math-container">$P_{data\_set}(H) = \dfrac{45}{100}$</span> and <span class="math-container">$P_{data\_set}(T) = \dfrac{55}{100}$</span>. So, <span class="math-container">$P_{data\_set} = \{\dfrac{45}{100}, \dfrac{55}{100}\}$</span></p>
<p>but the ground truth probability distribution is <span class="math-container">$P_{ground}(H) = \dfrac{50}{100}$</span> and <span class="math-container">$P_{ground}(T) = \dfrac{50}{100}$</span>. So, <span class="math-container">$P_{ground} = \{\dfrac{50}{100}, \dfrac{50}{100}\}$</span></p>
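<p>The gap between <span class="math-container">$P_{data\_set}$</span> and <span class="math-container">$P_{ground}$</span>, and how it shrinks as the sample grows, can be checked numerically (a quick sketch of my own):</p>

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 flips of a fair coin: the empirical P(H) will typically differ from 0.5.
flips_small = rng.integers(0, 2, size=100)   # 0 = tails, 1 = heads
p_heads_small = flips_small.mean()

# With many more flips, the empirical distribution approaches the ground truth.
flips_big = rng.integers(0, 2, size=1_000_000)
p_heads_big = flips_big.mean()
```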
<hr />
<p>Which distribution is our generator capturing by the end? <strong>Is it the probability distribution calculated based on our dataset <span class="math-container">$p_{data\_set}$</span> of images or the actual probability distribution <span class="math-container">$p_{ground}$</span>?</strong></p>
<p>How to understand the <strong>act of capturing</strong> here? Does it only mean to be behaving (generating instances) in the same way as data distribution?</p>
<hr />
<p>Suppose I design a machine that is capable of generating an equal number of <span class="math-container">$0$</span>'s and <span class="math-container">$1$</span>'s over sufficiently many trials; can I then say that my machine has captured the coin-toss probability distribution?</p>
<hr />
<ul>
<li>Discrete images are images whose pixel values take a finite number of integer values.</li>
</ul>
Answer:
|
https://ai.stackexchange.com/questions/29946/which-probability-distribution-a-generator-in-generative-adversarial-network-ga
|
Question: <p>I am aware that there is a plethora of deep generative models out there (e.g. variational autoencoders (VAE), GANs) that can model high-dimensional data as the images of latent variables under a non-linear mapping (typically neural network).</p>
<p>In more traditional methods such as probabilistic PCA, the latent variables can be marginalised analytically. In Bayesian PCA (BPCA), we can additionally integrate out the linear mapping, from the latent space to the observation space, by adopting the variational lower bound that leads to closed form updates of the parameters. The Gaussian Process Latent Variable (GPLVM) model adopts a non-linear probabilistic mapping (a Gaussian process) that can be marginalised. These two models enjoy to a certain degree analytical solutions concerning the inference of the latent variables and the mapping.</p>
<p>I have been wondering whether there is any research into more "complex" models (perhaps I should call them deep) that are capable of modelling more complex data distributions than the GPVLM and BPCA, but retain analytical solutions when inferring the posterior of the latent variables (like BPCA) or the mapping (like GPLVM)?</p>
<p>What I like about the GPLVM and BPCA is that they possess an objective function that can be analytically optimised, as opposed to the intractable objective of VAEs that necessitates Monte-Carlo averages and stochastic gradient. Could somebody please point me to such examples of more complex generative models that admit analytical inference for working out the posterior of the latent variables or the mapping?</p>
<hr />
<p>Also posted on <a href="https://www.reddit.com/r/MachineLearning/comments/176vgdn/r_pointers_to_deep_latent_variable_models_that/?utm_source=share&utm_medium=web2x&context=3" rel="nofollow noreferrer">reddit</a></p>
Answer:
|
https://ai.stackexchange.com/questions/42418/pointers-to-deep-latent-variable-models-that-admit-analytical-approximations
|
Question: <h3>Background</h3>
<p>I'm an undergraduate student with research interests in a field of physics that has significant overlap with graph theory, and a functioning knowledge of how simple neural nets work and how to build them with TensorFlow and Keras. As many people are, I'm fascinated by the recent advancements in transformer-based language models, and I've spent the last several weeks reading up on them in an attempt to construct my own simple "mini GPT". In doing so I encountered Graph Neural Networks, and decided I'd try instead to construct a generative language model out of these, inspired by the fact that graphs are (in a very hand-wavy sense) perhaps inherently more amenable to encoding relationships like "attention", etc. I'm aware that the task I'm trying to accomplish could probably be much more easily achieved using alternative architectures. This is mostly just a fun "what if" project.</p>
<hr />
<h3>Code</h3>
<p>I still haven't quite wrapped my head around all the details of how and what a Graph Neural Net learns, nor what the different types of GNN layers do. Nevertheless, I've constructed a simple GNN using the <code>Spektral</code> library, which takes an input string, and predicts the rest of the string word-by-word by predicting the most probable next token. Here's what I have so far:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences
from spektral.layers import GCNConv
from spektral.utils import normalized_adjacency
import numpy as np
import random
import os

def get_training_data(training_data_dir):
    filenames = []
    for filename in os.listdir(training_data_dir):
        filenames.append(os.path.join(training_data_dir, filename))
    random.shuffle(filenames)
    lines = []
    for filename in filenames:
        with open(filename, "r") as file:
            for line in file:
                lines.append(line.strip())
    return lines

# Import data
training_corpus = get_training_data("./training_data")
tokens = [line.split() for line in training_corpus]
vocab = set(token for line in tokens for token in line)
vocab_size = len(vocab)
print("Vocabulary size: ", vocab_size, " words")

# Tokenize
word_to_idx = {word: idx for idx, word in enumerate(vocab)}
idx_to_word = {idx: word for word, idx in word_to_idx.items()}

# Pad and truncate sequences to a fixed length
train_data = [[word_to_idx[token] for token in line] for line in tokens]
train_data_padded = pad_sequences(train_data, maxlen=vocab_size, padding='pre', truncating='pre')

# Shift train_data_padded to create train_labels
train_labels = np.roll(train_data_padded, -1, axis=1)
train_labels[train_labels >= vocab_size] = 0
train_data_padded = np.array(train_data_padded)
train_labels = np.array(train_labels)

# Construct token-to-token similarity matrix
similarity_matrix = np.zeros((vocab_size, vocab_size))
for sentence in tokens:
    for i, token1 in enumerate(sentence):
        for j, token2 in enumerate(sentence):
            if i != j:
                num_words_between = pow(abs(j - i), 2)
                similarity_matrix[word_to_idx[token1], word_to_idx[token2]] += num_words_between
adjacency_matrix = normalized_adjacency(similarity_matrix)

# Construct model
input_layer = Input(shape=(None,))
embedding = Embedding(input_dim=vocab_size, output_dim=vocab_size)(input_layer)
gcn_layer = GCNConv(vocab_size)([embedding, adjacency_matrix])
output_layer = Dense(vocab_size, activation='softmax')(gcn_layer)
model = tf.keras.Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Training loop
num_epochs = 100
model.fit(train_data_padded, tf.keras.utils.to_categorical(train_labels, num_classes=vocab_size), epochs=num_epochs)

padding_token = "<PAD>"

def respond_to_text(initial_string):
    initial_tokens = initial_string.split()
    for i in range(len(initial_tokens)):
        if initial_tokens[i] not in word_to_idx:
            initial_tokens[i] = "<PAD>"
    while len(initial_tokens) < vocab_size:
        initial_tokens.insert(0, padding_token)
    generated_tokens = [word_to_idx[token] for token in initial_tokens]
    max_generation_length = 40
    # Number of tokens from the initial string that have been used
    initial_tokens_used = len(initial_tokens)
    for _ in range(max_generation_length):
        current_seq = np.array([generated_tokens[-vocab_size:]])  # Always use the last vocab_size tokens
        next_token_probs = model.predict(current_seq)[0][-1]
        next_token = np.random.choice(np.arange(vocab_size), p=next_token_probs)
        generated_tokens.append(next_token)
        # If there are more initial tokens to use, do that
        if initial_tokens_used < len(initial_tokens):
            generated_tokens[-vocab_size:] = [word_to_idx[token] for token in initial_tokens[initial_tokens_used:]]
            initial_tokens_used = len(initial_tokens)
    # Generate text
    generated_text = [idx_to_word[idx] for idx in generated_tokens]
    # Remove trailing "<PAD>" tokens
    generated_text = [token for token in generated_text if token != padding_token]
    # Join the tokens into text
    generated_text = " ".join(generated_text)
    print("Generated Text:", generated_text)

while True:
    input_string = input()
    respond_to_text(input_string)
</code></pre>
<p>It's relatively simple, thus far. I train the network on strings of text from a subset of the WikiQA corpus, which I pre-process by removing all punctuation and capitalization. I define the elements of the adjacency matrix to be the squared distance between tokens (and 0 between a token and itself). I'm using <code>GCNConv()</code>, admittedly without knowing the intimate details of how it is different from other options provided by <code>Spektral</code>. Since the number of tokens in the input string must be identical to the vocabulary size, I pre-pend it with "<code><PAD></code>" tokens, and only pass in the last <code>vocab_size</code> tokens each time I generate a new token. I deal with unknown tokens by replacing them with "<code><PAD></code>".</p>
<p>If I understand correctly, the GNN learns the "strengths" of connections between nodes (words), i.e. the edge weights and, as I imagine it, the adjacency matrix encodes a sort of very weak form of "attention".</p>
<hr />
<h3>Question</h3>
<ol>
<li><strong>What and how, precisely, is this learning?</strong> I know that in a simple feed-forward neural network, a set of weights "between" perceptrons is learned. Do I understand correctly that what is being learned here are edge weights? How do node features and the adjacency matrix factor into this, and what does the model "do" with some input text? How are the next token probabilities calculated? I only understand this on a very superficial level, sufficient so as to produce this seemingly somewhat functional script. I can see that, when I plot the graph associated with the <code>GCNConv</code> layer using <code>networkx</code> after fitting, words that are similar (e.g. "boat", "ocean", "water") tend to cluster.</li>
<li><strong>How can I improve the model?</strong>* I've spent a fair amount of time reading GNN research papers, however they're written largely in what appears to be very subfield-specific jargon that is unfamiliar to me as someone very familiar with graph theory and somewhat familiar with machine learning. I'd like to begin by taking smaller steps, hopefully starting with some suggestions provided by the community here. I have no strict, "objective" criteria in mind, outside of producing more realistic, human-like text.</li>
</ol>
<p>Here's some example input and output:</p>
<ol>
<li><strong>Input:</strong> "the traffic" -> <strong>Output:</strong> "the traffic was invaluable sense day playing urban evening together made past weekends board plot off color cookies calm concert flowers express eye-opening learn garden outside satisfying laughter movie waves how's sunrise of try traffic scratch day captivating hobby live blanket delicious"</li>
<li><strong>Input:</strong> "i enjoy" -> <strong>Output:</strong> "i enjoy watching magical journaling awe-inspiring buds tail feeling entertainment resist homemade ones flavors soothing well-being laughter life culture cleanup picnics beauty accomplishment nature mood ocean up satisfying magical contagious joy admiring feeling live marathons beauty views things expression hikes next happiness vacations"</li>
</ol>
<p><span class="math-container">$*$</span> Outside of the obvious, e.g. more training data, adjusting hyperparameters, etc.</p>
Answer: <p>First of all, I would like to encourage you to keep trying new things; it sounds super fun! There are a few things I would like to clarify about Graph Neural Networks (GNNs) and Graph Convolutional Networks (GCNs), and then I will answer your questions as best I can. Also, as a side note, there are too many questions here, so don't expect me to answer all of them. Given that, I will try to answer the main ones.</p>
<h2>Adjacency Matrix and Attention Mechanism</h2>
<p>I will address the notion that <strong>"the adjacency matrix encodes a sort of very weak form of attention"</strong>. This notion is roughly correct. In fact, it can be shown that the attention mechanism of the transformer is an instance of the generalized message-passing function of the GNN, where the underlying graph is a complete graph, or in other words, its adjacency matrix is a matrix of all ones. If you continue your research on GNNs, you will come across another famous architecture called the Graph Attention Network (GAT). This is just an adaptation of the attention mechanism of the transformer for simpler, static graph data. If you want to understand how the transformer is a type of GNN, I can explain it to you later.</p>
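<p>To make the complete-graph view concrete, here is a small NumPy sketch (my own illustration, not part of your code) of single-head self-attention written as message passing, where the softmax score matrix plays the role of a dense, input-dependent "adjacency" over a complete graph:</p>

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    # Message passing on a *complete* graph: every token (node) attends
    # to every other token, and the scores act as soft edge weights.
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # dense attention "adjacency"
    return A @ V, A  # aggregated messages, edge weights
```

<p>Each row of <code>A</code> sums to 1 and has no zero entries, i.e. the underlying graph is complete, which is exactly the sense in which a fixed, sparse adjacency matrix is a weaker, hard-coded form of attention.</p>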
<h2>First Question: What Exactly is Being Learned?</h2>
<p>What is learned in a graph neural network depends on the type of network that you are implementing. Generally, there are two types of features that you can input into a GNN (for the forward pass): the feature data and the edge data of each node. The <code>GCNConv</code> layer only implements learning from the feature data (which, in your case, are the embeddings of each token). So, in your code, <code>GCNConv</code> is learning the weights that are multiplied with the embeddings. There are no edge weights being trained, or even initialized, because there is no edge data in the first place.</p>
<p>GCN uses the following formula for calculating the message passing:
<span class="math-container">$
\mathbf{h}_i^{(l+1)} = \sigma \left( \sum_{j \in \mathcal{N}(i) \cup \{i\}} \mathbf{A}_{ij} \mathbf{h}_j^{(l)} \mathbf{W}^{(l)} \right)
$</span></p>
<ul>
<li><p><span class="math-container">$h_i(l)$</span> is the feature vector of node i at layer <span class="math-container">$l$</span>.</p>
</li>
<li><p><span class="math-container">$W(l)$</span> is the weight matrix for layer <span class="math-container">$l$</span>.</p>
</li>
<li><p><span class="math-container">$N(i)$</span> is the set of neighbors of node <span class="math-container">$i$</span>.</p>
</li>
<li><p><span class="math-container">$A_{ij}$</span> is the element at the <span class="math-container">$(i,j)$</span> position of the adjacency matrix
<span class="math-container">$A$</span>. This could be either 1 or 0, indicating whether there is an edge between nodes <span class="math-container">$i$</span> and <span class="math-container">$j$</span>, or it could be a weight if the graph is weighted.</p>
</li>
<li><p><span class="math-container">$\sigma$</span> is an activation function, such as ReLU or Sigmoid.</p>
</li>
</ul>
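<p>To make the formula concrete, here is a minimal NumPy sketch of a single GCN message-passing step (an illustration under stated assumptions: I use ReLU for <span class="math-container">$\sigma$</span>, add self-loops directly, and skip the degree normalization that Spektral's <code>GCNConv</code> applies):</p>

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn_layer(H, A, W):
    # One message-passing step: h_i' = sigma(sum over j in N(i) ∪ {i} of A_ij h_j W).
    # Adding the identity gives each node a self-loop, so it keeps its own features.
    A_hat = A + np.eye(A.shape[0])
    return relu(A_hat @ H @ W)
```

<p>Note that only <code>W</code> is learned; <code>A</code> stays fixed, which is why no edge weights are trained here.</p>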
<p>I would recommend watching this <a href="https://youtu.be/ijmxpItkRjc?list=PLSgGvve8UweGx4_6hhrF3n4wpHf_RV76_" rel="nofollow noreferrer">video</a>, which in my opinion explains GCNs very well if you have any more doubts.</p>
<h2>Second Question: How Can the Model be Improved?</h2>
<p>I can understand how you are feeling about GNNs. I did research on them, and they can be very niche with some terminology. I would recommend that you understand the basics of the GNN architecture very well and then follow up with other types of GNNs. Graph theory is definitely useful, but in my opinion, it only helps you to understand the basics of graphs, and the other things are much more specific to the field of GNNs.</p>
<p>As for the question, sadly, deep learning is not a very interpretable field. This means that sometimes we don't know why some architectures seem to work better than others. For example, the transformer is currently the best model for modeling language. We think this is because of the attention mechanism, which is able to take into account all tokens in context at the same time. However, it is also a very scalable model that does not suffer from the gradient vanishing effect, unlike recurrent neural networks.</p>
<p>Recently, there has been a new type of RNN called <a href="https://arxiv.org/abs/2305.13048" rel="nofollow noreferrer">RWKV</a> that avoids the vanishing gradient problem of earlier RNNs and does not use the attention mechanism. We also have <a href="https://arxiv.org/abs/2111.09509" rel="nofollow noreferrer">RAVEN</a>, which is an example of how we can build models that are just as performant as transformers with RWKV.</p>
<p>What I'm trying to say is that it is very difficult to say what things you can use to improve the accuracy of your model, besides the basics of training with more data and tuning hyperparameters. If I had to try it, I would probably make my GNN resemble the transformer architecture as much as possible, adding attention, residual layers, and so on. However, this is something that you will have to explore and experiment with.</p>
<h2>Final Thoughts</h2>
<p>I don't intend to discourage you; quite the opposite. However, I should note that GCNs may not be best suited for this task. You might also explore standard NLP techniques for boosting accuracy, such as improved tokenization and data quality. For more ambitious improvements, you could look into Reinforcement Learning from Human Feedback (RLHF) techniques.</p>
|
https://ai.stackexchange.com/questions/41921/how-can-i-improve-this-toy-graph-neural-network-generative-language-model
|
Question: <p>I understand why deep generative models like DBN ( deep belief nets ) or DBM ( deep boltzmann machines ) are able to capture underlying structures in data and use it for various tasks ( classification, regression, multimodal representations etc ...).</p>
<p>But for the classification tasks like in <a href="http://www.cs.cmu.edu/~rsalakhu/papers/annrev.pdf" rel="nofollow noreferrer">Learning deep generative models</a>, I was wondering why the network is fine-tuned on labeled-data like a feed-forward network and why only the last hidden layer is used for classification?</p>
<p>During the fine-tuning and since we are updating the weights for a classification task ( not the same goal as the generative task ), could the network lose some of its ability to regenerate proper data? ( and thus to be used for different classification tasks ? )</p>
<p>Instead of using only the last layer, could it be possible to use a partition of the hidden units of different layers to perform the classifications task and without modifying the weights? For example, by taking a subset of hidden units of the last two layers ( sub-set of abstract representations ) and using a simple classifier like an SVM?</p>
<p>Thank you in advance!</p>
Answer: <p>One of the big realizations that deep learning brought in recent years was that we can train the feature extractor and the classifier simultaneously. In fact, most people have stopped separating the two tasks and simply refer to the whole process as training the model.</p>
<p>However, if you dive into any single model architecture, it will always be constructed from a first part, the <strong>feature extractor</strong>, which outputs the embedding (basically the encoded features of the input), and a second part consisting of the final layer of the model, <strong>the classifier</strong>, which uses the embedding to predict the class of the input.</p>
<p>The goal of the first part is to reduce the dimensionality of the input to just the most important features for the final task. The goal of the classifier is to use those features to output the final score/class etc.</p>
<p>This is why usually only this last layer is fine-tuned: we don't want to damage the trained feature extractor, just update the classifier to fit a slightly different distribution.</p>
<p>I'm pretty sure that in your mentioned case, for generation they do not use the classification layer, so updating it shouldn't have any effect on the model's generative abilities.</p>
<p>Regarding your last question: yes, it is possible. Once you have extracted the features with the model, you can use any kind of classifier on them.</p>
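<p>As a dependency-free sketch of that last point (everything here is illustrative: a fixed random projection stands in for the frozen pre-trained feature extractor, and nearest-centroid stands in for any downstream classifier such as an SVM):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((4, 2))  # stand-in for frozen pre-trained weights

def extract_features(X):
    # The "embedding": W_frozen is never updated while the classifier trains.
    return np.tanh(X @ W_frozen)

def fit_centroids(Z, y):
    # Train any simple classifier on the extracted features; nearest-centroid
    # keeps this sketch free of external libraries.
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def predict(Z, centroids):
    labels = list(centroids)
    d = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in labels])
    return np.array([labels[i] for i in d.argmin(axis=0)])
```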
|
https://ai.stackexchange.com/questions/9933/why-is-the-last-layer-of-a-dbn-or-dbm-used-for-classification-task
|
Question: <p>Epistemic uncertainty is uncertainty that arises from a lack of knowledge, for instance in machine learning epistemic uncertainty can be caused by a lack of training data. Estimating epistemic uncertainty is important for useful AI systems, since it allows the AI to "know that it doesn't know", therefore avoiding hallucinations.</p>
<p>While estimating epistemic uncertainty in machine learning classifiers has a clear interpretation, when considering generative models tasked with text generation it is less clear how to evaluate uncertainty, since many text completions can be considered satisfactory. Yet, it is obvious that a good epistemic uncertainty estimator should return a high value when a modest AI model is asked for example to "Solve the Riemann hypothesis" (hard unsolved math problem).</p>
<p>What are the leading methods to estimate Epistemic Uncertainty in Large Language Models?</p>
Answer: <p>In the traditional Monte Carlo dropout approach, by enabling dropout at <em>inference</em> time and performing multiple stochastic forward passes, you can sample different outputs from the same input. The variability across these samples can serve as a proxy for epistemic uncertainty.</p>
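<p>A minimal NumPy sketch of that idea, using a single linear layer with inverted dropout kept active at prediction time (all names here are illustrative; with a real Keras model you would instead call it with <code>training=True</code> at inference):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, n_samples=100, p_drop=0.5):
    # Keep dropout ON at inference: each forward pass randomly masks inputs,
    # so repeated passes disagree. The spread of the predictions is a proxy
    # for epistemic uncertainty.
    preds = []
    for _ in range(n_samples):
        mask = (rng.random(x.shape) > p_drop) / (1.0 - p_drop)  # inverted dropout
        preds.append((x * mask) @ W)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction, uncertainty proxy
```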
<p>For LLMs, recent work on chain-of-thought (CoT) prompting has suggested that if you sample multiple reasoning paths for the same input and then aggregate or measure the variance among them, the diversity can serve as an uncertainty signal. When many diverse CoT completions emerge, the model’s internal state may be less certain about the correct reasoning process. This is why <em>self-consistency</em> in CoT sampling is taken as an indicator of lower epistemic uncertainty. For further reading, see Wang et al. (2023), "<em><a href="https://arxiv.org/pdf/2203.11171" rel="nofollow noreferrer">Self-Consistency Improves Chain-of-Thought Reasoning in Language Models</a></em>".</p>
<blockquote>
<p>This suggests that
one can use self-consistency to provide an uncertainty estimate of the model in its generated solutions.
In other words, one can use low consistency as an indicator that the model has low confidence; i.e.,
self-consistency confers some ability for the model to “know when it doesn’t know”.</p>
</blockquote>
|
https://ai.stackexchange.com/questions/48038/what-are-the-leading-methods-to-estimate-epistemic-uncertainty-in-large-language
|
Question: <p>Consider the following abstract from the research paper titled <a href="https://arxiv.org/pdf/1801.01973.pdf" rel="nofollow noreferrer">A Note on the Inception Score</a> for instance</p>
<blockquote>
<p>Deep generative models are powerful tools that have produced
impressive results in recent years. These advances have been for the
most part empirically driven, making it essential that we use
high-quality <strong>evaluation metrics</strong>. In this paper, we provide new
insights into the Inception Score, a recently proposed and widely used
<strong>evaluation metric</strong> for generative models, and demonstrate that it fails
to provide useful guidance when comparing models. We discuss both
suboptimalities of the metric itself and issues with its application.
Finally, we call for researchers to be more systematic and careful
when evaluating and comparing generative models, as the advancement of
the field depends upon it.</p>
</blockquote>
<p>Here we can observe the usage of word metric several times. In mathematics, the word metric is used only in the context of metric spaces afaik. The <a href="http://www-groups.mcs.st-andrews.ac.uk/%7Ejohn/MT4522/Lectures/L5.html#:%7E:text=A%20metric%20space%20is%20a,(x%2C%20y)%20%3D%200" rel="nofollow noreferrer">definition for metric space</a> and metric is defined as follows</p>
<blockquote>
<p>A metric space is a set <span class="math-container">$X$</span> together with a function <span class="math-container">$d$</span> (called a
metric or "distance function") which assigns a real number <span class="math-container">$d(x, y)$</span>
to every <span class="math-container">$x, y, z$</span> belongs <span class="math-container">$X$</span> satisfying the properties (or axioms):</p>
<ol>
<li><span class="math-container">$d(x, y) \ge 0$</span> and <span class="math-container">$d(x, y) = 0$</span> iff <span class="math-container">$x = y$</span>,</li>
<li><span class="math-container">$d(x, y) = d(y, x)$</span>,</li>
<li><span class="math-container">$d(x, y) + d(y, z) \ge d(x, z).$</span></li>
</ol>
</blockquote>
<p>Do research papers generally use the word metric in the sense of the metric defined above? Or do we need to interpret the word metric less rigorously, just as a measure, like an accuracy?</p>
<p>Note: Although I provided the abstract from a research paper containing the word metric, the question is not restricted to this particular context. This question can be applied to all AI-related research papers that used the word metric, especially in the context of performance or evaluation metrics.</p>
Answer: <p>"Metric" should be understood as "a function of the trained model and of a dataset which returns a number".</p>
<p>For example, in reinforcement learning, one can use as an evaluation metric the total cumulative discounted reward that a policy gets on a set of episodes for a given task, starting from the same initial state <span class="math-container">$S_0$</span>.</p>
<p>That's a function <span class="math-container">$f(\tau_1, \tau_2, ... | \pi, S_0)$</span>, where <span class="math-container">$\tau_1$</span> is the first trajectory obtained by running policy <span class="math-container">$\pi$</span> on the task, <span class="math-container">$\tau_2$</span> the second one, etc. (we assume that <span class="math-container">$\pi$</span> is stochastic, otherwise it doesn't make sense to run multiple episodes). <span class="math-container">$f$</span> is an "evaluation metric" but is certainly not a distance function.</p>
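<p>As a sketch, such a metric is straightforward to compute from logged episodes (the rewards below are made up for illustration):</p>

```python
def discounted_return(rewards, gamma=0.99):
    # Backward accumulation: G_t = r_t + gamma * G_{t+1}
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def evaluation_metric(episodes, gamma=0.99):
    # f(tau_1, tau_2, ... | pi, S_0): average discounted return of the
    # trajectories obtained by running policy pi from S_0.
    return sum(discounted_return(ep, gamma) for ep in episodes) / len(episodes)
```

<p>This maps a trained policy and a set of episodes to a single number, which satisfies the loose sense of "metric" above while clearly not being a distance function.</p>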
|
https://ai.stackexchange.com/questions/35100/should-i-need-to-interpret-the-word-metric-in-performance-metric-rigorously
|
Question: <p>Although the mainstream research is on Generative Adversarial Networks (GANs) using Multi-Layer Perceptrons (MLPs), the original paper titled <a href="https://arxiv.org/pdf/1406.2661.pdf" rel="nofollow noreferrer">Generative Adversarial Nets</a> clearly says, in the abstract, that GANs are possible without MLPs too:</p>
<blockquote>
<p>In the <strong>space of arbitrary functions</strong> <span class="math-container">$G$</span> and <span class="math-container">$D$</span>, a unique solution
exists, with <span class="math-container">$G$</span> recovering the training data distribution and <span class="math-container">$D$</span>
equal to <span class="math-container">$\dfrac{1}{2}$</span> everywhere. In the case where G and D are defined
by multilayer perceptrons, the entire system can be trained with backpropagation.</p>
</blockquote>
<p>Are there any research papers that uses models other than MLPs and are comparatively successful?</p>
Answer:
|
https://ai.stackexchange.com/questions/29947/are-there-any-generative-adversarial-networks-without-multi-layer-perceptrons
|
Question: <p>I see that domain adaptation and transfer learning has been widely adopted in image classification and semantic segmentation analysis. But it's still lacking in providing solutions to enterprise data, for example, solving problems related to business processes?</p>
<p>I want to know what characteristics of the data determine the applicability or non-applicability with respect to generating models for prediction where multiple domains are involved within an enterprise information database?</p>
Answer:
|
https://ai.stackexchange.com/questions/23396/why-is-domain-adaptation-and-generative-modelling-for-knowledge-graphs-still-not
|
Question: <p>My question relates to full-text translators that are not specifically based on LLMs. My current understanding is that the term Generative AI goes beyond LLMs and that the full-text translators (especially those which are based on artificial neural networks) also fall into this category.</p>
<p><a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence" rel="noreferrer">In the Wikipedia article about Generative AI</a>, I could only find the statement that generative AI systems such as ChatGPT are also used for translations. That is quite obvious, but this does not answer the question of whether other full text translators are also commonly referred to as Generative AI.</p>
<p><a href="https://research.ibm.com/blog/what-is-generative-AI" rel="noreferrer">IBM research defines Generative AI as</a></p>
<blockquote>
<p>Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.</p>
</blockquote>
<p>AFAIK DeepL, Google Translate, Bing Translate but also a few not so well-known systems falls clearly into that definition.</p>
<p>My question was triggered by <a href="https://meta.stackoverflow.com/a/427857">an answer on Meta Stack Overflow</a> that contained the following sentence:</p>
<blockquote>
<p>Besides, the banner clearly states: "Answers generated by artificial intelligence tools". Translations aren't generated answers. They're translations.</p>
</blockquote>
<p>with the implicit conclusion that it is clear to everyone that pure translators do not count as generative AI, hence it does not need any further explanation. Other participants in that thread seem to agree to that point of view. However, I think their use of terminology is not the typical use in the field of AI, and I would like to hear what the experts say.</p>
Answer: <p>Generative AI, as defined by IBM research, refers to deep-learning models capable of creating new content, be it text, images, or other media, based on their training data. This definition indeed encompasses models like GPT-3 or GPT-4, which can generate text in various styles and formats, including translations.</p>
<p>However, when it comes to full-text translators like Google Translate, DeepL, or Bing Translate, there's a nuanced difference. These systems are typically based on neural machine translation (NMT) models, a specific application of deep learning tailored for the task of translating text from one language to another. While these NMT systems are indeed 'generative' in the sense that they produce new text in a target language, their primary function is not to create original content but to convert existing content from one language to another as accurately as possible.</p>
|
https://ai.stackexchange.com/questions/43554/do-full-text-translators-such-as-deepl-or-google-translate-fall-under-the-term
|
Question: <p>What I know about CRF is that they are discriminative models, while HMM are generative models, but, in the inference method, both use the same algorithm, that is, the Viterbi algorithm, and forward and backward algorithms.</p>
<p>Does CRF use the same features as HMM, namely transition features and state features? </p>
<p>But in here <a href="https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf" rel="nofollow noreferrer">https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf</a>, CRF has these features Edge-Observation and Node-Observation Features.</p>
<p>What is the difference between transition and state features (HMM) versus Edge-Observation and Node-Observation features (CRF)?</p>
Answer:
|
https://ai.stackexchange.com/questions/13691/what-are-the-differences-between-crf-and-hmm
|
Question: <p>Given that I'm training a generative model, (say a <em>generative adversarial network</em>), and I know that my (real) inputs (let's say vectors <span class="math-container">$\textbf{x} \in \mathbb{R}^n$</span>) satisfy linear constraints of the form e.g. <span class="math-container">$a_1\textbf{x}_1 + \dots a_n\textbf{x}_n =0$</span>, where the coefficients are <em>fixed</em>, is there a way to inject this knowledge during training?</p>
Answer: <p>Maybe a bit too trivial to work out of the box, but I would try to add a component to the adversarial loss based precisely on the given set of coefficients.</p>
<p>Something like:</p>
<p><span class="math-container">$L_{linear}= \frac{\gamma}{k} \prod_{k}A_{k}\hat{x}$</span></p>
<p>which combined with the adversarial loss (assuming minimax, but any other choice is fine as well) would become:</p>
<p><span class="math-container">$L(G, D)=E_{x}[log(D(x))] + E_{z}[(log(1 - D(G(z)))] + \frac{\gamma}{k} \prod_{k}A_{k}\hat{x} $</span></p>
<p>where <span class="math-container">$A_{k}$</span> is a set of fixed coefficients of a hyperplane, and <span class="math-container">$\hat{x}$</span> the vector sampled from the generator. I would use a product operator since the loss should drop to zero when <span class="math-container">$\hat{x}$</span> lies in one of the hyperplanes.
The factor <span class="math-container">$\gamma$</span>, instead, can be used to scale the loss to a value in the range of the generator and discriminator losses.</p>
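<p>A minimal NumPy sketch of the proposed penalty term (the hyperplane coefficients below are just an illustration), which would be added to the generator's loss during training:</p>

```python
import numpy as np

def linear_constraint_penalty(x_hat, A, gamma=1.0):
    # (gamma / k) * prod_k (A_k · x_hat): a product over the k fixed
    # hyperplanes, so the penalty vanishes as soon as x_hat lies on any of them.
    k = A.shape[0]
    return (gamma / k) * np.prod(A @ x_hat)
```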
|
https://ai.stackexchange.com/questions/34762/is-there-a-way-to-inject-linear-constrains-during-gan-training
|
Question: <p>I found the following paragraph from <a href="https://arxiv.org/abs/1906.02691" rel="nofollow noreferrer">An Introduction to
Variational Autoencoders</a> sounds relevant, but I am not fully understanding it.</p>
<blockquote>
<p>A VAE learns stochastic mappings between an observed <span class="math-container">$\mathbf{x}$</span>-space, whose empirical distribution <span class="math-container">$q_{\mathcal{D}}(\mathbf{x})$</span> is typically complicated, and a latent <span class="math-container">$\mathbf{z}$</span>-space, whose distribution can be relatively simple (such as spherical, as in this figure). The generative model learns a joint distribution <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$</span> that is often (but not always) factorized as <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$</span>, with a prior distribution over latent space <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{z})$</span>, and a stochastic decoder <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$</span>. The stochastic encoder <span class="math-container">$q_{\phi}(\mathbf{z} \mid \mathbf{x})$</span>, also called inference model, approximates the true but intractable posterior <span class="math-container">$p_{\theta}(\mathbf{z} \mid \mathbf{x})$</span> of the generative model.</p>
</blockquote>
<p>How is it that the generative model learns a joint distribution <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$</span> in the case of the VAE? I know that learning the weights of the decoder is learning <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$</span></p>
Answer: <p>The VAE models the following directed graphical model (figure 1 from the original VAE paper)</p>
<p><a href="https://i.sstatic.net/BLCT7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BLCT7.png" alt="enter image description here" /></a></p>
<p>So, you have 2 sets of parameters, <span class="math-container">$\boldsymbol{\phi}$</span> and <span class="math-container">$\boldsymbol{\theta}$</span>, and 2 random variables, <span class="math-container">$\mathbf{z}$</span> (latent r.v.) and <span class="math-container">$\mathbf{x}$</span>.</p>
<p>How can you view this graphical model (in figure 1 above) as a generative model?</p>
<ol>
<li><p>First, a sample <span class="math-container">$\mathbf{z}^{(i)}$</span> is generated from the (variational) probability distribution <span class="math-container">$q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$</span></p>
</li>
<li><p>Then, a sample <span class="math-container">$\mathbf{x}^{(i)}$</span> is generated from <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z}^{(i)})$</span></p>
</li>
</ol>
<p>Now, more concretely, let's assume that we have a dataset <span class="math-container">$\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$</span> of <span class="math-container">$N$</span> i.i.d. samples of the random variable <span class="math-container">$\mathbf{x}$</span>. So, each of these <span class="math-container">$\mathbf{x}^{(i)}$</span> has been generated as follows</p>
<ol>
<li><p>a sample <span class="math-container">$\mathbf{z}^{(i)}$</span> is generated from some prior distribution <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{z})$</span></p>
</li>
<li><p>a sample <span class="math-container">$\mathbf{x}^{(i)}$</span> is generated from some conditional distribution <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z}^{(i)})$</span>.</p>
</li>
</ol>
<p>Now, in the VAE, the encoder represents <span class="math-container">$q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$</span>, while the decoder represents <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$</span>. If you want to train a VAE, you also need to make an assumption about <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{z})$</span>, for example, you can assume <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{z}) = \mathcal{N}(\mathbf{z} ; \mathbf{0}, \mathbf{I})$</span>. Once trained, you use the variational distribution (encoder) as the prior from which you sample <span class="math-container">$\mathbf{z}^{(i)}$</span> (although we train the VAE as if the variational distribution is an approximation of the true/unknown/intractable posterior given a usually fixed prior and the likelihood/decoder), in order to sample <span class="math-container">$\mathbf{x}^{(i)}$</span>. This is not wrong. In fact, this is just how you usually do Bayesian statistics. You have a prior and a likelihood (and maybe a marginal), then you compute the posterior, which then becomes the new prior. So, if you had more data, you could learn a new variational distribution <span class="math-container">$q'_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$</span> by assuming that your new prior is <span class="math-container">$q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$</span>.</p>
<p>If you keep in mind the following equation</p>
<p><span class="math-container">$$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$$</span></p>
<p>it should remind you why the VAE can be viewed as a <a href="https://en.wikipedia.org/wiki/Generative_model" rel="nofollow noreferrer">generative model</a>.</p>
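<p>That two-step generative process, i.e. the factorization <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$</span> in action, can be sketched as ancestral sampling (the decoder below is a made-up stand-in for a trained network that would output the parameters of <span class="math-container">$p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$</span>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, data_dim = 2, 4
W_dec = rng.standard_normal((latent_dim, data_dim))  # stand-in for trained decoder weights

def decoder_mean(z):
    # Hypothetical decoder: in a real VAE this is a neural network.
    return np.tanh(z @ W_dec)

# Ancestral sampling from p(x, z) = p(z) p(x|z):
z = rng.standard_normal((5, latent_dim))  # step 1: z ~ p(z) = N(0, I)
x = decoder_mean(z)                       # step 2: x from p(x | z)
```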
|
https://ai.stackexchange.com/questions/32161/how-does-the-vae-learn-a-joint-distribution
|
Question: <p>Let us assume that I am working on a dataset of black and white dog images.</p>
<p>Each image is of size <span class="math-container">$28 \times 28$</span>.</p>
<p>Now, I can say that I have a sample space <span class="math-container">$S$</span> of all possible images. And <span class="math-container">$p_{data}$</span> is the probability distribution for dog images. It is easy to understand that all other images get a probability value of zero. And it is obvious that <span class="math-container">$n(S)= 2^{28 \times 28}$</span>.</p>
<p>Now, I am going to design a generative model that sample from <span class="math-container">$S$</span> using <span class="math-container">$p_{data}$</span> rather than random sampling.</p>
<p>My generative model is a neural network that takes random noise (say, of length 100) and generates an image of the size <span class="math-container">$28 \times 28$</span>. My model is learning a function <span class="math-container">$f$</span>, which is totally different from the function <span class="math-container">$p_{data}$</span>. It is because of the reason that <span class="math-container">$f$</span> is from <span class="math-container">$R^{100}$</span> to <span class="math-container">$S$</span> and <span class="math-container">$p_{data}$</span> is from <span class="math-container">$S$</span> to <span class="math-container">$[0,1]$</span>.</p>
<p>In the literature, I often read the phrases that <strong>our generative model learned <span class="math-container">$p_{data}$</span></strong> or <strong>our goal is to get <span class="math-container">$p_{data}$</span></strong>, etc., but in fact, they are trying to learn <span class="math-container">$f$</span>, which just obeys <span class="math-container">$p_{data}$</span> while giving its output.</p>
<p>Am I going wrong anywhere or the usage in literature is somewhat random?</p>
Answer: <p>You're right! The generative model <span class="math-container">$f$</span> is not the same as the probability density (p.d.f.) function <span class="math-container">$p_{data}$</span>. The kind of phrases you've referred to are to be interpreted informally. You learn <span class="math-container">$f$</span> with the hope that sampling a latent vector <span class="math-container">$z$</span>
from some known distribution (from which it is easy to sample), results in <span class="math-container">$f(z)$</span> that has the probability density function <span class="math-container">$p_{data}$</span>. However, merely learning <span class="math-container">$f$</span> does not give you the power to estimate what <span class="math-container">$p_{data}(x)$</span> is for some image <span class="math-container">$x$</span>. Learning <span class="math-container">$f$</span> only gives you the power to sample according to <span class="math-container">$p_{data}(\cdot)$</span> (if you've learned an accurate such <span class="math-container">$f$</span>).</p>
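<p>The asymmetry is easy to see in code. Assuming a hypothetical learned <code>f</code> (here just a fixed nonlinearity standing in for a trained generator), sampling from the induced distribution is one line, while evaluating <span class="math-container">$p_{data}(x)$</span> for a given <span class="math-container">$x$</span> is not available at all:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # Hypothetical learned generator R^2 -> S: a nonlinear push-forward.
    return np.tanh(z) * 3.0

# Sampling according to the induced distribution is trivial:
zs = rng.standard_normal((1000, 2))
xs = f(zs)   # samples distributed as the push-forward of N(0, I) through f

# But f gives no direct handle on p_data(x): evaluating the density of a
# given x would require inverting f and a change-of-variables Jacobian,
# neither of which the generator provides.
print(xs.shape)   # → (1000, 2)
```

<p>This is precisely the distinction in the answer: an accurate <span class="math-container">$f$</span> lets you sample according to <span class="math-container">$p_{data}(\cdot)$</span>, not evaluate it.</p>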
|
https://ai.stackexchange.com/questions/25557/confusion-between-function-learned-and-the-underlying-distribution
|
Question: <p>In the research paper titled <a href="https://arxiv.org/pdf/1411.1784.pdf" rel="nofollow noreferrer">Conditional Generative Adversarial Nets</a> by <em>Mehdi Mirza and Simon Osindero</em>, there is a notion of conditioning a neural network on a class label.</p>
<p>It is mentioned in the abstract that we need to <strong>simply</strong> feed extra input <span class="math-container">$y$</span> to the generator and discriminator of an unconditional GAN.</p>
<blockquote>
<p>Generative Adversarial Nets were recently introduced as a novel
way to train generative models. In this work we introduce the
conditional version of generative adversarial nets, which can be
<strong>constructed by simply feeding the data, <span class="math-container">$y$</span>, we wish to condition on to
both the generator and discriminator.</strong> We show that this model can
generate MNIST digits conditioned on class labels. We also illustrate
how this model could be used to learn a multi-modal model, and provide
preliminary examples of an application to image tagging in which we
demonstrate how this approach can generate descriptive tags which are
not part of training labels.</p>
</blockquote>
<p>So, I cannot see whether there is any special treatment for input <span class="math-container">$y$</span>.</p>
<p>If there is no special treatment for the data <span class="math-container">$y$</span>, then why do they call <span class="math-container">$y$</span> a condition and follow the notation of conditional probability such as <span class="math-container">$G(z|y), D(x|y)$</span> instead of <span class="math-container">$G(z,y), D(x,y)$</span>?</p>
<p>If there is a special treatment to input <span class="math-container">$y$</span>, then what is that special? Don't they pass <span class="math-container">$y$</span> in the same way as <span class="math-container">$x$</span> to the neural networks?</p>
Answer:
|
https://ai.stackexchange.com/questions/30025/is-there-any-difference-between-input-and-conditional-input-in-the-case-of-n
|
Question: <p>The <a href="https://wiseodd.github.io/techblog/2016/12/17/conditional-vae/" rel="nofollow noreferrer">Conditional Variational Autoencoder (CVAE)</a>, introduced in the paper <a href="https://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf" rel="nofollow noreferrer">Learning Structured Output Representation using Deep Conditional Generative Models</a> (2015), is an extension of <a href="https://arxiv.org/abs/1312.6114" rel="nofollow noreferrer">Variational Autoencoder (VAE)</a> (2013). In VAEs, we have no control over the data generation process, something problematic if we want to generate some specific data. Say, in MNIST, generate instances of 6.</p>
<p>So far, I have only been able to find CVAEs that can condition to discrete features (classes). Is there a CVAE that allows us to condition to continuous variables, kind of a stochastic predictive model?</p>
Answer: <p>Whether a discrete or continuous class, you can model it the same.</p>
<p>Denote the encoder <span class="math-container">$q$</span> and the decoder <span class="math-container">$p$</span>. Recall the variational autoencoder's goal is to minimize the <span class="math-container">$KL$</span> divergence between <span class="math-container">$q$</span> and <span class="math-container">$p$</span>'s posterior. i.e. <span class="math-container">$\min_{\theta, \phi} \ KL(q(z|x;\theta) || p(z|x; \phi))$</span> where <span class="math-container">$\theta$</span> and <span class="math-container">$\phi$</span> parameterize the encoder and decoder respectively. To make this tractable this is generally done by using the Evidence Lower Bound (because it has the same minimum) and parametrizing <span class="math-container">$q$</span> with some form of reparametrization trick to make sampling differentiable.</p>
<p>Now your goal is to condition the sampling. In other words, you are looking to model <span class="math-container">$p(x|z, c;\phi)$</span>, which in turn will once again require <span class="math-container">$q(z|x, c; \theta)$</span>. Your goal intuitively becomes <span class="math-container">$\min_{\theta, \phi} \ KL(q(z|x, c;\theta) || p(z|x, c; \phi))$</span>. This is again transformed into the ELBO for tractability purposes. In other words, your loss becomes <span class="math-container">$E_q[\log \ p(x|z,c)] - KL(q(z|x,c)||p(z|c))$</span>.</p>
<p><strong>Takeaway:</strong> Conditioning doesn't change much: just embed your context and inject it into both the encoder and decoder; the fact that it's continuous doesn't change anything. For implementation details, people normally just project/normalize the condition and concatenate it to some representation of <span class="math-container">$x$</span> in both the encoder and decoder.</p>
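<p>A minimal sketch of that concatenation trick, with linear maps standing in for the encoder and decoder networks (all names and dimensions here are illustrative assumptions, not from the paper):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, c, W):
    # q(z | x, c): the condition c is simply concatenated to the input.
    return W @ np.concatenate([x, c])

def decode(z, c, V):
    # p(x | z, c): the same trick on the decoder side.
    return V @ np.concatenate([z, c])

x_dim, c_dim, z_dim = 784, 1, 8
W = rng.standard_normal((z_dim, x_dim + c_dim)) * 0.01
V = rng.standard_normal((x_dim, z_dim + c_dim)) * 0.01

x = rng.standard_normal(x_dim)
c = np.array([0.7])   # a continuous condition, e.g. an angle or intensity
z = encode(x, c, W)
x_hat = decode(z, c, V)
print(z.shape, x_hat.shape)   # → (8,) (784,)
```

<p>Nothing about <code>c</code> being continuous rather than a one-hot class label changes the wiring.</p>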
|
https://ai.stackexchange.com/questions/13698/is-there-a-continuous-conditional-variational-auto-encoder
|
Question: <p>I have come across <a href="https://arxiv.org/pdf/2110.07375.pdf" rel="nofollow noreferrer">this</a> research paper where a Variational Autoencoder is used to map multiple styles from reference images to a linear latent space and then transfer the style to another image like this:
<a href="https://i.sstatic.net/Z77hf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z77hf.png" alt="enter image description here" /></a></p>
<p>What I don't understand from the paper is how does the Variation module learn the style of the reference images as opposed to the content?</p>
<p>And since VAEs are generative models, is it possible to generalise this to generate styles from the trained variation model by feeding it random noise instead of the encoded reference images?</p>
Answer:
|
https://ai.stackexchange.com/questions/34515/how-can-a-vae-learn-to-generate-a-style-for-neural-style-transfer
|
Question: <p>I have been reading about <a href="https://deepai.org/machine-learning-glossary-and-terms/autoregressive-model" rel="nofollow noreferrer">autoregressive models</a>. Based on what I've read, it seems to me that all autoregressive models use <a href="https://www.reddit.com/r/deeplearning/comments/cgqpde/what_is_ancestral_sampling/" rel="nofollow noreferrer">ancestral sampling</a>. For instance, <a href="http://proceedings.mlr.press/v32/gregor14.pdf" rel="nofollow noreferrer">this</a> paper says the following in <strong>Abstract</strong>:</p>
<blockquote>
<p>We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling.</p>
</blockquote>
<p>However, what I don't understand is why (as I understand it) all autoregressive models use ancestral sampling. Why is ancestral sampling used in autoregressive models?</p>
Answer: <p>My understanding is that the answer to this question is basically 'ancestral sampling is used in autoregressive models because it fits well the structure/dynamics of autoregressive models (the ancestor-descendent relationship, etc.)'. It's not a very satisfying answer, but my understanding is that it's correct.</p>
<p>If anyone has a better answer, feel free to post.</p>
|
https://ai.stackexchange.com/questions/28384/why-is-ancestral-sampling-used-in-autoregressive-models
|
Question: <p>Consider the following two paragraphs taken from the paper titles <strong><a href="https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf" rel="nofollow noreferrer">Generative Adversarial Nets</a></strong> by <em>Ian J. Goodfellow et.al</em></p>
<p>#1: <strong>Abstract</strong></p>
<blockquote>
<p>We propose a new framework for estimating generative models via an
adversarial process, in which we simultaneously train two models: a
generative model G that captures the <strong>data distribution</strong>, and a
discriminative model D that estimates the probability that a sample
came from the training data rather than G. The training procedure for
G is to maximize the probability of D making a mistake. This framework
corresponds to a minimax two-player game. In the space of arbitrary
functions G and D, a unique solution exists, with G recovering the
training <strong>data distribution</strong> and D equal to 1.......</p>
</blockquote>
<p>#2: <strong>Excerpt from Introduction</strong></p>
<blockquote>
<p>The promise of deep learning is to discover rich, hierarchical models
that represent <strong>probability distributions</strong> over the kinds of data
encountered in artificial intelligence applications, such as natural
images, audio waveforms containing speech, and symbols in natural
language corpora.</p>
</blockquote>
<p>here we saw the word <strong>distribution</strong> thrice. It is very common to encounter the phrase <em>data distribution</em> in machine learning papers. For the types of data we use in artificial intelligence, there can be infinite random samples. We collect some instances and form a dataset, based on which we try to get the data distribution.</p>
<p>Even most of the literature uses the phrases <em>data distribution</em> or <em>probability distribution</em>. I personally never came across papers that explicitly talk about the random vector for the probability distribution under consideration. What can be the reason for it? Why won't research papers give the list of random variables for the distribution they are discussing? Is it immaterial or is it obvious from the context?</p>
<hr />
<p>Note: For example, if our discussion is about a 'human face images' dataset and if we use the phrase 'data distribution' then should I assume random variables to be 'pixels' of the image or high-level features like 'eyes, ears, etc.,' or some other? Or is it immaterial? If immaterial, then what should I need to perceive about probability distribution? Should I imagine it as an arbitrary random vector?</p>
Answer: <p>The main reason for using the term <em>data distribution</em> rather than naming the <em>random variable</em> is to emphasize the intrinsic relationship the different data samples have with each other.</p>
<blockquote>
<p>It is a mathematical way of saying this data is almost random
(consider the average pixel on a <em>human face dataset</em>) but it has some
underlying relationship (the concept of <em>human faces</em>).</p>
</blockquote>
<p>This is also helpful to describe how different images can look similar but represent different underlying concepts. For example, consider two datasets: <em>human faces</em>, <span class="math-container">$x_{human}$</span>, and animal faces, <span class="math-container">$x_{animal}$</span>. Although most of the images might look similar at a pixel level (similar colors, similar textures, similar backgrounds...), the underlying concept depicted in them differs: <em>human faces</em> vs <em>animal faces</em>.</p>
<p>So, how do you represent that mathematically?: <strong>with two overlapping distributions of different shape</strong>.</p>
<p><a href="https://i.sstatic.net/ub7nD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ub7nD.png" alt="enter image description here" /></a></p>
<p>This basically means: <em>"even though the images might seem random, and might overlap in some domains (color, textures, histogram...), there is an intrinsic difference between the two"</em>, and this difference (the shape of the distribution) is the intrinsic concept that the neural network will have to learn.</p>
<p>In the specific case of GANs the left distribution would be the "Generated Faces" and this distribution will try to fit the right distribution that would be "Real Faces".</p>
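<p>The "overlapping distributions of different shape" idea can be made concrete with a toy 1-D example: two Gaussians standing in for the two datasets that overlap heavily in raw values yet differ as distributions, measured here by their closed-form KL divergence (the numbers are purely illustrative):</p>

```python
import numpy as np

# Two 1-D Gaussians standing in for "human faces" and "animal faces":
# similar means (heavy overlap) but different shapes.
mu1, s1 = 0.0, 1.0
mu2, s2 = 0.3, 1.5

# Closed-form KL(N1 || N2): zero iff the distributions are identical,
# positive here despite the overlap, capturing the intrinsic difference.
kl = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
print(kl)
```

<p>A positive divergence despite near-total overlap in samples is exactly the situation in the figure above.</p>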
|
https://ai.stackexchange.com/questions/36296/what-should-be-taken-as-random-variables-in-the-distributions-of-datasets
|
Question: <p>Most machine learning models, such as multilayer perceptrons, require a fixed-length input and output, but generative (pre-trained) transformers can produce sentences or full articles of variable length. How is this possible?</p>
Answer: <p>In short, repetition with feedback.</p>
<p>You are correct that machine learning (ML) models such as neural networks work with fixed dimensions for input and output. There are a few different ways to work around this when desired input and output are more variable. The most common approaches are:</p>
<ul>
<li><p><strong>Padding:</strong> Give the ML model capacity to cope with the largest expected dimensions then pad inputs and filter outputs as necessary to match logical requirements. For example, this might be used for an image classifier where the input image varies in size and shape.</p>
</li>
<li><p><strong>Recurrent models:</strong> Add an internal state to the ML model and use it to pass data along with each input, in order to work with <em>sequences</em> of identical, related inputs or outputs. This is a preferred architecture for natural language processing (NLP) tasks, where LSTMs, GRUs, and transformer networks are common choices.</p>
</li>
</ul>
<p>A recurrent model relies on the fact that each input and output is the same kind of thing, at a different point in the sequence. The internal state of the model is used to combine information between points in the sequence so that for example a word in position three in the input has an impact on the choice of the word at position seven of the output.</p>
<p>Generative recurrent models often use their own output (or a sample based on the probabilities expressed in the output) as the next step's input.</p>
<p>It is well worth reading this blog for an introduction and some examples: <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow noreferrer">The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy</a></p>
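<p>The feedback loop described above can be sketched in a few lines. The <code>next_token_probs</code> function here is a hypothetical stand-in for a trained model (it returns random probabilities), but the loop structure is the real point: the output length is decided at run time, not by the architecture.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<eos>", "the", "cat", "sat"]

def next_token_probs(prefix):
    # Hypothetical stand-in for a trained model: a distribution over
    # the vocabulary given the tokens generated so far.
    logits = rng.standard_normal(len(vocab))
    p = np.exp(logits)
    return p / p.sum()

# Generation: each sampled token is fed back as input for the next step,
# and generation stops when the model emits an end-of-sequence token.
tokens = []
for _ in range(20):  # hard cap as a safety net
    tok = rng.choice(vocab, p=next_token_probs(tokens))
    if tok == "<eos>":
        break
    tokens.append(tok)
print(tokens)
```

<p>Padding, by contrast, would fix the output size up front; the loop above is what lets generative transformers produce anything from one token to a full article.</p>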
|
https://ai.stackexchange.com/questions/26531/how-are-certain-machine-learning-models-able-to-produce-variable-length-outputs
|
Question: <p>I'm new to diffusion models so I'm trying to familiarize myself with the theory.</p>
<p>In the article <a href="https://arxiv.org/abs/2011.13456" rel="nofollow noreferrer">Score-Based Generative Modeling through Stochastic Differential Equations</a> (Song and al.), it's explained that we need to solve the reverse-time SDE to obtain samples from image distribution <span class="math-container">$p_{0}$</span>:</p>
<p><span class="math-container">$$\text d \mathbf{x} = [\mathbf{f}(\mathbf{x},t) − g(t)^{2} \nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})]dt+g(t) \text d \overline{\mathbf{w}}$$</span></p>
<p>Thus, we need to estimate the score <span class="math-container">$\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})$</span> to solve the previous equation and we then train a neural network to predict it (with score matching, slice score matching, etc).</p>
<p>However, in practice, I've seen many codes (event recent ones) training their neural networks to predict the noise <span class="math-container">$\varepsilon$</span> knowing <span class="math-container">$\mathbf{x}_{t}$</span> and <span class="math-container">$t$</span> (like in <a href="https://arxiv.org/abs/2006.11239" rel="nofollow noreferrer">DDPM</a>).</p>
<p>So I'm trying to understand the connexion between the noise <span class="math-container">$\varepsilon$</span> and the score <span class="math-container">$\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})$</span>. I know that for Gaussian transition kernels the training objective is the same (up to a coefficient, see the first article), but it seem very restrictive as the Gaussian transition kernel assumption is only valid for affine drift coefficient <span class="math-container">$\mathbf{f}(\mathbf{x},t)$</span>.</p>
<p>Thanks for your help, <br>
Pepper08</p>
Answer: <p>The loss function for noise prediction in DDPMs is typically the mean squared error between the true noise and the predicted noise, formally <span class="math-container">$\mathbb{E}_{\mathbf{x}_0,ϵ,t}[∥ϵ−\hat{ϵ}_θ(\mathbf{x}_t,t)∥^2]$</span>, which turns out to be equivalent (up to a weighting) to the score-matching objective <em>only</em> when the drift term <span class="math-container">$\mathbf{f}(\mathbf{x},t)$</span> is affine, implying a Gaussian transition kernel, as you rightly noted. Concretely, for a Gaussian kernel with <span class="math-container">$\mathbf{x}_t = \alpha_t \mathbf{x}_0 + \sigma_t \boldsymbol{\varepsilon}$</span>, the conditional score is <span class="math-container">$\nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t \mid \mathbf{x}_0) = -\boldsymbol{\varepsilon}/\sigma_t$</span>, so predicting the noise is predicting the score up to the known factor <span class="math-container">$-1/\sigma_t$</span>.</p>
<p>Predicting noise often provides a more stable training objective, since the noise target is standardized and well-scaled, especially when using Gaussian transitions with a fixed noise schedule. Predicting the noise also corresponds directly to the process of denoising, making it intuitive and aligning with the iterative denoising steps used in reverse diffusion. Even though the Gaussian-kernel assumption is somewhat restrictive, training with noise prediction has empirically been found to yield high-quality, realistic samples, so it remains a popular choice in practice.</p>
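<p>A minimal sketch of one step of this objective, assuming the standard DDPM forward kernel <span class="math-container">$\mathbf{x}_t = \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\varepsilon}$</span> (the <code>eps_model</code> is a hypothetical placeholder for a trained network):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_model(x_t, t):
    # Hypothetical placeholder for a trained noise predictor eps_hat(x_t, t).
    return 0.1 * x_t

# One evaluation of the noise-prediction loss E ||eps - eps_hat(x_t, t)||^2
# for the Gaussian forward kernel x_t = sqrt(a_bar) x_0 + sqrt(1 - a_bar) eps.
x0 = rng.standard_normal(784)
t = 10
alpha_bar_t = 0.5   # illustrative value from a fixed noise schedule
eps = rng.standard_normal(784)
x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1 - alpha_bar_t) * eps

loss = np.mean((eps - eps_model(x_t, t)) ** 2)

# For this kernel the conditional score is recoverable from the noise:
# grad_x log p_t(x_t | x_0) = -eps / sqrt(1 - alpha_bar_t)
score = -eps / np.sqrt(1 - alpha_bar_t)
print(loss, score.shape)
```

<p>The last two lines are the connection in question: once the model predicts <span class="math-container">$\boldsymbol{\varepsilon}$</span>, the score is obtained by a deterministic rescaling.</p>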
|
https://ai.stackexchange.com/questions/47142/connexion-between-noise-and-score-in-diffusion-models
|
Question: <p>In programming, if a new language could be improved by the language itself, it’s call self-hosting or bootstrapping.</p>
<p>To develop generative AI, there’s some steps, data preparing, model training, fine tuning. Is it possible to use AI it self to help with these steps and make big improvement in efficiency?</p>
Answer: <p>Let me see if this helps-</p>
<p>I believe what you request is an understanding of whether or not Language models generate content to train themselves or other SOTA models.</p>
<p>In short, yes: language models can automate tasks like data pre-processing and data preparation, and given the right scaffolding (consider OpenAI's function calling feature), they can even make decisions about writing the code to train themselves in an optimal manner by arguing with themselves.</p>
<p>You might be aware of GANs, which kind of do that already (two networks improve each other without much human intervention).</p>
<p>Consider all the prompt styles being researched and used: Chain-of-Thought, tree-based Chain-of-Thought, etc. If we can have an LLM reason about its own outputs, why can we not have it engineer textual data for us? It is coming; if you search, you will find research in this direction. <a href="https://www.amazon.science/blog/using-large-language-models-llms-to-synthesize-training-data" rel="nofollow noreferrer">Here</a> is an example.</p>
|
https://ai.stackexchange.com/questions/41039/could-generative-ai-bootstrap
|
Question: <p>When looking at the deep learning courses offered by top universities in the United States that are available online (not MOOCs, but actual classes), a few schools still cover (Restricted) Boltzmann Machines, although they are not many. However, I noticed that these techniques are not frequently applied nowadays. It could serve as a basic introduction to generative models, but I think it would be fine to go straight to VAEs and GANs.</p>
<p>In this context, is there an educational, academic, or practical reason to teach concepts related to (Restricted) Boltzmann Machines?</p>
Answer:
|
https://ai.stackexchange.com/questions/46048/do-we-still-need-to-learn-about-boltzmann-machines
|
Question: <p>Generative AI is being used to create amazing art; first through paid services like Midjourney and now also with free, open source alternatives like Stable Diffusion. Now you can even generate art in a particular style, first with Google's Dreambooth and later with open-source implementations of the same.</p>
<p>Is there a Generative AI program for audio that allows you to create a model/style of a particular voice exemplar?</p>
<p>I am looking to train a voice model in a particular style using the aural equivalent of Dreambooth, and then apply that voice model/style to written text. Ideally I could review several options, and then pick certain ones to expand upon and later upscale.</p>
<p>I have researched this extensively but all of the voice generation services I have found are 1) paid 2) closed source 3) don't allow you to train the software on specific audio samples or 4) don't sound very natural at all.</p>
<p>Are there modern generative AI services for text-to-audio?</p>
Answer: <p>You can check <a href="https://github.com/Harmonai-org/sample-generator" rel="nofollow noreferrer">sample-generator</a>, an implementation of stable diffusion for audio data from <a href="https://www.harmonai.org/" rel="nofollow noreferrer">Harmonai</a>. I've been playing with it recently and I can say it works pretty well out of the box. With some hacks I was also able to reduce the model size and run training on CPU.</p>
<p>And if you have the proper gears they also provide pretrained weights of some of their models.</p>
|
https://ai.stackexchange.com/questions/37734/are-there-free-and-open-source-audio-versions-of-generative-ai-programs-like-sta
|
Question: <p>I'm trying to gain some intuition beyond definitions, in any possible dimension. I'd appreciate references to read.</p>
Answer: <p>The intuition that I have about these is that generative models go "from abstract to concrete", whereas discriminative models go "from concrete to abstract".</p>
<p>For example: Detecting if a photo has a cat or not is about going from the photo i.e concrete to the abstract concept of a cat. Whereas generating a photo of a cat given some abstract properties about the cat is going from abstract to concrete.</p>
|
https://ai.stackexchange.com/questions/2106/how-can-one-intuitively-understand-generative-v-s-discriminative-models-specifi
|
Question: <p>I am interested in what insights can be gained about the mathematical class of auto-regressive encoder-decoders (LLMs), by comparing them to topological neural networks.</p>
<p>Specifically, I am looking for similarities and differences in their structures, behaviors, and mathematical properties.</p>
<p>In the context of this question, an LLM is a type of neural network that is designed to generate sequences of data, such as sentences in a language. It does this by learning to predict the next item in a sequence based on the previous items.</p>
<p>Any insights, references, or resources that could help clarify this would be greatly appreciated.</p>
<p>References:</p>
<ol>
<li><a href="https://dev.to/kayis/building-an-autoencoder-for-generative-models-3e1i" rel="nofollow noreferrer">Building an Autoencoder for Generative Models</a></li>
</ol>
Answer:
|
https://ai.stackexchange.com/questions/41030/comparing-auto-regressive-encoder-decoders-and-topological-neural-networks
|