category | title | question_link | question_body | answer_html | __index_level_0__
|---|---|---|---|---|---|
transformer models
|
Can you use transformer models to do autocomplete tasks?
|
https://ai.stackexchange.com/questions/21277/can-you-use-transformer-models-to-do-autocomplete-tasks
|
<p>I've researched online and seen many papers on the use of RNNs (like LSTMs or GRUs) for autocomplete in, say, a search engine, character by character. That makes sense, since an RNN inherently predicts character by character in a sequential manner.</p>
<p>Would it be possible to use the transformer architecture instead to do search autocomplete? If so, how might such a model be adapted? </p>
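A decoder-only (causal) transformer can serve autocomplete by repeatedly predicting the next character given the typed prefix. A minimal sketch of the greedy completion loop follows; the `next_char_scores` stub is hypothetical and stands in for a trained causal transformer:

```python
def next_char_scores(prefix: str) -> dict:
    # Hypothetical stub standing in for a trained causal transformer.
    # Here it deterministically continues the word "transformers".
    target = "transformers"
    if len(prefix) < len(target) and target.startswith(prefix):
        return {target[len(prefix)]: 1.0}
    return {"<eos>": 1.0}

def autocomplete(prefix: str, max_len: int = 20) -> str:
    # Greedy decoding: append the highest-scoring next character
    # until the model emits an end-of-sequence token.
    out = prefix
    for _ in range(max_len):
        scores = next_char_scores(out)
        best = max(scores, key=scores.get)
        if best == "<eos>":
            break
        out += best
    return out

print(autocomplete("trans"))  # -> "transformers"
```

With a real model, `next_char_scores` would be one forward pass over the prefix; the surrounding loop is the same.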
| 112
|
|
transformer models
|
How do transformer models like GPT generate valid, meaningful responses for meaningless garbage input?
|
https://ai.stackexchange.com/questions/39681/how-does-transformer-models-like-gpt-generate-valid-meaningful-response-for-mean
|
<p>My understanding of a transformer model is that it uses the given input to compute attention over the relatedness of word meanings and generates a meaningful response based on that meaning. But if the given sentence has no meaning, won't the model fail to capture any meaningful input, so that the output is also meaningless? How does ChatGPT generate a meaningful response, asking me what I meant, when the input is uncoordinated and meaningless? Is that a feature, or did they train on that kind of input specifically?<br />
e.g. Input: "Jumps the dog lazy fox over quick brown the."<br />
For this input, ChatGPT asks for clarification.<br />
Normally, for a self-attention based model, there won't be any relation between the input words in this example, so shouldn't the output also be garbage-like?</p>
|
<p>If you give a human input that doesn't seem to convey any meaning, they will probably ask you for clarification. Presumably there are many examples of this in the ChatGPT training data, so that is exactly what the model does too.</p>
| 113
|
transformer models
|
Transformer model is very slow and doesn't predict well
|
https://ai.stackexchange.com/questions/30379/transformer-model-is-very-slow-and-doesnt-predict-well
|
<p>I created my first transformer model, after having worked so far with LSTMs. I created it for multivariate time series predictions - I have 10 different meteorological features (temperature, humidity, windspeed, pollution concentration a.o.) and with them I am trying to predict time sequences (24 consecutive values/hours) of air pollution. So my input has the shape <code>X.shape = (75575, 168, 10)</code> - 75575 time sequences, each sequence contains 168 hourly entries/vectors and each vector contains 10 meteo features. My output has the shape <code>y.shape = (75575, 24)</code> - 75575 sequences each containing 24 consecutive hourly values of the air pollution concentration.</p>
<p>I took as a model an <a href="https://keras.io/examples/timeseries/timeseries_transformer_classification/" rel="nofollow noreferrer">example</a> from the official Keras site. It was created for classification problems; I only removed the <code>softmax</code> activation, set the number of neurons in the last dense layer to 24, and hoped it would work. It runs and trains, but it makes worse predictions than the LSTMs I have used on the same problem and, more importantly, it is very slow: 4 min/epoch. Below I attach the model, and I would like to know:</p>
<p>I) Have I done something wrong in the model? Can the accuracy or speed be improved? Are there perhaps other parts of the code I need to change for it to work on regression, rather than classification, problems?</p>
<p>II) Also, can a transformer work at all on multivariate problems of my kind (10-feature input, 1-feature output), or do transformers only work on univariate problems? Thanks!</p>
<pre><code>def build_transformer_model(input_shape, head_size, num_heads, ff_dim, num_transformer_blocks, mlp_units, dropout=0, mlp_dropout=0):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_transformer_blocks):
        # Normalization and Attention
        x = layers.LayerNormalization(epsilon=1e-6)(x)
        x = layers.MultiHeadAttention(
            key_dim=head_size, num_heads=num_heads, dropout=dropout
        )(x, x)
        x = layers.Dropout(dropout)(x)
        res = x + inputs
        # Feed Forward Part
        x = layers.LayerNormalization(epsilon=1e-6)(res)
        x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
        x = layers.Dropout(dropout)(x)
        x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
        x = x + res
    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    for dim in mlp_units:
        x = layers.Dense(dim, activation="relu")(x)
        x = layers.Dropout(mlp_dropout)(x)
    x = layers.Dense(24)(x)
    return keras.Model(inputs, x)

model_tr = build_transformer_model(input_shape=(window_size, X_train.shape[2]), head_size=256, num_heads=4, ff_dim=4, num_transformer_blocks=4, mlp_units=[128], mlp_dropout=0.4, dropout=0.25)
model_tr.compile(loss="mse", optimizer='adam')
m_tr_history = model_tr.fit(x=X_train, y=y_train, validation_split=0.25, batch_size=64, epochs=10, callbacks=[modelsave_cb])
</code></pre>
|
<h2>Edit</h2>
<p>I just noticed that the model you are referring to is built very differently from the transformer in Attention Is All You Need, since it only uses one half of the architecture. Thus my answer below is not complete, and I have to add the following (the final two paragraphs still apply as they are, though):</p>
<p>The Keras model is quite unusual, and while it is a time-series model, it is not a regression one. My general description of how transformers work still applies (that is, you only get one prediction at a time), since their model is a many-to-one classification model. So, you have to adjust your model accordingly.</p>
<p>The main thing you can ignore from my original answer is the emphasis on decoders; Keras only uses encoders in their model.</p>
<hr />
<h2>Original answer</h2>
<p>Transformers only output one prediction at a time, not twenty-four. Let's step through a transformer at runtime.</p>
<p>In all of the steps, the encoder input will be the same, since your sequences are not that long. The decoder receives the output so far as its input, in the following manner:</p>
<p>Step one:
The decoder's input is only one token, which stands for "start of sequence", plus padding: <span class="math-container">$(start,0,0,..,0,0)$</span></p>
<p>Step two:
The decoder's input is the start token and the prediction from step one, plus padding: <span class="math-container">$(start,y_1,0,0,..,0,0)$</span></p>
<p>Step three:
The decoder's input is its input from step two and the prediction from step two, plus padding: <span class="math-container">$(start,y_1,y_2,0,0,..,0,0)$</span></p>
<p>And so on. If you want 24 outputs, the transformer will have to run 24 times. The only reason the original transformer model has multiple outputs is that it needs to make one prediction for each word token (and pick the most likely one). Training will thus have to reflect this.</p>
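The step-by-step decoding described above can be sketched as a plain loop. The `decoder_step` stub below is hypothetical and stands in for a full encoder-decoder forward pass of the trained model:

```python
def decoder_step(encoder_input, decoded_so_far):
    # Hypothetical stub for the trained transformer: a real model would
    # run a full encoder-decoder forward pass and return the next value.
    return sum(encoder_input) / len(encoder_input)

def predict_sequence(encoder_input, start_token=0.0, steps=24):
    # Autoregressive decoding: run the model once per output value,
    # feeding each prediction back in as decoder input.
    decoded = [start_token]
    for _ in range(steps):
        decoded.append(decoder_step(encoder_input, decoded))
    return decoded[1:]  # drop the start token

# 168 hourly inputs in, 24 hourly predictions out, one model call per step.
preds = predict_sequence([2.0] * 168)
assert len(preds) == 24
```

The point is structural: 24 outputs cost 24 forward passes at inference time, regardless of how the single-step model is built.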
<p>However, I am quite skeptical that you will see any significant benefit from using transformers over LSTMs. The main benefit of transformers, or any other attention-based model, is that they allow for longer-term dependencies than LSTMs. According to a <a href="https://arxiv.org/pdf/1805.04623.pdf" rel="nofollow noreferrer">Stanford study</a>, the point at which LSTMs are no longer effective is a sequence of around 200 tokens. Of course, that study is specific to language models, but I bet it won't be much different in your case; your sequences are quite short.</p>
<p>Anyhow, the output layer for a regression task generally uses a linear activation function. If all your outputs are positive, though, that won't make much of a difference.</p>
| 114
|
transformer models
|
What considerations should I take to train my transformer model?
|
https://ai.stackexchange.com/questions/35833/what-considerations-should-i-take-to-train-my-transformer-model
|
<p>I want to train my vision transformer model on a benchmark for an image segmentation task: (<a href="https://arxiv.org/abs/2110.08733" rel="nofollow noreferrer">LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation</a>) (<a href="https://github.com/Junjue-Wang/LoveDA" rel="nofollow noreferrer">GitHub</a>), but I don't get any acceptable result from my model.</p>
<p>I don't get a good result in either evaluation or (obviously) testing.</p>
<p>I'm frustrated, since I have trained my model with different learning rates: [0.001, 0.0001, 6e-e].</p>
<p>The training dataset includes about 3000 images, and the number of epochs is 1500.</p>
<p>I have used different loss functions: Cross Entropy, Dice loss, Focal loss, and a combination of those losses.</p>
<p>So, I'm wondering if anyone has any experience that could help me find a solution and improve my model's performance?</p>
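Since the question combines Cross Entropy with Dice and Focal losses, one useful sanity check (an editorial suggestion, not part of the original question) is to verify each loss on trivial inputs: a loss that is not near zero for a perfect prediction usually indicates an implementation bug. A minimal soft-Dice sketch in NumPy:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    # pred: predicted foreground probabilities, target: binary mask.
    # Dice = 2|A ∩ B| / (|A| + |B|); the loss is 1 - Dice.
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

mask = np.array([[0, 1], [1, 1]], dtype=float)
assert soft_dice_loss(mask, mask) < 1e-5      # perfect prediction -> ~0
assert soft_dice_loss(1 - mask, mask) > 0.99  # disjoint prediction -> ~1
```

If the losses pass such checks, the problem is more likely in the data pipeline, label encoding, or learning-rate schedule than in the loss itself.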
| 115
|
|
transformer models
|
Can transformer models be used to convert code from one programming language to another?
|
https://ai.stackexchange.com/questions/40792/can-transformer-models-be-used-to-convert-code-from-one-programming-language-to
|
<p>There was a <a href="https://ai.stackexchange.com/questions/17140/can-sequence-to-sequence-models-be-used-to-convert-source-code-from-one-programm">question</a> like this in 2019. I hope things have changed since then.</p>
<p>Concretely, I am looking for a way to train a transformer model to convert code from SAS to Python. I guess the method does not depend on the pair of programming languages requested as long as one has enough data for training.</p>
<p>There are paid tools, e.g. <a href="https://www.codeconvert.ai/" rel="nofollow noreferrer">CodeConvert</a> and <a href="https://sas2py.com/solutions.html" rel="nofollow noreferrer">sas2py</a>, that do this, but I cannot find out what they are based on or how they were built.</p>
<p>It appears that one (very bad) way could be to use interpreters that translate one programming language, e.g. Java, to English and then from English to the second programming language, e.g. Python. This must be very error-prone and seems like the wrong approach.</p>
|
<p>ChatGPT, which is based on the transformer architecture, can already do this to some extent, so the answer to your question is yes, though I doubt it will always be effective.</p>
<p>OpenAI also used to provide Codex (which is or was being used in <a href="https://github.com/features/copilot" rel="nofollow noreferrer">Github's copilot</a>), but <a href="https://help.openai.com/en/articles/7438137-code-completion-deprecated" rel="nofollow noreferrer">recently OpenAI deprecated it in favour of models behind ChatGPT</a>.</p>
| 116
|
transformer models
|
Can the decoder in a transformer model be parallelized like the encoder?
|
https://ai.stackexchange.com/questions/12490/can-the-decoder-in-a-transformer-model-be-parallelized-like-the-encoder
|
<p>Can the decoder in a transformer model be parallelized like the encoder?</p>
<p>As far as I understand, the encoder has all the tokens in the sequence to compute the self-attention scores. But for a decoder this is not possible (in both training and testing), as self-attention is calculated based on previous timestep outputs. Even if we consider techniques like teacher forcing, where we concatenate the expected output with the obtained one, there is still a sequential input from the previous timestep.</p>
<p>In this case, apart from the improvement in capturing long-term dependencies, is using a transformer-decoder better than say an LSTM, when comparing purely on the basis of parallelization?</p>
|
<blockquote>
<p><strong>Can the decoder in a transformer model be parallelized like the</strong>
<strong>encoder?</strong></p>
</blockquote>
<p><strong>Generally NO:</strong></p>
<p>Your understanding is completely right. In the decoder, the output of each step is fed to the bottom decoder in the next time step, just like an <a href="https://en.wikipedia.org/wiki/Long_short-term_memory" rel="noreferrer">LSTM</a>.</p>
<p>Also, as in LSTMs, the self-attention layer needs to attend to earlier positions in the output sequence in order to compute the output, which makes straightforward parallelisation impossible.</p>
<p>However, when decoding during training, there is a frequently used procedure which doesn't take the model's previous output at step t as the input at step t+1, but rather takes the ground-truth output at step t. This procedure is called 'teacher forcing' and makes the decoder parallelisable during training. You can read more about it <a href="https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/" rel="noreferrer">here</a>.</p>
<p><em>For a detailed explanation of how the Transformer works, I suggest reading this article: <a href="http://jalammar.github.io/illustrated-transformer/" rel="noreferrer">The Illustrated Transformer</a>.</em></p>
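With teacher forcing, all decoder positions are computed in a single pass, and causality is enforced by a "subsequent" (lower-triangular) mask rather than by sequential execution. A small NumPy sketch of that mask:

```python
import numpy as np

def subsequent_mask(size: int) -> np.ndarray:
    # True where position i may attend to position j, i.e. j <= i.
    # Applied to the attention scores, this lets the whole target
    # sequence be processed in one parallel pass during training.
    return np.tril(np.ones((size, size), dtype=bool))

m = subsequent_mask(4)
# Position 0 sees only itself; position 3 sees all four tokens.
assert m[0].tolist() == [True, False, False, False]
assert m[3].tolist() == [True, True, True, True]
```

This is the same idea as `generate_square_subsequent_mask` in PyTorch's `nn.Transformer`, which uses `-inf` in the masked positions instead of booleans.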
<blockquote>
<p><strong>Is using a transformer-decoder better than say an lstm when comparing</strong>
<strong>purely on the basis of parallelization?</strong></p>
</blockquote>
<p><strong>YES:</strong></p>
<p>Parallelization is the main drawback of <a href="https://en.wikipedia.org/wiki/Recurrent_neural_network" rel="noreferrer">RNNs</a> in general. Put simply, RNNs can memorize but not parallelize, while <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="noreferrer">CNNs</a> have the opposite property. Transformers are so powerful because they <strong>combine both</strong>: parallelization (at least partially) and memorization.</p>
<p>In Natural Language Processing, for example, where RNNs used to be so effective, if you look at the <a href="https://gluebenchmark.com/leaderboard" rel="noreferrer">GLUE leaderboard</a> you will find that most of the world-leading algorithms today are Transformer-based (e.g. <a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT by Google</a>, <a href="https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="noreferrer">GPT by OpenAI</a>, ...)</p>
<p><em>For better understanding of why Transformers are better than CNNs I suggest reading this Medium article: <a href="https://towardsdatascience.com/transformers-141e32e69591" rel="noreferrer">How Transformers Work.</a></em></p>
| 117
|
transformer models
|
Why would my transformer model always predict the same pregnancy code?
|
https://ai.stackexchange.com/questions/47521/why-would-my-transformer-model-always-predict-the-same-pregnancy-code
|
<p>I'm trying to predict pregnancy codes with a basic transformer model architecture. These pregnancy codes run from prg001 to prg030: prg001 is antenatal screening and prg030 is the maternal outcome of delivery.</p>
<p>The source is the codes in one year (2022, for example) and the target is the codes in the following year (2023, for example).</p>
<p>My dataset has about 42k rows. About half (22k rows) have an empty target (no code, because the pregnancy ended in the previous year). I use PyTorch's Transformer module.</p>
<p>The model basically predicts the zero code no matter what sequence I'm testing. For example, it's not predicting that if a person has prg008 in the prior year, they would have prg030 (the code that represents delivery) in the target, because they would give birth in the following year.</p>
<pre><code>#Hyperparameters
max_seq_length = 20
max_gen_length = 20
learning_rate = 0.001
weight_decay = 0.01
dropout = 0.1
d_model = 128
nhead = 4
num_layers = 3
dim_feedforward = 512
batch_size = 256
num_epochs = 10
</code></pre>
<p>I'm using cross-entropy loss. Any suggestions on how I can improve this? Most of the code is below:</p>
<pre><code>import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import numpy as np
import math
import pickle  # needed by PreTokenizedPRGDataset
from typing import List, Tuple
from torch.nn.utils.rnn import pad_sequence
from transformers import AutoTokenizer  # needed by main()
import os
import time

class PreTokenizedPRGDataset(Dataset):
    def __init__(self, preprocessed_file: str):
        """Dataset that loads pre-tokenized data from disk"""
        with open(preprocessed_file, 'rb') as f:
            self.data = pickle.load(f)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        return (
            torch.tensor(item['src'], dtype=torch.long),
            torch.tensor(item['tgt'], dtype=torch.long),
            item['id']
        )

class PRGDataset(Dataset):
    def __init__(self, sequence_list, tokenizer):
        self.tokenizer = tokenizer
        # Get special token IDs
        self.bos_token_id = tokenizer.bos_token_id
        self.eos_token_id = tokenizer.eos_token_id
        self.pad_token_id = tokenizer.pad_token_id
        # All sequences
        self.data = []
        for idx, seq in enumerate(sequence_list[:50000]):
            # Convert list of tokens to IDs
            seq_src_ids = torch.tensor(tokenizer.encode(seq[0]))
            seq_tgt_ids = torch.tensor(tokenizer.encode(seq[1]))
            indiv_entpr_id = int(seq[2])
            # Create source sequence
            seq_ids = torch.cat([
                torch.tensor([self.bos_token_id]),
                seq_src_ids,
                torch.tensor([self.eos_token_id])
            ])
            # Create target
            tgt_ids = torch.cat([
                torch.tensor([self.bos_token_id]),
                seq_tgt_ids,
                torch.tensor([self.eos_token_id])
            ])
            self.data.append((seq_ids, tgt_ids, indiv_entpr_id))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

def collate_fn(batch):
    """Custom collate function to handle variable-length sequences"""
    src_sequences = [item[0] for item in batch]
    tgt_sequences = [item[1] for item in batch]
    indiv_entpr_id = torch.tensor([item[2] for item in batch])
    # Pad sequences in batch
    src_padded = pad_sequence(src_sequences, batch_first=True, padding_value=0)
    tgt_padded = pad_sequence(tgt_sequences, batch_first=True, padding_value=0)
    return {
        'src': src_padded,
        'tgt': tgt_padded,
        'indiv_entpr_id': indiv_entpr_id
    }
class PositionalEncoding(nn.Module):
    def __init__(self, d_model: int, dropout: float, max_len: int = 20):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Create positional encoding matrix
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(1, max_len, d_model)
        pe[0, :, 0::2] = torch.sin(position * div_term)
        pe[0, :, 1::2] = torch.cos(position * div_term)
        # Register as buffer (not parameter)
        self.register_buffer('pe', pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        Args:
            x: Tensor, shape [batch_size, seq_len, embedding_dim]
        """
        x = x + self.pe[:, :x.size(1), :]
        return self.dropout(x)

class PRGTransformer(nn.Module):
    def __init__(self, tokenizer, d_model, nhead, num_layers, dim_feedforward, max_seq_length, max_gen_length, dropout):
        super().__init__()
        self.tokenizer = tokenizer
        self.vocab_size = len(tokenizer.get_vocab())
        self.d_model = d_model
        self.pad_token_id = tokenizer.pad_token_id
        self.bos_token_id = tokenizer.bos_token_id
        self.eos_token_id = tokenizer.eos_token_id
        self.max_gen_length = max_gen_length
        # Token embedding
        # ***** very important to add padding_idx so the model can ignore padding *****
        self.embedding = nn.Embedding(self.vocab_size, d_model, padding_idx=self.pad_token_id)
        # Positional encoding
        self.pos_encoder = PositionalEncoding(d_model, dropout, max_seq_length)
        # Dropout
        self.dropout = nn.Dropout(dropout)
        # Transformer
        self.transformer = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=num_layers,
            num_decoder_layers=num_layers,
            dim_feedforward=dim_feedforward,
            dropout=dropout,
            batch_first=True,
            norm_first=True
        )
        # Output layer
        self.output_layer = nn.Linear(d_model, self.vocab_size)
        # Initialize parameters
        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def create_masks(self, src: torch.Tensor, tgt: torch.Tensor) -> tuple:
        src_mask = None
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        # Padding masks: True where the pad tokens are
        src_padding_mask = (src == self.pad_token_id)
        tgt_padding_mask = (tgt == self.pad_token_id)
        return src_mask, tgt_mask, src_padding_mask, tgt_padding_mask

    def forward(self, src, tgt):
        src_mask, tgt_mask, src_key_padding_mask, tgt_key_padding_mask = self.create_masks(src, tgt)
        # Embed tokens and add positional encoding
        src_embedded = self.embedding(src) * math.sqrt(self.d_model)
        src_embedded = self.pos_encoder(src_embedded)
        tgt_embedded = self.embedding(tgt) * math.sqrt(self.d_model)
        tgt_embedded = self.pos_encoder(tgt_embedded)
        # Transformer forward pass
        output = self.transformer(
            src_embedded,
            tgt_embedded,
            src_mask=src_mask,
            tgt_mask=tgt_mask,
            src_key_padding_mask=src_key_padding_mask,
            tgt_key_padding_mask=tgt_key_padding_mask
        )
        return self.output_layer(output)

    @torch.no_grad()
    def generate(self, src: torch.Tensor) -> torch.Tensor:
        """Generate sequences until the EOS token is predicted"""
        self.eval()
        device = src.device
        batch_size = src.size(0)
        # Start with BOS token
        decoded = torch.full((batch_size, 1), self.bos_token_id,
                             dtype=torch.long, device=device)
        # Generate tokens until max length (-1 for the BOS token)
        for _ in range(self.max_gen_length - 1):
            output = self(src, decoded)
            next_token = torch.argmax(output[:, -1:, :], dim=-1)
            decoded = torch.cat([decoded, next_token], dim=1)
        # Find the first EOS position per sequence; if none, use the last position
        eos_mask = (decoded == self.eos_token_id)
        has_eos = eos_mask.any(dim=1)
        eos_pos = torch.where(has_eos, eos_mask.float().argmax(dim=1),
                              torch.full_like(has_eos, decoded.size(1) - 1, dtype=torch.long))
        # Create a mask: 1s before EOS (inclusive), 0s after
        mask = torch.arange(decoded.size(1), device=device)[None, :] <= eos_pos[:, None]
        # Apply the mask and pad with zeros
        padded = torch.where(mask, decoded, torch.zeros_like(decoded))
        return padded
def train_model(model, train_loader, valid_loader, learning_rate, num_epochs, weight_decay, device, tokenizer, save_dir='checkpoints'):
    criterion = nn.CrossEntropyLoss(ignore_index=0)
    optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=0.001,
        epochs=num_epochs,
        steps_per_epoch=len(train_loader)
    )
    train_losses = []
    valid_losses = []
    best_valid_loss = float('inf')
    epochs_without_improvement = 0
    patience = 3  # number of epochs to wait before stopping
    for epoch in range(num_epochs):
        epoch_start = time.time()
        model.train()
        train_loss = 0
        for batch_idx, batch in enumerate(train_loader):
            batch_start = time.time()
            src = batch['src'].to(device)
            tgt = batch['tgt'].to(device)
            # Teacher forcing: the decoder input drops the last token ...
            tgt_input = tgt[:, :-1]
            # ... and the target output drops the first token
            tgt_output = tgt[:, 1:]
            # Zero out gradients for each batch instead of accumulating them
            optimizer.zero_grad()
            output = model(src, tgt_input)
            loss = criterion(output.reshape(-1, output.size(-1)), tgt_output.reshape(-1))
            loss.backward()
            # Gradient clipping
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
            optimizer.step()
            scheduler.step()
            train_loss += loss.item()
        train_loss /= len(train_loader)  # Average training loss for the epoch
        train_losses.append(train_loss)
        # Validation
        print('Validation')
        model.eval()
        valid_loss = 0
        total_code_accuracy = 0
        total_sequences = 0
        with torch.no_grad():
            for batch_idx, batch in enumerate(valid_loader):
                src = batch['src'].to(device)
                tgt = batch['tgt'].to(device)
                indiv_id = batch['indiv_entpr_id'].to(device)
                # Same teacher-forcing shift as in training
                tgt_input = tgt[:, :-1]
                tgt_output = tgt[:, 1:]
                output = model(src, tgt_input)
                loss = criterion(output.reshape(-1, output.size(-1)), tgt_output.reshape(-1))
                valid_loss += loss.item()
                if batch_idx % 100 == 0:
                    # Calculate accuracy
                    predictions = model.generate(src)
                    for pred, target in zip(predictions, tgt):
                        # Remove padding tokens
                        pred = pred[pred != 0]
                        target = target[target != 0]
                        # Convert to sets of codes for comparison
                        pred_codes = set(pred.cpu().numpy())
                        target_codes = set(target.cpu().numpy())
                        accuracy = len(pred_codes.intersection(target_codes)) / len(target_codes)
                        total_code_accuracy += accuracy
                        total_sequences += 1
                    print(f'Batch {batch_idx} time: {time.time() - batch_start:.2f}s')
                    print(f'Epoch {epoch+1}, Batch {batch_idx}')
                    print(f'Loss: {loss.item():.4f}')
                    print(f'Sample indiv_entpr_id: {indiv_id[0].cpu()}')
                    # Masks to exclude special tokens when decoding samples
                    src_mask = (src[0] != tokenizer.pad_token_id) & (src[0] != tokenizer.bos_token_id) & (src[0] != tokenizer.eos_token_id)
                    pred_mask = (predictions[0] != tokenizer.pad_token_id) & (predictions[0] != tokenizer.bos_token_id) & (predictions[0] != tokenizer.eos_token_id)
                    tgt_mask = (tgt[0] != tokenizer.pad_token_id) & (tgt[0] != tokenizer.bos_token_id) & (tgt[0] != tokenizer.eos_token_id)
                    sample_input = tokenizer.decode(src[0][src_mask])
                    sample_generated = tokenizer.decode(predictions[0][pred_mask])
                    sample_target = tokenizer.decode(tgt[0][tgt_mask])
                    print(f'Sample Input: {sample_input}')
                    print(f'Sample Generated: {sample_generated}')
                    print(f'Sample Target: {sample_target}\n')
        valid_loss /= len(valid_loader)  # Average validation loss for the epoch
        valid_losses.append(valid_loss)
        valid_accuracy = (total_code_accuracy / total_sequences) * 100
        print(f'Epoch {epoch+1}/{num_epochs}, Train Loss: {train_loss:.4f}, Valid Loss: {valid_loss:.4f}, Valid Accuracy: {valid_accuracy:.2f}%')
        # Track the best model and apply early stopping
        if valid_loss < best_valid_loss:
            best_valid_loss = valid_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f'Early stopping at epoch {epoch+1}')
                break
        print(f'Epoch time: {time.time() - epoch_start:.2f}s')
        # Save a checkpoint at the end of each epoch
        save_checkpoint(model, optimizer, scheduler, epoch,
                        train_losses, valid_losses, best_valid_loss,
                        save_dir)
    return train_losses, valid_losses

def evaluate_model(model: PRGTransformer, tokenizer, test_sequences: List[torch.Tensor], batch_size, device: torch.device):
    model.eval()
    with torch.no_grad():
        test_dataset = PRGDataset(test_sequences, tokenizer)
        test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn)
        correct_sequences = 0
        total_sequences = 0
        for batch_idx, batch in enumerate(test_loader):
            src = batch['src'].to(device)
            tgt = batch['tgt'].to(device)
            # Calculate accuracy
            predictions = model.generate(src)
            # Only compare up to actual sequence lengths
            for pred, target, source in zip(predictions, tgt, src):
                print(f"\nInput: {tokenizer.convert_ids_to_tokens(source)}")
                print(f"Generated: {tokenizer.convert_ids_to_tokens(pred)}")
                print(f"Expected: {tokenizer.convert_ids_to_tokens(target)}")
                print(f"Correct? {torch.all(pred == target).item()}")
                correct_sequences += torch.all(pred == target).item()
                total_sequences += 1
        test_accuracy = correct_sequences / total_sequences
        print(f'Test Accuracy: {test_accuracy:.2%}')

def main():
    torch.manual_seed(42)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print(f"Using device: {device}")
    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained('/path/prg_tokenizer')
    # Hyperparameters
    max_seq_length = 20
    max_gen_length = 20
    learning_rate = 0.001
    weight_decay = 0.01
    dropout = 0.1
    d_model = 128
    nhead = 4
    num_layers = 3
    dim_feedforward = 512
    batch_size = 256
    num_epochs = 10
    # Create train dataset and dataloader
    train_dataset = PreTokenizedPRGDataset(train_file)
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn,
                              num_workers=4, pin_memory=True)
    # Create valid dataset and dataloader
    valid_dataset = PreTokenizedPRGDataset(valid_file)
    valid_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn)
    # Initialize model
    model = PRGTransformer(tokenizer=tokenizer, d_model=d_model, nhead=nhead, num_layers=num_layers, dim_feedforward=dim_feedforward, max_seq_length=max_seq_length, max_gen_length=max_gen_length, dropout=dropout).to(device)
    # Train the model
    save_dir = '/path/model_checkpoints'
    print("Starting training...")
    train_losses, valid_losses = train_model(model, train_loader, valid_loader, learning_rate, num_epochs, weight_decay, device, tokenizer, save_dir)
    return train_losses, valid_losses

if __name__ == "__main__":
    train_losses, valid_losses = main()
</code></pre>
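One likely culprit, given that about half the targets are empty, is class imbalance: cross-entropy is minimised by always predicting the dominant (zero) code. A hedged sketch of computing inverse-frequency class weights (the token IDs and counts here are illustrative, not from the actual dataset):

```python
from collections import Counter

def inverse_freq_weights(token_ids, vocab_size, smoothing=1.0):
    # Weight each class by inverse frequency so rare codes contribute
    # as much to the loss as frequent ones; normalise to mean 1.
    counts = Counter(token_ids)
    weights = [1.0 / (counts.get(tok, 0) + smoothing) for tok in range(vocab_size)]
    total = sum(weights)
    return [w * vocab_size / total for w in weights]

# Toy example: token 0 (the empty/padding target) dominates the data.
w = inverse_freq_weights([0, 0, 0, 0, 0, 0, 1, 2], vocab_size=3)
assert w[0] < w[1] and w[0] < w[2]  # the frequent class gets the smallest weight
```

The resulting list can be passed as `torch.tensor(w)` to `nn.CrossEntropyLoss(weight=...)`; alternatively, the empty-target rows could be downsampled before training.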
| 118
|
|
transformer models
|
Fine Tuning Transformer Model for Machine Translation
|
https://ai.stackexchange.com/questions/38356/fine-tuning-transformer-model-for-machine-translation
|
<p>I am working on the Transformer example demonstrated on TensorFlow's website. <a href="https://www.tensorflow.org/text/tutorials/transformer" rel="nofollow noreferrer">https://www.tensorflow.org/text/tutorials/transformer</a></p>
<p>In this example, Machine Translation model is trained to translate from Portuguese to English. The transformer is coded from scratch and other popular libraries like huggingface are not used.</p>
<p>Let's say I have another dataset which includes pairs of sentences of Portuguese and Finnish and let's say this dataset is fairly small. Since it is a small dataset, I want to use my model trained on Portuguese to English as a PreTrained model for creating the translation model for Portuguese to Finnish.</p>
<p>My question is, what are the key points to consider when using such a PreTrained model and changing ONLY its decoder output structure?</p>
|
<p>Transfer learning is a relatively common technique in machine translation. Mostly, it means fine-tuning pre-trained self-supervised sequence-to-sequence models, such as <a href="https://aclanthology.org/2020.tacl-1.47" rel="nofollow noreferrer">mBART</a>. It is also often used for low-resource languages, to transfer from a related high-resource language, as most participants in the <a href="https://aclanthology.org/2021.wmt-1.72" rel="nofollow noreferrer">WMT21 low-resource competition</a> did, for example.</p>
<p>What you suggest is very close to a <a href="https://aclanthology.org/W18-6325/" rel="nofollow noreferrer">2018 paper by Kocmi and Bojar</a>. You might use their setup as a starting point. The main challenges addressed in the paper are:</p>
<ul>
<li>Carefully setting the learning rate schedule to avoid catastrophic forgetting.</li>
<li>Handling the vocabulary mismatch (Finnish uses a different vocabulary than English).</li>
</ul>
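One common way to address the vocabulary mismatch (a general technique, not necessarily the paper's exact recipe) is to keep the embedding rows for subwords shared between the old and new vocabularies and re-initialise only the rest. A NumPy sketch with toy vocabularies:

```python
import numpy as np

def transfer_embeddings(old_vocab, new_vocab, old_emb, rng=None):
    # Copy embedding rows for tokens present in both vocabularies;
    # randomly initialise rows for tokens new to the target language.
    if rng is None:
        rng = np.random.default_rng(0)
    d = old_emb.shape[1]
    new_emb = rng.normal(0, 0.02, size=(len(new_vocab), d))
    for tok, new_id in new_vocab.items():
        if tok in old_vocab:
            new_emb[new_id] = old_emb[old_vocab[tok]]
    return new_emb

old_vocab = {"_the": 0, "_por": 1, "ta": 2}       # toy English-side vocab
new_vocab = {"_the": 0, "ta": 1, "_talo": 2}      # toy Finnish-side vocab
old_emb = np.arange(9, dtype=float).reshape(3, 3)
new_emb = transfer_embeddings(old_vocab, new_vocab, old_emb)
assert (new_emb[0] == old_emb[0]).all()  # shared token reuses its trained row
assert (new_emb[1] == old_emb[2]).all()
```

Training a joint subword vocabulary over both language pairs up front, as Kocmi and Bojar do, avoids the mismatch entirely; the row-copying trick above is the fallback when the pre-trained vocabulary is fixed.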
| 119
|
transformer models
|
Can an existing transformer model be modified to estimate the next most probable number in a sequence of numbers?
|
https://ai.stackexchange.com/questions/27044/can-an-existing-transformer-model-be-modified-to-estimate-the-next-most-probable
|
<p>Models based on the transformer architectures (GPT, BERT, etc.) work awesome for NLP tasks including taking an input generated from words and producing probability estimates of the next word as the output.</p>
<p>Can an existing transformer model, such as GPT-2, be modified to perform the same task on a sequence of numbers and estimate the next most probable number? If so, what modifications do we need to perform (do we still train a tokenizer to tokenize integers/floats into token IDs?)?</p>
|
<p>To answer this, you need some constraints on the problem. Here are some sequences of numbers. No machine learning technique could be expected to learn all of them:</p>
<ul>
<li>the odd numbers</li>
<li>the primes</li>
<li>numbers expressed in digits, but listed in alphabetical order of their name in German</li>
<li>numbers listed in the lexical order of the reverse of their representation in base 3</li>
<li>the phone numbers in the Manhattan phone directory, listed in alphabetical order of subscriber</li>
</ul>
| 120
|
transformer models
|
Why does the loss stops reducing after a point in this Transformer Model?
|
https://ai.stackexchange.com/questions/25065/why-does-the-loss-stops-reducing-after-a-point-in-this-transformer-model
|
<h2>Context</h2>
<p>I was making a Transformer model to translate English sentences to German sentences, but the loss stops reducing after some time.</p>
<h2>Code</h2>
<pre><code>import string
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Embedding, LSTM, RepeatVector, Dense, Dropout, BatchNormalization, TimeDistributed, AdditiveAttention, Input, Concatenate, Flatten
from tensorflow.keras.layers import Activation, LayerNormalization, GRU, GlobalAveragePooling1D, Attention
from tensorflow.keras.optimizers import Adam
from tensorflow.nn import tanh, softmax
import time
from tensorflow.keras.losses import SparseCategoricalCrossentropy, CategoricalCrossentropy
from numpy import array
from tensorflow.keras.utils import plot_model
from sklearn.utils import shuffle
import time
import tensorflow as tf
from numpy import array
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.datasets.imdb import load_data
def load_data(filename):
file = open(filename, 'r')
text = file.read()
file.close()
return text
def to_lines(text):
return text.split('\n')
def clean_data(pair):
pair = 'start_seq_ ' + pair + ' end_seq_'
re_print = re.compile('[^%s]' % re.escape(string.printable))
table = str.maketrans('', '', string.punctuation)
tokens = [token.translate(table) for token in pair.split()]
tokens = [token.lower() for token in tokens]
tokens = [re_print.sub('', token) for token in tokens]
tokens = [token for token in tokens if token.isalpha()]
return tokens
lines = to_lines(load_data('/content/drive/My Drive/spa.txt'))
english_pair = []
german_pair = []
language = []
for line in lines:
if line != '':
pairs = line.split('\t')
english_pair.append(clean_data(pairs[0]))
german_pair.append(clean_data(pairs[1]))
language.append(clean_data(pairs[0]))
language.append(clean_data(pairs[1]))
english_pair = array(english_pair)
german_pair = array(german_pair)
language = array(language)
def create_tokenizer(data):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(data)
return tokenizer
def max_len(lines):
length = []
for line in lines:
length.append(len(line))
return max(length)
tokenizer = create_tokenizer(language)
vocab_size = len(tokenizer.word_index) + 1
max_len = max_len(language)
def create_sequences(sequences, max_len):
sequences = tokenizer.texts_to_sequences(sequences)
sequences = pad_sequences(sequences, maxlen=max_len, padding='post')
return sequences
X1 = create_sequences(english_pair, max_len)
X2 = create_sequences(german_pair, max_len)
Y = create_sequences(german_pair, max_len)
X1, X2, Y = shuffle(X1, X2, Y)
training_samples = int(X1.shape[0] * 1.0)
train_x1, train_x2, train_y = X1[:training_samples], X2[:training_samples], Y[:training_samples]
test_x1, test_x2, test_y = X1[training_samples:], X2[training_samples:], Y[training_samples:]
train_x2 = train_x2[:, :-1]
test_x2 = test_x2[:, :-1]
train_y = train_y[:, 1:].reshape(-1, max_len-1)
test_y = test_y[:, 1:].reshape(-1, max_len-1)
train_x2 = pad_sequences(train_x2, maxlen=max_len, padding='post')
test_x2 = pad_sequences(test_x2, maxlen=max_len, padding='post')
train_y = pad_sequences(train_y, maxlen=max_len, padding='post')
test_y = pad_sequences(test_y, maxlen=max_len, padding='post')
</code></pre>
<p>All the code above just prepares the data, so you can skip that part if you want.
The code after this implements the Transformer model.</p>
<pre><code>class EncoderBlock(tf.keras.layers.Layer):
def __init__(self, mid_ffn_dim, embed_dim, num_heads, max_len, batch_size):
super(EncoderBlock, self).__init__()
# Variables
self.batch_size = batch_size
self.max_len = max_len
self.mid_ffn_dim = mid_ffn_dim
self.embed_dim = embed_dim
self.num_heads = num_heads
self.attention_vector_len = self.embed_dim // self.num_heads
if self.embed_dim % self.num_heads != 0:
raise ValueError('I am Batman!')
# Trainable Layers
self.mid_ffn = Dense(self.mid_ffn_dim, activation='relu')
self.final_ffn = Dense(self.embed_dim)
self.layer_norm1 = LayerNormalization(epsilon=1e-6)
self.layer_norm2 = LayerNormalization(epsilon=1e-6)
self.combine_heads = Dense(self.embed_dim)
self.query_dense = Dense(self.embed_dim)
self.key_dense = Dense(self.embed_dim)
self.value_dense = Dense(self.embed_dim)
def separate_heads(self, x):
x = tf.reshape(x, (-1, self.max_len, self.num_heads, self.attention_vector_len))
return tf.transpose(x, perm=[0, 2, 1, 3])
def compute_self_attention(self, query, key, value):
score = tf.matmul(query, key, transpose_b=True)
dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
scaled_score = score / tf.math.sqrt(dim_key)
weights = tf.nn.softmax(scaled_score, axis=-1)
output = tf.matmul(weights, value)
return output
def self_attention_layer(self, x):
query = self.query_dense(x)
key = self.key_dense(x)
value = self.value_dense(x)
query_heads = self.separate_heads(query)
key_heads = self.separate_heads(key)
value_heads = self.separate_heads(value)
attention = self.compute_self_attention(query_heads, key_heads, value_heads)
attention = tf.transpose(attention, perm=[0, 2, 1, 3])
attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))
output = self.combine_heads(attention)
return output
def get_output(self, x):
attn_output = self.self_attention_layer(x)
out1 = self.layer_norm1(x + attn_output)
ffn_output = self.final_ffn(self.mid_ffn(out1))
encoder_output = self.layer_norm2(out1 + ffn_output)
return encoder_output
class DecoderBlock(tf.keras.layers.Layer):
def __init__(self, mid_ffn_dim, embed_dim, num_heads, max_len, batch_size):
super(DecoderBlock, self).__init__()
# Variables
self.batch_size = batch_size
self.max_len = max_len
self.mid_ffn_dim = mid_ffn_dim
self.embed_dim = embed_dim
self.num_heads = num_heads
self.attention_vector_len = self.embed_dim // self.num_heads
if self.embed_dim % self.num_heads != 0:
raise ValueError('I am Batman!')
# Trainable Layers
self.query_dense1 = Dense(self.embed_dim, name='query_dense1')
self.key_dense1 = Dense(self.embed_dim, name='key_dense1')
self.value_dense1 = Dense(self.embed_dim, name='value_dense1')
self.mid_ffn = Dense(self.mid_ffn_dim, activation='relu', name='dec_mid_ffn')
self.final_ffn = Dense(self.embed_dim, name='dec_final_ffn')
self.layer_norm1 = LayerNormalization(epsilon=1e-6)
self.layer_norm2 = LayerNormalization(epsilon=1e-6)
self.layer_norm3 = LayerNormalization(epsilon=1e-6)
self.combine_heads = Dense(self.embed_dim, name='dec_combine_heads')
self.query_dense2 = Dense(self.embed_dim, name='query_dense2')
self.key_dense2 = Dense(self.embed_dim, name='key_dense2')
self.value_dense2 = Dense(self.embed_dim, name='value_dense2')
def separate_heads(self, x):
x = tf.reshape(x, (-1, self.max_len, self.num_heads, self.attention_vector_len))
return tf.transpose(x, perm=[0, 2, 1, 3])
def compute_self_attention(self, query, key, value):
score = tf.matmul(query, key, transpose_b=True)
dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
scaled_score = score / tf.math.sqrt(dim_key)
weights = tf.nn.softmax(scaled_score, axis=-1)
output = tf.matmul(weights, value)
return output
def masking(self, x):
b = []
for batch in range(x.shape[0]):
bat = []
for head in range(x.shape[1]):
headd = []
for word in range(x.shape[2]):
current_word = []
for represented_in in range(x.shape[3]):
if represented_in > word:
current_word.append(np.NINF)
else:
current_word.append(0)
headd.append(current_word)
bat.append(headd)
b.append(bat)
return b
def compute_masked_self_attention(self, query, key, value):
score = tf.matmul(query, key, transpose_b=True)
score = score + self.masking(score)
score = tf.convert_to_tensor(score)
dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
scaled_score = score / tf.math.sqrt(dim_key)
weights = tf.nn.softmax(scaled_score, axis=-1)
output = tf.matmul(weights, value)
return output
def masked_self_attention_layer(self, x):
query = self.query_dense1(x)
key = self.key_dense1(x)
value = self.value_dense1(x)
query_heads = self.separate_heads(query)
key_heads = self.separate_heads(key)
value_heads = self.separate_heads(value)
attention = self.compute_masked_self_attention(query_heads, key_heads, value_heads)
attention = tf.transpose(attention, perm=[0, 2, 1, 3])
attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))
output = self.combine_heads(attention)
return output
def second_attention_layer(self, x, encoder_output):
query = self.query_dense2(x)
key = self.key_dense2(encoder_output)
value = self.value_dense2(encoder_output)
query_heads = self.separate_heads(query)
key_heads = self.separate_heads(key)
value_heads = self.separate_heads(value)
attention = self.compute_self_attention(query_heads, key_heads, value_heads)
attention = tf.transpose(attention, perm=[0, 2, 1, 3])
attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))
output = self.combine_heads(attention)
return output
def get_output(self, x, encoder_output):
masked_attn_output = self.masked_self_attention_layer(x)
out1 = self.layer_norm1(x + masked_attn_output)
mutli_head_attn_output = self.second_attention_layer(out1, encoder_output)
out2 = self.layer_norm2(out1 + mutli_head_attn_output)
ffn_output = self.final_ffn(self.mid_ffn(out2))
decoder_output = self.layer_norm3(out2 + ffn_output)
return decoder_output
embed_dim = 512
mid_ffn_dim = 1024
num_heads = 8
max_len = max_len
batch_size = 32
encoder_block1 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
encoder_block2 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
encoder_block3 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
decoder_block1 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
decoder_block2 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
decoder_block3 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
# Define Loss and Optimizer
loss_object = SparseCategoricalCrossentropy()
optimizer = Adam()
embedding = Embedding(vocab_size, embed_dim, name='embedding')
position_embedding = Embedding(vocab_size, embed_dim)
final_transformer_layer = Dense(vocab_size, activation='softmax')
def positional_embedding(x):
positions = tf.range(start=0, limit=max_len, delta=1)
positions = position_embedding(positions)
return x + positions
def train_step(english_sent, german_sent, german_trgt):
with tf.GradientTape() as tape:
english_embedded = embedding(english_sent)
german_embedded = embedding(german_sent)
english_positioned = positional_embedding(english_embedded)
german_positioned = positional_embedding(german_embedded)
# Encoders
encoder_output = encoder_block1.get_output(english_positioned)
encoder_output = encoder_block2.get_output(encoder_output)
encoder_output = encoder_block3.get_output(encoder_output)
# Decoders
decoder_output = decoder_block1.get_output(german_positioned, encoder_output)
decoder_output = decoder_block2.get_output(decoder_output, encoder_output)
decoder_output = decoder_block3.get_output(decoder_output, encoder_output)
# Final Output
transformer_output = final_transformer_layer(decoder_output)
# Compute Loss
loss = loss_object(german_trgt, transformer_output)
variables = embedding.trainable_variables + position_embedding.trainable_variables + encoder_block1.trainable_variables + encoder_block2.trainable_variables
variables += encoder_block3.trainable_variables + decoder_block1.trainable_variables + decoder_block2.trainable_variables + decoder_block3.trainable_variables
variables += final_transformer_layer.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return float(loss)
def train(epochs=10):
batch_per_epoch = int(train_x1.shape[0] / batch_size)
for epoch in range(epochs):
for i in range(batch_per_epoch):
english_sent_x = train_x1[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len)
german_sent_x = train_x2[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len)
german_sent_y = train_y[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len, 1)
loss = train_step(english_sent_x, german_sent_x, german_sent_y)
print('Epoch ', epoch, 'Batch ', i, '/', batch_per_epoch, 'Loss ', loss)
train()
</code></pre>
<p>And the code is done! But the loss stops reducing at a value of around 1.2 after some time. Why is this happening?</p>
<h2>Maybe Important</h2>
<p>I tried debugging the model, by passing random input integers, and the model was still performing the same way it did when I gave real Sentences as input.</p>
<p>When I tried training the model with just 1 training sample, the loss stopped reducing at around 0.2. When I trained it with 2 training samples, the result was approximately the same as with 1 training sample.</p>
<p>When I stopped shuffling the dataset, the loss went down to around 0.7 and then stopped decreasing again.</p>
<p>I tried simplifying the model by removing some encoder and decoder blocks but the results were approximately the same. I even tried making the model more complex but the results were again approximately the same.</p>
| 121
|
|
transformer models
|
Training custom seq2seq transformer model without any tokenizer
|
https://ai.stackexchange.com/questions/46974/training-custom-seq2seq-transformer-model-without-any-tokenizer
|
<p>I’m working on a custom seq2seq transformer model for translating between two sets of token IDs.</p>
<p>My input and translation token IDs range from 0 to 8191.</p>
<p>Example:</p>
<pre><code>input_ids = [2034, 4043, ...., 3] # length is 2048
translation_input_ids = [3042, 9123, ...., 3285] # length is 2048
</code></pre>
<p>I’m using EncoderDecoderModel with BertConfig for both the encoder and decoder. My sequences are all of length 2048. Here are my questions:</p>
<p>Vocab Size: Given that my tokens range from 0 to 8191, do I need to set vocab_size to 8194 (i.e., 8192 + 2) to account for additional tokens like decoder_start_token_id and pad_token_id?</p>
<p>Handling decoder_start_token_id: How does the model handle the decoder_start_token_id? Is it correct to assign it to a value outside my token range (e.g., 8192)?</p>
<p>Padding Token: If I don’t need padding (since all my sequences are the same length), do I still need to provide a pad_token_id in the configuration?</p>
<p>Config Choice: Is using BertConfig the right approach for this task, or would something like BartConfig or T5Config be more appropriate?</p>
<p>Here is my current model structure:</p>
<pre><code>batch_size = 2
vocab_size = 8192 + 2
embedding_size = 1024
context_window_length = 2048
device_num = 0
device = torch.device(f'cuda:{device_num}' if torch.cuda.is_available() else 'cpu')
# Initialize BertConfig for encoder and decoder with vocab_size=8192
config_encoder = BertConfig(
vocab_size=vocab_size,
hidden_size=embedding_size,
num_hidden_layers=6,
num_attention_heads=8,
intermediate_size=embedding_size*4,
max_position_embeddings=context_window_length
)
config_decoder = BertConfig(
vocab_size=vocab_size,
hidden_size=embedding_size,
num_hidden_layers=6,
num_attention_heads=8,
intermediate_size=embedding_size*4,
max_position_embeddings=context_window_length,
is_decoder=True,
add_cross_attention=True
)
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = EncoderDecoderModel(config=config)
model.config.decoder_start_token_id = 8192 # arbitrary number that is not between 0 and 8191
model.config.pad_token_id = 8193 # arbitrary number that is not between 0 and 8191
model.to(device)
</code></pre>
<p>Now when I run the training, the loss gets stuck at 6.5 after around 1000 iterations (my epoch size is 19500 iterations) and stays there.</p>
<p>Is this the correct way to run a seq2seq training without any tokenizer involved? Or am I doing something wrong here?</p>
<p>I do not need a tokenizer here because these sequences come directly from a generation process and are not text. But do I still need to design a custom tokenizer to use with the BertConfig?</p>
| 122
|
|
transformer models
|
Why do both sine and cosine have been used in positional encoding in the transformer model?
|
https://ai.stackexchange.com/questions/15386/why-do-both-sine-and-cosine-have-been-used-in-positional-encoding-in-the-transfo
|
<p>The Transformer model proposed in "Attention Is All You Need" uses sinusoid functions to do the positional encoding. </p>
<p>Why have both sine and cosine been used? And why do we need to separate the odd and even dimensions to use different sinusoid functions?</p>
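<p>For reference, my understanding of the encoding from the paper, as a plain-Python sketch (assuming an even <code>d_model</code>):</p>

```python
import math

# PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
# PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
# i.e. each even/odd dimension pair shares one frequency,
# with sine on the even index and cosine on the odd index.
def sinusoidal_positional_encoding(max_len, d_model):
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            pe[pos][i + 1] = math.cos(angle)
    return pe
```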
| 123
|
|
transformer models
|
How to Select Model Parameters for Transformer (Heads, number of layers, etc)
|
https://ai.stackexchange.com/questions/27254/how-to-select-model-parameters-for-transformer-heads-number-of-layers-etc
|
<p>Is there a general guideline on how the Transformer model parameters should be selected, or the range of these parameters that should be included in a hyperparameter sweep?</p>
<ul>
<li>Number of heads</li>
<li>Number of encoder & decoder layers</li>
<li>Size of transformer model (<code>d_model</code> in Pytorch)</li>
<li>Size of hidden layers</li>
</ul>
<p>Are there general guidelines, such as the number of decoder layers being equal to the number of encoder layers? Thank you.</p>
| 124
|
|
transformer models
|
How to train a transformer text-to-text model on counterexamples?
|
https://ai.stackexchange.com/questions/15707/how-to-train-a-transformer-text-to-text-model-on-counterexamples
|
<p>Is it possible to update the weights of a vanilla transformer model using counterexamples alongside examples?</p>
<p>For example, from the <a href="https://github.com/google-research-datasets/paws" rel="nofollow noreferrer">PAWS</a> data set, given the phrases "Although interchangeable, the body pieces on the 2 cars are not similar." and "Although similar, the body parts are not interchangeable on the 2 cars." we have the label 0 because it is a counterexample, whereas for the phrases "Katz was born in Sweden in 1947 and moved to New York City at the age of 1." and "Katz was born in 1947 in Sweden and moved to New York at the age of one." we have the label 1 because it is a positive example of a valid paraphrase.</p>
<p>My goal is to use the transformer model to generate paraphrases, and I am attempting to build a GAN but could not find any references for updating the transformer text-to-text model using counterexamples.</p>
| 125
|
|
transformer models
|
why cross entropy loss has to be multiplied by a batch size during an evaluation in transformer model?
|
https://ai.stackexchange.com/questions/36395/why-cross-entropy-loss-has-to-be-multiplied-by-a-batch-size-during-an-evaluation
|
<p>I am trying to work through the code of the transformer model from PyTorch. However,
I do not understand why the batch size needs to be multiplied with the cross-entropy loss, given that the loss is already calculated on the data at a given timestep.</p>
<p>This is from the line: <strong>"total_loss += batch_size * criterion(output_flat, targets).item()"</strong></p>
<p><a href="https://github.com/pytorch/tutorials/blob/master/beginner_source/transformer_tutorial.py#L323" rel="nofollow noreferrer">This is the section</a> of code:</p>
<pre><code>def evaluate(model: nn.Module, eval_data: Tensor) -> float:
model.eval() # turn on evaluation mode
total_loss = 0.
src_mask = generate_square_subsequent_mask(bptt).to(device)
with torch.no_grad():
for i in range(0, eval_data.size(0) - 1, bptt):
data, targets = get_batch(eval_data, i)
batch_size = data.size(0)
if batch_size != bptt:
src_mask = src_mask[:batch_size, :batch_size]
output = model(data, src_mask)
output_flat = output.view(-1, ntokens)
total_loss += batch_size * criterion(output_flat, targets).item()
return total_loss / (len(eval_data) - 1)
</code></pre>
|
<p>That is because the default reduction for the cross-entropy loss is the mean. Hence, to find the true average across all data points, you first multiply each batch's mean loss by the batch size and then divide by the total number of data points. Note this only matters when the batches are not all equal in size, i.e. the batch size does not evenly divide the total number of data points. Otherwise you can simply add all the batch losses and divide by the number of batches.</p>
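<p>A toy example of the difference, with made-up per-example losses and unequal batches:</p>

```python
# Made-up per-example cross-entropy losses; 4 data points split
# into unequal batches of size 3 and 1.
losses  = [2.0, 2.0, 2.0, 8.0]
batches = [losses[:3], losses[3:]]

batch_means = [sum(b) / len(b) for b in batches]   # per-batch mean (the default reduction)
naive_avg   = sum(batch_means) / len(batch_means)  # 5.0 -- over-weights the small batch
weighted    = sum(len(b) * m
                  for b, m in zip(batches, batch_means)) / len(losses)
true_avg    = sum(losses) / len(losses)            # 3.5 -- matches the weighted version
```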
| 126
|
transformer models
|
Understanding embedding outputs in transformer models like CLIP
|
https://ai.stackexchange.com/questions/41653/understanding-embedding-outputs-in-transformer-models-like-clip
|
<p>I'm working with OpenAI's CLIP model and trying to understand the output of the text encoder. When I input a short prompt like "cat", the output is a tensor of shape [77, 1024]. My understanding is that the 1024 represents the dimensionality of the embeddings, and the 77 represents the maximum sequence length that the model can handle.</p>
<p>Given that "apple" would be tokenized into far fewer than 77 tokens, I'm assuming that the remaining tokens are padding tokens. However, when I inspect the tensor, I don't see any zero values. I was expecting the embeddings for the padding tokens to be zero vectors, but this doesn't seem to be the case.</p>
<p>My current hypothesis is that only the first few 1024-dimensional vectors in the tensor (corresponding to the tokens in my input) are significant, and the remaining vectors (corresponding to padding tokens) do not carry meaningful information about my input. Is this understanding correct?</p>
<p>Also, could someone explain why the embeddings for the padding tokens are not zero vectors? How does the model ensure that these padding tokens do not contribute to the output?</p>
|
<p>Usually the embeddings are not zero for the padding, since they are part of the lookup table of the embedding layer. However, in addition to the input, a mask is passed through the forward pass of the network to mask them out where relevant (in the transformer encoder used in CLIP, in the self-attention).</p>
<p>So the actual transformer attention is something like:
<span class="math-container">$$
A = softmax\left(\frac{QK^T}{\sqrt{d_k}} + M \cdot (-\infty)\right)V
$$</span></p>
<p>where <span class="math-container">$M$</span> is a <span class="math-container">$1 \times N$</span> vector (<span class="math-container">$N$</span> being the length of the sequence you have inputted), with <span class="math-container">$0$</span> for actual tokens and <span class="math-container">$1$</span> for padding, broadcast over the rows of the score matrix.</p>
<p>By doing so, you are adding <span class="math-container">$-\infty$</span> to the scores of the padding tokens, so their attention weights become 0 after the softmax, which means that in the last product (<span class="math-container">$softmax \cdot V$</span>) those embeddings that you are seeing not being zero are not considered.</p>
<p>However, I want to highlight that even if the padding embeddings were zeros, this mask would still be needed: after a normalization, attention, or FFNN step they might not be zero anymore, and would then contribute to the next attention step, which would make the network no longer length invariant.</p>
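<p>A minimal plain-Python sketch for a single query over five key positions (made-up scores), showing that adding <span class="math-container">$-\infty$</span> before the softmax drives the padded positions' weights to exactly zero:</p>

```python
import math

def masked_attention_weights(scores, pad_mask):
    # scores: raw q.k scores for one query over N key positions
    # pad_mask: True where the key token is padding
    masked = [s if not p else float("-inf") for s, p in zip(scores, pad_mask)]
    m = max(masked)                           # finite, as long as one real token exists
    exps = [math.exp(s - m) for s in masked]  # exp(-inf) == 0.0
    z = sum(exps)
    return [e / z for e in exps]

# two real tokens followed by three padding tokens
w = masked_attention_weights([1.2, 0.3, -0.5, 2.0, 0.9],
                             [False, False, True, True, True])
# w[2], w[3], w[4] are exactly 0.0; w[0] and w[1] sum to 1.0
```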
| 127
|
transformer models
|
What is the most important predecessor of the transformer model?
|
https://ai.stackexchange.com/questions/36584/what-is-the-most-important-predecessor-of-the-transformer-model
|
<p>I'm wondering what the origins of the transformer as proposed in <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">Attention Is All You Need</a> are. The paper itself provides some interesting pointers to the literature on self-attention such as:</p>
<ol>
<li><a href="https://aclanthology.org/D16-1244.pdf" rel="nofollow noreferrer">A Decomposable Attention Model for Natural Language Inference</a></li>
<li><a href="https://openreview.net/pdf?id=BJC_jUqxe" rel="nofollow noreferrer">A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING</a></li>
<li><a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE</a></li>
</ol>
<p>It seems like using attentional mechanisms was widespread (at least in combination with recurrent networks), and the 'A Decomposable Attention Model for Natural Language Inference' paper from 2016 already conjectured that scaling attention might be feasible.
Coming from this prior work, is it 'only' an engineering leap? Or what additional papers at the time likely influenced the architecture?</p>
|
<p>An influential predecessor paper is indeed the work on <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">NEURAL MACHINE TRANSLATION
BY JOINTLY LEARNING TO ALIGN AND TRANSLATE</a>. The paper outlines an attentional mechanism that is similar to the computation of the actual attentional weights in the transformer paper. See the paper and picture below for details.</p>
<p><a href="https://i.sstatic.net/7i0JI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7i0JI.jpg" alt="Model overview of attention mechanism" /></a></p>
<p>When the authors of the paper visualize their attention weights they obtain the following results. The inputs into the attention computation are not called queries and keys yet but are conceptually similar.</p>
<p><a href="https://i.sstatic.net/KJtI1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KJtI1.jpg" alt="Attention weight visualization" /></a></p>
<p>See this <a href="https://www.youtube.com/watch?v=EixI6t5oif0" rel="nofollow noreferrer">video</a> for a detailed explanation.</p>
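<p>A small plain-Python sketch of the additive scoring from that paper, <span class="math-container">$e_j = v_a^\top \tanh(W_a s_{i-1} + U_a h_j)$</span>, followed by a softmax over the encoder positions (toy random weights; real implementations use tensor libraries):</p>

```python
import math
import random

def bahdanau_alignment(s_prev, H, Wa, Ua, va):
    # e_j = va^T tanh(Wa s_prev + Ua h_j), softmaxed over j
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    ws = matvec(Wa, s_prev)                      # project the decoder state once
    e = [sum(v * math.tanh(a + b)
             for v, a, b in zip(va, ws, matvec(Ua, h)))
         for h in H]                             # one score per encoder state
    m = max(e)
    exps = [math.exp(x - m) for x in e]
    z = sum(exps)
    return [x / z for x in exps]                 # alignment weights, sum to 1

random.seed(0)
T, dh, da = 4, 3, 2                              # encoder steps, state dim, attention dim
H  = [[random.uniform(-1, 1) for _ in range(dh)] for _ in range(T)]
s  = [random.uniform(-1, 1) for _ in range(dh)]
Wa = [[random.uniform(-1, 1) for _ in range(dh)] for _ in range(da)]
Ua = [[random.uniform(-1, 1) for _ in range(dh)] for _ in range(da)]
va = [random.uniform(-1, 1) for _ in range(da)]
a = bahdanau_alignment(s, H, Wa, Ua, va)
```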
| 128
|
transformer models
|
Transformer Language Model generating meaningless text
|
https://ai.stackexchange.com/questions/23809/transformer-language-model-generating-meaningless-text
|
<p>I currently learning on Transformers, so check my understanding I tried implementing a small transformer-based language model and compare it to RNN based language model. Here's the code for transformer. I'm using PyTorch inbuilt layer for Transformer Encoder</p>
<pre><code>class TransformerLM_1(nn.Module):
def __init__(self, head, vocab_size, embedding_size, dropout = 0.1, device = 'cpu',
pad_idx = 0, start_idx = 1, end_idx = 2, unk_idx = 3):
super(TransformerLM_1, self).__init__()
self.head = head
self.embedding_size = embedding_size
self.vocab_size = vocab_size
self.device = device
self.embed = WordEmbedding(self.vocab_size, self.embedding_size, pad_idx)
self.postional_encoding = PostionalEncoding(embedding_size, device)
self.decoder = nn.TransformerEncoderLayer(self.embedding_size, self.head)
self.out_linear = nn.Linear(self.embedding_size, vocab_size)
self.dropout = dropout
self.pad_idx = pad_idx
self.start_idx = start_idx
self.end_idx = end_idx
self.unk_idx = unk_idx
self.device = device
def make_src_mask(self, src_sz):
mask = (torch.triu(torch.ones(src_sz, src_sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, 10e-20).masked_fill(mask == 1, float(0.0))
mask = mask.to(self.device)
return mask
def forward(self, x):
dec_in = x.clone()[:, :-1]
src_mask = self.make_src_mask(dec_in.size()[1])
src = self.embed(dec_in)
src = self.postional_encoding(src)
src = src.transpose(0,1)
transformer_out = self.decoder(src, src_mask)
out = self.out_linear(transformer_out)
return out
</code></pre>
<p>I'm using teacher forcing to make it converge faster. From the results I saw, the text generated by the RNN model is better than the transformer's.</p>
<p>Here is sample generated text with the expected</p>
<pre><code>Expected: you had to have been blind not to see the scenario there for what it was and is and will continue to be for months and even years a part of south carolina that has sustained a blow that the red cross expects will cost that organization alone some <span class="math-container">$ n million <eos>
Predicted: some <unk> been the been <unk> not be $</span> the total has was the may has <unk> the that that be to the <unk> the
Expected: citicorp and chase are attempting to put together a new lower bid <eos>
Predicted: a are <unk> carries n't to the together with <unk> jersey than
Expected: it ' s amazing the amount of money that goes up their nose out to the dog track or to the tables in las vegas mr . katz says <eos>
Predicted: <unk> ' s <unk> comeback money of the in mr to their <unk> and of <unk> <unk> or or <unk> the money
Expected: moreover while asian and middle eastern investors <unk> gold and help <unk> its price silver does n't have the same <unk> dealers say <eos>
Predicted: the production the routes <unk> of its
Expected: a board of control spokesman said the board had not seen the claim and declined to comment <eos>
Predicted: the board said declined of said
Expected: property capital trust said it dropped its plan to liquidate because it was n't able to realize the value it had expected <eos>
Predicted: the claims markets said its was n <unk> to sell insolvent of was n't disclosed to sell its plan
Expected: similarly honda motor co . ' s sales are so brisk that workers <unk> they have n't had a saturday off in years despite the government ' s encouragement of more leisure activity <eos>
Predicted: the honda ' credit . s s <unk>
Expected: we expect a big market in the future so in the long term it will be profitable <eos>
Predicted: it can it <unk> board
Expected: u . k . composite or <unk> insurers which some equity analysts said might be heavily hit by the earthquake disaster helped support the london market by showing only narrow losses in early trading <eos>
Predicted: the . s . s trading sell said which <unk> traders market said the be able in the the earthquake
Expected: this will require us to define and <unk> what is necessary or appropriate care <eos>
Predicted: <unk> is be the $ <unk> <unk> <unk> <unk> is the to <unk> and or
</code></pre>
<p>As you can see Transformer fails to grasp grammar compared to RNN. Is there anything wrong with my understanding?</p>
<p>EDIT</p>
<p>This is one example that caught my eye</p>
<pre><code>Expected: also the big board met with angry stock specialists <eos>
Predicted: also met specialists board met the stock big with after
</code></pre>
<p>Most of the predicted words are from the expected sentence, but in a different order. I have read that transformers are permutation invariant, which is the reason why we include positional encoding with the word embedding.</p>
|
<p>This is probably an issue of complete underfitting. How much training data do you use? What is your vocab size? What is your batch size, and for how many epochs did you train? Transformers always need more data than RNNs to reach good text quality.</p>
| 129
|
transformer models
|
Which framework should I use for training transformer language models with reinforcement learning?
|
https://ai.stackexchange.com/questions/48596/which-framework-should-i-use-for-training-transformer-language-models-with-reinf
|
<p>Which framework should I use for training transformer language models with reinforcement learning (e.g., GRPO)? Any recommendation?</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Feature</th>
<th style="text-align: left;"><a href="https://github.com/huggingface/trl/blob/50a2fa8ec843e18a6230508f53b1f220824914fd/trl/scripts/grpo.py#L25" rel="nofollow noreferrer"><code>trl</code></a> (Hugging Face)</th>
<th style="text-align: left;"><a href="https://github.com/unslothai/unsloth/blob/6ebef501f9015f17ba61e27d89cbdf3198fef671/tests/saving/language_models/test_save_merged_grpo_model.py#L609" rel="nofollow noreferrer"><code>unsloth</code></a></th>
<th style="text-align: left;"><a href="https://github.com/volcengine/verl/tree/main/examples/grpo_trainer" rel="nofollow noreferrer"><code>verl</code></a> (Volcano Engine)</th>
<th style="text-align: left;"><a href="https://github.com/OpenRLHF/OpenRLHF/tree/c8fbb4f058ac3500d3f854150c9c716f6936c389/examples/scripts" rel="nofollow noreferrer"><code>openrlhf</code></a></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"><strong>Role in GRPO</strong></td>
<td style="text-align: left;">Full GRPO framework, implements PPO, DPO, IPO, KTO</td>
<td style="text-align: left;"><strong>Accelerates DPO</strong> (not a full GRPO framework)</td>
<td style="text-align: left;">Full GRPO framework, implements PPO, GRPO, ReMax, DAPO, etc.</td>
<td style="text-align: left;">Full GRPO framework, implements PPO, DPO, KTO</td>
</tr>
<tr>
<td style="text-align: left;"><strong>Core Function</strong></td>
<td style="text-align: left;">Easy, comprehensive RLHF with HF models</td>
<td style="text-align: left;">Speed up LLM SFT/DPO fine-tuning</td>
<td style="text-align: left;">Flexible, efficient, production-ready RL training for LLMs</td>
<td style="text-align: left;">Flexible, scalable, research-oriented RLHF</td>
</tr>
<tr>
<td style="text-align: left;"><strong>Ease of Use</strong></td>
<td style="text-align: left;">Very High (Trainer API)</td>
<td style="text-align: left;">High (easy integration)</td>
<td style="text-align: left;">Moderate (flexible but extensive feature set)</td>
<td style="text-align: left;">Moderate (more control)</td>
</tr>
<tr>
<td style="text-align: left;"><strong>Performance</strong></td>
<td style="text-align: left;">Good, leverages Accelerate</td>
<td style="text-align: left;"><strong>Excellent</strong> (speed & VRAM reduction for DPO)</td>
<td style="text-align: left;">Excellent (SOTA throughput, scales to hundreds of GPUs)</td>
<td style="text-align: left;">Very Good, designed for large-scale/distributed</td>
</tr>
<tr>
<td style="text-align: left;"><strong>Integration</strong></td>
<td style="text-align: left;">Deeply integrated with Hugging Face ecosystem</td>
<td style="text-align: left;">Integrates well with HF & <code>trl</code>'s <code>DPOTrainer</code></td>
<td style="text-align: left;">Compatible with HF/Modelscope, integrates with FSDP, Megatron-LM, vLLM, SGLang</td>
<td style="text-align: left;">Uses HF models; often more modular</td>
</tr>
<tr>
<td style="text-align: left;"><strong>Target Audience</strong></td>
<td style="text-align: left;">Practitioners, general users, rapid prototyping</td>
<td style="text-align: left;">Anyone doing DPO/SFT, especially on limited hardware</td>
<td style="text-align: left;">Researchers, advanced practitioners, production teams needing performance/flexibility</td>
<td style="text-align: left;">Researchers, power users, large-scale deployments</td>
</tr>
</tbody>
</table></div>
| 130
|
|
transformer models
|
How can an decoder-only transformer be used for document embedding?
|
https://ai.stackexchange.com/questions/41161/how-can-an-decoder-only-transformer-be-used-for-document-embedding
|
<p>GPT-3 and GPT-4 are both examples of decoder-only models. However, OpenAI offers a text embedding API endpoint based on these models. This raises the general question: how can one obtain text embeddings from a decoder-only transformer model?</p>
|
<p>The GPT models (as manifested using the decoder block in the original Transformer architecture) do not generate the embeddings themselves. However, the weights from the GPT are used as the initial weights in a new model, dubbed CPT [<a href="https://openai.com/research/clip" rel="nofollow noreferrer">1</a>], that can create embeddings.</p>
<p>The key lies in Contrastive Modeling techniques - acquire pair data and train a new model to minimize loss on the known pairs. Look at the <a href="https://openai.com/research/clip" rel="nofollow noreferrer">CLIP model</a> for a good review of contrastive loss. The CLIP model embeds the text and image into the same vector space. Thus, CLIP and contrastive models know how to create embeddings.</p>
<p>Pair data, luckily, is prevalent on the web and easy to acquire: [image, caption], [docstring, code], [podcast titles, podcast descriptions], etc.</p>
<p>One key ingredient: you get far better results when you start from a pre-trained model's weights. That's where the GPT leverage comes in. They are using the GPT models (at various sizes like Ada, Babbage, Curie, DaVinci...) as the initial weights of a new contrastive model.</p>
<p>Thus, the formula is:</p>
<ol>
<li>A pre-trained GPT for initial weights</li>
<li>Pair data for training</li>
<li>Contrastive loss training</li>
</ol>
<p>The resulting model is what OpenAI calls a <a href="https://cdn.openai.com/papers/Text_and_Code_Embeddings_by_Contrastive_Pre_Training.pdf" rel="nofollow noreferrer">"cpt" model.</a></p>
<p>It's unclear how well these contrastive-loss embedding models work compared to a more traditional encoder-based LLM block (like the universal-sentence-encoder). <a href="https://medium.com/@nils_reimers/openai-gpt-3-text-embeddings-really-a-new-state-of-the-art-in-dense-text-embeddings-6571fe3ec9d9" rel="nofollow noreferrer">See this blog post for more.</a></p>
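<p>To make the contrastive-loss step concrete, here is a minimal numpy sketch of a one-directional, CLIP-style contrastive (InfoNCE) loss over a batch of embedding pairs. All names, dimensions, and the temperature value are illustrative, not OpenAI's actual implementation:</p>

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, temperature=0.07):
    # Normalize both sides, score every pair in the batch, and treat the
    # matching pairs (the diagonal) as positives for cross-entropy.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
docs = rng.normal(size=(4, 8))
loss_random = contrastive_loss(docs, rng.normal(size=(4, 8)))  # unrelated pairs
loss_matched = contrastive_loss(docs, docs)                    # perfect pairs
print(loss_matched < loss_random)  # True: training pushes the loss this way
```

<p>Training then amounts to minimizing this loss over pairs such as [caption, image] or [docstring, code], starting from GPT-initialized weights.</p>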
| 131
|
transformer models
|
Why feed forward neural network (FFN) in transformer block has a "contract and expand" pattern?
|
https://ai.stackexchange.com/questions/43630/why-feed-forward-neural-network-ffn-in-transformer-block-has-a-contract-and-e
|
<p>I noticed that in many (every?) transformer architectures, the FFN (i.e., the MLP network at the end of a transformer block) consists of two linear layers (with an activation), where the first layer goes from D1 channels to D2 channels and the second layer goes back from D2 to D1.</p>
<p>For instance we can see it in this FFN class in DistilBert :
<a href="https://github.com/huggingface/transformers/blob/345b9b1a6a308a1fa6559251eb33ead2211240ac/src/transformers/models/distilbert/modeling_distilbert.py#L455" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/345b9b1a6a308a1fa6559251eb33ead2211240ac/src/transformers/models/distilbert/modeling_distilbert.py#L455</a></p>
<p>What justifies this "expand and contract" pattern? Why not just use two linear layers with identical dimensions, or anything else?</p>
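<p>For concreteness, the pattern being asked about can be sketched in a few lines of numpy (toy dimensions; in practice D2 is typically 4 × D1):</p>

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    # Expand: (seq_len, d1) -> (seq_len, d2), with a non-linearity in between
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU here (DistilBert uses GELU)
    # Contract: (seq_len, d2) -> back to (seq_len, d1)
    return h @ W2 + b2

d1, d2, seq_len = 8, 32, 5                  # d2 is typically 4 * d1
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d1))
W1, b1 = rng.normal(size=(d1, d2)), np.zeros(d2)
W2, b2 = rng.normal(size=(d2, d1)), np.zeros(d1)
print(ffn(x, W1, b1, W2, b2).shape)         # (5, 8): output width matches input
```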
| 132
|
|
transformer models
|
Why not increase the number of attention heads rather than stacking transformer layers?
|
https://ai.stackexchange.com/questions/46983/why-not-increase-the-number-of-attention-heads-rather-than-stacking-transformer
|
<p>In transformer models, we have multiple transformer blocks, each with multiple attention heads.</p>
<p>So, assuming a model has <code>N</code> transformer blocks, each with <code>M</code> attention heads, why don't we just use a single transformer block with <code>NxM</code> attention heads?</p>
<p>Wouldn't that also have approximately the same number of parameters?</p>
|
<p>In many NLP tasks, information is intrinsically structured hierarchically. For instance, early layers in a transformer model may specialize in short-range dependencies such as syntactic relationships between adjacent tokens, while deeper layers specialize in long-range dependencies like semantic coherence across sentences or paragraphs. Multiple transformer blocks allow the model to perform a hierarchical transformation of the input at different levels of contextualized abstraction. In contrast, a single transformer block, even with many attention heads, lacks the depth to iteratively capture these nuanced, hierarchical dependencies: all of its heads attend within a single representation space, over the same raw input embeddings, at once.</p>
<p>In addition, in a single transformer block you only get one application of the feed-forward network (FFN), which reduces the model's capacity to model complex non-linear relationships, even if there are more attention heads. Multiple blocks with interleaving FFNs allow the model to compute various non-linear combinations of the relationships captured by attention.</p>
<p>Lastly transformers rely heavily on residual connections between blocks which allow the model to avoid vanishing gradients and enable better learning by passing information from earlier layers directly to later layers. With only one block, you'd lose the advantage of having these residual connections between different hierarchical representations. This could impair learning and result in suboptimal training, especially when dealing with complex patterns.</p>
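<p>The question's parameter-count premise can be checked with a back-of-the-envelope sketch (illustrative sizes; biases and layer norms ignored), assuming the per-head dimension is kept fixed as heads are added: the attention parameters indeed match, but the single wide block keeps only one FFN's worth of parameters, which is part of why depth matters.</p>

```python
def params(n_blocks, n_heads, d_model=512, d_head=64, d_ff=2048):
    # Per block: Q, K, V projections plus the output projection, then the
    # two FFN linear layers (biases and layer norms ignored for simplicity).
    attn = n_blocks * (3 * d_model * n_heads * d_head + n_heads * d_head * d_model)
    ffn = n_blocks * 2 * d_model * d_ff
    return attn, ffn

# 6 blocks x 8 heads vs. a single block with 48 heads:
print(params(6, 8))   # (6291456, 12582912)
print(params(1, 48))  # (6291456, 2097152) -- same attention, 1/6 of the FFN
```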
| 133
|
transformer models
|
How can a transformer encoder attend to future tokens?
|
https://ai.stackexchange.com/questions/40614/how-can-a-transformer-encoder-attend-to-future-tokens
|
<p>What does attending to future tokens mean? From my understanding, the transformer model works by inputting a prompt and predicting the next word in a sequence and this process just keeps repeating while attending to the words from the prompt and the already generated words. However, the explanations for transformers all mention the prevention of attending to future tokens while generating the next word in a sequence. How do these models attend to a word that hasn’t been generated yet?</p>
|
<p>Most of the recent hype around transformers is about the <a href="https://ai.stackexchange.com/questions/40179/how-does-the-decoder-only-transformer-architecture-work">decoder-only transformer architecture</a>. This architecture is different from the originally introduced <a href="https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" rel="nofollow noreferrer">traditional transformer</a> which features both an encoder and a decoder. <strong>A transformer only attends to future tokens in the encoder.</strong> Hence, models such as chatGPT, GPT-4 etc, do not attend to future tokens.</p>
<p><strong>The traditional transformer (featuring an encoder and a decoder) was introduced for the task of language translation</strong>. Hence, you would input a prompt of language A into the encoder and the decoder would output that same prompt in language B. In language translation tasks, you need to attend to future tokens. In different languages, pieces of sentences are ordered differently. You cannot simply translate each word at the same place.</p>
<p>Hence, the encoder of the traditional transformer model attends to future tokens. It can do so because the whole input prompt is fed into the encoder at once. You have to think of the transformer model like a standard Multi-Layer Perceptron (MLP). An MLP calculates the output for the whole input at once. The transformer does the same. It uses information from the complete input prompt to calculate what parts of the input prompt to attend to. This can be at the start of the prompt, but also at the end of the prompt. The decoder then uses this information, as well as the previously translated words, to generate the next translated word/token.</p>
<p>Lots of confusion currently surrounds transformers because it is not clearly explained that 'the transformer' comes in different forms. The most popular transformer right now is the decoder-only transformer (<a href="https://ai.stackexchange.com/questions/40179/how-does-the-decoder-only-transformer-architecture-work">see full explanation here</a>). It is much easier to understand how the traditional transformer works by first understanding how the decoder-only transformer works. So I'd recommend reading the linked post and then thinking again about whether you now understand the answer to this question. I expect you to easily answer your current question once you completely understand the decoder-only transformer architecture.</p>
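<p>The difference can be made concrete with the masks themselves (a minimal numpy sketch): the encoder applies no mask at all, while a decoder-style causal mask blocks everything above the diagonal.</p>

```python
import numpy as np

n = 4  # toy sequence length

# Encoder self-attention: no mask, so every position may attend to every
# other position, including "future" tokens in the input prompt.
encoder_mask = np.zeros((n, n))

# Decoder / decoder-only (GPT-style) self-attention: positions above the
# diagonal are masked, so a token never attends to tokens after it.
causal_mask = np.triu(np.ones((n, n)), k=1)
print(causal_mask)
# [[0. 1. 1. 1.]
#  [0. 0. 1. 1.]
#  [0. 0. 0. 1.]
#  [0. 0. 0. 0.]]
```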
| 134
|
transformer models
|
What are Transformers worse at than classical deep learning models
|
https://ai.stackexchange.com/questions/48329/what-are-transformers-worse-at-than-classical-deep-learning-models
|
<p><strong>Background:</strong> It's well-known that transformers offer great performance across a large number of tasks, and this is largely a consequence of their powerful attention mechanisms and flexibility. However, in this post I'm curious about the <em>opposite</em>:</p>
<p><strong>Question:</strong> Namely, what are some <em>well-known</em> cases, problems, or real-world applications where transformers don't perform very well <strong>especially compared to classical deep learning models</strong> <em>such as MLPs, CNNs, etc</em>...</p>
<hr />
<p><strong>Specification:</strong>
I'm looking for specific <em>regression</em> tasks (with accessible datasets) where transformers are not the state-of-the-art. The regression task should be "naturally suitable", so no sequential or time-dependent data (in which case an autoregressive transformer would be more natural).</p>
<hr />
<p>This is a follow-up to the MLP question I asked <a href="https://ai.stackexchange.com/questions/18576/what-are-some-well-known-problems-where-neural-networks-dont-do-very-well">some years back here</a></p>
|
<p>In edge computing or real-time IoT regression applications, the transformer's quadratic attention time complexity and large parameter counts make it impractical; other DL models such as LSTMs, which have a much lower memory footprint, are clearly more advantageous there.</p>
<p>Furthermore, even for sequential or time-series data without real-time efficiency considerations, an autoregressive transformer would not be more natural in all cases, as you claimed. Transformers are comparatively good at capturing long-range dependencies, but not for all kinds of time series, such as chaotic ones; see a recent paper by Valle & Bruno (2025), <em>"<a href="https://www.sciencedirect.com/science/article/abs/pii/S0960077925000475" rel="nofollow noreferrer">Forecasting chaotic time series: Comparative performance of LSTM-based and Transformer-based neural networks</a>"</em>.</p>
<blockquote>
<p>The complexity and sensitivity to initial conditions are the main characteristics of chaotic dynamical systems, making long-term forecasting a significant challenge. Deep learning, however, is a powerful technique that can potentially improve forecasting in chaotic time series. In this study, we explored the performance of modern neural network architectures in forecasting chaotic time series with different Lyapunov exponents. To accomplish this, we created a robust dataset composed of chaotic orbits with Lyapunov exponents ranging from 0.019 to 1.253 and used state-of-the-art neural network models for time series forecasting, including recurrent-based and transformer-based architectures. Our results show that LSTNet presents the best results in one-step-ahead and the recursive one-step-ahead forecasting for the majority of the time series in our dataset, enabling the prediction of chaotic time series with high Lyapunov exponent. Additionally, we observed that the sensitivity to initial conditions and complexity still affects the performance of the neural networks, decaying predictive power in time series with larger Lyapunov exponent.</p>
</blockquote>
<p>In summary, transformers struggle with multi-step forecasting of high-Lyapunov-exponent chaotic time series due to error accumulation, while LSTNet, which combines convolutional and recurrent components, effectively captures both the short-term patterns and long-term dependencies inherent in chaotic time series.</p>
| 135
|
transformer models
|
Determining optimal data size for generalization in transformer encoders, particularly for Time-Series signal data
|
https://ai.stackexchange.com/questions/45708/determining-optimal-data-size-for-generalization-in-transformer-encoders-partic
|
<p>I'm currently experimenting with training a model that employs a single transformer encoder on time-series signal data. Despite having a relatively small dataset of around 50 examples, each with a sequence length of approximately 1000, the model seems to excel at understanding and memorizing these examples. However, I'm concerned about its generalization capabilities given the limited amount of data.</p>
<p>I'm wondering: <strong>How much data is typically required for a transformer encoder to generalize well,</strong> especially in the context of time-series signal data? Is there a recommended range or guideline for the amount of training data that can help ensure better generalization performance? Additionally, are there specific strategies or techniques that can enhance generalization in transformer models when working with small datasets?</p>
<p>Any insights, experiences, or references would be greatly appreciated. Thank you!</p>
|
<p>In order for a neural network to generalize well, it has to be trained on a large amount of data; if the model is shown only a few training samples, it will be biased towards those samples and will almost certainly perform poorly on unseen data.</p>
<p>Transformers are data-hungry models and won't generalize from a few samples (50 in your case).</p>
<p>The best suggestion for your use case is to use a pretrained model that has been trained extensively on time-series data. The best example would be <a href="https://github.com/google-research/timesfm" rel="nofollow noreferrer">timesfm from Google Research</a>. Pick this pre-trained model as the starting point for your weights and fine-tune it with your custom training samples.</p>
| 136
|
transformer models
|
Redundancy of Value-Projection-matrix in Multi-headed attention in Transformer model
|
https://ai.stackexchange.com/questions/39867/redundancy-of-value-projection-matrix-in-multi-headed-attention-in-transformer-m
|
<p>In the original transformer paper "Attention is all you need" in section 3.2.2 it is written:</p>
<p>Instead of performing a single attention function with dmodel-dimensional keys, values and queries, we found it beneficial to linearly <strong>project the queries, keys and values h times</strong> with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional output values. These are <strong>concatenated and once again projected</strong>, resulting in the final values, as depicted in Figure 2.</p>
<p>I am wondering why you need to project the values h times if you concatenate and project them once again in the end. It seems to me that this just results in two matrices being learned that are multiplied with each other, which should have the same expressiveness as a single matrix. So, in my understanding, you could just leave out the h projection steps and simply do the final projection. Am I missing something?</p>
|
<p>This performs <span class="math-container">$h$</span> different attention lookups in parallel. The amount of computation is kept the same, though, so each one is smaller. They found that multiple smaller lookups were more useful than one big lookup.</p>
<p>The matrices are not just split, projected and concatenated. They are split, projected, <strong>the attention lookup is performed on each set of pieces</strong> and then the results of the attention function are concatenated.</p>
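<p>To see the ordering concretely, here is a minimal numpy sketch (toy dimensions): the attention lookup, with its own softmax weights per head, sits between the per-head value projections and the final output projection, so the two projections are separated by the lookup rather than multiplied directly.</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, d_model, h = 5, 8, 2
d_head = d_model // h
X = rng.normal(size=(n, d_model))

heads = []
for _ in range(h):
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d_head))  # each head's own attention weights
    heads.append(A @ V)                     # lookup happens on the projected V

W_o = rng.normal(size=(d_model, d_model))   # final projection after concat
out = np.concatenate(heads, axis=-1) @ W_o
print(out.shape)  # (5, 8)
```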
| 137
|
transformer models
|
What is the purpose of Decoder mask (triangular mask) in Transformer?
|
https://ai.stackexchange.com/questions/23889/what-is-the-purpose-of-decoder-mask-triangular-mask-in-transformer
|
<p>I'm trying to implement a transformer model using <a href="https://www.tensorflow.org/tutorials/text/transformer#create_the_transformer" rel="noreferrer">this tutorial</a>. In the decoder block of the Transformer model, a mask is passed to "<strong>pad and mask future tokens in the input received by the decoder</strong>". This mask is added to the attention weights.</p>
<pre><code>import tensorflow as tf
def create_look_ahead_mask(size):
    # Upper-triangular matrix of ones: entry (i, j) equals 1 when column j
    # comes after row i, i.e. when token j lies in token i's future.
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask
</code></pre>
<p>Now my question is: how is this step (adding the mask to the attention weights) equivalent to revealing the words to the model one by one? I simply can't grasp the intuition of its role. Most tutorials won't even mention this step, as if it were very obvious. Please help me understand. Thanks.</p>
|
<p>The Transformer model presented in this tutorial is an auto-regressive Transformer, which means that the prediction of the next token depends only on its previous tokens.</p>
<p>So, in order to predict the next token, you have to make sure that only the previous tokens are attended to. (If not, this would be cheating, because the model would already know what comes next.)</p>
<p>So attention mask would be like this<br />
[0, 1, 1, 1, 1]<br />
[0, 0, 1, 1, 1]<br />
[0, 0, 0, 1, 1]<br />
[0, 0, 0, 0, 1]<br />
[0, 0, 0, 0, 0]<br /></p>
<p>For example: If you are translating English to Spanish<br />
<strong>Input: How are you ?<br /></strong>
<strong>Target: < start > Como estas ? < end ><br /></strong>
Then decoder will predict something like this<br />
< start > (it will be given to decoder as initial token)<br />
< start > Como<br />
< start > Como estas<br />
< start > Como estas ?<br />
< start > Como estas ? < end ><br /></p>
<p>Now compare these step-by-step prediction sequences to the attention mask given above; it should make sense to you now.</p>
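<p>A minimal numpy sketch (the tutorial does the same thing in TensorFlow) shows why adding the mask works: masked positions get a huge negative logit, so the softmax assigns them essentially zero attention weight.</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

size = 4
scores = np.ones((size, size))              # stand-in for attention logits
mask = np.triu(np.ones((size, size)), 1)    # 1s above the diagonal = future
weights = softmax(scores + mask * -1e9)     # masked logits vanish in softmax
print(weights.round(2))
# Row i spreads its attention only over positions 0..i; future weights are 0.
```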
| 138
|
transformer models
|
Embedding from Transformer-based model from paragraph or documnet (like Doc2Vec)
|
https://ai.stackexchange.com/questions/27471/embedding-from-transformer-based-model-from-paragraph-or-documnet-like-doc2vec
|
<p>I have a set of data that contains the different lengths of sequences. On average the sequence length is 600. The dataset is like this:</p>
<pre><code>S1 = ['Walk','Eat','Going school','Eat','Watching movie','Walk'......,'Sleep']
S2 = ['Eat','Eat','Going school','Walk','Walk','Watching movie'.......,'Eat']
.........................................
.........................................
S50 = ['Walk','Going school','Eat','Eat','Watching movie','Sleep',.......,'Walk']
</code></pre>
<p>The number of unique actions in the dataset is fixed. That means some sequences may not contain all of the actions.</p>
<p>By using Doc2Vec (the Gensim library in particular), I was able to extract an embedding for each of the sequences and use it for downstream tasks (i.e., clustering or similarity measures).</p>
<p>As transformers are the state-of-the-art method for NLP tasks, I am wondering whether a Transformer-based model can be used for a similar task. While searching for this technique, I came across "sentence-transformers": <a href="https://github.com/UKPLab/sentence-transformers" rel="nofollow noreferrer">https://github.com/UKPLab/sentence-transformers</a>. But it uses a pretrained BERT model (which is for natural language, whereas my case is not) to encode the sentences. Is there any way I can get embeddings from my dataset using a Transformer-based model?</p>
| 139
|
|
transformer models
|
How to 'induce' or 'teach' pretrained model to a continous Transformer token-pruning Algorithm
|
https://ai.stackexchange.com/questions/46791/how-to-induce-or-teach-pretrained-model-to-a-continous-transformer-token-pru
|
<p>I am currently looking for ways to improve Transformer performance in image processing, especially in image segmentation. I found this <a href="https://arxiv.org/abs/2112.13890" rel="nofollow noreferrer">paper</a> by Kong, Z., et al., called "SPViT: Enabling Faster Vision Transformers via Latency-aware Soft Token Pruning", in which (TL;DR) they run multiple rounds of Transformer training and pruning until they reach a balance between accuracy and model weight. So basically, what the paper does is try to preserve accuracy with lighter training by pruning the unimportant tokens produced by each Transformer head.</p>
<p>From what I understand, they first run the transformer at 'maximum performance cost' because the tokens are initially unpruned; then the cost decreases as tokens are pruned, eventually reaching the expected lighter yet still accurate training. So, from the images, I expect a maximum training cost (which I'm actually trying to avoid) that lowers as the tokens are increasingly pruned in Phase 3</p>
<p><a href="https://i.sstatic.net/cWv5lCwg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWv5lCwg.png" alt="SP-ViT in action" /></a></p>
<p>My question is: is there a way I can 'skip' the 'maximum-cost Transformer training' by inducing a model from somewhere else, like from <a href="https://github.com/rstrudel/segmenter" rel="nofollow noreferrer">segmenter-github-by-Strudel, et al.</a>? But isn't that called inference? How do I transfer the 'context dictionary' from one model to another?</p>
<p>That way, I could transfer these 'contexts' from an image I just ran inference on with a pretrained model to the Transformer in the SP-ViT algorithm.</p>
<p><a href="https://i.sstatic.net/MR8b6pBW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MR8b6pBW.png" alt="inference-result" /></a></p>
<p>What should I actually be looking for?</p>
| 140
|
|
transformer models
|
Is a small transformer model able to effectively handle any input length provided it is fine-tuned on it?
|
https://ai.stackexchange.com/questions/45949/is-a-small-transformer-model-able-to-effectively-handle-any-input-length-provide
|
<p>Suppose we have a transformer LLM which can do a task such as summarising.</p>
<p>I know a transformer <em>can</em> technically handle any input length (assume we are not using learned positional embeddings) because the architecture doesn't define a fixed length for the input. However, the quadratic complexity and the unavailability of long text data put a practical limit on the sequence lengths used for training, and models are not effective beyond the sequence length they are trained on.</p>
<p>Suppose we have enough hardware resources and data of a particular long input length. Will the model be able to perform the <em>same</em> task effectively at that length, say, 1 million tokens? Or will we need a model with more parameters?</p>
<p>I would appreciate any reason or empirical result which shows this would or wouldn’t work. Also will this depend on the complexity of a task? If so why would the said complexity increase with input length?</p>
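<p>As a back-of-the-envelope illustration of the quadratic cost mentioned above (fp16, one attention-score matrix per head per layer; real implementations avoid materializing this, but it conveys the scale):</p>

```python
# Memory for a single fp16 attention-score matrix (2 bytes per entry);
# the quadratic growth is what makes naive 1M-token attention impractical.
for n in (4_096, 131_072, 1_000_000):
    gib = 2 * n * n / 2**30
    print(f"{n:>9} tokens -> {gib:,.2f} GiB")
#      4096 tokens -> 0.03 GiB
#    131072 tokens -> 32.00 GiB
#   1000000 tokens -> 1,862.65 GiB
```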
| 141
|
|
transformer models
|
Is the multi-head attention in the transformer a weighted adjacency matrix?
|
https://ai.stackexchange.com/questions/32036/is-the-multi-head-attention-in-the-transformer-a-weighted-adjacency-matrix
|
<p>Are multi-head attention matrices weighted adjacency matrices?</p>
<p>The job of the multi-head attention mechanism in transformer models is to determine how likely a word is to appear after another word. In a sense, this makes the resulting matrix a big graph with nodes and edges, where a node represents a word and an edge represents the likelihood of one word appearing after another. So basically, it is an adjacency matrix that is created.</p>
|
<p>Short answer, yes I believe we can! One way feels more meaningful than the other. First, let's look at some nuance in the definition of attention. If <span class="math-container">$\text{score}(x_i, x_j) = \text{score}(x_j, x_i)$</span>, then the attention matrix is symmetric and naturally has the form of a weighted adjacency matrix. For example, this happens when attention is given by a simple dot product <span class="math-container">$\text{score}(x_i, x_j) = \langle x_i, x_j \rangle = x_i^Tx_j$</span>. This also happens if we have a learnable matrix <span class="math-container">$A$</span> and <span class="math-container">$\text{score}(x_i, x_j) = x_i^TAx_j$</span> if <span class="math-container">$A$</span> is symmetric. We can then view the attention matrix <span class="math-container">$\alpha_{i,j} = \text{Attn}_{i,j}(X)$</span> as a weighted adjacency matrix where the nodes represent input tokens, and edge weights correspond to similarity scores (as defined by the inner product, scaled inner product, or the symmetric matrix <span class="math-container">$A$</span>). Now, for the following definition of attention, this makes a little less sense, as the weight on the edge connecting the token nodes for <span class="math-container">$x_i$</span> and <span class="math-container">$x_j$</span> is not, in general, the same as the weight on the edge connecting <span class="math-container">$x_j$</span> to <span class="math-container">$x_i$</span>. Suppose <span class="math-container">$X \in \mathbb{R}^{d \times n}$</span> has as columns <span class="math-container">$X_i$</span> the <span class="math-container">$d$</span>-dimensional embeddings of the <span class="math-container">$n$</span>-tokens <span class="math-container">$x_1, x_2, ..., x_n$</span> from your input. Now, let</p>
<p><span class="math-container">$$W_QX = Q$$</span>
<span class="math-container">$$W_KX = K$$</span>
<span class="math-container">$$W_VX = V$$</span></p>
<p>be the learned weight matrices giving the "query" <span class="math-container">$q_i = W_QX_i$</span>, "key" <span class="math-container">$k_i = W_KX_i$</span>, and "value" <span class="math-container">$v_i = W_VX_i$</span> vectors. Then we can define attention as</p>
<p><span class="math-container">\begin{align}
\text{Attn}(X) &= \text{softmax}\left( \frac{Q^TK}{\sqrt{d}} \right)V \\
&= \text{softmax}_j\left(\text{score}(x_i, x_j)\right)
\end{align}</span></p>
<p>From this we can derive,</p>
<p><span class="math-container">$$ \text{Attn}_{i,j}(X) = \frac{\exp \left( \frac{\langle q_i, k_j \rangle}{\sqrt{d}}\right)}{\sum_k \exp\left( \frac{\langle q_i, k_k \rangle}{\sqrt{d}} \right)}.$$</span></p>
<p>Now, note, we have turned each column into a probability distribution by applying the softmax so we have</p>
<p><span class="math-container">$$
P(X_i) = \text{softmax}\begin{pmatrix}
\frac{\langle q_i, k_1 \rangle}{\sqrt{d}}\\
\frac{\langle q_i, k_2 \rangle}{\sqrt{d}}\\
\vdots \\
\frac{\langle q_i, k_n \rangle}{\sqrt{d}}
\end{pmatrix}.
$$</span></p>
<p>Now, if we adjust for masked self-attention, we can represent the attention mechanism as a directed graph (with weighted self-loops): the entries in the attention matrix above the diagonal are zero, so every edge is directed at a token node that comes before the attending token.</p>
<p>There is another way to understand attention using graphs when we view attention through the lens of (complete) graph attention networks as explained <a href="https://docs.dgl.ai/en/0.8.x/tutorials/models/4_old_wines/7_transformer.html" rel="nofollow noreferrer">here</a>.</p>
<p>There is also a visualization you can find <a href="https://colab.research.google.com/drive/1hXIQ77A4TYS4y3UthWF-Ci7V7vVUoxmQ?usp=sharing#scrollTo=-QnRteSLP0Hm" rel="nofollow noreferrer">here</a> that shows the attention mechanism as a graph. It is equivalent to the graph attention formulation, but is more interactive and pretty.</p>
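<p>A minimal numpy sketch of the construction above (toy dimensions): each row of the softmaxed score matrix is a probability distribution, so the matrix can be read as the weighted adjacency matrix of a directed graph on the tokens, and it is not symmetric in general.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3
X = rng.normal(size=(d, n))                  # columns are token embeddings
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
scores = (Wq @ X).T @ (Wk @ X) / np.sqrt(d)  # score(x_i, x_j); not symmetric

A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)            # softmax over j for each query i

# A is a weighted adjacency matrix of a directed graph on the tokens:
# edge i -> j has weight A[i, j], and each node's out-weights sum to 1.
print(A.sum(axis=1))  # [1. 1. 1.]
```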
| 142
|
transformer models
|
Is using a LSTM, CNN or any other neural network model on top of a Transformer(using hidden states) overkill?
|
https://ai.stackexchange.com/questions/25962/is-using-a-lstm-cnn-or-any-other-neural-network-model-on-top-of-a-transformeru
|
<p>I have recently come across transformers; I am new to Deep Learning. I have seen a <a href="https://arxiv.org/pdf/2007.10819.pdf" rel="nofollow noreferrer">paper</a> using a CNN and BiLSTM on top of a transformer: the paper uses a transformer (XLM-R) for sentiment analysis in a code-mixed domain. But many of the blogs only use a normal feed-forward network on top of the transformer.</p>
<p>I am trying to use transformers for sentiment analysis, short text classification.</p>
<p>Is it overkill to use models like CNN and BiLSTM on top of the transformer considering the size of the data it is trained on and its complexity?</p>
| 143
|
|
transformer models
|
How is context length calculated in transformers, and why isn't hardware specification considered?
|
https://ai.stackexchange.com/questions/46803/how-is-context-length-calculated-in-transformers-and-why-isnt-hardware-specifi
|
<p>I'm trying to understand how context length is calculated in transformer models, especially in NLP tasks using open-source models like llama, which can be run locally. I'm confused about how this is defined and whether hardware specifications (like GPU/CPU memory) affect the context length.</p>
<p>From what I understand, the maximum context length might be limited by memory, but I don't often see hardware specifications mentioned when discussing context length limits. For models that can be run locally, how exactly does hardware influence this, and why isn't it typically part of the discussion of context length for transformer models?
For example, when they say Llama 3.1's context length is 128K, what requirements is that figure based on?</p>
|
<p><strong>Context length is fixed</strong></p>
<p>128K refers to Llama 3.1's ability to take up to 128,000 tokens in a single forward pass. This limit is fixed during training.</p>
<p><strong>Inference on local hardware</strong></p>
<p>When running inference locally (assuming your question isn't about hardware for training), your hardware significantly influences the speed of the model's performance but doesn't affect the fixed context length. There are optimization techniques, such as offloading model layers between RAM and VRAM and using quantization to reduce floating-point precision (which may impact response quality).</p>
<p><strong>But why "128"?</strong></p>
<p>According to <a href="https://arxiv.org/pdf/2407.21783" rel="nofollow noreferrer">the paper</a>:</p>
<blockquote>
<p>We increase the supported context length in increments, pre-training until the model has successfully adapted to the increased context length. We assess successful adaptation by measuring whether (1) model performance on short-context evaluations has recovered completely and (2) the model perfectly solves "needle in a haystack" tasks up to that length. In Llama 3 405B pre-training, we increased context length gradually in six stages, starting from the original 8K context window and ending in the final 128K context window. This long-context pre-training stage was performed using approximately 800B training tokens.</p>
</blockquote>
<p>The <a href="https://www.youtube.com/watch?v=KwRRuiCCdmc" rel="nofollow noreferrer">"needle in a haystack"</a> test involves hiding a key fact within a large amount of information. The model is then asked a question that requires that specific fact to answer correctly. This tests the model's ability to recover that small fact from such a large context; for instance, it would catch a model answering solely from pre-existing knowledge in its weights rather than using the context effectively.</p>
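<p>A toy version of this test can be sketched in a few lines of plain Python. The filler text, needle, and keyword are all invented for illustration, and a simple string search stands in for the language model:</p>

```python
# Toy "needle in a haystack" check: bury one key fact in a long filler
# context and verify it can be recovered.

def make_haystack(num_lines, needle_line, needle):
    lines = ["Filler sentence number %d about nothing in particular." % i
             for i in range(num_lines)]
    lines[needle_line] = needle  # bury the key fact
    return "\n".join(lines)

def retrieve_needle(context, keyword):
    # Stand-in for the model: return the line containing the keyword.
    for line in context.split("\n"):
        if keyword in line:
            return line
    return None

needle = "The secret code is 4217."
context = make_haystack(1000, 381, needle)
print(retrieve_needle(context, "secret code"))  # recovers the buried fact
```

<p>A real evaluation asks the model a question whose answer is the needle, at progressively longer context lengths, and checks whether the fact is still recovered.</p>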
<p>It appears that the 128K figure was chosen based on the quality of the responses rather than on hardware specifications.</p>
<p>P.S.: It's a bit odd to me that they tried "six stages" from 8K to 128K instead of five (8K, 16K, 32K, 64K, 128K).</p>
| 144
|
transformer models
|
Language models of virtual assistants before transformers
|
https://ai.stackexchange.com/questions/47883/language-models-of-virtual-assistants-before-transformers
|
<p>How were the language models of virtual assistants (Siri, Google Assistant, Alexa, etc.) designed and trained before transformers? What were their architectures?</p>
|
<p>Before transformers, for the <a href="https://en.wikipedia.org/wiki/Natural_language_understanding" rel="nofollow noreferrer">natural language understanding (NLU)</a> NLP task there was already the encoder-decoder architecture, where LSTM and GRU RNN models act as the encoder transforming the input sequence, and <a href="https://en.wikipedia.org/wiki/Conditional_random_field" rel="nofollow noreferrer">conditional random fields (CRFs)</a> were applied for <a href="https://en.wikipedia.org/wiki/Structured_prediction" rel="nofollow noreferrer">structured prediction</a>. Before those, there were rule-based and statistical language models, where hand-crafted rules handle the common cases encountered by the virtual assistant while statistical models such as n-gram/TF-IDF handle edge cases or ambiguities.</p>
<blockquote>
<p>in natural language processing, "linear chain" CRFs are popular, for which each prediction is dependent only on its immediate neighbours.</p>
</blockquote>
<p>Compared to transformers, they relied heavily on handcrafted features and struggled with understanding long-range complex queries or multi-turn conversational context. Updating these models to handle a new domain or new intents required significant effort from scratch, since they are not scalable, generalizable, and transferable like foundation transformer models.</p>
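<p>As a concrete illustration of the statistical side, here is a minimal bigram next-word model; the toy corpus is invented for the example:</p>

```python
from collections import Counter, defaultdict

# Minimal bigram language model: count which word follows which,
# then predict the most frequent continuation.
corpus = "set a timer set an alarm set a reminder play a song".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("set"))  # 'a' (seen twice after "set", vs 'an' once)
```

<p>Real assistants used much larger n-gram tables with smoothing, but the principle is the same: prediction by counted co-occurrence rather than learned contextual representations.</p>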
| 145
|
transformer models
|
How to implement or avoid masking for transformer?
|
https://ai.stackexchange.com/questions/23703/how-to-implement-or-avoid-masking-for-transformer
|
<p>When it comes to using Transformers for image captioning is there any reason to use masking?</p>
<p>I currently have a resnet101 encoder and am trying to use the features as the input for a transformer model in order to generate a caption for the image, is there any need to use masking? and what would I mask if I did need to?</p>
<p>Any help would be much appreciated</p>
<p>Thanks in advance.</p>
|
<p>If you're using a library such as <a href="https://github.com/google/trax" rel="nofollow noreferrer">Trax</a> which contains great submodules for various Transformers (Skipping, BERT, Vanilla and Reformer) you can use the inbuilt <code>trax.data.inputs.add_loss_weights()</code> function and provide a value for the <code>id_to_mask</code> parameter.</p>
<p>Example Usage:</p>
<pre><code>train_generator = trax.data.inputs.add_loss_weights(
data_generator(batch_size, x_train, y_train,vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
</code></pre>
<p>Here are some resources for building Transformers in Trax:</p>
<ul>
<li><p><a href="https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/NER%20using%20Reformer.ipynb" rel="nofollow noreferrer">Named Entity Recognition with Reformer</a> (Masking Implemented) and the associated <a href="https://sauravmaheshkar.github.io/fastpages/transformers/jupyter/nlp/2020/11/29/_11_19_NER_Using_Reformer.html" rel="nofollow noreferrer">blog post</a> (contains an in-depth explanation of NER and Transformers)</p>
</li>
<li><p><a href="https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/Deep%20N-Gram.ipynb" rel="nofollow noreferrer">Deep N-Gram Models</a> (without masking but I recommend you go through these first to get a hang of the library)</p>
</li>
</ul>
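<p>Conceptually, supplying <code>id_to_mask</code> just zeroes the loss weight wherever the target equals that id, so padded positions contribute nothing to the loss. A minimal NumPy sketch of the same idea (the <code>PAD_ID</code> value and the dummy per-token losses are arbitrary illustrations):</p>

```python
import numpy as np

PAD_ID = 0  # arbitrary padding id for this sketch

def loss_weights(targets, id_to_mask=PAD_ID):
    # 1.0 for real tokens, 0.0 for padding, so PAD positions
    # contribute nothing to the averaged loss.
    return (targets != id_to_mask).astype(np.float32)

targets = np.array([[5, 9, 2, 0, 0],
                    [7, 3, 0, 0, 0]])
per_token_loss = np.ones_like(targets, dtype=np.float32)  # dummy losses
w = loss_weights(targets)
masked_mean = (per_token_loss * w).sum() / w.sum()
print(w)            # zeros at PAD positions
print(masked_mean)  # average over the 5 real tokens only
```

<p>For image captioning the same mechanism applies to the caption targets: you mask the padding of the caption sequence, not the image features.</p>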
| 146
|
transformer models
|
Is it good practice to save NLP Transformer based pre-trained models into file system in production environment
|
https://ai.stackexchange.com/questions/22700/is-it-good-practice-to-save-nlp-transformer-based-pre-trained-models-into-file-s
|
<p>I have developed a multi label classifier using BERT. I'm leveraging Hugging Face Pytorch implementation for transformers.</p>
<p>I have saved the pretrained model into the file directory in dev environment. Now, the application is ready to be moved the production environment.</p>
<p>Is it good practice to save the models into the file system in prod?
Can I serialize the model files and word embeddings into a DB and read them back?</p>
| 147
|
|
transformer models
|
Why use exponential and log in Positional Encoding of Transformer
|
https://ai.stackexchange.com/questions/41670/why-use-exponential-and-log-in-positional-encoding-of-transformer
|
<p>This code snippet is from <a href="https://huggingface.co/blog/annotated-diffusion" rel="nofollow noreferrer">here</a> under the section named "Position embeddings".</p>
<pre><code>class SinusoidalPositionEmbeddings(nn.Module):
def __init__(self, dim):
super().__init__()
self.dim = dim
def forward(self, time):
device = time.device
half_dim = self.dim // 2
embeddings = math.log(10000) / (half_dim - 1)
embeddings = torch.exp(torch.arange(half_dim, device=device) * -embeddings)
embeddings = time[:, None] * embeddings[None, :]
embeddings = torch.cat((embeddings.sin(), embeddings.cos()), dim=-1)
return embeddings
</code></pre>
<p>The code implements the positional encoding that was introduced in the Transformer model.<br />
I don't understand why we have to use <code>torch.exp</code> and <code>math.log</code>. Is this related to scaling or a non-negativity issue?</p>
<p>Many thanks ahead!</p>
|
<p>The current code just implements the now-standard expressions for positional embeddings given in the original transformer paper (<a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">Attention is all you need</a>).</p>
<p>In section 3.5 of this paper they suggest the formula
<span class="math-container">$$PE_{(pos,2i)} = \sin \left(\frac{pos}{10000^{2i/d_{model}}}\right)$$</span>
<span class="math-container">$$PE_{(pos, 2i+1)} = \cos\left(\frac{pos}{10000^{2i/d_{model}}}\right)$$</span></p>
<p>It turns out to be easier (or more efficient?) to implement in code using the standard maths identities <span class="math-container">$x = \exp \log x$</span> and <span class="math-container">$\log x^a = a \log x$</span>
<span class="math-container">$$\frac{pos}{10000^{2i/d_{model}}} = \exp\left(\log(pos) - \frac{2i}{d_{model}} \log(10000) \right)$$</span></p>
<p>Hopefully you can now see that this corresponds to the code you have given.</p>
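<p>The equivalence is easy to check numerically; a small NumPy sketch (the values of <code>pos</code> and <code>d_model</code> are arbitrary):</p>

```python
import numpy as np

pos, d_model = 7, 16
two_i = np.arange(0, d_model, 2)  # the even indices 2i from the formula

# Direct form from the paper: pos / 10000^(2i / d_model)
direct = pos / (10000 ** (two_i / d_model))

# Equivalent exp/log form, as in the code snippet
via_exp_log = np.exp(np.log(pos) - (two_i / d_model) * np.log(10000))

print(np.allclose(direct, via_exp_log))  # the two forms agree
```

<p>Working in log space like this avoids raising 10000 to many different fractional powers directly and reuses a single precomputed <code>log(10000)</code>.</p>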
<p>But perhaps you are interested in the question as to why the positional encodings are of this fairly whacky form. My understanding of the main intuition is that sines and cosines interact really nicely with translation due to their periodicity, and by using exponentially spaced 'frequencies' one can extract signals for interactions at a large number of different 'length scales'; see <a href="https://kazemnejad.com/blog/transformer_architecture_positional_encoding/" rel="nofollow noreferrer">this excellent blog post and associated links</a> for further explanation. But as I understand it, the main reason this clever idea is so widely used is because it works so well in practice.</p>
| 148
|
transformer models
|
Playing around with Transformer - accuracy not improving
|
https://ai.stackexchange.com/questions/40080/playing-around-with-transformer-accuracy-not-improving
|
<p>I am playing around with a decoder-only transformer model.</p>
<p>The Colab is here if you find that easier
<a href="https://colab.research.google.com/drive/1SHyJ9Oa3E4j1x8YFlXQbd1mjUjWhHGOV#scrollTo=60e13119" rel="nofollow noreferrer">https://colab.research.google.com/drive/1SHyJ9Oa3E4j1x8YFlXQbd1mjUjWhHGOV#scrollTo=60e13119</a>
or see the code below (should run with minimal deps in a notebook)</p>
<h3>Goal:</h3>
<ol>
<li>I am playing around to understand the transformer architecture. I wanted to build a minimal version of a decoder-only model and see if I can train the model to predict the last digit in this sequence (think of it as a recall function from a key-value store).</li>
<li>I want to get an intuitive understanding of the linear transformations the weight matrices apply to the sequence input plus positional encodings.</li>
</ol>
<pre><code>n9 v8 a5 p7 k0 j1 i3 e2 g6 c4 c - (should be 4)
d5 t3 q8 r7 y1 i0 c2 n9 s4 u6 i - (should be 0) ...
l8 u9 p5 y3 f7 k0 g6 v4 r1 x2 l
a7 x5 b6 v0 i1 f3 z9 d4 y2 k8 x
m9 h4 g5 t2 l3 f1 w7 b6 a8 j0 g
x0 g7 q9 u2 j8 v4 h3 o1 f5 r6 r
r6 c4 d0 p3 j2 g9 a7 n1 e8 l5 d
r2 z7 y6 x5 v4 u1 s3 a8 l9 p0 z
k0 u3 t1 r4 g8 p2 j5 x9 s7 v6 t
o7 a1 u3 r2 k6 j0 m8 y9 e4 c5 j
</code></pre>
<h3>Questions:</h3>
<ul>
<li>The accuracy is not improving, so I guess there is some fundamental issue with the model. It looks like it is learning that the answer is a digit, but not which digit.</li>
</ul>
<h3>Code:</h3>
<pre><code>import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim
import random
import string
import time
import math
if torch.cuda.is_available():
dev = "cuda:0"
else:
dev = "cpu"
device = torch.device(dev)
class PositionalEncoding(nn.Module):
"""
compute sinusoid encoding.
"""
def __init__(self, d_model, max_len, device):
"""
constructor of sinusoid encoding class
:param d_model: dimension of model
:param max_len: max sequence length
:param device: hardware device setting
"""
super(PositionalEncoding, self).__init__()
# same size with input matrix (for adding with input matrix)
self.encoding = torch.zeros(max_len, d_model, device=device)
self.encoding.requires_grad = False # we don't need to compute gradient
pos = torch.arange(0, max_len, device=device)
pos = pos.float().unsqueeze(dim=1)
# 1D => 2D unsqueeze to represent word's position
_2i = torch.arange(0, d_model, step=2, device=device).float()
# 'i' means index of d_model (e.g. embedding size = 50, 'i' = [0,50])
# "step=2" means 'i' multiplied with two (same with 2 * i)
self.encoding[:, 0::2] = torch.sin(pos / (10000 ** (_2i / d_model)))
self.encoding[:, 1::2] = torch.cos(pos / (10000 ** (_2i / d_model)))
# compute positional encoding to consider positional information of words
def forward(self, x):
# self.encoding
# [max_len = 512, d_model = 512]
batch_size, seq_len, d_model = x.size()
#print("seq_len: ", seq_len, " batch_size: ", batch_size, "d_model: ", d_model)
# [batch_size = 128, seq_len = 31]
#print("self.encoding: ", self.encoding[:seq_len, :].shape)
return self.encoding[:seq_len, :d_model]
# [seq_len = 30, d_model = 512]
# it will add with tok_emb : [128, 30, 512]
# Generate random strings
def random_string(length):
#return "1 2 3 4 5 6 7 8 9"
a = random.sample(string.ascii_lowercase, length)
d = random.sample(string.digits, length)
r = ' '.join([a[i] + d[i] for i in range(length)])
n = random.randint(0, length)-1
return r + " " + a[n] + d[n]
# Synthetic Dataset
class RandomStringDataset(Dataset):
def __init__(self, num_samples, seq_length):
self.num_samples = num_samples
self.seq_length = seq_length
self.data = [random_string(seq_length) for _ in range(num_samples)]
def __len__(self):
return self.num_samples
def __getitem__(self, idx):
input_seq = self.data[idx][:-1]
target_seq = self.data[idx][1:]
return input_seq, target_seq
# Tokenizer and detokenizer functions
def tokenize(text):
return [char for char in text]
def detokenize(tokens):
return ''.join(tokens)
# Map characters to indices and vice versa
vocab = string.ascii_lowercase + string.digits + " "
char_to_idx = {char: idx for idx, char in enumerate(vocab)}
idx_to_char = {idx: char for idx, char in enumerate(vocab)}
# Convert tokens to tensor
def tokens_to_tensor(tokens):
indices = [char_to_idx[token] for token in tokens]
return torch.tensor(indices, device=device)
# Convert tensor to tokens
def tensor_to_tokens(tensor):
return [idx_to_char[idx.item()] for idx in tensor]
def collate_fn(batch):
inputs, targets = zip(*batch)
input_tensors = [tokens_to_tensor(seq) for seq in inputs]
target_tensors = [tokens_to_tensor(seq) for seq in targets]
return torch.stack(input_tensors), torch.stack(target_tensors)
class DecoderOnlyTransformer(nn.Module):
def __init__(self, vocab_size, d_model, nhead, num_layers):
super(DecoderOnlyTransformer, self).__init__()
self.pe = PositionalEncoding(d_model, 128, device)
self.embedding = nn.Embedding(vocab_size, d_model)
self.transformer_decoder = nn.TransformerDecoder(
nn.TransformerDecoderLayer(d_model, nhead),
num_layers
)
self.fc = nn.Linear(d_model, vocab_size)
def forward(self, x):
x = self.embedding(x)
pe = self.pe(x)
#print("x: ", x.shape)
#print("pe: ", pe.shape)
x = x + pe
tgt = torch.zeros_like(x)
output = self.transformer_decoder(x, x)
output = self.fc(output)
return output
# Hyperparameters
seq_length = 10
batch_size = 16
num_samples = batch_size*1000
learning_rate = 0.001
num_epochs = 100
d_model = 4
nhead = 4
num_layers = 2
vocab_size = len(vocab)
# Create dataset
dataset = RandomStringDataset(num_samples, seq_length)
train_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn)
# Initialize model, loss, and optimizer
model = DecoderOnlyTransformer(vocab_size, d_model, nhead, num_layers)
model.to(device)
criterion = nn.CrossEntropyLoss()
lr = 5 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
for epoch in range(num_epochs):
correct = 0
count = 0
total_loss = 0
start_time = 0
log_interval = 100
num_batches = int(num_samples/batch_size)
for inputs, targets in train_loader:
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs.reshape(-1, len(vocab)), targets.flatten())
probabilities = torch.softmax(outputs[0, -1], dim=-1)
next_token_idx = torch.argmax(probabilities).item()
token = idx_to_char[next_token_idx]
if idx_to_char[targets[0, -1].item()] == token:
correct+=1
count+=1
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
total_loss += loss.item()
if count % log_interval == 0 and count > 0:
lr = scheduler.get_last_lr()[0]
ms_per_batch = (time.time() - start_time) * 1000 / log_interval
cur_loss = total_loss / log_interval
ppl = math.exp(cur_loss)
print(f'| epoch {epoch:3d} | {count:5d}/{num_batches:5d} batches | '
f'lr {lr:02.2f} | ms/batch {ms_per_batch:5.2f} | '
f'loss {cur_loss:5.2f} | ppl {ppl:8.2f} | '
f'Acc: {correct/count}'
)
total_loss = 0
start_time = time.time()
print(f'Epoch: {epoch + 1}/{num_epochs}, Loss: {loss.item()} Count: {count}')
</code></pre>
|
<p>OK, so the solution to this was that I was randomizing both the key (a-z) and the value (0-9). That was too hard for such a simple model to learn, so randomizing only the value worked. I also changed the positional embedding to be learned, which improved learning speed.</p>
<pre><code>import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim
import random
import string
import time
import math
if torch.cuda.is_available():
dev = "cuda:0"
else:
dev = "cpu"
device = torch.device(dev)
def flatten(l):
return [item for sublist in l for item in sublist]
def random_string(length):
a = string.ascii_lowercase[0:length]
d = random.sample(string.digits, length)
r = ' '.join([a[i] + d[i] for i in range(length)])
n = random.randint(0, length)-1
return r + " " + a[n] + d[n]
# Synthetic Dataset
class RandomStringDataset(Dataset):
def __init__(self, num_samples, seq_length):
self.num_samples = num_samples
self.seq_length = seq_length
self.data = [random_string(seq_length) for _ in range(num_samples)]
def __len__(self):
return self.num_samples
def __getitem__(self, idx):
input_seq = self.data[idx][:-1]
target_seq = self.data[idx][1:]
return input_seq, target_seq
# Tokenizer and detokenizer functions
def tokenize(text):
return [char for char in text]
def detokenize(tokens):
return ''.join(tokens)
# Map characters to indices and vice versa
vocab = string.ascii_lowercase + string.digits + " "
char_to_idx = {char: idx for idx, char in enumerate(vocab)}
idx_to_char = {idx: char for idx, char in enumerate(vocab)}
# Convert tokens to tensor
def tokens_to_tensor(tokens):
indices = [char_to_idx[token] for token in tokens]
return torch.tensor(indices, device=device)
# Convert tensor to tokens
def tensor_to_tokens(tensor):
return [idx_to_char[idx.item()] for idx in tensor]
def collate_fn(batch):
inputs, targets = zip(*batch)
input_tensors = [tokens_to_tensor(seq) for seq in inputs]
target_tensors = [tokens_to_tensor(seq) for seq in targets]
return torch.stack(input_tensors), torch.stack(target_tensors)
class DecoderOnlyTransformer(nn.Module):
def __init__(self, vocab_size, d_model, nhead, num_layers):
super(DecoderOnlyTransformer, self).__init__()
self.pe = nn.Embedding(32, d_model)
self.embedding = nn.Embedding(vocab_size, d_model)
self.transformer_decoder = nn.TransformerDecoder(
nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
num_layers
)
self.ln = nn.LayerNorm(d_model)
self.fc = nn.Linear(d_model, vocab_size)
def forward(self, x):
B, T = x.shape
x = self.embedding(x) + self.pe(torch.arange(T, device=device))
output = self.transformer_decoder(x, x)
output = self.ln(output)
output = self.fc(output)
return output
# Hyperparameters
seq_length = 10
batch_size = 16
num_samples = batch_size*1000
num_epochs = 10
d_model = 32
nhead = 4
num_layers = 2
vocab_size = len(vocab)
# Create dataset
dataset = RandomStringDataset(num_samples, seq_length)
train_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn)
# Initialize model, loss, and optimizer
model = DecoderOnlyTransformer(vocab_size, d_model, nhead, num_layers)
model.to(device)
criterion = nn.CrossEntropyLoss()
lr = 5 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
for epoch in range(num_epochs):
correct = 0
count = 0
total_loss = 0
start_time = 0
log_interval = 100
num_batches = int(num_samples/batch_size)
for inputs, targets in train_loader:
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs.reshape(-1, len(vocab)), targets.flatten())
probabilities = torch.softmax(outputs[0, -1], dim=-1)
next_token_idx = torch.argmax(probabilities).item()
token = idx_to_char[next_token_idx]
if idx_to_char[targets[0, -1].item()] == token:
correct+=1
count+=1
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
total_loss += loss.item()
if count % log_interval == 0 and count > 0:
lr = scheduler.get_last_lr()[0]
ms_per_batch = (time.time() - start_time) * 1000 / log_interval
cur_loss = total_loss / log_interval
ppl = math.exp(cur_loss)
print(f'| epoch {epoch:3d} | {count:5d}/{num_batches:5d} batches | '
f'lr {lr:02.2f} | ms/batch {ms_per_batch:5.2f} | '
f'loss {cur_loss:5.2f} | ppl {ppl:8.2f} | '
f'Acc: {correct/count}'
)
total_loss = 0
start_time = time.time()
print(f'Epoch: {epoch + 1}/{num_epochs}, Loss: {loss.item()} Count: {count}')
# Some evaluation
rs = random_string(10)
print(rs)
t = torch.argmax(torch.softmax(model(tokens_to_tensor(tokenize(rs))[:-1].reshape(1, -1)), dim=-1), dim=-1)
"".join(tensor_to_tokens(t[0]))
</code></pre>
| 149
|
reinforcement learning
|
Additional (Potential) Action for Agent in MazeGrid Environment (Reinforcement Learning)
|
https://ai.stackexchange.com/questions/21955/additional-potential-action-for-agent-in-mazegrid-environment-reinforcement-l
|
<p>In a classic <strong>GridWorld Environment</strong> where the possible <strong>actions</strong> of an agent are (Up, Down, Left, Right), can another potential output of Action be "x amount of steps" where the agent takes 2,3,.. steps in the direction (U,D,L,R) that it chooses? If so, how would one go about doing it?</p>
|
<p>You can definitely define an environment that accepts more types of action, including actions that take multiple steps in a direction.</p>
<p>The first thing you would need to do is implement support for that action in the environment. That is not really a reinforcement learning issue, but like implementing the rules of a board game. You will need to decide things such as what happens if the move would be blocked - does the move succeed up to the point of being blocked, does it fail completely, is the reward lower depending on how much the agent tries to overshoot, etc.</p>
<p>After you do that, you will want to write an agent that can choose the new actions. You have a few choices here:</p>
<ul>
<li><p>Simplest would be to enumerate all the choices separately and continue to use the same kind of agent as you already have. So instead of <span class="math-container">$\{U, D, L, R\}$</span> you might have <span class="math-container">$\{U1, U2, U3, D1, D2, D3, L1, L2, L3, R1, R2, R3\}$</span>.</p></li>
<li><p>If you want to take advantage of generalisation between similar actions (e.g. that action <span class="math-container">$U3$</span> is similar to <span class="math-container">$U2$</span> and also to <span class="math-container">$R3$</span>), then you can use some form of coding for the action, such as the relative x,y movement that it is attempting. So you could express <span class="math-container">$U2$</span> as <span class="math-container">$(0,2)$</span> and <span class="math-container">$L3$</span> as <span class="math-container">$(-3,0)$</span>. For that to then work with Q values, you cannot easily use a table. Instead, you would need to use function approximation, for example a neural network, so you can implement <span class="math-container">$q(s,a)$</span> as a parametric function - combining <span class="math-container">$s,a$</span> into the input vector, and learn the parameters to that the neural network outputs the correct action value. This is what the Q learning variation DQN can do, as well as other similar RL algorithms that use neural networks.</p></li>
</ul>
<p>Using a neural network, instead of tabular Q-learning, is not something you see often with grid world environments. It is a step up in complexity, but it is often required if state space or action space becomes large and might benefit from the generalisation possible from trainable function approximators.</p>
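<p>The simplest option above can be sketched in a few lines. The coordinate convention (x grows right, y grows up) and the maximum of 3 steps are assumptions for the example:</p>

```python
# Enumerate {U1..R3} as a flat discrete action set, and map each choice
# to the relative (dx, dy) movement used by the second (coded) option.
DIRECTIONS = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

ACTIONS = [(d, n) for d in "UDLR" for n in (1, 2, 3)]  # 12 discrete actions

def action_to_vector(action):
    direction, steps = action
    dx, dy = DIRECTIONS[direction]
    return (dx * steps, dy * steps)

print(action_to_vector(("U", 2)))  # (0, 2)
print(action_to_vector(("L", 3)))  # (-3, 0)
```

<p>With tabular Q-learning you would index Q-values by the 12 enumerated actions; with a function approximator you could instead feed the (dx, dy) vector in alongside the state.</p>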
| 150
|
reinforcement learning
|
When to apply reward for time series data?
|
https://ai.stackexchange.com/questions/22665/when-to-apply-reward-for-time-series-data
|
<p>Reading the paper 'Reinforcement Learning for FX trading 'at <a href="https://stanford.edu/class/msande448/2019/Final_reports/gr2.pdf" rel="nofollow noreferrer">https://stanford.edu/class/msande448/2019/Final_reports/gr2.pdf</a> it states:</p>
<blockquote>
<p>While our end goal is to be able to make decisions on a universal time
scale, in order to apply a reinforcement learning approach to this
problem with rewards that do not occur at each step, we formulate the
problem with a series of episodes. In each episode, which we designate
to be one hour long, the agent will learn the make decisions to
maximize the reward (return) in that episode, given the time series
features we have.</p>
</blockquote>
<p>This may be a question for the authors, but is it not better in RL to apply rewards at each time step instead of "rewards that do not occur at each step"? If we apply rewards at each time step, then the RL algorithm will achieve better convergence properties as a result of learning at smaller time intervals rather than waiting for "one hour". Why not apply rewards at each time step?</p>
| 151
|
|
reinforcement learning
|
RL: Encoding action conditioned on previous action
|
https://ai.stackexchange.com/questions/25400/rl-encoding-action-conditioned-on-previous-action
|
<p>I have a card game where on a player's turn, the player sequentially draws two cards. Each card may be drawn from another player's discard stack (face up), or from the deck (face down).</p>
<p>Thinking how to encode this into an action space, I could naively assume the two draws are independent. The action space would simply be a binary vector of 2 * (1 + (number_of_players - 1)), which I could post-filter to limit for empty draw piles (and can't draw from own pile).</p>
<p>However, when playing the game myself, I noticed that it's sometimes advantageous to draw the initial card from the deck, then select the draw pile for the second card based on the value of the first one drawn. But how would this be encoded into an action space? Would it be better to think of these as two separate actions, even though they are part of the same "turn"?</p>
|
<p>It is hard to say for certain without knowing full details and results of experiments.</p>
<p>However, if the game allows for splitting decisions up, it will likely be better for the agent to take advantage of extra knowledge of the value of any previously hidden card just taken from the draw pile.</p>
<p>In general, if each player decision is taken sequentially, resulting in changes to state, then it is a separate action on a separate time step according to the MDP theoretical model used in reinforcement learning (RL). You might want to describe/notate the time steps differently so that they match how the game play proceeds. However, for the purposes of RL, each decision point should be on a new time step, and should result in a new state, new value estimates etc.</p>
<p>Similarly, whether or not the current choice is the player's first card or second card to be drawn needs to be part of the state. This detail of the state <em>might</em> already be covered by the number of cards in the player's hand, if logically the number of cards is always the same at each stage. However, if hand size can vary for other reasons, it is worth adding an explicit flag for "first draw choice" or similar so that the agent can use the information.</p>
<p>You have some freedom for encoding the action space. If drawing cards is the only possible action in this game at all stages, then a binary output vector of 1 + (number_of_players - 1) dimensions would be suitable. Other encodings may work well too, it depends if there is any logical structure to the choices or some derived data that encodes useful game information.</p>
<p>It may be useful to arrange the action choices so that the index for drawing from each player's discard pile is considered <em>relatively</em> to the current player's turn. That is, instead of actions being arranged <span class="math-container">$[draw, discard P1, discard P3, discard P4, discard P5]$</span> for P2, they would be arranged <span class="math-container">$[draw, discard P3, discard P4, discard P5, discard P1]$</span> and for P3 would be different: <span class="math-container">$[draw, discard P4, discard P5, discard P1, discard P2]$</span> . . . that would inherently allow for the cyclical nature of turns. State representation would need to similarly rotate knowledge about each player to match this. You might not need to do this, but I would recommend it for games where there is a lot of common logic regarding action choices relative to turn position that you could take advantage of. The opposite would apply (and you would use absolute player positions) if there were important differences throughout the game between being P1, P2, P3 etc.</p>
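<p>The relative arrangement described here can be sketched as a small helper. The 0-indexed seat numbering and the label strings are illustrative assumptions:</p>

```python
def relative_action_labels(current_player, num_players=5):
    # Arrange discard-pile choices relative to the current player's seat,
    # cycling through the opponents in turn order.
    others = [(current_player + k) % num_players + 1
              for k in range(1, num_players)]
    return ["draw"] + ["discard P%d" % p for p in others]

# For P2 (seat index 1): draw, then P3, P4, P5, P1
print(relative_action_labels(1))
# For P3 (seat index 2): draw, then P4, P5, P1, P2
print(relative_action_labels(2))
```

<p>The state representation would be rotated with the same mapping, so that "the next player's discard pile" always lands in the same slot of the agent's input.</p>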
| 152
|
reinforcement learning
|
Agent in toy environment only learns to act optimally with small discount factors
|
https://ai.stackexchange.com/questions/5221/agent-in-toy-environment-only-learns-to-act-optimally-with-small-discount-factor
|
<p>I have tried several environment libraries like OpenAI gym/gridworld but now I am trying to create a toy environment for experimentation. The environment I've created is as follows: </p>
<ol>
<li><p>State: grid with n rows by m columns, represented by a boolean matrix. Each grid cell can be empty or filled and the grid starts empty.</p></li>
<li><p>Action: one of the m columns to be filled, which must have at least the top row empty. </p></li>
<li><p>Next state: Once a column is chosen, the lowest unfilled cell in that column is filled. This works from bottom up like a very simple version of Tetris. </p></li>
<li><p>Reward: after every action, a reward equal to the number of empty columns is awarded. </p></li>
</ol>
<p>Therefore in a sample world of 5 rows by 3 column, starting off with an empty grid, the maximum attainable reward would be by filling column wise first. This policy will give a maximum total reward of 2*5 + 1*5 = 15. (2 free columns by 5 row action, once first column is filled then 1 free column by 5 row action.) </p>
<p>This very simple environment is trained using DQN with a single ff layer. The agent only took a few episodes to converge and is able to produce the maximum attainable reward. </p>
<p>In a next toy environment, I've made it a little more complex. I modified the very first action to be random choice of any column. I have retrained the RL model with the new environment modification. However, after convergence, the agent does not attain max score of 15 for all possible starting columns. I.e. If column 1 was randomly chosen first, max score might be 15, however column 2 or 3 was randomly chosen first, max score might only reach 11 or 9. In theory, the optimum policy would be for the agent to fill column that was randomly chosen first - i.e. repeat the first randomly chosen action.</p>
<p>I have tried several ways to tweak my input parameters (e.g. epsilon_decay_rate, learning_rate, batch_size, number of hidden nodes) to see if the agent could act optimally for all possible starting columns. I also tried DDQN and Sarsa. The only way I could make the agent perform optimally is by reducing gamma (discount factor) to 0.5 or below. Are there any explanations to why the agent only works for small discount factors in this example? Also, are there alternative ways to obtain the optimum policy?</p>
|
<p>This is an episodic problem, and there should be no issue in theory with most learning algorithms coping without a discount factor (or setting gamma = 1).</p>
<blockquote>
<p>Are there any explanations to why the agent only works for small discount factors in this example?</p>
</blockquote>
<p>The most likely explanations are:</p>
<ul>
<li><p>You have a mistake in your implementation or use of DQN.</p></li>
<li><p>You have an incorrect setting of a neural network hyperparameter. My initial thought would be you have mistakenly put a non-linearity like softmax or sigmoid on the NN outputs (it needs to be a linear output). Or it could just be that 10 hidden neurons is not enough for this representation</p></li>
<li><p>Convergence requires far longer to train than you thought, once you introduce non-linear relationships between state and expected reward.</p></li>
</ul>
<p>It is not surprising that the simpler puzzle trains more easily, as the agent can reduce it down to a linear counting puzzle without really needing to "look" at the representation. I would expect the Q values to be very poor approximations to the correct values after so few iterations, but the problem is simple enough that the incorrect values still produce an optimal behaviour.</p>
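<p>For reference, checking an agent against a minimal reimplementation of the environment makes implementation bugs easier to isolate. Here is a sketch of the environment as the question describes it (class and method names are illustrative, not taken from the questioner's code); filling column-wise first reproduces the maximum return of 15 worked out in the question:</p>

```python
import numpy as np

# Sketch of the toy environment: an n-by-m boolean grid, an action fills
# the lowest empty cell of a column, and the reward after each action is
# the number of still-empty columns.
class ColumnFillEnv:
    def __init__(self, rows=5, cols=3):
        self.rows, self.cols = rows, cols
        self.grid = np.zeros((rows, cols), dtype=bool)

    def legal_actions(self):
        # A column is playable while its top row is still empty.
        return [c for c in range(self.cols) if not self.grid[0, c]]

    def step(self, col):
        # Fill the lowest unfilled cell in the chosen column.
        row = max(r for r in range(self.rows) if not self.grid[r, col])
        self.grid[row, col] = True
        reward = int((~self.grid.any(axis=0)).sum())   # empty columns
        done = len(self.legal_actions()) == 0
        return self.grid.copy(), reward, done

# Filling column-wise gives 2 per step for 5 steps, then 1 per step for
# 5 steps, then 0: the maximum return of 15 from the question.
env = ColumnFillEnv()
total = 0
for col in [0] * 5 + [1] * 5 + [2] * 5:
    _, r, done = env.step(col)
    total += r
print(total)  # 15
```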
| 153
|
reinforcement learning
|
Does epsilon-greedy approach always choose the "best action" (100% of the time) when it does not take the random path?
|
https://ai.stackexchange.com/questions/7397/does-epsilon-greedy-approach-always-choose-the-best-action-100-of-the-time
|
<p>I'm now reading <a href="https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-7-action-selection-strategies-for-exploration-d3a97b7cceaf" rel="nofollow noreferrer">the following blog post</a>, and, on the epsilon-greedy approach, the author implied that epsilon-greedy takes a random action with probability epsilon and takes the best action with probability 1 - epsilon.</p>
<p>So for example, suppose that the epsilon = 0.6 with 4 actions. In this case, the author seemed to say that each action is taken with the following probability (suppose that the first action has the best value):</p>
<ul>
<li>action 1: 55% (.40 + .60 / 4) </li>
<li>action 2: 15%</li>
<li>action 3: 15%</li>
<li>action 4: 15%</li>
</ul>
<p>However, I feel like I learned that epsilon-greedy only takes a random action with probability epsilon, and otherwise it is up to the policy function to decide which action to take. And the policy function returns a probability distribution over actions, not the identifier of the action with the best value. So, for example, suppose that epsilon = 0.6 and the actions have probabilities 50%, 10%, 25%, and 15%. In this case, the probability of taking each action should be the following:</p>
<ul>
<li>action 1: 35% (.40 * .50 + .60 / 4)</li>
<li>action 2: 19% (.40 * .10 + .60 / 4)</li>
<li>action 3: 25% (.40 * .25 + .60 / 4)</li>
<li>action 4: 21% (.40 * .15 + .60 / 4)</li>
</ul>
<p>Is my understanding not correct here? Does the non-random part of the epsilon (1 - epsilon) always takes the best action, or does it select the action according to the probability distribution?</p>
|
<p>Epsilon-greedy is most commonly used to ensure that you have some element of exploration in algorithms that otherwise output deterministic policies. </p>
<p>For example, value-based algorithms (Q-Learning, SARSA, etc.) do not directly have a policy as output; they have values for states or state-action pairs as outputs. The standard policy we "extract" from that is a deterministic policy that simply tries to maximize the predicted value (or, technically, a "slightly" nondeterministic policy in that, in proper implementations, it should break ties (where there are multiple equal values at the top) randomly). For such algorithms, there is not sufficient inherent exploration, so we typically use something like epsilon-greedy to introduce an element of exploration. In these cases, both of the possible explanations in your question are identical.</p>
<p>In cases where your algorithm already produces complete probability distributions as outputs that do not so much focus all of the probability mass on a single or a couple of points, like the probability distribution you gave as an example in your question, it's generally not really necessary to use epsilon-greedy on top of it; you already get exploration inherently due to all actions having a decent probability assigned to them. </p>
<p>Now, I've actually personally mostly worked with value-based methods so far and not so much with e.g. policy gradient methods yet, so I'm not sure whether there tends to be a risk that they also "converge" to situations where they place too much probability mass on some actions and too little on others too quickly. <strong>If that's the case</strong>, I would expect an additional layer of epsilon-greedy exploration might be useful. And, in that case, I would indeed find your explanation the most natural. If I look through, for example, the <a href="https://arxiv.org/abs/1707.06347" rel="nofollow noreferrer">PPO paper</a>, I didn't find anything about them using epsilon-greedy in a quick glance. So, I suppose the combination of epsilon-greedy with "nondeterministic" policies (ignoring the case of tie-breaking in value-based methods here) simply isn't really a common combination.</p>
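<p>A quick sketch of the value-based case (the Q-values here are made-up numbers): when the underlying policy is a pure argmax, the two readings in the question coincide, and the empirical frequencies match the first set of numbers the question computes:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n_actions = 0.6, 4
q = np.array([1.0, 0.2, 0.5, 0.3])  # illustrative action values

def eps_greedy_action(q, eps):
    # With probability eps act uniformly at random, otherwise greedily.
    if rng.random() < eps:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

# The best action is chosen with roughly (1 - eps) + eps / n
# = 0.4 + 0.15 = 0.55, and each other action with eps / n = 0.15.
counts = np.bincount([eps_greedy_action(q, eps) for _ in range(100_000)],
                     minlength=n_actions)
print(counts / counts.sum())  # roughly [0.55, 0.15, 0.15, 0.15]
```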
| 154
|
reinforcement learning
|
Is it possible to state an outliers detection problem as a reinforcement learning problem?
|
https://ai.stackexchange.com/questions/8193/is-it-possible-to-state-an-outliers-detection-problem-as-a-reinforcement-learnin
|
<p>To me it seems to be ill defined. Partially because of absence of knowledge which points are to be considered outliers in the first place.</p>
<p>The problem which I have in mind is "bad market data" detection. For example if a financial data provider is good only most of the time, but about 7-10% of data do not make any sense.</p>
<p>The action space is binary: either take an observation or reject it.</p>
<p>I am not sure about the reward, because the observations would be fed into an algorithm as inputs and the outputs of the algo would be outliers themselves. So the outlier detection should prevent the outputs of the algorithm from going rogue.</p>
<p>It is necessary to add that, if we are talking about market data (stocks, indices, fx), there's no guarantee that the distributions are stationary, and there might be trends and jumps. If a supervised classifier is trained on historical data, how and how often should it be adjusted to be able to cope with different modes of the data?</p>
|
<p>It is quite often <em>possible</em> to frame a problem as a Reinforcement Learning (RL) problem at some level. However, this may turn out to be for no benefit, or a net cost towards solving the problem. Casting parameter or hyperparameter searches as RL can be adding a layer of complexity and reduce efficiency.</p>
<p>One key thing to bear in mind is that any classification or regression that occurs within a RL framework will end up using effectively the same models and approaches that could solve the same problems <em>directly</em>. These models would either appear directly as the function approximators in RL that implement policies or value functions, or they would be an implied part of them. If you have labeled data for classification - even delayed until some time after you collect data, then you are usually going to be better off using supervised learning directly. </p>
<p>For hyperparameter searches (e.g. cutoffs for anomaly detection) then you may not need labelled data, but just need a good way to test the model offline.</p>
<p>The first point at which supervised learning or classic anomaly detection might fail for you is if you never receive any feedback about individual records, only a measure of overall performance. In other words, if you can measure <em>consequences</em> of good performance, but never measure or check <em>correctness</em>.</p>
<p>Your characterisation</p>
<blockquote>
<p>about 7-10% of data do not make any sense.</p>
</blockquote>
<p>does not appear to fit that. It looks like you could detect this, maybe manually labelling a few thousand records, and train a classifier using supervised learning techniques. That is likely a much better use of your time than trying to restructure the problem at a higher level and trusting a trail-and-error approach to discover the same rules.</p>
<p>Putting that to one side, assuming you do have a problem where</p>
<ul>
<li>data to be classified is arriving as a stream, and needs to be processed online, item by item or in small batches</li>
<li>you have reason to think that an accept/reject stage before processing further would be useful</li>
<li>you have no way to label training data for accept/reject</li>
<li>you have a way to measure performance of the remaining system after the accept/reject phase</li>
</ul>
<p>then you could use RL to frame the accept/reject phase as an action. There are some challenges there, but essentially you would use RL along with measurement feedback to sample errors or gradients - typically using TD Error or policy gradients. This could wrap almost any model that does classification or anomaly detection etc, provided it could be trained using those gradients.</p>
<p>From comments, if the underlying distributions for accept/reject are non-stationary, this may point you more towards a RL solution. However, that may come with a cost to performance - you will need to balance exploration rate (which will reduce the performance of the model against stationary data) versus speed of learning new distributions. This is a problem for all online learners; the main advantage of a RL approach here is that it will not require generating new labelled data. If you can use a recency-weighted anomaly detection algorithm instead, then you won't need the labeled data either - whether that is better requires testing, personally I'd take a working anomaly detection as the baseline and only use RL if it proved itself better.</p>
<p>The specific items that you turn into states and rewards are not clear to me from the question, and you would need to work on these things carefully. It is possible you will need more than the current data item in order to define state, and that will depend a lot on how the feedback loop works that establishes reward. </p>
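<p>For the recency-weighted baseline mentioned above, here is a minimal sketch (the decay rate, threshold and data stream are illustrative assumptions, not values from this answer): an exponentially weighted mean/variance with a z-score cutoff, which adapts to drifting distributions without any labels:</p>

```python
# Minimal recency-weighted z-score accept/reject filter. alpha controls
# how quickly old data is forgotten; threshold is the rejection cutoff.
class EWMAnomalyDetector:
    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha, self.threshold = alpha, threshold
        self.mean, self.var = 0.0, 1.0

    def accept(self, x):
        z = abs(x - self.mean) / (self.var ** 0.5 + 1e-8)
        ok = z < self.threshold
        if ok:  # only track the distribution of accepted points
            d = x - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return ok

det = EWMAnomalyDetector()
stream = [1.0, 1.1, 0.9, 1.05, 50.0, 1.0]   # 50.0 is the "bad tick"
print([det.accept(x) for x in stream])  # [True, True, True, True, False, True]
```

Whether something like this beats an RL-wrapped filter is exactly the kind of comparison suggested above: take the working anomaly detector as the baseline and only switch if RL proves itself better.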
| 155
|
reinforcement learning
|
Dyna-Q algorithm, having trouble when adding the simulated experiences
|
https://ai.stackexchange.com/questions/8344/dyna-q-algorithm-having-trouble-when-adding-the-simulated-experiences
|
<p>I'm trying to create a simple Dyna-Q agent to solve small mazes, in python. For the Q function, Q(s, a), I'm just using a matrix, where each row is for a state value, and each column is for one of the 4 actions (up, down, left, right).</p>
<p>I've implemented the "real experience" part, which is basically just straightforward SARSA. It solves a moderately hard maze (i.e., one where it has to go around a few obstacles) in 2000-8000 steps in the first episode (this will no doubt decrease with more episodes). So I know that part is working reliably.</p>
<p>Now, adding the part that simulates experience based on what it knows of the model to update the Q values more, I'm having trouble. The way I'm doing it is to keep an <code>experiences</code> list (a lot like experience replay), where each time I take a real action, I add its (S, A, R, S') to that list. </p>
<p>Then, when I want to simulate an experience, I take a random (S, A, R, S') tuple from that list (David Silver mentions in his lecture (#8) on this that you can either update your transition probability matrix P and reward matrix R by changing their values or just sample from the experience list, which should be equivalent). In my case, with a given S and A, since it's deterministic, R and S' are also going to be the same as the ones I sampled from the tuple. Then I calculate Q(S, A) and max_A'(Q(S', A')), to get the TD error (same as above), and do stochastic gradient descent with it to change Q(S, A) in the right direction.</p>
<p>But it's not working. When I add simulated experiences, it <em>never</em> finds the goal. I've tried poking around to figure out why, and all I can see that's weird is that the Q values continually increase as time goes on (while, without experiences, they settle to correct values).</p>
<p>Does anyone have any advice about things I could try? I've looked at the sampled experiences, the Q values in the experience loop, the gradient, etc... and nothing really sticks out, aside from the Q values growing.</p>
<p>edit: here's the code. The first part (one step TD learning) is working great. Adding the planning loop part screws it up.</p>
<pre><code>def dynaQ(self, N_steps=100, N_plan_steps=5):
self.initEpisode()
for i in range(N_steps):
#Get current state, next action, reward, next state
s = self.getStateVec()
a = self.epsGreedyAction(s)
r, s_next = self.iterate(a)
#Get Q values, Q_next is detached so it doesn't get changed by the gradient
Q_cur = self.Q[s, a]
Q_next = torch.max(self.Q[s_next]).detach().item()
TD0_error = (r + self.params['gamma']*Q_next - Q_cur).pow(2).sum()
#SGD
self.optimizer.zero_grad()
TD0_error.backward()
self.optimizer.step()
#Add to experience buffer
e = Experience(s, a, r, s_next)
self.updateModel(e)
for j in range(N_plan_steps):
xp = self.experiences[randint(0,len(self.experiences)-1)]
Q_cur0 = self.Q[xp.s, xp.a]
Q_next0 = torch.max(self.Q[xp.s_next]).detach().item()
TD0_error0 = (xp.r + self.params['gamma']*Q_next0 - Q_cur0).pow(2).sum()
self.optimizer.zero_grad()
TD0_error0.backward()
self.optimizer.step()
</code></pre>
| 156
|
|
reinforcement learning
|
What should be saved in SARSA prioritized sweeping?
|
https://ai.stackexchange.com/questions/10441/what-should-be-saved-in-sarsa-prioritized-sweeping
|
<p>In the book "Reinforcement Learning: An Introduction", by Sutton and Barto, they provided the "Q-learning prioritized sweeping" algorithm, in which the model saves the next state and the immediate reward, for each state and action, that is, <span class="math-container">$Model(S_{t},A_{t}) \leftarrow S_{t+1}, R_{t+1}$</span>. </p>
<p>If we want to use "SARSA prioritized sweeping", should we save "next state, immediate reward, and next action", that is, <span class="math-container">$Model(S_{t},A_{t}) \leftarrow S_{t+1}, R_{t+1}, A_{t+1}$</span>?</p>
|
<p>SARSA is an on-policy method. Your historical choices of action will have been made using older Q values, and thus from a different policy. In addition, the action that was taken at the time may not have been typical for the agent (it may have been exploring). So you don't usually want to re-use historical action choices to calculate TD targets in single-step SARSA, because these may introduce bias.</p>
<p>Provided you are performing single-step SARSA, then you can re-generate the action choice by sampling from the current best policy. This is similar to generating the max Q value in the TD target value <span class="math-container">$R_{t+1} + \gamma \, \text{max}_{a'} Q(S_{t+1},a')$</span>. </p>
<p>You could do this using a regular SARSA sampling the action choice:</p>
<p><span class="math-container">$$R_{t+1} + \gamma \, Q(S_{t+1}, a' \sim \pi(\cdot|S_{t+1}) )$$</span></p>
<p>Or you could use Expected SARSA and take a weighted mean of all possible actions:</p>
<p><span class="math-container">$$R_{t+1} + \gamma \sum_{a' \in \mathcal{A}(S_{t+1})} \pi(a'|S_{t+1})Q(S_{t+1}, a')$$</span></p>
<p>Technically these would both be on-policy with respect to evaluating the TD Target, but off-policy with respect to the distribution of <span class="math-container">$S_t, A_t$</span> that you are running the update for. That's already the case due to prioritised sweeping focussing more updates on certain transitions, but could be a big difference when using a neural network to approximate Q. Bear in mind that making TD learning methods off-policy can have a negative impact on stability.</p>
<p>If you want to process multi-step updates, then you do have to reference <span class="math-container">$A_{t+1}$</span> and adjust for when the historical data makes a different action choice than your current estimate of the best policy. This would commonly use <em>importance sampling</em>. This is true for Q-learning as well however, so there would still be no difference in what you store between Q-learning and SARSA.</p>
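<p>As a concrete sketch of the two targets above (the Q-values, reward, gamma and epsilon are made-up numbers, and the policy is assumed to be epsilon-greedy over the next state's Q-values):</p>

```python
import numpy as np

gamma, eps = 0.9, 0.1
r = 1.0
q_next = np.array([2.0, 1.0, 0.5])   # Q(S', a') for the three actions

# Probabilities of an epsilon-greedy policy at the next state.
pi = np.full(len(q_next), eps / len(q_next))
pi[np.argmax(q_next)] += 1 - eps

# Expected SARSA: weight each next-action value by its policy probability.
expected_sarsa_target = r + gamma * np.dot(pi, q_next)
# Q-learning: take the max over next actions instead.
q_learning_target = r + gamma * np.max(q_next)
print(expected_sarsa_target, q_learning_target)  # about 2.725 vs 2.8
```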
| 157
|
reinforcement learning
|
Code examples of controlling multiple units with RL
|
https://ai.stackexchange.com/questions/12037/code-examples-of-controlling-multiple-units-with-rl
|
<p>Does anyone know of resources (papers, articles, and especially repositories) regarding controlling multiple units with RL?</p>
<p>The controlled units should not be fixed; for example, in Real-Time Strategy the agent builds various units (workers, soldiers, ...) and later controls them. During the game, various units could die and new ones are built.</p>
<p>I think a good contemporary example is AlphaStar, while OpenAI Five controls just a single agent. This might be incorrect since I've never played those games.</p>
| 158
|
|
reinforcement learning
|
DQN Agent helped by a controller: on which action should I perform backprop?
|
https://ai.stackexchange.com/questions/12209/dqn-agent-helped-by-a-controller-on-which-action-should-i-perform-backprop
|
<p><strong>Background</strong></p>
<p>I am working on a robotic arm controlled by a DQN + a python script I wrote.
The DQN receives the 5 joint states, the coordinates of a target, the coordinates of the obstacle and outputs the best action to take (in terms of joint rotations).
The python script checks if the action suggested by the DQN is safe. If it is, it performs it. Otherwise, it performs the second highest-ranking action from the DQN; and so on. If no action is possible, collision: we fail.</p>
<p>During training, this python functionality wasn't present: the arm learned how to behave without anything else to correct its behaviour. With this addition on top of the already-trained network, the performance rose from 78% to 95%.
Now my advisor (bachelor's thesis) asked me to leave the external controller on during training to check whether this improves learning.</p>
<p><strong>Question</strong></p>
<p>Here's what happens during training; at each step:</p>
<ol>
<li>the ANN selects an action</li>
<li>if it is legal, the python script executes it, otherwise it chooses another one.</li>
</ol>
<p>Now... on which action should I perform backprop? The one proposed by the network, or the one that was actually executed?</p>
<p>I am really confused. On the one hand, the arm <strong>did choose</strong> an action, so my idea was that we should, in fact, learn on THAT action. On the other hand, during the exploration phase (<span class="math-container">$\epsilon$</span>-greedy), we backprop on the action that was randomly selected and executed, with no interest in what the network's output was. So, it would also be rational, in this case, to perform backprop on the action really executed, i.e. the one chosen by the script.</p>
<p><strong>What is the right thing to do here?</strong>. <em>(Bonus question: is it reasonable to train with this functionality on? Wouldn't it be better to train the Network by itself, and then later, enhance its performance with this functionality?)</em></p>
|
<p>Q-learning - which DQN is based on - is an off-policy reinforcement learning (RL) method. That means it can learn a <em>target policy</em> of optimal control whilst acting using a different <em>behaviour policy</em>. In addition, provided you use single step learning (as opposed to n-step or Q(<span class="math-container">$\lambda$</span>) variants), you don't need to worry much about the details of the behaviour policy. It is more efficient to learn from behaviour policies closer to the current best guess at optimal, but possible to learn from almost anything, including random behaviour.</p>
<p>So it doesn't really matter too much if you change the behaviour during training.</p>
<p>In your case, the script is actually doing more than just changing the behaviour. It is consistently filtering out state/action pairs that you have decided should never be taken, as a domain expert. This has two major consequences:</p>
<ul>
<li><p>It reduces the search space by whatever fraction of state/actions are now denied by your safety script.</p></li>
<li><p>It prevents the agent from ever learning about certain state/action pairs, as they are never experienced.</p></li>
</ul>
<p>The first point means that your learning should in theory be more efficient. As for how much, you will have to try and see. It might only be a small amount if the problem states and actions are unlikely to be reached during exploration from near-optimal behaviour.</p>
<p>The second point means that your agent will never learn by itself to avoid the problem state/action combinations. So you will always need to use the safety script.</p>
<p>In fact you can view the safety script as a modification to the environment (as opposed to a modification to the agent), if its decisions are strict and consistent. Filtering available actions is a standard mechanism in RL when action space may vary depending on state.</p>
<blockquote>
<p>On which action should I perform backprop? </p>
</blockquote>
<p>In DQN, you don't "perform backprop" on an action. Instead you either use directly or store some observed data about the step: <span class="math-container">$s, a, r, s'$</span> where <span class="math-container">$s$</span> is the start state, <span class="math-container">$a$</span> the action taken, <span class="math-container">$r$</span> the immediate reward, and <span class="math-container">$s'$</span> the resulting state. You then update the current action value estimate(s) based on a TD target <span class="math-container">$\delta = r + \gamma \, \text{max}_{a'} Q(s',a')$</span> either online or from the experience table.</p>
<p>When Q-learning learns, it updates the estimate for the action value <span class="math-container">$Q(s, a)$</span> - and <span class="math-container">$a$</span> is taken from the actual behaviour (otherwise the agent will update its estimate of an action that it didn't take). So your answer here is to use - or more likely store in the experience table - the action actually taken. If the action recommended as optimal at that time is different, ignore the difference, what matters is the observed experience.</p>
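<p>A minimal sketch of that bookkeeping (the safety check, Q-values and states are stand-ins, not the questioner's actual setup): the network's ranking is only used to pick an action, while the transition stored for learning records the action that was actually executed:</p>

```python
from collections import deque

def is_safe(state, action):
    # Stand-in for the external safety script.
    return action != 2

def select_action(q_values, state):
    # Try actions in descending Q order until one passes the filter.
    ranked = sorted(range(len(q_values)), key=lambda a: -q_values[a])
    for a in ranked:
        if is_safe(state, a):
            return a
    raise RuntimeError("no safe action: collision")

replay = deque(maxlen=10_000)
state, q_values = 0, [0.1, 0.3, 0.9]   # action 2 ranks first but is unsafe
action = select_action(q_values, state)
reward, next_state = 1.0, 1
# Store the executed action (1), not the recommended one (2).
replay.append((state, action, reward, next_state))
print(replay[-1])  # (0, 1, 1.0, 1)
```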
| 159
|
reinforcement learning
|
How to represent action space in reinforcement learning?
|
https://ai.stackexchange.com/questions/12646/how-to-represent-action-space-in-reinforcement-learning
|
<p>I started to learn reinforcement learning a few days ago, and I want to use it to solve a resource allocation problem: given a constant number, find the best way to divide it into several non-negative real numbers.</p>
<p>For example, to divide the number 1 into 3 real numbers, the allocation can be:</p>
<p>[0.2, 0.7, 0.1]</p>
<p>[0.95, 0.05, 0]
...</p>
<p>I do not know how to represent the action space because each allocation is 3-dimensional and each dimension is real-valued and each other correlated.</p>
<p>In an actor-critic architecture, is it possible to have 3 outputs activated by softmax in the actor's network, each of which represents one dimension of the allocation?</p>
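<p>For concreteness, here is a quick sketch of that idea (the logits are made-up numbers standing in for the actor's raw outputs): a single softmax over 3 outputs already yields a non-negative allocation that sums to the given constant:</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

total = 1.0
logits = np.array([2.0, -1.0, 0.5])   # illustrative actor outputs
allocation = total * softmax(logits)
print(allocation, allocation.sum())   # non-negative, sums to `total`
```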
<hr>
<p>Appended:</p>
<p>There is a playlist of videos. A user can switch to the next video at any time. More buffer leads to a better viewing experience, but more bandwidth loss if the user switches to the next video. I want to optimize the smoothness of playback with minimal bandwidth loss. At each time step, the agent decides the bandwidth allocation to download the current video and the next 2 videos. So I guess the state will be the bandwidth, the user's behavior and the player situation.</p>
| 160
|
|
reinforcement learning
|
How to solve optimal control problem with reinforcement learning
|
https://ai.stackexchange.com/questions/13709/how-to-solve-optimal-control-problem-with-reinforcement-learning
|
<p>The problem I am trying to attack is a predator-prey pursuit problem. There are multiple predators that pursue multiple prey, and the prey try to evade the predators. I am trying to solve a simplified version - one predator tries to catch a static prey on a plane. There is a bunch of literature on the above problem when predators and prey are on a grid. </p>
<p>Can anybody suggest articles/code where such a problem is solved on a continuous plane? I am looking at a continuous state space, a discrete action space (the predator can turn left 10 degrees, go straight, or turn right 10 degrees, and runs at a constant speed), and discrete time. MountainCar is a one-dimensional version (the car is the predator and the flag is the prey) and DQN works fine. However, when I tried DQN on a two-dimensional plane, training became very slow (I guess the curse of dimensionality).</p>
<p>The second question concerns the definition of states and reward. In my case, I consider the angle between the predator's heading vector and the vector between the predator and prey positions. The reward is the change in distance between predator and prey, 10 when the prey is captured, and -10 when the predator gets too far from the prey. Is this reasonable? I already asked a similar question before and, with the help of @Neil Slater, was able to refine the reward and state. </p>
<p>The third question concerns when to copy the training network's weights to the target network. At each episode? Or only when the prey is caught? Any ideas?</p>
<p>The last question I have is about the network structure: activation functions and regularization. Currently, I am using two tanh hidden layers and a linear output with L2 regularization and dropout. Can anybody share some insights?</p>
<p>Thanks in advance!</p>
| 161
|
|
reinforcement learning
|
Why doesn't stability in prediction imply stability in control in off-policy reinforcement learning?
|
https://ai.stackexchange.com/questions/15526/why-doesnt-stability-in-prediction-imply-stability-in-control-in-off-policy-rei
|
<p>Prediction's goal is to get an estimate of the performance of a policy given a specific state. </p>
<p>Control's goal is to improve the policy wrt. the prediction. </p>
<p>The alternation between the two is the basis of reinforcement learning algorithms. </p>
<p>In the paper <a href="https://arxiv.org/abs/1606.02647" rel="nofollow noreferrer">“Safe and Efficient Off-Policy Reinforcement Learning.” (Munos, 2016)</a>, the section 3.1) "Policy evaluation" assumes that the target policy is fixed, while the section 3.2) "Control" extends to where the target policy is a sequence of policies improved by a sequence of increasingly greedy operations. </p>
<p>This suggests that, even if a proof of convergence is established for a fixed target policy, one cannot immediately infer convergence in the case where the target policy is a sequence of improving policies.</p>
<p>I wonder why this is the case. If an algorithm converges under a fixed-target-policy assumption, any policy in the chain of improvement should pose no problem for this algorithm either. By the merit of policy improvement, each policy in the sequence is increasingly better, hence converging to an optimal policy.</p>
<p>This should be obvious from the policy improvement perspective and should require no further proof at all?</p>
|
<p>It is a mathematical problem. Usually, we use the contraction mapping theorem for the proof of convergence. You should apply the Banach fixed point theorem to the Bellman operators.</p>
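<p>A tiny numeric illustration of that contraction property (the two-state, two-action MDP here is made up): applying the Bellman optimality operator shrinks the sup-norm distance between any two value estimates by at least a factor of gamma, which is what guarantees a unique fixed point for a <em>fixed</em> operator:</p>

```python
import numpy as np

gamma = 0.9
# Made-up deterministic MDP: rewards R[s, a] and next states T[s, a].
R = np.array([[1.0, 0.0], [0.5, 2.0]])
T = np.array([[0, 1], [1, 0]])

def bellman(v):
    # (Tv)(s) = max_a [ R(s, a) + gamma * v(T(s, a)) ]
    return np.max(R + gamma * v[T], axis=1)

v1, v2 = np.array([10.0, -3.0]), np.array([0.0, 5.0])
d_before = np.max(np.abs(v1 - v2))
d_after = np.max(np.abs(bellman(v1) - bellman(v2)))
print(d_after <= gamma * d_before)  # True: a gamma-contraction in sup norm
```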
| 162
|
reinforcement learning
|
RL: How to deal with environments changing state due to external factors
|
https://ai.stackexchange.com/questions/15626/rf-how-to-deal-with-environments-changing-state-due-to-external-factors
|
<p>I have a use case where the state of the environment could change due to random events in between the time steps at which the agent takes actions. For example, at t1, the agent takes action a1 and is given the reward and the new state s1. Before the agent takes the next action at t2, some random events occur in the environment that alter the state. Now, when the agent takes an action at t2, it is acting on "stale information", since the state of the environment has changed. Also, the new state s2 will represent changes not only due to the agent's action, but also due to the prior random events. In the worst case, the action could even have become invalid for the new state introduced by these random events.</p>
<p>How do we deal with this? Does this mean that this use case is not a good one to solve with RL? If we just ignore these changing states due to the random events in the environment, how would that affect the various learning algorithms? I presume that this is not an uncommon or unique problem in real-life use cases...</p>
<p>Thanks!</p>
<p>Francis</p>
| 163
|
|
reinforcement learning
|
Effects of translating RL action probability through non linearity
|
https://ai.stackexchange.com/questions/16669/effects-of-translating-rl-action-probability-through-non-linearity
|
<p>I am training an RL agent (specifically using the <a href="https://arxiv.org/pdf/1707.06347.pdf" rel="nofollow noreferrer">PPO algorithm</a>) on a game environment with 2 possible actions <strong>left</strong> or <strong>right</strong>.</p>
<p>The actions can be taken with varying "force"; e.g. go <strong>left</strong> with 17% force or go <strong>right</strong> with 69.3% force. Currently, I have the agent output 21 actions - 10 for <strong>left</strong> (in 10% increments), 10 for <strong>right</strong> (in 10% increments) and 1 for staying in place (doing nothing). In other words, there is a direct 1-1 mapping in 10% increments between the agent output and the force the agent uses to move in the environment.</p>
<p>I am wondering what would happen if, instead of outputting 21 possible actions, I changed the action space to a binary output and obtained the action probabilities. The probabilities will have the form, say, [70, 30]. That is, go left with 70% probability and go right with 30% probability. Then I take these probabilities and put them through a non-linearity that translates to the actual action force taken; e.g. an output of 70% probability to go left may in fact translate to moving left with 63.8% force.</p>
<p>The non linear translation is not directly observed by the agent but will determine the proceeding state, which is directly observed.</p>
<p>I don't fully understand what the implications of doing this will be. Is there any argument that this would increase performance (rewards) as the agent does not need to learn direct action mappings, rather just a binary probability output?</p>
|
<blockquote>
<p>I don't fully understand what the implications of doing this will be. </p>
</blockquote>
<p>Without other matching adjustments, you will break your agent.</p>
<p>The problem is how your new action space gets converted <em>back</em> into gradients to update the agent, after it has acted and needs to learn from the results. The NN component of the policy function you are considering is designed to work by balancing a discrete probability distribution. It learns by increasing the probability of actions (in the binary case, the probability of going left vs going right) that score better than a current baseline level.</p>
<p>When interpreting the result from going 63.8% left, you have to resolve two things - which action did the agent take, and what changes to your parameters will increase the probability of taking that action. Unfortunately neither of these tasks are simple if you combine the action choices in the way you suggest. </p>
<p>Also, you have lost exploration. The combined left/right algorithm will always output a fixed steering amount for each state. Whilst there are algorithms, like DDPG, that can work with this, it is not really possible to adapt PPO to do so.</p>
<p>However, PPO already supports continuous action spaces directly. You can have your network output the mean and standard deviation of a distribution for how to steer, and sample from that. Then the action choice taken will directly relate to the output of the network, and you can adjust the policy to make that choice more or less likely depending on results from taking it. If you are using a library implementation of PPO, then this option should be available to you.</p>
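To make that last option concrete, here is a minimal numpy sketch of a diagonal-Gaussian policy head for a single continuous steering action. The linear "network", weight names, and tanh squashing are illustrative assumptions, not part of any particular PPO library; a real implementation would also include the clipped-ratio update.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_steering(state, W_mu, b_mu, log_std):
    """Map a state vector to the mean of a Gaussian over one continuous
    steering action, then sample from it (the sampling is the exploration)."""
    mu = np.tanh(state @ W_mu + b_mu)          # squash mean into [-1, 1]
    std = np.exp(log_std)
    action = rng.normal(mu, std)
    # log-probability of the sampled action, needed later for the PPO ratio
    log_prob = -0.5 * (((action - mu) / std) ** 2
                       + 2.0 * log_std + np.log(2.0 * np.pi))
    return action, log_prob

state = np.array([0.3, -0.7])                  # toy 2-feature observation
W_mu, b_mu = np.zeros((2, 1)), np.zeros(1)     # untrained "network"
log_std = np.zeros(1)                          # std = 1 before training
action, log_prob = sample_steering(state, W_mu, b_mu, log_std)
```

The stored `log_prob` is what lets the update make the sampled steering amount more or less likely after seeing the return.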
| 164
|
reinforcement learning
|
Reinforcement learning without trajectories
|
https://ai.stackexchange.com/questions/17143/reinforcement-learning-without-trajectories
|
<p>Does it make sense to use Reinforcement Learning methods in an environment that does not have trajectories?</p>
<p>I have a lot of states and actions in my environment. However, there are no trajectories.</p>
<p>If the agent takes action <span class="math-container">$a$</span> in the state <span class="math-container">$s$</span>, it reaches <span class="math-container">$s'$</span>. In <span class="math-container">$s'$</span>, it takes action <span class="math-container">$a'$</span> and reaches state <span class="math-container">$s''$</span>. However, if it takes the reverse order of actions <span class="math-container">$a'$</span> and then <span class="math-container">$a$</span>, it would reach the same state <span class="math-container">$s''$</span>.</p>
<p>How do reinforcement learning methods handle this?</p>
| 165
|
|
reinforcement learning
|
How to perform Interpretability analysis toward a simple reinforcement learning network
|
https://ai.stackexchange.com/questions/17158/how-to-perform-interpretability-analysis-toward-a-simple-reinforcement-learning
|
<p>We are currently using a RL network with the following simple structure to train a model which helps to solve a transformation task:</p>
<p>Environment (a binary file) + reward ---> LSTM (embedding) --> FC layer --> FC layer --> FC layer --> decision (to select and apply a kind of transformation toward the environment from a pool of transformations)</p>
<p>The model will receive a simple reward and also take the input to make the decision. And we have a condition to stop each episode.</p>
<p>So the current workflow, although simple, seems to have learned something, and with multiple episodes of training we can observe the accumulated reward for each episode increase. So right now, what we are thinking is to <code>interpret</code> the model, well, a fancy term.</p>
<p>So basically we are thinking to let the model tell us from which component of the Environment (input file), the model somewhat makes the decision to select a transformation to apply. And I have learned a bunch of interpretability articles, which basically use an activation map (e.g., <a href="https://jacobgil.github.io/deeplearning/class-activation-maps?nsukey=AWNe1ED4Qk%2FmkOWgGIXn0dabS%2Bz4VNo%2BkEnUhfw0sUs2ViTBewAdmlgUoGfbTOlXUFnPrwHzzYjPMf9r2xJytPnXx%2BScM6tYqPH52ogxtLywL6auw2DlxLQ1rJo1QeVXu1Nujvpirc9EEhd%2F2YFQONv9SUIBgbYhgDSf2ZaQekWUa%2Bc3jnMDT9i1qVsC4A%2BS6nxfRgEH%2BGeAvYXUwNCoxw%3D%3D" rel="nofollow noreferrer">link</a>) to highlight certain components from the input. </p>
<p>However, the problem is that, we don't have any sort of CNN layer in our simple RL model. In that sense, the aforementioned method cannot apply, right? I also learned a number of techniques from this <a href="https://christophm.github.io/interpretable-ml-book/simple.html" rel="nofollow noreferrer">book</a>, but still, I don't see any specific techniques applicable for RL models.</p>
<p>So here is my question: in terms of our simple RL model, how can we do some "interpretability" analysis and therefore get a better idea of which part of the "Environment" leads to each decision step? Thank you very much.</p>
| 166
|
|
reinforcement learning
|
Is the temperature equal to epsilon in Reinforcement Learning?
|
https://ai.stackexchange.com/questions/17667/is-the-temperature-equal-to-epsilon-in-reinforcement-learning
|
<p>This is a piece of code from my homework. </p>
<pre><code># assumed imports (not shown in the original homework snippet)
import numpy as np
import scipy.special as sp

# action policy: implements epsilon greedy and softmax
def select_action(self, state, epsilon):
    qval = self.qtable[state]
    prob = []
    if (self.softmax):
        # use Softmax distribution
        prob = sp.softmax(qval / epsilon)
        #print(prob)
    else:
        # assign equal value to all actions
        prob = np.ones(self.actions) * epsilon / (self.actions - 1)
        # the best action is taken with probability 1 - epsilon
        prob[np.argmax(qval)] = 1 - epsilon
    return np.random.choice(range(0, self.actions), p = prob)
</code></pre>
<p>This is a method to select the best action according to the two policies, I think. My question is: why, in the softmax computation, is the epsilon parameter used as the temperature? Is it really the same thing? Are they different? I think they should be two different variables. Should the temperature be a fixed value over time? Because when I use the epsilon-greedy policy, my epsilon decreases over time.</p>
|
<p>You are correct that epsilon in epsilon-greedy and the temperature parameter in the "softmax distribution" are different parameters, although they serve a similar purpose. The original author of the code has taken a small liberty with variable names in the <code>select_action</code> method in order to use just one simple name as a positional argument.</p>
<blockquote>
<p>Should the temperature be a fixed value over time? </p>
</blockquote>
<p>Not necessarily; if your goal is to converge on an optimal policy you will want to decrease the temperature. A slow decay factor applied after each update or episode, as you might use for epsilon (e.g. 0.999 or another value close to 1), can also work for temperature decay. A very high temperature is roughly equivalent to an epsilon of 1. As the temperature becomes lower, differences in action value estimates become major differences in action selection probabilities, with the sometimes desirable effect of picking "promising" action choices more often, along with very low probabilities of picking the actions with the worst estimates. </p>
<p>Using a more sophisticated action selection, such as the temperature-based one in the example code, can speed up learning in RL. However, this particular approach is only good in some cases - it is a bit fiddly to tune, and can simply fail to work at all. </p>
<p>The tricky part of using a temperature parameter is choosing a good starting point, as well as the decay rate and end value (you have to choose the latter for epsilon decay as well). The problem is that the impact of using this distribution depends on the actual differences between action value estimates. You need a temperature value on roughly the same scale as the Q value differences. This is difficult to figure out in advance. In addition, if the Q value differences are more pronounced in some states than in others, you risk having next to no exploration in some parts of the problem and too much in others.</p>
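To make the distinction concrete, here is a small sketch (variable names are illustrative) of softmax action selection with its own temperature variable and a multiplicative decay schedule, kept separate from any epsilon:

```python
import numpy as np

def softmax_probs(qvals, temperature):
    """Boltzmann/softmax distribution over actions. Subtracting the max
    before exponentiating keeps the computation numerically stable."""
    z = (qvals - np.max(qvals)) / temperature
    e = np.exp(z)
    return e / e.sum()

qvals = np.array([1.0, 2.0, 1.5])
hot = softmax_probs(qvals, temperature=100.0)   # high T: close to uniform
cold = softmax_probs(qvals, temperature=0.01)   # low T: close to greedy

# decay applied after each episode, analogous to epsilon decay
temperature, decay, t_min = 10.0, 0.999, 0.05
for _ in range(1000):
    temperature = max(t_min, temperature * decay)
```

Note how the same Q values give near-uniform exploration at high temperature and a near-greedy choice at low temperature, which is the "roughly the same scale as the Q value differences" issue discussed above.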
| 167
|
reinforcement learning
|
Is there 1-dimensional reinforcement learning?
|
https://ai.stackexchange.com/questions/21018/is-there-1-dimensional-reinforcement-learning
|
<p>From what I can find, reinforcement algorithms work on a grid or 2-dimensional environment. How would I set up the problem for an approximate solution when I have a 1-dimensional signal from a light sensor. The sensor sits some distance away from a lighthouse. The intent would be to take the reading from the sensor to determine the orientation of the lighthouse beam.</p>
<p>The environment would be a lighthouse beam, the state would be the brightness seen at the sensor for a given orientation, and the agent would be the approximate brightness/orientation? What would the reward be? What reinforcement learning algorithm would I use to approximate the lighthouse orientation given sensor brightnesses?</p>
|
<blockquote>
<p>From what I can find, reinforcement algorithms work on a grid or 2-dimensional environment.</p>
</blockquote>
<p>A lot of teaching materials use a "grid world" presentation to demonstrate basic reinforcement learning (RL). However, the underlying Markov Decision Process (MDP) theory works on an arbitrary <em>graph</em> of connected states. This graph could be based on subdividing a metric space of any dimensions into a grid of the same dimensions (and using tiles of any shape that worked in that dimension). However, it is not limited to that: the state space does not need to be a metric space that represents distances or physical properties.</p>
<p>In practice, the set of states can be arbitrary objects, connected via state transitions in any consistent way. Provided the transition probability function <span class="math-container">$p(s'|s,a)$</span> is consistent, the environment could be used in a RL problem.</p>
<p>A very common state description is that the state is a vector of numbers that capture all the variables relevant to the problem. The environment can then be measurements taken in the real world of those variables, or the same quantities provided by a simulation. That state vector can be of any size, and have arbitrary constraints on individual components. This is no different from numerical representations of other machine learning problems, such as the inputs allowed to a neural network.</p>
<blockquote>
<p>The environment would be a lighthouse beam, the state would be the brightness seen at the sensor for a given orientation, and the agent would be the approximate brightness/orientation? </p>
</blockquote>
<p>Something is not quite right about that description: there does not seem to be any action that the agent takes.</p>
<blockquote>
<p>What would the reward be? </p>
</blockquote>
<p>It would be whatever measure of reaching a goal or maintaining a "good" result that is appropriate for the problem. You do not give any information about the goal in your description.</p>
<p>If your goal is to light up a moving sensor with the highest brightness, then the brightness measured at the sensor, transformed into suitable units, would seem to be a good candidate for a reward function (you would also need the state to give information about the target - where it had been seen last for instance). Assuming the problem is continuous, you would also need a discount factor.</p>
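For example, the discounted return the agent would then maximise can be computed over a truncated window of brightness-based rewards; the discount value of 0.99 here is just a typical choice, not anything specific to this problem:

```python
def discounted_return(rewards, gamma=0.99):
    """G = r_0 + gamma * r_1 + gamma^2 * r_2 + ..., computed backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# three steps of (already unit-converted) brightness rewards
g = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25
```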
<blockquote>
<p>What reinforcement learning algorithm would I use to approximate the lighthouse orientation given sensor brightnesses?</p>
</blockquote>
<p>Generally RL algorithms estimate rewards, or generate policies. If the lighthouse orientation is the action you wanted to take, then pretty much all RL algorithms can do one or the other to allow you to do this. The differences are in things like complexity or speed of the algorithm, what approximations you are willing to take etc. </p>
<p>You don't give enough information about the problem to even nearly suggest a "best" algorithm. Before you start, you will need to determine a more thorough description of state, action and rewards, that will define the problem. Once you have a more formal description of the problem, that may suggest which algorithms would be good starting points.</p>
| 168
|
reinforcement learning
|
Calculating the advantage 'gain' of actions in model-free reinforcement learning
|
https://ai.stackexchange.com/questions/21571/calculating-the-advantage-gain-of-actions-in-model-free-reinforcement-learning
|
<p>I have a simple question about model-free reinforcement learning. In a model I'm writing about, I want to know the value 'gain' we'd get for executing an action, relative to the current state. That is, what would I get if I moved from the current state <span class="math-container">$s$</span> by taking action <span class="math-container">$a$</span>?</p>
<p>The measure I want is:</p>
<p><span class="math-container">$$G(s, a)=V(s^{\prime})-V(s)$$</span></p>
<p>where <span class="math-container">$s'$</span> is the state that I would transition to if the underlying MDP was deterministic. If the MDP has a stochastic transition function, the model I want is:</p>
<p><span class="math-container">$$G(s, a)=\left[\sum_{s' \in S } P(s^{\prime} \mid a, s) V(s^{\prime})\right]-V(s)$$</span></p>
<p>In a model-free environment, we don't have <span class="math-container">$P(s' \mid a,s)$</span>.</p>
<p>If we had a Q-function <span class="math-container">$Q(s,a)$</span>, could we represent <span class="math-container">$G(s,a)$</span>?</p>
<p>NOTE: This is not the same as an 'advantage function' as first proposed by Baird (Leemon C Baird. Reinforcement learning in continuous time: Advantage updating. In Proceedings of 1994 IEEE International Conference on Neural Networks, pages 448–2453. IEEE, 1994.), which means the advantage of actions relative to the optimal action. What I'm looking for is the gain of actions relative to the current state.</p>
| 169
|
|
reinforcement learning
|
Is there any programming practice website for beginners in Reinforcement Learning
|
https://ai.stackexchange.com/questions/21576/is-there-any-programming-practice-website-for-beginners-in-reinforcement-learnin
|
<p>I am doing an online course on Reinforcement Learning from the University of Alberta.
It focuses too much on theory. I am an engineer and I am interested in applying RL to my applications directly. </p>
<p>My question is: is there any website which has sample programs for beginners? Small sample programs.
I have seen several websites for other machine learning topics such as CNN/RNN etc. But the resources for RL are either limited, or I couldn't find them.</p>
|
<p>Firstly, since you are a beginner, I strongly recommend you start reading <a href="http://incompleteideas.net/book/bookdraft2017nov5.pdf" rel="nofollow noreferrer">Sutton's book</a>. It is a really great book. </p>
<p>Then, some tutorials:</p>
<p><a href="https://www.udemy.com/course/machine-learning-beginner-reinforcement-learning-in-python/?utm_source=adwords&utm_medium=udemyads&utm_campaign=DataScience_v.PROF_la.EN_cc.ROW_ti.5336&utm_content=deal4584&utm_term=_._ag_85469003954_._ad_437497334835_._kw__._de_c_._dm__._pl__._ti_dsa-774930036449_._li_9067687_._pd__._&matchtype=b&gclid=CjwKCAjw8df2BRA3EiwAvfZWaFozTSdALFp3qg_IFF5R_4PL6SGbeM5W_WGtGy5teJ5MW1Lef6dwBBoCqU0QAvD_BwE" rel="nofollow noreferrer">udemy rl</a></p>
<p><a href="https://www.udemy.com/course/deep-reinforcement-learning/?utm_source=adwords&utm_medium=udemyads&utm_campaign=DeepLearning_v.PROF_la.EN_cc.ROW_ti.5374&utm_content=deal4584&utm_term=_._ag_85479010714_._ad_437497336740_._kw__._de_c_._dm__._pl__._ti_dsa-774930039529_._li_9067687_._pd__._&matchtype=b&gclid=CjwKCAjw8df2BRA3EiwAvfZWaFHx42hKDJfUvGf_QPU7Bsub7wX1zGP6M8vz8maEP5EMAgEbu76rSRoCCiwQAvD_BwE" rel="nofollow noreferrer">udemy deep rl</a></p>
<p><a href="https://github.com/MorvanZhou/Reinforcement-learning-with-tensorflow" rel="nofollow noreferrer">rl-with-tensorflow</a></p>
<p><a href="https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/" rel="nofollow noreferrer">learndatasci</a></p>
<p><a href="https://stackabuse.com/introduction-to-reinforcement-learning-with-python/" rel="nofollow noreferrer">stackabuse</a></p>
| 170
|
reinforcement learning
|
Can reinforcement learning be utilized for creating a simulation?
|
https://ai.stackexchange.com/questions/15720/can-reinforcement-learning-be-utilized-for-creating-a-simulation
|
<p>According to the definition, the AI agent has to play a game on its own. A typical domain is the blocksworld problem. The AI determines which action the robot in a game should execute, and a possible strategy for determining the action sequence is reinforcement learning. Colloquially speaking, reinforcement learning leads to an AI agent that can play games.</p>
<p>Before a self-learning character can be realized, the simulation has to be programmed first. That is, an environment which contains the rules for playing blocksworld or any other game. The environment is the house in which the AI character operates. Can the Q-learning algorithm be utilized to build the simulation itself?</p>
|
<blockquote>
<p>Can the Q-learning algorithm be utilized to build the simulation itself?</p>
</blockquote>
<p>Only in the presence of a meta-environment, or meta-simulation where the goals of creating the original simulation are encoded in the states, available actions and rewards.</p>
<p>A special case of this might be in model-learning planning algorithms where there exists a "real" environment to refer to, and the agent benefits from exploring it and constructing a statistical model that it can then use to create an approximate simulation of the outcomes of sequences of actions. The <a href="http://www-anw.cs.umass.edu/~barto/courses/cs687/Chapter%209.pdf" rel="nofollow noreferrer">Dyna-Q algorithm</a>, which is a simple extension of Q-learning, is an example of this kind of model building approach. The simulation is very basic - it simply replays previous relevant experience. But you could consider this as an example of the agent constructing a simulation.</p>
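A minimal tabular sketch of that idea, on a hypothetical 5-state corridor rather than any real environment: the agent records each real transition in a table and replays it as simulated experience - the very basic "simulation" it has constructed. The environment, constants, and tie-breaking are all illustrative choices, not from the Dyna-Q paper.

```python
import random

random.seed(0)

N, GOAL, GAMMA, ALPHA, EPS, PLAN = 5, 4, 0.9, 0.5, 0.1, 10
Q = {(s, a): 0.0 for s in range(N) for a in (0, 1)}
model = {}   # the learned "simulation": (s, a) -> (reward, next state, done)

def step(s, a):
    """Real environment: a 5-state corridor, reward 1 on reaching the goal."""
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return (1.0, s2, True) if s2 == GOAL else (0.0, s2, False)

def greedy(s):
    qs = [Q[(s, 0)], Q[(s, 1)]]
    return random.choice([a for a in (0, 1) if qs[a] == max(qs)])

for _ in range(200):
    s, done, steps = 0, False, 0
    while not done and steps < 500:
        steps += 1
        a = random.choice((0, 1)) if random.random() < EPS else greedy(s)
        r, s2, done = step(s, a)                     # real experience
        target = r if done else r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        model[(s, a)] = (r, s2, done)                # update the model
        for _ in range(PLAN):                        # planning: replay the model
            ps, pa = random.choice(list(model))
            pr, ps2, pdone = model[(ps, pa)]
            pt = pr if pdone else pr + GAMMA * max(Q[(ps2, 0)], Q[(ps2, 1)])
            Q[(ps, pa)] += ALPHA * (pt - Q[(ps, pa)])
        s = s2
```

The planning loop is what distinguishes this from plain Q-learning: most value updates come from the stored model, not from new interaction with the environment.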
<p>Getting an agent to act like a researcher and actually design and/or code a simulation from scratch would require a different kind of meta-environment. It is theoretically possible, but likely very hard to implement in a general way - even figuring out the reward scheme to express the goals of such an agent could be a challenge. I'm not aware of any examples, but it is entirely possible someone has attempted this kind of meta agent, because it is an interesting idea.</p>
<p>Possibly the simplest example would be a gridworld meta-environment where a "designer" agent could select the layout of objects in the maze, with the goal of making a second "explorer" agent's task progressively more difficult. The designer would be "creating the simulation" only in a very abstract way though, by setting easy-to-manage parameters of the environment, not writing low level code. </p>
<p>There is not much difference between the approach above and having two opposing agents playing a game. It is different from a turn-based game like chess in that each agent would complete a full episode, and then be rewarded by the outcome at the end of the combined two episodes. There are some similarities to GANs for image generation.</p>
| 171
|
reinforcement learning
|
Move blocks to create a designed surface
|
https://ai.stackexchange.com/questions/5098/move-blocks-to-create-a-designed-surface
|
<p>I am new to machine learning and AI, so forgive me if this is obvious. I was talking with a friend on how to solve this problem, and neither of us could figure out how to do it.</p>
<p>Say I have a grid area of 100x100 blocks, and I want a robot to build a horizontal 100x100 grid, 3 blocks high. I am given a random, but known, starting surface, always 100x100, but the height of the random surface can vary from 1 to 5 blocks. I have an extra reserve of blocks I can pick up, so I don't have to worry about running out. The robot can move in any direction, even diagonally at some cost penalty. The robot can obviously move a 4-high block to fill in a 2-high one, so each is at the design height of 3.
This sounds like a reinforcement learning problem, but would anyone be able to explain in more detail how I would do this, to a) minimize the number of moves, and b) get to the design surface? </p>
|
<p>Essentially, you could do something like have the robot randomly make moves (moving around and moving blocks) for some number of steps. Repeat this a bunch of times, and record the 'score' at the end (how close you are to a perfect result grid). Tell your algorithm to act more like the best scoring runs (optimize a loss function), and start over. Hopefully, you'll eventually get a robot that manages the task - the whole 'optimal path' thing will come about by the computer telling itself to learn from the lowest cost examples.</p>
<p>Remember, you're letting the machine do the hard thinking about the best way to do this or that. All you have to do is give it the framework to learn. </p>
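The loop described above is essentially a cross-entropy-method style search. Here is a toy sketch of it, with a stand-in scoring function (matching a hidden target plan) in place of a real grid comparison; the population sizes and smoothing factor are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "how close the result is to the design": the score simply
# counts moves that match a hidden 20-step binary target plan.
target = rng.integers(0, 2, size=20)

def score(plan):
    return int((plan == target).sum())

probs = np.full(20, 0.5)                       # start with random moves
for _ in range(40):
    plans = rng.random((100, 20)) < probs      # 100 random runs
    scores = np.array([score(p) for p in plans])
    elite = plans[np.argsort(scores)[-10:]]    # keep the 10 best-scoring runs
    probs = 0.7 * probs + 0.3 * elite.mean(0)  # act more like the best runs

best = (probs > 0.5).astype(int)
```

Each iteration nudges the move distribution toward what the best runs did, which is the "act more like the best scoring runs" step in code form.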
| 172
|
reinforcement learning
|
Why is On-Policy MC/TD Algorithm guaranteed to converge to optimal policy?
|
https://ai.stackexchange.com/questions/13502/why-is-on-policy-mc-td-algorithm-guaranteed-to-converge-to-optimal-policy
|
<p>Let's say we have a task where the cost depends entirely on the path length to a terminal state, so the goal of an agent would be to take actions to reach terminal state as quickly as possible.</p>
<p>Now let us say, we know the optimal path length is of length <span class="math-container">$10$</span>, and there are <span class="math-container">$n$</span> such paths possible. Each state has 5 possible actions. Let's say the scheme we are using to find optimal policy is On-Policy MC/TD(n) along with GLIE Policy improvement (Generalised Policy Iteration).</p>
<p>In the first Policy Iteration step, all actions are equally likely, therefore the probability of sampling an optimal path (or the agent discovering such a path) is <span class="math-container">$n \cdot \frac {1}{5^{10}} \approx n \cdot \frac {1}{2^{23}}$</span>. So, according to probability theory, we need to sample around <span class="math-container">$2^{23}/n$</span> episodes to discover at least one of the best paths (worst-case scenario).</p>
<p>Since it is not possible to go through such a huge number of samplings, let's say we do not sample the path. Thus, in the next Policy Iteration step (after GLIE Policy Improvement), some other sub-optimal path will have a higher probability of being sampled than the optimal path, hence the probability falls even lower. So there is a considerably high probability that we may not find the best path at all, yet theory says we will find <span class="math-container">$\pi^*$</span>, which indicates the best path.</p>
<p>So what is wrong in my reasoning here?</p>
|
<p>Your reasoning is fine. GLIE (Greedy in the Limit with Infinite Exploration) assumes that our agent does not act greedily all the time. As the number of samples approaches infinity, all state-action pairs will be explored, and hence the policy will converge to a greedy policy. The emphasis is on "the number of samples approaches infinity". Also, for GLIE Monte Carlo the initial values of Q do not matter, since they are replaced after the first update. </p>
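A sketch of those two properties on a 2-armed bandit (the arm values 0.2 and 0.8 and the noise level are arbitrary): epsilon decays as 1/k, so the policy becomes greedy in the limit, and the first sample-average update overwrites each initial Q value:

```python
import random

random.seed(0)

means = [0.2, 0.8]                 # true mean rewards of the two arms
Q, counts = [0.0, 0.0], [0, 0]

def pull(a):
    return means[a] + random.gauss(0, 0.1)

for a in (0, 1):                   # first update replaces the initial value
    counts[a] = 1
    Q[a] = pull(a)

for k in range(1, 5001):
    eps = 1.0 / k                  # GLIE schedule: exploration rate -> 0
    a = random.randrange(2) if random.random() < eps else (0 if Q[0] > Q[1] else 1)
    r = pull(a)
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a] # incremental sample-average update
```

With the 1/k schedule, exploration never fully stops over an infinite run (the expected number of exploratory pulls diverges), yet the greedy arm dominates action selection more and more, which is exactly the GLIE behaviour described above.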
| 173
|
reinforcement learning
|
What are the various problems RL is trying to solve?
|
https://ai.stackexchange.com/questions/28141/what-are-the-various-problems-rl-is-trying-to-solve
|
<p>I have read most of Sutton and Barto's introductory text on reinforcement learning. I thought I would try to apply some of the RL algorithms in the book to a previous assignment I had done on <a href="https://www.sokobanonline.com/" rel="nofollow noreferrer">Sokoban</a>, in which you are in a maze-like grid environment, trying to stack three snowballs into a snowman on a predefined location on the grid.</p>
<p>The basic algorithms (MC control, Q-learning, or Dyna-Q) seemed to all be based on solving whichever specific maze the agent was trained on. For example, the transition probabilities of going from coordinate (1,2) to (1,3) would be different for different mazes (since in one maze, we could have an obstacle at (1,3)). An agent that calculates its rewards based on one maze using these algorithms doesn't seem like it would know what to do given a totally different maze. It would have to <em>retrain</em>: 1) either take real life actions to relearn from scratch how to navigate a maze, or 2) be given the model of the maze, either exact or approximate (which seems infeasible in a real life setting) so that planning without taking actions is possible.</p>
<p>When I started learning RL, I thought that it would be more generalizable. This leads me to the question: Is this problem covered in multi-task RL? How would you categorize the various areas of RL in terms of the general problem that it is looking to solve?</p>
|
<blockquote>
<p>The basic algorithms (MC control, Q-learning, or Dyna-Q) seemed to all be based on solving whichever specific maze the agent was trained on.</p>
</blockquote>
<p>All RL algorithms are based on creating solutions to a defined state and action space. If you limit your state space representation and training to a single maze, then that is what will be learned. This is no different from other machine learning approaches - they learn the traits of a population by being shown samples from that population (not just one example). They also need to be built for the range of input parameters that you need them to handle.</p>
<p>In the case of RL, and your maze solver, that means the state representation needs to cover all possible mazes, not just a location in a single maze (there are ways to internalise some of the representation to the learning process such as using RNNs, but that is not relevant to the main answer here).</p>
<p>The toy environments in Sutton & Barto are often trivial to solve using non-RL approaches. They are not demonstrations of what RL can do, instead they have been chosen to explain how a particular issue related to learning works. Sutton & Barto does include a chapter on more interesting and advanced uses of RL - that is chapter 16 "Applications and Case Studies" <a href="http://incompleteideas.net/book/RLbook2020.pdf" rel="nofollow noreferrer">in the second edition</a>.</p>
<blockquote>
<p>When I started learning RL, I thought that it would be more generalizable.</p>
</blockquote>
<p>It is, but without some kind of pre-training to support generalisation from a low number of examples, you have to:</p>
<ul>
<li><p>Model the general problem</p>
</li>
<li><p>Train the agent on the general problem</p>
</li>
</ul>
<p>RL agents trained from scratch on new problems can seem very inefficient compared to learning by living creatures that RL roughly parallels. However, RL is not a model for general intelligence, but for learning through trial and error. Most examples start from no knowledge, not even basic priors for a maze such as the grid layout or the generalisable knowledge of movement and location.</p>
<p>If you do provide a more general problem definition and training examples, and use a function approximator that can generalise internally (such as a neural network), then an agent can learn to solve problems in a more general sense and <em>may</em> also generate internal representations that (approximately) match up to common factors in the general problem.</p>
| 174
|
reinforcement learning
|
A3C fails to solve MountainCar-v0 environment (implementation by OpenAI Gym)
|
https://ai.stackexchange.com/questions/12160/a3c-fails-to-solve-mountaincar-v0-enviroment-implementation-by-openai-gym
|
<p>While I've been able to solve MountainCar-v0 using Deep Q-learning, no matter what I try I can't solve this environment using policy-gradient approaches. As far as I learned from searching the web, this is a really hard environment to solve, mainly because the agent is given a reward only when it reaches the goal, which is a rare event. I tried to apply so-called "reward engineering", more or less substituting the reward given by the environment with a reward based upon the "energy" of the whole system (kinetic plus potential energy), but despite this no luck.
I ask you:</p>
<ul>
<li>is it correct to assume that MountainCar-v0 is beyond the current state-of-the-art A3C algorithm, so that it requires some human intervention to suggest to the agent the policy to follow, for example by adopting reward engineering?</li>
<li>could anyone provide any hint about which reward function could be used, provided that reward engineering is actually needed?</li>
</ul>
<p>Thanks for your help.</p>
|
<p>I don't know about your first question, but I got a basic policy gradient approach with the kinetic energy as reward working on <code>MountainCar-v0</code>. </p>
<p>You can implement it based on <a href="https://medium.com/@ts1829/policy-gradient-reinforcement-learning-in-pytorch-df1383ea0baf" rel="nofollow noreferrer">this blog</a> and the notebook you find there. It uses an <code>MLP</code> with one hidden layer of size 128 and standard policy gradient learning. </p>
<p>The reward engineering boils down to replacing the <code>reward</code> variable with the kinetic energy <span class="math-container">$v^2$</span> (no potential energy and no constant factor; the reward itself is not used). It takes <span class="math-container">$>1000$</span> episodes to solve the environment consistently.</p>
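The substitution itself, assuming the standard MountainCar-v0 observation layout of (position, velocity); the environment loop is shown only as comments, since it needs gym installed:

```python
def shaped_reward(observation):
    """Kinetic-energy reward: v**2, dropping the 1/2 factor, the mass,
    and the potential-energy term, as described above."""
    position, velocity = observation
    return velocity ** 2

# inside the usual training loop (sketch):
#   obs, env_reward, done, info = env.step(action)
#   reward = shaped_reward(obs)   # env_reward itself is discarded
```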
<p>I'm afraid the solution is not very satisfactory and I don't have the feeling there is much to learn from it. The solution is originally for the cartpole problem and it stops working for me if I change hyperparameters/optimizer or the specifics of the reward. </p>
| 175
|
reinforcement learning
|
Should I represent my reinforcement learning as an episodic or continuous task?
|
https://ai.stackexchange.com/questions/33941/should-i-represent-my-reinforcement-learning-as-an-episodic-or-continuous-task
|
<p>I would like the community to help me understand if the following example would be better represented as an episodic or a continuous task; this will help me structure the problem and choose the right RL algorithm.</p>
<p>The agent starts with an initial score <code>x</code> of, let's say, 100. The agent's objective is to maximise its score. There is no upper bound! Theoretically the agent can get a score up to infinity, and there is no termination based on the number of steps, therefore the agent could play forever. However, the score can't be negative, and if the agent gets to a score of zero, the episode should terminate and the environment reset. I am undecided what would be the best representation, because if the agent learns how to play, the episode would never terminate, and the agent would theoretically play forever. However, if the score gets to zero, there is no way for the agent to continue playing, so the environment needs to reset. Thank you.</p>
| 176
|
|
reinforcement learning
|
How to normalize rewards in DQN?
|
https://ai.stackexchange.com/questions/33947/how-to-normalize-rewards-in-dqn
|
<p>I want to use a Deep Q-Network for a specific problem. My immediate rewards (<span class="math-container">$r_t = 0$</span>) are all zeros. But my terminal reward is a large positive value <span class="math-container">$(r_T=100$</span>). How could I normalize rewards to stabilize the training? I think clipping rewards to be in range <span class="math-container">$[0,1]$</span> makes training harder because it just forces most values to be near zero.</p>
| 177
|
|
reinforcement learning
|
Is there an analogy between client/server in web development and agent/environment in reinforcement learning?
|
https://ai.stackexchange.com/questions/6897/is-there-an-analogy-between-client-server-in-web-development-and-agent-environme
|
<p>I've recently come across the client-server model. From my understanding, the client sends a request to the server, to which the server replies with a response. In this case, both the request and response are vectors.</p>
<p>In reinforcement learning, the agent communicates with the environment via an "action", to which the environment sends a scalar reward signal. The "goal" is to maximize this scalar reward signal in long run.</p>
<p>Is there an analogy between client/server in web development and agent/environment in reinforcement learning?</p>
|
<blockquote>
<p>Is there an analogy between client/server in web development and agent/environment in reinforcement learning?</p>
</blockquote>
<p>The answer is "not really". There is no useful analogy here that allows any insight into RL from web server knowledge or vice versa.</p>
<p>However, you <em>could</em> set up an agent where the goal was to collect information, and the available actions were to make web requests. Clearly, in order to do this, you would need to make use of the client/server model for web servers, with the agent having control over the client web requests, and the environment being the network and servers of the world wide web.</p>
<p>There are some very hard challenges to construct an open-ended "web assistant" agent. Here are a couple that I can think of:</p>
<ul>
<li><p>How to describe actions? Starting with raw web requests composed as strings would likely be very frustrating. Probably you would simplify and have a first action be a call to a search engine with some variation of the topic description, and then decisions about which links to follow, or perhaps whether to refine the search to better fetch sites related to the topic as it is being built.</p>
</li>
<li><p>How to create a model of reward for collecting information? The first major stumbling block would be to measure the amount of <em>useful</em> information that the agent had found on any request.</p>
</li>
</ul>
<p>I think with current levels of Natural Language Processing, setting an agent free to discover information according to some goal from a text topic description is a very hard task, beyond cutting edge research. It would definitely be unreasonable to expect any such agent to end up with any resemblance to "understanding" subject matter from text. The agent would have very little ability to tell the difference between accurate facts or lies, or just grammatically correct gibberish.</p>
<p>One interesting idea for agents trying to learn unsupervised from exploring an environment is creating a reward signal from data compression. An agent will have learned something useful about its environment, if on processing new information, it is able to compress its existing world model more efficiently. This is the basic concept behind ideas of general learning agents from <a href="https://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber" rel="nofollow noreferrer">Jürgen Schmidhuber</a>, <a href="https://en.wikipedia.org/wiki/Marcus_Hutter" rel="nofollow noreferrer">Marcus Hutter</a> and others. Research into this idea <em>could</em> be a driver towards creating more generic AI learning systems - however, it is one idea amongst many in AI, and so far it is research-only and has not yet led to anything as practical as an AI web-searching assistant.</p>
| 178
|
reinforcement learning
|
Does reinforcement learning lend itself well to changes in the environment due to external factors?
|
https://ai.stackexchange.com/questions/34146/does-reinforcement-learning-lend-itself-well-to-changes-in-the-environment-due-t
|
<p>I've been reading about reinforcement learning, and it seems to me that reinforcement learning assumes the environment is static, and therefore that the reward for taking a particular action will be the same each time. But what if something happens outside the agent's control that changes the reward?</p>
<p>As a simplified example, suppose the agent in a RL algorithm is a grape farmer living on a planet known for extreme weather events. Its goal is to maximize profit/minimize losses. Grape seeds cost 1000 dollars. It takes 1 year for the grape seeds to grow delicious grapes. There is a 20% chance of the weather destroying the grapes during the year, leading to the agent losing 1000 dollars. However, if the weather does NOT destroy the grapes, the agent wins $2000.</p>
<p>The agent starts the year with two choices:</p>
<ul>
<li>Grow grapes</li>
<li>Don't grow grapes</li>
</ul>
<p>In its first year, an adverse weather event occurs, killing the grapes. Will the agent ever grow grapes again, or will it forever decide that the best way to win is to not plant grapes? Likewise, if it ever DOES grow grapes, can we trust it to know when not to grow grapes?</p>
|
<p>In the question, you are not describing the environment <em>changing</em>. Instead, there is a fixed 20% chance of a bad weather event each year. Such events can be modelled as a static environment with stochastic results.*</p>
<p>If nothing else happened in the year, it is easy to calculate the <em>expected</em> immediate reward for each action:</p>
<p><em>Not planting seeds.</em> <span class="math-container">$0$</span></p>
<p><em>Planting seeds.</em> <span class="math-container">$-1000 + 0.8 \times 2000 = 600$</span></p>
<p>So the question resolves to: Can a learning algorithm discover these odds, and make the correct decision, based only on its experience of individual years? Perhaps it will be unlucky in its first year and simply give up?</p>
<p>The answer is yes, most reinforcement learning (RL) algorithms will cope just fine, and learn that planting seeds is the best option in the simple environment that you have proposed.</p>
<p>There is an important caveat: To learn the true value of each option, the algorithm must be one that <em>explores</em>. That means, whichever action the agent thinks is currently best, there must be some chance that it decides to take (what it thinks is) a worse option. If the worse option turns out better than it expected, the agent can adjust its expectations, and maybe choose that one more frequently in future, and vice versa if the option turns out worse than expected.</p>
<p>Nearly all RL methods do this, by setting some kind of exploration parameter. The more the agent explores, the more it will learn about alternatives to its current best guess at optimal behaviour. However, whilst exploring it may pay the price for making many sub-optimal decisions. Your planning agent may decide to not plant anything every few years, just to see if that is better, even after it has established that planting is the best way to get profit.</p>
<p>It is often hard to balance between the learning obtained by trying different options vs doing what the agent currently thinks is best. This is called the <a href="https://conceptually.org/concepts/explore-or-exploit" rel="nofollow noreferrer">exploration-exploitation tradeoff</a>, and is a fundamental issue in RL.</p>
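<p>As a rough illustration of both points above (the expected values and the need for exploration), here is a tiny epsilon-greedy simulation of the two-action problem. The learning parameters are illustrative assumptions, not part of the original question:</p>

```python
import random

random.seed(0)

EPSILON = 0.1  # exploration rate (assumed value, for illustration only)
ALPHA = 0.1    # learning rate

def step(action):
    """Action 0: don't plant (reward 0). Action 1: plant grapes."""
    if action == 0:
        return 0.0
    # 20% chance the weather destroys the crop: lose the 1000 dollar seed cost.
    # Otherwise: -1000 seed cost + 2000 payoff = +1000.
    return -1000.0 if random.random() < 0.2 else 1000.0

q = [0.0, 0.0]  # action-value estimates for the two choices
for _ in range(5000):
    # Epsilon-greedy: usually act greedily, but sometimes explore.
    if random.random() < EPSILON:
        a = random.randrange(2)
    else:
        a = 0 if q[0] > q[1] else 1
    r = step(a)
    q[a] += ALPHA * (r - q[a])  # incremental update toward the sampled reward

print(q)  # q[1] should settle near the true expected value of 600
```

<p>Even if the first few years are unlucky, occasional exploration keeps updating <code>q[1]</code>, so the agent recovers the true expectation rather than giving up on planting forever.</p>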
<hr />
<p>* For comparison, a non-stationary environment in RL would be one in which the probabilities and/or reward distributions change so that values learned from older experience are no longer valid. If climate change impacted your grape-growing region such that after some year the chance of a bad storm was 80%, and the agent had no way of knowing that in advance, then the agent might lose money trying to follow its previously best policy. If it was still learning, it would adapt to these new rules eventually though.</p>
| 179
|
reinforcement learning
|
Confusion about state-action value function notation used in Sutton-Barto RL book
|
https://ai.stackexchange.com/questions/34514/confusion-about-state-action-value-function-notation-used-in-sutton-barto-rl-boo
|
<p>Let <span class="math-container">$\pi$</span> be an <span class="math-container">$\epsilon$</span>-soft policy with state-action value function <span class="math-container">$q_{\pi}(s,a)$</span>
and <span class="math-container">$\pi'$</span> be an <span class="math-container">$\epsilon$</span>-greedy policy with respect to <span class="math-container">$q_{\pi}$</span>.</p>
<p>In Sutton-Barto RL book (page 101, eq. 5.2), they define</p>
<p><span class="math-container">\begin{align}
q_{\pi}(s, \pi'(s))=\displaystyle \sum_{a}\pi'(a|s)q_{\pi}(s,a).
\end{align}</span>
Normally <span class="math-container">$q_{\pi}(s, a)$</span> means taking action <span class="math-container">$a$</span>
at state <span class="math-container">$s$</span> and then following the policy <span class="math-container">$\pi$</span>. Based on this convention, the notation
<span class="math-container">$q_{\pi}(s, \pi'(s))$</span> is weird because <span class="math-container">$\pi'(s)$</span> is not a single action like <span class="math-container">$a$</span>.
I.e., <span class="math-container">$\pi'$</span> is a stochastic policy
and hence only <span class="math-container">$\pi'(a|s)$</span> makes sense.</p>
|
<p>The book presents a slight abuse of notation where <span class="math-container">$\pi'(s)$</span> is shorthand for a distribution of action values described by the more correct <span class="math-container">$\pi'(a|s)$</span>. At that point there is an implied function composition of <span class="math-container">$q_{\pi}$</span> over this distribution, resolved on the right hand side to its expectation.</p>
<p>I believe it does this so that it can briefly show something familiar from the deterministic policy improvement theorem. You could almost read it as "the equivalent of this term (LHS) taken from the previous proof must now be written like this (RHS)".</p>
<p>It would be more correct notation to write something like this:</p>
<p><span class="math-container">$$\mathbb{E}[q_{\pi}(s,A')|A' \sim \pi'] = ...$$</span></p>
<p>or perhaps just start with the right hand side.</p>
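<p>To make the shorthand concrete, here is a quick numeric check (with made-up values for <span class="math-container">$q_{\pi}$</span> and <span class="math-container">$\pi'$</span>) that the right-hand sum is exactly the expectation of <span class="math-container">$q_{\pi}(s, A')$</span> with <span class="math-container">$A' \sim \pi'$</span>:</p>

```python
import random

# Hypothetical values for a single state s with three actions.
q = [1.0, 4.0, -2.0]        # q_pi(s, a) for a = 0, 1, 2
pi_prime = [0.2, 0.5, 0.3]  # pi'(a|s), e.g. an epsilon-greedy distribution

# Right-hand side: sum over actions of pi'(a|s) * q_pi(s, a).
rhs = sum(p * qa for p, qa in zip(pi_prime, q))

# Left-hand side read as an expectation: sample A' ~ pi'(.|s)
# and average q_pi(s, A') over many draws.
random.seed(1)
n = 200_000
lhs = sum(q[random.choices(range(3), weights=pi_prime)[0]] for _ in range(n)) / n

print(rhs)  # exact value: 0.2*1 + 0.5*4 + 0.3*(-2) = 1.6
print(lhs)  # Monte Carlo estimate, close to 1.6
```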
| 180
|
reinforcement learning
|
Behaviour policy must be stochastic in states where it is not identical to the Optimal policy
|
https://ai.stackexchange.com/questions/34390/behaviour-policy-must-be-stochastic-in-states-where-it-is-not-identical-to-the-o
|
<p>In <a href="http://incompleteideas.net/book/RLbook2020trimmed.pdf" rel="nofollow noreferrer">Sutton & Barto Reinforcement learning book</a>, page 103 (chapter: off-policy learning
via importance sampling), the following statement is given:</p>
<p>"In order to use episodes from <span class="math-container">$b$</span> to estimate values for <span class="math-container">$\pi$</span>, we require that every action
taken under <span class="math-container">$\pi$</span> is also taken, at least occasionally, under <span class="math-container">$b$</span>. That is, we require that
<span class="math-container">$\pi(a|s) > 0$</span> implies <span class="math-container">$b(a|s) > 0$</span>. This is called the assumption of <em>coverage</em>. It follows
from coverage that <span class="math-container">$b$</span> must be stochastic in states where it is not identical to <span class="math-container">$\pi$</span>. "</p>
<p>where <span class="math-container">$b$</span> refers to a behavior policy and <span class="math-container">$\pi$</span> to the target optimal policy.
I don't understand how the above conclusion was obtained from the definition of coverage.</p>
|
<p>Say we are in state <span class="math-container">$s$</span>, and <span class="math-container">$b$</span> chooses <span class="math-container">$a'$</span> but <span class="math-container">$\pi$</span> chooses <span class="math-container">$a$</span>. Since <span class="math-container">$\pi$</span> chose <span class="math-container">$a$</span> (<span class="math-container">$\pi(a | s) >0$</span>), coverage implies that <span class="math-container">$b(a|s)>0$</span>. But <span class="math-container">$a'$</span> was ultimately chosen by <span class="math-container">$b$</span>, so it must be that <span class="math-container">$b(a'|s)>0$</span>. Hence there is some chance that <span class="math-container">$a'$</span> is chosen and there is some chance that <span class="math-container">$a$</span> is chosen, meaning <span class="math-container">$b$</span> is stochastic in state <span class="math-container">$s$</span>. Thus the above conclusion follows from the definition of coverage when the policies are not identical under <span class="math-container">$s$</span>.</p>
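<p>A minimal sketch of why this matters for importance sampling (the policies here are hypothetical): the per-step ratio <span class="math-container">$\pi(a|s)/b(a|s)$</span> is only defined when <span class="math-container">$b(a|s) > 0$</span>, which is exactly what coverage guarantees for every action <span class="math-container">$\pi$</span> might take:</p>

```python
# Hypothetical action probabilities in a single state s, over actions 0 and 1.
pi = {0: 1.0, 1: 0.0}  # target policy: deterministic, always takes action 0
b = {0: 0.5, 1: 0.5}   # behaviour policy: stochastic here, satisfying coverage

def importance_ratio(action):
    # Coverage (pi(a|s) > 0 implies b(a|s) > 0) guarantees this division
    # is safe for any action that pi can actually take.
    return pi[action] / b[action]

print(importance_ratio(0))  # 2.0: experience matching pi is up-weighted
print(importance_ratio(1))  # 0.0: experience pi would never generate is zeroed out
```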
| 181
|
reinforcement learning
|
Do you think this method of creating AI is valid?
|
https://ai.stackexchange.com/questions/34702/do-you-think-this-method-of-creating-ai-is-valid
|
<ol>
<li><p>If the agent wins the game, the agent is rewarded</p>
</li>
<li><p>If the agent wins the game, the game is rewarded</p>
</li>
<li><p>Both the agent and the game are reinforcement learning agents.</p>
</li>
</ol>
<p>When the agent wins the game, the game is rewarded, so the game creates various game winning conditions.</p>
<p>As the agent satisfies various game winning conditions, the reward increases, so the agent develops to win various games.</p>
<p>An agent that can win various games develops into a general-purpose intelligence.</p>
|
<p>What you are describing is a <a href="https://arxiv.org/pdf/1707.00183.pdf" rel="nofollow noreferrer">teacher-student framework</a> in which you have one agent (the teacher) who determines the complexity of the task that another agent (the student) must accomplish. If the task is too difficult for the student to accomplish, the teacher receives little to no reward and therefore changes the environment (the game), making it easier for the student to complete it.</p>
<p>The paper <a href="https://arxiv.org/abs/1705.06366" rel="nofollow noreferrer">Automatic Goal Generation for Reinforcement Learning Agents</a> is quite interesting because it applies this concept using a Generative Adversarial Network (GAN). Here the generator network is the teacher and the adversarial network is the student trying to solve the task created by the generator network. The results in the paper demonstrate that no prior knowledge of the environment is required for the student to solve it and it can even be done with sparse reward, i.e., 1 for winning the game and 0 otherwise.</p>
| 182
|
reinforcement learning
|
Confusing statement in Sutton-Barto's RL book in Section 8.5 ( Expected vs. Sample Updates)
|
https://ai.stackexchange.com/questions/34940/confusing-statement-in-sutton-bartos-rl-book-in-section-8-5-expected-vs-samp
|
<p>In Sutton-Barto RL's book (page 174) it says:</p>
<blockquote>
<p>The advantage of sample updates shown in Figure 8.7 is probably an underestimate of
the real effect. In a real problem, the values of the successor states would be estimates
that are themselves updated. By causing estimates to be more accurate sooner, sample
updates will have a second advantage in that the values backed up from the successor
states will be more accurate. These results suggest that sample updates are likely to be
superior to expected updates on problems with large stochastic branching factors and
too many states to be solved exactly.</p>
</blockquote>
<p>This is very confusing because before the above paragraph they say</p>
<blockquote>
<p>The values at the next states are assumed correct,...</p>
</blockquote>
<p>If so, the results in Figure 8.7 for sample updates should be worse in reality.</p>
|
<blockquote>
<p>If so, the results in Figure 8.7 for sample updates should be worse in reality.</p>
</blockquote>
<p>Yes, the sample updates will perform worse than shown. The later paragraph is explaining that the results for expected updates may be even worse <em>relative</em> to sample updates. It is not claiming that sample updates are unaffected if you remove the simplifying assumption used in the comparison.</p>
<p>The assertion on page 174 is that the disparity between the approaches may in practice persist for longer than the idealised situation (of already having correct values to backup/bootstrap from), because it may take longer to converge the expected values from the original estimates than the sampling approach would take. This is due to the same effect impacting each time step in a longer trajectory, whilst figure 8.7 shows the impact only for a single time step.</p>
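<p>For concreteness, here is the comparison in miniature (my own toy numbers, not the book's experiment): an expected update performs one full backup over all successor states, while sample updates repeatedly back up from a single sampled successor and converge toward the same value:</p>

```python
import random

random.seed(0)
GAMMA = 0.9

# Hypothetical successor states: (name, transition probability, value estimate).
successors = [("s1", 0.7, 10.0), ("s2", 0.3, -5.0)]
reward = 1.0

# Expected update: one backup whose cost grows with the branching factor.
v_expected = reward + GAMMA * sum(p * v for _, p, v in successors)

# Sample updates: many cheap backups, each from one sampled successor.
v_sample, alpha = 0.0, 0.05
weights = [p for _, p, _ in successors]
for _ in range(20000):
    _, _, v_next = random.choices(successors, weights=weights)[0]
    v_sample += alpha * ((reward + GAMMA * v_next) - v_sample)

print(v_expected)  # 1 + 0.9 * (0.7*10 + 0.3*(-5)) = 5.95
print(v_sample)    # noisy estimate near the same value
```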
| 183
|
reinforcement learning
|
In reinforcement learning, how to craft observation space when environment is made of multiple blocks?
|
https://ai.stackexchange.com/questions/35265/in-reinforcement-learning-how-to-craft-observation-space-when-environment-is-ma
|
<p>In reinforcement learning problems like cartpole, the environment is usually a single system that takes an input and gives an output. For example, in cartpole, given positions and velocities as input, we can tell if the pole is going to fall. Hence, we can craft the observation space from positions and velocities.</p>
<p>But if the environment is made of sequential blocks of systems {A, B, C}, where the input is given to A, which passes to B, then to C, then to the output, how do we craft observations?</p>
<p>Options:</p>
<ol>
<li>We use a separate block D (linear or non-linear) to create numerical observations for the input, which is then fed to A. Problem with this: an observation crafted out of D is not correlated to the output at C. For example, in cartpole, I could pass the velocities and positions through a block D to get the amount of fuel left in the cart, and then feed the amount of fuel left as the observation to the actual system which determines if the pole falls.</li>
<li>We use block A to create numerical observations for input. Better than option 1 but observations crafted out of A are not reflective of the entire process (A -> B -> C).</li>
</ol>
<p>So, how do we craft observations for such an RL problem?</p>
<p><strong>Edit:</strong> An example scenario. I have a system A that takes a song and gives N similar songs as output. I have a system B that takes a song and determines if the song is liked by more than 10 million people. Then I have a system C that takes a song liked by more than 10 million people and says if the song is of genre 'pop' or 'edm'.</p>
<p>I have 10 'edm' songs (liked by more than 10 million people) to start with, and I want to apply Reinforcement Learning to expand the 10 'edm' 'liked by more than 10 million people' songs to more such songs. By optimally, I mean we could always brute force each song, but the system A is non-deterministic. It can give N similar songs (N ranges from 0 to 100). In X amount of time, I want to maximize the creation of such songs. Say, song X and song Y are the initial inputs. After we pass them through A->B->C, we could get 100 such similar ('edm' & 'liked by more than 10 million people') songs for X but only 2 for Y. Brute force would waste a lot of time in places that would not yield many such songs.</p>
<p>Now, an RL agent can decide which song to pick to get the maximum number of similar such songs in X amount of time. But the environment is A -> B (is it liked by 10 million people, output is 1 or 0) -> C (is it 'edm' or 'pop', output is 1 or 0). Songs are represented as signals (arrays of continuous values between -inf and +inf) by passing them through a system D. The system D takes the song name, downloads the song, and gets the signals (the array representation of the song). So, these numerical representations can be used instead of song names.</p>
<p>How do I craft observations for this combined environment?</p>
<p>As far as I know, the observation space should be such that, given 2 observations, we can distinguish which one gives better results; e.g., in cartpole, given 2 sets of observations (positions1, velocities1) and (positions2, velocities2), the system can identify which observation is relatively better or worse. But in our system (A -> B -> C), how do we go about crafting an observation that is correlated with the output?</p>
| 184
|
|
reinforcement learning
|
Play against your own RL-trained AI from gym retro
|
https://ai.stackexchange.com/questions/36398/play-against-your-own-rl-trained-ai-from-gym-retro
|
<p>So far I have seen people implement reinforcement learning to build an AI to play and complete games in gym retro, such as Street Fighter, racing games, and so on. However, I was wondering if it is possible to play against your own trained AI to get some sort of human evaluation? Does anyone have any ideas for this? Thank you.</p>
|
<p>I don't understand why the post is downvoted.</p>
<ol>
<li>I understand that you can use gym-retro to build a reinforcement learning agent to play some particular game, let's say Street Fighter.</li>
<li>What I want to know is: is it possible to fight against the agent that you have trained, rather than watching a mere trailer (render) of that agent fighting against the CPU bot in that game? If it is possible, then how should I go about doing it, assuming I have a trained RL agent for Street Fighter? Are there any resources out there that can guide this, since the Python gym-retro documentation does not have much detail on deploying trained agents against humans?</li>
</ol>
| 185
|
reinforcement learning
|
Reinforcement learning applicable to a scheduling problem?
|
https://ai.stackexchange.com/questions/28888/reinforcement-learning-applicable-to-a-scheduling-problem
|
<p>I have a certain scheduling problem and I would like to know in general whether I can use Reinforcement learning (and if so what kind of RL) to solve it. Basically my problem is a mixed-integer linear optimization problem. I have a building with an electric heating device that converts electricity into heat. So the action vector (decision variable) is <span class="math-container">$x(t)$</span> which quantifies the electrical power of the heating device. The device has to take one decision for every minute of the day (so in total there are <span class="math-container">$24$</span> hours <span class="math-container">$\times 60$</span> minutes <span class="math-container">$= 1440$</span> variables). Each of those variables is a continuous variable and can have any value between <span class="math-container">$0$</span> and <span class="math-container">$2000 W$</span>.</p>
<p>The state space contains several continuous variables:</p>
<ul>
<li>External varying electricity price per minute: Between <span class="math-container">$0$</span> Cents and <span class="math-container">$100$</span> Cents per kWh (amount of energy)</li>
<li>Internal temperature of the building: Basically between every possible value but there is a constraint to have the temperature between <span class="math-container">$20 °C$</span> and <span class="math-container">$22 °C$</span></li>
<li>Heat demand of the building: Any value between <span class="math-container">$0 W$</span> and <span class="math-container">$10,000 W$</span></li>
<li>Varying "efficiency" of the electrical heating device between <span class="math-container">$1$</span> and <span class="math-container">$4$</span> (depending on the external outside temperature)</li>
</ul>
<p>The goal is to minimize the electricity costs (under a flexible electricity tariff) and to not violate the temperature constraint of the building. As stated before, this problem can be solved by mathematical optimization (mixed-integer linear program). But I would like to know if you can solve this also with reinforcement learning? As I am new to reinforcement learning I would not know how to do this. And I have some concerns about this.</p>
<p>Here I have a very large state space with continuous values. So I can't build a comprehensive <span class="math-container">$Q$</span>-table as there are too many values. Further, I am not sure whether the problem is a dynamic programming problem (as most/all reinforcement learning problems are?). From an optimization point of view it is a mixed-integer linear problem.</p>
<p>Can anyone tell me if and how I could solve this by using RL? If it is possible I would like to know which type of RL method is suitable for this. Maybe Deep-Q-Learning but also some Monte-Carlo policy iteration or SARSA? Shall I use model-free or model-based RL for this?</p>
|
<p>Details matter, and it is possible that your problem is best solved using classic control (solving the state equations) or operations research style optimisation. However, RL is also a good choice because it can be made to learn a controller that is not brittle when things go wrong.</p>
<p>One thing you will have to accept with RL is that the constraints will be soft constraints, even if you penalise them heavily. That is, you can expect that the internal temperature could drift outside of bounds. It definitely will during learning. A major design concern when framing the problem for reinforcement learning is how to weight the different rewards that represent your goals. You can weight your strict constraints higher, but at least initially they need to be low enough that the cost saving is not completely swamped.</p>
<p>I would suggest that your worst constraint failure penalty is slightly larger than the highest possible electricity cost for a single time step. That would mean the agent is always incentivised to spend money if it has to, as opposed to break the constraints, whilst still being able to explore what happens when it does break the constraints without having to cope with predicting large numbers.</p>
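<p>As a sketch of that weighting idea (every constant below is an illustrative assumption, not a value from the question): the per-step reward is the negative electricity cost, and the constraint penalty is set slightly above the worst possible single-step cost:</p>

```python
# Illustrative constants only; adjust to the real problem.
MAX_POWER_W = 2000.0
MAX_PRICE_CENTS_PER_KWH = 100.0
STEP_HOURS = 1.0 / 60.0  # one decision per minute

# Worst possible electricity cost of a single step, in cents.
MAX_STEP_COST = (MAX_POWER_W / 1000.0) * STEP_HOURS * MAX_PRICE_CENTS_PER_KWH

# Slightly larger than the worst single-step cost, so the agent always
# prefers spending money over violating the temperature constraint.
CONSTRAINT_PENALTY = 1.2 * MAX_STEP_COST

def reward(power_w, price_cents_per_kwh, indoor_temp_c):
    cost = (power_w / 1000.0) * STEP_HOURS * price_cents_per_kwh
    r = -cost
    if not 20.0 <= indoor_temp_c <= 22.0:  # soft constraint on temperature
        r -= CONSTRAINT_PENALTY
    return r

print(reward(2000.0, 100.0, 21.0))  # worst pure cost: about -3.33 cents
print(reward(0.0, 100.0, 19.0))     # constraint breach: -4.0 cents
```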
<p>There are lots of types of RL. Some are better at different kinds of problems. I would characterise your problem as you have described it as:</p>
<ul>
<li><p>Episodic - but only for convenience of describing the problem. In fact, your agent with a 24-hour episode will be incentivised to allow internal temperature to drop at the end of the 24 hours to save money, when it does not care what might happen immediately afterwards. Depending on the price of electricity at that point, it could well be more optimal to spend more. This may only be a small difference from truly optimal behaviour, but you might play to a strong point of RL by re-framing the problem as a continuing one (where mixed-integer linear optimisation may be harder to frame).</p>
</li>
<li><p>Continuous state space, with low dimensionality.</p>
<ul>
<li>If prices are known in advance, you may want to augment the state space so that the agent knows how long it has at current price and whether the next price will be higher or lower. Alternatively, if they always follow the same time schedule, you could add the current time as a state variable. Either way, that allows the agent to take advantage of the temperature bounds. For instance, it could load up on cheap heating before a price hike, or allow the temperature to drop to minimum acceptable if cheaper electricity is about to become available.</li>
</ul>
</li>
<li><p>Large, possibly continuous action space. You might want to consider approximating this to e.g. 21 actions (0 W, 100 W, ..., 2000 W), as optimising this simpler variant will be easier to code (a DQN could do it), whilst it may not significantly affect optimality of any solution.</p>
</li>
</ul>
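<p>The discretisation suggested above could be as simple as the following mapping (the 100 W step size is an assumption taken from the 21-action example):</p>

```python
# Map a discrete DQN action index to a continuous power level in watts.
N_ACTIONS = 21
POWER_STEP_W = 100.0

def action_to_power(action_index):
    assert 0 <= action_index < N_ACTIONS
    return action_index * POWER_STEP_W

print(action_to_power(0))   # 0.0 W
print(action_to_power(20))  # 2000.0 W
```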
<p>I don't think you could simplify your state space in order to use Q tables. So the DQN agent is probably the simplest that you could use here, provided you are willing to discretise the action space.</p>
<p>If you don't want to discretise the action space, then you will want to use some form of policy gradient approach. This will include a policy network that takes current state as input and then output a distribution of power level choices - e.g. a mean and standard deviation for the power choice. In production use you may be able to set the standard deviation to zero and use the mean as the action choice. A method like A3C can be used to do train such an agent.</p>
<p>I suggest that you discretise the actions and use a DQN-based agent to learn an approximate optimal policy for your environment. If that returns a promising result, you could either stop there or try to refine it further using continuous action space and A3C.</p>
<p>Also, you will want to practice using DQN on a simpler problem before diving in to your main project. Two reasonable learning problems here might be <a href="https://gym.openai.com/envs/CartPole-v1/" rel="nofollow noreferrer">Cartpole-v1</a> and <a href="https://gym.openai.com/envs/LunarLander-v2/" rel="nofollow noreferrer">LunarLander-v2</a> which also has a <a href="https://gym.openai.com/envs/LunarLanderContinuous-v2/" rel="nofollow noreferrer">continuous actions variant</a>. Learning enough about setting up relevant RL methods to solve these toy problems should put you on a good footing to handle your more complex problem.</p>
<p>Keras documentation includes an <a href="https://keras.io/examples/rl/deep_q_network_breakout/" rel="nofollow noreferrer">example DQN for Atari Breakout</a>, that you may be able to use as the basis for building your own code.</p>
| 186
|
reinforcement learning
|
Is it required that taking an action updates the state?
|
https://ai.stackexchange.com/questions/21084/is-it-required-that-taking-an-action-updates-the-state
|
<p>For some environments, taking an action may not update the environment state. For example, a trading RL agent may take an action to buy shares s. The state at time t, which is the time of investing, is represented as the interval of the 5 previous prices of s. At t+1 the share price has changed, but it may not be as a result of the action taken. Does this affect RL learning, and if so, how? Is it required that the state is updated as a result of taking actions for agent learning to occur?</p>
<p>In gaming environments it is clear how actions affect the environment. Can some rules of RL break down if no "noticeable" environment change takes place as a result of actions?</p>
<p>Update:</p>
<p>"Actions influence the state transitions" - is my understanding correct:
If transitioning to a new state is governed by epsilon-greedy and epsilon is set to .1, then with .1 probability the agent will choose the action from the Q-table which has the maximum reward for the given state. Otherwise the agent randomly chooses and performs an action, then updates the Q-table with the discounted reward received from the environment for the given action.</p>
<p>I've not explicitly modeled an MDP and just defined the environment and let the agent determine best actions over multiple episodes of choosing either a random action or the best action for the given state, the selection is governed by epsilon greedy.</p>
<p>But perhaps I've not understood something fundamental in RL. I'm ignoring MDP in large part as I'm not modeling the environment explicitly. I don't set the probabilities of moving from each state to other states.</p>
|
<p>A very vague question. What's the objective?</p>
<p>Reinforcement Learning (RL) typically uses the Markov Decision Process framework, which is a sequential decision making framework. In this framework, actions influence the state transitions. In other words, RL deals with controlling (via actions) a Markov chain. The objective in RL is to figure out how to take actions in an optimal (in some sense) way!</p>
<p>If, in the application you mentioned, the actions don't influence the state transitions and the objective is to predict states, RL is not required. It's just a regression/time-series problem.</p>
| 187
|
reinforcement learning
|
What is Imagination Learning and Imagination machines?
|
https://ai.stackexchange.com/questions/5451/what-is-imagination-learning-and-imagination-machines
|
<p>I recently came across a <a href="https://www.quora.com/I-found-many-of-your-answers-on-reinforcement-learning-extremely-insightful-Im-at-Google-and-you-mentioned-that-you-were-giving-a-talk-about-imagination-learning-I-would-love-to-attend-if-possible" rel="nofollow noreferrer">Quora post</a>, where I saw the term "Imagination Learning". It seems to be based on something called "<a href="https://people.cs.umass.edu/~mahadeva/Site/About_Me.html" rel="nofollow noreferrer">Imagination Machines</a>" (the link is based on a guy's work profile as of now; subject to change).</p>
<p>The only thing that I could find on Internet about it is this paper: <a href="https://arxiv.org/pdf/1707.06203.pdf" rel="nofollow noreferrer">Imagination-Augmented Agents for Deep Reinforcement Learning</a>. (But I'm not sure if it's related to that concept.)</p>
<p>Any ideas on this would be appreciated.</p>
|
<p>The whole idea of Imagination Learning seems to be in its infancy. The author of the response you linked wrote a <a href="https://people.cs.umass.edu/~mahadeva/papers/aaai2018-imagination.pdf" rel="nofollow noreferrer">paper</a> on the subject, but as he notes the paper is more of an outline for "a new overarching challenge for AI." The second link you posted is not directly related to the ideas referenced in the paper on Imagination Learning, other than that both use the idea of imagination in humans as inspiration. In short, there does not seem to be much information on this topic and I suspect it will stay that way until we learn some of the underlying processes that go along with imagination in humans and then maybe Imagination Machines may come to fruition. </p>
| 188
|
reinforcement learning
|
What is the physics engine used by DeepMimic?
|
https://ai.stackexchange.com/questions/7958/what-is-the-physics-engine-used-by-deepmimic
|
<p>I found a <a href="https://youtu.be/vppFvq2quQ0" rel="nofollow noreferrer">video for the paper <em>DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills</em>
</a> on YouTube.</p>
<p>I looked in the related paper, but could not find details of how to the environment was created, such as the physics engine it used. I would like to use it, or something similar.</p>
|
<p><a href="https://pybullet.org/wordpress/" rel="nofollow noreferrer">Bullet physics engine</a></p>
<p>Their <a href="https://arxiv.org/pdf/1804.02717.pdf" rel="nofollow noreferrer">paper</a> says</p>
<pre><code>Physics simulation is performed at 1.2kHz using the Bullet physics engine [Bullet 2015].
</code></pre>
| 189
|
reinforcement learning
|
How to design the reward for an action which is the only legal action at some state
|
https://ai.stackexchange.com/questions/8662/how-to-design-the-reward-for-an-action-which-is-the-only-legal-action-at-some-st
|
<p>I am working on an RL project, but got stuck at one point: the task is continuous (non-episodic). Following suggestions from Sutton's <a href="http://incompleteideas.net/book/bookdraft2017nov5.pdf" rel="nofollow noreferrer">RL book</a>, I am using a value-function approximation method with average reward (differential return instead of discounted return). For some state (represented by some features), only one action is legal. I am not sure how to design the reward for such an action. Is it OK to just assign the reward from the previous step? Could anyone tell me the best way to decide the reward for the only legal action? Thank you!</p>
<p>UPDATE: To give more details, here is a simplified example: the state space consists of a job queue of fixed size and a single server. The queue state is represented by the durations of the jobs, and the server state is represented by the time left to finish the currently running job. When the queue is not full and the server is idle, the agent can SCHEDULE a job to the server for execution and see a state transition (taking the next job into the queue), or the agent can TAKE NEXT JOB into the queue. But when the job queue is full and the server is still running a job, the agent can do nothing except take a BLOCKING action and witness a state transition (the time left to finish the running job decreases by one unit of time). The BLOCKING action is the only action that the agent can take in that state.</p>
| 190
|
|
reinforcement learning
|
Intuition behind $\gamma$-discounted state frequency
|
https://ai.stackexchange.com/questions/11888/intuition-behind-gamma-discounted-state-frequency
|
<p>At the appendix A of paper "<a href="https://arxiv.org/abs/1810.01257" rel="nofollow noreferrer">near-optimal representation learning for hierarchical reinforcement learning</a>", the authors express the <span class="math-container">$\gamma$</span>-discounted state visitation frequency <span class="math-container">$d$</span> of policy <span class="math-container">$\pi$</span> as</p>
<p><span class="math-container">$$
d=(1-\gamma)A_\pi(I-\gamma^cP_\pi^c)^{-1}\mu\tag 1
$$</span></p>
<p>I've simplifed the notation for easy reading, hoping it does not introduce any error. In the above definition, <span class="math-container">$P_\pi^c$</span> the <span class="math-container">$c$</span>-step transition matrix under the policy <span class="math-container">$\pi$</span>, i.e., <span class="math-container">$P_{\pi}^c=P_\pi(s_{c}|s_0)$</span>, <span class="math-container">$\mu$</span> a Dirac <span class="math-container">$\delta$</span> distribution centered at start state <span class="math-container">$s_0$</span> and
<span class="math-container">$$
A_\pi=I+\sum_{k=1}^{c-1}\gamma^kP_\pi^k\tag 2
$$</span>
They further give the every-<span class="math-container">$c$</span>-steps <span class="math-container">$\gamma$</span>-discounted state frequency of <span class="math-container">$\pi$</span> as
<span class="math-container">$$
d^c_\pi=(1-\gamma^c)(I-\gamma^cP_\pi^c)^{-1}\mu\tag 3
$$</span>
To my best knowledge, <span class="math-container">$A_\pi$</span> seems to be the unnormalized <span class="math-container">$\gamma$</span>-discounted state frequency, but I cannot really make sense of the rest.
I'm hoping that someone can shed some light on these definitions.</p>
<h2>Update</h2>
<p>Thank @Philip Raeisghasem for pointing out the paper <a href="https://arxiv.org/abs/1705.10528" rel="nofollow noreferrer">CPO</a>. Here's what I've gotten from that.
Applying the sum of the geometric series to Eq.<span class="math-container">$(2)$</span>, we have
<span class="math-container">$$
A={(I-\gamma^cP_\pi^c)(I-\gamma P_\pi)^{-1}}\tag4
$$</span>
Plugging Eq.<span class="math-container">$(4)$</span> back into Eq.<span class="math-container">$(1)$</span>, we get the same result as Eq.<span class="math-container">$(18)$</span> in the CPO paper:
<span class="math-container">$$
d=(1-\gamma)(I-\gamma P_\pi)^{-1}\mu\tag 5
$$</span>
where <span class="math-container">$(1-\gamma)$</span> normalizes all weights introduced by <span class="math-container">$\gamma$</span> so that they are summed to one. However, I'm still confused. Here are the questions I have</p>
<ol>
<li>Eq.<span class="math-container">$(5)$</span> indicates Eq.<span class="math-container">$(1)$</span> is the state frequency in the infinite horizon. But I do not understand why we have it in the hierarchical policy. To my best knowledge, policies here are low-level, which means they are only valid in a short horizon (<span class="math-container">$c$</span> steps, for example). Computing state frequency in the infinite horizon here seems confusing.</li>
<li>What should I make of <span class="math-container">$d_\pi^c$</span> defined in Eq.<span class="math-container">$(3)$</span>, originally from Eqs.<span class="math-container">$(26)$</span> and <span class="math-container">$(27)$</span> in the paper? The authors define them as every-<span class="math-container">$c$</span>-steps <span class="math-container">$\gamma$</span>-discounted state frequencies of policy <span class="math-container">$\pi$</span>. But I do not see why it is the case. To me, they are more like the consequence of Eq.<span class="math-container">$(30)$</span> in the paper.</li>
</ol>
<p>Sorry if anyone feels that this update makes this question too broad. This is kept since I'm not so sure whether I can get a satisfactory answer without these questions.
Any partial answer will be sincerely appreciated. Thanks in advance.</p>
|
<p><strong>Update: I rewrote the first part due to a major mistake in the first version</strong></p>
<p>Notice: the notation <span class="math-container">$P^k$</span> from Eqs. <span class="math-container">$(20)$</span> and <span class="math-container">$(21)$</span> in the paper does not mean the <span class="math-container">$k$</span>-th power of some <span class="math-container">$P$</span>. Instead, <span class="math-container">$P^k$</span> should be thought of as the <span class="math-container">$k$</span>-step transition probability of a non-homogeneous Markov chain. </p>
<ol>
<li>According to the CPO paper, the discounted future state distribution is defined as
<span class="math-container">$$
d_\pi(s)=(1-\gamma)\sum_{k=0}^\infty \gamma^kProb(s_k=s|\pi,s_0)\mu(s_0)\tag{1}
$$</span>
Consider function form. Let <span class="math-container">$Prob_{\pi,k}$</span> denote the <span class="math-container">$k$</span> step probability transition operator induced by <span class="math-container">$\pi$</span>; here <span class="math-container">$\pi$</span> can be a hierarchical policy, <span class="math-container">$k$</span> can be larger than <span class="math-container">$c$</span>.
<span class="math-container">$$
d_\pi=(1-\gamma)\sum_{k=0}^\infty \gamma^kProb_{\pi,k}\mu\tag{2}
$$</span>
Now apply the similar definition as Eq.<span class="math-container">$(20)$</span> and <span class="math-container">$(21)$</span> in the paper, let <span class="math-container">$P^k_\pi$</span> denote the <span class="math-container">$k$</span> step transition probability of the non-homogeneous Markov chain induced by the low level policy, with <span class="math-container">$k$</span> smaller or equal to <span class="math-container">$c$</span>.
<span class="math-container">\begin{align}
d_\pi&=(1-\gamma)\sum_{m=0}^\infty\gamma^{mc}\sum_{k=0}^{c-1}\gamma^kP^k_\pi(P^c_\pi)^m\mu\\
&=(1-\gamma)\sum_{k=0}^{c-1}\gamma^kP^k_\pi(\sum_{m=0}^\infty\gamma^{mc}(P^c_\pi)^m)\mu\\
&=(1-\gamma)A_\pi(I-\gamma^cP^c)^{-1}\mu\tag{3}
\end{align}</span>
which is exactly the form of Eq.<span class="math-container">$(22)$</span> and <span class="math-container">$(23)$</span> in the paper, with <span class="math-container">$A_\pi$</span> defined similar as Eq.<span class="math-container">$(24)$</span> and <span class="math-container">$(25)$</span>.</li>
<li>The "every-<span class="math-container">$c$</span>-step discounted state frequency" builds on <span class="math-container">$(3)$</span>, but it lumps the <span class="math-container">$c$</span> steps into one "high level" step where the discount factor is <span class="math-container">$\gamma^c$</span> and the transition operator is <span class="math-container">$P_\pi^c$</span>. Starting with <span class="math-container">$(3)$</span>, replace <span class="math-container">$c$</span> with 1, we get the "every one step future state distribution"
<span class="math-container">$$
d_\pi=(1-\gamma)(I-\gamma P_\pi)^{-1}\mu\tag{4}
$$</span>
Then replace <span class="math-container">$\gamma$</span> and <span class="math-container">$P_\pi$</span> in <span class="math-container">$(4)$</span> with <span class="math-container">$\gamma^c$</span> and <span class="math-container">$P_\pi^c$</span>, we get the "every-<span class="math-container">$c$</span>-step discounted state frequency", or "every-<span class="math-container">$c$</span>-step future state distribution"
<span class="math-container">$$
d_\pi^c=(1-\gamma^c)(I-\gamma^c P_\pi^c)^{-1}\mu\tag{5}
$$</span>
By the way, I read your blogpost on this paper. It's very helpful for me, thank you!</li>
</ol>
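As a sanity check on the closed form in Eq. $(4)$, one can verify numerically on a small chain that $(1-\gamma)\mu^\top(I-\gamma P_\pi)^{-1}$ matches the truncated series $(1-\gamma)\sum_k \gamma^k \mu^\top P_\pi^k$ and sums to one. The 2-state transition matrix below is an arbitrary example of mine, not from the paper:

```python
import numpy as np

# Arbitrary 2-state transition matrix (rows sum to 1) and start distribution
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mu = np.array([1.0, 0.0])   # Dirac delta at state 0
gamma = 0.95

# Closed form: d = (1 - gamma) * mu^T (I - gamma P)^{-1}
d_closed = (1 - gamma) * mu @ np.linalg.inv(np.eye(2) - gamma * P)

# Truncated geometric series: (1 - gamma) * sum_k gamma^k mu^T P^k
d_series = np.zeros(2)
muPk = mu.copy()
for k in range(2000):
    d_series += (1 - gamma) * gamma**k * muPk
    muPk = muPk @ P

print(d_closed, d_closed.sum())   # matches d_series; sums to 1
```

Since $P\mathbf{1}=\mathbf{1}$, the row vector $d$ always sums to exactly one, confirming it is a proper distribution.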
| 191
|
reinforcement learning
|
Why do these reward functions give different training curves?
|
https://ai.stackexchange.com/questions/13785/why-do-these-reward-functions-give-different-training-curves
|
<p>Let's say our task is to pick and place a block, like: <a href="https://gym.openai.com/envs/FetchPickAndPlace-v0/" rel="nofollow noreferrer">https://gym.openai.com/envs/FetchPickAndPlace-v0/</a></p>
<p>Reward function 1: -1 for block not placed, 0 for block placed</p>
<p>Reward function 2: 0 for block not placed, +1 for block placed</p>
<p>I noticed training 1 is much faster than 2... I am using the HER implementation from OpenAI. Why is that?</p>
| 192
|
|
reinforcement learning
|
How does reinforcement learning with video data work?
|
https://ai.stackexchange.com/questions/17166/how-does-reinforcement-learning-with-video-data-work
|
<p>My goal is to train an agent to play MarioKart on the Nintendo DS. My first approach (in theory) was to set up an emulator on my PC and let the agent play for ages. But then a colleague suggested to first train the agent on pre-recorded, humanly played video data, to achieve some sort of base level, and then, for further perfection, let the agent play on its own with the emulator.</p>
<p>But I have no clue how training with video data works. E.g. I wonder how to calculate a loss since there is no reward. Or am I getting the intuition wrong?</p>
<p>I would appreciate it if someone could explain this technique to me.</p>
|
<p>In reinforcement learning, to learn off policy control, you need data on the states, actions and rewards at each time step. If, in addition to a recorded video, you had a recording of controller inputs, and could add reward data by hand, then you could use a standard reinforcement learning method, e.g. DQN. Simply run the DQN training loop as normal, but skip the parts where it acts in the environment, and only train on batches of recorded experience.</p>
<p>With <em>only</em> video data, your options are limited. However, it might still be useful, because a significant part of the challenge is a machine vision task. A DQN agent will need to convert frames from the video (e.g. the last 4 frames) into a prediction of the different rewards that it could get depending on which controller buttons are pressed. If you can teach a separate neural network to perform a vision task on relevant video data, it may help. You could use the learned weights from the first layers of this network as the starting point for your Q-value network, and it will likely speed up the DQN's figuring out of the relationship to its predictions. Reusing learned features across tasks like this is called <a href="https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a" rel="nofollow noreferrer">transfer learning</a>, and is often used in computer vision tasks.</p>
<p>A possibly useful starting task if you have a video, but no controller or reward data, would be to predict the next frame(s) of the video, given say four starting frames (you need more than one so that the neural network can use velocity information). It should be possible to generate the training data using opencv or ffmpeg from your recordings.</p>
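The weight-reuse idea above can be sketched in a few lines. Every shape and name here is hypothetical (a 64-"pixel" input, 128 features, 8 controller actions), chosen only to show the transfer step, not any real DQN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "vision" network already trained on e.g. next-frame
# prediction: we only care about its first-layer weights.
vision_W1 = rng.normal(size=(64, 128))   # 64 input "pixels" -> 128 features

# Transfer: initialise the Q-network's first layer from the vision network,
# and add a fresh, small-weight head producing one Q-value per action.
q_W1 = vision_W1.copy()                  # reused, later fine-tuned
q_head = rng.normal(size=(128, 8)) * 0.01

def q_values(frame):
    hidden = np.maximum(0.0, frame @ q_W1)   # shared ReLU features
    return hidden @ q_head

q = q_values(rng.normal(size=64))
print(q.shape)   # (8,) -> one value per controller action
```

In a real setup the copied layer would usually be fine-tuned (or temporarily frozen) while the fresh head trains on the DQN loss.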
| 193
|
reinforcement learning
|
Where can I find short videos of examples of RL being used?
|
https://ai.stackexchange.com/questions/22951/where-can-i-find-short-videos-of-examples-of-rl-being-used
|
<p>I would like to add a short ~1-3 minute video to a presentation, to demonstrate how Reinforcement Learning is used to solve problems. I am thinking something like a short gif of an agent playing an Atari game, but for my audience it would probably be better to have something more manufacturing/industry based.</p>
<p>Does anyone know any good sources where I could find some stuff like this?</p>
|
<p>You can take some ideas from <a href="https://www.youtube.com/watch?v=JgvyzIkgxF0" rel="nofollow noreferrer">this</a> YouTube video.</p>
<p>In addition, you should consider <a href="http://karpathy.github.io/2016/05/31/rl/" rel="nofollow noreferrer">this</a> page, which is about deep reinforcement learning used in a game (Pong from Pixels).</p>
| 194
|
reinforcement learning
|
Train agent to surround a burning fire
|
https://ai.stackexchange.com/questions/26845/train-agent-to-surround-a-burning-fire
|
<p>I have built a wildfire 'simulation' in Unity, and I want to train an RL agent to 'control' this fire. However, I think my task is quite complicated, and I can't work out how to get the agent to do what I want.</p>
<p>A fire spreads in a tree-like format, where each node represents a point burning in the fire. When a node has burned for enough time, it spreads in all possible cardinal directions (as long as it does not spread to where it came from). The fire has a list of 'perimeter nodes' which represent the burning perimeter of the fire. These are the leaf nodes in the tree. The rate of spread is calculated using a mathematical model (<a href="https://www.fs.usda.gov/treesearch/pubs/55928#:%7E:text=The%20Rothermel%20surface%20fire%20spread%20model%2C%20with%20some%20adjustments%20by,fuels%20management%20systems%20since%201972.&text=Models%20have%20been%20developed%20for,directions%20other%20than%20head%20fire." rel="nofollow noreferrer">Rothermel model</a>) that takes into account wind speed, slope, and parameters relating to the type of fuel burning.</p>
<p>I want to train the agent to place 'control lines' in the map, which completely stops the fire from burning. The agent will ideally work out where the fire is heading and place these control lines ahead of the fire such that it runs into these lines.
Please could you guide me (or refer me to any reading that would be useful) on how I can decide the rules by which I give the model rewards?</p>
<p>Currently, I give positive rewards for the following:</p>
<ul>
<li>the number of fire nodes contained by a control line increases.</li>
</ul>
<p>And I give negative rewards for:</p>
<ul>
<li>the number of fire nodes contained by a control line does not increase.</li>
<li>the agent places a control line (these resources are valuable and can only be used sparingly).</li>
</ul>
<p>I end the session with a win when all nodes are contained, and with a loss if the agent places a control line out of the bounds of the world.</p>
<p>I am currently giving the agent the following information as observations:</p>
<ul>
<li>the direction that the wind is heading, as a bearing.</li>
<li>the wind speed</li>
<li>the vector position that the fire is started at</li>
<li>the current percentage of nodes that are contained</li>
<li>the total number of perimeter nodes</li>
</ul>
<p>I am new to RL, so I don't really know what is the best way to choose these parameters to train on. Please could you guide me to how I can better solve this problem?</p>
| 195
|
|
reinforcement learning
|
Is it possible to tell the Reinforcement Learning agent some rules directly without any constraints?
|
https://ai.stackexchange.com/questions/32042/is-it-possible-to-tell-the-reinforcement-learning-agent-some-rules-directly-with
|
<p>I am trying to apply RL to a control problem, and I intend to use either Deep Q-Learning or SARSA.</p>
<p>I have two heating storage systems with one heating device, and the RL agent is only allowed to heat up one of them in every time slot. How can I do that?</p>
<p>I have two continuous variables <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span>, where <span class="math-container">$x(t)$</span> quantifies the degree of maximum power for heating up storage 1 and <span class="math-container">$y(t)$</span> quantifies the degree of maximum power for heating up storage 2.</p>
<p>Now, IF <span class="math-container">$x(t) > 0$</span>, THEN <span class="math-container">$y(t)$</span> has to be <span class="math-container">$0$</span>, and vice versa, with <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span> each taking either the value <span class="math-container">$0$</span> or a value in <span class="math-container">$[0.25, 1]$</span>. How can I tell this to the agent?</p>
<p>One way would be to adjust the actions after the RL agent has decided about that with a separate control algorithm that overrules the actions of the RL agent. I am wondering if and how this can be also done directly? I'll appreciate every comment.</p>
<p><strong>Update</strong>: Of course I could do this with a reward function. But is there not a direct way of doing this? Because this is actually a so called hard constraint. The agent is not allowed to violate this at all as this is technically not feasible. So it will be better to tell the agent directly not to do this (if that is possible).</p>
<p><strong>Reminder</strong>: Can anyone tell me more about this issue? I'll highly appreciate any further comment and will be quite thankful for your help. I will also award a bounty for a good answer.</p>
|
<p>I'm not an expert, but as far as I understand, you should use an off-policy algorithm. The difference between the two is:</p>
<p>On-policy: the agent learns the value function according to the current action, derived from the policy currently being used.
Off-policy: the agent learns the value function according to an action derived from another policy.</p>
<p>This means that you can use another policy to explore. For example, if you use Q-learning (not your case, because of the continuous values in your problem), which is an off-policy approach, you can explore with a particular policy to get the actions (you can select only valid actions), and then update your Q-table with the Q-learning equation.</p>
<p>In your case you can use an off-policy deep approach. I suggest DDPG/TD3; you can read about them briefly <a href="https://spinningup.openai.com/en/latest/user/algorithms.html#the-off-policy-algorithms" rel="nofollow noreferrer">here</a>.</p>
<p>The idea is to use an exploration policy, which you restrict to only select valid values (the hard constraint), and store the (State, Action, Reward, Next State) tuples in the replay buffer. The Stable-Baselines library doesn't allow that, but you could check the original source code of TD3.</p>
<p>Edit1:</p>
<p>If you look at the Q-learning algorithm, &#949;-greedy consists of selecting, with probability <span class="math-container">$\epsilon$</span>, <span class="math-container">$a \gets \text{any action}$</span>, and with probability <span class="math-container">$1-\epsilon$</span>, <span class="math-container">$a \gets \max_{a}Q(s,a)$</span>. This <span class="math-container">$\text{any action}$</span> is the part of the code where you use this "controller" to select only random (but valid) actions. This is because you want to explore, but only with valid actions. Then Q-learning can "exploit" by picking the best action from the exploration you did before. Now, for your case with continuous actions, you can use DDPG/TD3 to do something similar, but you store these valid actions in a replay buffer, so your neural network can learn from this "data" of only valid actions.</p>
<p><a href="https://i.sstatic.net/Dqda6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dqda6.jpg" alt="Q learning" /></a></p>
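The "explore only over valid actions" idea can be sketched in tabular form. The toy MDP, rewards, and per-state valid-action sets below are invented purely for illustration; the point is that invalid actions are masked out of both exploration and exploitation rather than penalised by the reward:

```python
import random

random.seed(0)

n_states, n_actions = 2, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
valid = {0: [0, 1], 1: [1, 2]}      # hypothetical per-state legal actions
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    # Toy dynamics: action 1 pays +1, anything else 0; random next state
    return random.randrange(n_states), (1.0 if action == 1 else 0.0)

s = 0
for _ in range(5000):
    acts = valid[s]
    if random.random() < eps:
        a = random.choice(acts)                  # explore: valid actions only
    else:
        a = max(acts, key=lambda x: Q[s][x])     # exploit: valid actions only
    s2, r = step(s, a)
    best_next = max(Q[s2][x] for x in valid[s2])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    s = s2

# Invalid actions are never selected, so their Q-values stay at 0
print(Q[0])
```

The same masking carries over to deep methods by restricting the actions written to the replay buffer, as described above.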
<p>Edit 2:</p>
<p>In your custom environment you can define your action space like:</p>
<pre><code>self.action_space = gym.spaces.Box(low=-1, high=1, shape=(1,))
</code></pre>
<p>Now, as you said, in the step function of your environment you can set <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span>:</p>
<pre><code>maxX = 10  # depends on the maximum value of your x(t); I assigned 10
maxY = 10  # depends on the maximum value of your y(t); I assigned 10
x = 0
y = 0
if action > 0:
    y = 0
    x = action * maxX
elif action < 0:
    x = 0
    # multiply by -1 because the action is negative
    y = -1 * action * maxY
# do the rest of the code of your controller with x and y
</code></pre>
<p>In this way, your RL agent will learn which action (between -1 and 1) will get the best reward, but in the step function, you map the action [-1 +1] to your true values.</p>
| 196
|
reinforcement learning
|
Is this simple game solvable with reinforcement learning?
|
https://ai.stackexchange.com/questions/36310/is-this-simple-game-solvable-with-reinforcment-learning
|
<p>Let's imagine this simple environment :</p>
<p>Each episode has a length of 1 step. Each action leads to a reward for this action.</p>
<p>The action space is of 3 : 'UP', 'DOWN', 'UNKNOWN'</p>
<p>Most of the time, the observation is a random vector. But sometimes, it is a sinusoid. The player has to predict if the next value is greater or smaller than the last one. If the vector is a sinusoid, it is very easy to predict what the next value would be. If it is random, there is no way to know.</p>
<p>If the answer is correct, the player gets a reward of 1. If it is incorrect, -1. If the action taken is 'UNKNOWN', 0.</p>
<p>A good model would predict 'UP' or 'DOWN' when the observation is a sinusoid, and 'UNKNOWN' when it is random.</p>
<p>I thought it would be possible to solve this game with a reinforcement learning algorithm. What are your thoughts?</p>
|
<p>This looks like a job for a multiclass classifier. You can build a classifier model to predict one of your three classes and train it on test data with known correct outputs very easily.</p>
<p>Although you can fit reinforcement learning (RL) to the problem, there is no real benefit to doing so, and learning will be less efficient, since RL works by trial and error. There is no need for trial and error here, or for conversion into a reward score. Instead you can directly use an error gradient from the prediction.</p>
<p>If your purpose is to find a simple game where RL could be applied, so you can practise RL, then you need to make a more complex game, which has time steps and actions that have consequences involving changes to state. A common toy game for teaching yourself RL would be Blackjack (aka 21), where the actions are "twist" or "stick" - see <a href="http://incompleteideas.net/book/the-book.html" rel="nofollow noreferrer">Sutton & Barto: Reinforcement Learning, an Introduction</a> chapter 5, section 5.3 which presents a simplified version of the game and what a solution looks like.</p>
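A self-contained skeleton of such a game can be written in a few lines. This is not Sutton &amp; Barto's exact formulation (no usable ace here, aces count as 1, dealer sticks on 17+); it is just a minimal environment you could practise tabular RL on, shown with a fixed hand-written policy:

```python
import random

random.seed(0)

# Simplified Blackjack ("twist"/"stick") environment sketch.

def draw():
    return min(random.randint(1, 13), 10)   # face cards count as 10

def play_episode(policy):
    player = draw() + draw()
    while player <= 21 and policy(player) == "twist":
        player += draw()
    if player > 21:
        return -1                            # player busts
    dealer = draw() + draw()
    while dealer < 17:                       # dealer sticks on 17 or more
        dealer += draw()
    if dealer > 21 or player > dealer:
        return 1
    return 0 if player == dealer else -1

# Trivial fixed policy: twist below 18, stick otherwise
rewards = [play_episode(lambda total: "twist" if total < 18 else "stick")
           for _ in range(1000)]
print(sum(rewards) / len(rewards))           # rough value of this policy
```

Replacing the fixed lambda with an &#949;-greedy policy over a Q-table indexed by the player total turns this into a small RL exercise.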
| 197
|
reinforcement learning
|
Why is $l$-step lookahead better in RL?
|
https://ai.stackexchange.com/questions/38920/why-is-l-step-lookahead-better-in-rl
|
<p>The following is from the book "Reinforcement learning and optimal control", by D. P. Bertsekas.</p>
<p>Chapter 2, page 52:</p>
<p>"The motivation for <span class="math-container">$l$</span>-step lookahead is that for increasing values of <span class="math-container">$l$</span>,
one may require a less accurate approximation <span class="math-container">$\tilde{J}_{k+l}$</span> to obtain good
performance. Otherwise expressed, for the same quality of cost function approximation,
better performance may be obtained as <span class="math-container">$l$</span> becomes larger. This makes intuitive sense,
since in this case, the cost of more stages is treated exactly with optimization."</p>
<p>My question:</p>
<p>Why is it that for increasing values of <span class="math-container">$l$</span>,
one may require a less accurate approximation <span class="math-container">$\tilde{J}_{k+l}$</span> to obtain good
performance?</p>
|
<p>Let's start with the intuitive explanation. If you just use Q-learning (or any other temporal difference method, like SARSA), you usually look ahead only one step. The other extreme is a Monte Carlo method, where you don't rely on estimates of the state-value function at all (your policy might depend on an estimate, but your updates don't). Monte Carlo methods can be viewed as an <span class="math-container">$\infty$</span>-step lookahead. So <span class="math-container">$l$</span>-step lookaheads are somewhere in between (they take <span class="math-container">$l$</span> rewards in a Monte Carlo fashion and only bootstrap from there), so they reduce the dependency on the estimates by quite some margin. Therefore you can tolerate a higher-variance estimate of the state-value function with an <span class="math-container">$l$</span>-step lookahead than you could with a <span class="math-container">$1$</span>-step lookahead.</p>
<p>Now let's have a look at a more quantitative explanation. You are looking for the <em>error reduction property</em> (<a href="https://ai.stackexchange.com/a/9426/67543">see the derivation in Sutton&ndash;Barto notation</a>) of <span class="math-container">$l$</span>-step lookaheads, which can be stated as
<span class="math-container">\begin{equation}
\max\limits_x \Bigl\lvert \mathbb{E}_{\pi}[T^{l-1}\tilde{J}_{t}\mid X_{t}=x] - J_{\pi}\Bigr\rvert \le \gamma^{l} \max\limits_x \Bigl\lvert \tilde{J}_{t} - J_{\pi}\Bigr\rvert
\end{equation}</span></p>
<p>This shows that an <span class="math-container">$l$</span>-step lookahead will always have a lower error bound than a simple <span class="math-container">$1$</span>-step solution. Hence one can get away with a less accurate estimate and still achieve the same quality of results with multi-step lookahead.</p>
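The effect can also be checked numerically. On a toy chain where every step pays reward $1$ (so $J_\pi = 1/(1-\gamma)$ in every state), an estimate biased by $\varepsilon$ yields an $l$-step lookahead error of exactly $\gamma^l\varepsilon$. The chain and constants below are invented for illustration:

```python
gamma, eps = 0.9, 1.0

# Toy MDP: every step pays reward 1 and stays "in the chain",
# so the true value is J = 1 / (1 - gamma) in every state.
J_true = 1.0 / (1.0 - gamma)
J_est = J_true + eps        # value estimate with bias eps

def l_step_backup(l):
    # l rewards treated exactly, then bootstrap from the biased estimate
    rewards = sum(gamma**k * 1.0 for k in range(l))
    return rewards + gamma**l * J_est

for l in (1, 2, 5, 10):
    err = abs(l_step_backup(l) - J_true)
    print(l, round(err, 6))   # error shrinks like gamma**l * eps
```

Each extra exact stage multiplies the remaining estimation error by $\gamma$, which is precisely the bound above.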
| 198
|
reinforcement learning
|
How to train a logical XOR with reinforcement learning?
|
https://ai.stackexchange.com/questions/9838/how-to-train-a-logical-xor-with-reinforcement-learning
|
<p>After reading an excellent BLOG post <a href="http://karpathy.github.io/2016/05/31/rl/" rel="nofollow noreferrer"><strong>Deep Reinforcement Learning: Pong from Pixels</strong></a> and playing with the code a little, I've tried to do something simple: use the same code to train a logical XOR gate.</p>
<p>But no matter how I've tuned hyperparameters, the reinforced version does not converge (gets stuck around -10). What am I doing wrong? Isn't it possible to use Policy Gradients, in this case, for some reason?</p>
<p>The setup is simple:</p>
<ul>
<li>3 inputs (1 for bias=1, x, and y), 3 neurons in the hidden layer and 1 output.</li>
<li>The <em>game</em> is passing all 4 combinations of x,y to the RNN step-by-step, and after 4 steps giving a reward of +1 if all 4 answers were correct, and -1 if at least one was wrong.</li>
<li>The <em>episode</em> is 20 <em>games</em></li>
</ul>
<p>The code (forked from original and with minimal modifications) is here: <a href="https://gist.github.com/Dimagog/de9d2b2489f377eba6aa8da141f09bc2" rel="nofollow noreferrer">https://gist.github.com/Dimagog/de9d2b2489f377eba6aa8da141f09bc2</a></p>
<p><strong>P.S. Almost the same code trains XOR gate with <em>supervised</em> learning in no time (2 sec).</strong></p>
|
<p>Reinforcement learning is used when we know the outcome we want, <em>but not how to get there</em> which is why you won't see a lot of people using it for classification (because we already know the optimal policy, which is just to output the class label). You knew that already, just getting it out of the way for future readers!</p>
<p>As you say, your policy model is fine - a fully connected model that is just deep enough to learn XOR. I think the reward gradient is a little shallow - when I give a reward of +1 for "3 out of 4" correct and +2 for "4 out of 4", then convergence happens (but very slowly). </p>
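This is not the XOR setup itself, but the core REINFORCE update from the blog post can be seen working on the simplest possible case: a two-armed bandit with a sigmoid policy. All constants here are arbitrary choices of mine:

```python
import math
import random

random.seed(0)

theta = 0.0     # preference for arm 1; policy is p1 = sigmoid(theta)
alpha = 0.1     # learning rate

for _ in range(2000):
    p1 = 1.0 / (1.0 + math.exp(-theta))
    a = 1 if random.random() < p1 else 0
    r = 1.0 if a == 1 else -1.0          # arm 1 pays +1, arm 0 pays -1
    # REINFORCE: d/dtheta log pi(a) = a - p1 for a Bernoulli-sigmoid policy
    theta += alpha * r * (a - p1)

p1 = 1.0 / (1.0 + math.exp(-theta))
print(round(p1, 3))   # close to 1: the policy learned to pull arm 1
```

With this reward scheme every update happens to push theta upward, which is why convergence is fast here; the sparse all-or-nothing XOR reward offers no such dense signal, matching the shaping point above.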
| 199
|
loss function
|
Non-Convex loss-surface although quadratic loss function
|
https://ai.stackexchange.com/questions/37039/non-convex-loss-surface-although-quadratic-loss-function
|
<p>There is one problem which has bugged me for quite a long time: the non-convex loss shape (multiple minima, e.g. as shown <a href="https://medium.com/swlh/non-convex-optimization-in-deep-learning-26fa30a2b2b3" rel="nofollow noreferrer">here</a>) of neural networks that use a quadratic loss function.</p>
<p>Question: Why is a "common" AI problem usually non-convex with multiple minima, although we are using e.g. quadratic loss functions (which in lectures are usually drawn as a simple, convex, quadratic function such as x^2)?</p>
<p>My guess:</p>
<p>Is it because we are feeding the loss function with our highly non-linear model output, and therefore the resulting total loss surface is highly non-linear/non-convex? Specifically, the quadratic loss just approximates the infinitesimally small neighbourhood around a specific point (minibatch) as quadratic. Is that guess correct? This would imply that highly non-linear / very deep and complex models have a highly non-linear resulting loss surface, while shallower models have fewer minima and a one-layer network has a convex shape?</p>
|
<p>If I understood your question correctly - the quadratic loss function is not always convex. Its convexity depends on the input it takes. For example, consider a very basic NN that takes some input <span class="math-container">$x$</span> and returns
<span class="math-container">\begin{equation}
f(x)=relu(w_1relu(w_0x+b_0)+b_1).
\end{equation}</span>
Using quadratic loss, or <span class="math-container">$\ell_2$</span> is minimizing the objective <span class="math-container">$\sum_i(x_i-f(x_i))^2$</span>, but even when examining only one of the terms (say <span class="math-container">$i=0$</span>), which is <span class="math-container">$(x_0-f(x_0))^2$</span>, we see that it is not convex at all, as the following plot shows:</p>
<p> <a href="https://i.sstatic.net/Dwte2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dwte2.png" alt="plot" /></a></p>
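The same point can be checked without plotting. For an even smaller network, $f(x;w)=\mathrm{relu}(wx)$ with a single weight (my own reduction, not the answer's two-layer example), the squared loss as a function of $w$ already violates the midpoint convexity inequality on one data point:

```python
def relu(z):
    return max(0.0, z)

def loss(w, x=1.0, y=1.0):
    # Squared loss of a one-weight ReLU "network" on one data point
    return (y - relu(w * x)) ** 2

a, b = -1.0, 1.0
mid = 0.5 * (a + b)

# Convexity would require loss(mid) <= (loss(a) + loss(b)) / 2
print(loss(mid), 0.5 * (loss(a) + loss(b)))   # 1.0 0.5 -> inequality violated
```

So the non-convexity comes from composing the convex squared error with the non-linear network, not from the squared error itself.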
| 200
|
loss function
|
Is Mean Squared Error Loss function a good loss function for continuous variables $0 < x < 1$
|
https://ai.stackexchange.com/questions/20199/is-mean-squared-error-loss-function-a-good-loss-function-for-continuous-variable
|
<p>Suppose I am utilising a neural network to predict the next state, <span class="math-container">$s'$</span> based on the current <span class="math-container">$(s, a)$</span> pairs. </p>
<p>All my neural network inputs are between 0 and 1, and the loss function for this network is defined as the mean squared error of the <strong>difference</strong> between the current state and next state. Because the variables are all between 0 and 1, the MSE between the <strong>actual difference</strong> and the <strong>predicted difference</strong> is smaller than the actual difference. </p>
<p>Suppose the difference in next state and current state for <span class="math-container">$s \in R^2$</span> is <span class="math-container">$[0.4,0.5]$</span> and the neural network outputs a difference of <span class="math-container">$[0.2,0.4]$</span>. The mean squared loss is therefore 0.05 <span class="math-container">$(0.2^2 + 0.1^2) = 0.05$</span> whereas the neural network does not really predict the next state very well due to a difference of <span class="math-container">$(0.2, 0.1)$</span>. </p>
<p>Although it may not matter which loss function is used, it was deceiving to think that, when the loss function outputs low values, it is mainly the <strong>squared</strong> term that keeps the value small. </p>
<p>Is Mean Squared Error loss function still a good loss function to be used here ? </p>
| 201
|
|
loss function
|
Loss Function In Units Of Bits?
|
https://ai.stackexchange.com/questions/22475/loss-function-in-units-of-bits
|
<p>Where can I find a machine learning library that implements <code>loss functions</code> measuring the <a href="http://www.scholarpedia.org/article/Algorithmic_information_theory#:%7E:text=Algorithmic%20information%20theory%20(AIT)%20is,computation%2C%20information%2C%20and%20randomness.&text=presumably%20has%20no%20simple%20description%20other%20than%20writing%20down%20the%20string%20itself." rel="nofollow noreferrer">Algorithmic Information Theoretic</a>-friendly quantity "<em>bits of information</em>"?</p>
<p>To illustrate the difference between <em>entropy</em>, in the <em>Shannon information</em> sense of "bits" and the <em>algorithmic information</em> sense of "bits", consider the way these two measures treat a 1 million character string representing <span class="math-container">$\pi$</span>:</p>
<p><em>Shannon entropy</em> "bits" (<span class="math-container">$6$</span> for the '.'): <span class="math-container">$\lceil 1e6*\log_2(10) \rceil+6$</span></p>
<p><em>Algorithmic</em> "bits": The length, in bits, of the shortest <a href="https://www.angio.net/pi/pi-programs.html" rel="nofollow noreferrer">program that outputs 1e6 digits of <span class="math-container">$\pi$</span> </a>.</p>
<p>All <em>statistical</em> measures of information, such as <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence" rel="nofollow noreferrer">KL divergence</a>, are based on <em>Shannon information</em>. By contrast, <em>algorithmic information</em> permits representations that are fully <em>dynamical</em> as in <a href="https://en.wikipedia.org/wiki/Unrestricted_grammar" rel="nofollow noreferrer">Chomsky type 0</a>, <a href="https://en.wikipedia.org/wiki/Turing_completeness" rel="nofollow noreferrer">Turing Complete</a>, etc. languages. Since the world in which we live is <em>dynamical</em>, algorithmic models are at least <em>plausibly</em> more valid in many situations than are <em>statistical</em> models. (I recognize that recursive neural nets can be dynamical and that they can be trained with statistical loss functions.)</p>
<p>For a more authoritative and formal description of these distinctions see the Hutter Prize FAQ questions <a href="http://www.hutter1.net/prize/hfaq.htm#xvalid" rel="nofollow noreferrer">Why aren't cross-validation or train/test-set used for evaluation?</a> and <a href="http://www.hutter1.net/prize/hfaq.htm#reg" rel="nofollow noreferrer">Why is Compressor Length superior to other Regularizations?</a> For a paper-length exposition on the same see "<a href="https://www.mdpi.com/1099-4300/22/6/612" rel="nofollow noreferrer">A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions</a>".</p>
<p>From what I can see, machine learning makes it difficult to relate <code>loss</code> to <em>algorithmic</em> information. Such an AIT-friendly <code>loss function</code> must, by definition, measure the number of bits required to reconstruct, <em>without loss</em>, the original training dataset.</p>
<p>Let me explain with examples of what I mean by AIT-friendly <code>loss functions</code>, starting with the baby-step of <code>classification loss</code> (usually measured as <a href="https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html" rel="nofollow noreferrer">cross-entropy</a>):</p>
<p>Let's say your training set consists of <span class="math-container">$P$</span> patterns belonging to <span class="math-container">$C$</span> classes. You can then construct a partial AIT loss function providing the length of the corrections to the model's classifications with a <span class="math-container">$P$</span>-length vector, each element containing a <span class="math-container">$0$</span> if the model was correct for that pattern, or the class if not. These elements would each have a bit-length of <span class="math-container">$\lceil \log_2(C+1) \rceil$</span>, and be prefixed by a <a href="https://en.wikipedia.org/wiki/Variable-length_quantity" rel="nofollow noreferrer">variable length integer</a> storing <span class="math-container">$P$</span>. The more <span class="math-container">$0$</span> elements, the more compressible this correction vector until, in the limit, a single <code>run-length code</code> for <span class="math-container">$P$</span> <span class="math-container">$0$</span>'s is stored as the correction, prefixed by <span class="math-container">$P$</span> and the length of the binary for the <code>RLE</code> algorithm itself. The bit-length of these, taken together, would comprise this partial <code>loss function</code>.</p>
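A minimal sketch of the bit-length accounting just described (the varint convention of 7 payload bits per byte is an assumption for illustration):

```python
import math

# P correction elements of ceil(log2(C+1)) bits each,
# prefixed by a variable-length integer storing P.
def varint_bits(n):
    # Variable-length quantity: 7 payload bits per byte (assumed convention).
    return 8 * max(1, math.ceil(n.bit_length() / 7))

def correction_vector_bits(P, C):
    return varint_bits(P) + P * math.ceil(math.log2(C + 1))

bits = correction_vector_bits(1000, 10)  # 16 + 1000*4 = 4016 bits
```

This is the uncompressed upper bound; as noted, a mostly-zero vector would compress well below it.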
<p>This is a reasonable first cut at an AIT-friendly loss function for <code>classification error</code>.</p>
<p>So now let's go one step further to outputs that are numeric, the typical approach is a summation of a function of individual error measures, such as squaring or taking their absolute value or whatever -- perhaps taking their mean. None of these are in units of bits of information. To provide the correction on the outputs to reproduce the actual training values requires, again, a vector of corrections. This time it would be deltas, the precision of which must be adequate to the original data being <em>losslessly</em> represented, hence requiring some sort of adaptive variable length quantity representation(s). These deltas would likely have a non-uniform distribution so they can be arithmetically encoded. That seems like a reasonable approach to another AIT-friendly loss function.</p>
<p>But now we get to the "model parameters" and find ourselves in the <em>apparently</em> well-defined but ill-founded notions like "<a href="https://towardsdatascience.com/l1-and-l2-regularization-methods-ce25e7fc831c" rel="nofollow noreferrer">L2 regularization</a>", which are defined in terms of ill-defined "parameters", e.g. "parameter counts" aren't given in bits.</p>
<p>I'll grant that <code>L2 regularization</code> sounds like it is heading in the right direction by squaring the weights and summing them up, but when one looks at what is actually being done, it is:</p>
<ul>
<li>applying additional functions to the sum such as mean</li>
<li>asking for a scaling factor to apply</li>
<li>applying the regularization on a per-layer basis rather than the entire model</li>
</ul>
<p>I'm sure I missed some of the many ways <code>L2 regularization</code> fails to be AIT-friendly.</p>
<p>Finally, there is the model's pseudo-invariance, measured, not simply in terms of its <code>hyperparameters</code> but in terms of the length of the (compressed archive of the) actual executable binary running on the hardware. I say 'pseudo' because there is nothing that says one cannot vary, say, the number of neurons in a neural network during learning -- nor even change to another learning paradigm than neural networks during learning (in the most general case).</p>
<p>So that's pretty much the complete loss function down to the Universal Turing Machine iron, but I'd be happy to see just a reference to an existing TensorFlow or another library that tries to do even a partial <code>loss</code> function for AIT-theoretic learning.</p>
| 202
|
|
loss function
|
How to update Loss Function parameter after compilation
|
https://ai.stackexchange.com/questions/14249/how-to-update-loss-function-parameter-after-compilation
|
<p>I used the following custom loss function:</p>
<pre><code>def custom_loss(epo):
    def loss(y_true, y_pred):
        m = K.binary_crossentropy(y_true, y_pred)
        x = math.log10(epo)
        y = x * x
        y = math.sqrt(y) / 100
        l = m * y
        return K.mean(l, axis=-1)
    return loss
</code></pre>
<p>and this is my discriminator model</p>
<pre><code>def Discriminator():
    inputs = Input(shape=img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(inputs)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x, training=True)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x, training=True)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x, training=True)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x, training=True)
    x = Flatten()(x)
    outputs = Dense(1, activation='sigmoid')(x)
    model = Model(inputs, outputs)
    # model.summary()
    img = Input(shape=img_shape)
    validity = model(img)
    return Model(img, validity)
</code></pre>
<p>and initialize the discriminator here:</p>
<pre><code>D = Discriminator()
epoch = 0
D.compile(loss=custom_loss(epoch), optimizer=optimizer, metrics=['accuracy'])
G = Generator()
z = Input(shape=(100,))
img = G(z)
D.trainable = False
valid = D(img)
</code></pre>
<p>I want to update the epo value of the loss function after each epoch in the following code:</p>
<pre><code>for epoch in range(epochs):
    for batch in range(batches):
        ............
        d_loss_real = D.train_on_batch(imgs, valid)
        d_loss_fake = D.train_on_batch(gen_batch, fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        g_loss = combined.train_on_batch(noise_batch, valid)
</code></pre>
<p>Is there any way to update the loss function after compiling the model, without affecting training?</p>
| 203
|
|
loss function
|
How could logistic loss be used as loss function for an ANN?
|
https://ai.stackexchange.com/questions/22786/how-could-logistic-loss-be-used-as-loss-function-for-an-ann
|
<p>Normally, in practice, people use those loss functions with minima, e.g. <span class="math-container">$L_1$</span> mean absolute loss, <span class="math-container">$L_2$</span> mean squared error, etc. All those come with a minimum to optimize to.
<a href="https://i.sstatic.net/RSccA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RSccA.png" alt="enter image description here" /></a></p>
<p>However, there's another loss I'm reading about, the logistic loss, and I don't get why the logistic function could be used as a loss function, given that it has its so-called minimum at infinity, which isn't a normal minimum. Logistic loss function (black curve):</p>
<p><a href="https://i.sstatic.net/0pREW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0pREW.png" alt="enter image description here" /></a></p>
<p>How can an optimizer minimize the logistic loss?</p>
|
<p>I see why you might be confused. First, the logistic-loss or log-loss is technically called cross-entropy loss. This function is very simple:</p>
<p><span class="math-container">$CE = -[y \log(p) + (1 - y) \log(1 - p)]$</span></p>
<p>This basically says: if the true class is <span class="math-container">$y=1$</span> then the loss is <span class="math-container">$CE=-\log(p)$</span>; if the true class is <span class="math-container">$y=0$</span> then the loss is <span class="math-container">$CE=-\log(1-p)$</span>.</p>
<p>If we look at the function as a pure math concept we see that:</p>
<p><span class="math-container">$CE = f(x) = - \log(x)$</span></p>
<p>And as you point out, that function never attains its minimum: it keeps decreasing as <span class="math-container">$x$</span> grows over its domain <span class="math-container">$D(f(x)) = (0, +\infty)$</span>. You can check that here:</p>
<p><a href="https://i.sstatic.net/dWM0J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dWM0J.png" alt="enter image description here" /></a></p>
<p>However, the trick is that the inputs to the loss function must be bounded to the range <span class="math-container">$[0, 1]$</span>. This bounding is achieved by applying a sigmoid activation function as the final "layer" of the network. With bounded inputs, the loss function has a clear minimum.</p>
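As a minimal sketch of the two formulas above (the <code>eps</code> clamp is my addition, to avoid <code>log(0)</code> at the boundary):

```python
import math

def sigmoid(z):
    # Squashes any real-valued logit into (0, 1), bounding the CE input.
    return 1.0 / (1.0 + math.exp(-z))

def binary_cross_entropy(y, p, eps=1e-12):
    # CE = -[y*log(p) + (1 - y)*log(1 - p)]
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

# With the sigmoid bounding the input, the loss is small for a confident
# correct prediction and large for a confident wrong one.
low = binary_cross_entropy(1.0, sigmoid(4.0))    # p close to 1
high = binary_cross_entropy(1.0, sigmoid(-4.0))  # p close to 0
```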
<p>Check how the function looks in reality from one of the most important papers in loss function in the AI world: <a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer">Focal Loss</a> (I really encourage you to read it as the first section explains in detail the cross-entropy loss). The blue curve is the one you are looking for.</p>
<p><a href="https://i.sstatic.net/U0AbV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U0AbV.png" alt="enter image description here" /></a></p>
<p>Finally, you might want to review your log-loss/CE plot, since the function should have a vertical asymptote at <span class="math-container">$x=0$</span>, where <span class="math-container">$f(x) \to \infty$</span></p>
| 204
|
loss function
|
When can we call a loss function "adaptive"?
|
https://ai.stackexchange.com/questions/31523/when-can-we-call-a-loss-function-adaptive
|
<p>A loss function is a measure of how bad our neural network is. We can decrease the loss by proper training.</p>
<p>I came across the phrase "adaptive loss function" in several research papers. For example: consider the following excerpt from the "Introduction" of the research paper titled <a href="https://proceedings.mlr.press/v48/reed16.pdf" rel="nofollow noreferrer">Generative Adversarial Text to Image Synthesis</a> by Scott Reed et al.</p>
<blockquote>
<p>By conditioning both generator and discriminator on side information, we can naturally model this phenomenon since the discriminator network acts as a "smart" <strong>adaptive</strong> loss function.</p>
</blockquote>
<p>When can we denote a loss function as adaptive? Is it a mathematical property or is solely based on the context?</p>
| 205
|
|
loss function
|
Loss Function not Decreasing
|
https://ai.stackexchange.com/questions/46593/loss-function-not-decreasing
|
<p>To practice what I learned about PyTorch, I gave myself the following problem:</p>
<blockquote>
<p>Create a model that given a vector, predicts what the 2nd largest number in it is.</p>
</blockquote>
<p>For example, <code>model([ 0.3, 0.4, 0.9, 0.7 ])</code> should return <code>[ 0.0, 0.0, 0.0, 1.0 ]</code>. I am pretty new to neural networks, so I tried a feed-forward network. However, no matter how large I make the dataset or how many training iterations I do, I don't see my loss function decreasing, and at the end my model gives pretty much random results. I feel like this should be a pretty easy problem to solve with a neural network, so what am I doing wrong here? Is a feed-forward design not appropriate here?</p>
<pre><code>import torch as t
import torch.nn as nn
import torch.optim as optim

# Example: input [ 0.3, 0.4, 0.9, 0.7 ] -> [ 0.0, 0.0, 0.0, 1.0 ]
training_set = []
for i in range(10000):
    data_point = t.rand(4)
    target = sorted(data_point)[-2]
    label = t.tensor([1. if v == target else 0. for v in data_point])
    training_set.append([data_point, label])

class Model(nn.Module):
    def __init__(self, hidden=100):
        super(Model, self).__init__()
        self.lin1 = nn.Linear(4, hidden)
        self.sig = nn.Sigmoid()
        self.lin2 = nn.Linear(hidden, 4)

    # Simple feed forward network
    def forward(self, input):
        return self.lin2(self.sig(self.lin1(input)))

model = Model()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters())

for epoch in range(3):
    running_loss = 0.0
    for i, (input, label) in enumerate(training_set):
        optimizer.zero_grad()
        pred = model(input)            # Forward pass
        loss = criterion(pred, label)  # Calculate loss
        loss.backward()                # Backward pass
        optimizer.step()

        # Print statistics
        running_loss += loss.item()
        step = 500
        if i % step == 0:
            print(epoch, i, running_loss / step)
            running_loss = 0.0

test = t.rand(4)
print('Input', test)
print('Output', model(test))
</code></pre>
|
<p>Your problem type is better suited to classification than regression, as you want to assign a class to the second-largest number rather than predict a value. Check out loss functions like CrossEntropyLoss rather than MSE to solve this type of problem.</p>
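A minimal pure-Python sketch of what such a classification loss computes (the logits are made-up model scores; the softmax plus negative-log-likelihood combination mirrors what e.g. PyTorch's CrossEntropyLoss does with raw logits and an integer class target):

```python
import math

def softmax(zs):
    m = max(zs)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target_index):
    # Negative log-probability assigned to the true class.
    return -math.log(softmax(logits)[target_index])

# For input [0.3, 0.4, 0.9, 0.7] the second-largest entry is index 3,
# so index 3 is the class target.
logits = [0.1, -0.2, 0.5, 2.0]   # hypothetical scores favouring index 3
good = cross_entropy(logits, 3)  # small loss: right class favoured
bad = cross_entropy(logits, 0)   # large loss: wrong class favoured
```

Framed this way, the network outputs 4 raw scores and the target is the index of the second-largest entry, instead of regressing a one-hot vector with MSE.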
| 206
|
loss function
|
Custom Loss Function Traps Network in Local Optima
|
https://ai.stackexchange.com/questions/46523/custom-loss-function-traps-network-in-local-optima
|
<p>I am working with a feedforward neural network to fit the following simple function:</p>
<pre><code>N(1) = -1
N(2) = -1
N(3) = 1
N(4) = -1
</code></pre>
<p>But I don't want to use the Mean-Squared Error; I'm using a custom loss function that "guides" the network to the correct output in a different way. My custom loss function works as follows:</p>
<pre><code>if input == 3:
    loss = -1 * output
else:
    loss = 1 * output
</code></pre>
<p>If the neural network has a <code>tanh</code> activation function, the minimizing function for this loss is clearly <code>N</code>. So the network should tend to fit <code>N</code>.</p>
<p>But my network is only outputting <code>-1</code> (even when the input is <code>3</code>). I wrote a small sample code with pytorch to show this:</p>
<p>Loss function:</p>
<pre><code>import torch
import torch.nn as nn

class CustomLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, i):
        multiplier = -1 if i == 3 else 1
        return multiplier * x
</code></pre>
<p>Initializing Network, Optimizer, and Loss:</p>
<pre><code>net = nn.Sequential(
    nn.Linear(1, 20),
    nn.Tanh(),
    nn.Linear(20, 20),
    nn.Tanh(),
    nn.Linear(20, 20),
    nn.Tanh(),
    nn.Linear(20, 1),
    nn.Tanh(),
)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_function = CustomLoss()
</code></pre>
<p>Optimizing through Backprop:</p>
<pre><code>for _i in range(100000):
    i = _i % 4 + 1
    input_value = torch.FloatTensor([i])
    output_value = net(input_value)

    optimizer.zero_grad()
    loss_value = loss_function(output_value, i)
    loss_value.backward()
    optimizer.step()

    print(i, output_value.item(), loss_value.item())
</code></pre>
<p>And after running it for a minute or so, the program outputs:</p>
<pre><code>Input | Output | Loss
1 | -0.9999958276748657 | -0.9999958276748657
2 | -0.9999961853027344 | -0.9999961853027344
3 | -0.9999962449073792 | 0.9999962449073792 <--- Loss can be minimized by making Output 1
4 | -0.9999962449073792 | -0.9999962449073792
</code></pre>
<p>As you can see, <code>N(3)</code> is not fitting properly. So I'm led to believe the model is trapped in a local optimum. But I don't know exactly why, since the MSE version of this code easily works (I tested it with the exact same setup with an altered loss function). Does anyone have any ideas?</p>
<p>I also discovered something else: if I make <code>N(1) = 1</code> instead of <code>N(3) = 1</code> (i.e. swap the outputs of <code>1</code> and <code>3</code>), the network fits perfectly. I have absolutely no idea why this happens. It seems the loss function can approximate:</p>
<pre><code>N(1) = 1
N(2) = -1
N(3) = -1
N(4) = -1
</code></pre>
<p>But it can't approximate:</p>
<pre><code>N(1) = -1
N(2) = -1
N(3) = 1
N(4) = -1
</code></pre>
<p>Somehow.</p>
|
<p>This is not meant as a complete answer to your question, but here are some things in your approach that arguably mess with model convergence.</p>
<p>Firstly, optimization is all about gradients. Your loss function technically <em>guides</em> the model towards outputs of <span class="math-container">$-\infty$</span> and <span class="math-container">$+\infty$</span> (prior to <span class="math-container">$\tanh$</span>), depending on the case. But look at what the gradient of the <span class="math-container">$\tanh(x)$</span> function does:</p>
<p><span class="math-container">$\lim_{x \rightarrow \infty} \tanh(x) = 1$</span> <br />
<span class="math-container">$\lim_{x \rightarrow \infty} \tanh'(x) = 0 \quad$</span> (<span class="math-container">$\tanh'(x) = 1 - \tanh^2(x)$</span>).</p>
<p><span class="math-container">$\lim_{x \rightarrow -\infty} \tanh(x) = -1$</span> <br />
<span class="math-container">$\lim_{x \rightarrow -\infty} \tanh'(x) = 0$</span></p>
<p>As soon as the model collapses to the solution where everything is <span class="math-container">$-1$</span>, there is no gradient to recover from it, and this scenario is likely to happen because <span class="math-container">$-1$</span> is the more frequent case in the training set.
<em><-- This would be my explanation for why your model converges to -1 for all inputs.</em> This is also why your statement "Loss can be minimized by making Output 1" is misleading. A gradient of 0 means no training regardless of the loss value.</p>
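The saturation argument can be checked numerically with the derivative identity above, as a minimal sketch:

```python
import math

# tanh'(x) = 1 - tanh(x)^2
def tanh_grad(x):
    return 1.0 - math.tanh(x) ** 2

grad_at_zero = tanh_grad(0.0)     # full gradient at the origin: 1.0
grad_saturated = tanh_grad(-6.0)  # effectively zero near the -1 plateau
```

Once the final activation sits near the <code>-1</code> plateau, the backpropagated gradient through it is essentially zero, whatever value the custom loss reports.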
<p>Another thing is that you are using the Adam optimizer - usually a great choice, but it seems unsuitable in this case. It's an optimizer with momentum, which cannot deal well with rapid slope changes during training; it will basically overshoot and therefore kind of ignore the slope change - this is amplified by training on individual samples and not on batches. (A slope change happens due to your if statement - it changes from <span class="math-container">$+1$</span> to <span class="math-container">$-1$</span> or vice versa.)</p>
<p>I'd argue it's a mixture of the first two points that makes your special case of setting N(1) = 1 actually work: it's your very first training sample, so it initializes Adam's momentum, which is a much better precondition for not converging to the <span class="math-container">$-1$</span> solution prematurely.</p>
<p>If you want to keep trying with this custom function, I'd try the following: 1) reduce the model size; 2a) use batched training instead of individual samples to make training less noisy; 2b) balance the batches such that <span class="math-container">$+1$</span> and <span class="math-container">$-1$</span> occur in equal amounts; 3) experiment with smaller learning rates; 4) you can also try other optimizers, but if everything else is configured a bit better, Adam should also work. (I'd try these in that order and see what happens after every change.)</p>
| 207
|
loss function
|
Can't the loss function be used as a fitness function?
|
https://ai.stackexchange.com/questions/45740/cant-the-loss-function-be-used-as-a-fitness-function
|
<p>Can't the loss function used in backpropagation be used as a fitness function in an evolutionary algorithm?</p>
|
<p>It definitely can. It took a bit but I found discussion on NEAT for supervised classification.</p>
<p>But for the record, evolutionary algorithms are a lot more interesting for problems that are not differentiable. They tend to be computationally expensive if you already know a reasonable algorithm (backprop).</p>
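A minimal sketch of the idea (a toy (1+1)-style hill climber, not NEAT): the MSE loss is used directly as the fitness to be minimized while fitting the line y = 3x + 1.

```python
import random

data = [(x, 3 * x + 1) for x in range(-5, 6)]

def loss(w, b):
    # Mean squared error of the candidate line w*x + b on the data.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

random.seed(0)
w, b = 0.0, 0.0
for _ in range(5000):
    # Mutate the parent; keep the child only if its fitness (the loss) improves.
    cw, cb = w + random.gauss(0, 0.1), b + random.gauss(0, 0.1)
    if loss(cw, cb) < loss(w, b):
        w, b = cw, cb
```

With enough iterations <code>(w, b)</code> should drift close to <code>(3, 1)</code>. No gradients are needed, so the same loop works even when the loss is not differentiable, at the cost of many more loss evaluations than backprop would need.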
| 208
|
loss function
|
Loss function on intermediate layers of the networks
|
https://ai.stackexchange.com/questions/47288/loss-function-on-intermediate-layers-of-the-networks
|
<p>Typically in supervised learning, a neural networks' output is compared to the targets through a loss function, and the gradients are backpropagated. Is it a bad idea to also have a loss function on intermediate layers? I feel that restricts the networks ability to actually optimize the final, global loss function. What are the instances in which an additional loss on intermediate layers would work?</p>
<p><strong>----EDIT----</strong></p>
<p>Clarification of my thought process for supervising intermediate layers, via examples:</p>
<ol>
<li><p>Merging contrastive loss to a Variational AutoEncoder (VAE) or a Classifier. I would have the original VAE loss function (or classifier's cross entropy loss), but additionally supervise the output of the encoder or some intermediate layer with a contrastive loss (for example the triplet loss, where the targets would be positive and negative samples from the batch)</p>
</li>
<li><p>A more classic example would be teacher-student distillation, supervising student's intermediate layers with the teacher's intermediate layers.</p>
</li>
<li><p>When intermediate layers provide a prediction itself. For instance, let's say we have an object detector where the <em>M</em>th layer outputs segmentation masks, and the <em>N</em>th layer (<em>N>M</em>) outputs the bounding boxes.</p>
</li>
</ol>
<p>So the question is, when would these additional losses work, or will they confuse the network just more?</p>
| 209
|
|
loss function
|
Transformer Loss Function for Music Generation
|
https://ai.stackexchange.com/questions/46306/transformer-loss-function-for-music-generation
|
<p>I am working on a Midi Generation project that takes tracks as inputs, and outputs a complimentary track of notes.</p>
<p>The tracks are basically a list of notes created of:</p>
<ul>
<li>Time</li>
<li>Duration</li>
<li>Pitch</li>
<li>Velocity</li>
</ul>
<p>I am one-hot encoding the pitch and duration (128 possibilities each), to run cross-entropy. My problem is that the pitch and duration seem to not even start to converge, staying at the max <code>ln(128) == ~4.85</code>, while the time's MSE loss seems to go down.</p>
<p>Also, I don't know if I am handling the masked outputs well.</p>
<p>The model:</p>
<pre><code>def forward(self, src, trg, src_mask, tgt_mask):
    src_emb = self.pos_enc(src)
    tgt_emb = self.pos_enc(trg)
    outs = self.transformer(
        src_emb,
        tgt_emb,
        src_key_padding_mask=src_mask,
        tgt_key_padding_mask=tgt_mask
    )
    time = self.time_ff(outs)
    time = self.relu(time)
    duration = self.duration_ff(outs)
    duration = self.relu(duration)
    pitch = self.pitch_ff(outs)
    pitch = self.softmax(pitch)
    velocity = self.velocity_ff(outs)
    velocity = self.softmax(velocity)
    concatenated_output = torch.cat([time, duration, pitch, velocity], dim=-1)
    return concatenated_output
</code></pre>
<p>The Loss Function:</p>
<pre><code>def midi_loss_fn(output, target):
    '''
    MSE + Cross Entropy Loss
    '''
    out_mask = output.isnan()
    tgt_mask = target == -2
    mask = ~(out_mask | tgt_mask)

    out_times, tgt_times = output[..., 0][mask[..., 0]], target[..., 0][mask[..., 0]]
    out_durations, tgt_durations = output[..., 1][mask[..., 1]], target[..., 1][mask[..., 1]]
    out_pitches, tgt_pitches = output[..., 2:130][mask[..., 2:130]], target[..., 2:130][mask[..., 2:130]]
    out_velocities, tgt_velocities = output[..., 130:258][mask[..., 130:258]], target[..., 130:258][mask[..., 130:258]]

    time_loss = F.mse_loss(out_times, tgt_times)
    duration_loss = F.mse_loss(out_durations, tgt_durations)
    pitch_loss = F.cross_entropy(out_pitches.reshape(-1, 128), tgt_pitches.reshape(-1, 128))
    velocity_loss = F.cross_entropy(out_velocities.reshape(-1, 128), tgt_velocities.reshape(-1, 128))
    return loss + 1e-8
</code></pre>
| 210
|
|
loss function
|
Different methods of calculating gradients of cost function(loss function)
|
https://ai.stackexchange.com/questions/19895/different-methods-of-calculating-gradients-of-cost-functionloss-function
|
<p>We need to find the gradient of the loss function (cost function) w.r.t. the weights in order to use optimization methods such as SGD or gradient descent. So far, I have come across two ways to compute the gradient:</p>
<ol>
<li>BackPropagation</li>
<li>Calculating gradient of loss function by calculus</li>
</ol>
<p>I found many resources for understanding backpropagation.
The 2nd method I am referring to is the image below (taken for a specific example; e is the error: the difference between target and prediction):
<a href="https://i.sstatic.net/OSOVB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OSOVB.jpg" alt="enter image description here"></a> </p>
<p>Also, the proof was mentioned in this paper:<a href="https://arxiv.org/pdf/1802.01528.pdf" rel="nofollow noreferrer">here</a></p>
<p>Moreover, I found this method while reading this <a href="https://www.pyimagesearch.com/2016/10/10/gradient-descent-with-python/" rel="nofollow noreferrer">blog</a>.(You might have to scroll down to see the code: gradient = X.T.dot(error) / X.shape[0] ) </p>
<p>My question is: are the two methods of finding the gradient of the cost function the same? They appear different, and if they are the same, which one is more efficient (though one can guess it is backpropagation)?</p>
<p>I would be grateful for any help. Thanks for being patient (it's my 1st time learning ML). </p>
|
<p>I'm pretty sure they're the same thing. The essential idea in both backpropagation and gradient descent is to calculate the partial derivative of the cost with respect to each weight, and subtract that partial derivative times the learning rate.</p>
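As a sketch, the calculus-based gradient for a linear model with MSE can be checked against a brute-force finite-difference estimate. (Note the blog's <code>X.T.dot(error) / X.shape[0]</code> drops the constant factor 2, which is usually just absorbed into the learning rate.)

```python
import numpy as np

# Toy linear regression data; the true weights are [1.5, -2.0, 0.5].
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5])
w = np.zeros(3)

def mse(w):
    e = X @ w - y
    return float(np.mean(e ** 2))

# Analytic gradient of mean((Xw - y)^2) w.r.t. w: (2/n) * X^T (Xw - y)
analytic = 2.0 * X.T @ (X @ w - y) / X.shape[0]

# Numeric gradient by central differences, one coordinate at a time
h = 1e-6
numeric = np.array([
    (mse(w + h * np.eye(3)[i]) - mse(w - h * np.eye(3)[i])) / (2 * h)
    for i in range(3)
])
```

The two vectors agree to within finite-difference error, which is the sense in which "backpropagation" and "computing the gradient by calculus" are the same computation; backprop is just an efficient way to organize the chain rule for deep models.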
| 211
|