machine translation
How to deploy Neural Machine Translation checkpoints on Azure as endpoints and microservice
https://stackoverflow.com/questions/78095214/how-to-deploy-neural-machine-translation-checkpoints-on-azure-as-endpoints-and-m
<p>I find it straightforward to deploy simple Machine Learning models on AzureML after training, as we can serialize the trained model into a single .pkl file, for example. However, for Neural Machine Translation models, especially when fine-tuning pre-trained models, we end up with up to eleven files. These files can be uploaded to Hugging Face for testing.</p> <p>I have been attempting to deploy these checkpoints on Azure as endpoints but have been unsuccessful, despite reading and following the Azure documentation. If anyone knows of a clear tutorial or has experience deploying such models on Azure, I would greatly appreciate the guidance. Additionally, I would love to know how to deploy these models as microservices on Azure. Thank you.</p> <p>I have read Microsoft's documentation on deployment, but I couldn't find specific information on deploying checkpoints for Neural Machine Translation models.</p> <p><a href="https://i.sstatic.net/XxZvQ.png" rel="nofollow noreferrer">Here are the files from the checkpoints after training and fine-tuning NMT models</a></p>
<p>Currently, foundation models of this kind can be deployed from the <code>Hugging Face</code> registry in Azure ML.</p> <p><img src="https://i.imgur.com/cKdisP2.png" alt="enter image description here" /></p> <p>So, you register your model with Hugging Face and then deploy it from there in Azure ML.</p> <p><img src="https://i.imgur.com/0SIXrYY.png" alt="enter image description here" /></p> <p>There is a component in the Azure ML registry that can convert these models to <code>MLflow</code> format. Check this <a href="https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/system/import/import_model_into_registry.ipynb" rel="nofollow noreferrer">notebook</a> for more information.</p> <p>After converting, you can deploy the model to an endpoint.</p> <p>But before all of this, you need to register a new model on Hugging Face with these files.</p>
1,334
machine translation
machine translation of HTML blocks
https://stackoverflow.com/questions/2305254/machine-translation-of-html-blocks
<p>Is there any API (like the Google Translate API) usable from PHP that can translate HTML blocks, translating only the text and leaving the HTML markup intact?</p>
<p>Microsoft's translation API will translate while maintaining HTML tags.</p> <p>The API is documented <a href="http://www.microsofttranslator.com/dev/" rel="nofollow noreferrer">here</a>. It has both a REST and WSDL interface.</p> <p>I tend to use the WSDL interface with PHP's SoapClient library. Here is some code to show you how to use it.</p> <pre><code>$client = new SoapClient("http://api.microsofttranslator.com/V1/SOAP.svc"); $params = array( 'appId' =&gt; 'my_app_id', 'text' =&gt; '&lt;p&gt;This is a &lt;b&gt;test&lt;/b&gt;&lt;/p&gt;', 'from' =&gt; 'en', 'to' =&gt; 'fr'); $translation = $client-&gt;translate($params); var_dump($translation); </code></pre> <p>You'll need to register with Microsoft for your own application ID which you pass up with each request. You can register <a href="http://www.bing.com/developers/createapp.aspx" rel="nofollow noreferrer">here</a>.</p> <p>I would advise against stripping out tags, translating and then re-inserting the tags. Since you have no guarantee that word number and order is preserved in the translation it makes it very difficult to know where to place the tags in the translated text. Better to have the MT engine handle the tags.</p>
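The V1 SOAP endpoint shown above has since been retired. For reference, here is a sketch of the equivalent call against the current Translator v3 REST API; the endpoint, parameter, and header names are assumptions taken from Microsoft's current documentation rather than from the original answer. The key point is the same as in the answer: `textType: "html"` tells the service to translate the text content while preserving the markup.

```python
def build_translate_request(html, source_lang, target_lang, subscription_key):
    """Assemble the pieces of a Translator v3 /translate call.

    No network call is made here, so the sketch stays self-contained;
    the caller can pass the result to e.g. requests.post(...).
    """
    return {
        "url": "https://api.cognitive.microsofttranslator.com/translate",
        "params": {
            "api-version": "3.0",
            "from": source_lang,
            "to": target_lang,
            "textType": "html",  # translate text content, keep HTML tags
        },
        "headers": {
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/json",
        },
        "json": [{"Text": html}],
    }

req = build_translate_request("<p>This is a <b>test</b></p>", "en", "fr", "my_app_id")
# e.g. requests.post(req["url"], params=req["params"],
#                    headers=req["headers"], json=req["json"])
```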
1,335
machine translation
What are the return values from fairseq WMT19 machine translation model&#39;s .generate() function?
https://stackoverflow.com/questions/76359809/what-are-the-return-values-from-fairseq-wmt19-machine-translation-models-gener
<p>I am trying to play around with the Fairseq machine translation model using</p> <pre><code>en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', tokenizer='moses', bpe='fastbpe') </code></pre> <p>When I use <code>en2de.generate(...)</code>, what are the return values of this function?</p> <p>This function is defined in the <code>hub_utils.py</code> file of fairseq.</p> <p>I tried debugging the code, but didn't get anywhere. I need a better understanding of its return types.</p>
<p>Most probably the code snippet you're looking at came from <a href="https://github.com/facebookresearch/fairseq/blob/main/examples/wmt19/README.md" rel="nofollow noreferrer">https://github.com/facebookresearch/fairseq/blob/main/examples/wmt19/README.md</a></p> <p>The example code there looks like this:</p> <pre><code>import torch # English to German translation en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', tokenizer='moses', bpe='fastbpe') </code></pre> <p>But most probably you'll meet some environmental setup issues because fairseq isn't easily useable &quot;off-the-shelf&quot;. So, you'll have to do something like this:</p> <pre><code>! pip install -U fastBPE sacremoses ! pip install -U hydra-core omegaconf bitarray ! git clone https://github.com/pytorch/fairseq &amp;&amp; cd fairseq &amp;&amp; pip install --editable ./ </code></pre> <p>After setting up the environment, now you can try this again:</p> <pre><code>import torch # English to German translation en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt', tokenizer='moses', bpe='fastbpe') type(en2de) </code></pre> <p>[out]:</p> <pre><code>fairseq.hub_utils.GeneratorHubInterface </code></pre> <p>If we do some code digging, it points to <a href="https://github.com/facebookresearch/fairseq/blob/main/fairseq/hub_utils.py#L97" rel="nofollow noreferrer">https://github.com/facebookresearch/fairseq/blob/main/fairseq/hub_utils.py#L97</a></p> <pre><code>class GeneratorHubInterface(nn.Module): &quot;&quot;&quot; PyTorch Hub interface for generating sequences from a pre-trained translation or language model. 
&quot;&quot;&quot; </code></pre> <p>And if we look at the <code>translate()</code> function, it goes to <a href="https://github.com/facebookresearch/fairseq/blob/main/fairseq/hub_utils.py#LL133C1-L145C76" rel="nofollow noreferrer">https://github.com/facebookresearch/fairseq/blob/main/fairseq/hub_utils.py#LL133C1-L145C76</a></p> <pre><code> def translate( self, sentences: List[str], beam: int = 5, verbose: bool = False, **kwargs ) -&gt; List[str]: return self.sample(sentences, beam, verbose, **kwargs) def sample( self, sentences: List[str], beam: int = 1, verbose: bool = False, **kwargs ) -&gt; List[str]: if isinstance(sentences, str): return self.sample([sentences], beam=beam, verbose=verbose, **kwargs)[0] tokenized_sentences = [self.encode(sentence) for sentence in sentences] batched_hypos = self.generate(tokenized_sentences, beam, verbose, **kwargs) return [self.decode(hypos[0][&quot;tokens&quot;]) for hypos in batched_hypos] </code></pre> <h1>So <code>.translate()</code> returns a list of strings</h1> <p>And if we dig deeper into the rabbit hole, we see the <code>.generate()</code> function from <a href="https://github.com/facebookresearch/fairseq/blob/main/fairseq/hub_utils.py#L170" rel="nofollow noreferrer">https://github.com/facebookresearch/fairseq/blob/main/fairseq/hub_utils.py#L170</a> which returns</p> <pre><code>def generate( self, tokenized_sentences: List[torch.LongTensor], beam: int = 5, verbose: bool = False, skip_invalid_size_inputs=False, inference_step_args=None, prefix_allowed_tokens_fn=None, **kwargs ) -&gt; List[List[Dict[str, torch.Tensor]]]: </code></pre> <p>And if you use the model with <code>.generate()</code>,</p> <pre><code>tokenized_sentences = en2de.encode(&quot;Machine learning is great!&quot;) en2de.generate([tokenized_sentences]) </code></pre> <p>[out]:</p> <pre><code>[[{'tokens': tensor([21259, 99, 4125, 15336, 34, 5013, 19663, 111, 2]), 'score': tensor(-0.2017), 'attention': tensor([[0.2876, 0.0079, 0.0066, 0.0211, 0.0117, 0.0107, 
0.0026, 0.0049, 0.0067], [0.1374, 0.0239, 0.0076, 0.0090, 0.0062, 0.0049, 0.0021, 0.0029, 0.0034], [0.0817, 0.0073, 0.0472, 0.3804, 0.0206, 0.0112, 0.0031, 0.0072, 0.0059], [0.0684, 0.0017, 0.0033, 0.0079, 0.1894, 0.1042, 0.0093, 0.0214, 0.0088], [0.0862, 0.0021, 0.0021, 0.0055, 0.0991, 0.2868, 0.0274, 0.0126, 0.0065], [0.0415, 0.0053, 0.0049, 0.0089, 0.0388, 0.0405, 0.0146, 0.1026, 0.0346], [0.2972, 0.9517, 0.9284, 0.5673, 0.6342, 0.5417, 0.9409, 0.8484, 0.9341]]), 'alignment': tensor([]), 'positional_scores': tensor([-0.5091, -0.0979, -0.0993, -0.0672, -0.1520, -0.5898, -0.0818, -0.1108, -0.1069])}, {'tokens': tensor([21259, 99, 4125, 15336, 34, 19503, 111, 2]), 'score': tensor(-0.3501), 'attention': tensor([[0.2876, 0.0079, 0.0066, 0.0211, 0.0117, 0.0107, 0.0048, 0.0073], [0.1374, 0.0239, 0.0076, 0.0090, 0.0062, 0.0049, 0.0027, 0.0037], [0.0817, 0.0073, 0.0472, 0.3804, 0.0206, 0.0112, 0.0070, 0.0063], [0.0684, 0.0017, 0.0033, 0.0079, 0.1894, 0.1042, 0.0217, 0.0097], [0.0862, 0.0021, 0.0021, 0.0055, 0.0991, 0.2868, 0.0129, 0.0078], [0.0415, 0.0053, 0.0049, 0.0089, 0.0388, 0.0405, 0.1076, 0.0373], [0.2972, 0.9517, 0.9284, 0.5673, 0.6342, 0.5417, 0.8431, 0.9280]]), 'alignment': tensor([]), 'positional_scores': tensor([-0.5091, -0.0979, -0.0993, -0.0672, -0.1520, -1.6566, -0.1113, -0.1072])}, {'tokens': tensor([ 5725, 372, 8984, 3845, 34, 5013, 19663, 111, 2]), 'score': tensor(-0.4066), 'attention': tensor([[0.2876, 0.0278, 0.0040, 0.0030, 0.0192, 0.0150, 0.0032, 0.0075, 0.0083], [0.1374, 0.0755, 0.0019, 0.0379, 0.0087, 0.0062, 0.0027, 0.0044, 0.0043], [0.0817, 0.0269, 0.4516, 0.0801, 0.0227, 0.0120, 0.0038, 0.0084, 0.0065], [0.0684, 0.0034, 0.0067, 0.0091, 0.1939, 0.1039, 0.0097, 0.0224, 0.0099], [0.0862, 0.0031, 0.0040, 0.0030, 0.1022, 0.2868, 0.0296, 0.0135, 0.0073], [0.0415, 0.0058, 0.0054, 0.0066, 0.0373, 0.0400, 0.0146, 0.1016, 0.0351], [0.2972, 0.8574, 0.5264, 0.8603, 0.6160, 0.5361, 0.9364, 0.8422, 0.9287]]), 'alignment': tensor([]), 'positional_scores': 
tensor([-2.0029, -0.3431, -0.1785, -0.0286, -0.1586, -0.6527, -0.0782, -0.1101, -0.1071])}, {'tokens': tensor([21259, 99, 4125, 15336, 34, 8404, 111, 2]), 'score': tensor(-0.5465), 'attention': tensor([[0.2876, 0.0079, 0.0066, 0.0211, 0.0117, 0.0107, 0.0047, 0.0074], [0.1374, 0.0239, 0.0076, 0.0090, 0.0062, 0.0049, 0.0026, 0.0037], [0.0817, 0.0073, 0.0472, 0.3804, 0.0206, 0.0112, 0.0071, 0.0064], [0.0684, 0.0017, 0.0033, 0.0079, 0.1894, 0.1042, 0.0221, 0.0095], [0.0862, 0.0021, 0.0021, 0.0055, 0.0991, 0.2868, 0.0125, 0.0077], [0.0415, 0.0053, 0.0049, 0.0089, 0.0388, 0.0405, 0.1046, 0.0372], [0.2972, 0.9517, 0.9284, 0.5673, 0.6342, 0.5417, 0.8464, 0.9282]]), 'alignment': tensor([]), 'positional_scores': tensor([-0.5091, -0.0979, -0.0993, -0.0672, -0.1520, -3.2290, -0.1100, -0.1075])}, {'tokens': tensor([ 9467, 5293, 34, 5013, 19663, 111, 2]), 'score': tensor(-0.5483), 'attention': tensor([[0.2876, 0.0109, 0.0157, 0.0154, 0.0032, 0.0069, 0.0093], [0.1374, 0.0110, 0.0081, 0.0065, 0.0027, 0.0039, 0.0047], [0.0817, 0.2288, 0.0219, 0.0131, 0.0034, 0.0076, 0.0069], [0.0684, 0.0045, 0.1818, 0.0989, 0.0097, 0.0224, 0.0100], [0.0862, 0.0037, 0.0979, 0.2854, 0.0276, 0.0135, 0.0076], [0.0415, 0.0074, 0.0343, 0.0404, 0.0146, 0.1005, 0.0363], [0.2972, 0.7337, 0.6403, 0.5402, 0.9388, 0.8452, 0.9253]]), 'alignment': tensor([]), 'positional_scores': tensor([-2.1557, -0.5372, -0.1502, -0.6968, -0.0819, -0.1092, -0.1072])}]] </code></pre> <h1><code>.generate()</code> returns a list of list of dict, where keys are names and values are tensor</h1> <p>The outer most list is the sentences' result. If you have one sentence, the result for the sentence is:</p> <pre><code>tokenized_sentences = [en2de.encode(&quot;Machine learning is great!&quot;)] results = en2de.generate(tokenized_sentences) translation_sent1 = results[0] len(translation_sent1) </code></pre> <p>[out]:</p> <pre><code>5 </code></pre> <p>You'll see that each sentence has 5 translation results. 
This is because the beam size is set to 5 by default. Each dictionary in the inner list corresponds to the translations from each beam.</p> <pre><code>tokenized_sentences = [en2de.encode(&quot;Machine learning is great!&quot;)] results = en2de.generate(tokenized_sentences, beam=2) translation_sent1 = results[0] len(translation_sent1) </code></pre> <p>[out]:</p> <pre><code>2 </code></pre> <p>And to get the best translation:</p> <pre><code>tokenized_sentences = [en2de.encode(&quot;Machine learning is great!&quot;)] results = en2de.generate(tokenized_sentences, beam=2) translation_sent1 = results[0] # 2 translations from 2 beams for the 1st sentence. best_translation = translation_sent1[0] # Best 1 translation out of the 2 beams. best_translation </code></pre> <p>[out]:</p> <pre><code>{'tokens': tensor([21259, 99, 4125, 15336, 34, 5013, 19663, 111, 2]), 'score': tensor(-0.2017), 'attention': tensor([[0.2876, 0.0079, 0.0066, 0.0211, 0.0117, 0.0107, 0.0026, 0.0049, 0.0067], [0.1374, 0.0239, 0.0076, 0.0090, 0.0062, 0.0049, 0.0021, 0.0029, 0.0034], [0.0817, 0.0073, 0.0472, 0.3804, 0.0206, 0.0112, 0.0031, 0.0072, 0.0059], [0.0684, 0.0017, 0.0033, 0.0079, 0.1894, 0.1042, 0.0093, 0.0214, 0.0088], [0.0862, 0.0021, 0.0021, 0.0055, 0.0991, 0.2868, 0.0274, 0.0126, 0.0065], [0.0415, 0.0053, 0.0049, 0.0089, 0.0388, 0.0405, 0.0146, 0.1026, 0.0346], [0.2972, 0.9517, 0.9284, 0.5673, 0.6342, 0.5417, 0.9409, 0.8484, 0.9341]]), 'alignment': tensor([]), 'positional_scores': tensor([-0.5091, -0.0979, -0.0993, -0.0672, -0.1520, -0.5898, -0.0818, -0.1108, -0.1069])} </code></pre> <p>And to get the string representation, we fetch the tokens and decode them:</p> <pre><code>en2de.decode(best_translation['tokens']) </code></pre> <p>[out]:</p> <pre><code>Maschinelles Lernen ist großartig! 
</code></pre> <p>Here's the working code for the above examples, <a href="https://www.kaggle.com/alvations/how-to-use-fairseq-wmt19-models" rel="nofollow noreferrer">https://www.kaggle.com/alvations/how-to-use-fairseq-wmt19-models</a></p>
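To summarize the answer above, the nesting of `.generate()`'s return value can be sketched with plain Python objects, floats standing in for tensors; the token ids and scores below are illustrative, not real model output:

```python
# List[List[Dict]]: outer list = input sentences, inner list = beam
# hypotheses sorted best-first, each dict = one hypothesis.
results = [
    [  # hypotheses for the first (and only) input sentence
        {"tokens": [21259, 99, 4125, 2], "score": -0.20},  # best beam
        {"tokens": [21259, 99, 111, 2], "score": -0.35},   # second beam
    ],
]

best = results[0][0]  # best hypothesis for sentence 0
# with a real model: en2de.decode(best["tokens"]) -> the translated string
```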
1,336
machine translation
How do we generate the first target words in machine translation?
https://stackoverflow.com/questions/73103907/how-do-we-generate-the-first-target-words-in-machine-translation
<p>I am learning about machine translation tasks with transformers. To my knowledge, the transformers model predicts the next word of the target sentence based on the previous words of the source sentence. However, in the MarianMT model (or T5), I find its tokenizer does not have a start of sentence token (&lt;cls&gt; or &lt;s&gt;). I think that a token is needed to start predicting the first word in the target sentence.</p> <p>Can anyone explain to me how the MarianMT model will predict the first word in the target sentence?</p> <p>Thank you.</p>
<p>From the <a href="https://huggingface.co/docs/transformers/model_doc/marian" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>the model starts generating with pad_token_id (which has <code>0</code> as a token_embedding) as the prefix (Bart uses <code>&lt;s/&gt;</code>)</p> </blockquote> <p>So it does not need an SOS token: it uses the padding token as the first decoder token, both during training and when generating.</p>
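As a toy illustration of what "starting with the pad token" means — the token ids and the dummy step function below are made up, and this is not MarianMT's actual decoding code — greedy generation simply seeds the decoder input with `pad_token_id`:

```python
def greedy_decode(next_token_fn, pad_token_id=0, eos_token_id=2, max_len=10):
    """Autoregressive loop: the first decoder input is <pad>, not <s>."""
    prefix = [pad_token_id]
    while len(prefix) < max_len:
        token = next_token_fn(prefix)
        prefix.append(token)
        if token == eos_token_id:
            break
    return prefix

# Dummy "model" that emits a fixed token sequence, then EOS.
stream = iter([17, 42, 2])
decoded = greedy_decode(lambda prefix: next(stream))
# decoded == [0, 17, 42, 2]: pad prefix, two generated tokens, EOS
```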
1,337
machine translation
What is projection layer in the context of neural machine translation using RNN?
https://stackoverflow.com/questions/60110462/what-is-projection-layer-in-the-context-of-neural-machine-translation-using-rnn
<p>I read a paper about machine translation, and it uses projection layer. The projection layer is explained as follows: "Additional projection aims to reduce the dimensionality of the encoder output representations to match the decoder stack dimension."</p> <p>Does anyone know this architecture or how to implement this layer in Pytorch?</p> <p>The paper's link: <a href="https://www.aclweb.org/anthology/P18-1008.pdf" rel="nofollow noreferrer">https://www.aclweb.org/anthology/P18-1008.pdf</a></p> <p>The model architecture:</p> <p><a href="https://i.sstatic.net/3pN0j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3pN0j.png" alt="enter image description here"></a></p>
<p>It is a standard linear projection. You can just add <code>nn.Linear(2 * model_dim, model_dim)</code>, where <code>model_dim</code> is the RNN dimension.</p> <p>The encoder is bidirectional, with one RNN in each direction, each producing an output of dimension <code>model_dim</code>. The decoder only works in the forward direction, so its states have only <code>model_dim</code> dimensions. The projection also saves a lot of parameters in the multi-head attention: the projections for keys and values are only half the size, because they project from <code>model_dim</code> instead of <code>2 * model_dim</code>.</p>
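A minimal numpy sketch of that projection, with illustrative shapes; in PyTorch this is exactly the `nn.Linear(2 * model_dim, model_dim)` mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
model_dim, seq_len, batch = 4, 3, 2

# Bidirectional encoder: forward and backward states are concatenated,
# giving 2 * model_dim features per position.
h_fwd = rng.normal(size=(seq_len, batch, model_dim))
h_bwd = rng.normal(size=(seq_len, batch, model_dim))
enc_out = np.concatenate([h_fwd, h_bwd], axis=-1)  # (seq_len, batch, 2*model_dim)

# The projection layer: one learned affine map 2*model_dim -> model_dim.
W = rng.normal(size=(2 * model_dim, model_dim))
b = np.zeros(model_dim)
projected = enc_out @ W + b  # (seq_len, batch, model_dim), matches the decoder
```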
1,338
machine translation
Input to attention in TensorFlow 2.0 tutorial on &quot;Neural machine translation with attention&quot;
https://stackoverflow.com/questions/58618837/input-to-attention-in-tensorflow-2-0-tutorial-on-neural-machine-translation-wit
<p>I have a question about the example <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">"Neural machine translation with attention"</a>.</p> <pre class="lang-py prettyprint-override"><code>class Decoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz): super(Decoder, self).__init__() self.batch_sz = batch_sz self.dec_units = dec_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(self.dec_units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') self.fc = tf.keras.layers.Dense(vocab_size) # used for attention self.attention = BahdanauAttention(self.dec_units) def call(self, x, hidden, enc_output): # enc_output shape == (batch_size, max_length, hidden_size) context_vector, attention_weights = self.attention(hidden, enc_output) # x shape after passing through embedding == (batch_size, 1, embedding_dim) x = self.embedding(x) # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size) x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1) # passing the concatenated vector to the GRU output, state = self.gru(x) # output shape == (batch_size * 1, hidden_size) output = tf.reshape(output, (-1, output.shape[2])) # output shape == (batch_size, vocab) x = self.fc(output) return x, state, attention_weights </code></pre> <p>Why is the attention weight calculated from <code>encoder_output</code> and <code>encoder_hidden</code>, and why is the context vector concatenated with decoder_embedding? In my opinion, the attention weight should be calculated from <code>encoder_output</code> and every single hidden state of decoder_output, and the context vector should be concatenated with decoder_output.</p> <p>Maybe I have not understood seq2seq with attention completely?</p>
<p>The attention is called in every step of the decoder. The inputs to the decoder step are:</p> <ul> <li>previously decoded token <code>x</code> (or ground-truth token while training)</li> <li>previous hidden state of the <strong>decoder</strong> <code>hidden</code></li> <li>hidden states of the encoder <code>enc_output</code></li> </ul> <p>As you correctly say, the attention takes the single decoder hidden state and all encoder hidden states as input, which gives you the context vector.</p> <pre class="lang-py prettyprint-override"><code>context_vector, attention_weights = self.attention(hidden, enc_output) </code></pre> <p>The context vector gets concatenated with the embedding only after calling the attention mechanism, when it is used as the input of the GRU cell.</p> <pre class="lang-py prettyprint-override"><code>x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1) output, state = self.gru(x) </code></pre> <p>The variable <code>output</code> will become <code>hidden</code> in the next step of the decoder.</p>
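The per-step attention computation can be sketched in numpy; note the score here is simplified to a dot product, whereas the tutorial's `BahdanauAttention` uses a small learned additive (feed-forward) score:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
batch, max_length, hidden_size = 2, 5, 8

hidden = rng.normal(size=(batch, hidden_size))                 # decoder state
enc_output = rng.normal(size=(batch, max_length, hidden_size)) # encoder states

# score[b, t] = <hidden[b], enc_output[b, t]>  (dot-product stand-in)
score = np.einsum("bh,bth->bt", hidden, enc_output)
attention_weights = softmax(score, axis=1)                  # (batch, max_length)
context_vector = np.einsum("bt,bth->bh", attention_weights, enc_output)
# context_vector is what gets concatenated with the embedded input token
```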
1,339
machine translation
How to parallelize transformer model for machine translation on 8 GPUs?
https://stackoverflow.com/questions/78774602/how-to-parallelize-transformer-model-for-machine-translation-on-8-gpus
<p>I am attempting to perform machine translation using a transformer model in a manner almost identical to the original paper. While the model works reasonably well, it requires substantial computational resources. To address this, I ran the model on a computer with 8 GPUs, but I lack experience in this area. I tried to make the necessary adjustments for parallelization:</p> <pre><code>transformer = nn.DataParallel(transformer) transformer = transformer.to(DEVICE) </code></pre> <p>However, due to my lack of experience, things are not working well. Specifically, I have been stuck for a long time on the following error message:</p> <pre class="lang-none prettyprint-override"><code>File &quot;C:\Projects\MT005\.venv\Lib\site-packages\torch\nn\functional.py&quot;, line 5382, in multi_head_attention_forward raise RuntimeError(f&quot;The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.&quot;) RuntimeError: The shape of the 2D attn_mask is torch.Size([8, 64]), but should be (4, 4). </code></pre> <p>Could someone help me solve this problem and get the model running on all 8 GPUs?</p>
1,340
machine translation
Tensorflow: Creating a custom text dataset to use in machine translation
https://stackoverflow.com/questions/57099339/tensorflow-creating-a-custom-text-dataset-to-use-in-machine-translation
<p>I want to use my own data to train the model for a <a href="https://www.tensorflow.org/beta/tutorials/text/transformer" rel="nofollow noreferrer">machine translation system using Transformers</a>. There is a set of datasets already available in TFDS (TensorFlow Datasets), and there is also an option to <a href="https://www.tensorflow.org/datasets/add_dataset#use_the_default_template" rel="nofollow noreferrer">add a new dataset</a> to TFDS. But what if I don't want to wait for those add requests and instead train directly on my data?</p> <p>In the example colab notebook, they use the following to create train and validation data:</p> <pre><code>examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True, as_supervised=True) train_examples, val_examples = examples['train'], examples['validation'] </code></pre> <p>I believe TFDS does a lot of preprocessing to fit into the pipeline, and the result is of Dataset type.</p> <pre><code>type(train_examples) tensorflow.python.data.ops.dataset_ops._OptionsDataset </code></pre> <p>But for custom CSV data like the below, how do I create a 'Dataset' compatible with this model?</p> <pre><code>import pandas as pd # initialize list of lists data = [['tom', 10], ['nick', 15], ['juli', 14],['tom', 10], ['nick', 15]] # Create the pandas DataFrame df = pd.DataFrame(data, columns = ['Name', 'Age']) # print dataframe. df </code></pre>
<p>The dataset in the colab notebook is just a collection of pairs of strings (the translation pairs of sentences). This doesn't seem to be what you have there (you have name and age??).</p> <p>However, it is certainly possible to create a Dataset from a csv of language pairs (or name and age for that matter!). There is a comprehensive guide to the dataset API here: <a href="https://www.tensorflow.org/guide/datasets" rel="nofollow noreferrer">https://www.tensorflow.org/guide/datasets</a> but essentially, given a csv named "translations.csv" that looks like this:</p> <pre><code>hola,hello adios,goodbye perro,dog huevos,eggs ... </code></pre> <p>then we can just do:</p> <pre class="lang-py prettyprint-override"><code>my_dataset = tf.data.experimental.CsvDataset("translations.csv", [tf.string, tf.string]) </code></pre> <p>similarly, for your name/age dataset you could do something like:</p> <pre><code>my_dataset = tf.data.experimental.CsvDataset("ages.csv", [tf.string, tf.int32]) </code></pre>
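If TensorFlow isn't a hard requirement at the loading stage, the same pairs can also be read with the standard library first and handed to a `tf.data` pipeline later. A small sketch, with the file contents inlined so the example is self-contained:

```python
import csv
import io

# Stand-in for open("translations.csv"); contents are illustrative.
csv_file = io.StringIO("hola,hello\nadios,goodbye\nhuevos,eggs\n")

pairs = [(src, tgt) for src, tgt in csv.reader(csv_file)]
# pairs == [("hola", "hello"), ("adios", "goodbye"), ("huevos", "eggs")]
```

From here, something like `tf.data.Dataset.from_tensor_slices(pairs)` could turn the in-memory pairs into a Dataset, if that route is preferred over `CsvDataset`.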
1,341
machine translation
No gradients provided for any variable, Attention-based Neural Machine Translation, tensorflow-keras implementation
https://stackoverflow.com/questions/62906248/no-gradients-provided-for-any-variable-attention-based-neural-machine-translati
<p>I am trying to implement an attention-based Neural Machine Translation architecture in Keras. I have an encoder class (extends Model), an attention class (extends Layer) and a decoder class (extends Model). Finally, I have a &quot;combine&quot; class (extends Model) that combines all these models.</p> <p>Here is the issue I am facing:</p> <p><a href="https://i.sstatic.net/yPLs8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yPLs8.png" alt="enter image description here" /></a></p> <p>How do I provide gradients to all these variables?</p> <p>I have spent days figuring this out and trying alternatives like GradientTape, but nothing seemed to work.</p>
1,342
machine translation
What are the methods to collect data for building machine translation model (French to German)
https://stackoverflow.com/questions/56971308/what-are-the-methods-to-collect-data-for-building-machine-translation-model-fre
<p>I have a lot of emails in French and I want to translate them into German.</p> <p>For this I need a Machine Translation model, but I am not sure how to collect data for creating the model.</p> <p>It's OK to have low accuracy at the start, but I cannot find a way to begin collecting data.</p> <p>Please suggest...</p>
<blockquote> <p>German-French texts extracted from the website of the Federal Foreign Office Berlin. This includes 11,852 pairs that were translated between October 2013 and the beginning of November 2015 and converted into a .TMX file format.</p> </blockquote> <p><a href="https://data.europa.eu/euodp/en/data/dataset/elrc_42" rel="nofollow noreferrer">https://data.europa.eu/euodp/en/data/dataset/elrc_42</a></p>
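The .TMX format mentioned above is plain XML, so extracting sentence pairs from such a file needs only the standard library. A sketch; the snippet below is a hand-made minimal TMX, and real ELRC files carry more metadata:

```python
import xml.etree.ElementTree as ET

tmx = """<tmx version="1.4"><body>
  <tu>
    <tuv xml:lang="fr"><seg>Bonjour</seg></tuv>
    <tuv xml:lang="de"><seg>Guten Tag</seg></tuv>
  </tu>
</body></tmx>"""

# "xml:lang" lives in the reserved XML namespace, so ElementTree
# exposes it under the fully qualified attribute name.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

pairs = []
for tu in ET.fromstring(tmx).iter("tu"):
    segs = {tuv.get(XML_LANG): tuv.find("seg").text for tuv in tu.iter("tuv")}
    pairs.append((segs["fr"], segs["de"]))
# pairs == [("Bonjour", "Guten Tag")]
```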
1,343
machine translation
Machine translation for multilingual sentiment analysis
https://stackoverflow.com/questions/4482758/machine-translation-for-multilingual-sentiment-analysis
<p>I am trying to do sentiment analysis for non-English languages like Japanese, Chinese, German, etc. I want to know if any machine translator is available for translating documents in these languages to English. I am working in Java, so I should be able to call the API or tool. I have already used the Google Translate API, so please suggest alternatives to it.</p>
<p>Sentiment analysis is highly dependent on both the culture and the domain of practice (see <a href="http://en.wikipedia.org/wiki/Sentiment_analysis" rel="nofollow">http://en.wikipedia.org/wiki/Sentiment_analysis</a>). We are working on SA for scientific texts, and this is undoubtedly still a research area. So I don't think you will find anything off-the-shelf, either for your particular languages or for the SA itself.</p>
1,344
machine translation
Neural machine translation - seq2seq encoder-decoder
https://stackoverflow.com/questions/65879201/neural-machine-translation-seq2seq-encoder-decoder
<p>I am working on seq2seq NMT for french to english translation. In the inference model I am getting cardinality error.</p> <blockquote> <p>ValueError: Data cardinality is ambiguous:<br /> x sizes: 1, 5, 5<br /> Please provide data which shares the same first dimension.</p> </blockquote> <pre class="lang-py prettyprint-override"><code> encoder_inputs = Input(shape=(None,)) embedding_e = Embedding(num_source_vocab,256,mask_zero = True) encoder_embedding = embedding_e(encoder_inputs) encoder = LSTM(256,return_state = True) encoder_outputs,state_h,state_c = encoder(encoder_embedding) encoder_states = [state_h,state_c] decoder_inputs = Input(shape=(None,)) embedding_f = Embedding(num_target_vocab,256,mask_zero = True) decoder_embedding = embedding_f(decoder_inputs) decoder = LSTM(256,return_sequences = True,return_state = True) decoder_outputs,_,_ = decoder(decoder_embedding,initial_state=encoder_states) decoder_dense = Dense(num_target_vocab,activation= 'softmax') decoder_outputs = decoder_dense(decoder_outputs) model = Model([encoder_inputs,decoder_inputs],[decoder_outputs]) model.compile(optimizer = 'rmsprop',loss = 'categorical_crossentropy',metrics = ['accuracy']) model.summary() filepath = 'eng2fre.h5' checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max') history = model.fit([encoder_input_data,decoder_input_data],decoder_target_data,epochs =20,batch_size = 64,validation_split=0.2,callbacks=[checkpoint]) encoder_model = Model(encoder_inputs,encoder_states) decoder_state_input_h = Input(shape=(256,)) decoder_state_input_c = Input(shape=(256,)) decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c] decoder_inputs_single = Input(shape=(1,)) decoder_inputs_single_x = embedding_f(decoder_inputs_single) decoder_outputs2, state_h2, state_c2 = decoder( decoder_inputs_single_x, initial_state=decoder_states_inputs) decoder_states2 = [state_h2, state_c2] decoder_outputs2 = decoder_dense(decoder_outputs2) 
decoder_model = Model( [decoder_inputs_single] + decoder_states_inputs, [decoder_outputs2] + decoder_states2) x=encoder_input_data[100] states = encoder_model.predict(x) input_single = np.zeros((1,1)) input_single[0,0] = target_vocab['sos'] eos_id = target_vocab['eos'] # getting error after the following chunk of code for i in range(max_target_length): dec_op,h,c = decoder_model.predict([input_single]+ states) </code></pre>
1,345
machine translation
How to tune a Machine Translation model with huge language model?
https://stackoverflow.com/questions/29869607/how-to-tune-a-machine-translation-model-with-huge-language-model
<p><code>Moses</code> is a software to build machine translation models. And <code>KenLM</code> is the defacto language model software that moses uses.</p> <p>I have a textfile with 16GB of text and i use it to build a language model as such:</p> <pre><code>bin/lmplz -o 5 &lt;text &gt; text.arpa </code></pre> <p>The resulting file (<code>text.arpa</code>) is 38GB. Then I binarized the language model as such:</p> <pre><code>bin/build_binary text.arpa text.binary </code></pre> <p>And the binarized language model (<code>text.binary</code>) grows to 71GB.</p> <p>In <code>moses</code>, after training the translation model, you should tune the weights of the model by using <code>MERT</code> algorithm. And this can simply be done with <a href="https://github.com/moses-smt/mosesdecoder/blob/master/scripts/training/mert-moses.pl">https://github.com/moses-smt/mosesdecoder/blob/master/scripts/training/mert-moses.pl</a>. </p> <p>MERT works fine with small language model but with the big language model, it takes quite some days to finish. </p> <p>I did a google search and found KenLM's filter, which promises to filter the language model to a smaller size: <a href="https://kheafield.com/code/kenlm/filter/">https://kheafield.com/code/kenlm/filter/</a></p> <p>But i'm clueless as to how to make it work. The command help gives:</p> <pre><code>$ ~/moses/bin/filter Usage: /home/alvas/moses/bin/filter mode [context] [phrase] [raw|arpa] [threads:m] [batch_size:m] (vocab|model):input_file output_file copy mode just copies, but makes the format nicer for e.g. irstlm's broken parser. single mode treats the entire input as a single sentence. multiple mode filters to multiple sentences in parallel. Each sentence is on a separate line. A separate file is created for each sentence by appending the 0-indexed line number to the output file name. union mode produces one filtered model that is the union of models created by multiple mode. 
context means only the context (all but last word) has to pass the filter, but the entire n-gram is output. phrase means that the vocabulary is actually tab-delimited phrases and that the phrases can generate the n-gram when assembled in arbitrary order and clipped. Currently works with multiple or union mode. The file format is set by [raw|arpa] with default arpa: raw means space-separated tokens, optionally followed by a tab and arbitrary text. This is useful for ngram count files. arpa means the ARPA file format for n-gram language models. threads:m sets m threads (default: conccurrency detected by boost) batch_size:m sets the batch size for threading. Expect memory usage from this of 2*threads*batch_size n-grams. There are two inputs: vocabulary and model. Either may be given as a file while the other is on stdin. Specify the type given as a file using vocab: or model: before the file name. For ARPA format, the output must be seekable. For raw format, it can be a stream i.e. /dev/stdout </code></pre> <p>But when I tried the following, it gets stuck and does nothing:</p> <pre><code>$ ~/moses/bin/filter union lm.en.binary lm.filter.binary Assuming that lm.en.binary is a model file Reading lm.en.binary ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100 </code></pre> <p><strong>What should one do to the Language Model after binarization? Is there any other steps to manipulate large language models to reduce the computing load when tuning?</strong></p> <p><strong>What is the usual way to tune on a large LM file?</strong></p> <p><strong>How to use KenLM's filter?</strong></p> <p>(more details on <a href="https://www.mail-archive.com/moses-support@mit.edu/msg12089.html">https://www.mail-archive.com/moses-support@mit.edu/msg12089.html</a>)</p>
<p>To answer how to use the <code>filter</code> command of <a href="https://kheafield.com/code/kenlm/" rel="nofollow noreferrer">KenLM</a>:</p> <pre><code>cat small_vocabulary_one_word_per_line.txt \ | filter single \ "model:LM_large_vocab.arpa" \ output_LM_small_vocab. </code></pre> <p>Note that <code>single</code> can be replaced with <code>union</code> or <code>copy</code>. Read more in the help text, which is printed if you run the <code>filter</code> binary without arguments.</p>
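For completeness: the vocabulary input that `filter` expects is just the whitespace-separated tokens of the data you want the filtered model to cover (for example the tuning set). A hedged sketch of building such a vocabulary file with standard Unix tools; `dev.en` here is a placeholder for your real tuning corpus:

```shell
# Toy stand-in for the real tuning corpus; replace with your dev set.
printf 'the cat sat\nthe dog sat\n' > dev.en

# One token per line, deduplicate, then join back into a single
# whitespace-separated vocabulary file for filter's vocab input.
tr -s ' \n' '\n\n' < dev.en | LC_ALL=C sort -u | tr '\n' ' ' > small_vocabulary.txt

cat small_vocabulary.txt
```

The resulting file is what gets piped into `filter` as in the command above; shrinking the ARPA model to the tuning vocabulary before binarization should make MERT-time decoding far more tractable.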
1,346
machine translation
How to use locally saved United MUP model in Unbabel-Comet model for Machine Translation Evaluation?
https://stackoverflow.com/questions/76953227/how-to-use-locally-saved-united-mup-model-in-unbabel-comet-model-for-machine-tra
<p>From <a href="https://huggingface.co/Unbabel/unite-mup" rel="nofollow noreferrer">https://huggingface.co/Unbabel/unite-mup</a>, there's a model that comes from the <a href="https://aclanthology.org/2022.acl-long.558/" rel="nofollow noreferrer">UniTE: Unified Translation Evaluation</a> paper. The usage was documented as such:</p> <pre><code>from comet import download_model, load_from_checkpoint model_path = download_model(&quot;Unbabel/unite-mup&quot;) model = load_from_checkpoint(model_path) data = [ { &quot;src&quot;: &quot;这是个句子。&quot;, &quot;mt&quot;: &quot;This is a sentence.&quot;, &quot;ref&quot;: &quot;It is a sentence.&quot; }, { &quot;src&quot;: &quot;这是另一个句子。&quot;, &quot;mt&quot;: &quot;This is another sentence.&quot;, &quot;ref&quot;: &quot;It is another sentence.&quot; } ] model_output = model.predict(data, batch_size=8, gpus=1) </code></pre> <p>Similar to <a href="https://stackoverflow.com/questions/75879866/how-to-load-unbabel-comet-model-without-nested-wrapper-initialization">How to load Unbabel Comet model without nested wrapper initialization?</a>, there's a <code>load_from_checkpoint</code> wrapper around the model and the actual class object that makes use of the model. Also, there's no clear instruction of how to use a locally saved <code>Unbabel/unite-mup</code> model.</p> <p><strong>Is there some way to use locally saved United MUP model in Unbabel-Comet model for Machine Translation Evaluation?</strong></p>
<p>First ensure that you have the required <code>unbabel-comet</code> version to support the model,</p> <pre><code>pip install unbabel-comet&gt;=2.0.1 </code></pre> <p>Then</p> <pre><code>import os from huggingface_hub import snapshot_download from comet.models.multitask.unified_metric import UnifiedMetric model_path = snapshot_download(repo_id=&quot;Unbabel/unite-mup&quot;, cache_dir=os.path.abspath(os.path.dirname('.'))) model_checkpoint_path = f&quot;{model_path}/checkpoints/model.ckpt&quot; unite = UnifiedMetric.load_from_checkpoint(model_checkpoint_path) </code></pre> <p>Then the same usage code as documented on <a href="https://huggingface.co/Unbabel/unite-mup" rel="nofollow noreferrer">https://huggingface.co/Unbabel/unite-mup</a> works:</p> <pre><code>data = [ { &quot;src&quot;: &quot;这是个句子。&quot;, &quot;mt&quot;: &quot;This is a sentence.&quot;, &quot;ref&quot;: &quot;It is a sentence.&quot; }, { &quot;src&quot;: &quot;这是另一个句子。&quot;, &quot;mt&quot;: &quot;This is another sentence.&quot;, &quot;ref&quot;: &quot;It is another sentence.&quot; } ] model_output = unite.predict(data, batch_size=8, gpus=1) # Expected SRC score: # [0.3474583327770233, 0.4492775797843933] print (model_output.metadata.src_scores) # Expected REF score: # [0.9252626895904541, 0.899452269077301] print (model_output.metadata.ref_scores) # Expected UNIFIED score: # [0.8758717179298401, 0.8294666409492493] print (model_output.metadata.unified_scores) </code></pre> <p>Working example: <a href="https://colab.research.google.com/drive/1ggj_sC4dzwpjaOjv07GZq9t0bSnhAvFi?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1ggj_sC4dzwpjaOjv07GZq9t0bSnhAvFi?usp=sharing</a></p>
1,347
machine translation
Machine translation transformer output - &quot;unknown&quot; tokens?
https://stackoverflow.com/questions/69595863/machine-translation-transformer-output-unknown-tokens
<p>When decoding / translating a test dataset after training on the base Transformer model (Vaswani et al.), I sometimes see this token &quot;unk&quot; in the output.</p> <p>&quot;unk&quot; here refers to an unknown token, but my question is: what is the reasoning behind that? Based on <a href="https://nlp.stanford.edu/pubs/acl15_nmt.pdf" rel="nofollow noreferrer">https://nlp.stanford.edu/pubs/acl15_nmt.pdf</a>, does it mean that the vocab I built for the training set does not contain the words present in the test set?</p> <p>For reference, I built the <code>Vocab</code> using <code>Spacy</code> <code>en_core_web_sm</code> and <code>de_core_news_sm</code> for a German to English translation task.</p> <p>Example output:</p> <pre><code>ground truth = ['a', 'girl', 'in', 'a', 'jean', 'dress', 'is', 'walking', 'along', 'a', 'raised', 'balance', 'beam', '.'] predicted = ['a', 'girl', 'in', 'a', '&lt;unk&gt;', 'costume', 'is', 'jumping', 'on', 'a', 'clothesline', '.', '&lt;eos&gt;'] </code></pre> <p>As you can see, <em>jean</em> is &quot;unk&quot; here.</p>
<p>Neural machine translation models have a limited vocabulary. The reason is that you get the distribution over the target vocabulary tokens by multiplying the hidden state of the decoder by a matrix that has one row for each vocabulary token. The paper that you mention uses a hidden state of 1000 dimensions. If you wanted to cover English reasonably, you would need a vocabulary of at least 200k tokens, which would mean 800MB only for this matrix.</p> <p>The paper that you mention is an outdated solution from 2015 that tries to make the vocabulary as big as possible. However, increasing the vocabulary capacity did not turn out to be the best solution: with increasing vocabulary size, you add rarer and rarer words to the vocabulary, there is less and less training signal for the embeddings of these words, and the model eventually does not learn to use those words properly.</p> <p>State-of-the-art machine translation uses a segmentation into subwords that was <a href="https://aclanthology.org/P16-1162/" rel="nofollow noreferrer">introduced in 2016</a> with the BPE algorithm. In parallel, Google came up with an alternative solution named WordPiece for their <a href="https://arxiv.org/abs/1609.08144" rel="nofollow noreferrer">first production neural machine translation system</a>. Later, Google came up with an improved segmentation algorithm, <a href="https://aclanthology.org/D18-2012" rel="nofollow noreferrer">SentencePiece, in 2018</a>.</p> <p>The main principle of a subword vocabulary is that frequent words remain intact, whereas rarer words get segmented into smaller units. Rare words are often proper names that do not really get translated. For languages with complex morphology, subword segmentation allows the models to learn how to create different forms of the same words.</p>
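To illustrate the last paragraph, here is a toy greedy longest-match segmenter in the WordPiece style. The vocabulary is invented for the example and is not a trained subword model:

```python
def segment(word, vocab, marker="##"):
    """Greedily split a word into the longest subword units found in
    vocab (WordPiece-style lookup; '##' marks word-internal pieces)."""
    pieces, start = [], 0
    while start < len(word):
        # Try the longest remaining substring first.
        for end in range(len(word), start, -1):
            piece = word[start:end]
            cand = piece if start == 0 else marker + piece
            if cand in vocab:
                pieces.append(cand)
                start = end
                break
        else:
            return ["<unk>"]  # nothing in the vocab matches
    return pieces

# Invented toy vocabulary for illustration only.
vocab = {"walk", "##ing", "cloth", "##es", "##line", "jean"}
print(segment("walking", vocab))      # frequent word stays in large pieces
print(segment("clothesline", vocab))  # rarer word is built from subwords
```

With such a segmentation, an out-of-vocabulary surface form like "clothesline" no longer has to collapse into `<unk>`, which is exactly the failure mode described in the question.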
1,348
machine translation
Why is the Multilingual App Toolkit not generating machine translations?
https://stackoverflow.com/questions/46584891/why-is-the-multilingual-app-toolkit-not-generating-machine-translations
<p>My Multilingual App Toolkit is throwing an error when trying to connect to my Azure subscription and generate machine translations.</p> <p>A couple of months ago I successfully set up the new Multilingual App Toolkit to use the Translator Text API in the Azure Marketplace on my Azure account. I did this following the steps in this knowledge base article: <a href="https://cognitive.uservoice.com/knowledgebase/articles/1128340-announcements-action-required-before-april-30-20" rel="nofollow noreferrer">https://cognitive.uservoice.com/knowledgebase/articles/1128340-announcements-action-required-before-april-30-20</a></p> <p>In the last month I used the service and updated my translations. I can confirm the service used to work and that my Azure account and translation service are both active and no changes were made to any keys.</p> <p>Error:<br> <a href="https://i.sstatic.net/7bzAC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7bzAC.png" alt="enter image description here"></a></p>
1,349
machine translation
trouble installing a protocol-buffer known as SentencePiece too, to alleviate the open vocabulary problems in neural machine translation
https://stackoverflow.com/questions/55358694/trouble-installing-a-protocol-buffer-known-as-sentencepiece-too-to-alleviate-th
<p>I'm trying to install SentencePiece (a subword tokenizer that uses protocol buffers) to alleviate the open-vocabulary problem in neural machine translation.</p> <p>I used this command as suggested in the GitHub documentation:</p> <p>sudo apt-get install cmake build-essential pkg-config libgoogle-perftools-dev</p> <p>It gives an error, even though I have CUDA 10.1 installed.</p> <p>Any ideas? <a href="https://i.sstatic.net/C6qVB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C6qVB.png" alt="enter image description here"></a> <a href="https://i.sstatic.net/DOm4g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DOm4g.png" alt="enter image description here"></a></p>
1,350
machine translation
Metrics or evaluation about machine learning translation
https://stackoverflow.com/questions/46187542/metrics-or-evaluation-about-machine-learning-translation
<p>Could you recommend some evaluation methods or metrics for machine translation, for example Japanese to English? If possible, could you point me to some papers about these metrics? I am new to translation. Thanks!</p>
<p>Despite continuous criticism and debate starting with <a href="http://homepages.inf.ed.ac.uk/miles/papers/eacl06.pdf" rel="nofollow noreferrer">this 2006 article</a>, the <strong>BLEU</strong> (<em><strong>B</strong>i<strong>L</strong>ingual <strong>E</strong>valuation <strong>U</strong>nderstudy</em>) score is still the most commonly used metric for machine translation. According to the <a href="https://en.wikipedia.org/wiki/BLEU" rel="nofollow noreferrer">Wikipedia page</a>, </p> <blockquote> <p>BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. BLEU was one of the first metrics to achieve a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics.</p> </blockquote> <p>More specifically, if you want to look at Japanese-English translation, there was a <a href="https://cs224d.stanford.edu/reports/GreensteinEric.pdf" rel="nofollow noreferrer">class project from Stanford CS 224d</a> that translates simple Japanese sentences like 「彼女は敵だった」 into English with neural network techniques and uses BLEU as the evaluation metric. </p> <p>If you want more readings on machine translation, I suggest starting with one of the most influential recent papers, namely <a href="https://arxiv.org/pdf/1409.0473" rel="nofollow noreferrer">Neural machine translation by jointly learning to align and translate</a> by Yoshua Bengio et al. You can also look at the <a href="https://scholar.google.com/scholar?cites=8900239586727494087" rel="nofollow noreferrer">top papers that cite the BLEU critique</a> to get a sense of other commonly used metrics.</p>
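For intuition about what BLEU actually computes, the sentence-level score can be sketched in pure Python. This is a simplified, smoothed, single-reference version; for real evaluations use a standard corpus-level implementation such as sacreBLEU:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty.
    candidate/reference are lists of tokens."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        # Smooth zero matches so the geometric mean stays defined.
        log_prec += math.log(max(overlap, 1e-9) / total)
    brevity = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return brevity * math.exp(log_prec / max_n)

reference = "the cat is on the mat".split()
print(bleu(reference, reference))  # identical hypothesis -> 1.0
print(bleu("the cat sat on a mat".split(), reference))
```

Note how heavily a single substituted word is punished once 3- and 4-gram precisions drop to zero; this is one root of the criticism in the 2006 article above.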
1,351
machine translation
Save load and retrain a tensorflow model for machine translation
https://stackoverflow.com/questions/76412332/save-load-and-retrain-a-tensorflow-model-for-machine-translation
<p>I've been trying train a model for machine translation. It worked pretty fine when I trained it for 10 epochs at a time and tested it. But when I try to train it for 1 epoch at a time, save and load it to continue from where I left earlier it gives some errors.</p> <pre><code>import tensorflow as tf import einops import numpy as np import os import tensorflow as tf import tensorflow_text as tf_text import pathlib from keras.layers import TextVectorization class ShapeChecker(): def __init__(self): self.shapes = {} def __call__(self, tensor, names, broadcast=False): if not tf.executing_eagerly(): return parsed = einops.parse_shape(tensor, names) for name, new_dim in parsed.items(): old_dim = self.shapes.get(name, None) if broadcast and new_dim == 1: continue if old_dim is None: self.shapes[name] = new_dim continue if new_dim != old_dim: raise ValueError(f&quot;Shape mismatch for dimension: '{name}'\n&quot; f&quot; found: {new_dim}\n&quot; f&quot; expected: {old_dim}\n&quot;) class Encoder(tf.keras.layers.Layer): def __init__(self, text_processor, units): super(Encoder, self).__init__() self.text_processor = text_processor self.vocab_size = text_processor.vocabulary_size() self.units = units self.embedding = tf.keras.layers.Embedding(self.vocab_size, units, mask_zero=True) self.rnn = tf.keras.layers.Bidirectional( merge_mode='sum', layer=tf.keras.layers.GRU(units, return_sequences=True, recurrent_initializer='glorot_uniform')) def call(self, x): shape_checker = ShapeChecker() shape_checker(x, 'batch s') x = self.embedding(x) shape_checker(x, 'batch s units') x = self.rnn(x) shape_checker(x, 'batch s units') return x def convert_input(self, texts): texts = tf.convert_to_tensor(texts) if len(texts.shape) == 0: texts = tf.convert_to_tensor(texts)[tf.newaxis] context = self.text_processor(texts).to_tensor() context = self(context) return context class CrossAttention(tf.keras.layers.Layer): def __init__(self, units, **kwargs): super().__init__() self.mha = 
tf.keras.layers.MultiHeadAttention(key_dim=units, num_heads=1, **kwargs) self.layernorm = tf.keras.layers.LayerNormalization() self.add = tf.keras.layers.Add() def call(self, x, context): shape_checker = ShapeChecker() shape_checker(x, 'batch t units') shape_checker(context, 'batch s units') attn_output, attn_scores = self.mha( query=x, value=context, return_attention_scores=True) shape_checker(x, 'batch t units') shape_checker(attn_scores, 'batch heads t s') attn_scores = tf.reduce_mean(attn_scores, axis=1) shape_checker(attn_scores, 'batch t s') self.last_attention_weights = attn_scores x = self.add([x, attn_output]) x = self.layernorm(x) return x class Decoder(tf.keras.layers.Layer): @classmethod def add_method(cls, fun): setattr(cls, fun.__name__, fun) return fun def __init__(self, text_processor, units): super(Decoder, self).__init__() self.text_processor = text_processor self.vocab_size = text_processor.vocabulary_size() self.word_to_id = tf.keras.layers.StringLookup( vocabulary=text_processor.get_vocabulary(), mask_token='', oov_token='[UNK]') self.id_to_word = tf.keras.layers.StringLookup( vocabulary=text_processor.get_vocabulary(), mask_token='', oov_token='[UNK]', invert=True) self.start_token = self.word_to_id('[START]') self.end_token = self.word_to_id('[END]') self.units = units self.embedding = tf.keras.layers.Embedding(self.vocab_size, units, mask_zero=True) self.rnn = tf.keras.layers.GRU(units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') self.attention = CrossAttention(units) self.output_layer = tf.keras.layers.Dense(self.vocab_size) @Decoder.add_method def call(self, context, x, state=None, return_state=False): shape_checker = ShapeChecker() shape_checker(x, 'batch t') shape_checker(context, 'batch s units') x = self.embedding(x) shape_checker(x, 'batch t units') x, state = self.rnn(x, initial_state=state) shape_checker(x, 'batch t units') x = self.attention(x, context) self.last_attention_weights = 
self.attention.last_attention_weights shape_checker(x, 'batch t units') shape_checker(self.last_attention_weights, 'batch t s') logits = self.output_layer(x) shape_checker(logits, 'batch t target_vocab_size') if return_state: return logits, state else: return logits @Decoder.add_method def get_initial_state(self, context): batch_size = tf.shape(context)[0] start_tokens = tf.fill([batch_size, 1], self.start_token) done = tf.zeros([batch_size, 1], dtype=tf.bool) embedded = self.embedding(start_tokens) return start_tokens, done, self.rnn.get_initial_state(embedded)[0] @Decoder.add_method def tokens_to_text(self, tokens): words = self.id_to_word(tokens) result = tf.strings.reduce_join(words, axis=-1, separator=' ') result = tf.strings.regex_replace(result, '^ *\[START\] *', '') result = tf.strings.regex_replace(result, ' *\[END\] *$', '') return result @Decoder.add_method def get_next_token(self, context, next_token, done, state, temperature=0.0): logits, state = self( context, next_token, state=state, return_state=True) if temperature == 0.0: next_token = tf.argmax(logits, axis=-1) else: logits = logits[:, -1, :] / temperature next_token = tf.random.categorical(logits, num_samples=1) done = done | (next_token == self.end_token) next_token = tf.where(done, tf.constant(0, dtype=tf.int64), next_token) return next_token, done, state class Translator(tf.keras.Model): @classmethod def add_method(cls, fun): setattr(cls, fun.__name__, fun) return fun def __init__(self, units, context_text_processor, target_text_processor): super().__init__() self.ctp = context_text_processor self.ttp = target_text_processor encoder = Encoder(context_text_processor, units) decoder = Decoder(target_text_processor, units) self.encoder = encoder self.decoder = decoder def call(self, inputs): context, x = inputs context = self.encoder(context) logits = self.decoder(context, x) return logits def get_config(self): return { &quot;context&quot;: self.ctp, &quot;target&quot;: self.ttp } path_to_zip = 
tf.keras.utils.get_file( 'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip', extract=True) path_to_file = pathlib.Path(path_to_zip).parent / 'spa-eng/spa.txt' def load_data(path): text = path.read_text(encoding='utf-8') lines = text.splitlines() pairs = [line.split('\t') for line in lines] context = np.array([context for target, context in pairs]) target = np.array([target for target, context in pairs]) return target, context target_raw, context_raw = load_data(path_to_file) buffer_size = len(context_raw) batch_size = 64 is_train = np.random.uniform(size=(len(target_raw),)) &lt; .8 train_raw = ( tf.data.Dataset .from_tensor_slices((context_raw[is_train], target_raw[is_train])) .shuffle(buffer_size) .batch(batch_size) ) val_raw = ( tf.data.Dataset .from_tensor_slices((context_raw[~is_train], target_raw[~is_train])) .shuffle(buffer_size) .batch(batch_size) ) for example_context_strings, example_target_strings in train_raw.take(1): print() break example_text = tf.constant('¿Todavía está en casa?') def tf_lower_and_split_punctuation(text): # Split accented characters. text = tf_text.normalize_utf8(text, 'NFKD') text = tf.strings.lower(text) # Keep space, a to z, and select punctuation. text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '') # Add spaces around punctuation. text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ') # Strip whitespace. 
text = tf.strings.strip(text) text = tf.strings.join(['[START]', text, '[END]'], separator=' ') return text max_vocab_size = 5000 context_text_processor = TextVectorization( standardize=tf_lower_and_split_punctuation, max_tokens=max_vocab_size, ragged=True ) context_text_processor.adapt(train_raw.map(lambda context, target: context)) target_text_processor = TextVectorization( standardize=tf_lower_and_split_punctuation, max_tokens=max_vocab_size, ragged=True ) target_text_processor.adapt(train_raw.map(lambda context, target: target)) example_tokens = context_text_processor(example_context_strings) context_vocab = np.array(context_text_processor.get_vocabulary()) tokens = context_vocab[example_tokens[0].numpy()] ' '.join(tokens) def process_text(context, target): context = context_text_processor(context).to_tensor() target = target_text_processor(target) targ_in = target[:, :-1].to_tensor() targ_out = target[:, 1:].to_tensor() return (context, targ_in), targ_out train_ds = train_raw.map(process_text, tf.data.AUTOTUNE) val_ds = val_raw.map(process_text, tf.data.AUTOTUNE) for (ex_context_tok, ex_tar_in), ex_tar_out in train_ds.take(1): print() def masked_loss(y_true, y_pred): loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none') loss = loss_fn(y_true, y_pred) mask = tf.cast(y_true != 0, loss.dtype) loss *= mask return tf.reduce_sum(loss) / tf.reduce_sum(mask) def masked_acc(y_true, y_pred): y_pred = tf.argmax(y_pred, axis=-1) y_pred = tf.cast(y_pred, y_true.dtype) match = tf.cast(y_true == y_pred, tf.float32) mask = tf.cast(y_true != 0, tf.float32) return tf.reduce_sum(match) / tf.reduce_sum(mask) UNITS = 256 model = Translator(UNITS, context_text_processor, target_text_processor) model.compile( optimizer='adam', loss=masked_loss, metrics=[masked_acc, masked_loss] ) vocab_size = 1.0 * target_text_processor.vocabulary_size() model_path = &quot;./Saved Model&quot; initial_epoch = 0 os.makedirs(model_path, exist_ok=True) for 
(dir_path, dir_names, filenames) in os.walk(model_path): if len(dir_names) != 0: dir_names.sort() initial_epoch = int(dir_names[-1]) model = tf.keras.models.load_model(os.path.join(model_path, dir_names[-1])) else: model.compile( optimizer='adam', loss=masked_loss, metrics=[masked_acc, masked_loss] ) break history = model.fit( train_ds.repeat(), initial_epoch=initial_epoch, epochs=initial_epoch + 1, steps_per_epoch=100, validation_data=val_ds, validation_steps=20, ) model.save(os.path.join(model_path, f'{initial_epoch + 1:02d}')) </code></pre> <p>This is not even my code. It's given in <a href="https://www.tensorflow.org/text/tutorials/nmt_with_attention" rel="nofollow noreferrer">tensorflow doc</a>. I just modified it to train for multiple epochs separately.</p> <p>When I try to train, first epoch runs smoothly. But while trying to load to train for 2nd epoch, the following error message is shown:</p> <blockquote> <p>RuntimeError: Unable to restore object of class 'TextVectorization'. One of several possible causes could be a missing custom object. Decorate your custom object with <code>@keras.utils.register_keras_serializable</code> and include that file in your program, or pass your class in a <code>keras.utils.CustomObjectScope</code> that wraps this load call.</p> <p>Exception: Error when deserializing class 'TextVectorization' using config={'name': 'text_vectorization', 'trainable': True, 'dtype': 'string', 'batch_input_shape': (None,), 'max_tokens': 5000, 'standardize': 'tf_lower_and_split_punctuation', 'split': 'whitespace', 'ngrams': None, 'output_mode': 'int', 'output_sequence_length': None, 'pad_to_max_tokens': False, 'sparse': False, 'ragged': True, 'vocabulary': None, 'idf_weights': None, 'encoding': 'utf-8', 'vocabulary_size': 5000, 'has_input_vocabulary': False}.</p> <p>Exception encountered: Unkown value for <code>standardize</code> argument of layer TextVectorization. 
If restoring a model and <code>standardize</code> is a custom callable, please ensure the callable is registered as a custom object. See <a href="https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object</a> for details. Allowed values are: <code>None</code>, a <code>Callable</code>, or one of the following values: ('lower_and_strip_punctuation', 'lower', 'strip_punctuation'). Received: tf_lower_and_split_punctuation</p> </blockquote> <p>I've already tried this code for training. It works. But the accuracy of the model drops to ~45%. When I trained for 10 consecutive epochs (without saving and loading) it was ~70%.</p> <pre><code>initial_epoch = 0 checkpoint_path = &quot;./checkpoints/train&quot; ckpt = tf.train.Checkpoint(model=model, optimizer=model.optimizer.get_config()) ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=10) # Restore the latest checkpoint if it exists if ckpt_manager.latest_checkpoint: ckpt.restore(ckpt_manager.latest_checkpoint) print(&quot;Latest checkpoint restored!&quot;) # Update the current_epoch variable initial_epoch = int(ckpt_manager.latest_checkpoint.split(&quot;-&quot;)[-1]) ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=10) history = model.fit( train_ds.repeat(), initial_epoch=initial_epoch, epochs=initial_epoch + 1, steps_per_epoch=100, validation_data=val_ds, validation_steps=20, ) ckpt_manager.save() </code></pre> <p>Why is this happening?</p>
1,352
machine translation
How can i use BERT fo machine Translation?
https://stackoverflow.com/questions/61523829/how-can-i-use-bert-fo-machine-translation
<p>I have a big problem. For my bachelor thesis I have to build a machine translation model with BERT, but I am not getting anywhere right now. Do you know of documentation or a tutorial that can help me here? I have read some papers in that direction, but maybe there is a documentation page or tutorial that can help.</p> <p>Specifically, for my bachelor thesis I have to translate a summary of a text into a title. I hope someone can help me.</p>
<p>BERT is not a machine translation model; BERT is designed to provide a contextual sentence representation that should be useful for various NLP tasks. Although there exist ways in which BERT can be incorporated into machine translation (<a href="https://openreview.net/forum?id=Hyl7ygStwB" rel="noreferrer">https://openreview.net/forum?id=Hyl7ygStwB</a>), it is not an easy problem and there are doubts whether it really pays off.</p> <p>From your question, it seems that you are not really doing machine translation, but automatic summarization. Similarly to machine translation, it can be approached using sequence-to-sequence models, but we do not call it translation in NLP. For sequence-to-sequence modeling, there are different pre-trained models, such as <a href="https://arxiv.org/abs/1910.13461" rel="noreferrer">BART</a> or <a href="https://arxiv.org/abs/1905.02450" rel="noreferrer">MASS</a>. These should be much more useful than BERT.</p> <hr /> <p>Update in September 2022: There are multilingual BERT-like models, the most famous being <a href="https://huggingface.co/bert-base-multilingual-cased" rel="noreferrer">multilingual BERT</a> and <a href="https://huggingface.co/xlm-roberta-base" rel="noreferrer">XLM-RoBERTa</a>. When fine-tuned carefully, they can be used as a universal encoder for machine translation and enable so-called zero-shot machine translation. The model is trained to translate from several source languages into English, but in the end, it can translate from all languages covered by the multilingual BERT-like models. The method is called <a href="https://arxiv.org/abs/2104.08757v1" rel="noreferrer">SixT</a>.</p>
1,353
machine translation
Transformers architecture for machine translation
https://stackoverflow.com/questions/60398105/transformers-architecture-for-machine-translation
<p>I have adapted the base transformer model, for my corpus of aligned Arabic-English sentences. As such the model has trained for 40 epochs and accuracy (SparseCategoricalAccuracy) is improving by a factor of 0.0004 for each epoch. To achieve good results, my estimate is to attain final accuracy anywhere around 0.5 and accuracy after 40 epochs is 0.0592.</p> <p>I am running the model on the tesla 2 p80 GPU. Each epoch is taking ~2690 sec. This implies I need at least 600 epochs and training time would be 15-18 days. Should I continue with the training or is there something wrong in the procedure as the base transformer in the research paper was trained on an ENGLISH-FRENCH corpus?</p> <p>Key highlights:</p> <ol> <li>Byte-pair(encoding) of sentences</li> <li>Maxlen_len =100</li> <li>batch_size= 64</li> <li>No pre-trained embeddings were used.</li> </ol>
<p>Do you mean a Tesla K80 on an AWS p2.xlarge instance? If that is the case, these GPUs are very slow. You should use p3 instances on AWS with V100 GPUs; you will get around a 6-7x speedup. Check out <a href="https://avp-project.uk/blog-gpu-box-vs-cloud-for-deep-learning" rel="nofollow noreferrer">this post</a> for more details.</p> <p>Also, if you are not using the standard model and have made some changes to the model or dataset, try tuning the hyperparameters. The simplest option is to decrease the learning rate and see if you get better results.</p> <p>Also, first run the standard model with the standard dataset to benchmark the time taken in that case, then make your changes and proceed. See when the model starts converging in the standard case. I feel that it should give some results after 40 epochs too.</p>
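One caveat on the learning-rate advice: the base Transformer does not use a fixed rate at all but a warmup-then-inverse-square-root schedule. A sketch of that schedule (formula from the original paper; `d_model=512` and `warmup_steps=4000` are the base-model defaults):

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5),
    i.e. a linear warmup followed by inverse-square-root decay."""
    step = max(step, 1)  # avoid 0 ** -0.5 on the very first step
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate rises to its peak at warmup_steps and decays afterwards.
for s in (100, 1000, 4000, 40000, 400000):
    print(s, transformer_lr(s))
```

If a fixed Adam learning rate was substituted for this schedule in the adapted model, very slow convergence over the first tens of epochs would not be surprising.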
1,354
machine translation
Seq2Seq Neural Machine Translation step for aligning right to left languages with English (Or any LTR language)
https://stackoverflow.com/questions/76183491/seq2seq-neural-machine-translation-step-for-aligning-right-to-left-languages-wit
<p>I've so far worked with left-to-right languages and NLTK worked fine for tokenization. But while working on a research paper focused on several languages including RTL languages, the normal procedure has been giving me completely inaccurate translations. Could anyone please let me know what the norm is in neural machine translation when working with languages like Persian or Hebrew?</p> <p>I've tried following the steps mentioned in <a href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb?hl=ja#scrollTo=chTF5N885F0P" rel="nofollow noreferrer">nmt with attention</a>, where I changed the two regexes to fit the Farsi and Urdu scripts along with other languages and separated the punctuation,</p> <pre><code>def lowerSplitPunct(text): # Split accented characters. text = tf_text.normalize_utf8(text, 'NFKC') text = tf.strings.lower(text) # Keep space, a to z, and select punctuation. text = tf.strings.regex_replace(text, '[^\u0600-\u06FF\uFB8A\u067E\u0686\u06AF\u200C\u200F\u0980-\u09FFa-z۔؟،«»।ا.?!,]', '') # Add spaces around punctuation. text = tf.strings.regex_replace(text, '[۔؟،«»ا।.?!,]', r' \0 ') # Strip whitespace. text = tf.strings.strip(text) text = tf.strings.join(['[START]', text, '[END]'], separator=' ') return text </code></pre> <p>and it still doesn't solve my problem.</p>
1,355
machine translation
Statistical Machine Translation from Hindi to English using MOSES
https://stackoverflow.com/questions/27669446/statistical-machine-translation-from-hindi-to-english-using-moses
<p>I need to create a Hindi to English translation system using MOSES. I have got a parallel corpora containing about 10000 Hindi sentences and corresponding English translations. I followed the method described in the <a href="http://www.statmt.org/moses/?n=Moses.Baseline" rel="nofollow">Baseline system creation page</a>. But, just in the first stage, when I wanted to tokenise my Hindi corpus and tried to execute </p> <pre><code>~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l hi &lt; ~/corpus/training/hi-en.hi&gt; ~/corpus/hi-en.tok.hi </code></pre> <p>, the tokeniser gave me the following output:</p> <pre><code>Tokenizer Version 1.1 Language: hi Number of threads: 1 WARNING: No known abbreviations for language 'hi', attempting fall-back to English version... </code></pre> <p>I even tried with <code>'hin'</code> but it still didn't recognise the language. Can anyone tell the correct way to make the translation system.</p>
<p>Moses does not support Hindi for tokenization; <code>tokenizer.perl</code> uses the <code>nonbreaking_prefix.*</code> files (from <a href="https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl#L516" rel="noreferrer">https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl#L516</a>).</p> <p>The languages available with nonbreaking prefixes in Moses are:</p> <ul> <li>ca: Catalan</li> <li>cs: Czech</li> <li>de: German</li> <li>el: Greek</li> <li>en: English</li> <li>es: Spanish</li> <li>fi: Finnish</li> <li>fr: French</li> <li>hu: Hungarian</li> <li>is: Icelandic</li> <li>it: Italian</li> <li>lv: Latvian</li> <li>nl: Dutch</li> <li>pl: Polish</li> <li>pt: Portuguese</li> <li>ro: Romanian</li> <li>ru: Russian</li> <li>sk: Slovak</li> <li>sl: Slovene</li> <li>sv: Swedish</li> <li>ta: Tamil</li> </ul> <p>from <a href="https://github.com/moses-smt/mosesdecoder/tree/master/scripts/share/nonbreaking_prefixes" rel="noreferrer">https://github.com/moses-smt/mosesdecoder/tree/master/scripts/share/nonbreaking_prefixes</a></p> <hr> <p>However, all hope is not lost: you can tokenize your text with other tokenizers before training a machine translation model with Moses. Try Googling "Hindi tokenizers"; there are tonnes of them around.</p>
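As a minimal stopgap before reaching for a dedicated tool, Devanagari text can be tokenized with a short regex that keeps the danda (।, the Devanagari full stop) and other punctuation as separate tokens. This is only a sketch, not a replacement for a proper Indic tokenizer such as the Indic NLP Library:

```python
import re

# Devanagari letters/signs (U+0900-U+097F) minus the danda (U+0964) and
# double danda (U+0965), which are emitted as standalone tokens instead.
TOKEN = re.compile(r"[\u0900-\u0963\u0966-\u097F]+|[\u0964\u0965]|[^\s\u0900-\u097F]+")

def tokenize_hi(text):
    """Split Hindi text into Devanagari words, dandas, and other symbols."""
    return TOKEN.findall(text)

print(tokenize_hi("मैं घर जा रहा हूँ।"))
```

The tokenized output can then be fed to Moses training in place of `tokenizer.perl`'s output.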
1,356
machine translation
Remove &quot;Machine Translated by Google&quot;
https://stackoverflow.com/questions/67645265/remove-machine-translated-by-google
<p>I am using a new Google's Document Translation API that is still in preview. After it translate the document, translated document have <code>Machine Translated by Google</code> text at the top of each page. How can I disable adding that text?</p>
<p>It's not possible to do this natively via Google Cloud, according to Google Technical Support. While you could theoretically do post-processing, you might run afoul of the requirements listed in <a href="https://cloud.google.com/translate/attribution" rel="noreferrer">Google's Attribution Requirements</a>. For example:</p> <blockquote> <p>Whenever you display translation results from Google Translate directly to users, you must make it clear to users that they are viewing automatic translations from Google Translate using the appropriate text or brand elements.</p> </blockquote>
1,357
machine translation
Best evaluation method for real-time machine translation?
https://stackoverflow.com/questions/43943372/best-evaluation-method-for-real-time-machine-translation
<p>I'm aware that there are many different methods like BLEU, NIST, METEOR etc. They all have their pros and cons, and their effectiveness differs from corpus to corpus. I'm interested in real-time translation, so that two people could have a conversation by typing out a couple sentences at a time and having it immediately translated.</p> <p>What kind of corpus would this count as? Would the text be considered too short for proper evaluation by most conventional methods? Would the fact that the speaker is constantly switching make the context more difficult?</p>
<p>What you are asking for, belongs to the domain of <em>Confidence Estimation</em>, nowadays (within the Machine Translation (MT) community) better known as <em>Quality Estimation</em>, i.e. "assigning a score to MT output without access to a reference translation". </p> <p>For MT evaluation (using BLEU, NIST or METEOR) you need:</p> <ol> <li>A hypothesis translation (MT output)</li> <li>A reference translation (from a test set)</li> </ol> <p>In your case (real-time translation), you do not have (2). So you will have to estimate the performance of your system, based on features of your source sentence and your hypothesis translation, and on the knowledge you have about the MT process.</p> <p>A baseline system with 17 features is described in:</p> <ul> <li>Specia, L., Turchi, M., Cancedda, N., Dymetman, M., &amp; Cristianini, N. (2009b). Estimating the sentence level quality of machine translation systems. 13th Conference of the European Association for Machine Translation, (pp. 28-37)</li> <li>Which you can find <a href="http://www.mt-archive.info/EAMT-2009-Specia.pdf" rel="nofollow noreferrer">here</a></li> </ul> <p>Quality Estimation is an active research topic. The most recent advances can be followed on the websites of the WMT Conferences. Look for the Quality Estimation shared tasks, for example <a href="http://www.statmt.org/wmt17/quality-estimation-task.html" rel="nofollow noreferrer">http://www.statmt.org/wmt17/quality-estimation-task.html</a></p>
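To make the feature-based idea concrete, here is a toy sketch of a few reference-free surface features in the spirit of the 17-feature baseline from Specia et al. The feature names and exact choices here are my own illustration, not the paper's definitions:

```python
def qe_surface_features(source, hypothesis, vocabulary):
    """Toy quality-estimation features computed without a reference
    translation: sentence lengths, their ratio, and the rate of
    hypothesis tokens outside a known vocabulary."""
    src = source.split()
    hyp = hypothesis.split()
    return {
        "src_len": len(src),
        "hyp_len": len(hyp),
        "len_ratio": len(hyp) / max(len(src), 1),
        "hyp_oov_rate": sum(t not in vocabulary for t in hyp) / max(len(hyp), 1),
    }
```

Features like these would then feed a regressor trained to predict a quality score for each hypothesis.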
1,358
machine translation
losing leading &amp; trailing space when translated using Google Machine Translation
https://stackoverflow.com/questions/2887980/losing-leading-trailing-space-when-translated-using-google-machine-translation
<p>I am using the Google AJAX-based translation API as in the example below.</p> <pre><code>google.load("language", "1"); function initialize() { var text = document.getElementById("text").innerHTML; google.language.detect(text, function(result) { if (!result.error &amp;&amp; result.language) { google.language.translate(text, result.language, "en", function(result) { var translated = document.getElementById("translation"); if (result.translation) { translated.innerHTML = result.translation; } }); } }); } google.setOnLoadCallback(initialize); </code></pre> <p>When I send a string like " how are you? "</p> <p>The translation I get is like "xxx xxx xxxxxxx"; the spaces in the original string are trimmed. How do I prevent this from happening?</p>
<p>Try:</p> <pre><code>function initialize() { var text = document.getElementById("text").innerHTML; var spaceMatch = text.match(/^(\s*).*?(\s*)$/); google.language.detect(text, function(result) { if (!result.error &amp;&amp; result.language) { google.language.translate(text, result.language, "en", function(result) { var translated = document.getElementById("translation"); if (result.translation) { translated.innerHTML = spaceMatch[1] + result.translation + spaceMatch[2]; } }); } }); } </code></pre>
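The same trick works if the translation call happens server-side. Here is a Python sketch of a small wrapper that captures and restores the outer whitespace; the `translate` callable is a stand-in for whichever translation client you actually use:

```python
import re

def preserve_outer_space(translate):
    """Wrap a translation function so that the leading and trailing
    whitespace of the input survives the round trip, since many MT
    APIs trim it from the result."""
    def wrapped(text):
        # Capture leading spaces, the core text, and trailing spaces.
        lead, core, trail = re.match(r"^(\s*)(.*?)(\s*)$", text, re.S).groups()
        return lead + translate(core) + trail
    return wrapped
```

For example, wrapping any `str -> str` translation function with `preserve_outer_space` keeps `"  how are you? "` padded the same way in the output.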
1,359
machine translation
Machine Translation using babelize_shell() in NLTK
https://stackoverflow.com/questions/12267544/machine-translation-using-babelize-shell-in-nltk
<p>Hi I am learning Natural Language processing using NLTK. I am trying to implement babelize_shell() example of the book. What I am doing is executing babelize_shell(), after that I am entering my string, followed by german as stated in the book, followed by run.</p> <p>The error I am getting is:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#148&gt;", line 1, in &lt;module&gt; babelize_shell() File "C:\Python27\lib\site-packages\nltk\misc\babelfish.py", line 175, in babelize_shell for count, new_phrase in enumerate(babelize(phrase, 'english', language)): File "C:\Python27\lib\site-packages\nltk\misc\babelfish.py", line 126, in babelize phrase = translate(phrase, next, flip[next]) File "C:\Python27\lib\site-packages\nltk\misc\babelfish.py", line 106, in translate if not match: raise BabelfishChangedError("Can't recognize translated string.") BabelfishChangedError: Can't recognize translated string. </code></pre> <p>Here's an example session:</p> <pre><code>&gt;&gt;&gt; babelize_shell() NLTK Babelizer: type 'help' for a list of commands. Babel&gt; how long before the next flight to Alice Springs? Babel&gt; german Babel&gt; run 0&gt; how long before the next flight to Alice Springs? 1&gt; wie lang vor dem folgenden Flug zu Alice Springs? 2&gt; how long before the following flight to Alice jump? 3&gt; wie lang vor dem folgenden Flug zu Alice springen Sie? 4&gt; how long before the following flight to Alice do you jump? 5&gt; wie lang, bevor der folgende Flug zu Alice tun, Sie springen? 6&gt; how long, before the following flight to Alice does, do you jump? 7&gt; wie lang bevor der folgende Flug zu Alice tut, tun Sie springen? 8&gt; how long before the following flight to Alice does, do you jump? 9&gt; wie lang, bevor der folgende Flug zu Alice tut, tun Sie springen? 10&gt; how long, before the following flight does to Alice, do do you jump? 11&gt; wie lang bevor der folgende Flug zu Alice tut, Sie tun Sprung? 
12&gt; how long before the following flight does leap to Alice, does you? </code></pre>
<p>I'm having the same problem right now.</p> <p>I've found this: <a href="http://nltk.googlecode.com/svn/trunk/doc/api/nltk.misc.babelfish-module.html">http://nltk.googlecode.com/svn/trunk/doc/api/nltk.misc.babelfish-module.html</a></p> <p>and it says: BabelfishChangedError is thrown when babelfish.yahoo.com changes some detail of their HTML layout, and the babelizer no longer submits data in the correct form, or can no longer parse the results.</p> <p>I'm going to see if there's a way to fix this.</p> <p>The solution I came up with uses the Microsoft Translator web service (SOAP). It's not an easy solution, but it is fun to code.</p> <p>I followed the instructions in <a href="http://msdn.microsoft.com/en-us/library/hh454950">http://msdn.microsoft.com/en-us/library/hh454950</a> and then modified babelfish.py, which is found in nltk/misc/babelfish.py</p> <ol> <li>Subscribe to the Microsoft Translator API on Azure Marketplace</li> </ol> <p>I've chosen the free subscription.</p> <ol start="2"> <li>Register your application with Azure DataMarket</li> </ol> <p>To register your application with Azure DataMarket, visit datamarket.azure.com/developer/applications/ using the LiveID credentials from step 1, and click on “Register”. Write down your client id and your client secret for later use.</p> <ol start="3"> <li><p>Install suds for Python fedorahosted.org/suds/</p></li> <li><p>Modify babelfish.py (use your own client_id and secret):</p></li> </ol> <p>//imports to add</p> <pre><code>from suds.client import Client import httplib import ast ... 
#added function def soaped_babelfish(TextToTranslate,codeLangFrom, codeLangTo): #Oauth credentials params = urllib.urlencode({'client_id': 'babelfish_soaped', 'client_secret': '1IkIG3j0ujiSMkTueCZ46iAY4fB1Nzr+rHBciHDCdxw=', 'scope': 'http://api.microsofttranslator.com', 'grant_type': 'client_credentials'}) headers = {"Content-type": "application/x-www-form-urlencoded"} conn = httplib.HTTPSConnection("datamarket.accesscontrol.windows.net") conn.request("POST", "/v2/OAuth2-13/", params, headers) response = conn.getresponse() #print response.status, response.reason data = response.read() #obtain access_token respondeDict = ast.literal_eval(data) access_token = respondeDict['access_token'] conn.close() #use the webservice with the accesstoken client = Client('http://api.microsofttranslator.com/V2/Soap.svc') result = client.service.Translate('Bearer'+' '+access_token,TextToTranslate,codeLangFrom, codeLangTo, 'text/plain','general') return result ... #modified translate method def translate(phrase, source, target): phrase = clean(phrase) try: source_code = __languages[source] target_code = __languages[target] except KeyError, lang: raise ValueError, "Language %s not available " % lang return clean(soaped_babelfish(phrase,source_code,target_code)) </code></pre> <p>And that's all for the SOAPed version! Some other day I'll try a web only based solution (similar to the current babelfish.py but adapted to the changes)</p>
1,360
machine translation
Tying weights in neural machine translation
https://stackoverflow.com/questions/49299609/tying-weights-in-neural-machine-translation
<p>I want to tie weights of the <code>embedding</code> layer and the <code>next_word</code> prediction layer of the decoder. The embedding dimension is set to 300 and the hidden size of the decoder is set to 600. Vocabulary size of the target language in NMT is 50000, so embedding weight dimension is <code>50000 x 300</code> and weight of the linear layer which predicts the next word is <code>50000 x 600</code>. </p> <p>So, how can I tie them? What will be the best approach to achieve weight tying in this scenario? </p>
<p>You could use a linear layer to project the 600-dimensional decoder state down to 300 dimensions before you apply the shared projection. This way you still get the advantage that the entire embedding (possibly) has a non-zero gradient for each mini-batch, at the risk of slightly increasing the capacity of the network.</p>
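A minimal PyTorch sketch of that suggestion (the class and layer names are my own, and the dimensions are shrunk in the example below for illustration): project the decoder state down to the embedding size, then reuse the transposed embedding matrix as the output projection.

```python
import torch
import torch.nn as nn

class TiedDecoderOutput(nn.Module):
    """Sketch of weight tying with a down-projection: the hid_dim
    decoder state is mapped to the emb_dim embedding space, and the
    embedding matrix doubles as the next-word projection."""
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.down = nn.Linear(hid_dim, emb_dim)

    def forward(self, decoder_state):
        # (batch, hid_dim) -> (batch, emb_dim) -> (batch, vocab_size)
        return self.down(decoder_state) @ self.embedding.weight.t()
```

In the setting from the question this would be instantiated with `vocab_size=50000`, `emb_dim=300`, `hid_dim=600`, so only the extra 600x300 linear layer is added on top of the shared 50000x300 embedding.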
1,361
machine translation
Creating a Neural Machine Translation basics
https://stackoverflow.com/questions/69988135/creating-a-neural-machine-translation-basics
<p>I'm currently working on a project design where I will create a program/model to translate my native dialect into English. Are there any books or other resources you can recommend for creating my project?</p>
<p>On the NLP side of things there's this course: <a href="https://www.youtube.com/watch?v=dIUTsFT2MeQ" rel="nofollow noreferrer">Natural Language Processing with spaCy &amp; Python - Course for Beginners</a> and this older course: <a href="https://www.youtube.com/watch?v=X2vAabgKiuM" rel="nofollow noreferrer">Natural Language Processing (NLP) Tutorial with Python &amp; NLTK</a> on <a href="https://www.freecodecamp.org/" rel="nofollow noreferrer">Free Code Camp</a>, which is generally a good place to start. Their courses provide in-depth explanations of concepts and provide good examples.</p> <p>On the translation side of things, the <a href="https://www.deepl.com/en/translator" rel="nofollow noreferrer">DeepL</a> translator is easy to use in multiple languages and offers a free API. It also offers an incredibly easy-to-use <a href="https://github.com/DeepLcom/deepl-python" rel="nofollow noreferrer">python library</a> if that's the language you intend to use (which you should, because Python is the best out there for NLP).</p> <p>I hope this helps, but <a href="https://stackoverflow.com/users/3607203/dennlinger">dennlinger</a> is right - you shouldn't typically ask broad recommendation questions on StackOverflow!</p>
1,362
machine translation
Machine learning for natural language processing - Custom translation
https://stackoverflow.com/questions/42947128/machine-learning-for-natural-language-processing-custom-translation
<p>Let's say I have the following very simplified training and testing observations. </p> <p><strong>Training</strong></p> <pre><code>input: her favourite dog was a huskey and her favourite cat was a leopard output: dog=huskey, cat=leopard input: her favourite dog was a beagle and her favourite cat was a lion output: dog=beagle, cat=lion input: her favourite dog was a poodle and her favourite cat was a burmese output: dog=poodle, cat=burmese </code></pre> <p><strong>Testing</strong></p> <pre><code>input: her favourite dog was a collie and her favourite cat was a moggie desired output: dog=collie, cat=moggie </code></pre> <ul> <li>What is the best machine learning approach in Python to enable me to have the testing input translated into the desired output? </li> <li>What are the steps involved from taking this raw data to making this prediction?</li> </ul> <p>From some research in the area it seems that a lot of the existing machine learning packages are around classification, regression and clustering (e.g. <a href="http://scikit-learn.org/stable/" rel="nofollow noreferrer">http://scikit-learn.org/stable/</a>), while what I am trying to do is a form of translation.</p> <p>I have also looked into a few NLP packages, and the functionality falls more into keyword identification, word type identification and sentiment analysis (e.g. <a href="http://www.nltk.org/" rel="nofollow noreferrer">http://www.nltk.org/</a>). There are also some translation packages available, but these are for pre-existing languages (<a href="http://pythonhosted.org/goslate/" rel="nofollow noreferrer">http://pythonhosted.org/goslate/</a>)</p> <p>I recognise that for this particular case machine learning is thoroughly unnecessary, however in practice there will be far more complicated, different and numerous inputs to translate.</p>
<p>(1) I would reformulate the problem you are trying to solve as: given some specific <strong>animal A</strong> in <strong>sentence S</strong>, find the best animal <strong>class C</strong>. So given sentence 1: </p> <blockquote> <p>her favourite dog was a huskey and her favourite cat was a leopard</p> </blockquote> <p>and given target animal A = "huskey", you would get C = "dog" as the class; similarly when A = "leopard" you would get C = "cat". </p> <p>(2) From the way you asked your question, I am assuming that you don't want to use an external dictionary or other data (where it would be relatively trivial to find collocations of class C with its associated animal species and to train a supervised classifier). So I assume you are limited to the type of data you mention. I will also assume that the class C is explicitly mentioned in each sentence.</p> <p>(3) Given the data constraints, it seems likely that you will need to use syntactic information in your features. In English, syntax is primarily conveyed through word order, so I would focus on that. It is probably useful to apply a part-of-speech tagger to your data.</p> <p>(4) For each possible target A in sentence S you will create a row of data. Thus sentence #1 has two targets A={husky, leopard}, so there will be two rows in your training data that map to the respective classes, dog and cat.</p> <pre><code>row  Sent.  Target   F1, F2, ... FN  Class
1    1      husky    ...             dog
2    1      leopard  ...             cat
</code></pre> <p>(5) Include the POS of the target as a feature... probably not useful in the example data you provided, but it would be useful for more sophisticated targets, e.g., A = "the big white husky" should map the full noun phrase to C = "dog". Given your data above, the easy solution would be to just find the closest noun to the left of the target.</p> <pre><code>her.d favorite.a dog.n is.v a husky.n and her.d favorite.a cat.n is.v a leopard.n
</code></pre> <p>So you could have features F_LftClosestNoun, F_RtClosestNoun, F_ClosestNoun. Then just train your classifier on the training data and test it on the unseen data. If you provide a more realistic looking sample, perhaps we can identify additional useful features.</p>
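Point (5)'s "closest noun to the left" feature is easy to sketch in Python. The tag set mirrors the toy `.n`/`.v` annotations in the answer; the function name is my own:

```python
def closest_left_noun(tagged_tokens, target):
    """Return the nearest noun-tagged word to the left of the target
    word, or None if there is none; a candidate F_LftClosestNoun
    feature for the classifier described above."""
    words = [word for word, _ in tagged_tokens]
    idx = words.index(target)
    # Scan leftwards from the target for the first noun.
    for word, pos in reversed(tagged_tokens[:idx]):
        if pos == "n":
            return word
    return None
```

Run over all candidate targets in a sentence, this yields one feature column of the training rows sketched in point (4).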
1,363
machine translation
Can I customize the dictionary of a pre-trained transformer neural machine translation model?
https://stackoverflow.com/questions/58346657/can-i-customize-the-dictionary-of-a-pre-trained-transformer-neural-machine-trans
<p>There are many pre-trained machine translations models available, but it seems like they all need to be run with the dictionary they are trained with. The dictionaries sometimes can have less coverage for my data set (even BPE based ones), and sometimes miss important words as unknowns. What are the best ways to customize the pretrained models to a dictionary learned from my own data set? For example, some way to transfer learn, like unfreezing the encoder layers and retraining? </p>
1,364
machine translation
Is it possible to evaluate Machine Translations using Sentence BERT?
https://stackoverflow.com/questions/79381185/is-it-possible-to-evaluate-machine-translations-using-sentence-bert
<p>I'm not referring to <a href="https://arxiv.org/abs/1904.09675" rel="nofollow noreferrer">BERTScore</a>. BERTScore uses token-level word embeddings, you compute pairwise cosine similarity of word embeddings and obtain scores using greedy matching.</p> <p>I'm referring to <a href="https://arxiv.org/abs/1908.10084" rel="nofollow noreferrer">Sentence BERT</a>. I.e., pure cosine similarity to compare semantic similarity of sentences, not precision, recall or f1-measure. <strong>The question is, if we do this on document level, i.e., several sentences, do we then just compute the mean cosine similarity or is this metric not suitable as a machine translation evaluation metric (alternative to BLEU)?</strong></p> <p>Because for individual sentences, it does make sense as it capture semantic similarity. Sentences that mean the same thing but are phrased differently get penalized by BLEU but get rather high values with Sentence BERT, which is exactly what I want. However, I could not find the use of Sentence BERT in recent WMT Shared Metric Task papers, so I assume there is a catch I am missing which explains why people do not use this approach.</p>
<p>Although <a href="https://arxiv.org/abs/1908.10084" rel="nofollow noreferrer">Sentence BERT</a> improves on BLEU's ability to evaluate semantic similarity, it lacks sufficient sensitivity to surface-level errors such as spelling mistakes and word-order issues. Based on the research in this paper (<a href="https://arxiv.org/abs/2109.14250" rel="nofollow noreferrer">Evaluation of Metrics Performance</a>), I think the best evaluation combines BLEU with <a href="https://arxiv.org/abs/1908.10084" rel="nofollow noreferrer">Sentence BERT</a>.</p>
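For what it's worth, the naive document-level aggregation mentioned in the question, mean cosine similarity over aligned sentence pairs, is straightforward to compute. The embeddings below are plain lists of floats standing in for Sentence BERT vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def document_similarity(hyp_embeddings, ref_embeddings):
    """Mean sentence-pair cosine similarity as a crude document-level
    score; this is the naive aggregation discussed in the question."""
    pairs = list(zip(hyp_embeddings, ref_embeddings))
    return sum(cosine(h, r) for h, r in pairs) / len(pairs)
```

Note that the simple mean weights every sentence equally regardless of length, which is one reason this aggregation is cruder than the learned metrics used in the WMT Metrics tasks.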
1,365
machine translation
A Variation on Neural Machine Translation
https://stackoverflow.com/questions/58445247/a-variation-on-neural-machine-translation
<p>I have been processing this thought in my head for a long time now. So in NMT, We pass in the text in the source language in the encoder seq2seq stage and the language in the target language in the decoder seq2seq stage and the system learns the conditional probabilities for each word occurring with its target language word. Ex: P(word x|previous n-words). We train this by teacher forcing.</p> <p>But what if I pass in the input sentence again as input to the decoder stage instead of the target sentence. What would it learn in this case? I'm guessing this will learn to predict the <strong>most probable next word in the sentence</strong> given the previous text right? What are your thoughts</p> <p>Thanks in advance </p>
<p>In that case, you would be learning a model that copies the input symbol to the output. It is trivial for the attention mechanism to learn the identity correspondence between the encoder and decoder states. Moreover, RNNs can easily implement a counter. The model thus won't provide any realistic estimate of the probability; it will assign most of the probability mass to the corresponding word in the source sentence.</p>
1,366
machine translation
Change putty translation to UTF-8 by remote machine
https://stackoverflow.com/questions/23336829/change-putty-translation-to-utf-8-by-remote-machine
<p>How can I change the "<strong>remote character set</strong>" in the Translation settings of PuTTY from the remote machine?</p> <p>It's set to ISO by default, so when I run my application, I get weird characters. It can be changed on the host machine.</p> <p>But can I change the PuTTY terminal settings from the remote machine via some command, so that I can ensure I always get the desired characters?</p>
<p>I fear it is not possible. The character set is controlled by the LC_* environment variables, and PuTTY has no way to get their values.</p> <p>You can associate a "remote character set" with each saved session in PuTTY. That should be sufficient.</p> <p>The best thing is to configure all your environments to UTF-8 (which is the default for all major systems).</p>
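As a small illustration of the point about `LC_*` variables, a remote-side script can at least verify that the session advertises UTF-8, even though it cannot reconfigure PuTTY itself. This is a sketch and the function name is my own:

```python
import os

def session_advertises_utf8(environ=None):
    """Check LC_ALL / LC_CTYPE / LANG, in POSIX precedence order, for a
    UTF-8 locale. A mismatch between these and PuTTY's "Remote
    character set" setting is the usual cause of garbled output."""
    env = os.environ if environ is None else environ
    value = env.get("LC_ALL") or env.get("LC_CTYPE") or env.get("LANG") or ""
    # Normalize the common "utf8" spelling before matching.
    return "utf-8" in value.lower().replace("utf8", "utf-8")
```

If this returns False on the remote machine, setting `LANG` (or `LC_ALL`) to a UTF-8 locale in the shell profile and matching PuTTY's remote character set to UTF-8 usually fixes the weird characters.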
1,367
machine translation
loss is drastically decreasing whereas BLEU score stays at zero during training of the seq2seq RNN for machine translation
https://stackoverflow.com/questions/75139358/loss-is-drastically-decreasing-whereas-bleu-score-stays-at-zero-during-training
<p>I'm trying to train an RNN for machine translation, using LSTM. However, the BLEU score drops to zero at the first batch and stays at that level throughout training. At the same time the loss is drastically decreasing. What may be the problem?</p> <p><strong>The code:</strong></p> <pre><code>class SimpleRNNTranslator(nn.Module): def __init__(self, inp_voc, out_voc, emb_size=64, hid_size=128): &quot;&quot;&quot; My version of simple RNN model, I use LSTM instead of GRU as in the baseline &quot;&quot;&quot; super().__init__() self.inp_voc = inp_voc self.out_voc = out_voc self.emb_inp = nn.Embedding(len(inp_voc), emb_size) self.emb_out = nn.Embedding(len(out_voc), emb_size) self.encoder = nn.LSTM(emb_size, hid_size, batch_first=True) self.decoder = nn.LSTM(emb_size, hid_size, batch_first=True) self.decoder_start = nn.Linear(hid_size, hid_size) self.logits = nn.Linear(hid_size, len(out_voc)) def forward(self, inp, out): &quot;&quot;&quot; Apply model in training mode &quot;&quot;&quot; encoded_seq = self.encode(inp) decoded_seq, _ = self.decode(encoded_seq, out) return self.logits(decoded_seq) def encode(self, seq_in): &quot;&quot;&quot; Take input symbolic sequence, compute initial hidden state for decoder :param seq_in: matrix of input tokens [batch_size, seq_in_len] :return: initial hidden state for the decoder &quot;&quot;&quot; embeddings = self.emb_inp(seq_in) output, (_, __) = self.encoder(embeddings) # last state isn't the actually last because of the padding, the next 2 lines find out the true last state seq_lengths = (seq_in != self.inp_voc.eos_ix).sum(dim=-1) last_states = output[range(seq_lengths.shape[0]), seq_lengths] return self.decoder_start(last_states) def decode(self, hidden_state, seq_out, previous_state=None): &quot;&quot;&quot; Take output symbolic sequence, compute logits for every token in sequence :param hidden_state: matrix of initial_hidden_state [batch_size, hid_size] :param previous_state: matrix of previous state [batch_size, hid_size] :param seq_out: 
matrix of output tokens [batch_size, seq_out_len] :return: logits for every token in sequence [batch_size, seq_len, out_voc] &quot;&quot;&quot; if not torch.is_tensor(previous_state): previous_state = torch.randn(*hidden_state.shape).to(device) embeddings = self.emb_out(seq_out) outputs, (_, cn) = self.decoder(embeddings, (hidden_state[None, :, :], previous_state[None, :, :])) return outputs, cn def inference(self, inp_tokens, max_len): &quot;&quot;&quot; Take initial state and return ids for out words :param initial_state: initial_state for a decoder, produced by encoder with input tokens &quot;&quot;&quot; initial_state = self.encode(inp_tokens) states = [initial_state] outputs = [torch.full([initial_state.shape[0]], self.out_voc.bos_ix, dtype=torch.int, device=device)] cn = None for i in range(100): hidden_state, cn = self.decode(states[-1], outputs[-1][:, None], previous_state=cn) hidden_state, cn = hidden_state.squeeze(), cn.squeeze() outputs.append(self.logits(hidden_state).argmax(dim=-1)) states.append(hidden_state) return torch.stack(outputs, dim=-1), torch.cat(states) def translate_lines(self, lines, max_len=100): &quot;&quot;&quot; Take lines and return translation :param lines: list of lines in Russian &quot;&quot;&quot; inp_tokens = self.inp_voc.to_matrix(lines).to(device) out_ids, states = self.inference(inp_tokens, max_len=max_len) return self.out_voc.to_lines(out_ids.cpu().numpy()), states </code></pre> <p><strong>How I compute BLEU:</strong></p> <pre><code>from nltk.translate.bleu_score import corpus_bleu def compute_bleu(model, inp_lines, out_lines, bpe_sep='@@ ', **flags): &quot;&quot;&quot; Estimates corpora-level BLEU score of model's translations given inp and reference out Note: if you're serious about reporting your results, use https://pypi.org/project/sacrebleu &quot;&quot;&quot; with torch.no_grad(): translations, _ = model.translate_lines(inp_lines, **flags) translations = [line.replace(bpe_sep, '') for line in translations] actual = [line.replace(bpe_sep, '') for line in 
out_lines] return corpus_bleu( [[ref.split()] for ref in actual], [trans.split() for trans in translations], smoothing_function=lambda precisions, **kw: [p + 1.0 / p.denominator for p in precisions] ) * 100 </code></pre> <p>Training, plots of the BLEU score evaluated on the development dataset and the loss: <a href="https://i.sstatic.net/HpIL2.png" rel="nofollow noreferrer">Training, plots of BLEU score evaluated on development dataset and Loss</a></p> <p>I had thought that this problem might be related to how the LSTM works. At first, I didn't pass the cell state between the elements of the sequence, only the hidden state. I fixed this; however, it didn't resolve the issue.</p>
<p>You probably forgot to shift the target sequence when computing the loss.</p> <p>At training time, the decoder sequence needs to be shifted such that the (<em>n</em>-1)-th position predicts the <em>n</em>-th word. For sequence <code>w1 w2 w3 w4</code> with beginning-of-sentence token <code>[BOS]</code> and end-of-sentence token <code>[EOS]</code> it looks like this:</p> <pre><code>BOS w1 w2 w3 w4 ↓ ↓ ↓ ↓ ↓ ▯ → ▯ → ▯ → ▯ → ▯ ↓ ↓ ↓ ↓ ↓ w1 w2 w3 w4 EOS </code></pre> <p>Generally speaking: you feed the decoder with the target sequence <strong>without the last</strong> token and compute the loss with respect to the target sequence <strong>without the first</strong> token.</p> <p>When you don't do this, the decoder looks like this:</p> <pre><code>w1 w2 w3 w4 ↓ ↓ ↓ ↓ ▯ → ▯ → ▯ → ▯ ↓ ↓ ↓ ↓ w1 w2 w3 w4 </code></pre> <p>The model quickly learns to copy the input tokens, and the loss rapidly decreases, but the model does not learn to translate.</p>
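In code, the shift described above is just two slices of the target sequence. This is a schematic sketch on a plain list of token ids; with a batched tensor in a (seq_len, batch) layout the same idea reads `tgt[:-1, :]` for the decoder input and `tgt[1:, :]` for the loss target:

```python
def shift_for_teacher_forcing(target_ids):
    """Split one target sequence into decoder input (drop the last
    token) and loss target (drop the first), so the state at position
    n-1 is trained to predict the token at position n."""
    decoder_input = target_ids[:-1]
    loss_target = target_ids[1:]
    return decoder_input, loss_target
```

With the shift in place, the model can no longer score well by copying its own input, which is exactly the failure mode described in the answer.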
1,368
machine translation
How Can I Optimize Machine Translation Model Training to Overcome GPU Memory Overflow Issues?
https://stackoverflow.com/questions/78645618/how-can-i-optimize-machine-translation-model-training-to-overcome-gpu-memory-ove
<p>I'm trying to train a fairly standard machine translation transformer model using PyTorch. It's based on the &quot;Attention is All You Need&quot; paper. When I ran it on my PC with standard hyperparameters and a batch size of 128 segments (pairs of source and target language sentences), it worked fine but was slow, as expected.</p> <p>Now, I'm running it on an AWS p2.xlarge instance with a Tesla K80 GPU, and the program crashes quickly due to GPU memory overflow. I've tried everything to free up GPU memory, but I've had to reduce the batch size to 8, which is obviously inefficient for learning.</p> <p>Even with a batch size of 8, I occasionally get this error message:</p> <blockquote> <p>File &quot;C:\Projects\MT004.venv\Lib\site-packages\torch\autograd\graph.py&quot;, line 744, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.95 GiB. GPU</p> </blockquote> <p>I've tried both SpaCy's tokenizer and the XLM-R tokenizer. 
With the XLM-R tokenizer, I can only use a batch size of 2, and even then, it sometimes crashes.</p> <p>Here is the code where things crash:</p> <pre><code>def train_epoch(src_train_sent, tgt_train_sent, model, optimizer): model.train() losses = 0 torch.cuda.empty_cache() # Clear cache before forward pass train_dataloader = SrcTgtIterable(src_train_sent, tgt_train_sent, batch_size=BATCH_SIZE, collate_fn=collate_fn) for src, tgt in train_dataloader: src = src.to(DEVICE) tgt = tgt.to(DEVICE) tgt_input = tgt[:-1, :] src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input) logits = model(src, tgt_input, src_mask, tgt_mask, src_padding_mask, tgt_padding_mask, src_padding_mask) optimizer.zero_grad() tgt_out = tgt[1:, :].long() loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1)) # Delete unnecessary variables before backward pass del src, tgt_input, src_mask, tgt_mask, src_padding_mask, tgt_padding_mask, logits, tgt_out torch.cuda.empty_cache() # Clear cache after deleting variables loss.backward() optimizer.step() losses += loss.item() # Free GPU memory del loss torch.cuda.empty_cache() # Clear cache after each batch </code></pre> <p>Things crash on <code>loss.backward()</code></p> <p>Unfortunately, I cannot use a bigger server since I don't have enough quota on EC2.</p> <p>Any idea what I might be doing wrong? Any suggestions on how to optimize things?</p>
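One common mitigation for exactly this situation (not from this thread, and simulated below with a toy optimizer rather than real PyTorch) is gradient accumulation: keep the per-step micro-batch small enough for the K80's memory, scale each loss by `1/accum_steps`, and call `optimizer.step()` only every `accum_steps` micro-batches so the effective batch size stays at 128:

```python
class ToyOptimizer:
    """Stand-in for a real optimizer; it just counts step() calls."""
    def __init__(self):
        self.steps = 0
    def step(self):
        self.steps += 1
    def zero_grad(self):
        pass

def train_with_accumulation(micro_batch_losses, optimizer, accum_steps):
    """Simulate gradient accumulation: each loss is scaled by
    1/accum_steps (this is where backward() would run in real code),
    and the optimizer steps once per accum_steps micro-batches."""
    optimizer.zero_grad()
    for i, loss in enumerate(micro_batch_losses, start=1):
        scaled_loss = loss / accum_steps  # scaled_loss.backward() in real code
        if i % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
    return optimizer.steps
```

For example, 32 micro-batches of size 8 with `accum_steps=16` give an effective batch size of 128 while only ever holding one micro-batch's activations on the GPU. Other levers worth trying are mixed-precision training and gradient checkpointing, both of which trade compute for memory.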
1,369
machine translation
Tensorflow Neural Machine Translation Example - Loss Function
https://stackoverflow.com/questions/65028889/tensorflow-neural-machine-translation-example-loss-function
<p>I'm stepping through the code here: <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/nmt_with_attention</a> as a learning method and I am confused as to when the loss function is called and what is passed. I added two print statements in the loss_function and when the training loop runs, it only prints out</p> <p>(64,) (64, 4935)</p> <p>at the very start multiple times and then nothing again. I am confused on two fronts:</p> <ol> <li>Why doesn't the loss_function() get called repeatedly through the training loop and print the shapes? I expected that the loss function would get called at the end of each batch, which is of size 64.</li> <li>I expected the shapes of the actuals to be (batch size, time steps) and the predictions to be (batch size, time steps, vocabulary size). It looks like the loss gets called separately for every time step (64 is the batch size and 4935 is the vocabulary size).</li> </ol> <p>The relevant bits I believe are reproduced below.</p> <pre><code> optimizer = tf.keras.optimizers.Adam() loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none') def loss_function(real, pred): mask = tf.math.logical_not(tf.math.equal(real, 0)) print(real.shape) print(pred.shape) loss_ = loss_object(real, pred) mask = tf.cast(mask, dtype=loss_.dtype) loss_ *= mask #set padding entries to zero loss return tf.reduce_mean(loss_) @tf.function def train_step(inp, targ, enc_hidden): loss = 0 with tf.GradientTape() as tape: enc_output, enc_hidden = encoder(inp, enc_hidden) dec_hidden = enc_hidden dec_input = tf.expand_dims([targ_lang.word_index['&lt;start&gt;']] * BATCH_SIZE, 1) # Teacher forcing - feeding the target as the next input for t in range(1, targ.shape[1]): # passing enc_output to the decoder predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output) print(targ[:, t]) print(predictions) loss += loss_function(targ[:, t], 
predictions) # using teacher forcing dec_input = tf.expand_dims(targ[:, t], 1) batch_loss = (loss / int(targ.shape[1])) variables = encoder.trainable_variables + decoder.trainable_variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables)) return batch_loss EPOCHS = 10 for epoch in range(EPOCHS): start = time.time() enc_hidden = encoder.initialize_hidden_state() total_loss = 0 for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)): #print(batch) batch_loss = train_step(inp, targ, enc_hidden) total_loss += batch_loss if batch % 100 == 0: print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss.numpy())) # saving (checkpoint) the model every 2 epochs if (epoch + 1) % 2 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / steps_per_epoch)) print('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) </code></pre>
<p>The loss is treated similarly to the rest of the graph. In tensorflow, calls like tf.keras.layers.Dense and tf.nn.conv2d don't actually do the operation, but instead they define the graph for the operations. I have another post here <a href="https://stackoverflow.com/questions/44210561/how-do-backpropagation-works-in-tensorflow/44212699#44212699">How do backpropagation works in tensorflow</a> that explains the backprop and some motivation of why this is.</p> <p>The loss function you have above is</p> <pre><code>def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    print(real.shape)
    print(pred.shape)
    loss_ = loss_object(real, pred)
    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask  # set padding entries to zero loss
    result = tf.reduce_mean(loss_)
    return result
</code></pre> <p>Think of this function as a generator that returns <code>result</code>. <code>result</code> defines the graph to compute the loss. Perhaps a better name for this function would be <code>loss_function_graph_creator</code> ... but that's another story.</p> <p><code>result</code>, which is a graph that contains weights, bias, and information about how to do both the forward propagation and the back propagation, is all model.fit needs. It no longer needs this function and it doesn't need to run the function every loop.</p> <p>Truly, what is happening under the covers is that given your model (called <code>my_model</code>), the compile line</p> <pre><code>model.compile(loss=loss_function, optimizer='sgd')
</code></pre> <p>is effectively the following lines</p> <pre><code>input = tf.keras.Input()
output = my_model(input)
loss = loss_function(input, output)
opt = tf.keras.optimizers.SGD()
gradient = opt.minimize(loss)
get_gradient_model = tf.keras.Model(input, gradient)
</code></pre> <p>and there you have the gradient operation, which can be used in a loop to get the gradients, which is conceptually what model.fit does.</p> <h1>Q and A</h1> <ul> <li>Is the fact that this function: <code>@tf.function def train_step(inp, targ, enc_hidden):</code> has the <code>tf.function</code> decorator (and the loss function is called in it) what makes this code run as you describe and not normal python?</li> </ul> <p>No. It is not 'normal' python. It only defines the flow of tensors through the graph of matrix operations that will (hopefully) run on your GPU. All the tensorflow operations just set up the operations on the GPU (or a simulated GPU if you don't have one).</p> <ul> <li>How can I tell the actual shapes being passed into loss_function (the second part of my question)?</li> </ul> <p>No problem at all... simply run this code</p> <pre><code>loss_function(y, y).shape
</code></pre> <p>This will compute the loss function of your expected output compared exactly to the same output. The loss will (hopefully) be zero, but actually calculating the value of the loss wasn't the point. You want the shape and this will give it to you.</p>
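<p>The masking step discussed above (zeroing out the loss at padding positions before averaging) can also be checked in isolation. Below is a minimal NumPy sketch — not the tutorial's TensorFlow code, just the same arithmetic on plain arrays:</p>

```python
import numpy as np

def masked_mean_loss(real, per_token_loss):
    """Zero out loss at padding positions (token id 0), then average.

    real: (batch,) int token ids for one time step
    per_token_loss: (batch,) float per-example loss values
    """
    mask = (real != 0).astype(per_token_loss.dtype)
    return float(np.mean(per_token_loss * mask))

real = np.array([5, 3, 0, 7])            # third entry is padding
losses = np.array([2.0, 4.0, 9.0, 1.0])  # the 9.0 at the pad position is ignored
print(masked_mean_loss(real, losses))    # (2 + 4 + 0 + 1) / 4 = 1.75
```

<p>Note that, like the tutorial's <code>tf.reduce_mean</code>, this divides by the full batch size including padded entries.</p>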
1,370
machine translation
transformers: how to use hugging face EncoderDecoderModel to do machine translation task?
https://stackoverflow.com/questions/65167131/transformers-how-to-use-hugging-face-encoderdecodermodel-to-do-machine-translat
<p>I have trained an EncoderDecoderModel from Hugging Face to do English-German translation. I tried to overfit a small dataset (100 parallel sentences), and use <code>model.generate()</code> then <code>tokenizer.decode()</code> to perform the translation. However, the output seems to be proper German sentences, but it is definitely not the correct translation.</p> <p>Here is the code for building the model</p> <pre><code>encoder_config = BertConfig()
decoder_config = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = EncoderDecoderModel(config=config)
</code></pre> <p>Here is the code for testing the model</p> <pre><code>model.eval()
input_ids = torch.tensor(tokenizer.encode(input_text)).unsqueeze(0)
output_ids = model.generate(input_ids.to('cuda'), decoder_start_token_id=model.config.decoder.pad_token_id)
output_text = tokenizer.decode(output_ids[0])
</code></pre> <p>Example input: &quot;iron cement is a ready for use paste which is laid as a fillet by putty knife or finger in the mould edges ( corners ) of the steel ingot mould .&quot;</p> <p>Ground truth translation: &quot;iron cement ist eine gebrauchs ##AT##-##AT## fertige Paste , die mit einem Spachtel oder den Fingern als Hohlkehle in die Formecken ( Winkel ) der Stahlguss -Kokille aufgetragen wird .&quot;</p> <p>What the model outputs after being trained for 100 epochs: &quot;[S] wenn sie den unten stehenden link anklicken, sehen sie ein video uber die erstellung ansprechender illustrationen in quarkxpress&quot;, which is total nonsense.</p> <p>Where is the problem?</p>
1,371
machine translation
Moses machine translation - using Moses with Anymalign
https://stackoverflow.com/questions/36072799/moses-machine-translation-using-moses-with-anymalign
<p>Does anyone know how to replace GIZA++ in Moses with Anymalign, which is obtained from <a href="https://anymalign.limsi.fr/" rel="nofollow">here</a>?</p> <p>In fact, there are 9 <a href="http://www.statmt.org/moses/?n=FactoredTraining.HomePage" rel="nofollow">steps</a> to using Moses. I want to start at step 4 without passing through steps 2 and 3, but it seems to be impossible not to use GIZA++. Does anyone have a clue?</p>
<p>In the moses <a href="http://www.statmt.org/moses/manual/manual.pdf" rel="nofollow noreferrer">manual</a> on page 351 in the section <strong>8.3 Reference: All Training Parameters</strong> there is described parameter <code>--first-step</code> <em>-- first step in the training process (default 1)</em>, so you can use <code>train-model.perl ... --first-step 4</code> to start training from step 4</p>
1,372
machine translation
Translation example not working anymore in Node-Red and Bluemix
https://stackoverflow.com/questions/31380846/translation-example-not-working-anymore-in-node-red-and-bluemix
<p>Node-Red on bluemix provides a Watson Machine Translation node. Bluemix recently changed the translation APIs that this uses, releasing a new Watson Language Translation API. (see <a href="https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/language-translation/migrating.shtml" rel="nofollow">https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/language-translation/migrating.shtml</a> for details)</p> <p>I think this is the reason I used to get an error message in Node-Red saying that the Machine translation could not be binded.</p> <p>Could you please help?</p>
<p><s>We're currently working on updates to support the API changes in the nodes. Hopefully, we'll have this working by the end of the week.</s></p> <p>This issue has now been resolved. Please update your source and try again...</p>
1,373
machine translation
django compiling translation string in different machines
https://stackoverflow.com/questions/23384847/django-compiling-translation-string-in-different-machines
<p>On my system I have changed msgstr in <code>django.po</code> and compiled it, and I get the translation as expected. Now my question is: should I run <code>manage.py compilemessages</code> on the production machine as well, or check in the binary file (<code>django.mo</code>) so that when it is checked out on the prod machine the compilation step can be skipped? What is the standard way to go about this?</p>
<p>Typically, <a href="https://github.com/github/gitignore/blob/master/Python.gitignore" rel="nofollow">.mo files are git-ignored</a>. This means it makes sense to re-compilemessages after you checkout the latest revision.</p> <p>EDIT:</p> <p>The procedure I use is the following: .po are part of git rep, .mo are git-ignored.</p> <p>In development:</p> <ol> <li>change the code (that potentially has translation strings),</li> <li>run <code>python manage.py makemessages -l [language]</code></li> <li>edit the <code>.po</code></li> <li>(optional, to check possible mistakes: run <code>python manage.py compilemessages -l [language]</code> and runserver to test translations)</li> <li>commit the changes of both the code and the .po</li> <li>push to master</li> </ol> <p>In production machine:</p> <ol> <li>pull from origin</li> <li><code>python manage.py compilemessages -l [language]</code></li> </ol> <p>At this point, .mo is created on the production from the .po you pulled from the master.</p> <p>One way to automate this is to use <a href="http://www.fabfile.org/" rel="nofollow">Fabric</a>, a python lib for writing automated scripts for deploying code (e.g. Django) on servers.</p>
1,374
machine translation
Embedding layer in neural machine translation with attention
https://stackoverflow.com/questions/64675228/embedding-layer-in-neural-machine-translation-with-attention
<p>I am trying to understand how to implement a seq-to-seq model with attention from this <a href="https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html?highlight=attention" rel="nofollow noreferrer">website</a>.</p> <p>My question: does nn.Embedding just return some ID for each word, so that the embedding for each word stays the same during the whole training? Or do the embeddings get changed during training?</p> <p>My second question is whether, after training, the output of nn.Embedding is something like word2vec word embeddings or not.</p> <p>Thanks in advance</p>
<p>According to the <a href="https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html" rel="nofollow noreferrer">PyTorch docs</a>:</p> <blockquote> <p>A simple lookup table that stores embeddings of a fixed dictionary and size.</p> <p>This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.</p> </blockquote> <p>In short, <code>nn.Embedding</code> embeds a sequence of vocabulary indices into a new embedding space. You can indeed roughly understand this as a word2vec style mechanism.</p> <p>As a dummy example, let's create an embedding layer that takes as input a total of 10 vocabularies (i.e. the input data only contains a total of 10 unique tokens), and returns embedded word vectors living in 5-dimensional space. In other words, each word is represented as a 5-dimensional vector. The dummy data is a sequence of 3 words with indices 1, 2, and 3, in that order.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; embedding = nn.Embedding(10, 5)
&gt;&gt;&gt; embedding(torch.tensor([1, 2, 3]))
tensor([[-0.7077, -1.0708, -0.9729,  0.5726,  1.0309],
        [ 0.2056, -1.3278,  0.6368, -1.9261,  1.0972],
        [ 0.8409, -0.5524, -0.1357,  0.6838,  3.0991]],
       grad_fn=&lt;EmbeddingBackward&gt;)
</code></pre> <p>You can see that each of the three words is now represented as a 5-dimensional vector. We also see that there is a <code>grad_fn</code> function, which means that the weights of this layer will be adjusted through backprop. This answers your question of whether embedding layers are trainable: the answer is yes. And indeed this is the whole point of embedding: we expect the embedding layer to learn meaningful representations, the famous example of <code>king - man = queen</code> being the classic example of what these embedding layers can learn.</p> <hr /> <p><strong>Edit</strong></p> <p>The embedding layer is, as the documentation states, a simple lookup table from a matrix. You can see this by doing</p> <pre><code>&gt;&gt;&gt; embedding.weight
Parameter containing:
tensor([[-1.1728, -0.1023,  0.2489, -1.6098,  1.0426],
        [-0.7077, -1.0708, -0.9729,  0.5726,  1.0309],
        [ 0.2056, -1.3278,  0.6368, -1.9261,  1.0972],
        [ 0.8409, -0.5524, -0.1357,  0.6838,  3.0991],
        [-0.4569, -1.9014, -0.0758, -0.6069, -1.2985],
        [ 0.4545,  0.3246, -0.7277,  0.7236, -0.8096],
        [ 1.2569,  1.2437, -1.0229, -0.2101, -0.2963],
        [-0.3394, -0.8099,  1.4016, -0.8018,  0.0156],
        [ 0.3253, -0.1863,  0.5746, -0.0672,  0.7865],
        [ 0.0176,  0.7090, -0.7630, -0.6564,  1.5690]],
       requires_grad=True)
</code></pre> <p>You will see that the first, second, and third rows of this matrix correspond to the result that was returned in the example above. In other words, for a vocabulary whose index is <code>n</code>, the embedding layer will simply &quot;lookup&quot; the <code>n</code>th row in its weights matrix and return that row vector; hence the lookup table.</p>
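<p>To make the &quot;lookup table&quot; point concrete without PyTorch, here is a plain NumPy sketch of what the forward pass of such a layer does (the real <code>nn.Embedding</code> additionally makes the weight matrix trainable):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 5
# analogue of embedding.weight; in a real model these values are learned
weight = rng.normal(size=(vocab_size, embed_dim))

def embed(indices):
    """Index n simply selects the n-th row of the weight matrix."""
    return weight[indices]

out = embed(np.array([1, 2, 3]))
print(out.shape)  # (3, 5): three tokens, each a 5-dimensional vector
assert np.array_equal(out[1], weight[2])
```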
1,375
machine translation
Bahdanaus attention in Neural machine translation with attention
https://stackoverflow.com/questions/63268582/bahdanaus-attention-in-neural-machine-translation-with-attention
<p>I am trying to understand Bahdanau's attention using the following tutorial: <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/nmt_with_attention</a></p> <p>The calculation is the following:</p> <pre><code>self.attention_units = attention_units
self.W1 = Dense(self.attention_units)
self.W2 = Dense(self.attention_units)
self.V = Dense(1)

score = self.V(tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)))
</code></pre> <p>I have two problems:</p> <ol> <li><p>I cannot understand why the shape of <code>tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))</code> is (batch_size,max_len,attention_units) ?</p> <p>Using the rules of matrix multiplication I got the following results:</p> <p>a) Shape of self.W1(last_inp_dec) -&gt; (1,hidden_units_dec) * (hidden_units_dec,attention_units) = (1,attention_units)</p> <p>b) Shape of self.W2(last_inp_enc) -&gt; (max_len,hidden_units_dec) * (hidden_units_dec,attention_units) = (max_len,attention_units)</p> <p>Then we add up a) and b) quantities. How do we end up with dimensionality (max_len, attention_units) or (batch_size, max_len, attention_units)? How can we do addition with different size of second dimension (1 vs max_len)?</p> </li> <li><p>Why do we multiply <code>tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))</code> by <code>self.V</code>? Because we want alphas as scalar?</p> </li> </ol>
<blockquote> <ol> <li>) I cannot understand why the shape of tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) is (batch_size,max_len,attention_units) ?</li> </ol> </blockquote> <p>From the comments section of the code in <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer"><code>class BahdanauAttention</code></a></p> <p>query_with_time_axis shape = (batch_size, 1, hidden size)</p> <p>Note that the dimension <code>1</code> was added using <code>tf.expand_dims</code> to make the shape compatible with <code>values</code> for the addition. The added dimension of <code>1</code> gets broadcast during the addition operation. Otherwise, the incoming shape was (batch_size, hidden size), which would not have been compatible</p> <p>values shape = (batch_size, max_len, hidden size)</p> <p>Addition of the <code>query_with_time_axis</code> shape and <code>values</code> shape gives us a shape of <code>(batch_size, max_len, hidden size)</code></p> <blockquote> <ol start="2"> <li>) Why do we multiply <code>tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))</code> by self.V? Because we want alphas as scalar?</li> </ol> </blockquote> <p><code>self.V</code> is the final layer, the output of which gives us the score. The random weight initialization of the <code>self.V</code> layer is handled by <code>keras</code> behind the scene in the line <code>self.V = tf.keras.layers.Dense(1)</code>.</p> <p>We are not multiplying <code>tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))</code> by <code>self.V</code>.</p> <p>The construct <code>self.V(tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))</code> means --&gt; the <code>tanh</code> activations resulting from the operation <code>tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))</code> form the input matrix to the <em>single output</em> output layer represented by <code>self.V</code>.</p>
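<p>The broadcasting described above can be verified with a small NumPy sketch. The shapes below are arbitrary choices, and the Dense layers are replaced by plain (untrained) weight matrices — only the shape arithmetic matters here:</p>

```python
import numpy as np

batch, max_len, hidden, units = 4, 7, 16, 8
rng = np.random.default_rng(0)

query = rng.normal(size=(batch, hidden))            # last decoder hidden state
values = rng.normal(size=(batch, max_len, hidden))  # all encoder outputs
W1 = rng.normal(size=(hidden, units))               # stands in for self.W1
W2 = rng.normal(size=(hidden, units))               # stands in for self.W2
V = rng.normal(size=(units, 1))                     # stands in for self.V

# expand_dims gives (batch, 1, units); broadcasting adds it
# to the (batch, max_len, units) term from the encoder side
query_with_time_axis = (query @ W1)[:, None, :]
score = np.tanh(query_with_time_axis + values @ W2) @ V

print(query_with_time_axis.shape)  # (4, 1, 8)
print(score.shape)                 # (4, 7, 1): one scalar score per source position
```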
1,376
machine translation
Not able to generate correct English to SQL translations using LSTM for machine translation
https://stackoverflow.com/questions/49199943/not-able-to-generate-correct-english-to-sql-translations-using-lstm-for-machine
<p>I'm using recurrent neural networks to train a model to translate sample english sentences such as "fetch all employee data" into sql such as "SELECT * FROM EMPLOYEE". Right now my program takes 100 epochs of training time but translates all the inputs the same. Required libraries are tensorflow and keras. Could someone take a look at my program to help me generate the correct translation?</p> <p>Here is my code in python: <a href="https://github.com/Kashdog/engsqlnmt" rel="nofollow noreferrer">https://github.com/Kashdog/engsqlnmt</a></p> <p>here's my code:</p> <pre><code>from __future__ import print_function

from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np
import h5py

batch_size = 64  # Batch size for training.
epochs = 200  # Number of epochs to train for.
latent_dim = 256  # Latent dimensionality of the encoding space.
num_samples = 10000  # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'eng-sql/sql.txt'

# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    print(line.split('^'))
    input_text, target_text = line.split('^')
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as "end sequence" character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)

input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])

print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)

input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])

encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t &gt; 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` &amp; `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
# Save model
model.save('s2s.h5')

# Next: inference mode (sampling).
# Here's the drill:
# 1) encode input and retrieve initial decoder state
# 2) run one step of decoder with this initial state
#    and a "start of sequence" token as target.
#    Output will be the next target token
# 3) Repeat with the current target token and current states

# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)

# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())

def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char

        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '\n' or
                len(decoded_sentence) &gt; max_decoder_seq_length):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.

        # Update states
        states_value = [h, c]

    return decoded_sentence

for seq_index in range(39):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print(seq_index)
    print('Input sentence:', input_texts[seq_index])
    print('Decoded sentence:', decoded_sentence)

print('testing')
encoder_test_data = np.zeros(
    (2, max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
test_seq = "fetch total employee data"
print(test_seq)
#encoder_test_data
for t, char in enumerate(test_seq):
    encoder_test_data[1, t, input_token_index[char]] = 1.

#input_seq = 'fetch all customer data'
decoded_sentence = decode_sequence(encoder_test_data[1:2])
print('Decoded test sentence:', decoded_sentence)
</code></pre> <p>and my data file (sql.txt) is:</p> <pre><code>fetch all customer data^SELECT * FROM CUSTOMER
find all customer data^SELECT * FROM CUSTOMER
retrieve all customer data^SELECT * FROM CUSTOMER
get all customer data^SELECT * FROM CUSTOMER
download all customer data^SELECT * FROM CUSTOMER
select all customer data^SELECT * FROM CUSTOMER
obtain all employee info^SELECT * FROM EMPLOYEE
show all employee info^SELECT * FROM EMPLOYEE
display all employee info^SELECT * FROM EMPLOYEE
</code></pre>
<p><strong>TLDR;</strong> Your dataset is very small, biased and lacks the variety needed for RNNs. So you need 'some tricks' to make your code work. </p> <p>The problem is <strong>you</strong> <strong>didn't shuffle your input data.</strong> (The fully working source code is <a href="https://drive.google.com/file/d/1ejnTaH2ZqwWmPyo4IYhikNQjNEpFrxKg/view?usp=sharing" rel="nofollow noreferrer">here</a>) </p> <p>If you look at your <code>sql.txt</code> file, you'll notice the dataset is sorted by customer and employee examples, which makes it harder for your network to learn; furthermore, your dataset is biased [30 samples of customer and 70 samples of employee]</p> <p>Also, your hidden_size was a little big for this small dataset (~100 samples) so I made some changes:</p> <pre><code>batch_size = 32  # Batch size for training.
epochs = 300  # Number of epochs to train for.
latent_dim = 32  # Latent dimensionality of the encoding space.
</code></pre> <p>Here's the shuffle code:</p> <pre><code>import random

all_data = list(zip(input_texts, target_texts))
random.shuffle(all_data)

for i, (input_text, target_text) in enumerate(all_data):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t &gt; 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.
</code></pre> <p>so here's the result (I think you'll need more data and a not-biased dataset):</p> <pre><code>-
34
Input sentence: show all client information
Decoded sentence: SELECT * FROM CUSTOMER
-
35
Input sentence: display all client information
Decoded sentence: SELECT * FROM CUSTOMER
-
36
Input sentence: fetch me all client information
Decoded sentence: SELECT * FROM CUSTOMER
-
37
Input sentence: get me all client information
Decoded sentence: SELECT * FROM CUSTOMER
-
38
Input sentence: get me all employee information
Decoded sentence: SELECT * FROM EMPLOYEE
testing
fetch total employee data
Decoded test sentence: SELECT * FROM EMPLOYEE
</code></pre>
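<p>A side note on the zip-then-shuffle pattern used in the answer: because each (input, target) pair is shuffled as a single unit, source/target alignment is preserved. A quick self-contained check:</p>

```python
import random

inputs = ["a", "b", "c", "d"]
targets = ["A", "B", "C", "D"]

random.seed(0)  # deterministic only for this example
pairs = list(zip(inputs, targets))
random.shuffle(pairs)

# the order changed, but every source is still paired with its own target
assert sorted(pairs) == list(zip(inputs, targets))
assert all(src.upper() == tgt for src, tgt in pairs)
print(pairs)
```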
1,377
machine translation
Why special characters like () &quot;&quot; : [] are often removed from data before training translation machine?
https://stackoverflow.com/questions/64181801/why-special-characters-like-are-often-removed-from-data-before-traini
<p>I see that people often remove special characters like () &quot;&quot; : [] from data before training translation machine. Could you explain for me the benefits of doing so?</p>
<p>Data clean-up or pre-processing is performed so that algorithms can focus on important, linguistically meaningful &quot;words&quot; instead of &quot;noise&quot;. See <a href="https://towardsdatascience.com/nlp-building-text-cleanup-and-preprocessing-pipeline-eba4095245a0" rel="nofollow noreferrer">&quot;<em>Removing Special Characters</em>&quot;</a>:</p> <blockquote> <p>Special characters, as you know, are non-alphanumeric characters. These characters are most often found in comments, references, currency numbers etc. These characters add no value to text-understanding and induce noise into algorithms.</p> </blockquote> <p>Whenever this noise finds its way into a model, it can produce output at inference that contains these unexpected (sequences of) characters, and even affect overall translations. It is a frequent case with brackets in Japanese translations.</p>
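<p>As a concrete illustration, a deliberately blunt cleanup rule might look like the sketch below. Real pipelines are usually more selective (e.g. keeping intra-word hyphens and apostrophes), so treat this as a starting point, not a recommendation:</p>

```python
import re

def strip_special_chars(text):
    """Drop non-alphanumeric characters.

    \\w keeps letters, digits and underscore; \\s keeps whitespace.
    Everything else -- () "" : [] $ etc. -- is removed.
    """
    return re.sub(r"[^\w\s]", "", text, flags=re.UNICODE)

print(strip_special_chars('He said: "costs [around] $3 (net)"'))
# He said costs around 3 net
```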
1,378
machine translation
Documentation of Moses (statistical machine translation) mose.ini file format?
https://stackoverflow.com/questions/30033707/documentation-of-moses-statistical-machine-translation-mose-ini-file-format
<p>Is there any documentation of the moses.ini format for Moses? Running moses at the command line without arguments returns available feature names but not their available arguments. Additionally, the structure of the .ini file is not specified in the manual that I can see.</p>
<p>The main idea is that the file contains settings that will be used by the translation model. Thus, the documentation of values and options in <code>moses.ini</code> should be looked up in the Moses feature specifications.</p> <p>Here are some excerpt I found on the Web about <code>moses.ini</code>.</p> <p>In the <a href="http://ec.europa.eu/information_society/apps/projects/logos/7/288487/080/deliverables/001_MosesSpecificationAres20132942326.pdf" rel="nofollow">Moses Core</a>, we have some details:</p> <blockquote> <p><code>7.6.5 moses.ini</code><br/> All feature functions are specified in the <code>[feature]</code> section. It should be in the format:<br/> <code>* Feature-name key1=value1</code> <code>key2=value2</code> .... <br/>For example, <code>KENLM factor=0 order=3 num-features=1 lazyken=0 path=file.lm.gz</code><br/></p> </blockquote> <p>Also, there is a hint on how to print basic statistics about all components mentioned in the moses.ini. </p> <blockquote> <p>Run the script<br/> <code>analyse_moses_model.pl moses.ini</code><br/> This can be useful to set the order of mapping steps to avoid explosion of translation options or just to check that the model components are as big/detailed as we expect. </p> </blockquote> <p>In the <a href="https://clear.colorado.edu/CompSemWiki/index.php/Moses" rel="nofollow">Center for Computational Language and EducAtion Research (CLEAR)</a> Wiki, there is a sample file with some documentation:</p> <blockquote> <p><b>Parameters</b></p> <p>It is recommended to make an <code>.ini</code> file to storage all of your setting.</p> <p><code>input-factors</code><br/> - Using factor model or not<br/> <code>mapping</code><br/> - To use LM in memory (T) or read the file in hard disk directly (G)<br/> <code>ttable-file</code><br/> - Indicate the num. of source-factor, num. 
of target-factor, num of score, and the path to translation table file <br/> <code>lmodel-file</code><br/> - Indicate the type using for LM (0:SRILM, 1:IRSTLM), using factor number, the order (n-gram) of LM, and the path to language model file<br/></p> </blockquote> <p>If it is not enough, there is another description on <a href="http://stp.lingfil.uu.se/~ch/lab7.html" rel="nofollow">this page, see "Decoder configuration file" section</a> </p> <blockquote> <p>The sections <code>[ttable-file]</code> and <code>[lmodel-file]</code> contain pointers to the phrase table file and language model file, respectively. You may disregard the numbers on those lines. For the time being, it's enough to know that <b>the last one of the numbers in the language model specification is the order of the n-gram model</b>.</p> <p>The configuration file also contains some feature weights. Note that the <code>[weight-t]</code> section has 5 weights, one for each feature contained in the phrase table. </p> <p>The <code>moses.ini</code> file created by the training process <b>will not work with your decoder without modification</b> because it relies on a language model library that is not compiled into our decoder. In order to make it work, open the moses.ini file and find the language model specification in the line immediately after the <code>[lmodel-file]</code> heading. The first number on this line will be <code>0</code>, which stands for SRILM. Change it into <code>8</code> and leave the rest of the line untouched. Then your configuration should work.</p> </blockquote>
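<p>Pulling the excerpts above together, a minimal fragment in the newer feature-function style could look like the following. This is illustrative only — the paths and weights are placeholders, not a working configuration, and a real file would also declare the phrase-table feature alongside the language model:</p>

```ini
[input-factors]
0

[mapping]
0 T 0

[feature]
KENLM factor=0 order=3 num-features=1 lazyken=0 path=file.lm.gz

[weight]
KENLM0= 0.5
```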
1,379
machine translation
Can I create a custom domain with the IBM Watson Machine Translation API?
https://stackoverflow.com/questions/39120127/can-i-create-a-custom-domain-with-the-ibm-watson-machine-translation-api
<p>My goal is to create a custom translation engine for the financial domain, language pair CHT - EN and CHS - EN. I have respective dictionaries and aligned segments ready to import into a custom engine and train the engine.</p> <p>If I understand the documentation (<a href="https://www.ibm.com/watson/developercloud/doc/language-translation/" rel="nofollow">https://www.ibm.com/watson/developercloud/doc/language-translation/</a>) correctly, I can only build on top of existing domains and language pairs. So, for Chinese - English, I could only select the patents domain and import my own dictionaries and corpus, then re-train. Not sure though if this makes sense, also it is unclear if we are talking traditional Chinese or simplified Chinese. I need traditional Chinese service first, later on simplified Chinese.</p> <p>An alternative would be to build on top of a financial news domain, but news are not available for Chinese - English.</p> <p>I'm trying to figure out the best practice how to go ahead and appreciate any guidance.</p> <p>Thanks! </p>
<p>To create a model, you can use a glossary of high-frequency or high-confidence phrase translations, or a parallel corpus (TMX file) instead.</p> <p>As @Nathan said, if you use <code>zh-en-patent</code> as <code>base_model_id</code> you will have support for both Traditional and Simplified Chinese using <a href="https://en.wikipedia.org/wiki/Han_unification" rel="nofollow noreferrer">unihan</a>. <code>zh-en-patent</code> is the only model that you can use <strong>today</strong> to translate Chinese to English.</p> <p>Here is a guide on how to create a custom translation model using the <a href="https://console.bluemix.net/docs/services/language-translator/customizing.html#customizing" rel="nofollow noreferrer">IBM Watson Language Translator service</a>.</p>
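As an aside, since the parallel corpus is supplied as a TMX file: a minimal TMX 1.4 document can be sketched with Python's standard library alone. The header attribute values below are placeholders; check the TMX specification and the Watson documentation for what the service actually requires.

```python
import xml.etree.ElementTree as ET

def make_tmx(pairs, src_lang="zh-TW", tgt_lang="en"):
    # Build a minimal TMX 1.4 document from (source, target) segment pairs.
    tmx = ET.Element("tmx", {"version": "1.4"})
    ET.SubElement(tmx, "header", {
        "creationtool": "example", "creationtoolversion": "1.0",
        "segtype": "sentence", "o-tmf": "custom", "adminlang": "en",
        "srclang": src_lang, "datatype": "plaintext",
    })
    body = ET.SubElement(tmx, "body")
    for src, tgt in pairs:
        tu = ET.SubElement(body, "tu")  # one translation unit per aligned pair
        for lang, text in ((src_lang, src), (tgt_lang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    return ET.tostring(tmx, encoding="unicode")

doc = make_tmx([("資產負債表", "balance sheet")])
```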
1,380
machine translation
Machine Translation FFN : Dimension problem due to window size
https://stackoverflow.com/questions/72439789/machine-translation-ffn-dimension-problem-due-to-window-size
<p>This is my first time creating an FFN and training it to translate French to English using word prediction: the input is two arrays, one of size <em>2 x window_size + 1</em> from the source language and one of size <em>window_size</em> from the target language, plus a label of size 1.</p> <p>E.g. for window_size = 2:</p> <pre><code>[&quot;je&quot;,&quot;mange&quot;, &quot;la&quot;, &quot;pomme&quot;,&quot;avec&quot;] </code></pre> <p>and</p> <pre><code> [&quot;I&quot;, &quot;eat&quot;] </code></pre> <p>So the inputs are of size [5] and [2], after concatenating =&gt; 7.</p> <p>Label: &quot;the&quot; (referring to &quot;la&quot; in French). The label is changed to one-hot encoding before comparing with yHat.</p> <p>I'm using a unique index for each word ( 1 to len(vocab) ) and train using the indices (not the words). The output of the FFN is a probability distribution of the size of the target-language vocab.</p> <p>The problem is that the FFN doesn't learn and the accuracy stays at 0. When I print the sizes of y_final (target probability) and yHat (model hypothesis), they have different dimensions:</p> <pre><code>yHat.size()=[512, 7, 10212] </code></pre> <p>where 512 is the batch size, 7 the concatenated input size, and 10212 the size of the target vocab, while</p> <pre><code>y_final.size()= [512, 10212] </code></pre> <p>And over the forward method I have these sizes:</p> <pre><code>torch.Size([512, 5, 32])
torch.Size([512, 5, 64])
torch.Size([512, 5, 64])
torch.Size([512, 2, 256])
torch.Size([512, 2, 32])
torch.Size([512, 2, 64])
torch.Size([512, 2, 64])
torch.Size([512, 7, 64])
torch.Size([512, 7, 128])
torch.Size([512, 7, 10212])
</code></pre> <p>Accuracy only increases when yHat equals y_final, and I suspect that never happens because they don't even have the same shapes (2D vs 3D). Is this the problem? 
Please refer to the code, and if you need any other info please tell me.</p> <p>The code runs fine, with no errors.</p> <pre><code>trainingData = TensorDataset(encoded_source_windows, encoded_target_windows, encoded_labels)
# print(trainingData)
batchsize = 512
trainingLoader = DataLoader(trainingData, batch_size=batchsize, drop_last=True)

def ffnModel(vocabSize1, vocabSize2, learningRate=0.01):
    class ffNetwork(nn.Module):
        def __init__(self):
            super().__init__()
            self.embeds_src = nn.Embedding(vocabSize1, 256)
            self.embeds_target = nn.Embedding(vocabSize2, 256)
            # input layer
            self.inputSource = nn.Linear(256, 32)
            self.inputTarget = nn.Linear(256, 32)
            # hidden layer 1
            self.fc1 = nn.Linear(32, 64)
            self.bnormS = nn.BatchNorm1d(5)
            self.bnormT = nn.BatchNorm1d(2)
            # Layer(s) after concatenation:
            self.fc2 = nn.Linear(64, 128)
            self.output = nn.Linear(128, vocabSize2)
            self.softmaaax = nn.Softmax(dim=0)

        # forward pass
        def forward(self, xSource, xTarget):
            xSource = self.embeds_src(xSource)
            xSource = F.relu(self.inputSource(xSource))
            xSource = F.relu(self.fc1(xSource))
            xSource = self.bnormS(xSource)

            xTarget = self.embeds_target(xTarget)
            xTarget = F.relu(self.inputTarget(xTarget))
            xTarget = F.relu(self.fc1(xTarget))
            xTarget = self.bnormT(xTarget)

            xCat = torch.cat((xSource, xTarget), dim=1)  # dim=128 or 1 ?
            xCat = F.relu(self.fc2(xCat))
            print(xCat.size())
            xCat = self.softmaaax(self.output(xCat))
            return xCat

    # creating instance of the class
    net = ffNetwork()
    # loss function
    lossfun = nn.CrossEntropyLoss()
    # lossfun = nn.NLLLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=learningRate)
    return net, lossfun, optimizer

def trainModel(vocabSize1, vocabSize2, learningRate):
    # number of epochs
    numepochs = 64
    # create a new Model instance
    net, lossfun, optimizer = ffnModel(vocabSize1, vocabSize2, learningRate)
    # initialize losses
    losses = torch.zeros(numepochs)
    trainAcc = []
    # loop over training data batches
    batchAcc = []
    batchLoss = []
    for epochi in range(numepochs):
        # Switching on training mode
        net.train()
        # loop over training data batches
        batchAcc = []
        batchLoss = []
        for A, B, y in tqdm(trainingLoader):
            # forward pass and loss
            final_y = []
            for i in range(y.size(dim=0)):
                yy = [0] * target_vocab_length
                yy[y[i]] = 1
                final_y.append(yy)
            final_y = torch.tensor(final_y)

            yHat = net(A, B)
            loss = lossfun(yHat, final_y)
            ################
            print(&quot;\n yHat.size()&quot;)
            print(yHat.size())
            print(&quot;final_y.size()&quot;)
            print(final_y.size())

            # backprop
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # loss from this batch
            batchLoss.append(loss.item())
            print(f'batchLoss: {loss.item()}')

            # Accuracy calculator:
            matches = torch.argmax(yHat) == final_y          # booleans (false/true)
            matchesNumeric = matches.float()                 # convert to numbers (0/1)
            accuracyPct = 100 * torch.mean(matchesNumeric)   # average and x100
            batchAcc.append(accuracyPct)                     # add to list of accuracies
            print(f'accuracyPct: {accuracyPct}')

        trainAcc.append(np.mean(batchAcc))
        losses[epochi] = np.mean(batchLoss)

    return trainAcc, losses, net

trainAcc, losses, net = trainModel(len(source_vocab), len(target_vocab), 0.01)
print(trainAcc)
</code></pre>
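An observation on the shape mismatch in the question (an illustrative sketch, not a fix verified against the original code): PyTorch's <code>nn.CrossEntropyLoss</code> expects raw integer class indices as targets, not one-hot vectors, and exactly one prediction distribution per label. In plain Python, the expected shape contract looks like this:

```python
import math

def cross_entropy(logits_batch, labels):
    # logits_batch: [batch][num_classes]; labels: [batch] of integer class ids
    assert len(logits_batch) == len(labels)
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        m = max(logits)                                      # for numerical stability
        log_z = m + math.log(sum(math.exp(v - m) for v in logits))
        total += log_z - logits[y]                           # -log softmax(logits)[y]
    return total / len(labels)

# two examples, three classes: one [num_classes] row per single integer label,
# so the shapes are [2, 3] vs [2] -- never [2, 7, 3] vs [2, 3]
loss = cross_entropy([[2.0, 0.5, 0.1], [0.1, 0.2, 3.0]], [0, 2])
```

In the question's terms, yHat would have to be reduced to [512, 10212] (one distribution per example) and the labels kept as a [512] tensor of indices, with no one-hot conversion, before applying <code>nn.CrossEntropyLoss</code>.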
1,381
machine translation
Neural Machine Translation model predictions are off-by-one
https://stackoverflow.com/questions/48256372/neural-machine-translation-model-predictions-are-off-by-one
<p><strong>Problem Summary</strong></p> <p>In the following example, my NMT model has high loss because it correctly predicts <code>target_input</code> instead of <code>target_output</code>.</p> <pre><code>Targetin : 1 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 9 10 10 10 3 3 10 10 3 10 3 3 10 10 3 9 9 4 4 4 4 4 3 10 3 3 9 9 3 6 6 6 6 6 6 10 9 9 10 10 4 4 4 4 4 4 4 4 4 4 4 4 9 9 9 9 3 3 3 6 6 6 6 6 9 9 10 3 4 4 4 4 4 4 4 4 4 4 4 4 9 9 10 3 10 9 9 3 4 4 4 4 4 4 4 4 4 10 10 4 4 4 4 4 4 4 4 4 4 9 9 10 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 9 3 3 10 6 6 6 6 6 3 9 9 3 3 3 3 3 3 3 10 10 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 5 3 3 3 3 10 10 10 3 9 9 5 10 3 3 3 3 9 9 9 5 10 10 10 10 10 4 4 4 4 3 10 6 6 6 6 6 6 3 5 10 10 10 10 3 9 9 6 6 6 6 6 6 6 6 6 9 9 9 3 3 3 6 6 6 6 6 6 6 6 3 9 9 9 3 3 6 6 6 3 3 3 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Targetout : 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 9 10 10 10 3 3 10 10 3 10 3 3 10 10 3 9 9 4 4 4 4 4 3 10 3 3 9 9 3 6 6 6 6 6 6 10 9 9 10 10 4 4 4 4 4 4 4 4 4 4 4 4 9 9 9 9 3 3 3 6 6 6 6 6 9 9 10 3 4 4 4 4 4 4 4 4 4 4 4 4 9 9 10 3 10 9 9 3 4 4 4 4 4 4 4 4 4 10 10 4 4 4 4 4 4 4 4 4 4 9 9 10 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 9 3 3 10 6 6 6 6 6 3 9 9 3 3 3 3 3 3 3 10 10 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 5 3 3 3 3 10 10 10 3 9 9 5 10 3 3 3 3 9 9 9 5 10 10 10 10 10 4 4 4 4 3 10 6 6 6 6 6 6 3 5 10 10 10 10 3 9 9 6 6 6 6 6 6 6 6 6 9 9 9 3 3 3 6 6 6 6 6 6 6 6 3 9 9 9 3 3 6 6 6 3 3 3 3 3 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Prediction : 3 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 3 3 3 3 3 3 10 3 3 10 3 3 10 3 3 9 3 4 4 4 4 4 3 10 3 3 9 3 3 6 6 6 6 6 6 10 9 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 9 3 3 3 3 3 3 6 6 6 6 6 9 6 3 3 4 4 4 4 4 4 4 4 4 4 4 4 9 3 3 3 10 9 3 3 4 4 4 4 4 4 4 4 4 3 10 4 4 4 4 4 4 4 4 4 4 9 3 3 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 3 3 10 6 6 6 6 6 3 9 3 3 3 3 3 3 3 3 3 3 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 3 3 3 3 3 10 3 3 3 9 3 3 10 3 3 3 3 9 3 9 3 10 3 3 3 3 4 4 4 4 3 10 6 6 6 6 6 6 3 3 10 3 3 3 3 9 3 6 6 6 6 6 6 6 6 6 9 6 9 3 3 3 6 6 6 6 6 6 6 6 3 9 3 9 3 3 6 6 6 3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 Source : 9 16 4 7 22 22 19 1 12 19 12 18 5 18 9 18 5 8 12 19 19 5 5 19 22 7 12 12 6 19 7 3 20 7 9 14 4 11 20 12 7 1 18 7 7 5 22 9 13 22 20 19 7 19 7 13 7 11 19 20 6 22 18 17 17 1 12 17 23 7 20 1 13 7 11 11 22 7 12 1 13 12 5 5 19 22 5 5 20 1 5 4 12 9 7 12 8 14 18 22 18 12 18 17 19 4 19 12 11 18 5 9 9 5 14 7 11 6 4 17 23 6 4 5 12 6 7 14 4 20 6 8 12 25 4 19 6 1 5 1 5 20 4 18 12 12 1 11 12 1 25 13 18 19 7 12 7 3 4 22 9 9 12 4 8 9 19 9 22 22 19 1 19 7 5 19 4 5 18 11 13 9 4 14 12 13 20 11 12 11 7 6 1 11 19 20 7 22 22 12 22 22 9 3 8 12 11 14 16 4 11 7 11 1 8 5 5 7 18 16 22 19 9 20 4 12 18 7 19 7 1 12 18 17 12 19 4 20 9 9 1 12 5 18 14 17 17 7 4 13 16 14 12 22 12 22 18 9 12 11 3 18 6 20 7 4 20 7 9 1 7 25 13 5 25 14 11 5 20 7 23 12 5 16 19 19 25 19 7 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>As is evident, the prediction matches up almost 100% with <code>target_input</code> instead of <code>target_output</code>, as it should (off-by-one). Loss and gradients are being calculated using <code>target_output</code>, so it is strange that predictions are matching up to <code>target_input</code>.</p> <p><strong>Model Overview</strong></p> <p>An NMT model predicts a sequence of words in a target language using a primary sequence of words in a source language. This is the framework behind Google Translate. Since NMT uses coupled-RNNs, it is supervised and required labelled target input and output.</p> <p>NMT uses a <code>source</code> sequence, a <code>target_input</code> sequence, and a <code>target_output</code> sequence. 
In the example below, the encoder RNN (blue) uses the source input words to produce a meaning vector, which it passes to the decoder RNN (red), which uses the meaning vector to produce output.</p> <p>When doing new predictions (inference), the decoder RNN uses its own previous output to seed the next prediction in the timestep. However, to improve training, it is allowed to seed itself with the correct previous prediction at each new timestep. This is why <code>target_input</code> is necessary for training.</p> <p><a href="https://i.sstatic.net/jus6z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jus6z.jpg" alt="enter image description here" /></a></p> <p><strong>Code to get an iterator with source, target_in, target_out</strong></p> <pre><code>def get_batched_iterator(hparams, src_loc, tgt_loc):
    if not (os.path.exists('primary.csv') and os.path.exists('secondary.csv')):
        utils.integerize_raw_data()

    source_dataset = tf.data.TextLineDataset(src_loc)
    target_dataset = tf.data.TextLineDataset(tgt_loc)
    dataset = tf.data.Dataset.zip((source_dataset, target_dataset))
    dataset = dataset.shuffle(hparams.shuffle_buffer_size, seed=hparams.shuffle_seed)
    dataset = dataset.map(lambda source, target: (tf.string_to_number(tf.string_split([source], delimiter=',').values, tf.int32),
                                                  tf.string_to_number(tf.string_split([target], delimiter=',').values, tf.int32)))
    dataset = dataset.map(lambda source, target: (source, tf.concat(([hparams.sos], target), axis=0), tf.concat((target, [hparams.eos]), axis=0)))
    dataset = dataset.map(lambda source, target_in, target_out: (source, target_in, target_out, tf.size(source), tf.size(target_in)))
    # Proceed to batch and return iterator
</code></pre> <p><strong>NMT model core code</strong></p> <pre><code>def __init__(self, hparams, iterator, mode):
    source, target_in, target_out, source_lengths, target_lengths = iterator.get_next()

    # Lookup embeddings
    embedding_encoder = tf.get_variable(&quot;embedding_encoder&quot;, [hparams.src_vsize, hparams.src_emsize])
    encoder_emb_inp = tf.nn.embedding_lookup(embedding_encoder, source)

    embedding_decoder = tf.get_variable(&quot;embedding_decoder&quot;, [hparams.tgt_vsize, hparams.tgt_emsize])
    decoder_emb_inp = tf.nn.embedding_lookup(embedding_decoder, target_in)

    # Build and run Encoder LSTM
    encoder_cell = tf.nn.rnn_cell.BasicLSTMCell(hparams.num_units)
    encoder_outputs, encoder_state = tf.nn.dynamic_rnn(encoder_cell, encoder_emb_inp, sequence_length=source_lengths, dtype=tf.float32)

    # Build and run Decoder LSTM with TrainingHelper and output projection layer
    decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(hparams.num_units)
    projection_layer = layers_core.Dense(hparams.tgt_vsize, use_bias=False)
    helper = tf.contrib.seq2seq.TrainingHelper(decoder_emb_inp, sequence_length=target_lengths)
    decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, encoder_state, output_layer=projection_layer)
    outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
    logits = outputs.rnn_output

    if mode is 'TRAIN' or mode is 'EVAL':  # then calculate loss
        crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=target_out, logits=logits)
        target_weights = tf.sequence_mask(target_lengths, maxlen=tf.shape(target_out)[1], dtype=logits.dtype)
        self.loss = tf.reduce_sum((crossent * target_weights) / hparams.batch_size)

    if mode is 'TRAIN':  # then calculate/clip gradients, then optimize model
        params = tf.trainable_variables()
        gradients = tf.gradients(self.loss, params)
        clipped_gradients, _ = tf.clip_by_global_norm(gradients, hparams.max_gradient_norm)
        optimizer = tf.train.AdamOptimizer(hparams.l_rate)
        self.update_step = optimizer.apply_gradients(zip(clipped_gradients, params))

    if mode is 'EVAL':  # then allow access to input/output tensors to printout
        self.src = source
        self.tgt_in = target_in
        self.tgt_out = target_out
        self.logits = logits
</code></pre>
<p>The core issue with using an NMT model to predict a language-like syntax with a repetitive structure is that it becomes incentivized to simply predict whatever the past prediction was. Since it is fed the correct previous prediction at each step by <code>TrainingHelper</code> to speed up training, this artificially produces a local minimum that the model is unable to get out of.</p> <p>The best option I have found is to weight the loss function such that the key points in the output sequence, where the output is not repetitive, are weighted more heavily. This incentivizes the model to get those correct, and not just repeat the past prediction.</p>
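The weighting idea in this answer can be sketched roughly like this (an illustrative stdlib sketch, not the poster's actual code): give target positions whose label differs from the previous label a larger loss weight, so that merely repeating the last prediction stops being a good strategy.

```python
def repetition_weights(target, base=1.0, boost=5.0):
    # heavier loss weight wherever the label differs from the previous label
    return [boost if i == 0 or tok != target[i - 1] else base
            for i, tok in enumerate(target)]

w = repetition_weights([3, 3, 3, 6, 6, 9])
```

These per-position weights would then multiply the per-token cross-entropy (e.g. folded into `target_weights` in the code above) before summing.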
1,382
machine translation
Tensorflow Neural machine translation with attention Graph execution error:
https://stackoverflow.com/questions/71892574/tensorflow-neural-machine-translation-with-attention-graph-execution-error
<p>I am currently using TensorFlow and following the tutorial <a href="https://www.tensorflow.org/text/tutorials/nmt_with_attention" rel="nofollow noreferrer">https://www.tensorflow.org/text/tutorials/nmt_with_attention</a>. I am trying to make a Korean-to-English translator by referring to the above document. However, while training in TensorFlow, the code throws the following error:</p> <pre><code>EPOCHS = 10

for epoch in range(EPOCHS):
    start = time.time()

    enc_hidden = encoder.initialize_hidden_state()
    total_loss = 0

    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        batch_loss = train_step(inp, targ, enc_hidden)
        total_loss += batch_loss

        if batch % 100 == 0:
            print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss.numpy()))

    # save the model every 2 epochs (checkpoint)
    if (epoch + 1) % 2 == 0:
        checkpoint.save(file_prefix = checkpoint_prefix)

    print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / steps_per_epoch))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
</code></pre> <p>Error message: <a href="https://i.sstatic.net/zX0aA.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I would like to know why this graph error occurs. I'm not sure of the problem myself, so I'm attaching a Google Drive link with my code.</p> <p><a href="https://drive.google.com/file/d/118ouco4cH-kyOt7Nezqad3Qm66QexbsO/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/118ouco4cH-kyOt7Nezqad3Qm66QexbsO/view?usp=sharing</a></p> <p>Thank you.</p>
1,383
machine translation
&quot;Unicode Error &#39;unicodeescape&#39; codec can&#39;t decode bytes...&quot; when writing Windows file paths
https://stackoverflow.com/questions/1347791/unicode-error-unicodeescape-codec-cant-decode-bytes-when-writing-windows
<p>I am using Python 3.1 on a Windows 7 machine. Russian is the default system language, and utf-8 is the default encoding.</p> <p>Looking at the answer to a <a href="https://stackoverflow.com/questions/778096/problem-opening-a-text-document-unicode-error">previous question</a>, I have been attempting to use the &quot;codecs&quot; module, with little luck. Here are a few examples:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; g = codecs.open(&quot;C:\Users\Eric\Desktop\beeline.txt&quot;, &quot;r&quot;, encoding=&quot;utf-8&quot;)
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-4: truncated \UXXXXXXXX escape (&lt;pyshell#39&gt;, line 1)
</code></pre> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; g = codecs.open(&quot;C:\Users\Eric\Desktop\Site.txt&quot;, &quot;r&quot;, encoding=&quot;utf-8&quot;)
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-4: truncated \UXXXXXXXX escape (&lt;pyshell#40&gt;, line 1)
</code></pre> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; g = codecs.open(&quot;C:\Python31\Notes.txt&quot;, &quot;r&quot;, encoding=&quot;utf-8&quot;)
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 11-12: malformed \N character escape (&lt;pyshell#41&gt;, line 1)
</code></pre> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; g = codecs.open(&quot;C:\Users\Eric\Desktop\Site.txt&quot;, &quot;r&quot;, encoding=&quot;utf-8&quot;)
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-4: truncated \UXXXXXXXX escape (&lt;pyshell#44&gt;, line 1)
</code></pre> <p>My last idea was that it might be the fact that Windows &quot;translates&quot; a few folders, such as the &quot;users&quot; folder, into Russian (though typing &quot;users&quot; is still the correct path), so I tried it in the Python31 folder. Still, no luck. Any ideas?</p>
<p>The problem is with the string</p> <pre><code>"C:\Users\Eric\Desktop\beeline.txt" </code></pre> <p>Here, <code>\U</code> in <code>"C:\Users</code>... starts an eight-character Unicode escape, such as <code>\U00014321</code>. In your code, the escape is followed by the character 's', which is invalid.</p> <p>You either need to duplicate all backslashes:</p> <pre><code>"C:\\Users\\Eric\\Desktop\\beeline.txt" </code></pre> <p>Or prefix the string with <code>r</code> (to produce a raw string):</p> <pre><code>r"C:\Users\Eric\Desktop\beeline.txt" </code></pre>
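For illustration, here are the three equivalent ways to spell such a path, plus a check that the unescaped form really is rejected at compile time (this is still true on current Python 3):

```python
escaped = "C:\\Users\\Eric\\Desktop\\beeline.txt"   # doubled backslashes
raw = r"C:\Users\Eric\Desktop\beeline.txt"          # raw string literal
forward = "C:/Users/Eric/Desktop/beeline.txt"       # Windows also accepts forward slashes

assert escaped == raw

# the plain literal is a SyntaxError, because \U starts an eight-hex-digit escape
try:
    compile(r'p = "C:\Users\Eric"', "<demo>", "exec")
except SyntaxError as err:
    print("rejected:", err.msg)
```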
1,384
machine translation
how do i predict my machine translation after i loaded model.5
https://stackoverflow.com/questions/60394098/how-do-i-predict-my-machine-translation-after-i-loaded-model-5
<p>I had built a seq2seq translation model with Keras; it translates between two languages.</p> <p>Then I saved the whole model as <strong>model.h5</strong>:</p> <pre><code> model.save('model.h5')
</code></pre> <p>Then I loaded <strong>model.h5</strong> in another Python script:</p> <pre><code>from keras.models import load_model

model = load_model('model.h5')
model.summary()
m = model.get_weights()
print(m)
</code></pre> <p>I can see the summary and the weights of my model,</p> <p>but I do not know how to make a prediction.</p> <p>I trained it to translate English to French.</p> <p>Now I want to input English and see the predicted French.</p> <p>How can I do it? Any ideas? Is it really that hard, or impossible?</p> <p><strong>Updated</strong></p> <p>I tried this, but it gives me an error:</p> <pre><code>text = np.array(['how can i solve this question'])
print(text.shape)
res = model.predict(text)
</code></pre> <p><strong>Error</strong></p> <blockquote> <p>ValueError: Error when checking input: expected embedding_1_input to have shape (4,) but got array with shape (1,)</p> </blockquote>
<p>You can get the prediction of a model using </p> <pre><code>predicted_output = model.predict(input) </code></pre>
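The error in the question ("expected ... shape (4,)") suggests the model was trained on integer-encoded, fixed-length sequences, so a raw string cannot be passed to `predict` directly; the input must be preprocessed the same way as at training time. A rough stdlib sketch of that preprocessing (the names `word_index`, `pad_id`, and `unk_id` are assumptions; use whatever tokenizer and vocabulary were used for training):

```python
def encode(sentence, word_index, maxlen, pad_id=0, unk_id=1):
    # map words to the ids used at training time, then pad/trim to maxlen
    ids = [word_index.get(w, unk_id) for w in sentence.lower().split()]
    return (ids + [pad_id] * maxlen)[:maxlen]

vocab = {"how": 2, "can": 3, "i": 4, "solve": 5}
x = encode("how can i solve this question", vocab, maxlen=4)
```

The resulting fixed-length id sequence, reshaped to a batch of one, is what `model.predict` would then receive, and the predicted token ids must be decoded back to French words with the target-side vocabulary.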
1,385
machine translation
Enforcing a Prediction Language for NLLB Machine Translation Model
https://stackoverflow.com/questions/78600144/enforcing-a-prediction-language-for-nllb-machine-translation-model
<p>All I need is a way to insert my GenerationConfig into the HuggingFace <code>Seq2Seq</code> Trainer.</p> <p>I want to enforce that the <code>facebook/nllb-200-distilled-600M</code> model's predictions are in Arabic; I am using the <code>transformers</code> library.</p> <p>Here is my Trainer code:</p> <pre><code>from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

wandb.init(project=&quot;project_name&quot;)

# https://huggingface.co/docs/transformers/v4.18.0/en/performance
training_args = Seq2SeqTrainingArguments(
    output_dir=&quot;filename&quot;,
    save_total_limit=1,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16 // 2,
    # gradient_checkpointing=True,
    weight_decay=0.01,
    warmup_steps=1_000,
    learning_rate=3e-4,
    lr_scheduler_type=&quot;cosine&quot;,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
    report_to='wandb'
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset[&quot;train&quot;],
    eval_dataset=tokenized_dataset[&quot;test&quot;],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
</code></pre> <p>I tried doing this:</p> <pre><code>model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

model.config.forced_bos_token_id = tokenizer.lang_code_to_id[&quot;arb_Arab&quot;]
</code></pre> <p>And it works, <strong>in sanity checking at least</strong>. But when I started the training loop, I got this:</p> <pre><code>warnings.warn(
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 200, 'forced_bos_token_id': 256011}
/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
</code></pre> <p>And my eval loss is NaN for some reason. All I need is a way to insert my GenerationConfig into the Trainer.</p>
1,386
machine translation
Can I use BERT or BART for machine translation?
https://stackoverflow.com/questions/66862585/can-i-use-bert-or-bart-for-machine-translation
<p>I am working on a project to use a pre-trained model and finetune it for customized language translations, for example from English to French. Is it possible to load these models in Tensorflow and run them to see how translations turn out and fine-tune afterward?</p>
<p>Probably the fastest way to do so is relying on the HuggingFace transformers library. If you're not familiar with it, you may take a look at their official <a href="https://huggingface.co/transformers/quicktour.html" rel="nofollow noreferrer">documentation</a>. To fine-tune a BART for NMT you can use directly this provided <a href="https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_translation.py" rel="nofollow noreferrer">script</a> (it works with some other pre-trained models too).</p>
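For reference, the linked script is typically invoked from the command line roughly like this. This is only a sketch based on the script's documented flags; the model name, data files, and hyperparameters are placeholders to adapt:

```shell
python run_translation.py \
    --model_name_or_path facebook/bart-base \
    --do_train --do_eval \
    --source_lang en --target_lang fr \
    --train_file train.json \
    --validation_file val.json \
    --output_dir ./bart-en-fr \
    --per_device_train_batch_size 8 \
    --predict_with_generate
```

The train/validation files are JSON Lines, with one `{"translation": {"en": ..., "fr": ...}}` object per line.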
1,387
machine translation
How to predict &lt;unk&gt; token for neural machine translation
https://stackoverflow.com/questions/73847435/how-to-predict-unk-token-for-neural-machine-translation
<p>For example, if I have the words MKIK or &quot;牛逼&quot; (which are artificially created), how can we tell a neural network (a transformer model) to keep them unchanged in the output?</p> <p>The problem arises when using the transformer model in fairseq.</p> <p>I found that fairseq has a <code>--replace-unk</code> parameter, but it doesn't seem to work with the transformer model, or it has a bug.</p>
<p>I have an idea myself: pretrain a naive model with all of the unknown tokens, like Chinese characters, then fine-tune the model without those unknown tokens.</p> <p>I guess in this way the neural network connections will not update?</p> <p>But I will have to play around with the structure and see.</p>
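Conceptually, the `--replace-unk` idea mentioned in the question can be sketched as a post-processing step (illustrative only, not fairseq's actual implementation): wherever the model emitted `<unk>`, copy through the source token that received the most attention at that step.

```python
def replace_unk(hyp_tokens, src_tokens, attn):
    # attn[i] = index of the source token most attended to at target step i
    return [src_tokens[attn[i]] if tok == "<unk>" else tok
            for i, tok in enumerate(hyp_tokens)]

out = replace_unk(["the", "<unk>", "rose", "today"],
                  ["MKIK", "stock", "rose"],
                  attn=[1, 0, 2, 2])
```

This is why copy-through schemes depend on usable attention (or alignment) scores; subword models such as BPE sidestep the problem by making `<unk>` rare in the first place.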
1,388
machine translation
How can I fine-tune mBART-50 for machine translation in the transformers Python library so that it learns a new word?
https://stackoverflow.com/questions/76191862/how-can-i-fine-tune-mbart-50-for-machine-translation-in-the-transformers-python
<p>I try to fine-tune mBART-50 (<a href="https://arxiv.org/pdf/2008.00401" rel="nofollow noreferrer">paper</a>, <a href="https://huggingface.co/facebook/mbart-large-50" rel="nofollow noreferrer">pre-trained model on Hugging Face</a>) for machine translation in the transformers Python library. To test the fine-tuning, I am trying to simply teach mBART-50 a new word that I made up.</p> <p>I use the following code. Over 95% of the code is from the <a href="https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50" rel="nofollow noreferrer">Hugging Face documentation</a>:</p> <pre><code>from transformers import MBartForConditionalGeneration, MBart50TokenizerFast print('Model loading started') model = MBartForConditionalGeneration.from_pretrained(&quot;facebook/mbart-large-50&quot;) tokenizer = MBart50TokenizerFast.from_pretrained(&quot;facebook/mbart-large-50&quot;, src_lang=&quot;fr_XX&quot;, tgt_lang=&quot;en_XX&quot;) print('Model loading done') src_text = &quot; billozarion &quot; tgt_text = &quot; plorization &quot; model_inputs = tokenizer(src_text, return_tensors=&quot;pt&quot;) with tokenizer.as_target_tokenizer(): labels = tokenizer(tgt_text, return_tensors=&quot;pt&quot;).input_ids print('Fine-tuning started') for i in range(1000): #pass model(**model_inputs, labels=labels) # forward pass print('Fine-tuning ended') # Testing whether the model learned the new word. Translate French to English tokenizer = MBart50TokenizerFast.from_pretrained(&quot;facebook/mbart-large-50-many-to-many-mmt&quot;) tokenizer.src_lang = &quot;fr_XX&quot; article_fr = src_text encoded_fr = tokenizer(article_fr, return_tensors=&quot;pt&quot;) generated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id[&quot;en_XX&quot;]) translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(translation) </code></pre> <p>However, the new word wasn't learned. 
The output is &quot;billozarion&quot; instead of &quot;plorization&quot;. Why?</p> <p>I'm strictly following the Hugging Face documentation, unless I missed something. The <code># forward pass</code> does make me concerned, as one would need a backward pass to update the gradients. Maybe this means that the documentation is incorrect, however I can't test that hypothesis as I don't know how to add the backward pass.</p> <hr /> <p>Environment that I used to run the code: Ubuntu 20.04.5 LTS with an NVIDIA A100 40GB GPU (I also tested with an NVIDIA T4 Tensor Core GPU) and CUDA 12.0 with the following conda environment:</p> <pre><code>conda create --name mbart-python39 python=3.9 conda activate mbart-python39 pip install transformers==4.28.1 pip install chardet==5.1.0 pip install sentencepiece==0.1.99 pip install protobuf==3.20 </code></pre>
<p>One could add the following to fine-tune mBART-50:</p> <pre><code>from transformers.optimization import AdamW # Set up the optimizer and training settings optimizer = AdamW(model.parameters(), lr=1e-4) model.train() print('Fine-tuning started') for i in range(100): optimizer.zero_grad() output = model(**model_inputs, labels=labels) # forward pass loss = output.loss loss.backward() optimizer.step() print('Fine-tuning ended') </code></pre> <p>Full code:</p> <pre><code>from transformers import MBartForConditionalGeneration, MBart50TokenizerFast from transformers.optimization import AdamW import os os.environ[&quot;TOKENIZERS_PARALLELISM&quot;] = &quot;false&quot; print('Model loading started') model = MBartForConditionalGeneration.from_pretrained(&quot;facebook/mbart-large-50&quot;) tokenizer = MBart50TokenizerFast.from_pretrained(&quot;facebook/mbart-large-50&quot;, src_lang=&quot;fr_XX&quot;, tgt_lang=&quot;en_XX&quot;) print('Model loading done') src_text = &quot; billozarion &quot; tgt_text = &quot; plorizatizzzon &quot; model_inputs = tokenizer(src_text, return_tensors=&quot;pt&quot;) with tokenizer.as_target_tokenizer(): labels = tokenizer(tgt_text, return_tensors=&quot;pt&quot;).input_ids # Set up the optimizer and training settings optimizer = AdamW(model.parameters(), lr=1e-4) model.train() print('Fine-tuning started') for i in range(100): optimizer.zero_grad() output = model(**model_inputs, labels=labels) # forward pass loss = output.loss loss.backward() optimizer.step() print('Fine-tuning ended') # translate French to English tokenizer = MBart50TokenizerFast.from_pretrained(&quot;facebook/mbart-large-50-many-to-many-mmt&quot;) tokenizer.src_lang = &quot;fr_XX&quot; article_fr = src_text encoded_fr = tokenizer(article_fr, return_tensors=&quot;pt&quot;) generated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id[&quot;en_XX&quot;]) translation =tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 
print(translation) </code></pre> <p>It outputs the correct made up translation &quot;plorizatizzzon&quot;.</p> <p>I reported the documentation issue on <a href="https://github.com/huggingface/transformers/issues/23185" rel="nofollow noreferrer">https://github.com/huggingface/transformers/issues/23185</a></p> <hr /> <p><a href="https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation" rel="nofollow noreferrer">https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation</a> contains two more advanced scripts to fine-tune mBART and T5 (thanks <a href="https://github.com/sgugger" rel="nofollow noreferrer">sgugger</a> for <a href="https://github.com/huggingface/transformers/issues/23185#issuecomment-1537564079" rel="nofollow noreferrer">pointing</a> me to it). Here is how to use the script to fine-tune mBART:</p> <p>Create a new conda environment:</p> <pre><code>conda create --name mbart-source-transformers-python39 python=3.9 conda activate mbart-source-transformers-python39 git clone https://github.com/huggingface/transformers.git cd transformers pip install git+https://github.com/huggingface/transformers pip install datasets evaluate accelerate sacrebleu conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia pip install sentencepiece==0.1.99 pip install protobuf==3.20 pip install --force-reinstall charset-normalizer==3.1.0 </code></pre> <p>Command:</p> <pre><code>python examples/pytorch/translation/run_translation.py \ --model_name_or_path facebook/mbart-large-50 \ --do_train \ --do_eval \ --source_lang fr_XX \ --target_lang en_XX \ --source_prefix &quot;translate French to English: &quot; \ --train_file finetuning-translation-train.json \ --validation_file finetuning-translation-validation.json \ --test_file finetuning-translation-test.json \ --output_dir tmp/tst-translation4 \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --do_predict \ 
--predict_with_generate </code></pre> <p>(Note: the readme seems to have missed <code>--do_predict</code>)</p> <p>with <code>finetuning-translation-train.json</code>, <code>finetuning-translation-validation.json</code> and <code>finetuning-translation-test.json</code> formatted as follows with the <a href="https://jsonlines.org" rel="nofollow noreferrer">JSON Lines</a> format:</p> <pre><code>{&quot;translation&quot;: {&quot;en&quot;: &quot;20 year-old male tennis player.&quot;, &quot;fr&quot;: &quot;Joueur de tennis de 12 ans&quot;}} {&quot;translation&quot;: {&quot;en&quot;: &quot;2 soldiers in an old military Jeep&quot;, &quot;fr&quot;: &quot;2 soldats dans une vielle Jeep militaire&quot;}} </code></pre> <p>(Note: one must use double quotes in the .json files. Single quotes e.g. <code>'en'</code> will make the script crash.)</p> <p>I run the code on Ubuntu 20.04.5 LTS with an NVIDIA T4 Tensor Core GPU (16GB memory) and CUDA 12.0. The mBART-50 model takes around 15GB of GPU memory.</p>
1,389
machine translation
How to concatenate a split word using NLP caused by tokenizers after machine translation?
https://stackoverflow.com/questions/77005341/how-to-concatenate-a-split-word-using-nlp-caused-by-tokenizers-after-machine-tra
<p>Russian translation produces the following result. Is there an NLP function we can use to concatenate the split tokens into &quot;Europe's&quot; in the following string?</p> <p>&quot;Nitzchia Protector Todibo can go to one of Europe ' s top clubs&quot;</p>
<p>Try a detokenizer. Because its rules handle tokens like <code>x 's</code> -&gt; <code>x's</code> but not <code>x ' s</code> -&gt; <code>x's</code>, you might have to apply the detokenizer iteratively, e.g. using <code>sacremoses</code>:</p> <pre><code>&gt;&gt;&gt; from sacremoses import MosesTokenizer, MosesDetokenizer &gt;&gt;&gt; md = MosesDetokenizer(lang='en') &gt;&gt;&gt; md.detokenize(&quot;Nitzchia Protector Todibo can go to one of Europe ' s top clubs&quot;.split()) &quot;Nitzchia Protector Todibo can go to one of Europe 's top clubs&quot; &gt;&gt;&gt; md.detokenize(md.detokenize(&quot;Nitzchia Protector Todibo can go to one of Europe ' s top clubs&quot;.split()).split()) &quot;Nitzchia Protector Todibo can go to one of Europe's top clubs&quot; </code></pre>
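If sacremoses is unavailable, the iterate-until-stable idea itself is easy to sketch with stdlib regexes. The two rules below are toy stand-ins for the real detokenizer's rule set, not sacremoses itself:

```python
import re

# Toy stand-in for a detokenizer rule set: each pass applies simple
# join rules, and we repeat until the text stops changing.
def detok_pass(text):
    text = re.sub(r"\s+'\s+s\b", " 's", text)   # "x ' s" -> "x 's"
    text = re.sub(r"\s+'s\b", "'s", text)       # "x 's"  -> "x's"
    return text

def detokenize_fixpoint(text):
    while True:
        new = detok_pass(text)
        if new == text:
            return new
        text = new

print(detokenize_fixpoint("one of Europe ' s top clubs"))
# -> one of Europe's top clubs
```

The fixpoint loop is the key point: a single pass of a rule set may only undo one level of splitting, so re-applying until nothing changes is a safe generalization of calling the detokenizer twice.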
1,390
machine translation
Do programming language compilers first translate to assembly or directly to machine code?
https://stackoverflow.com/questions/845355/do-programming-language-compilers-first-translate-to-assembly-or-directly-to-mac
<p>I'm primarily interested in popular and widely used compilers, such as gcc. But if things are done differently with different compilers, I'd like to know that, too.</p> <p>Taking gcc as an example, does it compile a short program written in C directly to <em>machine</em> code, or does it first translate it to human-readable assembly, and only then uses an (in-built?) assembler to translate the assembly program into binary, <em>machine</em> code -- a series of instructions to the CPU?</p> <p>Is using assembly code to create a binary executable a significantly expensive operation? Or is it a relatively simple and quick thing to do?</p> <p>(Let's assume we're dealing with only the x86 family of processors, and all programs are written for Linux.)</p>
<p>gcc actually produces assembler and assembles it using the <strong>as</strong> assembler. Not all compilers do this - the MS compilers produce object code directly, though you can make them generate assembler output. Translating assembler to object code is a pretty simple process, at least compared with C→Assembly or C→Machine-code translation.</p> <p>Some compilers produce other high-level language code as their output - for example, <strong>cfront</strong>, the first C++ compiler, produced C as its output, which was then compiled to machine code by a C compiler.</p> <p>Note that neither direct compilation nor assembly actually produces an executable. That is done by the <strong>linker</strong>, which takes the various object code files produced by compilation/assembly, resolves all the names they contain and produces the final executable binary.</p>
1,391
machine translation
Translation API with candidates
https://stackoverflow.com/questions/37982632/translation-api-with-candidates
<p>I am looking for a translation API that outputs all the candidates and not just a single "best" candidate.</p> <p>All statistical machine translation systems at the last stage score the list of translation candidates and choose the best candidate. I wonder if there is a system like Google Translate or Microsoft Translate that returns the list of all possible candidates so that I can score them myself.</p> <p>Thanks.</p>
<p>I think WordNet is good for this: <a href="https://wordnet.princeton.edu/" rel="nofollow">https://wordnet.princeton.edu/</a></p> <p>Originally, WordNet is an English ontology describing English words in English, showing synonyms, definitions, etc., but there are many WordNet projects for other languages as well as multilingual WordNets. Here are some interesting links: <a href="http://globalwordnet.org/wordnets-in-the-world/" rel="nofollow">http://globalwordnet.org/wordnets-in-the-world/</a> <a href="http://www.certifiedchinesetranslation.com/openaccess/WordNet/" rel="nofollow">http://www.certifiedchinesetranslation.com/openaccess/WordNet/</a></p> <p>There is also a big dictionary project that leverages WordNets: <a href="http://babelnet.org/about" rel="nofollow">http://babelnet.org/about</a></p>
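If you can obtain an n-best list from any MT system, reranking it yourself is straightforward. A stdlib sketch with a made-up length-penalty heuristic standing in for your own scorer:

```python
# Hypothetical reranking sketch: given an n-best candidate list from any
# MT system, apply your own score and pick the best. The scoring
# function here is a toy stand-in, not a real translation quality model.
def my_score(candidate, source):
    # toy heuristic: prefer candidates whose word count matches the source
    return -abs(len(candidate.split()) - len(source.split()))

source = "je suis content"
candidates = ["i am happy", "i am glad", "happy me", "i am very happy indeed"]
ranked = sorted(candidates, key=lambda c: my_score(c, source), reverse=True)
print(ranked[0])
```

Any real scorer (a language model, a learned quality estimator) can be dropped in for `my_score` without changing the surrounding logic.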
1,392
machine translation
Why does my translation machine always output &#39;t&#39;?
https://stackoverflow.com/questions/60565308/why-does-my-translation-machine-always-output-t
<p>My code:</p> <pre><code>import numpy as np from keras import Input, Model from keras.layers import LSTM, Dense input_texts = [] target_texts = [] input_characters = set() target_characters = set() with open('catalan.txt', 'r', encoding = 'utf-8') as f: lines = f.read().split('\n') for line in lines[: min(653, len(lines) - 1)]: input_text, target_text = line.split('\t') target_text = '\t' + target_text + '\n' input_texts.append(input_text) target_texts.append(target_text) for char in input_text: if char not in input_characters: input_characters.add(char) for char in target_text: if char not in target_characters: target_characters.add(char) input_characters = sorted(list(input_characters)) target_characters = sorted(list(target_characters)) num_encoder_tokens = len(input_characters) num_decoder_tokens = len(target_characters) max_encoder_seq_length = max([len(txt) for txt in input_texts]) max_decoder_seq_length = max([len(txt) for txt in target_texts]) input_token_index = dict( [(char, i) for i, char in enumerate(input_characters)]) target_token_index = dict( [(char, i) for i, char in enumerate(target_characters)]) encoder_input_data = np.zeros( (len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype = 'float32') decoder_input_data = np.zeros( (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype = 'float32') decoder_target_data = np.zeros( (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype = 'float32') for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)): for t, char in enumerate(input_text): encoder_input_data[i, t, input_token_index[char]] = 1. for t, char in enumerate(target_text): decoder_input_data[i, t, target_token_index[char]] = 1. if t &gt; 0: decoder_target_data[i, t - 1, target_token_index[char]] = 1. 
latent_dim = 10 batch_size = 256 epochs = 10 encoder_inputs = Input(shape = (None, num_encoder_tokens)) encoder = LSTM(latent_dim, return_state = True) encoder_outputs, state_h, state_c = encoder(encoder_inputs) encoder_states = [state_h, state_c] decoder_inputs = Input(shape = (None, num_decoder_tokens)) decoder_lstm = LSTM(latent_dim, return_sequences = True, return_state = True) decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state = encoder_states) decoder_dense = Dense(num_decoder_tokens, activation = 'softmax') decoder_outputs = decoder_dense(decoder_outputs) model = Model([encoder_inputs, decoder_inputs], decoder_outputs) model.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy') model.fit([encoder_input_data, decoder_input_data], decoder_target_data, batch_size = batch_size, epochs = epochs, validation_split = 0.2) encoder_model = Model(encoder_inputs, encoder_states) decoder_state_input_h = Input(shape = (latent_dim,)) decoder_state_input_c = Input(shape = (latent_dim,)) decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c] decoder_outputs, state_h, state_c = decoder_lstm( decoder_inputs, initial_state = decoder_states_inputs) decoder_states = [state_h, state_c] decoder_outputs = decoder_dense(decoder_outputs) decoder_model = Model( [decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states) # Reverse-lookup token index to decode sequences back to # something readable. reverse_input_char_index = dict( (i, char) for char, i in input_token_index.items()) reverse_target_char_index = dict( (i, char) for char, i in target_token_index.items()) def decode_sequence(input_seq): states_value = encoder_model.predict(input_seq) target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, target_token_index['\t']] = 1. 
stop_condition = False decoded_sentence = '' while not stop_condition: output_tokens, h, c = decoder_model.predict( [target_seq] + states_value) # Sample a token sampled_token_index = np.argmax(output_tokens[0, -1, :]) sampled_char = reverse_target_char_index[sampled_token_index] decoded_sentence += sampled_char if (sampled_char == '\n' or len(decoded_sentence) &gt; max_decoder_seq_length): stop_condition = True target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, sampled_token_index] = 1. # Update states states_value = [h, c] return decoded_sentence for seq_index in range(5): input_seq = encoder_input_data[seq_index: seq_index + 1] decoded_sentence = decode_sequence(input_seq) print('\n') print('Input sentence:', input_texts[seq_index]) print('Decoded sentence:', decoded_sentence) </code></pre> <p>Output:</p> <pre><code>Using TensorFlow backend. 2020-03-06 16:37:17.569143: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fc6781e2ee0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-03-06 16:37:17.569165: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version Train on 521 samples, validate on 131 samples Epoch 1/10 256/521 [=============&gt;................] - ETA: 1s - loss: 1.3404 512/521 [============================&gt;.] - ETA: 0s - loss: 1.3235 521/521 [==============================] - 2s 4ms/step - loss: 1.3269 - val_loss: 2.2806 Epoch 2/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.3238 512/521 [============================&gt;.] - ETA: 0s - loss: 1.3232 521/521 [==============================] - 1s 1ms/step - loss: 1.3226 - val_loss: 2.2743 Epoch 3/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.3432 512/521 [============================&gt;.] 
- ETA: 0s - loss: 1.3204 521/521 [==============================] - 1s 1ms/step - loss: 1.3192 - val_loss: 2.2671 Epoch 4/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.3363 512/521 [============================&gt;.] - ETA: 0s - loss: 1.3180 521/521 [==============================] - 1s 1ms/step - loss: 1.3153 - val_loss: 2.2586 Epoch 5/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.2933 512/521 [============================&gt;.] - ETA: 0s - loss: 1.3102 521/521 [==============================] - 1s 1ms/step - loss: 1.3105 - val_loss: 2.2467 Epoch 6/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.3062 512/521 [============================&gt;.] - ETA: 0s - loss: 1.3085 521/521 [==============================] - 1s 2ms/step - loss: 1.3038 - val_loss: 2.2313 Epoch 7/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.3044 512/521 [============================&gt;.] - ETA: 0s - loss: 1.2919 521/521 [==============================] - 1s 1ms/step - loss: 1.2947 - val_loss: 2.2081 Epoch 8/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.2874 512/521 [============================&gt;.] - ETA: 0s - loss: 1.2801 521/521 [==============================] - 1s 1ms/step - loss: 1.2816 - val_loss: 2.1818 Epoch 9/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.2862 512/521 [============================&gt;.] - ETA: 0s - loss: 1.2708 521/521 [==============================] - 1s 1ms/step - loss: 1.2670 - val_loss: 2.1564 Epoch 10/10 256/521 [=============&gt;................] - ETA: 0s - loss: 1.2387 512/521 [============================&gt;.] - ETA: 0s - loss: 1.2506 521/521 [==============================] - 1s 1ms/step - loss: 1.2528 - val_loss: 2.1281 Input sentence: Wow! Decoded sentence: t Input sentence: Really? Decoded sentence: t Input sentence: Thanks. Decoded sentence: t Input sentence: Goodbye! Decoded sentence: t Input sentence: Hurry up. 
Decoded sentence: t </code></pre> <p>catalan.txt contains the text of this structure:</p> <pre class="lang-none prettyprint-override"><code>Wow! Carai! Really? De veritat? Thanks. Gràcies! Goodbye! Adéu! Hurry up. Afanya't. Too late. Massa tard. </code></pre> <p>Why do I always get <code>t</code>? I thought it should be a translation of the sentence in English. What's wrong with it?</p>
<p>As @Recessive answered in the comments: increase epochs.</p> <p>I tested with 1000 epochs and it worked, without changing any other parameters.</p> <p>Also, by tweaking those other parameters, results get better with fewer epochs.</p> <p>That means the code seems correct after correcting the <code>return</code> inside the <code>while</code> loop, as noted by @h4z3.</p>
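The effect is easy to see with a toy gradient-descent loop (a generic illustration, not the Keras model itself): too few update steps leave the loss high, while more steps drive it toward zero.

```python
# Toy illustration of under-training: minimize loss = w**2 by gradient
# descent and compare the final loss after few vs. many steps.
def train(steps, lr=0.1):
    w = 5.0                      # start far from the optimum w* = 0
    for _ in range(steps):
        w -= lr * 2 * w          # gradient of w**2 is 2*w
    return w * w                 # final loss

print(train(10))    # under-trained: loss still large
print(train(1000))  # converged: loss is essentially 0
```

The same reasoning applies to the seq2seq model above: with only 10 epochs on a tiny character-level dataset, the decoder has barely moved from its initialization, so it emits the same near-uniform guess (`t`) for everything.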
1,393
machine translation
what is the format of word alignments in machine translation?
https://stackoverflow.com/questions/37982045/what-is-the-format-of-word-alignments-in-machine-translation
<p>I am reading <a href="http://www.aclweb.org/anthology/P05-1032" rel="nofollow">this</a> paper and having a difficulty understanding the way word alignments are represented. To be precise, right below section <code>4.1</code>, the authors say the format of the alignment is <code>(i,j)</code> where <code>i</code> ranges within the source sentence length and <code>j</code> ranges within the target sentence range. This means that each alignment is a pair of two numbers, which given that sentences are typically not longer than 40-100 words, values for <code>i</code>, and <code>j</code> can be stored using <code>short</code> type. So, I expect to see that the amount of space required to store these alignments be <code>2 x sizeof(short) x number of word alignments</code>. But if you go to the next page where, right above section <code>4.2</code>, they say the space is <code>sizeof(short) x number of word alignments</code>. WHY? Am I confusing stuff?</p>
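One plausible explanation (an assumption on my part, not stated in the paper): if both sentence lengths stay below 256, each index fits in a single byte, so an (i, j) pair can be packed into one 16-bit short — giving sizeof(short) per alignment link rather than two shorts. A stdlib Python sketch of that packing:

```python
import struct

# Hypothetical packing scheme: with i, j < 256, an (i, j) alignment link
# fits into a single 16-bit value (one byte per index).
def pack_link(i, j):
    assert 0 <= i < 256 and 0 <= j < 256
    return (i << 8) | j

def unpack_link(packed):
    return ((packed >> 8) & 0xFF, packed & 0xFF)

alignments = [(0, 0), (1, 2), (3, 3), (40, 99)]
packed = struct.pack(f"{len(alignments)}H", *(pack_link(i, j) for i, j in alignments))
print(len(packed))  # -> 8: two bytes (one short) per link, not four
restored = [unpack_link(v) for v in struct.unpack(f"{len(alignments)}H", packed)]
print(restored == alignments)  # -> True
```

Under this reading the paper's sizeof(short) × number-of-links figure is consistent with the (i, j) pair format, since the pair is stored packed rather than as two separate shorts.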
1,394
machine translation
Output of the Embedding layer in Decoder(Neural machine translation)
https://stackoverflow.com/questions/63272961/output-of-the-embedding-layer-in-decoderneural-machine-translation
<p>I am trying to understand the Attention model using the following tutorial: <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/nmt_with_attention</a></p> <p>In the Decoder section it's written:</p> <pre><code># x shape after passing through embedding == (batch_size, 1, embedding_dim) x = self.embedding(x) </code></pre> <p>I don't understand why the embedding output is (batch_size, 1, embedding_dim). According to the documentation (<a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding</a>) the output must be (batch_size, input_length, output_dim), which in the case of the tutorial is (batch_size, max_len, embedding_dim).</p> <p>Question: Why is the second dimension 1, and not max_len?</p>
<p>The model in this tutorial is a sequence-to-sequence model, so at each decoding step the model receives one word of the text at a time. This is why the max_len in (batch_size, max_len, embedding_dim) is equal to 1: each word is represented by one vector of size embedding_dim.</p>
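A pure-Python toy (not the tutorial's Keras layer) showing why the lookup keeps a length-1 time axis when the decoder is fed one token id per step:

```python
# Toy embedding lookup: the embedding matrix maps a token id to a vector,
# and indexing with a (batch_size, 1) array of ids keeps the length-1 axis.
batch_size, vocab_size, embedding_dim = 4, 10, 8
embedding_matrix = [[float(i * embedding_dim + d) for d in range(embedding_dim)]
                    for i in range(vocab_size)]

x = [[3], [7], [0], [9]]                 # shape (batch_size, 1): one id each
embedded = [[embedding_matrix[tok] for tok in row] for row in x]

shape = (len(embedded), len(embedded[0]), len(embedded[0][0]))
print(shape)  # -> (4, 1, 8): the middle axis is 1, not max_len
```

If the decoder were instead fed the whole target sequence at once (teacher forcing over all steps in one call), the middle axis would be max_len; feeding one token per step is what fixes it at 1.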
1,395
machine translation
Phrase-based translation at Google API PBMT
https://stackoverflow.com/questions/51976180/phrase-based-translation-at-google-api-pbmt
<p>I am trying to use phrase-based machine translation as provided by the Google API (PBMT). </p> <p>Is it possible to provide Google with my own mappings of terms between languages, thus extending their phrase table? </p> <p>Thank you! cheers zabe</p>
<p>tl;dr: It's not possible. </p> <p>The parameters you can specify in a Translation Request are only the ones listed in the <a href="https://cloud.google.com/translate/docs/reference/translate" rel="nofollow noreferrer">Translate Method reference</a>.</p> <p>If you're interested, Google released <a href="https://cloud.google.com/translate/automl/" rel="nofollow noreferrer">Cloud AutoML Translate</a>, which allows to train models using sentence pairs. Its model is not phrase-based though, so it may be not what you exactly want.</p> <p>For phrase-based model specifically, the only (long-shot) alternative is to go to the <a href="https://translate.google.com/community" rel="nofollow noreferrer">Google Translation Community</a> to provide your own pairs / mappings and wait until Google enhances the Phrase-based model.</p>
1,396
machine translation
Google Translate vs. Google Cloud Translate
https://stackoverflow.com/questions/65388265/google-translate-vs-google-cloud-translate
<p>Google Cloud Translation offers Neural Machine Translation (NMT) and Phrase-Based Machine Translation (PBMT). It is stated that &quot;nmt model is used if the language pair is supported, otherwise the PBMT model is used.&quot;(<a href="https://cloud.google.com/translate/docs/advanced/translating-text-v3" rel="nofollow noreferrer">https://cloud.google.com/translate/docs/advanced/translating-text-v3</a>).</p> <p>Does Google Translate web (<a href="https://translate.google.com" rel="nofollow noreferrer">https://translate.google.com</a>) work in the same way? Does it use exactly the same NMT model as cloud translate API?</p>
1,397
machine translation
Translating MIPS to machine code
https://stackoverflow.com/questions/11972320/translating-mips-to-machine-code
<pre><code> .text .align 2 .global main .equ val,0x4712 # 16-bit binary code for 0x4712: 0100 0111 0001 0010 # Program code starts now main: movi r16,val movi r17,0 loop: addi r17,r17,1 subi r16,r16,1 bne r16,r0,loop stop: br stop </code></pre> <p>How should I translate the above to machine code? I need to know how to do the translation, not just the resulting code. I figure I could try to get the opcodes for the instructions, but movi is a pseudoinstruction and I don't know where to find it. Should I look it up in the Nios II manual?</p> <h2>Update</h2> <p>The first four instructions are immediate-type, so that field format should be used. movi and subi are both pseudoinstructions implemented with addi, so the opcode for addi will be used. I was helped and I know that the instruction <code>movi r16, val</code> will translate to</p> <p><code>00000100000100011100010010000100</code></p> <p>so the opcode is 000100 binary, which is 0x04 in hex, which the manual also states is the opcode for addi. So I think we have the first four opcodes; they are all 000100.</p> <h2>Update 2</h2> <p>I think I know the opcode and the immediate field of most of the instructions now:</p> <p>The sequence 0100011100010010 is 0x4712, which is the variable <code>val</code> that was declared with <code>.equ</code>, so the first four opcodes should be 000100 since they're all addi, and the manual says addi is 0x04. How to translate the two five-bit register fields I don't know right now, but I could check the manual. It says <code>br</code> has opcode 0x06, so the opcode field for <code>br</code> should be 000110. <code>bne</code> has opcode 0x1E, which in binary is 011110 = 30 (?)</p> <p>Is this a correct beginning?</p>
<p>First off, why don't you at least try using an assembler to see what is produced?</p> <p>You need to read the MIPS instruction set references to see the real MIPS instructions; often the pseudo-instructions are described in these references too. As far as movi goes, a load-upper instruction is the obvious way to control the upper bits, and either an AND or an OR is an obvious way to set the lower bits. If the value is a small number then you only need an AND or an OR, using r0 as one of the operands (to zero the upper bits).</p> <p>The problem with these pseudo-instructions in MIPS is that you have to be careful not to use one in a branch shadow (unless it translates to a single instruction). I recommend learning the assembly language first using none of the pseudo-instructions; later, once you understand the instruction set and rules, use pseudo-instructions to make the code more readable or maintainable. Personally I tend to stay with the pure instructions, giving a one-to-one mapping (asm instruction to machine instruction).</p>
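To illustrate the load-upper-plus-OR expansion in the abstract (the mnemonics below are assumptions for illustration, not the actual Nios II encodings — the real expansion is in the Nios II manual), a small Python sketch:

```python
# Hypothetical expansion of a "load 32-bit immediate" pseudo-instruction:
# split the value into upper and lower 16-bit halves; if the upper half
# is zero, a single OR-with-r0 instruction suffices.
def expand_li(reg, value):
    upper = (value >> 16) & 0xFFFF
    lower = value & 0xFFFF
    if upper == 0:                       # small value: one instruction
        return [f"ori  {reg}, r0, 0x{lower:04x}"]
    return [f"lui  {reg}, 0x{upper:04x}",
            f"ori  {reg}, {reg}, 0x{lower:04x}"]

print(expand_li("r16", 0x4712))          # val fits in 16 bits: single ori
print(expand_li("r16", 0x12345678))      # full 32-bit value: lui + ori
```

This matches the question's observation: since `val` (0x4712) fits in 16 bits, `movi r16,val` can expand to a single immediate-type instruction, which is why it encodes with the addi opcode.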
1,398
machine translation
Remove &quot;Machine Translated by Google&quot; google api document translate
https://stackoverflow.com/questions/78653380/remove-machine-translated-by-google-google-api-document-translate
<p>I am paying to use the Google api to translate document and google is adding <code>Machine Translated by Google</code> on each page.</p> <p>I know this is part of the attribution requirements but if I specify it on my website it should not be required on the generated document.</p> <pre class="lang-js prettyprint-override"><code>const crypto = require('crypto') const { TranslationServiceClient } = require('@google-cloud/translate').v3 const { Storage } = require('@google-cloud/storage'); const fetch = require('node-fetch'); const fs = require('fs') const translationClient = new TranslationServiceClient(); const parent = translationClient.locationPath('***', 'global'); const storage = new Storage(); async function translateFile({ uri, from, to }) { process.env.GOOGLE_APPLICATION_CREDENTIALS = './***.json' const pdfResponse = await fetch(uri); if (!pdfResponse.ok) { throw new Error(`Failed to download the PDF: ${response.statusText}`); } const buffer = await pdfResponse.buffer(); const bucket = storage.bucket('***'); const file = bucket.file(`${crypto.randomUUID()}.pdf`) await file.save(buffer) const inputUri = `gs://${bucket.name}/${file.name}` const documentInputConfig = { gcsSource: { inputUri } }; const request = { parent, documentInputConfig: documentInputConfig, sourceLanguageCode: from, targetLanguageCode: to, //customizedAttribution: 'Test' }; const [response] = await translationClient.translateDocument(request); await file.delete() fs.writeFileSync(`data/${crypto.randomUUID()}.pdf`, response.documentTranslation.byteStreamOutputs[0]) } </code></pre> <p>Seems like I can change it with <code>customizedAttribution</code> but each time I have an error <code>Error: 3 INVALID_ARGUMENT: Invalid customized attribution: Test</code></p> <p>The other solution as I let my user know it is translated by google will be a post process but remove text keeping background is not that simple with nodejs the only solution I have is to set a white rectangle on <code>Machine 
Translated by Google</code>, but there is no way to remove the text while keeping the background.</p>
1,399
implement RAG
Is there a way to implement multiple csv&#39;s as RAG?
https://stackoverflow.com/questions/78951491/is-there-a-way-to-implement-multiple-csvs-as-rag
<p>I recently uploaded a csv and wanted to create a project to analyze the csv with an LLM.</p> <p>However, I don't know which RAG approach to use for the csv file.</p> <p>In addition, the contents of the csv file are numbers, not natural language, so it seems too difficult to get good performance out of RAG.</p> <p>Does anyone have a good method or idea?</p> <p>I looked through the Pandas DataFrame and LangChain documentation, but I couldn't find a way to implement a performant RAG for csv.</p>
<p>I think the advantage of RAG is that it processes unstructured text data. If you want to process csv data, you still need some specific functions.</p>
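A hedged sketch of what such "specific functions" might look like: instead of embedding raw numbers, derive short textual summaries per column that an LLM (or a RAG index) can actually use. Stdlib only; the column names and data are made up:

```python
import csv, io, statistics

# Hypothetical preprocessing step: turn a numeric CSV column into a
# natural-language summary string suitable for retrieval or prompting.
raw = "month,sales\nJan,100\nFeb,130\nMar,90\n"
rows = list(csv.DictReader(io.StringIO(raw)))
sales = [float(r["sales"]) for r in rows]
summary = (f"sales: n={len(sales)}, mean={statistics.mean(sales):.1f}, "
           f"min={min(sales)}, max={max(sales)}")
print(summary)
# -> sales: n=3, mean=106.7, min=90.0, max=130.0
```

The summary strings can then be indexed like any other document, while exact numeric questions are answered by re-running the computation rather than by retrieval.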
1,400
implement RAG
How to Implement Retrieval Augmented Generation (RAG) for Semantic Search and Summarization?
https://stackoverflow.com/questions/78661577/how-to-implement-retrieval-augmented-generation-rag-for-semantic-search-and-su
<p>I am working on a project for a publishing house that involves implementing semantic search across an archive of approximately 50,000 articles, each averaging 15 pages in length. My understanding is that I need to use a Retrieval Augmented Generation (RAG) approach to achieve this. Here are my specific requirements:</p> <p>Indexing Documents: Convert documents into vector embeddings and store them in a way that allows for efficient retrieval.</p> <p>Semantic Search: Perform searches based on user queries by converting the queries into embeddings and finding the most relevant documents.</p> <p>Document Summarization: Summarize the content of the retrieved documents to present concise information to the user.</p> <p>I would like to know:</p> <p>What are the recommended tools or frameworks to implement RAG for this use case?</p> <p>How can I store metadata (like document IDs, titles, URLs) alongside vector embeddings to identify the source documents when presenting search results?</p> <p>What is the best approach to generate embeddings for both documents and user queries?</p> <p>I appreciate any advice or suggestions on how to implement this effectively. Thank you!</p> <p>I have researched various tools and frameworks that might help, including vector databases and embedding models. I understand that storing metadata alongside the embeddings is crucial for identifying source documents. However, I am unsure which specific tools or frameworks would best suit my needs and how to effectively implement and scale this solution.</p>
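The indexing and search steps described above can be sketched minimally with the stdlib: embed documents and the query, keep metadata (id, title) next to each vector, and rank by cosine similarity. The bag-of-words "embedding" and all names here are illustrative stand-ins; a real system would use a proper embedding model and a vector database:

```python
import math
from collections import Counter

# Toy embedding: bag-of-words counts (stand-in for a real embedding model).
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Store metadata next to each vector so results can cite their source.
index = [
    {"id": 17, "title": "Print markets", "vec": embed("print magazine sales decline")},
    {"id": 42, "title": "Digital archives", "vec": embed("digital archive search tools")},
]

query = embed("archive search")
best = max(index, key=lambda d: cosine(query, d["vec"]))
print(best["id"], best["title"])  # -> 42 Digital archives
```

The metadata question in the post reduces to exactly this pattern: each stored vector carries a dict (or database row) of id, title, and URL, and search results are reported from that dict rather than from the vector itself.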
1,401
implement RAG
RAG engine error: Can&#39;t instantiate abstract class BaseNode
https://stackoverflow.com/questions/78864925/rag-engine-error-cant-instantiate-abstract-class-basenode
<p>While executing the RAG query engine with the following code</p> <pre><code>query_engine = RetrieverQueryEngine.from_args(
    retriever,
    llm=AzureOpenAI(
        api_key='xxxxxxxxxxxxxxx',
        azure_endpoint=&quot;https://xxxxxxxxxxxxxx/&quot;,
        engine=&quot;openai-gpt35-1106&quot;,
        temperature=0.1,
        api_version=&quot;2023-09-15-preview&quot;))
response = query_engine.query(&quot;Summary in 2 lines&quot;),
logger.info(f&quot;RESULT:{response}&quot;)
</code></pre> <p>it throws the error below:</p> <pre><code>Can't instantiate abstract class BaseNode with abstract methods get_content,
get_metadata_str, get_type, hash, set_content (type=type_error)
</code></pre> <p>What could have gone wrong with the code, or with the RAG/LLM library?</p> <p>I am trying to implement RAG and run a query, but it fails with the error described above. I did all the necessary library imports at the beginning of the code, including the following:</p> <pre class="lang-py prettyprint-override"><code>from llama_index.core.schema import TextNode, BaseNode, NodeWithScore
</code></pre>
1,402
implement RAG
How to implement vector stores and RAG with Open AI functions with proven rule base code generation
https://stackoverflow.com/questions/78716134/how-to-implement-vector-stores-and-rag-with-open-ai-functions-with-proven-rule-b
<p>How to implement vector stores and RAG with OpenAI playground functions json code rule base for ez function loading that is a lot of time very hard to do , this is a proven system to max you code structure that is valid and will load!, just because it is valid does not mean that it will load!!!</p> <p>render a verified approach that is tested and rule based through trial code test</p> <h1>How to implement vector stores and RAG with OpenAI functions?</h1> <p>When implementing vector stores and Retrieval-Augmented Generation (RAG) with OpenAI functions, it's crucial to structure your JSON correctly to avoid common errors. Here's a comprehensive guide:</p> <h2>Rule Base for OpenAI Function JSON Structures</h2> <ol> <li>Use a single, top-level object.</li> <li>Include a &quot;name&quot; field at the root level.</li> <li>Include a &quot;description&quot; field at the root level.</li> <li>Use a &quot;parameters&quot; object to define the function's inputs.</li> <li>Within &quot;parameters&quot;, use a &quot;type&quot; field (usually &quot;object&quot;).</li> <li>Define properties within the &quot;parameters&quot; object.</li> <li>Use &quot;required&quot; array to specify mandatory parameters.</li> <li>Keep the structure as flat as possible.</li> <li>Avoid deeply nested objects or arrays.</li> <li>Use clear, descriptive names for properties.</li> </ol> <h2>Example: Vector Store Search with RAG Integration</h2> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;search_and_generate_response&quot;, &quot;description&quot;: &quot;Search vector store and generate a response using RAG&quot;, &quot;parameters&quot;: { &quot;type&quot;: &quot;object&quot;, &quot;properties&quot;: { &quot;query&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;description&quot;: &quot;The user's question or search query&quot; }, &quot;document_types&quot;: { &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: { &quot;type&quot;: &quot;string&quot;, 
&quot;enum&quot;: [&quot;CBA&quot;, &quot;constitution&quot;, &quot;bylaws&quot;, &quot;grievance&quot;, &quot;memo&quot;] }, &quot;description&quot;: &quot;Types of documents to search in the vector store&quot; }, &quot;max_results&quot;: { &quot;type&quot;: &quot;integer&quot;, &quot;description&quot;: &quot;Maximum number of relevant documents to retrieve&quot; }, &quot;response_style&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;enum&quot;: [&quot;concise&quot;, &quot;detailed&quot;, &quot;ELI5&quot;], &quot;description&quot;: &quot;Style of the generated response&quot; } }, &quot;required&quot;: [&quot;query&quot;] } } </code></pre>
1,403
implement RAG
Creating Overall RAG status in Tableau
https://stackoverflow.com/questions/43907675/creating-overall-rag-status-in-tableau
<p>I have three RAG status build with logic. Now my requirement is to create overall RAG over them. There are different department filters applied on three RAG status. Now I want to implement overall RAG on below condition. Please note - Below - G= Green, A=Amber and R=Red</p> <p>a. G-G-G = G</p> <p>b. G-G-A = G</p> <p>c. G-A-A = G</p> <p>d. A-A-A = A</p> <p>e. G-G-R = R</p> <p>f. G-R-R = R</p> <p>g. R-R-R = R</p> <p>Please suggest</p> <p>Below image shows how we are working for RAG1</p> <p>[1]</p> <p><a href="https://i.sstatic.net/dqJYQ.jpg" rel="nofollow noreferrer">2</a></p> <p><a href="https://i.sstatic.net/dqJYQ.jpg" rel="nofollow noreferrer">This image shows dashboard having three RAG and above all on left is overall RAG</a></p> <p>Considering situation a. G-G-G = G IF RAG1 returns G AND RAG2 returns G AND RAG3 returns G then over all RAG will be G(Green)</p> <p>Now, Considering situation d. A-A-A = A IF RAG1 returns A AND RAG2 returns A AND RAG3 returns A then over all RAG will be A(Amber)</p> <p>Now, Considering situation e. G-G-R = R IF RAG1 returns G AND RAG2 returns G AND RAG3 returns R then over all RAG will be R(Red)</p>
<p>This might be a possible solution:</p> <p>Recode the labels 'A', 'G' and 'R' as 0, 1 and 3 (numeric) respectively.</p> <p>Now create a column which is the sum of the recoded columns. For example, if the sequence is AGR, then new_col = 0 + 1 + 3 = 4.</p> <p>Now create a calculated field (CF) with the following logic (note Tableau's syntax uses <code>AND</code> and <code>ELSEIF</code>, not <code>&amp;</code> and <code>else if</code>):</p> <pre><code>IF new_col &gt; 0 AND new_col &lt;= 3 THEN 'G'
ELSEIF new_col &gt; 3 THEN 'R'
ELSE 'A'
END
</code></pre> <p>Then use this calculated field as the color indicator.</p>
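The recode-and-sum rule in this answer can be verified against all seven combinations listed in the question. A quick Python sketch of the same thresholds (Python here only checks the truth table; it is not part of the Tableau workbook):

```python
# Recode A=0, G=1, R=3 and sum, then apply the same thresholds as the
# Tableau calculated field to get the overall status.
SCORE = {"A": 0, "G": 1, "R": 3}

def overall_rag(statuses):
    total = sum(SCORE[s] for s in statuses)
    if 0 < total <= 3:
        return "G"
    elif total > 3:
        return "R"
    return "A"

# The seven required combinations from the question:
cases = {
    ("G", "G", "G"): "G", ("G", "G", "A"): "G", ("G", "A", "A"): "G",
    ("A", "A", "A"): "A", ("G", "G", "R"): "R", ("G", "R", "R"): "R",
    ("R", "R", "R"): "R",
}
for combo, expected in cases.items():
    assert overall_rag(combo) == expected
```

Choosing 3 (rather than 2) for R is what makes a single red outweigh two greens, which is exactly the asymmetry the question's rules require.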
1,404
implement RAG
How to implement RAG with LLMs for a large collection of local PDF documents
https://stackoverflow.com/questions/78118204/how-to-implement-rag-with-llms-for-a-large-collection-of-local-pdf-documents
<p>I am currently working on a project where I intend to utilize a LLM to provide answers to user inquiries, drawing from a substantial collection of local PDF documents. These documents are subject to daily updates, with approximately 10 new documents being added each day.</p> <p>Could you suggest the most effective method and process for enabling the LLM to access and utilize information from these local documents?</p> <p>I recognize that directly feeding all these documents into the LLM, such as ChatGPT, is not feasible.</p> <p>Would it be advisable to first employ libraries to extract content (text, tables, charts) from the PDF documents? Should I then proceed to embed this information and store it in a vector database, subsequently utilizing vector database search to supply the necessary information to the LLM for generating responses?</p>
<p>You can create a vector database storing all the documents (with a real-time pipeline that adds the new additions and indexes + generates embeddings for them too as they come in).</p> <p>So when you want to create a new query - you would first retrieve any relevant documents from your dataset; then use this context to generate answers based on the relevant context.</p> <p>You can read more about it on this blog from Elastic about Rag: <a href="https://www.elastic.co/what-is/retrieval-augmented-generation" rel="nofollow noreferrer">https://www.elastic.co/what-is/retrieval-augmented-generation</a>.</p> <p>This part describes what you were hinting at:</p> <blockquote> <h2>Retrieval</h2> <ul> <li>RAG starts with an input query. This could be a user's question or any piece of text that requires a detailed response.</li> <li>A retrieval model grabs pertinent information from knowledge bases, databases, or external sources — or multiple sources at once. Where the model searches depends on what the input query is asking. This retrieved information now serves as the reference source for whatever facts and context the model needs.</li> <li>The retrieved information is converted into vectors in a high-dimensional space. These knowledge vectors are stored in a vector database. The retrieval model ranks the retrieved information based on its relevance to the input query. Documents or passages with the highest scores are selected for further processing.</li> </ul> <h2>Generation</h2> <ul> <li>Next, a generation model, such as an LLM, uses the retrieved information to generate text responses.</li> <li>The generated text might go through additional post-processing steps to make sure it is grammatically correct and coherent.</li> <li>These responses are, on the whole, more accurate and make more sense in context because they have been shaped by the supplemental information the retrieval model has provided. 
This ability is especially important in specialized domains where public internet data is insufficient.</li> </ul> </blockquote> <p>You can pick and choose which technologies you want to use for each part - could be openAI as an LLM for embeddings; elasticsearch for the actual search engine; probably LangChain to keep everything in one pipeline; etc.</p> <p>Here's an example of a walkthrough for inspiration: <a href="https://www.elastic.co/search-labs/blog/articles/gen-ai-using-cohere-llm" rel="nofollow noreferrer">https://www.elastic.co/search-labs/blog/articles/gen-ai-using-cohere-llm</a></p> <p>Or this chatbot app example with multiple model options to choose from: <a href="https://github.com/elastic/elasticsearch-labs/tree/main/example-apps/chatbot-rag-app" rel="nofollow noreferrer">https://github.com/elastic/elasticsearch-labs/tree/main/example-apps/chatbot-rag-app</a></p> <p>Hope this helps!</p>
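The retrieve-then-generate flow quoted above can be sketched in a few lines. The snippet below fakes the embedding step with bag-of-words counts purely to show the shape of the pipeline; a real system would use a model-based embedder (OpenAI, Cohere, etc.) and a vector database, and the corpus here is made up:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model here and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "Invoice processing guide for the finance team",
    "RAG combines retrieval with LLM generation",
    "Vector databases store high-dimensional embeddings",
]
context = retrieve("how do vector embeddings get stored", corpus)
# The generation step would pass this grounded prompt to an LLM.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The real-time ingestion mentioned above amounts to running the same `embed`-and-store step on each newly added document as it arrives.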
1,405
implement RAG
VectorStore implementation throws type &quot;vector&quot; does not exist error with custom schema and table
https://stackoverflow.com/questions/79555127/vectorstore-implementation-throws-type-vector-does-not-exist-error-with-custom
<p>I am trying to implementing RAG using pgvector/Postgres and stuck on a strange problem where RAG search fails when running programmatically. The raw query works fine on PostgresDB though.</p> <p>We have two different issues:</p> <ol> <li><p>When using the <a href="https://docs.spring.io/spring-ai/reference/api/vectordbs/pgvector.html" rel="nofollow noreferrer">standard textbook implementation</a> we get:</p> <pre><code> processing failed: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [SELECT *, embedding &lt;=&gt; ? AS distance FROM DEV_GENAI_DATA_OWNER.temp_rag_tbl WHERE embedding &lt;=&gt; ? &lt; ? ORDER BY distance LIMIT ? ]] with root cause org.postgresql.util.PSQLException: ERROR: operator does not exist: public.vector &lt;=&gt; public.vector Hint: No operator matches the given name and argument types. You might need to add explicit type casts. Position: 21 </code></pre> </li> <li><p>When running the same RAG search query as a native query via Spring JPA, we get this error:</p> <pre><code>.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [SELECT *, embedding &lt;=&gt; ?::vector AS distance FROM temp_rag_tbl ORDER BY embedding &lt;=&gt; ?::vector LIMIT ?]] with root cause org.postgresql.util.PSQLException: ERROR: type &quot;vector&quot; does not exist Position: 29 </code></pre> <p>Clearly, the vector extension does exist: <code>SELECT * FROM pg_extension WHERE extname = 'vector';</code> -&gt; shows result</p> </li> </ol> <p>Here is the complete pgvector configurations:</p> <pre><code># Pgvector configs spring.ai.vectorstore.pgvector.index-type=HNSW spring.ai.vectorstore.pgvector.distance-type=COSINE_DISTANCE spring.ai.vectorstore.pgvector.table-name=&lt;my-embedding-table&gt; spring.ai.vectorstore.pgvector.schema-name=&lt;my-schema&gt; spring.ai.vectorstore.pgvector.dimensions=1536 spring.ai.vectorstore.pgvector.batching-strategy=TOKEN_COUNT </code></pre> <p>Spring AI 
version: M5 (Milestone 5)</p>
<p>The problem seems to be related to using the custom pgvector store dependency</p> <pre><code>&lt;dependency&gt;
    &lt;groupId&gt;org.springframework.ai&lt;/groupId&gt;
    &lt;artifactId&gt;spring-ai-pgvector-store&lt;/artifactId&gt;
&lt;/dependency&gt;
</code></pre> <p>as opposed to the PgVectorStore Boot starter dependency:</p> <pre><code>&lt;dependency&gt;
    &lt;groupId&gt;org.springframework.ai&lt;/groupId&gt;
    &lt;artifactId&gt;spring-ai-starter-vector-store-pgvector&lt;/artifactId&gt;
&lt;/dependency&gt;
</code></pre> <p>Switching to the starter dependency made it work flawlessly.</p>
1,406
implement RAG
My llama2 model is talking to itself asking question and answering it to them using Conversational retrieval chain
https://stackoverflow.com/questions/79075644/my-llama2-model-is-talking-to-itself-asking-question-and-answering-it-to-them-us
<p>I was implementing RAG on a document using the Llama 2 model, but my model is asking questions of itself and then answering them.</p> <pre class="lang-py prettyprint-override"><code>llm = LlamaCpp(model_path=model_path,
               temperature=0,
               max_tokens=2000,
               top_p=0.1,
               n_ctx=2048,
               )
qa_chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectorstore.as_retriever(search_kwargs={'k': 2}),
)
chat_history = []
while True:
    query = input('Prompt: ')
    if query.lower() in [&quot;exit&quot;, &quot;quit&quot;, &quot;q&quot;]:
        print('Exiting')
        sys.exit()
    result = qa_chain({'question': query, 'chat_history': chat_history})
    print('Answer: ' + result['answer'] + '\n')
    chat_history.append((query, result['answer']))
</code></pre> <p>I tried most of the solutions I found online, but most of them don't use this approach.</p>
1,407
implement RAG
Trying to add records in RAG vector database Pinecone
https://stackoverflow.com/questions/78921436/trying-to-add-records-in-rag-vector-database-pinecone
<p>Hello All I am trying creating a RAG chatbot, that does a POST request when user clicks add. The purpose of the add, is to add the record in the RAG vector database. I can't seem to format the data properly to do, I was able to achieve something similar using python notebook.</p> <p>Json Format:</p> <pre><code>{ &quot;restaurant&quot;: [ { &quot;restaurant&quot;: &quot;Bella Italia&quot;, &quot;cuisine&quot;: &quot;Italian&quot;, &quot;rating&quot;: 5, &quot;review&quot;: &quot;Amazing pasta and great ambiance! Highly recommended.&quot; }, { &quot;restaurant&quot;: &quot;Sushi Sakura&quot;, &quot;cuisine&quot;: &quot;Japanese&quot;, &quot;rating&quot;: 4, &quot;review&quot;: &quot;Fresh sushi and friendly staff. A bit pricey but worth it.&quot; } ] } </code></pre> <p>My app/api/add/route.js code is below:</p> <pre><code>import { Pinecone } from '@pinecone-database/pinecone'; import { OpenAI } from 'openai'; import dotenv from 'dotenv'; dotenv.config(); // Initialize OpenAI and Pinecone clients with API keys const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY }); const index = pc.index('rag'); // Use the correct index name // Function to handle user input async function handleUserInput(userInput) { let records = []; // Check if input is an array or an object if (Array.isArray(userInput.restaurant)) { records = userInput.restaurant; } else { console.error('Invalid input format:', userInput); return []; } const processedData = []; for (const record of records) { console.log('Processing record:', record); // Ensure fields are properly mapped const mappedRecord = { restaurant: record.restaurant, review: record.review, cuisine: record.cuisine, rating: record.rating, }; if (!mappedRecord.restaurant || !mappedRecord.review || !mappedRecord.cuisine || typeof mappedRecord.rating !== 'number') { console.error('Invalid record data:', mappedRecord); continue; } try { const response = await 
openai.embeddings.create({ input: mappedRecord.review, model: 'text-embedding-3-small', // Ensure this model is available }); const embedding = response.data?.[0]?.embedding; if (!Array.isArray(embedding)) { throw new Error('Embedding is not an array'); } processedData.push({ id: mappedRecord.restaurant, // Use 'restaurant' field for id values: embedding, // Embedding should be an array metadata: { review: mappedRecord.review, cuisine: mappedRecord.cuisine, rating: mappedRecord.rating, }, }); } catch (error) { console.error('Error creating embedding:', error.message); } } return processedData; } // Function to upsert data into Pinecone async function upsertData(processedData) { try { // Ensure processedData is in the correct format const formattedData = processedData.map(record =&gt; ({ id: record.id, values: record.values, metadata: record.metadata, })); // Perform upsert operation const upsertResponse = await index.upsert({ vectors: formattedData, namespace: 'ns1', // Replace with your namespace if needed }); console.log(`Upserted count: ${upsertResponse.upserted_count}`); } catch (error) { console.error('Error upserting data into Pinecone:', error.message); throw new Error('Failed to upsert data'); } } // Function to print index statistics async function printIndexStats() { try { const stats = await index.describeIndexStats(); console.log('Index statistics:', stats); } catch (error) { console.error('Error describing index stats:', error.message); } } // POST request handler export async function POST(req) { if (req.method !== 'POST') { return new Response(JSON.stringify({ error: 'Method not allowed' }), { status: 405 }); } try { const body = await req.json(); console.log('Received request body:', JSON.stringify(body, null, 2)); // Pretty print const processedData = await handleUserInput(body); if (processedData.length &gt; 0) { await upsertData(processedData); } else { console.log('No valid data to upsert'); } await printIndexStats(); return new 
Response(JSON.stringify({ message: 'Successfully added to lunch box' }), { status: 200 }); } catch (error) { console.error('Error adding restaurant:', error.message); return new Response(JSON.stringify({ error: error.message || 'Internal Server Error' }), { status: 500 }); } } </code></pre> <p>Similar Implementation done here:</p> <pre><code>from dotenv import load_dotenv load_dotenv() from pinecone import Pinecone, ServerlessSpec from openai import OpenAI import os import json # Initialize Pinecone pinecone = Pinecone(api_key=os.getenv(&quot;PINECONE_API_KEY&quot;)) # Create a Pinecone index pinecone.create_index( name=&quot;rag&quot;, dimension=1536, metric=&quot;cosine&quot;, spec=ServerlessSpec(cloud=&quot;aws&quot;, region=&quot;us-east-1&quot;), ) # Load the review data with open(&quot;reviews.json&quot;) as file: data = json.load(file) # Initialize OpenAI client client = OpenAI() processed_data = [] # Create embeddings for each review for review in data[&quot;restaurant&quot;]: response = client.embeddings.create( input=review['review'], model=&quot;text-embedding-3-small&quot; ) embedding = response.data[0].embedding processed_data.append( { &quot;values&quot;: embedding, &quot;id&quot;: review[&quot;restaurant&quot;], &quot;metadata&quot;: { &quot;review&quot;: review[&quot;review&quot;], &quot;cuisine&quot;: review[&quot;cuisine&quot;], &quot;rating&quot;: review[&quot;rating&quot;], } } ) # Insert the embeddings into the Pinecone index index = pinecone.Index(&quot;rag&quot;) upsert_response = index.upsert( vectors=processed_data, namespace=&quot;ns1&quot; ) print(&quot;Upsert response:&quot;, upsert_response) # Print index statistics index_stats = index.describe_index_stats() print(index_stats) </code></pre> <p>Basically I am trying to achieve what I have done in python, through javascript.</p> <p>My error are below:</p> <pre><code>○ Compiling /api/add ... ✓ Compiled /api/add in 1273ms (1168 modules) Failed to find any user-provided fetch implementation. 
Using global fetch implementation. Failed to find any user-provided fetch implementation. Using global fetch implementation. Received request body: { &quot;restaurant&quot;: { &quot;name&quot;: &quot;Popeyes Louisiana Kitchen&quot;, &quot;rating&quot;: 4.6, &quot;cuisine&quot;: &quot;Fast-food&quot;, &quot;review&quot;: &quot;This is not the first time, last visit the food was clearly refried to be warmed up. This time, all the wraps are dripping with half a bottle of sause in each wrap. Inedible food. I would like my money back. My dog is served better food.&quot; } } Invalid input format: { restaurant: { name: 'Popeyes Louisiana Kitchen', rating: 4.6, cuisine: 'Fast-food', review: 'This is not the first time, last visit the food was clearly refried to be warmed up. This time, all the wraps are dripping with half a bottle of sause in each wrap. Inedible food. I would like my money back. My dog is served better food.' } } No valid data to upsert Failed to find any user-provided fetch implementation. Using global fetch implementation. Failed to find any user-provided fetch implementation. Using global fetch implementation. Index statistics: { namespaces: { ns1: { recordCount: 20 } }, dimension: 1536, indexFullness: 0, totalRecordCount: 20 } POST /api/add 200 in 2870ms </code></pre>
1,408
implement RAG
ConversationalRetrievalChain raising KeyError
https://stackoverflow.com/questions/78199269/conversationalretrievalchain-raising-keyerror
<p>I am implementing RAG on a Gemma-2B-it model using langchain's HuggingFaceEmbeddings and ConversationalRetrievalChain.</p> <p>When running:</p> <pre><code>chat_history = [] question = &quot;My prompt&quot; result = qa.invoke({&quot;question&quot;: question, &quot;chat_history&quot;: chat_history}) </code></pre> <p>I get</p> <pre><code> 276 277 if self.pipeline.task == &quot;text-generation&quot;: --&gt; 278 text = response[&quot;generated_text&quot;] 279 elif self.pipeline.task == &quot;text2text-generation&quot;: 280 text = response[&quot;generated_text&quot;] KeyError: 'generated_text' </code></pre> <p>I don't understand why this is happening. It used to work and, today, it just stopped working. I have also tried using <code>qa.run</code> instead of <code>invoke </code>but it still raises the same exception.</p> <p>I have tried changing models, devices but nothing fixes it.</p>
<p>If you're using <code>transformers.pipeline</code>, make sure the <code>return_tensors='pt'</code> parameter is not passed. When it is set, the text-generation pipeline returns token IDs (under a different output key) rather than a <code>generated_text</code> string, which is what triggers this <code>KeyError</code>.</p>
1,409
implement RAG
Retrieval augmented generation (RAG) for text classification
https://stackoverflow.com/questions/77190366/retrieval-augmented-generation-rag-for-text-classification
<p>I'm currently exploring the implementation of Retrieval-Augmented Generation (RAG) for text classification, but I'm facing a lack of comprehensive online resources to guide me through the process.</p> <p>In this project, the task involves taking a sentence as input and categorizing it into one of three distinct classes. Rather than opting for fine-tuning, I'm keen on leveraging RAG. However, I'm grappling with the challenge of embedding test cases in a manner that allows for subsequent retrieval and classification by a large language model (LLM). I'm specifically looking for an embedding model that can create these embeddings while incorporating the associated class information for each sentence.</p>
<p>Don't use embeddings to encode class attributes. That will mess with retrieval quality.</p> <p>Use metadata filters instead.</p> <ol> <li><p><strong>Retrieve</strong> Include the classes in the metadata before feeding the data to the RAG pipeline. Then when you do a retrieval, use the metadata filter to make one retrieval call per class, getting the top n for each class.</p> </li> <li><p><strong>Prune</strong> (Optional) If all the results for a particular class fail a distance threshold (you as the developer set this threshold), you can remove all the retrieval results for that class. This simulates the pruning steps of some classifiers like ID3/C4.5.</p> </li> <li><p><strong>Creating and Sending Prompts</strong> Add your retrieval results to your prompt. The retrieval results should be class-labeled, e.g. &quot;{data: , label: }&quot;. That gives the prompt enhanced context that facilitates classification.</p> </li> </ol> <p>Note: here's a sample metadata-filter implementation in LlamaIndex:</p> <p><a href="https://medium.com/@sandyshah1990/exploring-rag-implementation-with-metadata-filters-llama-index-3c6c08a83428" rel="nofollow noreferrer">https://medium.com/@sandyshah1990/exploring-rag-implementation-with-metadata-filters-llama-index-3c6c08a83428</a></p>
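A minimal illustration of the retrieve-then-prune steps described in this answer, with made-up two-dimensional "embeddings" and labels standing in for real model vectors and a real vector store:

```python
import math

# Toy vector store: each record keeps its embedding AND a class label in
# metadata, so retrieval can be filtered per class.
store = [
    {"vec": [1.0, 0.0], "text": "refund my order", "label": "billing"},
    {"vec": [0.9, 0.1], "text": "charge failed",   "label": "billing"},
    {"vec": [0.0, 1.0], "text": "app crashes",     "label": "bug"},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_per_class(query_vec, top_n=1, threshold=0.5):
    hits = {}
    for label in {r["label"] for r in store}:
        ranked = sorted(
            (r for r in store if r["label"] == label),        # metadata filter
            key=lambda r: cosine(query_vec, r["vec"]), reverse=True)
        kept = [r for r in ranked[:top_n]
                if cosine(query_vec, r["vec"]) >= threshold]  # pruning step
        if kept:
            hits[label] = kept
    return hits

hits = retrieve_per_class([1.0, 0.05])
# Only the "billing" class survives the distance threshold for this query.
```

The surviving, class-labeled hits would then be formatted into the prompt as step 3 describes; a production setup would express the same filter through the vector store's own metadata-filter API rather than a Python loop.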
1,410
implement RAG
c# Microsoft Semantic Kernel - RAG using local files
https://stackoverflow.com/questions/78825096/c-microsoft-semantic-kernel-rag-using-local-files
<p>I am new to Semantic Kernel and would like to build an application using RAG (a local file implementation). I know this is possible with OpenAI using the &quot;KernelMemoryBuilder&quot; and &quot;WithOpenAIDefaults&quot; methods. Any help is appreciated.</p> <p>Thanks,</p> <p>I have tried the implementations below, but they use OpenAI.</p> <p><a href="https://microsoft.github.io/kernel-memory/serverless" rel="nofollow noreferrer">https://microsoft.github.io/kernel-memory/serverless</a></p> <p><a href="https://github.com/microsoft/kernel-memory" rel="nofollow noreferrer">https://github.com/microsoft/kernel-memory</a></p>
1,411
implement RAG
RAG model not reading json files
https://stackoverflow.com/questions/77511555/rag-model-not-reading-json-files
<p>I'm trying to implement a simple rag that reads a list of input files and answers to questions based on their content:</p> <pre><code>documents = SimpleDirectoryReader(&quot;/content/Data/&quot;).load_data() llm = LlamaCPP( model_url='https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf', model_path=None, temperature=0.1, max_new_tokens=256, context_window=3900, generate_kwargs={}, model_kwargs={&quot;n_gpu_layers&quot;: -1}, messages_to_prompt=messages_to_prompt, completion_to_prompt=completion_to_prompt, verbose=True, ) embed_model = HuggingFaceEmbeddings( model_name=&quot;thenlper/gte-large&quot; ) service_context = ServiceContext.from_defaults( chunk_size=256, llm=llm, embed_model=embed_model ) index = VectorStoreIndex.from_documents(documents, service_context=service_context) query_engine = index.as_query_engine() response = query_engine.query(&quot;What is the quantity of Nokia 3310 available?&quot;) </code></pre> <p>But I noticed that the model is not able to answer to questions regarding the json files within the Data folder, while it's great for pdf. Why does it happen and how can I solve? I notice that documents contains the json too, so I think it's not related to the first line of code but probably to the one for index. Thank you in advance, if you need more information ask me</p>
<p>It looks like you're using the llama_index library; a quick search on the <a href="https://github.com/run-llama/llama_index/blob/main/docs/module_guides/loading/simpledirectoryreader.md" rel="nofollow noreferrer">SimpleDirectoryReader</a> method you're calling will show the supported file extensions:</p> <pre><code>.csv - comma-separated values
.docx - Microsoft Word
.epub - EPUB ebook format
.hwp - Hangul Word Processor
.ipynb - Jupyter Notebook
.jpeg, .jpg - JPEG image
.mbox - MBOX email archive
.md - Markdown
.mp3, .mp4 - audio and video
.pdf - Portable Document Format
.png - Portable Network Graphics
.ppt, .pptm, .pptx - Microsoft PowerPoint
</code></pre> <p>In the documentation you will also find a link to a dedicated JSON reader.</p> <p>You may also want to look at what is inside your <code>documents</code> variable and make sure the entries are intelligible.</p> <p>FYI, in the provided code you're not even using the LLM yet. You are simply querying your vector database to find the documents most similar to &quot;What is the quantity of Nokia 3310 available?&quot;.</p>
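If the JSON files need to go through the same index, one option (besides LlamaIndex's dedicated JSON reader) is to pre-flatten them into plain text before loading. A rough stdlib sketch, with a made-up "items" schema that stands in for whatever the real files contain:

```python
import json

def json_to_text_docs(json_text):
    """Flatten a JSON inventory file into plain-text lines so it can be
    indexed alongside PDFs (the 'items' key is illustrative, not from
    the question's data)."""
    data = json.loads(json_text)
    docs = []
    for item in data.get("items", []):
        # One readable line per record, "key: value" style
        docs.append(", ".join(f"{k}: {v}" for k, v in item.items()))
    return docs

raw = '{"items": [{"product": "Nokia 3310", "quantity": 42}]}'
docs = json_to_text_docs(raw)
# docs[0] == "product: Nokia 3310, quantity: 42"
```

The resulting strings can be wrapped in `Document` objects and added to the same `VectorStoreIndex` as the PDF content.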
1,412
implement RAG
Issue with Hallucinated Outputs in RAG System Using LangChain and Chroma Vectorstore
https://stackoverflow.com/questions/79201785/issue-with-hallucinated-outputs-in-rag-system-using-langchain-and-chroma-vectors
<p>I am working on a Retrieval-Augmented Generation (RAG) system where I input a simple PDF file and expect structured outputs such as the title, summary, publication year, and authors of the research paper. While testing the system, I encountered an issue:</p> <p>First run: the output was correct and as expected.</p> <p>Subsequent runs: the output started to hallucinate and deviate from the expected outputs, although the structure remains the same. This is puzzling, and I am trying to understand why this might be happening and how to resolve it. Below are the details of my implementation:</p> <p>Question: Why might the system hallucinate on repeated runs, despite giving correct outputs initially? How can I debug and fix this issue to ensure consistent and accurate outputs for every run? Any insights into potential issues with the chunking, vectorstore handling, or RAG chain configuration would be greatly appreciated.</p> <p>Code Context:</p> <ol> <li>Chunking the PDF File To preprocess the PDF, I used LangChain's PyPDFLoader and split the text into chunks for vectorization. 
Here's how I ensure that each chunk is uniquely identified to avoid duplication in the Chroma vector database:</li> </ol> <pre><code>import uuid def create_vectorstore(chunks, embedding_function, vectorstore_path): # Generate unique IDs based on content ids = [str(uuid.uuid5(uuid.NAMESPACE_DNS, doc.page_content)) for doc in chunks] # Filter out duplicate chunks unique_ids = set() unique_chunks = [] for chunk, id in zip(chunks, ids): if id not in unique_ids: unique_ids.add(id) unique_chunks.append(chunk) # Create the Chroma vector database vectorstore = Chroma.from_documents( documents=unique_chunks, ids=list(unique_ids), embedding=embedding_function, persist_directory=vectorstore_path ) vectorstore.persist() return vectorstore </code></pre> <ol start="2"> <li>Structured Output Template for RAG Chain The system expects structured outputs defined by the following Pydantic model:</li> </ol> <pre><code>from langchain_core.pydantic_v1 import BaseModel, Field class AnswerWithSources(BaseModel): &quot;&quot;&quot;An answer to the question, with sources and reasoning.&quot;&quot;&quot; answer: str = Field(description=&quot;Answer to question&quot;) sources: str = Field(description=&quot;Full direct text chunk from the context used to answer the question&quot;) reasoning: str = Field(description=&quot;Explain the reasoning of the answer based on the sources&quot;) class ExtractedInfo(BaseModel): &quot;&quot;&quot;Extracted information about the research article&quot;&quot;&quot; paper_title: AnswerWithSources paper_summary: AnswerWithSources publication_year: AnswerWithSources paper_authors: AnswerWithSources </code></pre> <p>I invoke the RAG chain as follows:</p> <pre><code>rag_chain = ( {&quot;context&quot;: retriever | format_docs, &quot;question&quot;: RunnablePassthrough()} | prompt_template | llm.with_structured_output(ExtractedInfo, strict=True) ) rag_chain.invoke(&quot;Give me the title, summary, publication date, authors of the research paper.&quot;) </code></pre>
1,413
implement RAG
Unable to create a vectorstore retriever using Chroma
https://stackoverflow.com/questions/78793204/unable-to-create-a-vectorstore-retriever-using-chroma
<p>I am trying to implement RAG with the GPT-3.5 API. However, my code execution gets stuck while trying to create the retriever. I didn't get this issue on Google Colab but I started getting this issue once I shifted my codebase to my local environment.</p> <p>Here is the function:</p> <pre><code>def create_retriever(docs_list, embeddings_model): try: text_splitter = TextSplitter((200, 1000)) texts = [doc.page_content for doc in docs_list] metadata_list = [doc.metadata for doc in docs_list] print(&quot;INIT done!&quot;) except Exception as e: print(&quot;Error in split init: &quot;, e) try: # Split the text and convert to Document objects doc_splits = [] for i in range(len(texts)): text = texts[i] metadata = metadata_list[i] chunks = text_splitter.chunks(text) for chunk in chunks: # can add the kind of code in metadata doc_splits.append( Document(page_content=chunk, metadata=metadata)) print(&quot;SPLITTING done!&quot;) except Exception as e: print(&quot;Error in splitting: &quot;, e) try: # Add to vectorDB vectorstore = Chroma.from_documents( documents=doc_splits, collection_name=&quot;rag-chroma&quot;, embedding=embeddings_model, ) retriever = vectorstore.as_retriever() print(&quot;Retriever created: &quot;, retriever) except Exception as e: print(&quot;Error in creating the retriever object: &quot;, e) return retriever </code></pre> <p>The output that I get is as follows:</p> <pre><code>. . . INIT done! SPLITTING done! INFO:backoff:Backing off send_request(...) for 0.5s (requests.exceptions.SSLError: HTTPSConnectionPool(host='us-api.i.posthog.com', port=443): Max retries exceeded with url: /batch/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007)')))) </code></pre> <p>I have tried to re-install/upgrade my dependencies to their latest versions but to no avail. Should I switch to other vectorstores like FAISS?</p>
1,414
implement RAG
Handling greet type of questions without llm in RAG
https://stackoverflow.com/questions/78843382/handling-greet-type-of-questions-without-llm-in-rag
<p>I'm implementing a Retrieval-Augmented Generation (RAG) system for a chatbot that handles a variety of user queries. While RAG is primarily designed to provide informative responses by retrieving relevant documents and generating responses using a language model (LLM), I want to handle certain types of queries, like greetings and polite exchanges, without resorting to the LLM for efficiency and control.</p> <p>Specifically, I want to manage greet-type questions such as &quot;Hello,&quot; &quot;How are you?&quot;, &quot;Good morning,&quot; etc. These questions are often formulaic and don't require the deep contextual understanding that more complex questions might. Relying on the LLM for these could be inefficient and overkill.</p> <p>Is it possible to create a chatbot-like greeting exchange system without an LLM?</p>
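Yes — one common pattern is a cheap intent gate in front of the RAG chain: normalize the message, match it against a small set of greeting patterns, and only fall through to retrieval + LLM when nothing matches. A minimal stdlib sketch (the patterns and canned reply are illustrative, not exhaustive):

```python
import re

GREETING_PATTERNS = [
    r"^h(i|ello|ey)\b", r"^good (morning|afternoon|evening)\b",
    r"^how are you\b", r"^thanks?( you)?\b",
]
CANNED_REPLY = "Hello! Ask me anything about the documents I have indexed."

def greeting_reply(message):
    """Return a canned reply for greeting-type messages, else None."""
    text = re.sub(r"[^\w\s]", "", message.strip().lower())
    if any(re.search(p, text) for p in GREETING_PATTERNS):
        return CANNED_REPLY
    return None  # caller falls through to the RAG chain

def rag_chain(message):
    """Placeholder standing in for the real retrieval + LLM pipeline."""
    return f"[RAG answer for: {message}]"

def answer(message):
    return greeting_reply(message) or rag_chain(message)
```

For fuzzier matching than regexes, the same gate can be built on embedding similarity against a set of example greetings (the "semantic router" approach), which still avoids a full LLM call.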
1,415
implement RAG
How to use LLaVa embedding function? Multi-Modal Rag
https://stackoverflow.com/questions/78333716/how-to-use-llava-embedding-function-multi-modal-rag
<p>I'm currently implementing a multi-modal RAG system leveraging LLaVA, Chroma &amp; LangChain.</p> <p>However, I'm having a hard time finding the embedding function LLaVA uses. Can anybody help me with that? Am I just blind?</p> <p>Any pointers on how to narrow that down would be much appreciated.</p> <p>Thanks in advance!</p> <p>I browsed through all the files I could find after installing the LLaVA transformer through Hugging Face. I cannot find the code.</p>
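You're not blind — LLaVA doesn't ship a retrieval embedding function. Its vision tower is a frozen CLIP encoder used internally for generation, and multi-modal RAG setups typically pair it with a separate embedder (e.g. CLIP/OpenCLIP, which Chroma exposes as `OpenCLIPEmbeddingFunction` — treat that name as an assumption against your Chroma version). Whatever model you pick only has to satisfy Chroma's embedding-function contract: a callable mapping a list of inputs to a list of vectors. A stdlib sketch of that contract with a dummy hashing embedder standing in for a real model:

```python
import hashlib

class DummyEmbeddingFunction:
    """Stands in for a real (Open)CLIP embedder: texts -> fixed-size vectors.
    Chroma only requires __call__(input) -> list of equal-length float lists."""
    def __init__(self, dim=8):
        self.dim = dim

    def __call__(self, input):
        vectors = []
        for text in input:
            # Deterministic pseudo-embedding derived from a hash digest.
            digest = hashlib.sha256(text.encode()).digest()
            vectors.append([b / 255 for b in digest[: self.dim]])
        return vectors

ef = DummyEmbeddingFunction()
vecs = ef(["a photo of a cat", "a photo of a dog"])
```

The hashing body is obviously not a semantic embedder — the point is only the interface shape you would implement around LLaVA's CLIP tower or any other model.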
1,416
implement RAG
Python Langchain RAG example code. Unsupported operand types for |: &#39;method&#39; and &#39;operator.itemgetter&#39;
https://stackoverflow.com/questions/77303068/python-langchain-rag-example-code-unsupported-operand-types-for-method-and
<p>I am trying to implement the code for Python RAG with chat history from:</p> <p><a href="https://python.langchain.com/docs/expression_language/cookbook/retrieval#with-memory-and-returning-source-documents" rel="nofollow noreferrer">https://python.langchain.com/docs/expression_language/cookbook/retrieval#with-memory-and-returning-source-documents</a></p> <p>However, I am hitting an error with this code:</p> <pre><code># First we add a step to load memory # This adds a &quot;memory&quot; key to the input object loaded_memory = RunnablePassthrough.assign( chat_history=memory.load_memory_variables | itemgetter(&quot;history&quot;), ) </code></pre> <p>I get the error:</p> <pre><code>chat_history=memory.load_memory_variables | itemgetter(&quot;history&quot;), ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~ TypeError: unsupported operand type(s) for |: 'method' and 'operator.itemgetter' </code></pre> <p>I have tried to change the code to follow the syntax I have found online for itemgetter:</p> <pre><code>loaded_memory = RunnablePassthrough.assign( chat_history=itemgetter(&quot;history&quot;)(memory.load_memory_variables({}) ), ) </code></pre> <p>However I just get a TypeError with this:</p> <pre><code>TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: &lt;class 'list'&gt; </code></pre> <p>For completeness, here is a minimal, reproducible example, using the code from the Langchain docs:</p> <pre><code>from operator import itemgetter from langchain.memory import ConversationBufferMemory from langchain.schema.runnable import RunnablePassthrough memory = ConversationBufferMemory(return_messages=True, output_key=&quot;answer&quot;, input_key=&quot;question&quot;) loaded_memory = RunnablePassthrough.assign( chat_history=memory.load_memory_variables | itemgetter(&quot;history&quot;), ) </code></pre> <p>Am I missing something obvious here?</p>
<p>It looks like the <a href="https://python.langchain.com/docs/expression_language/cookbook/retrieval#with-memory-and-returning-source-documents" rel="nofollow noreferrer">documentation for this issue has been updated</a>. For any future users who run into the same problem, it is necessary to wrap <code>memory.load_memory_variables</code> in a <code>RunnableLambda</code> as follows:</p> <pre><code>from langchain.schema.runnable import RunnableMap, RunnablePassthrough, RunnableLambda loaded_memory = RunnablePassthrough.assign( chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter(&quot;history&quot;) ) </code></pre> <p><code>RunnableLambda</code> converts a Python callable into a Runnable. Once the memory function has been wrapped, the output can be piped to the function returned by <code>itemgetter</code>; with the <code>ConversationBufferMemory</code> defined in the question, the memory key is <code>&quot;history&quot;</code>.</p>
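The underlying reason `memory.load_memory_variables | itemgetter("history")` fails is that Python resolves `|` on the operand types: a bound method defines no `__or__`, so there is nothing to chain. LCEL's `Runnable` classes implement `__or__`/`__ror__`, which is why wrapping the callable in `RunnableLambda` makes the pipe work. A toy stdlib reconstruction of that mechanism (not LangChain's actual code):

```python
from operator import itemgetter

class Lambda:
    """Minimal stand-in for RunnableLambda: wraps a callable, supports |."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: feed this step's output into the next step.
        other_fn = other.fn if isinstance(other, Lambda) else other
        return Lambda(lambda x: other_fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

def load_memory_variables(_):
    return {"history": ["hi", "hello"]}

# A plain callable | itemgetter raises TypeError, just like in the question.
try:
    load_memory_variables | itemgetter("history")
    pipe_failed = False
except TypeError:
    pipe_failed = True

# The wrapped callable pipes cleanly into itemgetter.
chain = Lambda(load_memory_variables) | itemgetter("history")
```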
1,417
implement RAG
Handling Multiple Embedding Types in LangChain for RAG Applications
https://stackoverflow.com/questions/78771100/handling-multiple-embedding-types-in-langchain-for-rag-applications
<p>I'm building a RAG application using LangChain. It works well with one type of embedding in my database, generating specific code based on context. Now, I want to add support for Q&amp;A. I've implemented Q&amp;A by creating embeddings for various common questions, but sometimes my code generation fails because the top three similar embeddings are all Q&amp;A.</p> <p>Is there a way to retrieve context from two separate indexes using LangChain?</p>
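Yes — besides LangChain's `EnsembleRetriever`/`MergerRetriever`, which combine several retrievers for you, you can query the code index and the Q&amp;A index separately and merge a fixed quota from each, so Q&amp;A hits can never crowd out code context. A stdlib sketch of that quota merge, with toy ranked lists standing in for the two vector-store searches:

```python
def merge_with_quota(code_hits, qa_hits, k_code=2, k_qa=1):
    """Take the top-k from each ranked result list, dropping duplicates,
    so each index is guaranteed a slot in the final context."""
    merged = []
    for hit in code_hits[:k_code] + qa_hits[:k_qa]:
        if hit not in merged:
            merged.append(hit)
    return merged

# Toy ranked results, as would come back from two separate similarity searches.
code_hits = ["code: pagination helper", "code: auth middleware", "code: db pool"]
qa_hits = ["qa: how do I reset my password?", "qa: what is the rate limit?"]

context = merge_with_quota(code_hits, qa_hits)
```

In LangChain terms, each list would come from its own `vectorstore.as_retriever(search_kwargs={"k": ...})` over a separate index or collection.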
1,418
implement RAG
RAG pipeline help request
https://stackoverflow.com/questions/78662555/rag-pipeline-help-request
<p>I'm a bit new to the whole RAG pipeline thing and find myself being a bit lost in the endless possibilities of building one. My goal is to create a script that can transform about 60 anatomical pdfs into a vector store database and use this to answer questions about body parts and return the references to the pages of the pdfs where that information was taken from.</p> <p>My script so far looks like this because it is the only way I have managed to make it work:</p> <pre><code>import os import faiss import nest_asyncio from dotenv import load_dotenv from llama_index.core import ( Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex, load_index_from_storage, ) from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler from llama_index.vector_stores.faiss import FaissVectorStore nest_asyncio.apply() load_dotenv() llama_debug = LlamaDebugHandler(print_trace_on_end=True) callback_manager = CallbackManager([llama_debug]) Settings.callback_manager = callback_manager save_dir = &quot;./documents/vector_store&quot; d = 1536 faiss_index = faiss.IndexFlatL2(d) vector_store = FaissVectorStore(faiss_index=faiss_index) storage_context = StorageContext.from_defaults(vector_store=vector_store) if not os.path.exists(save_dir): print(&quot;Saving vector store to disk ...&quot;) documents = SimpleDirectoryReader(&quot;./documents/test/&quot;).load_data() vector_store = VectorStoreIndex.from_documents( documents, storage_context=storage_context, ) vector_store.storage_context.persist(persist_dir=save_dir) vector_query_engine = vector_store.as_query_engine(similarity_top_k=3) else: print(&quot;Loading vector store from disk...&quot;) vector_store = FaissVectorStore.from_persist_dir(save_dir) storage_context = StorageContext.from_defaults( vector_store=vector_store, persist_dir=save_dir ) index = load_index_from_storage(storage_context=storage_context) vector_query_engine = index.as_query_engine(similarity_top_k=3) response = vector_query_engine.query( 
&quot;What is the diaphragm and what position does it occupy in the body?&quot; ) print(response) for i, node in enumerate(response.source_nodes): metadata = node.node.metadata text_chunk = node.node.text page_label = metadata.get(&quot;page_label&quot;, &quot;N/A&quot;) file_name = metadata.get(&quot;file_name&quot;, &quot;N/A&quot;) print(f&quot;Reference nr: {i+1}, Page: {page_label}, Document: {file_name}&quot;) print(f&quot;Text Chunk: {text_chunk}\n&quot;) </code></pre> <p>And this is the (beginning of the) output:</p> <pre><code>Trace: query |_CBEventType.QUERY -&gt; 2.734167 seconds |_CBEventType.RETRIEVE -&gt; 0.417225 seconds |_CBEventType.EMBEDDING -&gt; 0.417225 seconds |_CBEventType.SYNTHESIZE -&gt; 2.316942 seconds |_CBEventType.TEMPLATING -&gt; 0.0 seconds |_CBEventType.LLM -&gt; 2.30051 seconds ********** A diaphragm is a dome-shaped muscle that separates the thoracic cavity from the abdominal cavity. It is positioned below the lungs and heart, and above the liver, stomach, and other abdominal organs. The diaphragm is connected to the thoracic aorta, which supplies blood to the chest wall and thoracic organs, and the inferior vena cava, which returns blood from the lower body to the heart. Reference nr: 1, Page: 317, Document: random_pdf.pdf Text Chunk: even during sleep, and must have a constant flow of blood to supply oxygen and remove waste products.For this reason there are four vessels that bring bloodto the circle of Willis. From this anastomosis, severalpaired arteries (the cerebral arteries) extend into thebrain itself. The thoracic aorta and its branches supply the chest wall and the organs within the thoracic cavity.These vessels are listed in T able 13–1. The abdominal aorta gives rise to arteries that sup-ply the abdominal wall and organs and to the common iliac arteries, which continue into the legs. Notice inFig. 
13–3 that the common iliac artery becomes theexternal iliac artery, which becomes the femoral artery,which becomes the popliteal artery; the same vesselhas different names based on location. These vesselsare also listed in T able 13–1 (see Box 13–3: PulseSites). The systemic veins drain blood from organs or parts of the body and often parallel their correspond-The Vascular System 299 Figure 13–5. Arteries and veins of the head and neck shown in right lateral view. Veins are labeled on the left. Arteries are labeled on the right. </code></pre> <p>I have two questions:</p> <ul> <li>on a more theoretical level: I thought a RAG pipeline needed (in a very simplified fashion) 1) embedding of the chunks 2) retrieval based on similarity 3) rephrasing of the answer by an LLM; however, this script works fairly well while apparently skipping both 1 and 3, so am I missing the point? or does llama-index abstract away a lot of the implementation?</li> <li>on a practical level: how do I improve on this? The script works, as in it usually outputs reasonable answers, but the text in &quot;source_nodes&quot; is sometimes very unsatisfactory in terms of its relevance</li> </ul> <p>Any help/guidance or resources would be super appreciated!</p>
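To the first question: steps 1 and 3 are not skipped — the trace itself shows them. The `CBEventType.EMBEDDING` span is the query being embedded, `CBEventType.LLM` is the synthesizer rephrasing the retrieved chunks, and `VectorStoreIndex.from_documents` embedded every chunk at build time with the default embedding model. llama-index simply abstracts all of it behind defaults. For retrieval relevance, one common lever is smaller chunks with overlap (in llama-index this would be a text splitter such as `SentenceSplitter(chunk_size=..., chunk_overlap=...)` — treat the exact names as assumptions against your installed version). A stdlib sketch of the sliding-overlap idea:

```python
def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into fixed-size character chunks with a sliding overlap,
    so a sentence cut at a chunk boundary still appears whole in a neighbor."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = ("The diaphragm is a dome-shaped muscle separating the thoracic "
       "and abdominal cavities.")
chunks = chunk_text(doc, chunk_size=40, overlap=10)
```

Real splitters work on sentence or token boundaries rather than raw characters, but the overlap mechanism — each chunk repeating the tail of the previous one — is the same.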
1,419
implement RAG
Can I use FLARE RAG as a substitute for another open source LLM that is not based on OpenAI?
https://stackoverflow.com/questions/78615556/can-i-use-flare-rag-as-a-substitute-for-another-open-source-llm-that-is-not-base
<p>We are preparing a capstone design for our graduation project, and in that project we want to implement a RAG chatbot using LangChain, where we want to implement FLARE with an open-source LLM model, but we are having difficulties because the FLARE implementation in LangChain is based on OpenAI.</p> <p>So we are trying to modify all the files in the LangChain module to implement FLARE with an open-source LLM, which we have been attempting for more than two weeks, but it is not working well.</p> <p>I would like to ask whether it is possible to modify the FLARE module of LangChain and the other connected modules by removing the OpenAI part, so that we can implement FLARE using the KoGPT-Trinity model (based on LLaMA or GPT-3), which is what we want. What do you think?</p> <p>(By the way, the source code of the FLARE paper also shows that the LLM is built on the OpenAI API.)</p>
1,420
implement RAG
How to use DSPy with a custom database (JSON/JSONL) as a retriever for RAG?
https://stackoverflow.com/questions/79440428/how-to-use-dspy-with-a-custom-database-json-jsonl-as-a-retriever-for-rag
<p>I am trying to implement a retrieval-augmented generation (RAG) pipeline using DSPy and want to use my own custom database stored in JSON/JSONL files as the retrieval source.</p> <p>I see that DSPy provides <code>ColBERTv2</code> for retrieval, but I’m unsure how to configure it to work with my local dataset. My ideal setup would look something like this:</p> <pre class="lang-py prettyprint-override"><code># HOW TO DO THIS PART rm = dspy.ColBERTv2(local_path_to_index=PATH_2_LOCAL_INDEX) dspy.settings.configure(lm=ollama_model, rm=rm) # Standard DSPy forward pass rag = dspy.Retrieve(k=3) cot = dspy.ChainOfThought('question, context -&gt; answer') context = rag(question).passages answer = cot(question=question, context=context).answer </code></pre> <h3>My questions:</h3> <ol> <li>How can I index my JSON/JSONL dataset for retrieval with DSPy?</li> <li>Does DSPy support any other retrievers that allow direct use of raw text from JSON/JSONL without an index?</li> <li>What is the recommended approach for integrating a custom retriever into the DSPy pipeline?</li> </ol> <p>Any guidance or examples would be appreciated!</p> <p>Dead links:</p> <ul> <li><a href="https://github.com/stanfordnlp/dspy/issues/166" rel="nofollow noreferrer">https://github.com/stanfordnlp/dspy/issues/166</a></li> <li><a href="https://discord.com/channels/1161519468141355160/1305932555966480436/1306285746554146846" rel="nofollow noreferrer">https://discord.com/channels/1161519468141355160/1305932555966480436/1306285746554146846</a></li> </ul>
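For question 3, the usual pattern (per DSPy's retriever-integration docs — treat the exact field and method names as approximate) is a small class whose call returns result objects carrying a `long_text` field; DSPy then treats it like any other `rm`. The scoring inside can be anything. Below is a dependency-free sketch using a keyword-overlap scorer over JSONL-style records, standing in for a real index:

```python
import json
from dataclasses import dataclass

@dataclass
class Passage:
    long_text: str  # field name DSPy's retrieval results conventionally expose

class JSONLRetriever:
    """Toy rm: ranks JSONL records by word overlap with the query.
    Purely illustrative scoring -- swap in BM25 or embeddings in practice."""
    def __init__(self, jsonl_lines, text_field="text"):
        self.docs = [json.loads(line)[text_field] for line in jsonl_lines]

    def __call__(self, query, k=3):
        q = set(query.lower().split())
        scored = sorted(self.docs,
                        key=lambda d: -len(q & set(d.lower().split())))
        return [Passage(long_text=d) for d in scored[:k]]

lines = [
    '{"text": "The capital of France is Paris."}',
    '{"text": "Photosynthesis occurs in chloroplasts."}',
    '{"text": "Paris hosts the Louvre museum."}',
]
rm = JSONLRetriever(lines)
hits = rm("What is the capital of France?", k=2)
```

Under the same assumption, this would be wired in with `dspy.settings.configure(lm=ollama_model, rm=JSONLRetriever(lines))`, after which `dspy.Retrieve(k=3)` queries it like any built-in retriever.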
1,421
implement RAG
Hybrid search using Azure AI Search and lang chain as a retriever
https://stackoverflow.com/questions/78576496/hybrid-search-using-azure-ai-search-and-lang-chain-as-a-retriever
<p>I am trying to implement a conversational RAG using Azure AiSearch and lang chain with conversational history. I am trying to use the</p> <pre><code>AzureAISearchRetriever from langchain_community.retrievers </code></pre> <p>However, the examples in langchain documentation only points us to using default (semantic search) and not much about hybrid search.</p> <p><a href="https://python.langchain.com/v0.2/docs/integrations/retrievers/azure_ai_search/" rel="nofollow noreferrer">https://python.langchain.com/v0.2/docs/integrations/retrievers/azure_ai_search/</a></p> <p>Any ideas on how we can implement it using the default <code>AzureAISearchRetriever</code>?</p>
<p>According to this <a href="https://python.langchain.com/v0.2/docs/integrations/vectorstores/azuresearch/#perform-a-hybrid-search" rel="nofollow noreferrer">documentation</a> you can do hybrid search using vector store.</p> <p>Use below code</p> <pre class="lang-py prettyprint-override"><code>import os from langchain_community.document_loaders import DirectoryLoader, TextLoader from langchain_community.vectorstores import AzureSearch from langchain_openai import AzureOpenAIEmbeddings from azure.identity import DefaultAzureCredential, get_bearer_token_provider, InteractiveBrowserCredential from langchain_text_splitters import CharacterTextSplitter vector_store_address=&quot;&lt;Ai_search_service_endpoint&gt;&quot; vector_store_password=&quot;&lt;Ai_search_service_key&gt;&quot; token_provider = get_bearer_token_provider(InteractiveBrowserCredential(), &quot;https://cognitiveservices.azure.com/.default&quot;) embeddings = AzureOpenAIEmbeddings( model=&quot;&lt;embeding-model&gt;&quot;, azure_endpoint=&quot;&lt;Azure_openai_endpoint&gt;&quot;, azure_ad_token_provider=token_provider ) index_name = &quot;langchain-vector-demo&quot; vector_store = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embeddings.embed_query, ) loader = TextLoader(r&quot;&lt;Text documnet paths&gt;&quot;, encoding=&quot;utf-8&quot;) documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=400, chunk_overlap=0) docs = text_splitter.split_documents(documents) vector_store.add_documents(documents=docs) docs = vector_store.similarity_search( query=&quot;Wine quality&quot;, k=3, search_type=&quot;hybrid&quot;, ) for i in docs: print(i.page_content) </code></pre> <p>or</p> <pre class="lang-py prettyprint-override"><code>docs = vector_store.hybrid_search( query=&quot;What did the president say about Ketanji Brown Jackson&quot;, k=3 ) </code></pre> <p>Above is using vector store but you can 
convert it to a retriever with search type <code>hybrid</code>.</p> <pre class="lang-py prettyprint-override"><code>dv = vector_store.as_retriever(search_type=&quot;hybrid&quot;) dv.invoke(&quot;&lt;your query&gt;&quot;) </code></pre>
1,422
implement RAG
Best approach for RAG using Azure OpenAI and AI Search with Python SDK
https://stackoverflow.com/questions/79239258/best-approach-for-rag-using-azure-openai-and-ai-search-with-python-sdk
<p>I struggle to understand the pros and cons of each of these approaches for implementing a RAG using Azure OpenAI with AI Search as source, with the Python SDK. Both work well, but option B looks much cleaner. Why should we even bother doing all the steps in option A ourselves? Am I missing something?</p> <p>I can only think of some use cases where you need to get the AI Search chunks for evaluation (RAGAS), which might not be possible with option B.</p> <p><strong>A)</strong> Querying AI Search yourself</p> <pre><code>openai_client = AzureOpenAI( api_version=&quot;2024-06-01&quot;, azure_endpoint=AZURE_OPENAI_ACCOUNT, azure_ad_token_provider=token_provider ) search_client = SearchClient( endpoint=AZURE_SEARCH_SERVICE, index_name=&quot;hotels-sample-index&quot;, credential=credential ) GROUNDED_PROMPT=&quot;&quot;&quot; You are a friendly assistant that recommends hotels based on activities and amenities. Query: {query} Sources:\n{sources} &quot;&quot;&quot; query=&quot;Can you recommend a few hotels with complimentary breakfast?&quot; search_results = search_client.search( search_text=query, top=5, select=&quot;Description,HotelName,Tags&quot; ) sources_formatted = &quot;\n&quot;.join([f'{document[&quot;HotelName&quot;]}:{document[&quot;Description&quot;]}:{document[&quot;Tags&quot;]}' for document in search_results]) response = openai_client.chat.completions.create( messages=[ { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: GROUNDED_PROMPT.format(query=query, sources=sources_formatted) } ], model=AZURE_DEPLOYMENT_MODEL ) </code></pre> <p><strong>B)</strong> Letting Azure OpenAI query AI Search itself</p> <pre><code>endpoint = os.environ.get(&quot;AZURE_OPENAI_ENDPOINT&quot;) api_key = os.environ.get(&quot;AZURE_OPENAI_API_KEY&quot;) deployment = os.environ.get(&quot;AZURE_OPENAI_DEPLOYMENT_ID&quot;) client = openai.AzureOpenAI( azure_endpoint=endpoint, api_key=api_key, api_version=&quot;2024-02-01&quot;, ) completion =
client.chat.completions.create( model=deployment, messages=[ { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;What are my available health plans?&quot;, }, ], extra_body={ &quot;data_sources&quot;:[ { &quot;type&quot;: &quot;azure_search&quot;, &quot;parameters&quot;: { &quot;endpoint&quot;: os.environ[&quot;AZURE_AI_SEARCH_ENDPOINT&quot;], &quot;index_name&quot;: os.environ[&quot;AZURE_AI_SEARCH_INDEX&quot;], &quot;authentication&quot;: { &quot;type&quot;: &quot;api_key&quot;, &quot;key&quot;: os.environ[&quot;AZURE_AI_SEARCH_API_KEY&quot;], } } } ], } ) </code></pre>
<p>There are many approaches you can take with Azure OpenAI and AI Search; your options A and B fall under:</p> <ul> <li><p>A, Retrieve Then Read: Simple retrieve-then-read implementation, using the AI Search and OpenAI APIs directly. It first retrieves top documents from search, then constructs a prompt with them, and then uses OpenAI to generate a completion (answer) with that prompt. Read more here: <a href="https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/retrievethenread.py" rel="nofollow noreferrer">https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/retrievethenread.py</a></p> </li> <li><p>B, Chat Read Retrieve Read: A multi-step approach that first uses OpenAI to turn the user's question into a search query, then uses Azure AI Search to retrieve relevant documents, and then sends the conversation history, original user question, and search results to OpenAI to generate a response.</p> <ul> <li>Your code aligns with this approach, using the built-in data_sources for specific services like AI Search, Cosmos DB, Elastic Search, etc.: <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/references/on-your-data?tabs=python#examples" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/openai/references/on-your-data?tabs=python#examples</a>.</li> <li>Unlike the above, you can also use the more generic <code>Function Calling</code>, as in this sample implementation: <a href="https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/chatreadretrieveread.py" rel="nofollow noreferrer">https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/chatreadretrieveread.py</a>.
You can take multi-step calls further depending on your business needs, not just with built-in data_sources but also with your own Python functions: <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling</a></li> </ul> </li> </ul> <p>What's better?</p> <p>A is simple and straightforward. If you search for an item and AI Search returns no information, then OpenAI takes no further action, and the conversation ends early. This often happens because not every user asks for information in the first query or may not know how to phrase their question.</p> <p>B helps expand the conversation context and allows OpenAI to decide which function to run, making the interaction feel more human-like. It lets you branch the conversation scenario in more customizable ways, depending on your business needs. For example, when a user asks, &quot;How's the weather today?&quot;, it's necessary to have two parameters: &quot;location&quot; and &quot;unit&quot; (Celsius or Fahrenheit). Without enough parameters, OpenAI will prompt the user with something like, &quot;Please let me know your location and unit.&quot; It will keep asking if either parameter is missing and will run the function once it has all the necessary information.</p>
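On the evaluation point raised in the question: option A's main advantage is exactly that the retrieved chunks pass through your code, so you can persist them next to the answer for RAGAS-style scoring. A dependency-free sketch of that shape (the search and LLM callables are stand-ins for `search_client.search` and the chat-completion call):

```python
def rag_answer(query, search_fn, llm_fn):
    """Option-A style pipeline: keep the retrieved chunks alongside the
    answer so they can later be fed to an evaluator as `contexts`."""
    contexts = search_fn(query)
    prompt = f"Query: {query}\nSources:\n" + "\n".join(contexts)
    return {"answer": llm_fn(prompt), "contexts": contexts, "prompt": prompt}

# Toy stand-ins for SearchClient.search and the chat completion call.
fake_search = lambda q: ["Hotel Alpha: free breakfast", "Hotel Beta: pool"]
fake_llm = lambda p: "Hotel Alpha offers complimentary breakfast."

result = rag_answer("hotels with free breakfast", fake_search, fake_llm)
```

With option B, the retrieval happens inside the service; depending on the API version, the cited chunks may only come back in the response's citation metadata, which is less convenient to capture systematically.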
1,423
implement RAG
VertexAI authentication
https://stackoverflow.com/questions/78834087/vertexai-authentication
<p>I am currently referring to the codelab <a href="https://codelabs.developers.google.com/multimodal-rag-gemini#0" rel="nofollow noreferrer">Build a Q&amp;A App with Multi-Modal RAG using Gemini Pro</a> which uses</p> <pre><code>from google.colab import auth auth.authenticate_user() </code></pre> <p>to authenticate.</p> <p>I also see genai reference implement as such</p> <pre><code>import google.generativeai as genai import os genai.configure(api_key=os.environ[&quot;GEMINI_API_KEY&quot;]) </code></pre> <p>How can I authenticate when using vertexAI? I have access to API key and service account created with required roles.</p>
<p>As you are already part of the Google Cloud ecosystem, I believe you will be using services like <code>Cloud Run</code> to host your application. Cloud Run will use a designated service account to run your application.</p> <p>(Your TODO) If you grant the minimum required permissions to this service account, that will be enough to host &amp; run your application without the need for API keys.</p> <p>If it helps, I have written a short <a href="https://medium.com/google-cloud/how-to-build-a-chat-web-application-with-streamlit-cloud-run-and-gemini-flash-8eb1ac55b201" rel="nofollow noreferrer">blog post</a> that might help you, and its source code is hosted at <a href="https://github.com/GoogleCloudPlatform/devrel-demos/tree/main/ai-ml/gemini-chatbot-app" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/devrel-demos</a></p>
1,424
implement RAG
Feeding tabular data to Chromadb for RAG
https://stackoverflow.com/questions/79491162/feeding-tabular-data-to-chromadb-for-rag
<p>I am working on a RAG chatbot which takes .csv financial tables (eg. income statements/balance sheets etc.) of a company in the last 3 quarters, and answers questions based on the provided report context. Each CSV file contains a financial table for a company, with 3 rows representing 3 quarters and columns representing the metrics (eg. total assets, net income, shareholder equity etc).</p> <p>These are the snapshots of the csv files (actual columns are way more than shown below), they are named using the format <em>COMPANY_ReportType_LatestQuarter.csv</em> (eg. <em>AMZN_cashflow_2024Q3.csv</em>)</p> <p><em><a href="https://i.sstatic.net/G3KPOCQE.png" rel="nofollow noreferrer">CompanyA_balance_2024Q3.csv</a></em></p> <p><em><a href="https://i.sstatic.net/rUS7Qemk.png" rel="nofollow noreferrer">CompanyA_income_2024Q3.csv</a></em></p> <p>I have previously created a working chatbot with persistence <a href="https://github.com/jylim21/ChainGPT" rel="nofollow noreferrer">here</a>, and I would like to enhance this prototype further by implementing a financial RAG feature.</p> <h1>The modules used are:</h1> <ul> <li>Ollama: Embeddings and LLM Model</li> <li>ChromaDB and Langchain: For vector storage and document retrieval</li> <li>Chainlit: UI for my chatbot</li> </ul> <h1>My initial thoughts:</h1> <ol> <li>Initialize chromadb client and vectorstore with Ollama embeddings.</li> <li>convert CSV files to documents using Langchain's <a href="https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html" rel="nofollow noreferrer">CSVLoader</a> with 1 Document per row.</li> <li>Include the following metadatas in each document: i . Company (eg. MSFT/AMZN/AAPL) ii . Report_type (eg. Income Statement/Balance Sheet/Cashflow Statement) iii. Quarter (eg. 
2024Q1, 2024Q2, 2024Q3)</li> <li>Load each document with their respective metadatas.</li> </ol> <h1>The problem:</h1> <p>When I ask questions related to the documents, sometimes they are <strong>unable to retrieve data for specific quarters</strong>.</p> <p>For example:</p> <pre><code>Question: &quot;Is there an increase in Total Revenue across quarters?&quot; Answer: &quot;The Total Revenue in Q1 2024 was $ 6,914,553. The Total Revenue in Q3 2024 was $ 7,244,104. The change in Total Revenue from Q1 2024 to Q3 2024 is then calculated as follows: Change in Total Revenue (Q1 2024 to Q3 2024) = $ 7,244,104 - $ 6,914,553 = $ 329,551 However, since there was no data provided for Q2 2024, we are unable to calculate the change in Total Revenue from Q1 2024 to Q2 2024, or Q2 2024 to Q3 2024.&quot; </code></pre> <pre><code>Question: &quot;Give me the Operating Cash Flow from Q1 2024 to Q3 2024.&quot; Answer: &quot;The Operating Cash Flow for CompanyA are as follows: - Q1 2024 : Not available as the provided cashflow statement did not include Operating Cash Flow for Q1 2024. - Q2 2024 : $ 2,080,257 - Q3 2024 : $ 1,845,301 &quot; </code></pre> <p>This seems weird because when I inspected the vector database using <code>vectorstore.get()</code>, all the documents for Q1, Q2, and Q3 were indeed included along with their metadatas.</p> <p>I have also tried clearing the vector database and re-ingesting the table, setting temperature=0.1 etc. 
but nothing worked, the retriever seems to just retrieve random documents it could.</p> <p>This is the script I used to load the csv files into my chromadb vector database,</p> <pre><code>import chromadb from langchain_chroma import Chroma import os import re from langchain_ollama import OllamaEmbeddings, OllamaLLM from langchain_community.document_loaders.csv_loader import CSVLoader llm_model = &quot;phi4:14b&quot; llm = OllamaLLM(model=llm_model) embedding = OllamaEmbeddings( model=llm_model, base_url=&quot;http://localhost:11434&quot; ) chroma_client = chromadb.PersistentClient(path=os.path.join(os.getcwd(), &quot;./chroma_db&quot;)) vectorstore = Chroma( client=chroma_client, collection_name=&quot;fin_reports&quot;, embedding_function=embedding ) folder_path=&quot;./src&quot; # This regex pattern is used to extract the metadata fields from the filename. filename_pattern = re.compile(r&quot;([A-Z]+)_(income|balance|cashflow)_(\d{4}Q\d)\.csv&quot;) # Map the ReportType metadata to their proper terms map={&quot;income&quot;:&quot;Income Statement&quot;, &quot;balance&quot;:&quot;Balance Sheet&quot;, &quot;cashflow&quot;: &quot;Cashflow Statement&quot;} if __name__ == &quot;__main__&quot;: documents=[] for filename in os.listdir(folder_path): if filename.endswith(&quot;.csv&quot;): match = filename_pattern.match(filename) if match: company, report_type, quarter = match.groups() file_path = os.path.join(folder_path, filename) loader = CSVLoader(file_path, encoding=&quot;utf-7&quot;) docs = loader.load() for doc in docs: # Get the quarter for each row (NOT to be confused with latest quarter in metadata). 
match= re.search(r&quot;Quarter:\s*([^\n]+)&quot;, doc.page_content) if match: # Assigning individual metadata for each row of document doc.metadata[&quot;company&quot;] = company doc.metadata[&quot;report_type&quot;] = map[report_type] doc.metadata[&quot;quarter&quot;] = match.group(1) del doc.metadata[&quot;row&quot;] del doc.metadata[&quot;source&quot;] # Convert the loaded CSV table into natural language. doc.page_content=f&quot;This is the {map[report_type]} for {company}:\n&quot; + doc.page_content doc.page_content=doc.page_content.replace(&quot;:\n&quot;,&quot; in &quot;).replace(&quot;Quarter:&quot;,&quot;Quarter&quot;).replace(&quot;\n&quot;,&quot;, the &quot;).replace(&quot;:&quot;,&quot; is&quot;) documents.extend(docs) if documents: vectorstore.add_documents(documents) </code></pre> <p>And this is the output of <code>vectorstore.get(include=['embeddings', 'documents', 'metadatas'])</code>:</p> <pre><code>{'ids': ['abe17ec3-7b46-4ad9-a797-dce0b1d6ccb5', 'cd834610-b383-421f-ab27-49aedb8607f5', '667d64ad-3c8c-47a7-bdcb-7a40376fc082', 'b72b9341-c055-47d5-9b75-cbb12010c90a', '7ab68a4c-a2aa-4c9e-9396-8b4a63c1992c', '511dd725-c1c8-4adf-a3a3-0d7e072959aa', '9473f3dc-0718-466f-b340-483e5aa63879', '3c902c4c-a560-4329-b3b1-251b64f3011d', '99f1597e-022d-41cc-b21e-899ef5637923'], 'embeddings': array([[-1.41350282e-02, 8.76112841e-03, -2.13063452e-02, ..., 4.15409030e-03, -7.05788424e-03, -7.38847209e-03], [-1.45367859e-02, 1.69306658e-02, -1.71646141e-02, ..., -1.15523115e-04, -9.89801716e-03, -8.85582063e-03], [-1.70514323e-02, 1.50136873e-02, -2.20932588e-02, ..., -1.00878415e-05, -1.24230348e-02, -1.34052020e-02], ..., [-6.36808481e-03, -2.84058508e-03, -1.51665546e-02, ..., 2.20017880e-03, 7.84451701e-03, -9.18203034e-03], [-4.21447773e-03, -4.56405617e-03, -2.06277855e-02, ..., 3.17326194e-04, 4.39113053e-03, -6.34333445e-03], [-7.46455975e-03, -3.84342880e-03, -9.71509703e-03, ..., 9.92560410e-04, 2.78300210e-03, -4.73287236e-03]]), 'documents': [ 'This is the 
Balance Sheet for CompanyA in Quarter 2024Q1, the Total Assets is 519830471, the Cash, Cash Equivalents &amp; Federal Funds Sold is 16859145, the Cash And Cash Equivalents is 16855368, the Cash is 9404164, the Cash Equivalents is --, the Cash And Due from Banks is 7451204', 'This is the Balance Sheet for CompanyA in Quarter 2024Q2, the Total Assets is 525572703, the Cash, Cash Equivalents &amp; Federal Funds Sold is 20037159, the Cash And Cash Equivalents is 20033299, the Cash is 12371746, the Cash Equivalents is --, the Cash And Due from Banks is 7661553', 'This is the Balance Sheet for CompanyA in Quarter 2024Q3, the Total Assets is 526813662, the Cash, Cash Equivalents &amp; Federal Funds Sold is 22156277, the Cash And Cash Equivalents is 22154535, the Cash is 14486877, the Cash Equivalents is --, the Cash And Due from Banks is 7667658', 'This is the Income Statement for CompanyA in Quarter 2024Q1, the Total Revenue is 3410798, the Net Interest Income is 2317974, the Interest Income is 4698234, the Interest Expense is 2380260, the Non Interest Income is 1092824, the Total Premiums Earned is --', 'This is the Income Statement for CompanyA in Quarter 2024Q2, the Total Revenue is 3578113, the Net Interest Income is 2406316, the Interest Income is 4718530, the Interest Expense is 2312214, the Non Interest Income is 1171797, the Total Premiums Earned is --', 'This is the Income Statement for CompanyA in Quarter 2024Q3, the Total Revenue is 3641496, the Net Interest Income is 2409285, the Interest Income is 4787045, the Interest Expense is 2377760, the Non Interest Income is 1232211, the Total Premiums Earned is 29188', 'This is the Cashflow Statement for CompanyA in Quarter 2024Q1, the Operating Cash Flow is 1655398, the Investing Cash Flow is 1719186, the Financing Cash Flow is -25345, the End Cash Position is 11208036, the Changes in Cash is 3349239, the Effect of Exchange Rate Changes is -105013, the Beginning Cash Position is 7963810, the Capital Expenditure is 
-45638, the Issuance of Debt is --, the Repayment of Debt is --, the Free Cash Flow is 1609760', 'This is the Cashflow Statement for CompanyA in Quarter 2024Q2, the Operating Cash Flow is 2080257, the Investing Cash Flow is 2967090, the Financing Cash Flow is -1967102, the End Cash Position is 13366303, the Changes in Cash is 3080245, the Effect of Exchange Rate Changes is -921978, the Beginning Cash Position is 11208036, the Capital Expenditure is -60991, the Issuance of Debt is --, the Repayment of Debt is -1500000, the Free Cash Flow is 2019266', 'This is the Cashflow Statement for CompanyA in Quarter 2024Q3, the Operating Cash Flow is 1845301, the Investing Cash Flow is -2437003, the Financing Cash Flow is -25211, the End Cash Position is 13244091, the Changes in Cash is -616913, the Effect of Exchange Rate Changes is 494701, the Beginning Cash Position is 13366303, the Capital Expenditure is -42864, the Issuance of Debt is 1000000, the Repayment of Debt is -1000000, the Free Cash Flow is 1802437'], 'uris': None, 'data': None, 'metadatas': [ {'company': 'CompanyA', 'quarter': '2024Q1', 'report_type': 'Balance Sheet'}, {'company': 'CompanyA', 'quarter': '2024Q2', 'report_type': 'Balance Sheet'}, {'company': 'CompanyA', 'quarter': '2024Q3', 'report_type': 'Balance Sheet'}, {'company': 'CompanyA', 'quarter': '2024Q1', 'report_type': 'Income Statement'}, {'company': 'CompanyA', 'quarter': '2024Q2', 'report_type': 'Income Statement'}, {'company': 'CompanyA', 'quarter': '2024Q3', 'report_type': 'Income Statement'}, {'company': 'CompanyA', 'quarter': '2024Q1', 'report_type': 'Cashflow Statement'}, {'company': 'CompanyA', 'quarter': '2024Q2', 'report_type': 'Cashflow Statement'}, {'company': 'CompanyA', 'quarter': '2024Q3', 'report_type': 'Cashflow Statement'}], 'included': [&lt;IncludeEnum.embeddings: 'embeddings'&gt;, &lt;IncludeEnum.documents: 'documents'&gt;, &lt;IncludeEnum.metadatas: 'metadatas'&gt;]} </code></pre>
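The metadata extraction and the CSV-row-to-sentence rewrite shown in the question can be exercised in isolation. A self-contained sketch follows; the sample row below is a made-up fragment in the same shape as the loader's output, not data from the actual files:

```python
import re

# A made-up CSV row in the shape CSVLoader produces (one "Field: value" per line)
raw = "Quarter: 2024Q1\nTotal Assets: 519830471\nCash: 9404164"
company, report_type = "CompanyA", "Balance Sheet"

# Metadata extraction, as in the question's loop
match = re.search(r"Quarter:\s*([^\n]+)", raw)
quarter = match.group(1) if match else None

# The same replace chain turns the row into one natural-language sentence;
# order matters: ":\n" must be handled before the bare "\n" and ":" cases.
content = f"This is the {report_type} for {company}:\n" + raw
content = (content.replace(":\n", " in ")
                  .replace("Quarter:", "Quarter")
                  .replace("\n", ", the ")
                  .replace(":", " is"))
print(content)
# -> This is the Balance Sheet for CompanyA in Quarter 2024Q1, the Total Assets is 519830471, the Cash is 9404164
```

This reproduces exactly the sentence shape visible in the `vectorstore.get(...)` dump, which makes the replace chain easy to unit-test before any documents are embedded.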
1,425
implement RAG
APIConnectionError: Connection error. in langchain vector search
https://stackoverflow.com/questions/77837392/apiconnectionerror-connection-error-in-langchain-vector-search
<p>I am trying to implement a RAG solution using Azure OpenAI and Azure AI Search.</p> <p>This is my <code>requirements.txt</code> file:</p> <pre><code>azure-core==1.29.6
azure-common==1.1.28
azure-identity==1.15.0
azure-keyvault-keys==4.8.0
azure-keyvault-secrets==4.7.0
azure-search-documents==11.4.0
openai==1.8.0
langchain==0.1.1
fastapi==0.109.0
uvicorn==0.26.0
tiktoken==0.5.2
gunicorn==21.2.0
langchain-openai==0.0.2
</code></pre> <p>I am trying to implement <code>RetrievalQA</code> using the following <code>vector_store</code> connection:</p> <pre><code># Cognitive Search connection settings
index_name = os.environ[&quot;AZURE_SEARCH_INDEX_NAME&quot;]
service_name = COGNOS_SERVICE
key = AZURE_SEARCH_ADMIN_KEY
vector_store_address = &quot;https://{}.search.windows.net/&quot;.format(service_name)
vector_store_password = AZURE_SEARCH_ADMIN_KEY

# Define LLM
llm = AzureChatOpenAI(
    model=&quot;gpt-35-turbo&quot;,
    streaming=True,
    azure_deployment=&quot;chatgpt-gpt35-turbo&quot;,
    temperature=0.0,
)

embedding_model: str = &quot;text-embedding-ada-002&quot;
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(
    deployment=embedding_model,
    chunk_size=1
)

vector_store = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
    # content_key=&quot;report_content&quot;
)
</code></pre> <p>When I run the <code>vector_store</code> part above, I get the following error:</p> <pre><code>File d:\xxxx\.venv\lib\site-packages\openai\_base_client.py:919, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    909     return self._retry_request(
    910         options,
    911         cast_to,
   (...)
    915         response_headers=None,
    916     )
    918 log.debug(&quot;Raising connection error&quot;)
--&gt; 919 raise APIConnectionError(request=request) from err
    921 log.debug(
    922     'HTTP Request: %s %s &quot;%i %s&quot;', request.method, request.url, response.status_code, response.reason_phrase
    923 )
    925 try:

APIConnectionError: Connection error.
</code></pre> <p>I have updated the imports as required by the new <code>langchain-community</code> package layout.</p> <p>The same code with older versions of <code>langchain</code> and the same connection settings works.</p> <p>What changes should I make so that the <code>vector_store</code> connection works?</p>
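This error typically surfaces when the embeddings client is pointed at the public OpenAI endpoint instead of an Azure deployment. A minimal sketch of wiring up the Azure-specific embeddings class from `langchain-openai` follows; the deployment name, environment-variable names, and API version below are placeholders, and exact parameter names can vary between `langchain-openai` releases, so check against the installed version:

```python
import os


def azure_search_endpoint(service_name: str) -> str:
    """Build the Azure AI Search endpoint URL for a given service name."""
    return "https://{}.search.windows.net/".format(service_name)


def build_embeddings():
    """Return an Azure-specific embeddings client.

    With openai>=1.x, the plain OpenAIEmbeddings class talks to
    api.openai.com; for Azure deployments, AzureOpenAIEmbeddings is the
    class that accepts a deployment name and an Azure endpoint. The import
    is kept inside the function so this module loads without the package.
    """
    from langchain_openai import AzureOpenAIEmbeddings  # requires langchain-openai

    return AzureOpenAIEmbeddings(
        azure_deployment="text-embedding-ada-002",       # your embedding deployment name
        azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
        api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""),
        api_version="2023-05-15",                        # placeholder API version
    )
```

The resulting object can be passed to `AzureSearch` the same way as before, via `embedding_function=embeddings.embed_query`.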
1,426
implement RAG
How to sync/update Vector DB Vertex AI Search (used as RAG) in a agent?
https://stackoverflow.com/questions/79481102/how-to-sync-update-vector-db-vertex-ai-search-used-as-rag-in-a-agent
<p>Using Firestore Genkit (Node.js) with GCP Vertex AI, Vertex AI Search, and GCP Cloud Storage, I am writing an agent that will process some code files. The files get uploaded to Cloud Storage since they are unstructured data.</p> <p>I want to use Vertex AI Search as the RAG backend for the agent I am building. I have the data store created in the Vertex AI Search dashboard, which I created manually. The data store points to my GCS bucket. The problem is, I do not really understand how to implement this in Genkit and use its client-side APIs. Using the npm package <a href="https://www.npmjs.com/package/@google-cloud/discoveryengine" rel="nofollow noreferrer">@google-cloud/discoveryengine</a>, I am not sure how to go about this.</p> <ol> <li>After I load the file to the GCS bucket, do I need to sync the Vertex AI Search vector DB with the bucket?</li> <li>I see the Discovery Engine client library (which backs the Vertex AI Search vector DB) has APIs like <code>createDocument</code>, etc., but the <a href="https://cloud.google.com/nodejs/docs/reference/discoveryengine/1.3.1/discoveryengine/protos.google.cloud.discoveryengine.v1.createdocumentrequest" rel="nofollow noreferrer">documentation</a> is very thin.</li> </ol> <pre class="lang-js prettyprint-override"><code>for (const file of files) {
  const destination = path.basename(file);
  const options = {
    destination,
  };
  await bucket.upload(file, options);

  // After the file is uploaded to the GCS bucket here,
  // how do I sync or vectorize the data for the Vertex AI Search DB?
}
</code></pre>
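The usual answer to question 1 is yes: a GCS-backed data store does not watch the bucket, so newly uploaded objects must be imported. Sketched here in Python for brevity (the Node client exposes the same `importDocuments` RPC); the request-field and path-helper names below are assumptions to verify against the installed `google-cloud-discoveryengine` release, and only the URI helper is exercised without the SDK:

```python
def gcs_uris(bucket: str, names: list) -> list:
    """Build gs:// URIs for uploaded objects; Vertex AI Search ingests by URI."""
    return ["gs://{}/{}".format(bucket, n) for n in names]


def import_uploaded_files(project: str, location: str, data_store: str, uris: list):
    """Trigger an incremental import so the data store picks up new objects.

    The SDK import is kept inside the function so this module loads without
    google-cloud-discoveryengine installed. Field names are best-effort and
    should be checked against the current client library.
    """
    from google.cloud import discoveryengine_v1 as discoveryengine

    client = discoveryengine.DocumentServiceClient()
    parent = client.branch_path(project, location, data_store, "default_branch")
    request = discoveryengine.ImportDocumentsRequest(
        parent=parent,
        gcs_source=discoveryengine.GcsSource(
            input_uris=uris,
            data_schema="content",  # unstructured documents
        ),
        # INCREMENTAL adds/updates documents without wiping the rest
        reconciliation_mode=discoveryengine.ImportDocumentsRequest.ReconciliationMode.INCREMENTAL,
    )
    operation = client.import_documents(request=request)  # long-running operation
    return operation.result()
```

In the Node loop from the question, the equivalent call would go after `bucket.upload(...)`, passing the freshly built `gs://` URIs.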
1,427
implement RAG
RAG | chromadb is retrieving the old vectors after first attemp not on new document
https://stackoverflow.com/questions/78825666/rag-chromadb-is-retrieving-the-old-vectors-after-first-attemp-not-on-new-docum
<p>I am building a RAG chatbot with Gradio. For the first PDF file, it gives the correct response to the question asked about that PDF. But if I upload a new PDF and ask any question, the PDF loads correctly and its embeddings are also created correctly, yet when it goes to ChromaDB and retrieves the vectors via <code>.as_retriever()</code>, it returns responses from the previous PDF file. How is it storing the data, and why is the data not being updated in ChromaDB? Below is my code:</p> <pre><code>from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
import logging
from langchain.chains import ConversationChain, RetrievalQA
from langchain_core.output_parsers import StrOutputParser
from langchain_community.document_loaders import (
    PyPDFLoader,
)
from langchain_core.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from prompts import template, document_template
from langchain_groq import ChatGroq
from langchain.embeddings import HuggingFaceEmbeddings
import os
from dotenv import load_dotenv
from langchain_core.runnables import RunnablePassthrough, RunnableParallel, RunnableWithMessageHistory
import gradio as gr

load_dotenv()
</code></pre> <p>Here are the PDF handler and LLM init functions:</p> <pre><code>PROMPT = PromptTemplate(input_variables=[&quot;history&quot;, &quot;input&quot;], template=template)

llm = ChatGroq(
    temperature=0,
    groq_api_key=os.getenv('API'),
    model_name=&quot;llama3-groq-8b-8192-tool-use-preview&quot;
)

memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory, verbose=True, prompt=PROMPT)


def pdf_handler(file):
    try:
        file = PyPDFLoader(file).load_and_split()
        chunks = RecursiveCharacterTextSplitter(
            chunk_size=1000, chunk_overlap=200
        ).split_documents(file)
        print('Chunks are:', chunks)
        # Initialize embeddings
        embeddings = HuggingFaceEmbeddings()
        # Reinitialize the Chroma vector store
        vector_store = Chroma.from_documents(documents=chunks, embedding=embeddings)
        return vector_store
    except Exception as e:
        logging.error(&quot;An error occurred in pdf_handler function: %s&quot;, e, exc_info=True)
        return None


def llm_init(file):
    try:
        template = &quot;Context: {context}\nQuestion: {question}\nAnswer:&quot;
        parser = StrOutputParser()
        chain = llm | parser
        PROMPT = PromptTemplate(
            input_variables=[&quot;context&quot;, &quot;question&quot;],
            template=template
        )
        chain_type_kwargs = {&quot;prompt&quot;: PROMPT}
        qa = RetrievalQA.from_chain_type(
            llm=chain,
            chain_type=&quot;stuff&quot;,
            retriever=pdf_handler(file).as_retriever(search_type='similarity', search_kwargs={&quot;k&quot;: 6}),
            return_source_documents=True,
            chain_type_kwargs=chain_type_kwargs,
        )
        return qa
    except Exception as e:
        logging.error(&quot;An error occurred in ollama_llm function: %s&quot;, e, exc_info=True)
        return None


def pdf_chat(is_file, question):
    qa = llm_init(file=is_file)
    query = qa({&quot;query&quot;: question})[&quot;result&quot;]
    return query


def handle_chat(file, question):
    if file:
        return pdf_chat(file, question)
    else:
        return conversation.predict(input=question)
</code></pre> <p>The Gradio implementation:</p> <pre><code>ui = gr.Interface(
    fn=handle_chat,
    inputs=[gr.File(), 'text'],
    outputs=gr.Textbox(lines=14),
    title=&quot;Chatbot&quot;,
    description=&quot;Upload a PDF file and ask a question to get an answer, or ask a question directly.&quot;,
    theme=gr.themes.Default(primary_hue=&quot;violet&quot;, secondary_hue=&quot;violet&quot;)
)

print(&quot;its working&quot;, flush=True)
port = int(os.environ.get(&quot;PORT&quot;, 9449))
ui.launch(server_name=&quot;0.0.0.0&quot;, server_port=port)
</code></pre> <p>The issue is in this part of the code:</p> <pre><code>vector_store = Chroma.from_documents(documents=chunks, embedding=embeddings)
vectors = vector_store.as_retriever(search_type='similarity', search_kwargs={&quot;k&quot;: 6})
response = vectors.invoke(&quot;what is this&quot;)
</code></pre> <p>It keeps responding based on the previous data instead of the updated data.</p>
<p>The issue is sorted out. It was actually ChromaDB that was messing up; I switched to FAISS and the problem was resolved.</p>
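What the switch to FAISS effectively changed is the store's lifecycle: a fresh in-memory index is built per upload, whereas the Chroma client kept serving documents from an earlier collection. The lifecycle difference can be shown without either library; the `ToyStore` below is a deliberately naive stand-in for a vector store, not real retrieval:

```python
class ToyStore:
    """Minimal stand-in for a vector store, with word-overlap 'retrieval'."""

    def __init__(self):
        self.docs = []

    def add(self, docs):
        self.docs.extend(docs)

    def retrieve(self, query):
        # naive "retrieval": return docs sharing a word with the query
        words = set(query.lower().split())
        return [d for d in self.docs if words & set(d.lower().split())]


def handle_upload_shared(store, docs):
    """Reusing one store across uploads accumulates old documents -> stale answers."""
    store.add(docs)
    return store


def handle_upload_fresh(docs):
    """Rebuilding the store per upload retrieves only from the current PDF."""
    store = ToyStore()
    store.add(docs)
    return store
```

In LangChain terms, the fresh-per-upload pattern corresponds to `FAISS.from_documents(chunks, embeddings)` inside `pdf_handler`, or, if staying on Chroma, passing a unique `collection_name` per upload so each PDF gets its own collection.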
1,428
implement RAG
BM25Retriever + ChromaDB Hybrid Search Optimization using LangChain
https://stackoverflow.com/questions/79477745/bm25retriever-chromadb-hybrid-search-optimization-using-langchain
<p>For those who have integrated the ChromaDB client with the LangChain framework, I am proposing the following approach to implement hybrid search (vector search + <code>BM25Retriever</code>):</p> <pre><code>from typing import List

from langchain_chroma import Chroma
import chromadb
from chromadb.config import Settings
from langchain_openai import OpenAIEmbeddings
from langchain_community.retrievers import BM25Retriever
from langchain.retrievers import EnsembleRetriever
from langchain_core.documents import Document
from langgraph.graph import START, StateGraph
from typing_extensions import TypedDict

# Assuming that you have instantiated a Chroma client and integrated it into LangChain (below is an example)
&quot;&quot;&quot;
persistent_client = chromadb.PersistentClient(path=&quot;./test&quot;, settings=Settings(allow_reset=True))
collection = persistent_client.get_or_create_collection(
    name=&quot;example&quot;,
    metadata={
        &quot;hnsw:space&quot;: &quot;cosine&quot;,  # you can add other HNSW parameters if you want
    }
)
chroma = Chroma(
    client=persistent_client,
    collection_name=collection.name,
    embedding_function=OpenAIEmbeddings(model=&quot;text-embedding-3-large&quot;))
&quot;&quot;&quot;

def hybrid_search(query: str, k: int = 5):
    &quot;&quot;&quot;Perform a hybrid search (similarity_search + BM25Retriever) in the collection.&quot;&quot;&quot;
    # Get all raw documents from the ChromaDB
    raw_docs = chroma.get(include=[&quot;documents&quot;, &quot;metadatas&quot;])
    # Convert them into Document objects
    documents = [
        Document(page_content=doc, metadata=meta)
        for doc, meta in zip(raw_docs[&quot;documents&quot;], raw_docs[&quot;metadatas&quot;])
    ]
    # Create a BM25Retriever from the documents
    bm25_retriever = BM25Retriever.from_documents(documents=documents, k=k)
    # Create a vector search retriever from the ChromaDB instance
    similarity_search_retriever = chroma.as_retriever(
        search_type=&quot;similarity&quot;,
        search_kwargs={'k': k}
    )
    # Ensemble the retrievers using LangChain's EnsembleRetriever object
    ensemble_retriever = EnsembleRetriever(
        retrievers=[similarity_search_retriever, bm25_retriever],
        weights=[0.5, 0.5]
    )
    # Retrieve k relevant documents for the query
    # If needed, we can use the ainvoke(query) method to retrieve the docs asynchronously
    return ensemble_retriever.invoke(query)


# Graph nodes state approach
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str

# --- Define graph nodes (retrieve, generate, etc.) ---
def retrieve(state: State) -&gt; dict:
    retrieved_docs = hybrid_search(state[&quot;question&quot;], 3)
    return {&quot;context&quot;: retrieved_docs}
</code></pre> <p><strong>Note</strong>: The above code is just a sequence that contains <strong>exclusively</strong> the retrieval component, to be further integrated into the application structure and RAG flow.</p> <p>My question is the following: is there a better approach (simpler or cleaner code) that can be used for retrieval over millions of documents?</p>
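LangChain's `EnsembleRetriever` combines the two ranked lists with weighted Reciprocal Rank Fusion. The fusion step itself is a few lines of pure Python; the constant `c=60` is the conventional RRF smoothing value, and the weights mirror the `[0.5, 0.5]` in the question:

```python
def reciprocal_rank_fusion(ranked_lists, weights=None, c=60):
    """Fuse several ranked lists of doc ids via weighted Reciprocal Rank Fusion.

    Each document scores sum_i(weight_i / (c + rank_i)); documents missing
    from a list simply contribute nothing for that list.
    """
    if weights is None:
        weights = [1.0] * len(ranked_lists)
    scores = {}
    for ranking, w in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + w / (c + rank)
    return sorted(scores, key=scores.get, reverse=True)


# "b" appears near the top of both lists, so it wins the fused ranking
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "d"]], weights=[0.5, 0.5])
# -> ["b", "a", "d", "c"]
```

On the scaling question: at millions of documents, the in-memory `BM25Retriever` (which requires pulling every document out of Chroma on each rebuild) is usually the bottleneck, and a backend that performs sparse/full-text scoring server-side tends to be preferred; the fusion step above stays the same either way.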
1,429
implement RAG
want to generate memory based rag with vector stores for storing documents
https://stackoverflow.com/questions/78387076/want-to-generate-memory-based-rag-with-vector-stores-for-storing-documents
<p>I'm working on building a memory-based chat system that utilizes vector retrieval for multiple users across multiple sessions. I'm using LangChain, OpenAI, and ChromaDB in Python.</p> <p>My current implementation involves integrating LangChain's VectorDBQA with OpenAI for language processing and ChromaDB for vector retrieval. However, I'm encountering challenges in properly integrating these components, especially accessing VectorDB and resolving errors when invoking the system.</p> <p>Here's a summary of what I've attempted so far:</p> <ul> <li>Initialized VectorDB and created a retriever.</li> <li>Configured a ChatPromptTemplate with ChatOpenAI model for conversational prompts.</li> <li>Set up a ChatMessageHistory for managing user sessions.</li> <li>Tried integrating VectorDBQA with ChatOpenAI within a RunnableWithMessageHistory object.</li> </ul> <p>Despite these efforts, I'm struggling with accessing VectorDB with a runnable chain in LangChain, and encountering errors during invocation.</p> <p>Could someone guide me on integrating LangChain's VectorDBQA with OpenAI and ChromaDB for a memory-based chat system in Python? Any insights or suggestions on resolving the mentioned issues would be greatly appreciated.</p> <pre><code>from langchain.chains import VectorDBQA
from langchain.llms import OpenAI

# Now we can load the persisted database from disk, and use it as normal.
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type=&quot;stuff&quot;, vectorstore=vectordb)
retriever = vectordb.as_retriever(k=4)

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai.chat_models import ChatOpenAI

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages(
    [
        (
            &quot;system&quot;,
            &quot;You're an assistant who's good at {ability}. Respond in 20 words or fewer&quot;,
        ),
        MessagesPlaceholder(variable_name=&quot;history&quot;),
        (&quot;human&quot;, &quot;{input}&quot;),
    ]
)
runnable = prompt | model

from langchain_core.runnables import ConfigurableFieldSpec
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai.chat_models import ChatOpenAI

llm = ChatOpenAI(model=&quot;gpt-3.5-turbo&quot;, temperature=0.7)

store = {}

def get_session_history(user_id: str, conversation_id: str) -&gt; BaseChatMessageHistory:
    if (user_id, conversation_id) not in store:
        store[(user_id, conversation_id)] = ChatMessageHistory()
    return store[(user_id, conversation_id)]

with_message_history = RunnableWithMessageHistory(
    runnable=retriever | llm,
    get_session_history=get_session_history,
    input_messages_key=&quot;query&quot;,
    history_messages_key=&quot;history&quot;,
    history_factory_config=[
        ConfigurableFieldSpec(
            id=&quot;user_id&quot;,
            annotation=str,
            name=&quot;User ID&quot;,
            description=&quot;Unique identifier for the user.&quot;,
            default=&quot;&quot;,
            is_shared=True,
        ),
        ConfigurableFieldSpec(
            id=&quot;conversation_id&quot;,
            annotation=str,
            name=&quot;Conversation ID&quot;,
            description=&quot;Unique identifier for the conversation.&quot;,
            default=&quot;&quot;,
            is_shared=True,
        ),
    ],
)

user_id = &quot;user123&quot;
conversation_id = &quot;conv456&quot;
question = &quot;What is the capital of France?&quot;

# Invoke the chain with the user's question and conversation context
response = with_message_history.invoke(
    {&quot;query&quot;: question},
    config={&quot;configurable&quot;: {&quot;user_id&quot;: user_id, &quot;conversation_id&quot;: conversation_id}},
)
</code></pre> <p>So, I have two questions.</p> <ol> <li>How can I build a memory-based RAG with vector retrieval?</li> <li>How do I use a runnable chain with OpenAI and ChromaDB to do this?</li> </ol>
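The session-scoping half of the question can be isolated from LangChain entirely. A pure-Python sketch of the keyed history factory that `RunnableWithMessageHistory` expects follows; `ChatHistory` here is a toy stand-in for `ChatMessageHistory`:

```python
class ChatHistory:
    """Toy per-session message history (stands in for ChatMessageHistory)."""

    def __init__(self):
        self.messages = []

    def add(self, role, text):
        self.messages.append((role, text))


store = {}


def get_session_history(user_id: str, conversation_id: str) -> ChatHistory:
    """Return one history object per (user, conversation) pair.

    This mirrors the history_factory_config pattern in the question:
    RunnableWithMessageHistory calls this factory with the values supplied
    under config={"configurable": {...}}, so each pair keeps its own memory.
    """
    key = (user_id, conversation_id)
    if key not in store:
        store[key] = ChatHistory()
    return store[key]
```

The point of the factory is identity: the same `(user_id, conversation_id)` pair must always yield the same history object, while different pairs stay isolated. In production the dict would be replaced with a persistent backend keyed the same way.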
1,430
implement RAG
Spring AI with MongoDB Vector Store - How to Retrieve Source URL for RAG
https://stackoverflow.com/questions/79525963/spring-ai-with-mongodb-vector-store-how-to-retrieve-source-url-for-rag
<p>I am working on creating a RAG application using Spring AI and MongoDB for the vector store.</p> <p>My current entry point/controller is as follows (based heavily on Spring AI's own docs):</p> <pre><code>@RestController
public class ChatController {

    private final OllamaChatModel chatModel;
    private final VectorStore vectorStore;
    private static final Logger logger = LoggerFactory.getLogger(ChatController.class);

    @Autowired
    public ChatController(OllamaChatModel chatModel, VectorStore vectorStore) {
        this.chatModel = chatModel;
        this.vectorStore = vectorStore;
    }

    @GetMapping(&quot;/ai/generate&quot;)
    public Map&lt;String, String&gt; generate(
            @RequestParam(value = &quot;message&quot;, defaultValue = &quot;Tell me a joke.&quot;) String message) {
        var searchRequest = SearchRequest.builder().topK(3).similarityThreshold(.8).build();
        ChatResponse response = ChatClient.builder(chatModel).build().prompt(AppConstants.DEFAULT_SYS_PROMPT)
                .advisors(new QuestionAnswerAdvisor(vectorStore, searchRequest)).user(message).call()
                .chatResponse();
        logger.info(&quot;Response is {}&quot;, response);
        return Map.of(&quot;generation&quot;, response.getResult().getOutput().getText());
    }
}
</code></pre> <p>The <code>VectorStore</code> is not explicitly configured in code (i.e. no customization); it relies solely on the <code>spring-ai-mongodb-atlas-store-spring-boot-starter</code> dependency and application properties for the DB URL, DB name, vector index, etc.</p> <p>The collection used in the <code>VectorStore</code> has a <code>url</code> String property set inside the <code>metadata</code> Object property. I confirmed that <code>metadata.url</code> is available in the documents that are used in the &quot;before&quot; function of <code>QuestionAnswerAdvisor</code>. <a href="https://i.sstatic.net/F0tTiBHV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0tTiBHV.png" alt="metadata.url is populated in QuestionAnswerAdvisor" /></a></p> <p>How can I have the model return a link to the source URL it uses as context? For reference, I am trying to recreate something like this: <a href="https://python.langchain.com/docs/how_to/qa_sources/" rel="nofollow noreferrer">https://python.langchain.com/docs/how_to/qa_sources/</a></p> <p>I have tried the following:</p> <pre><code>ChatClient.builder(chatModel).build().prompt(AppConstants.DEFAULT_SYS_PROMPT)
        .advisors(new QuestionAnswerAdvisor(vectorStore, searchRequest)).user(message).call()
        .entity(ResponseModel.class)
</code></pre> <p>where <code>ResponseModel</code> is a very simple POJO:</p> <pre><code>public class ResponseModel {

    private String response;
    private String url;

    public String getResponse() { return response; }

    public void setResponse(String response) { this.response = response; }

    public String getUrl() { return url; }

    public void setUrl(String url) { this.url = url; }
}
</code></pre> <p>but the application was not able to populate the URL with the data (the <code>response</code> property was set correctly).</p> <p>How can this be implemented in Spring AI? Thanks.</p>
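The LangChain recipe the question links to does not ask the model to emit the URL at all: it returns the retrieved documents alongside the generation and reads the source from their metadata. The pattern is language-agnostic, sketched here in Python for brevity with toy `retrieve`/`generate` callables (purely illustrative names, not Spring AI or LangChain APIs):

```python
def answer_with_sources(question, retrieve, generate):
    """Return the generation plus the source URLs of the context documents.

    `retrieve` returns documents shaped like
    {"text": ..., "metadata": {"url": ...}}; `generate` produces the answer
    from question + context. The URLs come from the retrieval step, not from
    the model, so they cannot be hallucinated or dropped by the LLM.
    """
    docs = retrieve(question)
    answer = generate(question, [d["text"] for d in docs])
    return {
        "generation": answer,
        "sources": [d["metadata"]["url"] for d in docs if "url" in d["metadata"]],
    }
```

Translated back to Spring AI, this suggests reading the retrieved documents' `metadata.url` in the controller (or a custom advisor that captures the context documents) and appending them to the response map yourself, rather than expecting `.entity(ResponseModel.class)` to make the model produce the URL.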
1,431
implement RAG
Neo4j GraphRAG Document Insertion but nothing is coming in my Neo4j Workspace
https://stackoverflow.com/questions/79079178/neo4j-graphrag-document-insertion-but-nothing-is-coming-in-my-neo4j-workspace
<pre><code>import logging
from neo4j import GraphDatabase
from neo4j_graphrag.indexes import upsert_vector
import openai
from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from dotenv import load_dotenv

load_dotenv()

# Set up logging to both file and console
logging.basicConfig(
    level=logging.INFO,  # Log level
    format=&quot;%(asctime)s - %(levelname)s - %(message)s&quot;,  # Log format
    handlers=[
        logging.FileHandler(&quot;pdf_processing.log&quot;),  # Log to file
        logging.StreamHandler()  # Log to console
    ]
)

URI = URI
AUTH = (&quot;neo4j&quot;, &quot;PWD&quot;)

# Set your OpenAI API key here
openai.api_key = openaikey

# Connect to Neo4j database
driver = GraphDatabase.driver(URI, auth=AUTH)

# Function to extract text from a PDF file
def get_pdf_text(pdf_path):
    logging.info(f&quot;Starting PDF text extraction from: {pdf_path}&quot;)
    text = &quot;&quot;
    pdf_reader = PdfReader(pdf_path)
    for page_num, page in enumerate(pdf_reader.pages):
        extracted_text = page.extract_text() if page.extract_text() else ''
        text += extracted_text
        logging.info(f&quot;Extracted text from page {page_num + 1}: {len(extracted_text)} characters&quot;)
    logging.info(f&quot;Completed text extraction, total length: {len(text)} characters&quot;)
    return text

# Function to split the extracted text into chunks
def get_text_chunks(text, chunk_size=1000, chunk_overlap=200):
    logging.info(f&quot;Starting text splitting into chunks, chunk size: {chunk_size}, overlap: {chunk_overlap}&quot;)
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    chunks = text_splitter.split_text(text)
    logging.info(f&quot;Completed text splitting. Total chunks created: {len(chunks)}&quot;)
    return chunks

# Function to generate text embedding for a given chunk
def get_text_embedding(text_chunk):
    logging.info(f&quot;Generating embedding for chunk of size {len(text_chunk)}&quot;)
    response = openai.Embedding.create(
        input=text_chunk,
        model=&quot;text-embedding-ada-002&quot;  # OpenAI's embedding model (1536-dimensional vector)
    )
    embedding = response['data'][0]['embedding']  # Extract the embedding vector
    logging.info(f&quot;Generated embedding vector of length {len(embedding)}&quot;)
    return embedding

# Function to process a PDF, chunk the text, generate embeddings, and upsert vectors into Neo4j
def process_pdf_and_upsert_vectors(pdf_path):
    logging.info(f&quot;Starting processing of PDF: {pdf_path}&quot;)
    # Step 1: Extract the text from the PDF
    text = get_pdf_text(pdf_path)
    # Step 2: Split the text into chunks
    chunks = get_text_chunks(text)
    # Step 3: Loop through each chunk and generate embeddings
    for i, chunk in enumerate(chunks):
        embedding = get_text_embedding(chunk)
        # Step 4: Upsert the vector into Neo4j (node_id can be dynamic)
        logging.info(f&quot;Upserting vector for chunk {i + 1}&quot;)
        upsert_vector(
            driver,
            node_id=i + 1,  # Assuming each chunk gets a new node
            embedding_property=&quot;vectorProperty&quot;,
            vector=embedding,
        )
        logging.info(f&quot;Upserted vector for node {i + 1}&quot;)
    logging.info(f&quot;Completed processing of PDF: {pdf_path}&quot;)

# Example usage:
process_pdf_and_upsert_vectors(pdf_path)
</code></pre> <p>I read the official <strong>Neo4j</strong> documentation and tried to implement RAG following it, but no output appears. I also tried extracting the text from the PDF document, converting it into chunks, and upserting them into Neo4j, but there is still no output and there are no embeddings in my Neo4j workspace.</p> <p>Can anyone help me solve the issue, so <strong>I can insert my embeddings</strong> into Neo4j and then do a <strong>vector search</strong> based on the user query?</p>
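One thing worth checking (an assumption, to verify against the `neo4j-graphrag` docs): `upsert_vector` sets a property on an already existing node matched by id, so if no `Chunk` nodes were ever created, the call can succeed silently without writing anything, which matches the "nothing appears" symptom. Independently of that, the chunking step of the pipeline can be sanity-checked with no dependencies at all; below is a plain sliding-window splitter mirroring the `chunk_size`/`chunk_overlap` parameters (`RecursiveCharacterTextSplitter` is smarter, since it prefers paragraph and sentence boundaries):

```python
def chunk_text(text, chunk_size=1000, chunk_overlap=200):
    """Fixed-size sliding-window splitter: consecutive chunks share
    `chunk_overlap` characters, so context is not cut mid-thought."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]
```

Running this over the extracted PDF text and logging `len(chunks)` confirms whether text extraction and splitting produced anything before any embedding or Neo4j call is made.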
1,432
implement RAG
Getting Error while doing integration for AgenticRAG and Crew-Ai for llms
https://stackoverflow.com/questions/79007732/getting-error-while-doing-integration-for-agenticrag-and-crew-ai-for-llms
<p>While using agentic RAG with CrewAI and its integrated tools such as <code>pdfTool</code> or <code>TXTSearchTool</code>, I am getting the following error:</p> <pre><code>ImportError: cannot import name 'PydanticDeprecationWarning' from 'pydantic' (C:\Users\Administrator\Desktop\AgentRAGsearch\venv\lib\site-packages\pydantic\__init__.cp310-win_amd64.pyd)
</code></pre> <p>I attempted to implement a selecting route to add our own local Ollama instead of <code>langchain_openai</code>, but it did not work. I also attempted to downgrade Pydantic from v2 to v1, as I had seen many people get stuck because of it. I also attempted some manual changes to the installed packages in the site-packages folder, but that did not work for me either.</p>
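`PydanticDeprecationWarning` was introduced in Pydantic 2.0, so this `ImportError` usually means a v1 install is shadowing the v2 release that CrewAI's tool stack expects; downgrading to v1 points in the wrong direction. A small sketch of the version logic (the usual fix is simply reinstalling with `pip install "crewai[tools]" "pydantic>=2"` and letting pip resolve the pins; that command is an assumption to adapt to your setup):

```python
def parse_version(v: str) -> tuple:
    """Turn '2.7.1' into (2, 7, 1) for comparison (ignores pre-release tags)."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def has_deprecation_warning_class(pydantic_version: str) -> bool:
    """PydanticDeprecationWarning only exists from Pydantic 2.0 on, so any
    v1.x install will raise exactly the ImportError seen in the question."""
    return parse_version(pydantic_version) >= (2,)
```

Checking `pydantic.VERSION` in the affected virtualenv against this predicate tells you immediately whether the environment, rather than the CrewAI code, is the problem.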
1,433