| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
text classification
|
Multinomial Naive Bayes in Text Classification
|
https://stackoverflow.com/questions/40394828/multinomial-naive-bayes-in-text-classification
|
<p>I have a set of features in which one of the features is a negative value. I intend to use Multinomial Naive Bayes for classification of my text document but the negative feature throws an error. Can I use Gaussian Naive Bayes for this setting of text classification?</p>
| 1,234
|
|
text classification
|
training libsvm for text classification(sentiment)
|
https://stackoverflow.com/questions/10734728/training-libsvm-for-text-classificationsentiment
|
<p>From the following links I came up with an approach. I want to ask whether I am doing it right or going about it the wrong way. If I am on the wrong track, please guide me.</p>
<p>Links<br/>
<a href="https://stackoverflow.com/questions/9021658/using-libsvm-for-text-classification-c-sharp">Using libsvm for text classification c#</a><br/>
<a href="https://stackoverflow.com/questions/6172159/how-to-use-libsvm-for-text-classification">How to use libsvm for text classification?</a></p>
<p>My way</p>
<p>First, calculate the word count in each training set.<br>
Then create a mapping list for each word.</p>
<p>eg</p>
<pre><code>sample word count from the training set
|-----|-----------|
| | counts |
|-----|-----|-----|
|text | +ve | -ve |
|-----|-----|-----|
|this | 3 | 3 |
|forum| 1 | 0 |
|is | 10 | 12 |
|good | 10 | 5 |
|-----|-----|-----|
</code></pre>
<p>positive training data</p>
<pre><code>this forum is good
</code></pre>
<p>so the training set will be</p>
<pre><code>+1 1:3 2:1 3:10 4:10
</code></pre>
<p>All of this is just what I gathered from the above links.<br>
Please help me.</p>
|
<p>You're doing it right.</p>
<p>I don't know why your label is called "+1" - it should be a simple integer (referring to the class "+ve"), but all in all it's the way to go. </p>
<p>For document classification you may want to take a look at liblinear which is specially designed for handling a lot of features.</p>
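To make the format concrete, here is a small hypothetical helper (not part of libsvm itself, purely for illustration) that turns the word-count table above into a libsvm-style training line:

```python
# Hypothetical helper for illustration: build a libsvm-format line from a
# word-count table. Feature ids are 1-based and follow the vocabulary order.
vocab = ["this", "forum", "is", "good"]                   # the mapping list
counts = {"this": 3, "forum": 1, "is": 10, "good": 10}    # counts for one class

def to_libsvm_line(label, vocab, counts):
    """Format one training example as '<label> <id>:<value> ...'."""
    feats = [f"{i + 1}:{counts[w]}" for i, w in enumerate(vocab) if counts.get(w)]
    return label + " " + " ".join(feats)

line = to_libsvm_line("+1", vocab, counts)
print(line)  # +1 1:3 2:1 3:10 4:10
```

Words with a zero count are simply omitted, since libsvm expects a sparse representation.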
| 1,235
|
text classification
|
Feature Construction for Text Classification using Autoencoders
|
https://stackoverflow.com/questions/24159098/feature-construction-for-text-classification-using-autoencoders
|
<p>Autoencoders can be used to reduce dimensionality in feature vectors - as far as I understand. In text classification a feature vector is normally constructed via a dictionary - which tends to be extremely large. I have no experience in using autoencoders, so my questions are:</p>
<ol>
<li>Could autoencoders be used to reduce dimensionality in text classification? (Why? / Why not?)</li>
<li>Has anyone already done this? A source would be nice, if so.</li>
</ol>
|
<p>Existing work uses autoencoders to build models at the sentence level. Basically, after training an autoencoder you can get a vector for each sentence. Since a document consists of sentences, you get a set of vectors for the document and can then do document classification on top of them. In my experience with various vector representations (e.g. those generated by autoencoders), doing so can give worse results than classification with bag of words. </p>
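The sentence-to-document aggregation described above can be sketched as follows. Note that `encode` here is only a toy stand-in for a trained autoencoder's encoder, included so the sketch is self-contained:

```python
import numpy as np

def encode(sentence, dim=16):
    """Toy stand-in for a trained encoder: a deterministic pseudo-embedding."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2 ** 32))
    return rng.standard_normal(dim)

def document_vector(sentences):
    """Average the per-sentence vectors into one fixed-size document vector."""
    return np.mean([encode(s) for s in sentences], axis=0)

doc = ["The model trains quickly.", "The results were mixed."]
vec = document_vector(doc)
print(vec.shape)  # (16,)
```

The fixed-size document vector can then be fed to any standard classifier, which is what makes the comparison against a bag-of-words baseline straightforward.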
| 1,236
|
text classification
|
Collocations in text classification
|
https://stackoverflow.com/questions/8102770/collocations-in-text-classification
|
<p>Suppose I have trained my classifier and I want to find the right sense of a word in a sentence. One feature people use is called collocation, where you consider words to the left/right of the ambiguous word, and position matters. I am curious why this approach works. What information does considering collocations give us that helps us in text classification? Moreover, why is the position important?</p>
|
<p>Check some details on why word sense disambiguation using collocations works here:
<a href="http://en.wikipedia.org/wiki/Yarowsky_algorithm" rel="nofollow">http://en.wikipedia.org/wiki/Yarowsky_algorithm</a>
It is essentially a semi-supervised method built on decision lists over collocational features.</p>
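A sketch of what position-aware collocation features look like in practice (the feature names are illustrative, not from any particular library):

```python
def collocation_features(tokens, i, window=2):
    """Context features for the ambiguous word at index i. The feature NAME
    encodes the position, so 'the' at -1 and 'the' at +1 stay distinct."""
    feats = {}
    for offset in range(-window, window + 1):
        if offset == 0:
            continue  # skip the ambiguous word itself
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"word_at({offset:+d})"] = tokens[j]
    return feats

sent = "I deposited the check at the bank yesterday".split()
print(collocation_features(sent, sent.index("bank")))
# {'word_at(-2)': 'at', 'word_at(-1)': 'the', 'word_at(+1)': 'yesterday'}
```

Because position is baked into the feature name, "river bank" and "bank approved" produce different feature sets even when they share context words, which is exactly the extra signal position contributes.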
| 1,237
|
text classification
|
Text classification for russian language
|
https://stackoverflow.com/questions/18011756/text-classification-for-russian-language
|
<p>I am facing a problem with text classification where I need to classify Russian texts. For feature extraction I use scikit-learn's TfidfTransformer and CountVectorizer, but when I run the code I get this error:</p>
<pre><code>'UnicodeDecodeError: 'utf8' codec can't decode byte 0xc2 in position 0:
invalid continuation byte'.
</code></pre>
<p>How can I correct this mistake? Here is the code in Python:</p>
<pre><code># -*- coding: utf-8 -*-
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from nltk.corpus import stopwords
import numpy as np
import numpy.linalg as LA
import os
import nltk
import re
import sys
from nltk import NaiveBayesClassifier
import nltk.classify
from nltk.tokenize import wordpunct_tokenize
from nltk.corpus import stopwords
import re
data_path = os.path.abspath(os.path.join('/home/lena/','corpus'))
official_path = os.path.join(data_path,'official')
#print official_path
official2_path = os.path.join(data_path,'official_2')
talk_path = os.path.join(data_path,'talk')
talk2_path = os.path.join(data_path,'talk_2')
#fiction_path = os.path.join(data_path,'fiction')
#fiction2_path = os.path.join(data_path,'fiction_2')
def get_text(path):
    with open(path, 'rU') as file:
        line = file.readlines()
    return ''.join(line)

def get_textdir(path):
    filelist = os.listdir(path)
    all_text = [get_text(os.path.join(path, f)) for f in filelist]
    return all_text
all_talk = get_textdir(talk_path)
all_official = get_textdir(official_path)
official_2 = get_textdir(official2_path)
talk_2 = get_textdir(talk2_path)
train_set = all_talk
test_set = talk_2
stopWords = stopwords.words('russian')
vectorizer = CountVectorizer(stop_words = stopWords)
print vectorizer
train = vectorizer.fit_transform(train_set).toarray()
test = vectorizer.transform(test_set).toarray()
print 'train set', train
print 'test set', test
transformer = TfidfTransformer()
transformer.fit(train)
print transformer.transform(train).toarray()
transformer.fit(test)
tfidf = transformer.transform(test)
print tfidf.todense()
</code></pre>
|
<p>Set the <code>charset</code> (or in 0.14, <code>encoding</code>) parameter on the vectorizer. For Russian text, that would probably be</p>
<pre><code>CountVectorizer(charset='koi8-r', stop_words=stopWords)
</code></pre>
<p>(but don't take my word for it and run something like <code>chardet</code> or <code>file</code> on your text files).</p>
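If installing `chardet` is not an option, a crude stdlib-only fallback is to try a few likely encodings in order. This is only a sketch and no substitute for real detection:

```python
def guess_encoding(raw_bytes, candidates=("utf-8", "koi8-r", "cp1251")):
    """Return the first candidate encoding that decodes the bytes cleanly."""
    for enc in candidates:
        try:
            raw_bytes.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

sample = "привет".encode("koi8-r")   # Cyrillic bytes that are not valid UTF-8
print(guess_encoding(sample))        # koi8-r
```

Keep in mind that legacy 8-bit encodings like cp1251 and koi8-r can both "successfully" decode the same bytes to different text, so order the candidates by likelihood and spot-check the result.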
| 1,238
|
text classification
|
Python Text Classification Error - expected string or bytes-like object
|
https://stackoverflow.com/questions/44365435/python-text-classification-error-expected-string-or-bytes-like-object
|
<p>I'm trying to do text classification for a large corpus (732,066 tweets) in python</p>
<pre><code># Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
#dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)
# Importing the dataset
cols = ["text","geocoordinates0","geocoordinates1","grid"]
dataset = pd.read_csv('tweets.tsv', delimiter = '\t', usecols=cols, quoting = 3, error_bad_lines=False, low_memory=False)
# Removing Non-ASCII characters
def remove_non_ascii_1(dataset):
    return ''.join([i if ord(i) < 128 else ' ' for i in dataset])
# Cleaning the texts
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
corpus = []
for i in range(0, 732066):
    review = re.sub('[^a-zA-Z]', ' ', dataset['text'][i])
    review = review.lower()
    review = review.split()
    ps = PorterStemmer()
    review = [ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
    review = ' '.join(review)
    corpus.append(review)
# Creating the Bag of Words model
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
X = cv.fit_transform(corpus).toarray()
y = dataset.iloc[:, 1].values
# Splitting the dataset into the Training set and Test set
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
# Fitting Naive Bayes to the Training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Applying k-Fold Cross Validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
accuracies.std()
</code></pre>
<p>This is the error I get; it is where I'm stuck and unable to proceed with the rest of the machine-learning text classification:</p>
<blockquote>
<pre><code>Traceback (most recent call last):
File "<ipython-input-2-3fac33122b74>", line 2, in <module>
review = re.sub('[^a-zA-Z]', ' ', dataset['text'][i])
File "C:\Anaconda3\envs\py35\lib\re.py", line 182, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or bytes-like object
</code></pre>
</blockquote>
<p>Thanks in advance for the help.</p>
|
<p>Try</p>
<pre><code>str(dataset.loc[dataset.index[i], 'text'])
</code></pre>
<p>That'll convert it to a str object, from whatever type it was before.</p>
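A slightly more defensive variant (toy data, illustrative names) is to coerce the whole column up front, so `re.sub` never receives a float NaN or an int parsed from a malformed row:

```python
import re
import pandas as pd

dataset = pd.DataFrame({"text": ["Nice day!", None, 123]})  # toy stand-in data

# Cast the whole column to str once instead of converting inside the loop.
texts = dataset["text"].astype(str)
cleaned = [re.sub("[^a-zA-Z]", " ", t) for t in texts]
print(cleaned[0])  # 'Nice day '
```

Casting once is also faster than wrapping each lookup in `str(...)` inside a loop over 732,066 rows.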
| 1,239
|
text classification
|
Text classification performance
|
https://stackoverflow.com/questions/37969425/text-classification-performance
|
<p>So I am using the textblob Python library, but the performance is lacking.</p>
<p>I already serialize it and load it before the loop( using pickle ).</p>
<p>It currently takes ~0.1 (for the small training data) and ~0.3 with the 33'000 training data. I need to make it faster; is it even possible?</p>
<h1><strong>Some code:</strong></h1>
<pre><code># Pass trainings before loop, so we can make performance a lot better
trained_text_classifiers = load_serialized_classifier_trainings(config["ALL_CLASSIFICATORS"])
# Specify which classifiers are used by which classes
filter_classifiers = get_classifiers_by_resource_names(trained_text_classifiers, config["FILTER_CLASSIFICATORS"])
signal_classifiers = get_classifiers_by_resource_names(trained_text_classifiers, config["SIGNAL_CLASSIFICATORS"])
for (url, headers, body) in iter_warc_records(warc_file, **warc_filters):
    start_time = time.time()
    body_text = strip_html(body)
    # Check if the url body passes the filters; if yes, index, if no, ignore
    if Filter.is_valid(body_text, filter_classifiers):
        print "Indexing", url.url
        resp = indexer.index_document(body, body_text, signal_classifiers, url=url, headers=headers, links=bool(args.save_linkgraph_domains))
    else:
        print "\n"
        print "Filtered out", url.url
        print "\n"
        resp = 0
</code></pre>
<p>This is the loop which performs a check on each WARC record's body and metadata.</p>
<p>There are 2 text classification checks here.</p>
<p>1) In Filter( very small training data ):</p>
<pre><code>if trained_text_classifiers.classify(body_text) == "True":
    return True
else:
    return False
</code></pre>
<p>2) In index_document( 33'000 training data ):</p>
<pre><code>prob_dist = trained_text_classifier.prob_classify(body)
prob_dist.max()
# Return the probability of spam
return round(prob_dist.prob("spam"), 2)
</code></pre>
<p>The classify and prob_classify methods are what take the toll on performance.</p>
|
<p>You can use feature selection on your data. A good feature selection method can remove up to 90% of the features and still preserve classification performance.
In feature selection you keep only the top features (in a <strong>Bag of Words</strong> model, the most influential words) and train the model on those words (features). This reduces the dimensionality of your data (and also helps avoid the curse of dimensionality).
Here is a good survey:
<a href="https://arxiv.org/pdf/1602.02850.pdf" rel="nofollow">Survey on feature selection</a></p>
<p>In brief:</p>
<p>Two feature selection approaches are available: filtering and wrapping.</p>
<p>The filtering approach is mostly based on information theory. Search for "mutual information", "chi2", and so on for this type of feature selection.</p>
<p>The wrapping approach uses the classification algorithm itself to estimate the most important features. For example, you select some words and evaluate classification performance (recall, precision).</p>
<p>Some other approaches can also be useful. LSA and LSI can improve both classification performance and time:
<a href="https://en.wikipedia.org/wiki/Latent_semantic_analysis" rel="nofollow">https://en.wikipedia.org/wiki/Latent_semantic_analysis</a></p>
<p>You can use scikit-learn for feature selection and LSA:</p>
<p><a href="http://scikit-learn.org/stable/modules/feature_selection.html" rel="nofollow">http://scikit-learn.org/stable/modules/feature_selection.html</a></p>
<p><a href="http://scikit-learn.org/stable/modules/decomposition.html" rel="nofollow">http://scikit-learn.org/stable/modules/decomposition.html</a></p>
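A minimal chi2 filtering example with scikit-learn (toy corpus, illustrative only):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["spam spam buy now", "cheap spam offer now",
        "meeting agenda notes", "project meeting today"]
labels = [1, 1, 0, 0]                       # 1 = spam, 0 = ham

X = CountVectorizer().fit_transform(docs)   # full bag-of-words matrix
selector = SelectKBest(chi2, k=4)           # keep only the 4 top-scoring words
X_reduced = selector.fit_transform(X, labels)
print(X_reduced.shape)  # (4, 4)
```

On real data you would choose `k` by cross-validation; the point is that the classifier afterwards only ever sees the reduced matrix.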
| 1,240
|
text classification
|
Neural networks for Text Classification
|
https://stackoverflow.com/questions/33458682/neural-networks-for-text-classification
|
<p>I am trying to train a model for text classification. I have a large labeled dataset. I have tried the scikit-learn classifiers NaiveBayes, KNeighborsClassifier, RandomForest, etc., but I cannot get an accuracy above 30%. How can I use neural networks for text classification? Here is the algorithm I have used so far:</p>
<pre><code> df = read_csv(filename, sep="|", na_values=[" "]).fillna(" ")
le = preprocessing.LabelEncoder()
target = le.fit_transform(df['label'])
vectorizer = TfidfVectorizer(sublinear_tf=True,
                             max_df=0.3,
                             min_df=100,
                             lowercase=True,
                             stop_words='english',
                             max_features=20000,
                             tokenizer=tokenize,
                             ngram_range=(1, 4))
train = vectorizer.fit_transform(df['data'])
X_train, X_test, y_train, y_test = cross_validation.train_test_split(train, target, test_size=5000, random_state=0)
clf = MultinomialNB(alpha=.1)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
</code></pre>
<p>My dataset contains about 300k documents, and the vectorizer can produce up to 50k features. I have even tried chi-square to reduce the number of features to 5k, but accuracy still does not improve much. </p>
<p><strong>Nature of the data</strong>
Documents are sets of comments and notes on an incident. Labels are high-level categories for the incidents. As expected, the comments and notes are subject to human error and misspellings. </p>
|
<p>You need to improve the quality of your features. I suggest you form a new question around how to design features for this problem before dealing with the classifier algorithm. Given the bad accuracy you report across several methods, and your description of the data, that is the weakest point and you should address it first.</p>
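One hedged, concrete idea in that direction: since the notes contain misspellings, character n-gram features tend to be more forgiving than word tokens, because a typo still shares most of its character n-grams with the correct word. A sketch:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# 'char_wb' builds character n-grams inside word boundaries, so the token
# "incidnet" (a typo) still overlaps heavily with "incident".
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vec.fit_transform(["server incident report", "servr incidnet report"])
print(X.shape[0])  # 2
```

Whether this actually helps depends on the data; it is one candidate among several (spell normalization, domain-specific tokenization, metadata features) worth trying before switching classifiers.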
| 1,241
|
text classification
|
Text classification with Tensorflow_Hub
|
https://stackoverflow.com/questions/54055415/text-classification-with-tensorflow-hub
|
<p>I'm following this tutorial,
<a href="https://medium.com/tensorflow/building-a-text-classification-model-with-tensorflow-hub-and-estimators-3169e7aa568" rel="nofollow noreferrer">https://medium.com/tensorflow/building-a-text-classification-model-with-tensorflow-hub-and-estimators-3169e7aa568</a>, on how to build a text classifier with TensorFlow Hub. I've adapted it to my dataset, which consists of two columns, "post" and "tags". There are 8 tags.
When I run the script it gives me the error: AttributeError: type object 'Reduction' has no attribute 'SUM_OVER_BATCH_SIZE'.
I want to see whether the code works for me, but I'm getting this error.
Code:</p>
<pre><code> # -*- coding: utf-8 -*-
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import json
import pickle
import urllib
from sklearn.preprocessing import MultiLabelBinarizer

print(tf.__version__)

data = pd.read_csv('dataset3.csv')
descriptions = data['post']
genres = data['tags']

train_size = int(len(descriptions) * .8)
train_descriptions = descriptions[:train_size].astype('str')
train_genres = genres[:train_size]
test_descriptions = descriptions[train_size:].astype('str')
test_genres = genres[train_size:]

description_embeddings = hub.text_embedding_column(
    "movie_descriptions",
    module_spec="https://tfhub.dev/google/universal-sentence-encoder/2"
)

top_genres = ['n', 'a', 'g', 'o', 'i', 'u', 'r', 'v']

encoder = MultiLabelBinarizer()
encoder.fit_transform(train_genres)
train_encoded = encoder.transform(train_genres)
test_encoded = encoder.transform(test_genres)
num_classes = len(encoder.classes_)

# Print all possible genres and the labels for the first movie in our training dataset
print(encoder.classes_)
print(train_encoded[0])

description_embeddings = hub.text_embedding_column("descriptions", module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=False)

multi_label_head = tf.contrib.estimator.multi_label_head(
    num_classes,
    loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE
)

features = {
    "descriptions": np.array(train_descriptions).astype(np.str)
}
labels = np.array(train_encoded).astype(np.int32)
train_input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True, batch_size=32, num_epochs=25)

estimator = tf.contrib.estimator.DNNEstimator(
    head=multi_label_head,
    hidden_units=[64, 10],
    feature_columns=[description_embeddings])

estimator.train(input_fn=train_input_fn)

eval_input_fn = tf.estimator.inputs.numpy_input_fn({"descriptions": np.array(test_descriptions).astype(np.str)}, test_encoded.astype(np.int32), shuffle=False)
estimator.evaluate(input_fn=eval_input_fn)

# Test our model on some raw description data
raw_test = [
    "Firma doo",              # Documentary
    "Bočak 23",               # Adventure
    "51250 Novi Vinodolski",  # Comedy
]

# Generate predictions
predict_input_fn = tf.estimator.inputs.numpy_input_fn({"descriptions": np.array(raw_test).astype(np.str)}, shuffle=False)
results = estimator.predict(predict_input_fn)

# Display predictions
for movie_genres in results:
    top_2 = movie_genres['probabilities'].argsort()[-2:][::-1]
    for genre in top_2:
        text_genre = encoder.classes_[genre]
        print(text_genre + ': ' + str(round(movie_genres['probabilities'][genre] * 100, 2)) + '%')
    print('')
</code></pre>
| 1,242
|
|
text classification
|
New techniques of text classification nlp
|
https://stackoverflow.com/questions/59007277/new-techniques-of-text-classification-nlp
|
<p>I have to build a text classification program with newer techniques (not using "bag of words" or "TF-IDF").
I read about EDA but I was confused.
Any ideas?</p>
| 1,243
|
|
text classification
|
Feature Selection for Text Classification
|
https://stackoverflow.com/questions/19220621/feature-selection-for-text-classification
|
<p>I am working on a text classification problem in which the 100 most frequent words are selected as features. I believe the results could be improved if I use a better feature selection method? Any ideas? Could TF-IDF work? If yes, then how?</p>
|
<p>To improve the results you can use feature selection: </p>
<p>1) Information gain </p>
<p>2) Chi-square</p>
<p>3) Mutual information</p>
<p>4) Term frequency</p>
<p>As for TF-IDF, see this <a href="http://www.tfidf.com/" rel="nofollow">link</a>; it will help you. </p>
| 1,244
|
text classification
|
SQL/BigQuery text classification
|
https://stackoverflow.com/questions/64881264/sql-bigquery-text-classification
|
<p>I need to implement a simple text classification using regex, and for this I thought of applying a simple CASE WHEN statement. But rather than stopping once one condition is met, I want to iterate over all the CASEs.</p>
<p>For example,</p>
<pre class="lang-sql prettyprint-override"><code>with `table` as(
SELECT 'It is undeniable that AI will change the landscape of the future. There is a frequent increase in the demand for AI-related jobs, especially in data science and machine learning positions. It is believed that artificial intelligence will change the world, just like how electricity changed the world about 100 years ago. As Professor Andrew NG has famously stated multiple times “Artificial Intelligence is the new electricity.” We have advanced immensely in the field of artificial intelligence. With the increase in the processing and computational power, thanks to graphical processing units (GPUs), and also due to the abundance of data, we have reached a position of supremacy in Deep Learning and modern algorithms.' as text
)
SELECT
CASE
WHEN REGEXP_CONTAINS(text, r'(?i)ai') THEN 'AI'
WHEN REGEXP_CONTAINS(text, r'(?i)computational power') THEN 'Engineering'
WHEN REGEXP_CONTAINS(text, r'(?i)deep learning') THEN 'Deep Learning'
END as topic,
text
FROM `table`
</code></pre>
<p>With this query, the text is classified as AI, because it is the first condition that is met, but it should be classified as AI, Engineering and <a href="https://en.wikipedia.org/wiki/Deep_learning" rel="nofollow noreferrer">deep learning</a> in an array or in three different rows, because all three conditions are met.</p>
<p>How can I classify the text applying all the regex/conditions?</p>
|
<p>I feel the below is the most generic and reusable solution (BigQuery Standard SQL):</p>
<pre><code>#standardSQL
with `table` as(
select 'It is undeniable that AI will change the landscape of the future. There is a frequent increase in the demand for AI-related jobs, especially in data science and machine learning positions. It is believed that artificial intelligence will change the world, just like how electricity changed the world about 100 years ago. As Professor Andrew NG has famously stated multiple times “Artificial Intelligence is the new electricity.” We have advanced immensely in the field of artificial intelligence. With the increase in the processing and computational power, thanks to graphical processing units (GPUs), and also due to the abundance of data, we have reached a position of supremacy in Deep Learning and modern algorithms.' as text
), classification as (
select 'ai' term, 'AI' topic union all
select 'computational power', 'Engineering' union all
select 'deep learning', 'Deep Learning'
), pattern as (
select r'(?i)' || string_agg(term, '|') as regexp_pattern
from classification
)
select
array_to_string(array(
select distinct topic
from unnest(regexp_extract_all(lower(text), regexp_pattern)) term
join classification using(term)
), ', ') topics,
text
from `table`, pattern
</code></pre>
<p>With output:</p>
<p><a href="https://i.sstatic.net/C7Quy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C7Quy.png" alt="Enter image description here" /></a></p>
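For readers outside BigQuery, the same build-one-pattern-then-match-all-rules idea can be sketched in Python (the term-to-topic mapping is assumed, as in the answer):

```python
import re

classification = {"ai": "AI",
                  "computational power": "Engineering",
                  "deep learning": "Deep Learning"}
pattern = re.compile("|".join(map(re.escape, classification)), re.IGNORECASE)

def topics(text):
    """Return every matching topic, not just the first, de-duplicated in order."""
    hits = [classification[m.lower()] for m in pattern.findall(text)]
    return list(dict.fromkeys(hits))

print(topics("AI needs computational power; deep learning proves it."))
# ['AI', 'Engineering', 'Deep Learning']
```

As with the regex in the answer, a bare `ai` also matches inside longer words (e.g. "maintain"); add word boundaries (`\b`) around the terms if that matters for your data.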
| 1,245
|
text classification
|
Error while creating a model for binary classification for text classification
|
https://stackoverflow.com/questions/71850436/error-while-creating-a-model-for-binary-classification-for-text-classification
|
<p>code:</p>
<pre><code>model = create_model()
model.compile(optimize=tf.keras.optimizers.Adam(learning_rate=2e-5),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary()
</code></pre>
<p>error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-58-cdba04f466b1> in <module>()
2 model.compile(optimize=tf.keras.optimizers.Adam(learning_rate=2e-5),
3 loss=tf.keras.losses.BinaryCrossentropy(),
----> 4 metrics=[tf.keras.metrics.BinaryAccuracy()])
5 model.summary()
1 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in _validate_compile(self, optimizer, metrics, **kwargs)
2981 invalid_kwargs = set(kwargs) - {'sample_weight_mode'}
2982 if invalid_kwargs:
-> 2983 raise TypeError('Invalid keyword argument(s) in `compile()`: '
2984 f'{(invalid_kwargs,)}. Valid keyword arguments include '
2985 '"cloning", "experimental_run_tf_function", "distribute",'
TypeError: Invalid keyword argument(s) in `compile()`: ({'optimize'},). Valid keyword arguments include "cloning", "experimental_run_tf_function", "distribute", "target_tensors", or "sample_weight_mode".
</code></pre>
<p>Can someone have a look at this?
I am building a model here to fine-tune BERT for text classification.</p>
|
<p>I was able to replicate the above issue using the sample code shown below.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
c = np.array([-40, -10, -0, 8, 15, 22, 38])
f = np.array([-40, 14, 32, 46, 59, 72, 100])
model = Sequential()
model.add(Dense(units=1,input_shape=(1,), activation='linear'))
model.compile(loss='mean_squared_error', optimize= Adam(0.1))
history = model.fit(c, f, epochs=5, verbose=0)
</code></pre>
<p>Output:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-659b944d282f> in <module>()
12 model.add(Dense(units=1,input_shape=(1,), activation='linear'))
13
---> 14 model.compile(loss='mean_squared_error', optimize= Adam(0.1))
15
16 history = model.fit(c, f, epochs=5, verbose=0)
1 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in _validate_compile(self, optimizer, metrics, **kwargs)
2981 invalid_kwargs = set(kwargs) - {'sample_weight_mode'}
2982 if invalid_kwargs:
-> 2983 raise TypeError('Invalid keyword argument(s) in `compile()`: '
2984 f'{(invalid_kwargs,)}. Valid keyword arguments include '
2985 '"cloning", "experimental_run_tf_function", "distribute",'
TypeError: Invalid keyword argument(s) in `compile()`: ({'optimize'},). Valid keyword arguments include "cloning", "experimental_run_tf_function", "distribute", "target_tensors", or "sample_weight_mode".
</code></pre>
<p><strong>Fixed code:</strong></p>
<p>The TypeError above points clearly at the cause: it is a typo. Please change <code>optimize</code> to <code>optimizer</code> as shown below.</p>
<pre><code>model.compile(loss='mean_squared_error', optimizer= Adam(0.1))
</code></pre>
<p>For more details please find the <a href="https://colab.sandbox.google.com/gist/chunduriv/4211102ea525460a95a217fd9e2990ed/untitled121.ipynb" rel="nofollow noreferrer">gist</a> for reference.</p>
| 1,246
|
text classification
|
python textblob and text classification
|
https://stackoverflow.com/questions/33883976/python-textblob-and-text-classification
|
<p>I'm trying to build a text classification model with Python and <a href="https://textblob.readthedocs.org/en/dev/index.html" rel="nofollow noreferrer">textblob</a>. The script is running on my server, and in the future the idea is that users will be able to submit their text and have it classified.
I'm loading the training set from a CSV:</p>
<pre><code># -*- coding: utf-8 -*-
import sys
import codecs
sys.stdout = open('yyyyyyyyy.txt', "w")
from nltk.tokenize import word_tokenize
from textblob.classifiers import NaiveBayesClassifier

with open('file.csv', 'r', encoding='latin-1') as fp:
    cl = NaiveBayesClassifier(fp, format="csv")
print(cl.classify("some text"))
</code></pre>
<p>The CSV is about 500 lines long (with strings between 10 and 100 chars), and the NaiveBayesClassifier needs about 2 minutes to train before it can classify my text (not sure if it is normal that it needs so much time; maybe my server is slow with only 512 MB RAM).</p>
<p>example of csv line :</p>
<pre><code>"Oggi alla Camera con la Fondazione Italia-Usa abbiamo consegnato a 140 studenti laureati con 110 e 110 lode i diplomi del Master in Marketing Comunicazione e Made in Italy.",FI-PDL
</code></pre>
<p>What is not clear to me, and I can't find an answer for in the textblob documentation, is whether there is a way to 'save' my trained classifier (and so save a lot of time), because right now every time I run the script it trains the classifier again.
I'm new to text classification and machine learning, so my apologies if this is a dumb question.</p>
<p>Thanks in advance.</p>
|
<p>OK, I found that the pickle module is what I need :)</p>
<p>Training:</p>
<pre><code># -*- coding: utf-8 -*-
import pickle
from nltk.tokenize import word_tokenize
from textblob.classifiers import NaiveBayesClassifier

with open('file.csv', 'r', encoding='latin-1') as fp:
    cl = NaiveBayesClassifier(fp, format="csv")

with open('classifier.pickle', 'wb') as file:
    pickle.dump(cl, file)
</code></pre>
<p>extracting:</p>
<pre><code>import sys
import pickle
sys.stdout = open('demo.txt', "w")
from nltk.tokenize import word_tokenize
from textblob.classifiers import NaiveBayesClassifier

cl = pickle.load(open("classifier.pickle", "rb"))
print(cl.classify("text to classify"))
</code></pre>
| 1,247
|
text classification
|
R Text Classification with 800K documents
|
https://stackoverflow.com/questions/40809626/r-text-classification-with-800k-documents
|
<p>I have to do some text classification work on a corpus that contains 800K texts. I've been trying to run a practical example I found at the following link:</p>
<p><a href="http://garonfolo.dk/herbert/2015/05/r-text-classification-using-a-k-nearest-neighbour-model/" rel="nofollow noreferrer">http://garonfolo.dk/herbert/2015/05/r-text-classification-using-a-k-nearest-neighbour-model/</a></p>
<p>All was going well until I got to the following instruction:</p>
<pre><code># Transform dtm to matrix to data frame - df is easier to work with
mat.df <- as.data.frame(data.matrix(dtm), stringsAsfactors = FALSE)
</code></pre>
<p>After letting this run for several hours, I got an error message:</p>
<pre><code>Error: cannot allocate vector of size 583.9 Gb
In addition: Warning messages:
1: In vector(typeof(x$v), prod(nr, nc)) :
Reached total allocation of 8076Mb: see help(memory.size)
2: In vector(typeof(x$v), prod(nr, nc)) :
Reached total allocation of 8076Mb: see help(memory.size)
3: In vector(typeof(x$v), prod(nr, nc)) :
Reached total allocation of 8076Mb: see help(memory.size)
4: In vector(typeof(x$v), prod(nr, nc)) :
Reached total allocation of 8076Mb: see help(memory.size)
</code></pre>
<p>Is there a way to overcome this error?</p>
<p>Would it be possible for example to split data.matrix(dtm) to run the job in chunks and then merge them somehow? Or is it better to approach this in another way or in Python?</p>
<p>Thanks</p>
|
<p>Before that <code>as.data.frame()</code> call, enter this line of code:</p>
<p><code>dtm <- removeSparseTerms(dtm, sparse=0.9)</code>. </p>
<p>The argument <code>sparse=...</code> is a number between 0 and 1. It is the maximum sparsity a term may have: terms absent from more than that fraction of documents are dropped, so it indirectly controls how many terms you keep; it does <em>not</em> simply mean "keep 90%". Typically you'll find the correct/optimal value by trial and error. In your case you may end up with a seemingly arbitrary number such as 0.79333, depending on what you want to do.</p>
<p><code>removeSparseTerms()</code> removes terms, but keeps the number of documents in the smaller resulting matrix constant. So you'll go from a 12165735 * 800000 element matrix to a 476 * 800000 matrix. Processing this might now be possible on your computer. </p>
<p>If not, try a clever column-wise split-apply-combine trick with your big matrix.</p>
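Since the question also asks whether Python would be a better fit: scikit-learn keeps the document-term matrix sparse end-to-end, and the `min_df` parameter plays roughly the role of `removeSparseTerms()` (this mapping is an analogy, not an exact equivalence):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["good movie", "bad movie", "good plot good cast", "terrible plot"]

# min_df=2 drops terms appearing in fewer than 2 documents, shrinking the
# vocabulary without ever densifying the matrix.
vec = CountVectorizer(min_df=2)
X = vec.fit_transform(docs)              # stays a scipy sparse matrix
print(sorted(vec.vocabulary_))           # ['good', 'movie', 'plot']
```

Many scikit-learn classifiers (e.g. the linear ones) accept this sparse matrix directly, so the 583.9 GB dense conversion never happens.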
| 1,248
|
text classification
|
How to export a Google AutoML Text Classification model?
|
https://stackoverflow.com/questions/70637913/how-to-export-a-google-automl-text-classification-model
|
<p>I just finished training my AutoML Text Classification model (single-label).</p>
<p>I was planning to run a Batch Prediction using the console, but I just found out how expensive that will be because I have over 300,000 text records to analyze.</p>
<p>So now I want to export the model to my local machine and run the predictions there.</p>
<p>I found <a href="https://cloud.google.com/vertex-ai/docs/export" rel="nofollow noreferrer">instructions here</a> to export "AutoML Tabular Models" and "AutoML Edge Models". But there is nothing available for text classification models.</p>
<p>I tried following the "AutoML Tabular Model" instructions because that looked like the closest thing to a text classification model, but I could not find the "Export" button that was supposed to exist on the model detail page.</p>
<p>So I have some questions regarding this:</p>
<ol>
<li><p>How do I export a AutoML Text Classification model?</p>
</li>
<li><p>Is an AutoML Text Classification model the same thing as an AutoML Tabular model? They seem very similar because my text classification model used a tabular CSV to assign labels and train the model.</p>
</li>
<li><p>If I cannot export AutoML Text Classification model (urgh!), can I train a new "Tabular" model to do the same thing?</p>
</li>
</ol>
|
<ol>
<li><p>Currently, there is no feature to export an AutoML text classification model. A feature request already exists; you can follow its progress on this <a href="https://issuetracker.google.com/168860629" rel="nofollow noreferrer">issue tracker</a>.</p>
</li>
<li><p>Both the models are quite similar. A tabular data classification model analyzes your tabular data and returns a list of categories that describe the data. A text data classification model analyzes text data and returns a list of categories that apply to the text found in the data. Refer to this <a href="https://cloud.google.com/vertex-ai/docs/start/automl-model-types" rel="nofollow noreferrer">doc</a> for more information about AutoML model types.</p>
</li>
<li><p>Yes, you can do the same thing in an AutoML tabular data classification model if your training data is in tabular CSV file format. Refer to this <a href="https://cloud.google.com/vertex-ai/docs/datasets/prepare-tabular" rel="nofollow noreferrer">doc</a> for more information about how to prepare tabular training data.</p>
</li>
</ol>
<p>If your model trained successfully in an AutoML tabular data classification, you can find an <code>Export</code> option at the top. Refer to this <a href="https://cloud.google.com/vertex-ai/docs/export/export-model-tabular#export" rel="nofollow noreferrer">doc</a> for more information about how to export tabular classification models.</p>
<p><a href="https://i.sstatic.net/DSSZL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DSSZL.png" alt="enter image description here" /></a></p>
| 1,249
|
text classification
|
Word Embedding for text classification
|
https://stackoverflow.com/questions/64331013/word-embedding-for-text-classification
|
<p>I am new to the NLP community and need some light shed on something.</p>
<p>I saw that Keras has an Embedding layer that is generally used before the LSTM layer. But what algorithm hides behind it? Is it Word2Vec, Glove or something else?</p>
<p>My task is a supervised text classification problem.</p>
|
<p>The embedding layer is a randomly initialized matrix with dimensions <code>(number_of_words_in_vocab * embedding_dimension)</code>. The <code>embedding_dimension</code> is a custom-defined dimension, a hyper-parameter that we have to choose.</p>
<p>Here, the embeddings are updated during back-propagation, and are learnt from your task and task-specific corpus.</p>
<p>However, pre-trained embeddings such as word2vec and GloVe are learnt in an unsupervised manner on a huge corpus, and they provide a good initialization for this embedding layer. Thus, you can use the pre-trained embeddings to initialize this layer, and also choose whether to freeze these embeddings or keep updating them during back-propagation.</p>
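The initialization described above can be sketched in plain NumPy (the vocabulary and "pretrained" vectors below are made up): the embedding layer is just a lookup matrix whose rows can be seeded from pre-trained vectors, with out-of-vocabulary words left randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"<pad>": 0, "good": 1, "movie": 2, "zzzunseen": 3}
dim = 4

# made-up stand-in for a word2vec/GloVe lookup
pretrained = {"good": np.ones(dim), "movie": np.full(dim, 2.0)}

# random init, then overwrite the rows we have pretrained vectors for
emb = rng.normal(scale=0.1, size=(len(vocab), dim))
for word, idx in vocab.items():
    if word in pretrained:
        emb[idx] = pretrained[word]

# an embedding-layer forward pass is just a row lookup
token_ids = np.array([1, 2])   # "good movie"
print(emb[token_ids].shape)    # (2, 4)
```

In Keras you would hand a matrix like this to the <code>Embedding</code> layer as its initial weights and set <code>trainable=False</code> to freeze it, or leave it trainable to fine-tune during back-propagation.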
| 1,250
|
text classification
|
FastText using pre-trained word vector for text classification
|
https://stackoverflow.com/questions/47692906/fasttext-using-pre-trained-word-vector-for-text-classification
|
<p>I am working on a text classification problem, that is, given some text, I need to assign to it certain given labels.</p>
<p>I have tried using fast-text library by Facebook, which has two utilities of interest to me:</p>
<p>A) Word Vectors with pre-trained models</p>
<p>B) Text Classification utilities</p>
<p>However, it seems that these are completely independent tools as I have been unable to find any tutorials that merge these two utilities.</p>
<p>What I want is to be able to classify some text, by taking advantage of the pre-trained models of the Word-Vectors. Is there any way to do this?</p>
|
<p>FastText's native classification mode depends on you training the word-vectors yourself, using texts with known classes. The word-vectors thus become optimized to be useful for the specific classifications observed during training. So that mode typically <em>wouldn't</em> be used with pre-trained vectors. </p>
<p>If using pre-trained word-vectors, you'd then somehow compose those into a text-vector yourself (for example, by averaging all the words of a text together), then training a separate classifier (such as one of the many options from scikit-learn) using those features. </p>
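The averaging step mentioned above can be sketched as follows (the tiny two-dimensional "pretrained" vectors are made up for illustration):

```python
import numpy as np

# made-up stand-in for pre-trained word vectors (e.g. loaded from GloVe)
wv = {"good": np.array([1.0, 0.0]),
      "great": np.array([0.9, 0.1]),
      "awful": np.array([-1.0, 0.0])}

def text_vector(text, dim=2):
    """Average the vectors of in-vocabulary words; zeros if none match."""
    vecs = [wv[w] for w in text.split() if w in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.array([text_vector(t) for t in ["good great", "awful movie"]])
print(X[0])   # [0.95 0.05]
```

X would then be the feature matrix for any scikit-learn classifier (logistic regression, SVM, ...).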
| 1,251
|
text classification
|
Text Classification: Multilable Text Classification vs Multiclass Text Classification
|
https://stackoverflow.com/questions/35737352/text-classification-multilable-text-classification-vs-multiclass-text-classific
|
<p>I have a question about the approach to deal with a multilabel classification problem.</p>
<p>Based on a literature review, I found that the most commonly used approach is the Problem Transformation Approach. It transforms the multilabel problem into a number of single-label problems, and the classification result is just the union of each single-label classifier's output, using the binary relevance approach. </p>
<p>Since a single-label problem can be categorized as either binary classification (if there are two labels) or multiclass classification (if there are more than two labels), the current transformation approaches all seem to transform the multilabel problem into a number of binary problems. But this causes a data imbalance issue, because the negative class may have many more documents than the positive class. </p>
<p>So my question, why not transform to a number of multiclass problems, and then apply the direct multiclass classification algorithms to avoid the data imbalance problem. In this case, for one testing document, each trained single label multiclass classifier would predict whether to assign the label, and the union of all such single label multiclass classifier prediction results would be the final set of labels for that testing documents.</p>
<p>In summary, compared to transform a multilabel classification problem to a number of binary classification problems, transform a multilabel classification problem to a number of multiclass classification problems could avoid the data imbalance problem. Other than this, everything stays the same for the above two methods: you need to construct |L|(|L| means the total number of different labels in the classification problem) single label (either binary or multiclass) classifier, you need to prepare |L| sets of training data and testing data, you need to test each single label classifier on the testing document and the union of prediction results of each single label classifier is the final label set for the testing document.</p>
<p>Hope anyone could help clarify my confusion, thanks very much!</p>
|
<p>What you describe is a known transformation strategy to multi-class problems, called the Label Power Set (LP) transformation strategy.</p>
<p>Drawbacks of this method:</p>
<ul>
<li>The LP transformation may lead to up to 2^|L| transformed
labels.</li>
<li>Class imbalance problem.</li>
</ul>
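A minimal sketch of the LP transformation (the label sets below are made up): each distinct label set becomes one multiclass target, which is exactly where the up-to-2^|L| blow-up comes from.

```python
# each distinct label set becomes one multiclass target
y_multilabel = [{"sports"}, {"sports", "politics"}, {"music"}, {"sports"}]

powerset = {}                       # frozen label set -> class id
y_multiclass = []
for labels in y_multilabel:
    key = frozenset(labels)
    y_multiclass.append(powerset.setdefault(key, len(powerset)))

print(y_multiclass)                 # [0, 1, 2, 0]

# map a multiclass prediction back to its label set
inverse = {v: set(k) for k, v in powerset.items()}
print(sorted(inverse[1]))           # ['politics', 'sports']
```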
<p><strong>Refer to:
Cherman, Everton Alvares, Maria Carolina Monard, and Jean Metz. "Multi-label problem transformation methods: a case study." CLEI Electronic Journal 14.1 (2011): 4-4.</strong></p>
| 1,252
|
text classification
|
How to use Transformers for text classification?
|
https://stackoverflow.com/questions/58123393/how-to-use-transformers-for-text-classification
|
<p>I have two questions about how to use Tensorflow implementation of the Transformers for text classifications. </p>
<ul>
<li><strong>First</strong>, it seems people mostly used only the encoder layer to do the text classification task. However, encoder layer generates one prediction for each input word. Based on my understanding of transformers, the input to the encoder each time is one word from the input sentence. Then, the attention weights and the output is calculated using the current input word. And we can repeat this process for all of the words in the input sentence. As a result we'll end up with pairs of (attention weights, outputs) for each word in the input sentence. Is that correct? Then how would you use this pairs to perform a text classification? </li>
<li><strong>Second</strong>, based on the Tensorflow implementation of transformer <a href="https://www.tensorflow.org/beta/tutorials/text/transformer" rel="noreferrer">here</a>, they embed the whole input sentence to one vector and feed a batch of these vectors to the Transformer. However, I expected the input to be a batch of words instead of sentences based on what I've learned from <a href="http://jalammar.github.io/illustrated-transformer/" rel="noreferrer">The Illustrated Transformer</a></li>
</ul>
<p>Thank you!</p>
|
<p>There are two approaches, you can take:</p>
<ol>
<li>Just average the states you get from the encoder;</li>
<li>Prepend a special token <code>[CLS]</code> (or whatever you like to call it) and use the hidden state for the special token as input to your classifier.</li>
</ol>
<p>The second approach is used by <a href="https://www.aclweb.org/anthology/N19-1423" rel="noreferrer">BERT</a>. When pre-training, the hidden state corresponding to this special token is used for predicting whether two sentences are consecutive. In the downstream tasks, it is also used for sentence classification. However, my experience is that sometimes averaging the hidden states gives a better result.</p>
<p>Instead of training a Transformer model from scratch, it is probably more convenient to use (and eventually finetune) a pre-trained model (BERT, XLNet, DistilBERT, ...) from the <a href="https://github.com/huggingface/transformers" rel="noreferrer">transformers package</a>. It has pre-trained models ready to use in PyTorch and TensorFlow 2.0.</p>
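Both pooling options can be sketched with plain NumPy on a fake encoder output (all shapes and values are illustrative):

```python
import numpy as np

# fake encoder output: 2 sequences, 3 tokens each, hidden size 4
hidden = np.arange(24, dtype=float).reshape(2, 3, 4)
mask = np.array([[1, 1, 1], [1, 1, 0]], dtype=float)  # 2nd sequence padded

# option 1: average the token states, ignoring padded positions
mean_pooled = (hidden * mask[..., None]).sum(1) / mask.sum(1, keepdims=True)

# option 2: take the state of the first ([CLS]-style) token
cls_pooled = hidden[:, 0, :]

print(mean_pooled.shape, cls_pooled.shape)  # (2, 4) (2, 4)
```

Either pooled vector is then fed to a small dense classification head.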
| 1,253
|
text classification
|
custom pytorch dataloader within fastai text classification
|
https://stackoverflow.com/questions/70174383/custom-pytorch-dataloader-within-fastai-text-classification
|
<p>I am trying to go deeper in my understanding of fastai API and want to be able to implement some things in “pure” pytorch and then let fastai do all of the optimization tricks.</p>
<p>I am trying simple text classification with my own dataloader class.
Firstly, I still get an error when I try to show one batch: a RecursionError.</p>
<pre class="lang-py prettyprint-override"><code>my_dls.show_batch()
res = L(b).map(partial(batch_to_samples,max_n=max_n))
RecursionError: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>What do I need to do in my class so I can train a text classification model with custom stuff?</p>
<pre><code>from torch.utils import data
from torch.utils.data import DataLoader, Dataset
import pandas as pd
from fastai.data.core import DataLoaders
from torch.nn import CrossEntropyLoss
from fastai.text.all import *
# Example of data
# Entire data here: https://github.com/koaning/tokenwiser/blob/main/data/oos-intent.jsonl
d = {"text": "how would you say fly in italian", "label": "translate"}
data = pd.read_json("text.jsonl", lines = True)
class text_dataset(Dataset):
def __init__(self, text, label):
self.text = text
self.label = label
self.n_classes = len(set(self.label))
self.vocab = [i for i in set(self.label)]
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
text_i = self.text[idx]
label_i = self.label[idx]
return {"text": text_i, "label": label_i}
dls = text_dataset(data["text"], data["label"])
dls.n_classes
len(dls)
data_loader = DataLoader(dls)
my_dls = DataLoaders.from_dsets(dls)
my_dls.show_batch()
#RecursionError: maximum recursion depth exceeded while calling a Python object
learn = text_classifier_learner(my_dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy, loss_func = CrossEntropyLoss)
# Also does not work
#learn = Learner(my_dls, AWD_LSTM, metrics=accuracy, loss_func = CrossEntropyLoss)
learn.fit_one_cycle(1)
</code></pre>
| 1,254
|
|
text classification
|
Scikit Learn-MultinomialNB for text classification
|
https://stackoverflow.com/questions/46014642/scikit-learn-multinomialnb-for-text-classification
|
<p>How to Calculate FPR, TPR, AUC, roc_curve for multi class text classification.</p>
<p>I have used following code:</p>
<pre><code>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
from sklearn.feature_extraction.text import CountVectorizer
vect=CountVectorizer()
vect.fit(X_train.values.astype('U'))
X_train_dtm=vect.transform(X_train.values.astype('U'))
X_test_dtm=vect.transform(X_test)
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
y_score=nb.fit(X_train_dtm, y_train)
y_pred_class = nb.predict(X_test_dtm)
</code></pre>
<p>Everything runs fine till here,
but as soon as I use the following code it gives an error.</p>
<pre><code>from sklearn.metrics import roc_curve, auc, roc_auc_score
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(5):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
print ("ROC value is:",roc_auc["micro"])
</code></pre>
<p>Error is:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/saurabh/PycharmProjects/getting_started/own_code.py", line 32, in <module>
print(metrics.roc_auc_score(y_test, y_pred_prob))
File "C:\Anaconda3\lib\site-packages\sklearn\metrics\ranking.py", line 260, in roc_auc_score
sample_weight=sample_weight) Accuracy by this: 0.910536779324
File "C:\Anaconda3\lib\site-packages\sklearn\metrics\base.py", line 81, in _average_binary_score
raise ValueError("{0} format is not supported".format(y_type))
ValueError: multiclass format is not supported
</code></pre>
|
<p><code>roc_curve</code> doesn't support the multiclass format. You have to calculate it per binary class.</p>
<p>But to calculate FPR, TPR you can use <code>confusion_matrix</code></p>
<pre><code>from sklearn.metrics import confusion_matrix

# collapse one-hot rows to class indices (assumes y_test/y_score are one-hot)
y_test = np.argmax(y_test, axis=1)
y_score = np.argmax(y_score, axis=1)
c = confusion_matrix(y_test, y_score)

# in the binary case the 2x2 matrix holds the raw counts:
TN = float(c[0][0])
TP = float(c[1][1])
FN = float(c[1][0])
FP = float(c[0][1])
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
</code></pre>
<p>Here's a simple example to binarize (using the integer class labels and the per-class probabilities from <code>predict_proba</code>):</p>
<pre><code>for i in range(5):
    yt_bin = [1 if x == i else 0 for x in y_test]         # y_test: integer labels
    fpr[i], tpr[i], _ = roc_curve(yt_bin, y_score[:, i])  # y_score: class probabilities
    roc_auc[i] = auc(fpr[i], tpr[i])
</code></pre>
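For more than two classes, per-class rates can also be read off the confusion matrix in a one-vs-rest fashion. A NumPy sketch with made-up labels (not the answer's original code):

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(cm, (y_true, y_pred), 1)     # rows: true class, cols: predicted

for i in range(n_classes):             # one-vs-rest counts per class
    tp = cm[i, i]
    fn = cm[i].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(i, "TPR:", tp / (tp + fn), "FPR:", fp / (fp + tn))
```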
| 1,255
|
text classification
|
looking for training data for text classification
|
https://stackoverflow.com/questions/15999713/looking-for-training-data-for-text-classification
|
<p>I am looking for training data for text classification into categories like sports, finance, politics, music, etc.</p>
<p>Please guide me to references.</p>
|
<p>You can get a Reuters corpus by applying at <a href="http://trec.nist.gov/data/reuters/reuters.html" rel="nofollow">Reuters</a></p>
<p>You can also get the Technion Text Repository <a href="http://techtc.cs.technion.ac.il/techtc300/techtc300.html#software" rel="nofollow">TechnionRepo</a></p>
| 1,256
|
text classification
|
Mutual Information for feature selection text classification
|
https://stackoverflow.com/questions/24845873/mutual-information-for-feature-selection-text-classification
|
<p>I use a Naive Bayes classifier for text classification. How can I improve the accuracy of the algorithm using the Mutual Information measure for feature selection?</p>
|
<p>There are two kinds of improvement you can make in text classification. First, improve the preprocessing you use, for example with N-grams. Second, apply a feature selection technique such as TF-IDF, Mutual Information, or Chi-Square, optionally combined with an optimization algorithm such as a Genetic Algorithm, Bat Algorithm, ABC Colony, or Ant Colony. TF-IDF is very popular in information retrieval. Naive Bayes is very sensitive to the feature selection method, so you can combine preprocessing techniques, a feature selection method, and the classification method to optimize the classification result.</p>
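As a concrete starting point, the mutual-information score used to rank terms can be estimated directly from counts. A small, unoptimized sketch (binary term-presence features assumed; the toy data is made up):

```python
import math

def mutual_information(term_present, labels):
    """MI (in bits) between a binary term indicator and class labels."""
    n = len(labels)
    mi = 0.0
    for t in (0, 1):
        for c in set(labels):
            n_tc = sum(1 for x, y in zip(term_present, labels)
                       if x == t and y == c)
            if n_tc == 0:
                continue
            p_tc = n_tc / n
            p_t = sum(1 for x in term_present if x == t) / n
            p_c = sum(1 for y in labels if y == c) / n
            mi += p_tc * math.log2(p_tc / (p_t * p_c))
    return mi

# a term that perfectly predicts the class scores 1 bit
print(mutual_information([1, 1, 0, 0], ["pos", "pos", "neg", "neg"]))
```

Score every candidate term this way and keep only the top-k as features for the Naive Bayes classifier.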
| 1,257
|
text classification
|
Multilabel Text Classification using Hugging Face Models for TensorFlow
|
https://stackoverflow.com/questions/75827730/multilabel-text-classification-using-hugging-face-models-for-tensorflow
|
<p>I am trying to understand an example of using a Hugging Face model for multilabel text classification with TensorFlow, from <a href="https://www.daniweb.com/programming/computer-science/tutorials/539042/multilabel-text-classification-using-hugging-face-models-for-tensorflow" rel="nofollow noreferrer">https://www.daniweb.com/programming/computer-science/tutorials/539042/multilabel-text-classification-using-hugging-face-models-for-tensorflow</a></p>
<p>Please help me understand the use of BinaryCrossentropy as the loss and metric when there are 6 classes. Shouldn't one use CategoricalCrossentropy instead?</p>
| 1,258
|
|
text classification
|
Saving Word2Vec for CNN Text Classification
|
https://stackoverflow.com/questions/38555148/saving-word2vec-for-cnn-text-classification
|
<p>I want to train my own Word2Vec model for my text corpus. I can get the code from TensorFlow's tutorial. What I don't know is how to save this model to use for CNN text classification later? Should I use pickle to save it and then read it later?</p>
|
<p>No, pickling is not the way to save a model in the case of TensorFlow.</p>
<p>TensorFlow provides TensorFlow Serving for serving models, saving them as protocol buffers (for exporting the model). The way to save the model is to save the TensorFlow session:
<code>saver.save(sess, 'my_test_model', global_step=1000)</code></p>
<p>Here's the link to a complete answer:
<strong><a href="https://stackoverflow.com/questions/33759623/tensorflow-how-to-save-restore-a-model">Tensorflow: how to save/restore a model?</a></strong></p>
| 1,259
|
text classification
|
Text classification algorithms which are not Naive?
|
https://stackoverflow.com/questions/41243531/text-classification-algorithms-which-are-not-naive
|
<p>The Naive Bayes algorithm assumes independence among features. What are some text classification algorithms which are not <strong>naive</strong>, i.e. do not assume independence among their features?</p>
|
<p>The answer is very straightforward, since nearly <strong>every</strong> classifier (besides <strong>Naive</strong> Bayes) is not naive. Feature independence is a very rare assumption, and it is not made by (among a huge list of others):</p>
<ul>
<li>logistic regression (in NLP community known as maximum entropy model)</li>
<li>linear discriminant analysis (Fisher's linear discriminant)</li>
<li>kNN</li>
<li>support vector machines</li>
<li>decision trees / random forests</li>
<li>neural nets</li>
<li>...</li>
</ul>
<p>You are asking about text classification, but there is nothing really special about text, and you can use any existing classifier for such data.</p>
| 1,260
|
text classification
|
Text classification with Scikit-learn
|
https://stackoverflow.com/questions/35246498/text-classification-with-scikit-learn
|
<p>I am doing text classification for two labels with scikit-learn. I am loading my text files with the method <code>load_files</code>:</p>
<pre><code>categories={'label0','label1'}
text_data = load_files(path,categories=categories)
</code></pre>
<p>from the following structure:</p>
<pre><code>train
├── Label0
│ ├── 0001.txt
│ └── 0002.txt
└── Label1
├── 0001.txt
└── 0002.txt
</code></pre>
<p>my problem is that when I try to look at the shape of text_data.data it returns:</p>
<pre><code>print (type(text_data.data))
<type 'list'>
print text_data.data.shape
AttributeError: 'list' object has no attribute 'shape'
X = np.array(text_data.data)
print x.shape
(35,)
</code></pre>
<p>It returns a 1D array. I thought it should be a 2D numpy array, or a dictionary where one part holds the text and the other holds the class (label0 or label1).
Have I missed something?</p>
|
<p>The problem is that after calling load_files, the data is not yet a numpy array. It is just a list of text strings. You should vectorize this text using <code>CountVectorizer</code> or <code>TfidfVectorizer</code>.</p>
<p><strong>Example:</strong></p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cv = CountVectorizer()
X = cv.fit_transform(text_data.data)
y = text_data.target
print cv.vocabulary_  # Show words in vocabulary with column index
clf = LogisticRegression()  # or other classifier
clf.fit(X, y)
</code></pre>
| 1,261
|
text classification
|
How to change this RNN text classification code to text generation?
|
https://stackoverflow.com/questions/58055095/how-to-change-this-rnn-text-classification-code-to-text-generation
|
<p>I have this code to do text classification with TensorFlow RNN, but how to change it to do text generation instead?</p>
<p>The following text classification has 3D input but 2D output. Should it be changed to 3D input and 3D output for text generation, and how?</p>
<p>The example data are:</p>
<pre><code>t0 t1 t2
british gray is => cat (y=0)
0 1 2
white samoyed is => dog (y=1)
3 4 2
</code></pre>
<p>For classification feeding "british gray is" results in "cat". What I wish to get is feeding "british" should result in the next word "gray".</p>
<pre><code>import tensorflow as tf;
tf.reset_default_graph();
#data
'''
t0 t1 t2
british gray is => cat (y=0)
0 1 2
white samoyed is => dog (y=1)
3 4 2
'''
Bsize = 2;
Times = 3;
Max_X = 4;
Max_Y = 1;
X = [[[0],[1],[2]], [[3],[4],[2]]];
Y = [[0], [1] ];
#normalise
for I in range(len(X)):
for J in range(len(X[I])):
X[I][J][0] /= Max_X;
for I in range(len(Y)):
Y[I][0] /= Max_Y;
#model
Inputs = tf.placeholder(tf.float32, [Bsize,Times,1]);
Expected = tf.placeholder(tf.float32, [Bsize, 1]);
#single LSTM layer
#'''
Layer1 = tf.keras.layers.LSTM(20);
Hidden1 = Layer1(Inputs);
#'''
#multi LSTM layers
'''
Layers = tf.keras.layers.RNN([
tf.keras.layers.LSTMCell(30), #hidden 1
tf.keras.layers.LSTMCell(20) #hidden 2
]);
Hidden2 = Layers(Inputs);
'''
Weight3 = tf.Variable(tf.random_uniform([20,1], -1,1));
Bias3 = tf.Variable(tf.random_uniform([ 1], -1,1));
Output = tf.sigmoid(tf.matmul(Hidden1,Weight3) + Bias3);
Loss = tf.reduce_sum(tf.square(Expected-Output));
Optim = tf.train.GradientDescentOptimizer(1e-1);
Training = Optim.minimize(Loss);
#train
Sess = tf.Session();
Init = tf.global_variables_initializer();
Sess.run(Init);
Feed = {Inputs:X, Expected:Y};
for I in range(1000): #number of feeds, 1 feed = 1 batch
if I%100==0:
Lossvalue = Sess.run(Loss,Feed);
print("Loss:",Lossvalue);
#end if
Sess.run(Training,Feed);
#end for
Lastloss = Sess.run(Loss,Feed);
print("Loss:",Lastloss,"(Last)");
#eval
Results = Sess.run(Output,Feed);
print("\nEval:");
print(Results);
print("\nDone.");
#eof
</code></pre>
|
<p>I found out how to switch the code to the text generation task: use 3D input (X) and 3D labels (Y), as in the source code below:</p>
<p>Source code:</p>
<pre><code>import tensorflow as tf;
tf.reset_default_graph();
#data
'''
t0 t1 t2
british gray is cat
0 1 2 (3) <=x
1 2 3 <=y
white samoyed is dog
4 5 2 (6) <=x
5 2 6 <=y
'''
Bsize = 2;
Times = 3;
Max_X = 5;
Max_Y = 6;
X = [[[0],[1],[2]], [[4],[5],[2]]];
Y = [[[1],[2],[3]], [[5],[2],[6]]];
#normalise
for I in range(len(X)):
for J in range(len(X[I])):
X[I][J][0] /= Max_X;
for I in range(len(Y)):
for J in range(len(Y[I])):
Y[I][J][0] /= Max_Y;
#model
Input = tf.placeholder(tf.float32, [Bsize,Times,1]);
Expected = tf.placeholder(tf.float32, [Bsize,Times,1]);
#single LSTM layer
'''
Layer1 = tf.keras.layers.LSTM(20);
Hidden1 = Layer1(Input);
'''
#multi LSTM layers
#'''
Layers = tf.keras.layers.RNN([
tf.keras.layers.LSTMCell(30), #hidden 1
tf.keras.layers.LSTMCell(20) #hidden 2
],
return_sequences=True);
Hidden2 = Layers(Input);
#'''
Weight3 = tf.Variable(tf.random_uniform([20,1], -1,1));
Bias3 = tf.Variable(tf.random_uniform([ 1], -1,1));
Output = tf.sigmoid(tf.matmul(Hidden2,Weight3) + Bias3); #sequence of 2d * 2d
Loss = tf.reduce_sum(tf.square(Expected-Output));
Optim = tf.train.GradientDescentOptimizer(1e-1);
Training = Optim.minimize(Loss);
#train
Sess = tf.Session();
Init = tf.global_variables_initializer();
Sess.run(Init);
Feed = {Input:X, Expected:Y};
Epochs = 10000;
for I in range(Epochs): #number of feeds, 1 feed = 1 batch
if I%(Epochs/10)==0:
Lossvalue = Sess.run(Loss,Feed);
print("Loss:",Lossvalue);
#end if
Sess.run(Training,Feed);
#end for
Lastloss = Sess.run(Loss,Feed);
print("Loss:",Lastloss,"(Last)");
#eval
Results = Sess.run(Output,Feed).tolist();
print("\nEval:");
for I in range(len(Results)):
for J in range(len(Results[I])):
for K in range(len(Results[I][J])):
Results[I][J][K] = round(Results[I][J][K]*Max_Y);
#end for i
print(Results);
print("\nDone.");
#eof
</code></pre>
| 1,262
|
text classification
|
text classification using keras input dimensions
|
https://stackoverflow.com/questions/59741936/text-classification-using-keras-input-dimensions
|
<p>I used to do text classification with sklearn. However, I have to do it using Keras, so I followed this tutorial to encode the labels and so on, as I have little experience with NNs: <a href="https://www.opencodez.com/python/text-classification-using-keras.htm" rel="nofollow noreferrer">https://www.opencodez.com/python/text-classification-using-keras.htm</a>.
However, I found out that I have to specify the input dimensions for the first layer. I am confused: how can I decide on a dimension of the text as an input? Does it depend on the CountVectorizer I used? I preprocess the text as follows:</p>
<pre><code>corpus=[]
for i in range(0,len(dataset)):
review=re.sub('[^a-zA-Z]',' ',dataset['cleaned_text'][i])
review.lower()
review=review.split()
ps=PorterStemmer()
review=[ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
review=' '.join(review)
corpus.append(review)
#the bag of word
from sklearn.feature_extraction.text import CountVectorizer
cv=CountVectorizer(max_features=1500).fit(corpus)
X=cv.fit_transform(corpus).toarray()
y=dataset['sentiment']
</code></pre>
<p>Also, in the tutorial he defined it based on the vocab size. As far as I understand, the input dimension should be the maximum number of tokens across all sentences, right?
Also, I tried to get encoder.vocab_size as this tutorial did: <a href="https://www.tensorflow.org/tutorials/text/text_classification_rnn" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/text_classification_rnn</a>. However, I got this error: <code>AttributeError: 'LabelBinarizer' object has no attribute 'vocab_size'</code></p>
| 1,263
|
|
text classification
|
Is text classification fast enough for type ahead search?
|
https://stackoverflow.com/questions/58071233/is-text-classification-fast-enough-for-type-ahead-search
|
<p>I'm working on designing a typeahead service that can be used to search for many different things. I was thinking about creating a text classification model to categorize these searches before actually making the search. </p>
<p>Here's an example of the result I'd want from the classification model.</p>
<p>Input</p>
<pre><code>John Smith
</code></pre>
<p>Output</p>
<pre><code>[
{
"likeliness": .6,
"category": "car-name-typeahead-search"
},
{
"likeliness": .9,
"category": "person-name-typeahead-search"
},
{
"likeliness": .1,
"category": "vin-typeahead-search"
},
{
"likeliness": .2,
"category": "help-page"
},
{
"likeliness": .2,
"category": "faq-page"
}
]
</code></pre>
<p>Then I'd take the categories that have a likeliness above some value and actually do the typeahead search. Also I'd return the results ordered by the likeliness rank.</p>
<p>We have been collecting data about people's searches and tracking what they were actually looking for so we should have the data needed to train a text classification model.</p>
<p>My question is can text classification models be fast enough to be used with a type ahead service and not be prohibitively expensive? Are there certain types of text classification algorithms that I should be looking at?</p>
|
<p>Usually in a modern serving framework (like <a href="https://github.com/tensorflow/serving" rel="nofollow noreferrer">tensorflow serving</a> running on a standalone server), a standard text classification model based on a shallow neural network should have a latency under 1 ms. You can look for a model composed of:</p>
<ul>
<li>word embedding layer (up to millions of words in vocabulary)</li>
<li>hidden layer (1-3)</li>
<li>classification (up to thousands of categories)</li>
</ul>
<p>If your expected response time is <= 200 ms, you should not worry about the latency of the classification. Even in the worst case, 10 ms is sufficient with the setup above.</p>
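A back-of-the-envelope count of the per-query work for such a shallow model shows why the latency is negligible (all sizes below are illustrative assumptions, not measurements):

```python
# illustrative sizes: vocab, embedding dim, hidden units, classes, query tokens
vocab, dim, hidden, classes, tokens = 1_000_000, 100, 128, 1_000, 10

# the embedding "layer" is a lookup: O(tokens * dim), independent of vocab size
lookup_ops = tokens * dim

# dense layers on the pooled embedding: embedding -> hidden -> classes
matmul_ops = 2 * (dim * hidden + hidden * classes)

total = lookup_ops + matmul_ops
print(total)   # 282600 -- a few hundred thousand FLOPs, microseconds on a CPU
```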
| 1,264
|
text classification
|
Feature Selection and Reduction for Text Classification
|
https://stackoverflow.com/questions/13603882/feature-selection-and-reduction-for-text-classification
|
<p>I am currently working on a project, a <strong>simple sentiment analyzer</strong> such that there will be <strong>2 and 3 classes</strong> in <strong>separate cases</strong>. I am using a <strong>corpus</strong> that is pretty <strong>rich</strong> in the means of <strong>unique words</strong> (around 200.000). I used <strong>bag-of-words</strong> method for <strong>feature selection</strong> and to reduce the number of <strong>unique features</strong>, an elimination is done due to a <strong>threshold value</strong> of <strong>frequency of occurrence</strong>. The <strong>final set of features</strong> includes around 20.000 features, which is actually a <strong>90% decrease</strong>, but <strong>not enough</strong> for intended <strong>accuracy</strong> of test-prediction. I am using <strong>LibSVM</strong> and <strong>SVM-light</strong> in turn for training and prediction (both <strong>linear</strong> and <strong>RBF kernel</strong>) and also <strong>Python</strong> and <strong>Bash</strong> in general.</p>
<p>The <strong>highest accuracy</strong> observed so far <strong>is around 75%</strong> and I <strong>need at least 90%</strong>. This is the case for <strong>binary classification</strong>. For <strong>multi-class training</strong>, the accuracy falls to <strong>~60%</strong>. I <strong>need at least 90%</strong> at both cases and can not figure how to increase it: via <strong>optimizing training parameters</strong> or <strong>via optimizing feature selection</strong>?</p>
<p>I have read articles about <strong>feature selection</strong> in text classification and what I found is that three different methods are used, which have actually a clear correlation among each other. These methods are as follows:</p>
<ul>
<li>Frequency approach of <strong>bag-of-words</strong> (BOW)</li>
<li><strong>Information Gain</strong> (IG)</li>
<li><strong>X^2 Statistic</strong> (CHI)</li>
</ul>
<p>The first method is already the one I use, but I use it very simply and need guidance for a better use of it in order to obtain high enough accuracy. I am also lacking knowledge about practical implementations of <strong>IG</strong> and <strong>CHI</strong> and looking for any help to guide me in that way.</p>
<p>Thanks a lot, and if you need any additional info for help, just let me know.</p>
<hr>
<ul>
<li><p>@larsmans: <strong>Frequency Threshold</strong>: I am looking for the occurrences of unique words in examples, such that if a word is occurring in different examples frequently enough, it is included in the feature set as a unique feature. </p></li>
<li><p>@TheManWithNoName: First of all thanks for your effort in explaining the general concerns of document classification. I examined and experimented with all the methods you brought forward, and others. I found the <strong>Proportional Difference</strong> (PD) method best for feature selection, where features are uni-grams and <strong>Term Presence</strong> (TP) for the weighting (I didn't understand why you tagged <strong>Term-Frequency-Inverse-Document-Frequency</strong> (TF-IDF) as an indexing method; I rather consider it a <strong>feature weighting</strong> approach). <strong>Pre-processing</strong> is also an important aspect for this task, as you mentioned. I used certain types of string elimination for refining the data, as well as <strong>morphological parsing</strong> and <strong>stemming</strong>. Also note that I am working on <strong>Turkish</strong>, which has <strong>different characteristics</strong> compared to English. Finally, I managed to reach <strong>~88% accuracy</strong> (f-measure) for <strong>binary</strong> classification and <strong>~84%</strong> for <strong>multi-class</strong>. These values are solid proof of the success of the model I used. This is what I have done so far. Now working on clustering and reduction models, I have tried <strong>LDA</strong> and <strong>LSI</strong> and am moving on to <strong>moVMF</strong> and maybe <strong>spherical models</strong> (LDA + moVMF), which seem to work better on corpora that have an objective nature, like news corpora. If you have any information and guidance on these issues, I will appreciate it. I need info especially to set up an interface (python oriented, open-source) between <strong>feature space dimension reduction</strong> methods (LDA, LSI, moVMF etc.) and <strong>clustering methods</strong> (k-means, hierarchical etc.).</p></li>
</ul>
|
<p>This is probably a bit late to the table, but...</p>
<p>As Bee points out and you are already aware, the use of SVM as a classifier is wasted if you have already lost the information in the stages prior to classification. However, the process of text classification requires much more than just a couple of stages, and each stage has significant effects on the result. Therefore, before looking into more complicated feature selection measures there are a number of much simpler possibilities that will typically require much lower resource consumption.</p>
<p>Do you pre-process the documents before performing tokenisation/representation into the bag-of-words format? Simply removing stop words or punctuation may improve accuracy considerably.</p>
<p>Have you considered altering your bag-of-words representation to use, for example, word pairs or n-grams instead? You may find that you have more dimensions to begin with but that they condense down a lot further and contain more useful information.</p>
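<p>As a quick illustration of the n-gram idea, here is a stdlib sketch (scikit-learn's <code>CountVectorizer</code> provides the same thing via its <code>ngram_range</code> parameter; the function below is my own, not part of the original answer):</p>

```python
from collections import Counter

def bag_of_ngrams(text, n=2):
    # unigrams plus contiguous word n-grams from a lowercased token stream
    tokens = text.lower().split()
    ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(tokens + ngrams)

print(bag_of_ngrams("support vector machines classify text"))
```

<p>The bigrams ("support vector", "vector machines", ...) carry context that the individual unigrams lose.</p>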
<p>It's also worth noting that dimension reduction <strong>is</strong> feature selection/feature extraction. The difference is that feature selection reduces the dimensions in a univariate manner, i.e. it removes terms on an individual basis as they currently appear without altering them, whereas feature extraction (which I think Ben Allison is referring to) is multivariate, combining one or more single terms together to produce higher orthogonal terms that (hopefully) contain more information and reduce the feature space.</p>
<p>Regarding your use of document frequency, are you merely using the probability/percentage of documents that contain a term or are you using the term densities found within the documents? If category one has only 10 documents and they each contain a term once, then category one is indeed associated with the term. However, if category two has only 10 documents that each contain the same term a hundred times each, then obviously category two has a much higher relation to that term than category one. If term densities are not taken into account this information is lost, and the fewer categories you have the more impact this loss will have. On a similar note, it is not always prudent to only retain terms that have high frequencies, as they may not actually be providing any useful information. For example if a term appears a hundred times in every document, then it is considered a noise term and, while it looks important, there is no practical value in keeping it in your feature set.</p>
<p>Also, how do you index the data: are you using the Vector Space Model with simple boolean indexing or a more complicated measure such as TF-IDF? Considering the low number of categories in your scenario, a more complex measure will be beneficial as it can account for term importance for each category in relation to its importance throughout the entire dataset.</p>
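<p>To make the boolean-vs-TF-IDF contrast concrete, here is a minimal stdlib sketch of unsmoothed TF-IDF (scikit-learn's <code>TfidfVectorizer</code> adds smoothing and normalisation on top): a boolean index only records term presence, whereas TF-IDF down-weights terms that occur in many documents.</p>

```python
import math

def tfidf(docs):
    # docs: list of token lists; returns one {term: weight} dict per document
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weighted = []
    for doc in docs:
        weights = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)       # term frequency within the document
            idf = math.log(n / df[term])          # rarer across documents -> higher weight
            weights[term] = tf * idf
        weighted.append(weights)
    return weighted

docs = [["svm", "text", "classification"], ["svm", "kernel"], ["text", "mining"]]
print(tfidf(docs))
```

<p>Here "classification" (appearing in one document) outweighs "svm" (appearing in two), even though both occur once in the first document.</p>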
<p>Personally I would experiment with some of the above possibilities first and then consider tweaking the feature selection/extraction with a (or a combination of) complex equations if you need an additional performance boost.</p>
<hr>
<p><strong>Additional</strong></p>
<p>Based on the new information, it sounds as though you are on the right track, and 84%+ accuracy (F1 or BEP - precision and recall based for multi-class problems) is generally considered very good for most datasets. It might be that you have successfully acquired all the information-rich features from the data already, or that a few are still being pruned.</p>
<p>Having said that, something that can be used as a predictor of how good aggressive dimension reduction may be for a particular dataset is 'Outlier Count' analysis, which uses the decline of Information Gain in outlying features to determine how likely it is that information will be lost during feature selection. You can use it on the raw and/or processed data to give an estimate of how aggressively you should aim to prune features (or unprune them as the case may be). A paper describing it can be found here:</p>
<p><a href="http://www.cs.technion.ac.il/~gabr/papers/fs-svm.pdf" rel="noreferrer" title="Text Categorization with Many Redundant Features: Using Aggressive Feature Selection to Make SVMs Competitive with C4.5">Paper with Outlier Count information</a></p>
<p>With regards to describing TF-IDF as an indexing method, you are correct in it being a feature weighting measure, but I consider it to be used mostly as part of the indexing process (though it can also be used for dimension reduction). The reasoning for this is that some measures are better aimed toward feature selection/extraction, while others are preferable for feature weighting specifically in your document vectors (i.e. the indexed data). This is generally due to dimension reduction measures being determined on a per category basis, whereas index weighting measures tend to be more document orientated to give superior vector representation.</p>
<p>In respect to LDA, LSI and moVMF, I'm afraid I have too little experience of them to provide any guidance. Unfortunately I've also not worked with Turkish datasets or the python language. </p>
| 1,265
|
text classification
|
SMOTE, Oversampling on text classification in Python
|
https://stackoverflow.com/questions/50999596/smote-oversampling-on-text-classification-in-python
|
<p>I am doing a text classification and I have very imbalanced data like </p>
<pre><code>Category | Total Records
Cate1 | 950
Cate2 | 40
Cate3 | 10
</code></pre>
<p>Now I want to oversample Cate2 and Cate3 so each has at least 400-500 records. I prefer SMOTE over random oversampling. Code: </p>
<pre><code>from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
X_train, X_test, y_train, y_test = train_test_split(fewRecords['text'],
fewRecords['category'])
sm = SMOTE(random_state=12, ratio = 1.0)
x_train_res, y_train_res = sm.fit_sample(X_train, y_train)
</code></pre>
<p>It does not work since it can't generate synthetic text samples. Now when I convert it into vectors like </p>
<pre><code>count_vect = CountVectorizer(analyzer='word', token_pattern=r'\w{1,}')
count_vect.fit(fewRecords['category'])
# transform the training and validation data using count vectorizer object
xtrain_count = count_vect.transform(X_train)
ytrain_train = count_vect.transform(y_train)
</code></pre>
<p>I am not sure if this is the right approach, and how do I convert the vector back to real text when I want to predict the real category after classification? </p>
|
<p>I know this question is over 2 years old and I hope you found a resolution. If in case you are still interested, this could be easily done with imblearn pipelines.</p>
<p>I will proceed under the assumption that you will use a sklearn compatible estimator to perform the classification. Let's say Multinomial Naive Bayes.</p>
<p>Please note how I import Pipeline from imblearn and not sklearn</p>
<pre><code>from imblearn.pipeline import Pipeline, make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
</code></pre>
<p>Import SMOTE as you've done in your code</p>
<pre><code>from imblearn.over_sampling import SMOTE
</code></pre>
<p>Do the train-test split as you've done in your code</p>
<pre><code>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(fewRecords['text'],
fewRecords['category'],stratify=fewRecords['category'], random_state=0
)
</code></pre>
<p>Create a pipeline with SMOTE as one of the components</p>
<pre><code>textclassifier =Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('smote', SMOTE(random_state=12)),
('mnb', MultinomialNB(alpha =0.1))
])
</code></pre>
<p>Train the classifier on training data</p>
<pre><code>textclassifier.fit(X_train, y_train)
</code></pre>
<p>Then you can use this classifier for any task including evaluating the classifier itself, predicting new observations etc.</p>
<p>e.g. predicting a new sample</p>
<pre><code> textclassifier.predict(['sample text'])
</code></pre>
<p>would return a predicted category.</p>
<p>For a more accurate model try word vectors as features or more conveniently, perform hyperparameter optimization on the pipeline.</p>
| 1,266
|
text classification
|
Does TPOT support multi-label text classification?
|
https://stackoverflow.com/questions/64034340/does-tpot-support-multi-label-text-classification
|
<p>How can I run TPOT to give me suggestions on what algorithm to use for a multi-label text classification? My data is already cleaned and divided into training and testing sets.</p>
|
<p>Yes, you can use TPOT for multi-label text classification.</p>
| 1,267
|
text classification
|
Grouped Text classification
|
https://stackoverflow.com/questions/57254603/grouped-text-classification
|
<p>I have thousands groups of paragraphs and I need to classify these paragraphs. The problem is that I need to classify each paragraph based on other paragraphs in the group! For example, a paragraph individually maybe belongs to class A but according to other paragraph in the group it belongs to class B.</p>
<p>I have tested lots of traditional and deep approaches (in fields like text classification, IR, text understanding, sentiment classification and so on) but none of them could classify these correctly.</p>
<p>I was wondering if anybody has worked in this area and could give me some suggestion. Any suggestions are appreciated. Thank you.</p>
<p><strong>Update 1:</strong></p>
<p>Actually, we are looking for manual sentences/paragraphs for some fields, so we first need to recognize whether a sentence/paragraph is a manual one or not; second, we need to classify it into its field, and we can recognize its field only based on the previous or next sentences/paragraphs.</p>
<p>To classify the paragraphs as manual/non-manual we have developed some promising approaches, but the problem comes up when we have to recognize the field according to the previous or next sentences/paragraphs; but which ones? We don't know which of the other sentences would hold the answer!</p>
<p><strong>Update 2:</strong></p>
<p>We cannot use the whole text of a group as input because it is too big (sometimes tens of thousands of words) and contains some other classes, so the machine can't learn properly, which leads to a sharp drop in accuracy.</p>
<p>Here is a picture that maybe help to better understanding the problem:
<a href="https://i.sstatic.net/HC2mj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HC2mj.png" alt="enter image description here"></a></p>
| 1,268
|
|
text classification
|
R naive Bayes text classification issues
|
https://stackoverflow.com/questions/34242828/r-naive-bayes-text-classification-issues
|
<p>We are using R naive Bayes for text classification. The results differ from the hand-calculated values. Maybe R is performing some normalization or distribution fitting and does not work in multinomial mode (text words with frequency). Also, I am unable to understand how R naive Bayes computes the prior conditional probabilities [,1] [,2]; they are different from the calculated values.</p>
<pre><code>`computeNavieBayes=function(trainingDataPath,testData,isTrainingMode) {
out <- tryCatch(
{
library(tm)
library(e1071)
testDataTokens <-unlist(strsplit(testData, "[,]"))
dataText<-read.csv(trainingDataPath,header= TRUE,row.names=NULL)
trainvector <- as.vector(dataText$Text)
trainsource <- VectorSource(trainvector)
traincorpus <- Corpus(trainsource)
#REMOVE STOPWORDS
traincorpus <- tm_map(traincorpus,stripWhitespace)
traincorpus <- tm_map(traincorpus,tolower)
traincorpus <- tm_map(traincorpus, removeWords,stopwords("english"))
traincorpus<- tm_map(traincorpus,removePunctuation)
traincorpus <- tm_map(traincorpus, PlainTextDocument)
# CREATE TERM DOCUMENT MATRIX
trainmatrix <- t(TermDocumentMatrix(traincorpus))
model <-
naiveBayes(as.matrix(trainmatrix),dataText$Category,
type="raw",laplace=1,useKernel=FALSE)
print(model)
col1 <- c()
index <- 1
resultsColl <- vector()
for (valueToken in testDataTokens)
{
col1[1] <- valueToken
dataTest <- data.frame("col1"=col1)
testvector <- as.vector(dataTest)
testsource <- VectorSource(testvector)
testcorpus <- Corpus(testsource)
testcorpus <- tm_map(testcorpus,stripWhitespace)
testcorpus <- tm_map(testcorpus,tolower)
testcorpus <- tm_map(testcorpus, removeWords,stopwords("english"))
testcorpus<- tm_map(testcorpus,removePunctuation)
testcorpus <- tm_map(testcorpus, PlainTextDocument)
testmatrix <- t(TermDocumentMatrix(testcorpus))
print(testmatrix)
print(valueToken)
results<-predict(model, as.matrix(testmatrix),type="raw",laplace=1)
print(class(results))
print(typeof(results))
print(results)
resultsColl[index] <- toString(results)
index <- index +1
}
return (resultsColl)
},
error=function(cond)
{
#error(RML,cond)
},
warning=function(cond)
{
return(cond)
},
finally={
}
)
return(out)
}
# function call
result <- computeNavieBayes("c:/software/nb.csv","laundering terrorist terrorist","N")
print(result)
</code></pre>
<p>`</p>
<pre><code># training data
Text,Category
laundering laundering laundering,Money laundering
bankfraud bankfraud,Money laundering
terrorist terrorist terrorist terrorist,Terrorist Financing
weapon weapon,Terrorist Financing
bribe bribe bribe bribe bribe,Bribery and Corruption
corrupt corrupt corrupt,Bribery and Corruption
TestData
"laundering terrorist terrorist"
**R Naive Bayes calculations**
class prior probabilities
A-priori probabilities:
dataText$Category
Bribery and Corruption Money laundering Terrorist Financing
0.3333333 0.3333333 0.3333333
Conditional probabilities:
bankfraud
dataText$Category [,1] [,2]
Bribery and Corruption 0 0.000000
Money laundering 1 1.414214
Terrorist Financing 0 0.000000
bribe
dataText$Category [,1] [,2]
Bribery and Corruption 2.5 3.535534
Money laundering 0.0 0.000000
Terrorist Financing 0.0 0.000000
corrupt
dataText$Category [,1] [,2]
Bribery and Corruption 1.5 2.12132
Money laundering 0.0 0.00000
Terrorist Financing 0.0 0.00000
laundering
dataText$Category [,1] [,2]
Bribery and Corruption 0.0 0.00000
Money laundering 1.5 2.12132
Terrorist Financing 0.0 0.00000
terrorist
dataText$Category [,1] [,2]
Bribery and Corruption 0 0.000000
Money laundering 0 0.000000
Terrorist Financing 2 2.828427
weapon
dataText$Category [,1] [,2]
Bribery and Corruption 0 0.000000
Money laundering 0 0.000000
Terrorist Financing 1 1.414214
Bribery and Corruption Money laundering Terrorist Financing
[1,] 0.003077316 0.5628753 0.4340474
R naive byes classifies test data as "Money Laundering"
------------------------------------------------------------------------
Hand Computed Values
|v| = unique number of words in vocabulary=laundering,
bankfraud,terrorist,weapon,bribe,corrupt = 6
laplace smoothing = 1
(1) class prior probabilities
Money laundering = 2/6 = 1/3 = 0.3333
Terrorist Financing = 2/6 = 1/3 = 0.3333
Bribery and Corruption = 2/6 =1/3 = 0.3333
(2) prior conditional probabilities
p(laundering|Money laundering) = (3+1) / (5 + 6) = 4/11 = 0.3636
p(bankfraud|Money laundering) = (2 +1) / (5 +6) = 3/11= 0.2727
p(terrorist|Money laundering) = (0 +1) / (5+6) = 1/11= 0.0909
p(weapon|Money laundering) = (0 +1) / (5+6) = 1/11= 0.0909
p(bribe|Money laundering) = (0 +1) / (5+6) = 1/11= 0.0909
p(corrupt|Money laundering) = (0 +1) / (5+6) = 1/11= 0.0909
p(laundering|Terrorist Financing) = (0 +1) / (6+6) = 1/12 = 0.0833
p(bankfraud|Terrorist Financing) = (0 +1) / (6+6) = 1/12 = 0.0833
p(terrorist|Terrorist Financing) = (4+1) / (6+6) = 5/12 = 0.4166
p(weapon|Terrorist Financing) = (2+1) / (6+6) = 3/12 = 0.25
p(bribe|Terrorist Financing) = (0 +1) / (6+6) = 1/12 = 0.0833
p(corrupt|Terrorist Financing) = (0 +1) / (6+6) = 1/12 = 0.0833
p(laundering|Bribery and Corruption) = (0+1) / (8+6) = 1/14 = 0.0714
p(bankfraud|Bribery and Corruption) = (0+1) / (8+6) = 1/14 = 0.0714
p(terrorist|Bribery and Corruption) = (0+1) / (8+6) = 1/14 =0.0714
p(weapon|Bribery and Corruption) = (0+1) / (8+6) = 1/14 =0.0714
p(bribe|Bribery and Corruption) = (5+1) / (8+6) = 6/14 = 0.4285
p(corrupt|Bribery and Corruption) = (3 +1) / (8+6) = 4/14 = 0.2857
(3)posterior class probabilities
Test data -> laundering terrorist terrorist
p(Money laundering|test data) = 0.3333 * 0.3636 * 0.0909 * 0.0909 =
0.0010013524
p(Terrorist Financing|test data) = 0.3333 * 0.0833 * 0.4166* 0.4166 =
0.0048185774
p(Bribery and Corruption|test data) = 0.3333 * 0.0714 * 0.0714 * 0.0714 =
0.0001213193
Nornalized values
p(Money laundering|test data) =0.0010013524 / 0.0059412491 = 0.168542
p(Terrorist Financing|test data) = 0.0048185774 / 0.0059412491 = 0.811037
p(Bribery and Corruption|test data) = 0.0001213193 / 0.0059412491 =
0.020419
Hand-calculated naive Bayes classifies the test data as "Terrorist Financing"
(which is correct) but R classifies it as "Money laundering" (which is wrong)
</code></pre>
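<p>For reference, the hand calculation can be reproduced with a short plain-Python multinomial Naive Bayes (my own sketch); it agrees with the hand-computed class. Note also that the [,1]/[,2] columns printed by e1071 are the per-class mean and standard deviation of each term count: e1071's naiveBayes models numeric features as Gaussian rather than multinomial, which explains the discrepancy.</p>

```python
import math
from collections import Counter, defaultdict

# the training set from the question
train = [
    ("laundering laundering laundering", "Money laundering"),
    ("bankfraud bankfraud", "Money laundering"),
    ("terrorist terrorist terrorist terrorist", "Terrorist Financing"),
    ("weapon weapon", "Terrorist Financing"),
    ("bribe bribe bribe bribe bribe", "Bribery and Corruption"),
    ("corrupt corrupt corrupt", "Bribery and Corruption"),
]

word_counts = defaultdict(Counter)   # per-class term counts
class_docs = Counter()               # documents per class
for text, label in train:
    class_docs[label] += 1
    word_counts[label].update(text.split())

vocab_size = len({w for counts in word_counts.values() for w in counts})  # |V| = 6

def predict(text):
    scores = {}
    n_docs = sum(class_docs.values())
    for label in class_docs:
        logp = math.log(class_docs[label] / n_docs)          # class prior
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace-smoothed conditional probability, as in the hand calculation
            logp += math.log((word_counts[label][w] + 1) / (total + vocab_size))
        scores[label] = logp
    return max(scores, key=scores.get)

print(predict("laundering terrorist terrorist"))  # Terrorist Financing
```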
| 1,269
|
|
text classification
|
convolutional neural network (CNN) for text classification
|
https://stackoverflow.com/questions/51646074/convolutional-neural-network-cnn-for-text-classification
|
<p>I am using a CNN for text classification. In my model, after the flatten layer I used the output layer directly, without a hidden dense layer. Is that correct, or must I use a dense layer? </p>
<p>just as example:</p>
<pre><code>model.add(Conv1D(filters=100, kernel_size=2,,padding='same' ,activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
</code></pre>
|
<p>Generally, you do not have to use a Dense layer. As the outputs of your convolution do contain some spatial information, though, it might make sense to use one.</p>
<p>I am assuming that you already understand how a convolution works. In that case, imagine what the activations at a certain position in the sentence mean with regard to the end result of classification.</p>
<p>There are even end-to-end CNNs, for example GoogLeNet, which (as the name "end-to-end" indicates) also does not have a fully connected layer.</p>
| 1,270
|
text classification
|
NLP text classification (flair)
|
https://stackoverflow.com/questions/73153511/nlp-text-classification-flair
|
<p>So I have basic question which I did not get answered by reading the documentation.</p>
<ol>
<li><p>If I want to do text classification (sentiment) about, let's say, an article on a topic.
I already have the plain text without the HTML stuff through Python libraries. Is it better to analyse each sentence and then combine the results, or to just pass the whole text as a string and get an already combined result? (I have already tried the whole-text option with flair and it worked quite well, I guess.)</p>
</li>
<li><p>The next thing would be how you could check if the sentences are about the asked topic and how to check it.</p>
</li>
</ol>
<p>If you could give me some guidelines or hints how to approach these problems I would be happy.</p>
| 1,271
|
|
text classification
|
Tool for text classification
|
https://stackoverflow.com/questions/5607880/tool-for-text-classification
|
<p>I am interested in learning about text classification, so I am reading up on the theory. The next step is doing stuff, and therefore I am looking for and at different tools. Some links point to <a href="http://www.cs.waikato.ac.nz/ml/weka/" rel="nofollow">WEKA</a>; however, <a href="http://mallet.cs.umass.edu/index.php" rel="nofollow">Mallet</a> seems to be a better fit for this task, but nobody links to this tool. Is there any reason to stay away from Mallet when wanting to work on a "serious" project? I was able to quickly train some classifiers with Mallet and test them, whereas with WEKA I ran into a problem with my labels "disappearing" after using filters to transform my text files into maps named with the category of the texts within them.</p>
|
<p>It depends on the task you are performing. Mallet is also a widely used tool, and both Weka and Mallet have their pros and cons.
For trivial tasks, both are easy to use. I generally prefer Weka for clustering and classification tasks.</p>
<p>Note: Do not be misled by the popularity of Weka in forum posts; it is primarily due to Weka having been in use for a longer period of time, Mallet being newer in comparison.</p>
| 1,272
|
text classification
|
How to find Information gain in text classification?
|
https://stackoverflow.com/questions/20720130/how-to-find-information-gain-in-text-classification
|
<p>I am working on text classification using a Decision Tree, which uses information gain as the main value for categorisation of text documents. I have extracted a few features by TF*IDF value, but I am not able to figure out how exactly information gain should be calculated. There are some articles about this, but none of them make it clear how to apply it to text files. </p>
|
<p>You can use <strong>Weka</strong> for calculating <strong>information gain</strong>. In Weka, the <code>InfoGainAttributeEval.java</code>
class will calculate IG for a word with respect to the documents. <a href="https://stackoverflow.com/questions/21063206/information-gain-calculation-for-a-text-file">Check this answer</a>; it may help you.</p>
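<p>For intuition, information gain over a presence/absence split can also be computed directly in a few lines. This is my own sketch with toy data, using the standard definition IG(w) = H(C) - P(w)·H(C|w) - P(not w)·H(C|not w):</p>

```python
import math

def entropy(labels):
    # Shannon entropy of a class-label list, in bits
    if not labels:
        return 0.0
    n = len(labels)
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def information_gain(docs, labels, word):
    # split the documents by presence/absence of the word
    with_w = [l for d, l in zip(docs, labels) if word in d.split()]
    without_w = [l for d, l in zip(docs, labels) if word not in d.split()]
    n = len(labels)
    conditional = (len(with_w) / n) * entropy(with_w) \
                + (len(without_w) / n) * entropy(without_w)
    return entropy(labels) - conditional

docs = ["java code", "java language", "sharp code", "sharp syntax"]
labels = ["java", "java", "csharp", "csharp"]
print(information_gain(docs, labels, "java"))  # 1.0: perfectly separates the classes
print(information_gain(docs, labels, "code"))  # 0.0: appears equally in both classes
```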
| 1,273
|
text classification
|
sparse gradient in text classification with pytorch
|
https://stackoverflow.com/questions/76243788/sparse-gradient-in-text-classification-with-pytorch
|
<p>I am trying to train a text classification module.</p>
<p>When I use <code>Adam</code>, <code>RAdam</code>, or <code>RMSProp</code> for my optimizer, I get the following error:</p>
<pre><code>RuntimeError: Adam/RAdam/RMSProp does not support sparse gradients
</code></pre>
<p>So I tried using <code>SparseAdam</code> and I got this error:</p>
<pre><code>RuntimeError: SparseAdam does not support dense gradients, please consider Adam instead
</code></pre>
<p>How can I solve this issue?</p>
| 1,274
|
|
text classification
|
Machine Learning Text Classification technique
|
https://stackoverflow.com/questions/27002483/machine-learning-text-classification-technique
|
<p>I have a large number (say 3000) of keywords. These need to be classified into seven fixed categories. Each category has training data (sample keywords). I need to come up with an algorithm such that, when a new keyword is passed to it, it predicts which category this keyword belongs to.</p>
<p>I am not aware of which text classification technique needs to be applied for this; are there any tools that can be used?</p>
|
<p>This comes under linear classification. You can use a naive Bayes classifier for this. Most ML frameworks will have an implementation of naive Bayes, e.g. Mahout.</p>
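<p>For example, in Python with scikit-learn (the framework choice, sample keywords, and category names below are my own illustration, not part of the original answer); character n-grams help the model cope with unseen variants of known keywords:</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# hypothetical training keywords and categories (three of the seven, for brevity)
keywords = ["java spring hibernate", "android kotlin app", "css html layout"]
categories = ["backend", "mobile", "frontend"]

# character n-grams within word boundaries make matching robust to small variations
clf = make_pipeline(CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    MultinomialNB())
clf.fit(keywords, categories)

print(clf.predict(["android kotlin"]))
```

<p>With the real 3000 keywords and seven categories, the same pipeline applies unchanged.</p>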
| 1,275
|
text classification
|
predict text classification using python
|
https://stackoverflow.com/questions/40880637/predict-text-classification-using-python
|
<p>I want to predict a text classification which is based on the correlation of the text in the training data set.</p>
<p>For eg. This is my training data:
"Mouse M325",
"Mouse for xyz M325",
"M325 Mouse logitech",
"Logitech mouse number M325"</p>
<p>As it is visible that Mouse and M325 definitely have a high correlation when compared with M325 and Logitech or others.</p>
<p>I want to use the correlations to predict a classification for the next dataset.
E.g. if the next data is "<strong>Mouse used by Alex number is M325</strong>", it should give me "<strong>Mouse M325</strong>" as the text classification and notify in a separate tab that the model has predicted this description, but that it was not something the machine had seen earlier in the trained data.
Like, <a href="https://i.sstatic.net/pBMa0.jpg" rel="nofollow noreferrer">Result Model has predicted</a>. How can this be solved?</p>
| 1,276
|
|
text classification
|
Word2vec with Conv1D for text classification confusion
|
https://stackoverflow.com/questions/49048758/word2vec-with-conv1d-for-text-classification-confusion
|
<p>I am doing text classification and plan to use word2vec word embeddings, passing them to Conv1D layers for text classification. I have a <a href="https://www.dropbox.com/s/bxe63rkqkaqji1x/emotion_merged_dataset.csv?dl=0" rel="nofollow noreferrer">dataframe</a> which contains the texts and corresponding labels (sentiments). I have used the gensim module and the word2vec algorithm to generate the word-embedding model. The code I used:</p>
<pre><code>import pandas as pd
from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize
df=pd.read_csv('emotion_merged_dataset.csv')
texts=df['text']
labels=df['sentiment']
df_tokenized=df.apply(lambda row: word_tokenize(row['text']), axis=1)
model = Word2Vec(df_tokenized, min_count=1)
</code></pre>
<p>I plan to use CNN and use this word-embedding model. But how should I use this word-embedding model for my cnn? What should be my input?</p>
<p>I plan to use something like(obviously not with the same hyper-parameters):</p>
<pre><code>model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
</code></pre>
<p>Can somebody help me out and point me in the right direction? Thanks in advance.</p>
|
<p>Sorry for the late response; I hope it is still useful for you.
Depending on your application you may need to download a specific word-embedding file; for example, here you have the <a href="https://nlp.stanford.edu/projects/glove/" rel="nofollow noreferrer">GloVe files</a>.</p>
<pre><code>import numpy as np

EMBEDDING_FILE = 'glove.6B.50d.txt'
embed_size = 50      # how big is each word vector
max_features = 20000 # how many unique words to use (i.e num rows in embedding vector)
maxlen = 100         # max number of words in a comment to use

# build a word -> vector lookup from the GloVe file
def get_coefs(word, *arr):
    return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.strip().split()) for o in open(EMBEDDING_FILE))

# mean/std of the pre-trained vectors, used to initialise unseen words
all_embs = np.stack(embeddings_index.values())
emb_mean, emb_std = all_embs.mean(), all_embs.std()

word_index = tokenizer.word_index  # tokenizer: a Keras Tokenizer already fitted on your texts
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
    if i >= max_features: continue
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None: embedding_matrix[i] = embedding_vector

# this is how you load the weights in the embedding layer
inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix])(inp)
</code></pre>
<p>I took this code from <a href="https://www.kaggle.com/jhoward/improved-lstm-baseline-glove-dropout" rel="nofollow noreferrer">Jeremy Howard</a>. I think this is all you need; if you want to load another embedding file the process is pretty similar: usually you just have to change the file being loaded.</p>
| 1,277
|
text classification
|
Text Classification into Categories
|
https://stackoverflow.com/questions/8136677/text-classification-into-categories
|
<p>I am working on a text classification problem: I am trying to classify a collection of words into categories. Yes, there are plenty of libraries available for classification, so please don't answer if you are suggesting to use them.</p>
<p>Let me explain what I want to implement. ( take for example )</p>
<p>List of Words:</p>
<ol>
<li>java</li>
<li>programming</li>
<li>language</li>
<li>c-sharp</li>
</ol>
<p>List of Categories.</p>
<ol>
<li>java</li>
<li>c-sharp</li>
</ol>
<p>here we will train the set, as:</p>
<ol>
<li>java maps to category 1. java</li>
<li>programming maps to category 1.java</li>
<li>programming maps to category 2.c-sharp</li>
<li>language maps to category 1.java</li>
<li>language maps to category 2.c-sharp</li>
<li>c-sharp maps to category 2.c-sharp</li>
</ol>
<p>Now we have a phrase "<em>The best java programming book</em>".
From the given phrase, the following words match our "List of Words":</p>
<ol>
<li>java</li>
<li>programming</li>
</ol>
<p>"programming" has two mapped categories "java" & "c-sharp" so it is a common word.</p>
<p>"java" is mapped to category "java" only.</p>
<p>So our matching category for the phrase is "java"</p>
<p>This is what came to my mind, is this solution fine, can it be implemented, what are your suggestions, any thing I am missing out, flaws, etc..</p>
|
<p>Of course this can be implemented. If you train a Naive Bayes classifier or linear SVM on the right dataset (titles of Java and C# programming books, I guess), it should learn to associate the term "Java" with Java, "C#" and ".NET" with C#, and "programming" with both. I.e., a Naive Bayes classifier would likely learn a roughly even probability of Java or C# for common terms like "programming" if the dataset is divided evenly.</p>
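<p>As an illustrative sketch of that point (scikit-learn is my own choice here; the training pairs are the ones listed in the question), the word-to-category mapping can be fed directly to a multinomial Naive Bayes classifier:</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# the word -> category training pairs from the question
words = ["java", "programming", "programming", "language", "language", "c-sharp"]
labels = ["java", "java", "c-sharp", "java", "c-sharp", "c-sharp"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(words, labels)

print(clf.predict(["The best java programming book"]))  # -> ['java']
```

<p>The classifier picks "java" for the phrase because "java" only ever maps to the Java category, while <code>clf.predict_proba(["programming"])</code> comes out at roughly 0.5/0.5, exactly the even split described above.</p>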
| 1,278
|
text classification
|
Dictionary Based Text CLassification in Python
|
https://stackoverflow.com/questions/73701800/dictionary-based-text-classification-in-python
|
<p>For quite a while now I have been looking for a decent <strong>dictionary-based text classification</strong> library in Python.</p>
<p>My use case is as follows: I will be receiving a long <strong><code>text</code></strong> which will likely talk about several things and hopefully mention some of the pre-defined <strong><code>entities</code></strong>.</p>
<pre><code>text = "Yesterday, I ate a Yelow-fruit. It was the longest fruit I ever ate."
entities = {"apple": ["pink", "sphere"], "banana": ["yellow", "tasty", "long"]}
</code></pre>
<p><strong>Please note that spelling errors are intentional !</strong></p>
<p>My goal is to have a program such that, given this text and the entities dict (which will change over time), the program outputs <strong><code>banana</code></strong>. Hence, the problem can be seen as a <strong>dictionary-based text classification</strong> problem where one classifies the text based on the <strong>entities</strong> dictionary.</p>
<p>The latter problem seems quite standard to me, but I fail to find a decent implementation in Python.</p>
<p>Of course, I can go through the text and count word occurrences per entity and finally output the most frequent entity. But this approach is very simple and won't survive real-world scenarios where the occurrences would <strong>NOT</strong> be exact. I would expect a good approach to include some text-similarity metric and allow the user to choose which pre-processing steps are acceptable (lowercasing, stemming, stop-word removal, ...). How should tokenization be done? Is there a <strong>semantic</strong> similarity or not? If there is a semantic similarity, is a dictionary-expansion algorithm offered?</p>
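<p>For what it's worth, a fuzzy-count baseline along those lines can be written with just the standard library (<code>difflib</code>; the 0.7 cutoff is an arbitrary choice of mine), though it still lacks any semantic similarity:</p>

```python
import re
import difflib

def classify(text, entities, cutoff=0.7):
    # tokenise, lowercase, and fuzzy-count keyword hits per entity
    words = re.findall(r"[a-z]+", text.lower())
    scores = {}
    for entity, keywords in entities.items():
        vocab = [entity] + keywords
        hits = 0
        for w in words:
            # get_close_matches tolerates small spelling errors ("yelow" ~ "yellow")
            if difflib.get_close_matches(w, vocab, n=1, cutoff=cutoff):
                hits += 1
        scores[entity] = hits
    return max(scores, key=scores.get)

text = "Yesterday, I ate a Yelow-fruit. It was the longest fruit I ever ate."
entities = {"apple": ["pink", "sphere"], "banana": ["yellow", "tasty", "long"]}
print(classify(text, entities))  # banana
```

<p>This catches the near-matches in the example ("yelow" ~ "yellow", "longest" ~ "long"), but synonyms would still need a semantic approach.</p>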
<p>So far, I've read this <a href="https://sicss.io/2019/materials/day3-text-analysis/dictionary-methods/rmarkdown/Dictionary-Based_Text_Analysis.html" rel="nofollow noreferrer">R blog post</a>, which gives a starting point for dictionary-based text classification in R. This <a href="https://datascience.stackexchange.com/questions/111912/performing-a-text-classification-based-on-a-dictionary">datascience stack-exchange</a> question seems to be related to mine as well. But none of these give a satisfactory answer.</p>
<p>So, Is there any straightforward library in Python for this kind of task ?</p>
<p>Thanks in advance for your replies.</p>
|
<p>You're asking for a simple solution while at the same time asserting that the problem is complex and that the frequency of related tokens may not be a strong enough indicator to always predict the right class. What you're alluding to is that you need to understand the context in which your related tokens are mentioned and for that, you will likely need a more complex solution.</p>
<p>One solution you could look at using is zero-shot NLP classification.</p>
<pre><code>from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
text = "Yesterday, I ate a yelow-fruit. It was the longest fruit I ever ate."
candidate_labels = ['banana', 'apple']
classifier(text, candidate_labels)
</code></pre>
<p>The output is:</p>
<pre><code>{'sequence': 'Yesterday, I ate a yelow-fruit. It was the longest fruit I ever ate.', 'labels': ['banana', 'apple'], 'scores': [0.6739664077758789, 0.3260335922241211]}
</code></pre>
<p>Edit:
To incorporate further information into the model we can expand the description of the candidate labels as such:</p>
<pre><code>candidate_labels = ['banana yellow tasty long', 'apple pink sphere']
</code></pre>
<p>The new output is:</p>
<pre><code>{'sequence': 'Yesterday, I ate a yelow-fruit. It was the longest fruit I ever ate.', 'labels': ['banana yellow tasty long', 'apple pink sphere'], 'scores': [0.9392997622489929, 0.060700222849845886]}
</code></pre>
<p>You can see from the addition of the descriptive words the difference between the scores of our two labels has increased.</p>
<p>Note:
This attention-based deep learning model uses embeddings to understand the similarity between tokens. Tokens that are close together in this high-dimensional space are proportionally similar in their semantic meaning. Even without specifying additional descriptive words, the model should be able to generalize because it contains knowledge about what a banana or apple is from the millions of text examples it was trained on. This is worth considering when expecting the model to generalize to more abstract examples, e.g. where classes may contain product SKUs.</p>
| 1,279
|
text classification
|
text classification based on TF-IDF and CNN
|
https://stackoverflow.com/questions/77861333/text-classification-based-on-tf-idf-and-cnn
|
<p>I'm doing binary text classification. I used TF-IDF weighting to build the CNN model, but I got results that weren't as expected.</p>
<pre><code>train_df = pd.read_csv("merged_data.csv", encoding='utf-8')
x = train_df['Text'].values
y = train_df['Label'].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
encoder = LabelEncoder()
y_train = encoder.fit_transform(y_train)
y_test = encoder.transform(y_test)
vectorizer = TfidfVectorizer(max_features = 10000)
x_train = vectorizer.fit_transform(x_train)
x_test = vectorizer.transform(x_test)
x_train = pad_sequences(x_train.toarray(), padding='post', dtype='float32', maxlen=1000)
x_test = pad_sequences(x_test.toarray(), padding='post', dtype='float32', maxlen=1000)
max_words = 10000
cnn = Sequential([
Embedding(max_words, 64, input_length=1000),
Conv1D(64, 3, activation='relu'),
GlobalMaxPooling1D(),
Dropout(0.5),
Dense(10, activation='relu'),
Dense(1, activation='sigmoid')
])
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', Precision(), Recall()])
cnn.summary()
batch_size = 128
epochs = 1
history = cnn.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, y_test))
(loss, accuracy, precision, recall) = cnn.evaluate(x_test, y_test, batch_size=batch_size)
preds = cnn.predict(x_test)
y_pred = np.argmax(preds, axis=1)
y_pred
clr = classification_report(y_test, y_pred)
print(clr)
</code></pre>
<p>Model performance results:</p>
<p><a href="https://i.sstatic.net/zXLV2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zXLV2.png" alt="result" /></a></p>
<p>How can I adjust this code to improve the model's performance?</p>
| 1,280
|
|
text classification
|
Classification with Weka+ NaiveBayes Classifier+ Text classification
|
https://stackoverflow.com/questions/9557650/classification-with-weka-naivebayes-classifier-text-classification
|
<p>I'm using Weka for a text classification task.
I created my data.arff file. It contains two attributes:</p>
<ol>
<li>text attribute</li>
<li>class attribute</li>
</ol>
<p>Then, the generated ARFF file is processed with the StringToWordVector:</p>
<blockquote>
<p>java weka.filters.unsupervised.attribute.StringToWordVector -i data/weather.arff -o data/out.arff
Then, NaiveBayes is used:
java weka.classifiers.bayes.NaiveBayes -t data/out.arff -K</p>
</blockquote>
<p>I have this problem:</p>
<blockquote>
<p>weka.core.UnsupportedAttributeTypeException: weka.classifiers.bayes.NaiveBayes: Cannot handle numeric class!
at weka.core.Capabilities.test(Capabilities.java:954)
at weka.core.Capabilities.test(Capabilities.java:1110)
at weka.core.Capabilities.test(Capabilities.java:1023)
at weka.core.Capabilities.testWithFail(Capabilities.java:1302)
at weka.classifiers.bayes.NaiveBayes.buildClassifier(NaiveBayes.java:213)
at weka.classifiers.Evaluation.evaluateModel(Evaluation.java:1076)
at weka.classifiers.Classifier.runClassifier(Classifier.java:312)
at weka.classifiers.bayes.NaiveBayes.main(NaiveBayes.java:944)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at weka.gui.SimpleCLIPanel$ClassRunner.run(SimpleCLIPanel.java:265)</p>
</blockquote>
<p>Could anyone help me?
I'm stuck at this level.</p>
|
<p>It's exactly what it says: NaiveBayes cannot handle a numeric class attribute. Declare the class attribute as a nominal attribute (an explicit set of class values) in your ARFF file, replacing any numeric codes with their equivalent text labels.</p>
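As a concrete illustration, the class attribute can be declared as nominal in the ARFF header rather than numeric; the attribute names and values below are placeholders for your own data:

```
@relation text_data

@attribute text string
@attribute class {pos, neg}   % nominal: an explicit set of labels, not numeric

@data
'this forum is good', pos
'this forum is bad', neg
```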
| 1,281
|
text classification
|
Apache Spark Naive Bayes based Text Classification
|
https://stackoverflow.com/questions/24011418/apache-spark-naive-bayes-based-text-classification
|
<p>I'm trying to use Apache Spark for document classification.</p>
<p>For example, I have two classes (C and J).</p>
<p>Train data is :</p>
<pre><code>C, Chinese Beijing Chinese
C, Chinese Chinese Shanghai
C, Chinese Macao
J, Tokyo Japan Chinese
</code></pre>
<p>And the test data is:
Chinese Chinese Chinese Tokyo Japan // is it J or C?</p>
<p>How can I train the model and predict on the above data? I have done Naive Bayes text classification with Apache Mahout, but not yet with Apache Spark.</p>
<p>How can i do this with Apache Spark?</p>
|
<p>Yes, it doesn't look like there is any simple tool to do that in Spark yet. But you can do it manually by first creating a dictionary of terms. Then compute IDFs for each term and convert each document into a vector using the TF-IDF scores.</p>
<p>There is a post on <a href="http://chimpler.wordpress.com/2014/06/11/classifiying-documents-using-naive-bayes-on-apache-spark-mllib/" rel="noreferrer">http://chimpler.wordpress.com/2014/06/11/classifiying-documents-using-naive-bayes-on-apache-spark-mllib/</a> that explains how to do it (with some code as well).</p>
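The manual steps described above can be sketched in plain Python first (before translating them into Spark RDD operations, as the linked post does); the training data below is the one from the question:

```python
import math
from collections import Counter

docs = [
    ("C", "Chinese Beijing Chinese"),
    ("C", "Chinese Chinese Shanghai"),
    ("C", "Chinese Macao"),
    ("J", "Tokyo Japan Chinese"),
]

# 1. Build a dictionary of terms with fixed vector indices.
vocab = sorted({w for _, text in docs for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

# 2. Compute IDF for each term: log(N / document frequency).
n_docs = len(docs)
df = Counter(w for _, text in docs for w in set(text.split()))
idf = {w: math.log(n_docs / df[w]) for w in vocab}

# 3. Convert each document into a TF-IDF vector.
def tfidf_vector(text):
    tf = Counter(text.split())
    vec = [0.0] * len(vocab)
    for w, count in tf.items():
        if w in index:
            vec[index[w]] = count * idf[w]
    return vec

vectors = [(label, tfidf_vector(text)) for label, text in docs]
```

Note that a term appearing in every document (like "Chinese" here) gets an IDF of zero, so it carries no weight; in real Spark code these three steps map onto map/reduceByKey-style transformations.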
| 1,282
|
text classification
|
Text Classification : LSTM vs Feedforward
|
https://stackoverflow.com/questions/61073198/text-classification-lstm-vs-feedforward
|
<p>I am training a text classification model. </p>
<p><strong>Task</strong> : Given a description, identify the quantifier </p>
<p><strong>For example:</strong>
1) This field contains the total revenue amount in USD -> amount </p>
<p>2) This has city code -> code</p>
<p>3) total deposit amount is 34 -> amount </p>
<p>4) contains first name info -> name </p>
<p>5) contains last nme -> name </p>
<p>For the given task, it makes sense to model this as a text classification problem. </p>
<p>I took two approaches</p>
<p>Approach 1 : </p>
<p>a) Use glove embedding to get vector represenation </p>
<p>b) Use feedforward NN to classify data into 1 of 11 possible output classes </p>
<pre><code>
model = Sequential()
model.add(layers.Embedding(vocab_size, embedding_dim,
weights=[embedding_matrix],
input_length=maxlen,
trainable=False))
model.add(layers.GlobalMaxPool1D())
model.add(layers.Dense(200, activation='relu'))
model.add(layers.Dense(100, activation='relu'))
model.add(layers.Dense(50, activation='relu'))
model.add(layers.Dense(11, activation='softmax'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
</code></pre>
<p>This approach gives me 80% test accuracy </p>
<p>Approach 2 : I plan to use LSTM because they can also learn the context and from previous words </p>
<pre><code>
model = Sequential()
model.add(layers.Embedding(vocab_size, embedding_dim,
weights=[embedding_matrix],
                           input_length=maxlen,
                           trainable=False))
model.add(layers.LSTM(100, dropout=0.2, recurrent_dropout=0.2, activation='tanh'))
model.add(layers.Dense(11, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])
epochs = 100
batch_size = 32
model.summary()
</code></pre>
<p>The problem is that, irrespective of what I do, the LSTM never gets above the 40% accuracy mark. It stays stuck there from start to end. </p>
<p>Moreover, the feedforward net (Approach 1) can detect simple cases like "total amount is 6 usd", but the LSTM is unable to get even this right and predicts it as Other</p>
<p>My question is: why does the LSTM (with the added power of context) fail to improve upon the feedforward net? What should I do to improve it? </p>
|
<p>I can't say exactly why, but my guess is sample size/data quality. The deeper the model, the more data it needs and the more sensitive it is to small biases in the training data. If you have a small dataset, it might be that a less complex model will serve better. <a href="https://i.sstatic.net/Gicfb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gicfb.png" alt="enter image description here"></a> </p>
<p>Another possibility is that LSTMs are very strong at context- and position-based reasoning, and from what I glean about your task, you are looking more for keywords and less for long-distance relations. This may also explain why feedforward works better</p>
| 1,283
|
text classification
|
Text Classification of News Articles Using Spacy
|
https://stackoverflow.com/questions/62278996/text-classification-of-news-articles-using-spacy
|
<p><strong>Dataset</strong>: A CSV file containing around <strong>1500</strong> rows with columns <strong>(Text, Labels)</strong>, where Text is a news article in the <strong>Nepali language</strong> and Label is its genre (Health, World, Tourism, Weather) and so on.</p>
<p>I am using <a href="https://spacy.io/usage/training#textcat" rel="nofollow noreferrer">Spacy</a> to train my Text Classification Model. So far, I have converted the dataset to a dataframe which looks like this <a href="https://i.sstatic.net/YYmIu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YYmIu.png" alt="enter image description here"></a>
and then into a spacy acceptable format through the code </p>
<pre><code>dataset['tuples'] = dataset.apply(
lambda row: (row['Text'],row['Labels']), axis=1)
training_data = dataset['tuples'].tolist()
</code></pre>
<p>which gives me the list of tuples in my training dataset like [('text...','label...'),('text...','label...')]</p>
<p>Now, how can I do text classification here?</p>
<p>In the spacy's documentation, I found</p>
<pre><code>textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
</code></pre>
<p>Do we have to add labels matching our dataset, or should we use positive/negative? Does spacy generate the labels from our dataset during training or not? </p>
<p>Any suggestions please?</p>
|
<p>You have to add your own labels. So, in your case:</p>
<pre><code>textcat.add_label('Health')
textcat.add_label('World')
textcat.add_label('Tourism')
...
</code></pre>
<p><code>spacy</code> will then be able to predict only those categories that you added in the above block of code</p>
<p>There is a special format for training data: each element of your list with data is a tuple that contains:</p>
<ol>
<li>Text</li>
<li>A dictionary with one element only: <code>cats</code> is the key and another dictionary is the value. That inner dictionary contains all your categories as keys and <code>1</code> or <code>0</code> as values, indicating whether each category is correct or not.</li>
</ol>
<p>So, your data should look like this:</p>
<p><code>[('text1', {'cats' : {'category1' : 1, 'category2' : 0, ...}}),
('text2', {'cats' : {'category1' : 0, 'category2' : 1, ...}}),
...]</code></p>
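A small helper that produces that structure from plain (text, label) tuples might look like the following; the texts and category names here are placeholders:

```python
def to_spacy_format(data, labels):
    # One 'cats' dict per example: 1 for the gold label, 0 for every other label.
    return [
        (text, {"cats": {lab: int(lab == gold) for lab in labels}})
        for text, gold in data
    ]

labels = ["Health", "World", "Tourism"]                       # all categories in the dataset
training_data = [("text1", "Health"), ("text2", "Tourism")]   # hypothetical (text, label) rows
formatted = to_spacy_format(training_data, labels)
```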
| 1,284
|
text classification
|
Text classification: Raw dictionary input and text vectorization
|
https://stackoverflow.com/questions/56426352/text-classification-raw-dictionary-input-and-text-vectorization
|
<p>I am working with some text processing using a series of sklearn classifiers. In an <a href="http://blog.chapagain.com.np/machine-learning-sentiment-analysis-text-classification-using-python-nltk/" rel="nofollow noreferrer">example</a> I found on the internet, I have noticed that the input of the classifier is a series of dictionary items:</p>
<p><code>({'my': True, 'first': True, 'visit': True, 'was': True, ...}, 'pos')</code></p>
<p><code>({'wowjust': True, 'wow': True, 'who': True, 'would': True,..}, 'pos')</code></p>
<p>These items are passed into a classification model (e.g., sklearn <code>LinearSVC</code>). I have found in the sklearn site that in text classification text data are transformed into a vector using some technique e.g., <code>HashingVectorizer</code> but I couldn't locate any documentation on how the aforementioned dictionary input is treated. Is it possible to provide some explanation of what procedure is followed in this input case?</p>
|
<p>According to the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html" rel="nofollow noreferrer">documentation</a>, it tokenizes the text it gets (you can customize how the text is tokenized: a regex telling what you consider a word, and a list of stopwords to be omitted), and computes a hash for every token that survives, which is a number between 0 and <code>n_features</code> (another parameter of the vectorizer).</p>
<p>Unlike <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer" rel="nofollow noreferrer">CountVectorizer</a>, you can always be sure you have exactly <code>n_features</code> features, but you do risk hashing collisions.</p>
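To make the mechanics concrete, here is a toy re-implementation of the hashing idea in plain Python (the real HashingVectorizer uses a much faster hash and, by default, an alternating sign to reduce the impact of collisions — this sketch only shows the token-to-bucket mapping):

```python
import re
import hashlib

def hashing_vectorize(text, n_features=16, stop_words=()):
    # Tokenize with a simple word regex and drop stopwords; both are
    # customizable, mirroring the token_pattern / stop_words parameters.
    tokens = [t for t in re.findall(r"\b\w\w+\b", text.lower())
              if t not in stop_words]
    vec = [0] * n_features
    for tok in tokens:
        # Stable hash mapped into [0, n_features); collisions are possible.
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16) % n_features
        vec[h] += 1
    return vec

v = hashing_vectorize("My first visit was great", stop_words={"was"})
```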
| 1,285
|
text classification
|
Attention based recurrent neural network for text classification
|
https://stackoverflow.com/questions/42905106/attention-based-recurrent-neural-network-for-text-classification
|
<p>Are there any implementations of Attention-based RNN for text classification? (in theano, Karas,....)? I found some implementations of this model but they are used more often for the text generation purposes.</p>
|
<p>You can combine the information from:</p>
<p><a href="https://github.com/Microsoft/CNTK/wiki/Implement-an-attention-mechanism" rel="nofollow noreferrer">https://github.com/Microsoft/CNTK/wiki/Implement-an-attention-mechanism</a></p>
<p>and apply to a token classification model explained here:</p>
<p><a href="https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_202_Language_Understanding.ipynb" rel="nofollow noreferrer">https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_202_Language_Understanding.ipynb</a></p>
| 1,286
|
text classification
|
How can I use BERT for long text classification?
|
https://stackoverflow.com/questions/58636587/how-can-i-use-bert-for-long-text-classification
|
<p>We know that BERT has a maximum length limit of tokens = 512. So if an article has a length of much bigger than 512, such as 10000 tokens in text, how can BERT be used?</p>
|
<p>You basically have three options:</p>
<ol>
<li>You can cut the longer texts off and only use the first 512 tokens. The original BERT implementation (and probably the others as well) truncates longer sequences automatically. For most cases, this option is sufficient.</li>
<li>You can split your text in multiple subtexts, classify each of them and combine the results back together (choose the class which was predicted for most of the subtexts for example). This option is obviously more expensive.</li>
<li>You can even feed the output token for each subtext (as in option 2) to another network (but you won't be able to fine-tune) as described in <a href="https://github.com/google-research/bert/issues/27" rel="nofollow noreferrer">this discussion</a>.</li>
</ol>
<p>I would suggest to try option 1, and only if this is not good enough to consider the other options.</p>
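Option 2 can be sketched as below; <code>classify_chunk</code> is a stand-in for whatever BERT classifier call you actually use, and the 512/256 window and stride sizes are just illustrative defaults:

```python
from collections import Counter

def chunk_tokens(tokens, max_len=512, stride=256):
    # Overlapping windows so context at chunk boundaries is not lost.
    chunks = []
    for start in range(0, max(1, len(tokens) - max_len + stride), stride):
        chunks.append(tokens[start:start + max_len])
    return chunks

def classify_long_text(tokens, classify_chunk, max_len=512, stride=256):
    # Classify every chunk, then take a majority vote over the predictions.
    preds = [classify_chunk(c) for c in chunk_tokens(tokens, max_len, stride)]
    return Counter(preds).most_common(1)[0][0]
```

For texts shorter than <code>max_len</code> this degenerates to a single chunk, i.e. plain classification.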
| 1,287
|
text classification
|
dataset import error for AutoML text classification
|
https://stackoverflow.com/questions/52319164/dataset-import-error-for-automl-text-classification
|
<p>I have been trying to import a dataset into AutoML Natural Language Text Classification. However, the UI gave me an error: Invalid row in CSV file, Error details: Error detected: "FILE_TYPE_NOT_SUPPORTED"</p>
<p>I am uploading the csv file, what should I do?</p>
|
<p>Please make sure there are no hidden quotes in your dataset. Complete requirements can be found on the “<a href="https://cloud.google.com/natural-language/automl/docs/prepare" rel="nofollow noreferrer">Preparing your training data</a>” page.</p>
<blockquote>
<p>Common .csv errors:</p>
<ul>
<li>Using Unicode characters in labels. For example, Japanese characters are not supported.</li>
<li>Using spaces and non-alphanumeric characters in labels.</li>
<li>Empty lines.</li>
<li>Empty columns (lines with two successive commas).</li>
<li>Missing quotes around embedded text that includes commas.</li>
<li>Incorrect capitalization of Cloud Storage text paths.</li>
<li>Incorrect access control configured for your text files. Your service account should have read or greater access, or files must be publicly-readable.</li>
<li>References to non-text files, such as JPEG files. Likewise, files that are not text files but that have been renamed with a text extension will cause an error.</li>
<li>The URI of a text file points to a different bucket than the current project. Only files in the project bucket can be accessed.</li>
<li>Non-CSV-formatted files.</li>
</ul>
</blockquote>
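A quick stdlib check for a few of these CSV issues before uploading might look like this (a heuristic sketch only; it assumes the label is the last column and does not cover the storage-related items):

```python
import csv
import io

def validate_automl_csv(csv_text):
    # Flags a few of the common problems listed above; not exhaustive.
    problems = []
    for i, row in enumerate(csv.reader(io.StringIO(csv_text)), start=1):
        if not row or all(not cell.strip() for cell in row):
            problems.append((i, "empty line"))
            continue
        if any(cell == "" for cell in row):
            problems.append((i, "empty column"))
        label = row[-1]  # assumption: label is the last column
        if not label.replace("_", "").isalnum() or not label.isascii():
            problems.append((i, "label has spaces/non-alphanumeric/non-ASCII chars"))
    return problems
```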
| 1,288
|
text classification
|
Text Classification Problem : Name and approach of this type of classification
|
https://stackoverflow.com/questions/59227070/text-classification-problem-name-and-approach-of-this-type-of-classification
|
<p>I have a labelled data-set comprising of text segments and corresponding labels. Each label consists of three parts, and there can be multiple or zero labels assigned to a given text segment. </p>
<pre><code>Sample Data is given below:
text segment action performed person
--- --- --- ---
"I went outside to play and not drink." {play,drink} {yes,no} {1st,1st}
"He is not playing." play no 3rd
"The weather is cold today." N/A N/A N/A
</code></pre>
<p>The task is to predict the label for any given text segment, where each label consists of three parts (action, performed, person), and there may be zero or more labels for a text segment. </p>
<p>There are fifteen classes for action, two for performed, and two for person. The annotated data comprises 6000 text segments, of which 4000 are assigned at least one label. </p>
<p>What is this type of text classification called (other than multi-class labelling)? </p>
<p>Also, which classification approach is recommended for this type of classification problem? </p>
|
<p>This is not a classification problem. Although you could perhaps torture a classification model for this purpose, the NLP techniques you need are "dependency parsing" and "semantic role labeling". spaCy is a good Python library for dependency parsing.</p>
| 1,289
|
text classification
|
Text classification for logistic regression with pipelines
|
https://stackoverflow.com/questions/53468055/text-classification-for-logistic-regression-with-pipelines
|
<p>I am trying to use <code>LogisticRegression</code> for text classification. I am using <code>FeatureUnion</code> for the features of the <code>DataFrame</code> and then <code>cross_val_score</code> to test the accuracy of the classifier. However, I don't know how to include the feature with the free text, called <code>tweets</code>, within the pipeline. I am using the <code>TfidfVectorizer</code> for the bag of words model.</p>
<pre><code>nominal_features = ["tweeter", "job", "country"]
numeric_features = ["age"]
numeric_pipeline = Pipeline([
("selector", DataFrameSelector(numeric_features))
])
nominal_pipeline = Pipeline([
    ("selector", DataFrameSelector(nominal_features)),
    ("onehot", OneHotEncoder())])
text_pipeline = Pipeline([
("selector", DataFrameSelector("tweets")),
("vectorizer", TfidfVectorizer(stop_words='english'))])
pipeline = Pipeline([("union", FeatureUnion([("numeric_pipeline", numeric_pipeline),
("nominal_pipeline", nominal_pipeline)])),
("estimator", LogisticRegression())])
np.mean(cross_val_score(pipeline, df, y, scoring="accuracy", cv=5))
</code></pre>
<p>Is this the right way to include the <code>tweets</code> free text data in the pipeline?</p>
|
<pre><code>pipeline = Pipeline([
('vect', CountVectorizer(stop_words='english',lowercase=True)),
("tfidf1", TfidfTransformer(use_idf=True,smooth_idf=True)),
('clf', MultinomialNB(alpha=1)) #Laplace smoothing
])
train,test=train_test_split(df,test_size=.3,random_state=42, shuffle=True)
pipeline.fit(train['Text'],train['Target'])
predictions=pipeline.predict(test['Text'])
print(test['Target'],predictions)
score = f1_score(test['Target'],predictions,pos_label='positive',average='micro')
print("Score of Naive Bayes is :" , score)
</code></pre>
| 1,290
|
text classification
|
SVM for text classification - tutorial on machine learning? How do I get started?
|
https://stackoverflow.com/questions/20772876/svm-for-text-classification-tutorial-on-machine-learning-how-do-i-get-started
|
<p>I'm looking for a really good tutorial on machine learning for text classification perhaps using Support vector machine (SVM) or other appropriate technology for large-scale supervised text classification. If there isn't a great tutorial, can anyone give me pointers to how a beginner should get started and do a good job with things like feature detection for English language Text Classification. </p>
<p>Books, articles, anything that can help beginners get started would be super helpful! </p>
|
<p>In its classical flavour the Support Vector Machine (SVM) is a binary classifier (i.e., it solves classification problems involving two classes). However, it can also be used to solve multi-class classification problems by applying techniques like One versus One, One versus All, or Error-Correcting Output Codes <a href="http://machinelearning.wustl.edu/mlpapers/paper_files/AllweinSS00.pdf" rel="nofollow noreferrer">[Alwein et al.]</a>. More recently, a modification of the classical SVM, the multiclass SVM, allows solving multi-class classification problems directly <a href="http://jmlr.org/papers/volume2/crammer01a/crammer01a.pdf" rel="nofollow noreferrer">[Crammer et al.]</a>.</p>
<p>Now as far as document classification is concerned, your main problem is feature extraction (i.e., how to derive classification features from your documents). This is not a trivial task and there is a substantial body of literature on the topic (e.g., <a href="http://www.taibahu.edu.sa/iccit/allICCITpapers/pdf/p231-rehman.pdf" rel="nofollow noreferrer">[Rehman et al.]</a>, <a href="http://acl.ldc.upenn.edu/H/H92/H92-1041.pdf" rel="nofollow noreferrer">[Lewis]</a>).</p>
<p>Once you've overcome the obstacle of feature extraction, and have labeled and placed your document samples in a feature space, you can apply any classification algorithm like SVMs, AdaBoost, etc.</p>
<p>Introductory books on machine learning:
<a href="https://rads.stackoverflow.com/amzn/click/com/1107422221" rel="nofollow noreferrer" rel="nofollow noreferrer">[Flach]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/026201825X" rel="nofollow noreferrer" rel="nofollow noreferrer">[Mohri]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/026201243X" rel="nofollow noreferrer" rel="nofollow noreferrer">[Alpaydin]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/0387310738" rel="nofollow noreferrer" rel="nofollow noreferrer">[Bishop]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/0387848576" rel="nofollow noreferrer" rel="nofollow noreferrer">[Hastie]</a></p>
<p>Books specific for SVMs:
<a href="https://rads.stackoverflow.com/amzn/click/com/0262194759" rel="nofollow noreferrer" rel="nofollow noreferrer">[Schlkopf]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/0521780195" rel="nofollow noreferrer" rel="nofollow noreferrer">[Cristianini]</a></p>
<p>Some specific bibliography on document classification and SVMs:
<a href="https://rads.stackoverflow.com/amzn/click/com/012386979X" rel="nofollow noreferrer" rel="nofollow noreferrer">[Miner et al.]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/1420059408" rel="nofollow noreferrer" rel="nofollow noreferrer">[Srivastava et al.]</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/1849962251" rel="nofollow noreferrer" rel="nofollow noreferrer">[Weiss et al.]</a>, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.8812&rep=rep1&type=pdf" rel="nofollow noreferrer">[Pilászy]</a>, <a href="http://www.cs.cornell.edu/people/tj/publications/joachims_98a.pdf" rel="nofollow noreferrer">[Joachims]</a>, <a href="http://www.cs.cornell.edu/people/tj/publications/joachims_01a.pdf" rel="nofollow noreferrer">[Joachims01]</a>, <a href="http://www.cs.cornell.edu/people/tj/publications/joachims_97b.pdf" rel="nofollow noreferrer">[Joachims97]</a>, <a href="http://aclweb.org/anthology//W/W03/W03-1027.pdf" rel="nofollow noreferrer">[Sassano]</a></p>
| 1,291
|
text classification
|
Text Classification - Multiple Training Datasets
|
https://stackoverflow.com/questions/70750444/text-classification-multiple-training-datasets
|
<p>Would there be “dilution” of accuracy if I train the same text classification model with multiple training datasets? For example, my end users would be providing (uploading) their own tagged CSVs to train the model and use the trained model in the future. The contexts of datasets would be different - L&D, Technology, Customer Support, etc.</p>
<p>If yes, how do I have a “separate instance or model” for each user?</p>
<p>I am using Python and would possibly use Gradio or Streamlit as the UI. Open to advice.</p>
|
<p>I ended up using huggingface's zero-shot classification.</p>
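On the "separate instance per user" part of the question: one straightforward pattern is to keep one trained model object per user and persist each under its own key, so uploads never dilute each other. A sketch, where the dict stands in for a real fitted model and the directory name is made up:

```python
import pickle
from pathlib import Path

class ModelRegistry:
    """Keeps one trained model per user, each persisted to its own file."""

    def __init__(self, root="user_models"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def _path(self, user_id):
        return self.root / f"{user_id}.pkl"

    def save(self, user_id, model):
        with open(self._path(user_id), "wb") as f:
            pickle.dump(model, f)

    def load(self, user_id):
        with open(self._path(user_id), "rb") as f:
            return pickle.load(f)

registry = ModelRegistry()
registry.save("alice", {"labels": ["L&D", "Tech"]})  # stand-in for a fitted model
restored = registry.load("alice")
```

In a Gradio/Streamlit app the user id would come from the session, and each upload would retrain only that user's entry.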
| 1,292
|
text classification
|
Text Classification - using stemmer degrades results?
|
https://stackoverflow.com/questions/21294694/text-classification-using-stemmer-degrades-results
|
<p>There's <a href="http://www.cs.indiana.edu/~mkorayem/paper/survey_Arabic.pdf" rel="nofollow">this</a> article about sentiment analysis of Arabic. </p>
<p>In the beginning of page 5 it says that:</p>
<blockquote>
<p>"Experiments also show that stemming words before feature extraction and classification nearly always degrades the results".</p>
</blockquote>
<p>Later on in the same page, they state that:</p>
<blockquote>
<p>"...and an Arabic light stemmer is used for stemming the words"</p>
</blockquote>
<p>Um I thought that a stemmer/lemmatizer was <em>always</em> used before text classifications, why does he say that it degrades the results?</p>
<p>Thanks :)</p>
|
<p>I do not know the Arabic language; it may be specific in many respects. My answer concerns English.</p>
<blockquote>
<p>Um I thought that a stemmer/lemmatizer was always used before text classifications, why does he say that it degrades the results?</p>
</blockquote>
<p>No, it is not; it entirely depends on the <strong>task</strong>. If you want to extract some general concept of the text, then stemming/lemmatization is a good step. But in the analysis of short chunks, where each word is valuable, stemming simply destroys meaning. In particular, in sentiment analysis, stemming may destroy the sentiment of a word. </p>
| 1,293
|
text classification
|
Multi-class text classification where classification depends on other columns beside text column
|
https://stackoverflow.com/questions/77015193/multi-class-text-classification-where-classification-depends-on-other-columns-be
|
<p>I want to do text classification in Python where I have a df with 3 columns: scenario, text and label. The label depends on the column text but also on the column scenario. There are 6 different scenarios.</p>
<p>Would it be correct to make a new column where I would merge columns scenario and text into one string and then do the tokenization and the rest of the model training with just this new column?</p>
|
<p>That depends on how you would be merging the two. Simply concatenating might not be the best idea. It might be good to introduce a special token between the two strings, like [SEP] (or any other token), during both training and inference, so that it becomes easier for the model to learn the pattern.</p>
<p>You can either make a new column with this merged string and use it for your training, or merge the text from the two columns in your data loader. The first option is the easier one.</p>
<p>If you are interested in experimenting with the model, you can also try encoding the text from the first column using an encoder like BERT, then encoding the text from the second column, concatenating the two vectors, and feeding the result to the classifier head.</p>
<p>All that said, the simple approach of creating a third column with the two texts concatenated with a separator should work fine.</p>
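A minimal version of the simple approach, assuming <code>[SEP]</code> (or any token your tokenizer keeps intact) as the separator; column values here are made up:

```python
SEP = "[SEP]"  # assumed separator; use whatever special token your tokenizer reserves

def merge_scenario_text(scenario, text, sep=SEP):
    # Prefix the scenario so the model can condition the label on it.
    return f"{scenario} {sep} {text}"

merged = merge_scenario_text("scenario_3", "the actual text goes here")
```

Applying this row-wise (e.g. with <code>df.apply</code>) yields the third column to tokenize and train on.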
| 1,294
|
text classification
|
Plot decision boundary for text classification
|
https://stackoverflow.com/questions/40639316/plot-decision-boundary-for-text-classification
|
<p>I am looking for examples that show how to plot decision boundaries for text classification. I know about some of the examples in the sklearn documentation, but how do I apply them to text data?</p>
<p>I am not even sure, what to plot. Can a decision boundary be plotted for this?</p>
<p>I was thinking of using the result from the CountVectorizer somehow and then turning it into an np.array.</p>
<p>Are there any good examples online?</p>
|
<p>The difficulty here is that text classification is a high-dimensional problem, where the dimensionality equals the size of the vocabulary. Plotting this in 2d would require applying a dimensionality reduction technique first, e.g., PCA or t-SNE, and then training the learning algorithm on this new representation. Even that way, though, I doubt how informative your plot will be. </p>
<p>You could use a toy example, with only 2-3 words to visualize a line (2d) or a surface (3d) separating the classes, but it would be a toy example. </p>
| 1,295
|
text classification
|
Text Classification in spacy v3
|
https://stackoverflow.com/questions/72256559/text-classification-in-spacy-v3
|
<p>I am trying to perform a text classification using spacy v3.</p>
<p>I am a bit confused with CLI approach. However, following the examples in the spacy project repo <a href="https://github.com/explosion/projects/tree/v3/tutorials" rel="nofollow noreferrer">HERE</a>.</p>
<p>I downloaded test data right <a href="https://raw.githubusercontent.com/koaning/tokenwiser/main/data/oos-intent.jsonl" rel="nofollow noreferrer">HERE</a> and prepared it to spacy v3 format:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
from spacy.tokens import DocBin
import spacy
nlp = spacy.blank("en")
doc_bin = DocBin()
df= pd.read_json("../data/data.jsonl", lines = True)
df.head()
for text, label in zip(df['text'], df['label']):
doc = nlp(text)
doc.cats[label] = True
doc_bin.add(doc)
doc_bin.to_disk('train.spacy')
</code></pre>
<p>I created a text classification <code>config.cfg</code> file from the documentation page.</p>
<p>And started the training loop:</p>
<pre class="lang-py prettyprint-override"><code>python -m spacy train assets/config.cfg --output training/ --paths.train assets/train.spacy --paths.dev assets/train.spacy
</code></pre>
<p>I get the following output:</p>
<pre class="lang-sh prettyprint-override"><code>Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.bias', 'lm_head.decoder.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2022-05-16 04:50:42,566] [INFO] Initialized pipeline components: ['transformer', 'textcat']
✔ Initialized pipeline
============================= Training pipeline =============================
ℹ Pipeline: ['transformer', 'textcat']
ℹ Initial learn rate: 0.0
E # LOSS TRANS... LOSS TEXTCAT CATS_SCORE SCORE
--- ------ ------------- ------------ ---------- ------
</code></pre>
<p>And it seems that the pipeline has started. However, nothing else happens: apparently no training takes place, and I only ever see the first, empty row of the training loop.</p>
| 1,296
|
|
text classification
|
Combine different types of features (Text classification)
|
https://stackoverflow.com/questions/41419780/combine-different-types-of-features-text-classification
|
<p>I'm working on a text classification task and I've run into a problem.
I've already selected the 1000 best features using a bag-of-words approach. Now I want to use additional features based on part-of-speech, average word length, etc., and then combine all of these features together. How can I achieve this?
I'm using Python with the NLTK and scikit-learn packages. This is my first Python project, so the code may not be very good.</p>
<p>Thanks in advance,</p>
<pre><code> import nltk
from nltk.corpus.reader import CategorizedPlaintextCorpusReader
from sklearn.feature_extraction.text import TfidfVectorizer
import os
import numpy as np
import random
import pickle
from time import time
from sklearn import metrics
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB,BernoulliNB
from sklearn.linear_model import LogisticRegression,SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
import matplotlib.pyplot as plt
def intersect(a, b, c, d):
return list(set(a) & set(b)& set(c)& set(d))
def find_features(document, feauture_list):
words = set(document)
features = {}
for w in feauture_list:
features[w] = (w in words)
return features
def benchmark(clf, name, training_set, testing_set):
print('_' * 80)
print("Training: ")
print(clf)
t0 = time()
clf.train(training_set)
train_time = time() - t0
print("train time: %0.3fs" % train_time)
t0 = time()
score = nltk.classify.accuracy(clf, testing_set)*100
#pred = clf.predict(testing_set)
test_time = time() - t0
print("test time: %0.3fs" % test_time)
print("accuracy: %0.3f" % score)
clf_descr = name
return clf_descr, score, train_time, test_time
#print((find_features(corpus.words('fantasy/1077-0_fantasy.txt'),feature_list)))
path = 'c:/data/books-Copy'
os.chdir(path)
#need this if you want to save tfidf_matrix
corpus = CategorizedPlaintextCorpusReader(path, r'.*\.txt',
cat_pattern=r'(\w+)/*')
save_featuresets = open(path +"/features_500.pickle","rb")
featuresets = []
featuresets = pickle.load(save_featuresets)
save_featuresets.close()
documents = [(list(corpus.words(fileid)), category)
for category in corpus.categories()
for fileid in corpus.fileids(category)]
random.shuffle(documents)
tf = TfidfVectorizer(analyzer='word', min_df = 1,
stop_words = 'english', sublinear_tf=True)
#documents_tfidf = []
top_features = []
tf = TfidfVectorizer(input= 'filename', analyzer='word',
min_df = 1, stop_words = 'english', sublinear_tf=True)
for category in corpus.categories():
files = corpus.fileids(category)
tf.fit_transform( files )
feature_names = tf.get_feature_names()
#documents_tfidf.append(feature_names)
indices = np.argsort(tf.idf_)[::-1]
top_features.append([feature_names[i] for i in indices[:10000]])
#print(top_features_detective)
feature_list = list( set(top_features[0][:500]) | set(top_features[1][:500]) |
set(top_features[2][:500]) | set(top_features[3][:500]) |
set(intersect(top_features[0], top_features[1], top_features[2], top_features[3])))
featuresets = [(find_features(rev, feature_list), category) for (rev, category) in documents]
training_set = featuresets[:50]
testing_set = featuresets[20:]
results = []
for clf, name in (
(SklearnClassifier(MultinomialNB()), "MultinomialNB"),
(SklearnClassifier(BernoulliNB()), "BernoulliNB"),
(SklearnClassifier(LogisticRegression()), "LogisticRegression"),
(SklearnClassifier(SVC()), "SVC"),
(SklearnClassifier(LinearSVC()), "Linear SVC "),
(SklearnClassifier(SGDClassifier()), "SGD ")):
print(name)
results.append(benchmark(clf, name, training_set, testing_set))
indices = np.arange(len(results))
results = [[x[i] for x in results] for i in range(4)]
clf_names, score, training_time, test_time = results
training_time = np.array(training_time) / np.max(training_time)
test_time = np.array(test_time) / np.max(test_time)
plt.figure(figsize=(12, 8))
plt.title("Score")
plt.barh(indices, score, .2, label="score", color='navy')
plt.barh(indices + .3, training_time, .2, label="training time",
color='c')
plt.barh(indices + .6, test_time, .2, label="test time", color='darkorange')
plt.yticks(())
plt.legend(loc='best')
plt.subplots_adjust(left=.25)
plt.subplots_adjust(top=.95)
plt.subplots_adjust(bottom=.05)
for i, c in zip(indices, clf_names):
plt.text(-15.6, i, c)
plt.show()
</code></pre>
|
<p>There is nothing wrong with combining features of different types (in fact, it's generally a good idea for classification tasks). NLTK's API expects features to come in a dictionary, so you just need to merge your feature collections into a single dictionary.</p>
<p>This is the answer to the question you asked. If there is a problem with your code which you need help with but did not ask about, you should probably start a new question. </p>
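<p>For illustration, here is a minimal sketch of that merge. The helper names below (<code>bow_features</code>, <code>extra_features</code>, <code>combined_features</code>) are hypothetical stand-ins for the question's <code>find_features</code> and the planned part-of-speech/word-length features:</p>

```python
def bow_features(document, feature_list):
    # Bag-of-words presence features, as in the question's find_features()
    words = set(document)
    return {w: (w in words) for w in feature_list}

def extra_features(document):
    # Hand-crafted features: average word length and token count
    n = max(len(document), 1)
    return {
        "avg_word_length": sum(len(w) for w in document) / n,
        "num_tokens": len(document),
    }

def combined_features(document, feature_list):
    # NLTK classifiers take one dict per instance, so just merge the two
    feats = bow_features(document, feature_list)
    feats.update(extra_features(document))
    return feats

doc = ["this", "forum", "is", "good"]
print(combined_features(doc, ["good", "bad"]))
```

<p>The resulting dictionaries can then be paired with labels exactly as before, e.g. <code>[(combined_features(rev, feature_list), category) for (rev, category) in documents]</code>.</p>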
| 1,297
|
text classification
|
TensorflowJS text/string classification
|
https://stackoverflow.com/questions/51698131/tensorflowjs-text-string-classification
|
<h2>Subject</h2>
<p>Hello. I want to implement a text classification feature using <strong>Tensorflow.js in <code>NodeJS</code></strong>.
<br/> Its job will be to match a string against some pre-defined topics.</p>
<h2>Examples:</h2>
<p><strong>Input</strong>: <em><code>String</code></em>: "My dog loves walking on the beach" <br/>
<strong>Pre-defined topcics</strong>: <em><code>Array<String></code></em>: <strong><code>["dog", "cat", "cow"]</code></strong> <br/>
<strong>Output</strong>: There are many output variants I am comfortable with. These are some examples, but if you can suggest better ones, do it!</p>
<ul>
<li><code>String</code> <em>(the most likely topic)</em> - Example: "dog"</li>
<li><code>Object</code> <em>(every topic with a predicted score)</em> <br/> Example: <code>{"dog": 0.9, "cat": 0.08, "cow": 0.02}</code></li>
</ul>
<h2>Research</h2>
<p>I know similar results can be achieved by filtering the strings for the topic names and applying some algorithms, but they can also be achieved with ML.</p>
<p>There were already some posts about using strings, classifying text and creating autocomplete with TensorFlow <em>(but not sure about <code>TFjs</code>)</em>, like these:</p>
<ul>
<li><a href="https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub" rel="noreferrer">https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub</a></li>
<li><a href="http://ruder.io/text-classification-tensorflow-estimators/" rel="noreferrer">http://ruder.io/text-classification-tensorflow-estimators/</a></li>
<li><a href="https://machinelearnings.co/tensorflow-text-classification-615198df9231" rel="noreferrer">https://machinelearnings.co/tensorflow-text-classification-615198df9231</a></li>
</ul>
<h2>How you can help</h2>
<p>My goal is to do the topic prediction with <code>TensorflowJS</code>. I need just an example of the best way to train models with strings or how to classify text and then will extend the rest by myself.</p>
|
<p>Text classification has an added challenge, which is to first obtain vectors from words. There are various approaches depending on the nature of the problem being solved. Before building the model, one should make sure that vectors are associated with all the words of the corpus. A vector representation built directly from the corpus also suffers from sparsity, hence the need for <a href="https://en.wikipedia.org/wiki/Word_embedding" rel="nofollow noreferrer">word embedding</a>. The two most popular algorithms for this task are <a href="https://en.wikipedia.org/wiki/Word2vec" rel="nofollow noreferrer">Word2Vec</a> and <a href="https://en.wikipedia.org/wiki/GloVe_(machine_learning)" rel="nofollow noreferrer">GloVe</a>. There are some implementations in js. Alternatively, one can create vectors using a bag of words as outlined <a href="https://stackoverflow.com/questions/51663068/tensorflow-js-tokenizer/51664311#51664311">here</a>.</p>
<p>Once the vectors are available, a fully connected neural network (FCNN) will suffice to predict the topic of a text. Another thing to take into consideration is the length of the text: in case a text is too short, it can be padded, etc. Here is a model:</p>
<pre><code>const model = tf.sequential();
model.add(tf.layers.dense({units: 100, activation: 'relu', inputShape: [lengthSentence]}));
model.add(tf.layers.dense({units: numTopics, activation: 'softmax'}));
model.compile({optimizer: 'sgd', loss: 'categoricalCrossentropy'});
</code></pre>
<p><strong>Key Takeaways of the model</strong></p>
<p>The model simply connects the input to the categorical output. It is a very simple model. But in some scenarios, adding an embedding layer after the input layer can be considered.</p>
<pre><code>model.add(tf.layers.embedding({inputDim: inputDimSize, inputLength: lengthSentence, outputDim: embeddingDims}))
</code></pre>
<p>In some other case, an <a href="https://js.tensorflow.org/api/latest/#layers.lstm" rel="nofollow noreferrer">LSTM</a> layer can be relevant </p>
<pre><code>tf.layers.lstm({units: lstmUnits, returnSequences: true})
</code></pre>
| 1,298
|
text classification
|
Text classification and most informative features
|
https://stackoverflow.com/questions/73539045/text-classification-and-most-informative-features
|
<p>I'm trying to do some text classification and understand which are the most informative features the model uses.</p>
<p>I get the accuracy of the model, but when it comes to understanding the feature names it only shows an empty string <code>''</code>:</p>
<pre><code>accuracy: 0.38405797101449274
Top 10 features used to predict:
Class 1 best:
(0.006863555955628847, '')
Class 2 best:
(0.006863555955628847, '')
</code></pre>
<p>If I call the feature names by themselves the result is also empty, but I don't understand where in the function I've made the mistake.</p>
<p>Those are the Def created for cleaning</p>
<pre><code>STOPLIST = stopwords.words('italian')
SYMBOLS = " ".join(string.punctuation).split(" ") + ["-", "...", "”", "”"]
class CleanTextTransformer(TransformerMixin):
def transform(self, X, **transform_params):
return [cleanText(text) for text in X]
def fit(self, X, y=None, **fit_params):
return self
def get_params(self, deep=True):
return {}
def cleanText(text):
text = text.strip().replace("\n", " ").replace("\r", " ")
text = text.lower()
return text
</code></pre>
<p>Those are the Def created for tokenizing</p>
<pre><code>
def tokenizeText(sample):
tokens = parser(sample)
lemmas = []
for tok in tokens:
lemmas.append(tok.lemma_.lower().strip() if tok.lemma_ != "-PRON-" else tok.lower_)
tokens = lemmas
tokens = [tok for tok in tokens if tok not in STOPLIST]
tokens = [tok for tok in tokens if tok not in SYMBOLS]
return tokens
</code></pre>
<p>and the one for the Most Informative features</p>
<pre><code>
def printNMostInformative(vectorizer, clf, N):
feature_names = vectorizer.get_feature_names()
coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
topClass1 = coefs_with_fns[:N]
topClass2 = coefs_with_fns[:-(N + 1):-1]
print("Class 1 best: ")
for feat in topClass1:
print(feat)
print("Class 2 best: ")
for feat in topClass2:
print(feat)
</code></pre>
<p>Then the classic Pipeline with the LinearSVC model (i wanted to try more but i'm trying to understand which one perform better).</p>
<pre><code># data
train1 = train['Column Interested'].tolist()
labelsTrain1 = train['Column Interested'].tolist()
test1 = test['Column Interested'].tolist()
labelsTest1 = test['Column Interested'].tolist()
vectorizer = CountVectorizer(tokenizer=tokenizeText, ngram_range=(1,1))
clf = LinearSVC()
pipe = Pipeline([('cleanText', CleanTextTransformer()), ('vectorizer', vectorizer), ('clf', clf)])
# train
pipe.fit(train1, labelsTrain1)
# test
preds = pipe.predict(test1)
print("accuracy:", accuracy_score(labelsTest1, preds))
print("Top 10 features used to predict: ")
printNMostInformative(vectorizer, clf, 10)
</code></pre>
<p>For reference, there is a very similar use of this "N most informative" function in this tutorial, which I adapted to my data:
<a href="https://towardsdatascience.com/machine-learning-for-text-classification-using-spacy-in-python-b276b4051a49" rel="nofollow noreferrer">https://towardsdatascience.com/machine-learning-for-text-classification-using-spacy-in-python-b276b4051a49</a></p>
| 1,299
|
|
machine translation
|
Neural Machine Translation
|
https://stackoverflow.com/questions/74148090/neural-machine-translation
|
<p>I am doing a project about Neural Machine Translation. In the data processing step, is it necessary to pad the sentences so that they are of equal length?</p>
| 1,300
|
|
machine translation
|
Vocabulary scale of machine translation
|
https://stackoverflow.com/questions/64906445/vocabulary-scale-of-machine-translation
|
<p>When doing machine translation, if you segment words, for example using BPE, how big is the resulting vocabulary?</p>
|
<p>The BPE algorithm starts with a list of characters in the data and iteratively merges the most frequent symbol pairs. If the algorithm had no stopping criterion, you would end up with a vocabulary that covers all words from your training data + all characters + all merges in between the characters and the words.</p>
<p>The reason for using BPE is that we just cannot afford to use a vocabulary that contains all words from the training data: it can easily be millions of word forms. When using BPE, you thus need to say in advance how many merges you want to have. Typically, the number of merges is 20–50k. It ensures that most frequent words remain untouched whereas the less frequent words get split into smaller units. The resulting vocabulary size is then the number of merges + the original alphabet size.</p>
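<p>The iterative merging can be sketched in a few lines of Python (a simplified illustration of one merge step, not a production BPE implementation; words are represented as space-separated symbols with their corpus frequencies):</p>

```python
from collections import Counter

def most_frequent_pair(tokenized_words):
    # Count adjacent symbol pairs over the whole corpus
    pairs = Counter()
    for word, freq in tokenized_words.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(pair, tokenized_words):
    # Replace every occurrence of the pair with the merged symbol
    # (naive string replace; real implementations track symbol boundaries)
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq
            for word, freq in tokenized_words.items()}

corpus = {"l o w": 5, "l o w e r": 2, "n e w e r": 6, "w i d e r": 3}
pair = most_frequent_pair(corpus)   # ('e', 'r'), seen 11 times
corpus = merge_pair(pair, corpus)   # {'l o w': 5, 'l o w er': 2, 'n e w er': 6, 'w i d er': 3}
```

<p>Running this loop for, say, 30k iterations yields the merge list; the final vocabulary size is then roughly the number of merges plus the alphabet size.</p>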
| 1,301
|
machine translation
|
Google Translator Toolkit machine-translation issues
|
https://stackoverflow.com/questions/33403481/google-translator-toolkit-machine-translation-issues
|
<p>I am using python 2.7 & django 1.7.</p>
<p>When I use <a href="https://translate.google.com/toolkit/" rel="nofollow">Google Translator Toolkit</a> to machine translate my .po files to another language (English to German), there are many errors due to the use of different django template variables in my translation tags.</p>
<p>I understand that machine translation is not so great, but I am wanting to only test my translation strings on my test pages.</p>
<p>Here is an example of a typical error of the machine-translated .po file translated from English (en) to German (de).</p>
<pre><code>#. Translators: {{ site_name_lowercase }} is a variable that does not require translation.
#: .\templates\users\reset_password_email_html.txt:47
#: .\templates\users\reset_password_email_txt.txt:18
#, python-format
msgid ""
"Once you've returned to %(site_name_lowercase)s.com, we will give you "
"instructions to reset your password."
msgstr "Sobald du mit% (site_name_lowercase) s.com zurückgegeben haben, geben wir Ihnen Anweisungen, um Ihr Passwort zurückzusetzen."
</code></pre>
<p>The <code>%(site_name_lowercase)s</code> is machine translated to <code>% (site_name_lowercase) s</code> and is often concatenated to the preceding word, as shown above.</p>
<p>I have hundreds of these type of errors and I estimate that a find & replace would take at least 7 hours. Plus if I makemessages and then translate the .po file again I would have to go through the find and replace again.</p>
<p>I am hoping that there is some type of undocumented rule in the <a href="https://translate.google.com/toolkit/" rel="nofollow">Google Translator Toolkit</a> that will allow the machine-translation to ignore the variables. I have read the <a href="https://translate.google.com/toolkit/" rel="nofollow">Google Translator Toolkit</a> docs and searched SO & Google, but did not find anything that would assist me.</p>
<p><strong>Does anyone have any suggestions?</strong></p>
|
<blockquote>
<p>The %(site_name_lowercase)s is machine translated to % (site_name_lowercase) s and is often concatenated to the preceding word, as shown above.</p>
</blockquote>
<p>This is caused by tokenization prior to translation, followed by detokenization after translation, i.e. Google Translate tries to split the input before translation to re-merge it after translation. The variables you use are typically composed of characters that are used by tokenizers to detect token boundaries. To avoid this sort of problem, you can pre-process your file and replace the offending variables by placeholders that do not have this issue - I suggest you try out a couple of things, e.g. _VAR_PLACE_HOLDER_. It is important that you do not use any punctuation characters that may cause the tokenizer to split. After pre-processing, translate your newly generated file and post-process by replacing the placeholders by their original value. Typically, your placeholder will be picked up as an Out-Of-Vocabulary (OOV) item and it will be preserved during translation. Try to experiment with including a sequence number (to keep track of your placeholders during post-processing), since word reordering may occur. There used to be a scientific API for Google Translate that gives you the token alignments. You could use these for post-processing as well.</p>
<p>Note that this procedure will not give you the best translation output possible, as the language model will not recognize the placeholder. You can see this illustrated here (with placeholder, the token "gelezen" is in the wrong place):</p>
<p><a href="https://translate.google.com/#en/nl/I%20have%20read%20SOME_VARIABLE_1%20some%20time%20ago%0AI%20have%20read%20a%20book%20some%20time%20ago" rel="nofollow">https://translate.google.com/#en/nl/I%20have%20read%20SOME_VARIABLE_1%20some%20time%20ago%0AI%20have%20read%20a%20book%20some%20time%20ago</a></p>
<p>If you just want to test the system for your variables, and you do not care about the translation quality, this is the fastest way to go. </p>
<p>Should you decide to go for a better solution, you can solve this issue yourself by developing your own machine translation system (it's fun, by the way, see <a href="http://www.statmt.org/moses/" rel="nofollow">http://www.statmt.org/moses/</a>) and apply the procedure explained above, but then with, for example Part-Of-Speech-Tags to improve the language model. Note that you can use the alignment information as well.</p>
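<p>For the pre-/post-processing step described above, here is a minimal sketch in Python. The placeholder format <code>XVAR0X</code> is an arbitrary choice of mine; anything alphanumeric and punctuation-free should survive tokenization:</p>

```python
import re

# Matches python-format variables such as %(site_name_lowercase)s
VAR_RE = re.compile(r"%\([^)]+\)s")

def protect_variables(text):
    # Swap each variable for a numbered, punctuation-free placeholder
    # that the translator should treat as an out-of-vocabulary token
    variables = []
    def repl(match):
        variables.append(match.group(0))
        return "XVAR%dX" % (len(variables) - 1)
    return VAR_RE.sub(repl, text), variables

def restore_variables(text, variables):
    # Put the original variables back after translation
    for i, var in enumerate(variables):
        text = text.replace("XVAR%dX" % i, var)
    return text

msg = "Once you've returned to %(site_name_lowercase)s.com, we will help you."
protected, variables = protect_variables(msg)
# protected == "Once you've returned to XVAR0X.com, we will help you."
# ... send `protected` through the translator, then restore:
assert restore_variables(protected, variables) == msg
```

<p>The sequence number in the placeholder is what lets you recover the mapping even if the translator reorders the sentence.</p>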
| 1,302
|
machine translation
|
Collecting data for machine translation
|
https://stackoverflow.com/questions/64023394/collecting-data-for-machine-translation
|
<p>I am interested in trying to build machine translation for language accents and am curious about methods available to collect data, or about how to make your own corpus with unlimited resources. Any good references I could refer to, or ideas?</p>
|
<p>There are lots of open corpora you may wish to look at, many of which are collated here on the <a href="http://opus.nlpl.eu/" rel="nofollow noreferrer">The Open Parallel Corpus (OPUS)</a>, to seed some data for your exercise.</p>
<p>In terms of building and collecting your own, you could consider <a href="https://www.mturk.com/" rel="nofollow noreferrer">Amazon Mechanical Turk</a>, or doing generation with something like <a href="https://github.com/snorkel-team/snorkel" rel="nofollow noreferrer">Snorkel</a>.</p>
| 1,303
|
machine translation
|
Open Source Machine Translation Engines?
|
https://stackoverflow.com/questions/11302342/open-source-machine-translation-engines
|
<p>We're looking for an open source Machine Translation Engine that could be incorporated into our localization workflow. We're looking at the options below:</p>
<ol>
<li><a href="http://www.statmt.org/moses/">Moses</a> (C++)</li>
<li><a href="http://www.computing.dcu.ie/~mforcada/fosmt.html">Joshua</a> (Java)</li>
<li><a href="http://www.stanford.edu/~jurafsky/phrasal_main_final.pdf">Phrasal</a> (Java)</li>
</ol>
<p>Among these, Moses has the widest community support and has been tried out by many localization companies and researchers. We are actually leaning towards a Java-based engine since our applications are all in Java. Have any of you used either Joshua or Phrasal as part of your workflow. Could you please share your experiences with them? Or, is Moses way too far ahead of these in terms of the features it provides and ease of integration.</p>
<p>And, we require that the engine supports:</p>
<ol>
<li>Domain-specific training (i.e. it should maintain separate phrase tables for each domain that the input data belongs).</li>
<li>Incremental training (i.e. avoiding having to retrain the model from scratch every time we wish to use some new training data).</li>
<li>Parallelizing the translation process.</li>
</ol>
|
<p>This question is better asked on the Moses mailing list (moses-support@mit.edu), I think. There are lots of people there working with different types of systems, so you'll get an objective answer. Apart from that, here's my input:</p>
<ul>
<li>With respect to Java: it does not matter in which language the MT system is written. No offense, but you may safely assume that even if the code was written in a language you were familiar with, it would be too difficult to understand without a deeper knowledge of MT. So what you are looking for are interfaces. Moses's xml-rpc works fine.</li>
<li>With respect to MT systems: look for the best results, ignore the programming language it is written in. Results are here: <a href="http://matrix.statmt.org">matrix.statmt.org</a>. The people using your MT system are interested in output not in your coding preferences.</li>
<li>With respect to the whole venture: once you start offering MT output, make sure you can adapt it quickly. MT is rapidly shifting towards a pipeline process in which an MT system is the core (and not the only) component. So focus on maintainability. In the ideal case, you would be able to connect any MT system to your framework.</li>
</ul>
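<p>To make the "interfaces" point concrete: a Java (or any other) application can talk to a running Moses server over XML-RPC without touching the C++ code. A minimal sketch in Python; the port and method shape follow Moses's <code>mosesserver</code>, but treat the exact endpoint and flags as assumptions to verify against your installation:</p>

```python
import xmlrpc.client

# Hypothetical endpoint; assumes a server started with something like:
#   mosesserver -f moses.ini --server-port 8080
MOSES_URL = "http://localhost:8080/RPC2"

def translate(text, url=MOSES_URL):
    # mosesserver exposes a `translate` method that takes and
    # returns a struct with a "text" field
    proxy = xmlrpc.client.ServerProxy(url)
    return proxy.translate({"text": text})["text"]
```

<p>Because the integration point is a network protocol rather than a library, swapping in a different MT engine later only requires reimplementing this thin client, which supports the maintainability advice above.</p>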
<p>And here's some input on your feature requests:</p>
<ul>
<li>Domain-specific training: you don't need that feature. You get the best MT results by using customer specific data training.</li>
<li>Incremental training: see <a href="http://homepages.inf.ed.ac.uk/miles/phd-projects/levenberg.pdf">Stream Based Statistical Machine Translation</a></li>
<li>Parallelizing the translation process: you will have to implement this yourself. Note that most MT software is purely academic and will never reach a 1.0 milestone. It helps of course if a multi-threaded server is available (Moses), but even then, you will need lots of harnessing code.</li>
</ul>
<p>Hope this helps. Feel free to PM me if you have any more questions.</p>
| 1,304
|
machine translation
|
Tensorflow - Decoder for Machine Translation
|
https://stackoverflow.com/questions/66205290/tensorflow-decoder-for-machine-translation
|
<p>I am going through <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention#training" rel="nofollow noreferrer">Tensorflow's tutorial</a> on Neural Machine Translation using Attention mechanism.</p>
<p>It has the following code for the Decoder :</p>
<pre><code>class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
</code></pre>
<p>What I don't understand here is that, the GRU cell of the decoder is not connected to the encoder by initializing it with the last hidden state of the encoder.</p>
<pre><code>output, state = self.gru(x)
# Why is it not initialized with the hidden state of the encoder ?
</code></pre>
<p>As per my understanding, there is a connection between the encoder and decoder only when the decoder is initialized with the "thought vector", i.e. the last hidden state of the encoder.</p>
<p>Why is that missing in Tensorflow's official tutorial? Is it a bug? Or am I missing something here?</p>
<p>Could someone help me understand?</p>
|
<p>This is very well summarized by this <a href="https://github.com/tensorflow/nmt" rel="nofollow noreferrer">detailed NMT guide</a>, which compares the classic seq2seq NMT against the encoder-decoder attention-based NMT architectures.</p>
<blockquote>
<p><strong>Vanilla seq2seq:</strong> The decoder also needs to have access to the source information, and one simple way to achieve that is to initialize it with the last hidden state of the encoder, encoder_state.</p>
</blockquote>
<blockquote>
<p><strong>Attention-based encoder-decoder:</strong> Remember that in the vanilla seq2seq model, we pass the last source state from the encoder to the decoder when starting the decoding process. This works well for short and medium-length sentences; however, for long sentences, the single fixed-size hidden state becomes an information bottleneck. Instead of discarding all of the hidden states computed in the source RNN, the attention mechanism provides an approach that allows the decoder to peek at them (treating them as a dynamic memory of the source information). By doing so, the attention mechanism improves the translation of longer sentences.</p>
</blockquote>
<p>In both cases, you can use <em>teacher forcing</em> to better train the model.</p>
<p>TLDR; the attention mechanism is what helps the decoder "peak" into the encoder instead of you explicitly passing what the encoder is doing to the decoder.</p>
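<p>For contrast, the vanilla seq2seq wiring would be a one-line change in the tutorial's decoder: pass the encoder's final hidden state as the GRU's initial state. A toy-sized sketch (not part of the official tutorial; the shapes here are arbitrary):</p>

```python
import tensorflow as tf

gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True)

x = tf.random.normal((2, 1, 3))        # (batch, one decoder step, embedding dim)
enc_hidden = tf.random.normal((2, 4))  # encoder's last hidden state, (batch, units)

# Vanilla seq2seq: seed the decoder GRU with the encoder state,
# i.e. output, state = self.gru(x, initial_state=hidden) in the Decoder
output, state = gru(x, initial_state=enc_hidden)
print(output.shape, state.shape)  # (2, 1, 4) (2, 4)
```

<p>With Bahdanau attention in place, this initialization is optional: the context vector already carries source information at every decoding step, which is why the tutorial omits it.</p>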
| 1,305
|
machine translation
|
Dialogflow language coverage and Google Neural Machine Translation
|
https://stackoverflow.com/questions/47414003/dialogflow-language-coverage-and-google-neural-machine-translation
|
<p>Google started “Google Neural Machine Translation” system in 2016 and improved the efficiency of translation of some languages tremendously. These languages include German, French, Spanish, Portuguese, Chinese, Japanese, Turkish and Korean; as they said.</p>
<p>However, Dialogflow supports all of these languages except Turkish. Is there any specific reason for this exclusion?</p>
|
<p>Dialogflow does not handle translation, so Google's work in this area doesn't apply. </p>
<p>What Dialogflow <em>does</em> do that is language related is let you create sample phrases in a given language, and it can handle other, similar phrases that a user might use in conversation. So it isn't about translating a language; it is about natural language parsing and understanding.</p>
| 1,306
|
machine translation
|
How to fine-tune a Mistral-7B model for machine translation?
|
https://stackoverflow.com/questions/78156752/how-to-fine-tune-a-mistral-7b-model-for-machine-translation
|
<p>There are a lot of tutorials online that use raw text, affixed with arcane syntax to indicate document boundaries, accessed through the Huggingface <code>datasets.Dataset</code> object's <code>text</code> key. E.g.</p>
<pre><code>from datasets import load_dataset
dataset_name = "mlabonne/guanaco-llama2-1k"
dataset = load_dataset(dataset_name, split="train")
dataset["text"][42]
</code></pre>
<p>[out]:</p>
<pre><code><s>[INST] ¿Cuáles son los actuales presidentes de la región de Sur América? Enumérelos en una lista con su respectivo país. [/INST] A fecha del 13 de febrero de 2023, estos son los presidentes de los países de Sudamérica, según Wikipedia:
-Argentina: Alberto Fernández
-Bolivia: Luis Arce
-Brasil: Luiz Inácio Lula da Silva
-Chile: Gabriel Boric
-Colombia: Gustavo Petro
-Ecuador: Guillermo Lasso
-Paraguay: Mario Abdo Benítez
-Perú: Dina Boluarte
-Uruguay: Luis Lacalle Pou
-Venezuela: Nicolás Maduro
-Guyana: Irfaan Ali
-Surinam: Chan Santokhi
-Trinidad y Tobago: Paula-Mae Weekes </s>
</code></pre>
<p>But machine translation datasets are usually structured in 2 parts, source and target text with <code>sentence_eng_Latn</code> and <code>sentence_deu_Latn</code> keys, e.g.</p>
<pre><code>
valid_data = load_dataset("facebook/flores", "eng_Latn-deu_Latn", streaming=False,
split="dev")
valid_data[42]
</code></pre>
<p>[out]:</p>
<pre><code>{'id': 43,
'URL': 'https://en.wikinews.org/wiki/Hurricane_Fred_churns_the_Atlantic',
'domain': 'wikinews',
'topic': 'disaster',
'has_image': 0,
'has_hyperlink': 0,
'sentence_eng_Latn': 'The storm, situated about 645 miles (1040 km) west of the Cape Verde islands, is likely to dissipate before threatening any land areas, forecasters say.',
'sentence_deu_Latn': 'Prognostiker sagen, dass sich der Sturm, der etwa 645 Meilen (1040 km) westlich der Kapverdischen Inseln befindet, wahrscheinlich auflösen wird, bevor er Landflächen bedroht.'}
</code></pre>
<h3>How to fine-tune a Mistral-7b model for the machine translation task?</h3>
|
<p>The key is to re-format the data: a traditional machine translation dataset splits the source and target text, so you need to piece the two together in the format that the model expects.</p>
<p>For the Mistral 7B model specifically, it usually expects:</p>
<ul>
<li>each row of data would be encapsulated between <code>&lt;s&gt;</code> and <code>&lt;/s&gt;</code>, where
<ul>
<li>the input source sentence would be embedded between the <code>[INST] ... [/INST]</code></li>
<li>the output target sentence would be after the <code>[/INST]</code> symbol</li>
</ul>
</li>
<li>any pre-data prompts before the <code>[INST] ... [/INST]</code></li>
</ul>
<p>E.g. if we want to use a translation prompt as such <strong>"Translate English to German:"</strong>,</p>
<pre><code>valid_data = load_dataset("facebook/flores", "eng_Latn-deu_Latn", streaming=False, split="dev")
def preprocess_func(row):
return {'text': "Translate from English to German: <s>[INST] " + row['sentence_eng_Latn'] + " [/INST] " + row['sentence_deu_Latn'] + " </s>"}
valid_dataset = valid_data.map(preprocess_func)
valid_dataset[42]
</code></pre>
<p>[out]:</p>
<pre><code>{'id': 43,
'URL': 'https://en.wikinews.org/wiki/Hurricane_Fred_churns_the_Atlantic',
'domain': 'wikinews',
'topic': 'disaster',
'has_image': 0,
'has_hyperlink': 0,
'sentence_eng_Latn': 'The storm, situated about 645 miles (1040 km) west of the Cape Verde islands, is likely to dissipate before threatening any land areas, forecasters say.',
'sentence_deu_Latn': 'Prognostiker sagen, dass sich der Sturm, der etwa 645 Meilen (1040 km) westlich der Kapverdischen Inseln befindet, wahrscheinlich auflösen wird, bevor er Landflächen bedroht.',
'text': 'Translate from English to German: <s>[INST] The storm, situated about 645 miles (1040 km) west of the Cape Verde islands, is likely to dissipate before threatening any land areas, forecasters say. [/INST] Prognostiker sagen, dass sich der Sturm, der etwa 645 Meilen (1040 km) westlich der Kapverdischen Inseln befindet, wahrscheinlich auflösen wird, bevor er Landflächen bedroht. </s>'}
</code></pre>
<hr />
<p>Then the normal fine-tuning Mistral-7b scripts could just read the <code>text</code> key in the dataset, e.g.</p>
<p>Requires</p>
<pre><code>!pip install -U transformers sentencepiece datasets
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
!pip install -U peft
!pip install -U trl
</code></pre>
<p>And if you are in a Jupyter environment, you'll need to reset the kernel after installing accelerate, so:</p>
<pre><code>import os
os._exit(00)
</code></pre>
<p>Then:</p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,HfArgumentParser,TrainingArguments,pipeline, logging
from peft import LoraConfig, PeftModel, prepare_model_for_kbit_training, get_peft_model
import os,torch
from datasets import load_dataset
from trl import SFTTrainer
base_model = "mistralai/Mistral-7B-Instruct-v0.2"
new_model = "mistral_7b_flores_dev_en_de"
bnb_config = BitsAndBytesConfig(
load_in_4bit= True,
bnb_4bit_quant_type= "nf4",
bnb_4bit_compute_dtype= torch.bfloat16,
bnb_4bit_use_double_quant= False,
)
model = AutoModelForCausalLM.from_pretrained(
base_model,
quantization_config=bnb_config,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
)
model.config.use_cache = False
model.config.pretraining_tp = 1
model.gradient_checkpointing_enable()
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.padding_side = 'right'
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_eos_token = True
tokenizer.add_bos_token, tokenizer.add_eos_token
valid_data = load_dataset("facebook/flores", "eng_Latn-deu_Latn", streaming=False,
split="dev")
test_data = load_dataset("facebook/flores", "eng_Latn-deu_Latn", streaming=False,
split="devtest")
def preprocess_func(row):
    return {'text': "Translate from English to German: <s>[INST] " + row['sentence_eng_Latn'] + " [/INST] " + row['sentence_deu_Latn'] + " </s>"}
valid_dataset = valid_data.map(preprocess_func)
test_dataset = test_data.map(preprocess_func)
model = prepare_model_for_kbit_training(model)
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
target_modules=["q_proj", "k_proj", "v_proj", "o_proj","gate_proj"]
)
model = get_peft_model(model, peft_config)
training_arguments = TrainingArguments(
output_dir="./results",
num_train_epochs=1,
per_device_train_batch_size=4,
gradient_accumulation_steps=1,
optim="paged_adamw_32bit",
save_steps=25,
logging_steps=25,
learning_rate=2e-4,
weight_decay=0.001,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant",
report_to=None
)
trainer = SFTTrainer(
model=model,
train_dataset=valid_dataset,
peft_config=peft_config,
max_seq_length= None,
dataset_text_field="text",
tokenizer=tokenizer,
args=training_arguments,
packing= False,
)
trainer.train()
</code></pre>
| 1,307
|
machine translation
|
Phrase extraction algorithm for statistical machine translation
|
https://stackoverflow.com/questions/25109001/phrase-extraction-algorithm-for-statistical-machine-translation
|
<p>I have written the following code with the phrase extraction algorithm for SMT.</p>
<p><a href="https://github.com/alvations/nltk/blob/develop/nltk/align/phrase_based.py" rel="noreferrer">GitHub</a></p>
<pre><code># -*- coding: utf-8 -*-

def phrase_extraction(srctext, trgtext, alignment):
    """
    Phrase extraction algorithm.
    """
    def extract(f_start, f_end, e_start, e_end):
        phrases = set()
        # return { } if f end == 0
        if f_end == 0:
            return
        # for all (e,f) ∈ A do
        for e,f in alignment:
            # return { } if e < e start or e > e end
            if e < e_start or e > e_end:
                return
        fs = f_start
        # repeat-
        while True:
            fe = f_end
            # repeat-
            while True:
                # add phrase pair ( e start .. e end , f s .. f e ) to set E
                trg_phrase = " ".join(trgtext[i] for i in range(fs,fe))
                src_phrase = " ".join(srctext[i] for i in range(e_start,e_end))
                phrases.add("\t".join([src_phrase, trg_phrase]))
                fe+=1 # fe++
                # -until fe aligned
                if fe in f_aligned or fe > trglen:
                    break
            fs-=1 # fe--
            # -until fs aligned
            if fs in f_aligned or fs < 0:
                break
        return phrases

    # Calculate no. of tokens in source and target texts.
    srctext = srctext.split()
    trgtext = trgtext.split()
    srclen = len(srctext)
    trglen = len(trgtext)
    # Keeps an index of which source/target words are aligned.
    e_aligned = [i for i,_ in alignment]
    f_aligned = [j for _,j in alignment]

    bp = set() # set of phrase pairs BP
    # for e start = 1 ... length(e) do
    for e_start in range(srclen):
        # for e end = e start ... length(e) do
        for e_end in range(e_start, srclen):
            # // find the minimally matching foreign phrase
            # (f start , f end ) = ( length(f), 0 )
            f_start, f_end = trglen, 0
            # for all (e,f) ∈ A do
            for e,f in alignment:
                # if e start ≤ e ≤ e end then
                if e_start <= e <= e_end:
                    f_start = min(f, f_start)
                    f_end = max(f, f_end)
            # add extract (f start , f end , e start , e end ) to set BP
            phrases = extract(f_start, f_end, e_start, e_end)
            if phrases:
                bp.update(phrases)
    return bp

srctext = "michael assumes that he will stay in the house"
trgtext = "michael geht davon aus , dass er im haus bleibt"
alignment = [(0,0), (1,1), (1,2), (1,3), (2,5), (3,6), (4,9), (5,9), (6,7), (7,7), (8,8)]

phrases = phrase_extraction(srctext, trgtext, alignment)
for i in phrases:
    print i
</code></pre>
<p>The phrase extraction algorithm from Philip Koehn's <em>Statistical Machine Translation</em> book, page 133 is as such:</p>
<p><img src="https://i.sstatic.net/0j556.png" alt="enter image description here"></p>
<p>And the desired output should be:</p>
<p><img src="https://i.sstatic.net/qfR2m.png" alt="enter image description here"></p>
<p>However with my code, I am only able to get these output:</p>
<blockquote>
<p>michael assumes that he will stay in the - michael geht davon aus ,
dass er im haus</p>
<p>michael assumes that he will stay in the - michael geht davon aus ,
dass er im haus bleibt</p>
</blockquote>
<p><strong>Does anyone spot what is wrong with my implementation?</strong></p>
<p>The code does extract phrases but it's not the complete desired output as shown with the translation table above:</p>
<p><img src="https://i.sstatic.net/E21Xq.png" alt="enter image description here"></p>
|
<p>So I have had a look at this problem and can now reproduce the desired output. Turns out there are a number of problems:</p>
<ul>
<li>The algorithm is not quite complete. In the online version of the book (3rd printing 2012) line 4 of the extract function has been updated. (Maybe there is an erratum.)</li>
<li>The algorithm assumes indexing starting from 1 up to and <em>including</em> the end.</li>
<li>Python assumes 0 based indexing up to but <em>not including</em> the end.</li>
<li>In particular this affects the stopping values of some calls to range and a number of comparisons. </li>
<li>The items in the set have been modified to allow for easier reproduction of the desired output.</li>
</ul>
<p>Example output (19 distinct source phrases, 5 of them with two alternative translations, i.e. 24 extracted phrase pairs in total):</p>
<pre><code>$ python2.7 phrase_extract_new.py
( 1) (0, 1) michael — michael
( 2) (0, 2) michael assumes — michael geht davon aus ; michael geht davon aus ,
( 3) (0, 3) michael assumes that — michael geht davon aus , dass
( 4) (0, 4) michael assumes that he — michael geht davon aus , dass er
( 5) (0, 9) michael assumes that he will stay in the house — michael geht davon aus , dass er im haus bleibt
( 6) (1, 2) assumes — geht davon aus ; geht davon aus ,
( 7) (1, 3) assumes that — geht davon aus , dass
( 8) (1, 4) assumes that he — geht davon aus , dass er
( 9) (1, 9) assumes that he will stay in the house — geht davon aus , dass er im haus bleibt
(10) (2, 3) that — dass ; , dass
(11) (2, 4) that he — dass er ; , dass er
(12) (2, 9) that he will stay in the house — dass er im haus bleibt ; , dass er im haus bleibt
(13) (3, 4) he — er
(14) (3, 9) he will stay in the house — er im haus bleibt
(15) (4, 6) will stay — bleibt
(16) (4, 9) will stay in the house — im haus bleibt
(17) (6, 8) in the — im
(18) (6, 9) in the house — im haus
(19) (8, 9) house — haus
$ python2.7 phrase_extract_new.py | grep -c ';'
5
</code></pre>
<p>Below follows the suggested interpretation of the algorithm. This algorithm needs to be refactored quite a bit, but before that more testing is needed with different examples. Reproducing the example in the book is just a start:</p>
<pre><code># -*- coding: utf-8 -*-

def phrase_extraction(srctext, trgtext, alignment):
    """
    Phrase extraction algorithm.
    """
    def extract(f_start, f_end, e_start, e_end):
        if f_end < 0:  # 0-based indexing.
            return {}
        # Check if alignement points are consistent.
        for e,f in alignment:
            if ((f_start <= f <= f_end) and
                (e < e_start or e > e_end)):
                return {}

        # Add phrase pairs (incl. additional unaligned f)
        # Remark: how to interpret "additional unaligned f"?
        phrases = set()
        fs = f_start
        # repeat-
        while True:
            fe = f_end
            # repeat-
            while True:
                # add phrase pair ([e_start, e_end], [fs, fe]) to set E
                # Need to +1 in range to include the end-point.
                src_phrase = " ".join(srctext[i] for i in range(e_start,e_end+1))
                trg_phrase = " ".join(trgtext[i] for i in range(fs,fe+1))
                # Include more data for later ordering.
                phrases.add(((e_start, e_end+1), src_phrase, trg_phrase))
                fe += 1 # fe++
                # -until fe aligned or out-of-bounds
                if fe in f_aligned or fe == trglen:
                    break
            fs -= 1 # fs--
            # -until fs aligned or out-of-bounds
            if fs in f_aligned or fs < 0:
                break
        return phrases

    # Calculate no. of tokens in source and target texts.
    srctext = srctext.split()  # e
    trgtext = trgtext.split()  # f
    srclen = len(srctext)      # len(e)
    trglen = len(trgtext)      # len(f)
    # Keeps an index of which source/target words are aligned.
    e_aligned = [i for i,_ in alignment]
    f_aligned = [j for _,j in alignment]

    bp = set() # set of phrase pairs BP
    # for e start = 1 ... length(e) do
    # Index e_start from 0 to len(e) - 1
    for e_start in range(srclen):
        # for e end = e start ... length(e) do
        # Index e_end from e_start to len(e) - 1
        for e_end in range(e_start, srclen):
            # // find the minimally matching foreign phrase
            # (f start , f end ) = ( length(f), 0 )
            # f_start ∈ [0, len(f) - 1]; f_end ∈ [0, len(f) - 1]
            f_start, f_end = trglen-1 , -1  # 0-based indexing
            # for all (e,f) ∈ A do
            for e,f in alignment:
                # if e start ≤ e ≤ e end then
                if e_start <= e <= e_end:
                    f_start = min(f, f_start)
                    f_end = max(f, f_end)
            # add extract (f start , f end , e start , e end ) to set BP
            phrases = extract(f_start, f_end, e_start, e_end)
            if phrases:
                bp.update(phrases)
    return bp

# Refer to match matrix.
#            0       1      2   3   4    5   6   7    8
srctext = "michael assumes that he will stay in the house"
#            0      1    2    3  4   5   6  7   8     9
trgtext = "michael geht davon aus , dass er im haus bleibt"
alignment = [(0,0), (1,1), (1,2), (1,3), (2,5), (3,6), (4,9), (5,9), (6,7), (7,7), (8,8)]

phrases = phrase_extraction(srctext, trgtext, alignment)

# Keep track of translations of each phrase in srctext and its
# alignement using a dictionary with keys as phrases and values being
# a list [e_alignement pair, [f_extractions, ...] ]
dlist = {}
for p, a, b in phrases:
    if a in dlist:
        dlist[a][1].append(b)
    else:
        dlist[a] = [p, [b]]

# Sort the list of translations based on their length. Shorter phrases first.
for v in dlist.values():
    v[1].sort(key=lambda x: len(x))

# Function to help sort according to book example.
def ordering(p):
    k,v = p
    return v[0]

for i, p in enumerate(sorted(dlist.items(), key = ordering), 1):
    k, v = p
    print "({0:2}) {1} {2} — {3}".format( i, v[0], k, " ; ".join(v[1]))
</code></pre>
<p>The final part where the output is prepared can probably be improved, and the algorithm code can certainly be made more Pythonic.</p>
| 1,308
|
machine translation
|
Sentence Indicating in Neural Machine Translation Tasks
|
https://stackoverflow.com/questions/66294861/sentence-indicating-in-neural-machine-translation-tasks
|
<p>I have seen many people working on neural machine translation. Usually, they wrap their sentences in tags such as <code><BOS> ... <EOS></code> or <code><START> ... <END></code> before training the network. Of course it is a logical solution to mark the start and end of sentences, but I wonder: how does the neural network understand that the string <code><END></code> (or similar) means the end of a sentence?</p>
|
<p>It doesn't.</p>
<p>At inference time, there's a hardcoded rule that if that token is generated, the sequence is done, and the underlying neural model will no longer be asked for the next token.</p>
<pre><code>source_seq = tokenize('This is not a test.')
print(source_seq)
</code></pre>
<p>At this point you'd get something like:</p>
<blockquote>
<p><code>[ '<BOS>', 'Thi###', ... , '###t', '.' , '<EOS>' ]</code></p>
</blockquote>
<p>Now we build the target sequence with the same format:</p>
<pre><code>target_seq = [ '<BOS>' ]
while True:
    token = model.generate_next_token(source_seq, target_seq)
    if token == '<EOS>':
        break
    target_seq.append(token)
</code></pre>
<p>The model itself only predicts the most likely next token given the current state (the input sequence and the output sequence so far).</p>
<p>It can't exit the loop any more than it can pull your machine's plug out of the wall.</p>
<p>Note that that's not the only hardcoded rule here. The other one is the decision to start from the first token and only ever append - never prepend, never delete... - like a human speaking.</p>
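<p>To make the hardcoded nature of that rule concrete, here is a tiny runnable sketch. There is no real model in it: <code>make_toy_model</code> just replays a canned token list, standing in for a network that scores candidate tokens, and all names are made up for illustration.</p>
<pre><code>```python
# Toy decoding loop: the EOS check lives in the loop, not in the "model".
def make_toy_model(canned_tokens):
    it = iter(canned_tokens)
    def generate_next_token(source_seq, target_seq):
        # A real model would score tokens given both sequences;
        # this stand-in just replays a fixed list.
        return next(it)
    return generate_next_token

def decode(source_seq, generate_next_token, max_len=50):
    target_seq = ['<BOS>']
    while len(target_seq) < max_len:
        token = generate_next_token(source_seq, target_seq)
        if token == '<EOS>':   # the hardcoded stopping rule, outside the model
            break
        target_seq.append(token)
    return target_seq

source = ['<BOS>', 'This', 'is', 'not', 'a', 'test', '.', '<EOS>']
model = make_toy_model(['Das', 'ist', 'kein', 'Test', '.', '<EOS>'])
print(decode(source, model))
# ['<BOS>', 'Das', 'ist', 'kein', 'Test', '.']
```</code></pre>
<p>The <code>max_len</code> cap mirrors another common hardcoded safeguard: decoding also stops if the model never emits <code><EOS></code>.</p>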
| 1,309
|
machine translation
|
Watson Machine Translation in Node Red Flow
|
https://stackoverflow.com/questions/31006912/watson-machine-translation-in-node-red-flow
|
<p>It seems there have been some changes in the usage of the Machine Translation service from within a Node-RED flow. There used to be the capability to configure, from within the node, which language to translate from and into. This has been changed. Can you help me understand where exactly to set this when the language has already been identified beforehand (msg.lang)?
Thanks</p>
|
<p>There are two ways to set the translation language for the Machine Translation node. </p>
<p>You can either...</p>
<ul>
<li>Select the language from the dropdown menu in the node configuration dialog. </li>
</ul>
<p><img src="https://i.sstatic.net/ozq8N.png" alt="node configuration"></p>
<ul>
<li>Pass the language translation type as the "lang" parameter on the "msg" object. </li>
</ul>
<p>Supported language types for the Machine Translation node are automatically registered at runtime. You must have the Watson Machine Translation Service bound to your Node-RED instance running on Bluemix. </p>
| 1,310
|
machine translation
|
How to use basic BLEU score in Asiya Machine Translation Evaluation toolkit?
|
https://stackoverflow.com/questions/30011056/how-to-use-basic-bleu-score-in-asiya-machine-translation-evaluation-toolkit
|
<p>Asiya is the machine translation evaluation toolkit to score machine translation outputs (<a href="http://asiya.lsi.upc.edu/" rel="nofollow">http://asiya.lsi.upc.edu/</a>). It is largely written in Perl.</p>
<p><strong>How do I use Asiya to perform BLEU metrics?</strong></p>
<p>I have followed the youtube introduction video: <a href="https://www.youtube.com/watch?v=rA5De9Z4uWI" rel="nofollow">https://www.youtube.com/watch?v=rA5De9Z4uWI</a></p>
<p>And created a config file (Asiya.config):</p>
<pre><code>input=raw
srclang=en
srccase=en
trglang=ja
trgcase=ja
src=corpus.tok/test.tok.en
ref=corpus.tok/hyp.tok.ja
sys=corpus.tok/test.tok.ja
some_metrics= BLEU NIST METEOR-ex Ol
</code></pre>
<p>My machine translation output file is in <code>corpus.tok/hyp.tok.ja</code>, the source file is in <code>corpus.tok/test.tok.en</code> and the reference file (correct translation) is at <code>corpus.tok/test.tok.ja</code>. They are tokenized plain text files, each line is a sentence.</p>
<p>And when I ran:</p>
<pre><code>/home/expert/asiya/bin/Asiya.pl -eval single -g all -metric_set some_metrics Asiya.config
</code></pre>
<p>I got this error:</p>
<pre><code>Smartmatch is experimental at /home/expert/asiya/bin/../lib/IQ/Common.pm line 784.
Smartmatch is experimental at /home/expert/asiya/bin/../lib/IQ/Common.pm line 791.
Smartmatch is experimental at /home/expert/asiya/bin/../lib/IQ/Scoring/ESA.pm line 339.
Use of uninitialized value in concatenation (.) or string at /home/expert/asiya/bin/../lib/IQ/Config.pm line 251.
[ASIYA] directory </tools> does not exist!
</code></pre>
<p>The tools directory does exists as a subdirectory in the current directory that I ran the command. <strong>What went wrong? Is there a parameter that I could add for the Asiya tools directory?</strong></p>
<p>How do I use Asiya to perform BLEU evaluation?</p>
<p><strong>If I don't use Asiya, how else can I get the BLEU score per sentence and system BLEU scores for my machine translation output?</strong></p>
<p>(more details on <a href="http://nlp.lsi.upc.edu/redmine/boards/11/topics/138" rel="nofollow">http://nlp.lsi.upc.edu/redmine/boards/11/topics/138</a>)</p>
| 1,311
|
|
machine translation
|
Using existing human translations to aid machine translation to a new language
|
https://stackoverflow.com/questions/26308109/using-existing-human-translations-to-aid-machine-translation-to-a-new-language
|
<p>In the past, my company has used human, professional translators to translate our software from English into some 13 languages. It's expensive but the quality is high.</p>
<p>The application we're translating contains industry jargon. It also contains a lot of sentence fragments and single words which, out of context, are unlikely to be correctly translated.</p>
<p>I am wondering if there is a machine translation system or service that could use our existing professionally-generated translations to more accurately create a machine translation into any new language. </p>
<p>If an industry term, phrase or sentence fragment has been translated from en-US to es-AR, pt-BR, cs-CZ, etc., then couldn't those prior translations be used as a hint regarding what the correct word choice should be for some new language? They could be used, in a sense, to triangulate. At worst, they could be used to create a majority voting system (e.g. if 9 of 13 languages translated a phrase to the same thing in the new language, we go with it).</p>
<p>Is anyone aware of a machine translation service that works like this?</p>
|
<p>Yes. A lot has changed in the decade since 2014.</p>
<p>Now, as of 2023, there are more than a dozen customizable cloud API providers, many of them self-serve.</p>
<p>For example, Google Translate launched customization in 2018, after launching <a href="https://machinetranslate.org/neural-machine-translation" rel="nofollow noreferrer">neural machine translation</a> in 2016, which made it a lot easier to offer.</p>
<p>From <a href="https://machinetranslate.org/customisation" rel="nofollow noreferrer">machinetranslate.org/customisation</a>:</p>
<blockquote>
<p>14 APIs support customisation.</p>
<ul>
<li><a href="https://machinetranslate.org/amazon" rel="nofollow noreferrer">Amazon Translate</a></li>
<li><a href="https://machinetranslate.org/kantanmt" rel="nofollow noreferrer">KantanMT</a></li>
<li><a href="https://machinetranslate.org/language-weaver" rel="nofollow noreferrer">Language Weaver</a></li>
<li><a href="https://machinetranslate.org/lilt" rel="nofollow noreferrer">Lilt</a></li>
<li><a href="https://machinetranslate.org/mirai" rel="nofollow noreferrer">Mirai Translator</a></li>
<li><a href="https://machinetranslate.org/modernmt" rel="nofollow noreferrer">ModernMT</a></li>
<li><a href="https://machinetranslate.org/npatmt" rel="nofollow noreferrer">NpatMT</a></li>
<li><a href="https://machinetranslate.org/omniscien" rel="nofollow noreferrer">Omniscien Technologies</a></li>
<li><a href="https://machinetranslate.org/pangeamt" rel="nofollow noreferrer">PangeaMT</a></li>
<li><a href="https://machinetranslate.org/phrase-nextmt" rel="nofollow noreferrer">Phrase NextMT</a></li>
<li><a href="https://machinetranslate.org/sunda" rel="nofollow noreferrer">Sunda Translator</a></li>
<li><a href="https://machinetranslate.org/tilde" rel="nofollow noreferrer">Tilde</a></li>
</ul>
</blockquote>
<p>The type of customization you describe, training with your own <a href="https://machinetranslate.org/parallel-data" rel="nofollow noreferrer">parallel data</a>, is the most common. These days it is usually achieved with <a href="https://machinetranslate.org/fine-tuning" rel="nofollow noreferrer">fine-tuning</a>.</p>
<p>The basic types of customization that the major machine translation APIs offer are:</p>
<ul>
<li><a href="https://machinetranslate.org/fine-tuning" rel="nofollow noreferrer">Fine-tuning</a> - Training a model with parallel data</li>
<li><a href="https://machinetranslate.org/adaptive" rel="nofollow noreferrer">Adaptive</a> - Similar to fine-tuning, but it updates on the fly</li>
<li><a href="https://machinetranslate.org/glossaries" rel="nofollow noreferrer">Glossaries</a> - Defining specific terminology</li>
<li><a href="https://machinetranslate.org/formality" rel="nofollow noreferrer">Formality</a> - Formal or informal 2nd person</li>
</ul>
<p>There are also other methods of customization offered, like choosing a specific locale, like Canadian French, or a specific domain, like fashion.</p>
<p>There is basically a trade-off between simplicity and control - a lot of those methods are basically a parameter or a button click, but the engine can't even hope to reflect your style the way that a model fine-tuned on your training data can.</p>
<p>Warning: Even if an API supports customization, and that API is <a href="https://machinetranslate.org/integrations" rel="nofollow noreferrer">integrated in your translation management system</a>, that integration does not necessarily support using your customized version of that API. So before you invest time in customization, <a href="https://machinetranslate.org/integrations" rel="nofollow noreferrer">check your TMS integration</a> to be sure it'll actually let you access your custom machine translation.</p>
<p>For example, there is <a href="https://machinetranslate.org/xtm#api-support" rel="nofollow noreferrer">no ModernMT integration in XTM</a>. There is a Google Translate integration, but it supports customization only via fine-tuning, not via glossaries.</p>
<hr />
<p>Full disclosure: I'm the founder of and a major contributor to Machine Translate, the non-profit foundation making machine translation more accessible to more people, with open information and community.</p>
| 1,312
|
machine translation
|
How to find out the quality of machine translation systems?
|
https://stackoverflow.com/questions/58624033/how-to-find-out-the-quality-of-machine-translation-systems
|
<p>I know that there are various metrics for measuring the quality of machine translation systems, for example:</p>
<ul>
<li>Bleu</li>
<li>METEOR</li>
<li>Lepor</li>
</ul>
<p>Are there somewhere in the public domain metric results for popular translation systems? For example, such as:</p>
<ul>
<li>Google translate</li>
<li>DeepL</li>
<li>Yandex Translate</li>
<li>Microsoft translate</li>
<li>Promt</li>
<li>Apertium</li>
<li>Openlogs</li>
<li>Papago</li>
<li>Fanyi Baidu</li>
</ul>
|
<p>Machine translation quality is annually evaluated at the <a href="http://www.statmt.org/wmt19/" rel="nofollow noreferrer">Conference on Machine Translation</a>. Most of the evaluated systems are experimental systems from universities, but most of the systems you mention participate as well. You can find the results of last year's <em>human</em> evaluation in Table 11 on page 24 of the <a href="https://www.aclweb.org/anthology/W19-5301.pdf" rel="nofollow noreferrer">conference findings</a>.</p>
<p>Most of the systems you mentioned participate anonymously under acronyms <code>online-?</code>, but you can often guess which system is which.</p>
| 1,313
|
machine translation
|
English query generation through machine translation systems
|
https://stackoverflow.com/questions/11122473/english-query-generation-through-machine-translation-systems
|
<p>I'm working on a project to generate questions from sentences. Right now, I'm at a point where I can generate questions like:
"Angela Merkel is the chancellor of Germany." -> "Angela Merkel is who?"</p>
<p>Now, of course, I want the questions to look like "Who is...?" instead. Is there any easy way to do this that I haven't thought of yet?</p>
<p>My current idea would be to train an English(not quite question) -> English(question) translator, maybe using existing machine translation engines like moses. Is this overkill? How much data would I need? Are there corpora that address this or a similar problem? Is using a general translation engine even appropriate for this task?</p>
|
<p>Check out Michael Heilman's dissertation <a href="http://www.ark.cs.cmu.edu/mheilman/questions/papers/heilman-question-generation-dissertation.pdf" rel="nofollow">Automatic Factual Question Generation from Text</a> for background on question generation and to see what his approach to this problem looks like. You can find more by searching for research on "question generation". He mentions a corpus from Microsoft: the <a href="http://research.microsoft.com/en-us/downloads/88c0021c-328a-4148-a158-a42d7331c6cf/" rel="nofollow">Microsoft Research Question-Answering Corpus</a>.</p>
<p>I don't think that an approach based solely on (current) statistical machine translation approaches is going to work that well, since you're usually going to need a deeper syntactic analysis of the source sentence to do a good job of generating an appropriate question. For simple questions like your example, it's pretty easy to design syntactic tree transformations to generate the question, but it gets much trickier as soon as the sentences get a little more complicated.</p>
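<p>As a rough illustration of what one such transformation looks like: the toy function below fakes the "NP is NP." to "Who is NP?" rewrite with a single regex. (This is purely illustrative; a real system such as Heilman's operates on full parse trees via Tregex/Tsurgeon and checks, among other things, that the subject is actually a person.)</p>
<pre><code>```python
import re

# Toy stand-in for one syntactic question-generation rule.
# Real systems apply such rewrites to parse trees, not raw strings.
def simple_who_question(sentence):
    m = re.match(r'^([A-Z][\w .]*?) is (.+)\.$', sentence)
    if m:
        return 'Who is {}?'.format(m.group(2))
    return None  # no rule applies

print(simple_who_question('Angela Merkel is the chancellor of Germany.'))
# Who is the chancellor of Germany?
```</code></pre>
<p>Even this tiny example hints at why surface patterns break down: the rule would happily produce nonsense for non-person subjects, which is exactly where syntactic and semantic analysis earns its keep.</p>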
| 1,314
|
machine translation
|
Cleaning text data for Neural Machine translation
|
https://stackoverflow.com/questions/65170091/cleaning-text-data-for-neural-machine-translation
|
<p>I am cleaning my data to get pairs of text for converting from language X to Y for machine translation</p>
<pre><code> [['\ufeffMensahe di Pasco di Gobernador di Aruba 2019',
'Governor’s Christmas speech 2019'],
['Gobernador di Aruba Sr. Alfonso Boekhoudt a duna su mensahe di Pasco riba 24 december ultimo',
'On Christams eve, December 24, the Governor of Aruba Mr. Alfonso Boekhoudt gave his traditional Christmas speech'],
['Por a wak e discurso di Pasco di Gobernador via e canalnan di television local',
"The governor's Christmas speech was shown at the local television stations"],......
</code></pre>
<p>Above is the data that goes in the following piece of code:</p>
<pre><code>def clean_pairs(lines):
    cleaned = list()
    for pair in lines:
        clean_pair = list()
        for line in pair:
            # normalize unicode characters
            line = normalize('NFD', line).encode('ascii', 'ignore')
            line = line.decode('UTF-8')
            # tokenize on white space
            line = line.split()
            .
            .
            .
            .
            clean_pair.append(' '.join(line))
        cleaned.append(clean_pair)

for i in range(10):
    print('[%s]->[%s]' % (cleaned[i,0], cleaned[i,1]))
</code></pre>
<p>I should get the output as :</p>
<pre><code>[hi]->[hallo]
[hi]->[gru gott]
[run]->[lauf]
[wow]->[potzdonner]
[wow]->[donnerwetter]
</code></pre>
<p>however, I get the below error:</p>
<hr />
<blockquote>
<p>IndexError<br />
Traceback (most recent call
last) in
49
50 for i in range(10):
---> 51 print('[%s]->[%s]' % (clean_pairs[i,0], clean_pairs[i,1]))</p>
<p>IndexError: too many indices for array: array is 1-dimensional, but 2
were indexed</p>
</blockquote>
<p>Can someone help me with what's going wrong?
Thanks!</p>
|
<p>Your structure is a list of lists. In Python you index them like this:</p>
<pre><code>clean[i][0] # not like clean[i,0]
</code></pre>
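<p>To see the difference concretely (plain Python, no NumPy; <code>clean</code> here is a toy stand-in for the cleaned pairs):</p>
<pre><code>```python
# A plain list takes one index per bracket; clean[i, 0] builds a tuple
# key, which lists reject. NumPy arrays accept both forms.
clean = [['hi', 'hallo'], ['run', 'lauf']]

print(clean[0][1])        # correct list-of-lists indexing: 'hallo'

try:
    clean[0, 1]           # tuple index on a plain list
except TypeError as err:
    print('TypeError:', err)
```</code></pre>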
| 1,315
|
machine translation
|
What is role of parser(POS, TAG, Dependecency) in machine translation?
|
https://stackoverflow.com/questions/54757820/what-is-role-of-parserpos-tag-dependecency-in-machine-translation
|
<p>I would like to know the main purpose of parsing a sentence (for example, getting all the POS tags of the sentence) when training machine translation. I thought we just need to tokenize the sentence and then feed it into a neural network to train? What is the purpose of having POS tags, and how can they be used in code when training a model for machine translation?</p>
<p>I can't seem to find any examples. Please assist.</p>
|
<p>If you only tokenize words by splitting sentences, you get a dictionary built from raw words.</p>
<p>For example, you have two sentences: [I love coffee], [I like milk].</p>
<p>The dictionary would be [I], [love], [coffee], [like], [milk] - a bag-of-words with 5 dimensions.</p>
<p>Imagine representing your whole language with such a bag-of-words dictionary.
How many dimensions would you need for your language?</p>
<p>The dimensionality would be far too large.</p>
<p>In this situation, if you build a language model with POS tags, you can reduce the dimensionality.</p>
<p><a href="https://i.sstatic.net/iIPEI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iIPEI.jpg" alt="enter image description here"></a></p>
<p>Pic 1: you need 9 dimensions to represent 9 words.</p>
<p><a href="https://i.sstatic.net/rPqbb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rPqbb.jpg" alt="enter image description here"></a></p>
<p>Pic 2: you need only a [3,2]-dimensional representation for 9 words.</p>
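<p>As a toy illustration of the dimensionality argument (the tag assignments below are written by hand, not produced by a real tagger):</p>
<pre><code>```python
# A bag-of-words vocabulary grows with every distinct word,
# while the POS tag set stays small and fixed.
sentences = [['I', 'love', 'coffee'], ['I', 'like', 'milk']]
hand_tags = [['PRON', 'VERB', 'NOUN'], ['PRON', 'VERB', 'NOUN']]

vocab  = {w for s in sentences for w in s}
tagset = {t for s in hand_tags for t in s}
print(len(vocab), len(tagset))   # 5 3
```</code></pre>
<p>With more sentences the word vocabulary keeps growing, but the tag set does not - that is the dimensionality reduction in miniature.</p>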
| 1,316
|
machine translation
|
How to handle names/unknown words in neural machine translation?
|
https://stackoverflow.com/questions/55281727/how-to-handle-names-unknown-words-in-neural-machine-translation
|
<p>Can anyone explain the best method to handle unknown words in <strong>neural machine translation</strong>, instead of just removing them, and how Google Translate handles names while a sentence is being translated between any two languages?</p>
<p>I'd really appreciate your response...Thanks!</p>
|
<p>Current NMT models do not work with words in the traditional sense, but with so-called subwords. Segmentation of the text into subwords is done using statistical models which ensure that frequently used words or strings of characters remain together, while less frequent words get split; ultimately they can be split into individual characters. In this way, there are no out-of-vocabulary words. The segmentation is the same both for the source and the target language, so it is easy for the model to learn to copy.</p>
<p>Currently, most frequent approaches are <a href="https://github.com/rsennrich/subword-nmt" rel="nofollow noreferrer">Byte-Pair Encoding</a> and <a href="https://github.com/google/sentencepiece" rel="nofollow noreferrer">SentencePiece</a>, both of them are available via <code>pip</code> and easy to use.</p>
<p>Google in their <a href="https://arxiv.org/abs/1609.08144" rel="nofollow noreferrer">2016 paper</a> claim to use a similar technique called WordPiece, however, they may have switched to SentencePiece which was made public by Google in 2018.</p>
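<p>The core of BPE training fits in a few lines. The sketch below follows the reference algorithm from Sennrich et al.: repeatedly merge the most frequent adjacent symbol pair. (The toy vocabulary and the naive <code>str.replace</code> are for illustration only; real toolkits like subword-nmt and SentencePiece do considerably more, including safe symbol-boundary matching.)</p>
<pre><code>```python
from collections import Counter

def get_pair_counts(vocab):
    # Count adjacent symbol pairs, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    # Fuse the chosen pair into one symbol everywhere.
    # (Naive replace is fine for this toy single-character example.)
    old, new = ' '.join(pair), ''.join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Each key is a word as space-separated symbols with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2, 'n e w e s t </w>': 6}
for _ in range(2):
    best = get_pair_counts(vocab).most_common(1)[0][0]
    vocab = merge_pair(best, vocab)
    print(best)
# ('w', 'e')
# ('l', 'o')
```</code></pre>
<p>Words never seen in training can still be encoded with the learned merges, falling back to single characters in the worst case - which is exactly why there are no out-of-vocabulary words.</p>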
| 1,317
|
machine translation
|
Can mT5 model on Huggingface be used for machine translation?
|
https://stackoverflow.com/questions/76040850/can-mt5-model-on-huggingface-be-used-for-machine-translation
|
<p>The <code>mT5</code> model is pretrained on the mC4 corpus, covering 101 languages:</p>
<blockquote>
<p>Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.</p>
</blockquote>
<h3>Can it do machine translation?</h3>
<p>Many users have tried something like this but it fails to generate a translation:</p>
<pre><code>from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
article = "translate to french: The capital of France is Paris."
batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt")
output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1)
tokenizer.decode(output_ids[0])
</code></pre>
<p>[out]:</p>
<pre><code>>>> <pad> <extra_id_0></s>
</code></pre>
<h3>How do we make the mt5 model do machine translation?</h3>
|
<h3>Can it do machine translation?</h3>
<p>From the doc:</p>
<blockquote>
<p>Note: mT5 was only pre-trained on mC4 <strong>excluding any supervised training</strong>. Therefore, this model has to be fine-tuned before it is useable on a downstream task.</p>
</blockquote>
<p>therefore, no, it cannot do machine translation out of the box.</p>
<p>See also <a href="https://github.com/huggingface/transformers/issues/8704" rel="noreferrer">https://github.com/huggingface/transformers/issues/8704</a></p>
<h3>How do we make the mt5 model do machine translation?</h3>
<p>No, it can't do machine translation out of the box. But you can fine-tune the model on parallel data.</p>
<p>There are multiple MT models fine-tuned and shared on <a href="https://huggingface.co/models?pipeline_tag=translation&sort=downloads&search=mt5" rel="noreferrer">https://huggingface.co/models?pipeline_tag=translation&sort=downloads&search=mt5</a></p>
<p>But if you want to fine-tune mT5 on your own data, here's a sample reference code: <a href="https://github.com/ejmejm/multilingual-nmt-mt5/blob/main/nmt_full_version.ipynb" rel="noreferrer">https://github.com/ejmejm/multilingual-nmt-mt5/blob/main/nmt_full_version.ipynb</a></p>
| 1,318
|
machine translation
|
How to deal with punctuations in machine translation
|
https://stackoverflow.com/questions/46290238/how-to-deal-with-punctuations-in-machine-translation
|
<p>Just curious how people usually deal with punctuation in machine translation. </p>
<p>For example, from language A to B we might have:</p>
<pre><code>A: a b c d e f g
B: x y z, u v w
</code></pre>
<p>I am wondering how do we deal with the comma in language B? Say if we 're using seq2seq model, shall we simply remove it, or shall we also generate embedding for it and treat the comma the same way we treat other words?</p>
<p>I think no paper explicitly talks about it yet if I didn't miss anything.</p>
|
<p>A good application for Seq2Seq is machine translation. </p>
<p>In the case of English-&gt;German, we will see German sentences that require an additional comma, e.g.</p>
<blockquote>
<p><strong>EN:</strong> I shot him because the colonel had told me so.</p>
<p><strong>DE:</strong> Ich habe auf ihn geschossen, weil es der Oberst mir befohlen hatte.</p>
</blockquote>
<p>A good model will automatically learn that the first sub-clause before <code>weil</code> (because) often requires a comma to be grammatical. </p>
<p>There shouldn't be a need to do extra pre-processing beforehand.</p>
| 1,319
|
machine translation
|
Can we use Generative Adversarial Network for Neural Machine Translation?
|
https://stackoverflow.com/questions/59099391/can-we-use-generative-adversarial-network-for-neural-machine-translation
|
<p>I'm going to implement a translator based on NMT(Neural Machine Translation). In here I hope to use only monolingual corpora without using parallel corpus data for my dataset. Is it possible to train the model using only monolingual corpora data? I'm grateful if someone can share your idea regarding this.</p>
| 1,320
|
|
machine translation
|
Neural Machine Translation from Latex to English without sufficient training data
|
https://stackoverflow.com/questions/50199395/neural-machine-translation-from-latex-to-english-without-sufficient-training-dat
|
<p>I'm trying to build a Neural Machine Translation model that translates Latex-code into English.
An example for this would be: "\frac{1^{n}}{12}" -> "One to the power of n divided by 12".
The problem is that I don't have nearly enough labeled training data to produce a good result.</p>
<p>Is there a way to train the model with s small data set or to artificially increase the amount of training data for this problem?</p>
<p>I have found solutions for machine translation without parallel data, that built a dictionary <a href="https://arxiv.org/abs/1710.04087" rel="nofollow noreferrer">"by aligning monolingual word embedding spaces in an unsupervised way"</a>. These approaches seem to rely on the fact that human languages are very similar in nature. But Latex-code is very different than human languages and I don't think that's going to yield great results.</p>
| 1,321
|
|
machine translation
|
Do glossaries in Weblate have any effect on machine translations?
|
https://stackoverflow.com/questions/59957175/do-glossaries-in-weblate-have-any-effect-on-machine-translations
|
<p>In Weblate it is easy to add terms to glossaries. I was wondering if these glossaries might be "sent" to any of the machine translation services, to increase the quality of the machine translations. Or are they purely to help the human translators?</p>
|
<p>Currently, there is no such integration - so the glossaries are there purely to help humans.</p>
| 1,322
|
machine translation
|
There are deep learning methods for string similarity in machine translation?
|
https://stackoverflow.com/questions/50435118/there-are-deep-learning-methods-for-string-similarity-in-machine-translation
|
<p>I am interested in machine translation and more specific I would like to examine the similarity between two strings. I would like to know if there are deep learning methods for text feature extraction. I already tried the famous statistics methods like cosine similarity, Levenstein distance, word frequency and others.</p>
<p>Thank you</p>
|
<p>To find the similarity between two strings, try training a <strong>Siamese network</strong> on your dataset.</p>
<p><strong>Siamese networks</strong> are a special type of neural network architecture. Instead of learning to classify its inputs, the network learns to differentiate between two inputs; it learns the similarity between them.</p>
<p><a href="https://medium.com/@gautam.karmakar/manhattan-lstm-model-for-text-similarity-2351f80d72f1" rel="noreferrer">https://medium.com/@gautam.karmakar/manhattan-lstm-model-for-text-similarity-2351f80d72f1</a></p>
<p>Below is a link to a Kaggle competition where Siamese networks were used for text similarity:</p>
<p><a href="https://medium.com/mlreview/implementing-malstm-on-kaggles-quora-question-pairs-competition-8b31b0b16a07" rel="noreferrer">https://medium.com/mlreview/implementing-malstm-on-kaggles-quora-question-pairs-competition-8b31b0b16a07</a> </p>
<p>Hope this clears your doubts.</p>
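<p>As a rough illustration of the shared-encoder idea (not a trained model: the embeddings below are random, and a real Siamese network would run the same LSTM over both inputs rather than averaging embeddings):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate("the cat sat on mat a dog".split())}
emb = rng.normal(size=(len(vocab), 8))  # toy shared embedding table

def encode(sentence):
    """The shared 'twin' encoder: here just a mean of word embeddings.
    A real Siamese model would apply the same LSTM to both inputs."""
    ids = [vocab[w] for w in sentence.split()]
    return emb[ids].mean(axis=0)

def similarity(s1, s2):
    # Manhattan-LSTM style score: exp(-L1 distance), in (0, 1]
    return float(np.exp(-np.abs(encode(s1) - encode(s2)).sum()))

print(similarity("the cat sat", "the cat sat"))  # identical inputs -> 1.0
print(similarity("the cat sat", "a dog sat"))
```

<p>The key design point is that both inputs pass through the identical encoder with shared weights, so the learned distance is symmetric.</p>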
| 1,323
|
machine translation
|
What's a Good Machine Translation Metric or Gold Set
|
https://stackoverflow.com/questions/8510762/whats-a-good-machine-translation-metric-or-gold-set
|
<p>I'm starting up looking into doing some machine translation of search queries, and have been trying to think of different ways to rate my translation system between iterations and against other systems. The first thing that comes to mind is getting translations of a set of search terms from mturk from a bunch of people and saying each is valid, or something along those lines, but that would be expensive, and possibly prone to people putting in bad translations. </p>
<p>Now that I'm trying to think of something cheaper or better, I figured I'd ask StackOverflow for ideas, in case there's already some standard available, or someone has tried to find one of these before. Does anyone know, for example, how Google Translate rates various iterations of their system?</p>
|
<p>I'd suggest refining your question. There are a great many metrics for machine translation, and it depends on what you're trying to do. In your case, I believe the problem is simply stated as: "Given a set of queries in language L1, how can I measure the quality of the translations into L2, in a web search context?"</p>
<p>This is basically cross-language information retrieval.</p>
<p>What's important to realize here is that you don't actually care about providing the user with the translation of the query: you want to get them the <em>results</em> that they would have gotten from a good translation of the query.</p>
<p>To that end, you can simply measure the discrepancy of the results lists between a gold translation and the result of your system. There are many metrics for rank correlation, set overlap, etc., that you can use. The point is that you need not judge each and every translation, but just evaluate whether the automatic translation gives you the same results as a human translation.</p>
<p>As for people proposing bad translations, you can assess whether the putative gold standard candidates have similar results lists (i.e. given 3 manual translations do they agree in results? If not, use the 2 that most overlap). If so, then these are effectively synonyms from the IR perspective.</p>
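<p>A minimal instance of the set-overlap idea above (the document ids are hypothetical): compare the top-<em>k</em> results retrieved for the gold translation of a query against those retrieved for the system translation.</p>

```python
def topk_overlap(gold_results, system_results, k=10):
    """Jaccard overlap between the top-k results of two ranked lists.

    1.0 means the machine-translated query retrieved exactly the same
    top-k documents as the gold (human) translation of the query.
    """
    gold, system = set(gold_results[:k]), set(system_results[:k])
    if not gold and not system:
        return 1.0
    return len(gold & system) / len(gold | system)

# Hypothetical document ids returned for the two query translations
gold = ["d1", "d2", "d3", "d4"]
mt = ["d2", "d1", "d5", "d3"]
print(topk_overlap(gold, mt, k=4))  # 3 shared / 5 distinct = 0.6
```

<p>Rank-correlation measures (e.g. Kendall's tau over the shared documents) can be layered on top when the ordering of results matters, not just their identity.</p>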
| 1,324
|
machine translation
|
Loading huge text files for neural machine translation with Pytorch
|
https://stackoverflow.com/questions/49338649/loading-huge-text-files-for-neural-machine-translation-with-pytorch
|
<p>In PyTorch, I have written a dataset loading class for loading 2 text files as source and targets, for neural machine translation purpose. Each file has 93577946 lines, and each of them allocates 8GB memory on Hard Disc.</p>
<p>The class is as the following:</p>
<pre><code>class LoadUniModal(Dataset):
    sources = []
    targets = []
    maxlen = 0
    lengths = []

    def __init__(self, src, trg, src_vocab, trg_vocab):
        self.src_vocab = src_vocab
        self.trg_vocab = trg_vocab
        with codecs.open(src, encoding="utf-8") as f:
            for line in f:
                tokens = line.replace("\n", "").split()
                self.maxlen = max(self.maxlen, len(tokens))
                self.sources.append(tokens)
        with codecs.open(trg, encoding="utf-8") as f:
            for line in f:
                tokens = line.replace("\n", "").split()
                self.maxlen = max(self.maxlen, len(tokens))
                self.targets.append(tokens)
                self.lengths.append(len(tokens)+2)

    # Override to give PyTorch access to any item in the dataset
    def __getitem__(self, index):
        # Source sentence processing
        tokens = self.sources[index]
        ntokens = [self.src_vocab['<START>']]
        for a in range(self.maxlen):
            if a <= (len(tokens) - 1):
                if tokens[a] in self.src_vocab.keys():
                    ntokens.append(self.src_vocab[tokens[a]])
                else:
                    ntokens.append(self.src_vocab['<UNK>'])
            elif a == len(tokens):
                ntokens.append(self.src_vocab['<END>'])
            elif a > len(tokens):
                ntokens.append(self.src_vocab['<PAD>'])
        source = torch.from_numpy(np.asarray(ntokens)).long()

        # Target sentence processing
        tokens = self.targets[index]
        ntokens = [self.trg_vocab['<START>']]
        for a in range(self.maxlen):
            if a <= (len(tokens) - 1):
                if tokens[a] in self.trg_vocab.keys():
                    ntokens.append(self.trg_vocab[tokens[a]])
                else:
                    ntokens.append(self.trg_vocab['<UNK>'])
            elif a == len(tokens):
                ntokens.append(self.trg_vocab['<END>'])
            elif a > len(tokens):
                ntokens.append(self.trg_vocab['<PAD>'])
        target = torch.from_numpy(np.asarray(ntokens)).long()

        length = self.lengths[index]
        return [0], source, target, length

    def __len__(self):
        return len(self.sources)
</code></pre>
<p>I use the class in order to load dataset as follows:</p>
<pre><code>def load_text_train_data(train_dir, src_vocab, trg_vocab, lang_pair, batch_size):
    tpl = ast.literal_eval(lang_pair)
    slang = tpl[1]
    tlang = tpl[2]
    strain_file = os.path.join(train_dir, "train"+slang)
    ttrain_file = os.path.join(train_dir, "train"+tlang)
    data_iter = LoadUniModal(strain_file, ttrain_file, src_vocab, trg_vocab)
    data_iter = DataLoader(data_iter, batch_size=batch_size)
    return data_iter
</code></pre>
<p>When I am trying to load the data, I get memory error.</p>
<p>How would it be possible to load the data without memory problem?</p>
<p>Thanks,</p>
|
<p>This shouldn't give you an error unless you load the entire data in memory at once. One suggestion I want to give you is: don't pad all sentences to a maximum length. In machine translation data, in general, sentence lengths vary a lot.</p>
<p>Also, you can try smaller mini-batches of a size <code>x</code> (e.g., 32 or 64) that your memory can afford. Only pad the elements of the current mini-batch, move them to a CUDA tensor, and then pass that to your model. Hopefully, this will solve your problem.</p>
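<p>A framework-agnostic sketch of that per-mini-batch (dynamic) padding; the token ids below are made up, and in PyTorch this logic would typically live in a <code>collate_fn</code> passed to the <code>DataLoader</code>, followed by conversion to tensors:</p>

```python
def pad_batch(batch, pad_id=0):
    """Pad each mini-batch only to the length of its own longest sequence."""
    maxlen = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (maxlen - len(seq)) for seq in batch]

# Two mini-batches: each pads only to its own maximum length,
# not to the global maximum over all 93M lines.
print(pad_batch([[5, 2, 9], [7, 1]]))   # [[5, 2, 9], [7, 1, 0]]
print(pad_batch([[3], [4, 4, 4, 4]]))   # [[3, 0, 0, 0], [4, 4, 4, 4]]
```

<p>Because sentence lengths vary a lot in MT data, padding per batch instead of to the corpus-wide maximum drastically reduces wasted memory.</p>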
| 1,325
|
machine translation
|
What are some examples of Machine Translation applications/libraries currently being developed?
|
https://stackoverflow.com/questions/4456181/what-are-some-examples-of-machine-translation-applications-libraries-currently-b
|
<p>I'm interested in learning more about Machine Translation. While I have some very interesting books on the matter, I'd like to see some real world applications of MT's theories.</p>
<p>I've found a couple open source projects just by searching around:</p>
<p><a href="http://www.apertium.org/">Apertium</a></p>
<p><a href="http://www.statmt.org/moses/">Moses</a></p>
<p>So, does anyone have any other examples? I'm looking for active projects; stuff which has not been abandoned.</p>
|
<p><strong>Machine Translation Packages</strong></p>
<p>Besides <a href="http://www.statmt.org/moses/" rel="nofollow">Moses</a> and <a href="http://www.apertium.org/" rel="nofollow">Apertium</a>, other good open source tools for machine translation that are being actively developed/supported are:</p>
<ul>
<li><a href="http://cdec-decoder.org/index.php?title=Main_Page" rel="nofollow">cdec (C++)</a></li>
<li><a href="http://sourceforge.net/projects/joshua/" rel="nofollow">Joshua (Java)</a></li>
<li><a href="http://www-i6.informatik.rwth-aachen.de/jane/" rel="nofollow">Jane (C++)</a></li>
<li><a href="http://nlp.stanford.edu/phrasal/" rel="nofollow">Phrasal (Java) - soon to be released</a></li>
</ul>
| 1,326
|
machine translation
|
What's a good explanation of statistical machine translation?
|
https://stackoverflow.com/questions/5815252/whats-a-good-explanation-of-statistical-machine-translation
|
<p>I'm trying to find a good high level explanation of how statistical machine translation works. That is, supposing I have a corpus of non-aligned English, French and German texts, how could I use that to translate any sentence from one language to another ? It's not that I'm looking to build a Google Translate myself, but I'd like to understand how it works in more detail. </p>
<p>I've seen searched Google but come across nothing good, it either quickly needs advanced mathematics knowledge to understand or is way too generalized. Wikipedia's article on SMT seems to be both, so it doesn't really help much. I'm skeptical that this is such a complex area that it's simply not possible to understand without all the mathematics.</p>
<p>Can anyone give, or know of, a general step-by-step explanation of how such a system works, targeted towards programmers (so code examples are fine) but without needing a mathematics degree to understand ? Or a book that's like this would be great too.</p>
<p><strong>Edit</strong>: A perfect example of what I'm looking for would be an SMT equivalent to <a href="http://norvig.com/spell-correct.html" rel="noreferrer">Peter Norvig's great article on spelling correction</a>. That gives a good idea of what it's involved in writing a spell checker, without going into detailed maths on Levenshtein/soundex/smoothing algorithms etc... </p>
|
<p>Here is a nice video lecture (in 2 parts):</p>
<p><a href="http://videolectures.net/aerfaiss08_koehn_pbfs/" rel="nofollow noreferrer">http://videolectures.net/aerfaiss08_koehn_pbfs/</a></p>
<p>For in-depth details, I highly advise this book:</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0521874157" rel="nofollow noreferrer">http://www.amazon.com/Statistical-Machine-Translation-Philipp-Koehn/dp/0521874157</a></p>
<p>Both are from the guy who created the most widely used MT system in research. It covers all the fundamental material, is very well explained, and is accurate. This is probably one of the de facto standard books that any researcher beginning in this field should read.</p>
| 1,327
|
machine translation
|
How to find train_losses and val_losses in Tensorflow, Neural machine translation with attention
|
https://stackoverflow.com/questions/67282469/how-to-find-train-losses-and-val-losses-in-tensorflow-neural-machine-translatio
|
<p>I am learning Neural machine translation from this tutorial
<a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention#restore_the_latest_checkpoint_and_test" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/nmt_with_attention#restore_the_latest_checkpoint_and_test</a></p>
<p>But seems like there are no <code>train_losses</code> and <code>val_losses</code> in the tutorial (only <code>batch_loss</code>).</p>
<p>Is there any way to get loss value history like what we did with another model</p>
<p>Ex.</p>
<pre><code>train_loss = seqModel.history['loss']
val_loss = seqModel.history['val_loss']
train_acc = seqModel.history['acc']
val_acc = seqModel.history['val_acc']
</code></pre>
|
<p>In that tutorial, there actually is one. When they use</p>
<pre><code>for epoch in range(EPOCHS):
    start = time.time()
    enc_hidden = encoder.initialize_hidden_state()
    total_loss = 0

    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        batch_loss = train_step(inp, targ, enc_hidden)
        total_loss += batch_loss
</code></pre>
<p>By that, they're computing the training loss coming from the <code>train_step</code> method. But there is no validation set, so no validation loss is shown.</p>
<hr />
<p>Based on your comment, you need to write the <code>test_step</code> function and use it in the training loop. Here is a minimum representation to get the validation loss.</p>
<pre><code>@tf.function
def test_step(inp, targ, enc_hidden):
    loss = 0
    enc_output, enc_hidden = encoder(inp, enc_hidden, training=False)
    dec_hidden = enc_hidden
    dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)

    for t in range(1, targ.shape[1]):
        predictions, dec_hidden, _ = decoder(dec_input, dec_hidden,
                                             enc_output, training=False)
        loss += loss_function(targ[:, t], predictions)
        dec_input = tf.expand_dims(targ[:, t], 1)

    batch_loss = (loss / int(targ.shape[1]))
    return batch_loss
</code></pre>
<p>To use it in the custom training loop, you would do as follows. Note, I'm using the same <code>dataset</code>, but practically we need to create a separate validation dataset.</p>
<pre><code>EPOCHS = 5
history = {'loss': [], 'val_loss': []}

for epoch in range(EPOCHS):
    start = time.time()
    enc_hidden = encoder.initialize_hidden_state()
    total_loss = 0

    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        batch_loss = train_step(inp, targ, enc_hidden)
        total_loss += batch_loss

    if (epoch + 1) % 2 == 0:
        checkpoint.save(file_prefix=checkpoint_prefix)

    history['loss'].append(total_loss.numpy()/steps_per_epoch)
    print(f'Epoch {epoch+1} Loss {total_loss/steps_per_epoch:.4f}')

    total_loss = 0
    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        batch_loss = test_step(inp, targ, enc_hidden)
        total_loss += batch_loss

    history['val_loss'].append(total_loss.numpy()/steps_per_epoch)
    print(f'Epoch {epoch+1} Val Loss {total_loss/steps_per_epoch:.4f}')
    print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
</code></pre>
<p>Next,</p>
<pre><code>history['loss']
history['val_loss']
</code></pre>
| 1,328
|
machine translation
|
How checkpoint.restore() works in Tensorflow, Neural machine translation with attention
|
https://stackoverflow.com/questions/67276646/how-checkpoint-restore-works-in-tensorflow-neural-machine-translation-with-at
|
<p>I am learning Neural machine translation from this tutorial
<a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention#restore_the_latest_checkpoint_and_test" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/nmt_with_attention#restore_the_latest_checkpoint_and_test</a></p>
<p>But there is one part that I could not understand: how the last checkpoint is loaded.</p>
<p>I saw that it use this command to load the saved model</p>
<pre><code># restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
</code></pre>
<p>But it seems no output from it is used.
Maybe something like this, for example:</p>
<pre><code>restored_model = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
</code></pre>
<p>And i saw that they call <code>translate</code> method right after <code>checkpoint.restore</code></p>
<pre><code>translate(u'hace mucho frio aqui.')
</code></pre>
<p>So I wonder how this works, even though we didn't do anything after the <code>checkpoint.restore</code> command?</p>
<p>Thanks</p>
|
<p>Here, the <code>checkpoint</code> is an instance of <a href="https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint" rel="nofollow noreferrer">tf.train.Checkpoint</a>. Somehow you have missed it; check <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention#checkpoints_object-based_saving" rel="nofollow noreferrer">here</a>, where it is initialized:</p>
<pre><code>checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
                                 encoder=encoder,
                                 decoder=decoder)
</code></pre>
<p>And thus it calls those functions like <code>.restore</code> or <code>.save</code>.</p>
<pre><code>checkpoint.save(file_prefix=checkpoint_prefix)
...
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
</code></pre>
| 1,329
|
machine translation
|
A language model for machine translation between a low-resource language and Portuguese using Tensorflow
|
https://stackoverflow.com/questions/78911175/a-language-model-for-machine-translation-between-a-low-resource-language-and-por
|
<p>I'm trying to train a language model for machine translation between a low-resource language and Portuguese using Tensorflow. unfortunately, I'm getting the following error:</p>
<pre><code>PS C:\Users\myuser\PycharmProjects\teste> python .\tensorflow_model.py
2024-08-23 21:29:50.839647: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File ".\tensorflow_model.py", line 52, in <module>
dataset = tf.data.Dataset.from_tensor_slices((src_tensor, tgt_tensor)).shuffle(BUFFER_SIZE)
File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 831, in from_tensor_slices
return from_tensor_slices_op._from_tensor_slices(tensors, name)
File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\data\ops\from_tensor_slices_op.py", line 25, in _from_tensor_slices
return _TensorSliceDataset(tensors, name=name)
File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\data\ops\from_tensor_slices_op.py", line 45, in __init__
batch_dim.assert_is_compatible_with(
File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 300, in assert_is_compatible_with
raise ValueError("Dimensions %s and %s are not compatible" %
ValueError: Dimensions 21 and 22 are not compatible
</code></pre>
<p>How can I overcome this error?</p>
<pre><code>import tensorflow as tf
import numpy as np
import re
import os

# Clean data
def preprocess_sentence(sentence):
    sentence = sentence.lower().strip()
    sentence = re.sub(r"([?.!,¿])", r" \1 ", sentence)
    sentence = re.sub(r'[" "]+', " ", sentence)
    sentence = re.sub(r"[^a-zA-Z?.!,¿]+", " ", sentence)
    sentence = sentence.strip()
    sentence = '<start> ' + sentence + ' <end>'
    return sentence

# Function to load data
def load_data(file_path_src, file_path_tgt):
    src_sentences = open(file_path_src, 'r', encoding='utf-8').read().strip().split('\n')
    tgt_sentences = open(file_path_tgt, 'r', encoding='utf-8').read().strip().split('\n')
    src_sentences = [preprocess_sentence(sentence) for sentence in src_sentences]
    tgt_sentences = [preprocess_sentence(sentence) for sentence in tgt_sentences]
    return src_sentences, tgt_sentences

# Load data
src_sentences, tgt_sentences = load_data('src_language.txt', 'portuguese.txt')

# Tokenization
src_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
tgt_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
src_tokenizer.fit_on_texts(src_sentences)
tgt_tokenizer.fit_on_texts(tgt_sentences)
src_tensor = src_tokenizer.texts_to_sequences(src_sentences)
tgt_tensor = tgt_tokenizer.texts_to_sequences(tgt_sentences)
src_tensor = tf.keras.preprocessing.sequence.pad_sequences(src_tensor, padding='post')
tgt_tensor = tf.keras.preprocessing.sequence.pad_sequences(tgt_tensor, padding='post')

BUFFER_SIZE = len(src_tensor)

# Creating the Dataset
dataset = tf.data.Dataset.from_tensor_slices((src_tensor, tgt_tensor)).shuffle(BUFFER_SIZE)
</code></pre>
|
<p>The error indicates that you're attempting to create a dataset from two tensors (<code>src_tensor</code> and <code>tgt_tensor</code>) whose first dimensions differ (i.e., one has 21 rows and the other 22), which makes them incompatible for creating a dataset. To resolve this issue, ensure that both tensors have the same number of rows, either by truncating the longer tensor or padding the shorter one.
Please refer to this <a href="https://colab.sandbox.google.com/gist/Kayyuri/203ed986a25ed44b0c54034eb3898c92/a-language-model-for-machine-translation-between-a-low-resource-language-and-portuguese-using-tensorflow.ipynb#scrollTo=lUp9BPXEO7cd" rel="nofollow noreferrer">gist</a>.</p>
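<p>For illustration, truncating both sides to a common number of examples looks like the sketch below (the token ids are made up; in practice the mismatch often comes from a stray blank line in one of the two text files, so checking the input data first is the better fix):</p>

```python
def align_example_counts(src, tgt):
    """Truncate both sides to a common number of examples so the two
    tensors end up with the same first dimension."""
    n = min(len(src), len(tgt))
    return src[:n], tgt[:n]

src = [[1, 2], [3, 4], [5, 6]]   # 3 source sentences
tgt = [[7, 8], [9, 10]]          # only 2 target sentences -> mismatch
src, tgt = align_example_counts(src, tgt)
print(len(src), len(tgt))  # 2 2
```

<p>Blind truncation can silently misalign sentence pairs, so it should only be used after verifying which lines are actually missing.</p>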
| 1,330
|
machine translation
|
Naive rule-based machine translation
|
https://stackoverflow.com/questions/60342861/naive-rule-based-machine-translation
|
<p>In order to develop a translation system for a constructed language, I am looking for a method that is easy to implement. I turn quite naturally to algorithms based on rules encoded by an "expert".</p>
<p>Thus, I am looking for references (codes, explanatory documents...) about naive translation algorithms; for example those from the 60s.</p>
<p><strong>Note</strong>: I know that the results will probably be rough.</p>
|
<p>One naive approach is to work with translations using books and a dictionary, with one language serving as the reference; for example, from the book Moby Dick:</p>
<pre><code>"Call me Ishmael. Some years ago—never mind how
long precisely-having little or no money in my purse, and nothing particular to
interest me on shore, I thought I would sail about a little and see
the watery part of the world."
</code></pre>
<p>Suppose I need to translate the Spanish text:</p>
<pre><code>"¿Que paises forman parte del mundo?"
</code></pre>
<p>You take a section of the text, called a "window" or "scope", that contains statistically frequent words (together with synonyms and antonyms). The section starts with a word that has a direct dictionary translation and ends with another one; in this example, you then search for the same section in the Spanish book:</p>
<pre><code>start : call -> Llamame. => end: world -> mundo
</code></pre>
<p>After processing, the result is something like:</p>
<pre><code>how many places about of the wold
</code></pre>
<p>You can find more information <a href="https://en.wikipedia.org/wiki/Statistical_machine_translation" rel="nofollow noreferrer">here</a>.</p>
| 1,331
|
machine translation
|
Machine Translation (possible approaches)
|
https://stackoverflow.com/questions/53972928/machine-translation-possible-approaches
|
<p>Recently, I’ve been thinking a lot about what to code. I thought about projects that enhance my skill in programming and general problem solving. Also, I wanted something that has an actual purpose. Honestly, I’m interested in building a translator. But not one for programming but one for human languages.
For example, if I take the German sentence “Ich glaube, du bist verrückt.”, I want to translate it to English. Like “I think you are crazy.”. Now, my question is how is it possible to perform this. I read things like “Translating word by word and then changing word positions, applying punctuation rules” and so on. That would be the “easiest” way to do it. Google translate for example, works with documents: analyzes the source text and tries to find the right translation in a document in our target language. </p>
<p>Don’t hate me for my question because it maybe doesn’t really belong to this community. But I want to code such a thing that is able to translate sentences, not only words. (For now, I only want to translate German to English and vice versa.)</p>
<p>Here is my approach:
Source sentence: (German)
“Ich arbeite als Koch, aber habe keine Lust mehr.”
Word-by-word translation: (English)
“I work as chef but have no lust more.”
Since this sentence is not correct in terms of grammar, I need to modify it a bit.
Like putting an “a” between “as” and “chef”...</p>
<p>But how can we actually know what to put where and so on?</p>
<p>I know this question isn’t really about “plain” programming. It’s mostly about problem solving. </p>
<p>I hope you guys understand it. I wish you a nice weekend. :)</p>
| 1,332
|
|
machine translation
|
Machine translation transformer with context
|
https://stackoverflow.com/questions/76299944/machine-translation-transformer-with-context
|
<p>I'm working on a transformer for seq2seq translation that has the typical encoder-decoder structure. I want to include examples of similar sentences and their translations in the prompt (few-shot) but I could only find guides that just translate one sentence to another without other context. Are there any best practices for including examples as context for translation transformers? Should they go in the decoder or encoder?</p>
<p>I currently feed an input sentence <code>source_sentence</code> to the encoder and start generating the translation by feeding a start token <code><start></code> to the decoder until an end token <code><end></code> is produced.</p>
<p>Should my encoder input be something like this:</p>
<pre><code>example_sentence1
<start> example_translation1 <end>
example_sentence2
<start> example_translation2 <end>
source_sentence
</code></pre>
<p>Or alternatively should my decoder input be:</p>
<pre><code>example_sentence1
<start> example_translation1 <end>
example_sentence2
<start> example_translation2 <end>
source_sentence
<start>
</code></pre>
<p>Neither really feels right. Are there other options?</p>
|
<p>You are looking for the NLP task "Context-Aware NMT". There is some work in the field.</p>
<p>Take a look at this research: <a href="https://link.springer.com/article/10.1007/s10994-021-06070-y" rel="nofollow noreferrer">A study of BERT for context-aware neural machine translation</a></p>
<hr />
<p>If you want to create your own model without putting the context inside the prompt:</p>
<p>You can train a BART-like model and customize the input tags. Hopefully, with the right dataset, the model will learn how to utilize the context.</p>
<p>Conceptual input vector: <code>[ <TAG> context_sen1 <TAG> context_sen2 <SOS> sen3 <EOS> ]</code><br />
Conceptual output vector: <code>[ <SOS> translation <EOS> ]</code></p>
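<p>A sketch of assembling that conceptual input as a flat token sequence (the tag strings are placeholders, not a fixed convention):</p>

```python
def build_input(context_sents, source_sent):
    """Assemble the conceptual input vector as a flat token list:
    each context sentence is prefixed with <TAG>, and the sentence to
    translate is wrapped in <SOS> ... <EOS>."""
    tokens = []
    for sent in context_sents:
        tokens += ["<TAG>"] + sent.split()
    tokens += ["<SOS>"] + source_sent.split() + ["<EOS>"]
    return tokens

print(build_input(["hello there", "how are you"], "fine thanks"))
```

<p>In a real system these tag strings would be registered as special tokens in the tokenizer so they are never split into subwords.</p>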
| 1,333
|