category | title | question_link | question_body | answer_html | __index_level_0__
|---|---|---|---|---|---|
implement classification
|
Can anyone teach me how to implement multiple binary classification for SVM by openCV?
|
https://stackoverflow.com/questions/27766889/can-anyone-teach-me-how-to-implement-multiple-binary-classification-for-svm-by-o
|
<p>I have read the article about the following three links</p>
<p><a href="http://answers.opencv.org/question/42623/face-recognition-using-svm/" rel="nofollow noreferrer">http://answers.opencv.org/question/42623/face-recognition-using-svm/</a></p>
<p><a href="https://stackoverflow.com/questions/14694810/using-opencv-and-svm-with-images">using OpenCV and SVM with images</a></p>
<p><a href="http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O" rel="nofollow noreferrer">http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O</a></p>
<p>I have implemented multi-class SVM with OpenCV, but now I want to solve the multi-class problem with multiple binary classifiers (e.g. one-vs-rest) instead. However, I have no idea how to do this. Can anyone teach me? If possible, please provide an example. Thank you.</p>
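In case a concrete starting point helps, here is a minimal sketch of the one-vs-rest scheme. It uses scikit-learn's binary `SVC` purely as a stand-in, and the 2-D toy data and three classes are invented; with OpenCV you would train one SVM per class and compare decision values in exactly the same way:

```python
import numpy as np
from sklearn.svm import SVC

# Invented toy data: three well-separated 2-D clusters (stand-ins for image features)
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + c * 5 for c in range(3)])
y = np.repeat(np.arange(3), 20)

# One-vs-rest: one binary SVM per class ("this class" vs "everything else")
classifiers = []
for c in range(3):
    clf = SVC(kernel='linear')
    clf.fit(X, (y == c).astype(int))
    classifiers.append(clf)

def predict(x):
    # the class whose binary SVM gives the largest decision value wins
    scores = [clf.decision_function(x.reshape(1, -1))[0] for clf in classifiers]
    return int(np.argmax(scores))

pred = predict(np.array([5.0, 5.0]))  # near the centre of class 1
```

Each binary machine only answers "this class or not"; combining their decision values at prediction time is what turns the set of binary classifiers back into a multi-class classifier.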
| 134
|
|
implement classification
|
PyTorch correct implementation of classification on an autoencoder
|
https://stackoverflow.com/questions/78056762/pytorch-correct-implementation-of-classification-on-an-autoencoder
|
<p><strong>EDIT: embarrassingly, my error was shuffling the data but not the labels.</strong></p>
<p>I was given an assignment to create an LSTM autoencoder in PyTorch to reconstruct MNIST images.
Next, the assignment asked me to modify the network to also classify the reconstructed images. An important requirement is that it performs both tasks at the same time, reconstruction and classification of the reconstructed image, so the network should train on both losses simultaneously.</p>
<p>My implementation of the autoencoder is in this format:</p>
<pre><code>def __init__(self, input_size, hidden_size, num_layers, output_size, epochs, optimizer, learning_rate, grad_clip, batch_size):
super(AE, self).__init__()
self.encoder = Encoder(input_size, hidden_size, num_layers)
self.decoder = Decoder(input_size, hidden_size, num_layers, output_size)
self.epochs = epochs
self.optimizer = optimizer
self.learning_rate = learning_rate
self.grad_clip = grad_clip
self.batch_size = batch_size
self.criterion = nn.MSELoss()
self.losses = []
</code></pre>
<p>The forward and train methods work fine, and when I run the network on the MNIST dataset I get fairly well reconstructed images, with the MSE loss averaging around 1e-6.</p>
<p>I introduced the classifying element in a separate class:</p>
<pre><code>class AeWithClassifier(AE):
def __init__(self, input_size, hidden_size, num_layers, output_size, epochs, optimizer, learning_rate, grad_clip, batch_size, num_classes):
super(AeWithClassifier, self).__init__(input_size, hidden_size, num_layers, output_size, epochs, optimizer, learning_rate, grad_clip, batch_size)
self.classifier = nn.Sequential(
nn.Linear(output_size*output_size, num_classes))
self.classifier_criterion = nn.CrossEntropyLoss()
</code></pre>
<p>The methods are pretty straightforward, but I will provide them:</p>
<pre><code>def forward(self, x):
predictions = super().forward(x)
classifier_predictions = self.classifier(predictions.reshape(-1, 28*28))
return predictions, classifier_predictions
</code></pre>
<pre><code>def train(self, x, y):
losses = []
optimizer = self.optimizer(self.parameters(), lr=self.learning_rate)
for epoch in range(self.epochs):
cur_loss = 0
batch_idx = 0
for batch_idx, x_batch in enumerate(x):
x_batch = x_batch.to(device)
y_batch = y[batch_idx*self.batch_size:(batch_idx+1)*self.batch_size]
optimizer.zero_grad()
predictions, classifier_predictions = self.forward(x_batch)
recon_loss = self.criterion(predictions, x_batch)
class_loss = self.classifier_criterion(classifier_predictions, y_batch)
cur_loss = loss = recon_loss + class_loss
loss.backward()
nn.utils.clip_grad_norm_(self.parameters(), self.grad_clip)
optimizer.step()
losses.append(cur_loss.item())
print(f'Epoch: {epoch+1}/{self.epochs}, Loss: {cur_loss.item()}')
self.losses = losses
</code></pre>
<p>As you can see, I calculate the loss as the sum of the reconstruction loss and the classification loss, and then use torch to perform gradient calculation and optimization.</p>
<p>However, in this setup the network still reconstructs the images but fails to classify them properly: the cross-entropy loss does not decrease below ~2.3.</p>
<p>Am I doing something wrong in the construction of the network, or is the problem in the training itself?
I tried weighting the losses differently so the network would focus more on the classification task, but it still doesn't improve at all.</p>
|
<p>Since your classification layer is a single linear layer that takes the reconstruction predictions as input and outputs the class predictions, the network has to find a hard balance between reconstruction quality and linear separability of the predicted images.</p>
<p>If you want to keep this architecture, you should try to give different weights to the reconstruction loss and classification loss, something like: <code>loss = recon_loss + 10*class_loss</code>.</p>
<p>You can try adding an activation, another layer and a softmax on top of the current linear classification layer for a better classification. However, it's probably better to change the architecture and generate the classification and the reconstruction predictions with different network branches from the latent representations, similarly to this: <a href="http://tech.octopus.energy/timeserio/_images/MNIST.svg" rel="nofollow noreferrer">http://tech.octopus.energy/timeserio/_images/MNIST.svg</a>.</p>
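A minimal sketch of that branched layout, with all sizes invented (28*28 inputs, a 32-d latent, 10 classes) and plain `Linear` layers standing in for the LSTM encoder/decoder:

```python
import torch
import torch.nn as nn

class BranchedAE(nn.Module):
    """Shared encoder feeding two heads: a decoder for reconstruction
    and a classifier head, both reading the same latent representation."""
    def __init__(self, input_size=28 * 28, latent_size=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_size, latent_size), nn.ReLU())
        self.decoder = nn.Linear(latent_size, input_size)
        self.classifier = nn.Linear(latent_size, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        # both outputs come from z, so neither task has to pass through the other
        return self.decoder(z), self.classifier(z)

model = BranchedAE()
x = torch.randn(4, 28 * 28)
recon, logits = model(x)
```

Because the classifier reads the latent code rather than the reconstructed pixels, the reconstruction no longer has to stay linearly separable, which removes the tension described above.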
| 135
|
implement classification
|
How to implement LSTM for binary classification?
|
https://stackoverflow.com/questions/60775942/how-to-implement-lstm-for-binary-classification
|
<p>I am a beginner in deep learning, trying to implement an LSTM for binary classification. I have an EEG dataset with 11 continuous-valued features and 1 output which is either 0 or 1. The subjects watched a 5-minute video and after every 30 seconds gave a review of whether they liked it (1) or not (0). So this is time-series data, and I think an LSTM or GRU might help, but it is not giving me good results. One reason could be the choice of the number of layers, the number of neurons per layer, and the number of previous data points used to predict the next output. I am attaching the code I wrote for this. Please tell me what is wrong with it.</p>
<pre><code>import numpy as np
import pandas as pd
dataset_train = pd.read_csv('EEG_train.csv')
training_set_scaled = dataset_train.iloc[:, 0:12].values
X_train = []
y_train = []
# Creating a data structure with 60 timesteps and 1 output
for i in range(60, 882):
X_train.append(training_set_scaled[i-60:i, 0:12])
y_train.append(training_set_scaled[i, 11])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshaping
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 12))
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM, GRU
from keras.layers import Dropout
clf = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
clf.add(LSTM(units = 50, return_sequences = True, activation ='relu', input_shape =
(X_train.shape[1], 12)))
clf.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
clf.add(LSTM(units = 50, activation ='relu', return_sequences = False))
clf.add(Dropout(0.2))
# Adding the output layer
clf.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the RNN
clf.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the RNN to the Training set
clf.fit(X_train, y_train, epochs = 100, batch_size = 128)
dataset_test = pd.read_csv('EEG_test.csv')
real_test_value = dataset_test.iloc[:, 11:12].values
# Getting the predicted
dataset_total = pd.concat((dataset_train, dataset_test), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1,12)
X_test = []
for i in range(60, 439):
X_test.append(inputs[i-60:i, 0:12])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 12))
# Predicting the result
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
predicted_test_value = (clf.predict(X_test) &gt; 0.5).astype(int)  # 'regressor' was undefined; also threshold the sigmoid outputs
accuracy = accuracy_score(real_test_value, predicted_test_value)
print("ACCURACY: " + str(accuracy))
</code></pre>
| 136
|
|
implement classification
|
Scikit-learn vs. WEKA classification model implementation
|
https://stackoverflow.com/questions/66478239/scikit-learn-vs-weka-classification-model-implementation
|
<p>Am I correct to assume that the classification model implementations in scikit-learn and WEKA (e.g. Naive Bayes, Random Forest, etc.) produce the same results (not taking processing time and the like into account)?</p>
<p>I am asking because I wrote my pipeline in Python and would like to use scikit-learn for easy integration. Since most related research and previous work in my field have used WEKA and Java, I was wondering whether comparing their performance to my pipeline is valid and scientifically sound, given that I use the same models, settings, etc.</p>
| 137
|
|
implement classification
|
How to implement F1 score in LightGBM for multi-class classification in R?
|
https://stackoverflow.com/questions/72213497/how-to-implement-f1-score-in-lightgbm-for-multi-class-classification-in-r
|
<p>I am using the LightGBM package in R to create my model. I have already defined a function that calculates macro-F1 (defined as the average of F1s throughout all class predictions). I need to report CV macro-F1, so I would like to embed this score into <code>lgb.cv</code>. Nevertheless, the metric is not available in the package implementation, the only solution that I have seen in R is implemented in a binary classification setting (<a href="https://rpubs.com/dalekube/LightGBM-F1-Score-R" rel="nofollow noreferrer">https://rpubs.com/dalekube/LightGBM-F1-Score-R</a>), and most of the other answers are applied in Python.</p>
<p>My options are:</p>
<ol>
<li>Implement macro-F1 in <code>lgb.cv</code>, which is what I do not know.</li>
<li>Apply my macro-F1 function to a manual cross-validation function that I have created and apply both to <code>lgb.train</code> (although I think that this might not be as optimized as <code>lgb.cv</code>).</li>
<li>Switch to Python.</li>
</ol>
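For option 1, the metric itself is short to write. Below is a NumPy sketch of macro-F1 (shown in Python, since the logic carries over line for line to an R eval function for <code>lgb.cv</code>; the toy labels are invented):

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes):
    """Average of per-class F1 scores; this is the function you would wrap
    as a custom eval metric for cross-validation."""
    f1s = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
score = macro_f1(y_true, y_pred, 3)  # per-class F1s: 0.5, 0.8, 2/3
```

In R the same function would take the raw predictions and the `lgb.Dataset` labels and return `list(name = "macro_f1", value = score, higher_better = TRUE)`.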
| 138
|
|
implement classification
|
How to implement image classification using just convolutional layers?
|
https://stackoverflow.com/questions/66446915/how-to-implement-image-classification-using-just-convolutional-layers
|
<p>I'm trying to make an image classification model that outputs a prediction based on a sliding window. The only way to do that is to have the network's output layer as a Conv2D layer.</p>
<p>Here is my model architecture:</p>
<pre><code>inputs = Input((None, None, 3))
x = Conv2D(filters = 32, kernel_size = (3,3), strides = (1,1), kernel_initializer = 'he_normal', padding = 'same')(inputs)
x = BatchNormalization()(x)
x = LeakyReLU(0.2)(x)
x = Conv2D(filters = 32, kernel_size = (3,3), strides = (1,1), kernel_initializer = 'he_normal', padding = 'same')(x)
x = BatchNormalization()(x)
x = LeakyReLU(0.2)(x)
x = Conv2D(filters = 32, kernel_size = (3,3), strides = (1,1), kernel_initializer = 'he_normal', padding = 'same')(x)
x = BatchNormalization()(x)
x = LeakyReLU(0.2)(x)
x = Conv2D(filters = 32, kernel_size = (3,3), strides = (1,1), kernel_initializer = 'he_normal', padding = 'same')(x)
x = BatchNormalization()(x)
x = LeakyReLU(0.2)(x)
x = Conv2D(filters = 1, kernel_size = (3,3), strides = (1,1), kernel_initializer = 'he_normal', padding = 'same')(x)
x = LeakyReLU(0.2)(x)
x = Dropout(dropout)(x)
x = Conv2D(filters = 32, kernel_size = (1,1), strides = (1,1), kernel_initializer = 'he_normal', padding = 'same')(x)
x = LeakyReLU(0.2)(x)
x = Dropout(dropout)(x)
x = Conv2D(1, kernel_size = (128,128), strides = (1,1), kernel_initializer = 'he_normal')(x)
x = Activation('sigmoid')(x)
</code></pre>
<p>From what I've read, a Conv2D layer with kernel_size and strides equal to (1,1) is effectively a Dense layer, but this model doesn't converge, while another model with Dense layers did converge (if you need, I can add the original architecture to the question).</p>
<p>I've done the usual housekeeping, like ensuring all my training data is normalized between 0 and 1, but the loss stays at around 0.6.</p>
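The claim that a 1x1 convolution with stride (1,1) is effectively a Dense layer can be checked numerically. The sketch below (arbitrary invented shapes, pure NumPy) shows both compute the same per-pixel linear map, which suggests the non-convergence comes from elsewhere, e.g. the single-filter bottleneck or the large 128x128 kernel:

```python
import numpy as np

rng = np.random.RandomState(0)
H, W, C_in, C_out = 5, 7, 3, 4
x = rng.randn(H, W, C_in)
w = rng.randn(C_in, C_out)   # a 1x1 kernel is just one weight per (in, out) channel pair

# "1x1 convolution": mix channels independently at every spatial position
conv_out = np.einsum('hwc,cd->hwd', x, w)

# "Dense applied per pixel": flatten pixels, multiply, restore the shape
dense_out = (x.reshape(-1, C_in) @ w).reshape(H, W, C_out)
```

Since the two operations are numerically identical, the equivalence itself is not the problem.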
| 139
|
|
implement classification
|
Tensorflow implementation for bank transaction classification
|
https://stackoverflow.com/questions/57001614/tensorflow-implementation-for-bank-transaction-classification
|
<p>I am building a simple machine learning model that takes bank transactions as input (see features below) and I want to predict the spend category (label). I have already worked through some beginner's tutorials, such as <a href="https://developers.google.com/machine-learning/crash-course/" rel="nofollow noreferrer">ML Crash Course</a>, <a href="https://developers.google.com/machine-learning/guides/text-classification/step-2-5" rel="nofollow noreferrer">Text Classification Guides</a>, <a href="https://www.tensorflow.org/beta/tutorials/text/word_embeddings" rel="nofollow noreferrer">Word Embeddings</a>, and more. </p>
<p>Here is exemplary input data:</p>
<pre><code>Date;Sender / Recipient;IBAN / Account#;BIC / Bank Code;Text;Amount;Category
02.07.2019;Tesco Market;HSVSDDMM;Grocery Market London Heathrow - Thank you for purchase;-48.06;Groceries
</code></pre>
<p>My goal is to predict the <code>Category</code>, e.g. <code>Groceries</code>, etc. With TensorFlow I have come so far:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import pandas as pd
URL = "transactions-0263445.csv"
dataframe = pd.read_csv(URL, sep=';')
# Build the keras Sequential model
model = Sequential()
model.add(...)
model.add(Activation(...))
# Train and evaluate the model
</code></pre>
<p>How do I build the <a href="https://keras.io/getting-started/sequential-model-guide/" rel="nofollow noreferrer">sequential model</a>? I am confused with specifying the input shape.</p>
|
<p>This seems to be a classification problem, but there are some issues with your question: there are steps you need to take before dumping all the data into a model.</p>
<p>I see no preprocessing of the data. Are you using all features? Do they need to be scaled? Do you need to encode some data? There is a lot to look at before you can create a model.</p>
<p>In this case, I am guessing that your main feature is the text?
Here's a nice guide to how to do it:
<a href="https://medium.com/data-from-the-trenches/text-classification-the-first-step-toward-nlp-mastery-f5f95d525d73" rel="nofollow noreferrer">https://medium.com/data-from-the-trenches/text-classification-the-first-step-toward-nlp-mastery-f5f95d525d73</a></p>
<p>Then you can build and train:
<a href="https://developers.google.com/machine-learning/guides/text-classification/step-4" rel="nofollow noreferrer">https://developers.google.com/machine-learning/guides/text-classification/step-4</a></p>
<p>Go through those thoroughly and you will be able to get what you are missing at the moment.</p>
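As a sketch of that preprocessing step, here is a toy scikit-learn pipeline (the transactions below are invented, TF-IDF encodes the booking text, and logistic regression stands in for the final classifier). It shows the text-to-features encoding that has to happen before any Keras input shape can even be decided:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy transactions: booking text is the feature, category is the label
texts = [
    "Tesco Market Grocery purchase",
    "Sainsbury Grocery Market weekly shop",
    "Shell petrol station fuel",
    "BP fuel payment motorway",
    "Netflix monthly subscription",
    "Spotify subscription payment",
]
labels = ["Groceries", "Groceries", "Fuel", "Fuel",
          "Entertainment", "Entertainment"]

# TF-IDF turns each text into a fixed-length numeric vector; that vector's
# length is exactly what would become the Keras input_shape later
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
prediction = model.predict(["Tesco weekly Grocery shop"])[0]
```

Once the encoding is settled, swapping the logistic regression for a Keras `Sequential` model is mostly a matter of feeding it the same vectors.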
| 140
|
implement classification
|
Implementation of Focal loss for multi label classification
|
https://stackoverflow.com/questions/57635169/implementation-of-focal-loss-for-multi-label-classification
|
<p>I am trying to write a focal loss for multi-label classification:</p>
<pre><code>class FocalLoss(nn.Module):
def __init__(self, gamma=2, alpha=0.25):
self._gamma = gamma
self._alpha = alpha
def forward(self, y_true, y_pred):
cross_entropy_loss = torch.nn.BCELoss(y_true, y_pred)
p_t = ((y_true * y_pred) +
((1 - y_true) * (1 - y_pred)))
modulating_factor = 1.0
if self._gamma:
modulating_factor = torch.pow(1.0 - p_t, self._gamma)
alpha_weight_factor = 1.0
if self._alpha is not None:
alpha_weight_factor = (y_true * self._alpha +
(1 - y_true) * (1 - self._alpha))
focal_cross_entropy_loss = (modulating_factor * alpha_weight_factor *
cross_entropy_loss)
return focal_cross_entropy_loss.mean()
</code></pre>
<p>But when I run this I get:</p>
<pre><code> File "train.py", line 82, in <module>
loss = loss_fn(output, target)
File "/home/bubbles/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 538, in __call__
for hook in self._forward_pre_hooks.values():
File "/home/bubbles/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 591, in __getattr__
type(self).__name__, name))
AttributeError: 'FocalLoss' object has no attribute '_forward_pre_hooks'
</code></pre>
<p>Any suggestions would be really helpful, Thanks in advance.</p>
|
<p>You shouldn't inherit from <code>torch.nn.Module</code> as it's designed for modules with learnable parameters (e.g. neural networks).</p>
<p>Just create normal functor or function and you should be fine.</p>
<p>BTW. If you inherit from it, you should call <code>super().__init__()</code> somewhere in your <code>__init__()</code>.</p>
<h2>EDIT</h2>
<p>Actually inheriting from <code>nn.Module</code> might be a good idea, it allows you to use the loss as part of neural network and is common in PyTorch implementations/PyTorch Lightning.</p>
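For reference, a hedged rework of the posted loss that keeps the `nn.Module` inheritance but calls `super().__init__()` (the missing call is what caused the `AttributeError`) and computes element-wise BCE with the functional API; note the posted code also passed tensors to the `nn.BCELoss` constructor instead of calling an instance:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, gamma=2.0, alpha=0.25):
        super().__init__()  # required: this is what sets up _forward_pre_hooks etc.
        self.gamma = gamma
        self.alpha = alpha

    def forward(self, y_pred, y_true):
        # element-wise BCE; the original nn.BCELoss(y_true, y_pred) merely
        # *constructed* a loss module with bogus arguments
        bce = F.binary_cross_entropy(y_pred, y_true, reduction='none')
        p_t = y_true * y_pred + (1 - y_true) * (1 - y_pred)
        modulating = (1.0 - p_t) ** self.gamma
        alpha_w = y_true * self.alpha + (1 - y_true) * (1 - self.alpha)
        return (modulating * alpha_w * bce).mean()

loss_fn = FocalLoss()
good = loss_fn(torch.tensor([0.9, 0.1]), torch.tensor([1.0, 0.0]))  # confident, correct
bad = loss_fn(torch.tensor([0.1, 0.9]), torch.tensor([1.0, 0.0]))   # confident, wrong
```

The modulating factor is what makes the loss "focal": confident correct predictions are down-weighted far more than confident wrong ones.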
| 141
|
implement classification
|
Implementation of text classification in MATLAB with naive bayes
|
https://stackoverflow.com/questions/27562711/implementation-of-text-classification-in-matlab-with-naive-bayes
|
<p>I want to implement text classification with the Naive Bayes algorithm in MATLAB.
I have 3 matrices so far:</p>
<ol>
<li>Class priors (an 8*2 cell: 8 class names and, for each class, its percentage of the training set).</li>
<li>Training data: word-count matrices (a 15000*9 cell: for each class, the count of every feature (word); the last column is each word's count over all documents).</li>
<li>Test data: a 2000*1 cell, where each cell holds the list of words representing a document.</li>
</ol>
<p>What should I do now? I want to calculate recall and precision on the test set. I took a look at the MATLAB naive Bayes functions, and it is supposed to be simple, but I'm not sure how and where to start.</p>
<p>Thanks</p>
|
<p>Here is an example of Naive Bayes classification,</p>
<pre><code>x1 = 5 * rand(100,1);
y1 = 5 * rand(100,1);
data1 = [x1,y1];
x2 = -5 * rand(100,1);
y2 = 5 * rand(100,1);
data2 = [x2,y2];
x3 = -5 * rand(100,1);
y3 = -5 * rand(100,1);
data3 = [x3,y3];
traindata = [data1(1:50,:);data2(1:50,:);data3(1:50,:)];
testdata = [data1(51:100,:);data2(51:100,:);data3(51:100,:)];
label = [repmat('x+y+',50,1);repmat('x-y+',50,1);repmat('x-y-',50,1)];
</code></pre>
<p>That was my data, three classes. Now the classification,</p>
<pre><code>nb = fitcnb(traindata, label);   % NaiveBayes.fit in releases before R2014b
ClassifierOut = predict(nb, testdata);
</code></pre>
<p>I think you should change your data to matrices instead of cells, but the labels are okay.</p>
<p>Here are the results, <code>blue</code> is the training data and the rest is the classifier output for three classes.</p>
<p><img src="https://i.sstatic.net/CNxfm.png" alt="enter image description here"></p>
<p>You can also see <a href="https://stats.stackexchange.com/questions/51296/how-to-calculate-precision-and-recall-for-multiclass-classification-using-confus">here</a> for calculation of recall and precision for multi-class data. </p>
| 142
|
implement classification
|
How to implement network using Bert as a paragraph encoder in long text classification, in keras?
|
https://stackoverflow.com/questions/58703885/how-to-implement-network-using-bert-as-a-paragraph-encoder-in-long-text-classifi
|
<p>I am doing a long-text classification task with more than 10000 words per document. I am planning to use BERT as a paragraph encoder and then feed the paragraph embeddings to a BiLSTM step by step.
The network is as below:</p>
<blockquote>
<p>Input: (batch_size, max_paragraph_len, max_tokens_per_para,embedding_size)</p>
<p>bert layer: (max_paragraph_len,paragraph_embedding_size)</p>
<p>lstm layer: ???</p>
<p>output layer: (batch_size,classification_size)</p>
</blockquote>
<p>How do I implement this with Keras?
I am using keras's load_trained_model_from_checkpoint to load the BERT model:</p>
<pre><code>bert_model = load_trained_model_from_checkpoint(
config_path,
model_path,
training=False,
use_adapter=True,
trainable=['Encoder-{}-MultiHeadSelfAttention-Adapter'.format(i + 1) for i in range(layer_num)] +
['Encoder-{}-FeedForward-Adapter'.format(i + 1) for i in range(layer_num)] +
['Encoder-{}-MultiHeadSelfAttention-Norm'.format(i + 1) for i in range(layer_num)] +
['Encoder-{}-FeedForward-Norm'.format(i + 1) for i in range(layer_num)],
)
</code></pre>
|
<p>I believe you can check the following <a href="https://medium.com/@brn.pistone/bert-fine-tuning-for-tensorflow-2-0-with-keras-api-9913fc1348f6" rel="nofollow noreferrer">article</a>. The author shows how to load a pre-trained BERT model, embed it in a Keras layer, and use it in a customized deep neural network.
First install the TensorFlow 2.0 Keras implementation of google-research/bert:</p>
<p><code>pip install bert-for-tf2</code></p>
<p>Then run: </p>
<pre><code>import bert
import os
def createBertLayer():
global bert_layer
bertDir = os.path.join(modelBertDir, "multi_cased_L-12_H-768_A-12")
bert_params = bert.params_from_pretrained_ckpt(bertDir)
bert_layer = bert.BertModelLayer.from_params(bert_params, name="bert")
bert_layer.apply_adapter_freeze()
def loadBertCheckpoint():
modelsFolder = os.path.join(modelBertDir, "multi_cased_L-12_H-768_A-12")
checkpointName = os.path.join(modelsFolder, "bert_model.ckpt")
bert.load_stock_weights(bert_layer, checkpointName)
</code></pre>
| 143
|
implement classification
|
Implementing a Basic Neural Network to Perform Classification with Neuroph
|
https://stackoverflow.com/questions/74893033/implementing-a-basic-neural-network-to-perform-classification-with-neuroph
|
<p>I'm very new to neural networks, and decided to try implementing a basic one using Neuroph in Java to perform multiclass classification with a Multilayer Perceptron.</p>
<pre><code>public static void main(String[] args) {
final MultiLayerPerceptron neuralNetwork = new MultiLayerPerceptron(2, 3, 3, 3);
final BackPropagation rule = new BackPropagation();
rule.setLearningRate(0.1);
rule.setMaxError(0.001);
rule.setMaxIterations(10000);
neuralNetwork.setLearningRule(rule);
final Layer softMaxLayer = neuralNetwork.getLayerAt(neuralNetwork.getLayersCount() - 2);
final SoftMax max = new SoftMax(softMaxLayer);
for (Neuron n : softMaxLayer.getNeurons())
n.setTransferFunction(max);
// for (Neuron n : this.getOutputNeurons())
// n.setTransferFunction(new Linear());
DataSet trainingSet = new DataSet(2, 3);
// Output contains only one instance of "1" to represent the class it belongs to
trainingSet.add(new double[]{0, 0}, new double[]{1, 0, 0});
trainingSet.add(new double[]{0, 0}, new double[]{1, 0, 0});
trainingSet.add(new double[]{0, 0}, new double[]{1, 0, 0});
trainingSet.add(new double[]{0, 1}, new double[]{0, 1, 0});
trainingSet.add(new double[]{0, 1}, new double[]{1, 0, 0});
trainingSet.add(new double[]{1, 0}, new double[]{0, 0, 1});
trainingSet.add(new double[]{1, 1}, new double[]{0, 0, 1});
trainingSet.add(new double[]{1, 1}, new double[]{0, 1, 0});
trainingSet.add(new double[]{1, 1}, new double[]{0, 1, 0});
// Train the neural network on the training set
neuralNetwork.learn(trainingSet);
// Test the neural network on a new example
neuralNetwork.setInput(0, 1);
neuralNetwork.calculate();
double[] output = neuralNetwork.getOutput();
System.out.println(output[0]); // prints the probability of class 0
System.out.println(output[1]); // prints the probability of class 1
System.out.println(output[2]); // prints the probability of class 2
System.out.println("\n\nTotal (=1?): " + (output[0] + output[1] + output[2]));
System.out.println(neuralNetwork.getLearningRule().getTotalNetworkError());
}
</code></pre>
<p>The output of the neural network should be a probability distribution, but its sum is slightly greater than or less than 1.</p>
<p>I generally tried to follow <a href="https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/softmax" rel="nofollow noreferrer">this guide</a>, the idea being that I implement a softmax layer before the output layer. I assume I made a basic mistake here; if anyone knows what I did incorrectly, please let me know.</p>
<p>EDIT: Example output:</p>
<pre><code>0.8514345496167552
0.23687369660618554
0.001890247376537854
Total (=1?): 1.0901984935994786
0.17265593270797627
</code></pre>
<p>(First three lines are the three output neuron values, followed by the sum of them, followed by the total network error).</p>
| 144
|
|
implement classification
|
How to implement Negation Features in SVM classification (NLP) using imdb Movie_Reviews corpus
|
https://stackoverflow.com/questions/27818607/how-to-implement-negation-features-in-svm-classification-nlp-using-imdb-movie
|
<p>I am trying to understand the negation feature in NLP, so I thought I would implement it.
I am working on the imdb movie review dataset.
Consider that I have data as follows:</p>
<pre><code>Movie was great but it's overly sentimental and at times terribly mushy , not to mention very manipulative but great action
</code></pre>
<p>From the above I can extract <strong>it's overly sentimental and at times terribly mushy</strong> as a negative statement, and now I am left with these choices:</p>
<ul>
<li>I extract the particular line up to its ending punctuation, simply remove it from the positive statement, and run the SVM classifier on the rest of the content.</li>
<li>I extract the particular line, label it as negative, and add it to the list of negative statements used for training.</li>
</ul>
<p>I am not sure that I am doing anything right here, so please suggest exactly how I should deal with negation features to improve the classification.</p>
<p>I am working with the scikit-learn svm.SVC() classifier.</p>
|
<p>You can check this <a href="http://www.umiacs.umd.edu/~saif/WebPages/Abstracts/NRC-SentimentAnalysis.htm" rel="nofollow">NRC Sentiment Analysis</a> system for text classification using negation. It's very well explained. Also they claim their <a href="http://www.aclweb.org/anthology/S14-2077" rel="nofollow">SemEval 2014 submission</a> has major improvements on negation handling (I still haven't read it).</p>
<p>I assume you're solving a similar task on movie reviews so this must be what you're looking for.</p>
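A lightweight alternative to cutting the negated span out entirely is to mark every token inside a negation cue's scope, a standard trick in sentiment classification (and part of what systems like NRC's do). A sketch, where the cue list and the scope rule (until the next clause-ending punctuation) are deliberate simplifications:

```python
import re

NEGATION_CUES = {"not", "no", "never", "cannot"}

def mark_negation(text):
    """Append _NEG to every token between a negation cue and the next
    clause-ending punctuation, so 'not great' and 'great' become
    different features for the SVM."""
    tokens = re.findall(r"[\w']+|[.,!?;]", text.lower())
    out, in_scope = [], False
    for tok in tokens:
        if tok in ".,!?;":
            in_scope = False           # punctuation closes the negation scope
            out.append(tok)
        elif tok in NEGATION_CUES or tok.endswith("n't"):
            in_scope = True            # cue word opens a scope
            out.append(tok)
        else:
            out.append(tok + "_NEG" if in_scope else tok)
    return out
```

Feeding these marked tokens into the usual bag-of-words pipeline lets the classifier learn separate weights for negated and non-negated occurrences of the same word, instead of discarding the negated text.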
| 145
|
implement classification
|
Use pandas to implement data classification
|
https://stackoverflow.com/questions/70520090/use-pandas-to-implement-data-classification
|
<p>I have an Excel file in which the rows represent days (365 in total) and the columns represent regions (2310 in total); each cell holds that day's temperature. I want to bucket temperatures into 1° intervals (e.g. 1°-2°). For each region, I take its 365 daily temperatures, assign each to a 1° interval, and find the interval containing the most days. Every region needs this operation. How can I write this in Python?
<a href="https://i.sstatic.net/4mKZ5.png" rel="nofollow noreferrer">screenshot of the spreadsheet</a></p>
|
<pre><code>import math
import pandas as pd

df = pd.read_excel('Step 1- Mapping SA2 to Climate Zone- Mean Temperature.xlsx',
                   sheet_name='Daily Mean 2003-SA2')
df_value = df.drop(columns=['date'])

# For every region (column), floor each temperature to its 1-degree band,
# then keep the band that occurs on the most days.
most_frequent_band = {}
for col in df_value.columns:
    counts = df_value[col].apply(math.floor).value_counts()
    top = counts.idxmax()
    most_frequent_band[col] = ('{}-{}'.format(float(top), float(top + 1)),
                               int(counts.max()))
</code></pre>
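The same idea on a self-contained toy frame (the temperatures for the two regions below are invented), which makes the floor-and-count logic easy to verify:

```python
import numpy as np
import pandas as pd

# Invented stand-in for the real spreadsheet: 10 days x 2 regions
temps = pd.DataFrame({
    "region_a": [15.2, 15.7, 16.1, 15.9, 17.3, 15.4, 18.0, 15.1, 16.6, 15.8],
    "region_b": [20.4, 21.1, 20.9, 22.5, 20.2, 21.7, 20.6, 20.3, 23.0, 20.8],
})

def modal_degree_band(series):
    """Floor each reading to its 1-degree band; return the band with most days."""
    counts = np.floor(series).astype(int).value_counts()
    top = counts.idxmax()
    return f"{top}-{top + 1}", int(counts.max())

result = {col: modal_degree_band(temps[col]) for col in temps.columns}
```

With 2310 regions this dictionary comprehension over columns replaces the manual `while` loop entirely.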
| 146
|
implement classification
|
How to implement a Classification Problem in Deep Learning for an EEG application?
|
https://stackoverflow.com/questions/63042214/how-to-implement-a-classification-problem-in-deep-learning-for-an-eeg-applicatio
|
<p>I am new to this and would like to know how I should implement a classification problem for my EEG application. I did some digging and found that traditional machine learning methods based on EEG signals usually consider time-domain and/or frequency-domain features. Similarly, I have frequency-domain data (i.e. power spectral density) that I am going to use as features.
I am planning to use a CNN architecture for this neural network with the TensorFlow library. Am I thinking straight with this one? I need to know where to start, though I have gone through some deep learning courses and textbooks.
Additionally, previous research converts EEG signals to 2D EEG images with the Azimuthal Equidistant Projection (AEP) technique. Is this mandatory? Can't we just convert the EEG signals (a 2D matrix in a CSV file) to an image file, or even use the matrix as it is without changing it to an image?</p>
| 147
|
|
implement classification
|
Classification perceptron implementation
|
https://stackoverflow.com/questions/45258144/classification-perceptron-implementation
|
<p>I have written a Perceptron example in Python from <a href="https://www.youtube.com/watch?v=ntKn5TPHHAk" rel="nofollow noreferrer">here</a>.</p>
<p>Here is the complete code </p>
<pre><code>import matplotlib.pyplot as plt
import random as rnd
import matplotlib.animation as animation
NUM_POINTS = 5
LEANING_RATE=0.1
fig = plt.figure() # an empty figure with no axes
ax1 = fig.add_subplot(1,1,1)
plt.xlim(0, 120)
plt.ylim(0, 120)
points = []
weights = [rnd.uniform(-1,1),rnd.uniform(-1,1),rnd.uniform(-1,1)]
circles = []
plt.plot([x for x in range(100)], [x for x in range(100)])
for i in range(NUM_POINTS):
x = rnd.uniform(1, 100)
y = rnd.uniform(1, 100)
circ = plt.Circle((x, y), radius=1, fill=False, color='g')
ax1.add_patch(circ)
points.append((x,y,1))
circles.append(circ)
def activation(val):
if val >= 0:
return 1
else:
return -1;
def guess(pt):
vsum = 0
#x and y and bias weights
vsum = vsum + pt[0] * weights[0]
vsum = vsum + pt[1] * weights[1]
vsum = vsum + pt[2] * weights[2]
gs = activation(vsum)
return gs;
def animate(i):
for i in range(NUM_POINTS):
pt = points[i]
if pt[0] > pt[1]:
target = 1
else:
target = -1
gs = guess(pt)
error = target - gs
if target == gs:
circles[i].set_color('r')
else:
circles[i].set_color('b')
#adjust weights
weights[0] = weights[0] + (pt[0] * error * LEANING_RATE)
weights[1] = weights[1] + (pt[1] * error * LEANING_RATE)
weights[2] = weights[2] + (pt[2] * error * LEANING_RATE)
ani = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()
</code></pre>
<p>I expect the points plotted on the graph to classify themselves as red or blue depending on the expected condition (x coordinate > y coordinate), i.e. above or below the reference line (y=x).</p>
<p>This does not seem to work, and all points go red after some iterations.</p>
<p>What am I doing wrong here? The same thing works in the YouTube example.</p>
|
<p>I looked at your code and the video and I believe the way your code is written, the points start out as green, if their guess matches their target they turn red and if their guess doesn't match the target they turn blue. This repeats with the remaining blue eventually turning red as their guess matches the target. (The changing weights may turn a red to blue but eventually it will be corrected.)</p>
<p>Below is my rework of your code that slows down the process by: adding more points; only processing one point per frame instead of all of them:</p>
<pre><code>import random as rnd
import matplotlib.pyplot as plt
import matplotlib.animation as animation
NUM_POINTS = 100
LEARNING_RATE = 0.1
X, Y = 0, 1
fig = plt.figure() # an empty figure with no axes
ax1 = fig.add_subplot(1, 1, 1)
plt.xlim(0, 120)
plt.ylim(0, 120)
plt.plot([x for x in range(100)], [y for y in range(100)])
weights = [rnd.uniform(-1, 1), rnd.uniform(-1, 1)]
points = []
circles = []
for i in range(NUM_POINTS):
x = rnd.uniform(1, 100)
y = rnd.uniform(1, 100)
points.append((x, y))
circle = plt.Circle((x, y), radius=1, fill=False, color='g')
circles.append(circle)
ax1.add_patch(circle)
def activation(val):
if val >= 0:
return 1
return -1
def guess(point):
vsum = 0
# x and y and bias weights
vsum += point[X] * weights[X]
vsum += point[Y] * weights[Y]
return activation(vsum)
def train(point, error):
# adjust weights
weights[X] += point[X] * error * LEARNING_RATE
weights[Y] += point[Y] * error * LEARNING_RATE
point_index = 0
def animate(frame):
global point_index
point = points[point_index]
if point[X] > point[Y]:
answer = 1 # group A (X > Y)
else:
answer = -1 # group B (Y > X)
guessed = guess(point)
if answer == guessed:
circles[point_index].set_color('r')
else:
circles[point_index].set_color('b')
train(point, answer - guessed)
point_index = (point_index + 1) % NUM_POINTS
ani = animation.FuncAnimation(fig, animate, interval=100)
plt.show()
</code></pre>
<p>I tossed the special 0,0 input fix as it doesn't apply for this example.</p>
<p>The bottom line is that if everything is working, they <strong><em>should</em></strong> all turn red. If you want the color to reflect classification, then you can change this clause:</p>
<pre><code> if answer == guessed:
circles[point_index].set_color('r' if answer == 1 else 'b')
else:
circles[point_index].set_color('g')
train(point, answer - guessed)
</code></pre>
| 148
|
implement classification
|
Pyspark - Classification Implementation
|
https://stackoverflow.com/questions/45161978/pyspark-classification-implementation
|
<p>I have a use case to predict the Multi Class labeled value. I have basic doubts in data preparation using Pyspark Implementation.</p>
<p>Let say I have the below dataset:</p>
<pre><code>A B C Label
10 class1 Boy Cricket
12 class3 Boy Football
11.6 class2 Girl Hockey
..
..
..
..
12.2 class1 Girl Hockey
</code></pre>
<p>In this dataset, everything except feature A is categorical.</p>
<p>Let's say we are using a Decision Tree classifier for multi-class prediction.</p>
<p>I have done these data preparation steps: </p>
<p><strong>Step 1</strong>: Min-Max normalizer for feature A </p>
<p><strong>Step 2</strong>: String Indexer for features A, B, C </p>
<p><strong>Step 3</strong>: One-hot encoded transform for features A, B, C </p>
<p>Now, the transformed dataframe looks like,</p>
<pre><code>A B B_Indexed_transformed C C_Indexed_transformed Label
0.86 Class1 [5,1,[1.0,1.0]] Boy [2,1,[1.0,1.0]] Cricket
.
.
.
</code></pre>
<p>Next, I will keep A, B_Indexed_transformed, C_Indexed_transformed, label columns and drop all other columns. </p>
<p><strong>Step4</strong>: Create LabeledPoint data with [Label, Features] Pair</p>
<p>So, my question is, in order to pass this data to the Decision Tree Algorithm (or any other Classifier), <strong>do I need to do any transformation to the Label column</strong>.</p>
<p>I have done String Indexing for the Label column. Is this the right way to do it?</p>
<p>When I pass the string indexed label column to LabeledPont transformation, I am facing this error:</p>
<pre><code> File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1293, in takeUpToNumLeft
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rddsampler.py", line 95, in func
File "/hba01/yarn/nm/usercache/sbeathanabhotla/appcache/application_1498495374459_1420410/container_1498495374459_1420410_01_000001/build.zip/com/ci/roletagging/service/ModelBuilder.py", line 17, in <lambda>
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/mllib/regression.py", line 51, in __init__
self.label = float(label)
TypeError: float() argument must be a string or a number
</code></pre>
<p>If I pass the Label data as it as string without any transformation, I face this error:</p>
<pre><code>Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1293, in takeUpToNumLeft
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rddsampler.py", line 95, in func
File "/hba06/yarn/nm/usercache/sbeathanabhotla/appcache/application_1498495374459_1425329/container_1498495374459_1425329_01_000001/build.zip/com/ci/roletagging/service/ModelBuilder.py", line 17, in <lambda>
File "/vol1/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/mllib/regression.py", line 51, in __init__
self.label = float(label)
ValueError: could not convert string to float: businessevaluator
</code></pre>
| 149
|
|
implement classification
|
How to implement K-NN classification from a k-d tree?
|
https://stackoverflow.com/questions/54632821/how-to-implement-k-nn-classification-from-a-k-d-tree
|
<p>I'm trying to write code for k-NN classification using a k-d tree without using any libraries. So far I have been able to write the code for the k-d tree, but I can't seem to understand how to find the k nearest neighbors once the tree has been formed from a training set.
k-d tree code:</p>
<pre><code>#include<bits/stdc++.h>
using namespace std;
const int k = 2; // 2-dimensions
struct Node
{
int point[k];
Node *left, *right;
};
struct Node* newNode(int arr[])
{
struct Node* temp = new Node;
for (int i=0; i<k; i++)
temp->point[i] = arr[i];
temp->left = temp->right = NULL;
return temp;
}
// Inserts a new node and returns root of modified tree
Node *insertRec(Node *root, int point[], unsigned depth)
{
if (root == NULL)
return newNode(point);
unsigned cd = depth % k;
if (point[cd] < (root->point[cd]))
root->left = insertRec(root->left, point, depth + 1);
else
root->right = insertRec(root->right, point, depth + 1);
return root;
}
// Function to insert a new point with given point and return new root
Node* insert(Node *root, int point[])
{
return insertRec(root, point, 0);
}
// driver
int main()
{
struct Node *root = NULL;
int points[][k] = {{3, 6}, {17, 15}, {13, 15}, {6, 12},
{9, 1}, {2, 7}, {10, 19}};
int n = sizeof(points)/sizeof(points[0]);
for (int i=0; i<n; i++)
root = insert(root, points[i]);
return 0;
}
</code></pre>
|
<p>First don't use <code><bits/stdc++.h></code>. That's wrong.</p>
<p>To find the k closest elements, you need to go through the tree in a way that will traverse the closest elements first. Then, if you don't have enough elements, go and traverse the ones that are further.</p>
<p>I won't write the code here, just pseudo code (because I already built one <a href="https://github.com/mbrucher/CoverTree/blob/master/kdtree/kdtree.h" rel="nofollow noreferrer">a long time ago</a>):</p>
<pre><code>list l; # list of the elements, sorted by distance
heap p; # heap of nodes to traverse, sorted by distance
p.push(root)
while (!p.empty())
{
node = p.pop(); # Get a new node
d = distance(point, node); # compute the closest distance from the point to the node
if(l.empty() or distance(point, l.back()) > d)
{
        p.push(node->left);   # queue subtrees for later traversal
        p.push(node->right);
        l.push(node->points); # add points from the current node
}
l.pop_elements(k); # pop elements to keep only k
}
</code></pre>
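<p>A minimal Python sketch of the same keep-the-k-best idea (it brute-forces a flat candidate list rather than descending a k-d tree, and the function name is illustrative), using a heap exactly as in the pseudo code:</p>

```python
import heapq

def k_nearest(points, query, k):
    """Return the k points closest to `query` (squared Euclidean distance).

    A size-k heap of negated distances keeps only the current best
    candidates, mirroring the pop/keep-only-k step in the pseudo code.
    """
    heap = []  # entries are (-distance, point)
    for p in points:
        d = sum((a - b) ** 2 for a, b in zip(p, query))
        if len(heap) < k:
            heapq.heappush(heap, (-d, p))
        elif -heap[0][0] > d:  # current worst candidate is farther than p
            heapq.heapreplace(heap, (-d, p))
    return [p for _, p in sorted(heap, key=lambda e: -e[0])]

pts = [(3, 6), (17, 15), (13, 15), (6, 12), (9, 1), (2, 7), (10, 19)]
print(k_nearest(pts, (4, 8), 3))
```

<p>The real speedup of the k-d tree comes from pruning subtrees whose bounding region is farther than the current worst of the k candidates; the heap bookkeeping above stays the same.</p>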
| 150
|
implement classification
|
SparkNLP Text classification using BertSentenceEmbeddings
|
https://stackoverflow.com/questions/65236746/sparknlp-text-classification-using-bertsentenceembeddings
|
<p>I am struggling with implementing a classification use case using <code>BertSentenceEmbeddings</code> in Python. Mostly I get a <code>classNotFoundError</code>, and I think I am unable to figure out the right versions of the libraries (spark-nlp, pyspark).
I followed most of the options suggested on the web but had no luck.</p>
<p>Any suggestions/tutorial would be the great help. Thanks.</p>
<p>Here's my <a href="https://colab.research.google.com/drive/1gJ_yATbniiXnYyMd25fB9pIsjCK5wr1y?usp=sharing" rel="nofollow noreferrer">notebook</a>.</p>
|
<p>This <a href="https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/5.Text_Classification_with_ClassifierDL.ipynb" rel="nofollow noreferrer">tutorial</a> helped me solve this error.</p>
<p>Thank you Maziyar for the help on Spark-NLP slack.</p>
| 151
|
implement classification
|
Implement multivariate normal pdf in c++ for image classification
|
https://stackoverflow.com/questions/17282956/implement-multivariate-normal-pdf-in-c-for-image-classification
|
<p>I am looking to implement the multivariate normal PDF [ <a href="http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Density_function" rel="nofollow">1</a> ] in C++ to assign each pixel in an image membership of a class, i.e.</p>
<pre><code> for each pixel
for each class
compute multivariate normal PDF using the pixel's feature vector and the class' mean vector and covariance matrix
end
end
</code></pre>
<p>Is there a library that can do this in an efficient manner (i.e. similar to Matlab's mvnpdf function[<a href="http://www.mathworks.co.uk/help/stats/mvnpdf.html" rel="nofollow">2</a>])? If not any ideas what libraries or approaches would be best (I was thinking of using Eigen).</p>
|
<p>I am not aware of a ready-to-go one-step solution. For a two-step mix-and-match approach, you could familiarize yourself with <a href="http://www.boost.org/doc/libs/1_53_0/libs/math/doc/html/index.html" rel="nofollow">Boost.Math</a> which has an <a href="http://www.boost.org/doc/libs/1_53_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/stat_tut/weg/normal_example/normal_misc.html" rel="nofollow">extended example</a> for the <strong>univariate</strong> normal distribution in the statistical distribution section:</p>
<pre><code>// [...] many headers and namespaces inclusions
int main()
{
// Construct a standard normal distribution s
normal s; // (default mean = zero, and standard deviation = unity)
cout << "Standard normal distribution, mean = "<< s.mean()
<< ", standard deviation = " << s.standard_deviation() << endl;
/*` First the probability distribution function (pdf).
*/
cout << "Probability distribution function values" << endl;
cout << " z " " pdf " << endl;
cout.precision(5);
for (double z = -range; z < range + step; z += step)
{
cout << left << setprecision(3) << setw(6) << z << " "
<< setprecision(precision) << setw(12) << pdf(s, z) << endl;
}
cout.precision(6); // default
// [...] much more
}
</code></pre>
<p>You could then use Eigen to do the necessary vector and matrix manipulation to pass a scalar to that. This <a href="http://lost-found-wandering.blogspot.nl/2011/05/sampling-from-multivariate-normal-in-c.html" rel="nofollow">blog posting</a> has more details (although it uses <a href="http://www.boost.org/doc/libs/1_53_0/doc/html/boost_random.html" rel="nofollow">Boost.Random</a> to generate the sample values).</p>
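<p>If a library routine turns out to be unnecessary, the density formula itself is only a few lines. Here is a NumPy sketch of the computation for clarity (the C++/Eigen version is a near-direct transliteration of the same linear algebra):</p>

```python
import numpy as np

def mvnpdf(x, mu, sigma):
    """Density of a multivariate normal N(mu, sigma) evaluated at x."""
    x, mu, sigma = np.asarray(x, float), np.asarray(mu, float), np.asarray(sigma, float)
    diff = x - mu
    # solve() instead of an explicit inverse: cheaper and numerically safer
    maha = diff @ np.linalg.solve(sigma, diff)
    norm = np.sqrt((2 * np.pi) ** mu.size * np.linalg.det(sigma))
    return np.exp(-0.5 * maha) / norm

# standard 2-D normal at the origin: density is 1 / (2*pi)
print(mvnpdf([0, 0], [0, 0], np.eye(2)))
```

<p>For the per-pixel loop, precompute the Cholesky factor (or inverse and determinant) of each class covariance once, since they are shared by every pixel.</p>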
| 152
|
implement classification
|
Is there a GPU implementation multiclass classification function in MATLAB?
|
https://stackoverflow.com/questions/35809904/is-there-a-gpu-implementation-multiclass-classification-function-in-matlab
|
<p>I have a multiclass classification task, and I have tried to use 'trainSoftmaxLayer' in Matlab, but it's a CPU implementation and it is slow. So I looked in the documentation for a GPU option, like 'trainSoftmaxLayer('useGPU', 'yes')' as in traditional neural networks, but there aren't any related options. </p>
|
<p>Finally, the problem is solved by hacking the source code of trainSoftmaxLayer.m, which is provided by MATLAB. We can write our own GPU-enabled softmax layer like this:</p>
<pre><code>function [net] = trainClassifier(x, t, use_gpu, showWindow)
net = network;
% define topology
net.numInputs = 1;
net.numLayers = 1;
net.biasConnect = 1;
net.inputConnect(1, 1) = 1;
net.outputConnect = 1;
% set values for labels
net.name = 'Softmax Classifier with GPU Option';
net.layers{1}.name = 'Softmax Layer';
% define transfer function
net.layers{1}.transferFcn = 'softmax';
% set parameters
net.performFcn = 'crossentropy';
net.trainFcn = 'trainscg';
net.trainParam.epochs = 1000;
net.trainParam.showWindow = showWindow;
net.divideFcn = 'dividetrain';
if use_gpu == 1
net = train(net, x, full(t), 'useGPU', 'yes');
else
net = train(net, x, full(t));
end
end
</code></pre>
| 153
|
implement classification
|
What is open set classification in data mining?
|
https://stackoverflow.com/questions/67326572/what-is-open-set-classification-in-data-mining
|
<p>What exactly is open set classification in data mining? Is it a synonym or another word for one of these classification types?</p>
<p>-Binary Classification <br>
-Multi-Class Classification <br>
-Multi-Label Classification <br>
-Imbalanced Classification</p>
<p>I have been browsing the web for a while but can't seem to find any implementation of open set classification in either Python or Matlab. Can anyone provide good resources on how to implement open set classification?</p>
|
<p>According to <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238302#:%7E:text=Open%20set%20classification%20(OSC)%20is,an%20incorrect%20label%20%5B3%5D." rel="nofollow noreferrer">this researh paper</a></p>
<blockquote>
<p>Open set classification (OSC) is the ability for a classifier to reject a novel input from classes unseen during training rather than assigning it an incorrect label</p>
</blockquote>
<p>Take for example a model that has been trained to recognize cats through images. The model may make a mistake when tested with an image from a class that was not included in the training set, although it still performs well on images of known classes. The role of OSC is its ability to classify the irrelevant image as "Unknown" instead of "Cat".</p>
<p>Here's a <a href="https://github.com/LincLabUCCS/Open-set-text-classification-using-neural-networks" rel="nofollow noreferrer">github code sample</a> in python, it might be helpful to you.</p>
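<p>A minimal sketch of the usual open-set baseline: reject any prediction whose confidence (maximum class probability) falls below a threshold and report it as "Unknown". The threshold value here is an assumption that would be tuned on validation data:</p>

```python
def open_set_predict(class_probs, labels, threshold=0.7):
    """Return the most likely label, or "Unknown" when the classifier's
    confidence (max probability) falls below the rejection threshold."""
    best = max(range(len(class_probs)), key=class_probs.__getitem__)
    return labels[best] if class_probs[best] >= threshold else "Unknown"

labels = ["cat", "dog"]
print(open_set_predict([0.95, 0.05], labels))  # confident -> "cat"
print(open_set_predict([0.55, 0.45], labels))  # low confidence -> "Unknown"
```

<p>More elaborate OSC methods (e.g. distance-based or OpenMax-style rejection) refine this same reject-option idea.</p>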
| 154
|
implement classification
|
How to implement asymmetric relationship in machine learning Classification algorithm?
|
https://stackoverflow.com/questions/67857508/how-to-implement-asymmetric-relationship-in-machine-learning-classification-algo
|
<p>I am trying to build a classification system that takes pairs of images as input and outputs {0 or 1} depending on whether image B is a sub-category of image A.</p>
<p>Below are some examples of the data.</p>
<ul>
<li>Input: Image A [Apple Tree] & Image B [Apple] | Output: [1]</li>
<li>Input: [Orange Tree] & [Orange] | Output: [1]</li>
<li>Input: [Apple Tree] & [Orange Tree] | Output: [0]</li>
<li>Input: [Apple] & [Apple Tree] | Output: [0]</li>
</ul>
<p>This requires an asymmetric relationship (apple tree --> apple but not the other way around) in the model architecture, and I have struggled to find structures that help with this situation.</p>
<p>I have tried researching product recommendation / clothing compatibility papers because they seem to involve asymmetric relationships (e.g. laptop to charger). But most of the research seems to use collaborative filtering without addressing the asymmetry issue.</p>
<p>Is there any model structures or papers that address the asymmetry problem? Any kind of help will be appreciated.</p>
|
<p>I suggest using some kind of data augmentation to teach your model the nature of the problem. Say your model has an input layer that accepts two images and a flag for the desired relation; you can add synthetic pairs to clarify the direction of the relation to the model. Precisely, for each positive (label=1) record, add another synthetic sample with label 0 just by reversing the order of the images!</p>
<pre><code>[Orange Tree] & [Orange] | Label: [1] # Real
[Orange] & [Orange Tree] | Label: [0] # Synthetic
</code></pre>
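<p>As a sketch, the augmentation is a one-pass transform over (image A, image B, label) records; names here are illustrative, with strings standing in for the actual images:</p>

```python
def add_reversed_negatives(records):
    """For every positive pair (a, b, 1), append the reversed pair (b, a, 0),
    teaching the model that the sub-category relation is one-directional."""
    augmented = list(records)
    for a, b, label in records:
        if label == 1:
            augmented.append((b, a, 0))
    return augmented

data = [("orange_tree", "orange", 1), ("apple_tree", "orange_tree", 0)]
print(add_reversed_negatives(data))
```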
| 155
|
implement classification
|
How to stratify the training and testing data in Scikit-Learn?
|
https://stackoverflow.com/questions/60530673/how-to-stratify-the-training-and-testing-data-in-scikit-learn
|
<p>I am trying to implement a classification algorithm for the Iris dataset (downloaded from Kaggle). In the Species column the classes (Iris-setosa, Iris-versicolor, Iris-virginica) are in sorted order. How can I stratify the train and test data using Scikit-Learn?</p>
|
<p>If you want to shuffle and split your data with 0.3 test ratio, you can use</p>
<pre><code>sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True)
</code></pre>
<p>where X is your data, y is corresponding labels, <strong>test_size</strong> is the percentage of the data that should be held over for testing, <strong>shuffle=True</strong> shuffles the data before splitting</p>
<p>In order to make sure that the data is split equally according to a column, you can give it to the <strong>stratify</strong> parameter.</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
shuffle=True,
stratify = X['YOUR_COLUMN_LABEL'])
</code></pre>
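<p>For the Iris case, the column to stratify on is the label vector itself. A small sketch with toy labels standing in for the sorted Species column (class proportions are preserved exactly in both splits):</p>

```python
from collections import Counter
from sklearn.model_selection import train_test_split

# toy stand-in for the sorted Iris labels: three classes of 30 samples each
X = [[i] for i in range(90)]
y = ["setosa"] * 30 + ["versicolor"] * 30 + ["virginica"] * 30

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, stratify=y, random_state=0)

print(Counter(y_test))  # every class contributes exactly 30% of its samples
```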
| 156
|
implement classification
|
Scikit-learn: SVM implementation (I obtain a perfect classification)
|
https://stackoverflow.com/questions/31211026/scikit-learn-svm-implementation-i-obtain-a-perfect-classification
|
<p>I am new to machine learning and to scikit-learn. I started to use it in order to classify different datasets. For that I started applying scikit-learn's SVC with <a href="http://pastebin.com/wxSLhEsB" rel="nofollow">this dataset</a>. It's a huge JSON with a structure similar to this:</p>
<pre><code>{"name": "anyName", "tables":
[
{"tableName":"dislexia", "datos":[
{"rasgo1:"0.86 , "rasgo2":45 ... "rasgo61": 2},
.
.
.
{"rasgo1:"0.3, "rasgo2":5... "rasgo61": 32},
]
},
{"tableName":"fibro", "datos":[
{"feature1:"0.86 , "feature2":45 ... "feature61": 2},
.
.
.
{"feature1:"0.3, "feature2":5... "feature61": 32},
]
}
]
}
</code></pre>
<p>However, I obtain a perfect classification, even if I only train the SVM with one sample of each type. Can someone tell me why I am obtaining a perfect classification and what I am doing wrong? Shouldn't it have some errors?</p>
<p><strong>My code:</strong></p>
<pre><code>def prueba3(request):
from sklearn.feature_extraction import DictVectorizer
from sklearn import svm, preprocessing
from sklearn import cross_validation
import numpy as np
from sklearn.lda import LDA
from sklearn.qda import QDA
from sklearn.svm import NuSVC
listaTrain=[]
listaLabels=[]
listaTest=[]
#I have several JSONs in a NoSQL Database and with this method I obtain them
proyectos = list(projectCollection.find())
#Now from the obtained list I search and obtain the JSON which has a field
#with the value "pruebaClasificadores2"
proyecto=buscarProyecto(proyectos,"pruebaClasificadores2")
#The last 2 methods above can be substituted by putting the dropbox's
#JSON in the variable "proyecto"
#From the JSON dataset y search for the JSONs with tableName "dislexia" and "fibro"
atributo0=buscarAtributo(proyecto,"fibro")
atributo1=buscarAtributo(proyecto,"artritis")
#I split the data in the dataset: 25 samples from "dislexia" (its label is 1)
#and 25 from "fibro" (its label is 0) go for the "trainingList",
#the rest of the samples of both labels go to the "testList"
contador0=0
for dato in atributo0["datos"]:
if contador0<25:
listaTrain.append(dato)
listaLabels.append(0)
contador0+=1
else:
listaTest.append(dato)
contador0=0
for dato in atributo1["datos"]:
if contador0<25:
listaTrain.append(dato)
listaLabels.append(1)
contador0+=1
else:
listaTest.append(dato)
print "lista labels"
print listaLabels
vec = DictVectorizer()
#I transform the trainingList with JSON components to a usable matrix
X=vec.fit_transform(listaTrain)
#I normalize the matrix
X=preprocessing.scale(X.toarray())
#I transform the testList with JSON components to a usable matrix
testX=vec.fit_transform(listaTest)
#I normalize the matrix
testX=preprocessing.scale(testX.toarray())
clf1=NuSVC()
#I train the classifier
trained1=clf1.fit(X,listaLabels)
#I send the testList to the classifier
predicted1=clf1.predict(testX)
print
print "NuSVC"
print predicted1 #I display the predicted labels of the classifier
#I count the number of labels of type 0 and 1 and display them
fibro1=0
artritis1=0
for pos in range(0,len(predicted1)):
if (predicted1[pos]==0):
fibro1+=1
else:
artritis1+=1
print "fibro1"
print fibro1
print "artritis"
print artritis1
return HttpResponseRedirect('/tecnico/')
def buscarAtributo(proyecto,atributoC):
encontrado=False
atributos=proyecto["atributos"]
for a in range(0,len(atributos)):
atributo=atributos[a]
if atributo["nombreAtributo"]==atributoC:
atrib=atributo
encontrado=True
if encontrado:
return atrib
else:
print "no se ha encontrado el atributo"
</code></pre>
| 157
|
|
implement classification
|
Implementing HuggingFace BERT using tensorflow fro sentence classification
|
https://stackoverflow.com/questions/62370354/implementing-huggingface-bert-using-tensorflow-fro-sentence-classification
|
<p>I am trying to train a model for real disaster tweet prediction (Kaggle competition) using the Hugging Face BERT model for classification of the tweets.</p>
<p>I have followed many tutorials and used many BERT models, but none could run in Colab without throwing the error below.</p>
<p>My code is:</p>
<pre><code>!pip install transformers
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.callbacks import ModelCheckpoint
from transformers import DistilBertTokenizer, RobertaTokenizer
train = pd.read_csv("/content/drive/My Drive/Kaggle_disaster/train.csv")
test = pd.read_csv("/content/drive/My Drive/Kaggle_disaster/test.csv")
roberta = 'distilbert-base-uncased'
tokenizer = DistilBertTokenizer.from_pretrained(roberta, do_lower_case = True, add_special_tokens = True, max_length = 128, pad_to_max_length = True)
def tokenize(sentences, tokenizer):
input_ids, input_masks, input_segments = [], [], []
for sentence in sentences:
inputs = tokenizer.encode_plus(sentence, add_special_tokens = True, max_length = 128, pad_to_max_length = True, return_attention_mask = True, return_token_type_ids = True)
input_ids.append(inputs['input_ids'])
input_masks.append(inputs['attention_mask'])
input_segments.append(inputs['token_type_ids'])
return np.asarray(input_ids, dtype = "int32"), np.asarray(input_masks, dtype = "int32"), np.asarray(input_segments, dtype = "int32")
input_ids, input_masks, input_segments = tokenize(train.text.values, tokenizer)
from transformers import TFDistilBertForSequenceClassification, DistilBertConfig, TFDistilBertModel
distil_bert = 'distilbert-base-uncased'
config = DistilBertConfig(dropout=0.2, attention_dropout=0.2)
config.output_hidden_states = False
transformer_model = TFDistilBertModel.from_pretrained(distil_bert, config = config)
input_ids_in = tf.keras.layers.Input(shape=(128,), name='input_token', dtype=tf.int32)
input_masks_in = tf.keras.layers.Input(shape=(128,), name='masked_token', dtype=tf.int32)
embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in)[0]
X = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(embedding_layer)
X = tf.keras.layers.GlobalMaxPool1D()(X)
X = tf.keras.layers.Dense(50, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(1, activation='sigmoid')(X)
model = tf.keras.Model(inputs=[input_ids_in, input_masks_in], outputs = X)
model.compile(Adam(lr = 1e-5), loss = 'binary_crossentropy', metrics = ['accuracy'])
for layer in model.layers[:3]:
layer.trainable = False
bert_input = [
input_ids,
input_masks
]
checkpoint = ModelCheckpoint('/content/drive/My Drive/disaster_model/model_hugging_face.h5', monitor = 'val_loss', save_best_only= True)
train_history = model.fit(
bert_input,
validation_split = 0.2,
batch_size = 16,
epochs = 10,
callbacks = [checkpoint]
)
</code></pre>
<p>On running the above code in colab I get the following error:</p>
<pre><code>Epoch 1/10
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-91-9df711c91040> in <module>()
9 batch_size = 16,
10 epochs = 10,
---> 11 callbacks = [checkpoint]
12 )
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step **
self.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize
trainable_variables))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients
filtered_grads_and_vars = _filter_grads(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['tf_distil_bert_model_23/distilbert/embeddings/word_embeddings/weight:0', 'tf_distil_bert_model_23/distilbert/embeddings/position_embeddings/embeddings:0', 'tf_distil_bert_model_23/distilbert/embeddings/LayerNorm/gamma:0', 'tf_distil_bert_model_23/distilbert/embeddings/LayerNorm/beta:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/q_lin/kernel:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/q_lin/bias:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/k_lin/kernel:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/k_lin/bias:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/v_lin/kernel:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/v_lin/bias:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/out_lin/kernel:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/attention/out_lin/bias:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/sa_layer_norm/gamma:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/sa_layer_norm/beta:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/ffn/lin1/kernel:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/ffn/lin1/bias:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/ffn/lin2/kernel:0', 'tf_distil_bert_model_23/distilbert/transformer/layer_._0/ffn/lin2/bias:0', 'tf_...
</code></pre>
|
<p>Follow this tutorial on Text classification using BERT: <a href="https://pysnacks.com/machine-learning/bert-text-classification-with-fine-tuning/" rel="nofollow noreferrer">https://pysnacks.com/machine-learning/bert-text-classification-with-fine-tuning/</a></p>
<p>It has working code on Google Colab(using GPU) and Kaggle for binary, multi-class and multi-label text classification using BERT.</p>
<p>Hope that helps.</p>
| 158
|
implement classification
|
How to fix 'Object arrays cannot be loaded when allow_pickle=False' for imdb.load_data() function?
|
https://stackoverflow.com/questions/55890813/how-to-fix-object-arrays-cannot-be-loaded-when-allow-pickle-false-for-imdb-loa
|
<p>I'm trying to implement the binary classification example using the IMDb dataset in <strong>Google Colab</strong>. I have implemented this model before. But when I tried to do it again after a few days, it returned a <code>value error: 'Object arrays cannot be loaded when allow_pickle=False'</code> for the load_data() function.</p>
<p>I have already tried solving this, referring to an existing answer for a similar problem: <a href="https://stackoverflow.com/questions/55824625/how-to-fix-object-arrays-cannot-be-loaded-when-allow-pickle-false-in-the-sketc?answertab=votes#tab-top">How to fix 'Object arrays cannot be loaded when allow_pickle=False' in the sketch_rnn algorithm</a>.
But it turns out that just adding an allow_pickle argument isn't sufficient.</p>
<p>My code:</p>
<pre><code>from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
</code></pre>
<p>The error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-1-2ab3902db485> in <module>()
1 from keras.datasets import imdb
----> 2 (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
2 frames
/usr/local/lib/python3.6/dist-packages/keras/datasets/imdb.py in load_data(path, num_words, skip_top, maxlen, seed, start_char, oov_char, index_from, **kwargs)
57 file_hash='599dadb1135973df5b59232a0e9a887c')
58 with np.load(path) as f:
---> 59 x_train, labels_train = f['x_train'], f['y_train']
60 x_test, labels_test = f['x_test'], f['y_test']
61
/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in __getitem__(self, key)
260 return format.read_array(bytes,
261 allow_pickle=self.allow_pickle,
--> 262 pickle_kwargs=self.pickle_kwargs)
263 else:
264 return self.zip.read(key)
/usr/local/lib/python3.6/dist-packages/numpy/lib/format.py in read_array(fp, allow_pickle, pickle_kwargs)
690 # The array contained Python objects. We need to unpickle the data.
691 if not allow_pickle:
--> 692 raise ValueError("Object arrays cannot be loaded when "
693 "allow_pickle=False")
694 if pickle_kwargs is None:
ValueError: Object arrays cannot be loaded when allow_pickle=False
</code></pre>
|
<p>Here's a trick to force <code>imdb.load_data</code> to allow pickle: in your notebook, replace this line:</p>
<pre class="lang-py prettyprint-override"><code>(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
</code></pre>
<p>by this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
# call load_data with allow_pickle implicitly set to true
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# restore np.load for future normal usage
np.load = np_load_old
</code></pre>
| 159
|
implement classification
|
knn classification 10 fold implement and sorting
|
https://stackoverflow.com/questions/33930897/knn-classification-10-fold-implement-and-sorting
|
<p>I have 8 features from a .mat file.
Each feature is divided into 4 parts (X_train, Y_train, X_test, Y_test),
obtained randomly 10 times.
Now I need to classify these features with KNN.
My code is here:</p>
<pre><code> kk=7;
bb=1;
mdl1= ClassificationKNN.fit([X1_train{bb};X2_train{bb};X3_train{bb};X4_train{bb};X5_train{bb};X6_train{bb};X7_train{bb};X8_train{bb};X9_train{bb};X10_train{bb};X11_train{bb};X12_train{bb}],[Y1_train{bb};Y2_train{bb};Y3_train{bb};Y4_train{bb};Y5_train{bb};Y6_train{bb};Y7_train{bb};Y8_train{bb};Y9_train{bb};Y10_train{bb};Y11_train{bb};Y12_train{bb}],'NumNeighbors',kk);
.
.
.
bb=10;
mdl10= ClassificationKNN.fit([X1_train{bb};X2_train{bb};X3_train{bb};X4_train{bb};X5_train{bb};X6_train{bb};X7_train{bb};X8_train{bb};X9_train{bb};X10_train{bb};X11_train{bb};X12_train{bb}],[Y1_train{bb};Y2_train{bb};Y3_train{bb};Y4_train{bb};Y5_train{bb};Y6_train{bb};Y7_train{bb};Y8_train{bb};Y9_train{bb};Y10_train{bb};Y11_train{bb};Y12_train{bb}],'NumNeighbors',kk);
</code></pre>
<p>As you can see, this function is repeated 10 times to evaluate the 10 models.
To simplify the project, I wrote the following code:</p>
<pre><code>for j=1:10
for h=1:12
mdl{j}{h}=ClassificationKNN.fit([X_train{j}{h}],[Y_train{j}{h}]);
end
end
</code></pre>
<p>This code works properly without <code>mdl{j}{h}</code>, but when that assignment is used I get the error message "Cell contents assignment to a non-cell array object".
Does anybody know what I should do to fix this problem?
Thanks.</p>
|
<p>First you should define the size of the <code>mdl</code> variable:</p>
<pre><code>mdll= cell(10, 8);
</code></pre>
<p>then form this for loop </p>
<pre><code>for j=1:10
mdll{j}= ClassificationKNN.fit([X_train{j}{1};X_train{j}{2};X_train{j}{3};X_train{j}{4};X_train{j}{5};X_train{j}{6};X_train{j}{7};X_train{j}{8};X_train{j}{9};X_train{j}{10};X_train{j}{11};X_train{j}{12}],[Y_train{j}{1};Y_train{j}{2};Y_train{j}{3};Y_train{j}{4};Y_train{j}{5};Y_train{j}{6};Y_train{j}{7};Y_train{j}{8};Y_train{j}{9};Y_train{j}{10};Y_train{j}{11};Y_train{j}{12}],'NumNeighbors',kk);
end
</code></pre>
<p>I checked it and it works correctly.</p>
| 160
|
implement classification
|
How to implement multilabel classification on UTKFace dataset using Tensorflow and Keras?
|
https://stackoverflow.com/questions/66852823/how-to-implement-multilabel-classification-on-utkface-dataset-using-tensorflow-a
|
<p>Basically I am trying to predict Age, Gender and Race from <a href="https://susanqq.github.io/UTKFace/" rel="nofollow noreferrer">UTKFace dataset</a> by building a multilabel classification model using Tensorflow and Keras. This is what my preprocessed dataset looks like. I have a couple of questions here:</p>
<ol>
<li>What should be the class_mode in ImageDataGenerator <code>class_mode="multi_output"</code> or <code>class_mode="raw"</code> and why? I tried both <code>class_mode="multi_output"</code>(Throws <code>AttributeError: 'tuple' object has no attribute 'shape')</code> and <code>class_mode="raw"</code>( throws <code>InvalidArgumentError: Input to reshape is a tensor with 15745024 values, but the requested shape requires a multiple of 294912. [[node sequential_1/flatten_1/Reshape (defined at <ipython-input-22-9c3a9b687782>:4) ]] [Op:__inference_train_function_2086]</code>)</li>
<li>What should be the loss function for each of the features(Age, Gender, Race) and why?</li>
</ol>
<p><a href="https://i.sstatic.net/Ztkmr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ztkmr.png" alt="preprocessed data" /></a>
Here is what I have done so far</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Activation, MaxPool2D, Dropout, Flatten
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import matplotlib.image as mimg
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator
!tar -xvf /content/UTKFace.tar.gz
!tar -xvf /content/crop_part1.tar.gz
!mv /content/UTKFace/* /content/data
!mv /content/crop_part1/* /content/data
contentdata = []
import os
for i in os.listdir("data"):
content = i.split("_")
if ((content[0].isnumeric()) and (content[1].isnumeric()) and (content[2].isnumeric())):
contentdata.append([content[0],content[1],content[2],os.path.join("data",i)])
#print(content[2])
#imgarray = plt.imread(os.path.join("data",i))
data = pd.DataFrame(contentdata,columns=["Age","Gender","Racevalues","Filepath"])
data.head(10)
data.Age = data.Age.astype('float')
data.Gender = data.Gender.astype('float')
data['Racevalues'] = data['Racevalues'].astype('float')
data.Filepath = data.Filepath.astype('string')
data.dtypes
train, test = train_test_split(data, test_size=0.1)
testdatagenerator = ImageDataGenerator(rescale=1. /255)
testdata = testdatagenerator.flow_from_dataframe(dataframe=test,directory=None,x_col="Filepath",y_col=["Age","Gender","Racevalues"],class_mode="raw")
traindatagenerator = ImageDataGenerator(rescale=1. /255,shear_range =0.2,zoom_range=0.2,horizontal_flip =True)
traindata = traindatagenerator.flow_from_dataframe(dataframe=train,directory=None,x_col="Filepath",y_col=["Age","Gender","Racevalues"],class_mode="raw")
#model = []
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=(100,100,3)))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='sigmoid'))
model.compile(optimizer="Adam",loss="binary_crossentropy",metrics=["accuracy"])
model.fit(traindata,
steps_per_epoch=100,epochs=100,
validation_data=testdata,
validation_steps=100,batch_size=20)
</code></pre>
<p>Can someone guide me through this?</p>
|
<p>First you should use to_categorical (one-hot encoding) on your labels:</p>
<pre><code>df['Age'] = tf.keras.utils.to_categorical(df['Age'])
df['Gender'] = tf.keras.utils.to_categorical(df['Gender'])
df['Racevalues'] = tf.keras.utils.to_categorical(df['Racevalues'])
</code></pre>
<p>And so:</p>
<pre><code>traindata = traindatagenerator.flow_from_dataframe(
train,
IMG_PATH,
x_col='Filepath',
y_col=["Age", "Gender", "Racevalues"],
target_size=IMAGE_SIZE,
class_mode="multi_output",
batch_size=BATCH_SIZE
)
</code></pre>
<p>The last step is to adapt your model to 3 outputs:</p>
<pre><code>input_node = base_model.get_layer('last_layer_model').output
x = Flatten()(input_node)
age_output = Dense(n_age_labels, activation='softmax', name='age')(x)
x = Flatten()(input_node)
gender_output = Dense(2, activation='softmax', name='gender')(x)
x = Flatten()(input_node)
race_output = Dense(n_race_labels, activation='softmax', name='race')(x)
model = Model(base_model.input, [age_output, gender_output, race_output])
model.compile(optimizer = Adam(learning_rate=1e-4),
loss = {
'age': 'categorical_crossentropy',
'gender': 'binary_crossentropy',
'race': 'categorical_crossentropy',
},
metrics = {
'age': 'accuracy',
'gender': 'accuracy',
'race': 'accuracy'
}
)
</code></pre>
<p>P.S.: In this example, I'm using a classifier approach to predict age; you can change this to regression. Using a classifier, you should complement the prediction with:</p>
<pre><code>output_indexes = np.array([i for i in range(0, 101)])
apparent_predictions = np.sum(age_predictions * output_indexes, axis = 1)
</code></pre>
| 161
|
implement classification
|
Classification of detectors, extractors and matchers
|
https://stackoverflow.com/questions/14808429/classification-of-detectors-extractors-and-matchers
|
<p>I am new to opencv and trying to implement image matching between two images. For this purpose, I'm trying to understand the difference between feature descriptors, descriptor extractors and descriptor matchers. I came across a lot of terms and tried to read about them on the opencv documentation website but I just can't seem to wrap my head around the concepts. I understood the basic difference here. <a href="https://stackoverflow.com/questions/6832933/difference-between-feature-detection-and-descriptor-extraction?rq=1">Difference between Feature Detection and Descriptor Extraction</a></p>
<p>But I came across the following terms while studying the topic:</p>
<blockquote>
<p>FAST, GFTT, SIFT, SURF, MSER, STAR, ORB, BRISK, FREAK, BRIEF </p>
</blockquote>
<p>I understand how FAST, SIFT, SURF work but can't seem to figure out which ones of the above are only detectors and which are extractors.</p>
<p>Then there are the matchers. </p>
<blockquote>
<p>FlannBased, BruteForce, knnMatch and probably some others.</p>
</blockquote>
<p>After some reading, I figured that certain matchers can only be used with certain extractors as explained here. <a href="https://stackoverflow.com/questions/7232651/how-does-opencv-orb-feature-detector-work">How Does OpenCV ORB Feature Detector Work?</a>
The classification given is quite clear but it's only for a few extractors and I don't understand the difference between float and uchar. </p>
<p>So basically, can someone please </p>
<ol>
<li>classify the types of detectors, extractors and matchers based on float and uchar, as mentioned, or some other type of classification?</li>
<li>explain the difference between the float and uchar classification or whichever classification is being used?</li>
<li>mention how to initialize (code) various types of detectors, extractors and matchers?</li>
</ol>
<p>I know it's asking for a lot but I'll be highly grateful.
Thank you.</p>
|
<blockquote>
<p>I understand how FAST, SIFT, SURF work but can't seem to figure out
which ones of the above are only detectors and which are extractors.</p>
</blockquote>
<p>Basically, from that list of feature detectors/extractors (link to articles: <a href="http://www.edwardrosten.com/work/fast.html" rel="noreferrer">FAST</a>, <a href="http://www.ai.mit.edu/courses/6.891/handouts/shi94good.pdf" rel="noreferrer">GFTT</a>, <a href="http://www.cs.ubc.ca/~lowe/keypoints/" rel="noreferrer">SIFT</a>, <a href="http://www.vision.ee.ethz.ch/~surf/eccv06.pdf" rel="noreferrer">SURF</a>, <a href="https://en.wikipedia.org/wiki/Maximally_stable_extremal_regions" rel="noreferrer">MSER</a>, <a href="http://link.springer.com/chapter/10.1007/978-3-540-88693-8_8" rel="noreferrer">STAR</a>, <a href="http://www.willowgarage.com/sites/default/files/orb_final.pdf" rel="noreferrer">ORB</a>, <a href="http://www.robots.ox.ac.uk/~vgg/rg/papers/brisk.pdf" rel="noreferrer">BRISK</a>, <a href="http://infoscience.epfl.ch/record/175537/files/2069.pdf" rel="noreferrer">FREAK</a>, <a href="https://www.cs.ubc.ca/~lowe/525/papers/calonder_eccv10.pdf" rel="noreferrer">BRIEF</a>), some of them are only feature detectors (<strong>FAST, GFTT</strong>) others are both feature detectors and descriptor extractors (<strong>SIFT, SURF, ORB, FREAK</strong>). </p>
<p>If I remember correctly, <strong>BRIEF</strong> is only a descriptor extractor, so it needs features detected by some other algorithm like FAST or ORB.</p>
<p>To be sure which is which, you have to either browse the article related to the algorithm or browse opencv documentation to see which was implemented for the <code>FeatureDetector</code> class or which was for the <code>DescriptorExtractor</code> class.</p>
<blockquote>
<p>Q1: classify the types of detectors, extractors and matchers based on
float and uchar, as mentioned, or some other type of classification?</p>
<p>Q2: explain the difference between the float and uchar classification
or whichever classification is being used?</p>
</blockquote>
<p>Regarding <strong>questions 1 and 2</strong>, to classify them as float and uchar, the <a href="https://stackoverflow.com/questions/7232651/how-does-opencv-orb-feature-detector-work">link you already posted</a> is the best reference I know, maybe someone will be able to complete it.</p>
<blockquote>
<p>Q3: mention how to initialize (code) various types of detectors,
extractors and matchers?</p>
</blockquote>
<p>Answering <strong>question 3</strong>, OpenCV made the code to use the various types quite the same - mainly you have to choose one feature detector. Most of the difference is in choosing the type of matcher and you already mentioned the 3 ones that OpenCV has. Your best bet here is to read the documentation, <a href="http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography" rel="noreferrer">code samples</a>, and related Stack Overflow questions. Also, some blog posts are an excellent source of information, like these <a href="https://gist.github.com/ruimarques/ec5a504eace0abde4d7151b8e06dbd65" rel="noreferrer">series of feature detector benchmarks by Ievgen Khvedchenia</a> (The blog is no longer available so I had to create a raw text copy from its google cache).</p>
<p><strong>Matchers</strong> are used to find if a descriptor is similar to another descriptor from a list. You can either compare your query descriptor with all other descriptors from the list (<strong>BruteForce</strong>) or you use a better heuristic (<strong>FlannBased, knnMatch</strong>). The problem is that the heuristics do not work for all types of descriptors. For example, FlannBased implementation used to work only with <code>float</code> descriptors but not with <code>uchar</code>'s (But since 2.4.0, FlannBased with LSH index can be applied to uchar descriptors).</p>
<p>Quoting <a href="http://archive.is/0krz3" rel="noreferrer">this App-Solut blog post</a> about the <code>DescriptorMatcher</code> types:</p>
<blockquote>
<p>The DescriptorMatcher comes in the varieties “FlannBased”,
“BruteForceMatcher”, “BruteForce-L1” and “BruteForce-HammingLUT”. The
“FlannBased” matcher uses the flann (fast library for approximate
nearest neighbors) library under the hood to perform faster but
approximate matching. The “BruteForce-*” versions exhaustively searche
the dictionary to find the closest match for an image feature to a
word in the dictionary.</p>
</blockquote>
<p><strong>Some of the more popular combinations are:</strong></p>
<p><strong>Feature Detectors / Decriptor Extractors / Matchers types</strong></p>
<ul>
<li><p>(FAST, SURF) / SURF / FlannBased</p></li>
<li><p>(FAST, SIFT) / SIFT / FlannBased</p></li>
<li><p>(FAST, ORB) / ORB / Bruteforce</p></li>
<li><p>(FAST, ORB) / BRIEF / Bruteforce</p></li>
<li><p>(FAST, SURF) / FREAK / Bruteforce</p></li>
</ul>
<p>You might have also noticed there are a few <strong>adapters (Dynamic, Pyramid, Grid)</strong> to the feature detectors. <a href="http://archive.is/0krz3" rel="noreferrer">The App-Solut blog post</a> summarizes really nicely their use:</p>
<blockquote>
<p>(...) and there are also a couple of adapters one can use to change
the behavior of the key point detectors. For example the <code>Dynamic</code>
adapter which adjusts a detector type specific detection threshold
until enough key-points are found in an image or the <code>Pyramid</code> adapter
which constructs a Gaussian pyramid to detect points on multiple
scales. The <code>Pyramid</code> adapter is useful for feature descriptors which
are not scale invariant.</p>
</blockquote>
<p><strong>Further reading:</strong></p>
<ul>
<li><p><a href="http://littlecheesecake.me/blog1/2013/05/25/feature-detection.html" rel="noreferrer">This blog post by Yu Lu</a> does a very nice summary description on SIFT, FAST, SURF, BRIEF, ORB, BRISK and FREAK. </p></li>
<li><p>These <a href="https://gilscvblog.com/2013/08/26/tutorial-on-binary-descriptors-part-1/" rel="noreferrer">series of posts by Gil Levi</a> also do detailed summaries for several of these algorithms (BRIEF, ORB, BRISK and FREAK).</p></li>
</ul>
| 162
|
implement classification
|
How to implement kmeans clustering as a feature for classification techniques in SVM?
|
https://stackoverflow.com/questions/70532307/how-to-implement-kmeans-clustering-as-a-feature-for-classification-techniques-in
|
<p>I've already created a clustering and saved the model, but I'm confused about what I should do with this model and how to use it as a feature for classification.
This clustering is based on the coordinates of crime locations. After the data has been clustered, I want to use the cluster output as features in an SVM.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import random
import numpy as np
import xlrd
import pickle
import tkinter as tk
from tkinter import *
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
#kmeans section
#Creating and labelling latitudes of X and Y and plotting it
data=pd.read_excel("sanfrancisco.xlsx")
x1=data['X']
y1=data['Y']
X = np.array(list(zip(x1,y1)))
# Elbow method
from sklearn.cluster import KMeans
wcss = [] #empty string
# to check in range for 10 cluster
for i in range(1,11):
kmeans = KMeans(n_clusters=i, init='k-means++') # will generate centroids
kmeans.fit(X)
wcss.append(kmeans.inertia_) # to find euclidean distance
plot1 = plt.figure(1)
plt.xlabel("Number of Clusters")
plt.ylabel("Euclidean Distance")
plt.plot(range(1,11), wcss)
k = 3
# data visual section.. Eg: how many crimes in diff month, most number of crime in a day in a week
# most number crime in what address, most number of crimes in what city, how many crime occur
# in how much time. , etc..
# X coordinates of random centroids
C_x = np.random.randint(0, np.max(X)-20, size=k)
# Y coordinates of random centroids
C_y = np.random.randint(0, np.max(X)-20, size=k)
C = np.array(list(zip(C_x,C_y)), dtype=np.float32)
print("Initial Centroids")
print(C)
# n_clusters takes the number of clusters; init chooses random data points for the initial centroids
# by default scikit-learn runs the fit 10 times and chooses the best one; here n_init is set to 1
model = KMeans(n_clusters=k, init='random', n_init=1)
model.fit_transform(X)
centroids = model.cluster_centers_ # final centroids
rgb_colors = {0.: 'y',
1.: 'c',
2.: 'fuchsia',
}
if k == 4:
rgb_colors[3.] = 'lime'
if k == 6:
rgb_colors[3.] = 'lime'
rgb_colors[4.] = 'orange'
rgb_colors[5.] = 'tomato'
new_labels = pd.Series(model.labels_.astype(float)) # label that predicted by kmeans
plot2 = plt.figure(2)
plt.scatter(x1, y1, c=new_labels.map(rgb_colors), s=20)
plt.scatter(centroids[:, 0], centroids[:, 1], marker='*', c='black', s=200 )
plt.xlabel('Final Cluster Centers\n Iteration Count=' +str(model.n_iter_)+
'\n Objective Function Value: ' +str(model.inertia_))
plt.ylabel('y')
plt.title("k-Means")
plt.show()
# save the model to disk
filename = 'clusteredmatrix.sav'
pickle.dump(model, open(filename,'wb'))
</code></pre>
|
<p>Your problem is not very clear, but if you want to see the behavior of clusters, I recommend you use a tool like <a href="https://www.cs.waikato.ac.nz/ml/weka/" rel="nofollow noreferrer">Weka</a>, so that you can freely cluster them and draw meaningful inferences before going into the complex coding stuff!</p>
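<p>That said, if the goal is specifically to feed the saved k-means model into an SVM, one common pattern with scikit-learn is to use the predicted cluster label and/or the distances to the centroids (<code>KMeans.transform</code>) as extra input columns. The data below is a synthetic stand-in for the crime coordinates and labels, so treat this only as a sketch of the wiring, not of the real dataset:</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(300, 2)                       # stand-in for the (X, Y) coordinates
y = (X[:, 0] + X[:, 1] > 1).astype(int)    # stand-in class labels

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# option A: one extra feature = cluster id; option B: distances to each centroid
cluster_id = kmeans.predict(X).reshape(-1, 1)
distances = kmeans.transform(X)            # shape (n_samples, n_clusters)
X_aug = np.hstack([X, cluster_id, distances])

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

<p>The pickled model from the question can be loaded with <code>pickle.load</code> and used the same way, as long as the SVM's test data is transformed with the same fitted <code>KMeans</code> object.</p>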
| 163
|
implement classification
|
How can I implement zero-shot classification using MindsDB and MQL (for my MongoDB instance)?
|
https://stackoverflow.com/questions/75522060/how-can-i-implement-zero-shot-classification-using-mindsdb-and-mql-for-my-mongo
|
<p>I am using <a href="https://github.com/mindsdb/mindsdb" rel="nofollow noreferrer">MindsDB</a> to do a zero-shot classification using the <code>facebook/bart-large-mnli</code> model and the web store data I have in a MongoDB instance. MindsDB documentation only explains how to achieve this using SQL, but I would like to achieve this using MQL for my MongoDB instance.</p>
<p>Here is the doc on how to achieve zero-shot classification using SQL.</p>
<pre class="lang-sql prettyprint-override"><code>CREATE MODEL mindsdb.hf_zs_bart
PREDICT PRED
USING
engine = 'huggingface',
task = 'zero-shot-classification',
model_name = 'facebook/bart-large-mnli',
input_column = 'text',
candidate_labels = ['Books', 'Household', 'Clothing & Accessories', 'Electronics'];
</code></pre>
<p>But as this is for SQL, MongoDB Compass will throw a parsing error. Has anyone else tried achieving this using MQL? Should I use SQL and switch to MQL to achieve this?</p>
|
<p>MindsDB just released a new version of the docs that contains the Mongo Query Language examples. To use the zero-shot classification you will need to provide <strong>task</strong> and <strong>candidate_labels</strong> in the model training_parameters as:</p>
<pre><code>db.models.insertOne({
name: 'my_model_name',
predict: 'pred',
training_options: {
    engine: 'huggingface',
    task: 'zero-shot-classification',
model_name: 'facebook/bart-large-mnli',
input_column: 'text',
candidate_labels: ['Books','Household','Clothes']
}
})
</code></pre>
<p>For more information and the supported mongo syntax, you can check the new <a href="https://docs.mindsdb.com/using-mongo-api/nlp" rel="nofollow noreferrer">NLP Mongo docs</a></p>
| 164
|
implement classification
|
Implementation of n-grams in python code for multi-class text classification
|
https://stackoverflow.com/questions/55555159/implementation-of-n-grams-in-python-code-for-multi-class-text-classification
|
<p>I am new to Python and working on multi-class text classification of contract documents in the construction industry. I am facing problems with the implementation of n-grams in my code, which I put together with help from different online sources. I want to implement unigrams, bigrams, and trigrams in my code. Any help in this regard would be highly appreciated.</p>
<p>I have tried bigrams and trigrams in the Tfidf part of my code, but it is not working.</p>
<pre><code> df = pd.read_csv('projectdataayes.csv')
df = df[pd.notnull(df['types'])]
my_types = ['Requirement','Non-Requirement']
#converting to lower case
df['description'] = df.description.map(lambda x: x.lower())
#Removing the punctuation
df['description'] = df.description.str.replace('[^\w\s]', '')
#splitting the word into tokens
df['description'] = df['description'].apply(tokenize.word_tokenize)
#stemming
stemmer = PorterStemmer()
df['description'] = df['description'].apply(lambda x: [stemmer.stem(y) for y in x])
print(df[:10])
## This converts the list of words into space-separated strings
df['description'] = df['description'].apply(lambda x: ' '.join(x))
count_vect = CountVectorizer()
counts = count_vect.fit_transform(df['description'])
X_train, X_test, y_train, y_test = train_test_split(counts, df['types'], test_size=0.3, random_state=39)
tfidf_vect_ngram = TfidfVectorizer(analyzer='word',
token_pattern=r'\w{1,}', ngram_range=(2,3), max_features=5000)
tfidf_vect_ngram.fit(df['description'])
X_train_Tfidf = tfidf_vect_ngram.transform(X_train)
X_test_Tfidf = tfidf_vect_ngram.transform(X_test)
model = MultinomialNB().fit(X_train, y_train)
</code></pre>
<p>File "C:\Users\fhassan\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 328, in
tokenize(preprocess(self.decode(doc))), stop_words)</p>
<p>File "C:\Users\fhassan\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 256, in
return lambda x: strip_accents(x.lower())</p>
<p>File "C:\Users\fhassan\anaconda3\lib\site-packages\scipy\sparse\base.py", line 686, in <strong>getattr</strong>
raise AttributeError(attr + " not found")</p>
<p>AttributeError: lower not found</p>
|
<p>At first you fit the vectorizer on the texts:</p>
<pre><code>tfidf_vect_ngram.fit(df['description'])
</code></pre>
<p>And then try to apply it to counts:</p>
<pre><code>counts = count_vect.fit_transform(df['description'])
X_train, X_test, y_train, y_test = train_test_split(counts, df['types'], test_size=0.3, random_state=39)
tfidf_vect_ngram.transform(X_train)
</code></pre>
<p>You need to apply the vectorizer to the texts, not the counts:</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(df['description'], df['types'], test_size=0.3, random_state=39)
tfidf_vect_ngram.transform(X_train)
</code></pre>
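<p>To the original question about unigrams, bigrams, and trigrams: all three can be requested at once with <code>ngram_range=(1, 3)</code>. A small self-contained sketch of the corrected pipeline, where a few toy sentences stand in for the contract data:</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["the contractor shall submit the schedule",
         "the owner may inspect the site",
         "payment shall be made monthly",
         "this section describes general background"]
labels = ["Requirement", "Non-Requirement", "Requirement", "Non-Requirement"]

# unigrams + bigrams + trigrams in a single vectorizer
vect = TfidfVectorizer(analyzer="word", ngram_range=(1, 3))
X = vect.fit_transform(texts)            # fit on raw texts, not on counts
model = MultinomialNB().fit(X, labels)

print(model.predict(vect.transform(["the contractor shall submit reports"]))[0])
```

<p>The key point is that <code>transform</code> is always given raw strings; the <code>AttributeError: lower not found</code> in the question came from handing it an already-vectorized sparse matrix.</p>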
| 165
|
implement classification
|
multiple classification using Liblinear in Accord.net Framework
|
https://stackoverflow.com/questions/29619258/multiple-classification-using-liblinear-in-accord-net-framework
|
<p>I need to implement a multi-class classifier using Liblinear. The Accord.net machine learning framework provides all of Liblinear's properties except the Crammer and Singer formulation for multi-class classification. <a href="http://crsouza.com/2014/12/liblinear-algorithms-in-c/" rel="nofollow">This is the process</a>.</p>
|
<p>The usual way of learning a multi-class machine is by using the <a href="http://accord-framework.net/docs/html/T_Accord_MachineLearning_VectorMachines_Learning_MulticlassSupportVectorLearning.htm" rel="nofollow noreferrer">MulticlassSupportVectorLearning class</a>. This class can teach one-vs-one machines that can then be queried using either voting or elimination strategies. </p>
<p>As such, here is an example on how linear training can be done for multiple classes:</p>
<pre><code>// Let's say we have the following data to be classified
// into three possible classes. Those are the samples:
//
double[][] inputs =
{
// input output
new double[] { 0, 1, 1, 0 }, // 0
new double[] { 0, 1, 0, 0 }, // 0
new double[] { 0, 0, 1, 0 }, // 0
new double[] { 0, 1, 1, 0 }, // 0
new double[] { 0, 1, 0, 0 }, // 0
new double[] { 1, 0, 0, 0 }, // 1
new double[] { 1, 0, 0, 0 }, // 1
new double[] { 1, 0, 0, 1 }, // 1
new double[] { 0, 0, 0, 1 }, // 1
new double[] { 0, 0, 0, 1 }, // 1
new double[] { 1, 1, 1, 1 }, // 2
new double[] { 1, 0, 1, 1 }, // 2
new double[] { 1, 1, 0, 1 }, // 2
new double[] { 0, 1, 1, 1 }, // 2
new double[] { 1, 1, 1, 1 }, // 2
};
int[] outputs = // those are the class labels
{
0, 0, 0, 0, 0,
1, 1, 1, 1, 1,
2, 2, 2, 2, 2,
};
// Create a one-vs-one multi-class SVM learning algorithm
var teacher = new MulticlassSupportVectorLearning<Linear>()
{
// using LIBLINEAR's L2-loss SVC dual for each SVM
Learner = (p) => new LinearDualCoordinateDescent()
{
Loss = Loss.L2
}
};
// Learn a machine
var machine = teacher.Learn(inputs, outputs);
// Obtain class predictions for each sample
int[] predicted = machine.Decide(inputs);
// Compute classification accuracy
double acc = new GeneralConfusionMatrix(expected: outputs, predicted: predicted).Accuracy;
</code></pre>
<p>You can also try to solve a multiclass decision problem using the one-vs-rest strategy. In this case, you can use the <a href="http://accord-framework.net/docs/html/T_Accord_MachineLearning_VectorMachines_Learning_MultilabelSupportVectorLearning.htm" rel="nofollow noreferrer">MultilabelSupportVectorLearning</a> teaching algorithm instead of the multi-class one shown above.</p>
| 166
|
implement classification
|
Implementing Binary Classification for LSTM and Linear Layer Output
|
https://stackoverflow.com/questions/71565894/implementing-binary-classification-for-lstm-and-linear-layer-output
|
<p>I'm working on developing a wake word model for my AI assistant. My model architecture includes an LSTM layer to process audio data, followed by a Linear Layer. However, I'm encountering an unexpected output shape from the Linear Layer, which is causing confusion.</p>
<p>After passing the LSTM output (shape: 4, 32, 32) to the Linear Layer, I expected an output shape of (32, 1). However, the actual output shape is (4, 32, 1).</p>
<p>In my binary classification task, I aim to distinguish between two classes: 0 for "do not wake up" and 1 for "wake up the AI." My batch size is 32, and I anticipated the output to be in the shape (32, 1) to represent one prediction for each audio MFCC input.</p>
<p>Could someone advise on the correct configuration of the Linear Layer or any processing steps needed to achieve the desired output shape of (32, 1)? Any insights or code examples would be greatly appreciated. Below is my model code for reference:</p>
<pre class="lang-py prettyprint-override"><code>class LSTMWakeWord(nn.Module):
def __init__(self,input_size,hidden_size,num_layers,dropout,bidirectional,num_of_classes, device='cpu'):
super(LSTMWakeWord, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.device = device
self.bidirectional = bidirectional
self.directions = 2 if bidirectional else 1
self.lstm = nn.LSTM(input_size=input_size,
hidden_size = hidden_size,
num_layers = num_layers,
dropout=dropout,
bidirectional=bidirectional,
batch_first=True)
self.layernorm = nn.LayerNorm(input_size)
self.classifier = nn.Linear(hidden_size , num_of_classes)
def _init_hidden(self,batch_size):
n, d, hs = self.num_layers, self.directions, self.hidden_size
return (torch.zeros(n * d, batch_size, hs).to(self.device),
torch.zeros(n * d, batch_size, hs).to(self.device))
def forward(self,x):
# the values with e+xxx are gone. so it normalizes the values
x = self.layernorm(x)
# x shape -> feature(n_mfcc),batch,seq_len(time)
hidden = self._init_hidden(x.size()[0])
out, (hn, cn) = self.lstm(x, hidden)
print("hn "+str(hn.shape))# directions∗num_layers, batch, hidden_size
#print("out " + str(out.shape))# batch, seq_len, direction(2 or 1)*hidden_size
out = self.classifier(hn)
print("out2 " + str(out.shape))
return out
</code></pre>
<p>I'd greatly appreciate any insights or guidance on how to handle the Linear Layer output for binary classification.</p>
|
<p>You can try this:</p>
<pre class="lang-py prettyprint-override"><code>hn = hn[-1, :, :]
out = self.classifier(hn)
</code></pre>
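<p>A quick way to see why this works: <code>hn</code> has shape <code>(num_layers * directions, batch, hidden_size)</code>, so indexing with <code>-1</code> keeps only the top layer's final hidden state, giving <code>(batch, hidden_size)</code>; the linear classifier then produces <code>(batch, num_of_classes)</code>, e.g. <code>(32, 1)</code> for a single-logit binary head. A minimal sketch with made-up sizes (13 MFCC features and 40 time steps are illustrative assumptions, not from the question):</p>

```python
import torch
import torch.nn as nn

batch, seq_len, n_mfcc, hidden, num_layers = 32, 40, 13, 32, 4
lstm = nn.LSTM(input_size=n_mfcc, hidden_size=hidden,
               num_layers=num_layers, batch_first=True)
classifier = nn.Linear(hidden, 1)        # 1 logit for wake / no-wake

x = torch.randn(batch, seq_len, n_mfcc)
out, (hn, cn) = lstm(x)
print(hn.shape)                          # (num_layers, batch, hidden)
logits = classifier(hn[-1, :, :])        # keep only the last layer's state
print(logits.shape)                      # (batch, 1)
```

<p>With <code>num_of_classes = 1</code>, the logits can go straight into <code>nn.BCEWithLogitsLoss</code> for the binary wake-word decision.</p>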
| 167
|
implement classification
|
CNN + LSTM implementation error for image classification
|
https://stackoverflow.com/questions/73161256/cnn-lstm-implementation-error-for-image-classification
|
<p>I am trying to implement a CNN + LSTM network to predict 4 different classes based on sequences of x-ray images, which were preprocessed to 150x150x3 shape. My x_train shape is (4067, 150, 150, 3). When I execute
<strong>model.fit()</strong>, I get the error below.</p>
<pre><code># x_train = np.reshape(x_train, (4067, 150, 150, 3))
# y_train = np.reshape(y_train, (4067, 4))
model = Sequential()
model.add(TimeDistributed(Conv2D(filters = 32,
kernel_size=(3,3),
padding='same',
activation = 'relu'),
input_shape=(None, 150, 150, 3)))
model.add(TimeDistributed(AveragePooling2D()))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(100))
model.add(Dense(24, activation='relu',name='output'))
model.add(Dense(4, activation = 'softmax'))
from tensorflow.keras.optimizers import Adam
optimizer = Adam(lr=0.001)
model.compile(optimizer = optimizer,
loss = 'categorical_crossentropy',
metrics=['accuracy'])
from tensorflow.keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor = 'val_accuracy',
factor = 0.3,
patience = 2,
min_delta = 0.001,
mode = 'auto',
verbose = 1)
hist_cnn_lstm = model.fit(x_train, y_train, batch_size=64, epochs=15,
validation_data = (x_valid, y_valid),
callbacks=reduce_lr
)
</code></pre>
<p>ERROR:</p>
<pre><code>Epoch 1/15
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-3ec61fbabcf1> in <module>()
1 hist_cnn_lstm = model.fit(x_train, y_train, batch_size=64, epochs=15,
2 validation_data = (x_valid, y_valid),
----> 3 callbacks=reduce_lr
4 )
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is
ValueError: Input 0 of layer "sequential_1" is incompatible with the layer: expected shape=(None, None, 150, 150, 3), found shape=(None, 150, 150, 3)
</code></pre>
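<p>The mismatch reported is between the 5-D input that <code>TimeDistributed</code> expects, (batch, time, height, width, channels), and the 4-D arrays being passed. As a hedged illustration (using a small placeholder array rather than the real x-ray data, and a length-1 time axis, which you would replace by grouping real image sequences), the shape can be made compatible like this:</p>

```python
import numpy as np

# Placeholder batch standing in for the preprocessed x-ray images.
x_train = np.zeros((8, 150, 150, 3), dtype=np.float32)

# TimeDistributed layers expect (batch, time, height, width, channels),
# so insert a time axis; here it has length 1 for illustration only.
x_train = x_train[:, np.newaxis, ...]
print(x_train.shape)  # (8, 1, 150, 150, 3)
```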
| 168
|
|
implement classification
|
Best method to implement text classification (2 classes)
|
https://stackoverflow.com/questions/20766487/best-method-to-implement-text-classification-2-classes
|
<p>I have to write a classifier for a corpus of texts, which should separate all my texts into 2 classes.
The corpus is very large (nearly 4 million documents for testing, and 50,000 for training).
But which algorithm should I choose?</p>
<ul>
<li>Naive Bayesian</li>
<li>Neural networks</li>
<li>SVM</li>
<li>Random forest</li>
<li>kNN (why not?)</li>
</ul>
<p>I heard that random forests and SVMs are state-of-the-art methods, but maybe someone
has experience with the algorithms listed above and knows which is fastest and which is more accurate?</p>
|
<p>As a 2-classes text classifier, I don't think you need:</p>
<p>(1) kNN: it is a lazy, instance-based method that compares each new document against the training set, so it is slow at prediction time on a corpus of this size;</p>
<p>(2) Random forest: decision trees may not be a good option in high-dimensional, sparse feature spaces;</p>
<p>You can try:</p>
<p>(1) naive bayesian: most straightforward and easiest to code. Proved to work well in text classification problems;</p>
<p>(2) logistic regression: works well if your training sample number is much larger than the feature number;</p>
<p>(3) SVM: again, for training sample much more than features, SVM with linear kernel works as well as logistic regression. And it is also one of the top algorithms in text classification;</p>
<p>(4) Neural network: seems like a panacea in machine learning. In theory it can learn any model that SVM/logistic regression could. The problem is that there are fewer mature packages for neural networks than for SVMs, and as a result the optimization process is time-consuming. </p>
<p>Yet it is hard to say which algorithm is best suit for your case. If you are using python, <a href="http://scikit-learn.org/stable/" rel="nofollow">scikit-learn</a> includes almost all these algorithms for you to test. Besides, <a href="https://www.cs.auckland.ac.nz/courses/compsci367s1c/tutorials/IntroductionToWeka.pdf" rel="nofollow">weka</a>, which integrates many machine learning algorithms in a user friendly graphic interface, is also a good candidate for you to better know the performance of each algorithm.</p>
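<p>As a quick sketch of how two of these candidates compare in practice, here is a minimal scikit-learn example on a made-up two-class corpus (the texts and labels are invented for illustration; with 4 million documents you would stream the data but use the same estimators):</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Tiny invented corpus: class 1 is spam-like, class 0 is normal mail.
texts = ["cheap pills buy now", "limited offer click here",
         "meeting agenda attached", "quarterly report draft"]
labels = [1, 1, 0, 0]

X = TfidfVectorizer().fit_transform(texts)  # sparse document-term matrix

for clf in (MultinomialNB(), LinearSVC()):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.predict(X).tolist())
```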
| 169
|
implement classification
|
I want to implement a machine learning or deep learning model for text classification (100 classes)
|
https://stackoverflow.com/questions/58990776/i-want-to-implement-a-machine-learning-or-deep-learning-model-for-text-classific
|
<p>I have a dataset that is similar to the one where we have movie plots and their genres. The number of classes is around 100. What algorithm should I choose for this 100-class classification? The classification is multi-label because one movie can have multiple genres.
Please recommend one from the following. You are free to suggest any other model if you want to.</p>
<pre><code>1.Naive Bayesian
2.Neural networks
3.SVM
4.Random forest
5.k nearest neighbours
</code></pre>
<p>It would be useful if you also mention the necessary Python library.</p>
|
<p>An important step in machine learning engineering consists of properly inspecting the data. Thereby you get insight that determines which algorithm to choose. Sometimes you might try out more than one algorithm and compare the models, to be sure that you tried your best on the data.</p>
<p>Since you did not disclose your data, I can only give you the following advice: If your data is "easy", meaning that you need only little features and a slight combination of them to solve the task, use Naive Bayes or k-nearest neighbors. If your data is "medium" hard, then use Random Forest or SVM. If solving the task requires a very complicated decision boundary combining many dimensions of the features in a non-linear fashion, choose a Neural Network architecture.</p>
<p>I suggest you use python and the scikit-learn package for SVM or Random forest or k-NN.
For Neural Networks, use keras.</p>
<p>I am sorry that I can not give you THE recipe you might expect for solving your problem. Your question is posed really broad.</p>
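<p>A minimal sketch of the multi-label setup with scikit-learn, as suggested above (the three plots and genres below are invented placeholders; the real dataset would have around 100 genre columns):</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

plots = ["a detective hunts a serial killer",
         "robots invade the earth from space",
         "a detective falls in love in paris"]
genres = [["crime"], ["sci-fi", "action"], ["crime", "romance"]]

X = TfidfVectorizer().fit_transform(plots)
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(genres)  # one binary column per genre

# One binary logistic regression per genre handles the multi-label case.
model = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(model.predict(X).shape)  # one row per plot, one column per genre
```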
| 170
|
implement classification
|
C4.5 decision tree: classification probability distribution?
|
https://stackoverflow.com/questions/11854710/c4-5-decision-tree-classification-probability-distribution
|
<p>I'm using Weka's J48 (C4.5) decision tree classifier. In general for a decision tree, can a classification probability distribution be determined once you hit a leaf? I know with Naive Bayes, each classification attempt produces a classification distribution.</p>
<p>If it is possible with a decision tree, is this capability available in with the Weka J48 tree? I can alternatively try to implement my own tree.</p>
|
<p>Each leaf holds a classification decision that is in fact a discrete distribution: one that has 100% for the class it indicates and 0 for all other classes. You could use the training set to generate a distribution for all inner nodes as well, if you want.</p>
<p>If you prune after learning the tree, you can re-run the training set through the tree and label each leaf with the frequency with which each actual class lands in that leaf; that would be your distribution.</p>
<p><strong>EDIT:</strong> For example, once you have your tree, you can associate with each node a histogram with one bin per class. Then classify the training set: every time a sample passes through a node, add one to the bin for its class. After the full training set has passed through, normalize each histogram to sum to 1. In the end, if you feel the leaves are too close to 100%, you can decide what to prune further by using, for example, the entropy of each histogram.</p>
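<p>For comparison outside Weka, scikit-learn's decision tree stores exactly these per-leaf class frequencies, and <code>predict_proba</code> returns the normalized histogram of the leaf a sample lands in (a sketch on the toy iris data, not your dataset):</p>

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# A shallow tree keeps some leaves impure, so the distributions are non-trivial.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

leaf_ids = clf.apply(X)       # the leaf each training sample falls into
proba = clf.predict_proba(X)  # normalized class histogram of that leaf
print(proba[:3])
```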
| 171
|
implement classification
|
How can I implement regression after multi-class multi-label classification?
|
https://stackoverflow.com/questions/78572569/how-can-i-implement-regression-after-multi-class-multi-label-classification
|
<p>I have a dataset where some objects (15%) belong to different classes and have a property value for each of those classes. How can I make a model that predicts multiple labels or classes and then makes a regression prediction based on the output of the classifier? I also need to output the probabilities for each class. Unfortunately, I can't delete this 15%.
<a href="https://i.sstatic.net/TM6Pn3AJ.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I have no idea how to put it together. I have only found how to implement it separately. Any advice?</p>
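<p>One hedged way to put the two stages together (a sketch with synthetic data and invented array names, not a definitive recipe): train a multi-label classifier for class membership with probability outputs, plus one regressor per class fitted only on the rows where that class is present.</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # synthetic features
Y_cls = (rng.random((200, 3)) < 0.4).astype(int)   # multi-label membership
Y_reg = np.where(Y_cls == 1, rng.normal(size=(200, 3)), np.nan)

# Stage 1: multi-label classifier with per-class probabilities.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y_cls)
proba = clf.predict_proba(X)

# Stage 2: one regressor per class, trained only where the class is present.
regs = [Ridge().fit(X[Y_cls[:, k] == 1], Y_reg[Y_cls[:, k] == 1, k])
        for k in range(3)]
preds = np.column_stack([r.predict(X) for r in regs])
print(proba.shape, preds.shape)  # (200, 3) (200, 3)
```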
| 172
|
|
implement classification
|
Implementation of Data Augmentation for Image Classification with Convolutional Neural Networks
|
https://stackoverflow.com/questions/22050186/implementation-of-data-augmentation-for-image-classification-with-convolutional
|
<p>I'm doing image classification with cudaconvnet with Daniel Nouri's noccn module, and want to implement data augmentation by taking lots of patches of an original image (and flipping it). When would it be best for this to take place?</p>
<p>I've identified 3 stages in the training process when it could:<br>
a) when creating batches from the data<br>
b) when getting the next batch to train<br>
c) given a batch, when getting the next image to feed into the net </p>
<p>It seems to me the advantage of a) is that I can scatter the augmented data across all batches. But it will take up 1000x more space on disk. The original dataset is already 1TB, so that is completely infeasible.</p>
<p>b) and c) don't involve storing the new data on disk, but could I scatter the data across batches? If I don't, then supposing I have batch_size==128 and I can augment my data 1000x, then the next 8 batches will all contain images from the same class. Isn't that bad for training the net because each training sample won't be randomised at all?</p>
<p>Furthermore, if I pick b) or c) and create a new batch from k training examples, then data augmentation by n times will make the batchsize n*k instead of giving me n times more batches.</p>
<p>For example, in my case I have batchsize==128 and can expect 1000x data augmentation. So each batch will actually be of size 128*1000 and all I'll get is more accurate partial derivative estimates (and that to a useless extent because batchsize==128k is pointlessly high).</p>
<p>So what should I do?</p>
|
<p>Right, you'd want to have augmented samples as randomly interspersed throughout the rest of the data as possible. Otherwise, you'll definitely run into problems as you've mentioned because the batches won't be properly sampled and your gradient descent steps will be too biased. I am not too familiar with cudaconvnet, as I primarily work with Torch instead, but I do often run into the same situation as you with artificially augmented data. </p>
<p>Your best bet would be (c), kind of.</p>
<p>For me, the best place to augment the data is right when a sample gets loaded by your trainer's inner loop -- apply the random distortion, flip, crop (or however else you're augmenting your samples) right at that moment and to that <em>single</em> data sample. What this will accomplish is that every time the trainer tries to load a sample, it will actually receive a modified version which will probably be different from any other image it has seen at a previous iteration.</p>
<p>Then, of course, you will need to adjust something else to still get the 1000x data size factor in. Either: </p>
<ol>
<li>Ideally, load more batches per epoch after the inner loop has finished processing the first set. If you have your augmenter set up right, every batch will continue getting random samples so it will all work out well. Torch allows doing this, but it's somewhat tricky and I'm not sure if you'd be able to do the same in cudaconvnet.</li>
<li>Otherwise, simply run the trainer for 1000 more training epochs. Not as elegant, but the end result will be the same. If you later need to report on the number of epochs you actually trained for, simply divide the real count back by 1000 to get a more appropriate estimate based on your 1000x augmented dataset.</li>
</ol>
<p>This way, you'll always have your target classes as randomly distributed throughout your dataset as your original data was, without consuming any extra diskspace to cache your augmented samples. This is, of course, at the cost of additional computing power, since you'd be generating the samples on demand at every step along the way, but you already know that...</p>
<p>Additionally, and perhaps more importantly, your batches will stay at your original 128 size, so the mini-batching process will remain untouched and your learned parameter updates will continue to drop in at the same frequency you'd expect otherwise. This same process would work great also for SGD training (batch size = 1), since the trainer will never see the "same" image twice.</p>
<p>Hope that helps.</p>
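<p>The "augment right when a sample is loaded" idea from option (c) can be sketched framework-independently with numpy (random flip plus random crop; the image and crop sizes here are illustrative, not taken from the question):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Random horizontal flip plus a random 24x24 crop of a 28x28 image."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    top, left = rng.integers(0, 5, size=2)  # 28 - 24 = 4, so offsets 0..4
    return img[top:top + 24, left:left + 24]

# Each time the trainer asks for a batch, it sees freshly distorted samples.
images = rng.random((8, 28, 28))
batch = np.stack([augment(im) for im in images])
print(batch.shape)  # (8, 24, 24)
```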
| 173
|
implement classification
|
How can I implement ROC curve analysis for this naive Bayes classification algorithm in R?
|
https://stackoverflow.com/questions/47883541/how-can-i-implement-roc-curve-analysis-for-this-naive-bayes-classification-algor
|
<p>There are very complicated examples on the Internet, and I couldn't apply them to my code. I have a data set consisting of 14 independent variables and one dependent variable. I'm doing classification in R. Here is my code:</p>
<pre><code>dataset <- read.table("adult.data", sep = ",", na.strings = c(" ?"))
colnames(dataset) <- c( "age",
"workclass",
"fnlwgt",
"education",
"education.num",
"marital.status",
"occupation",
"relationship",
"race",
"sex",
"capital.gain",
"capital.loss",
"hours.per.week",
"native.country",
"is.big.50k")
dataset = na.omit(dataset)
library(caret)
set.seed(1)
traning.indices <- createDataPartition(y = dataset$is.big.50k, p = 0.7, list = FALSE)
training.set <- dataset[traning.indices,]
test.set <- dataset[-traning.indices,]
###################################################################
## Naive Bayes
library(e1071)
classifier = naiveBayes(x = training.set[,-15],
y = training.set$is.big.50k)
prediction = predict(classifier, newdata = test.set[,-15])
cm <- confusionMatrix(data = prediction, reference = test.set[,15],
positive = levels(test.set$is.big.50k)[2])
accuracy <- sum(diag(as.matrix(cm))) / sum(as.matrix(cm))
sensitivity <- sensitivity(prediction, test.set[,15],
positive = levels(test.set$is.big.50k)[2])
specificity <- specificity(prediction, test.set[,15],
negative = levels(test.set$is.big.50k)[1])
</code></pre>
<p>I tried this and it worked. Is there a mistake? Is there any problem with the transformation (the as.numeric() calls)?</p>
<pre><code>library(ROCR)
pred <- prediction(as.numeric(prediction), as.numeric(test.set[,15]))
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, main = "ROC curve for NB",
col = "blue", lwd = 3)
abline(a = 0, b = 1, lwd = 2, lty = 2)
</code></pre>
|
<p>For a ROC curve to work, you need some threshold or hyperparameter.</p>
<p>The numeric output of Bayes classifiers tends to be too unreliable (while the binary decision is usually OK), and there is no obvious hyperparameter. You could try treating your prior probability (in a binary problem only!) as parameter, and plot a ROC curve for that.</p>
<p>But in any case, for the <em>curve</em> to exist, you need a map from some curve parameter t to (TPR, FPR). For example, t could be your prior.</p>
| 174
|
implement classification
|
implementing image classification in rnn
|
https://stackoverflow.com/questions/53041396/implementing-image-classification-in-rnn
|
<p>I have implemented an example of classifying cats and dogs using a CNN. You can get the code from <a href="https://github.com/venkateshtata/cnn_medium./tree/master" rel="nofollow noreferrer">here</a> and <a href="https://becominghuman.ai/building-an-image-classifier-using-deep-learning-in-python-totally-from-a-beginners-perspective-be8dbaf22dd8" rel="nofollow noreferrer">here</a>. I want to do the same but with an RNN. </p>
<p>How to do that? I want to use my own dataset. </p>
<p>Below you find the code I used with my own dataset:</p>
<pre><code># Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation =
'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics
= ['accuracy'])
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
classifier.fit_generator(training_set,
steps_per_epoch = 8000,
epochs = 25,
validation_data = test_set,
validation_steps = 2000)
# Part 3 - Making new predictions
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('dataset/single_prediction/cat_or_dog_1.jpg',
target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] == 1:
    prediction = 'dog'
else:
    prediction = 'cat'
</code></pre>
|
<p>Keras provides an example of how to classify the <a href="https://en.wikipedia.org/wiki/MNIST_database" rel="nofollow noreferrer">MNIST</a> dataset using an <a href="https://en.wikipedia.org/wiki/Long_short-term_memory" rel="nofollow noreferrer">LSTM</a> <a href="https://github.com/keras-team/keras/blob/master/examples/mnist_irnn.py" rel="nofollow noreferrer">here</a>. You can replace their dataset with yours, adjust the number of classes and you should be done.</p>
| 175
|
implement classification
|
unsupervised text classification with php
|
https://stackoverflow.com/questions/15305817/unsupervised-text-classification-with-php
|
<p>Are there any pre-made libraries for PHP that can be used to help with tasks involving unsupervised text classification <sup><a href="http://en.wikipedia.org/wiki/Document_classification#Automatic_document_classification" rel="nofollow">information</a></sup>?</p>
<p>I've looked around the site at other questions, but I have been unable to find a similar problem.</p>
<p>I would like to learn how to implement an unsupervised classification system.</p>
|
<p><a href="https://github.com/gburtini/Learning-Library-for-PHP" rel="nofollow">https://github.com/gburtini/Learning-Library-for-PHP</a></p>
<p>Some general unsupervised algorithms are already implemented here. Maybe it will be useful for you.</p>
| 176
|
implement classification
|
Big performance difference between Pytorch and Keras implementation in text classification
|
https://stackoverflow.com/questions/61737314/big-performance-difference-between-pytorch-and-keras-implementation-in-text-clas
|
<p>I have implemented a CNN in both Keras and Pytorch for a multi-label text classification task. The two implementations ended up with very different performance: the CNN in Keras noticeably outperforms the CNN in Pytorch. Both use a one-layer CNN with kernel size 4.</p>
<p>The Pytorch version scores <strong>0.023 for micro F1</strong> and <strong>0.47 for macro F1</strong>. The model is shown as below (more details in colab notebook):</p>
<pre><code>class CNN_simple(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.conv = nn.Conv1d(in_channels = embedding_dim,
out_channels = n_filters,
kernel_size = 4) # (N,C,L)
self.fc = nn.Linear(n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
embedded = self.embedding(text)
embedded = embedded.permute(0, 2, 1)
conved = F.relu(self.conv(embedded))
pooled = F.max_pool1d(conved, conved.shape[2]).squeeze(2)
dropout = self.dropout(pooled)
return self.fc(dropout)
</code></pre>
<p>The Keras version scores <strong>0.70 for micro F1</strong> and <strong>0.56 for macro F1</strong>. The model is shown as below (more details in colab notebook):</p>
<pre><code>def get_model_cnn():
inp = Input(shape=(MAX_TEXT_LENGTH, ))
x = Embedding(MAX_VOCAB_SIZE, embed_size)(inp)
x = Conv1D(embed_size, 4, activation="relu")(x)
x = GlobalMaxPool1D()(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
</code></pre>
<p>I assume there is something wrong with my Pytorch implementation. Any comment is appreciated. I have created <a href="https://colab.research.google.com/drive/1jdjrOVVvodzjzqrRmIBIyUBrAG8o97Gy" rel="nofollow noreferrer">a colab notebook</a> with the full implementations in Pytorch and Keras. Feel free to copy and run it. Please point out anything I did wrong in the Pytorch implementation. Thanks. </p>
| 177
|
|
implement classification
|
Using Naive bayes classification on android
|
https://stackoverflow.com/questions/33456621/using-naive-bayes-classification-on-android
|
<p>I am developing an Android news app that should extract specific news topics from the web and then classify them further, grouping news articles into categories using naive Bayes classification. Does anybody know how to implement it in Android, or even in Java? </p>
|
<p>You could try finding an <a href="https://github.com/search?l=Java&q=naive+bayes&type=Repositories&utf8=%E2%9C%93" rel="nofollow">open source Java implementation on GitHub</a>.</p>
| 178
|
implement classification
|
One versus One and One versus All multiclass classification using logistic regression in python
|
https://stackoverflow.com/questions/58481324/one-versus-one-and-one-versus-all-multiclass-classification-using-logistic-regre
|
<p>This is my understanding of OvO versus OvA:
One-versus-One trains a binary classifier for each pair of classes, like Banana versus Orange. One-versus-All/Rest turns the task into multiple different binary classification problems, one per class.
My implementations in python of these 2 strategies yield very similar results:</p>
<p><strong>OvA</strong>:</p>
<pre><code>model = LogisticRegression(random_state=0, multi_class='ovr', solver='lbfgs')
model.fit(x,y)
model.predict(x)
</code></pre>
<p><strong>OvO</strong>:</p>
<pre><code> model = LogisticRegression()
model.fit(x,y)
model.predict(x)
</code></pre>
<p>I wanted to confirm my understanding and implementation are correct, since I get similar results.
I need to implement the OvO and OvA strategies for multiclass classification using logistic regression.</p>
|
<p>I ended up using the sklearn built-in classes <code>OneVsRestClassifier</code> and <code>OneVsOneClassifier</code>.</p>
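<p>For reference, a minimal sketch of the two scikit-learn meta-estimators on the toy iris data (3 classes, so OvA fits one binary model per class and OvO fits 3*2/2 = 3 pairwise models):</p>

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)

ova = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

print(len(ova.estimators_))  # K binary models, one per class
print(len(ovo.estimators_))  # K*(K-1)/2 pairwise models
```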
| 179
|
implement classification
|
how to implement tensorflow's next_batch for own data
|
https://stackoverflow.com/questions/40994583/how-to-implement-tensorflows-next-batch-for-own-data
|
<p>In the <a href="https://www.tensorflow.org/versions/r0.10/tutorials/mnist/beginners/index.html" rel="noreferrer">tensorflow MNIST tutorial</a> the <code>mnist.train.next_batch(100)</code> function comes in very handy. I am now trying to implement a simple classification myself. I have my training data in a numpy array. How could I implement a similar function for my own data to give me the next batch?</p>
<pre><code>sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
Xtr, Ytr = loadData()
for it in range(1000):
batch_x = Xtr.next_batch(100)
batch_y = Ytr.next_batch(100)
</code></pre>
|
<p>The link you posted says: <em>"we get a "batch" of one hundred random data points from our training set"</em>. In my example I use a global function (not a method like in your example) so there will be a difference in syntax.</p>
<p>In my function you'll need to pass the number of samples wanted and the data array.</p>
<p>Here is the correct code, which ensures samples have correct labels:</p>
<pre><code>import numpy as np
def next_batch(num, data, labels):
    '''
    Return a total of `num` random samples and labels.
    '''
    idx = np.arange(0, len(data))
    np.random.shuffle(idx)
    idx = idx[:num]
    data_shuffle = [data[i] for i in idx]
    labels_shuffle = [labels[i] for i in idx]
    return np.asarray(data_shuffle), np.asarray(labels_shuffle)

Xtr, Ytr = np.arange(0, 10), np.arange(0, 100).reshape(10, 10)
print(Xtr)
print(Ytr)
Xtr, Ytr = next_batch(5, Xtr, Ytr)
print('\n5 random samples')
print(Xtr)
print(Ytr)
</code></pre>
<p>And a demo run:</p>
<pre><code>[0 1 2 3 4 5 6 7 8 9]
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]]
5 random samples
[9 1 5 6 7]
[[90 91 92 93 94 95 96 97 98 99]
[10 11 12 13 14 15 16 17 18 19]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]]
</code></pre>
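<p>If you instead want full-epoch coverage rather than independent random draws (an assumption about your training setup, not something the tutorial requires), a generator variant shuffles once and yields consecutive slices:</p>

```python
import numpy as np

def batches(data, labels, batch_size, seed=0):
    """Yield (data, labels) minibatches covering each sample exactly once."""
    idx = np.random.default_rng(seed).permutation(len(data))
    for start in range(0, len(data), batch_size):
        sel = idx[start:start + batch_size]
        yield data[sel], labels[sel]

Xtr = np.arange(10)
Ytr = np.arange(100).reshape(10, 10)
for batch_x, batch_y in batches(Xtr, Ytr, 4):
    print(batch_x.shape, batch_y.shape)
```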
| 180
|
implement classification
|
How to implement pixel-wise classification for scene labeling in TensorFlow?
|
https://stackoverflow.com/questions/35317029/how-to-implement-pixel-wise-classification-for-scene-labeling-in-tensorflow
|
<p>I am working on a deep learning model using <strong>Google's TensorFlow</strong>. The model should be used to <strong>segment and label scenes</strong>. </p>
<ol>
<li>I am using the <strong>SiftFlow dataset</strong> which has <em>33 semantic
classes</em> and <em>images with 256x256 pixels</em>. </li>
<li>As a result, at my final layer using convolution and deconvolution I arrive at the following tensor(array) <em>[256, 256, 33]</em>. </li>
<li>Next I would like to
apply <em>Softmax</em> and compare the results to a semantic label of size
<em>[256, 256]</em>.</li>
</ol>
<p><strong>Questions:</strong>
Should I apply mean averaging or argmax to my final layer so its shape becomes <em>[256,256,1]</em> and then loop through each pixel and classify as if I were classifying <em>256x256</em> instances? If the answer is yes, how? If not, what other options are there?</p>
|
<p>To apply softmax and use a <strong>cross entropy loss</strong>, you have to keep <strong>intact</strong> the final output of your network of size <em>batch_size x 256 x 256 x 33</em>. Therefore you <strong>cannot use</strong> mean averaging or argmax because it would destroy the output probabilities of your network.</p>
<p>You have to loop through all the <em>batch_size x 256 x 256</em> pixels and apply a cross entropy loss to your prediction for this pixel. This is easy with the built-in function <code>tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels)</code>.</p>
<p>Some warnings <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/nn.html#sparse_softmax_cross_entropy_with_logits" rel="noreferrer">from the doc</a> before applying the code below:</p>
<ul>
<li>WARNING: This op expects <strong>unscaled logits</strong>, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results. </li>
<li>logits must have the shape [batch_size, num_classes] and the dtype (either float32 or float64).</li>
<li>labels must have the shape [batch_size] and the dtype int64.</li>
</ul>
<p>The trick is to use <code>batch_size * 256 * 256</code> as the batch size required by the function. We will reshape <code>logits</code> and <code>labels</code> to this format.
Here is the code I use:</p>
<pre class="lang-py prettyprint-override"><code>inputs = tf.placeholder(tf.float32, [batch_size, 256, 256, 3]) # input images
logits = inference(inputs) # your outputs of shape [batch_size, 256, 256, 33] (no final softmax !!)
labels = tf.placeholder(tf.int64, [batch_size, 256, 256]) # your labels of shape [batch_size, 256, 256] and type int64
reshaped_logits = tf.reshape(logits, [-1, 33]) # shape [batch_size*256*256, 33]
reshaped_labels = tf.reshape(labels, [-1]) # shape [batch_size*256*256]
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(reshaped_logits, reshaped_labels)
</code></pre>
<p>You can then apply your optimizer on that loss.</p>
<hr>
<h3>Update: v0.10</h3>
<p>The <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#sparse_softmax_cross_entropy_with_logits" rel="noreferrer">documentation</a> of <code>tf.sparse_softmax_cross_entropy_with_logits</code> shows that it now accepts any shape for <code>logits</code>, so there is no need to reshape the tensors (thanks @chillinger):</p>
<pre class="lang-py prettyprint-override"><code>inputs = tf.placeholder(tf.float32, [batch_size, 256, 256, 3]) # input images
logits = inference(inputs) # your outputs of shape [batch_size, 256, 256, 33] (no final softmax !!)
labels = tf.placeholder(tf.int64, [batch_size, 256, 256]) # your labels of shape [batch_size, 256, 256] and type int64
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels)
</code></pre>
| 181
|
implement classification
|
Hyperparameters tuning with keras tuner for classification problem
|
https://stackoverflow.com/questions/72935023/hyperparameters-tuning-with-keras-tuner-for-classification-problem
|
<p>I am trying to implement both the classification problem and the regression problem with Keras Tuner. Here is my code for the regression problem:</p>
<pre><code> def build_model(hp):
model = keras.Sequential()
for i in range(hp.Int('num_layers', 2, 20)):
model.add(layers.Dense(units=hp.Int('units_' + str(i),
min_value=32,
max_value=512,
step=32),
activation='relu'))
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.5))
# Tune whether to use dropout.
model.add(layers.Dense(1, activation='linear'))
model.compile(
optimizer=keras.optimizers.Adam(
hp.Choice('learning_rate', [1e-4, 1e-3, 1e-5])),
loss='mean_absolute_error',
metrics=['mean_absolute_error'])
return model
tuner = RandomSearch(
build_model,
objective='val_mean_absolute_error',
max_trials=5,
executions_per_trial=2,
# overwrite=True,
directory='projects',
project_name='Air Quality Index')
</code></pre>
<p>In order to apply this code to a classification problem, which parameters (loss, objective, metrics, etc.) have to be changed?</p>
|
<p>To use this code for a classification problem, you will have to change the loss function, the objective function and the activation function of your output layer. Depending on the number of classes, you will use different functions:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Number of classes</th>
<th>Two</th>
<th>More than two</th>
</tr>
</thead>
<tbody>
<tr>
<td>Loss</td>
<td><code>binary_crossentropy</code></td>
<td><code>categorical_crossentropy</code></td>
</tr>
<tr>
<td>Tuner objective</td>
<td><code>val_binary_crossentropy</code></td>
<td><code>val_categorical_crossentropy</code></td>
</tr>
<tr>
<td>Last layer activation</td>
<td><code>sigmoid</code></td>
<td><code>softmax</code></td>
</tr>
</tbody>
</table>
</div>
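<p>Concretely, applying the table to the regression sketch in the question means changing only these lines (a configuration delta, not runnable on its own; <code>num_classes</code> is assumed to come from your data, and for two classes you would use one sigmoid unit with <code>binary_crossentropy</code>):</p>

```python
# Output layer: one unit per class, softmax instead of linear.
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(
    optimizer=keras.optimizers.Adam(
        hp.Choice('learning_rate', [1e-4, 1e-3, 1e-5])),
    loss='categorical_crossentropy',
    metrics=['categorical_crossentropy'])

# Tuner objective follows the tracked validation metric.
tuner = RandomSearch(
    build_model,
    objective='val_categorical_crossentropy',
    max_trials=5,
    executions_per_trial=2,
    directory='projects',
    project_name='Air Quality Index')
```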
| 182
|
implement classification
|
Is there a way to inform classifiers in R of the relative costs of misclassification?
|
https://stackoverflow.com/questions/44548414/is-there-a-way-to-inform-classifiers-in-r-of-the-relative-costs-of-misclassifica
|
<p>This is a general question. Are there classifiers in R -- functions that perform classification by implementing classification algorithms -- that accept as an input argument the relative cost of misclassification? E.g. if misclassifying a positive as negative has cost 1, the opposite has cost 3.</p>
<p>If yes which are these functions?</p>
|
<p>Yes. If you are using the <a href="https://cran.r-project.org/web/packages/caret/caret.pdf" rel="nofollow noreferrer">caret</a> package (you should; it provides 'standardization' for 200+ classification and regression methods by wrapping almost all relevant R's packages), you can set the <em>weights</em> argument of the <em>train</em> function (see p.152; see also <a href="https://www.r-bloggers.com/handling-class-imbalance-with-r-and-caret-an-introduction/" rel="nofollow noreferrer">here</a>) for models that support class weights. <a href="https://stats.stackexchange.com/questions/162297/class-weights-in-caret">This answer</a> lists some of the models that support class weights.</p>
| 183
|
implement classification
|
Spark Multi Label classification
|
https://stackoverflow.com/questions/39167288/spark-multi-label-classification
|
<p>I am looking to implement with Spark, a multi label classification algorithm with multi output, but I am surprised that there isn’t any model in Spark Machine Learning libraries that can do this.</p>
<p>How can I do this with Spark ?</p>
<p>Otherwise Scikit Learn Logistic Regresssion support multi label classification in input/output , but doesn't support a huge data for training.</p>
<p>To view the code in scikit-learn, please click on the following link:
<a href="https://gist.github.com/mkbouaziz/5bdb463c99ba9da317a1495d4635d0fc" rel="noreferrer">https://gist.github.com/mkbouaziz/5bdb463c99ba9da317a1495d4635d0fc</a></p>
|
<p>Also in Spark there is Logistic Regression that supports multilabel classification based on the api <a href="http://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/mllib/classification/LogisticRegressionWithSGD.html" rel="noreferrer">documentation</a>. See also <a href="https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/classification/StreamingLogisticRegressionWithSGD.scala" rel="noreferrer">this</a>. </p>
<p>The problem that you have on scikitlearn for the huge amount of training data will disappear with spark, using an appropriate Spark configuration. </p>
<p>Another approach is to use binary classifiers for each of the labels that your problem has, and get multilabel by running relevant-irrelevant predictions for that label. You can easily do that in Spark using any binary classifier.</p>
<p>Indirectly, what might also be of help, is to use multilabel categorization with nearest-neighbors, which is also <a href="http://140.123.102.14:8080/reportSys/file/paper/joji/joji_5_paper.pdf" rel="noreferrer">state-of-the-art</a>. Some nearest neighbors Spark extensions, like <a href="https://github.com/saurfang/spark-knn" rel="noreferrer">Spark KNN</a> or <a href="https://github.com/tdebatty/spark-knn-graphs" rel="noreferrer">Spark KNN graphs</a>, for instance. </p>
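Not from the original answer — a minimal, framework-free sketch of the binary-relevance idea described above: train one binary classifier per label, then assemble the multilabel prediction from the per-label relevant/irrelevant decisions. The classifier functions here are toy stand-ins for real trained Spark models:

```python
def predict_multilabel(x, label_classifiers):
    """Combine per-label binary classifiers into one multilabel prediction.

    label_classifiers maps each label name to a function x -> bool
    (True meaning the label is relevant for x).
    """
    return [label for label, clf in sorted(label_classifiers.items()) if clf(x)]

# Toy stand-ins for trained binary models:
classifiers = {
    "sports": lambda x: "ball" in x,
    "politics": lambda x: "vote" in x,
}
labels = predict_multilabel("the vote on the new ball park", classifiers)
```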
| 184
|
implement classification
|
How can I increase the efficiency of my Python implementation for the k Nearest Neighbours classification?
|
https://stackoverflow.com/questions/42764141/how-can-i-increase-the-efficiency-of-my-python-implementation-for-the-k-nearest
|
<p>I tried to implement the k Nearest Neighbour classification algorithm for images in Python, here is my code:</p>
<pre><code>def classifyImage(self, RGBAValsForOneImage, kVal):
    redMaster = RGBAValsForOneImage[0]
    greenMaster = RGBAValsForOneImage[1]
    blueMaster = RGBAValsForOneImage[2]
    alphaMaster = RGBAValsForOneImage[3]
    L2DistanceDictionary = {}
    kLabels = []

    for i in range(self.nImagesTrain):
        print(str("comparing to image nr " + str(i)))
        L2Norm = 0
        # label = self.LabelsTraining[i]
        for j in range(self.nPixel):
            redGreenBlueAlpha = self.RGBAPixelValuesTraining[i, j, :]
            redCompare = redGreenBlueAlpha[0]
            greenCompare = redGreenBlueAlpha[1]
            blueCompare = redGreenBlueAlpha[2]
            alphaCompare = redGreenBlueAlpha[3]
            L2Norm += np.sqrt((redCompare - redMaster) ** 2 + (greenCompare - greenMaster) ** 2 + (blueCompare - blueMaster) ** 2 + (alphaCompare - alphaMaster) ** 2)[0]
        L2Norm *= 100
        L2NormInt = int(L2Norm)
        alreadyThere = L2DistanceDictionary.get(L2NormInt, [])
        alreadyThere.append(i)
        L2DistanceDictionary[L2NormInt] = alreadyThere

    theSortedKeys = sorted(L2DistanceDictionary.keys())
    howManyUntilNow = 0
    for i in range(0, len(L2DistanceDictionary)):
        thekey = theSortedKeys[i]
        thevalue = L2DistanceDictionary[thekey]
        howMany = len(thevalue)
        for z in range(0, howMany):
            if howManyUntilNow < kVal:
                kLabels.append(self.LabelsTraining[thevalue[z]])
            else:
                break

    labels_to_count = (label for label in kLabels)
    c = Counter(labels_to_count)
    winLabel, count = c.most_common(1)[0]
    return winLabel
</code></pre>
<p>The basic idea of the classifyImage function is to compare the RGBA values of the pixels of the image which needs to be classified to the RGBA values of the other images in the training dataset and return the most common tag among the tags of the k nearest neighbours.</p>
<p>The problem with the code is that it is incredibly slow. Are there any ways to improve the efficiency? I almost never code in Python, so there might be easy ways to improve this code.</p>
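No answer was posted for this question; purely for illustration, here is a dependency-free sketch of how the distance bookkeeping could be simplified — compute one distance per training image, take the k smallest with `heapq.nsmallest`, and majority-vote, instead of bucketing scaled distances in a dictionary. (Vectorising the per-pixel loop with NumPy would speed things up further.)

```python
import heapq
from collections import Counter

def knn_classify(distances, labels, k):
    """distances[i] is the distance from the query image to training image i;
    labels[i] is that image's label. Returns the majority label among the
    k nearest training images."""
    nearest = heapq.nsmallest(k, range(len(distances)), key=distances.__getitem__)
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy example: three training images; the query is closest to the two "cat" images.
pred = knn_classify([0.2, 5.0, 0.3], ["cat", "dog", "cat"], k=2)
```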
| 185
|
|
implement classification
|
Using Tensorflow's Connectionist Temporal Classification (CTC) implementation
|
https://stackoverflow.com/questions/38059247/using-tensorflows-connectionist-temporal-classification-ctc-implementation
|
<p>I'm trying to use the Tensorflow's CTC implementation under contrib package (tf.contrib.ctc.ctc_loss) without success. </p>
<ul>
<li>First of all, anyone know where can I read a good step-by-step tutorial? Tensorflow's documentation is very poor on this topic.</li>
<li>Do I have to provide to ctc_loss the labels with the blank label interleaved or not?</li>
<li>I have not been able to overfit my network, even using a training dataset of length 1 over 200 epochs. :(</li>
<li>How can I calculate the label error rate using tf.edit_distance?</li>
</ul>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>with graph.as_default():
    max_length = X_train.shape[1]
    frame_size = X_train.shape[2]
    max_target_length = y_train.shape[1]

    # Batch size x time steps x data width
    data = tf.placeholder(tf.float32, [None, max_length, frame_size])
    data_length = tf.placeholder(tf.int32, [None])

    # Batch size x max_target_length
    target_dense = tf.placeholder(tf.int32, [None, max_target_length])
    target_length = tf.placeholder(tf.int32, [None])

    # Generating sparse tensor representation of target
    target = ctc_label_dense_to_sparse(target_dense, target_length)

    # Applying LSTM, returning output for each timestep (y_rnn1,
    # [batch_size, max_time, cell.output_size]) and the final state of shape
    # [batch_size, cell.state_size]
    y_rnn1, h_rnn1 = tf.nn.dynamic_rnn(
        tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True, num_proj=num_classes),
        data,
        dtype=tf.float32,
        sequence_length=data_length,
    )

    # For sequence labelling, we want a prediction for each timestep.
    # However, we share the weights for the softmax layer across all timesteps.
    # How do we do that? By flattening the first two dimensions of the output tensor.
    # This way time steps look the same as examples in the batch to the weight matrix.
    # Afterwards, we reshape back to the desired shape.

    # Reshaping
    logits = tf.transpose(y_rnn1, perm=(1, 0, 2))

    # Get the loss by calculating ctc_loss. Also calculates the gradient.
    # This op performs the softmax operation for you, so inputs should be
    # e.g. linear projections of outputs by an LSTM.
    loss = tf.reduce_mean(tf.contrib.ctc.ctc_loss(logits, target, data_length))

    # Define our optimizer with learning rate
    optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(loss)

    # Decoding using beam search
    decoded, log_probabilities = tf.contrib.ctc.ctc_beam_search_decoder(logits, data_length, beam_width=10, top_paths=1)
</code></pre>
<p>Thanks!</p>
<p><strong>Update (06/29/2016)</strong></p>
<p>Thank you, @jihyeon-seo! So, the input to the RNN is something like [num_batch, max_time_step, num_features]. We use dynamic_rnn to perform the recurrent calculation given the input, outputting a tensor of shape [num_batch, max_time_step, num_hidden]. After that, we need to do an affine projection in each timestep with weight sharing, so we have to reshape to [num_batch*max_time_step, num_hidden], multiply by a weight matrix of shape [num_hidden, num_classes], add a bias, undo the reshape, and transpose (so we will have [max_time_steps, num_batch, num_classes] as the ctc loss input), and this result will be the input of the ctc_loss function. Did I do everything correctly?</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code> cell = tf.nn.rnn_cell.MultiRNNCell([cell] * num_layers, state_is_tuple=True)
h_rnn1, self.last_state = tf.nn.dynamic_rnn(cell, self.input_data, self.sequence_length, dtype=tf.float32)
 # Reshaping to share weights across timesteps
x_fc1 = tf.reshape(h_rnn1, [-1, num_hidden])
self._logits = tf.matmul(x_fc1, self._W_fc1) + self._b_fc1
# Reshaping
self._logits = tf.reshape(self._logits, [max_length, -1, num_classes])
# Calculating loss
loss = tf.contrib.ctc.ctc_loss(self._logits, self._targets, self.sequence_length)
self.cost = tf.reduce_mean(loss)
</code></pre>
<p><strong>Update (07/11/2016)</strong></p>
<p>Thank you @Xiv. Here is the code after the bug fix:</p>
<pre class="lang-py prettyprint-override"><code> cell = tf.nn.rnn_cell.MultiRNNCell([cell] * num_layers, state_is_tuple=True)
h_rnn1, self.last_state = tf.nn.dynamic_rnn(cell, self.input_data, self.sequence_length, dtype=tf.float32)
 # Reshaping to share weights across timesteps
x_fc1 = tf.reshape(h_rnn1, [-1, num_hidden])
self._logits = tf.matmul(x_fc1, self._W_fc1) + self._b_fc1
# Reshaping
self._logits = tf.reshape(self._logits, [-1, max_length, num_classes])
self._logits = tf.transpose(self._logits, (1,0,2))
# Calculating loss
loss = tf.contrib.ctc.ctc_loss(self._logits, self._targets, self.sequence_length)
self.cost = tf.reduce_mean(loss)
</code></pre>
<p><strong>Update (07/25/16)</strong></p>
<p>I <a href="https://github.com/igormq/ctc_tensorflow_example" rel="noreferrer">published</a> on GitHub part of my code, working with one utterance. Feel free to use! :)</p>
|
<p>I'm trying to do the same thing. Here's what I found that you may be interested in.</p>
<p>It was really hard to find the tutorial for CTC, but <a href="https://github.com/tensorflow/tensorflow/blob/679f95e9d8d538c3c02c0da45606bab22a71420e/tensorflow/python/kernel_tests/ctc_loss_op_test.py" rel="nofollow noreferrer">this example was helpful</a>.</p>
<p>And for the blank label, <a href="https://github.com/tensorflow/tensorflow/blob/d42facc3cc9611f0c9722c81551a7404a0bd3f6b/tensorflow/core/kernels/ctc_loss_op.cc#L146-L147" rel="nofollow noreferrer">CTC layer assumes that the blank index is <code>num_classes - 1</code></a>, so you need to provide an additional class for the blank label.</p>
<p>Also, the CTC loss performs the softmax for you. In your code, the RNN layer is connected directly to the CTC loss layer. The output of the RNN layer is internally activated, so you need to add one more hidden layer (it could be the output layer) without an activation function, and then add the CTC loss layer.</p>
| 186
|
implement classification
|
How to use run_classifer.py,an example of Pytorch implementation of Bert for classification Task?
|
https://stackoverflow.com/questions/56151673/how-to-use-run-classifer-py-an-example-of-pytorch-implementation-of-bert-for-cla
|
<p>How to use the fine-tuned bert pytorch model for classification (CoLa) task?</p>
<p>I do not see the argument <code>--do_predict</code>, in <code>/examples/run_classifier.py</code>. </p>
<p>However, <code>--do_predict</code> exists in the original implementation of the Bert.</p>
<p>The fine-tuned model is getting saving in the BERT_OUTPUT_DIR as <code>pytorch_model.bin</code>, but is there a simple way to reuse it through the command line?</p>
<p>Using Pytorch implementation from: <a href="https://github.com/huggingface/pytorch-pretrained-BERT" rel="nofollow noreferrer">https://github.com/huggingface/pytorch-pretrained-BERT</a></p>
<p>The command which I am using to execute the code is:</p>
<pre><code>python run_classifier.py \
--task_name CoLA \
--do_train \
--do_eval \
--do_lower_case \
--data_dir ./split/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
</code></pre>
| 187
|
|
implement classification
|
Hierarchical transformer for document classification: model implementation error, extracting attention weights
|
https://stackoverflow.com/questions/62825520/hierarchical-transformer-for-document-classification-model-implementation-error
|
<p>I am trying to implement a hierarchical transformer for document classification in Keras/tensorflow, in which:</p>
<p>(1) a word-level transformer produces a representation of each sentence, and attention weights for each word, and,</p>
<p>(2) a sentence-level transformer uses the outputs from (1) to produce a representation of each document, and attention weights for each sentence, and finally,</p>
<p>(3) the document representations produced by (2) are used to classify documents (in the following example, as belonging or not belonging to a given class).</p>
<p>I am attempting to model the classifier on Yang et al.'s approach here (<a href="https://www.cs.cmu.edu/%7E./hovy/papers/16HLT-hierarchical-attention-networks.pdf" rel="nofollow noreferrer">https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-attention-networks.pdf</a>), but replacing the GRU and attention layers with transformers.</p>
<p>I am using Apoorv Nandan's transformer implementation from <a href="https://keras.io/examples/nlp/text_classification_with_transformer/" rel="nofollow noreferrer">https://keras.io/examples/nlp/text_classification_with_transformer/</a>.</p>
<p>I have two issues for which I would be grateful for the community's help:</p>
<p><strong>(1) I get an error in the upper (sentence) level model that I can't resolve (details and code below)</strong></p>
<p><strong>(2) I don't know how to extract the word- and sentence-level attention weights, and value advice on how best to do this.</strong></p>
<p>I am new to both Keras and this forum, so apologies for obvious mistakes and thank you in advance for any help.</p>
<p>Here is a reproducible example, indicating where I encounter errors:</p>
<p>First, establish the multi-head attention, transformer, and token/position embedding layers, after Nandan.</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
import numpy as np

class MultiHeadSelfAttention(layers.Layer):
    def __init__(self, embed_dim, num_heads=8):
        super(MultiHeadSelfAttention, self).__init__()
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        if embed_dim % num_heads != 0:
            raise ValueError(
                f"embedding dimension = {embed_dim} should be divisible by number of heads = {num_heads}"
            )
        self.projection_dim = embed_dim // num_heads
        self.query_dense = layers.Dense(embed_dim)
        self.key_dense = layers.Dense(embed_dim)
        self.value_dense = layers.Dense(embed_dim)
        self.combine_heads = layers.Dense(embed_dim)

    def attention(self, query, key, value):
        score = tf.matmul(query, key, transpose_b=True)
        dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
        scaled_score = score / tf.math.sqrt(dim_key)
        weights = tf.nn.softmax(scaled_score, axis=-1)
        output = tf.matmul(weights, value)
        return output, weights

    def separate_heads(self, x, batch_size):
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.projection_dim))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, inputs):
        # x.shape = [batch_size, seq_len, embedding_dim]
        batch_size = tf.shape(inputs)[0]
        query = self.query_dense(inputs)  # (batch_size, seq_len, embed_dim)
        key = self.key_dense(inputs)  # (batch_size, seq_len, embed_dim)
        value = self.value_dense(inputs)  # (batch_size, seq_len, embed_dim)
        query = self.separate_heads(
            query, batch_size
        )  # (batch_size, num_heads, seq_len, projection_dim)
        key = self.separate_heads(
            key, batch_size
        )  # (batch_size, num_heads, seq_len, projection_dim)
        value = self.separate_heads(
            value, batch_size
        )  # (batch_size, num_heads, seq_len, projection_dim)
        attention, weights = self.attention(query, key, value)
        attention = tf.transpose(
            attention, perm=[0, 2, 1, 3]
        )  # (batch_size, seq_len, num_heads, projection_dim)
        concat_attention = tf.reshape(
            attention, (batch_size, -1, self.embed_dim)
        )  # (batch_size, seq_len, embed_dim)
        output = self.combine_heads(
            concat_attention
        )  # (batch_size, seq_len, embed_dim)
        return output

class TransformerBlock(layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, dropout_rate, name=None):
        super(TransformerBlock, self).__init__(name=name)
        self.att = MultiHeadSelfAttention(embed_dim, num_heads)
        self.ffn = keras.Sequential(
            [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)]
        )
        self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = layers.Dropout(dropout_rate)
        self.dropout2 = layers.Dropout(dropout_rate)

    def call(self, inputs, training):
        attn_output = self.att(inputs)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(inputs + attn_output)
        ffn_output = self.ffn(out1)
        ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)

class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim, name=None):
        super(TokenAndPositionEmbedding, self).__init__(name=name)
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        maxlen = tf.shape(x)[-1]
        positions = tf.range(start=0, limit=maxlen, delta=1)
        positions = self.pos_emb(positions)
        x = self.token_emb(x)
        return x + positions
</code></pre>
<p>For the purpose of this example, the data are 10,000 documents, each truncated to 15 sentences, each sentence with a maximum of 60 words, which are already converted to integer tokens 1-1000.</p>
<p>X is a 3-D tensor (10000, 15, 60) containing these tokens. y is a 1-D tensor containing the classes of the documents (1 or 0). For the purpose of this example there is no relation between X and y.</p>
<p>The following produces the example data:</p>
<pre><code>max_docs = 10000
max_sentences = 15
max_words = 60
X = tf.random.uniform(shape=(max_docs, max_sentences, max_words), minval=1, maxval=1000, dtype=tf.dtypes.int32, seed=1)
y = tf.random.uniform(shape=(max_docs,), minval=0, maxval=2, dtype=tf.dtypes.int32, seed=1)
</code></pre>
<p>Here I attempt to construct the word level encoder, after <a href="https://keras.io/examples/nlp/text_classification_with_transformer/" rel="nofollow noreferrer">https://keras.io/examples/nlp/text_classification_with_transformer/</a>:</p>
<pre><code># Lower level (produce a representation of each sentence):
embed_dim = 100  # Embedding size for each token
num_heads = 2  # Number of attention heads
ff_dim = 64  # Hidden layer size in feed forward network inside transformer
L1_dense_units = 100  # Size of the sentence-level representations output by the word-level model
dropout_rate = 0.1
vocab_size = 1000

word_input = layers.Input(shape=(max_words,), name='word_input')
word_embedding = TokenAndPositionEmbedding(maxlen=max_words, vocab_size=vocab_size,
                                           embed_dim=embed_dim, name='word_embedding')(word_input)
word_transformer = TransformerBlock(embed_dim=embed_dim, num_heads=num_heads, ff_dim=ff_dim,
                                    dropout_rate=dropout_rate, name='word_transformer')(word_embedding)
word_pool = layers.GlobalAveragePooling1D(name='word_pooling')(word_transformer)
word_drop = layers.Dropout(dropout_rate, name='word_drop')(word_pool)
word_dense = layers.Dense(L1_dense_units, activation="relu", name='word_dense')(word_drop)

word_encoder = keras.Model(word_input, word_dense)
word_encoder.summary()
</code></pre>
<p>It looks as though this word encoder works as intended to produce a representation of each sentence. Here, run on the 1st document, it produces a tensor of shape (15, 100), containing the vectors representing each of 15 sentences:</p>
<pre><code>word_encoder(X[0]).shape
</code></pre>
<p>My problem is in connecting this to the higher (sentence) level model, to produce document representations.</p>
<p><strong>I get error "NotImplementedError" when trying to apply the word encoder to each sentence in a document. I would be grateful for any help in fixing this issue, since the error message is not informative as to the specific problem.</strong></p>
<p>After applying the word encoder to each sentence, the goal is to apply another transformer to produce attention weights for each sentence, and a document-level representation with which to perform classification. I can't determine whether this part of the model will work because of the error above.</p>
<p><strong>Finally, I would like to extract word- and sentence-level attention weights for each document, and would be grateful for advice on how to do so.</strong></p>
<p>Thank you in advance for any insight.</p>
<pre><code># Upper level (produce a representation of each document):
L2_dense_units = 100
sentence_input = layers.Input(shape=(max_sentences, max_words), name='sentence_input')
# This is the line producing "NotImplementedError":
sentence_encoder = tf.keras.layers.TimeDistributed(word_encoder, name='sentence_encoder')(sentence_input)
sentence_transformer = TransformerBlock(embed_dim=L1_dense_units, num_heads=num_heads, ff_dim=ff_dim,
                                        dropout_rate=dropout_rate, name='sentence_transformer')(sentence_encoder)
sentence_dense = layers.TimeDistributed(Dense(int(L2_dense_units)),name='sentence_dense')(sentence_transformer)
sentence_out = layers.Dropout(dropout_rate)(sentence_dense)
preds = layers.Dense(1, activation='sigmoid', name='sentence_output')(sentence_out)
model = keras.Model(sentence_input, preds)
model.summary()
</code></pre>
|
<p>I got a NotImplementedError as well while trying to do the same thing as you. The thing is that Keras's TimeDistributed layer needs to know its inner custom layer's output shapes. So you should add a compute_output_shape method to your custom layers.</p>
<p>In your case MultiHeadSelfAttention, TransformerBlock and TokenAndPositionEmbedding layers should include:</p>
<pre><code>class MultiHeadSelfAttention(layers.Layer):
    ...
    def compute_output_shape(self, input_shape):
        # it does not change the shape of its input
        return input_shape

class TransformerBlock(layers.Layer):
    ...
    def compute_output_shape(self, input_shape):
        # it does not change the shape of its input
        return input_shape

class TokenAndPositionEmbedding(layers.Layer):
    ...
    def compute_output_shape(self, input_shape):
        # it changes the shape from (batch_size, maxlen) to (batch_size, maxlen, embed_dim)
        return input_shape + (self.pos_emb.output_dim,)
</code></pre>
<p>After you add these methods you should be able to run your code.</p>
<p>As for your second question, I am not sure but maybe you can return the "weights" variable that is returned from MultiHeadSelfAttention's attention method in call methods of both MultiHeadSelfAttention and TransformerBlock. So that you can access it where you build your model.</p>
| 188
|
implement classification
|
Keras classification model with pure numpy classification layer
|
https://stackoverflow.com/questions/74951226/keras-classification-model-with-pure-numpy-classification-layer
|
<p>I have a multiclass (108 classes) classification model, and I want to apply transfer learning to its classification layer. I want to deploy this model on a device with low computing resources (a Raspberry Pi), and I thought to implement the classification layer in pure NumPy instead of using Keras or TF.
Below is my original model.</p>
<pre><code>from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Embedding, LSTM, GRU, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
model = Sequential()
model.add(Embedding(108, 50, input_length=10))
model.add((LSTM(32, return_sequences=False)))
model.add(Dense(108, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3)
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.5, callbacks=[es]).history
</code></pre>
<p>I split this model into two parts, encoder and decoder as follows. decoder is the classification layer which I want to convert into NumPy model and then do the on-device transfer learning later.</p>
<pre><code>encoder = Sequential([
    Embedding(108, 50, input_length=10),
    GRU(32, return_sequences=False)
])

decoder = Sequential([
    Dense(108, activation="softmax")
])
model = Model(inputs=encoder.input, outputs=decoder(encoder.output))
model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3)
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.5, callbacks=[es]).history
</code></pre>
<p>I have a few questions related to this approach.</p>
<ol>
<li><p>The only way I know to train this model is to first train the encoder and decoder, then train the NumPy classification layer on the trained encoder's outputs.
Is there any way I can train the NumPy model at the same time as the encoder (without using the above Keras decoder part and <code>Model</code>)? I can't use <code>Model</code>, as I can't use Keras or TF on the Raspberry Pi during the transfer learning.</p>
</li>
<li><p>If there is no way to train the encoder and the NumPy model at the same time,
how can I use the learned decoder weights as the starting weights of the NumPy model, instead of starting from random weights?</p>
</li>
<li><p>What is the most efficient code (or way) to implement the NumPy classification layer (decoder)? It needs to be highly efficient, as I do the transfer learning on a Raspberry Pi on incoming streaming data.
Once I have trained the model on a reasonable amount of data, I plan to convert the encoder to TFLite and run inference with it.</p>
</li>
</ol>
<p>Highly appreciate any help or guidance to achieve this as I'm new to NumPy-based NN implementations.
Thanks in advance</p>
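Not part of an accepted answer — for illustration only, a dependency-free sketch of the decoder's forward pass (a dense layer followed by softmax), which is all the pure-NumPy/pure-Python classification layer needs at inference time. The weights and bias would come from the trained Keras decoder (e.g. via `decoder.get_weights()`); the values below are made up:

```python
import math

def dense_softmax(x, W, b):
    """Forward pass of Dense(units, activation='softmax') on one input vector.

    x: list of length n_in; W: n_in x n_out nested list; b: list of length n_out.
    """
    logits = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
              for j in range(len(b))]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy check with 2 inputs and 3 output classes (weights are illustrative):
probs = dense_softmax([1.0, 2.0], [[0.1, 0.0, -0.1], [0.2, 0.0, 0.3]], [0.0, 0.0, 0.0])
```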
| 189
|
|
implement classification
|
R: How to create multiple maps (rworldmap) with different classification borders?
|
https://stackoverflow.com/questions/34472450/r-how-to-create-multiple-maps-rworldmap-with-different-classification-borders
|
<p>Based on my former <a href="https://stackoverflow.com/questions/33945984/r-how-to-create-multiple-maps-rworldmap-using-apply">question</a> answered by @Andy, I wanted to have different classification intervals per map using Jenks natural breaks. For this I use the library <code>classInt</code>, which works fine for single plots. However, I don't know how to implement these different classifications per column (or map) in the <code>lapply</code> solution of @Andy, which is probably pretty easy. So, using the sample data of my previous question, I would create the classification intervals like this (based on the <code>spdf</code> object):</p>
<pre><code>library(classInt)
# create classification intervalls for single columsn
classInt_Bv <- classIntervals( spdf$BLUE.veggies, n=3, style="jenks")
Bv = classInt_Bv$brks
classInt_Bf <- classIntervals( spdf$BLUE.fruits, n=3, style="jenks")
Bf = classInt_Bf$brks
classInt_Bn <- classIntervals( spdf$BLUE.nuts, n=3, style="jenks")
Bn = classInt_Bn$brks
classInt_Gv <- classIntervals( spdf$GREEN.veggies, n=3, style="jenks")
Gv = classInt_Gv$brks
classInt_Gf <- classIntervals( spdf$GREEN.fruits, n=3, style="jenks")
Gf = classInt_Gf$brks
classInt_Gn <- classIntervals( spdf$GREEN.nuts, n=3, style="jenks")
Gn = classInt_Gn$brks
# merge all cols again together
catMethod = data.frame(Bv,Bf,Bn,Gv,Gf,Gn)
</code></pre>
<p>Here, maybe, my first question: is there an easier/faster way to do this? In my second data frame I use more than 50 columns.</p>
<p>My 2nd (and main) question is: how do I plug these classification intervals into @Andy's <code>lapply</code> function, so that every map uses the right classification intervals? Thanks</p>
|
<p>From the example provided in the link</p>
<pre><code>spdf <- df
</code></pre>
<p>As there are non-numeric columns, we can subset the dataset for those columns with names that have either 'BLUE' or 'GREEN' with <code>grep</code> ('i1'), then we loop over those columns,apply the <code>classIntervals</code> function and get the 'brks' in a <code>list</code>.</p>
<pre><code>i1 <- grep('^(BLUE|GREEN)', names(spdf))
lst <- lapply(spdf[i1], function(x) classIntervals(x, n=3,
style='jenks')$brks)
names(lst) <- sub('^(.)[^.]+.(.).*', '\\1\\2', names(lst))
res <- data.frame(lst)
res
# Bf Bn Bv Gf Gn Gv
#1 0 0 0 0 0 0
#2 3745797 171984 34910 3389314 464688 15508
#3 12803543 533665 92690 8942278 1640804 149581
#4 19947613 21563867 188940 15773576 6399474 174504
</code></pre>
| 190
|
implement classification
|
How Can I Implement a Multiclass Multilabel Classification in Keras
|
https://stackoverflow.com/questions/62747053/how-can-i-implement-a-multiclass-multilabel-classification-in-keras
|
<p>Suppose I have some set of input outputs like below:</p>
<pre><code>input1 : [0 1 1 1 0 ... 1]
output1 : [1 2 2 3 ... 3 3 1 2 2]
...
</code></pre>
<p>the inputs are always <strong>0 or 1</strong> and the outputs are always <strong>1 or 2 or 3</strong></p>
<p>how can I create a neural network in keras that can fit on these input outputs?</p>
<pre><code>checkpoint_path = 'p-multilable.h5'
checkpoint = keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, mode='max', monitor='acc', verbose=0, save_best_only=True)
model = keras.models.Sequential([
keras.layers.Dense(1000,activation='relu', input_shape=X_train.shape[1:]),
keras.layers.Dense(300,),
keras.layers.Dense(300,),
keras.layers.Dense(53)])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.fit(X_train, y_train, batch_size=100, epochs=1000, validation_data=(x_val, y_val), callbacks=[checkpoint])
</code></pre>
<p>I tried normalizing output to <strong>0 0.5 1</strong> but it didn't help.</p>
<p>I tried various loss functions</p>
<p>I tried defining custom loss function</p>
<p>I tried many network architectures</p>
<p>in most cases it has acc of about 0.09</p>
<p>In theory it is nothing more than sets of binary outputs and should not be hard, but I cannot find the proper way.</p>
|
<p>For your last classification layer, you should give it a <code>softmax</code> activation. This activation is used for classification. It may require you to one-hot encode your outputs, though.</p>
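Not part of the original answer — since the outputs here are always 1, 2 or 3, a softmax output needs them one-hot encoded; a dependency-free sketch of that encoding (`keras.utils.to_categorical` does the same for whole arrays):

```python
def one_hot(label, classes=(1, 2, 3)):
    """Encode a label from `classes` as a one-hot vector."""
    return [1 if c == label else 0 for c in classes]

# Encode a short output sequence label by label:
encoded = [one_hot(y) for y in [1, 2, 2, 3]]
```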
| 191
|
implement classification
|
Neural Network Ordinal Classification for Age
|
https://stackoverflow.com/questions/38375401/neural-network-ordinal-classification-for-age
|
<p>I have created a simple neural network (Python, Theano) to estimate a persons age based on their spending history from a selection of different stores. Unfortunately, it is not particularly accurate.</p>
<p>The accuracy might be hurt by the fact that the network has no knowledge of ordinality. For the network there is no relationship between the age classifications. It is currently selecting the age with the highest probability from the softmax output layer.</p>
<p>I have considered changing the output classification to an average of the weighted probability for each age.</p>
<p>E.g Given age probabilities: (Age 10 : 20%, Age 20 : 20%, Age 30: 60%)</p>
<pre><code>Rather than output: Age 30 (Highest probability)
Weighted Average: Age 24 (10*0.2+20*0.2+30*0.6 weighted average)
</code></pre>
<p>This solution feels suboptimal. Is there a better way to implement ordinal classification in neural networks, or is there a better machine learning method that can be implemented? (E.g. logistic regression)</p>
|
<p>This problem came up in a previous <a href="https://www.kaggle.com/c/diabetic-retinopathy-detection/forums/t/13115/paper-on-using-ann-for-ordinal-problems" rel="noreferrer">Kaggle competition</a> (this thread references the paper I mentioned in the comments).</p>
<p>The idea is that, say you had 5 age groups, where 0 < 1 < 2 < 3 < 4, instead of one-hot encoding them and using a softmax objective function, you can encode them into K-1 classes and use a sigmoid objective. So, as an example, your encodings would be</p>
<pre><code>[0] -> [0, 0, 0, 0]
[1] -> [1, 0, 0, 0]
[2] -> [1, 1, 0, 0]
[3] -> [1, 1, 1, 0]
[4] -> [1, 1, 1, 1]
</code></pre>
<p>Then the net will learn the orderings.</p>
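<p>The encoding, and the inverse decoding, can be sketched in a few lines of NumPy (the helper names are mine; the decode counts the initial run of sigmoid outputs above a threshold, which is robust to a single inconsistent later output):</p>

```python
import numpy as np

def ordinal_encode(labels, num_classes):
    """Encode class k as k leading ones among (num_classes - 1) outputs."""
    out = np.zeros((len(labels), num_classes - 1), dtype=np.float32)
    for row, k in zip(out, labels):
        row[:k] = 1.0
    return out

def ordinal_decode(probs, threshold=0.5):
    """Predicted class = length of the initial run of outputs above threshold."""
    return (probs > threshold).cumprod(axis=1).sum(axis=1)

print(ordinal_encode([0, 2, 4], 5))
print(ordinal_decode(np.array([[0.9, 0.8, 0.2, 0.1]])))  # -> [2]
```
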
| 192
|
implement classification
|
Multilabel classification in ML.NET
|
https://stackoverflow.com/questions/67411717/multilabel-classification-in-ml-net
|
<p>I am looking to implement multilabel classification using ML.NET. I read a few posts which say it is not possible directly, but rather through problem transformation by converting it into multiple binary classification problems.
So essentially I will be required to create <code>n</code> classifiers if my dataset has <code>n</code> tags. I tried to do this by splitting my dataset label-wise. But the <code>fit</code> method throws the exception below. I am passing the value of the label column as <code>1</code> for all entries for a given label.</p>
<blockquote>
<p>System.ArgumentOutOfRangeException: 'Must be at least 2.
Parameter name: numClasses'</p>
</blockquote>
<p>This can be fixed by adding entries with a particular label as <code>1</code> and all other entries as <code>0</code>, but since each label will then have fewer positive entries, I think that will dilute the learning and may result in lower accuracy.</p>
<p>Can someone suggest any other way to implement multilabel classification with ML.NET?</p>
|
<p>Create N boolean columns. Example naming pattern: Label01, Label02, ...LabelNN.</p>
<p>Training pipeline, add N sets of: (one for each boolean label)</p>
<pre class="lang-cs prettyprint-override"><code>.Append(mlContext.BinaryClassification.Trainers.LightGbm(labelColumnName: "Label01", featureColumnName: "Features"))
.Append(mlContext.Transforms.CopyColumns("Score01", "Score")) // Copy to a unique name so the following models won't shadow (replace) the column. PredictedLabel column can also be saved.
.Append(mlContext.BinaryClassification.Trainers.LightGbm(labelColumnName: "Label02", featureColumnName: "Features"))
.Append(mlContext.Transforms.CopyColumns("Score02", "Score"))
...
.Append(mlContext.BinaryClassification.Trainers.LightGbm(labelColumnName: "LabelNN", featureColumnName: "Features"))
.Append(mlContext.Transforms.CopyColumns("ScoreNN", "Score"))
</code></pre>
<p>Then call <code>.fit()</code> as normal. All of the models in the pipeline will be fit. You can then access each of the ScoreXX columns to get the scores for each class.</p>
<p>To evaluate the quality of each model, you can create metrics from each of the score columns vs. their input LabelXX column.</p>
| 193
|
implement classification
|
Implementation of SVM for classification without library in c++
|
https://stackoverflow.com/questions/27501444/implementation-of-svm-for-classification-without-library-in-c
|
<p>I have been studying Support Vector Machines for the last few weeks. I understand the theoretical concept of how I can classify data into two classes. But it is unclear to me how to select support vectors and generate the separating line to classify new data using C++.</p>
<p>Suppose, I have two training data set for two classes</p>
<p><img src="https://i.sstatic.net/HF1A1.gif" alt="enter image description here"></p>
<p>After plotting data, I get the following feature space with vector and here, separating line is also clear.</p>
<p><img src="https://i.sstatic.net/hfSg4.gif" alt="enter image description here"></p>
<p>How can I implement this in C++ without library functions? It will help me clarify my understanding of how SVM is implemented. I need to be clear about the implementation as I'm going to apply SVM to opinion mining for my native language.</p>
|
<p>I will join most people's advice and say that you should really consider using a library. The SVM algorithm is tricky enough without the added noise of a bug in your own implementation. Not even talking about how hard it is to make an implementation that is scalable in both memory size and time.</p>
<p>That said, if you want to explore this just as a learning experience, then SMO is probably your best bet. Here are some resources you could use:</p>
<p><a href="http://cs229.stanford.edu/materials/smo.pdf" rel="nofollow">The Simplified SMO Algorithm - Stanford material PDF</a></p>
<p><a href="http://research.microsoft.com/pubs/68391/smo-book.pdf" rel="nofollow">Fast Training of Support Vector Machines - PDF</a></p>
<p><a href="http://www.cs.mcgill.ca/~hv/publications/99.04.McGill.thesis.gmak.pdf" rel="nofollow">The implementation of Support Vector Machines using the sequential minimal optimization algorithm - PDF</a></p>
<p>Probably the most practical explanation that I have found is the one in chapter 6 of the book Machine Learning in Action by Peter Harrington. The code itself is in Python, but you should be able to port it to C++. I don't think it is the best implementation, but it might be good enough to get an idea of what is going on.</p>
<p>The code is freely available:</p>
<p><a href="https://github.com/pbharrin/machinelearninginaction/tree/master/Ch06" rel="nofollow">https://github.com/pbharrin/machinelearninginaction/tree/master/Ch06</a></p>
<p>Unfortunately there is no sample for that chapter, but a lot of local libraries tend to have this book available.</p>
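<p>As a learning aid, the part that usually puzzles people — how the support vectors and their multipliers produce the separating line — can be seen in isolation from training. Once training yields the multipliers alpha_i, the decision function is f(x) = sum_i alpha_i * y_i * K(x_i, x) + b. Below is a sketch with a two-point toy problem whose hard-margin solution is known in closed form; the alphas and b are hand-derived for this toy set, not the output of a trained model:</p>

```python
import numpy as np

# Toy hard-margin problem: one support vector per class.
support_vectors = np.array([[1.0, 1.0], [-1.0, -1.0]])
labels = np.array([1.0, -1.0])
alphas = np.array([0.25, 0.25])  # hand-derived dual solution for this toy set
b = 0.0

def decision(x):
    # f(x) = sum_i alpha_i * y_i * K(x_i, x) + b, with a linear kernel (dot product)
    return np.sum(alphas * labels * (support_vectors @ np.asarray(x))) + b

def classify(x):
    return 1 if decision(x) > 0 else -1

print(classify([2.0, 3.0]))    # -> 1
print(classify([-3.0, -1.0]))  # -> -1
```

<p>Note that each support vector sits exactly on its margin here: decision([1, 1]) evaluates to +1, matching the hard-margin constraint. SMO's job is only to find the alphas and b; everything after that is this decision function.</p>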
| 194
|
implement classification
|
Add attention mechanism to classification problem
|
https://stackoverflow.com/questions/57493402/add-attention-mechanism-to-classification-problem
|
<p>Currently, I am trying to add an attention mechanism to (frame-sequence) video classification. So I am implementing attention between CNN (feature extraction) -> attention -> LSTM (classification to a specific class). I followed two papers for the implementation: "Action Recognition using visual attention" and "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention".</p>
<p>I have an image classification CNN and would like to extract the features and put it into a LSTM to visualize soft attention.</p>
<p>After the last FC layer of a trained CNN, I extracted the features. After that, I applied attention to train the attention weights and an LSTM for classification. The goal of the attention model is to know where the model is looking when it predicts a specific class, so I need to implement a spatial attention module for saliency detection.</p>
<pre><code>base_model = load_model(weight_file) // A trained resnet model
last_conv_layer = base_model.get_layer("global_average_pooling2d_3")
cnn_model = Model(input=base_model.input,output=last_conv_layer.output)
cnn_model.trainable = False
inputs = Input(shape=(max_frames, h, w, 3))
x = TimeDistributed(cnn_model)(inputs)
attention = Dense(1, activation='tanh')(x)
attention = Flatten()(attention)
attention = Activation('softmax')(attention)
attention = RepeatVector(256)(attention)
attention = Permute([2, 1])(attention)
sent_representation = merge([x, attention], mode='mul')
x = LSTM(256, return_sequences=True)(sent_representation)
predictions = Dense(101, activation='softmax', name='pred')(x)
model = Model(inputs=inputs, outputs=predictions)
</code></pre>
<blockquote>
<p>Traceback (most recent call last):
File "train.py", line 210, in
train_model(X,y,weight_file)
File "train.py", line 62, in train_model
model = lrcn_model(Nframes,H,W,weight_file)
File "/models.py", line 71, in lrcn_model
sent_representation = merge([x, attention], mode='mul')<br>
TypeError: 'module' object is not callable</p>
</blockquote>
<p>Is this a correct implementation of the attention mechanism? And how can I fix the issue above that I am getting?</p>
| 195
|
|
implement classification
|
Problem with "metalearner" for multi-class classification in H2O AutoML (3.24.0.5) implemented in R
|
https://stackoverflow.com/questions/57149718/problem-with-metalearner-for-multi-class-classification-in-h2o-automl-3-24-0
|
<p>I am trying H2O AutoML for multi-class classification. Everything seems to be working fine except when I am trying to extract the variable importance from metalearner for the StackedEnsemble.</p>
<p>This is what I am getting from code:</p>
<pre><code># Get the "BestOfFamily" Stacked Ensemble model
se.best <- h2o.getModel(grep("StackedEnsemble_BestOfFamily_AutoML", model_ids, value = TRUE)[1])
#se.best
metalearner <- h2o.getModel(se.best@model$metalearner$name)
#as.data.frame(h2o.varimp(metalearner))
h2o.varimp_plot(metalearner, num_of_features = 20)
</code></pre>
<p><em>Variable importance from metalearner for "StackedEnsemble_BestOfFamily_AutoML":</em></p>
<p><a href="https://i.sstatic.net/LAjsp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LAjsp.png" alt="enter image description here" /></a></p>
<p>The problem here is that instead of showing the names of the 20 models that H2O AutoML picked (from "max_models = 20"), the above plot shows the names of the classes of the multi-class classification.</p>
<p>To me, this looks like a bug because I do not have this problem when I tried regression with H2O AutoML in R.</p>
| 196
|
|
implement classification
|
Android - Weka Classification is Taking too much time
|
https://stackoverflow.com/questions/24505823/android-weka-classification-is-taking-too-much-time
|
<p>I am using the Weka library for implementing classification algorithms on Android. Though the Weka lib is not fully compatible with Android, I am using <a href="https://github.com/rjmarsan/Weka-for-Android" rel="nofollow noreferrer">Weka For Android</a> as a lib in my app.
But it is taking too much time to build the model.</p>
<p>Before posting this, I went through various links, but couldn't find any consolidated solution.</p>
<p>Some links are:</p>
<p><a href="https://stackoverflow.com/questions/11482108/wekas-pca-is-taking-too-long-to-run">Weka's PCA is taking too long to run</a></p>
<p><a href="http://www.pervasive.jku.at/Teaching/_2012WS/PervasiveComputingInfrastructure/Uebungen/UE04/04%20Exercise%20Realtime%20Classification.pdf" rel="nofollow noreferrer">Using Weka classifiers on Android </a></p>
<p><a href="https://stackoverflow.com/questions/2615381/android-adding-external-library-to-project">Android - Adding external library to project</a></p>
<p>I am using the following code for classification:</p>
<pre><code>OneClassClassifier classifier = new OneClassClassifier();
classifier.setNominalGenerator(new NominalGenerator());
classifier.setTargetClassLabel("1");
classifier.setNumericGenerator(new DiscreteGenerator());
classifier.setSeed(1);
classifier.setNumRepeats(10);
classifier.buildClassifier(train);
Evaluation eval = new Evaluation(train);
eval.evaluateModel(classifier, test);
System.out.println(eval.toSummaryString("\nResults\n======\n", false));
</code></pre>
<p>This code is working fine in my Java application. But if I put the same code in an Android app, it takes too much time, say 4-6 minutes, to build the model.</p>
<p>Please suggest some way to get out of this.
Also, I am not able to serialize/deserialize the model on Android, while it works fine in Java.</p>
| 197
|
|
implement classification
|
MultiLabel Classification using Conditional Random Field
|
https://stackoverflow.com/questions/37531532/multilabel-classification-using-conditional-random-field
|
<p>Is it possible to use Conditional Random Field for MultiLabel Classification? I saw a python CRF implementation at <a href="https://pystruct.github.io/user_guide.html" rel="noreferrer">https://pystruct.github.io/user_guide.html</a>, but couldn't figure a way to do multilabel classification.</p>
|
<p>The basic CRF doesn't support multilabel classification. However, some extensions have been explored, such as the Collective Multi-label (CML) and the
Collective Multi-label with Features (CMLF). From (1):</p>
<blockquote>
<p>A conditional random field (CRF) based model
is presented in [21] where two multi-label
graphical models has been proposed, both parameterizes
label co-occurances. The Collective
Multi-label (CML) classifier maintains feature
accounting for label co-occurances and the
Collective Multi-label with Features (CMLF)
maintains parameters that correspond to features
for each co-occuring label pair. Petterson
et. al. recently presented another interesting
generative modeling approach in a reverse manner,
predicting a set of instances given the labels [39].</p>
</blockquote>
<hr>
<p>References:</p>
<ul>
<li>(1) Sorower, Mohammad S. "A literature survey on algorithms for multi-label learning." Oregon State University, Corvallis (2010). <a href="http://people.oregonstate.edu/~sorowerm/pdf/Qual-Multilabel-Shahed-CompleteVersion.pdf" rel="nofollow">http://people.oregonstate.edu/~sorowerm/pdf/Qual-Multilabel-Shahed-CompleteVersion.pdf</a> ; <a href="https://scholar.google.com/scholar?cluster=11211211207326445005&hl=en&as_sdt=0,22" rel="nofollow">https://scholar.google.com/scholar?cluster=11211211207326445005&hl=en&as_sdt=0,22</a></li>
<li>(21) N. Ghamrawi and A. Mccallum. Collective Multi-Label Classification. In Proceedings of
the 2005 ACM Conference on Information and Knowledge Management (CIKM ’05), pages
195–200, Bremen, Germany, 2005. <a href="http://www.dtic.mil/dtic/tr/fulltext/u2/a440081.pdf" rel="nofollow">http://www.dtic.mil/dtic/tr/fulltext/u2/a440081.pdf</a></li>
<li>(39) James Petterson and Tiberio Caetano. Reverse multi-label learning. In J. Lafferty, C. K. I.
Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information
Processing Systems 23, pages 1903–1911. 2010.</li>
</ul>
| 198
|
implement classification
|
How to convert to a multi-class classification model?
|
https://stackoverflow.com/questions/52783281/how-to-convert-to-a-multi-class-classification-model
|
<p>I am trying to implement multi-class classification using the <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/flowers" rel="nofollow noreferrer">cloud samples github</a>. It was a classification model and I have to alter the code. I found some suggestions to change the final layer and loss from softmax to sigmoid. Also, I have to change the labels to one-hot encoding. Could someone help with changing the labels to one-hot encoding?
Thanks in advance</p>
|
<p>Please refer to the following article:</p>
<p><a href="https://towardsdatascience.com/multi-label-image-classification-with-inception-net-cbb2ee538e30" rel="nofollow noreferrer">https://towardsdatascience.com/multi-label-image-classification-with-inception-net-cbb2ee538e30</a></p>
<p>The technique used in that article looks something like this:</p>
<pre><code>ground_truth = np.zeros(class_count, dtype=np.float32)
ground_truth[label_index] = 1.0
</code></pre>
<p>This may not scale well for a very large number of classes (10s of thousands). To scale to higher numbers of classes, you need the equivalent of <code>tf.nn.sparse_softmax_cross_entropy_with_logits</code> for sigmoid, which doesn't seem to exist.</p>
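<p>The one-hot (and, for the multi-label case, multi-hot) encoding itself is only a couple of lines of NumPy. The helper names below are mine, shown as a sketch:</p>

```python
import numpy as np

def one_hot(label_index, class_count):
    """Single-label ground truth: exactly one 1.0."""
    v = np.zeros(class_count, dtype=np.float32)
    v[label_index] = 1.0
    return v

def multi_hot(label_indices, class_count):
    """Multi-label ground truth, suitable for a sigmoid output layer."""
    v = np.zeros(class_count, dtype=np.float32)
    v[list(label_indices)] = 1.0
    return v

print(one_hot(2, 5))         # -> [0. 0. 1. 0. 0.]
print(multi_hot([0, 3], 5))  # -> [1. 0. 0. 1. 0.]
```
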
| 199
|
solve differential equations
|
python solving differential equations
|
https://stackoverflow.com/questions/46406222/python-solving-differential-equations
|
<p>I am attempting to solve four different differential equations. After googling and researching I was finally able to understand how the solver works, but I can't get this specific problem to run correctly. The code runs, but the graphs are incorrect.</p>
<p>I think the problem lies in the volume expression inside the function, which will change depending on how much time has passed. That volume at a specific time will then be used to solve the right hand side of the differential equations.</p>
<p>The intervals, starting point and ending point for the time vector is correct. Constants are also correct.</p>
<pre><code>import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
#defining all constants and initial conditions
k=2.2
CB0_inlet=.025
V_flow_inlet=.05
V_reactor_initial=5
CA0_reactor=.05
FB0=CB0_inlet*V_flow_inlet
def dcdt(C,t):
#expression of how volume in reactor varies with time
V=V_flow_inlet*t+C[4] #C[4] is the initial reactor volume ###we dont need things C to be C0 correct?
#calculating right hand side of the four differential equations
dadt=-k*C[0]*C[1]-((V_flow_inlet*C[0])/V)
dbdt=((V_flow_inlet*(CB0_inlet-C[1]))/V)-k*C[0]*C[1]
dcdt=k*C[0]*C[1]-((V_flow_inlet*C[2])/V)
dddt=k*C[0]*C[1]-((V_flow_inlet*C[3])/V)
return [dadt,dbdt,dcdt,dddt,V]
#creating time array, initial conditions array, and calling odeint
t=np.linspace(0,500,100)
initial_conditions=[.05,0,0,0,V_reactor_initial] # [CA0 CB0 CC0 CD0
#V0_reactor]
C=integrate.odeint(dcdt,initial_conditions,t)
plt.plot(t,C)
</code></pre>
|
<p>Taking the hints of the variable names and equation structure, you are considering a chemical reaction</p>
<pre><code>A + B -> C + D
</code></pre>
<p>There are 2 sources of changes in the concentration <code>a,b,c,d</code> of reactants <code>A,B,C,D</code>, </p>
<ul>
<li>the reaction itself with reaction speed <code>k*a*b</code> and</li>
<li>the inflow of reactant <code>B</code> in a solution with concentration <code>b0_in</code> and volume rate <code>V_in</code>, which results in a relative concentration change of <code>V_in/V</code> in all components and an addition of <code>V_in*b0_in/V</code> in <code>B</code>.</li>
</ul>
<p>This is all well reflected in the first 4 equations of your system. In the treatment of the volume, you are mixing two approaches in an inconsistent way. Either <code>V</code> is a known function of <code>t</code> and thus not a component of the state vector, then</p>
<pre><code>V = V_reactor_initial + V_flow_inlet * t
</code></pre>
<p>or you treat it as a component of the state, then the current volume is</p>
<pre><code>V = C[4]
</code></pre>
<p>and the rate of volume change is</p>
<pre><code>dVdt = V_flow_inlet.
</code></pre>
<p>Modifying your code for the second approach looks like </p>
<pre><code>import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
#defining all constants and initial conditions
k=2.2
CB0_inlet=.025
V_flow_inlet=.05
V_reactor_initial=5
CA0_reactor=.05
FB0=CB0_inlet*V_flow_inlet
def dcdt(C,t):
#expression of how volume in reactor varies with time
a,b,c,d,V = C
#calculating right hand side of the four differential equations
dadt=-k*a*b-(V_flow_inlet/V)*a
dbdt=-k*a*b+(V_flow_inlet/V)*(CB0_inlet-b)
dcdt= k*a*b-(V_flow_inlet/V)*c
dddt= k*a*b-(V_flow_inlet/V)*d
return [dadt,dbdt,dcdt,dddt,V_flow_inlet]
#creating time array, initial conditions array, and calling odeint
t=np.linspace(0,500,100)
initial_conditions=[.05,0,0,0,V_reactor_initial] # [CA0 CB0 CC0 CD0
#V0_reactor]
C=integrate.odeint(dcdt,initial_conditions,t)
plt.plot(t,C[:,0:4])
</code></pre>
<p>with the result</p>
<p><a href="https://i.sstatic.net/Z4K6A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4K6A.png" alt="enter image description here"></a></p>
| 200
|
solve differential equations
|
Octave to solve differential equations
|
https://stackoverflow.com/questions/23186185/octave-to-solve-differential-equations
|
<p>How do I solve the differential equation y'+y=t with y(0)=24?</p>
<p>Do I need to define the differential equation in an .m file?</p>
|
<p>To solve ordinary differential equations you've got the function lsode (run lsode for help).</p>
<pre><code>f = @(y,t) t-y;
t = linspace(0,5,50)';
y=lsode(f, 24, t);
plot(t,y);
print -djpg figure-lsnode.jpg
</code></pre>
<p><a href="https://i.sstatic.net/K7eTt.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K7eTt.jpg" alt="y versus t"></a></p>
| 201
|
solve differential equations
|
solve differential equation with iterated parameter
|
https://stackoverflow.com/questions/35386600/solve-differential-equation-with-iterated-parameter
|
<p>I'm learning to solve differential equations using (scipy.integrate.odeint) and (scipy.integrate.ode). I have a simple example:</p>
<p><code>dy/dt=f[i]*t</code></p>
<p>and f is a parameter corresponding to t[i], as in the example code below; i.e. </p>
<p><code>t[0]=0.0, f[0]=0.0</code></p>
<p><code>t[1]=0.1, f[1]=0.1</code></p>
<p>...</p>
<p><code>t[10]=1.0, f[1]=1.0</code></p>
<p>the manually result should be:</p>
<p><code>y=1/2*f[i]*t**2</code>, because the initial value of y is zero</p>
<p>then, numerical result of y should be
<code>[0.0, 0.0005, 0.004, 0.0135, 0.032, 0.0625, 0.108, 0.1715, 0.256, 0.3645, 0.5]</code>. But when i use scipy.integrate.ode, i have got a different result. My question is:
1. Have i used ode wrong? How can i reduce the errors?
2. Can i use odeint or other method to solve this problem?</p>
<p>The code is like:</p>
<pre><code>import matplotlib.pyplot as pl
import numpy as np
import sympy as sp
from scipy.integrate import odeint
from scipy.integrate import ode
import numpy as np
def func(t, y, f):
return f*t
t=np.linspace(0.0, 1.0, 11)
f=np.linspace(0.0, 1.0, 11)
dt = t[1]-t[0]
sol= np.empty_like(t)
r = ode(func).set_integrator("dopri5")
r.set_initial_value(0, 0).set_f_params(f[0])
# result of ode
for i in xrange(len(t)):
r.set_f_params(f[i])
r.integrate(r.t+dt)
sol[i] = r.y
res=[]
# result of t**3/3
for a in np.linspace(0.0, 1, 11):
f=(a**3)/3
print f
res.append(f)
# result3
res2=[]
for n in range(0, 11):
dt=0.1
y= t[n]**3/3 - dt*t[n]**2/4 - dt**2*t[n]/12
res2.append(y)
pl.plot(sol)
pl.plot(res)
pl.plot(res2)
pl.show()
</code></pre>
<p>I have extended this example to a 2-dimensional system of differential equations:</p>
<p><code>du/dt=-u(v-f[i])</code></p>
<p><code>dv/dt=v(f[i]-u)</code></p>
<p>with initial values: u(0)=v(0)=1. And below is the code:</p>
<pre><code>import matplotlib.pyplot as pl
import numpy as np
import sympy as sp
from scipy.integrate import odeint
from scipy.integrate import ode
from numpy import array
def func(t, y, f):
u,v=y
dotu=-u*(v-f)
dotv=v*(f-u)
return array([dotu, dotv])
t=np.linspace(0.0, 10, 11)
f=np.linspace(0.0, 20, 11)
dt = t[1]-t[0]
# result correct
y0=array([1.0, 1.0])
sol= np.empty([11, 2])
sol[0] = array([1.0, 1.0])
r = ode(func).set_integrator("dopri5")
r.set_initial_value(t[0], sol[0]).set_f_params(f[0])
for i in range(len(t)-1):
r.set_f_params(f[i])
r.integrate(r.t+dt)
sol[i+1] = r.y
pl.plot(sol[:,0])
</code></pre>
<p>But I get an error message: </p>
<p><code>Traceback (most recent call last):
File "C:\Users\odeint test.py", line 26, in <module>
sol[0] = array([1.0, 1.0])
ValueError: setting an array element with a sequence.</code></p>
|
<p>What you are doing is closer to integrating y'(t)=t^2, y(0)=0, resulting in y(t)=t^3/3. That you factor t^2 as f*t and hack f into a step function version of t only adds a small perturbation to that.</p>
<hr>
<p>The integral of <code>t[i]*t</code> over <code>t[i]..t[i+1]</code> is</p>
<pre><code>y[i+1]-y[i] = t[i]/2*(t[i+1]^2-t[i]^2)
= (t[i+1]^3-t[i]^3)/3 - (t[i+1]-t[i])^2*(t[i]+2t[i+1])/6
= (t[i+1]^3-t[i]^3)/3 - dt*(t[i+1]^2-t[i]^2)/4 - dt^2*(t[i+1]-t[i])/12
</code></pre>
<p>which, since the per-interval identity above is exact, telescopes over all steps to</p>
<pre><code>y[n] = t[n]^3/3 - dt*t[n]^2/4 - dt^2*t[n]/12
</code></pre>
<hr>
<h2>How to get the correct solution</h2>
<pre><code>sol= np.empty_like(t)
</code></pre>
<p>set the initial value</p>
<pre><code>sol[0] = 0
r = ode(func).set_integrator("dopri5")
</code></pre>
<p>use the initial point as initial point, both to make explicit that the point at index <code>0</code> is fixed and "used up"</p>
<pre><code>r.set_initial_value(sol[0],t[0]).set_f_params(f[0])
# result of ode
</code></pre>
<p>go from point at <code>t[i]</code> to point at <code>t[i+1]</code>. Ends with <code>i+1=len(t)</code> or <code>i=len(t)-1</code></p>
<pre><code>for i in xrange(len(t)-1):
r.set_f_params(f[i])
r.integrate(r.t+dt)
</code></pre>
<p>value at <code>t[i]+dt</code> is value at <code>t[i+1]</code></p>
<pre><code> sol[i+1] = r.y
</code></pre>
<p>With these changes the numerical solution coincides with the manually computed solution.</p>
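<p>Putting those pieces together, the corrected loop reads as follows (a consolidated sketch of the changes described above, with the manually derived formula used as a check):</p>

```python
import numpy as np
from scipy.integrate import ode

def func(t, y, f):
    return f * t

t = np.linspace(0.0, 1.0, 11)
f = np.linspace(0.0, 1.0, 11)  # f[i] == t[i] in this example
dt = t[1] - t[0]

sol = np.empty_like(t)
sol[0] = 0.0  # the point at index 0 is the fixed initial value
r = ode(func).set_integrator("dopri5")
r.set_initial_value(sol[0], t[0]).set_f_params(f[0])
for i in range(len(t) - 1):  # step from t[i] to t[i+1]
    r.set_f_params(f[i])
    r.integrate(r.t + dt)
    sol[i + 1] = r.y[0]

# matches the manually summed integral (up to solver tolerance)
expected = t**3 / 3 - dt * t**2 / 4 - dt**2 * t / 12
print(np.max(np.abs(sol - expected)))
```
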
| 202
|
solve differential equations
|
Solving Differential equations in Matlab, ode45
|
https://stackoverflow.com/questions/16449166/solving-differential-equations-in-matlab-ode45
|
<p>I'm trying to solve a system of three differential equations with the function ode45 in Matlab. I do not really understand the errors I am getting and I could use some help understanding what I'm doing wrong.</p>
<p>The differential equations are the following:</p>
<pre><code>F1 = -k1y1+k2(y2-y1)
F2 = -k2(y2-y1)+k3(y3-y2)
F3 = -k3(y3-y2)
</code></pre>
<p>And my code in Matlab is this:</p>
<pre><code>function dz = kopplad(t, z)
global m1 m2 m3 k1 k2 k3
dz = [z(2)
-k1*z(1)/m1 + k2*(z(2)-z(1))/m1
z(4)
-k2*(z(2)-z(1))+k3(z(3)-z(2))/m2
z(6)
-k3(z(3)-z(2))/m3];
global m1 m2 m3 k1 k2 k3
m1 = 0.75; m2 = 0.40; m3 = 0.65;
k1 = 0.85; k2 = 1.1; k3 = 2.7;
[t, z] = ode45(@kopplad, [0, 50], [0 -1 0 1 0 0]);
plot(t, z(:,1))
hold on
plot(t,z(:,3),'--')
plot(t,z(:,5),'*')
legend('y_1', 'y_2', 'y_3');
hold off
</code></pre>
<p>The errors I'm receiving are the following: </p>
<blockquote>
<p>Attempted to access <code>k3(1.00002)</code>; index must be a positive integer or
logical.</p>
<p>Error in <code>kopplad (line 3) dz = [z(2)</code></p>
<p>Error in ode45 (line 262) <code>f(:,2) =
feval(odeFcn,t+hA(1),y+f*hB(:,1),odeArgs{:});</code></p>
<p>Error in diffekv (line 6) <code>[t, z] = ode45(@kopplad, [0, 50], [0 -1 0 1
0 0]);</code></p>
</blockquote>
|
<p>It has been a while since I did any Matlab programming, but as far as I remember you should pass the variables to the function, i.e. write <code>@(x,y)kopplad(x,y)</code>, if I understand your code correctly. Also note what the error message itself is telling you: <code>k3(z(3)-z(2))</code> is interpreted as indexing into <code>k3</code>, so you need an explicit multiplication, <code>k3*(z(3)-z(2))</code>. If the rest (global variables and equations) is correct, everything should be fine.</p>
| 203
|
solve differential equations
|
trying to solve differential equations simultaneously
|
https://stackoverflow.com/questions/65436977/trying-to-solve-differential-equations-simultaneously
|
<p>I am trying to build a code for chemical reactor design which is able to solve for the pressure drop, conversion, and temperature of a reactor. All these parameters have differential equations, so I tried to define them inside a function to be able to integrate them using ODEINT. However, it seems the function I've built has an error which I can't figure out, and it has held me back from integrating it.</p>
<p>The error that I'm encountering:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-63c706e84be5> in <module>
1 X0=[0.05,1200,2]
----> 2 y=func(X0,0)
<ipython-input-27-6cfd4fef5ee2> in func(x, W)
8 kp=np.exp(((42311)/(R*T))-11.24)
9 deltah=-42471-1.563*(T-1260)+0.00136*(T**2 -1260**2)- 2,459*10e-7*(T**3-1260**3)
---> 10 ra=k*np.sqrt((1-X)/X)*((0.2-0.11*X)/(1-0.055*X)*(P/P0)-(x/(kp*(1-x)))**2)
11 summ = 57.23+0.014*T-1.94*10e-6*T**2
12 dcp=-1.5625+2.72*10e-3*T-7.38*10e-7*T**2
TypeError: unsupported operand type(s) for -: 'int' and 'list'
</code></pre>
<p>and here is the full code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
fi = 0.45
gas_density = 0.054 #density
Pres0 = 2 #pressure
visc = 0.09
U = 10
Ac = 0.0422
T0 = 1400 #and 1200 also
gc = 4.17 * 10**8
bed_density = 33.8
Ta = 1264.6 #wall temp
fa0 = 0.188
def func(x,W):
X = x[0]
T = x[1]
P = x[2]
P0 = 2
R = 0.7302413
k = np.exp((-176008 / T) - (110.1 * np.log(T) + 912.8))
kp = np.exp(((42311) / (R * T)) - 11.24)
deltah = -42471 - 1.563 * (T - 1260) + 0.00136 * (T**2 - 1260**2) - 2,459 * 10e-7 * (T**3 - 1260**3)
ra = k * np.sqrt((1 - X) / X) * ((0.2 - 0.11 * X) / (1 - 0.055 * X) * (P / P0) - (x / (kp * (1 - x)))**2)
summ = 57.23 + 0.014 * T - 1.94 * 10e-6 * T**2
dcp = -1.5625 + 2.72 * 10e-3 * T - 7.38 * 10e-7 * T**2
dxdw = 5.31 * k * np.sqrt((1 - X) / X) * ((0.2 - 0.11 * X) / (1 - 0.055 * X) * (P / P0) - (x / (kp * (1 - x)))**2)
dpdw = (((-1.12 * 10**-8) * (1 - 0.55 * X) * T) / P) * (5500 * visc + 2288)
dtdw = (5.11 * (Ta - T) + (-ra) * deltah) / (fa0 * (summ + x * dcp))
return [dxdw, dpdw, dtdw]
X0 = [0.05, 1200, 2]
y = func(X0, 0)
</code></pre>
<p>thanks in advance</p>
|
<p>Inside line</p>
<pre><code>ra=k*np.sqrt((1-X)/X)*((0.2-0.11*X)/(1-0.055*X)*(P/P0)-(x/(kp*(1-x)))**2)
</code></pre>
<p>you probably want to use <code>...X/(kp*(1-X))...</code> instead of <code>...x/(kp*(1-x))...</code> (i.e. use upper X), lower <code>x</code> is list type.</p>
<p>If you want to use some list variable <code>l</code> as multiple values somewhere then convert it to numpy array <code>la = np.array(l)</code> and use <code>la</code> in numpy vectorized expression.</p>
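<p>The difference between a plain list and a NumPy array is easy to see in isolation; this is exactly the <code>TypeError</code> from the traceback:</p>

```python
import numpy as np

x = [0.05, 1200, 2]  # plain Python list, as passed in func(X0, 0)

raised = False
try:
    1 - x  # arithmetic between an int and a list raises TypeError
except TypeError:
    raised = True

xa = np.array(x)
result = 1 - xa  # elementwise subtraction: array([0.95, -1199., -1.])
print(raised, result)
```

<p>Note that when <code>odeint</code> calls the function itself, it passes the state as an ndarray, so the error only appears when the function is called by hand with a list.</p>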
| 204
|
solve differential equations
|
Solve differential equation with SymPy
|
https://stackoverflow.com/questions/45308747/solve-differential-equation-with-sympy
|
<p>I have the following differential equation that I would like to solve with SymPy</p>
<p><a href="https://i.sstatic.net/bYuJG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bYuJG.png" alt="enter image description here"></a></p>
<p>This differential equation has the implicit solution (with h(0) = [0,1) and t = [0, inf) )</p>
<p><a href="https://i.sstatic.net/SwIaK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SwIaK.png" alt="enter image description here"></a></p>
<p>but SymPy gives</p>
<p><a href="https://i.sstatic.net/ABcvN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ABcvN.png" alt="enter image description here"></a></p>
<p>which other packages such as Maxima are able to find. With SymPy I am unable, however. Is there a way to do so? My code is</p>
<pre><code>import sympy as sp
sp.init_printing(use_unicode=True)
h = sp.symbols('h', function=True)
t = sp.symbols('t')
eq = sp.Eq(sp.Derivative(h(t),t), (1 - h(t))**sp.Rational(4,3) / h(t))
sp.dsolve(eq)
</code></pre>
|
<p>SymPy leaves the integral unevaluated because it is unsure about the sign of 1-y in the integral. </p>
<p>The differential equation has a singularity at h=1, and its behavior depends on which side of 1 we are on. There isn't a way to state that h(t) < 1 directly, but one can substitute h(t) = 1 - g(t) where g is a positive function: </p>
<pre><code>g = sp.symbols('g', function=True, positive=True)
eq1 = eq.subs(h(t), 1 - g(t))
print(sp.dsolve(eq1))
</code></pre>
<p>This returns an explicit solution of the ODE (actually three of them, as SymPy solves a cubic equation). The first one of those looks reasonable.</p>
<pre><code>Eq(g(t), (-2*(C1 + t)/(sqrt(-8*(C1 + t)**3 + 729) + 27)**(1/3) - (sqrt(-8*(C1 + t)**3 + 729) + 27)**(1/3))**3/27)
</code></pre>
| 205
|
solve differential equations
|
Any way to solve a system of coupled differential equations in python?
|
https://stackoverflow.com/questions/16909779/any-way-to-solve-a-system-of-coupled-differential-equations-in-python
|
<p>I've been working with sympy and scipy, but can't find or figure out how to solve a system of coupled differential equations (non-linear, first-order). </p>
<p>So is there any way to solve coupled differential equations? </p>
<p>The equations are of the form:</p>
<pre><code>v11'(s) = -12*v12(s)**2
v22'(s) = 12*v12(s)**2
v12'(s) = 6*v11(s)*v12(s) - 6*v12(s)*v22(s) - 36*v12(s)
</code></pre>
<p>with initial conditions for v11(s), v22(s), v12(s). </p>
|
<p>For the numerical solution of ODEs with scipy, see <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html" rel="nofollow noreferrer"><code>scipy.integrate.solve_ivp</code></a>, <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html" rel="nofollow noreferrer"><code>scipy.integrate.odeint</code></a> or <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html" rel="nofollow noreferrer">scipy.integrate.ode</a>.</p>
<p>Some examples are given in the <a href="https://scipy-cookbook.readthedocs.io/" rel="nofollow noreferrer">SciPy Cookbook</a> (scroll down to the section on "Ordinary Differential Equations").</p>
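<p>For the specific system in the question, a <code>solve_ivp</code> call looks like the sketch below. The initial values and the integration interval are made up for illustration; note that v11 + v22 is conserved, since the first two right-hand sides cancel, which gives a handy sanity check on the numerics:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, v):
    # state vector v = [v11, v22, v12]
    v11, v22, v12 = v
    return [-12 * v12**2,
            12 * v12**2,
            6 * v11 * v12 - 6 * v12 * v22 - 36 * v12]

# illustrative initial conditions v11(0)=1, v22(0)=2, v12(0)=0.5
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 2.0, 0.5], rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])  # state at s = 1
```
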
| 206
|
solve differential equations
|
How to solve these differential equations?
|
https://stackoverflow.com/questions/39543756/how-to-solve-these-differential-equations
|
<p>I am working on a heat exchanger and found these differential equations in a paper. I have never seen equations like these: as you can see, even though each is a first-order differential equation, a term "dy" is always left hanging on the right side of the equation.</p>
<p>I am trying to solve them in MATLAB, but because of the dy term I am not able to put them into equation form.</p>
<p>Can anyone help me simplify the equations, or advise how such equations can be solved in MATLAB?</p>
<p>These are the equations:</p>
<p><a href="https://i.sstatic.net/xqEk3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xqEk3.png" alt="These are the equations"></a></p>
|
<p>Look at Eqs (23) and (24). They specify the definitions of \dot{m}_p and \dot{m}_s so that when you plug them into Eqs (19) and (20) they lose the floating differentials.</p>
| 207
|
solve differential equations
|
Solving partial differential equations using C#
|
https://stackoverflow.com/questions/1452255/solving-partial-differential-equations-using-c
|
<p>I am working on a project (C# and .NET Framework) which requires me to solve some partial differential equations. Are there any specific libraries based on .NET Framework that I could see and make my work simpler?</p>
<p>I have worked with MATLAB, where solving partial differential equations is very straightforward. How can I solve this problem?</p>
|
<p>You could solve the problem in MATLAB and use the <a href="http://www.mathworks.co.uk/products/compiler/" rel="nofollow noreferrer">MATLAB compiler</a> + <a href="http://www.mathworks.co.uk/products/netbuilder/" rel="nofollow noreferrer">Builder NE toolbox</a> to create a .NET assembly which links to the rest of your app.</p>
| 208
|
solve differential equations
|
matlab solving numerical differential equations [pic]
|
https://stackoverflow.com/questions/74961611/matlab-solving-numerical-differential-equations-pic
|
<p>I'm trying to solve these differential equations numerically; can someone help?
<img src="https://i.sstatic.net/3Qtwf.png" alt="pic of the differential equations" /></p>
<pre><code>clc;
clear;
syms A1(z) A2(z)
lamda1 = 1560*(10^-9);
c=3*(10^8);
d_eff=27*(10^-12);
omga1=(2*pi*c)/(lamda1);
omga2=omga1*2;
n=2.2;
k1=(n*omga1)/c;
k2=(n*omga2)/c;
ode1 = diff(A1) == (2*i*(omga1^2)*d_eff*A2*conj(A1)*exp(-i*(2*k1-k2)*z))/(k1*(c^2));
ode2 = diff(A2) == (i*(omga2^2)*d_eff.*(A1.^2).*exp(i*(2*k1-k2)*z))/(k2*(c^2));
odes = [ode1; ode2];
cond1 = A1(0) == 1;
cond2 = A2(0) == 0;
conds = [cond1; cond2];
M = matlabFunction(odes)
sol = ode45(M(1),[0 20],[2 0]);
</code></pre>
|
<p>In this question both <strong>ODE</strong>s are coupled, hence there's only one <strong>ODE</strong> to solve:</p>
<p><strong>1.-</strong> use 1st equation to write</p>
<pre><code>A1=f(A2,dA2/dz)
</code></pre>
<p>and feed this expression into 2nd equation.</p>
<p><strong>2.-</strong> regroup</p>
<pre><code>n1=1j*k1^4/k2^2*1/deff*.25
n2=1j*3*(k1-k2)
</code></pre>
<p>now the <strong>ODE</strong> to solve is</p>
<pre><code>y'=n1/y^2*exp(n2*z)
</code></pre>
<p><strong>3.-</strong> It can obviously be done in MATLAB, but for this particular <strong>ODE</strong> in my opinion the <strong>Wolfram online ODE Symbolic Solver</strong> does a better job.</p>
<p>Input the <strong>ODE</strong> obtained in the previous point into the <strong>ODE</strong> solver available at this link</p>
<p><a href="https://www.wolframalpha.com/input?i=y%27%27+%2B+y+%3D+0" rel="nofollow noreferrer">https://www.wolframalpha.com/input?i=y%27%27+%2B+y+%3D+0</a></p>
<p>and solve</p>
<p><strong>4.-</strong> The general (symbolic) solutions for <code>A2</code> are</p>
<p><a href="https://i.sstatic.net/QjDxF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QjDxF.jpg" alt="enter image description here" /></a></p>
<p>Note that I used <code>k1</code> instead of <code>n1</code> and <code>k2</code> instead of <code>n2</code> just in the Wolfram <strong>ODE</strong> Solver.</p>
<p>Rewording: the <code>k1</code> and <code>k2</code> appearing in the general solutions are not the wave numbers <code>k1</code>, <code>k2</code> of the equations in the question. Just replace accordingly.</p>
<p><strong>5.-</strong> Now get <code>A1</code> using expression in point 2.</p>
<p><strong>6.-</strong> I have spotted 2 possible errors in the MATLAB code posted in the question that shouldn't be ignored by the question originator:</p>
<p>In the far right side of the posted MATLAB code, the exponential expressions</p>
<p><strong>6.1.-</strong> both show</p>
<pre><code>*exp(-i*(2*k1-k2)*z))/(k1*(c^2))
</code></pre>
<p>with <code>(k1*(c^2))</code> dividing the exponential, whereas in the question none of the exponentials shows such a denominator.</p>
<p><strong>6.2.-</strong> the <code>dk</code> (or <code>delta k</code>) in the exponentials of the question is clearly <code>k2-k1</code> or <code>k1-k2</code>. There is room for a sign ambiguity here, which may send a wave solution in the opposite direction, but the point is that</p>
<pre><code>*exp(-1i*(2*k1-k2)*z)
</code></pre>
<p>should probably be</p>
<pre><code>*exp(-1i*2*(k1-k2)*z)
</code></pre>
<p>or just</p>
<pre><code>exp(-1i*(k1-k2)*z)
</code></pre>
<p><strong>6.3.-</strong> and yes, in MATLAB <code>(-1)^.5</code> can be expressed with either <code>1j</code> or <code>1i</code>, but as written in the MATLAB code made available in the question, and since only a chunk of code has been shared, it's fair to assume that no <code>i=1j</code> assignment has been made.</p>
| 209
|
solve differential equations
|
Solving Differential equations with 7 unknowns
|
https://stackoverflow.com/questions/31263825/solving-differential-equations-with-7-unknowns
|
<p>I want to solve the following 7 differential equations, which are functions of time, for the 7 unknowns:</p>
<pre><code>eo(t)=f1(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
e1(t)=f2(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
e2(t)=f3(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
e3(t)=f4(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
w1(t)=f5(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
w2(t)=f6(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
w3(t)=f7(e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
</code></pre>
<p>I have generated the equations of <code>e0, e1, e2, e3, w1, w2, w3</code>.
Now, how do I solve these equations, and which commands are needed?</p>
<p>I need to find the values of <code>e0, e1, e2, e3, w1, w2, w3</code> and get the numerical value of these with respect to <code>t</code>.</p>
<p>The equations which have to be solved are</p>
<pre><code>e0 = - (e_1(t)*w_1(t))/2 - (e_2(t)*w_2(t))/2 - (e_3(t)*w_3(t))/2
e1 = (e_0(t)*w_1(t))/2 - (e_2(t)*w_3(t))/2 - (e_3(t)*w_2(t))/2
e2 =(e_0(t)*w_2(t))/2 - (e_1(t)*w_3(t))/2 + (e_3(t)*w_1(t))/2
e3 = (e_0(t)*w_3(t))/2 + (e_1(t)*w_2(t))/2 - (e_2(t)*w_1(t))/2
w1 = w_2(t)*(1.98019*e_3(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 1.980*e_0(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t)) - 1.980*e_1(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) - 1.9801*e_2(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t))) - 1.0*w_1(t)*(1.0*e_0(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 1.0*e_1(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t)) - 1.0*e_2(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) - 1.0*e_3(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t))) - 1.0*w_3(t)*(1.0*e_0(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) - 1.0*e_2(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 1.0*e_1(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t)) + 1.0*e_3(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t))) - (63.366*kappa^2*(0.72470*w_1(t) + 0.355*kappa*(2.0*e_0(t)e_1(t)(2.0*e_0(t)*e_3(t) + 2.0*e_1(t)*e_2(t)) + 2.0*e_1(t)e_3(t)(e_0(t)^2 - 1.0*e_1(t)^2 + e_2(t)^2 - 1.0*e_3(t)^2)) - 0.3623*kappa*((e_0(t)^2 + e_1(t)^2 - 1.0*e_2(t)^2 - 1.0*e_3(t)^2)*(e_0(t)^2 - 1.0*e_1(t)^2 + e_2(t)^2 - 1.0*e_3(t)^2) + (2.0*e_0(t)*e_3(t) + 2.0*e_1(t)e_2(t))(2.0*e_0(t)*e_3(t) - 2.0*e_1(t)*e_2(t)))))/(l^5*rho)
w2 = w_3(t)*(0.505*e_1(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) - 0.505*e_0(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t)) + 0.505*e_2(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t)) + 0.505*e_3(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t))) - 1.0*w_1(t)*(0.505*e_3(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 0.505*e_0(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t)) - 0.505*e_1(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) - 0.505*e_2(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t))) - 1.0*w_2(t)*(1.0*e_0(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 1.0*e_1(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t)) - 1.0*e_2(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) - 1.0*e_3(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t))) - (32.0*kappa^2*(0.7184*w_2(t) - 0.3592*kappa*(2.0*e_0(t)e_1(t)(e_0(t)^2 + e_1(t)^2 - 1.0*e_2(t)^2 - 1.0*e_3(t)^2) + 2.0*e_1(t)e_3(t)(2.0*e_0(t)*e_3(t) - 2.0*e_1(t)*e_2(t)))))/(l^5*rho)
w3 = w_1(t)*(1.0*e_2(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) - 1.0*e_0(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) + 1.0*e_1(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t)) + 1.0*e_3(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)w_2(t))) + w_2(t)(1.980*e_0(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t)) - 1.980*e_1(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 1.980*e_2(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t)) + 1.980*e_3(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t))) - 1.0*w_3(t)*(1.0*e_0(t)*(e_1(t)*w_1(t) + e_2(t)*w_2(t) + e_3(t)*w_3(t)) + 1.0*e_1(t)*(e_2(t)*w_3(t) - 1.0*e_0(t)*w_1(t) + e_3(t)*w_2(t)) - 1.0*e_2(t)*(e_0(t)*w_2(t) - 1.0*e_1(t)*w_3(t) + e_3(t)*w_1(t)) - 1.0*e_3(t)*(e_0(t)*w_3(t) + e_1(t)*w_2(t) - 1.0*e_2(t)*w_1(t))) + (63.366*kappa^2*(0.3551*kappa*((e_0(t)^2 + e_1(t)^2 - 1.0*e_2(t)^2 - 1.0*e_3(t)^2)*(e_0(t)^2 - 1.0*e_1(t)^2 + e_2(t)^2 - 1.0*e_3(t)^2) - 1.0*(2.0*e_0(t)*e_3(t) + 2.0*e_1(t)e_2(t))(2.0*e_0(t)*e_3(t) - 2.0*e_1(t)*e_2(t))) - 0.724*w_3(t) + 0.362*kappa*((e_0(t)^2 + e_1(t)^2 - 1.0*e_2(t)^2 - 1.0*e_3(t)^2)*(e_0(t)^2 - 1.0*e_1(t)^2 + e_2(t)^2 - 1.0*e_3(t)^2) + (2.0*e_0(t)*e_3(t) + 2.0*e_1(t)e_2(t))(2.0*e_0(t)*e_3(t) - 2.0*e_1(t)*e_2(t)))))/(l^5*rho)
</code></pre>
<p>I used this code in MATLAB after assigning the values</p>
<pre><code>soll=ode45(e0,e1,e2,e3,w1,w2,w3,e_0(t),e_1(t),e_2(t),e_3(t),w_1(t),w_2(t),w_3(t))
</code></pre>
<p>But I got the following error message:</p>
<blockquote>
<p>Undefined function 'exist' for input arguments of type 'sym'.</p>
<p>Error in odearguments (line 59) if (exist(ode)==2)</p>
<p>Error in ode45 (line 113) [neq, tspan, ntspan, next, t0, tfinal, tdir, y0, f0, odeArgs, odeFcn, ...</p>
</blockquote>
<p>Please enlighten me where I have gone wrong.</p>
|
<p>This error means that <code>ode45</code> cannot solve symbolic equations (your variables are of type 'sym'). In fact <a href="https://en.wikipedia.org/wiki/Dormand%E2%80%93Prince_method" rel="nofollow">ode45</a> is a numerical solver which works on functions, not on symbolical expressions. Here's how to <a href="http://de.mathworks.com/matlabcentral/answers/25568-defining-functions" rel="nofollow">define a function in Matlab</a>:</p>
<blockquote>
<p>First, you need to create a new m-file then type this code</p>
</blockquote>
<pre><code>function y = f(x)
y = 2 * (x^3) + 7 * (x^2) + x;
</code></pre>
<blockquote>
<p>Save with filename 'f.m'</p>
</blockquote>
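In the same spirit (shown here in Python rather than MATLAB), symbolic expressions can be compiled into the plain numeric function a Runge-Kutta solver expects; a sketch with SymPy's `lambdify` and SciPy's `solve_ivp` on a stand-in two-state system:

```python
import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

t, x, y = sp.symbols('t x y')
rhs_sym = sp.Matrix([y, -x])   # harmonic oscillator as a stand-in system

# lambdify compiles the symbolic RHS into a numeric function --
# this is the step ode45-style solvers need, since they cannot accept 'sym' input
rhs_num = sp.lambdify((t, x, y), rhs_sym, 'numpy')

def f(t_, state):
    return np.ravel(rhs_num(t_, *state))

sol = solve_ivp(f, (0.0, np.pi), [0.0, 1.0], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])   # analytically (sin(pi), cos(pi)) = (0, -1)
```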
| 210
|
solve differential equations
|
Solving differential Equation using BOOST Libraries
|
https://stackoverflow.com/questions/58036187/solving-differential-equation-using-boost-libraries
|
<p>I have to find the next state of my robot, which can be found by solving differential equations. In MATLAB I used the ode45 function, and for C++ I found on the internet that I have to use a method like the stepper runge_kutta_dopri5. I tried to understand its implementation and somehow got an idea. Now my states are X, Y and theta, and to find the next states my differential equations are</p>
<pre><code>Xdot=v*cos(theta)
Ydot=v*sin(theta)
thetadot=w
</code></pre>
<p>Now there is a public function stepper.do_step_impl(System, state, ...etc.) where, as I understood, System represents a function whose parameters can be &state, dstate and t. Now here is my problem: my differential equations have the variables v and w, which I need in my System function, but according to my understanding the System function can have only those fixed parameters, i.e. state, dstate and t. How do I put v and w in it? I hope someone understands my question. Kindly help. Below is my code.</p>
<pre><code>using namespace boost::numeric::odeint;
typedef std::vector< double > state_type;
void pdot_w( const state_type &state, state_type &dstate, const double t )
{
double p_robot_xdot = v*cos(state[2]);
double p_robot_ydot = v*sin(state[2]);
double thetadot = w;
dstate[0] = p_robot_xdot;
dstate[1] = p_robot_ydot;
dstate[2] = thetadot;
}
runge_kutta_dopri5<state_type> stepper;
stepper.do_step(pdot_w, state , 0, 0.01 );
</code></pre>
|
<p>The answer depends on how you want to pass the parameters <code>v</code> and <code>w</code> to the integrator.</p>
<p>One approach in C++11 is to use a <a href="https://en.cppreference.com/w/cpp/language/lambda" rel="nofollow noreferrer">lambda</a>. Your function and a call to the stepper would look like this (depending on context, the capture <code>[]</code> may need to be more explicit):</p>
<pre><code>void pdot_w(const state_type &state,
state_type & dstate,
const double t,
const double v,
const double w) {
dstate[0] = v * cos(state[2]);
dstate[1] = v * sin(state[2]);
dstate[2] = w;
}
runge_kutta_dopri5<state_type> stepper;
/* v = 4 and w = 5 example */
stepper.do_step([](const state_type & state, state_type & d_state, const double t) {
return pdot_w(state, d_state, t, 4, 5);
}, state,
0, 0.01);
</code></pre>
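The same parameter-passing question comes up with SciPy's integrators; as a cross-language sketch, `solve_ivp` forwards extra parameters through its `args` keyword (SciPy >= 1.4), playing the role of the lambda capture above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# unicycle model from the question, state = [x, y, theta]
def pdot(t, state, v, w):
    x, y, theta = state
    return [v * np.cos(theta), v * np.sin(theta), w]

# v and w are threaded through via args instead of being captured
sol = solve_ivp(pdot, (0.0, 1.0), [0.0, 0.0, 0.0],
                args=(1.0, 0.0), rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])   # with w = 0 the robot drives straight along x
```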
| 211
|
solve differential equations
|
How to solve three quadratic differential equations in Python?
|
https://stackoverflow.com/questions/50552094/how-to-solve-three-quadratic-differential-equations-in-python
|
<p>I've just started to use Python for scientific plotting of numerical solutions of differential equations. I know how to use modules to solve and plot single differential equations, but I have no idea about systems of differential equations. How can I plot the following coupled system?</p>
<p>My system of differential equation is:</p>
<p><code>dw/dx=y</code> and<br>
<code>dy/dx=-a-3*H*y</code> and<br>
<code>dz/dx=-H*(1+z)</code> </p>
<p>that <code>a = 0.1</code> and <code>H=sqrt((1+z)**3+w+u**2/(2*a))</code></p>
<p>And my code is:</p>
<pre><code>import numpy as N
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def model(w,y,z,x,H):
dwdx=y
dydx=-a-3*H*y
dzdx=-H*(1+z)
a=0.1
H=sqrt((1+z)**3+w+u**2/(2*a))
return [w,y,z,H]
z0=1100 #initial condition
w0=-2.26e-8
y0=-.38e-4
H0=36532.63
b=0
c=10000
x=N.arange(b,c,0.01)
y=odeint(model,y0,x) #f=Function name that returns derivative values at requested y and t values as dydt = f(y,t)
w=odeint(model,w0,x)
z=odeint(model,z0,x)
plt.plot(w,x)
plt.plot(y,x)
plt.plot(z,x)
plt.legend(loc='best')
plt.show()
</code></pre>
|
<p>General purpose ODE integrators expect the dynamical system reduced to an abstract first order system. Such a system has a state vector space and the differential equation provides velocity vectors for that space. Here the state has 3 scalar components, which gives a 3D vector as state. If you want to use the components separately, the first step in the ODE function is to extract these components from the state vector, and the last step is to compose the return vector from the derivatives of the components in the correct order.</p>
<p>Also, you need to arrange the computation steps in order of dependence</p>
<pre><code>def model(u,t):
w, y, z = u
a=0.1
H=sqrt((1+z)**3+w+u**2/(2*a))
dwdx=y
dydx=-a-3*H*y
dzdx=-H*(1+z)
return [dwdx, dydx, dzdx]
</code></pre>
<p>and then call the integrator once with the combined initial state</p>
<pre><code>u0 = [ w0, y0, z0]
u = odeint(model, u0, x)
w,y,z = u.T
</code></pre>
<p>Please also check the arguments of the plot function, the general scheme is <code>plot(x,y)</code>.</p>
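The pack/unpack pattern is worth seeing end to end on a system with a known answer; a self-contained sketch (not the cosmology system from the question):

```python
import numpy as np
from scipy.integrate import odeint

def model(u, x):
    w, y = u                 # unpack the state vector
    dwdx = y
    dydx = -w
    return [dwdx, dydx]      # repack the derivatives in the same order

x = np.linspace(0.0, np.pi, 201)
u = odeint(model, [1.0, 0.0], x)   # one combined initial state [w0, y0]
w, y = u.T                          # split the solution into components
print(w[-1])                        # w = cos(x), so w(pi) is close to -1
```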
| 212
|
solve differential equations
|
Error while solving differential equations matlab
|
https://stackoverflow.com/questions/37668362/error-while-solving-differential-equations-matlab
|
<p>I'm looking for a way to solve these differential equations. </p>
<p><img src="https://i.sstatic.net/TVpfw.png" alt="equations"></p>
<p>I've got little experience solving this type of equation, so here's my novice code</p>
<pre><code>function lab1()
[t,h]=ode45('threepoint', [0 1000], [0 0.25]);
plot(t, h);
function f = threepoint(x, y)
%THREEPOINT Summary of this function goes here
% Detailed explanation goes here
m = 15000;
R1 = 0.1;
R2 = 0.1;
P1 = 0.1;
P2 = 0.1;
B = 4;
rbk = 0.5;
f=[x(6);
x(7);
x(8);
x(9);
x(10);
(-x(6) / (m * sqrt( x(6)*x(6) + x(7)*x(7)))) * (R1 + R2) + (cos(x(3))/m) * P1 + (cos(x(3))/m) * P2;
(-x(7) / (m * sqrt( x(6)*x(6) + x(7)*x(7)))) * (R1 + R2) + (cos(x(3))/m) * P1 + (cos(x(3))/m) * P2;
-(M/I) - (1/I1)* (B/2 + y(1))*P1 + (1/I2)*(B/2+y(2))*P2;
(rbk/I1)*(P1-R1);
(rbk/I2)*(P2-R2);
];
end
</code></pre>
<p>While running these functions I got errors like </p>
<blockquote>
<p>Index exceeds matrix dimensions.</p>
<p>Error in threepoint (line 11) f=[x(6);</p>
<p>Error in odearguments (line 87) f0 = feval(ode,t0,y0,args{:}); %
ODE15I sets args{1} to yp0.</p>
<p>Error in ode45 (line 113) [neq, tspan, ntspan, next, t0, tfinal, tdir,
y0, f0, odeArgs, odeFcn, ...</p>
<p>Error in lab1 (line 2) [t,h]=ode45('threepoint', [0 1000], [0 0.25]);</p>
</blockquote>
<p>Can anyone please show me where I am mistaken and how I can fix these errors? Thank you in advance!</p>
|
<p>Please take a close look at <code>help ode45</code>. I admit this part of the usage might not be clear, so take a look at <code>doc ode45</code> too.</p>
<p>Here's the essence of your problem. You want to solve a differential equation of 10 variables, each a function of <code>t</code>. So, how does a general solver like <code>ode45</code> know that it needs to work with 10-component arrays at each time step? From the only place it can: the initial value!</p>
<p>Here's how you're calling <code>ode45</code>:</p>
<pre><code>[t,h]=ode45('threepoint', [0 1000], [0 0.25]);
</code></pre>
<p>The second array, <code>Y0</code>, is the initial value. This is clear from the docs. Since you're supplying a 2-element vector, <code>ode45</code> smartly realizes that you have only 2 equations. When you try to use <code>Y(6)</code> and other high-index values in <code>threepoint()</code>, you get the error you're getting: <em>Index exceeds matrix dimensions</em>. Because your index, <code>6</code>, exceeds the size of the array, which is <code>2</code>.</p>
<p>You always need to use initial values with the proper size (length-10 in your specific case) in order to inform the solver about the dimensions of your problem. Even if <code>ode45</code> could assert the size of your equations from somewhere, you <em>really have to provide</em> initial values for all 10 components of your solution.</p>
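The same contract holds in other solver APIs; for instance SciPy's `solve_ivp` also infers the system size from `len(y0)`, so a 10-equation problem needs a length-10 initial vector (a sketch, not the physics from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # a trivial 10-component system: y_k' = -k * y_k
    return [-k * yk for k, yk in enumerate(y)]

y0 = np.ones(10)                         # one initial value per equation --
sol = solve_ivp(rhs, (0.0, 1.0), y0)     # this is how the solver learns the size
print(sol.y.shape)                       # (10, n_steps): ten components
```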
| 213
|
solve differential equations
|
Differential Equations in Python
|
https://stackoverflow.com/questions/5847201/differential-equations-in-python
|
<p>I'm working with a DE system, and I wanted to know which is the most commonly used Python library for solving differential equations, if any.</p>
<p>My equations are non-linear first-order equations.</p>
|
<p>If you need to solve large nonlinear systems (especially stiff ones), the scipy tools will be slow and awkward. The <a href="http://pydstool.sourceforge.net/" rel="noreferrer" title="PyDSTool">PyDSTool</a> package is now quite commonly used in this situation. It lets your equations be automatically converted into C code and integrates them with good solvers. It's especially good if you want to define state-defined events such as threshold crossings, add external input signals from arrays, or have other analyses done (such as bifurcation analysis, as the package includes an interface to AUTO).</p>
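If PyDSTool is too heavy a dependency, SciPy's `solve_ivp` now covers the threshold-crossing events mentioned above; a minimal sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return [-y[0]]             # simple exponential decay

def crosses_half(t, y):
    return y[0] - 0.5          # a root means the state crossed the 0.5 threshold

sol = solve_ivp(rhs, (0.0, 5.0), [1.0], events=crosses_half,
                rtol=1e-10, atol=1e-12)
print(sol.t_events[0])         # analytically the crossing is at ln(2)
```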
| 214
|
solve differential equations
|
Solve Differential equation using Python PyDDE solver
|
https://stackoverflow.com/questions/25926957/solve-differential-equation-using-python-pydde-solver
|
<p>I am trying to solve the following differential equation using the Python package PyDDE:</p>
<pre><code>dy[i]/dt = w[i] + (K/N) * sum_{j=1..N} sin(y[j] - y[i]),   i = 1, 2, ..., N = 50
</code></pre>
<p>Below is the python code to solve this equation</p>
<pre><code>from numpy import random, sin, arange, pi, array, zeros
import PyDDE.pydde as p
def odegrad(s, c, t):
global N
K = c[0]
theta = s[0]
w = random.standard_cauchy(N)
for i in range(N):
coup_sum = 0.0
for j in range(N):
coup_sum += sin(theta[j] - theta[i])
theta[i] = w[i] + (K*coup_sum)/(float (N))
return array([theta])
# constant parameters
global N
N = 50
K = 1.0
# initial values for state theta
theta0 = zeros(N, float)
for i in range(N):
theta0[i] = random.uniform(0, 2*pi)
odecons = array([K])
odeist = array([theta0])
odestsc = array([0.0])
ode_eg = p.dde()
ode_eg.dde(y=odeist, times=arange(0.0, 300.0, 1.0),
func=odegrad, parms=odecons,
tol=0.000005, dt=1.0, hbsize=0, nlag=0, ssc=odestsc)
ode_eg.solve()
print ode_eg.data
</code></pre>
<p>I am getting the following error:</p>
<p>DDE Error: Something is wrong: perhaps one of the supplied variables has the wrong type?</p>
<p>DDE Error: Problem initialisation failed!</p>
<p>DDE Error: The DDE has not been properly initialised!</p>
<p>None</p>
|
<p>So I have had a look at what was going on internally, and both errors</p>
<pre><code>DDE Error: Something is wrong: perhaps one of the supplied variables has the wrong type?
DDE Error: Problem initialisation failed!
</code></pre>
<p>come from the following operation failing: map(float,initstate) (see the <a href="https://github.com/hensing/PyDDE/blob/master/PyDDE/pydde.py" rel="nofollow">source</a>, line 162). This comes from the fact that Y and your other variables are vectors. Mostly this means that you should not use <code>array([theta])</code> but you should use <code>theta</code></p>
<p>Full script:</p>
<pre><code>from numpy import random, sin, arange, pi, array, zeros
import PyDDE.pydde as p
def odegrad(s, c, t):
global N
K = c[0]
#Change here
theta = s
w = random.standard_cauchy(N)
for i in range(N):
coup_sum = 0.0
for j in range(N):
coup_sum += sin(theta[j] - theta[i])
theta[i] = w[i] + (K*coup_sum)/(float (N))
#Change here
return theta
# constant parameters
global N
N = 50
K = 1.0
# initial values for state theta
theta0 = zeros(N, float)
for i in range(N):
theta0[i] = random.uniform(0, 2*pi)
odecons = array([K])
#Change here
odeist = theta0
odestsc = array([0.0])
ode_eg = p.dde()
ode_eg.dde(y=odeist, times=arange(0.0, 300.0, 1.0),
func=odegrad, parms=odecons,
tol=0.000005, dt=1.0, hbsize=0, nlag=0, ssc=odestsc)
#You should not use this line, as the last step in ode_eg.dde() is solve.
#ode_eg.solve()
print ode_eg.data
</code></pre>
| 215
|
solve differential equations
|
Euler beam, solving differential equation in python
|
https://stackoverflow.com/questions/48060894/euler-beam-solving-differential-equation-in-python
|
<p>I must solve the Euler Bernoulli differential beam equation which is:</p>
<pre><code>w''''(x) = q(x)
</code></pre>
<p>and boundary conditions:</p>
<pre><code>w(0) = w(l) = 0
</code></pre>
<p>and </p>
<pre><code>w''(0) = w''(l) = 0
</code></pre>
<p>The beam is as shown on the picture below:</p>
<p><a href="https://i.sstatic.net/jiBh1.png" rel="nofollow noreferrer">beam</a></p>
<p>The continious force <code>q</code> is <code>2N/mm</code>.</p>
<p>I have to use shooting method and <code>scipy.integrate.odeint()</code> func.</p>
<p>I can't even manage to start, as I do not understand how to write the differential equation as a system of equations.</p>
<p>Can someone who understands solving of differential equations with boundary conditions in python please help!</p>
<p>Thanks :)</p>
|
<p>You need to transform the ODE into a first-order system. Setting <code>u0=w</code>, one possible and commonly used system is</p>
<pre class="lang-none prettyprint-override"><code> u0'=u1,
u1'=u2,
u2'=u3,
u3'=q(x)
</code></pre>
<p>This can be implemented as</p>
<pre><code>def ODEfunc(u,x): return [ u[1], u[2], u[3], q(x) ]
</code></pre>
<p>Then make a function that shoots with trial initial conditions and returns the components of the second boundary condition</p>
<pre><code>def shoot(u01, u03): return odeint(ODEfunc, [0, u01, 0, u03], [0, l])[-1,[0,2]]
</code></pre>
<p>Now you have a function of two variables with two components and you need to solve this 2x2 system with the usual methods. As the system is linear, the shooting function is linear as well and you only need to find the coefficients and solve the resulting linear system.</p>
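Since the shooting function is affine in (u01, u03), three evaluations recover the linear system. A runnable sketch with an assumed beam length `l` (the question gives q = 2 N/mm but no length), checked against the classic midpoint deflection 5*q*l^4/384 for EI = 1:

```python
import numpy as np
from scipy.integrate import odeint

q = 2.0      # N/mm, from the question
l = 100.0    # mm -- an assumed value; the question does not state the length

def ODEfunc(u, x):
    return [u[1], u[2], u[3], q]

def shoot(u01, u03):
    # integrate with trial slopes w'(0) = u01, w'''(0) = u03,
    # return the second boundary condition [w(l), w''(l)]
    end = odeint(ODEfunc, [0.0, u01, 0.0, u03], [0.0, l])[-1]
    return end[[0, 2]]

# shoot() is affine: shoot(u) = A @ u + b, so solve A @ u = -b
b = shoot(0.0, 0.0)
A = np.column_stack([shoot(1.0, 0.0) - b, shoot(0.0, 1.0) - b])
u01, u03 = np.linalg.solve(A, -b)

w_mid = odeint(ODEfunc, [0.0, u01, 0.0, u03], [0.0, l / 2])[-1, 0]
print(w_mid, 5 * q * l**4 / 384)   # should agree for a uniform load, EI = 1
```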
| 216
|
solve differential equations
|
Solving Differential Equation Sympy
|
https://stackoverflow.com/questions/38950163/solving-differential-equation-sympy
|
<p>I haven't been able to find particular solutions to this differential equation.</p>
<pre><code>from sympy import *
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
v = Function('v')
f1 = g * m
t = Symbol('t')
v = Function('v')
equation = dsolve(f1 - k * v(t) - m * Derivative(v(t)), 0)
print equation
</code></pre>
<p>for m = 1000 and k = .2 it returns</p>
<pre><code>Eq(f(t), C1*exp(-0.0002*t) + 49000.0)
</code></pre>
<p>which is correct but I want the equation solved for when v(0) = 0 which should return</p>
<pre><code>Eq(f(t), 49000*(1-exp(-0.0002*t))
</code></pre>
|
<p>I believe Sympy is not yet able to take into account initial conditions. Although <code>dsolve</code> has the option <code>ics</code> for entering initial conditions (see the documentation), it appears to be of limited use. </p>
<p>Therefore, you need to apply the initial conditions manually. For example:</p>
<pre><code>C1 = Symbol('C1')
C1_ic = solve(equation.rhs.subs({t:0}),C1)[0]
print equation.subs({C1:C1_ic})
</code></pre>
<blockquote>
<p><code>Eq(v(t), 49000.0 - 49000.0*exp(-0.0002*t))</code></p>
</blockquote>
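For reference, newer SymPy releases do accept initial conditions directly through `dsolve`'s `ics` argument, which makes the manual substitution unnecessary there (behavior at the time of the original answer may have differed):

```python
import sympy as sp

t = sp.symbols('t')
v = sp.Function('v')
m, g, k = 1000.0, 9.8, 0.2

ode = sp.Eq(m * v(t).diff(t), m * g - k * v(t))
sol = sp.dsolve(ode, v(t), ics={v(0): 0})   # v(0) = 0 applied directly
print(sol.rhs)   # approaches the terminal velocity m*g/k = 49000 as t -> oo
```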
| 217
|
solve differential equations
|
Solve couple of differential equations using Python
|
https://stackoverflow.com/questions/54689574/solve-couple-of-differential-equations-using-python
|
<p>I have to solve this set of coupled differential equations</p>
<pre><code>dxdt = a - b*x*z
dydt = c*y*exp(-E)
dzdt = dxdt - sum(dydt*dE)
</code></pre>
<p>with a, b and c constants and E an array.</p>
<p>As you can see, I need to process an integration over E to determine dzdt. Thus, dydt is an array also.</p>
<p>I don't know how to solve this kind of equation using Python, because <code>odeint</code> requires the initial conditions to be one-dimensional. </p>
<p>Can you help me ?</p>
<p>Thanks in advance !</p>
| 218
|
|
solve differential equations
|
solving differential equations with parameters varying over intervals
|
https://stackoverflow.com/questions/39747767/solving-differential-equations-with-parameters-varying-over-intervals
|
<p>I would like to solve a system of differential equations with parameters varying over intervals.</p>
<p>Here is my code:</p>
<pre><code># LOADING PACKAGES
library(deSolve)
# DATA CREATION
t1 <- data.frame(times=seq(from=0,to=5,by=0.1),interval=c(rep(0,10),rep(1,20),rep(2,21)))
length(t1[which(t1$times<1),]) #10
length(t1[which(t1$times>=1&t1$times<3),]) #20
length(t1[which(t1$times>=3),]) #21
t1$mueDP=c(rep(3.1,10),rep(2.6,20),rep(1.1,21))
t1$mueHD=c(rep(2.6,10),rep(1.7,20),rep(1.3,21))
t1$mueTX=c(rep(1.9,10),rep(3.3,20),rep(1.3,21))
t1$tau12=c(rep(5.5,10),rep(2.7,20),rep(0.7,21))
t1$tau13=c(rep(3.5,10),rep(1.3,20),rep(2.3,21))
t1$tau21=c(rep(4,10),rep(1.8,20),rep(2.8,21))
t1$tau23=c(rep(2.1,10),rep(2.1,20),rep(1.1,21))
t1$tau31=c(rep(3.9,10),rep(3.6,20),rep(1.6,21))
t1$tau32=c(rep(5.1,10),rep(1.4,20),rep(0.4,21))
t1
# FUNCTION SOLVING THE SYSTEM
rigidode <- function(times, y, parms) {
with(as.list(y), {
dert.comp_dp=-(tau12)*comp_dp+(tau21)*comp_hd-(tau13)*comp_dp+(tau31)*comp_tx-(mueDP)*comp_dp
dert.comp_hd=-(tau21)*comp_hd+(tau12)*comp_dp-(tau23)*comp_hd+(tau32)*comp_tx-(mueHD)*comp_hd
dert.comp_tx=-(tau31)*comp_tx+(tau13)*comp_dp-(tau32)*comp_tx+(tau23)*comp_hd-(mueTX)*comp_tx
dert.comp_dc=(mueDP)*comp_dp+(mueHD)*comp_hd+(mueTX)*comp_tx
list(c(dert.comp_dp, dert.comp_hd, dert.comp_tx, dert.comp_dc))
})
}
times <- t1$times
mueDP=t1$mueDP
mueHD=t1$mueHD
mueTX=t1$mueTX
mu_attendu=t1$mu_attendu
tau12=t1$tau12
tau13=t1$tau13
tau21=t1$tau21
tau23=t1$tau23
tau31=t1$tau31
tau32=t1$tau32
parms <- c("mueDP","mueHD","mueTX","mu_attendu","tau12","tau13","tau21","tau23","tau31","tau32")
yini <- c(comp_dp = 30, comp_hd = 60,comp_tx = 10, comp_dc = 0)
out_lsoda <- lsoda (times = times, y = yini, func = rigidode, parms = parms, rtol = 1e-9, atol = 1e-9)
out_lsoda
</code></pre>
<p>The problem is that the function <code>rigidode</code> works only for constant parameters. I can't figure out how to vary my parameters over the intervals (from 0 to 2).</p>
<p>thanks</p>
|
<p>@Mily comment: Yes, it is possible with <code>t1</code>; here is the solution:</p>
<p>Define <code>t1</code> (Intervall is not needed in my point of view). </p>
<pre><code>t1 <- data.frame(times=seq(from=0, to=5, by=0.1))
t1$mueDP=c(rep(3.1,10),rep(2.6,20),rep(1.1,21))
t1$mueHD=c(rep(2.6,10),rep(1.7,20),rep(1.3,21))
t1$mueTX=c(rep(1.9,10),rep(3.3,20),rep(1.3,21))
t1$tau12=c(rep(5.5,10),rep(2.7,20),rep(0.7,21))
t1$tau13=c(rep(3.5,10),rep(1.3,20),rep(2.3,21))
t1$tau21=c(rep(4,10),rep(1.8,20),rep(2.8,21))
t1$tau23=c(rep(2.1,10),rep(2.1,20),rep(1.1,21))
t1$tau31=c(rep(3.9,10),rep(3.6,20),rep(1.6,21))
t1$tau32=c(rep(5.1,10),rep(1.4,20),rep(0.4,21))
</code></pre>
<p>Define the ODE function:</p>
<pre><code>rigidode <- function(times, y, parms,t1) {
## find out in which line of t1 `times` is
id <- min(which(times < t1$times))-1
parms <- t1[id,-1]
with(as.list(c(parms,y)), {
dert.comp_dp <- -(tau12)*comp_dp+(tau21)*comp_hd-(tau13)*comp_dp+(tau31)*comp_tx-(mueDP)*comp_dp
dert.comp_hd <- -(tau21)*comp_hd+(tau12)*comp_dp-(tau23)*comp_hd+(tau32)*comp_tx-(mueHD)*comp_hd
dert.comp_tx <- -(tau31)*comp_tx+(tau13)*comp_dp-(tau32)*comp_tx+(tau23)*comp_hd-(mueTX)*comp_tx
dert.comp_dc <- (mueDP)*comp_dp+(mueHD)*comp_hd+(mueTX)*comp_tx
return(list(c(dert.comp_dp, dert.comp_hd, dert.comp_tx, dert.comp_dc)))
})
}
times <- seq(from = 0, to = 5, by = 0.1)
yini <- c(comp_dp = 30, comp_hd = 60, comp_tx = 10, comp_dc = 0)
parms <- t1[1,-1]
out_lsoda <- lsoda(times = times, y = yini, func = rigidode, parms = parms, rtol = 1e-9, atol = 1e-9, t1 = t1)
out_lsoda
</code></pre>
<p>Note that in the call to <code>lsoda</code> the argument <code>t1 = t1</code> is passed through to the ODE function.</p>
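The interval-lookup idea is not R-specific. A Python sketch of the same piecewise-constant-parameter trick, integrating interval by interval so the solver never steps across a parameter jump (a toy one-state system, not the four-compartment model above):

```python
import numpy as np
from scipy.integrate import solve_ivp

breaks = [0.0, 1.0, 3.0, 5.0]   # interval edges, as in t1$times
k_vals = [3.1, 2.6, 1.1]        # one decay rate per interval (cf. mueDP)

def rhs(t, y, k):
    return [-k * y[0]]

y = 30.0                         # cf. comp_dp = 30
for a, b, k in zip(breaks[:-1], breaks[1:], k_vals):
    sol = solve_ivp(rhs, (a, b), [y], args=(k,), rtol=1e-10, atol=1e-12)
    y = sol.y[0, -1]

# closed form: 30 * exp(-(3.1*1 + 2.6*2 + 1.1*2)) = 30 * exp(-10.5)
print(y, 30.0 * np.exp(-10.5))
```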
| 219
|
solve differential equations
|
Solving differential equations on c++
|
https://stackoverflow.com/questions/46287315/solving-differential-equations-on-c
|
<p>The <strong>main</strong> function runs a loop that feeds vector values to a series of sequentially interdependent differential equations found in another function, <strong>gate_probabilities</strong>, which feeds the calculations back to main, and the loop repeats.</p>
<p>The solutions of the equations are each fed serially into their corresponding vectors; this takes place in both functions. Intermediate equations are <strong>m_new; h_new; n_new</strong>; the main vector to eventually be plotted against time is <strong>V</strong>. (Plotting is not in the code.)</p>
<p>The vectors are defined outside the function body (they are thought of as global, but I guess the issue is they're not properly declared as such).</p>
<p>Building produces no errors, but I do get a debug error on execution: "Abort() has been called".</p>
<p>Would appreciate any help :)</p>
<pre><code>#include "C:\Users\jenbe\Documents\Visual Studio 2015\std_lib_facilities.h"
#include <math.h>
#include <iostream>
#include <cmath>
#include <tuple>
#include <vector>
using namespace std;
vector<double> mvector;
vector<double> hvector;
vector<double> nvector;
vector<double> V; // potential vector
vector<double> timevector;
//global constants
double gL = 0.1; // mS/cm^2
double gK = 9;
double gNa = 35;
double EL = -65; // mV
double EK = -90;
double ENa = 55;
double phi = 5; // Coefficient increasing reaction speed of the alpha and beta constants which it multiplies.
double Iapp = 5; // microAmpere values (as in Wang-Buzsaki)
double runtime = 3000;// 3 Seconds
double dt = 0.001; // Timestep = 1ms
double V_init = -60;
std::tuple<double> gate_probabilities(double v, double m, double h, double n) {
double am, bm, ah, bh, an, bn, dh, dn;
//first stage -> gate probabilities
am = -0.1*(v + 35) / (exp(-0.1*(v + 35)) - 1); //the probability of a closed gate to open
bm = 4 * exp(-(v + 60) / 18); //the probability of an open gate to be closed.
ah = 0.07*exp(-(v + 58) / 20);
bh = 1 / (exp(-0.1*(v + 28)) + 1);
an = -0.01*(v + 34) / (exp(-0.1*(v + 34)) - 1);
bn = 0.125 * exp(-(v + 44) / 80);
//second stage -> gate states
dh = phi * ( ah*(1 - h) - bh*h); // inactivation variable h
dn = phi * ( an*(1 - n) - bn*n); // inward recitifier
double m_new = am / (am + bm); //activation variable
double h_new = dh*dt + h;
double n_new = dn*dt + n;
mvector.push_back(m_new);
hvector.push_back(h_new);
nvector.push_back(n_new);
// third stage -> currents
double IL = gL*(v - EL);
double INa = gNa * pow(m, 3) * h * (v - ENa);
double IK = gK * pow(n, 4) * (v - EK); // delayed recitifier
double currents = -INa - IK - IL + Iapp;
return std::make_tuple(currents);
}
int main() {
//initialize vectors - later modify with function within function
double ami = -0.1*(V_init + 35) / (exp(-0.1*(V_init + 35)) - 1);
double bmi = 4 * exp(-(V_init + 60) / 18);
double ahi = 0.07*exp(-(V_init + 58) / 20); //
double bhi = 1 / (exp(-0.1*(V_init + 28)) + 1);
double ani = -0.01*(V_init + 34) / (exp(-0.1*(V_init + 34)) - 1);
double bni = 0.125 * exp(-(V_init + 44) / 80);
V.push_back(V_init); //V_init
double m_init = (ami / (ami + bmi));
double h_init = (ahi / (ahi + bhi));
double n_init = (ani / (ani + bni));
mvector.push_back(m_init);
hvector.push_back(h_init);
nvector.push_back(n_init);
for (int i{ 1 }; i <= runtime; ++i) { // 1ms iterations over 3000ms range
auto ret = gate_probabilities(V[i - 1], mvector[i - 1], hvector[i - 1], nvector[i - 1]);
double currents;
currents = std::get<0>(ret);
double potential = currents*dt + V[i - 1]; //calculate new V.
V.push_back(potential); //incorporates new V into a vector.
timevector.push_back( timevector[i - 1] + dt );
}
}
</code></pre>
|
<p>Your line</p>
<pre><code>timevector.push_back( timevector[i - 1] + dt );
</code></pre>
<p>will not work on the very first loop run, as <code>timevector</code> is empty, so <code>timevector[i - 1]</code> (i.e. <code>timevector[0]</code>) reads past the end of the vector, which is likely what triggers the abort in a debug build. Push an initial time, e.g. <code>timevector.push_back(0.0);</code>, before entering the loop, just as you already do for <code>V</code>, <code>mvector</code>, <code>hvector</code> and <code>nvector</code>.</p>
| 220
|
solve differential equations
|
How to solve differential equation using Python builtin function odeint?
|
https://stackoverflow.com/questions/27820725/how-to-solve-differential-equation-using-python-builtin-function-odeint
|
<p>I want to solve this differential equations with the given initial conditions:</p>
<pre><code>(3x-1)y''-(3x+2)y'+(6x-8)y=0, y(0)=2, y'(0)=3
</code></pre>
<p>the answer should be <br><br>
<code>y=2*exp(2*x)-x*exp(-x)</code></p>
<p>here is my code:</p>
<pre><code>def g(y,x):
y0 = y[0]
y1 = y[1]
y2 = (6*x-8)*y0/(3*x-1)+(3*x+2)*y1/(3*x-1)
return [y1,y2]
init = [2.0, 3.0]
x=np.linspace(-2,2,100)
sol=spi.odeint(g,init,x)
plt.plot(x,sol[:,0])
plt.show()
</code></pre>
<p>but what I get is different from the answer.
What have I done wrong?</p>
|
<p>There are several things wrong here. Firstly, your equation is apparently</p>
<p>(3x-1)y''-(3x+2)y'-(6x-8)y=0; y(0)=2, y'(0)=3</p>
<p>(note the sign of the term in y). For this equation, your analytical solution and definition of <code>y2</code> are correct.</p>
<p>Secondly, as @Warren Weckesser says, you must pass 2 parameters as <code>y</code> to <code>g</code>: <code>y[0]</code> (y), <code>y[1]</code> (y') and return their derivatives, y' and y''.</p>
<p>Thirdly, your initial conditions are given for x=0, but your x-grid to integrate on starts at -2. From the docs for <code>odeint</code>, this parameter, <code>t</code> in their call signature description:</p>
<p><code>odeint(func, y0, t, args=(),...)</code>:</p>
<blockquote>
<p>t : array
A sequence of time points for which to solve for y. The initial
value point should be the first element of this sequence.</p>
</blockquote>
<p>So you must integrate starting at 0 or provide initial conditions starting at -2.</p>
<p>Finally, your range of integration covers a singularity at x=1/3. <code>odeint</code> may have a bad time here (but apparently doesn't).</p>
<p>Here's one approach that seems to work:</p>
<pre><code>import numpy as np
import scipy as sp
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def g(y, x):
y0 = y[0]
y1 = y[1]
y2 = ((3*x+2)*y1 + (6*x-8)*y0)/(3*x-1)
return y1, y2
# Initial conditions on y, y' at x=0
init = 2.0, 3.0
# First integrate from 0 to 2
x = np.linspace(0,2,100)
sol=odeint(g, init, x)
# Then integrate from 0 to -2
plt.plot(x, sol[:,0], color='b')
x = np.linspace(0,-2,100)
sol=odeint(g, init, x)
plt.plot(x, sol[:,0], color='b')
# The analytical answer in red dots
exact_x = np.linspace(-2,2,10)
exact_y = 2*np.exp(2*exact_x)-exact_x*np.exp(-exact_x)
plt.plot(exact_x,exact_y, 'o', color='r', label='exact')
plt.legend()
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/NP3X9.png" alt="enter image description here"></p>
| 221
|
solve differential equations
|
How to solve a system of differential equations?
|
https://stackoverflow.com/questions/28802527/how-to-solve-a-system-of-differential-equations
|
<p>For a homework assignment, my professor asked us to solve a system of differential equations using MATLAB. Using the mathworks website, I did</p>
<pre><code>syms f(t) g(t) h(t)
[f(t), g(t), h(t)] = dsolve(diff(f) == .25*g*h,...
diff(g) == -2/3*f*h,...
diff(h) == .5*f*g, f(0) == 1, g(0) == -2, h(0) == 3)
</code></pre>
<p>However, I get an error saying that an explicit equation cannot be solved.</p>
|
<pre><code>% x(1), x(2), x(3) = f, g, h
fun = @(t,x) [0.25*x(2)*x(3); -2/3*x(1)*x(3);
        0.5*x(1)*x(2)];
[t,x] = ode45(fun,[0 100],[1 -2 3]);
plot3(x(:,1),x(:,2),x(:,3))
% x has three columns containing the values of f, g, h as a function of
% time, from t = 0 to t = 100 (note the signs and initial conditions now
% match the question: diff(h) == .5*f*g, g(0) == -2).
% See the ode45 documentation for what these arguments mean.
</code></pre>
| 222
|
solve differential equations
|
Solving a system of differential equations by scipy.integrate.odeint
|
https://stackoverflow.com/questions/69786953/solving-a-system-of-differential-equations-by-scipy-integrate-odeint
|
<p>I am trying to solve a differential equation and I get this following error</p>
<pre><code>/usr/local/lib/python3.7/dist-packages/scipy/integrate/odepack.py in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst)
243 full_output, rtol, atol, tcrit, h0, hmax, hmin,
244 ixpr, mxstep, mxhnil, mxordn, mxords,
245 int(bool(tfirst)))
246 if output[-1] < 0:
247 warning_msg = _msgs[output[-1]] + " Run with full_output = 1 to get quantitative information."
RuntimeError: The array return by func must be one-dimensional, but got ndim=2.**
</code></pre>
<p>My code is</p>
<pre><code>import numpy as np
import scipy.stats
import scipy.integrate
import matplotlib.pyplot as plt
f_ini = [0.09,0.09]
C0 = 0.083/f
k = 0.04
v = 1.05
def rxn1(C,t):
return np.array([f*C0/v-f*C[0]/v-k*C[0], f*C[0]/v-f*C[1]/v-k*C[1]])
t_points = np.linspace(0,500,1000)
c_points = scipy.integrate.odeint(rxn1, f_ini,t_points)
plt.plot(t_points, c_points[:,0])
plt.plot(t_points,c_points[:,1])
</code></pre>
<p>I know this similar question was asked here. <a href="https://stackoverflow.com/questions/51808922/how-to-solve-a-system-of-differential-equations-using-scipy-odeint?noredirect=1&lq=1">How to solve a system of differential equations</a>. However I want to solve it using np.array. Thank you so much!</p>
|
<p>If "f" is variable and dependent on "t", then you can define it as a function of "t" and use that function instead of "f" in your rxn1 function. For example:</p>
<pre><code>def f(t):
# relation between f and t
return value
def rxn1(C,t):
return np.array([f(t)*C0/v-f(t)*C[0]/v-k*C[0], f(t)*C[0]/v-f(t)*C[1]/v-k*C[1]])
</code></pre>
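<p>To make this concrete (a sketch added here, not part of the original answer — the shape of <code>f(t)</code> below is purely hypothetical, and a minimal fixed-step RK4 loop is used so the example needs nothing beyond NumPy):</p>

```python
import numpy as np

# Hypothetical time profile for f (an assumption for illustration only);
# replace it with the real relation between f and t.
def f(t):
    return 0.09 * (1.0 + 0.1 * np.sin(t / 50.0))

k, v = 0.04, 1.05
C0 = 0.083 / 0.09  # inlet concentration, following the question's scaling

def rxn1(C, t):
    # Same two-tank mass balance as in the question, with f evaluated at t.
    return np.array([f(t) * C0 / v - f(t) * C[0] / v - k * C[0],
                     f(t) * C[0] / v - f(t) * C[1] / v - k * C[1]])

def rk4(func, y0, t_points):
    # Minimal fixed-step RK4 integrator (stand-in for scipy's odeint here).
    y = np.zeros((len(t_points), len(y0)))
    y[0] = y0
    for i in range(len(t_points) - 1):
        t = t_points[i]
        h = t_points[i + 1] - t
        k1 = func(y[i], t)
        k2 = func(y[i] + 0.5 * h * k1, t + 0.5 * h)
        k3 = func(y[i] + 0.5 * h * k2, t + 0.5 * h)
        k4 = func(y[i] + h * k3, t + h)
        y[i + 1] = y[i] + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

t_points = np.linspace(0, 500, 1000)
c_points = rk4(rxn1, np.array([0.09, 0.09]), t_points)
```

<p>With <code>scipy.integrate.odeint</code> the same <code>rxn1</code> can be passed directly, since <code>f(t)</code> is evaluated inside it.</p>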
| 223
|
solve differential equations
|
Solving Ordinary Differential Equations using Euler in Java Programming
|
https://stackoverflow.com/questions/55960894/solving-ordinary-differential-equations-using-euler-in-java-programming
|
<p>I'm trying to write a java program that will solve any ordinary differential equations using Euler method, but I don't know how to write a code to get any differential equation from the user. I was only able to write the code to solve a predefined ordinary differential equations. </p>
<p>I was able to come up with code to solve some particular ordinary differential equations, which were written as functions in the program. I also did some research online to look for similar problems, but it seems they were also written to solve specific, designated problems rather than general ordinary differential equations. This was the case in most of the articles I have read online.</p>
<p>Here is my Euler class;</p>
<pre><code>import java.lang.Math;
public class Euler {
private double x0, y0, x1, y1, h, actual;
public Euler (double initialx, double initialy,double stepsize,double finalx1) {
x0 = initialx; y0 = initialy; h=stepsize; x1 = finalx1;
}
public void setEuler (double initialx, double initialy,double stepsize,
double finalx1){
x0 = initialx;y0 = initialy;h =stepsize;x1 = finalx1;
}
public double getinitialx(){
return x0;
}
public double getinitialy(){
return y0;
}
public double getinitialexact(){
return (double) (0.9048*Math.exp(0.1*x0*x0));
}
double func(double x, double y){
return (double) (0.2*x*y);
}
double funct(double x){
return (double) (java.lang.Math.exp(0.1*x*x));
}
public double getinitialerror(){
return (double) Math.abs(actual - y0);
}
public double getEulerResult(){
for (double i = x0 + h; i < x1; i += h){
y0 = y0 + h *(func(x0,y0));
x0 += h;
double actual = (0.9048*funct(x0));
double error = Math.abs(actual - y0);
System.out.printf("%f\t%f\t%f\t%f\n",x0,y0,actual, error);
}
return y0;
}
}
</code></pre>
<p>Here is my Driver's class</p>
<pre><code>import java.util.Scanner;
public class EulerTest {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
Euler myEuler = new Euler(1.0,1.0,0.1,1.5);
System.out.println( "x\t explicit\tactual\t error\t " );
System.out.printf("%f\t%f\t%f\t%f\n", myEuler.getinitialx(),
myEuler.getinitialy(),myEuler.getinitialexact(),
myEuler.getinitialerror());
System.out.printf("my approximated value is %f\n\n",
myEuler.getEulerResult ());
System.out.println("enter another initial value of x: ");
double initialx = input.nextDouble();
System.out.println("enter another initial value of y: ");
double initialy = input.nextDouble();
System.out.println("enter another stepsize value of h: ");
double stepsize = input.nextDouble();
System.out.println("enter another upper bound of x: ");
double finalx1 = input.nextDouble();
myEuler.setEuler(initialx,initialy,stepsize,finalx1);
System.out.println( "x\t explicit\tactual\t error\t " );
System.out.printf("%f\t%f\t%f\t%f\n", myEuler.getinitialx(),
myEuler.getinitialy(),myEuler.getinitialexact(),
myEuler.getinitialerror());
System.out.printf("my approximated value is %f\n\n",
myEuler.getEulerResult ());
}
}
</code></pre>
<p>I will be glad if I can be enlightened on how to write Java code to collect any ordinary differential equation from the user, so as to solve it using Euler's method.</p>
|
<p>What you are looking for is the ability to compile some code at run time, where part of the code is supplied by the user.</p>
<p>There is a package called jOOR that gives you a Reflect class containing a compile method. The method takes two parameters (a package name: String, and the Java code: String).
I've never personally used it, so I cannot vouch for its robustness, but here is a tutorial and the Javadoc:</p>
<p><a href="https://www.jooq.org/products/jOOR/javadoc/latest/org.jooq.joor/org/joor/Reflect.html#compile(java.lang.String,java.lang.String)" rel="nofollow noreferrer">https://www.jooq.org/products/jOOR/javadoc/latest/org.jooq.joor/org/joor/Reflect.html#compile(java.lang.String,java.lang.String)</a></p>
<p><a href="https://blog.jooq.org/2018/04/03/how-to-compile-a-class-at-runtime-with-java-8-and-9/" rel="nofollow noreferrer">https://blog.jooq.org/2018/04/03/how-to-compile-a-class-at-runtime-with-java-8-and-9/</a></p>
<p>In your case, you would put your user-supplied function in place of the following line of code:</p>
<pre><code>return "Hello World!";
</code></pre>
<p>Beware, you need to be 100% absolutely unconditionally guaranteed that the user can only ever enter a function to be solved. If they are supplying code, remember that unless you take safeguards, the code they enter could very easily be code the removes all of the files on your hard drive (or worse).</p>
<p>For the second part of your question - how do i implement a solution in Java using Euler's method, perhaps check out this link: <a href="https://stackoverflow.com/questions/33467167/eulers-method-in-java">Euler's Method in java</a> or this <a href="https://rosettacode.org/wiki/Euler_method#Java" rel="nofollow noreferrer">https://rosettacode.org/wiki/Euler_method#Java</a> which has it in pretty much every language you can imagine (and probably some you can't).</p>
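<p>As a language-neutral illustration of that second part (sketched here in Python rather than Java, and not part of the original answer), a generic Euler stepper only needs the right-hand side as a user-supplied function:</p>

```python
import math

def euler(f, x0, y0, h, x_end):
    """Generic explicit Euler: f is any user-supplied dy/dx = f(x, y)."""
    x, y = x0, y0
    n = int(round((x_end - x0) / h))  # fixed step count avoids float drift
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# An ODE like the one in the question, dy/dx = 0.2*x*y with y(0) = 1;
# the exact solution is y = exp(0.1*x**2), so y(1) = exp(0.1).
approx = euler(lambda x, y: 0.2 * x * y, 0.0, 1.0, 0.001, 1.0)
exact = math.exp(0.1)
```

<p>In Java the same shape falls out of passing the function as a lambda or an interface implementation; the run-time compilation trick above is only needed if the user types the formula as text.</p>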
| 224
|
solve differential equations
|
Solving system of coupled differential equations using Runge-Kutta in python
|
https://stackoverflow.com/questions/63811138/solving-system-of-coupled-differential-equations-using-runge-kutta-in-python
|
<p>This python code can solve one non- coupled differential equation:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import numba
import time
start_time = time.clock()
@numba.jit()
# A sample differential equation "dy / dx = (x - y**2)/2"
def dydx(x, y):
return ((x - y**2)/2)
# Finds value of y for a given x using step size h
# and initial value y0 at x0.
def rungeKutta(x0, y0, x, h):
# Count number of iterations using step size or
# step height h
n = (int)((x - x0)/h)
# Iterate for number of iterations
y = y0
for i in range(1, n + 1):
"Apply Runge Kutta Formulas to find next value of y"
k1 = h * dydx(x0, y)
k2 = h * dydx(x0 + 0.5 * h, y + 0.5 * k1)
k3 = h * dydx(x0 + 0.5 * h, y + 0.5 * k2)
k4 = h * dydx(x0 + h, y + k3)
# Update next value of y
y = y + (1.0 / 6.0)*(k1 + 2 * k2 + 2 * k3 + k4)
# Update next value of x
x0 = x0 + h
return y
def dplot(start,end,steps):
Y=list()
for x in np.linspace(start,end,steps):
Y.append(rungeKutta(x0, y, x , h))
plt.plot(np.linspace(start,end,steps),Y)
print("Execution time:",time.clock() - start_time, "seconds")
plt.show()
start,end = 0, 10
steps = end* 100
x0 = 0
y = 1
h = 0.002
dplot(start,end,steps)
</code></pre>
<p>This code can solve this differential equation:</p>
<pre><code> dydx= (x - y**2)/2
</code></pre>
<p>Now I have a system of coupled differential equations:</p>
<pre><code> dydt= (x - y**2)/2
dxdt= x*3 + 3y
</code></pre>
<p>How can I implement these two as a system of coupled differential equations in the above code?
Is there a more generalized way to handle a system of n coupled differential equations?</p>
|
<p>With the help of others, I got to this:</p>
<pre><code>import numpy as np
from math import sqrt
import matplotlib.pyplot as plt
import numba
import time
start_time = time.perf_counter()  # time.clock() was removed in Python 3.8
a=1
b=1
c=1
d=1
# Equations:
@numba.jit()
#du/dt=V(u,t)
def V(u,t):
    x, y, vx, vy = u
    return np.array([vy,vx,a*x+b*y,c*x+d*y])
def rk4(f, u0, t0, tf , n):
    t = np.linspace(t0, tf, n+1)
    u = np.array((n+1)*[u0])
    h = t[1]-t[0]
    for i in range(n):
        k1 = h * f(u[i], t[i])
        k2 = h * f(u[i] + 0.5 * k1, t[i] + 0.5*h)
        k3 = h * f(u[i] + 0.5 * k2, t[i] + 0.5*h)
        k4 = h * f(u[i] + k3, t[i] + h)
        u[i+1] = u[i] + (k1 + 2*(k2 + k3 ) + k4) / 6
    return u, t
u, t  = rk4(V, np.array([1., 0., 1. , 0.]) , 0. , 10. , 100000)
x,y, vx,vy  = u.T
# plt.plot(t, x, t,y)
plt.semilogy(t, x, t,y)
plt.grid('on')
print("Execution time:", time.perf_counter() - start_time, "seconds")
plt.show()
</code></pre>
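<p>A quick way to gain confidence in a hand-rolled fixed-step RK4 like the one above (a sanity check added here, not part of the original answer) is to run it on a system with a known closed-form solution, e.g. the harmonic oscillator:</p>

```python
import numpy as np

def rk4(f, u0, t0, tf, n):
    # Same classic fixed-step RK4 scheme as in the answer above.
    t = np.linspace(t0, tf, n + 1)
    u = np.array((n + 1) * [u0])
    h = t[1] - t[0]
    for i in range(n):
        k1 = h * f(u[i], t[i])
        k2 = h * f(u[i] + 0.5 * k1, t[i] + 0.5 * h)
        k3 = h * f(u[i] + 0.5 * k2, t[i] + 0.5 * h)
        k4 = h * f(u[i] + k3, t[i] + h)
        u[i + 1] = u[i] + (k1 + 2 * (k2 + k3) + k4) / 6
    return u, t

# Harmonic oscillator x'' = -x as a first-order system (x, v);
# with x(0) = 1, v(0) = 0 the exact solution is x(t) = cos(t),
# so after one full period t = 2*pi we must return to (1, 0).
V = lambda u, t: np.array([u[1], -u[0]])
u, t = rk4(V, np.array([1.0, 0.0]), 0.0, 2.0 * np.pi, 1000)
```

<p>The fourth-order error means even 1000 steps over a full period reproduce the initial state to well below 1e-6, which is a good smoke test before trusting the solver on a system without a known answer.</p>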
| 225
|
solve differential equations
|
How to solve differential equations simultaneously using python?
|
https://stackoverflow.com/questions/60561147/how-to-solve-differential-equations-simultaneously-using-python
|
<p>I do have three differential equations of the form:</p>
<pre><code>dx/dt= (-K1*x)
dI/dt= ((K1*(x) - (K2*I**2))/2)
dm/dt= 0.5*(K2*I**2)
</code></pre>
<p>I have data for time, x, I and m, with respect to time. How to calculate K1 and K2 using available packages in python? Can anyone help me with a sample code?</p>
<p>Thank you</p>
| 226
|
|
solve differential equations
|
Wrong Answer while solving differential equation in C
|
https://stackoverflow.com/questions/56919237/wrong-answer-while-solving-differential-equation-in-c
|
<p>I am new to C programming and am writing a program to solve simple differential equations which gives output as the value of x. But I'm not getting the correct result.</p>
<p>I am getting the correct value of the polynomial, but the value of its derivative is wrong. The code compiles without any warnings or errors.</p>
<pre><code>#include <stdio.h>
#include <conio.h>
#include <math.h>
float poly(float a[], int, float);
float deriv(float a[], int, float);
int main()
{
float x, a[10], y1, dy1;
int deg, i;
printf("Enter the degree of polynomial equation: ");
scanf("%d", &deg);
printf("Enter the value of x for which the equation is to be evaluated: ");
scanf("%f", &x);
for(i=0;i<=deg;i++)
{
printf("Enter the coefficient of x to the power %d: ",i);
scanf("%f",&a[i]);
}
y1 = poly(a, deg, x);
dy1 = deriv(a, deg, x);
printf("The value of polynomial equation for the value of x = %.2f is: %.2f",x,y1);
printf("\nThe value of the derivative of the polynomial equation at x = %.2f is: %.2f",x,dy1);
return 0;
}
/* function for finding the value of polynomial at some value of x */
float poly(float a[], int deg, float x)
{
float p;
int i;
p = a[deg];
for(i=deg;i>=1;i--)
{
p = (a[i-1] + x*p);
}
return p;
}
/* function for finding the derivative at some value of x */
float deriv(float a[], int deg, float x)
{
float d[10], pd = 0, ps;
int i;
for(i=0;i<=deg;i++)
{
ps = pow(x, deg-(i+1));
d[i] = (deg-1)*a[deg-1]*ps;
pd = pd + d[i];
}
return pd;
}
</code></pre>
|
<p>You are making a simple logical error. In the function <code>float deriv(float a[], int deg, float x)</code> It should be <code>d[i] = (deg-i)*a[deg-i]*ps;</code>. So your function would look something like this</p>
<pre><code>/* function for finding the derivative at some value of x */
float deriv(float a[], int deg, float x)
{
float d[10], pd = 0, ps;
int i;
for(i=0;i<=deg;i++)
{
ps = pow(x, deg-(i+1));
d[i] = (deg-i)*a[deg-i]*ps;
pd = pd + d[i];
}
return pd;
}
</code></pre>
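<p>The corrected index arithmetic is easy to sanity-check outside C (a quick Python sketch, not from the original answer): with <code>a[i]</code> the coefficient of x<sup>i</sup>, the term a[deg-i]·x<sup>deg-i</sup> differentiates to (deg-i)·a[deg-i]·x<sup>deg-i-1</sup>:</p>

```python
def deriv(a, deg, x):
    # Mirrors the corrected C loop: a[i] is the coefficient of x**i.
    pd = 0.0
    for i in range(deg + 1):
        ps = x ** (deg - (i + 1))          # exponent is -1 on the last pass,
        pd += (deg - i) * a[deg - i] * ps  # but its factor (deg - i) is then 0
    return pd

# p(x) = 2 + 3x + 5x^2, so p'(x) = 3 + 10x and p'(2) = 23.
value = deriv([2.0, 3.0, 5.0], 2, 2.0)
```

<p>Note that, like the C version with <code>pow</code>, the last iteration raises x to a negative power before multiplying by zero, so this form should only be used with x ≠ 0.</p>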
<p>Good luck for the future.</p>
| 227
|
solve differential equations
|
Differential Equation Solve() Not Working (Julia)
|
https://stackoverflow.com/questions/65344695/differential-equation-solve-not-working-julia
|
<p>I'm super new to this and I couldn't really find any help online, so sorry if this has been answered before. I tried to follow this simple example on ordinary differential equations</p>
<pre><code>using DifferentialEquations
f(t,u) = 1.01*u
u0=1/2
tspan = (0.0,1.0)
prob = ODEProblem(f,u0,tspan)
sol = solve(prob,Tsit5(),reltol=1e-8,abstol=1e-8)
using Plots
plot(sol,linewidth=5,title="Solution to the linear ODE with a thick line",
xaxis="Time (t)",yaxis="u(t) (in μm)",label="My Thick Line!") # legend=false
plot!(sol.t, t->0.5*exp(1.01t),lw=3,ls=:dash,label="True Solution!")
</code></pre>
<p>yet I get this error when I call the solve function</p>
<pre><code>MethodError: no method matching f(::Float64, ::DiffEqBase.NullParameters, ::Float64)
Closest candidates are:
f(::Any, ::Any) at /Users/diogomiguez/.julia/pluto_notebooks/Cute science.jl#==#dbde1a90-407c-11eb-15ad-394234f71852:1
(::DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing})(::Float64, ::Vararg{Any,N} where N)@diffeqfunction.jl:248
initialize!(::OrdinaryDiffEq.ODEIntegrator{OrdinaryDiffEq.CompositeAlgorithm{Tuple{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}},false,Float64,Nothing,Float64,DiffEqBase.NullParameters,Float64,Float64,Float64,Array{Float64,1},OrdinaryDiffEq.ODECompositeSolution{Float64,1,Array{Float64,1},Nothing,Nothing,Array{Float64,1},Array{Array{Float64,1},1},DiffEqBase.ODEProblem{Float64,Tuple{Float64,Float64},false,DiffEqBase.NullParameters,DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},OrdinaryDiffEq.CompositeAlgorithm{Tuple{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}},OrdinaryDiffEq.CompositeInterpolationData{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Float64,1},Array{Float64,1},Array{Array{Float64,1},1},OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64},OrdinaryDiffEq.Rosenbrock23ConstantCache{Float64,DiffEqBase.TimeDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},DiffEqBase.UDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScal
ing{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},Float64,Float64,DiffEqBase.DefaultLinSolve}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}}},DiffEqBase.DEStats},DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64},OrdinaryDiffEq.Rosenbrock23ConstantCache{Float64,DiffEqBase.TimeDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},DiffEqBase.UDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},Float64,Float64,DiffEqBase.DefaultLinSolve}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}},OrdinaryDiffEq.DEOptions{Float64,Float64,Float64,Float64,typeof(DiffEqBase.ODE_DEFAULT_NORM),typeof(LinearAlgebra.opnorm),DiffEqBase.CallbackSet{Tuple{},Tuple{}},typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN),typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE),typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK),DataStructures.BinaryHeap{Float64,Base.Order.ForwardOrdering},DataStructures.BinaryHeap{Float64,Base.Order.ForwardOrdering},Nothing,Nothing,Int64,Tuple{},Tuple{},Tuple{}},Float64,Float64,Nothing,OrdinaryDiffEq.DefaultInit}, ::OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64})@low_order_rk_perform_step.jl:565
initialize!(::OrdinaryDiffEq.ODEIntegrator{OrdinaryDiffEq.CompositeAlgorithm{Tuple{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}},false,Float64,Nothing,Float64,DiffEqBase.NullParameters,Float64,Float64,Float64,Array{Float64,1},OrdinaryDiffEq.ODECompositeSolution{Float64,1,Array{Float64,1},Nothing,Nothing,Array{Float64,1},Array{Array{Float64,1},1},DiffEqBase.ODEProblem{Float64,Tuple{Float64,Float64},false,DiffEqBase.NullParameters,DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},OrdinaryDiffEq.CompositeAlgorithm{Tuple{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}},OrdinaryDiffEq.CompositeInterpolationData{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Float64,1},Array{Float64,1},Array{Array{Float64,1},1},OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64},OrdinaryDiffEq.Rosenbrock23ConstantCache{Float64,DiffEqBase.TimeDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},DiffEqBase.UDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScal
ing{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},Float64,Float64,DiffEqBase.DefaultLinSolve}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}}},DiffEqBase.DEStats},DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64},OrdinaryDiffEq.Rosenbrock23ConstantCache{Float64,DiffEqBase.TimeDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},DiffEqBase.UDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},Float64,Float64,DiffEqBase.DefaultLinSolve}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}},OrdinaryDiffEq.DEOptions{Float64,Float64,Float64,Float64,typeof(DiffEqBase.ODE_DEFAULT_NORM),typeof(LinearAlgebra.opnorm),DiffEqBase.CallbackSet{Tuple{},Tuple{}},typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN),typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE),typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK),DataStructures.BinaryHeap{Float64,Base.Order.ForwardOrdering},DataStructures.BinaryHeap{Float64,Base.Order.ForwardOrdering},Nothing,Nothing,Int64,Tuple{},Tuple{},Tuple{}},Float64,Float64,Nothing,OrdinaryDiffEq.DefaultInit}, 
::OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64},OrdinaryDiffEq.Rosenbrock23ConstantCache{Float64,DiffEqBase.TimeDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},DiffEqBase.UDerivativeWrapper{DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,DiffEqBase.NullParameters},Float64,Float64,DiffEqBase.DefaultLinSolve}},OrdinaryDiffEq.AutoSwitchCache{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}})@composite_perform_step.jl:39
#__init#399(::Tuple{}, ::Tuple{}, ::Tuple{}, ::Nothing, ::Bool, ::Bool, ::Bool, ::Bool, ::Nothing, ::Bool, ::Bool, ::Float64, ::Nothing, ::Float64, ::Bool, ::Bool, ::Rational{Int64}, ::Nothing, ::Nothing, ::Rational{Int64}, ::Int64, ::Int64, ::Int64, ::Rational{Int64}, ::Bool, ::Int64, ::Nothing, ::Nothing, ::Int64, ::typeof(DiffEqBase.ODE_DEFAULT_NORM), ::typeof(LinearAlgebra.opnorm), ::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), ::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Int64, ::String, ::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), ::Nothing, ::Bool, ::Bool, ::Bool, ::Bool, ::OrdinaryDiffEq.DefaultInit, ::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol,Symbol},NamedTuple{(:default_set, :second_time),Tuple{Bool,Bool}}}, ::typeof(DiffEqBase.__init), ::DiffEqBase.ODEProblem{Float64,Tuple{Float64,Float64},false,DiffEqBase.NullParameters,DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem}, ::OrdinaryDiffEq.CompositeAlgorithm{Tuple{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType}},OrdinaryDiffEq.AutoSwitch{OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.Rosenbrock23{0,false,DiffEqBase.DefaultLinSolve,DataType},Rational{Int64},Int64}}, ::Tuple{}, ::Tuple{}, ::Tuple{}, ::Type{Val{true}})@solve.jl:429
#__solve#398@solve.jl:4[inlined]
#__solve#1(::Bool, ::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol},NamedTuple{(:second_time,),Tuple{Bool}}}, ::typeof(DiffEqBase.__solve), ::DiffEqBase.ODEProblem{Float64,Tuple{Float64,Float64},false,DiffEqBase.NullParameters,DiffEqBase.ODEFunction{false,typeof(Main.workspace465.f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem}, ::Nothing)@default_solve.jl:7
#__solve#463@solve.jl:230[inlined]
__solve@solve.jl:217[inlined]
#solve_call#451@solve.jl:65[inlined]
solve_call@solve.jl:52[inlined]
#solve_up#453@solve.jl:89[inlined]
solve_up@solve.jl:79[inlined]
#solve#452@solve.jl:74[inlined]
solve@solve.jl:72[inlined]
top-level scope@Local: 1[inlined]
</code></pre>
<p>Might also help:</p>
<pre><code>->Pkg.status("DifferentialEquations")
Status `~/.julia/environments/v1.5/Project.toml`
[0c46a032] DifferentialEquations v6.15.0
</code></pre>
|
<p>It looks like you've been following <a href="https://diffeq.sciml.ai/v2.0/tutorials/ode_example.html" rel="nofollow noreferrer">this example</a> from the <code>DifferentialEquations.jl</code> documentation; oddly enough Google seems to prioritise the documentation for v2.0 (see the URL in the link).</p>
<p>The documentation for v6.15 is <a href="https://diffeq.sciml.ai/v6.15/tutorials/ode_example/" rel="nofollow noreferrer">here</a>; the API has now changed to expect three parameters for <code>f</code>. Change your code to</p>
<pre><code>f(u,p,t) = 1.01*u
</code></pre>
<p>and you should have no problems. As you can see in the <code>MethodError</code>, a function with three parameters was expected which is why you're having problems.</p>
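<p>For cross-reference, the same tutorial ODE can be checked in Python with SciPy, whose convention puts the right-hand-side arguments in the order <code>(t, u)</code> rather than Julia's <code>(u, p, t)</code>. This is a sketch, not part of the original answer; it just confirms the exact solution <code>u(t) = u0*exp(1.01*t)</code>:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

# u' = 1.01*u, the same ODE as in the DifferentialEquations.jl tutorial;
# SciPy's convention is f(t, u) instead of Julia's f(u, p, t).
def f(t, u):
    return 1.01 * u

sol = solve_ivp(f, (0.0, 1.0), [0.5], rtol=1e-10, atol=1e-12)

# compare against the exact solution u0 * exp(1.01 * t) at t = 1
print(sol.y[0, -1], 0.5 * np.exp(1.01))
```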
| 228
|
solve differential equations
|
Solving differential equations in Matlab - In-Vitro Dissolution
|
https://stackoverflow.com/questions/59101142/solving-differential-equations-in-matlab-in-vitro-dissolution
|
<p>I am trying to solve a similar problem to this one: <a href="https://stackoverflow.com/questions/58895739/solving-differential-equations-in-matlab">Solving Differential Equations in Matlab</a></p>
<p>However, this time the scenario is not injection of a drug into the subcutaneous tissue and its subsequent dissolution, but a more simple situation where the suspension is allowed to dissolve in a dissolution bath of volume 900 ml.</p>
<pre><code>function dydt=odefcnNY_v12(t,y,D,Cs,rho,r0,N,V)
dydt=zeros(2,1);
dydt(1)=(-D*Cs)/(rho*r0^2)*(1-y(2))*y(1)/(1e-6+y(1)^2); % dr*/dt
dydt(2)=(D*4*pi*N*r0*(1-y(2))*y(1))/V; %dC*/dt
end
</code></pre>
<p>i.e. the absorption term from the previous question is removed: </p>
<pre><code>Absorption term: Af*y(2)
</code></pre>
<p>The compound is also different, so the MW, Cs and r0 are different, and the experimental setup is also different so W and V are now changed. To allow for these changes, the ode113 call changes to this:</p>
<pre><code>MW=336.43; % molecular weight
D=9.916e-5*(MW^-0.4569)*60/600000 %m2/s - [D(cm2/min)=9.916e-5*(MW^-0.4569)*60], divide by 600,000 to convert to m2/s
rho=1300; %kg/m3
r0=9.75e-8; %m dv50
Cs=0.032; %kg/m3
V=0.0009;%m3 900 mL dissolution bath
W=18e-6; %kg 18mg
N=W/((4/3)*pi*r0^3*rho); % particle number
tspan=[0 200*3600]; %s in 200 hours
y0=[1 0];
[t,y]=ode113(@(t,y) odefcnNY_v12(t,y,D,Cs,rho,r0,N,V), tspan, y0);
plot(t/3600,y(:,1),'-o') %plot time in hr, and r*
xlabel('time, hr')
ylabel('r*, (rp/r0)')
legend('DCU')
title ('r*');
plot(t/3600,y(:,1)*r0*1e6); %plot r in microns
xlabel('time, hr');
ylabel('r, microns');
legend('DCU');
title('r');
plot(t/3600,y(:,2),'-') %plot time in hr, and C*
xlabel('time, hr')
ylabel('C* (C/Cs)')
legend('DCU')
title('C*');
</code></pre>
<p>The current problem is that this code has been running for 3 hours and is still not complete. What is different now from the previous question in the link above that is making it take so long?</p>
<p>Thanks</p>
<p><a href="https://i.sstatic.net/EtG4W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EtG4W.png" alt="enter image description here"></a></p>
|
<p>I cannot really reproduce your problem. I used the "standard" Python modules numpy and scipy, copied the block of parameters, </p>
<pre class="lang-py prettyprint-override"><code>from numpy import pi

MW=336.43; # molecular weight
D=9.916e-5*(MW**-0.4569)*60/600000 #m2/s - [D(cm2/min)=9.916e-5*(MW^-0.4569)*60], divide by 600,000 to convert to m2/s
rho=1300.; #kg/m3
r0=9.75e-8; #m dv50
Cs=0.032; #kg/m3
V=0.0009;#m3 900 mL dissolution bath
W=18e-6; #kg 18mg
N=W/((4./3)*pi*r0**3*rho); # particle number
Af = 0; # bath is isolated
</code></pre>
<p>used the same ODE function like in the previous post (remember <code>Af=0</code>)</p>
<pre class="lang-py prettyprint-override"><code>def odefcnNY(t,y,D,Cs,rho,r0,N,V,Af):
r,C = y;
drdt = (-D*Cs)/(rho*r0**2)*(1-C) * r/(1e-10+r**2); # dr*/dt
dCdt = (D*4*pi*N*r0*(1-C)*r-(Af*C))/V; # dC*/dt
return [ drdt, dCdt ];
</code></pre>
<p>and solved the ODE </p>
<pre class="lang-py prettyprint-override"><code>from scipy.integrate import solve_ivp

tspan=[0, 1.0]; #1 sec
#tspan=[0, 200*3600]; #s in 200 hours
y0=[1.0, 0.0];
method="Radau"
sol=solve_ivp(lambda t,y: odefcnNY(t,y,D,Cs,rho,r0,N,V,Af), tspan, y0, method=method, atol=1e-8, rtol=1e-11);
t = sol.t; r,C = sol.y;
print(sol.message)
print("C*=",C[-1])
</code></pre>
<p>This works in a snap, using 235 steps for the first second and 6 further steps to cover the constant behavior in the remaining time of the 200 hours.</p>
<p><a href="https://i.sstatic.net/pRRLo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pRRLo.png" alt="plots of the components"></a></p>
<p>I can only mess it up by increasing the tolerances to unreasonably large values like 1e-4, and only if the epsilon used in the mollification is 1e-12. Then the hard turn when the radius reaches zero is too hard, the step size controller falls into a loop. This is more an error of the crude implementation of the step size controller, that should not be the case in the Matlab routines.</p>
| 229
|
solve differential equations
|
Solving differential equations with a function called at each time step
|
https://stackoverflow.com/questions/67605649/solving-differential-equations-with-a-function-called-at-each-time-step
|
<p>System of equations</p>
<pre><code>def SEIRD_gov(y, t, beta_0, c_1, c_2, sigma, gamma, dr, ro, tg, g_1, g_2, ind):
S, E, I, R, D = y
dSdt = -beta_gov(t, beta_0, c_1, c_2, tg, g_1, g_2, ind) * S * I/N
dEdt = beta_gov(t, beta_0, c_1, c_2, tg, g_1, g_2, ind) * I * S/N - sigma * E
dIdt = sigma * E - (1 - dr) * gamma * I - dr * ro * I
dRdt = (1 - dr) * gamma * I
dDdt = dr * ro * I
return dSdt, dEdt, dIdt, dRdt, dDdt
</code></pre>
<p>beta_gov - this is also a function</p>
<pre><code>def beta_gov(t, beta_0, c_1, c_2, tg, g_1, g_2, ind):
beta_rez = beta_0 * gov(t, tg, g_1, g_2) * c_sig(t, c_1, c_2, ind)
return beta_rez
</code></pre>
<p>but it also calls two functions</p>
<ol>
<li>gov - No problem function</li>
</ol>
<pre><code>def gov(t, tg, g_1, g_2):
if t > tg:
alpha = 1 - g_1
else:
alpha = 1 - g_2
return alpha
</code></pre>
<ol start="2">
<li>I don’t understand that one.</li>
</ol>
<pre><code>def c_sig(t, c_1, c_2, ind):
sig = 1 / (1 + math.exp(c_1*(ind - c_2)))
return sig
</code></pre>
<p>ind - a DataSeries (pandas Series) with a significant number of numerical values. When I call the main function "SEIRD_gov", these values from "ind" must be passed one at a time to solve the equation, and the result then transferred to the system of differential equations.</p>
<pre><code>ind = df_region['self_isolation'].apply(lambda x: int(x))
ind = ind.values
</code></pre>
<p>Probably here you need to add something like a loop, but I do not understand how to do this when one function is called from another.</p>
<p>The algorithm should be as follows:</p>
<p>beta_gov calls two functions; one element at a time from ind is passed to c_sig, and it returns values for solving the differential equations.</p>
<p>Previously, I was able to pass only one item from this list, which leads to the wrong solution.</p>
<p>Earlier it was a model for Julia, here is a part of the code with equations. I just need to display graphs by parameters that are already there, but first I need to rewrite the model in python</p>
<pre><code>y0 = S0, E0, I0, R0, D0
ret = odeint(SEIRD_gov, y0, t, args=(beta_0, c_1, c_2, sigma, gamma, dr, ro, tg, g_1, g_2, ind))
S, E, I, R, D = ret.T
</code></pre>
<pre><code>function SEIRD_gov!(du,u, p, t)
S,E,I,R,D = u
beta_0, c_1, c_2, sigma, gamma, dr, ro, tg, g_1, g_2 = p
du[1] = -beta_gov(t, beta_0, c_1, c_2, tg, g_1, g_2) * S * I/N
du[2] = beta_gov(t, beta_0, c_1, c_2, tg, g_1, g_2) * I * S/N - sigma * E
du[3] = sigma * E - (1 - dr) * gamma * I - dr * ro * I
du[4]= (1 - dr) * gamma * I
du[5] = dr * ro * I
end
function si(t)
ind = convert(Int, round(t + 1))
return data.self_isolation[ind]
end
c_lin(t, c_1, c_2) = 1 + c_1*(1 - c_2*si(t))
c_sig(t, c_1, c_2) = 1/(1 + exp(c_1*(si(t) - c_2)))
function gov(t, tg, g_1, g_2)
if t > tg
alpha = 1 - g_1
else
alpha = 1 - g_2
end
alpha
end
beta_gov(t, beta_0, c_1, c_2, tg, g_1, g_2) = beta_0 * gov(t, tg, g_1, g_2)* c_sig(t, c_1, c_2)
</code></pre>
<p>Result plot:
<a href="https://i.sstatic.net/qbrWW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qbrWW.png" alt="enter image description here" /></a></p>
|
<p>I guess what you are missing is that <code>ind</code> must be a function, because it is a time-dependent coefficient.</p>
<p>I'm assuming <code>df_region</code> is a <code>DataFrame</code> from <code>pandas</code>.</p>
<p>So, modify the <code>ind</code> definition in your code to:</p>
<pre><code>self_isolation_values = df_region['self_isolation'].to_numpy()
ind = lambda t: self_isolation_values[int(t)]
</code></pre>
<p>this makes <code>ind</code> a function of <code>t</code>, such that <code>ind(t)</code> will convert <code>t</code> to <code>int</code> and return the corresponding value of the coefficient you have from your input data in <code>df_region['self_isolation']</code>. This is exactly the behavior of the function <code>si(t)</code> in the julia code.</p>
<p>And then, inside the <code>c_sig</code> function, you call the <code>ind(t)</code> function</p>
<pre><code>def c_sig(t, c_1, c_2, ind):
sig = 1 / (1 + math.exp(c_1*(ind(t) - c_2)))
return sig
</code></pre>
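<p>A minimal, self-contained sketch of this lookup pattern (the sample values below are invented for illustration; in the real code they come from <code>df_region['self_isolation']</code>):</p>

```python
import numpy as np

# invented stand-in for df_region['self_isolation'].to_numpy()
self_isolation_values = np.array([1.0, 2.0, 3.0, 2.5])

# ind(t) truncates t to an integer index; note that the Julia si(t)
# rounds instead, so the two can differ by at most one sample
ind = lambda t: self_isolation_values[int(t)]

print(ind(0.0), ind(1.7), ind(3.99))  # -> 1.0 2.0 2.5
```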
| 230
|
solve differential equations
|
Combining solve and dsolve to solve equation systems with differential and algebraic equations
|
https://stackoverflow.com/questions/17701718/combining-solve-and-dsolve-to-solve-equation-systems-with-differential-and-algeb
|
<p>I am trying to solve equation systems, which contain algebraic as well as differential equations. To do this symbolically I need to combine dsolve and solve (do I?).</p>
<p>Consider the following example:
We have three base equations</p>
<pre><code>a == b + c; % algebraic equation
diff(b,1) == 1/C1*y(t); % differential equation 1
diff(c,1) == 1/C2*y(t); % differential equation 2
</code></pre>
<p>Solving both differential equations, eliminating int(y,0..t) and then solving for c=f(C1,C2,a) yields</p>
<pre><code>C1*b == C2*c or C1*(a-c) == C2*c
c = C1/(C1+C2) * a
</code></pre>
<p>How can I convince Matlab to give me that result? Here is what I tried:</p>
<pre><code>syms a b c y C1 C2;
Eq1 = a == b + c; % algebraic equation
dEq1 = 'Db == 1/C1*y(t)'; % differential equation 1
dEq2 = 'Dc == 1/C2*y(t)'; % differential equation 2
[sol_dEq1, sol_dEq2]=dsolve(dEq1,dEq2,'b(0)==0','c(0)==0'); % this works, but no inclusion of algebraic equation
%[sol_dEq1, sol_dEq2]=dsolve(dEq1,dEq2,Eq1,'c'); % does not work
%solve(Eq1,dEq1,dEq2,'c') % does not work
%solve(Eq1,sol_dEq_C1,sol_dEq_C2,'c') % does not work
</code></pre>
<p>No combination of solve and/or dsolve with the equations or their solutions I tried gives me a useful result. Any ideas?</p>
|
<p>Now I assumed that you wanted the code to be rather general, so I made it to be able to work with any given number of equations and any given number of variables, and I did no calculation by hand.</p>
<p>Note that the way the Symbolic Toolbox works changes drastically from year to year, but hopefully this will work for you. Now one can add the equation <code>Eq1</code> to the list of inputs of <code>dsolve</code>, but there are two problems with that: one is that <code>dsolve</code> seems to prefer character inputs, and the second is that <code>dsolve</code> doesn't seem to realize that there are 3 independent variables <code>a</code>, <code>b</code>, and <code>c</code> (it only sees 2 variables, <code>b</code> and <code>c</code>).</p>
<p>To solve the second problem, I differentiated the original equation to get a new differential equation. There were three problems with that: the first was that Matlab evaluated the derivative of <code>a</code> with respect to <code>t</code> as <code>0</code>, so I had to replace <code>a</code> with <code>a(t)</code>, and likewise for <code>b</code> and <code>c</code> (I call <code>a(t)</code> the long version of <code>a</code>). The second problem was that Matlab used inconsistent notation: instead of representing the derivative of <code>a</code> as <code>Da</code>, it represented it as <code>diff(a(t), t)</code>, so I had to replace the latter with the former, and likewise for <code>b</code> and <code>c</code>; this gave me <code>Da = Db + Dc</code>. The final problem is that the system is now underdetermined, so I had to get the initial values; here I could have solved for <code>a(0)</code>, but Matlab seemed happy with using <code>a(0) = b(0) + c(0)</code>.</p>
<p>Now back to the original first problem, to solve that I had to convert every sym back into a char.</p>
<p>Here is the code</p>
<pre><code>function SolveExample
syms a b c y C1 C2 t;
Eq1 = sym('a = b + c');
dEq1 = 'Db = 1/C1*y(t)';
dEq2 = 'Dc = 1/C2*y(t)';
[dEq3, initEq3] = ...
TurnEqIntoDEq(Eq1, [a b c], t, 0);
% In the most general case Eq1 will be an array
% and thus DEq3 will be one too
dEq3_char = SymArray2CharCell(dEq3);
initEq3_char = SymArray2CharCell(initEq3);
% Below is the same as
% dsolve(dEq1, dEq2, 'Da = Db + Dc', ...
% 'b(0)=0','c(0)=0', 'a(0) = b(0) + c(0)', 't');
[sol_dEq1, sol_dEq2, sol_dEq3] = dsolve(...
dEq1, dEq2, dEq3_char{:}, ...
'b(0)=0','c(0)=0', initEq3_char{:}, 't')
end
function [D_Eq, initEq] = ...
TurnEqIntoDEq(eq, depVars, indepVar, initialVal)
% Note that eq and depVars
% may all be vectors or scalars
% and they need not be the same size.
% eq = equations
% depVars = dependent variables
% indepVar = independent variable
% initialVal = initial value of indepVar
depVarsLong = sym(zeros(size(depVars)));
for k = 1:numel(depVars)
% Make the variables functions
% eg. a becomes a(t)
% This is so that diff(a, t) does not become 0
depVarsLong(k) = sym([char(depVars(k)) '(' ...
char(indepVar) ')']);
end
% Next make the equation in terms of these functions
eqLong = subs(eq, depVars, depVarsLong);
% Now find the ODE corresponding to the equation
D_EqLong = diff(eqLong, indepVar);
% Now replace all the long terms like 'diff(a(t), t)'
% with short terms like 'Da'
% otherwise dSolve will not work.
% First make the short variables 'Da'
D_depVarsShort = sym(zeros(size(depVars)));
for k = 1:numel(depVars)
D_depVarsShort(k) = sym(['D' char(depVars(k))]);
end
% Next make the long names like 'diff(a(t), t)'
D_depVarsLong = diff(depVarsLong, indepVar);
% Finally replace
D_Eq = subs(D_EqLong, D_depVarsLong, D_depVarsShort);
% Finally determine the equation
% governing the initial values
initEq = subs(eqLong, indepVar, initialVal);
end
function cc = SymArray2CharCell(sa)
cc = cell(size(sa));
for k = 1:numel(sa)
cc{k} = char(sa(k));
end
end
</code></pre>
<p>Some minor notes: I changed the <code>==</code> to <code>=</code>, as that seems to be a difference between our versions of Matlab. Also, I added <code>t</code> as the independent variable in <code>dsolve</code>. I also assumed that you know about cells, <code>numel</code>, linear indexes, etc.</p>
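<p>The hand-derived target relation <code>c = C1/(C1+C2) * a</code> can also be sanity-checked numerically, independently of the symbolic route. This sketch (not part of the original Matlab answer) integrates <code>Db = y/C1</code>, <code>Dc = y/C2</code> with an arbitrary invented forcing <code>y(t)</code>:</p>

```python
import numpy as np
from scipy.integrate import odeint

# arbitrary invented forcing y(t) and constants; the ratio result
# should hold for any choice of y(t)
C1, C2 = 2.0, 3.0
yfun = lambda t: np.sin(t) + 2.0

def rhs(u, t):
    b, c = u
    return [yfun(t) / C1, yfun(t) / C2]   # Db = y/C1, Dc = y/C2

t = np.linspace(0.0, 5.0, 201)
sol = odeint(rhs, [0.0, 0.0], t, rtol=1e-10, atol=1e-12)  # b(0) = c(0) = 0
b, c = sol[:, 0], sol[:, 1]
a = b + c

# deviation from c = C1/(C1+C2) * a should be at round-off level
print(np.max(np.abs(c - C1 / (C1 + C2) * a)))
```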
| 231
|
solve differential equations
|
Solve system differential equations (Sun and Jupiter trajectories)
|
https://stackoverflow.com/questions/59518547/solve-system-differential-equations-sun-and-jupiter-trajectories
|
<p>I'm trying to solve a system of differential equations and find the trajectories of the Sun and Jupiter, but I don't get a nice trajectory, only some points.
Could you help? ("Soleil" means Sun)</p>
<p><a href="https://i.sstatic.net/bafUV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bafUV.jpg" alt="enter image description here"></a></p>
<p>Here's my code</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mc_deriv import deriv
start = 0
end = 14*365
nbpas = end/10
t = np.linspace(start,end,nbpas)
M = M_Soleil + M_Jupiter
x0 = x_Jupiter - x_Soleil
y0 = y_Jupiter - y_Soleil
vx0 = vx_Jupiter - vx_Soleil
vy0 = vy_Jupiter - vy_Soleil
syst_CI = [x0,y0,vx0,vy0]
Sols=odeint(deriv,syst_CI,t,args=(M,))
x = Sols[:, 0]
y = Sols[:, 1]
vx = Sols[:, 2]
vy = Sols[:, 3]
</code></pre>
<p>The initialisation </p>
<pre><code>x_Soleil = -7.139143380212696e-03 # (UA)
y_Soleil = -2.792019770161695e-03 # (UA)
x_Jupiter = +3.996321311604079e+00 # (UA)
y_Jupiter = +2.932561211517850e+00 # (UA)
vx_Soleil = -7.139143380212696e-03 # (UA*j^-1)
vy_Soleil = -2.792019770161695e-03 # (UA*j^-1)
vx_Jupiter = +3.996321311604079e+00 # (UA*j^-1)
vy_Jupiter = +2.932561211517850e+00 # (UA*j^-1)
M_Soleil = 2e30 # masse Soleil (kg)
M_Jupiter = 1.9e27 # masse Jupiter (kg)
r_Soleil = 696e6 # rayon Soleil (m)
</code></pre>
<p>And the outer function</p>
<pre><code>def deriv(syst,t,M):
G = 6.67e-11
x = syst[0]
y = syst[1]
vx = syst[2]
vy = syst[3]
dxdt = vx
dydt = vy
dvxdt = -(G*M*x)/((x**2+y**2)**(3/2))
dvydt = -(G*M*y)/((x**2+y**2)**(3/2))
return dxdt,dydt,dvxdt,dvydt
</code></pre>
<p>The plot </p>
<pre><code>plt.figure(figsize=(7, 5))
plt.title("Trajectoires Soleil-Jupiter")
#plt.xlabel("UA)")
#plt.ylabel("UA)")
plt.plot(x, y, '-', color="red")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/lVqZ4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lVqZ4.jpg" alt="Variables"></a></p>
<p><a href="https://i.sstatic.net/bG5HS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bG5HS.jpg" alt="enter image description here"></a></p>
<p>The result of the plot :<a href="https://i.sstatic.net/yM4EZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yM4EZ.jpg" alt="enter image description here"></a></p>
<p>Eureka it works!!!!</p>
<p><a href="https://i.sstatic.net/TaBiK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TaBiK.jpg" alt="enter image description here"></a></p>
|
<p>Currently I see the following problems in your code that render your observations unreproducible (apart from missing reference values):</p>
<ul>
<li><p>In the initial data, the length unit is the astronomical unit and the time unit is one day. The unit of the gravitational constant is <code>m^3 kg^-1 s^-2</code>, so to combine them in one formula you need to convert AU into m; the factor is about <code>150e+09</code>, of which the cube is to be divided out. And one day is <code>24*3600</code> seconds, the square of which has to be multiplied in.</p></li>
<li><p>The integration time interval should also be counted in years; at the moment you seem to think in days, a third unit without appropriate conversion factors. [<em>solved, un jour = one day</em>]</p></li>
<li><p>From the division in the construction of the time nodes it appears as if you use python 2. Then the exponents <code>3/2</code> evaluate to <code>1</code> in integer division, you can directly use <code>1.5</code> in the exponent, it is an exact value in the binary floating point format. [<em>actually python 3, then the first division should be explicitly integer</em>]</p></li>
<li><p>In copying the initial data you made a copy-paste error, the initial positions and velocities have the same numbers, while the real numbers should form perpendicular vectors. [not solved in code, image has correct velocities] Looking for online data that fits your position, the NASA HORIZON system gives me for <code>2011-Nov-11 04:00</code> the Jupiter positions and velocities as</p>
<pre><code>pos: 3.996662712108880E+00, 2.938301820497121E+00, -1.017177623308866E-01,
vel: -4.560191659347578E-03, 6.440946682361135E-03, 7.529386668190383E-05
</code></pre></li>
<li><p>The normalization to a center-of-gravity frame needs to apply conservation of momentum, the mass of Jupiter is large enough that just subtracting the velocities might give physically wrong results. [<em>not resolved, the initial data should already be barycentric, no corrections should be necessary</em>]</p></li>
<li><p>The varying accuracy of the physical constants will also introduce errors that will lead away from the reference positions. The most "dirty" constants visible at the moment are the gravitational constant and the masses, after that the uncertainty in the type of year. Only the first two digits of any (correctly) computed result are reliable.</p></li>
</ul>
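<p>The first bullet (converting <code>G</code> into AU and day units) can be checked with a short computation. This is a sketch, not part of the original answer; the constants are standard values, and the rough masses are the ones from the question:</p>

```python
# convert G from SI units (m^3 kg^-1 s^-2) into AU^3 kg^-1 day^-2
G_SI = 6.674e-11          # m^3 kg^-1 s^-2
AU = 1.495978707e11       # metres per astronomical unit
DAY = 86400.0             # seconds per day

G_AU = G_SI * DAY**2 / AU**3

# with the combined mass of Sun + Jupiter this lands within a percent of
# the square of the Gaussian gravitational constant, k^2 ~ 2.959e-4 AU^3/day^2
M = 2e30 + 1.9e27         # kg, rough masses from the question
print(G_AU * M)
```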
| 232
|
solve differential equations
|
solving two dimension-differential equations in python with scipy
|
https://stackoverflow.com/questions/34618488/solving-two-dimension-differential-equations-in-python-with-scipy
|
<p>I am a newbie to Python. I have a simple differential system, which consists of two variables, two differential equations, and the initial conditions <code>x0=1, y0=2</code>:</p>
<pre><code>dx/dt=6*y
dy/dt=(2t-3x)/4y
</code></pre>
<p>now i am trying to solve these two differential equations and i choose <code>odeint</code>. Here is my code:</p>
<pre><code>import matplotlib.pyplot as pl
import numpy as np
from scipy.integrate import odeint
def func(z,b):
x, y=z
return [6*y, (b-3*x)/(4*y)]
z0=[1,2]
t = np.linspace(0,10,11)
b=2*t
xx=odeint(func, z0, b)
pl.figure(1)
pl.plot(t, xx[:,0])
pl.legend()
pl.show()
</code></pre>
<p>but the result is incorrect and there is an error message:</p>
<p><img src="https://i.sstatic.net/D0vlb.png" alt="enter image description here"></p>
<pre><code>Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
</code></pre>
<p>I don't know what is wrong with my code or how I can solve it.
Any help will be useful to me.</p>
|
<p>Apply a trick to desingularize the division by <code>y</code>, print all ODE function evaluations, plot both components, and use the right differential equation with the modified code</p>
<pre><code>import matplotlib.pyplot as pl
import numpy as np
from scipy.integrate import odeint
def func(z, t):
    x, y = z
    print(t, z)
    return [6*y, (2*t - 3*x)*y/(4*y**2 + 1e-12)]
z0=[1,2]
t = np.linspace(0,1,501)
xx=odeint(func, z0, t)
pl.figure(1)
pl.plot(t, xx[:,0],t,xx[:,1])
pl.legend()
pl.show()
</code></pre>
<p>and you see that at <code>t=0.64230232515</code> the singularity of <code>y=0</code> is assumed, where <code>y</code> behaves like a square root function at its apex. There is no way to cross that singularity, as the slope of <code>y</code> goes to infinity. At this point, the solution is no longer continuously differentiable, and thus this is the extremal point of the solution. The constant continuation is an artifact of the desingularization, not a valid solution.</p>
<p><a href="https://i.sstatic.net/smJUq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/smJUq.png" alt="graph of x and y over t"></a></p>
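<p>To put a number on that singular time, one can reuse the desingularized system from the answer and record when <code>y</code> first collapses below a small threshold (a sketch; the <code>1e-3</code> threshold is an arbitrary choice):</p>

```python
import numpy as np
from scipy.integrate import odeint

def func(z, t):
    x, y = z
    # same desingularized right-hand side as in the answer
    return [6*y, (2*t - 3*x) * y / (4*y**2 + 1e-12)]

t = np.linspace(0, 1, 501)
xx = odeint(func, [1, 2], t)

# first grid time at which y has collapsed to numerical zero
t_star = t[np.argmax(xx[:, 1] < 1e-3)]
print(t_star)  # close to the t = 0.6423... quoted above
```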
| 233